# Vector Store Providers

Vector stores enable semantic search over memories.

## Chroma

Best suited for local development and prototyping.

### Installation
```shell
pip install remina-memory[chroma]
```

### Configuration
```python
"vector_store": {
    "provider": "chroma",
    "config": {
        "path": "~/.remina/chroma",
        "collection_name": "remina",
    }
}
```

| Option | Type | Default | Description |
|---|---|---|---|
| path | str | ~/.remina/chroma | Database directory |
| collection_name | str | remina | Collection name |
Characteristics:
- Zero configuration
- Embedded (no server)
- Persistent storage
- Single-node only
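Assembled into a full configuration dict, the fragment above might be used like this (a sketch; note that Chroma persists to a local directory, so the `~` in `path` should be expanded before use):

```python
import os

# Sketch: the Chroma vector store configuration shown above,
# assembled into a plain Python dict.
config = {
    "vector_store": {
        "provider": "chroma",
        "config": {
            "path": "~/.remina/chroma",
            "collection_name": "remina",
        },
    }
}

# Expand "~" to the user's home directory before handing the
# path to the store.
store_cfg = config["vector_store"]["config"]
store_cfg["path"] = os.path.expanduser(store_cfg["path"])
```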
## Qdrant

Suited to self-hosted production deployments.

### Installation

```shell
pip install remina-memory[qdrant]
```

### Configuration
```python
"vector_store": {
    "provider": "qdrant",
    "config": {
        "url": "http://localhost:6333",
        "api_key": None,
        "collection_name": "remina",
        "embedding_dims": 1536,
    }
}
```

| Option | Type | Default | Description |
|---|---|---|---|
| url | str | http://localhost:6333 | Server URL |
| api_key | str | None | API key (cloud) |
| collection_name | str | remina | Collection name |
| embedding_dims | int | 1536 | Embedding dimensions |
### Setup

```shell
docker run -d \
  --name remina-qdrant \
  -p 6333:6333 \
  -p 6334:6334 \
  -v qdrant_storage:/qdrant/storage \
  qdrant/qdrant
```

Characteristics:
- High performance
- Horizontal scaling
- Rich filtering
- Cloud and self-hosted options
## Pinecone

Suited to managed, serverless production deployments.

### Installation

```shell
pip install remina-memory[pinecone]
```

### Configuration
```python
"vector_store": {
    "provider": "pinecone",
    "config": {
        "api_key": "your-api-key",
        "index_name": "remina",
        "namespace": "default",
        "embedding_dims": 768,
        "cloud": "aws",
        "region": "us-east-1",
    }
}
```

| Option | Type | Default | Description |
|---|---|---|---|
| api_key | str | env var | Pinecone API key |
| index_name | str | remina | Index name |
| namespace | str | None | Namespace |
| embedding_dims | int | 768 | Embedding dimensions |
| cloud | str | aws | Cloud provider |
| region | str | us-east-1 | Region |
Characteristics:
- Fully managed
- Serverless scaling
- High availability
- Pay-per-use
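The table lists the `api_key` default as an environment variable, but this page does not name it; the sketch below assumes `PINECONE_API_KEY`, and falls back to a literal only for illustration:

```python
import os

# Sketch: resolving the Pinecone API key from the environment.
# The variable name PINECONE_API_KEY is an assumption, not
# confirmed by this document.
api_key = os.environ.get("PINECONE_API_KEY", "your-api-key")

config = {
    "vector_store": {
        "provider": "pinecone",
        "config": {
            "api_key": api_key,
            "index_name": "remina",
            "namespace": "default",
            "embedding_dims": 768,
            "cloud": "aws",
            "region": "us-east-1",
        },
    }
}
```

Keeping the key out of the config literal avoids committing credentials to version control.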
## Vector Store Interface

All vector stores implement:

```python
from abc import ABC, abstractmethod
from typing import Dict, List, Optional, Tuple

class VectorStoreBase(ABC):
    @abstractmethod
    async def upsert(self, id: str, embedding: List[float], metadata: Dict) -> None: ...

    @abstractmethod
    async def upsert_batch(self, items: List[Tuple[str, List[float], Dict]]) -> None: ...

    @abstractmethod
    async def search(self, embedding: List[float], limit: int = 10,
                     filters: Optional[Dict] = None) -> List["VectorSearchResult"]: ...

    @abstractmethod
    async def delete(self, ids: List[str]) -> None: ...

    @abstractmethod
    async def close(self) -> None: ...
```

## Embedding Dimensions
> ⚠️ The `embedding_dims` value must match your embedder's output dimensions.
| Embedder | Model | Dimensions |
|---|---|---|
| OpenAI | text-embedding-3-small | 1536 |
| OpenAI | text-embedding-3-large | 3072 |
| Gemini | text-embedding-004 | 768 |
| Cohere | embed-english-v3.0 | 1024 |
| HuggingFace | all-MiniLM-L6-v2 | 384 |
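To illustrate the interface, here is a minimal in-memory sketch (not part of remina-memory) that follows the `VectorStoreBase` signatures, enforces `embedding_dims` on insert, and does brute-force cosine-similarity search. `VectorSearchResult` is simplified to a dataclass here; its real shape in the library is not shown on this page.

```python
import asyncio
import math
from dataclasses import dataclass
from typing import Dict, List, Optional, Tuple

@dataclass
class VectorSearchResult:
    # Simplified stand-in for the library's result type (assumed shape).
    id: str
    score: float
    metadata: Dict

class InMemoryVectorStore:
    """Brute-force reference store following the VectorStoreBase signatures."""

    def __init__(self, embedding_dims: int):
        self.embedding_dims = embedding_dims
        self._rows: Dict[str, Tuple[List[float], Dict]] = {}

    async def upsert(self, id: str, embedding: List[float], metadata: Dict) -> None:
        # Reject vectors whose length disagrees with the configured dims.
        if len(embedding) != self.embedding_dims:
            raise ValueError(f"expected {self.embedding_dims} dims, got {len(embedding)}")
        self._rows[id] = (embedding, metadata)

    async def upsert_batch(self, items: List[Tuple[str, List[float], Dict]]) -> None:
        for id, embedding, metadata in items:
            await self.upsert(id, embedding, metadata)

    async def search(self, embedding: List[float], limit: int = 10,
                     filters: Optional[Dict] = None) -> List[VectorSearchResult]:
        def cosine(a: List[float], b: List[float]) -> float:
            dot = sum(x * y for x, y in zip(a, b))
            norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
            return dot / norm if norm else 0.0

        hits = []
        for id, (vec, meta) in self._rows.items():
            # Treat filters as exact-match constraints on metadata keys.
            if filters and any(meta.get(k) != v for k, v in filters.items()):
                continue
            hits.append(VectorSearchResult(id, cosine(embedding, vec), meta))
        hits.sort(key=lambda r: r.score, reverse=True)
        return hits[:limit]

    async def delete(self, ids: List[str]) -> None:
        for id in ids:
            self._rows.pop(id, None)

    async def close(self) -> None:
        self._rows.clear()

async def demo() -> List[VectorSearchResult]:
    store = InMemoryVectorStore(embedding_dims=3)
    await store.upsert_batch([
        ("a", [1.0, 0.0, 0.0], {"topic": "x"}),
        ("b", [0.0, 1.0, 0.0], {"topic": "y"}),
    ])
    return await store.search([1.0, 0.1, 0.0], limit=1)

results = asyncio.run(demo())
```

A real provider would replace the linear scan with an approximate-nearest-neighbor index, but the signatures and the dimension check are the same.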