Barrel VectorDB¶
Embeddable vector database for Erlang with HNSW indexing and semantic search.
Build AI-powered search into your Erlang applications. Barrel VectorDB provides a production-ready vector store with pluggable embedding providers, automatic write batching, and millisecond-scale search latency.
- **Quick Start**: Get up and running in 5 minutes with vector search
- **Embedding Models**: Local, Ollama, OpenAI, and more; choose your embedder
- **Clustering**: Scale out with sharding and automatic rebalancing
- **API Reference**: HTTP and Erlang APIs for full control
What is Barrel VectorDB?¶
Barrel VectorDB is an embeddable vector database designed for Erlang/OTP applications:
- HNSW Indexing: Pure Erlang HNSW with O(log N) search, or optional FAISS backend for high throughput
- Pluggable Embeddings: Local (sentence-transformers), Ollama, OpenAI, FastEmbed, with fallback chains
- Millisecond-scale Search: typical latency of P50 ~1 ms, P99 ~5 ms
- Production Ready: Built on RocksDB for persistence, gen_batch_server for write coalescing
Quick Example¶
```erlang
%% Start a store with local embeddings
{ok, _} = barrel_vectordb:start_link(#{
    name => my_store,
    path => "/tmp/vectors",
    embedder => {local, #{}}  %% requires Python + sentence-transformers
}).

%% Add documents (text is embedded automatically)
ok = barrel_vectordb:add(my_store, <<"doc1">>, <<"Hello world">>, #{}).
ok = barrel_vectordb:add(my_store, <<"doc2">>, <<"Goodbye world">>, #{}).

%% Search with a text query
{ok, Results} = barrel_vectordb:search(my_store, <<"hi there">>, #{k => 5}).
%% => [#{key => <<"doc1">>, text => <<"Hello world">>, score => 0.89, ...}, ...]
```
Core Features¶
Vector Indexing¶
| Feature | HNSW (Default) | FAISS (Optional) |
|---|---|---|
| Dependencies | None | barrel_faiss NIF |
| Insert speed | Baseline | 1.6-3x faster |
| Search speed | Baseline | 2x faster |
| Delete speed | Fast (native) | Slower (soft delete) |
| Memory | Higher | Lower |
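If the trade-offs above favor FAISS, the backend is chosen at store creation. A minimal sketch follows; the `index => faiss` option name is an assumption for illustration, so check the `barrel_faiss` documentation for the actual key.

```erlang
%% Hypothetical sketch: selecting the FAISS backend.
%% `index => faiss` is an assumed option name; requires the optional
%% barrel_faiss NIF to be installed. The default is the pure-Erlang HNSW.
{ok, _} = barrel_vectordb:start_link(#{
    name => fast_store,
    path => "/tmp/fast_vectors",
    index => faiss,           %% assumed option key
    embedder => {local, #{}}
}).
```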
Embedding Providers¶
| Provider | Description |
|---|---|
| `local` | Python + sentence-transformers (CPU) |
| `ollama` | Local Ollama server |
| `openai` | OpenAI Embeddings API |
| `fastembed` | ONNX-based embeddings |
| Provider chain | Try providers in order |
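A fallback chain tries providers in order until one succeeds. The sketch below extrapolates from the `{local, #{}}` tuple in the Quick Example; the `chain` tag, option keys, and model names are assumptions, not documented API.

```erlang
%% Sketch of a provider fallback chain (tuple shapes assumed from the
%% {local, #{}} form above): try a local Ollama server first, then
%% fall back to the OpenAI Embeddings API.
{ok, _} = barrel_vectordb:start_link(#{
    name => chained_store,
    path => "/tmp/chained_vectors",
    embedder => {chain, [
        {ollama, #{model => <<"nomic-embed-text">>}},          %% hypothetical options
        {openai, #{model => <<"text-embedding-3-small">>}}
    ]}
}).
```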
Advanced Features¶
- SPLADE: Neural sparse embeddings for hybrid search
- ColBERT: Multi-vector late interaction
- CLIP: Cross-modal image-text search
- Reranking: Cross-encoder for improved relevance
- BM25: Pure Erlang lexical search
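Combining these features, a hybrid query might blend vector and BM25 rankings and rerank the top hits. This is an illustrative assumption only; the `mode` and `rerank` options are hypothetical, so consult the API Reference for the real parameters.

```erlang
%% Hypothetical sketch: hybrid search with cross-encoder reranking.
%% The `mode` and `rerank` options are illustrative assumptions,
%% not documented API.
{ok, Results} = barrel_vectordb:search(my_store, <<"hello">>, #{
    k => 10,
    mode => hybrid,   %% assumed: blend HNSW and BM25 rankings
    rerank => true    %% assumed: cross-encoder rerank of the top hits
}).
```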
Get Started¶
- **5 minutes**: Install and start searching
- **Vector-only mode**: Bring your own embeddings
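In vector-only mode you supply precomputed embeddings instead of text. A possible shape is sketched below; `embedder => none`, `add_vector/4`, and `my_model:embed/1` are assumed names for illustration, so see the API Reference for the actual interface.

```erlang
%% Hypothetical sketch of vector-only mode: store a precomputed
%% embedding directly. `embedder => none` and add_vector/4 are
%% assumed names, and my_model:embed/1 stands in for your own
%% embedding pipeline.
{ok, _} = barrel_vectordb:start_link(#{
    name => raw_store,
    path => "/tmp/raw_vectors",
    embedder => none  %% assumed: disable automatic embedding
}),
Vec = my_model:embed(<<"Hello world">>),  %% list of floats from your model
ok = barrel_vectordb:add_vector(raw_store, <<"doc1">>, Vec, #{}).
```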