# Changelog
All notable changes to this project will be documented in this file.
The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
## 2.0.0 - 2026-02-20

### Changed

#### New erlang_python NIF Backend
- **Breaking:** Replaced port-based stdio JSON communication with erlang_python 1.5.0 NIF integration
- All Python providers now use direct `py:call` instead of subprocess communication
- 2.7x improvement in batch throughput (936 vs 348 texts/sec with bge-small-en-v1.5)
- Requires erlang_python 1.5.0+ as a dependency
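As an illustrative sketch of the direct-call path (the `barrel_embed_py:call/4` shape, argument order, and option keys here are assumptions for illustration, not the documented API):

```erlang
%% Hypothetical example: embedding a batch via the NIF-backed wrapper.
%% Function name, arguments, and options below are illustrative assumptions.
{ok, Vectors} = barrel_embed_py:call(
    nif_api,                                    %% priv/barrel_embed/nif_api.py
    embed_batch,                                %% Python function to invoke
    [<<"bge-small-en-v1.5">>, [<<"hello">>, <<"world">>]],
    #{timeout => 30000}                         %% per-call timeout in ms
).
```

Because this runs in-process, there is no per-request JSON serialization over stdio, which is where the batch-throughput gain comes from.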
#### New Modules

- `barrel_embed_py` - Erlang wrapper for `py:call` with timeout support
- `priv/barrel_embed/nif_api.py` - Thread-safe Python API for model caching
### Removed

- `barrel_embed_port_server` - Port-based Python server (replaced by NIF)
- `barrel_embed_python_queue` - Rate limiting queue (erlang_python handles this)
- `priv/barrel_embed/server.py` - Async stdio server
- `priv/barrel_embed/__main__.py` - CLI entry point
- `priv/barrel_embed/providers/` - Provider classes (now in `nif_api.py`)
### Migration
If you were using custom provider configurations, the API remains the same.
The `venv` option is still supported and recommended.
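A minimal configuration sketch (only the `venv` option is named in this changelog; the surrounding `providers` structure and other keys are assumptions for illustration):

```erlang
%% Illustrative sys.config fragment -- keys other than venv are assumed.
{barrel_embed, [
    {providers, [
        {local, #{
            model => <<"bge-small-en-v1.5">>,
            venv  => "/opt/barrel/venv"   %% still honored by the NIF backend
        }}
    ]}
]}
```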
## 1.0.0 - 2026-01-27

### Added

#### Virtual Environment Support
- Added `venv` configuration option for all Python providers (local, fastembed, splade, colbert, clip)
- Proper venv activation in port environment (sets `VIRTUAL_ENV`, `PATH`, `PYTHONPATH`)
- New `scripts/setup_venv.sh` for fast venv setup using `uv`
- Requirements files for different installation profiles:
  - `priv/requirements.txt` - Default (sentence-transformers + uvloop)
  - `priv/requirements-minimal.txt` - Minimal (no ML libs)
  - `priv/requirements-full.txt` - All providers
- Documentation: `docs/venv-setup.md`
#### CI Improvements

- Added `integration:venv` job to test Erlang-Python venv communication
- Python tests now use `uv` for faster dependency installation
### Changed

- `setup_python_venv.sh` now uses `uv` when available (falls back to pip)
- Python queue default limit changed from `schedulers/2 + 1` to `schedulers * 2 + 1`
- Updated all Python provider documentation with venv examples
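For a concrete sense of the new default, a sketch using the standard `erlang:system_info/1` call (the formulas are quoted from the entry above):

```erlang
%% Compare old and new default queue limits on the current node.
%% erlang:system_info(schedulers) returns the number of scheduler threads.
Schedulers = erlang:system_info(schedulers),
OldLimit = Schedulers div 2 + 1,   %% e.g. 8 schedulers -> 5 concurrent Python calls
NewLimit = Schedulers * 2 + 1.     %% e.g. 8 schedulers -> 17 concurrent Python calls
```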
## 0.2.0 - 2026-01-27

### Added

#### Cloud Providers

- `cohere` - Cohere Embed API with input type optimization
- `voyage` - Voyage AI for RAG and domain-specific embeddings (code, law, finance)
- `jina` - Jina AI with 8K context and free tier
- `mistral` - Mistral AI with EU data residency
- `azure` - Azure OpenAI for enterprise compliance
- `bedrock` - AWS Bedrock (Titan, Cohere models) with IAM and API key auth
- `vertex` - Google Vertex AI for GCP ecosystem
#### Documentation

- Provider comparison guide (`docs/choosing-provider.md`)
- Developer guide for adding cloud providers (`docs/dev/adding-provider.md`)
- Individual documentation pages for all cloud providers
#### Tooling

- `scripts/setup_python_venv.sh` for one-command Python venv setup
  - Added `--dev` option for installing dev dependencies (test + uvloop)
  - Added `--uvloop` option to install uvloop via pyproject.toml extra
  - Added `--test` option to run Python tests after setup
  - Added `-h`/`--help` option for usage information
- GitLab CI configuration (`.gitlab-ci.yml`)
  - Erlang tests with rebar3 eunit
  - Python tests with pytest
  - Dialyzer type checking
  - Xref cross-reference checks
#### Testing

- Python test suite for uvloop integration (`priv/tests/test_server.py`)
- uvloop detection and event loop policy tests
- AsyncEmbedServer dispatch and handler tests
- Concurrent task execution tests
#### Python Engine
- Async request multiplexing for concurrent embeddings
- Improved error handling and logging
### Changed
- Updated hackney dependency to 2.0.1 for HTTP/2 support
- Provider init now properly loads modules before checking exports
- Removed redundant `application:ensure_all_started(hackney)` from providers (hackney starts via app.src)
## 0.1.0 - 2026-01-14

### Added
- Initial release extracted from barrel_vectordb
- Core embedding coordinator (`barrel_embed`) with provider chain and fallback support
- Provider behaviour (`barrel_embed_provider`) for implementing custom providers
- Python execution rate limiter (`barrel_embed_python_queue`)
#### Providers

- `local` - Local Python with sentence-transformers
- `ollama` - Ollama server API (supports both `/api/embed` and `/api/embeddings`)
- `openai` - OpenAI Embeddings API
- `fastembed` - FastEmbed ONNX-based embeddings (lighter than sentence-transformers)
- `splade` - SPLADE sparse embeddings for hybrid search
  - `embed_sparse/2`, `embed_batch_sparse/2` for native sparse vectors
  - Automatic sparse-to-dense conversion for compatibility
- `colbert` - ColBERT multi-vector embeddings for fine-grained matching
  - `embed_multi/2`, `embed_batch_multi/2` for token-level vectors
  - `maxsim_score/2` for late interaction scoring
- `clip` - CLIP image/text cross-modal embeddings
  - `embed_image/2`, `embed_image_batch/2` for image embeddings
  - Text embeddings in same vector space for cross-modal search
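A usage sketch for the specialized APIs above, assuming they are exported from the `barrel_embed` coordinator module; the `provider` option key and argument shapes are illustrative assumptions:

```erlang
%% Illustrative calls only -- option keys and return shapes are assumed.
{ok, Sparse} = barrel_embed:embed_sparse(<<"hybrid search query">>, #{provider => splade}),
{ok, QTok}   = barrel_embed:embed_multi(<<"short query">>, #{provider => colbert}),
{ok, DTok}   = barrel_embed:embed_multi(<<"a longer candidate document">>, #{provider => colbert}),
Score        = barrel_embed:maxsim_score(QTok, DTok),   %% late-interaction relevance
{ok, ImgVec} = barrel_embed:embed_image(<<"photo.png">>, #{provider => clip}).
```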
#### Features
- Batch embedding with configurable chunk size
- Provider chain with automatic fallback on failure
- Application supervision tree with ETS-based rate limiting
- Comprehensive EUnit test suite