Build AI applications with Postgres
Lantern Cloud is an open-source Postgres vector database and toolkit for developers to build production-ready AI applications.
Trusted by companies building for scale
Owen Colegrove
Founder, SciPhi
Lantern helped us scale our cloud database infrastructure to support thousands of developers building RAG applications.
Daksh Gupta
Founder, Greptile
We love Lantern! They made it easy for us to do search over thousands of repositories and millions of lines of code.
Get advanced search functionality inside Postgres with extensions -- no separate vector database or search engine required.
Search over sparse and dense vectors, with support for binary, scalar, and product compression. Scale seamlessly to millions of vectors and beyond with serverless indexing.
Use the BM25 ranking algorithm - the industry standard used in Elasticsearch - for more relevant text search results, surpassing Postgres's default full-text search.
Combine vector search and BM25 text search using RRF or other reranking algorithms for better results.
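To show the idea, here is a minimal RRF sketch in plain SQL. It assumes the books table from the example further down; bm25_score is a hypothetical stand-in for whatever score your text-search extension returns, since that API isn't shown here.
-- Minimal Reciprocal Rank Fusion (RRF) sketch, assuming the books table below.
-- bm25_score is a hypothetical placeholder for a text-search score.
WITH vector_ranked AS (
    SELECT id, ROW_NUMBER() OVER (ORDER BY book_embedding <-> '{0,1,0}') AS rnk
    FROM books
),
text_ranked AS (
    SELECT id, ROW_NUMBER() OVER (ORDER BY bm25_score DESC) AS rnk
    FROM books
)
SELECT id,
       -- each ranking contributes 1 / (k + rank); k = 60 is a common default
       1.0 / (60 + v.rnk) + 1.0 / (60 + t.rnk) AS rrf_score
FROM vector_ranked v
JOIN text_ranked t USING (id)
ORDER BY rrf_score DESC
LIMIT 10;
In practice each ranking is usually truncated to its top N candidates and the two lists are combined with a full outer join before scoring.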
Scale effortlessly without compromising database performance by offloading vector index creation to a separate machine from your main database.
Infinite scalability
Use as many or as few resources as you'd like, depending on your needs
Performance
Build or update indexes without slowing down your database
Seamless integration
Simply add external=true to vector index creation
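As a sketch, reusing the index definition from the example further down with the external option added:
-- Offload HNSW index construction to a separate indexing machine
CREATE INDEX book_index ON books USING lantern_hnsw(book_embedding dist_l2sq_ops)
WITH (external=true, M=2, ef_construction=10, ef=4, dim=3);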
Simplify AI workflows by generating embeddings and running LLMs directly within your database.
Generate vectors and run LLMs in Postgres
Use simple SQL commands to generate vectors and run LLMs.
Support for 20+ embedding models and LLMs
Access models from OpenAI, Cohere, Jina AI, and open-source providers.
Automatically generate vector and LLM columns
Create new vector or LLM columns based on your existing data.
[Illustration: a table with ID, Name, Description, and Vector columns]
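A quick sketch of what this looks like in practice. The products table and description column are illustrative; text_embedding is the same function used in the query example below.
-- Add a vector column and populate it from an existing text column
-- (the products table and description column are illustrative)
ALTER TABLE products ADD COLUMN description_embedding REAL[];
UPDATE products SET description_embedding = text_embedding('BAAI/bge-base-en', description);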
No need to learn a new API or framework. Just use SQL or leverage our integrations with your favorite ORMs.
-- Create a table with a vector column
CREATE TABLE books (id SERIAL PRIMARY KEY, book_embedding REAL[3]);
-- Insert some vectors
INSERT INTO books (book_embedding) VALUES ('{0,1,0}'), ('{3,2,4}');
-- Create an index for faster queries
CREATE INDEX book_index ON books USING lantern_hnsw(book_embedding dist_l2sq_ops)
WITH (M=2, ef_construction=10, ef=4, dim=3);
-- Query the nearest vector
SELECT id FROM books ORDER BY book_embedding <-> '{0,0,0}' LIMIT 1;
-- Query the nearest vector to a text embedding
SELECT id FROM books ORDER BY book_embedding <-> text_embedding('BAAI/bge-base-en', 'My text input') LIMIT 1;
We're open-source, so you can self-host for full control, or use our managed service for easy setup and scaling.
FREE FOREVER
A multi-tenant architecture lets us offer a free tier that never pauses.
$0.00/month
All databases include
Vector and text search
Embedding generation
Automatic upgrades
Query insights
Community support
BEST CHOICE
Designed for running high-performance production workloads
STARTING AT
$44.00/month
Everything in free, plus
Increased compute and storage
Automatic backups
Point-in-time recovery
Asynchronous tasks
Priority support