Voyage AI Embeddings Go Native in MongoDB to Slash Costs and Boost Accuracy
Voyage-context-3 can be accessed through MongoDB’s Atlas Vector Search private preview and the Voyage API.

MongoDB has officially integrated Voyage AI’s context-aware embedding technology into its Atlas platform, giving developers powerful tools for long-document understanding and retrieval—without changing their existing workflows.
The centerpiece, voyage-context-3, combines chunk-level detail and full-document awareness to deliver up to 23% better retrieval accuracy while reducing vector database costs by over 99%.
"This is precisely why we acquired Voyage earlier this year. It’s about giving developers building on MongoDB the best tools to deliver fast, reliable, and cost-effective AI. This is especially important for high-stakes AI use cases and in industries such as finance and healthcare, where precision and context are crucial," MongoDB CEO Dev Ittycheria, said.
Voyage-context-3 supports customisable vector dimensions (256–2048) and quantisation options built on efficient Matryoshka representation learning—maintaining accuracy while lowering storage requirements. As a “drop-in replacement” for existing vector systems, it dramatically boosts precision for legal documents, reports, and sensitive use cases in finance and healthcare.
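The storage savings come from a property of Matryoshka-trained embeddings: the first k dimensions of a full vector remain usable as a standalone embedding, so a vector can be truncated and renormalised rather than re-embedded. The sketch below illustrates the mechanism with a synthetic vector; the function name and the 2048/256 dimensions are illustrative stand-ins, not the Voyage API itself.

```python
import math
import random

random.seed(0)
# Synthetic stand-in for a full 2048-dim embedding (a real one
# would come from the Voyage API's voyage-context-3 model).
full = [random.gauss(0, 1) for _ in range(2048)]

def truncate_and_normalise(vec, dim):
    """Keep the first `dim` components and rescale to unit length,
    the way Matryoshka-style embeddings are shortened."""
    head = vec[:dim]
    norm = math.sqrt(sum(x * x for x in head))
    return [x / norm for x in head]

small = truncate_and_normalise(full, 256)
print(len(small))                           # 256
print(round(sum(x * x for x in small), 6))  # 1.0 (unit norm)
```

Storing the 256-dim vector in place of the 2048-dim one cuts per-vector storage by 8x before any quantisation is applied, which is where the bulk of the cost reduction comes from.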
“When building AI that reads and reasons over documents…it’s critical to break those documents into smaller pieces…Voyage-context-3 changes that,” Ittycheria added.
Now available with 200 million free tokens, voyage-context-3 can be accessed through MongoDB’s Atlas Vector Search private preview and the Voyage API.