NoSQL document-oriented database provider Couchbase on Thursday said that it was working to add vector capabilities to its database offerings, including its Capella managed database-as-a-service (DBaaS).
The vector capabilities will include similarity search and support for retrieval-augmented generation (RAG), the company said. Because all search patterns can be served from a single index, the additions are also expected to improve database performance by lowering response latency.
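For context on what similarity search actually does, the short Python sketch below ranks stored embedding vectors by their cosine similarity to a query vector, which is the core operation a vector index accelerates. It is a generic, brute-force illustration of the technique, not Couchbase's API; the document IDs and vectors are invented for the example.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity: 1.0 means identical direction, 0.0 means orthogonal."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy "document" embeddings; in practice these would come from an embedding
# model and be stored alongside the JSON documents in the database's index.
documents = {
    "doc1": np.array([0.9, 0.1, 0.0]),
    "doc2": np.array([0.1, 0.8, 0.1]),
    "doc3": np.array([0.7, 0.2, 0.1]),
}

query = np.array([0.8, 0.2, 0.0])  # embedding of the user's query

# Rank documents by similarity to the query. A vector index avoids this
# brute-force scan, but the ranking it produces is conceptually the same.
ranked = sorted(documents.items(),
                key=lambda kv: cosine_similarity(query, kv[1]),
                reverse=True)

for doc_id, vec in ranked:
    print(doc_id, round(cosine_similarity(query, vec), 3))
```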
Database vendors have been adding vector search capabilities to help enterprises build generative AI-based applications. Earlier in the day, Google Cloud said that it was adding vector support to all its database offerings, including Firestore, Bigtable, Cloud SQL for MySQL, Cloud SQL for PostgreSQL, and Spanner.
Last year, database vendors including MongoDB, DataStax, and Kinetica added vector search and other generative AI capabilities to their offerings.
Analysts believe that vector support will become table stakes for all databases by the end of 2026.
AWS and Microsoft, too, have added vector embedding and vector search capabilities to multiple database services, according to Constellation Research principal analyst Doug Henschen.
“Oracle has signalled that they’re working on adding vector support for their database. It’s pretty clear that it’s not hard to add these capabilities and they will eventually be pervasively available,” Henschen added.
In addition to adding vector support to Couchbase Server and Capella, Couchbase is integrating with LangChain and LlamaIndex, frameworks for developing generative AI-based applications, to boost developer productivity.
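Frameworks such as these automate the retrieval-augmented generation pattern: fetch the stored documents most similar to a query, then hand them to a language model as grounding context. The sketch below outlines that flow in plain Python without using any specific LangChain, LlamaIndex, or Couchbase API; the `embed`, `vector_search`, and `call_llm` functions, along with the stubs in the demo, are placeholders invented for illustration.

```python
from typing import Callable, List

def retrieve_context(query: str,
                     embed: Callable[[str], List[float]],
                     vector_search: Callable[[List[float], int], List[str]],
                     top_k: int = 3) -> List[str]:
    """Embed the query and fetch the top_k most similar stored documents."""
    query_vector = embed(query)
    return vector_search(query_vector, top_k)

def answer_with_rag(query: str,
                    embed: Callable[[str], List[float]],
                    vector_search: Callable[[List[float], int], List[str]],
                    call_llm: Callable[[str], str]) -> str:
    """Retrieval-augmented generation: ground the model's answer in retrieved text."""
    context = retrieve_context(query, embed, vector_search)
    prompt = (
        "Answer the question using only the context below.\n\n"
        "Context:\n" + "\n---\n".join(context) +
        f"\n\nQuestion: {query}\nAnswer:"
    )
    return call_llm(prompt)

# Minimal stubs so the sketch runs end to end; a framework integration would
# replace these with a real embedding model, vector index, and LLM client.
if __name__ == "__main__":
    fake_store = {"Couchbase added vector search.": [1.0, 0.0],
                  "Capella is a managed DBaaS.": [0.0, 1.0]}
    embed = lambda text: [1.0, 0.0] if "vector" in text else [0.0, 1.0]
    vector_search = lambda vec, k: sorted(
        fake_store,
        key=lambda d: -sum(a * b for a, b in zip(vec, fake_store[d])))[:k]
    call_llm = lambda prompt: "(model response based on the prompt)"
    print(answer_with_rag("What did Couchbase add?", embed, vector_search, call_llm))
```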
The new capabilities are expected to be available in Couchbase Server and Capella before May, the company said, adding that its mobile and edge database offerings will get the same capabilities in beta within the same time frame.