by CShorten on 12/14/23, 5:04 PM
Even further, this brute force index lives directly in the LSM store without needing any main memory! This is done through a continuous file read that Etienne explains in the podcast far better than I can in this post haha, really amazing engineering work!
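If it helps to picture the idea, here is a tiny numpy sketch of a flat (brute force) search over binary-quantized vectors -- this is just the concept, not Weaviate's actual on-disk implementation: quantize every vector down to its sign bits, then scan all of them sequentially (think one continuous file read) and rank by Hamming distance, no in-memory graph required.

  import numpy as np

  def binary_quantize(vectors):
      # Collapse each float dimension to a single sign bit, packed into bytes.
      return np.packbits(vectors > 0, axis=1)

  def brute_force_search(query, packed, k=10):
      # Scan every stored vector sequentially and rank by Hamming distance.
      q = binary_quantize(query[None, :])[0]
      distances = np.unpackbits(packed ^ q, axis=1).sum(axis=1)
      return np.argsort(distances)[:k]

  # 10k vectors of 768 dims shrink to 96 bytes each after quantization.
  vectors = np.random.randn(10_000, 768).astype(np.float32)
  packed = binary_quantize(vectors)
  print(brute_force_search(vectors[0], packed, k=5))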
Weaviate also comes with some other "self-driving database" features like lazy shard loading for faster startup times with multi-tenancy, automatic resource limiting with GOMEMLIMIT, and other details Etienne shares in the podcast!
I am also beyond excited to present our new integration with Anyscale (@anyscalecompute)! Anyscale has amazing pricing for serving and fine-tuning popular open-source LLMs. At the time of this release we are integrating Llama 70B/13B/7B, Mistral 7B, and Code Llama 34B into Weaviate -- but we expect much further development with adding support for fine-tuned models, the super cool new function calling models Anyscale announced yesterday, and other models such as diffusion and multimodal models!
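For anyone curious what that looks like from the Python client, here is a rough sketch of a generative search through the new Anyscale module -- the API key header name and the model id below are my assumptions, so double check the docs:

  import weaviate

  client = weaviate.Client(
      "http://localhost:8080",
      additional_headers={"X-Anyscale-Api-Key": "YOUR_ANYSCALE_KEY"},  # assumed header name
  )

  client.schema.create_class({
      "class": "Article",
      "moduleConfig": {
          # assumed model id -- Anyscale's Llama 2 70B chat endpoint
          "generative-anyscale": {"model": "meta-llama/Llama-2-70b-chat-hf"},
      },
  })

  response = (
      client.query
      .get("Article", ["title", "body"])
      .with_limit(3)
      .with_generate(single_prompt="Summarize this article: {body}")
      .do()
  )
  print(response)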
Here is a full list of new features:
- Lazy Shard Loading
- Flat Index + Binary Quantization (sketched below)
- Default Segments for PQ
- AutoPQ
- Auto Resource Limiting
- Node Endpoint Update
- Generative Anyscale
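And here is roughly what turning on the flat index with binary quantization could look like when defining a class (again a sketch with the v3 Python client -- the exact config keys are my best guess, so check the 1.23 release notes):

  import weaviate

  client = weaviate.Client("http://localhost:8080")

  client.schema.create_class({
      "class": "Document",
      "vectorIndexType": "flat",           # brute force scan instead of HNSW
      "vectorIndexConfig": {
          "bq": {"enabled": True},         # binary quantization on stored vectors
      },
  })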
Check it out here!
https://www.youtube.com/watch?v=e88O18_2wyo