In this session, Zain from Weaviate discusses how open-source multimodal embedding models can be combined with large generative multimodal models to perform cross-modal search and multimodal retrieval-augmented generation (MM-RAG) at billion-object scale, with the help of open-source vector databases.
Topics covered:
✅ Understanding Multimodal Embedding Models: Learn how these models embed images, text, audio, and other sensory data into a shared vector space, enabling unified analysis across modalities (a minimal embedding sketch follows this list).
✅ Discover Cross-Modal Search and MM-RAG: Explore techniques for querying one modality with another, and for feeding retrieved multimodal objects to generative models for large-scale retrieval and generation (see the second sketch below).
✅ Real-Time Cross-Modal Retrieval: Learn how real-time retrieval lets large language models (LLMs) reason over enterprise-scale multimodal data, improving decision-making and insights (the final sketch below pairs retrieval with generation).
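
To make the shared-vector-space idea concrete, here is a minimal sketch using the open-source CLIP checkpoint shipped with sentence-transformers. The model name is a real published checkpoint; the image path and candidate captions are illustrative assumptions, not material from the session.

```python
# Embed an image and several texts into the SAME vector space with CLIP,
# then rank the texts by cosine similarity to the image.
from PIL import Image
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("clip-ViT-B-32")  # maps text AND images to one space

image_emb = model.encode(Image.open("dog.jpg"))  # hypothetical local image
text_embs = model.encode([
    "a dog playing fetch in a park",
    "a bowl of ramen",
    "a city skyline at night",
])

# Because both modalities share one space, plain cosine similarity
# performs the cross-modal comparison.
scores = util.cos_sim(image_emb, text_embs)
print(scores)  # highest score should land on the matching caption; exact values vary
```

Because a vector database indexes these embeddings the same way regardless of modality, the same nearest-neighbor machinery scales this comparison to billions of objects.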
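The second sketch shows cross-modal search against a Weaviate collection whose objects were vectorized with a multimodal module (e.g. multi2vec-clip). The collection name "Products" and its contents are assumptions for illustration; the calls are from the standard Weaviate v4 Python client.

```python
# Query image objects with plain text: both live in the same embedding
# space, so near_text retrieves visually matching items.
import weaviate

client = weaviate.connect_to_local()  # assumes a local Weaviate instance
try:
    products = client.collections.get("Products")  # hypothetical collection

    response = products.query.near_text(
        query="red running shoes",
        limit=3,
    )
    for obj in response.objects:
        print(obj.properties)
finally:
    client.close()
```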
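Finally, a sketch of the MM-RAG pattern: retrieve the nearest multimodal objects and hand them to a generative model in a single call. This assumes the hypothetical "Products" collection above was configured with a generative module (e.g. generative-openai); the prompt is illustrative.

```python
# Retrieval and generation in one round trip: the top-k retrieved objects
# are injected as context into the prompt for the generative model.
import weaviate

client = weaviate.connect_to_local()
try:
    products = client.collections.get("Products")  # hypothetical collection

    result = products.generate.near_text(
        query="red running shoes",
        limit=3,
        grouped_task="Write a one-paragraph comparison of these products.",
    )
    print(result.generated)        # the grouped generation over all results
    for obj in result.objects:
        print(obj.properties)      # the retrieved context objects
finally:
    client.close()
```

Keeping retrieval and generation in one call is what makes the real-time case practical: the LLM always reasons over the freshest objects in the index rather than a stale export.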