Evaluating LLM-Based Apps: New Product Release 🚀

🔗 https://www.linkedin.com/events/7130665323107635202/about/

📅 November 28th, 2023

⏰ 8:30 AM PST

For the first time since founding LLMOps.Space, we’ll be hosting a product launch event, featuring Deepchecks’ new LLM Evaluation module. The launch is exclusive to the community: the product hasn’t been shown publicly before this event. 🚀

Shir, CTO at Deepchecks, and Yaron, VP of Product at Deepchecks, will talk about evaluating and monitoring LLMs, and why both are essential when building LLM-powered apps. They will cover topics such as LLM hallucinations, risk mitigation, bias detection, and more.

Topics that will be covered:

Hallucinations: Cases where the model generates outputs that aren’t grounded in the context given to the LLM. We’ll discuss this well-known problem as well as a robust approach to solving it.

Evaluation Methodologies: We’ll explore various methodologies for evaluating LLMs, including both automated and manual techniques. We’ll also cover how to structure the golden set used to benchmark the LLM’s performance, and why it matters (see the sketch after this list).

Deepchecks LLM Evaluation: A live demonstration of Deepchecks’ new LLM evaluation module, and the main highlight of this session. 😀
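
To make the golden-set idea concrete, here is a minimal sketch of what such a benchmark can look like in plain Python. This is not Deepchecks’ API: the `token_f1` scorer, the `benchmark` helper, and the toy golden set are all illustrative stand-ins for whatever scoring method (semantic similarity, an LLM judge, etc.) a real evaluation would use.

```python
# Minimal golden-set benchmark sketch (illustrative, not Deepchecks' API).
# Idea: run the model on a fixed set of prompts with reference answers,
# score each output, and report the average.

from collections import Counter

def token_f1(prediction: str, reference: str) -> float:
    """Token-level F1 between a model output and a golden answer."""
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    if not pred_tokens or not ref_tokens:
        return 0.0
    # Overlap = sum of per-token minimum counts between the two texts.
    overlap = sum((Counter(pred_tokens) & Counter(ref_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

def benchmark(model_fn, golden_set):
    """Average score of model_fn over (prompt, reference) pairs."""
    scores = [token_f1(model_fn(prompt), reference)
              for prompt, reference in golden_set]
    return sum(scores) / len(scores)

# Example usage with a stand-in "model" that always answers "Paris":
golden_set = [
    ("What is the capital of France?", "Paris"),
    ("Who wrote Hamlet?", "William Shakespeare"),
]
print(benchmark(lambda prompt: "Paris", golden_set))  # prints 0.5
```

The key design point is that the golden set stays fixed across model versions, so scores are comparable over time; only the scoring function needs to evolve as the app’s outputs become harder to judge automatically.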

This webinar is ideal for LLM practitioners, data scientists, machine learning engineers, and anyone interested in understanding and building with LLMs. 🏗