LLMs have transformed many applications, putting advanced AI within any developer's reach. This has unlocked amazing progress, but it has also introduced a dependency on model providers and ballooning inference costs. This talk presents an alternative approach: fine-tuning and deploying smaller, specialized language models (SLMs) to cut costs and improve performance. We'll discuss how to identify the right use cases for SLMs, evaluate their performance, and deploy them effectively.
A member of the Forbes 30 Under 30 list, he is a 2017 Thiel Fellow. Previously, he was a co-organizer of Hacking Gen Y. Iddo has been programming since he was a kid and continues to contribute to open-source projects. Originally from Haifa, Israel, Iddo is based in San Francisco, CA.
If 2024 was the year of LLMs, then 2025 will be the year of LLM applications. As LLMs continue to be integrated into applications ranging from chatbots and search engines to creative writing aids, the need to monitor and understand their behavior intensifies.
Observability plays a crucial role in this context. It involves the systematic collection and analysis of data to enhance LLM performance, identify and correct biases, troubleshoot issues, and ensure AI systems are both reliable and trustworthy.
In this discussion, we will explore the concept of LLM observability in depth, focusing on how OpenTelemetry fits into the world of LLM observability. Additionally, we will cover the challenges of modeling prompts, completions, events, and semantic conventions, as well as our path with the llm-sem-conv working group.
David Campbell, Scale AI, AI Risk Security Platform Lead
In this talk, we will trace the journey of Red Teaming from its origins to its transformation into AI Red Teaming, highlighting its pivotal role in shaping the future of Large Language Models (LLMs) and beyond. Drawing from my firsthand experience developing and deploying the largest generative red teaming platform to date, I will share insightful anecdotes and real-world examples. We will explore how adversarial red teaming fortifies AI applications at every layer, protecting platforms, businesses, and consumers. This includes safeguarding the external application interface, reinforcing LLM guardrails, and enhancing the security of the LLMs' internal algorithms. Join me as we uncover the critical importance of adversarial strategies in securing the AI landscape.
David Campbell is a seasoned technology leader with nearly 20 years of experience in Silicon Valley's startup ecosystem, now spearheading Responsible AI initiatives at Scale AI. As the Lead AI Risk Engineer, David has been pivotal in developing a cutting-edge AI Red Teaming platform.