AI DevSummit 2025 + DeveloperWeek Leadership 2025
Company: Machine Learning
Wednesday, May 28
 

11:30am PDT

PRO Session: The Future of AI-Driven Discovery: Can We Build Recommendation Systems That Inspire, Not Just Retain?
Wednesday May 28, 2025 11:30am - 11:55am PDT
Jiahui (Sophia) Bai, Meta, Data Scientist Lead

This talk explores the evolving role of AI-powered recommendation systems in shaping content discovery, moving beyond engagement-driven optimization to fostering meaningful exploration. Today’s recommendation algorithms are finely tuned to maximize user retention, often reinforcing content bubbles and favoring already-popular creators. While effective for keeping users engaged, this approach can limit exposure to new ideas, diverse voices, and serendipitous discovery—key elements that make digital platforms more enriching and dynamic.

We begin by defining the core challenges of AI-driven content recommendations: noisy engagement signals, over-reliance on short-term behavioral data, and algorithmic bias toward dominant content and creators. We explore how improving signal quality—by distinguishing between passive consumption and genuine interest—can lead to more personalized yet diverse recommendations. Next, we tackle the issue of democratizing reach, discussing algorithmic strategies that ensure small and emerging creators have a fair chance to break through without compromising user satisfaction.
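As a rough, hypothetical illustration of the signal-quality idea above (the weights and field names are invented for this sketch, not Meta's pipeline), one way to separate passive consumption from genuine interest is to score events so that explicit actions and sustained dwell time dominate over passive impressions:

```python
from dataclasses import dataclass

@dataclass
class EngagementEvent:
    item_id: str
    dwell_seconds: float   # time spent on the item
    liked: bool            # explicit positive signal
    shared: bool           # strong explicit signal
    skipped: bool          # passive / negative signal

def interest_score(event: EngagementEvent) -> float:
    """Heuristic score that favors genuine interest over passive consumption.

    The weights are illustrative placeholders, not tuned production values.
    """
    score = 0.0
    score += 2.0 * event.liked
    score += 3.0 * event.shared
    # Dwell time only counts once it exceeds a "passive scroll" threshold.
    score += max(0.0, event.dwell_seconds - 5.0) * 0.1
    score -= 1.0 * event.skipped
    return score

if __name__ == "__main__":
    passive = EngagementEvent("v1", dwell_seconds=2.0, liked=False, shared=False, skipped=True)
    engaged = EngagementEvent("v2", dwell_seconds=45.0, liked=True, shared=False, skipped=False)
    print(interest_score(passive), interest_score(engaged))
```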

The session will also cover rethinking success metrics for recommendation systems, moving beyond click-through rates and watch time to metrics that measure content diversity, long-term engagement, and user well-being. Finally, we will discuss the future of AI-driven discovery, including the role of Generative AI, hybrid human-AI curation, and serendipity-focused models in creating recommendation systems that inspire, inform, and surprise users—rather than just retaining them.
Speakers

Jiahui (Sophia) Bai

Data Scientist Lead, Meta
Sophia Bai is a Senior Data Scientist at Meta, specializing in AI-driven recommendation systems and product analytics. She has led key initiatives in improving signal quality, optimizing content discovery, and redesigning recommendation models to create more balanced and engaging... Read More →
Wednesday May 28, 2025 11:30am - 11:55am PDT
AI DevSummit Main Stage

3:30pm PDT

PRO Session: The AIOps Revolution: Transforming Database Management with AI and ML
Wednesday May 28, 2025 3:30pm - 3:55pm PDT
Anil Inamdar, NetApp Instaclustr, Global Head of Data Services 

AIOps—having ridden the ups and downs of the hype cycle over the past few years—is now buoyed by rapid AI/ML advances and poised to reach its potential in 2025 and beyond. That means transformative change in how teams think about data and analytics, as maturing ML-powered (and open source) solutions take on and mitigate the complexities of database management. Teams that have been doing their human best to keep queries performant through data traffic pattern analysis and to keep tabs on storage growth can now lean more confidently on ML-driven decision-making.

The AIOps dream is inevitable as ML training sets improve. Automated operations and predictive remediation, including optimized data indexes, reindexing, and storage management based on predictive models, are arriving—and this AI DevSummit talk will discuss how to make it all a reality.
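As a simple sketch of the kind of predictive model the abstract alludes to for storage management (the usage figures and threshold are made up for illustration), a linear trend fit over recent daily usage can estimate when a capacity limit will be crossed:

```python
import numpy as np

def days_until_threshold(daily_usage_gb: list[float], threshold_gb: float) -> float:
    """Fit a linear trend to observed usage and extrapolate to the threshold.

    Returns the estimated number of days from the last observation until the
    threshold is reached; inf if usage is flat or shrinking.
    """
    days = np.arange(len(daily_usage_gb))
    slope, intercept = np.polyfit(days, daily_usage_gb, 1)
    if slope <= 0:
        return float("inf")
    crossing_day = (threshold_gb - intercept) / slope
    return max(0.0, crossing_day - days[-1])

usage = [410, 415, 422, 431, 438, 447, 455]  # GB per day, illustrative
print(f"~{days_until_threshold(usage, 600):.0f} days until 600 GB")
```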
Speakers

Anil Inamdar

Global Head of Data Services, NetApp Instaclustr
Anil Inamdar is the Global Head of Data Services at NetApp Instaclustr. Anil has 20+ years of experience in data and analytics roles. Joining Instaclustr in 2019, he works with organizations to drive successful data-centric digital transformations via the right cultural, operational... Read More →
Wednesday May 28, 2025 3:30pm - 3:55pm PDT
AI DevSummit Main Stage

4:00pm PDT

OPEN Session: AI Cloud Advisor: Applying Domain-Specific Intelligence for Scalable Cloud Optimization
Wednesday May 28, 2025 4:00pm - 4:25pm PDT
Sumit Bhatnagar, Vice President - Software Engineering
Roshan Mahant, LaunchIT Corp, System Architect


This talk will explore the architecture and real-world deployment of AI Cloud Advisor — a custom-trained AI assistant built to optimize multi-cloud environments using Natural Language Processing (NLP), Reinforcement Learning (RL), and Retrieval-Augmented Generation (RAG). Unlike general-purpose models, AI Cloud Advisor is engineered for live, production-ready decision support across AWS, Azure, and GCP. The session will cover the technical challenges and solutions involved in unifying multiple ML paradigms for actionable, contextual insights that have driven up to 69% cloud cost savings and a 50% reduction in troubleshooting time. Ideal for ML engineers and DevOps leaders. 
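As a hedged, generic illustration of the RAG component only (not the actual AI Cloud Advisor implementation), the sketch below retrieves the most relevant cloud-cost notes for a question using a crude keyword-overlap score, standing in for a real vector store, before assembling an LLM prompt:

```python
def keyword_overlap(query: str, document: str) -> int:
    """Crude relevance score: number of shared lowercase tokens."""
    return len(set(query.lower().split()) & set(document.lower().split()))

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Return the top-k documents by overlap score (stand-in for a vector store)."""
    ranked = sorted(documents, key=lambda d: keyword_overlap(query, d), reverse=True)
    return ranked[:k]

def build_prompt(query: str, documents: list[str]) -> str:
    """Assemble retrieved context and the question into a single LLM prompt."""
    context = "\n".join(f"- {d}" for d in retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer using only the context above."

docs = [
    "The dev account runs 14 idle m5.2xlarge instances outside business hours.",
    "Storage lifecycle policies are not enabled on the analytics bucket.",
    "The CI cluster autoscaler max size was raised last quarter.",
]
print(build_prompt("Which instances are idle and costing money?", docs))
```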
Speakers

Sumit Bhatnagar

Creator, AI Cloud Advisor
Sumit Bhatnagar is an experienced engineering leader and AI strategist with 18+ years of experience in building cloud-native systems. He is the creator of AI Cloud Advisor, a real-time AI assistant for optimizing cloud workloads and architectures. Sumit is a member of the Forbes Technology... Read More →

Roshan Mahant

System Architect, LaunchIT Corp
Wednesday May 28, 2025 4:00pm - 4:25pm PDT
AI DevSummit Expo Stage
 
Thursday, May 29
 

1:00pm PDT

PRO Session: Numaflow: Powering Scalable Inference on Real-Time Data Streams at Intuit
Thursday May 29, 2025 1:00pm - 1:25pm PDT
Sri Harsha Yayi, Intuit, Staff Product Manager
Vigith Maurice, Intuit, Principal Engineer


At Intuit, the ML teams encountered challenges when managing and executing inference on high-throughput streaming data. Integrating with diverse messaging systems like Kafka, Pulsar, and SQS required considerable effort, while enabling intermediate data transformations and inference added complexity. Additionally, dynamically scaling inference workflows to handle fluctuating event volumes posed unique challenges.

To address these issues, we developed Numaflow, an open-source, Kubernetes-native platform designed for scalable event processing. Numaflow simplifies connections to various event sources, empowers teams to process and infer on streaming data with ease, and integrates effortlessly with existing infrastructure. This session is ideal for ML engineers, data scientists, and anyone interested in asynchronous inference on streaming data. We’ll dive into how Numaflow eliminates bottlenecks and optimizes inference on streaming data.
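Numaflow's own SDK is not shown here; as a generic sketch of the asynchronous, micro-batched inference pattern the abstract describes, the snippet below drains a queue of streaming events into small batches before calling a stubbed model. All names are hypothetical:

```python
import asyncio

async def fake_model(batch: list[dict]) -> list[float]:
    """Stand-in for a real model-server call; returns one score per event."""
    await asyncio.sleep(0.01)
    return [len(str(event)) * 0.1 for event in batch]

async def inference_worker(queue: asyncio.Queue, max_batch: int = 8) -> None:
    """Greedily drain the queue into micro-batches and run async inference."""
    while True:
        batch = [await queue.get()]
        while len(batch) < max_batch and not queue.empty():
            batch.append(queue.get_nowait())
        scores = await fake_model(batch)
        for event, score in zip(batch, scores):
            print(f"scored {event['id']} -> {score:.2f}")

async def main() -> None:
    queue: asyncio.Queue = asyncio.Queue()
    worker = asyncio.create_task(inference_worker(queue))
    for i in range(20):           # simulate events arriving from Kafka/Pulsar/SQS
        await queue.put({"id": i, "payload": "..."})
    await asyncio.sleep(0.2)      # let the worker drain the queue
    worker.cancel()
    try:
        await worker
    except asyncio.CancelledError:
        pass

asyncio.run(main())
```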
 
Speakers

Vigith Maurice

Principal Engineer, Intuit
Vigith is a co-creator of Numaproj and Principal Software Engineer for the Intuit Core Platform team in Mountain View, California. One of Vigith's current day-to-day focus areas is the various challenges in building scalable data and AIOps solutions for both batch and high-throughput... Read More →

Sri Harsha Yayi

Staff Product Manager, Intuit
Sri Harsha Yayi is a Product Manager at Intuit, where he primarily focuses on the company's Modern SaaS Kubernetes platform, specifically within the real-time event processing domain. He is the PM for Numaflow, an open-source, Kubernetes-native platform designed for the real-time... Read More →
Thursday May 29, 2025 1:00pm - 1:25pm PDT
AI DevSummit Main Stage

2:30pm PDT

PRO Session: Building Robust Data Pipelines for Scalable Machine Learning
Thursday May 29, 2025 2:30pm - 2:55pm PDT
Rachita Naik, Lyft, Machine Learning Engineer

This session will provide a comprehensive, hands-on guide to designing efficient, production-ready data pipelines for machine learning model training. Tailored for engineers, ML practitioners, and architects, this talk will break down key technical aspects of data processing, feature management, and pipeline optimization at scale.
Key takeaways include:
1. Optimized Data Ingestion: Efficiently processing real-time and batch data from multiple sources while minimizing latency and ensuring smooth data flow for ML models.
2. Reusable & Scalable Features: Designing centralized feature stores that enable cross-model sharing, reduce redundancy, and support large-scale ML operations.
3. Robust Data Preprocessing: Implementing techniques to clean, transform, and structure raw data, ensuring high-quality inputs that improve model accuracy and efficiency.
4. Ensuring Data Consistency: Maintaining parity between offline training and real-time inference by preventing schema mismatches, data drift, and inconsistencies (a minimal sketch of such a parity check follows this list).
5. Proactive Monitoring & Debugging: Using automated tracking, anomaly detection, and logging to identify bottlenecks, optimize pipeline performance, and ensure data reliability.
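A minimal sketch of takeaway 4, with hypothetical feature names: comparing the schema and values of an offline training row against what the online serving path emits for the same entity, so mismatches surface before they reach a model.

```python
def check_feature_parity(offline_row: dict, online_row: dict,
                         tolerance: float = 1e-6) -> list[str]:
    """Report schema mismatches and numeric discrepancies between the offline
    training pipeline and the real-time serving path for the same entity."""
    problems = []
    missing = set(offline_row) ^ set(online_row)
    if missing:
        problems.append(f"schema mismatch on fields: {sorted(missing)}")
    for name in set(offline_row) & set(online_row):
        off, on = offline_row[name], online_row[name]
        if isinstance(off, float) and isinstance(on, float):
            if abs(off - on) > tolerance:
                problems.append(f"{name}: offline={off} online={on}")
        elif off != on:
            problems.append(f"{name}: offline={off!r} online={on!r}")
    return problems

offline = {"trip_count_7d": 12.0, "avg_rating": 4.8, "city": "SF"}
online = {"trip_count_7d": 12.0, "avg_rating": 4.6}  # drifted and missing a field
print(check_feature_parity(offline, online))
```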

This session will combine technical deep dives with real-world lessons from deploying ML pipelines at scale in rideshare applications. Whether you’re designing your first ML pipeline or optimizing existing workflows, you’ll walk away with practical strategies to enhance data efficiency, model reliability, and overall system performance.
Speakers

Rachita Naik

Machine Learning Engineer, Lyft
Rachita Naik is a Machine Learning (ML) Engineer at Lyft, Inc., and a distinguished graduate of Columbia University. With a strong foundation in AI and two years of professional experience, she is dedicated to creating transformative solutions that address complex, real-world challenges... Read More →
Thursday May 29, 2025 2:30pm - 2:55pm PDT
AI DevSummit Main Stage
 
Wednesday, June 4
 

11:30am PDT

[Virtual] PRO Session: The Future of AI-Driven Discovery: Can We Build Recommendation Systems That Inspire, Not Just Retain?
Wednesday June 4, 2025 11:30am - 11:55am PDT
Jiahui (Sophia) Bai, Meta, Data Scientist Lead

This talk explores the evolving role of AI-powered recommendation systems in shaping content discovery, moving beyond engagement-driven optimization to fostering meaningful exploration. Today’s recommendation algorithms are finely tuned to maximize user retention, often reinforcing content bubbles and favoring already-popular creators. While effective for keeping users engaged, this approach can limit exposure to new ideas, diverse voices, and serendipitous discovery—key elements that make digital platforms more enriching and dynamic.

We begin by defining the core challenges of AI-driven content recommendations: noisy engagement signals, over-reliance on short-term behavioral data, and algorithmic bias toward dominant content and creators. We explore how improving signal quality—by distinguishing between passive consumption and genuine interest—can lead to more personalized yet diverse recommendations. Next, we tackle the issue of democratizing reach, discussing algorithmic strategies that ensure small and emerging creators have a fair chance to break through without compromising user satisfaction.

The session will also cover rethinking success metrics for recommendation systems, moving beyond click-through rates and watch time to metrics that measure content diversity, long-term engagement, and user well-being. Finally, we will discuss the future of AI-driven discovery, including the role of Generative AI, hybrid human-AI curation, and serendipity-focused models in creating recommendation systems that inspire, inform, and surprise users—rather than just retaining them.
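To make the "beyond click-through rates and watch time" point concrete, here is a minimal sketch of one diversity metric a team might track alongside engagement: the average pairwise dissimilarity of items in a recommended slate. The tag sets are invented for illustration.

```python
from itertools import combinations

def intra_list_diversity(slate: list[set[str]]) -> float:
    """Average pairwise Jaccard distance between item tag sets in a slate.

    1.0 means every pair of recommended items is completely dissimilar;
    0.0 means the slate is a single repeated topic.
    """
    if len(slate) < 2:
        return 0.0
    distances = []
    for a, b in combinations(slate, 2):
        union = a | b
        jaccard = len(a & b) / len(union) if union else 1.0
        distances.append(1.0 - jaccard)
    return sum(distances) / len(distances)

# Example: a slate dominated by one topic vs. a mixed slate.
narrow = [{"cats"}, {"cats", "pets"}, {"cats"}]
mixed = [{"cats"}, {"cooking"}, {"astronomy"}]
print(intra_list_diversity(narrow), intra_list_diversity(mixed))
```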
Speakers

Jiahui (Sophia) Bai

Data Scientist Lead, Meta
Sophia Bai is a Senior Data Scientist at Meta, specializing in AI-driven recommendation systems and product analytics. She has led key initiatives in improving signal quality, optimizing content discovery, and redesigning recommendation models to create more balanced and engaging... Read More →
Wednesday June 4, 2025 11:30am - 11:55am PDT
VIRTUAL AI DevSummit Main Stage

3:30pm PDT

[Virtual] PRO Session: The AIOps Revolution: Transforming Database Management with AI and ML
Wednesday June 4, 2025 3:30pm - 3:55pm PDT
Anil Inamdar, NetApp Instaclustr, Global Head of Data Services 


AIOps—having ridden the ups and downs of the hype cycle over the past few years—is now buoyed by rapid AI/ML advances and poised to reach its potential in 2025 and beyond. That means transformative change in how teams think about data and analytics, as maturing ML-powered (and open source) solutions take on and mitigate the complexities of database management. Teams that have been doing their human best to keep queries performant through data traffic pattern analysis and to keep tabs on storage growth can now lean more confidently on ML-driven decision-making.

The AIOps dream is inevitable as ML training sets improve. Automated operations and predictive remediation, including optimized data indexes, reindexing, and storage management based on predictive models, are arriving—and this AI DevSummit talk will discuss how to make it all a reality.
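A minimal sketch, independent of any particular AIOps product, of how automated monitoring can flag degrading queries: a rolling z-score over recent latency samples. The window, threshold, and data are illustrative only.

```python
import statistics

def latency_anomalies(latencies_ms: list[float], window: int = 20,
                      z_threshold: float = 3.0) -> list[int]:
    """Return indices of latency samples that deviate strongly from the recent baseline."""
    flagged = []
    for i in range(window, len(latencies_ms)):
        baseline = latencies_ms[i - window:i]
        mean = statistics.fmean(baseline)
        stdev = statistics.pstdev(baseline) or 1e-9  # avoid division by zero
        z = (latencies_ms[i] - mean) / stdev
        if z > z_threshold:
            flagged.append(i)
    return flagged

samples = [12.0, 13.1, 11.8, 12.5, 12.2] * 6 + [95.0]  # one obvious spike, illustrative
print(latency_anomalies(samples))
```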
Speakers

Anil Inamdar

Global Head of Data Services, NetApp Instaclustr
Anil Inamdar is the Global Head of Data Services at NetApp Instaclustr. Anil has 20+ years of experience in data and analytics roles. Joining Instaclustr in 2019, he works with organizations to drive successful data-centric digital transformations via the right cultural, operational... Read More →
Wednesday June 4, 2025 3:30pm - 3:55pm PDT
VIRTUAL AI DevSummit Main Stage

4:00pm PDT

[Virtual] OPEN Session: AI Cloud Advisor: Applying Domain-Specific Intelligence for Scalable Cloud Optimization
Wednesday June 4, 2025 4:00pm - 4:25pm PDT
Sumit Bhatnagar, Vice President - Software Engineering
Roshan Mahant, LaunchIT Corp, System Architect


This talk will explore the architecture and real-world deployment of AI Cloud Advisor — a custom-trained AI assistant built to optimize multi-cloud environments using Natural Language Processing (NLP), Reinforcement Learning (RL), and Retrieval-Augmented Generation (RAG). Unlike general-purpose models, AI Cloud Advisor is engineered for live, production-ready decision support across AWS, Azure, and GCP. The session will cover the technical challenges and solutions involved in unifying multiple ML paradigms for actionable, contextual insights that have driven up to 69% cloud cost savings and a 50% reduction in troubleshooting time. Ideal for ML engineers and DevOps leaders. 
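As a complementary, purely illustrative sketch of the decision-support side (again, not the product's real logic), a rightsizing rule might flag instances whose sustained CPU utilization sits far below capacity. The thresholds are placeholders:

```python
from dataclasses import dataclass

@dataclass
class InstanceMetrics:
    name: str
    vcpus: int
    avg_cpu_pct: float   # average utilization over the lookback window
    p95_cpu_pct: float   # 95th-percentile utilization

def rightsizing_advice(m: InstanceMetrics) -> str:
    """Suggest a smaller size when both average and p95 utilization are low.

    The 20%/40% cutoffs are placeholder thresholds for illustration only.
    """
    if m.avg_cpu_pct < 20 and m.p95_cpu_pct < 40 and m.vcpus > 2:
        return f"{m.name}: consider downsizing from {m.vcpus} to {m.vcpus // 2} vCPUs"
    return f"{m.name}: no change recommended"

print(rightsizing_advice(InstanceMetrics("api-prod-3", vcpus=8, avg_cpu_pct=11.0, p95_cpu_pct=27.0)))
```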
Speakers

Sumit Bhatnagar

Creator, AI Cloud Advisor
Sumit Bhatnagar is an experienced engineering leader and AI strategist with 18+ years of experience in building cloud-native systems. He is the creator of AI Cloud Advisor, a real-time AI assistant for optimizing cloud workloads and architectures. Sumit is a member of the Forbes Technology... Read More →

Roshan Mahant

System Architect, LaunchIT Corp
Wednesday June 4, 2025 4:00pm - 4:25pm PDT
VIRTUAL AI DevSummit Expo Stage
 
Thursday, June 5
 

1:00pm PDT

CANCELLED -- [Virtual] PRO Session: Numaflow: Powering Scalable Inference on Real-Time Data Streams at Intuit
Thursday June 5, 2025 1:00pm - 1:25pm PDT
Sri Harsha Yayi, Intuit, Staff Product Manager
Vigith Maurice, Intuit, Principal Engineer


At Intuit, the ML teams encountered challenges when managing and executing inference on high-throughput streaming data. Integrating with diverse messaging systems like Kafka, Pulsar, and SQS required considerable effort, while enabling intermediate data transformations and inference added complexity. Additionally, dynamically scaling inference workflows to handle fluctuating event volumes posed unique challenges.

To address these issues, we developed Numaflow, an open-source, Kubernetes-native platform designed for scalable event processing. Numaflow simplifies connections to various event sources, empowers teams to process and infer on streaming data with ease, and integrates effortlessly with existing infrastructure. This session is ideal for ML engineers, data scientists, and anyone interested in asynchronous inference on streaming data. We’ll dive into how Numaflow eliminates bottlenecks and optimizes inference on streaming data.
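As a separate illustrative sketch (not Numaflow's actual autoscaler), the dynamic-scaling challenge mentioned above can be reduced to a proportional rule: size the replica count to the ratio of incoming event rate to per-replica throughput, within fixed bounds:

```python
import math

def desired_replicas(events_per_sec: float,
                     per_replica_throughput: float,
                     min_replicas: int = 1,
                     max_replicas: int = 50) -> int:
    """Pick a replica count that keeps up with the observed event rate.

    A real autoscaler would also smooth the input rate and consider backlog,
    but the core proportional rule looks like this.
    """
    needed = math.ceil(events_per_sec / per_replica_throughput)
    return max(min_replicas, min(max_replicas, needed))

for rate in (50, 800, 12_000):  # illustrative event rates
    print(rate, "events/s ->", desired_replicas(rate, per_replica_throughput=250))
```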
 
Speakers

Vigith Maurice

Principal Engineer, Intuit
Vigith is a co-creator of Numaproj and Principal Software Engineer for the Intuit Core Platform team in Mountain View, California. One of Vigith's current day-to-day focus areas is the various challenges in building scalable data and AIOps solutions for both batch and high-throughput... Read More →

Sri Harsha Yayi

Staff Product Manager, Intuit
Sri Harsha Yayi is a Product Manager at Intuit, where he primarily focuses on the company's Modern SaaS Kubernetes platform, specifically within the real-time event processing domain. He is the PM for Numaflow, an open-source, Kubernetes-native platform designed for the real-time... Read More →
Thursday June 5, 2025 1:00pm - 1:25pm PDT
VIRTUAL AI DevSummit Main Stage

2:30pm PDT

[Virtual] PRO Session: Building Robust Data Pipelines for Scalable Machine Learning
Thursday June 5, 2025 2:30pm - 2:55pm PDT
Rachita Naik, Lyft, Machine Learning Engineer

This session will provide a comprehensive, hands-on guide to designing efficient, production-ready data pipelines for machine learning model training. Tailored for engineers, ML practitioners, and architects, this talk will break down key technical aspects of data processing, feature management, and pipeline optimization at scale.
Key takeaways include:
1. Optimized Data Ingestion: Efficiently processing real-time and batch data from multiple sources while minimizing latency and ensuring smooth data flow for ML models.
2. Reusable & Scalable Features: Designing centralized feature stores that enable cross-model sharing, reduce redundancy, and support large-scale ML operations.
3. Robust Data Preprocessing: Implementing techniques to clean, transform, and structure raw data, ensuring high-quality inputs that improve model accuracy and efficiency.
4. Ensuring Data Consistency: Maintaining parity between offline training and real-time inference by preventing schema mismatches, data drift, and inconsistencies.
5. Proactive Monitoring & Debugging: Using automated tracking, anomaly detection, and logging to identify bottlenecks, optimize pipeline performance, and ensure data reliability (a small illustrative sketch follows this list).
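A small sketch of takeaway 5, with hypothetical metric names: comparing a run's data-quality stats against a rolling baseline and logging a warning when anything drifts beyond a simple bound.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pipeline-monitor")

def check_run_stats(current: dict, baseline: dict, max_drift: float = 0.2) -> None:
    """Compare this run's data-quality stats against a rolling baseline and
    log a warning when any metric drifts by more than max_drift (20%)."""
    for metric, baseline_value in baseline.items():
        value = current.get(metric)
        if value is None:
            log.warning("metric %s missing from current run", metric)
            continue
        drift = abs(value - baseline_value) / max(baseline_value, 1e-9)
        if drift > max_drift:
            log.warning("metric %s drifted %.0f%% (baseline=%s, current=%s)",
                        metric, drift * 100, baseline_value, value)

baseline = {"row_count": 1_000_000, "null_rate_pickup_zone": 0.01}
current = {"row_count": 640_000, "null_rate_pickup_zone": 0.05}  # illustrative drift
check_run_stats(current, baseline)
```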

This session will combine technical deep dives with real-world lessons from deploying ML pipelines at scale in rideshare applications. Whether you’re designing your first ML pipeline or optimizing existing workflows, you’ll walk away with practical strategies to enhance data efficiency, model reliability, and overall system performance.
Speakers

Rachita Naik

Machine Learning Engineer, Lyft
Rachita Naik is a Machine Learning (ML) Engineer at Lyft, Inc., and a distinguished graduate of Columbia University. With a strong foundation in AI and two years of professional experience, she is dedicated to creating transformative solutions that address complex, real-world challenges... Read More →
Thursday June 5, 2025 2:30pm - 2:55pm PDT
VIRTUAL AI DevSummit Main Stage
 
