AI DevSummit 2025 + DeveloperWeek Leadership 2025
Subject: Machine Learning / Models & Architectures (AI DevSummit)
Thursday, May 29
 

1:30pm PDT

PRO Session: AI Agents in Software Engineering: Can Autonomous AI Teams Build and Maintain Code?
Thursday May 29, 2025 1:30pm - 1:55pm PDT
Tanush Sharanarthi, IBM, Staff Software Engineer

AI-powered coding assistants like GitHub Copilot have changed how developers write code, but what if AI agents could go beyond assisting and work together as a team? This talk explores the potential of multi-agent AI systems, where different AI agents take on specialized roles—one writing code, another reviewing it, a third optimizing performance, and another refactoring—to collaboratively build and maintain software with minimal human input. We’ll dive into real-world applications, emerging research, and the challenges of AI-driven development, from debugging AI-generated code to ensuring reliability. Attendees will gain insight into the future of AI-powered software engineering, whether AI agents can function as independent development teams, and what this means for the role of human engineers.
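As a rough illustration of the multi-agent pattern this abstract describes (a generic sketch, not material from the talk), the snippet below wires two role-specialized agents, a coder and a reviewer, into a simple draft-review-revise loop. The `complete()` function is a hypothetical placeholder for whatever LLM backend is available.

```python
# Minimal sketch of a coder/reviewer agent loop (illustrative only).
# `complete(prompt)` is a hypothetical placeholder for any LLM completion
# call (local model, hosted API, etc.), not a real library function.

from dataclasses import dataclass


def complete(prompt: str) -> str:
    """Placeholder LLM call; swap in a real backend here."""
    raise NotImplementedError


@dataclass
class Agent:
    role: str           # e.g. "coder", "reviewer", "optimizer"
    system_prompt: str  # role-specific instructions

    def run(self, task: str) -> str:
        return complete(f"{self.system_prompt}\n\nTask:\n{task}")


def build_with_review(task: str, max_rounds: int = 3) -> str:
    """Coder drafts code, reviewer critiques it, coder revises: a tiny
    version of the 'autonomous AI team' loop described above."""
    coder = Agent("coder", "You write clean, tested Python code.")
    reviewer = Agent("reviewer", "Review the code; reply APPROVED or list issues.")

    draft = coder.run(task)
    for _ in range(max_rounds):
        review = reviewer.run(f"Review this code:\n{draft}")
        if "APPROVED" in review:
            break
        draft = coder.run(
            f"Revise the code to address this review:\n{review}\n\nCode:\n{draft}"
        )
    return draft
```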
Speakers

Tanush Sharanarthi

Staff Software Engineer, IBM
Tanush Sharanarthi is a Staff Software Engineer at IBM Silicon Valley Labs with a strong background in software development and artificial intelligence. His work focuses on AI-driven development, large language models, and building intelligent automation systems. He has served as...
Thursday May 29, 2025 1:30pm - 1:55pm PDT
AI DevSummit Main Stage

2:00pm PDT

PRO Session: Decoding Enterprise AI for Devs: Choosing Between Private LLMs and Public Generative AI Services
Thursday May 29, 2025 2:00pm - 2:25pm PDT
Shomron Jacob, Iterate.ai, Head of Applied Machine Learning & Platform

This AI DevSummit session will navigate an increasingly pivotal crossroads: whether to invest in proprietary, custom-tailored large language models (LLMs) or to capitalize on the versatility and ease of public generative AI services.

The session will begin by demystifying the complexities of private LLMs. These models offer domain-specific capabilities, enhanced data security, faster customization, and compliance with industry-specific regulations. Yet they also pose challenges: a larger investment, infrastructure requirements, and ongoing maintenance. These trade-offs warrant thorough examination.

Next, the session will scrutinize public generative AI services, exploring the inherent benefits of these ready-to-use solutions. With their scalability, diverse applications, and lower upfront costs, they hold significant appeal. But they also come with their own set of considerations, such as data privacy, standardized performance, and reduced control over the model’s behavior.
With real-world examples, we will walk through how various organizations have approached this decision, the results they achieved, and the invaluable lessons learned.

The session will then present a decision-making framework designed to help attendees assess the choice between private LLMs and public generative AI services more effectively.
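The speaker’s framework is his own; as a purely illustrative stand-in, the sketch below scores the two options against a handful of the criteria the abstract mentions (data sensitivity, customization, upfront cost, operational burden). The specific criteria, weights, and scores are assumptions chosen for demonstration.

```python
# Illustrative-only scoring sketch for the private-vs-public decision.
# The criteria, weights, and 1-5 scores below are assumptions chosen for
# demonstration, not the framework presented in the session.

CRITERIA = {
    # criterion: (weight, private_LLM_score, public_service_score)
    "data_sensitivity":     (0.30, 5, 2),  # private keeps data in-house
    "domain_customization": (0.25, 5, 3),  # fine-tuning and model control
    "upfront_cost":         (0.20, 2, 5),  # public services cost less to start
    "time_to_market":       (0.15, 2, 5),
    "ops_burden":           (0.10, 2, 4),  # private means infra + maintenance
}


def score(option: str) -> float:
    """Weighted score for 'private' or 'public'."""
    idx = 1 if option == "private" else 2
    return sum(vals[0] * vals[idx] for vals in CRITERIA.values())


if __name__ == "__main__":
    private, public = score("private"), score("public")
    print(f"private LLM: {private:.2f}   public service: {public:.2f}")
    print("lean private" if private > public else "lean public")
```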
Speakers

Shomron Jacob

Head of Applied Machine Learning & Platform, Iterate.ai
Shomron Jacob is the Head of Applied Machine Learning & Platform at Iterate.ai. He began his career as a software engineer but soon found himself drawn to ML/AI and shifted his professional focus to it. He lives in Silicon Valley.
Thursday May 29, 2025 2:00pm - 2:25pm PDT
AI DevSummit Main Stage

2:30pm PDT

PRO Session: Building Robust Data Pipelines for Scalable Machine Learning
Thursday May 29, 2025 2:30pm - 2:55pm PDT
Rachita Naik, Lyft, Machine Learning Engineer

This session will provide a comprehensive, hands-on guide to designing efficient, production-ready data pipelines for machine learning model training. Tailored for engineers, ML practitioners, and architects, this talk will break down key technical aspects of data processing, feature management, and pipeline optimization at scale.
Key takeaways include:
1. Optimized Data Ingestion: Efficiently processing real-time and batch data from multiple sources while minimizing latency and ensuring smooth data flow for ML models.
2. Reusable & Scalable Features: Designing centralized feature stores that enable cross-model sharing, reduce redundancy, and support large-scale ML operations.
3. Robust Data Preprocessing: Implementing techniques to clean, transform, and structure raw data, ensuring high-quality inputs that improve model accuracy and efficiency.
4. Ensuring Data Consistency: Maintaining parity between offline training and real-time inference by preventing schema mismatches, data drift, and inconsistencies.
5. Proactive Monitoring & Debugging: Using automated tracking, anomaly detection, and logging to identify bottlenecks, optimize pipeline performance, and ensure data reliability.

This session will combine technical deep dives with real-world lessons from deploying ML pipelines at scale in rideshare applications. Whether you’re designing your first ML pipeline or optimizing existing workflows, you’ll walk away with practical strategies to enhance data efficiency, model reliability, and overall system performance.
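As a small, generic illustration of takeaway 4 above (offline/online consistency), and not code from the talk, the sketch below checks that a feature row produced by a real-time serving path matches its offline training definition in both type and value. The feature names and example rows are hypothetical.

```python
# Generic sketch of an offline/online feature-parity check (illustrative only).
# Feature names and the example rows are hypothetical.

import math
from typing import Dict, List

FEATURE_SCHEMA = {"trip_count_7d": int, "avg_rating": float, "is_weekend": bool}


def check_parity(
    offline: Dict[str, object],
    online: Dict[str, object],
    rel_tol: float = 1e-6,
) -> List[str]:
    """Return human-readable mismatches between the offline (training)
    feature row and the online (serving) feature row."""
    problems = []
    for name, expected_type in FEATURE_SCHEMA.items():
        if name not in offline or name not in online:
            problems.append(f"{name}: missing from offline or online row")
            continue
        off, on = offline[name], online[name]
        if not isinstance(on, expected_type):
            problems.append(
                f"{name}: online type {type(on).__name__} != {expected_type.__name__}"
            )
            continue
        if isinstance(off, float):
            if not math.isclose(off, on, rel_tol=rel_tol):
                problems.append(f"{name}: value drift offline={off} online={on}")
        elif off != on:
            problems.append(f"{name}: value mismatch offline={off} online={on}")
    return problems


if __name__ == "__main__":
    offline_row = {"trip_count_7d": 12, "avg_rating": 4.81, "is_weekend": False}
    online_row = {"trip_count_7d": 12, "avg_rating": 4.81, "is_weekend": False}
    print(check_parity(offline_row, online_row) or "features consistent")
```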
Speakers

Rachita Naik

Machine Learning Engineer, Lyft
Rachita Naik is a Machine Learning (ML) Engineer at Lyft, Inc., and a distinguished graduate of Columbia University. With a strong foundation in AI and two years of professional experience, she is dedicated to creating transformative solutions that address complex, real-world challenges...
Thursday May 29, 2025 2:30pm - 2:55pm PDT
AI DevSummit Main Stage
 
