A Confluence of Testing Titans
Spartans Summit 2026
Where testing leaders unite to explore AI-native, agentic quality engineering and redefine how software is tested and shipped.
11th March 2026, 3:00 PM - 9:00 PM IST
About Spartans Summit
Spartans Summit is the annual gathering of the TestMu AI Spartans. It brings together testers, QA leaders, and practitioners to explore how AI is transforming testing and shaping the future of quality engineering. Through real-world stories, practical discussions, and forward-looking insights on AI-driven automation, agentic testing, and evolving QA roles, the summit creates a space to learn, connect, and prepare for what’s next. Join the conversations that define the future of testing.
Register for Spartans Summit 2026
Meet the Speakers
Agenda
Spartans Summit 2026
3:00 - 3:15 PM IST
15 Mins
Opening Note
Welcome to Spartans Summit 2026
3:15 - 4:00 PM IST
45 Mins
Session
The Symbiosis Effect: AI & I
Nowadays, using artificial intelligence in software testing is practically standard. Agentic AI, GenAI, and AI agents are just a few of the terms that have monopolized our attention over the past months. But will they completely replace established processes in the SDLC? Is the human still in the loop? And how can software testers keep up with all the change behind these buzzwords? In this talk, we will dive deep into the past, present, and future of software testing and try to uncover the balance between human and machine.
Ioannis Papadakis
Head of QA, Snappi
4:00 - 4:45 PM IST
45 Mins
Session
Evaluating AI Agents: Testing and Tooling for Reliable Outcomes
As AI agents become more capable and widely used, ensuring their reliability requires new testing approaches. In this session, you’ll learn how to evaluate AI agents through scenario-based testing, intent validation, and end-to-end outcome checks to ensure they behave correctly within real-world business contexts.
Asmita Parab
Technology Consulting - QE Manager, EPAM Systems
4:45 - 5:30 PM IST
45 Mins
Session
Hallucination Hunters: A QA Engineer's Guide to AI Evaluation
As AI becomes part of modern applications, testing systems that don’t always produce the same output introduces new challenges. In this session, you’ll learn how to extend your existing test automation skills to evaluate AI responses, detect issues like hallucinations, and apply practical evaluation techniques to ensure the quality of AI-powered features.
Gaurav Khurana
Senior Test Consultant, Microsoft
4:00 - 5:30 PM IST
90 Mins
WORKSHOP
Master MCP for Testing: Build, Secure & Scale Your Testing
This workshop introduces Model Context Protocol (MCP) and how it can transform modern testing workflows by addressing common automation and scalability challenges. Through a hands-on live build session, participants will create a custom MCP testing tool, explore integrations with tools like Appium and WebDriverAgent, and understand how MCP enables more flexible and intelligent testing systems.
Sai Krishna
Director of Engineering, TestMu AI
Srinivasan Sekar
Director of Engineering, TestMu AI
5:30 - 6:30 PM IST
60 Mins
PANEL DISCUSSION
From Pilot to Pipeline: How Teams Are Actually Scaling AI in Quality Engineering
This panel explores how teams are moving beyond AI pilots to real adoption in quality engineering. Experts will discuss practical use cases, evolving testing strategies, and the metrics that help teams successfully scale AI across the testing pipeline. The conversation will also highlight cultural shifts, skills, and mindset changes needed to make AI a reliable part of modern testing workflows.
Harinee Muralinath
Director, Thoughtworks
Jaydeep Chakrabarty
Director of AI in Tech, Piramal Finance
Pricilla Bilavendran
Team Leader, Billennum
Rahul Parwal
Specialist, ifm engineering
Siddhant Wadhwani
Engineering Manager - SDET, Newfold Digital
6:30 - 7:15 PM IST
45 Mins
Session
Stop Writing Tests. Start Training Models
AI-augmented testing systems do not improve by adding more tests; they improve by learning from data, signals, and feedback loops. Test engineers must therefore rethink their role—from test execution to model enablement. This includes curating high-quality training data, defining behavioral constraints, and continuously validating model outputs against real system risks.
Rohit Mehta
Practice Head QAT, Pratham Software
7:15 - 8:00 PM IST
45 Mins
Session
Agentic AI for Testing: A Self-Updating Quality System That Writes, Runs, Explains, and Governs Tests
“AI in testing” often stops at test-case generation or a chat-based assistant. This talk introduces a more differentiated approach: an agentic quality system that continuously converts product change into verified coverage, explains failures with evidence, and enforces governance so teams can trust what ships.
Partha Sarathi Samal
Quality Engineering Manager, Paramount
8:00 - 8:15 PM IST
15 Mins
Closing Note
Closing Note - Spartans Summit 2026
TestMu AI Spartans Community
Join TestMu AI Spartans
A global community of testers, engineers, and AI builders shaping the agentic future of quality engineering. Spartans collaborate, learn, and lead the shift from traditional testing to intelligent, AI-native testing.
- Learn and collaborate with a global community building the future of AI-driven testing.
- Share knowledge, create impact, and grow as a trusted voice in the QA ecosystem.
- Get early access to AI innovations, events, and community-led initiatives.












