
Autonomous Testing is Here: How AI and Machine Learning Are Reshaping QA

Podcast – Ep 7 | December 12, 2025 | Duration: 23 minutes 52 seconds

Host: Snigdha Sukhavasi

Guest: Sankalp Bajpai, QA Expert


Podcast Summary

In this episode of Tech Is Our Passion, host Snigdha sits down with Sankalp Bajpai, a QA professional with over 14 years of hands-on experience in testing, automation, and release management. Sankalp shares how his journey into quality assurance began with a natural curiosity about how systems work and a strong attention to detail. Starting out in manual testing, he grew alongside the industry, moving into automation, leadership roles, and building QA practices that are tightly integrated into agile and CI/CD workflows.

A major part of the conversation focuses on finding the right balance between manual and automated testing. Sankalp explains that automation isn’t a one-size-fits-all solution and should be driven by a product’s maturity, stability, and risk. While automation is essential for regression and speed, manual testing still plays a crucial role in exploratory testing, usability, and early-stage development, areas where human judgment truly matters. He emphasizes that quality should be built into the process from the beginning, not added at the end.

The discussion also covers testing complex enterprise applications and the importance of risk-based testing in agile environments. Sankalp shares real-world examples of how understanding system architecture, prioritizing high-risk areas, and working closely with developers and business teams can prevent costly issues later. He also highlights common pitfalls in fast-moving agile teams, such as over-reliance on automation or excluding QA from planning discussions, and explains how early involvement and collaboration help teams avoid these challenges.

Looking ahead, Sankalp talks about how AI and machine learning are beginning to reshape QA by supporting visual testing, managing flaky tests, and identifying risk areas more intelligently. While autonomous testing continues to evolve, he believes human testers will remain essential for strategy, domain understanding, and user empathy. He concludes with a simple but powerful message: QA isn’t just about finding bugs; it’s about building quality, trust, and better user experiences.

Podcast Transcript

The Journey Into Quality Assurance

Snigdha

Hi everyone, welcome back to Tech Is Our Passion, the show where we bring you stories, strategies, and success from the minds shaping today’s technology landscape. In this episode, we’re diving deep into the world of quality assurance with someone who’s seen it all across the SDLC, from manual testing to full-blown automation: Sankalp Bajpai. Sankalp brings over 14 years of experience in QA engineering with a strong command of risk management, agile sprint practices, test automation, and release coordination. Whether it’s building robust test cases or managing UAT with cross-functional teams, he’s been at the forefront of ensuring software quality at scale. Let’s get into it.


Snigdha

Hi, Sankalp, thank you so much for taking the time to be on our show today. So to kick things off, let’s start with the classic: you’ve had a very long and diverse career in QA. What originally drew you to this field, and how has your role within QA evolved over the years?

Sankalp 

Thank you, Snigdha, for providing this opportunity, and I’m glad to answer that question. I was initially drawn to QA because, early in my career, I started to realize that I had a natural curiosity about how things work and a strong attention to detail. I started seeing how things, no matter how small, could impact a user’s experience or a specific business outcome. That motivated me to pursue quality assurance more seriously. I started with manual testing and then, as the industry evolved, I tried to evolve along with it and moved into automation. Over time, I moved into leadership roles: implementing QA strategies, building test automation frameworks, integrating QA into CI/CD pipelines, and promoting a shift-left mindset.

Balancing Manual and Automated Testing

Snigdha

Okay, perfect. So, from what I understand, you’ve worked extensively in both manual and automated testing, across various tools and platforms. How do you determine the right balance between manual and automated testing, say, for a new project?

Sankalp

The balance really depends on the context. There are several factors: project goals, risk areas, timelines, and, most importantly, how mature the application is. If I’m starting on a brand-new project, I begin by identifying test cases that are repetitive, stable, and, most importantly, critical for regression; those are my strong candidates for automation. Manual testing is still essential because we come across a lot of exploratory scenarios, usability checks, and areas that require human intuition, especially in the early days of development when things are still changing. For example, I was recently working on a healthcare platform with frequent UI changes, so initially we had to rely on manual testing quite heavily. As things moved along, after a few sprints, the UI was finally stable and the workflows were clearly defined. We started automating those and introduced CI/CD pipelines to run the regression test cases automatically. So it’s not binary, one or the other; you have to maintain the right balance, and it all depends on how your application is behaving and what brings the most value.

Snigdha

Okay, but do you see that changing in the future with all this AI and automation, say, over the next couple of years? There’s so much talk about automated testing platforms in the market. What’s your take on that?

Sankalp

Basically, you first want to understand what your system currently is, right? Say you want to introduce new automation tools: the first and most important thing is to understand how your application is behaving. Rather than just picking up one tool and shaping your test cases and flows around it, it should be the product driving your process. Once you understand the domain and what the product is doing, you can start evaluating the tools that serve that purpose best and help the business create a cleaner product. I think that’s where we should start.

Tailoring Testing Strategies for Complex Applications


Snigdha

Okay, yeah, awesome. So you’ve obviously led testing through all the stages: functional, unit, integration, regression, UAT. How do you tailor your testing strategies for complex enterprise applications?

Sankalp

Well, the first step is always, like I just mentioned, understanding the full system architecture: front end, back end, integrations, how the data flows from one end to the other. Usually, these complex applications span multiple teams and technologies. So, when I’m trying to create a test strategy for the whole program as a unit, I try to do it in layers, ensuring that quality is built in from the start rather than inserted at a later point. A really important factor, to me, is that the QA team should work in close collaboration with the developers. We ensure coverage using frameworks like JUnit or TestNG, and practices like test-driven development or behavior-driven development; these things should be discussed with the stakeholders and the developers. For APIs and services, we use tools like Postman or Rest Assured to automate validations early, which helps us identify issues before they even make it to the UI. Functional and regression testing is really critical, and we often use tools like Selenium, Playwright, or Cypress, depending on what kind of tech stack your client wants to go with. We usually integrate those with CI/CD pipelines using Jenkins or, these days, GitHub Actions, just to ensure that the tests run automatically and you don’t have to rerun them manually on every new code push.
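As a flavor of what that early API validation looks like, here is a minimal Rest Assured sketch; the endpoint, fields, and expected values are hypothetical, just to show the shape of such a check:

    import static io.restassured.RestAssured.given;
    import static org.hamcrest.Matchers.equalTo;

    import org.testng.annotations.Test;

    public class PatientApiTest {

        // Hypothetical service URI, for illustration only
        private static final String BASE_URI = "https://api.example.com";

        @Test
        public void getPatientReturnsExpectedRecord() {
            given()
                .baseUri(BASE_URI)
                .header("Accept", "application/json")
            .when()
                .get("/patients/123")
            .then()
                .statusCode(200)                    // service is reachable and responding
                .body("id", equalTo(123))           // payload carries the requested record
                .body("status", equalTo("ACTIVE")); // business rule checked before any UI exists
        }
    }

A check like this runs in seconds inside a pipeline, which is exactly how contract and data issues get caught long before they reach the UI.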

Coming to UAT, we work with the business users. What we do is define personas based on usage scenarios and acceptance criteria. We often run UAT in a sandbox environment that mirrors production, and basically UAT is all about collaborating with your business stakeholders, who act as your end users. In one of my recent assignments, I was coordinating testing across engineering, manufacturing, and supplier collaboration modules. Each module had its own data dependencies. To manage this, we broke the testing into domain-specific streams, started building reusable test components, implemented early integration tests between the systems using API mocks, and created data provisioning scripts so that data could be reused throughout all the test layers we had designed. So, to wrap up the question, my strategy is always based on three principles: risk-based prioritization, automation where it saves time and cost, and strong collaboration across the QA, dev, and business teams.
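To make the mocking piece concrete, here is a minimal sketch assuming WireMock as the stubbing library (the transcript doesn’t name the tool, so this is one plausible choice); the port, endpoint, and payload are hypothetical:

    import static com.github.tomakehurst.wiremock.client.WireMock.aResponse;
    import static com.github.tomakehurst.wiremock.client.WireMock.get;
    import static com.github.tomakehurst.wiremock.client.WireMock.urlEqualTo;

    import com.github.tomakehurst.wiremock.WireMockServer;

    public class SupplierMockExample {

        public static void main(String[] args) {
            // Stand-in for the supplier collaboration module before it is ready
            WireMockServer server = new WireMockServer(8089);
            server.start();

            // Hypothetical endpoint: serve canned data for early integration tests
            server.stubFor(get(urlEqualTo("/suppliers/42/parts"))
                    .willReturn(aResponse()
                            .withStatus(200)
                            .withHeader("Content-Type", "application/json")
                            .withBody("[{\"partId\": \"P-100\", \"qty\": 5}]")));

            // Tests for the engineering module can now call http://localhost:8089
            // exactly as they would call the real service.

            server.stop();
        }
    }

The point of the stub is that the dependent module’s tests can run every sprint, even while the real upstream system is still being built.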

Integrating Risk-based Testing into Agile

Snigdha

Perfect, thank you for that. I think that was very insightful. Touching specifically on risk-based testing, which is a crucial part of large-scale QA efforts: with your experience in risk identification and control, how do you integrate risk thinking into your sprint cycles?


Sankalp

Risk-based testing is absolutely essential, especially in large-scale products where testing everything isn’t realistic. Since most of the projects we end up working on are agile, you have some sort of risk-based matrix to decide which scenarios to pick up at the start of a sprint and which can be deferred, depending on how critical they are. During sprint planning, I usually collaborate with the stakeholders and developers to assess several risk factors like code changes, third-party dependencies, and newer integrations. These insights guide our test coverage: high-risk areas get prioritized in automation, regression, and exploratory testing. We also tag the tests based on their risk level so that we can run focused executions when needed. For example, if we are close to a release, or we’ve shipped a release and there’s a hotfix that needs to go out, we have designated tests to run in that case, just to ensure that whatever we are sending off is a clean product and isn’t breaking what already exists. I just want to pivot to a recent example I came across. We were integrating with a third-party ERP tool, and because of its complexity and high business impact, we identified several risk areas, increased the test coverage, and built simulated scenarios focused on those specific areas. We ran API-level validations throughout the sprint. This kind of proactive approach helped us catch a lot of critical data sync issues that would otherwise have seeped into UAT and caused any number of issues in production. So, basically, throughout the sprint I try to maintain a living risk matrix, which is reviewed during the grooming calls and retrospectives. If the risk level for a specific feature changes, for example, it becomes more stable or a new dependency is introduced, we adapt our strategy based on that new information.
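One lightweight way to do that tagging, sketched here with TestNG groups (the group and test names are just examples, not from the project described above):

    import org.testng.annotations.Test;

    public class OrderTests {

        // High-risk: touches the third-party ERP integration
        @Test(groups = {"high-risk", "regression"})
        public void orderSyncsToErp() {
            // ... assert against the ERP sync result
        }

        // Lower-risk: cosmetic, safe to defer during a hotfix run
        @Test(groups = {"low-risk"})
        public void orderPageShowsCompanyLogo() {
            // ... UI-level assertion
        }
    }

A hotfix execution can then include only the high-risk group via the testng.xml groups filter, so the focused run stays small and fast.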

Building a QA Tech Stack for the Future

Snigdha

Okay, yeah, that makes a lot of sense; I think that’s the right approach. You mentioned a couple of tools, like Selenium. Other than that, there are tools like Quality Center, SoapUI, LoadRunner, and more. So, if you were to build a QA tech stack for a startup in 2025, what tools would make your list and why?

Sankalp

I would start with Selenium WebDriver. It’s one of the most widely used tools for testing your UI. It’s free, it’s open source, it has a great community for support, and you can easily link it with Jenkins or GitHub Actions to create a seamless pipeline. Those are the reasons Selenium WebDriver would be one of my go-to tools for automation. Then we can eventually evolve from Selenium and start introducing tools like Playwright, a next-gen browser automation tool. Selenium is great, it has all the capabilities, but Playwright offers somewhat superior capability when it comes to handling modern applications, and it has extensive support for multiple browsers and devices. It’s fast, it’s reliable, and it’s really easy to set up. For API testing, there’s Postman: you can just put in your endpoints and all the right parameters, and it’ll give you the results you need; basically, you can create an entire test suite from that. And as an additional point, Postman is also really easy to link into your Jenkins pipelines, which is what we want to do over time.
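For a sense of how little it takes to get started with Selenium, here is a minimal WebDriver sketch (Selenium 4+; the URL, locators, and credentials are hypothetical):

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;

    public class LoginSmokeTest {

        public static void main(String[] args) {
            // Selenium 4's bundled Selenium Manager resolves the browser driver automatically
            WebDriver driver = new ChromeDriver();
            try {
                driver.get("https://app.example.com/login"); // hypothetical URL
                driver.findElement(By.id("username")).sendKeys("qa-user");
                driver.findElement(By.id("password")).sendKeys("not-a-real-password");
                driver.findElement(By.id("login-button")).click();

                // A real suite would assert on the landing page instead of printing
                System.out.println("Title after login: " + driver.getTitle());
            } finally {
                driver.quit();
            }
        }
    }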

Moving on to the test management section, I would use something called Zephyr. It’s a Jira-based plugin and, again, the first thing I always look for is that it can be easily integrated into CI/CD pipelines. The moment you start triggering your test cases through Jenkins or GitHub Actions, Zephyr will track all of the test cases that you have put in the system and create distinct reports for everybody to read. You can share those across Slack or Teams or whatever channel you want to go with. And last but not least, like I keep mentioning, we need Jenkins or GitHub Actions for creating pipelines. That’s one of the most important factors in running automated test cases: you don’t want somebody to sit and run those test cases manually every time. You want the dependencies set up in such a way that these test cases are picked up automatically as soon as there is a new build available.

Moving on from functional to a little bit of performance, you’ll see a lot of folks using JMeter or Gatling. JMeter, just like Selenium WebDriver, is well established; it has been tried and tested. It’s a go-to tool for almost anyone who wants to step into the world of performance testing without paying the heavy license fees of something like LoadRunner, which is used quite a lot but has a pretty expensive license. JMeter is free, so you can go with that. And, like I just mentioned, if you want to communicate how your runs and reports are going, you can always go with Slack or Microsoft Teams. I deviated a little bit, but these are the tools I would go with if I’m building a stack for a tech startup.

QA Pitfalls in Agile Environments

Snigdha

Thank you for the detailed list. I’m sure all the QA enthusiasts listening will take notes, and it’ll be helpful for them for sure. So, you did mention earlier taking an agile approach. In agile environments, speed is everything, right? But then quality can’t be compromised either. So, what are some of the QA pitfalls you’ve seen in fast-moving agile teams, and how do you help teams avoid them?

Sankalp

Well, I have worked on waterfall models, and I have been working in agile for the past 10 or 11 years, as far as I can remember. The one issue I had with waterfall, and I don’t want agile to go down the same line, is treating QA as a final step rather than an integrated process. In waterfall, we go through requirements, then development, and then testing; it’s not interactive. In agile, and this is the most important thing I can stress, it needs to be interactive. The key is to develop multiple minimum viable products, MVPs: you build a product, we test it, we send it to the stakeholders, we get their feedback, and we incorporate it into the sprint cycle. I think this is the most important point we absolutely need to adhere to in agile, because when you leave testing to the end of a sprint, quality will always suffer. Everyone wants to meet their deadlines, and things usually get missed. Another pitfall is over-reliance on automation; you’ll hear people say, we want to automate everything. Yes, automation is good, it saves a lot of time, but you absolutely cannot disregard the effect a meaningful exploratory testing phase has on a product. Automation is fast, but it’s not going to replace human intuition, especially for UI and usability. If you’re building an application directed towards a specific set of users, you need testers on your team who are able to think from their perspective and understand what those users would like or dislike within your application.

In agile, we usually don’t go for a lot of documentation. That’s why teams end up skipping proper test planning or risk analysis in the rush to deliver, which eventually leads to missed edge cases and relatively poor regression coverage. And I have been part of teams where QA wasn’t even involved in backlog grooming or story definition, which baffles me, because if QA doesn’t understand the user acceptance criteria, it might require a lot of rework later. So, to avoid these pitfalls, I try to embed my team from day one, which means involving testers in sprint planning, grooming sessions, and daily standups. I try to promote a shift-left approach, encouraging unit and integration testing by developers while QA focuses on building automation early; that’s the key. Collaborating on test data, doing risk-based exploratory testing. I also try to teach my team that automation is a living asset, and we need to keep maintaining and prioritizing it just like any other code that has been written. For integration, we need to continuously improve our pipelines so that we can share fast, actionable insights with everybody.

Snigdha

Okay, perfect. Thank you for that. Let’s touch a little bit on release management now, which sounds like part choreography, part chaos. What’s your approach to ensuring smooth collaboration between QA, development, and deployment teams?

Sankalp

You’re right, it is part choreography and part chaos. My approach is to treat it like a well-rehearsed play where everyone knows their roles and responsibilities and the timeline we are supposed to follow, and we also need a plan B in case plan A doesn’t work. It starts with early alignment: I bring QA, dev, and ops together to define the release scope with clear entry and exit criteria, and we create a checklist. I was leading a team where we had a multi-module release involving engineering, supplier collaboration, and document control. These were tightly integrated modules, so to avoid any surprises, we set up cross-functional checkpoints, did a lot of dry runs, and documented everything in Confluence, from risks to what happens if you need to roll back a specific module. QA’s responsibility included, but wasn’t exclusive to, testing all the critical parts, and we ensured that those critical parts were automated and in the pipeline so that we could provide real-time results. We were using Slack at the time, so we integrated the test results into Slack, so the whole team, including the stakeholders, had visibility into which tests were running at any point and what report was being generated. We had go/no-go meetings with the business stakeholders where we would visit each and every task and review the reports about 48 hours before the release to validate that the release was good to go; once we got the sign-off, we went with it. To summarize, it wasn’t just the tools or the intelligence we were sharing; it had to be a common mindset for the whole team. It’s not a single person’s responsibility, whether that’s QA, dev, or release management. We need to work as a team to ensure that the business stakeholders and the end users are confident in a release, and that’s what did the trick.
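The Slack piece of that can be as simple as posting to an incoming webhook; a rough sketch with Java’s built-in HTTP client, where the webhook URL and the message are placeholders:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class SlackNotifier {

        // Placeholder; real incoming-webhook URLs are issued per Slack workspace
        private static final String WEBHOOK_URL =
                "https://hooks.slack.com/services/XXX/YYY/ZZZ";

        public static void main(String[] args) throws Exception {
            String payload = "{\"text\": \"Regression run finished: 142 passed, 3 failed\"}";

            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create(WEBHOOK_URL))
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(payload))
                    .build();

            // Slack answers HTTP 200 with body "ok" when the message is accepted
            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.statusCode() + " " + response.body());
        }
    }

Hook a call like this onto the end of the pipeline run and the whole team, stakeholders included, sees results without opening the CI tool.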

The Importance of Domain Knowledge in QA

Snigdha

Yeah, I think you’re absolutely right there. You did mention earlier that manual testing and human testers are still here to stay, even with all the automation and AI that’s happening. So, in your opinion, how important is domain knowledge versus tool expertise in becoming a successful QA engineer today?

Sankalp

They’re both important, I think. But if I have to prioritize, and I’m new to the QA world, I would start by learning the domain as much as I could, because the tools can be learned eventually. Like I mentioned earlier, I started off with manual testing and eventually moved on to automation. With the people I’ve worked with, the strongest QA engineers were absolute masters of their domain. For tools, we have documentation and tutorials, and these frameworks evolve constantly, so you need to stay on the edge, learning something new every day. But a deep understanding of a business domain enables you to test smarter, not just harder. It helps you ask the right questions, identify hidden risks that nobody else could see, and, most importantly, validate business value rather than just functionality. For example, in healthcare and finance, knowing the regulatory requirements or user workflows lets you anticipate the edge cases and compliance risks that most of the tools we use are unable to catch. This kind of insight is what elevates QA from being just a support function into a strategic partner. Tool expertise, I’m not going to downplay it; it’s critical, especially with the advent of DevOps, automation, and AI-driven testing. But if you want to be a truly impactful QA, you have to have a deep understanding of the domain you are working in. And if you have the tool knowledge on top of that, it’s always an added bonus.


The Future of QA: AI and Automation

Snigdha

Definitely. I hope all the aspiring QAs are taking notes and making sure that’s done. So, let’s talk a little bit about the future. I know we’ve touched upon AI and automation before, but just circling back to how AI and machine learning are transforming QA: what do you think is going to happen over the next couple of years, and where are we moving in terms of autonomous testing?

Sankalp

AI and machine learning are already reshaping the world of QA. Over the next few years, I see us moving steadily towards autonomous, intelligent testing. We are evolving from rule-based scripts to AI-driven testing that predicts risk areas, recommends test cases, and even auto-heals your scripts when UI elements change. For example, we used a tool called Applitools, a visual testing tool. You can run it on a complex web application, and it uses AI to detect visual regressions that traditional functional tests would usually miss. If a layout changes on the page, a font has rendering issues, or content misaligns across different browsers, these tools help us catch those issues and get them fixed on the spot. This has reduced our review cycles and, in general, helped the dev team catch these issues earlier.
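A rough sketch of what such a visual check looks like with the Applitools Eyes Selenium SDK; the app name, test name, and URL are illustrative, and the API key would come from your Applitools account:

    import com.applitools.eyes.selenium.Eyes;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;

    public class VisualCheckExample {

        public static void main(String[] args) {
            WebDriver driver = new ChromeDriver();
            Eyes eyes = new Eyes();
            eyes.setApiKey(System.getenv("APPLITOOLS_API_KEY"));

            try {
                // open() wraps the driver so every check is sent to the Eyes service
                driver = eyes.open(driver, "Healthcare Portal", "Dashboard visual check");
                driver.get("https://app.example.com/dashboard"); // hypothetical URL

                // Captures the page and compares it against the stored baseline;
                // layout shifts, font rendering issues, and misalignment surface as diffs
                eyes.checkWindow("Dashboard");

                eyes.close(); // fails the test if unreviewed visual differences were found
            } finally {
                driver.quit();
                eyes.abortIfNotClosed(); // clean up if the test exited early
            }
        }
    }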

And there are tools like Testim or Mabl, which we can use for managing flaky tests by applying machine learning to understand how the DOM changes over time. That’s been a game-changer in fast-moving agile environments. That being said, I don’t think full autonomy is here yet. Humans, as I keep explaining, are still essential for exploratory testing, user empathy, and risk-based thinking. But AI is definitely taking over the repetitive and maintenance-heavy parts, letting QA focus more on strategy and value. So, yes, autonomous testing is on the horizon, but it will be AI-augmented, not AI-only.

Snigdha

Awesome, perfect. Thank you so much for those insights. Any final thoughts from your end, or otherwise we are good to go from this side?

Sankalp

I just want to reiterate one point that has been misunderstood quite a few times in my experience: QA is not just about finding bugs. The first word of QA is quality. So, it’s about building quality, which eventually leads to building trust in a product. Whether you’re a seasoned professional or just starting out, our role is evolving fast, and that’s exciting. If we stay curious, stay connected to the users, and embrace new technologies like AI thoughtfully, QA can lead the way in shaping better user experiences. The future of testing is not going to be just technical. It’s strategic, and it needs to stay human. So, I think that’s what I want to close with.

Snigdha

Awesome. That’s a solid close. Thank you again so much for taking the time to be with us today, and thank you for sharing your knowledge, your thoughts, and your insights. I’m sure it’ll be very helpful to those listening.

Sankalp

It’s been my pleasure. Thank you so much for inviting me. It’s been great to be part of it. Thanks again.

Snigdha

Yeah, thank you, Sankalp. That was incredibly insightful. You’ve given us a lot to think about, from strategy and risk-based testing to the importance of collaboration in agile environments. We appreciate you sharing your journey and lessons. Until next time, keep testing and keep pushing quality forward.

