Automation Testing Interview Experience | Real-Time QA & SDET Interview Preparation 2026

RD Automation Learning

32m 41s · 4,580 words · ~23 min read
Auto-Generated

[0:03]How are you? Yeah, I'm good, sir. Thank you. How are you? Yeah, I'm good too. Thank you. So, can you tell us something about yourself? Sure, sir. My name is Sonal Sathi, and I'm from Pune. I have a total of 3.5 years of experience in software testing, split between manual and automation testing. I started my career as a QA at Anchantu Private Limited, where I mostly worked on web as well as mobile application testing. After that, I worked across different domains with two different organizations, Capgemini and Vodafone: the intellectual property domain, the healthcare domain, and most recently the telecom domain at Vodafone. My roles and responsibilities as a QA were to gather requirements from the client in the form of user stories, analyze them, and prepare high-level scenarios, followed by writing detailed test cases for manual as well as automation testing. I was responsible for executing those test cases manually and, where required, through automation. For automation, I used Selenium WebDriver with Java for UI automation. We used Maven as the project and build management tool, with the Page Object Model as the design pattern. We had a hybrid framework that used TestNG for execution and report generation, along with dependencies like ExtentReports. For version control we used Git, and for CI/CD we used Jenkins.

[2:13]For Jenkins, the lead was mostly involved in those processes, but I had basic knowledge of Jenkins as well. I was also part of Agile methodologies and attended various Agile ceremonies: retrospective meetings, client meetings, sprint planning meetings, internal team meetings, daily stand-ups, and sprint enhancement meetings. Whenever we found a bug, we were responsible for raising it using the Jira tool. Apart from this, I have done API testing as well, using Postman and the Rest Assured library, and for those same APIs I used JMeter for performance testing. Yes, sir, that's it from my side. Great. Now let me ask you one scenario-based question. So, let's say you have 10 scripts, you have 10 test cases.

[3:17]These scripts are failing randomly. Right? So what will be your approach to fixing those scripts? Okay. So test cases that fail randomly are known as flaky test cases, and there can be different reasons behind them. My approach will be to go and check the logs first, because for whatever test cases we have executed, we generate logs with Log4j if we have used it in our framework. Going through those logs will give me an idea of exactly which test case is failing.

[4:04]Another option is the TestNG test-output folder, where all the failing test cases are listed; from that as well I can see which test cases are failing. First of all, I will check the locators I have used in the scripts, because there is a chance some changes were made by the developer and the DOM has changed. So I will check the web elements first: what changes have been made, and whether the same locators are still valid or not. My second approach will be to look at the waiting mechanism, the synchronization I have put in place: whether I have used Thread.sleep() or only an implicit wait. Depending on the condition for those particular test cases, I will try to use an explicit wait, so that for that particular element I will get the response within the time period I have mentioned and according to the condition as well.

[5:14]And wherever a fluent wait is required, with a polling interval, suppose some test cases are failing randomly, or a periodic failure is happening, there I can use a fluent wait as well.
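The wait strategies discussed above, an explicit wait via WebDriverWait and a fluent wait via FluentWait, look roughly like this in Selenium 4 with Java. This is a minimal sketch; the timeout values and locators are illustrative, not from the interview:

```java
import java.time.Duration;
import org.openqa.selenium.By;
import org.openqa.selenium.NoSuchElementException;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.FluentWait;
import org.openqa.selenium.support.ui.Wait;
import org.openqa.selenium.support.ui.WebDriverWait;

public class WaitExamples {

    // Explicit wait: blocks until the condition is met, up to the timeout.
    static WebElement waitForVisible(WebDriver driver, By locator) {
        WebDriverWait wait = new WebDriverWait(driver, Duration.ofSeconds(10));
        return wait.until(ExpectedConditions.visibilityOfElementLocated(locator));
    }

    // Fluent wait: same idea, but the polling interval and ignored
    // exceptions are spelled out, which helps with elements that
    // appear intermittently (the "periodic failure" case above).
    static WebElement waitFluently(WebDriver driver, By locator) {
        Wait<WebDriver> wait = new FluentWait<>(driver)
                .withTimeout(Duration.ofSeconds(30))
                .pollingEvery(Duration.ofMillis(500))
                .ignoring(NoSuchElementException.class);
        return wait.until(d -> d.findElement(locator));
    }
}
```

Note that WebDriverWait actually extends FluentWait under the hood; FluentWait just exposes the polling interval and ignored exceptions directly.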

[5:31]Apart from this, I will check whether the test cases I executed in the local environment have that same environment in Jenkins, and I will check whether there is an issue in Jenkins itself after the pipelines were written.

[5:54]If it is an environmental issue, because of which the script is running slowly and then failing, I'll connect with the DevOps team and look at the Jenkins side, and I will also check what other changes have been made.

[6:17]If any new requirement has come in and a developer has made some changes, say in the UI, I will check all those things, and from those factors I will try to analyze and fix the issues. Okay, okay, that's a good thing. See, apart from this, you have to tell: first of all, you will identify whether all those test cases are failing at the same step, or at different steps. Okay. Are they failing locally, or are they failing in the pipeline? You mentioned that: you will connect with the DevOps team if the issue is on the pipeline side. Another thing you can check is whether they are failing only in parallel execution or also in sequential execution. Right? So all those things you can check. You told about logs, that was a good thing. These days you can also enable screenshots and videos in the logs, so you can check those too. You'll check the networking, you'll check the console errors, and you can check the exception types, what type of exception you are getting. That's also very critical. And synchronization you mentioned. You are mentioning fluent wait. See, fluent wait is not required that much; most situations are handled by explicit wait alone. That doesn't mean you will go for explicit wait all the time; there are times when you have to use implicit wait also. So in an interview they might ask you: which wait have you used in the script? Don't tell them you are using both implicit wait and explicit wait together, you'll get rejected. Tell them that depending on the situation, depending on the scenario, you'll take the decision. Right? You have to give diplomatic answers. Okay. Logs you covered, that was a good thing. Locators, yes, you can check the locators again: whether there is a locator issue or the UI elements have changed. Fine. So these things you mentioned.
Now let me change this question. Okay. Now let's say you have 10 test cases. Or let's say you have 100 test cases now. And whenever you are running your regression, so this is about regression, a regression suite will have hundreds or maybe thousands of test cases. So you're running your regression suite, and these are your test cases. Now you observe that after the 100th test script executes, your scripts start to fail. Right? So, how would you do the root cause analysis in this case? Okay. So every time, only after the 100th script, the test cases are failing? I'll tell you the pattern: 1 to 100, it is working fine, but let's say from the 114th script onwards the failures start. At times, when you are running on some other day, the failures start after the 108th script. Right? So after it crosses 100, the script failures start, and once they start, all the scripts start to fail. So, how would you do root cause analysis? Um, sir, there could be a chance that in the regression, among the builds we were getting from the developer, everything up to the 100th build was working fine, but... This is not about the builds. It's not about the builds. It's about scripts. No, like, for the scripts we had written for a particular build from the developers, for every version we were pushing and pulling the code from Git. So if it is failing after 100 test cases, there could be a chance that because of the pipeline written by the DevOps team, the scripts are running slowly and that is the reason the test cases are failing.

[11:04]And there could be a chance, since regression means we keep adding new things whenever they come from the client, that as new features come in, we add them to the previous suite and try to run it.

[11:21]So there could be a chance that up to 100 test cases everything was running fine, but beyond the 100 scripts we had developed, the features added by the developer caused some changes that affected the previous scripts. We had written those scripts earlier, but as new features came in, the previous scripts might have been affected. That could be the reason some failures are coming after 100 test cases, I feel.

[12:47]Yeah. I'll assist you here. See, 1 to 100 it is working fine, but at times the 114th, or the 108th, or maybe sometimes the 120th or 125th script is failing. Right? Now, it cannot be a DevOps issue. It can be, but see, the DevOps team is just taking your GitHub repository link and adding it to the pipeline to trigger the suite. If you are facing this kind of issue, it can happen that on the machine on which you are running the scripts, in scripts 1 to 100 you are saving a lot of screenshots or videos, and then what happens? A memory leakage issue occurs on your machine. That might happen.

[13:42]So it has nothing to do with the automation tester. It's an environment issue: they have to increase the RAM, or from the optimization side they should really check whether they are storing screenshots and videos after every single script. If a script is failing, then you can take the screenshot and video, you can add that condition. But if it is passing, then screenshots and videos are not required. Now, you told about pushing the scripts, and somebody in the team who is pushing the scripts doesn't have an idea about the previous scripts. In that case, you can also think about pull requests. What is a pull request? Before your code gets merged into the regression suite, somebody is there to review your code. Right? That's why we create the pull request. Do you create pull requests in your project? Uh, sir, I did not do it practically. Okay. So pull request, in short form, we say PR. An automation tester is assigned some test cases to automate. He will develop and test that particular thing: he will test it on the local machine, he will test it on CI/CD, then he will create a pull request. Okay. And that pull request will be reviewed by the seniors, the leads, the managers, the architects. They will have a hierarchy: at least two of the senior automation testers should approve your code, and then only will your code get pushed, or at least one person should approve your code. Those are the PR rules. And these days you have GitHub Copilot, right? Which can review the code as well. Right? So at least one level of code review is done; nobody will be pushing scripts just like that.
GitHub Copilot will easily identify issues. Let's say everyone is on holiday and there is a single automation tester, XYZ, who has to push the code; at least he can get it reviewed by GitHub Copilot. GitHub Copilot will identify the issues, or at least a reviewer will identify the issues. So nobody can simply push scripts without getting approval from their leads or managers, right? Once you get that approval, then and only then will your script get merged into that particular suite, which is your regression automation suite, right?
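The "screenshot only on failure" optimization mentioned above is commonly wired up with a TestNG listener. A minimal sketch, assuming a hypothetical DriverFactory class that exposes the currently running WebDriver; the real hook-up depends on how the framework manages driver instances:

```java
import java.io.File;
import org.openqa.selenium.OutputType;
import org.openqa.selenium.TakesScreenshot;
import org.openqa.selenium.WebDriver;
import org.testng.ITestListener;
import org.testng.ITestResult;

/**
 * Capture a screenshot only when a test fails, instead of after every
 * script, so the build machine's disk and memory are not filled up
 * by artifacts from passing tests.
 */
public class FailureScreenshotListener implements ITestListener {

    @Override
    public void onTestFailure(ITestResult result) {
        WebDriver driver = DriverFactory.getDriver(); // hypothetical accessor
        // TakesScreenshot is implemented by all the standard drivers.
        File shot = ((TakesScreenshot) driver).getScreenshotAs(OutputType.FILE);
        // Copy 'shot' into the report folder, named after the failed test.
        System.out.println("Saved failure screenshot for " + result.getName()
                + " at " + shot.getAbsolutePath());
    }
}
```

The listener would be registered either in testng.xml under a `<listeners>` entry or with the `@Listeners` annotation on the test class.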

[16:21]So here it is an issue of memory leakage: the CPU usage might be increasing, it might hit 100%, and maybe the machine or the VM chosen for running the automation doesn't have enough configuration, so you need a machine with more configuration in that case. And if there is a need to optimize the suite as well, you have to look into why such things are happening, because generally, to keep your automation framework scalable, you might deploy batches of 100 tests on different VMs to keep them up and running. And one more thing: if one script fails, why do the other scripts fail too? Right? Maybe they have used hard asserts in them. Hard assertions. Right? So that's what you have to tell. You can mention the hard assertion point here. Okay, now, fine. So, can you tell me what are your roles and responsibilities in your project? Uh, yes, sir. My roles and responsibilities were: whenever we got any requirement from the client, I was analyzing it, writing and preparing the high-level scenarios. After preparing the scenarios, I was analyzing what could be automated, whether anything there was a candidate for automation or whether manual testing alone would be enough. I would write the manual test cases first and get them reviewed by a senior person, the lead or the manager. And simultaneously, in the stand-ups, I was trying to understand whether automation was required for that sprint, or whether there was any dependency on another sprint, so that we should wait before starting to write the automation test cases.
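An aside on the hard-assertion point above: a hard assert (e.g. TestNG's Assert.assertEquals) throws immediately and aborts the test at the first failure, while TestNG's SoftAssert collects failures and reports them together only when assertAll() is called. The idea can be sketched in plain Java; SoftCheck is a hypothetical stand-in for illustration, not the TestNG class:

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Plain-Java sketch of TestNG's SoftAssert idea: record failures and
 * keep executing, instead of aborting at the first failed check the
 * way a hard assert does.
 */
class SoftCheck {
    private final List<String> failures = new ArrayList<>();

    void assertEquals(Object actual, Object expected, String message) {
        if (actual == null ? expected != null : !actual.equals(expected)) {
            // Record the failure and keep going: later steps still run.
            failures.add(message + ": expected " + expected + " but got " + actual);
        }
    }

    /** Call once at the end, like SoftAssert.assertAll(). */
    void assertAll() {
        if (!failures.isEmpty()) {
            throw new AssertionError(failures.size() + " check(s) failed: " + failures);
        }
    }

    List<String> failures() { return failures; }
}
```

With this pattern, an early failed check does not cascade into skipping or failing every later step of the test; all mismatches surface together at assertAll().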

[18:14]Accordingly, I was involved in the daily stand-ups and trying to understand things. After writing those test cases and getting them reviewed, whenever we got the build from the developer, we went through it, and according to the acceptance criteria I executed those test cases on the QA environment. Whenever we found any bug, we raised it in Jira. And if there was any dependency, we connected with the developers and tried to get it fixed as early as possible.

[18:55]Accordingly, I was giving daily reports in the daily stand-ups, and at the end of the day we would send the manager a summary of everything we had done that day. We prepared the reports for those executions and did the sign-offs whenever a sprint finished. After the sprint finished, we were involved in the retrospective meetings as well, covering whatever enhancements or improvements were required, or any changes required from the client side. I was part of the client meetings too, where the product owner and others were present; they analyzed and optimized things, and accordingly we made the changes. Apart from this, whenever a junior person was stuck in their work, I tried to help them and assist them with their further tasks. And if any time remained, I tried to learn new things as they came my way. Suppose there is a future requirement that will be needed in the second, third, or fourth sprint and I'm working in the first sprint right now; whenever there was time, I tried to learn new tools for that requirement. Suppose I was lagging somewhere in Jenkins knowledge, or there were new things I had to add there; I tried to gain that knowledge within that time. So that was, you can say, every day of my life. Great, great. Do you use AI for testing? Uh, ChatGPT I have gone through. Whenever there was a requirement I had to understand deeply, I learned how to give the correct prompt to ChatGPT and how to get things from it. So for learning purposes I have used ChatGPT, but other than that, as an AI tool, in Jira we had Aipro.
So there we could go and try to understand, according to the requirement, how we could develop our test cases. I was also comparing my test cases: whether what I have written covers everything, or whether any edge cases are remaining that I should cover as well. So I was trying to get that from the AI. Those are the things I have come across, and Copilot, as you told me, it was the free version I was using. Yeah, yes. Great. See, AI you can use for generating test cases, edge cases, corner cases, negative scenarios. You can give the requirement to the AI and it will develop the test scenarios for you, but then you have to fine-tune them at times; they might not match the kind of testing you have to do. So you fine-tune them. You can get an idea; it can be a very good assistant. You can get an idea, but then you have to mature the cases and use them properly in your test suite. Or, yes, sir, right, sir, your test suite, right, regression suite. Okay. What are data providers in TestNG? Uh, data providers come in whenever parallel or data-driven testing has to be done. As we know, we have testng.xml in TestNG, where we can set the parallel attribute to methods and provide the thread count, and accordingly it will run all our suites, classes, or tests in parallel mode as per our requirement. But we can do the same thing using a data provider as well. If we use the @DataProvider annotation and write the utility for it, whatever script is required, we can provide all the values, suppose username and password, or different browsers like Chrome and Firefox, different drivers, and perform parallel testing with those.

[23:17]So I can add that inside the data provider method, and then on my @Test annotation I will add the dataProvider attribute, and I can set the parallel flag on the provider. If I provide that, then all the test cases I have configured in testng.xml will run in parallel.
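For reference, a @DataProvider primarily feeds data rows into a test method (data-driven testing); its parallel attribute additionally runs those rows on parallel threads. A minimal sketch, with illustrative class names and test data:

```java
import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

public class LoginDataDrivenTest {

    // Each inner array is one invocation of the test method.
    // parallel = true makes TestNG run the rows on parallel threads.
    @DataProvider(name = "credentials", parallel = true)
    public Object[][] credentials() {
        return new Object[][] {
            {"user1", "pass1"},
            {"user2", "pass2"},
            {"admin", "secret"},
        };
    }

    // The same test body runs once per data row, with the row's
    // values bound to the method parameters in order.
    @Test(dataProvider = "credentials")
    public void loginTest(String username, String password) {
        System.out.println("Logging in as " + username);
    }
}
```

The thread pool used for parallel data providers is sized by the data-provider-thread-count attribute on the `<suite>` tag in testng.xml (10 by default), separately from the parallel/thread-count settings that control suite-, class-, and method-level parallelism.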

[23:41]You are automating an application and you need to wait in Selenium until the number of opened tabs is four. So how would you do this? Can you write the code for this question? I want to understand it first: how would you wait in Selenium until the number of opened tabs is four? Uh, will you please elaborate a little bit, sir? Like, "open tabs is four" means... You have to wait. First one tab will open, then a second tab will open, then a third tab will open, then a fourth tab will open. You have to wait till the number of open tabs is four. You cannot use Thread.sleep(), because at times the fourth tab might take one second to open, at times it might take two seconds, at times three seconds. For every tab, that means it should wait, and then it should land on the fourth? Not for every tab does it have to wait. It has to wait till the number of open tabs is equal to four.

Okay. So here I can use an explicit wait, as per the condition. Yeah, can you share your screen? Yes, sir. I'm sharing my screen.

[24:57]Is my screen visible, sir? Uh, yes. Can you maximize? Yeah.

[25:10]Yeah. So first I will create the driver instance and call the URL, whatever it is.

[25:31]And once I call it, I'll do the other things: I will maximize the screen, and then, as per the requirement, if it is slow, I will put the WebDriverWait here.

[25:54]Using the WebDriverWait class, I'll use this explicit wait, and here I will pass my driver instance. Using this object, I'll call until, and here I will pass the locator for that window.

[26:52]Whatever it will be. Here I can just navigate first, like here.

[27:10]And I will pass the locator here for that particular window.

[27:23]And then I'll put one condition here: the current window, and getting the locator, like this, we'll try to... What locator are you trying to get here? Which locator? Uh, the current window locator, where... the fourth window locator.

[27:51]Okay, see, let me share my screen.

[27:59]See, you'll use this WebDriverWait. You were correct there, right? WebDriverWait wait = new WebDriverWait(driver, Duration.ofSeconds(10)); wait.until(ExpectedConditions.numberOfWindowsToBe(4)); Simple. You can keep the count. If it is 1, 2, 3, you don't need that. The question is, you have to wait till the number of open tabs is four, so just pass 4 to numberOfWindowsToBe, that's it.

[28:38]Yes, sir, right. Okay. Yeah, yeah, so this is Yeah, this is good.

[28:49]Yes, sir, right. And with the driver I have to pass the duration as well, right, sir? Which duration?

[28:59]Uh, in the WebDriverWait constructor we created, I'm passing only the driver right now. Along with that, I have to pass the duration as well, right? For how long I want to wait; I forgot to put that. Yeah, yeah, that is the Duration.ofSeconds that you have to put, and apart from that, you put how many windows should be open. Right? So those things you can put. Yes, sir. Right.
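Put together, the tab-count wait discussed above looks like this. A sketch only: the URL is a placeholder, and it assumes something on the page opens the extra tabs within the timeout:

```java
import java.time.Duration;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class WaitForFourTabs {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        driver.get("https://example.com"); // placeholder page that opens extra tabs

        // Poll until exactly four windows/tabs are open, up to 10 seconds.
        // This returns as soon as the count is reached, unlike Thread.sleep.
        WebDriverWait wait = new WebDriverWait(driver, Duration.ofSeconds(10));
        wait.until(ExpectedConditions.numberOfWindowsToBe(4));

        System.out.println("Open tabs: " + driver.getWindowHandles().size());
        driver.quit();
    }
}
```

If the count never reaches four within the timeout, until() throws a TimeoutException, which is exactly the failure signal you want in a script instead of a silent Thread.sleep overrun.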

[29:30]Okay. What are the challenges that you have faced while working with an automation framework? Uh, so one challenge was that whenever we got a new build from the developer, analyzing what could be automated was the difficult part. In the acceptance criteria, the developer has directly written what is required, but as a tester, we have to analyze what can be automated out of that. To understand those things, previously we were taking a little time to first go through the flows manually, check how they work, and only after becoming familiar with the flows would we identify the things that repeat again and again, which we can automate, and where data-driven testing is required, where we have to pass data multiple times. For that we have to write the scripts, and that was time-consuming, and the lead would come to us and ask why it had not been automated first.

[30:45]So these things were coming into the picture, and because this situation should not arise, as testers we had to be more precise early on, in every meeting we were part of, and try to understand those things in advance. Other than that, sometimes there were shadow DOMs, or certain elements, or Ajax calls happening, because of which we could not see what exactly was happening in the UI.

[32:10]Because the script was very fast, we had to go into the logs and check what exactly was happening, where it was failing, and what the reason behind it was. So these were the challenges I faced while doing automation. And whenever it went to the Jenkins side, the pipeline and everything was developed by the DevOps team; we just had to push our code to GitHub.
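On the shadow DOM challenge mentioned above: since Selenium 4, a shadow root can be entered with getShadowRoot(), after which lookups run inside that subtree instead of the main document. A minimal sketch with hypothetical selectors:

```java
import org.openqa.selenium.By;
import org.openqa.selenium.SearchContext;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;

public class ShadowDomExample {

    // Elements inside a shadow DOM are not reachable with a normal
    // findElement from the document; you first locate the host element
    // and then step into its shadow root.
    static WebElement findInShadow(WebDriver driver) {
        WebElement host = driver.findElement(By.cssSelector("my-widget")); // hypothetical host
        SearchContext shadow = host.getShadowRoot();
        // CSS selectors are the reliable way to locate inside a shadow root.
        return shadow.findElement(By.cssSelector("input#inner-field")); // hypothetical
    }
}
```

Note that getShadowRoot() only reaches open shadow roots; a closed shadow root is not accessible this way.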
