
3 posts tagged with "selenium"


Mark · 11 min read

Those with testing experience are likely familiar with tools like Selenium and Playwright. Although Selenium has its strengths, we generally recommend that our users use Playwright.

In this article, we’ll look at:

  • Why Playwright is so much faster
  • Advantages of migrating
  • How each one handles waiting
  • Header control differences
  • Considerations before migrating
  • Key differences to understand
  • The step-by-step conversion process
  • Mapping popular methods

By the end, you will understand the fundamental differences in approaches between the two libraries and how to approach converting the code.

Reasons for Playwright’s Speed

  • Browser Contexts and Pages: Playwright can run parallel tests in isolated browser contexts inside a single browser instance. Selenium is slower here: it has to open a new browser window for each new test (see the sketch after this list).
  • Native Automation Capabilities: Playwright talks to browsers directly through their native APIs and protocols, so automation takes a shorter path. Selenium goes through the WebDriver protocol, which adds overhead and increases the execution time of every operation.
  • Handling of Modern Web Applications: Playwright is designed for optimized handling of modern web applications that run on complex JavaScript frameworks and perform asynchronous operations. It offers a more efficient experience with AJAX and Single Page Applications (SPAs).
  • Built-in Waits: Playwright automatically waits for elements to be ready, reducing the need for explicit wait or sleep commands. Selenium relies on explicit waits regularly, which slows down test execution; and if you skip the explicit waits, the tests become unstable.
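To illustrate the browser-contexts point, here is a minimal sketch using Playwright's standard Node.js API (the URLs are just examples) that runs two isolated sessions inside one browser process:

const { chromium } = require('playwright');

(async () => {
  const browser = await chromium.launch();

  // Each context is an isolated "incognito" session: separate cookies,
  // storage, and cache, but all inside the same browser process.
  const [ctxA, ctxB] = await Promise.all([
    browser.newContext(),
    browser.newContext(),
  ]);

  // The two pages run in parallel without launching a second browser.
  const [pageA, pageB] = await Promise.all([ctxA.newPage(), ctxB.newPage()]);
  await Promise.all([
    pageA.goto('https://example.com'),
    pageB.goto('https://example.org'),
  ]);

  console.log(await pageA.title(), '|', await pageB.title());
  await browser.close();
})();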

Advantages of Migrating from Selenium to Playwright

  • Improved Performance and Efficiency: Playwright enables faster and more efficient test execution. At the same time, it better allocates and utilizes resources, resulting in faster development and testing cycles.
  • Enhanced Features: Playwright provides access to multiple browser engines, so a single API is enough to run tests in different browsers. Playwright also supports headless browsers and integrates well with continuous integration pipelines.
  • Better Handling of Modern Web Technologies: Playwright is optimized to work with modern JavaScript frameworks and complex front-end technologies.
  • Simplified Test Scripts: Playwright's API is intuitive and great for developing simpler, more maintainable test scripts.
  • Advanced Features: Playwright supports various features such as network interception, geolocation testing, and mobile device emulation. At the same time, an intuitive interface and clear scripts make using these functions easier and more accessible.

Differences in waiting for selectors when migrating from Selenium to Playwright

The main difference between these frameworks is how they wait for selectors or elements to appear before performing actions.

Differences between Wait with Selenium and Playwright

Selenium struggles with dynamic content, such as AJAX-driven elements or elements that change state multiple times, so solving a problem often requires combining several waiting strategies.

Waiting with Selenium

Selenium supports three basic waiting strategies: implicit, explicit, and fluent.

1. With an implicit wait, Selenium polls for an element and throws an exception if it has not appeared after a set period of time.
2. An explicit wait is defined per element and requires writing additional code for each condition.
3. With a fluent wait, you choose the maximum time to wait for a condition and how often it is checked, and you can configure it to ignore certain types of exceptions while polling.
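For concreteness, here is a sketch of those strategies with the selenium-webdriver Node.js bindings (the element ID and the waiting values are hypothetical):

const { Builder, By, until } = require('selenium-webdriver');

(async () => {
  const driver = await new Builder().forBrowser('chrome').build();
  try {
    await driver.get('https://example.com');

    // Implicit wait: applies to every findElement call on this session.
    await driver.manage().setTimeouts({ implicit: 5000 });

    // Explicit wait: written per element, per condition.
    const button = await driver.wait(
      until.elementLocated(By.id('submit')), // hypothetical element
      10000
    );

    // Fluent-style wait: custom condition, timeout, message, poll interval.
    await driver.wait(
      async () => button.isDisplayed(),
      10000,
      'Button never became visible',
      500 // check every 500 ms
    );

    await button.click();
  } finally {
    await driver.quit();
  }
})();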

Waiting with Playwright

With Playwright, you can rely on much simpler waiting behavior:

1. Playwright automatically waits until elements are ready to interact with before executing an action.
2. You don't have to write extra code for each element while waiting for it to become interactable.
3. Playwright locators automatically wait until the elements they refer to become available, which makes scripts more concise.
4. Playwright interacts effectively with web applications whose elements load asynchronously or depend on JavaScript execution.
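The same interaction in Playwright needs no wait code at all; this sketch assumes a page with a "Submit" button (hypothetical):

const { chromium } = require('playwright');

(async () => {
  const browser = await chromium.launch();
  const page = await browser.newPage();
  await page.goto('https://example.com');

  // click() auto-waits for the element to be attached, visible,
  // stable, and enabled before clicking -- no explicit wait needed.
  await page.getByRole('button', { name: 'Submit' }).click();

  await browser.close();
})();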

Impact on test writing and reliability

  • Selenium: Writing tests in Selenium requires a good understanding of the various wait conditions and how to use them effectively. This makes test scripts more complex and longer, and if the chosen waiting strategies do not match the actual timing of the application, tests become unstable.
  • Playwright: Since waiting is automated, there is no need to write additional code for each condition. The code is simpler and shorter, which reduces the likelihood of synchronization and element-visibility errors.

Thus, Playwright's automatic waiting and simpler scripts make it a more reliable and efficient way to handle dynamically loaded content.

Network manipulation and Header control comparison

Header management, especially when dealing with proxies and authentication, is one aspect where Selenium and Playwright differ. These differences stem from their underlying architectures and how they interact with browsers.

Limitations of Selenium and Headers

  • Limited Manipulation of Network Traffic: Selenium's main protocol is WebDriver, and relying on it limits the framework's ability to manipulate network traffic. In particular, Selenium makes it difficult to set custom headers because it gives you no way to modify network requests and responses.
  • Proxy Authentication Challenges: Because Selenium cannot set custom headers, its users cannot work with some types of authenticated proxy servers or run scripts that require header manipulation.
  • Workarounds: Developers are forced to turn to external tools or browser extensions to compensate for these limitations.

Playwright and Advanced Header Control

  • Advanced Network Interception and Modification: Unlike Selenium, Playwright lets you intercept and modify network requests and responses. Specifically, a developer can change headers before a request is sent to the server or a response is received from it (see the sketch after this list).
  • Authenticating Proxies via Headers: Unlike Selenium, Playwright lets you customize authentication headers for proxies, which is why developers working behind authenticated proxy servers choose Playwright.
  • Built-in Support for Different Scenarios: The Playwright API is self-contained and provides extensive network settings, so developers do not have to look for external tools or additional browser extensions.
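To make this concrete, here is a minimal sketch using standard Playwright APIs; the proxy address, credentials, and header values are placeholders:

const { chromium } = require('playwright');

(async () => {
  // Proxy credentials can be passed directly at launch.
  const browser = await chromium.launch({
    proxy: {
      server: 'http://myproxy.example:3128', // placeholder address
      username: 'user',
      password: 'secret',
    },
  });

  // Extra headers applied to every request from this context.
  const context = await browser.newContext({
    extraHTTPHeaders: { 'X-Custom-Header': 'my-value' },
  });

  const page = await context.newPage();

  // Intercept requests and rewrite headers on the fly.
  await page.route('**/*', (route) => {
    const headers = { ...route.request().headers(), 'X-Trace-Id': 'abc123' };
    route.continue({ headers });
  });

  await page.goto('https://example.com');
  await browser.close();
})();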

Impact on Testing Capabilities

  • Selenium may require additional tools and browser extensions because it provides limited ability to manipulate network traffic, particularly the inability to set custom headers.
  • Playwright provides extensive and efficient options for customizing network handling. It supports custom headers and proxy authentication, making it the more versatile framework.

Considerations Before Migrating from Selenium to Playwright

The transition from Selenium to Playwright should be a deliberate decision. Playwright offers advanced functionality but may be overkill for some web applications. Here are a few things to consider before migrating.

  • Learning Curve: Moving from one framework to another always takes time to learn the differences and functionality, and switching from Selenium to Playwright is no exception. Developers will need time to learn the new API.
  • Codebase Overhaul: Scripts adapted for Selenium will have to be rewritten, taking into account the capabilities of Playwright. The process will take some time.
  • Compatibility and Integration: Before moving to a new framework, it is better to make sure that it is suitable for your application. Playwright must meet the requirements and integrate smoothly with the technology stack and CI/CD pipeline.

You’re migrating from Selenium to Playwright, but where to start?

Rewriting Selenium scripts for Playwright involves several steps because of the differences in syntax and methods between the two tools. Although there are AI converters such as Rayrun and The Python Code, you should always carefully check the resulting code, and that requires understanding the differences, processes, and trade-offs between the two platforms.

Understanding Key Differences

1. Syntax and API Differences: Selenium and Playwright take different approaches to browser automation, so first trace the differences in their APIs and syntax. Approach this thoroughly and the switch to the new framework will go smoothly.
2. Async/Await Pattern: Playwright's JavaScript API is asynchronous, which means your scripts will need to use the async/await pattern.
3. Browser Contexts and Pages: Selenium and Playwright handle browser windows and tabs differently; give this aspect special attention.
4. Selector Engine: Playwright supports text, CSS, and XPath selectors, making it more convenient and efficient for interacting with dynamic content.

Step-by-Step Conversion Process

1. Set Up Playwright: Playwright is a Node.js library, so install Node.js first; you will need it for stable operation. Then install Playwright and set up your environment.
2. Create a Basic Playwright Script: To get familiar with Playwright's basic structure and commands, write a simple script that opens a browser, navigates to a page, and performs a few actions.
3. Map Selenium Commands to Playwright: Adapt the scripts you wrote for Selenium by identifying each command and finding its equivalent in the new framework.
4. Handle Waits and Asynchrony: Adapt your scripts to Playwright's asynchronous API by replacing Selenium's explicit waits with the new framework's automatic waiting.
5. Implement Advanced Features: If your scripts use advanced features such as file uploads, learn how Playwright handles those scenarios.
6. Run and Debug: Run the converted Playwright scripts, then track down and fix the problems that surface during migration, particularly synchronization issues and broken element selectors.

A before/after sketch of steps 3 and 4 follows.
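Here is a hedged before/after sketch: a hypothetical login flow in selenium-webdriver and one way it might look after conversion to Playwright (the URL, selectors, and credentials are placeholders):

// Before: Selenium (selenium-webdriver)
const { Builder, By, until } = require('selenium-webdriver');

async function seleniumLogin() {
  const driver = await new Builder().forBrowser('chrome').build();
  try {
    await driver.get('https://example.com/login');
    await driver.findElement(By.id('user')).sendKeys('peter');
    await driver.findElement(By.id('pass')).sendKeys('hunter2');
    await driver.findElement(By.css('button[type=submit]')).click();
    // Explicit wait for the post-login element.
    await driver.wait(until.elementLocated(By.id('welcome')), 10000);
  } finally {
    await driver.quit();
  }
}

// After: Playwright -- auto-waiting replaces most explicit waits.
const { chromium } = require('playwright');

async function playwrightLogin() {
  const browser = await chromium.launch();
  const page = await browser.newPage();
  await page.goto('https://example.com/login');
  await page.fill('#user', 'peter');
  await page.fill('#pass', 'hunter2');
  await page.click('button[type=submit]'); // auto-waits for the button
  await page.waitForSelector('#welcome'); // still available when needed
  await browser.close();
}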

Mapping Selenium Commands to Playwright

| Action Description | Selenium Method | Playwright Method |
| --- | --- | --- |
| Click on an element | `const clickable = await driver.findElement(By.id('clickable'));`<br>`await driver.actions().move({ origin: clickable }).pause(1000).press().pause(1000).sendKeys('abc').perform();` | `await page.getByRole('button').click();` |
| Double click on an element | Similar to click, but use the `doubleClick()` method in the Selenium actions chain. | `await page.getByText('Item').dblclick();` |
| Right click on an element | Similar to click, but specify the right button in the Selenium actions chain. | `await page.getByText('Item').click({ button: 'right' });` |
| Shift-click on an element | Similar to click, but add a Shift key action in the Selenium actions chain. | `await page.getByText('Item').click({ modifiers: ['Shift'] });` |
| Hover over an element | Use the `moveToElement()` method in the Selenium actions chain. | `await page.getByText('Item').hover();` |
| Fill a text input | Use the `sendKeys()` method on the element found in Selenium. | `await page.getByRole('textbox').fill('Peter');` |
| Check/uncheck checkboxes and radio buttons | Use `click()` on the element in Selenium for checking; for unchecking, conditionally `click()` if checked. | `await page.getByLabel('I agree to the terms above').check();` |
| Select options in a dropdown | Use the `Select` class in Selenium with methods like `selectByVisibleText()` or `selectByValue()`. | `await page.getByLabel('Choose a color').selectOption('blue');` |
| Type characters | Use `sendKeys()` in Selenium. | `await page.locator('#area').pressSequentially('Hello World!');` |
| Upload files | Use `sendKeys()` on the file input element in Selenium with the file path. | `await page.getByLabel('Upload file').setInputFiles(path.join(__dirname, 'myfile.pdf'));` |
| Focus on an element | Use `sendKeys(Keys.TAB)` in Selenium to navigate to the element. | `await page.getByLabel('Password').focus();` |
| Drag and drop | Use the `dragAndDrop()` method in the Selenium actions chain. | `await page.locator('#item-to-be-dragged').dragTo(page.locator('#item-to-drop-at'));` |

Tools and Resources

  • Documentation and Guides: Use the official documentation and guides from the Playwright community. They include sections for users migrating to Playwright from older frameworks, including Selenium.
  • Playwright Test Runner: Try Playwright's own test runner, Playwright Test. It is optimized for Playwright scripts and improves the overall testing experience.
  • Refactoring Tools: To refactor and debug your code, you can use helper tools such as Visual Studio Code.

Considerations on converting code from Selenium to Playwright

  • No Direct Conversion: You will have to adapt Selenium scripts to Playwright yourself; no tool can reliably convert entire scripts automatically.
  • Learning Curve: There may be a learning curve, especially regarding the asynchronous nature of Playwright and its different approach to browser automation.

Deploying playwright-core and separate browsers

Out of the box, both frameworks run browsers locally: Selenium WebDriver drives a browser through a driver binary, and Playwright downloads its own browser builds. We recommend hosting your scripts and browsers on separate servers instead; this improves security and load balancing. For Playwright you can do this using playwright-core, a variant of the library that ships without bundled browsers and lets you connect to and manage browsers yourself. Another option is to use our pool of hosted headless browsers, which are ready to run any of your scenarios.
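A minimal sketch of the playwright-core approach, assuming the remote machine exposes a CDP-compatible browser endpoint (the WebSocket URL is a placeholder):

// playwright-core ships the API without any bundled browsers.
const { chromium } = require('playwright-core');

(async () => {
  // Connect to a browser hosted on a separate server over CDP.
  const browser = await chromium.connectOverCDP('ws://browsers.internal:3000'); // placeholder
  const page = await browser.newPage();
  await page.goto('https://example.com');
  console.log(await page.title());
  await browser.close();
})();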

Conclusion

Converting Selenium scripts to Playwright is a manual process that takes time and an understanding of the differences between the two platforms. However, Playwright's modern approach and feature set let you streamline network interactions and improve the performance of your test suites.

Mark · 11 min read

Emails, files, images, documents, and social networks are all sources of data, and modern businesses spend huge amounts of money on... manually extracting this data and analyzing it. Meanwhile, modern technologies make it possible to automate this process completely. Intelligent data extraction is a trained algorithm that sifts through data: it can instantly prepare a summary report, compile numbers, catalog personal information, and much more. In this article, we will explain what data extraction is, how to implement it, and which industries it is already revolutionizing.

What is Intelligent Data Extraction?

Intelligent data extraction is the result of combining artificial intelligence and machine learning. AI capabilities allow it to thoroughly examine varied sources: scanned images, electronic file formats, articles on websites, threads and photos on social networks, and so on. After examining a source, the intelligent data extraction system selects the nuggets of information it needs to improve workflows or answer user queries.

Intelligent data extraction systems are used in many areas, from finance to medicine. They identify patterns in documentation, analyze customer feedback, promptly provide information on request, and so on.

Intelligent data extraction also helps reduce the risk of human error. Imagine a mountain of documents, how much time an employee will spend studying it, and how tired he will get; because of fatigue, he may miss something important. A well-tuned intelligent data extraction system will spend far less time on the analysis, and, crucially, it won't get tired and won't miss anything. This achieves several things:

  • Increased workflow efficiency
  • Reduced number of errors
  • Significant cost savings

How Intelligent Data Extraction Works

We have already learned what data extraction is. So let's figure out how it works! We'll follow all the steps from start to finish!

Step #1: Receive data

The setup begins with selecting the information source. This could be anything: scanned images, various electronic file formats, articles on websites, threads and photos on social networks, and so on. But let's take a specific example: imagine that a typical bank needs to onboard a new client. The sources of raw data in this case will be digital forms, scanned documents, transaction histories, and more, coming from channels such as online applications, email, and mobile banking platforms.

Step #2: Preprocessing

Select the relevant sources and discard the ones whose data seems redundant. You may need to convert scanned forms into a more convenient digital format; doing so ensures data consistency.

Step #3: Training the Algorithm

Machine learning models "learn" to interact with the data. By analyzing sources of information, they learn to recognize patterns and relationships. Returning to our bank example: to train the algorithms, the bank can provide past loan applications from its database. The algorithm will study these applications and learn to recognize data fields such as "Name" and "Annual Income".

Step #4: Extraction

At this stage, the algorithms extract the relevant data points. In the bank example, the trained algorithm will extract personal data from the application form or amounts from the transaction history. Note that the algorithm can process huge volumes of data in a short time without losing extraction accuracy.

Step #5: Validation

Trust, but verify. Before accepting an algorithm as fully trained, check how successfully and efficiently it interacts with the data. At this stage, validation helps you confirm the correctness of the extracted data. In the bank's case, rechecking means validating the extracted data against predefined rules.

Step #6: Continuous Improvement

Algorithms learn and improve as they interact with data, so the accuracy and reliability of their work increase with each processed request. Say a bank implemented data extraction into its workflow and, some time later, introduced new conditions. No problem: the trained algorithm adapts to them with impressive speed.

How effective is Intelligent Data Extraction?

Businesses waste money, time, and effort extracting data manually, while modern technologies and trained algorithms are far more effective. Let's compare point by point:

| Feature | Manual Data Extraction | Intelligent Data Extraction |
| --- | --- | --- |
| Time consumption | High (hours to days) | Minimal (minutes to hours) |
| Error rate | Prone to human error | Significantly reduced |
| Cost | Higher (labor-intensive) | Lower (automation savings) |
| Scalability | Limited | Highly scalable |
| Data consistency & quality | Variable | Consistent and high quality |
| Adaptability | Rigid processes | Adapts to varying data forms |

Applications of Intelligent Data Extraction

Intelligent data extraction helps you automate and improve the handling of user requests, or your entire workflow. Let's look at the industries that have benefited most from this innovation:

1. Healthcare

In healthcare, patients' well-being depends on precision. Intelligent data extraction simplifies tasks such as managing patient records and transferring information from handwritten prescriptions to electronic medical records. On a busy day with a large influx of patients, a doctor can make a request and an automated algorithm will fulfill it: for example, it will save test data and attach it to the patient's medical record. The doctor can leave the task to the system and return to work, confident that the request will be executed accurately and the data saved in the right place.

Beyond administrative tasks, the data extraction system can take on research functions; for example, it can be entrusted with reviewing medical literature and summarizing it. All this makes the work of medical institutions more efficient, and all thanks to modern technology: the intelligent data extraction system!

2. Surveillance tools

Surveillance tools collect data for digital systems. This data is then processed by application performance monitoring tools. And in this case, intelligent data extraction will provide significant help:

  • Log management: A trained algorithm will reduce the volume of log files several times over. It will identify inconsistencies and patterns that point to system errors, so finding an error takes not a couple of hours but a couple of seconds!
  • Optimization of metrics: From a huge volume of data, the algorithm will identify relevant metrics that will give a clear picture of the performance of the digital system. Well, then you can carry out timely optimization!
  • Real-time alerts: The algorithm can detect critical incidents and trigger immediate alerts. Thanks to this, the reaction will be quick, and the digital system will be protected from a potential threat.
  • Analysis of user behavior: The algorithm studies user requests, based on which it can suggest improvements to the interface or system responsiveness. Well, the user experience will become more pleasant!

3. Legal services

In the legal field, meticulousness is important, and accurate data extraction improves the delivery of legal services. Here's exactly how:

  • Document Review: An automatic algorithm quickly scans the entire volume of data and then extracts the relevant articles, dates, or names of the parties involved. After reviewing any document, the algorithm will identify key points and provide a summary report.
  • Contract analysis: Having studied the conditions specified in various documents, the algorithm will identify possible risks and options for revising any clauses. The algorithm will transmit the information to the specialist, and he, in turn, will be able to advise the client.
  • Case research: To strategize a case, you need to find a precedent. The algorithm can do this much faster than a human, crawling through a huge amount of data in moments.
  • Client data management: The algorithm can study clients’ personal files, catalog them, and update and supplement them. So all important information will be available at the right time.

4. Accounting and taxation

Come tax season, data extraction can help accountants easily sort through countless stacks of receipts, financial statements, and transaction records. The algorithm will identify the most important points and present them in the form of a report, and the accountant will be able to save time and effort. Intelligent data extraction will allow you to quickly reconcile records, identify inconsistencies, and make all necessary payments in a timely manner. Additionally, the trained algorithm can be used to analyze data from previous financial years. It will quickly identify deviations and shortcomings and help correct them in a timely manner.

5. Banking and finance

A bank is inundated at all hours with inquiries, applications, and demands for immediate consultation. Handling this flow requires accuracy and quick reactions, and intelligent data extraction helps with both. When a client contacts the bank and provides their data, the algorithm instantly analyzes the most important points. For example, approving a loan application requires verifying the client's solvency, so the algorithm will check the client's credit score, employment records, and asset valuations. The system can also notice unusual activity in the client's personal account and report it immediately, protecting the client from fraud. Additionally, a trained algorithm is useful for analyzing market reports, quickly identifying stock trends and key economic signals.

Techniques for Intelligent Data Extraction

For intelligent data extraction (IDE) to reach its full potential, you need to ensure that the data is not only accurate but also useful. To do this, you should use several methods that will help filter the data and protect it:

  • Quality over quantity: Determine what data you and your customers need. Load only relevant and up-to-date data. The total amount of data will be reduced, but the remaining data will be reliable, and their analysis will give extremely accurate results.
  • Update your algorithms regularly: Algorithms need constant training and updating; otherwise, they will become outdated and useless. Provide algorithms with relevant data on which they can improve.
  • Data Verification: Data verification ensures that the data is accurate. However, it is best to carry out the verification in two stages. Primary and secondary verification will help identify inconsistencies and errors, if any. This way, you will save yourself from possible problems and risks.
  • Structured data storage: Organize the data you receive so the algorithm can retrieve it faster in response to your requests. If the data is not systematized, the algorithm has to spend extra time searching through and analyzing it.
  • Keep your data private: Nothing is more important than protecting your data! This includes personal or confidential information about you and your customers that scammers can use. Therefore, make sure that this type of data is encrypted.
  • Feedback Loop: Give your users the opportunity to provide feedback. Then they can alert you if your data is inaccurate or out of date. Ultimately, this will show them that you care about them and that their opinions are important to you.
  • Integration with other systems: Check if your IDE system integrates with other business systems. If the integration is broken, there will be problems with data transfer and compatibility.
  • Regular audits: Don't stop at two-step verification before loading data. Extracted data should also be checked regularly for accuracy and consistency while it is in use. This way, you can identify and fix system problems early.

Want to Use Intelligent Data Extraction?

Intelligent data extraction helps you explore raw sources of information and turn them into tools that improve workflows and user experiences. Before you implement trained algorithms, however, determine exactly how they will benefit you and which problems they will help solve. Intelligent data extraction is constantly improving: it adjusts to new conditions and adapts to your requirements. Modern business no longer has to collect data and then sink into long, very long manual analysis... No, modern business can use the full potential of its data for successful operations!

Mark · 5 min read

Is Selenium worth using? Or is it better to use a more modern library like Puppeteer? Our platform supports both Selenium and Puppeteer. Our clients' choices are shaped by various factors: company culture, tools and languages, and more. However, we generally recommend Puppeteer. Now we'll tell you why.

Selenium is an HTTP-based JSON API

It's better to choose puppeteer or any CDP-based (Chrome DevTools Protocol) library. But what exactly makes Selenium so inconvenient? Let's find out with a specific example: loading a website, clicking a button, and getting the title. Puppeteer can do all of this over a single socket connection that opens when we connect and closes when we're done. Selenium, however, needs six or more HTTP JSON payloads to do the same.

  • Selenium routes every call over HTTP, so each one goes through a standard TCP handshake before being parsed and forwarded to its final destination. You need to check that keep-alive and similar settings are configured; otherwise, every call takes extra time.
  • Selenium makes a lot of API calls, each with its own traffic patterns, which makes them difficult to apply rate limits to.
  • Selenium makes load balancing and round-robin routing difficult. To complete requests reliably, you will need sticky sessions: a scheme that distributes load based not only on the number of connections to each server but also on client IP addresses, so a session's requests keep landing on the same server. It is possible that you will have to implement this yourself.
  • Selenium does have a binary somewhere that ultimately just sends CDP messages to Chrome, so why do users have to deal with all this HTTP machinery? Over time, you will have to put in serious effort to understand how Selenium works. With puppeteer, you don't have to learn a new stack of protocols and operating principles from scratch, and you can immediately use almost any load balancer (nginx, apache, envoy, etc.). In general, Selenium requires specialized knowledge, while libraries like puppeteer and playwright let you get up and running quickly.

Selenium Requires more Binaries to Track

Both puppeteer and playwright ship with a matching version of their browser. All you have to do is start using them... and everything just works. Selenium, on the other hand, complicates your life: you have to figure out which version of chromedriver matches which version of Chrome, and which versions of both your Selenium release can work with. That's at least three places where your integration can break. Finally, there's Selenium Grid, which will also give you headaches if you don't keep an eye on it. All of this is a clear disadvantage of Selenium compared with more self-contained and accessible tools.

In Selenium Basic Things are more Complicated

If you use Selenium, you will face problems with basic things as well. Say you want to add headers to the browser's requests, either to load-test your site or to attach a header to certain authenticated network requests. With Selenium, making a proxy work, or using a proxy with authentication, requires additional drivers or plugins, so you spend extra time finding and installing them. Puppeteer and playwright have this built into their libraries (see the sketch below). Once again, Selenium loses on convenience to more versatile libraries.
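For comparison, here is a minimal puppeteer sketch; the proxy address, credentials, and header values are placeholders:

const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch({
    args: ['--proxy-server=http://myproxy.example:3128'], // placeholder proxy
  });
  const page = await browser.newPage();

  // Proxy authentication, no extra drivers or plugins needed.
  await page.authenticate({ username: 'user', password: 'secret' });

  // Custom headers applied to every request from this page.
  await page.setExtraHTTPHeaders({ 'X-Custom-Header': 'my-value' });

  await page.goto('https://example.com');
  await browser.close();
})();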

You Have to Configure a lot more in Selenium

It's difficult to set up even a simple script in Selenium, because Selenium caters to many browsers. Here's an example: fetching the title of example.com with Selenium in NodeJS looks like this:

const { Builder, Capabilities } = require('selenium-webdriver');

(async function example() {
  const chromeCapabilities = Capabilities.chrome();
  chromeCapabilities.set('goog:chromeOptions', {
    prefs: {
      homepage: 'about:blank',
    },
    args: [
      '--headless',
      '--no-sandbox',
    ],
  });

  let driver = new Builder()
    .forBrowser('chrome')
    .withCapabilities(chromeCapabilities)
    .usingServer('http://localhost:3000/webdriver')
    .build();

  try {
    await driver.get('http://www.example.com/');
    console.log(await driver.getTitle());
  } catch (e) {
    console.log('Error', e.message);
  } finally {
    await driver.quit();
  }
})();

Approximately 35 lines of code. It looks fine on its own, but it pales in comparison with the same script in another library. Take puppeteer: it needs about half as much code:

const puppeteer = require('puppeteer');

(async function example() {
  let browser;
  try {
    browser = await puppeteer.connect({
      browserWSEndpoint: 'ws://localhost:3000',
    });
    const page = await browser.newPage();
    await page.goto('https://example.com');
    console.log(await page.title());
  } catch (e) {
    console.log('Error', e.message);
  } finally {
    // Guard against connect() having failed before browser was set.
    if (browser) {
      await browser.close();
    }
  }
})();

Novice developers and users who write simple scripts may never have run into the issues described above, and that's normal: those capabilities aren't needed for simple, basic things. However, once you move to larger deployments with different browsers and their capabilities, Selenium will become a headache for you.

So Selenium or...?

Of course, we've covered Selenium's disadvantages above, and in some ways it is inferior to more modern libraries. Puppeteer and playwright are more capable, simpler to configure, and more flexible in use; working with them requires no specialized software, and they integrate more easily with other technologies. All clear advantages. And yet Selenium remains popular, because it has a simple, clear API that abstracts away all the different browsers, their respective protocols, and integration issues. Many large projects use Selenium but hide it behind their own layers, solving the problems that come with it so they don't seriously bother you.