
Re-Evaluate Your Web Automation Status


If you are starting this article, chances are that you, or someone on your team, cares about or is growing restless with the state of your web automation efforts. That's a good start! Web automation carries a long history and a plethora of misinformation, which can make efforts around it relatively bumpy, met with pushback and the occasional frown. Defensiveness towards these efforts often comes from past experiences, inflated expectations, siloing between team horizontals, or plain lack of knowledge on the subject. Let us try to rectify the situation!

Prejudice Against Web Automation

Over the years of working on software projects, most commonly in medium and large organizations, you will face the truth that web automation has earned a bad reputation, especially around the topic of End to End testing. You might have expected otherwise, but it is not without reason.

  1. 1)"Flakiness".
  2. 2)Intricate environment setup for CI/CD.
  3. 3)High maintenance costs for application-level workflow tests.

I won't go further into the roots of these experiences here, but for anyone who wants to dive a bit deeper, there is another article that might give you a better view of why web interface automation testing got its bad rep.

When the reasons above are more than warnings, but lived experiences of other team members, a dismissive attitude towards investing in web automation can set in quickly. Sadly, this happens regardless of how well versed and up to date someone is on the topic. They lived through it and readily recall the situations where the important job of "pushing code" slowed down because of a flaky test. They cannot remember the good parts, because automation should by nature be unobtrusive and notify you only when more care is required.

Distance Between Us

Another important misconception, which I am genuinely refreshed to see fading away, is the imaginary distance between the disciplines of Software Engineers and Quality Assurance engineers. There is no denying that each of the two roles involves specialized skills (as every engineering discipline does these days), but we have moved away from QA engineers being "not so" technically skilled in software, and both roles need the same great engineering practices we see and talk about all the time. These facts are slowly but surely bridging the imagined gap between the two disciplines, bringing them to a better collective understanding and a beneficial exchange of information.

That is all the more reason to keep confidence in your aspirations. Provide solid explanations and refined knowledge indicating the benefits clearly to the team.

As the Software World is Progressing, Web Automation Follows Along

During the last decade, software has been blooming, not only in the fields it is applied to, but in the ways the majority of engineers develop their applications, especially for the web platform. From whole ecosystems around specific JavaScript frameworks like React, Vue, and Angular, down to new languages taking hold of web servers and backend services, like Golang, Rust, and Elixir. A great era to be in for a technology enthusiast.

As a natural consequence, the work behind and around web automation faced tremendous growth.

Moving Closer to the Host Platform

One of the major hurdles for people working on web automation was the architectural nature of the available tools. The undeniable leader, Selenium, is designed around an additional server that mediates communication between the actual browsers and the client libraries, which in turn must implement a specific protocol to describe the commands to be executed. This architecture has its merits, but given the speed at which the browser market and all its stakeholders progress, it introduces a fair share of disadvantages, ranging from keeping up with new browser feature additions down to the maintenance cost of each individual client library. With so many moving parts that are not native to the browsers we aim to instrument, "flakiness" comes into play.

To address the pains mentioned above, but also to take web automation many steps further, projects like Puppeteer and Playwright have risen and indeed taken off. These and similar projects allow instrumentation much closer to the browser platform. At a high level, this is achieved by using the actual protocols that the browser platform uses to expose functionality and hooks around its behavior; see the Chrome DevTools Protocol.
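
To make this concrete, here is a minimal sketch of attaching a raw CDP session from Puppeteer; the URL and the throttling values are just placeholders.

```js
// Minimal sketch: speaking the Chrome DevTools Protocol directly
// from Puppeteer (URL and throttling values are placeholders).
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  // Every Puppeteer page is backed by a CDP target; we can attach
  // our own session and issue protocol commands directly.
  const client = await page.target().createCDPSession();

  // Enable the Network domain and throttle the connection,
  // a capability the protocol exposes natively.
  await client.send('Network.enable');
  await client.send('Network.emulateNetworkConditions', {
    offline: false,
    latency: 200,                    // added round-trip latency in ms
    downloadThroughput: 250 * 1024,  // bytes per second
    uploadThroughput: 100 * 1024,
  });

  await page.goto('https://example.com');
  await browser.close();
})();
```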

As these instrumentation protocols are native and used internally by the platform, they are maintained by the browser development teams with no further effort from the client. Not to mention that, in my experience, the community around these tools is excellent. Let's jump into the meaty topics!

Increased Reliability

Any abstraction over something that does not operate in the same process is, at least for us high-level engineers, safe to assume more prone to failure by the nature of the system. These projects aim to have less of that!

Additionally, you no longer have to "hack" your way through some situations; a sketch follows the list below:

  • Instead of using code injection and monkey patching to monitor console logs, you can observe the internal Log API.
  • Instead of relying on the jQuery.active trick, the Network layer can give you all the information you need. A killer feature is the ease with which you can add request/response interception to your arsenal.
  • Navigation patterns (SPA navigation, click navigation) are handled intuitively, with room for customization, e.g. page.waitForNavigation().
  • More capabilities exposed closer to the host...
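
As a hedged illustration of the first two points, a small Puppeteer sketch; the URL and the choice to block images are only examples.

```js
// Sketch: observing console output and intercepting requests with
// Puppeteer; no code injection or monkey patching required.
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  // Console messages arrive as plain events.
  page.on('console', (msg) => {
    console.log(`[page] ${msg.type()}: ${msg.text()}`);
  });

  // Request interception lets us inspect, block, or rewrite traffic.
  await page.setRequestInterception(true);
  page.on('request', (request) => {
    if (request.resourceType() === 'image') {
      request.abort(); // e.g. skip images to speed up a crawl
    } else {
      request.continue();
    }
  });

  await page.goto('https://example.com');
  await browser.close();
})();
```
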
More Speed

As expected, running closer to the browser is indeed faster in terms of execution time, but that is not the only or even the major factor for speed. Here we want to focus not on the execution time per command, but on the large chunks of time that come with the nature of the technology behind a web automation task. Most of the time "wasted" during web automation tasks, back when Selenium-based tools were the only option, went to wait() calls.

These calls were added not out of sloppiness by the individual writing the automation task, but because there was no reliable way to make sure that all the network-based resources had finished loading during a page navigation.

This is not the case with tools like Puppeteer and Playwright. Both are built on browser-native protocols, such as the Chrome DevTools Protocol, so they know directly when the Network layer has finished loading resources after a navigation command. Beyond a reasonable default, the wait event is even configurable! With this option at hand, the calls to wait() are gone and web automation tasks run much faster.
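
For example, with Puppeteer the wait condition is an option on the navigation call itself. A small sketch, assuming a page object from a launch like the earlier ones; the a#next selector is hypothetical.

```js
// Sketch: picking the navigation "done" condition in Puppeteer.
// Assumes `page` comes from puppeteer.launch()/newPage() as in the
// earlier sketches; the selector below is hypothetical.

// 'networkidle2' resolves once there have been no more than two
// in-flight network connections for at least 500 ms.
await page.goto('https://example.com', { waitUntil: 'networkidle2' });

// Click-triggered navigations can be awaited explicitly as well.
await Promise.all([
  page.waitForNavigation({ waitUntil: 'domcontentloaded' }),
  page.click('a#next'),
]);
```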

More Power

Together with more reliability, a whole slew of new capabilities is unlocked. With these capabilities, the number of manual and periodic tasks that can be automated grows by a large margin.

  • Gather metrics about how much of your CSS & JS code has been used at any point in time during navigation. Puppeteer example
  • Run performance audits with Google Lighthouse. Puppeteer example
  • Inspect a page with a specific geolocation profile. Playwright example
  • Retrieve comprehensive accessibility attributes for a specified DOM branch. Puppeteer example
  • Generate a PDF version of a page. Playwright example

Just a reminder: these metrics are usable on their own and do not need to be collected across every browser. Keep it simple. Gather them up in a spreadsheet and analyze them.
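
To give a flavor of the coverage point above, a minimal Puppeteer sketch; the URL is a placeholder.

```js
// Sketch: measuring how much CSS and JS actually ran during a visit.
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  await Promise.all([
    page.coverage.startCSSCoverage(),
    page.coverage.startJSCoverage(),
  ]);

  await page.goto('https://example.com');

  const [cssCoverage, jsCoverage] = await Promise.all([
    page.coverage.stopCSSCoverage(),
    page.coverage.stopJSCoverage(),
  ]);

  // Sum used vs. total bytes per entry to get a usage percentage.
  const usage = (entries) => {
    let used = 0;
    let total = 0;
    for (const entry of entries) {
      total += entry.text.length;
      for (const range of entry.ranges) used += range.end - range.start;
    }
    return total ? ((100 * used) / total).toFixed(1) : '0.0';
  };

  console.log(`CSS used: ${usage(cssCoverage)}%`);
  console.log(`JS used: ${usage(jsCoverage)}%`);
  await browser.close();
})();
```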

Tranquility on CI/CD platforms

I am all too familiar with the frustration DevOps engineers feel when initially setting up the sidecar server required for running Selenium-based solutions. Beyond the flakiness, which means getting one more ping for a false-positive failure, errors can pop up from updated browser drivers that are incompatible with the Selenium JARs. Down the rabbit hole you go...

If your web automation use cases do not really require cross-browser support, the CI/CD setup for running libraries like Puppeteer or Playwright feels like a breeze. There is no additional server to run; you just run the browser binary, and especially with headless mode you do not need any graphics-capable libraries or machinery. Fast and painless!
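
In practice, the whole CI dependency is an npm install. A rough sketch of a launch on a containerized runner follows; the flags shown are common workarounds for root-run containers, so adjust them to your runner's security model.

```js
// Sketch: launching a headless browser on a bare CI runner.
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch({
    headless: true,
    // Frequently needed in containerized CI environments; prefer a
    // properly sandboxed image when possible.
    args: ['--no-sandbox', '--disable-dev-shm-usage'],
  });
  const page = await browser.newPage();
  await page.goto('https://example.com');
  console.log(await page.title());
  await browser.close();
})();
```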

All May Sound Ideal

Yes, for many use cases offloading much of your web automation effort to projects like Puppeteer might seem like a no-brainer, but there is a trade-off, and it lies in cross-browser compatibility. As mentioned, these projects rely on internal protocols that come bundled with the browser engine and base platform, which for both Puppeteer and Playwright is the Blink engine used in Chromium. Support and compatibility are strongest for browsers based on this setup, but the plans (or current status) are to extend that.

The current compatibility status, as far as I can tell, for both projects is:

  • Puppeteer lately provides support for running versions of Microsoft Edge, Firefox (experimental), and other Chromium-based browsers.
  • Playwright advertises support for Firefox, Chromium, and WebKit-based browsers.
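
Playwright's cross-engine support in particular can be exercised from a single script; a rough sketch with a placeholder URL:

```js
// Sketch: running the same task on Chromium, Firefox, and WebKit
// using Playwright's bundled engines.
const { chromium, firefox, webkit } = require('playwright');

(async () => {
  for (const browserType of [chromium, firefox, webkit]) {
    const browser = await browserType.launch();
    const page = await browser.newPage();
    await page.goto('https://example.com');
    console.log(`${browserType.name()}: ${await page.title()}`);
    await browser.close();
  }
})();
```
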
Innovation Is There and At Its Finest

Without taking credit away from other outstanding projects around web automation, there is one that most people will feel comfortable recommending, and of course I am talking about Cypress. Cypress is a browser testing tool providing an all-in-one solution that allows for fast and reliable application-level testing. We won't go far into the why and how here, but just laying down the goals and inspiration behind this effort shows how much web automation has moved forward.

Out of the box, with just the tool installation, you get:

  • An intuitive command and expectations API.
  • A clear test runner UI.
  • No need to "wait" for an element to be ready. Cypress runs inside the browser and knows exactly when the element you request is ready.
  • Time travel through the web interface during the steps of a test.
  • Test failures that can be inspected even at the network level.

The folks at Cypress already knew that End to End testing had a bad reputation among developers, and that the term carried all the issues of old that we mentioned before. With the further advancement of the technology around web automation, they created a tool that promised reduced flakiness, faster test execution, and CI/CD reliability when running tests without additional dependencies.
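
A minimal spec shows the flavor; the route, selectors, and expected text below are hypothetical.

```js
// Sketch: a minimal Cypress spec. Cypress retries commands and
// assertions automatically, so there are no explicit waits.
// The route, selectors, and expected text are hypothetical.
describe('login flow', () => {
  it('logs the user in', () => {
    cy.visit('/login');
    cy.get('input[name=email]').type('user@example.com');
    cy.get('input[name=password]').type('s3cret');
    cy.get('button[type=submit]').click();

    // Cypress waits for the element to appear before asserting.
    cy.contains('Welcome back').should('be.visible');
  });
});
```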

Together with the technology that executes the End to End tests, the Cypress team focused on the development experience of the user, whether that is a developer or a QA engineer. To that end, they provide a desktop application that allows anyone with minimal knowledge to get up and running with End to End testing and any other automation task. The visual feedback and the project dashboard structure are just on point.

The community was ripe for, and in need of, a tool like this. They delivered, and at least in my opinion they truly marked a new "post-Selenium" web UI testing era. Give them a try!

As a bit of trivia, the architecture powering Cypress is a really interesting specimen that I recommend taking a look at, if only to open up new ideas.

More Idea Generators

  • QA Wolf
  • Puppeteer-snapstub
  • Applitools
Final Thoughts

In this relatively short read, which I hope you enjoyed, you have some really solid points indicating the growth of web automation capabilities and tooling over the last few years. With that in mind, do not let anyone discourage you from trying out, introducing, or even re-introducing web automation to your team. There might not be a silver bullet for your individual use cases yet, but I can assure you there is a more mature and flourishing community around web automation software than ever before.

If you enjoyed the article and want to support me so that I keep the content coming...
Buy me a coffee