With Infrastructure as Code and service-oriented development, a modern web app can consist of countless moving parts developed by multiple development and DevOps teams.
When establishing a high-velocity development environment, the main question is, "How can you guarantee a stellar end-user experience when many engineers are constantly pushing and deploying code?"
Solid, easy-to-write, and clearly defined monitoring practices are the only answer to this question. Development teams must know their responsibilities and monitor their infrastructure and application's health.
There are two approaches to application monitoring: inside-out and outside-in monitoring.
With an inside-out monitoring approach, you keep an eye on your application's health from the inside: you monitor your database operations and critical transactions. And while there's value in this approach, an all-green inside-out monitoring setup doesn't guarantee a flawless user experience. Application bugs or infrastructure misconfigurations can still make it into production unnoticed. Additionally, not every detected infrastructure issue needs to be addressed immediately. For example, if your application scales up automatically or implements smart failovers, infrastructure issues might have little to no user experience impact. Nobody should be alerted or woken up at night to fix things that aren't affecting the end-user experience.
On the other hand, outside-in monitoring keeps the end-user experience at the core. Can your visitors log into your application? Can they purchase your application's offered products? Do all your features work from anywhere and at any time? If the user experience or core functionality is affected, you must know about it and act quickly.
Synthetic monitoring is the critical pillar of outside-in monitoring. Let's learn what synthetic monitoring is and how it helps to ship better software faster!
What is synthetic monitoring?
Before diving into synthetic monitoring, let's take a step back and consider automated testing.
Automated tests aren't new in software development. Synthetic tests focus on emulating user behavior, from pinging an HTTP endpoint to controlling real browsers to mimic a realistic user session with application transactions. They allow you to automate critical application flows such as adding a product to a shopping cart or logging in to an account. When your tests fail, you hold production deploys until you've figured out what broke, preventing a negative user experience in production.
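As a sketch, a synthetic test for such a login flow could look like this with Playwright. The URL, labels, and credentials below are hypothetical placeholders, not a real application:

```typescript
// login.spec.ts — a sketch of a synthetic login test with Playwright.
// All URLs, selectors, and credentials are placeholders for illustration.
import { test, expect } from '@playwright/test'

test('user can log in', async ({ page }) => {
  await page.goto('https://example.com/login')
  await page.getByLabel('Email').fill('user@example.com')
  await page.getByLabel('Password').fill('secret-password')
  await page.getByRole('button', { name: 'Log in' }).click()

  // A successful login should land the user on their dashboard
  await expect(page.getByRole('heading', { name: 'Dashboard' })).toBeVisible()
})
```

Run on every deployment, a handful of such tests covers the flows your business can't afford to break.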
Synthetic tests are essential for high-velocity development teams, but unfortunately, they can't provide complete confidence. Knowing that your application works during or shortly after a production deployment doesn't help you detect future user experience issues. Maybe your infrastructure is struggling under high load, or a third-party vendor is experiencing downtime; you must constantly monitor the user experience to be safe.
Transforming and reusing your synthetic tests as monitors is the only way to know that your application always works!
How does synthetic monitoring work?
When you adopt synthetic monitoring (sometimes also called active monitoring), you constantly run synthetic tests against your production environment — you create and run your headless browser or API tests during development, and ideally, you follow the Monitoring as Code workflow to reuse them to monitor your production application.
Emulating typical user behavior by testing your mission-critical transactions at short intervals helps you gain confidence that your entire application operates smoothly at any time. And if your application serves a global audience, running synthetic monitoring from multiple locations guarantees you won't miss regional issues.
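With a Monitoring-as-Code tool such as the Checkly CLI, scheduling a Playwright test from multiple regions is a matter of configuration. The check name, file path, and option values below are illustrative; consult the Checkly docs for the current construct API:

```typescript
// login.check.ts — a sketch of a scheduled, multi-location browser check.
// Names, paths, and option values are illustrative assumptions.
import { BrowserCheck, Frequency } from 'checkly/constructs'

new BrowserCheck('login-flow-check', {
  name: 'Login flow',
  frequency: Frequency.EVERY_10M, // run every ten minutes
  locations: ['us-east-1', 'eu-west-1', 'ap-southeast-2'], // catch regional issues
  code: { entrypoint: './login.spec.ts' }, // reuse the Playwright test as-is
})
```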
Synthetic monitoring enables you to be on top of your game and notice issues before your customers do. But there's more to it!
Synthetic monitoring vs. real user monitoring (RUM)
By betting on synthetic monitoring, you take an active approach to monitoring your application's functionality and performance. You're in charge of defining and simulating user interactions in a laboratory environment under your control. But are there other ways to monitor user experience?
Real user monitoring, also called RUM or passive monitoring, is another way to analyze and monitor your application. RUM offers deep insights into user interactions, performance statistics, and your users' devices and locations. It typically works by embedding a small script into your application that collects performance and interaction data from every real visitor session.
So when should you use which? The answer is the usual "it depends". An active monitoring approach enables you to notice issues proactively and get alerted about them. Testing and monitoring before and after your production deployments massively speeds up development and prevents production issues. Your synthetic user sessions surface production issues and alert you before a customer notices.
That said, monitoring real users passively provides insights into behavior and environments you probably aren't aware of. Maybe your users browse on low-end devices, or you have a customer base in a part of the world you didn't expect. Real user monitoring gives you these insights. Ideally, you bet on a combination of synthetic and real user monitoring to get the full user experience picture.
We at Checkly are big fans of synthetic monitoring, though; why should you invest in it?
Why should you invest in synthetic monitoring?
Synthetic monitoring on the back of headless browser automation enables development teams to ship fast and confidently.
Let's name more synthetic monitoring benefits.
Know that your application is available online
The most terrifying scenario when running an online business is being offline. Downtime can be caused by DNS misconfiguration, hosting issues, a developer deploying a bug, and countless other reasons. If you bet on synthetic monitoring, you'll know that your apps and their APIs are available online, and if they're not, you'll be the first to know about the failure.
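A basic uptime monitor for an API endpoint can be sketched with the Checkly CLI constructs. The endpoint URL and threshold values are placeholders, and the exact API may differ from the current Checkly docs:

```typescript
// api.check.ts — a sketch of an API uptime check.
// The URL and assertion values are placeholders for illustration.
import { ApiCheck, AssertionBuilder, Frequency } from 'checkly/constructs'

new ApiCheck('health-endpoint-check', {
  name: 'API is up',
  frequency: Frequency.EVERY_1M, // verify availability every minute
  request: {
    method: 'GET',
    url: 'https://api.example.com/health',
    assertions: [
      AssertionBuilder.statusCode().equals(200),
      AssertionBuilder.responseTime().lessThan(1000), // flag slow responses, too
    ],
  },
})
```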
But synthetic monitoring has way more to offer than classical uptime monitoring!
Detect mission-critical issues quickly
Considering your application's essential features, how long would it be acceptable for these to be broken? The answer to this question is your ideal monitoring interval.
You can only fix production issues you know about. If making a purchase is your core business, you probably don't want to test this functionality only once a day after a production deployment.
Synthetic monitoring enables you to test your core functionality daily, every hour, or even every minute. The shorter your synthetic monitoring interval, the shorter your mean time to detect (MTTD) will be. A short MTTD enables you to fix production issues before your customers reach out to your support channels!
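The relationship between interval and detection time is simple back-of-the-envelope math: an issue can start at any point between two check runs, so (ignoring check duration and alert delivery) you detect it after half an interval on average and after a full interval in the worst case:

```typescript
// Worst case: the issue starts right after a check run, so it sits
// undetected for one full interval.
function worstCaseDetectionMinutes(intervalMinutes: number): number {
  return intervalMinutes
}

// On average, the issue starts midway between two runs.
function averageDetectionMinutes(intervalMinutes: number): number {
  return intervalMinutes / 2
}

// A daily check can leave an issue undetected for a full day...
console.log(worstCaseDetectionMinutes(24 * 60)) // 1440

// ...while a one-minute check detects it within 30 seconds on average.
console.log(averageDetectionMinutes(1)) // 0.5
```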
Less noise and more meaningful alerts
Betting on user experience testing with synthetic monitoring leads to more meaningful alerts. A single failed backend transaction might be an issue, but your infrastructure could handle it gracefully without any user impact. Alerts based on a broken user experience, on the other hand, tell the entire story and must be treated as critical and acted on immediately.
More transparency into third-party services and vendors
Depending on your application, third-party providers could be essential to your offering. Whether it's a SaaS service handling your logins or a cloud database storing your user data, all these services can run into issues at any time. If your third party's downtime can become your downtime, synthetic monitoring helps you detect these issues.
By testing the user experience, you'll get alerted when things break. Whether the fault lies with you or your third parties, synthetic monitoring informs you about issues so you can act quickly.
Clear performance benchmarks
Another benefit of running synthetic monitoring with headless browsers is that you can monitor performance implications while testing core functionality. Is your web app fast enough for customers in Australia? Does it provide a good Core Web Vitals experience? Do core flows like the customer login become slower over time?
Many things can cause performance degradation, but synthetic monitoring will unveil a slower user experience with aggregated performance metrics. You can't fix things you don't know about!
But synthetic monitoring also comes with some challenges; let's look at them.
Challenges in synthetic monitoring
While the idea of synthetic monitoring is compelling, establishing it across an organization also comes with technical and organizational challenges.
Building and maintaining modern applications is already complex. For synthetic monitoring practices to run well, they must integrate seamlessly into your developers' software development lifecycle.
If synthetic monitoring is too hard or cumbersome to set up, and configuring it "just" adds another layer of complexity, you won't succeed because development teams won't adopt it.
At Checkly, we believe in Monitoring as Code, which allows you to banish "ClickOps" and handle your monitoring needs the same way you handle your deployments — automated and in code!
But when your synthetic monitoring setup lives in code and can be deployed with a single command, who owns it?
In a previous world, development and operations were two different disciplines. However, the DevOps era and its connected mindset, "You build it, you run it," changed the game.
Suddenly, the same teams were responsible for building and operating their features and applications in production. The same principles should apply to your synthetic monitoring efforts: development teams must not only build and maintain but also monitor their features and applications.
Regarding synthetic monitoring with headless browsers, flaky tests are a significant challenge. Since your tests run on a schedule, failures won't just hold up a production deployment; flaky results will trigger useless alerts that wake up your on-call engineers.
Avoiding false positive results and creating a solid synthetic monitoring test suite must be one of your core priorities, or you won't succeed.
How do you get started with synthetic monitoring?
To emulate real user interactions and test your mission-critical flows end-to-end, you must be able to control your users' environment. Headless browser automation is essential for this. But what tools can you use to automate browser operations?
The end-to-end testing ecosystem provides many tools to control browsers, but lately one tool has stood out. Microsoft's Playwright quickly established itself as one of the leading solutions for synthetic testing, and we at Checkly believe it's the best tool for synthetic monitoring.
Playwright — the leader in synthetic testing
Backed by Microsoft, Playwright quickly became one of the leaders in synthetic and end-to-end testing. Its ability to control headless browsers while providing a stellar developer experience convinced the developer, quality assurance, and DevOps communities.
Fight flaky tests with auto-waiting
As mentioned earlier, avoiding flaky tests and false-positive alerts must be a core priority when setting up synthetic testing and monitoring. Playwright battles this problem with auto-waiting mechanisms and user-first actions and assertions. Focus on testing and validating your application features instead of monitoring application internals!
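Playwright's web-first assertions retry automatically until the expected state is reached, so hard-coded sleeps become unnecessary. The page URL and test IDs below are hypothetical examples:

```typescript
// cart.spec.ts — auto-waiting in practice. URL and selectors are placeholders.
import { test, expect } from '@playwright/test'

test('cart updates after adding a product', async ({ page }) => {
  await page.goto('https://example.com/shop')

  // Brittle: a fixed sleep guesses how long the app needs and flakes under load.
  // await page.waitForTimeout(5000)

  // Robust: click() auto-waits until the button is visible and enabled,
  // and the assertion retries until the cart badge shows the new count.
  await page.getByRole('button', { name: 'Add to cart' }).click()
  await expect(page.getByTestId('cart-count')).toHaveText('1')
})
```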
Time travel debugging with traces
Another headless testing challenge is the "it works on my machine" problem. How can you debug failed tests that were executed in a remote environment? Playwright lets you record every test action and takes network and HTML snapshots of your application's state. All this information is put together in a Playwright trace file for your investigation.
Do your tests fail in CI/CD? Open the generated trace file, travel back in time, and inspect your application's state to see what caused the test failure.
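Recording traces is a configuration switch in Playwright. A minimal sketch:

```typescript
// playwright.config.ts — record a trace when a failed test is retried
import { defineConfig } from '@playwright/test'

export default defineConfig({
  retries: 1,
  use: {
    trace: 'on-first-retry', // keep traces only for failing runs to save storage
  },
})
```

A recorded trace (e.g. `trace.zip`) can then be opened with `npx playwright show-trace trace.zip` to step through every action, network request, and DOM snapshot.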
Excellent developer experience
And lastly, to reach high synthetic testing and monitoring coverage, development teams must enjoy writing tests. Playwright's ecosystem provides valuable tools to generate tests, debug them in your favorite editor, and write and run your test cases side by side for immediate feedback!
In summary, Playwright is a valuable tool to level up your synthetic testing game.
But it doesn't provide monitoring capabilities. How could you schedule your tests, get alerted, and gather meaningful monitoring data over time?
Monitoring as Code — synthetic testing and monitoring united in a single workflow
Previously, synthetic testing and monitoring were separate practices, owned by different teams using different tools. At Checkly, we believe in a code-first monitoring approach that unites testing and monitoring.
Leverage Monitoring as Code (MaC) to develop and run your synthetic Playwright tests locally, run them on the global Checkly infrastructure to test your preview deployment environments, and, if all your synthetic tests pass, reuse the same test code as synthetic monitors in the Checkly cloud.
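In practice, this workflow boils down to two commands with the Checkly CLI (a sketch; see the Checkly docs for the current flags):

```shell
# Run your checks once against a deployment, e.g. from your CI pipeline
npx checkly test

# If everything passes, deploy the same checks as scheduled monitors
npx checkly deploy
```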
By adopting Monitoring as Code, development teams can rely on a single tool to test and monitor their app's user experience. This approach removes friction, nurtures team collaboration, and leads to faster issue discovery and resolution in production environments!
Synthetic monitoring forms the foundation of shipping a seamless user experience. It is a safety net for development and DevOps teams, allowing them to innovate confidently. By integrating synthetic monitoring with a MaC approach, we create a bridge between testing and monitoring, fostering a collaborative environment that enhances the overall health of your application.
Synthetic monitoring and Monitoring as Code anticipate and resolve issues before they impact users and streamline the process of maintaining a high-performing, user-centric application. In the fast-paced world of development, synthetic monitoring and MaC are not just tools but essential practices that ensure your application consistently delivers an optimal user experience.
Because you have to remember you can only fix issues you know about.
Frequently asked questions about synthetic monitoring
Learn more about synthetic monitoring and how it relates to uptime monitoring and observability.