Limitations of API-only testing: Why it shouldn’t be your sole testing strategy


A spicy article hit my inbox the other day. It came with a bold claim — “API testing is better than UI testing”. Absolutes like “A is better than B” rarely hold in the software world. “It depends” is the answer to most tech questions for a reason.

Let’s compare API and UI testing and discuss why one isn’t better than the other. The frenemies are “just different”, and always will be. And that’s a good thing.

API testing and UI testing — two very different challenges

When you compare API and UI testing and say one is better, what’s behind that statement? People often compare the two approaches on scalability, stability, or maintenance effort. However, none of these comparisons is valuable, because API and UI testing are fundamentally different challenges.

In my experience, the foundation of these comparisons is always the same: API tests are easier to maintain than UI tests. Let’s dive into it and see why it doesn’t matter!

API tests are easy to implement

The claim that API tests are easier to maintain than UI tests most certainly holds. In its simplest form, you could even write a 50-line script in your favorite programming language, make an HTTP call to your local or staging environment, and parse the JSON to evaluate whether the response is correct. Talking HTTP isn’t a problem. You could even chain requests and parallelize your script with some shell magic to cover your entire API surface with a single command.

Soon, you’ll realize you need test reports, a proper test runner, and advanced test assertions. Also, nobody else knows how to handle your homegrown test script. So you’ll reach for Postman or similar tools. And congratulations! You can now make HTTP requests and test your entire API surface. At scale! And that’s great!

Let me be bold and say that the testing tools to validate APIs are good enough today. API testing comes with limited complexity, and the implementation is a solved problem.

How is it with UI tests, though?

UI tests are still a pain to write and maintain

No matter the tool, testing web UIs is challenging. The main reason is that testing a website is far more complex than testing API endpoints.

And even though modern browser automation tools like Microsoft’s Playwright ease UI testing with stellar DX (developer experience), cross-browser test capabilities, and fresh approaches to user-first automation, web UI testing is just a different beast to tame. Why’s that?

First, controlling a browser to open a page and evaluate the rendered UI is way more complex than making an HTTP call. Do you know how to make a scripted HTTP request? Most likely. Do you know how you could control a browser and validate a rendered page without an automation library? I don’t. However, automating a browser isn’t the only problem in UI testing. 
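To make the gap concrete: the same signup check, driven through a browser, needs a whole sequence of page interactions instead of one request. This sketch uses Playwright’s Python `Page` methods (`goto`, `fill`, and `click` are real Playwright calls; the URL and selectors are hypothetical), written so the flow can also be dry-run against a stub:

```python
def signup_flow(page) -> None:
    """Drive a hypothetical signup form.

    In a real run, `page` is a Playwright Page (sync API, after
    `pip install playwright && playwright install`); for a dry run,
    it can be any stub exposing goto/fill/click.
    """
    page.goto("https://staging.example.com/signup")  # hypothetical URL
    page.fill("#email", "test@example.com")          # hypothetical selectors
    page.fill("#password", "s3cret!")
    page.click("button[type=submit]")
```

In a real run you’d still have to wrap this in `with sync_playwright() as p:`, launch a browser, open a page, and assert on the rendered result — several layers of machinery that simply don’t exist for a plain HTTP call.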

The technical complexity of modern frontend applications has grown toward infinity over the last ten years. Do you remember when frontends consisted of bare-bones HTML with some jQuery sprinkles? Testing an HTML GET request followed by a form-submit POST was, and is, straightforward because the UI interactions are predictable and limited in complexity.

Today, a website ships over 2MB of compressed resources and makes over 60 requests per page load. On average, at least 20 of those requests are JavaScript resources adding custom functionality. A simple signup form can easily fire a handful of additional requests racing to push the application into a different state with added form validations, animations, or widgets. And let’s hope no script gets stuck on the network, because otherwise the application might end up in a broken state.

In short, a lot is going on in modern web applications, and no matter the tools, testing the modern web is hard. And I’m sorry to bear the news: this won’t change any time soon.

And yet, does this mean you should prefer API over UI testing for ease of use? Heck no!

What are you trying to solve with your testing strategy?

Comparing testing approaches against each other is the wrong way to look at things because every product is different. Every application has different requirements, serves different customers, and is built with different technologies. There’s no single best way to test things — it always depends. Software development and testing are always about tradeoffs.

Your testing setup should center on one single question: why are you investing in automated testing? What do you want to get out of it? The answer to this question, and nothing else, leads to a test setup that works for you.

If you’re working on a developer-facing product, chances are high that you’re providing an HTTP API. Should you test its endpoints? For sure! Should you aim for 100% test coverage? You’re the only one who can answer this question because writing and maintaining tests takes time and effort. But if you struggle to maintain high availability or fight recurring API bugs, good test coverage won’t harm you.

But let’s assume your core product is a user interface on top of bespoke APIs. Could your application break even though the underlying APIs are up and functioning? Most certainly! A single missing semicolon in your frontend code could bring your app to its knees, let alone frontend logic bugs. Is it worth taking on the burden of UI testing in this scenario? I’d say so!

What tests to write and what features to test depends on you and your product. Which features can’t you afford to break? Testing those should be the absolute minimum.

When high test coverage misses the point

High test coverage, whether API or UI, misses the point when it isn’t preventing you from shipping critical user-facing bugs to production. If you’re rigorously testing your API surface because API testing “is better”, great! Having tests is still better than not having them.

But if your test suite fails to enable you to ship bug-free software as quickly as possible, you have to question the purpose of the tests. Are you making the tests about yourself? Are you chasing vanity numbers to make you feel good instead of enabling you to ship good stuff to production? I have been guilty of hunting the holy but pointless 100% test coverage grail many times.

But what should you test, and where’s the balance?

Automated tests are all about user experience (and always will be)

Am I saying you shouldn’t aim for high test coverage? Not at all. Instead, I’m advocating for a user and value-first testing approach. None of your customers cares about your high test coverage. The only thing they care about is a working product.

If your API tests enable you to ship new features quickly and not break production, go for it. But if your test suite has blind spots and your application can break without you noticing, you better take away these blind spots! 

Indeed, UI tests are hard to write because they deal with high complexity, but if you’re providing a UI to customers, you should at least know that it works! Don’t let a missing semicolon take you down. And again, do you need to test all the UI functionality and aim for the holy test coverage grail? Most certainly not, but knowing that your last feature deployment didn’t break mission-critical user flows makes you sleep better. Trust me. 

It is up to you to define how many tests you need to ship features confidently and not lose your users' trust. 

But speaking of trust, do your API and UI deployment tests really prevent you from having user-facing production issues? I doubt it.

Deployment testing isn’t the safety net you hoped for

Let’s assume you’re shipping an application that serves a global audience and relies on third-party APIs. Is testing your deployments enough to guarantee a stellar user experience? What if one of your API dependencies goes down? Will your application follow?

If the answer is “YES”, be aware that deployment testing might not cut it because your app can (and will) break after your production deployment.

Am I switching the topic from testing to monitoring now? Yes and no.

Historically, testing and monitoring were two very different disciplines. Developers, testers and QA engineers cared about working features in isolation. Platform and site reliability engineers, on the other hand, cared about a working and well-performing production environment. 

And sadly, to this day, it’s common for these teams to work in silos. Developers throw new features over the fence for the infra people to figure out how to keep them running. But why is that?

Primarily, it’s because the teams rely on different tools. How should you work together when the testing and monitoring tools differ? It’s challenging for all parties, slows down development, and hinders collaboration.

As solutions engineer Jonathan Canales puts it, a unified toolchain enables fast iteration, efficient testing, and seamless monitoring. Amen!

But how can you unify testing and monitoring and maybe even reuse all your tests for synthetic monitoring? Well… maybe you should adopt Monitoring as Code. It enables you to test your deploys and then reuse the tests for 24/7 global monitoring. This approach ensures that your users are having a great experience with your product. And if they don’t, you’re the first to know about production issues.
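The core idea can be sketched in a few lines: write the check once, inject the target, and run it both as a deployment test in CI and as a recurring monitor. The names below are hypothetical illustrations, not any specific vendor’s API:

```python
import time


def homepage_check(fetch) -> dict:
    """One check, two uses: a CI deployment test and a production monitor.

    `fetch` is any callable returning (status_code, body). It is injected
    so the same assertion logic can run against staging in CI, or against
    production on a schedule.
    """
    start = time.monotonic()
    status, body = fetch()
    return {
        "ok": status == 200 and "Sign up" in body,
        "latency_s": round(time.monotonic() - start, 3),
    }


# In CI: fail the build if homepage_check(staging_fetch)["ok"] is False.
# As a monitor: run it every minute against production and alert on failure.
```

The same assertion that gates your deploy keeps watching production afterward — no second test suite, no second source of truth.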


So, what’s better now — API or UI testing? The unsatisfying answer is none. And both!

What you test (and when) is on you. You know your product and its mission-critical features best. You’re the only one who knows which features you can’t risk breaking. Maintaining automated tests can be hard work, but ease of implementation is the wrong measure for deciding what to go after.

Automated testing isn’t about you or the effort required to make it work. Automated testing is about shipping great features without breaking things. It’s about keeping your customer happy — that’s all that matters.
