Separating the frontend UI from the backend API is a popular practice, but one that brings its own particular brand of headache for developers. Building a frontend against dummy data lets us prototype our UI quickly before linking it up to an API. Problems arise, however, when the assumptions we've made about our API-to-be don't match up with reality.
A totally not-contrived scenario
Sarah is in charge of a team of frontend developers. The team has been assigned the task of creating the UI for a new client's web app. Sarah's team specialise in the JAMStack and have been working closely with the design team to create workable designs. They're raring to get going and so start with dummy data, making assumptions about the API after consulting with the API build team.
Gary heads the backend team. They'll be building the API that Sarah's team will use to populate their UI. Gary and Sarah, along with their respective teams, have discussed how the API will work, and they have some documentation describing its expected behaviour.
So far, so good.
Despite the documentation written at the start of the project, the teams diverge in their implementations. For example, Sarah's team may write creation API calls expecting HTTP status 201 returned on success with the created entity in the body, while Gary's team send back status code 200 with the message 'OK' instead. The project takes longer and more refactoring is needed to bring things into alignment.
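That kind of mismatch is easy to picture in code. In this sketch (the endpoint, field names, and handler are all invented for illustration), each team's work passes its own tests, yet the two sides disagree on both the status code and the shape of the body:

```javascript
// What Sarah's team assumed when writing their frontend code:
const consumerExpectation = {
  status: 201,                        // 201 Created
  body: { id: 1, name: 'Ada' },       // the created entity echoed back
};

// What Gary's team actually implemented (stubbed for illustration):
function createUser(request) {
  // ...persist the user somewhere...
  return { status: 200, body: 'OK' }; // 200 with a bare 'OK' string
}

const actual = createUser({ name: 'Ada' });

// Each side looks fine in isolation, but the integration does not line up:
console.log(actual.status === consumerExpectation.status); // false
console.log(typeof actual.body === 'object');              // false
```

Neither team is wrong by their own documentation reading; the disagreement only surfaces when the two implementations meet.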
You might be thinking 'Why wasn't there more collaboration?', 'Why wasn't there someone in overall control to sort these things out?', 'Just who the hell runs a company like this?' and you'd be right to ask. It's a pretty contrived scenario, but we've all been in those situations, whether through a shitty workplace culture or just frazzled project managers, where things get missed. If you don't document expectations, and include that documentation in the workflow, then stuff will fall by the wayside. So, how do we solve this problem?
Enter contract testing
Contract testing is a means of assuring that inter-application messages conform to a shared understanding between a consumer and a provider. This understanding is documented in 'contract' files, which test runners use to ensure that calls to test doubles return the same responses as the real application would.
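Concretely, a contract pairs each request with the response the consumer expects. Real tools such as Pact serialise these as JSON files, but the shape is simple enough to sketch by hand. Everything below, the endpoint, the fields, the `stubFromContract` helper, is invented for illustration:

```javascript
// A hand-rolled contract: one interaction between a consumer and a provider.
const contract = {
  consumer: 'web-frontend',
  provider: 'user-api',
  interactions: [
    {
      description: 'creating a user returns the created entity',
      request: {
        method: 'POST',
        path: '/users',
        body: { name: 'Ada' },
      },
      response: {
        status: 201,
        body: { id: 1, name: 'Ada' },
      },
    },
  ],
};

// A test double built from the contract: it answers exactly as agreed,
// so the frontend can be developed and tested before the real API exists.
function stubFromContract(contract, method, path) {
  const match = contract.interactions.find(
    (i) => i.request.method === method && i.request.path === path
  );
  return match ? match.response : { status: 404, body: null };
}

console.log(stubFromContract(contract, 'POST', '/users').status); // 201
```

Because the stub is generated from the contract rather than written ad hoc, the consumer's dummy data can never silently drift away from what was agreed.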
In our scenario, this means that Sarah's team could write contracts for each of the service endpoints they consume, detailing both the request and the expected response. Gary's team can then use these contracts to test the API and ensure it sends back the expected response to a given request. This process is known as consumer-driven contract testing. In addition, the contracts show Gary's team exactly which parts of the API are being used by consumers, allowing them to change unused, perhaps experimental, parts without breaking implementations.
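The provider side of that process can be sketched too: replay each contracted request against the implementation and check the response matches what the consumer expects. This is a minimal, hand-rolled version of what tools like Pact automate; the contract contents and the `handleRequest` function are invented for illustration:

```javascript
// A contract describing one interaction, as written by the consumer team.
const contract = {
  interactions: [
    {
      request: { method: 'POST', path: '/users', body: { name: 'Ada' } },
      response: { status: 201, body: { id: 1, name: 'Ada' } },
    },
  ],
};

// The provider team's (stubbed) implementation under test.
function handleRequest({ method, path, body }) {
  if (method === 'POST' && path === '/users') {
    return { status: 201, body: { id: 1, ...body } };
  }
  return { status: 404, body: null };
}

// Verify every interaction in the contract; collect any mismatches.
function verifyProvider(contract, handler) {
  const failures = [];
  for (const { request, response: expected } of contract.interactions) {
    const actual = handler(request);
    if (
      actual.status !== expected.status ||
      JSON.stringify(actual.body) !== JSON.stringify(expected.body)
    ) {
      failures.push({ request, expected, actual });
    }
  }
  return failures;
}

console.log(verifyProvider(contract, handleRequest).length); // 0: contract honoured
```

Had Gary's team run a check like this against a contract from Sarah's team, the 200-with-'OK' response would have shown up as a failure long before integration.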
Contract testing really shines when you're adding or changing features on a service. This is especially true in an agile environment, where you could have multiple deployments a day, and even more so when those deployments span a number of microservices. Developers can quickly end up in versioning hell, having to make sure they're hitting the right version of a service so that their thoroughly tested code doesn't break. Automated testing against shared contracts gives developers the confidence to deploy, knowing that their code will work with any implementation of a service or consumer that adheres to the contract.
If Sarah and Gary's client decides later that they'd like to break up the monolith API and move to microservices, potentially spreading the work across more teams, they can rely on the fact that the contracts will ensure the frontend will still have the information it needs to keep functioning. More confidence, more deployments, less faffing about refactoring.
I hope that gives you an overview of this powerful testing methodology. I'll be going into how you can set up contract testing in your own JAMStack apps in another post (or series of posts). We'll dive into setting up the consumer and provider tests. We'll also look into contract brokers, servers which allow you to share contracts and ensure your latest implementation will work against its opposite number.