Abstracting Acceptance Tests
By Adrian Sutton
For the past three months I’ve been settling into my new role with LMAX, developing a high-performance financial “exchange”1, hence the silence on the blog lately. One of the things I’ve found extremely impressive about LMAX is the breadth of the acceptance test coverage and the relative ease with which the tests are maintained. Normally, as a set of acceptance tests grows extremely large, it can become an impediment to being agile and changing the system, since a small change to the UI can affect a large number of tests.
At LMAX that issue has been handled quite well by building a custom DSL for writing acceptance tests: the tests are described at a very high level, and the details are handled in one place within the DSL – where they can easily be adjusted if the UI needs to change. It’s certainly not a new idea, but it’s executed better at LMAX than I’ve seen anywhere else. They regularly have not just testers but also business analysts writing acceptance tests ahead of the new functionality being developed.
One of the key reasons for the success of the DSL at LMAX is getting the level of abstraction right – much higher than most acceptance test examples use. The typical example of a login acceptance test would be something like:
- Open the application
- Enter “fred” in the field “username”
- Enter “mypass” in the field “password”
- Click the “login” button
- Assert that the application welcome screen is shown
However the LMAX DSL would abstract all of that away as just:
- Login as “fred”
The DSL knows which web page to load to log in (and how to log out if a user is already logged in), which fields to fill out, and has default values for the password and other fields on the login form. You can specify them directly if required, but the DSL takes care of as much as possible.
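The real LMAX DSL isn’t public, so the following is only a minimal sketch of the idea: a single high-level call that merges hypothetical default values with any explicitly supplied overrides before driving the UI. All class, method and field names here are illustrative assumptions, not the actual LMAX API.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch only: shows how "Login as 'fred'" can expand to a full
// set of form values, with defaults applied for anything the test omits.
public class TradingDsl {
    private static final Map<String, String> DEFAULTS = new HashMap<>();
    static {
        // Hypothetical defaults for the login form.
        DEFAULTS.put("password", "mypass");
        DEFAULTS.put("accountType", "trader");
    }

    // Usage: login("fred") or login("fred", "password: secret")
    public static Map<String, String> login(String username, String... overrides) {
        Map<String, String> params = new HashMap<>(DEFAULTS);
        params.put("username", username);
        for (String override : overrides) {
            // Overrides use a simple "key: value" syntax.
            String[] parts = override.split(":", 2);
            params.put(parts[0].trim(), parts[1].trim());
        }
        // A real DSL would now drive the browser: log out any existing
        // session, open the login page, fill in the fields and submit.
        // Returning the resolved parameters keeps this sketch self-contained.
        return params;
    }
}
```

The point of the design is that a test only states what matters to it; everything else falls back to a default the DSL maintains in one place.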
The second thing I’ve found very clever in the DSL is the way it helps to ensure tests run in an isolated environment as much as possible – even when running in parallel with other tests. There is very heavy use of aliases instead of actual values, so telling the DSL to log in as “fred” will actually result in logging in as something like “fred-938797” – the real username having been stored under the alias “fred” when the user was set up by the DSL. That way you can have thousands of tests all logging in as “user” and still be isolated from each other.
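A minimal sketch of that aliasing idea, under the assumption that each test run holds its own registry: the first time an alias is resolved, a unique real name is generated and remembered, so the same alias always maps to the same real user within a run but to different users across runs. The class and suffix scheme are my own invention for illustration.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ThreadLocalRandom;

// Illustrative sketch: maps a friendly alias like "fred" to a unique real
// username such as "fred-938797", generated on first use and then reused.
public class AliasRegistry {
    private final Map<String, String> aliases = new HashMap<>();

    // Returns the real name for an alias, creating a unique one on first use.
    public String resolve(String alias) {
        return aliases.computeIfAbsent(alias,
                a -> a + "-" + ThreadLocalRandom.current().nextInt(100000, 999999));
    }
}
```

Two tests each holding their own registry would resolve “fred” to different real usernames, which is what keeps parallel runs from treading on each other’s data.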
Interestingly, the LMAX DSL isn’t particularly English-like at all – it’s much closer to Java than English. It aims to be simple enough to understand for smart people who work regularly with the acceptance tests, but not necessarily with the code. Sometimes developers assume that non-developers can only read actual English text, and invest too much time in making the DSL readable rather than functional, effective and efficient at expressing the requirements.
There is still a very obvious cost to building and maintaining such a large body of acceptance tests: some stories can require as much time writing acceptance tests as building the actual functionality, and there is a lot of time and money invested in having enough hardware to run all those slow acceptance tests. Even so, I’ve seen a huge payoff for that effort even in the short time I’ve worked there. The acceptance tests give people the confidence to try things out and go ahead and refactor code that needs it – even if the changes also require large-scale changes to the unit tests.
1 – strictly speaking I believe it’s a multi-lateral trading facility. Don't ask me what the exact difference is. ↩