I've been meaning to complete one of the Ministry of Testing's 30 Days of Testing challenges for a while. I got about two thirds of the way through the first one as a team exercise, then everyone "got busy with other things", and it died a horrible death, with the checklist languishing on the wall for a couple of months as a reminder of how we'd failed.
When the latest one, 30 Days of Automation in Testing, was announced, I tweeted this:

> I've had good intentions before, but this time I'm actually gonna do #30DaysofTesting. https://t.co/BoXhpSucrk
>
> — Dan Caseley (@Fishbowler) June 29, 2018

And so here we are, with 11 days of July done, and I thought I had better start writing up what I've been doing & thinking so far.
Day One - Compare definitions of "automation" and "test automation".
I don't think I quite appreciated how stark this was going to be. Don't get me wrong - it was something of a leading question - we all knew these definitions were gonna be incongruous.
Automation - methods and systems for controlling other external systems, appliances and machines.
Test Automation - a big mixed bag. Skills, frameworks, packages, test drivers, even the tests themselves.
Were I to take what I know about the field of test automation and use the definition of automation to identify what within that field is "the automation", I'd draw a big circle around everything that resides on the machine up to, but not including, the software being "automated". To be clear, in a Selenium-type context, "the automation" includes (there's a minimal sketch after the list):
- nothing of the human
- any cucumber-type fixtures and any other sprinkles you like in your repo
- the libraries which enable and structure those tests (e.g. mocha)
- the engine that provides the external interface (e.g. Selenium)
- possibly any purpose-built third-party access mechanism, like the WebDriver implementation for/within your browser
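To make that concrete, here's a minimal sketch of those layers working together: mocha as the test library, selenium-webdriver as the engine, and a browser-specific WebDriver binary as the access mechanism underneath. The URL and page title are placeholders I've invented for the example, not from any real project.

```typescript
// Layers of "the automation": mocha structures the test,
// selenium-webdriver drives the browser via a WebDriver binary.
import { Builder, WebDriver } from "selenium-webdriver";
import * as assert from "assert";

describe("a journey through the software being automated", function () {
  this.timeout(30000); // browser startup can be slow
  let driver: WebDriver;

  before(async () => {
    // Talks to a browser-specific WebDriver implementation (e.g. geckodriver)
    driver = await new Builder().forBrowser("firefox").build();
  });

  after(async () => {
    await driver.quit();
  });

  it("loads the page and reads its title", async () => {
    await driver.get("https://example.com"); // placeholder URL
    assert.strictEqual(await driver.getTitle(), "Example Domain");
  });
});
```

Everything in that file is "the automation"; the page it drives is not.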
- I don't find automation over accessibility gives me any confidence at all, but some tools can identify some things that are probably wrong.
- Automation of usability is impossible (for some definitions of usability).
- If you include rendering here, then visual checking for browser compatibility comes in, and I do find that cheap & sexy. Viv Richards speaks well about this online, and spoke about it to a packed room at NottsTest earlier this year.
- I love using automation to explore an API. I can sit with 5 lines of code, written as a positive check of a particular endpoint, and play with it to explore the behaviour. What if this value were larger, or longer, or omitted? (There's a sketch of this after the list.)
- I mentioned in my post on the MoT Club about using automation for monitoring - if it's good enough to give you confidence to release, why not use a subset of those checks to establish whether it's still running later? (Also sketched below.)
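Here's the sort of five-line positive check I mean, as a hedged sketch - the host, endpoint, and field names are all invented for illustration, and I'm assuming mocha as the runner and Node 18+ for the global fetch:

```typescript
// A positive check of one endpoint - the starting point for exploration.
import * as assert from "assert";

describe("GET /users/:id", () => {
  it("returns the requested user", async () => {
    const res = await fetch("https://api.example.com/users/42"); // invented endpoint
    assert.strictEqual(res.status, 200);
    const body = await res.json();
    assert.strictEqual(body.id, 42);
    // From here, exploring is cheap: make the id huge, negative,
    // non-numeric, or omit it entirely, and see what comes back.
  });
});
```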
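And a sketch of the monitoring idea - re-running a smoke-level subset on a timer. The health endpoint and the five-minute interval are assumptions; in practice you'd feed failures into whatever alerting you already run, rather than the console:

```typescript
// Reusing a release check as a liveness probe.
import * as assert from "assert";

async function smokeCheck(): Promise<void> {
  const res = await fetch("https://api.example.com/health"); // invented endpoint
  assert.strictEqual(res.status, 200);
}

// Every five minutes, re-run the same check that gated the release.
setInterval(() => {
  smokeCheck().catch((err) => console.error("smoke check failed:", err));
}, 5 * 60 * 1000);
```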