Monday, 12 November 2018

NottsTest Lightning Talks - November 6th

Background

I'm one of the organisers of NottsTest, and the biggest challenge certainly isn't the venue, the beer or the pizza. It's content. You're asking people to give up their time to prep, rehearse, travel and deliver content for free to (mostly) strangers.

November was easy though. We'd agreed back in August that our hosts would provide a talk in November - a retro on implementing Modern Testing as a practice across all of Engineering, following on from the talk their Head of Engineering had given in early summer about his plans to kick it all off.

Then things got tricky. Our host felt it was in bad taste to be championing cutting-edge new practices whilst people's jobs were at risk.

We weren't going to find another speaker with ~2 weeks to go, and neither of the organisers had a talk up their sleeve that we could roll out or recycle. We switched the session to lightning talks, but gave ourselves wiggle room with a backup option of Lean Coffee in case the lightning talks weren't working. We needn't have bothered - the lightning talks were a cracking success!

The Talks

First, I spoke about The Unusual Value of Testing, an experience report from the week. Demoing an API test against a mock API that validated responses against a JSON Schema led other members of the team to take the idea further: applying the same validation to the real implementation of the API, moving the API to ReadyAPI and generating the schema automatically, and generating the entire UI form structure (with matching validation) from the same JSON Schema. This made the test entirely redundant, since we could be certain (within reason) that the contract the schema enforced would be honoured throughout.
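For flavour, here's a minimal sketch of the kind of check I demoed - assuming a mocha-style runner plus the node-fetch and ajv libraries, with a hypothetical mock endpoint and schema (none of these are the real project's names):

    // A sketch only: validate a mock API's response against a JSON Schema.
    // The endpoint, schema and field names are invented for illustration.
    const assert = require('assert');
    const fetch = require('node-fetch');
    const Ajv = require('ajv');

    const customerSchema = {
      type: 'object',
      required: ['id', 'name', 'email'],
      properties: {
        id: { type: 'integer' },
        name: { type: 'string' },
        email: { type: 'string' },
      },
    };

    it('mock API response honours the customer contract', async () => {
      const res = await fetch('http://localhost:3000/customers/1'); // the mock API
      const body = await res.json();

      const validate = new Ajv({ allErrors: true }).compile(customerSchema);
      assert.ok(validate(body), JSON.stringify(validate.errors));
    });

Once the same schema drives the real API and the UI form, a check like this has nothing left to tell you - which was rather the point.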




Next, Dave Rutt spoke about procrastination: it's not laziness, but a scientific inevitability when you're faced with more time to perform a task than you need. If you've got 9 months to build a product, you'll procrastinate because you've got loads of time, your monkey brain finds you more entertaining/engaging tasks to do, and your limbic system never gets engaged. If you've got 2 weeks to deliver an iteration, you'll get on with it. Defeat your monkey brain with agile!



Third, Keith spoke about hashing. Not that kind of hashing. Nope, not that kind of hashing either. This was about an athletic activity! Hashing is a team effort to complete a run laced with dead ends and false trails. The idea is that the team completes the run together rather than the fastest runner finishing first - so, for example, the fast runner can run ahead and scout out the dead ends.



Next came Christian with an announcement on behalf of Ministry of Testing and the Software Testing Club. The Software Testing Clinic is starting in Nottingham, offering pairing and mentoring with other testers on specific skills, all for zero pounds of your money. I'm looking forward to trying this format out, and reckon it should be a great complementary event to NottsTest!



The penultimate talk was by George, who spoke about the feeling of being secure on the internet, and why you're wrong. He covered the recent site that demonstrated being able to see personal info from an Incognito window, and promised a fuller talk & demo in the future.



Lastly, I had another shot at a lightning talk, this time about being Five Out Of Five, what keeps me there, the Nine Kinds Of Motivation, and why everyone should do something that motivates them. This generated a lot of discussion, so we're considering running an entire evening on careers, CVs, interviews and even the awkward topic of tester salaries!

Wednesday, 11 July 2018

30 Days of Automation in Testing - Days 1 to 4




I've been meaning to complete one of the Ministry of Testing's 30 Days of Testing challenges for a while. I got about two thirds of the way through the first one as a team exercise, then everyone "got busy with other things", and it died a horrible death, with the checklist languishing on the wall for a couple of months as a reminder of how we'd failed.

When the latest one, 30 Days of Automation in Testing, was announced I tweeted this:

And so here we are, with 11 days of July done, and I thought I had better start writing up what I've been doing & thinking so far.

Day One - Compare definitions of "automation" and "test automation".
I don't think I quite appreciated how stark the contrast was going to be. Don't get me wrong - this was like a leading question - we all knew these definitions were gonna be incongruous.

Automation - methods and systems for controlling other external systems, appliances and machines.
Test Automation - a big mixed bag. Skills, frameworks, packages, test drivers, even the tests themselves.

If I take what I know about the field of test automation and use the definition of automation to identify what within that field is "the automation", I end up drawing a big circle around everything that resides on the machine up to, but not including, the software being "automated". To be clear, in a Selenium-type context, "the automation" includes:

  • nothing of the human
  • the high-level language in which the tests are written (e.g. JavaScript, Java)
  • any cucumber-type fixtures and any other sprinkles you like in your repo
  • the libraries which enable and structure those tests (e.g. mocha)
  • the engine that provides the external interface (e.g. Selenium)
  • possibly any purpose-built third-party access mechanism, like the WebDriver implementation for/within your browser
From here, we've got "a system for controlling the external system" - the website running in the browser.
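To make those layers concrete, here's a minimal sketch assuming mocha and the official selenium-webdriver Node bindings; the URL and expected title are placeholders:

    const assert = require('assert');                      // the high-level language layer (JavaScript)
    const { Builder } = require('selenium-webdriver');     // the engine providing the external interface

    describe('the system being controlled', function () {  // mocha: the library that structures the tests
      it('loads the home page', async function () {
        this.timeout(30000);
        // Builder talks to the WebDriver implementation for/within the browser.
        const driver = await new Builder().forBrowser('chrome').build();
        try {
          await driver.get('https://www.example.com/');
          assert.strictEqual(await driver.getTitle(), 'Example Domain');
        } finally {
          await driver.quit();
        }
      });
    });

Everything in that file, and the libraries behind it, sits inside the circle; the website running in the browser does not.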



Day Two - Share something from an automation-related book by Day 30

It's not Day 30 yet. I've had a few books on my todo list for a little while, but Alan Page's AMA on Modern Testing has bumped his book on Leanpub, The "A" Word, right up my list. Gonna try to tackle that this month.


Day Three - Contribute to the Club about automation

I spoke here about my experience of choosing an API test tool. It's a topic I've been revisiting in my head now that I've got a new role in a new company, wondering whether I should stick to what I know or whether a new context requires a new tool.



Day Four - Describe what types of testing automation can help with

Sounds like a straight up exam question, no?

The bog-standard answer: regression testing. Using some previously written code, I can check that a subset of pre-existing functionality that used to work still operates to predetermined standards.

But that's the boring answer.

The real meat here is in feeling the boundaries.
  • I don't find automation over accessibility gives me any confidence at all, but some tools can identify some things that are quite probably wrong.
  • Automation of usability is impossible (for some definitions of usability).
    • If you include rendering here, then visual checking for browser compatibility comes in, and I do find that cheap & sexy. Viv Richards speaks well about this online, and spoke about it to a packed room at NottsTest earlier this year.
  • I love using automation to explore an API. I can sit with five lines of code, written as a positive check of a particular endpoint, and play with it to explore the behaviour: what if this value were larger, or longer, or omitted? (There's a sketch of this just after the list.)
  • I mentioned in my post on the MoT Club about using automation for monitoring - if it's good enough to give you confidence to release, why not use a subset of those checks to establish whether it's still running later?
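Here's the sort of five-line springboard I mean - a sketch in Frisby (running under Jest), with a made-up endpoint and payload:

    // A positive check of a hypothetical endpoint - the starting point for exploration.
    const frisby = require('frisby');

    it('POST /orders accepts a well-formed order', () => {
      return frisby.post('https://api.example.com/orders', { productId: 42, quantity: 1 })
        .expect('status', 201);
    });

    // Now play: what if quantity is 0? -1? 1e9? A string? Omitted entirely?
    // Each variation is a one-line tweak and a fresh question for the API.
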
Automation isn't just about testing, though. I regularly script my way around problems. I had a test environment that got trashed regularly, and each rebuild required fetching a dozen packages from a few different online repos. Since the versions were locked, I scripted the fetch. Then I took it a step further: I popped the output into S3 and scripted the pull, unzip and install of the contents. Effort: 1hr. ROI: eleventy-five inside a week. That's not test automation, but it certainly feels like automation in testing.
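As a flavour of that script, here's a sketch in Node with hypothetical package URLs and bucket name, leaning on the AWS CLI for the S3 upload (the restore half isn't shown):

    const https = require('https');
    const fs = require('fs');
    const { execSync } = require('child_process');

    // Versions are locked, so these URLs never change between rebuilds.
    const packages = [
      'https://repo.example.org/packages/widget-service-1.4.2.zip',
      'https://repo.example.org/packages/auth-module-2.0.1.zip',
    ];

    function download(url) {
      const file = url.split('/').pop();
      return new Promise((resolve, reject) => {
        https.get(url, (res) => {
          res.pipe(fs.createWriteStream(file))
            .on('finish', () => resolve(file))
            .on('error', reject);
        }).on('error', reject);
      });
    }

    (async () => {
      for (const url of packages) {
        const file = await download(url);
        // Push each archive to S3 so the restore script can later pull, unzip and install it.
        execSync(`aws s3 cp ${file} s3://my-test-env-packages/${file}`, { stdio: 'inherit' });
      }
    })();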

Tuesday, 29 May 2018

Hacking the rules (in a safe space)

Have you ever played a game of Shared Assumptions? You might've. I made the name up.

It's a game I've seen used in training courses on specification ambiguity, and also something I'd swear I invented once when drunk.

Do you remember Guess Who?


It's a game where players take turns asking yes/no questions about the appearance of the character on their opponent's card, aiming to be first to identify the character their opponent holds.

Shared Assumptions is a game where you play the same game with the same rules, except that you aren't allowed to ask anything about appearance. I've played this game with friends, with kids, with colleagues, with testers. It's great fun watching people use creative thinking to hack around the seemingly impossible rules. (Writing this reminds me of Nicola Sedgwick's awesome workshop on Gamification at TestBash 2015 - slides here)

If you've never played these rules, I truly encourage you to try this game.

When facilitating this game, I always emphasise that we're playing in a "safe space", because anything you ask has to tie back to a character's appearance via some other characteristic, and so is, by its nature, based on a social stereotype - often of a protected characteristic, like age or gender.

Sometimes the results are simple and creative:
  • remembers the "good old days"
  • remembers the release of the first Godfather movie
  • never worries about a bad hair day
  • visits an opticians on a regular basis
Sometimes the results are plain odd:
  • has large feet
  • likes beach holidays
  • allowed on a rollercoaster
  • has experienced a significant life trauma
I've loved playing and facilitating this game, showing groups that in about two thirds of games there's no winner because the pair didn't share an assumption somewhere along the line.

Watching team members dissect where they went wrong and debate whose assumption was incorrect was fascinating. Of course it's an argument between adults over a children's game, but it also highlights the people who want to understand why they're not getting the best possible results.

There's loads of positives about playing this game in a work setting and the lessons it can teach you about teamwork and the lines of the specification that were only ever implied. The single greatest positive I've taken from this game is the look on my kid's face when he realised I get paid to play Guess Who.

Wednesday, 2 May 2018

Choosing an API tool



A story about selecting tools from my previous job.

In the dark days, we explored as a user would, and when things changed, we explored again.

Later, it got lighter, and we used tools like Fiddler to help us explore. We saw more, and we used our tools to explore deeper than we could before.

Before too long, we started automating.

One time, we encountered a problem where we needed to know if a piece of common markup (in this case, a support popup) displayed correctly in all of our sites, in all of the browsers and in all of the languages. We automated visiting all of the pages on which this markup was displayed, taking a screenshot, then quickly reviewing the hundreds of outputs. The cost/benefit was obvious here - one person achieved more with browser automation in a day and a half than a group of people could have done using manual methods.
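For illustration, here's a minimal sketch of that kind of sweep in Nightwatch.js - the page list and selector are invented, and the real run iterated every site, browser and language combination:

    // Visit each page that renders the shared support popup and save a screenshot
    // for a quick human review afterwards.
    const pages = ['/en-gb/support', '/de-de/support', '/fr-fr/support'];

    module.exports = {
      'capture the support popup on every page': function (browser) {
        pages.forEach((path) => {
          browser
            .url(`https://www.example.com${path}`)
            .waitForElementVisible('.support-popup', 5000)
            .saveScreenshot(`screenshots/${path.slice(1).replace(/\//g, '_')}.png`);
        });
        browser.end();
      },
    };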

Later, we used our browser automation skills on other tasks and on other projects - some of it was for confidence in the project, and some was for confidence in the live environment (i.e. used for monitoring).

Important people saw this work and they wanted to see more of this. So we stopped.




Automating all of the things isn't necessarily beneficial, especially not when you're using a heavyweight tool (in our case Nightwatch.js and Selenium). We didn't want to build a massive body of tests where every variant took many seconds to iterate through. We needed to cover these with something more lightweight, and only use the UI tests when we needed them.


(Image source: Slideshare.net)

We do want to automate more of the things. We just don't want it to take hours running them through a real browser. We also don't want to automate everything just because we can - we want the things that are important to us, and will give us real information and confidence in an acceptable timescale.

Our next step: API Tests.

When we began automating using Nightwatch, we were a test team of 2. Now we're a test community of 6, working in a squad structure, and co-ordinating time isn't as easy. We couldn't simply ease this in; we'd need some sort of "big bang" effort, else people would be left behind because of the "impending deadline" for the "current important thing".

I got buy-in from the squad leads, booked a meeting room for an afternoon, and told everyone to come prepped for API Testing Fun with a tool of their choice. Everyone was excited to get into this. A few of us had done little bits of API testing before through a mix of Postman, Fiddler, JMeter, Runscope, PowerShell and JavaScript, but here we all committed to using a new tool that we hadn't used before.

Having done the most API testing previously, I took to public APIs to come up with challenges for the session that people would solve with their tools. The idea was that the challenges would represent those seen within our domain (e.g. GET/POST, redirection, OAuth), and that, other than me, nobody would have any domain knowledge advantage. We'd use a 30 minute block the next day to debrief.

For discovering APIs for the challenges, I stuck to things I'd used in the past plus a small amount of googling. If you don't have this, or want some variety, try https://any-api.com/ for some public and well-documented APIs. I wish I'd known about this then...

The session was fantastic fun. We had a mix of skills - some people had a natural aptitude for this sort of thing - and it lent itself to pairing as people worked out quite how apps authenticate with Twitter (this wasn't trivial, and the docs felt fragmented).

At the end of the session, we'd eliminated a few tools as not feature-rich enough for our use cases, or plain too ugly or hard to use. We ditched pyTest, Karate and a couple of others - everything, in fact, except Postman and Frisby.js, which seemed the most capable of completing the tasks.

I was investigating Frisby for this session, and you can see the challenges I set and my work to solve them on GitHub.

We regrouped two weeks later and ran a second session. We each picked either Postman or Frisby (except the two of us who had used those tools previously - we switched). I provided a new set of challenges, this time inside our business domain: real tests against our live APIs. Given the limited experience most people had gained in API testing by this point, this was still a non-trivial effort. We learned loads and gained velocity thanks to our domain knowledge. The result of the session? Surprisingly inconclusive.

As it turns out, these tools have different strengths and are useful for different things. We decided that Postman was great for exploratory testing and Frisby was great for regression testing. Postman has testing capabilities, but they felt primitive compared to what you can do in Frisby; Frisby could be used for exploratory testing, but it'd be time consuming. We decided to start implementing regression tests in code whilst also purchasing Postman Pro licenses for the team to use for feature work.
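As an illustration of that split, this is the shape of a regression check we'd commit in Frisby (the endpoint and fields are placeholders), while the equivalent exploratory poking stayed interactive in Postman:

    // A sketch of a committed Frisby regression check; the endpoint and expected
    // fields are hypothetical stand-ins for our real API.
    const frisby = require('frisby');
    const { Joi } = frisby;

    it('GET /customers/1 honours the agreed contract', () => {
      return frisby.get('https://api.example.com/customers/1')
        .expect('status', 200)
        .expect('jsonTypes', {
          id: Joi.number().required(),
          name: Joi.string().required(),
          email: Joi.string().required(),
        });
    });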

The team felt enabled - every time we considered automating a browser interaction, we could immediately consider pushing the check down from the UI to the API instead.

Of course, the journey doesn't end there. When can you push an API call down to a simulated call in an integration test? And so on to component tests, then to unit tests.

Friday, 13 October 2017

The nonsense of gender-influences on testers

Have you been watching Duck Quacks Don't Echo? Lee Mack has guests on who present and then test lesser-known facts. For instance, did you know that:
  • People with blue eyes have a higher tolerance for alcohol than brown-eyed people
  • The chlorine in swimming pools smells because the pool is dirty
I'll be honest, the gags are naff, and not all facts are interesting facts, but I approve of their testing of things, and every once in a while, there's a fact that tickles my professional interest. For instance, take these three facts:
  • Men are better at multitasking
  • Women are better at remembering driving routes
  • Taxi drivers have a larger hippocampus than regular people
If these were true, they'd give some interesting ideas for lots of aspects of testing and test management, not limited to task assignment and team selection. But it's all nonsense of course. If this is to be believed, a male tester would be a good choice of team member for a project with concurrent streams and context switching, whilst a female tester would be a good choice for accurate repro steps and reliably repeating tasks they've seen demonstrated. But surely everyone with some years of industry experience has met members of both genders who have admirable skills in both areas, at a level to aspire to? I certainly have. Repeatedly.

But there was science! Admittedly, it's "edutainment" science, but they had people on with doctorates who explained things. I struggled to reconcile this. Then they gave me the fact about the taxi drivers, and it was all made clear.

Taxi drivers have a larger hippocampus because of their constant effort in tactical route selection and their dependency on short- and long-term memory. Like a trained muscle, practising this activity makes them better at it, and as they hit the peak of their training, the task becomes easier.
It stands to reason, then, that the male taxi driver will follow a route better than the female new driver.

My takeaway here is that an experienced person would trump any gender-enabled amateur, and that anyone can do anything they want to with some practice. As a very tall person who has attempted basketball and racket sports, this would appear to hold true.

Monday, 29 August 2016

T7: Dig Deeper

During a recent conversation on Testers.io, I asked whether anyone had cool resources or ideas that I could take to my test team meeting as an activity to keep everyone thinking. My team is mixed-level, so I want activities that add XP for my juniors without boring the seniors. (Side note: I hate junior/senior terminology, and want something better. Ideas?)

Ideas weren't forthcoming, so I thought I'd start writing some of my own. If I give this project a name, I'll be able to scope it.

T7: Tools To Train Test Teams To Think

That's really not a cool name, but it'll do for now.

The first idea is Dig Deeper. This is a training exercise intended to encourage testers to think beyond "this works" and on to "this does what we want it to do".

This is derived entirely from a section of Explore It!, so all credit goes to Elisabeth Hendrickson. Her story was about an installer.

Scenario: a new software installer for the next version of the software.

In Elisabeth's scenario, the testers initially worked to a "this works" criterion, namely that the installer ran without error.

Works: Installer runs without error

This was shown later to be insufficient when a member of another team showed that the software wasn't actually installed.

Dig Deeper: Software is installed to correct locations, registry values appropriately set, application launches & performs some basic operations

There could be more "deep" criteria than this, but you can see how it works: take a seemingly reasonable test criterion and refine it.

We tried this in our last team meeting. The team got the idea very quickly, and solved the 4 example problems I'd prepared. I sought feedback, and it was... middling. These were the issues raised:
  • It was a bit simple, so doesn't really deal with the complex problems we deal with day-to-day
  • All of the examples were based on things we have done, so it's hard to separate yourself from your domain knowledge to answer the question "properly"
  • At least one of the examples was more "what else" than "dig deeper"

Seems like this could be a good tool, and it's my use of it that needs some work!

I used a very simple slide deck which you're free to pinch: Google Drive.

Any & all feedback welcome!

Friday, 24 April 2015

The Carwash Analogy

I was recently having a discussion with a developer friend of mine about why he should recruit testers (since they currently don't). It bothers me that the conversation didn't end with "Dan, you're absolutely right, I'm totally getting me some of them!".

Explaining testing well is no easy task. What if the company is doing well with their current quality level?

The problem here is that testers don't have a particularly tangible output. We provide a service. We deal in information, and our net output could be described as confidence.

What if we made the analogy between testing and a car wash? My friend has developers doing unit tests and sanity checks on the end-to-end process, so he's already at Level 2. It's 50p more than Level 1, and probably £2 more than not washing his car. Adding testers dials you up to Level 8. The car wash is much more thorough. There's some premium soaping and scrubbing that's happening at the same time as the Level 2 stuff. There's a bunch of stuff happening that you were never going to get at Level 2 that takes a little longer. Waxing, buffing and the like. Totally premium, and totally costs a few quid more.

So what's the result?

Either:
* you know your car is cleaner as a result of getting Level 8, or
* you're more confident that dirt that was probably removed by Level 2 is definitely gone now

Don't fool yourself. Level 8 doesn't mean sterile. But you certainly gave it your best try. 

Testers don't actually remove issues. I also think this analogy is imperfect in that it draws a parallel between what's probably a perfectly good wash and developers checking their own code, which I feel understates the importance of testing. All the same, I might try this on my developer friend and see if it helps.

Be careful: clean cars are addictive. Once you've sat in something cleaned at Level 8, you'll wonder quite how good Level 9 could be!