Monday, 13 November 2017

RiskStorming - Mapping Risks with TestSphere

Why RiskStorming?

Beren van Daele and Andreas Faes were chosen to give a workshop at the Agile & Automation Days in Kraków. In that workshop, they were going to use the famous TestSphere cards as a tool to create a test strategy. In the end, Andreas sadly couldn’t make it and Beren asked me if I wanted to jump in. Feeling a bit guilty and thrilled at the same time, I agreed.
In our first hangout we decided that we wanted the resulting test strategy to be risk-based, and I volunteered to look for a nice, visual format for generating risks that would be fun to use during a workshop. Andreas said something along the lines of “this is not easy to find”, and he was absolutely right. In the end, I found nothing that really suited our needs, so one weekend I just put my phone on the table and sorted the TestSphere cards around it, only to think to myself that this looks like a circle. Thus “RiskStorming” was born.

The idea behind it is to use the TestSphere cards to steer a discussion about product risk and to help come up with a test strategy that actually tackles the identified risks.


How does RiskStorming work?

RiskStorming can help you guide your thinking towards product risks and how to mitigate them, whether on your own or with a group of people. I will first describe how RiskStorming generally works and then address some topics that can come up when you facilitate a group session.

(Re)Inventing your test strategy

RiskStorming itself is structured in circles, with the application under test in the centre. I found it helpful to have some representation of that application on the table while you go forward, like a smartphone showing the application or any other token, e.g. the company’s mascot. This helps to visualize the rather abstract concept of software. If you are lucky and actually are testing a smartphone app or website, you can even access it during the RiskStorming session.

The first circle surrounding the application focuses on “Quality Aspects”, which are covered by the blue TestSphere cards. Think about the most important Quality Aspects for your application and lay them around your representation. The hard part is not coming up with potentially important Quality Aspects, but acknowledging that not all of them can or will be equally important. Focus. Our board tries to foster this by actively limiting the number of Quality Aspects to six.

The second circle focuses on the actual risks. Take a pen and sticky notes and start writing down risks that threaten the application, specifically those connected to the Quality Aspects you chose as the most important in the previous phase. For example, if you chose “Security and Permissions”, “loss of personal credit card information and social security numbers” might be a valid risk to your application. Put the sticky notes on the Quality Aspects they belong to.

The last circle deals with risk mitigation. Look at the identified risks and use the full TestSphere card deck to find heuristics, patterns and techniques that can help you mitigate them. Since the TestSphere cards are by no means complete, you might come up with more ideas than you find cards. Write down your ideas on additional sticky notes, preferably in a different colour than the ones showing the risks, and add them. The idea behind using TestSphere cards is to give you fresh inspiration on how to approach a testing problem; for example, a lot of testers don’t think about utilizing “Log-digging” for their testing, yet it is a very powerful tool.
In the end, you will end up with a mixture of your own sticky notes and TestSphere cards, which is absolutely fine.

Using RiskStorming in a timebox

I believe it is helpful to timebox the RiskStorming phases: it makes sure you stay on track and arrive at a result in a given amount of time, because the meeting you scheduled for you and your team will end eventually. You can always double-check the resulting test strategy after a good night’s sleep.
In my experience so far, I would spend most of the time on risks and on mitigating them, and not too much on the Quality Aspects. Here is a setup that has worked for several rounds now:

  • Phase 1: Discussing Quality Aspects: 10 minutes
  • Phase 2: Finding Risks: 25 minutes
  • Phase 3: Finding Risk Mitigations: 25 minutes
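The three phases above add up to a one-hour session, which can be handy to sketch out before a meeting. Here is a small illustrative Python snippet (not part of the workshop material; the phase names and durations are simply the ones listed above) that turns the timebox into a schedule of offsets from the session start:

```python
from datetime import timedelta

# Phase durations from the timebox setup above (minutes).
phases = [
    ("Discussing Quality Aspects", 10),
    ("Finding Risks", 25),
    ("Finding Risk Mitigations", 25),
]

def schedule(phases):
    """Return (name, start_offset, end_offset) tuples measured from t=0."""
    plan, elapsed = [], timedelta()
    for name, minutes in phases:
        start = elapsed
        elapsed += timedelta(minutes=minutes)
        plan.append((name, start, elapsed))
    return plan

for name, start, end in schedule(phases):
    print(f"{start} - {end}: {name}")
```

Printing the plan at the start of the session makes it easy for the facilitator to announce when each phase begins and ends.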

Facilitating a RiskStorming session

In addition to knowing the rules and having an idea about good timeboxes, it is helpful, when facilitating a session, to be aware of questions and struggles participants might have.

Thinking in bugs, not in risks

More than one group noted down bugs instead of risks, e.g. “order of elements is wrong” instead of “users don’t finish the ordering process” for the Quality Aspect of “User Friendliness”. This can lead to a shallow test strategy that only takes care of very specific incidents. Make sure to remind the participants to come up with risks, not bugs.

Making decisions by reducing resources

In the beginning, we did not give participants a limit on the Quality Aspects, but we changed this once Beren came up with the board. The point is not that there are never more than six important Quality Aspects, but to foster discussions and eventually a decision by the whole group. These discussions give the participants a better understanding of each other's view on the product and what's most important about it.

Giving them time to read the cards

There are a lot of TestSphere cards. Your participants should have the time to read them properly when trying to assign certain techniques or heuristics to their test strategy. If possible, make the cards available to them before the session; if not, you might want to prolong the “Finding Risk Mitigations” phase.

Make them read the cards

The participants should not only have the time to read the cards, they should actually do so. Look out for people only reading the headline and then applying the card in a wrong way because they have not grasped its whole meaning, but just assumed they knew what the card is about. Encourage them to take their time and read the cards entirely, especially if a term is new to them.

Don’t restrict them to the cards

It is not possible to cover every technique, pattern, heuristic or oracle that ever came up in the world of testing with a TestSphere card. This means the participants may come up with ideas for how to test for or mitigate an identified risk that are not covered by a card. Encourage them to still write them down and put them on the board. Don’t let a finite set of cards get in the way of finding the best possible test strategy.

Make them use the cards

Not restricting the participants to the cards does not mean they should not use them at all. The idea of using the TestSphere cards is to give people new ideas for how to approach a testing strategy. The cards can inspire them to use methods they have never tried before; for example, a group that previously approached its testing problems strictly from a business point of view is often surprised that “Log-digging” can provide them with a lot more options than before.
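To make “Log-digging” a little more concrete for groups who have never tried it: even something as simple as counting log lines per severity can show where an application complains the most, and that is often enough to suggest a new risk or charter. The log format and messages below are invented purely for illustration:

```python
import re
from collections import Counter

# A toy excerpt of an application log (format and content invented).
sample_log = """\
2017-11-10 09:14:02 INFO  user 42 logged in
2017-11-10 09:14:05 ERROR payment gateway timeout
2017-11-10 09:15:11 WARN  retrying request
2017-11-10 09:15:12 ERROR payment gateway timeout
"""

# Count lines per severity; clusters of ERRORs hint at risky areas.
levels = Counter()
for line in sample_log.splitlines():
    match = re.search(r"\b(INFO|WARN|ERROR)\b", line)
    if match:
        levels[match.group(1)] += 1

print(levels.most_common())
```

In this toy log, the repeated payment gateway timeout would point a group straight at a risk they may never have considered from a business point of view.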

Be aware if they lack experience in an area

In some sessions, we found that participants did not have much experience with testing in general or with certain aspects of testing, e.g. security testing. Try to place in each group at least one person proficient in topics that might come up. If that is not possible, be prepared to do more coaching.

Use the board

Beren came up with a board that helps you keep in mind what the different phases are about, and it also makes the session more fun since it amplifies the gamified character of RiskStorming. Then Thomas Harvey made it beautiful. If you print it in the right size, the cards fit the slots perfectly.




Here is the board in different sizes for you to download:

TestSphere Riskstorming 4xA3 - A1

TestSphere Riskstorming 8xA4 - A1

TestSphere Riskstorming A1 (can be printed A3 or A4)

TestSphere Riskstorming A3

Impressions

Here are a few pictures of what this actually looks like:



Wednesday, 26 July 2017

Pathway Exploratory Testing

Katrina Clokie has a very good section on her blog she calls “pathways”: curated link lists and training exercises you can use to get started with certain software testing aspects. Katrina offers these pathways for Security Testing, Mobile Testing and many other topics. You should definitely check them out.

When a colleague asked me a while ago to send him an introduction to Exploratory Testing, I created a list of blog posts I think are helpful for getting started. His answer was something along the lines of “oh my, this is a complete pathway”. I think it was not, but it was not that far off from being one.

Therefore I added some more links and thought of some exercises to evolve this into a proper pathway. In the meantime, more people have asked me for information about Exploratory Testing, so from now on I can answer with a link and an offer of a coffee together.

Just like Katrina's pathways, this is a rabbit hole: a lot of the articles below link to other articles, which will forward you to even more sources. I hope this will be a fun ride for you.

STEP 1: What is Exploratory Testing?

The questions often begin with what Exploratory Testing actually is. And perhaps, more importantly, why would you use it in the first place? Sometimes people have a negative bias, as if Exploratory Testing were just toying with the software and could only be an addition to test-case-based approaches. Sometimes people have seemingly heard great things and now want in on this cool "new" method. Here are a few articles that describe what Exploratory Testing is and how you can use it in your project:


Exercise: If you are currently not using exploratory testing techniques but are testing based on test cases, watch your next test execution very carefully: Are you really performing only the specified steps and checking the specified expected results, or are you doing more? Are you looking “left and right”? How do you tell people what you saw when you moved away from the script? How do you remember it yourself? Do you take notes that go beyond the test steps of your test cases? Are your cases/steps answering questions about risks?

STEP 2: How do I manage my Exploratory Testing?

Session-Based Test Management is the most popular method to introduce Exploratory Testing in projects. It helps make Exploratory Testing manageable and reportable, especially towards people outside of the testing team. Michael Bolton used the metaphor of putting the clouds (bursts of performed Exploratory Testing) into boxes so you can count those boxes.

Here are some articles about and experience reports on Session-Based Test Management:

STEP 3: How do I find test charters?

A topic that troubles a lot of people starting out with Exploratory Testing and Session-Based Test Management is generating test charters as opposed to generating test cases. Most of them are used to deriving test cases from some form of specification document, which verifies that the software works “as specified”, which is not necessarily “as intended”.
But how do you create missions for exploration in addition to, or instead of, test cases?

A good idea is to think about the goal you want to achieve with your respective testing mission; Simon's examples for different charter types can help you with this. I listed some more models and texts that can help you find test charters:
Exercise: Choose one of the above approaches and write down three test charters for the software you are currently working on. You can use the charter template provided in the link from STEP 2.
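Since the actual template sits behind the linked post, here is a hypothetical sketch of one widely used charter form ("Explore <target> with <resources> to discover <information>") as a tiny Python helper. The field names and the example values are made up for illustration, not taken from the linked material:

```python
from dataclasses import dataclass

@dataclass
class TestCharter:
    """A charter in the form: Explore <target> with <resources> to discover <information>."""
    target: str
    resources: str
    information: str

    def __str__(self):
        return (f"Explore {self.target} with {self.resources} "
                f"to discover {self.information}")

# A hypothetical example charter for an imagined web shop.
charter = TestCharter(
    target="the checkout flow",
    resources="an expired credit card and a slow network",
    information="how payment errors are shown to the user",
)
print(charter)
```

Even without any tooling, writing charters in this three-part shape keeps each mission focused on one area, one approach, and one kind of information you hope to uncover.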

STEP 4: How do I come up with test ideas?

Once you have the testing mission for a session down, the interesting part is having ideas of what to actually test to serve your mission. I have seen testers who suffered from a form of writer's block when they started their first test sessions, especially when they had not been involved in designing test cases and had only executed them in the scripted testing world they knew.

Fortunately, there are a lot of methods and tools out there to help you come up with test ideas. I listed a few below that you will hear or read about often when exploring the Exploratory Testing world more deeply, or which help me specifically. If you really, really want to go deep into this topic, follow the link to Erik Brickarp's text below and be blown away.
Exercise: Remember the test charters from the last STEP? Now it is time to perform these test sessions. As a first step, you don’t need to test the whole 90 minutes described in the linked introduction texts; start with 30 minutes if you like.

STEP 5: I've got to the end of my session. Now what?

After one or more sessions, a Debriefing has to take place. I have met several people who skipped these and asked: "Do I really need that Debriefing?". Short answer: yes. Long answer: although a lot of people find it tedious to add yet another meeting to their schedule, the Debriefing is still absolutely crucial. It helps to identify new test charters, spread the testing results across teams, identify holes in documentation and processes, and improve the overall testing. Regular Debriefings also help the testers develop their testing skills. All in all, Debriefings belong to the important meetings you really should attend. Here are some helpful links on how you can structure a Debriefing:
Exercise: A Debriefing is usually a meeting, and you should not do it alone by yourself. I still want to ask you to take your time and reflect on the test sessions you performed during the last exercise. Write down answers to the following questions:
What happened during testing? What did you find out? Were there things you wanted to test but couldn’t, and why couldn’t you? What is left to test in maybe another session, or did you find interesting new session ideas? What were they? And what were you feeling during the session, as a tester or as a potential user of your software? Was something annoying you, or were you positively surprised?

STEP 6: Are there tools, that can help me?

The most obvious tools for an explorer are those of a classical explorer: a notebook and a pen to write down what she finds during exploration. Still, a lot of people ask about software tools.
There are tools that help with Session-Based Test Management, for example by aggregating the session metadata in the TASK BREAKDOWN section. Since taking notes is a crucial part of a test session, I chose a note-taking tool, Evernote, for my projects. This solution works best for me at the moment, especially with the Mac client app.

I want to emphasise one tool in particular, and that is TestBuddy: TestBuddy is still under development and is being designed specifically for Exploratory Testing and note-taking by people who really love this style of testing. The prototypes I saw look very promising. The link below will bring you to a waiting list. Please get in contact with the folks at Qeek; they are eagerly waiting for your feedback and insights.

STEP 7: How do I document my testing?

In an environment that heavily uses scripted testing and test cases, testers usually document their testing by ticking off the steps of a test case as either "passed" or "failed". A test session in Session-Based Test Management does not work that way; instead, it reflects much more how a detective, a journalist or a scientist takes notes during an investigation or experiment. A lot of testers switching to Session-Based Test Management are quite surprised at the amount of writing they have to do during test execution, and they struggle to find the right balance.
My personal belief is that you do not write more documentation than you do when using test cases, because those have to be written, too. Test cases tend to be heavily documented; a lot of people just don’t connect this to their test execution, since they wrote the cases weeks earlier, not close to or even during testing as they do with a test session.
Another thing I want to add is that in test-case-based approaches, testers often don’t correctly document their actual testing, for example by ticking all test steps in a test case because they “kind of did this”, although they skipped several steps due to routine. They often don’t write down strange things (not bugs!) outside of the current test case's scope, and things get lost.

It's important to find a healthy balance when it comes to documenting your sessions. Here is a list of ideas and experience reports, and don’t worry: even experienced testers constantly question the way they take notes, as the last three links prove.
Exercise: Alan's 10 experiments in the last link are a great way to get started. How about giving them a go?

STEP 8: Will Exploratory Testing pass an audit?

I often hear people writing Exploratory Testing off as just playing with the software, a non-structured testing approach that does not survive the scrutiny of an audit. This is not true at all. Testers have been using Exploratory Testing techniques in heavily regulated environments. Here are a few links that can help you report your Exploratory Testing beyond a single session and make it audit-proof:
Exercise: Try to come up with a low tech testing dashboard for your application and discuss it with your team members.

Books

In addition to all these blog posts, PDFs and online articles, there are two books specifically about Exploratory Testing, which I recommend to you:


Wednesday, 21 June 2017

CDMET: a mnemonic for generating exploratory testing charters

I gave a workshop about exploratory testing a few weeks ago. Furthermore, some colleagues want to use session-based testing in another project and don’t have much experience with it so far. One topic both groups were eager to know more about is how to generate test charters: how do I find missions I want to explore during a test session?

My short answer to this is “focus on the perceived risks in your software and on unanswered questions”. This statement alone is not very helpful, so I came up with various sources that can point to interesting testing areas and charters. They are also good for starting to figure out the perceived risks.

While I clustered these sources, I found a little mnemonic to remember them: CDMET.
Conversation, Documentation, Monitoring and Earlier Testing. Alongside these four clusters, I listed various ideas that can help you find test charters. I find them especially useful when combined with other oracles, for example FEW HICCUPS from Michael Bolton.

My list is by no means exhaustive, but I still think it can help in finding new test charters.

Conversation

Conversation means every form of people speaking to each other. This can span from water cooler talk, over regular meetings you join, up to meetings you specifically create to talk about testing and risks.
  • talk to various people involved in the product
    • business departments; marketing; product development
    • developers; architects; security people
  • users; customers; managers
  • listen closely and make notes during every meeting you attend
    • daily; retrospective; grooming; planning; debriefing
    • product demonstrations; training
    • status meetings; jour fixes; risk workshops

Documentation

Documentation is everything that is written down and yes this includes source code. There are a variety of official specification documents, which you can use but you should not end there. There are also emails, group chats, user documentation, etc.
  • official specification documents
    • requirement documentations; use cases; business cases
    • user stories; customer journey; personas 
    • UX designs; mock-ups; feature descriptions
  • technical documentation
    • class diagrams, architecture descriptions
    • sequence diagrams, interface descriptions
    • source code
  • general project documentation
    • wiki or confluence for everything someone deemed worthy of writing down
    • chat logs; emails; project plans; risk lists; meeting protocols 
  • test documentation
    • test cases; test notes; test protocols; test reports
    • bug descriptions; beta feedback
    • automation scripts, code and reports
  • user documentation
    • manuals; handbooks; online help; known bugs
    • tutorials; product descriptions; release notes; FAQs
    • flyers; marketing campaigns

Monitoring

Monitoring encompasses everything that I connect with the actual usage of the product, because this is a very powerful input for generating new test charters. I therefore use the term a bit more loosely than people usually do.
  • technical monitoring
    • crash reports; request times; server logs
    • ELK Stack (Elasticsearch, Logstash, Kibana); Grafana
  • user tracking
    • usage statistics for features, time of day, any other contexts
    • interaction rates; ad turnovers
    • top error messages the user faces regularly
  • user feedback
    • App or Play Store Reviews; reviews in magazines or blogs
    • customer services tickets; users reaching out to product team via email
    • social media like Twitter, Facebook, LinkedIn, etc
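The "top error messages" idea above can be sketched in a few lines: aggregate the error events your monitoring collects and let the most frequent ones suggest which charters to explore first. The event names below are invented for illustration; in practice they would come from your tracking or ticket system:

```python
from collections import Counter

# Hypothetical customer-facing error events pulled from monitoring.
events = [
    "Card declined", "Session expired", "Card declined",
    "Upload failed", "Card declined", "Session expired",
]

# The two most frequent errors are strong candidates for new charters.
top_errors = Counter(events).most_common(2)
print(top_errors)  # prints [('Card declined', 3), ('Session expired', 2)]
```

Here, "Card declined" dominating the list would suggest a charter around the payment flow long before anyone files a formal bug.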

Earlier Testing

Earlier Testing is very useful to inform your future testing; it basically means: use what you figured out yesterday to know what you should look at today. This feedback loop can be even faster when you come across an interesting part of the software while already performing a test session. Note it down and create new charters afterwards.
If you played your hand right, Earlier Testing should blend in with some of the other clusters, because you should document your testing and tell other team members about it.
  • results & artifacts
    • test case execution reports; test notes; status reports
    • bug descriptions; beta feedback; debriefings
  • hunches & leftovers
    • whenever you went “huh?” or “that’s interesting" in the past
    • known bug lists; “can’t reproduce” issues
    • unfinished business (“there is so much more I can look at if I had the time")

Whatever helps

You see that some items in the clusters are not strictly separated; a meeting can have a written protocol, for example. It does not really matter whether you remember the meeting because you recall the talks you had or because you are flipping through the protocol you wrote down in an email.
The important part is that thinking about the conversations you had, the documentation you can read, the testing you already performed and the monitoring you can access can help you figure out what to test next. It surely helps me.



Tuesday, 16 May 2017

Mingling Conferences

Recently, a thread started on Twitter, originating with this tweet:


What then happened was developers asking testers to join their conferences and vice versa.
We then started gathering developer conferences and testing conferences that actively want both disciplines to mingle. At some of these testing conferences I have personally met developers; some are built entirely around people meeting; some state on their web pages that they want everybody to join.

As Twitter is a fleeting medium, I want to use this blog post to collect these conferences. I will state which conference it is, where and when it takes place, and whether it is a developer or tester conference. The latter turns out to be pretty easy to guess, since testing conferences love to put the word “test” in their name.

I want this list to grow, so please DM me on Twitter, write me an email or leave a comment so I can curate and update it. I am eager to get this blog post out before the energy to do so leaves me, so I will start with the conferences mentioned in the Twitter thread and research more over time, e.g. by crawling through the respective web pages for partnering conferences. I will also try my best to keep conference dates and places updated in the future, and the table below will hopefully become less ugly, too.
Wish me luck.

Oh, and if you are a UX designer, requirements engineer, product manager or something completely different and think: “Hey, why don’t you want to meet me? I want developers and testers at my conference, too!”, then also contact me and I will add you here. I will add anyone who wants to help all disciplines mingle more.

Here is the list:



| Conference | When? | Where? |
| --- | --- | --- |
| | 2017-08-24 to 2017-08-27 | Soltau, Germany |
| European Testing Conference | 2018-02-08 to 2018-02-10 | Amsterdam, Netherlands |
| | 2017-05-11 to 2017-05-12 | Bucharest, Romania |
| | 2017-05-10 to 2017-05-12 | Cluj, Romania |
| | 2017-10-06 | Munich, Germany |
| | 2017-11-13 to 2017-11-17 | Potsdam, Germany |
| | 2017-07-21 to 2017-07-22 | Munich, Germany |
| | 2017-01-15 to 2017-01-19 | Kiilopää, Finland |
| | 2017-04-06 to 2017-04-09 | Gran Canaria, Spain |
| | 2017-03-25 to 2017-03-27 | Rimini, Italy |
| | 2017-11-09 to 2017-11-12 | La Roche-en-Ardenne, Belgium |
| | 2017-10-26 to 2017-10-29 | Rochegude, France |
| | 2017-03-09 to 2017-03-12 | Ftan, Switzerland |
| | 2017-11-13 to 2017-11-17 | Potsdam, Germany |
| | 2017-06-15 to 2017-06-18 | Dorking, England |
| | 2017-19-15 | Zürich, Switzerland |
| | 2017-10-20 to 2017-10-21 | Linz, Austria |
| | 2017-03-23 to 2017-03-24 | Brighton, England |
| | 2017-01-26 to 2017-01-27 | Utrecht, Netherlands |
| TestBash Dublin | May 2018 | Dublin, Ireland |
| | 2017-10-26 to 2017-10-27 | Manchester, England |
| | 2017-11-09 to 2017-11-10 | Philadelphia, USA |
| | 2017-11-06 to 2017-11-10 | Malmö, Sweden |
| | 2017-05-19 to 2017-05-20 | Amsterdam, Netherlands |
| | 2017-10-12 to 2017-10-13 | Amsterdam, Netherlands |
| | 2017-11-02 | Ede, Netherlands |
| | 2017-09-25 to 2017-09-26 | Swansea, Wales |
| | 2017-11-26 to 2017-11-28 | Zwartkop Mountains, South Africa |
| | 2018-02-05 to 2018-02-09 | Munich, Germany |
| | 2017-10-14 | Stockholm, Sweden |
| | 2017-10-24 to 2017-10-26 | Ludwigsburg, Germany |
| | 2017-04-20 to 2017-04-21 | Lyon, France |


I hope a lot of you start going to the "other" conferences now. If not, we will have to take extreme measures: