Smoking out the leak

According to Wikipedia, smoke testing was probably first used in the context of plumbing, referring to tests that detect cracks, leaks, or breaks in closed systems of pipes. A QA in the software field may feel like a plumber sometimes, using various wrenches to correctly connect the series of tubes that make the application work for the end user.

This is where a smoke test comes in so handy – especially when the application is in its infancy, undergoing a lot of code churn and with fewer baked-in features. Developers are committing code regularly and the build server is running non-stop, generating multiple instances of the application to be tested by the overworked plumber … er, QA.

Decisions need to be made

When it comes to testing, not all plumbing problems are the same. Software is complex and every line of change can affect any of the existing code, leading to innumerable permutations of paths for the QA to test. There is never enough time. As viable test candidate builds are spun out by the build server, the QA has to decide:

  • “whether to test or not to test”: Decisions have to be made whether to test the build at all. Obviously, if there is a compilation error, if static analysis reports linting errors, or if there is any other build issue, there is no point in testing the build.
  • “what to test”: Having decided the build is correct, the next decision is to figure out the scope of the code change and how it affects the quality of the application. Diffidence and distrust are a QA’s best friends in this regard … especially as a starting estimate. Let the code commit prove it does not reduce the quality level of the application.
  • “who should test”: If the code change needs to be tested, the question must be asked – who is best suited to test it? Sometimes it’s best that the developer runs the unit test automation and validates the build manually with a smoke test, confirming the application meets basic entry-level criteria before the busy QA gets a crack at verifying the code change.
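The first decision – whether a build is even worth testing – is often automated as a gate in the build pipeline. A minimal sketch in Python; the `make build` and `make lint` commands are placeholders standing in for whatever build and static-analysis steps your project actually uses:

```python
import subprocess
from typing import Sequence

def build_is_testable(gates: Sequence[Sequence[str]]) -> bool:
    """Run each gate command in order; the build is only worth
    testing if every gate (compile, lint, etc.) succeeds."""
    for cmd in gates:
        if subprocess.run(cmd, capture_output=True).returncode != 0:
            print(f"gate failed: {' '.join(cmd)} - no point testing this build")
            return False
    return True

# Hypothetical gates - substitute your project's real build/lint commands.
GATES = [["make", "build"], ["make", "lint"]]
```

A CI server would call `build_is_testable(GATES)` right after the commit lands, and only hand the build to QA (or to the smoke suite) when it returns `True`.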

Smoking out errors

Developers are by definition the ones who know their code best – after all they are the ones who wrote it. If you consider the QA role as helping to highlight the work of developers, then smoking out errors early and often allows said developers to fix anything that they missed before those errors make it out to end users.

Good developers already write unit tests and commit them along with their code. Ideally these unit tests verify the various paths inside the code change. The interfaces to the module under test are typically mocked so that unit tests run blazingly fast and don’t depend on the correctness of other components.
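That mocking pattern can be sketched with Python's `unittest.mock`; the `Checkout` class and payment gateway here are invented purely for illustration:

```python
from unittest.mock import Mock

class Checkout:
    """Hypothetical module under test; depends on an external payment gateway."""
    def __init__(self, gateway):
        self.gateway = gateway

    def purchase(self, amount: int) -> str:
        return "OK" if self.gateway.charge(amount) else "DECLINED"

def test_purchase_happy_path():
    # The gateway interface is mocked, so the unit test runs fast and
    # does not depend on the correctness (or presence) of the real component.
    gateway = Mock()
    gateway.charge.return_value = True
    assert Checkout(gateway).purchase(100) == "OK"
    gateway.charge.assert_called_once_with(100)
```

Because the mock replaces the real gateway, the test exercises only the logic inside `Checkout` – exactly the isolation that keeps unit suites fast.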

However, to ensure that code changes under test play well with existing components of the application, we need end to end tests. A smoke test is one such end to end test that tests basic functionality, but with the following characteristics.

I’m a little smoke test, short & sweet

Having understood the need for a smoke test, the QA needs to define it so that it is:

  • short, meaning the test scenarios can be completed in under 15 minutes.

A smoke test that is too long becomes a bottleneck to code flowing through the development pipeline. QA needs to make the smoke test easy for developers to execute themselves – or, better still, convert it into an automation script that serves as the entrance criterion for any code change making it into the system.

  • sweet, meaning the areas covered by the test are the classic “happy path”.

A smoke test needs to cover the easy and efficient workflow, touching the basic functionality and successful scenarios. Smoke testing helps ensure that code changes don’t mess up something so simple and elemental that it’s a waste of time running more expensive functional or regression testing.
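Put together, "short and sweet" amounts to a handful of happy-path checks run under a hard time budget. A sketch of such a runner; the individual checks listed are invented placeholders for your application's real ones:

```python
import time
from typing import Callable, Sequence, Tuple

def run_smoke_suite(checks: Sequence[Tuple[str, Callable[[], bool]]],
                    budget_seconds: float = 15 * 60) -> str:
    """Run each happy-path check in order, failing fast; flag a suite
    that exceeds its budget, since a long smoke test is a bottleneck."""
    start = time.monotonic()
    for name, check in checks:
        if not check():
            return f"FAIL: {name}"
    elapsed = time.monotonic() - start
    if elapsed > budget_seconds:
        return f"TOO SLOW: {elapsed:.0f}s"
    return "PASS"

# Hypothetical happy-path checks - substitute real ones for your app.
SMOKE_CHECKS = [
    ("app starts", lambda: True),
    ("user can log in", lambda: True),
    ("main page renders", lambda: True),
]
```

Only a `PASS` result admits the build into deeper functional or regression testing.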

So, go forth and smoke out those pesky failures, QA … as soon as code is committed, ideally by the developers who made the commit, but definitely with tests that are short and sweet.

How to Test a QA candidate?

A good candidate for a QA role should exhibit the following traits:

  1. Open mindset to find creative ways to break things (functionality, process, expectations)
  2. Ability to assimilate new paradigms (maybe no prior experience in language, methods)
  3. Curiosity about how things work (breaking down into component systems & connections)
  4. Identify high value end user workflows (who is most impacted by which functionality)
  5. Attention to detail (focus on unit under test, abstracting external dependencies)
  6. Wear different hats and empathize with different points of view (testing roles)
  7. Bug finding effectiveness (spot and raise issues)
  8. Debugging skills (root cause analysis & finding related issues)
  9. Communicate observed behaviors (both working and non-working scenarios)
  10. Prioritize issues based on end user impact (lower priority to technically complex, but low probability bugs; higher priority to issues that affect end user workflows & cause reputation damage)

To test these traits, we can devise contrived scenarios embedded with defects and ask the candidate to test them.

Scenario 1: Show a screenshot of a Unix terminal with some obscure code, and ask the candidate to explain what is going on.

Things to look for: 

  • Does the candidate verbalize the observed behavior?
  • Is there a hypothesis put forth, followed by validation?

Scenario 2: Ask the candidate to explain how to test a login form.

Things to look for: 

  • Does the candidate ask questions related to user expectation?
  • Is there an attempt to understand how the form fits into the larger model?
  • Can the candidate formulate the happy path in layers of detail?
  • How does the candidate validate functionality at each layer of the login process?
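A strong answer layers the login checks from happy path outward. A sketch of what that layering looks like in test form; the `login` function and its fixture data are invented stand-ins for a real form backend:

```python
def login(username: str, password: str) -> dict:
    """Hypothetical login handler standing in for the real form backend."""
    VALID = {"alice": "s3cret"}  # invented test fixture, not real credentials
    if not username or not password:
        return {"ok": False, "error": "missing field"}
    if VALID.get(username) == password:
        return {"ok": True, "user": username}
    return {"ok": False, "error": "bad credentials"}

def test_login_layers():
    # Layer 1: happy path - valid credentials succeed.
    assert login("alice", "s3cret") == {"ok": True, "user": "alice"}
    # Layer 2: field validation - empty input is rejected with a clear error.
    assert login("", "s3cret") == {"ok": False, "error": "missing field"}
    # Layer 3: negative path - wrong password is rejected.
    assert not login("alice", "wrong")["ok"]
```

Candidates who naturally progress from layer 1 to layers 2 and 3 – and then ask about lockouts, session handling, and how the form fits the larger system – demonstrate exactly the layered thinking the scenario probes for.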

Scenario 3: Describe a fake application with multiple stakeholders and different tiers of end users. Ask the candidate to describe how to formulate the test strategy.

Things to look for: 

  • Does the candidate ask questions about the stakeholders & end users?
  • Is there an attempt to break down the application into stories/workflows from the end user perspective?
  • Can the candidate capture the top 5 user stories and prioritize them based on who the most important stakeholders/users are?
  • Are there estimates of the relative sizing of the stories, to determine the QA effort?
  • Is there time set aside for directed/time-boxed experimental testing? Or is it all scripted testing?
  • How does the candidate determine when testing is done?
  • Does the candidate attempt to empathize with different types of end users?

Scenario 4: Demo a dummy application with known UI issues.

Things to look for: 

  • Does the candidate verbalize the testing flow?
  • Are there questions about expected functionality and acceptance criteria?
  • Is there a systematic breakdown of what is tested and how?
  • How organized is the candidate in keeping notes/observations of behavior?
  • Can the candidate find the known bugs?

Who are you testing for?

Know your audience! Before undertaking a test campaign, make sure you understand who is paying for the end result – both in terms of actual dollars, and in terms of the cost of owning and using the software product.

QA campaigns can pass or fail depending on how well you can answer the question – who are you testing for? This fundamental question guides the seasoned professional from the start of test planning, through testing release-candidate software end to end and prioritizing bugs, to ultimately finalizing the answer to the question “Is testing done?”

Why understand your end user?

  • Because it gives QA context and sensitivity to how the software behaves, as distinct from how it functions (from a technical point of view).
  • Because it helps you focus your testing goals on ultimately serving the end users’ requirements, as opposed to testing against the requirements of the engineering team.

Who is paying for this software? Who is managing it?

  • Be aware that a piece of software often has different stakeholders who approach it from different points of view.
  • Apart from end users, there are typically administrators, installers, and maintenance staff who all have different ways of using the same code.
  • Often the software acts as a glue layer, bridging data/assets produced by one set of people/systems with another group of end users who consume those assets.

Thoughtful analysis and detailed answers to these questions help flush out nuances about user behavior that can guide your testing efforts.

A successful QA campaign will eventually require you to wear the hat of each of these users and include workflows/scenarios that satisfy their expectations. The sooner you can grasp what these roles are, and what they hope to do with the software, the better equipped you are to learn the “why” of what the software is doing along with the “how”.