Software Testing: Lecture Log

Friday 12th March: Revision session 1 — Category-partition method
We looked at Question 1 from the 2007-8 exam, and came up with this solution to the category-partition part. At the end it became apparent that declaring negative limit a “one-off” was perhaps not a great idea, because we want to see what happens when regex does and doesn't appear at the end of s; we should also really break “regex at start/middle/end” into three partitions (possibly as another independent characteristic of s). All in all, though, this is a pretty good solution: the steps are clear, the constraint application is effective and explained, and the tests look reasonable. There are more rigorous approaches to constraint application (see Ostrand & Balcer's paper), and you can also use decision trees to help, but this approach is clear enough for small problems — and perhaps a little faster. I'd expect the rest of the tests to be filled in, though! It's important to think carefully about which partitions are important and which aren't, and ideally to come up with a practical number so that you can write them out (or implement them, or whatever) in the time available. Here's another outline of a solution, which includes some ideas for parts 1 and 3 of the same question.
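
Purely as an illustration (and not the model answer), here's how a handful of partition combinations of that kind might turn into concrete tests. I'm using Java's own String.split(regex, limit) as a stand-in for the method in the question and assuming JUnit 4 as the harness; the particular combinations chosen are assumptions for the sketch, nothing more.

    import static org.junit.Assert.assertArrayEquals;
    import org.junit.Test;

    // Illustrative sketch only: String.split(regex, limit) stands in for the
    // method under test; the chosen partition combinations are assumptions.
    public class SplitPartitionSketch {

        @Test
        public void regexInMiddleOfS() {
            assertArrayEquals(new String[] {"a", "b"}, "a,b".split(",", -1));
        }

        @Test
        public void regexAtEndOfS_negativeLimitKeepsTrailingEmptyString() {
            assertArrayEquals(new String[] {"a", "b", ""}, "a,b,".split(",", -1));
        }

        @Test
        public void regexAtEndOfS_zeroLimitDropsTrailingEmptyString() {
            assertArrayEquals(new String[] {"a", "b"}, "a,b,".split(",", 0));
        }

        @Test
        public void regexAbsentFromS() {
            assertArrayEquals(new String[] {"ab"}, "ab".split(",", -1));
        }
    }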
Tuesday 9th March: Lecture 15 — Course review
To clear up some of the issues around citation, I've extended the Practical 1 Review slides a little, clarifying that citation isn't required (I just took the lack of it as an indicator that you weren't reading much), and also showing some examples of how to refer to other people's work in your own writing.

Today's main slides were a review of the entire Software Testing course, trying to draw out some of the important topics and big themes. I also asked for feedback on the course to try and improve next year's offering (you can provide feedback online if you missed the lecture), and quizzed those present for preferred topics and style for the revision sessions (now announced on the ST main page). I finished with a recap of the main sources of trouble in practical and exam work:

  • Make sure you understand the question — and note everything you should do to answer it fully.
  • Manage your time carefully: it can be easier to get the first 50% of the marks on a question than the last 20%, so always budget enough time to attempt all parts of the questions you're answering.
  • Explain your thinking, particularly if you're being clever.
Friday 5th March: Lecture 14 — Higher-order testing
Before the main set of slides I did a quick review of Practical 1, identifying some of the main issues to improve for next time.

The lecture on higher-order testing overlaps somewhat with the system testing material; we revisit the big picture, looking at the life cycle as a whole and at how the aspects of testing we've studied relate to the stages of the development life cycle. We also look at some of the final areas of testing in that life cycle: acceptance and installation testing. Some of the larger management issues are also discussed, particularly how you know when to stop testing.

Tuesday 2nd March: Tutorial 7 — Integration testing
Tutorial 7 explored integration testing, looking at the interactions between two classes from the point of view of data flow between them, and examining the coverage criteria we could apply. The usual things with coverage apply: some paths are impossible, so keep an eye out for them and remember that 100% coverage is often not possible; close approximations of path coverage (all coupling paths, for example) usually appear to be more effort than they're worth. Remember too that while we've mainly looked at coupling in the case of data entering a method (through parameters), you can also look at how values that are returned from a method get defined and where they then get used; you might also need to think about global shared variables, and so on.
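
To make the coupling vocabulary concrete, here's a tiny made-up pair of classes (nothing to do with the tutorial sheet) annotated with where the coupling defs and uses sit for a parameter and for a returned value.

    // Hypothetical example: parameter coupling and return-value coupling.
    class Account {
        private int balance;

        void deposit(int amount) {
            balance = balance + amount;  // first-use of the coupled parameter 'amount'
        }

        int getBalance() {
            return balance;              // last-def of the value returned to the caller
        }
    }

    class Teller {
        int processDeposit(Account account, int cash) {
            int amount = cash - 1;           // last-def before the call: the coupling def
            account.deposit(amount);         // call site: the value crosses the interface
            return account.getBalance();     // first-use of the returned value: the coupling use
        }
    }

Each last-def/first-use pair across the interface (caller-to-parameter, and return-value-to-caller) is a coupling D-U pair that a coupling-based criterion would ask you to cover.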
Tuesday 2nd March: Lecture 13 — System testing
There are many, many different characteristics of software systems which we might want to test. Lecture 13 looked at a few of them (I added a variety of “war stories” to illustrate each). For any individual project, the system tests you apply will be up to you; they should generally follow fairly obviously from the overall system requirements and application domain, though.
Friday 26th February: Lecture 12 — GUI testing
Historically, GUI testing has made heavy use of manual checklists and human operators, and automation is hard. Lecture 12 discusses this. Modern operating systems support reflection on their GUIs though (often for accessibility — voice operation, etc.), so it's easier to drive automated GUI tests (the tools you're researching for Practical 2 underline this). I discussed some of the notions of coverage you could apply to GUIs too — again, the supporting reading is a good place to look for more information.
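
As one small, hedged sketch of what driving a GUI programmatically can look like, here's java.awt.Robot posting synthetic mouse and keyboard events. The coordinates and the widget they're aimed at are invented for illustration; a real tool would locate widgets through the platform's accessibility/reflection support rather than hard-coding positions.

    import java.awt.AWTException;
    import java.awt.Robot;
    import java.awt.event.InputEvent;
    import java.awt.event.KeyEvent;

    // Minimal sketch: drive a GUI with synthetic events. The coordinates are
    // hypothetical; they stand in for "wherever the button happens to be".
    public class GuiSmokeTest {
        public static void main(String[] args) throws AWTException {
            Robot robot = new Robot();
            robot.setAutoDelay(100);                     // pause between synthetic events

            robot.mouseMove(200, 150);                   // move to the (assumed) button position
            robot.mousePress(InputEvent.BUTTON1_MASK);   // click it
            robot.mouseRelease(InputEvent.BUTTON1_MASK);

            robot.keyPress(KeyEvent.VK_ENTER);           // confirm the (assumed) dialog
            robot.keyRelease(KeyEvent.VK_ENTER);
        }
    }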
Tuesday 23rd February: Tutorial 6 — Mutation testing
Tutorial 6, on mutation testing, is fairly straightforward: make sure you've got the terminology (“killing” mutants, distinct versus equivalent mutants, etc.). If you're trying to develop mutants to survive test suites, then a good starting point is to look at the test suite's coverage, and if you can find a coverage criterion that it doesn't satisfy then try exploiting that.
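
Here's a small invented illustration of that tactic: the suite described in the comments never executes one path through the original method, so a mutant whose only change lies on that path is guaranteed to survive it.

    // Invented illustration of a mutant surviving a weak test suite.
    class Shipping {
        // Original: free shipping for orders of 50 or more, otherwise a flat 5.
        static int cost(int orderTotal) {
            if (orderTotal >= 50) {
                return 0;
            }
            return 5;       // a suite whose only test is cost(60) == 0 never reaches
                            // this statement (statement and branch coverage both fail)...
        }

        // ...so this mutant, which only changes the constant on that unexecuted
        // path, cannot be killed by that suite. Adding cost(10) == 5 kills it.
        static int costMutant(int orderTotal) {
            if (orderTotal >= 50) {
                return 0;
            }
            return 6;       // constant replacement mutation
        }
    }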
Tuesday 23rd February: Lecture 11 — Regression testing
The final integration testing slides (from last Friday's lecture) deal with integration coverage, which follows on closely from data flow testing. I then spoke on regression testing (Lecture 11): how your tests can help you over time as you change/evolve/fix your software, and as you bring it to new environments and configurations. Issues of resource management (the cost of writing regression tests, maintaining them and managing their execution, versus not doing so or writing new tests with each release) are significant here.
Friday 19th February: Lecture 10 — Integration testing
Lecture 10 took us beyond the life cycle stages we've been looking at so far to the point where we link all the components of a software system together: integration. Many faults become apparent at this point, and we reviewed a case study of the Voyager and Galileo spacecraft as motivation. Testing at this stage needs to be done carefully in order to make sure that faults can be easily located, and the advantages and disadvantages of top-down and bottom-up approaches were discussed. I ran out of time before discussing coverage criteria for integration testing, so will cover them next Tuesday.
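
For a concrete (if invented) picture of top-down integration, here's a stub standing in for a component that hasn't been integrated yet; the interfaces are hypothetical, not from the lecture's case study.

    // Hypothetical sketch: integrating an order processor top-down before the
    // real payment component exists, with a stub in its place.
    interface PaymentGateway {
        boolean charge(String cardNumber, int pence);
    }

    // Stub for the not-yet-integrated component: canned answers, plus a record
    // of the calls it received so the integration test can check them.
    class PaymentGatewayStub implements PaymentGateway {
        int callCount = 0;

        public boolean charge(String cardNumber, int pence) {
            callCount++;
            return true;    // canned success; no real payment happens
        }
    }

    class OrderProcessor {
        private final PaymentGateway gateway;

        OrderProcessor(PaymentGateway gateway) {
            this.gateway = gateway;
        }

        boolean placeOrder(String cardNumber, int pence) {
            return gateway.charge(cardNumber, pence);
        }
    }

A driver for bottom-up integration is the mirror image: a throwaway caller that exercises the lower-level component directly before its real clients exist.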
Tuesday 16th February: Tutorial 5 — Data flow testing
Tutorial 5 worked through the steps involved in generating tests to achieve data flow coverage: drawing a control flow graph; identifying defs and uses (noting that c-uses are identified with nodes, but p-uses should be listed against each and every edge leaving the node containing the p-use); listing D-U pairs for each variable; and ticking each covered def/D-U pair/D-U path off as you add tests to your suite (always being on the alert for impossible situations — where it's not possible to find a def-clear path from a def to a particular use, for example).
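
If the vocabulary is still slippery, here's a tiny made-up snippet with the defs, c-uses and p-uses of one variable marked in comments.

    // Hypothetical snippet annotated with the defs and uses of x.
    class DefUseExample {
        static int clampToTen(int raw) {
            int x = raw;      // def of x (and a computation use, c-use, of raw)
            if (x > 10) {     // p-use of x: it governs a predicate, so it is listed
                              // once for each edge leaving this node
                x = 10;       // second def of x; on this path it kills the first def
            }
            return x;         // c-use of x: one D-U pair links the first def to this use
                              // (via the false edge), another links the second def to it
        }
    }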
Tuesday 16th February: Lecture 9 — Mutation testing
Lecture 9 introduced mutation testing, something we've played with a little bit in tutorials and Practical 1. Deliberate introduction of bugs into code can be used to estimate the quality of test suites or to estimate remaining defects in code. The underlying idea is that the test suite should very precisely carve out the acceptable behavioural space for the system under test, and any small deviations from acceptable behaviour should be caught by a good test suite. Huge numbers of mutations are possible though (including lots of interesting ones specific to OO — take a look at the paper on MuJava), and we need to constrain these numbers in order to come up with feasible notions of mutation coverage.
Friday 5th February: Lecture 8 — Data flow coverage 2
Lecture 8 reviewed most of the control flow and data flow-based adequacy criteria we've covered so far, going into detail about how short-circuiting affects testing of compound conditions and raising a few tricky issues, such as impossible paths.
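
Here's a one-method, made-up example of the short-circuiting point: because && only evaluates its right-hand operand when the left-hand one is true, some basic-condition combinations simply cannot be exercised by any test.

    // Hypothetical example of a short-circuited compound condition.
    class ShortCircuitExample {
        static boolean isNonEmpty(String s) {
            // With &&, s.length() is only evaluated when s != null is true, so no
            // test can ever exercise the combination "s == null and s.length() > 0":
            // that evaluation simply never happens (and never throws). Coverage
            // criteria over compound conditions have to discount such combinations.
            return s != null && s.length() > 0;
        }
    }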
Tuesday 2nd February: Tutorial 4 — Structural testing
Tutorial 4 works through the process of generating some tests to satisfy statement, branch and basic condition coverage. Be particularly careful when generating control flow graphs of for loops: the initializer is executed first, the condition is executed at the start of every loop, and the update clause is executed at the end of every loop.
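
Here's a small, invented loop with comments marking the order in which the pieces of the control flow graph execute.

    // Hypothetical loop annotated with the order in which the CFG pieces execute.
    class ForLoopCfgExample {
        static int sum(int[] values) {
            int total = 0;
            for (int i = 0;              // (1) initialiser: its own node, executed once, before the loop
                 i < values.length;      // (2) condition: evaluated before every iteration,
                                         //     including the final check that exits the loop
                 i++) {                  // (4) update: executed at the end of every iteration,
                                         //     after which control returns to the condition
                total += values[i];      // (3) body
            }
            return total;
        }
    }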
Tuesday 2nd February: Lecture 7 — Data flow coverage 1
Having considered testing inspired by control flow, Lecture 7 looks to data flow. Definitions and uses of variables guide data flow based testing criteria. We outline a number of these, and then review the notion of test criterion subsumption, which gives us a way of relating different test criteria.
Friday 29th January: Lecture 6 — Structural testing 1
Lecture 6 approaches testing from the point of view of being able to see the implementation: now we can think about common programming errors (a range of these are explored), and consider the relationship between our tests and the code. In particular, the question we ask is how we can measure tests against the code — there are many ways, and we look at the proportion of lines exercised (statement coverage), the proportion of branches taken (branch coverage) and various analyses based on how thoroughly basic and compound conditions are exercised.
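
As a quick invented example of why these measures differ, the following method can have every statement executed by a single test while one branch is still never taken.

    // Hypothetical example: statement coverage without branch coverage.
    class CoverageExample {
        static int absoluteValue(int x) {
            int result = x;
            if (x < 0) {
                result = -x;
            }
            return result;
        }
        // The single test absoluteValue(-3) == 3 executes every statement, but the
        // false branch of the if (where the body is skipped) is never taken;
        // adding a test such as absoluteValue(3) == 3 is needed for branch coverage.
    }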
Tuesday 26th January: Tutorial 3 — Category-partition testing
Tutorial 3 works through an example application of the category-partition method. We look at identifying ITFs (independently testable features), parameters and environment; at characteristics of these which relate to patterns of behaviour, from which we can delineate partitions; and then derive individual test cases from these partitions.
Tuesday 26th January: Lecture 5 — Specification-based testing 2
Continuing our look at the category-partition method, Lecture 5 introduces a combinatorial method of reducing the number of test cases. It also introduces the notion of coverage criteria, and then explores the application of models in the category-partition method. Examples of coverage criteria derived from various models are worked through.
Friday 22nd January: Lecture 4 — Specification-based testing 1
Lecture 4 starts out by examining some sample failure patterns, using these to explore the usefulness of random testing. The conclusion is that there are cases where random testing is unlikely (statistically speaking) to detect the presence of faults; this motivates the adoption of systematic approaches to testing. The first such approach we look at is the category-partition method of testing, a “black-box” method which derives tests from system specifications.
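
Here's a hedged sketch (invented code, not from the lecture) of the statistical point: the fault below is triggered by exactly one input out of roughly four billion, so uniform random testing is overwhelmingly likely to miss it.

    import java.util.Random;

    // Hypothetical illustration: a fault hiding in a tiny corner of the input space.
    public class RandomTestingSketch {
        // Faulty: gives the wrong answer for exactly one input value.
        static int increment(int x) {
            if (x == 123456789) {
                return x;        // the bug: fails to add 1 for this one value
            }
            return x + 1;
        }

        public static void main(String[] args) {
            Random random = new Random();
            int failures = 0;
            for (int i = 0; i < 1000000; i++) {
                int x = random.nextInt();
                if (increment(x) != x + 1) {
                    failures++;
                }
            }
            // Each trial hits the failing input with probability 1 / 2^32, so even
            // a million random tests will almost certainly report zero failures.
            System.out.println("failures found: " + failures);
        }
    }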
Tuesday 19th January: Tutorial 2 — Getting started with Practical 1
This tutorial focussed on getting started with Practical 1. Much of the material to support this will be taught over the next three lectures. Brief introductions were given to black box and white box testing, and to the notion of test environment, inputs and outputs. Some feedback issues from last year's practical were highlighted, particularly:
  • Time management is critical: make an early start to ensure that any problems you have don't crop up the day before the deadline.
  • If a question is worth 40% of the marks, it deserves a reasonable amount of your effort, and probably more than half a page in your report.
  • Do explain what you're doing (particularly if it's clever or tricky), and do make sure that all of the code you submit gets at least a little mention/explanation in your report.
Tuesday 19th January: Lecture 3 — Testing in the life cycle
Lecture 3, on testing in the development life cycle, motivates software testing by highlighting the quality issues that are rife within the software industry. I illustrated how ST fits into three different development methodologies, and argued that methodologies which engage with testing early and often should save time and money, as well as producing products of higher quality that are more closely aligned with customers' needs.

Also discussed were:

  • Wednesday tutorial slot (vote was for 3pm);
  • whether to move lectures to AT5.05 or somewhere a little further away but more lecture-theatre-like (vote was for AT5.05, but not by a huge margin);
  • and whether people should prepare for tutorials beforehand (10-15 minutes of reading) or at the start of the tutorial (which means we have less time to work). The vote was for preparing beforehand, so I'll make the next week's tutorial sheet available on the Friday beforehand from now on.
Friday 15th January: Lecture 2 — Tools for unit test — JUnit (plus optional tutorial)
Lecture 2 introduced JUnit; it was followed by a tutorial working through the beginnings of writing and running some tests using Eclipse and JUnit. There are many online resources introducing you to Eclipse and JUnit, so please do look at some of them if you're new to Java (or haven't used it in a while). There are some links on the course resources page.
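
If you'd like a reminder of the shape of a test, here's a minimal sketch assuming JUnit 4 and a made-up Calculator class (under JUnit 3 the test class would instead extend TestCase).

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    // Calculator is a hypothetical class, included only so the tests have
    // something to exercise; it is not part of the course materials.
    class Calculator {
        int add(int a, int b) {
            if (a < 0 || b < 0) {
                throw new IllegalArgumentException("operands must be non-negative");
            }
            return a + b;
        }
    }

    public class CalculatorTest {
        @Test
        public void addsTwoNumbers() {
            assertEquals(5, new Calculator().add(2, 3));
        }

        @Test(expected = IllegalArgumentException.class)
        public void rejectsNegativeOperands() {
            new Calculator().add(-1, 3);
        }
    }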
Tuesday 12th January: Lecture 1 — Course overview
The opening lecture comprised a broad, shallow view over the field of Software Testing; there's more detail in the slides than I actually covered (most lectures comprise far fewer slides than this one!).
