Software Testing: Practical

This page describes the practical for the Informatics Software Testing course. It will be marked out of 100 points, and is worth 25% of the assessment of the course. This practical will be undertaken in groups, normally of three to four, and will be assessed on a group report and individual submissions. This practical should take approximately 20 hours of work from each participant to complete. If at any time you believe you are likely to exceed this estimate by more than 30% please get in touch with the TA or me to discuss what can be done.

Deadline

The submission deadline for the practical is: Monday 21st March 2016 at 1600.

You will have an opportunity to get formative feedback on your practical work. If you submit a draft to me by email on or before Mon 29th February I will provide feedback on the submission by Friday 11th March. This process will not involve formal assessment and there is no credit allocated to the draft submission. However, it is an opportunity for you to improve your submission prior to the coursework deadline.

The penalty for late submission follows the uniform policy described in the UG3 course guide. Please read that page, and pay particular attention to the section on plagiarism.

Organisation

For this practical you will be split, where possible, into groups of 3 or 4. You should already have received, or will shortly receive, notification of the members of your group. There are four tasks, of which the first three will be done as a group and the last one individually.

Each of the first three tasks counts for 25% of the final assessment (so the group activity accounts for 75%), and the individual task counts for the remaining 25%. Each member of the group will therefore be allocated the group score on the group tasks plus their own score on the individual task.

Deliverables

The overall goal of this project is to produce a short report on the testing of a small program. The report should be at most around 20 pages in length, supported by various other technical deliverables (code for tests). It should be split into a main body and appendices (with both counted in the total of 20 pages). The main body of the report should consist of sections with the results of the group and individual tasks. You should clearly label each individual section with the author's student number so that marks can be allocated to the correct individual. The appendices should include numbered screenshots, figures (e.g. a control flow graph) and any small pieces of code you would like to refer to from the explanations, and should constitute visual aids to the explanations in the main body of the report. Appendices should be referred to and explained from the main body of the report; any that are not will not count towards your final assessment.

You should be able to complete the tasks described below with around 15-20 hours of effort per group member, so each group has a "budget" of 60-80 hours of effort, depending on the size of your group. You should consciously manage that effort. If you find your group has an individual who is not contributing effectively, you can raise any concerns with me and I will take it up with the individual concerned.

Background

In this practical you will consider the Java program StringUtils.java and its accompanying Specification.

Tools

You can choose either to use the Eclipse IDE or to use JUnit and other tools standalone; I have no strong preference. Many people find the tools available in Eclipse useful (if you haven't used Eclipse before, maybe now is the time to give it a try). You will need some of the following:

  1. If necessary you can download JUnit from here. If you are using Eclipse it is probably already installed in the IDE. This article is a reasonable introduction to using JUnit with Eclipse, but bear in mind its age: in particular it's focused on JUnit 3. Here's a good introduction to JUnit 4 (free registration required).
  2. You will need some kind of coverage analysis tool:
    • In Eclipse you can use EclEmma. This should already be installed on DICE machines within Eclipse. If not, it's easy to install through Eclipse's built-in software update mechanism.
    • For stand-alone coverage you should consider something like Cobertura.
    • A review of other open-source code coverage tools for Java is available here.

Each of the group tasks has an associated tutorial which will help you prepare for it. Please prepare in advance for the tutorial to get the most out of it.

Now you should work through the following activities:

Task 0: Setting Up (1 hr, no credit)

Preparation: If you don't have Eclipse installed and want to use it, you should download and install it. You can find Eclipse here. Once you have installed Eclipse, you should look at the tutorial. Do enough of the "getting started" tutorial that you have JUnit as a project in Eclipse. You should also install EclEmma if you don't have it and intend to use it. You can delay this, since it is not essential for the first task.

You should spend some time looking at the JUnit project in Eclipse and become familiar with its structure.

You should also thoroughly read and understand the provided specification and create a plan with your group for addressing the different tasks.

Task 1: Category-Partition Testing (25 marks, group activity)

Preparation: You should thoroughly read and understand section 11.2 of Pezze and Young and the defining paper on the Category Partition method by Ostrand and Balcer, and do tutorial 1.

In this task you will generate a test suite in JUnit by first constructing test case specifications using the category-partition approach. You will test the method String replaceString(String inputText, String pattern, String replacement, Character delimiter, boolean inside), which can be found in StringUtils.java. You should document the following parts of the process:

  1. Provide a brief summary of what you think the independently testable functions (ITFs) are for this method, outlining why you think they are independent. You need only test one ITF for this practical; make it clear which ITF you are testing. The chosen ITF should be the primary function of the method.
  2. Outline the parameters and environment elements you have identified that are relevant to the method. Explain why you are considering each environment element and, in case you dismiss any environment elements, why this is reasonable (try to avoid explanations such as "lack of time to consider more").
  3. Identify the characteristics (categories) of the parameters and environment elements which are relevant for testing and why.
  4. Identify partitions (choices in Ostrand and Balcer) and value classes for the characteristics (see below for an explanation of the difference between partitions and value classes).
  5. Provide a calculation of the initial number of tests (an illustrative worked example follows this list).
  6. Decide on any constraints on combinations of the value classes you have identified, and mark them using the notation described in the reading. Here you should attempt to eliminate as many as possible of the combinations of value classes that could not occur together in test situations.
  7. Provide a new detailed calculation of the number of tests reached after introducing the constraints. Your result should not exceed 25 tests.
  8. Outline the test case specification you have arrived at, in the form used in your reading and tutorial.
  9. Outline the actual tests you have chosen (actual values for the specification from the previous point).
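
To illustrate points 5 and 7 with invented numbers (not an analysis of replaceString): suppose you identified three characteristics with 3, 2 and 4 value classes respectively. The initial number of tests is the product of the counts:

    3 × 2 × 4 = 24

If you then mark one value class of the first characteristic [error], it contributes a single test and drops out of the combinations:

    (2 × 2 × 4) + 1 = 17

Your own numbers will differ, and after constraints the result should not exceed 25 tests.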

You should then implement your test case specification and test the code for the function. In giving a grade for this part of the practical I will take account of the performance of your test set on a collection of variants of the method.
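
As a purely illustrative sketch, a JUnit 4 test implementing one of your test case specifications might look like the following. The concrete input, delimiter and expected output are assumptions invented for this example, and it assumes replaceString is a static method of StringUtils; derive the real values from the Specification.

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    public class Task1 {

        // Hypothetical test case: one occurrence of the pattern inside a
        // single delimited region, with inside == true. The expected output
        // below is an assumption -- check it against the Specification.
        @Test
        public void replacesOccurrenceInsideDelimitedRegion() {
            String result = StringUtils.replaceString(
                    "abc 'pattern' def",  // inputText
                    "pattern",            // pattern
                    "replacement",        // replacement
                    '\'',                 // delimiter (autoboxed to Character)
                    true);                // inside
            assertEquals("abc 'replacement' def", result);
        }
    }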

Deliverables:

  1. A section in your report containing your rationale for the tests (the response to the points from above).
  2. A screenshot in appendices with the results of running the tests.
  3. A file Task1.java that contains your JUnit tests.

Task 2: Coverage Analysis (25 marks, group activity)

Preparation: You should read Pezze and Young chapter 12 and then do tutorial 2 on this topic.

Using some appropriate coverage tool (please specify which), assess the level of branch coverage achieved by your test suite developed in Task 1. Then do the following:

  1. Draw a control flow graph for the method (if you use an automatically generated one, it should be readable, not just a jumble).
  2. If the branch coverage is below 90%, explain why this is the case with the aid of the control flow graph.
  3. If you feel that the level of branch coverage can be improved, attempt, using the control flow graph, to define and implement some additional tests that will increase the level of coverage (a sketch of such a test follows this list). Reassess the coverage you achieve and compare it with the coverage achieved before you began this exercise.
  4. Write a short evaluation of the adequacy of branch coverage as a measure of the adequacy of the test set for this code. Please be specific to the code by giving examples of covered/uncovered cases for the given problem to support your statements. Refer to the control flow graph where useful.
  5. Using the control flow graph and examples for the given problem, provide a short written evaluation of at least one other coverage criterion as a way of evaluating the adequacy of your test set. Include in this at least one test case specification (not necessarily the actual test) that this new coverage criterion might suggest you need to include in your test set.
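
As an illustration of point 3, suppose (hypothetically; your uncovered branches will differ) the coverage report showed that a branch handling a null delimiter was never taken. A test targeting that branch might look like this; the expected value is an assumption to be checked against the Specification:

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    public class Task2 {

        // Hypothetical branch-targeting test: exercises an assumed
        // null-delimiter branch left uncovered by the Task 1 suite.
        // The expected output (input unchanged) is an assumption.
        @Test
        public void coversNullDelimiterBranch() {
            String result = StringUtils.replaceString(
                    "abc pattern def", "pattern", "replacement", null, false);
            assertEquals("abc pattern def", result);
        }
    }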

Deliverables:

  1. A section in your report containing your response to the points from above.
  2. Screenshots in appendices showing the different levels of coverage you achieved and the figure of the control flow graph.
  3. A file named Task2.java containing any new tests.

Task 3: Mutation-based Test Adequacy (25 marks, group activity)

Preparation: Read Pezze and Young Chapter 10 on Adequacy and then do tutorial 3, which covers mutation. This should help you decide how to generate mutants.

In this task you should use mutation to check the adequacy of your test set developed under Task 1. You should do the following:

  1. Develop several erroneous variants (or "mutants") of the replaceString method (a toy illustration follows this list).
  2. Document your chosen types of mutation for each variant.
  3. Explain why your test set discovers or fails to discover each mutant.
  4. Consider the set of mutants your Task 1 test set fails to discover:
    1. If there are no such mutants, can you design a mutant that is not discovered by your test set? If you think it is impossible to design such a mutant, provide an argument for this. The argument should be as strong as you can make it.
    2. If you have mutants that are undiscovered by your test set, can you strengthen the test set to capture all of them? Notice that some variants of a function do not result in changes in the behaviour of the system, even though the code is different.
  5. Augment your test set developed in Task 1 to discover as many as possible of the mutants you developed.
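
To make the mechanics concrete, here is a toy illustration on an invented method (not replaceString; your actual mutants must be variants of replaceString itself): a single-operator mutant and a test that kills it.

    import static org.junit.Assert.assertTrue;

    import org.junit.Test;

    public class MutationToyExample {

        // Original toy method, invented for illustration only.
        static boolean inRange(int x, int lo, int hi) {
            return lo <= x && x <= hi;
        }

        // Mutant: the second <= is weakened to <, a classic
        // relational-operator (boundary) mutation.
        static boolean inRangeMutant(int x, int lo, int hi) {
            return lo <= x && x < hi;
        }

        // This test kills the mutant: the two methods differ exactly at
        // x == hi, so it passes against inRange but would fail if the
        // call were redirected to inRangeMutant.
        @Test
        public void upperBoundaryIsInclusive() {
            assertTrue(inRange(5, 0, 5));
        }
    }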

Deliverables:

  1. A number of variants of StringUtils.java; call the variant files StringUtils1.java, StringUtils2.java, and so on.
  2. A short section in the report on the adequacy of your test set as assessed by mutation testing. This might include an argument that your test set will discover all mutants in which a single mutation operation is carried out.
  3. You should include a file with an augmented test suite that catches some of the initially undetected mutants. This could be called Task1-strong.java or something similar.
  4. Screenshots in appendices showing how your tests catch/do not catch the mutations.

Task 4: Regression Testing (25 Marks, individual activity)

Task 4 is an individual task.

Preparation: Read Pezze and Young Chapter 22 (Section 5) on Regression Testing.

The user has changed their mind about what the replaceString method should accomplish: the method should now replace all occurrences of the pattern only in the first matched region, rather than in all matched regions. You are provided with an updated version of the specification, Spec2, and code, StringUtils2.java, for this change. In this task you should do the following:

  1. Develop JUnit tests that reveal the effect of the changes made to the replaceString method. To do this you will have to write tests that, when run against the first version in StringUtils.java and the second version in StringUtils2.java, result in different output strings (a sketch follows this list).
  2. Explain why the tests have different outputs between the two versions (max. length 1 page).
  3. Check whether the change-revealing tests exercise/cover the branches in the added/changed code in StringUtils2.java. A simple diff between StringUtils.java and StringUtils2.java will reveal the added/changed code. Use EclEmma to measure branch coverage using these tests. After running the tests with EclEmma, attach a screenshot of the results, with covered code highlighted in green, uncovered code in red and partially covered code in yellow. Only the added/changed code in StringUtils2.java that contains branches needs to be shown; please do not submit the highlighted code for the full Java program.
  4. If your test set above does not cover the branches in the added/changed code, augment it with tests that will achieve 100% branch coverage of this updated code. Show the results from running the augmented test set with EclEmma and attach a screenshot of the highlighted code showing coverage only for the added/changed code with branches.
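
As a sketch of a change-revealing test for point 1: choose an input in which the pattern occurs in more than one matched region, so that the two versions must disagree. Everything concrete below (input, delimiter, both expected strings, and the assumption that the updated method is a static method of a class named StringUtils2) is invented for illustration; derive the real values from Spec and Spec2.

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    public class Task4Sketch {

        // The pattern occurs in two delimited regions, so a version that
        // replaces in all matched regions and one that replaces only in
        // the first matched region must produce different outputs.
        private static final String INPUT = "'pattern' mid 'pattern'";

        @Test
        public void originalVersionReplacesInAllRegions() {
            assertEquals("'new' mid 'new'",
                    StringUtils.replaceString(INPUT, "pattern", "new", '\'', true));
        }

        @Test
        public void updatedVersionReplacesOnlyFirstRegion() {
            assertEquals("'new' mid 'pattern'",
                    StringUtils2.replaceString(INPUT, "pattern", "new", '\'', true));
        }
    }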

Deliverables:

  1. A section in your report titled Task 4, labelled with your UUN, containing your response to the points from above.
  2. A file named Task4-yourUUN.java (e.g. Task4-s1234567.java) containing regression tests for StringUtils2.java.
  3. Screenshots in appendices showing the coverage of the updated code through highlighting in EclEmma.
  4. If the regression tests in Task4-yourUUN.java do not cover the branches of the updated code, then include a file Task4-strong-yourUUN.java with the augmented tests that help achieve full branch coverage of the updated code.
  5. Screenshots in appendices that illustrate the branch coverage of the updated code using the augmented test set.

Submission of the Practical Work

After completing the practical you should have the main report and additional files of tests etc. It will help me with marking if you adhere exactly to the following names (including upper/lower case). If you submit additional files for other sections, use the naming convention Taskn-XXX.ttt, where n is the task number the file relates to and XXX.ttt is a descriptive name and file extension:

report.pdf
A report comprising the main body, with your written answers to the tasks, and appendices (at least 5 pages) with screenshots, figures and any small pieces of code. The whole report should not exceed 20 pages.
Task1.java
Your tests for Task 1.
Task2.java
Your tests for Task 2.

To submit your work you should designate one member of the group as the submitter for the group. The report should be clearly labelled with your group number. The submitter should gather together the files you wish to submit and execute the command below (if for any reason you have not produced one of the listed files, omit it from the submit command). The dots at the end of the command stand for all the other relevant files:

submit st 1 report.pdf Task1.java Task2.java ...

Questions

Here are some relevant questions for Tasks 1 and 2:
What's the difference between a partition and a value class?
They're essentially the same thing ("value class" isn't mentioned in Ostrand & Balcer's original paper), but it might be useful to think of partitions as slightly higher-level verbal descriptions (e.g. "none", "one", "several" and "very many") corresponding to more technical value classes (e.g. 0, 1, 2-100 and 101+). In a more complex project this distinction would be more useful.
Do we need to include our test results?
Yes. Your test case specification should include the results that you expect (cf. P&Y p. 189, Table 11.2). You should also document the actual results you got. A brief commentary to the effect that all tests pass, or that failures occur and why, would be helpful in demonstrating that you implemented and executed your specification.
My branch coverage is over 90%. What should I do in Task 2?
I've talked about different coverage criteria and how they're related, and Ntafos' paper in the reading gives a good overview of this. You've just been working with branch coverage. You should make the case for another coverage criterion, and possibly add one or two test cases to improve your score against that criterion.

