Software Testing: Practical

This page describes the practical for the Informatics Software Testing course. It will be marked out of 100 points, and is worth 25% of the assessment of the course. This practical will be undertaken in groups, normally of four (or possibly five), and will be assessed on a group report and individual submissions. This practical should take approximately 20 hours of work from each participant to complete. If at any time you believe you are likely to exceed this estimate by more than 30% please get in touch with me (soa@staffmail.ed.ac.uk) to discuss what can be done.

Deadline

You will have an opportunity to get formative feedback on your practical work. If you submit a draft to me by email on or before Mon 23 February I will provide feedback on the submission by Mon 2 March. This process will not involve formal assessment and there is no credit allocated to the draft submission. However, it is an opportunity for you to improve your submission prior to the coursework deadline. The submission deadline for the practical is: Monday 16 March 2015 at 1600 (week 9 if we do not count Innovative Learning Week)

The penalty for late submission follows the uniform policy described in the UG3 course guide. Please read that page, and pay particular attention to the section on plagiarism.

Organisation

For this practical you will be split up, where possible, into groups of 4 or 5. You should already have received notification of the members of your group. There are 8 tasks which, depending on the size of your group, you will approach as a group or individually as follows:

Each of the first three tasks counts for 25% of the final assessment (so the group activity accounts for 75% of the assessment), and each individual task counts for the remaining 25%. Each member of the group will therefore be allocated the group score on the group tasks plus their own score on the task they tackled individually.

Deliverables

The overall goal of this project is to produce a short report on the testing of a small system. The report should be at most around 20 pages in length, supported by various other technical deliverables (code for tests). It should be split up into a main body and appendices (the number of pages being counted in the total of 20). The main body of the report should consist of an introduction followed by sections describing the results of the group and individual tasks. You should clearly label each individual section with the author's student number so marks can be allocated to the correct individual. The appendices should include numbered screenshots, figures (e.g. a control flow graph) and any small piece of code you would like to refer to from the explanations, and should constitute visual aids to your explanations from the main body of the report. Appendices should be referred to and explained from the main body of the report - any which are not will not be counted towards your final assessment.

You should be able to complete the tasks described below with around 15-20 hours of effort per group member, so each group has a "budget" of 80-100 hours of effort, depending on the size of your group and how much effort you choose to invest. You should consciously manage that effort. If you find your group has an individual who is not contributing effectively you can raise any concerns with me and I will take it up with the individual concerned.

Background

In this practical you will consider the JUnit system, which is built to support the unit testing of Java programs. It is a very widely used system, and its source code includes a reasonably good collection of JUnit tests to help ensure developers do not introduce bugs when they develop the system. Your goal will be to add to the test set and assess the adequacy of the tests.

Tools

You can choose either to use the Eclipse IDE or just to use JUnit and other tools standalone; I have no strong preference - many people find the tools available in Eclipse useful (if you haven't used Eclipse before maybe now is the time to give it a try). You will need some of the following:

  1. If necessary you can download JUnit from here. If you are using Eclipse it is probably already installed in the IDE. This article is a reasonable introduction to using JUnit with Eclipse, but bear in mind its age: in particular it's focused on JUnit 3. Here's a good introduction to JUnit 4 (free registration required).
  2. You will need some kind of coverage analysis tool:
    • In Eclipse you can use EclEmma. It's easy to install through Eclipse's built-in software update mechanism.
    • For stand-alone coverage you should consider something like Cobertura.
    • A review of other OpenSource code coverage tools for Java is available here

Each of the group tasks has an associated tutorial which will help you prepare for it. Please consult the tutorial guidelines and arrange for your tutorials. Good planning and organisation will be necessary - the earlier you do the four tutorials, the more time you will have to improve your work on the tasks before the practical deadline!

Now you should work through the following activities:

Task 0: Setting Up (1 hr, no credit)

Preparation: If you don't have Eclipse installed and want to use it, you should download and install it. You can find Eclipse here. Once you have installed Eclipse, you should look at the tutorial. Do enough of the "getting started" tutorial that you have JUnit as a project in Eclipse. You should also install EclEmma if you intend to use it; you can delay this since it is not essential for the first task. You should have a look at preparatory activity 1, which covers basic material on the use of JUnit, and at preparatory activity 2, which provides an overview of what you will work on in the first couple of tasks in the practical.

You should spend some time looking at the JUnit project in Eclipse and become familiar with its structure.

You should also read all of this Practical specification and create a plan that specifies at least:

  1. The allocation of individuals to the individual tasks, i.e. for each of task 4-8 a single individual should be allocated to the task.
  2. When and how much effort you are planning to devote to each of the tasks. This can be in any form you like but it should be shared by the entire group and should be easy to change. Remember that not everyone needs to work on all tasks so you might allocate two people to one group task and three to the other two group tasks (this might reduce communication overhead...).

Deliverables: The plan and allocation of tasks - this should be an internal document, shared and evolved by the group (not to be submitted with the practical).

Task 1: Category-Partition Testing (25 marks, group activity)

Preparation: You should thoroughly read and understand section 11.2 of Pezze and Young and the defining paper on the Category Partition method by Ostrand and Balcer, then get together with your group and do tutorial 1.

Tutorial: Contact the tutor (Cristina Alexandru) for this session to arrange a tutorial meeting as indicated in the tutorial guidelines

In this task you will generate a test suite in JUnit by first constructing test case specifications using the category partition approach. In this task you will test the method void sortStrings(Vector values, int left, int right, Swapper swapper) that can be found in junit.runner of the JUnit project. You should document the following parts of the process:

  1. Provide a short specification of the function you are testing by first having a quick look at the code. The method is quite simple, but you should think about what it needs in order to work correctly, and you should try to develop the specification so that it is helpful in devising "black box" tests, i.e. tests that do not require you to inspect the code in order to create them. This means you need to try to specify the operation of the method without depending on the word "sort".
  2. Provide a brief summary of what you think the ITFs are for this method outlining why you think they are independent. You need only test one ITF for this practical. Make it clear what ITF you are testing. The chosen ITF should be the primary function of the method.
  3. Outline the parameters and environment elements you have identified that are relevant to the method. Explain why you are considering each environment element and, in case you dismiss any environment elements, why this is reasonable (try to avoid explanations such as "lack of time to consider more").
  4. Identify the characteristics (categories) of the parameters and environment elements which are relevant for testing and why.
  5. Identify partitions (choices in Ostrand and Balcer) and value classes for the characteristics (see below for an explanation of the difference between partitions and value classes).
  6. Provide a calculation of the initial number of tests.
  7. Decide on any constraints on combinations of the value classes you have identified and mark them by using the notation described in the reading. Here you should attempt to eliminate as many combinations of value classes that could not occur together in test situations as possible.
  8. Provide a new detailed calculation of the number of tests reached after introducing the constraints. Your result should not exceed 20 tests.
  9. Outline the test case specification you have arrived at, in the form used in your reading and tutorial.
  10. Outline the actual tests you have chosen (actual values for the specification from the previous point).

You should then implement your test case specification and test the code for the function. In giving a grade for this part of the practical I will take account of the performance of your test set on a collection of variants of the method.
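If it helps to see what such tests look like in code, here is a self-contained sketch. The Swapper interface and the sortStrings body below are stand-ins written for illustration (the real ones live in junit.runner and may differ); the point is the shape of a black-box check derived from your specification: the result should be ordered, should be a permutation of the input, and all changes should go through the swapper callback.

```java
import java.util.Arrays;
import java.util.Vector;

public class SortStringsSketch {

    // Stand-in for junit.runner's Swapper callback (assumed shape, for illustration).
    public interface Swapper {
        void swap(Vector values, int left, int right);
    }

    // Stand-in quicksort over values[left..right]; the real method is in junit.runner.
    public static void sortStrings(Vector values, int left, int right, Swapper swapper) {
        if (left >= right) return;
        String pivot = (String) values.elementAt((left + right) / 2);
        int i = left, j = right;
        while (i <= j) {
            while (((String) values.elementAt(i)).compareTo(pivot) < 0) i++;
            while (pivot.compareTo((String) values.elementAt(j)) < 0) j--;
            if (i <= j) { swapper.swap(values, i, j); i++; j--; }
        }
        sortStrings(values, left, j, swapper);
        sortStrings(values, i, right, swapper);
    }

    public static void main(String[] args) {
        Vector v = new Vector(Arrays.asList("pear", "apple", "fig", "apple"));
        // A plain element-exchanging swapper; a test double could instead record calls.
        Swapper swapper = (values, l, r) -> {
            Object tmp = values.elementAt(l);
            values.setElementAt(values.elementAt(r), l);
            values.setElementAt(tmp, r);
        };
        sortStrings(v, 0, v.size() - 1, swapper);
        // Black-box check: adjacent elements are in non-decreasing order.
        for (int k = 1; k < v.size(); k++)
            if (((String) v.elementAt(k - 1)).compareTo((String) v.elementAt(k)) > 0)
                throw new AssertionError("not ordered: " + v);
        System.out.println(v); // a full check would also compare input/output multisets
    }
}
```

In your actual Task1.java each chosen test frame would become a JUnit test method exercising the real junit.runner method rather than this stand-in.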

Deliverables:

  1. A section in your report containing your rationale for the tests (the response to the points from above).
  2. A screenshot in appendices with the results of running the tests.
  3. A file Task1.java that contains your JUnit tests.

Task 2: Coverage Analysis (25 marks, group activity)

Preparation: You should read Pezze and Young chapter 12 and then do tutorial 2 on this topic before arranging your meeting with the tutor.

Tutorial: Contact the tutor (Donal Stewart) for this session to arrange a tutorial meeting as indicated in the tutorial guidelines

Using some appropriate coverage tool (please specify which), assess the level of statement coverage achieved by your test suite developed in Task 1. Then do the following:

  1. Draw a control flow graph for the method (if you use an automatically generated one, it should be readable, not just a jumble).
  2. If the statement coverage is below 90%, explain why this is the case with the aid of the control flow graph.
  3. If you feel that the level of statement coverage can be improved, attempt, using the control flow graph, to define and implement some additional tests that will increase the level of coverage. Reassess the coverage you achieve and compare it with the coverage achieved before you began this exercise.
  4. Write a short evaluation of the adequacy of statement coverage as a measure of the adequacy of the test set for this code. Please be specific to the code by giving examples of covered/uncovered cases for the given problem to support your statements. Refer to the control flow graph where useful.
  5. Using the control flow graph and examples for the given problem, provide a short written evaluation of at least one other coverage criterion as a way of evaluating the adequacy of your test set. Include in this at least one test case specification (not necessarily the actual test) that this new coverage criterion might suggest you need to include in your test set.
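To see why 100% statement coverage can still be a weak adequacy criterion, consider this toy method (not taken from JUnit's code): a single test already executes every statement, yet one branch outcome is never exercised, which is the kind of gap a stronger criterion such as branch coverage exposes.

```java
public class CoverageSketch {

    // One test (abs(-5)) executes every statement, giving 100% statement
    // coverage, but the false outcome of the if is never taken.
    public static int abs(int x) {
        int r = x;
        if (x < 0) r = -x; // abs(-5) takes the true branch only
        return r;
    }

    public static void main(String[] args) {
        System.out.println(abs(-5)); // enough for full statement coverage
        System.out.println(abs(3));  // additionally needed for branch coverage
    }
}
```

Your Task 2 evaluation should make an analogous argument, but grounded in concrete covered/uncovered cases from the sortStrings control flow graph.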

Deliverables:

  1. A section in your report containing your response to the points from above.
  2. Screenshots in appendices showing the different levels of coverage you achieved and the figure of the control flow graph.
  3. A file named Task2.java containing any new tests.

Task 3: Mutation-based Test Adequacy (25 marks, group activity)

Preparation: Read Pezze and Young Chapter 10 on Adequacy and then do tutorial 3, which covers mutation. This should help you decide how to generate mutants.

Tutorial: Contact the tutor (Hadi Daneshvar) for this session to arrange a tutorial meeting as indicated in the tutorial guidelines

In this section you should consider using mutations to check the adequacy of your test set developed under Task 1. You should do the following:

  1. Develop several erroneous variants (or “mutations”) of the sortStrings method.
  2. Document your chosen types of mutation for each variant.
  3. Explain why your test set discovers or fails to discover each mutant.
  4. Consider the set of mutants your Task 1 test set fails to discover:
    1. If there are no such mutants, can you design a mutant that is not discovered by your test set? If you think it is impossible to design such a mutant, provide an argument for this. The argument should be as strong as you can make it.
    2. If you have mutants that are undiscovered by your test set, can you strengthen the test set to capture all of them? Notice that some variants of a function do not result in changes of behaviour of the system, even though the code is different.
  5. Augment your test set developed in Task 1 to discover as many of the mutants you developed as possible.
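A toy illustration (not from JUnit's code) of killed versus equivalent mutants: mutating > to < in a max function is caught by almost any test with distinct arguments, while mutating > to >= produces an equivalent mutant, since its return value never differs from the original's.

```java
public class MutationSketch {
    public static int max(int a, int b)         { return a > b  ? a : b; } // original
    public static int maxMutantLt(int a, int b) { return a < b  ? a : b; } // mutant: > becomes <
    public static int maxMutantGe(int a, int b) { return a >= b ? a : b; } // mutant: > becomes >=

    public static void main(String[] args) {
        // Any test with a != b kills the first mutant: here 7 vs 3.
        System.out.println(max(7, 3) + " vs " + maxMutantLt(7, 3));
        // The >= mutant is equivalent: when a == b both versions return the
        // same value, so no test observing return values can distinguish it.
        System.out.println(max(5, 5) + " vs " + maxMutantGe(5, 5));
    }
}
```

When you argue about mutants your test set fails to discover, it is worth checking first whether the mutant is equivalent in this sense, since no strengthening of the test set can kill an equivalent mutant.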

Deliverables:

  1. A number of variants of sortStrings; call the variant files sortStringsVar1.java, sortStringsVar2.java, and so on.
  2. A short section in the report on the adequacy of your test set as assessed by mutation testing. This might include an argument that your code will discover all mutations where a single mutation operation is carried out.
  3. You should include a file with an augmented test suite that catches some of the initially undetected mutants. This could be called Task1-strong.java or something similar.
  4. Screenshots in appendices showing how your tests catch/do not catch the mutations.

Tasks 4 - 8 are individual tasks. They are less tightly described than tasks 1 - 3 and mainly ask you to write a short report. Once your group has decided which individuals are doing which tasks, you should contact the tutor (Stuart Anderson) for this session to arrange a tutorial meeting as indicated in the tutorial guidelines. This tutorial will cover your questions about the individual tasks in the practical.

Task 4: Integration Testing and Coupling-based Coverage (25 Marks, individual activity)

Preparation: You should review the slides for lecture 10 and the slides for Chapters 15 and 21 of Pezze and Young. Have a look at tutorial 4.

In this section you will assess the effort required to test the integration of a relatively modest system. Assessing the integration of a system like JUnit can be very time consuming. So, since this is an individual task, you are being asked to write a report rather than doing the testing.

Inspect the code in the junit.runner package. This is a fairly large package and it is at the heart of the JUnit system, so in your report you should not attempt to be exhaustive. Write a short report that:
  1. Identifies the main ways in which the classes in junit.runner package are coupled. Rather than being exhaustive, please identify different kinds of coupling and give examples. For example, what methods are called, are there shared variables, …
  2. Discusses what would be necessary in a test suite to achieve full statement coverage of the code in the package.
  3. Outline what would be necessary to meet the coupling-based coverage criteria for the classes in the junit.runner package.

Deliverables:

  1. A section in your report, labelled individually, exploring coupling between the specified classes.

Task 5: Using Coupling-based Coverage (25 Marks, individual activity)

Preparation: To prepare for this task you should review the slides for lectures 7 and 8 and the slides for chapters 6 and 13 of Pezze and Young, and work on tutorial 5.

For this section only consider the class ClassPathTestCollector in the junit.runner package. You should do the following:

  1. Look at how the methods inside the class call one another (drawing the call graph might be helpful).
  2. Identify the last defs and first uses of variables when one method calls another.

    Note that in the tutorial we look at coupling through return values as well; that's not necessary here.

  3. Work out the coupling paths that must be followed if the test is to conform to the all-coupling-uses coverage criterion. In describing the paths it is probably easiest to label the lines of code (e.g. by numbering them) and then describe each path as a sequence of line numbers in the different functions.
  4. Estimate how many different tests you would require to develop an adequate test set to cover your coupling paths.
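The last-def/first-use vocabulary is easiest to see on a toy pair of methods (again, not taken from ClassPathTestCollector): the caller's assignments are the last definitions of the value that flows into the call, the callee's first read of the parameter is the first use, and all-coupling-uses asks that each such def/use pair be exercised by some test.

```java
public class CouplingSketch {

    // Caller: each assignment to n is a LAST DEF of the value that
    // flows into countDown's parameter at the call site.
    public static int start(boolean big) {
        int n;
        if (big) n = 10; // last def on one path
        else     n = 2;  // last def on the other path
        return countDown(n); // the def/use pair crosses this call
    }

    // Callee: the read of k in the loop condition is the FIRST USE.
    public static int countDown(int k) {
        int steps = 0;
        while (k > 0) { // first use of the coupled value
            k--;
            steps++;
        }
        return steps;
    }

    public static void main(String[] args) {
        // all-coupling-uses needs a test per last-def/first-use pair:
        System.out.println(start(true));  // pair: def "n = 10" -> use "k > 0"
        System.out.println(start(false)); // pair: def "n = 2"  -> use "k > 0"
    }
}
```

For ClassPathTestCollector you would annotate the real source in the same way and count one coupling path per last-def/first-use pair reachable across each call.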

Deliverables:

  1. An individually labelled section in your report annotating the source of the relevant methods with first uses and last defs for the variables in the ClassPathTestCollector class, and identifying the coupling paths.
  2. Your report should also estimate the number of tests required to cover all the paths you have identified.
  3. If you believe coverage cannot be achieved you should provide an explanation in your report detailing why the coupling-based coverage cannot be met.

Task 6: System Testing (25 marks, individual activity)

Preparation: Re-read the lecture slides that list some of the varieties of testing you might apply during system testing.

Select the three different kinds of system test you consider most appropriate for JUnit. Investigate the best approaches to carrying out these tests on the JUnit system.

Deliverables: A short, individually labelled, section in your report detailing your choice of the three most appropriate system testing activities, together with a justification for this choice. For each of the chosen system testing activities, provide a brief outline of how you would go about such testing for the JUnit system. Try to be reasonably specific, providing examples of test input and identifying what they are intended to test.

Task 7: Database Testing: (25 marks, individual activity)

Preparation: One aspect of testing that does not feature in the practical is managing systems that make extensive use of databases (many systems use databases as an important ancillary). DbUnit is a tool that is designed to handle some aspects of the testing of systems that utilise databases. Prepare by going to the DbUnit web page and reading the Overview documentation to help you understand the facilities DbUnit offers.

For this task, you should prepare a short report providing a brief review of DbUnit.

Note that you are not being asked to provide an exhaustive analysis of the DbUnit approach; you should restrict yourself to between one and two pages of text.

Deliverables: A short section in your report providing this review of DbUnit.

Task 8: Project Review (25 Marks, individual activity)

The final task is a review of the main activities of the group in completing the practical. Write a short report on the progress of the group. Your report should include:

  1. The group's plan for the tasks.
  2. A review of deviations from the plan in terms of time allocation, degree of completion, sequence, completion dates, ...
  3. A short summary of what you see as the successes of the group in project planning and execution.
  4. A short list of three or four areas where you see room for improvement, and suggestions of how you might improve in these areas.

Deliverables: A section in your report providing a review of the planning of your group's practical work.

Submission of the Practical Work

After completing the practical you should have the main report and additional files of tests etc. It will help me with marking if you adhere exactly to the following names (including upper/lower case). If you submit additional files for other sections, use the naming convention Taskn-XXX.ttt, where n is the task number the file relates to and XXX.ttt is a descriptive name and file extension:

report.pdf
A report comprising the main body with your written answers to the tasks and appendices (at least 5 pages) with screenshots, figures, any small pieces of code. The whole report should not exceed 20 pages.
Task1.java
Your tests for Task 1.
Task2.java
Your tests for Task 2.

To submit your work you should designate one member of the group as a submitter for the group. The report should be clearly labeled with your group number. The submitter will gather together the files you wish to submit, and execute this command (if for any reason you have not produced one of the listed files you should omit it from the submit command). The dots at the end of the command signify all the other relevant files:

submit st 1 report.pdf Task1.java Task2.java ...

Assessment

In Weeks 9 and 10 of semester each group will be invited to give a 30 minute presentation of their results where they will have an opportunity to demonstrate the tests they have generated and receive feedback on their work. Details of the signup process will appear shortly on the web page.

Questions

Here are some relevant questions for task 1 and task 2:
What's the difference between a partition and a value class?
They're essentially the same thing (“value class” isn't mentioned in Ostrand & Balcer's original paper), but it might be useful to think of partitions as being slightly higher level verbal descriptions (e.g. “none”, “one”, “several”, and “very many”) corresponding to more technical value classes (e.g. 0, 1, 2-100 and 101+). In a more complex project this distinction would be more useful.
Do we need to include our test results?
Yes. Your test case specification should include the results that you expect (cf. P&Y p.189, Table 11.2). You should also document the actual results you got. A brief commentary to the effect that all tests are passed, or that failures occur and why, would be helpful in demonstrating that you implemented and executed your specification.
My statement coverage is over 90%. What should I do in Task 2?
I've talked about different coverage criteria, and how they're related, and Ntafos' paper in the reading gives a good overview of this. You've just been working with statement coverage. You should make the case for another coverage criterion, and possibly add one or two test cases to improve your score with that criterion.


