Overview

Framework: RQF
Level: 3
Unit No: M/618/5241
Credits: 6
Guided learning hours: 42

Aim

Learners will develop an understanding of testing strategies and techniques, and of the stages from planning through to acceptance testing. They will also understand how automation can be applied to software testing, and will implement test plans, identify appropriate test data and record results.

Unit Learning Outcomes

1

Understand software testing.

Purpose: to ensure software is functional, secure and meets specified business requirements.

Test methods: unit testing e.g. source code testing; integration testing e.g. big bang, top-down, bottom-up; system testing e.g. usability, performance, compatibility, error handling, security; black box testing e.g. test cases based on inputs and expected outputs; white box testing e.g. data flow, branch, path testing; purpose of each; static testing e.g. walkthrough without executing code; dynamic testing e.g. testing from within a debugger environment.
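
For illustration, a minimal sketch in Python (using the standard unittest module; the discount function and its 10% rule are hypothetical, not part of the unit content) of how the black box and white box perspectives shape test cases:

    import unittest

    def discount(price, is_member):
        """Hypothetical function under test: members get 10% off orders over 100."""
        if is_member and price > 100:     # branch 1
            return price * 0.9
        return price                      # branch 2

    class BlackBoxTests(unittest.TestCase):
        # Black box: cases derived purely from inputs and expected outputs.
        def test_member_large_order(self):
            self.assertEqual(discount(200, True), 180)

        def test_non_member(self):
            self.assertEqual(discount(200, False), 200)

    class WhiteBoxTests(unittest.TestCase):
        # White box (branch testing): cases chosen so each branch executes.
        def test_discount_branch(self):
            self.assertEqual(discount(150, True), 135)   # takes branch 1

        def test_no_discount_branch(self):
            self.assertEqual(discount(50, True), 50)     # takes branch 2

    if __name__ == "__main__":
        unittest.main()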

Test stages: e.g. planning, developing test procedures, carrying out tests, reporting (is the software ready?), analysis of results, retesting; alpha e.g. white box testing; beta e.g. usability testing; acceptance e.g. black box testing; non-functional testing e.g. performance testing.

Different types of testing: alpha, beta.

Unit testing: identify processes and input and output requirements, identify and isolate code into its smallest testable part, plan test cases, identify test data, debug code.
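
As a minimal sketch (Python unittest; validate_age is a hypothetical "smallest testable part" and the 0-120 rule is an assumed requirement), a unit test isolates one piece of code and checks it against planned test data:

    import unittest

    def validate_age(age):
        """Hypothetical unit under test: accept whole-number ages 0-120."""
        return isinstance(age, int) and 0 <= age <= 120

    class ValidateAgeTests(unittest.TestCase):
        def test_normal_value(self):
            self.assertTrue(validate_age(35))

        def test_boundary_values(self):
            self.assertTrue(validate_age(0))
            self.assertTrue(validate_age(120))

        def test_erroneous_values(self):
            self.assertFalse(validate_age(-1))
            self.assertFalse(validate_age(121))
            self.assertFalse(validate_age("35"))  # wrong type

    if __name__ == "__main__":
        unittest.main()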

Integration testing: identify units of code that will work together, decide on approach to be used (top down, bottom up), define parameters for the way in which the units will work, use modules that have been unit tested as the input for the test.
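
A sketch of the bottom-up approach (two hypothetical units, parse_record and total_value, assumed to have already passed unit testing, combined and tested through their agreed interface):

    import unittest

    # Two hypothetical units, each assumed already unit tested in isolation.
    def parse_record(line):
        name, qty, price = line.split(",")
        return {"name": name, "qty": int(qty), "price": float(price)}

    def total_value(records):
        return sum(r["qty"] * r["price"] for r in records)

    class OrderIntegrationTests(unittest.TestCase):
        # Integration test: the output of parse_record feeds total_value,
        # exercising the defined interface (a dict with qty and price keys).
        def test_parse_then_total(self):
            lines = ["widget,2,1.50", "bolt,10,0.20"]
            records = [parse_record(line) for line in lines]
            self.assertAlmostEqual(total_value(records), 5.00)

    if __name__ == "__main__":
        unittest.main()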

Performance testing: define performance goals, identify suitable metrics, deploy manual and automatic testing tools as required, analyse data generated by testing tools.
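
A minimal sketch of manual performance measurement using Python's built-in timeit module (the sorting workload and the 50 ms goal are illustrative assumptions):

    import timeit
    import random

    def workload():
        """Illustrative operation whose performance is being measured."""
        data = [random.random() for _ in range(10_000)]
        return sorted(data)

    # Defined performance goal (assumed figure): average run under 50 ms.
    GOAL_SECONDS = 0.05

    runs = 20
    total = timeit.timeit(workload, number=runs)
    average = total / runs
    print(f"average {average * 1000:.2f} ms per run "
          f"({'within' if average < GOAL_SECONDS else 'exceeds'} the 50 ms goal)")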

System testing: test software as a complete package, plan destructive testing cases, plan non-destructive testing cases, compare performance with functional requirements specification.
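
One way to picture destructive and non-destructive cases (a sketch; run_app is a hypothetical single entry point standing in for the complete package):

    import unittest

    def run_app(args):
        """Hypothetical top-level entry point of the complete package."""
        if not args:
            raise ValueError("no command given")
        if args[0] == "add":
            return str(sum(int(a) for a in args[1:]))
        raise ValueError(f"unknown command {args[0]!r}")

    class SystemTests(unittest.TestCase):
        # Non-destructive case: normal use, compared against the
        # functional requirements specification.
        def test_add_command(self):
            self.assertEqual(run_app(["add", "2", "3"]), "5")

        # Destructive cases: deliberately invalid input must fail safely.
        def test_bad_input_fails_cleanly(self):
            with self.assertRaises(ValueError):
                run_app([])
            with self.assertRaises(ValueError):
                run_app(["frobnicate"])

    if __name__ == "__main__":
        unittest.main()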

Acceptance testing: identify and engage suitable test users, deploy users to test the program in real or simulated use scenarios, gather feedback from test users, compare user feedback against functional and non-functional requirements specification.

Regression testing: fix errors identified in other stages of testing, retest the identified component to check the error is fixed, retest associated components to ensure no unintentional issues have arisen.
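
A sketch of the regression habit (the slugify function, the reported defect and the fix are all illustrative): the failure is pinned down as a permanent test, then the wider suite is rerun to catch unintended side effects:

    import unittest

    def slugify(title):
        """Hypothetical component. An earlier defect: surrounding spaces
        produced slugs like '-my-page-'. Fixed by stripping first."""
        return "-".join(title.strip().lower().split())

    class SlugifyRegressionTests(unittest.TestCase):
        # New test pinning the fixed defect so it cannot silently return.
        def test_surrounding_whitespace_bug(self):
            self.assertEqual(slugify("  My Page  "), "my-page")

        # Existing test rerun alongside it to check associated behaviour.
        def test_basic_title(self):
            self.assertEqual(slugify("Hello World"), "hello-world")

    if __name__ == "__main__":
        unittest.main()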

Load/stress testing: agree acceptable performance parameters (data access speed, load times, number of concurrent users, system availability), identify and deploy browser-level and protocol-level testing, expose site to low, normal, high and extreme levels of traffic, analyse performance of site against agreed parameters.
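
A small sketch of the idea using only the Python standard library (handle_request is a stand-in for one user request, and the 200 ms parameter is assumed; a real load test would drive the actual site at browser or protocol level):

    import time
    import random
    from concurrent.futures import ThreadPoolExecutor

    def handle_request():
        """Stand-in for one user request against the system under test."""
        time.sleep(random.uniform(0.01, 0.05))  # simulated service time
        return True

    # Agreed performance parameter (assumed): responses within 200 ms.
    LIMIT_SECONDS = 0.2

    def timed_request():
        start = time.perf_counter()
        handle_request()
        return time.perf_counter() - start

    # Expose the system to low, normal and high levels of concurrent traffic.
    for users in (5, 50, 200):
        with ThreadPoolExecutor(max_workers=users) as pool:
            timings = list(pool.map(lambda _: timed_request(), range(users)))
        worst = max(timings)
        print(f"{users:4d} concurrent users: worst {worst * 1000:6.1f} ms "
              f"({'OK' if worst < LIMIT_SECONDS else 'FAIL'})")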

Assessment Criteria

  • 1.1

    Explain the purpose and methods of software testing.

  • 1.2

    Describe the different stages and types of software testing.

  • 1.3

    Explain how automation is used in software testing.

  • 1.4

    Describe functional and structural testing.


2

Be able to develop and implement a test plan.

Test data: normal, erroneous, extreme (outside limits). Learners need to understand the importance of designing test data to confirm a program works correctly under normal and exceptional circumstances (valid, invalid, boundary).
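
A sketch of the three categories in practice (Python unittest; the percentage-mark validator is a hypothetical example):

    import unittest

    def valid_mark(mark):
        """Hypothetical validator: exam marks are integers from 0 to 100."""
        return isinstance(mark, int) and 0 <= mark <= 100

    class TestDataCategories(unittest.TestCase):
        def test_normal_data(self):
            for mark in (1, 50, 99):            # typical valid values
                with self.subTest(mark=mark):
                    self.assertTrue(valid_mark(mark))

        def test_extreme_data(self):
            self.assertTrue(valid_mark(0))      # on the boundary
            self.assertTrue(valid_mark(100))    # on the boundary
            self.assertFalse(valid_mark(-1))    # just outside limits
            self.assertFalse(valid_mark(101))   # just outside limits

        def test_erroneous_data(self):
            for mark in ("fifty", 49.5, None):  # wrong type entirely
                with self.subTest(mark=mark):
                    self.assertFalse(valid_mark(mark))

    if __name__ == "__main__":
        unittest.main()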

Test plan should include: test specification (including functional and structural techniques, setting minimum criteria for completion; functional e.g. black box testing, structural e.g. white box testing), test cases, test data and expected results, resources and scheduling, recording and checking of results (test log), evaluation.

Test process: test specification; test cases; test data; expected results; resources required; time plan; recording documentation; evaluation of results.

Test cases: expected outputs from specified inputs; formal e.g. positive, negative testing; informal e.g. scenario testing.
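
One way to record formal test cases (a sketch; the login rule and the case identifiers are assumptions) is as structured data pairing specified inputs with expected outputs, with positive cases exercising valid use and negative cases expecting rejection:

    # Formal test cases: each pairs specified inputs with an expected output.
    TEST_CASES = [
        {"id": "TC01", "type": "positive", "input": ("alice", "s3cret"), "expected": True},
        {"id": "TC02", "type": "negative", "input": ("alice", ""),       "expected": False},
        {"id": "TC03", "type": "negative", "input": ("", "s3cret"),      "expected": False},
    ]

    def login(username, password):
        """Hypothetical function under test: both fields must be non-empty."""
        return bool(username) and bool(password)

    for case in TEST_CASES:
        actual = login(*case["input"])
        outcome = "pass" if actual == case["expected"] else "FAIL"
        print(f'{case["id"]} ({case["type"]}): expected {case["expected"]}, '
              f"actual {actual} -> {outcome}")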

Test report: should specify the presence or absence of errors, make proposals for rectifying any errors found, and report on the success of the test against the original specification. Contents e.g. test plan, test specification, test cases, test procedure specification, test log/records, test incident report (actual vs expected result).

Record results: test e.g. branch test, test data, expected result, actual result, corrective action taken.
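
A minimal sketch of recording results in a test log (CSV via the standard library; the column headings follow the fields listed above, and the file name and sample row are illustrative):

    import csv
    import os

    LOG_FILE = "test_log.csv"
    FIELDS = ["test", "test_data", "expected_result", "actual_result",
              "corrective_action"]

    def record_result(row):
        """Append one test outcome to the test log."""
        write_header = not os.path.exists(LOG_FILE)
        with open(LOG_FILE, "a", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=FIELDS)
            if write_header:               # write headings on first use
                writer.writeheader()
            writer.writerow(row)

    record_result({
        "test": "branch test: discount applied",
        "test_data": "price=150, member=True",
        "expected_result": "135",
        "actual_result": "135",
        "corrective_action": "none",
    })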

Assessment Criteria

  • 2.1

    Design appropriate test data.

  • 2.2

    Develop a test plan in line with requirements.

  • 2.3

    Implement a test plan and record results.

  • 2.4

    Produce a test report.