Discovering the Requirements – How do I evaluate success?

In the last few postings we have looked at discovering the objectives and requirements for the successful delivery of a Customer Marketing Solution. This has provided an understanding of the functional and non-functional needs of the solution, but a key remaining step is to validate that the delivered solution meets the day-to-day business and user needs.

To achieve this, a User Acceptance Test (UAT) of the solution is required to prove the solution delivers as expected, build confidence in the solution, resolve inconsistencies and reduce the future cost of ownership (which would otherwise be inflated by higher ongoing support levels). The key areas to include in a UAT, and the approach to take, are discussed in this posting.

This posting is the eighth and last instalment of the Discovering the Requirements series. If you are joining the discussion now, you may want to start by reading the first posting.

Before looking in detail at the key test case types, it is first worth understanding the goal and scope of the UAT. The UAT is specifically aimed at ensuring the solution meets the identified requirements in the real world, and should therefore be based on the defined solution requirements, not the functional design. So when creating the UAT Plan and Cases, ensure they cover the requirements rather than how the solution has been put together. For example, in an earlier posting on Analysis (What analysis will I require?) a specific analysis metric requirement was identified for the “Number of quotes by individual in first 6 months”. The UAT test case should be devised to cover this piece of analysis as a requirement, not to verify how the item was implemented in the design phase, as the design definition itself could be the incorrect element.

Analysis test cases are one type of test required to meet the UAT goal and the following types should be included to check different elements of the Customer Marketing solution.

  • Accessibility Tests – Test cases to confirm accessibility to the Customer Marketing solution front end products by the identified user types, access to the core data levels and access/use of basic functionality within each front end product by the identified user types.
  • Reconciliation Tests – Test cases to validate the data loaded into the Customer Marketing solution is correct, complete and comprehensive, which includes checking:
    • All required records (volume) have been transferred.
    • The correct data elements (data fields) have been transferred.
    • The data content has not been corrupted, truncated or lost.
    • Any data quality rules have been applied correctly, including deduplication requirements to deliver a single customer view.
  • Stress and Performance Tests – Test cases to ensure the solution can perform under required usage volumes and provide required levels of performance.
  • Analysis Tests – Test cases to prove the correctness of the identified analysis metrics. Analysis test cases should be based on the analysis scenarios identified in the requirements phase to ensure the UAT checks the solution meets the day-to-day business and user needs, rather than completing individual checks on each metric. A good way to look at this is to consider the design and fitting of a new kitchen, where an approach could be to check each element (kitchen drawers, cupboards, oven, fridge, etc) and sign off each item if it works as expected. This, however, does not ensure the kitchen is fit for purpose; working through an actual scenario such as baking a cake would meet the UAT goal of checking that the solution meets the day-to-day needs, while also exercising several of the individual items as part of the test case.
  • Reporting Tests – Test cases to confirm the reporting capability and to validate the delivery of any reports. As with the Analysis tests, these should be based on scenarios and cover the distribution of the reports to ensure the final recipients can read/use them.
  • Campaign Tests – A key aspect of Campaign tests is to check the end-to-end process: the creation of a campaign, through to execution (ensuring, where possible, the message is received) and finally ensuring any response is correctly attributed to the given campaign. As with the Analysis and Reporting tests, these should be based on scenarios, but it is sensible to start with a basic campaign to check the mechanism before completing more complicated campaign activity. Although an end-to-end test is required, remember that this is a test and not a proven live solution, so any communications sent should go to friendly customers (e.g. internal staff).
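
The Reconciliation checks above (volume, fields transferred, content intact) can be sketched in code. This is a minimal illustration, assuming the source extract and the loaded data are both available as lists of records; the field and key names are hypothetical, not from any specific system.

```python
# Hypothetical sketch of a reconciliation test: compare a source-system
# extract against the records loaded into the Customer Marketing solution.
# Key/field names ("order_id", "value") are illustrative only.

def reconcile(source_rows, loaded_rows, key_field, checked_fields):
    """Return a report of volume, missing-key and field-content mismatches."""
    source_by_key = {r[key_field]: r for r in source_rows}
    loaded_by_key = {r[key_field]: r for r in loaded_rows}

    report = {
        "source_volume": len(source_by_key),
        "loaded_volume": len(loaded_by_key),
        # Records missing from, or unexpectedly present in, the solution.
        "missing_keys": sorted(source_by_key.keys() - loaded_by_key.keys()),
        "unexpected_keys": sorted(loaded_by_key.keys() - source_by_key.keys()),
        # Records present in both but with corrupted/truncated content.
        "field_mismatches": [],
    }
    for key in sorted(source_by_key.keys() & loaded_by_key.keys()):
        for field in checked_fields:
            if source_by_key[key].get(field) != loaded_by_key[key].get(field):
                report["field_mismatches"].append((key, field))
    return report

source = [{"order_id": 1, "value": 100}, {"order_id": 2, "value": 250}]
loaded = [{"order_id": 1, "value": 100}, {"order_id": 2, "value": 25}]  # truncated
result = reconcile(source, loaded, "order_id", ["value"])
print(result["field_mismatches"])  # [(2, 'value')]
```

In practice these checks would run as queries against the source system and the solution's database, but the three categories of discrepancy are the same.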

As with any part of a project, planning is everything, so now that we have an understanding of the key test case types we can move on to the key areas to consider in planning a UAT:

  • Timeframe – When will the UAT be completed and how long will it run for? With any Customer Marketing Solution you will need to validate the initial delivery and subsequent updates of the solution, as well as incorporate end-to-end testing of the campaign cycle. This will require multiple phases in the UAT, planned to allow time to move from data being supplied in a delayed fashion (to allow for controlled test results) to a parallel test with data supplied as per the solution requirements.
  • UAT Users – Who will be involved in the UAT activity? Identifying the testers will determine the training needed on the new solution to complete the UAT, the availability of testers during the required UAT timeframe (individuals assigned to this role will have existing day jobs to support) and the volume of tests that can be completed (based on the UAT timeframe and the number/availability of UAT users).
  • Data Availability – To ensure a UAT can be completed in a controlled environment, with any inconsistencies easily identified, the delivery and availability of the data for use in the UAT will need to be considered. For example, in an early Data Reconciliation test case the total number of orders available within a Customer Marketing Solution will need to be verified against the source system, which may well have moved on since the delivery of the data, due to new/amended orders. This will therefore require access to the data as originally provided, to cross-check volumes if an inconsistency is found.
  • Sign-off Criteria – Before starting a UAT, understanding how success will be measured is critical and will help focus the mind on must-haves and nice-to-haves during the UAT phase. To support this, any defect found should be classified at a specific severity level:
    • Critical (Severity Level 1) – Full solution or major part of solution (e.g. Analysis capability, reporting capability, etc) is unavailable and no workaround exists to enable the UAT to continue.
    • Major (Severity Level 2) – Part of the solution (e.g. Campaign extraction, Analysis metric creation, etc) is unavailable but an alternative method is available to achieve the desired result and continue the UAT.
    • Minor (Severity Level 3) – The solution is working but a non-critical element is incorrect or inconsistent and will impair the solution usability (e.g. a specific analysis metric is delivering the incorrect results).
    • Aesthetic (Severity Level 4) – The solution is working but the presentation of a non-critical element is incorrect (e.g. an analysis group is named incorrectly, ‘F’ instead of ‘Female’; a report heading is misspelt; etc).
    • Enhancement – Any item identified which, although meeting the defined requirements, needs to be changed to meet a newly identified business/user need.

Using these severity levels, sign-off criteria can then be devised to act as a guideline for the completion of the UAT. For example, in most solutions you would want:

    • 0% tolerance for critical and major defects;
    • 10-15% tolerance for minor defects;
    • 20-25% tolerance for aesthetic defects.

Any unresolved defects should also have an assigned resolution date or an agreed acceptance of non-resolution. The assigning of severity levels to each defect can be a hot topic of conversation between the client and supplier during a UAT, so it is sensible to have multiple worked examples agreed in advance.
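As a rough sketch, the sign-off check is just a comparison of open-defect counts against the tolerance percentages; the thresholds below follow the example guideline above, and the defect list itself is illustrative.

```python
# Hypothetical sketch of evaluating UAT sign-off criteria against a defect log.
# Severity names and tolerance percentages follow the example guideline above.

# Maximum share of executed test cases allowed to carry open defects, per severity.
TOLERANCES = {"critical": 0.0, "major": 0.0, "minor": 0.15, "aesthetic": 0.25}

def sign_off_ready(open_defects, total_test_cases):
    """Return (ready, breaches): breaches lists severities over tolerance."""
    counts = {}
    for defect in open_defects:
        counts[defect["severity"]] = counts.get(defect["severity"], 0) + 1
    breaches = [sev for sev, limit in TOLERANCES.items()
                if counts.get(sev, 0) / total_test_cases > limit]
    return (not breaches, breaches)

defects = [{"severity": "minor"}, {"severity": "aesthetic"}]
ready, breaches = sign_off_ready(defects, total_test_cases=50)
print(ready, breaches)  # True []
```

A single open critical defect would immediately block sign-off under the 0% tolerance, whichever way the percentages are tuned for the lower severities.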

  • UAT Case Document – A UAT Case Document should be created prior to the start of the UAT to ensure the UAT does not become an internet browsing event, with some areas looked at in detail, some not touched and time wasted on blind alleys. The best time to create a UAT Case Document is after confirmation of the Business Requirements: this ensures the UAT is based on the agreed solution scope, and writing out each test case will surface any new requirements early, reducing their impact. For each test case the following items should be included:
    • Test Case Reference – A reference code for each test case to enable easy identification and management.
    • Test Case Name – A clear name of each test case to help with interpretation.
    • Test Case Description – A description of the test case to help understanding of the activity to be completed.
    • Test Script (Test Steps) – A detailed description of the test to be performed and each step required to be completed / checked.
    • Test User – User assigned to complete the test case, to help with planning, comprehension, required training and to understand the load placed on each tester.
    • Test Phase – Which phase of the UAT this test case will be completed in. (Note: a test may be repeated following an update of data.)
    • Test Date – Date test completed.
    • Expected Result – Clearly defined outcomes which should occur for each test case and step.
    • Actual Result – Area to record the actual results for each test case and step.
    • Outcome – Pass or Fail. A third option often included is Deferred, to allow test steps with minor or aesthetic defects to be accepted without a full Pass outcome.
    • Attempts – A counter to allow the number of times a test is performed prior to success to be monitored.
  • UAT Defect Log – A document for capturing, reporting and managing defects found during a UAT. A UAT Defect Log should include the following information:
    • Reference Number – A reference code for each defect to enable easy identification and management.
    • Test Case Reference – A reference code for the test case the defect has been identified under.
    • UAT User – User who identified the defect.
    • Defect Date – Date defect was identified.
    • Defect Description – Description of the defect and the required result.
    • Client Severity – Severity level assigned by the UAT user.
    • Supplier Severity – Severity level assigned by the solution provider.
    • Assigned Severity – Severity level agreed upon.
    • Defect Status – Status of the defect:
      • Open
      • Being Resolved
      • Due for Retesting
      • Resolved
      • Deferred
    • Closed Date – Date defect was resolved or deferred.
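
The two documents above are usually kept as simple spreadsheets, but their fields can be sketched as record structures. The field names below mirror the lists above; the sample values and everything else are illustrative.

```python
# Hypothetical record structures mirroring the UAT Case Document and
# UAT Defect Log fields listed above.
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Outcome(Enum):
    PASS = "Pass"
    FAIL = "Fail"
    DEFERRED = "Deferred"

class DefectStatus(Enum):
    OPEN = "Open"
    BEING_RESOLVED = "Being Resolved"
    DUE_FOR_RETESTING = "Due for Retesting"
    RESOLVED = "Resolved"
    DEFERRED = "Deferred"

@dataclass
class TestCase:
    reference: str                       # Test Case Reference
    name: str
    description: str
    steps: list                          # Test Script: steps to complete/check
    user: str                            # Test User assigned
    phase: str                           # Test Phase (may repeat after data update)
    date: str = ""                       # Test Date
    expected_result: str = ""
    actual_result: str = ""
    outcome: Optional[Outcome] = None
    attempts: int = 0                    # attempts prior to success

@dataclass
class Defect:
    reference: str                       # Reference Number
    test_case_reference: str
    uat_user: str
    date: str
    description: str
    client_severity: str
    supplier_severity: str
    assigned_severity: str = ""          # severity agreed upon
    status: DefectStatus = DefectStatus.OPEN
    closed_date: str = ""

tc = TestCase("TC001", "Order volume reconciliation",
              "Check order counts match the source system",
              ["Count orders in source", "Count orders in solution", "Compare"],
              "Jane", "Phase 1")
d = Defect("D001", "TC001", "Jane", "2024-01-10",
           "Order count short by 12", client_severity="Major",
           supplier_severity="Minor")
```

Keeping the defect's `test_case_reference` pointing back at a `TestCase.reference` is what makes the two documents cross-referenceable, however they are stored.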

Following the successful completion of the UAT, you will have a Customer Marketing Solution which has been based on your marketing objectives and the related requirements to meet those objectives.

Over the last eight postings I have touched on the approach needed to identify the business objectives and requirements to deliver a positive ROI on implementing a Customer Marketing solution. If you have any further questions, or would like support or guidance in implementing your solution, please contact me through my “Contact Us” page and I will be happy to discuss your needs. Alternatively, please follow @BlacklerRoberts on Twitter for further insights.
