Smooth Technology Projects: 12 Significant Steps to Success Part 8: Testing Times: UAT, Test, Pilot and Parallel Run

Testing Times in HR technology! Part 8 of the Smooth Steps series, guiding you through the full project life-cycle, is all about testing – user acceptance testing (‘UAT’), piloting and the parallel run, which usually means payroll. This instalment is packed with practical tips.

Unlike with the (implementation) ‘building blocks’, I give the tests and trials of testing rather more focus than you might expect. That’s because, from my independent perspective, I can stand back and observe that, in general, our Phase 3 customers around the country suffer from an unfortunate triple whammy here:

  1. Less knowledge
  2. Less support
  3. Less planning

This leads to an underestimate of what it takes to test and pilot, and to disappointment with the results. The horribly unnecessary consequence is that project teams can feel nothing can be done if trials of new systems don’t go well. I think that’s such a shame: the test period should be a return to the buzz and excitement, as I comment in the article, of the beauty parade that is product demonstrations at the very start. It is your chance to get your hands on your own real-(nearly)-live system!

All this is avoidable with a bit of extra know-how, and Testing Times aims to give you just that…

12 Significant Steps to Success Part 8: Testing Times: UAT, Test, Pilot and Parallel Run

Testing the design and build of your new people system can bring you back to some of the thrills and spills of those product demonstration beauty parades that were the good days of your system selection. Seeing the system come to life, for the first time in your own hands, is really gratifying after intensive work.

From a professional perspective, the testing process can prove frustrating. As consultants, we find that a lack of familiarity with this structured exercise causes an unnecessary waiting game, as we work together to arrive at sign-off milestones. Ultimately this means late go-live dates and budget over-runs.

In this part I look at what the different types of testing are and do, and therefore how to approach each. We will look at:

  1. User Acceptance Testing – the difference between “UAT” and Quality Assurance
  2. Testing and Piloting – what’s the difference?
  3. The Parallel Run – the rigour, the realism and the results

To read some more tips on a rigorous project approach revisit part 5 of this series, and for advice about handling the chunky stages of the implementation journey, of which testing forms an element, go back to part 7.

Let’s make testing less testing for all and instead a fast-track to smooth success!

Key question: Where does testing fit in the overall project plans?

Test too early as a project team and you’ll test again; test too late and you’ll likely face a choice between accepting either delay or compromise on outcomes. Testing has to sit neatly between the point at which someone believes they’ve done their bit and the point at which someone else needs to check it over.

It’s important to know what to test. Remember that I recommend a staged approach to the project. Package the full scope of the system into neat “product” parcels for formal sign-offs.

Take a practical approach to mini, informal tests (e.g. of a workflow or profile design) after each consultancy exercise, so as not to lose the moment while memory is fresh. Ad hoc is allowed as an extra!

User Acceptance Testing: how to use it, how to accept it

User acceptance testing (“UAT”) is:

  • A formal part of testing which should be structured and recorded
  • Proof as to whether the new system delivers to a specification of the business need
  • Your opportunity to decide whether to accept and sign-off configuration work
  • All about the organisational context

User acceptance testing (“UAT”) is not:

  • The first stage of testing a system
  • Proof as to whether the system delivers to an out-of-context technical specification
  • Your opportunity to add more requirements into scope
  • Something you can ask the product supplier to do for you

The first stages of testing an HR system design and configuration will be carried out by the developers and their quality assurers. Systems testing will have happened for any one piece of project-specific build.

The bigger testing exercises that developers carry out come prior to the release of new functionality for their wide customer base. In a SaaS context these upgrade releases and bug fixes can now feel painless to you as the end-user. When testing is done by customers at the end of that system testing, it is often referred to as beta-testing.

At the UAT stage, the implementers have taken their job as far as they can. They now need you to apply their build and to validate whether it does what it should in real-life scenarios. You know those scenarios and the supplier testing team does not – they are keen to know whether the ready-to-test technical spec works for you! Think of it as somewhat like the difference between laboratory testing and a clinical trial.

UAT: what to do and how to do it

1) Write your intended test plan (test script) while the requirements analysis is fresh in mind. Get ahead and have this ready at project start in preparation for selection – to be refined later – or as one outcome of a business process review.

2) Test scripts can be very detailed, and appropriately so. Templates are available for download if you would like examples. In essence, the plan contents describe (a minimal sketch follows the list below):

  • Objectives of testing
  • Business scenarios and functions in and out of scope of testing
  • Roles to test, approve and receive results
  • Pass and fail criteria, against each item and overall
  • You may wish also to describe assumptions, risks and key points of method

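To make those contents concrete, here is a minimal sketch of a test script held as structured data. This is an illustration only – the field names and the example scenario are my assumptions, not a prescribed format:

```python
# A minimal, illustrative test script structure - one row per business
# scenario. Field names are assumptions, not a prescribed format.
from dataclasses import dataclass

@dataclass
class TestItem:
    ref: str              # e.g. "UAT-014"
    scenario: str         # the business scenario in plain words
    steps: list           # what the tester does, step by step
    expected: str         # the pass criterion for this item
    in_scope: bool = True
    result: str = ""      # "pass", "fail-high", "fail-medium" or "fail-low"
    evidence: str = ""    # e.g. the path to a screenshot

items = [
    TestItem(
        ref="UAT-014",
        scenario="New starter added mid-month",
        steps=["Create the employee record",
               "Enter a start date of the 15th",
               "Trigger the new-starter workflow"],
        expected="Workflow completes and payroll picks up a part-period salary",
    ),
]
```

Even a spreadsheet with these columns serves the purpose; what matters is that objectives, scope, roles and pass/fail criteria are written down before testing begins.
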
Tip: Test scripts relate only in part to a particular system solution. If HR technologies are a significant part of your day-to-day departmental operations, you may well be wise to invest time in devising a generic test script that can be applied (with some refinement) to any new software deployment or upgrade.

3) Engage your internal project team to carry out UAT. Involving wider functional team members can simply make efficiency too hard to achieve, and you risk a less-than-perfect result at this stage souring initial reactions when you go live later. To stress: do not ask your supplier to do the job for you. If you wish to involve external advisers (see part 6 on who’s who), only count on those you are sure have a sound grasp of the real requirements of the business.

4) Allow plenty of time. Quite how long depends on context, but understand that even on the most modest scale you are likely to need more than one focused day. Quite often I see product suppliers offering example plans which include a two-week window for UAT by their customer. To plot through different circumstances end-to-end and to record results takes a surprising amount of time.

5) Document results carefully and diligently. The format in which this is done should be agreed in the test script (and shared with those you’ll report back to), and it is absolutely essential that every scenario tested, outcome observed and score (the pass or fail, or the categorisation of the importance of an error) is completed. Ask the consultancy team ahead of time what evidence you need to produce – typically a screenshot to accompany your notes – so that you don’t have to repeat work.

Key question: Is the supplier involved at all in UAT? Do I have to do it all alone?

No, you’re not alone. Whilst I offer no wriggle-room on getting out of your user role, do expect your consultants to support you. Largely those experts are waiting on your responses, so keep in touch and report results throughout; prompt reporting keeps any repeat rounds of UAT short and limits how long the whole process takes. It is also fine to ask for guidance on writing your script [see above] and to expect a low-key introduction as to where to start in this first trialling. But have confidence! You do not need full training in how to use a system to do UAT.

Note and report failures. Overall success may then be defined, for example, as “X% of test items have passed with no problems classified as high or medium impact – and with 100% of items now tested.” Be sure that every item that does not pass as an individual test has a decision attached to it: fix, or accommodate. The exit point of the UAT exercise is then the signing-off of the scope of that testing with a concluding “pass” result.
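
As a worked illustration of that kind of exit criterion – assuming results have been recorded per item as in the earlier test-script sketch, and with the 95% threshold purely an example:

```python
# Illustrative overall UAT pass check: 100% of items tested, and at least
# 95% passed with no failure graded high or medium impact.
def uat_passes(items, threshold=0.95):
    if any(item.result == "" for item in items):
        return False  # 100% of items must have been tested
    clean = sum(1 for item in items
                if item.result not in ("fail-high", "fail-medium"))
    return clean / len(items) >= threshold
```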

Testing and Piloting

The technical specification has now been tested by the professionals; the functional specification, against the business need, validated in UAT. In theory, this leaves you good-to-go towards live use of the HR system.

In practice, most wish to see the organisation have a further “test” opportunity by sampling what’s going to happen when the technology is in live use. The opportunity here, and the associated risk otherwise, is all about understanding usability and experience, as well as gauging cultural or communications issues. Another motive for further trialling is to test the strain on the system as volumes increase, and its performance as part of a live IT infrastructure.

When I looked at the role of HR as the professional PM, I distinguished between planning for a test by your organisation’s end-users and a pilot. These are my words to underline a very useful point of pragmatism. In both cases, you are identifying a sample group and asking that they try out the technology. The difference is what you intend to do with their feedback. Here is why:

Imagine a sample group gives me feedback about how they find using a new tool. What might I learn? I might learn about things users like and don’t like about:

  • Look and feel
  • The process involved in their functions with the system
  • Technical system performance (speed, reliability etc.)
  • Often most strikingly, what they don’t understand or don’t find easy

I also gain more quantifiable evidence here, such as how long it took users to work through a process, or a process at volume.

I then need to decide, for each type of feedback, whether to respond by (a) changing things to remove the difficulty or (b) working with and around the concern. For example, I could use the feedback to guide how I devise a training plan [and see the next part of this series].

The key point is that the first response (the change) needs time to iterate the design and re-test; the second does not. That’s a practical point when it comes to planning.

Being clear about the purpose of sampling use amongst your wider organisation, in either sense, will help to ensure users are not frustrated. Let those users know what you intend to do with their feedback too.

Key question: How should you choose your pilot group?

There is much debate about whether pilot groups should be a representative average, your most supportive (and yet helpfully critical) friends, or the really trickiest parts of the business to reach. It is very common, for example, to see the HR and Finance teams engaged as the pilot group.

A group is identified by location, departmental function or sometimes by association with particular management leads.

My recommendation is that the two profiles most likely to help are the most ardent supporters and those most resistant (or in complex areas). This is because both groups will respond – albeit with differing motives – with a comprehensive battering of use and give you full feedback. The last thing you want is a non-response.

Choose a pilot group proportionate to the functionality in technical scope rather than necessarily to organisational size. You need big pilot groups to test the volumes and complex array of scenarios for functions associated with time-recording and absence; basic self-service or core HR functions need rather less.

The Parallel Run: a realistic run towards results

Parallel running (i.e. concurrent use) of existing and new HR and payroll systems is the safe transition period before letting go of old tech. It is in payroll where this safety is really called for as a true final test. In other areas or types of system, the imperative is not there and keeping a legacy system going is likely to be about practical concerns, such as to help you manage and be sure of your data migration in its completeness.

I focus here on advice to the uninitiated leader or project colleague about how to run payrolls in parallel to perform that last true test with the right degree of rigour – avoiding undue risk without overkill:

  • Received wisdom is that two payroll periods are used. Consider three if things tend to be more variable period to period. One pay period alone is a risky short-cut, though viable in the most static of cases
  • Factor in the time of year. Consider seasonality, for example of starters and leavers or of variable data input. Whilst RTI has diluted the issue, year-end requires an extra degree of change. Your period of choice might well be the first period of the new tax year run in parallel, rather than setting the new system live and turning off the old one at that point, as can be the convention.
  • Be realistic. To reconcile to the penny is not realistic. A tolerance factor for discrepancy is needed throughout the testing stages, so it is best identified when each pay element is designed and then used consistently thereafter. Decide on a tolerance for each element and the total to tolerate then suggests itself – £1 on an hourly rate is wildly different from a £1 discrepancy in the total net payroll result! (See the sketch after this list.)
  • Test the system, not your sums – manual corrections are not the answer to “fixing” discrepancy. You could deal with manual interventions and cash lump sum entry by working with the same tolerance factors, but better is to create the accurate data at source. Even the most seasoned payroll pros may need to resist the urge to override a calculation.
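
A minimal sketch of that reconciliation logic, assuming you can export each pay element per employee from both systems; the tolerance values here are purely illustrative:

```python
# Compare old and new payroll outputs element by element, using the
# tolerance agreed for each element at design time (values illustrative).
tolerances = {"basic_pay": 0.01, "overtime": 0.50, "total_net": 1.00}

def reconcile(old_run, new_run):
    """Each run: {employee_id: {pay_element: amount}}. Returns discrepancies."""
    discrepancies = []
    for emp, old_elements in old_run.items():
        for element, old_amount in old_elements.items():
            new_amount = new_run.get(emp, {}).get(element, 0.0)
            if abs(new_amount - old_amount) > tolerances.get(element, 0.0):
                discrepancies.append((emp, element, old_amount, new_amount))
    return discrepancies

old = {"E001": {"basic_pay": 2000.00, "total_net": 1712.34}}
new = {"E001": {"basic_pay": 2000.00, "total_net": 1713.01}}
print(reconcile(old, new))  # 67p sits within the £1 net tolerance -> []
```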

If parallel run time tests your patience, then I urge you to bear with it. A maverick view, and one that in theory does hold water, is that adequate scenario testing on the new system with QA and UAT renders parallel payrolls redundant. It is the rare Finance professional happy to accept that as a plan.

Tip: As soon as any type of testing starts, clean data becomes important. Ensure development environments and those used for training (both forms of playground!) are separate. Keep user testers happy by preparing test grounds with complete and accurate data [read here for further advice in this area]. Beware the particular annoyance of spurious workflow email output! Give the test system email addresses that are false, or those of the testing team, or divert everything to a single mailbox – see the sketch below.
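
One simple way to achieve that divert – assuming you load the test environment’s employee data yourself rather than relying on the product’s own mail settings, and with the mailbox address hypothetical – is to rewrite every email address as the test data is prepared:

```python
# Illustrative divert: rewrite employee email addresses in test data so
# that all workflow output lands in one monitored test mailbox.
# Plus-addressing keeps the original address visible; whether your mail
# system supports it is an assumption.
def divert_address(real_email, test_mailbox="hr.testing@example.co.uk"):
    local, _, domain = real_email.partition("@")
    user, mailbox_domain = test_mailbox.split("@")
    return f"{user}+{local}.{domain}@{mailbox_domain}"

print(divert_address("jane.smith@acme.co.uk"))
# -> hr.testing+jane.smith.acme.co.uk@example.co.uk
```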

Step 8 in short!

The types of testing are a point of considerable confusion and worth clarifying. Distinguishing the purpose of testing the developers do, user acceptance testing and testing by the business more widely will help you to make the right choices about how to plan those activities.

  • User Acceptance Testing does what it says on the tin: users (the project team) accept (sign off) the scope of that testing.
  • Whether or not you are truly piloting depends on how much choice you really want to give the organisation.
  • Parallel running provides the specific rigour that the payroll function demands, at an appropriate degree of risk.

Enjoy the initial excitement of getting your hands on your new technology for the first time “for real” and, armed with advice, allow that to extend into smooth testing success. When the sign-off milestones are achieved, all are set to progress into promoting the results.

Next time: questions of training and communications, and taking the system out there!

Take One Step on Step 8!: Please do not side-step your project team role in UAT. Take it into your own hands to know and to apply your own business context in real-life, scenario-based testing. The system developers simply cannot do this well – and no professional is doing you a disservice by arguing strenuously against taking it on for you.


The full series can be read here
Enjoy additional content via our Phase 3 Insights library

To learn more about how Phase 3 can support your HR technology project then please contact Kate and colleagues here:  info@phase3.co.uk


Written by: Laura Lee

Laura’s role as Head of Marketing sees her continually looking for new opportunities to tell the world how great Phase 3 is.
