“It’s got to be perfect” – or does it? By James Proctor

How far on the road to perfection should we travel when launching people technology?

Last weekend, I was watching an old re-run of ‘Friends’, one of my all-time favourite TV shows. The episode I watched featured Monica and her boyfriend at the time, Pete. Pete was a millionaire who had made his fortune in the tech industry creating Moss 865 (something fictional, similar to Microsoft Office) and had recently taken up Ultimate Fighting. Here’s a reminder of the scene:

PETE: Well, let me ask you a question. Am I the Ultimate Fighting Champion?

MONICA: Well, no. But…

PETE: Well, I’m not gonna stop until I’m the Ultimate Fighting Champion.

MONICA: That guy stood on your neck until you passed out!

PETE: Let me tell you a story. When I set out to create Moss 865, do you think it just happened overnight? No! There was Moss 1 – that burnt down my Dad’s garage. There was Moss 2 – that would only schedule appointments in January; and 862 others that I learned from – just like I learned from the fight, never to let a guy stand on my neck.

MONICA: You didn’t know that already?

This scene got me thinking, and I scribbled in my notepad: when we launch systems, do they need to be perfect, or do we wait until 860-odd versions later to launch the product to our wider audience?

This is something we discuss regularly during projects. Here are a few pointers on how to answer the question:

Question 1: What impact does this have on the business?

If I’m launching a new payroll product, the likelihood is that 100% accuracy is required. Perfection here means getting the right result 100% of the time based on the data and processes. However, we also need to be careful that we aren’t doing things that only make the end result seem 100% accurate.

For example, when looking at the actual payroll data, have we used ‘adjustments’ to force a match where things aren’t working? Is that acceptable? Sometimes it may be, where moving to a new system actually improves accuracy. For example, the system now completes a calculation accurately, but the result doesn’t match the manual calculation we used to do. In this case, yes, it would be perfectly reasonable to make adjustments so the parallel run matches, with the expectation of greater accuracy down the line.
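For the technically minded, a parallel run of this kind often boils down to a simple reconciliation: compare the old (manual) figures with the new system's figures per employee and list every difference that needs an explained adjustment before sign-off. Here is a minimal sketch; the function name, data shape, and tolerance are illustrative assumptions, not taken from any real payroll product.

```python
# Hypothetical payroll parallel-run reconciliation sketch.
# The employee IDs, amounts, and tolerance below are illustrative only.

def reconcile(old_run: dict, new_run: dict, tolerance: float = 0.01):
    """Compare old (manual) and new (system) net pay per employee.

    Returns a list of discrepancies that need an explained adjustment
    before the parallel run can be signed off."""
    discrepancies = []
    for employee_id, old_pay in old_run.items():
        new_pay = new_run.get(employee_id)
        if new_pay is None:
            discrepancies.append((employee_id, old_pay, None, "missing in new system"))
        elif abs(new_pay - old_pay) > tolerance:
            discrepancies.append((employee_id, old_pay, new_pay, "amounts differ"))
    return discrepancies

old = {"E001": 2100.00, "E002": 1850.50, "E003": 2400.00}
new = {"E001": 2100.00, "E002": 1850.75, "E003": 2400.00}  # E002: system calculates differently

for emp, old_pay, new_pay, reason in reconcile(old, new):
    print(f"{emp}: old={old_pay} new={new_pay} ({reason})")
```

The point of the output is the conversation it triggers: is E002 a defect, or the new system being more accurate than the old manual calculation?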

Where the impact is great, such as with a financial system, the likelihood is that the end result must be 100% perfect.

Where systems have a lower impact (e.g. because they are accessed by fewer users or they have less of a direct impact on cash or people), they could be rolled out to early adopters, running in parallel with the old system, allowing the kinks to be ironed out.

Question 2: What is the minimum expected standard?

Whilst this may seem like a negative question, where there is urgency to replace a system by a hard end date it still needs to be asked. Usually, the answer is to replicate at least what the user has at present.

For example, when implementing new recruitment software which only recruiters currently use, match the existing functionality so you can turn the software on, and continue to develop the bells and whistles behind the scenes for a ‘Phase 2’ launch. This is common practice where there is a real need to move products. What I don’t mean here is ‘recreate the old product in the new product’. Use the fresh look and feel of the new system, but match the functional requirements of the old product, and have a clear roadmap of what further developments you wish to make and when they will happen.

Question 3: What happens if it goes wrong?

Your new system has gone live and suddenly you realise there is an issue – something that wasn’t anticipated. What can you do?

As part of any go live plan, there should be a ‘back out’ plan just in case the worst should happen. One organisation contacted us on go live day to ask how to turn off the system for users because they hadn’t realised their users could see every person in the business. Whilst this is an example of poor testing, there can be some simple reasons why you wish to temporarily disable the system for end users.
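In software terms, the simplest version of this ‘back out’ option is a kill switch: a single flag that lets the support team turn off end-user access without rolling back the whole deployment. The sketch below is a hypothetical illustration, assuming an in-memory flag; in practice the flag would live in a database row or configuration service so it can be flipped without redeploying.

```python
# Hypothetical 'back out' kill switch sketch. The flag store is an
# in-memory dict for illustration; a real system would use a database
# row or a config service so support can flip it at runtime.

FLAGS = {"system_enabled": True}

def handle_request(user: str) -> str:
    """Serve a user, or show a holding message if the system is disabled."""
    if not FLAGS["system_enabled"]:
        # End users see a holding message while the issue is fixed.
        return "The system is temporarily unavailable. Please try again later."
    return f"Welcome, {user}."

print(handle_request("alice"))    # normal service
FLAGS["system_enabled"] = False   # support disables access on go live day
print(handle_request("alice"))    # holding message
```

Deciding who is allowed to flip that switch, and how users are told, belongs in the go live plan rather than the code.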

My approach to go live is always to pilot first. The technical specification has been tested by the project and testing teams; the functional specification, against the business need, has been validated in User Acceptance Testing. In theory, this leaves you good-to-go towards live use of the system.

In practice, most organisations want a further “test” opportunity: sampling what will happen when the system is in live use. The opportunity here (and the associated risk of skipping it) is all about understanding usability and experience, as well as gauging cultural or communications issues. Another reason for a trial is to test the strain on the system as volumes increase, and its performance on the live IT infrastructure.

Conclusions and Practical Tips

With People Technology, the likelihood is that there will be a staged approach to rollout rather than a ‘Big Bang’. Consider using the basic aspects of the system first, then build onto that and work with your supplier or consultants to understand the path of least resistance when increasing the functionality on offer.

Some top tips here include:

1. Know your stakeholders and communicate regularly about what will be happening and when

2. Understand the go live milestone and have a backout plan should the worst happen

3. Decide what should go live and when – and be clear about the future roadmap

4. Build and design your system with the end state in mind – ensuring you don’t have to rebuild everything at a later date

5. Pilot the system in a real world situation – testing can often use perfect data or scenarios which are not realistic

6. Take feedback from pilots and act upon it

7. Test thoroughly before releasing the product to end users

8. Be prepared, on go live day, to have extra assistance on hand. Despite user training, there is likely to be a need for additional support on day one

9. Communicate go live in clusters to avoid every user trying to log on at once – this can cause strain on the system and the speed may be affected

10. Complete a lessons learned review from your initial go live – and use the experience to improve the next module ‘switch on’ or the next phase of the project
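Tip 9 – communicating go live in clusters – can be as simple as splitting your user list into fixed-size waves and inviting one wave at a time. The sketch below is an illustrative assumption about how you might do this; the wave size and the one-wave-per-day schedule are made up for the example.

```python
# Hypothetical sketch of tip 9: splitting users into go-live clusters
# so invitations go out in waves rather than to everyone at once.
# The wave size (4) and daily cadence are illustrative assumptions.

def make_clusters(users, cluster_size):
    """Split users into fixed-size waves, preserving order."""
    return [users[i:i + cluster_size] for i in range(0, len(users), cluster_size)]

users = [f"user{n}" for n in range(1, 11)]
for day, wave in enumerate(make_clusters(users, 4), start=1):
    print(f"Day {day}: invite {', '.join(wave)}")
```

Grouping by department or location, rather than arbitrarily, tends to make day-one support easier because floor-walkers can be in the right place.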


For more information, visit the Our People page.

Written by: Laura Lee

Laura’s role as Head of Marketing sees her continually looking for new opportunities to tell the world how great Phase 3 is.
