Paying (for) attention: why context matters when we engage customers with automation

John Parkinson, Affiliate Partner, Waterstone Management Group

As we automate more and more processes within business and commerce, it’s important to bear in mind that, as one of our former colleagues likes to say, “People are not peripherals” and can’t be treated as though they are. The systems and software we have today are much better than what we had ten or twenty years ago, and we may well be tempted to take human interaction out of more and more (especially predictive) decisions. Recognize, however, that this can create unintended consequences when some human participation is needed or expected. It can also mean that failures, driven by factors such as a lack of customer engagement or slow recognition of unexpected problems, are extreme when they do occur.

So how do we incent our customers (and the employees who serve and support them) to stay engaged with automation? Perhaps by giving them some of the work or decisions we could automate, but that wouldn’t be that much more efficient if we did. Perhaps by deliberately letting the customer and employee exercise discretion in some parts of their interaction — well-trained call center agents delivering a positive experience with certain types of customer issues are a good example — even when we could just use software to decide and act.

These challenges have been creeping up on us for a long time. In the early 1990s one of us was involved in rescuing a large and innovative project that seemed to be going in a bad direction. The objective had been to automate a set of complex processes that had previously needed two closely coordinated and highly trained people to manage so that a single (still highly trained) person could manage them alone. At the same time, that single person was expected to perform a much wider range of tasks than the two people being replaced could have managed — all through a more powerful and flexible set of “integrated platform technologies”.  Half the people, twice the scope. What’s not to like?

The only problem was that the highly trained and very skilled operators couldn’t do it. They kept making mistakes and “crashing” the system. The trainers said it was the fault of the technology. The technologists said it was the training. Both blamed the operators.

It turned out that the operators were being overloaded with information by the automation that was supposed to make their lives easier. Instead of getting just what they needed at the time they needed it, the systems were trying to tell them everything, all the time. At some point, the next item, no matter how critical, got lost in the flood and, if it was really critical, ignoring it triggered the crash. Three decades later, it often seems that what businesses are doing to “engage” consumers is going the same way — information overload without context.

Back in 1991 we solved the problem by building a model of “situational awareness” into the automation so that it provided only the information that was critical to the operator in each of the many tasks they were being given. Everything else got stored (sometimes for a short while, sometimes permanently) so that it could be recalled later if needed and could be used for trend analysis and prediction when circumstances changed. That let the operators focus on the critical aspects of their work and let the automation run the rest. The automation was “context sensitive” and reacted appropriately — we gave it context. It wasn’t perfect — we could not anticipate every possible situation that might arise — but it virtually eliminated operator error in the wide range of situations we could anticipate.
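In modern terms, a rough sketch of that idea might look like the snippet below: every incoming event is checked against a profile of what matters for the operator’s current task, and anything that doesn’t clear that bar is archived for later recall and trend analysis instead of being shown immediately. The task names, topics and severity thresholds here are purely illustrative, not the actual system we built.

```python
# A minimal sketch of "situational awareness": surface only what is critical to
# the operator's current task, and archive everything else for later recall or
# trend analysis. All names and thresholds are illustrative placeholders.

from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Event:
    topic: str       # e.g. "pressure", "billing", "queue_depth"
    severity: int    # 1 (informational) .. 5 (critical)
    message: str

# Which topics matter for each task, and the severity at which they must be shown.
TASK_PROFILES = {
    "monitor_flow":   {"pressure": 2, "queue_depth": 3},
    "handle_billing": {"billing": 1, "queue_depth": 4},
}

archive = defaultdict(list)   # everything is kept, whether or not it is shown

def route(event: Event, current_task: str) -> bool:
    """Return True if the event should interrupt the operator right now."""
    archive[event.topic].append(event)
    threshold = TASK_PROFILES.get(current_task, {}).get(event.topic)
    return threshold is not None and event.severity >= threshold

if __name__ == "__main__":
    e = Event("queue_depth", 4, "Backlog growing faster than usual")
    print(route(e, "monitor_flow"))    # True  -> surfaced to the operator
    print(route(e, "handle_billing"))  # True  -> severity 4 meets this profile's bar
    e2 = Event("pressure", 1, "Minor fluctuation, within normal range")
    print(route(e2, "monitor_flow"))   # False -> archived only
```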

We also designed sets of “filters” that tried to smooth out the many streams of data that fluctuated widely and rapidly, but where the fluctuations didn’t affect what the operator needed to know — again, giving context. It’s easy to get this wrong if you don’t understand how the underlying systems are expected to perform and how the data you get reflects this performance. It takes a while to calibrate the filters so that they don’t suppress the things you really do need to know (such as rare events that might cause a cascade of engagement issues or failures elsewhere) yet don’t overload you with things that are “strange” but not important (the “glitches” that seem to occur in most complex systems, but that don’t have an impact and rarely or never repeat).
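The calibration problem can be illustrated with a simple smoothing filter that tracks a running level and a running “typical” deviation, and only raises an alert when a reading lands far outside that band. The smoothing factor, band width and warm-up period below are illustrative placeholders; the real filters had to be tuned to how the underlying systems actually behaved.

```python
# A hedged sketch of such a filter: smooth out routine fluctuations, but never
# suppress readings far outside the expected band. Parameters are illustrative.

class SmoothingFilter:
    def __init__(self, alpha: float = 0.2, band: float = 4.0, warmup: int = 5):
        self.alpha = alpha      # how quickly the smoothed level tracks new data
        self.band = band        # how many "typical deviations" count as rare
        self.warmup = warmup    # readings to observe before alerting at all
        self.smoothed = None
        self.typical_dev = 0.0
        self.count = 0

    def update(self, value: float):
        """Return (smoothed_value, alert) where alert flags a rare excursion."""
        self.count += 1
        if self.smoothed is None:
            self.smoothed = value
            return value, False
        deviation = abs(value - self.smoothed)
        # Only flag once we have seen enough data to know what "typical" means.
        alert = self.count > self.warmup and deviation > self.band * self.typical_dev
        # Update the running estimates of level and typical deviation.
        self.smoothed += self.alpha * (value - self.smoothed)
        self.typical_dev += self.alpha * (deviation - self.typical_dev)
        return self.smoothed, alert

if __name__ == "__main__":
    f = SmoothingFilter()
    # Only the final spike raises an alert; the small fluctuations are smoothed away.
    for reading in [10.0, 10.2, 9.9, 10.1, 10.0, 10.2, 9.8, 17.5]:
        level, alert = f.update(reading)
        print(f"{reading:5.1f} -> smoothed {level:6.2f}  alert={alert}")
```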

Then, as a “backup” option, we gave the operators the ability to override the automation if they felt they needed to and take control of the system. There was a lot of debate about this – not everyone agreed that it was a good idea – but it was clear from our test data that there were situations that the automation didn’t handle well (it made logical but usually erroneous choices) and where even an overloaded but skilled operator with some freedom of action was a better choice. We then had context + human intervention when desired.
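In practice, the override amounted to gating every automated action on a switch that only the operator could set; with the switch on, the automation kept advising but stopped acting. The sketch below is a hypothetical illustration of that pattern, not the original implementation.

```python
# A minimal sketch of a manual-override gate: every automated action checks a
# flag the operator controls, and with the flag set the automation only advises.

class Controller:
    def __init__(self):
        self.manual_override = False   # set by the operator, never by the automation

    def act(self, recommended_action, execute, notify_operator):
        if self.manual_override:
            # Human has taken control: surface the recommendation, do not act on it.
            notify_operator(f"Suggested action: {recommended_action}")
        else:
            execute(recommended_action)

if __name__ == "__main__":
    c = Controller()
    c.act("reroute traffic", execute=print, notify_operator=print)   # automation acts
    c.manual_override = True
    c.act("reroute traffic", execute=print, notify_operator=print)   # operator decides
```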

Our final tweak, because we had a record of just about everything that went on in every interaction, was to analyze the recorded data to continuously improve the range of situations that the automation could handle. Over time, we found lots of patterns that we could use to provide better help to the operators (and a better experience to customers) and it looked as though we would eventually get close to a fully automated system, with no operators required. Define the task, set the automation profile and range of situational variables to account for potential unanticipated events, press the button and let the automation take care of the work. As it turned out, this was too optimistic. We never managed to fully anticipate all the possible “unanticipated” events that could be encountered or to build in enough “intelligence” to allow us to remove the essential “human in the loop”.

Which is probably a good thing from the human perspective, although it did mean that the highly skilled and expensively trained operators had much less to do, and the system had a hard time keeping them from getting bored. That meant that when they were really needed, they often hadn’t been paying attention, took a while to get oriented and didn’t perform all that well when handling emergencies. In the end, we had to find ways to keep the operators alert, engaged and ready to step in when needed, providing that essential, but not often needed, human interaction. That meant they had to have something to do, even if they didn’t really need to be doing it. So they would have to check in from time to time with the automation and verify that some aspect of the process was working correctly. Or they’d be required to generate and comment on a status report at intervals that might vary from every few minutes to once a day or so.

We called this the “attention tax” in the system, because it was non-value added activity that nevertheless had to be “paid” to keep the operators engaged and thus to get the overall system to function smoothly and reliably almost all the time — and no amount of smart automation ever eliminated it entirely.
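As a rough illustration of how such an attention tax might be scheduled today, the sketch below issues small verification tasks at jittered intervals and treats a missed acknowledgement as a signal that the operator is drifting. The task names, intervals and miss rate are hypothetical.

```python
# A hedged sketch of an "attention tax" scheduler: hand the operator small
# verification tasks on a jittered schedule, and record any missed check-ins.

import random

# Each check-in task and the rough interval (in minutes) at which it recurs.
CHECK_TASKS = {
    "verify_queue_levels":  5,        # every few minutes
    "review_status_report": 24 * 60,  # about once a day
}

def simulate_shift(minutes: int, acknowledge):
    """Walk through a shift minute by minute, issuing check-ins as they fall due."""
    due = {task: interval * random.uniform(0.8, 1.2)
           for task, interval in CHECK_TASKS.items()}
    missed = []
    for minute in range(minutes):
        for task, when in due.items():
            if minute >= when:
                if not acknowledge(task):
                    missed.append((minute, task))   # a disengagement signal
                # Reschedule with fresh jitter so the checks stay unpredictable.
                due[task] = minute + CHECK_TASKS[task] * random.uniform(0.8, 1.2)
    return missed

if __name__ == "__main__":
    # Simulated operator who skips roughly one in five check-ins.
    missed = simulate_shift(8 * 60, lambda task: random.random() > 0.2)
    print(f"Missed {len(missed)} check-ins over an eight-hour shift")
```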

But it was better to pay the attention tax and keep the humans engaged, alert and available for the jobs that only they can do. It’s the same in business. Context matters.


About the Authors:

John Parkinson is a founder and managing director at ParkWood Advisors, LLC and an affiliate partner at Waterstone Management Group. He has been a strategist and advisor for over three decades. He can be reached at: john@parkwoodadvisors.biz.

Nicolette de Guia is the Founder and Managing Partner at N7 Momentum LLC, bringing over a dozen years of experience to the ever-evolving challenges of product marketing and brand building. She can be reached at: ndg@n7momentum.com.
