I was recently asked by a journalist whether AI would eventually make all customer service professionals redundant. My answer was an emphatic no, but it made me reflect on why others may hold this view and whether we are clear about what we want from our service professionals, society, and the role of work as we move forward.
From AI voice agents to predictive customer analytics, you don’t need to look too hard to see how rapidly AI is becoming part of everyday life, both in and outside of work. The pace of change is phenomenal – even in the last six months, the landscape has shifted significantly, and there’s little sign of this abating as we hungrily look for improved efficiencies. The natural assumption (and not always the right one) is that the machine is faster!
I know many of you are under pressure to understand and adopt AI at speed, and increasingly it is being woven into our systems and work patterns in a way that simply can’t be unpicked in the future. Given AI’s potential to improve productivity and performance, the rush to keep up – or get ahead – is understandable. But businesses also need to consider and balance the risks of unintended outcomes.
Right now, there is a sizable gap between what we want AI to do for us and what it can reliably deliver: it often lacks the precision, consistency and personal touch that we know is so important to employees and customers, particularly in certain circumstances.
That gap may be narrowing, but will we ever truly be comfortable with the machine dispensing wisdom on delicate and nuanced matters, or providing judgement on moral dilemmas?
Thinking beyond the technology itself
When we’re talking about outcomes, it’s easy to view the technology itself as the problem when things don’t go as planned. With AI, the reality is much more complicated.
AI can only ever be as effective as the data, insight and information that underpin it. Too often, we focus on how sophisticated the model or algorithm is rather than on the foundations and principles from which reliable insights can be drawn.
If we organise our data properly, deploy the right people to train systems thoroughly, and conduct the research needed to understand how AI can improve our customers’ experience, the output we get will be clearer.
But whatever the output, it holds little value if you don’t know how to assess and act on it, or if it hasn’t been tested with the necessary moral questions we need to consider.
Digital specialists can help improve inputs and translate AI outputs into client- or customer-ready solutions. And so can all our people right across the business, if we give them the tools they need, while putting the right guardrails in place and instilling clear, ethical boundaries.
Deploying AI with purpose
For me, this boils down to applying a humanistic approach where it matters. This starts with asking the right questions.
What problem are we trying to solve? What outcomes do we want to achieve, and what are the true benefits for employees, customers and the organisation? Does this deliver on our purpose, make the company more relevant and support a balanced approach to our stakeholders? Are we more effective?
And listening to the answers… even if we may not like what we are hearing!
Once the purpose is clear, we can plot the best route to the desired outcome. Do we need better data? Do we have the right skills? What processes and accountabilities are in place? How will we iterate and course-correct? Where do we apply our get-outs and fail-safes, what lines will we not cross, and how are we applying the right moral judgements?
AI is here to stay – and will enable many positives. But it is our judgement, our curiosity, and our commitment to our people, customers, and society that will determine whether it truly delivers on its promise to improve the world we all want to live in.
