
AI in the workplace is already here. The first battleground? Call centers


“I got this road map in my head of what it looks like when you’re delivering world-class customer service—what triggers people, what makes people trust you,” Mr. Bragg said. “It’s like when da Vinci was painting.”


Mr. Bragg is one of the top-performing sales agents for HomeServe USA Corp., a home-repair service company that sells plumbing, heating, cooling and electrical repair plans to about five million customers in North America. For 11 of the past 12 months, working from a cavernous call center on the outskirts of Chattanooga, Tenn., he has been in the top 10% of its 432 agents, he said, for the simple fact that he listens to what people want.

“I don’t just say stuff and read scripts,” said Mr. Bragg. “I listen to everybody, whoever you are, and I retain what it is that makes that person interested. I can get just about anybody to buy anything.”

Recently, with business growing, HomeServe hired a new agent to assist Mr. Bragg and his co-workers. Named Charlie, she’s an artificial intelligence-powered virtual agent that HomeServe built using a conversational AI platform from Google and other technologies. She answers 11,400 calls a day, routes them to the appropriate departments, processes claims and schedules repair appointments. She whispers in agents’ ears whether a customer is eligible for certain coverage plans and types on agents’ screens why the customer is calling.

“I tell agents to think of Charlie as a personal assistant,” said Jessica Cloud, vice president of automation and innovation.

Charlie isn’t universally liked inside the Chattanooga call center. She can be controlling, including requiring agents to say specific words when they talk to customers, and penalizing them if they don’t. She sometimes routes callers to the wrong department. “We’re taking up a collection to get Charlie a hearing aid,” said Mr. Bragg’s colleague Robert Caldwell, another top-selling agent, sitting in a cubicle nearby.

Sometimes she suggests unwelcome ideas for what agents should say next. Charlie recently told Mr. Bragg a caller wanted to enroll in a repair plan. She didn’t understand that the man’s water pipe had burst, that he was waiting for a repair and that he was livid. When Mr. Bragg picked up the call and repeated what Charlie told him to say—“I see you’re trying to enroll”—the man exploded in rage.

From management, Charlie is getting rave reviews for her efficiency and is about to get a promotion. Soon, she’ll start telling agents specifically what they should say and do next. She’ll also start grading the humans on their performance.

“She’s supposed to make the job easier, not just make us do what she said,” said Mr. Bragg. He worries Charlie makes too many mistakes. “I’m a top performer. She’s not my supervisor.”

‘A massive restructuring’

A new generation of artificial intelligence is rolling out across American workplaces, and it is prompting a power struggle between humans and machines.

Recent advances in technologies such as ChatGPT, natural-language processing and biometrics, along with the availability of huge amounts of data to train algorithms, have accelerated efforts to automate some jobs entirely, from pilots and welders to cashiers and food servers. McKinsey & Co. estimates that 25% of work activities in the U.S. across all occupations could be automated by 2030.

Today, however, AI’s biggest impact comes from changing the jobs rather than replacing them. “I don’t see a job apocalypse being imminent. I do see a massive restructuring and reorganization—and job quality is an issue,” said Erik Brynjolfsson, director of the Stanford Digital Economy Lab. McKinsey estimates 60% of the 800 occupations listed by the Bureau of Labor Statistics could see a third of their activities automated over the coming decades.

For workers, the technology promises to eliminate the drudgery of dull, repetitive tasks such as data processing and password resets, while synthesizing huge amounts of information that can be accessed instantly.

But when AI handles the simple stuff, say labor experts, academics and workers, humans are often left with more complex, intense workloads. When algorithms like Charlie’s assume more human decision-making, workers with advanced skills and years of experience can find their roles diminished. And when AI is used to score human behaviors and emotions, employees say the technology isn’t reliable and is vulnerable to bias.

One of the most fertile testing grounds is the call center, or as labor experts call it, the “factory of the information economy,” and HomeServe is among the early adopters. Across the industry, workers are measured on dozens of tasks from “average handle time” to “first call resolution,” and worker burnout rates are high. In a 2022 survey, 65% of call-center agents anticipated leaving their jobs in the following two years, according to market research firm Customer Management Practice, which polled 1,000 workers between April and June of that year.

Proponents say AI promises to fix much of this by handling monotonous tasks and the stress of decision making. In recent years, companies have begun using machine-learning models to scan and analyze conversations between agents and customers. Conversation analytics quickly identify the words and sentiments customers are expressing to find patterns. The technology can detect how each agent is performing and recommends what the human should say and do next.

New AI technology “helps to take decision-making responsibility away from the agent, so they can act,” said Brittany Bell, customer-success manager at Cresta, a conversation-analytics startup with customers including American Express Co., Cox Communications Inc. and Signet Jewelers Ltd.’s Blue Nile, during a recent presentation.

When humans turn over decision making to a machine, they no longer use their own knowledge and experience—just ask taxi drivers whose street knowledge has been superseded by Google Maps. In her research about call-center automation, Virginia Doellgast, professor of comparative employment relations at Cornell University, has found that humans who are tightly monitored by an algorithm, forced to follow a script or have little control over how they work are more likely to get burned out and find it harder to solve customer problems.

Adds Julian McCarty, the CEO of conversation-analytics company MosaicVoice: “There’s a balance between empowering an agent and telling them what to say.”

Companies including Comcast Corp., Charter Communications Inc.’s Spectrum and Cox Communications are even further along than HomeServe. They are using conversational AI to detect and measure more subjective human emotions and behavior through a technique called sentiment analysis, a tool that decides whether conversations are positive, negative or neutral. Some models evaluate words and context to score conversations, and others also include voice pitch, tone and cadence. Comcast analyzes most conversations between customers and agents and scores employees on behaviors such as “warm and friendly” and “make it effortless.”
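
The text-only flavor of that scoring can be conveyed with a minimal, hypothetical sketch: a small word list assigns each customer utterance a polarity, and the running total labels the call positive, negative or neutral. The word lists, thresholds and function names below are assumptions made for illustration, not any vendor’s actual model, which would be trained on far richer signals.

    # Hypothetical illustration of text-based sentiment scoring on a call
    # transcript. The word lists and thresholds are invented for this sketch;
    # production systems use trained models over words, context and audio.

    POSITIVE = {"thanks", "great", "helpful", "perfect", "appreciate"}
    NEGATIVE = {"angry", "terrible", "cancel", "frustrated", "waiting", "burst"}

    def utterance_score(text: str) -> int:
        """Score one customer utterance: +1 per positive word, -1 per negative word."""
        words = [w.strip(".,!?") for w in text.lower().split()]
        return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

    def call_sentiment(utterances: list[str]) -> str:
        """Label the whole call positive, negative or neutral."""
        total = sum(utterance_score(u) for u in utterances)
        if total > 0:
            return "positive"
        if total < 0:
            return "negative"
        return "neutral"

    transcript = [
        "My water pipe burst and I have been waiting all week",
        "I am frustrated, I just want the repair scheduled",
        "Okay, thanks, that was helpful",
    ]
    print(call_sentiment(transcript))  # prints "negative"

Tools like Nice’s Enlighten go further by weighting context and, in some versions, acoustic features, but the classification step is the same in spirit.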

In interviews across a range of companies, call-center agents say they value AI’s ability to access information quickly to help them make decisions. Many object when they are forced to use AI-generated recommendations or say scripted words against their own judgment. Several said they are uncomfortable with automated performance reviews built on subjective measures like sentiment.

“It’s very hard for a robot with no emotions to truly judge how a call is going,” said Lise Hildebrand Stern, who left her job at Spectrum last year after nine months because of the impersonal nature of the AI performance scoring and the stress she said it caused. “My metrics suffered because this system was unable to judge me based on my attitude, unlike a human being would be able to do.”

‘Hi, I’m Charlie’

When HomeServe decided to introduce Charlie, company executives wanted to make sure employees viewed her as a partner.

“I think when people start thinking about artificial intelligence, a lot of folks say, ‘I’m going to be out of a job.’ It was important for our center to know this is not to replace their job, but to augment their job,” said Ms. Cloud, the HomeServe vice president.

To humanize Charlie, the creative team developed an avatar that felt representative of their employees. She’s a 42-year-old biracial brunette from Ohio who likes jazz and has two children. (They chose a Midwestern background because she has no accent, and jazz because someone might listen to it in their neighborhood, Ms. Cloud said.) Management asked agents to suggest gender-neutral names for the robot. Charlie won out over Devon, MacKenzie and Jesse. Sarah—an acronym for “self-assisted robotic agent for HomeServe”—was rejected as too impersonal.

Charlie started out with simple tasks such as greeting callers, saying, “Hi, I’m Charlie, your digital assistant,” and asking basic questions, such as, “Please tell me why you are calling today.” After learning to route callers to the proper department, she was able to reduce average call-handle times by 36 seconds, or more than 10%, Ms. Cloud said.
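
A rough sense of that routing step can be given with a small, hypothetical sketch: the caller’s answer to “Please tell me why you are calling today” is matched against per-department keyword lists. The department names and keywords below are invented for illustration; HomeServe’s actual system is a trained conversational-AI model, not hand-written rules.

    # Hypothetical keyword-based call router. The departments and keyword
    # lists are assumptions for this sketch, not HomeServe's configuration.

    ROUTES = {
        "claims":     ["leak", "burst", "no heat", "not working", "repair"],
        "billing":    ["bill", "payment", "charge"],
        "enrollment": ["enroll", "sign up", "new plan", "coverage"],
    }

    def route_call(caller_statement: str) -> str:
        """Return the first department whose keywords appear in the caller's answer."""
        text = caller_statement.lower()
        for department, keywords in ROUTES.items():
            if any(keyword in text for keyword in keywords):
                return department
        return "general"  # fall back to a human queue when nothing matches

    print(route_call("My water pipe burst and I need a repair"))  # claims
    print(route_call("I want to change how I pay my bill"))       # billing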

Charlie is a quick study. By late fall, she was trained to handle a water-leak claim (“Is this a major leak?”), use empathy (“I’m sorry to hear about your leak”) and determine the urgency of the issue (“Are you able to shut off the water yourself?”). She then booked a contractor to come out for the repair. From start to finish, Charlie’s processing took less than two minutes, compared with the roughly eight minutes a human averages. She now handles 15% of claims volume and is expected to handle 20% by next year. Chief Transformation Officer Kim Ratcliffe said she hopes Charlie can eventually take over 40% of calls.

“When Charlie gets involved, time resolution is faster for the customer,” said HomeServe USA Chief Executive Officer Tom Rusin. During a major December storm, she helped 10,000 customers, equivalent to 12% of the total affected, to book claims and schedule repairs without talking to an agent. At this rate, she will pay for herself within 18 months of purchase. “It’s taking out hundreds of thousands of minutes from our calls a year,” said Mr. Rusin. “And a minute’s expensive.”

There are growing pains as Charlie gets trained, Mr. Rusin said. “In the beginning, you have to relearn what your agents have been doing for years and teach it to the computer.” At the U.K. office of HomeServe, Hana, the British version of Charlie, routinely failed to route calls to the water line repair department until programmers realized she was mistaking the word “leak” for “lake” because of British accents. Once a data scientist spotted the mistake, the fix was easy. Mr. Rusin is confident Charlie’s early miscues will get worked out.

“It takes a lot of time at the beginning, then I think growth will come exponentially from there,” Mr. Rusin said.

Stress rises

John Maynard Keynes, the noted economist, predicted that technology would eliminate the monotonous nature of work, freeing up humans to toil less and enjoy life more. In call centers, AI is indeed stripping out much of the routine work. What companies didn’t anticipate was that the initial chitchat in a routine call gives workers a break and a pleasant way to connect with people. Once it is gone, the work that remains is complex, intense and often stressful.

At HomeServe, the company has seen higher call volume. Its agents also are handling more complicated calls. “The agent gets the calls that Charlie can’t figure out,” said Catlin Duvall, manager of HomeServe’s repair department. “That’s a larger percentage of our calls. Now when you pick up the phone they have three problems instead of one. It’s better for the customer. It can be more stressful on agents.”

Ms. Hildebrand Stern, the agent who worked at Spectrum in its Appleton, Wis., call center, said the pressure to meet AI metrics added to the stress from irate callers who often cursed at her.

She had worked in customer service her whole adult life, as a hotel front desk manager and a cashier in retail and fast food, and thought call center work would be fulfilling. Although she enjoyed helping customers, she kept scoring low on the AI-generated sentiment scores. She has tinnitus and speaks with a monotone speech pattern, she said, and doesn’t always hear clearly if callers speak softly.

The AI marked her down for not using specific keywords, she said, although she never discovered what words she was supposed to say. She said her supervisor listened to the calls and told her, “It sounds like you’re doing a really good job.”

To try to relax, she’d go home at night and eat macaroni and cheese in front of the TV, watching three or four reruns of “Law & Order: SVU.” “I would try to erase the whole day from my memory and come back the next day with a better attitude.”

As the months went by, angry customers kept calling and her automated sentiment scores kept falling, she said. Although the job paid $20 an hour and included a free cable package, she decided it wasn’t worth the cost. “I got to the point where I couldn’t erase it anymore.” Nine months into the job, she quit.

A spokesman for Spectrum’s parent company, Charter Communications, said the company uses sentiment analysis as one component of its performance reviews but that employees receive human input as well. He said the system doesn’t score pitch or tone for employees or customers. The analytics are a valuable resource for assessing how customers feel about the company and for scoring agent performance, he said.

Robot empathy

Sentiment analysis has become one of the buzziest and most-debated new areas of customer-service analytics. Nice Ltd., a software analytics firm with clients such as American Airlines Group Inc., Radisson Hospitality Inc., Morgan Stanley, Walt Disney Co., Comcast and Wonderful Co.’s Teleflora, is a pioneer.

The holy grail is determining customer intent, said Barak Eilam, a former Israeli military intelligence officer who took over as Nice CEO in 2014. Nice’s Enlighten sentiment analysis helps determine what customers want by analyzing “what is said and how it is said,” Mr. Eilam said. The technology uses words and the context in which they are used, as well as changes in pitch, tone and cadence, to analyze customer feelings, according to company marketing materials and Kevin Lee, vice president and global head of digital sales.

During a demo at the company’s offices in Hoboken, N.J., a desktop dashboard displays the progress of a re-enacted conversation between a hotel guest and a reservations agent.


The guidance is like collision detection in a car, Mr. Lee said, alerting both the agent and manager that a conversation is about to crash and offering recommendations for how to avoid that outcome.

Nice later said its technology no longer uses tone and pitch measurements, because they “fail to add meaningful value,” but wouldn’t explain further how its products had changed.

Telecom giant Comcast uses Nice Enlighten to detect customer sentiment and score agents’ performance on most of their conversations with customers. The company said detailed feedback on every call makes the scores much more accurate and precise.

Chasity Miller, a customer-experience agent for Comcast in Lebanon, Pa., for the past 7½ years, thinks her AI sentiment scores are more scientific and less prone to inconsistencies and human error because they are based on all her interactions, not just the one or two a week that were previously graded by a human manager.

“I score exceptionally high on it,” she said. The system rewards agents for certain word choices, such as “ambassador,” “superfast,” and “let me summarize everything we did today,” she said, which are easy for her to use. Her supervisor told her the system measures tone and pitch, she said. She speaks with enthusiastic fluctuations in her voice, she said, which the AI scores highly. “I can say, ‘you’re a piece of s—!’ But if I say it with an upward fluctuation at the end of the sentence, the AI likes it,” she said.

She said many of her colleagues at the call center struggle with the scores if they speak with an accent or don’t use a lot of emotion in their voice. “I don’t think I’m a better performer,” she said. “But there’s a bias against a guy’s voice or accents. A lot of tenured agents aren’t saying the magic words.”

Three other Comcast agents scored by Enlighten said they worry the model has biases that favor some groups over others. A former Comcast agent with a Filipino accent who worked at the company for nine years said before AI scoring, she consistently scored “highly effective” and ranked in the top 100 agents for four consecutive years. That qualified her for preferential scheduling. Once the AI came in, she said her sentiment scores dropped below the required levels even though her supervisor said she was saying the right words. She quit in December and went to work at another call center without AI.

Agents say they generally aren’t able to challenge the AI scores, even though promotions and raises depend on them.

A Comcast spokesman, Daniel Friedman, said performance scores are based on words and phrases used in call transcripts. He said pitch and tone were originally included but the company turned off that function because it didn’t make scores more accurate. He said the AI measures “warm and friendly” and other behaviors using factors like “intent of what the customer is saying,” whether the employee is “consistently being friendly throughout the call” and “building a personal connection.”

Mr. Friedman said agents can challenge the AI scores at any time with their supervisors or during frequent group meetings.

‘Next best action’

HomeServe has big plans for Charlie this year. The company will introduce real-time guidance for agents that will suggest what they should say or do next. “It will auto-populate the script so [an agent] doesn’t have to think so much about what to say to get the conversation started,” said Ms. Cloud.

Pop-ups on agents’ screens will suggest the “next best action,” she said. It might detect that a customer already has gas-line insurance and suggest the agent sell water-line coverage as well. Charlie will tell agents how to speak. “She might say, ‘Hey, there’s a long pause here or you’re talking too fast,’ ” Ms. Cloud said. She emphasized that it will be voluntary, not required, for agents to take Charlie’s advice. Also on the agenda: Charlie will start scoring the humans on their call performance.
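
A minimal sketch of how such rules might be expressed appears below; the plan names, pause threshold and speaking-rate limit are assumptions made for illustration, not HomeServe’s actual logic.

    # Hypothetical "next best action" rules mirroring the examples HomeServe
    # describes: a coverage cross-sell plus real-time pacing prompts. The
    # thresholds and field names are invented for this sketch.

    def next_best_action(customer_plans: set[str],
                         pause_seconds: float,
                         words_per_minute: float) -> list[str]:
        suggestions = []
        # Cross-sell: customer has gas-line coverage but no water-line coverage.
        if "gas-line" in customer_plans and "water-line" not in customer_plans:
            suggestions.append("Offer water-line coverage")
        # Real-time coaching on pacing.
        if pause_seconds > 5:
            suggestions.append("There's a long pause here")
        if words_per_minute > 180:
            suggestions.append("You're talking too fast")
        return suggestions

    print(next_best_action({"gas-line"}, pause_seconds=6.2, words_per_minute=150))
    # -> ['Offer water-line coverage', "There's a long pause here"]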

The company acknowledges that Charlie has yet to win over a small percentage of agents and said it holds frequent agent forums to solicit feedback. The percentage of agents who use the data Charlie provides every day is now over 90%, a spokesman said, up from 70% in 2021. Meanwhile, customer satisfaction is up slightly since Charlie started and HomeServe plans to keep her busy.

“I don’t think anything is off limits because we have to enable our customers to transact in whatever means they’re most comfortable,” said Mr. Rusin, the CEO. “So my philosophy is—automate everything. The choice will ultimately reside with the consumer.”

Robert Caldwell spent 35 years in the restaurant industry and said he loves selling insurance plans for people’s homes. “I feel really and truly like you’re helping people,” he said. “Sometimes they don’t even know.”

A customer-service agent at HomeServe for five years, he’s routinely the top salesman in the department and likes to use his own personal experiences when talking to customers to win their trust. “If Charlie sells a plan, I’m going to sell four plans,” he said.

Dressed in a crisp red cotton shirt with a “HomeServe” label over the pocket, Mr. Caldwell donned his headset, hunched over his keyboard and clicked on his 26th call of the day. A woman from Cypress, Calif., wanted to change her billing method. While she waited for Mr. Caldwell to make the shift, she asked him whether she even needed insurance any more.

“I’m on a fixed income,” she said. “I’m an old lady. My house is old. Everything’s old. What’s the advantage of staying with you guys?”

Mr. Caldwell asked her how old the water and sewer lines were and determined they were at least 60. With pipes that old, she shouldn’t risk canceling the plan, he said, because “it’s not a question of if, but when the old lines will burst. That happened to me in 2013 and I had to pay $4,700.”

After he won her trust, she was an easy sell for an interior plumbing plan. But he hesitated. “I can’t in good conscience add $25 to a utility bill when she can’t afford it as is,” he said. “I can envision this woman in her 80s, choosing between paying for a prescription or paying for my HomeServe plan.”

A younger agent would have pitched her the additional plan, he said, and Charlie would have handled the billing change and probably missed her follow-up question completely. Sometimes the next best action is impossible to program with an algorithm. “This was one of those where it just didn’t feel right,” he said.


