Consumer Behaviour, Innovation & Technology

AI vs. humans: Who wins in handling service rejections?


New research shows chatbots can help reduce negative reviews in certain cases, despite the general preference for human agents

There may come a time when personal interaction in customer service becomes obsolete, and that time may arrive sooner than we think. Companies have increasingly adopted artificial intelligence (AI)-powered chatbots to handle customer requests, leading to improved service management, reduced labour costs, and the efficient delivery of standardised services. But what is the catch?

According to the latest report by Spherical Insights, Global Chatbot Market, the market size of global chatbots reached US$5.39 billion in 2023, and it is projected to surge to US$42.83 billion by 2033. As chatbots continue to rise in prominence, it is crucial for companies to gain a better understanding of customer reactions to the services provided by this advancing technology.

The use of chatbots is continuing to rise in prominence.

Chatbots are designed to handle customer requests in a standardised manner, relying on preprogrammed procedures and algorithms. “Indeed, consumers often have an aversive attitude toward services provided by robots because they perceive that robots lack uniqueness and empathy,” says Shen Hao, Professor of the Department of Marketing at the Chinese University of Hong Kong (CUHK) Business School.

However, the service from chatbots does not always carry a negative connotation, especially when companies have to reject consumer requests, which is sometimes unavoidable. “Our research finds that when consumers receive a rejection of their service request, they evaluate the service less negatively if the service is handled by a chatbot agent versus a human agent,” Professor Shen says.

In a new study titled The rise of chatbots: The effect of using chatbot agents on consumers’ responses to request rejection, Professor Shen, together with Professor Xiong Ji of the Southwestern University of Finance and Economics and Professor Yu Shubin of BI Norwegian Business School, explored how and why consumers react differently when their requests are declined by a real person versus a chatbot.

Low expectations from chatbots

Chatbots are expected to offer highly standardised services characterised by clearly defined steps and minimal outcome variability. “They are considered less able to cater to consumers’ personal preferences,” says Professor Shen. “As a result, consumers tend to have low expectations about the level of flexibility in service from chatbots.”

Drawing on these observations, the team proposed that a rejection of a service request is less likely to negatively impact consumers’ evaluation when handled by a chatbot. They also suggested a flip side: because consumers perceive chatbots as mere rule-followers when processing requests, they may appreciate the service less even when a chatbot grants their request.

When consumers receive a rejection of their service request, they evaluate the service less negatively if the service is handled by a chatbot agent versus a human agent.

Professor Shen Hao

To test their hypotheses, the researchers conducted a series of experimental and field studies. In the first three studies, they found evidence that consumers’ evaluations are less negative if a chatbot rejects their requests compared to a real person. Study 1 consisted of a questionnaire survey through Prolific, an online platform for global participant recruitment, while studies 2 and 3 further supported the researchers’ assumptions using real-life scenarios.

Participants in study 2 were 200 university students in south China, who received gift vouchers from a fictitious company. They had the option to contact either an AI-powered chatbot or a human representative to redeem a different gift if they did not like the default one. However, all participants ended up receiving the default gift. Following this, the participants evaluated the service through a questionnaire. As expected, they showed a greater level of acceptance towards service rejection from a chatbot.

Study 3 was also conducted in China and, due to the scenario’s design, involved only female participants. This study further indicated that consumers showed fewer negative attitudes toward the company when their request was rejected by a chatbot.

Emotional apologies can fall short

While the use of chatbots in rejecting service requests can mitigate negative impacts, the researchers also explored whether service outcomes will affect consumer reactions. “If consumers are more likely to attribute the service that they receive from a robot to a rule, rather than to the companies’ willingness to help, they should also appreciate the service less when their service request is accepted,” Professor Shen says.

It is inevitable that companies sometimes have to reject a customer’s service request.

Consistent with this conjecture, the findings of study 4 indicate that when consumers’ requests were resolved, their evaluations were less positive if the service was provided by a chatbot.

In cases of service failure, companies are often advised to apologise to consumers with emotional messages, such as expressions of empathy or guilt, to acknowledge their suffering and take responsibility. However, the researchers found that emotional apologies from chatbots are not as effective as those from humans, and may even backfire.

“When an apology conveys emotional messages to consumers, they might perceive it as less sincere when it comes from a chatbot versus a human agent,” Professor Shen explains, adding that endowing machines with emotions may unsettle people as it challenges human uniqueness.

However, the results also showed that when the apology lacked emotional expression, consumers considered it equally sincere whether it came from an AI-powered chatbot or a human agent.

Making smart choices in different situations

While existing literature shows that people tend to draw a clear line between robots and human beings and demonstrate algorithm aversion, this research revealed a positive effect of chatbots. “Our findings suggest that consumers still prefer human agents. However, this does not mean that a human agent is always the best choice,” Professor Shen says. “When a consumer’s service request is rejected by a chatbot, its perceived inflexibility serves as a buffer, potentially mitigating the negative impact of the service rejection on the overall evaluation of the service.”

He suggests that companies utilise chatbots to offer explanations for failed service delivery, which could be cost-effective and, most importantly, alleviate the negative effect of service failure. Meanwhile, Professor Shen cautions that managers should be aware that personalised touches, such as emotional apologies delivered by chatbots, may have the opposite effect.

As chatbots may not be suitable for services requiring emotional support or establishing emotional connections, Professor Shen says future research could investigate consumer satisfaction with various types of service agents (such as chatbots or humans) across different service requests.