• 6 minute read
How To Detect and Deter Customer Misbehaviour on Social Media
Research finds companies should consider actively intervening when customers misbehave in online communities
By Pan Jingyi, Principal Writer, China Business Knowledge@CUHK
The rise of social media has been both a blessing and a curse for modern businesses in our increasingly connected world. On the one hand, it has enabled organisations large and small, from the likes of Nike, Apple and Starbucks all the way to smaller local enterprises, to build online brand communities to engage and interact with existing and potential customers. At the same time, however, fostering these online communities carries substantial risk, especially if customers misbehave.
Customer misbehaviour refers to unethical or deviant actions that threaten a company’s financial and operational well-being, and it can manifest in a wide variety of ways, ranging from the posting of offensive or inappropriate content all the way to “trolling”, whereby somebody persistently sends inflammatory or hateful messages to a community. Mishandled, online misbehaviour can wreak havoc on a company’s reputation, or even its financials, in real and tangible ways.
“Customer misbehaviour could expose companies to undue levels of potential risk” – Prof. Zhao Jianliang
“If companies don’t do anything when customers misbehave, not only would it be unfair to customers who behave responsibly, but it could also expose them to undue levels of potential risk,” says Zhao Jianliang, Presidential Chair Professor at the School of Management and Economics of The Chinese University of Hong Kong (CUHK), Shenzhen. Prof. Zhao and his collaborators examined how companies can address this problem in their research study titled FairPlay: Detecting and Deterring Online Customer Misbehaviour. The paper was co-written with Prof. Wu Ji from Sun Yat-sen University and Prof. Zheng Zhiqiang from the University of Texas at Dallas.
Detecting Online Misbehaviour
Prof. Zhao and his co-authors conducted this study in collaboration with a leading apparel firm in China, which launched an online business community in 2013 to connect with customers. After collecting over 50,000 posts made between June and November 2016 by around 2,500 customers, the researchers analysed them, along with the corresponding customer registration and demographic information, to identify misbehaviour.
Because online misbehaviour comes in so many different forms, detecting it is challenging. The team overcame this by building a proprietary model, which they called “FairPlay”, using natural language processing and deep learning techniques, and then applying it to analyse the words and topics of customer posts, their posting frequency, as well as demographic information. In doing so, the team was able to improve the detection of online misbehaviour by 7 to 9 percent compared with other methods in wide use.
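The FairPlay model itself relies on deep learning and is not published in detail, but the general idea of combining text, posting-frequency and profile signals into a single misbehaviour score can be sketched in miniature. Everything below – the keyword lexicon, thresholds and weights – is a hypothetical illustration, not the study’s actual method:

```python
# Illustrative sketch only: the real FairPlay model uses natural language
# processing and deep learning. Here a simple rule-based scorer stands in
# for it, combining the same three kinds of signal the study describes.

OFFENSIVE_TERMS = {"scam", "rubbish", "idiot"}  # hypothetical lexicon

def misbehaviour_score(post_text, posts_last_day, account_age_days):
    """Combine text, posting-frequency and profile signals into one score."""
    words = post_text.lower().split()
    # Text signal: count of flagged terms in the post.
    text_signal = sum(w.strip(".,!?") in OFFENSIVE_TERMS for w in words)
    # Frequency signal: very high posting rates can indicate trolling/spam.
    frequency_signal = 1.0 if posts_last_day > 20 else 0.0
    # Profile signal: brand-new accounts are weighted as slightly riskier.
    profile_signal = 0.5 if account_age_days < 7 else 0.0
    return text_signal + frequency_signal + profile_signal

def flag_post(post_text, posts_last_day, account_age_days, threshold=1.0):
    """Flag a post for review when its combined score crosses the threshold."""
    return misbehaviour_score(post_text, posts_last_day, account_age_days) >= threshold
```

In practice each signal would be learned from labelled data rather than hand-set, but the structure – several weak signals fused into one decision – mirrors the multi-input approach the researchers describe.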
The team then conducted a field experiment to test the effectiveness of different ways companies can intervene when online misbehaviour occurs. They first sent messages to customers who had previously been identified as having misbehaved, telling them that their posts had violated community rules and would be deleted. The researchers found that while this “punishment” strategy reduced customer violations in the short term, its effect decayed over time and eventually became insignificant. In addition, because it punished customers without giving them a chance to correct their own behaviour, it also led to a reduction in purchases, Prof. Zhao says.
Appealing to a ‘Common Identity’
The researchers also tried a softer approach. Instead of deleting the offending post, they sent messages to the misbehaving customer reminding them of their citizenship in the online community, such as by encouraging them to adhere to rules related to the posting of messages. The team found that this strategy, which seeks to appeal to people’s “common identity,” was effective in reducing misbehaviour in the short term, but the effects did not last.
However, the researchers found that this method actually led customers to increase their purchase frequency. Prof. Zhao explains that, compared with simply punishing customers for their misbehaviour, this appeal to a collective identity allows companies to manage customer behaviour by highlighting the group’s expectations. “This strategy allows firms to alert a group of people about misbehaviour without making any individual in the group feel embarrassed. It leverages human nature – people are more willing to accept a soft reminder in place of direct criticism,” says Prof. Zhao.
Finally, the researchers examined a third alternative: a message that notified customers that the offending post would be deleted while at the same time reinforcing their collective identity by reminding them of the online community’s expected standards of behaviour. The results indicated that this hybrid strategy was the most effective, achieving larger and longer-lasting reductions in customer misbehaviour than the other two alternatives. What is more, the simultaneous appeal to the group’s common identity tended to offset the negative effect that the “punishment” aspect of the message had on customer purchases.
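The three interventions tested in the experiment can be thought of as message templates, with the hybrid strategy essentially combining the other two. The wording below is hypothetical – the study does not publish the exact messages that were sent:

```python
# Hypothetical message templates for the three interventions described in
# the field experiment. The "combined" strategy pairs the punishment
# notice with the identity appeal, mirroring the hybrid approach that
# proved most effective in the study.

PUNISHMENT = "Your post violated the community rules and will be deleted."
IDENTITY = "As a valued member of our community, please follow our posting guidelines."

TEMPLATES = {
    "punishment": PUNISHMENT,
    "identity": IDENTITY,
    "combined": PUNISHMENT + " " + IDENTITY,
}

def intervention_message(strategy):
    """Return the message text for a chosen intervention strategy."""
    return TEMPLATES[strategy]
```

Modelling the interventions as composable templates makes the study’s key finding concrete: the hybrid message is not a new strategy so much as the two existing ones delivered together.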
The Role of Customer Traits
Having established this, the team turned their attention to the factors influencing the effectiveness of these intervention strategies. They found that customers reacted differently to the policies depending on how much experience they had with the company’s products. In general, punishment effectively reduced misbehaviour among both novice and experienced customers in the short term. Appeals to group identity, on the other hand, reduced misbehaviour among experienced customers only. Finally, the combined strategy reduced misbehaviour for both groups, but was more effective among novice customers.
Prof. Zhao and his collaborators also found that when companies disclosed that they were using artificial intelligence technologies to monitor the content posted by customers, the effectiveness of all three intervention strategies increased. Prof. Zhao reasons that this was because customers understood their posts were being closely scrutinised.
Considering that each strategy works differently, Prof. Zhao notes that companies seeking to use them to police their online communities should carefully consider practical aspects such as cost. “Oftentimes, companies are advised to avoid conflicts with customers and to tolerate misbehaviour. We demonstrate that firms should at least actively consider intervening when customers behave improperly. Companies may consider examining the severity of customer misbehaviours on a case-by-case basis and applying different combinations of intervention methods. At the end of the day, doing nothing may not be the best course of action,” he suggests.