
Ensuring a Diversity of Data in AI-Led Self-Service

February 03, 2017

“Diversity” is a common theme today, and it often manifests in public and private organizations seeking to bring a wide variety of faces and voices to their problem-solving and management tables. It’s a worthy goal: governments, medical researchers, educational institutions and private companies can all benefit from broadening their perspective. Data collected from homogenous sources will very probably yield homogenous results that don’t apply to everyone (for example, the years of medical research conducted only on male test subjects, which left a gap in understanding how disease manifests in women).


When it comes to enterprise data, diversity is also a good thing. Companies use a great deal of data today to make predictions, build schedules, manage the workforce and boost the quality of the customer service they offer. But they, too, need to ensure diversity in the data they collect, or they’ll limit the utility of the products they build with it, according to a recent blog post by Aspect’s Lisa Michaud. Titled “Data Doesn’t Lie, But It Doesn’t Tell the Whole Truth,” the post describes the mistakes that can be carried into machine learning technology.

“From my personal area of science, gathering diverse textual linguistic data is just as challenging,” she wrote. “It is a key step to creating a chatbot or virtual assistant, but a single developer entering sentences into a chatbot toolkit will create a bot who can answer questions posed by that developer – but not questions from someone who expresses herself in very different ways.”

The phrase “garbage in, garbage out” from computer programming applies just as well to the development of self-service customer support systems. Michaud highlighted the fiasco that occurred with Microsoft’s Tay chatbot, which was trolled by Twitter users and fed a steady diet of racist language. The result was that Tay “learned” some objectionable words and phrases and began using them in its own responses.
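
To see how “garbage in, garbage out” plays out mechanically, consider the toy Python sketch below. It is purely illustrative — not Microsoft’s actual design — and shows a bot that naively adds every user message to the pool of phrases it may repeat back. Unfiltered, it will eventually echo whatever abuse it is fed; a moderation check at ingestion time (the blocklist here is a hypothetical placeholder for a real content filter) is the most basic safeguard.

```python
# A minimal sketch of the "garbage in, garbage out" failure mode.
# Not Microsoft's Tay architecture; the blocklist is a hypothetical
# stand-in for a real content-moderation layer.
import random

BLOCKLIST = {"offensive_word"}  # placeholder for real moderation rules

class EchoLearningBot:
    def __init__(self, filtered=True):
        self.phrases = ["Hello!"]   # phrases the bot may repeat back
        self.filtered = filtered

    def learn(self, user_message):
        # Unfiltered learning ingests everything -- including abuse.
        words = set(user_message.lower().split())
        if self.filtered and words & BLOCKLIST:
            return  # drop toxic input instead of learning from it
        self.phrases.append(user_message)

    def reply(self):
        # The bot can only say what it has been taught.
        return random.choice(self.phrases)

naive = EchoLearningBot(filtered=False)
naive.learn("offensive_word offensive_word")  # trolls feed it garbage
print("offensive_word offensive_word" in naive.phrases)  # True: garbage in

guarded = EchoLearningBot(filtered=True)
guarded.learn("offensive_word offensive_word")
print("offensive_word offensive_word" in guarded.phrases)  # False: filtered out
```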

A less egregious example — a mistake rather than malice — can be seen in natural language systems programmed only by speakers with a certain accent (American English, for example). With no input from other speakers of the language, the system can “learn” to recognize only one pattern of word pronunciation. In theory, the same thing could occur if the development team of a chatbot represents only one demographic.

“Diverse linguistic data sets are difficult to gather but vitally important for creating general use bots reflecting the broad diversity of global linguistic patterns,” wrote Michaud.
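
Michaud’s point can be made concrete with a toy example. The Python sketch below — a hypothetical illustration, not any vendor’s toolkit — scores a user question against each intent’s example sentences by simple word overlap. Trained only on one developer’s phrasing, it has nothing to match a differently worded question against; folding in diverse paraphrases fixes that.

```python
# A minimal sketch of why homogeneous training utterances cripple a
# chatbot. All intents and example sentences here are hypothetical.

def tokens(sentence):
    """Lowercase, punctuation-stripped word set."""
    return set(word.strip(".,?!").lower() for word in sentence.split())

def classify(question, training_data):
    """Return the intent whose examples share the most words with the question."""
    q = tokens(question)
    best_intent, best_score = None, 0
    for intent, examples in training_data.items():
        score = max(len(q & tokens(ex)) for ex in examples)
        if score > best_score:
            best_intent, best_score = intent, score
    return best_intent  # None when nothing overlaps at all

# One developer's phrasing only:
homogeneous = {
    "check_balance": ["What is my account balance?"],
    "reset_password": ["How do I reset my password?"],
}

# The same intents, with more diverse phrasings folded in:
diverse = {
    "check_balance": ["What is my account balance?",
                      "How much money have I got?",
                      "Show me what's left in my account."],
    "reset_password": ["How do I reset my password?",
                       "I can't log in, I forgot my password.",
                       "Help me get back into my account."],
}

question = "Show me what's left."
print(classify(question, homogeneous))  # None -- no words in common
print(classify(question, diverse))      # check_balance
```

A real natural language understanding system is far more sophisticated than word overlap, but the dependence on who wrote the training sentences is the same.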


If you’re interested in hearing more about how AI is changing the customer experience, be sure to head over to ITEXPO in Fort Lauderdale, Fla., next week. A panel session on the topic will discuss how AI will change the course of the customer journey through the customer lifecycle, creating a more positive experience and, ultimately, driving business revenue and growth.


Edited by Stefania Viscusi


