Now that chatbots have caught on, brands are trying to figure out how to create chatbots that are appealing and intelligent, and that don't convey negative stereotypes.
This goes beyond the question of male, female, or gender-neutral personas, and dives straight into the question of diversity. Brands must walk a fine line between getting their message across and addressing the needs of a broad spectrum of people.
Chatbots Gone Wild
Many chatbots make use of a ‘learning process’ that involves human contributions through interaction as well as social media, but the chatbots aren’t always learning good habits. One of the worst examples of this was in the Microsoft launch of their smart chatbot called ‘Tay.’
Designed to appear as a millennial white teenage girl and communicate on Twitter, GroupMe, and Kik, the chatbot quickly picked up humans' worst habits: within 24 hours of launch, it started spewing misogynistic, racist, and genocidal messages at those with whom 'she' came in contact. Microsoft shut the chatbot down with apologies, explaining that Tay had 'learned' the phrases from humans on the internet.
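The failure mode is easy to see in miniature. The sketch below is a hypothetical illustration, not Microsoft's actual architecture: a bot that memorizes user phrases verbatim will replay abusive input to later users, while even a crude blocklist (the terms here are placeholders; real moderation is far more involved) keeps the worst of it out of the training pool.

```python
# Hypothetical sketch of unfiltered vs. filtered learning from user input.
# Not Tay's real design -- just a minimal model of the failure mode.
import random

class NaiveLearningBot:
    """Learns by memorizing user phrases verbatim, with no moderation."""
    def __init__(self):
        self.learned = []

    def observe(self, phrase):
        self.learned.append(phrase)  # everything goes in, good or bad

    def reply(self):
        # Replays whatever it has memorized, including abusive input.
        return random.choice(self.learned) if self.learned else "Hi!"

class FilteredLearningBot(NaiveLearningBot):
    """Same memorization, but phrases matching a blocklist are discarded."""
    BLOCKLIST = {"hate", "stupid"}  # placeholder terms only

    def observe(self, phrase):
        if not any(term in phrase.lower() for term in self.BLOCKLIST):
            self.learned.append(phrase)

naive = NaiveLearningBot()
filtered = FilteredLearningBot()
for phrase in ["Hello there!", "I hate everyone"]:
    naive.observe(phrase)
    filtered.observe(phrase)

print(len(naive.learned))     # the abusive phrase was memorized
print(len(filtered.learned))  # the abusive phrase was dropped
```

The point is not that a blocklist solves the problem; it is that a system designed to echo its users inherits their behavior by default, so the safeguards have to be designed in from the start.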
A less inflammatory, more customer-service-oriented example was TMY.GRL by Tommy Hilfiger. Launched as an assistant bot to help promote the brand's Gigi Hadid collection, TMY.GRL is yet another white representation, this one integrated with the e-commerce site and shopping cart.
While most of the experience seemed fairly standard, many of the products were out of stock, and that is where TMY.GRL tried too hard to be sympathetic. The 'sympathy' never reached a believable level, and combined with the out-of-stock items and a cumbersome checkout, it made for a poor experience overall. Chatbots, it turns out, don't work quite as well for retail.
Chatbots That Got It Right
So, what elements are included in those chatbots that seem to have crossed the barriers and hit on the golden formula of getting it right?
A VentureBeat article narrows it down to these key ingredients:
- Value-oriented concept (insight, usefulness, solving a unique problem)
- Conversational UX (logic, content, overall experience)
- Copywriting (personality, tone, manner)
- Marketing (branding, promotion, discovery funnel)
- Business model (monetization)
- Results (number of users, value creation, engagement)
But I'd like to add something else: the need for a broad spectrum of writers and strategists from diverse backgrounds who can address cultural sensitivity in a chatbot's personality, tone, and manner. Without this, a simple chatbot conversation can turn disastrous.
One chatbot at the top of the list that hit all the marks of excellence is 'Yeshi.' Designed to raise awareness of the Ethiopian water crisis, Yeshi represents a young Ethiopian girl who must walk two and a half hours a day to get clean water. This is storytelling at its best, integrating an emotional experience with media sharing and geolocation to raise funds. The brand message is clear, and the user connects with Yeshi on a personal level.
A majority of human-like chatbots are white, and there is a desperate need for more chatbots of color, such as Yeshi. Without diversity, brands may lose out on billions of dollars in buying power, as people demand more personalization and real-life interaction with brands.
The systems that are being created are a “work in progress,” and they are teaching the developers about the many different ways to ‘be human.’ It’s estimated that in 2016, over 35,000 chatbots were built around the globe, giving developers a chance to not only create, but expand to embrace our diversity.
Being 'human' involves interacting with people who look nothing like us, and having the language, or at least the willingness to learn it, to embrace those differences. Is that too much to ask of a chatbot? Brands may not have a choice.