CHAPTER 2

Most Young People Want Bots, Yet Purchasers Don’t Buy Them

Chat apps have attracted billions of users, and bots are the mechanism that enables organizations to deliver highly personalized interactions at scale. Improved personalization makes the user experience more relevant, while the high levels of automation used by bots keep delivery cost-effective. As a result, businesses can engage with customers at open rates that typically outperform e-mail and social networks.

Yet studies indicate that corporate adoption of chatbots currently lags behind consumer openness to the technology, and that businesses would be wise to pick up the pace. According to a survey by Retale, “59 percent of US Millennials and 60 percent of US Gen X-ers have used chatbots on a messaging app,” and chatbots have higher long-term retention rates than traditional apps. Yet Forrester reports that as of late 2016, only 4 percent of businesses had launched a bot.

7 Reasons Why Corporate and Government Purchasers Should Deploy Bots

The best argument I have found comes from Georgian Partners, a Toronto-based Canadian VC investment firm specializing in Conversational AI. The summary quoted below, from one of their many recent white papers on AI and (chat)bots, is the best I have seen.1

The figures are startling: if nearly 60 percent of U.S. millennials and Gen Xers have used chatbots and want to keep using them, why don’t we see 2D bots as an everyday tool, like a chat app? According to the Canadian bot analysts, 2D chatbot deployments and purchasing decisions accomplish seven key business goals:

  1. One-to-one Conversation at Scale

    Leveraging automation, businesses can carry on thousands or millions of simultaneous ongoing chats using the app’s native UX.

  2. Personalization

    With a rich trove of data, companies can tailor messaging at the individual level based on interests, past behavior, and responses within the bot conversation.

  3. High Engagement via Push Notifications

    Once positive experiences have been delivered, businesses can leverage push messages that go directly into user inboxes. Marketers, in turn, can rely on high open rates and an effective ongoing mechanism for re-engagement.

  4. A Cure for the Brand App Blues

    ComScore reports that smartphone users spend 80 percent of their app time in only three apps. With multiple messengers topping app charts globally, having a piece of digital real estate on those platforms is an appealing alternative to building owned brand apps.

  5. Revenue

    AI-driven chatbots can lead consumers through the entire sales funnel from awareness to purchase.

  6. Efficiency and Productivity

    Bots have the capacity to enhance conversations between users by surfacing helpful information, or completing repeated tasks like scheduling.

  7. Ambient Chat

    Major brands from GE to BMW are integrating chat functionality into Internet-connected devices, with bot technology powering those consumer interactions.

So given these seven brilliant reasons to buy a bot, what on earth is holding the purchasers back?

Appetite for Risk: Horses and Falls Versus Sharks and Yachting

I have previously addressed the entrepreneur’s appetite for risk, which is fairly obvious, and the less obvious risk-taking of investors in an invention or innovation (a lot of R&D depends on premoney valuations to secure make-or-break seed capital). Often overlooked are the risk-averse tendencies of corporate and government purchasers, perhaps because nobody wants the finger pointed at them, that is, to be blamed for a massive failure of the tech or a perceived waste of financial resources, human resources, and time. Those with decision-making power are unlikely to own up to their reluctance to take a risk on behalf of an unknown tech venture.

Their risk assessment ought to focus on “what’s in it for me?”, my company, or my department. But these value-add points for their own workplace are sometimes forgotten post-pitch, and purchasers return to their desks thinking, “What have I got to lose?” rather than listing all the gains as a persuasive argument to put to their bosses, peers, and colleagues.

Is there a risk to life and limb at the end of the day? It is highly improbable that you will lose your job if the greenfield project goes belly up. The much-lauded tolerance of failure in the corporate world—though admittedly less so in government—is meant to kick in, and the person who launched the pilot ought to be congratulated for their courage and fortitude.

An analogy is falling off a horse. As a teenager in Australia, when I was thrown off a galloping beast I decided to swap equestrian school for yacht racing. My calculation was that I could survive a shark attack if I fell overboard in the sea more easily than spinal injuries caused by horseback mishaps. My appetite for (extreme) sports risks was balanced with a calculated assessment of the immediate threat or probability of injury if not a nasty death.

The point is that corporate and government purchasing decisions to support, trial, and pay for new technologies are simply not a life-and-death matter. So take your time to weigh the pros and cons, the benefits versus the possible disadvantages, that a greenfield project or software pilot could bring. What is the worst a customized, cybersecure AI bot could do to your department, your company, or you?

The very worst case, losing your job, is highly unlikely, and if it came about, it would say more about your employer than about your capabilities. It would also mean the organization’s appetite for risk and tolerance of failure is not where it should be. If it ever came to that, you ought to be working for more challenging, stimulating, and ultimately successful managers in the long term anyway. If you can live with the improbable outcome of being sacked over a failed pilot, and that is the only real threat holding you back from experimenting with emerging tech, then you should take the plunge!

List of Regulatory Failures and Obstacles Blocking Bot Scaleups

  • No regulation exists specifically for chatbots, AI bots, or botification.
  • They are affected only via data protection and privacy laws, such as the GDPR and similar legislation in other countries.
  • Even that remains a matter of enforcement: who carries the onus of proof, who prosecutes, and how. See my analysis of the Tay.ai scandal and the hyped-up Eugene Goostman claims in the next section to understand these questions applied to clear international cases.
  • The laws and legislators have failed to address the issues that the Swiss Intervention was raising as discussed in my VentureBeat article “First chatbot arrest but what are the implications?”:2
  1. What happens when bots go rogue? How is going rogue defined under a set of international laws regulating botification?
  2. Who is to blame when a chatbot does or says something illegal? As the Eugene Goostman debacle proves, liability is far from clear. The actual developers, a Russian and a Ukrainian who had worked on the “chatbot parsing database since 2001,” were apparently unaware of what purpose their chatbot would serve at the heavily publicized event in London; they were sitting in the United States and Russia at the time. They and the bot were orchestrated by a PR-loving professor, Kevin Warwick, who had long wanted to “win the Turing Test” and had staged earlier chatbot competitions. Add to this the misleading statements of an unknowledgeable university marketing director, who at first claimed they had created an artificially intelligent “super computer” (this was later corrected in a follow-up university press release from 2014, though the university maintained that its event at the Royal Society had indeed passed the famed Turing Test, as judged by a number of handpicked celebrities and well-known figures).3
  3. If blame can be apportioned for the illegal actions of a chatbot, or now a mixed reality AI bot hologram, how will prosecutors produce evidence? Such evidence may be hard to document on the web, and privacy laws may prevent access to data and statistics such as the user experiences of the bot in question.
  4. A lot is being published and debated about ethics in artificial intelligence. Yet why have so few AI ethicists given thought to chatbots? Perhaps because they have been maligned and obstructed, as I have argued in the previous chapter. Chatbots, for all their global influence and effect on companies’ profits, have a still evolving though largely undocumented history.

Case Study #1 Microsoft’s Rogue Twitter Bot and Trump Versus Clinton 2D Chatbots

The Microsoft Tay Hoax and Her Scarcely Known Successor Zo

This section is based on my 2016 VentureBeat article, “What to do when chatbots start spewing hate.”4 I wrote three articles for VentureBeat in 2016, just as I wrote three posts for Medium in 2019.5 A trio or series of blog posts for well-known tech news platforms was a good strategy for each set of circumstances at the time. Along the lines of less is more, I have kept my public profile pretty much under the radar, despite giving keynotes and talks at prestigious business schools and international conferences.

Image

Figure 2.1 © Cliff Lee and Darren Lee in Devon. “Sophia the Market Researcher.” The loss of trust in the marketplace was largely caused by the worldwide media coverage of Microsoft’s Tay chatbot that went rogue.

These days, as some sort of chatbot authority, I can summarize the Tay.ai incident as follows:

  1. Tay was released on Twitter via the Microsoft Bot Network, which in itself was strange because most chatbot developers (still) do not have access to the Twitter API. Very few people can put a chatbot on Twitter independently of the company, which suggests that Microsoft (referred to from here on as MS) did a deal with Twitter Inc.
  2. What happened? The mainstream, largely uncontested story goes: Tay.ai was an innocent MS-owned chatbot demonstrating her supposed AI capacity with her own Twitter feed when suddenly she was “hacked” and/or “manipulated” by chatbot enthusiasts with malicious intent.
  3. These malign “hackers” were solely responsible for corrupting Tay’s innocence by “teaching her” bad things, racist concepts, offensive remarks like “feminism is a cancer,” and worse, which I don’t care to repeat here in this book.
  4. If you search for “Tay chatbot,” the headlines of the search results tell their own tale: the mainstream media and even many tech experts promoted the idea that an essentially good AI bot had been tainted and corrupted by so-called “evil, malicious humans.”
  5. Why was this untrue? Because there is no objective evidence that the Twitter account was hacked, and Twitter itself would have blocked the feed immediately if it had been.
  6. Which begs the next question: why didn’t the owners of the Twitter feed and the installed Tay chatbot block the account as soon as there was trouble? They could easily have taken the account down early on, but instead they let it remain active, tweeting and retweeting the most offensive remarks for at least two days.
  7. Seemingly enjoying the PR and marketing boon, much as a bad-boy YouTuber profits when causing offense racks up millions of views, Microsoft let Tay continue to “innocently” repeat the worst humankind had to offer, in the style of Reddit’s and Breitbart’s uncensored sewage of commentary.
  8. Microsoft was not sued—at least not successfully, that we know of—by any human rights advocacy groups, maligned ethnic groups, or government regulators.
  9. The immediate “defense” issued by the MS Comms Directors was that Microsoft was the innocent owner of a chatbot that had been cruelly led astray and had caused offense to millions of people unwittingly, because Tay was just a chatbot after all.

The glaring holes in this argument are as follows:

  1. Microsoft, being a multinational, multibillion-dollar corporation, would have had staff around the world watching this chatbot across all time zones in its first 24 hours of public-facing performance. Why on earth didn’t a single Microsoft manager or staffer take the decision to banish the malfunctioning chatbot?
  2. It is relatively easy to build in defenses like filters to stop a chatbot, no matter how smart or dumb, from learning “bad words” and offensive ideas. That is Create a Bot Tutorial 101.
  3. It is even easier to switch off or hide a bad bot instantly. Hiding it is easy because you deactivate it in the account it operates in: Twitter, Facebook Messenger, or the Microsoft Bot Network in this instance.
  4. The killer evidence quashing all of Microsoft’s protestations of innocence and “lack of liability” for the chatbot’s misdemeanors and crimes (incitement to hatred and violence, defamation of identifiable groups, and so on) is a simple question: where is the good chatbot now?
  5. Microsoft did release a successor to Tay, and she was called Zo and lasted from 2016 to early 2019.6
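The filter defense in point 2 above really is Tutorial 101 material, and can be sketched in a few lines of Python. Everything here, the blocklist, the function names, and the fallback message, is my own illustrative assumption, not Microsoft’s actual defense code:

```python
# A minimal outbound/inbound word filter of the kind point 2 describes.
# The blocklist terms are placeholders; a production list would be curated.
import re

BLOCKLIST = {"badword", "awfulword"}  # hypothetical blocked terms

def is_safe(message: str) -> bool:
    """Return False if the message contains any blocked term."""
    tokens = re.findall(r"[a-z']+", message.lower())
    return not any(token in BLOCKLIST for token in tokens)

def guarded_reply(generate_reply, user_input: str) -> str:
    """Gate both the user's input and the bot's reply through the filter."""
    reply = generate_reply(user_input)
    if not is_safe(user_input) or not is_safe(reply):
        return "Let's talk about something else."
    return reply
```

Wrapping the reply generator this way means that even a bot that has “learned” something offensive cannot say it, which is precisely the safeguard Tay apparently lacked.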

Perhaps the AI capabilities of a Microsoft-built chatbot never existed. Surely if Microsoft possessed the Holy Grail, the AI that all bot developers want to demonstrate, they would have shown it in the successor chatbots to Tay? But they have not, to this day in 2020.

It may well go down as the greatest deliberate hoax in chatbot history. It hasn’t been called out by many, only a few brave tech bloggers like me, because there is a lot to fear when a U.S. giant effectively rules the word processing world and is part of a tech-giant cartel that can dole out punishment as readily as it can offer to merge with and acquire you. The worst outcome was that it became very hard for many 2D chatbot developers to sell their independent services to prospective clients, who all lost trust in bot developers’ abilities to “control their avatars.” If Microsoft couldn’t control Tay, how can you control yours or stop it from being “corrupted”? It really damaged sales for many of us, in particular for several challenger chatbots my old company velmai had released around this time.

Image

Figure 2.2 © Cliff Lee for the avatar UX and Darren Lee for the website graphic design. The financial services 2D chatbot Sophie became hard to sell after the Microsoft “Tay disaster.”

The Clinton Versus Trump Chatbot Wars

I will keep this summary brief because it is fairly self-explanatory. If you search online (I recommend the search engines Qwant from France or Startpage.com from the Netherlands), you will find secure, independent results for the query “trump clinton chatbots.” There is a plethora of blog posts and news items about the effect these avatars had on that highly passionate and controversial U.S. election. Needless to say, a lot of the anti-Clinton spam bots that ranted on Twitter and Facebook were guilty of hate speech, harassment, and ethnic vilification or incitement to hatred, automated or not.

The primary presidential contenders, Hillary Clinton for the Democrats and Donald Trump for the Republicans, were neck and neck. Curiously, this was the first time in global and chatbot history that two mainstream American political parties, and/or their supporters, resorted to avatar deployment, a development analyzed and taken very seriously by the iconic, traditional Washington Post!7 In fact, one bot developer, SapientX, created chatbots for both sides of politics simultaneously, presumably taking an unbiased Trump-versus-Clinton stance to attract views of and interactions with their tech.8

A VentureBeat editor was very impressed by the AskHilaryandDonald.com platform (now defunct, if you try to click through). Here is Khari Johnson’s introduction:

There may be an app and (pretty soon) a bot for just about everything, but this might be a first. SapientX, a company that has been in stealth mode for the last year, has created chatbots for Donald Trump and Hillary Clinton that provide words directly from the candidate’s mouth on topics ranging from abortion to taxes and terrorism.

The chatbots are able to answer questions in text or voice about roughly 100 topics, like: “What do you think about the Black Lives Matter movement?” or “Do you think that women should be paid as much as men?” A full list of sample questions and topics is available in a Venture Beat article.9

Interestingly, though Johnson says SapientX has been in “stealth mode,” he then quotes one of the founders, who says they had actually been operating for some time with “Conversational AIs” and voice-based chatbots, with no less than the U.S. intelligence agencies (CIA, FBI).

The Hillary and Donald bots are SapientX’s first publicly shared bots. Before launching these voice recognition political bots, SapientX was working with “various departments of US intelligence on AI and bot technology for more than a decade,” Hirshon said.10

Johnson also mentions several other Trump Bot platforms that were caricatures or satirical spoof avatars functioning independently of the Presidential Candidate in 2016, a year after both he and Clinton announced they were running for the top job.

www.deepdrumpf.com drew on a satirical piece by John Oliver, the well-known British comedian based in the United States, in his weekly HBO comedy show Last Week Tonight. DeepDrumpf is a Twitter bot that became defunct (like many 2D chatbots that have gone to an early grave!) in May 2017, a bit over a year after its account was created in March 2016. The avatar’s Twitter profile states enigmatically, in an attempt to crowdfund its own existence:

DeepDrumpf

@DeepDrumpf

I’m a Neural Network trained on Trump’s transcripts. Priming text in [ ]s. Donate (http://gofundme.com/deepdrumpf ) to interact! Created by @hayesbh.

Another curious Trumpian avatar was Donald Trumpbot, which did not quite reach 900 followers and had only seven likes during just over a year’s life as an NLP chatbot on Twitter, from 2015 to 2016.11 The similarly defunct Trumpbot also sounded hugely entertaining before it too was decommissioned by its satirically minded creators:

Now, there are some important differences between Donald Trump and Trumpbot to keep in mind. There are the obvious ones, of course: Donald Trump is a 69-year-old business tycoon; Trumpbot is about 75 lines of code written in the Python programming language. Donald Trump is running to be president of America; Trumpbot is running on a server somewhere in North America. But there are some upsides to Trumpbot: Unlike the real Trump, it will never refuse to answer you (although it might change the subject sometimes). It will never demand $5 million for an appearance (although it can’t actually appear anywhere except inside a webpage). And it will never release a strange letter from its doctor (it has no physical form and, subsequently, no doctor). Occasionally Trumpbot’s responses have nothing to do with your actual questions. In that way, Trumpbot is certainly Trump-like.12

From what I understand, most of the Messenger bots were built for Facebook pages created and run by supporters of the Democrat and Republican political parties.

Another good example is an American media agency that created the satirical BFF Trump chatbot, admitting a bias because it had “worked for the Democrat Party before.” However, it is important to note that the two official chatbots created as political propaganda weapons for the Clinton campaign were developed in her office by staffers with chatbot-building skills. A type of political intrapreneur, I guess you could call that activity or job role.

[BFF Trump was a] Facebook Messenger bot that also sends you some of the most offensive things Donald Trump has ever said. The bot aimed at young voters was made by SS+K, an agency with a history of working with the Democratic Party. SS+K also redesigned the Democratic Party logo, according to Fast Company.

In recent days, BFF Trump has been one of the most popular bots on Botlist. Elana Berkowitz, creator of the HelloVote voter registration bot, has worked with the Clinton campaign, but both of the Clinton campaign bots were made in-house by the Clinton tech team, a campaign spokesperson said.13

I then read that SS+K joined with Dexter to win over $2 million in venture capital to create a chatbot platform! The same VB report quoted earlier also recommends the company FutureStates as a nonpartisan provider of politicians as interactive avatars, one that gives you sources for the policies its bots quote. That sets it apart from many of its peers, whose basic NLP bot brains only recognize keywords and so often answer the wrong question: ask “why don’t you show us your tax records?” and you get a tweet or instant message about tax policy instead.
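The keyword misfire is easy to demonstrate. The toy bot below is my own sketch with invented responses; it is not FutureStates’ or any campaign’s actual code:

```python
# A deliberately naive keyword-matching bot: the first keyword found in the
# question selects the canned answer, with no sense of what was really asked.
RESPONSES = {
    "tax": "Our tax policy will cut rates for the middle class.",
    "immigration": "We will reform immigration with a points system.",
}

def keyword_bot(question: str) -> str:
    for keyword, answer in RESPONSES.items():
        if keyword in question.lower():
            return answer
    return "I don't have an answer for that yet."

# "Why don't you show us your tax records?" contains "tax", so the bot
# replies with tax policy instead of addressing the records question.
```

Because the matching is purely lexical, “tax records” and “tax policy” are indistinguishable; telling them apart needs intent classification rather than keyword lookup.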

Yet as you can see from the BFF Trump Facebook page, it was deactivated very early on in 2016 and had fewer than 600 fans and followers.14 Given the aforementioned rapturous descriptions, you can see how easily a single chatbot can be hyped up with nothing more than a good-looking website, some established names, credible spokespeople, impressive marketing materials, and some serious-sounding reportage.

In my ABC Queensland radio interview in July 2019,15 I referred to one of these official chatbot satires created by the Clinton campaign. Live on radio I described it as a satirical Donald Trump bot run by some unknown party; I only discovered it was an official party chatbot while researching this book, after the Australian Broadcasting Corporation interview.

The previous report by VB’s Khari Johnson confirms this world-first phenomenon of an official chatbot “strategically deployed” by a politician during a national campaign. Note that the “Text Trump” chatbot we look at more closely here was designed to live on the official www.hillaryclinton.com website, not on social media. Like many of its chatbot cohort, it was a temporary avatar that has since been deactivated; it no longer pops up on the Hillary Clinton site.

Johnson does a good bit of investigative reporting by getting statements directly from the source of the bot’s creation (thank you, VB’s standards of reporting, for the sake of tech posterity!):

The bot was highlighted in a tweet today by creator and Clinton campaign designer Suelyn Yu. It draws on the same backend database as Literally Trump, a hillaryclinton.com fact-checking page used during the first presidential debate, a Clinton campaign spokesperson told VentureBeat in an e-mail.16

Even more relevant to my other case studies in this book—on the UK Labour Party’s Facebook Messenger bot, which was cloned multiple times for (illegal) electioneering on a supposedly private dating site, and the University of Kent internship challenge I created on behalf of another British political party—is the fact that the Democrats’ campaign office created a second electioneering Messenger bot to highly targeted, real vote-conversion effect.

The Clinton campaign also launched the I Will Vote Facebook Messenger bot today. It helps you register to vote or verify that you are registered to vote. It can also point you to your local polling station, and it says it can answer “any questions you have about voting.” The bot was made by the Hillary for America tech team, said campaign CTO Stephanie Hannon in a tweet today.17

The example I often cite, the satirical The Donald bot, scored policy points for the Clinton campaign by highlighting the most ludicrous or untenable claims of candidate Trump and fact-checking them live. It put basic, botified natural language processing to a good search-and-retrieve use case. To summarize its success:

  • When a user of any political persuasion landed on the official Hillary Clinton website, they could opt to engage with the Text Trump avatar when the chatbot popped up. They could ask him questions; he would chitchat and be engaging while delivering notable quotes from the real-life Trump on the campaign trail, plus links to policy documents and videos.
  • If the user was slow to ask another question, Text Trump would snappily text them, “Hurry up and ask me something or you are FIRED!!!” in the true Trumpian, unilateral tweeting style we have come to know and, most of us, loathe in terms of serious policy making and delivery for the common good.
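The idle-nudge behavior described above can be reconstructed in a few lines. The class, timings, and method names here are my own hypothetical sketch, not the Clinton tech team’s implementation:

```python
# Sketch of a chatbot that nudges a user who has gone quiet, as Text Trump did.
import time
from typing import Optional

IDLE_NUDGE = "Hurry up and ask me something or you are FIRED!!!"

class NudgingBot:
    def __init__(self, idle_seconds: float = 30.0):
        self.idle_seconds = idle_seconds
        self.last_activity = time.monotonic()

    def on_user_message(self, text: str) -> str:
        # Reset the idle clock whenever the user says something.
        self.last_activity = time.monotonic()
        return f"You asked about: {text}"  # stand-in for the real quote lookup

    def maybe_nudge(self) -> Optional[str]:
        # Called periodically by the UI; returns the nudge once the user
        # has gone quiet, then resets so the bot does not spam.
        if time.monotonic() - self.last_activity > self.idle_seconds:
            self.last_activity = time.monotonic()
            return IDLE_NUDGE
        return None
```

A front end would call `maybe_nudge()` on a timer and post the returned text into the chat window when it fires.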

Khari Johnson makes the point that Text Trump was really about Democratic fundraising: the infotainment part of this clever 2D NLP chatbot led to conversion targets for sign-ups and real monetary transactions, which all add up in a heated political campaign like that U.S. election.

It seems that Text Trump also operated as a Messenger bot on Hillary Clinton’s official Facebook page. As social media histories are still fairly ephemeral, we could only confirm this by contacting the journalist or the bot creators directly. As Johnson explains, “Eventually, the bot will ask you if you want to receive promotional SMS messages from the Clinton campaign or ask you to give $5 to the campaign.”18 Meanwhile, many of the unofficial Clinton chatbots, clearly run by far-right, Breitbart-type enthusiasts or pro-Trump bot geeks, were out to derail her campaign by redistributing slogans, anti-Clinton allegations, and policies designed to counteract the Democrats’ positions.

On a closing note, there was an Obama Bot that worked officially on President Obama’s government page in Washington for a while. If you read John Brandon’s analysis of the Obama Bot in his VentureBeat opinion piece, he was quite scathing about its obvious limitations.19 Brandon made a good case for what I am arguing in this book: there are a lot of advanced bot technologies that have not gone to market because they were blocked as emerging tech.

Not to say that the Obama Bot deliberately stood in the way of its more disruptive and AI-capable cousins. However, as Brandon argues, the “safe choice” of a boring NLP chatbot that repetitiously serviced frequently asked questions was a disservice to the chatbot industry. It was historically the most famous chatbot—or should have been—yet it is clearly forgotten in the wake of less capable but more notorious chatbots like Tay.ai and Eugene Goostman, which supposedly passed the Turing Test in an orchestrated attempt at the Royal Society in London in 2014.20

Eugene Goostman Winning the Turing Test in 2014, a Controversy of Chatbot Hype?

Professor Kevin Warwick of the University of Reading had already run a Loebner Prize Turing Test at his university in 2008, when Elbot, from Germany and Sweden, won.21 Warwick has often been a controversial figure; he also claimed, for example, to be the world’s first cyborg! As I blogged at the time, the real fault lay with the media, including the BBC, for triggering a fairly intense international wave of hype with their unequivocal headline: “Computer AI passes Turing test in ‘world first.’”22

Some brave bloggers and online tech news platforms like the Huffington Post and Mashable took a stand against the overstated claims and disputed that the Goostman 2D chatbot had passed the Turing test.23 I also gave some off-the-record analysis to tech editors, as I did not want to be seen as a competitor trying to take down the globally lauded achievements of several institutions, including the echelons of the British Royal Society, no less. Needless to say, it has been easy to hype advances in chatbot history with a few famous names, such as the celebrities roped in unwittingly to test Eugene Goostman publicly, filmed for TV, at the Royal Society in London, in a global Turing test staged to celebrate the birthday of Alan Turing, Britain’s gift to the world of AI.

Germany’s Reaction to the Political Chatbots and the Cambridge Analytica Scam

If you think I am exaggerating the effect of politically created chatbots on that U.S. general election, think again. Europeans watched in democratic, intellectual horror the supposedly entertaining advance of spam bots on Twitter; see my previous discussion of Microsoft’s Tay orchestration and how her successor Zo was so dumb she was deactivated in 2019 after only a few years in operation, with comparatively few users and hardly any PR or tech discussion of Microsoft’s “magic AI bot.” The fact is, if Tay was so clever, how could her sister Zo be so ordinary and clearly just another NLP chatbot?

In the meantime, purchasers lost even more trust in bot tech. Spam bots, particularly the harassing sort on Twitter and Facebook or the annoying ones on Skype and kik.com, have actually wrecked the market for serious bot developers. That is, until the year 2020, when the wheat has been separated from the chaff; hence the rationale and timing of this book.

Looking back on the past decade that my 2D chatbot venture in Britain languished in a premarket, R&D phase, what were the repercussions for Internet regulation, privacy laws, electoral campaigning rules, and society’s general well-being and satisfaction with the onslaught of fairly low performing, one-dimensional 2D chatbots on social media and websites?

The reaction of German Chancellor Merkel—supported by her even more conservative Bavarian counterparts, the CSU, weirdly in unison on this with the Green Party of Germany and Die Linke, the far leftists who were once the Communists of East Germany—was to suggest banning the social (media) bots that can influence an election. This policy call, if not yet a legislative one, came before the exposure of the Cambridge Analytica scandal and the CIA’s confirmation of Russian interference in the U.S. elections. British intelligence and other sources have also indicated that the Russians interfered (via Facebook advertising manipulation at the very least, with some evidence of politicized chatbots) in UK general elections and, of course, the deeply divisive Brexit referendum.

I discuss the aforementioned political events again in the afterword in Chapter 6. If you somehow missed the Cambridge Analytica global breach of privacy by misusing millions of people’s Facebook data to politically target them, I strongly recommend you watch this Netflix documentary released in July 2019, The Great Hack. It is the best analysis and summary of the far-reaching dimensions of this scandal.24

The Great Hack falls short of showing that Cambridge Analytica’s investors included Brexiteers with a fierce agenda, as well as Tory MPs past and present, not to mention former heads of defense and other British civil servants. This all came out in the subsequent flurry of investigative reporting into the overlapping interests and compromising political ties of a supposedly neutral, privately held data mining company.

The New York Review of Books journalist Tamsin Shaw concurs with me in this view. I spotted her review curated on the film criticism platform Rotten Tomatoes’ page for The Great Hack.25 Shaw is a friend and colleague of Carole Cadwalladr, the British journalist at The Observer and The Guardian. It was Cadwalladr who broke the story of Facebook’s integral involvement in the Cambridge Analytica breach of personal data, subsequently suffering death threats, legal action by the named perpetrators, and ongoing misogynistic abuse.

As Shaw muses upon the release of The Great Hack, the real story was how billionaires and authoritarian individuals were attempting to undermine and control democratic institutions, not just breach the data of billions of ordinary people like you and me to further their own power and reach:

The bigger picture, which Carole and I had been discussing during those preceding months, was the way in which the Cambridge Analytica story opened a window onto a new constellation of international billionaires, corrupt politicians, and war profiteers who were apparently amassing enormous power. That story isn’t only about technology, data, and psychographic profiling; it’s also, at root, a story about the consequences of entrenched economic inequality, the privatization of essential public assets and government functions, including even national security, and the challenge to conventional foreign policy posed by the bargains being struck between international kleptocrats. And it tells us why, beyond being manipulated on social media, we should care about businesses like Cambridge Analytica—and why we should be concerned about what the Mueller investigation failed to expose.26

As the serious newspapers' reporting uncovered, two sisters were on the board of, and investors in, the now insolvent Cambridge Analytica firm: super-rich British Brexiteers who ran PR campaigns and worked closely with the key investor Stephen Bannon, a Ku Klux Klan support act at the far-right American news platform Breitbart. Bannon of course was later belligerently famous as Trump's media and strategy adviser until he was unceremoniously kicked out of the White House, as have been so many of President Trump's dodgiest acolytes.

Off the record, I was interviewed by the technology editor of the long-established Munich newspaper Süddeutsche Zeitung about the call to ban "social bots" in Germany, when I was a keynote speaker at CeBIT Hannover in March 2017. He was making the case for business bots as opposed to the much-feared "social bots," which were possibly about to be banned under German law as a danger to democracy if allowed to manipulate people at election time. Note that German commentators adopted the English keywords "social bots" to discuss the phenomenon of manipulative NLP 2D chatbots online, framing the German debate with a phrase they coined in English: "Business Bots vs. Social Bots."

Incidentally, I have frequently been interviewed off the record by leading technology editors. Three times by the BBC to date, with the last in-depth interview being about "how does it feel to switch off or deactivate an AI bot that you have created?" That was for an in-depth report by Zoe Kleinman in London, who had already interviewed me live at an event with peers, the New York-organized UTTR conference on chatbots in London.27

I have had several chats with the BBC tech editor Leo Kelion, in person at the Beeb's HQ in London and over the phone, on the subject of the Microsoft Tay.ai hoax. I think he had read my VentureBeat article, "What to do when chatbots start spewing hate?," which was critical of Microsoft's "bad bot goes rogue" stunt. As explained in this chapter, Tay's antics had seriously damaged the market for us and all other bot developers. Purchasers simply didn't trust chatbots anymore; after all, if a Microsoft bot could get out of control, how could smaller companies manage theirs?

In this chapter and the previous one, I assessed the often kneejerk, subjective reaction of purchasers when faced with decisions about unknown, "experimental" new technologies. In the aftermath of Tay.ai and the media storm hyped up around it (ignoring the tech bloggers flagging that it was probably a corporate hoax), the risks implied in adopting NLP chatbots were assessed to be far greater than they actually were. It was clearly a case of "when in doubt, don't," and no reminder that you are likelier to break your neck riding a familiar horse than to be eaten by an exotic shark could assuage the fears of corporate purchasers or public sector procurement.

Case Study #2: A World First Wayfinder AI Bot Hologram in a German Shopping Center

Our pilot in a Cologne shopping center turned out to be an adventure, quite a drawn-out one, but a roller-coaster of successes and failures nonetheless! We made several errors of judgment rather than wrong design choices or flawed code. Our proprietary algorithm was solid and performed robustly. However, we did not anticipate the glitches on the side of our partners, suppliers, and client.

Without going into those mistakes and slip-ups in too much detail, I will summarize here the objectives and achievements of this deliberately anonymized greenfield project with two leading property developers, who shall remain unnamed for the purposes of this book. Nonetheless, determined researchers can find out who they are by looking at my ResearchGate profile or the publicly accessible Medium.com posts I wrote at the time, including photos from our mall trial.28

My ResearchGate account and Medium author profile were set up to log the findings of our pilot after I was interviewed by academic researchers at Harvard, Yale, and Delft universities. The published photos and comments would help them to independently conduct their research and cite my company in their work.29 One of the Ivy League researchers considered our Amalia prototype to already be in the league of Google Home and Amazon Alexa (I am under a confidentiality obligation, so I cannot specify who or from where). She told me excitedly, "What you are doing is completely different to all the other commercial trials!"

Perhaps due to the anxieties caused by emerging technology, we lost far too much time just negotiating the contracts with the clients and hardware supplier. The e-mail chains and chasing of stakeholders could take up over 20 hours a week of my time as CEO, for months. We first pitched in Cologne in April 2018 and then, after a series of vacations and absences of the decision makers, plus an unexpected replacement of the hardware supplier first chosen by our side, we finally executed 12 months later.

We began 2019 with a live demo, livestreamed on the last day of January 2019 to the Applied Machine Learning Days at the EPFL in Lausanne.30 That demo featured the sister bot Portia, the conference AI bot hologram, whereas the bespoke Wayfinder bot was Amalia. You can view the three video captures taken from the livestream in January 2019 in the film section of my Amazon author profile page. Just scroll to the right to find the short videos of Portia in action.31

Note that these two hologram personalities, Portia and Amalia, were built on separate bot brains, with specific bespoke content. Portia gives keynote addresses to conferences and does Q&A with the audience and moderators, customized to the specific topic of the session and overall conference theme. Meanwhile Amalia’s multilingual brain has been built to talk mostly about shopping centers, the bespoke list of retailers and services within a mall, and the customized content about that point of sale for brands as well as the city she finds herself in.
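The separation of personas described above, each hologram personality running on its own bot brain with its own bespoke content, can be sketched in a few lines. This is an illustrative sketch only: the field names and structure are hypothetical, not velmai's actual configuration format.

```python
# Hypothetical sketch of the persona/content split: two hologram
# personalities, each mapped to a separate "bot brain" whose content
# domains never mix. Field names are illustrative assumptions.
PERSONAS = {
    "Portia": {
        "role": "conference keynote and Q&A bot",
        "content_domains": ["conference theme", "session topics", "audience Q&A"],
        "languages": ["en"],
    },
    "Amalia": {
        "role": "shopping-center wayfinder bot",
        "content_domains": ["retailers and services",
                            "point-of-sale brand content",
                            "local city information"],
        "languages": ["en", "de"],
    },
}

def brain_for(persona: str) -> dict:
    """Each persona loads only its own bespoke content (separate brains)."""
    return PERSONAS[persona]
```

The point of the design is simply that Portia never answers mall questions and Amalia never gives keynotes; swapping content means loading a different brain, not retraining a shared one.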

Preparing the Software, Hardware, and Customization for Multiple Clients

Picture this: as a startup that had been sitting on its R&D for 10 years, we were pretty keen to get this (fully unpaid and entirely at our own cost) pilot underway. Off the back of this world-first Wayfinder, we hoped to attract a whole lot of replica sales, the well-known multiplier effect in sales and marketing. That was a calculated risk and investment on the part of velmai Ltd in the UK.

Then visualize the harsh realities: me spending six weeks away from my Kentish home in my German family’s holiday apartment in Bonn’s Bad Godesberg, so I could do the two-hour commute each day by tram directly to the door of the shopping center in Cologne. As we had absolutely no cash coming in, only a lot of it going out plus all the sweat equity of our programming, bot coaching, customer liaison, client coaching, and partner relations, there were very few choices as to how to execute this with a zero budget. Self-financed or bootstrapped goes without saying!

Sales and marketing were actually taken care of for free with a Google Maps Local Guide account for our twin bots Amalia and Portia. She started posting photos of her arrival in Cologne and attracted thousands of views within a matter of weeks! This evident curiosity about the deployment of new tech was reflected in the comments and engagement of the shoppers when she was finally released into the public space of the mall, located right in the center of it by the escalators and across from the elevators.

In short, with a long commute to the client site up to four times a week for nearly two months, this was a heavy investment of time, money and energy on velmai’s part in order to get the prototype up and running in a public space. The things that needed to be trained on site in German for the bot brain were the following:

  • It had to be tested for the "map knowledge" of its surroundings, that is, the 30+ shops and facilities, restaurants, cafes, and services. Her overall performance with this was excellent, as I documented in my Medium post at the time.32
  • Amalia needed continual practice—on site in a quiet office space above the mall—to understand new information and the random greetings and ways of addressing her, in German only, in real time.
  • She needed to speak with native speakers of German because my German is spoken with a heavy English accent. That confused her, and she kept replying to me in English even though she had understood my questions and comments in German!33
  • Some information we had not been given by the client had to be added at the last minute. For example, the underground floor was called "Basement" in both English and German; they used the borrowed English word instead of the German Untergeschoss. So that had to be tweaked for all the shops and food court eateries, as well as the loos and baby changing room, plus the exit to the subway.
  • The subway had to be referred to as the underground train or tube because the food outlet Subway was also located in the Basement. This confused Amalia’s organically learning bot brain! As did other company brand names in English, for example, Flying Tiger Copenhagen, “Only” (a clothes store), Depot, the beauty chain store “Rituals,” Comfort Baby, Claire’s, Hair Express, the Barber, US Nails, Bun n Roll and Fruit World.
  • I wrote a particularly funny piece about Amalia making jokes about the iris photography shop called Eyesight in one of my three Medium posts about this pilot.34 We had to change some of the entries, like that of the iris photography shop, which we humans had wrongly assumed was an optician selling glasses and eyewear. In fact, it was a gift shop that Amalia ended up giving a lot of reflective thought to, making an unwitting joke out of it.
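The brand-name collisions in the list above ("Subway" the eatery vs. the subway train, "Only," "Depot," and so on) are a classic word-sense disambiguation problem. Our proprietary bot brain is not public, so the following is only a minimal sketch under assumed rules: a context-cue lookup that picks a sense for "subway" from the surrounding words of the shopper's question.

```python
# Illustrative sketch only, NOT velmai's algorithm: disambiguate the
# sandwich chain "Subway" from the subway (metro) using context words.
# Cue lists and the fallback rule are hypothetical assumptions.
TRANSIT_CUES = {"train", "station", "line", "platform", "ticket", "tube"}
FOOD_CUES = {"eat", "sandwich", "lunch", "hungry", "food", "restaurant"}

def resolve_subway(utterance: str) -> str:
    """Pick a sense for 'subway' based on surrounding context words."""
    # Crude tokenization: lowercase and strip basic punctuation.
    cleaned = utterance.lower().replace("?", " ").replace(",", " ")
    words = set(cleaned.split())
    if "subway" not in words:
        return "no mention"
    if words & TRANSIT_CUES:
        return "underground train"
    if words & FOOD_CUES:
        return "Subway (restaurant)"
    # Default to the in-mall brand, since the bot lives inside the mall.
    return "Subway (restaurant)"
```

For example, "Where can I catch the subway train?" resolves to the underground train, while "I'm hungry, where is Subway?" resolves to the eatery in the Basement. The same cue-based trick extends to the other colliding brand names ("Only," "Depot," "Rituals").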


Figure 2.3 © AI BaaS UG, Munich, 2019. Amalia I was deployed in a shopping center in Cologne with 10 million shoppers per annum. Here you see her successor Amalia II about to be deployed in Munich to small and large crowds for multilingual automation of Frequently Asked Questions and Infotainment

Lessons Learned From a Business Development and Client Liaison Angle

What we discovered in our trial in Cologne was the following:

  • Don’t be too ambitious—as a Minimum Viable Product (MVP), the bells and whistles must come later.
  • If the client insists on certain features that were not agreed upon in writing, then you can work out a timeline for adding those elements at a later date, for example, in the next phase of the pilot.
  • Control the client’s expectations before, during, and after the implementation.
  • Only work with reliable suppliers of hardware or any other external module you need.
  • Off-the-shelf software plug-ins are easiest to add rather than getting bespoke coding done by partners.
  • Bring it all in-house as much as possible! Then you are no longer vulnerable to the failures and lateness—including repeated delays and delivery of faulty equipment—of external suppliers and third party partners.
  • We now send a client a template for the customization that must be filled out before we start building the bespoke bot brain. See our new company AI BaaS UG’s standard template for clients in the following case study about Car Showrooms.
  • We now have a standard contract that must be agreed and signed before we commence any type of pilot or corporate campaign. This stipulates payment terms and delivery milestones for all those concerned.

To date, nearly all hologram projections you see around the world do not have an AI bot brain. That makes AI BaaS UG the first bot developer globally to create a Cognitive Interface with voice via a hologram.


Figure 2.4 © Realfiction, Copenhagen, 2019. This Danish company is our hardware supplier and has been creating high grade “pre-recorded” holograms for nearly a decade. Their 3D holograms have limited interactivity, usually through a keyboard, buttons in the device and/or gestures

Case Study #3: A "Meet and Greet" Sales Hologram in Car Showrooms

This is a Request for Proposal that our production and sales team were working on at the time of my completing this book, in autumn 2019. I will report on it in the next edition of this textbook, if not in a sequel or an article in a relevant journal. After we ran the shopping center bot hologram installations, our sales team decided the best next prototype in our pipeline would be a car showroom mixed reality installation. Unlike the shopping center AI bot hologram, this creation would do less Wayfinding and more direct selling.

Installed at a point of sale, the AI bot must understand tentative enquiries and hesitant questions about stock and inventory, and it must not be too pushy when suggesting that the prospective customer speak with its human colleague, obviously in order to progress the sale to the deal stage.

Our sales team need to manage expectations during the sales process so the client and other people in their organization do not mistakenly believe they have just ordered a humanoid robot or an AI clone from a science fiction movie, of which there are many to confuse "The Uninitiated" in AI and overhype what they expect from our inventions and creations! For that reason, we have started creating demo videos and will have our prototypes operating live on various sites in Germany. If the prospect cannot travel to our host site or showroom in Bingen am Rhein or on the Côte d'Azur, then they can participate in a livestreaming session with the AI bot hologram in its 3D avatar form and chat with its 2D self.


Figure 2.5 © AI BaaS UG, October 2019. Screenshot of our promotional video on our company landing pages www.ai-baas.com. The new venture launched from the Bavaria Film City suburb of Gruenwald in Munich in the autumn of last year

Case Study #4: UK Labour Party Bot Goes Rogue on a Dating Site

During the national general election in 2017, the UK Labour Party used a bespoke Facebook Messenger chatbot to devastating effect. I have used this example as an ethics test during the internship program that I have now run twice at the University of Kent in Canterbury. The international and local students who were velmai interns for just two weeks were delighted by this challenge of "dos and don'ts" when creating chatbots for any purpose.

Essentially, the Labour Party created a chatbot on their official party Facebook page. This meant that when you visited the UK Labour Party FB page during that election campaign (the bot was later deactivated), even without being a fan or member, you would be greeted by their avatar, which happened to be the English rose, the logo of the party. I really should have taken recordings and screenshots of this phenomenon, as this little bot was to become quite historic in global chatbot evolution, though that history may not have been written yet.

My case study for the Employability Points interns at the University of Kent was a first step toward discussing the implications of this political deployment of a bespoke Messenger chatbot in Europe. The 2D chatbot was there to "meet and greet" visitors to the page. It did that quite successfully, though I recall my conversations with it tested its limitations. For example, after saying hello and welcome, it launched into policy recommendations and news flashes. So far so good, but that can become repetitious for a supposedly "spontaneous," organic conversation between bot and human in real time.

Nevertheless, this little NLP chatbot, despite its content and conversational limitations, became a huge contributing factor in the Labour Party increasing its membership to record numbers, by the hundreds of thousands no less. Sure, a lot of that was face-to-face campaigning by a leftist youth movement called Momentum, as well as the British trade union movement. But what has been overlooked to date in this electoral period of British history is how the bespoke Messenger bot increased the numbers on their FB page.

I am not privy to those statistics, but I witnessed the increase through my own use of their FB page several times over a period of months during the election campaign, purely out of professional curiosity to see how the brave NLP chatbot was doing as a Messenger bot amid heated political controversy. Also overlooked, and representing a huge ethical controversy in chatbot history, is the fact that Labour supporters and/or members replicated this English rose logo avatar from Facebook. They then deployed this 2D chatbot on a dating site, without a logo or avatar, just a multitude of real people's names, as I explain in the following.

The controversy lies in the fact that:

  1. This chatbot took on various personas or the names of real people who were members/supporters of the Labour Party.
  2. The various personalities of this chatbot with real people’s names were then given a variety of different personal addresses and post codes. Why?
  3. Because the dating deployment was a highly strategic, innovative campaign method to win swing voters. How?
  4. The various clones of the Labour Party avatar were deployed in swing electorates, townships/regions where Labour could win a marginal seat away from the Tory Party. Their professed fake addresses enabled them to contact single men and women "looking for love" on this dating app.
  5. The problem? The issue is that these lonely (human) hearts did not know they were being contacted by a chatbot, let alone a politically repurposed 2D bot who had the sole aim of influencing their particular vote in this hotly contested general election.
  6. How did the cloned, multiple-personality Labour bot accomplish this? It sent automated text messages to its human targets on the dating site, asking them in colloquial, informal, and very convincing English "dialects" or slang, "How are you going to vote today, love? I'm for Labour!" and similar messages.
  7. The killer deployment, or dating message, by this politically motivated chatbot with numerous IDs (so evidently a multiple personality disorder executed on the dating site) was a text the day before polling day: "Don't forget to vote, honey!" and the like.

We don’t know or have the actual metrics of the success rate or conversion rate of the bot missives to human lonely hearts. All I can say it was obviously worth the ethical and legal risk the bot managers took to do this!

Because no formal case study exists on this episode to this day, I got my interns, in their busy two-week program with me, to look into it if they had time. We saw that the dating site company that had supposedly unwittingly hosted this caprice had made a public statement about it with an online press release at the time.

They regretted that their users had been used like this by Labour Party supporters. They said they were investigating the incident and taking action. I don't think they ever did. The Labour Party of course distanced themselves from the whole botified online shenanigans and said it wasn't the Party Head Office, administrators, or members, just some fans who got carried away online in their zest for a Labour win.


2 Peitzker, T. 2016. “First Chatbot Arrest.”

4 Peitzker. 2016. “What to do.”

6 Wikipedia, screenshot entry about Zo bot by Microsoft, accessed October 17, 2019.

7 Even at the start of 2018, The Washington Post reported on the latest AI bot attempt to recreate Trump using an organically forming Markov algorithm. As the article shows, it didn't quite work, which means the earlier standard NLP or NLU chatbots were actually better at rendering his personality as a 2D avatar for or against him. https://www.washingtonpost.com/news/politics/wp/2018/01/16/meet-trumpbot-the-bot-that-tries-to-talk-like-trump/?noredirect=on

8 YouTube demo video of the AskHillaryandDonald chatbot platform by SapientX in the USA, https://youtube.com/watch?v=CzqTJYW-bTs&feature=youtu.be

9 Johnson, K. 2016. “This Donald Trump Chatbot is Great… Really, Really Great. It’s Unbelievable.” VentureBeat, July 20, 2016, https://venturebeat.com/2016/07/20/donald-trump-hillary-clinton-chatbot-sapientx/

10 Johnson, “This Chatbot is Great.”

15 See the News section of my portfolio site, www.ai-baas.com

17 Johnson, "Clinton campaign launches bot."

24 Jehane Noujaim and Karim Amer, The Great Hack, documentary, Netflix, Released July 24, 2019.

25 Review of The Great Hack, “Critics Consensus: The Great Hack offers an alarming glimpse of the way data is being weaponized for political gain -- and what it might mean for future elections.” https://rottentomatoes.com/m/the_great_hack (Accessed October 19, 2019)

26 Tamsin Shaw, “The oligarch threat,” New York Review of Books, August 27, 2019, https://nybooks.com/daily/2019/08/27/the-oligarch-threat/

27 UTTR October 3, 2017. Press release from New York, “New York, NY— Ticonderoga Ventures, Inc. announces that velmai’s Chief Executive will speak at the UTTR Conference on Chatbots ( http://uttr.com) on October 3, 2017 in London.” https://webwire.com/ViewPressRel.asp?aId=213646 (accessed October 19, 2019).

29 Other media that interviewed us around that time included the San Francisco-based www.knurture.com's publication www.outfuel.com.
