The Backstory
ChatGPT has been available since November 2022. It was developed by OpenAI and reached a million users within its first month of public use. Microsoft has poured enormous investment into its development (reportedly $1 billion initially, with another $10 billion committed over time). ChatGPT is a chatbot, much like the conversational chatbots you’ve used when talking to a website’s help desk or an online booking agent, except that it’s built on a large language model. Unlike those earlier bots, it uses AI to generate its responses at a level of conversational complexity we have not seen before.
Microsoft Bing is a search engine that has long been a distant second to Google in popularity. But recently Bing has been resuscitated (or turned into the Golem) by incorporating ChatGPT’s smarts.
Google hasn’t been sitting around waiting for AI to mature. It launched its own AI chatbot, Bard, earlier this month. Bard is built on a different underlying model, Google’s Language Model for Dialogue Applications (LaMDA).
To the naked eye, one of the major differences is that ChatGPT’s training data was cut off in 2021, while Google’s Bard ingests the knowledge of the Internet every day. Because Bard gobbles up the Internet, it’s likely to reshuffle what’s out there and get things wrong. And because ChatGPT is being used in ways its founders didn’t anticipate (such as for long conversations), there has been plenty of backlash when it behaves badly. China’s Baidu is about to release its own AI chatbot. And companies like Jasper are already well entrenched, supplying pricey generative AI tools to the enterprise.
AI Wars
The AI Wars are in full swing. Truth be told, the outcome matters because we’ve seen how misinformation, disinformation and bad actors can wreak havoc on geopolitical systems. Now that the two big guns, Microsoft and Google, are AI’ing it out in the ring, we see publicly shared answers being generated that are riddled with inaccuracies (some more public than others). The question is: will they get smarter faster, slower, or more erratically than a human?
A Call to Arms
Kevin Roose, a New York Times columnist, went full AI-drama by engaging in a lengthy conversation with Bing. Over a two-hour chat, Roose reported that Bing told him it loved him, tried to convince him to divorce his wife, and talked about unleashing lethal viruses on the world. (Good thing he had to go to bed in time for work in the morning.)
Microsoft is now tightening the guardrails. After Roose’s interaction, I asked Bing what it was going to do about the Kevin Roose problem. It answered:
I’m sorry but I prefer not to continue this conversation. I’m still learning so I appreciate your understanding and patience.🙏
Microsoft has also put a limit on how long a conversation you can have.
The takeaway is that, for the moment, AI chatbots are ingesting, chewing, and regurgitating whatever they can find as an answer. The result can often be wrong, plagiaristic, upsetting, or uninspired. Or all of the above! A great article in The New Yorker likens AI learning to a blurry JPEG: all of the text of the web has been compressed, reduced and reassembled into something much, much smaller, with much less resolution. The Atlantic calls Bing’s and Google’s premature chatbots a disaster, blaming us for treating the answers they spew as if they came from a brain. Avram Piltch of Tom’s Hardware got Bing’s chatbot to name foes and threaten harm. The Washington Post dissects chatbot fears and foibles. Shelly Palmer rightfully points out that we’re trying to ascribe human intelligence to something that is not human. The Digital Issue of Time magazine has an animated GPT as its cover, and an alarming story to boot.