1. What is generative AI?
These systems use neural networks, which are loosely modeled on the structure of the human brain and learn to complete tasks in similar ways, chiefly through trial and error. During training, they’re fed vast amounts of information (for example, every New York Times bestseller published in 2022) and given a task to complete using that data, perhaps: “Write the blurb for a new novel.” Over time, they’re told which words and sentences make sense and which don’t, and subsequent attempts improve. It’s like a child learning to pronounce a difficult word under the instruction of a parent: slowly, the child learns and applies that ability to future efforts. What makes these systems so different from older computer programs is that the results are probabilistic, meaning responses will vary each time but will gradually get smarter, faster and more nuanced.
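The “probabilistic” part can be sketched in a few lines of code. This is a toy illustration, not how a real language model works: the word list and probabilities below are hypothetical, and a real system would compute them with a neural network. The point is only that the model samples from a probability distribution over candidate next words, so repeated runs can produce different responses.

```python
import random

# Hypothetical probabilities a model might assign to the next word
# of a blurb beginning "A ... debut novel". Illustrative values only.
next_word_probs = {
    "thrilling": 0.5,
    "heartfelt": 0.3,
    "tedious": 0.2,
}

def sample_next_word(probs, seed=None):
    """Pick one word at random, weighted by its probability."""
    rng = random.Random(seed)
    words = list(probs)
    weights = [probs[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

# Different seeds stand in for different runs: the output can vary,
# but likelier words are chosen more often.
print(sample_next_word(next_word_probs, seed=1))
print(sample_next_word(next_word_probs, seed=2))
```

Because the choice is weighted rather than fixed, the same prompt can yield different wording each time, which is exactly the behavior described above.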
2. How does ChatGPT work?
ChatGPT is the latest iteration of GPT (Generative Pre-Trained Transformer), a family of text-generating AI programs developed by San Francisco-based laboratory OpenAI. GPTs are trained in a process called unsupervised learning, which involves finding patterns in a dataset without being given labeled examples or explicit instructions on what to look for. The most recent version, GPT-4, builds on its predecessor, GPT-3.5, which ingested text from across the web, including Wikipedia, news sites, books and blogs in an effort to make its answers relevant and well-informed. ChatGPT adds a conversational interface on top of the program. At their heart, systems like ChatGPT are generating convincing chains of words but have no inherent understanding of their significance, or whether they’re biased or misleading. All they know is that they sound like something a person would say.
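The idea of finding patterns in raw text without labeled examples can be illustrated with a minimal sketch. This is not GPT’s actual method: real GPTs train transformer networks on billions of words, while the toy below merely counts which word follows which in a tiny made-up corpus. It shows the same underlying principle of learning next-word patterns from unlabeled text alone.

```python
from collections import Counter, defaultdict

# A tiny, unlabeled "corpus" (illustrative stand-in for web text).
corpus = "the cat sat on the mat the cat ran"

# Count word pairs: no labels or instructions, just raw co-occurrence.
follower_counts = defaultdict(Counter)
words = corpus.split()
for prev, nxt in zip(words, words[1:]):
    follower_counts[prev][nxt] += 1

def most_likely_next(word):
    """Return the word that most often follows `word` in the corpus."""
    return follower_counts[word].most_common(1)[0][0]

# "cat" follows "the" twice in the corpus, "mat" only once.
print(most_likely_next("the"))  # → cat
```

Scaled up enormously and replaced with a neural network, this kind of next-word prediction is what lets the system produce convincing chains of words without any inherent understanding of what they mean.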
3. Who’s behind OpenAI?
It was co-founded as a nonprofit by programmer and entrepreneur Sam Altman to develop AI technology that “benefits all of humanity.” Early investors included LinkedIn co-founder Reid Hoffman’s charitable foundation, Khosla Ventures and Elon Musk, who ended his involvement in 2018. OpenAI shifted to create a for-profit entity in 2019, when Microsoft invested $1 billion.
4. What’s been the response to ChatGPT?
More than a million people signed up to use it following the launch in late November. Social media has been abuzz with users trying fun, low-stakes uses for the technology. Some have shared its responses to obscure trivia questions. Others marveled at its sophisticated historical arguments, college “essays,” pop song lyrics, poems about cryptocurrency, meal plans that meet specific dietary needs and solutions to programming challenges. The flurry of interest also raised the profile of OpenAI’s other products, including software that can beat humans at video games and a tool known as Dall-E that can generate images – from the photorealistic to the fantastical – based on text descriptions.
5. Who’s going to make money from all this?
Tech giants like Microsoft have spotted generative AI’s potential to upend the way people navigate the web. Instead of leaving users to scour dozens of links on a topic, these systems can fire back a bespoke response drawn from relevant text across the web. Microsoft deepened its relationship with OpenAI in January with a multiyear investment valued at $10 billion that gave it a claim on a share of OpenAI’s future profits and made Microsoft’s Azure cloud network the lab’s source of computing power. In February, Microsoft integrated a cousin of ChatGPT into its search engine, Bing. Questions remain about how to monetize search using these tools when there aren’t pages of results into which ads can be inserted.
6. How’s the competition going?
OpenAI spent the months after unleashing ChatGPT refining the program based on feedback identifying problems with accuracy, bias and safety. The result, GPT-4, is “40% more likely” to produce factual responses than its predecessor and is also more creative and collaborative, the lab said. ChatGPT represents a challenge to Microsoft’s rival Google, which had been working on similar AI systems for years but kept those efforts mostly within its labs. Google responded by releasing its own chatbot, Bard, which got off to a rocky start when it made a factual mistake during a public demonstration in February. The following month, China’s Baidu offered a demo of its Ernie Bot, receiving positive reviews from analysts. Facebook parent Meta Platforms Inc. was hurrying to assemble a generative AI product group from teams previously scattered throughout the company.
7. What other industries could benefit?
The economic potential of generative AI systems goes far beyond web search. They could allow companies to take their automated customer service to a new level of sophistication, producing a relevant answer the first time so users aren’t left waiting to speak to a human. They could also draft blog posts and other types of PR content for companies that would otherwise require the help of a copywriter.
8. What are generative AI’s limitations?
The answers it pieces together from second-hand information can sound so authoritative that users may assume it has verified their accuracy. What it’s really doing is spitting out text that reads well and sounds smart but might be incomplete, biased, partly wrong or, occasionally, nonsense. These systems are only as good as the data they are trained on. Stripped of useful context, such as the source of the information, and with few of the typos and other imperfections that often signal unreliable material, ChatGPT’s output can be a minefield for those who aren’t sufficiently well-versed in a subject to notice a flawed response. This issue led Stack Overflow, a computer programming website with a forum for coding advice, to ban ChatGPT-generated answers because they were so often inaccurate.
9. What about ethical risks?
As machine intelligence becomes more sophisticated, so does its potential for trickery and mischief-making. Microsoft’s AI bot Tay was taken down in 2016 after some users taught it to make racist and sexist remarks. Another developed by Meta encountered similar issues in 2022. OpenAI has tried to train ChatGPT to refuse inappropriate requests, limiting its ability to spout hate speech and misinformation. Altman, OpenAI’s chief executive officer, has encouraged people to “thumbs down” distasteful or offensive responses to improve the system. But some users have found work-arounds. Generative AI systems might not pick up on gender and racial biases that a human would notice in books and other texts. They are also a potential weapon for deceit. College teachers worry about students getting chatbots to do their homework. Lawmakers may be inundated with letters apparently from constituents complaining about proposed legislation and have no idea if they’re genuine or generated by a chatbot used by a lobbying firm.
–With assistance from Alex Webb and Nate Lanxon.