Sam Altman, CEO and co-founder of OpenAI, speaks during a Senate Judiciary Subcommittee hearing in Washington, DC, U.S., Tuesday, May 16, 2023. Congress is debating the potential and pitfalls of artificial intelligence as products like ChatGPT raise questions about the future of creative industries and the ability to tell fact from fiction.
Eric Lee | Bloomberg | Getty Images
Last week, OpenAI CEO Sam Altman charmed a room full of politicians in Washington, DC, at a dinner party, then testified for about three hours at a Senate hearing about the potential risks of artificial intelligence.
After the hearing, he summarized his position on AI regulation using terms that are not familiar to the general public.
“AGI safety is really important and frontier models should be regulated,” Altman tweeted. “Regulatory capture is bad, and we shouldn’t mess with models below the threshold.”
In this case, “AGI” refers to “artificial general intelligence.” As a concept, it describes an AI significantly more advanced than anything currently possible: one that can do most things as well as or better than most humans, including improving itself.
“Frontier models” is a way of talking about the AI systems that are the most expensive to produce and that analyze the most data. Large language models, like OpenAI’s GPT-4, are frontier models, as compared with smaller AI models that perform specific tasks like identifying cats in photos.
Most people agree that there must be laws governing AI as the pace of development accelerates.
“Machine learning, deep learning, over the last 10 years or so, it has grown very rapidly. When ChatGPT came out, it grew in a way we never imagined, that it could go this fast,” said My Thai, a computer science professor at the University of Florida. “We’re afraid that we’re rushing into a more powerful system that we don’t fully understand and can’t anticipate what it can do.”
But the language around this debate reveals two major camps among academics, politicians and the tech industry. Some are more concerned with what they call “AI safety.” The other side is worried about what they call “AI ethics.”
When Altman addressed Congress, he mostly avoided jargon, but his tweet suggested he is mostly concerned about AI safety — a stance shared by many industry leaders at companies like OpenAI, Google DeepMind and well-capitalized startups. They worry about the possibility of building an unfriendly AGI with unimaginable powers. This camp believes urgent attention from governments is needed to regulate development and prevent an untimely end to humanity — an effort similar to nuclear nonproliferation.
“It’s good to hear so many people are starting to take AGI safety seriously,” Mustafa Suleyman, co-founder of DeepMind and current CEO of Inflection AI, tweeted on Friday. “We need to be very ambitious. The Manhattan Project cost 0.4% of U.S. GDP. Imagine what an equivalent program on safety could achieve today.”
But much of the discussion in Congress and at the White House about regulation comes through an AI ethics lens, which focuses on current harms.
From this perspective, governments should enforce transparency around how AI systems collect and use data, restrict their use in areas covered by anti-discrimination law, such as housing or employment, and explain how current AI technology falls short. The White House’s AI Bill of Rights proposal, released late last year, included many of these concerns.
This camp was represented at the congressional hearing by IBM chief privacy officer Christina Montgomery, who told lawmakers that every company working on these technologies should have an “AI ethics” point of contact.
“There must be clear guidance on the end uses of AI or the categories of AI-supported activities that are inherently high risk,” Montgomery told Congress.
How to Understand AI Jargon Like an Insider
Not surprisingly, the AI debate has developed its own jargon, since the field started as a technical academic discipline.
Much of the software discussed today is based on so-called large language models (LLMs), which use graphics processing units (GPUs) to predict statistically likely sentences, images or music, a process called “inference.” Of course, AI models must first be built, in a data-analysis process called “training.”
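As a toy illustration of those two phases — and only that; a real frontier model learns billions of neural-network weights on GPUs, not word counts — here is a minimal bigram sketch in Python. "Training" analyzes a corpus once to build the model, and "inference" then uses it to predict the statistically likeliest next words:

```python
# Toy sketch only: a bigram word model, NOT how real LLMs work.
# It illustrates the split between "training" (building the model
# from data) and "inference" (using it to make predictions).
from collections import Counter, defaultdict

def train(corpus):
    """'Training': analyze data once to produce model parameters (bigram counts)."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def infer(model, prompt, length=5):
    """'Inference': use the trained model to predict likely continuations."""
    out = [prompt]
    word = prompt
    for _ in range(length):
        if word not in model:
            break
        word = model[word].most_common(1)[0][0]  # pick the likeliest next word
        out.append(word)
    return " ".join(out)

model = train("the cat sat on the mat and the cat ran")
print(infer(model, "the", length=1))  # prints: the cat
```

The expensive part for real models is training; inference is comparatively cheap per query, which is why it can be served to millions of users at once.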
But other terms, especially those from proponents of AI safety, are more cultural in nature and often refer to shared references and jokes.
For example, AI safety people might say they’re worried about turning into paperclips. This refers to a thought experiment popularized by philosopher Nick Bostrom, which posits that a super-powerful AI — a “superintelligence” — could be given a mission to make as many paperclips as possible, and logically decide to kill humans and make paperclips out of their remains.
OpenAI’s logo is inspired by this tale, and the company has even made paperclips in the shape of its logo.
Another AI safety concept is the “hard takeoff” or “fast takeoff,” a phrase suggesting that if someone succeeds in building an AGI, it will already be too late to save humanity.
Sometimes this idea is described in terms of an onomatopoeia — “foom” — especially among critics of the concept.
“It’s like you believe in the ridiculous hard take-off ‘foom’ scenario, which makes it sound like you have no understanding of how everything works,” tweeted Meta AI chief Yann LeCun, who is skeptical of AGI claims, in a recent social media debate.
AI ethics also has its own jargon.
When describing the limitations of current LLM systems, which cannot understand meaning but merely produce human-looking language, AI ethicists often liken them to “stochastic parrots.”
The analogy, coined by Emily Bender, Timnit Gebru, Angelina McMillan-Major, and Margaret Mitchell in an article written while some of the authors were at Google, points out that while sophisticated AI models can produce realistic-looking text, the software does not understand the concepts behind the language – like a parrot.
When these LLMs make up incorrect facts in their answers, they’re “hallucinating.”
One topic IBM’s Montgomery pressed during the hearing was “explainability” in AI results. When researchers and practitioners cannot point to the exact numbers and path of operations that large AI models use to derive their output, this can hide inherent biases in the LLM.
“You have to have explainability around the algorithm,” said Adnan Masood, AI architect at UST-Global. “Before, with classical algorithms, you could ask, ‘Why am I making this decision?’ Now with larger models, they become this huge model, they’re a black box.”
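To make the contrast concrete, here is a hypothetical sketch — the function name, weights and threshold are all invented for illustration — of what explainability looks like in a small, transparent model, the kind of auditable formula a billion-weight neural network cannot offer:

```python
# Hypothetical example of an "explainable" decision: a transparent scoring
# rule that can report exactly why it decided. The weights (0.7, 1.2) and
# the threshold of 50 are made-up values for illustration only.

def transparent_score(income, debt):
    """Returns (decision, human-readable reason)."""
    score = 0.7 * income - 1.2 * debt  # explicit, auditable formula
    reason = f"score = 0.7*{income} - 1.2*{debt} = {score:.1f}, threshold = 50"
    return score > 50, reason

decision, why = transparent_score(income=120, debt=20)
print(decision, "|", why)

# A large neural model reaches the same kind of decision through billions of
# learned weights, with no comparably readable line of reasoning to point to —
# which is what practitioners mean by calling it a "black box."
```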
Another important term is “guardrails,” which encompasses the software and policies that Big Tech companies are now building around AI models to ensure they don’t leak data or produce disturbing content, which is often called “going off the rails.”
It can also refer to specific applications that keep AI software on topic, such as Nvidia’s “NeMo Guardrails” product.
“Our AI Ethics Committee plays a critical role in overseeing internal AI governance processes, creating reasonable safeguards to ensure we bring the technology into the world in a responsible and safe manner,” Montgomery said this week.
Sometimes these terms can have more than one meaning, as in the case of “emergent behavior.”
A recent Microsoft Research paper titled “Sparks of Artificial General Intelligence” claimed to identify several “emergent behaviors” in OpenAI’s GPT-4, such as the ability to draw animals using a programming language for graphics.
But it can also describe what happens when simple changes are made at a very large scale — like the patterns birds make when flying in flocks or, in the case of AI, what happens when ChatGPT and similar products are used by millions of people, such as widespread spam or disinformation.