Google Creates AI "Green Lane"
Microsoft invests in OpenAI, a congressman calls for AI regulatory agency, and other recent news.
Image credit: Rept0n1x
Recent AI News
Google relaxes ethical reviews in a rush to deploy language models
Nico Grant in The New York Times (January 20th): Google Calls In Help From Larry Page and Sergey Brin for A.I. Fight
This article details Google’s internal plans to productionize technology similar to OpenAI’s ChatGPT. Google perceives ChatGPT as a threat to their core search business, and is investing accordingly. Alarmingly, Google also appears to have relaxed its ethical reviews in order to get the technologies to market faster.
Mr. Pichai has tried to accelerate product approval reviews, according to the presentation reviewed by The Times. The company established a fast-track review process called the “Green Lane” initiative, pushing groups of employees who try to ensure that technology is fair and ethical to more quickly approve its upcoming A.I. technology.
The company will also find ways for teams developing A.I. to conduct their own reviews, and it will “recalibrate” the level of risk it is willing to take when releasing the technology, according to the presentation.
The risks Google has been thinking about are reportedly mainly “copyright, privacy, and antitrust,” a list that doesn’t even include many AI safety concerns or the untruthful behavior of many of these models. Still, this is an example of competition eroding ethical constraints and procedures, which is not good for safety culture. If competition heats up and no regulation materializes, we may well see more of this. Tech executives are often incentivized to take risks less seriously than they should, because the risks often fall on others in ways that won’t be reflected in the bottom line. Google would do well to remember that nothing can be done both prudently and hastily.
DeepMind CEO Urges Caution on AI
Billy Perrigo in TIME (January 12th): DeepMind’s CEO Helped Take AI Mainstream. Now He’s Urging Caution
Before Google’s “green lane” initiative was announced, the CEO of its subsidiary DeepMind, Demis Hassabis, gave an interview to TIME. He urged caution in the face of accelerating AI advances:
I would advocate not moving fast and breaking things...When it comes to very powerful technologies—and obviously AI is going to be one of the most powerful ever—we need to be careful...Not everybody is thinking about those things. It’s like experimentalists, many of whom don’t realize they’re holding dangerous material.
I believe that Hassabis is sincere. But as Google’s recent steps indicate, we shouldn’t count on this sincerity to protect us from economic incentives and future “green lane” initiatives.
Microsoft announces multibillion dollar investment in OpenAI
Microsoft (January 23rd): Microsoft and OpenAI Extend Partnership
Microsoft recently announced a “multibillion dollar” investment in OpenAI, the creator of GPT-3. While terms of the deal were not announced, previous reporting indicated that Microsoft was prepared to invest $10 billion for an eventual 49% stake in the company. This comes just a few years after a roughly $1 billion investment that involved Microsoft building a supercomputer for OpenAI’s use; the new deal also includes supercomputing resources. AI today is a highly capital-intensive business, so expect more large investments like this one.
Government AI Risk Management Framework launches soon
National Institute of Standards and Technology (upcoming, January 26th): NIST AI Risk Management Framework (AI RMF 1.0) Launch
The National Institute of Standards and Technology (NIST) will be releasing its AI Risk Management Framework (RMF) at a launch event on January 26th. Government officials as well as business leaders will be in attendance to discuss the framework, which will lay out guidelines for managing risks from AI systems. The guidelines are not binding, but they may influence deployments in industry and government, and possibly regulation down the line. The most recent draft of the RMF is here. One open question is to what extent the RMF will address catastrophic risks, such as those detailed in this paper (disclosure: I work part time for one of the paper’s authors).
EU AI Act Implementation Remains Unclear
Hadrien Pouget in Lawfare (January 12th): The EU’s AI Act Is Barreling Toward AI Standards That Do Not Exist
The EU AI Act imposes several requirements on AI developers, but leaves those requirements vague. It is up to EU officials to set the standards that determine how, exactly, the requirements should be implemented. However, some requirements, such as those for robustness and transparency, may be infeasible to apply to current deep learning systems. This piece lays out the issue and asks whether it is the law or the technology that will need to give.
US representative calls for a new agency to regulate AI
Ted Lieu in The New York Times (January 23rd): I’m a Congressman Who Codes. A.I. Freaks Me Out
Ted Lieu (D-CA) recently called for a “dedicated agency to regulate A.I.,” arguing it will be too difficult for Congress to adequately keep up with every AI advancement. He likens such a potential agency to the Food and Drug Administration. He doesn’t expect such an agency to immediately pass Congress, so his immediate aim is to establish a commission to look into the possibility of creating one.
AI comes with risks unlike any other technology, so it’s not unreasonable to call for an agency. However, any plans to do so will need to answer the question of how to give the agency the talent, resources, and authority it would need to adequately keep up with the breathtaking speed of AI advancements.
AI in popular media
Newspaper Calls ChatGPT a “morally corrupting influence”
Thomas Claburn in The Register (January 20th): OpenAI's ChatGPT is a morally corrupting influence
This article takes a very simple and (in my view) not particularly well-researched paper and uses it to argue that people can be influenced by ChatGPT to express different moral opinions. While the headline is a little outrageous, it stems from a true observation: people really can change their views because of outputs from language models. Sometimes the tabloids are half right.
Researcher argues we need AI systems to be conscious
Michael S.A. Graziano in The Wall Street Journal (January 13th): Without Consciousness, AIs Will Be Sociopaths
A consciousness researcher argues that we will need AI systems to be conscious in order for them to avoid being “sociopaths” and harming humans. In this case, consciousness is a red herring. AI could avoid harming humans even if it is not conscious, for example by being consistently given strong incentives not to do so (this is a very difficult problem, but it doesn’t involve consciousness). AI could also harm people (and indeed, lack all empathy) even if it is conscious. Ironically, sociopaths are a relevant example of this.
M3gan [warning: spoilers]
Everyone has been talking about M3gan, a recent top-grossing horror movie about an AI doll that turns murderous in an attempt to protect her “primary user,” a girl named Cady. The movie is of course wildly unrealistic: an artificial general intelligence is suddenly developed by a toy company where employees have seemingly never once watched a single movie about AI. Still, some of its motifs, such as cutting corners on testing in favor of earlier launches (ahem), large unforeseen risks from new AI systems, and instrumental power-seeking behavior, are real possibilities, albeit almost certainly not in doll form. This post goes into more detail.
Extra: Miami Residents Resist Protective Seawall For Aesthetic Reasons
Geoff Dembicki in The Guardian (January 13th): Coastal residents fear ‘hideous’ seawalls will block waterfront views
Recently, residents of Miami have objected to the construction of a seawall designed to protect the city from catastrophic flooding. Why? Because it’s too ugly, and residents fear it would “destroy the soul of the city.” There is a legitimate place for aesthetics in any urban planning, so residents are right to hope for a more beautiful seawall. But if it’s true that this delay will “potentially [result] in nothing being built until the 2030s,” it increases the risk that the city itself, and not just its soul, will be destroyed. As climate change continues, we would do well to weigh catastrophic risks more heavily than Miami seems to be doing. We should do the same with AI.