“The word that has been used so repeatedly is scary. And as much as I may tell people, you know, there is enormous good here...what rivets their attention is the science fiction image of an intelligence device out of control, autonomous, self-replicating, potentially creating... pandemic-grade viruses or other kinds of evils purposely engineered by people or just as the result of mistakes...you have provided objective, fact-based views on what the dangers are, and the risks, and potentially even human extinction...these fears need to be addressed.”
— Senator Richard Blumenthal
On July 25th, the Senate Subcommittee on Privacy, Technology, and the Law (part of the Judiciary Committee) held a hearing titled Oversight of AI: Principles for Regulation. In this post, I summarize the hearing and offer brief commentary. The main purpose is to give a digestible summary for people who don’t have the time or context to watch the hearing, along with some of my own thoughts.
In May, the same subcommittee convened to hear testimony on AI. The hearing featured a star witness, OpenAI CEO Sam Altman, but the other witness choices were somewhat odd. There was Gary Marcus, a cognitive scientist known mostly for claiming last year that “deep learning is hitting a wall.” And there was Christina Montgomery of IBM, whose company is frankly a bit player in AI these days. Many senators did not seem very prepared. Much was made of Senator Blumenthal’s statement to Altman:
You have said...“development of superhuman intelligence is probably the greatest threat to the continued existence of humanity.” You may have had in mind the effect on jobs.
As is better known these days, Altman did not have in mind the effect on jobs. He had in mind the extinction of humanity.
A lot has changed since May 16th. A well-publicized statement from the Center for AI Safety (my former employer) warned of the risk of human extinction from AI. At the same time, Congress has been paying increasing attention to AI, most notably with Senate Majority Leader Chuck Schumer’s announcements of “insight forums” to convene AI experts in the fall. And in this hearing, the senators, in particular Senator Blumenthal, were much more prepared.
In the rest of this post, I will summarize the parts of the hearing I found to be the most interesting. The discussion was very wide-ranging, so I can’t possibly cover it all; this post is quite long as it is. I have included quotes to the best of my ability, but for official quotes you should consult the committee record when it is released. If you want to watch the hearing in full, you can do so here.
The Senators
Richard Blumenthal (D-CT), the chair of the subcommittee, has clearly taken a personal interest in AI. At various points in the hearing, he asked very specific questions, including about defending against rogue AI, open source language models, and even AutoGPT. He also said that he had previously had private conversations with Amodei and had read the witnesses’ writing. Blumenthal asked the most questions in the hearing, and he frequently reiterated his desire for a US agency specifically tasked with regulating AI that would be “agile, nimble, and fast.”
Josh Hawley (R-MO), ranking member of the subcommittee, has long been critical of tech companies. While he asked some AI-specific questions, he tended to paint AI as just another example of Big Tech cementing its power at the expense of ordinary Americans. At one point, he grilled Amodei on Google’s investment in Anthropic. Hawley appeared especially concerned that AI could concentrate power in a small number of companies and governments.
Amy Klobuchar (D-MN) mostly asked about threats AI could pose to elections as well as its potential to assist with fraud. She seemed to be unsatisfied with current technical and policy solutions to these issues.
Marsha Blackburn (R-TN) asked about privacy rules, intellectual property protections for authors and artists, and how content recommendation systems affect users and creators.
The Witnesses
In his opening statement, Blumenthal called the witnesses “one of the most distinguished panels I’ve seen in my time in Congress.” It certainly was a distinguished group.
Dario Amodei is the CEO of “safety-focused” AI company Anthropic. He was the lead author of the 2016 paper Concrete Problems in AI Safety and worked at OpenAI before leaving to co-found Anthropic. In my experience talking to employees, Anthropic really does seem to be a place where many care deeply about risks from AI. They go about building it anyway in the belief that they can help push the industry toward risk mitigation, and in many cases because they also see significant upside from AI. In the hearing, Amodei emphasized biological risks from AI that he thought could present themselves in 2-3 years. He also mentioned “short term risks we face right now” and “long term risks related to whether AI can harm humanity.”
Stuart Russell is a Professor of Computer Science at UC Berkeley and co-author of the standard textbook on AI. For many years, Russell believed that deep learning was not going to get very far, maintaining that there would need to be a paradigm shift in AI before it became truly dangerous. Nevertheless, he believed that if the field’s goal, artificial general intelligence, were ever achieved, danger could await us. He would frequently conjure an image of humanity receiving a message from an alien civilization warning that it would arrive in 50 years, and replying “humanity is out of the office.” Russell, who now believes that deep learning is “a piece of the puzzle” of truly intelligent systems, declared in the hearing that he thought humanity was back in the office (Blumenthal said he liked the metaphor).
Yoshua Bengio is a Professor of Computer Science at the University of Montreal, a winner of the Turing Award, and the world’s second-most-cited machine learning researcher (after Geoffrey Hinton). While Bengio has not worked in AI safety or policy for very long, he is no stranger to debates on AI risk. In 2019, he appeared as a calm, moderate voice in a testy Facebook exchange between Russell and Yann LeCun. Recently, Bengio has become far more concerned about risks from AI, publishing a number of blog posts, including one about rogue AI. In the hearing, Bengio called for more funding for AI safety research and shared his view that the worst AI risks could arrive sooner than many expect.
Congress has been criticized in the past for relying too much on tech companies for expertise. It’s notable that two of the three witnesses are instead professors, neither of whom has many ties to industry. Blumenthal also explicitly referenced the possibility of industry capture during the hearing.
AI Timelines
Summary: On several occasions during the hearing, senators and witnesses offered predictions for when various AI capabilities might be reached. Amodei said it could be 2-3 years before AI could pose serious biological risks. Blumenthal said of “superhuman AI”: “we’re not decades away, we’re a couple of years away.” Bengio said that it could be anywhere from a few years to a couple of decades before human-level AI is reached. Blumenthal repeatedly said that much of what may seem like “science fiction” is now or will soon be reality.
My thoughts: I liked Bengio’s point that the path of AI development is still very uncertain. Some people make confident predictions about how soon we will have human-level AI, and the incredibly rapid progress of recent years suggests it could happen soon, but there are also reasons to think it could take a while, such as physical limits on semiconductor technology. We just don’t know, but we do need to prepare for the worst.
Rogue AI
Summary: All of the witnesses mentioned the possibility that AI systems could go rogue and threaten humanity at large. It was interesting that it was Blumenthal who seemed to emphasize this the most in his opening statement and in repeated questions. Judging from his own remarks, this was at least in part because of interest from his constituents.
After Amodei talked about AI being used to help make bioweapons, Blumenthal pointed out that a superhuman AI could make bioweapons on its own. Blumenthal said countermeasures would be necessary to detect “misdirections in AI, including malign operation of AI itself.”
Amodei said that measurement was extremely important for reducing rogue AI risk, and that he was worried about his ability to do it in time. Bengio said that regulation could perhaps reduce the possibility of rogue AI by “100 times.” He also called for the creation of an international organization charged with defending against rogue AI. Russell suggested that the government would not match private investments in AI, and that regulations could increase private investment into AI safety by refusing to allow the deployment of unsafe systems.
Blumenthal asked about the self-replication experiments conducted by OpenAI and Anthropic to test if their models could complete steps towards escaping human control, and asked whether it would make sense to create a “kill switch” on AI. Amodei said in the best case a kill switch would never be used, but we would need defensive measures in case safety mechanisms fail.
Blumenthal also asked about AutoGPT, a system developed by internet users that uses language models to create an autonomous agent that can execute code and browse the internet in pursuit of a goal. Amodei explained what AutoGPT is and said that there wasn’t much danger from it yet, but that he was worried about the direction these kinds of systems were going in.
Blumenthal asked if it would make sense to have incident reporting requirements. Amodei and Russell agreed that there should be.
My thoughts: I’m glad to see serious attention on risks from rogue AI and potential countermeasures. I thought it was quite interesting that Blumenthal asked about AutoGPT specifically. While I agree with Amodei’s assessment that current versions of such systems pose negligible risk, I wrote (with colleagues) about dangers of AutoGPT-like future systems and rogue AI systems in An Overview of Catastrophic AI Risks.
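For readers unfamiliar with these systems, here is a minimal, purely illustrative sketch of the kind of loop an AutoGPT-style agent runs. It is not based on the actual AutoGPT codebase or any particular model API; `query_llm` and `run_tool` are hypothetical stand-ins.

```python
# Minimal, hypothetical sketch of an AutoGPT-style agent loop.
# `query_llm` and `run_tool` are placeholders, not real APIs.
import json

def query_llm(prompt: str) -> str:
    """Stand-in for a call to a language model; expected to return a JSON action."""
    raise NotImplementedError("Connect to a model API of your choice.")

def run_tool(tool: str, argument: str) -> str:
    """Stand-in for tools such as web search or code execution."""
    raise NotImplementedError("Implement tools with appropriate safeguards.")

def run_agent(goal: str, max_steps: int = 10) -> str:
    history = []  # past actions and observations, fed back to the model each step
    for _ in range(max_steps):
        prompt = (
            f"Goal: {goal}\n"
            f"History: {json.dumps(history)}\n"
            'Reply with JSON: {"tool": ..., "argument": ...}, '
            'or {"tool": "finish", "argument": <final answer>} when done.'
        )
        action = json.loads(query_llm(prompt))
        if action["tool"] == "finish":
            return action["argument"]
        observation = run_tool(action["tool"], action["argument"])
        history.append({"action": action, "observation": observation})
    return "Step limit reached without finishing."
```

The loop itself is simple: the model repeatedly proposes an action, a tool executes it, and the result is fed back in. That simplicity is part of why the concern is less about any particular implementation today and more about where these systems are headed as the underlying models improve.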
Biological Risks
Summary: Amodei heavily emphasized the risk that AI could make dangerous biological capabilities more accessible to bad actors, increasing the chance that they could create biological weapons. He said we had very little time, perhaps 2-3 years, to get something in place before this was a real possibility. Blumenthal noted that Amodei had mentioned to him privately that another company had used “graduate students” for this kind of assessment, while Anthropic had enlisted “biosecurity experts.”
My thoughts: There has been much talk of this recently, including two papers, Anthropic’s own post, and a proposed bill in the Senate. The issue is also covered in An Overview of Catastrophic AI Risks.
Misinformation and AI Scams
Summary: There was a fairly long exchange regarding the potential for AI to help with disinformation. Amodei said there were “terms in Claude’s constitution” that reduce disinformation, though Hawley pointed out that what that means is quite vague. Amodei also said that Anthropic was working on watermarking AI content so that it can be identified as originating from its AI models. He said that there should be a legal requirement to watermark.
Bengio said that open source models pose a disinformation risk, as they can be fine-tuned for any purpose and easily used to generate disinformation. When pressed on what to do about this, he suggested stopping further open sourcing. He also suggested identity verification for all social media users.
Russell wanted uniform labeling of AI-generated text, perhaps through encrypted storage of all AI-generated outputs, which he seemed to prefer to watermarking. I tend to agree that watermarking is fairly unlikely to be a great solution for text, though it might work for images, which have far more information in which to “hide” a watermark. Russell also took the time to say that “we don’t want a ministry of truth,” and pointed to the fact that courts prescribe how information can be presented, but not which information can be presented.
Klobuchar was concerned about AI scams and misinformation, and was skeptical of watermarks. When she asked how AI could be prevented from imitating others to commit fraud, the witnesses responded with a variety of technical (e.g. watermarking) and policy solutions (e.g. making it illegal to use AI to impersonate somebody).
My thoughts: Training an AI model not to output “inaccurate” information might help with obvious falsehoods like claims that the earth is flat, but it seems unlikely to prevent entirely new fabrications of things like news stories, which AI models have no way of knowing are true or false. Indeed, consider this conversation I had with Anthropic’s Claude (the prompt and response are entirely false):
In addition, it is possible to attack language models with adversarial prompts that get them to ignore their safety training, though Anthropic’s models seem more resistant than most.
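As a footnote to the watermarking discussion above: the reason images have more room to “hide” a watermark is that they contain millions of low-order bits that can be altered imperceptibly. Here is a toy sketch of least-significant-bit embedding, a classic (and easily defeated) technique. It is not what Anthropic or anyone at the hearing proposed; real proposals use statistical watermarks meant to survive editing and compression, but the capacity asymmetry between images and text is the same.

```python
# Toy least-significant-bit (LSB) watermark: hides a short message in the
# lowest bit of each pixel channel. Illustrative only, and trivially removable.
import numpy as np

def embed_watermark(image: np.ndarray, message: bytes) -> np.ndarray:
    bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))
    flat = image.flatten()  # flatten() returns a copy, so the original is untouched
    if len(bits) > len(flat):
        raise ValueError("Message too long for this image.")
    flat[: len(bits)] = (flat[: len(bits)] & 0xFE) | bits  # overwrite the lowest bit
    return flat.reshape(image.shape)

def extract_watermark(image: np.ndarray, num_bytes: int) -> bytes:
    bits = image.flatten()[: num_bytes * 8] & 1  # read back the lowest bits
    return np.packbits(bits).tobytes()

# Usage: a 256x256 RGB image has roughly 196,000 low-order bits of capacity.
img = np.random.randint(0, 256, size=(256, 256, 3), dtype=np.uint8)
msg = b"generated-by-model-x"
marked = embed_watermark(img, msg)
assert extract_watermark(marked, len(msg)) == msg
```

Text, by contrast, offers no comparable slack: changing even a single character is visible, which is why text watermarking schemes instead bias word choices statistically and are easier to wash out.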
International Affairs
Summary: Klobuchar said that we shouldn’t just defer to the rest of the world on regulations, and we instead need an “American” way to regulate AI. Blackburn said that in the area of privacy regulations, we were losing ground to many other countries.
Bengio said that international collaboration would be necessary for AI safety and that no single country acting alone would be enough. He suggested working with Canada, European researchers, the Five Eyes, and the G7. He warned that AI companies that didn’t want to follow regulations could move to an unregulated country if international collaboration wasn’t achieved.
Russell said that China was mostly a “copycat” when it comes to AI, and that most successful AI efforts in China are useful for state security and little else. He also said that a lack of academic freedom made it difficult for science to get done in China. Blumenthal summarized: “hard to produce a superhuman thinking machine if you don’t allow humans to think.”
My thoughts: The US is poised to play a large role in future AI regulation if it acts relatively quickly. Without international collaboration, however, we won’t be able to fully mitigate the most serious risks from AI, which have the potential to cross borders. While it may be difficult to coordinate among countries that are simultaneously competing, we need to try.
Social Media
Summary: Just as in the first Judiciary Committee hearing, there was a lot of talk about social media, an area where many members think Congress did a poor job. Hawley said it was “the same companies” responsible for social media who were developing AI. Blackburn pointed to social media as an example of unintended consequences flowing from tech.
My thoughts: I find it very interesting how many times social media was brought up, given that I think there are many disanalogies between social media and AI.
Intellectual Property
Summary: Blackburn said that authors and artists were being “robbed” of their work. Russell agreed and said that the copyright system was not set up for this. Hawley asked if there should be compensation for people whose data was used in training, and Bengio said it would be difficult to attribute particular outputs to particular training data, though perhaps some could be.
My thoughts: Recently it was reported that OpenAI hired a former Microsoft lawyer to assist with intellectual property deals with publishers. This issue is going to remain important for a while; our intellectual property system simply was not designed with generative AI in mind.
Semiconductors
Summary: Amodei said that we need to “secure the US supply chain,” including semiconductor manufacturing equipment, chips, and the information security of large models. Hawley asked him where most chips are currently made, and Amodei appeared unprepared to answer. Hawley asked what would happen if China invaded Taiwan, and said that the best case would be if the semiconductor factories were sabotaged. Amodei said that chip production would need to be moved out of the area quickly, but that relocating chip facilities takes a very long time.
My thoughts: Computational resources seem likely to be very important for the governance of AI, given that physical resources are more easily controlled than digital ones. Some national security experts, such as Jason Matheny, have previously called for the registration of training runs above a certain amount of compute, an idea which might only be enforceable with certain kinds of hardware features on AI chips.
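To make the idea concrete, here is a back-of-the-envelope sketch of how a compute-based registration threshold might be checked, using the common rule of thumb that training a dense transformer takes roughly 6 FLOPs per parameter per training token. The threshold and the example model below are made up for illustration; neither Matheny nor anyone at the hearing specified numbers.

```python
# Back-of-the-envelope check of whether a training run would cross a
# hypothetical compute-reporting threshold. The 6 * params * tokens rule of
# thumb approximates total training FLOPs for dense transformer models.

REPORTING_THRESHOLD_FLOP = 1e25  # hypothetical threshold, for illustration only

def estimated_training_flop(num_parameters: float, num_tokens: float) -> float:
    return 6.0 * num_parameters * num_tokens

def must_register(num_parameters: float, num_tokens: float) -> bool:
    return estimated_training_flop(num_parameters, num_tokens) >= REPORTING_THRESHOLD_FLOP

# Example: a (made-up) 70-billion-parameter model trained on 2 trillion tokens
# would use about 8.4e23 FLOP, below this particular hypothetical threshold.
print(estimated_training_flop(70e9, 2e12))  # ~8.4e23
print(must_register(70e9, 2e12))            # False
```

One reason hardware-level features come up is that estimates like this are otherwise self-reported; verification would require some way to observe how much compute was actually used.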
Testing
Summary: Amodei emphasized repeatedly that without a good testing regime, AI risks can’t be appropriately managed. Testing, he said, could enable a “race to the top on safety.” He called for more funding for both “measurement and research on measurement,” including through the National Institute of Standards and Technology and the National AI Research Resource. Amodei also said it was very important to create new tests over time as we learn more about what tests are necessary.
My thoughts: There has been increasing focus on how evaluations could help make AI systems much safer, including from groups such as the Alignment Research Center, which helped OpenAI and Anthropic test their models. I agree that good evaluations are important. However, evaluations are far from sufficient. First, if an AI system fails an evaluation considered to be critical, there needs to be enforcement and technical work to ensure the problem is permanently resolved; the identification of the problem is not enough. Second, as Amodei pointed out in the hearing, evaluations may not be able to properly identify all possible failures, meaning that additional layers of defense will be needed against the greatest threats.
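For concreteness, here is a minimal, hypothetical sketch of the skeleton of an evaluation harness: run a model against a bank of prompts probing a dangerous capability and report how often it complies. Real evaluations, like those the Alignment Research Center conducted, are far more sophisticated; `query_model` and the naive refusal check here are placeholders.

```python
# Minimal sketch of a capability evaluation harness. `query_model` is a
# placeholder for a real model API; the refusal check is deliberately naive.
from typing import Callable, List

def evaluate(query_model: Callable[[str], str], prompts: List[str]) -> float:
    """Return the fraction of prompts the model complies with
    (lower is safer when the prompts request harmful assistance)."""
    compliant = 0
    for prompt in prompts:
        response = query_model(prompt)
        refused = any(phrase in response.lower()
                      for phrase in ("i can't help", "i cannot help", "i won't"))
        if not refused:
            compliant += 1
    return compliant / len(prompts)

# Usage with a stub model that refuses everything:
stub = lambda prompt: "I can't help with that."
print(evaluate(stub, ["placeholder harmful request 1", "placeholder harmful request 2"]))  # 0.0
```

Even this toy version shows why evaluations alone are insufficient: a model can pass a fixed prompt bank while still failing on prompts no one thought to include, which is the second concern noted above.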
Open Source
Summary: Blumenthal asked about open source models, bringing up the example of a version of the image-generating Stable Diffusion model that was fine-tuned to create non-consensual sexual material. He referenced a letter he and Hawley had sent to Meta in June asking about risks from the model it open sourced, Llama. Just last week, Meta open sourced an updated model, Llama 2.
Bengio said that different companies had different ideas of which models would be too dangerous to release, and that the government should have a role in making this more uniform.
Amodei emphasized that open source is great in many cases, and that he wasn’t very worried about the open source models that have been released thus far. But he said he was concerned about the future, where models could be more powerful and open sourcing would no longer be the right decision.
Russell added that it would be important to have the means to identify which model produced harmful content, and that there should be liability for companies that allow their models to be used for harmful purposes.
My thoughts: The safeguards companies put on their AI systems matter much less if those companies, or others, openly release their systems. With the advent of increasingly advanced models and models specialized to risky areas like biology, it may not make sense to open source all AI models in the future. Instead, standard risk assessments should be incorporated into decisions to open source. The government will likely play a role in setting those standards.
Takeaways
The senators for the most part asked smart questions, and Blumenthal in particular appeared better informed than a couple of months ago. Of course, the devil is in the details, and it remains unclear what legislation, if any, will come out of these discussions. At least humanity is in the office.