A Battle for AI in Search Brews
Meanwhile, OpenAI's CEO warns that the worst case for AI would be "lights out for all of us"
Some recent news since my last newsletter. It’s been a crazy few weeks!
Microsoft Invests in OpenAI
Microsoft (January 23rd): Microsoft and OpenAI Extend Partnership
Microsoft announced that it was making a "multi-billion dollar" investment in OpenAI. The deal is reportedly worth $10 billion and includes computational resources that will be essential for OpenAI to train and deploy large language models like GPT-3. In exchange, Microsoft will reportedly receive a 49% stake in OpenAI, as well as 75% of OpenAI's profits until it recoups its investment.
Google Invests in Anthropic
James Vincent in The Verge (February 3rd): Google invested $300 million in AI firm founded by former OpenAI researchers
Only a couple of weeks later, it was reported that Google had previously invested $300 million in Anthropic, an AI company focused on language models that bills itself as more safety-focused than its competitors. In exchange, Google will receive a 10% stake in the company and the right to be Anthropic's exclusive cloud computing provider. The latter is quite important: much of that $300 million may flow directly back to Google as payments for computational resources. As these deals show, it's increasingly necessary for AI companies to seek investments from large compute providers, and those providers are eager to oblige.
Microsoft integrates OpenAI model into Bing
Yusuf Medhi on the Microsoft blog (February 7th): Reinventing search with a new AI-powered Microsoft Bing and Edge, your copilot for the web
In addition to regular search results, Bing will now show a language-model-generated response in a pane on the right, drawing on several different sources. The service is currently being tested, but it will be released to the public soon.
Google integrates “Bard” model into Google Search
Sundar Pichai on the Google blog (February 6th): An important next step on our AI journey
Not to be outdone, Google is also integrating a language model, called Bard, into its search service. In perhaps the biggest change to Google Search since its launch, Bard will show a summary of results at the very top of the results page. Unlike Microsoft's product, Bard doesn't appear to cite its sources at all, leading some to criticize it for effectively appropriating human-written content.
It’s what it looks like: Google and Microsoft are rushing to deploy AI
Several press commentaries have been written about the brewing competition. While Google and Microsoft certainly have a lot to gain from making search results more accessible, it seems likely to me that both companies are going to make mistakes in these rushed deployments. Google, for example, lost more than $100 billion in market value after an advertisement showed Bard making a factual error. So much for the green lane. ChatGPT has struggled with the same problems, however, so neither Microsoft nor Google is out of the woods yet.
The Lawyers Strike Back
Bobby Allyn for NPR (January 25th): A robot was scheduled to argue in court, then came the jail threats
Joshua Browder, the CEO of DoNotPay, a legal assistance company, recently caused a stir when he announced that the company’s legal AI would whisper into the ear of a defendant in a traffic ticket case, advising them as a lawyer might. Multiple state bars sent him letters demanding he stop the unauthorized practice of law, and one reportedly threatened prosecution. As a result, the stunt was canceled. Leah Wilson of the California State Bar said, “In 2023, we are seeing well-funded, unregulated providers rushing into the market for low-cost legal representation, raising questions again about whether and how these services should be regulated.” As long as language models are prone to making things up, I suggest not using them under penalty of perjury.
NIST RMF Released
NIST (January 26th): AI Risk Management Framework
The National Institute of Standards and Technology recently released its AI Risk Management Framework. The framework is not binding, but it may be adopted as policy in other parts of the government or influence potential future regulations. It could also influence insurance companies or corporate AI standards. A particularly interesting section says:
In cases where an AI system presents unacceptable negative risk levels – such as where significant negative impacts are imminent, severe harms are actually occurring, or catastrophic risks are present – development and deployment should cease in a safe manner until risks can be sufficiently managed.
A good explainer of the framework can be found here.
OpenAI CEO says AI could be “unbelievably good” but that the worst case is “lights out for all of us”
Connie Loizos (January 17th): StrictlyVC in conversation with Sam Altman, part two (OpenAI)
Sam Altman, the CEO of OpenAI, gave his thoughts on the future of AI. He said that “the best case is so unbelievably good that it’s hard to-- it’s like hard for me to even imagine” but that “the bad case -- and I think this is important to say -- is like lights out for all of us.” At the same time, he sent mixed messages about his views on AI safety, saying that while “it’s like impossible to overstate the importance of AI safety and alignment work,” “all of the traditional AI safety thinkers reveal a lot more about themselves than they mean to when they talk about what they think the AGI is going to be like.”
Recently, Altman also went to Washington to speak with policymakers.
US Representative urges preparation for advanced AI
Ted Lieu on Twitter (January 26th)
Responding to a tweet describing the interview above, Representative Ted Lieu (D-CA) said that we need to “prepare for the dramatic consequences of artificial general intelligence.” Lieu also introduced the first Congressional resolution written by AI, and previously called for a regulatory agency focused on AI.
Australian member of parliament warns AI could cause “significant harm to humanity”
Paul Karp for the Guardian (February 6th): MP tells Australia’s parliament AI could be used for ‘mass destruction’ in speech part-written by ChatGPT
Julian Hill, in the human-written parts of his speech, said that AI "could cause significant harm to humanity if its goals and motivations are not aligned with our own" and that "the risk that increasingly worries people far cleverer than me is the unlikelihood that humans will be able to control AGI, or that a malevolent actor may harness AI for mass destruction." Hill said that action on this issue is "urgent" and pointed to climate change as an example of an issue that would have gone better with earlier action.
ChatGPT continues to make headlines
OpenAI is now charging $20 per month for "ChatGPT Plus," a version of the service that responds more quickly and gets new features sooner. ChatGPT is reported to be the fastest consumer product ever to reach 100 million users, so if even 1% of those users purchased the premium tier, it would generate hundreds of millions of dollars in annual revenue for OpenAI. Meanwhile, people continue to find ways to bypass ChatGPT's content filters.
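As a rough back-of-envelope check (assuming the reported 100 million users and the $20/month price both hold): 100,000,000 users × 1% × $20/month × 12 months ≈ $240 million per year.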
National AI Research Resource Final Report sent to Congress
Strengthening and Democratizing the U.S. Artificial Intelligence Innovation Ecosystem
In 2020, Congress established the National AI Research Resource (NAIRR) Task Force to look into the possibility of setting up a government-administered cloud that could grant computational resources to academia, government, and other actors that have struggled to keep up with industry investments. The Task Force has now sent its final report to Congress, recommending the creation of such a resource. Congress must now decide whether to authorize (approve) and appropriate (fund) it. The report calls for the NAIRR to establish governance protocols for responsible AI development as part of its program, but it doesn't give many specifics. The report estimates the program's total budget at $2.6 billion over six years, though Congress could decide to approve more or less.
The report's timeline suggests the resource would not begin operating until 1.75 years after initial authorization and would not reach full capacity until 2.75 years after. As AI research goes, that's a long time.