Hey folks, welcome to TechCrunch’s regular AI newsletter. If you want it in your inbox every Wednesday, sign up here.
This week was something of a swan song for the Biden administration.
On Monday, the White House announced sweeping new restrictions on the export of AI chips, restrictions that tech giants including Nvidia have strongly criticized. (Nvidia’s business would be seriously harmed if the restrictions went into effect as proposed.) Then, on Tuesday, the administration issued an executive order that opened federal land to AI data centers.
But the obvious question is: will these initiatives have a lasting impact? Will Trump, who takes office on January 20, simply rescind them? So far, Trump has not signaled his intentions, but he certainly has the power to undo Biden’s latest AI-related actions.
Biden’s export rules are expected to take effect after a 120-day comment period, and the Trump administration will have wide latitude in deciding how to implement the measures and whether to modify them.
As for the executive order on federal land use, Trump could simply repeal it. Former PayPal COO David Sacks, Trump’s AI and cryptocurrency “czar,” recently pledged to revoke another AI-related Biden executive order, one that sets AI safety standards.
However, there is reason to believe that the new administration may not shake things up too much.
Along the lines of Biden’s move to free up federal resources for data centers, Trump recently promised accelerated permits for companies investing at least $1 billion in the United States. He also chose Lee Zeldin, who has promised to cut regulations he sees as burdensome to businesses, to lead the EPA.
Parts of Biden’s export rules may also remain in place. Many of the regulations target China, and Trump has made no secret that he sees China as the United States’ biggest rival in artificial intelligence.
One open question is whether Israel will remain on the list of countries subject to limits on trade in AI hardware. As recently as October, Trump described himself as a “protector” of Israel and signaled that he would likely be more permissive of Israeli military actions in the region.
In any case, we will have a clearer picture within the week.
News
ChatGPT, remind me…: Paid users of OpenAI’s ChatGPT can now ask the AI assistant to schedule reminders and recurring requests. The new beta feature, called Tasks, is rolling out to ChatGPT Plus, Team, and Pro users worldwide this week.
Meta vs OpenAI: Executives and researchers leading Meta’s artificial intelligence efforts were obsessed with beating OpenAI’s GPT-4 model as they developed Meta’s Llama 3 family of models, according to messages made public by a court on Tuesday.
The OpenAI board grows: OpenAI has appointed Adebayo “Bayo” Ogunlesi, an executive at investment firm BlackRock, to its board of directors. The company’s current board bears little resemblance to OpenAI’s board in late 2023, whose members fired CEO Sam Altman only to reinstate him days later.
Blaize goes public: Blaize is set to become the first AI chip startup to go public in 2025. Founded in 2011 by former Intel engineers, the company has raised $335 million from investors, including Samsung, for its chips designed for cameras, drones, and other edge devices.
A “reasoning” model that thinks in Chinese: OpenAI’s o1 reasoning model sometimes “thinks” in languages like Chinese, French, Hindi, and Thai, even when asked a question in English, and no one really knows why.
Research Paper of the Week
A recent study co-authored by Dan Hendrycks, an advisor to xAI, billionaire Elon Musk’s artificial intelligence company, suggests that many AI safety benchmarks correlate with the capabilities of the underlying AI systems. That is, as a system’s overall performance improves, it “scores better” on the benchmarks, making the model appear “safer.”
“Our analysis reveals that many AI safety benchmarks – about half – often inadvertently capture latent factors closely tied to overall capabilities and raw training compute,” the researchers behind the study write. “Overall, it is difficult to avoid measuring upstream model capabilities in AI safety benchmarks.”
In the study, the researchers propose what they describe as an empirical basis for developing “more meaningful” safety metrics, which they hope will “[advance] the science” of safety assessments in AI.
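To make the paper’s core claim concrete, here is a minimal, hypothetical sketch of the kind of check involved: if a safety benchmark’s scores track a general capability score almost perfectly across a set of models, the benchmark is largely restating capability rather than measuring something distinct. All numbers below are invented purely for illustration.

```python
import numpy as np

# Hypothetical scores for six models. "capability" is a general
# performance aggregate; "safety" is a score on some safety benchmark.
# These values are made up purely to illustrate the check.
capability = np.array([42.0, 55.0, 61.0, 70.0, 78.0, 85.0])
safety = np.array([38.0, 50.0, 58.0, 66.0, 75.0, 83.0])

# A Pearson correlation near 1.0 suggests the "safety" benchmark mostly
# captures the same latent factor as raw capability.
r = np.corrcoef(capability, safety)[0, 1]
print(f"capability-safety correlation: r = {r:.2f}")
```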
Model of the Week
In a technical paper published Tuesday, Japanese AI company Sakana AI detailed Transformer² (“Transformer-squared”), a system that dynamically adapts to new tasks.
Transformer² first analyzes a task, such as writing code, to understand its requirements. It then applies “task-specific adaptations” and optimizations to tailor the model to that task.
Sakana says the methods behind Transformer² can be applied to open models like Meta’s Llama, and that they offer “a glimpse into a future where AI models are no longer static.”
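As a rough, toy illustration of that two-pass idea (this is not Sakana’s code; the matrix sizes, expert vectors, and task names below are all hypothetical), one way to adapt a frozen weight matrix at inference time is to rescale its singular values with a small task-specific vector:

```python
import numpy as np

rng = np.random.default_rng(0)

# A frozen base weight matrix, decomposed once, offline.
W = rng.normal(size=(8, 8))
U, S, Vt = np.linalg.svd(W)

# One small "expert" vector per task; each rescales the singular values.
# In a real system these would be learned, not sampled at random.
experts = {
    "code": rng.normal(1.0, 0.1, size=S.shape),
    "math": rng.normal(1.0, 0.1, size=S.shape),
}

def adapt(task: str) -> np.ndarray:
    """Pass 2: build a task-adapted weight matrix from the chosen expert."""
    return U @ np.diag(S * experts[task]) @ Vt

# Pass 1 would analyze the incoming prompt to pick the task; here the
# result of that analysis is simply hard-coded.
W_code = adapt("code")
print(f"adapted weights for 'code', shape {W_code.shape}")
```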
Grab bag
A small team of developers has released an open alternative to AI-powered search engines like Perplexity and OpenAI’s SearchGPT.
Called PrAIvateSearch, the project is available on GitHub under an MIT license, meaning it can be used largely without restriction. It is powered by freely available AI models and services, including Alibaba’s Qwen family of models and the DuckDuckGo search engine.
The PrAIvateSearch team says its goal is to “implement functionality similar to SearchGPT,” but in an “open source, local, and private way.” For tips on how to get this up and running, check out the team’s latest blog post.
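The general pattern behind tools like this is simple: fetch web results locally, then have a locally run model answer from them. Below is a minimal sketch of that pattern, not PrAIvateSearch’s actual code; the model checkpoint, libraries, and prompt format are all illustrative assumptions.

```python
from duckduckgo_search import DDGS
from transformers import pipeline

query = "what is retrieval-augmented generation?"

# 1. Retrieve a handful of DuckDuckGo results (title + snippet).
results = DDGS().text(query, max_results=5)
context = "\n".join(f"- {r['title']}: {r['body']}" for r in results)

# 2. Ask a locally run Qwen model to answer using only that context.
# The checkpoint name here is an illustrative choice, not necessarily
# the one the project uses.
generate = pipeline("text-generation", model="Qwen/Qwen2.5-1.5B-Instruct")
prompt = (
    "Answer the question using only the search results below.\n\n"
    f"Search results:\n{context}\n\n"
    f"Question: {query}\nAnswer:"
)
print(generate(prompt, max_new_tokens=200)[0]["generated_text"])
```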