Hiya, folks, welcome to TechCrunch's regular AI newsletter. If you want this in your inbox every Wednesday, sign up here.
On Monday, Anthropic CEO Dario Amodei sat down for a five-hour podcast interview with AI influencer Lex Fridman. The two covered a range of topics, from timelines for superintelligence to progress on Anthropic's next flagship tech.
To spare you the download, we've pulled out the salient points.
Despite evidence to the contrary, Amodei believes that "scaling up" models is still a viable path toward more capable AI. By scaling up, Amodei clarified that he means increasing not only the amount of compute used to train models, but also models' sizes -- and the size of models' training sets.
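The "scaling" Amodei describes is often summarized in the research literature by Chinchilla-style power laws, where predicted loss falls as both parameter count and training tokens grow. A toy sketch (the constants are the published Chinchilla fit, used here purely for illustration; nothing below is Anthropic's method):

```python
# Toy Chinchilla-style scaling law: loss falls as a power law in both
# model parameters (N) and training tokens (D).
# Constants are the Hoffmann et al. fit, used only for illustration.
def predicted_loss(n_params: float, n_tokens: float,
                   e: float = 1.69, a: float = 406.4, b: float = 410.7,
                   alpha: float = 0.34, beta: float = 0.28) -> float:
    return e + a / n_params**alpha + b / n_tokens**beta

small = predicted_loss(1e9, 2e10)     # 1B params, 20B tokens
large = predicted_loss(7e10, 1.4e12)  # 70B params, 1.4T tokens
print(small > large)  # True: scaling both axes lowers predicted loss
```

The point of the formula is the one Amodei makes: increasing compute, model size, and data together keeps pushing loss down, even if nobody has a full theoretical account of why.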
"Probably, the scaling is going to continue, and there's some magic to it that we haven't really explained on a theoretical basis yet," Amodei said.
Unlike some experts, Amodei also doesn't think a shortage of data will present a challenge to AI development. Either by generating synthetic data or extrapolating from existing data, AI developers will "get around" data limitations, he says. (It remains to be seen whether the issues with synthetic data are resolvable, I'll note here.)
Amodei does acknowledge that AI compute is likely to become more costly in the near term, partly as a consequence of scaling. He expects companies will spend billions of dollars on clusters to train models next year, and that by 2027, they'll be spending hundreds of billions. (Indeed, OpenAI is rumored to be planning a $100 billion data center.)
And Amodei was candid about how even the best models are unpredictable in nature.
"It's just very hard to control the behavior of a model -- to steer the behavior of a model in all circumstances at once," he said. "There's this 'whack-a-mole' aspect, where you push on one thing and these other things start to move as well, that you may not even notice or measure."
Still, Amodei anticipates that Anthropic -- or a rival -- will create a "superintelligent" AI by 2026 or 2027 -- one exceeding "human-level" performance on a number of tasks. And he worries about the implications of this.
"We are rapidly running out of truly convincing blockers, truly compelling reasons why this will not happen in the next few years," he said. "I worry about economics and the concentration of power. That's actually what I worry about more -- the abuse of power."
Good thing, then, that he's in a position to do something about it.
An AI news app: Particle, an AI newsreader launched by former Twitter engineers, aims to help readers better understand the news.
Writer raises: Writer has raised $200 million at a $1.9 billion valuation to expand its enterprise-focused generative AI platform.
Build on Trainium: Amazon Web Services (AWS) has launched Build on Trainium, a new program that'll award $110 million to institutions, scientists, and students researching AI using AWS infrastructure.
Red Hat buys a startup: IBM's Red Hat is acquiring Neural Magic, a startup that optimizes AI models to run faster on commodity processors and GPUs.
Free Grok: X, formerly Twitter, is testing a free version of its AI chatbot, Grok.
AI for the Grammys: The Beatles' track "Now and Then," which was refined with the use of AI and released last year, has been nominated for two Grammy Awards.
Anthropic for defense: Anthropic is teaming up with data analytics firm Palantir and AWS to provide U.S. intelligence and defense agencies access to Anthropic's Claude family of AI models.
A new domain: OpenAI bought Chat.com, adding to its collection of high-profile domain names.
Google claims to have developed an improved AI model for flood forecasting.
The model, which builds on the company's previous work in this area, can predict flooding conditions accurately up to seven days in advance in dozens of countries. In theory, the model can give a flood forecast for anywhere on Earth, but Google notes that many regions lack historical data to validate against.
Google's offering a waitlist for API access to the model to disaster management and hydrology experts. It's also making forecasts from the model available through its Flood Hub platform.
"By making our forecasts available globally on Flood Hub ... we hope to contribute to the research community," the company writes in a blog post. "These data can be used by expert users and researchers to inform more studies and analysis into how floods impact communities around the world."
Rami Seid, an AI developer, has released a Minecraft-simulating model that can run on a single Nvidia RTX 4090.
Similar to AI startup Decart's recently released "open-world" model, Seid's, called Lucid v1, emulates Minecraft's game world in real time (or close to it). Weighing in at 1 billion parameters, Lucid v1 takes in keyboard and mouse movements and generates frames, simulating all the physics and graphics.
Lucid v1 suffers from the same limitations as other game-simulating models. The resolution is quite low, and it tends to quickly "forget" the level layout -- turn your character around and you'll see a rearranged scene.
But Seid and her partner, Ollin Boer Bohan, say they plan to continue developing the model, which is available for download and powers the online demo here.
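Mechanically, world models like this run an autoregressive loop: each new frame is predicted from a short window of recent frames plus the player's latest input, which is also why old level layout gets "forgotten." A hypothetical sketch of that loop (none of these names come from Lucid v1's actual code, and the stand-in model is a string stub, not a neural net):

```python
from collections import deque

# Hypothetical sketch of an action-conditioned world-model loop, not
# Lucid v1's real API. A real model would be a neural net emitting
# pixel frames; this stub just tags each step with the action taken.
class ToyWorldModel:
    def predict(self, frames: list, action: str) -> str:
        return f"frame[{len(frames)}|{action}]"

def run(model, actions, context: int = 4) -> list:
    # Fixed-size history: the model only ever sees `context` frames,
    # so anything older falls out of view -- the "forgetting" effect.
    history = deque(["frame[start]"], maxlen=context)
    out = []
    for act in actions:  # e.g. key presses or mouse deltas
        frame = model.predict(list(history), act)
        history.append(frame)
        out.append(frame)
    return out

frames = run(ToyWorldModel(), ["W", "W", "mouse:+10,0"])
print(frames)
```

The `maxlen` on the history deque is the design choice to notice: a short context keeps per-frame inference fast enough for real time on one GPU, at the cost of the scene-rearranging amnesia described above.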
DeepMind, Google's premier AI lab, has released the code for AlphaFold 3, its AI-powered protein prediction model.
AlphaFold 3 was announced six months ago, but DeepMind controversially withheld the code. Instead, it provided access via a web server that restricted the number and types of predictions scientists could make.
Critics saw the move as an effort to protect DeepMind's commercial interests at the expense of reproducibility. DeepMind spin-off Isomorphic Labs is applying AlphaFold 3, which can model proteins in concert with other molecules, to drug discovery.
Now academics can use the model to make any predictions they like -- including how proteins behave in the presence of potential drugs. Scientists with an academic affiliation can request code access here.