
Is it hot where you are? It sure is here in London. I’m writing this with a fan blasting at full power in my direction and still feel like my brain is melting. Last week was the hottest week on record. It’s one more sign that climate change is “out of control,” the UN secretary-general said.
Punishing heat waves and extreme weather events like hurricanes and floods are going to become more common as the climate crisis worsens, making it more essential than ever to provide accurate weather forecasts.
AI is proving increasingly helpful with that. Over the past year, weather forecasting has been having an AI moment.
Three recent papers from Nvidia, Google DeepMind, and Huawei have introduced machine-learning methods that can predict weather at least as accurately as conventional methods, and much more quickly. Last week I wrote about Pangu-Weather, an AI model developed by Huawei. Pangu-Weather can forecast not only the weather but also the path of tropical cyclones. Read more here.
Huawei’s Pangu-Weather, Nvidia’s FourCastNet, and Google DeepMind’s GraphCast are making meteorologists “rethink how we use machine learning and weather forecasts,” Peter Dueben, head of Earth system modeling at the European Centre for Medium-Range Weather Forecasts (ECMWF), told me for the story.
ECMWF’s weather forecasting model is considered the gold standard for medium-range weather forecasting (up to 15 days ahead). Pangu-Weather managed to achieve comparable accuracy to the ECMWF model, while Google DeepMind claims in a non-peer-reviewed paper to have beaten it 90% of the time in the combinations they tested.
Using AI to predict weather has one enormous advantage: it’s fast. Traditional forecasting models are big, complex computer algorithms based on atmospheric physics, and they take hours to run. AI models can generate forecasts in just seconds.
But they’re unlikely to replace conventional weather prediction models anytime soon. AI-powered forecasting models are trained on historical weather data that goes back decades, which means they’re good at predicting events that are similar to the weather of the past. That’s a problem in an era of increasingly unpredictable conditions.
We don’t know whether AI models will be able to predict rare and extreme weather events, says Dueben. He thinks the way forward might be for AI tools to be adopted alongside traditional weather forecasting models to get the most accurate predictions.
Big Tech’s arrival on the weather forecasting scene is not purely driven by scientific curiosity, reckons Oliver Fuhrer, the head of the numerical prediction department at MeteoSwiss, the Swiss Federal Office of Meteorology and Climatology.
Our economies are becoming increasingly dependent on weather, especially with the rise of renewable energy, says Fuhrer. Tech companies’ businesses are also linked to weather, he adds, pointing to everything from logistics to the number of search queries for ice cream.
The field of weather forecasting could gain a lot from the addition of AI. Countries track and record weather data, which means there is plenty of publicly available data out there to use in training AI models. When combined with human expertise, AI could help speed up a painstaking process. What’s next isn’t clear, but the prospects are exciting. “Part of it is also just exploring the space and figuring out what potential services or business models might be,” Fuhrer says.
Deeper Learning
AI-text detection tools are really easy to fool
Within weeks of ChatGPT’s launch, there were fears that students would be using the chatbot to spin up passable essays in seconds. In response to those fears, startups began making products that promise to spot whether text was written by a human or a machine. It turns out these tools are relatively easy to trick, making detection simple to evade.
Snake-oil alert: I’ve written about how difficult—if not impossible—it is to detect AI-generated text. As my colleague Rhiannon Williams reports, new research found that most of the tools claiming to be able to identify such text perform poorly. Researchers tested 14 detection tools and found that while they were good at spotting human-written text (with 96% accuracy on average), their accuracy fell to 74% for AI-generated text, and even lower, to 42%, when that text had been slightly tweaked. Read more.
Bits and Bytes
AI companies are facing a flood of lawsuits over privacy and copyright
What America lacks in AI regulation, it makes up for in multimillion-dollar lawsuits. In late June, a California law firm launched a class action lawsuit against OpenAI, claiming that the company violated the privacy of millions of people when it scraped data from the internet to train its model. Now actor and comedian Sarah Silverman is suing OpenAI and Meta for scraping her copyrighted work into their AI models. These cases, together with existing copyright lawsuits by artists, could set an important precedent for how AI is developed in the US.
OpenAI has introduced a new concept: “superalignment”
It’s a bird … It’s a plane … It’s superalignment! OpenAI is assembling a team of researchers to work on “superintelligence alignment.” That means they’ll focus on solving the technical challenges involved in controlling AI systems that are smarter than humans.
On one hand, I think it’s great that OpenAI is working to mitigate the harm that could be done by the superintelligent AI it’s trying to build. But on the other hand, such AI systems remain wildly hypothetical, and existing systems cause plenty of harm today. At the very least, I hope OpenAI comes up with more practical ways to control this generation of AI models. (OpenAI)
Big Tech says it wants AI regulation, as long as users bear the brunt
This story gives a nice overview of the lobbying happening behind the scenes around the AI Act. While tech companies say they support regulation, they’re pushing back against EU efforts to impose stricter rules on their AI products. (Bloomberg)
How elite schools like Stanford became fixated on the AI apocalypse
Fears about existential AI risk didn’t come from nowhere. In fact, as this piece explains, it’s a billionaire-backed movement that has recruited an army of elite college students to its cause. And they’re keen to capitalize on the current moment. (The Washington Post)