
Why does AI being good at math matter?


Last week the AI world was buzzing over a new paper in Nature from Google DeepMind, in which the lab managed to create an AI system that can solve complex geometry problems. Named AlphaGeometry, the system combines a language model with a type of AI called a symbolic engine, which uses symbols and logical rules to make deductions, writes my colleague June Kim. You can read more about AlphaGeometry here.
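
To give a flavor of that neuro-symbolic idea, here is a minimal, hypothetical sketch in Python. It is not DeepMind's code, and the geometry "rules" and the stand-in language model are invented for illustration: a symbolic engine exhausts its logical deductions, and when it gets stuck, a language model proposes an auxiliary construction to unblock it.

```python
# Hypothetical sketch of a neuro-symbolic loop (NOT AlphaGeometry's actual code).
# A forward-chaining symbolic engine deduces what it can; when it stalls,
# a stand-in "language model" suggests a new auxiliary fact.

RULES = {
    # each rule: frozenset of premises -> conclusion (toy geometry facts)
    frozenset({"AB == AC"}): "triangle ABC is isosceles",
    frozenset({"triangle ABC is isosceles", "M is midpoint of BC"}): "AM is perpendicular to BC",
}

def symbolic_engine(facts):
    """Apply deduction rules until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES.items():
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

def language_model_suggests(facts, goal):
    """Stand-in for the language model: propose an auxiliary construction."""
    return "M is midpoint of BC"  # in reality, sampled from a trained model

def prove(premises, goal, max_steps=5):
    facts = set(premises)
    for _ in range(max_steps):
        facts = symbolic_engine(facts)
        if goal in facts:
            return True, facts
        facts.add(language_model_suggests(facts, goal))
    return goal in facts, facts

if __name__ == "__main__":
    ok, facts = prove({"AB == AC"}, "AM is perpendicular to BC")
    print("proved" if ok else "stuck", sorted(facts))
```

The division of labor is the interesting part: the symbolic engine guarantees that every derived step follows logically, while the language model only has to be creative about what to try next.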

This is the second time in recent months that the AI world has gotten excited about math. The rumor mill went into overdrive last November, when there were reports that the boardroom drama at OpenAI, which saw CEO Sam Altman temporarily ousted, was caused by a powerful new AI breakthrough. It was reported that the AI system in question was called Q* and could solve complex math calculations. (The company has not commented on Q*, and we still don't know whether there was any link to the Altman ouster.) I unpacked the drama and the hype in this story.

You don't have to be really into math to see why this is potentially very exciting. Math is really, really hard for AI models. Complex math, such as geometry, requires sophisticated reasoning skills, and many AI researchers believe that the ability to crack it could herald more powerful and intelligent systems. Innovations like AlphaGeometry show that we're edging closer to machines with more human-like reasoning skills. This could allow us to build more powerful AI tools that help mathematicians solve equations, and perhaps lead to better tutoring tools.

Work like this could also help us use computers to reach better decisions and be more logical, says Conrad Wolfram of Wolfram Research. The company is behind WolframAlpha, an answer engine that can handle complex math questions. I caught up with him last week in Athens at EmTech Europe. (We're hosting another edition in London in April. Join us! I'll be there.)

But there's a catch. In order for us to reap the benefits of AI, humans need to adapt too, he says. We need a better understanding of how the technology works so we can approach problems in a way that computers can solve.

"As computers get better, humans need to adjust to this and know more, get more experience about whether that works, where it doesn't work, where we can trust it, or we can't trust it," Wolfram says.

Wolfram argues that as we enter the AI age with more powerful computers, humans need to adopt "computational thinking," which involves defining and understanding a problem and then breaking it down into pieces so that a computer can calculate the answer. A toy example of what that decomposition might look like follows below.
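
Here is that toy illustration of the "define, then decompose" step. It is my own example rather than Wolfram's, and the numbers are made up: a fuzzy question ("can we afford this mortgage?") gets turned into precise, computable pieces, and only then does the computer do the arithmetic.

```python
# Toy illustration of "computational thinking" (my example, not Wolfram's):
# turn a fuzzy question into measurable inputs and computable steps.

def monthly_payment(principal, annual_rate, years):
    """Standard amortized-loan formula: fixed monthly payment."""
    r = annual_rate / 12          # monthly interest rate
    n = years * 12                # number of payments
    return principal * r / (1 - (1 + r) ** -n)

# Step 1: define the problem precisely (inputs we can actually measure).
price, deposit, rate, term = 300_000, 60_000, 0.05, 25
monthly_income = 4_500

# Step 2: break it into pieces a computer can calculate.
payment = monthly_payment(price - deposit, rate, term)
share_of_income = payment / monthly_income

# Step 3: let the computer compute, then interpret the answer ourselves.
print(f"Monthly payment: {payment:,.0f}")
print(f"Share of income: {share_of_income:.0%}")
```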

He compares this moment to the rise of mass literacy in the late 18th century, which put an end to the era when only the elite could read and write.

"The countries that did that first massively benefited for their industrial revolution … Now we need a mass computational literacy, which is the equivalent of that."

Deeper Learning

How satellite images and AI could help fight spatial apartheid in South Africa  

Raesetje Sefala grew up sharing a bedroom with her six siblings in a cramped township in the Limpopo province of South Africa. The township's inhabitants, predominantly Black people, had inadequate access to schools, health care, parks, and hospitals. But just a few miles away in Limpopo, white families lived in big, attractive houses, with easy access to all these things. The physical division of communities along economic and racial lines is just one damaging inheritance from South Africa's era of apartheid.

Fixing the problem using AI: Alongside computer scientists Nyalleng Moorosi and Timnit Gebru at the nonprofit Distributed AI Research Institute (DAIR), which Gebru set up in 2021, Sefala is deploying computer vision tools and satellite images to analyze the impacts of racial segregation in housing, with the ultimate hope that their work will help to reverse it. Read more from Abdullahi Tsanni.

Bits and Bytes

A new AI-based risk prediction system could help catch deadly pancreatic cancer cases earlier
The system outperformed current diagnostic standards. Someday it could be used in a clinical setting to identify patients who might benefit from early screening or testing, helping catch the disease earlier and save lives. (MIT Technology Review)

Meta says it's developing open-source AGI
Et tu, Zuck? Meta is now an AGI company. In an Instagram Reels video, CEO Mark Zuckerberg announced a new long-term goal to build open-source "full general intelligence." The company is doing this by bringing its generative AI and AI research teams closer together, and building the next version of its Llama model and a massive computing infrastructure to support it. (Meta)

Read the full text of the AI Act
The EU reached a political agreement on the AI Act late last year. Negotiators are still finalizing technical details of the bill, and it still needs to go through a round of approvals before it enters into force. Euractiv's Luca Bertuzzi got hold of the nearly 900-page final text of the bill, along with a comparison of how it differs from earlier texts. Here is a simpler version of the bill.

Sharing deepfake nudes could soon become a federal crime in the US
The bipartisan Preventing Deepfakes of Intimate Images Act was introduced in the US last week. It would outlaw the nonconsensual sharing of digitally altered nude images. It was prompted by an incident at a New Jersey high school where teenage boys were sharing AI-generated images of their female classmates. (Wall Street Journal)

A "shocking" amount of the web is already AI-translated trash
The internet is already filled with machine-translated garbage, particularly in languages spoken in Africa and the Global South, researchers at Amazon Web Services found. Over half the sentences on the web have been machine-translated into other languages. This could have severe consequences for the quality of the data used to train future AI models. I wrote about this phenomenon all the way back in 2022. (Vice)
