It feels as if a switch has been flipped in AI policy. For years, US legislators and American tech companies were reluctant to introduce, if not outright opposed to, strict technology regulation. Now both have begun begging for it.
Last week, OpenAI CEO Sam Altman appeared before a US Senate committee to speak about the risks and potential of AI language models. Altman, along with many senators, called for international standards for artificial intelligence. He also urged the US to regulate the technology and to set up a new agency, much like the Food and Drug Administration, to oversee AI.
For an AI policy nerd like myself, the Senate hearing was both encouraging and frustrating. Encouraging because the conversation seems to have moved past promoting wishy-washy self-regulation and on to rules that could actually hold companies accountable. Frustrating because the debate seems to have forgotten the past five-plus years of AI policy. I just published a story on all the current international efforts to regulate AI technology. You can read it here.
I’m not the only one who feels this way.
“To suggest that Congress starts from zero just plays into the industry’s favorite narrative, which is that Congress is so far behind and doesn’t understand technology—how could they ever regulate us?” says Anna Lenhart, a policy fellow at the Institute for Data Democracy and Policy at George Washington University and a former Hill staffer.
In fact, politicians in the last Congress, which ran from January 2021 to January 2023, introduced a ton of AI-related bills. Lenhart put together this neat list of all the AI regulations proposed during that time. They cover everything from risk assessments to transparency to data protection. None of them made it to the president’s desk, but given that buzzy (or, to many, scary) new generative AI tools have captured Washington’s attention, Lenhart expects some of them to be revamped and make a reappearance in one form or another.
Here are a few to keep an eye on.
Algorithmic Accountability Act
This bill was introduced by Democrats in the US House and Senate in 2022, pre-ChatGPT, to address the tangible harms of automated decision-making systems, such as ones that denied people pain medication or rejected their mortgage applications.
The bill would require companies to do algorithmic impact and risk assessments, says Lenhart. It would also put the Federal Trade Commission in charge of regulating and enforcing rules around AI, and boost the agency’s staff numbers.
American Data Privacy and Protection Act
This bipartisan bill was an attempt to regulate how companies collect and process data. It gained a lot of momentum as a way to help women keep their personal health data safe after Roe v. Wade was overturned, but it did not pass in time. The debate around the risks of generative AI could give it the added urgency to go further than last time. The ADPPA would ban generative AI companies from collecting, processing, or transferring data in a discriminatory way. It would also give users more control over how companies use their data.
An AI agency
During the hearing, Altman and several senators suggested we need a new US agency to regulate AI. But I think this is a bit of a red herring. The US government needs more technical expertise and resources to regulate the tech, whether that sits in a new agency or a revamped existing one, Lenhart says. And more important, any regulator, new or old, needs the power to enforce the laws.
“It’s easy to create an agency and not give it any powers,” Lenhart says.
Democrats have tried to set up new protections with the Digital Platform Commission Act, the Data Protection Act, and the Online Privacy Act. But these attempts have failed, as most US bills without bipartisan support are doomed to.
What’s next?
Another tech-focused agency is likely on the way. Senators Lindsey Graham, a Republican, and Elizabeth Warren, a Democrat, are working together to create a new digital regulator that would also have the power to police and perhaps license social media companies.
Democrat Chuck Schumer is also rallying the troops in the Senate to introduce a new bill that would tackle AI harms specifically. He has gathered bipartisan support to put together a comprehensive AI bill that would set up guardrails aimed at promoting responsible AI development. For example, companies could be required to allow external experts to audit their tech before it’s released, and to give users and the government more information about their AI systems.
And while Altman seems to have won over the Senate Judiciary Committee, leaders from the commerce committees in both the House and Senate need to be on board for a comprehensive approach to AI regulation to become law, Lenhart says.
And it needs to happen fast, before people lose interest in generative AI.
“It’s gonna be tricky, but anything’s possible,” Lenhart says.
Deeper Learning
Meta’s new AI models can recognize and produce speech for more than 1,000 languages
Meta has built AI models that can recognize and produce speech for more than 1,000 languages, a tenfold increase over what’s currently available.
Why this matters: It’s a significant step toward preserving languages that are at risk of disappearing, the company says. There are around 7,000 languages in the world, but existing speech recognition models cover only about 100 of them comprehensively. That’s because these kinds of models tend to require huge amounts of labeled training data, which is available for only a small number of languages, including English, Spanish, and Chinese. Read more from Rhiannon Williams here.
Bits and Bytes
Google and Apple’s photo apps still can’t find gorillas
Eight years ago, Google’s photo app mislabeled pictures of Black people as gorillas. As a temporary fix, the company prevented any pictures from being labeled as apes. But years later, tech companies still haven’t found a solution to the problem, despite big advances in computer vision. (The New York Times)
Apple bans employees from using ChatGPT
It’s worried the chatbot might leak confidential company information. That’s not an unreasonable concern, given that just a few months ago OpenAI had to pull ChatGPT offline because of a bug that leaked users’ chat histories. (The Wall Street Journal)
Here’s how AI will break work for everyone
Big Tech’s push to integrate AI into office tools won’t spell the end of human labor. Quite the opposite: the easier work becomes, the more of it we will be expected to do. Or as Charlie Warzel writes, this AI boom is going to be less Skynet, more Bain & Company. (The Atlantic)
Does Bard know how many times “e” appears in “ketchup”?
This was a fun piece with a serious purpose: lifting the lid on how large language models work. Google’s chatbot Bard doesn’t know how many letters different words have. That’s because instead of recognizing individual letters, these models break words into “tokens.” So, for example, Bard would think the first letter in the word “ketchup” was “ket,” not “k.” (The Verge)
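The tokenization point is easy to see in code. Below is a minimal Python sketch using OpenAI’s open-source tiktoken tokenizer as a stand-in; Bard’s own tokenizer isn’t public, so the exact chunks shown are only illustrative, but the underlying idea is the same: the model receives token IDs, not letters.

```python
# Minimal sketch of how a language model “sees” text: as token IDs, not letters.
# Uses OpenAI’s open-source tiktoken library as a stand-in for Bard’s
# (non-public) tokenizer, so the exact splits here are illustrative only.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

word = "ketchup"
token_ids = enc.encode(word)

# Decode each token ID back into its text chunk to see the pieces the model
# actually works with; the split depends on the tokenizer’s vocabulary.
chunks = [enc.decode([tid]) for tid in token_ids]
print(token_ids, chunks)  # a couple of multi-letter chunks, not seven letters

# Ordinary string code can count letters directly...
print(word.count("e"))  # 1
# ...but a model that only receives whole-chunk token IDs never gets a direct
# view of the individual characters inside each chunk.
```

Any tokenizer with a byte-pair-encoding-style vocabulary, which is what most of today’s large language models use, shows the same behavior.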