Last week, MIT Technology Review held its inaugural EmTech Digital conference in London. It was a great success! I loved seeing so many of you there asking excellent questions, and it was a few days filled with brain-tickling insights about where AI goes next.
Here are the three main things I took away from the conference.
1. AI avatars are getting really, really good
UK-based AI unicorn Synthesia teased its next generation of AI avatars, which are far more emotive and realistic than any I have ever seen before. The company is pitching these avatars as a new, more engaging way to communicate. Instead of skimming through pages and pages of onboarding material, for example, new employees could watch a video in which a hyperrealistic AI avatar explains what they need to know about their job. This has the potential to change the way we communicate, allowing content creators to outsource their work to custom avatars and making it easier for organizations to share information with their staff.
2. AI agents are coming
Thanks to the ChatGPT boom, many of us have interacted with an AI assistant that can retrieve information. But the next generation of these tools, called AI agents, can do far more than that. They are AI models and algorithms that can autonomously make decisions in a dynamic world. Imagine an AI travel agent that can not only retrieve information and suggest things to do, but also take action to book things for you, from flights to tours and accommodations. Every AI lab worth its salt, from OpenAI to Meta to startups, is racing to build agents that can reason better, memorize more steps, and interact with other apps and websites.
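To make that concrete, here is a minimal, hypothetical sketch (my own illustration, not any lab's actual system) of the decide-act-observe loop at the heart of most agents. The planning model and the tools are stubs standing in for an LLM and real booking APIs:

```python
# A toy agent loop: the model picks an action, the program runs it with a
# tool, and the result feeds back in until the goal is met. "model_decide"
# and "run_tool" are made-up stubs, not a real API.
from dataclasses import dataclass

@dataclass
class Action:
    tool: str
    argument: str

def model_decide(goal: str, history: list) -> Action:
    """Stub for the planning model: choose the next tool call given what happened so far."""
    if not history:
        return Action("search_flights", goal)
    if len(history) == 1:
        return Action("book", history[-1])
    return Action("done", "")

def run_tool(action: Action) -> str:
    """Stub tools; a real agent would call flight-search and booking services."""
    if action.tool == "search_flights":
        return "cheapest flight: LON->BOS, May 21"
    if action.tool == "book":
        return "booked: " + action.argument
    return ""

def agent(goal: str) -> list:
    history = []
    while True:
        action = model_decide(goal, history)  # the model chooses autonomously
        if action.tool == "done":
            return history
        history.append(run_tool(action))      # each result informs the next decision

print(agent("weekend trip to Boston"))
```

The hard part, and what the labs are racing on, is the decision step: making the model reliable enough to chain many such actions without going off the rails.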
3. Humans aren't perfect either
One of the best ways we have of making sure AI systems don't go awry is getting humans to audit and evaluate them. But humans are complicated and biased, and we don't always get things right. In order to build machines that meet our expectations and complement our limitations, we should account for human error from the get-go. In a fascinating presentation, Katie Collins, an AI researcher at the University of Cambridge, explained how she found that letting people express how certain or uncertain they are (for example, by using a percentage to indicate how confident they are when labeling data) leads to better accuracy for AI models overall. The only downside of this approach is that it costs more and takes more time.
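Collins didn't share an implementation, but the core idea is easy to sketch. Here is a minimal, hypothetical example (the data, model, and numbers are all made up) of folding annotator confidence into training as soft labels rather than hard 0/1 ones, using PyTorch:

```python
# A minimal sketch of confidence-weighted labels: an annotator who is only
# 60% sure contributes a soft target of [0.6, 0.4] instead of a hard [1, 0].
import torch
import torch.nn.functional as F

# Hypothetical annotations: 4 examples, 8 features, binary labels, plus each
# annotator's self-reported confidence in their own label.
x = torch.randn(4, 8)
hard_labels = torch.tensor([1, 0, 1, 1])
confidence = torch.tensor([0.95, 0.60, 0.80, 0.55])

# Spread each annotation's probability mass across both classes.
soft_targets = torch.zeros(4, 2)
rows = torch.arange(4)
soft_targets[rows, hard_labels] = confidence
soft_targets[rows, 1 - hard_labels] = 1 - confidence

model = torch.nn.Linear(8, 2)  # stand-in for any classifier
logits = model(x)

# Cross-entropy against soft targets instead of hard class indices.
loss = -(soft_targets * F.log_softmax(logits, dim=-1)).sum(dim=-1).mean()
loss.backward()
print(loss.item())
```

The intuition: an uncertain annotator produces a gentler training signal than a confident one, so noisy labels pull the model around less. It also explains the downside Collins noted: eliciting a confidence score for every label takes annotators extra time.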
And we're doing it all again next month, this time at the mothership.
Join us for EmTech Digital on the MIT campus in Cambridge, Massachusetts, on May 22-23, 2024. I’ll be there—join me!
Our amazing speakers include Nick Clegg, president of global affairs at Meta, who will talk about elections and AI-generated misinformation. We also have the OpenAI researchers who built the video-generation AI Sora, who will share their vision of how generative AI will change Hollywood. Then Max Tegmark, the MIT professor who wrote an open letter last year calling for a pause on AI development, will take stock of what has happened since and discuss how to make powerful systems safer. We also have a host of top scientists from labs at Google, OpenAI, AWS, MIT, Nvidia, and more.
Readers of The Algorithm get 30% off with the discount code ALGORITHMD24.
I hope to see you there!
Now read the rest of The Algorithm
Deeper Learning
Researchers taught robots to run. Now they’re teaching them to walk.
Researchers at Oregon State University have successfully trained a humanoid robot called Digit V3 to stand, walk, pick up a box, and move it from one location to another. Meanwhile, a separate group of researchers from the University of California, Berkeley, has focused on teaching Digit to walk in unfamiliar environments while carrying different loads, without toppling over.
What's the big deal: Both groups are using an AI technique called sim-to-real reinforcement learning, a burgeoning method of training two-legged robots like Digit. Researchers believe it will lead to more robust, reliable two-legged machines capable of interacting with their surroundings more safely, as well as learning far more quickly. Read more from Rhiannon Williams.
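As a rough illustration of what sim-to-real with domain randomization means in practice (a toy sketch of my own, not either group's code), the trick is to randomize the simulator's physics every episode so the policy can't overfit to one idealized world. Real pipelines use full physics simulators and RL algorithms like PPO; here the "walking" task is reduced to balancing a 1-D torso angle:

```python
# Toy sim-to-real training: a one-parameter policy is tuned by random search,
# but every evaluation episode samples new physics (friction, pushes), so the
# surviving policy works across many simulated worlds, not just one.
import random

def simulate(gain, friction, push, steps=100):
    """One episode of a 1-D balance task: keep the 'torso angle' near zero."""
    angle, velocity, reward = 0.1, 0.0, 0.0
    for _ in range(steps):
        torque = -gain * angle                 # the "policy" is one feedback gain
        velocity += (angle + push + torque) * 0.1
        velocity *= friction                   # randomized damping
        angle += velocity * 0.1
        reward -= abs(angle)                   # staying upright scores better
    return reward

def evaluate(gain, episodes=10):
    """Average return across randomized worlds: the domain-randomization step."""
    total = 0.0
    for _ in range(episodes):
        friction = random.uniform(0.80, 0.99)  # new physics every episode
        push = random.uniform(-0.05, 0.05)     # e.g., a varying payload or shove
        total += simulate(gain, friction, push)
    return total / episodes

def train(iterations=200):
    """Random-search 'RL': keep whichever policy does best across random worlds."""
    gain = 1.0
    best = evaluate(gain)
    for _ in range(iterations):
        candidate = gain + random.gauss(0, 0.1)
        score = evaluate(candidate)
        if score > best:
            gain, best = candidate, score
    return gain

print(train())
```

Because the policy never sees the same physics twice, it has to be robust by construction, which is exactly what you want before transferring it to a real robot whose friction, motors, and payloads never quite match the simulator.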
Bits and Bytes
It’s time to retire the term “user”
The proliferation of AI means we need a new word. Tools we once called AI bots have been given lofty titles like “copilot,” “assistant,” and “collaborator” to convey a sense of partnership instead of a sense of automation. But if AI is now a partner, then what are we? (MIT Technology Review)
3 ways the US could help universities compete with tech companies on AI innovation
Empowering universities to remain at the forefront of AI research will be key to realizing the field's long-term potential, argue Ylli Bajraktari, Tom Mitchell, and Daniela Rus. (MIT Technology Review)
AI was supposed to make police body cams better. What happened?
New AI programs that analyze bodycam recordings promise more transparency but are doing little to change culture. This story serves as a useful reminder that technology is never a panacea for these kinds of deep-rooted issues. (MIT Technology Review)
The World Health Organization’s AI chatbot makes stuff up
The World Health Organization launched a “virtual medical expert” to help people with questions about things like mental health, tobacco use, and healthy eating. But the chatbot often offers outdated information or just plain makes things up, a common issue with AI models. This is a great cautionary tale of why it's not always a good idea to use AI chatbots: hallucinating chatbots can lead to serious consequences when they are applied to important tasks such as giving health advice. (Bloomberg)
Meta is adding AI assistants everywhere in its biggest AI push
The tech giant is rolling out its latest AI model, Llama 3, in most of its apps, including Instagram, Facebook, and WhatsApp. People will also be able to ask its AI assistants for advice, or use them to search for information on the web. (New York Times)
Stability AI is in trouble
One of the first generative AI unicorns, the company behind the open-source image-generating AI model Stable Diffusion, is shedding 10% of its workforce. A few weeks ago its CEO, Emad Mostaque, announced that he was leaving the company. Stability has also lost several high-profile researchers, struggled to monetize its product, and is facing a slew of lawsuits over copyright. (The Verge)