
I hate going to the gym. Last year I hired a personal trainer for six months in the hope that she would brainwash me into adopting healthy exercise habits for the long term. It was great, but personal trainers are prohibitively expensive, and I haven’t set foot in a gym once since those six months came to an end.
That’s why I was intrigued when I read my colleague Rhiannon Williams’ latest piece about AI gym trainers.
Lumin Fitness is a gym in Texas staffed almost entirely by virtual AI coaches designed to guide gym goers through workouts (there’s one human employee on hand, perhaps to switch everything on and off).
Patrons can complete a solo workout program with the help of a virtual coach at their own designated station, or take part in a high-intensity functional training class with others. Sensors in both the equipment and the floor-to-ceiling LED screens that line the walls of the gym track users’ movements, and Lumin uses machine learning models to tailor advice.
The gym’s owners are confident that these new AI trainers will encourage people like me, who feel intimidated or unmotivated, to work out. Read more from Rhiannon here.
Over the next few years, artificial intelligence is going to have a bigger and bigger effect on us and the way we live. We’re already pretty used to tracking our bodies through wearables like smartwatches. Getting a pep talk from an AI avatar doesn’t feel like much of a stretch. People are also using ChatGPT to come up with workout plans, as Rhiannon reported earlier this year.
And it’s not just AI for working out. Waitrose, a fancy chain of grocery stores in the UK, used generative AI to create recipes for its range of Japanese food. Others are using it to generate books, which are flooding Amazon, including instruction manuals for mushroom foraging. For my birthday last year, a dear friend gave me a perfume with notes that were AI-generated. It smells citrusy and cinnamony, a bit floral and spicy, and I haven’t used it much. (Sorry, Roosa.)
Even the White House wants us to use AI to help with our health. In a readout from a meeting between Biden officials and AI and healthcare experts last week, Arati Prabhakar, director of the White House Office of Science and Technology Policy, called on the healthcare sector to “seize the powerful tools of AI to improve health outcomes for more Americans” in clinical settings, drug development, and mitigating public health challenges.
This makes sense. Neural networks are excellent at analyzing data and recognizing patterns, and could help speed up diagnoses, spot things humans might have missed, or help us come up with new ideas. And AI personal trainers that gamify exercise can help people feel good about their achievements and encourage us to do more exercise, Andy Lane, a professor of sport psychology at the University of Wolverhampton, told Rhiannon.
But as AI enters ever more sensitive areas, we need to keep our wits about us and remember the limitations of the technology. Generative AI systems are excellent at predicting the next likely word in a sentence, but they don’t have a grasp of the broader context and meaning of what they are generating. Neural networks are competent pattern seekers, and can help us make new connections between things, but they are also easy to trick and break and prone to biases.
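To make that concrete, here is a toy sketch in Python, entirely my own illustration with a made-up mini corpus. Real systems use neural networks over tokens rather than word counts, but the principle of blind next-word prediction is the same: the program produces fluent-seeming strings while understanding nothing at all.

```python
import random
from collections import Counter, defaultdict

# Toy corpus, invented for this illustration.
corpus = "eat well and train hard and rest well and eat well".split()

# Count which word follows which: the entire "knowledge" of this model.
transitions = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current][nxt] += 1

def next_word(word: str) -> str:
    """Sample the next word in proportion to how often it followed `word`."""
    counts = transitions[word]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Generate "advice" by repeatedly predicting the next likely word.
word, output = "eat", ["eat"]
for _ in range(6):
    word = next_word(word)
    output.append(word)
print(" ".join(output))  # fluent-looking, but there is nothing behind it
```

Scale that statistical trick up by billions of parameters and you get fluency, not understanding.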
The biases of AI systems in settings such as healthcare are well documented. But as AI enters new arenas, I’m on the lookout for the inevitable weird failures that will crop up. Will the foods AI systems recommend skew American? How healthy will the recipes be? And will the workout plans take into account physiological differences between male and female bodies, or will they default to male-oriented workout plans?
And most important, it’s crucial to remember that these systems have no knowledge of what exercise feels like, what food tastes like, or what we mean by “high quality.” AI workout programs might come up with dull, robotic exercises. AI recipe makers tend to suggest combinations that taste horrible, or are even poisonous. Mushroom foraging books are likely riddled with misinformation about which varieties are toxic and which are not, which could have catastrophic consequences.
Humans also tend to place too much trust in computers. It’s only a matter of time before “death by GPS” is replaced by “death by AI-generated mushroom foraging book.” Including labels on AI-generated content is a good place to start. In this new age of AI-powered products, it will be more important than ever for the wider population to understand how these powerful systems do and don’t work. And to take what they say with a pinch of salt.
Deeper Learning
How generative AI is boosting the spread of disinformation and propaganda
Governments and political actors around the world are using AI to create propaganda and censor online content. In a new report released by Freedom House, a human rights advocacy group, researchers documented the use of generative AI in 16 countries “to sow doubt, smear opponents, or influence public debate.”
Downward spiral: The annual report, Freedom on the Net, scores and ranks countries according to their relative degree of internet freedom, as measured by a host of factors like internet shutdowns, laws limiting online expression, and retaliation for online speech. The 2023 edition, released on October 4, found that global internet freedom declined for the 13th consecutive year, driven in part by the proliferation of artificial intelligence. Read more from Tate Ryan-Mosley in her weekly newsletter on tech policy, The Technocrat.
Bits and Bytes
Predictive policing software is terrible at predicting crimes
A New Jersey police department used an algorithm called Geolitica that was right less than 1% of the time, according to a new investigation. We’ve known for years how deeply flawed and racist these systems are. It’s incredibly frustrating that public money is still being wasted on them. (The Markup and Wired)
The G7 plans to ask AI companies to agree to watermarks and audits
There’s a real push to come up with cross-border guidelines for governing AI. Canada, France, Germany, Italy, Japan, the UK, and the US are proposing voluntary rules for AI companies that would require them to run more tests before and after launching products, and to label AI-generated content using watermarks, among other requirements. (Bloomberg)
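For the curious, here is a rough sketch of one watermarking idea from the research literature: bias generation toward a pseudorandom “green list” of words, then detect that bias statistically later. This is my own simplified illustration with a tiny made-up vocabulary, not the scheme any company or government has committed to.

```python
import hashlib
import random

# Made-up vocabulary, purely for illustration.
VOCAB = ["the", "a", "cat", "dog", "sat", "ran", "quickly", "slowly"]

def green_list(previous_word: str) -> set:
    """Derive a reproducible half-vocabulary 'green list' from the context."""
    seed = int(hashlib.sha256(previous_word.encode()).hexdigest(), 16)
    return set(random.Random(seed).sample(VOCAB, len(VOCAB) // 2))

def watermarked_next(previous_word: str) -> str:
    """A watermarking generator strongly prefers green-list words."""
    return random.choice(sorted(green_list(previous_word)))

def green_fraction(words: list) -> float:
    """Detector: unmarked text scores near 0.5, watermarked text near 1.0."""
    hits = sum(w in green_list(prev) for prev, w in zip(words, words[1:]))
    return hits / max(len(words) - 1, 1)
```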
Could AI “constitutions” lead to safer AI systems?
This story looks at “AI constitutions”—a set of values and principles, such as honesty and respect, that AI models must follow—as part of an effort by researchers to prevent failures. The idea is being developed by the likes of Google DeepMind and Anthropic, but it’s unclear whether it will work in practice. (FT)
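The basic loop is simple enough to sketch. Below is a simplified, hypothetical illustration of the critique-and-revise idea; the `call_model` function, the principles, and the prompts are all placeholders of mine, not any lab’s actual system.

```python
# All names, principles, and prompts below are placeholders for illustration.
CONSTITUTION = [
    "Be honest: do not present unverified claims as fact.",
    "Be respectful: avoid demeaning or harmful language.",
]

def call_model(prompt: str) -> str:
    """Hypothetical stand-in: wire this to a real text-generation API."""
    return f"(model output for: {prompt[:40]}...)"

def constitutional_reply(user_prompt: str) -> str:
    draft = call_model(user_prompt)
    for principle in CONSTITUTION:
        # Ask the model to critique its own draft against each principle...
        critique = call_model(
            f"Principle: {principle}\nDraft: {draft}\n"
            "Point out any way the draft violates the principle."
        )
        # ...then to rewrite the draft so the critique no longer applies.
        draft = call_model(
            f"Draft: {draft}\nCritique: {critique}\n"
            "Rewrite the draft to address the critique."
        )
    return draft

print(constitutional_reply("Suggest a workout plan."))
```

Whether self-critique like this reliably prevents failures in practice is exactly the open question the story raises.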
OpenAI is considering making its own AI chips
Training and running massive AI models takes a lot of computing power, and OpenAI is limited by the global chip shortage. The company is considering developing its own chips to cut down on the cost of developing new AI models and improving existing ones, such as ChatGPT. (Reuters)
Facebook’s new AI-generated stickers are lewd, rude, and occasionally nude
Meta rolled out a new suite of generative AI features with only basic content filters, which allowed users to generate nude Trudeau stickers and a Karl Marx with boobs. Quelle surprise. (The Verge)