MIT Technology Review
Why we need better defenses against VR cyberattacks

I remember the first time I tried on a VR headset. It was the first Oculus Rift, and I nearly fainted after experiencing an intense but visually clumsy VR roller coaster. But that was a decade ago, and the experience has gotten a lot smoother and more realistic since. That impressive level of immersiveness could be a problem, though: it makes us particularly vulnerable to cyberattacks in VR.

I just published a story about a new kind of security vulnerability discovered by researchers at the University of Chicago. Inspired by the Christopher Nolan movie Inception, the attack allows hackers to create an app that injects malicious code into the Meta Quest VR system. It then launches a clone of the home screen and apps that looks identical to the user’s original screen. Once inside, attackers can see, record, and modify everything the person does with the VR headset, tracking voice, motion, gestures, keystrokes, browsing activity, and even interactions with other people in real time. New fear = unlocked.
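
At its core, the exploit is a classic man-in-the-middle pattern: an invisible layer sits between the user and the real system, recording and potentially rewriting everything that passes through. Below is a minimal, purely conceptual sketch of that pattern in Python. Every name in it is a hypothetical illustration; this is not the researchers’ code, and it calls no real Meta Quest API.

```python
# Purely conceptual sketch of the man-in-the-middle pattern behind the
# inception attack. All names are hypothetical; no real Meta Quest API
# is involved.

class RealHomeScreen:
    """Stands in for the legitimate VR home screen."""
    def handle(self, event: str) -> str:
        return f"system response to {event!r}"

class CloneOverlay:
    """A visually identical clone that silently records and relays traffic."""
    def __init__(self, real: RealHomeScreen):
        self.real = real
        self.captured = []  # everything the user does flows through here

    def handle(self, event: str) -> str:
        self.captured.append(("user", event))   # log voice, gestures, keystrokes...
        response = self.real.handle(event)      # forward so nothing looks broken
        self.captured.append(("system", response))
        return response                         # the attacker could also modify this

# The user believes they are interacting with the home screen directly:
screen = CloneOverlay(RealHomeScreen())
screen.handle("enter banking PIN")
print(screen.captured)  # the attacker sees both sides of every exchange
```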

The findings are pretty mind-bending, partly because the researchers’ unsuspecting test subjects had absolutely no idea they were under attack. You can read more about it in my story here.

It’s shocking to see how fragile and insecure these VR systems are, especially considering that Meta’s Quest headset is the most popular such product on the market, used by tens of millions of people.

But perhaps more unsettling is how attacks like this can happen without our noticing, and can warp our sense of reality. Past studies have shown how quickly people start treating things in AR or VR as real, says Franzi Roesner, an associate professor of computer science at the University of Washington who studies security and privacy but was not part of the study. Even in very basic virtual environments, people start stepping around objects as if they were really there.

VR has the potential to put misinformation, deception, and other problematic content on steroids, because it exploits people’s brains and deceives them physiologically and subconsciously, says Roesner: “The immersion is really powerful.”

And because VR technology is relatively new, people aren’t vigilantly looking for security flaws or traps while using it. To test how stealthy the inception attack was, the University of Chicago researchers recruited 27 volunteer VR experts to experience it. One of the participants was Jasmine Lu, a computer science PhD researcher at the University of Chicago. She says she has been using, studying, and working with VR systems regularly since 2017. Despite that, the attack took her and almost all the other participants by surprise.

“As far as I could tell, there was not any difference except a bit of a slower loading time—things that I think most people would just translate as small glitches in the system,” says Lu.

One of the fundamental issues people will have to deal with in using VR is whether they can trust what they’re seeing, says Roesner.

Lu agrees. She says that with online browsers, we have been trained to recognize what looks legitimate and what doesn’t, but with VR, we simply haven’t. People do not know what an attack looks like.

This is related to a growing problem we’re seeing with the rise of generative AI, even with text, audio, and video: it’s notoriously difficult to distinguish real from AI-generated content. The inception attack shows that we need to think of VR as another dimension in a world where it’s getting increasingly difficult to know what’s real and what’s not.

As more people use these systems and more products enter the market, the onus is on the tech sector to develop ways to make them safer and more trustworthy.

The good news? While VR technologies are commercially available, they’re not all that widely used, says Roesner. So there’s time to start beefing up defenses now.


Now read the rest of The Algorithm

Deeper Learning

An OpenAI spinoff has built an AI model that helps robots learn tasks like humans

In the summer of 2021, OpenAI quietly shuttered its robotics team, announcing that progress was being stifled by a lack of the data necessary to train robots in how to move and reason using artificial intelligence. Now three of OpenAI’s early research scientists say the startup they spun off in 2017, called Covariant, has solved that problem and unveiled a system that combines the reasoning skills of large language models with the physical dexterity of an advanced robot.

Multimodal prompting: The new model, called RFM-1, was trained on years of data collected from Covariant’s small fleet of item-picking robots that customers like Crate & Barrel and Bonprix use in warehouses around the world, as well as words and videos from the internet. Users can prompt the model using five different kinds of input: text, images, video, robot instructions, and measurements. The company hopes the system will become more capable and efficient as it’s deployed in the real world. Read more from James O’Donnell here.
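
As a rough illustration of what a five-modality prompt could look like in code, here is a hypothetical payload structure. The field names and types are assumptions made for the sketch; the story does not describe Covariant’s actual RFM-1 interface.

```python
# Hypothetical sketch of a prompt carrying the five input types the story
# lists. The structure and field names are illustrative assumptions, not
# Covariant's actual RFM-1 API.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class RobotPrompt:
    text: Optional[str] = None                      # natural-language instruction
    images: list = field(default_factory=list)      # e.g. camera frames
    video: Optional[bytes] = None                   # a recorded clip
    robot_instructions: list = field(default_factory=list)   # low-level commands
    measurements: dict = field(default_factory=dict)          # sensor readings

prompt = RobotPrompt(
    text="Pick up the mug and place it in the left bin",
    measurements={"gripper_force_n": 12.5, "item_weight_kg": 0.3},
)
print(prompt)
```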

Bits and Bytes

You can now use generative AI to turn your stories into comics
By pulling together several different generative models into an easy-to-use package controlled with the push of a button, Lore Machine heralds the arrival of one-click AI. (MIT Technology Review) 

A former Google engineer has been charged with stealing AI trade secrets for Chinese companies
The race to develop ever more powerful AI systems is getting dirty. A Chinese engineer downloaded confidential files about Google’s supercomputing data centers to his personal Google Cloud account while working for Chinese companies. (US Department of Justice)

There’s been even more drama in the OpenAI saga
This story truly is the gift that keeps on giving. OpenAI has clapped back at Elon Musk and his lawsuit, which claims the company has betrayed its original mission of doing good for the world, by publishing emails showing that Musk was keen to commercialize OpenAI too. Meanwhile, Sam Altman is back on the OpenAI board after his temporary ouster, and it turns out that chief technology officer Mira Murati played a bigger role in the coup against Altman than initially reported.

A Microsoft whistleblower has warned that the company’s AI tool creates violent and sexual images, and ignores copyright
Shane Jones, an engineer who works at Microsoft, says his tests with the company’s Copilot Designer gave him concerning and disturbing results. He says the company acknowledged his concerns, but it didn’t take the product off the market. Jones then sent a letter explaining these concerns to the Federal Trade Commission, and Microsoft has since started blocking some terms that generated toxic content. (CNBC)

Silicon Valley is pricing academics out of AI research
AI research is eye-wateringly expensive, and Big Tech, with its huge salaries and computing resources, is draining academia of top talent. This has serious implications for the technology, causing it to be focused on commercial uses over science. (The Washington Post)
