
3 Questions: Honing robot perception and mapping



Q: Your labs are currently focused on increasing the number of robots that can work together to generate 3D maps of the environment. What are some potential advantages of scaling this approach?

How: The key benefit hinges on consistency, in the sense that a single robot can create an independent map that is self-consistent but not globally consistent. We're aiming for the team to share a consistent map of the world; that's the key difference between trying to form a consensus across robots and mapping independently.

Carlone: In many scenarios, it's also good to have a bit of redundancy. For example, if we deploy a single robot in a search-and-rescue mission and something happens to that robot, it could fail to find the survivors. If multiple robots are doing the exploring, there's a much better chance of success. Scaling up the team of robots also means that any given task may be completed in a shorter amount of time.

Q: What are some of the lessons you've learned from recent experiments, and some of the challenges you've had to overcome while designing these systems?

Carlone: Recently we did a big mapping experiment on the MIT campus, in which eight robots traversed up to 8 kilometers in total. The robots have no prior knowledge of the campus, and no GPS. Their main tasks are to estimate their own trajectory and build a map around it. You want the robots to understand the environment as humans do; humans not only understand the shape of obstacles, so they can get around them without hitting them, but also understand that an object is a chair or a desk, and so on. That's the semantics part.
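To make "estimating their own trajectory" without GPS concrete, here is a minimal dead-reckoning sketch: relative odometry steps are composed into global poses, starting from an arbitrary origin. The 2D simplification and all names are illustrative, not taken from the team's actual system.

```python
# Minimal sketch: dead-reckoning a trajectory from relative odometry,
# i.e., GPS-free self-localization. Illustrative only, not Kimera code.
import numpy as np

def se2(x, y, theta):
    """Homogeneous 3x3 transform for a 2D pose (translation + heading)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0,  0, 1]])

def integrate(odometry):
    """Compose relative motions into a trajectory of global poses."""
    pose = np.eye(3)  # start at the origin: no GPS, no prior map
    trajectory = [pose]
    for dx, dy, dtheta in odometry:
        pose = pose @ se2(dx, dy, dtheta)  # chain each relative step
        trajectory.append(pose)
    return trajectory

# Example: three 1 m forward steps, turning 90 degrees on the last one.
path = integrate([(1.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 0.0, np.pi / 2)])
print(path[-1][:2, 2])  # final estimated position: [3. 0.]
```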

The interesting thing is that when the robots meet each other, they exchange information to improve their map of the environment. For instance, if robots connect, they can leverage that information to correct their own trajectories. The challenge is that if you want to reach a consensus between robots, you don't have the bandwidth to exchange too much data. One of the key contributions of our 2022 paper is to deploy a distributed protocol in which robots exchange limited information but can still agree on how the map looks. They don't send camera images back and forth, but only exchange specific 3D coordinates and clues extracted from the sensor data. As they continue to exchange such data, they can form a consensus.
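To make the bandwidth point concrete, the sketch below shows one way two robots could reconcile their maps from a handful of shared 3D landmark coordinates rather than raw images: a standard Kabsch/SVD alignment recovers the rigid transform between their reference frames. This is an illustrative simplification, not the distributed protocol from the 2022 paper.

```python
# Two robots exchange matched 3D landmark coordinates (not camera images)
# and solve for the rigid transform aligning their maps. Illustrative only.
import numpy as np

def align_maps(points_a, points_b):
    """Find R, t so that R @ points_b + t approximates points_a (3xN arrays)."""
    centroid_a = points_a.mean(axis=1, keepdims=True)
    centroid_b = points_b.mean(axis=1, keepdims=True)
    H = (points_b - centroid_b) @ (points_a - centroid_a).T
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = centroid_a - R @ centroid_b
    return R, t

# Robot A's landmarks, and the same landmarks as seen in robot B's frame.
rng = np.random.default_rng(0)
landmarks_a = rng.standard_normal((3, 6))
true_R = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
landmarks_b = true_R.T @ (landmarks_a - np.array([[1.0], [2.0], [0.0]]))

R, t = align_maps(landmarks_a, landmarks_b)
print(np.allclose(R @ landmarks_b + t, landmarks_a))  # True: the maps agree
```

Note how little data this requires: a few dozen floating-point coordinates per encounter, instead of streams of images, which is the spirit of the limited-bandwidth exchange described above.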

Right now we're building color-coded 3D meshes or maps, in which the color carries some semantic information, like "green" corresponds to grass and "magenta" to a building. But as humans, we have a much more sophisticated understanding of reality, and a lot of prior knowledge about relationships between objects. For instance, if I were looking for a bed, I would go to the bedroom instead of exploring the entire house. Once you start to understand the complex relationships between things, you can be much smarter about what the robot can do in the environment. We're trying to move from capturing just one layer of semantics to a more hierarchical representation in which the robots understand rooms, buildings, and other concepts.
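As a toy illustration of the two ideas above, the sketch below maps per-vertex semantic labels to mesh colors, then layers a room-and-building hierarchy on top. The labels, colors, and data structures are placeholders, not Kimera's actual representation.

```python
# Illustrative sketch: flat semantic coloring plus a hierarchical layer.
SEMANTIC_COLORS = {
    "grass":    (0, 255, 0),     # green, as in the example above
    "building": (255, 0, 255),   # magenta
    "unknown":  (0, 0, 0),
}

def colorize(vertex_labels):
    """Map each mesh vertex's semantic label to an RGB color."""
    return [SEMANTIC_COLORS.get(label, SEMANTIC_COLORS["unknown"])
            for label in vertex_labels]

# One possible hierarchical layer: objects grouped into rooms, rooms into
# a building, so a robot looking for a bed can go straight to the bedroom.
scene_hierarchy = {
    "building": {
        "bedroom": ["bed", "lamp"],
        "kitchen": ["table", "sink"],
    }
}

print(colorize(["grass", "building", "grass"]))
```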

Q: What kinds of applications might Kimera and similar technologies lead to in the future?

How: Autonomous vehicle companies are doing a lot of mapping of the world and learning from the environments they're in. The holy grail would be if these vehicles could communicate with each other and share information; then they could improve their models and maps much more quickly. The current solutions out there are individualized. If a truck pulls up next to you, you can't see in a certain direction. Could another vehicle provide a field of view that your vehicle otherwise doesn't have? It's a futuristic idea, because it requires vehicles to communicate in new ways, and there are privacy issues to overcome. But if we could resolve those issues, you could imagine a significantly improved safety situation, where you have access to data from multiple perspectives, not just your own field of view.

Carlone: These technologies will have a lot of applications. Earlier I mentioned search and rescue. Imagine that you want to explore a forest and look for survivors, or map buildings after an earthquake in a way that helps first responders reach people who are trapped. Another setting where these technologies could be applied is in factories. Currently, robots deployed in factories are quite rigid: they follow patterns on the floor and aren't really able to understand their surroundings. But if you're thinking about much more flexible factories in the future, robots will have to cooperate with humans and exist in a much less structured environment.
