This content originally appeared on DEV Community and was authored by Sophie Lee
Imagine a world where someone could look after your house while you are away, check in on an elderly relative, and act as your virtual assistant. Well, you no longer have to imagine, because this is already happening. The virtual assistant in question is a home robot called Amazon Astro, which can help look after the security of your house.
To do this, however, the robot has to navigate its surroundings just as we humans do. Navigating our environment feels natural to us, yet many mechanisms are involved, and those mechanisms come to the surface when we try to replicate a similar process in a robot.
Mr. Karthik Poduval, a renowned software development engineer at Amazon, was instrumental in enabling these robots to have sight and navigate their surroundings. Mr. Poduval gives us insights into how robots "see" and interpret their environment.
The Eyes of the System
The vision system comprises cameras and depth sensors, which allow robots to perceive their surroundings with remarkable accuracy.
At the heart of the vision system lies one key innovation: enabling machines to understand depth.
There are two main components in the robot's depth perception system:
Stereo Camera Systems: The system uses two cameras to build a three-dimensional view of the world. The disparity (difference) between where a feature appears in the left and right images can be measured, and that disparity can then be converted into a depth.
When a scene lacks clear details (like a plain wall), the stereo camera system sometimes uses LEDs or lasers to project extra light onto it. This adds texture, making it easier for the cameras to match the two images and calculate the depth.
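The disparity-to-depth conversion described above follows a simple geometric relationship: depth Z = f · B / d, where f is the focal length in pixels, B is the baseline (distance between the two cameras), and d is the disparity. Here is a minimal sketch of that formula; the numbers below are illustrative examples, not Astro's actual camera parameters.

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Convert a pixel disparity into depth using Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# A feature seen 40 px apart by cameras 6 cm apart, with an 800 px focal length,
# sits 1.2 m away:
print(depth_from_disparity(40, 800, 0.06))  # 1.2
```

Note how depth is inversely proportional to disparity: nearby objects shift a lot between the two images, distant objects barely at all.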
Time of Flight (ToF) Systems: These advanced sensors measure the time it takes for light to bounce off objects, creating highly accurate depth maps.
Synchronisation: The Key to Clear Images
Mr. Poduval also elaborates on the importance of synchronisation for obtaining clear images. In a stereo pair, the two cameras must capture their pictures at exactly the same time.
To ensure this, a special signal (called an FSIN signal) is sent from the "leader" camera to the "follower" camera, and within one or two frames both cameras are perfectly in sync. This synchronisation is crucial, especially when objects are moving fast: if the cameras capture at different times, the images cannot be matched up, which can result in blurry or incorrect depth.
Further, infrared illuminators and global shutter cameras help robots see in low light. Global shutter cameras capture the entire image at once, avoiding the warping (tilting or stretching) effect that rolling shutters can produce when objects move quickly.
The Software Behind the Eyes
He also touches on the sophisticated software that interprets the visual data. Algorithms like SLAM (Simultaneous Localization and Mapping) allow robots not only to see their environment but also to understand their own location within it.
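Real SLAM is far more involved, but the core idea of its localization half can be sketched in one dimension: the robot blends where its odometry predicts it is with where a sighting of a mapped landmark says it is. The weights and numbers below are purely illustrative.

```python
def localize_step(prev_x, odom_dx, landmark_x, measured_range, w=0.5):
    """Blend a motion prediction with a map-based observation (1-D toy example)."""
    predicted = prev_x + odom_dx             # where odometry says we are
    observed = landmark_x - measured_range   # where the landmark sighting says we are
    return w * predicted + (1 - w) * observed

# Odometry says we moved 1.0 m from x=0; a wall mapped at x=5.0 reads 3.9 m away,
# so the fused estimate lands between the two:
print(localize_step(0.0, 1.0, 5.0, 3.9))  # 1.05
```

Full SLAM additionally builds the map itself while localizing against it, typically with probabilistic filters or graph optimization rather than a fixed blend weight.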
Sensor fusion and the Robot Operating System (ROS) also help the machine operate smoothly. Sensor fusion combines inputs from various sensors; for example, it merges information from wheel encoders (to measure how far the robot has moved), ultrasound sensors (to detect obstacles), and cliff detectors (to avoid falling off edges) into one clear picture of what is happening.
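A toy version of that fusion step might look like the following. Everything here, including the function name, the tick count, and the thresholds, is a made-up illustration of combining the three sensor inputs named above into one state.

```python
def fuse(encoder_ticks, ticks_per_meter, ultrasound_m, cliff_detected,
         obstacle_threshold_m=0.3):
    """Combine wheel-encoder, ultrasound, and cliff readings into one state."""
    distance_m = encoder_ticks / ticks_per_meter  # odometry from wheel encoders
    return {
        "distance_traveled_m": distance_m,
        "obstacle_ahead": ultrasound_m < obstacle_threshold_m,      # ultrasound
        "safe_to_move": (not cliff_detected)                        # cliff detector
                        and ultrasound_m >= obstacle_threshold_m,
    }

# 2048 ticks at 1024 ticks/m means 2 m traveled; an object 25 cm away blocks motion:
print(fuse(encoder_ticks=2048, ticks_per_meter=1024,
           ultrasound_m=0.25, cliff_detected=False))
```

Production systems weight and filter these inputs (for example with Kalman filters) rather than applying hard thresholds, but the principle of merging complementary sensors is the same.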
ROS, in turn, is software that allows different parts of the robot (like its sensors and motors) to talk to each other.
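ROS organises this communication around named topics: one node publishes messages on a topic, and any node subscribed to that topic receives them. The snippet below is a tiny plain-Python stand-in for that pattern (it does not use the actual ROS libraries), just to illustrate the idea.

```python
class TopicBus:
    """A minimal publish/subscribe bus, mimicking ROS topics in spirit."""
    def __init__(self):
        self._subscribers = {}

    def subscribe(self, topic, callback):
        self._subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        for callback in self._subscribers.get(topic, []):
            callback(message)

bus = TopicBus()
readings = []
bus.subscribe("/ultrasound", readings.append)  # a motion node listens for ranges
bus.publish("/ultrasound", 0.25)               # a sensor node reports 0.25 m
print(readings)  # [0.25]
```

The decoupling is the point: the sensor node needs no knowledge of who is listening, which makes it easy to add, remove, or swap components.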
Going Beyond
All of these mechanisms combine to ensure the smooth functioning of the vision and sensing system of the Amazon Astro home robot, in which Mr. Poduval has played a crucial role.
Combining sophisticated hardware, like stereo cameras and time-of-flight sensors, with advanced software algorithms significantly improves the robot's ability to take on complex tasks.
As this technology continues to evolve, it will be fascinating to see how it shapes the future of human-robot interaction and cooperation, integrating robots into our daily lives.
Sophie Lee | Sciencx (2024-11-06T18:16:40+00:00) Enabling Robots to Perceive: Robotics Visionary Shares Insights on Next-Gen Vision Systems. Retrieved from https://www.scien.cx/2024/11/06/enabling-robots-to-perceive-robotics-visionary-shares-insights-on-next-gen-vision-systems/