
On Sensoriums & Self-Organizing Networks

October 28, 2020

One of the perks of working with cutting-edge technology is that it tends to open doors for you that you alone couldn’t open. Over the course of my career, I’ve been able to meet some pretty fascinating people because of the products I worked with. Like Tom Wheeler, the former chairman of the FCC, who dropped by for a demo of the Structure Sensor at CES. Or Malia Obama, who I had the chance to speak with while she was an intern on the set of Extant. And Tim O’Reilly, the legendary technology conference and publishing luminary, who provided an on-the-spot testimonial for the Structure Sensor as part of our preparation for our Kickstarter campaign.

It was Tim who first introduced me to the concept of a sensorium.

So What Is a Sensorium?

The concept of a sensorium begins with biology. Most sentient creatures have more than one sense that they rely on to understand their environment. And these senses typically do not operate independently, but rather in concert with each other to inform the organism that they serve.

But how does the organism interpret the signals it receives from different senses? How does it prioritize one over others, or combine sensory inputs to deduce information? How do its learned behaviors influence which senses it pays more attention to? And how does its social environment shape the way it uses its senses? The answer to these questions lies within that organism’s sensorium.

Summed up, the sensorium is the manner in which an organism interprets and prioritizes what it senses. It is shaped by past behaviors and by the organism’s social environment in ways that elevate some senses while de-emphasizing others.

In the context of 3D sensing, Tim O’Reilly suggested that the broader deployment of this sensing modality into computing platforms could fundamentally change the sensorium of sensor-equipped devices.

Nowhere is this more apparent today than in the world of robotics.

Robotic Sensoriums

Robots and their capabilities have evolved in lockstep with the sensory inputs available to their designers.

Photo by Photos Hobby on Unsplash

Initially, industrial robotic arms had almost no sensors and operated according to pre-programmed routines. In the next evolution of robotics, devices like automated guided vehicles (AGVs) gained basic obstacle sensors to ensure they didn’t run over unmarked items (or people!) as they performed their pre-programmed routines. The arrival of more sophisticated perception sensors and feedback systems has allowed for the creation of autonomous mobile robots (AMRs) and collaborative robots (or cobots) that can work in close proximity to people, with significantly reduced risk to their human counterparts.

A modern AMR might have a dozen or more sensors onboard. And, like any other multi-sensory organism, it must employ its sensorium to optimize how it uses those sensory inputs to operate. Until recently, that sensorium was effectively programmed by a software engineer, who wrote algorithms to guide the decision-making process within the robot. Emerging deep learning techniques are now letting the robot evolve its own sensorium, much as a biological organism evolves its behavior based on sensory inputs and feedback.
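To make the engineer-programmed version of a sensorium concrete, here is a minimal sketch in Python. The sensor names, confidence values, and hand-tuned threshold are hypothetical and chosen for illustration only, not Tangram’s (or any vendor’s) actual stack; the point is simply that a human decides, in fixed rules, which sense the robot trusts.

```python
# A hand-coded "sensorium": fixed rules, written by an engineer, decide which
# range sensor to trust for obstacle distance. All names, weights, and the
# threshold below are hypothetical.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Reading:
    distance_m: Optional[float]   # None if the sensor returned nothing usable
    confidence: float             # 0.0 - 1.0, estimated per sensor


def fused_obstacle_distance(lidar: Reading, depth_cam: Reading,
                            sonar: Reading) -> Optional[float]:
    # Rule 1: trust the lidar outright when its confidence is high
    # (a threshold the engineer picked, not something the robot learned).
    if lidar.distance_m is not None and lidar.confidence > 0.8:
        return lidar.distance_m

    # Rule 2: otherwise, average the remaining readings, weighted by confidence.
    usable = [r for r in (lidar, depth_cam, sonar)
              if r.distance_m is not None and r.confidence > 0]
    if not usable:
        return None
    total_weight = sum(r.confidence for r in usable)
    return sum(r.distance_m * r.confidence for r in usable) / total_weight


# Example: the depth camera is blinded by glare, so the lidar and sonar dominate.
print(fused_obstacle_distance(Reading(2.1, 0.6), Reading(None, 0.0), Reading(2.4, 0.5)))
```

A learned sensorium replaces those fixed thresholds and weights with behavior the robot adapts from its own sensory feedback over time.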

Sensoriums and Self-Organizing Networks

Everyday consumers often seamlessly accept technology into their lives without realizing how fully a particular approach has become embedded in their world.

Take computer vision, for instance. Consumers are blissfully unaware of how Facebook automatically tags them in their friends’ photos, how they can drive through a toll booth without stopping, or how they can now unlock an iPhone just by looking at it.

In the same way, self-organizing networks are blossoming into a technology that is positively impacting our day-to-day lives. At home, the clearest example is mesh-networked WiFi systems, which optimize the delivery of bandwidth to a diverse set of devices in a diverse set of locations. In industry, the concept of self-organizing networks is finding a home with robots, specifically in how they share sensory data.

Consider this scenario: a logistics facility deploys 10 AMRs to perform a range of tasks, such as cross-docking, delivering remote goods to packing stations, and waste removal.

To set up those AMRs, an initial map of the facility is created, and downloaded to each robot. However, no facility is static. And the map itself can’t show the presence of moving and movable objects, like workers, equipment and inventory. Thus, each AMR must also rely on its active senses to safely and effectively navigate while it does its job.

But what if those 10 AMRs could cooperate and share sensory data with each other? They would then have greatly enhanced sensing capabilities that would boost safety and efficiency.

This self-organizing behavior is effectively a result of the robots’ social environment. Each robot then draws on its live senses, its learned data, and the sensing and learning of the other robots in the network to interpret the world around it. In other words, these robots have a sophisticated sensorium to work with.
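As an illustration only, here is one way that sharing could be sketched in Python. The SharedWorldModel class, robot IDs, and coordinates below are hypothetical placeholders for whatever fleet-level data exchange a real deployment would use; the sketch just shows the mechanic of one robot acting on another robot’s observation.

```python
# A hypothetical sketch of fleet-level sensory sharing: each AMR publishes
# obstacle observations it detects, and every robot can query the pooled data.
from dataclasses import dataclass, field


@dataclass(frozen=True)
class Observation:
    robot_id: str
    position: tuple       # (x, y) in shared facility coordinates, metres
    timestamp: float      # seconds since the start of the shift


@dataclass
class SharedWorldModel:
    observations: set = field(default_factory=set)

    def publish(self, obs: Observation) -> None:
        """A robot broadcasts something it sensed, e.g. a pallet left in an aisle."""
        self.observations.add(obs)

    def obstacles_near(self, position: tuple, radius_m: float) -> list:
        """Any robot can query obstacles reported by the whole fleet."""
        px, py = position
        return [o for o in self.observations
                if (o.position[0] - px) ** 2 + (o.position[1] - py) ** 2 <= radius_m ** 2]


# "amr-03" spots a blocked aisle; "amr-07", approaching the same area, can reroute
# before its own sensors ever see the obstruction.
world = SharedWorldModel()
world.publish(Observation("amr-03", (12.0, 4.5), 3601.2))
print(world.obstacles_near(position=(11.0, 5.0), radius_m=3.0))
```

The key point is that the second robot reacts to an obstruction it has never directly sensed, because another robot’s observation has become part of its sensorium.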

Making More Sense for Robots

The more sensing and data robots can receive, the more complete their sensoriums. And, as Tim O’Reilly told us back in 2013, the addition of new sensing modalities to a sensorium makes new capabilities possible.

Our goal at Tangram is to make it simple to add more sensors (and multiple sensors) to any vision-enabled platform, whether it be a robot, a drone, or a new vision-enabled device the world has yet to experience. If you’d like to enhance your own robot’s sensorium, get in touch!
