In the beginning, there were pictures. Simple, static pictures of all sorts of scenes and objects: Dogs. Cars. Houses. People. Fed into systems like Deep Dream, they were used to train classifiers, which then generated their own hallucinated outputs from those inputs. Some of the results were predictably hilarious. But some were also incredibly realistic, and hard to distinguish from the real-world inputs from which they were derived.
Next came pre-recorded video. In this phase of image-based machine learning (hereafter referred to as “ML”), video streams were often used to train systems to separate objects from environments, or to discern different environmental states based on multiple observed factors. These formed some of the first data sets used to train machine learning systems for autonomous vehicles (aka “AVs”). But, and this is important to note, they were not used in real time during an AV’s operation.
Now, however, the applications that rely on imagery-trained ML systems have significantly leveled up. These new applications are low-latency and real-time, and they span a range of industries and use cases, from self-driving cars to supply chain robots.
The real-time nature of these applications presents a critical challenge that must be properly managed by the sensor systems that feed them: the data they receive must be accurate, because it is processed and analyzed in real time to determine the application’s immediate and subsequent actions.
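To make that constraint concrete, here is a minimal Python sketch of a freshness guard in a perception loop. The names, the 50 ms budget, and the queue interface are illustrative assumptions, not a prescription:

```python
import time

# Illustrative latency budget: frames older than this are treated as stale.
MAX_FRAME_AGE_S = 0.050

def frame_is_fresh(frame_timestamp_s: float) -> bool:
    """Return True if the frame is recent enough to act on."""
    return (time.monotonic() - frame_timestamp_s) <= MAX_FRAME_AGE_S

def run_perception_loop(frame_queue, perception_step):
    """Consume frames, skipping any that arrive too late to matter.

    `frame_queue` yields dicts with a 'timestamp_s' key (monotonic clock);
    `perception_step` is whatever inference and planning hand-off you run.
    """
    for frame in frame_queue:
        if not frame_is_fresh(frame["timestamp_s"]):
            # Acting on stale data can be worse than skipping a cycle.
            continue
        perception_step(frame)
```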
Sensors, as we know, can fail in multiple ways during deployment:
Worse yet, sensors often fail silently. They still emit a data stream, and the ML system assumes that this means they are functioning properly. If any of the above scenarios is occurring, however, the sensors will be feeding anomalous data into the ML system. This can create corrupted data sets that lead to immediate errors, or to compounding errors that grow over time.
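As a rough illustration of what failing silently can look like, the sketch below (Python, with made-up thresholds) monitors a scalar sensor stream for a few common symptoms: NaNs, out-of-range readings, and values that are stuck, i.e. still flowing but frozen:

```python
import math
from collections import deque

class SensorHealthMonitor:
    """Flag common 'silent' failure symptoms in a scalar sensor stream.

    The valid range and stuck-value window below are illustrative
    placeholders; real values depend on the sensor and the application.
    """

    def __init__(self, valid_range=(0.0, 100.0), stuck_window=50):
        self.valid_min, self.valid_max = valid_range
        self.recent = deque(maxlen=stuck_window)

    def check(self, reading: float) -> list:
        issues = []
        if math.isnan(reading):
            issues.append("nan_reading")
        elif not (self.valid_min <= reading <= self.valid_max):
            issues.append("out_of_range")
        self.recent.append(reading)
        # A long run of identical values often means a wedged driver or a
        # frozen frame, even though data is still "flowing".
        if len(self.recent) == self.recent.maxlen and len(set(self.recent)) == 1:
            issues.append("stuck_value")
        return issues
```

In a real pipeline, a flagged stream would be down-weighted, masked, or escalated rather than silently consumed downstream.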
Consider the below scenario:
Some of the current thinking around sensor failures in machine learning systems suggests that the errors themselves are a valid input for training. This is true, to a degree. The challenge, however, is that the system needs to be able to recognize that a sensor is failing and properly classify that input as such. This is easier to do under controlled circumstances. In the real world, relying on these processes can become a riskier proposition.
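One way to exploit failure data under those controlled circumstances is to inject simulated faults into training images and label them explicitly, so the model learns to recognize degraded input rather than trust it. The sketch below uses NumPy; the fault types and magnitudes are made up for illustration:

```python
import numpy as np

def inject_fault(image: np.ndarray, rng: np.random.Generator):
    """Return a (possibly corrupted) copy of an 8-bit image plus a fault label."""
    fault = str(rng.choice(["none", "dropout", "saturation", "noise"]))
    corrupted = image.copy()
    if fault == "dropout":
        # Simulate partial frame loss: zero out a random horizontal band.
        start = int(rng.integers(0, max(1, image.shape[0] // 2)))
        corrupted[start:start + image.shape[0] // 4] = 0
    elif fault == "saturation":
        # Simulate an over-exposed sensor.
        corrupted = np.clip(corrupted.astype(np.int32) + 120, 0, 255).astype(np.uint8)
    elif fault == "noise":
        # Simulate heavy read noise.
        noisy = corrupted.astype(np.float64) + rng.normal(0, 25, size=image.shape)
        corrupted = np.clip(noisy, 0, 255).astype(np.uint8)
    return corrupted, fault
```

Training on these (image, fault label) pairs gives the system a fighting chance of classifying a degraded sensor as such, but it does not replace catching the failure at runtime.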
Therefore, the best practice remains to keep sensors operating as accurately and reliably as possible. Achieving this requires effort. It involves:
This is not an insignificant amount of work. It requires data analysis, sensor expertise, calibration expertise and copious amounts of engineering time. But, as is the case with any data-driven system, it’s a matter of garbage in, garbage out. The investment in proper sensor operation pays off when data sets deliver the expected outcomes, and the ML-powered agent can operate successfully in the real world.
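Calibration is a good example of where that engineering time goes. One common health signal is reprojection error: how far, in pixels, the calibrated camera model and the actual feature detections disagree. The sketch below is a simplified monitor, and the 2-pixel threshold is purely illustrative:

```python
import numpy as np

# Illustrative threshold: if reprojection error drifts well above what was
# observed at calibration time, the intrinsics or extrinsics have likely
# shifted (e.g., after a bump or thermal cycling) and recalibration is due.
RECALIBRATION_THRESHOLD_PX = 2.0

def mean_reprojection_error(observed_px: np.ndarray, predicted_px: np.ndarray) -> float:
    """Mean Euclidean distance (pixels) between detected features and where
    the current calibration predicts they should appear (both Nx2 arrays)."""
    return float(np.linalg.norm(observed_px - predicted_px, axis=1).mean())

def needs_recalibration(observed_px: np.ndarray, predicted_px: np.ndarray) -> bool:
    """True when calibration drift exceeds the illustrative threshold."""
    return mean_reprojection_error(observed_px, predicted_px) > RECALIBRATION_THRESHOLD_PX
```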
Of course, we can help ease this burden. Tangram Vision automates sensor management, ensuring that otherwise-silent failures are detected and handled efficiently, and that uptime and performance are maximized.
Tangram Vision helps perception teams develop and scale autonomy faster.