It turns out that selling calibration software can sometimes be a challenge. Don’t get me wrong, we do sell it. It’s just that many of our customers first commit themselves to a long, frustrating walk in the calibration desert before deciding to work with us.
As we enter our fourth year, we figured we’d summarize what our customers often thought they could do to solve calibration before they started working with us. Let me warn you in advance: we are absolutely plugging our calibration suite in this post! And, for what it’s worth, all of the mistaken approaches we’re about to describe can be used to create an operational calibration stack. As long as you don’t mind spending years and hundreds of thousands of dollars first. So, with that said, let’s go wander in the desert for a while…
We’re standing on the shoulders of giants. A decade ago, there was substantial innovation around calibration that led to some of the most widely used techniques and tools that are still around today. Think Zhang’s Method. Or some of the original calibration toolsets that emerged from research labs at institutions like ETH Zurich and the University of Colorado-Boulder. And, ever since these projects launched, print shops the world over have wondered why they’ve been getting print orders for rectangular chess boards (here at Tangram Vision, we call it long chess, and wide chess).
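For readers new to the topic, the core idea behind those chessboard-based tools is model fitting: estimate camera parameters that best explain observed target points. Here is a deliberately toy sketch of that idea (all names and values are ours, for illustration only; the camera pose is assumed known and identity, with no lens distortion, whereas Zhang’s method jointly estimates pose, intrinsics, and distortion from many views of a planar board):

```python
import numpy as np

# Ground-truth pinhole intrinsics (synthetic values for illustration)
fx_true, cx_true = 600.0, 320.0

# Synthetic 3D target points expressed in the camera frame. We assume the
# camera pose is known and identity -- a deliberate simplification.
rng = np.random.default_rng(42)
pts = rng.uniform(low=[-1.0, -1.0, 2.0], high=[1.0, 1.0, 5.0], size=(100, 3))

# Project to pixel u-coordinates with the true model: u = fx * X/Z + cx
u = fx_true * pts[:, 0] / pts[:, 2] + cx_true

# "Calibration" here reduces to a linear least-squares fit of (fx, cx)
A = np.stack([pts[:, 0] / pts[:, 2], np.ones(len(pts))], axis=1)
fx_est, cx_est = np.linalg.lstsq(A, u, rcond=None)[0]

print(f"fx = {fx_est:.1f}, cx = {cx_est:.1f}")
```

Real calibration adds noise, distortion models, unknown poses, and nonlinear optimization on top of this, which is exactly where the engineering effort balloons.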
Without these calibration innovations, we wouldn’t have had a foundation from which to build our own modern suite of tools. And, while we appreciate those contributions from a decade or more ago, they have now become a source of misunderstanding within the robotics and perception communities.
Put simply, these tools were created when robotics was largely a hobbyist pursuit, and sensor arrays were often no more complex than a couple of cameras. As a result, they simply can’t scale to solve calibration for modern, sophisticated sensor arrays – yet, many roboticists and perception engineers still erroneously think that they do.
As a result, nearly every customer we work with has already spent months or even years trying to adapt these tools to their modern multi-modal sensor arrays, often with only incremental results. To be fair, some have gotten to the point where they actually have developed calibration systems that work — granted, they take hours or even days to process a calibration. But, the point remains: robotics companies have spent precious time and engineering resources on a fool’s errand, all because of this ingrained misunderstanding.
(I mentioned that we will be plugging our calibration suite in this article, so I will not disappoint. Here’s the first shameless plug for our software: you can calibrate as many cameras, LiDARs, and depth sensors as you have right now, in seconds, with the free trial version of MetriCal. Really, download it now and try it. Even if you’ve already spent two years trying to build your own system.)
Perhaps you’ve tried building on top of an outdated tool, and realized that it was a dead end. Many have. And many have subsequently decided that the next most viable option is to start from scratch.
And, yes, it is totally possible to build calibration from scratch. In fact, this is exactly what we have done at Tangram Vision! Now, keep in mind that we’re a team of perception engineers who have built multiple calibration systems from scratch before; even so, it still took our expert team nearly three years to build our own multi-modal system. If you also have an expert team and three years to spare, you can do the same. We even wrote a series of blog posts on how to do this if you'd like to start now.
Perhaps it’s because of that lingering misconception that calibration is a solved challenge. Or perhaps it’s simply because calibration is an unsexy, in-the-weeds operational task that isn’t nearly as much fun as building SLAM, scene recognition, or any other flashier perception challenge. Regardless of the reason, calibration is often relegated to the back burner. In many cases, we see it assigned as a half-time or quarter-time task to a single engineer.
The irony is that the perception areas that receive the lion’s share of engineering time are often heavily reliant on properly calibrated sensors if they are to perform correctly. Consider, for instance, the number of companies that now blend sensor data with artificial intelligence. These systems rely on sensor data entering a machine learning pipeline to train the AI system to recognize objects, segment scenes, etc. But what if the sensor data entering that pipeline is not calibrated? The training data will be incorrect, and the resulting model unusable. Deploying that model will lead to erroneous results and wasted time.
Understaffing your calibration efforts is really an extended symptom of assigning low priority to calibration.
This can take the form of what we mentioned above: assigning calibration as a low-priority side task to an engineer or two whose focus is elsewhere. But understaffing for calibration often remains an issue even when a dedicated perception engineer has been hired to tackle the challenge.
This is because building a scalable calibration system isn’t just a perception engineering task. It’s also a systems engineering task (multiplexing signals from multiple sensors on the host), a database engineering task (tracking calibrations across devices over long periods of time), an electrical engineering task (time-synchronizing multiple sensors via PTP), an operations task (creating processes and instructions for successfully running calibration at customer sites), a UI/UX design task (creating end-user-facing surfaces for running calibration processes and visualizing results), and so on.
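To make the database-engineering slice concrete, here is a minimal sketch of tracking calibrations per device over time. The schema, table names, and values are hypothetical, for illustration only; they are not MetriCal’s actual data model:

```python
import json
import sqlite3

# Hypothetical schema: one row per calibration run, per device
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE calibrations (
        id          INTEGER PRIMARY KEY,
        device_id   TEXT NOT NULL,
        captured_at TEXT NOT NULL,  -- ISO 8601 timestamp
        params      TEXT NOT NULL   -- serialized intrinsics/extrinsics
    )
""")
rows = [
    ("robot-007", "2024-01-02T10:00:00Z", json.dumps({"fx": 601.2})),
    ("robot-007", "2024-03-15T09:30:00Z", json.dumps({"fx": 598.7})),
]
conn.executemany(
    "INSERT INTO calibrations (device_id, captured_at, params) VALUES (?, ?, ?)",
    rows,
)

# Fetch the most recent calibration for a device; ISO 8601 strings
# sort chronologically, so ORDER BY works on the raw text
latest = conn.execute(
    """
    SELECT params FROM calibrations
    WHERE device_id = ?
    ORDER BY captured_at DESC
    LIMIT 1
    """,
    ("robot-007",),
).fetchone()
latest_params = json.loads(latest[0])
```

Even this toy version hints at the real questions a production system must answer: which calibration is current for which unit, how parameters drift over time, and when a recalibration is due.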
Hiring one person either to do all of the above by themselves, or to act as project manager across your entire organization, does not guarantee that calibration will be completed, or that it will be performant and scalable when it is.
This is the calibration mistake we see most often with customers that are on the cusp of deploying a scaled autonomous fleet. There are plenty of band-aids and crutches that can be applied to achieve a sensor system calibration (like, well, those clunky home-brewed calibration systems we mentioned above that can successfully calibrate in a few hours or days). And these can deliver a false sense of security that calibration is solved and that scalable deployments are possible with just a few more tweaks. That is, until the first deployment. Your customers expect an autonomous system that works 24/7, yet when calibration is required, that system has to go offline for hours, or even days.
Waiting until it’s too late is truly the culmination of all of the above challenges. In this situation, calibration is often handed off to an engineer or two as a side project. It was supposed to be finished at the beginning of the development cycle, but was continually de-prioritized as other needs emerged. Those one or two engineers may have finally hacked together a system that can successfully achieve a calibration; however, that system takes hours to calibrate a single unit and can only be operated by the engineers who built it, which makes it expensive to run.
Does this sound like where you are now? We can help you get ready to scale. Yes, this is our second plug for our calibration suite, which you can download now and trial for free.
This is a bonus mistake, because spotting it requires a deeper understanding of sensor calibration and calibration systems, and of the hidden value they contain that can help other parts of an autonomy stack.
When calibration becomes a repeatable, predictable, operationalized process, it can start to reveal information that goes beyond the calibration values themselves. We now regularly help our partners use calibration as a diagnostic to understand the performance and quality of other aspects of autonomous systems.
For instance, an operationalized calibration system can reveal when a vendor is delivering chassis with sensor mounts machined outside of agreed upon tolerances. It can reveal mean time between failure (MTBF) values for components like servos and encoders. And it can help enforce quality control measures for sensor components like lenses and coatings.
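As a concrete sketch of that first diagnostic, a calibrated extrinsic translation can be compared against the nominal mount position from the chassis drawings. The nominal position, tolerance, and function name below are all made-up values for illustration:

```python
import numpy as np

# Hypothetical nominal mount position (meters) and machining tolerance
NOMINAL_T = np.array([0.500, 0.000, 1.200])
TOLERANCE_M = 0.005  # 5 mm, per the (hypothetical) vendor agreement

def mount_within_tolerance(estimated_t: np.ndarray) -> bool:
    """True if the calibrated mount position is within tolerance of nominal."""
    return bool(np.linalg.norm(estimated_t - NOMINAL_T) <= TOLERANCE_M)

# ~1.4 mm deviation: this unit passes
in_spec = mount_within_tolerance(np.array([0.501, 0.000, 1.199]))
# ~12 mm deviation: this unit suggests the vendor's batch is out of spec
out_of_spec = mount_within_tolerance(np.array([0.512, 0.003, 1.200]))
```

Run across a fleet, checks like this turn each calibration into a free quality-control data point on the vendors and components upstream of it.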
Calibration is a complex beast that suffers from many misconceptions. Yes, they’re a challenge for Tangram Vision when we start talking with a prospect about what our software can do for them. But these misconceptions are an even bigger challenge for those companies that fall into any or all of the six traps we’ve discussed in this post.
We built Tangram Vision because we’ve been the engineers tasked with calibration in the past. We’ve gone through all of these challenges ourselves, and resolved to build tools to save future engineers from the pain we went through when we were in their shoes. You can still go ahead and pursue building calibration on your own. But you can also try our calibration tools with a free 45-day trial by downloading MetriCal today. If you don’t, we’re going to have to update this blog post with the 7th calibration mistake autonomy companies make…
Tangram Vision helps perception teams develop and scale autonomy faster.