While 2021 was a fascinating year for global events, it was equally interesting for the world of perception and computer vision. Here are our predictions for what will happen in our industry in 2022:
1. Computer Vision and Perception will split into two distinct fields.
Until recently, many considered computer vision and perception to be so closely coupled as to be indistinguishable. Being a computer vision engineer typically meant understanding perception engineering as well. After all, computer vision algorithms rely on data inputs from sensors, and generating valid sensor data requires knowing how to create and optimize perception data sources. But in 2021, something interesting happened. Computer vision became increasingly coupled with machine learning techniques (for instance, feeding an ML system a dataset of precaptured imagery and training classifiers to identify certain objects or events). For the engineers working in this area of computer vision, understanding perception and sensors is largely irrelevant. Many of the new tools also fall under the rubric of “low code” or “no code”, thereby lowering the barrier to entry for any capable software engineer. Compared to the advanced coding and mathematics skills traditionally required of a computer vision engineer, it’s a much more democratic environment.
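To make the contrast concrete, here is a minimal sketch of that ML-driven workflow, assuming PyTorch and torchvision and a folder of precaptured, pre-labeled imagery. The directory layout, model choice, and hyperparameters are illustrative assumptions, not a prescribed pipeline; the point is that nothing in it requires knowledge of the sensor that captured the images.

```python
# Fine-tuning a pretrained classifier on a folder of precaptured imagery.
# Assumes a layout of dataset/train/<class_name>/*.jpg (illustrative only).
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Precaptured imagery, already organized into one folder per class label.
train_data = datasets.ImageFolder("dataset/train", transform=transform)
loader = DataLoader(train_data, batch_size=32, shuffle=True)

# Start from a pretrained backbone and swap in a head sized for our classes.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_data.classes))

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```

Everything above operates on files on disk; the camera, its calibration, and its noise characteristics never enter the picture, which is exactly why this style of work attracts a different kind of engineer.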
With that said, those of us who have been in computer vision for a decade or more will recognize that our “brand” has become diluted by this influx of new programmers and programs. What we once knew to be computer vision now means something entirely different to a large swath of the industry. As a result, there has been a noticeable and organic trend over the past couple of years to “rebrand” classical computer vision with more distinctive and relevant descriptions. There are now spatial computing companies. There are metaverse platforms. And, like Tangram Vision, there are perception companies and platforms.
We expect that in 2022 these distinctions will harden, with traditional computer vision engineers and companies shifting their identities to reflect the distinct fields in which they work.
2. Event cameras will have their first real adoption event
As perception (not computer vision!) engineers, we’ve been interested in event cameras (also known as neuromorphic cameras) since they first emerged from research labs in the early 2010s. Event cameras have a number of inherent benefits that should theoretically make them a superior choice to many other sensing modalities in a number of circumstances: microsecond-scale latency, very high dynamic range, minimal motion blur, and a sparse, low-bandwidth data stream that only reports changes in the scene.
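For intuition, here is a rough sketch of what that sparse output looks like compared to dense frames. The Event structure and the accumulation helper are illustrative assumptions, not any particular vendor's API.

```python
# Instead of full frames at a fixed rate, an event camera emits an
# asynchronous stream of per-pixel brightness changes.
from dataclasses import dataclass

import numpy as np


@dataclass
class Event:
    x: int          # pixel column
    y: int          # pixel row
    t_us: int       # timestamp in microseconds
    polarity: int   # +1 brightness increase, -1 decrease


def accumulate(events: list[Event], width: int, height: int) -> np.ndarray:
    """Collapse a short window of events into a frame-like image for
    downstream algorithms that still expect dense input."""
    frame = np.zeros((height, width), dtype=np.int32)
    for e in events:
        frame[e.y, e.x] += e.polarity
    return frame


# A mostly static scene yields very few events, which is the source of the
# low-latency, low-bandwidth appeal.
events = [Event(10, 20, 5, +1), Event(11, 20, 12, -1)]
frame = accumulate(events, width=640, height=480)
print(frame[20, 10], frame[20, 11])  # only the changed pixels carry data
```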
So why haven’t they been adopted en masse already? In our opinion, there is a classic catch-22 at work: prospective users are hesitant to buy because prices are high and availability is low, while manufacturers are hesitant to ramp up production because demand is low, so prices stay high.
With that said, Sony’s recent announcement of a partnership with Prophesee suggests that there will be a real push to drive event camera adoption in 2022 with higher production and lower prices. Where will we first see them deployed at scale, and by whom? Our hunch is that there may be a role for event cameras as a substitute for high-speed machine vision cameras in industrial settings. Let’s see if we’re right!
3. Consolidation in the LiDAR industry
2020 and 2021 were the years of the LiDAR SPAC. Ouster, Velodyne, Quanergy, Luminar, Aeva, Innoviz, and AEye all became publicly traded companies via this unique merger strategy over those two years. What does that mean for them? It means that they all gained lots of cash, as well as much more liquid equity. What does it mean for their competitors that did not SPAC? It means that they are at a financial disadvantage, and they are now acquisition targets for all of the SPAC’d LiDAR companies.
A wave of solid-state LiDAR companies that were funded in the past five years are now approaching a critical point of maturity. Either they will have landed a few anchor customers that let them raise further capital, or they will be forced to be acquired or to merge with a similar competitor. We’ve seen this happen already, with Ouster acquiring solid-state LiDAR startup Sense Photonics last October. We’ll see who’s next for acquisition in the coming year.
4. The first robotics startup emerges with no computer vision or perception engineers.
A decade ago, I wrote an op-ed for TechCrunch entitled “In The Future, The Business Founder Will Not Be Ignored” which somewhat predicted today’s low-code/no-code movement. In 2012, building a web-based product or mobile app still required a decent amount of technical sophistication. By 2017, it no longer did.
Similarly, I believe we are now reaching a point where building a robotics company can be done in a similar low-code/no-code fashion. A number of companies and open-source toolkits have evolved over the past decade to make this possible. What might this stack look like?
Well, first, you’d start with a Clearpath Robotics chassis or a Universal Robots cobot. You’d then use ROS 2 with PickNik’s MoveIt for your fundamental robot operation needs. For your computer vision stack, you might start with a few OpenCV libraries. You could train that vision system with an ML tool like Roboflow. And for integrating and maintaining your perception system, you could most certainly use Tangram Vision.
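As a hedged sketch of how those off-the-shelf pieces might fit together, here is a minimal ROS 2 (rclpy) node that subscribes to a camera topic and runs a stock OpenCV operation on each frame. The topic name, node wiring, and placeholder edge detector are illustrative assumptions, not a recommended architecture.

```python
# A minimal "no perception engineers required" vision node: ROS 2 handles the
# plumbing, OpenCV handles the pixels.
import cv2
import rclpy
from cv_bridge import CvBridge
from rclpy.node import Node
from sensor_msgs.msg import Image


class VisionNode(Node):
    def __init__(self):
        super().__init__("vision_node")
        self.bridge = CvBridge()
        # Image topic published by the robot's camera driver (assumed name).
        self.create_subscription(Image, "/camera/image_raw", self.on_image, 10)

    def on_image(self, msg: Image):
        frame = self.bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Placeholder "perception": edge detection. In practice, this is where
        # a classifier trained with an ML tool would plug in.
        edges = cv2.Canny(gray, 100, 200)
        self.get_logger().info(f"non-zero edge pixels: {int((edges > 0).sum())}")


def main():
    rclpy.init()
    rclpy.spin(VisionNode())
    rclpy.shutdown()


if __name__ == "__main__":
    main()
```

Every component here is off the shelf; the hard, unglamorous work that remains is keeping the sensors calibrated and the data trustworthy, which is where a perception platform earns its keep.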
So there you have it. Four predictions for perception in 2022. Will any of these turn out to be true? We’re confident that at least one or two will be. If you’d like to keep up with what we write, be sure to follow us on Twitter, subscribe to our blog RSS feed, and subscribe to our newsletter!