In his book Competing Against Luck, the late and influential Harvard Business School professor Clayton Christensen presented his Jobs-to-be-done theory of product development. In essence, when users buy a product, they “hire” it to get a specific job done. Every other need serves to complement that specific job in a way that’s best for the user.
The canonical example Christensen presented revolves around the motivations of milkshake consumers. A product development team looking to increase milkshake sales asked consumers about what they wanted to see in a milkshake. Thicker consistency? More flavors? No matter what consumers claimed, none of these changes moved the needle. So, the team started asking a new question: what was the milkshake “hired” to do for these consumers? This question made all the difference: they soon learned that many consumers bought a milkshake just to make their commute more interesting! Addressing the job-to-be-done, i.e. creating a more interesting commute through milkshakes, ultimately led to a more successful product.
I have been working with and developing perception sensors, from design to production to utilization, for over 7 years now. This is not a brag; my most memorable moments have been of the “painful lessons learned” type, not the “Eureka!” type. Until recently, I was so busy struggling with sensors that I had never stepped back to ask exactly what job I was hiring them for. Solving the struggle was my job! Sensor struggles are a fact of life, and there’s not a single person I know in robotics, automation, or embedded vision who would claim otherwise.
However, let’s put on our milkshake salesman caps and think about the real job-to-be-done for perception sensors. What do we hire them for? In my humble opinion:
The Job of Sensing is to provide useful information about the state of the world now to support decisions made in the future.
No matter the sensor, this is its job at the end of the day. What would it take to hire a sensor that doesn’t just do this job, but excels at it?
The answer: A lot! Users don’t just expect sensors to do the bare minimum; they expect them to do it…
A lot of great sensors claim to be plug-and-play. However, when it takes hours (sometimes even days!) to muddle through driver software, compiler flags, and undocumented gotchas before that first stream starts running, that claim rings hollow. It gets worse when an implementation is even more specialized and documentation is nowhere to be found. This leaves the user with the pain of writing custom integration software and maintaining it over the course of their product’s life cycle. So much for expedience.
Job: failed.
Getting that sensor operational and streaming may seem like a success, but the reality is that the data it delivers likely does not fit the application it will be used for. “Another perception sensor might work better here,” you may think…until the realization kicks in that the product you’re working on has been built around the first sensor’s abilities and quirks. Changing to a different sensor and testing its functionalities will be a nightmare. Sunk costs abound.
Job: failed.
Sensors rarely work alone. The most valuable data comes from sensors working together, each adding its own unique viewpoint (literally) to an agent’s understanding of the world. Putting these pieces together takes a lot of careful engineering and information, and getting it wrong can seriously degrade performance. Worse yet, most manufacturers don’t care about playing nice with other sensors, leaving it to the user to design and engineer this complex system.
Job: failed.
New users and seasoned programmers alike dread working with a new sensor. How it will act in software isn’t the same as how it will act in testing, which isn’t the same as how it will act deployed in the real world. Knowing which levers and knobs to pull to ensure consistent performance takes serious experience, enough that those with this ability have a special title: computer vision engineers. If you need an entire field’s worth of training to successfully use sensor suites, there’s something wrong with how we, the sensor industry, are serving users.
Job: failed.
A corollary of the Jobs-to-be-done model is that for every hire, there must be a fire. There must be a conscious decision to let go of whatever is currently doing the job in order to adopt the new hire. For some of us, the current “hires” might get it all done with flying colors. However, as I’ve talked to more and more robotics and automation creators, I have found that most of us are unsatisfied with the way things are, and are still looking to get the job of sensing done right.
Enter Tangram Vision. My co-founder Adam and I founded Tangram because we understand the pains of sensor integration, maintenance, and support; we’ve been on both sides of the table, having built products that use sensors as well as the sensors that products use. These experiences have led us to cultivate a unique philosophy:
Everyone should be able to get The Job of Sensing done, and get it done well.
We’re building a platform that not only does The Job for robotics and automation, but does it…
…without engineering overhead.
…with flexibility built-in.
…through many lenses.
…without a learning curve.
…for anyone and everyone wanting to enter the world of computer vision.
Our goal is pretty ambitious: we want any sensor user to look at how they do sensor integration and maintenance, “fire” that approach on the spot, and “hire” the Tangram Vision Platform to do it better. Quite frankly, we’re pretty damn excited to build a company around this ambition, because it’s about time someone did.
If we do our jobs right, you can trust Tangram Vision to get The Job done.
The Tangram Vision Platform is currently in development; we’ll be launching our private beta in the next few months. Interested in getting a first look at the platform? Sign up for our notification list today, and we’ll let you know as soon as it’s ready for release.