In [previous](https://www.tangramvision.com/blog/camera-modeling-exploring-distortion-and-distortion-models-part-i) [articles](https://www.tangramvision.com/blog/camera-modeling-exploring-distortion-and-distortion-models-part-ii), we discussed some of the modeling approaches we take here at Tangram, and explored the history of how those approaches were formed. In [part 2](https://www.tangramvision.com/blog/camera-modeling-exploring-distortion-and-distortion-models-part-ii), we mentioned the topic of *projective compensation*. Projective compensation is prevalent across most forms of optimization, yet it is rarely discussed in a computer vision context. We aim to provide both a brief intuition behind projective compensation and examples of where it crops up in camera calibration. All optimization processes need to consider this topic to some extent, but it holds particular importance when performing calibration. Limiting the effects of projective compensation is one of the easiest ways to ensure that a calibration is consistent and stable.
# What is Projective Compensation?
In an optimization process, errors in the estimation of one parameter **project** into the solutions for other parameters. Likewise, a change in one parameter’s solution **compensates** for errors in the solutions of other parameters. This process of **projective compensation** can occur across any number of parameters in a system.
## Visualizing a 2D example
💡 Follow along: all of the code used to run the least-squares problems below can be found in the Tangram Vision Blog Repository
Let us first consider a Very Simple Problem: we want to determine the \\(x\\) position of a point \\(q\\) (the circle) located between two known points (the triangles). Unfortunately, the only way to find \\(x\\) is to measure its position *relative to our known points*. This means that we’ll have two distances (\\(d_1\\) and \\(d_2\\)) that should help us solve our problem.
After our measurements, we determine that \\(d_1 = 1.12\\) and \\(d_2 = 1.86\\). Since we have more observations (two) than unknowns (one), this problem is *over-determined*, and we can use a least-squares optimization to solve it. Running this process, we recover \\(x = 6.13\\).
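The full code lives in the blog repository mentioned above; as a minimal sketch in Python with SciPy (the known-point coordinates \\(x = 5\\) and \\(x = 8\\) are assumptions chosen to be consistent with the numbers quoted in this article, not taken from the original code), the setup looks roughly like this:

```python
import numpy as np
from scipy.optimize import least_squares

# Assumed known-point x-coordinates (illustrative; chosen to be consistent
# with the distances and solution quoted in the article).
P1_X, P2_X = 5.0, 8.0
D1, D2 = 1.12, 1.86  # measured distances from each known point to q

def residuals(params):
    x = params[0]
    # Each residual is the predicted distance minus the measured distance.
    return np.array([(x - P1_X) - D1, (P2_X - x) - D2])

result = least_squares(residuals, x0=[6.0])
print(f"x = {result.x[0]:.3f}")                            # ~6.13
print(f"residual norm = {np.linalg.norm(result.fun):.4f}")
```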
So far, so good, if not terribly interesting. In fact, using an optimization process for this might be a bit of overkill, given that we could just as easily have averaged the two position estimates implied by our observations.
Let’s add a twist: suppose that when we were recording the positions of our known points (the triangles), we accidentally got the \\(y\\)-coordinate of one of them wrong. Instead of recording the point at \\((8, 0)\\), we recorded it at \\((8, 0.6)\\).
Notice that neither \\(d_1\\) nor \\(d_2\\) has changed, since those were honest observations written down correctly. However, the position of \\(q\\) is no longer as tightly determined: based on our two observations, \\(q\\) could be at either the pink circle's position or the purple circle's position.
Luckily, our problem is still over-determined (i.e. we have more observations of the point than unknowns to solve for), so we can again perform a least-squares optimization.
When we do, the solution for \\(x\\) changes dramatically, and the final result has a much larger residual error (by a factor of 5-6!).
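Continuing the hedged sketch from above (same assumed coordinates, but with the second known point now mis-recorded at \\((8, 0.6)\\)), we can reproduce this behavior by computing the second residual as a full 2D distance:

```python
import numpy as np
from scipy.optimize import least_squares

P1 = np.array([5.0, 0.0])  # assumed position, as in the earlier sketch
P2 = np.array([8.0, 0.6])  # erroneously recorded position of the second point
D1, D2 = 1.12, 1.86        # the distance measurements themselves are unchanged

def residuals(params):
    q = np.array([params[0], 0.0])  # q stays constrained to the x-axis
    return np.array([
        np.linalg.norm(q - P1) - D1,
        np.linalg.norm(q - P2) - D2,
    ])

result = least_squares(residuals, x0=[6.0])
print(f"x = {result.x[0]:.3f}")                             # shifts away from 6.13
print(f"residual norm = {np.linalg.norm(result.fun):.4f}")  # several times larger
```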
**This is projective compensation at work**. Errors in one quantity (the \\(y\\)-coordinate of a control point) are being *compensated* for by *projecting* that error across other quantities that are being estimated (the \\(x\\)-coordinate of our point of interest, \\(q\\)). The least-squares optimization still ran, and still minimized error as much as it could, but that error is still in the optimization and has to be accounted for somewhere.
## The Effect of Data
One way to reduce the impact of projective compensation is to have more data. Let’s say we add two more points, as shown in the following figure:
These two new points agree that \\(q\\) is likely at the pink location, whereas our erroneous point at \\((8, 0.6)\\) still implies that \\(q\\) should be at the purple location. How much of an effect does this have?
If we say \\(d_3 = 1.36\\) and \\(d_4 = 1.02\\), the least-squares solution becomes \\(x = 6.173\\). This final value does start to approach our original value of 6.13, but doesn’t quite get there all the way. Additional data makes the situation better, but it doesn’t entirely ameliorate our problem.
This demonstrates how projective compensation can also be an effect of our underlying *model*. If we can’t optimize the \\(x\\) and \\(y\\)-coordinates of our “known” points, projective compensation will still occur; we just can’t do anything about it.
# Solving for Projective Compensation
Sadly, the heading above is somewhat of a trick — it is not really possible to “solve for” or directly quantify projective compensation. We can detect if it is occurring, but we cannot quantify the exact amount of error that is being projected across each individual parameter; this effect worsens as more parameters are added to our optimization. **As long as there are any errors within the system, there will always be some degree of projective compensation across a global optimization process.**
A least-squares optimization process, as is often used in perception, is inherently designed to minimize the sum of squared errors across all observations and parameters. Projective compensation is described by the *correlations* between estimated parameters, not the parameters themselves; there aren’t any tools within the least-squares optimization itself that can minimize this error any more than it already does.
Since we can’t prevent or minimize projective compensation explicitly, the best we can do is to characterize it by analyzing the *output* of an optimization. In the context of calibration, we’re usually doing this as a quality check on the final results, to make sure that the calibration parameters and extrinsic geometry we have computed were estimated as statistically independent entities.
## The Giveaway: Parameter Correlations & Covariance
Luckily for us, a least-squares optimization produces both final parameter estimates *and* a set of variances and covariances for those estimates. These covariances act as a smoke signal: if the covariance (or, when normalized, the correlation) between two parameters is large, we can bet that the two are highly coupled and exhibiting projective compensation.
Unfortunately, in our example above, we couldn’t do this at all! We constrained our problem such that we were not optimizing the \\(x\\)- and \\(y\\)-coordinates of our “known” points. This didn’t stop projective compensation from affecting our final solution, but it did mean that we couldn’t *observe* it happening, since we couldn’t relate the errors we were witnessing back to a correlation between estimated parameters.
💡 What constitutes a “high correlation” is somewhat open to interpretation. A good rule of thumb is that a correlation between two parameters exceeding 0.7 is quite large, the 0.5-0.7 range is large but not egregious, and anything less than 0.5 is manageable. These are broad strokes, however, so they don’t always apply!
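As an illustrative sketch (not a Tangram Vision API): given a multi-parameter `scipy.optimize.least_squares` result, the usual first-order approximation of the parameter covariance is \\(\sigma^2 (J^\top J)^{-1}\\), which can be normalized into correlations and screened against the rule of thumb above. The helper name and the threshold handling below are hypothetical and only for illustration.

```python
import numpy as np

def parameter_correlations(result):
    """Approximate parameter correlations from a scipy least_squares result."""
    J = result.jac
    n_obs, n_params = J.shape
    dof = max(n_obs - n_params, 1)
    # result.cost is 0.5 * sum of squared residuals.
    sigma2 = 2.0 * result.cost / dof
    cov = sigma2 * np.linalg.inv(J.T @ J)   # first-order covariance estimate
    std = np.sqrt(np.diag(cov))
    return cov / np.outer(std, std)

# Flag parameter pairs whose correlation magnitude exceeds the 0.7 rule of thumb:
# corr = parameter_correlations(result)
# flagged = np.argwhere(np.triu(np.abs(corr) > 0.7, k=1))
```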
💡 This is one reason why Tangram Vision’s calibration system optimizes the target field / object space in addition to intrinsics and extrinsics of the cameras in the system. Without doing so, we wouldn’t be able to detect these errors or notify users of them at all, but we’d still have to live with the consequences!
# Calibration: Projective Compensation In Practice
When it comes to camera calibration, certain projective compensations are well-studied and understood. We can break down the reasons for projective compensation into two categories:
1. A modeling deficiency, e.g. parameters not being entirely statistically independent.
2. A deficiency in our input data.
Minimizing these projective compensations can be as simple as collecting images from a wider variety of perspectives or increasing the size of the observed target field / checkerboard / targets themselves (both of which address data deficiencies); modeling deficiencies, by contrast, are addressed by choosing a better model.
We describe each of these two scenarios in a bit more detail.
## Model Deficiencies
Camera calibration is necessarily driven by model selection. No amount of data will correct your model if it is introducing parameter correlations due to the very nature of the math.
For instance, some camera model parameters are not strictly statistically independent, meaning that changes in the value of one can affect the others. In some scenarios, this is necessary and expected: in Brown-Conrady distortion, we expect \\(k_1\\), \\(k_2\\), \\(k_3\\), etc. to be correlated, since they describe the same physical effect modeled as a polynomial series. In other scenarios it is troublesome and unnecessary, such as with \\(f_x\\) and \\(f_y\\), which are always highly correlated because they represent the same underlying value, \\(f\\).
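To make that coupling concrete: one common statement of the Brown-Conrady radial term maps an undistorted radius \\(r\\) to \\(r\,(1 + k_1 r^2 + k_2 r^4 + k_3 r^6)\\), so \\(k_1\\), \\(k_2\\), and \\(k_3\\) all scale powers of the same quantity, and a change in one coefficient can be partially absorbed by the others.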
💡 This was somewhat touched upon in our previous articles, where we discussed different modeling approaches to both projection / focal length, as well as lens distortions.
For now, we’ll avoid diving too deep into the intuition behind certain modeling practices, as the specifics are particular to certain camera types and configurations. Just know that choosing the right model is one way we avoid coupling our parameters together, and gives us tools to understand the relationships between the parameters and the data we’re collecting.
## Data Deficiencies
In our Very Simple Problem at the beginning of the article, we demonstrated both how bad input data can damage the solution and how additional observations can improve it (if not perfectly). The lack of sufficient input data in that system prevented us from observing every parameter independently, i.e. it affected the *observability* of the quantities being modeled. Even if a calibration problem is modeled sufficiently and correctly, observability can still suffer because of bad data.
Here is a (non-exhaustive) list of scenarios where projective compensation occurs in camera calibration due to data deficiencies:
- \\(c_x\\) and \\(c_y\\) may be strongly correlated with the \\(X\\) and \\(Y\\) components of the camera’s pose, largely due to a lack of variation in pose perspectives in the data. This results in the extrinsic parameters being tightly tied to errors in the determination of the intrinsic parameters, and vice versa.
- \\(c_x\\) may be highly correlated with \\(c_y\\), for the same reason as above. The way to handle such correlations is to add data to the adjustment with the camera rotated 90° about the \\(Z\\) axis, or to rotate the target field / checkerboard 90° instead.
- \\(f\\) may be highly correlated with all of the extrinsic parameters in the standard model if all the observations lie within a single plane. This can occur if one never moves the calibration checkerboard around much at all. In degenerate cases your calibration optimization won’t converge; if it does, it will be riddled with projective compensation, and the variances of the estimated quantities will be quite large.
There are more cases like the above. In such cases, it is usually sufficient to add more data to the calibration process. Exactly what data to add depends on the parameter correlations observed, and is specific to the particular kind of projective compensation at play.
💡 Read more about the perils of projective compensation in the Tangram Vision Platform Documentation!
# Conclusion
Projective compensation prominently affects many calibration scenarios. From the simple case described in this article to the much more complex cases seen in practice, understanding what projective compensation is and how to detect and address it is fundamental to producing stable and reliable calibrations.
Tangram Vision helps you reduce projective compensation where possible, both by providing support for a selection of models specialized for different camera types, as well as providing best-in-class documentation for understanding the right way to collect data for your needs. We’re wrapping this all up in the Tangram Vision Platform, with modules like [TVCal](https://www.tangramvision.com/sdk/multimodal-calibration) for calibration, [TVMux](https://www.tangramvision.com/sdk/sensor-stability-and-management) for streaming, and the Tangram Vision [User Hub](https://hub.tangramvision.com/) for cloud access to our tools. And, as always, check out our [Open-Source Software](https://gitlab.com/tangram-vision/oss) for more perception resources.
# Come Work With Us
Are these the kinds of problems you like to work on? We're hiring perception and computer vision engineers to join our team as we tackle these kinds of challenges and more. Visit our Careers page to see open roles.