Array cameras, a type of light-field camera, are not new. The idea of combining several low-resolution imagers to create a high-resolution image has been around for years. But it’s in the news this week because Apple has just bought a small Israeli company, LinX, that promotes itself as providing better, smaller, and more capable sensors for mobile devices using sensor arrays. This has led to a deluge of crazy headlines claiming that suddenly the iPhone will replace the DSLR. Aside from the simple fact that sensor arrays are typically fixed focal length, and their small size still limits how much light they can gather, there are other reasons to be skeptical. One of the largest advantages of building an imager from an array of smaller sensors is that it can acquire depth information at the same time as it creates an RGB image. That is very helpful for 3D applications and for face and gesture recognition, for example.
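To see how an array of imagers yields depth "for free," consider the standard stereo-triangulation relationship between any two sub-cameras in the array: a feature that shifts by some disparity between the two views sits at a depth proportional to the focal length and the baseline between them. The sketch below illustrates that geometry; the specific numbers are illustrative assumptions, not LinX or Pelican specifications.

```python
# Hedged sketch: recovering depth from the disparity between two
# sub-imagers in a camera array, using the pinhole stereo model
# Z = f * B / d. Parameter values below are illustrative only.

def depth_from_disparity(disparity_px: float,
                         focal_length_px: float,
                         baseline_m: float) -> float:
    """Triangulated depth (meters) for a feature seen in two sub-cameras.

    disparity_px    -- horizontal pixel shift of the feature between views
    focal_length_px -- focal length expressed in pixels
    baseline_m      -- distance between the two sub-cameras' optical centers
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

# Example: two sub-cameras 10 mm apart with a 1500 px focal length.
# A feature shifted 30 px between the views sits 0.5 m away.
z = depth_from_disparity(disparity_px=30, focal_length_px=1500, baseline_m=0.010)
print(round(z, 2))  # 0.5
```

Note the trade-off this formula makes visible: the tiny baselines possible inside a phone-sized array limit depth precision at long range, which is one reason these sensors target close-range uses like face and gesture recognition.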
As a practical matter, cameras that capture entire light fields have so far found uses in stationary situations where power isn’t constrained. And the additional depth information made available is very helpful for vision applications such as robotic manufacturing. German company Raytrix has a well-established product line for manufacturing and security — where 3D data is important for facial recognition — among other applications. Its solution, like Lytro’s, uses a single lens placed on top of a specially designed sensor, but it aims to provide capabilities similar to those of the array of small sensors used by LinX.
Light-field cameras, including array cameras, have had a much more challenging time cracking the mobile market. The best-known company in the space, Pelican Imaging, has been working hard on the problem for the better part of a decade. I’ve seen more than one impressive demo from them, and indeed their sensor array coupled with advanced software can provide amazing results. Their solution allows 3D image acquisition, selective focus, and depth-based applications like gesture recognition from an imager that costs about the same as the camera module in a high-end smartphone. However, it requires a lot of processing power, and for mobile that has been a big hurdle. While tablet GPUs are now fast enough to do the processing, device makers are looking for ways to reduce power consumption, not increase it.
As a result, Pelican has introduced a new line of small depth sensors that work in tandem with a mobile device’s main camera, enabling it to gather depth data as well. The secret sauce is the fusion of that depth information with the RGB output to provide a unified data stream to applications. Pelican CEO Chris Pickett maintains that the transition from smartphone to depth-enabled computing device will be as important for the mobile market as the change from feature phones to smartphones.
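The "unified data stream" idea is essentially what the computer-vision world calls RGB-D: every color pixel from the main camera is paired with a registered depth sample from the auxiliary sensor. The sketch below shows the general shape of such a fused frame; the data layout and function names are my assumptions for illustration, not Pelican's actual API.

```python
# Hedged sketch of fusing a main camera's RGB frame with a depth map
# from an auxiliary sensor into one RGB-D stream. The layout here is
# an illustrative assumption, not Pelican's actual interface.
from dataclasses import dataclass

@dataclass
class RGBDFrame:
    """One fused frame: color from the main camera, depth from the array."""
    rgb: list    # rows of (r, g, b) tuples from the primary imager
    depth: list  # rows of per-pixel depth values (meters), same grid

def fuse(rgb_frame: list, depth_map: list) -> RGBDFrame:
    """Pair each color pixel with its registered depth sample.

    Assumes the depth map has already been warped (registered) onto
    the main camera's pixel grid, so the two line up row for row.
    """
    if len(rgb_frame) != len(depth_map):
        raise ValueError("RGB and depth frames must share the same grid")
    return RGBDFrame(rgb=rgb_frame, depth=depth_map)

# A 1x2 toy frame: a white pixel 0.5 m away next to a black pixel 2 m away.
frame = fuse([[(255, 255, 255), (0, 0, 0)]], [[0.5, 2.0]])
print(frame.depth[0][1])  # 2.0
```

An application consuming this stream can then, for example, segment the foreground by thresholding `depth` while rendering from `rgb`, which is what depth-based features like gesture recognition build on.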