Lytro changed photography. Now can it get anyone to care?

How to build a hardware company

Since modern photography was invented in 1837, everything and nothing has changed. The tools have evolved — we shoot with different cameras and don’t need darkrooms for developing or controlled explosions for flash — but we’re still capturing static, two-dimensional images. Inside a nondescript office park on Terra Bella Avenue in Mountain View, California, though, a new future is coming into focus.

It’s been a long road for Lytro since the company’s founding in 2006. Its first camera was made entirely with off-the-shelf parts, essentially a prototype that turned out well enough that the company stuck it on Target shelves. It was built by founder and executive chairman Ren Ng and three others, who scraped together the fewest, best parts they could find and hacked together a way to take light-field shots with otherwise conventional camera hardware. At one point Lytro’s primary manufacturing partner lost the master recipe for the microlenses, the camera’s most important component, and it took six months before production ramped back up.

This time the team is larger and more experienced (and has diversified its partnerships), not to mention finally armed with enough cachet to get suppliers for whatever it needs. The first camera opened the doors required for Lytro to build exactly what it wanted, and what it wanted was to make Illum, an entirely new kind of camera. “With Illum,” Ng says, “we’re able to start to customize that supply chain in a very deep way… to rethink the entire imaging pipeline.” It began with the lens, a round barrel with a long zoom range and a fast aperture throughout. Lytro built a lens that can focus on a subject physically touching its glass, and can shoot at remarkably fast shutter speeds (though the camera still takes a second or two to process each shot, like an old-school Polaroid spitting out a print).

That kind of versatility and brightness is unheard of in the camera industry. The Illum has very little glass, and none of the complicated, expensive aspherical elements that traditional cameras require to focus light onto the sensor. That’s because the Illum isn’t really capturing a photograph in any traditional sense when you press the button; it doesn’t imprint reflected light on a sensor the way a traditional camera does. Instead, it’s just capturing loose data, and the processor builds a picture later. “So you can make cheaper lenses if you want, you can make lower-quality lenses acceptable, and you can build new lenses that have higher performance than you’ve ever seen before.”

The Illum also has a square blue “Lytro Button” next to its shutter release, which maps in real time the refocusable range of whatever photo you’re framing. A green border frames everything at the front of the photo, shading to deep orange at the back — the camera is constantly showing exactly how someone might tap or click through your photo. It’s a huge aid in understanding how what you see through the camera will translate into an interactive photograph. “Not only are you able to think in that extra dimension,” Lytro’s marketing chief Azmat Ali tells me, “we’ll help you to be able to frame that extra dimension. And when you can frame that extra dimension, your creativity is set free.”

As I wander the offices taking pictures of everything from the catered lunch spread (an impressive taco bar) to the unsuspecting Lytro engineers, it quickly becomes clear that shooting with the Illum is all about depth, and communicating the size of the scene. It takes a little more work and a lot more stage direction to get a great shot, but every shot is more interesting than its static equivalent. Each one is like looking into a dollhouse, a tiny 3D representation of the world with new decorations and rooms everywhere I look.

Pitts next hands me an iPhone running a basic version of the app Lytro will release alongside Illum. It’s a grid of images, like a thousand other apps. But when I tap an image to open it, the photo wobbles as it springs into place. As it moves, the perspective shifts — I briefly see shadows and reflections change, and peer around the side of the barrel of a gun. I tap on the front of the barrel, and it snaps into focus. Then I tap the man holding the gun, and his grimacing face clarifies in front of my eyes. Before I know it, I’ve spent three minutes tapping different parts of the screen, exploring every part of the scene: his hands, the gun, the costumed insanity in the background at San Diego Comic-Con. It’s a different photo each time, a story I’m helping the photographer narrate. The next photo, a close-up shot of a tape measure, shifts its focus backward to reveal the rough-handed carpenter in the background. Each photo feels more immersive, more memorable than others I’ve seen. More real, somehow.

Before Lytro was Lytro, it was Ng’s 203-page computer science thesis, entitled simply “Digital Light Field Photography.” It’s based on two decades of research into “computational photography,” a catch-all term for collecting more data with cameras. Ng combined it with his original research at Stanford, in an area called re-lighting — using computer graphics to map how light moves through and changes virtual worlds — to explore how computation could fundamentally change the things we see. His research and thesis focused on how light becomes data, and data becomes photos.

Light-field photography has been discussed since the 1990s, beginning largely with three Stanford professors, Marc Levoy, Mark Horowitz, and Pat Hanrahan. (The term “light field” was coined in 1936, and Gabriel Lippmann created something like a light-field camera in 1908, though he didn’t have a name for it.) Instead of measuring only the color and intensity of light as it hits a sensor, light-field cameras pass that light through an array of microlenses (hundreds of thousands of them, in Lytro’s case), which allows the camera to record the direction each ray of light is moving. Understanding light’s direction makes it possible to measure how far away the source of that light is. So where a traditional camera captures a 2D version of a scene, a light-field shot knows where everything in that scene actually is. A processor turns that data into a 3D model like any you’d see in a video game or special effect, and Lytro displays it as a photograph. It’s a little bit like the small bots in Prometheus, spatially mapping an entire room in order to display it back later. Or think of it as a rudimentary holodeck, projecting a simulated scene that changes as you move through and interact with it.
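The refocusing trick at the heart of all this is well documented in Ng’s thesis: once the camera knows each ray’s direction, software can re-sort those rays onto any virtual focal plane after the fact. Here is a minimal “shift-and-add” sketch of that idea in Python — the array layout, the `refocus` function name, and the `alpha` parameter are illustrative assumptions for this example, not Lytro’s actual pipeline:

```python
import numpy as np

def refocus(light_field, alpha):
    """Synthetic refocusing by shift-and-add.

    light_field: 4D array of shape (U, V, X, Y) -- a grid of U*V
    sub-aperture views, each an X-by-Y image.
    alpha: focal-plane parameter; 0 keeps the original focus, and
    nonzero values shift the plane of focus nearer or farther.
    """
    U, V, X, Y = light_field.shape
    out = np.zeros((X, Y))
    for u in range(U):
        for v in range(V):
            # Shift each view in proportion to its offset from the
            # center of the aperture, then accumulate the shifted views.
            du = int(round(alpha * (u - U // 2)))
            dv = int(round(alpha * (v - V // 2)))
            out += np.roll(light_field[u, v], (du, dv), axis=(0, 1))
    return out / (U * V)
```

With `alpha = 0` this is just the average of all the sub-aperture views — the conventional photo the lens would have recorded — while other values of `alpha` line up, and therefore sharpen, objects at different depths.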

Lytro didn’t invent the science, just found a way to turn the required technology — which was once made up of 100 DSLRs in a rack at Stanford — into a product you can hold in your hands.
