The Future of Computing Is Cameras →

Benedict Evans:

This change in assumptions applies to the sensor itself as much as to the image: rather than thinking of a ‘digital camera,’ I’d suggest that one should think about the image sensor as an input method, just like the multi-touch screen. That points not just to new types of content but to new interaction models. You started with a touch screen, and you can use that for an on-screen keyboard and for interaction models that replicate a mouse model, tapping instead of clicking. But next, you can make the keyboard smarter, or have GIFs instead of letters, and you can swipe and pinch. You go beyond virtualising the input models of an older set of hardware on the new sensor, and move to new input models. The same is true of the image sensor. We started with a camera that takes photos, and built, say, filters or a simple social network onto that, and that can be powerful. We can even take video too. But what if you use the screen itself as the camera - not a viewfinder, but the camera itself? The input can be anything that the sensors can capture, and it can be processed in any way that you can write software for.
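To make the sensor-as-input idea concrete, here is a minimal sketch, assuming Python with OpenCV (`opencv-python`) and a camera at index 0: it reads raw frames and treats frame-to-frame change as an input event rather than as a photo. The motion threshold and the "event" it triggers are arbitrary stand-ins for the richer gestures a real interaction model would recognise.

```python
# Minimal sketch: the image sensor as an input device, not a camera.
# Assumes OpenCV is installed (pip install opencv-python) and that a
# camera exists at index 0; the threshold below is illustrative only.
import cv2

cap = cv2.VideoCapture(0)  # open the default image sensor
prev_gray = None

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if prev_gray is not None:
        # Interpret frame-to-frame change as an input signal: a crude
        # "motion" event standing in for a real gesture recogniser.
        diff = cv2.absdiff(gray, prev_gray)
        if diff.mean() > 10:
            print("motion event")
    prev_gray = gray
    cv2.imshow("sensor-as-input", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to quit
        break

cap.release()
cv2.destroyAllWindows()
```

Note that nothing here ever saves an image: the frames exist only to be interpreted by software, exactly as touch events are.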

Exactly why social media has evolved from text status updates, to photos and video, to visual storytelling.

Meanwhile, even as we change what a camera or a photo means, the current explosion in computer vision means that we are also changing how the computer thinks about them. Facebook or your phone can now find pictures of your friend or your dog on the beach, but that’s probably only the most obvious application - more and more, a computer can know what's in an image, and what it might represent. That will transform Instagram, Pinterest or, of course, Tinder. But it will also have all kinds of applications that don't seem obvious now, rather as location has also enabled lots of unexpected use cases. Really, this is another incarnation of the image sensor as input rather than camera - you don't type or say 'chair' or take a photo of the chair - you show the computer the chair. So, again, you remove layers of abstraction, and you change what you have to tell the computer - just as you don't have to tell it where you are. Eric Raymond proposed that a computer should 'never ask the user for any information that it can autodetect, copy, or deduce'; computer vision changes what the computer has to ask. So it's not, really, a camera, taking photos - it's more like an eye, that can see.
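To make "you show the computer the chair" concrete, here is a hedged sketch of image classification with an off-the-shelf pretrained model. It assumes PyTorch and torchvision are installed; `chair.jpg` is a hypothetical local photo, and ResNet-50 is just one stand-in for whatever models Facebook or your phone actually run.

```python
# Minimal sketch: the computer tells you what is in the picture, so you
# never have to type "chair". Assumes torch and torchvision; "chair.jpg"
# is a hypothetical local photo used for illustration.
import torch
from PIL import Image
from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.DEFAULT
model = resnet50(weights=weights)
model.eval()

preprocess = weights.transforms()       # the resizing/normalisation the model expects
image = Image.open("chair.jpg").convert("RGB")
batch = preprocess(image).unsqueeze(0)  # add a batch dimension

with torch.no_grad():
    probs = model(batch).softmax(dim=1)
top_prob, top_idx = probs.max(dim=1)

# Raymond's rule in miniature: autodetect rather than ask.
label = weights.meta["categories"][top_idx.item()]
print(f"{label}: {top_prob.item():.1%}")
```

In Raymond's terms, the classifier autodetects the answer to a question the interface never has to ask.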