Word last week out of Singapore's Nanyang Technological University (NTU) was that a team of researchers had made a breakthrough in producing a new type of graphene-based image sensor. News sites and tech blogs quickly picked up on the "1000x more sensitive to light" aspect of the technology. Roughly translated, that's just shy of 10 stops of improvement, but is that even possible?
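To see where that "just shy of 10 stops" figure comes from: each photographic stop is a doubling of light, so a sensitivity multiplier converts to stops via a base-2 logarithm. A quick sketch:

```python
import math

# Each stop is a doubling of light, so a sensitivity multiplier
# converts to stops as: stops = log2(multiplier).
stops = math.log2(1000)
print(round(stops, 2))  # ~9.97, i.e. just shy of 10 stops
```

A full 10 stops would be a 1024x multiplier, which is why 1000x lands just under that mark.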
The idea is that the hexagonal repeating structure of graphene is able to trap photon-generated electrons for longer than conventional structures, which makes for a stronger signal. Note my use of the word "photon" and not "light". What this means is that the technology is sensitive to photons across a wide range of the electromagnetic spectrum, not just visible light, which is not necessarily that useful for most forms of digital photography. Here's a quote from Gizmodo member Little John:
"Oh dear! More of this nonsense again.
Whatever this claims it cannot lead to 1000x better low light photography.
Silicon already captures most of the light over the human visual spectrum range. There is no 1000x to be had without boosting the sensitivity beyond visible light (UV or near IR). That's useful for applications like security and machine vision, but it does nothing for photography. Even in conventional CMOS imagers, we have to use infrared cut-off filters to stop IR from reaching the sensor and messing up the color reproduction.
If you're creating normal photographs (not something freaky like IR photography) you can only use ~400-700nm light. Google "silicon QE curve" and you'll find graphs showing how efficient silicon is at capturing light over that range. There just isn't room for a 1000x improvement.
In color sensors, we do throw away light by placing red, green, or blue color filters over each pixel. If a perfect (and that doesn't mean Foveon, which is far from that) stacked RGB pixel could be built, these color filters could be eliminated. But that would only achieve something like a 2x increase in sensitivity. Not 100x. And in any case, graphene sensors would probably require the color filters anyway, until a way to effectively build stacked pixels were developed.
If someone said they can get a 2x increase in sensitivity over the visual range and a 1000x increase in near IR, it would sound somewhat credible, but there is no 1000x improvement in photography to be had here, people.
(Full disclosure: I work for Aptina, a CMOS image sensor company and have worked in the field since the early 90s.)"
Coming from an engineer at the image sensor company that partners with Nikon, I think I'll go with his answer. In other words, it's useful technology, but don't expect Nikon D4 levels of high-ISO performance at Nikon D800 base-ISO quality anytime soon.
The truth is, modern camera sensors are already close to the practical limit of their light-gathering ability. Cameras like the Nikon D7000 and D7100 are already 50% efficient at converting photons into electrical signal. The last big hurdle is the light lost to the colour filter array (Bayer pattern), but to put things in perspective, increasing the efficiency from 50% to a theoretical maximum of 100% only doubles the signal, which translates into one stop of improvement. In other words, there's a reason why people like Thom Hogan are using the term "last camera syndrome": any improvements in technology at this point will likely be incremental, short of something truly disruptive arriving on the scene. That said, the graphene-based technology does look promising; it's just not as groundbreaking for digital photography as some people would hope.
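The headroom argument above is easy to verify with the same stop arithmetic. Using the article's illustrative figures (a current sensor at ~50% efficiency, a hypothetical perfect one at 100%):

```python
import math

def stops_gained(current_qe, improved_qe):
    """Photographic stops gained by raising a sensor's overall efficiency.

    Illustrative helper, not a real sensor-characterisation formula:
    a stop is a doubling of signal, so the gain is log2 of the ratio.
    """
    return math.log2(improved_qe / current_qe)

# Going from ~50% efficiency to a perfect 100% gains exactly one stop:
print(stops_gained(0.5, 1.0))  # 1.0
```

One stop, against the ten stops a literal 1000x claim would imply, is the gap the commenter is pointing at.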
The take-home message for the average camera enthusiast is that the majority of the image noise we perceive comes from the quantum nature of light itself. If you want to reduce image noise, you have to increase the amount of light reaching the sensor, whether by using a longer exposure or by adding light to your scene with a flash.
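That "quantum nature of light" point can be made concrete. Photon arrival is a Poisson process: if a pixel collects an average of N photons, the shot noise (standard deviation) is sqrt(N), so the signal-to-noise ratio is N / sqrt(N) = sqrt(N). A minimal sketch with made-up photon counts:

```python
import math

# Poisson photon statistics: noise = sqrt(N), so SNR = N / sqrt(N) = sqrt(N).
# Quadrupling the light only doubles the SNR.
for photons in (100, 400, 1600):
    snr = photons / math.sqrt(photons)
    print(photons, round(snr, 1))  # 100 -> 10.0, 400 -> 20.0, 1600 -> 40.0
```

This is why the advice is to add light rather than hope for a sensor miracle: no sensor, graphene or otherwise, can remove the noise that arrives with the photons themselves.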