
Smartphones have fast become the greatest point-and-shoot cameras ever created, but thanks to an ingenious new Sony image sensor, they may soon take another step forward. Sony’s semiconductor business has announced the world’s first two-layer transistor-stacked CMOS sensor.
In conventional CMOS sensors, photodiodes and pixel transistors sit on the same substrate (or layer); Sony’s new technology splits them onto two separate layers.
What does this mean for image quality? According to Sony, the new design doubles each pixel’s saturation signal level, essentially allowing each pixel to capture twice as much light.
This should greatly boost the sensor’s dynamic range, while also making room for larger amp transistors that help minimize low-light noise.
The advantages should be especially noticeable in high-contrast scenes, such as those with bright sunlight and deep shadows, which have traditionally been difficult for smartphones to handle.
Today’s phones boost their dynamic range with smart multi-frame processing, but this new Sony sensor should give that software a far stronger foundation to work with.
Sony hasn’t announced when its new sensors will be mass-produced, but it did say that its new two-layer transistor pixel technology will contribute to "increasingly high-quality imagery such as smartphone images."
This is significant because Sony is by far the largest maker of smartphone camera sensors. According to Statista, it holds 42 percent of the worldwide image sensor market, and recent teardowns of the iPhone 13 Pro Max show that it uses three Sony IMX 7-series sensors.
The new sensor might be beneficial to mirrorless cameras as well, although the benefits are likely to be greatest for smaller smartphone sensors, which Sony appears to be focused on first.
The main challenge that camera phones face is getting enough light onto their sensors without making the phone itself a brick.
Recently, the answer has been substantial advances in multi-frame processing, but this new Sony sensor could be the first big hardware leap we’ve seen in a long time.
The new Sony Xperia Pro-I marked the first time Sony used a 1-inch sensor in one of its phones. However, it also showed the drawbacks of the traditional approach of using bigger sensors to capture more light: the Xperia Pro-I only uses a 12MP portion of its 20.1MP sensor, because a phone built around the entire 1-inch sensor would be prohibitively bulky.
This is why the new layered sensor is so well suited to smartphones: it dramatically increases light-gathering capability over conventional CMOS sensors without considerably expanding the chip’s size.
‘Stacked’ sensors have also made significant progress in mirrorless cameras. The Sony A9, launched in 2017, was the first full-frame camera to use a stacked chip. There, the breakthrough of the stacked design was the addition of a layer of DRAM onto the sensor itself, which greatly improved read-out speeds.
Thanks to the improvements in electronic shutters that stacked sensors bring, the recent wave of flagship mirrorless cameras has been able to offer incredibly fast burst speeds and 8K video capabilities, with the Nikon Z9 even doing away with its mechanical shutter entirely.
However, the benefits of Sony’s new two-layer transistor pixels for smartphones are more likely to be in the realms of improved dynamic range and reduced noise – and if this technology has the same impact as Sony’s mirrorless camera sensors, it could power another image quality leap for next-generation phones.