Tuesday, 15 April 2014

noise about sensor noise

Just the other day I got a comment on a post about the validity of using my GF images to postulate how much better the Sony a7s may be at its high ISO setting.

I thought about this for a bit and realised that I'd not actually done such a side by side comparison myself but had just 'observed' the changes each time a new camera comes out and thought "oh, well ... nothing startling there". So with that in mind I thought I'd download a few images (as I don't have a more modern camera) and while at it download all datasets from the same source (to get some sort of consistency).

Well, clearly it's hard to get consistency in testing sources (especially when they get their money from ads and I don't get money). So with that in mind I pulled down GF-1 and GX-7 images from DPReview, both at ISO 3200, to compare.

My view from the start was that sensors have not made any great leap forward, but that signal processing has. Physics would indicate that more pixels from the same size sensor (the GF1 is 12MP, the GX7 16MP) would give more noise per pixel, but if done smartly the camera maker can probably do some pixel binning, using the information from adjacent photo-sensor sites to "settle" the noise a bit.
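To put a rough number on that "settling", here's a minimal numpy sketch (the flat patch, signal level and noise level are all made up for illustration) of 2x2 binning: averaging four adjacent photosites roughly halves the per-pixel noise.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical flat sensor patch: constant signal plus Gaussian read noise
signal, sigma = 1000.0, 50.0
raw = signal + rng.normal(0.0, sigma, size=(512, 512))

# 2x2 binning: average each block of four adjacent photosites
binned = raw.reshape(256, 2, 256, 2).mean(axis=(1, 3))

print(round(raw.std(), 1))     # close to 50
print(round(binned.std(), 1))  # close to 25: averaging 4 samples halves the noise
```

The factor of two follows from the statistics of averaging: the standard deviation of the mean of four independent samples is the single-sample standard deviation divided by the square root of four.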

Then, when looking at the data, it's important not to confuse advances in RAW processing (such as Adobe has built into ACR) with advances in in-camera processing, where adjustments are made in hardware to the sensor data before the RAW file is written.

To take any cunning software tricks in the demosaic to an RGB image out of the equation, I used DCRAW to produce linear 16-bit files of the images from both cameras.

First, let's look at the GX7


Interesting. I had to double check that I had not used the 'low light' version they now include. It looks very clustered to the left; perhaps they were trying to extend the (known to be reduced at high ISO) dynamic range?

GF1


This image shows far less clustering and perhaps suggests a better actual dynamic range.

Processing the GX7 file with parameters to use camera white balance (and gamma) [dcraw -v -w -6 -T P1030049.RW2] I get this



where it seems they've chosen to skew the data away from the lower end of the recording spectrum (where floor noise will be loudest) and then, at demosaic time, stretch the histogram to fit, keeping black as black.
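That stretch-but-keep-black-as-black idea can be sketched in a few lines of numpy (the black level, bit depths and sample values here are invented for illustration):

```python
import numpy as np

def stretch_keep_black(data, black, white, out_max=65535):
    """Subtract the black level, then stretch so `white` lands at full
    scale: black stays black while the rest of the histogram spreads out."""
    scaled = (data.astype(np.float64) - black) / (white - black)
    return np.clip(scaled, 0.0, 1.0) * out_max

# Hypothetical 12-bit values sitting above a noisy pedestal of 512
data = np.array([512, 1000, 2000, 4095])
print(stretch_keep_black(data, black=512, white=4095))
# the pedestal (512) maps to 0, saturation (4095) maps to 65535
```

Everything at or below the pedestal is clipped to zero, which is exactly how the noisiest part of the recording range gets discarded.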

Of course you'd never see this when looking at the JPG (or probably even a Lightroom or ACR image).
So when I've evened out the dark areas of the GX (as will happen in any processing) I get this:



I've chosen to look at the RED channel for each camera (which has more noise than green) to show the levels of noise. The noise in the GF is striking but there is quite a bit of noise in the GX too ...

Probably Panasonic could have reduced that in the GF1 by ignoring the bottom end (as they've done in the GX) and perhaps also by doing a bit of pixel binning before writing out the RAW file.

To simulate that I've downsized the view of the GF1


Looks remarkably similar to me now. If I could take an image with the GF where I ramped up the sensor gain circuit, ignored the bottom half, and added some pixel binning, then I'm sure they'd look about the same.

Why may they have taken this approach? Well, if you are interested I suggest reading this article over at the University of Chicago (totally worth the read for the technically inclined). The author examines how, counter-intuitively, you can keep apparent tonal range as long as you have enough noise to dither the quantisation steps and cover the banding up.
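That dithering point is easy to demonstrate with a small numpy sketch (ramp, step size and noise level are all invented for illustration): quantise a smooth tonal ramp coarsely and you get hard bands, but add noise before quantising and the local average still tracks the ramp.

```python
import numpy as np

rng = np.random.default_rng(1)

ramp = np.linspace(0.0, 4.0, 10000)   # a smooth tonal ramp
step = 1.0                            # coarse quantisation: only 5 levels

# Quantising the clean ramp posterises it into hard bands
banded = np.round(ramp / step) * step
# Adding noise before quantising dithers those bands away
dithered = np.round((ramp + rng.normal(0.0, 0.5, ramp.size)) / step) * step

# Compare local (100-sample patch) means against the original ramp
def patch_error(q):
    return np.abs(q.reshape(100, 100).mean(axis=1)
                  - ramp.reshape(100, 100).mean(axis=1)).mean()

print(patch_error(banded))    # large: the bands have destroyed local tonality
print(patch_error(dithered))  # small: the local average still tracks the ramp
```

The noise costs you per-pixel precision, but once you average over a patch (as your eye does), the in-between tones are recovered rather than lost to posterisation.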

NB: from that page



To take his point a bit further, I've taken a stepped image and selectively added noise into the RED channel (top noise band), then BLUE (middle noise band), then both RED and BLUE (bottom noise band).
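For anyone who wants to play with that simulation, here is a rough numpy sketch of the idea (the wedge dimensions and noise level are picked arbitrarily, not taken from my actual image):

```python
import numpy as np

rng = np.random.default_rng(2)

h, w, steps = 300, 512, 8
row = np.repeat(np.linspace(0.2, 0.8, steps), w // steps)  # one stepped row
rgb = np.stack([np.tile(row, (h, 1))] * 3, axis=-1)        # identical R, G, B

band = lambda: rng.normal(0.0, 0.05, (100, w))
rgb[0:100,   :, 0] += band()   # top band: noise in RED only
rgb[100:200, :, 2] += band()   # middle band: noise in BLUE only
rgb[200:300, :, 0] += band()   # bottom band: noise in both RED ...
rgb[200:300, :, 2] += band()   # ... and BLUE
rgb = np.clip(rgb, 0.0, 1.0)

# GREEN stays clean everywhere; the mismatch between channels is what
# shows up visually as coloured mottling rather than neutral grain
print(rgb[:100, :, 0].std() > rgb[:100, :, 1].std())
```

Viewing the resulting array as an image makes the point directly: where only one channel is noisy the grain takes on that channel's colour.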


Now, do you see a 'mottled' colour effect in your images like anything in that simulation? If so, it's the effect of colour noise differing between the channels. I discussed that some years ago over here. Actually, to make it clearer than I understood on that page, the JPG noise being a 'funny kind of worse' in those images was the result of the JPG noise reduction algorithm smoothing (smudging / wavelet blurring) the noise, leaving the colour channels different. (Then there is high frequency noise and low frequency noise ...)


So I expect that Panasonic is just more cunning than people give them credit for ... they've reduced the effective tonality of the sensor (by pushing the histogram hump to the right) and cut out as much of the floor noise as they could.

So my view now is that there haven't been any really big changes in sensors, just more pixels added in, with the signal processing angle worked to wring out a few more bits of gain.


PS: I had a bit of a late brain wave and thought I'd go sus out what DxO said too ...

Overall Ratings:

The GX and the GF1 are rated similarly, while my GH1 (my preferred camera anyway) is rated higher than the 'newer sensor'.

So then, signal to noise ratios


Again similar, with the GH1 leading by a nose.

Then dynamic range


The GH1 leads the pack at ISO 100 - 200, before the GF1 falls away a bit while the GX holds level with my old faithful GH1.

So I feel that, all round, this backs up my view that there hasn't been much stunning change in sensors, and I really do hope for something excellent from the Sony a7s.

3 comments:

Lens Bubble said...

The Canon sensors are the same. More advances in noise reduction, but the sensors themselves have stayed pretty much the same for the last few years. Nice clean and noiseless JPEGs, but when you shoot RAW you will see nothing much has changed. The opposite used to be true.

Anonymous said...

I've known the article from the Chicago group for quite some time now, it's really nice.
When looking at sensors, you can only improve tiny aspects of the total noise with the conventional design.
Most important is of course the total amount of light available. Only at the very highest sensitivities, or when pushing shadows extremely, does read noise, for example, really come into play. www.sensorgen.info is a nice site where one can compare sensor data. Saturation and read noise are different for different camera sensors, and are also weighted differently, possibly on purpose.
But I have to agree that software and processing are the metrics which improve the most. Until new technologies arrive and saturate, that's a normal process. I'm a scientist myself and we often try to convince companies to use new algorithms or technology. They don't want to. They want to make revenue first, and they get that predictably by improving existing technology slowly, step by step, until nothing improves anymore. It's easier. Only newcomers are open to investing in new technology to get market share.
Recently I got interested in Fuji after reading about random color filter arrays (CFAs). This way you can design demosaicing algorithms to get optimal peak signal/noise ratio and a different distribution of noise. Publications show that you have more noise (the total amount being the same) distributed to so-called chrominance noise. This has less correlation and is visually more pleasant to the eye (we don't see it). Foveon is another promising technology.
Nice thoughts from your site here, I enjoyed reading it. As you mentioned binning, something of interest:
a) "Using visible SNR (vSNR) to compare image quality of pixel binning and digital resizing" - Joyce Farrell, Mike Okincha, Manu Parmar, and Brian Wandell
b) "Analysis and processing of pixel binning for color image sensor" - Xiaodan Jin and Keigo Hirakawa, EURASIP Journal on Advances in Signal Processing 2012, 2012:125

Holger

obakesan said...

Holger

thanks for your educational comment!

I guess you may have read that I too find the Fuji approach interesting, and liked their SuperCCD technology.

Best Wishes and Happy Easter (Passover or whatever)