Just the other day I got a comment on a post questioning the validity of using my GF1 images to postulate how much better the Sony a7s might be at its high ISO settings.
I thought about this for a bit and realised that I'd never actually done such a side-by-side comparison myself. I had just 'observed' the changes each time a new camera came out and thought "oh, well ... nothing startling there". So with that in mind I thought I'd download a few images (as I don't have a more modern camera), and while at it download all the datasets from the same source (to get some sort of consistency).
Well, clearly it's hard to get consistency in testing sources (especially when they get their money from ads and I don't get money at all). So with that in mind I pulled down GF1 and GX7 images from DPReview at ISO 3200 to compare.
My view from the start has been that sensors have not made any great leap forward, but that signal processing has. Physics suggests that more pixels on the same size sensor (the GF1 is 12MP, the GX7 is 16MP) will give more noise per pixel, but if done smartly the camera maker can probably use some pixel binning (combining the information from adjacent photo-sensor sites) to "settle" the noise a bit.
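As a rough illustration of what binning buys you, here's a minimal numpy sketch (the signal and noise numbers are entirely my own toy values, not anything from Panasonic's pipeline). Averaging each 2x2 block of photosites roughly halves the random noise, since independent noise averages down as the square root of the sample count:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a flat grey patch from a noisy high-ISO sensor:
# a true signal level plus Gaussian noise at each photosite.
signal, noise_sigma = 1000.0, 50.0
raw = signal + rng.normal(0.0, noise_sigma, size=(1024, 1024))

# 2x2 pixel binning: average each block of four adjacent sites.
binned = raw.reshape(512, 2, 512, 2).mean(axis=(1, 3))

per_pixel_noise = raw.std()     # close to noise_sigma
binned_noise = binned.std()     # close to noise_sigma / 2
print(per_pixel_noise, binned_noise)
```

The cost, of course, is resolution: the binned frame has a quarter of the pixels.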
Then, when looking at the data, it's important not to confuse advances in RAW processing (such as Adobe has built into ACR) with advances in the in-camera, pre-write stage, which makes some adjustments in hardware to the data coming off the sensor — and to keep both of those distinct from what the sensor itself actually delivers.
To take the cunning software tricks in the demosaic to an RGB image out of the equation, I used DCRAW to produce linear 16-bit files from the images of both cameras.
First, let's look at the GX7:
Interesting: I had to double-check that I had not used the 'low light' version they now include. It looks very clustered to the left; perhaps they were trying to extend the (known to be reduced at high ISO) dynamic range?
This image shows far less clustering and perhaps suggests a better actual dynamic range.
Processing the GX7 file with parameters to use the camera white balance (and gamma) [dcraw -v -w -6 -T P1030049.RW2] (-w takes the camera's recorded white balance, -6 writes 16-bit output, -T writes a TIFF) I get this
where it seems they've chosen to skew the data away from the lower end of the recording range (where floor noise will be loudest) and then, at demosaic time, stretch the histogram to fit, keeping black as black. Of course you'd never see this when looking at the JPG (or probably even a Lightroom or ACR image).
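To make that "stretch, keeping black as black" idea concrete, here's a minimal numpy sketch (my own toy model, not Panasonic's actual pipeline): scale the whole channel so the histogram fills the full 16-bit range, anchored so that zero stays at zero.

```python
import numpy as np

rng = np.random.default_rng(1)

# Fake 16-bit RAW channel whose useful data sits well above the
# noisy floor, as if the camera skewed exposure toward the right.
data = rng.integers(8000, 40000, size=(256, 256)).astype(np.float64)

# Stretch so the histogram fills the full range, anchored at zero:
# out = in * (full_scale / max(in)).  Zero maps to zero, so black
# stays black; everything else is expanded proportionally.
stretched = data * (65535.0 / data.max())

print(stretched.min(), stretched.max())
```

A more sophisticated version would subtract a black level first, but the principle is the same: the quiet bottom of the range never gets amplified on its own.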
So when I've evened out the dark areas of the GX (as will happen in any processing) I get this:
I've chosen to look at the RED channel of each camera (which carries more noise than green) to show the levels of noise. The noise in the GF1 is striking, but there is quite a bit of noise in the GX7 too ...
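The kind of per-channel check I mean can be sketched like this (synthetic data here; with the real files you'd load the dcraw TIFFs instead — the noise levels below are illustrative assumptions, not measurements):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic flat grey patch with more noise in red than in green,
# as Bayer sensors typically show (half as many red sites, and a
# larger white-balance gain applied to the red channel).
h, w = 256, 256
red   = 0.5 + rng.normal(0.0, 0.08, (h, w))
green = 0.5 + rng.normal(0.0, 0.04, (h, w))

# Standard deviation over a flat area is a quick proxy for noise.
print("red noise:  ", red.std())
print("green noise:", green.std())
```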
Panasonic could probably have reduced that by ignoring the bottom end (as they've done in the GX7) and perhaps also by applying some pixel binning before writing the data out to the RAW file.
To simulate that I've downsized the view of the GF1:
It looks remarkably similar to me now. If I could take an image with the GF1 where I ramped up the sensor gain circuit, ignored the bottom half, and added some pixel binning, then I'm sure they'd look about the same.
Why might they have taken this approach? Well, if you are interested I suggest reading this article over at the University of Chicago (totally worth the read for the technically inclined). The author examines how you can keep apparent tonal range as long as you have enough noise to cover it up.
NB: from that page
To take his point a bit further, I've taken a stepped image and selectively added noise to the RED channel (top noise band), then BLUE (middle noise band), then both RED and BLUE (bottom noise band).
Now, do you see a 'mottled' colour effect in your images like anything in that simulation? If so, it's the effect of colour noise differing between the channels. I discussed that some years ago over here. Actually, to make it clearer than I understood it on that page: the JPG noise being a 'funny kind of worse' in those images was the result of the JPG noise-reduction algorithm smoothing (smudging / wavelet blurring) the noise, leaving the colour channels different from one another. (Then there is high-frequency noise and low-frequency noise ...)
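The mechanism can be shown with a numpy toy model (my own illustration, not the actual JPG engine): smoothing each channel separately doesn't make the channels agree, it just turns their fine, grain-like disagreement into coarse lumps that don't line up between channels — which the eye reads as mottled colour rather than neutral grain.

```python
import numpy as np

rng = np.random.default_rng(4)

def block_smooth(x, k=8):
    """Crude smoothing: average over k x k blocks, then expand back."""
    h, w = x.shape
    small = x.reshape(h // k, k, w // k, k).mean(axis=(1, 3))
    return np.repeat(np.repeat(small, k, axis=0), k, axis=1)

# Equal but independent noise in red and blue around a mid grey.
red  = 0.5 + rng.normal(0.0, 0.1, (128, 128))
blue = 0.5 + rng.normal(0.0, 0.1, (128, 128))

# Channel disagreement before and after per-channel smoothing:
# it shrinks but doesn't vanish, and what survives varies slowly
# across the frame (low-frequency colour mottle).
diff_before = (red - blue).std()
diff_after  = (block_smooth(red) - block_smooth(blue)).std()
print(diff_before, diff_after)
```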
So I expect that Panasonic is just more cunning than people give them credit for: they've reduced the effective tonality of the sensor (by pushing the histogram to the right) and cut out as much of the floor noise as they could.
So my view now is that there haven't been any really big changes in sensors, just more pixels and working the signal processing angle to wring a few more bits of gain out of the same data.
PS: I had a bit of a late brainwave and thought I'd go suss out what DxO said too ...
The GX7 and the GF1 are rated similarly, while my GH1 (my preferred camera anyway) is rated higher than the 'newer sensor'.
So then, signal-to-noise ratios:
Again similar, with the GH1 leading by a nose.
Then dynamic range:
The GH1 leads the pack at ISO 100-200, before the GF1 falls away a bit while the GX7 holds level with my old faithful GH1.
So all round I feel this backs up my view that there hasn't been much stunning change in sensors, and I really do hope for something excellent from the Sony a7s.