Wednesday, 3 March 2010

what's the point

The subject of this post is inspired by my (sadly many) problems in getting staff at professional image businesses to grasp some basic points about digital imaging. If I'm off track here I would appreciate someone setting me straight.

If I put a dot on a page, would you call that a point? I would ... here's a page with two points.

If the page were arranged in a grid pattern, each of the grid squares could become a pixel, which would also be a point, although it might not be the dot we started with ... but I'll get to that.

The problem is understanding how you name and describe your points, you know, the ones which make images up (not the points I'm making here).

Personally, I can't really think of a reason to describe a lone and solitary point, and remembering names is hard for most of us, so having 18 million of them is like remembering the population of Australia by name. Coordinates may be good, but that's so mathematical.

Now, while we're thinking, in the real world points must be somewhere, and if they are somewhere we can measure that. God (in her wisdom) gave us the inch for measuring things (even the inch worm measures the marigolds, right?), so why not use that as our basic measurement. Thus we can describe our points in a few ways:
  • how many dots we have in total or
  • the size of our grid (say, 1024 x 768) or
  • how many dots we have per inch.
Dots per Inch ... it's so simple ... ahh, it's so right; the world can sleep at night knowing we all speak a common language.
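These three descriptions are really the same information wearing different hats; here's a minimal sketch of how they relate (the 1024 × 768 grid comes from above, while the 8 × 6 inch page size is an assumed example of mine):

```python
# Three ways to describe the same set of points:
# total dots, grid size, and dots per inch.

width_px, height_px = 1024, 768   # the grid from the text
width_in, height_in = 8.0, 6.0    # assumed physical page size, in inches

total_dots = width_px * height_px  # description 1: how many dots in total
dpi = width_px / width_in          # description 3: dots per inch
                                   # (same value on either axis in this example)

print(f"total dots: {total_dots}")            # 786432
print(f"grid: {width_px} x {height_px}")      # description 2
print(f"dpi: {dpi:.0f}")                      # 128
```

Give me any two of page size, grid size, and dpi, and I can work out the third; that's the whole trick.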

But I wish it were so easy, as there are people out there who consider that there are all sorts of points, it seems:

  • PPI (pixels — or points — per inch, unlike the points per hour raised in parliament)
  • DPI (our friend dots per inch, although I thought a dot and a point were quite indistinguishable, especially without my readers on ...)
  • SPI (samples per inch, a clever point which is important in determining stuff about scan resolutions, but nothing useful in describing a digital image)

the list goes on I'm sure (as I can see pica in some software too, so no doubt some olde typesetters out there prefer that). Where I see simplicity, others see confusion.

So what's my problem and how is this related to the above confusion?

Well here's a typical scenario, I've got some film (say a sheet of 4x5) and I want to get it scanned so I can get it printed on one of those fancy Durst Epsilon printers. I ring up one of the top 3 professional labs in a Capital City in Australia and ask the question:
what do your scans cost and what is the maximum DPI you can scan at?
Typically I get an answer in megabytes.
(which is not helpful to me without knowing heaps more, such as: how many points this is going to provide, what film format is being scanned, and whether those megabytes are going to be delivered in 8 or 16 bits per pixel / point / dot, whatever ... seldom is this information asked for, or even comprehended when you introduce it)
Sometimes after a brief discussion I'll get an answer like:
Ohh, yes, I understand, it's 300dpi.
which the cognoscenti among you will be able to quickly identify as being quite unlikely, and certainly unlikely to be yielding many of those megabytes previously offered.
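The arithmetic behind that scepticism is simple enough to sketch; the 8-bit RGB assumption below is mine (the lab never said what it delivers), but it makes the point:

```python
# How many megabytes does a scan of a given film size at a given dpi yield?
def scan_megabytes(width_in, height_in, dpi, bits_per_channel=8, channels=3):
    """File size in MB for an uncompressed scan (assumed RGB, 8-bit default)."""
    pixels = (width_in * dpi) * (height_in * dpi)
    bytes_total = pixels * channels * bits_per_channel / 8
    return bytes_total / (1024 * 1024)

# 4x5 film at the lab's claimed 300 dpi: about 5 MB at 8-bit RGB,
# nothing like the pile of megabytes quoted earlier.
print(f"{scan_megabytes(4, 5, 300):.1f} MB")

# A 2400 dpi, 16-bit scan of the same sheet: hundreds of MB.
print(f"{scan_megabytes(4, 5, 2400, 16):.0f} MB")
```

So "300dpi" and "lots of megabytes" can't both be true of the same 4x5 scan; one of those answers is wrong.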

Sad, this place used to be so good when they just dealt with my film on Durst Enlargers and printed onto paper with light from a bulb .... still, they've only been digital for 10 years now, can't rush this sort of thing. Awfully technical ...

To compound this, I regularly see people on the various internet forums ask questions about scanning and get drowned in TLA-riddled answers.

I'm never quite sure why a question like:
should I scan my film at 2400dpi or 4800dpi?
needs an answer that mixes and matches lashings of DPI, SPI and PPI, unless the major purpose of the answer is to provide lots of EWV (which is extra wank value, for those not in the know on that particular TLA).

I'm an old fashioned guy in the digital age, so for me the thing which is central to digital imaging is the pixel, you know, that little dot which makes up the screen you're looking at right now.

As I see it, digital images are fundamentally composed of pixels, which is essentially a fancy computer name for a dot.

Describe it any way you want, a pixel is a point to a computer and is a dot to me. It might be a coloured dot, it may have only grey tones, but it is a dot. Printers lay these dots down on to paper (often blurry and indistinct, as to the left) and screens light up and show us each dot as a square (if it's a TFT monitor, as in the image above) or a blurred dot if it's a CRT.

Any digital image is a file composed of data to describe these dots.

If you're making a file you need to consider a few things, such as how many of these dots you're going to make. Naturally, with a digital camera making the image this question is probably answered for you (though there seems no end of confusion about DPI in that area).

Thus, as I see it, the central concept in digital images needs to be pixels, and DPI only enters into the discussion when you're scanning and when you're printing.

Scanners move over film and sample it (using a variety of techniques) into pixels. Since film has length and width (you can measure it) we get back to my friend the inch (not the worm, he's busy elsewhere).

Now, unless I'm worried about exactly how the dots on my piece of film will resolve into pixels (where a dot will land, and whether it will come out as one dot or two), I'm not interested in samples per inch; I am in fact only interested in knowing how many pixels will be made per inch of my film.

Thus I'm really interested in pixels, which are dots, so I'm talking DPI.

Now I'm sure that someone out there will be worried that I've glossed over something really important here, but I don't think I have (unless you're doing research in scanner resolution).

Let's go back to the first image at the top of the page with the dots on it; here it is zoomed in.

The grid on the page represents what the scanner does, it will divide the scan area into spatial coordinates and the scanner writes down the values it sees at each point.

Now if we had a blank white page with two dots on it and put it on a scanner, it will essentially assign numbers for what it sees inside each of the squares.

The left red square is empty, so it'll write down white in that square. Same for most of the squares, except for our two dots.

Even though you and I can see that they're two dots, the scanner reads only each region (in my example, each square) and assigns a number ... the left one will be some level of grey and the right one will be another level of grey. Sadly, the left dot will contribute to the value of the square placed over the right dot. We will no longer have two dots but a bigger single blob. Cruel, but that's how it goes when you draw lines on maps without thinking (just like government does all the time).

Now if you wanted to make sure that you got the two dots as distinctly two dots, you would need to increase your sampling; that's right, the samples per inch (or, for the lovers of TLAs, SPI).

Again, the cognoscenti among you will identify that this is essentially breaking the image up into more dots. Yep, you guessed it: more samples is more dots, so again it boils back down to how many dots we have per inch.
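The blob-making above can be sketched with a one-dimensional toy scanner (the 12-cell "page" and the dot positions are made up by me, not taken from the image above):

```python
# Two dots on a white line, sampled at two resolutions.
# Each "sample" is the average of the region it covers (a box filter),
# which is roughly what each scanner cell does.

page = [0.0] * 12
page[4] = 1.0   # first dot
page[7] = 1.0   # second dot

def sample(signal, n_samples):
    """Average the signal into n_samples equal regions."""
    region = len(signal) // n_samples
    return [sum(signal[i * region:(i + 1) * region]) / region
            for i in range(n_samples)]

coarse = sample(page, 3)    # big squares: both dots fall in the middle region
fine   = sample(page, 12)   # small squares: each dot gets its own sample

print(coarse)   # [0.0, 0.5, 0.0] -> one grey blob, the dots have merged
print(fine)     # two distinct dots survive
```

At 3 samples both dots land in the middle region and come out as a single mid-grey blob; at 12 samples they stay two dots. Same page, different SPI.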

With all of this going through my mind while I'm talking on the phone to the (probably lovely) lady at the lab I am still no closer to getting any answer as to how bloody many pixels I'm going to get out of my sheet of 4x5 when they scan it ...

Eventually I think to reverse engineer the problem and ask:
ok, what's the largest print you can make from my sheet of 4x5
to which the answer is:
oh, we can produce an A0 print at 300dpi from 4x5
Right, I now reach for my calculator and plug in some numbers ... A0 is about 33.1 × 46.8 inches, so at 300 dots per inch the long side of the print is about 14,000 pixels ... which will come from my 5 inches of film length, so that's around a 2800 dpi scan ... or a wee bit better than my Epson 4990 gives me (which I believe is good for a genuine 2400 dpi scan)
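For anyone following along at home, that back-of-the-envelope reverse engineering looks like this:

```python
# Reverse-engineer the scan resolution from
# "we can produce an A0 print at 300dpi from 4x5".

a0_long_in = 46.8      # long side of A0 (1189 mm) in inches
print_dpi = 300        # the lab's stated print resolution
film_long_in = 5.0     # long side of a 4x5 sheet of film

print_pixels = a0_long_in * print_dpi    # pixels along the print's long side
scan_dpi = print_pixels / film_long_in   # pixels made per inch of film

print(f"{print_pixels:.0f} pixels -> about {scan_dpi:.0f} dpi scan")
```

Which is the number I wanted in the first place: pixels per inch of my film.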

Why couldn't they just say so!

So my plea to all the printing / scanning bureaus out there is this:
please try to grasp the point of DPI, it'll save us all a nasty headache


Charles Maclauchlan said...

strikes me that the confusion between PPI/DPI is somewhat similar to the bit/baud concerns of last century ... or affect / effect. Even if one is clear in their own mind, it's quite impossible to be sure that someone else has the same understanding, regardless of how well they seem to agree with you.

Another example which I find frustrating (I can't help it, you got me started) is the confusion around f stop. Discussions about vignetting seem to advise a smaller f stop ... would that be a smaller number or a smaller opening? Hasselblad in its various brochures advises photographers to use a smaller f stop in one and a larger f stop in another, so they're confused as well.

Oh. Thank you for "EWV." It translates quite nicely to American English.

Charles Maclauchlan said...

First of all let me thank you for the words in American English also.

Bits vs Bauds, affect effect, DPI or PPI. Is each pixel represented by a single dot? Very well could be but surprisingly many don't know the answer...or even that it's a question.

The one that's significant for me is the discussion around avoiding vignetting with my xPan (same issue with my 6x9) The advice is to use F8 or smaller. OK. Smaller f number or smaller opening? Interestingly Hasselblad gives the advice to use f8 or smaller in one brochure and the counter advice to use f8 or higher in another. I guess some questions are beyond science.

Extra Wank Value indeed

Noons said...

And then you have those commercial printing places where I ask: "what colour profile do you use, AdobeRGB or sRGB?", invariably replied to with a blank stare, followed by a "er...Photoshop?"...
The other one that rattles me is the camera megapixel: yup, but it's not really a single colour megapixel like in a scanner, is it? (except in the Foveon)
The Bayer ones are rather an interpolation of 4 photosites (2 green, one red and one blue) into 4 full-colour pixels: not really the best way to define resolution, but WTH: lots of it must be good, because more is always better, right?
Ah well...