Saturday, 30 April 2011

he's gonna restore it

I was up in the mountains near here the other day ... people up there often stash something in the yard for later




The Corellas aren't sticking around to wait for that

Tuesday, 26 April 2011

stitch n shift

To shift the lens (and thus distort the image) or use a distorted image ... that is the question ;-)

Panorama stitching can be a useful tool to make your lens wider than it is. It's often a handy tool in the arsenal of the interior and landscape photographer. As photographers learn more about their tools (often by reading bulletin boards or magazines rather than books, and by doing) they come across the concept of a shift lens.

Canon (for instance) makes such a tool in the TS-E series.

This is my TS-E 24, or rather was, as I sold it a while ago.

The TS-E lenses are lenses which can tilt and shift. This article will focus on the shift aspect, and perhaps explain why I no longer have the above (often coveted) tilt shift lens.

The reason you want to "shift" a lens in the first place is to correct the perspective changes which occur most obviously with wide angle lenses when you look up or down.

Shifting the lens causes the image to be projected onto the film (or sensor in digital cameras) in a trapezoid shape, bringing together lines which your brain knows to be parallel but which the camera renders as converging when you look up. Try reading this for a more complete explanation.

This can also be corrected for in software, after taking the picture. For example this shot was taken with 2 images stitched together and corrected in Photomatix. I used a 9mm lens on my 4/3 camera (which by itself is about equal to an 18 or 21mm lens on a full frame 35mm camera).
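The equivalence arithmetic there is just the actual focal length times the sensor's crop factor (2x for four thirds). A minimal sketch if you want to check other lenses; the function name is mine, not from any library:

```python
# 35mm-equivalent focal length = actual focal length x crop factor.
# Four Thirds has a 2x crop factor relative to full frame 35mm.
def equivalent_focal_length(focal_mm, crop_factor=2.0):
    return focal_mm * crop_factor

print(equivalent_focal_length(9))   # 18.0 -- a 9mm on 4/3 frames like an 18mm
```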


Clearly each section is quite wide, but I wanted wider, so I took 2 images (well 3 really, but I was too lazy to show the middle one in this exercise).

Perhaps this image shows more obviously how each section is altered ... here I have not yet tilted up, so the view is still looking down at the altar more.


You can see that the outside edges of the image are unscaled and the image scales towards the center, thus reducing the losses. Of course one could try to work in such a way as to force the software to make the sides longer, but that would be counterproductive. Perhaps I should do a blog post on pano techniques?

So, let's look at an uncorrected version of the image ...


You can see that the software has not only joined them together for me but (looking at the curved edges in the stitch example above) has also corrected for some pincushion distortion in the lens, as well as fixing the perspective. This left me with an image from which I can crop out essentially the view of something more like a 7mm lens, without having to own such a creature.

The image of course has to be distorted for this, and some will argue that there will be losses. Here are the 100% pixel crops from the upper left of the image:


So clearly it has been stretched and elongated during this process, but it's nothing horrific. Also, recall that if that distortion did not happen somewhere (software post, or lens pre) you would not get the perspective correction.

It's important to point out here that this is exactly what a shifting process does ... stretch and elongate the image by projecting it from an angle. From the Wikipedia article above:

When just angling the camera back:

the image is cast onto the sensor / film but is distorted according to our sense of correctness.

When shifting the lens

the image is distorted; note the angles of light are no longer symmetrical entering the camera and falling on the sensor.

One can do this in software by calculating where the pixels should go, or in hardware by using a lens to throw the angles differently onto the sensor. It's really all the same, and the same losses due to inaccuracy happen in either system (well, more or less; optical losses are different ones to those in calculations).
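To make "calculating where the pixels should go" concrete, here's a toy keystone-correction mapping: each output pixel is traced back to a source pixel, with rows nearer the top sampling a narrower slice of the frame (which stretches them wider in the output). This is an illustrative simplification of my own, not the actual algorithm any stitching package uses:

```python
def keystone_source_x(dest_x, row, width, height, max_stretch=0.2):
    """Map a destination x coordinate back to its source x for a given row.

    Rows near the top sample a narrower slice of the source image,
    which stretches them horizontally in the corrected output.
    """
    t = row / (height - 1)                 # 0.0 at the top row, 1.0 at the bottom
    scale = 1.0 - max_stretch * (1.0 - t)  # full width at bottom, squeezed at top
    center = (width - 1) / 2.0
    return center + (dest_x - center) * scale

# Bottom row is untouched; the top-left corner pulls in toward the centre:
print(keystone_source_x(0, row=999, width=1000, height=1000))  # 0.0
print(keystone_source_x(0, row=0, width=1000, height=1000))    # about 99.9
```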

So, do you remember polar to rectangular conversion from maths in school? Well, assuming you have a triangle like this:

recall you can calculate the length of r if you know the length of X and the angle theta. Alternatively you can calculate r and theta if you know the lengths of X and Y.

It's all the same, but the accuracy depends on the tools you have to use.
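For the curious, that conversion is a one-liner in code (a simple sketch using Python's standard maths functions, with the classic 3-4-5 triangle as the example):

```python
import math

# Rectangular -> polar: r and theta from the side lengths X and Y.
def to_polar(x, y):
    r = math.hypot(x, y)          # sqrt(x^2 + y^2)
    theta = math.atan2(y, x)      # angle in radians
    return r, theta

# Polar -> rectangular: recover the hypotenuse r from X and theta.
def hypotenuse_from_angle(x, theta):
    return x / math.cos(theta)

r, theta = to_polar(3.0, 4.0)
print(r)                                  # 5.0
print(hypotenuse_from_angle(3.0, theta))  # ~5.0 (same answer, different inputs)
```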

The difference in the picture comes down to which is more accurate: the lens or the software?

So, getting back to the question above: which is better, shift or stitch?

It's reasonably well known that most wide angle lenses get softer towards the edge, so by using the middle of the lens you are starting with a clearer picture and therefore more accurate data to calculate with. And if you stitch, you have more pixels to start working with (the combination of two 4000 pixel wide images into one that is, say, 6000 wide), so after stretching into shape you can shrink back down to 4000 wide and thus clean up any distortions.

To get the same coverage in stitching as a shot from a wide angle I'll typically use 3 overlapping portrait images to make a landscape, or a stack of 3 landscapes to equal a portrait. You'd need an expensive and well made ultra wide to equal an image stitched from a number of wide or normal lens frames. And if you're stitching more than 3 images together, a shift lens will never match it, even with double the capture area (meaning going from 4/3 to full frame).

but what about stitched shifted images?

This is an interesting point, and the one which prompted me to blog about this.

It would seem that if you stitched shifted images together you'd have the best of both worlds. But it's not as simple as it may seem. Recall from the discussion above that as you shift, you not only move the lens to correct for perspective, you move out into the edges of the lens's image circle, which are not as sharp as the center. I know that when I shift my TS-E 24 the perspective is corrected, but experience has shown me that the image is far less sharp than when it is not shifted.

So by using the lens shifted to capture a mosaic of images I may make them easier to stitch together (come on, who does this by hand anyway?), but because the outer images are of lower optical quality I actually make the edges of the final image worse than if I had used the lens fixed in its sweet spot and just allowed the software to scale it for me.

So why shift?

Well, firstly, shift lenses were made back in a time when we did not have software to correct images. They also stem from the move away from flexible view cameras to fixed and rigid cameras. On a view camera you don't even consider the concept of a shift lens, as all lenses will shift with the front standard. Like this one:

Which is what I do these days if I need movements.

The remaining benefit (more so today) of the shift lens is being able to work with a single capture, for those times when you can't take 2 images and stitch ... because the subject moves. So I no longer need shift lenses on my digital, and if I need to make a single take I'll use my large format film, where a single shot is often enough.

There I can scan the 5 inch sheet and get 11,000 good pixels from a simple flatbed, or more from a drum scan. Of course in that above image some shift was applied, but being a 'normal' lens it's of marginal benefit. Tilt, on the other hand, is quite important in the above image.

So while I'm not interested in shifting anymore, I am inclined to tilt ... where I can get stuff like this from a 6x12 (notice the relationships of foreground, mid ground and background with respect to focus and blur ... try that without tilt!!)

and hopefully equally interesting stuff from the 4/3 too. I have a tilt adaptor on the way for my Oly lenses, so expect to see something on the benefits of tilt on micro 4/3 soon :-)

Tuesday, 19 April 2011

solar is nuclear

I guess that enough time has passed since the disaster to start discussions on the issues which arise from Fukushima.

I'd like to start this article off by considering a few things which people often forget; most people know nothing about nuclear power ... or even nuclear issues. I know a little bit as a result of studies in science (at bachelor degree level), subsequent reading, and discussions with friends of mine who are quite specialised in this area.

So like the heading says, solar (the embodiment of the clean energy movement) is actually nuclear: the sun is, after all, a giant fusion reactor.

I mention this just to start out with thinking differently but still with a rational and dispassionate viewpoint.

We think in abstracts of things because if we didn't we'd be overwhelmed by the complexity.

Energy generation in the world is largely divided into coal, nuclear, hydro and then a few other things, but by far the largest share comes from burning things. Burning fossil fuels.

Coal is perhaps between 30% and 50% of the stuff we burn to generate energy around the world (figures are hard to nail down, and vary by country).

We tend to think of coal as a black lump of pure carbon which we burn to get heat and CO2 ... while this is true to a large extent, two things need to be considered:

1) just how many tons of coal we burn each year
2) what else is in coal?

Coal, as it happens, contains small amounts of uranium and radioactive thorium. But even if it was only 1 part per million (and it's higher), that means that for every million tons of coal we burn we have a ton (yes, a ton) of what amounts to nuclear waste.

I gotta tell you, globally we burn quite a few hundred million tons of coal every year.
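The arithmetic is trivial, but worth seeing laid out. A back-of-envelope sketch, where the 500 million ton figure is a deliberately low illustrative number in the spirit of "quite a few hundred million", not a measured statistic:

```python
# Back-of-envelope: tons of uranium and thorium in coal waste per year,
# assuming (conservatively, as above) 1 part per million in the coal.
def radioactive_tons(coal_tons_burned, ppm=1.0):
    return coal_tons_burned * ppm / 1_000_000

# An illustrative low-end global figure of 500 million tons burned:
print(radioactive_tons(500_000_000))  # 500.0 tons of nuclear-waste material
```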

This is not new stuff either, as a look at this link in Scientific American or this link on a US Government site suggests.

But don't just take their word for it ... do some research of your own if you doubt the figures.

So with our existing coal burning methods we are actually creating a nuclear disposal problem right now.

Of course nuclear reactors create much more concentrated waste than coal flyash does ... but that only means we need to break up and dilute the concentrated radioactive waste and store it safely. Remember, it came out of the ground in the first place, right?

This is not 'rocket science' either; one such method (Synroc) has been around for ages, and certainly others can and should be developed.

Finally, not every nuclear reactor is the same. Pebble bed reactors, for instance, are much safer than the older boiling water design which popped over at Fukushima, where hot fuel rods boil the water directly.

Remember, that's what nuclear power does ... just like coal, we use it to get heat, to boil water, to run steam engines, to turn generators; just like your basic petrol portable generator, only much bigger.

Clearly we need alternatives, and we need to think about things carefully, because while we may worry about the death toll from nuclear, we need to comprehend the death toll from coal ... it's not just CO2. For example, in China alone coal mining accidents kill more than 2000 people each year. Burning coal also leads to smog, acid rain and air toxicity, well before we even enter the debate on greenhouse gases and "global warming".

If our goal is to reduce the dirty waste from our energy generation perhaps we should look past the fears and ignorance to the facts:

1) figure out what works
2) figure out what it costs
3) with that in mind, do what we need to do.

There are ultimately better alternatives for energy, and my start to this topic (solar) is a great one. The main criticisms of solar are that it's affected by the weather, doesn't work well in winter (when we need the most energy) and is unsightly. An alternative location for solar panels is space ...

Unlike ground based systems it has much longer (nearly 24 hour) up time, does not lose efficiency because of cloudy days, and works well at northern latitudes.

Right now we lack the technology to do this cheaply (although we can already do it), but that does not mean we should discount the idea. It would give us almost limitless energy and no ground based pollution.

Worth looking at and striving for, if you ask me.

Wednesday, 13 April 2011

four thirds and sensor size


Anyone who's a regular reader of my blog will already know this, so please accept this as a quiet rant for the day ... for those who don't know, please read on and perhaps you'll gain something helpful.

For reasons I can't quite fathom, many people seem to be lost on the reality of the words they use. People who are becoming photographers, for instance, eventually get to the position where they know (as opposed to just being able to repeat the words) that sensor size is important in image making.

This is not only from the perspective of so called Image Quality (goodness, don't start me on that topic), but from the perspective of what images look like as an effect of the sensor size. In short, the bigger the sensor the easier it is to get shallow depth of field, and conversely the smaller the sensor the easier it is to get very deep depth of field.
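A common rule of thumb captures this: for the same framing and print size, a smaller sensor gives roughly the depth of field of a full frame lens stopped down by the crop factor. A minimal sketch of that arithmetic (treat it as the usual approximation, not an exact optical law):

```python
# Rule of thumb: for equivalent framing, depth of field on a crop sensor
# matches full frame at (f-number x crop factor). So f/2.8 on Four Thirds
# (2x crop) renders depth of field like roughly f/5.6 on full frame.
def full_frame_equivalent_aperture(f_number, crop_factor):
    return f_number * crop_factor

print(full_frame_equivalent_aperture(2.8, 2.0))  # 5.6
```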

Now one of the first things that a budding photographer learns is that DSLRs take better pictures, and that this is due to some magic property of being a DSLR (I guess I should be compiling a list of invalid assumptions, shouldn't I). Sadly most people have no idea what a DSLR is (although, to quote Monty Python, clever people like me who talk loudly in restaurants will say it's Digital Single Lens Reflex) or why it's in any way desirable.

you can take that as a question to ponder ... if you already know the answer then great.

So with the scene set for what I'm talking about, one of the most frustrating things for me is people saying (about four thirds and in particular micro four thirds) something like:

"relatively large sensor compared to typical P&S"

Review writers at DPreview are also culprits here.

Ok ... so let's look at the sizes of a DSLR sensor (which is commonly APS sized) and a four thirds sensor (oh, and recall please that the micro in micro four thirds does not mean the sensor is micro, that would be daft; micro 4/3 uses a regular four thirds sensor, it's the camera that's micro) and a P&S or even a high quality "prosumer" compact camera like a Canon G11 ...

You can see quickly that the 4/3 and the APS sensors are really much the same size as each other (and of course it will come as no surprise that some of the DSLRs you may pick up ARE 4/3 cameras) and that the sensor in a common P&S is ... well ... puny compared to micro 4/3.
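You can put rough numbers on "puny" using commonly published nominal sensor dimensions (approximate figures; exact active areas vary slightly by model):

```python
# Nominal sensor dimensions in mm (approximate published figures):
sensors = {
    "Four Thirds":   (17.3, 13.0),
    "APS-C (Canon)": (22.3, 14.9),
    '1/2.5" P&S':    (5.76, 4.29),
}

# Four Thirds works out around 225 mm^2 and a 1/2.5" compact around
# 25 mm^2 -- roughly a 9x difference in area, versus ~1.5x to APS-C.
for name, (w, h) in sensors.items():
    print(f"{name}: {w * h:.0f} mm^2")
```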

So, let's look at some cameras ... I happen to like DPReview as a good source of basic data on cameras, so let's take some examples from them:

is this a DSLR?

of course it is ... but this camera has a 4/3 sensor in it

So then, is this a DSLR?

of course it is ... it has an APS sensor in it, which as you saw above is just about the same size as the 4/3 sensor.

so when you look at this camera:

remember that even though it looks like a compact camera, it actually has the exact same size sensor as the Olympus above ... it just happens to be in a small body. That's what all the fuss is about with micro 4/3, and what differentiates them from compacts.

So if you have the faintest interest in understanding your cameras, please keep this in mind, do yourself a favour and go over to DPReview for a little reading, and start to grasp why these cameras are not the same as this one.