Wednesday, April 16, 2014

Newtonian Reflector

Newtonian reflector telescopes use two mirrors – a figured primary mirror (the objective) which focuses incoming light, and a flat secondary mirror which redirects that light at an angle so that it can be viewed without getting your head stuck in the tube. The secondary mirror is a practical convenience rather than an optical necessity – a CCD sensor could be placed directly in the path of the light focused by the objective mirror, and it would capture the image without the additional reflection. However, the subject of this post is the primary mirror, particularly its shape.

A 2D parabola is defined by the quadratic equation y = ax^2 + bx + c. If we make some simplifying assumptions (such as: its open side faces up; its vertex “rests” on the x-axis at the origin; the focus is on the y-axis at height p) then our equation reduces to y = x^2 / (4p). The first derivative (i.e. the slope) of this equation is dy/dx = x / (2p). That means, for a given focal length p and a given distance x from the centre of the mirror, we can compute the height of the reflective surface above the x-axis, as well as the slope at that point. These two computed values tell us where light strikes the surface of the objective mirror and in which direction it is reflected. Remember that incident light is reflected about the normal vector to the surface:
R = D – 2(D·N)N, where D is the incident direction and N is the unit surface normal.
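
As a quick check of that formula, here is a minimal C# sketch (the Vec3 helper and its names are mine, not from the post) that reflects an incident direction about a unit surface normal:

using System;

struct Vec3
{
    public double X, Y, Z;
    public Vec3(double x, double y, double z) { X = x; Y = y; Z = z; }
    public static double Dot(Vec3 a, Vec3 b) => a.X * b.X + a.Y * b.Y + a.Z * b.Z;
    public static Vec3 operator -(Vec3 a, Vec3 b) => new Vec3(a.X - b.X, a.Y - b.Y, a.Z - b.Z);
    public static Vec3 operator *(double s, Vec3 a) => new Vec3(s * a.X, s * a.Y, s * a.Z);
    public override string ToString() => $"({X:F3}, {Y:F3}, {Z:F3})";
}

static class Reflection
{
    // R = D - 2(D·N)N; N must be unit length, D points along the incoming ray.
    public static Vec3 Reflect(Vec3 d, Vec3 n) => d - 2 * Vec3.Dot(d, n) * n;

    static void Main()
    {
        var d = new Vec3(0, 0, -1);   // ray travelling straight down the z-axis
        var n = new Vec3(0, 0, 1);    // normal of a flat mirror facing up
        Console.WriteLine(Reflect(d, n));   // prints (0.000, 0.000, 1.000): straight back up
    }
}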

To extend this to 3D (a paraboloid) we can take advantage of the fact that we only need to rotate the parabola (i.e. the shape is rotationally symmetric about the vertical axis). In 3D, I chose to rotate around the z-axis.

It then becomes rather straightforward to take a solid angle of light, compute the angle at which each incident ray meets the parabolic reflector surface (parallax affects closer light sources more than distant ones), and then note the points at which two or more rays converge to a single point. It turns out that the parabolic shape is ONLY able to focus light that is travelling parallel to the reflector’s primary axis. Closer sources converge further back than the stated focal length; sources at infinity converge exactly at the parabola’s focal point. Off-axis sources produce a complex, out-of-focus shape (coma) when we attempt to capture them on a flat focal plane like a CCD. Looking at the distribution of focal points for a given field of view, it was difficult to imagine a single transformation that might remove coma completely (and correctly), though there is clearly a retail market for coma-correcting lenses.
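
Here is a minimal 2D sketch of that experiment (my own variable names; the post doesn't show code): fire rays parallel to the optical axis at the mirror y = x^2/(4p), reflect each one, and report where it crosses the axis. If the maths above is right, every crossing lands at y = p.

using System;

class ParabolaTrace
{
    static void Main()
    {
        const double p = 0.5;   // focal length of the mirror

        foreach (double x in new[] { 0.05, 0.10, 0.20, 0.40 })
        {
            double y = x * x / (4 * p);      // height of the surface at x
            double slope = x / (2 * p);      // dy/dx at x

            // Unit normal to the surface, pointing up into the incoming light.
            double len = Math.Sqrt(1 + slope * slope);
            double nx = -slope / len, ny = 1 / len;

            // Incident ray travels straight down: D = (0, -1). Apply R = D - 2(D·N)N.
            double dot = -ny;
            double rx = -2 * dot * nx;
            double ry = -1 - 2 * dot * ny;

            // Follow the reflected ray from (x, y) until it crosses the optical axis (x = 0).
            double t = -x / rx;
            Console.WriteLine($"x = {x:F2} -> crosses the axis at y = {y + t * ry:F4}");
        }
    }
}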

Thursday, January 24, 2013

Day-glo

Ever wonder what makes those striking day-glo colors shine brighter than everything around them? Our eyes can only see visible light (the colors of the rainbow) but the sun emits electromagnetic waves across a vastly wider spectrum (which we cannot see with our eyes, of course, and not just because it's not a good idea to stare at the sun). But just because we cannot see them, doesn't mean they aren't there. When photons from the ultraviolet region of the spectrum bump into an object, some of the energy can be reflected. Because we can't see them, they go unnoticed. However, certain chemical combinations transform higher-energy ultraviolet photons into lower-energy visible light photons (a process called fluorescence). When those fluorescent objects start emitting more visible light than the non-fluorescent objects around them, they give the perception of being brighter: day-glo. Interesting fact: whiter-than-white washing detergent is known to employ fluorescence to make clothes appear brighter.

Wednesday, October 17, 2012

Scalar

If you're reading this then it's already too late: you've probably come across scalars and vectors and pondered their differences. The pair of terms turns up in contexts as diverse as the operations performed by a CPU. I think the names go back at least as far as matrix mathematics, and - if you'll listen - I'll tell you why...

Matrices can be "multiplied" in two ways (a quick sketch in code follows the list):

  1. two matrices are multiplied and the result is an arrangement of dot products;
  2. one matrix and one scalar are multiplied and every value in the matrix is scaled by the scalar.
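
Here is that distinction in a short C# sketch (the helper names are mine, purely for illustration):

using System;

class MatrixDemo
{
    // Matrix * matrix: each result cell is the dot product of a row of A with a column of B.
    static double[,] Multiply(double[,] a, double[,] b)
    {
        int rows = a.GetLength(0), cols = b.GetLength(1), inner = a.GetLength(1);
        var result = new double[rows, cols];
        for (int r = 0; r < rows; r++)
            for (int c = 0; c < cols; c++)
                for (int i = 0; i < inner; i++)
                    result[r, c] += a[r, i] * b[i, c];
        return result;
    }

    // Matrix * scalar: every value in the matrix is simply scaled.
    static double[,] Scale(double[,] a, double scalar)
    {
        var result = new double[a.GetLength(0), a.GetLength(1)];
        for (int r = 0; r < a.GetLength(0); r++)
            for (int c = 0; c < a.GetLength(1); c++)
                result[r, c] = a[r, c] * scalar;
        return result;
    }

    static void Main()
    {
        var m = new double[,] { { 1, 2 }, { 3, 4 } };
        Console.WriteLine(Multiply(m, m)[0, 0]);   // 1*1 + 2*3 = 7
        Console.WriteLine(Scale(m, 2)[0, 0]);      // 1*2 = 2
    }
}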

Now, every time an algorithm needs to scale a collection of values by some number, should I name that variable "factor" or "scalar"?

Which would you choose?

Tuesday, October 09, 2012

Rectilinear

In my quest for a better panorama builder, I stumbled upon a few odd facts about cameras. Mine (the Canon G1X), for example, has a rectilinear lens. Or almost rectilinear. What this means is that every pixel left or right (or indeed up or down) from the center of a photograph represents (approximately) a constant number of degrees. I've not been able to measure the angle of view precisely, but judging by my initial results it's slightly over 60 degrees (left to right). On a photograph 4352 pixels wide that works out to about 0.0138 degrees per pixel. (I've assumed the same scale factor applies vertically - not a requirement, just an assumption.)
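
As a minimal sketch of that mapping (the field of view and image width are the numbers from the post; the image height and all names below are my own, for illustration), here is the conversion from a pixel's offset from the image centre into a pair of angles:

using System;

class PixelAngles
{
    const double HorizontalFovDegrees = 60.0;   // "slightly over 60 degrees", per the measurement above
    const double ImageWidth = 4352.0;           // pixels
    const double ImageHeight = 3264.0;          // assumed 4:3 frame; adjust for the real sensor
    const double DegreesPerPixel = HorizontalFovDegrees / ImageWidth;   // ~0.0138

    static void Main()
    {
        // The pixel at column 3000, row 1000 (rows counted from the top of the image).
        double columnOffset = 3000 - ImageWidth / 2.0;
        double rowOffset = ImageHeight / 2.0 - 1000;

        Console.WriteLine($"yaw   = {columnOffset * DegreesPerPixel:F3} degrees");
        Console.WriteLine($"pitch = {rowOffset * DegreesPerPixel:F3} degrees");
    }
}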

Now, imagine you're standing in the middle of a massive sphere, pointing the camera outwards. If you took a photograph, each line of pixels (up/down and left/right) would represent a curved line segment of a great circle on that sphere. And if a pixel is defined by a row and column (or X and Y value) then the position of the light source of that pixel will be the intersection of those two great circles.

Mathematically it's relatively simple to calculate the intersection of two great circles (they actually intersect at two antipodal points - we keep the one in front of the camera and throw the other away) and thus the direction in space of the pixel's light source. We could map two or more photographs of the same scene (taken from the same place, but at different angles) onto our sphere (from the inside) and, with the right manipulations, we'd be able to build a 360 degree * 180 degree panoramic view (i.e. the full sphere).
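
A minimal sketch of that calculation (helper names are mine): represent each great circle by the unit normal of its plane; the normalised cross product of the two normals gives one of the two intersection points, and its negation gives the other.

using System;

class GreatCircles
{
    static double[] Cross(double[] a, double[] b) => new[]
    {
        a[1] * b[2] - a[2] * b[1],
        a[2] * b[0] - a[0] * b[2],
        a[0] * b[1] - a[1] * b[0],
    };

    static double[] Normalise(double[] v)
    {
        double len = Math.Sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2]);
        return new[] { v[0] / len, v[1] / len, v[2] / len };
    }

    static void Main()
    {
        // The "equator" (plane normal = z-axis) and a "meridian" (plane normal = y-axis)
        // intersect at (-1, 0, 0) and (1, 0, 0); we keep whichever lies in front of the camera.
        double[] equatorNormal = { 0, 0, 1 };
        double[] meridianNormal = { 0, 1, 0 };
        double[] p = Normalise(Cross(equatorNormal, meridianNormal));
        Console.WriteLine($"({p[0]}, {p[1]}, {p[2]}) and ({-p[0]}, {-p[1]}, {-p[2]})");
    }
}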

And what are the right manipulations? They involve matrices - more specifically, the singular value decomposition. If every photograph shares two overlapping points with another photograph (or photographs), then we can test our inner-sphere projection by comparing the angle between a point-pair in one photograph with the angle between the same point-pair in another. The angles need to be equal, or else something's likely wrong with our assumption about the lens projection. Given two point-pairs, it's simply a case of using Procrustes analysis (or actually, just a sub-solution of it) to determine a rotation that can be applied to all pixels of one image to align it perfectly with the other image. Once the aligned images wrap around the entire sphere, you might find that the scale factor obtained when calibrating the lens was slightly out. So... why bother calibrating in the first place if you already know the lens is rectilinear! Readjust the scale factor and realign the images as necessary. Then fill the remainder of the sphere with your photographs. Simples!
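
The post's solve is SVD-based Procrustes; as a dependency-free stand-in for the two-point-pair case, the sketch below (all helper names are mine) builds an orthonormal frame from each pair of directions and composes the rotation that maps one frame onto the other. It assumes the angle between the two pairs already matches, exactly as the check above requires.

using System;

class AlignPairs
{
    static double[] Cross(double[] a, double[] b) => new[]
    {
        a[1] * b[2] - a[2] * b[1],
        a[2] * b[0] - a[0] * b[2],
        a[0] * b[1] - a[1] * b[0],
    };

    static double[] Normalise(double[] v)
    {
        double len = Math.Sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2]);
        return new[] { v[0] / len, v[1] / len, v[2] / len };
    }

    // Frame columns: the first direction, the pair's plane normal, and their cross product.
    static double[][] Frame(double[] p1, double[] p2)
    {
        var u1 = Normalise(p1);
        var u2 = Normalise(Cross(p1, p2));
        var u3 = Cross(u1, u2);
        return new[] { u1, u2, u3 };
    }

    // Rotation R = frameB * transpose(frameA), written out element by element.
    static double[,] RotationBetween(double[][] a, double[][] b)
    {
        var r = new double[3, 3];
        for (int row = 0; row < 3; row++)
            for (int col = 0; col < 3; col++)
                for (int k = 0; k < 3; k++)
                    r[row, col] += b[k][row] * a[k][col];
        return r;
    }

    static void Main()
    {
        // The same two directions as seen in image A and in (rotated) image B.
        double[] a1 = { 1, 0, 0 }, a2 = Normalise(new double[] { 1, 1, 0 });
        double[] b1 = { 0, 1, 0 }, b2 = Normalise(new double[] { -1, 1, 0 });

        var rot = RotationBetween(Frame(a1, a2), Frame(b1, b2));

        // Rotate a1 and confirm it lands on b1 (expect roughly (0, 1, 0)).
        var mapped = new double[3];
        for (int row = 0; row < 3; row++)
            for (int col = 0; col < 3; col++)
                mapped[row] += rot[row, col] * a1[col];
        Console.WriteLine($"({mapped[0]:F3}, {mapped[1]:F3}, {mapped[2]:F3})");
    }
}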

Saturday, March 24, 2012

West-East

Nothing too technical here: I noticed (when viewing the current night sky in Cape Town) that West and East on the map were swapped around but North and South were still oriented as I'd expect. I puzzled for a few seconds, then held the laptop up to the sky. Eureka! When you're lying on your back (outside, on the grass, an unfamiliar concept to anyone in the UK) staring at the stars with your head and toes aligned North to South, West is your right and East is on your left. All of a sardine the map makes sense...

Thursday, October 27, 2011

Anaglyph

Red/cyan 3D anaglyphs are a mix of lo- and hi-technology. Let me explain: red objects are red because that's the color of light they reflect; all other colors are absorbed. There are three "primary colors" of light - red, green and blue. So when a transparent sheet of red film absorbs all "other" colors of light, we really just mean it absorbs green and blue.

When we mix 100% green with 100% blue we get cyan. A transparent sheet of cyan film therefore allows green and blue light through, yet absorbs red light. If we take a digital photograph (composed of red, green and blue light) and view it through a transparent cyan film, the film absorbs the red light, allowing only green and blue to pass. In theory, if I have two photographs and I remove all the red from one, then view both through transparent cyan film, I won't be able to tell the images apart. One image emits no red light; the other's red light is blocked by the "filter" - in neither case does any red light reach the viewer.

What if we view an image with only red through our cyan filter? There is no green and no blue light in the image, and all red light gets absorbed by the filter. The image will appear black.


           LEFT               RIGHT
IMAGE      red only           green/blue only
FILTER     red                cyan
SEEN BY    left eye only      right eye only

Now, put on an imaginary pair of red/cyan glasses and look at two digital photographs side by side. Close your left eye (red filter) and look with your right eye (cyan filter). Remember, an image with no green or blue appears black through cyan, so we remove these colors from the left image. An image with only green and blue appears in green and blue (there's no point in having red in the image as it would be absorbed by the filter anyway - plus it would destroy what comes next). With only your right eye, the left image is black and right image is visible. Now, close your right eye (cyan filter) and look with your left eye. The right image has gone black because it doesn't have any red - the green and blue have been absorbed by the red filter. The left image springs to life now, because it has red in it. We have found a way to show each eye a different image - and our brains do the rest.
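
To make the recipe concrete, here is a minimal sketch that applies that channel filtering to two images held as interleaved RGB byte arrays (no particular image library is assumed, and all names are mine):

using System;

class AnaglyphChannels
{
    // Left image: keep red only (green and blue zeroed).
    static byte[] KeepRedOnly(byte[] rgb)
    {
        var result = (byte[])rgb.Clone();
        for (int i = 0; i < result.Length; i += 3) { result[i + 1] = 0; result[i + 2] = 0; }
        return result;
    }

    // Right image: keep green and blue only (red zeroed).
    static byte[] KeepGreenBlueOnly(byte[] rgb)
    {
        var result = (byte[])rgb.Clone();
        for (int i = 0; i < result.Length; i += 3) { result[i] = 0; }
        return result;
    }

    static void Main()
    {
        // A single purple-ish pixel (R=200, G=50, B=150) in each "image".
        byte[] left = { 200, 50, 150 }, right = { 200, 50, 150 };
        byte[] l = KeepRedOnly(left), r = KeepGreenBlueOnly(right);
        Console.WriteLine($"left  -> ({l[0]}, {l[1]}, {l[2]})");    // (200, 0, 0)
        Console.WriteLine($"right -> ({r[0]}, {r[1]}, {r[2]})");    // (0, 50, 150)
        // Summing the two filtered images pixel by pixel would give the familiar
        // single red/cyan anaglyph frame.
    }
}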

Sunday, September 18, 2011

Diskette

I was emptying out a bunch of junk from my apartment and came across an old 3.5" floppy disk. My mind - being as it is - immediately wondered what the unit tests would look like if I were writing a virtual floppy disk (because nobody's got real floppy drives these days).

On my Diskette class, I would expose the following methods to modify/access state (a sketch of a possible implementation follows the list):
  • bool ToggleDiskAccessSlot()
  • bool ToggleWriteProtect()
  • void RotateDisk(int numSectors)
  • int Read(int cylinder, int side)
  • void Write(int cylinder, int side, int value)
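
The post doesn't show the Diskette class itself, so here is a minimal sketch of one possible implementation of those methods - the internal state model (an open/closed access slot, a write-protect tab, and a grid of stored values per cylinder/side) is my assumption - purely so the tests below have something to compile against:

using System;

public class Diskette
{
    private const int Cylinders = 80, Sides = 2;
    private readonly int[,] data = new int[Cylinders, Sides];
    private bool slotOpen;          // the disk access slot starts closed
    private bool writeProtected;

    // Each toggle returns the new state, matching the bool return types listed above.
    public bool ToggleDiskAccessSlot() => slotOpen = !slotOpen;
    public bool ToggleWriteProtect() => writeProtected = !writeProtected;

    public void RotateDisk(int numSectors)
    {
        // Would shift which sector sits under the head; omitted in this sketch.
    }

    public int Read(int cylinder, int side)
    {
        if (!slotOpen) throw new InvalidOperationException("Disk access slot is closed.");
        return data[cylinder, side];
    }

    public void Write(int cylinder, int side, int value)
    {
        if (!slotOpen) throw new InvalidOperationException("Disk access slot is closed.");
        if (writeProtected) throw new InvalidOperationException("Diskette is write-protected.");
        data[cylinder, side] = value;
    }
}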

Using Roy Osherove's naming strategy (a la the accepted answer to this question), I'd add a unit test class named DisketteTests. Unit tests - IMHO - are intended to ensure that public methods modify the internal state of an object as expected. A test first sets up the object under test in a known state, then invokes a method, and finally makes an assertion about the resulting state.

I might want to test the ability to read or write while the disk access slot is closed (I'd expect some exception to be thrown, so there are no explicit calls to Assert in the method body; the assertion is made by the test framework based on the test's custom attributes):
[ExpectedException(typeof(InvalidOperationException))]
[TestMethod]
public void Read_DiskAccessSlotIsClosed_ThrowsException()
{
    Diskette d = new Diskette();
    // intentionally missing the step to ToggleDiskAccessSlot()
    d.Read(1, 1);   // return value deliberately discarded; this call should throw
}

A more standard test might look like this:
[TestMethod]
public void Read_DiskAccessSlotIsOpen_GetsCurrentValue()
{
    Diskette d = new Diskette();
    if (!d.ToggleDiskAccessSlot())
    {
        Assert.Fail("Unable to open disk access slot");
    }
    int value = d.Read(1, 1);
    Assert.AreEqual(0, value);   // expected value first: a fresh diskette reads back zero
}