Saturday, May 17, 2014

Geostationary

Satellites in a geostationary orbit appear to maintain a fixed position relative to a point on the surface of the Earth. To accomplish this, they need to revolve at the same angular speed at which the Earth spins: one full rotation every sidereal day. There are 23.93446122 hours (or 86,164.0604 seconds) in one sidereal day. A full revolution of 2π radians in 86,164.0604 seconds gives us \(\omega = 7.29212 \times 10^{-5} \ radians \cdot s^{-1}\). This is our target angular speed if we want to keep a satellite over the same spot.
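As a quick sanity check, here's a minimal Python sketch (illustrative only, with constant names of my own) that reproduces that figure:

```python
import math

SIDEREAL_DAY_S = 23.93446122 * 3600   # one sidereal day in seconds (~86,164.06)

omega = 2 * math.pi / SIDEREAL_DAY_S  # target angular speed in rad/s
print(f"omega = {omega:.5e} rad/s")   # -> omega = 7.29212e-05 rad/s
```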

Earth's standard gravitational parameter is the product of its mass M and the gravitational constant G, roughly \(\mu = 398,600.4418 \ km^3 \cdot s^{-2}\). The standard gravitational parameters of the planets are better known than either M or G individually (the best way we have at our disposal to weigh extremely massive objects is to observe their gravitational effect on other bodies, like orbiting satellites). Acceleration due to gravity is \(\mu \over r^2\). The equatorial radius of Earth is 6,378.14 km. If we plug in that figure, we see that acceleration at the surface is \(0.009798285 \ km \cdot s^{-2}\) - the familiar \(9.8 \ m \cdot s^{-2}\). At 422 km above the surface (the orbital sphere of the ISS), it's just \(0.008619904 \ km \cdot s^{-2}\) (87.97%). The further out we go, the lower the acceleration and the longer the orbital period.
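The same arithmetic in Python (again a sketch, constant names are mine):

```python
MU_EARTH = 398600.4418  # Earth's standard gravitational parameter, km^3/s^2
R_EQUATOR = 6378.14     # Earth's equatorial radius, km

def gravity(r_km):
    """Gravitational acceleration (km/s^2) at r_km from Earth's centre."""
    return MU_EARTH / r_km ** 2

g_surface = gravity(R_EQUATOR)    # ~0.009798 km/s^2 (the familiar 9.8 m/s^2)
g_iss = gravity(R_EQUATOR + 422)  # ~0.008620 km/s^2 at ISS altitude
print(g_iss / g_surface)          # -> ~0.8797
```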

To find out how far away we have to place a geostationary satellite, consider that it will need to be in a circular orbit with an orbital period of one sidereal day. In one second, it will move \(7.29212 \times 10^{-5} \ radians\); therefore \(r \times (1 - \cos(7.29212 \times 10^{-5}))\) is the distance it falls towards the Earth in that time. We know from the SUVAT equation \(s = ut + {1 \over 2}at^2\) that in one second, with no initial vertical velocity, the distance travelled is half the acceleration. We now have an identity: \(r (1 - \cos(7.29212 \times 10^{-5})) = {\mu \over 2r^2}\). Rearrange it and solve: \[r = \sqrt[3]{\mu \over {2 - 2 \cos(7.29212 \times 10^{-5})}}\] It turns out that a radius of 42,164.15974 km from the center of the Earth (an altitude of 35,786.02274 km above the equatorial surface) is where you'll find geostationary satellites - and they can only exist in that one orbital plane, directly above the equator.
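Here's that calculation as a Python sketch, following the one-second approximation above:

```python
import math

MU_EARTH = 398600.4418            # km^3/s^2
R_EQUATOR = 6378.14               # km
omega = 2 * math.pi / 86164.0604  # one revolution per sidereal day, rad/s

# The one-second fall r * (1 - cos(omega)) equals half the acceleration,
# mu / (2 r^2), so r^3 = mu / (2 - 2 cos(omega)).
r = (MU_EARTH / (2 - 2 * math.cos(omega))) ** (1 / 3)
print(f"radius   = {r:.2f} km from Earth's centre")  # ~42164.16
print(f"altitude = {r - R_EQUATOR:.2f} km")          # ~35786.02
```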

Saturday, May 10, 2014

Gravity / ISS

Let's say we're interested in finding how fast the International Space Station has to move in order not to fall back to Earth (i.e. to stay in orbit). The Earth has an equatorial radius of about 6384 km, and the station is about 422 km above the equator when it passes overhead on its inclined orbital path. For the sake of simplicity, we will ignore the oblate spheroid shape of the Earth and replace the slightly elliptical orbit with a circular one, 6806 km from the center of the Earth.

On Earth's surface the acceleration due to gravity is \(9.8 \ m \cdot s^{-2}\). The further away you go, the lower the force of gravity. 422 km above the surface, we're told, the acceleration is about 89% of surface gravity - it's \(8.722 \ m \cdot s^{-2}\).

Using the SUVAT equation \(s = ut + {1 \over 2}at^2\), we can see that an object dropped from 422 km (i.e. \(g = 8.722 \ m \cdot s^{-2}\)) would fall 4.361 m in the first second.

In the same second, we know that the ISS traces out the circular orbit (i.e. it doesn't crash into the Earth). The angular distance it travels (we could use the unit circle for visual confirmation) is \(\arccos({{r - s} \over r})\) or \(\arccos({{6806000 - 4.361} \over 6806000})\) or \(\arccos({6805995.639 \over 6806000})\) or simply 0.001132041 radians.

If it travels 0.001132041 radians in a second, it will take 5550.316851 seconds to perform a complete revolution of 2π. 5550 seconds is 92½ minutes, which is very close to the published figure (on Wikipedia) - quite amazing given the rough estimates we've made to get this far. A circle of radius 6806 km has a circumference of \(2 \pi r \approx 42,764 \ km\). Divide that circumference by the orbital period in seconds and you get \(7704 \ m \cdot s^{-1}\). You could also try \(r \sin(\theta)\), or \(6806000 \sin(0.001132041)\), which gives the same result. It's moving pretty quickly, but it would have to be even quicker if the orbit were any closer to the surface!
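A minimal Python sketch (my own, following the rough figures above) reproduces the whole chain:

```python
import math

R_ORBIT = 6806000.0  # assumed circular orbital radius, m
G_ISS = 8.722        # gravitational acceleration at 422 km altitude, m/s^2

fall = 0.5 * G_ISS                             # metres fallen in one second (~4.361)
theta = math.acos((R_ORBIT - fall) / R_ORBIT)  # radians swept per second
period = 2 * math.pi / theta                   # orbital period, s
speed = R_ORBIT * math.sin(theta)              # orbital speed, m/s
print(theta, period / 60, speed)  # ~0.00113204 rad, ~92.5 min, ~7704 m/s
```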

Wednesday, April 16, 2014

Newtonian Reflector

Newtonian reflector telescopes use two mirrors: a figured primary mirror (the objective) which focuses incoming light, and a flat secondary mirror which redirects that light at an angle so that it can be viewed without getting your head stuck in the tube. The secondary mirror is only a practical convenience - a CCD sensor could be placed directly into the path of the light focused by the objective mirror, and it would capture the light without requiring an additional reflection. However, the subject of this post is the primary mirror, particularly its shape.

A 2D parabola is defined by the quadratic equation \(y = ax^2 + bx + c\). If we make some simplifying assumptions (such as: its open side faces up; it “rests” on the x-axis; the focus is on the y-axis at height p) then our equation reduces to \(y = {x^2 \over 4p}\). The first-order derivative (i.e. the slope) of this equation is \({dy \over dx} = {x \over 2p}\). That means, for a given focal length (and radius from the centre of the mirror), we can compute the height of the reflective surface above the x-axis, as well as the slope at that point. These two computed values determine where light goes when it encounters the surface of the objective mirror. Remember that incident light is reflected about the normal vector to the surface:
\(R = D - 2(D \cdot N)N\)
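As a concrete illustration (a sketch with made-up values, using the \(y = {x^2 \over 4p}\) parametrisation above), here's a single axis-parallel ray reflecting off the mirror:

```python
import numpy as np

def reflect(d, n):
    """Reflect incident direction d about unit surface normal n: R = D - 2(D.N)N."""
    return d - 2 * np.dot(d, n) * n

p = 1.0   # focal length of the parabola y = x^2 / (4p)
x = 0.5   # radius at which the ray strikes the mirror
normal = np.array([-x / (2 * p), 1.0])  # perpendicular to the tangent slope x/(2p)
normal /= np.linalg.norm(normal)        # normalise to a unit vector
incoming = np.array([0.0, -1.0])        # ray travelling parallel to the axis
print(reflect(incoming, normal))        # reflected direction, heading towards the focus
```

Tracing that reflected direction from the strike point (0.5, 0.0625) back to the y-axis lands exactly on the focus at (0, 1).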

To enable a 3D version (a paraboloid), we can take advantage of the fact that the shape is rotationally symmetric about its vertical axis - we need only rotate the parabola. In 3D, I chose to rotate around the z-axis.

It then becomes rather straightforward to take a solid angle of light, compute the angle at which each incident ray encounters the parabolic reflector surface (parallax affects closer light sources more than distant ones), and then note the points at which two or more rays converge. It turns out that the parabolic shape is ONLY able to focus light that is travelling parallel to the reflector’s primary axis. Closer sources converge further back than the stated focal length; sources at infinity converge exactly at the parabola’s focal point. Off-axis sources produce a complex out-of-focus shape when we attempt to capture them on a flat focal plane like a CCD. Looking at the distribution of focal points for a given field of view, it was difficult to imagine a single transformation that could completely (and correctly) remove the coma, though clearly a retail market for coma-correcting lenses abounds.
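A toy verification of the on-axis claim (again a sketch, not the original ray tracer): trace several parallel rays and confirm they all cross the axis at the focal length.

```python
import numpy as np

def reflect(d, n):
    return d - 2 * np.dot(d, n) * n  # R = D - 2(D.N)N

p = 1.0  # focal length; the mirror surface is y = x^2 / (4p)
for x in (0.2, 0.5, 0.9):                  # strike radii of three parallel rays
    hit = np.array([x, x * x / (4 * p)])   # where the ray meets the mirror
    n = np.array([-x / (2 * p), 1.0])
    n /= np.linalg.norm(n)                 # unit normal at the hit point
    r = reflect(np.array([0.0, -1.0]), n)  # reflected direction
    t = -hit[0] / r[0]                     # parameter where the ray crosses x = 0
    print(f"x = {x}: crosses axis at y = {hit[1] + t * r[1]:.6f}")  # -> 1.0 each time
```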

Thursday, January 24, 2013

Day-glo

Ever wonder what makes those striking day-glo colors shine brighter than everything around them? Our eyes can only see visible light (the colors of the rainbow), but the sun emits electromagnetic waves across a vastly wider spectrum (which we cannot see, of course - and not just because it's a bad idea to stare at the sun). Just because we cannot see those waves doesn't mean they aren't there. When photons from the ultraviolet region of the spectrum bump into an ordinary object, some of the energy may be reflected, but because we can't see ultraviolet light, it goes unnoticed. However, certain chemical compounds transform higher-energy ultraviolet photons into lower-energy visible light photons - a process called fluorescence. When fluorescent objects emit more visible light than the non-fluorescent objects around them, they give the perception of being brighter: day-glo. Interesting fact: whiter-than-white washing detergents are known to employ fluorescence to make clothes appear brighter.

Wednesday, October 17, 2012

Scalar

If you're reading this, you've probably come across scalars and vectors and pondered their differences. The pair of terms turns up in contexts as diverse as the operations performed by a CPU. I think the names go back as far as matrix mathematics, and - if you'll listen - I'll tell you why...

Matrices can be "multiplied" in two ways:

  1. two matrices are multiplied and the result is an arrangement of dot products;
  2. one matrix and one scalar are multiplied and every value in the matrix is scaled by the scalar (see the sketch below).
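In NumPy terms (a toy illustration of my own, not from the original post), the two products look like this:

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[5, 6],
              [7, 8]])

print(A @ B)    # matrix product: each entry is a dot product of a row and a column
print(2.5 * A)  # scalar product: every entry of A is scaled by the factor 2.5
```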

Now, every time an algorithm needs to scale a collection of values by some number, should I name that variable "factor" or "scalar"?

Which would you choose?

Tuesday, October 09, 2012

Rectilinear

In my quest for a better panorama builder, I stumbled upon a few odd facts about cameras. Mine (the Canon G1X), for example, has a rectilinear lens. Or almost rectilinear. What this means is that every pixel left or right (or indeed up and down) from the center of a photograph represents a constant number of degrees. I've not been able to measure the angle of view too closely, but I can report that it's slightly over 60 degrees (left to right), judging by my initial results. On a photograph 4352 pixels wide, that's about 0.0138 degrees per pixel. (I've assumed the same scale factor applies vertically, but that's just an assumption - it isn't required.)
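One possible way to turn that calibration into directions, sketched in Python (the function and constant names are mine, and the constant-degrees-per-pixel mapping is this post's assumption, not a general property of lenses):

```python
import math

DEG_PER_PIXEL = 0.0138  # rough calibration: ~60 degrees across 4352 pixels

def pixel_to_direction(dx, dy):
    """Map a pixel offset from the image centre to a unit vector on the sphere,
    treating each pixel as a constant angular step (the post's assumption)."""
    az = math.radians(dx * DEG_PER_PIXEL)   # left/right angle
    alt = math.radians(dy * DEG_PER_PIXEL)  # up/down angle
    return (math.cos(alt) * math.sin(az),   # x
            math.cos(alt) * math.cos(az),   # y (the camera looks along +y)
            math.sin(alt))                  # z
```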

Now, imagine you're standing in the middle of a massive sphere, pointing the camera outwards. If you took a photograph, each line of pixels (up/down and left/right) would represent a curved line segment of a great circle on that sphere. And if a pixel is defined by a row and column (or X and Y value) then the position of the light source of that pixel will be the intersection of those two great circles.

Mathematically, it's relatively simple to calculate the intersection of two great circles (they meet at two antipodal points; we keep one and throw the other away - an optimisation) and thus the position in space of the pixel's light source. We could map two or more photographs of the same scene (taken from the same place, but at different angles) onto our sphere (from the inside) and, with the right manipulations, we'd be able to build a 360 degree × 180 degree panoramic view (i.e. the full sphere).
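The intersection itself is a one-liner once each great circle is represented by its unit normal vector (a hedged sketch of my own):

```python
import numpy as np

def great_circle_intersection(n1, n2):
    """Each great circle is the set of unit vectors perpendicular to its normal.
    The circles meet at +/- the normalised cross product of the two normals;
    as described above, we keep one and throw the antipode away."""
    p = np.cross(n1, n2)
    return p / np.linalg.norm(p)

# Toy example: the equator (normal = z-axis) meets a meridian (normal = y-axis)
print(great_circle_intersection(np.array([0.0, 0.0, 1.0]),
                                np.array([0.0, 1.0, 0.0])))  # -> [-1, 0, 0]
```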

And what are the right manipulations? They involve matrices or, more specifically, singular value decompositions. If every photograph shares two overlapping points with another photograph (or photographs), then we can test our inner-sphere projection by comparing the angle between a point-pair in one photograph with the angle between the same point-pair in another. The angles need to be equal, or else something's likely wrong with our assumption about the lens projection. Given two point-pairs, it's simply a case of using Procrustes analysis (or, actually, just a sub-solution of it) to determine a rotation that can be applied to all pixels of one image to align it perfectly with the other image. Once the aligned images wrap around the entire sphere, you might find that the scale factor obtained when calibrating the lens was slightly out. So... why bother calibrating in the first place if you already know the lens is rectilinear! Readjust the scale factor and realign the images as necessary. Then fill the remainder of the sphere with your photographs. Simples!
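A minimal sketch of that alignment step (the orthogonal Procrustes / Kabsch solution via SVD; the function name is mine):

```python
import numpy as np

def rotation_between(points_a, points_b):
    """Best-fit rotation taking the unit vectors in points_a (one per row)
    onto the matching rows of points_b - the Kabsch/Procrustes solution."""
    h = points_a.T @ points_b               # 3x3 cross-covariance matrix
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))  # guard against improper reflections
    return vt.T @ np.diag([1.0, 1.0, d]) @ u.T

# A matching point-pair seen in two photographs; identical views here,
# so the best-fit rotation should come out as the identity matrix.
a = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
print(np.round(rotation_between(a, a), 6))
```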

Saturday, March 24, 2012

West-East

Nothing too technical here: I noticed (when viewing the current night sky in Cape Town) that West and East on the map were swapped around, but North and South were still oriented as I'd expect. I puzzled for a few seconds, then held the laptop up to the sky. Eureka! When you're lying on your back (outside, on the grass - an unfamiliar concept to anyone in the UK) staring at the stars with your head and toes aligned North to South, West is on your right and East is on your left. All of a sardine the map makes sense...