tag:blogger.com,1999:blog-116248602019-09-27T13:03:16.326+01:00The Chronicles of NojoUnknownnoreply@blogger.comBlogger182125tag:blogger.com,1999:blog-11624860.post-65217192789853206552019-01-13T09:31:00.001+00:002019-01-13T09:45:19.276+00:00Wake On Lan (WOL)<div dir="ltr" style="text-align: left;" trbidi="on"><p>Wake On Lan (WOL) is possible on the ASUS M3A78-CM board, but it requires a level of finesse to get it working. First, the cryptically named BIOS option "Power > APM Configuration > Power on From S5 by PMEs" must be enabled. Then, it's important to note that WOL won't wake the machine from soft-off, i.e. the state where the computer has been shut down but the SB_PWR LED is still lit. Instead, the machine needs to be merely suspended in order to pick up the "magic" WOL ethernet packet. You can suspend it with the command:<br/><pre><code>sudo systemctl suspend</code></pre><p>You'll also need to decide what to do with the BIOS setting "Power > APM Configuration > Restore on AC Power Loss"... [Last State] won't bring it back to suspended after an AC power loss, but it will bring it back to running if the server was running when the power went off. If the power goes out while you're suspended, then you're up a certain creek without a paddle. <p>On waking, some network drivers need to be reloaded. 
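Incidentally, the "magic" packet itself is simple enough to construct by hand if you'd rather not depend on etherwake (shown further down). A minimal Python sketch, using the example MAC address from this post as a placeholder:

```python
import socket

def magic_packet(mac: str) -> bytes:
    """Six 0xFF bytes followed by the target MAC repeated 16 times."""
    return bytes.fromhex("FF" * 6 + mac.replace(":", "").replace("-", "") * 16)

def send_wol(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    # WOL packets are conventionally broadcast over UDP to port 9 (or 7)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (broadcast, port))
```

Calling <code>send_wol("00:22:15:DE:31:42")</code> from another machine on the subnet should do the same job as the etherwake invocation below.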
Check my github's private ubuntu/WOL repository for the script that reloads the drivers, or follow the steps in this accepted answer: <a href="https://askubuntu.com/questions/1029250/ubuntu-18-04-ethernet-disconnected-after-suspend/1030154#1030154">ubuntu 18.04 - ethernet disconnected after suspend</a><br/><p>To send a WOL packet, install and run etherwake from some other machine in the same subnet:<br/><pre><code>sudo apt install etherwake<br />sudo etherwake -i enp1s0 00:22:15:DE:31:42</code></pre><br /></div>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-11624860.post-69167809106925385442018-11-19T07:27:00.000+00:002018-11-19T12:18:35.335+00:00Hypothesis Testing Errors<div dir="ltr" style="text-align: left;" trbidi="on"><p>Hypothesis testing allows us to quantify relationships between known samples and the unknown population from which they were taken. What do I mean by that? Let's say I am interested in the returns from investing in a particular stock. The daily return might be calculated as the ratio of today's closing price over yesterday's closing price. Whether I were to take a <i>sample</i> of 30 returns (over 31 days) or 60 returns (over 61 days), I still <i>couldn't know</i> the <i>population</i> mean return, but I <i>could hypothesize</i> about it... so I do.</p>I choose a "null hypothesis" that the population mean is 4.5%. Given that my sample mean was 5% and there was a 2% standard deviation, 30 observations would produce a test statistic of \(\frac{0.05 - 0.045}{\frac{0.02}{\sqrt{30}}} = 1.37\) standard deviations. In other words, the probability in the right tail beyond the statistic would be 8.5%, leaving 91.5% below it. For an 80% confidence two-tailed test (10% in the left tail and 10% in the right tail) we would reject the hypothesis, but at 90% confidence we would accept it. Note how we've already accepted or rejected the hypothesis <b><i>regardless of its truth</i></b>.</p><p>Now, imagine an all-seeing and all-knowing supernatural bystander watching the same events unfurl... 
they <i>could know</i> the population mean... and even though they wouldn't be obliged to share the exact value with me, let's say that they'd at least tell me if my hypothesis was true or false; that is to say: if I hypothesized that the population mean was 4.5% and it actually was 4.5% then the hypothesis would be true, otherwise it would be false (it could be 4.2% or 4.3% or even -5% or 63%; the point is we don't know).</p><p>If we take our two test results and combine them with the two possible truth values, it produces this 2×2 matrix of outcomes.</p><table><tbody><tr><th></th><th>Accept</th><th>Reject</th></tr><tr><td>True</td><td>Correct</td><td>Type 1 Error</td></tr><tr><td>False</td><td>Type 2 Error</td><td>Correct</td></tr></tbody></table><br /><ul><li>True/False: does the actual population mean match the hypothesized mean?</li><li>Reject/Accept: does our statistic fall outside/inside the confidence interval?</li><li>Correct/Incorrect: did we accept a true (or reject a false) hypothesis or did we commit an error?</li></ul><p>Let's ask the bystander if our hypothesis was indeed true or if it was false. <ul><li>Yes, it's true:<ul><li>At 90% we accepted it</li><li>At 80% we rejected it (Type 1 Error)</li></ul></li><li>No, it's false:<ul><li>At 90% we accepted it (Type 2 Error)</li><li>At 80% we rejected it.</li></ul></li></ul>This last pair of possibilities deserves more analysis: when the bystander tells us our hypothesis was false, it doesn't seem to matter <b>why</b> we were correct to reject the hypothesis; all that matters is that we did.</p><p><a href="https://www.youtube.com/watch?v=BJZpx7Mdde4">This video</a> tries to explain it but I am not confident the author is correct. 
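Whichever interpretation you prefer, the arithmetic from earlier in this post is easy to verify; a quick check using nothing but Python's standard library:

```python
from math import sqrt
from statistics import NormalDist

# Sample figures from the post: mean 5%, hypothesized mean 4.5%, sd 2%, n = 30
z = (0.05 - 0.045) / (0.02 / sqrt(30))   # test statistic: ~1.37 standard deviations
p_right = 1 - NormalDist().cdf(z)        # right-tail probability: ~8.5%

# Two-tailed critical values: 80% confidence puts 10% in each tail, 90% puts 5%
reject_at_80 = z > NormalDist().inv_cdf(0.90)  # True: statistic beyond 1.28
reject_at_90 = z > NormalDist().inv_cdf(0.95)  # False: statistic inside 1.64
```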
I'd prefer to side with authors of the CFA curriculum who say - in short - "it's complicated".</p></div>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-11624860.post-85213427390731065442018-11-07T20:31:00.003+00:002018-11-08T06:31:02.623+00:00Fourier Transform in C#<div dir="ltr" style="text-align: left;" trbidi="on">I am fascinated by the Fourier transform; specifically: its ability to decompose a wave into a series of fundamental waves. Mathematicians might prefer to see it described more formally, but here I'll be trying to make it understandable.<br/><br/>Obviously, for a wave to have motion, time needs to pass. If we freeze time then our wave cannot oscillate. To say a wave has frequency, we need first to observe a period in which the wave has motion. The longer we observe its movement the more we can mathematically extract from it, but for now let's simply say that we need to acquire a range of samples measured as the wave oscillates. From this point, armed with our array of floating point numerical samples, we menacingly approach the rabbit hole.<br/><br/>The remainder of this post will focus on a single practical aspect of the Fourier transform: how to interpret the result, especially its magnitude. I'm using the <code>MathNet.Numerics</code> package from Nuget, which provides the handy method <code>Fourier.Forward(samples, FourierOptions.NoScaling)</code>.<br/><i>Note: you need to specifically pass <code>FourierOptions.NoScaling</code> if you want control of the output.</i><br/><br/>Since I grew up in an age when Compact Discs were seen as technological marvels, I am going to use the standard 44.1 kHz as my sample rate for the following example. A full second's worth of samples (44100) might seem a reasonable place to start, but as we'll see later, it's by no means a requirement. 
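To preview the headline result before interpreting it: a single unscaled DFT bin is just the input paired, sample by sample, with a complex sinusoid. A dependency-free (and deliberately slow) Python sketch, standing in for MathNet's <code>Fourier.Forward</code>; the 440 Hz tone is an arbitrary choice:

```python
import cmath
import math

def dft_bin(samples, k):
    """One unscaled DFT output: correlate the signal with the bin-k complex sinusoid."""
    n = len(samples)
    return sum(x * cmath.exp(-2j * math.pi * k * i / n) for i, x in enumerate(samples))

rate = 44100
tone = [math.sin(2 * math.pi * 440 * t / rate) for t in range(rate)]  # 1 s of 440 Hz

# abs() plays the role of Complex32.Magnitude; for a full-scale sinusoid it lands
# on samples/2 = 22050, as discussed below.
magnitude = abs(dft_bin(tone, 440))
```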
The chosen sample rate theoretically limits the upper frequency to 22050 Hz, and even though we pass in 44100 samples to <code>Fourier.Forward</code> we get back a symmetrical result with the first 22050 data points carrying the frequencies up to 22050 Hz, and the second 22050 carrying them all again, mirrored in reverse. We can safely ignore the second half of the output for this post. If our input signal (encoded in the 44100 samples) was a pure sinusoidal signal (of any frequency up to 22050 Hz) oscillating between +1 and -1, it will contribute 22050 units to the frequency bin associated with the signal. Those units can be found in the <code>Complex32.Magnitude</code> property. Multiplying the <i>amplitude</i> of the <i>input</i> signal by any constant A will scale the <i>magnitude</i> to A × 22050. However, if we pass in a fraction of the full second's worth of samples - let's say \(\frac{4}{5}\), which is 35280 instead of 44100 - then we see the number of units fall to \(\frac{4}{5}A\frac{samplerate}{2}\); with no scaling applied, the magnitude is \(A\frac{N}{2}\) for N samples, so it scales linearly with the number of samples. And yes, the number would rise if the fraction was greater than one.<br/><br/>It's probably important to note that we haven't changed the sample rate: that's still 44100 and it still means we cannot get any higher frequencies than 22050 Hz. What we've done is alter the resolution. With \(\frac{4}{5}\) of the original resolution each frequency bin would be \(\frac{5}{4}\) as wide so that we could still accommodate all 22050 Hz, but in fewer distinct bins. It's not a net loss, though; it's a trade off: we've lost resolution in the frequency domain but we've gained resolution in the time domain (there will be \(\frac{5}{4}\) as many Fourier transforms in any given time period). Of course, this now means we cannot simply read off the frequency from the bin number; now we need to scale the bin number by our resolution factor. Staying with the \(\frac{4}{5}\) example: 800 Hz would now be found in bin 640, which is \(\frac{4}{5}\) of 800.<br/><br/>So... 
why did we choose to use the <code>Complex32.Magnitude</code> property, and what does it mean? Each of the data points we get as output from <code>Fourier.Forward</code> is a complex number, having both a <code>Complex32.Real</code> and a <code>Complex32.Imaginary</code> part. Besides these two orthogonal vectors pointing in the real and imaginary axes, another way to interpret them is as a single vector that has two properties: <code>Complex32.Magnitude</code> and <code>Complex32.Phase</code>. Together, these allow us to imagine the frequency bin as a clock with a single spinning hand of variable length. If the hand is long: there's more signal; if it's short: there is less. And the angle of the hand shows us the offset of the signal (relative to convention).<br/><br/><table><tr><td>Input Shift</td><td>Re</td><td>Im</td><td>Mag</td><td>Phase</td></tr><tr><td>+0</td><td>0</td><td>-22050</td><td>22050</td><td>\(-\frac{π}{2}\)</td></tr><tr><td>+\(\frac{π}{2}\)</td><td>22050</td><td>0</td><td>22050</td><td>0</td></tr><tr><td>+π</td><td>0</td><td>22050</td><td>22050</td><td>\(\frac{π}{2}\)</td></tr><tr><td>+\(\frac{3π}{2}\)</td><td>-22050</td><td>0</td><td>22050</td><td>π</td></tr></table><br/><br/>Finally, how did we get 22050 for the maximum <code>Complex32.Magnitude</code> given a maximum amplitude of 1 (and minimum amplitude of -1) for our input signal? Well, that's how the Fourier transform works: it essentially adds up the 44100 <i>vector</i> results of pairing each input sample with the corresponding sample of the known pure sinusoidal signal; each pairing contributes half a unit on average, which gives us 22050. <br /></div>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-11624860.post-32915399085392753592018-10-28T08:16:00.000+00:002018-10-28T08:16:38.756+00:00Demand and Supply: Part 2<div dir="ltr" style="text-align: left;" trbidi="on">In the <a href="http://chroniclesofnojo.blogspot.com/2018/10/demand-and-supply-part-1.html">previous post</a>, there was a table of coefficients for average cost (AC), total cost (TC) and marginal cost (MC). 
Let's decompose TC into two components: one that doesn't vary as quantity (Q) varies and another that does. We'll call the components total fixed cost (TFC) and total variable cost (TVC). We can also decompose AC into average fixed cost (AFC) and average variable cost (AVC).<br/><br/><table><tr><th></th><th>AC</th><th>AFC</th><th>AVC</th><th>TC</th><th>TFC</th><th>TVC</th><th>MC</th></tr><tr><td>Q<sup>-1</sup></td><td style="background-color: #333">a</td><td style="background-color: #333">a</td><td style="background-color: #181818">0</td><td> </td><td> </td><td> </td><td style="background-color: #666">0</td></tr><tr><td>Q<sup>0</sup></td><td>b</td><td style="background-color: #181818">0</td><td>b</td><td style="background-color: #333">a</td><td style="background-color: #333">a</td><td style="background-color: #181818">0</td><td>b</td></tr><tr><td>Q<sup>1</sup></td><td>c</td><td style="background-color: #181818">0</td><td>c</td><td>b</td><td style="background-color: #181818">0</td><td>b</td><td>2c</td></tr><tr><td>Q<sup>2</sup></td><td>d</td><td style="background-color: #181818">0</td><td>d</td><td>c</td><td style="background-color: #181818">0</td><td>c</td><td>3d</td></tr><tr><td>Q<sup>3</sup></td><td> </td><td> </td><td> </td><td>d</td><td style="background-color: #181818">0</td><td>d</td><td></td></tr></table><br/>AFC: \(p = \frac{a}{q}\)<br/>AVC: \(p = b + cq + dq^2\)<br/>TFC: \(p = a\)<br/>TVC: \(p = bq + cq^2 + dq^3\)<br/><br/>Note how \(AC = AFC + AVC\) and \(TC = TFC + TVC\)<br/>Note too how MC is unrelated to TFC, but has the same relation to both TC and TVC. 
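The decomposition above is easy to sanity-check numerically; a quick Python sketch (the coefficient values are arbitrary placeholders):

```python
a, b, c, d = 100.0, 8.0, -1.5, 0.2   # arbitrary illustrative coefficients

TFC = lambda q: a
TVC = lambda q: b*q + c*q**2 + d*q**3
TC  = lambda q: TFC(q) + TVC(q)
AFC = lambda q: a / q
AVC = lambda q: b + c*q + d*q**2
AC  = lambda q: AFC(q) + AVC(q)
MC  = lambda q: b + 2*c*q + 3*d*q**2  # derivative of TC (and of TVC) w.r.t. q

q = 10.0
checks = (
    abs(AC(q) - TC(q) / q) < 1e-9,                         # AC = TC / Q
    abs(MC(q) - (TVC(q + 1e-6) - TVC(q)) / 1e-6) < 1e-2,   # MC ignores the fixed part
)
```

The second check differentiates TVC rather than TC: because TFC is a constant, both give the same slope, which is exactly the "MC is unrelated to TFC" observation.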
<br /></div>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-11624860.post-58035212817677075522018-10-28T07:03:00.000+00:002018-10-28T07:53:06.396+00:00Demand and Supply: Part 1<div dir="ltr" style="text-align: left;" trbidi="on">Let's use a little bit of mathematics to assist our understanding of demand and supply, focusing here on the slightly more complicated concept of supply (demand functions identically, but we don't often decompose revenue into fixed and variable components). In the study of economics we often encounter three "curves" that describe how the price of an item is related to the quantity (Q) a firm is willing to supply: average cost (AC), total cost (TC) and marginal cost (MC). These three functions are mathematically related by two equations: \(AC = \frac{TC}{Q}\) (which also means that \(TC = AC \times Q\)) and \(\frac{dTC}{dQ} = MC\) (which also means that \(TC_q = \int_0^q MC dQ\)). It's too soon to explain, but these are <b><i>not</i></b> quantities the firm is <i>willing</i> to supply, but rather the quantities the firm is <i>able</i> to supply.<br/><br/>A third degree polynomial function should be sufficient for this demonstration. In the table below, I've displayed the coefficients of each power of Q, highlighting the "fixed" component of TC (at Q<sup>0</sup>) and AC (at Q<sup>-1</sup>) noting that MC has no such "fixed" component. 
<br/><br/><table><tr><th></th><th>AC</th><th>TC</th><th>MC</th></tr><tr><td>Q<sup>-1</sup></td><td style="background-color: #333">a</td><td></td><td style="background-color: #666">0</td></tr><tr><td>Q<sup>0</sup></td><td>b</td><td style="background-color: #333">a</td><td>b</td></tr><tr><td>Q<sup>1</sup></td><td>c</td><td>b</td><td>2c</td></tr><tr><td>Q<sup>2</sup></td><td>d</td><td>c</td><td>3d</td></tr><tr><td>Q<sup>3</sup></td><td></td><td>d</td><td></td></tr></table><br/>AC: \(p = \frac{a}{q} + b + cq + dq^2\)<br />TC: \(p = a + bq + cq^2 + dq^3\)<br />MC: \(p = b + 2cq + 3dq^2\)<br /><br/>If the firm has any fixed costs (e.g. monthly rental payments on commercial property), they would be represented by the coefficient \(a\); if it has variable costs (e.g. monthly wages payable to labour) they would be represented by the coefficients \(b\), \(c\) and \(d\) (these are the costs that vary as Q is varied). <br /></div>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-11624860.post-42966218124438331782018-09-13T14:47:00.000+01:002018-09-13T20:44:44.465+01:00Eigenv{ector;alue}s<div dir="ltr" style="text-align: left;" trbidi="on">There are some <a href="https://www.youtube.com/watch?v=PFDu9oVAE-g">nice instructional videos</a> on YouTube that explain eigenvectors and eigenvalues. What follows is a brief summary as well as a caveat or two.<br/><br/>We start with a square transformation matrix M, which can have one or more eigenvectors. If you restrict eigenvalues to real numbers (i.e. if you exclude imaginary numbers) then it's possible to have a transformation matrix without any real eigenvalues (e.g. in a pure rotation). It's also possible to have just one eigenvalue but an infinite number of eigenvectors (e.g. common scale in all axes) ... but I've probably distracted you too much already...<br/><br/>Anyway:<br/>What's an eigenvector? It's a vector that only changes (stretches/squeezes) in scale <i>along its <b>span</b></i> when transformed by M.<br/>What's a span? 
It's the infinite line through the origin and the vector: the set of all scalar multiples of that vector.<br/>Each eigenvector has a corresponding scalar eigenvalue λ that, when the two are multiplied, has the same result as M being multiplied by that eigenvector.<br/>It's a rather circular definition so far:<br/>\(\matrix{M}\times\vec{v} = λ\times\vec{v}\)<br/>We already know about a transformation matrix that represents a uniform scale of λ, and that's \(λ\times\matrix{I}\) (it's the identity matrix, but instead of 1s running down the diagonal, it's got λs). Now we have a starting point for our calculation:<br/>\(\matrix{M}\times\vec{v} = (λ\times\matrix{I})\times\vec{v}\)<br/>As tempting as it looks, we should never divide by a vector. Instead, what we want next is the value of λ that results in:<br/>\((\matrix{M} - (λ\times\matrix{I}))\times\vec{v} = \vec{0}\)<br/>Let's let:<br/>\(\matrix{T} = \matrix{M} - (λ\times\matrix{I})\)<br/>Ordinarily, it would now be trivial to solve for the eigenvector using:<br/>\(\matrix{T}\times\vec{v} = \vec{0}\)<br/>\(\Rightarrow \matrix{T}^{-1}\times\vec{0} = \vec{v}\)<br/>But we cannot do that in this instance, because:<br/>\(det(\matrix{T}) = 0\)<br/>And that means that T is not invertible. But surely that means we've missed a step? Nobody said we need the determinant to be zero. What's happening? I'll explain: if \(\vec{v} = \vec{0}\) then anything multiplied by \(\vec{v}\) will also equal \(\vec{0}\).<br/>If we restrict our eigenvector to non-zero vectors then the only way we can get a zero vector after transformation is if the transformation matrix collapses a dimension: i.e. the determinant (or scale) of the transform must be zero. That's how we get our constraint.<br/><br/><b>Note:</b> Matrices are either <i>invertible</i> or <i>singular</i>. A matrix is invertible iff the determinant is non-zero, and it is singular iff the determinant is zero. Our transform T <b><i>must be</i></b> singular. 
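It may help to make this concrete with numbers before continuing. A small Python check using the characteristic quadratic of a 2×2 matrix; the matrix M = [[2, 1], [1, 2]] is an arbitrary choice (symmetric, so the eigenvalues come out real):

```python
import math

a, b, c, d = 2.0, 1.0, 1.0, 2.0          # arbitrary M = [[a, b], [c, d]]

# det(M - λI) = 0 expands to λ² - (a+d)λ + (ad - cb) = 0; solve the quadratic.
trace, det = a + d, a*d - c*b
disc = math.sqrt(trace**2 - 4*det)        # assumes real eigenvalues for this M
eigenvalues = [(trace + disc) / 2, (trace - disc) / 2]

# Verify M·v = λ·v for the first eigenvalue; v = (b, λ - a) works whenever b ≠ 0.
lam = eigenvalues[0]
v = (b, lam - a)
Mv = (a*v[0] + b*v[1], c*v[0] + d*v[1])
ok = all(abs(Mv[i] - lam * v[i]) < 1e-9 for i in range(2))
```

Note that v is only determined up to scale, which is the ratio-not-a-point idea discussed next.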
Singular means: the transform collapses space down by a dimension.<br/><br/>So, if the determinant of our 2×2 transformation matrix is zero:<br/>\(det\begin{bmatrix}a-λ & b \\ c & d-λ \\\end{bmatrix} = 0\)<br/>Then we know (from the Leibniz formula for the determinant) \((a-λ) \times (d-λ) - c \times b = 0\)<br/>And we can expand to:<br/>\(λ^2 - λ(a+d) + (a \times d) - (c \times b) = 0\)<br/>That's just a quadratic, easily solved (but take care with complex numbers, <i>i</i> does appear in rotation matrices). So we have our eigenvalue(s) λ and (by substitution) also T. But how do we find the eigenvectors if we cannot take the inverse of T? Well, you can try a technique like <a href="https://en.wikipedia.org/wiki/Gaussian_elimination">row reduction</a>. But don't expect a unique solution... in fact, because the transform has a determinant of zero, the eigenvector is strictly a ratio of X:Y in two dimensions (or X:Y:Z in three dimensions, etc.) and you're going to find the lines overlap; in jargon: they are <i><b>linearly dependent</b></i>. <br /></div>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-11624860.post-41340244180854198802018-08-31T14:36:00.001+01:002018-08-31T15:02:34.737+01:00Vinyl LP Record<div dir="ltr" style="text-align: left;" trbidi="on"> <br/>In this post I will detail one 12" stereophonic 33⅓ vinyl LP record. Initially I was hoping to be able to image the grooves cut into the surface and recover the audio track visually, but the resolution of a borrowed microscope demonstrated that the task was beyond its capabilities. 
Unfortunately that's left me to issue a caveat from the outset: <b><i>reader beware;</i></b><i> no experimental fact checking has taken place...</i><a href="https://3.bp.blogspot.com/-jbOW7biUEEk/W4lFQqqZxrI/AAAAAAAABls/A6558a1yt2sy1j6CZL9WZbroskAMtj6uQCLcBGAs/s1600/shot0002.png" imageanchor="1" ><img border="0" src="https://3.bp.blogspot.com/-jbOW7biUEEk/W4lFQqqZxrI/AAAAAAAABls/A6558a1yt2sy1j6CZL9WZbroskAMtj6uQCLcBGAs/s400/shot0002.png" width="400" height="300" data-original-width="1600" data-original-height="1200" /></a><br/><br/>First, some mathematical properties:<br/>Rotations per minute: 33⅓ = 100/3<br/>Rotations per second: 5/9<br/>Outermost groove radius: 146.05mm<br/>Innermost groove radius: 60.3mm<br/>This gives us about 85.75mm of playable radius.<br/>We're told that each side can contain up to 23 minutes (1380 seconds) of audio, the playback of which would require the disc to rotate 767 times (5/9 x 1380).<br/>Fitting 767 rotations into 85.75mm gives us roughly 0.112mm separation between adjacent edges of the groove.<br/>Finally, we define the spiral travelled by the stylus at time t as some initial offset less 0.112mm for each full 2π rotation of θ as it moves from the outermost to the innermost radius.<br/>let r(t) = 146.05 - 0.112 * (5 / 9) * t; <br/>Let's switch to using µm, as things are starting to get tiny!<br/><br/>Note: the groove on a stereophonic record is a 90 degree cut rotated 45 degrees so that it looks like a V cut into the vinyl. The inner surface (closest to the center of the record) modulates one of the stereo audio tracks, while the outer surface (closest to the edge of the record) modulates the other. To increase the likelihood of the stylus tracking the groove, the V should never be shallower (or, equivalently, narrower) than 25µm.<br/><br/>We'll divide the 112µm groove into a handful of regions: <ol><li>First, our mandatory 25µm notch, which we later split into an inner 12.5µm and an outer 12.5µm. 
<li>An optional (but recommended) gap <li>Inner and outer channels for modulating the left and right stereo data </ol>Mathematically we say the groove width is made up of these components: 112µm - 25µm - gap = 2 * audio;<br/>This tells us: once we've picked a size for the gap, we will know the maximum width for each of our audio channels; we don't need to refer to the gap size again once we've worked out the audio channel widths.<br/><br/>Now it's time to scale our input audio signals... For simplicity, let's ignore the bitness of a digital audio signal and assume that the signal amplitude fits into the range from +1 to -1. Note that the 0 point maps exactly onto the half-channel.<br/>*Another assumption: positive samples have shallower grooves*<br/>Remember: at time t, the stylus can be found at radius = r(t) [defined above].<br/>let inner and outer be the left and right audio samples (numbers between -1 and +1) at time t. Combined with the rotation of the disc (giving us θ at time t) these four equations below define the groove geometry...<br/>Surface: inner radial offset = (audio / 2) * (inner - 1) - 12.5µm<br/>Surface: outer radial offset = (audio / 2) * (1 - outer) + 12.5µm<br/>Groove: radial offset = (audio / 2) * (inner - outer) / 2<br/>Groove: vertical offset = (audio / 2) * ((inner + outer) / 2 - 1) - 12.5µm<br/><br/>The image below is a stylized aid for identifying the described parts of a groove, as seen looking down the groove from head on. The image shows the inner channel modulated to a positive audio sample and the outer channel modulated to a negative audio sample. The stylus is offset to the right as a result of this stereo modulation. 
<a href="https://2.bp.blogspot.com/-ni3BAlyErck/W4lH9g0z2HI/AAAAAAAABmg/cjFYcHss5-IWNIhgn4Fe6uzIRgw62zdfACLcBGAs/s1600/Groove.png" imageanchor="1" ><img border="0" src="https://2.bp.blogspot.com/-ni3BAlyErck/W4lH9g0z2HI/AAAAAAAABmg/cjFYcHss5-IWNIhgn4Fe6uzIRgw62zdfACLcBGAs/s400/Groove.png" width="400" height="244" data-original-width="658" data-original-height="402" /></a><br/><ul><li>Green: maximum and minimum audio modulation points (where audio signal is +1 or -1) <li>Orange: the imaginary "zero" line <li>Light blue: surface of the inner modulation area <li>Red: surface of the outer modulation area <li>Black: stylus position <li>Dark gray: uncut portion of vinyl LP record <li>Light gray: mandatory unusable 25µm notch </ul><b>Questions:</b><ul><li>do we map left to inner and right to outer? <li>do we map positive to high (shallower V) and negative to low (deeper V) <li>what's an appropriate audio channel width? if we know the maximum modulation in the 45 degree vector, we can scale it by SQRT(2) to find the channel width. <li>is it appropriate to map the zero sample point to half of the audio channel width? it might </ul><br /></div>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-11624860.post-85896062615861702772018-05-03T08:27:00.000+01:002018-05-03T08:27:04.948+01:00Numerical Integration Basics<div dir="ltr" style="text-align: left;" trbidi="on"><p>Consider the equation \(y = 4\). If we plot this on two axes, x and y, it has no slope; it's just a horizontal line. We could also write the equation as \(y = 0x + 4\) because the value of y is unaffected by the value of x. We could also write it as \(y = f(x)\). </p><p>The integral of a function can be used to find the area between the function and the x-axis and is written \(I = \int_a^b f(x) dx\). It's essentially the sum of tiny changes in x (\(dx\)) multiplied by their corresponding y values (\(f(x)\)). 
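That description of the integral translates directly into code; a crude Riemann-sum sketch (the step count is an arbitrary choice):

```python
def integrate(f, a, b, steps=100_000):
    """Approximate the integral of f from a to b by summing f(x)*dx over small slices."""
    dx = (b - a) / steps
    # midpoint rule: evaluate f at the centre of each slice
    return sum(f(a + (i + 0.5) * dx) * dx for i in range(steps))

area = integrate(lambda x: 4, 0, 3)   # the area under y = 4 between x = 0 and x = 3
```

For a constant function the approximation is exact, matching the value derived analytically in the worked example that follows.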
The equation reminds us that we need to restrict ourselves to a lower and an upper value of x (otherwise the area would be infinite).</p><p>Let's choose a = 0 (the y-axis) and b = 3 as those two bounds.</p><p>\(I = \int_0^3 4 dx\) <br />\(I = \left[4 x\right]_0^3\) <br />\(I = \left(4\times3\right) - \left(4\times0\right)\) <br />\(I = 12\) </p><p>We're not restricted to 0 as the lower bound, but it exposes an interesting property: the area under a-to-b is equal to the area under 0-to-b less the area under 0-to-a. Let's try the same integral as above, but from 0-to-1.</p><p>\(I = \int_0^1 4 dx\) <br />\(I = \left[4 x\right]_0^1\) <br />\(I = \left(4\times1\right) - \left(4\times0\right)\) <br />\(I = 4\) </p><p>We might then subtract 4 from 12 to arrive at 8, but why not do this in one go?</p><p>\(I = \int_1^3 4 dx\) <br />\(I = \left[4 x\right]_1^3\) <br />\(I = \left(4\times3\right) - \left(4\times1\right)\) <br />\(I = 8\) </p></div>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-11624860.post-2651580220063926772018-03-22T04:59:00.005+00:002018-03-22T04:59:36.588+00:00Singleton Pattern vs. C# Static ClassI ran into an old foe the other day when we were discussing interview questions: which to choose, the standard singleton pattern or a static class? At the most basic level, both constructs are useful in limiting the number of instances of an object that can be created. There are numerous subtle differences at the implementation level, but the single biggest difference is at an object-oriented level: <strong>static classes do not support polymorphism</strong>. This stems from a combination of factors at the language level; in short, a static class cannot inherit from a base class (abstract or concrete) nor can it implement any interface methods. Conversely, you can neither define static virtual methods on a class, nor can you define them on an interface. 
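The C# specifics don't translate literally to other languages, but the core point (only an instance can be substituted behind an interface) can be sketched anywhere. A hypothetical Python illustration, with a singleton that remains fully polymorphic:

```python
import time
from abc import ABC, abstractmethod

class Clock(ABC):                      # the interface a consumer depends on
    @abstractmethod
    def now(self) -> float: ...

class SystemClock(Clock):              # singleton: one shared instance, still polymorphic
    _instance = None
    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance
    def now(self) -> float:
        return time.time()

class FakeClock(Clock):                # a test double, injectable wherever Clock is expected
    def now(self) -> float:
        return 0.0

def timestamp(clock: Clock) -> str:    # dependency injection: any Clock works
    return f"t={clock.now():.0f}"
```

A static class has no instance to hand to <code>timestamp</code> at all, which is exactly the inadequacy described next.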
So, if you're ever using <a title="dependency injection" href="http://en.wikipedia.org/wiki/Dependency_injection">dependency injection</a> - a technique that relies heavily on polymorphism - you will likely find that static classes will be inadequate. There are a number of other runtime level differences, but these are all secondary to the polymorphism issue. The simplest is that a static class can never be instantiated, and all methods are executed against the type instance. Contrast this with the singleton, of which exactly one instance is created, and where methods are executed against that instance. Next time I choose, I'm going anti-static for all but the simplest "utility" classes.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-11624860.post-75686013205431306672018-01-22T09:44:00.001+00:002018-01-22T09:44:24.287+00:00Cisco 7960 SIP Phone<div dir="ltr" style="text-align: left;" trbidi="on">I bought a Cisco 7960 IP phone (preloaded with SIP firmware) to connect with my experimental asterisk server running on a Raspberry Pi. Setting it up wasn't that hard, once I stopped following the bad advice that abounded on Google search results. I'm partially documenting my own experience here - not to help, but to hinder. You don't <b><i>need</i></b> a TFTP server (and therefore you don't need to set any hard-to-reach DHCP options) to distribute configuration files; you can do everything you need through the phone's keypad. If you're going to make changes, you need to perform the "Unlock Configuration" steps each time. 
<br/><h4>Unlock the Configuration</h4><ol><li>Press Settings <li>Scroll down to 9: Unlock <li>Enter the password "cisco" </ol><h4>Set the TFTP server IP address manually</h4><ol><li>Press Settings <li>Scroll down to 3: Network Configuration <li>Scroll down to TFTP Server <li>Press Edit <li>Enter the IP address of the TFTP server </ol><h4>Configure SIP on the Phone</h4><ol><li>Press Settings <li>Scroll down to SIP Configuration <li>Select the line you want to edit <li>Confirm the following entries exist (or add them); they should match what you've got set up in sip.conf on the Asterisk server. </ol><h4>Setup SIP on the Asterisk Server</h4><ol><li>Add a new block to sip.conf. Make sure the extension and secret match the values you provided when configuring SIP on the phone. <code>[1234] context=home secret=secret etc.</code><li>Add a dialplan rule or two to extensions.conf. Make sure the context in sip.conf matches the section name in extensions.conf. <code>exten => 1234,1,Dial(SIP/1234)</code></li><li>Reload the configuration: <code>asterisk -r</code> then <code>sip reload</code> and <code>dialplan reload</code></li></ol><br /></div>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-11624860.post-32290296758065700382017-10-20T14:41:00.000+01:002017-10-20T14:50:08.345+01:00ADSL2+ Sync Rate<div dir="ltr" style="text-align: left;" trbidi="on">If you have ever called up your ISP support team to complain, you may have heard them quote a number called SNR to describe the quality of your connection.<br /><br />It's not quite as simple as a single number, though.<br /><br />ADSL2+ uses multiple "channels" at once to transfer data between the ISP and your home modem. In the case of my ASUS modem, there are 512 channels: there is 2.208 MHz of available spectrum on the copper wire; this is split into equally spaced 4.3125 kHz wide channels.<br /><br />However, not every channel (or bin) is created equally: there is almost certainly some electrical "noise" or congestion on the cables. 
Some bins are reserved for normal telephone connections, others for upstream and downstream communication with the server, with a few reserved as guards to keep sections separated.<br /><br />I pay for a 4 Mb/s connection, so let's assume the ISP allows me 1672 bits' worth of downstream sync rate*. It is up to the modem and ISP to negotiate how to allocate the sync rate across the available bins. My modem surveys the ratio of signal to noise (SNR) across each bin in the downstream set of channels. A rule of thumb is that each bit requires about 3dB of SNR to reliably decode the signal without errors. Bins with higher SNR will therefore have a higher capacity for bits.<br /><br />My modem allows me to connect with SSH and look at three files, which are ordinarily used by the administrative web server interface to present me a way to easily change modem settings and view diagnostics:<br /><ul style="text-align: left;"><li>/var/tmp/spectrum-bpc-ds</li><li>/var/tmp/spectrum-bpc-us</li><li>/var/tmp/spectrum-snr</li></ul>Let's do some very simple mathematics with the data in these files that won't be too difficult to follow:<br /><br />Data taken from the SNR file:<br />1672 bits to allocate<br />268 channels into which they must be allocated<br />10,523.27 dB (the sum of SNR across the available reception channels)<br /><br />Assumptions:<br />3 dB/bit required to convert received signal without errors<br /><br />1) 1672 bits * 3 dB/bit = 5016 dB SNR required to decode the signal<br /><br />2) 10,523.27 dB - 5016 dB = 5507.27 dB unused SNR<br /><br />3) 5507.27 dB / 268 channels = 20.54951 dB unused SNR per channel<br /><br />4) Now, we iterate each of the 268 channels in turn. I'll demonstrate with the first channel, where the SNR was 24.21 dB. 
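Before finishing the per-channel step by hand, the whole rule-of-thumb allocation fits in a few lines of Python (taking the aggregate figures above as inputs; real per-channel SNR data would come from /var/tmp/spectrum-snr):

```python
def bits_for_channel(snr_db, total_snr_db, n_channels, target_bits, db_per_bit=3.0):
    """Rule-of-thumb allocation: share the surplus SNR equally, then convert dB to bits."""
    required_db = target_bits * db_per_bit                         # step 1
    unused_per_channel = (total_snr_db - required_db) / n_channels # steps 2 and 3
    return round((snr_db - unused_per_channel) / db_per_bit)       # step 4

# The first downstream channel from this post: 24.21 dB of SNR carries 1 bit.
bits = bits_for_channel(24.21, total_snr_db=10_523.27, n_channels=268, target_bits=1672)
```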
<br /><br />a) 24.21 dB - 20.54951 dB = ~3.66 dB to be used for data reception<br /><br />b) 3.66 dB / 3 dB/bit = ~1.22 bits<br /><br />c) round 1.22 bits to a whole number: 1<br /><br />It turns out that this simple algorithm gets us remarkably close to the numbers of bits per channel (BPC) in the other files, which are arrived at by the modem's own algorithms. The difference could even be accounted for by (lack of, or outdated) "bitswap", where bins are periodically re-checked for SNR and reallocated to make the best use of the spectrum as it evolves over time.<br /><br /><br />* Ignore - for now - that those numbers are not equivalent; they are effectively the same thing quoted for different units of time<br /> </div>Unknownnoreply@blogger.com1tag:blogger.com,1999:blog-11624860.post-68678094741921482382017-08-23T08:47:00.002+01:002017-08-23T08:58:17.601+01:00OpenVPNThis <a href="https://arstechnica.com/gadgets/2017/05/how-to-build-your-own-vpn-if-youre-rightfully-wary-of-commercial-options/3/">article</a> on Ars Technica inspired me to try and set up a VPN, but it seemed to lack a couple of extra steps that were required to run the server on a Raspberry Pi behind a router. The <a href="https://openvpn.net/index.php/open-source/documentation/howto.html">official</a> OpenVPN documentation was helpful but long-winded, and I resorted to a few askubuntu answers in the end. <ol><li><h4>Certificate names</h4>client-no-pass doesn't have to be replaced with the client host name, but can be (pretty much) any identifier for the client. The important thing is to sign all certificates with the same CA key. <li><h4>VPN topology</h4>The default topology of net30 is perhaps not as easy to comprehend as subnet, and subnet works with Ubuntu, Windows and iPhone clients. 
<li><h4>Default gateway</h4>On the server, I needed to <code>push redirect-gateway</code>, <code>push "route xxx.xxx.xxx.0 255.255.255.0"</code> and consequently <code>push "dhcp-option DNS xxx.xxx.xxx.1"</code> so that the client would be able to route to the server's local network, and the <code>tun0</code> device would be able to do DNS lookups. <li><h4>Firewall rules</h4>Besides enabling the forwarding of IPv4 so that the Raspberry Pi acted as a router, I needed to add three firewall rules in iptables: one for NAT masquerading, and two for accepting NEW and RELATED,ESTABLISHED connections. <li><h4>DNS setup for clients</h4>The dhcp-options that were pushed to the client were ignored, which resulted in the client being unable to resolve any DNS names to IP addresses. This was only a problem on Ubuntu clients, but the script <code>/etc/openvpn/update-resolv-conf</code> was provided when I installed openvpn; all I had to do was reference it from client-name.config as a pair of lines: <pre><br /><code>up /etc/openvpn/update-resolv-conf<br />down /etc/openvpn/update-resolv-conf</code><br /></pre></ol>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-11624860.post-24787250116496857832017-08-13T13:37:00.001+01:002017-08-13T13:37:18.074+01:00Grub<div dir="ltr" style="text-align: left;" trbidi="on">It's nice when the Grub bootloader remembers your last choice and reuses it as the default value next time your machine boots.
Using Linux, put the following in <code>/etc/default/grub</code>:<br /><pre><code>GRUB_DEFAULT=saved<br />GRUB_SAVEDEFAULT=true<br /></code></pre> Then run:<br /><pre><code>sudo update-grub</code></pre></div>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-11624860.post-64321895437180272892017-03-29T09:55:00.004+01:002017-03-29T09:58:42.788+01:00SubscribeOn / ObserveOn<div dir="ltr" style="text-align: left;" trbidi="on">Once you have an instance of an IObservable<T> object, Reactive Extensions (Rx) provides at least two points where you can choose an IScheduler which will determine the threading context under which the later operations will run. Those operations can be broken down into two phases: subscription and observation. <p>Subscription refers to the operations that modify the behaviours of an IObservable<T> (e.g. Throttle, Merge, Concat). The IScheduler for the subscription phase is assigned by the SubscribeOn() method. <p>Observation refers to the callback operations that are invoked as the result of observing an IObservable<T> (e.g. OnNext, OnCompleted, OnError). The IScheduler for the observation phase is assigned by the ObserveOn() method. <p>While not the subject of this post, you can also choose an IScheduler implementation to convert any IEnumerable<T> to an IObservable<T> with the ToObservable method. This is relevant when the IEnumerable<T> has the potential to take a long time to be enumerated.
<br /></div>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-11624860.post-76796347246664381632017-03-04T07:33:00.002+00:002017-03-04T09:31:37.708+00:006 Simple Rules for Async/Await<div dir="ltr" style="text-align: left;" trbidi="on"><ol style="text-align: left;"><li>If a method A calls another method B that returns a Task (or Task<T>) then the calling method does not block on the <em>completion</em> of that task, unless it either:</li><ol type="a"><li>awaits that task, or</li><li>waits on the result of that task</li></ol><li>The calling method cannot <em>await</em> a task unless it is declared with the async modifier, which causes the compiler to build a state machine (similar to the IEnumerable<T> iterator) for that awaitable method.</li><li>In point 1 above, the behaviour is independent of whether or not B is marked with the <em>async</em> keyword.</li><li>It is this non-blocking behaviour of A that allows the calling thread to proceed past the point where B is invoked. It's even possible for the thread's call stack to unwind, and for the thread to go back to reading the Windows message queue if it was <em>the</em> UI thread.</li><li>The compiler will only warn you "because this call is not awaited ..." if B was marked with async.</li><li>It is expected that A will do something with the Task returned by B; at the very least there should be some code to check that the Task did not throw any exceptions. If - instead of B - we have an <em>async</em> method C that returns <em>void</em> then we do not present A with any opportunity to monitor the completion of C.
Unobserved exceptions thrown during the execution of C could indicate corrupted program state and can be configured to terminate the application in much the same way that an unhandled exception in synchronous code can unwind a stack fully and terminate the process.</li></ol></div>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-11624860.post-7473928544934484212016-12-28T07:18:00.002+00:002016-12-28T07:18:16.626+00:00iSCSI Enterprise TargetIn the presence of conflicting information (lots of it), this is how I got iSCSI Enterprise Target running on Ubuntu 16.04 (LTS). First, install the required software with <pre>sudo apt-get install iscsitarget</pre> man ietd indicates that the configuration file is located at /etc/ietd.conf but it isn't, so instead: <pre>sudo nano /etc/iet/ietd.conf</pre> If you're following the "Creating an Open Source SAN" chapter of "Pro Ubuntu Server Administration" by Sander van Vugt, be careful when adding your target <b><i>not</i></b> to include a space between the comma and the type <pre>Target iqn.2008-08.com.sandervanvugt:mytarget<br /> Lun 0 Path=/dev/sdb,Type=fileio<br /> Lun 1 Path=/dev/sdc, Type=fileio</pre> One step that's missing from this chapter is to enable it (why isn't it enabled?). <pre>sudo nano /etc/default/iscsitarget</pre> Ensure this line is present in the configuration <pre>ISCSITARGET_ENABLE=true</pre> Then exit nano, saving changes, and restart the iscsitarget service <pre>sudo /etc/init.d/iscsitarget restart</pre> If everything has gone to plan, you should be able to see these files <pre>cat /proc/net/iet/session<br />cat /proc/net/iet/volume</pre> Otherwise, have a look at the log <pre>tail /var/log/syslog</pre>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-11624860.post-23509424907696803122015-11-02T17:31:00.001+00:002015-11-02T17:34:13.115+00:00SudokuIt's hard to find a newspaper without one, and I personally find that Sudoku solutions are an interesting problem.
Randomly filling a grid of 81 cells with numbers from 1-9 wouldn't take long, but the end result probably wouldn't be a Sudoku solution, where each row, each column, and each box, all need to have the numbers 1-9 exactly once. <p>Instead of doing it randomly, we could try a constrained approach: only pick numbers from the set that would form an allowable Sudoku solution. We could start off by defining the set of all values in row M, all values in column N, and all values in box MN. The union of all three sets would indicate the values that have already been picked; therefore they're the numbers we couldn't re-use if we were to make a valid Sudoku solution. \( R(m) \cup C(n) \cup B(m,n) \) is the set of numbers that have been used, \( (R(m) \cup C(n) \cup B(m,n))' \) is the set of numbers available to us (this last set is the equivalent of \(R(m)' \cap C(n)' \cap B(m,n)'\)). <p>Let's assume we're looking at an empty cell, in an empty grid. \(R(m)=\emptyset, C(n)=\emptyset, B(m,n)=\emptyset\), therefore we can choose anything from \(\emptyset'\). The world is our oyster. Let's pick 1. We could have picked anything, but 1's a good place to start. In the adjacent cell, \(R(m)\) and \(B(m,n)\) would both yield \(\{1\}\) so we'd be forced to pick anything from \(\{1\}'\). Let's pick 2. Continuing in ascending order (and increasing level of constraint) we'd finally reach the 9th cell where \(R(m)=\{1,2,3,4,5,6,7,8\}\) and \(B(m,n)=\{7,8\}\). The only number we can pick is 9. Our top row is complete. <p>Continuing at the first column of the second row, and proceeding in the same fashion, we keep filling cells. It's too easy, something's got to break. <p>Ah. On the second row, at the seventh column, we reach Box(2,7). We've already written \(\{4,5,6,1,2,3\}\) into the row, and \(\{7,8,9\}\) into the box. What can we do now? <p>The answer is to <a href="https://en.wikipedia.org/wiki/Backtracking">backtrack</a>.
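The whole constrained fill, with backtracking, can be sketched in Python. This is a toy solver of my own; trying candidates in ascending order reproduces the completed grid printed at the end of this post:

```python
def solve(grid):
    """Fill a 9x9 grid in place; try candidates in ascending order, backtracking on dead ends."""
    for i in range(81):
        r, c = divmod(i, 9)
        if grid[r][c]:
            continue                      # cell already filled
        # R(m) ∪ C(n) ∪ B(m,n): the values no longer available to this cell
        used = set(grid[r])
        used.update(grid[x][c] for x in range(9))
        br, bc = r - r % 3, c - c % 3
        used.update(grid[br + y][bc + x] for y in range(3) for x in range(3))
        for candidate in range(1, 10):    # lowest value first, as in the text
            if candidate not in used:
                grid[r][c] = candidate
                if solve(grid):
                    return True
                grid[r][c] = 0            # revert, then try the next candidate
        return False                      # nothing fits: force the caller to backtrack
    return True                           # no empty cells left

grid = [[0] * 9 for _ in range(9)]
solve(grid)
for row in grid:
    print(*row)
```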
We didn't have to choose the lowest value from \(R(m)' \cap C(n)' \cap B(m,n)'\) every time; in fact - more often than not - we had a wide array of options. So, we backtrack. That involves reverting the operation on the current cell (leaving it blank), moving to the previous cell, making sure it's blank (but we remembered which value(s) we'd already tried), and then trying another value. We don't need to limit ourselves to going back just one cell, either. In fact, in this case, we need to revert the last three cells. A second row that looks like \(\{4,5,6,7,8,9,1,2,3\}\) is a valid partial solution to a Sudoku problem. We use backtracking even more on the third row, producing \(\{7,8,9,1,2,3,4,5,6\}\). Still: a valid partial solution. <p>It turns out that our set-based constraints, plus the backtracking ability, give us the power to solve any Sudoku problem, even when the starting point is ambiguous (in the case of the empty grid). <pre>1 2 3 4 5 6 7 8 9<br />4 5 6 7 8 9 1 2 3<br />7 8 9 1 2 3 4 5 6<br />2 1 4 3 6 5 8 9 7<br />3 6 5 8 9 7 2 1 4<br />8 9 7 2 1 4 3 6 5<br />5 3 1 6 4 2 9 7 8<br />6 4 2 9 7 8 5 3 1<br />9 7 8 5 3 1 6 4 2</pre> Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-11624860.post-7331653092118621872015-10-03T12:46:00.000+01:002015-10-03T12:46:45.857+01:00ForEach<code>var forEach = new TransformManyBlock<IEnumerable<int>, int>(x => x);</code>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-11624860.post-58669154172476964712014-05-17T17:02:00.002+01:002014-06-07T12:17:31.710+01:00GeostationarySatellites in a geostationary orbit appear to maintain a fixed position relative to a point on the surface of the Earth. To accomplish this, they need to revolve at the same angular speed at which the Earth spins, which makes one full rotation every sidereal day. There are 23.93446122 hours (or 86,164.0604 seconds) in one sidereal day.
A full revolution of 2π radians in 86,164.0604 seconds gives us \(\omega = 7.29212 \times 10^{-5} \ radians \cdot s^{-1}\). This is our target angular speed if we want to keep a satellite over the same spot. <br/><br/>Earth's standard gravitational parameter is its mass M × G (the gravitational constant), which is roughly \(\mu = 398,600.4418 \ km^3 \cdot s^{-2}\). The standard gravitational parameters for planets are better known than either M or G individually (the best way we have at our disposal to weigh extremely massive objects is by observing their gravitational effect on other bodies like orbiting satellites). Acceleration due to gravity is \(\mu \over r^2\). The equatorial radius of Earth is <code>6,378.14 km</code>. If we plug in that figure, we'd see acceleration on the surface is the familiar <code>0.009798285 km/s²</code>. 422 km above the surface (in the orbital sphere of the ISS), it's just <code>0.008619904 km/s²</code> (87.97%). The further out we go, the lower the acceleration and the longer the orbital period. <br/><br/>To find out how far away we have to place a geostationary satellite, consider that it will need to be in a circular orbit with an orbital period of one sidereal day. In one second, it will move \(7.29212 \times 10^{-5} \ radians\); therefore \(r \times (1 - \cos(7.29212 \times 10^{-5}))\) will be the distance that it falls towards the Earth in that time. We know from the SUVAT equation \(s = ut + {1 \over 2}at^2\) that in a second, with no initial velocity, the distance travelled is half the acceleration. We now have an identity \(r (1 - \cos(7.29212 \times 10^{-5})) = {\mu \over 2 r^2}\). Rearrange it and solve: \[r = \sqrt[3]{\mu \over {2 - 2 \cos(7.29212 \times 10^{-5})}}\] It turns out that a distance of 42164.15974 km from the center of the Earth (an altitude of 35786.02274 km above the equatorial surface) is where you'll find geostationary satellites - and they can only exist in that one orbital plane.
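The rearranged formula is easy to check numerically; a quick sketch using the μ and sidereal-day figures above:

```python
import math

mu = 398600.4418              # km^3/s^2: Earth's standard gravitational parameter
sidereal_day = 86164.0604     # seconds

omega = 2 * math.pi / sidereal_day              # ~7.29212e-5 rad/s
# r^3 = mu / (2 - 2*cos(omega)); note 2 - 2*cos(w) ~ w^2 for small w
r = (mu / (2 - 2 * math.cos(omega))) ** (1.0 / 3.0)

print(r)              # distance from Earth's centre, km
print(r - 6378.14)    # altitude above the equator, km
```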
Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-11624860.post-47973581743115907612014-05-10T22:41:00.002+01:002014-06-07T11:49:12.608+01:00Gravity / ISSLet's say we're interested in finding how fast the <a href="http://en.wikipedia.org/wiki/International_Space_Station">International Space Station</a> has to move in order not to fall back to earth (i.e. to stay in orbit). The earth has an equatorial radius of about 6384km and the satellite is about 422km above the equator when it's overhead on its inclined orbital path. For the sake of simplicity we will ignore the oblate spheroid shape of the earth, and replace the elliptical orbit with a circular one, 6806km from the center of the earth. <br/><br/>On Earth's surface the acceleration due to gravity is \(9.8 m \cdot s^{-2}\). The further you go away, the lower the force of gravity. 422km above the surface we're told that the acceleration is 89% of surface gravity - it's \(8.722 m \cdot s^{-2}\). <br/><br/>Using the <a href="http://en.wikipedia.org/wiki/Equations_of_motion#SUVAT_equations">SUVAT equation</a> \(s = ut + {1 \over 2}at^2\) we can see that an object dropped from 422km (i.e. \(g = 8.722 m \cdot s^{-2}\)) would fall 4.361m in the first second. <br/><br/>In the same second, we know that the ISS traces out the circular orbit (i.e. it doesn't crash into the Earth). The angular distance it travels (we could use the unit circle for visual confirmation) is \(\arccos({{r - s} \over r})\) or \(\arccos({{6806000 - 4.361} \over 6806000})\) or \(\arccos({6805995.639 \over 6806000})\) or simply 0.001132041 radians. <br/><br/>If it travels 0.001132041 radians in a second, it will take 5550.316851 seconds to perform a complete revolution of 2π. 5550 seconds is 92½ minutes, a figure that's very close to the published figure (on Wikipedia) and that's quite amazing given the rough estimates we've made to get this far. A circle of radius 6806km has a circumference of 2π × 6806km ≈ 42,763km.
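The chain of reasoning so far - the fall per second, the swept angle, the orbital period - can be checked with a few lines of Python (using the rounded figures above, so the results match to a few decimal places):

```python
import math

r = 6806000.0     # m: 6384 km equatorial radius + 422 km altitude
g = 8.722         # m/s^2: 89% of surface gravity, as above

s = g / 2.0                         # m fallen in the first second (s = ut + at^2/2, u = 0)
theta = math.acos((r - s) / r)      # rad of orbit swept per second, ~0.001132041
period = 2 * math.pi / theta        # s per revolution, ~5550 (about 92.5 minutes)
print(s, theta, period)
```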
Divide that circumference by the orbital period in seconds and you get \(7704 m \cdot s^{-1}\). You could also try \(r \sin(\theta)\) or \(6806000 \sin(0.001132041)\) which gives the same result. It's moving pretty quickly, but would have to be even quicker if the orbit was any closer to the surface!Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-11624860.post-45759822224210359592014-04-16T18:36:00.002+01:002014-06-07T11:31:12.974+01:00Newtonian Reflector<p>Newtonian reflector telescopes use two mirrors - a figured primary mirror (the objective) which focuses incoming light, and a flat secondary mirror which redirects that light at an angle so that it can be viewed without getting your head stuck in the tube. The secondary mirror is only practically important - a CCD sensor could be placed directly into the path of the light focused by the objective mirror and it would capture the light without requiring an additional reflection. However, the subject of this post is the primary mirror, particularly, its shape. </p><p>A 2D parabola is defined by the quadratic equation \(y = ax^2 + bx + c\). If we make some simplifying assumptions (such as: its open side faces up; it "rests" on the x-axis; the focus is on the y-axis at height p) then our equation reduces to \(y = {x^2 \over 4p}\). The first order derivative (i.e. the slope) of this equation is \({dy \over dx} = {x \over 2p}\). That means, for a given focal length (and radius from the centre of the mirror), we can compute the height of the reflective surface from the x-axis, as well as the slope at that point. These two computed values show where light is reflected when it encounters the surface of the objective mirror. Remember that incident light is reflected about the normal vector to the surface: <br/>\(R = D - 2(D \cdot N)N\) </p><p>To enable a 3D version (a paraboloid) we can take advantage of the knowledge that we only need to rotate the parabola (i.e. the shape is rotationally symmetric about the vertical axis).
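Staying in 2D for a moment, the reflection law can be verified numerically. Writing the parabola in the vertex form \(y = {x^2 \over 4p}\), whose focus sits at height p (my own normalisation, chosen so the focal length appears explicitly), every vertical incoming ray should reflect through the focus:

```python
import math

def focal_height_of_reflected_ray(x, p):
    """Reflect the vertical ray D = (0, -1) off y = x^2/(4p) at horizontal offset x,
    then return the height at which the reflected ray crosses the optical axis."""
    y = x * x / (4 * p)
    slope = x / (2 * p)                       # dy/dx at the point of incidence
    n = math.hypot(slope, 1.0)
    nx, ny = -slope / n, 1.0 / n              # unit surface normal, pointing upward
    dx, dy = 0.0, -1.0                        # incoming ray, straight down
    d_dot_n = dx * nx + dy * ny
    rx = dx - 2 * d_dot_n * nx                # R = D - 2(D.N)N
    ry = dy - 2 * d_dot_n * ny
    t = -x / rx                               # ray parameter where it reaches x = 0
    return y + t * ry

for x in (0.5, 1.0, 2.0, 3.0):
    print(round(focal_height_of_reflected_ray(x, p=1.0), 9))   # 1.0 every time: the focus
```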
In 3D, I chose to rotate around the z-axis. </p><p>It then becomes rather straightforward to take a solid angle of light, to compute the angles at which each incident ray encounters the parabolic reflector surface (parallax will affect closer light sources more than further away sources) and then note the points at which two or more rays converge to a single point. It turns out that the parabolic shape is ONLY able to focus light that is travelling parallel to the reflector's primary axis. Closer sources converge further back than the stated focal length; sources at infinity converge exactly at the parabola's focal point. Off-axis sources result in a complex out-of-focus shape when we attempt to capture them on a flat focal plane like a CCD. Looking at the distribution of focal points for a given field of view it was difficult to imagine a single transformation that might enable coma to be completely (and correctly) removed, though clearly a retail market for coma-correcting lenses abounds.</p><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-VginzLso9jM/U07A2lgLdnI/AAAAAAAABNg/j0cuPHCOegE/s1600/output3d-ln.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-left: 1em;"><img border="0" src="http://2.bp.blogspot.com/-VginzLso9jM/U07A2lgLdnI/AAAAAAAABNg/j0cuPHCOegE/s200/output3d-ln.png" /></a></div>
When photons from the ultraviolet region of the spectrum bump into an object, some of the energy can be reflected. Because we can't see them, they go unnoticed. However, certain chemical combinations transform higher-energy ultraviolet photons into lower-energy visible light photons (a process called fluorescence). When those fluorescent objects start emitting more visible light than the non-fluorescent objects around them, they give the perception of being brighter: day-glo. Interesting fact: whiter-than-white washing detergent is known to employ fluorescence to make clothes appear brighter.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-11624860.post-44579098664839308022012-10-17T00:52:00.000+01:002012-10-17T00:55:08.140+01:00ScalarIf you're reading this then <strike>it's already too late</strike> you've probably come across scalars and vectors and pondered their differences. The pair of terms is used in contexts as diverse as the scalar and vector operations performed by a CPU. I think the names go back as far as matrix mathematics, and - if you'll listen - I'll tell you why...<p>Matrices can be "multiplied" in two ways:<ol><li>two matrices are multiplied and the result is an arrangement of dot products;<li>one matrix and one scalar are multiplied and every value in the matrix is <i><b>scaled</b></i> by the scalar.</ol><p>Now, every time an algorithm needs to scale a collection of values by some number, will I name that variable "factor" or "scalar"?<p>Which would you choose? Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-11624860.post-54771702809564949482012-10-09T05:22:00.001+01:002012-10-09T14:03:15.571+01:00RectilinearIn my quest for a better panorama builder, I stumbled upon a few odd facts about cameras. Mine (the Canon G1X) for example, has a rectilinear lens. Or almost rectilinear. What this means is that straight lines in the scene stay straight in the photograph; equivalently, a pixel's distance from the center is proportional to the tangent of its angle from the optical axis, which for modest angles is close to a constant number of degrees per pixel.
I've not been able to measure the angle of view too closely, but I can report that it's slightly over 60 degrees (left to right) judging by my initial results. On a photograph 4352 pixels wide this is about 0.0138 degrees per pixel. (I've assumed the same scale factor is present on the vertical but this is not necessary - just an assumption).<p> Now, imagine you're standing in the middle of a massive sphere, pointing the camera outwards. If you took a photograph, each line of pixels (up/down and left/right) would represent a curved line segment of a great circle on that sphere. And if a pixel is defined by a row and column (or X and Y value) then the position of the light source of that pixel will be the intersection of those two great circles.<p>Mathematically it's relatively simple to calculate the intersection point of two great circles (we only consider two points and then throw one of those away - an optimisation) and thus the position in space of the pixel's light source. We could map two or more photographs of the same scene (taken from the same place, but at different angles) onto our sphere (from the inside) and with the right manipulations, we'd be able to build a 360 degree * 180 degree panoramic view (i.e. the full sphere). <p>And what are the right manipulations? They involve matrices, or more specifically, singular value decompositions. If every photograph shares two overlapping points with another photograph (or photographs) then we can test our inner-sphere projection by comparing the angle between the point-pairs in one photograph with the same point-pair in another. They need to be equal angles or else something's likely wrong with our assumption of the lens projection. Given two point-pairs, it's <i>simply</i> a case of using Procrustes analysis (or actually, just a sub-solution of it) to determine a rotation that can be applied to all pixels of one image to align it perfectly with the other image. 
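That sub-solution can be sketched without a full SVD for the minimal case of exactly two shared directions per image: build an orthonormal triad from each point-pair and multiply the frames together. This is a toy sketch of my own (the names and triad construction are not from any panorama library), and it assumes - as noted above - that the two point-pairs subtend equal angles:

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def normalise(u):
    n = math.sqrt(dot(u, u))
    return tuple(x / n for x in u)

def triad(p, q):
    """Right-handed orthonormal frame spanned by directions p and q."""
    u1 = normalise(p)
    u3 = normalise(cross(p, q))   # normal to the plane of p and q
    u2 = cross(u3, u1)            # completes the frame
    return (u1, u2, u3)

def rotation_from_pairs(a1, a2, b1, b2):
    """Rotation matrix taking directions (a1, a2) onto (b1, b2): R = B^T A,
    where the triad vectors form the rows of A and B."""
    A, B = triad(a1, a2), triad(b1, b2)
    return [[sum(B[k][i] * A[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def rotate(R, v):
    return tuple(sum(R[i][j] * v[j] for j in range(3)) for i in range(3))

# a point-pair seen in photo A, and the same pair seen in photo B
a1, a2 = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)
b1, b2 = (0.0, 1.0, 0.0), (-1.0, 0.0, 0.0)    # photo B is rotated 90° about z
R = rotation_from_pairs(a1, a2, b1, b2)
print(rotate(R, a1), rotate(R, a2))           # lands on b1 and b2
```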
Once the aligned images wrap around the entire sphere, you might find that the computed scale factor obtained when calibrating the lens was slightly out. So... why bother calibrating in the first place if you already know the lens is rectilinear! Readjust the scale factor and realign the images as necessary. Then fill the remainder of the sphere with your photographs. Simples!Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-11624860.post-16204003515665473022012-03-24T20:55:00.004+00:002012-10-17T15:50:06.033+01:00West-EastNothing too technical here: I noticed (when <a href="http://www.astroviewer.com/current-night-sky.php?lon=18.46&lat=-33.93&city=Cape+Town&tz=UTC%2B2">viewing the current night sky in Cape Town</a>) that West and East on the map were swapped around but North and South were still oriented as I'd expect. I puzzled for a few seconds, then held the laptop up to the sky. Eureka! When you're lying on your back (outside, on the grass, an unfamiliar concept to anyone in the UK) staring at the stars with your head and toes aligned North to South, West is your right and East is on your left. All of a sardine the map makes sense...Unknownnoreply@blogger.com0Cape Town, South Africa-33.9248685 18.4240553-34.346497500000005 17.7923413 -33.5032395 19.055769299999998