Monday, March 29, 2010

Hello F#

I just got my Release Candidate of Visual Studio 2010 up and running, so I thought I'd do the obligatory recursive greeting function (not as pretty as Ruby or Python, but we're talking about F# here!).
// recursively reverse a string by moving its last character to the front
let rec reverse (value : string) =
    if value.Length < 2 then
        value
    else
        value.Chars(value.Length - 1).ToString() + reverse (value.Substring(0, value.Length - 1))

[<EntryPoint>]
let main (args : string[]) =
    printfn "%s" (reverse "!dlroW ,olleH")
    0

Tuesday, March 23, 2010

Installing memcached

On ubuntu:
* download the memcached source from memcached.org (it's on Google Code at http://code.google.com/p/memcached/)
* download the libevent source from http://www.monkey.org/~provos/libevent/
* open a terminal window
* cd to where the two tar.gz files are
* tar -zxvf memcached-1.4.4.tar.gz
* tar -zxvf libevent-1.4.10-stable.tar.gz
* cd into the libevent folder
* ./configure && make
* sudo make install
* cd up and into the memcached folder
* ./configure && make
* sudo make install
* at this point we could try starting memcached, but it would probably fall over with an error about a missing libevent shared library. we need to give memcached an easy way to find libevent. (running whereis libevent is good enough for us to verify where it's actually been installed: I suspect /usr/local/lib)
* sudo gedit /etc/ld.so.conf.d/libevent-i386.conf
* on its own line in the file, enter the following, and save and quit gedit:
* /usr/local/lib/
* even though you may have just installed it on an amd64 box, the file name needs to be i386
* sudo ldconfig is the final set up step
* now fire up memcached in verbose mode (use the -vv parameter)
* alternatively run it in daemon mode (as it was intended, with the -d parameter)
* once it's up and running you can muck around with it by starting a telnet session (a sample session follows below)
* telnet localhost 11211
* try some of the commands in the very helpful reference http://lzone.de/articles/memcached.htm
* :-)
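For reference, a sample telnet session might look something like this. The set/get syntax is the standard memcached text protocol (set <key> <flags> <exptime> <bytes>), but the key and value here are just made up; STORED, VALUE and END are the server's responses:

telnet localhost 11211
set greeting 0 0 13
Hello, World!
STORED
get greeting
VALUE greeting 0 13
Hello, World!
END
quit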

Monday, March 22, 2010

"Velocity" Part 1

It's just the CTP3 version, but it's nearly rough enough to put me off new technology for a while. I eventually got it set up on a single machine using SQL Server for its configuration mechanism, which necessitated creating a login, creating a database named "VelocityCacheConfig", and linking the login to a user "Velocity" with the db_datareader, db_datawriter and db_owner roles.

Prerequisites: .NET 3.5 SP1 and Windows PowerShell.

You need to run the supplied PowerShell script as admin. Fair's fair: its intended usage is to manage the cache cluster, and you wouldn't want just anybody doing that.

A few observations from the CTP:
* there was an error message about the firewall
* it created a new region for every item added to the cache
* it took a long time to Remove-Cache, and you can't New-Cache with the same cache name until the old one's removed (Get-CacheHelp lists the PowerShell cmdlets for working with the cache)
* you need to add references to ClientLibrary.dll and CacheBaseLibrary.dll to your Visual Studio project

Before running the following C#, you'd have to Start-CacheCluster and Add-Cache "Test".

using Microsoft.Data.Caching;

// one endpoint per cache host; SCANDIUM is my machine name, 22233 the default cache port
DataCacheServerEndpoint[] endpoints = new DataCacheServerEndpoint[]
{
    new DataCacheServerEndpoint("SCANDIUM", 22233, "DistributedCacheService")
};
DataCacheFactory factory = new DataCacheFactory(endpoints, true, false);
DataCache cache = factory.GetCache("Test");
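Once you have the DataCache, you can start putting things in it. Something like this should work against the "Test" cache (the key and value are my own invention, not from the original test):

cache.Put("greeting", "Hello, Velocity!");
string greeting = (string)cache.Get("greeting");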

Wednesday, March 10, 2010

SqlBulkCopy

SQL Server 2008 has table-valued parameters in ADO.NET, but if you're stuck with poorly performing inserts and a previous version of SQL Server, you can still get some awesome speed with the System.Data.SqlClient.SqlBulkCopy class. Just create a new one (in a using block), set up the destination table name and optionally a set of column mappings, and fire away. I ran a test with 65536 rows (width: 3 x INT + 1 x FLOAT) and it completed in 750ms, compared with 36,447ms to send the equivalent table row-by-bleeding-row using multiple stored procedure invocations. Also, it's much easier to use than writing your own wrapper around bcp.exe (the only option if you're stuck with Sybase)!
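Here's roughly what that looks like (you'll need System.Data and System.Data.SqlClient; the table name, column names and connection string are placeholders, not from my actual test):

// build an in-memory table matching the destination schema (3 x INT + 1 x FLOAT)
DataTable table = new DataTable();
table.Columns.Add("A", typeof(int));
table.Columns.Add("B", typeof(int));
table.Columns.Add("C", typeof(int));
table.Columns.Add("D", typeof(double));
for (int i = 0; i < 65536; i++)
{
    table.Rows.Add(i, i * 2, i * 3, i * 0.5);
}

string connectionString = "Server=(local);Database=Scratch;Integrated Security=SSPI";
using (SqlConnection connection = new SqlConnection(connectionString))
{
    connection.Open();
    using (SqlBulkCopy bulkCopy = new SqlBulkCopy(connection))
    {
        bulkCopy.DestinationTableName = "dbo.BulkTest";
        // optional: map source columns to destination columns by name
        bulkCopy.ColumnMappings.Add("A", "A");
        bulkCopy.ColumnMappings.Add("B", "B");
        bulkCopy.ColumnMappings.Add("C", "C");
        bulkCopy.ColumnMappings.Add("D", "D");
        bulkCopy.WriteToServer(table);
    }
}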

Tuesday, March 09, 2010

Unicode

UTF-16 is a way of representing all of the UCS code points in two or four bytes. It can encode all of the code points from the Basic Multilingual Plane (BMP) in just two bytes, but code points in other planes are encoded into surrogate pairs. UCS-2, as used by SQL Server for all Unicode text, was the precursor to UTF-16 and can only handle code points from the BMP. It is forward compatible with UTF-16, but any code point outside of the BMP encoded in UTF-16 will appear to be two separate code points *inside* the BMP if the encoding is treated as UCS-2. The data will be preserved - it is only the semantics (i.e. the abstract "character(s)") that differ.

Note: This is extremely unlikely to matter for modern business applications, as the scripts outside the BMP are academic and/or historical, such as Phoenician. For these purposes, UCS-2 and UTF-16 can be considered equivalent and interchangeable.

Side note: UTF-8 is just another way of representing all of the UCS code points from all of the planes, but the bits are encoded in a different way to UTF-16.

Extra note: .NET uses UTF-16 for its in-memory encoding of the System.String type. A .NET System.Char is limited to 16 bits, and therefore a single char cannot hold a UTF-16 surrogate pair (i.e. any code point outside the BMP). The char data type is similar to UCS-2 in this respect. Getting the char at a specific index of a string will always return a single 16-bit char, essentially breaking up a surrogate pair if one exists at that index of the string.
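A quick C# illustration of that last point, using U+1D11E (a handy non-BMP code point) as the example:

using System;

class SurrogateDemo
{
    static void Main()
    {
        // U+1D11E MUSICAL SYMBOL G CLEF lies outside the BMP,
        // so UTF-16 stores it as a surrogate pair of two chars
        string clef = "\U0001D11E";
        Console.WriteLine(clef.Length);                   // 2
        Console.WriteLine(char.IsHighSurrogate(clef[0])); // True
        Console.WriteLine(char.IsLowSurrogate(clef[1]));  // True
        // recombine the pair into the original code point
        Console.WriteLine(char.ConvertToUtf32(clef, 0).ToString("X")); // 1D11E
    }
}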

Saturday, March 06, 2010

Alternative to Clustering

Today my desktop machine had a hardware failure. Everything froze up; I tried rebooting, and all I got was a symphony of beeps from the BIOS. Nada. If this had been a production database server and it wasn't in a cluster (as you can tell, I'm having a lot of clustering problems lately), I would have been up the creek without a paddle. As it happens, I have a spare box lying around with a very similar spec, and I was able to take my boot disk out of box 1 and put it into box 2 with very little downtime. It got me thinking: even if you don't have a cluster with a warm machine waiting for failover, you could still reduce the cost of an outage by keeping a cold server of similar spec waiting for just such an occasion. If the data and log files are stored in a SAN, you could theoretically just bring up the cold server, attach the databases, and do some DNS magic on the clients... Would it work? Well, isn't that what DR days are for?

System.IO.Compression and LeaveOpen

I got into the habit of stacking up blocks of using statements to avoid lots of nesting and indentation in my code, ultimately making it more readable to humans!

using (MemoryStream stream = new MemoryStream())
using (DeflateStream deflater = new DeflateStream(stream, CompressionMode.Compress))
using (StreamWriter writer = new StreamWriter(deflater))
{
    // write strings into a compressed memory stream
}

The example above shows where this doesn't work, though. The intention was to compress string data into an in-memory buffer, but it wasn't working as expected. There was a bug in my code!

When you're using a DeflateStream or a GZipStream, you're (hopefully) writing more bytes into it than it's writing to its underlying stream. You may choose to Flush() it, but both streams remain open and you can continue to write data into the compression stream. Until the compression stream is closed, however, the underlying stream will not contain completely valid, readable data. When you Close() the compression stream, it writes out the final bytes that make the stream valid. By default, the compression stream also closes the underlying stream when it's closed, but when that underlying stream is a MemoryStream, this leaves you with a valid compressed byte stream somewhere in memory that's inaccessible!

What you need to do instead is leave the underlying stream open, using the extra constructor argument on the compression stream, like this:

using (MemoryStream stream = new MemoryStream())
{
    using (DeflateStream deflater = new DeflateStream(stream, CompressionMode.Compress, true))
    using (StreamWriter writer = new StreamWriter(deflater))
    {
        // write strings into a compressed memory stream
    }
    // access the still-open memory stream's buffer
}
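To round it out, here's a minimal compress/decompress round trip built on that pattern. The class and method names are mine, not from the original code:

using System.IO;
using System.IO.Compression;

static class CompressionHelpers
{
    public static byte[] CompressText(string text)
    {
        using (MemoryStream stream = new MemoryStream())
        {
            using (DeflateStream deflater = new DeflateStream(stream, CompressionMode.Compress, true))
            using (StreamWriter writer = new StreamWriter(deflater))
            {
                writer.Write(text);
            }
            // the DeflateStream is now closed (its final bytes flushed),
            // but the MemoryStream was left open by the third constructor argument
            return stream.ToArray();
        }
    }

    public static string DecompressText(byte[] compressed)
    {
        using (MemoryStream stream = new MemoryStream(compressed))
        using (DeflateStream inflater = new DeflateStream(stream, CompressionMode.Decompress))
        using (StreamReader reader = new StreamReader(inflater))
        {
            return reader.ReadToEnd();
        }
    }
}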

Friday, March 05, 2010

PowerShell Script for New Guid

> function NewID { [System.Console]::WriteLine([System.Guid]::NewGuid()); }
> NewID
83ced965-68e6-454d-aa8e-2f056ae1a030

Money and Trouble

Use "Swim Lanes" to isolate the money makers and to isolate the trouble makers. The principle is intended to give higher quality service to paying customers, or more specifically the ones on whom your business is more dependent. It is also intended to isolate failures in troublesome components so that they do not affect the overall system negatively.

Monday, March 01, 2010

Failover Fail

I couldn't get my SQL Server Failover Cluster up and running! Full disclosure: I couldn't even get the Windows Server 2008 Failover Cluster running. It failed the Validation Check "Validate SCSI-3 Persistent Reservation"; it looks like iscsitarget doesn't support that particular feature. Back to square one.