The Value of Experience

I was reading a blog post by my colleague Doug Finke in reference to a “programmer competency matrix” by Sijin Joseph. I took a look at the matrix and it seemed like a set of pretty reasonable benchmarks for a programmer’s growth. My only reservation with the chart was its claim that you need a certain number of years’ experience under your belt to be a certain grade of programmer. Here are Sijin’s criteria:

  • Level 0: 1 year
  • Level 1: 2-5 years
  • Level 2: 6-9 years
  • Level 3: 10+ years

To be perfectly honest, I find the entire idea reprehensible. According to this matrix, nobody could be considered an “expert” programmer in C#, since the language has only been around since 2001. I’m not going to be the one to break the news to Anders. OK, maybe that’s a bit of a “gotcha” exception, but I don’t think the entire idea holds water. The problem with the assertion is that it assumes all years are equal in quality. There’s no comparison between a year at a challenging company on the cutting edge of your technology field, working with the leaders in the industry, and a year making small changes to an enterprise CMS. No offense to the latter group, but that’s just the ugly truth.

What really bothers me is that this kind of fallacy isn’t limited to a few people; as I’m sure we all know, almost every employer has some kind of experience requirement in strict years. In fact, one employer I was looking at after graduation actually wanted candidates to have experience with C# since day one, according to the advertisement. We, as an industry, have got to dispel the myth of causation (or even correlation, really) between years of experience and skill in development. Basing our recruitment standards on this primitive metric only encourages age warfare, not unlike a number of discussions I’ve seen on Slashdot, with the younger programmers calling older ones “set in their ways,” and the older programmers calling younger ones “re-creators of the wheel in their inexperience.” It’s got to stop. We have to gather together and celebrate knowledge in our field and, in turn, work to better everyone in our technical community.

This is what I think makes Lab49 recruitment so effective: we don’t predicate someone’s candidacy on experience. Don’t have a degree in computer science or software engineering? Doesn’t matter. Don’t have ten years’ experience in software? So what? What matters is that you are skilled in your technology, be it C#, Flex, Java, or C++, and you’re someone who works well on a team. When you think about it, this is just good business. Who cares about someone’s pedigree? We’re in the business of creating effective software solutions, not showing off our employees’ education and past merits.

Moving beyond simply removing the years-of-experience criterion, I think résumés themselves are becoming more and more deprecated. It’s not because people don’t have significant, noteworthy accomplishments, but rather that people use ambiguous language, and even outright lies, on their résumés in a desperate attempt to be hired for a job for which they’re not qualified, and for every gem in a stack of résumés, there are a lot of rocks. This is where the community steps in. People involved in their local technology meetups and in online communities like StackOverflow show their skill on a regular basis, and it’s much harder to BS someone there, especially if they start asking questions.

Over the next few years I suspect we’ll start seeing a paradigm shift in the way employers find talent. I won’t prognosticate that StackOverflow and its careers division will be the future, but I’m willing to bet it’s going to have a hand in it, at least. The change in talent-hunting strategy, I think, is just one facet of a larger shift in our industry towards building communities. It’s already underway, so all we have to do now is embrace the change and jump in.

Alpha-Blending Colors in PowerShell

The other day I was given the task of converting a particularly poorly designed VisualBrush into a LinearGradientBrush. One of the problems I came across very quickly was the use of semi-transparent colors layered on top of each other, and, of course, I needed a “flattened” color for my GradientStop. Now, I could have used Paint.NET or GIMP or Photoshop to lay out a couple layers of colors, set the transparencies, and used the color dropper to get the result. Of course, since I’m not a designer, I don’t have any of those things installed on my work computer, so I decided to just find the equation to blend the channels myself. It didn’t take long, and Wikipedia delivered the goods. According to [the article on alpha compositing](http://en.wikipedia.org/wiki/Alpha_compositing), the formula to merge two colors, [latex]C_a[/latex] and [latex]C_b[/latex], into some output color, [latex]C_o[/latex], looks like this:

[latex]C_o = C_a\alpha_a+C_b\alpha_b(1-\alpha_a)[/latex]

Since a color can be thought of as a three-tuple of its R, G, and B channels, the formula is easily distributed to each of these values.
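
For instance, applied to the red channel, the blend is [latex]R_o = R_a\alpha_a+R_b\alpha_b(1-\alpha_a)[/latex], and likewise for green and blue.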

At this point, I decided I could probably just pull out a calculator and crunch the numbers. But maybe, in about the same time, I could also whip something together, say in PowerShell, to do it for me. Since I’m still learning PowerShell, I figured the learning experience would be worth at least something.

So the first thing I needed was the ability to parse an ARGB hex code into a hashtable with each channel separated out. Here’s what I got:

function Blend-Colors ([string[]] $colors) {
    # $argbHexColorRegex should recognize all 4-byte hex color strings prefixed with a '#'
    # and assign them to groups named a, r, g, and b for each channel, respectively.
    $argbHexColorRegex = "(?i)^#(?'a'[0-9A-F]{2})(?'r'[0-9A-F]{2})(?'g'[0-9A-F]{2})(?'b'[0-9A-F]{2})$"

    foreach($colorHex in $colors) {
        if($colorHex -match $argbHexColorRegex) {
            $color = @{
                a = [int]::Parse($matches.a, 'AllowHexSpecifier')
                r = [int]::Parse($matches.r, 'AllowHexSpecifier')
                g = [int]::Parse($matches.g, 'AllowHexSpecifier')
                b = [int]::Parse($matches.b, 'AllowHexSpecifier')
            }

            $color | ft
        } else {
            throw "Invalid color: $colorHex"
        }
    }
}

OK, not too shabby. If you’re wondering where $matches came from: as a side effect of the -match operation, the $matches hashtable is set with all the matched groups defined. I just prefer to use the object syntax in this case over the index syntax.

So, I’m not really happy with the amount of repetition going on in the assignment of my hashtable. My first attempt to clean it up looked like this:

$color = ('a','r','g','b') | % { @{ $_ = [int]::Parse($matches.$_, 'AllowHexSpecifier') } }

But that just gave me an array of hashtables with one attribute in each. I asked Doug Finke what he thought and he recommended this modification:

('a','r','g','b') | % { $color = @{} } { $color.$_ = [int]::Parse($matches.$_, 'AllowHexSpecifier') }

Cool! It seems a little obvious now, though.

Next on the agenda was to translate the blending equation into PowerShell. Since the equation is a little more involved, I decided to abstract it out into a subfunction like so:

function Merge-Channel ($c0, $c1, $c) {
	$a0 = $c0.a / 255
	$a1 = $c1.a / 255
	return $a0 * $c0.$c + $a1 * $c1.$c * (1 - $a0)
}

$c0 and $c1 are color hashtables and $c is the name of the channel to blend. I had to divide the alpha channels by 255 to produce a value compatible with the equation, namely, between 0 and 1.

The reason I chose to accept the entire color and desired channel, rather than a terser definition accepting the specific channel values and related alpha values, was to make calling the code a little more elegant:

('r','g','b') | % `
	{ $mergeColor = @{ a = [Math]::Min(255, $outColor.a + $addColor.a) } } `
	{ $mergeColor.$_ = Merge-Channel $addColor $outColor $_ }

When blending, the alpha channels simply sum (clamped at 255), so I put that in my hashtable initializer and just iterated over the color channels.

Now the final step is to return our value back in hex form. Fortunately, the formatting styles for int make this really easy:

return '#{0:x2}{1:x2}{2:x2}{3:x2}' -f (('a','r','g','b') | % { [int][Math]::Round($outColor.$_, 0) })

I’m rounding, as opposed to simply truncating, to stay as faithful as possible to the blended color.

Putting it all together, I decided to create two array constants, $argb and $rgb to alias arrays of the channels. While I was at it, I also promoted my $argbHexColorRegex to a constant just for good measure. Finally, I made the base color white, so there would be something to blend against. The result looks like this:

function Blend-Colors ([Parameter(Mandatory=$true)] [string[]] $colors) {
    # $argbHexColorRegex should recognize all 4-byte hex color strings prefixed with a '#'
    # and assign them to groups named a, r, g, and b for each channel, respectively.
    Set-Variable argbHexColorRegex -Option Constant `
        -Value "(?i)^#(?'a'[0-9A-F]{2})(?'r'[0-9A-F]{2})(?'g'[0-9A-F]{2})(?'b'[0-9A-F]{2})$"
    Set-Variable argb -Option Constant -Value 'a','r','g','b'
    Set-Variable rgb -Option Constant -Value 'r','g','b'

    function Merge-Channel ($c0, $c1, $c) {
        $a0 = $c0.a / 255
        $a1 = $c1.a / 255
        return $a0 * $c0.$c + $a1 * $c1.$c * (1 - $a0)
    }

    $argb | % { $outColor = @{} } { $outColor.$_ = 255 } # set $outColor to white (#FFFFFFFF)

    foreach($color in $colors) {
        if(-not ($color -match $argbHexColorRegex)) {
            throw "Invalid color: $color"
        }

        $argb | % { $addColor = @{} } { $addColor.$_ =  [int]::Parse($matches.$_, 'AllowHexSpecifier') }

        $rgb | % `
            { $mergeColor = @{ a = [Math]::Min(255, $outColor.a + $addColor.a) } } `
            { $mergeColor.$_ = Merge-Channel $addColor $outColor $_ }

        $outColor = $mergeColor
    }

    return '#{0:x2}{1:x2}{2:x2}{3:x2}' -f ($argb | % { [int][Math]::Round($outColor.$_, 0) })
}

This is looking really good, and I might have just stopped here. The only things I was missing at this point were pipelining and documentation, and since my solution had become completely over-engineered as it was, I decided I might as well go for broke.

The first thing I wanted to do was abstract out my initialization of $outColor to a parameter. Since I’d have to parse the string, I’d also need to abstract my color hex parser.

function Parse-Color ([string] $hex) {
	if($hex -match $argbHexColorRegex) {
		$argb | % { $color = @{} } { $color.$_ =  [int]::Parse($matches.$_, 'AllowHexSpecifier') }
		return $color;
	} else {
		return $null;
	}
}

The reason I decided to return null instead of throwing an error immediately is that I wanted to treat errors differently in the two places. Specifically, if an invalid string is passed to the -background parameter, I want to throw an argument exception, and if something invalid comes in over the pipeline, I just want to write the error to the error output and keep on trucking.

I tried a few different approaches to accepting input both from the parameter list and from the pipeline, and I finally found out about [Parameter(ValueFromPipeline=$true)]. Here is my test setup:

function Get-Range([int]$max) {
    for($i=0; $i -lt $max; $i++) {
        Write-Host "pushing $i to pipeline"
        Write-Output $i
    }
}

function Test-Pipeline([Parameter(ValueFromPipeline=$true)][int[]]$vals = $null) {
    process {
        foreach($item in @($vals)){
            Write-Host "processing $item from pipeline"
        }
    }
}

And my test output:

> Get-Range 3 | Test-Pipeline
pushing 0 to pipeline
processing 0 from pipeline
pushing 1 to pipeline
processing 1 from pipeline
pushing 2 to pipeline
processing 2 from pipeline

> Test-Pipeline ('1','2','3')
processing 1 from pipeline
processing 2 from pipeline
processing 3 from pipeline

Notice the @($vals) in my foreach? That’s to protect against null inputs by ensuring $vals is a list.

Now that I’ve got all my pieces together, I just need to put everything in place with a splash of documentation.

<#
    .SYNOPSIS
    Takes a list of ARGB hex values and blends them in order against a specified background.

    .PARAMETER background
    The background color to blend against, defaults to white.

    .PARAMETER colors
    A list of ARGB hex color strings, can be pushed from the pipeline.

    .EXAMPLE
    Blend-Colors '#ff121212', '#705F6A87'

    .LINK
    http://en.wikipedia.org/wiki/Alpha_compositing
#>
function Blend-Colors (
    [string] $background = '#FFFFFFFF',
    [Parameter(ValueFromPipeline = $true)] [string[]] $colors = $null) {
    begin {
        # $argbHexColorRegex should recognize all 4-byte hex color strings prefixed with a '#'
        # and assign them to groups named a, r, g, and b for each channel, respectively.
        Set-Variable argbHexColorRegex -Option Constant `
            -Value "(?i)^#(?'a'[0-9A-F]{2})(?'r'[0-9A-F]{2})(?'g'[0-9A-F]{2})(?'b'[0-9A-F]{2})$"

        Set-Variable argb -Option Constant -Value 'a','r','g','b'
        Set-Variable rgb -Option Constant -Value 'r','g','b'

        function Parse-Color ([string] $hex) {
            if($hex -match $argbHexColorRegex) {
                $argb | % { $color = @{} } { $color.$_ =  [int]::Parse($matches.$_, 'AllowHexSpecifier') }
                return $color;
            } else {
                return $null;
            }
        }

        function Merge-Channel ($c0, $c1, $c) {
            $a0 = $c0.a / 255
            $a1 = $c1.a / 255
            return $a0 * $c0.$c + $a1 * $c1.$c * (1 - $a0)
        }

        $outColor = Parse-Color $background
        if(-not $outColor) {
            throw (New-Object ArgumentException -ArgumentList "Invalid color: '$background'", 'background')
        }
    }
    process {
        foreach($color in @($colors)){
            $addColor = Parse-Color $color
            if(-not $addColor) {
                Write-Error "Invalid input color: $color"
                continue
            }

            $rgb | % `
                { $mergeColor = @{ a = [Math]::Min(255, $outColor.a + $addColor.a) } } `
                { $mergeColor.$_ = Merge-Channel $addColor $outColor $_ }

            $outColor = $mergeColor
        }
    }
    end {
        return '#{0:x2}{1:x2}{2:x2}{3:x2}' -f ($argb | % { [int][Math]::Round($outColor.$_, 0) })
    }
}

Now all you have to do is save it in %userprofile%\My Documents\WindowsPowerShell\Modules\UITools as UITools.psm1 and call Import-Module UITools to bring in this function.
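
As a quick sanity check, here’s a hypothetical session blending 50%-transparent black over the default white background, which should come out a medium gray:

> Blend-Colors '#80000000'
#ff7f7f7f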

The Extension Method Pack

Since .NET 3.5 came out, I’ve been enjoying taking advantage of extension methods and the ability to create my own. The thing I’ve noticed is that a handful of them are useful to almost any application, above and beyond what Microsoft provides in System.Linq. So over the last few days I took the time to gather these methods together, unit test them, and run them through FxCop to make a high-quality package ready to go in any application with a little re-namespacing.

I’ve broken each code sample into independent blocks wherein all necessary dependencies are contained, so you can take any extension method a la carte or you can get everything from the attached zip file. My solution was built in .NET 4.0 in Visual Studio 2010, but everything should work just fine in .NET 3.5 with Visual Studio 2008.

Also included in the zip file are my unit tests, which may help you understand usage of some of the more esoteric extensions, such as ChainGet, and XML comments for your IntelliSense and XML documentation generator.

Edit: The whole solution is now available on GitHub!

Here’s the table of contents, so you can jump around more easily:

  • IEnumerable
  • ICollection
  • IDictionary
  • Object
  • DependencyObject

IEnumerable

ForEach is pretty straightforward. It mimics List<T>.ForEach, but for all IEnumerable, both generic and weakly typed.

public static void ForEach<T>(
	this IEnumerable<T> collection,
	Action<T> action)
{
	if (collection == null)
		throw new ArgumentNullException("collection");

	if (action == null)
		throw new ArgumentNullException("action");

	foreach (var item in collection)
		action(item);
}
public static void ForEach(
	this IEnumerable collection,
	Action<object> action)
{
	if (collection == null)
		throw new ArgumentNullException("collection");

	if (action == null)
		throw new ArgumentNullException("action");

	foreach (var item in collection)
		action(item);
}
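
Here’s a minimal usage sketch (hypothetical values):

// Usage:
IEnumerable<string> names = new[] { "foo", "bar", "baz" };
names.ForEach(Console.WriteLine); // prints each name on its own line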


Append and Prepend simply take an item and return a new IEnumerable<T> with that item on the end or beginning, respectively. Prepend is the equivalent of the [cons](http://en.wikipedia.org/wiki/Cons) operation on a list.

public static IEnumerable<T> Append<T>(
	this IEnumerable<T> collection,
	T item)
{
	if (collection == null)
		throw new ArgumentNullException("collection");

	if (item == null)
		throw new ArgumentNullException("item");

	foreach (var colItem in collection)
		yield return colItem;

	yield return item;
}
public static IEnumerable<T> Prepend<T>(
	this IEnumerable<T> collection,
	T item)
{
	if (collection == null)
		throw new ArgumentNullException("collection");

	if (item == null)
		throw new ArgumentNullException("item");

	yield return item;

	foreach (var colItem in collection)
		yield return colItem;
}
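
A quick sketch of how these compose (hypothetical values):

// Usage:
var digits = new[] { 2, 3 }.Prepend(1).Append(4); // yields 1, 2, 3, 4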


AsObservable and AsHashSet yield their respective data structures, but first check whether the collection is already the type you want, saving an unnecessary copy when you’re dealing with interfaces.

public static ObservableCollection<T> AsObservable<T>(
	this IEnumerable<T> collection)
{
	if (collection == null)
		throw new ArgumentNullException("collection");

	return collection as ObservableCollection<T> ??
		new ObservableCollection<T>(collection);
}
public static HashSet<T> AsHashSet<T>(this IEnumerable<T> collection)
{
	if (collection == null)
		throw new ArgumentNullException("collection");

	return collection as HashSet<T> ?? new HashSet<T>(collection);
}
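
A small sketch of the savings (hypothetical values); calling AsHashSet on something that’s already a HashSet<T> just hands back the same instance:

// Usage:
IEnumerable<int> numbers = new[] { 1, 2, 2, 3 };
var set = numbers.AsHashSet();           // new HashSet<int> { 1, 2, 3 }
var same = set.AsHashSet();              // same instance back, no copy
var observable = numbers.AsObservable(); // new ObservableCollection<int>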


[ArgMax](http://en.wikipedia.org/wiki/Arg_max) and ArgMin are counterparts to Max and Min in the System.Linq namespace, but return the item in the list that produced the greatest value from Max or the least value from Min, respectively.

public static T ArgMax<T, TValue>(
	this IEnumerable<T> collection,
	Func<T, TValue> function)
	where TValue : IComparable<TValue>
{
	return ArgComp(collection, function, GreaterThan);
}

private static bool GreaterThan<T>(T first, T second)
	where T : IComparable<T>
{
	return first.CompareTo(second) > 0;
}

public static T ArgMin<T, TValue>(
	this IEnumerable<T> collection,
	Func<T, TValue> function)
	where TValue : IComparable<TValue>
{
	return ArgComp(collection, function, LessThan);
}

private static bool LessThan<T>(T first, T second) where T : IComparable<T>
{
	return first.CompareTo(second) < 0;
}

private static T ArgComp<T, TValue>(
	IEnumerable<T> collection, Func<T, TValue> function,
	Func<TValue, TValue, bool> accept)
	where TValue : IComparable<TValue>
{
	if (collection == null)
		throw new ArgumentNullException("collection");

	if (function == null)
		throw new ArgumentNullException("function");

	var isSet = false;
	var maxArg = default(T);
	var maxValue = default(TValue);

	foreach (var item in collection)
	{
		var value = function(item);
		if (!isSet || accept(value, maxValue))
		{
			maxArg = item;
			maxValue = value;
			isSet = true;
		}
	}

	return maxArg;
}
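
For example (hypothetical values), projecting each string onto its length:

// Usage:
var words = new[] { "pear", "apple", "fig" };
var longest = words.ArgMax(w => w.Length);  // "apple"
var shortest = words.ArgMin(w => w.Length); // "fig"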


ICollection

AddAll imitates List<T>.AddRange.

public static void AddAll<T>(
	this ICollection<T> collection,
	IEnumerable<T> additions)
{
	if (collection == null)
		throw new ArgumentNullException("collection");

	if (additions == null)
		throw new ArgumentNullException("additions");

	if (collection.IsReadOnly)
		throw new InvalidOperationException("collection is read only");

	foreach (var item in additions)
		collection.Add(item);
}
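
A minimal sketch (hypothetical values):

// Usage:
ICollection<int> numbers = new List<int> { 1, 2 };
numbers.AddAll(new[] { 3, 4, 5 }); // numbers is now 1, 2, 3, 4, 5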


RemoveAll imitates List<T>.RemoveAll. A second overload allows you to specify the removals if you already have them.

public static IEnumerable<T> RemoveAll<T>(
	this ICollection<T> collection,
	Predicate<T> predicate)
{
	if (collection == null)
		throw new ArgumentNullException("collection");

	if (predicate == null)
		throw new ArgumentNullException("predicate");

	if (collection.IsReadOnly)
		throw new InvalidOperationException("collection is read only");

	// we can't possibly remove more than the entire list.
	var removals = new List<T>(collection.Count);

	// this is an O(n + m * k) operation where n is collection.Count,
	// m is removals.Count, and k is the removal operation time. Because
	// we know n >= m, this is an O(n + n * k) operation or just O(n * k).

	foreach (var item in collection)
		if (predicate(item))
			removals.Add(item);

	foreach (var item in removals)
		collection.Remove(item);

	return removals;
}
public static void RemoveAll<T>(
	this ICollection<T> collection,
	IEnumerable<T> removals)
{
	if (collection == null)
		throw new ArgumentNullException("collection");

	if (removals == null)
		throw new ArgumentNullException("removals");

	if (collection.IsReadOnly)
		throw new InvalidOperationException("collection is read only");

	foreach (var item in removals)
		collection.Remove(item);
}
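
A minimal sketch of the predicate form (hypothetical values); note that, unlike List<T>.RemoveAll, it returns the removed items rather than a count:

// Usage:
ICollection<int> numbers = new List<int> { 1, 2, 3, 4, 5 };
var evens = numbers.RemoveAll(n => n % 2 == 0); // numbers is 1, 3, 5; evens is 2, 4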


IDictionary

All of these methods have to do with [hash tables](http://en.wikipedia.org/wiki/Hash_table) whose values are collections. I use them pretty frequently, and these methods make life a lot easier.

Add and AddAll insert the key and a new collection into the dictionary if the key doesn’t already exist, and then add the item or items to the collection.

public static void Add<TKey, TCol, TItem>(
	this IDictionary<TKey, TCol> dictionary,
	TKey key,
	TItem item)
	where TCol : ICollection<TItem>, new()
{
	if (dictionary == null)
		throw new ArgumentNullException("dictionary");

	if (key == null)
		throw new ArgumentNullException("key");

	TCol col;
	if (dictionary.TryGetValue(key, out col))
	{
		if (col.IsReadOnly)
			throw new InvalidOperationException("bucket is read only");
	}
	else
		dictionary.Add(key, col = new TCol());

	col.Add(item);
}
public static void AddAll<TKey, TCol, TItem>(
	this IDictionary<TKey, TCol> dictionary,
	TKey key,
	IEnumerable<TItem> additions)
	where TCol : ICollection<TItem>, new()
{
	if (dictionary == null)
		throw new ArgumentNullException("dictionary");

	if (key == null)
		throw new ArgumentNullException("key");

	if (additions == null)
		throw new ArgumentNullException("additions");

	TCol col;
	if (!dictionary.TryGetValue(key, out col))
		dictionary.Add(key, col = new TCol());

	foreach (var item in additions)
		col.Add(item);
}
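
A short sketch (hypothetical values); the bucket type is inferred from the dictionary’s value type:

// Usage:
var index = new Dictionary<string, List<int>>();
index.Add("odds", 1);                 // creates the "odds" bucket and adds 1
index.AddAll("odds", new[] { 3, 5 }); // the bucket is now 1, 3, 5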


Remove and RemoveAll simply remove items from the collection associated with the specified key, if there is one. For the predicate overloads, you need to explicitly construct your Predicate<T> delegate because the C# compiler has trouble doing the type inference.

public static void Remove<TKey, TCol, TItem>(
	this IDictionary<TKey, TCol> dictionary,
	TKey key,
	TItem item)
	where TCol : ICollection<TItem>
{
	if (dictionary == null)
		throw new ArgumentNullException("dictionary");

	if (key == null)
		throw new ArgumentNullException("key");

	TCol col;
	if (dictionary.TryGetValue(key, out col))
		col.Remove(item);
}
public static void RemoveAll<TKey, TCol, TItem>(
	this IDictionary<TKey, TCol> dictionary,
	TKey key,
	IEnumerable<TItem> removals)
	where TCol : ICollection<TItem>
{
	if (dictionary == null)
		throw new ArgumentNullException("dictionary");

	if (key == null)
		throw new ArgumentNullException("key");

	if (removals == null)
		throw new ArgumentNullException("removals");

	TCol col;
	if (dictionary.TryGetValue(key, out col))
		foreach (var item in removals)
			col.Remove(item);
}
public static IEnumerable<TItem> RemoveAll<TKey, TCol, TItem>(
	this IDictionary<TKey, TCol> dictionary,
	TKey key,
	Predicate<TItem> predicate)
	where TCol : ICollection<TItem>
{
	if (dictionary == null)
		throw new ArgumentNullException("dictionary");

	if (key == null)
		throw new ArgumentNullException("key");

	if (predicate == null)
		throw new ArgumentNullException("predicate");

	var removals = new List<TItem>();

	TCol col;
	if (dictionary.TryGetValue(key, out col))
	{
		foreach (var item in col)
			if (predicate(item))
				removals.Add(item);

		foreach (var item in removals)
			col.Remove(item);
	}

	return removals;
}

// Usage:
var myDictionary = new Dictionary<int, List<int>>();
myDictionary.RemoveAll(4, new Predicate<int>(i => i < 42));


RemoveAndClean and RemoveAllAndClean both remove items from the collection associated with the specified key and if the resulting collection is empty, they remove the key from the dictionary as well. These come in both predicate and list forms.

public static void RemoveAndClean<TKey, TCol, TItem>(
	this IDictionary<TKey, TCol> dictionary,
	TKey key,
	TItem item)
	where TCol : ICollection<TItem>
{
	if (dictionary == null)
		throw new ArgumentNullException("dictionary");

	if (key == null)
		throw new ArgumentNullException("key");

	TCol col;
	if (dictionary.TryGetValue(key, out col))
	{
		col.Remove(item);

		if (col.Count == 0)
			dictionary.Remove(key);
	}
}
public static void RemoveAllAndClean<TKey, TCol, TItem>(
	this IDictionary<TKey, TCol> dictionary,
	TKey key,
	IEnumerable<TItem> removals)
	where TCol : ICollection<TItem>
{
	if (dictionary == null)
		throw new ArgumentNullException("dictionary");

	if (key == null)
		throw new ArgumentNullException("key");

	if (removals == null)
		throw new ArgumentNullException("removals");

	TCol col;
	if (dictionary.TryGetValue(key, out col))
	{
		foreach (var item in removals)
			col.Remove(item);

		if (col.Count == 0)
			dictionary.Remove(key);
	}
}
public static IEnumerable<TItem> RemoveAllAndClean<TKey, TCol, TItem>(
	this IDictionary<TKey, TCol> dictionary,
	TKey key,
	Predicate<TItem> predicate)
	where TCol : ICollection<TItem>
{
	if (dictionary == null)
		throw new ArgumentNullException("dictionary");

	if (key == null)
		throw new ArgumentNullException("key");

	if (predicate == null)
		throw new ArgumentNullException("predicate");

	var removals = new List<TItem>();

	TCol col;
	if (dictionary.TryGetValue(key, out col))
	{
		foreach (var item in col)
			if (predicate(item))
				removals.Add(item);

		foreach (var item in removals)
			col.Remove(item);

		if (col.Count == 0)
			dictionary.Remove(key);
	}

	return removals;
}

// Usage:
var myDictionary = new Dictionary<int, List<int>>();
myDictionary.RemoveAllAndClean(4, new Predicate<int>(i => i < 42));


Clean simply goes through all the keys and removes entries with empty collections.

public static void Clean<TKey, TCol>(this IDictionary<TKey, TCol> dictionary)
	where TCol : ICollection
{
	if (dictionary == null)
		throw new ArgumentNullException("dictionary");

	var keys = dictionary.Keys.ToList();

	foreach (var key in keys)
		if (dictionary[key].Count == 0)
			dictionary.Remove(key);
}
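
For instance (hypothetical values):

// Usage:
var index = new Dictionary<string, List<int>>
{
	{ "empty", new List<int>() },
	{ "full", new List<int> { 1 } }
};
index.Clean(); // removes the "empty" entry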


Object

As casts an object to a specified type, executes an action with it, and returns whether or not the cast was successful.

public static bool As<T>(this object obj, Action<T> action)
	where T : class
{
	if (obj == null)
		throw new ArgumentNullException("obj");

	if (action == null)
		throw new ArgumentNullException("action");

	var target = obj as T;
	if (target == null)
		return false;

	action(target);
	return true;
}
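
A minimal sketch (hypothetical values):

// Usage:
object boxed = "hello";
var wasString = boxed.As<string>(s => Console.WriteLine(s.ToUpper())); // prints "HELLO"; wasString is true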


AsValueType does the same thing as As, but for value types.

public static bool AsValueType<T>(this object obj, Action<T> action)
	where T : struct
{
	if (obj == null)
		throw new ArgumentNullException("obj");

	if (action == null)
		throw new ArgumentNullException("action");

	if (obj is T)
	{
		action((T)obj);
		return true;
	}

	return false;
}
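
And its value-type counterpart (hypothetical values):

// Usage:
object boxed = 42;
var wasInt = boxed.AsValueType<int>(i => Console.WriteLine(i + 1)); // prints 43; wasInt is true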


ChainGet attempts to resolve a chain of member accesses and returns the result or default(TValue). Since TValue could be a value type, there is also an overload that has an out parameter indicating whether the value was obtained.

Be careful with this extension. Since it uses reflection, it’s a bit slow. For discrete usage, each call is under 1 ms, but if you use it in a loop with many items, the performance hit will become more tangible.

public static TValue ChainGet<TRoot, TValue>(
	this TRoot root,
	Expression<Func<TRoot, TValue>> getExpression)
{
	bool success;
	return ChainGet(root, getExpression, out success);
}

public static TValue ChainGet<TRoot, TValue>(
	this TRoot root,
	Expression<Func<TRoot, TValue>> getExpression,
	out bool success)
{
	// it's ok if root is null!

	if (getExpression == null)
		throw new ArgumentNullException("getExpression");

	var members = new Stack<MemberAccessInfo>();

	Expression expr = getExpression.Body;
	while (expr != null)
	{
		if (expr.NodeType == ExpressionType.Parameter)
			break;

		var memberExpr = expr as MemberExpression;
		if (memberExpr == null)
			throw new ArgumentException(
				"Given expression is not a member access chain.",
				"getExpression");

		members.Push(new MemberAccessInfo(memberExpr.Member));

		expr = memberExpr.Expression;
	}

	object node = root;
	foreach (var member in members)
	{
		if (node == null)
		{
			success = false;
			return default(TValue);
		}

		node = member.GetValue(node);
	}

	success = true;
	return (TValue)node;
}

private class MemberAccessInfo
{
	private PropertyInfo _propertyInfo;
	private FieldInfo _fieldInfo;

	public MemberAccessInfo(MemberInfo info)
	{
		_propertyInfo = info as PropertyInfo;
		_fieldInfo = info as FieldInfo;
	}

	public object GetValue(object target)
	{
		if (_propertyInfo != null)
			return _propertyInfo.GetValue(target, null);
		else if (_fieldInfo != null)
			return _fieldInfo.GetValue(target);
		else
			throw new InvalidOperationException();
	}
}

// Usage:
var myValue = obj.ChainGet(o => o.MyProperty.MySubProperty.MySubSubProperty.MyValue);


DependencyObject

These extensions work just like their non-safe counterparts on DependencyObject, but if the calling thread doesn’t have access to the object, they marshal the operation through Dispatcher.Invoke.

public static object SafeGetValue(
	this DependencyObject obj,
	DependencyProperty dp)
{
	if (obj == null)
		throw new ArgumentNullException("obj");

	if (dp == null)
		throw new ArgumentNullException("dp");

	if (obj.CheckAccess())
		return obj.GetValue(dp);

	var self = new Func
		<DependencyObject, DependencyProperty, object>
		(SafeGetValue);

	return obj.Dispatcher.Invoke(self, obj, dp);
}
public static void SafeSetValue(
	this DependencyObject obj,
	DependencyProperty dp,
	object value)
{
	if (obj == null)
		throw new ArgumentNullException("obj");

	if (dp == null)
		throw new ArgumentNullException("dp");

	if (obj.CheckAccess())
		obj.SetValue(dp, value);
	else
	{
		var self = new Action
			<DependencyObject, DependencyProperty, object>
			(SafeSetValue);
		obj.Dispatcher.Invoke(self, obj, dp, value);
	}
}
public static void SafeSetValue(
	this DependencyObject obj,
	DependencyPropertyKey key,
	object value)
{
	if (obj == null)
		throw new ArgumentNullException("obj");

	if (key == null)
		throw new ArgumentNullException("key");

	if (obj.CheckAccess())
		obj.SetValue(key, value);
	else
	{
		var self = new Action
			<DependencyObject, DependencyPropertyKey, object>
			(SafeSetValue);
		obj.Dispatcher.Invoke(self, obj, key, value);
	}
}
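
A short sketch (myTextBlock is hypothetical); the calls behave the same whether or not you’re on the UI thread:

// Usage (from any thread):
var text = (string)myTextBlock.SafeGetValue(TextBlock.TextProperty);
myTextBlock.SafeSetValue(TextBlock.TextProperty, "updated");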


Getting to New York

_Continued from [Part I](http://northhorizon.net/2010/getting-to-new-york-part-1/)._

Part II. On the Phone

Sunday evening, January 18, I decided it might be a good idea to brush up on my .NET Framework knowledge to prepare for my interview the next morning. Judging by the latter questions of Lab49’s “preliminary screening test,” these guys really didn’t mess around. I pulled off my bookshelf my trusty copy of CLR via C#, which is, in my opinion, the best book you can read if you really want to take your understanding of C# and .NET from “intermediate” to “expert”. C# developers: no excuses, read this book cover to cover. As it turns out, my interviewer, Nick, must be a fan of the same book. When he called me that Monday morning, after introducing himself, Nick threw me a couple softballs before turning up the heat. I was queried at length about generics, delegates, anonymous methods, and the garbage collector (among other things), all of which I was more than happy to explicate in the greatest of detail, having refreshed myself on their inner workings the night before. Nick’s attention then turned to the newer .NET 3.5 features, which I had been using for almost two years, and I was more than happy to talk about those, too. I must admit, he stumped me on a concept called “attached behaviors”. I was familiar with attached properties, but it wasn’t until recently that I became fully aware of attached behaviors. I’ll have another article discussing what I learned in the future.

After Nick finished grilling me for information, I had my turn to ask him questions. I seem to remember having a list of things to talk about, but I was suffering from some strange variant of vertigo, so I went with my usual developer talking points. For the record, Nick is one of the nicest guys ever. As I would find out later, Lab49 is composed solely of superb people. You may be thinking that I’m generalizing or hyperbolizing, but in all seriousness, I have yet to find a single bad apple or even mildly distasteful person at Lab49. Every time I think I’ve found one, they prove me wrong. Even the Java guys are top notch, and that’s saying something. In any case, I finished the interview enjoying a discussion of the usual programmer minutiae, talking about podcasts and developer philosophy. I’m not sure if it’s normal for one to feel a sense of camaraderie with his interviewer, but I know I sure did.

Later that day, I received an email from James, a Recruitment Coordinator at Lab49, asking for a “telephone conversation” with Nemo, the Director of Recruitment. I figured it was one of those psychological profiles one of my friends had been subjected to in a recent interview. I don’t think I could have been more wrong. The next morning, Nemo called, introduced himself, asked for clarification on a few points of my résumé, and opened himself up for questions. I asked him the usual questions on how Lab49 was structured, the promotion strategy, and what Lab49 does in general.

Nemo explained to me that Lab49 is somewhat loosely structured, with no real middle management. As projects start and finish, you report to the project manager and engagement manager, but every project is custom-tailored to the clients’ needs. I pressed Nemo to explain how a successful company works without the infrastructure almost every company of equal size has. Nemo couldn’t really explain how it worked, but only that it did, and quite well. Some may say, “when the cat’s away,” but I might interject that maybe without the threat of imminent death, a mouse might be more free to do something more constructive with its time than cower for its life.

Coming from a consulting background, I could tell Lab49 really is a consulting firm in the greatest sense of the term. The answer to everything is “it depends” and “what the client needs,” which is clearly working out well for them, but it can be frustrating when you’re trying to pin down something concrete on which to make a decision. In general, Labbers work on-site with the clients to best make use of human and information resources. In my opinion, there’s definitely a beneficial side effect to it: when Lab49 is working side-by-side with the client, the work has a face, rather than some unknown bunch of people dropping code in a folder every week or two.

Nemo continued by telling me that Lab49 has titles out of necessity, but they don’t play as much of a role as in other companies. I really appreciated that. Being a young guy, I typically find it difficult to get my ideas out in a space where I’ve got a title that’s easy to dismiss. I can say with authority that’s not been the case at Lab49. Every day I work with guys who have decades of experience on me, but are interested to hear what I have to say. It’s not about seniority; it’s about being the best at what you do and bringing new and innovative ideas to the table. For that very reason, I prefer to keep my title to myself. There’s no real company policy on the publicity of your title, but it’s not on my business card and I certainly wouldn’t wear it on my sleeve.

Nemo concluded the interview by saying that Daniel Chait, one of the founders of Lab49, would want to talk to me over Skype before an in-person interview. That was the first time anyone had said anything about an in-person interview, and while I had expected it at some point, I couldn’t help but be excited to see some light at the end of the tunnel. I confidently told Nemo that I was available for the remainder of the day, which might have caught him a bit off-guard. He said Daniel was a busy guy and he’d see when he was available. I was confirmed for later that afternoon by email not too long after hanging up.

I’m not sure if it was then or later that I started feeling extremely skeptical of this Lab49 place. I remember telling my friend, Chris, that, “I know Nirvana Corp burned down, but Lab49 makes me wonder,” quoting from an episode of Dilbert, The Animated Series. I found it difficult to believe that such a place could exist, so little was my faith in humanity, let alone in developers.

I then realized I was about to have my fourth communiqué with a company in three business days, whereas almost no Dallas-based company had so much as replied within a week. These guys really don’t mess around, I remember thinking.

At 3:00 PM, after a few quirky Skype issues, I was on webcam with Daniel Chait, now fully appreciating the extra money I’d spent on a nice webcam. Chait went over my experience thoroughly, fully examining the roles I had played on each team. I’m not sure why, but at times I felt very intimidated. It’s perplexing, because going over the conversation in my head, Daniel wasn’t in the least bit condescending nor indirect in his questions. We discussed my work in WPF and LINQ and lambda expressions. Having taken Advanced Programming Languages and feeling like (but by no means being) an expert in functional languages, I was glad to be on more solid ground. I had a compulsion to share with Daniel a couple extension methods I had written to succinctly state an equation as a lambda. Unfortunately, since it was a compulsion, I wasn’t at all prepared with the code snippet. I brought up my Machine Learning projects folder, which I knew contained it, but I couldn’t remember exactly which project I had written the extension for. Fumbling around, I must have spent a full five minutes of awkward silence finding the thing, which seemed anti-climactic after such a long wait, and even more so for me, since it felt like an eternity.

Afterward, I told my roommate that I wasn’t sure how that interview had gone. By my estimation it was without a doubt my weakest interview, and likewise the most important. I spent a little while going over the events, trying to figure out why I felt I had done so poorly. All I can say is that I feel very confident in my technical skills and far less strongly about my experience, which was a major focus of Chait’s interview. That, and that awkward silence. I told myself that if they wanted a guy with more experience, that was a perfectly legitimate reason not to go forward, and I shouldn’t be worried about it. And if they rejected me outright for not having that bit of code on hand, they could go to Hell.

I was glad some of my friends wanted to go out that night so I could get my mind off the whole thing. When I got back, there was an email from James waiting for me, inviting me to New York for an in-person interview. I slept well that night.

The next day I made reservations for airfare and lodging and ran both by James, as Lab49 kindly picked up the tab. It might be standard operating procedure for companies, but the fact is that they don’t have to do it, and I would have paid for the ticket myself if they hadn’t. Now all that was left to do was wait out the week and familiarize myself as best I could with my travel plans.

Clearly this company had its act together and was vigorously pursuing this discovery period. I was also reacquainting myself with the ever-greater possibility of leaving everything for New York. My father was supportive, and because he’s not usually too keen on my adventures, I took it as a good sign. That being the case, I started warming my close family and friends up to the idea that I might end up in New York. Most of them were surprised, marveling at the apparent spontaneity and haste of the whole process. Only months before had I told them I’d be staying in Dallas for the “foreseeable future” as a consultant writing software for my own company.

Who knew the foreseeable future was so short?

Site Update: Comments

I finally buckled down and wrote the comments theme for the site, so feel free to make use of the lovely AJAX-enabled comments on the full article pages!