Getting to New York

A number of people have asked me how in the world I ended up in New York City. Here is the story, to the best of my memory. The entire series of events takes place over three or four weeks, so I will break it up into multiple parts.

Part I. The Call

I had been consulting for right about a year and a half, and things were looking pretty good. Chris and I, under the company name of Gestault Solutions, had managed to architect and implement a complete resource scheduling system for outpatient healthcare centers. We were not without the help and business guidance of Randall, who is perhaps one of the most intelligent and honorable businessmen I will ever meet. I’ll be the first to admit, it was an incredible undertaking for such a small team in such a short time, and while I may wince at a few spots in the codebase, I’m proud to have worked with Chris and Randall on it.

Of course, every company has problems, and ours weren’t so much in the codebase as in those pesky people we call “clients,” who seem to be the most disorganized and incoherent bunch of people under a roof. I really hate business politics. Our main client, upon whom we were very dependent for revenue, decided that they wanted to fork our legacy product, iOrder, and use their own staff to continue incremental feature improvements on it. We had concluded several months earlier that while iOrder had done very well for when it was written and the circumstances thereof, it was outdated and had several architectural flaws that hampered further development. Fortunately, Chris and I were both very adept with the suite of new .NET technologies, notably WPF and, soon after, WCF, so we were able to leverage both frameworks to get well-designed software out the door orders of magnitude faster than teams using older technologies.

Our client’s team was expecting to make a release in September of 2009 with a pittance of feature improvements. Since we were brought on to help their team transition, we had a good idea of how things were progressing, and it was our professional opinion at the time that their chances of success were slim to none, based on the skill sets of their developers and their architectural choices. It’s worth pointing out that these guys were accustomed to maintaining software, making tweaks and small changes, and this was new development, which requires a whole different set of skills. It may not surprise you, then, that as of this writing, they still haven’t finished their handful of upgrades.

Around the same time as the client’s stated release date, we were testing our completely new WPF user interface with a good bit of success. There were some initial bugs, but that’s to be expected. Despite having blown their own release date and been given the opportunity to ditch their ailing project and purchase our finished one, our client (if you could call them that at this point) decided to persist with their team on a slightly revised schedule. I’m not sure how many deadlines they’ve had, but they are now aiming for a June 2010 release. From what I’ve seen, a pig with a model rocket strapped to its back has a better chance of getting off the ground.

We continued working on our project, albeit at a lower rate through the end of 2009 and beginning of 2010, but it was looking like we were not going to pick up again anytime soon. Randall had found a second client to bring onboard, but they weren’t in a hurry to do anything either. I had just graduated in December and was looking to kick off my career, but every week things became a bit grimmer, despite Randall’s best efforts to show our main client reason.

I wasn’t really worried until Chris started interviewing. That was when I got my résumé in order and started making phone calls. The job market wasn’t looking very interesting to me; there was a lot of web work, internal management systems, and so forth, but very little WPF work and few companies on the cutting edge looking for employees. I was also looking for a senior position, which may seem a bit audacious for my age and credentials, but I was more than willing to prove to anybody that I wasn’t wasting their time with my application. A recruiter called me and introduced me to a FedEx/Kinkos project designing some kiosk interface with a bunch of contracted developers. I was pretty skeptical, since without strong project leadership, success becomes a fleeting goal, and I’d hate to have my name pinned to a failed project, even if it wasn’t my fault.

On Friday, 5 January, I was called by a guy who introduced himself as Ryan Elberg from Lab49, a software consulting company. He noted, “I saw your résumé on Dice and that you were available for relocation.”

“Depends on the job,” I quickly replied. “Where are you located?”

“New York. Are you interested?”

I had marked the “Available for Relocation” checkbox on Dice, the monster.com of IT, in case a company in Houston, Austin, or San Antonio was looking for a programmer.

Not really knowing what to say, I went with my previous remark, “Depends on the job. Which part of New York?” It could have been Albany, for all I knew.

“Downtown New York City.”

At this point I was just stalling for time. I had no real intention of moving to New York, but there are a lot of premier software companies in NYC, so why waste a good job opportunity? I quickly decided that I’d go through the interview process, which would surely include a face-to-face interview, at which time I could decide whether I wanted to be there or not.

Ryan asked me when I would be available for a quick pre-screening test. I told him I was ready immediately, so he commenced with a little multiple choice test over the phone. Apparently, I did quite well, so he set up another phone interview with a developer named Nick for the following Monday.

After I hung up, I couldn’t help but feel a bit exhilarated. It’s not every day people quiz me on CLR garbage collection and such. At the same time, I was filled with dread toward the idea of leaving my friends and family behind in my beloved Texas, a bastion of freedom and conservatism, for New York, a place with neither family nor friends, except my uncle far upstate in Rome. And New York City was recently in the news for Mayor Bloomberg’s thoughts on how someone with a concealed handgun license could shoot up a movie theater. Not exactly the kind of rhetoric I was accustomed to in Texas.

I decided to procrastinate on figuring out what I wanted to do. Being fresh out of college, I was an expert at putting things off until later. Maybe something would happen along the interview process where I could find an out and not end up moving. It’d be interesting to say that someone called me from New York thinking I’d be great for their team, but I turned them down because they didn’t have it together.

I was in for a surprise.

Continued in Part II.

Single Instance ClickOnce

I’m currently working on a project where the client wants to be able to click on a link and bring up a WPF UI with relevant information and available actions. Furthermore, the client wants to be able to keep clicking links and continue to reuse the same instance of the WPF UI. We decided our best option would be to go with a ClickOnce deployment, but we were unsure of how to get the web page to talk to our application.

Getting the first instance open was easy. MSDN has a great article on how to get a query string out of your URI. Straight from the article:

private NameValueCollection GetQueryStringParameters()
{
    NameValueCollection nameValueTable = new NameValueCollection();

    if (ApplicationDeployment.IsNetworkDeployed)
    {
        string queryString = ApplicationDeployment.CurrentDeployment.ActivationUri.Query;
        nameValueTable = HttpUtility.ParseQueryString(queryString);
    }

    return (nameValueTable);
}

I was also aware that you can use a Mutex to synchronize your processes. It’s a much more elegant solution than a process table search. The only question was how to pass data from one instance of the application to another. A common solution I saw was to defer to native methods and pass strings around that way, but I felt this was too inflexible a solution, and it requires full trust to boot. Our particular application requires full trust anyway, but in the spirit of getting things done right, we decided to give WCF inter-process communication (IPC) a shot, and it definitely paid off.

My first proof of concept was much messier than this, but I’ll spare you the clutter and discuss my better-organized version. In any case, you will need to reference System.Deployment, System.ServiceModel, and System.Web.

I decided that I wanted to completely encapsulate the WCF and process synchronization logic, so I set up a class called ApplicationInstanceMonitor<T> : IApplicationInstanceMonitor<T>, where T is the type of the message you want to send over IPC. I decorated it with the following service behavior so we can use a singleton instance. Usually I’d go with a re-entrant concurrency mode, but since we won’t be handling very many requests, one at a time is sufficient.

[ServiceBehavior(
	InstanceContextMode = InstanceContextMode.Single,
	ConcurrencyMode = ConcurrencyMode.Single)]

Its implemented interface is pretty simple:

[ServiceContract]
public interface IApplicationInstanceMonitor<T>
{
	[OperationContract(IsOneWay = true)]
	void NotifyNewInstance(T message);
}

Next I created a method to deal with the Mutex, take action to setup IPC as necessary, and report its status.

private readonly string _mutexName; // Set by constructor
private Mutex _processLock;

public bool Assert()
{
	if (_processLock != null)
		throw new InvalidOperationException("Assert() has already been called.");

	bool created;
	_processLock = new Mutex(true, _mutexName, out created);

	if (created)
		StartIpcServer();
	else
		ConnectToIpcServer();

	return created;
}

For the WCF setup, I simply needed a URI to bind to (set or generated by the constructor) and a binding:

_binding = new NetNamedPipeBinding(NetNamedPipeSecurityMode.Transport);

So connecting to WCF was straightforward:

private readonly Uri _ipcUri; // Set by constructor
private readonly NetNamedPipeBinding _binding; // Set as above
private ServiceHost _ipcServer;
private ChannelFactory<IApplicationInstanceMonitor<T>> _channelFactory;
private IApplicationInstanceMonitor<T> _ipcChannel;

private void StartIpcServer()
{
	_ipcServer = new ServiceHost(this, _ipcUri);
	_ipcServer.AddServiceEndpoint(typeof(IApplicationInstanceMonitor<T>), _binding, _ipcUri);

	_ipcServer.Open();

	_ipcChannel = this;
}

private void ConnectToIpcServer()
{
	_channelFactory = new ChannelFactory<IApplicationInstanceMonitor<T>>(
		_binding, new EndpointAddress(_ipcUri));
	_ipcChannel = _channelFactory.CreateChannel();
}
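
The constructor itself never appears above; as a rough sketch, it might initialize these fields like so, assuming the caller passes in the mutex name and the pipe URI (both parameters are my own invention, not from the original code):

```csharp
// Sketch only: assumes both instances of the app agree on the
// mutex name and the net.pipe URI they pass in here.
public ApplicationInstanceMonitor(string mutexName, Uri ipcUri)
{
	_mutexName = mutexName;
	_ipcUri = ipcUri;
	_binding = new NetNamedPipeBinding(NetNamedPipeSecurityMode.Transport);
}
```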

Now all that’s left to do in our class is expose the useful methods. I use an explicitly implemented service contract interface for the server side, since that’s what WCF will call into, and the regular implementation for the client side.

public event EventHandler<NewInstanceCreatedEventArgs<T>> NewInstanceCreated;

// NOTE: This is not a nested class; it's only shown here for ease of viewing.
public class NewInstanceCreatedEventArgs<T> : EventArgs
{
	public NewInstanceCreatedEventArgs(T message)
		: base()
	{
		Message = message;
	}

	public T Message { get; private set; }
}

public void NotifyNewInstance(T message)
{
	// Client side

	if (_ipcChannel == null)
		throw new InvalidOperationException("Not connected to IPC Server.");

	_ipcChannel.NotifyNewInstance(message);
}

void IApplicationInstanceMonitor<T>.NotifyNewInstance(T message)
{
	// Server side

	if (NewInstanceCreated != null)
		NewInstanceCreated(this, new NewInstanceCreatedEventArgs<T>(message));
}

There it is! Now all that’s left is to hook it up in our App.xaml.cs (assuming you’re doing WPF):

private Window1 _window; // Set by constructor
private ApplicationInstanceMonitor<MyMessage> _instanceMonitor; // Set by constructor

protected override void OnStartup(StartupEventArgs e)
{
	if (_instanceMonitor.Assert())
	{
		// This is the only instance.

		_instanceMonitor.NewInstanceCreated += OnNewInstanceCreated;

		_window.Show();
		HandleQueryString(GetQueryString());
	}
	else
	{
		// Defer to another instance.

		_instanceMonitor.NotifyNewInstance(new MyMessage { QueryString = GetQueryString() });

		Shutdown();
	}
}

private void OnNewInstanceCreated(object sender, NewInstanceCreatedEventArgs<MyMessage> e)
{
	// Handle your message here
	HandleQueryString(e.Message.QueryString);

	_window.Activate();
}

public string GetQueryString()
{
	return ApplicationDeployment.IsNetworkDeployed ?
		ApplicationDeployment.CurrentDeployment.ActivationUri.Query :
		string.Empty;
}

public void HandleQueryString(string query)
{
	var args = HttpUtility.ParseQueryString(query);

	_window.Message = args["message"] ?? "No message provided";
}

[Serializable]
public class MyMessage
{
	public string QueryString { get; set; }
}
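
For completeness, the App constructor wiring these pieces together might look something like this; the mutex name and pipe URI are placeholders I’ve made up, not values from the original project:

```csharp
public App()
{
	_window = new Window1();

	// Placeholder identifiers: every instance of the app must use
	// the same mutex name and the same net.pipe endpoint URI.
	_instanceMonitor = new ApplicationInstanceMonitor<MyMessage>(
		"MyApp.SingleInstance",
		new Uri("net.pipe://localhost/MyApp"));
}
```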

And now we’re done. Just a bit of HTML to link to the deployment, and you’ll be communicating with your ClickOnce application in no time!

Edit: This is now on GitHub!

The Qualifications for Writing Textbooks

It’s certainly true that there are many unqualified authors the self-taught programmer has to look out for, but what about the textbooks we read at universities? I won’t argue for a second that tenured professors are the least bit unqualified to discuss the intricacies of algorithms and abstract data structures, but who writes the book on using real-world implementations? Who is qualified to write the book on Java or SQL Server? I’d want to see someone who has been using the technology for several years, or for a good portion of the technology’s life if it’s relatively new. Unfortunately, I find this is rarely the case. I think publishers could run a pitch like this:

From the people who confused you with the most elementary algorithms comes a series of things you really need to know to do your job! Watch, mystified, as the book you have a test on in eight hours invents symbols to explain the details of single-digit addition! You might have been top of your class in high school and showed promise through your core and basic classes, but just you wait! Our authors have learned and mastered techniques to simultaneously insult your intelligence and dumbfound you with a needless amount of abstraction and complexity! All this can be yours for the low-low price of $280 for our 120-page full color, hardcover book!

I’ve always been upset with textbook publishers, ever since my days in Computer Science 1, sophomore year of high school. The authors of that poorly constructed book (the binding literally only lasted about six months before self-destructing) took some kind of sadistic enjoyment in coming up with unintelligible vocabulary, patterns, and practices to subject students to. In those days our teacher was our savior; no matter how mind-gratingly stupid and dull the book was, Mr. Patterson invariably sorted things out and never hesitated to say the book was moronic. I have to thank him for saving my sanity or, at least, staving off the inevitable.

This semester, I take particular issue with the book in my databases class, inappropriately named “Fundamentals of Database Systems (5e),” by Ramez Elmasri and Shamkant B. Navathe. I’ve been writing SQL for about four years now, but it wasn’t until my last job that I think I really learned the full potential of a database system. My old boss was an absolute god at writing SQL, and I made certain that the learning experience more than accounted for the pay. It was under his tutelage that I learned how well-designed database systems were made and how to retrieve amazing data no one thought possible. I wouldn’t consider myself a sage in designing databases, but I think I’m definitely qualified to pass judgment on most designs, especially the ones that would be considered “fundamental.”

During the early stages of my class, my professor discussed the all-important topic of table keys. (For my non-programmer friends, you use database keys to look up data in a particular table. Since some tables can have millions of rows, you have to choose a key that is unique and as small as possible for quick lookup.) It’s a well-held practice in modern databases that the database itself is really the best agent to tell you what the keys are, so in general, what you want to do is tell it what your data is, and, like a coat check, it gives you a little token (a 32-bit signed integer) to get your data again. Because the database chooses the key, you can be assured that it is both unique and small enough to look things up quickly. This is known as an auto-increment integer primary key column, mainly because the integer that you’re given is 1 (or some increment) more than the key that was given out last. Even though it’s not a traditional mechanism in database research, every major database (MSSQL, Oracle, MySQL, PostgreSQL) has some incarnation of it. Traditionally, the programmer chooses one or more fields that, combined, are unique. While this is effective, it takes significantly longer for the server to prepare the key to look up the data, and the number of fields that must be duplicated on other tables so the original data can be found becomes tremendous. My database professor still disagrees with me, but I have faith that he’ll have an epiphany sometime and regret the error of his ways.

So why is it that these authors, from whom my professor quotes, have got it all wrong? In lieu of real research, I did some investigative Googling to find the qualifications of these men.

Ramez Elmasri

  • B.S., Electrical Engineering, Alexandria University (Egypt, 1972)
  • M.S., Ph.D., Computer Science, Stanford University (1980)

Elmasri has spent the bulk of his career in research, consulting for Honeywell from 1982 to 1987, where he developed a distributed database testbed and associated tools. He now teaches at the University of Texas at Arlington and consults for various law firms.

Source: http://ranger.uta.edu/~elmasri/

Shamkant B. Navathe

  • Ph.D., Industrial and Operations Engineering, University of Michigan (1976)
  • M.S., Computer and Information Science, Ohio State University (1970)
  • B.E., Electrical Communications Engineering, Indian Institute of Science (1968)
  • B.Sc., Physics, Mathematics, University of Poona (India, 1965)

Navathe was a Systems Engineer for IBM from 1968-1969 before working for Electronic Data Systems in 1971. Starting in 1975, Navathe began teaching at NYU as an associate professor and then moved to the University of Florida for a few years until he ultimately settled at the Georgia Institute of Technology, where he teaches now.

Source: http://www.cc.gatech.edu/computing/Database/faculty/sham/

There’s no doubt that these men have the academic prowess demanded by their degrees, and both have an extensive list of published works. But what really disturbs me is their professional résumés. Navathe stopped working in industry in 1971, and Elmasri has never worked as a full-time employee at any company. This is when I understood exactly why my book is heavy on unintuitive mathematical symbols wielded like a bat wrapped in barbed wire and light on intelligible or edifying information. This field changes dramatically every couple of years, and unless you can keep up to date on the very latest technology, you are willing yourself into obsolescence. Yet some charlatan authors have the audacity to publish texts advising the unsuspecting student to continue using the technologies of yore, rather than what is new and efficient. Perhaps even more inexcusable is the book selection committee at my university choosing textbooks no one can read, let alone understand, to remind all of us undergrad students that we’re not Ph.D.s and we haven’t joined their elite clique.

I think this problem is far more widespread than I have broached here. A year or so ago, I wrote a letter I never delivered to my dean, more or less as a venting exercise. I think I’m going to clean it up and discuss my points here, in the event someone who can do something stumbles upon my humble blog.

The Definition of Computer Science

What is Computer Science?

It’s always interesting to see how different people answer the question. Especially people who have or plan to have degrees in it. Dictionary.com defines it as:

the science that deals with the theory and methods of processing information in digital computers, the design of computer hardware and software, and the applications of computers.

I see this broken down into:

  • Science
  • Theory
  • Design of hardware and software
  • Applications for computation

Normally, I’d say fighting the dictionary on definitions would be rather fruitless, but this is by far one of the most vague definitions I’ve ever seen. I think a good analog for CS is the relationship between math and physics. In those much more mature fields, mathematics provides laws that say x should exist. It is then up to the applied physicist to prove in a lab that the theory is true. Even then, it is up to a company and its engineers to productize and actually use the fruits of science. Computer Science, in stark contrast, according to this definition, is all of those things rolled into one. It’s no wonder no one in this field knows what it means.

To me, the entire term “Computer Science” is a huge misnomer. It’s certainly not science. Science, at least to me, involves creating a hypothesis, setting up an experiment, and proving the truth of your assumptions. Last time I checked, there was no such thing in CS. Nobody sits around and hypothesizes about algorithms, let alone commercial programs. I think upper management would have a conniption if their products were experiments.

So what about computers? Surely you use computers in CS! I suppose most of my professors didn’t get that memo, and, historically, that really hasn’t been the case either. Edsger Dijkstra, one of the most renowned computer scientists in our short history, hardly ever used a computer. Even his musings, as late as 2001, were handwritten. Dijkstra had a very particular view of CS. To him, Computer Science was purely theory – really quite close to a real science in terms of algorithm design and such. It was his opinion that all these programs we run on our computers are perversions of computation. In the spirit of our forerunner, my professor in Advanced Data Structures forwent computers in favor of the more conventional pencil and paper. To be fair, I’m almost certain he was a machine, based upon his ability to do the most tedious and mundane calculations ad nauseam with the patience of any of Intel’s creations.

Since it seems my degree is almost as ambiguous as a Liberal Arts degree, I will posit my suggestions for changing it. Fortunately, unlike my comparison, my degree is both useful and salvageable.

My university has done a good job of trying to distill the ideas that I’ve covered here, and, for that, I commend them. It is, however, unfortunate that they’ve managed to distill one form of mud into two forms of mud. The school offers degrees in Computer Science and Software Engineering. Despite what you might think, SE is hardly its namesake. What they ended up with is a theory-application mix (CS) and an application-business mix (SE). At least it’s a step in the right direction.

I think we need three degrees, as you have probably noticed that I’ve been hinting all along:

  1. Theoretical Computation
  2. Software Engineering
  3. Software Management

Theoretical Computation would focus on algorithm design and computability, as its name suggests. It’s not a science, and it has no computers in it – just computability, or the ability for something to be derived from calculations and deterministic processes.

Software Engineering would actually live up to its name. Programming would be king, like the pencil is to the artist. Software architecture would be a major emphasis, as well as applied algorithm design, which could reach over into Theoretical Computation.

Finally comes Software Management. Because software planning is a huge issue for any company that writes software, be it boxed or internally distributed, we need people who are trained to help the process along. These people need to be business savvy to communicate deadlines and goals to the non-technical arm(s) of the company. Furthermore, Software Management majors have to be technical enough to be able to choose good employees.

A lot of people in the field feel that there will be strong resistance to such a degree plan, even if it makes sense. I can understand the hesitance, but at the same time the degree itself has only existed for about forty years, so there is always hope for reorganization.

W3C Validation and the Web

I try to keep up on several blogs, one of which is Jeff Atwood’s Coding Horror. Recently, he chose the topic of W3 Validation and its necessity, or lack thereof. I also seem to have made a few statements on a similar topic, so perhaps my view is nothing short of expected. What is strange, however, is Jeff’s point of view, considering he and I are kindred spirits in the world of .NET and C#.

Jeff’s argument is pretty simple:

  • The web is a forgiving place.
  • Many big-name websites don’t pass validation.
  • James Bennett doesn’t like XHTML.
  • CSS isn’t as intuitive as HTML-embedded properties.
  • Validity is relative to your standards.

Since Jeff “vehemently agrees” with Mr. Bennett, I decided to investigate his article as well, to try to understand where Atwood was coming from. Bennett’s argument is even more basic:

  • No real advantages.
  • Markup errors are fatal.
  • There is some CSS/DOM incompatibility between HTML 4.01 and XHTML.

The majority of these points are, in my opinion, rooted in ignorance and xenophobia, both of which are disastrously fatal in our industry, especially on the Web. It is true the Web is very forgiving, but only because it must be so. HTML is a novel language, in the sense that you don’t move from line to line, from procedure to procedure, but rather declare the structure of how something looks and let the renderer (IE, Firefox, Safari) choose the best way to display it. A good example is when Firefox switched from an iterative to a recursive layout implementation in its third version and also reimplemented its graphics engine. Because of the declarative nature of HTML, websites weren’t adversely impacted. However, like all new things, HTML has its quirks. Namely, it’s not very easy for computers to understand. In fact, sometimes it’s not all that easy for even humans to understand – maybe a sort of inverted Turing test.

Like many of the triumphs in Computer Science, someone identified the problems that exist in HTML and created a far more flexible and extensible language called XML. XML is now one of the best containers for information because of its well-designed and understandable structure and its near-infinite flexibility. While that’s great for data storage, what about the Web? Well, that’s where XHTML 1.0 came in: a statically defined language like HTML, but a machine-understandable and consistent one, like XML. XHTML is, at its very essence, the best of both worlds.

I think Atwood and Bennett had a similar reaction to XHTML as I did:

Back when XHTML went 1.0, I was somewhat surprised to see that they had actually removed elements and attributes from the specification. Usually, moving to a new product means that you get more, not less.

The difference here is that they refused, either consciously or unconsciously, to adapt to and understand where the Web was – and still is – going. The W3C is aiming for a very specific separation of concerns for websites. XHTML is designed to be the static structure of a website. It contains data (text and such) and some hints on how to present it (whether in paragraphs, a table, or a list), but nothing else. This is where CSS comes in. CSS decorates the XHTML, quantifying distances, sizes, backgrounds, and organization. Finally, JavaScript gives the client a smooth feel, dynamically altering the web page to best suit the client and, as of late, retrieving data on the fly through AJAX and JSON. This is why HTML’s presentational attributes are almost all deprecated in XHTML – the static structure doesn’t care how wide or tall things are.

One of the best examples I can think of to demonstrate the division of power between XHTML and CSS is CSS Zen Garden. The opening page is modest enough, but choose one of the designs on the right sidebar and the entire layout changes. The only difference between the default and the new design is a new CSS style sheet. You’ll notice it has all the same text and all the same navigation, but a completely different look. Why is this useful or important? Well, as a form of broadcast media, it pays to shape the way your website looks based on the viewer. For instance, if you visit a website on a web-enabled cellphone, I’d want to strip out as many ancillary graphics as possible and conserve width. If you’re vision impaired, I want to be able to deliver a web page that a TTY device can easily understand. Or perhaps you’re a search engine, and I want to get straight to the point so you can index better. This should all be easily accomplished by specifying the use and intention of each style sheet and allowing the user to choose the one that best suits their purpose.

Of course, someone always brings up the fact that many name-brand websites aren’t valid at all. I’d like to point out that many big-name trading firms are going bankrupt; just because they don’t care doesn’t mean you shouldn’t. The argument itself is infantile. I’m not trying to inflate my ego (trust me, that’d be a bad idea), but I find writing XHTML 1.1 (which is by far the strictest) to be very easy. When Marlene and I drew up the static format for this website (that is, before we plugged in all the WordPress stuff), I had about three validation errors, all of which were quelled very easily. Personally, I was a little disappointed I had any, but I suppose perfection was never a trait I had.

Jeff also wonders whether CSS validation is possible. I’m not entirely sure why he didn’t bother to check Google. Of course CSS validation exists, and I’m proud to say this website’s CSS is valid as well.

It’s unfortunate that XHTML markup errors are indeed fatal (and the errors aren’t always very helpful), but as a C# developer, I’m used to having compiled code. If I messed up and miscommunicated with the machine (again, perfection is not one of my virtues), I want the machine to tell me that it doesn’t understand, rather than acting like my girlfriend, misinterpreting what I said, and summarily stating that I don’t love her anymore. Obviously it’s simply untrue; I will always love my computer. Girlfriends are a bit different.

So, perhaps the most important question is, why do we need standards? The need for validation clearly hinges upon the need for standards; without validation, how can one tell whether standards are being upheld? And without standards, the Internet would be far more fragmented than it already is. Jeff is right that most people don’t understand the W3C Valid “stickers” some webmasters (like myself) place on their websites. This doesn’t really bother me, though. Most people don’t notice half the certifications on the products they use every day. On the inside of the battery cover of your cellphone, there exists a multitude of symbols, each representing a different standards body that approved the device to work with the network. Yet millions of people blithely continue on, blissfully ignorant of the years of effort that go into making such a device. Such is the thankless life of an engineer. No one notices when you make amazing accomplishments (and a W3C-valid website is hardly a major feat), but everyone complains when the slightest flaw presents itself.

I suppose we just learn to deal with it.