Rants

This page contains my own personal opinions. I don’t think any of this is Absolute Truth, and anyone who can disagree with these writings in a calm and sensible way is OK in my book.

Sloth is a Virtue

Properly motivated laziness is an important character trait for a good software engineer. Laziness inspires an engineer to figure out how to make the computer do all the boring, tedious, and repetitive parts of any given job, leaving the interesting part to the end user. This also motivates making tools flexible and reliable, to save time rewriting and debugging later, and is why I tend to implement entire libraries any time I run into a new protocol that doesn’t already have one. I’ve never found my effort wasted in making something versatile.

You Actually Will Need It

There is a principle in extreme programming called YAGNI, “You ain’t gonna need it”. I often disagree with it; my usual practice when developing a solution is to try to solve the general case, so long as I don’t sacrifice efficiency for doing so. Inevitably, sometime in the future, something will come up that needs a variation on the original theme, and at that point I will be glad that I coded up the general solution when I had all of the relevant information still in my working memory. It takes more time at the start, especially when you create a thorough set of unit tests, but doing so will make you think more deeply about the whole topic, and will give you more flexibility in the long run.
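
As a toy illustration (the names here are made up for this page, not from any real project), suppose today’s requirement is just to split comma-separated lines. The general version costs a few extra minutes and a couple of extra unit tests up front, but the first time a tab-separated file shows up, it pays for itself:

    #include <string>
    #include <vector>

    // Narrow version: solves exactly today's problem and nothing else.
    std::vector<std::string> splitCommas(const std::string &line);

    // General version: same cost at the call site, ready for tomorrow's problem.
    std::vector<std::string> split(const std::string &line, char delimiter = ',')
    {
        std::vector<std::string> fields;
        std::string::size_type start = 0;
        for (;;) {
            std::string::size_type end = line.find(delimiter, start);
            fields.push_back(line.substr(start, end - start));
            if (end == std::string::npos)
                break;
            start = end + 1;
        }
        return fields;
    }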

Computers Suck!

A very wise system administrator named Matt Nordling, with whom I worked at Verity, had a catchphrase: “Computers suck!” Any prolonged exposure to the things will demonstrate this for you— the way the applications crash, the operating system locks up, the way it manages to interpret your clicks exactly wrong, and the way a single typo fouls up something important.

Once you understand that computers do suck, and accept it as part of the nature of things, you can move on without quite as much frustration. You can even decide to make your own work suck less.

You Can’t Just Code, You Gotta Communicate

Coding is a big part of the job, but communication is a big part of being effective. You can’t expect other developers to read your mind, and you can’t always be on hand to instantly answer their questions. If your team doesn’t communicate well, you’ll lose a lot of overall efficiency. I recommend these essentials:

PTOooie!

PTO— Paid Time Off— is one of the greatest abominations in Silicon Valley. Instead of having a separate reserve of vacation time and sick time, workers are given a single reserve of time used for either purpose.

This means that if someone gets sick, they have a choice:

  1. go in to work while ill, leading to staying sick longer and infecting co-workers, or
  2. give up a day of vacation.

This isn’t exactly a difficult decision, and thousands of people make it this way every year: dose up on whatever suppresses your symptoms enough to let you show up for work and keep that vacation day free!

Any company that uses a PTO policy rather than separate vacation and sick leave is hamstringing itself.

Hungarian Notation

I have a strong dislike for Hungarian notation, which encodes the type of a variable in its name, leading to variables like szThis, pThat, and mTheOther (indicating a null-terminated string, a pointer, and a class member variable respectively). In my opinion, it is ugly and, in well-written code, completely redundant. If your functions are so huge that you can’t keep track of all your variables by just naming them well, your code needs reorganization. My own habit is to mark member variables with a leading underscore and let the declarations keep the types obvious (const char *name, int *value, _parent).
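
To make the contrast concrete, here’s a quick sketch (invented names, not from any real codebase):

    // Hungarian style: every name encodes its type.
    class CNode {
        char  *m_szLabel;      // sz = null-terminated string, m_ = member
        CNode *m_pParent;      // p = pointer
        int    m_nChildCount;  // n = integer count
    };

    // My habit: a leading underscore marks the member,
    // and the declaration already tells you the type.
    class Node {
        const char *_label;
        Node       *_parent;
        int         _childCount;
    };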

This may be a lot easier for people who can just hit C-x 2 and have a window onto a different portion of the same file visible without any mucking around with the mouse to move splitters and scrollbar thumbs around, though.

Perl

Perl is not an all-encompassing evil. Perl is not the divinely ordained solution to all coding challenges. Perl is a very effective scripting tool with its own strengths and weaknesses, and it requires a certain amount of restraint to use well. The regular expression handling is extremely powerful, the operating system interfaces are strong (though sometimes you need to hit the OS documentation to figure out what they’re doing), and the syntax and quoting are less of a worry than in shell scripting.

The language has a lot of syntactic sugar. It’s very easy to write obfuscated code in Perl, which means that anyone who wants to have their code be maintainable should take special care to make sure that their code is very easy to read. (Even though it’s possible, please don’t write your code in Latin if your coworkers don’t already know the language well.) For anyone coding in Perl, I suggest:

GUI Development Environments

I have been very disappointed by GUI development environments. Microsoft Visual Studio is so poorly integrated with Visual SourceSafe that I have to hand-hack the project files that say DO NOT EDIT at the top in order to get the projects to talk to the repository. (There’s probably a right way to create a project in such a way that it becomes part of a workspace and the repository, but I don’t know what it is— and if Microsoft can’t make it obvious in the GUI, what’s the point of putting it in there? They may have fixed it in a post-5.0 environment, but at the time I was developing, 6.0 was too new to be stable.) Wind River’s Tornado can’t even represent multiple make rules that build the same kind of target file— I have to hand-edit the project files to put in one rule for %.s: %.html and another one for %.s: %.jpeg. The closest I get to ever using these things is the descendants of dbxtool, and I usually prefer a command-line debugger anyway.

If I use GNU make to put together a couple of files of definitions and rules that are then included in all makefiles in a given überproject, I can easily make changes that affect the entire build. Visual Studio will let you change build profiles between debug and release— but you have to hand-configure each project to know what that means. Under make, I can make one change and thirty different projects can build with -O2 instead of -O4 for optimization. Under a GUI environment, I have to hand-edit thirty project files. I suppose I could always use environment variables to handle these things— but now all developers have to propagate environment variable changes.

It’s not like this couldn’t be handled in a GUI environment. A workspace file could be used as a repository for rules and definitions that apply to all projects in the workspace. I just don’t see it happening.

Has anyone heard of a way to do automated nightly builds with GUI projects? I had to do a bit of TCL hacking (helped by a particular VxWorks FAQ) in order to create a setup that could read Tornado workspace and project files and generate GNU makefiles to do a build. Is there a tool for converting MSVS workspaces and the project files therein to NMAKE files or somesuch?

Dilbert

When I was a teenager, back in the 1980’s, I just didn’t get Dilbert. It just had a bunch of people acting like idiots— what was the point? It wasn’t until I got a job in Silicon Valley that I got firsthand experience and discovered that Dilbert is often just a few shades of parody worse than actual reality— and there are times when it’s just a matter of changing the names to protect the innocent.

Run Away from the Holy Grail

Dilbert’s “Holy Grail of technology” is a million lines of undocumented spaghetti logic that only one engineer understands. In theory, this guarantees instant job security for that engineer.

Anyone who thinks this is a good idea should consider the management maxim: “If someone is irreplaceable, fire him.” If that isn’t scary enough, ask yourself this: when will you ever get a vacation if you always need to be on call to fix bugs in the “Holy Grail”?

A professional engineer should never make themselves irreplaceable. Write code as if you might have to hand it off to someone else tomorrow. Document it, and keep the documentation up to date. Even if there’s no prospect of someone else taking over the project, you’ll be glad you did it if you leave your code idle for six months and then have to go back and reacquaint yourself with it to fix a bug or add a feature.

Microsoft Windows

I deplore Microsoft Windows in all its varieties. It’s buggy, unstable, poorly documented, and a generally miserable example of software design. (You want good software? Try Rational Software’s Purify or PureLink. I’ve had Rational products have a segmentation fault— GPF or Access Violation in Windows speak— and recover gracefully. They even told me what to tell tech support, and I got a free coffee mug for my effort.) There are APIs where different functions return 0 or –1 as their error condition, depending on which team worked on the API when. And get a complete listing of possible GetLastError() codes the way you can find errno codes on a UNIX man page? Fat chance. The scripting capacity of batch files is miserable compared to the Bourne shell, let alone Perl. Even NT 4.0 with service pack 6a is so inextricably tied to its own GUI that it can get into a wedged state that requires a reboot to fix. And their notion of interoperability leaves gaping security holes— I call Outlook “Outbreak” for a reason, and use Eudora or Evolution for my E-mail.

Nevertheless, despite all these annoyances, I still work on Windows. There are a lot of applications out there that are still best of breed on Windows, and there’s a lot of money in the Windows market. I try to do cross-platform work whenever possible, and I use tools like Cygwin to make it easier to cope with the environment. I don’t know anyone who can afford to ship a product solely on Unix when there’s a Windows market for it. I refuse to work for Microsoft as a matter of principle, but I don’t let that stop me making products that run just as well on XP as they do on Unix.

I’m looking forward to the success of the WINE project for making it possible to use Windows applications while using a reliable operating system at the core.

VxWorks

VxWorks is a multitasking, UNIX-like environment for embedded systems development. They’ve got some decent core ideas— but their implementations leave a great deal to be desired.

Whoever introduced the FUNCPTR typedef (or promulgated its abuse) should be flogged. There’s nothing wrong with a typedef that simplifies int (*function)(...) into FUNCPTR function, but writing an API that takes a bare FUNCPTR function where it ought to take BOOL (*function)(struct ifnet *, char *, int), or better yet declare typedef BOOL (*EtherInputHookFN)(struct ifnet *, char *, int); and take an EtherInputHookFN function, is incompetent. Such APIs remove the compiler’s ability to perform type checking, and in cases where the documentation fails to specify the arguments for a callback function (as happens frequently in the VxWorks documentation), developers are left in the dark. (If you’re here as a search hit looking for the answer to “what parameter does ftpdInit take?”, the answer is “same signature as loginUserVerify.” I had to spend a while using find, xargs, and fgrep on my Tornado directory to turn that one up.)
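
Here’s a sketch of the difference; the two registration functions are hypothetical stand-ins I made up for illustration, not the actual VxWorks calls:

    typedef int BOOL;                 /* stand-in for the VxWorks BOOL */
    struct ifnet;                     /* opaque, as far as this sketch cares */

    typedef int  (*FUNCPTR)(...);     /* the catch-all */
    typedef BOOL (*EtherInputHookFN)(struct ifnet *, char *, int);

    /* Hypothetical registration calls, for illustration only. */
    void hookAddUntyped(FUNCPTR callback)        { (void) callback; }
    void hookAddTyped(EtherInputHookFN callback) { (void) callback; }

    BOOL myHook(struct ifnet *, char *, int) { return 1; }
    int  wrongHook(char *)                   { return 0; }

    void registerHooks()
    {
        hookAddUntyped((FUNCPTR) wrongHook);  /* compiles happily, blows up at run time */
        hookAddTyped(myHook);                 /* the compiler checks this signature */
        /* hookAddTyped(wrongHook); */        /* would be rejected at compile time */
    }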

The Tornado GUI is worse than no GUI at all. Its dependency tree mechanism for including and excluding operating system components is simply broken, and some components were not architected to be independent of each other even if there’s an option for them to be separated. At one point, they even had the same symbol defined in two different libraries (DHCP and SNMP). I had to crack open the library with ar, pull out the offending object file, and edit the symbol table in Emacs to fix that one until Wind River could get around to writing a patch.

I haven’t tried embedded Linux yet to compare, but I’m getting pretty thoroughly sick of VxWorks and Tornado.

You Call Yourself an Engineer?

I have been dismayed by the number of people I’ve interviewed who have advanced degrees in computer science, impressive senior engineering titles, and apparent years of experience in the industry who still can’t answer basic questions about software engineering. I’m working on my list of things that a software engineer should know— here’s a start:

This is my list for senior engineers:

I believe there’s an additional grade of engineer above senior engineer, which I call “guru”. Since I’m not a guru yet, it’s not that easy for me to define what makes someone a guru. However, when I see people out there answering the really tough Guru of the Week questions on comp.lang.c++.moderated and devising the nitpicky details of low-level protocols like TCP/IP and IPsec, I know I’m not that good yet. Some of that may be due to my lack of formal computer science training, but I suspect there’s actual skill there, not just book learning.

Web Technology Overload

Given the way the Web gets abused with pop-up ads, pop-unders, hostile ActiveX controls, and so on, it’s reasonable to expect that some visitors are browsing with scripting, ActiveX, and similar technologies turned off. You have to take this into account when designing web pages. Any serious site has to:

Your Mother Doesn’t Code Here

Clean up after yourself. Never write a routine that allocates resources without creating the matching free function, destructor, or API comment explaining that the whole thing can be cleaned up by calling free() on the pointer your function returns. Even if you can be 100% sure that delete[] on a char * is identical to free() on one platform, you cannot assume it will be the same on another; make sure the cleanup always matches the allocator.
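
A minimal sketch of what I mean (the Widget type and the function names are hypothetical):

    #include <cstdlib>
    #include <cstring>

    struct Widget {
        char *name;     /* allocated with malloc() */
        int  *values;   /* allocated with new[] */
    };

    Widget *widgetCreate(const char *name, int count)
    {
        Widget *w = static_cast<Widget *>(std::malloc(sizeof(Widget)));
        if (w == NULL)
            return NULL;
        w->name = static_cast<char *>(std::malloc(std::strlen(name) + 1));
        if (w->name == NULL) {
            std::free(w);
            return NULL;
        }
        std::strcpy(w->name, name);
        w->values = new int[count];
        return w;
    }

    /* The matching cleanup: each resource is released with the
       deallocator that pairs with its allocator. */
    void widgetDestroy(Widget *w)
    {
        if (w == NULL)
            return;
        delete [] w->values;   /* new[] pairs with delete[] */
        std::free(w->name);    /* malloc() pairs with free() */
        std::free(w);
    }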

Resources include threads. Detaching a thread does not take you off the hook for knowing how to clean it up; if the thread is spawned by a dynamically loaded library and that library gets unloaded while the thread is still running in its address space, you’re in for a crash with an unreadable stack trace as soon as the thread gets a timeslice.
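
A sketch of the fix, assuming POSIX threads and the usual dlopen()/dlclose() plugin setup (the plugin function names are made up):

    #include <pthread.h>
    #include <unistd.h>

    static volatile int shuttingDown = 0;
    static pthread_t    workerThread;

    static void *worker(void *)
    {
        while (!shuttingDown)
            sleep(1);          /* stand-in for real background work */
        return NULL;
    }

    extern "C" void pluginInit()
    {
        /* Deliberately NOT detached: someone has to be able to join it.
           A detached thread left running here would execute unmapped code
           the moment the host dlclose()s this library. */
        pthread_create(&workerThread, NULL, worker, NULL);
    }

    extern "C" void pluginShutdown()
    {
        shuttingDown = 1;
        pthread_join(workerThread, NULL);   /* only now is dlclose() safe */
    }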

You need cleanup routines for your code even if it’s a program that, in theory, is never supposed to shut down (at least until the system reboots). It’s a lot easier to check for resource leaks by instrumenting it with valgrind or Purify or BoundsChecker, starting it up, running a quick test sequence, and shutting it down than it is to let it run for a long time, see if the resource usage goes up, and then have to start analyzing the code looking for leaks.

I paid for the bits. I own the bits.

When I buy a CD, I buy a whole lot of zeroes and ones. Those bits eventually get turned into sound for me. I should be able to use them for my own purposes in any way I choose. That includes burning CDs with my own mixes to play during games or at a party or in my alarm clock, converting a dozen albums to one MP3 CDROM so I can pop them in my car stereo, or backing them up to some hypothetical ultra-dense storage medium, just in case my collection is stolen or my house burns down. At any given moment I’m only listening to the music in one place; no one is getting ripped off here.

Peer-to-peer (P2P) file sharing services make it easy for people to share their bits, though, at only the cost of time and bandwidth. The recording industry, represented by the RIAA, feels threatened by this— a complete freeloader might wind up with the same bits as if they had bought the album, without the artist or the record label getting paid for it. (This is not the first time the RIAA has complained about this— back when audiocassettes came out, it was complaining about the same possibility— people could trade music, or tape it off the radio! How could the music industry survive? The MPAA was similarly up in arms over videocassettes.)

P2P networks can also be used on a “try before you buy” basis. I’ve used Napster and Gnutella. If I found something I didn’t like, it went away to conserve space on my hard drive. If I found something I liked, I bought it or tried my best to do so. (Not always easy; if music goes out of print beyond the reach of Amazon.com, it can take ages for a GEMM agent to track it down.) I may wind up reading a gaming book via PDF or listening to music via MP3 before I buy it for myself, but if I’m going to use it, I’m going to pay for it. In the long term, anyone who wants more art, music, and literature to be created should do their part to support the creators of existing works. (Besides, Acrobat Reader or an MP3 player just doesn’t compare to being able to read the book or play the CD anywhere you want.)

What really has the RIAA running scared, though, is the prospect that it could become obsolete, and that’s why they’re acting like crazed Luddites who decided to fight the steam looms with lawyers instead of hammers.

Taking an example from my own life: There aren’t any local radio stations, even in a place as diverse as the San Francisco Bay Area, that cater to my musical interests. Every week when Musical Starstreams releases another show onto their web site in MP3 format, I download it over my DSL connection, then burn it onto a CDROM to listen to on my car’s MP3 player. If I hear something I like, I consult the playlist and either buy it immediately or throw it onto my Amazon.com Wishlist to buy later (if my credit card is beginning to back up).

So far, so good. I get to discover new music, Musical Starstreams gets another listener, and the artist who catches my ear sells another copy of their album.

But what if we cut out the recording industry on this one? Suppose I could just download the bits of the CD and burn it in the comfort of my own home, or phone up my local CD-burning shop, order an album with my touch-tone phone, and show up a couple of hours later, by which time they have downloaded the disc over their high-speed connection, burned it onto a blank CD, printed out the liner notes on a high-quality printer, and shoved the whole thing in a jewel case for me. A service fee goes to the shop and to the server hosting the artist’s music, and the rest of the money goes directly to the artist. (Some fee, whether in the purchase process or the original listening process, should go to the person or organization sorting the good music out from the dross.)

How common is this likely to be? It works very well for people like myself who are a small niche market. Would it scale up to render the recording industry obsolete, though? Possibly. Should the government enact laws to protect the recording industry from possibly becoming obsolete because technology has made it redundant? No.

So I try to keep up on matters with the Electronic Frontier Foundation and the Home Recording Rights Coalition and support efforts to repeal or invalidate the DMCA, because I don’t believe that these artificial controls should exist. If we let entrenched interests in this country hold back progress, the United States will slowly become a technological backwater. I love my country, and I would prefer that it not stagnate.