Geek Jobs (bumped and updated)

Back when Computer Gaming World magazine was still in print, you’d occasionally get stories where some flack talked about how their new graphics engine had been enhanced to give Lara Croft an especially realistic butt jiggle.

As a programmer, it always amazes me that some programmers get paid to, you know, program butt jiggle. Or breast jiggle. And, now: pubic hair.

Regarding this pubic hair, the first thing that occurred to me was: Well, now, actors gain and lose weight all the time, they dye and cut their hair or grow it out, was it really so hard to go without “grooming” for a few weeks to get a more “natural” look?

Then I read the part of the article where it mentions “brazilians” and wondered exactly how close up (and on what body parts) this movie was gonna get.

Then it occurred to me that a computer programmer probably wrote a “pubic hair” routine that’s going to be used.

And it struck me what an odd world we live in.

I was also reminded of something Ralph Bakshi said about when he was animating his adult features. To paraphrase, he said that it was nearly impossible to get animators who could do nudity. They would either be too timid, prudish or giggling—or they’d be heavy breathing and too worked up to draw.

Fortunately, programmers mainly have to type.

Update: See what I mean?

Scaling

I’m still playing Ikariam. I’m not really sure why, except that it doesn’t take much time. I’m right around 100 in the ranking of players in my world. I head an alliance, but there’s really not much for us to do.

Ikariam is far from unique. (It apparently has better graphics than most similar web games.) But the gameplay is typical 4X RTS stuff, only in realer time than most Real-Time Strategies. That is, it takes hours or days to reach one island from another.

Basically, though, you accumulate resources which allow you to build buildings which help you collect more resources and (of course) build better troops. The MMO (which I guess this qualifies as, though it’s obviously simpler than an immersive 3D virtual universe) presents an interesting scaling problem. Namely, how do you keep people who have advanced in the game from kicking the noob’s ass?

In an RPG MMO, I believe it’s done–no, I’ve never played an RPG MMO, or any MMO, except Ikariam–by having no-kill zones, by establishing areas where people can go that accord with their level, and so on.

I know there are RTS MMOs out there. Nothing at World of Warcraft level, of course, but I wonder if part of the problem is that it’s difficult to scale an RTS and handle this problem.

One thing that the Ikariam guys do is make everything exponentially more expensive. (Well, it’s not quite exponential but it sure as hell ain’t linear.) As they’ve continued to add features to the game, they’ve included a “corruption” feature.

I don’t know who originated the “corruption” gag, but I remember it most prominently from Civilization III. Basically, what it says is that the more colonies you have, the more corruption you have, resulting in diminishing returns for each new colony. (In earlier versions of Civilization, you could spam cities, and win the game with a bunch of crappy, small cities that produced just enough to give you a giant military.)

I’m not 100% sure, because they backed into this mechanic, but a new colony might actually be a net negative until you get rid of the corruption. You get rid of the corruption by taking the same (nearly exponential) expense it cost to expand and applying it to a governor’s residence in every colony you have. So they take the already outrageous amount and multiply it by n, where n is the number of colonies.
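
Just to make the shape of that curve concrete, here’s a toy sketch in Python. The constants are invented (I don’t know Ikariam’s actual formulas), but the point is how quickly “an already nearly exponential expansion cost, multiplied by the number of colonies you own” blows up:

  # Toy model of Ikariam-style expansion costs. The numbers are made up;
  # only the shape of the curve matters.
  BASE_COST = 1000   # cost of the first colony, in made-up resource units
  GROWTH = 2.5       # assume each new colony costs ~2.5x the previous one

  def expansion_cost(n):
      """Cost of founding colony number n (1-based)."""
      return BASE_COST * GROWTH ** (n - 1)

  def residence_cost(n):
      """Rough cost to buy off corruption once you own n colonies:
      the same outrageous sum again, paid in every colony you have."""
      return expansion_cost(n) * n

  for n in range(1, 7):
      print(f"colony {n}: expand {expansion_cost(n):>10,.0f}, "
            f"de-corrupt {residence_cost(n):>10,.0f}")

In this toy version, by the sixth colony the de-corruption bill is several times the already huge cost of founding the colony itself, which is about how it feels in practice.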

In other words, expansion is basically out after a few colonies. (I have five colonies plus my capital, but I’ve spent most of my time in the past month playing catch up with those colonies as corruption kicked in.)

So, you can’t expand after a while. What else is there to do? Well, you can attack someone.

Until recently, I went through most of the game without much of a military. Military upkeep is huge. Deploying the military is even huger. Coordinating attacks with allies is nearly impossible.

And the rewards for attacking are dependent on the level of your victim’s dock. So all a guy has to do is demolish his dock and he’s pretty much impervious to anything you can do to him, and you have no chance of making your money back.

They obviously have something in mind: You’re supposed to, eventually, be able to temporarily take over someone else’s colony. That could make attacking more interesting.

Still, it’s thin gruel. I’m not inclined to buy into it, and I rather dislike the premise of having to pay for a better interface and then only getting it temporarily. It might be more interesting if there were some RPG-like elements, say chances to perform missions or acquire resources by capturing map points.

I could be wrong, but I’m guessing they’re not making much money on it.

Pizza & Programmers

In an article over at CIO, Esther Schindler wonders about the magic of pizza (and a few other food items) in getting programmers to work overtime. There are some noteworthy things about this.

  1. I should probably be reading CIO more.
  2. Esther, who’s an old pal of mine, is both seriously cute, and seriously technical.
  3. Actually related to the article: Programmers and other IT geeks love what they do.

Right now, what do I do? I design and code software for various purposes, and build special purpose Set-Top Boxes for people (like TiVo on steroids).

If I had all the money in the world, what would I do? Design and code software for various purposes, and build even cooler STBs for people. I’d probably work the content angle harder, too–besides super-powered media devices, people need ways to get unfettered content–and that takes a lot of money, or at least more than I have. (For example, I could build or buy a cable network. That kind of money.)

But the point is, I’d be doing almost exactly the same thing that I’m doing now. It’s like Office Space: “What would you do if you had a million bucks? Apart from two chicks at the same time.” I’d need about two million, I think, what with the family to support, but maybe with the investment stuff I’ve learned I could get by on less. (Heh. Get by on less than a million? Can’t be done!)

I know programmers better than other classes of Information Technology types, and I’ve never known one worth a damn who didn’t spend considerable amounts of their free time working on other (non-work) projects.

To give you a personal example, I was heavily into music in my teens and early 20s (as many of us are), and played guitar, keyboards, etc. I also wrote a program that allowed me to create musical scores. This was before there were many of these, sure, but even now, I might pick up one of the modern tools and dislike something about it, and my reaction would be to write my own. That’s what programming geeks do.

I waxed a bit on D&D in an earlier post this week: I can’t tell you how many D&D related programs I wrote in my youth, other than “a lot”.

In my martial arts years, I wrote a program to manage tournaments. It was awesome. Anyone who’s ever been to a karate tournament in particular, and probably most sporting events, knows how poorly organized they are. My sensei told me to slow things up because I was moving people through so fast, there wasn’t enough time to sell them concessions (which is a big revenue generator at tournaments).

This really isn’t all that unusual among craftsmen. I have a friend who’s a master woodworker. He might go to Ikea to pick up a lawn chair, if he really doesn’t care about it. But most likely, he’s gonna build what he wants, exactly as he wants it, to exactly suit his needs. That’s what he does.

So, back to the pizza point: Basically, we’re going to be coding because that’s what we love to do. But we don’t like to be taken advantage of any more than anyone else, though we’re probably less aware of it. Little, consistent gestures, such as pizza, sodas, snacks, oddball breaks–other stuff I outline in my CIO article–all tend to reflect an appreciation.

There’s an ego issue, too. We all have war stories. Overtime is part of the culture, and if you’re being productive, is its own kind of bliss. Think not of teamsters for whom overtime is an excuse to bump the paycheck, think of Michelangelo slaving over the Sistine Chapel. I once stayed at a job, ooh, eight months longer than I knew I should have because I wanted to finish the project.

There was huge, huge stress in my life as a result. I ended up out of work right about the time the tech bubble burst. Hell, I ended up in court as a result. But I’d been nursing some ideas about–well, something really, really technical–and had designed all kinds of theories around this thing, and I was just so, so close to testing them out.

This led to some of the worst times in my life and yet, if I had to do it all over again, I might just.
I ended up proving to my satisfaction the feasibility and limitations of the ideas, and developed a system that was gratifyingly high-performance and low-cost. (Smarter now, I hope, I’d probably just ditch the whole situation and apply the theories elsewhere.)

Note Google’s extremely clever “free day”. A job where I’m encouraged to pursue some wild hare? Oh, yeah!

So, it’s not a really big mystery. It’s just that the threshold for getting geeks to work overtime is lower. You can get a lot of mileage handing out cash, and raises and bonuses (bonii!) sure don’t hurt, but pizza can make the office a nicer place to be, without necessitating budgetary oversight, etc.

Two things execs don’t understand about geeks: 1) What they do or can do, so they often just completely misuse them; 2) That they love doing it, and really need oversight the other way. (At Melissa & Doug’s toy company, for example, they pretty much make you go home at six, I’m told.)

I’ve made the comparison before to musicians–you can almost always get musicians to work for free food–and it’s apt. Although, of course, musicians are more likely to actually be starving.

A Compiler For Every Child

Over on Yahoo (hat tip: CodeGear), Robin Raskin has an interesting inversion of “No Child Left Behind” called “All Children Move Forward”.

The language “No Child Left Behind” evokes certain ideas. If you’re familiar with the infantry rule of not leaving men behind, for example, you could see education as a battlefield with lots of wounded–an analogy that works on a lot of levels.

In any event, it’s an inherently defensive phrase. The fact that you’d feel the need to emphasize not leaving children behind suggests that you are, in fact, leaving children behind. Lots of children. Enough for you to make a point out of stopping. (A teetotaler doesn’t make strong statements about how he’ll never drink again.)

Anyway, the article’s data point is CodeGear’s deal to authorize a million licenses to Russia. Good for them. Delphi is a great tool, and friendly enough for kids to grasp quickly while having depth they would be hard-pressed to exhaust.

Smart move, too, because those kids will grow up and whose tools will they be familiar with? (Delphi was released 13 years ago last month, which raises some other points of interest.)

What never fails to come up is that there’s no one to teach these tools. (OLPC detractors make this point, as well.) It’s doubtless true that a big chunk of children won’t be able to–or won’t be interested enough to–suss all this out for themselves. But the percentage that will is larger than zero. To the gifted outliers, this will be manna from heaven.

And the rest? Well, remember, there are teachers. Way more than ever before. The internet is full of them. No one really needs to learn much of anything alone these days. The more tools kids have, the better.

More On Languages, Programmers and the Hiring Thereof

This is an extension of an ongoing discussion Esther Schindler and I are having here and at CIO.com, which started as a discussion of programming language popularity, and has extended itself into a discussion of what sort of people one should hire. It’s got a lot of “in-references”, so if you’re not a pretty heavy programming geek, you’ll probably want to skip it.

Blake, you speak as though the choice is between the okay-quality experienced developer and the brilliant developer who doesn’t happen to have programmed in, say, ObjectREXX or JavaScript or what have you. But it doesn’t really work like that.

Well, yeah, it works like that when you foster an IT programming culture that favors results over dogma. Heh. It has to: You’re working in Scheme or Smalltalk or Eiffel; you’ve just ruled out the programmer who, you know, graduated in ‘97 with a Java specialization because of all the money in the tech bubble.

Usually, an employer whose job req says “ObjectREXX would be a good plus” gets plenty of resumes from programmers who do have that experience.

You think if I ran an ad today for ObjectREXX programmers I’d get plenty of resumés? There’s…uh…me…and…you? The same is true for a lot of great tools, like Squeak, Lisp, even regular REXX, and for that matter, even Delphi. But perhaps there’s a point of agreement there.

At some level, what you’re saying is: “This technology is good enough that we can afford to take a hit in the hiring department.” A lot of us made that choice with OS/2, for example, and for the 10 years we used it, it was a good risk, even though it was almost impossible to find people in our area who knew it. It was just that productive.

Recently I worked with a company that programmed their tool in Delphi 7, and they had a bear of a time finding qualified people. We had some heavy discussions about better tools, because I saw that they had developed these huge systems to work around the problems that arise with statically typed languages. I’d say it actually hurt them, because to understand their code, you had to be well-versed in relatively cutting edge Delphi (even though they had stalled at D7, they were using interfaces, modeling tools, code generators, etc.).

But had they used Smalltalk, for example, they could have hugely reduced their burden, and in some respects made their code more accessible. The deal breaker was that they were pretty heavily reliant on code that others had written. I do some Java, not because I’m a fan of the language, but because sometimes that’s where the code I don’t want to write is.
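
To illustrate the kind of burden I mean, here’s a rough sketch in Python standing in for a dynamically typed language (the class names are hypothetical, and real Delphi 7 plumbing is far heavier than this). The statically typed style makes you declare the contract up front and wire every class into it; the duck-typed style just accepts any object that answers the right message:

  # "Static" style: declare an interface-like contract, make each class implement it.
  from abc import ABC, abstractmethod

  class Renderer(ABC):
      @abstractmethod
      def render(self) -> str: ...

  class HtmlReport(Renderer):
      def render(self) -> str:
          return "<h1>Report</h1>"

  # Duck-typed style: no declaration needed; anything with render() will do.
  class PlainReport:
      def render(self) -> str:
          return "Report"

  def publish(doc):
      print(doc.render())

  publish(HtmlReport())   # works via the declared contract
  publish(PlainReport())  # works just as well with no contract at all

That’s the whole argument in miniature: the second half of the sketch is all you actually need.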

And HR departments, keyword-driven as they are, probably don’t pass along the resumes of the candidates who write, “But I can learn!” in the cover letters. So you may be happy to entertain the brilliant-but-inexperienced, and you might never encounter them.

IT really shouldn’t use HR for hiring much, if at all. They’re not competent to do even the first tier of filtering. I’ve seen plenty of HR departments post ads where the only qualified candidates would be liars. You know, put ECMAScript and JavaScript on your resumé and see which one gets you hired.

Besides, people who can learn–in my experience, anyway–don’t advertise it, because they don’t realize how rare and precious a commodity it is. Certainly I didn’t until, at one job, I picked up a threadbare manual for a proprietary language and environment, and coded a specimen cataloging system in two weeks, while the consultants whose proprietary environment it was were still busy negotiating their six-month/six-figure contract for the same system.

I don’t say it to boast; it was a feat any reasonably competent programmer willing to learn could have pulled off. But I couldn’t, as you say, have gotten a job doing it. But I love having random programming challenges thrown at me, and I encourage the hiring of others who feel the same. In fact, the aforementioned Delphi 7 crew I worked with had a test: They gave you two hours to write as much of a program as you could. Easily one of the more fun application tests I’ve ever done.

Plus, of course, someone might say they’re willing to learn, but not actually do so. It’s another tangent entirely, but there are few things as awful as employing someone who really wants to be good at something but is only semi-competent.

The beauty of IT is that you can often shuffle that person off to a different role. I’m always checking out the systems guys for the latent programmer. You put people in as network admins and switch them to help desk, because they’re good with people, or maybe to DBAs, or wherever.

I wouldn’t recommend any hiring be done on the basis of candidate assertions. At least not ones like “I love to learn” or “I’m a real people person.” But a programmer can talk to another programmer and within a few minutes ascertain what they’re really capable of.

I do take your point, but I’ve seen lots of discussions among developers and consultants about their need to stay “relevant” with the choice of language (or toolset or whatever) they focus on. If there are lots of jobs asking for C#, and few asking for ObjectREXX, many developers will choose an option that makes them more marketable.

When I was doing ObjectREXX (and VisProRexx!), I was also doing C++, not because I think it shouldn’t have a stake driven through its heart, but because my best tool for small, fast code was Borland’s OS/2 IDE. And I don’t keep up the unholy trinity of PHP, Perl and Python for fun (well, okay, Python is fun). And I can’t write a line of Ruby without thinking, “Well, this is just Smalltalk for the faint-hearted.”

I get the need for relevance. But if I saw a guy whose career path went from C to C++ to Java to C#, I’d hesitate before hiring. What does he do for fun? Because I’m not interested in hiring a programmer who doesn’t program for fun.

On Programming Language Popularity

My old pal and CIO editor Esther Schindler has written a blog entry with the deliberately inflammatory title of “Is Computer Language Popularity Important?”

Well, you gotta drive traffic somehow.

And it is a good, if hoary, question. While Esther follows up the question of which language is “best” by adding “best for what?”, it should be fairly obvious that the continuation of “Is popularity important” is “important to whom?”

Before going into that, though, let’s pick some points (hey, it’s almost a fisking):

At the developer level, language choice is completely personal. Like picking a brand of hammer or carving knife, the tool should fit well in your hand. Some people think better in Python than in PHP, and God help anyone who thinks in LISP.

The tool analogy is probably the only one available, but it’s just not a very good one. If I use CodeGear’s JBuilder to program a web-based front-end in Java, the language doesn’t really parallel the brand–the brand (CodeGear JBuilder) does.

So, is the language the tool, then? No, that’s not a really good analogy either. The environment (JBuilder’s IDE, debugger, etc.) is really the tool. That’s what you’re using to put your materials together with. And therein lies a clue: The language is sort of like the materials–it’s the medium in which you build. (The platform comes into play as well, like a foundation.)

I doubt anyone would consider the materials used in building as being “completely personal”. But since human-written code is all mulched down into the same zeroes and ones, there is an interchangeability you get as far as the final product goes.

But even there, the analogy falls apart because code seldom dies. Code is a factory you use to produce software. For any real project, you’ll spend the vast majority of your time going back to the factory to retool the product.

There might be a political correctness to saying the decision is completely personal, but no coder believes that. There are quantifiable differences between toolsets and languages. Otherwise, “best for what?” would make no sense as a question.

Programming language adoption follows the genetic rules behind duck imprinting: the first creature a duckling sees is perceived as its parent. My first programming languages were PL/1 and FORTRAN, and despite writing production code in several others, from standards like C and REXX to obscure languages like Mary2, I can still write FORTRAN code in any language.

A lot of people believe this. And I suppose it must be true for some people. They tell me they think using language, so I believe them. When I tell them I don’t think using language–I think using thoughts that I then translate into language–they often don’t believe me.

Esther and I used to hang out on CompuServe’s Computer Language Magazine Forum with such luminaries as Jeff Duntemann and David Gerrold. (Just to name two: That forum was a freaking Algonquin Round Table of programming genius. It was there I learned what little humility I have.) If memory serves, David was on the language side–but he’s also a professional writer of quality sci-fi, so you know, he’s got that goin’ for him.

When I program, I think of what I want to accomplish–the end result–and then I organize it using whatever conceptual tools I have handy (often graph paper, if I need to write stuff down), and then (and only then) do I code it.

I learned, in order: Basic, Assembler, PL/I, Pascal, C, Smalltalk, REXX, C++, Eiffel, Java…well, and it gets blurry after this point, but I’ve used most of the common scripting languages (PHP, Perl, Python, JavaScript, Ruby) as well as C#. I occasionally still reluctantly do macros in Basic, but I spent most of my seven years programming Basic hoping for something better. (PL/I was great but I sure didn’t have it on my 8086.) I’ve probably done more Pascal programming in the past 20 years than any other language but I surely don’t think in it. (At least, not until the last possible moment.) But I’ve gone through periods where I’ve done nothing but REXX or C# or whatever. (Right now I’m primarily doing Delphi-style Pascal, Squeak-flavored Smalltalk, SQL, Flash, and JavaScript.)

In fact, to my mind, a good language is one you don’t think about much: Smalltalk versus, say, C++, with its subtleties and ambiguities.

Smalltalk has probably been the biggest influence on me, and I had been programming for 10+ years before I learned it. (But it’s also not just a language, so it can change the way you see the game.)

How do you decide which languages are “acceptable” in your shop? Is it because your favored development environment (such as Eclipse or .NET) builds in support for one language suite? (Thus shops choosing Visual Studio would bless C#, VB.NET, and occasionally C++, with a smattering of IronPython accepted.) Or do you look for what programming languages are “in” and thus it’s easy to hire developers affordably?

So, now we’re getting to the “whom” in the popularity question. She’s primarily talking about employers. Most programmers could hardly care less how popular their languages are. Me, I want any language I use to be popular enough that if I need a particular piece of code, I don’t have to write it if I don’t want to; I can find it on the web.

(Obviously this is tongue-in-cheek, but wouldn’t that be great: Only write code when you want to or when it would make you rich.)

As for when I’m hiring, or have a say in hiring, I never look for programming languages. I look for aptitude and flexibility. In most cases, I’d rather take a great programmer and teach them a new language than take a mediocre programmer who’s familiar with a particular paradigm.

I could say a lot more than this but that’s enough for now.

Pride of Workmanship

The new cable box is pretty good, though it lacks some of the niceties of the old one.

One thing I can’t believe–makes me groan whenever I see it–is that the movie guide is in alphabetical order, but includes “The” as part of the sort.

So, you might see:

Amadeus
Batman
Dogma
The Apple Dumpling Gang
The Color Purple

(Sigh.) I’ve learned to accept the use of ASCII-like sorting, such that all the numbered entries appear at the top (40 Year Old Virgin should go in “F”), but this is just plain don’t-care-sloppiness.
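
For what it’s worth, sorting past a leading article is a tiny fix. Here’s a rough sketch in Python (the title list is just the example above, and a real guide would also want a digits-to-words lookup so “40 Year Old Virgin” files under “Forty”, which I’ve left out):

  # Sort movie titles while ignoring a leading article ("The", "A", "An").
  ARTICLES = ("the ", "a ", "an ")

  def sort_key(title):
      key = title.lower()
      for article in ARTICLES:
          if key.startswith(article):
              return key[len(article):]
      return key

  titles = ["The Apple Dumpling Gang", "Dogma", "The Color Purple",
            "Amadeus", "Batman"]

  for title in sorted(titles, key=sort_key):
      print(title)
  # Amadeus, The Apple Dumpling Gang, Batman, The Color Purple, Dogma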

The guide itself seems to have different content and less guide info overall; for instance, it’s missing the information about when a program is going to stop being available.

Meanwhile, the new cable box takes, I’m told, 45 minutes to reboot. Let that sink in for a moment. 45. Minutes. Actually, when we rebooted here, it took 4 hours, because it’s not just rebooting, it’s downloading information, and lots of people were apparently rebooting at the same time.

I can download many gigabytes in four hours. Why do I get the feeling that this particular application hasn’t been optimized?

Project: Delphi for .NET review

CodeGear has come out with a new version of Delphi for .NET development.

The company seems a lot more vital than Borland, though I can’t comment on their “Application Lifecycle Management” business. Maybe that rocks the ALM world.

But as far as development goes, CodeGear has revitalized Delphi and made forays into PHP, Ruby, and Eclipse-specific Java tools.

In fact, my current problem is that an important outlet for me, DevSource.com, isn’t particularly interested in non-dotNet/non-Visual Studio stuff. It’s always been an MS-sponsored site, but the previous editor (Esther Schindler) was willing to entertain anything she thought would drive eyes to the site. This isn’t to suggest that the current editor (Jeffrey Cogswell) is doing anything wrong–it’s entirely possible that the readership is only interested in MS stuff–but that I need to find more outlets.

I’ve been considering starting my own online tech magazine, which probably isn’t very bright given the other projects I have going, but I miss the days of PC Techniques. I need reviews of software tools, but I love light-hearted writing and little projects that remind us why we got into the business to begin with.

In the meantime, I have a review in progress about Delphi-dotNet for DevSource.