
Re: Industry versus academia



DLWeinreb@attbi.com writes:

> The great majority of computer programmers out there don't have
> the kind of background that most people on this list do.  These
> are just guys trying to get a job done.

A little background might help put my comments in the right context.
I'm one of those weirdos with industrial experience, but little
academic experience.  Even among my fellow weirdos, I'm a weirdo:
though I'm still an industrial hacker, I teach at Ohio State's CIS
department.

I recently posted to an OSU-CIS-only newsgroup about why reusable
software isn't getting the kind of industrial reception that academics
thought it might, and should, get.  The obstacles facing industrial
adoption of reusable software are similar to those facing the
adoption of sophisticated tools, especially the kind that give programmers
tremendous productivity -- after they surf the steep learning curve.

Managers frequently avoid risk instead of managing it.  Some go so far
as to hide from it.  Argue all you like about how dumb it is, but
that's a simple fact of life: most people working in corporations
today are trying to maintain the status quo.  So when we talk about
revolutionary steps forward, they want nothing to do with them.  They
want to do what everyone else seems to be doing, maybe with a slightly
different twist that makes it feel a little cooler than what the other
guy is up to.  Something to give an edge if the boss or shareholders
want to know what's happening, but nothing risky enough to make anyone
question the wisdom of what's being done.

Those safe, unremarkable moves are the kinds of things that earn
rewards over time.  Managers are rarely willing to spend more money
up front so that they can
spend less money over time.  High-payoff/steep-learning-curve
technologies are therefore doomed from the start, at least in terms of
mainstream adoption, because the cost of getting people up to speed on
the technology is seen as "additional expense" whose return might or
might not ever be realized.  And then there's the issue of
preservation of working capital.

Almost incredibly, this approach can even make sense in many cases.
(The following is purely anecdotal evidence from my own experience.
Anyone know of a good study with proper datasets and whatnot on the
topic?)

Getting funding to build something is hard enough.  The more you ask
for, the less likely you are to get the funding needed.  Thus, cost of
entry is a major barrier to getting projects started.  Selling
something that will cost twice as much but give you thrice the
productivity will rarely be seen as a win.  Any decision is an
investment, and the costlier the decision, the riskier it is, both
politically (overseeing a project that claims much and delivers
little is a great way to stall your career) and economically.
It gets even worse when you consider that the danger of your project
being starved for funding after you've started increases
significantly with the length of the project.  Three developers on a
project for a week can be paid for with discretionary money that's
lying around, assuming that not too many other such projects have been
funded the same way.  Three developers on a project for a year is not
only expensive over time, but runs the risk of having the money played
with to deal with the quarter-by-quarter ups and downs of the
environment.  Funding for multi-year projects is almost impossible to
predict.  You can get all of the money approved, but if word comes
down from On High(tm) to save money, funding that a manager was
promised can suddenly disappear.

Thus, a manager can wind up overseeing a project that turns into a
complete failure.  That can have serious consequences for upward
mobility, which is why such endeavors get riskier with each
opportunity to monkey with the funding.

(In a lot of places, it's probably easier to get thirty programmers to
work on something for two months than it is to get three programmers
to work on something for one year.  Getting approval to try something
that's over quickly, even when throwing a lot of people at it, is less
risky in that there is so little opportunity for someone to pull the
plug before a two-month project is finished.)

Here is where we run into another problem.  Many developers are sadly
incapable of developing software.  In part, this is because too many
organizations do not think of "programming" as a viable career and
structure their environments to reflect that.  The result is that
people get out of school, write code for five years or so, and then
move into management.  The problem is that studies have shown that
getting really good at anything complex -- like speaking a foreign
language fluently, mastering an instrument, or being a skilled
programmer -- takes ten years.  So we have people writing code,
sometimes even as "senior programmers," who really should still be
apprentices to a more skilled master.  The result is that the code
sucks.  It's not that they're (necessarily) stupid; they just don't
have the skills to do what's being asked of them given the
resources and time available.

In many cases, the wins offered by sophisticated tools just aren't
great enough for the underskilled (too green to be very good) and the
agnostic ("I've got a job to do and don't care what gets it done")
"programmers" to see.  In economic terms, the advantage of
component-based software is a gamble.

Seen in this context, Java is brilliant.  Nothing revolutionary.
Familiar Algol-derived syntax.  No crazy CLOS-style object system.
Let programmers forget about free() and malloc().  Get them out of the
business of pointer arithmetic.  These are all of the things needed to
keep mediocre programmers from reliving the confusion and coredumps
they experienced in college.
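
To make that concrete, here's a throwaway sketch of my own -- not
anything from the post I'm replying to: the sort of buffer-walking
that invites pointer bugs and coredumps in C is hard to get
catastrophically wrong in Java, because allocation is handed to the
garbage collector and an out-of-bounds index throws an exception
instead of scribbling over memory.

    // Illustrative sketch only: the kind of code Java makes hard to botch.
    // No malloc/free, no pointer arithmetic; array bounds are checked at runtime.
    public class BufferDemo {
        public static void main(String[] args) {
            int[] buffer = new int[8];       // allocated here, freed by the GC, never by us
            for (int i = 0; i < buffer.length; i++) {
                buffer[i] = i * i;           // indexing, not pointer arithmetic
            }
            // buffer[8] = 0;                // would throw ArrayIndexOutOfBoundsException
            //                               // rather than silently corrupt memory
            System.out.println(java.util.Arrays.toString(buffer));
        }
    }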

It seems that the smarter we make the tools, the smarter the users
have to be to take advantage of their power.  It's not an "ease of
use" issue: it's a simple question of mastering the domain,
understanding what each of those switches does and how it impacts the
way the whole thing works together.  Smarter tools tend to have more
options, so they need smarter people to make good use of them.

Now, we know the success stories of Smart People using good tools to
beat the averages, but it's hard to get beyond the management and risk
issues.  This far-out stuff is not what the herd is doing.  Good
programmers are expensive.  Why pay a "mere programmer" $120k or more,
when you've got "IT Directors" (who couldn't code their way out of a
proverbial wet paper bag) making $90k?  Again, you'll see the result
in time, but the initial expenditure can be difficult to make -- or
even impossible, depending on the environment.

This isn't something that tools alone can fix.  Certainly tools that
keep the mediocre from shooting themselves in the foot aren't the
solution.  We need to allow programming to be a viable career option
so that people can do it long enough to get really good at it, we need
to give them the kinds of tools that will allow them to approach Very
Hard Problems(tm), and we need to take a long-range view of what we're
doing so that we give ourselves time to realize the return on our
investments.  At the same time, we need to work in small, measurable
iterations, so that we reduce the risk posed by ups and downs that
come from budgetary constraints.

I cannot think of any way that we can realistically address the needs
of both very good programmers (including those who aspire to be very
good and are willing to invest the time and effort necessary) and the
mediocre.  Perhaps a Very Good Programmer environment could itself be
built up to provide ease of use and whatnot for a mediocre programmer,
letting the very good crawl underneath and use magic when they see
fit.  But making all of that work together seems difficult at best.
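
If I had to sketch what I mean -- and this is purely my own sketch,
with made-up names (Engine, SimpleStore, unwrap), not a design anyone
has built -- it might look like a dead-simple surface for the middle
of the curve, with an escape hatch underneath for the people who know
what they're doing:

    // Hypothetical layering: SimpleStore is the easy surface;
    // Engine is the lower-level "magic" the very good can reach through.
    interface Engine {
        void put(String key, byte[] value, boolean sync);
        byte[] get(String key);
    }

    final class SimpleStore {
        private final Engine engine;

        SimpleStore(Engine engine) { this.engine = engine; }

        // The mediocre-programmer API: no knobs, safe defaults.
        public void save(String key, String value) {
            engine.put(key, value.getBytes(), true);
        }

        public String load(String key) {
            byte[] raw = engine.get(key);
            return raw == null ? null : new String(raw);
        }

        // The expert API: crawl underneath and use the engine directly.
        public Engine unwrap() { return engine; }
    }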

-- 
Matt Curtin, CISSP, IAM, INTP.  Keywords: Lisp, Unix, Internet, INFOSEC.
Founder, Interhack Corporation +1 614 545 HACK http://web.interhack.com/
Author of /Developing Trust: Online Privacy and Security/ (Apress, 2001)