
Fine-grainedness of massively parallel computers

On Mon, Mar 31, 2003 at 03:10:00PM -0800, Steve Dekorte wrote:
> I think it would be more interesting to explore the question of how
> fine-grain can combined data/computation parallelism be taken.
Existing experiments include:
* People at MIT trying to build compilers that will dynamically compile
 applications into DFPGA configurations.
* People doing FPGA-like stuff based on networks of FPUs
 instead of networks of simple gates.
* Chuck Moore's MISC processors: he can build a stack computer,
 with its ALU, a memory coprocessor and an IO coprocessor,
 all in some 10K transistors. This is so small that you could
 add it to every memory chip, and its transistor count would be lost
 in the noise. Then you could use a lot of such chips in a massively
 parallel architecture. (A sketch of such a stack core follows this list.)
* Your usual clusters of PCs have many workstation-class machines
 connected through a fast network.
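
To make the MISC point concrete, here is a minimal sketch in Haskell of
the kind of core such a stack machine implements. The instruction subset
and the toy memory are made up for the example -- real MISC chips have
richer instruction sets and actual coprocessors -- but it shows how
little machinery a stack core needs:

  -- Hypothetical 6-instruction subset of a MISC-style stack machine.
  data Op = Lit Int | Add | Dup | Drop | Swap | Fetch
    deriving Show

  type Stack = [Int]
  type Mem   = [Int]   -- toy memory, indexed by position

  -- One instruction = one small transition function on the stack.
  step :: Mem -> Op -> Stack -> Stack
  step _   (Lit n) s           = n : s
  step _   Add     (a : b : s) = (a + b) : s
  step _   Dup     (a : s)     = a : a : s
  step _   Drop    (_ : s)     = s
  step _   Swap    (a : b : s) = b : a : s
  step mem Fetch   (a : s)     = (mem !! a) : s
  step _   op      _           = error ("stack underflow at " ++ show op)

  -- The whole machine is a fold over that transition function.
  run :: Mem -> [Op] -> Stack
  run mem = foldl (flip (step mem)) []

  -- ghci> run [10,20,30] [Lit 1, Fetch, Lit 2, Fetch, Add]
  -- [50]

In silicon, a core this simple translates into a transistor budget that
vanishes next to a memory array.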

In each of these cases, you have a more or less complex "basic block"
that gets dynamically configured and connected to other "basic blocks".
In the finer-grained solutions, the basic block is so very basic that
you don't want to program it by hand, and it doesn't fit the programming
model in everyday use. That's why only the last, coarse-grained solution
"wins": it doesn't require programmers to adapt their programming tools.
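
One way to see the grain spectrum is to model a "basic block" as a plain
function and "configuration" as wiring. A toy sketch in Haskell, with all
the names hypothetical, just to make the idea concrete:

  -- A "basic block" is a function from inputs to outputs; configuring
  -- the fabric means choosing the blocks and how they are wired.
  type Block a = a -> a

  -- Fine grain: many trivial blocks; the program *is* the wiring.
  fineFabric :: [Block Int]
  fineFabric = replicate 1000 (+ 1)

  -- Coarse grain: a few big blocks, each a conventional program.
  coarseFabric :: [Block Int]
  coarseFabric = [\n -> n * (n + 1) `div` 2, \n -> n * n]

  -- "Configuration" = composing the chosen blocks into one pipeline.
  configure :: [Block a] -> Block a
  configure = foldr (.) id

With fineFabric, nobody wants to write the wiring by hand; with
coarseFabric, each block is programmed with ordinary tools -- which is
why the clusters win with today's tools.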

Now, the problem is precisely that those programming tools are low-level,
built for an outdated centralized computational model -- and you must stay
binary-compatible with it, tool-compatible with it, concept-compatible
with it, lest you lose all the investment sunk into proprietary hardware
and software lock-in, developer proficiency, etc.

There is a human cost involved. Overcoming this compatibility problem
will require provable gains far beyond the transition costs, and/or a
considerable lowering of those transition costs.

Using the finer-grained hardware without changing the software assumptions
is an abstraction inversion that wastes any theoretical advantage of the
finer-grained hardware: you serialize the program into a single instruction
stream, only for the hardware to painfully re-extract the parallelism at
run time. If declarative languages became more popular, they could break
this abstraction lock-in. But then again, abstraction lock-in is intrinsic
to proprietary software and hardware.
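
As an illustration of what "declarative" buys you here, a sketch using
Haskell's evaluation strategies (the parallel library; the function and
workload are invented for the example). The program states *what* to
compute; the strategy and runtime decide how to spread the work, so the
same source could in principle target a PC cluster or a finer-grained
fabric without rewriting:

  import Control.Parallel.Strategies (parMap, rdeepseq)

  -- Declarative data parallelism: no explicit threads, no explicit
  -- placement; parMap merely asserts that the list elements may be
  -- computed in parallel.
  sumSquares :: [Int] -> Int
  sumSquares xs = sum (parMap rdeepseq (\x -> x * x) xs)

  main :: IO ()
  main = print (sumSquares [1 .. 1000000])

  -- build: ghc -threaded -O2 Example.hs  (needs the parallel package)
  -- run:   ./Example +RTS -N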

[ François-René ÐVB Rideau | Reflection&Cybernethics | http://fare.tunes.org ]
[  TUNES project for a Free Reflective Computing System  | http://tunes.org  ]
As long as software is not free, we'll have hardware compatibility,
hence bad, expensive hardware with decades-obsolete designs.