The Mill

Enough public policy, time for more shiny technology.  For a while now I've been following a series of lectures Ivan Godard has been putting on for a group called Out Of The Box Computing, which has some seriously ambitious plans to totally re-think how computers work.  I'm afraid those videos (and the rest of this post) are going to get a bit technical, so fair warning.

It's sort of interesting to see how little the base abstractions governing how we interact with computers have changed over the last 50 years.  Back then instructions executed one by one, each taking however many clock cycles it took before the processor passed on to the next instruction.

First people pipelined computers, so that although instructions were still started one at a time, a new instruction could start before the previous one had finished.  Then people created computers that could issue more than one instruction with each cycle of the clock.  Finally, people created computers that could safely re-arrange the order of instructions on the fly, looking down the instruction stream for instructions they can execute early, out of order, without hurting the correctness of the program.  But while the compiler can see the complex web of interdependencies between instructions before it serializes them into a program, and while the CPU can reconstruct that web later, the program itself says nothing about it.
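
To make that concrete, here's a tiny C sketch (my own toy example, not anything from the lectures) of the kind of independence a compiler can see but a conventional binary can't express:

```c
/* The two multiplies below are independent: the compiler sees that from
 * the dataflow, but it must still emit them as a flat linear sequence of
 * instructions.  An out-of-order CPU then spends transistors and power
 * rediscovering that independence at run time, by tracking which
 * instructions' inputs are ready. */
long dot2(long a0, long b0, long a1, long b1) {
    long p0 = a0 * b0;  /* these two may run in either order, or together... */
    long p1 = a1 * b1;
    return p0 + p1;     /* ...but the add must wait for both */
}
```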

All of this has costs, and it's those costs the Mill is trying to get around by making actual binary programs preserve these interdependencies.  They aren't the only people who have thought about this.  There was a school of instruction set design called VLIW that tried to approximate this by encoding which instructions could be issued at the same time.  But while that is part of the important information, the model it follows assumes that each group of instructions finishes before the next one begins, so there's still a considerable amount of information being lost.
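
For the toy function above, a VLIW encoding might look something like the schedule below.  This is invented pseudo-assembly, not any real ISA, but it shows what the grouping does and doesn't capture:

```
group 0:   mul p0 = a0 * b0   ||   mul p1 = a1 * b1    ; may start together
group 1:   add r  = p0 + p1   ||   nop                 ; assumes group 0 is done
```

The grouping records which instructions are independent enough to start together, but if a multiply takes three cycles, nothing in the encoding can say "the add's inputs arrive in cycle 3, so fill cycles 1 and 2 with unrelated work."  That is the latency information that gets lost.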

Now, what the Mill is purporting to do comes at a cost.  The same CPU you have in your laptop today can execute code that was written decades ago for a machine that was entirely different inside.  By exposing the latency and execution width in the instruction set, the Mill is forcing programs to be recompiled for each new generation of processors that is developed.  For something like your desktop PC, where you expect to buy software in pre-compiled form and use it for years, that is simply not workable.  So I don't think the Mill has any chance of ever conquering the world of PCs.  But while the world of PCs looms large in our minds since those are what we interact with every day, the world of computers is much larger than that.  The high end embedded processors you would find in a cell phone tower, or the servers inside Google or Facebook, only run a very limited set of software, and that software is often written directly to the task at hand and could be recompiled if need be.

On the one hand, the history of computing is littered with designs that were too radical, that tried to change too many things at once and failed because the task of making each and every innovation work was just too much.  On the other hand, many of the innovations the team is proposing, such as backless memory or their load semantics, are wonderful ideas that ought to work quite well even apart from all the other new ideas they're trying to bring forward.
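
The load semantics are a good example of why.  Here's a hedged C sketch of the aliasing problem they address, as I understand it from the talks (the Mill's actual mechanism is in hardware, not C, and this is my paraphrase of the lectures, not a spec):

```c
/* Why a conventional compiler struggles to hoist loads: p and q might
 * point to the same word, so the load of *q cannot be moved above the
 * store to *p without changing the program's meaning. */
long store_then_load(long *p, long *q) {
    *p = 42;        /* if p == q, this store feeds the next load */
    return *q;      /* so the load is stuck behind the store,
                       eating the full memory latency */
}

/* As I understand the Mill talks, a deferred load is issued early but
 * delivers the value memory holds when the load *retires*, so the
 * compiler can issue it well before the store and still see the 42
 * when p == q -- the latency is hidden without out-of-order hardware. */
```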

So if you've studied programming or computer architecture, go ahead and watch their videos.  Each one blew my mind at least once.
