Sunday, April 16, 2017

The Coming Interregnum after Moore's Law

An interregnum is a gap in governance, most commonly when a monarch dies without a child old enough to take over.  For decades the world has grown used to the idea that computers would get better and better year by year as engineers were able to fit more and more transistors onto a piece of silicon economically in a process described by Moore's Law.  Sadly, that law looks to be running out of steam.  So we're going to have to go through an interregnum while people discover some other substrate for computation that can be pushed further than silicon transistors could.

Transistors have come a long way since the idea was first patented in 1926.  They couldn't be built by the people who conceived them, but in 1947 Shockley, Bardeen, and Brattain figured out how to make a practical transistor for use in various electronics such as radios, and then in 1954 the first transistorized computer, TRADIC, was developed.  It was an amazing device at the time because computers were usually room sized whereas TRADIC was only the size of a refrigerator.  It also consumed only 100 watts of power instead of many kilowatts and was nearly as fast as the best computers built with the then-standard vacuum tube.  Sadly, transistors were still much more expensive than vacuum tubes.

Then, in 1959, people built the first silicon chip with more than one transistor on it.  People started putting more and more, smaller and smaller transistors on pieces of silicon.  In 1965 Gordon Moore noticed what was happening and predicted that by 1975 people could fit over 65,000 components on a single chip.  Sure enough, transistors continued to shrink exponentially, and by 1975 people were starting to talk about "Moore's Law."

And ever since then the number of transistors on a chip has doubled more or less every 18 months.  For a very long time, until 2005 or so, shrinking transistors also brought faster clock speeds.  How much a transistor leaks is governed by the voltage used and the size of the gate, and until we hit 90 nanometers around 2005 the amount that transistors leaked was tiny compared to the power required to flip them from a 0 to a 1, so voltage and clock speed could scale along with transistor size.  Ever since then leakage currents have rivaled switching currents, so voltage scaling has stalled and, with it, clock speeds have stopped climbing.  A piece of silicon can only dissipate so much heat per square centimeter.
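The power wall can be sketched with the standard dynamic-power relation.  This is a toy calculation with made-up numbers, not real process data, but it shows why clock speed stalls once voltage can't drop any further:

```python
def dynamic_power_watts(cap_farads, v_volts, f_hertz, activity=0.1):
    """Dynamic switching power: alpha * C * V^2 * f."""
    return activity * cap_farads * v_volts**2 * f_hertz

C = 1e-9  # made-up switched capacitance for a toy chip, in farads

# Dennard-style scaling: halve the voltage and the same power budget
# supports four times the clock frequency.
p_base = dynamic_power_watts(C, 1.2, 2e9)    # 1.2 V at 2 GHz
p_scaled = dynamic_power_watts(C, 0.6, 8e9)  # 0.6 V at 8 GHz
assert abs(p_base - p_scaled) < 1e-6

# Once leakage pins the voltage in place, raising f raises power
# linearly, and the chip's heat-dissipation limit caps the clock.
```
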

And now the shrinkage of the transistors themselves looks like it's beginning to fail.   Here's a Nature article from last year predicting its demise.  Here's Sophie Wilson, the genius behind the original ARM processor, saying she doesn't think there's that much time left.  And now Intel has repeatedly delayed moving off of 14 nanometers, with constantly slipping deadlines for Cannon Lake, its first 10 nanometer chip.  The end is not here yet, but it looks like it'll certainly arrive by 2025.

Thankfully, there's good reason to believe that this isn't anything like the end of progress in computation.  For a long time steam engines grew steadily more efficient, but eventually that stopped because there's a fundamental limit to how efficient a heat engine can be, bounded by an ideal called the Carnot cycle.  When engines got close to that bound, progress slowed down.
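For concreteness, the Carnot bound is a single line of arithmetic.  The temperatures below are illustrative, not taken from any particular engine:

```python
def carnot_efficiency(t_hot_k, t_cold_k):
    """Maximum efficiency of a heat engine between two temperatures (K)."""
    return 1.0 - t_cold_k / t_hot_k

# Illustrative numbers: 450 K steam exhausting to 300 K air can never
# convert more than a third of its heat into work, no matter how
# cleverly the engine is built.
eta = carnot_efficiency(450.0, 300.0)
assert abs(eta - 1.0 / 3.0) < 1e-12
```
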

Luckily, we understand firm physical limits on how efficiently you can perform computation, and we still have a long way to go.  Current gates in high-end silicon take around an attojoule to perform a simple 'and' or 'or' operation, but Landauer's principle says that it's possible to do it for 2.75 zeptojoules, roughly 500 times less.  And thankfully, by reducing the ambient temperature we should be able to do even better.  Regarding speed there's another limit, Bremermann's, that we're even further from.
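The Landauer figure is easy to check from the definition, k_B·T·ln 2 per bit erased: at room temperature it comes out to a few zeptojoules, and it falls linearly with temperature:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def landauer_limit_joules(temp_kelvin):
    """Minimum energy to erase one bit of information: k_B * T * ln(2)."""
    return K_B * temp_kelvin * math.log(2)

room = landauer_limit_joules(300.0)  # roughly 2.9 zeptojoules
cold = landauer_limit_joules(77.0)   # liquid-nitrogen temperature

# An attojoule gate sits hundreds of times above the room-temperature
# bound, and cooling lowers the bound proportionally.
assert 1e-18 / room > 300
assert cold < room
```
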

So what could take the place of Moore's Law?  Off the top of my head: different substrates such as diamond or carbon nanotubes.  Computation through magnetic spin.  Computation through photons.  Quantum computers.  Ballistic electrons.  Nano-mechanical systems.  Nano-electro-mechanical systems.  Pure chemical systems like DNA computing.  There are quite a few options, and all are far from ready for commercial use.  Still, there's no reason to think that we won't be able to make one of them work eventually.  In the meantime maybe we'll face stagnation, or maybe we'll have a golden age of computer architecture where we learn to do more with the transistors we have.  Only time will tell.

Tuesday, April 4, 2017

Freedom of contract requires the government

There's a very real sense in which any sort of free market you could have can only exist when it is created by a government.  Without government there might be no third party to prevent me from selling my goods to the person over the hill, but at the same time there's nothing preventing that person from just taking my goods and not paying me if he feels that he has a greater capacity for violence.  A world without government doesn't look like the "war of all against all" that Hobbes described in Leviathan, since people still have their senses of friendship and kinship, but those only extend so far.  Trading without the arm of the state to preclude violence takes trust that has to be built up over long periods of time and tends to occur in very stereotyped forms.  You can't run a modern multi-stage supply chain or anything like it without the umbrella of a state unless, against all odds, Nozick's ideas about capitalist anarchy actually work out in practice.

But beyond protecting us and our property from violence governments do something else to enable markets which I think is commonly underappreciated.  They're willing to enforce contracts.

Not all contracts, of course.  You can't sell yourself into slavery, to take just one example.  But in general, if you and another person come to a formal agreement and one of you breaks it, the other can go to the government, and it will use all its vast power to make sure the breaker either fulfills their obligations or pays up.  That's a lot of bother on the government's part, but it's proven absolutely vital in the development of commercial economies and in the end led to success for the governments which were willing to go to that trouble.

Let's say that I'm really good at making widgets.  And this guy I know knows people who want to buy widgets.  Ideally we'd go into business together and form a widget company.  But if later I could learn from our business who was buying the widgets, I could just drop my partner and sell to them directly.  Knowing this, that guy might never go into business with me, or might have to resort to complicated and inefficient means to prevent me from knowing who the buyers are.  So the ability to create a contract between us preventing this can make us both better off.

Or look at capital.  When I was reading Medieval Machines I was struck by how many of the milling developments in the high medieval era were the result of a bunch of people pooling their money into joint ventures, like damming a river to use water power to grind grain.  Without prior and enforceable agreements as to who could use what, and how, the development of large mechanical projects in Europe might not have gotten as far as it did as quickly as it did.

That the industrial revolution happened in Europe rather than China presents a bit of a puzzle.  China as a whole wasn't as wealthy as England, with wages as high, but the Yangtze delta was, and that's an area as large as England.  Markets were about as free in China, on average, as they were in England.  The people were as educated, though you can quibble about different emphases.  Coal, coal near population centers, was certainly a difference, I'll grant.  But another important difference was that the English bureaucracy wasn't nearly as selective as the Chinese one.

The idea of selecting officials by written examination was introduced early in Chinese history and it worked really, amazingly well allowing China to amass a unified state far larger than its neighbors and create far greater prosperity as well.  But the very selectiveness of China's meritocracy ended up being a problem because by the early modern era France and England had been expanding their bureaucracies to have roughly ten times as many officials per capita as Russia or China did.  The quality might not have been as high as in China but greater numbers meant more attention to more things and part of that was the enforcement of commercial law between merchants.

In theory Chinese merchants could have contracts but in practice the courts were busy and couldn't be bothered.  There were stories of merchants pretending that a murder had occurred in order to get a judge to see them so they could ask the judge to adjudicate a commercial dispute.  And I wonder if that, along with China's coal deposits being far away from its developed areas, can explain why the industrial revolution happened in Europe.

In the modern day we still see that different people have different access to commercial law, though basically every country cares about it at least in theory.  Hernando de Soto wrote a book, The Mystery of Capital, on how in many third world countries capitalism exists for the rich but not for the poor, who have no title to their property and no real access to the court system.  The elites have real contracts but the poor don't, and as a consequence inequality is redoubled.

One heartening trend in recent years is the introduction of widespread biometrics in India.  This is still a long way from all the tools of capitalism being widely available, but legal identity is the first step and an encouraging sign.

UPDATE:  And this is the sort of thing that identity provided by biometrics could hopefully help with.  Enough people have been declared legally dead and their land seized that there's a support group for them.

Thursday, March 30, 2017

Some recent books on consciousness

Recently I finished reading, or at least starting, three books in a row that were about consciousness.  Which, of course, is quite enough to do a blog post.

The first was Consciousness and the Brain by Stanislas Dehaene.  It was excellent.  Often when we talk about consciousness philosophically we get lost in depths of abstraction.  This was about consciousness as a scientifically observable phenomenon.  How to tell if someone is conscious of something?  Ask them if they saw it.  People are conscious of things when they notice them but not when they're asleep or not paying attention to them or in various other circumstances.  Insects can't report what they see so we'll get back to the problem of insect consciousness later.

It turns out there's a lot of investigation you can do within that framework that's still very interesting.  And all the philosophical debate about whether qualia are separable from observations is neatly sidestepped for now.

Investigations you can do start out with subliminal messages.  If you see a word or phrase for long enough you become aware of it, but it has to be present for more than roughly 50 milliseconds for that to happen.  And we can look at a brain with various imaging technologies and see the difference in its reaction between seeing a number for 40 milliseconds and for 80; the difference is apparently very obvious.

The author goes on to talk about how much processing the brain can do on input before it becomes conscious, turning written words into meaning, for instance, but not parsing entire sentences.  And also that while subconscious cues can influence your immediate behavior, their effects fall off rapidly and disappear entirely after less than 2 seconds.  This applies even to the most basic of functions, like Pavlovian conditioning.  If two stimuli are present at the same time and subconscious then conditioning can occur, but if the stimuli are separated in time then they have to rise to conscious awareness for conditioning to occur.  So consciousness is entirely prior to memory, something I hadn't known at all.

Next up was Stealing Fire: How Silicon Valley, the Navy SEALs, and Maverick Scientists Are Revolutionizing the Way We Live and Work by Steven Kotler.  This is the one I didn't manage to finish.  The idea that altered states of consciousness can be useful is one that I was interested in but the book ended up being blindly and shallowly enthusiastic about the concept in a way that I thought wasn't really teaching me anything I could rely on.  The author continued to just give examples in which altered consciousness could be cool without ever touching the limitations or potential drawbacks.  In Consciousness and the Brain for instance Dehaene talked about the power of sleeping on a problem and how your subconscious might come up with an answer for you, but also the need to think it over carefully consciously first and the need to double check the answer consciously later since intuition isn't always reliable.  Stealing Fire just talked about how subconscious processing was really cool and powerful without talking about what had to happen before and after.  It also didn't really have any clear idea of what it meant by the word consciousness and conflated altered consciousness and unconsciousness in its examples.

Finally, there was Other Minds: The Octopus, the Sea, and the Deep Origins of Consciousness by Peter Godfrey-Smith.  I enjoyed that book but I wasn't blown away like I was by Consciousness and the Brain.  Octopuses and other cephalopods are cool animals which diverged evolutionarily from humans back when nervous systems were measured in the hundreds of neurons.  Yet somehow these creatures evolved a very sophisticated intelligence entirely independently.  Much of the book was taken up by describing octopuses, squids, and cuttlefish, their physiology and psychology, and I found that bit very interesting.  The author also shared some ideas about consciousness which I thought weren't very interesting, but that was a small part of the book.  The main argument was that cephalopods can't have a reflective consciousness like we do because a human can hear themselves talk but a squid can't see itself change color to communicate.  Aside from the obvious objection that congenitally deaf people seem to have reflective consciousness, the author relies a lot on introspection to formulate his idea, and introspection is notoriously unreliable.  I'm sure the author is correct when he says he always thinks in words, but people differ in how they think.  I know that I often go to explain an idea and get to what seems like a simple part of it, something that seems like it should be a single word, but as I unpack it mentally it turns into a sentence, then a paragraph.  Still, the way an octopus can change color to match its surroundings without being able to see color itself was very interesting, and overall I'm glad I read the book.

Wednesday, March 8, 2017

Blue Origin's New Glenn rocket

This is an exciting time to be someone interested in spaceflight.  Not as exciting as the original space race, of course, but hopefully we'll get close within the coming decade.  SpaceX has been making a lot of news with its landings of the first stages of its rockets after using them and its big plans for future Mars missions and sending people around the Moon.

The United Launch Alliance, which handles most of the government's launches and has a very good reliability record, has also announced some fairly ambitious plans involving the development of the space near Earth and its next-generation upper stage.  But probably the most exciting competitor to SpaceX right now (besides NASA) is Blue Origin.

SpaceX has its Falcon 9, which is launching satellites now, and its more ambitious and out-there ITS.  Blue Origin has until now had its New Shepard rocket for shuttling tourists up to space briefly, but recently it's been talking more about the New Glenn, a vehicle somewhere in ambition between SpaceX's existing and imagined rockets.  You can watch the video Blue put together about the New Glenn here.

This is a rather large rocket.  Not quite Saturn V large, as you can see here, but close.
There are apparently two different configurations it can fly in, one with a third stage and one without.  The two-stage version is the one they made the video of and for which they recently provided some more details.  It's not much taller than a Falcon 9 but is much thicker, at 7 meters in diameter to the Falcon 9's 3.7 meters.  The Falcon had to be long and thin so that SpaceX could ship it by truck all the way from California, where its rockets are made, to Florida, where they are launched.  Blue Origin has invested in a big factory in Florida near their launch pad, so that isn't as much of a concern for them.

There are still a number of unknowns for the two-stage New Glenn, but we know how large a payload it can send into low Earth orbit, 45 tons, and how much it can send on its way toward geostationary orbit, 13 tons.  This compares with 22.8 and 8.3 tons for the Falcon 9.  It's interesting that the numbers are so far apart for the New Glenn compared to the Falcon 9, but there's a reason for that.

Remember from back when I blogged about rocket performance that how much a rocket can accelerate is governed by how much of the combined rocket/payload mass is fuel.  The New Glenn seems to have a fairly heavy engine section for its second stage.  That means it can lift a heavy payload into orbit quickly, before gravity has had time to cause too much of a slowdown.  But it also means the total non-payload mass of the second stage is higher, so even if the payload shrinks to something fairly small, the engine and other fixed weights still prevent the overall mass ratio from getting very high.  Hence the stage will have a hard time getting a reasonable payload into higher-energy orbits.
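The relation in question is the Tsiolkovsky rocket equation, delta-v = Isp · g0 · ln(m_full / m_empty).  Here's a toy sketch with invented stage masses (not New Glenn's real, unpublished figures) showing how a heavy dry stage caps the mass ratio:

```python
import math

G0 = 9.80665  # standard gravity, m/s^2

def delta_v(isp_s, m_full_kg, m_empty_kg):
    """Tsiolkovsky rocket equation: dv = Isp * g0 * ln(m_full / m_empty)."""
    return isp_s * G0 * math.log(m_full_kg / m_empty_kg)

# Invented upper stage: 100 t of propellant, Isp of 450 s, 5 t payload.
# Light 5 t dry stage versus heavy 15 t dry stage:
light = delta_v(450, (100 + 5 + 5) * 1000, (5 + 5) * 1000)
heavy = delta_v(450, (100 + 15 + 5) * 1000, (15 + 5) * 1000)

# The heavy stage gives up over two kilometers per second of delta-v,
# which is why a heavy second stage struggles with high-energy orbits.
assert light - heavy > 2000
```
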

I suppose that has to do with the optional third stage.  If you're going to have a third stage you want your second stage to burn its fuel fairly quickly with a high-power engine so that the third stage can get down to business.  I'd imagine that adding the third stage would add a bit to the low Earth orbit payload of one of these rockets but would add far more to how much it can send to geostationary transfer orbit.

The other big difference I noticed, besides size, between New Glenn and Falcon 9 was how the rockets are supposed to land.  SpaceX likes to land the Falcon 9 back on land when possible but also has a barge that floats out at sea for the rocket to land on if the Falcon 9 doesn't have enough fuel left, after getting its payload to space, to fly all the way back to the pad it took off from.  SpaceX is clearly thinking of pad landings as what it wants to do in the future in order to quickly refurbish the rocket and launch again.

For New Glenn, landing at sea seems to be the plan, with a return to the launch site something they're not seriously considering.  You can see this in the ship they're using.  Rather than a cheap barge they seem, in the video, to be planning to use a second-hand tanker with a big landing pad fitted on top.  The side thrusters on a tanker aren't very powerful, so it would have to be moving forward to keep good control of its orientation and provide a stable landing site, but overall it offers a much bigger target for the rocket to land on, and in heavier seas too.  Also, it should be able to sail back to port much more quickly than SpaceX's barge can.  On the other hand, it will probably be a lot more expensive to operate too.

So Blue Origin is investing in a bigger, more powerful ship to make landing at sea easier and SpaceX is mostly thinking about returning to land with sea landings something of an afterthought.  I suppose time will tell which is a better strategy.

Thursday, February 9, 2017

Hydrogen versus gas in cars

For a long time people have talked about the idea of using hydrogen to fuel our transportation systems.  There are some obvious advantages to this.  When you burn hydrogen you don't release any CO2; the hydrogen combines with oxygen to form simple water.  Batteries like you'd find in an electric car also store and release energy without emitting CO2, but they don't hold as much energy as the equivalent weight of hydrogen.  Hydrogen theoretically stores 40,000 Wh/kg, whereas even a good battery like the one used in a Tesla only stores around 100 Wh/kg.  The theoretical values aren't the whole story, since converting hydrogen to motive force is less efficient than converting charge in a battery.  And you also need tanks to store the hydrogen, which I'll come back to later.  But those factors don't overcome the magnitude of the difference, and long-range hydrogen cars are feasible in a way that long-range electric cars aren't.

On the other hand, hydrogen has some big problems compared to batteries or other fuels.  At room temperature and pressure hydrogen only nets you 3 Wh/L, compared to 9,500 Wh/L for gasoline or roughly 500 Wh/L for lithium-ion batteries (I couldn't find Tesla's exact number).  To make hydrogen feasible you have to compress it a lot.  According to the ideal gas law you'd need to keep your hydrogen at around 450 bars to get up to a good-for-long-range 1,250 Wh/L, but Wikipedia tells me real tanks run at 690 bars, apparently because hydrogen stops behaving like an ideal gas at those pressures.  690 bars is a lot, requiring a very heavy pressure vessel if it's going to be robust to car crashes.  For a 10 L, 0.3 kg tank of hydrogen the back of my envelope tells me you'd need something like 100 kg of steel.  I have no idea if that's actually accurate, but it suggests that hydrogen's range advantage over batteries isn't so very great.
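That back-of-envelope can be written out directly.  The only assumptions are the roughly 3 Wh/L ambient figure from above and the fact that an ideal gas's density scales linearly with pressure:

```python
# Ideal-gas back-of-envelope for compressed hydrogen storage.
AMBIENT_WH_PER_L = 3.0  # rough energy density of hydrogen at 1 bar

def ideal_gas_pressure_bar(target_wh_per_l):
    """Pressure (bar) needed if density scaled linearly with pressure."""
    return target_wh_per_l / AMBIENT_WH_PER_L

ideal = ideal_gas_pressure_bar(1250.0)  # ~417 bar, close to the 450 quoted
assert 400 < ideal < 450

# Real hydrogen at these pressures has a compressibility factor of
# roughly 1.4-1.5, i.e. it's substantially less dense than an ideal
# gas, which accounts for most of the gap up to the actual ~690 bar.
real = ideal * 1.5                      # ~625 bar
assert real > ideal
```
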

Let's say you've got a bunch of the hydrogen you were going to put into your car.  There are other things you can do with it instead.  Something that's been used industrially for over a hundred years is the Sabatier reaction, turning hydrogen and carbon dioxide into methane.  Methane is also a gas, like hydrogen, but it's a far denser one, cutting down on the pressure needed to get a tank of it into something that can fit into a car.  Some energy is lost in the conversion, but only around 25%.  And that's just the hundred-year-old technology.  Turning hydrogen and carbon dioxide into methanol, ethanol, or conventional hydrocarbons is something that currently exists in pilot plants.  Development would be needed to make this practical at the scale of our current gasoline production, but it would be far, far easier than converting all our cars, infrastructure, etc. to run on hydrogen.
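The roughly 25% loss can be checked from standard heats of combustion.  Using higher heating values, about 285.8 kJ/mol for hydrogen and 890.4 kJ/mol for methane:

```python
# Energy bookkeeping for the Sabatier reaction: CO2 + 4 H2 -> CH4 + 2 H2O.
# You spend the combustion energy of 4 mol of H2 and keep 1 mol of CH4.
HHV_H2_KJ_PER_MOL = 285.8   # higher heating value of hydrogen
HHV_CH4_KJ_PER_MOL = 890.4  # higher heating value of methane

energy_in = 4 * HHV_H2_KJ_PER_MOL   # 1143.2 kJ of hydrogen burned value
energy_out = HHV_CH4_KJ_PER_MOL     # 890.4 kJ retained as methane

loss_fraction = 1 - energy_out / energy_in  # about 0.22
assert 0.15 < loss_fraction < 0.30
```
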

But of course this scheme is currently missing one of the chief advantages that both electric and hydrogen cars enjoy: the federal subsidy for zero-emission vehicles.  A vehicle running on synthetic gas might absorb a gram of CO2 from the air for every gram that comes out its tailpipe, but there's still CO2 coming out of its tailpipe.  So legally it's not in the right category to benefit from existing government subsidies.  Maybe if this caught on the government might do something, who knows.  But really, this sort of issue is why I'd much prefer a carbon tax or some other universal and evenhanded program to our current, mostly ad hoc system for trying to reduce carbon emissions.  Because really, nobody knows how many other situations like this there are, where a fixation on details distracts us from the problem of reducing the amount of carbon dioxide in the air.

Monday, January 23, 2017

Collected book reviews

At Bill Gates's recommendation I recently read The Grid by Gretchen Bakke.  There's been a lot of talk recently about the US having an infrastructure problem, but mostly it's in the context of our roads and bridges.  Our electrical grid is also vital to modern life, and impending changes to electricity generation look likely to increase the strain on it.  The average cost of solar electricity has been plunging, but the demand for electricity doesn't go down when a cloud passes over a solar farm.  We really want to be spreading out fluctuations in generation between farther-flung power plants, but to do that we need to build new, better power lines.  There's also a good history of early electrical innovation, though I was already familiar with much of it.

Donald Trump's election finally prompted me to get around to reading The Revolt of the Public by Martin Gurri.  He writes about how social media has made the failures of expert opinion more apparent, given the public the confidence to challenge the rule of existing experts in many areas, and ignited political protests around the world.  Before reading this book I hadn't ever stepped back and taken stock of the wide variety of protests that have taken place in areas as diverse as Tunisia, Spain, and the US.  I suppose I'd liken myself to a frog who didn't realize the water it's in is heating up.

As people's political self-efficacy, to use a phrase from my excellent high school civics textbook, grows, they protest mistakes but don't have any unified theory of what to change specifically, just the notion that it must be possible to do better.  Apart from protests, this means a lot of politicians who are outsiders, or who can present themselves as outsiders, get elected.  Obama painted himself as an outsider.  Trump did too.  I can only hope the Democrats get some movie star or other for 2020?  In any event, I guess this makes the Greek referendum I was confused about make more sense.  The people conducting it were newly in power, so the voters trusted them.  But the forces shaping political results will mostly continue as they have, so I fear we're all going to keep being disappointed.  And I should really write a post at some point on veto points and legitimacy.

Another book I finished recently was The Gene: An Intimate History by Siddhartha Mukherjee, also recommended by Bill Gates.  I found the start a bit dull as I'd already been familiar with the story of Darwin and Mendel.  But as we got into the 20th century I was fascinated by all the techniques used to extend our knowledge of genetics and all the stories I hadn't heard before.  I wasn't particularly impressed by his analysis of the implications of genetic technologies.  For instance in one place he implicitly seemed to assume abortion is murder but in another place he implicitly assumed it isn't without noticing.  In his description of the first trial of a genetic therapy he also skimmed over what seemed, to me, to be the most perverse logic I've ever heard of outside a Kafka novel.  The idea was that if researchers provided the option of experimental medicine to parents of children who were doomed to die in pain of a rare genetic condition then the parents would feel they had no choice but to take part in trials of the therapy.  That would violate their consent.  Therefore, only people with a non-life-threatening variant of this disease could take part in the dangerous clinical trial.
