Sunday, December 10, 2017

Tax Rates and Growth

People trying to justify the recent Republican tax plan often talk about the importance of long run economic growth.  And you can see how, if tax cuts really did boost long run growth, that could be a really important argument.  The difference between 2% and 3% annual growth over a hundred years works out to a factor of about two and a half.  If that sort of change were really possible it would justify quite a bit.  Even if all that extra growth went to rich people, then even at the new, lower tax rates there would be much more tax money available for social programs.
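To make the compounding concrete, here's a quick back-of-the-envelope check of that factor of two and a half (a minimal sketch; the 2% and 3% rates are just the illustrative figures from the paragraph above):

```python
# Compare the size of an economy after 100 years of 2% vs 3% annual growth.
low = 1.02 ** 100   # ~7.2x the starting size
high = 1.03 ** 100  # ~19.2x the starting size

print(f"2% growth over a century: {low:.1f}x")
print(f"3% growth over a century: {high:.1f}x")
print(f"Ratio between the two: {high / low:.2f}x")  # ~2.7, i.e. roughly two and a half
```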

Sadly there's no way cutting taxes could have such a large effect in the US.

The theory behind the cuts is that people become more productive when they've got more or better machines.  Machines cost money, so leaving businesses more money to buy machines will make them more productive.  More machines leads to more money leads to more machines in a virtuous cycle.  Except that in a developed economy it's often very hard to figure out how to usefully add more machines.

At work I mostly use a computer.  Giving me a second computer might increase how much I get done by a bit, but it wouldn't increase it very much.  Diminishing returns are a fact of life, and finding ways to usefully spend money increasing productivity is hard.  Some people work at companies where they have to make do with 20 year old computers, and it's quite possible that they would work faster if they could upgrade.  But that's not typical in the US.  Such cases do exist, and there are also cases where extra money can fund research that yields better computers for everyone.  This is why we try to slant the tax code to favor investment over consumption.  But expecting lower taxes to increase growth by a large, sustained amount is still wrong.

There are cases, not in the modern US, where it can play out like that.  If you happen to live in a country that's pretty poor and where most businesses don't have the latest technology, then you could quite plausibly have huge growth rates based on money-to-machines-to-money virtuous cycles.  That's what's happening in China right now.  It also happened in Japan 50 years ago.  If there's some other country that's figured out how to build awesome machines you don't have yet, then you might very well find yourself limited by how many of them you can buy - though you might also have other problems that are more pressing.

In this case what you really want to do, as a government interested in growth, is to increase the savings rate: how much people invest relative to how much they consume.

One strategy for this, pursued by prewar Japan, is to just tax poor people and give out the money as business loans.  Even today richer people save more of their income, and that was even more true when the population was mostly peasants without access to banks.

The early 19th century US didn't have quite the bureaucratic sophistication of late 19th century Japan but managed to do something similar.  By introducing tariffs that raised the price of imported cloth and other goods, it indirectly increased the profits of factory owners, allowing them to invest more by buying pirated copies of machines already in use in Britain.  It's much easier to apply taxes at a few ports of entry on obvious things like ship arrivals than it is to administer something like an income tax.

The early 20th century Soviet Union took this approach to new extremes.  Most new machinery in the early days was paid for by selling grain abroad.  The need to more efficiently expropriate grain was part of the reason for the drive to collectivize agriculture.  Big centralized farms are again much easier for the state to manage than lots of little spread out farms.  Sometimes there was mass starvation but the country industrialized rapidly.

You might have noticed that China's economy has been growing rapidly recently without any mass starvation.  As far as I can tell the main difference is that they jump-started things with foreign investment.  These days the trillions in domestic savings drown out the $100 billion or so in foreign capital, but when the current boom was starting money from Taiwan, Japan, etc. was crucial.  This seems like a far more humane way of jump-starting growth than the other methods above.


Wednesday, November 29, 2017

RISC-V is doing well

Back in 2010 some researchers at the University of California, Berkeley started work on an instruction set architecture (ISA) that was meant to be open for anybody to use and to incorporate modern ideas.  All computers run by performing a series of operations, like loading a 16 bit value from memory, adding two 32 bit numbers together and returning the highest possible 32 bit value if the result can't be represented in 32 bits, or taking the cosine of a 32 bit number.  An ISA defines which operations are basic to the computer and which have to be assembled out of other instructions.  It also tells you how instructions are represented as sequences of 1s and 0s in memory.  And it specifies various other things such as how memory accesses from different cores interact.
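As an illustration of that "basic versus assembled" distinction, here's a sketch (in Python, purely for illustration) of how a saturating 32 bit add - the second operation above - can be built out of simpler operations when an ISA doesn't provide it directly:

```python
UINT32_MAX = 2**32 - 1

def saturating_add_u32(a: int, b: int) -> int:
    # An ordinary add followed by a compare-and-clamp; an ISA with a
    # saturating-add instruction does all of this in a single operation.
    total = a + b
    return UINT32_MAX if total > UINT32_MAX else total

print(saturating_add_u32(4_000_000_000, 1_000_000_000))  # 4294967295 rather than wrapping around
```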

You might have heard of the great RISC versus CISC wars of the 1980s.  For a long time it was very expensive to move data from memory to the computer core (there was only ever one back then) and back.  Transferring instructions is part of that, and to save on instruction memory designers wanted to use as few instructions as possible and so made each instruction as powerful as they could.  And since programming back then was usually done by programmers writing individual instructions rather than using compilers, this also meant less work for the programmer.

But as time went on the amount of data that computers worked on grew faster than the number of instructions they used, making code size merely important rather than critical.  Programmers started to use compilers more.  And some researchers at Berkeley and Stanford realized that there were a lot of advantages to using a less complex instruction set.  If you simplified your ISA you would have an easier time doing things like starting one instruction before the previous one had finished, because there were fewer complicated interactions.  Fewer instructions meant less work for designers.  And in the early 80s you could fit an entire RISC core on a single silicon chip rather than having to spread it across multiple chips.  That made it cheaper and faster.

A lot has changed since the 80s.  Some aspects of the RISC philosophy have fallen by the wayside but others are embraced by everyone designing a new ISA for general purpose computers.  And RISC-V is, of course, firmly in the RISC camp.

Other people have created ISAs that are open for anybody to use and free of patents, but none of them ever really took off.  I'm not familiar with them so I'm not going to speculate on why.  In contrast RISC-V has gotten a lot of people interested.  There are a number of concrete processors adhering to the architecture that have been designed at Berkeley and other places and also released for people to use freely, which may be part of it.

When I first heard of these efforts a couple of years ago I was impressed.  Back when I was doing my thesis I could see how an open chip design would have been useful for me to modify and try out my ideas.  Now that these designs were out there, free to modify and with working compilers and other software, lots of academics working on processor design were going to have a very powerful tool.  So RISC-V clearly had a bright future in academia.

In the outside world there were certain benefits.  RISC-V makes it very easy to add your own new instructions for any special purpose you might have.  So companies with special purposes in mind would have a reason to look at it.  I wasn't optimistic about a wider impact, though.

Well, it now looks like I was underestimating it.  At the seventh RISC-V Workshop yesterday Western Digital announced that they were moving to RISC-V for the microcontrollers in their hard drives which tell the drive head where to go, communicate back to the motherboard, etc.  That's potentially billions of RISC-V cores shipped in commercial products every year.

A while ago Nvidia also announced that they were looking at RISC-V for the microcontrollers orchestrating things in their graphics cards while the GPU cores did the computational heavy lifting.  They mentioned that the ability to add their own extra instructions was a big draw.

So that's some success in embedded microcontrollers.  That makes sense for people who want more customization or who don't want to pay licensing fees to, say, ARM.  A few days ago I certainly hadn't been expecting people to be seriously considering RISC-V for application cores running all sorts of different programs such as in a phone or laptop.  If you're receiving applications from third parties they can't make use of any special extra instructions you have so the RISC-V flexibility isn't a factor.  And nobody has created applications for RISC-V, though you can always compile existing code for it if you have access to the source.

Well, I still think that, but another of the talks at the Workshop presented a fairly hefty 4 core chip that would do pretty well inside a laptop.  I'm not sure anyone is going to put it there, but I'm sure people will be using it for servers, where you're running a narrower selection of software.  Support for RISC-V is also being added to Linux, though it isn't complete yet.

The whole thing is moving faster outside of academia than I would have expected and I'm interested in seeing what the future brings.

Sunday, November 19, 2017

Expertise, the president versus congress

Since writing a post a while back about the way complexity is a problem for Congress, I've been happy to discover that the ideas in it aren't at all original and that these are the sort of things people write papers on.  Here's a good article on one recent paper.  I suppose I should have seen that coming.  Possibly I got the idea from somewhere initially and then forgot about reading it.

But anyways, figuring out what you need to know to write legislation is hard.  It would be cool if Congress had a big budget to hire outside experts, but instead they have to make do listening to what lobbyists tell them and trying to decide which to believe.  Of course there is one part of government that has a huge budget to hire people with specialist knowledge and which has tons of them on staff.  That is, the executive branch.

That's an angle on this whole situation I'd completely overlooked.  A president proposing legislation can use the Department of Education to draft school reform bills, use the EPA and Department of Energy to draft climate control legislation, etc.  People talk about the imperial presidency.  I expect that this is a pretty big factor in how we got that.

There's also some cause for hope here.  We got the Congressional Budget Office from Congress wanting to push back at having to rely on the White House when budgeting.  To quote Wikipedia:
Congress wanted to protect its power of the purse from the executive. The CBO was created "within the legislative branch to bolster Congress’s budgetary understanding and ability to act. Lawmakers' aim was both technical and political: Generate a source of budgetary expertise to aid in writing annual budgets and lessen the legislature’s reliance on the president's Office of Management and Budget.
What got me thinking about this power dynamic was watching the recent floundering of Congress on their health care plans and other matters.  Partially this is an issue of leadership, but part of the problem also seems to be that the executive is just not interested in the matter.  Or possibly that, with so many political appointments still unfilled, he's not able to help.

Wednesday, July 26, 2017

Rockets VII: Staging

See also parts I, II, III, IV, V, and VI.

Space is sort of hard to get to.  Take one of the Space Shuttle Main Engines (SSMEs), a really efficient rocket engine with an exhaust velocity of 4.4 km/s in vacuum.  That's pretty efficient for burning stuff and about as good as we can do for a rocket that can take off from Earth.  But let's say we do want to take off from Earth.  Plugging that into the rocket equation we see that you need a mass ratio of 8.5 to 1 to get the 9.4 km/s you need to reach orbit.  That's a problem, especially when the big tank you need to hold the hydrogen for your rocket probably limits you to a mass ratio of 10 to 1 before you add in the engines or the shuttle and payload.  Thankfully the shuttle also had its boosters, which finished burning early; the big casings that contained all that solid fuel were then dropped into the ocean where they couldn't slow the rest of the shuttle down.
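A quick sanity check of those numbers with the rocket equation (a minimal sketch using just the figures quoted above):

```python
import math

v_exhaust = 4.4   # km/s, SSME vacuum exhaust velocity from the paragraph above
delta_v   = 9.4   # km/s, roughly what's needed to reach low Earth orbit

# Tsiolkovsky rocket equation: delta_v = v_exhaust * ln(mass_ratio)
mass_ratio = math.exp(delta_v / v_exhaust)
print(f"Required mass ratio: {mass_ratio:.1f} to 1")  # ~8.5 to 1
```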

Being able to drop heavy pieces of your rocket when you're part way to orbit has been a part of rocketry since the beginning.  Here's how it works in theory.  Let's say you have a rocket that's easy to build and which can carry 1/5 of its weight as payload.  But let's say it only has a delta-v of 5 km/s.  Well, that doesn't make it to orbit.  Ah, but we can make another one that's 5 times bigger and can carry the smaller rocket as its payload.  We launch the big one, get it up to 5 km/s, and it releases the small one, which gets its payload up to 10 km/s for a nice high orbit.  Our overall mass ratio is 25 to 1.  In theory if you just made a single rocket stage with a mass ratio of 25 to 1 that would be just as good - but that's impossible.  The tanks you need for fuel limit you to 20 to 1.  Add in rocket engines powerful enough to lift the rocket against Earth's gravity and your mass ratio goes down further.  You need some sort of staging to get a chemical rocket to Earth orbit.
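Here's the same toy example as code - a sketch that just restates the arithmetic in the paragraph above (the 5 km/s per stage and the 5-to-1 stage-to-payload ratio are illustrative numbers, not real vehicle figures):

```python
stage_delta_v    = 5.0  # km/s each stage adds to whatever it carries
stage_to_payload = 5    # each fully fueled stage weighs 5x the payload it carries

# Two stages stacked: the big stage's "payload" is the entire small rocket.
total_delta_v    = 2 * stage_delta_v        # 10 km/s, enough for a nice high orbit
gross_to_payload = stage_to_payload ** 2    # 25 to 1 from launch pad to final payload

print(f"Total delta-v: {total_delta_v} km/s")
print(f"Launch mass per unit of payload: {gross_to_payload} to 1")
```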

Early on people couldn't do that sort of theoretical, one rocket carried by another, staging.  When you start a rocket on the ground there's gravity pushing the fuel into the engine.  When you start a rocket on the ground and the engine just doesn't start you can just fix whatever's wrong and try again.  Neither of those is true for upper stages and to start with people didn't know how to deal with that.

What the Soviets and US did instead was like what the Space Shuttle did.  They lit all their engines on the ground and had some high thrust bits that dropped off early while the rest of the rocket made it to orbit.  The Soviet rocket that launched Sputnik had essentially five identical rocket engines.  Four were attached to small tanks and one was attached to a big tank.  Under the combined thrust of the five the rocket would be boosted up into the upper atmosphere quickly, then the four outboard engines would burn through their fuel and just fall away, while the last part had a big enough tank relative to everything else that it eventually produced enough speed to get into orbit.

The US, with the Atlas rocket, did something similar.  But instead of having different tanks, all three engines were attached to a single fairly efficient tank.  The three engines were lit on the ground and lofted the rocket up to a high altitude where it would take a long time to come down.  Then the two booster engines would fall away and the remaining small and efficient sustainer engine would take its sweet time accelerating the rocket to orbit.

The US and Soviets started figuring out how to make true stages after that.  The Soviets put a second stage on top of the rocket with an open cage connecting it to the rest of the rocket.  Before the first stage burned out, while the acceleration was still forcing the fuel down into the engines, they lit the second stage.  The US just used solid rockets, where the fuel doesn't need any force to keep it in place and there aren't any turbines to spin up.

Eventually both learned further techniques, like having little ullage motors produce just enough force to settle the fuel while a new stage was being lit, letting them both use liquid rocket stages that were entirely sequential.  Upper stages can use engines that are designed to operate in vacuum, with the larger engine bells that let you direct your exhaust better but which would cause problems if they had to fight against atmospheric pressure.  People talk about taking a single stage to orbit.  Elon Musk says that the first stage of his Falcon 9 could just barely make it to orbit if it didn't have to carry other stages or a payload.  But there's no reason to go to space unless you're taking something there.  Until we develop high efficiency rockets that also produce high thrust, we'll have to continue to use staging to make it to space.

Tuesday, July 25, 2017

Adoption curves are often steeper than you think

When I was younger there were a lot of wondrous devices predicted in the science fiction books I read that I thought I might never get to use.  For instance, in some Tom Clancy book or other I read in high school, a government servant pulled out a pocket device that combined a cell phone, a GPS, and a PDA.  This was of course a super expensive device that was only available to top government officials and the very wealthy.  It was, of course, basically an iPhone but with fewer features.  There was another novel I read in college, Snow Crash, where the protagonist managed to get access to a piece of software normally reserved for the rich and powerful.  It was a 3D model of the Earth overlaid with satellite imagery that you could manipulate and zoom in on any location smoothly, plus a lot of extra information.  It was basically Google Earth except with a few more features.

Every technology has an adoption curve.  Once indoor plumbing was for the rich only, but now it's illegal for even the poorest of us to try to save money by building a house without it, for valid public health reasons.  Once TVs, refrigerators, and all sorts of other things were available only to the few, but they eventually ended up with mass adoption.  Here's a nice chart courtesy of The Atlantic:


It's not all smooth or even a constant march forward, but the trend is clear.  Are the slopes steeper more recently?  Maybe, or maybe that's just an artifact of which technologies the chart maker was aware of.

Whenever I'm in a discussion about some new technology someone always points out that it'll be just for the rich.  Often that's true at first.  But sometimes, as with the iPhone, it only makes sense to build the thing at all when a large swath of the moderately well off population can buy it.  And sometimes, as with Google Earth, it doesn't make sense to restrict who can use it.  But even if it does start off just for the rich, the poor will usually get to use it eventually.  If the materials involved are cheap but the design is costly, then it'll probably be adopted quickly, since the design cost gets spread over every unit sold.  If the materials themselves are expensive, then adoption will be slower.  But it will probably reach wider use eventually, and we should only talk about the period where it is the preserve of the wealthy rather than assume that will be the whole of the future.

Thursday, May 11, 2017

Book review: The Righteous Mind

The Righteous Mind: Why Good People Are Divided by Politics and Religion by Jonathan Haidt is a book about morality.  It's about the ways in which people form moral opinions and the underlying basis for human morality.

The book begins with a description of how the author's views have changed over time and how the conventional view of how children develop morality has changed over time.  To oversimplify, at one point people viewed children as inherently immoral and needing to be taught about morality.  Then children were viewed as being inherently moral and only needing to be protected from forces that would stunt their inherent goodness.  Then Lawrence Kohlberg re-invented morality as a way of rationally reconciling one's own desires with the desires of others.  And finally Haidt came to the conclusion that people have inborn moral intuitions that we use to construct our theories of morality.  That won't be any surprise to readers of The Secret of Our Success, but of course this book actually came out before that one, even though I read them in the opposite order.

Carving at the Joints

The author looked at the responses people gave when asked about moral questions and came up with Moral Foundations Theory, which holds that all our moral intuitions are grounded in one of five foundations.

  • Care, wanting others to be healthy and happy.
  • Fairness, wanting people to interact justly.
  • Loyalty, wanting members of your group to act in the group's best interest.
  • Authority, wanting people to respect legitimate authority.
  • Sanctity, wanting pure things not to be polluted.
Haidt further argues that we arrive at most moral conclusions intuitively and only come up with rational explanations for our intuitions afterwards in an ad hoc manner.  He also argues that modern liberals are morally stunted because they rely only on the first two foundations and ignore the other three.  I'll come back to that in a bit but first I want to talk about the five foundations he came up with.

Being an "armchair scientist" who comes up with "just so stories" is something that scientists usually look down on.  Haidt mentions this directly at one point.  But it's an important criticism nonetheless and good scientists should always be looking for ways to test their hypothesis.  This isn't to say that there's no place for armchair theorizing in science.  Einstein essentially came up with his theory of relatively in his armchair working from a bit of evidence and an intuition for what sort of solution would be the most mathematically beautiful.  But because of that we consider Einstein a remarkable genius and most people who concoct theories and stories in their armchairs get it wrong.

When Haidt started laying out his moral foundations my thoughts immediately turned to the various theories of personality that people have come up with over the years.  Hippocrates had his four humors theory of personality.  More recently Myers and Briggs have their theory of personality, which has become popular.  But seeing that it was hard to agree on how to divide up human personality, a number of scientists got together and tried, through several iterations, to look at hundreds of traits and see if they naturally correlated with each other and formed clusters.  They did, and so the Big Five personality schema came to be.  There might be some flaw in it, but it's a lot better than anything you could come up with just sitting in your armchair and thinking about how the people you know are different.

Earlier theories weren't entirely wrong.  Pretty much every theory of personality had a measure of introversion/extroversion and the Big Five does as well.  But to the extent that other measures of personality are predictive and consistent for a person over time it's mostly only to the extent that they agree with the Big Five measures.

So when Haidt talked with a few fellow researchers about the survey responses he got and tried to sort them into categories, it's almost a certainty that he failed to carve nature at the joints.  Care and Sanctity do seem like natural categories to me, maybe as clear as extroversion is, but I wasn't at all convinced that Fairness, Loyalty, and Authority were natural categories, and in particular Fairness seemed to be covering a lot of complexity.

Sure enough, after Haidt does some experiments and gets some pushback, he adds a sixth foundation, Liberty, which is broken out of Fairness.  But without some sort of factor analysis I'm not at all sure that the six factors in Haidt's new moral matrix actually correspond naturally to the foundations of individual morality.  Still, I think the notion that there are foundations to our sense of morality is a useful one.

Reasons for Reasons

Part of the start of Haidt's moral foundations theory was noticing that when he gave a story, say about a man engaging in sexual congress with a dead animal, to a western college student they would feel it was wrong, but when they tried to persuade the ostensibly skeptical interviewer that it was wrong they would try to point to or make up concrete harms caused by the act.  By contrast, people from other cultures or with less education would often feel more comfortable saying that it's just wrong without any recourse to specific harms.

Haidt points to this as an example of the Sanctity moral foundation, which seems more or less true since I think he probably got that foundation right.  But he also presents the college students' inability to articulate that it's just wrong as them being out of touch with all the foundations of morality.  I'm not sure that's right.  When you're arguing about harm you can be sure that anybody with an intact sense of empathy will have a fairly similar sense of harm to yours, even if they might not have seen some chain of connections whereby an act might cause a harm.  But people's notions of purity are very culturally based, and whether a pig, a snake, a menstruating woman, etc. are intrinsically impure is so wrapped up with their culture that you can't be sure of persuading them no matter how clearly you lay out the facts.  So trying to frame immorality in terms of harm might be a proper response to living in a cosmopolitan society, regardless of how in touch one is with one's moral foundations.

The whole notion of moral persuasion struck me as something like a gaping hole in the center of the book.  The author says that we use reasoned argument to persuade but also says that reason is nearly useless in terms of morality.  Ok, but then why do we use reason to persuade other people?  If I'm trying to persuade someone that something is wrong, I don't just think of what moral foundation it violates and then repeat "Authority, authority, authority" to make my case.  Nor do I rely solely on peer pressure, though that's related too.  I make reasoned arguments that somehow seem to have an effect on people's moral intuitions going forward.  Occasionally I even make a moral argument to myself that changes my own intuitions, though I'll grant the author that that isn't very frequent.

Reflective Equilibrium

The problem with being guided solely by one's intuitions is that they're inconsistent.  I might feel in my gut that if someone makes a mistake calculating the change from a purchase and I get an extra dollar, that's entirely fair.  I might also feel that if they make a mistake and I lose a dollar, that's intrinsically unfair.  And I might feel that hypocrisy is wrong, and even feel ashamed of my own hypocrisy.  So how should I act?

 Let's say that my friend says something really offensive so I raise my sword and strike him down.  Then the next day when I'm calmer I feel very sorry about it.  Was I acting morally when I struck?  Was I even acting in accordance with my morality when I struck though I felt full of a righteous certainty at the time?

Defining terms is a tricky business.  People might argue for hours about whether a tree falling in the woods makes a sound.  If I define sound as sensing the world with my ears or as vibrations in the air then I've got a workable definition.  But if you choose to define sound as sensing the world about you with your eyes then you'll be running counter to other people's understanding of what sound is and you'll fail to communicate with them.  Likewise if you talk about morality only in terms of what people find intuitive without regard to what they find persuasive then you're not really talking about what everybody else means when they discuss morality.

Haidt discusses research that shows that even young children can have moral intuitions the same as adults can.  But I still think that most of us would agree that there are ways in which young children are less moral than adults and that efforts to teach them to share, for instance, are meaningful moral instruction.  I don't want to say how much instruction versus experience versus reason promotes the growth of morality because I don't know.  But I am sure that each makes some sort of contribution.

Darwin and Society

Haidt spends a lot of the book arguing that liberals should embrace a broader conception of morality.   He says that using just Care and Fairness is like cooking with only salt and sugar and that you need more flavors to make a tasty dish.  He describes how many of the moral philosophers who constructed theories based on just a single foundation were autistic.  He says that the reason that the Democrats never win elections is that they only pay attention to those two foundations and that to win they'll have to embrace all five.  But then he somehow claims he "has been entirely descriptive until now" and launches into his real argument.

He outlines how societies that stick together have certain advantages over societies that suffer from people defecting from common norms all the time.  There's some data about how, for instance, religious people give more to charity and some anecdotes about orthodox Jewish diamond merchants.  And there's a lot of woolly speculation on gene and culture evolving together and so forth.  And finally, because Europe has a birth rate below replacement, we have to stop being so individualistic and embrace more collectivism.

I think it's true that social cohesion is underappreciated as a force by liberals but I have two responses to that line of argument in general.

First, while it's true that more collectivist countries like, e.g., Pakistan have populations that are increasing faster than Germany's, there are exceptions.  China is also generally very collectivist but has a birth rate below replacement.  The US is generally more individualistic than Europe but has an increasing population.  It looks like birth rate has more to do with a country's wealth than with its social cohesion.  But if we're imagining a world of Darwinistic competition between groups, wealth brings power, and being less powerful is surely a dangerous strategy even if it lets you have more babies.  It looks like despite all the advantages conformity can bring, individualism brings advantages too in the realm of wealth and science, and we can't just evaluate the benefits of one without taking account of the benefits of the other.

For my second point I'll invoke Hume who Haidt praises many times in the book.  An 'ought' cannot be derived only from an 'is'.  Even granting that having a different sense of morality would let us be more successful in terms of being more powerful or having more reproductive success it doesn't follow that that is the right thing to do.  You can value many things besides having lots of babies.  Happiness.  Military power.  Science.  Art.  Helping other groups.

Anybody who commits murder or rape in order to have more babies is a monster.  I wouldn't do that.  You wouldn't do that either.  In some sense my genes would 'want' me to do that, but I'm not obligated to care what they think.  If you're inclined to reduce all morality to a single theory then that's one you could embrace, but I think that the utilitarians and deontologists have much nicer unified theories of morality, even if Haidt criticizes their reductive impulse as autistic.  They still beat social Darwinism.

Sunday, April 16, 2017

The Coming Interregnum after Moore's Law

An interregnum is a gap in governance, most commonly when a monarch dies without a child old enough to take over.  For decades the world has grown used to the idea that computers would get better and better year by year as engineers were able to fit more and more transistors onto a piece of silicon economically in a process described by Moore's Law.  Sadly, that law looks to be running out of steam.  So we're going to have to go through an interregnum while people discover some other substrate for computation that can be pushed further than silicon transistors could.

Transistors have come a long way since they were first conceived in 1926.  They couldn't be built by the people who conceived them, but in 1947 Shockley and his colleagues figured out how to make a practical transistor for use in various electronics such as radios, and then in 1954 the first transistorized computer was developed: TRADIC.  It was an amazing device at the time because computers were usually room sized whereas TRADIC was only the size of a refrigerator.  It also consumed only 100 Watts of power instead of many kilowatts and was nearly as fast as the best computers built with the then standard vacuum tubes.  Sadly, transistors were still much more expensive than vacuum tubes.

Then, in 1959, people built the first silicon chip with more than one transistor on it.  People started putting more and more, smaller and smaller, transistors on pieces of silicon.  In 1965 Gordon Moore noticed what was happening and predicted that by 1975 people could fit over 65,000 transistors on a single chip.  Sure enough, the size of transistors continued to shrink exponentially, and in 1975 people were starting to talk about "Moore's Law."

And ever since then the number of transistors you can fit on a chip has doubled more or less every 18 months.  For a very long time, until 2005 or so, shrinking transistors also brought faster clock speeds.  The amount that transistors leak is governed by the voltage used in them and the size of the gate, and until we hit 90 nanometers in 2005 the amount that transistors leaked was tiny compared to the amount of power required to flip them from a 0 to a 1, so everybody left the voltage the same.  Ever since then we've had to worry about leakage currents much bigger than switching currents, and so we've shrunk voltage at the cost of no longer increasing clock speeds.  A piece of silicon can only dissipate so much heat per square centimeter.
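The heat constraint in that last sentence is easy to see from the usual approximation for switching power, which scales with capacitance, the square of the voltage, and the clock frequency.  Here's a minimal sketch with made-up illustrative numbers (the transistor count, capacitance, activity factor, voltage, and frequency below are assumptions for illustration, not figures for any real chip):

```python
# Rough CMOS switching-power model: P = alpha * n * C * V^2 * f
# All numbers below are illustrative assumptions, not measurements of a real chip.
n     = 1e9     # transistors on the chip
alpha = 0.1     # fraction switching on a given cycle
C     = 1e-15   # effective switched capacitance per transistor, farads
f     = 3e9     # clock frequency, Hz
V     = 1.0     # supply voltage, volts

power = alpha * n * C * V**2 * f
print(f"Switching power at 1.0 V: {power:.0f} W")        # ~300 W, more than you can cool

# Dropping the voltage 40% cuts switching power by almost two thirds, which is
# why voltage (and with it clock speed) got sacrificed to stay inside the
# chip's heat budget once leakage stopped the old scaling.
print(f"Switching power at 0.6 V: {alpha * n * C * 0.6**2 * f:.0f} W")   # ~108 W
```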

And now the shrinkage of the transistors themselves looks like it will begin failing.  Here's a Nature article from last year predicting its demise.  Here's Sophie Wilson, the genius behind the original ARM processor, saying she doesn't think there's that much time left.  And now Intel has repeatedly delayed moving off of 14 nanometers, with constantly slipping deadlines for Cannonlake, its first 10 nanometer chip.  The end isn't here yet, but it looks like it'll certainly arrive by 2025.

Thankfully there's good reason to believe that this isn't anything like the end of progress in computation.  For a long time steam engines became more efficient periodically but eventually that stopped because there's a fundamental limit to how efficient an engine can be bounded by an ideal called the Carnot cycle.  When engines got close to that bound progress slowed down.

Luckily there are firm physical limits that we understand to how efficiently you can perform computation, and we still have a ways to go.  Current gates in high end silicon take around an attojoule to perform a simple 'and' or 'or' computation, but Landauer's Principle says that it's possible to do it for 2.75 zeptojoules, hundreds of times less.  And by reducing the operating temperature we should be able to do better still.  Regarding speed there's another limit, Bremermann's, that we're even further from.
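For reference, the Landauer limit is just k·T·ln 2, so it's easy to check where that zeptojoule figure comes from (a minimal sketch; the temperature is the only input):

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T   = 290            # roughly room temperature, kelvin

landauer = k_B * T * math.log(2)   # minimum energy to erase one bit
print(f"Landauer limit at {T} K: {landauer:.2e} J")   # ~2.8e-21 J, i.e. ~2.8 zeptojoules

# The limit scales linearly with temperature, which is why running colder helps.
print(f"At 77 K (liquid nitrogen): {k_B * 77 * math.log(2):.2e} J")
```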

So what could take the place of Moore's law?  Off the top of my head different substrates such as diamond or carbon nanotubes.  Computation through magnetic spin.  Computation through photons.  Quantum computers.   Ballistic electrons.  Nano-mechanical systems.  Nano-electro-mechanical systems.  Pure chemical systems like DNA computing.  There are quite a few options and all are far from being ready for commercial use.  Still, there's no reason to think that we won't be able to make one of them work eventually.  In the mean time maybe we'll face stagnation or maybe we'll have a golden age of computer architecture where we learn to do more with the transistors we have.  Only time will tell.

Tuesday, April 4, 2017

Freedom of contract requires the government

There's a very real sense in which any sort of free market you could have can only exist when it is created by a government.  Without government there might be no third party to prevent me from selling my goods to the person over the hill, but at the same time there's nothing preventing that person from just taking my goods and not paying me if he feels that he has a greater capacity for violence.  A world without government doesn't look like the "war of all against all" that Hobbes described in Leviathan, since people still have their senses of friendship and kinship, but those only extend so far.  Trading without the arm of the state to preclude violence takes trust that has to be built up over long periods of time and tends to occur in very stereotyped forms.  You can't run a modern multi-stage supply chain or anything like it without the umbrella of a state unless, against all odds, Nozick's ideas about capitalist anarchy actually work out in practice.

But beyond protecting us and our property from violence governments do something else to enable markets which I think is commonly underappreciated.  They're willing to enforce contracts.

Not all contracts of course.  You can't sell yourself into slavery, to take just one example.  But in general if you and another person come to a formal agreement then if one of you breaks it you can go to the government and it will use all its vast power to make sure the breaker either fulfills their obligations or pays up.  That's a lot of bother on the government's part but it's proven absolutely vital in the development of commercial economies and in the end led to success for the governments which were willing to go to that trouble.

Let's say that I'm really good at making widgets.  And this guy I know knows who wants to buy widgets.  Ideally we'd go into business together and form a widget company.  But if I could later learn from our business who was buying the widgets, I could just drop my partner and sell to them directly.  Knowing this, that guy might never go into business with me, or might have to resort to complicated and inefficient means to prevent me from knowing who the buyers are.  So the ability to create a contract between us preventing this can make us both better off.

Or look at capital.  When I was reading Medieval Machines I was struck by how many of the milling developments in the high medieval era were the result of a bunch of people pooling their money into joint ventures, like damming a river to use water power to grind grain.  Without prior and enforceable agreements as to who could use what, the development of large mechanical projects in Europe might not have gotten as far as it did as quickly as it did.

That the industrial revolution happened in Europe rather than China presents a bit of a puzzle.  China as a whole wasn't as wealthy as England or as high-wage, but the Yangtze delta was, and that was an area as large as England.  Markets were about as free in China on average as they were in England.  The people were as educated, though you can quibble about different emphases.  Coal, specifically coal near population centers, was certainly a difference, I'll grant.  But another important difference was that the English bureaucracy wasn't nearly as selective as the Chinese one.

The idea of selecting officials by written examination was introduced early in Chinese history, and it worked really, amazingly well, allowing China to amass a unified state far larger than its neighbors and create far greater prosperity as well.  But the very selectiveness of China's meritocracy ended up being a problem, because by the early modern era France and England had expanded their bureaucracies to have roughly ten times as many officials per capita as Russia or China did.  The quality might not have been as high as in China, but greater numbers meant more attention to more things, and part of that was the enforcement of commercial law between merchants.

In theory Chinese merchants could have contracts but in practice the courts were busy and couldn't be bothered.  There were stories of merchants pretending that a murder had occurred in order to get a judge to see them so they could ask the judge to adjudicate a commercial dispute.  And I wonder if that, along with China's coal deposits being far away from its developed areas, can explain why the industrial revolution happened in Europe.

In the modern day we still see that different people have different access to commercial law, though basically every country cares about it at least in theory.  Hernando de Soto wrote a book, The Mystery of Capital, on how in many third world countries capitalism exists for the rich but not for the poor, who have no title to their property and no real access to the court system.  The elites have real contracts but the poor don't, and as a consequence inequality is redoubled.

One heartening trend in recent years is the introduction of widespread biometric identification in India.  This is still a long way from all the tools of capitalism being widely available, but legal identity is the first step and is an encouraging sign.

UPDATE:  And this is the sort of thing that identity provided by biometrics could hopefully help with.  Enough people have been declared legally dead and their land seized that there's a support group for them.

Thursday, March 30, 2017

Some recent books on consciousness

Recently I finished reading, well, started, three books in a row that were about consciousness.  Which, of course, is quite enough to do a blog post.

The first was Consciousness and the Brain by Stanislas Dehaene.  It was excellent.  Often when we talk about consciousness philosophically we get lost in depths of abstraction.  This was about consciousness as a scientifically observable phenomenon.  How to tell if someone is conscious of something?  Ask them if they saw it.  People are conscious of things when they notice them but not when they're asleep or not paying attention to them or in various other circumstances.  Insects can't report what they see so we'll get back to the problem of insect consciousness later.

It turns out there's a lot of investigation you can do within that framework that's still very interesting.  And all the philosophical debate about whether qualia are separable from observations is neatly sidestepped for now.

Investigations you can do start out with subliminal messages.  If you see a word or phrase for long enough you become aware of it, but it has to be present for more than roughly 50 milliseconds for that to happen.  And we can look at a brain with various imaging technologies and see the difference in its reaction between seeing a number for 40 milliseconds and for 80; the difference is apparently very obvious.

The author goes on to talk about how much processing the brain can do on input before it becomes conscious, turning written words into meaning for instance but not parsing entire sentences.  And also that while subconscious cues can influence your immediate behavior their effects fall off rapidly and disappear entirely after less than 2 seconds.  This applies even to the most basic of functions like Pavlovian conditioning.  If two stimuli are present at the same time and subconscious then conditioning can occur but if the stimuli are separated in time then they have to rise to conscious awareness for conditioning to occur.  So consciousness is entirely prior to memory, something I hadn't known at all.

Next up was Stealing Fire: How Silicon Valley, the Navy SEALs, and Maverick Scientists Are Revolutionizing the Way We Live and Work by Steven Kotler.  This is the one I didn't manage to finish.  The idea that altered states of consciousness can be useful is one that I was interested in but the book ended up being blindly and shallowly enthusiastic about the concept in a way that I thought wasn't really teaching me anything I could rely on.  The author continued to just give examples in which altered consciousness could be cool without ever touching the limitations or potential drawbacks.  In Consciousness and the Brain for instance Dehaene talked about the power of sleeping on a problem and how your subconscious might come up with an answer for you, but also the need to think it over carefully consciously first and the need to double check the answer consciously later since intuition isn't always reliable.  Stealing Fire just talked about how subconscious processing was really cool and powerful without talking about what had to happen before and after.  It also didn't really have any clear idea of what it meant by the word consciousness and conflated altered consciousness and unconsciousness in its examples.

Finally, there was Other Minds: The Octopus, the Sea, and the Deep Origins of Consciousness by Peter Godfrey-Smith.  I enjoyed that book but I wasn't blown away like I was by Consciousness and the Brain.  Octopuses and other cephalopods are cool animals which diverged evolutionarily from humans back when nervous systems were measured in the hundreds of neurons.  Yet somehow these creatures evolved a very sophisticated intelligence entirely independently.  Much of the book was taken up by describing octopuses, squids, and cuttlefish - their physiology and psychology - and I found that bit very interesting.  The author also shared some ideas about consciousness which I thought weren't very interesting, but that was a small part of the book.  The main argument was that cephalopods can't have a reflective consciousness like we do because a human can hear themselves talk but a squid can't see itself change color to communicate.  Aside from the obvious objection that congenitally deaf people seem to have reflective consciousness, the author relies a lot on introspection to formulate his idea, and introspection is notoriously unreliable.  I'm sure the author is correct when he says he always thinks in words, but people differ in how they think.  I know that I often go to explain an idea and get to a part that seems like it should be a single word, but then as I unpack it mentally it turns into a sentence and then a paragraph.  Still, the way an octopus can change color to match its surroundings without being able to see color itself was very interesting, and overall I'm glad I read the book.

Wednesday, March 8, 2017

Blue Origin's New Glenn rocket

This is an exciting time to be someone interested in spaceflight.  Not as exciting as the original space race, of course, but hopefully we'll get close within the coming decade.  SpaceX has been making a lot of news with its landings of the first stages of its rockets after using them and its big plans for future Mars missions and sending people around the moon.

The United Launch Alliance, which handles most of the government's launches and has a very good reliability record, has also announced some fairly ambitious plans involving the development of the space near Earth and its next generation upper stage.  But probably the most exciting competitor to SpaceX right now (besides NASA) is Blue Origin.

SpaceX has its Falcon 9, which is launching satellites now, and its more ambitious and out there ITS.  Blue Origin has until now had its New Shepard rocket for shuttling tourists up to space briefly, but recently it's been talking more about the New Glenn, a vehicle somewhere in ambition between SpaceX's existing and imagined rockets.  You can watch the video Blue put together about the New Glenn here.

This is a rather large rocket.  Not quite Saturn V large, as you can see here, but close.
There are apparently two different configurations it can fly in, one with a third stage and one without.  The two stage version is the one they made the video of and the one they provided some more details for recently.  It's not much taller than a Falcon 9 but is much thicker, at 7 meters in diameter to the Falcon 9's 3.7 meters.  The Falcon has to be long and thin so that SpaceX can ship it by truck all the way from California, where it is made, to Florida where it is launched.  Blue Origin has invested in a big factory in Florida near their launch pad, so that isn't as much of a concern for them.

There are still a number of unknowns for the 2 stage New Glenn, but we know how large a payload it can send into low Earth orbit, 45 tons, and how much it can send on its way toward geostationary orbit, 13 tons.  This compares with 22.8 and 8.3 tons for the Falcon 9.  It's interesting that the numbers are so far apart for the New Glenn compared to the Falcon 9, but there's a reason for that.

Remember from back when I blogged about rocket performance that how much a rocket can accelerate depends on how much of the combined rocket/payload mass is fuel.  The New Glenn seems to have a fairly heavy, powerful rocket engine on its second stage.  That means that it can lift a heavy payload into orbit quickly, before gravity has had time to cause too much in the way of a slowdown.  But it also means that the total non-payload mass of the second stage is higher, so even if the payload goes down to something fairly small, the engine and other fixed weights will still prevent the overall mass ratio from getting too high.  Hence the stage will have a hard time getting a reasonable payload into higher energy orbits.
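Here's a rough sketch of that effect using the rocket equation.  All the stage numbers below are made up for illustration - Blue Origin hasn't published this level of detail - so treat it as a toy model rather than a description of the real vehicle:

```python
import math

def stage_delta_v(propellant_t, dry_t, payload_t, v_exhaust_kms):
    """Delta-v a single stage can give its payload, by the rocket equation."""
    wet = propellant_t + dry_t + payload_t
    dry = dry_t + payload_t
    return v_exhaust_kms * math.log(wet / dry)

v_e = 4.4  # km/s, a typical vacuum exhaust velocity for a hydrogen upper stage

# Hypothetical upper stage with a heavy engine section (10 t of dry mass):
for payload in (45, 13, 2):
    dv = stage_delta_v(propellant_t=60, dry_t=10, payload_t=payload, v_exhaust_kms=v_e)
    print(f"heavy stage, {payload:>2} t payload: {dv:.1f} km/s")

# Same propellant load with a lighter engine section (5 t of dry mass):
for payload in (45, 13, 2):
    dv = stage_delta_v(propellant_t=60, dry_t=5, payload_t=payload, v_exhaust_kms=v_e)
    print(f"light stage, {payload:>2} t payload: {dv:.1f} km/s")
```

Even with almost no payload, the heavy version tops out a couple of km/s below the light one, which is the point about fixed stage weight capping performance into higher energy orbits.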

I suppose that has to do with the optional third stage.  If you're going to have a third stage you want your second stage to burn its fuel fairly quickly with a high power engine so that the third stage can get to its business.  I'd imagine that adding the third stage would add a bit to the low Earth orbit payload of one of these rockets but would add far more to how much it can send toward geostationary orbit.

The other big difference I noticed, besides sizes, between New Glenn and Falcon 9 was how the rockets are supposed to land.  SpaceX likes to land the Falcon 9 back on land when possible but also has a barge that floats out in the sea for the rocket to land on if the Falcon 9 doesn't have enough extra fuel after getting its payload to space to send itself all the way back to the pad it took off from.  SpaceX is clearly thinking about pad landings as what they want to do in the future in order to quickly refurbish the rocket and launch again.

For New Glenn, landing at sea seems to be the plan, with a return to the launch site being something that they're not really considering seriously.  You can see this in the ship that they're using.  Rather than a cheap barge they seem, in the video, to be planning to use a second-hand tanker with a big landing pad fitted onto the top.  The side-thrusters on a tanker aren't very powerful, so it would have to be moving forward to keep good control of its orientation and provide a stable landing site, but overall it can give a much bigger target for the rocket to land on, and in heavier seas too.  Also, it should be able to sail back to port much more quickly than SpaceX's barge can.  On the other hand it will probably be a lot more expensive to operate.

So Blue Origin is investing in a bigger, more powerful ship to make landing at sea easier and SpaceX is mostly thinking about returning to land with sea landings something of an afterthought.  I suppose time will tell which is a better strategy.

Thursday, February 9, 2017

Hydrogen versus gas in cars

For a long time people have talked about the idea of using hydrogen to fuel our transportation systems.  There are some obvious advantages to this.  When you burn hydrogen you don't release any CO2; the hydrogen combines with oxygen to form simple water.  Batteries like you'd find in an electric car also store and release energy without emitting CO2, but they don't hold as much energy as the equivalent weight of hydrogen.  Hydrogen theoretically stores 40,000 Wh/kg whereas even a good battery like the one used in a Tesla only stores 100 Wh/kg.  The theoretical values aren't the whole story, since converting hydrogen to motive force is less efficient than converting charge in a battery.  And you also need tanks to store the hydrogen, which I'll come back to later.  But those factors don't overcome the magnitude of the difference, and long range hydrogen cars are feasible in a way that long range electric cars aren't.

On the other hand hydrogen has some big problems compared to batteries or other fuels.  At room temperature and pressure hydrogen only nets you 3 Wh/L, compared to 9,500 Wh/L for gasoline or roughly 500 Wh/L for lithium ion batteries (I couldn't find Tesla's exact number).  To make hydrogen feasible you have to compress it a lot.  According to the ideal gas law you'd need to keep your hydrogen at about 450 bars to get up to a good-for-long-range 1,250 Wh/L, but Wikipedia tells me it's really 690, presumably because hydrogen stops behaving like an ideal gas at those pressures.  690 bars is a lot, requiring a very heavy pressure vessel if it's going to be robust to car crashes.  For a 10 L, 0.3 kg tank of hydrogen the back of my envelope tells me you'd need something like 100 kg of steel.  I have no idea if that's actually accurate, but it suggests that hydrogen's range advantage over batteries isn't so very great.
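A quick sketch of where those pressures come from, using just the ideal gas scaling (which, as noted, understates the real pressure you'd need):

```python
ambient_density = 3.0      # Wh/L of hydrogen at roughly 1 bar and room temperature
target_density  = 1250.0   # Wh/L, enough for a long-range car-sized tank

# Under the ideal gas law, energy per liter scales linearly with pressure.
ideal_pressure = target_density / ambient_density
print(f"Ideal-gas pressure needed: ~{ideal_pressure:.0f} bar")
# ~420 bar, the same ballpark as the ~450 quoted above (the 3 Wh/L figure is rounded).

# Real hydrogen compresses worse than an ideal gas at these pressures (its
# compressibility factor is well above 1), which is how you end up needing
# roughly 690 bar in practice.
```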

Let's say you've got a bunch of the hydrogen you were going to put into your car.  There are other things you can do with it instead.  Something that's been used industrially for over a hundred years is the Sabatier reaction, turning hydrogen and carbon dioxide into methane.  Methane is also a gas, like hydrogen, but it's a far denser one, cutting down on the pressure needed to fit a tank of it into a car.  Some energy is lost in the conversion, but only around 25%.  And that's just the 100 year old technology.  Turning hydrogen and carbon dioxide into methanol, ethanol, or conventional hydrocarbons is something that currently exists in pilot plants.  More development would be needed to make this practical at the scale of our current gasoline production, but it would be far, far easier than converting all our cars, infrastructure, etc. to run on hydrogen.
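For the curious, here's a rough check of that "around 25%" figure from the reaction's stoichiometry and standard heats of combustion (a sketch; real Sabatier plants have additional losses from heat management and compression):

```python
# Sabatier reaction: CO2 + 4 H2 -> CH4 + 2 H2O
# Higher heating values, kJ per mol (standard textbook figures, rounded):
hhv_h2  = 286.0   # hydrogen
hhv_ch4 = 891.0   # methane

energy_in  = 4 * hhv_h2    # chemical energy of the hydrogen consumed
energy_out = hhv_ch4       # chemical energy of the methane produced

loss = 1 - energy_out / energy_in
print(f"Energy lost in conversion: {loss:.0%}")   # ~22%, close to the ~25% quoted above
```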

But of course this scheme is currently missing one of the chief advantages that both electric and hydrogen cars enjoy: the federal subsidy for zero-emission vehicles.  A vehicle running on synthetic gas might absorb a gram of CO2 from the air for every gram that comes out its tailpipe, but there's still CO2 coming out of its tailpipe.  So legally it's not in the right category to benefit from existing government subsidies.  Maybe if this caught on the government would do something, who knows.  But really this sort of issue is why I much prefer a carbon tax or some other sort of universal and evenhanded program over our current mostly ad hoc system for trying to reduce carbon emissions.  Because really, nobody knows how many other situations like this there are where a fixation on details distracts us from the problem of reducing the amount of carbon dioxide in the air.

Monday, January 23, 2017

Collected book reviews

At Bill Gates's recommendation I recently read The Grid by Gretchen Bakke.  There's been a lot of talk recently about the US having an infrastructure problem, but mostly it's in the context of our roads and bridges.  Our electrical grid is also very important for modern life, and impending changes to electricity generation look to increase the strain on it.  The average cost of solar electricity has been plunging, but the demand for electricity doesn't go down when a cloud passes over a solar farm.  We really want to be spreading out fluctuations in generation between far-flung power plants, but to do that we need to build new, better power lines.  There's also a good history of early electrical innovation, though I was already familiar with much of it.

Donald Trump's election finally prompted me to get around to reading The Revolt of the Public by Martin Gurri.  He wrote about how social media has made the failures of expert opinion more apparent to the public, given the public the confidence to challenge the rule of existing experts in many areas, and ignited political protests around the world.  Before reading this book I hadn't ever stepped back and taken stock of the wide variety of protests that have taken place in areas as diverse as Tunis, Spain, and the US.  I suppose I'd liken myself to a frog who didn't realize the water it's in is heating up.

As people's political self-efficacy, to use a phrase from my excellent high school civics textbook, grows, they protest mistakes but don't have any unified theory of what to change specifically - just the notion that it must be possible to do better.  Apart from protests this means a lot of politicians who are outsiders, or who can present themselves as outsiders, get elected.  Obama painted himself as an outsider.  Trump did too.  I can only hope that the Democrats get some movie star or other for 2020?  In any event I guess this makes the Greek referendum I was confused about make more sense.  The people conducting it were newly in power, so the voters trusted them.  But the forces shaping political results will mostly continue as they are, so I fear we're all going to keep being disappointed.  And I should really write a post at some point on veto points and legitimacy.

Another book I finished recently was The Gene: An Intimate History by Siddhartha Mukherjee, also recommended by Bill Gates.  I found the start a bit dull as I was already familiar with the story of Darwin and Mendel.  But as we got into the 20th century I was fascinated by all the techniques used to extend our knowledge of genetics and all the stories I hadn't heard before.  I wasn't particularly impressed by his analysis of the implications of genetic technologies.  For instance, in one place he implicitly seemed to assume abortion is murder, but in another place he implicitly assumed it isn't, apparently without noticing.  In his description of the first trial of a genetic therapy he also skimmed over what seemed, to me, to be the most perverse logic I've ever heard of outside a Kafka novel.  The idea was that if researchers offered experimental medicine to parents of children who were doomed to die in pain of a rare genetic condition, then the parents would feel they had no choice but to take part in trials of the therapy.  That would violate their consent.  Therefore, only people with a non-life-threatening variant of the disease could take part in the dangerous clinical trial.
