Tuesday, January 23, 2018

Book review: The Wizard and the Prophet

I just recently finished The Wizard and the Prophet by Charles C. Mann.  He'd previously written a book about the Columbian exchange I'd really liked, 1493, so I was ready to like this book too.

It concerns the dueling ideals of two men regarding man's relationship with the environment.  The prophet of the title, William Vogt, believed that the world has a finite carrying capacity that humans had to respect and that we had to limit ourselves to what the Earth could sustain.  The wizard, Norman Borlaug, worked tirelessly to increase the yields of the crops that man depends on and allowed large new generations of people to grow up without the famine that had plagued their parents.

Going through the book, Mann does an admirable job of looking at the lives of each: their successes and failures and the events that led them to be the people they were.  And the book makes a valiant effort to portray both fairly though, as you might expect, I end up sympathizing with the wizards more than the prophets.

I do worry, though, that it's the third position Mann introduces that is the correct one.  Vogt believes that mankind must constrain its reproduction and stop consuming as much.  Borlaug believes that mankind must learn to better use the environment to support ever more people.  Lynn Margulis believes that it would be unprecedented for mankind to do either of these, so we should expect overpopulation and die-off in the future.

My first reaction was "Wait, is this the same Lynn Margulis who..." and yes, it was.  She had argued that symbiosis rather than competition was the primary force in the evolution of our cells.  It was entirely true that mitochondria and chloroplasts were once independent bacteria that came to live inside eukaryotic cells.  It was untrue that the flagellum or the other organelles of the cell had also originated as symbiotes.

We are lucky that affluence has reduced our desire to have many children.  Yet there are those who desire many children even in affluence, and there's no reason to think that this desire isn't at least partially heritable.  We may stem this, for a time, with violence, but the will to violence fades.  We may race ahead of necessity in terms of our civilization's ability to provide sustenance.  Yet the sun only puts out so much energy.  There are limits to the computation cycles that can be extracted from a unit of energy.  And expanding at the speed of light, resources grow as the cube of time while demand grows exponentially, and an exponential must always beat a polynomial in the end.
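The race between cubically growing resources and exponentially growing demand can be sketched numerically.  The growth rate and resource scale below are arbitrary illustrative numbers, not demographic or astronomical estimates; the point is only that the crossover always arrives, whatever the constants.

```python
# Sketch: resources reachable at light speed grow as t**3, while a
# population growing at a fixed rate grows as (1 + r)**t.  However
# generous the cubic term's constant, the exponential passes it.
def crossover_time(growth_rate=0.01, resource_scale=1e12):
    """Return the first year t where exponential demand exceeds
    cubic resources.  Both parameters are illustrative only."""
    t = 1
    while (1 + growth_rate) ** t <= resource_scale * t ** 3:
        t += 1
    return t
```

Even with a trillion-fold head start for the cubic side, a 1% growth rate crosses over after a few thousand years, and a faster growth rate crosses over sooner.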

I'm closer to an average than a sum utilitarian so I can swallow this repugnant conclusion, even if I don't want to.

Tuesday, December 19, 2017

Charitable Giving in 2017

I don't know if I've mentioned it on this blog but a while ago I took the Giving What We Can pledge to donate 10% of my pre-tax income every year to efficient charities.

In general I'd recommend following Givewell's recommendations fairly closely.  In past years I've strayed beyond them a bit, in particular giving a bit extra to charities that provide micronutrients in poor countries on the theory that allowing kids to grow up healthier is investing in the future and maybe I ought to prioritize that over saving as many lives as I can right now.

On the other hand, Givewell's top charity is the Against Malaria Foundation and we, that is humanity in general, have been making huge strides against malaria recently.   It isn't an impossible project to eliminate it in the wild.  I'm too young to have helped with the elimination of smallpox but this would be another ancient enemy laid to rest.  The Against Malaria Foundation is a finite entity and might not be able to absorb all the money that everybody donating through Givewell can muster.  But if they can't, Givewell will trickle the money down to other, nearly as important, charities that can use it, so I gave the money to them to use as they see fit.

If I were emperor of the world with all resources at my command I would be careful to distribute my money through a variety of causes and means in case one was mistaken and independent analysis could show that.  But as just one person among many I feel comfortable contributing to the one organization I trust the most.

In previous years I've tended to double up my giving by year: donating in January of, say, '08, then again in December of '08, then January of '10, then December of '10.  I give roughly as much as I intend per year, but I get to double up my charitable deduction while in off years I take the standard deduction on my income tax.  This year, with the new tax bill passing, I'm not confident this will keep working for me, so I figure I'll do the simple thing and just donate for the year I've saved in, before my bracket and standard deduction change.

Also, while I may think that giving to the most efficient charity I can find is the best use of my money overall I still feel bad when someone asks me for money on the streets of Cambridge and I don't give them any.  In theory I wouldn't mind them having a few dollars from my pocket but I also feel bad if I give money to the people who are willing to stand out in public all day asking for it while not giving any money to the people who don't.  To assuage all my feelings of guilt I donate $5 to the Greater Boston Food Bank whenever anybody asks me for money on the street, which came to $210 this year.

Sunday, December 10, 2017

Tax Rates and Growth

People trying to justify the recent Republican tax plan often talk about the importance of long run economic growth.  And you can see how, if that were true, it could be a really important argument.  The difference between 2% and 3% annual growth over a hundred years would be a factor of two and a half.  If that sort of change were really possible it would justify quite a bit.  Even if all that extra growth went to rich people then even at the new, lower, tax rates that would be much more tax money available for social programs.
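The compounding arithmetic behind that factor of two and a half is easy to check:

```python
# Compound growth: output after n years at annual rate r is (1 + r)**n.
def growth_factor(rate, years):
    return (1 + rate) ** years

# A century of 3% growth versus a century of 2% growth.
ratio = growth_factor(0.03, 100) / growth_factor(0.02, 100)
# ratio is roughly 2.65 -- the "factor of two and a half" in the text.
```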

Sadly there's no way cutting taxes could have such a large effect in the US.

The theory behind the cuts is that people become more productive when they've got more or better machines.  Machines cost money, so leaving businesses more money to buy machines will make them more productive.  More machines leads to more money leads to more machines in a virtuous cycle.  Except that in a developed economy it's often very hard to figure out how to usefully add more machines.

At work I mostly use a computer.  Giving me a second computer might increase how much I get done by a bit, but it wouldn't increase it very much.  Diminishing returns are a fact of life and finding ways to usefully spend money increasing productivity is hard.  Some people work at companies where they have to make do with 20-year-old computers and it's quite possible that they would work faster if they could upgrade.  But that's not typical in the US.  It does exist, and there are cases where extra money can fund research that yields better computers for everyone.  This is why we try to slant the tax code to favor investment over consumption.  But again, expecting lower taxes to increase growth by a large sustained amount is wrong.

There are cases, not in the modern US, where it can play out like that.  If you happen to live in a country that's pretty poor and where most businesses don't have the latest technology then you could quite plausibly have huge growth rates based on money to machine to money virtuous cycles.  That's what's happening in China right now.  It also happened in Japan 50 years ago.  If there's some other country that's figured out how to build awesome machines you don't have yet then you might very well find yourself limited by how many of them you can buy - though you might also have other problems that are more pressing.

In this case what you really want to do, as a government interested in growth, is to increase the savings rate: how much people invest relative to how much they consume.

One strategy for this, pursued by prewar Japan, is to just tax poor people and give out the money as business loans.  Even today richer people save more of their income, and that was even more true when you had a population of peasants without access to banks.

The early 19th century US didn't have quite the bureaucratic sophistication of late 19th century Japan but managed to do something similar.  By introducing tariffs that raised the price of imported cloth and other goods they indirectly increased the profits of factory owners, allowing them to invest more by buying pirated copies of machines already in use in Britain.  It's much easier to apply taxes at single ports of entry on obvious things like ship arrivals than it is to do something like an income tax.

The early 20th century Soviet Union took this approach to new extremes.  Most new machinery in the early days was paid for by selling grain abroad.  The need to more efficiently expropriate grain was part of the reason for the drive to collectivize agriculture.  Big centralized farms are again much easier for the state to manage than lots of little spread out farms.  Sometimes there was mass starvation but the country industrialized rapidly.

You might have noticed that China's economy has been growing rapidly recently without a lot of mass starvation.  As far as I can tell the main difference was that they jump-started things with foreign investment.  These days the trillions in domestic savings drown out the $100 billion or so in foreign capital, but when the current boom was starting, money from Taiwan, Japan, etc. was crucial.  This seems like a far more humane way of jump-starting growth than the other methods above.


Wednesday, November 29, 2017

RISC-V is doing well

Back in 2010 some researchers at the University of California, Berkeley started work on an instruction set architecture (ISA) that would be both open for anybody to use and incorporate modern ideas.  All computers run by performing a series of operations, like loading a 16 bit value from memory, adding two 32 bit numbers together and returning the highest possible 32 bit value if the result can't be represented in 32 bits, or taking the cosine of a 32 bit number.  An ISA defines which operations are basic to the computer and which have to be assembled out of other instructions.  It also tells you how you represent your instructions as sequences of 1s and 0s in memory.  And it specifies various other things such as how memory accesses from different cores interact.
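That middle example is a saturating add.  Here's a toy model of it, assuming unsigned 32-bit values (a real ISA would pin down signedness, width, and encoding precisely); whether this is a single instruction or a sequence assembled from simpler ones is exactly the kind of choice an ISA makes.

```python
# Toy model of a saturating unsigned 32-bit add: on overflow, clamp
# to the largest representable value instead of wrapping around.
UINT32_MAX = 2**32 - 1

def saturating_add_u32(a, b):
    total = a + b
    return total if total <= UINT32_MAX else UINT32_MAX

saturating_add_u32(1, 2)             # 3
saturating_add_u32(UINT32_MAX, 5)    # clamps to UINT32_MAX
```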

You might have heard of the great RISC versus CISC wars of the 1980s.  For a long time it was very expensive to move data from memory to the computer core (there was only ever one back then) and back.  Transferring instructions is part of that and to save on instruction memory designers wanted to use as few instructions as possible and so made their instructions as powerful as they could.  And since programming back then was usually done by programmers writing individual instructions rather than using compilers this also meant less work for the programmer.

But as time went on the amount of data that computers worked on grew faster than the number of instructions they used, making code size merely important rather than critical.  Programmers started to use compilers more.  And some researchers at Berkeley and Stanford realized that there were a lot of advantages to using a less complex instruction set.  If you simplified your ISA you would have an easier time doing things like starting one instruction before the previous one had finished, because there were fewer complicated interactions.  Fewer instructions meant less work for designers.  And in the early 80s you could fit an entire RISC core on a single silicon chip rather than having to spread it across multiple chips.  That made it cheaper and faster.

A lot has changed since the 80s.  Some aspects of the RISC philosophy have fallen by the wayside but others are embraced by everyone designing a new ISA for general purpose computers.  And RISC-V is, of course, firmly in the RISC camp.

Other people have created ISAs that are open for anybody to use and free of patents, but none of them ever really took off.  I'm not familiar with them so I'm not going to speculate on why.  In contrast, RISC-V has gotten a lot of people interested.  There are a number of concrete processors adhering to the architecture that have been designed at Berkeley and other places and which have also been released for people to use freely, which may be part of it.

When I first heard of these efforts a couple of years ago I was impressed.  Back when I was doing my thesis I could see how an open chip design would have been useful for me to modify and try out my ideas.  Now that these designs were out there, free to modify and with working compilers and other software, lots of academics working on processor design were going to have a very powerful tool.  So RISC-V clearly had a bright future in academia.

In the outside world there were certain benefits.  RISC-V makes it very easy to add your own new instructions for any special purpose you might have.  So companies with special purposes in mind would have a reason to look at it.  I wasn't optimistic about a wider impact, though.

Well, it now looks like I was underestimating it.  At the seventh RISC-V Workshop yesterday Western Digital announced that they were moving to RISC-V for the microcontrollers in their hard drives which tell the drive head where to go, communicate back to the motherboard, etc.  That's potentially billions of RISC-V cores shipped in commercial products every year.

A while ago NVidia also announced that they were looking at RISC-V for microcontrollers orchestrating things in their graphics cards while the GPU cores did the computational heavy lifting.  They mentioned that the ability to add their own extra instructions was a big draw.

So that's some success in embedded microcontrollers.  That makes sense for people who want more customization or who don't want to pay licensing fees to, say, ARM.  A few days ago I certainly hadn't been expecting people to be seriously considering RISC-V for application cores running all sorts of different programs such as in a phone or laptop.  If you're receiving applications from third parties they can't make use of any special extra instructions you have so the RISC-V flexibility isn't a factor.  And nobody has created applications for RISC-V, though you can always compile existing code for it if you have access to the source.

Well, I still think that, but another of the talks at the Workshop was for a fairly hefty 4-core chip that would do pretty well inside a laptop.  I'm not sure anyone is going to put it there, but I'm sure people will be using it for servers, where you're running a narrower selection of software.  There's support for RISC-V being added to Linux, though it isn't fully supported yet.

The whole thing is moving faster outside of academia than I would have expected and I'm interested in seeing what the future brings.

Sunday, November 19, 2017

Expertise, the president versus congress

Since writing a post way back about the way complexity is a problem for Congress I've been happy to discover that the ideas in it aren't at all original and that these are the sort of things people write papers on.  Here's a good article on one recent paper.  I suppose I should have seen that coming.  Possibly I got the idea from somewhere initially and then forgot about reading it.

But anyways, figuring out what you need to know to write legislation is hard.  It would be cool if Congress had a big budget to hire outside experts, but instead they have to make do with listening to what lobbyists tell them and trying to decide which to believe.  Of course, there is one part of government that has a huge budget to hire people with specialist knowledge and which has tons of them on staff.  That is, the executive branch.

That's an angle on this whole situation I'd completely overlooked.  A president proposing legislation can use the Department of Education to draft school reform bills, use the EPA and Department of Energy to draft climate control legislation, etc.  People talk about the imperial presidency.  I expect that this is a pretty big factor in how we got that.

There's also some cause for hope here.  We got the Congressional Budget Office from Congress wanting to push back at having to rely on the White House when budgeting.  To quote Wikipedia:
Congress wanted to protect its power of the purse from the executive. The CBO was created "within the legislative branch to bolster Congress's budgetary understanding and ability to act. Lawmakers' aim was both technical and political: Generate a source of budgetary expertise to aid in writing annual budgets and lessen the legislature's reliance on the president's Office of Management and Budget."
What got me thinking about this power dynamic was watching the recent floundering of Congress on their health care plans and other matters.  Partially this is an issue of leadership, but part of the problem also seems to be that the executive is just not interested in the matter.  Or possibly that, with so many political appointments not yet completed, he's not able to help.

Wednesday, July 26, 2017

Rockets VII: Staging

See also parts I, II, III, IV, V, and VI.

Space is sort of hard to get to.  Say you've got one of the Space Shuttle Main Engines (SSMEs), a really efficient rocket engine which'll give you an exhaust velocity of 4.4 km/s in vacuum.  That's pretty efficient for burning stuff and about as good as we can do for a rocket that can take off from Earth.  But let's say we do want to take off from Earth.  Plugging that into the rocket equation, we see that you need a mass ratio of 8.5 to 1 to get the 9.4 km/s you need to reach orbit.  That's a problem when the big tank you need to hold the hydrogen for your rocket probably has a mass ratio of only 10 to 1 before you add in the engines or the shuttle and payload.  Thankfully the shuttle also had its boosters, which finished burning early; the big casings that contained all that solid fuel were then dropped into the ocean where they couldn't slow the rest of the shuttle down.
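That 8.5 to 1 figure comes straight from the Tsiolkovsky rocket equation, delta-v = v_e * ln(mass ratio), rearranged to solve for the mass ratio:

```python
import math

# Tsiolkovsky rocket equation: delta_v = v_e * ln(m_initial / m_final),
# so the mass ratio needed for a given delta-v is exp(delta_v / v_e).
def required_mass_ratio(delta_v_km_s, exhaust_velocity_km_s):
    return math.exp(delta_v_km_s / exhaust_velocity_km_s)

# An SSME-class engine (v_e ~ 4.4 km/s) reaching orbital speed (~9.4 km/s):
ratio = required_mass_ratio(9.4, 4.4)  # about 8.5 to 1
```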

Being able to drop heavy pieces of your rocket when you're part way to orbit has been a part of rocketry since the beginning.  Here's how it works in theory.  Let's say you have a rocket that's easy to build and which can carry 1/5 of its weight as payload.  But let's say it only has a delta-v of 5 km/s.  Well, that doesn't make it to orbit.  Ah, but we can make another one that's 5 times bigger and can carry the smaller rocket.   We launch the big one, get it up to 5 km/s, and it releases the small one, which gets its payload up to 10 km/s for a nice high orbit.  Our overall mass ratio is 25 to 1.  In theory a single rocket stage with a mass ratio of 25 to 1 would be just as good - but that's impossible.  The tanks you need for fuel limit you to 20 to 1.  Add in rocket engines powerful enough to lift the rocket against Earth's gravity and your mass ratio goes down further.  You need some sort of staging to get a chemical rocket to Earth orbit.
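The staging arithmetic above can be sketched directly: delta-v adds across stages while payload fractions multiply.  The stage numbers here are the hypothetical ones from the text, not any real vehicle's.

```python
import math

# Each stage contributes its delta-v; each stage's payload fraction
# multiplies into the overall payload fraction of the whole stack.
def stack(stages):
    """stages: list of (delta_v_km_s, payload_fraction) tuples,
    bottom stage first."""
    total_dv = sum(dv for dv, _ in stages)
    overall_fraction = math.prod(frac for _, frac in stages)
    return total_dv, overall_fraction

# Two identical hypothetical stages: 5 km/s each, carrying 1/5 of
# their weight as payload.
dv, frac = stack([(5.0, 0.2), (5.0, 0.2)])
# dv is 10.0 km/s; frac is 1/25, i.e. a 25-to-1 overall mass ratio.
```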

Early on people couldn't do that sort of theoretical, one rocket carried by another, staging.  When you start a rocket on the ground there's gravity pushing the fuel into the engine.  When you start a rocket on the ground and the engine just doesn't start you can just fix whatever's wrong and try again.  Neither of those is true for upper stages and to start with people didn't know how to deal with that.

What the Soviets and US did early on was like what the Space Shuttle later did.  They lit all their engines on the ground and had some high thrust bits that dropped off early while the rest of the rocket made it to orbit.  The Soviet rocket that launched Sputnik had essentially five identical rocket engines.   Four were attached to small tanks and one was attached to a big tank.  Under the combined thrust of the five, the rocket would be boosted up into the upper atmosphere quickly; then four of the engines would burn through their fuel and just fall away, while the last part had a big enough tank in relation to everything else that it eventually produced enough speed to get into orbit.

The US, with the Atlas rocket, did something similar.  But instead of having different tanks, all three engines were attached to a single fairly efficient tank.  The three engines were lit on the ground and lofted the rocket up to a high altitude where it would take a long time to come down.  Then the two booster engines would fall away and the remaining small and efficient sustainer engine would take its sweet time accelerating the rocket to orbit.

The US and Soviets started figuring out how to make true stages after that.  The Soviets put a second stage on top of the rocket with an open cage connecting it to the rest of the rocket.  Before the first stage burned out, while the acceleration was still forcing the fuel down into the engine, they lit the second stage off.  The US just used solid rockets, where the fuel doesn't need any force to keep it in place and there aren't any turbines to spin up.

Eventually both learned further techniques, like having little ullage motors produce just enough force to settle the fuel while a new stage was being lit, letting them use liquid rocket stages that were entirely sequential.  Upper stages can use engines designed to operate in vacuum, with the larger engine bells that let you direct your propellant better but which would cause problems if they had to fight against atmospheric pressure.  People talk about taking a single stage to orbit.  Elon Musk says that the first stage of his Falcon 9 could just barely make it to orbit if it didn't have to carry other stages or a payload.  But there's no reason to go to space unless you're taking something there.  Until we develop high efficiency rockets that also produce high thrust we'll have to continue to use staging to make it to space.

Tuesday, July 25, 2017

Adoption curves are often steeper than you think

When I was younger there were a lot of wondrous devices predicted in the science fiction books I read that I thought I might never get to use.  For instance, when I was reading some Tom Clancy book or other in high school, some government servant pulled out his pocket device that combined a cell phone, a GPS, and a PDA.  This was of course a super expensive device that was only available to top government officials and the very wealthy.  It was, of course, basically an iPhone but with fewer features.  There was another novel I read in college, Snow Crash, where the protagonist managed to get access to a piece of software normally reserved for the rich and powerful.  It was a 3D model of the Earth overlaid with satellite imagery that you could manipulate and zoom in on any location smoothly, plus a lot of extra information.  It was basically Google Earth except with a few more features.

Every technology has an adoption curve.  Once indoor plumbing was for the rich only but now it's illegal for even the poorest of us to try to save money by building a house without it, for valid public health reasons.  Once TVs, refrigerators, and all sorts of other things were available to the few but eventually ended up with mass adoption.  Here's a nice chart courtesy of The Atlantic:


It's not all smooth or even a constant march forward, but the trend is clear.  Are the slopes steeper more recently?  Maybe, or maybe that's just an artifact of which technologies the chart maker was aware of.

Whenever I'm in a discussion about some new technology someone always points out that it'll be just for the rich.  Often that's true at first.  But sometimes, as with the iPhone, it only makes sense to build it when a large swath of the moderately well off population can buy it.  And sometimes, as with Google Earth, it doesn't make sense to restrict who can use it.  But even if it does start off just for the rich, even the poor will get to use it eventually.  If the materials involved are cheap but the design is costly then it'll probably be adopted quickly.  If it's the other way around, adoption will likely be slower.  But it will probably reach wider use eventually, and we should only talk about the period where it is the preserve of the wealthy rather than assume that will be the whole of the future.
