Sunday, December 30, 2018

A new charity for 2018

I'd previously written about year-end charitable giving.  This year I'm giving money to a new organization, though I'm still giving a lot to GiveWell too.  The Alliance to Feed the Earth in Disasters, or ALLFED, is researching ways to rapidly scale up food production in case traditional agriculture suddenly becomes much less viable.  Say a super-volcano erupts, or a huge meteor strikes, or a nuclear winter happens, or something along those lines.  Humanity would have to get through years of reduced sunlight that would make growing plants very hard.

Crop failures would naturally lead to massive starvation, but ALLFED is looking into ways to turn biomass we might have lying around into edibles via routes such as fungus or methane-eating bacteria.  Food produced this way wouldn't be especially tasty, but it would hopefully be enough to keep people alive until the dust settles out of the atmosphere and normal farming can start back up.

The idea here is to research the technology needed ahead of time, with an emphasis on techniques that can be ramped up quickly in an emergency.  This is an area that nobody has really looked into before, so small donations can go a long way.  And while we should hope to never need any of this technology, this sort of insurance for our civilization isn't something that should be neglected.  So a portion of the 10% of my pre-tax income that goes to charity every year is going to ALLFED.

Also, I'm going to be moving to monthly charitable donations going forward, since I'm not going to be doing the tax-year shenanigans I was up to previously.  Charitable organizations say monthly giving makes it easier for them to plan, and it will make it easier for me to budget too.

Sunday, December 16, 2018

Ways of thinking and remembering names

So, imagine someone walking down a street.  They see the store they're going to and they enter through the shop door.  Now, when you were imagining this, did you see it?  Which direction was the person walking, relative to your mind's eye?  Did they turn to the left or right to enter the store?  What color was everything?

When someone first did this exercise with me, the person I imagined was walking away from me and turned to the left.  I could still see them inside the shop through the wall, because nothing in my visualization had any color.  Other people might imagine it more like a video, where things have color and you can't see people after they move behind walls.  And some people don't form mental images of the scene at all; here's a widely shared Facebook post by someone who was surprised to discover that other people actually did form mental images in their minds.  Francis Galton was the first person to study this, back in 1880.

A related topic is how people go about their thinking.  Some people will talk to themselves inside their head when they're thinking about a problem and other people won't.  According to this article, this is another property that varies quite a bit between individuals, with some people having no inner monologue and some people talking inside their head almost all the time.  For myself, I do have an inner monologue sometimes, but mostly only when I'm thinking of what I'm going to say - for example, I've done quite a bit of it in organizing this blog post.  Other than that I mostly just use single words inside my head, when I need the concept behind a word but I'm not practiced enough to use the concept directly.

This is all interesting and has led me to some speculation.  Do people who use words inside their heads use people's names when they're thinking about that person?  I almost never do, but it makes sense that it would be common.  And if so, does this correlate with people's ability to remember names?  It seems that if you're using a name inside your head all the time you should have an easier time remembering it.  This seems like just the sort of thing an enterprising grad student could build a study around, and I hope someone does.  Or it might be that someone already has and I just haven't heard of it.  If any of you reading this know, or have your own related experiences, please let me know.

Saturday, December 1, 2018

The Danger of Going Up

Every year people climb Mount Everest, and generally the number keeps growing.  Not everybody makes it up.  Some people turn back.  Others die on the mountain - around 1 in 100 of those who attempt the climb.  K2 is another mountain in the same part of the world.  For a while people thought it might be taller than Everest.  It isn't taller, but it is deadlier: about 1 in 10 of the people who try to climb it die.

Why do people climb dangerous mountains?  To test themselves.  For the sense of achievement.  And despite the danger, nobody is seriously proposing that we stop people from attempting these summits.  If they want to risk their lives, they can.  They know the risks.  We should stop people who don't know what they're doing, but you'll never even get to base camp without serious dedication and preparation.

If we allow mountain climbers to face severe dangers for the sake of achieving something few others have, why don't we allow this with astronauts?  We did once; it's almost amazing that none of the Mercury 7 were killed in flight.  Gus Grissom nearly drowned when his capsule sank after splashdown, and he and the rest of the Apollo 1 crew were later killed in a fire during a ground test.  When we were locked in competition with the Soviets these risks seemed bearable, but they don't any more.

Certainly astronauts are as aware of the dangers as any mountaineer contemplating K2.  NASA requires them to toe the safety-first line to remain in the program, but it's clear to everyone watching that if riskier missions were offered there would be no shortage of volunteers.

Perhaps the difference is that astronauts are doing something for us.  They're exploring the cosmos on our behalf in a way that people looking to climb some mountain aren't.  And that instills in us a sense of obligation to shield them from risk even if they would bear that risk willingly.

Tuesday, November 27, 2018

What do all those transistors do?

The CPU in your laptop or desktop has a lot of transistors in it.  The Core i7-6700HQ that I'm typing this on has 1.35 billion of the little guys.  Back in the day one of the earliest computers, ENIAC, had only about 20 thousand vacuum tubes, which more or less fulfilled the same role that transistors do now.  So what do all those extra transistors accomplish, if we could do useful mathematical operations with just 20,000?  After all, most of the increase in the speed at which we run computers, the clock rate, has come from replacing large, slow transistors with smaller, faster ones.

Well first, what was ENIAC doing with its tubes?  A single transistor isn't very useful.  If you're willing to use a resistor too, you can perform an operation like making an output the logical AND of one input and the inverse of a second input, call it AND(A, NOT(B)).  But that circuit is a very jury-rigged thing which will be slow, unreliable, and fairly power hungry.  You can make a much more solid AND-ish circuit, NOT(AND(A,B)) or NAND, from a couple of transistors and a resistor if you don't mind it being power hungry.  But given that Moore's Law has made transistors cheaper and cheaper relative to resistors, anybody these days would use a four-transistor NAND circuit, which is faster and more power efficient than the resistor-and-transistor equivalent.
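
To make that concrete, here's a toy sketch in Python of how the rest of digital logic gets built out of NAND.  Each nand() call stands in for the four-transistor circuit described above, so the transistor counts in the comments just multiply that out; real chip designers have cleverer constructions, this is only the idea.

    # A toy sketch of building everything else out of NAND.  Each nand() call
    # stands in for the four-transistor circuit described above.

    def nand(a, b):
        return 0 if (a and b) else 1

    def not_(a):       # 1 NAND,  ~4 transistors
        return nand(a, a)

    def and_(a, b):    # 2 NANDs, ~8 transistors
        return not_(nand(a, b))

    def or_(a, b):     # 3 NANDs, ~12 transistors
        return nand(not_(a), not_(b))

    def xor_(a, b):    # 4 NANDs, ~16 transistors
        c = nand(a, b)
        return nand(nand(a, c), nand(b, c))

    # Quick truth-table check
    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "NAND", nand(a, b), "AND", and_(a, b),
                  "OR", or_(a, b), "XOR", xor_(a, b))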

What if you want to add two numbers together?  Digital circuits output zeros and ones, so first you have to decide how many base-2 digits you want to represent, since you need an output wire for every bit of the largest number the machine can handle.  In a modern all-transistor design you'll be using something like 16 transistors for every bit of the addition.  For a 32-bit adder, which many processors used around 2000 and which is roughly equivalent to what ENIAC handled, that comes to about 512 transistors total.
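
Here's a rough sketch of the ripple-carry style of adder I'm describing, the simplest gate-level way to add two N-bit numbers.  Real adders in modern CPUs are fancier (carry-lookahead and the like), but the transistor budget is the same order of magnitude; the count at the end just multiplies out the ~16-per-bit figure above.

    def full_adder(a, b, carry_in):
        """One bit of addition: two XORs, two ANDs, and an OR."""
        s = a ^ b ^ carry_in
        carry_out = (a & b) | (carry_in & (a ^ b))
        return s, carry_out

    def ripple_add(x, y, bits=32):
        carry, total = 0, 0
        for i in range(bits):
            s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
            total |= s << i
        return total  # carries past the top bit just fall off, as in hardware

    print(ripple_add(1234567, 7654321))   # 8888888
    print(32 * 16)                        # ~512 transistors at ~16 per bit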

But you don't want a computer that only adds numbers. You want a wide variety of instructions you can execute, you want some way of choosing what instruction you execute next, and you want to interact with memory. At this point you're up to 10,000s of transistors. That will give you a CPU that can do all the things ENIAC could do.

Now let's say you don't want your entire operating system to crash whenever there's a bug in any program you run.  That takes more transistors.  And you probably want to be able to start one multi-cycle instruction before the last one finishes (pipelining), which might get you up to executing one instruction every other clock cycle on average.  That costs transistors as well.  This grows your chip to 100,000s of transistors and gives you performance like the Intel 386 from the mid 80s.
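
To see why pipelining pays for itself, here's a toy calculation; all the numbers are made up for illustration, not measurements of any real chip.

    instructions = 1_000_000
    cycles_per_instruction = 4          # say each instruction takes 4 cycles of work

    # Without pipelining, each instruction waits for the previous one to finish.
    unpipelined_cycles = instructions * cycles_per_instruction

    # With a 4-stage pipeline a new instruction could start every cycle, but
    # stalls from branches, memory waits, and dependencies mean we only manage
    # one instruction every other cycle on average, as described above.
    pipelined_cycles = instructions * 2

    print(unpipelined_cycles / pipelined_cycles)   # 2.0x throughput at the same clock rate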

But this will still seem very slow compared to the computers we use nowadays. You want to be able to execute more than one instruction at a time. Executing several at once isn't very hard, but figuring out which instructions can run in parallel and still give you the right result is genuinely hard and takes a lot of transistors to do well. This is what we call out-of-order execution, like the first Intel Pentium Pro had in the mid 90s, and it takes about 10 million transistors in total.
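
Here's a very rough sketch of the bookkeeping that takes: an instruction can only start once every register it reads has been written.  Real schedulers track hundreds of in-flight instructions with register renaming; this toy just checks which inputs are ready each "cycle", and the example program is made up.

    program = [
        ("r1", ["r2", "r3"]),   # r1 = f(r2, r3)
        ("r4", ["r1"]),         # depends on r1, must wait for it
        ("r5", ["r2"]),         # independent of r1, can run alongside it
        ("r6", ["r4", "r5"]),
    ]

    ready = {"r2", "r3"}        # values that are already available
    remaining = list(program)
    cycle = 0
    while remaining:
        issued = [(dest, srcs) for dest, srcs in remaining
                  if all(s in ready for s in srcs)]
        print("cycle", cycle, "issues:", [dest for dest, _ in issued])
        ready |= {dest for dest, _ in issued}
        remaining = [ins for ins in remaining if ins not in issued]
        cycle += 1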

But now the pool of memory that we're working with is getting bigger and bigger; most people these days have gigabytes of memory in their computers, and the bigger the pool is the longer it takes to grab any arbitrary byte from it. So what we do is have a series of pools: a very fast 10kB one, a slightly slower 100kB one, a big 10MB one on the chip, and then finally your 8GB of main memory. And we have the chip figure out what data to put where, so that most of the time when we go looking for some data it's in the nearby small pool and doesn't take very long to get, and we're only waiting to hear back from main memory occasionally. This, plus growing the structures that look ahead for more instructions to execute, is how computers changed until the mid 2000s. There was also the move from 32 to 64 bits so that machines could refer to more than 4GB of memory: with 32 bits the largest address you can write is about 4.29 billion (2^32), so any memory beyond 4GB couldn't be used by a 32-bit computer. All this gets us up to 100 million transistors.
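
As a toy model of why the hierarchy of pools helps, here's a back-of-the-envelope average access time.  The latencies and the fraction of requests each level ends up serving are my own rough guesses, not measurements of any particular chip.

    levels = [
        # (name,          fraction served here, latency in ns)
        ("L1 cache",      0.90,   1),
        ("L2 cache",      0.06,   4),
        ("L3 cache",      0.03,  15),
        ("main memory",   0.01, 100),
    ]

    average = sum(fraction * latency for _, fraction, latency in levels)
    print(average)   # ~2.6 ns, versus ~100 ns if every access went to main memory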

And from the mid 2000s to the mid 2010s we've made the structures that figure out which instructions to execute next even bigger and more complicated, letting us execute even more instructions at once. Growing performance this way means the number of transistors we need grows roughly as the square of the performance. And we've added more cores on the same chip, letting performance grow linearly with transistors as long as software people can figure out ways to actually use all the cores. And now we're up to billions of transistors.

All this raises the question of whether you could just take a 10,000-transistor core design you would have used back in the day and put 100 of those cores in a CPU instead of the 4 you'd normally buy.  To some extent you can.  If you want them all to talk to each other and to a good amount of memory you have to widen them to 64 bits, but that doesn't take very many transistors if the rest of the design stays simple.  They'll be slower individually, since each of those 10x steps in transistor count buys something like a doubling of performance rather than a 10x increase.  The bigger problem is that it's hard to write software so that the work breaks up neatly into 100 different threads of execution.  And some operating systems, such as Windows, tend to have problems dividing up work efficiently between more than 30 or so threads.
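
Here's a little Amdahl's-law-style sketch of that trade-off, using the rule of thumb above that each 10x in transistors per core buys roughly 2x of single-thread speed.  All the numbers are assumptions for illustration.

    # 4 big cores vs 100 copies of a simple core.  A big core with ~1000x the
    # transistors of the simple one is only ~8x faster (three 10x steps, ~2x each).
    # `parallel` is the fraction of the work that can be split across cores.

    def speedup(cores, per_core_speed, parallel):
        serial_time = (1 - parallel) / per_core_speed
        parallel_time = parallel / (cores * per_core_speed)
        return 1 / (serial_time + parallel_time)   # relative to one simple core

    print(speedup(4, 8.0, parallel=0.95))      # ~28x: the big cores win here
    print(speedup(100, 1.0, parallel=0.95))    # ~17x: 100 small cores mostly idle
    print(speedup(100, 1.0, parallel=0.999))   # ~91x: only near-perfect splitting pays off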

In theory you could have 2 large cores for cases where the software doesn't support much division of work and another 30 tiny cores to handle cases where the work divides easily.  The operating system would have to be aware of this, though, and prefer running tasks on the fast cores first, letting the remaining tasks trickle down to the slower cores.  Something of the sort has been done on mobile phones, where you might have 2 fast cores and 4 slow cores.  But on your laptop you mostly see this sort of thing as a few large cores running your applications and a gaggle of small cores in the GPU doing the graphics processing.

Sunday, November 18, 2018

Book Review: Radical Abundance

Eric Drexler is the person who came up with the name "nanotechnology" and is probably the one most responsible for public awareness of the idea.  After a pretty long hiatus he's published a new book, and I thought I'd take a look.  But before I get to the new book I should provide some history.

Way back in 1986 Drexler published Engines of Creation.  In it he outlined a vision of a world remade by the ability to engineer chemicals the way we engineer widgets today: assembling them precisely with mechanical arms placing pieces in position rather than waiting for the random thermal motion of molecules to bring parts into contact where they can stick.  He envisioned tiny robots called assemblers, constructed with atomic precision, using tiny arms to assemble other devices - and also to replicate themselves.  That ability to replicate could be a big danger if an error in programming caused them to replicate without bound, like a cancer.  The cells that make up our bodies are constrained by having needed a clear evolutionary path from each form to the next; it took a billion years for mitochondria to evolve because that required multiple things in a cell adjusting at once.  There are quite plausibly ways to drastically increase the efficiency of self-replicating organisms that engineers could design but that evolution wouldn't find before the Sun consumes the Earth.  Specifically, you could make these nanobots out of much stronger molecules than life can use, and, by placing things deliberately rather than waiting for random motion to put them into place, they could potentially replicate much more quickly.  So it's possible that, if you can make good enough self-replicating nanobots and they have a fault, you might see them consume Earth's biosphere and turn it into grey goo.

After this publication Drexler went off to do doctorate work, which he later published as Nanosystems in 1992.  Rather than conceptualizing molecular manufacturing in terms of tiny robots, he wrote about it in terms of molecular factories and assembly lines.  There are a lot of advantages to this approach.  Encasing a molecule in a tube while some operation is done to it limits its range of thermal motion much more effectively than trying to hold it at the end of an arm.  By having the structure of a conveyor line encode a sequence of operations you don't need a nano-scale general purpose computer, which made up the bulk of the atoms in the back-of-the-envelope nanobot design in Engines of Creation.  And you don't have to worry about a factory replicating itself the way you do with a tiny robot.

And that brings us to Radical Abundance, published in 2013.  It's a policy book like Engines of Creation was, but the engineering in it is very much post-Nanosystems, looking at construction in terms of assembly lines rather than craftsman nanobots.  There's also a large segment on the public perception of nanotechnology and how it has changed over time.  Because of the excitement about nanotechnology in the late 90s and early 00s, a lot of chemistry tried to re-brand itself as nanotechnology to absorb nanotechnology research dollars - work involving molecules smaller than 100nm created by normal thermal-motion chemistry rather than by forcing molecules together with external guidance.

Additionally, a lot of the public thinks of nanotechnology in terms of tiny nanobots instead of factories made of atomically precise parts.  This is, of course, because that's how Engines of Creation conceptualized it, and because Engines is a much easier read than Nanosystems.  I was expecting some sort of mea culpa from Drexler on the matter, and maybe an explanation of what he'd gotten wrong in his first book.  But Radical Abundance never seems to let slip that the ideas Drexler now finds so annoying were spread by his younger self.  A section going into a bit of detail on why nanobots aren't workable would have been a useful addition to this book, but Drexler just dismisses them here with an eye roll.

But because of these two ways in which the word "nanotechnology" no longer means what he intended it to back in '86 he is now promoting "Atomic Precision Manufacturing" or APM as a term for the technologies he's interested in.  I wish him good luck with that but I'm doubtful he'll succeed.

What seems more likely to succeed are attempts to actually build these "APM" capabilities, at least eventually.  Our science of the nanoscale is currently imperfect, but engineering practice can let us proceed anyway.  It's very hard to look at a protein and figure out how it will fold when placed in water; I currently have my desktop donate some spare cycles to figuring out some medically important proteins.  But that doesn't mean we can't design proteins to fold predictably, if we're willing to be a bit less efficient than evolution is.  We don't understand how every molecular surface interacts, but we can restrict ourselves to the surfaces we do understand.  We can't predict the exact strength of every structure, but we can design in safety factors sufficient to cover our uncertainty.  There's a lot of scientific work being done pointing towards APM, and Drexler gives a good high-level overview of it.

I'm going to say that I don't have any firm idea of what sort of time scale progress on APM/nanotechnology will happen on.  But I think I'm persuaded that it will happen eventually.

And just because I saw it recently, here's A Capella Science's song about current research in nanobots.

Tuesday, October 23, 2018

The old anarchists of Iceland

A while ago I made a post on Anarchy in History, about how most traditional societies didn't have governments as we'd understand them - some group with a monopoly on the use of force.  But just because a country exists as an anarchy doesn't mean that the people in it are anarchists.  They just have their traditions and laws and accommodations, and don't hold their opinions on government as an ideology.  In the same way, most people who have lived under monarchies haven't considered the alternatives in a way that would make them monarchists; you mostly find monarchists in places and times like 19th century Europe or ancient Greece, where monarchies and republics lived side by side.

There is one society, though, that I think you could reasonably claim was actually anarchist: medieval Iceland.  Iceland was settled mostly by Norwegians around 900 AD, around the time the famous Harald Fairhair was unifying Norway into a kingdom.  We know most of what we do about Icelandic history, and about how viking societies saw themselves, from the sagas preserved in Iceland telling the tales of what happened back in the olden days.  In them, many of the settlers are portrayed as fleeing Harald's imposition of centralized rule and taxes and such.

Once on Iceland they rejected Harald's centralization.  There were laws, but they were derived from custom instead of being set by a person.  There were courts, but there was no executive apparatus to enforce their rulings with violence.  If the courts declared someone an outlaw - in other words, fair game - it was mostly up to the friends and family of whoever had been aggrieved to exact the punishment.  Just the same as in most traditional societies.  But the settlers of Iceland knew that there were other ways of doing things and chose not to adopt them.

Now, there's good evidence that Harald was never actually a historical person.  But what matters here is the Icelanders' self-conception: they saw themselves as a free people who weren't oppressed by a king.

There are things to quibble with.  For instance, slave owners had a monopoly on violence over their slaves.  But I still think it's fair to say that Iceland, while it lasted, was the one and only genuinely anarchist society I'm aware of.

Tuesday, September 18, 2018

Fusion optimism

Fusion is famously the technology that's always 20 years in the future.  And because of that I hadn't really seriously considered it as a realistic solution to climate change, sustainable energy, etc.  But recently I've become more optimistic about the prospects of fusion so I thought maybe I'd explain why I've changed my views.

What is fusion?  Well, it's a process that releases energy by combining light atomic nuclei.  Nuclei resist this, since their positive charges repel each other, so they have to be smashed together very hard for it to work.  In a thermonuclear bomb the fusion is triggered by a standard fission bomb going off to provide the needed energy.  Atomic weapons are rather hard to contain, though, and impractical to use as part of a power plant.  So for fusion power we need to heat some hydrogen until it's very hot and keep it all together, rather than letting it expand and escape as very, very hot things like to do.

In the Sun, our star's gravity is enough to keep everything together.  That works very nicely, but sadly masses as large as the Sun play together with power plants even worse than atomic explosions do.  But gravity is widely regarded as the weakest of the fundamental forces for good reason, and if it takes an impractical amount of gravity to keep some hot hydrogen contained, it takes only a huge but doable amount of electromagnetism to do the same thing.

Because the super hot hydrogen you want to use for fusion is constantly having electrons knocked off it, it's no longer a gas but an electrically active plasma, and you can use a magnetic field to pin it in place.  For the temperatures needed, that's not the sort of magnetic field that will fit on your desk but more like one the size of a house.  Still, a lot more practical than atomic explosions or Sun-sized masses.

It would be much easier if a reactor could be made smaller, so that it could be built and modified quickly and cheaply.  But you need the magnetic field strength combined with the reactor's size to be enough to keep the super hot particles in place.  To be practical you need superconductors to make the magnetic field, and those stop working if the field gets too strong.  Hence, with the maximum field our materials can sustain there's a minimum size - which has been expensively large.

Back in the mid 1970s the people running our fusion research program looked at the problems involved and figured they'd need a lot of funding to iterate on building huge, building-sized reactors until all the bugs got worked out.  They came up with a set of plans for how quickly they thought they could get fusion working reliably, depending on how big their budget for building trial house-sized magnetic reactors was.  Here it is.


As you can see, the level of funding eventually selected corresponds to enough money to pay a few scientists to think about the problem, but with no giant house-sized magnets built to see if their ideas work - somewhat below even the level that the 1976 planners expected would never successfully produce fusion.

Put in this perspective the people saying that we could have fusion in 20 years way back in the 1970s seem a lot more reasonable.  It wasn't that we would have fusion but that we could with a large enough investment.  An investment that never materialized.

Well, now in the 2000s an international collaboration has finally gotten around to building a house-sized magnet facility in Europe called ITER that, if funding keeps being poured in, should eventually get the kinks worked out.

But something really cool has happened since the 1970s.  A technology called Magnetic Resonance Imaging, MRI, has taken off in the field of medicine, letting us look inside human bodies for disease.  This is a huge business and a very practical and immediate one.  MRI machines want superconducting magnets handling intense fields, and so lots of people have had very practical and immediate incentives to figure out how to create superconductors that can handle higher field strengths.  And so they have.

A group at MIT came up with something they call the ARC reactor (groan) that uses a sort of superconductor that can handle fields almost twice as strong as the superconductor used at ITER.  "Almost twice" might not sound like much, but apparently the energy you can get out of a fusion reactor scales as the fourth power of the field strength, so there was scope to make a reactor as powerful as ITER in a tenth the volume.  And now they've formed a commercial company around this.
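
The arithmetic is simple enough to do in a couple of lines; the 1.8 field ratio is just my rough reading of "almost twice", not an official figure.

    # Fusion power density scales roughly as the field strength to the fourth power.
    field_ratio = 1.8
    power_density_gain = field_ratio ** 4
    print(power_density_gain)        # ~10.5x the fusion power per unit volume
    print(1 / power_density_gain)    # so roughly 1/10th the volume for the same total power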

Just building the reactors isn't the end of the story.  There's a lot of engineering effort that has to go into issues like plasma instability, having the reactor materials cope with neutron damage, and so on.  But having reactors lets engineers and scientists try out solutions.  It's possible that one of these challenges will turn out to be insurmountable.  But there's no particular reason to believe that, and if none is, then we should expect to see practical fusion power, finally and actually, within 20 years.

Tuesday, July 3, 2018

Book Review: The White Man's Burden

I recently finished reading William Easterly's The White Man's Burden: Why the West's Efforts to Aid the Rest Have Done So Much Ill and So Little Good.  Reading some reviews online before starting it, I was expecting it to be much more of a screed than it actually was.  Easterly criticizes the efforts of the World Bank and International Monetary Fund to further the development of poorer countries, but his nuanced criticisms are in sharp contrast to those of most of their critics.  There's a lot of discussion of all the complicated moving parts that go into a modern commercial society, of the difficulty of both creating trust between commercial actors and making that trust justified, of how people have found ad hoc solutions in the absence of regular institutions, and of how hard it is to know in advance which interventions will actually make things better rather than just funnel money into someone's pocket.  He provides some fairly compelling statistics showing that development aid of this type doesn't help on average, and explains a bit why there can be occasional negative consequences to match the occasional positive ones.  I think in my earlier review of Dancing in the Glory of Monsters I gave an illustration of the damage that ill-considered aid can cause.

He talks a bit about programs that actually help.  Initiatives led by residents of the countries being aided get a lot of praise.  But he also talks about accountability: having one organization do one thing, and having its impact on that thing measured.  If the book had been written later, I think he might have specifically praised Effective Altruism institutions like GiveWell.

There's also a chapter on peacekeeping, which I found less convincing.  Steven Pinker had some pretty convincing statistical arguments in The Better Angels of Our Nature that blue-helmeted peacekeepers are a substantial net benefit.  Easterly, though, uses anecdotes like what happened in Somalia to argue that peacekeepers are useless, in sharp contrast to his use of statistics and scorn for anecdote in the aid chapters.

Overall I'd recommend this book, particularly for the dives into the nitty-gritty details of institutions in the third world.

Saturday, June 9, 2018

Waymo is getting serious

First of all, in case you aren't following this sort of thing, the Uber crash I mentioned in my last post on self-driving cars is actually a lot worse than it looked at first.  According to the NTSB report the victim of the crash had been detected, but
emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior. The vehicle operator is relied on to intervene and take action. The system is not designed to alert the operator.
which makes me wonder if this rises to the level of criminal negligence.  Certainly there are currently no localities allowing Uber to test on their roads, which might be the end of Uber's program.

Tesla has also been in the news with some crashes involving its cars running on Autopilot.  But an autopilot of the sort you normally have in a boat or plane or one of Tesla's cars isn't full self-driving.  The pilot or driver is supposed to remain alert and react if the vehicle starts to do something unsafe.  In a plane or boat this normally means noticing and reacting within a couple of minutes or so.  But in a car you might need to react in seconds, which makes this whole approach much more dangerous.

The NHTSA defines a number of levels of autonomy.

Levels 0 and 1 aren't a problem because the user always has to be paying attention - well, no more of a problem than we're already dealing with.  Levels 4 and 5 are fine because the user doesn't need to pay attention; the car may get stuck, but it can be trusted to pull over to the side of the road while the driver prepares to take over.  But levels 2 and 3, where the driver has to remain alert while not doing anything, don't strike me as very realistic.

In contrast to all of this, it looks like Google's self-driving car spinoff Waymo is doing everything right.  They're going straight for level 4 or 5.  They're testing carefully and not taking shortcuts like Uber.  And they've been working on the problem since 2009, having just accumulated 7 million miles of driving, which is pretty impressive.  Going by some stats off Wikipedia a fatal accident happens around once every 2 million miles, and Google hasn't had one yet.  There have been some accidents, even serious ones like one recently, but that was a vehicle swerving into an oncoming lane and hitting the Waymo car.  In every incident I've heard of the Waymo car was not at fault, so I think we can say at this point that Waymo cars are safer than human drivers, on average.

But now Waymo has just ordered 62,000 minivans from Chrysler.  They'd been giving rides for free to people around Phoenix, but only with a human driver monitoring things.  With tens of thousands of cars they're going to be looking to go fully autonomous.  It looks like fully autonomous self-driving cars are about to actually be a practical thing, at least in Phoenix or whichever cities those 62,000 cars will be serving.

Tuesday, May 22, 2018

Nuclear power in space

I'm a little behind on my blogging but a couple of months ago NASA released some information on their new Kilopower small reactor for use in space.  So I thought this would be a good opportunity to write something about power generation in space. 

Most satellites and probes in the inner solar system use solar power.  This was one of the first practical uses of solar power, back when solar panels were very expensive, and now that solar panels have gotten much, much cheaper it's even more of a good idea.  The solar panels on your roof are built tough and weigh about 10 kg for every square meter here on Earth.  At 1300 watts of incoming sunlight per square meter, times an optimistic 20% efficiency for typical solar cells, that's 26 watts per kilogram.  According to Wikipedia, more expensive spacecraft-grade solar panels will give you 77 watts per kilogram, but even then the majority of the expense is going to be lifting the weight of the panel into orbit.

Solar panels are great if you're as near to the Sun as Earth is, but they work less well as you get further out.  The incoming sunlight per square meter goes down as the square of the distance from the Sun, so out at Jupiter, about 5 times as far from the Sun as Earth, you're only receiving around 50 watts per square meter and your spacecraft-grade panels are only generating about 3 watts per kilogram.
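
Here's a quick sketch of that inverse-square falloff using the numbers above; the planetary distances are the standard ones in astronomical units, everything else is the rough figures from this post.

    SOLAR_CONSTANT = 1300  # W/m^2 at Earth's distance from the Sun

    def watts_per_kg(distance_au, watts_per_kg_at_earth):
        # Specific power falls off with the square of the distance.
        return watts_per_kg_at_earth / distance_au ** 2

    for name, au in [("Earth", 1.0), ("Mars", 1.5), ("Jupiter", 5.2), ("Saturn", 9.5)]:
        print(f"{name:8s} {SOLAR_CONSTANT / au**2:7.1f} W/m^2   "
              f"rooftop panel {watts_per_kg(au, 26):5.1f} W/kg   "
              f"spacecraft panel {watts_per_kg(au, 77):5.1f} W/kg")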

For trips to the outer solar system NASA has traditionally relied on radioisotope thermoelectric generators, or RTGs.  The idea is that you take some particularly radioactive isotope, such as Plutonium-238, and just let it generate heat by decaying.  At Jupiter's orbit these actually generate about the same electric power per kg of weight as solar panels, but without the worry of always having to point your panels at the Sun.  And as you travel beyond Jupiter to Saturn they become much more mass efficient.

They do have their problems, though.  In order to be intrinsically radioactive enough to generate that much heat, the fuel has to have a pretty short half life; half of any Plutonium-238 decays away in just under 90 years.  That means all the natural P-238 is gone, and any we use has to be synthesized at great expense.  It also means that half the power will be gone after 90 years.  Missions don't last 90 years, but over the long distances of the outer solar system they can last more than a decade, and the power output from these generators goes down noticeably on those time frames.

And then there's the problem of radiation.  To generate heat the material must be very radioactive by its nature, and that means dangerous.  If there were an explosion during launch the plutonium might be released into the environment.  The RTG is designed to stay in one intact unit in the event of catastrophe and sink to the bottom of the ocean, but it still causes worry.

That's one area where a nuclear reactor, rather than an RTG, has benefits.  Kilopower and most other reactors run mostly on Uranium-235.  In small quantities U-235 is pretty safe.  Well, it's pretty toxic, but toxic in the manageable sort of way that lead or mercury or bismuth are toxic.  It has a half life of 700 million years, quite a bit more than P-238 - long enough for large amounts of naturally occurring U-235 to remain from the creation of the solar system.  And while it gives off a little radiation from decay, that decay occurs so slowly that you can mostly just ignore it.

Why do nuclear accidents like Fukushima release so much radiation then, if the uranium fuel that goes into the reactor is fairly safe?  Because reactors work by transforming uranium into other elements to release the energy stored in it.  And the elements that are created are frequently even more naturally radioactive than P-238 is.  So a reactor that has been running is just as dangerous as an RTG is.  But a reactor that has not yet been turned on is actually fairly safe.

So that's a clear reason to prefer a reactor over an RTG in the outer solar system, where solar doesn't work very well.  What about closer in?  Mostly you'd want to consider nuclear where you need reliable power for things such as life support through the night.  On Mars the night is about 12 hours long, and adding enough lithium-ion batteries to last that long makes the batteries effectively a 20 watt-per-kg system; since you also need twice as much solar panel during the day to both charge your batteries and run your life support, you're down to a net of about 13 continuous watts of power per kg of solar panels and batteries.  Which is still better than nuclear.

But what about the Moon, which rotates once a month, with nights over 300 hours long?  There the heap of batteries you have to pile under your solar cells only gives you about 0.8 watts per kg of equipment, making it clearly less mass efficient than nuclear.  Is it worth the problem of dealing with radiation?  That's for NASA to decide.  And it would only be NASA that gets to make that decision: such lightweight reactors have to use highly enriched uranium that could be made into nuclear weapons in the wrong hands, so there isn't much chance of SpaceX or other private ventures getting their hands on one.
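
For the curious, here's roughly how those 13 and 0.8 watt-per-kg figures can be reproduced.  The battery figure is my own assumption (about 240 Wh/kg cells), the panel figure is the 77 W/kg spacecraft number from above, and I'm ignoring charging losses and Mars being further from the Sun, so treat it as a sketch rather than a mission design.

    BATTERY_WH_PER_KG = 240   # assumed lithium-ion specific energy
    PANEL_W_PER_KG = 77       # spacecraft-grade panels, from above

    def continuous_watts_per_kg(night_hours):
        # Batteries sized to carry the load through the night.
        battery_kg_per_watt = night_hours / BATTERY_WH_PER_KG
        # Daytime panels must run the load and recharge the batteries,
        # so size them for roughly twice the load.
        panel_kg_per_watt = 2 / PANEL_W_PER_KG
        return 1 / (battery_kg_per_watt + panel_kg_per_watt)

    print(continuous_watts_per_kg(12))    # Mars night: ~13 W/kg
    print(continuous_watts_per_kg(300))   # lunar night: ~0.8 W/kg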

Wednesday, May 16, 2018

Falling fertility rates shouldn't be a problem forever

There are a lot of countries in the world with declining populations.  Historically this isn't unusual when there's famine or disease or war causing it, but in the developed world today it's mostly just people choosing to have fewer children.  In Japan fertility is all the way down to 1.46 children per woman, which, since half of kids are male, means that every generation will tend to be about three quarters as large as the one before it.

There are a lot of reasons for this.  People having better options than they did in the past seems like the biggest one.  And there are differences within the industrialized world involving religion, attitudes towards child care, and other things.

But those are societal factors and there are also individual factors at play in a couple's decision to have kids.  People have different personalities and personality seems to have an effect on child bearing just like you'd expect.  Here's a study I recently saw linked on twitter.


So, for instance, people who score higher on Agreeableness in the Big Five personality traits social scientists normally use seem to be more likely to have children.  And these personality traits seem to be substantially heritable.  So if nothing were to change, we should probably expect evolution to have its say and the population decline to eventually halt and reverse itself through natural means.

I don't actually expect things to turn out that way.  Rather, I expect technological improvements in the ease of having and raising children to make a bigger difference sooner.  Societal improvements would be nice too, but I'm not counting on those.  How we're going to take care of large numbers of older people, as they live longer, without a large working population is going to be a concern for the next few generations, but there's no reason to worry about humanity going extinct or anything.

Of course technological progress cuts both ways and maybe virtual reality will reduce childbearing rates further.  But again I expect the next generation being more like the sort of people who had kids in the last generation to eventually rescue things.

But the sad moral of the story is that while we might have gotten out of the Malthusian trap, maybe even for quite a while, I'm pretty sure that in the very long run the human population will increase again until it's reduced to a subsistence level.

Tuesday, March 20, 2018

Self driving cars are like airplanes

Yesterday, for the first time, an autonomous car killed a pedestrian.  It isn't clear that the car was at fault but we're almost certainly going to have an accident where the car was at fault at some point.  At this point autonomous cars haven't driven enough miles for us to know if they're currently safer or more dangerous than human drivers.  But I think they have the potential to be much safer in the long run for a combination of technical and institutional reasons.

Technically, cars can learn in the same way that humans can.  But while we humans are mostly limited to learning from the situations we personally encounter, a fleet of cars can learn as a unit.  Some accident occurs, engineers analyze it, and then no car in that fleet will make that particular mistake again.  It's reasonable to imagine robo-cars trained on a body of experience far greater than a human could amass in a lifetime.

And I think that robotic car manufacturers have the right incentives to care about this too, for the same reason that airlines do.  People are just less inclined to trust other people to guarantee their safety than they are to trust their own abilities.  This is true when people choose to drive long distances rather than fly because they're afraid of some airplane disaster.  It's even true when they know that people like them are more likely to kill themselves driving than a professional pilot is to kill them flying.  It's sort of irrational, but it's a fact.  And I'm pretty sure the same principle applies to autonomous cars.

For this reason airlines are scared of airplane crashes in a way most other industries aren't.  When planes crashed into the World Trade Center on 9/11, lots of people became afraid to fly and started driving instead.  This caused roughly 1600 extra car deaths in the year after 9/11, increasing the death toll of the attack by 50%.  Many airlines nearly went out of business.  So while any individual airline might be tempted to skimp on safety, they're all terrified that a competitor will, and will then cause a crash that seriously hurts everyone.  In many industries the regulated parties use their influence to weaken the regulator, but for the FAA the airlines do their best to make it stronger.

I think the same dynamic is liable to occur in the autonomous car industry too.  People are inclined to trust their own driving over a computer's unless there's very clear evidence that the computer is better.  As long as people can drive themselves on public roads, autonomous car companies will be scared that an accident involving one of their competitors' cars will make people want to do just that.  So I expect that once the industry grows and stabilizes enough for a good regulatory body, that body will be pretty demanding.  And I expect that the companies involved will be fairly safety conscious about their autonomous cars even if they're lax about other things.

UPDATE:  The crash that prompted me to write this actually looks pretty bad for Uber, but I still think the forces involved will make autonomous cars safer in the long run.

Monday, March 5, 2018

The Drake Equation again

I was walking in to work today while listening to a nice podcast on the Drake Equation.  The Drake Equation is an estimate of the number of civilizations in the galaxy based on things like how many planets there are, how many develop life, etc.  I learned a lot from the podcast, but it reminded me of a post I'd been meaning to make about why I think the origin of life probably wasn't the hard part in creating us.  Also, I promise this post on the Drake Equation is more pleasant than the last one.

A graph:

Dates taken from Wikipedia's timeline of life and timeline of the future.

It was just a pretty short amount of time, geologically, from when the Earth cooled down enough for oceans to start forming until we have evidence of the first life - just 120 million years.  And that's probably a conservative estimate.  But from there it took three quarters of a billion years for photosynthesis to arise.  Then one and a half billion more until one bacterium swallowed another in such a way as to turn it into a mitochondrion and become a big complex eukaryotic cell.  Then another billion before we had real multicellular life.

So just looking at the timelines involved, it seems like the origin of life wasn't the hard part.  If you're interested, Nick Lane has some excellent books about the biochemical difficulties of these steps and why the origin of life might have been the easy part, but for the Drake Equation the important point is that becoming complex took a long time.

And it's also important that life only had so much time to become complex, because in just another billion years the brightening Sun will heat up our planet enough to evaporate the oceans, and then there's not much chance of intelligent life evolving.  It's also very lucky that photosynthesis showed up so early.  If the Earth's carbon dioxide atmosphere hadn't been broken down into oxygen, Earth might have had a runaway greenhouse effect by now.  And without oxygen in the atmosphere to form ozone, we might have lost the hydrogen atoms we need for water to the Sun's solar wind.

Looking at that timeline makes me feel optimistic that the reason there don't seem to be any aliens in the galaxy is that evolving intelligent life is hard, rather than that intelligent life tends to meet a grisly end.

Tuesday, January 30, 2018

What are the effects of carbon dioxide in the atmosphere?

Apart from the obvious, of course.  Once, when our planet was young, the atmosphere was very high in carbon dioxide.  Then photosynthesis evolved, and the cyanobacteria that spread across the oceans turned most of that carbon dioxide into oxygen about two and a half billion years ago.  Modern complex life, like ourselves, loves oxygen, but for the creatures at the time this was a huge problem, since oxygen was toxic to them and most of them died.  Plus the interruption of the greenhouse effect combined with the dimmer sun we had back then to turn Earth into a giant snowball for a bit.

But here we are, billions of years later, and carbon dioxide levels are increasing again.  We should be worried about increasing temperatures, but are there other reasons to worry?  For a very long time compared to human history, Earth's carbon dioxide level sat around 280 parts per million.  It's been going up recently and is now at 407 parts per million, a 45% increase.

If you are a photosynthesizing plant you might have complex feelings about this, but if you're a mammal who really just wants to expel carbon dioxide from your lungs there really isn't any upside.  We know that sufficiently high carbon dioxide concentrations will kill us, but that takes an atmosphere that's well above 10% carbon dioxide - 100,000 parts per million - very far from any level we could plausibly reach by burning too much coal.  But some researchers have investigated the effect on people's cognitive abilities of going from 600 ppm to 1000 ppm and found easily measurable declines.  Now, poorly ventilated indoor spaces can easily get up to 2000 ppm, so that's something to think about when designing office spaces.  But if a 400 ppm difference can produce obvious differences in people briefly exposed while proofreading papers, can we be sure that there isn't some difference in human physiology that would stem from a 100 ppm difference across an entire childhood?  How about a 200 or 300 ppm difference?

This probably won't be a huge difference, but it's not something that's been researched, and not something you really could ethically research in humans.  But it is something that I think is underplayed in our rhetoric about climate change.

Wednesday, January 24, 2018

Genetic engineering and chlorophyll

One of the interesting discussions in The Wizard and the Prophet was what the wizards are trying to get up to next in terms of trying to increase food production.  One idea goes to the fundamentals of photosynthesis.

The most important protein in photosynthesis is affectionately known as RuBisCO and makes up about half the protein in a leaf.  Photosynthesis seems to be pretty hard, and so RuBisCO doesn't work as well as most other catalytic proteins.  It's supposed to grab the carbon in carbon dioxide from the air but frequently grabs plain oxygen instead.  I suppose it worked a lot better before the Great Oxygen Catastrophe.  Some plants have versions that are a bit more selective, but they work more slowly.  Some are faster, but they mess up which molecule to grab more frequently.  Biologists hoped they could improve RuBisCO, but it seems that evolution did about as good a job as could be done.

There are some plants, though, that have a method of photosynthesis that's often better than the run-of-the-mill one.  They spend a little energy to concentrate carbon dioxide in the cells where the RuBisCO is, so that when RuBisCO grabs a random air molecule it's more likely to be CO2 instead of O2, speeding up photosynthesis.  The extra energy means this isn't always a benefit, but it usually is.

Many plant species have developed this C4 carbon fixation process, as it's known, in nature; notably, corn does it this way.  But researchers are hoping to develop a strain of rice that works that way too.  That is very ambitious.  Changing a single protein like RuBisCO is easy, but this would involve growing whole new structures in the leaves to channel the CO2, which requires not just new proteins but new developmental pathways.

That's very nifty, but when I was reading the chapter my mind was going somewhere else.  The Great Oxygen Catastrophe was very good to us oxygen-breathing creatures, but it really sucked for plants.  Wouldn't the kind thing, for plants, be to let them live in a high CO2, low O2 environment?  And you could give the plants a high-speed, low-specificity version of RuBisCO that would work really well in that environment.

I'm mostly thinking of this in terms of growing plants in outer space, but it could be applied in greenhouses too.  Enclosed environments also reduce the need for pesticides and herbicides, though obviously the enclosures are expensive and have their own environmental impact.  And certainly there would be an energy cost in getting the oxygen the plants produce out of these buildings.

Oxygen-free environments are dangerous, but ones filled with CO2 less so.  Normally when you aren't getting any oxygen you feel perfectly fine as you get stupider and more lethargic until you die.  Here's a video from Smarter Every Day showing someone in a low pressure environment, similar to a depressurized airplane, and how he acts until given a breathing mask.  When you hold your breath and feel like you're running out of air, that sensation is caused by the buildup of CO2 rather than the lack of O2.  The two always went together in our ancestral environment, so there was no need for us to distinguish them.  So in these greenhouses, at least, people with mask trouble could notice something was wrong and leave.

I'm sure that some of you are thinking, "wait, how does this relate to all the CO2 we're pumping into the atmosphere?"  Well, plants will like the extra CO2, but the extra heat will make RuBisCO even less selective, so for moderate warming it depends on the plant in question and for high levels of warming it's generally bad.  Plus there are all those other very hard to predict changes in rainfall and so on, which would almost certainly be painful.  So let's not forget to work on better renewable energy too.

Tuesday, January 23, 2018

Book review: The Wizard and the Prophet

I just recently finished The Wizard and the Prophet by Charles C. Mann.  He'd previously written a book about the Columbian exchange I'd really liked, 1493, so I was ready to like this book too.

It concerns the dueling ideals of two men regarding man's relationship with the environment.  The prophet of the title, William Vogt, believed that the world has a finite carrying capacity that humans had to respect and that we had to limit ourselves to what the Earth could sustain.  The wizard, Norman Borlaug, worked tirelessly to increase the yields of the crops that man depends on and allowed large new generations of people to grow up without the famine that had plagued their parents.

Going through the book, Mann does an admirable job of looking at the lives of each: their successes and failures and the events that made them the people they were.  And the book makes a valiant effort to portray both fairly, though, as you might expect, I end up sympathizing with the wizards more than the prophets.

I do worry, though, that it's the third position Mann introduces that is the correct one.  Vogt believes that mankind must constrain its reproduction and stop consuming as much.  Borlaug believes that mankind must learn to better use the environment to support ever more people.  Lynn Margulis believes that it would be unprecedented for mankind to do either of these, so we should expect overpopulation and die-off in the future.

My first reaction was "Wait, is this the same Lynn Margulis who..." and yes, it was.  She had argued that symbiosis rather than competition was the primary force in the evolution of our cells.  It was entirely true that mitochondria and chloroplasts were once independent bacteria that came to live inside eukaryotic cells.  It was untrue that the flagellum or the other organelles of the cell had also originated as symbiotes.

We are lucky that affluence has reduced our desire to have many children.  Yet there are those who desire many children even in affluence, and there's no reason to think that this desire isn't at least partially heritable.  We may stem this, for a time, with violence, but the will to violence fades.  We may race ahead of necessity in terms of our civilization's ability to provide sustenance.  Yet the sun only puts out so much energy.  There are limits to the computation cycles that can be extracted from a unit of energy.  And even expanding at the speed of light, resources grow only as the cube of time while demand grows exponentially, and an exponential must always beat a polynomial in the end.
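
Putting rough numbers on that last claim (the 50-year doubling time is made up; the conclusion doesn't depend on it):

    # Resources reachable at lightspeed grow as t^3; anything that doubles on a
    # fixed schedule grows as 2^(t/d).  Whatever the constants, the exponential wins.
    doubling_time = 50
    for t in (100, 500, 1_000, 2_000):
        print(f"t={t:5d} years   t^3 = {t**3:.2e}   2^(t/50) = {2 ** (t / doubling_time):.2e}")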

I'm closer to an average than a sum utilitarian so I can swallow this repugnant conclusion, even if I don't want to.

The limitations of blindsight

Blindsight, made famous in science fiction circles by Peter Watts's book of the same name, is a disorder caused by damage to the primary...