Tuesday, March 20, 2018

Self-driving cars are like airplanes

Yesterday, for the first time, an autonomous car killed a pedestrian.  It isn't clear that the car was at fault, but at some point we're almost certainly going to have an accident where the car is at fault.  Autonomous cars haven't yet driven enough miles for us to know whether they're currently safer or more dangerous than human drivers.  But I think they have the potential to be much safer in the long run, for a combination of technical and institutional reasons.

Technically, cars can learn in the same way that humans can.  But while we humans are mostly limited to learning from the situations we personally encounter, a fleet of cars can hope to learn as a unit.  Some accident occurs, engineers analyze it, and then no car in that fleet will make that particular mistake again.  It's reasonable to imagine robo-cars trained on a body of experience far greater than any human could amass in a lifetime.

And I think that robotic car manufacturers have the right incentives to care about this, for the same reason that airlines do.  People are just less inclined to trust other people with their safety than they are their own abilities.  This is true when people choose to drive long distances rather than fly because they're afraid of some airplane disaster.  It's even true when they know that people like them are more likely to kill themselves driving than a professional pilot is to kill them flying.  It's sort of irrational, but it's a fact.  And I'm pretty sure the same principle applies to autonomous cars.

For this reason airlines are scared of airplane crashes in a way most other industries aren't.  When planes crashed into the World Trade Center on 9/11, lots of people became afraid to fly and started driving instead.  This caused roughly 1600 extra car deaths in the year after 9/11, increasing the death toll of the attack by about 50%.  Many airlines nearly went out of business.  So while any individual airline might be tempted to skimp on safety, they're all terrified that a competitor will skimp, cause a crash, and seriously hurt everyone.  In many industries the regulated parties use their influence to make the regulator weaker, but the airlines do their best to make the FAA stronger.

I think the same dynamic is likely to play out in the autonomous car industry too.  People are just inclined to trust their own driving over a computer's unless there's very clear evidence that the computer is better.  As long as people can drive themselves on public roads, autonomous car companies will be scared that an accident involving a competitor's car will make people want to do just that.  So I expect that once the industry grows and stabilizes enough to get a real regulatory body, that body will be pretty demanding.  And I expect the companies involved to be fairly safety-conscious about their autonomous cars even if they're lax about other things.

UPDATE:  The crash that prompted me to write this actually looks pretty bad for Uber, but I still think the forces involved will make autonomous cars safer in the long run.

Monday, March 5, 2018

The Drake Equation again

I was walking to work today listening to a nice podcast on the Drake Equation.  The Drake Equation is an estimate of the number of civilizations in the galaxy based on things like how many planets there are, how many of those develop life, and so on.  I learned a lot from the podcast, but it also reminded me of a post I'd been meaning to write about why I think the origin of life probably wasn't the hard part in creating us.  Also, I promise this post on the Drake Equation is more pleasant than the last one.
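
For reference, the equation itself is just a product of a handful of factors.  Here's a minimal sketch in Python; the parameter values are made-up placeholders purely for illustration, not estimates from the podcast or from me.

```python
# A minimal sketch of the Drake Equation:
#   N = R* x fp x ne x fl x fi x fc x L
# The numbers below are made-up placeholders, not serious estimates.

def drake(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    """Estimated number of detectable civilizations in the galaxy."""
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

n = drake(
    r_star=1.0,     # stars formed per year in the galaxy
    f_p=0.5,        # fraction of stars with planets
    n_e=2,          # habitable planets per star that has planets
    f_l=0.5,        # fraction of habitable planets that develop life
    f_i=0.01,       # fraction of those that develop intelligent life
    f_c=0.1,        # fraction of those that become detectable
    lifetime=1000,  # years a civilization stays detectable
)
print(n)  # 0.5 with these placeholder numbers
```

The point of this post is really about the f_l and f_i terms: how likely life is to start at all versus how likely it is to become intelligent once started.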

A graph:

Dates taken from Wikipedia's timeline of life and timeline of the future.

It was a pretty short amount of time, geologically, from when the Earth cooled down enough for oceans to start forming until we have evidence of the first life: just 120 million years.  And that's probably a conservative estimate.  But from there it took three quarters of a billion years for photosynthesis to arise.  Then another one and a half billion until one bacterium swallowed another in such a way as to turn it into a mitochondrion and become a big, complex eukaryotic cell.  Then another billion before we had real multicellular life.
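
To put those numbers side by side, here's a rough back-of-the-envelope tally in Python, using only the approximate durations quoted above.

```python
# Rough tally of the durations above, in billions of years (Gyr).
# Figures are the approximate ones quoted in this post, nothing more precise.
steps = {
    "oceans form -> first evidence of life": 0.12,
    "first life -> photosynthesis":          0.75,
    "photosynthesis -> eukaryotic cells":    1.5,
    "eukaryotes -> multicellular life":      1.0,
}

origin = steps["oceans form -> first evidence of life"]
total = sum(steps.values())

print(f"origin of life: {origin:.2f} Gyr")                        # 0.12 Gyr
print(f"life to multicellular complexity: {total - origin:.2f} Gyr")  # 3.25 Gyr
```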

So just looking at the timelines involved, it seems like the origin of life wasn't the hard part.  If you're interested, Nick Lane has some excellent books about the biochemical difficulties of these steps and why the origin of life might have been the easy part.  But for the Drake Equation, the important point is that becoming complex took a long time.

And it's also important that life only had so much time to become complex: in just another billion years the brightening Sun will heat up our planet enough to evaporate the oceans, and after that there's not much chance of intelligent life evolving.  It's also very lucky that photosynthesis showed up so early.  If the Earth's carbon dioxide atmosphere hadn't been broken down into oxygen, Earth might have had a runaway greenhouse effect by now.  And without oxygen in the atmosphere to form ozone, we might have lost the hydrogen atoms we need for water to the solar wind.

Looking at that timeline makes me feel optimistic that the reason there don't seem to be any aliens in the galaxy is that evolving intelligent life is hard, rather than that intelligent life tends to meet a grisly end.

The limitations of blindsight

Blindsight, made famous in science fiction circles by Peter Watts's novel of the same name, is a disorder caused by damage to the primary...