Monday, November 11, 2019

The limitations of blindsight

Blindsight, made famous in science fiction circles by Peter Watts's novel of the same name, is a disorder caused by damage to the primary visual cortex.  Sufferers typically lose all conscious perception in the part of the visual field corresponding to the damaged cortex.  Which sounds sort of like blindness.  If you cover their normally working eye, put a tomato on a table in front of them, and ask them what's there, they'll have no idea and say so.  But this is called blindsight rather than just blindness for a reason.  If you then ask this person to point to the object in front of them they'll point right at the tomato.

How does this work?  Well, the brain is composed of different parts that connect to each other in different ways and serve different purposes.  Strange as it may seem, the part that corresponds to your knowing there's a tomato there and the part that lets you point to the tomato are different, and it's possible to cut one off from your eyes while leaving the other attached.  The brain has a certain amount of plasticity but also a certain amount that's fixed, and it seems that brains can't adapt around this sort of damage.

If you can still pick up things you aren't aware of, then you might ask why evolution ever bothered to give us consciousness in the first place.  If we don't need it to interact with the world around us, then what is it doing?  Couldn't we conceivably be better off not wasting precious brain space on awareness of the world, when a leaner and more efficient brain might speed through life without pausing for reflection?  This is the question Peter Watts was asking in his book *Blindsight*, but as you might guess from my recent post on AlphaStar versus AlphaGo, the answer is probably no.

Indirect consciousness of consciousness

Consciousness is one of those words that people argue about endlessly, see the entry in the Stanford Encyclopedia of Philosophy.  Beyond philosophy there's been a certain amount of scientific study of consciousness too.  One of the most interesting, ironically enough, has its roots in Behaviorism.  Behaviorism is the idea in psychology that we can't necessarily trust what people say or believe about their motivations so we should root our conception of psychology just in how people behave.  Or in other words that we should avoid anthropomorphizing people.  It worked as a paradigm well enough and eventually some people in that tradition asked, "Well, if we can't just use introspection how can we study consciousness?"

Well, one obvious behavior tied to our commonsense idea of consciousness is that people can talk about their experiences.  If you show a tomato to a conscious person and ask what they saw, they'll be able to say that they saw a tomato.  Show a tomato to an unconscious person and they won't, even after they wake up, and even if you somehow opened their eyes without waking them.  So that's a lever scientists were able to use to study consciousness through experiment.

One important idea that came out of this was the idea of subliminal sensations.  If you project a tomato on a screen for long enough, subjects will be able to tell you there was a tomato there, and at a short enough duration the image will have no effect on them at all.  But there's an in-between region where they won't know what they saw, yet the image can still affect their behavior, say in how fast they press a button classifying a word they see next as a vegetable or not.  They'll be faster on the button if they've been primed with a tomato.  This is what is known as subliminal priming.

So we're back to information getting into the brain but us not being aware of it.  These scientists aren't looking directly at consciousness, the mystical sense of selfness, but they are looking pretty directly at our conscious awareness or unawareness of things.  You can even hook some sensors to people's brains and be able to tell if a stimulus was subliminal or not.

The importance of being liminal

Ok, so now that we can look conscious awareness in the eye, so to speak, what does it do?  Well, the first answer is sort of implicit in how the testing is done: people can speak of what they know, but of the rest they must remain silent.  As social creatures, being able to talk to each other about our surroundings is very important for planning and coordinating.  And that's great as far as humans and other advanced creatures go, but is this some facility of the brain that had to evolve along with language?  If so, why do so many different creatures have varying abilities to communicate instead of getting it all at once?  Well, there's another important thing that comes along with conscious awareness.

Subliminal stimuli disappear from the brain in a second or two.  They fade quickly from the sensors we strap to people's heads, and their effects on test responses fade just as fast.  It seems that everything that gets into your memory gets there by passing through conscious experience.

And that's the reason you can't have a creature without consciousness and expect it to interact productively with the world.  I can sit down at a table, close my eyes, wait a second, and still point from memory to the objects I'd seen on it.  Someone relying on blindsight can't do that.  Maybe, for philosophers who believe in a metaphysical consciousness that exists beyond matter, the person with blindsight had qualia in some place beyond our mortal ken, but those qualia had no effect on the world we live in.  And that's why evolution keeps this sort of conscious experience around despite the fact that it takes up precious brain matter that could have been used for something else.

Tuesday, July 23, 2019

Review of Democracy for Realists

One of the first posts I made on this blog was a review of The Myth of the Rational Voter by Bryan Caplan.  That book convinced me that retrospective voting, or rather politicians' fear of it, is the greatest part of what makes democracy work in practice.

And we do need to explain why democracy works in practice, because the evidence is that it does.  Democracy causes countries to be more peaceful and richer, and when women were given the vote, childhood mortality went down.  So there does seem to be something to this democracy business that we have to explain.

In Democracy for Realists, Christopher H. Achen and Larry M. Bartels set out to "assail the romantic folk-theory at the heart of contemporary thinking about democratic politics and government" and to offer "a provocative alternative view grounded in the actual human nature of democratic citizens."

Now, there's more than one thing you could investigate about democracies.  You could look at:
  • How individual people or different groups in society decide who to vote for,
  • Who wins elections, or
  • Which policies democracies end up pursuing.
When it comes to voting behavior, there are also a lot of stories you could tell to explain it.  Some might be:
  • The people make prospective decisions, voting based on which politician agrees with them on the issues.
  • The people make retrospective decisions, voting the bums out of office when they don't like how things have gone recently.
  • The people express their identity.
  • The people vote for the most attractive candidate or the tallest.
  • The people vote for the candidate that bought the most advertisements.
  • The people vote randomly.
For any given person their behavior will almost certainly be a mix of those stories, and each factor certainly has some effect on voting.  You could try to estimate which is most important.  But which story matters most depends on which of the three questions we're asking.  Imagine a world in which 40% of voters always voted Republican for reasons of identity, 40% always voted Democrat, 10% voted randomly, and 10% voted prospectively.  Most of the votes in that election would be identity based, but those votes would all cancel each other out, and in any large election the random vote would also split almost exactly evenly (the Central Limit Theorem at work).  So if you were asking how individual people decide who to vote for, voting would be all about people expressing their identity.  But if you're trying to figure out why one party sometimes wins and the other party sometimes wins, it's all about the people studying the issues.
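That toy world is easy to simulate.  A minimal sketch (the 40/40/10/10 split is the hypothetical from the paragraph above, not data from either book):

```python
import random

def run_election(n_voters=100_000, prospective_favors="R", seed=0):
    """Toy electorate: 40% identity Republicans, 40% identity Democrats,
    10% random voters, 10% prospective issue voters."""
    rng = random.Random(seed)
    tally = {"R": 0, "D": 0}
    for i in range(n_voters):
        share = i / n_voters  # deterministic assignment of voter types
        if share < 0.40:
            vote = "R"                 # identity voter
        elif share < 0.80:
            vote = "D"                 # identity voter
        elif share < 0.90:
            vote = rng.choice("RD")    # random voter; splits ~50/50
        else:
            vote = prospective_favors  # issue-driven voter
        tally[vote] += 1
    return tally

# 80% of ballots are pure identity, but they cancel out; the 10% of
# issue voters decide the winner every time.
print(run_election(prospective_favors="R"))
print(run_election(prospective_favors="D"))
```

Most ballots cast are identity ballots, yet which party wins is determined entirely by the small prospective bloc, which is the distinction between the two books' claims.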

Democracy for Realists did an excellent job of convincing me that most voting is identity based, but that fact in no way contradicts The Myth of the Rational Voter's case that retrospective voting is what matters most for deciding who wins.  We've almost always had an even or nearly even partisan identity split between two major parties.  The Republicans and Democrats were evenly matched when the South was Democratic, and when things switched around and the South went Republican they were somehow still evenly matched.  There have been a few brief periods of one-party dominance, such as when the Federalists imploded, but those seldom last for long, and a situation soon returns where elections shift one way and then the other based on the decisions of people who aren't firmly committed to either partisan camp.

Democracy for Realists also spent some time trying to show that there is no way that retrospective voting could produce good results, but not in any way I found convincing.

First is the idea that people often apply retrospective voting to things that aren't actually under the control of the government.  One example they give is the financial Panic of 1857, which they claim couldn't have been prevented because Keynes hadn't yet published his General Theory.  The authors must be unfamiliar with the silver/easy-money versus gold/hard-money axis of political conflict in the 19th-century United States to make that claim.  They also mention shark attacks swinging New Jersey in the 1916 election as another example, though there's a lot we do these days to reduce the number of shark attacks.

In one sense I'm being unfair to Achen and Bartels's argument because coming up with a solution to those problems was probably beyond the wisdom and ability of the politicians in charge during those times.  But importantly, neither the voters nor the politicians were in a position to know that.

Democracy for Realists argues that since politicians can't be sure of re-election even if they work to the best of their ability, they have no incentive to work hard for it.  To me that seems ridiculously black and white.  I have no perfect assurance that going to work tomorrow will result in my getting paid, since some accountant might have absconded with the company's money without my knowledge.  And if I developed some lethal disease and could give myself a 50% chance of recovery through an unpleasant drug regimen, I would undergo it even though it wouldn't be assured of curing me.  We all regularly put effort into things without sure payoffs, and the politicians who win elections are, mostly, those willing to work hard to win them.

These forces aren't 100% unique to democracy.  Dictatorships also have to worry a bit about being overthrown and can't ignore public opinion entirely.  But a free press and regular elections make the needs of the population a lot more salient for democratic politicians.

Just as we shouldn't rely on voters knowing which government policies can best prevent shark attacks or financial crises we shouldn't have our system depend on them knowing if such crises can be prevented or not.  The most important thing is that politicians are incentivized to figure out some way to control a crisis regardless of whether they or the voters know a way to do it.  And if they can't they'll be voted out and the next politician in will keep working to figure it out so that it doesn't get them booted too.

One criticism that Democracy for Realists makes of retrospective voting that I do think is on the mark is the electorate's recency bias.  If a politician has a four year term of which the first two see dismal economic performance but great economic performance on the last two then chances are they'll be re-elected.  But a good two years followed by a bad two years is a big disadvantage.  Politicians have noticed this and often engage in short sighted policies in election years in order to retain office.  This is a real problem and drawback of regular elections.

Another big problem is the mis-assignment of blame.  People often blame the president's party for the actions of a Congress controlled by the other party.  The Democrats in the second half of Hoover's term knew that by not passing the recovery legislation the president asked for they could greatly increase their chances of winning the presidency, and so they did.  The Republicans in 2010 knew that by thwarting Obama they could increase their odds of winning in 2012, and so they did.  This is a real problem and a drawback of our system of separation of powers.

Of course, I think the best solution to these problems is something more like the English parliamentary system.  Parliamentary systems seem to do better than presidential systems in terms of stability, so it's worth considering.

To summarize, democracy is a flawed system but it seems to do a better job of pushing politicians to look after the needs of their constituents than all those other forms that have been tried from time to time.

Thursday, June 6, 2019

Sometimes you need a new word

Let's say I'm telling a story about a hiker heading up into the mountains.  I mention that while he was passing under a cliff a pebble came loose and landed on him.  That might sting, but it wouldn't occur to you to ask if he died.  Let's say that instead I mention that a boulder landed on him.  Then you'd expect him to be quite dead.

That's the nice thing about these old English words that have been around a while and encode distinctions that matter in our everyday lives.  The difference between the pebble and the boulder is just a matter of degree, but it's one where the quantitative difference is big enough to become qualitative.  If we had just one word for rock, whoever I was telling the story to would have to ask follow-up questions, or I'd have to remember to attach an adjective every time; because we run into rocks so often, the language gives us separate words instead.

When you're talking about scientific things, though, you don't often have this choice of words.  Energy is energy and you're expected to use a number if you want to say whether it's a lot or a little.  Same with radiation.  The radiation emitted by your cell phone is so subtle you'd never be able to perceive it but a large enough amount of that same type of radiation could cause incredible pain or even cook you alive.

So "radiation" as a word can be dangerously ambiguous even before considering how its frequency affects things.  "Ionizing radiation" is a completely different beast in terms of hazards, even though we often use the shortcut "radiation" to refer to it too.

And the word "radiation" quickly brings us to the word "fallout."  In the first hydrogen bomb test at Bikini Atoll, Castle Bravo, the explosion was even larger than the scientists anticipated.  Huge amounts of calcium from the coral of the atoll were sucked into the fireball, where it was bombarded with neutrons and transmuted into ferociously radioactive elements.  The bunker built for the observers was barely enough to save them as particles of this radioactive material rained out of the sky.  Far downwind, the crew of the Lucky Dragon fishing boat were also exposed; they were burned and sickened, and one of them later died.

The public soon learned that radioactive fallout was a deadly serious matter.  Bombs that explode high in the air aren't so bad; the nitrogen and oxygen that make up most of the air can get whacked by neutrons without turning into anything too unpleasant.  But the ground is another matter, and nukes detonated close to the ground to destroy command bunkers, missile silos, hardened aircraft hangars, or sub pens would send plumes of deadly fallout downwind.

There's a website, Nukemap, you can play around with if you feel like frightening yourself about nuclear war.  According to it, a 5 megaton ground burst on New York could, if the wind were blowing the wrong way, deposit fallout here in Boston at a dose rate of about 1 gray per hour.  The single gray from the first hour would be enough to give me radiation sickness but not kill me; unless I could get to shelter, though, the dose would keep building up.  I'm not quite sure how to figure the decay curve, but without a shelter I'd either die in a day if I was lucky or in a couple of weeks if I wasn't.
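There is a standard civil-defense rule of thumb for that decay curve, the Way-Wigner approximation, under which the fallout dose rate falls off roughly as t^-1.2.  A back-of-envelope sketch (the 1 gray per hour reference rate is the Nukemap figure above; the decay law is the generic rule of thumb, not anything Nukemap told me):

```python
def dose_rate(t_hours, r1=1.0):
    """Way-Wigner rule of thumb: fallout dose rate decays as t^-1.2.
    r1 is the dose rate in Gy/h measured 1 hour after detonation."""
    return r1 * t_hours ** -1.2

def cumulative_dose(t_start, t_end, r1=1.0):
    """Integral of r1 * t^-1.2 dt from t_start to t_end (hours),
    which works out to 5 * r1 * (t_start^-0.2 - t_end^-0.2)."""
    return 5.0 * r1 * (t_start ** -0.2 - t_end ** -0.2)

# The "7-10 rule": every 7x increase in time cuts the rate about 10x.
print(dose_rate(7.0))              # ~0.097 Gy/h, down from 1.0 at one hour
# Unsheltered dose accumulated from hour 1 to hour 48:
print(cumulative_dose(1.0, 48.0))  # ~2.7 Gy, enough for severe radiation sickness
```

Under that curve most of the dose arrives in the first day or two, which is why even a few days of shelter matter so much.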

So fallout, in this context, is an immediately lethal threat, a danger that people believed, entirely accurately, would probably kill you if you were exposed to it.

In this context it's very understandable that when people heard that the Three Mile Island nuclear plant had released radioactive fallout, they were just a tad concerned.

The problem is that we were using the same word, fallout, to refer both to the 10 gray doses that'll kill you dead quickly in a nuclear war and to the .00008 gray doses you might get around Three Mile Island, which would increase your lifetime odds of getting cancer by about 5 in a million assuming the most pessimistic model of radiation-induced cancer.  There's a unit people have put together, the micromort, for thinking about very small chances of death.  Assuming that all those cancers are deadly, the radiation was about 5 micromorts, which is about as dangerous as going scuba diving or driving 1000 miles by car.  Not so small a risk that it doesn't deserve respect, but small enough that it doesn't deserve fear.
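The micromort arithmetic is just a linear multiplication.  A sketch, assuming the pessimistic linear no-threshold model with the commonly cited figure of roughly 5% fatal-cancer risk per sievert, and treating grays and sieverts as interchangeable for this kind of exposure:

```python
def cancer_micromorts(dose_gray, fatal_risk_per_sievert=0.05):
    """Linear no-threshold sketch: excess fatal-cancer risk scales
    linearly with dose, with no safe threshold.  Returns micromorts
    (one micromort = a one-in-a-million chance of death)."""
    return dose_gray * fatal_risk_per_sievert * 1e6

# A 0.0001 Gy dose comes out to about 5 micromorts, roughly one scuba
# dive's worth of risk.  A 10 Gy acute dose is a different beast
# entirely: deterministic, rapidly lethal radiation sickness, not a
# statistical cancer lottery.
print(cancer_micromorts(0.0001))  # 5.0
```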

Fukushima exposed the surrounding population to similarly small doses.  Expose millions of people to a several-micromort risk and you will get some deaths at the population level, but again, not at a level worth treating with fear rather than caution.  Chernobyl, well, Chernobyl occupied a "stone" level between the "boulder" of World War III's fallout and the "pebble" of Three Mile Island.

I've talked as if the effect of using one word for two very different situations has been all about making people too afraid of small amounts of fallout, but that's not the only direction the confusion goes in.  Growing up after the Cold War and MAD, worrying only about radiation from power plants, I had the idea that fallout was something that would maybe give you cancer after a few years, and that people during the Cold War built fallout shelters as a matter of long-term health rather than short-term survival.  I was very badly misled by my understanding of the term.  I think this is a problem for people of my generation: while we might be more afraid of radiation from reactor meltdowns than we should be, we also treat the idea of fallout from a nuclear war with inappropriately low levels of terror.  Terror is the appropriate response to the continued existence of the huge US and Russian nuclear arsenals, and when we think about the dangers that may face humanity it's important to remember that the shadow of the mushroom cloud hasn't gone away.

Monday, April 29, 2019


So, MIT has this IM system called Zephyr that I still unaccountably find useful.  Clients generally let you display a signature with your message that might be some static bit of text or might be the result of a script if you’re more into that.  I have a script that selects from a bunch of sayings, jokes, etc that I’ve collected over the years.  And which I now want to inflict on you.
Please forgive the puns and don’t take these too seriously.

  • Unfortunately the universe doesn’t agree with me.  We’ll see which one of us is still standing when this is over. 
  • Reality is what you can get away with.
  • The truth is whatever you can’t escape.
  • I used to think that the brain was the most wonderful organ in my body.  Then I remembered who was telling me this.
  • I feel more like I do now than I did a while ago.
  • I intend to live forever. So far, so good.
  • Don’t ascribe to malice what can be adequately explained by stupidity.
  • You can’t know that this sentence is true.
  • Imagine there were no hypothetical situations.
  • The views expressed here do not necessarily represent the unanimous views of all parts of my mind.
  • Don’t immanentize the eschaton!
  • Because anti-induction has never worked in the past I can be sure it will now.
  • Knowledge is power.  Power corrupts.  Study hard, be evil.
  • Put the romance back in necromancery.
  • Everyone generalizes from one example. Or at least I do.
  • You don’t understand society until you can build one out of nothing but signals and incentives.
  • When you have eliminated the impossible, whatever remains, however unlikely, is probably an artifact of an incomplete hypothesis space.
  • I, for one, like roman numerals.
  • Debugging is like being a detective in a crime novel where you’re also the murderer.
  • I don’t have pet peeves. But I do feed a number of feral peeves that live in the neighborhood.
  • Napoleon Bonaparte was a master strategist who achieved immortality by living on in the form of delusional people all over the future
  • “Roses” is how / you start poems of this meter / but poems about poems / are more meta and neater.
  • I know not with what weapons World War 3 will be fought, but World War 4 will be fought with adorable cockroach-sized swords.
  • When did the Japanese start eating eggs?  A long γŸγΎγ”!
  • Usually the explanation for why a thing exists is not the reason it started existing, but rather the reason it continues existing.
  • The adjective “indescribable” is, by definition, never correct.
  • Failure isn’t an option.  It’s mandatory.
  • Start every day like you woke up surrounded by a circle of wizards who perform a summoning spell once a century
  • Omniscience makes reasoning about counterfactuals harder.
  • Any machine is a smoke machine when you use it wrong enough.
  • I believe that inside every tool is a hammer
  • I said raise the barn, not raze it!
  • Remember with increasing sample size, your averages become more reliable - The Ns justify the means.
  • New EA cause area: Banning everything else Thomas Midgley invented, just to be safe.
  • Your eyes don’t see, you do.
  • My favorite three bean soup is vanilla soy latte.
  • You will forget that you ever read this zsig.
  • Gaze not into the abyss, lest you become recognized as an abyss domain expert, and they expect you keep gazing into the damn thing.
  • Made in China? Silly plate, you are made of China.
  • Give a man a fire and he'll be warm for a day.  Teach a man to fire and you'll get your liver pecked out by an eagle every day for the rest of eternity.
  • When trying to understand entropy, remember that sitting still with your eyes closed will make you ever more lost - not within the universe, but between universes.
  • Nothing in life is as important as you think it is, while you are thinking about it.
  • Blessed are those who can gaze into a drop of water and see all the worlds and be like who cares that’s still zero information content. 
  • The First Rule of Robot Fight Club is you DO NOT TALK about Robot Fight Club, or, through inaction, allow Robot Fight Club to be talked about.
  • Correlation correlates with causation because causation causes correlations.
  • Absence of evidence is evidence of absence.
  • Market exchange is a pathetically inadequate substitute for love, but it scales better.
  • Computer science is like omnipotence without omniscience.
  • Your existence is not impossible.  But it’s also not very likely.
  • Finally, a study that backs up everything I’ve always said about confirmation bias!
  • Nobody is smart enough to be wrong all the time.
  • Everything happens for a reason. The reason is a chaotic intersection of chance and the laws of physics.
  • Essentially, all models are wrong, but some are useful.
  • We think much less than we think we think.
  • If at first you don’t succeed, try, try again. Then quit. No use being a damn fool about it.
  • Because ten billion years’ time is so fragile, so ephemeral, it arouses such a bittersweet, almost heartbreaking fondness.
  • Language will evolve irregardless of barriers.
  • A library of all possible books contains less information than a single volume.
  • Is it crazy how saying sentences backwards creates backwards sentences saying how crazy it is?
  • Do unto others 20% better than you would expect them to do unto you, to correct for subjective error.
  • Though through rough boughs
  • I’m just sayin’, everyone that confuses correlation with causation eventually ends up dead.
  • I believe that this nation should commit itself to achieving the goal, until we’ve landed on the moon, of preventing this decade from ending.
  • If you die in a documentary, you die in real life.
  • My intuition pump won’t turn off and now my basement is full of scary ideas.
  • One Weird Trick to hijack the inner voice of hundreds of minds by posting this message
  • Most supposed conspiracy “theorists” don’t come up with their own theories; they are conspiracy *enthusiasts* at best.
  • Have you tried throwing money at the problem? Yes? Well have you tried throwing it harder, using deadlier forms of currency?
  • Have you tried reducing the problem to a harder one which no one will expect you to solve?
  • Have you tried raising the temperature until you have enough thermal energy to overcome the problem’s energy barrier?
  • Keep your identities small, so you can fit more of them in your head.
  • You are a useful abstraction.
  • I Went To The Platonic Realm And All I Got Was THE Lousy T-Shirt.
  • A society where ubiquitous 3D printing makes the delivery of physical objects obsolete. A post-post society.
  • Appeals to Purity Intuitions Considered Toxic
  • Yog Sothoth is the golden key, the accursed result of the NSA’s demands. Do not call up what you can’t put down, cried the opsec researchers.
  • Know thy enemy and know thyself.  You can combine these tasks and so double efficiency using the obvious method.
  • Consciousness is the weakest form of telepathy, where you’re limited to reading your own mind.
  • A good pun is its own reword.
  • A new drug prevents the brain from speculating. You’ll never guess what happens when you take it.
  • Philosophy is mainly useful in inoculating you against other philosophy. Else you’ll be vulnerable to the first coherent philosophy you hear.

Monday, April 8, 2019

How likely is it that there was life on Mars?

People have been thinking about life on Mars for a long time, at least since the writings on the illusory channels of Mars were mistranslated as being about canals, if not longer.  The Viking landers carried experiments to help detect life, and people have been looking ever since.  If life ever existed on Mars, it's quite likely that the loss of hydrogen and the breakdown of the magnetic field have ended it, so I wouldn't bet on there being life now.  But for reasons I'll explain, I think it's actually pretty likely that there was life at some time in the past.

A while ago I blogged about a timeline of life on Earth, extending from its origin in Earth's past to its end in the future (if we don't do something about it).  Here it is again.

Of course that's not how things went, or will go, on Mars.  Mars formed at the same time as Earth did, and I can't find any reason liquid water would have taken a very different amount of time to form there.  But we don't know about life, we'd have to add some bits about liquid water going away due to cold and low pressure, and it will take longer for the Sun's expansion to bake Mars since it's farther out.

How long did Mars stay wet?  We don't really know precisely but this article at least says that we know there was liquid water 3 billion years ago.  So a timeline like this.

As was pointed out when I wrote about that last timeline, we know that on Earth it took at most 120 million years for life to appear after liquid oceans did but we don't really have a minimum estimate.  The geology of Mars seems close enough to Earth's that it should have had the same sort of geothermal vents to provide free energy to pre-photosynthetic life as Earth had.

To nail down how hard it is to evolve life really requires looking at a bunch of planets that evolved life and a bunch that didn't.  Hopefully space exploration will get us there at some point.  But until then it sort of seems that evolving life itself is pretty easy compared to evolving photosynthesis or mitochondria or multicellularity.

Now, you can bring in anthropic reasoning and say that the fact that we're here to observe the result means we're probably not looking at the typical case, and that's fair enough.  But there's no reason to think that luck should have been concentrated at the first stage, as opposed to Earth lucking out in how quickly photosynthesis evolved, or in the many disasters that nearly ended life not having been a bit worse.

And Mars is a lot smaller than Earth, with about 1/3 the surface area.  If the origin of life is a rare chance event whose odds scale with the amount of habitable area available, you'd expect it to take about 3 times as long for life or photosynthesis or whatever to arise on Mars as on Earth.  If we assume that everything takes proportionately as long to happen on Mars as on Earth, that would mean life but no photosynthesis.
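Here's that proportional-scaling guess worked out.  The 120 million year upper bound for life comes from the earlier discussion; the photosynthesis timing and the length of Mars's wet window are round numbers I'm assuming for illustration:

```python
# If an evolutionary "invention" arrives as a rare chance event whose
# rate is proportional to habitable surface area, its expected waiting
# time scales inversely with area.
AREA_RATIO = 1 / 3          # Mars surface area vs Earth (true value ~0.28)
MARS_WET_WINDOW_MYR = 1400  # rough: oceans ~4.4 Gya, liquid water to ~3 Gya

earth_waits_myr = {
    "first life": 120,       # upper bound from the Earth timeline
    "photosynthesis": 1500,  # rough illustrative figure
}

for event, t_earth in earth_waits_myr.items():
    t_mars = t_earth / AREA_RATIO
    verdict = "fits" if t_mars < MARS_WET_WINDOW_MYR else "does not fit"
    print(f"{event}: ~{t_earth} Myr on Earth -> ~{t_mars:.0f} Myr on Mars ({verdict})")
```

Life at ~360 Myr fits comfortably inside even a short wet window; photosynthesis at ~4500 Myr doesn't come close, which is the post's conclusion in miniature.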

This is all highly speculative, but it's the best guess I have: that at some point Mars evolved simple chemosynthetic life of the sort you'd find around deep-sea vents on Earth, but the planet dried out before it could evolve photosynthesis.

Thursday, April 4, 2019

A reason for having an electoral college

I'm not going to try to defend the formula of adding a state's senators to its representatives to figure out how much say it should have in who becomes president.  You can make an argument for giving minorities extra influence, but why should that apply only to minorities that correspond to a state's borders?  Why should Rhode Island get extra influence so politicians pay attention to it, while upstate New York is dominated by New York City?

But there is a reason to aggregate votes at the state level: it's at the state level that voting is organized and counted.  If a state were strongly dominated by one party, that party would be tempted to inflate its vote count to influence the national election.  But if the whole weight of the state is going to that party anyway, there's little incentive to do something like that.  And in more balanced states it's (hopefully) harder to get away with.
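The incentive argument can be made concrete with a toy example (all the numbers here are hypothetical):

```python
# Two states: one lopsided for party A, one leaning to party B.
# name: (votes_for_A, votes_for_B, electors)
states = {
    "Lopsided": (600_000, 200_000, 9),
    "Balanced": (300_000, 720_000, 11),
}

def popular_winner(states, pad_A=0):
    """National popular vote; pad_A is fraudulent extra A votes
    manufactured in the Lopsided state."""
    total_a = sum(va for va, vb, e in states.values()) + pad_A
    total_b = sum(vb for va, vb, e in states.values())
    return "A" if total_a > total_b else "B"

def electoral_winner(states, pad_A=0):
    """Winner-take-all by state, electoral-college style."""
    ec_a = ec_b = 0
    for name, (va, vb, electors) in states.items():
        if name == "Lopsided":
            va += pad_A  # fraud is easiest where one party runs the count
        if va > vb:
            ec_a += electors
        else:
            ec_b += electors
    return "A" if ec_a > ec_b else "B"

# Honest count: B wins both ways.  Pad 50,000 fake A ballots into the
# state A already dominates: the popular vote flips, but the
# electoral tally doesn't budge.
print(popular_winner(states), electoral_winner(states))                  # B B
print(popular_winner(states, 50_000), electoral_winner(states, 50_000))  # A B
```

Under winner-take-all aggregation, padding the count in a state your party already dominates buys you nothing, which is exactly where padding would be easiest.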

This doesn't really work if the mechanics of voting are handled at a different level than the one votes are aggregated at.  But because votes are aggregated at the same level where they're counted, a whole category of problems tends to disappear before we normally have to worry about it.

Sunday, March 24, 2019

What is "Moore's Law"?

Now that we're in the era of people wondering if Moore's Law is ending it's probably worth looking a little bit at what Moore's law really is.

Way back in 1965 Gordon Moore, then at Fairchild Semiconductor and later a co-founder of Intel, published a paper showing that as the years went by the number of transistors you could economically cram onto a piece of silicon went up.  The more transistors per chip the cheaper each transistor was to make, but at the same time the greater the chance that one of those transistors would have a defect and the whole chip would have to be thrown out.  Between those two forces there was a happy medium number of transistors that would get you the most compute for your buck, and Moore observed in 1965 that that happy medium number of components tended to double every year with improving technology.  He also observed that with decreasing device size the power used per transistor would decrease, so that overall power consumption would remain manageable despite the explosion in the number of circuit elements.
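The observation is easy to state as a formula.  A sketch (the 64-component starting point is my reading of the 1965 paper's famous graph, so treat it as approximate):

```python
def moores_law(year, base=64, start=1965, doubling_years=1.0):
    """Components per chip assuming a doubling every `doubling_years`
    years, starting from `base` components in year `start`."""
    return base * 2 ** ((year - start) / doubling_years)

# Doubling yearly from 64 components in 1965 gives Moore's famous
# prediction of ~65,000 components by 1975:
print(moores_law(1975))  # 65536.0
# At the slower doubling-every-two-years pace adopted in 1975,
# extrapolating to 2020 lands around 1.2e10 transistors, which is
# the right ballpark for a large modern chip:
print(f"{moores_law(2020, doubling_years=2):.2e}")
```

The remarkable thing isn't the formula, which is just compound growth, but that the industry kept tracking it for half a century.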

Of course, nobody called that Moore's Law back then.  Moore continued talking about the future of the integrated circuit industry, especially the trend of adding more, smaller transistors to integrated circuits, but it was not until 1975 that people began to really pay attention and started throwing around the term "Moore's Law."  This confuses matters because it was also in 1975 that another famous paper was published by a group headed by Robert Dennard.  It laid out some of the consequences of shrinking transistors, in particular that smaller transistors could be clocked faster over time.

When "Moore's Law" came into being as a phrase, the size of transistors, how many it made sense to put on a chip, how fast they were clocked, and how much power they used per calculation were all improving proportionally to each other, and nobody really bothered distinguishing between them.  At one point someone asked Moore about this and he issued a memo endorsing this broader usage of "Moore's Law."  Sadly I don't know where to find it online, but I received a photocopy in my MIT course notes on the subject.

Everything went along quite well until 2005.  It had long been known that transistors leak some charge, but the quantities leaked were so small that nobody worried about it; the amount of charge used in switching bits between 0 and 1 was far larger.  But as the generations of transistors went by, leakage kept growing relative to active power, and around 2005 the two curves intersected and people had to start changing how they designed their transistors.
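The crossover dynamic can be sketched with a toy model (my own invented numbers, not real process data): per-transistor switching energy improves quickly with each generation, while leakage improves much more slowly, so relative to dynamic power it effectively grows until the curves cross.

```python
# Toy model of the ~2005 power crossover.  All numbers are
# invented for illustration, not real process data.
generations = range(10)

# Dynamic (switching) power per transistor falls ~30% per generation.
dynamic = [100 * 0.7 ** g for g in generations]
# Leakage, modeled here as growing relative to a fixed baseline,
# since it improved far more slowly than dynamic power.
leakage = [1 * 1.5 ** g for g in generations]

# First generation where leakage catches up with dynamic power.
crossover = next(g for g in generations if leakage[g] >= dynamic[g])
print(crossover)
```

Before the crossover generation, leakage is a rounding error; after it, leakage dominates the power budget, which is roughly what chip designers ran into in the mid-2000s.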

With that, the relationship between transistor size and speed that Dennard et al. had discovered broke down, and while transistors continued to shrink they could no longer be clocked as aggressively; clock rates have only progressed incrementally since then.  We have continued to add more, smaller transistors to our chips and we've managed to continue making them use less power.  But one of the four elements is gone.

As I've written before, Moore's Law won't last forever.  You can only shrink a transistor so much.  But I wonder if I wasn't precisely right.  Yes, transistors may stop shrinking, but will the price of a transistor stop going down?  Will we stop being able to make transistors more and more reliably?  I think we'll hit those limits too eventually, but that part, the original part, of Moore's Law might be the last to run out of steam.

Thankfully, again, we know the fundamental limits to how efficient we can make a computational process, and we're only at 0.00001% of the theoretical limit.  We've also been making computation more power efficient for a lot longer than we've been making transistors: through mechanical computers, then electromechanical ones, then vacuum tubes, then finally the transistor.  So it makes sense that progress along that axis will continue even after Moore's Law is completely over, though who knows what form it will take.
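The fundamental limit alluded to here is Landauer's principle: erasing one bit of information at temperature T costs at least k_B·T·ln 2 of energy.  A quick back-of-the-envelope, assuming room temperature:

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K (exact in the 2019 SI)
T = 300.0            # room temperature, kelvin

# Landauer limit: minimum energy to erase one bit.
landauer_j_per_bit = K_B * T * math.log(2)
print(f"{landauer_j_per_bit:.2e} J per bit")  # on the order of 3e-21 J

# For comparison, take a rough, illustrative figure of ~1e-15 J
# per bit-flip for real hardware (order of magnitude only).
ratio = 1e-15 / landauer_j_per_bit
```

The comparison figure is an assumption for illustration, but on any reasonable estimate real hardware sits many orders of magnitude above the floor, which is the point: there is a long runway for efficiency gains even after shrinking stops.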

Thursday, January 31, 2019

Is AlphaStar what intuition without reason looks like?

DeepMind, a group owned by Google, has been making waves recently with their game-playing AIs.  First there was the one that taught itself to play Atari games.  Then, most famously, they created AlphaGo, which went on to beat the world champion at Go.  That really generated waves, since nobody had been expecting a computer to crack Go any time soon.  They've also done a few other things, such as predicting what shapes proteins fold into just from their amino acid sequences.

Last Thursday they revealed their newest creation, AlphaStar, a Starcraft 2 playing bot.  Starcraft 2 is the sequel to a game I played way back in high school.  It's an example of what is called a real-time strategy game, or RTS.  The way these work is that you have a bunch of soldiers or other forces which you use to fight your opponent.  But at the same time you have other units under your command that can gather resources, which you can use to create more units.  So you have to make tradeoffs between creating more resource gatherers to help you in the long run or more fighting units to help you right now.  And since you build your army as the game progresses, you have to decide on a composition that will hopefully perform well against your opponent's.  You're giving orders to all your units in real time: your worker might be in the middle of gathering resources when you say "Hey, build this building there" and then it starts doing that instead.  Also, you generally can't see the entire map at once but only those places your units can see, so there's an aspect of scouting out what your opponent is doing and also preventing them from scouting what you're doing.

All of this is very different from a game like Go or Chess where the entire board is known and the players take turns moving.

All this raises the question of how AlphaStar works.  The team has written a blog entry about this but it doesn't shed a whole lot of light.  We know that it uses a deep neural network like AlphaGo did and we know that it decides on a variable amount of time to wait between actions.  More information will presumably be available when they publish their journal paper but I'm going to go and do some speculation here.

AlphaGo was a combination of a deep neural network and an alpha-beta search.  The way an alpha-beta search works is that, at the top level, you look at all the possible moves you can make and choose the one that gives you the best result.  How do you know which one gives the best result?  Well, for the board after each move you run the same search from your opponent's position and assume that they make the best move available to them.  In chess you might look at each of a dozen or so possible moves you could make, look at the dozen moves your opponent could make after each for 144 total, look at the dozen you could make in response to each of those for 1728 positions total, and so on.  If you hit a victory condition for you or your opponent you can stop searching, but usually the exponential explosion of possibilities will overwhelm the computer before that happens.  So a simple way to do it is to search to some depth and use some metric of who is ahead, like assigning point values to pieces and totaling them.  More sophisticated chess programs have more sophisticated ways of determining how good a given board position is.  They also have ways of guessing what the best move is and only considering sensible moves rather than exploring down every path to the same depth.
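The search described above can be sketched in a few lines.  This is a generic minimax with alpha-beta pruning over an abstract game tree, not DeepMind's actual code; `children` stands in for move generation and `evaluate` for the "who is ahead" metric.

```python
def alphabeta(node, depth, alpha, beta, maximizing, children, evaluate):
    """Minimax with alpha-beta pruning over an abstract game tree.

    `children(node)` yields successor positions; `evaluate(node)`
    scores a position from the maximizing player's point of view.
    """
    moves = list(children(node))
    if depth == 0 or not moves:
        return evaluate(node)
    if maximizing:
        value = float("-inf")
        for child in moves:
            value = max(value, alphabeta(child, depth - 1, alpha, beta,
                                         False, children, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:   # opponent would never allow this line: prune
                break
        return value
    else:
        value = float("inf")
        for child in moves:
            value = min(value, alphabeta(child, depth - 1, alpha, beta,
                                         True, children, evaluate))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value

# Tiny worked example: a depth-2 tree encoded as nested lists,
# where the leaves are numeric scores.
tree = [[3, 5], [2, 9]]
children = lambda n: n if isinstance(n, list) else []
evaluate = lambda n: n  # leaves are their own score
best = alphabeta(tree, 2, float("-inf"), float("inf"), True, children, evaluate)
print(best)  # max(min(3, 5), min(2, 9)) = 3
```

The pruning is what makes deep search feasible: once one branch is known to be worse than an already-explored alternative, the rest of that branch is never visited.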

AlphaGo's big breakthrough was using expertly trained neural networks to decide how good a board position was and which moves seemed promising and were worth exploring.  Neural networks work very well for that sort of thing.

But as far as I can tell AlphaStar works entirely with neural networks and doesn't have anything like the alpha-beta framework that AlphaGo used.  That seems to have left it with some vulnerabilities of the sort that AlphaGo didn't have.  It won most of its games, but in the final exhibition game the player who goes by the nickname MaNa was able to beat it.  Partially this was just good play, but there was a window where AlphaStar had the upper hand and was sending its army to destroy MaNa's base.  To prevent this, MaNa loaded a couple of units into a flying transport and sent them to harass AlphaStar's production buildings.  AlphaStar saw this and turned its big army around to head off the threat.  Then MaNa retreated the transport and AlphaStar moved its army back to the attack.

So far, so typical of a high level Starcraft 2 game.  But then MaNa did the same thing and AlphaStar responded the same way.  And it happened a third time.  Then a fourth.  At this point a human player would have seen that this would keep happening and adjusted their play.  But AlphaStar was, in a very robotic way, incapable of seeing the pattern and gave MaNa enough time to create a counter to its army and lost when MaNa finally attacked.

In his book *Thinking, Fast and Slow*, Daniel Kahneman describes two different systems that seem to coexist in each of our brains.  There's system 1, which is fast, automatic, frequent, emotional, stereotypic, and unconscious.  Then there's system 2, which is slow, effortful, infrequent, logical, calculating, and conscious.  We use system 1 most of the time, to quote Kahneman, "We think much less than we think we think," but we also use system 2 for many tasks.

When I look at AlphaStar's performance in that game it looks like a system with a finely tuned intuition for which moves will be best at any given moment, a superbly trained system 1.  But at the same time it seems to utterly lack any reflective capability or abstract reasoning, which is to say a system 2.  AlphaGo had the framework of the alpha-beta search to fall back on when intuition failed and so effectively had both, but AlphaStar doesn't, rendering it vulnerable to humans who grasp its weakness.

Deep neural networks have taken the AI world by storm for good reason.  They seem to be able to duplicate basically anything a human system 1 can do.  But they can't substitute for system 2.  This is maybe a bit ironic since partial substitution for system 2 is what computers have historically been best at.  Still, it looks like we're at least one paradigm short of an AI that can fully replicate human intelligence.

Tuesday, January 1, 2019

Drones still need license plates

I wrote earlier about how we need to figure out some way to give drones some equivalent of license plates to let their users be identified.  Anonymity is all fine and dandy online but far less so when someone is interacting with the real world.

Well, recently a drone shut down a major UK airport for a day.  Police were entirely unable to find out who was responsible.   Hopefully the manhunt will scare people off trying similar things in the future but, well, the person responsible did get away so maybe not.

We're trying to make our SARS-2 tests better than we should

Ok, that's a somewhat provocative title but I think it's basically accurate.  During this pandemic the US in particular has had a pr...