Sunday, November 18, 2012

Tax and Spend, often better than the alternative

To grossly simplify policymaking, when there's a problem there are usually three options available to the government.  You can always ignore the problem.  You can raise money and pay someone to deal with the problem.  Or you can pass laws forcing some third party to deal with the problem.  Phrased that way, the last often sounds like a bad idea, but if you pick a third party that is unpopular, or that seems like it ought to be helping with the problem anyway, then specific plans can sound quite appealing.  But I'd argue that there are a couple of reasons to resist the urge to do this even when it sounds like a good idea at first.

The first is that by putting a burden on a specific group, you're creating an incentive for people not to join that group.  If you'd prefer that people didn't join that group then this is a pretty good deal.  But often you have a group, like the people who employ poor people, that you don't want to shrink.  When you're passing a law, though, it's easy to assume that the group is static and never grows or shrinks, that the people in it are objects to be acted upon who won't respond to the incentives around them by choosing new actions of their own.

The other problem is that while things the government pays for appear in the budget, and so get re-examined at budget time each year in terms of whether they're worth their cost, mandates on groups don't naturally lend themselves to being re-examined this way.

And, of course, there's always the risk that the costs being created won't be borne by the people you assume are going to bear them, but that the people you've given a mandate to will turn around and find a way to pass the expense on to someone else.  This post was inspired by seeing a section of a paper that Tyler Cowen had linked to:

I consider the labor-market effects of mandates which raise the costs of employing a demographically identifiable group. The efficiency of these policies will be largely dependent on the extent to which their costs are shifted to group-specific wages. I study several state and federal mandates which stipulated that childbirth be covered comprehensively in health insurance plans, raising the relative cost of insuring women of childbearing age. I find substantial shifting of the costs of these mandates to the wages of the targeted group. Correspondingly, I find little effect on total labor input for that group.
So there you have it.  Mandating that employers pay for better insurance coverage for women just resulted in the financial burden falling on women.  Had the government instead decided that it didn't like how the market results were falling out and paid for the cost of the insurance itself, we wouldn't have seen that.

People comparing the US and European states often think of the US government as much smaller, but a lot of that is simply the US's greater preference for achieving its ends by mandate rather than by paying for things directly.

Non-Volatile Memory Arrives

Previously I've talked about RRAM, and how non-volatile memory is going to come in and cause lots of disruption in computing.  The non-volatile part still looks to be happening, but it appears I might have been wrong about it being RRAM that does it: samples of the first standard memory sticks of non-volatile RAM are actually being sent out (PDF), and they're not RRAM like I expected.

Rather, it's MRAM, or magnetic RAM.  That stuff has actually been around for a while; I used some back in '08 when I needed a bit of non-volatile memory I could write to very fast but didn't need much storage.  That last part was the reason it wasn't in wide use.  MRAM was fast, and low power, and many other wonderful things.  But each individual MRAM cell was also very big, which meant that you couldn't fit very many of them on a chip.  And that meant that on a bit-by-bit basis it was very expensive.  Recently, though, people have figured out a way to make MRAM cells much smaller using a technology called Spin Torque (ST) MRAM.  And that's what the memory sticks going out now are using.

ST-MRAM does have one disadvantage compared to regular MRAM.  Whereas standard memory and plain MRAM can be written all day forever and never wear out, it looks like the new ST-MRAM will have a finite write endurance.  Luckily, what I've been able to gather says that that write endurance is on the same order of magnitude as RRAM's - stupendously larger than that of Flash memory - and so something you could reasonably put in your computer if you have a few levels of cache sitting between your processor and main memory, like all modern computers do.

These are still early days, though.  While Everspin is sending out samples of a standard DDR3-1600 memory that you could plug into your PC and use, this first one is only 64 megabits.  That's much better than a memory module made from the older style of MRAM would have been, but still not terribly impressive.  You see, they decided to make the first batch on an old, cheap, and conservative 90 nm process.  These days, state of the art components come out on 20 nm or 22 nm processes.  Especially memory, which, being fairly simple and regular, is often the first thing foundries are able to produce in bulk on a new process.

Was this because it's much less expensive to get things fabbed on older processes and they weren't sure of market demand yet?  Is it just that they're getting the kinks worked out as they scale MRAM down from humongous to reasonable sizes?  Probably both, I'd guess.  But if they are able to scale the chips down to take advantage of modern processes, then you'll be seeing 1 or 2 GB of memory per stick, which is pretty reasonable.  Not quite as dense as our current DRAM, but if they're already able to achieve current DRAM speeds at 90 nm, I imagine that later generations will end up much faster.  And much more power efficient too, since the cells don't have to be refreshed regularly.
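To put rough numbers on that claim - this is my own back-of-envelope arithmetic, not Everspin's, and the ideal quadratic scaling plus the 16-dies-per-stick layout are assumptions - cell area shrinks roughly with the square of the process feature size:

```python
# Idealized density scaling: capacity per die grows roughly as the
# square of the ratio of old to new process feature sizes.
def scaled_capacity_mbit(base_mbit, old_nm, new_nm):
    return base_mbit * (old_nm / new_nm) ** 2

# Today's 64 Mbit die at 90 nm, shrunk to a 22 nm process...
per_die_mbit = scaled_capacity_mbit(64, 90, 22)   # ~1071 Mbit, about 1 Gbit

# ...and assuming a stick built from 16 such dies (a common DIMM layout):
per_stick_gbyte = per_die_mbit * 16 / 8 / 1024    # bits -> bytes -> gigabytes
print(round(per_die_mbit), round(per_stick_gbyte, 1))  # prints: 1071 2.1
```

That lands right around the 1 or 2 GB per stick figure; real shrinks never achieve the ideal square-law scaling, so treat it as an optimistic upper bound.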

Will ST-MRAM dominate future main memory?  Or will other technologies come into their own before it scales down far enough to be competitive from a storage perspective?  I don't know but I bet things will be interesting.

Here are a couple of other places that have covered this: SemiAccurate, XBitLabs.

Sunday, June 10, 2012

Cognitive Dissonance

There are lots of ideas and concepts from psychology that I wish more people knew about.  I was reading the first post of a new blog recently, and I was struck by a thought.  The post talked about two propositions that were both pretty reasonable, but were seldom both believed by the same person because they had different implications for policy.  Someone believing only one or the other could go about their life believing that their actions were perfect and had no downsides, but believing both meant having to live with making a trade-off, whatever you did.  It occurred to me that this was a perfect example of what a psychologist would call cognitive dissonance, and that since situations like this are so common, the human bias towards resolving cognitive dissonance by changing one's beliefs was actually more of a problem than I'd thought.

So what is cognitive dissonance?  Well, it's when you hold two beliefs that, when combined, make you feel bad, so you try to resolve the bad feeling by changing your beliefs.  Take the famous Aesop's fable about the fox and the grapes.  The fox sees some grapes hanging up high and thinks "Mmm, tasty".  He tries to get the grapes but he isn't able to jump high enough.  Torn between the ideas that the grapes are tasty and that he can't have them, he decides that the grapes were probably sour.

"But Andrew!" someone knowledgeable about psychology may object, "isn't this effect well known?  Like since the 50s?  Why are you going on about something so basic and so well studied?"

Well, partially the reason is that I didn't appreciate the full significance of the idea until recently.  And partially I expect that many of the readers here won't have run into the idea before either.  Also, it's always useful to be reminded about these things, because they can be really hard to avoid.  Recently I was involved in a debate about whether it was useful for people to learn about the law, or whether it was impossible to learn enough law to guarantee that you would never unknowingly commit a crime.  Of course, said like that it's pretty easy to see that there's no contradiction, but none of us participating saw that clearly enough to bring it up.

The easiest place to see this, if you're looking, is in politics, where people's intense desire to associate with a group is strongest.  A lot of political disagreement boils down to values, like whether we should have gay marriage or not, but a lot is still bound up in factual questions about the consequences of various decisions, like how we should regulate banks.  In the latter case you should normally expect that any policy decision will have consequences that are both good and bad, and if you find yourself thinking that the evidence goes all one way or all the other, that's a good sign that you're probably deluding yourself.  It's OK to think that the benefits are lopsided, that happens, but you should be able to admit that some of your opponents' points are valid without feeling the need to stretch your creativity to deny every last one.

Complexity and Democracy

Its always seemed to me that complexity is one of the greatest impediments to good government.  This might just be my bias as someone who's studied computer science but not law, but it seems that the two have a lot in common in that you're writing down instructions to be followed in a variety of situations, and the more rules you create the less well you'll be able to predict the consequences of any change, since there are so many possible interactions.  That's a very general critique, though, and thankfully the fact that so many of the pages of law that Congress generates each year are special cases or exemptions means that any two pieces have less chance of interacting than you might suspect at first glance.

But I'm not writing this post to talk about complexity and lawmaking in general, I'm here to talk about complexity and democracy, and the special challenges you get in a democratic system when the system of laws becomes more complex.

On one hand you have the ability of voters to oversee the activities of their representatives.  There are arguments that voter ignorance is a big concern, but I think I've been convinced by Bryan Caplan that this isn't actually as much of a problem as it might seem at first.  You could argue that complexity makes it easier for people to hold on to their preferences over beliefs, since the more complex things are, the easier it is to find isolated incidents that support any prejudice.  On the other hand, the world is a big and complicated place already, even without laws, and I'm not sure that complex laws have much effect here.  A more compelling argument is that public debate and discussion can help people overcome their prejudices, but the more topics of debate there are, the shallower the arguments they'll hear from the other side, and the more arguments from their own political side there are to distract them.

A deeper problem, though, is that more and more complex laws make it harder for even well meaning representatives to draft good laws in the first place.  I remember how ignorant of technology most of the people in the SOPA hearings seemed, but would I seem any less ignorant if forced to talk about legal matters?  Or take farming.  Like a good little neo-liberal I think that we ought to get rid of farm subsidies, but how would that work in practice?  I know very little about the actual financial workings of farms, and if I were to try something clever on my own beyond a slow linear phaseout, I'd risk breaking things.  It might be that corn subsidies could be gotten rid of immediately but soy subsidies couldn't, but how could I know that?  I'd have to have detailed knowledge of the farm industry, and that could only come from an expert - either a lobbyist or someone I'd hired myself.

Lobbyists seem to be, in practice, the way our representatives go about getting information about what laws they ought to pass and what should be in them.  I think I have more sympathy for the legislators in this than most people do - Hollywood is very good at sucking money out of the rest of the world and bringing it to the US.  I try to have a cosmopolitan outlook, so I'm not sure that's necessarily a good thing (though it isn't a bad thing either).  However, I also know that that's not how my fellow citizens feel, so I feel a certain sympathy when legislators want to do things to make Hollywood better at sucking money into the US.  And so when Hollywood lobbyists come forward with plans for how the US could rake in even more money from abroad, legislators listen.  The problem, of course, is that Hollywood mostly only thinks about Hollywood, and as a consequence you'd expect the information being provided to be very one-sided, even if the Hollywood lobbyists were genuinely interested in doing what's right for the country.  And I'm pretty sure that Hollywood lobbyists are interested in doing what's right for Hollywood, not for the whole country.

In theory you have lobbyists from every industry and interest group in Congress, so Google, say, could have met with various legislators and explained why SOPA was a bad deal.  In practice, it seems that Google's lobbyists weren't as good at figuring out what was going on behind the scenes, figuring out who they had to meet with, and then persuading those people of their viewpoints.  I suspect that a lot of that has to do with the fact that the various media companies have been doing this for decades, and have had more time to build personal relationships with the various people who matter.  So you have an inbuilt bias in the system towards old companies and industries.  You also have a bias towards more centralized industries, because figuring out who pays for the lobbying is a classic hard coordination problem.  And though you have lobbying groups for things that aren't industries, they tend to be at an even greater disadvantage in the coordination game.

All of the above isn't to say that this is the only way in which lobbyists exert an influence on politics.  It isn't.  But this is possibly the most significant one.

The obvious alternative to relying on lobbyists for technical information would be for Congress to hire experts themselves.  Unfortunately, this isn't done because most people in Congress are eager to not be seen spending "taxpayer money" on themselves.  I think I'll have to write another post on that in the future, because I have a lot to say and I've already wandered pretty far afield here.

It seems to me that a good way to help deal with these problems would be to simplify things as much as feasible.  This doesn't mean "shrinking government" in the way that Republican politicians are always harping about.  They focus on the size of government in terms of dollars, whereas I'm talking about its size in terms of the number of laws, or maybe the number of pages of laws.  If you can wrap up a dozen financial transfers into a single new program, that might be a big win in terms of the complexity of government even though the size of the budget remains the same.  There is also the problem that there are many ways to turn a government expenditure into a law saying that someone else has to accomplish what the expenditure would have.  These are usually more expensive to society as a whole, and their costs fall in less fair ways than a simple expenditure's.  Unfortunately, thinking about shrinking government in terms of reducing the size of the budget leads to exactly these sorts of laws.

Wednesday, May 9, 2012

Macro Economics: Supply and Demand and Such

This is almost a followup to the earlier post on inflation, but at times I'll be neglecting some of the complications I mentioned earlier.  If I were a paid teacher I'd feel embarrassed about my pedagogy here, but as I'm just a blogger I'll shrug and move on.

When you read in the news that the price of, say, hats has gone up recently, do you have any reason to think that more or fewer hats were sold recently?  You might think of an individual store raising its prices, resulting in fewer sales, but things seldom work that way with a general price level.  Generally, there are going to be two possible explanations.  It might be that hats suddenly became much more fashionable, and everyone wanted one.  Since people are willing to pay more for hats, shop owners will charge more - and the higher hat prices will cause people to produce more hats too (demand has risen).  Or it might be that a horrible felt plague has killed off all the felt plants and now the poor haberdashers aren't able to make hats for everyone who wants one.  Since different people are willing to pay different amounts of money for hats, sellers are able to sell all their hats even if they raise their prices and just cater to the people willing to pay more (supply has fallen).  So just observing that prices have risen doesn't tell us whether quantities sold have risen or fallen.  Likewise, seeing prices decrease doesn't tell us whether demand has fallen or supply has risen.

Economists studying the macro-economy - the overall production of everything, rather than the production of just hat makers, say - have concepts related to supply and demand called, sensibly enough, aggregate supply and aggregate demand.  These work a bit differently than supply and demand for any given good or service, because they're so inter-related.  A person might choose to spend their money on hats or not, but at least over the long run they're going to spend it on something - probably.  And to the extent that everyone is buying stuff, the money they spend ends up in someone else's pocket, and then those people have more money to buy stuff with.  These complications make macroeconomics more confusing than microeconomics, and there's consequently a lot more debate among economists about the former than the latter.

But everyone (except the Austrians) seems to agree that aggregate supply and aggregate demand are still useful concepts, so we have that at least.  But what sort of things make people spend more or less in aggregate?  Headwear fashions aren't really big enough to show up at the scale of a country's GDP.  It turns out that there are several things that end up making people all over a country spend more or less in general.  Let's take the Wealth Effect.  Generally, when people feel that they have a bunch of money to spend, even if it's not money but stock or houses or whatever, they're more likely to spend what they do have.  So if house prices crashed across the nation you'd expect that people would buy less stuff, and that the prices of things would decrease (deflation).  Conversely, you can have supply shocks big enough to affect the macro-economy too, if, say, a war in the Middle East disrupts the oil supply.  When that happens you expect that people will buy less stuff, but that prices in general will go up (inflation).

Except there are some added complications that make deflation particularly bad, for a couple of reasons.  One relates to central banks and interest rates, which I might get into in some future blog post, but the other is sticky wages.  If you study psychology and human biases, sooner or later you ought to come across the concept of loss aversion (or you should find a new place to study).  People tend to be much more sensitive about losing things they have than about not getting new things they don't have, at least in economic circumstances when you're talking about money.  And people are very unhappy about taking pay cuts when a company becomes less profitable.  So much so that companies will usually fire 10% of their workforce rather than give everyone a 10% pay cut.  It may suck for the 10% who get fired, but they aren't going to be around to complain, and though the remaining 90% will be upset, they'll usually be less upset (or at least less likely to leave) than if they had gotten a pay cut.  You can look at graphs of yearly pay changes for, say, US workers and you'll see a big bell curve around whatever the inflation rate is, but at 0% there'll be a huge spike, and below 0% there'll be almost nothing.

And it's important to see that even a purely nominal drop in aggregate prices, without any real component, can end up affecting the real economy, because people who get laid off aren't making stuff any more, and the overall real economy shrinks.  I think I'll have to do another post later about what is generally done in these sorts of situations, but that'll be another post.

Saturday, May 5, 2012

Book Review: The Honor Code

The Honor Code: How Moral Revolutions Happen is a book by Kwame Anthony Appiah that I read not-so-very-recently, and were it not for a certain laziness I ought to have written about immediately.  The author goes over several moral revolutions that have happened in history, times when behaviors that had previously been socially acceptable, or even de rigueur, suddenly became socially unacceptable.  The specific examples in the book are dueling and slavery in the UK, and foot binding in China.

The thing I found most interesting was that convincing people that certain behaviors were wrong was apparently not very effective at getting people to stop them.  Even when everybody agreed that dueling was in some sense wrong and not something that civilized people ought to engage in, and even after it had become illegal, people were still afraid of others considering them cowards if they didn't duel.  Likewise with foot binding: people might have thought that putting young girls through all that pain was wrong, but they still worried about marrying off daughters with unbound feet, and so continued the status quo anyway.

It wasn't arguments that these practices were wrong that eventually brought change, but rather arguments that they were shameful.  Dueling ended when people of the lower classes started to duel among themselves, and duelists started to be considered buffoons.  Foot binding ended when Chinese people became conscious of the fact that it made them appear barbaric in the eyes of the Westerners who had been defeating them in various wars.  Slavery ended in Britain when white laborers started to think that it served to make all labor less dignified.

Thursday, March 29, 2012

Inflation: a thing that doesn't really exist

Well OK, saying that such and such a thing doesn't really exist is treading on unsteady epistemic ground.  What does it even mean for a number to exist or not exist, anyway?  One position you can take is that things that can be directly measured, like a person's height, exist; but that things that can't be measured, like their height in inches plus their age, aren't things that really exist.  And though people tend to treat and talk about inflation as one of the former, it really matches the latter category much more closely.

The indomitable Google lists inflation as being "3: A general increase in prices and fall in the purchasing value of money".  That's all well and good, but there's never any general level of prices that you can directly observe.  You always have to combine the increases and decreases in various prices with some system to produce something you can call a general level, and that process is anything but straightforward.

First, there is the selection of the goods and services that make up your index.  The easy solution here is to just average across everything that makes up a country's GDP, in proportion to its contribution, but that makes comparing things like incomes against the index problematic.  For the last few decades, healthcare and higher education have been the sectors of the economy whose costs have increased at the highest rate, but your median worker receives their health insurance from their employer and doesn't have a college degree.  What you really want to do, when talking about how inflation has affected a given group of people, is to look at the rise in the costs of the particular package of goods and services that that group consumes, and not the price level overall.
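As a sketch of what that means in practice (every category weight and price change below is invented for illustration), a fixed-basket index is just a spending-weighted average of price changes, so two groups with different baskets experience different "inflation" from the exact same prices:

```python
# A fixed-basket (Laspeyres-style) price index: a group's inflation is
# the spending-weighted average of the price changes of what it buys.
# All numbers below are made up for illustration.
def inflation_rate(weights, price_changes):
    """weights: fraction of spending per category (must sum to 1);
    price_changes: fractional price change per category."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(w * price_changes[k] for k, w in weights.items())

price_changes = {"healthcare": 0.08, "college": 0.07, "food": 0.02, "rent": 0.03}

# Someone paying out of pocket for insurance and tuition...
student = {"healthcare": 0.2, "college": 0.2, "food": 0.3, "rent": 0.3}
# ...versus the median worker: employer insurance, no college costs.
median_worker = {"healthcare": 0.05, "college": 0.0, "food": 0.45, "rent": 0.5}

print(round(inflation_rate(student, price_changes), 3))        # 0.045
print(round(inflation_rate(median_worker, price_changes), 3))  # 0.028
```

Same economy, same prices, but one basket sees 4.5% inflation and the other 2.8% - which is the whole point about looking at what a group actually consumes.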

Does that mean that they've experienced less inflation than official measures would suggest?  Well, maybe - there are more factors at work here too.

For instance, how do you compare prices levels when some goods are no longer produced, and some have just been invented?  For instance, how do you compare the price of a color TV to an old black and white TV from back in the day?  You could look at the times when both were sold and look at the price ratios then, but those prices were a function of both supply and demand, and the ratio varied over time.  The people who look into the matter tend to say that we overestimate inflation due to technological change, but who knows by how much.

And on yet another hand (where did I get a third?), there are things that don't contribute to what an economist would call inflation, but are rising prices nonetheless.  If there's a rebellion in Libya and the amount of gasoline produced in the world decreases while the demand for gasoline stays constant, the price of gas is going to rise.  But because the price change is due to real constraints on supply, this isn't actually "inflation" by an economist's standard definition, so there's an unfortunate disjoint there.  And you can't really say that economists are totally wrong, because the rising price isn't a result of some mistake in Federal Reserve policy, the thing that economists usually consider inflation with respect to.  On the other hand (wait, I have four now?), you could say that your typical consumer is right to think of rising gasoline prices as inflation, since that is an increase in the price of the overall basket of goods that people like them buy.

So where does that leave us?  Given all of that, I think that maybe we could do well to follow Scott Sumner's lead by just not talking about inflation much.  Inflation per se doesn't tell us much that's practical from a policy perspective by itself, though it is very useful from an investing perspective.  Maybe if we just tried to get into the habit of only talking about supply shocks and demand shocks we could have more sensible conversations about the economy.

Wednesday, March 28, 2012

Book Review: Dancing in the Glory of Monsters

Go to the Wikipedia page on the bloodiest wars in history and look at it for a bit.  Everybody would expect WWII to be on there at the top.  Many people might not realize how many really bloody civil wars China has had, but it stands to reason that, with the huge number of people who live there, the tolls might be significant.  Most people also wouldn't be surprised that there were bloody conflicts in centuries past which they haven't heard of.  But look down to conflict #14.  One of the bloodiest wars in human history ended less than a decade ago, and you probably didn't even know it was happening.

I certainly didn't appreciate it as it was happening.  I read the newspaper and knew that there was a war in the Congo, but I didn't have any sense of the scope of it, nor did I really know what the issues involved were.  Finding that list on Wikipedia, though, helped me to realize that I had overlooked the greatest armed conflict (so far) of my lifetime, and that I ought to know a bit more about it.

When I picked up Dancing in the Glory of Monsters I was expecting a story about Congolese people, however I was surprised at how much of the story was about the neighboring states, especially Rwanda.  When we hear about Rwanda I think most people still think of the Rwandan Genocide, and that's almost central to this story.  The genocide of Tutsi civilians by the Hutu government happened in the shadow of a remarkably successful Tutsi rebellion.  

After the victory of the surviving Tutsis, many Hutus feared that they would be treated the same way they had treated their enemies, and fled west into the Congo.  There they settled in refugee camps, ruled by the same military forces that had carried out the genocide, and fed by humanitarian organizations drawn by the suffering of the displaced civilians.

This was one of the most interesting parts of the book for me.  Everybody knows about conflict diamonds, and how resources that are easily looted by roving armies help contribute to war in countries without well developed human capital.  Well, suffering human beings can be a lootable resource too, and with much the same result.  In the Hutu refugee camps, Western aid organizations worked to head off a potential humanitarian crisis.  But because the same people who had perpetrated the genocide were in control of the refugee camps, they were able to charge foreign aid organizations money for access to the refugees, and use the funds to re-arm themselves.  Thus a suffering civilian populace was turned into a lootable resource, the same as mines elsewhere.

The Tutsis in control of Rwanda, understandably, weren't willing to stand by while this happened, and invaded.  They were successful beyond what you would expect such a small country could possibly manage against a large country like the Congo, and soon the loot from the war made it an undertaking that paid for itself, and then fed upon itself.  And all the states around the Congo's borders had long ago become fed up with how Mobutu had allowed other countries' rebels to use the Congo as a base, so that he could become a regional power broker.

But the war which began with reasonable, or at least comprehensible, aims was usurped to more venal ends.  Truly in the Congo "My eyes are the victim's eyes, my hands are the assailant's hands" became the truth everywhere, as each act of violence spawned revenge far beyond the initial targets.  You can read the book, or Wikipedia, but the result is an almost perfect example of how violence can spiral out of control.  The Rwandans started out trying to punish the guilty, but ended up as just another force profiting from looting the Congo's natural resources by force.

Tuesday, March 20, 2012

Measuring Inequality

I recently saw an article in Slate which was fine as far as it went, but which prompted me to deliver a bit of a rant at Hacker News about how we talk about income inequality in America, which I've cleaned up a bit and am also posting here.

The first thing that annoys me about how most people talk about inequality in America is that people often just use one metric, and ignore all the others.  You could talk about:

  1. Total Compensation - including untaxed things like health insurance as well as paychecks and capital gains.
  2. Taxed income - which the SSA confusingly refers to as "Total Compensation", which is income plus capital gains and bonuses but not things you don't pay taxes on.
  3. Pure income - your paycheck.
  4. Wealth -  how much you have in your bank account plus how much your house is worth.
  5. Private consumption - how much you spend each year on stuff for yourself, rent and food and fun things.
  6. Total consumption - private consumption plus whatever goods and services the government provides for you.

I always tend to prefer looking at consumption, because that is the metric that affects how people live.  Let's say the government passed some new law that said that all income, from whatever source, was going to be taxed at 100% and redistributed equally among all Americans.  That, I think, would be a pretty egalitarian society, but it wouldn't change the US income statistics one iota.  It would, however, show up in the US consumption statistics.  Wealth is an even worse measure, since even in a perfect Utopia where everyone received the same paycheck each year, the people who are just retiring are going to be much, much wealthier than the people who are just getting out of college.
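To make that thought experiment concrete, here's a minimal sketch using the Gini coefficient, the standard 0-to-1 inequality measure (the incomes are invented): measured on pre-tax income the redistribution changes nothing, while measured on consumption, inequality vanishes entirely.

```python
# Gini coefficient via the mean-absolute-difference formula:
#   G = (sum over all pairs of |x_i - x_j|) / (2 * n^2 * mean)
def gini(xs):
    n, mean = len(xs), sum(xs) / len(xs)
    return sum(abs(a - b) for a in xs for b in xs) / (2 * n * n * mean)

incomes = [20_000, 40_000, 60_000, 200_000]
print(gini(incomes))       # 0.4375 - pre-tax income inequality

# Tax everything at 100% and redistribute equally: the pre-tax income
# statistics (and their Gini) are untouched, but everyone now consumes
# the average...
consumption = [sum(incomes) / len(incomes)] * len(incomes)
print(gini(consumption))   # 0.0 - consumption inequality is gone
```

The income statistics literally cannot see the difference between this society and one with no redistribution at all; only the consumption statistics can.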

Also, it's important to distinguish between statistics that are computed at a household level and statistics that are computed at an individual level.  Poorer households tend to have more children, but that effect is smaller than the tendency of higher-income households to contain two adults, since higher earners tend to be older and thus more likely to be married.

Another problem with the graph in the linked article is that it treats income before and after 1986 the same way, and doesn't even mention the huge discontinuity in the data that happens in 1986. The 1986 tax reform act strongly incentivised rich people to report things as income which they had previously claimed were business expenses. That means the post-1986 numbers better reflect what rich people were actually earning, but it also means that the numbers from 1988 aren't really comparable to the numbers from 1982. And it's hard to compare across countries, since AFAIK most of Europe is closer to the pre-1986 US system than the post-1986 US system.

And speaking of Europe, it's important to remember that income inequality has been rising at more or less the same rate everywhere in the industrialized world (barring the US's 1986 jump). See pretty graphs here.  Now, I should point out that many other countries do more income redistribution than the US does, so similar shares of income going to the top 1% don't necessarily mean the same thing in different countries, but that's just another reason to look at consumption inequality instead.

Now, it's true that the link in the last paragraph only talks about pure income, and doesn't deal with capital gains and such.  As far as I know other countries are broadly similar to the US in how they deal with this issue, but it would still be really nice to change the US tax code to make capital gains as progressive as income, either through a Friedmanesque system or through a consumption tax.  

Important Issues: Immigration

The snow has melted, the Republican candidates are busy beating each other bloody, and with the looming presidential election my thoughts turn to daydreams about my ideal candidate.  This is mostly a blue-sky exercise; some of my beliefs are well outside the mainstream and I'm not going to hold my breath waiting for my favorite policies to be implemented.  But maybe I can do something to help persuade people that these are issues they ought to care about, like by writing blog posts, say.

The most important policy issue I see in the US right now where things aren't currently going the way I'd like is immigration.  It seems to me that we ought to be allowing much more immigration into this country, that this would give some net benefits to the US, and that it would be a huge boon for those allowed to immigrate.

At one point the US let many more immigrants in than it does today.  But panic about foreign anarchists and concern that too many immigrants were from Asia led to a succession of laws restricting immigration.  Despite the fact that the US has a far larger population nowadays than in the 1910s, we actually allowed more immigration back then in absolute terms.

The benefits to people who are coming to the US are pretty clear.  They'll be a lot better off materially in the US, even illegal immigrants who don't get the full protection of US labor laws.  And it isn't that they're just taking "our" resources: people from poor countries who move to the US benefit from our generally decent infrastructure and legal system, and genuinely produce more stuff than they would have in their home countries.  Some people have estimated that this difference could amount to trillions of dollars for the global economy if everyone who wanted to immigrate were able to.

You could argue that allowing the best and brightest to come to the US will rob their home countries of those same people.  True enough, perhaps, but at the same time the remittances those people send home are by far the most effective form of foreign aid per dollar, and also far larger than all the other sources of foreign aid.

You could argue that immigrants steal our jobs, but that's just silly, because the number of jobs grows with the number of consumers.  Just look at Texas, which has done very well in growing its employment even while people have been flocking to the state, to the extent that it's doing much better than most of the country.  You could also argue that immigration drives down the wages of those at the bottom of the economic heap, and here you're on more solid ground.  But the great thing about growing the economic pie is that even if a rising tide doesn't lift all boats by itself, redistributing gains is actually one of the areas the government is pretty good at.  Suppose we're so much more worried about the livelihoods of native-born Americans than about those born in, say, Cambodia that we would consider preventing the person from Cambodia from moving to the United States.  Instead, we could just take some of that factor of ten or so more money that the Cambodian will make in the US, give it to the native-born workers he's competing with, and everybody will be better off.  Doing something like that would force us to give up the pretension that we care about all people equally, but I think illusions aren't worth much compared to actual improvements in people's standards of living.

One could also argue that we should worry about diluting our culture or such - that too many immigrants might be more than we can handle.  Since apparently 40% of the population of the third world would like to move to the US this is understandable, but that's no reason not to allow a factor of 10 or so more immigrants into the US each year.  The US managed fine in the age of Ellis Island - not perfectly but certainly well enough.

So, who on the current list of presidential contenders scores well by this metric?  Assuming he gets the Libertarian Party nod, Gary Johnson seems to be someone I agree with on this particular issue.  I'll also say that Obama and Gingrich seem to want to improve things from where they are now.  Conspicuously failing on this metric is Ron Paul, who despite his Libertarian leanings comes off as almost nativist at times.  And Santorum and Romney don't do well, either.

Monday, February 27, 2012

Dark Silicon Followup

Something I saw this last week that prompted me to think about dark silicon again was a project Ubuntu is working on, where you have a phone that is mostly a normal Android phone, but when you plug it into a computer screen and keyboard it can act as a full desktop operating system.  Since a modern smartphone is much more powerful than the desktops of ages past, and since we tend to do more and more things on far away servers, this seems like it might be a model for the future of computing.  Your computer fits in your pocket and you use it as a phone most of the time, but then you plug it into some sort of dock and it becomes your desktop.

But it strikes me that in a future like that you're likely to have a bunch of computational resources sitting there in your phone that only light up when the phone has access to a hefty power source.  Heat dissipation out of the phone will still be a hard limitation since there's no way you can fit a fan in a phone form factor, but there's still room for quite a bump.

Pushing people off bridges, and consequences.

Pushing people off of bridges - but only to save the lives of others, of course - has long been one of the staples of debate in moral philosophy.  The original formulation of the famous Trolley Problem is more or less "Suppose you see an out of control trolley about to run over 5 people.  Is it moral to push a fat person under the wheels if it means that only he will die instead of the 5 others?"  This started out as a debate among philosophers, then became a tool for cognitive scientists to use by asking people about this topic in surveys, but often with some variation.  What if the one person is your mother?  What if you throw a switch instead of having to push the person yourself?

Researchers have found out many fascinating things about how people respond to moral problems, or at least say they would respond, using this problem.  I'll plug Thinking, Fast and Slow here as an excellent overview of modern cognition research in a lot of areas, including this one.  Strangely for people who try to describe morality in terms of simple abstract rules, almost everyone would throw a switch to divert the trolley from hitting five people to a course that would make it hit only one, while very few would actually push someone themselves.

I'd tend to explain that with two forces.  First, in real life I can be much more certain that pushing someone will result in harm than I can be that, further down the track, the train will cause harm by itself.  It could be diverted by someone else, I could be wrong about where it was going, etc.  Second, there is the notion of blame.  I wasn't to blame for the out of control train, so I can't be blamed for the five deaths, but if I push someone onto the tracks I'm now the proximate cause of them dying.  In matters of abstract morality you can talk about partial responsibility, but for lynch mobs it's usually just a matter of finding the one person who is "to blame".

However, I recently read a blog post discussing the finding that people are much more likely to push if all the participants are related than if they are strangers, going from about a quarter to about half.  The blog post author, Robin Hanson, suggests that being related makes the stakes higher, enough so that more people are willing to violate social norms.

That's possible, but I think a stronger consideration would be that in the case of strangers the relatives of the person you killed are mostly disjoint from the people whose relative you save, while in the case of the brothers most of the people most upset by the killing will also be most relieved by the other results.  I'd suggest running a new experiment, where the person pushed and the five on the tracks are all related to each other but not to the person answering the question.  My bet would be that closer to half than to a quarter would end up saying that they would push.

Wednesday, February 8, 2012

Is there no such thing as bad publicity?

Recent events surrounding the Susan Komen breast cancer charity and Planned Parenthood have gotten me thinking about the idea that all publicity is good publicity. It seems to me that there are times when bad publicity can actually be bad and others where it cannot, and that this mostly has to do with the nature of the recipient rather than anything about the details of the scandal.

As you might expect, Wikipedia has a long list of instances where scandal has brought people great success, and it seems that these cases have a lot in common.

If your primary opponents are obscurity and apathy, then the rush of attention a scandal brings can be of huge value. Even if the majority of the people who hear about you are so offended that they don't want to have anything to do with you, some won't be, and through them you'll gain from your newfound notoriety. This is why some early 20th century artists explicitly tried to attract the ire of morality crusaders. The crusaders weren't the target audience anyway, but they'd let people who were interested know what was going on.

On the other hand, if you're already well known, your potential to gain from notoriety is much smaller. And if you can suffer from disapproval you might be even worse off. The worst that people who disapprove of a business can do is often just boycott the product, but if a politician or political movement has to appeal to a majority, then turning people against you can be a serious negative consequence.

So where does this leave Susan Komen and Planned Parenthood? Both experienced a huge surge in donations with the recent fracas, but for both there's more than just the short term to worry about. If Susan Komen has attracted a large number of conservatives who just like to see someone stick it to Planned Parenthood, but who don't care as much about breast cancer, then this will just be a flash in the pan. That comes down to how much social conservatives care about breast cancer versus how much social liberals do. But, of course, Susan Komen soon reversed their course, so now we're left asking how much the appearance of inconstancy will hurt them, versus more widespread public knowledge about their mission. I'd guess that their revenues will be up a year from now, but I'm not totally sure. Remind me in a year, and we'll see how right I was.

Monday, February 6, 2012

Dark Silicon (dun dun dun!)

For many years, the major limitations of CPU design have been about power. Back in the day, this wasn't much of a problem. Every year transistors got smaller, which meant that you needed less current to flip them from one state to another, which meant that you could flip them more frequently without doing any more (thermodynamic) work than you had done with the last generation of chips.

But then something changed. Leakage current reared its ugly head. As transistors became smaller they also became less substantial. Whereas once the trickle of current that would leak through a transistor when it was off would be tiny compared to the stream it would take to shift it from off to on, as transistors became smaller they became more and more permeable. And to make it even worse, shrinking transistors meant that even if overall power usage remained the same, that power was concentrated in a smaller and smaller area.

Something had to give, and it did. After the ill-fated Pentium IV, Intel figured out that using more, slower transistors meant that the power usage was spread out over a larger area, and the lower voltage those transistors operated at meant that leakage again retreated, and was again less important than the fundamental power needed to drive those transistors from one state to another.
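The arithmetic behind that trade-off follows from the textbook approximation for CMOS switching power, P ≈ a·C·V²·f (activity factor times switched capacitance times voltage squared times clock frequency). Because supply voltage can usually be lowered along with frequency, power falls much faster than clock speed does, which is why two slower cores can beat one fast one on power. A quick sketch, with all the numbers made up purely for illustration:

```python
# Textbook approximation of CMOS dynamic (switching) power:
#   P = a * C * V^2 * f
# a: activity factor, C: switched capacitance, V: supply voltage, f: clock.
# The capacitance, voltage, and frequency values below are invented, not
# measurements of any real chip.

def dynamic_power(c_farads, v_volts, f_hertz, activity=1.0):
    return activity * c_farads * v_volts ** 2 * f_hertz

# One fast core vs. two slower cores at a reduced supply voltage.
one_fast = dynamic_power(c_farads=1e-9, v_volts=1.4, f_hertz=3.8e9)
two_slow = 2 * dynamic_power(c_farads=1e-9, v_volts=1.1, f_hertz=2.4e9)

print(one_fast)  # ~7.4 W for the single fast core
print(two_slow)  # ~5.8 W combined, with more total cycles per second
```

Note that this only covers switching power, not the leakage current discussed above, but leakage also drops at lower voltages, which pushes in the same direction.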

Adding more transistors to execute more instructions at once only goes so far, however, because often one instruction depends on the previous one, and finding places where they don't gets harder and harder the more you try to do it. So one way which CPU designers tried to use all these extra transistors to speed things up was by duplicating the entire CPU, creating more cores. But what happens when you have more cores than you can really use? That's when we start entering the realm of dark silicon.

It's a well-known fact that specialized hardware is almost always faster than general purpose hardware. This is why we have graphics cards: a chip that was designed explicitly to do 3D graphics can make assumptions about what it's doing, it can neglect areas that aren't needed, it can encode rules directly into silicon instead of having to fetch abstract rules from memory. If you have a graphics chip with a certain number of transistors and a certain power budget, it would take many chips of the same size to do as much work if they were all general purpose CPUs.

But why stop with graphics? If specialized circuits can do a task faster and more efficiently, why not use them for as many things as possible? As Moore's law keeps working and the number of transistors you can cram onto a chip keeps going up, while the number of general purpose CPUs you can usefully cram onto said chip doesn't, it makes more and more sense to put specialized circuitry on board. Now mind you, those same transistors could be used for more cache, but as caches get bigger each marginal transistor used that way gets less and less valuable. Sooner or later a specialized processor for media encoding or encryption starts to look like a better use for those marginal transistors.

And sure enough, that's already happening. Intel's newest chips have specialized circuits for both encryption and media encoding. AMD is planning similar things, and it looks like the age of the co-processor is about to begin. This is called "dark silicon" because most of the time it just sits there, dark and unpowered, not using any power at all. But when you need to do something it's suited for, it thrums to life and tears through whatever task you throw at it before growing silent once again.

As time goes on we're likely to see more and more such accelerators come into being. What will be next, more Java accelerators? Physics accelerators? Haskell accelerators? Configurable blocks that can become whatever is needed? In any event, operating systems will have to evolve to be able to dispatch tasks to whatever specialized hardware can handle them, or just emulate those specialized functions in software. It looks like Apple might be leading the way here, but the final result might end up looking very different.
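The dispatch problem described above has a familiar shape: try the dedicated silicon if the platform exposes it, otherwise fall back to a software implementation. Here's a minimal sketch of that pattern; the registry, the "checksum" task, and both implementations are invented for illustration, with the hardware path standing in for what would really be a driver call:

```python
# Hypothetical sketch of accelerator dispatch with a software fallback.
# Everything here (the registry, task names, implementations) is invented
# to illustrate the pattern, not modeled on any real OS interface.

ACCELERATORS = {}  # task name -> hardware-backed implementation, if present

def register(task):
    """Decorator: advertise a hardware-backed implementation for a task."""
    def wrap(fn):
        ACCELERATORS[task] = fn
        return fn
    return wrap

def dispatch(task, fallback, *args):
    """Run on dedicated silicon when available, else emulate in software."""
    impl = ACCELERATORS.get(task, fallback)
    return impl(*args)

def software_checksum(data):
    # Plain software path, always available.
    return sum(data) % 256

# Pretend this particular chip shipped a checksum block:
@register("checksum")
def hw_checksum(data):
    # Stand-in for a call into the driver for the dedicated circuit.
    return sum(data) % 256

print(dispatch("checksum", software_checksum, b"hello"))  # → 20
```

The interesting design question is who owns the registry: the OS, a userspace library, or the compiler. Whoever it is has to guarantee that the software fallback and the hardware path produce identical results, or programs will behave differently across chips.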
