Showing posts with label cognition.

Friday, June 7, 2019

Sex, Sexuality and doing evolutionary reasoning badly

This post by Darwinian Reactionary provides an excellent example of using evolutionary reasoning badly.

He is using evolutionary reasoning to critique the notion of sexual orientation. There are lots of problems with the concept of sexual orientation, starting with the fact that human sexuality is multi-dimensional. There is (1) who you fall in love with, (2) who you are sexually attracted to and (3) who or what can provide sexual release. The randier you are, the wider (3) is likely to be, and the more likely it is to be broader than (1) and (2).

As lots of homosexual men down the ages have discovered, a significant proportion of straight young men are, in the right circumstances, seducible. That does not make them bisexual or homosexual, it just makes them randy. Men, particularly young men, in situations which systematically deny them social contact with young women are likely to use other men for sexual release. That is true in prisons, on long sea voyages and in countries which practise sexual apartheid.

The concept of sexual orientation does not really cover all those dimensions. It also does not cover terribly well the evidence that female sexuality seems to be moderately more fluid than male sexuality.

Evolutionary complexity
The problem with Darwinian Reactionary's critique is not that it is directed against the concept of sexual orientation, nor that it invokes evolutionary reasoning; it is how the evolutionary reasoning is used.

The first difficulty is simply assuming homosexuality is an absolute evolutionary disadvantage: in effect, that it completely blocks reproduction. Lots of homosexuals have had children. It is perfectly reasonable to suggest that homosexuality is somewhat of an evolutionary disadvantage, in that it presumably reduces the propensity to reproduce. How much it actually does so is, however, an empirical matter, and one likely to vary significantly from human society to human society.

How much a barrier to reproduction homosexuality actually is matters, because it affects how strong the evolutionary pressure is against any genetic basis for homosexuality. The less of a barrier to reproduction homosexuality turns out to actually be, the less evolutionary selection pressure there is against it, and the less a puzzle its persistence in human populations is.

Let us presume, however, that homosexuality is enough of a barrier to successful fertility to create a significant and persistent element of evolutionary pressure against it. Then we have a puzzle to be answered: why is it persistent? Note that this is not quite the same puzzle as: why does it exist? The latter is a puzzle about identifying the causal mechanism, the former a puzzle about the persistence of that mechanism.

The “gay uncle helps sibling reproduction” hypothesis has some empirical support, though probably not enough in itself to explain the persistence of homosexuality. Especially if we assume homosexuality is an absolute barrier to reproduction, there may be problems with making the evolutionary mathematics work. It would be an informative exercise to work out what level of depressed reproduction above zero is sufficient for the mathematics to work, remembering that the more children the gay uncle tends to have, the less plausible any advantage to sibling reproduction is. Perhaps both effects cancel each other out, but it seems worth checking range and scale.
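
To make the arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python using Hamilton's rule (the standard kin-selection condition, r x B > C), not anything from the original post. The baseline number of offspring and the relatedness coefficients are illustrative assumptions; the question it answers is how many extra nieces and nephews a "gay uncle" would need to generate, via help to siblings, to offset a given reproductive shortfall of his own.

```python
# A back-of-the-envelope check of the "gay uncle" arithmetic using Hamilton's
# rule (r * B > C). All numbers below are illustrative assumptions, not data.

BASELINE_OFFSPRING = 2.5   # assumed average offspring of a heterosexual sibling
R_OWN_CHILD = 0.5          # relatedness to one's own children
R_NIECE_NEPHEW = 0.25      # relatedness to a full sibling's children

def extra_sibling_offspring_needed(own_offspring: float) -> float:
    """Extra nieces/nephews a 'gay uncle' must add, via help to siblings,
    to offset his own reproductive shortfall relative to the baseline."""
    direct_cost = R_OWN_CHILD * (BASELINE_OFFSPRING - own_offspring)
    return direct_cost / R_NIECE_NEPHEW

for fraction in (0.0, 0.25, 0.5, 0.75):
    own = fraction * BASELINE_OFFSPRING
    needed = extra_sibling_offspring_needed(own)
    print(f"own reproduction at {fraction:.0%} of baseline "
          f"-> {needed:.2f} extra nieces/nephews needed to break even")
```

On these assumed numbers the familiar two-for-one ratio applies at the margin: each forgone child of one's own has to be made up by roughly two extra nieces or nephews, so the required helping effect shrinks quickly as the reproductive deficit shrinks, which is precisely why the empirical level of depressed reproduction matters.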

Cognitive dimorphism
Leaving aside the problem of assuming an absolute selection disadvantage, a further problem with Darwinian Reactionary’s use of evolutionary reasoning is that it is not based in the complexities of being Homo sapiens.

What is missing from the evolutionary reasoning in Darwinian Reactionary’s post is what is often missing from such reasoning: any sense that we are specifically dealing with Homo sapiens. It is all just logic pertaining to a sexually reproducing species. Nothing specific to Homo sapiens is involved.

Three factors specific to Homo sapiens appear relevant: (1) we are the cultural species, (2) we are the non-kin cooperation species and (3) there are significant, at least partly innate, cognitive differences between men and women. (1) and (2) are relevant because homosexual men have a persistent, cross-cultural tendency to be disproportionately involved in cultural activities, (3) because homosexuals have a persistent, cross-cultural tendency to display cognitive traits more common in the other sex. Indeed, their defining characteristic—who they are sexually attracted to—is the most obvious example of this but, revealingly, not the only one.

As an aside, this makes all the more annoying the tendency to reason about homosexuality and homosexuals in ways which make it blindingly obvious that one has entirely failed to consult the experience of actual gay folk. (A tendency much more obvious in the comments on the aforementioned post than in the post itself.) One may choose what one does (or does not do) for sexual release. One does not choose who one falls in love with or what one is attracted to.

Attraction to one's own sex is just as visceral as attraction to the opposite sex. Indeed, it makes much more sense in terms of having a cognitive feature typical of the opposite sex than as something weirdly free-floating. Though it is then a cognitive feature embedded in a different hormonal pattern: attraction to men plus testosterone is different from attraction to men plus oestrogen, just as attraction to women plus oestrogen is different from attraction to women plus testosterone. Seeing homosexuals as having a cognitive feature more typical of the other sex also separates homosexuality from genuine para-sexualities (such as paedophilia), which are much rarer and much more clearly connected to trauma and dysfunction.

If the persistent difference in cognitive patterns between the sexes is an evolutionary advantage (and it surely has to have been, to be as marked as it is), then some mechanism or mechanisms need to persist to maintain those patterns of cognitive difference by sex. If cognitive convergence between the sexes to the extent of being homosexual discourages reproduction, that would be a mechanism helping to maintain cognitive differences between the sexes. Some of the distinctive physiological tendencies among homosexual men and women may point in that direction. Working out the evolutionary mathematics involved is way, way beyond my mathematical knowledge and understanding, but it would seem a useful exercise: one that gives homosexuality a much broader functional role in evolutionary dynamics, which may be sufficient for it to be low-incidence but persistent, particularly if added to the "gay uncle" effect.
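
As a purely illustrative sketch (my toy model, not anything derived from the post or from the literature), one can see how "low-incidence but persistent" falls out of the arithmetic: if a trait carries a net reproductive cost but also arises anew each generation at some low rate, its frequency settles at a low equilibrium, and the more the direct cost is offset by other effects, the higher that persistent frequency sits.

```python
# An illustrative toy model: a trait with net reproductive cost s arises anew
# each generation at rate u (mutation / developmental variation). Its frequency
# settles near u / s, i.e. low-incidence but persistent, and the weaker the
# *net* selection against it (direct cost minus any offsetting effects),
# the higher the persistent frequency. All numbers are arbitrary.

def equilibrium_frequency(u: float, direct_cost: float, offset: float,
                          generations: int = 2000) -> float:
    s = max(direct_cost - offset, 1e-6)  # net selection coefficient
    p = 0.0
    for _ in range(generations):
        p = p * (1 - s) + u * (1 - p)    # selection, then fresh input
    return p

for offset in (0.0, 0.05, 0.09):
    p = equilibrium_frequency(u=0.001, direct_cost=0.10, offset=offset)
    print(f"offsetting benefit {offset:.2f} -> persistent frequency ~{p:.3f}")
```

The input rate and cost figures here are chosen only to show the shape of the result, not to estimate anything.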

A key feature to remember about evolutionary reasoning is that we are talking about population dynamics. For instance, the persistence of psychopathy and sociopathy (or whatever the current approved labels are) at such low levels in human populations illustrates two things: (1) lack of empathy and normative engagement is not an evolutionary advantage except, at best, as a parasitic strategy on the overwhelmingly more dominant strategy(ies) using empathy and normative engagement; and (2) if they are not propagating as a minor niche parasite strategy, then they are much more likely to be recurring malfunctions of the mechanisms supporting the dominant evolutionary strategy(ies).

Cultural species
That homosexual men in particular have been persistently, disproportionately involved in cultural activities is not much of a puzzle. The less one invests in children of one's own, the greater the pressure to invest in activities that generate social support and status independent of having one's own children. Providing cultural services does that.

Having cognitive traits that are more "cross-sex" may well aid in creating broadly resonant cultural services, giving homosexual men both more incentive to invest in, and more capacity to successfully provide, cultural services. (That homosexual women have not been so culturally prominent is explicable by the value placed on female fertility: taking on other roles was discouraged, especially if fertility was women's dominant social leverage.)

In the cultural species, having a low-incidence but persistent minority disproportionately willing and able to invest in cultural services would seem a clear advantage in realising the benefits of culture. Whether this can plausibly be "cashed out" genetically seems doubtful. But add in the blocking of cognitive convergence, plus some level of aid to sibling reproduction, and there may well be enough selection effect to sustain a low-incidence sexual minority in human populations. That makes Darwinian Reactionary's attempt to treat homosexuality as simply "selected against", with supposedly straightforward consequences for how homosexuality can or cannot then be reasonably characterised, a naively simplistic application of evolutionary reasoning.

I am absolutely for using evolutionary reasoning to think about why Homo sapiens are the way we are. Applying evolutionary reasoning to Homo sapiens is, however, a much more complex issue than the sort of naive evolutionism that Darwinian Reactionary is using.

[Cross-posted at Skepticlawyer.]

Sunday, November 20, 2016

Understanding the 2016 US Presidential election

We humans are excellent at motivated reasoning: taking a preferred framing and using it to "explain" events. The more highly educated we are, the better we are at it.

We Homo sapiens are also a profoundly cultural species. In particular, we are moralising, status-conscious, coalition builders. We have a powerful, apparently inbuilt, tendency to copy behaviour which either has prestige or comes from folk with prestige. Which gives us even more reasons to buy into framings that reinforce a sense of who we are and where we (seek to) fit.

So, when dealing with something as fraught as the 2016 US Presidential election, it is best to start, as much as possible, with the empirics: in this case, the voting statistics. The following post is based on the voting statistics from Dave Leip's Atlas of U.S. Presidential Elections--a very informative and easily accessed resource.

In 2016, as in 2000, the Republican ticket won the Electoral College, though the Democratic ticket won the popular vote. This is a fairly rare event in US political history (it happened previously in 1824, 1876 and 1888), so to have it happen twice in 5 elections is noteworthy. 

So, comparing the 2000 and 2016 Presidential elections, several things stand out. (All figures are rounded to one decimal place.)

In both elections, the third Party vote was above 2%. 
The third Party vote totalled 3.8% in 2000, mainly due to Ralph Nader's candidacy for the Greens winning 2.7% of the vote. It was 5.6% in 2016, mainly due to Gary Johnson's candidacy for the Libertarians winning 3.3% of the vote.

In both elections, the Democratic popular vote win was due to California.
In both the 2000 and 2016 elections, the Republican ticket won the popular vote in the rest of the USA. Since California, like most states, uses a "winner take all" system for its Electoral College delegate selection, and since it is leaning more and more Democratic, there is less and less reason for Republican Presidential campaigns to put any effort into campaigning there.

We can see this effect in the Californian results. In 2000, Al Gore won California 5.9m votes to 4.6m votes. In 2016, Hillary Clinton won California 7.4m votes to 3.9m votes. 

In 2000, George W Bush won the rest of the US popular vote by 0.7m votes. In 2016, Donald Trump won the rest of the US popular vote by 1.8m votes. In both elections, the Democrat advantage in California was larger than the Republican advantage in the rest of the US.
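
The arithmetic behind the "California effect" can be checked directly from the post's rounded figures (all in millions of votes, as reported at the time of writing): the national Democratic margin is simply the Democratic margin in California minus the Republican margin in the rest of the country.

```python
# Recomputing the "California effect" from the post's rounded figures
# (millions of votes, as reported at the time of writing).

elections = {
    2000: {"ca_dem": 5.9, "ca_rep": 4.6, "rest_rep_margin": 0.7},
    2016: {"ca_dem": 7.4, "ca_rep": 3.9, "rest_rep_margin": 1.8},
}

for year, v in elections.items():
    ca_dem_margin = v["ca_dem"] - v["ca_rep"]
    national_dem_margin = ca_dem_margin - v["rest_rep_margin"]
    print(f"{year}: Democratic margin in California {ca_dem_margin:+.1f}m, "
          f"Republican margin elsewhere {v['rest_rep_margin']:+.1f}m, "
          f"implied national Democratic margin {national_dem_margin:+.1f}m")
```

In both years the implied national Democratic margin (roughly 0.6m in 2000 and 1.7m in 2016 on these rounded figures) is smaller than the Californian margin alone.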

The two elections had very different dynamics compared to the previous Presidential election
The most striking difference in the two elections was how well the Party tickets did compared to the immediately prior Presidential election. In 2016, Donald Trump increased the Republican vote over 2012 by 1m votes. In 2000, George W Bush increased the Republican vote over 1996 by 11.3m--largely due to the collapse in the Reform Party vote.

In 2000, Al Gore increased the Democrat vote over 1996 by 3.6m. In 2016, Hillary Clinton lost 2.4m votes over 2012. (In both elections, the Democrats were the Presidential incumbent Party.)

If we look at the pattern over the previous two elections, in 2012 Mitt Romney increased the Republican vote by 1m while Barack Obama lost 3.6m votes. In other words, Donald Trump essentially replicated Mitt Romney's increase in popular votes while Hillary Clinton continued the decline in the Democratic popular vote, but not quite as much.

So, what we see is a steady trajectory over the 2012 and 2016 Presidential elections--the Democratic popular vote declining significantly, albeit at a slightly slower rate; the Republican vote increasing at a significantly slower, but steady, rate. In votes for President, the Republicans have not been surging nearly as much as the Democrats have been going backwards.  Which strongly suggests analysis should not concentrate on what the Republicans were doing right so much as what the Democrats have been doing wrong.

In popular vote terms, the Democrats currently dominate Presidential politics
In the 7 US Presidential elections after 1988, the Republicans have won the popular vote once: in 2004. But they have won the Presidency 3 times: 2000, 2004, 2016. As, however, the Democrat dominance in the popular vote is essentially a California effect, the Republicans' popular vote failures may be something of a warning to them; but, short of changing how the Electoral College works (either by abolishing it or eliminating "winner takes all"), their political significance will continue to be muted.

Given that the Republicans continue to dominate Congressional and State politics, a constitutional amendment to change the Presidential selection system seems somewhat unlikely. Indeed, the Republican domination of State politics is striking:
Republican America is now so vast that a traveler could drive 3,600 miles across the continent, from Key West, Fla., to the Canadian border crossing at Porthill, Idaho, without ever leaving a state under total GOP control.
Who goes backwards?
As the US population continues to grow, and as it remains very much a Two-Party state, with very strong institutional barriers to third Parties getting anywhere, Democratic or Republican tickets going backwards in the popular vote is somewhat noteworthy. George H W Bush managed it in 1988 (-5.6m) and 1992 (-9.8m).  John McCain managed it in 2008 (-2.1m). The only Democratic candidates to manage it in that time have been Barack Obama in 2012 (-3.6m) and Hillary Clinton (-2.4m).

The Republican Presidential vote has been relatively steady since George W Bush's win in 2004:
2004  62.0m
2008  60.0m
2012  60.9m
2016  61.9m

The Democratic Presidential vote has been much more variable in that time:
2004  59.0m
2008  69.5m
2012  65.9m
2016  63.6m

The Republicans seem to have more solidly attached votes, the Democrats a larger "floating" vote. Donald Trump got (slightly) fewer votes than President Bush in 2004, despite 12 years of population growth, while continuing the slow increase in the Republican vote since 2008. Hillary Clinton got more votes than John Kerry in 2004 while continuing the significant decline in the Democratic vote since 2008.
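
The election-to-election swings implied by the two tables above can be computed directly (figures in millions; small differences from the numbers quoted earlier are rounding):

```python
# Election-to-election swings implied by the two tables above (millions).

republican = {2004: 62.0, 2008: 60.0, 2012: 60.9, 2016: 61.9}
democratic = {2004: 59.0, 2008: 69.5, 2012: 65.9, 2016: 63.6}

def swings(series: dict) -> dict:
    years = sorted(series)
    return {y: round(series[y] - series[prev], 1)
            for prev, y in zip(years, years[1:])}

print("Republican swings:", swings(republican))   # {2008: -2.0, 2012: 0.9, 2016: 1.0}
print("Democratic swings:", swings(democratic))   # {2008: 10.5, 2012: -3.6, 2016: -2.3}
```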

Starting with the electoral facts
The story of the 2016 election is the continuing Democratic decline in votes being significantly larger than the slow Republican increase in votes. The story is not how The Donald and the Republicans won the general election, the story is how Hillary and the Democrats lost. Any analysis that does not start from there is imposing its framing on the election. Especially as the much vaunted switch of the "Rust Belt" white working class to the Republicans seems to have been underway from 2012, long before The Donald's upset win in the Republican primaries was even a surreal possibility. 

The victory story for The Donald is how he won the Republican primaries. An analysis which can tie that to the Democrat decline in Presidential votes is one worth considering. 


[Cross-posted at Skepticlawyer.]

Wednesday, May 30, 2012

Decision inertia

Econblogger Scott Sumner extends an excellent comment on one of his posts, which raises the issue of policy "stickiness", or what I have long thought of as policy inertia.

That there is such a thing as policy inertia is clear from history or from observation of the world around us. An example of policy inertia is that, in the lead-up to the Pacific War, it was decided to base a Royal Navy task force, Force Z, at Singapore. It was to be a fairly standard force of aircraft carrier (HMS Indomitable), battleship (HMS Prince of Wales), battlecruiser (HMS Repulse) and support ships (four destroyers). Unfortunately, HMS Indomitable ran aground in the Caribbean en route. Yet the deployment of the rest of the task force went ahead anyway. As a result, Force Z lacked naval air support, and HMS Prince of Wales and HMS Repulse were sunk by Japanese aircraft off the coast of Malaya. (Yes, Admiral Tom Phillips was a dill who did not believe in airpower and failed to call for the RAF air support that was fueled and waiting, but the lack of an aircraft carrier made Force Z much more exposed to air attack in the first place; Mediterranean experience was that even a small number of fighter aircraft could break up attacks on ships, even if the attacking aircraft were also supported by fighters.) There was a good reason why an aircraft carrier had originally been assigned to a task force operating in such open waters, and HMS Indomitable's mishap should have changed the deployment decision, but policy inertia kicked in.

Generally, changing a policy involves higher institutional transaction costs than keeping to the current policy. People have to come to a new agreement, orders have to be changed, new arrangements worked out, and so on. If reputation or credibility effects operate, the costs rise further. If changing the decision involves increased uncertainty (e.g. because past experience provides little or no guidance), that raises costs again, further increasing the policy inertia.
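
A minimal sketch of the underlying decision rule, with purely illustrative numbers and cost categories taken from the paragraph above (this is my toy formulation, not anything from Sumner's post): an organisation changes policy only when the expected gain clears the stacked switching costs, which is why inertia is the default.

```python
# A toy switching rule, illustrating why policy inertia is rational when
# changing course carries its own costs. The cost categories follow the
# paragraph above; the numbers are purely illustrative.

def should_change_policy(expected_gain: float,
                         renegotiation_cost: float,
                         reputation_cost: float,
                         uncertainty_premium: float) -> bool:
    """Change only if the expected gain clears all the switching costs."""
    switching_cost = renegotiation_cost + reputation_cost + uncertainty_premium
    return expected_gain > switching_cost

# A modest expected gain is swamped by stacked switching costs...
print(should_change_policy(expected_gain=5.0, renegotiation_cost=2.0,
                           reputation_cost=3.0, uncertainty_premium=1.5))  # False
# ...so only a clearly larger gain overcomes the inertia.
print(should_change_policy(expected_gain=8.0, renegotiation_cost=2.0,
                           reputation_cost=3.0, uncertainty_premium=1.5))  # True
```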

Policy inertia (or policy stickiness, in economic jargon) is an observable tendency of organisations and, if one thinks about it, not a surprising one.

Indeed, a certain amount of price stickiness can be understood as a form of policy inertia, arising not merely within the firm but also from the signaling and other costs involved in communicating changes in existing arrangements to its customers/clients.

Taking it further, policy inertia is an institutional form of cognitive inertia: the tendency to continue with patterns of action or belief in an unconsidered fashion. There has to be cognitive inertia; we simply do not have the time or cognitive capacity to continually re-assess every decision, so we use habits, routines and other cognitive shortcuts. Similarly, organisations would find it very costly to reconsider every decision all the time, particularly when you consider the signaling costs to their own staff and other stakeholders and the problematic effect on expectations.

Of course, if you are a central bank and essentially a single decision (monetary policy) is your raison d'etre, then policy inertia is less acceptable. Alas, reputation effects are likely to be particularly strong (and, sadly, to get worse as economic conditions deteriorate: the avoidable economic misery your policy failures are causing mounts, and so the implied humiliation in changing policy grows).

Either way, policy inertia (or what we more broadly might call decision inertia) should be a feature of institutional and macro-economic analysis.

Tuesday, November 29, 2011

Amazing

It is a striking thing that those who look for racism always seem to find it.

Another triumph of human analytical ingenuity and confirmation bias.

Sunday, October 16, 2011

We are APES

In an article in the October issue of Quadrant, Paul Monk labels Homo sapiens Apprehensive Pattern-seeking Emotional Story-tellers, or APES. As nice a summary of our cognitive nature as I have come across.

Paul Monk writes:
As neuroscientist William Calvin puts it, our brains are susceptible to colourful rhetoric, to being swept along by group dynamics that overwhelm our emotional autonomy and critical faculties, to finding hidden patterns where none exist. They are highly susceptible for these reasons to myths, stories, superstitions and mass emotions. Our memories are selective and unreliable, our decision-making easily swayed by the last thing to make a vivid impression on us; our intuitions about logic, probability and causation are powerful but flawed in a number of ways and these flaws are actually magnified rather than diminished by our creation of complex, increasingly data-dependent social orders.
Given Paul Monk is a principal of Austhink, cognitive biases are his meal ticket.

A fine, if somewhat poetic, description of our Apprehensiveness is provided in James Carroll's flawed-but-engaging (and interestingly flawed) book Jerusalem, Jerusalem: How the Ancient City Ignited Our Modern World:
Fear is the dread of the known threat. Angst is the dread of the forever unknown, what is essential to becoming. The future does not hold danger, the future is danger. ...
Animals live in the eternal present. Humans live in the eternal coming-into-being. Angst, not fear. ... the inevitable incompleteness of experience, a being that is always becoming. What we call intellect is compelled to record that incompleteness in two dimensions, time and space. Time is measured against the past and the future -- memory and anticipation (pp. 28-9).
While he somewhat exaggerates the gap between us and animals (such as higher primates), what Carroll is alluding to here is a distinction in our expectations about the future. Economist Frank Knight famously distinguished between risk and uncertainty:
Uncertainty must be taken in a sense radically distinct from the familiar notion of Risk, from which it has never been properly separated. ... The essential fact is that 'risk' means in some cases a quantity susceptible of measurement, while at other times it is something distinctly not of this character; and there are far-reaching and crucial differences in the bearings of the phenomena depending on which of the two is really present and operating.... It will appear that a measurable uncertainty, or 'risk' proper, as we shall use the term, is so far different from an unmeasurable one that it is not in effect an uncertainty at all.
To put it more simply, uncertainty is risk that is immeasurable, not possible to calculate. But both are about anticipation, apprehensiveness, expectations: about looking forward.

As anyone in business knows, risk is heterogeneous. For example, small business copes with the unknown variances in hiring new people by using any risk-minimising techniques that are available (notably, use of networks that provide implicit “guarantees”: as in “I don’t know X but they were recommended to me by Y, who I do know and I do not believe Y wants to damage their connection to me by recommending a dud”). Large businesses, more able to cover risk and less able to directly connect effort to output, compensate by paying a “corporate premium” that acts as a “hostage” for productive behaviour by the employee. (I see no particular reason why training profiles—which are often used to explain the wage premium in large corporations—should be greatly different between large and small businesses: difficulties of supervision strike me as far more differentiating.)

Interest rates, asset prices and assessments of risks are intimately connected. As David Glasner notes:
... interest rates emerge out of the process of evaluating all durable assets, which are nothing but claims to either fixed or variable future cash flows of various durations and risk characteristics. ... One of the good things about Milton Friedman’s 1956 restatement of the quantity theory of money was his explicit recognition that interest rates are determined not in a narrow subset of markets for fixed income financial assets, but in the complete spectrum of interrelated markets for long-lived physical and financial assets.
(There are some complications in this, which need not detain us for the moment.) What makes an asset an asset is its potential for future use.

In aggregate terms, it is generally reasonable to assume that risk in an economy "bell curves": that failed judgements of risk and successful judgements of risk cancel out around a positive mean. (If that mean is positive, risk assessments on average are too high and will tend to fall; if the mean is negative, risk assessments on average are too low and will tend to rise.) But suppose some economic shock leads to a sudden downward shift in the general ability to meet established obligations: the assumption, based on previous experience, of successful overall coverage of risks will no longer apply. There will likely be an increase in people's preference for holding money (to reduce their exposure). Ironically, the overall risk profile of the economy will then tend to improve, since bankruptcy and closure will disproportionately hit those on the tail end of the risk bell curve. The effect will then be to put downward pressure on interest rates, reflecting shifting assessments of risk.
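
A small simulation sketch of that mechanism, assuming numpy is available and using arbitrary illustrative numbers: draw returns from a bell curve, hit everyone with a downward shock, remove the worst-hit tail through bankruptcy, and the surviving pool's average position improves even though everyone is worse off than before the shock.

```python
import numpy as np

# Illustrative sketch of the mechanism above: a shock shifts everyone's
# returns down, the tail goes bankrupt and exits, and the surviving pool's
# average position improves. Numbers are arbitrary.
rng = np.random.default_rng(0)

returns = rng.normal(loc=0.05, scale=0.10, size=100_000)  # pre-shock "bell curve"
print("pre-shock mean return:", round(returns.mean(), 4))

shocked = returns - 0.08                 # a sudden downward shock to everyone
bankrupt = shocked < -0.15               # the worst-placed tail fails and exits
survivors = shocked[~bankrupt]

print("share bankrupted:", round(bankrupt.mean(), 3))
print("post-shock mean, all:", round(shocked.mean(), 4))
print("post-shock mean, survivors only:", round(survivors.mean(), 4))
```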

In this situation, there may well be an increase in (negative) uncertainty: but this will not be directly reflected in interest rates because these cover only risks-as-calculated. Prices cannot directly incorporate what cannot be calculated but can and will reflect the consequences of uncertainty’s effect on behaviour.

I say negative uncertainty because, as Greg Ip notes:
… it is not “uncertainty” per se that bothers business. Whether uncertainty is unwelcome depends entirely on what’s at stake. What would you prefer: 100% probability of dying next year, or 50%? Most of us would choose the latter. Similarly, business would prefer zero probability of a burdensome new rule, but if that’s not possible, would certainly take 50% probability over 100%. The administration’s decision to delay implementation of a new ozone standard perpetuates uncertainty. Business welcomed it nonetheless because now they do not have to spend money to meet it for at least two years, and perhaps forever if in the interim a new president chooses never to implement it. Does the Federal Reserve create some uncertainty when it undertakes quantitative easing? Probably, but in the process it makes the stability of inflation around 2% much more certain, and that, most businesses would say, is a reasonable trade-off.
In the absence of any ability to calculate, the framing through which one views the incalculable determines responses. A classic instance of uncertainty shifting from positive to negative is that, when the stock market was booming during the late 1920s, lack of information over the weekend would be interpreted positively. As and after it crashed, lack of information was interpreted negatively.

Economic “confidence”—including business confidence—is, to a large degree, how what cannot be calculated is being framed in a given time period: whether it is being framed positively or negatively and how much so. This is likely to be based on various indicators but, by its non-calculable nature, cannot be definitively so. The wider the range of uncertainty, the more unstable confidence is likely to be, because the greater the possibility of new information changing how the uncertainty is being framed.

Just because something cannot be calculated does not mean we will not frame expectations to cover that uncertainty: it just means that such expectations cover more than is directly inferable from such information as we have. We will apprehensively tell stories based on (at least partly created) patterns that fit with our preferences, because we are APES. But, of course, without preference and expectations we would have no basis to act (other than randomly). Being APES may go with the territory of having a certain level of cognitive complexity.

[Cross-posted at Critical Thinking Applied]

Sunday, January 30, 2011

Risk and uncertainty (revisited)

I have previously posted about the difference between risk and uncertainty based on economist Frank Knight's famous differentiation between risk and uncertainty:
Uncertainty must be taken in a sense radically distinct from the familiar notion of Risk, from which it has never been properly separated. ... The essential fact is that 'risk' means in some cases a quantity susceptible of measurement, while at other times it is something distinctly not of this character; and there are far-reaching and crucial differences in the bearings of the phenomena depending on which of the two is really present and operating.... It will appear that a measurable uncertainty, or 'risk' proper, as we shall use the term, is so far different from an unmeasurable one that it is not in effect an uncertainty at all.
As I noted, not perhaps the clearest distinction.

In the passage above, Knight expresses the difference as risk being "a quantity susceptible of measurement" and uncertainty as being where that is not possible. I would put it a little differently. Ordinary risk is where the expected dangers are sufficiently structured that a pattern of expected outcomes can be derived from them: even if specific values cannot be calculated, general rankings and ranges can reasonably be derived, if only of the "x is greater than y" form. Uncertainty is where there is insufficient confidence in knowledge of how the likely outcomes are structured, so that calculation is frustrated even in general terms: the likely outcomes cannot be expressed mathematically in any useful sense (taking mathematics to be the science of pattern and structure), because there is insufficient pattern or structure within which they can be assessed.

Expressed that way, we can think of uncertainty as a realm of possible outcomes across which people have little or no confidence in forming expectations whereas, with risk, people do have confidence in forming expectations. Hence, risk is where the factors driving outcomes are felt to be sufficiently patterned that expectations specific enough to act on can reasonably be formed; uncertainty is where that is not so.

If we think of uncertainty as the range of possible events over which people do not believe they can form sufficiently confident expectations to act on, we can see the inhibiting effect on economic activity that negative uncertainty is likely to have: it will encourage more holding of money and other liquid assets as "buffers" against adverse outcomes and/or as resources to take advantage of opportunities that may present themselves.

Even if the uncertainty arises out of some change felt to be positive, it would have to be confidently bounded as positive in all aspects for people not to retain a "buffer" against adverse change; and, even then, there would be reason to hold resources to use when such positive opportunities present themselves. Either way, increasing one's holding of money as a store of value rather than using it as a medium of exchange would be sensible, with contractionary effects as people engage in fewer transactions.

Uncertainty due to negative factors will, though, naturally tend to have a greater contractionary effect than uncertainty due to positive factors. This is not merely because the need for a safety "buffer" is a more direct response to fear of loss, but also because fear of loss is generally greater than hope of gain. That is a well known, and rational, tendency: while both fear of loss and hope of gain are directed at what might happen, what you already have has far more existential power than what you might have. The gain does not yet exist and has not been experienced; that which one might lose already does and has been. So fear of loss naturally tends to be cognitively stronger than hope of gain. Uncertainty has more fearful power in periods of contracting economic activity than of expanding economic activity, since the possibility of loss looms larger than any hope of gain.
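
That asymmetry has a standard formalisation in behavioural economics, the loss-averse value function of Kahneman and Tversky's prospect theory. The sketch below uses the conventional textbook parameter values purely as an illustration of the point, not as anything argued in this post.

```python
# The asymmetry described above has a standard formalisation: a loss-averse
# value function in the style of Kahneman and Tversky, where losses are
# weighted more heavily than equal-sized gains. Parameter values below are
# the conventional textbook ones, used purely for illustration.

ALPHA = 0.88       # diminishing sensitivity for gains and losses
LAMBDA = 2.25      # loss aversion coefficient: losses loom ~2.25x larger

def subjective_value(x: float) -> float:
    if x >= 0:
        return x ** ALPHA
    return -LAMBDA * ((-x) ** ALPHA)

gain, loss = subjective_value(100), subjective_value(-100)
print(f"felt value of a $100 gain: {gain:.1f}")
print(f"felt value of a $100 loss: {loss:.1f}")   # roughly 2.25x as large, negative
```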

Reducing uncertainty – i.e. increasing the ambit of matters over which expectations can reasonably be formed – will tend to promote economic activity, particularly economic actions with delayed pay-offs (such as creating and benefiting from capital). Even uncertainty that is read positively is likely to be very unstable and easily subject to reversal – reading what Keynes called 'animal spirits' as how uncertainty is currently being framed – since such a reading exists in the absence of a basis on which to frame expectations and so is easily reversed by new information. So business will often prefer policy clarity – even if the policy is hostile or otherwise problematic – to policy uncertainty, since the former gives some structure within which to calculate likely results from actions over time (particularly investment). Creating and sustaining a "bazaar" economy of transactions that are immediate swaps is easy and historically common. Creating and sustaining an economy where transactions across time, notably the production of capital, are encouraged, and so become extensive, is harder, and historically rarer.

Hence the long-term economic benefits of the rule of law. It encourages the creating and utilising of capital because it lessens uncertainty about whether one will continue to benefit from the capital one creates. This goes beyond what people usually call 'sovereign risk': not just the possibility of public debt default, but more general cases of official action seriously undermining the value of assets (such as confiscation).

But the value of the rule of law extends beyond restraining officials. Contracts, for example, can be seen not merely as ways of reducing risk but as ways of lessening uncertainty: but only if they can be enforced. Just as having well-defined and enforceable property rights does much to move economic (and other social) actions out of the realm of uncertainty and into that of mere risk. So people can form expectations with a reasonable degree of confidence, and act upon them.

It is impossible to completely abolish uncertainty, just as it is impossible to completely abolish risk. But public policy that seeks to sustain a stable and prosperous society should aim to decrease uncertainty, and avoid actions that increase it.

Tuesday, June 30, 2009

Applying Barzel: cognition, belief, unions, class

It is a sign of how penetrating a book of analysis is that it expands your understanding of matters beyond what the author considers directly in their book. I found that very much to be the case with Prof. Yoram Barzel's Economic Analysis of Property Rights, which I reviewed in my previous post. So I consider below some applications of his analysis that occurred to me but did not figure in his book.

Consider Barzel's notion that treating humans as maximisers is perfectly reasonable provided one takes account of all the applicable constraints. Simple concepts of humans-as-maximisers analyse behaviour as if every decision will be fully considered on every occasion—that is, as if cognition and information were costless. But they are not. Thinking takes time and effort, as does gathering information. So, given such constraints, genuine maximisers will use habits and routines, saving on time and effort.

Moreover, the benefits of cognition depend on one’s capacity to do so. So, the less able one is at thinking, the more one will tend to rely on habits and routines and, in particular, the more one will tend to rely on “piggy-backing” on other folk’s decisions. (Hence John Stuart Mill’s observation that stupid people tend to be conservative: Walter Bagehot famously characterised British politics as the battle between the stupid party and the silly party.) There are all sorts of complexities here, however. Those less able at cognition are still going to tend to be much better informed about their own situation than someone else is. As the saying goes, a stupid man can put on his own trousers better than a wise man can do it for him.
The nature of the matter being considered is also important. Thus, the effects of a belief for the believer will generally be much more subject to feedback to the believer than will the effects of implementing said belief on others. So beliefs that operate as status-markers will have much stronger feedback in terms of their status effect than in terms of their implications for other folk. Hence, dramatically increasing the number of folk who traffic in ideas (e.g. expanding higher education) but who are insulated from the consequences of their ideas for others (e.g. by having tenure, or by working in tax-paid institutions) will tend to lower the quality of the ideas being trafficked in (in terms of wider social consequences) but, via feedback effects, increase their role as status-markers.

Transaction costs
Expressing a belief is a form of transaction. Barzel makes much of the characteristics of the transaction as being crucial to understanding how folk will behave. So, for example, as wages rise, workers will tend to move towards self-employment since they will be more able to cope with income variability (p.151).

Consider union officials (not an example Barzel uses). Union officials act as negotiation and risk management agents for workers. Union officials will prefer labour remuneration to be centralised (i.e. channelled through mechanisms they deal with) and complex (divided up into lots of allowances, benefits, etc., particularly deferred and contingent benefits). Complexity increases the union officials' importance to workers as managers of complexity and provides specific measures of their performance (in terms of identified allowances gained), while contingency (e.g. sick leave) and deferral (e.g. firm-specific superannuation) encourage workers to stay "in place".

Workers, however, will generally tend to prefer remuneration to be direct (less time-constraining), simple (easy for them to understand and manage directly) and flexible (so they can shift along various margins, such as hours of work, as convenient for them). So, the larger the aggregation of workers in similar situations (e.g. in manufacturing, construction, public service, etc), thereby having fewer coordination problems, the more unionised they are likely to be.

Thus—unless there are countervailing pressures—as the possibilities of employment diversity increase (particularly true for service industries) and wages (or, at least, household incomes) increase generally, workers will have less tolerance for the gap between their interests and those of union officials, whose utility as negotiation and risk management agents will decline. So unionisation will tend to decline (i.e. fewer workers will transact with union officials as their agents) and will do so more in the private sector than in the public sector.

Indeed, to the extent that union officials impede the application of capital to labour, unions will actually tend to reduce overall living standards and wages. Also, unions are, as coercive bodies, effectively substitutes for state action in the provision of various public goods to workers. The more “hostile” or “indifferent” the state, the more utility there is in union action for workers. The more services the state provides which are genuinely useful for workers—and not connected to union membership—and the more competition from other agents for such (e.g. lawyers), the less benefit unions can provide and the less workers will transact with union officials for services.

In Australia, Bill Kelty’s union amalgamations and centralisations aggravated the process, by increasing the distance between officials and members without any significant economies of scale gains—not a single union official position was abolished as a result of the amalgamations. Thus unionisation declined faster in the outlying States, with greater distance and difference between members and the Sydney or Melbourne headquarters.

Such declines in unionisation are not signs that workers are becoming more stupid, or some are more stupid than others, or that they are increasingly deluded. It is a rational response to shifting circumstances.

As Barzel demonstrates with a wide range of examples, transaction cost analysis has the capacity to greatly improve understanding. Classical Marxism missed the implications of marginal analysis, being stuck on the (false) labour theory of value, so all profit is exploitation, loss and risk of loss are, at best, minor issues, etc. Yet avoidance of loss and risk management are crucial to understanding economic behaviour, particularly commercial behaviour.

Marxism (and its Post-Marxist derivatives) also misses the implications of transaction cost analysis. It assumes, for example, that there are no coordination problems for classes, so they can be analysed as coherent historical agents. So all capitalists act together, all landlords act together and so on, as if coordination were costless—there are no search costs, no information costs, no monitoring costs, no divergent interests. Scarcity (e.g. of capital) can be unproblematically analysed as monopoly.

Which is nonsense, of course. A firm owner has rather more severe conflicts of interest with his or her competitors than with his or her workers (otherwise firms couldn’t exist). Conversely, the power of unions rests on excluding competing workers (known as “scabs”).

But, by assuming that all capital is held by capitalists and that capitalists have no coordination problems, it follows in Marxian economic logic that, since capitalism is very good at generating capital, the power of capitalists will increase against workers who will become poorer and poorer (either relatively or absolutely)—the immiseration thesis. Which is completely false. Capitalism does generate more and more capital, but that capital is held more and more widely. (So capitalists are, if anything, less and less able to coordinate as a group.) Labour becomes more and more scarce vis-a-vis capital. So the value of labour goes up and up via bidding processes as capitalists bid for increasingly scarce labour. So workers become wealthier and wealthier with higher and higher incomes. (Though, clearly, if women enter the workforce in significant numbers and there is large-scale migration, the scarcity effect for labour from growth in capital will be reduced, particularly for lower skilled workers.)
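
The bidding-up mechanism can be illustrated with the standard textbook Cobb-Douglas production function, under which the competitive wage tracks the marginal product of labour and rises as capital per worker grows. The functional form and parameter value below are conventional illustrative choices, not Barzel's or Marx's.

```python
# Illustrating the bidding-up mechanism with a standard Cobb-Douglas
# production function, Y = K^a * L^(1-a). Under competitive bidding the wage
# tracks the marginal product of labour, which rises as capital per worker
# grows. The function and parameter are conventional textbook choices,
# used here purely as an illustration.

ALPHA = 0.3   # capital's share of output

def wage(capital: float, labour: float) -> float:
    """Marginal product of labour for Y = K^ALPHA * L^(1-ALPHA)."""
    return (1 - ALPHA) * (capital / labour) ** ALPHA

labour = 100.0
for capital in (100.0, 200.0, 400.0, 800.0):
    print(f"capital per worker {capital / labour:4.1f} -> wage {wage(capital, labour):.3f}")
```

As capital per worker doubles and doubles again, the implied wage keeps rising even though nothing else changes, which is the scarcity effect the paragraph describes.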

Note that there is nothing here that bars regularities in behaviour based on class. People in similar situations will tend to act in similar ways. Without such regularities in behaviour, society and social analysis would be equally impossible. But class is hardly the only socially significant line of commonality/difference.

Of course, since coordination is a public good, the one body able to relatively easily provide that public good is the state. Hence the necessity of state action (or some other coercive body: organised crime is a coercive rival to state action) to produce genuine, systematic exploitation—and the state with the most overweening power (Stalin’s Soviet Union) was the most effectively exploitative, though North Korea is also extremely exploitative. (In developed societies, unions typically come second in power to enforce coordination, hence their utility for enforcing cartels, such as the Australian waterfront.)

These are just some of the ways Yoram Barzel’s transaction cost and property rights analysis can make the social world around us clearer.