Capitalism is a term coined, or at least popularised, by the enemies of the system they labelled capitalism. It was understood from the start to have a pejorative connotation, and the term’s use is still dominated by that pejorative connotation. This is despite the efforts of supporters of capitalism-so-labelled to reclaim capitalism as a positive, or at least neutral, label, particularly based on historical experience.
One should always be wary of any term with the pejorative element built in. Even if you somehow do not let the pejorative element infect your own thought, it is going to be there in the minds of many, often most, readers.
Socialism is a term coined, or at least popularised, by the proponents of the system they labelled socialism. It was understood from the start to have a positive, indeed overwhelmingly positive, connotation and its use is still dominated in many quarters by that positive connotation. This despite the efforts of the opponents of “socialism” to give it thoroughly negative connotations, particularly based on historical experience.
Capitalism at least has some vague consensus on what the term means. Socialism does not even have that, as recent American politics has demonstrated, thanks to the attempts of Sen. Bernie Sanders, self-proclaimed socialist, to win the Democratic Party nomination for President of the United States.
Capitalism has some vague consensus regarding what the term means because almost everyone agrees that there is currently, and has been, a lot of it. Apart from some labelling of command economies as state capitalism, there is a general consensus that we more or less know capitalism when we see it.
There is no such consensus around socialism, mainly because socialists typically want to dissociate the term from every command economy that has ever operated, or patent embarrassments such as Venezuela. Conversely, the enemies of socialism want to hang every command economy that has ever operated, and embarrassments such as Venezuela, on any use of socialism.
If socialism has never been “really” tried, then it can never have failed. Or if there is this new form or conception of socialism that has never been tried, then clearly it has nothing to do with any command economy that has ever operated, or any embarrassment such as Venezuela.
Of course, one might suspect that this attempt to constantly separate socialism from history might be a bit of a warning sign. Especially if folk want to play the game of comparing the ideal of socialism (carefully separated from history) with the practice of capitalism (often using carefully edited, selected or re-construed bits of history).
For me, there is a simple solution. Avoid, as much as possible, using either term. Then you can at least aspire to some analytical rigour.
Other possibilities
That does not remotely foreclose considering new social possibilities. It just means trying to do so with some analytical precision without dragging along the deadweight of fraught ideological conflicts.
Moreover, contemplating the social possibilities that do not seem to be much explored can be a very useful exercise. To consider the dogs that don’t bark in the night.
If not separating workers from the product of their labour, or simply having the workers in charge, is such a fine thing, one might think it would be entirely possible to set up worker-controlled companies. Then the non-alienated, self-controlled workers might be expected to produce so well that they can outcompete capital-owned firms in the marketplace.
Of course, if your notion of alienation covers any attempt to produce for exchange, then even in a worker-controlled firm workers will be alienated from their labour. Of course, not producing for exchange then reduces Homo sapiens to the economic level of every other species on the planet. One might consider the possibility that producing for exchange permits the scaling up of production and consumption far more extensively or efficiently than any other way of dealing with the issues of subsistence and surplus. So, perhaps giving up an advantage that may predate our emergence as a species is not a good move.
Let’s assume that something we have been doing for maybe 320,000 years or so (and certainly for 200,000 years), exchanging things we have produced, is not some alienating disaster, and go with the idea that worker control is good. Worker-controlled firms are still an entirely possible option. So, why don’t we see far more of such?
What is a firm? A firm is a mechanism for lowering transaction costs and dealing with risk. Do we want to dump risk on to labour or on to capital? Surely, on to capital. So, a labour-controlled firm is going to make the decisions, and is going to need capital, but will also want to dump the risk onto the holders of capital.
So, which firms are going to operate better? Those where control ultimately rests with those who have to deal with the risks or those where control ultimately rests with those who get to systematically dump risk on to others?
Clearly the former. The owners of a capital-owned firm get the residual income from the firm because they also cover the residual losses of the firm.
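To see the point in the simplest terms, here is a toy sketch (entirely made-up numbers, purely illustrative): wages and debt service are fixed claims that get paid first, so any variance in revenue lands on the residual claimant, the owners of capital.

```python
# Toy illustration (made-up numbers): fixed claims are paid first,
# so variance in revenue lands entirely on the residual claimant (capital).

WAGES = 70          # fixed claim of labour
DEBT_SERVICE = 10   # fixed claim of lenders

def residual_income(revenue):
    """What is left for the owners after all fixed claims are paid."""
    return revenue - WAGES - DEBT_SERVICE

for revenue in (100, 90, 70):  # good year, mediocre year, bad year
    print(f"revenue {revenue}: workers get {WAGES}, owners get {residual_income(revenue)}")
# revenue 100: workers get 70, owners get 20
# revenue 90:  workers get 70, owners get 10
# revenue 70:  workers get 70, owners get -10  <- capital covers the loss
```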
Moreover, when we say “worker controlled”, which workers? The original workers, presumably. But if you want to hire new staff, do they get the same control rights? Suppose the firm has too many workers and needs to lay off staff: how do you decide that? What are the dynamics of a group of workers who every so often may have to vote on who gets ejected from the firm?
Capital-owned firms solve these problems by essentially having a market in control. The more you are willing to buy in, the more control you have. If you want to leave, you sell your control rights. Decisions about hiring and firing are left with those who are managing the firm. (And firms with mechanisms for workers to become shareholders are still capital-owned firms.)
What about coordination issues as a worker-controlled firm gets bigger?
At this point, we can see why the somewhat Darwinian selection processes of markets select for capital-owned firms and not worker-controlled ones. It is not that worker-controlled firms are illegal, it is that they represent a risk-and-decision profile that no one (including workers) is likely to invest in. The closest we get are partnerships, and they represent human-capital firms, not worker control.
And about the state
Consider again the question: which firms are going to operate better? Those where control ultimately rests with those who have to deal with the risks, or those where control ultimately rests with those who get to dump the risk on to others? Here’s something to conjure with: is not a structure where control ultimately rests with those who get to dump the risk on to others a pretty good description of the state?
People (often with good reason) complain about the socialisation of losses and the privatisation of profits. But that is precisely what an awful lot of state politics is about. Shifting benefits to one group and costs, including risks, to another because the coercive power of the state makes that a game that can be played (and is obviously one with significant potential pay-offs). When one sees risks being shifted from capital to labour, there is generally some state action underlying it.
This is why the term state capitalism has a little bit of purchase behind it. If you squint just right.
In a command economy, the state owns all (or almost all) the capital. So, in a command economy, risk regularly gets dumped by the capital-owning state on to labour. Including risks of mass starvation or environmental degradation. But that is not because capital owns the state, but because the state owns the capital.
Lenin, Stalin, Mao, etc. did not control the state due to their ownership of capital, they controlled the creation and use of capital due to their control of the state. To call such capitalist or capitalism is to get the causal drivers entirely the wrong way around.
So, yes, it is significant that the state owns the capital in a command economy. It affects its patterns of behaviour and means there is no significant non-state control of surplus, so no significant basis of institutional resistance to the power of (those who control) the state. But the capital is entirely subordinate to the state. So, the society is not capitalist.
And we are back with avoiding the use of terms so weighed down with emotionally-laden connotations. Because, without those connotations, there would be no incentive to so badly mis-characterise the relevant social, political and economic dynamics.
Single-Spouse Marriage Systems: the elite male problem
While most human marriages have been one husband, one wife, most human societies have permitted multiple-spouse marriages. Most commonly, they permitted a man to have more than one wife.
Since fathering a child takes rather less inherent biological effort than mothering one, it is hardly surprising that multiple wives (polygyny) is the most common deviation from single-spouse marriage.
In societies where women make substantial contributions to subsistence — almost invariably, hoe-farming societies — rates of polygyny can get very high. For the cost of adding extra wives is much less than in societies where males dominate subsistence activity: typically plough-farming and pastoralist societies.
Foraging societies tend to have low levels of polygyny, as subsistence contributions are relatively even (the subsistence contribution of women is more constant, that of men more nutrient dense) and entirely labour driven. There are no productive assets, beyond weapons and other hand-held tools.
The landscape management and foraging complexity of Aboriginal societies in Australia generated distinctive patterns of gerontocratic polygyny. Old men married young women, young men married their widows and, in their old age, married young women. It kept fertility down and fostered the transfer of complex foraging knowledge across the generations. They also developed some extraordinarily complex marriage-and-kin systems as part of complex landscape management, as that preserved a rolling network of kin connections.
If women control the main productive asset, and there is no other basis for elite male status, then there is no elite male problem. If that is not the case, then if a society is going to have compulsory single-spouse marriage systems, it is the elite males who have to be convinced. There seem to be two general reasons for the elite males to accept single-spouse marriage.
First, there are very strong pressures for social cohesion. If there is a need to have maximum internal cohesiveness against outside groups — specifically, if there is a need to include low-status males — then a single-spouse system minimises internal sexual competition and maximises the breadth of stakes in the success of the group. The fewer elite males there are, the lower the cohesion pressures need to be for such an arrangement to emerge.
This is the pattern that seems to explain the emergence of single-spouse systems in the classical Mediterranean and in groups such as the early Christians and the Alevis. In the case of classical Greece and Rome, access to slaves further reduced the cost of single-spouse marriage to elite males.
The second reason for single-spouse marriage being accepted by elite males is if the education cost of raising a child, particularly a son, to elite status is sufficiently high. Multiple-wife systems mean less investment by the father in individual children. If such an investment is at a premium, then a single wife is a better option.
This is the pattern you see in the Indian caste system and in the modern world. Brahmins could theoretically have multiple wives, but very rarely did, as the training investment in raising a Brahmin son was so high. Indeed, this very high training cost seems likely to be the reason why the jati system developed — to ensure that the daughters of Brahmins, who understood the needs of raising a Brahmin son, were available to marry Brahmin grooms.
Single-spouse systems did not develop as territorial or population expansion devices. On the contrary, polygyny is a much better territorial expansion device because it creates a shortage of wives. Polygyny creates a shortage of wives as a woman who gets married leaves the marriage market, but her husband does not. So low-status men end up excluded from the marriage market. The classic response to this problem is “those people over there have women, take theirs”.
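To make the arithmetic concrete, here is a toy calculation (made-up numbers, equal sex ratio assumed):

```python
# Toy marriage-market arithmetic (made-up numbers, equal sex ratio).
# A married woman leaves the market; a polygynous husband does not.

men = women = 100
elite_men = 10
wives_per_elite_man = 3

women_remaining = women - elite_men * wives_per_elite_man  # 70 women left
non_elite_men = men - elite_men                            # 90 men left
excluded_men = non_elite_men - women_remaining             # 20 men squeezed out

print(f"{excluded_men} of {non_elite_men} non-elite men cannot find wives locally")
# -> 20 of 90 non-elite men cannot find wives locally
```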
Islam sanctified this pattern, with the Quran explicitly endorsing sexual access to “those your right hand possesses” (Ma malakat aymanukum: i.e. women acquired by the sword). Sanctified sexual predation helped drive the territorial expansion of Islam for a thousand years, from its rise in C7th Arabia to the turning back of Islamic expansion into Europe after the Battle of Vienna, 12 September 1683. (And yes, that is apparently why 11 September was chosen in 2001.)
The Norse (Viking) raids also seem to have been significantly fuelled by polygyny, and died away as the Norse lands Christianised.
If external expansion is fuelled by low-status men seeking women that the local marriage market does not provide them, then elite males have no reason to accept a single-spouse system. Which comes back to: if one wants to explain why a single-spouse system is being accepted, then one has to explain why elite males have accepted the system, as they are giving up the benefits of multiple wives.
These musings are part of the intellectual scaffolding for a book to be published by Connor Court looking at the social dynamics of marriage. As they are somewhat a work in progress, they may be subject to ongoing fiddling.
Cooperative stability and the evolution of norms
Groups have been a somewhat vexed issue in evolutionary theory, with group selection, and now multi-level selection (after the general rejection of group selection), being hotly debated topics.
Foraging hunting bands are (and presumably were) somewhat fluid entities, although typically embedded in larger social groupings. Hunting bands split, they come together, people move between them.
While low-level normative behaviour has been observed in other species, Homo sapiens engage in levels of normative behaviour way in excess of any other species.
Such normative behaviour clearly originally evolved during our very long foraging history. Clear evidence of long-distance exchange (over distances of up to 166km) has been found from about the time Homo sapiens clearly emerged as a species. Strongly suggestive evidence of exchange (over distances of around 60km) may predate our emergence as a species. And exchange is a normative behaviour. The key element in exchange is not “mine!” but “yours!”. Any chest-thumping ape can do “mine!”; it takes a normative species to systematically accept “yours!”.
Our long-term history can be understood as the spiralling up of cooperative, and thus normative, behaviour.
So, this distinctively human level of normative behaviour had to originally develop in a situation where it is likely there was some fluidity in groups. Indeed, normative behaviour can actually increase local group fluidity. First, having norms that are more than just descriptive (I do it because everyone else does) requires sanctioning behaviour. And sanctioning behaviour can be a cause of group fluidity.
Second, norms economise on information and cognitive effort, facilitating cooperation. Thus having a common normative framework permits easier movement between local groups.
So, it is not clear that group stability was the basis or benefit of the spiralling-up emergence of normative behaviour.
Cooperative stability is, however, another matter. Philosopher Cristina Bicchieri has developed a well-structured analytical framework for understanding norms and their dynamics. The framework is set out formally in her The Grammar of Society. It is set out more accessibly in her Norms in the Wild, which builds on the experience of herself and others in seeking to change social norms.
Social norms are built on social expectations — empirical expectations (what you expect others to do) and normative expectations (what you expect others to believe you should do). They operate on the basis of schemas (sets of belief) and scripts (patterns of action). They usually involve some system of sanctions.
Stable expectations and scripts make other people’s actions much more predictable, so make cooperation much easier to attain and sustain. Norms therefore generate (or at least anchor) cooperative stability. And cooperative stability can, depending on circumstance, promote group stability. But cooperative stability is what would have been originally selected for, even in situations of relative local group fluidity and even if such cooperative stability increased local group fluidity.
Thus, norms economise hugely on the cognitive and information effort required for cooperation. They are, in a sense, entrenched social bargains. (Or, at least, patterns that greatly reduce or structure the ambit of bargaining required for social cooperation.)
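As a toy rendering of the conditional-preference idea (my simplification for illustration, not Prof. Bicchieri’s formal model): an agent follows a norm only when both kinds of expectation clear some threshold, which is why stable expectations make behaviour predictable and cooperation cheap.

```python
# A much-simplified, illustrative rendering of conditional conformity
# (not Bicchieri's formal model): conform only if both empirical and
# normative expectations clear a threshold.

def conforms(share_seen_conforming: float,
             share_believed_to_expect_conformity: float,
             threshold: float = 0.5) -> bool:
    """Follow the norm only if (a) you expect enough others to follow it
    (empirical expectation) and (b) you expect enough others to believe
    you should follow it (normative expectation)."""
    return (share_seen_conforming >= threshold
            and share_believed_to_expect_conformity >= threshold)

print(conforms(0.8, 0.9))  # True: both expectations met, so the norm holds
print(conforms(0.8, 0.2))  # False: empirical but no normative support
```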
But what about moral norms? Moral norms are more absolute than social norms. As Prof. Bicchieri says, they have an element of unconditionality that social norms do not.
As we developed more sedentary living patterns, and then farming and pastoralism (a period during which our adaptive evolution seems to have sped up, presumably due to the dramatic changes in selective pressures), group stability would have acquired more survival and subsistence value. The absolute or unconditional nature of moral norms could have been selected for, as they promoted group stability. But they were selected for by building on the existing capacity for social norms. Which themselves probably developed out of descriptive norms (norms people prefer to conform to on the expectation that others do).
Thus, the claim is that normative behaviour in general developed out of its ability to foster cooperative stability. The argument by David C. Lahti and Bret S. Weinstein that moral norms developed as group stability gained a higher survival and subsistence premium would be congruent with this, but as something that occurred relatively late in our evolutionary history. With what was being selected for being the ability to thrive in larger and more stable groups, even if those groups were not themselves stable enough as populations to provide specific evolutionary pressures.
A test for this hypothesis would be to check the relative importance of social norms and moral norms in different human populations. The more forager-based and fluid the social groups, the more dominant social norms can be expected to be. The more sedentary, stable and larger the social groups, the more significant moral norms can be expected to be.
With religions and faith systems, as structures of the sacred, reflecting this pattern in their development through time and across societies.
Prestige and dominance
There is also a likely connection to prestige and dominance. Prestige, bottom-up status, is a key social currency of human cooperation. Foraging societies generally display very strong anti-dominance patterns of behaviour, as dominance behaviour (top-down status) undermines local group cooperation. So, suppressing dominance behaviour would actually increase the capacity for, and the stability of, cooperative behaviour.
As more sedentary patterns of living, then farming and pastoralism, arose, dominance behaviour re-emerged. Including some very extreme patterns of dominance behaviour, such as human sacrifice as part of funeral rites. The more absolute nature of moral norms would more readily sustain dominance behaviour, especially extreme dominance behaviour, than social norms.
Morality could, however, also provide some protections against dominance behaviour. The so-called golden rule of generalised reciprocity (treat others as you would be treated), forms of which develop as more obviously moralistic religions and faith systems emerge, is reasonably construed as a social mechanism for dominance mitigation and cooperation enhancement (especially exchange enhancement).
If this is correct, then the undermining of any notion of a shared moral identity will be associated with intensified dominance behaviour.
These musings are part of the intellectual scaffolding for a book to be published by Connor Court looking at the social dynamics of marriage. As they are somewhat a work in progress, they may be subject to ongoing fiddling.
Big Tech, Big Food Products and Big Education: leveraging our tastes to stress our bodily and social metabolisms to destruction
We have an epidemic of evolutionary novelty, and it is killing us.
Bret Weinstein
Everybody is plugged into an apparatus that is sitting in between us, in the way conversation used to and, what it is doing, is that it is feeding us things that confirm what we actually believe, much of which is real, but that, in effect, people, smart scientific people, who know very well that the right way to think carefully is to be falsificationist, to look for things that disconfirm your beliefs, are being fed an overwhelmingly verificationist message and that is causing everybody to be dead sure they know what is going on, when very few of us have any clue.
Bret Weinstein
We Homo sapiens are remarkably adaptable, including in our diets. Human populations have lived for generations on a remarkably varied range of diets.
The consistent feature of human diets is food preparation and cooking. We are cucinivores, food preparers. With the last century or so seeing dramatic shifts in how we process food.
As our technological capacities have expanded, so has our capacity to process food. This has exposed a major weakness. What is palatable does not have a strong connection to what is nutritious. But it is far easier to sell to our palate than to our nutrition. Indeed, it is possible to sell to our palate in a way that actively misleads and misdirects our hunger signals.
This misfiring between palate, hunger sensations and nutrition has led to an increasingly metabolically unhealthy population. Something I have discussed previously.
But a similar mismatch is stressing our social metabolism. (A metabolism being a system for breaking things down — catabolic processes — and building things up — anabolic processes.)
Evolutionary biologists Heather Heying and Bret Weinstein have a nice short discussion of the Twitter recommendation algorithm keeping out of their feeds tweets from people they follow, where those tweets do not conform to the algorithm’s apparent inferred positioning of their cognitive preferences. The Twitter algorithm is, in effect, trumping their choices about who to follow by its attempt to identify and target their cognitive palate.
This could be seen as shadow banning. I doubt that this is a significant phenomenon, at least in a political context on large online platforms, as it would seem to involve a fair bit of effort. Especially as much the same apparent effect could be created simply by, in this instance, the Twitter algorithm targeting inferred cognitive palate.
The much bigger issue is that the Twitter algorithm is doing at least two noxious things. First, it is actively working against people’s attempts to have a broad range of information sources, at least regarding viewpoint diversity. So it is undermining cognitive nutrition.
Second, it is intensifying, and to some extent creating, information and viewpoint silo-ing. That is, people getting quite different, but patterned, streams of information. Not only patterned, but patterned in a way that separates people into systematically information-restricted-and-differentiated groups sharing common viewpoints.
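A deliberately crude sketch of the kind of ranking logic being described (a hypothetical illustration only; Twitter’s actual algorithm is not public): if a feed is ranked purely by predicted engagement against an inferred ideological position, posts from followed accounts that sit outside that position simply never surface.

```python
# Hypothetical engagement-first feed ranker (illustration only, not
# Twitter's actual code). Followed accounts whose posts do not match the
# user's inferred leaning simply never surface.

def predicted_engagement(user_leaning: float, post_leaning: float) -> float:
    """Crude proxy: the closer a post sits to the user's inferred leaning,
    the more engagement the model predicts."""
    return 1.0 - abs(user_leaning - post_leaning)

def build_feed(user_leaning, followed_posts, feed_size=3):
    ranked = sorted(followed_posts,
                    key=lambda p: predicted_engagement(user_leaning, p["leaning"]),
                    reverse=True)
    return ranked[:feed_size]

posts = [{"author": "A", "leaning": 0.1}, {"author": "B", "leaning": 0.2},
         {"author": "C", "leaning": 0.3}, {"author": "D", "leaning": 0.9}]

# A user inferred at 0.2 never sees D, despite choosing to follow D.
print([p["author"] for p in build_feed(0.2, posts)])  # -> ['B', 'A', 'C']
```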
It is well established in social science that the more conformity of viewpoint and opinion there is in a social group, the more intense the shared views are likely to become and the worse any decisions are likely to be. The former because what is common gets reinforced and intensified. The latter because more and more things that turn out to matter are likely not to be considered at all.
Social media algorithms acting to reinforce viewpoint-and-information narrowing are thereby actively deranging collective cognitive functioning. Or, as we might say, collective sense-making.
So, the recommendation algorithms are undermining attempts to maintain cognitive nutrition and actively assisting in deranging the collective cognitive metabolism. And they are doing it in a way analogous to how Big Food Products is undermining human metabolic health: by targeting our palate in ways that are disconnected from our nutrition and that also derange our hunger signals. In the case of Big Tech, it is our cognitive palate, our cognitive nutrition and our social-emotional signals, but the underlying pattern is remarkably similar.
Big Tech are for-profit businesses. There are some inherent complexities involved in online media platform provision that Canadian YouTuber J. J. McCullough has an informative short discussion of. His discussion is specifically about YouTube, but has wider application.
Nevertheless, as with Big Food Products, income seeking encourages Big Tech to target (cognitive) palate when not only is (cognitive) palate not well connected to (cognitive) nutrition but the form of the targeting actively deranges our internal feedback signals. This is also clearly disastrous in its implications.
Especially as Twitter is, as more than one exiting media insider has pointed out, effectively becoming the editor for many mainstream media publications. Journalists are, after all, hardly immune to these patterns. (And those that are more resistant are increasingly bailing to their own, independent, operations using, ironically, online media.)
There was already a serious problem with narrative-driven journalism (see discussion of an obvious case here and a much more significant example here). The interaction of narrative-driven journalism with Big Tech is making the problem with both worse.
Disrupting the public sphere
Analyst Martin Gurri has explored the disruptive effects of new media in his book, originally published in 2014, The Revolt of the Public and the Crisis of Authority in the New Millennium. He also blogs about the issues he raises in the book. In a recent essay he wrote:
The collapse of trust in our leading institutions has exiled the 21st century to the Siberia of post-truth. I want to be clear about what this means. Reality has not changed. It’s still unyielding. Facts today are partial and contradictory — but that’s always been the case. Post-truth, as I define it, signifies a moment of sharply divergent perspectives on every subject or event, without a trusted authority in the room to settle the matter. A telling symptom is that we no longer care to persuade. We aim to impose our facts and annihilate theirs, a process closer to intellectual holy war than to critical thinking.
Historian Niall Ferguson has argued that the Internet has disrupted the public sphere in a way that is deeply analogous to the effect of the printing press on late medieval and early modern Europe.
As that was the time of the Reformation and the wars of religion, not a reassuring analogy.
Ferguson also notes that iconoclasm, tearing down statues and other public icons, is a typical manifestation of the disruption of the public sphere.
One wonders if that makes Robin DiAngelo’s bestseller White Fragility the contemporary equivalent of the C16th and C17th best seller, the Malleus Maleficarum, the Hammer of the Witches. Economist Glenn Loury and linguist John McWhorter have discussed how the current use of racist! is very like the past use of witch!. Or, indeed, the past use of heretic!.
Social psychologist Jonathan Haidt has already famously observed that morality blinds and binds: it brings people together around shared norms and values, but it also blinds their view of evidence and others. Having narrative-driven journalism interact with cognitive-palate-targeting social media makes this dynamic much worse.
Selecting for cognitive intensity
Moreover, any cognitive identity built around a system of belief is going to have some resistance to disconfirming or problematic facts. Create a large enough community of people with such a cognitive identity and a selection process is set up, selecting for mechanisms that protect that identity, that protect that intensified collective cognitive palate.
Something education is supposed to do is to help us make sense of the world around us. (That is something journalism is supposed to do too.) But educators (and journalists) are as prone to human foibles as the rest of us. In particular, they are likely to care about status.
The trouble with schools of education, and schools of journalism, in higher education is that, to the extent that education or journalism can be said to be academic disciplines at all, they do not have much inherent intellectual heft or status to them. The temptation to compensate for that lack of intellectual heft or status by seeking some other form of status thus becomes very strong, with that very lack of intellectual heft providing little or no countervailing pressure. The obvious form of (compensating) status to embrace is moral status: to embrace some form of activism, of seeking to make the world a better place.
Schools of education and journalism then become strongly prone to shift from being centres of skill-based education (here are tools to help you go about the tasks of teaching or reporting) to being centres of ideas-based indoctrination (here’s how you become a Good Person making the world a Better Place).
So, we get young people with little life experience who go to such schools of activism to become teachers or journalists, who take those attitudes to their workplaces, spreading such ideas among their students and any young readers, who then go to university … And so the cycle repeats and intensifies.
Moreover, there is nothing to stop this pattern being adopted in other parts of higher education. Hence the proliferation of so-called “grievance studies” courses and degrees, which are rather better described as gimme degrees in moral self-congratulation. Which universities happily provide, because there’s money in it.
Their graduates then go out into workplaces and bureaucracies, including university administrations, and we get another round of the cycle.
Note that such is better described as status-hacking rather than status built on serious understanding or achievement.
This shifting from education in role-undertaking skills to indoctrination in status-providing ideas is, of course, another manifestation of appealing to cognitive palate rather than genuinely providing cognitive nutrition. With the emphasis on status-providing ideas over what actually works having the effect on the schooling of students that one might expect.
So, there is an expanding social environment in which selection takes place for memes (ideas that differentially replicate in a competitive environment, just like genes) that create status-providing cognitive identities that appeal to cognitive palates and which include protections from disconfirming facts, ideas or concerns.
Sooner or later, a set of super-replicating memes would be likely to evolve, helping to create and intensify narrative-driven journalism. These then interact (disastrously) with social media structured to appeal to your cognitive palate but mislead and derange mechanisms of cognitive nutrition. (Such as dismissing science, a key provider of cognitive nutrition, as a patriarchal tool of white supremacy.)
Prestige, bottom-up status, is a key currency of human cooperation. In thriving societies and civilisations, prestige is typically harnessed for pro-social activities. In declining societies and civilisations, prestige is increasingly harnessed for anti-social activities. Including social dominance (i.e. top-down) status games.
And, as past waves of iconoclasm and social upheavals have demonstrated, people can be perfectly happy to destroy lots of physical, social and other capital if such destruction offers them status rewards.
Obviously, (1) this is the situation we are now in, and (2) this is not going to end well.
Health policy and debate misfire: how private corporations, non-profits, medical professionals and government health bureaucracies colonise our ill-health
We have an epidemic of evolutionary novelty, and it is killing us.
Bret Weinstein
Health policy is a matter of lively debate in the US, largely because the US political system has never fully settled on a coherent health system.
Health as a policy issue has all the factors that make for difficult public policy. It really matters to people. There are deep information asymmetries (lots of folk do not know what they need to know and have to rely on others). It can be ferociously expensive. Both public and private provision have obvious problems that the partisans of the other can point to. Lots of people’s incomes are at stake.
Something of a perfect storm of difficulties.
Health issues can be divided into acute (infections, accidents, violence) and chronic (everything else).
Western medicine is generally very good at dealing with acute conditions. Acute conditions have clear indicators of success; they are an immediate, identifiable problem; they fit in with the anatomical foundations of Western medicine; and there is a very broad incentive to get it right. There are certainly grounds for debate about how best to provide and fund acute care. Nevertheless, acute medicine is mostly a relatively straightforward provision-and-insurance problem.
If acute care was split off and dealt with specifically, I strongly suspect reasonable mechanisms with good incentives could be agreed on fairly easily. Mainly because the inherent incentives in acute care are success-oriented.
Chronic conditions are a very different matter. Western medicine has been far less successful at dealing with chronic conditions. Cancer is still often a death sentence and the metabolic health of Western populations (obesity, high blood pressure, diabetes, etc) has been steadily getting worse for decades.
Moreover, as noted here, most of those chronic conditions are related to the mismatch between how we evolved to live and how we do live.
Health expenditure has been consuming ever higher shares of GDP. There is a view that it is perfectly natural that health expenditure should go up. As people get richer, they want to fund better health, they live longer, so of course health expenditure goes up.
I want to suggest that is (mostly) bollocks. As we get richer and more knowledgeable, it should be easier to achieve and maintain good health. We should not be getting chronically sicker, which we are. Health expenditure is going up far more because we are getting chronically sicker than because of some preference for better health or the experience of increased longevity.
Incentives matter
Looked at dispassionately, the reason for the increasing chronic ill-health of Western populations is simple. That is precisely what the current incentives are structured to produce.
Start with Big Food Products. Their incentive is not to provide nutrition, their incentive is to get you to eat more. Since what you eat affects how much you eat, and because palatability and nutritional value are so weakly connected, getting us to eat more is both relatively easy and immensely profitable. If we eat more and more, if we eat more and more of what is palatable but not metabolically healthy, thereby increasing our metabolic stress, we will get more and more metabolically unhealthy. Which we are. But we will eat a lot more of Big Food Products’ multi-billion dollar income-earning offerings on our way.
People will do more of what makes their income go up. People are paid to do more of what makes their income go up.
Consider Big Pharma. Their incentive is far more to provide suppression of symptoms than it is to provide cures. A genuine cure — you take this, the problem goes away, so you can stop taking this — is much less profitable than suppression of symptoms. For instance, tablets for high blood pressure do not cure what causes the high blood pressure, they suppress the symptoms. An amazing number of prescriptions are not curative; they merely suppress symptoms.
People will do more of what makes their income go up. People are paid to do more of what makes their income go up.
Consider health advocacy. The dominant donors for health advocacy non-profits such as heart associations, diabetes associations, and so on are Big Pharma and Big Food Products. Moreover, if those conditions actually went away, if they became insignificant, then so would the point of having those non-profit associations and the jobs they fund.
People will do more of what makes their income go up. People are paid to do more of what makes their income go up.
Consider Official Psychiatry. What distinguishes psychiatrists from clinical psychologists is that psychiatrists can prescribe drugs. (And clinical psychology is more likely to have a stronger base in actual scientific evidence.) So, psychiatrists have a strong incentive to focus on the prescription of drugs.
Psychiatric drugs are almost invariably not curative. You generally do not take them, get cured by the drug, and stop taking them. We do not know enough about the interaction between neurophysiology, neurochemistry and cognitive patterns to reliably produce curative psychiatric drugs.
Psychiatric drugs typically suppress symptoms. Which means that they are often used much longer than an actual cure would be. Indeed, it is often not clear that they are any better than, and may be worse than, the passage of time. (They may be worse than the passage of time because they could be suppressing curative responses that might occur if things were allowed to run their course.) But suppression of symptoms can easily provide a more secure stream of income than an actual cure. With failing to ask awkward questions, or consider awkward data, being very successful income-and-authority self-defence devices.
People will do more of what makes their income go up. People are paid to do more of what makes their income go up.
As an aside, and on a somewhat related issue, in the US, gender-affirming therapy plus the Dutch medical application model is the basis for treating gender dysphoria. This maximises the chance that anyone presenting with gender dysphoria will be medicalised and so become a permanent consumer of artificial hormones, with more income to be made the earlier in life people transition. The gender-affirming medicalisation approach has been endorsed by various peak medical bodies, bypassing the normal interrogation of the scientific evidence. (Once a viewpoint gets enough “this is what good people believe” oomph behind it, it can memetically capture institutions remarkably quickly, particularly if dissent is sanctioned.)
Getting back to chronic conditions in general, general practitioners, your ordinary medicos, are in much the same situation as psychiatrists. Suppression of symptoms can easily provide a more secure stream of income than an actual cure. With failing to ask awkward questions, or consider awkward data, being very successful income-and-authority self-defence devices.
Stop and consider what doctors in the past were paid to deliver as “cures”. It is clearly entirely possible to sustain a medical profession on a very poor knowledge base. Let alone a system that does successfully suppress symptoms, at least to a degree, and for significant amounts of time.
People will do more of what makes their income go up. People are paid to do more of what makes their income go up.
Consider government health bureaucracies. Notionally, they spend money to improve the health of the populace. In reality, their revenue goes up the chronically sicker the populace gets. The behaviour that is selectively rewarded, by increasing their budgets, is behaviour that generates (or at least does not seriously stop) the populace getting chronically sicker.
Hence we see nutrition guidelines that have not been, and are not, grounded in the science, that make no evolutionary sense (our foraging ancestors did not eat breakfast, did not eat frequently during the day, did not eat much in the way of whole grains, did not eat seed oils and definitely ate a fair bit of fat) but do lead to a chronically sicker population, so increased government health budgets and larger health bureaucracies.
People will do more of what makes their income go up. People are paid to do more of what makes their income go up.
It is striking how pervasive the effects of the official nutritional guidelines are. They affect all food provided by government agencies — thereby undermining the metabolic health and capacity of armed services personnel — and determine the nutritional content of medical training and of the advice provided by large medical practices.
It is not that there is some malicious conspiracy to preside over ill-health. Instead, social selection processes operate, where access to income flows is what is being selected for. The incentives are to go with the income flows. They are certainly not to have less income.
Especially when huge sums are at stake, as they are, selection will be for ideas that are good at generating income, rather than selecting for truth or scientific accuracy.
To understand how dire the issue of nutrition is, consider this: calories in, calories out. This is at once an obviously true mantra — we have to be in calorie deficit to lose weight — and yet is so profoundly misleading as to effectively be a lie.
For here’s the thing: what we eat affects how much we eat, how much we move and how active our metabolism is. Calories are not remotely equal. There are essential proteins. There are essential fats. There are no essential carbohydrates. With enough fat and protein, your body will make all the glucose it needs. Any nutritional guidelines that encourage you to eat frequently, and to eat lots of carbohydrates, are encouraging you to eat more and, for most people, to metabolically stress your body.
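To see why the coupling matters, here is a minimal energy-balance sketch in Python (the starting expenditure, the adaptation rate and the 7,700 kcal/kg figure are illustrative assumptions, not clinical claims). If expenditure drifts toward intake, the naive calories-in-calories-out arithmetic greatly overstates what a fixed deficit delivers:

```python
# Toy energy-balance sketch. All numbers are illustrative assumptions,
# not clinical data. Negative weight change = weight loss.

def simulate(intake_kcal: float, adaptation: float, days: int = 365) -> float:
    """Weight change (kg) when daily expenditure drifts toward intake.

    adaptation = 0 gives the naive CICO picture (expenditure fixed);
    adaptation > 0 lets metabolism slow as intake drops.
    """
    expenditure = 2500.0          # assumed starting daily expenditure (kcal)
    weight_change = 0.0
    for _ in range(days):
        deficit = expenditure - intake_kcal
        weight_change -= deficit / 7700.0   # ~7700 kcal per kg of body fat
        # Expenditure drifts part of the way toward intake each day:
        # the adaptive response the mantra ignores.
        expenditure += adaptation * (intake_kcal - expenditure)
    return weight_change

naive = simulate(2000.0, adaptation=0.0)      # fixed "calories out"
adaptive = simulate(2000.0, adaptation=0.01)  # metabolism adapts
print(f"naive CICO: {naive:+.1f} kg; with adaptation: {adaptive:+.1f} kg")
```

Both runs use the same arithmetic identity; only the assumption that “calories out” stays fixed separates them, and that assumption is doing all the work in the mantra.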
Why would government health bureaucracies produce such nutrition guidelines? Consider what increases health expenditures, and who has the most incentive to lobby hardest. Do not delude yourself that the guidelines are well grounded in the scientific evidence. Evidence is a difficult matter in nutrition, which makes it easier to select for convenient agendas and to maintain policy inertia.
Colonising ill-health
The “calories in, calories out” mantra goes with “people are getting obese because we are getting greedier and lazier”. Apparently, something magical happened a few decades ago to destroy people’s culinary moral fibre. This is not an explanation, it is a justification for colonising people’s ill-health.
Consider that what you eat affects how much you eat, how much you move and how active your metabolism is. Consider that rising obesity, and other signs of metabolic disorder, coincide with the adoption of modern processed foods (with lots of seed oils). And then look again at the “greedy and lazy” pseudo-explanation. (At this point, anger is a reasonable reaction.)
Colonisers always claim that they are there, and that they are justified, because of the “problems” and “deficiencies” of the colonised.
We have an entire interlocking set of industries and professions that colonise people’s ill-health far more than they provide genuine cures. And the “greedy and lazy” explanation of obesity blames the victims for their profitable exploitation by those — corporations, non-profits and government bureaucracies — that are colonising our collective ill-health.
Who makes money from people eating less, eating less frequently, and perhaps fasting beneficially? Who makes money from the opposite? And how much?
People will do more of what makes their income go up. People are paid to do more of what makes their income go up.
Another factor is that the anatomical-structure focus of Western medicine may not be well suited to conditions that are far more about energy flows, about the energetics of the body, than about its structures per se. Though having so many incentives working for the suppression (or even generation) of symptoms, rather than actual cures, hardly encourages adopting a more effective analytical paradigm. Hence the continuing scandal of the lack of serious nutritional training for doctors.
As has been observed, you take an animal to the vet and the vet will ask you what you have been feeding your pet. You go to your doctor, they are not likely to ask what you have been eating. If they do, there is a high probability they will not be asking from a knowledge base grounded in solid science. (A depressing amount of what passes for nutrition ‘science’ is of remarkably low scientific quality: associational studies in particular have a startling failure rate.)
What all this comes down to is Western populations getting chronically sicker and the budgetary burden of health expenditures getting progressively worse. Because that is what the incentive structures of Big Food Products, Big Pharma, health advocacy, Official Psychiatry, standard medical practice and Big Health Bureaucracy are all set up to produce. They are structured to colonise our (increasing) mental and physical ill-health far more than to provide good nutritional practices or actual cures. (Good nutritional practices and cures may often amount to the same thing.) With the flow of funds from corporations to research, advocacy groups and doctors aggravating the problems.
So, where acute care is a relatively straightforward provision-and-insurance problem with generally pro-social incentives, chronic conditions are a profoundly dysfunctional mess in which no system of provision will do more than generate ever higher expenditures, so long as that continues to be what the incentive structures are overwhelmingly set up to create.
The endless, deeply ideological, arguments (better characterised as memetic warfare) over public versus private provision in health are just arguments over how much private or public bodies will colonise the chronic ill-health of Western populations. Whether they will do so is not currently in dispute.
(And I agree, private providers will colonise our ill-health more efficiently and with more charm. They have to work harder than government bodies, colonisation generally being easier if folk are coerced into providing the income flow.)
Can anyone, in all the sound and fury over health policy, direct me to anything that suggests policy makers have even asked the right questions? Because if they have not even asked the right questions, how can we expect good answers?
But, then again, who has the incentives to ask the right questions?
Voices from the collapse of mainstream media
The ideas that catch on are the ones that win in narrative warfare. … That rivalrous game-theoretic environment is going to be selecting for what is effective, not what is true. And definitely not what is good for the whole.
Daniel Schmachtenberger
There is a mantra going about: “go woke, go broke”. But Jesse Singal has suggested that in media, it is more “go broke, go woke”, as the collapse of standard media business models makes it harder to sustain diversity of thought. He made the point in a Rebel Wisdom discussion.
MSNBC producer Ariana Pekary, in her resignation post, makes the point from inside the industry:
But behind closed doors, industry leaders will admit the damage that’s being done.
“We are a cancer and there is no cure,” a successful and insightful TV veteran said to me. “But if you could find a cure, it would change the world.”
As it is, this cancer stokes national division, even in the middle of a civil rights crisis. The model blocks diversity of thought and content because the networks have incentive to amplify fringe voices and events, at the expense of others… all because it pumps up the ratings.
See also her follow-up post on how Fox is also narrative-driven.
Columnist Barbara Kay’s resignation from the National Post in Canada speaks to the conformity pressures now operating:
Since the early 2000s, journalists have anticipated the demise of their own industry. But we wrongly assumed that this decline would be driven exclusively by economic and technological factors. In recent months especially, it’s become clear that ideological purges have turned a gradual retreat into what now feels like a full-on rout. This is not a case of a lack of demand: The rise of popular new online sites shows that Canadians are eager for fresh voices and good reporting. Rather, legacy outlets are collapsing from within because they’ve outsourced editorial direction to a vocal internal minority that systematically weaponizes social media to destroy internal workplace hierarchies, and which presents its demands in Manichean terms. During the various iterations of political correctness that appeared since the 1990s, National Post editors fought against this trend. But as the public shaming of Rex Murphy shows, some now feel they have no choice but to throw down their weapons and sue for peace.
Bari Weiss’s resignation from the New York Times has been the most famous recent instance of conformity pressures leading to public exit.
Twitter is not on the masthead of The New York Times. But Twitter has become its ultimate editor. As the ethics and mores of that platform have become those of the paper, the paper itself has increasingly become a kind of performance space. Stories are chosen and told in a way to satisfy the narrowest of audiences, rather than to allow a curious public to read about the world and then draw their own conclusions. I was always taught that journalists were charged with writing the first rough draft of history. Now, history itself is one more ephemeral thing molded to fit the needs of a predetermined narrative.
My own forays into Wrongthink have made me the subject of constant bullying by colleagues who disagree with my views.
Andrew Sullivan made similar points in his last column for New York magazine:
What has happened, I think, is relatively simple: A critical mass of the staff and management at New York Magazine and Vox Media no longer want to associate with me, and, in a time of ever tightening budgets, I’m a luxury item they don’t want to afford. And that’s entirely their prerogative. They seem to believe, and this is increasingly the orthodoxy in mainstream media, that any writer not actively committed to critical theory in questions of race, gender, sexual orientation, and gender identity is actively, physically harming co-workers merely by existing in the same virtual space. Actually attacking, and even mocking, critical theory’s ideas and methods, as I have done continually in this space, is therefore out of sync with the values of Vox Media. That, to the best of my understanding, is why I’m out of here.
Matt Taibbi has reported on the general pattern:
Today no one with a salary will stand up for colleagues like Lee Fang. Our brave truth-tellers make great shows of shaking fists at our parody president, but not one of them will talk honestly about the fear running through their own newsrooms. People depend on us to tell them what we see, not what we think. What good are we if we’re afraid to do it?
Documentary filmmaker Christopher Rufo has recently described the process operating in his part of the media world:
I saw the ideology coming in, in a heavy way, about five years ago where the whole documentary industry was really kind of conforming to identity politics, the structure of identity politics and the reward system of identity politics. … making niche films that have no broad audience and which only please the activist gatekeepers. …
The accepted discourse was quite narrow … The economy of the documentary world is explicitly, and now almost solely, predicated on identity issues. …
I realised quickly that this [conference] was not a place for dialogue, this was a place for kind of sermonising.
Heather Heying, in a recent Dark Horse podcast, where she and her husband Bret Weinstein report on what is happening in Portland, specifically mourns Harper’s going along with the standard trend of not reporting the narrative-inconvenient bits of what is happening in Portland (peaceful protests in the day, riots at night). I am not sure which podcast it was now, but this excerpt from a recent podcast provides eye-witness discussion.
But they saw the exclusion/inclusion patterns of narrative-driven media back in 2017, when the New York Times and similar media would not report (or not accurately) what was going on at Evergreen State College because it was narrative-inconvenient, while Fox and the Wall St Journal would report on it (and generally accurately) because it was narrative-convenient for them.
Jesse Singal, Matt Taibbi, Heather Heying, Bret Weinstein, Ariana Pekary. These are not remotely right-wing or conservative folk.
Jesse Singal’s “go broke, go woke” comment seems pretty spot on.
This is institutional rot. Going on right in front of us.
Tuesday, August 4, 2020
The Desperate Need to Put Umpires Back Into the US System of Government
The American Republic is a political system run without umpires. The President and the Governors are elected, highly partisan officials. The Speaker of the US House of Representatives, and the equivalent presiding officers of the State legislatures, are highly partisan figures. As the increasingly existential struggles over Supreme Court nominations show, American judges are also partisan figures, though not quite as blatantly.
The American political order has only partisan figures at its centre. There is no institutional figure or authority that is not inherently political and cannot become consumed by processes of political polarisation. There are no umpires.
How has running a political system without umpires worked? Fairly badly. 72 years after the US Constitution came into operation in 1789, the United States were the disunited States, fighting a bitter civil war. 155 years after the end of that civil war, the American Republic is drifting towards another civil war. This is not an impressive record.
Moreover, even with this record, the United States is a relatively successful presidential republic. The overall track record of presidential republics is not impressive.
Fortunately, there is a straightforward solution to the lack-of-umpires problem. Become a Parliamentary Republic. This could be done very simply.
First, have the President elected by two-thirds vote of the Congress, in a secret ballot. Have the State Governors elected by two-thirds vote of their Legislatures, in a secret ballot. If two-thirds is not broad enough, make it a three-quarters vote.
Second, create an Executive Council. The members of the Executive Council would be the President or Governor, the Majority Leader of the lower house of the Congress or State legislature and such members of Congress or State legislature as the Majority Leader shall recommend. Require all acts of the President or Governor to be done on the advice of the Executive Council except such reserve powers that are necessary for the maintenance of responsible government. (There is a rich common law tradition of reserve powers: I am sure that US lawyers will be able to cope.)
Third, require all Cabinet officers to be members of the Congress or State Legislature.
Fourth, require all regulations be approved by the relevant committee of the Senate or Upper House of the State Legislature before they can have legal force. But permit the Executive Council to remove any regulation at any time.
This fourth element is not necessary to become a Parliamentary Republic, but it is required to bring some reasonable level of accountability to the administrative state. It would also be easy to systematically go through and remove existing regulations, forcing regulatory bodies to get direct legislative approval for any re-issuing of said regulations.
The US would, by these simple constitutional changes, be turned into a Parliamentary Republic. The President and Governors would become symbols of authority without having substantial power. The Power and the Glory would thereby be separated, one of the great virtues of constitutional monarchy. A virtue that Parliamentary republics can also manage.
Umpires would also thereby be inserted into the political system. Even better, the notion of umpires would be inserted into the political system. At the ceremonial heart of politics would be a non-political office.
Moreover, a Parliamentary state can be a thoroughly effective Great Power. Britain conquered about a quarter of the globe as a Parliamentary state.
The US notion of executive Governors was a quick-and-dirty fix to the problem of what to do with the colonial Governors after the separation of the revolting colonies from allegiance to the British Crown. A quick-and-dirty fix given a theoretical framework by Montesquieu's hilariously inaccurate theorising about how the British political system worked.
A civil war and an ever-nearer civil war later, executive Presidents and executive Governors are a quick-and-dirty fix that is way past its use-by date. Especially given that presidential republics generally have remarkably poor records.
So, I say to the citizens of the American Republic: bring back umpires! Put a lid on the political polarisation! Become a Parliamentary Republic.
(Alternatively, go the whole hog and become a monarchy again. That seriously separates the power and the glory and creates a thoroughly non-political umpire. Princess Anne, the Princess Royal, is sensible, and potentially available. A former Olympian even. I am sure she wouldn't mind one-upping her brother.
However amusing a thought, doing the full monarchy option is not necessary.)
Becoming a Parliamentary Republic would be surprisingly constitutionally easy, and would put umpires back into a political system that desperately needs some limit to the processes of polarisation that are pulling the American Republic apart.
Some considerations of evolutionary theory
Nothing in biology makes sense except in the light of evolution.
Theodosius Grygorovych Dobzhansky (1900-1975)
Pair bond is not the same as sexual activity
One of the things I find somewhat frustrating is the failure to consistently use terminology that distinguishes between pair bonding and who an organism has sex with. Clearly, there is some overlap and connection there, but they are not the same thing, and using the same terminology to cover both encourages sloppy thinking.
Thus 'monogamy' can mean having sex with only one partner or having only one spouse. This ambiguity can derail discussions and confuse analysis.
Genes play games
All that is required for game theory to operate is patterns of stimulus and response in a competitive environment. We can reasonably think of genes as playing a game, the replication game, where success is measured by replication. If you replicate, you are still in the game, so you are a winner and you remain a winner for as long as you are still being replicated.
These replication games that genes play are not only recurring, they are also probabilistic. A replication pattern will always have some failures. It is generating enough successes to stay in the game that is crucial.
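A minimal branching-process toy in Python shows the point (the copy probabilities are arbitrary illustrations): every replication pattern has failures, and “winning” just means still being copied at the end:

```python
# Toy replication game. The copy probabilities are arbitrary; the point
# is that staying in the game is probabilistic persistence.
import random

random.seed(0)

def still_replicating(p_copy: float, generations: int = 50) -> bool:
    """Branching process: each copy independently leaves up to two
    offspring copies, each with probability p_copy."""
    population = 1
    for _ in range(generations):
        offspring = 0
        for _ in range(population):
            offspring += (random.random() < p_copy) + (random.random() < p_copy)
        population = min(offspring, 1000)   # cap to keep the toy fast
        if population == 0:
            return False                    # out of the game for good
    return True

for p in (0.4, 0.5, 0.6):
    wins = sum(still_replicating(p) for _ in range(100))
    print(f"copy probability {p}: {wins}/100 lineages still replicating")
```

The expected number of copies per copy is 2 × p_copy, so 0.4 is a losing pattern, 0.5 hovers at the edge, and 0.6 usually stays in the game despite constant individual failures. Success is probabilistic persistence, not any single replication event.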
Genes do not have intentions
The Selfish Gene is a very vivid and memorable metaphor. It is also an unfortunate one, as genes do not have intentions. Life is a Game Genes Play, or something similar, might have been a better title.
Parenting is altruistic behaviour
Biologists seem to have this odd notion that parenting is not altruistic behaviour, on the grounds that it is not altruistic to reproduce one’s genes. But to say that acting in a way that permits one’s genes to replicate cannot show altruism is to imply that behaviour cannot be altruistic if it conforms to the intentions that genes do not have. Or that genes are part of an organism even when they are in another organism.
Parenting absolutely is altruistic behaviour. It is one organism investing in the interests of another. The tensions between the interests of the parent and the child provide a nice example of how genetic replication is distinct from the interests of the organism. Tensions that are particularly stark in the case of neglectful, abusive or child-abandoning parents.
Parenting is the form of altruism that genes have to induce in order to replicate.
Once it is realised that parenting is altruistic, then it is much less of a leap to develop analyses of forms of altruism, in the sense of acting for the benefit of others, that do not as directly result in the replication of genes specific to the acting organism. Investing in the benefit of other organisms can absolutely be a successful gene replication pattern, at a probabilistic level. Especially if played out across a large enough number of individuals.
Confusing what altruism is—other-directed behaviour to the benefit of the other—with genetic replication patterns is an example of how natural and easy it is for us to confuse our frames of reference. Particularly if we use intentional metaphors (which are, of course, very natural to us) to apply to replication-game-playing genes that do not have intentions.
Survival strategies drive genetic selection
There is a critique of evolutionary theory that says that random mutation is not nearly enough to create the prolix orderings of life. This critique, associated particularly with mathematician and philosopher David Berlinski, is, of course, correct. Selection processes are, however, much more ordered than that.
The survival strategies of organisms create selection pressures. Organisms can only choose survival strategies compatible with their existing biological capacities. But, within those capacities, a range of survival strategies are likely to be possible.
Once enough of a species successfully selects a particular survival strategy, that sets up selection pressures, and selection pressures that are much more ordered than random mutation. (I am riffing off a 9 minute discussion of exploration by organisms by evolutionary biologist Bret Weinstein.)
A very nice example of this is provided by human pastoralism. All human pastoralist populations that are large enough, and persist in their pastoralism long enough, develop lactase persistence: the ability to consume milk as adults. This allows them to generate about five times as many useful calories from a given amount of grassland as those who just raise animals for slaughter. This is a huge biological advantage.
Here’s the thing, however. Pastoralist peoples do not all develop the same mutation. The lactase persistence mutation that allowed the Indo-Europeans to spread so far is not the same lactase persistence mutation as that of East African pastoralists such as the Masai.
Parenting absolutely is altruistic behaviour. It is one organism investing in the interests of another. The tensions between the interests of the parent and the child provide a nice example of how of genetic replication is distinct from the interest of the organism. Tensions that are particularly stark in the case of neglectful, abusive or child-abandoning parents.
Parenting is the form of altruism that genes have to induce in order to replicate.
Once it is realised that parenting is altruistic, then it is much less of a leap to develop analyses of forms of altruism, in the sense of acting for the benefit of others, that do not as directly result in the replication of genes specific to the acting organism. Investing in the benefit of other organisms can absolutely be a successful gene replication pattern, at a probabilistic level. Especially if played out across a large enough number of individuals.
Confusing what altruism is—other-directed behaviour to the benefit of the other—with genetic replication patterns is an example of how natural and easy it is for us to confuse our frames of reference. Particularly if we use intentional metaphors (which are, of course, very natural to us) to apply to replication-game-playing genes that do not have intentions.
Survival strategies drive genetic selection
There is a critique of evolutionary theory that says that random mutation is not nearly enough to create the prolix orderings of life. This critique, associated particularly with mathematician and philosopher David Berlinski, is, of course, correct. Selection processes are, however, much more ordered than that.
The survival strategies of organisms create selection pressures. Organisms can only choose survival strategies compatible with their existing biological capacities. But, within those capacities, a range of survival strategies are likely to be possible.
Once enough of a species successfully selects a particular survival strategy, that sets up selection pressures, and selection pressures that are much more ordered than random mutation. (I am riffing off a nine-minute discussion of exploration by organisms by evolutionary biologist Bret Weinstein.)
A very nice example of this is provided by human pastoralism. All human pastoralist populations that are large enough, and persist in their pastoralism long enough, develop lactase persistence: the ability to digest milk as adults. This allows them to generate about five times as many useful calories from a given amount of grassland as those who just raise animals for slaughter. This is a huge biological advantage.
Here’s the thing, however. Pastoralist peoples do not develop the same mutation. The lactase persistence mutation that allowed the Indo-Europeans to spread so far is not the same lactase persistence mutation as that of East African pastoralists such as the Masai.
So, which mutation develops, and how long it takes to turn up, clearly has a random element. But the choice of survival strategy sets up selection pressures that are not random. Hence there is more order in observed biology than can be explained by random mutation, as the survival strategies of organisms inject order into selection processes.
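The division of labour between random mutation and ordered selection can be put as a toy simulation. This is a sketch, not a population-genetics model: the mutation rate and variant names are placeholders, standing in for the distinct European and East African lactase-persistence mutations.

```python
# Toy model: two isolated pastoralist populations face the same selection
# pressure (useful calories from adult milk digestion). Which variant
# arises, and when, is random; that a persistence variant fixes once it
# arises is not. Rates and names are placeholder assumptions.
import random

PERSISTENCE_VARIANTS = ["variant_A", "variant_B"]

def evolve_persistence(rng, per_generation_chance=0.05):
    """Generations until some persistence mutation arises; selection then fixes it."""
    generation = 0
    while True:
        generation += 1
        if rng.random() < per_generation_chance:                 # the random part
            return rng.choice(PERSISTENCE_VARIANTS), generation  # selection does the rest

for seed, name in [(1, "Population A"), (7, "Population B")]:
    variant, gens = evolve_persistence(random.Random(seed))
    print(f"{name}: {variant} fixed after ~{gens} generations of pastoralism")
```

The two populations converge on the same phenotype, often via different variants arriving at different times: the randomness is in the mutation, the order is in the selection.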
Homo selection was for adaptability
The answer to the question 'What is the human diet?' is: prepared, particularly cooked, food. All human populations cook their food. Even the Inuit, who have to go to considerable extra effort to do so. We are cucinovores, food preparers, especially by cooking.
When we examine the diet of human populations, especially the pre-industrial diet, the striking thing—apart from food being actively prepared and frequently cooked—is the huge variety of human diets. Human beings can live and reproduce quite successfully on a remarkably broad range of diets. (Problems with particular diet patterns usually don’t seriously kick in until after our reproductive peak.) This speaks to our adaptability as a species. An adaptability that has seen us permanently inhabit every single continent except Antarctica.
By adaptability I mean the capacity to choose and operate a variety of survival strategies (and therefore survive in a wide variety of ecological conditions). Prior to the development of farming and pastoralism, human beings were foragers, but there was considerable variety in those foraging strategies. And, of course, foraging then often led on to farming and pastoralism. Modes of living which also had considerable variety within them.
This adaptability has some manifestation across the hominin line, but it is particularly distinctive of the genus Homo.
The combination of bipedalism and grasping hands led to tool using. This probably started with Australopithecus picking up stones to split bones and skulls from the kills of other species to get at the fat and brain. Access to such nutrient-dense food allowed us to give up gut tissue for brain tissue (the expensive tissue hypothesis).
We then moved on to making tools, increasing our adaptability. Choosing to scavenge in the middle of the day, when the major predators were resting, led to developing sweating and increased long-distance running capacity. We could then start running down animals with our expanded tool use, choosing hunting over scavenging. Giving us even more access to nutrient-dense food.
Being bipedal freed our hands for communicating by gestures. Increased use of tools moved communication to the face and mouth.
Use of fire and other food preparation both expanded the range of possible foods and further accelerated giving up gut tissue for brain tissue. At each stage, the expansion in capacities led to a greater range of possible survival strategies.
The predominant thing being selected for was that adaptability. Hence the rather prolix speciation of the Homo line. Different populations would choose different survival strategies, leading to different selection pressures. Yet the general pattern is clear: selection for adaptability won out. Homo sapiens were the most adaptable form to appear, and relatively rapidly became the last bipedal primate left standing in Africa.
Our margin of superior adaptability over Homo neanderthalensis seems to have been relatively thin, as Neanderthals successfully blocked our exit from Africa for many generations. But Homo sapiens had enough of an adaptability advantage to get past the Neanderthal block in two waves, around 80,000 years ago and again around 50,000 years ago. And we Homo sapiens then became the last bipedal primate left standing anywhere by absorbing and replacing all other Homo populations.
So we became the tool-making ape, the running-and-throwing ape, the gesturing-then-talking ape, the fire-using ape, the food-preparing and cooking ape, the trading ape, the artistic ape, the ritual ape ... All manifestations of being the adaptable ape. It was that adaptability which was being selected for. As the more adaptable, the greater the range and variability of ecology and climate we could successfully reproduce in and the greater range of ecological changes (some of us) could survive.
Defence mechanisms and healthy eating
The prime defence mechanism of animals is mobility. The prime defence mechanism of plants, given they generally cannot move, is to wage chemical and other biological warfare against their predators. A process that can include considerable production of anti-nutrients.
Human evolution relied at various crucial stages on access to animal food, particularly saturated fats and organ meats, because they are so nutrient-dense. (There is no plant food remotely as nutrient-dense as liver.) For human nutrition, there are essential proteins (amino acids) and essential fats (fatty acids). There are no essential carbohydrates. With enough protein and fat, your body can produce all the glucose it needs.
We process animal foods for palatability. We process plant foods for palatability and to render them sufficiently non-toxic to consume.
So, thinking in evolutionary terms, how likely is it that saturated fats are not nutritionally sound for humans? How likely is it that plant-based foods are inherently or systematically nutritionally superior for humans to animal foods?
Markets and persuasion are also selection environments. How likely is it that selection among food-persuasion agendas is going to favour those that maximise extractable income? Which generates more corporate income, animal products or plant products?
So, does that raise questions about the anti-meat, pro-plant-based agendas being so assiduously pushed?
Norms economise on the effort required to sustain cooperation
When looking at human sociality, the selfish-versus-altruistic division is not very useful. Our range of interactions with others is much more varied than that.
What is striking is how much we are a normative species. Our nearest genetic relatives, the other African great apes, show some (very limited) level of normative behaviour. Enough to sustain the level of sociality they display. We Homo sapiens show far more.
Norms economise on the effort to sustain cooperation. Once a norm is established, interactions become much more predictable and therefore much more manageable. A whole lot of attention time and calculation effort is no longer required.
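A back-of-envelope way to see the economising (numbers purely illustrative): pairwise interactions grow roughly quadratically with group size, so anything that cuts the per-interaction cost of predicting the other party pays off more and more as groups grow.

```python
# Sketch: attention cost of sustaining cooperation in a group of n agents.
# Without a shared norm, each pairwise interaction costs case-by-case
# deliberation; with a norm, behaviour is predictable and the
# per-interaction cost collapses. Cost units are arbitrary assumptions.

def total_cost(n, per_interaction_cost):
    pairs = n * (n - 1) // 2          # every pair has to manage its interactions
    return pairs * per_interaction_cost

DELIBERATE, FOLLOW_NORM = 10.0, 1.0   # illustrative attention units

for n in (10, 50, 150):
    saved = total_cost(n, DELIBERATE) - total_cost(n, FOLLOW_NORM)
    print(f"group of {n:>3}: attention saved by a shared norm ~{saved:,.0f} units")
```

The saving scales with the square of group size, which is why the pay-offs to our normative capacity expanded as our groups did.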
Attention and cognitive capacity are both scarce items. We have powerful tendencies to adopt mechanisms that allow us to economise on them. Hence the development of our normative capacity in situations where there were considerable pay-offs to cooperation. Cooperative payoffs that expanded as our normative capacities expanded.
Being so strongly the normative ape has permitted us to be much more complexly social. Which increased our adaptability. Which clearly increased the replication of our genes. A pattern of replication via cooperative sociality that we have spread to every continent, including temporary visits to Antarctica.
Norms can generate significant reciprocative behaviour. In fact, quite complex patterns of such. But they can also generate significant altruism, even amazing levels of self-sacrifice. All of which is a by-product of the way norms increase and spread populations, and thus gene replication, through greater social cooperation, by economising on the effort required to sustain that cooperation.
Resources affect reproductive strategies
Birds tend to pair bond, with both parents contributing to raising the chicks. This is because eggs are fragile, chicks cannot feed themselves and, once the egg is hatched, neither the male nor the female bird has an advantage in feeding the chicks. Given a certain level of difficulty in obtaining resources, the male and female expression of genes have equivalent cards to play in the replication game, despite the females producing the eggs. They are therefore equally invested in raising the chicks, if their genes are to replicate. Hence the frequency among birds not only of pair bonding, but of the Dad strategy: the male investing in the protection and feeding of the offspring. As distinct from the Cad strategy: deposit one's sperm but provide no other assistance in the raising of offspring.
In certain circumstances (for instance, if laying the egg means the female bird needs immediate nutritional replenishment), the replication pressure may be for the male bird to do more of the post-egg-laying care. To, in effect, equalise energy expenditure in order for successful replication to be sufficiently likely.
If, however, resources are particularly abundant, then male birds will likely revert to a harem strategy. (I.e. a mate-guarding Cad strategy.) With sufficiently abundant resources, what cards the female-expressed genes have to play in the replication game is much less of a concern for the replication of the male-expressed genes. All the male bird has to do is feed himself and keep other males away. If a completely promiscuous strategy is used (a pure Cad strategy), they might not even have to do the latter. The replication pay-off of more impregnated females being greater than any loss from the drop in investment in individual chicks. Hence it is sufficiently easy for the male-expressed genes to stay in the game such that a replication strategy other than pair bonding is used. The females are thus stuck with the eggs, and so raising the chicks, while more intense male mate selection provides some replication compensation for the lack of male investment in raising the chicks.
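The Dad-versus-Cad logic can be put as a toy payoff model. The functional form and numbers below are assumptions for illustration, not measured values: chick survival rises with resource abundance, paternal care adds a fixed boost, and a Cad trades investment for extra matings.

```python
# Toy payoff model of Dad vs Cad strategies as resources vary.
# All parameters are illustrative assumptions.

def chick_survival(abundance, paternal_care):
    boost = 0.4 if paternal_care else 0.0     # assumed value of Dad provisioning
    return min(1.0, abundance + boost)        # survival capped at certainty

def expected_offspring(strategy, abundance, clutch=4, matings=3):
    if strategy == "dad":                     # one clutch, fully provisioned
        return clutch * chick_survival(abundance, paternal_care=True)
    return matings * clutch * chick_survival(abundance, paternal_care=False)

for abundance in (0.1, 0.3, 0.6, 0.9):
    dad = expected_offspring("dad", abundance)
    cad = expected_offspring("cad", abundance)
    better = "Dad" if dad > cad else "Cad"
    print(f"abundance {abundance:.1f}: Dad {dad:.1f} vs Cad {cad:.1f} -> {better} wins")
```

Scarce resources mean unprovisioned chicks die, so Dad wins; abundance lets unprovisioned chicks survive anyway, and the Cad's extra matings dominate. That is the threshold the paragraphs above describe.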
The general propensity of birds to pair bond is very unlike mammals, where females have mammary glands (so the food is right there when the baby mammal pops out) and hence are far more inherently invested than male mammals in raising their offspring. Female mammals provide baby-feeding resources from their own body and food consumption. Thus, in mammals, the male and female expression of genes do not have equivalent cards to play in the replication game.
Hence male mammals are very rarely significantly involved in raising their offspring. They may be pure Cads, they may be mate-guarding Cads, or even coalition Cads (a group of males sharing a group of females), but they are very rarely Dads.
Homo sapiens males are a conspicuous exception. They are an extreme case of successful replication strongly favouring male investment in offspring. The Homo sapiens big-brain strategy has resulted in particularly helpless babies whose brains and cognitive capacities have to develop outside the womb. Hence Homo sapiens children take 15 years or more to be successfully raised to reproductive age.
In foraging societies, it is not practical to raise Homo sapiens children without male contribution to the provision of resources and protection. Hence Homo sapiens males generating such a strong Dad strategy, contributing protection and resources to child-raising. (A few farming societies have an Uncle strategy instead: brothers contribute to raising their sisters' children.)
With the dramatic expansion in mass prosperity, the resource thresholds that undermine the Dad strategy, as described for birds, have kicked in, but in a way that expresses human social complexity. At lower socio-economic levels of mass-prosperity human societies, the balance of resources compared to the cost of raising a (poorer) child has undermined the Dad strategy of investing in children. A Cad can still have children reach adulthood without bothering to contribute to their upbringing.
Conversely, women can abandon the Feeder strategy (seek to keep the guy around) in favour of a Breeder strategy (rely on their own income). Family law systems can also generate incentives, particularly if child custody is presumptively awarded to the mother, to move from a Feeder to a Breeder strategy. Especially as the state can be enlisted to extract income from fathers without that provision of income giving fathers any leverage for any role in their children's lives beyond that of cash-cow. They become a Shadow Dad: the financial costs without the participation, authority and status benefits of fatherhood.
All these factors encourage the rise in single motherhood, particularly among lower socio-economic levels of society.
These interactions also mostly explain the 'gender gap' in US (and other developed democracies) politics. Single mothers and divorced women have an incentive to vote for the side of politics more favourable to welfare expenditure and for family law to favour women. The latter can include, but is not limited to: establishing fatherhood (but not motherhood) as a matter of strict legal liability; giving mothers presumptive child custody; not requiring a DNA test to allocate paternal financial obligations; using the state to impose paternal financial obligations, thereby (as noted above) profoundly undermining the leverage of fathers; and not penalising the use of false accusations as a legal tactic.
At higher socio-economic levels of human societies, the cost of raising a child is considerably greater and the value of social connections and networking considerably higher. So the Dad strategy is still going strong.
Human marriage is the ritualising of human pair bonding to support the Dad strategy. The above patterns mean that marriage is keeping on keeping on in the upper levels of Anglo and Northern European society, but decaying in the lower levels. To the advantage of the children of the upper levels of society and the disadvantage of the children of the lower.
Same-sex attraction transposes a thoroughly normal characteristic
A characteristic that about half the members of a species have is a thoroughly normal characteristic of the species.
About half the human species is sexually attracted to males. A small proportion of those with this normal characteristic are themselves male.
About half the human species is sexually attracted to females. A small proportion of those with this normal characteristic are themselves female.
The same-sex attracted have a characteristic that is normal for the other sex, but not for their own.
The question with regard to same-sex attraction is not: why does this male or female have this weird characteristic? It is: why does this male or female have a characteristic that is so common in the other sex but rare in their own?
The trait itself does not need explanation, only the transposition. Moreover, if neither the trait nor its transposition is itself significantly genetic, then the 'gets in the way of genetic replication if transposed to the other sex' issue becomes moot, as there would be no specific genes, or specific gene expression, being selected against by that transposition. There would be no need for an explanation in terms of the chances of specific genes replicating.
Same-sex activity, even same-sex pairing, is much more common in nature than people generally realise. Which tends to weaken the expectation that same-sex attraction has a specific genetic cause rather than, say, an epigenetic or a population-dynamics cause.
Once we realise that it is the transposition that requires explanation, it becomes more likely to be a population-level question, rather than just something that is weird in an organism because it gets in the way of replicating its genes and so should not persist.
Especially as it is an empirical question how much same-sex attraction does actually get in the way of gene replication. Thus, in those human societies where marriage is essentially a universal expectation, any reduction in the propensity to replicate any specific genes due to same-sex attraction may well be quite muted.
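How muted is muted? A standard one-locus selection recurrence gives a feel for it; the starting frequency and selection coefficients below are illustrative assumptions, not estimates.

```python
# One-locus haploid selection: a variant with relative fitness (1 - s)
# changes frequency as p' = p(1 - s) / (1 - s*p). Small s means the
# variant lingers for a long time. Illustrative parameters only.

def generations_to_halve(p0, s):
    p, gens = p0, 0
    while p > p0 / 2:
        p = p * (1 - s) / (1 - s * p)
        gens += 1
    return gens

for s in (0.5, 0.1, 0.02):
    gens = generations_to_halve(p0=0.05, s=s)
    print(f"selection penalty s={s:<4}: frequency halves in ~{gens} generations")
```

With s = 0.02, halving takes around 35 generations; at roughly 25 years a generation, that is the better part of a millennium for even a 50 per cent decline. Near-universal marriage could keep any genetic contribution quietly in the pool for a very long time.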
Suppose the expression of the trait in the other sex is significantly genetic and same-sex attraction is a serious block to gene replication, which it is likely to be in societies where same-sex pairing is accepted. Why would same-sex attraction persist in a population?
One proposal is the ‘gay uncle’ hypothesis—same-sex attraction increases the reproductive success of the siblings of the same-sex attracted person. In societies with very strong kin-group patterns, this has some plausibility as a ‘survive in the gene pool’ mechanism. But while most human societies do have strong kin groups, this is not universal across human societies.
Attitudes to same-sex attraction are strikingly varied across human societies. It seems unlikely that mechanisms that occur in some societies but not in others, or that vary considerably in social intensity, would have much value in explaining a recurring pattern across human societies. Especially given that same-sex activity and pairing turn up in many other species. Which brings us back to population-level analysis.
We are a significantly cognitively dimorphic species. That is, an overwhelming majority of men have a bundle of cognitive traits no woman has and an overwhelming majority of women have a bundle of cognitive traits no man has.
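The claim sounds extreme, but it is a standard property of aggregation: small average gaps on many traits can produce near-complete separation of the overall bundles, even though every single trait heavily overlaps. A sketch with illustrative parameters (20 traits, a 0.5 standard-deviation gap on each; these numbers are assumptions, not measurements):

```python
# Sketch: per-trait overlap vs bundle-level separation.
# Each trait differs by a modest 0.5 SD between groups, yet the
# 20-trait bundle separates by over 2 SD. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_traits, n_people, gap = 20, 100_000, 0.5

men = rng.normal(0.0, 1.0, size=(n_people, n_traits))
women = rng.normal(gap, 1.0, size=(n_people, n_traits))

axis = np.ones(n_traits) / np.sqrt(n_traits)   # axis through the group means
men_score, women_score = men @ axis, women @ axis

bundle_gap = (women_score.mean() - men_score.mean()) / men_score.std()
crossover = (men_score > np.median(women_score)).mean()
print(f"per-trait gap: {gap} SD (heavily overlapping distributions)")
print(f"bundle gap: ~{bundle_gap:.2f} SD; "
      f"only {crossover:.1%} of one group crosses the other's median")
```

The small crossover group is exactly the overlap group the next paragraph appeals to.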
It seems likely that the same-sex attracted would disproportionately turn up in the cognitive-traits-bundles overlap group. This would imply that gay men would often display cognitive patterns that we more commonly associate with women and that gay women would often display cognitive patterns that we more commonly associate with men. (This in addition to having the attraction focus that we associate with the opposite sex.) Which is, of course, exactly what we do observe.
What makes us expect that the mechanism that associates cognitive traits with physical sex would operate to perfectly differentiate by physical sex? Why would it not sometimes cross over? Same-sex attraction then becomes simply a transposition resulting from non-perfect association of cognitive phenotype with sexual phenotype.
Note that there is an interesting benefit to this pattern of overlap and transposition for us as a profoundly cultural species. Having a small, cognitively cross-matched group could assist in communication between the (cognitively dimorphic) sexes. Moreover, in a cultural species, having a small but persistent group more inclined to invest in cultural activity rather than children would also be collectively beneficial. In fact, these two features would run together.
It seems likely that the first occupation to split off, at least in part, from subsistence activities was that of shaman. Shamans often come from the cognitively cross-matched, displaying same-sex attraction and other cross-gender behaviour. It seems unlikely that any specific pattern of genetic replication would be generated by this role. At the very least, however, having such a persistent cognitively cross-matched minority would assist in sustaining the advantages that we get from being a cultural species.
A cognitively complex, yet significantly cognitively dimorphic, species would generate more dimensions across which cognitive traits common in one sex might get transposed to the other. It is a sign of our adaptability as a species that this too could be turned into an advantage via sustaining cultural and social connections.
These musings are part of the intellectual scaffolding for a book to be published by Connor Court looking at the social dynamics of marriage. As they are something of a work in progress, these musings may be subject to ongoing fiddling.