Wednesday, December 16, 2020

Social dominance, marriage and the origin of the state

Not all early states were autocratic.

Pericles, the leader of democratic Athens,
revealingly portrayed wearing a military helmet.

In 1970, anthropologist Robert Carneiro advanced the circumscription theory of the origin of the state. The idea is that, in environmentally (and socially) circumscribed environments, population pressure leads to warfare between communities that results in the violent amalgamation of villages into chiefdoms and chiefdoms into states. Carneiro argued that voluntarist theories of the state fail, as smaller units would not give up their sovereignty voluntarily. The “penning in” circumscribing factors could be either physical (deserts, mountains, coasts) or social (complete occupation of arable land by other social units).

Carneiro clearly identified a problem with automatic theories of the origin of the state — that many farming communities did not produce any significant surplus. He mis-identified this as a failure of social mechanisms, however, when in fact the key constraining factor was whether the crops cultivated in a region were sufficiently seasonal to produce stored food that could be appropriated. That this was a necessary condition for the rise of chiefdoms and states can be seen in the case of New Guinea, which had all the farming, warfare and geographical and social circumscription one could want, but never developed any chiefdoms: its entirely non-seasonal crops did not produce the stored food necessary for the resource (i.e., tribute and tax) base that chiefdoms and states require.

Demographer Peter Turchin has added two elements to this model of the origin of the state: first, that power accrues to a group, not just an individual; and second, that the cultural means to legitimate chiefly power had to evolve. I am not overly fond of the concept of legitimacy in descriptive analysis, but Turchin’s second factor can reasonably be reformulated as follows: the normative support for chiefly power had to evolve so as to create patterns of systematic deference.

(My problem with the concept of legitimacy as a descriptive tool of analysis, building on political scientist Xavier Marquez’s critique, is that once we have systematic deference plus analysis of norms, it is not at all clear to me what legitimacy as a descriptive term adds to the analysis. Any collapse in systematic deference can be successfully analysed without using legitimacy at all. The same applies to the development of systematic deference.)

More recently, Carneiro reformulated his theory, seeing physical and social circumscription as an accelerating rather than a necessary factor. Carneiro stated his reformulated theory succinctly as:

A heightened incidence of conquest warfare, due largely to an increase in population pressure, gave rise to the formation of successively larger political units, with autonomous villages being followed by chiefdoms, the process culminating in certain areas with the emergence of the state.

This might reasonably be called the coercive-conquest theory of the origin of chiefdoms and states. Alas, it still has a problem: the evidence strongly suggests that conquest wars are a consequence of increased political centralisation, not a cause of it.

If we class wars as wars for plunder, wars for land and wars of conquest, then the wars of non-chiefly societies are overwhelmingly plunder wars, while wars of state societies are overwhelmingly conquest wars, with the movement from wars for plunder → wars for land → wars of conquest being more a result of increased political complexity than a cause of it.

Another way to look at it is that the (social and physical) technology of aggression tended to develop ahead of the (social) technology of exploitation. It is only when the technology of exploitation develops sufficiently that war moves from the land and plunder wars of kill the men and boys, take the women, to the conquest wars of force the conquered men to provide taxes and labour service. But for that shift to take place there needs to be the level of social organisation that can achieve the systematic deference and appropriation of resources that we call conquest.

The period after the development of improved technologies of aggression, but before the development of sufficiently effective technologies of exploitation, created what is known as the Neolithic Y-chromosome bottleneck, when most male lineages went extinct.

But the warfare theory of state origins can be reformulated without requiring conquest warfare to be the driver. As Carneiro himself has done:

Nonetheless, with each successive war, military leaders tended to enlarge their powers and entrench their position. Moreover, they became increasingly reluctant to surrender these powers when the fighting had stopped. Finally, either through a chief’s peremptory refusal to relinquish his once-delegated war powers, or (less likely perhaps) through the outright conquest of neighboring villages by the chief of the strongest one, the first permanent chiefdoms were established.

It then becomes what we can call the violent conflict theory of the origin of chiefdoms and the state. With the addition of systematic access to stored food to provide the necessary tribute-and-tax base, it has a great deal of evidence to back it up.

There are, however, two striking wrinkles: religion and autocracy. The majority of early states were autocratic. That is, they were dominated by polygynous rulers who sat at the apex of both military power and religious authority and who often functioned as both war leader and priest. Such rulers were typically both the pinnacle of human service to divinity and an avatar of divine authority.

The priestly or sacred role of early kingship points to the role of ritual and religion in the development of human cooperation and so the development of human societies. Ritual predates language (non-human animals engage in ritual). Ritual centres are the oldest free-standing human constructions. It is clear that ritual and religion were the other purpose, apart from warfare, for which resources could be collectively acquired and used at significant scale. (While exchange is at least as old as Homo sapiens, commerce as a mechanism for the acquisition and use of resources on a significant scale was a comparative historical latecomer compared to fortifications, palaces and temples.)

Ritual and religion, and the support of faith intermediaries (priesthoods, religious scholars, etc.), constitute a third pillar of states (along with coercion and resources), at least until recent centuries. Ritual and religion are also compatible with, and a feature of, all forms of government. They thus seem unlikely to be a key driver of differences in the nature of states and the evolution of polities.

The wrinkle with autocracy is that not all the early states were autocratic. On the contrary, in Mesoamerica, in the ancient Mediterranean, and in classical Northern India, there were a significant number of non-autocratic states. The band → chiefdom → state hierarchy of social complexity is complicated by the existence of more consultative societies. The Greek polis is an obvious example of such non-autocratic polities, but hardly the only one.

Access to resources, particularly appropriated (taxed) resources, and coercion are the two distinctive pillars of the state. They are mutually supporting pillars, as appropriation requires coercion and coercion is funded by appropriation. Indeed, there is a certain chicken-and-egg problem involved. This meant that the development of chiefdoms and states had to be something of a spiralling-up process as instruments of coercion and appropriation mutually scaled up.

In understanding the development of either autocratic or consultative polities, the question then becomes, regarding coercion and appropriation, how widely dispersed is leverage in each? It is a conspicuous feature of the city-states of the classical Mediterranean that citizenship and fighting for your city-state were intimately connected. If you needed the demos, the free poor, to row your war galleys, they had the leverage to gain the vote and you were a society where the demos had the kratos, the authority. If you were not a naval power, the demos typically didn’t have the leverage to gain the vote.

This association between fighting and bargaining leverage turns up elsewhere. The republics of northern India are sometimes referred to as Kshatriya republics, while the governing body of the republic of Tlaxcala was made up of warriors who had passed gruelling public ordeals. (As such public ordeals were tests of character and commitment, they were a reasonable way of choosing decision makers.)

Anthropologist Robert Blanton has developed a dual-process theory of state politics, based on elites pursuing either a network strategy seeking exclusionary power or a more corporate strategy. I find it an inadequate analytical framework, precisely because bargaining power in polities has so often been based on coercive leverage.

Blanton is clearly correct, however, that autocracy was not the only way early states were organised. Instead, there is something of a spectrum, across various mixtures of bargaining and dominance strategies.

Autocratic government was predominantly dominance, albeit often entailing implicit social bargains, typically based around differentiated capacity to resist various rules. Non-autocratic governments were based on explicit bargaining, though those outside the bargaining processes were still dominated by the state. (Democracy just meant that the explicit bargaining processes incorporated all free males.)

Dominance needs agents to scale up. Assembling and sustaining reliable agents is the key to successful autocracy. Provided that can be managed, unless explicit bargaining polities incorporate the representative principle, dominance-with-agents scales up far more than does bargaining politics. (I.e., without the representative principle, autocracy scales up more than does non-autocracy.) Across history, that which scales up generally dominates that which does not.

States and marriage patterns

Evidence suggests that the most favourable situation for developing non-autocratic states is societies with nuclear/conjugal families and where elite marriages are monogamous. A possible factor in generating this pattern is that polygyny leads to father-absent child-raising, which leads to more aggressive, dominance-oriented personalities among boys, with the effect being stronger the less co-wives cooperate.

Aggressive, dominance-oriented elite males are not a good basis for non-autocratic social arrangements.

Extended families are associated with more vertical (elder/younger) relations and dependency training, to ensure compliance with assigned tasks and dependence on the family, both of which discourage self-reliance. Again, not a good basis for non-autocratic social arrangements.

But there are differences in social dynamics between elite marital monogamy and elite polygyny, and between conjugal and extended families and kin groups, that are likely more important than these personality-development patterns.

Kin groups and extended families limit the role of bargaining, as they limit the ability of people to move between bargaining proposals and factional groupings. They also create intense loyalties that impede identification with the wider polity. That is particularly problematic for potential office-holders.

It is conspicuous that the three classical Mediterranean city-states whose constitutional history we know most about (Sparta, Athens and Rome) all worked to override kin groups.

Athens and Rome both abolished tribes based on lineage, turning tribes into strictly territorial categories based on where in the city you lived. So they replaced a kin identity with a territorial identity directly linked to the city. Similarly, everything about the raising of a Spartiate encouraged them to identify either with Sparta or with their comrades-in-arms based on age cohorts.

The linguistic evidence suggests that kin groups disappeared from Greek society as the polis became the dominant political form (though extended families lingered on in some rural areas), with the linguistic terms differentiating between male-side and female-side relatives largely dropping out of Greek in the fifth to third centuries BC. The disappearance of kin terms differentiating between who is, or is not, in the kin group is a powerful linguistic marker of the disappearance of kin groups. If the distinction no longer matters, you don’t need the associated terms in your language.

As for polygynous versus monogamous marriage, polygynous marriage is a manifestation of elite dominance as elite males acquire extra wives and concubines at the expense of low-status males. This creates a pool of low-status males without in-group marriage prospects who are likely to be a highly disruptive element in society, not invested in whatever bargains are agreed to.

The bandits that are such a notable feature of Chinese history and literature are a predictable product of a society with a highly polygynous elite whose low-status males had no chance of acquiring women from other societies. (Those people over there have women, steal theirs is a standard response to polygyny creating males without in-group marriage prospects; one displayed by the Norse going a-Viking and by every pastoralist raiding culture ever, and sanctified by Islam, notably in the “those your right hand possesses” verses in the Quran but also by comments in various hadiths and provisions within Sharia, topped off by the service of houris in the afterlife.)

Given that social stability tends to increase wealth and income inequality, and such inequality tends to increase elite polygyny, as Chinese dynasties aged, the bandit problem tended to get worse. Bureaucratic corruption also tended to get worse over time, further undermining law and order. Peace and social stability meant that the population tended to increase, squeezing living standards. Given these patterns, that Chinese dynasties recurrently collapsed in an upsurge of banditry, secret society insurgencies and peasant revolts is not surprising.

Polygyny encouraged autocracy but also undermined it in various ways. Rulers with large harems, surrounded by women and eunuchs in palaces that no un-castrated male could stay in overnight, were isolated from their societies to a degree unheard of in the marital-monogamy West.

Elite polygyny also meant that marriage alliances had less value (as any wife or concubine might end up producing the heir) and created elite households that were divided against themselves, as wives competed for the prospects for their children. An elite male with a sole wife who was the mother of his legitimate children could use her as a partner and deputy in a way not open to an elite male with a collection of wives and concubines.

Conversely, a society that adopts monogamous marriage for elite males as well is already engaged in an implicit bargain, paying attention to the interests of lower-status males. This creates the basis for embedding more explicit bargaining into political arrangements.

With elite marital monogamy, there is no significant pool of men without marriage prospects to be socially disruptive. Fathers are much more likely to be involved in raising their children, leading to more self-reliant, but less aggressive and dominance-focused, sons.

Marriage alliances have more value, as an elite male’s (sole) wife is the presumptive mother of his heir(s). His wife becomes a partner and deputy, making it easier to invest in political bargaining.

There were, however, plenty of autocratic states among societies with marital monogamy. So, while elite male marital monogamy (and lack of kin groups) facilitated non-autocratic politics, they did not guarantee it.

Bargaining and leverage

We come back to the matter of leverage. A relatively small polity, not so large that bargaining was impractical, with a limited tax base and so wishing people to invest in their own military equipment and training, was likely to develop some form of political bargaining: which is to say, citizenship politics. This is still a state-origins-through-violence story, but one where the war leader does not become (or fails to remain) an autocratic ruler but instead evolves into officeholder(s) for a warrior-cum-citizen elite. This could mean a more group-oriented social form (as tribes can be) persisting all the way through the evolution of a state. Alternatively, the normal process of band → chiefdom → state could split off in a different direction.

An armed soldier-citizenry (or warrior-magnates) also has leverage over taxation, as their consent is going to be required to levy taxes. The one thing that historically was relentlessly deadly for bargaining politics was a ruler acquiring a tax base they did not need consent for. For a large enough tax base meant that citizen-soldiers could be replaced by professional soldiers. This is, after all, what happened to Rome. And to post-medieval Spain, which, upon the Spanish crown acquiring access to the gold and silver of the Americas, moved from being a pioneer of parliamentary politics to autocracy (albeit an increasingly inefficient and feeble one).

Of course, modern states have professional armies. What modern democracies rely on is the pervasive adherence to democratic norms, particularly (but not only) by the agents of the state. The hope and expectation is that no would-be autocrat will be able to assemble enough reliable agents to achieve dominance. Generally speaking, it is a case of so far, so good. Though the systematic attack in contemporary Western societies on bargaining politics, via the attempt to control what it is deemed legitimate to express, including characterising particular (mass) voting choices as delinquent, is not a healthy trend for democracy. Especially as it is associated with attacks on the status and standing of citizenship. “Punch a Nazi” conveys a very different message than does “Nazis are citizens too”. If citizenship is downgraded, so is the citizenship one relies on one’s professional soldiers identifying with.

Be that as it may, it is clear that a presumption of autocracy cannot be a satisfactory basis for a good theory of the origin of the state. Yes, autocracy has been the most common path in the development of the state, but it has not been the only one.

Cross-posted from Medium.

Wednesday, December 9, 2020

BOOK REVIEW: Why We Get Sick

If you only read one book on health and nutrition, make it Why We Get Sick by Dr Benjamin Bikman.

Dr Bikman is a research scientist whose work concentrates on insulin. He also teaches. Why We Get Sick: The Hidden Epidemic at the Root of Most Chronic Disease — and How to Fight It has the comprehensiveness of a working scientist and the clarity of a good teacher. Dr Bikman also has various lectures available on YouTube which are well worth watching.

Why We Get Sick is organised into three parts — The Problem (8 chapters), Causes (5 chapters), The Solution (5 chapters) — with regular insert boxes to expound on salient points. The book is essentially about the nature and causes of insulin resistance and how to (re)achieve insulin sensitivity. That is, make your cells properly responsive to insulin rather than improperly resistant to it.

The “calories in, calories out” theory of obesity is simplistic and profoundly misleading, as what we eat, and how often, affects how much we eat and how much energy we expend. In other words, what we eat, and how often, affects the hormonal balance of the body, which then affects whether we store fat in our bodies or use it up. The prime mechanism determining which way our body goes is our insulin response. Insulin is a hormone with a range of effects, but it is primarily about managing our glucose levels.

Eating frequently drives up insulin levels. Eating carbohydrates drives up insulin levels. Fat does not generate a significant insulin spike, and protein with fat generates only a small one. Carbohydrates, whether alone or with protein, drive up insulin.

A diet of fat and protein has minimal insulin effect. There are essential fats, there are essential proteins, there are no essential carbohydrates. Fat (if not seed oils or trans fat) is your friend, carbohydrates are not. For most people, a metabolically healthy diet is mostly (good) fat. The sugar you consume is far more likely to be turned into body fat than is the fat you eat.

So, if you follow the standard official nutrition guidelines and eat 4–6 regular meals and snacks a day of mostly carbohydrates, you are setting yourself up for developing insulin resistance.

What are some of the consequences of developing insulin resistance? Obesity, hypertension, heart disease, cancer, neurological conditions including Alzheimer’s, kidney disease, accelerated ageing and infertility. In other words, all the chronic diseases that increasingly plague our species can be largely traced back to insulin resistance. They are all conditions largely generated by metabolic ill-health.

Susceptibility to insulin resistance is partly genetic, partly some stress and environmental factors, but predominantly diet.

In Part I: The Problem: What is Insulin Resistance and Why Does It Matter? Dr Bikman takes us through what insulin resistance is (Chapter 1) and then, chapter by chapter, various health consequences.

Insulin resistance is the cells of the body becoming insulin resistant: that is, less and less responsive to insulin levels, which have to get higher and higher to have their required effect of regulating glucose levels in the body, especially the bloodstream. The more insulin there is flooding your system, the more food is turned into fat. This can be subcutaneous fat (under the skin) or visceral fat (in the thorax, especially around organs such as the liver).

Visceral fat is much worse than subcutaneous fat in its harmful health effects. It is also much less visible. Dr Bikman has a nice passage where he explains that fat you can jiggle is better than fat you cannot.

In Part II: Causes: What Makes Us Insulin Resistant in the First Place? Dr Bikman takes us through the causes of insulin resistance. The influence of age and genetics (Chapter 9), the causal role of hormones (Chapter 10), the interaction between obesity and insulin resistance (Chapter 11), the role of inflammation and oxidative stress (Chapter 12), and lifestyle factors (Chapter 13). Yes, you should get regular sleep (depending on you, 5–7 hours a night). Yes, you should exercise regularly.

Having laid out the full disaster, in Part III: The Solution: How Can We Fight Insulin Resistance? Dr Bikman takes us through the eminently practical (and inexpensive) things we can do to get back to good metabolic health. Eat fewer carbohydrates and fast regularly. In other words, do what you can to give your body a rest from being flooded with insulin.

As someone who has lost 39kgs (85lbs), and is no longer pre-diabetic, through doing precisely that, I am a believer.

Dr Bikman explains that a low-carb diet does mean you need to eat more salt. Often, when you are hungry, it means your body actually wants water or it wants salt. Having too little salt is more dangerous to your health than too much.

A tip not in the book: salted black coffee. Salt cuts bitterness, so salted black coffee can be a no-calorie, no-insulin-effect flavour hit that blunts hunger cravings.

Dr Bikman has a nice explanation of the difference between fasting (not eating) and starvation (consuming your own muscles). If you have visible fat, and have no unusual health conditions, you can fast safely, though you may have to keep your salt intake up. He mentions the famous case of the seriously obese Scot who, under medical supervision, fasted for 382 days, consuming only water and minerals. Fat is stored energy, so body fat is what you consume when you fast.

Humans are designed to fast: that is why we are the fat ape. (Ever seen a picture of a chimpanzee with no fur? They are a bundle of muscle.) Breakfast is a hugely overrated meal, and often made up of foods bad for our metabolic health.

So, exercise regularly, eat whole foods, avoid processed foods (and seed oils), cut the carbs and adopt a mild fasting regime. The (inexpensive) path to metabolic health. Why We Get Sick concludes with a clear guide on how to do this.

And no, don’t drink fruit juice, that is flooding your body with sugar (and a particularly metabolically unfortunate sugar, fructose) generating an insulin spike. In fact, generally avoid drinking your calories.

Clear, informative, practical, based on good science: if you only read one book on health and nutrition, make it Why We Get Sick.

Cross-posted from Medium.

Sunday, November 29, 2020

The Class That Cannot See Itself

The inability to see oneself is a core element of much modern moralising

We live in a time with a striking pattern. In many ways, societies are much more socially egalitarian than they have ever been. Openly class-based social-superiority language is much, much rarer than it used to be.

Conversely, we live in an age of intense moral inegalitarianism. For instance, academic commentary on Trump voters has predominantly been about voting for Trump as a sign of moral delinquency. As has much of the academic commentary on Brexit voters. Such moral elitism is far more common than any explicit class elitism.

There is, of course, very much an underlying social dimension to this moralised denigration, but it pertains to a class that cannot see itself. Specifically, the possessors of human-and-cultural capital. (Human capital being skills, habits, knowledge and other personal characteristics that affect our productivity.)

The human-and-cultural capital class do not, and in a sense cannot, preen as a class because they very determinedly do not see themselves as a class. They are a meritocratic elite, so obviously not a class. Hence the socially egalitarian language.

If one were to nominate the single biggest difference between contemporary “woke” progressivism and previous iterations of progressivism, it would be the disappearance of class as an object of social concern.

A rather telling indicator of this is that it makes sense to talk of “woke capital”, but woke workers are not a thing. Indeed, if “woke” cultural politics become sufficiently salient in electoral politics, working-class voters become much more likely to vote in dissenting, anti-woke ways.

Nevertheless, modern “woke” progressivism is very much driven by class politics. The class politics of status rather more than the class politics of economic interest. But that is not surprising. It is actually hard for any class to combine on the matter of economic interests, as the biggest competitors for one’s income are people seeking the same sort of income. It is much easier for a class to combine on the matter of class status, for that is a benefit they can, and do, share. Status as a member of a group is non-rivalrous (at least over some range) within the group, though it may be highly rivalrous with respect to other groups.

(In terms of political dynamics, income and benefits from the state that are not dependent on working for the state are, obviously, in a bit of a different category to ordinary economic interest. The more benefits the state hands out, the stronger the incentives to act collectively to increase benefits received and minimise costs paid. Though the weaker become the incentives to act cooperatively to independently create positive social returns.)

Unwillingness to see

There are two levels of self-deception going on in the class politics of “woke” progressivism. The first is the inability to see themselves as a class. The second is the inability to see the mimetic nature of their moralising. (The latter term may be a little mysterious: I will explain presently — mimetic desire is desire copied or imitated from others.)

The inability to see themselves as a class is straightforward, though it goes beyond seeing themselves as a meritocratic elite, and so not a class. “Woke” progressivism is overwhelmingly concentrated in the human-and-cultural capital class. The most “woke” industries are the ones most dominated by the possessors of human-and-cultural capital: entertainment, education, news media and online IT. The industries that constitute the cultural commanding heights of contemporary society.

Marxism offers by far the most elaborate schematic of class available to progressive thought. Human capital is not a form of capital that Marxism grapples much with, the concept not being developed in detail until decades after Marx’s death, even if it dates back to Adam Smith. The human-and-cultural capital class thus becomes effectively invisible as a class, both analytically and, given their sense of themselves as a meritocracy, in self-identity.

Conversely, they typically very much see themselves as a moral elite, a moral meritocracy: that they possess the correct, and highly moral, understanding of the world. The sense of being members of a cognitive meritocracy (and elite) converges with a sense of being members of a moral meritocracy (and elite). This is, to invoke a touch of René Girard, mimetic moralising: mimetic desire being desire copied from another. They copy moral postures from each other based on a common desire to be, and to be seen to be, members of the moral meritocracy.

Unlike other forms of mimetic desire, mimetic moralising is not inherently rivalrous. On the contrary, moral agreement creates a mutually reinforcing sense of moral status.

This common desire for a mutually reinforcing sense of status, not merely cognitive status but also moral status, creates a powerful tendency towards conformity. Specifically, it creates prestige opinions: opinions that mark one as a good, informed person. Opinions that make one a member of the club of the properly, intelligently, moral.

Expansive stigmatisation

The immediate corollary of this is that contradictory opinions become evil, wicked, ignorant, stupid. The opinions possessed by those who, by having those opinions, are outside the club of the properly, intelligently moral. Those who do not have moral merit in their opinions, who are not members of the moral meritocracy.

For opinions can only create prestige if contradicting them has negative prestige. Thus, dissent from these morally prestigious opinions cannot be legitimate, because if dissent is legitimate then there is nothing special about the putative prestige opinions. Opinions that it is legitimate to disagree with do not sort the morally meritorious from those not morally meritorious. Thus, they are not boundary-setting opinions that mark membership of the club of the morally meritorious.

Mimetic moralising thus insists on the right to police legitimacy, to police what is seen as morally acceptable. It turns morality into the property of the mimetic elite, who are deemed to have the right to police the public space.

It also makes the set of prestige opinions something of a moveable feast. The key thing is to stay inside the moral club, not to protect some permanent doctrinal purity. On the contrary, being alert enough to keep up with shifts in the prestige opinions, and shifting linguistic taboos, is part of how one proves and maintains membership of the moral meritocracy. With the necessary linguistic sensitivity helping to further sort those who are morally meritorious from those who are not.

The level of linguistic attentiveness required does much to ensure that working-class folk never quite make it into the morally meritorious. Particularly if they express themselves on various taboo-laden topics.

If mimetic moralising is the highest (status) good, then reason, evidence and consistency must be subordinated to it. Indeed, those who invoke reason, evidence and consistency against any of the prestige opinions are enemies of (mimetic) righteousness, because they fail to converge with the righteous opinions and they contest the mimetic elite’s ownership of morality. They thus proclaim their failure to join the club of the morally meritorious. Worse, they threaten the very gatekeeping distinctions that create the status of being morally meritorious, that make club membership valuable.

Such stigmatisation of those outside the boundary of the morally meritorious has more power if it levers off things already widely accepted as being wrong or abhorrent. Thus, the accusation of racism! works so effectively, not because people are generally racist but precisely because they are generally not. The more racist society actually was, the less effect, the less negative resonance, the accusation would have. Conversely, the less racist society becomes, the more potential effect the accusation of racism has (with some adjustment for diminishing returns from over-use). This pattern is aided by the cognitive tendency to expand the ambit of a category or concept as the thing originally captured by the category or concept becomes rarer.

As this is a status strategy, the demand for grounds to stigmatise will be driven by the conveniences of club-gatekeeping rather than what is actually happening. Thus, the demand for racism and acts of racism as weapons of stigmatisation will (and does) tend to exceed, often quite significantly, the supply of actual racism and racist acts. There is thus a double inflation: acts that are not racist (or may not even have occurred) will be denounced, hence the startlingly high rate of hate-crime hoaxes. Meanwhile, actual acts of racism will be inflated in their significance.

There will also be an ongoing search for new grounds of stigmatisation to continue the separation of the morally meritorious from the not so. The multiplication of belief sins (all the -ist and -phobe accusations, those of cultural appropriation and so forth) is precisely what one would expect in a time of mimetic moralising as a status strategy by members of the human-and-cultural-capital class.

There is also an obvious capacity for purity spirals. And for more junior employees to leverage their moral commitment against more established staff less au fait with the linguistic and moral nuances. Or who retain lingering normative commitments outside the mimetic moralising.

Display versus signal

It is useful to understand the difference between display in general and signalling specifically. In biology and economics, signalling involves the incurring of costs: the greater the cost incurred, the stronger the signal.

The mimetic moralising outlined above involves moral beliefs being on display but it rarely involves incurring any cost in such display. On the contrary, moralising as a status-game is all about the benefits of displaying one’s membership of the morally meritorious.

Even so, keeping up with shifts in linguistic taboos and prestige opinions does take attention, so it does work as a signal. Not giving heed to the inconsistencies between, and hypocrisies within, the prestige opinions also works as a signal. Especially if it means wearing derision or critique from those pointing out such inconsistencies and hypocrisies. Thus, inconsistency and hypocrisy act as more of a feature than a bug: they provide a signal of commitment to the club of the morally meritorious, a willingness to pay the membership dues.

Moral norms are norms held unconditionally: things people believe are morally right regardless of the expectations of others. Social norms are norms based on the expectations about what others will do, and what others expect people to do, that have associated social sanctions. Descriptive norms are norms simply based on expectations about what others will do and do not have associated sanctions. (This is the framework for norms developed by philosopher Christina Bicchieri.)

There is a perennial tendency to parade social norms as moral norms. Adhering to a moral norm, rather than a social norm, is a stronger signal of personal commitment and is more presumptively meritorious.

Parading social norms as moral norms allows use of the most complete language of normative commitment: indeed, the language of trumping normative commitment. It also permits a useful level of self-deception: I am not adopting this outlook because it is expected of me and there are costs if I do not, I am really morally committed to it. It becomes a stronger signal of moral in-group membership, of being soundly clubbable.

Mimetic moralising as a status strategy has a range of consequences. One is that it easily slides into a more general expectation of emotional and social protection. The claim that certain words being published make people feel unsafe makes more sense in this context. Being of the club of the morally meritorious easily generates a wider expectation of cognitive safety and protection, of not having to deal with dissenting ideas in one’s social (including work) milieu. The more one’s sense of identity and status is invested in a sense of being of the morally meritorious, the more that is likely to be the case. And the more likely one is to be willing to stigmatise, or otherwise sanction, those who dissent.

Another consequence is creeping organisational capture. The more people adhere to the mimetic-moralising status strategy, the less willing they are likely to be to risk losing its protections and benefits by supporting dissenting voices or considering inconvenient facts or perspectives. Much of the power of mimetic moralising as a status strategy comes precisely from mutual confirmation of moral righteousness, of being of the morally meritorious. But the elevation of such mimetic moralising then hollows out any use of contrary norms, even if they are longstanding norms of the organisation or institution. Indeed, particularly with the fading away of Christianity, the process of capture is likely to be more effective the less competition there is within the normative space from contrary norms that people can acceptably invoke.

Unawareness required

The capacity for manipulation of all this by self-serving actors is very high. Nevertheless, in general, all this only works if the mimetic elite does not fully and consciously understand what they are doing. There is a real sense in which it is vital that they do not see themselves. For if they saw their attempt to control legitimacy, to possess morality as their property, to stigmatise disagreement, to protect their moralised sense of status, for what it was, it would stop working.

Prestige is a bottom-up status process, so if it is fully revealed, and accepted to have been so revealed, the mimetic prestige game would not grant moral prestige. The social dominance would become obviously raw self-interest. Their stigmatising exclusions, in all their projective exaggerations, would be transparently self-serving.

Conversely, if this mimetic moralising is the highest good, then mobbing is natural and inherent. Mobbing — that is, scapegoating stigmatisation — unites and protects the mimetic moralising, the shared sense of being a moral meritocracy. They are united in, and by, the stigmatisations that protect their sense of status, of being of the moral meritocracy.

What are the sins that they are stigmatising folk for? Typically, for failing to conform to righteous moral harmony, a society entirely without bigotry or ill-feeling. If some grand social harmony is insisted on, but does not yet exist in society, then someone must be to blame for the lack of harmony. Hence the scapegoating of those who disturb harmony is natural to the grand elevation of social harmony as the proper goal. Harmony being a much more all-pervasive and controlling ideal than mere order.

The more this wonderful harmony-to-be-created looks different from the morally-disorderly past, the more the past can be scapegoated. The past cannot answer back, after all. Unless there are scholars brave enough to stand against the mimetic moralising of their colleagues, and the stigmatisation that is likely to engender. (What was remarkable about the 1619 Project was not that so many of the morally meritorious rolled over for it, following the dynamics of mimetic moralising, but the number of historians who were willing to speak against it; in part due to more traditional leftists pushing back against identitarian progressivism. They had norms external to the mimetic moralising that they were willing to stand up for.)

Separating people from their past also separates them from norms and framings that might be invoked against the mimetic moralising.

This scapegoating of the past leads to further self-deception. People whose moral postures are utterly conventional in their social circles (as being morally conventional, and so mutually meritorious, is precisely the point), who shift their moral postures to keep up with what is conventional, who are assiduously morally conventional within their social milieu, laughably claim that they would not have adopted the moral postures that were conventional in the past.

This massive, and utterly self-serving, arrogance is the basis for their contemptuous rejection of the past, which fails to live up to their lofty standards. Standards for which they make no sacrifices, beyond some (relatively minor) signalling costs. Indeed, standards which are all about them avoiding any sacrifice. Well, any sacrifice on their part.

Their mimetic moralising involves plenty of sacrificing of others to their sense of righteousness. Sacrificing the ability of outsiders to speak, to be heard; sacrificing reputations, jobs, careers. Lots of sacrifice imposed on others by the mimetic elite, none on themselves.

(This lack of sacrifice is why the notion of virtue signalling is somewhat problematic. There is precious little virtue in doing things that involve no sacrifice. Piety display is a better term for what goes on.)

But such sacrifice of others is how scapegoating and mimetic moralising work. Protecting the sense of status one finds congenial by loading guilt and rejection onto the objects of stigmatised sacrifice. Such an object of sacrifice is not a victim but full of guilt, so deserves what comes to them as they are sacrificed to preserve the distinction between the morally meritorious and the stigmatised, morally other, who is thereby evil.

Thus, to return to where we started, we can see that the eclipse of class from the language of progressivism makes perfect sense. For modern progressivism is dominated by the class that cannot see itself as a class. It is dominated by mimetic moralising that cannot see its own self-serving conventionality. Nor its stigmatising scapegoating. All of which works by shared habits of self-deception and could not stand genuine self-awareness.

Welcome to the world of the class that cannot see itself attempting to achieve social dominance by policing legitimacy in the service of its mimetic-moralising status strategy. And doing so via the stigmatising scapegoating it imposes on others.

And no, for those wondering, online IT is not going to stop its stigmatising exclusions any time soon. There is far too much status at stake.

Cross-posted from Medium. Like most of my pieces on Medium, this is more of an ongoing meditation than a finished piece, so is subject to ongoing fiddling.

Friday, November 20, 2020

Peers and parents, past and present

Typically, the child-rearing not done by parents is done by peers.


Fatherhood, being (unlike biological paternity) a socially-constructed relationship of social identity, provision, care and attention, varies much more across (and within) cultures than does motherhood, which is far more biologically-grounded (at least until the infant is weaned). A small number of societies traditionally did not recognise the relationship of fatherhood at all, the role of male protector being taken by uncles. Even beyond such outliers, evidence from the anthropological literature is that the father’s attention is a highly variable factor in child-rearing.

The investment in attention and feeding required to raise a human child is considerable. Far more than is required to raise the young of any other species. As pregnancy and lactation tie women to the raising of infants, this leads to recurring risk-management patterns across human societies whereby activities that are compatible with breast-feeding, and other aspects of child-minding, become presumptively female roles. Those activities that are not thus compatible become presumptively male roles. The first scholarly explication of this was in a classic short analysis by anthropologist Judith K. Brown, an analysis that subsequently acquired considerable statistical and other support.

In his The Lifeways of Hunter-Gatherers: The Foraging Spectrum, archaeologist Robert L. Kelly includes a very useful summary of the anthropological evidence on child-rearing and its interaction with the acquisition of culture. (All quotes are from Lifeways, mainly pp. 108–110.)

The two dominant social sources for transmitting or acquiring culture are parents and peer groups. They generate rather different patterns. Parents, particularly mothers, are predictable and consistent providers of resources — breast milk, food, affection, attention and protection. Over time, the parents provide fewer and fewer resources until eventually the child is cut off.

It has been reasonably argued that parent-raised children learn that “resources and desirable goods are limited and hard to obtain”. They can also be expected to tend to be assertive and independent, focusing on using knowledge of terrain and technology to gain resources rather than social favours. In nomadic societies, boys tend to “de-emphasize male-male competition and focus more on manipulation of the world through technology”. Parent-raised children can be reasonably expected to show greater within-group variation in beliefs and behaviours.

Peer-rearing produces different patterns. Potentially from a young age (as early as around two years old) the child finds themselves in a group, often with older siblings, who become the focus of the child’s social interaction. A group that has less status and power differences within it than are involved in parent-child interactions. Children learn that there are “many sources of food and desirables other than their parents”. Peer-raised children learn to network (who people are, what they have, how they are disposed towards them); that resources are acquired by manipulating social relations. They learn that “resources are not scarce and can be acquired through persuasion”.

The acquisition of culture within the peer group can also be expected to generate less variation in beliefs and behaviours. (It is a long-standing pattern that organisations seeking to impose uniformity of outlook tend to be inherently suspicious of, even hostile to, family autonomy, and seek to undermine the authority and independence of families.)

Shifts in the activities of parents are likely also to shift societies from having children parent-raised to having them peer-raised. In foraging communities, the shift to sedentism leads to longer foraging trips by males and more intense food acquisition and processing by females, as consumption shifts to more-effort foods. This leads to less parental attention to children, leading to more peer-raising. A mechanism that “may account for why sociocultural change seems to occur so quickly once hunter-gatherers become sedentary”.

Peer-raising typically has quite different effects on boys and girls. Girls are typically assigned as caretakers, contributing to “girls having attitudes favoring nurturance and prosocial behaviors … [and] more restricted spatial ranges”. Fathers being away for extended periods of time “is associated with boys who have poor attitudes towards females, who are aggressive and competitive towards other males, and who, when grown, give little attention to their offspring, encouraging a continuation of peer-rearing”. A pattern we can see in contemporary societies in deprived urban communities.

In sedentary societies, adolescent male peer groups have greater importance than in mobile societies, with more violent initiations and harsher punishments. “These competitive groups define a boy’s success in life more than in mobile societies where, presumably, fathers are more often present.” Street gangs are peer groups with an added income or self-protection element.

Early C20th Chicago street gang


Cross-cultural surveys have found that “when men spend a lot of time with their offspring and cooperate in child rearing, there is less cultural emphasis on competition”. If men spend more time away from children, “there is a general physical separation of male and female tasks, and competition among men is encouraged”. (This could be a bit chicken-and-egg, if presumptively male tasks are such that they have to be done elsewhere.) “Partially as a response to male behavior, peer-raised girls show expression of sexual interest and assumption of sexual activity early in life, while also showing negative attitudes towards males and a poor ability to establish long term relationships with one male”.

Shifts in adult labour patterns can be “expected to have dramatic effects on enculturation and hence on cultural change”. These patterns clearly have implications well beyond foraging societies. Much of the literature on fatherlessness, for example, reads as the difference between parent-raised and peer-raised children. A sole parent clearly has less potential to counter-balance peer effects than do two parents.

In complex societies, peer groups may vary considerably. Much of the motivation to get students into “good” schools (whether public or private) is to ensure that they have a better-quality peer group. Peer-group effects also complicate trying to tease out the specific effects of a school or teaching on education outcomes.

One of the counterpoints raised to the literature on fatherlessness is the claim, famously made in psychologist Judith Rich Harris’s book The Nurture Assumption, that if one controls for income/socio-economic status, the negative effects of sole-parenthood disappear. This is a classic problem of co-dependent variables. Being a sole-parent family tends to greatly affect (i.e. lower) household income and socio-economic status. Normal regression techniques are not analytically reliable in such circumstances. One has to use techniques such as comparing single-parent with two-parent families who are in the same income range. When that is done, the disadvantaging tendencies of being raised by sole-parent families become quite clear, as one would expect. (For those interested in pursuing this further, the footnoted citations in this essay provide a useful place to start.)
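As a minimal, hypothetical sketch of the within-income-range comparison described above (not taken from the post or from the cited literature; all column names and figures are invented purely for illustration), the idea can be expressed in a few lines of Python:

```python
# Hypothetical illustration: compare single-parent and two-parent families
# *within* income brackets, rather than regressing on family structure while
# "controlling" for income (unreliable when income is itself affected by
# family structure). Data and column names are made up.
import pandas as pd

df = pd.DataFrame({
    "family_type": ["single", "two_parent", "single", "two_parent", "single", "two_parent"],
    "household_income": [28_000, 30_000, 52_000, 55_000, 81_000, 79_000],
    "outcome_score": [41, 55, 48, 60, 57, 66],  # e.g. some education outcome
})

# Bin households into income brackets, then compare mean outcomes by family
# type within each bracket.
df["income_bracket"] = pd.cut(
    df["household_income"],
    bins=[0, 40_000, 70_000, float("inf")],
    labels=["low", "middle", "high"],
)

comparison = (
    df.groupby(["income_bracket", "family_type"], observed=True)["outcome_score"]
      .mean()
      .unstack("family_type")
)
print(comparison)  # within-bracket gaps persist if family structure matters
```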

The anthropological evidence also lends support to identifying fatherlessness as a negative factor in raising children. With much of the effects of fatherlessness being due to the effects of peer groups lacking competition from family life, particularly for boys. This obviously has implications for socially-deprived urban neighbourhoods and other localities suffering from absent fathers. Elijah Anderson’s classic study Code of the Streets (a useful summary essay is here) is very much congruent with this anthropological literature. J. D. Vance’s Hillbilly Elegy also touches on these issues, as does the excellent study Trump’s Democrats.

The developing pattern in many Western societies of marriage (and fatherhood) being strong in the upper reaches of society but decaying in the lower reaches is yet another socially-polarising factor in such societies.

Cross-posted from Medium.

Tuesday, November 17, 2020

Left-Hand Path, Right-Hand Path: someone being wrong on the internet

Good versus evil is not a universal religious or moral framing

I have a wide range of interests, many of them historical. One of the YouTube channels I have watched regularly is Dr Jackson Crawford’s channel on old Norse culture: serious scholarship delivered congenially in (generally) bite-sized pieces.

I have also listened to several of Tom Rowsell’s offerings from his Survive the Jive channel. He is mainly interested in matters Indo-European, but he ranges more widely, and some of his videos can be charming, such as this one on Hinduism in Bali. His material seems to be accurate (I have not spotted a significant error yet) and is engagingly presented.

Which brings me to Arith Harger’s channel. I listened to his video on What is the Left Hand Path? My interest in these matters is entirely historical and intellectual, but he gets SO much wrong in this video. Rather than give a detailed critique, I will cover the same ground based on available scholarship.

The first section of the video is actually quite a good discussion of the difference between Right-Hand Path and Left-Hand Path. He is correct in arguing that it does not neatly line up with good and evil. Even though, for Star Wars fans, the Right-Hand Path seems a bit Jedi-like and the Left-Hand Path rather Sith-like. It is when Harger tries to put the matter in a larger context that he goes seriously wrong.

The original source of the Left-Hand Path/Right-Hand Path distinction is from India. Specifically, from Yogic and Tantric traditions.

There is an excellent introduction to this distinction, and its likely cultural and historical origins, in Thomas McEvilley’s classic essay The Archaeology of Yoga. This is available on Jstor here and on Scribd here. The essay is, in part, a precursor to McEvilley’s masterpiece The Shape of Ancient Thought in which he examines the history of Greek and Indian philosophy and their interactions.

McEvilley argues that the Left-Hand Path/Right-Hand Path distinction is driven by the interaction between the patrilineal (and patricentric) culture of the invading Indo-European pastoralists and the matrilineal (and matricentric) culture of the resident farming population. To use slightly old-fashioned scholarly language, Aryans versus Dravidians.

Whether the culture was patrilineal or matrilineal matters because, in a patrilineal culture, a child without a father lacked a crucial element of social identity. Patrilineal cultures tend to be particularly restrictive of female sexuality for that reason. Conversely, in a culture where a child gets their social identity from their mother, not having a designated father is likely to be less of an issue. Especially if uncles can readily substitute as protective male relatives.

So, mystical and occult traditions in patrilineal cultures are likely to be sex-restrictive, to be ascetic. Conversely, mystical and occult traditions in matrilineal cultures are more likely to be sex-permissive. The former naturally inclines towards the development of spirit-focused disciplines, where ascetic denial of indulgence and the body is seen as the path to self-development. The latter naturally inclines to the development of self-capacity-and-bodily-focused disciplines.

There is no direct connection in this to any good-versus-evil dichotomy. But, outside the monotheistic traditions, good-versus-evil is not the normal religious dichotomy. In religions on the animist-polytheist spectrum, the more normal distinction is order-versus-chaos. This is particularly true in agrarian societies, where a bad harvest presages disease, death and, if there is a sequence of such harvests, disaster. This concern for order can be seen in the ancient Egyptian concept of maat. How to construct and maintain social order is a central concern of Chinese philosophy, while to seek the Tao (or Dao) is to seek to be in accordance with the natural order of the universe.

Ascetic, sex-restrictive disciplines tend to be more orderly than more sex-permissive disciplines. Especially if, as is the case in various forms of the Left-Hand Path, the deliberate breaking of conventions and restrictions is seen as a technique for developing one’s capacities. The highly patrilineal Indian elite would clearly tend to see the mother-right vestiges of Dravidian mysticism and occult practices as very much other. So, the Left-Hand Path would be seen as more chaotic (because it was) and therefore viewed negatively.

Arith Harger does not seem to be aware of any of this background. Of course, admitting the Left-Hand Path/Right-Hand Path distinction is originally Indian does rather get in the way of presenting it as Old Pagan Wisdom. Somewhat similar, though less developed, patterns of a patrilineal pastoralist Indo-European overlay interacting with matrilineal farming-religion survivals can also be traced in European cultures and their pre-Christian religious and occult traditions.

Where Harger gets particularly confused is over the good-versus-evil distinction. This is not remotely an originally European idea. The pre-Christian religious traditions of Europe fairly clearly follow the normal order-versus-chaos division. Thus, in Norse mythology, various monsters of chaos threaten the order upheld by the Aesir in a pattern that recurs across mythologies.

The good-versus-evil distinction is essentially (as far as we can tell) an invention of Zarathustra (aka Zoroaster). It fits nicely in with monotheism — with a Creator God who creates both the material and the moral order. Opposition to such a God is not merely chaotic and disorderly, it is anti-moral and destructive.

Hence the original Jewish understanding of the sin of Sodom and Gomorrah was that they were cities that were systematically anti-moral, preying on the weak and vulnerable and refusing to respect others. (Which makes way more sense than the later interpretation of the key sin of the cities of the plain being unnatural sex: see Chapter Four of Michael Carden’s Sodomy: A History of a Christian Biblical Myth.)

Norman Cohn’s book Cosmos, Chaos and the World to Come is a good introduction to the hugely important shift in moral perspectives from order-versus-chaos to good-versus-evil. A shift in moral perspectives that explains much of the antipathy between the Romans (who were most definitely all about order, including in ways that seem wildly immoral, or even evil, to us) and the Jews and later the Christians. If one is going to truly embrace a pagan perspective, good-versus-evil has to go.

So, the Right-Hand Path/Left-Hand Path distinction does not map to good-versus-evil. As it originally arose in the intermixing of cultures on the animist-polytheist spectrum, that is not surprising.

The Right-Hand Path/Left-Hand Path is originally an Indian distinction. But the patrilineal pastoralist Indo-European overlay over the culture and religious perspectives of resident matrilineal agrarians that we see in India also occurred in Europe, so the distinction found a relatively easy path into European pagan traditions, even revived ones.

But surely it is better to get the history correct, rather than hopelessly confused.

Cross-posted from Medium.

Thursday, November 12, 2020

The US's turn to be hobbled by Leftism?

Ideologies come and go. It is the social dynamics that matter.


In one of his fun and informed alternative history videos, the operator of the Whatifalthist YouTube channel makes the point that Communism was good for the US, because it hobbled the two countries most able to rival the US — Russia and China.

Both Communist regimes killed millions of their citizens in mass famines, depressed fertility rates (intentionally in the case of China) and adopted highly inefficient command economies. Stalin did the US the extra favour of systematically killing off the kulaks — that is, those peasants who showed any initiative.

Stalin was ruthlessly effective at deploying available resources to create heavy industry to support a war machine. (Something that Mao and the Kim family regime of North Korea copied.) Nevertheless, at no stage did the Soviet economy achieve economic growth rates as high as did the Tsarist regime. Nor did the Soviet Union have the cultural, intellectual or scientific vitality of the Tsarist regime. (Soviet technological advances were either a result of theft or massive deployment of resources.) Under the Tsars, Russia had become a major agricultural exporter. The Soviet Union had perennial problems feeding itself.

Adding to the Soviet regime’s record of tyranny and mass murder, bringing back slavery (in the Soviet labour camp system) and serfdom (from 1940 to 1956, no worker could change workplace without the workplace’s permission) made it utterly clear that, no, the Soviet Union was no sort of general moral advance over Tsarism. And, despite its Cold War rivalry with the US, the Soviet Union was a smaller population state with a smaller economy than Russia would have been if it had never suffered Communist rule.

Both Russia and China have since become various forms of market economies. Nevertheless, the legacy of Communist rule means that both countries have fewer people, lower standards of living, and less intellectual and cultural vitality than they would have if they had never been afflicted by Communism.

A case can be made that it is now the US’s turn to be hobbled by Leftism.

By Leftism I do not mean labourism. Labourism is the working class asserting itself through unions and political organisation. Communist regimes do not permit independent working class political action, unless (as in the case of Poland) they have been weakened by other pressures.

When folks attempt to define Leftism, they typically do so ideologically. But I am interested in the social dynamics, so I am not going to attempt an ideological definition.

In terms of social dynamics, Leftism is the human-and-cultural capital class seeking to achieve social dominance via social transformation politics. It is based on three propositions: (1) that the adherents have a clear understanding of social dynamics; (2) that they possess, or can discover, a clear path to the morally positive transformation of society; and (3) that this path will be achieved if sufficient power is handed to people like them.

Leftism is members of the human-and-cultural capital class having massive tickets on themselves. Which is why Leftism tends to end up dominating those institutions and organisations that themselves are dominated by the human-and-cultural-capital class. Leftism not only gives them a (flattering) sense of status and purpose, it also generates a shared status strategy.

Adherence to Leftism means adherence to a set of prestige opinions (what good, smart, informed people believe and what malicious, stupid or ignorant people don’t) ready-made to assert social dominance. That dominance is asserted through the deemed moral necessity of preferring to appoint, or otherwise support, people with said prestige opinions that mark the “good, smart, informed” people while stigmatising, and otherwise penalising, those who demur.

Once a particular version of Leftism achieves sufficient prestige-opinion dominance, it can sweep through organisations and institutions remarkably quickly. Especially bureaucracies, as it provides the advantage of simplifying selection processes (one picks “folk like us”, so folk select in their own image and likeness); simplifying internal coordination (people have common outlooks and expectations); and generating moral projects to be getting on with.

Without being able to mobilise and coordinate people with organisational skill, Leftism could never come to dominate societies sufficiently to hobble them. The existence of such capacity does not, however, mean that Leftism will be beneficial to a society. On the contrary, the features that make it good at gaining positions of power are also what recurrently turns its dominance into a human and social disaster.

Leftism in power hobbles societies because (1) none of its three constituent propositions ever turns out to be sufficiently true to have any effect other than hobbling the society, and (2) Leftism destroys or distorts feedback mechanisms.

The second factor operates quite straightforwardly. The belief that Leftists have such profound social understanding and moral purpose immediately discredits any feedback that seems to contradict that status and purpose. Moreover, to achieve the deemed morally urgent social transformation, a level of social power has to be achieved sufficient to enable the overriding of any resistance. This means breaking up any capacity to organise to resist the Leftist moral project, or to persuade others to do so. The result is a pervasive attack on, and blocking of, feedback mechanisms. This inevitably and severely represses the information flows and incentives needed for effective error detection. The ability to detect error, consider alternatives, adjust actions and so on becomes sufficiently reduced that more disastrous policies can and will be followed.

As for the key propositions of Leftism — (1) that adherents have a clear understanding of social dynamics; (2) that they possess, or can discover, a clear path to the morally positive transformation of society; and (3) that this path will be achieved if sufficient power is handed to people like them — they are simply never sufficiently true for a dominant Leftism to have anything other than an overall highly negative effect on society.

The first problem is that the content of Leftism is primarily driven by the status strategy, because that is its prime appeal, and it is what the selection processes will operate most strongly to serve. Selection for the most efficacious set of prestige opinions and moralised dominance strategy is not selection for truth or accuracy.

On the contrary, the complexities of reality will get in the way of the prestige-and-dominance strategy, so there will be selection for simplifying moral salience. Including casting entire sections of society as having profoundly negative moral salience, so that any idea, experience or concern coming from them will be presumptively denigrated, dismissed or otherwise discounted. This will strongly tend to block or narrow the social understanding underpinning whatever flavour of Leftism has become dominant.

In particular, whatever flavour of Leftism is dominant will have some factor, or small set of factors, deemed “to explain” the dynamics of the society to be transformed. Or, at least, the dynamics of whatever deemed moral failure requires said social transformation. As these factors have to play a set narrative role in supporting the project of social transformation, social understanding must be narrowed so that they can play that role. Any social factors that get in the way of that narrative will be denigrated, dismissed or ignored, ensuring that the transformation project will itself be based on a partial understanding, profoundly inadequate to fulfil the role cast for it.

The second problem is that the existing society has to be cast as a profound moral and social failure so as to justify the deemed social transformation. That means the society’s actual achievements must be dismissed or belittled, which in turn means that the processes which led to those achievements will be denied, denigrated or dismissed. The more successful the society actually is, the more disastrously inaccurate that process of denial, denigration and dismissal will be. Conversely, the more fragile the underpinnings of existing social achievement, the more disastrous the effect of that denial, denigration and dismissal is likely to be.

The third problem is that the concentration of social power needed to achieve the deemed morally urgent social transformation will tend to aggravate the effect of all the above intellectual flaws and failings, precisely because of the concentration of social power operating on their basis. Moreover, a new selection process will be set up whereby manipulative personalities will be drawn to what is, in effect, the only game in town, given the massive concentration of social power involved. Indeed, there will be something of a selection process in favour of Dark Triad personalities (narcissistic, Machiavellian, psychopathic) as they will be able to operate in, and manipulate, the power dynamics more effectively. In part because they will have fewer scruples, though the sense of moral urgency in the social transformation project will inherently tend to override scruples.

The entire pattern can be understood without any reference to some particular ideological claim. It is the social dynamics that are crucial. The particular doctrines involved mainly have an effect in aspects of how the social dynamics play out, not the fundamental social dynamics themselves.

We can see the patterns of Leftism well underway in the contemporary US: most obviously in California’s various dysfunctions, but more generally in those metropolises dominated by progressive politics and in institutions dominated by the human-and-cultural capital class (such as media, Big Tech and education). As there is no sign that contemporary Leftism is losing its fervour or institutional hold — on the contrary, both seem to be increasing — the social dynamics of Leftism in the US have a considerable way to go yet. How badly they end up hobbling the US will entirely depend, as it did in Russia and China, on how dominant Leftism becomes.

Cross-posted from Medium.