Tuesday, September 13, 2022

What happened to creative capitalism?

 


The question I have posed above strikes me as being delightfully ambiguous. It could be asking what happened to bring to an end the era in which creative capitalism brought about high rates of productivity growth. Alternatively, it could be asking what happened to the concept of “creative capitalism” that Bill Gates presented to the World Economic Forum (WEF) in 2008.

My focus here is on the second interpretation, but I will end up discussing what has happened to the creativity of capitalism in the more traditional sense.

Why am I interested in the particular form of corporate social responsibility (CSR) that Bill Gates referred to as “creative capitalism”? I don’t hear the Gates concept being much talked about these days, but I think that variants of this form of CSR have become more common over the last decade or so. It is worth considering whether Gates’ approach to CSR is changing corporate sectors in ways that may directly hamper the traditional creativity of capitalism, or indirectly hamper it via impacts on economic policies pursued by governments.


That is why I decided that the time had come to read Creative Capitalism, a book edited by Michael Kinsley, which was published in 2008. The book consists mainly of comments by eminent economists on the “creative capitalism” concept that Bill Gates presented to the WEF. I should confess at this point that deciding to read the book didn’t require me to judge that it might be worth buying. A copy was given to me last year by a friend who was downsizing his library. The book was sitting in my “unread” pile for many months waiting for me to show some interest. I am now glad I read it!

In the next section I will outline Gates’ concept and briefly discuss the different reactions of economists writing 14 years ago. That will be followed by consideration of possible consequences of changes in the nature of capitalism that seem to stem from Gates’ concept and similar ideas.

Gates’ concept

Bill Gates advocated a new approach to capitalism in which businesses would give more attention to recognition and reputation. As he put it:

Recognition enhances a company’s reputation and appeals to customers; above all it attracts good people to the organisation. As such, recognition triggers a market-based reward for good behavior.

Gates advanced this view in the context of considering how self-interest could be harnessed to provide more rapid improvement in the well-being of poor people. However, pursuit of recognition seems to have become a strong motivator for the environmental and social objectives that are increasingly espoused by corporates. Gates does not mention the potential for pursuit of recognition for good behavior to have a positive influence on investors, but that also seems to have emerged as an important factor in recent years.

My review of the contributions of commentators is highly selective. I just focus here on what I see as the main points that were raised.

Some of the commentators suggested that entrepreneurs with philanthropic objectives might do better to do what Gates did, rather than to follow the approach he advocated in his speech to the WEF. Like some others before him, Gates pursued profits until he became extraordinarily wealthy and then established a foundation to pursue philanthropic objectives. An argument in support of that approach is that the pursuit of multiple “bottom lines” by companies adds to the difficulty of measuring their performance to ensure that executives can be held accountable for outcomes.

Several of the commentators referred to Milton Friedman’s view, in Capitalism and Freedom, that CSR is a “fundamentally subversive doctrine” because, in a free society, “there is one and only one social responsibility of business – to use its resources and engage in activities designed to increase its profits so long as it stays within the rules of the game, which is to say, engages in open and free competition without deception or fraud” (p 133).

However, others pointed out that Gates’ proposal is consistent with a free society because he was suggesting that corporates can obtain a market-based reward for choosing to pursue non-pecuniary objectives of employees and consumers. Similarly, it is consistent with a free society for companies to seek to pursue non-pecuniary objectives of the shareholders who own them.

Consequences

It is likely that an increasing tendency for corporates to pursue non-pecuniary objectives would have a negative impact on measured productivity growth. However, that may be largely a problem in the measurement of productivity. Measures of productivity growth are biased to the extent that output indicators do not incorporate non-pecuniary goods that contribute to human flourishing. If corporates are efficient vehicles for the pursuit of the non-pecuniary objectives of their shareholders, employees, and customers, it seems reasonable to suppose that pursuit of those objectives would contribute to the flourishing of the people concerned.

“The unknown ideal”

What happens if a company is not an efficient vehicle for the pursuit of the non-pecuniary objectives of its shareholders, employees, and customers?

In considering this question it is important to recognize that corporate sectors consist of large numbers of individual firms which compete for labor, capital, and customers. Individual firms are free to give different weight to different objectives. Some may see their only role as profit maximization, and may even seek recognition by asserting that they see that as a social responsibility. Others may seek a reputation for social responsibility by undertaking marketing exercises, without changing their practices. At the other extreme, some companies may devote themselves largely to pursuit of one or more non-pecuniary objectives, providing only minimal financial returns to shareholders.

It is customary for economists to assert that the market is capable of weeding out firms that are following inefficient strategies. Applying the usual market test, it appears reasonable to suppose that if individual companies pursuing the non-pecuniary objectives of workers, consumers, and shareholders are able to survive, the strategies they are following must pass the market’s efficiency test.

The Hayek quote at the top of this article is followed by his assertion that the argument for liberty rests on “the belief that it will, on balance, release more forces for the good than for the bad” (Constitution of Liberty, p 31). In considering how best to describe the spontaneous order of a free society, Hayek later suggested that capitalism “is an appropriate name at most for the partial realization of such a system in a certain historical phase, but always misleading because it suggests a system which mainly benefits the capitalists, while in fact it is a system which imposes upon enterprise a discipline under which the managers chafe and which each endeavours to escape” (Law, Legislation, and Liberty, Vol. 1, p 62).

The corporatist quagmire

Unfortunately, in the real world at present, the ability of the market to weed out inefficient firms and the strategies they adopt is greatly hindered by government intervention and expectations of future government intervention. If firms believe that pursuit of certain goals will be rewarded by governments, they have an incentive to establish reputations for pursuing those goals. Firms also have an incentive to seek government assistance as a reward for good behavior. The increasing prevalence of such interactions has led to the development of corporatist, rent-seeking cultures that have contributed to a long-term decline in rates of productivity growth in high-income countries.

It is also important to note that, in the realm of politics, what some people view as good behavior is often viewed in a different light by others. For example, political opinions differ on whether or not it is good for pension funds to take account of environmental policies in their allocation of funds. Investors are often uncertain about which view will prevail in the political arena. Such economic policy uncertainty adds to the normal commercial risks of investment. An example which comes readily to mind is the impact of policy uncertainty on future investment in gas-fired electricity generation in industrialized countries. Normal commercial considerations might suggest that such investment is likely to be profitable in meeting demand for electricity when the wind is not blowing and the sun is not shining, but investors have to contend with the possibility that further regulatory interventions to discourage use of fossil fuels will render it unprofitable. It is reasonable to predict that blackouts will be more common in jurisdictions where such policy uncertainty prevails.

Political ideologies of governments also seem to be changing in ways that make it more difficult for markets to weed out firms adopting inefficient strategies. Over the last decade or so, the progressive side of politics has encouraged corporates to establish reputations for “woke progressivism”. That seems to have induced political conservatives to become increasingly disenchanted with corporates. That disenchantment has added to the antagonism associated with the increased tendency of many conservatives to espouse economic nationalism and populist views opposed to the corporate sector’s interest in free trade, international capital mobility, and technological progress.

As politics comes to play an increasing role in the investment decisions of businesses, economic growth rates of industrialized countries are likely to decline. Since governments find it difficult to disappoint the expectations of voters, government spending is unlikely to be constrained to a corresponding extent. Major economic crises seem likely to become more common. (I have discussed these issues more fully in Chapter 6 of Freedom, Progress, and Human Flourishing.)

The obvious solution

Immediately after the passage in which Milton Friedman suggested that the social responsibility of business was to serve the interests of stockholders, he suggested that the social responsibility of union leaders is to serve the interests of their members. He then went on to write:

It is the responsibility of the rest of us to establish a framework of law such that an individual in pursuing his own interest is, to quote Adam Smith … “led by an invisible hand to promote an end which was no part of his intention. …” (Capitalism and Freedom, p 133).

Unfortunately, it seems likely that major economic crises will need to be endured before governments of industrialized countries once again see merit in confining themselves to core responsibilities in the manner that Adam Smith suggested.

Conclusion

Companies are increasingly choosing to adopt strategies to improve their reputations with employees, customers, and investors who have interests in social and environmental issues. That would not pose a problem in the context of the spontaneous order of a free society. Pursuit of multiple objectives may add to problems in holding executives accountable for an individual firm’s performance, but free markets are capable of weeding out firms that follow inefficient strategies.

Unfortunately, however, industrialized countries are now corporatist quagmires in which the ability of markets to weed out firms that adopt inefficient strategies is greatly hindered by government intervention and expectations of future government intervention. The obvious solution is to reduce government intervention in markets, but major economic crises will probably need to be endured before that happens.

Tuesday, August 16, 2022

What implications does a livewired brain have for personal development?


 


I was pondering this question while reading David Eagleman’s book, Livewired: the inside story of the ever-changing brain. Eagleman is a neuroscientist, writing about neuroplasticity for a popular audience. My interest in brain plasticity was aroused over a decade ago when I read Norman Doidge’s book, The Brain that Changes Itself, and speculated about some implications of his assertion that “to keep the mind alive requires learning something truly new with intense focus”.

Eagleman prefers “livewired” to “plastic” because the latter term may bring to mind plastic molds rather than flexibility. He suggests that we need the concept of liveware “to grasp this dynamic, adaptable, information-seeking system”.

By the way, Eagleman’s book has left me thinking that in 50 years’ time, people who are shown the above cartoon will still be able to see the humor in it.

The livewired brain

In my view, the most important point that Eagleman makes is that the human brain arrives in the world unfinished: “despite some genetic pre-specification, nature’s approach to growing a brain relies on receiving a vast set of experiences, such as social interaction, conversation, play, exposure to the world, and the rest of the landscape of normal human affairs”.

Experiences during early childhood are to a large extent determinative. If infants don’t have appropriate social and sensory interaction, their brains become malformed and pathological.

As brains mature, neural maps become increasingly solidified. As brains get good at certain jobs, they become less able to attempt others. Adult brains keep most of their connections in place to hold on to what has been learned, with only small areas remaining flexible. Nevertheless, even in the elderly an active mental life fosters new connections.

Eagleman distills the main features of livewiring into seven principles:

  1. Brains match themselves to their input, e.g. when a person is born blind the occipital cortex is completely taken over by other senses.
  2. Brains wrap around the inputs to leverage whatever information streams in. It is possible for one sensory channel to carry another channel’s information, e.g. with appropriate equipment, the brain is able to learn to use information coming from the skin as if it is coming from the eyes.
  3. Brains learn by putting out actions and evaluating feedback, e.g. that is how we learn to communicate with other people, how we can learn to control machinery, and how a damaged spinal cord can be bypassed using signals passed directly from a brain to a muscle stimulator.
  4. Brains retain what matters to them; flexibility is turned on and off in small spots based on relevance; what is learned in one area is passed to an area in the cortex for more permanent storage; the cortical changes involve the addition of new cellular material; brains have a different system for extracting generalities in the environment (slow learning) and for episodic memory (fast learning). “Everything new is understood through the filter of the old.”
  5. Brains lock down stable information. Some parts of the brain are more flexible than others, depending on the input. Brains adjust themselves depending on how you spend your time. When learners direct their own learning, relevance and reward are both present and allow brains to reconfigure.
  6. Plasticity arises because different parts of the system are engaged in a competitive struggle for survival. Competition in the brain forest is analogous to the competition between trees and bushes in a rain forest. The principles of competition poise the brain “on the hair-trigger edge of change”.
  7. Brains build internal models of the world; by paying attention, our brains notice whenever predictions are incorrect and are able to adjust their internal models.

Eagleman argues that the computer hardware/software analogy tends to lead people astray in thinking about brain function. He suggests that as neuroscientists illuminate the principles of brain function, those principles will be gainfully employed to create self-configuring devices that use their interaction with the world to complete the patterns of their own wiring.

The book ends with this thought:

“We generally go through life thinking there’s me and there’s the world. But as we’ve seen in this book, who you are emerges from everything you’ve interacted with: your environment, all of your experiences, your friends, your enemies, your culture, your belief system, your era—all of it.”

That could be interpreted by social engineers as an invitation to seek to modify our brains by shaping our environments. I prefer to see it as an invitation to individuals to think about their belief systems and the choices they make that influence their personal environments because their beliefs and choices can have a profound impact on their own personal development. I will explain later the links between personal environment, social capital and individual flourishing.

The idea that individuals can make choices about their personal environments implies the existence of free will. Eagleman is somewhat skeptical about the existence of free will but he speculates that it may be a property of the whole brain as a complex network or system.  He acknowledges that organisms display the property of free will in their interactions with their environments. Self-direction seems to be implicitly acknowledged in the discussion of some topics in Livewired.  For example, there seems to be implicit acknowledgment that individuals may choose what they practice in the discussion of the ten-thousand-hour rule concerning the need for practice to acquire expertise. Self-direction also seems to be implicit in choices many elderly people are making to keep their brains active.

More fundamentally, if brains learn by putting out actions and evaluating feedback it seems reasonable to expect such behavior to encompass actions that are consciously self-directed as well as those occurring without conscious awareness. The idea that by paying attention our brains notice whenever predictions are incorrect and are able to adjust their internal models seems to me to suggest a role for conscious self-direction. If humans are capable of building robots which can adjust their internal models in the light of experience, it seems reasonable to expect individual humans to be capable of using some of the principles of brain function to create better versions of themselves.

The knowledge that human brains are livewired suggests to me that it is not unduly optimistic to believe that individuals begin life with huge potential for self-directed personal development and that this potential is never entirely extinguished as they grow older.

Directing attention to achieve cognitive integrity

Self-direction implies an ability to direct one’s attention sufficiently to consider the consequences of alternative courses of action. An ability to direct one’s attention is a meta-cognitive capacity – it entails a degree of control over one’s own thought processes.  

You might be thinking that exercising control over thought processes is difficult enough for psychologically healthy people, so it must be impossible for people suffering from addictions, obsessions, and delusions. However, in a Psychology Today article, Gena Gorlin, a psychologist, has pointed to evidence that people who appear to have a diminished capacity for rational deliberation in some aspects of their lives can actually be helped by therapies which help them to exercise agency and acquire relevant knowledge.


In a scholarly contribution, published in 2019, Gena Gorlin and a co-author introduced the concept of cognitive integrity to describe “the metacognitive choice to engage in active, reality-oriented cognition”. (Eugenia I. Gorlin and Reinier Schuur, ‘Nurturing our Better Nature: a proposal for Cognitive Integrity as a Foundation for Autonomous Living’, Behavior Genetics, 2019, 49: 154-167. Independent scholars may be able to obtain access by following links on Gena Gorlin’s web site.)

Cognitive integrity is both a state of mental activity and a trait-like disposition. It stands in contrast to passive cognitive processing – being driven by unconsciously activated intention – and active pretense, or self-deception. The pretense of cognition occurs when we procrastinate and make lame excuses to ourselves to avoid doing things that we have chosen to do. Among other things, self-deception can also involve negatively distorted self-assessments, inaccurate causal attribution for life events, and false memories. Those cognitive biases are common among individuals with depression and anxiety.

Gena Gorlin posits that people who engage in repeated exercise of cognitive integrity earn self-trust. By contrast, those who engage in frequent self-deception are likely to harbor an increasing sense of insecurity about their own abilities.

It seems to me that there is a strong overlap between people who practice cognitive integrity and people who are self-authoring and self-transforming, according to definitions adopted by Robert Kegan and Lisa Laskow Lahey. A self-authoring mind is self-directed and can generate an internal belief system or ideology. A self-transforming mind can step back from and reflect on the limits of personal ideology. You can read more about that and how I see it as linked to personal integrity in Freedom, Progress, and Human Flourishing (pp 171-173). There is also relevant discussion on this blog.

Personal development as a multi-stage process

The information we have about the livewired nature of brains is suggestive of substantial potential for individual personal development throughout life. The process of personal development can be seen as a multi-stage process involving interaction between a person’s family and social environment and the degree of cognitive integrity they achieve.

In Freedom, Progress, and Human Flourishing, I make use of an analytical framework proposed by the economist Gary Becker to propose that the extent to which an individual flourishes at any time during her or his life is a function of personal capital and social capital.

Personal capital includes all personal resources, natural abilities, skills acquired through education and on-the-job training, and preferences, values and habits acquired from past experiences. For example, habit formation causes previous consumption patterns to have a large impact on current preferences. Those habits can either enhance or inhibit an individual’s flourishing.

Social capital incorporates the influence of other people—family, friends, peer groups, communities. People want respect, acceptance, recognition, prestige, and so on from others and often alter their behavior to obtain it. Social capital can have a positive or negative impact on an individual’s flourishing. For example, peer pressure on a teenager could lead to sexual promiscuity, or to healthy exercise.

This framework recognizes that present choices and experiences affect personal capital in the future, which in turn affects future flourishing. It is difficult to modify the social capital of the networks to which individuals currently belong, but they may have opportunities to leave networks that damage their prospects of flourishing, and to join other networks.

I wrote:

“The journey of life is a multi-stage process. At each stage, the extent that we can flourish depends on effective use of personal capital we have developed in earlier stages, and alertness to opportunities for further investment in personal capital. Investment in personal capital can help us to forge mutually beneficial relationships with others and, if necessary, to enter more favorable social networks. As we flourish, our priorities may change, bringing about changes in preferences and behaviors. At each stage of adult life, flourishing requires values consistent with wise and well-informed self-direction.”


Wednesday, August 10, 2022

How should Bill Carmichael's transparency project be pursued now?

 


Unfortunately, few readers of this blog will know anything about Bill Carmichael or his transparency project. My main purpose here is therefore to explain who he was and why the question I have posed above is worth considering.

W.B. (Bill) Carmichael died recently at the age of 93. In his obituary,  Gary Banks, former chair of the Australian Productivity Commission, described Bill aptly as “an unsung hero” of the Australian Public Service (APS).

In my experience, most members of the APS who are working on economic policy like to claim that they are contributing to the well-being of the public at large. However, I find it difficult to accept such claims unless the people concerned can demonstrate that they are actively seeking to either undo mistakes that governments have made, or to discourage governments from making more mistakes.

Bill Carmichael made a huge contribution in helping to undo mistakes that Australian governments made over many decades in insulating much of the economy from international competition. His efforts in support of trade liberalization have helped Australians to enjoy greater benefits from trade and greater productivity growth than would otherwise have been possible.

Alf Rattigan’s right-hand man

Bill’s contribution to trade liberalization was largely behind the scenes, helping Alf Rattigan, the former chairman of the Tariff Board, to pursue his reform efforts. Rattigan argued successfully that tariff reform was required because industries that had been given high levels of government assistance to compete with imports were inherently less efficient users of resources than those requiring lower levels of assistance or none at all.

As Gary Banks’ obituary indicates, Bill played an important role in developing strategies, writing the key speeches that Alf Rattigan delivered, dealing with difficult bureaucrats, and engaging with economic journalists who were highly influential in informing politicians and the public about the costs of protection and the benefits of international competition. Bill’s contribution reached its pinnacle in the early 1970s when the Industries Assistance Commission (IAC) was established with an economy-wide mandate to ensure greater transparency to processes for provision of government assistance to all industries.

Bill eventually became chairman of the IAC. However, in my view, his most important contribution was made in helping to establish the organisation and ensure that it had access to the professional economic expertise it required to undertake research and produce quality reports.

Bill’s transparency project

Bill Carmichael’s interest in the transparency of trade policy did not end after he retired from the IAC in 1988. My reference to Bill’s transparency project relates specifically to the efforts he made during his retirement to bring greater transparency to trade negotiations. These efforts were made in collaboration with Greg Cutbush, Malcolm Bosworth, and other economists. The best way to describe that project is to quote some passages from an article in which Bill suggested that Australians are being misled about our trade negotiations and agreements. The article, entitled ‘Trade Policy Lessons from Australia’,  was published by East Asia Forum in 2016.

Bill wrote:

The goal of trade policy is not limited to increasing export opportunities. Nor is it just about improving trade balances. Rather trade policy is about taking opportunities to improve the economy’s productive base. When assessing a nation’s experience with bilateral trade agreements, this is the test that should be applied.

In each bilateral agreement Australia has completed to date, projections of the potential gains for Australia, based on unimpeded access to all markets of the other country involved, were released prior to negotiations. These studies did not, and could not, project what was actually achieved in the ensuing negotiations. The quite modest outcomes for Australia from those negotiations meant the projected gains conveyed nothing about what was eventually achieved. Yet the projections were still quoted to support the agreements after they were signed, as though they reflected actual outcomes.

This approach to accounting for the outcome of trade agreements has meant that Australia has missed opportunities for productivity gains. So how, given Australia’s recent experiences, can trade policy and negotiations be better conducted in future?

Australia cannot change how it negotiated its agreements with the United States, Japan, South Korea and China. But policymakers can refine their approach to future negotiations. Australia’s trade policy should be guided by a model based on its conduct in the Uruguay Round of trade negotiations. The Uruguay Round confirmed that the domestic decisions needed to secure gains from unilateral liberalisation and those required to secure the full gains available from negotiations have converged.

The negotiations in the Uruguay Round took place at a time when former prime ministers Bob Hawke and Paul Keating were reducing Australia’s barriers to trade unilaterally. Their productivity-enhancing reforms were subsequently offered and accepted in the Uruguay negotiations as Australia’s contribution to global trade reform. Consequently, Australia secured all the gains available from trade negotiations: the major gains in productivity from reducing the barriers protecting less competitive industries, as well as securing greater access to external markets.

This was the kind of win–win outcome negotiators should seek from all trade agreements. It made a substantial contribution to the prosperity Australia has since enjoyed. 

In future trade negotiations, the Productivity Commission — Australia’s independent policy review institution — could provide a basis for market-opening offers by conducting a public inquiry and reporting to government before negotiations get underway.

In a subsequent paper, publicly endorsed by a group of trade economists, Bill argued:

“If we are to close the gap between trade diplomacy and economic reality, we need to respect three lessons from experience: first, a major part of our gains from trade agreements depends on what we take to the negotiating table, not what we hope to take away from it; second, liberalising through trade negotiations cannot be pursued simply as an extension of foreign policy; and third, … future bilateral agreements should be subject to cost-benefit analysis before ratification.”

How should Bill’s project be pursued?

I raise this question without much optimism that greater transparency of trade policy can be achieved in the short term. There is no more reason to be optimistic that the Department of Foreign Affairs and Trade will suddenly become receptive to ideas that challenge its claims about the benefits of the trade agreements it has negotiated than there was to expect its predecessor, the Department of Trade and Industry, to be receptive in the 1960s to the ideas of Rattigan and Carmichael, which challenged that department’s protectionist orthodoxy. Added to this, it is difficult to ignore signs that protectionist sentiment is on the rise again in Australia in the wake of the Covid-19 pandemic and fears that a further deterioration in international relations could lead to disruption of international shipping.

Nevertheless, as Bill might say, none of that should stop us from pursuing longer-term goals.  I hope that some people reading this will feel motivated to think constructively about how Bill Carmichael’s transparency project could be pursued as a longer-term exercise in institutional reform.

Thursday, July 21, 2022

Who was Erasmus and why should we care?


 After I stumbled across that quote a few days ago, it struck me that Erasmus might have something relevant to say to people living today.

However, before I discuss the context in which Erasmus made that statement, it might be helpful to provide some relevant background information about him.

The man and his vocation

Erasmus was born around 1467 and died in 1536.  William Barker, the author of a recently published biography, Erasmus of Rotterdam: The Spirit of a Scholar, tells us that Erasmus had become famous by the time he reached his mid-fifties. Erasmus was a prolific author. The rise of the printing press helped him to establish an international reputation during his lifetime. At that time it was possible for a humanist scholar – one steeped in the literature and culture of ancient Greece and Rome – to have fame equivalent to that of an Einstein or Stephen Hawking in more recent times.

Although Erasmus was a priest, he remained independent of the church hierarchy. Patrons offered gifts and allowances, which he accepted, but he was not dominated by any person or institution. He had an aversion to scholastic theology, believing that the words of the Bible show the message of Jesus more clearly than could any scholastic commentator. He based his famous translation of the New Testament on ancient Greek manuscripts because he believed that some of the original reports written by followers of Jesus had become distorted in the official translation used at that time.

In addition to his translation of the New Testament, Erasmus’ famous works include The Praise of Folly and his compilation of Roman and Greek proverbs. The Praise of Folly takes the form of a speech by Folly, seeking to persuade us that she is basic to all our lives. Barker sums up the book as follows:

“The work begins with social criticism, a kind of genial mocking, but it ramps up to direct attacks on various interest groups in the political, intellectual and religious worlds, and, in the amazing final move, suddenly turns inwards, and pulls the reader towards the abyss found in the complete loss of self through a total religious faith.”

As I see it, theological disputes were a particular focus in this book. Erasmus wrote:

I [Folly] am often there, where when one was demanding what authority there was in Holy Writ that commands heretics to be convinced by fire rather than reclaimed by argument; a crabbed old fellow, and one whose supercilious gravity … answered in a great fume that Saint Paul had decreed … “Reject him that is a heretic, after once or twice admonition.” And when he had sundry times, one after another, thundered out the same thing, … at last he explained it thus … . “A heretic must be put to death.” Some laughed, and yet there wanted not others to whom this exposition seemed plainly theological … . “Pray conceive me,” said he, “it is written, ‘Thou shalt not suffer a witch to live.’ But every heretic bewitches the people; therefore …”.

Erasmus’ book of proverbs was also a vehicle for social criticism. For example, in his revised version of this book, his commentary on the proverb, “War is a treat for those who have not tried it”, is a passionate essay praising peace and condemning war. Barker notes, however, that Erasmus’ condemnation of war was not unbounded. He approved of war against the Turks during the 1520s when they had reached the outskirts of Vienna.

Context of the quote

The context of the passage quoted at the top of this article is explained by Paul Grendler in his article, ‘In Praise of Erasmus’ (The Wilson Quarterly 7(2) Spring 1983). The plea, “Let us not devour each other like fish” was in response to an attack by his former friend Ulrich von Hutten, who had become an associate of Martin Luther. Erasmus welcomed Luther as a fellow reformer in 1517 when he began to criticize greedy churchmen and the worship of relics. However, as Luther’s criticism of Catholicism became more abusive, Erasmus counselled moderation. Luther would have none of it:

“You with your peace-loving theology, you don’t care about the truth. The light is not to be put under a bushel, even if the whole world goes to smash”.

The papacy was not inclined to stand idly by while Luther “led souls to hell”. So, Europe went to smash!

Erasmus continued to try to mediate between Catholic and Protestant, asserting that he found much to admire in Luther while disagreeing with him about predestination. The Catholic response was that “Erasmus laid the egg that Luther hatched”.

Unfortunately, Erasmus was unable to persuade the contending parties to refrain from warfare. If political institutions had provided greater support to Erasmus’ message at that time, perhaps Europeans could have avoided a few centuries of pointless religious warfare.

Contemporary relevance of Erasmus    

William Barker laments that the old discourse of humanism seems to have been eclipsed:

“Something has happened to the humanities and the old discourse of humanism in our time. The ideal of Erasmian humanism – a cosmopolitan, well-educated Republic of Letters – has moved to the margins of our cultural life. A shift in political, ethnic, gender and ecological values has led to a change in the cultural hierarchy.”

Nevertheless, he still sees Erasmus as relevant to the culture of our times:  

“Despite our hesitations and the new trajectories in our literary culture, there are aspects of Erasmus that still survive for us, that take him outside his historical moment and the programmatic frame of humanist education. We can still turn to him for his irony, laughter, and the free exercise of social criticism.”

I agree with all that, but I also see Erasmus’ message about refraining from war over theology as being highly relevant today. When Erasmus was alive, contending parties engaging in theological disputes were obviously willing to use coercive means to impose their will on their opponents. Today, not much has changed. Extremists among contending parties engaged in ideological disputes are still willing to use coercive power to impose their will on their opponents.

Few people who live in the liberal democracies have any difficulty condemning the authoritarianism of dictatorships which seek to prevent individuals from exercising freedom of conscience in their religious observance. However, there are many people among us who unwittingly engage in similar authoritarianism themselves. I am thinking particularly of politicians who are so certain of the correctness of their ideological beliefs that they struggle with the idea that those with opposing views are entitled to exercise freedom of conscience.

The exercise of freedom of conscience over the status of human embryos is the example that comes most readily to mind. I wrote about this in the preceding post. At one extreme, we have politicians claiming that pharmacists who refuse on conscientious grounds to supply medications that could be used to induce abortion are guilty of some kind of civil rights violation. At the other extreme, we have politicians arguing that under no circumstances should it be lawful for a woman to exercise freedom of conscience to terminate a pregnancy.

Will this conflict end in open warfare? The only reason I can see for ideological and theological authoritarianism to result in less violent outcomes today than occurred 500 years ago is the existence of democratic political processes. Unfortunately, in some liberal democracies those processes may no longer be sufficiently robust to provide contending parties with appropriate incentives to moderate their extremist agendas.

At this time, those who regard freedom of conscience as of utmost importance should remember the efforts of Erasmus to promote peace 500 years ago, and endeavor to be more successful than he was. “Blessed are the peacemakers …”.


Tuesday, July 5, 2022

How is it possible to believe in both right to life and freedom to choose?

 


The ongoing public debate between “right to life” and “freedom to choose” advocates seems to suggest, falsely, that a choice must be made between irreconcilable positions. The debate overlooks the legitimate reasons that people have to support both “right to life” and “freedom to choose” in different contexts. I argue in this article that opportunities for human flourishing are likely to be greatest when the political/legal order recognizes the validity of both “right to life” and “freedom to choose” in the contexts where those concepts are most relevant.

The article is addressed to people who believe that our main focus in considering the appropriateness of laws relating to termination of pregnancy should be on their implications for human flourishing. I hope that includes all readers.

My starting point is the proposition that opportunities for human flourishing are likely to be greatest within a political/legal order which allows individuals with differing values to flourish in different ways without coming into conflict with each other. The underlying idea here is that individual flourishing is an inherently self-directed process. The advocates of differing values may all think that they have the best recipe for human flourishing, but no-one can flourish if they are forced to live according to values they oppose.


The “live and let live” view presented in the preceding paragraph is not original. It is explained more fully, with references to major contributors to relevant philosophy, in my book Freedom, Progress, and Human Flourishing.

The line of reasoning sketched above suggests that people who hold widely differing views about issues such as termination of pregnancy may be able to live in peace and seek to flourish in their own ways, provided they refrain from attempting to coerce one another to modify their behavior. Such attempted coercion usually involves groups of people using their political power to impose their will on others.  

Of course, we may have good reasons to believe that some people are seeking to flourish in ways that are unlikely to succeed. We can try to persuade them to alter their ways but use of coercion to modify their behavior has potential to reduce further their potential to flourish. Putting people into jail does tend to diminish their opportunities to flourish.

When should the legal order recognize the right to life?

To this point I have obviously been writing about behavior that does not infringe the rights of others. When behavior does infringe the rights of others, it is appropriate for it to be subject to legal constraints. Infanticide is the example that is most pertinent to the current discussion.

The proposition that infants have a right to life is not controversial. Even so, legal systems tend to recognize that extenuating circumstances are often associated with the crime of infanticide. In high-income countries, infanticide is often attributed to post-natal depression. In 18th century Britain, when infanticide more commonly occurred for economic reasons (for example, to give other children in a family a better chance of survival) it was apparently common for juries to practice “pious perjury” to avoid convicting offenders for murder. In the 19th century, laws gave explicit recognition to the possibility that extenuating circumstances might exist in cases of infanticide.

There are strong grounds to argue that late-term abortion is tantamount to infanticide because the unborn child is at that stage capable of living outside the womb. It makes sense to argue on that basis that in the final weeks of pregnancy the unborn child has a right to life almost equivalent to that of an infant. The “almost” qualification is appropriate because the mother’s life may sometimes be endangered if an unborn child is accorded the same right to life as an infant.

When should the legal order recognize that women have a right to choose?

In my view the legal order should recognize that a woman has responsibility to decide what status should be accorded the embryo in her womb in the weeks immediately following conception. She is best placed to make such judgements because the embryo is only capable of existing with the life support that she provides it.

The most common alternative is for politicians to assert that they have a right to decide the status of embryos. They may follow the advice of religious authorities, philosophers of various kinds, the majority view of electors, swinging voters, party leaders, their spouses, their best friends etc. or they may rely on their own intuitions and feelings. Some politicians argue that embryos should be sacrificed to achieve their objectives concerning optimal growth of population, or to enable other species to flourish. Others argue that abortion should be illegal because human life is precious from the moment of conception.

Politicians should show some modesty when contemplating laws that over-ride the natural rights of individual pregnant women to make judgements about the status of  the embryos in their wombs and to act according to their consciences. They have a right to seek to persuade pregnant women to adopt their views on the status of the embryo, but there is no good reason why any of their views should constrain the actions of a woman who is not persuaded by them.

There is nothing in human nature that ensures that every woman with an embryo in her womb will view it as having the status of an entity that is worthy of being provided life support, given the opportunity costs that might entail for herself and her family. If the woman does not wish to maintain life support to the embryo, the use of force to require her to do so imposes a form of involuntary servitude upon her.

The authoritarianism involved in denying women the right to choose in the early stages of pregnancy is compounded by the invasion of privacy that is required to ensure compliance with this policy.

The transition

If it is accepted that right to life should prevail in the late stages of pregnancy and that freedom to choose should prevail in the early stages, that leaves the question of what rules should apply between those stages. It makes sense for the option of termination to be progressively restricted as pregnancy proceeds, rather than being legal one day and illegal the next.

A personal view

The views presented above have focused on what should be lawful or unlawful in a society which rejects authoritarianism and recognizes the rights of individuals with differing values to flourish in different ways. The discussion has been about the ethics of alternative legal orders, rather than personal ethics.

In case anyone thinks they can infer my views on the personal ethics of abortion from what I have written above, I will make them clear now. I subscribe to the view that because human embryos have the potential to become human persons they should not be lightly discarded. I think the world would be a better place if more people were persuaded to adopt that view, but it has the potential to become a much worse place if governments attempt to impose it.

Conclusions

Opportunities for human flourishing are likely to be greatest in a political/legal order which allows individuals to flourish in different ways without coming into conflict with each other.

When behavior infringes the rights of others it is appropriate that it should be forbidden. Infanticide obviously falls into that category. It is appropriate to recognize an unborn child as having a right to life almost equivalent to that of an infant in the final weeks of pregnancy.

The issues involved in the early weeks of pregnancy are quite different because the embryo is totally dependent on a woman to provide it with life support. The woman should be recognized to have responsibility to decide the status of the embryo at that stage. If she does not consider it to have a status worthy of being provided ongoing life support, her view should be respected. Laws requiring women to provide life support against their will impose a form of involuntary servitude upon them.


Sunday, June 26, 2022

How did a trading company come to rule India?

 


Spencer went on to suggest that trade would have been more successful in the absence of the privileges that the British government had conferred on the East India Company (EIC):

“Insane longing for empire would never have burdened the Company with the enormous debt which at present paralyzes it. The energy that has been expended in aggressive wars would have been employed in developing the resources of the country. Unenervated by monopolies, trade would have been much more successful.”  

Prior to my recent visit to India I was aware that classical liberals like Herbert Spencer were critical of the East India Company. Since my visit I have become an expert on all matters pertaining to Indian history. Just joking!

I can only claim to be able to sketch the outlines of the story of how the EIC ended up ruling India. I think the story is worth telling as a case study of the unintended consequences of government intervention in international trade.

Spencer was correct in identifying the EIC’s links to the British government as an important determinant of its behavior, but the context in which it operated also needs to be taken into account. The most important element of that context seems to me to be the rivalry between European powers to obtain advantage in trade with India.

Portugal came first.

Perhaps you can recall from school history lessons that Vasco da Gama sailed to India around the Cape of Good Hope in 1498. This was the culmination of voyages of discovery by Portuguese sailors, including the important contribution of Bartolomeu Dias, who had rounded the Cape some years earlier.


The Portuguese government was heavily involved in this exploration, and in what followed. In his book, The Portuguese in India, M.N. Pearson relates how the king, D. Manuel, invited da Gama to command the expedition when the latter happened to wander through the council chamber where the king was reading documents.

After da Gama’s voyage, the Portuguese court debated whether they should use force to seek a monopoly in the Indian Ocean or be peaceful traders. They chose force. Their aim was to try to monopolize the supply of spices to Europe and to control and tax other Asian trade. There was, of course, a great deal of trade in the Indian Ocean prior to Portuguese intervention, much of it controlled by Muslims (from India as well as the Middle East).

The Portuguese built forts in India to protect their trading activities. Some local rulers saw advantage in giving the Portuguese permission to establish forts, but the Portuguese often used force. Goa was conquered in 1510. The Portuguese obtained permission to build a fort at Diu in 1535 (and had the islands that today form Mumbai ceded to them) because the sultan of Gujarat, Bahadur Shah, wanted Portuguese help after being defeated by the Mughal emperor, Humayun. The Portuguese obtained Daman from the sultan in 1559 and immediately began construction of the fort at Moti Daman. Building of St Jerome fort (my photo below) commenced in 1614, but was not completed until 1672.


The Dutch eclipsed the Portuguese early in the 17th century.

The Portuguese were unable to prevent competition from the Dutch because the latter were “better financed, better armed, and more numerous”. The Dutch blockaded Goa from 1638 to 1644 and again from 1656 to 1663.

The Dutch East India Company was founded by the Dutch government in 1602, not long after the English formed the EIC. Both organisations were granted trade monopolies, and combined private investment and the powers of the state in a similar manner.

In the early 17th century there was fierce rivalry between the Dutch and English over the spice trade in Indonesia. That ended with the English quietly withdrawing from most of their interests in Indonesia to focus elsewhere, including India.

The transformation of British activities in India

In the 17th century, the EIC established trading posts in Surat, Madras, Bombay and Calcutta with permission from local authorities. The French India Company offered increasing competition during the latter half of the 17th century and into the 18th century.

The initial objectives of both the EIC and the French were commercial, but their conflicts in Europe spilled over into India. The British sought to fortify Fort William in Calcutta against potential attack from the French. In 1756, the French encouraged the nawab of Bengal to attack Fort William. After the fall of Fort William, the surviving British soldiers and Indian sepoys were imprisoned overnight in a dungeon where many died from suffocation and heat exhaustion. The prison became known as the Black Hole of Calcutta. The number of fatalities is disputed, but the incident seems to have provided impetus for the EIC to seek to wield greater political power in India to protect its commercial interests.

My photo of the Black Hole monument in the grounds of St John’s church in Kolkata.

 

EIC forces led by Robert Clive (Clive of India) retook Calcutta in 1757 and went on to defeat the nawab and his French supporters at Plassey. Clive’s victory was aided by a secret agreement with Bengal aristocrats which resulted in a large portion of the nawab's army being led away from the battlefield. The person responsible for this treachery, Mir Jafar, was rewarded by being installed as nawab. Clive rewarded himself and EIC forces from the Bengal Treasury.

A few years later, as governor of Bengal, Clive arranged for the EIC to collect land tax revenues in Bengal by appointing a deputy nawab for this purpose. The conquest of other parts of India was planned and directed from Calcutta. Amartya Sen has noted:

“The profits made by the East India Company from its economic operations in Bengal financed, to a great extent, the wars that the British waged across India in the period of their colonial expansion.”

Consequences and responses

The worst consequences of EIC rule became evident during the Bengal famine of 1770. The company was apparently more concerned to maintain land tax revenue than to relieve the suffering of peasants. Its policies contributed to the massive loss of life during the famine. Adam Smith presumably had that in mind when he suggested in Wealth of Nations:

“No other sovereigns ever were, or, from the nature of things, ever could be so perfectly indifferent about the happiness or misery of their subjects, the improvement or waste of their dominions, the glory or disgrace of their administration; as, from irresistible moral causes, the greater part of the proprietors of such a mercantile company are, and necessarily must be.” (V.i.e 26)

By reducing the agricultural labor available to generate taxable income, the famine caused the EIC to experience a subsequent loss of revenue. The British government provided financial relief to the company but arranged to supervise it. Regulation of the EIC was further increased in 1784, when British prime minister William Pitt the Younger legislated for joint government of British India by the EIC and the government, with the government holding ultimate authority.

The British government seems to have been engaged in an ongoing balancing act to placate both supporters of the EIC, including investors and former employees, and its critics, including prominent individuals like Edmund Burke and Adam Smith.  

Pitt’s India Act stated that to pursue schemes of conquest and extension of dominion in India are “measures repugnant to the wish, the honour and the policy of this nation”. Perhaps that was an honest statement of the British government’s policy objective, but it is doubtful that it had any impact on the extension of British dominion in India.

Fortune seekers

During the 18th century, India was seen as offering opportunities for young British men to obtain a fortune, become well-connected, and to marry well.

Lachlan Macquarie, who (in my opinion) ultimately became one of the best of Australia’s colonial governors, expressed views, while a young army officer serving in India, that may have been fairly typical.


In his biography of Macquarie, M. H. Ellis notes that in 1788 Pitt and his followers had cramped the style of young army officers in India by reducing their allowances. Macquarie recorded in his diary: “ … our golden dreams, and the flattering prospects we had formed to ourselves in Britain, of soon making our fortunes in the East, must now all vanish into smoke; and we must content ourselves, with merely being able to exist without running into debt” (p 18).

Macquarie’s hopes for a change in fortune rested on being called to active service. He had his wish during the third Anglo-Mysore war. The war ended after the 1792 Siege of Seringapatam led to the signing of a Treaty in which Tipu Sultan surrendered half of his kingdom to the EIC and its allies. Macquarie noted that news of the cessation of hostilities “damped the spirits of every one who wished the downfall of the Tyrant and hoped to have the satisfaction in a few days more, of storming his capital”. The storming of Tipu’s capital would presumably have offered the prospect of looting, but Governor-General Cornwallis managed to maintain the morale of his troops by announcing payment of a “handsome gratuity in lieu of prize money”.   (Ellis, p 39)

India’s civil wars

Disunity within India was another important element of the context in which the EIC ended up ruling India. British colonial expansion occurred at a time when the power of the Mughal empire was declining, with much of its territory falling under the control of the Marathas. In the south of India, the rulers of Mysore and Travancore were also powerful. The EIC sided with different rulers in different locations at different times. For example, at the time of the Third Anglo-Mysore War, referred to above, the Marathas were allies of the EIC. That war occurred because Tipu, an ally of France, had invaded the nearby state of Travancore, which was a British ally.

Why did EIC rule end?

In 1813 the EIC lost its monopoly over British trade with India. The opening of access to competing traders seems to have been partly attributable to growth of the free trade lobby in Britain.  

In 1833, the EIC was reduced to the status of a managing agency for the British government of India. The government took over the company’s debts and obligations, which were to be serviced and paid from tax revenue raised in India.

EIC rule of India finally ended following the Indian Rebellion of 1857, which is now also referred to as the First War of Independence. I took this photo at an Indian airport.

 


Colonial rule was formally transferred to the Crown in the person of Queen Victoria in 1858. The British government took over the Indian possessions, administrative powers and machinery, and the armed forces of the EIC.

In my view, EIC rule ended because the company had a hopeless business model. The company was obviously successful in conducting wars in India, and some of its employees made fortunes as a consequence. But its attempts to service its debts by taxing the people of India were inherently problematic. Such taxes made it inevitable that the company would incur high ongoing costs to put down rebellions. The EIC’s conquest of Bengal raised expectations that colonial rule might be a profitable activity for the company, but only a few years later it had become incapable of surviving without government financial backing.

Was a better option possible?

 John Stuart Mill - in his role as a spin doctor employed by the EIC rather than an eminent philosopher - opened his last ditch defence of the EIC by pointing out that at the same time as the company acquired a “magnificent empire in the East” for Britain “a succession of administrations under the control of Parliament were losing to the Crown of Great Britain another great empire on the opposite side of the Atlantic”. (Mill is quoted more fully by Richard Reeves in John Stuart Mill, Victorian Firebrand, p 258.)

Mill was obviously attempting to present a persuasive case to British politicians at a time when most of them perceived “empire” to be a desirable objective.

These days, people who want to defend the empire-building activities of the EIC in India are more likely to suggest that the institutional legacy of British rule, including a united India (if you overlook the tragedy of partition) would otherwise not have been possible. Amartya Sen has pointed out the weakness of that argument:

“Certainly, when Clive’s East India Company defeated the nawab of Bengal in 1757, there was no single power ruling over all of India. Yet it is a great leap from the proximate story of Britain imposing a single united regime on India (as did actually occur) to the huge claim that only the British could have created a united India out of a set of disparate states.

That way of looking at Indian history would go firmly against the reality of the large domestic empires that had characterised India throughout the millennia. …”

Summing up

The East India Company came to rule India as an unintended consequence of British government intervention seeking trading advantages over other European powers. This intervention occurred against the background of previous involvement in Indian trade by Portuguese and Dutch governments, and in the context of intense rivalry with the French government’s trading company.

The East India Company’s schemes of conquest and dominion were made possible by disunity within India, which provided it with opportunistic allies. However, the company’s business model of taxing subjugated Indians was not capable of generating sufficient revenue to service debts incurred in subjugating them and maintaining order. Rather than let the company fail, the British government became increasingly involved in directing its activities, and ultimately displaced it.  

Friday, June 3, 2022

What makes Meghalaya an interesting place to visit?

 


It is worth visiting Meghalaya just to see waterfalls such as Nohkhalikai Falls, shown above. Located near Cherrapunji, this is the tallest waterfall in India. Visitors are likely to be told the sad story of Ka Likai, after whom the falls were named. However, I will not spoil the experience for you by attempting to summarize the story here.

There was a cultural element to much of my sight-seeing in Meghalaya. That was certainly true of my visit to the double-decker living root bridge at Nongriat, which I described in the preceding article on this blog as one of the highlights of my trip to India.

In this article I will further discuss my experience of sightseeing in Meghalaya, endeavoring to highlight cultural aspects. My focus is the east of Meghalaya, the part of the state I visited.

Area visited

This map might help those uncertain of the location of Meghalaya. The Indian state of Meghalaya is in India’s north-east, next to the Indian state of Assam, north of Bangladesh, and south of Bhutan.



Upon arrival at the airport in Guwahati (Assam) I was driven to Shillong, where I stayed for 2 nights. After a day of sightseeing to the east of Shillong, I visited a sacred forest on the way to Cherrapunji. I stayed in Cherrapunji for 3 nights, and saw many different things in that general area.

In what follows I will present a few photos to give some broad impressions before making some observations about culture and history of the Khasi people. 

Impressions

Shillong is a busy place. This photo is of tourists and locals at the main shopping centre, called Police Bazar.


This photo shows a scene that is fairly typical of the people and countryside as seen from roads east of Shillong.


Hilltop cultivation seems fairly common in the east of Meghalaya.


Krang Suri Falls are located in the Jaintia Hills east of Shillong. This waterfall may not yet be on the main tourist circuit, but there were quite a few Indian tourists there when I visited.

We stopped off at the Mawphlang Sacred Forest on the way from Shillong to Cherrapunjee. The photo is of an old Australian being shown around the forest by a local guide.  

This is the place in the sacred forest where bulls were once sacrificed to appease the gods. Although bulls are no longer sacrificed, the forest is still treated with great reverence. Nothing is allowed to be removed from it.


The Church of the Epiphany at Mawlynnong was founded in 1902. This village has had a strong tradition of Christianity since Welsh missionaries came here in the 19th century. Mawlynnong has been declared the cleanest village in Asia. Locals link their cleanliness to Christianity, apparently taking to heart the idea that cleanliness is next to godliness.

This photo of people engaged in a dart throwing competition was taken along the road to Dawki (on the Bangladesh border). It reminded me of something similar that I saw a decade ago when I visited Bhutan.


Culture and history

The majority of people in the east of Meghalaya are Khasi. They speak a Mon-Khmer language (the indigenous language family of mainland Southeast Asia), and their ancestors are thought to have migrated from that part of the world.

The inclusion of Meghalaya and other north-eastern states in India may have more to do with the legacy of British colonialism than with historical links to India. From a Khasi perspective, the central government of India replaced the colonial government of the British. The Khasi enjoy a measure of local political autonomy via councils which they elect.

English is an official language of Meghalaya and is widely spoken there. Local guides and hotel staff were all proficient English speakers.

Mr Dipankar, the guide who accompanied me in Meghalaya, spoke excellent English. The only communication problem I became aware of arose when he was not present. I had been invited to have a meal with Hermina Lakiang, a historian associated with the North-Eastern Hill University in Shillong, and had arranged for my driver, Mr Simitar, to take me to her home. I knew that the driver had poor English, but I was slow to understand why he was having difficulty following the verbal directions that the professor was giving him about the location of her home. I later learned that they didn’t have a language in common. The driver was from Guwahati and had no knowledge of Khasi, while the professor had not advanced her knowledge of Hindi beyond the rudimentary level she had attained at school. There had been no reason for her to become a proficient Hindi speaker.


I am most grateful to have had the opportunity to have Hermina Lakiang explain some aspects of the culture and history of the Khasi to me. My understanding was greatly improved as a result of our discussion. However, the views presented below are my own, and my understanding of Khasi culture and history started from a very low base.



Khasi follow a matrilineal system of inheritance, with the youngest daughter eligible to inherit the ancestral property. The youngest daughter is apparently expected to learn from mistakes made by her elder siblings.

The majority of Khasi are now Christians, but their ancestors believed in a Supreme Being as well as other deities associated with water, mountains, and other natural objects.

Christian missionaries were much less successful in other parts of India, where most people adhere to Hinduism or Islam, and in neighboring countries where Buddhism prevails. So, how did the Khasi manage to avoid being conquered and converted to Hinduism, Islam, or Buddhism before British colonial rule exposed them to Christianity?

The most obvious answer is that Khasi are located in hilly regions that were relatively easy to defend and not particularly attractive to potential invaders seeking land that was easy to cultivate.


However, as Sanjib Baruah points out in his book In the Name of the Nation (2020, 29), the Khasi only became confined to the hills after confrontation with the British East India Company in 1789.

Edward Gait, a British colonial administrator, included a chapter on the “Jaintia Kings” in his book A History of Assam, which was first published in 1906. (During the colonial era, the whole of the north-east region of India was referred to as Assam.)


Gait’s account suggests that the Jaintia kings ruled the Sylhet region (now in Bangladesh) from around 1500. These kings had Hindu names, but Gait argues that the religion and culture of the people was never much influenced by Hinduism. He cites some evidence that the matrilineal system of inheritance was still followed by the Jaintia royal family.




Concluding comments

In light of the earlier observation that the inclusion of Meghalaya in India was a legacy of British colonialism, it is worth mentioning that some colonial administrators had expressed fears about what might happen to the culture of the Khasi following the transfer of power to Indian hands. Sanjib Baruah cites Robert Reid, a former governor of Assam, among those who argued in the 1940s for continued British control of the “Hill Areas” on paternalistic grounds (29-30).

The experience of the last 70 years has demonstrated that those British colonial fears about what might happen under Indian control were unwarranted. As Baruah notes, the colonial safeguards protecting the people of those areas were largely retained and placed under the supervision of elected bodies following decolonization.

The impression I gained from my short visit is that Khasi people are proud of their cultural heritage, and that many are eager to defend it.