Converging paths

Historians sometimes measure the strength and influence of a culture by the rather prosaic method of studying the roads which that culture builds.  Portions of the highways of the Roman Empire remain in existence to the present day, with a number of more modern roads following their routes.  Construction of the American interstate highway system was a dramatic demonstration of the nation’s industrial power and ambitions in the aftermath of the Second World War.  China’s Grand Canal, though a water and not a land road, has united the eastern parts of the country for fourteen centuries and still sees heavy traffic.  As engineering achievements, these systems command respect and even admiration.  As genuine benefits to humanity, their usefulness is questionable.

A well-maintained and comprehensive highway network increases the interdependency of distant areas.  If a resource rare in one district is plentiful in another, and can be easily transported to the locality where it is in demand, its availability encourages residents of the importing area to rely on that external source of supply instead of making the best use possible of their own resources.  Furthermore, frequent imports and exports encourage the commodification of resources: the process by which those resources come to be seen primarily as instruments in a business transaction instead of as real objects necessary for human survival.  As regions begin to rely more and more on what they can obtain from outside sources, their several local economies merge into a single large one, and in the process they lose their self-sufficiency, placing themselves in danger of economic or literal starvation should anything ever happen to the transport network on which they depend.  The resilience of the small, self-contained unit dissolves.

The existence of a highway system, or even a rail system, also stifles the development of new technology for crossing distances.  Since the roads are already there, engineers and inventors focus their efforts on making incremental improvements to the vehicles in use on those roads.  Such improvements are both simpler and more marketable than alternative lines of inquiry.  There is no incentive for them to strike off in new directions or find solutions for personal transport that do not involve roads, as they would be forced to do if they found themselves barred from traveling by natural obstacles and were unable to construct a highway network to solve the problem.

On a personal level, an extensive road network leads to greater social fusion within a large area. Roads make it easier for residents of an area to travel; when they travel, they grow more accustomed to their neighbors. The exchange of ideas that takes place, assuming that it does not cause conflict or violence, eventually increases the intellectual and ideological similarities between neighboring regions. The result is greater integration of their populations and a decrease in individuality within the society as a whole, in three ways. First, minority ideas become increasingly outnumbered, and the intellectual shift causes formerly unexceptional ideas to be excluded from mainstream discourse. Second, the combined efforts of the larger population are more effective at actively suppressing dissent. Third, the greater social problems of the combined society encourage the development of a strong and invasive state to resolve them, which further reduces the opportunities for individual expression.

Roads may be a sign of a healthy civilization, or they may be a sign that a civilization has grown too big to be healthy.

The basis of calculation

In 2015, Boeing paid federal income taxes of $1.98 billion on profits of $7.15 billion, giving it an effective tax rate of 27.7%. That same year, ExxonMobil paid $5.4 billion on $21.97 billion of profits and Apple paid $19.1 billion on $72.5 billion, for tax rates of 24.6% and 26.3%, respectively. While these rates are far higher than many anti-corporate activists claim, they are also well below the official American corporate tax rate of 35%. For purposes of comparison, an individual paying tax at similar rates would be making between thirty and ninety thousand dollars in income each year.

The real discrepancy between corporate and personal taxation in the United States arises from one of the less obvious privileges given to corporations by the state: an accounting system parallel to the one it imposes on humans. The individual pays tax on his entire income. The corporation, however, pays tax only on its profits. The individual cannot deduct the cost of his food, housing, or transportation from his income before he pays his taxes; the corporation can. If Boeing, Exxon, and Apple had derived their effective tax rates from their annual revenues rather than their profits alone – revenues being the corporate counterpart of an individual’s gross income – they would have paid roughly 2.1%, 2.1%, and 8.2%, respectively; that is, if corporate taxation were administered on the same basis as personal taxation. By contrast, if they paid the same rate on their entire revenues that they actually did on their profits, they would have seen tax bills of $26.6 billion, $63.8 billion, and $61.5 billion. In two cases out of three, they would have gone bankrupt, because their profits could not have covered the tax demands they would have incurred by paying the same tax rate on the same basis as an ordinary worker. According to Forbes, the ten most profitable companies in the United States in 2015 paid cumulative taxes of $60 billion on revenues of $665 billion – less than ten percent, or comparable to what an individual making less than ten thousand dollars a year would pay. If the one hundred most profitable American companies that year had been required to pay the full statutory rate on their total incomes, as individuals do, not one of them would have earned sufficient profits to end the year in the black. Not one.
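The comparison can be sketched numerically. The taxes and profits below are the figures given above; the revenue figures (roughly $96 billion for Boeing, $260 billion for ExxonMobil, and $234 billion for Apple in 2015) are outside assumptions, so the derived percentages are approximate:

```python
# Effective tax rates on profit vs. revenue, and the hypothetical bill if the
# profit-based rate were applied to total revenue instead. Taxes and profits
# are from the text; the revenue figures are approximate outside assumptions.
companies = {
    # name: (taxes paid, profit, revenue), in billions of 2015 dollars
    "Boeing":     (1.98,  7.15,  96.1),
    "ExxonMobil": (5.40, 21.97, 259.5),
    "Apple":      (19.1,  72.5, 233.7),
}

for name, (tax, profit, revenue) in companies.items():
    rate_on_profit = tax / profit       # the headline "effective tax rate"
    rate_on_revenue = tax / revenue     # the analogue of personal taxation
    equalized_bill = rate_on_profit * revenue
    solvent = equalized_bill <= profit  # could profits cover that bill?
    print(f"{name}: {rate_on_profit:.1%} of profit, "
          f"{rate_on_revenue:.1%} of revenue; "
          f"equalized bill ${equalized_bill:.1f}B "
          f"({'solvent' if solvent else 'insolvent'})")
```

Run on these inputs, the sketch reproduces the pattern described above: two of the three companies could not have paid their profit-based rate on their full revenues and remained solvent.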

Corporations are not more efficient than an individual, nor are they shining monuments to the forces of competition within a free market. They are, in fact, less efficient than the individual, because without the benefit of a privilege bestowed by the state, most of them would be forced out of business within a year or two. Somewhat ironically, they could not afford to compete with the individual if taxes were equalized, and thus would not exist at all in a free market. They almost represent a modern form of mercantilist policy, in which the state designs economic legislation to benefit itself and further its objectives – or in which the organizers of the state design the legislation to serve their personal ends. In the United States, the privileges of corporations have been maintained as law by thousands of elected representatives, who have been well rewarded for their silence by those same corporations. But as time-honored as that tradition may be, it must be recognized for what it is: a form of state intervention in the market. The existence of corporations makes the market less free and less efficient, not more so.

A climate of denial

The earth is getting warmer. That much is beyond doubt. Humans are probably responsible for most of the warming. That is as far as the scientific consensus extends.  The process will have profound effects on human life.  By the end of the century, as much as a tenth of the global population may have to relocate due to rising sea levels, while agricultural productivity in the United States will drop by at least twenty percent.  Entire nations will disappear as their territories are submerged, causing political as well as social and economic crises.

Though this may come as a surprise to the average reader, both major American political parties share a common stance on the subject of global warming.  What’s more, not one but both of them are effectively climate denialists.

Republican politicians are the more straightforward of the two.  They simply state that the climate is not changing, that average temperatures have not increased, and that the data is insufficient for a conclusion or just plain wrong. Some of them throw snowballs around to prove their point.  Democratic politicians, on the other hand, admit that global warming is a reality.  However, they claim, a few new environmental regulations will be enough to halt the process by forcing the economy to be more carbon-neutral.  They concede the phenomenon but not its long-term effects, except as a scare tactic to force immediate action.

Both Democrats and Republicans are unable to conceive that global warming will realistically affect their day-to-day world in any way.  Manhattan underwater?  Half of Florida disappearing?  It’s not possible.  Too many human activities depend on the cities and industries already established within reach of inundation.  Rebuilding those cities elsewhere and relocating their populations would be so disruptive to the American way of life, practically and ideologically, that politicians of all stripes have convinced themselves that such an emergency is impossible.  If it is inconvenient, it cannot be true, or at least not urgent.

The issue of acceptance or denial of the reality of global warming is, in essence, one of accepting or denying responsibility for that warming.  At every stage of human development, from the first drive of the first Hyksos chariot to the development of the integrated circuit, humanity has been faced with a choice.  Men were never ignorant of the effects of their actions on the environment; it has always been a simple matter for a logician to extrapolate the effects of a new industry or new invention on the world as a whole.  Nearly a century and a half ago, before global warming was even thought of, Jules Verne wrote quite candidly in The Underground City that if the earth were composed solely of coal, mankind would end its existence by consuming the entire planet on which it lived.  Men have always had a choice, and both the knowledge and the right to make that choice.  During the past two centuries they have chosen factories and railroads, highways and highrises, automobiles, beef, tupperware, and cell phones over sustainability.  Now they will also be compelled to live with the consequences of that choice, which will include the derangement of the complex societies they have constructed.  Attempting to evade those consequences by a last-minute effort is as much a denial of responsibility as ignoring their existence in the first place.

Identity compulsion

Just as humanity is not determined by physiology, so an individual’s identity is also independent of his physiology.  Identity is a choice arising from the basic factors of individuality and will that contribute to a person’s existence as a human being.  No man can be compelled to have an identity which he has not chosen or at least acquiesced in, nor can he be expected to act in a given way because of his presumed identity.

If physiology is not responsible for identity, then there is also no link between genetics and identity.  A human may carry genes that are predominantly African, or Asian, or European, but those genes do not carry information that will determine his identity.  Furthermore, it would be inaccurate to assume that his identity and interests are aligned with those of other individuals who share a similar genetic background.  If brought up among those whose genes are similar to his own, he is likely to share personality traits and intellectual outlooks with them, but that is an effect of socialization, not genetics.  No physical urge drives him to share their habits and goals.

Behavior and geographical origins are equally poor determinants of identity.  To a certain extent, an individual’s behavior is influenced by his physiology; however, his behavioral inclinations do not compel him to incorporate them into his identity.  He may suppress, embrace, or rationalize them in any way he chooses.  In the end, it is his will, rather than his body chemistry, that makes the decision.  Likewise, though geography may impact a man’s physical makeup through the foods that it provides for his consumption and the difficulty or ease of survival in a given area, its influence on his conception of himself is minor compared to that of his own thoughts and decisions.

This is not, on the whole, the way that most humans have come to understand identity.  They are unaware of their own role in shaping themselves and tend to attribute large portions of their own identity – as well as the identities of others – to external factors.  A man born with a large proportion of genes from a minority ethnic group is considered, and usually considers himself, to be a member of that minority.  A male who prefers sex with other males is considered to have a homosexual identity; a man born on the west side of the Franco-German border is called a Frenchman even while an infant born a hundred yards to the east at the same moment is called a German.  Furthermore, these are not classifications alone; they have come to determine what society expects from someone who, they assume, shares an identity with an existing group.  A black man born in the United States is expected to accept the genetic fiat and ally himself socially and politically with those who share similar ancestry or be called a race traitor.  A gay man is expected to make his sexual preferences the core of his identity and allow the activism of those with the same preference to guide his actions if he doesn’t want to be labeled a self-loathing homophobe.  And a Frenchman is expected to further the interests of France rather than Germany or be called, simply, a traitor.

Society cannot determine identity.  Only the individual can do that.  By permitting a society to define him and to expect certain actions from him on that basis, he not only surrenders part of his humanity, but also contributes to the continued misunderstanding of identity as a compulsion rather than a choice, as something that has its origins in the group rather than the individual.

The Ecotopian shift

Thirty years after the novel Ecotopia debuted in 1975, author Pat Joseph referred to the society it depicted as “authoritarian”, a term which has recurred often in subsequent popular criticisms of the book.  Sometimes a synonym is used instead, such as “totalitarian” or “the heavy hand of the state”, but the intent is the same: to convey that the Ecotopian government is oppressive, narrow, and no improvement at all over the United States from which it seceded.

But what exactly is Ecotopian totalitarianism? For starters, all businesses are forbidden to employ more than three hundred workers. Businesses are taxed heavily on their profits. All non-biodegradable plastics are banned. Anyone wanting to build a house out of wood is required to contribute a certain amount of time working for a forestry cooperative. Strict border controls are maintained with the hostile nation of the United States. The government possesses a small but effective intelligence network.  Pollution is a criminal offense.  Heavy tariffs keep foreign trade to a minimum.  The proceeds from business taxation are used to provide a guaranteed basic income for every citizen.

Ecotopian totalitarianism is also notable for what it doesn’t involve. The standing army is minuscule and national defense is placed in the hands of citizen-soldiers. Every Ecotopian is trained in the use of arms – not theoretically, but trained to use them on living creatures through hunting and ritual war games. The police, on the other hand, are unarmed. Schools are run by students and teachers with no input from the state. Drugs are legal. Minority communities are given political autonomy if they want it. The population is steadily decreasing, which means less violence and fewer roles for the state to play. There is no state religion. The original American Bill of Rights is part of the constitution, with additions that expand its protections. The legislative process is open to ordinary citizens, who can call in and take part in the debates whenever they choose. No action which does not harm someone else is illegal. Most government takes place at the local rather than the national level.

On the whole, this does not seem like a recipe for classic totalitarianism. The elements most essential to tyranny – a disregard for human rights and a strong army maintained to enforce this disregard – are completely absent in Ecotopia. Free choice in almost all aspects of life, except where it might lead to damage to the environment or to one man accumulating power over his fellows, is guaranteed.

And this means that criticism of Ecotopian society as totalitarian is more informative about how Americans define totalitarianism than it is about Ecotopia. To the American, it doesn’t matter how much freedom you have as a person, or how little danger you are in from the state. The only thing that counts is how much money you can make, and how much status and power you can acquire as a result of your financial success. Ecotopia is really a perfect test case to analyze this sort of thinking. Take away all the dangers of a central government run wild, and replace them with a few environmental regulations and limitations on wealth, and suddenly Ecotopia is totalitarian. Cross the border back into the contemporary United States, with widespread violence, with a massive state army and bureaucracy, with rules governing every aspect of life except for the unrestrained pursuit of wealth, and just as abruptly you are back in the land of freedom. How else can this be explained, if Americans are not using a very different dictionary than the rest of the world?

War and murder

“War and murder,” the author George Griffith once wrote, “are synonymous terms, differing only as wholesale and retail. It is time the world had done with these miserable sophistries.”  The logic here is unassailable.  If murder is defined as the intentional killing of a human being, and war as an act to compel another person to do your will, then any form of warfare that involves killing as part of its compulsion – which, in practice, encompasses most forms of warfare – is murder.  The difference is one of euphemism only.  Like the custom of hlonipa practiced by the Zulus, where the name of a god or spirit or king is declared taboo and replaced with another word in daily conversation, the habit of referring to mass murders sanctioned by states as “war” is a distinction of labeling, not one of reality.  War is usually murder.  The soldier is usually a murderer.  And the official or ruler who starts or supports a war is guilty, often under his own laws were they to be impartially applied, of conspiracy to commit murder and procuring murder.  Culpability does not stop on the battlefield.

Since the days of Saint Augustine, casuists have spent a great deal of time attempting to evade this logic by developing the concept of “just war”, which posits that, under certain circumstances, war is morally permissible or even morally required.  Consequently, acts of murder committed during such a war are not really murder because they were justified, and thus result in no moral guilt.  This evades the entire issue at hand by shifting the premise of the discussion from logical equivalency to ethics.  Murder may or may not be considered justified in ethical terms, depending on the situation.  That does not change the fact that a murder remains a murder because it fits the definition of a murder, in accordance with the law of identity.  But the just war advocates reject this conclusion because the literal definition, in its stark boldness, carries a great deal of ethical power.  They cannot admit that they are committing justified or legitimate murders; they must call their murders “war” instead.  The term “murder” is laden with too many moral overtones for them to be honest about it.

Historian Chris Calton recently wrote that war is the ultimate form of socialism, since its nature requires that it be conducted by the state and it cannot be privatized.  While the first part of that statement is correct, the latter portion overlooks the example set by the early defensive organization of the United States, which relied on subcontracting warfare out to the militia.  Individuals supplied their own weapons and equipment and elected their own officers.  Units served locally and could only be deployed outside their home states with the consent of their members, and then only for limited periods.  While local government administered the units, their operation was almost entirely reliant on private volunteers, greatly limiting the role of the state in how wars were prosecuted and military forces maintained.

It is this aspect of the militia system that serves to address, at least partially, the ethical aspect of war as murder.  If the responsibility for conducting war is devolved all the way down to the individual level, then it is the individual soldiers, rather than rulers remote from the theatre of operations, who will decide on a personal level whether or not to have murder on their consciences.  The Nuremberg defense – “I was just following orders” – is eliminated as a form of justification or absolution.  No order can override the ethical choice of the individual.  War remains logically murder, but at least those who commit it under these conditions will do so honestly and with full knowledge of their actions, without having to blame their leaders or rely upon the compulsion of orders to provide them with a moral escape.  If it is bad enough to have men making war upon one another, it is infinitely worse to have men making war upon others because their leaders are making war upon them.

American soldiers are not heroes

American soldiers are not heroes. That simple statement is almost always sufficient to provoke a violent reaction among Americans, regardless of their political convictions. But it is very difficult for any soldier, anywhere, to become a hero.

First, heroism requires an extraordinary action. It cannot be a routine performance of obligations.  Merely serving in the armed forces, showing up to work at a specified time and performing one’s duties acceptably, does not entitle a soldier to be called a hero.  Granted, soldiers in combat do their jobs under stressful and dangerous conditions; so do miners, who are much more rarely praised for their nerve and bravery in spite of their work being much more important to society.  Heroism is an individual, spontaneous thing, and due to a lack of courage, inspiration, or opportunity, the vast majority of soldiers will never take a step out of their way to perform such an action.

Second, heroism must be to the benefit of others, if possible to the benefit of humanity as a whole. It is difficult to find a case in which the act of murder – for war is simply murder on a larger scale – can confer a net benefit on humanity or even on a particular group within humanity. What balances killing?  How is an Iraqi, for example, benefited because American soldiers killed his president and substituted a new one?  How is an American benefited because an Iraqi he never heard of, who never affected his life in any way, was killed by those same soldiers?  Death is a subtraction from, not an addition to, the good of humanity.  In the words of Andrew Carnegie, “The hero who kills men is the hero of barbarism; the hero of civilisation saves the lives of his fellows.”

Third, heroism must be selfless.  Pursuit of an aggressive policy for one’s own self-interest is not an act of heroism, yet every war in which the United States has been engaged since the American Civil War has been fought for the purpose of expanding American authority around the world.  The soldiers who enforce this policy with their boots do not do so out of a sense of altruism.  They do so because they are paid for it, and because they expect their country and themselves to gain by the transaction.  There is no genuine self-sacrifice involved.

Fourth, a soldier who serves in the army of a nation-state cannot claim that he exemplifies heroism by defending his country, because the army is the means by which the state keeps the people in subjection.  Merely by putting on its uniform and becoming a visible symbol of its power, he helps the state maintain its monopoly on violence – a monopoly that will be employed against him and those he loves without the slightest hesitation if the need arises.  As an agent of the state and its restrictive laws, he is, regardless of his actions, an enemy of humanity rather than its benefactor.

The invention of the “American hero” is a fairly recent historical development. During the Civil War and the Spanish-American War, the men who were applauded were the members of the militia; during the World Wars, that applause was accorded to the draftees and those who enlisted because they were swept up by the spirit of the hour – citizen-soldiers, in other words. Between each of these wars, the members of the regular army received no recognition or adulation. In fact, public opinion was anything but kind to them. At that time, the army was viewed with widespread popular disdain, as a refuge for ruffians and ne’er-do-wells who were too stupid, violent, or indigent to succeed in any other profession.

This perception changed radically with the mass inductions of World War II and was sustained after the war when the United States began to maintain a large peacetime army for the first time in its history. American soldiers came to be portrayed as the active, youthful, principled defenders of the nation who manned the front lines in the battle against communism. Additionally, the conviction that war had become so complex and important that it had to be carried on by skilled professionals supplanted the American tradition of citizen-soldiers and militia.  The resulting composite image of the talented, heroic soldier was strengthened by the association of the armed forces with spaceflight and technology, which gave the impression that soldiers were no longer simple grunts, but rather the best-trained and most capable members of the rising generation.

The army’s new portrait was famously destroyed during the Vietnam War, when prolonged infantry fighting demonstrated that the essential nature of the soldier had not changed. From being objects of respect, soldiers rapidly became vilified as the perpetrators of atrocities. Veterans’ groups and family associations responded immediately, forming organizations such as the National League of Families of Prisoners of War to promote the soldier as a victim of political abuse whose good intentions and role in protecting civilians made him worthy of praise. Their success laid the foundations for the twenty-first century myth of the hero-soldier, applied to an even greater extent after the failed invasions of Iraq and Afghanistan, where the opportunities for blaming someone other than soldiers for the mistakes of the war were commensurately greater.

American soldiers are not heroes.  They fight for hire in the service of the state, for its profit and their own.  They are not unusual or extraordinary, and there is no reason to surround them with a cult of appreciation.  No man whose objective in life is to force his will on his fellow men is worth holding up as an example of heroism.

Words and the modern candidate

The first presidential debate of the 2016 US election took place last night, and the transcripts are worth looking at for what they reveal about how language is used in American politics.

In an era of government expansion and distrust, the word “reform” was used just twice, both times by Clinton in reference to local police reform only. “Rights” were invoked four times, twice to describe the public’s entitlement to see Trump’s tax returns, and once by Trump himself when he opposed gun rights for anyone who had been arbitrarily placed on a government watch list. The fourth instance was the only general use of the word, when Clinton referred to the rights of young men in minority neighborhoods. The phrase “human rights” was not spoken once during the debate, nor was the word “freedom”. Even “free” was used a mere three times, twice to refer to college and once when Clinton was encouraging Trump to release his taxes. And as for “free market”, “liberty”, “individual”, “decriminalize”, “legalize”, “independent”, “reason”, or “logic”, not a single one of those terms appeared in thirty-six pages of text.

By contrast, the way in which both candidates favored antonyms for the above words was so marked as to seem almost jingoistic. To set a pessimistic tone, some variant of the word “lose” was employed thirteen times. “Disaster” showed up in six places, “mess” in seven, and “attack” in eleven. Counterpoint to these was provided by the use of “security” on four occasions and “military” on five. “Law” was called upon seventeen times, during eight of which it was used as part of the phrase “law and order”. “War” or “warfare” was mentioned even more often: nineteen times. “Company” or “corporation” and their variants were repeated thirty-three times. “Community” saw twenty-six uses, and while the singular “person” was only used six times, the plural “people” showed up in sixty-six instances. “America” was appealed to forty-eight different times, and “country” sixty-four.
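Tallies of this kind are straightforward to reproduce from a plain-text transcript. A minimal sketch, counting exact word matches only (the sample text and term list here are illustrative stand-ins, not the actual debate transcript, and unlike the figures above it does not catch variants such as “losing” for “lose”):

```python
# Count occurrences of selected terms in a transcript.
# The sample string and the term list are illustrative placeholders.
import re
from collections import Counter

def count_terms(text, terms):
    # Lowercase and split on anything that is not a letter or apostrophe.
    words = re.findall(r"[a-z']+", text.lower())
    totals = Counter(words)
    # Report only the requested terms, keeping zero counts visible.
    return {term: totals[term] for term in terms}

sample = "Law and order. The law is the law. We need law and order."
print(count_terms(sample, ["law", "order", "freedom"]))
# {'law': 4, 'order': 2, 'freedom': 0}
```

Keeping the zero counts in the output matters for an analysis like the one above, where the absent words are as telling as the frequent ones.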

It is not necessary to read the way in which these words were strung together in order to understand their meaning. Freedom, human rights, and individuals are clearly of little importance to either candidate, while the power to suppress discord in the state and the welfare of the group are paramount for both of them, as evidenced by their hundreds of uses of plurals. But the welfare of the group only as they understand it; democracy, representation, due process, habeas corpus, and republicanism were other terms notably absent from the debate. Apparently the language of political discourse in the United States no longer includes the concepts of inherent rights or limited government. And how can reform, let alone revolution, emerge among a population that has forgotten the very words in which reform and revolution must be expressed?

Not quite unconstitutional

On September 2, the Huffington Post published an article on the National Defense Authorization Act which stated that Section 1021 of the NDAA “is a direct violation of the U.S. Constitution and our Bill of Rights”, on the grounds that it improperly abrogates the right of habeas corpus. This is inaccurate.

The principle of habeas corpus in American law is defined by Article One, Section Nine of the Constitution, which sets out limits on the powers of Congress. The clause in question reads, “The Privilege of the Writ of Habeas Corpus shall not be suspended, unless when in Cases of Rebellion or Invasion the public Safety may require it.” In other words, habeas corpus may be legally set aside at any time that the government can construct a superficially plausible case for doing so.

The writ of habeas corpus was first suspended by President Lincoln in April of 1861, an action that was overturned by the US Circuit Court in Ex Parte Merryman, which ruled that only Congress could enact such a suspension. Lincoln ignored the ruling and repeated his previous actions in September of the following year, and in the spring of 1863, Congress passed the Habeas Corpus Suspension Act, authorizing Lincoln to suspend habeas corpus “whenever in his judgment the public safety may require it” for the duration of the war. Although this permission expired with the end of the Civil War in 1865, Congress included provisions for the suspension of habeas corpus in the Ku Klux Klan Act of 1871. Neither piece of legislation has ever been ruled unconstitutional, nor has the Merryman decision been overturned. All remain valid precedents and clearly establish that Congress may legally terminate habeas corpus at will. The NDAA is merely a new example of an old exercise of powers.

The real problem here is not the NDAA, but the wording of the Constitution. Like much of the later Bill of Rights, the protection of habeas corpus is phrased as something that Congress is obliged to do or not do. It is not guaranteed as a basic human right, above the law and therefore immune from tampering. The looseness of the wording, and the reluctance of the Constitution’s authors to firmly protect the rights they had declared “inalienable” a decade earlier, are more to blame for present abuses than are the legislators of today. For that matter, so are the members of the 37th Congress. If they had not shown the way, the 112th Congress would have had no precedent to rely upon in making such a sweeping change to the American legal system.

One final note. While the NDAA passed the House by a vote of 283-136 and the Senate 86-13, far smaller numbers would suffice to pass additional legislation expanding the suspension of habeas corpus. A quorum in the House is a bare majority of its 435 members, or 218; a majority of that quorum is just 110 votes. A quorum in the Senate is 51 of its 100 members, and a majority of that quorum is 26 votes. Add the President to sign the bill into law, and that makes a total of just 137 people. That’s all it would take to set aside one of the fundamental principles of American justice without ever violating the law.
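For readers who want to check the figure, the quorum arithmetic above can be sketched in a few lines; this assumes the standard rule that a quorum is a bare majority of the full chamber, and that passage requires a bare majority of members present.

```python
# Sketch of the quorum arithmetic: the smallest number of people
# who could lawfully enact such a bill, assuming only a bare quorum
# is present in each chamber.

def bare_majority(n):
    """Smallest whole number of votes that is more than half of n."""
    return n // 2 + 1

house_quorum = bare_majority(435)   # 218 members present
senate_quorum = bare_majority(100)  # 51 members present

house_votes = bare_majority(house_quorum)    # 110 votes to pass
senate_votes = bare_majority(senate_quorum)  # 26 votes to pass

total = house_votes + senate_votes + 1  # plus the President's signature
print(house_votes, senate_votes, total)  # 110 26 137
```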

And that’s why people who want civil rights protections should make sure that they close the loopholes in their constitution.

No place for truth

While Germany continues to be criticized by proponents of free speech for its laws criminalizing Holocaust denial, it is far from being a lone offender in this regard. Austria, Belgium, the Czech Republic, France, Greece, Italy, Lithuania, Luxembourg, Poland, Romania, and Russia – in other words, most of Europe – have similar laws on the statute books, to say nothing of countries like Switzerland and the Netherlands, which also prosecute the offense under more general provisions prohibiting hate speech or the defense of genocide. The Council of Europe and the United Nations have upheld these laws over protests that they are both an offense against human rights and a concession to Nazism, thereby privileging stability and order over personal liberty.

However, the true danger inherent in Holocaust denial laws is not their restriction of speech and expression, but rather the relationship that they establish between the state and objective truth. The Holocaust happened, the German government (and that of Austria, Belgium, etc.) declares. It is an historical fact. Therefore, it cannot be questioned, and to attempt to do so is a crime. With the passage of such laws, the state moves from its role as an arbiter of subjective knowledge into that of a selector of objective knowledge. And once the state has chosen its truth, civil rights disappear altogether instead of simply being limited, because, the state can argue, no one has the right to assert, free of punishment, what is obviously and verifiably untrue.

The hazard is compounded by the manner in which the state selects objective truth. It does not do so on its own; instead, it relies on the consensus of scholars and experts. Historians overwhelmingly assert the reality of the Holocaust; therefore, the German government (and Austria, Belgium, etc.) follows their lead. It would be unlikely to fly in the face of public opinion and promote an improbable truth. As the experts say, so the state does.

Suppose the same standard were applied in the United States with regard to another political issue on which there is near-total scholarly consensus: global warming. Although a substantial minority of Americans question whether it exists at all, the scientific community is vocal in asserting that it does. If the state were to adopt the position that scientific truth should be enforced as law, it would negate the value of the considerable debate on the subject. Truth is protected, state authorities would say, and just as the right of free speech is not interpreted to protect libel or slander, or speech that poses an imminent danger to others, so they would argue that it does not protect objection to widely recognized facts.

A similar dissonance is taking place in the United Kingdom at the moment, where academics and writers are decrying the decision of the electorate to leave the European Union. “Too much democracy!” they cry, and go on to say that elites are required in order to save democracy from itself. The popular voice must be moderated by the wisdom of Those Who Know, regardless of what effect that has on human rights.

But if objective truth, so far as man can perceive it, is to become identical with the law, the power of those who determine what truth is would be enhanced almost beyond limitation. In this regard, the role and power of the psychiatric community is even more important than that of the pure scientists. For example, psychiatrists protest that those individuals who have been diagnosed with mental disorders should not be permitted to own weapons; what happens if they declare a liking for weapons to be itself a mental disorder, so that a desire becomes its own condemnation? Or a rejection of the scientific consensus on global warming, since only an insane man would ignore the reality of the world around him. Or opposition to universal healthcare, since the sure sign of an ill man is that he doesn’t believe himself to be ill and declines treatment. These positions, of course, are all progressive ones, chosen because it has been political and social progressives who have been most vocal in their insistence that the law must follow knowledge. However, their fundamental demand is intensely conservative: a legislated standard of belief against which behavior can be measured, so that irrational behavior can be determined and dealt with by extra-legal means.

For thousands of years, cynics have said that truth has no place in politics. It is in the best interest of the individual for him to ensure that things stay that way.