Wholesale Wikipedias – May 2021

I almost forgot about this.

https://en.wikipedia.org/wiki/Oil_futures_drunk-trading_incident

https://en.wikipedia.org/wiki/Lady_tasting_tea

https://en.wikipedia.org/wiki/Ribs_(recordings) (see also, samizdat)

https://en.wikipedia.org/wiki/Long_line_(topology)

https://en.wikipedia.org/wiki/Bald-hairy

https://en.wikipedia.org/wiki/Blind_Willie_Johnson

https://en.wikipedia.org/wiki/Non-human_electoral_candidates

https://en.wikipedia.org/wiki/Osama_Vinladen

https://en.wikipedia.org/wiki/List_of_lists_of_lists

The Holy Algorithm

As it has surely not escaped your notice, this weekend is Easter. Why now? The date of Easter is determined by a complicated process called the Computus Ecclesiasticus. I will just quote the Wikipedia page:

The Easter cycle groups days into lunar months, which are either 29 or 30 days long. There is an exception. The month ending in March normally has thirty days, but if 29 February of a leap year falls within it, it contains 31. As these groups are based on the lunar cycle, over the long term the average month in the lunar calendar is a very good approximation of the synodic month, which is 29.53059 days long. There are 12 synodic months in a lunar year, totaling either 354 or 355 days. The lunar year is about 11 days shorter than the calendar year, which is either 365 or 366 days long. These days by which the solar year exceeds the lunar year are called epacts. It is necessary to add them to the day of the solar year to obtain the correct day in the lunar year. Whenever the epact reaches or exceeds 30, an extra intercalary month (or embolismic month) of 30 days must be inserted into the lunar calendar: then 30 must be subtracted from the epact.

If your thirst for knowledge is not satisfied, here is a 140-page document in Latin with more detail.

As far as I understand, during the Roman era the Pope or one of his bureaucrats would perform the computus, then communicate the date to the rest of Christianity, and everybody could eat their chocolates at the same time. Then the Middle Ages happened and communication became much harder, so instead they came up with a formula so that people could compute the date of Easter locally. Of course, the initial formulas had problems – with the date of Easter dangerously drifting later and later in the year over the centuries, and don’t even get me started on calendar changes. Eventually Carl Friedrich Gauss entered the game and saved humanity once again with a computationally efficient algorithm (I am over-simplifying the story so you have more time to eat chocolate).
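If you want to see what “computationally efficient” means here, below is a minimal Python sketch of the so-called anonymous Gregorian algorithm – a compact descendant of Gauss’s computus rather than his original formulation – which takes a Gregorian year and returns the month and day of Easter Sunday.

```python
def easter_date(year):
    """Return (month, day) of Easter Sunday for a Gregorian calendar year.

    This is the "anonymous Gregorian" computus (Meeus/Jones/Butcher),
    with the traditional one-letter variable names.
    """
    a = year % 19                        # position in the 19-year Metonic cycle
    b, c = divmod(year, 100)             # century, and year within the century
    d, e = divmod(b, 4)
    f = (b + 8) // 25
    g = (b - f + 1) // 3
    h = (19 * a + b - d - g + 15) % 30   # quantity related to the epact
    i, k = divmod(c, 4)
    l = (32 + 2 * e + 2 * i - h - k) % 7
    m = (a + 11 * h + 22 * l) // 451
    month, day = divmod(h + l - 7 * m + 114, 31)
    return month, day + 1

if __name__ == "__main__":
    print(easter_date(2021))  # (4, 4) -> 4 April 2021
```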

But it is now 2021, and I’m wondering how they run the algorithm in practice. I looked up “how is the date of Easter calculated”, but all the results are about the algorithms themselves, not about their practical implementation. I have a few hypotheses:

  1. There are responsible Christians everywhere who own printed tables with the dates of Easter already computed for the next few generations. If your Internet goes down, you can probably access such tables at the local church.
https://upload.wikimedia.org/wikipedia/commons/e/e4/DiagrammePaques_Flammarion.jpg
Here is such a table, from 1907 (Wikimedia Commons).

Of course this does not really solve the problem: who comes up with these tables in the first place? Who will make new ones when they expire?

2. There is a ceremony in the Vatican where a Latin speaker ceremoniously performs the Holy Algorithm by hand, outputs the date of Easter, prints “Amen” for good measure, and then messengers spread the result to all of Christianity.

3. Responsible Christians everywhere own a Computus Clock, a physical device that tells you if it is Easter or not. When in doubt, you just pay a visit to that-guy-with-the-computus-clock. Then, it is like hypothesis 1 except it never expires.

4. There is a software company (let’s call it Vatican Microsystems®) that managed to persuade the Pope to buy a license for their professional software solution, Computus Pro™ Enterprise Edition 2007 – including 24/7 hotline assistance – that only runs on Windows XP, and they have a dedicated computer in the Vatican that is used once in a while to run these 30,000 lines of hard Haskell or something. Then it goes just like hypothesis 2.

(Of course, all of these solutions are vulnerable to hacking. It might be as easy as sneaking into a church and replacing their Easter tables with fakes. A talented hacker might even have Easter coincide with April Fools’ Day.)

If an active member of the Christian community reads this and knows how it is done in practice, I am all ears.

Anyways, happy Easter and Amen, I guess.

The average North-Korean mathematician

Here are the top fifteen countries ranked by how well their teams do at the International Math Olympiads:

When I first saw this ranking, I was surprised to see that North Koreans have such an impressive track record, especially when you factor in their relatively small population. One possible interpretation is that East Asians are just particularly good at mathematics, as the stereotype goes, even when they live in one of the world’s worst dictatorships.

But I don’t believe that. In fact, I believe North Koreans are, on average, particularly bad at math. More than 40% of the population is undernourished. Many of the students involved in the IMOs grew up in the 1990s, during the March of Suffering, when hundreds of thousands of North Koreans died of famine. That is not exactly the best context for learning mathematics, not to mention the direct effect of nutrition on the brain. There do not seem to be many famous North Korean mathematicians either (there is actually a contestant from the North Korean IMO team who managed to escape during the 2016 Olympiad in Hong Kong; he now lives in South Korea, and I wish him a career as a famous mathematician). Thus, realistically, if all 18-year-olds from North Korea were to take a math test, they would probably score much worse than their South Korean neighbors. And yet, Best Korea reaches almost the same score with only half the source population. What is their secret?

This piece on the current state of mathematics in North Korea gives it away. “The entire nation suffered greatly during and after the March of Suffering, when the economy collapsed. Yet, North Korea maintained its educational system, focusing on the gifted and special schools such as the First High Schools to preserve the next generation. The limited resources were concentrated towards gifted students. Students were tested and selected at the end of elementary school.” In that second interpretation, the primary concern of the North Korean government is to produce a few very brilliant students every year, who will bring back medals from the Olympiads and make the country look good. The rest of the population’s skills at mathematics are less of a concern.

When we receive new information, we update our beliefs to keep them compatible with the new observations, doing an informal version of Bayesian updating. Before learning about the North Korean IMO team, my prior beliefs were something like “most of the country is starving and their education is mostly propaganda, there is no way they can be good at math”. After seeing the IMO results, I had to update. In the first interpretation, we update the mean – the average math skill is higher than I previously thought. In the second interpretation, we leave the mean untouched, but we make the upper tail of the distribution heavier. Most North Koreans are not particularly good at math, but a few of them are heavily nurtured for the sole purpose of winning medals at the IMO. As we will see later in this article, this problem has some pretty important consequences for how we understand society, and those who ignore it might make pretty bad policy decisions.

But first, let’s break it apart and see how it really works. There will be a few formulas, but nothing that can hurt you, I promise. Consider a probability distribution where the outcome x happens with probability p(x). For any integer n, the formula below gives what we call the nth moment of a distribution, centered on \mu.

\int_{\mathbb{R}}p(x)(x-\mu)^ndx

To put it simply, moments describe how things are distributed around a center. For example, if a planet is rotating around its center of mass, you can use moments to describe how its mass is distributed around it. But here I will only talk about their use in statistics, where each moment encodes one particular characteristic of a probability distribution. Let’s sketch some plots to see what it is all about.

First moment: replace n with 1 and μ with 0 in the previous formula. We get

\int_{\mathbb{R}}p(x)(x)dx

which is – surprise – the definition of the mean. Changing the first moment just shifts the distribution towards higher or lower values, while keeping the same shape.

Second moment: for n = 2, we get

\int_{\mathbb{R}}p(x)(x-\mu)^2dx

If we set μ equal to the mean, we obtain the definition of the variance! The second moment around the mean describes how values are spread away from the average, while the mean remains constant.

Third moment (n = 3): the third moment describes how skewed (asymmetric) the distribution is, while the mean and the variance remain constant.

Fourth moment (n = 4): this describes how leptokurtic or platykurtic your distribution is, while the mean, variance and skew remain constant. These words basically describe how long the tails of your distribution are, or “how extreme the extreme values are”.

You could go on to higher n, each time bringing in more detail about what the distribution really looks like, until you end up with a perfect description of the distribution. By only mentioning the first few moments, you can describe a population with only a few numbers (rather than infinitely many), but it only gives a “simplified” version of the true distribution, as on the left graph below:
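For the programmers in the room, here is a small sketch (in Python, with an arbitrary simulated sample) of how you would estimate these moments in practice; the helper function is just the sample version of the integral above.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=170, scale=10, size=100_000)  # arbitrary sample, any data would do

def central_moment(x, n, mu=None):
    """Sample version of the integral of p(x)(x - mu)^n dx."""
    mu = x.mean() if mu is None else mu
    return np.mean((x - mu) ** n)

mean = central_moment(x, 1, mu=0.0)          # first moment about 0 = the mean
var = central_moment(x, 2)                   # second moment about the mean = variance
skew = central_moment(x, 3) / var ** 1.5     # standardized third moment (skewness)
kurt = central_moment(x, 4) / var ** 2 - 3   # excess kurtosis (0 for a normal distribution)

print(mean, var, skew, kurt)
```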

Say you want to describe the height of humans. As everybody knows, height follows a normal distribution, so you could just give the mean and standard deviation of human height, and get a fairly accurate description of the distribution. But there is always a wise-ass in the back of the room to point out that the normal distribution is defined over \mathbb{R}, so for a large enough population, some humans will have a negative height. The problem here is that we only gave information about the first two moments and neglected all the higher ones. As it turns out, humans are only viable within a certain range of height, below or above which people don’t survive. This erodes the tails of the distribution, effectively making it more platykurtic (if I can get even one reader to use the word platykurtic in real life, I’ll consider this article a success).

Let’s come back to the remarkable scores of North Koreans at the Math Olympiads. What these scores teach us is not that North Korean high-schoolers are really good at math, but that many of the high-schoolers who are really good at math are North Koreans. On the distribution plots, it would translate to something like this:

With North Koreans in purple and another country that does worse in the IMOs (say, France) in black. So you are looking at the tails and trying to infer something about the rest of the distribution. Recall the plots above. Which one could it be?

Answer: just by looking at the extreme values, you cannot possibly tell, because any of these plots would potentially match. In Bayesian terms, each moment of the distribution has its own prior, and when you encounter new information, you could in principle update any of them to match the new data. So how can we make sure we are not updating the wrong moment? When you have a large representative sample that reflects the entire distribution, this is easy. When you only have information about the “top 10” extreme values, it is impossible. This is unfortunate because the extreme values are precisely what gets all our attention – most of what we see in the media is about the most talented athletes, the most dishonest politicians, the craziest people, the most violent criminals, and so forth. Thus, when we hear new information about extreme cases, it’s important to be careful about which moment to update.
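To make the point concrete, here is a small simulation with entirely made-up test scores: one country where everyone scores a bit higher on average, and one with the same average but a larger spread. Their six best students end up looking almost identical.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 1_000_000  # hypothetical population of test-takers per country

# Two rival explanations for impressive extremes, with invented score scales:
baseline    = rng.normal(100, 15, N)   # reference country
higher_mean = rng.normal(110, 15, N)   # everyone is a bit better on average
fatter_tail = rng.normal(100, 17, N)   # same average, more spread

for name, scores in [("baseline", baseline),
                     ("higher mean", higher_mean),
                     ("higher variance", fatter_tail)]:
    top6 = np.sort(scores)[-6:]        # the six-member "IMO team"
    print(f"{name:16s} mean={scores.mean():6.1f}  top-6={np.round(top6, 1)}")

# The higher-mean and higher-variance countries produce very similar top-6
# scores, even though their population averages differ by 10 points: the
# extremes alone don't tell you which moment to update.
```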

This problem also occurs in reverse – in the same way that looking at the tails doesn’t tell you anything about the average, looking at the average doesn’t tell you anything about the tails. An example: in a typical year, more Americans die from falling than from viral infections. So one could argue that we should dedicate more resources to preventing falls than viral infections. Except the number of deaths from falls is fairly stable (you will never have a pandemic of people slipping in their bathtubs 100 times more than usual). On the other hand, virus transmission is a multiplicative process, so most outbreaks will be mostly harmless (remember how SARS-CoV-1 killed fewer than 1,000 people? those were the days) but a few of them will be really bad. In other words, yearly deaths from falls have a higher mean than deaths from viruses, but since the latter are highly skewed and leptokurtic, they might deserve more attention. (For a detailed analysis of this, just ask Nassim Taleb.)
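Here is the same idea as a toy simulation – the numbers are invented purely for illustration and are not real mortality statistics – comparing a stable, roughly normal cause of death with a skewed, multiplicative one.

```python
import numpy as np

rng = np.random.default_rng(1)
years = 100_000  # simulated years, made-up figures

# Deaths from falls: boring, stable, roughly the same every year.
falls = rng.normal(loc=36_000, scale=1_500, size=years)

# Deaths from outbreaks: multiplicative process, so roughly lognormal --
# usually small, occasionally catastrophic.
outbreaks = rng.lognormal(mean=np.log(5_000), sigma=1.5, size=years)

for name, deaths in [("falls", falls), ("outbreaks", outbreaks)]:
    print(f"{name:10s} mean={deaths.mean():9.0f}  "
          f"p99={np.percentile(deaths, 99):9.0f}  "
          f"worst={deaths.max():9.0f}")

# Falls have the higher mean, but the outbreak distribution is skewed and
# leptokurtic: its 99th percentile and worst case dwarf anything falls can do.
```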

There are a lot of other interesting things to say about the moments of a probability distribution, like their deep connection to the partition function in statistical thermodynamics, or the fact that in my drawings the purple line always crosses the black line exactly n times. But these are for nerds, and it’s time to move on to the secret topic of this article. Let’s talk about SEX AND VIOLENCE.

This will not come as a surprise: most criminals are men. In the USA, men represent 93% of the prison population. Of course, discrimination in the justice system explains some part of the gap, but I doubt it accounts for a difference that large. Accordingly, it is a solid cultural stereotype that men use violence and women use communication. Everybody knows that. Nevertheless, having just read the previous paragraphs, you wonder: “are we really updating the right moment?”

A recent meta-analysis by Thöni et al. sheds some light on the question. Published in the journal Psychological Science, it synthesizes 23 studies (with >8,000 participants) about gender differences in cooperation. In such studies, participants play cooperation games against each other. These games are essentially a multiplayer, continuous version of the Prisoner’s Dilemma – players can choose to be more or less cooperative, with possible strategies ranging from total selfishness to total selflessness.

So, in cooperation games, we expect women to cooperate more often than men, right? After all, women are socialized to be caring, supportive and empathetic, while men are taught to be selfish and dominant, aren’t they? To find out, Thöni et al. aligned all of these studies on a single cooperativeness scale and compared the scores of men and women. Here are the averages, for three different game variants:

This is strange. On average, men and women are just equally cooperative. If society really allows men to behave selfishly, it should be visible somewhere in all these studies. I mean, where are all the criminals/rapists/politicians? It’s undeniable that most of them are men, right?

The problem with the graph above is that it only shows averages, so it misses the most important information – that men’s level of cooperation is much more variable than women’s. So if you zoom on the people who were either very selfish or very cooperative, you find a wild majority of men. If you zoom on people who kind-of cooperated but were also kind-of selfish, you find predominantly women.

As I’m sure you’ve noticed, the title of the Thöni et al. paper says “evolutionary perspective”. As far as I’m concerned, I’m fairly skeptical of evolutionary psychology, since it is one of the fields with the worst track record of reproducibility ever. To be fair, a good part of evpsych is just regular psychology where the researchers added a little bit of speculative evolutionary varnish to make it look more exciting. That aside, real evpsych is apparently not so bad. But that’s not the important part of the paper – what matters is that there is increasingly strong evidence that men are indeed more variable than women in behaviors like cooperation. Whether this is due to hormones, culture, discrimination or cultural evolution is up for debate, and I don’t think the current data is remotely sufficient to answer this question.

(Side note: if you must read one paper on the topic, I recommend this German study where they measure the testosterone level of fans of a football team, then have them play Prisoner’s Dilemma against fans of a rival team. I wouldn’t draw any strong conclusion from this just yet, but it’s a fun read.)

The thing is, men are not only found to be more variable in cooperation, but in tons of other things. These include aggression, exam grades, PISA scores, all kinds of cognitive tests, personality, creativity, vocational interests and even some neuroanatomical features. In the last few years, support for the greater male variability hypothesis has accumulated, so much so that it is no longer possible to claim to understand gender or masculinity without taking it into account.

Alas, that’s not how stereotyping works. Instead, we see news reports showing all these male criminals, assume that our society turns men into violent and selfish creatures, and call them toxic (here is Dworkin: “Men are distinguished from women by their commitment to do violence rather than to be victimized by it. Men are rewarded for learning the practice of violence in virtually any sphere of activity by money, admiration, recognition, respect, and the genuflection of others honoring their sacred and proven masculinity.” Remember – in the above study, the majority of “unconditional cooperators” were men.) Internet people make up a hashtag to ridicule those who complain about the generalization. We see all these male IMO medalists, and – depending on your favorite political tradition – either assume that men have an unfair advantage in maths, or that they are inherently better at it. The former worldview serves as a basis for public policy. The question of which moment to update rarely even comes up.

This makes me wonder whether this process of looking at the extremes then updating our beliefs about the mean is just the normal way we learn. If that is the case, how many other things are we missing?

Argumentative prison cells

Two persons are trapped in a prison cell. The warden gives them a controversial question they disagree about, and promises to set them free if they manage to reach an honest agreement on the answer. They can discuss and debate for as long as they need, and all the relevant empirical data are available. Importantly, they are not allowed to just pretend to agree: they must genuinely find common ground with each other for the door of the prison cell to open. Needless to say, both participants want to escape the room as soon as possible, so they will do their best to reach an honest agreement (I know some of you would love to stay forever in a room with unlimited time and data – just pretend you want to leave the room for the sake of the thought experiment).

In most cases, a handful of good arguments from each side may be enough to settle the case. Sometimes, they would disagree on the meaning of the question itself, in which case they would first spend some time arguing about terminology, before arguing about the content of the question. In more complicated cases, the subjects might turn to a meta-discussion about the best method to reach agreement and get out of the room. If they must debate about whether to rely on the Scientific Method or the double-crux or any other advanced epistemic jutsu, they have all the time in the world to do that. The question is, is it always possible to escape the Argumentative Escape Room? Given unlimited time, will any two persons necessarily reach an agreement on any possible question, or are there cases where the two persons will never agree, despite their best efforts?

Of course, it is easy to find trivial cases where this will not work. For sure, if one participant is a human and the other is a pigeon, agreement might be hard to reach (although, you can’t say the pigeon really disagrees either, right?). If one participant has Alzheimer’s and forgets everything you say after two minutes, it will be hard to change their mind on any somewhat complicated topic. But these are edge cases.

A more difficult question is whether some people just lack the fundamental intelligence to understand certain arguments, or whether anybody can eventually understand anything given enough time. To take an extreme case, suppose one of the participants is a rudimentary AI with a very limited amount of memory. Some arguments based on experimental data will never fit in that memory. It might be possible, in principle, to compress the data by carefully building layers of abstraction on top of each other, but there is a limit. Likewise, many mathematical proofs require logical disjunction, where you split the claim into a number of particular cases and prove you are right for each case taken separately. If you are arguing with an AI who firmly disbelieves the 4-color theorem but lacks the hardware to survey the 1,482 distinct cases, it is going to be very hard to truly convince it. Without knowing how the brain works, I am not sure how this would translate to humans debating “normal” controversial questions. Let’s say your argument involves some advanced quantum mechanics. Most people won’t understand it at first, but since you have all the time you want, you could just teach QM to the other participant until she gets your point and can agree or disagree with you. I have good hopes that most humans could eventually understand QM given enough time and patience. But it is not clear what the absolute limits of one particular human brain are, and whether these limits differ from person to person.

The problems I mentioned so far are merely “technical” difficulties. If we leave these aside, it seems reasonable to me that the two players will reach agreement on pretty much any factual statement or belief. If everything else fails, both parties can agree that they do not know the correct answer to the question, that more research is needed, that the question does not make sense, or that the problem is undecidable. The real problem lies on the other branch of Hume’s fork. What happens if we ask the two participants to agree on moral values?

Is it okay to kill a cow for food? Is it okay to steal bread if your family is starving? Is it okay to kill a stolen cow for food if your family is starving? There is a Nature versus Nurture kind of problem here. If values are entirely cultural, or come entirely from lived experience, then there is no reason why, after sufficient time spent together, the two participants could not put their sacred values into perspective and find common ground about what is okay or not. On the other hand, if values are in part influenced by your brain’s mechanisms for emotion, empathy or instinct, like the structure of your amygdala or the sensitivity of your oxytocin receptors, then it’s entirely possible that two people will simply have different values, no matter how long they discuss it. We already know from classical twin studies that political opinions are in large part influenced by genetics. In developed countries, genetic factors are responsible for about half of the variance in attitudes towards egalitarianism, immigration and abortion. They might explain one third of the variance in patriotism, nationalism, and homophobia. One study suggested that an intra-nasal administration of oxytocin leads to increased ethnocentrism (but check out this skeptical paper for good measure). There is even a strange study where researchers could bias the reported political opinions of participants by stimulating parts of their brain with magnetic fields (that’s right, scientists MANIPULATED people’s views on IMMIGRATION using MAGNETS – please, never tell my grandmother about this study). Thus, it is pretty clear that our opinions and values are not just the result of experience and reasoning, but also involve a lot of weird brain chemistry that we might not be able to change. Genetic differences are only one obvious factor of inescapable disagreement, but they are likely not the only one. For example, it is easy to imagine that some experiences leave irreversible marks on one’s psyche (for an interesting illustration, look at the story of Gudrun Himmler). Can such barriers ever be overcome through discussion? I’m not sure.

But that is just a fun thought experiment with mildly philosophical implications about the existence of objective truth. Since unlimited time is quite uncommon in the real world, and since reaching honest agreement is rarely the only goal of people who argue with each other, does it ever matter in practice? I think this thought experiment is important, because it clarifies our underlying assumptions about how we collectively handle disagreement.

When one defends the marketplace of ideas, deliberative democracy and absolute free speech, it is implicitly assumed that, for all practical purposes, any disagreement can eventually be solved through discussion and explanation. If it turns out some people will simply never agree because their minds operate in fundamentally different ways, then the marketplace of ideas probably needs a patch. The scenario Karl Popper describes in his “paradox of tolerance” is precisely such a situation: there are very intolerant people out there who simply can’t be reasoned with, so the best thing you can do is silence them. One essay by Scott Alexander describes two approaches to politics: mistake and conflict. Mistake theory is when you believe everybody wants to benefit the collective, and disagreements come from people being mistaken about the best way to achieve that. Conflict theory is when you believe that people are just advocating for their own personal advantage, and disagreements come from people serving different goals. At first sight, those who believe it is usually possible to escape the room might gravitate towards Mistake Theory, while those who think otherwise might be driven to Conflict Theory. However, things are more complicated.

In a recent study, Alexander Severson found that when people are presented with evidence that political opinions have genetic influences, they typically become more tolerant of the other side. From the conclusion:

“We proudly weaponize bumper stickers and traffic in taunt-infused comment-thread witticisms in the war against the political other, all in part because we believe that the other side chooses to believe what they believe freely and unencumbered. […] In disavowing this belief and accepting that our own ideologies are partially the byproduct of biological and genetic processes over which we have no control, we may end up promoting a more tolerant and kinder civil society.”

Somehow, since the outgroup’s obviously wrong opinions are altered by their genes, it’s not entirely their fault if they disagree with you, so it becomes a forgivable offense. Alternatively, if differences in our opinions partially reflect differences in our bodies, then peace is only possible if we accept the coexistence of a plurality of opinions, and we may as well embrace it. Interestingly, in this study, about 20% of the participants ignored all the presented evidence, firmly rejecting the idea of any possible genetic influence on opinions. Perhaps the evidence that Severson showed them was not all that convincing, or perhaps the belief that genetics can influence beliefs is itself influenced by genetics, which, at least, would be fun to argue.

I’m curious about whether this question has already been treated by other people, in theory or – even better – experimentally. If you know of anything like that, please let me know.

Wholesale wikipedias – Feb 2021

https://en.wikipedia.org/wiki/List_of_proposed_etymologies_of_OK

https://en.wikipedia.org/wiki/Mariko_Aoki_phenomenon

https://en.wikipedia.org/wiki/Junk_fax

https://en.wikipedia.org/wiki/The_Thing_(listening_device)

https://en.wikipedia.org/wiki/Animal-borne_bomb_attacks

https://en.wikipedia.org/wiki/Collyer_brothers (s/o Gwern)

https://en.wikipedia.org/wiki/Lenin_was_a_mushroom (s/o VeryWhen)

https://en.wikipedia.org/wiki/Cable_bacteria

https://en.wikipedia.org/wiki/Maxine_Asher

Wholesale wikipedias – Jan 2021

Happy new year, dear reader.

https://en.wikipedia.org/wiki/52-hertz_whale

https://en.wikipedia.org/wiki/Demon_Duck_of_Doom

https://en.wikipedia.org/wiki/Isochrony

https://en.wikipedia.org/wiki/Kuai_Kuai_culture

https://en.wikipedia.org/wiki/London_Underground_mosquito

https://en.wikipedia.org/wiki/Beefsteak_Nazi

https://en.wikipedia.org/wiki/Cosmic_ray_visual_phenomena

https://en.wikipedia.org/wiki/List_of_nicknames_used_by_Donald_Trump (s/o Fantastic Anachronism)

Map of randomness

In this paper from 2012 (full text here), Leemis and McQueston show a diagram of how probability distributions are related to each other. As I liked it very much, I extracted the chart from the PDF, turned it into a poster, and printed a giant version of it to stick on the wall of my apartment. I thought I would also share it here:

The full-size vector graphic version (as pdf) can be downloaded here.

Some explanation

Things can be random in many different ways. It’s tempting to think “if it’s not deterministic, then it’s random and we don’t know anything about it”, but that would be wrong. There is an entire bestiary of probability distributions, with different shapes and different properties, that tell you how likely the possible outcomes are relative to each other. What’s interesting is that each distribution describes the outcome of a particular class of stochastic processes, so by looking at how something is distributed, it’s possible to better understand the process that created it. One can even combine simple processes together or morph their parameters to build more complicated processes. The map above tells you how the probability distribution changes when you do that.

Let’s look at an example. You are typing on a keyboard. Every time you push a button, there is a certain probability p that you will hit the wrong one. This super simple process is called a Bernoulli process; it corresponds to the Bernoulli distribution that you can find near the top-right corner of the map. Now you type a whole page, consisting of n characters. How many errors will you make? This is just a sum of n Bernoulli trials, so we look at the map and follow the arrow that says \sum{X_i}, and we reach the binomial distribution (the arrow requires the variables to be i.i.d., meaning “Independent and Identically Distributed” – we are assuming your typos are independent from each other). The number of errors per page follows a binomial distribution with mean np and variance np(1-p). If you write a book with 1000 characters per page and make one typo per hundred characters, the variance of the number of typos from page to page will be 1000*0.01*0.99 = 9.9. ISN’T THAT FASCINATING?
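If you don’t trust the formulas, a quick simulation (Python, with the numbers from the example above) will confirm them:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 1000, 0.01          # characters per page, typo probability per character

pages = rng.binomial(n, p, size=1_000_000)   # typo count on each simulated page

print("theory:    mean =", n * p, " variance =", n * p * (1 - p))
print("simulated: mean =", round(pages.mean(), 3),
      " variance =", round(pages.var(), 3))
# Both should come out around 10 and 9.9.
```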

Let’s complicate things a little bit. Instead of using a typewriter, you are writing with a pen. From time to time, your pen will slip and make an ugly mark. How many ugly marks will you get per page? Again, the map has you covered: this time, instead of having n discrete button presses, we have an infinite number of infinitesimal opportunities for the pen to screw up, so n\to\infty, and p must also become infinitesimally small so that np is finite, otherwise you would just be making an infinite number of ugly marks, and I know you are better than that. Thus, according to the map, the number of screwups per page follows a Poisson distribution. A handy property of the Poisson distribution is that the mean happens to be equal to the variance. So if your pen screws up 10 times per page, you also know the variance will be 10.
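Again, a short sketch can confirm the limit – here using scipy to compare the binomial and Poisson probability mass functions as n grows while np stays fixed at 10:

```python
import numpy as np
from scipy import stats

lam = 10.0  # average number of pen screwups per page

# Take the number of "opportunities" n to infinity while keeping n*p = lam fixed:
for n in [20, 100, 10_000]:
    p = lam / n
    k = np.arange(25)
    max_gap = np.max(np.abs(stats.binom.pmf(k, n, p) - stats.poisson.pmf(k, lam)))
    print(f"n={n:6d}  largest pmf difference vs Poisson({lam:.0f}) = {max_gap:.4f}")

# The binomial pmf converges to the Poisson pmf as n grows and p shrinks.
```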

You can go on and explore the map on your own (riddle: how is the amount of ink deposited by your pen per page distributed?). So far, I would say I have encountered only half of the map’s distributions in real life, so there is still a lot of terra incognita for me.

The hundred Coca-Colas of post-brand capitalism

1.

Knowing nothing of the inextricable complexity of the human administration it was flying into, the fly entered through the vent of the workstation’s fan. It slipped into the depths of the circuit board, causing a single-bit error in the index of the Reference Legal Archive. The intern in charge of proof-reading felt that something was different, but could not pinpoint exactly what. The fully automated computer system had corrected any inconsistency in paragraph numbering. When the updated text of the law was sent to all executive forces, nobody noticed that an entire section had been erased.

In Terry Gilliam’s 1985 film Brazil, a fly gets jammed in the apparatus of a dystopian bureaucratic administration, creating an error which serves as the starting point for the entire story. As our legal systems become increasingly bureaucratic and complicated, it is a fun exercise to think about what could happen if a small modification were randomly introduced into the law, like a mutation in the genome of society. Certain mutations would have no effect, some would lead to the rapid collapse of civilization, and, who knows, some might even be beneficial.

But there is one simple mutation – a deletion of a single legal concept – that I believe has the potential to make our society much better in the long run. I am talking about trademarks, and I will explain why I think they should be abandoned. There has been a lot of debate about whether patents or copyright should be abolished, but even anti-patent and anti-copyright activists like the Pirate Party’s founder Rick Falkvinge or the GNU guru Richard Stallman think trademarks are a good thing. That is how far outside the Overton window we are going. Well, I don’t actually think that they should just be erased at once – I am aware that trademarks, by design or by accident, serve all kinds of roles in our current societies, so we couldn’t abolish them just like that, without carefully planning how these roles would be filled instead. But you already know the arguments in favor of the status quo. Rather, I am just going to present the radical idea of abolishing trademarks in a one-sided way, in the hope of making you question whether trademarks are as natural, necessary and optimal as they appear to people who are used to them.

2.

Thank you for coming to this emergency meeting. As you may know, we are facing a problem without precedent. Since this morning, a second Coca-Cola company has entered the market. The first batches are already reaching retail stores as I’m talking.
– A second Coca-Cola company? How so?
– Another Coca-Cola. The same as ours. Identical product, same packaging, same logo. It is just not produced by our company.
– Well, we sue them for trademark infringement, like we always do!
– This is where it gets complicated. Apparently the administration made a mistake when converting the official version of the law to some obscure new technical standard. They said it was a computer bug or something, nobody knows. But the entire section about trademarks completely vanished from the law. At the moment, there is nothing we can do legally to protect our brand.
– You’re saying trademarks disappeared just like that? What the hell, don’t they have backups of the law somewhere?
– Of course they do, but you can’t just revert the law of the country to a previous version like that. That would be antidemocratic. As per the constitution, the state will only enforce the standard version of the law from the Reference Legal Archive, and any correction will have to be voted on. It might take weeks.

I know the fly scenario is highly implausible in real life, but take it as a thought experiment. Let’s suspend our disbelief and assume, for the sake of the story, that all laws related to trademarks suddenly disappeared. In other words, anybody can brand their product as they want, and counterfeits are basically legal. That does not mean one can write whatever they want on the packaging – required information like ingredients, contact info or quantity is still enforced as always – but the brand is no longer protected. Anybody can start manufacturing Coca-Cola and call it Coca-Cola.

3.

– The marketing department just got the results from panel testing. “The One and Only Coca-Cola” did pretty bad, only 20% of the panel picked it. “The Original Coca-Cola” works much better. People are confident that we are the original one if we write that on the label.
– But we are not the original Coca-Cola, are we?
– As far as the law is concerned, we are.
– Oh right. What about the holograms?
– Bigger is better. I mean, I don’t want this to escalate out of control, but it’s increasingly clear that people are just choosing whatever package carries the largest hologram. So we designed a new, 12 cm-wide hologram. The largest on the market. Not even “Best Coca-Cola” has such big holograms.
– Actually, they’re no longer called “Best Coca-Cola”. If I remember correctly, they changed their name to “The Original Coca-Cola” last week.

1970 anti-war poster, University of California, Berkeley

This might go on for a while. Eventually, the original companies have to face the hard truth – their brands only existed as long as the State was willing to protect them. Without that protection, they are just one manufacturer among many others selling the same product under the same name.

But what if it is not the same product? One company might seize the opportunity to sacrifice quality and cut costs. To quote Rick Falkvinge: “Trademarks are basically good, as they primarily serve as consumer protection. If it says “Coca-Cola” on the can, I know that The Coca-Cola Company guarantees its quality.” I personally doubt this, and my doubts are supported by blind tests where participants taste food without knowing the brand (“Our conclusion is that brand image is the only explanation for the premium commanded by the supplier brands in the four food product markets. The consumer is paying a premium for the often intangible benefits inherent in a branded product. Only in washing-up liquid did the leading brand offer intrinsically superior value for money.” – Davies et al., 2004).

Moreover, it’s important to separate the effect of trademarks themselves from the effect of other regulations. As a case study, let’s look at counterfeit medicines. This is obviously a rampant problem, with about half of the pills sold online being fakes and many people dying because of it. But trademark infringement is not the root of the problem here. The factories that make counterfeit medication break the law in two different ways: first, they infringe a trademark; second, they deliver pills that do not contain the chemical mentioned on the label (or not in the right concentration). The danger of counterfeit medication comes from the latter, and has nothing to do with the trademark. Without trademarks, copycats could copy the name, the logo and the slogans, but they still couldn’t lie about the contents or about cGMP compliance, which would still be enforced by law. The reputation of brands could be fully replaced by product certification, where an independent body delivers a label if the product meets a certain standard, as already exists for environmental impact, ethics, health, compliance with religious traditions and so forth. There are even certifications that certify certification bodies’ certification procedures. Or, you know, if everything else fails, you can just go for the cheapest product.

Of course, at this point, there are many objections you can make about how the standards for product certification would work without trademarks. They definitely require some level of legal protection, otherwise anybody could just copy the name and logo of an existing certification, apply more lenient criteria, and award it to themselves. But they shouldn’t be protected too much either, otherwise any company could have their own standard that says “manufactured in our factory at [address]” and we would just have re-invented trademarks. Hopefully, there is a middle ground somewhere, where labels are unique and meaningful, yet flexible enough that they can be fulfilled by any competitor entering the market. That is not going to be a clean and elegant solution, but trademarks were never clean and elegant either. If trademarks did not exist and I were arguing for introducing them, one could also come up with many loopholes and objections: what if your actual last name is McDonald and you want to start a fast-food chain? Should trademarks be transferable to other people, and if so, how does that not defeat the purpose of trademarks? If not, what happens when Sir Coca-Cola, First of His Name, passes away? What if I start a company called “Coca-CoIa”, where the 7th letter is a capital i instead of an L? Can I trademark an image, a sound, a smell, a taste? In practice, these issues are fixed using a ton of specific laws and jurisprudence that legal experts must navigate to tell what is OK and what is not. Likewise, without trademarks, a new legal framework would be necessary for product certification to actually work. But why would we even get rid of trademarks?

4.

Something in the city was not the same. You would just walk to work, as you had been doing every day for years, but you kept noticing things that you had never paid attention to before. A pigeon’s nest, a 19th-century street lamp, a tree, a wrought-iron balcony, the stamped pattern of a manhole cover. All these things had been there forever, but you could not see them, because the flashing advertisement billboards would catch all your attention.

The Eiffel Tower used as a billboard, 1925-1934. Wikimedia Commons

Without trademarks, there is no point in advertising your brand, since anyone else could just use the same brand and benefit from your advertisement. And this is fortunate, because advertising is the ultimate form of evil. I talked before about how the Chinese government buys “sponsored content” in Western newspapers to print propaganda disguised as legitimate articles. In 2016, as the New York Times distanced itself from the less-reputable “fake news” media, it painfully realized that its own website was displaying its own fake news in the form of advertisement – like announcing the death of a celebrity who was still alive. In their classic book Manufacturing Consent, Herman and Chomsky describe how newspapers that rely on advertisement are pressured into printing things that favor the advertiser. That’s not to mention the attention cost of constant interruption, the mass surveillance necessary for “behavioral” advertising, the waste produced by junk mail, or the perpetuation of harmful stereotypes by commercials (although causality is contested). Without trademark protection, most of this would spontaneously disappear, making the world a much better place.

Can we really live without advertisement? The best natural experiment comes from Brazil. In 2006, the city of São Paulo enacted a law called Cidade Limpa, prohibiting all outdoor billboard advertisement. In a survey more than 10 years later, the citizens had no regrets, and the majority of them wanted to keep the ban in place. Other cities have made similar (albeit milder) attempts. Of course, these legal bans might sound a tiny bit authoritarian, and one can wonder where the line is between banning ads and censoring speech. In addition, these policies are not that radically effective – in São Paulo, advertisement started to appear again after a few years, in more convoluted forms, stealthily integrating itself into street furniture. Abolishing trademarks, on the other hand, would circumvent these problems and cut brand advertisement off at its roots. No ban has to be enforced – in fact, it’s not about enforcing a new law, but about stopping the enforcement of an old one. We remove a little piece of coercion from the state, the police no longer show up when someone infringes a trademark, and the entire advertising industry becomes unprofitable. The most brilliant computer scientists in the world can go back to doing useful things, instead of building machine-learning models for consumer tracking and targeted marketing.

5.

“Help us bring the best content to you, for free”. The old advertisement-based media started a massive communication campaign to persuade citizens to vote trademarks back into the law. Yet, people just had a glimpse of an ad-free society, and many wondered whether they really missed the advertising giants so much.

Needless to say, all the big companies that rely on advertisement for funding would be in immediate danger. Some might try to defend the advertising industry by claiming it lets us get things for free. You get free search engines, free bus stops, free newspapers, what is there to complain about? This is a gargantuan scam. Let’s investigate. Internet companies like Twitter, Facebook or Google use advertisement as their primary source of revenue. This includes directly displaying ads to the consumer, as well as accumulating information about their users to sell to third parties. In turn, this process manipulates consumers into buying products they wouldn’t otherwise buy. In effect, advertisement makes you pay a premium on everyday products, and that is where the money comes from. How much is that? In the third quarter of 2020, Facebook made a bit more than $10 billion from North America alone. Divide this by the 255 million monthly active users and you get about $40 per user per quarter, that is, roughly $160 per year. And that’s the average for monthly users. If you go to Facebook daily, it will be much more. A similar calculation for Twitter gives about $20 per user per year worldwide (as with Facebook, it may be much more if you live in a rich country). Google doesn’t disclose how many users it has, but given that its worldwide revenues exceeded $160 billion in 2019, even if all 7.8 billion humans on Earth used Google (which makes this a lower bound), that would still be about $20 per person. Of course, it must be something like an order of magnitude higher if Google also provides your e-mail, document storage, maps, browser and so forth. Oh, and JCDecaux, the arch-evil Great Satan of public-space advertising, made €3.9 billion in 2019. Now make a list of all the “free stuff” you get in your daily life (other free websites, applications, TV commercials, movie theater advertisements, sponsored content, …) and calculate the grand total. That’s an expensive free lunch.

Keep in mind this is only a fraction of the real cost of advertisement, since the companies who buy ads or data from Google et al. are expecting a positive return on investment. The amount they give to advertising companies is only a lower bound on the premium they can trick consumers into paying. For example, Google claims that people who advertise with them get an average return on investment of 8-to-1. If that is true, what we previously estimated using Google’s revenues must be multiplied by eight to obtain the real cost to the consumer.
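For what it’s worth, here is the back-of-the-envelope calculation in code form, using only the rough figures quoted above (treat every number as an assumption):

```python
# Back-of-the-envelope sketch; all figures are the approximate ones quoted in the text.

fb_na_quarterly_revenue = 10e9        # ~$10B from North America, Q3 2020
fb_na_monthly_users = 255e6           # ~255M monthly active users

per_user_per_year = fb_na_quarterly_revenue / fb_na_monthly_users * 4
print(f"Facebook ad revenue per North-American user: ~${per_user_per_year:.0f}/year")

# Advertisers presumably expect a positive return on what they spend, so the
# revenue is only a lower bound on the premium consumers end up paying.
claimed_roi = 8                       # Google's claimed 8-to-1 return on ad spend
print(f"Implied consumer-side cost if the ROI claim holds: "
      f"~${per_user_per_year * claimed_roi:.0f}/year")
```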

Even worse, competitors in a market are engaged in a Moloch-esque Red Queen race, where each company must spend more and more money on marketing just to stay in the game. Where do all these wasted resources come from, if not from the consumer’s pocket? Without advertisement, I’d speculate that companies would resort to the next best strategy instead, that is, cutting prices. Hopefully, the large premium people pay for marketing would be subtracted from the price of day-to-day products.

Finally, for those who still think Internet ads are good because they support the creative class, remember that only a fraction of what you pay goes to the authors, and you would be better off with something like Patreon. As for server costs, a centralized service like YouTube might resort to paid subscriptions, in which case it would have to compete with decentralized, p2p-based alternatives like PeerTube, which may turn out to be a lot cheaper. Also, when we talk about Internet Giants, we often forget that one of them never relied on ads in the first place – Wikipedia has run entirely on donations for two decades, and it did better than Google’s own attempt at making an encyclopedia.

6.

It was a passive revolution – no plutocrat was bereft, no king was beheaded, no parliament was burnt, no landowner was expropriated. Removing a tiny piece of legal coercion made the entire society less coercive.

In their modern form, trademarks are about 150 years old (Sumerian merchants were already marking stuff with their seals some 5,000 years ago, but this worked in a pretty different way, and I don’t think those merchant marks were protected by the State). This is just old enough that nobody remembers how things worked before trademarks, and we accept them as a part of nature that has been here forever. 150 years is also just young enough that the long-term effects of trademarks have not been thoroughly tested and selected for by cultural evolution. If you want to overthrow a 3,000-year-old tradition, you should remember Chesterton’s fence and think carefully about why it’s there and why it has remained in place for so long. But 150 years old? That could just be a temporary mistake.

Do you think this guy owns a trademark? Probably not. After all, he’s an actor posing for stock photos.

Omnipresent advertising is one of the things that did not go so well in our modern capitalist society. Another one is the emergence of a handful of aristocrats with an astronomical amount of financial power. These commercial empires are, to a large extent, built on the salience of their brands, itself built on advertisement, itself built on trademarks. Once we see trademarks not as something natural and necessary, but as a legal mistake of the 19th century, those empires appear to be built on very artificial foundations. If we removed them, the plutocrats would be forced to adapt, or lose their fortunes. On the other hand, the fall of brands would be a blessing for individual artisans and local shops. They did not rely on trademarks anyway, and they could use the now-cheap advertisement space to become known to local customers. Nevertheless, as soon as one of them grew big enough to try to advertise its brand, copycats would appear and make the brand useless. Like a rubber band, this would pull companies back to the human scale. Somehow, this echoes a point Guy Debord makes in La Société du Spectacle: “With the generalized separation of the worker and his products, every unitary view of accomplished activity and all direct personal communication among producers are lost.” A bottle of Coca-Cola is a calibrated, standard, almost abstract entity that contains no trace of the individuals who were involved in its production. While Debord sees this as an essential feature of capitalism, I would say that it is rather a feature of brands, which act as an abstraction layer between the chain of production and the consumers.

Let’s speculate even further. Building a brand and making sure the public knows about it is a major obstacle for new companies. In post-brand capitalism, it may be much easier for newcomers to enter the market. Any company making products with good certifications, at a low enough price, could readily compete with the most established industrial trusts. Monopolies would be much harder to establish, and even if someone actually managed to reach a monopoly on something, they could not make a lot of additional profit out of it, because some unknown player could just enter the market under the same name as soon as they raised their prices too much. In the long run, economic inequality might even erode a little bit. That’s not to say you can’t bereave the plutocrats in addition to abolishing trademarks, if you are into that kind of thing.

7.

I guess it is time for a reality check. First, there is the problem that brand abolition is not exactly the most viable political project. That’s because the people who benefit from advertisement are precisely the ones who are in the best position to shape public opinion. It might not be easy to remove something that directly benefits journalists, news sites and search engines.

Second, the obvious: if the government actually decided to store the entire law on a single computer, and if a fly actually did crash into the motherboard and erase everything about trademarks, the world would not instantly become a post-brand utopia – there would most likely be a lot of turmoil and violence and chaos, and everybody would be upset at me. If this happens, you are welcome to complain in the comments. That is, if you can find the real Telescopic Turnip among the hundred copycats.