## Average North Korean Mathematicians

Here are the top fifteen countries, ranked by how well their teams do at the International Mathematical Olympiad:

When I first saw this ranking, I was surprised to see that North Koreans have such an impressive track record, especially when you factor in their relatively small population. One possible interpretation is that East Asians are particularly good at mathematics, just as the stereotype goes, even when they live in one of the world’s worst dictatorships.

But I don’t believe that. In fact, I believe North Koreans are, on average, particularly bad at math. More than 40% of the population is undernourished. Many of the students involved in the IMOs grew up in the 1990s, during the March of Suffering, when hundreds of thousands of North Koreans died of famine. That is not exactly the best context in which to learn mathematics, not to mention the direct effect of nutrition on the brain. There do not seem to be many famous North Korean mathematicians either (there is actually a candidate from the North Korean IMO team who managed to escape during the 2016 Olympiad in Hong Kong; he now lives in South Korea, and I hope he becomes a famous mathematician). Thus, realistically, if all 18-year-olds from North Korea were to take a math test, they would probably score much worse than their South Korean neighbors. And yet, Best Korea reaches almost the same score with only half the source population. What is their secret?

This piece on the current state of mathematics in North Korea gives it away. “The entire nation suffered greatly during and after the March of Suffering, when the economy collapsed. Yet, North Korea maintained its educational system, focusing on the gifted and special schools such as the First High Schools to preserve the next generation. The limited resources were concentrated towards gifted students. Students were tested and selected at the end of elementary school.” In that second interpretation, the primary concern of the North Korean government is to produce a few very brilliant students every year, who will bring back medals from the Olympiads and make the country look good. The rest of the population’s skills at mathematics are less of a concern.

When we receive new information, we update our beliefs to keep them compatible with the new observations, doing an informal version of Bayesian updating. Before learning about the North Korean IMO team, my prior beliefs were something like “most of the country is starving and their education is mostly propaganda, there is no way they can be good at math”. After seeing the IMO results, I had to update. In the first interpretation, we update the mean – the average math skill is higher than I previously thought. In the second interpretation, we leave the mean untouched, but we make the upper tail of the distribution heavier. Most North Koreans are not particularly good at math, but a few of them are heavily nurtured for the sole purpose of winning medals at the IMO. As we will see later in this article, this problem has some pretty important consequences for how we understand society, and those who ignore it might make some pretty bad policy decisions.

But first, let’s break it apart and see how it really works. There will be a few formulas, but nothing that can hurt you, I promise. Consider a probability distribution where the outcome x happens with probability p(x). For any integer n, the formula below gives what we call the nth moment of a distribution, centered on \mu.

\int_{\mathbb{R}}p(x)(x-\mu)^ndx

To put it simply, moments describe how things are distributed around a center. For example, if a planet is rotating around its center of mass, you can use moments to describe how its mass is distributed around it. But here I will only talk about their use in statistics, where each moment encodes one particular characteristic of a probability distribution. Let’s sketch some plots to see what it is all about.

First moment: replace n with 1 and μ with 0 in the previous formula. We get

\int_{\mathbb{R}}p(x)(x)dx

which is – surprise – the definition of the mean. Changing the first moment just shifts the distribution towards higher or lower values, while keeping the same shape.

Second moment: for n = 2, we get

\int_{\mathbb{R}}p(x)(x-\mu)^2dx

If we (arbitrarily, for simplicity) set μ equal to the mean, we obtain the definition of the variance! The second moment around the mean describes how values are spread away from the average, while the mean remains constant.

Third moment (n = 3): the third moment describes how skewed (asymmetric) the distribution is, while the mean and the variance remain constant.

Fourth moment (n = 4): this describes how leptokurtic or platykurtic your distribution is, while the mean, variance and skew remain constant. These words basically describe how long the tails of your distribution are, or “how extreme the extreme values are”.

You could go on to higher n, each time bringing in more detail about what the distribution really looks like, until you end up with a perfect description of the distribution. By mentioning only the first few moments, you can describe a population with only a few numbers (rather than infinitely many), but this only gives a “simplified” version of the true distribution, as in the left graph below:
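To make this concrete, here is a quick numerical sketch (assuming NumPy is available) that estimates the first four moments from a sample drawn from an exponential distribution, whose true mean, variance, skewness and kurtosis are known:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=1_000_000)  # a skewed distribution

mean = x.mean()                                # first moment (about 0)
var = ((x - mean) ** 2).mean()                 # second moment about the mean
skew = ((x - mean) ** 3).mean() / var ** 1.5   # standardized third moment
kurt = ((x - mean) ** 4).mean() / var ** 2     # standardized fourth moment

# For this exponential: mean = 2, variance = 4, skewness = 2, kurtosis = 9.
print(mean, var, skew, kurt)
```

The estimates land close to the theoretical values, but notice that the higher the moment, the more samples you need to pin it down – which is one reason tail behavior is so hard to measure in practice.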

Say you want to describe the height of humans. As everybody knows, height follows a normal distribution, so you could just give the mean and standard deviation of human height and get a fairly accurate description of the distribution. But there is always a wise-ass in the back of the room to point out that the normal distribution is defined over \mathbb{R}, so for a large enough population, some humans will have a negative height. The problem here is that we only gave information about the first two moments and neglected all the higher ones. As it turns out, humans are only viable within a certain range of heights, below or above which people don’t survive. This erodes the tails of the distribution, effectively making it more platykurtic (if I can get even one reader to use the word platykurtic in real life, I’ll consider this article a success).

Let’s come back to the remarkable scores of North Koreans at the Math Olympiads. What these scores teach us is not that North Korean high-schoolers are really good at math, but that many of the high-schoolers who are really good at math are North Koreans. On the distribution plots, it would translate to something like this:

With North Koreans in purple and another country that does worse in the IMOs (say, France) in black. So you are looking at the tails, trying to infer something about the rest of the distribution. Recall the plots above. Which one could it be?

Answer: just by looking at the extreme values, you cannot possibly tell, because any of these plots would potentially match. In Bayesian terms, each moment of the distribution has its own prior, and when you encounter new information, you could in principle update any of them to match the new data. So how can we make sure we are not updating the wrong moment? When you have a large representative sample that reflects the entire distribution, this is easy. When you only have information about the “top 10” extreme values, it is impossible. This is unfortunate because the extreme values are precisely what gets all our attention – most of what we see in the media is about the most talented athletes, the most dishonest politicians, the craziest people, the most violent criminals, and so forth. Thus, when we hear new information about extreme cases, it’s important to be careful about which moment to update.
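Here is a small sketch of why the tails alone can’t settle the question. Two hypothetical populations – one with a higher mean, one with the mean untouched but the second moment inflated – can produce essentially the same extreme values. The numbers 0.6 and 1.161 below are purely contrived, picked so that the extreme quantiles line up:

```python
from statistics import NormalDist

q = 0.9999  # look only at the top 0.01% – the "IMO medalists"

# Scenario A: update the mean (baseline spread)
top_shifted_mean = NormalDist(mu=0.6, sigma=1.0).inv_cdf(q)
# Scenario B: mean untouched, second moment inflated instead
top_wider_spread = NormalDist(mu=0.0, sigma=1.161).inv_cdf(q)

print(top_shifted_mean, top_wider_spread)  # nearly identical extremes
```

The two scenarios are indistinguishable at the top of the distribution, yet they describe radically different populations.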

This problem also occurs in reverse – in the same way looking at the tails doesn’t tell you anything about the average, looking at the average doesn’t tell you anything about the tails. An example: in a typical year, more Americans die from falling than from viral infections. So one could argue that we should dedicate more resources to preventing falls than viral infections. Except the number of deaths from falls is fairly stable (you will never have a pandemic of people suddenly slipping in their bathtubs 100 times more than usual). On the other hand, virus transmission is a multiplicative process, so most outbreaks will be mostly harmless (remember how SARS-CoV-1 killed fewer than 1000 people? those were the days), but a few of them will be really bad. In other words, yearly deaths from falls have a higher mean than deaths from viruses, but since the latter are highly skewed and leptokurtic, they might deserve more attention. (For a detailed analysis of this, just ask Nassim Taleb.)

There are a lot of other interesting things to say about the moments of a probability distribution, like the deep connection between them and the partition function in statistical thermodynamics, or the fact that in my drawings the purple line always crosses the black line exactly n times. But these are for nerds, and it’s time to move on to the secret topic of this article. Let’s talk about SEX AND VIOLENCE.

This will not come as a surprise: most criminals are men. In the USA, men represent 93% of the prison population. Of course, discrimination in the justice system explains some part of the gap, but I doubt it accounts for the whole 9-fold difference. Accordingly, it is a solid cultural stereotype that men use violence and women use communication. Everybody knows that. Nevertheless, having just read the previous paragraphs, you wonder: “are we really updating the right moment?”

A recent meta-analysis by Thöni et al. sheds some light on the question. Published in the journal Psychological Science, it synthesizes 23 studies (with more than 8,000 participants) about gender differences in cooperation. In such studies, participants play cooperation games against each other. These games are essentially a multiplayer, continuous version of the Prisoner’s Dilemma – players can choose to be more or less cooperative, with possible strategies ranging from total selfishness to total selflessness.

So, in cooperation games, we expect women to cooperate more often than men, right? After all, women are socialized to be caring, supportive and empathetic, while men are taught to be selfish and dominant, aren’t they? To find out, Thöni et al aligned all of these studies on a single cooperativeness scale, and compared the scores of men and women. Here are the averages, for three different game variants:

This is strange. On average, men and women are just equally cooperative. If society really allows men to behave selfishly, it should be visible somewhere in all these studies. I mean, where are all the criminals/rapists/politicians? It’s undeniable that most of them are men, right?

The problem with the graph above is that it only shows averages, so it misses the most important information – that men’s level of cooperation is much more variable than women’s. So if you zoom on the people who were either very selfish or very cooperative, you find a wild majority of men. If you zoom on people who kind-of cooperated but were also kind-of selfish, you find predominantly women.
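This pattern is easy to reproduce in a simulation. The numbers below are purely illustrative (equal means, with one group assumed to be about 15% more variable – not the paper’s actual estimates), but they show how both extremes end up dominated by the more variable group:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000
# Illustrative assumption: identical means, one group ~15% more variable.
men = rng.normal(loc=0.0, scale=1.15, size=n)
women = rng.normal(loc=0.0, scale=1.00, size=n)

scores = np.concatenate([men, women])
is_man = np.concatenate([np.ones(n, dtype=bool), np.zeros(n, dtype=bool)])
order = np.argsort(scores)

k = len(scores) // 100                    # the most extreme 1% on each side
frac_top = is_man[order[-k:]].mean()      # share of men among top cooperators
frac_bottom = is_man[order[:k]].mean()    # share of men among the most selfish

print(frac_top, frac_bottom)              # both well above 50%
```

A modest difference in variance, invisible in the averages, translates into a large majority at both tails.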

As I’m sure you’ve noticed, the title of the Thöni et al paper says “evolutionary perspective”. As far as I’m concerned, I’m fairly skeptical about evolutionary psychology, since it is one of the fields with the worst track record of reproducibility ever. To be fair, a good part of evpsych is just regular psychology where the researchers added a little bit of speculative evolutionary varnish to make it look more exciting. This aside, real evpsych is apparently not so bad. But that’s not the important part of the paper – what matters is that there is increasingly strong evidence that men are indeed more variable than women in behaviors like cooperation. Whether it is due to hormones, culture, discrimination or cultural evolution is up to debate and I don’t think the current data is remotely sufficient to answer this question.

(Side note: if you must read one paper on the topic, I recommend this German study where they measure the testosterone level of fans of a football team, then have them play Prisoner’s Dilemma against fans of a rival team. I wouldn’t draw any strong conclusion from this just yet, but it’s a fun read.)

The thing is, men are not only found to be more variable in cooperation, but in tons of other things. These include aggression, exam grades, PISA scores, all kinds of cognitive tests, personality, creativity, vocational interests and even some neuroanatomical features. In the last few years, support for the greater male variability hypothesis has accumulated, so much so that it is no longer possible to claim to understand gender or masculinity without taking it into account.

Alas, that’s not how stereotyping works. Instead, we see news reports showing all these male criminals, assume that our society turns men into violent and selfish creatures, and call them toxic (here is Dworkin: “Men are distinguished from women by their commitment to do violence rather than to be victimized by it. Men are rewarded for learning the practice of violence in virtually any sphere of activity by money, admiration, recognition, respect, and the genuflection of others honoring their sacred and proven masculinity.” Remember – in the above study, the majority of “unconditional cooperators” were men). Internet people make up a hashtag to ridicule those who complain about the generalization. We see all these male IMO medalists, and – depending on your favorite political tradition – either assume that men have an unfair advantage in math, or that they are inherently better at it. The former worldview serves as a basis for public policy. The question of which moment to update rarely even comes up.

This makes me wonder whether this process of looking at the extremes then updating our beliefs about the mean is just the normal way we learn. If that is the case, how many other things are we missing?

## Map of randomness

In this paper from 2012 (full text here), Leemis and McQueston show a diagram of how probability distributions are related to each other. Since I liked it so much, I extracted the chart from the PDF, turned it into a poster, and printed a giant version to stick on the wall of my apartment. I thought I would also share it here:

The full-size vector graphic version (as pdf) can be downloaded here.

### Some explanation

Things can be random in many different ways. It’s tempting to think “if it’s not deterministic, then it’s random and we don’t know anything about it”, but that would be wrong. There is an entire bestiary of probability distributions, with different shapes and different properties, that tell you how likely the possible outcomes are relative to each other. What’s interesting is that each distribution describes the outcome of a particular class of stochastic processes, so by looking at how something is distributed, it’s possible to understand better the process that created it. One can even combine simple processes together or morph their parameters to build more complicated processes. The map above tells you how the probability distribution changes when you do that.

Let’s look at an example. You are typing on a keyboard. Every time you push a button, there is a certain probability p that you will hit the wrong one. This super simple process is called the Bernoulli process, and it corresponds to the Bernoulli distribution that you can find near the top-right corner of the map. Now you type a whole page, consisting of n characters. How many errors will you make? This is just a sum of n Bernoulli processes, so we look at the map and follow the arrow that says \sum{X_i}, and we reach the binomial distribution (i.i.d. means “independent and identically distributed” – we are assuming your typos are independent from each other). The number of errors per page follows a binomial distribution with mean np and variance np(1-p). If you write a book with 1000 characters per page and make one typo per hundred characters, the variance of the number of typos from page to page will be 1000 × 0.01 × 0.99 = 9.9. ISN’T THAT FASCINATING?
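A minimal sketch of the book example, assuming NumPy is available – the theoretical mean and variance, next to the same quantities estimated by simulating many pages:

```python
import numpy as np

n, p = 1000, 0.01            # characters per page, typo probability
mean, var = n * p, n * p * (1 - p)
print(mean, var)             # 10 typos per page on average, variance 9.9

# Simulating many pages as sums of n Bernoulli trials gives the same thing:
rng = np.random.default_rng(1)
pages = rng.binomial(n=n, p=p, size=200_000)
print(pages.mean(), pages.var())  # close to the theoretical values
```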

Let’s complicate things a little bit. Instead of using a typewriter, you are writing with a pen. From time to time, your pen will slip and make an ugly mark. How many ugly marks will you get per page? Again, the map has you covered: this time, instead of having n discrete button presses, we have an infinite number of infinitesimal opportunities for the pen to screw up, so n\to\infty, and p must also become infinitesimally small so that np is finite, otherwise you would just be making an infinite number of ugly marks, and I know you are better than that. Thus, according to the map, the number of screwups per page follows a Poisson distribution. A handy property of the Poisson distribution is that the mean happens to be equal to the variance. So if your pen screws up 10 times per page, you also know the variance will be 10.
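You can check this convergence numerically. With many tiny opportunities (n huge, p tiny, np = 10), the binomial probabilities are already indistinguishable from the Poisson ones – a pure-stdlib sketch:

```python
from math import comb, exp, factorial

lam = 10                          # average screwups per page
n = 100_000                       # many infinitesimal opportunities
p = lam / n                       # each one very unlikely, np stays finite

def binom_pmf(k):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(k):
    return exp(-lam) * lam**k / factorial(k)

# The two distributions agree to many decimal places:
for k in (0, 5, 10, 20):
    print(k, binom_pmf(k), poisson_pmf(k))
```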

You can go on and explore the map on your own (riddle: what is the amount of ink deposited by your pen per page distributed like?). So far, I would say I have encountered only half of the map’s distributions in real life, so there is still a lot of terra incognita for me.

## Quantified Pop Culture

We all noticed the gender stereotypes in films, books and video games, and we all know that they shape how we behave in real life (or is it the other way around?). But it would be nice to know how common these stereotypes really are. Intuitively, it’s tempting to resort to the availability heuristic, that is, to recall a bunch of films where you remember seeing a stereotype, and assume that the number of examples you can find is proportional to its actual prevalence. But the availability heuristic is quite bad in general, especially for pop culture, where authors try to subvert your expectations all the time by replacing a stereotype with its exact opposite. Thus, it would be useful to put actual numbers on the frequency of various stereotypes in the entertainment media, before we make any extravagant claim about their importance.

But how do you measure stereotypes in pop culture? The only way would be to go over all the films, books, comics and theater plays, systematically list every single occurrence of every stereotype you see, and compile them into a large database. This would of course represent an astronomical amount of mind-numbingly boring work, and nobody in their right mind would ever want to do that.

But wait – that’s TVtropes! For reasons that I can’t fathom, a group of nerds over the Internet actually performed this mind-numbingly boring work and created a full wiki of every “trope” they could find, with associated examples. All there is left to do is statistics.

Of course, editing TVtropes is not a systematic, unbiased process and there will be all kinds of biases, but it’s certainly better than just guessing based on the examples that come to your mind. In addition, TVtropes has clear rules for what qualifies as a trope or not, and I believe they are enforced. Also, TVtropes is a “naturally-occurring” database – contributors were not trying to make any specific statement about gender stereotypes when they built the wiki, so there should not be too much ideological bias (compared to, say, a gender studies PhD student looking for evidence to back up their favorite hypothesis). I’m almost surprised it has not been used more often in social sciences (I looked it up: somebody wrote a Master’s thesis about TVtropes, but it’s about how the wiki gets edited – they make no use of the content).

So I went ahead and wrote a TVtropes scraper. It goes through a portal (a page that lists all the tropes related to one topic), visits all the trope pages, then goes to the description of each medium that contains the trope. I even hacked together a small script to extract the publication date of the medium, looking for things like “in [4-digit number]”, “since [4-digit number]” and so on. It’s not 100% accurate, but it should be enough to see how the different stereotypes evolved over time.
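The actual script isn’t shown here, but the date-guessing idea can be sketched in a few lines. This is a hypothetical reconstruction: the `guess_year` helper and the exact pattern list are my own, not the original code.

```python
import re

# Look for a plausible year (1800–2099) after words like "in" or "since".
YEAR = re.compile(r"\b(?:in|since|from)\s+(1[89]\d{2}|20\d{2})\b")

def guess_year(text):
    """Return the first year-like mention found in a description, or None."""
    m = YEAR.search(text)
    return int(m.group(1)) if m else None

print(guess_year("A platform game released in 1994 for the SNES."))  # 1994
print(guess_year("Running since 1987, this series never stopped."))  # 1987
print(guess_year("No date mentioned here."))                         # None
```

As the text above notes, a heuristic like this will misfire on stories merely set in a given year, which is consistent with the odd “video games from 1960” seen later.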

I then ran my script on a large portal page called the Gender Dynamic Index, which has all the tropes related to gender in one place. Scraping it and the pages it links to took about one full day, because TVtropes kept banning me for making too many requests. Sorry for that, TVtropes. Anyway, the scraper code can be found here, and the dataset in CSV format is here. Using this dataset, we can look into the following questions:

• What are the most common tropes about female characters? About male characters?
• Are some tropes more common in specific media, like video games, television or manga?
• How did trope frequency evolve over time? Did some new tropes emerge in the last decades? Which old tropes went out of fashion?

As a sanity check, here is how the different media are represented in my dataset for each year. You can see the rise of video games starting in the 1980s, so my attempt at extracting the dates is not so bad. There also seem to be a few video games as early as 1960, which is weird. Maybe they are just video games whose story takes place in the sixties and my script got confused.

So what does pop culture say about women? Here are the top 50 tropes, ranked by the number of examples referenced on their wiki page. You can find an absurd lot of detail about any given trope on the dedicated TVtropes page (example).

And this is the top 50 for men:

I was a bit surprised to find “Drowning My Sorrows” so high in the list of stereotypes about men. It’s about how, in fiction, men tend to drink alcohol when they are sad. Interestingly, this one is equally frequent in all kinds of media, even cartoons (that being said, I don’t know how many of these are children’s cartoons – it is possible that TVtropes contributors are more likely to mention cartoons for an adult audience). That does not sound like a very healthy message.

TVtropes also has a special category for tropes that contrast men and women. Here they are:

The tropes are not evenly distributed across media. Here are a few selected examples, with their relative frequency in different media:

Next, I took advantage of my super-accurate date-guessing algorithm to plot the evolution of various tropes over time. Guys Smash, Girls Shoot is primarily found in video games, so it’s not surprising that it became more frequent over time. More surprising is the fact that Men Are the Expendable Gender increased so much in frequency in the last decades – given how harmful it is, you would expect the entertainment media to stop perpetuating it. The famous Damsel in Distress trope peaked in the 90s, possibly because it was the default scenario in video games of that era (I’ll admit I know very little about video games – I don’t usually play them – so please correct me if that’s wrong). It does not look like there are that many Damsels in Distress left nowadays. The Girl of the Week, which is how male heroes appear to have a new girlfriend in every episode, has become much less prevalent since the 90s, which is certainly a sign of progress.

Finally, here is a combined plot that shows how much each stereotype has changed between the pre-2000 era and the post-2000 era. I chose 2000 as a discontinuity point based on the plot above, but the results stay mostly the same if I move the threshold to other years.

Notice, in yellow, the “corrective” tropes, which are reversed versions of classic gender tropes. As you would expect, most of them became more common after 2000. To my surprise, the two corrective tropes that became less common are the Damsel Out of Distress and the Rebellious Princess, which both fit the “empowering girls” line of thought. On the other hand, tropes like Female Gaze or Non-Action Guy are thriving, even though they are less about empowerment and more of a race to the bottom.

Let me know what you think about all of this. Does it match your expectations? If you were a writer, what would you do? If there are further analyses or plots that you would like to see, don’t hesitate to ask in the comments. For instance, I can easily plot the evolution over time, or the distribution by medium, for other tropes than the ones I picked here.

PS: If you enjoy this kind of thing, check out this analysis of the vocabulary associated with men and women in literature on The Pudding. They did a great job blending data visualization into illustrations.

Update on 16 Nov: one commenter wanted to see the evolution of tropes related to double standards over time. Here is what it looks like:

## Celebrities, numerosity and the Weber-Fechner law

This article uses the net worth of celebrities as a practical example. Net worth values were shamelessly taken from celebritynetworth.com as of August 2020. They may fluctuate and become obsolete within days, but that does not change the point of the article. Also, I will assume that you, the reader, have a net worth of $0 (trust me, it’s not going to matter).

I.

I recently had a discussion with my brother about Cristiano Ronaldo becoming the first billionaire footballer ever. We were both surprised, but for opposite reasons. He was surprised that no footballer before had ever become a billionaire, while I was surprised that it was possible at all to reach one billion through football, even with associated income like advertising and clothing.

I think this disagreement gives some insight into the way we process large numbers. There are essentially two ways for humans to mentally handle quantities. One is called numeracy, and resorts to a set of symbols with rules that tell you how to work with them. The other is called numerosity, and is a kind of analogue scale we use to compare things without resorting to symbols.

To demonstrate that numerosity is more sophisticated than it looks, let’s do a thought experiment. Imagine you are in a large room with Jeff Bezos, the richest person in the world. There is a line painted on the floor, with numbers written on each end. One side is marked with a big $0, the other side is marked with “$190 billion”. Mmm, it looks like we are in a thought experiment where we have to stand on a line depending on our net worth, you think. As Jeff Bezos stands on the $190 billion mark, you reluctantly walk to the zero mark right next to the wall, where you belong. You see Bezos smirking at you from the other side.

Suddenly, the door opens, and a bunch of world-class football players enter the room. Intuitively, where do you think they will stand on the line?
This may come as a surprise, but compared to Jeff Bezos, the net worth of all these legendary footballers is not so different from yours (remember, you’re worth $0). Football players might be millionaires, but they are very unlikely to become billionaires, Cristiano Ronaldo being the exception. Thus, on a line from $0 to $190B, they are basically piled up right next to you. What about superstar singers?

Some singers become much richer than footballers, but they are still much closer to you than to Jeff Bezos. Let’s add a few famous billionaires. Like, people who are actually famous because they are billionaires.

Surprisingly, they are still very close to you in absolute value. Their wealth is still several orders of magnitude below Bezos. What happens if we look at big tech CEOs, like Elon Musk or Larry Page? Surely they belong to the same world as Bezos?

Now, this is indeed getting closer to Bezos. However, in absolute distance, they are still closer to you. Here is the punchline – the absolute wealth difference between Elon Musk and you is smaller than that between Elon Musk and Jeff Bezos. This becomes obvious once you realize that Bezos’s wealth is more than twice Musk’s.

II.

Why is this so counter-intuitive? It is because, unless we look carefully into the numbers, we compare all these large quantities on the numerosity scale, which is logarithmic. Musk has hundreds of thousands of times more money than you, and only about 3 times less money than Bezos. Since 3 is smaller than hundreds of thousands, you intuitively estimate that Musk is closer to Bezos than to you.

It makes sense: in the graphs above (which use linear scales), the dots for everybody under one billion are almost impossible to distinguish. If you wanted to display these people’s net worth in a readable way, you would need a log scale. In the case of wealth, a log scale is especially appropriate since wealth accumulation is a multiplicative process: the more dollars you already have, the easier it is to acquire one extra dollar. As a consequence, wealth is well-approximated by a log-normal distribution, whose mass is concentrated at low values: most values are lower than the average, but a few very high values drive the mean up. A typical feature of this kind of distribution is that the highest values fall very far from each other. That’s why the richest human in the world (Bezos) beats the second richest (currently Bill Gates, not shown on the graphs) by a margin of several billions.
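A quick simulation (assuming NumPy) makes this shape visible. Sampling a hypothetical “wealth” variable from a log-normal distribution with arbitrary parameters, the mean sits far above the median, dragged up by a handful of enormous values at the top:

```python
import numpy as np

rng = np.random.default_rng(7)
# Hypothetical wealth sample: multiplicative growth -> log-normal.
# (The parameters are arbitrary, chosen only to make the shape visible.)
wealth = rng.lognormal(mean=10, sigma=2, size=1_000_000)

print(np.median(wealth))     # the "typical" individual
print(np.mean(wealth))       # several times higher, pulled up by the top
print(np.sort(wealth)[-3:])  # the few richest dwarf everybody else
```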

But our perception of numbers as a log scale is not restricted to the wealth of celebrities. In fact, it appears to be a universal pattern in numerical cognition, called the Weber-Fechner law. Originally, this law is about sensory input, for example light intensity or sound loudness. But it also applies to counting objects:

In this picture (reprinted from Wikipedia), it is much easier to see the difference between 10 and 20 dots, than between 110 and 120 dots. We seem to have a logarithmic scale hard-wired into our brains.

III.

What really puzzles me about the Weber-Fechner law is that we are performing a logarithmic transformation intuitively, without thinking about it. There is evidence that it is rather innate: pre-school children have been shown to use a logarithmic number line before they learn about digital symbols. After a few years of schooling, children tend to switch away from the logarithmic line to a more linear number cognition system, which can be difficult. Eventually, in high school, they have to learn logarithms again, in an abstract formal way. Logarithms are notoriously difficult to teach (I know plenty of well-educated people who still struggle with them). This is a shame, because all these high-schoolers have been using log scales since they were young, without even realizing it.

## Trust your sample, not your sample of samples

The train is about to depart. Your ticket in your hand, you check your seat number, walk down the central aisle, find your seat and sit down next to another traveler. You look around to see what the other people in the wagon look like.

How many people were there in the wagon you just imagined? If you are like me, it was probably rather crowded, with few empty seats. However, according to these European data, the average occupancy rate of trains is only about 45%, so there should be more empty seats than occupied ones. What is going on?

The issue here is a simple statistical phenomenon: the sample of “all the trains you took in your life” is not quite representative of “all the trains”. The occupancy rate of trains varies all the time. Some trains will be much more crowded than average, some others will be almost empty. And – guess what – the more people there are in a train, the more likely you are to be one of them. A train packed with hundreds of customers will be observed by, well, hundreds of passengers, while the empty trains will not be observed at all. Thus, in your empirical sample, trains with n passengers will be over-represented n times compared to trains with only one passenger.

Here is a riddle: you want to estimate the average number of occupants in the trains that arrive at a station. To that end, you survey people leaving the station and ask how many people they saw in their train. If you simply took the mean of your sample, the average occupancy would be over-estimated, for the reason stated above. How do you calculate the unbiased occupancy rate? Assume every train has at least one occupant (this is necessary since empty trains are never observed, so their number could be virtually anything).

We have an observed distribution P_o(n) and we want to get back to the true distribution P_t(n). As we saw before:

P_o(n) = \frac{nP_t(n)}{\sum_{k}{kP_t(k)}}

Since \sum_{k}{P_t(k)} = 1, the true distribution is

P_t(n) = \frac{P_o(n)/n}{\sum_{k}{P_o(k)/k}}

And the mean occupancy of the trains is

\langle n \rangle = \frac{1}{\sum_{k}{\frac{P_o(k)}{k}}}

which turns out to be the harmonic mean of the observed sample.

The harmonic mean is typically used to average rates. The textbook example is calculating an average speed: if you write down the speed of a car once per kilometer, the average speed is the harmonic mean of your sample, not the arithmetic mean. This is because the car spends less time on the kilometers it traveled through very fast, so you need to account for that by giving less weight to those kilometers. This is in fact closely related to the train occupancy riddle: there, the harmonic mean gives more weight to the trains with fewer people in them, to compensate for the sampling bias.
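The train riddle can be checked with a short simulation (assuming NumPy; the occupancy distribution is made up). The naive mean of the passenger reports overshoots, while their harmonic mean recovers the true average occupancy:

```python
import numpy as np

rng = np.random.default_rng(3)
# True occupancies of 10,000 trains (each with at least one passenger):
true_occ = rng.integers(1, 201, size=10_000)

# Passengers' view: each train is reported once per person aboard,
# so a train with n passengers appears n times in the survey.
observed = np.repeat(true_occ, true_occ)

true_mean = true_occ.mean()
naive = observed.mean()                            # biased upward
harmonic = len(observed) / np.sum(1.0 / observed)  # harmonic mean

print(true_mean, naive, harmonic)  # naive overshoots, harmonic matches
```

Note that the recovery is exact here: each train contributes exactly n copies of 1/n to the denominator, so the harmonic mean reduces to (total passengers) / (number of trains).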

I don’t know if this statistical bias has a name (if you know, tell me in the comments). It occurs in a lot of situations. A prominent one is the fact that your average Facebook friend has more Facebook friends than average.

Consider how your Facebook friends are sampled: obviously, only people with at least one friend will appear in your sample. So all those idle accounts with no friends at all are already excluded. People with 100 friends are 10 times more likely to appear in your list than people with 10 friends. This leads to a big inflation of the average number of friends your friends have. To put it in a different way, if you have an average number of friends, it’s *perfectly normal* that you have fewer friends than your friends. So there is no need to worry about it.
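The arithmetic behind this is easy to check. Sampling “a friend” picks each person with probability proportional to their friend count, so the average friend count of friends works out to E[d²]/E[d], which is always at least E[d]. A sketch with hypothetical friend counts:

```python
import numpy as np

rng = np.random.default_rng(5)
# Hypothetical friend counts for 100,000 users (at least one friend each):
degrees = rng.geometric(p=0.01, size=100_000)

avg_friends = degrees.mean()
# A "friend of a friend" is sampled with probability proportional to
# their own friend count, so the friends' average is E[d^2] / E[d]:
avg_friends_of_friends = (degrees ** 2).sum() / degrees.sum()

print(avg_friends, avg_friends_of_friends)  # the second is larger
```

The gap E[d²]/E[d] − E[d] equals Var(d)/E[d], so the more unequal the friend counts, the stronger the paradox.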