Map of randomness

In this paper from 2012 (full text here), Leemis and McQueston show a diagram of how probability distributions are related to each other. I liked it so much that I extracted the chart from the PDF, turned it into a poster, and printed a giant version to stick on the wall of my apartment. I thought I would also share it here:

The full-size vector graphic version (as a PDF) can be downloaded here.

Some explanation

Things can be random in many different ways. It’s tempting to think “if it’s not deterministic, then it’s random and we don’t know anything about it”, but that would be wrong. There is an entire bestiary of probability distributions, with different shapes and different properties, that tell you how likely the possible outcomes are relative to each other. What’s interesting is that each distribution describes the outcome of a particular class of stochastic processes, so by looking at how something is distributed, you can better understand the process that created it. One can even combine simple processes or morph their parameters to build more complicated ones. The map above tells you how the probability distribution changes when you do that.

Let’s look at an example. You are typing on a keyboard. Every time you push a button, there is a certain probability p that you will hit the wrong one. This super simple process is called a Bernoulli process, and it corresponds to the Bernoulli distribution that you can find near the top-right corner of the map. Now you type a whole page, consisting of n characters. How many errors will you make? This is just a sum of n i.i.d. Bernoulli processes (i.i.d. means “Independent and Identically Distributed”; we are assuming your typos are independent from each other), so we look at the map, follow the arrow that says \sum{X_i}, and reach the binomial distribution. The number of errors per page follows a binomial distribution with mean np and variance np(1-p). If you write a book with 1000 characters per page and make one typo per hundred characters, the variance of the number of typos from page to page will be 1000 \times 0.01 \times 0.99 = 9.9. Isn’t that fascinating?
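If you want to check this for yourself, here is a minimal numpy sketch (the numbers and variable names are my own) that simulates many pages and recovers the mean np and the variance np(1-p):

```python
# Quick sanity check of the binomial mean and variance (numpy assumed).
import numpy as np

rng = np.random.default_rng(0)
n, p = 1000, 0.01  # characters per page, probability of a typo per character

# Each page is the sum of n i.i.d. Bernoulli trials:
typos_per_page = rng.binomial(n, p, size=100_000)

print(typos_per_page.mean())  # close to n*p = 10
print(typos_per_page.var())   # close to n*p*(1-p) = 9.9
```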

Let’s complicate things a little bit. Instead of using a typewriter, you are writing with a pen. From time to time, your pen will slip and make an ugly mark. How many ugly marks will you get per page? Again, the map has you covered: this time, instead of having n discrete button presses, we have an infinite number of infinitesimal opportunities for the pen to screw up, so n\to\infty, and p must also become infinitesimally small so that np stays finite, otherwise you would just be making an infinite number of ugly marks, and I know you are better than that. Thus, according to the map, the number of screwups per page follows a Poisson distribution. A handy property of the Poisson distribution is that the mean happens to be equal to the variance. So if your pen screws up 10 times per page on average, you also know the variance will be 10.
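Here is a hedged numpy sketch of that limit (mine, not from the paper): holding np = 10 fixed while n grows, the binomial sample converges to the Poisson, and the variance creeps up toward the mean:

```python
# The n -> infinity, p -> 0 limit with n*p held fixed (numpy assumed).
import numpy as np

rng = np.random.default_rng(1)
lam = 10.0  # expected number of screwups per page, held fixed

for n in (20, 1_000, 100_000):
    p = lam / n  # p shrinks as n grows so that n*p stays constant
    sample = rng.binomial(n, p, size=100_000)
    print(n, sample.mean(), sample.var())  # variance approaches the mean

poisson = rng.poisson(lam, size=100_000)
print("Poisson:", poisson.mean(), poisson.var())  # both close to 10
```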

You can go on and explore the map on your own (riddle: how is the amount of ink deposited by your pen per page distributed?). So far, I would say I have encountered only half of the map’s distributions in real life, so there is still a lot of terra incognita for me.

Quantifying gender stereotypes in pop culture

We have all noticed the gender stereotypes in films, books and video games, and we all know that they shape how we behave in real life (or is it the other way around?). But it would be nice to know how common these stereotypes really are. Intuitively, it’s tempting to resort to the availability heuristic, that is, to recall a bunch of films where you remember seeing a stereotype and assume that the number of examples you can find is proportional to its actual prevalence. But the availability heuristic is quite bad in general, especially for pop culture, where authors try to subvert your expectations all the time by replacing a stereotype with its exact opposite. Thus, it would be useful to put actual numbers on the frequency of various stereotypes in entertainment media before we make any extravagant claims about their importance.

But how do you measure stereotypes in pop culture? The only way would be to go over all the films, books, comics and theater plays, systematically list every single occurrence of every stereotype you see, and compile them into a large database. This would of course represent an astronomical amount of mind-numbingly boring work, and nobody in their right mind would ever want to do that.

But wait – that’s TVtropes! For reasons that I can’t fathom, a group of nerds on the Internet actually performed this mind-numbingly boring work and created a full wiki of every “trope” they could find, with associated examples. All that’s left to do is statistics.

Of course, editing TVtropes is not a systematic, unbiased process and there will be all kinds of biases, but it’s certainly better than just guessing based on the examples that come to mind. In addition, TVtropes has clear rules for what qualifies as a trope, and I believe they are enforced. Also, TVtropes is a “naturally-occurring” database: contributors were not trying to make any specific statement about gender stereotypes when they built the wiki, so there should not be too much ideological bias (compared to, say, a gender studies PhD student looking for evidence to back up their favorite hypothesis). I’m almost surprised it has not been used more often in the social sciences (I looked it up: somebody wrote a Master’s thesis about TVtropes, but it’s about how the wiki gets edited; they make no use of the content).

So I went ahead and wrote a TVtropes scraper. It goes through a portal (a page that lists all the tropes related to one topic), visits all the trope pages, then goes to the description of each medium that contains the trope. I even hacked together a small script to extract the publication date of the medium, looking for things like “in [4-digit number]”, “since [4-digit number]” and so on. It’s not 100% accurate, but it should be enough to see how the different stereotypes evolved over time.
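For the curious, here is roughly what the logic looks like, boiled down to a sketch. This is not the actual scraper (linked in the next paragraph); the portal URL and the link filter are placeholders, not TVtropes’ real structure, and requests and BeautifulSoup are assumed to be installed:

```python
# Stripped-down sketch of the scraping logic, with placeholder URL and filter.
import re
import time
import requests
from bs4 import BeautifulSoup

def get_soup(url):
    html = requests.get(url, timeout=10).text
    return BeautifulSoup(html, "html.parser")

def trope_links(portal_url):
    """Collect links to trope pages from a portal page (the filter is a guess)."""
    soup = get_soup(portal_url)
    return [a["href"] for a in soup.find_all("a", href=True)
            if "/Main/" in a["href"]]

# The date heuristic from the post: look for "in 1987", "since 2003", etc.
DATE_RE = re.compile(r"\b(?:in|since|from)\s+((?:19|20)\d{2})\b")

def guess_year(description):
    match = DATE_RE.search(description)
    return int(match.group(1)) if match else None

for link in trope_links("https://tvtropes.org/..."):  # placeholder portal URL
    time.sleep(1)  # throttle requests to stay polite
    ...            # visit the trope page, list its media, guess their dates
```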

I then ran my script on a large portal page called the Gender Dynamic Index, which has all the tropes related to gender in one place. Scraping it and the pages it links to took about one full day, because TVtropes kept banning me for making too many requests. Sorry for that, TVtropes. Anyway, the scraper code can be found here, and the dataset in CSV format is here. Using this dataset (a hypothetical loading sketch follows the list), we can look into the following questions:

  • What are the most common tropes about female characters? About male characters?
  • Are some tropes more common in specific media, like video games, television or manga?
  • How did trope frequency evolve over time? Did some new tropes emerge in the last decades? Which old tropes went out of fashion?
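Here is a hypothetical starting point for exploring the CSV with pandas; the file name and the column names (trope, medium, year) are my guesses, not necessarily the actual schema of the dataset linked above:

```python
# Exploring the dataset (file and column names are assumptions).
import pandas as pd

df = pd.read_csv("tvtropes_gender.csv")  # placeholder file name

# Most referenced tropes overall:
top = df.groupby("trope").size().sort_values(ascending=False).head(50)

# One trope's spread across media:
by_medium = df[df["trope"] == "Damsel in Distress"].groupby("medium").size()

# Evolution over time, using the guessed publication year:
by_year = df.dropna(subset=["year"]).groupby("year").size()

print(top.head(10), by_medium, by_year.tail(10), sep="\n\n")
```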

As a sanity check, here is how the different media are represented in my dataset for each year. You can see the rise of video games starting in the 1980s, so my attempt at extracting the dates is not so bad. There also seem to be a few video games as early as 1960, which is weird. Maybe they are just video games whose story takes place in the sixties and my script got confused.

So what does pop culture say about women? Here are the top 50 tropes, ranked by the number of examples referenced on their wiki page. You can find an absurd amount of detail about any given trope on its dedicated TVtropes page (example).

And this is the top 50 for men:

I was a bit surprised to find “Drowning My Sorrows” so high in the list of stereotypes about men. It’s about how, in fiction, men tend to drink alcohol when they are sad. Interestingly, this one is equally frequent in all kinds of media, even cartoons (that being said, I don’t know how many of these are children’s cartoons; it is possible that TVtropes contributors are more likely to mention cartoons for an adult audience). That does not sound like a very healthy message.

TVtropes also has a special category for tropes that contrast men and women. Here they are:

The tropes are not evenly distributed across media. Here are a few selected examples, with their relative frequency in different media:

Next, I took advantage of my super-accurate date-guessing algorithm to plot the evolution of various tropes over time. Guys Smash, Girls Shoot is primarily found in video games, so it’s not surprising that it became more frequent over time. More surprising is the fact that Men Are the Expendable Gender increased so much in frequency in the last decades; given how harmful it is, you would expect the entertainment media to stop perpetuating it. The famous Damsel in Distress trope peaked in the 90s, possibly because it was the default scenario in video games of that era (I’ll admit I know very little about video games, I don’t usually play them, so please correct me if that’s wrong). It does not look like there are that many Damsels in Distress left nowadays. The Girl of the Week, which is how male heroes appear to have a new girlfriend in every episode, has become much less prevalent since the 90s, which is certainly a sign of progress.

Finally, here is a combined plot that shows how much each stereotype has changed between the pre-2000 era and the post-2000 era. I chose 2000 as a discontinuity point based on the plot above, but the results stay mostly the same if I move the threshold to other years.
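In code, the comparison might look like this (again with my guessed file and column names, so treat it as a sketch):

```python
# Pre- vs post-2000 frequency of each trope (guessed columns, as above).
import numpy as np
import pandas as pd

df = pd.read_csv("tvtropes_gender.csv")  # same placeholder file as above
dated = df.dropna(subset=["year"]).assign(
    era=lambda d: np.where(d["year"] < 2000, "pre-2000", "post-2000"))

counts = dated.groupby(["trope", "era"]).size().unstack(fill_value=0)
change = counts["post-2000"] / counts["pre-2000"].clip(lower=1)
print(change.sort_values(ascending=False).head(10))  # the biggest risers
```

A real version would also normalize by the total number of examples in each era, since recent media are presumably better covered on the wiki.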

Notice, in yellow, the “corrective” tropes, which are reversed versions of classic gender tropes. As you would expect, most of them became more common after 2000. To my surprise, the two corrective tropes that became less common are the Damsel Out of Distress and the Rebellious Princess, which both fit the “empowering girls” line of thought. On the other hand, tropes like Female Gaze or Non-Action Guy are thriving, even though they are less about empowerment and more of a race to the bottom.

Let me know what you think about all of this. Does it match your expectations? If you were a writer, what would you do? If there are further analyses or plots that you would like to see, don’t hesitate to ask in the comments. For instance, I can easily plot the evolution over time, or the distribution by medium, for tropes other than the ones I picked here.

PS: If you enjoy this kind of thing, check out this analysis of the vocabulary associated with men and women in literature on The Pudding. They did a great job blending data visualization into illustrations.


Update on 16 November: a commenter wanted to see the evolution of tropes related to double standards over time. Here is what it looks like:

Celebrities, numerosity and the Weber-Fechner law

This article uses the net worth of celebrities as a practical example. Net worth values were shamelessly taken from celebritynetworth.com as of August 2020. They may fluctuate and become obsolete within days, but that does not change the point of the article. Also, I will assume that you, the reader, have a net worth of $0 (trust me, it’s not going to matter).

I.

I recently had a discussion with my brother about Cristiano Ronaldo becoming the first billionaire footballer ever. We were both surprised, but for opposite reasons. He was surprised that no footballer had ever become a billionaire before, while I was surprised that it was possible at all to reach one billion through football, even with associated income like advertising and clothing deals. I think this disagreement gives some insight into the way we process large numbers. There are essentially two ways for humans to mentally handle quantities: one is called numeracy and relies on a set of symbols together with rules that tell you how to work with them. The other is called numerosity and is a kind of analogue scale we use to compare things without resorting to symbols. To demonstrate that numerosity is more sophisticated than it looks, let’s do a thought experiment.

Imagine you are in a large room with Jeff Bezos, the richest person in the world. There is a line painted on the floor, with numbers written at each end. One side is marked with a big 0, the other side is marked with “$190 billion”. Mmm, it looks like we are in a thought experiment where we have to stand on the line depending on our net worth, you think. As Jeff Bezos stands on the $190 billion mark, you reluctantly walk to the zero mark right next to the wall, where you belong.

You see Bezos smirking at you from the other side. Suddenly, the door opens, and a bunch of world-class football players enter the room. Intuitively, where do you think they will stand on the line?

This may come as a surprise, but compared to Jeff Bezos, the net worth of all these legendary footballers is not so different from yours (remember, you’re worth $0). Football players might be millionaires, but they are very unlikely to become billionaires, Cristiano Ronaldo being the exception. Thus, on a line from $0 to $190B, they are basically piled up right next to you. What about superstar singers?

Some singers become much richer than footballers, but they are still much closer to you than to Jeff Bezos. Let’s add a few famous billionaires. Like, people who are actually famous because they are billionaires.

Surprisingly, they are still very close to you in absolute value. Their wealth is still orders of magnitude below Bezos’s. What happens if we look at big tech CEOs, like Elon Musk or Larry Page? Surely they belong to the same world as Bezos?

Now, this is indeed getting closer to Bezos. However, in absolute distance, they are still closer to you. Here is the punchline: the absolute wealth difference between Elon Musk and you is smaller than the one between Elon Musk and Jeff Bezos. This becomes obvious once you realize Bezos’s wealth is more than twice Musk’s.

II.

Why is this so counter-intuitive? It’s because, unless we look carefully at the numbers, we compare all these large quantities on the numerosity scale, which is logarithmic. Musk has hundreds of thousands of times more money than you, and only 3 times less money than Bezos. Since 3 is smaller than hundreds of thousands, you intuitively place Musk closer to Bezos than to you.

It makes sense: in the graphs above (which use linear scales), the dots for everybody under one billion are almost impossible to distinguish. If you wanted to display these people’s net worth in a readable way, you would need a log scale. In the case of wealth, a log scale is especially appropriate since wealth accumulation is a multiplicative process: the more dollars you already have, the easier it is to acquire one extra dollar. In consequence, wealth is well-approximated by a log-normal distribution, which is strongly skewed: most values are lower than the mean, but a few very high values drive the mean up. A typical feature of this kind of distribution is that the highest values fall very far from each other. That’s why the richest human in the world (Bezos) beats the second richest (currently Bill Gates, not shown on the graphs) by a margin of several billion dollars.
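A minimal simulation of that skew (the parameters are arbitrary, not fitted to any wealth data):

```python
# Log-normal skew: the mean sits far above the median, and the top
# values are far apart from each other (numpy assumed).
import numpy as np

rng = np.random.default_rng(2)
wealth = rng.lognormal(mean=0.0, sigma=2.0, size=1_000_000)

print(np.mean(wealth))                  # pulled up by the long right tail
print(np.median(wealth))                # ~ exp(0) = 1, far below the mean
print(np.mean(wealth < wealth.mean()))  # ~0.84: most values are below the mean

top_two = np.sort(wealth)[-2:]
print(top_two[1] - top_two[0])          # the two highest draws are far apart
```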

But our perception of numbers as a log scale is not restricted to the wealth of celebrities. In fact, it appears to be a universal pattern in numerical cognition, called the Weber-Fechner law. Originally, this law is about sensory input, for example light intensity or sound loudness. But it also applies to counting objects:

In this picture (reprinted from Wikipedia), it is much easier to see the difference between 10 and 20 dots than between 110 and 120 dots. We seem to have a logarithmic scale hard-wired into our brains.
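One way to see why: under the Weber-Fechner law, the perceived gap between two quantities depends only on their ratio, i.e. on the difference of their logarithms. Here is a two-line check (the function and the constant k are mine, for illustration):

```python
import math

def perceived_difference(a, b, k=1.0):
    # Weber-Fechner: perceived intensity ~ k * log(stimulus), so the
    # perceived gap between two stimuli depends only on their ratio.
    return k * abs(math.log(b / a))

print(perceived_difference(10, 20))    # ~0.69: easy to tell apart
print(perceived_difference(110, 120))  # ~0.09: nearly indistinguishable
```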

III.

What really puzzles me about the Weber-Fechner law is that we perform this logarithmic transformation intuitively, without thinking about it. There is evidence that it is largely innate: pre-school children have been shown to use a logarithmic number line before they learn about numerical symbols. After a few years of schooling, children switch away from the logarithmic line to a more linear number cognition system, which can be difficult. Eventually, in high school, they have to learn logarithms again, in an abstract, formal way. Logarithms are notoriously difficult to teach (I know plenty of well-educated people who still struggle with them). This is a shame, because all these high-schoolers have been using log scales since they were young, without even realizing it.

Trust your sample, not your sample of samples

The train is about to depart. Ticket in hand, you check your seat number, walk down the center aisle, find your seat and sit down next to another traveler. You look around to see what the other people in the car look like.

How many people were there in the car you just imagined? If you are like me, it was probably rather crowded, with few empty seats. However, according to these European data, the average occupancy rate of trains is only about 45%, so there should be more empty seats than occupied ones. What is going on?

The issue here is a simple statistical phenomenon: the sample of “all the trains you took in your life” is not quite representative of “all the trains”. The occupancy rate of trains varies all the time: some trains will be much more crowded than average, others will be almost empty. And, guess what, the more people there are in a train, the more likely you are to be one of them. A train packed with hundreds of people will be observed by, well, hundreds of passengers, while the empty trains will not be observed at all. Thus, in your empirical sample, trains with n passengers will be over-represented n times compared to trains with only one passenger.

Here is a riddle: you want to estimate the average number of occupants of the trains that arrive at a station. To that end, you survey people leaving the station and ask how many people they saw in their train. If you took the mean of your sample, the average occupancy would be over-estimated, for the reason stated above. How do you calculate an unbiased occupancy rate? Assume every train has at least one occupant (this is necessary since empty trains are never observed, so their number could be virtually anything).

We have an observed distribution P_o(n) and we want to get back to the true distribution P_t(n). As we saw before:

P_o(n) = \frac{nP_t(n)}{\sum_{k}{kP_t(k)}}

Since \sum_{k}{P_t(k)} = 1, the true distribution is

P_t(n) = \frac{P_o(n)/n}{\sum_{k}{P_o(k)/k}}

And the mean occupancy of the trains is

\langle n \rangle = \frac{1}{\sum_{k}{\frac{P_o(k)}{k}}}

which turns out to be the harmonic mean of the observed sample.
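If you don’t trust the algebra, a simulation (my own sketch, with an arbitrary true distribution) shows the naive mean overshooting while the harmonic mean lands on the true value:

```python
# Size-biased sampling: surveyed passengers over-represent crowded trains,
# and the harmonic mean of their answers recovers the true average (numpy).
import numpy as np

rng = np.random.default_rng(3)
occupancy = rng.integers(1, 200, size=10_000)  # true occupants per train

# Each train with n occupants yields n survey answers, all equal to n:
answers = np.repeat(occupancy, occupancy)

print(occupancy.mean())                      # the number we want
print(answers.mean())                        # naive mean: biased upward
print(len(answers) / np.sum(1.0 / answers))  # harmonic mean: matches the truth
```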

The harmonic mean is typically used to average rates. The textbook example is calculating the average speed of something: if you write down the speed of a car once per kilometer, the average speed is the harmonic mean of your sample, not the arithmetic mean (drive 1 km at 60 km/h and 1 km at 30 km/h: you cover 2 km in 3 minutes, i.e. 40 km/h, the harmonic mean of 60 and 30). This is because the car spends less time on the kilometers that it traveled through very fast, so you need to give those kilometers less weight. This is in fact closely related to the train occupancy riddle: there, the harmonic mean gives more weight to the trains with fewer people in them, to compensate for the sampling bias.

I don’t know if this statistical bias has a name (if you know, tell me in the comments). It occurs in a lot of situations. A prominent one is the fact that your average Facebook friend has more Facebook friends than average.

Consider how your Facebook friends are sampled: obviously, only people with at least one friend can appear in your sample, so all those idle accounts with no friends at all are already excluded. And people with 100 friends are 10 times more likely to appear in your list than people with 10 friends. This leads to a big inflation of the average number of friends your friends have. To put it differently, if you have an average number of friends, it is *perfectly normal* that you have fewer friends than your friends. So there is no need to worry about it.
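Here is a toy check of that effect on a random graph with a heavy-tailed degree distribution (networkx is assumed to be installed; the graph model is my choice, not real Facebook data):

```python
# The "your friends have more friends than you" effect on a scale-free graph.
import networkx as nx
import numpy as np

g = nx.barabasi_albert_graph(10_000, 3, seed=0)
degrees = np.array([deg for _, deg in g.degree()])

# For each person, the average friend count among their friends:
friends_of_friends = [np.mean([g.degree(u) for u in g.neighbors(v)])
                      for v in g.nodes()]

print(degrees.mean())               # average number of friends (~6 here)
print(np.mean(friends_of_friends))  # noticeably larger, as predicted
```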