A Random Clock

I may have found a solution to one of my biggest, longest-standing, most irredeemable problems. For most of my life, I have been consistently late. Whether it’s keeping appointments, attending events, taking trains or joining a Zoom call, I’m typically 10 minutes late for everything, and it’s ruining my life – not because I actually miss the train (though that happens too) but because I’m constantly rushing and panicking. Whatever I do, I start it in a state of maximum stress and guilt. Obviously, I have tried pretty much everything to address the problem, including various artificial rewards and punishments, telling a therapist about it, having people call me to remind me to get ready, taking nootropics, and many more ridiculous ideas. So I thought, “How do all these well-adjusted adults manage to be perfectly on time all the time?” and I did what any well-adjusted normie would do: I tried to formally frame the problem in terms of expected utility theory.

Tricking myself: single-player game theory

Imagine I have to attend a very important scientific conference on the effect of dubstep on mosquitos. The figure below plots how much I enjoy the event depending on the time I arrive.

Arriving early by ten minutes or one hour does not make any difference (or so I presume – this has never happened to me). Being just a few minutes late is not a big deal either, since it’s just going to be the speaker testing her microphone or other formalities of no importance. Beyond that, it starts becoming really rude (with some variation depending on which culture you live in) and I risk missing some crucial information, like the definition of a concept central to understanding the equations of mosquitos’ taste for Skrillex.

The second aspect of the problem is how much time I can save by arriving later, which is just a straight line:

Why would I arrive ten minutes early to the Skrillex-as-a-cure-for-dengue talk, when I could spend ten more minutes reading about exorcism under fMRI? Summing both aspects of the problem, the grand unified utility curve looks something like this:

There you have it: the utility peak, the most rational outcome, is obtained by being just a few minutes late. I suppose for most people, this basically means you should arrive on time, since the peak is not that far from the start of the event. But chronically late people like myself have a distorted vision of the utility curves, which looks more like this:

This might look like a desperate situation, but there is one spark of hope: even in this wildly distorted version of the utility function, the downward part of the curve (the problems caused by being late) is much steeper than the upward part (the time saved by being late). This asymmetry makes it possible to change the location of the peak by adding some uncertainty, in the form of a random clock. Let me explain.

A rookie approach to not being late is to set your watch 10 minutes ahead. This way, it “looks” like you’re already 10 minutes late when you are actually on time, which might make you speed up through some obscure psychological mechanism. Of course, this does not work since you know perfectly well your clock is 10 minutes fast, and you compensate accordingly. But what if you ask a friend to shift your watch by a random number of minutes, between 0 and 10? Then, you don’t know how much to compensate. Coming back to the utility function above, we are effectively blurring it out. Here is what happens:

Thanks to the asymmetry of the original peak, the maximum utility is now shifted to the left! Say the mosquito conference starts at 8:00, and the random clock says 7:59. Best-case scenario, the clock is 10 minutes fast and I still have 11 minutes left, so everything is fine and I can take my time. Worst-case scenario, the clock is exactly on time and the show starts in one minute, so I can’t afford to wait any longer. Since I would rather be 10 minutes early than 10 minutes late, I stop reading this very important exorcism paper and hurry to the conference room.
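To convince myself that this actually works, here is a small simulation in Python. The utility curve below is entirely made up (a shallow “time saved” slope plus a steep penalty once I’m more than a couple of minutes late); only the asymmetry matters, and the numbers are not meant to match the figures above.

import numpy as np

def utility(lateness):
    """Toy utility of arriving `lateness` minutes after the start (negative = early):
    a shallow bonus for time saved, plus a steep penalty for being more than
    a couple of minutes late."""
    time_saved = 0.2 * lateness
    penalty = np.where(lateness > 2, -2.0 * (lateness - 2), 0.0)
    return time_saved + penalty

# The watch is fast by an unknown offset U ~ Uniform(0, 10) minutes, so walking in
# when it reads r minutes past the start means a true lateness of r - U. The curve
# to optimize is therefore the original one, blurred by that uniform kernel.
readings = np.linspace(-30, 30, 1201)   # what the watch says, in minutes
offsets = np.linspace(0, 10, 201)       # possible clock offsets
blurred = np.mean([utility(readings - u) for u in offsets], axis=0)

best_plain = readings[np.argmax(utility(readings))]
best_reading = readings[np.argmax(blurred)]
avg_arrival = best_reading - offsets.mean()   # true lateness, averaged over the offsets
print(f"honest clock: best arrival at {best_plain:+.1f} min (slightly late)")
print(f"random clock: aim for a reading of {best_reading:+.1f} min, "
      f"i.e. arrive {avg_arrival:+.1f} min on average (early)")

With these toy numbers, the honest-clock optimum is a couple of minutes late, while the random clock pushes the average arrival a couple of minutes early – the leftward shift of the peak described above.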

Self-blinding in practice

In the early development phase, I asked a trusted friend to pick a number between 0 and 10 and shift my watch forward by this amount without telling me. This was for prototyping only, since it has some disadvantages:

  • I don’t want to ask friends to change my watch all the time, especially if I have to explain the reasoning behind it every time,
  • My friend could totally troll me in various ways, like shifting my clock two hours in the future. I’m clueless enough not to notice. But she is an amazing person and did not do that.

Then, I switched to this very simple Python script:

#!/usr/bin/python3
import time, random
# Print the current time shifted forward by a random amount between 0 and 10 minutes
print(time.ctime(time.time() + 60 * 10 * random.random()))

It takes the current time, draws a random number of minutes between 0 and 10, and adds it to the time.

I have an advantage for this project: I wear a wristwatch at all times. This makes the practical implementation of the random clock much easier – I just need to shift my wristwatch, and rely exclusively on it without ever looking at any other clock. I also have an alarm clock and a regular clock on the wall of my room, so I simply shifted them to match my watch. I also had clocks on my computer and my phone, and there is surely a way to shift them too, but I was lazy and just disabled the time display on both devices. (In hindsight, I think removing the clock from computers and smartphones is a healthy decision in its own right, as it forces you to get your eyes off the screen from time to time; you should give it a try.) Here is my full randomization procedure:

  • Scramble my watch and alarm clock by a large, arbitrary amount, so I can’t read the true time when I randomize them,
  • Wait until I can no longer tell what time it is (to within a 10-minute margin of error),
  • Run the script,
  • Set my watch and clocks to the time prescribed by the script.

And then, it is all about avoiding looking at the various clocks in my environment that display the true time (sometimes the microwave will just proudly display the time without warning). Who will win – my attempt at deliberately adding uncertainty to the world, or my microwave? Let’s do the experiment.

Putting a number on it

For a few days before and after trying out the random clock, I kept track of the time at which I arrived at various appointments and events. For the random phase, I would just write down the raw time displayed on my watch; then, before re-randomizing it, I would check what the shift was and subtract it from the recorded times to know when I really arrived. My astonishing performance can be witnessed in the figure below:

The horizontal segments represent the medians. As you can see, I went from a median lateness of nine minutes to only one minute. I’m still not perfectly calibrated, but this might be the first time in my whole life I am so close to being on time, so I’d consider this a success. In both series, there are a few outliers where I was very, very late (up to 35 min), but those are due to larger problems – for example, the green outlier was when my bicycle broke and I had to go to a band rehearsal on foot. Apparently, I am so bad at managing time that my lateness undergoes black swan events.

Contrary to what I expected, it is very easy to just stop looking at all the clocks in the outside world and rely only on my watch. Of course, the world is full of danger, and sometimes I caught a glimpse of whatever wild clock someone carelessly put in my way. In that case, I just had to avoid checking my watch for a few minutes to avoid breaking the randomization. A bigger problem is seeing when events actually start. Whether I like it or not, my system 1 can’t help but infer things about the real time by seeing when other people arrive, or when the conference actually starts, or when some !#$@ says “alright, it’s 10:03, should we start?”. If this narrows the distribution too much, I have to randomize again. I did not find it to be a major problem, only having to re-randomize about once a week. In fact, when I revealed the real shift to myself before re-randomizing, I often found that what I had inferred about the true time was completely wrong. Thus, even if I believe I’ve inferred the real time from external clues, I can tell myself it’s probably not accurate anyway. This only makes my scheme stronger.

A continuously-randomizing clock

Since no randomization is eternal, am I doomed to re-randomize every week or so for the rest of my life? There is actually a pretty simple solution to avoid this, which is to use a continuously-randomizing clock. Instead of being manually randomized from time to time, the clock constantly drifts back and forth between +0 min and +10 min by slightly tweaking the length of each second. A very simple way to do that is to add a sine function to the real time:

#!/usr/bin/python3
import time, math

real_time = time.time()
shift = (1 + math.sin(real_time / 1800)) / 2  # between 0 and 1, with a period of pi hours
wrong_time = real_time + shift * 60 * 10      # add between 0 and 10 minutes
print(time.ctime(wrong_time))

In this example, the clock shift will oscillate between 0 and 10 minutes, with a period of π hours. Of course it is not really random anymore, but it does not matter, since we are just trying to trick our system 1 so it cannot figure out the real time against our will. Finding the real time might be possible with some calculations, but those would involve your system 2, and that one is supposed to be under your control. All that matters is that the oscillation period is not an obvious multiple of one hour. The snippet above uses a period of π hours, which is not even a rational number, so we are pretty safe.

The advantage of using a sine function rather than a fancy random variable is that it is magically synchronized across all clocks that use the same formula. If you use this on two different computers, they will both give the same (wrong) time, without any internet connection. As I said, I am fine with my old analog watch, but if you are the kind of person who uses a smartwatch, give it a try and tell me how it went. Or perhaps I will try to build one of these Arduino watches.

In my tests, I found that my archaic wristwatch-based system is already good enough for my own usage, so I will stick to it for the moment. Maybe it will keep on working, or maybe the effect will fade once the novelty wears off. It is also possible that I was simply more careful than usual because I really wanted the experiment to succeed. Maybe I will get super good at picking up every clue to guess the real time. I will update this post with the latest developments. Anyways, there is something paradoxical about manipulating oneself by deliberately adding uncertainty – a perfectly rational agent would always want more accurate information about the world, and would never deliberately introduce randomness. But I am not a perfectly rational agent, I did introduce uncertainty, and it worked.

Quantified Pop Culture

We all noticed the gender stereotypes in films, books and video games, and we all know that they shape how we behave in real life (or is it the other way around?). But it would be nice to know how common these stereotypes really are. Intuitively, it’s tempting to resort to the availability heuristic, that is, to recall a bunch of films where you remember seeing a stereotype and assume that the number of examples you can find is proportional to its actual prevalence. But the availability heuristic is quite bad in general, especially for pop culture, where authors try to subvert your expectations all the time by replacing a stereotype with its exact opposite. Thus, it would be useful to put actual numbers on the frequency of various stereotypes in entertainment media before we make any extravagant claims about their importance.

But how do you measure stereotypes in pop culture? The only way would be to go over all the films, books, comics and theater plays, systematically list every single occurrence of every stereotype you see, and compile them into a large database. This would of course represent an astronomical amount of mind-numbingly boring work, and nobody in their right mind would ever want to do that.

But wait – that’s TVtropes! For reasons that I can’t fathom, a group of nerds over the Internet actually performed this mind-numbingly boring work and created a full wiki of every “trope” they could find, with associated examples. All that’s left to do is the statistics.

Of course, editing TVtropes is not a systematic, unbiased process and there will be all kinds of biases, but it’s certainly better than just guessing based on the examples that come to mind. In addition, TVtropes has clear rules for what qualifies as a trope or not, and I believe they are enforced. Also, TVtropes is a “naturally occurring” database – contributors were not trying to make any specific statement about gender stereotypes when they built the wiki, so there should not be too much ideological bias (compared to, say, a gender studies PhD student looking for evidence to back up their favorite hypothesis). I’m almost surprised it has not been used more often in the social sciences. (I looked it up: somebody wrote a Master’s thesis about TVtropes, but it is about how the wiki gets edited; they make no use of the content.)

So I went ahead and wrote a TVtropes scraper. It goes through a portal (a page that lists all the tropes related to one topic), visits all the trope pages, then goes to the description of each work that contains the trope. I even hacked together a small script to extract the publication date of each work, looking for things like “in [4-digit number]”, “since [4-digit number]” and so on. It’s not 100% accurate, but it should be enough to see how the different stereotypes evolved over time.
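My actual scraper is linked in the next paragraph; the snippet below is only a rough sketch of the idea, and the portal URL, the CSS selector and the exact date patterns are placeholders I made up for illustration, not what the real script uses.

import re
import time
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

YEAR = re.compile(r"\b(?:in|since|from)\s+((?:18|19|20)\d{2})\b")

def fetch(url, delay=5.0):
    """Download a page, then sleep so we don't hammer the wiki."""
    page = requests.get(url, headers={"User-Agent": "trope-stats hobby scraper"})
    page.raise_for_status()
    time.sleep(delay)
    return BeautifulSoup(page.text, "html.parser")

def guess_year(description):
    """Guess a work's publication year from phrases like 'in 1994' or 'since 2007'."""
    years = [int(y) for y in YEAR.findall(description)]
    return min(years) if years else None

# Hypothetical entry point: a portal page listing every trope page for one topic.
portal_url = "https://tvtropes.org/pmwiki/pmwiki.php/Main/SomePortal"
portal = fetch(portal_url)
trope_urls = {urljoin(portal_url, a["href"]) for a in portal.select("a.twikilink")}

The sleep between requests is the important part – as you will see in the next paragraph, TVtropes does not appreciate being hammered.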

I then ran my script on a large portal page called the Gender Dynamic Index, which has all the tropes related to gender in one place. Scraping it and the pages it links to took about one full day, because TVtropes kept banning me for making too many requests. Sorry for that, TVtropes. Anyways, the scraper code can be found here, and the dataset in CSV format is here. Using this dataset, we can look into the following questions (a sketch of how to query the CSV follows the list):

  • What are the most common tropes about female characters? About male characters?
  • Are some tropes more common in specific media, like video games, television or manga?
  • How did trope frequency evolve over time? Did some new tropes emerge in the last decades? Which old tropes went out of fashion?
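As a taste of how these questions translate into queries, here is a minimal sketch. The file name and the column names (trope, gender, medium, year) are my guesses at a plausible schema, not necessarily what the actual CSV uses.

import pandas as pd

# One row per (trope, work) example; the column names are assumed, check the real CSV.
df = pd.read_csv("gender_tropes.csv")

# Question 1: most common tropes about female characters, ranked by number of examples.
top_female = (
    df[df["gender"] == "female"]
    .groupby("trope")
    .size()
    .sort_values(ascending=False)
    .head(50)
)
print(top_female)

# Question 2: the same counts, broken down by medium.
by_medium = df.groupby(["trope", "medium"]).size().unstack(fill_value=0)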

As a sanity check, here is how the different media are represented in my dataset for each year. You can see the rise of video games starting in the 1980s, so my attempt at extracting the dates is not so bad. There also seem to be a few video games as early as 1960, which is weird. Maybe they are just video games whose story takes place in the sixties and my script got confused.

So what does pop culture say about women? Here are the top 50 tropes, ranked by the number of examples referenced on their wiki page. You can find an absurd amount of detail about any given trope on the dedicated TVtropes page (example).

And this is the top 50 for men:

I was a bit surprised to find “drowning my sorrows” so high in the list of stereotypes about men. It’s about how, in fiction, men tend to drink alcohol when they are sad. Interestingly, this one is equally frequent in all kinds of media, even cartoons (that being said, I don’t know how many of these are children’s cartoons; it is possible that TVtropes contributors are more likely to mention cartoons aimed at an adult audience). That does not sound like a very healthy message.

TVtropes also has a special category for tropes that contrast men and women. Here they are:

The tropes are not evenly distributed across media. Here are a few selected examples, with their relative frequency in each medium:

Next, I took advantage of my super-accurate date-guessing algorithm to plot the evolution of various tropes over time. Guys Smash, Girls Shoot is primarily found in video games, so it’s not surprising that it became more frequent over time. More surprising is the fact that Men Are the Expendable Gender increased so much in frequency in the last decades – given how harmful it is, you would expect the entertainment media to stop perpetuating it. The famous Damsel in Distress trope peaked in the 90s, possibly because it was the default scenario in video games of that era (I’ll admit I know very little about video games, I don’t usually play them, so please correct me if that’s wrong). It does not look like there are that many Damsels in Distress left nowadays. The Girl of The Week, which is how male heroes appear to have a new girlfriend in every episode, has become much less prevalent since the 90s, which is certainly a sign of progress.
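For what it’s worth, this kind of over-time plot only takes a few lines of pandas. As before, the file name and column names are guesses, and normalizing by the total number of dated examples per decade is my choice; the original plots may be built differently.

import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("gender_tropes.csv").dropna(subset=["year"])  # assumed schema
df["decade"] = (df["year"] // 10 * 10).astype(int)

examples_per_decade = df.groupby("decade").size()
damsel = df[df["trope"] == "Damsel in Distress"].groupby("decade").size()

# Plot the trope's share of all dated examples, so that the growth of the wiki
# itself (and of video games) does not masquerade as a trend.
(damsel / examples_per_decade).fillna(0).plot(marker="o")
plt.ylabel("share of examples")
plt.title("Damsel in Distress over time")
plt.show()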

Finally, here is a combined plot that shows how much each stereotype has changed between the pre-2000 era and the post-2000 era. I chose 2000 as a discontinuity point based on the plot above, but the results stay mostly the same if I move the threshold to other years.

Notice, in yellow, the “corrective” tropes, which are reversed versions of classic gender tropes. As you would expect, most of them became more common after 2000. To my surprise, the two corrective tropes that became less common are the Damsel Out of Distress and the Rebellious Princess, which both fit the “empowering girls” line of thought. On the other hand, tropes like Female Gaze or Non-Action Guy are thriving, even though they are less about empowerment and more of a race to the bottom.

Let me know what you think about all of this. Does it match your expectations? If you were a writer, what would you do? If there are further analyses or plots that you would like to see, don’t hesitate to ask in the comments. For instance, I can easily plot the evolution over time, or the distribution by medium, for other tropes than the ones I picked here.

PS: If you enjoy this kind of thing, check out this analysis of the vocabulary associated with men and women in literature on The Pudding. They did a great job blending data visualization into illustrations.


Update on 16 Nov: one commenter wanted to see the evolution of tropes related to double standards over time. Here is what it looks like: