Second-order selection against the immortal

In his recent review of Lifespan, Scott Alexander writes:

Algernon’s Law says there shouldn’t be easy gains in biology. Your body is the product of millions of years of evolution – it would be weird if some drug could make you stronger, faster, and smarter. Why didn’t the body just evolve to secrete that drug itself?

He is talking about anti-aging research, and wondering why, if there is an easy way to stop aging, humans haven’t already evolved immortality spontaneously. There are many relevant things to say about this, but I think the evolutionary perspective is particularly interesting. Under some circumstances, it might be that immortality is inherently unstable.

The Imperium and the Horde

Suppose that it’s the future, and the FDA just approved a pill that makes you immortal. Of course, people disagree about whether one should take the pill or not. As a result, humanity is now divided into two populations: the Immortal Imperium, who took the immortality pill, and the Horde of Death, who still experience the painful decay and death we all know and love.

Artist’s depiction of the Horde of Death.

So, people from the Horde spend their time having plenty of children to populate the next generation, while people in the Immortal Imperium try to escape their existential ennui by reading speculative blog posts on the Internet. Who will prevail?

Two orders of fitness

There are two competing phenomena at play here. One is first-order selection, which is how many of your genes are passed on to the next generation, the more the better. For the Horde of Death, there is nothing mysterious: they reproduce, then they die, and an uncertain fraction of their genes gets passed on.

What about immortal people? They don’t really pass anything to the next generation, because they don’t do the whole generation thing. On the other hand, all of their genes will still be around, century after century, so for the genes involved, this is a 100% success rate. In this sense, people in the Immortal Imperium have a very high first-order fitness.

The second process is second-order selection. This is selection on evolvability. This is about how easy it is for your lineage to improve its own first-order fitness in the future. If a lineage finds a way to evolve quicker, then it may eventually take over the whole population because it will be more likely to discover new beneficial variants, and the original mechanism that granted better evolvability will hitchhike with these new variants.

If you want to see it happen with your own eyes, look at Richard Lenski’s long-term evolution experiment, where people have been growing the same E. coli lineages continuously since 1988. Among the mutants that took over the population after a few thousand generations, some had been present since almost the very beginning. They are called EW, for Eventual Winners. Other mutants from the same period eventually disappeared, so they are called Eventual Losers (EL). Surprisingly, in the early days, the EL were able to grow faster than the EW. But in the long term, the EW did better. That is because the EW had mutations that made them more evolvable: they became more likely to acquire further beneficial mutations that ultimately made them grow faster than the EL. People in Lenski’s lab replayed the competition over and over, and most of the time the more evolvable strain ended up taking over.

From Woods et al., 2011. EL: eventual losers. EW: eventual winners.
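The dynamics are easy to reproduce in a toy model. Here is a deliberately crude sketch (every number is invented): one lineage starts fitter but improves slowly, the other starts less fit but gains beneficial mutations faster.

```python
# Toy deterministic model of second-order selection (all numbers invented).
# "EL" starts fitter but evolves slowly; "EW" starts less fit but is more
# evolvable, gaining beneficial mutations at a higher rate.

def compete(generations, x_ew=0.5,
            w_ew=1.00, w_el=1.01,           # initial fitnesses
            gain_ew=0.002, gain_el=0.0005): # fitness gained per generation
    for _ in range(generations):
        # first-order selection: relative fitness sets the next generation
        mean_w = x_ew * w_ew + (1 - x_ew) * w_el
        x_ew = x_ew * w_ew / mean_w
        # second-order effect: the more evolvable lineage improves faster
        w_ew += gain_ew
        w_el += gain_el
    return x_ew

print(compete(10))    # < 0.5 : the eventual winner loses at first
print(compete(500))   # > 0.99: ...and takes over in the long run
```

The eventual winner loses the early competition, exactly like the EW strains, then overtakes once its accumulated improvements outweigh the head start.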

Second-order selection matters most for organisms that are not well-adapted to their environment. After all, if you are already at the top of the fitness landscape, there is no point improving your gradient-climbing abilities. Intuitively, it may look like humans are well-adapted to their environment, because we deliberately modified our environment to match our needs. But in a biological sense, current mortal humans are absolutely not well-adapted to their environment. In the First World, fertility is at an all-time low, yet we have all the resources we would need to have tons of offspring. In terms of sheer gene-copying, there is clearly a lot of room for improvement. In fact, there is a lot of genomic evidence that humans are currently under high selective pressure.

(Here is a fun way to think about it: consider that contraceptives are basically antibiotics for humans, in that they are chemicals that prevent us from reproducing. What do bacteria do when exposed to antibiotics for a long time? They evolve antibiotic resistance. So if someone gets a mutation that makes them resistant to contraceptives, they will have a fitness advantage. Realistically, we would quickly notice and switch to other contraceptives, so it’s not likely to be a large issue. But what if people get mutations that increase their parental instinct instead?)

Will the Horde win in the long run?

While the Imperium has better first-order fitness, they are pretty bad at evolving. It is likely that they’ll stop having children to avoid overcrowding. In that case, they stop evolving completely.

If I remember correctly, that’s what the immortals in Zardoz do. That’s enough evidence for me.

Meanwhile, the Horde does a full cycle of variation/selection/reproduction every 30 years or so. This makes them pretty effective at discovering beneficial variants and becoming better adapted. To make things worse, humans have a tendency to constantly change and remodel their own environment. This would explain why the rate of human evolution became higher in the last few thousand years: civilization is changing all the time. Our genomes are always adjusting to human-made changes in technology, environment, agriculture and social organization. The Horde would have no problem finding new genomes to stay up-to-date. The Imperium must make do with the same old genomes they have had since the late 21st century. For example, it’s easy to imagine the mortals physically adapting to global warming, while the immortals will not have this chance.

If the Immortals do continue to have babies, their second-order fitness is still pretty bad: if the centuries-old generations keep reproducing or mating with the newer generations, the average generation time is much longer than the Horde’s, so they still evolve more slowly, and it only gets worse as the population ages. Also, the original immortals still have to compete for resources with the younger, better-adapted immortals, so we are back to the problems we had with the Horde.
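To put made-up numbers on the generation-time argument:

```python
# Back-of-the-envelope comparison (all numbers invented): how many rounds
# of variation/selection/reproduction each population gets over the same
# span of time.
years = 3000
horde_gen_time = 30       # years per mortal generation
imperium_gen_time = 200   # hypothetical average, with centuries-old parents

horde_generations = years / horde_gen_time        # 100 selection rounds
imperium_generations = years / imperium_gen_time  # 15 selection rounds
print(horde_generations / imperium_generations)   # ≈ 6.7
```

Even with immortals still reproducing, a several-fold difference in generation time compounds into a several-fold difference in rounds of selection.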

Cultural evolution

Anyways, genomic evolution is only one part of the picture. There is also cultural evolution: cultures with higher fitness reproduce (or spread) faster, selecting for better-adapted cultural norms. The main reason why humans are so good at colonizing everything is that cultural evolution is faster and more efficient than genetic evolution, so it is an important thing to have.

For the Horde of Death vs Immortal Imperium conflict, I am not sure who would benefit more from cultural evolution. On one hand, the Imperium has a lot of experience. They have seen everything and had plenty of time to discuss every problem. They have all read the Sequences. They have maximum wisdom.

On the other hand, the Horde gets fresh brains. We all know that young scientists and mathematicians tend to make the most groundbreaking discoveries, and that scientific fields tend to enjoy booms in creativity following the death of established leaders. So what happens if the leaders never die?

Here is a hint of evidence from tennis: when composite rackets were introduced, they altered the way people play in a subtle way, so that the previous style of play was no longer optimal. According to that one study, older players had trouble adapting to the new style, which favored younger players. I don’t know how well this generalizes – at the very least, it implies that the Imperium would suck at tennis.

Another hint of evidence: moral values seem to be acquired at a young age. When asked about moral dilemmas (is it OK to eat the corpse of your pet after it was killed by a car?), people are more morally conservative when the question is asked in their native language, as opposed to another language they learned later in life. This suggests that some of our beliefs and values are shaped by the things we learn while our brains are still developing, and it is not clear whether that can ever be fully overwritten. Perhaps it will be much more difficult for the Imperium to update to new moral norms, which would hinder their cultural evolution. If the Overton window remains stuck in the same place, it will also hamper technological progress: at some point, the Immortals will see all the new gadgets the Horde constantly comes up with, and of course find them absolutely disgusting and immoral.

Elephants and mice

Altogether, it is hard to tell how humanity would continue its evolution if we discovered a way to immortality. There is a decent chance that an immortal population is inherently unstable, but there could also be cultural workarounds. One possible path that I didn’t explore is that mortals and immortals end up occupying different ecological niches. Elephants are practically immortal compared to mice, yet both of them coexist without out-competing each other. If the Horde and the Imperium ever reach such an equilibrium, what would their respective niches look like?

One last quirk: what if the Immortal Imperium, in a last-resort strategy, decides to put immortality drugs in the Horde’s drinking water? Then the Horde becomes immortal too, and loses its second-order advantage. Problem solved. Unless, of course, people start developing resistance to the immortality pills – such a mutation would be selected for because it helps selecting for mutations that help selecting for mutations that are beneficial. I have never heard of any third-order selection occurring in nature, but maybe humans will make it happen.

Is top-down veganism unethical?

« Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away. »
– Antoine de Saint-Exupéry

Remember how the vegetables we eat every day are very different from their ancestors from a few centuries ago? The same is true for animals. In half a century, farmers bred increasingly large breeds of chicken. Here is a comparison of the size of bones for modern and ancestral chickens:

https://royalsocietypublishing.org/cms/asset/d2de8704-e027-4c4c-b56a-bf42fe2d258c/rsos180325f04.gif
Scale bar: 2 cm (source)

The leg on the left belongs to a modern broiler chicken. The one on the right belongs to a wild jungle chicken.

From the perspective of meat production, this is an improvement. From the perspective of animal suffering, things are more uncertain. Contemporary chickens are reaching pantagruelian proportions: they have trouble walking, and their legs often break under their own weight. One might even go as far as worrying this is a little bit unethical. Fortunately, there are solutions. I can think of three of them – the first two, you already know. The last one, however, I never see discussed anywhere.

1. Non-meat

The most fashionable solution now is to replace meat with plant-based construction materials that are claimed to look and taste similar to meat. My main problem is that plant-based meat is, at best, overlapping with real meat: the best-quality plant-meat is comparable to the lowest-quality meat. If you think the vegan burgers make accurate simulacra of meat, I’m afraid you are eating too much heavily processed shitty meat. We are still very far from the impossible® A5-rated wagyu, the impossible® pressed duck, the impossible® volaille de Bresse “en vessie” (which must be gently cooked in a plant-based impossible® pork bladder to be valid). As a typical Westerner, I have the opportunity to eat only about 90,000 meals in a lifetime; there is no way I’m wasting any of them on sub-delicious food. Still, this approach deserves some praise for actually existing and working, which cannot be said about the second approach –

2. Lab-grown meat

To be fair, interest in lab-grown meat is increasing, slowly and steadily. Perhaps it will eventually catch up with sexbots. Here is a Frontiers review from last year, whose title alone drives the point home: “The Myth of Cultured Meat”. It is not that bad, really, but the current prototypes look like attempts at emulating the vegan attempts at emulating real meat. I don’t see any lab-grown marbled beef appearing in the foreseeable future.

3. Top-down vegan meat

Lab-grown meat was the bottom-up approach. Here, I will inquire into the feasibility of a top-down approach. Rather than starting from cell cultures and engineering them into a sirloin steak, I suggest starting from whole animals and using genetic engineering to remove all the things we find ethically questionable, one by one. Our end goal is, of course, to turn the live animals into warm, squishy, throbbing blocks of flesh devoid of anything that could possibly be construed as qualia. If we can give them a cubic shape for easy packaging and storage, that’s even better.

The path to success is long, but straightforward:

Perhaps the easiest, short-term solution is to make the animals insensitive to pain. We’ve known for a long time that some genetic variants in humans make pain disappear completely. The most famous one, a mutation in the gene SCN9A, was discovered in a Pakistani street performer who would literally eat burning coals and stab himself for the show (he did not live very long). Earlier this year, Moreno et al. managed to make mice insensitive to pain using a CRISPR-based epigenome editing scheme (basically, they fused an inactivated Cas9 to a KRAB repressor, so it binds the DNA just next to the SCN9A gene and inhibits transcription). As we can see from the street performer kid, disrupting the pain sensitivity pathway is viable, so I see no technical reason we couldn’t try it on farm animals too.

Of course, pain is not the only form of suffering. If we really want to persuade the PETA activists, we might want to make the animals permanently happy, whatever the circumstances. This is where it gets tricky. I found this genome-wide association study which identifies variants associated with subjective well-being in humans, but it’s not clear whether these variants have a direct effect on happiness, or if they just make you more likely to be rich and handsome. In the latter case, it would not be particularly useful for our next-gen farm animals (it can’t hurt, though). It is pretty clear that some genetic variants have a direct effect on personality traits like depression and anxiety, so maybe there is room for action. To optimize happiness in farm animals, we would of course need a way to measure the animals’ subjective well-being, so that’s another obstacle in the way of convincing the vegans (vegans, I’ve been told, can be extremely picky). Also, there is another problem: if we find a way to make animals permanently happy, we might be tempted to apply it to ourselves instead, and then nobody will care about factory farming anymore.

If removing pain and sadness is not enough, the next logical step is to get rid of consciousness entirely. Any chemical used to induce coma is probably not an option, since we don’t want people to fall into a coma themselves after eating lunch (I’m already close enough to a comatose state after lunch with regular food, let’s not make this worse). A more radical approach is just to remove as much of the nervous system as possible. In humans, there is a rare condition called anencephaly, where a fetus develops without most of the brain, and in particular without a neocortex. It is pretty clear that these kids have no consciousness, yet they can survive for a few hours or even a few days. There is also evidence that some mutations or recessive variants can trigger anencephaly, so the prospect of developing animal lineages without a cerebrum does not seem completely impossible. A major challenge, of course, would be to extend the life of the organism beyond a few hours. Moreover, it would require a lot of effort from the marketing department to make such a monstrosity appealing to consumers.

Sadly, this will not be enough for most vegans. Most of the vegans I personally know put the edibility frontier somewhere between the harp sponge Chondrocladia lyra and the egg-yolk jellyfish Phacellophora camtschatica; that is, anything with a nervous system is formally off-limits. This criterion does not make things easy for our master plan: even if we remove as much of the nervous system as we can, I can’t think of any way to get rid of the cardiac automatism or the part of the nervous system in charge of respiratory function. Unless, of course, we dare enter cyborg territory. Is the world ready for alimentary cyborgs? The future is full of surprises.

Conclusion

Let’s be honest, this post started as fun speculation and gratuitous vegan trolling, but I am actually very serious about the central point. GMOs are mainly discussed in terms of cost, environmental impact or health properties, yet very rarely as an avenue to reduce animal suffering. Many of the ideas discussed here are still beyond what is possible with our current understanding of genetics. Still, we can already identify some interesting research paths that are just waiting to be explored. So, what makes this approach so disturbing? As often, the moral questions turn out to be more difficult than the technical barriers. The major obstacle is not so much the actual genetic engineering as the lack of good metrics for success – how do you even measure suffering to begin with? On the other hand, if the outcome of a problem cannot be measured or even defined in any meaningful way, maybe it does not matter that much, after all. I would be happy to hear what ethical vegans think about the general approach. What would it take for a top-down reduction of animal suffering to be acceptable to you?

The two-headed bacterium

I like to see categories as fish nets we use to capture ideas. We classify things into categories like the individual, the nation or the species, and of course it is all arbitrary and doesn’t correspond to anything in the real world. But categories still form useful chunks we can use to make sense of the world. Furthermore, here is a fun exercise: introduce arbitrary changes in the categories, and see what the world looks like through this new lens. As I will argue, there are plenty of things to be discovered this way. Use the standard fish nets, and you get a standard understanding of the world. Try slightly larger or smaller nets, and maybe you will discover things you had never noticed before.

Take the individual, for example. One bacterial cell contains exactly one genome and all the necessary equipment to replicate it. Using our human-derived intuition of what makes an individual, it makes sense to see bacteria as unicellular organisms, meaning that one cell = one individual. If you visit the wiki page on prokaryotes (the larger group that encompasses bacteria and archaea) the first thing you hear is that they are unicellular, as if it were the most important thing about them. However, bacteria are so weird, so different from us, that it makes little sense to describe them using the categories we invented while observing humans.

Let’s explore the strange and surprising processes that are uncovered when you change your definition of the individual to make it either wider, or narrower. First, I will start with a hot take: each bacterial lineage is one big multicellular individual. Then I will move on to the super-hot-magma-take: each bacterial cell is actually made of two distinct individuals fused together, facing in opposite directions.

Bacteria as multicellular organisms

First, let’s make our definition of the individual arbitrarily broader, and consider that the whole bacterial culture, descending from a single ancestral cell, is one individual. Is there anything interesting to see here? For starters, some behaviors of bacterial cells don’t really make sense as individuals. For example, bacterial cells regularly perform what could only be described as bacterial sacrifice.

The Kelly criterion in prokaryotes

Content warning: bacterial sacrifice

Antibiotics were already in the environment long before humans started using them, usually secreted by other micro-organisms who want to take your precious nutrients for themselves. Imagine being a bacterium growing peacefully – there is always a risk that some bastard fungus will put their filthy pterulone, sparassol or strobilurin in your soup. Fortunately, bacteria figured out a solution: enemies can’t stop you from growing if you are already not growing.

In its simplest form, this works because the antimicrobial compound needs to be actively incorporated in the growth machinery to cause trouble. Think of a grain of sand being caught in a clockwork mechanism and breaking everything – if the mechanism is stopped, the grain of sand doesn’t enter, and you can resume operation later once the grain of sand has been blown away. Obviously, the drawback is that the bacterium is no longer growing, which kind of defeats the whole point. This is why bacteria have invented what we humans know as the Kelly betting system.

Say a gambler bets on something with 2:1 odds, so if she wins the bet, she gains twice as much as she invested. She knows she has a 60% chance of winning, so the most profitable strategy is of course to invest 100% of her money every time – this way, she maximizes the return of every winning bet! But obviously this is bad, because eventually she will lose a bet, and then have zero monies remaining. For bacteria, this is like having 100% of the cells growing as fast as they can. This maximizes the population growth rate, until the aforementioned bastard fungus secretes some pleuromutilin or whatever, the entire population takes it up, and goes extinct. To avoid this, our gambler should invest only a fraction of her money on each bet, so her funds still grow exponentially (albeit at a slower rate), but in case of loss she still has some funds to continue. For bacteria, this means always keeping a small fraction of the population that stops growing, as a backup. This is essentially the bacterial population betting on whether there will be antibiotics in the near future. From the perspective of an individual cell, both situations are bad – either you stop growing, while your friends quickly outnumber you by orders of magnitude and you practically disappear, or you are part of the growing fraction and eventually die from antibiotic overdose. But if you look at the entire colony, you can see the two sub-populations as two essential parts of a single organism that figured out some slick decision theory techniques long before the species of John L. Kelly even evolved a brain.
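For the curious, here is the gambler’s arithmetic as a sketch. The closed-form Kelly fraction for these odds works out to 40% of the bankroll; the brute-force check just confirms the formula.

```python
import math

# The gambler example above: 2:1 payout (a win returns twice the stake),
# 60% chance of winning each bet.
p, b = 0.6, 2.0

def log_growth(f):
    """Expected log-growth of the bankroll per bet when staking fraction f."""
    if f >= 1.0:
        return float("-inf")   # betting everything: one loss means ruin
    return p * math.log(1 + b * f) + (1 - p) * math.log(1 - f)

# Kelly's formula gives the optimal fraction in closed form:
f_kelly = p - (1 - p) / b      # = 0.4

# Sanity check by brute force over a grid of fractions:
best = max((i / 100 for i in range(100)), key=log_growth)
print(f_kelly, best)           # both ≈ 0.4
```

Note that staking 100% has log-growth of minus infinity: that is the “zero monies remaining” outcome, and the bacterial analogue of the whole colony growing at full speed.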

Eating the corpses of your siblings

Content warning: eating the corpses of your siblings.

Similarly, one puzzling feature of bacteria is that they sometimes commit apoptosis. This happens, for example, when food is scarce – some cells may spontaneously explode so that other cells can feed on their remains, increasing the chances that at least one of them will make it out alive when resources come back. If you see each cell as an individual, this is weird, and does not fit well with anything methodological individualism would predict. But if you see the whole colony as the individual, then it is just your good old typical apoptosis – just as, in the fetal stage, your fingers were all connected by cells until some of them honorably committed seppuku so you would be born with fingers instead of webbed paws.

(One fascinating thing about bacterial apoptosis is that every cell which ever activated these pathways is dead. Thus, if you look at a currently living bacterium, at no point in billions of years of evolution did this pathway ever activate in any of its ancestors. Not even by chance. The entire mechanism evolved and improved only through correlation with other cells, without ever activating in the lineages we can now see.)

Action potentials in biofilms

As a third exhibit of things bacteria do that definitely don’t look like unicellular behavior, there is the recent discovery that some bacteria, after organizing themselves into a biofilm, are able to communicate with each other using electrical waves. The way it works is remotely similar to the action potentials we see in neurons. In the resting state, cells are filled with potassium ions, which makes them electrically polarized. Whenever the polarization disappears, ion channels in the envelope open up, and the potassium ions all exit the cell into the extracellular environment. This, in turn, cancels out the polarization of neighboring cells. The result is this:

Video from Prindle et al., 2015, showing waves of potassium propagating in a colony of tens of thousands of cells.

Supposedly, this mechanism makes sure the outer bacteria will stop eating from time to time, so the nutrients can diffuse all the way to the center and prevent the interior cells from starving. If this does not make you scream “multicellular!”, I don’t know what will.
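The wave itself can be caricatured with a three-state toy model (resting, firing, refractory) – to be clear, this is my own cartoon, not the biophysical model from the paper.

```python
# Minimal 1D excitable-medium toy: each cell is resting (0), firing
# (1: dumping its potassium), or refractory (2). A firing neighbor
# depolarizes a resting cell; the refractory step keeps the wave
# moving outward instead of bouncing back.

def step(cells):
    new = []
    for i, s in enumerate(cells):
        if s == 1:
            new.append(2)   # just fired -> refractory
        elif s == 2:
            new.append(0)   # recover to rest
        else:
            left = cells[i - 1] if i > 0 else 0
            right = cells[i + 1] if i < len(cells) - 1 else 0
            new.append(1 if 1 in (left, right) else 0)
    return new

cells = [0] * 21
cells[10] = 1               # seed the wave in the middle
for _ in range(5):
    cells = step(cells)

# After 5 steps the firing front has moved 5 cells in each direction:
print(cells.index(1), len(cells) - 1 - cells[::-1].index(1))  # 5 15
```

The refractory state is doing the real work here: without it, the depolarization would immediately re-excite the cells behind the front, and you would get a blinking mess instead of a propagating wave.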

In short, rather than being just individual cells fighting against each other, bacteria have evolved hard-wired mechanisms that only make sense if you consider the dynamics of the whole colony. A microbiologist could spend her entire career building a perfect model of one bacterial cell, but she would still be far from understanding all facets of the organism. Oh, and if you are ready to hear a similar point about humans (that is, human communities are multi-body individuals), get your largest fish net and check out this review. I will continue with bacteria, because we have barely scratched the lipopolysaccharide of their weirdness.

Bacterial cells are two-faced pairs of individuals

Now, let’s see what happens with a much narrower definition for an individual. Even narrower than a single cell. Put down that extra-large “big game”-rated landing net and bring the tweezers.

Here is our new definition: an individual is what happens between a birth event and a death event. Now we need to find definitions of birth and death that apply to bacteria. Let’s say a birth event is when a mother cell divides into two daughters (specifically, cytokinesis). A death event is when a cell is irreversibly broken: torn apart, or too damaged to grow. Now that we have a simple and precise definition, we can look at bacteria and pick apart the individuals.

One generation goes as follows:

  • The cell extends and roughly doubles in length
  • The middle of the cell constricts and two new poles are constructed
  • The cell divides and you get two cells. Each of them has one old pole that was already there in the previous generation, and one shiny new pole:

Where is the individual here? Now you understand why I came up with that bizarre birth-death definition. First, let’s number the poles according to their age (in generations).

Blink very fast while on shrooms and you might see a Koch snowflake in the bottom sequence.

But what if bacteria age? It turns out that, yes, bacteria age. After a number of generations, old poles accumulate damage. Depending on the growth environment, they may still be fine, or grow slower, or explode in an effusion of bacteria blood. To reduce clutter, I’ll consider that poles have a lifespan of 3 generations, after which the cell is dead (in real life, they hold on for much longer, but that wouldn’t be sketchable).

Coming back to our custom, “birth-to-death” definition of an individual, you can see that each cell is actually made of two of them – one on the left, one on the right.

Here they are very short-lived and die after three generations, but in real life these “half-bacteria” live for much longer, perhaps hundreds of generations if the conditions are not too bad. But the principle remains the same, there are just a lot more of these diagonal individuals.
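The pole bookkeeping is simple enough to simulate. A minimal sketch, using the artificially short three-generation pole lifespan from the sketches above:

```python
# A cell is a pair of pole ages (in generations). At division, each
# daughter keeps one of the mother's poles (now one generation older)
# plus one brand-new pole. As in the sketches, a pole that reaches
# age 3 takes its cell down with it (real poles last far longer).
MAX_POLE_AGE = 3

def next_generation(cells):
    daughters = []
    for old, young in cells:
        daughters.append((old + 1, 0))    # daughter keeping the older pole
        daughters.append((0, young + 1))  # daughter keeping the younger pole
    return [c for c in daughters if max(c) < MAX_POLE_AGE]

population = [(0, 0)]                     # one newborn cell, two fresh poles
sizes = [len(population)]
for _ in range(5):
    population = next_generation(population)
    sizes.append(len(population))
print(sizes)  # [1, 2, 4, 6, 10, 16]: doubling minus pole-age deaths
```

Each “diagonal individual” is the chain of daughters that keeps inheriting the same aging pole, from its birth at a division until that pole’s death three generations later.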

Using your ancestors as trashcans

Content warning: yeah, that.

But wait, there is more. As I said, in nice conditions the poles can grow basically forever. Yet they still exhibit aging. And yes, this is all sane and coherent. This is where the titles of the papers become really spooky (Age structure landscapes emerge from the equilibrium between aging and rejuvenation in bacterial populations, or Cell aging preserves cellular immortality in the presence of lethal levels of damage), showing how far we are from our typical construction of the individual.

To put it very briefly, take the sketches above, where half of the cell is young and half keeps getting older. Old material accumulates in the old pole, so those cells keep growing slower and slower with each generation. Now add some mixing: every generation, the older pole gets a little bit of fresh material, and the younger pole gets a little bit of old material. Eventually the old pole reaches an equilibrium where the new material it inherits exactly compensates for the damage from aging. The same happens, reversed, for the young pole, so you end up with two attractors:

Slightly adapted from Proenca et al., 2018.
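The two attractors fall out of a one-line recurrence. A toy version (parameters invented, not taken from the paper): each generation adds a fixed amount of damage, and division hands the old-pole daughter a fixed fraction of the total.

```python
# Toy model of the aging/rejuvenation equilibrium: each generation adds
# `a` units of damage; at division, the old-pole daughter keeps a
# fraction `q` of the total, the new-pole daughter gets the rest.
a, q = 1.0, 0.7

def follow(kept_fraction, generations=100, damage=0.0):
    """Damage along a lineage that always keeps `kept_fraction` at division."""
    for _ in range(generations):
        damage = kept_fraction * (damage + a)
    return damage

old_attractor = follow(q)        # always tracking the old-pole daughter
young_attractor = follow(1 - q)  # always tracking the new-pole daughter
# Fixed points: q*a/(1-q) and (1-q)*a/q
print(round(old_attractor, 3), round(young_attractor, 3))  # 2.333 0.429
```

Both lineages converge to finite damage levels – aging saturates instead of running away, which is the “aging preserves immortality” result in miniature.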

What is the importance of this? There may be no importance at all, since the old cells are quickly outnumbered by young cells, so they only represent a tiny fraction of the colony. However, there is also some evidence that all kinds of garbage, like misfolded proteins or aggregates, tends to accumulate in the old pole. Perhaps this ensures that at least some cells in the population will be in perfect shape, so in case of trouble, there is a good chance of having at least one survivor (a bit like North Korea preparing a team for the Math Olympiads).

But this, of course, brings us back to collective, multicellular behavior. Life is too complicated to fit in a single fish net.

Wholesale Wikipedias – July 2021

https://en.wikipedia.org/wiki/Concealed_shoes

https://en.wikipedia.org/wiki/Umm_al-Qura_Mosque

https://en.wikipedia.org/wiki/Operation_Vegetarian

https://en.wikipedia.org/wiki/Retired_husband_syndrome

https://en.wikipedia.org/wiki/Berners_Street_hoax

https://en.wikipedia.org/wiki/Gilles_de_Rais

https://en.wikipedia.org/wiki/Love_Jihad

https://en.wikipedia.org/wiki/Ejaculatory_prayer

https://en.wikipedia.org/wiki/Herma

https://en.wikipedia.org/wiki/St._Petersburg_paradox

A Random Clock

I may have found a solution to one of my biggest, longest-standing, most irredeemable problems. For most of my life, I have been consistently late. Whether it’s appointments, attending events, taking trains or joining a zoom call, I’m typically 10 minutes late for everything and it’s ruining my life – not because I actually miss the train (though that happens too) but because I’m constantly rushing and panicking. Whatever I do, I start it in a state of maximum stress and guilt. Obviously, I tried pretty much everything to address the problem, including various artificial rewards and punishments, telling a therapist about it, having people call me to remind me to get ready, taking nootropics, and many more ridiculous ideas. So I thought, “how do all these well-adjusted adults manage to be perfectly on time all the time?” and I did what any well-adjusted normie would do: I tried to formally frame the problem in terms of expected utility theory.

Tricking myself: single-player game theory

Imagine I have to attend a very important scientific conference on the effect of dubstep on mosquitos. The figure below plots how much I enjoy the event depending on the time I arrive.

Arriving ten minutes or one hour early does not make any difference (or so I presume – this has never happened to me). Being just a few minutes late is not a big deal either, since it’s just going to be the speaker testing her microphone or other formalities of no importance. Beyond that, it starts becoming really rude (with some variation depending on which culture you live in) and I risk missing crucial information, like the definition of a concept central to understanding the equations of mosquitos’ taste for Skrillex.

The second aspect of the problem is how much time I can save by arriving later, which is just a straight line:

Why would I arrive ten minutes early to the Skrillex-as-a-cure-for-dengue talk, when I could spend ten more minutes reading about exorcism under fMRI? Summing both aspects of the problem, the grand unified utility curve looks something like this:

There you have it: the utility peak, the most rational outcome, is obtained by being just a few minutes late. I suppose for most people, this basically means you should arrive on time, since the peak is not that far from the start of the event. But chronically-late people like myself have a distorted vision of the utility curve, which looks more like this:

This might look like a desperate situation, but there is one spark of hope: even in this wildly-distorted version of the utility function, the downward part of the curve (problems with being late) is much steeper than the upward part of the curve (time saved by being late). This asymmetry makes it possible to change the location of the peak by adding some uncertainty, in the form of a random clock. Let me explain.

A rookie approach to not being late is to set your watch 10 minutes ahead. This way, it “looks” like you’re already 10 minutes late when you are actually on time, which might make you speed up through some obscure psychological mechanism. Of course, this does not work, since you know perfectly well your clock is 10 minutes fast and you compensate accordingly. But what if you ask a friend to shift your watch forward by a random number of minutes, between 0 and 10? Then, you don’t know how much to compensate. Coming back to the utility function above, we are effectively blurring it out. Here is what happens:

Thanks to the asymmetry of the original peak, the maximum utility is now shifted to the left! Say the mosquito conference starts at 8:00, and the random clock says 7:59. Best case, the clock is 10 minutes fast and I still have 11 minutes left, so everything is fine and I can take my time. Worst case, the clock is exactly on time, the show starts in one minute, and I can’t wait any longer. Since I would rather be 10 minutes early than 10 minutes late, I stop reading this very important exorcism paper and hurry to the conference room.
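The peak-shifting effect is easy to check numerically. Here is a minimal sketch with a made-up piecewise-linear utility (the exact slopes are my assumptions, not measurements; only their asymmetry matters): blurring it with a uniform 10-minute-wide kernel moves the maximum to an earlier time.

```python
# Minutes relative to the event start, on a 1-minute grid
times = list(range(-30, 31))

def utility(t):
    """Hypothetical utility: gentle gain for arriving later (time saved),
    steep penalty once you are more than 5 minutes late."""
    return 0.2 * t if t <= 5 else 0.2 * 5 - 2.0 * (t - 5)

def blurred_utility(t):
    """Expected utility when the perceived time carries an unknown uniform
    shift spanning 10 minutes (the blurred curve in the figure above)."""
    return sum(utility(x) for x in range(t - 5, t + 6)) / 11

peak_before = max(times, key=utility)         # a few minutes late
peak_after = max(times, key=blurred_utility)  # earlier than peak_before
```

Because the penalty slope is much steeper than the gain slope, the blurred optimum lands several minutes before the original one, which is exactly why the random clock pushes me to leave earlier.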

Self-blinding in practice

In the early development phase, I asked a trusted friend to pick a number between 0 and 10 and shift my watch forward by this amount without telling me. This was for prototyping only, since it has some disadvantages:

  • I don’t want to ask friends to change my watch all the time, especially if I have to explain the reasoning behind it every time,
  • My friend could totally troll me in various ways, like shifting my clock two hours into the future. I’m clueless enough not to notice. But she is an amazing person and did not do that.

Then, I used this very simple Python script:

#!/usr/bin/python3
import time, random

# Current time plus a uniform random shift between 0 and 10 minutes
print(time.ctime(time.time() + 60 * 10 * random.random()))

It takes the current time, draws a uniform random number of minutes between 0 and 10, and adds it to the time.

I have an advantage for this project: I wear a wristwatch at all times. This makes the practical implementation of the random clock much easier – I just need to shift my wristwatch and rely exclusively on it, without ever looking at any other clock. I also have an alarm clock and a regular clock on the wall of my room, so I simply shifted them to match my watch. There were also clocks on my computer and my phone, and there is surely a way to shift them too, but I was lazy and just disabled the time display on both devices. (In hindsight, removing the clock from computers and smartphones is a healthy decision in its own right, as it forces you to get your eyes off the screen from time to time – you should give it a try.) Here is my full randomization procedure:

  • Scramble my watch and alarm clock by a large amount, so I can’t read the time while randomizing them,
  • Wait until I can no longer tell what time it is (to a 10-minute margin of error),
  • Run the script,
  • Set my watch and clocks to the time prescribed by the script.

And then, it is all about avoiding looking at the various clocks in my environment that display the true time (sometimes the microwave will just proudly display the time without warning). Who will win – my attempt at deliberately adding uncertainty to the world, or my microwave? Let’s do the experiment.

Putting a number on it

For a few days before and after trying out the random clock, I kept track of the time when I arrived at various appointments and events. During the random phase, I would just write down the raw time displayed on my watch; then, before re-randomizing it, I would check what the shift was and subtract it from the data to find out when I really arrived. My astonishing performance can be witnessed in the figure below:

The horizontal segments represent the median. As you can see, I went from a median lateness of nine minutes to only one minute. I’m still not perfectly calibrated, but this might be the first time in my whole life I am so close to being on time, so I’d consider this a success. In both series, there are a few outliers where I was very very late (up to 35 min), but those are due to larger problems – for example, the green outlier was when my bicycle broke and I had to go to a band rehearsal on foot. Apparently, I am so bad at managing time that my lateness undergoes black swan events.

Contrary to what I expected, it is very easy to just stop looking at all the clocks in the outside world and rely only on my watch. Of course, the world is full of danger, and sometimes I caught a glimpse of whatever wild clock someone carelessly put in my way. In that case, I just had to avoid checking my watch for a few minutes to avoid breaking the randomization. A bigger problem is seeing when events actually start. Whether I like it or not, my system 1 can’t help but infer things about the real time by seeing when other people arrive, or when the conference actually starts, or when some !#$@ says “alright, it’s 10:03, should we start?”. If this narrows the distribution too much, I have to randomize again. I did not find it to be a major problem, only having to re-randomize about once a week. In fact, when I revealed the real shift to myself before re-randomizing, I often found that what I had inferred about the true time was completely wrong. Thus, even when I believe I’ve inferred the real time from external clues, I can tell myself it’s probably not accurate. This only makes my scheme stronger.

A continuously-randomizing clock

Since no randomization is eternal, am I doomed to re-randomize every few weeks all my life? There is actually a pretty simple solution to avoid this, which is to use a continuously-randomizing clock. Instead of manually randomizing it from time to time, the clock is constantly drifting back and forth between +0 min and +10 min, slightly tweaking the length of a second. A very simple way to do that is to add a sine function to the real time:

#!/usr/bin/python3
import time, math

real_time = time.time()
# Shift oscillates smoothly between 0 and 1, with a period of
# 2*pi*1800 seconds, i.e. pi hours
shift = (1 + math.sin(real_time / 1800)) / 2
wrong_time = real_time + shift * 60 * 10  # up to 10 minutes ahead
print(time.ctime(wrong_time))

In this example, the clock shift oscillates between 0 and 10 minutes once every π hours. Of course it is not really random anymore, but that does not matter, since we are just trying to trick our system 1 so it cannot figure out the real time against our will. Finding the real time might be possible with some calculations, but those would involve your system 2, and that one is supposed to be under your control. All that matters is that the oscillation period is not an obvious multiple of one hour. The snippet above uses a period of π hours, which is not even a rational number, so we are pretty safe.

The advantage of using a sine function rather than a fancy random variable is that it is magically synchronized across all clocks that use the same formula. If you use it on two different computers, they will both give the same (wrong) time, without any internet connection. As I said, I am fine with my old analog watch, but if you are the kind of person who uses a smartwatch, give it a try and tell me how it went. Or perhaps I will try to build one of those Arduino watches.
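The synchronization comes for free because the shift is a pure function of the true timestamp. A quick sanity check of the formula above, wrapped in a function for convenience:

```python
import math

def drifting_clock(real_time):
    """Same sine formula as above: the displayed time depends only on the
    true timestamp, so any two devices evaluating it at the same instant
    agree exactly, with no communication needed."""
    shift = (1 + math.sin(real_time / 1800)) / 2  # between 0 and 1
    return real_time + shift * 60 * 10            # 0 to 10 minutes ahead

# Two independent "devices" reading the clock at the same true time agree:
t = 1_700_000_000.0
assert drifting_clock(t) == drifting_clock(t)

# The displayed time never lags and never runs more than 10 minutes fast:
assert all(0 <= drifting_clock(x) - x <= 600 for x in range(0, 100000, 997))
```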

In my tests, I found that my archaic wristwatch-based system is already good enough for my own usage, so I will stick with it for the moment. Maybe it will keep on working; maybe the effect will fade after a while, once the novelty wears off. It is also possible that I was more careful than usual because I really wanted the experiment to succeed. Maybe I will get super good at picking up every clue to guess the real time. I will update this post with the latest developments. Anyways, there is something paradoxical about manipulating oneself by deliberately adding uncertainty – a perfectly rational agent would always want more accurate information about the world, and would never deliberately introduce randomness. But I am not a perfectly rational agent, I did introduce uncertainty, and it worked.

bookmark_borderWholesale Wikipedias – June 2021

Chemistry edition. Does anyone have a few milligrams to spare?

https://en.wikipedia.org/wiki/Magic_acid

https://en.wikipedia.org/wiki/Chlorine_trifluoride

https://en.wikipedia.org/wiki/Bremelanotide

https://en.wikipedia.org/wiki/Fenestrane

https://en.wikipedia.org/wiki/Resiniferatoxin

https://en.wikipedia.org/wiki/Megaphone_(molecule)

https://en.wikipedia.org/wiki/Isosorbide_dinitrate%2Fhydralazine

https://en.wikipedia.org/wiki/Olympiadane

bookmark_borderWholesale Wikipedias – May 2021

I almost forgot about this.

https://en.wikipedia.org/wiki/Oil_futures_drunk-trading_incident

https://en.wikipedia.org/wiki/Lady_tasting_tea

https://en.wikipedia.org/wiki/Ribs_(recordings) (see also, samizdat)

https://en.wikipedia.org/wiki/Long_line_(topology)

https://en.wikipedia.org/wiki/Bald-hairy

https://en.wikipedia.org/wiki/Blind_Willie_Johnson

https://en.wikipedia.org/wiki/Non-human_electoral_candidates

https://en.wikipedia.org/wiki/Osama_Vinladen

https://en.wikipedia.org/wiki/List_of_lists_of_lists

bookmark_borderWholesale wikipedias – April 2021

https://en.wikipedia.org/wiki/Curse_of_the_Colonel

https://en.m.wikipedia.org/wiki/Anthropodermic_bibliopegy

https://en.wikipedia.org/wiki/Mhoon_Landing

https://en.wikipedia.org/wiki/Lyman-alpha_forest

https://en.wikipedia.org/wiki/Mobro_4000

https://en.wikipedia.org/wiki/Lacrymaria_olor (https://www.youtube.com/watch?v=ZquzlvEEZq8)

https://en.wikipedia.org/wiki/Mathematical_coincidence

bookmark_borderThe Holy Algorithm

As will surely not have escaped your notice, this weekend is Easter. Why now? The date of Easter is determined by a complicated process called the Computus Ecclesiasticus. I will just quote the Wikipedia page:

The Easter cycle groups days into lunar months, which are either 29 or 30 days long. There is an exception. The month ending in March normally has thirty days, but if 29 February of a leap year falls within it, it contains 31. As these groups are based on the lunar cycle, over the long term the average month in the lunar calendar is a very good approximation of the synodic month, which is 29.53059 days long. There are 12 synodic months in a lunar year, totaling either 354 or 355 days. The lunar year is about 11 days shorter than the calendar year, which is either 365 or 366 days long. These days by which the solar year exceeds the lunar year are called epacts. It is necessary to add them to the day of the solar year to obtain the correct day in the lunar year. Whenever the epact reaches or exceeds 30, an extra intercalary month (or embolismic month) of 30 days must be inserted into the lunar calendar: then 30 must be subtracted from the epact.

If your thirst for knowledge is not satisfied, here is a 140-page document in Latin with more details.

As far as I understand, during the Roman era the Pope or one of his bureaucrats would perform the computus, then communicate the date to the rest of Christendom, and everybody could eat their chocolates at the same time. Then the Middle Ages happened and communication became much harder, so instead they came up with a formula, so people could compute the date of Easter locally. Of course, the initial formulas had problems – with the date of Easter dangerously drifting later and later in the year over the centuries, and don’t even get me started on calendar changes. Eventually, Carl Friedrich Gauss entered the game and saved humanity once again with a computationally-efficient algorithm (I am over-simplifying the story so you have more time to eat chocolate).
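Gauss’s method and its descendants are compact enough to perform by hand. Here is the anonymous Gregorian algorithm (the Meeus/Jones/Butcher variant, a later refinement in the same lineage) in a few lines of Python; the intermediate variables are famously opaque, which is part of the charm:

```python
def easter(year):
    """Month and day of (Western) Easter Sunday for a Gregorian year,
    using the anonymous Gregorian algorithm (Meeus/Jones/Butcher)."""
    a = year % 19                           # position in the 19-year Metonic cycle
    b, c = divmod(year, 100)                # century and year-within-century
    d, e = divmod(b, 4)
    f = (b + 8) // 25
    g = (b - f + 1) // 3
    h = (19 * a + b - d - g + 15) % 30      # roughly, the epact
    i, k = divmod(c, 4)
    l = (32 + 2 * e + 2 * i - h - k) % 7    # days to the next Sunday
    m = (a + 11 * h + 22 * l) // 451
    month, day = divmod(h + l - 7 * m + 114, 31)
    return month, day + 1

easter(2021)  # (4, 4): April 4, this weekend
```

For example, `easter(2000)` gives `(4, 23)`, April 23 – no tables, no messengers, no Vatican Microsystems® required.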

But it is now 2021, and I’m wondering how they run the algorithm in practice. I looked up “how is the date of Easter calculated”, but all the results are about the algorithms themselves, not about their practical implementation. I have a few hypotheses:

  1. There are responsible Christians everywhere who own printed tables with the dates of Easter already computed for the next few generations. If your Internet goes down, you can probably access such tables at the local church.
https://upload.wikimedia.org/wikipedia/commons/e/e4/DiagrammePaques_Flammarion.jpg
Here is such a table, from 1907 (Wikimedia Commons)

Of course this does not really solve the problem: who comes up with these tables in the first place? Who will make new ones when they expire?

  2. There is a ceremony in the Vatican where a Latin speaker ceremoniously performs the Holy Algorithm by hand, outputs the date of Easter, prints “Amen” for good measure, and then messengers spread the result to all of Christianity.

  3. Responsible Christians everywhere own a Computus Clock, a physical device that tells you whether it is Easter or not. When in doubt, you just pay a visit to that-guy-with-the-computus-clock. Then, it is like hypothesis 1, except it never expires.

  4. There is a software company (let’s call it Vatican Microsystems®) that managed to persuade the Pope to buy a license for their professional software solution, Computus Pro™ Enterprise Edition 2007 – including 24/7 hotline assistance – that only runs on Windows XP, and they have a dedicated computer in the Vatican that is used once in a while to run these 30000 lines of hard Haskell or something. Then, it goes just like hypothesis 2.

(Of course, all of these solutions are vulnerable to hacking. It might be as easy as sneaking into a church and replacing their Easter tables with a fake. A talented hacker might even have Easter coincide with April Fools’.)

If an active member of the Christian community reads this and knows how it is done in practice, I am all ears.

Anyways, happy Easter and Amen, I guess.

bookmark_borderWholesale wikipedias – March 2021

Wikipedias for the wikipedia God.

https://en.wikipedia.org/wiki/Everyday_life

https://en.wikipedia.org/wiki/999-year_lease

https://en.wikipedia.org/wiki/UEFA_Champions_League_Anthem

https://en.wikipedia.org/wiki/Artificial_cranial_deformation

https://en.wikipedia.org/wiki/Metro-2

https://en.wikipedia.org/wiki/Cookiecutter_shark

https://en.wikipedia.org/wiki/Quantum_tic-tac-toe

https://en.wikipedia.org/wiki/Rolling_coal