Is top-down veganism unethical?

« Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away. »
– Antoine de Saint-Exupéry

Remember how the vegetables we eat every day are very different from their ancestors from a few centuries ago? The same is true for animals. In half a century, farmers bred increasingly large breeds of chicken. Here is a comparison of the size of bones for modern and ancestral chickens:

https://royalsocietypublishing.org/cms/asset/d2de8704-e027-4c4c-b56a-bf42fe2d258c/rsos180325f04.gif
Scale bar: 2 cm (source)

The leg on the left belongs to a modern broiler chicken. The one on the right belongs to a wild jungle chicken.

From the perspective of meat production, this is an improvement. From the perspective of animal suffering, things are more uncertain. Contemporary chickens are reaching Pantagruelian proportions: they have trouble walking, and their legs often break under their own weight. One might even go as far as worrying this is a little bit unethical. Fortunately, there are solutions. I can think of three of them – the first two, you already know. The last one, however, I never see discussed anywhere.

1. Non-meat

The most fashionable solution right now is to replace meat with plant-based construction materials that are claimed to look and taste similar to meat. My main problem is that plant-based meat is, at best, overlapping with real meat: the best-quality plant-meat is comparable to the lowest-quality meat. If you think vegan burgers make accurate simulacra of meat, I’m afraid you are eating too much heavily processed shitty meat. We are still very far from the impossible® A5-rated wagyu, the impossible® pressed duck, the impossible® volaille de Bresse “en vessie” (which must be gently cooked in a plant-based impossible® pork bladder to be valid). As a typical Westerner, I have the opportunity to eat only about 90,000 meals in a lifetime; there is no way I’m wasting any of them on sub-delicious food. Still, this approach deserves some praise for actually existing and working, which cannot be said about the second approach –

2. Lab-grown meat

To be fair, the interest in lab-grown meat is increasing, slowly and steadily. Perhaps it will eventually catch up with sexbots. Here is a Frontiers review from last year, whose title alone drives the point home: “The Myth of Cultured Meat”. It is not that bad, really, but the current prototypes look like attempts at emulating the vegan attempts at emulating real meat. I don’t see any lab-grown marbled beef appearing in the foreseeable future.

3. Top-down vegan meat

Lab-grown meat was the bottom-up approach. Here, I will inquire into the feasibility of a top-down approach. Rather than starting from cell cultures and engineering them into a sirloin steak, I suggest starting from whole animals and using genetic engineering to remove all the things we find ethically questionable, one by one. Our end goal is, of course, to turn the live animals into warm, squishy, throbbing blocks of flesh devoid of anything that could possibly be construed as qualia. If we can give them a cubic shape for easy packaging and storage, that’s even better.

The path to success is long, but straightforward:

Perhaps the easiest, short-term solution is to make the animals insensitive to pain. We’ve known for a long time that some genetic variants in humans make pain disappear completely. The most famous one, a mutation in the gene SCN9A, was discovered in a Pakistani street performer who would literally eat burning coals and stab himself for the show (he did not live very long). Earlier this year, Moreno et al. managed to make mice insensitive to pain using a CRISPR-based epigenome editing scheme (basically, they fused an inactivated Cas9 to a KRAB repressor, so it binds to the DNA just next to the SCN9A gene and inhibits transcription). As we can see from the street performer kid, disrupting the pain sensitivity pathway is totally viable, so I see no technical reason we couldn’t try that on farm animals too.

Of course, pain is not the only form of suffering. If we really want to persuade the PETA activists, we might want to make the animals permanently happy, whatever the circumstances. This is where it gets tricky. I found this genome-wide association study which identifies variants associated with subjective well-being in humans, but it’s not clear whether these variants have a direct effect on happiness, or if they just make you more likely to be rich and handsome. In the latter case, it would not be particularly useful for our next-gen farm animals (it can’t hurt, though). It is pretty clear that some genetic variants have a direct effect on personality traits like depression and anxiety, so maybe there is room for action. To optimize happiness in farm animals, we would of course need a way to measure the animals’ subjective well-being, so that’s another obstacle in the way of convincing the vegans (vegans, I’ve been told, can be extremely picky). Also, there is another problem: if we find a way to make animals permanently happy, we might be tempted to apply it to ourselves instead, and then nobody will care about factory farming anymore.

If removing pain and sadness is not enough, the next logical step is to get rid of consciousness entirely. Any chemical used to induce coma is probably not an option, since we don’t want people to fall into a coma themselves after eating lunch (I’m already close enough to a comatose state after lunch with regular food, let’s not make this worse). A more radical approach is just to remove as much of the nervous system as possible. In humans, there is a rare condition called anencephaly, where a fetus develops without most of the brain, and in particular without a neocortex. It is pretty clear that these kids have no consciousness, yet they can survive for a few hours or even a few days. There is also evidence that some mutations or recessive variants can trigger anencephaly, so the prospect of developing animal lineages without a cerebrum does not seem completely impossible. A major challenge, of course, would be to extend the life of the organism for more than a few hours. Moreover, it would require a lot of effort from the marketing department to make such a monstrosity appealing to consumers.

Sadly, this will not be enough for most vegans. Most of the vegans I personally know put the edibility frontier somewhere between the harp sponge Chondrocladia lyra and the egg-yolk jellyfish Phacellophora camtschatica; that is, anything with a nervous system is formally off-limits. This criterion does not make things easy for our master plan: however much of the nervous system we remove, I can’t think of any way to get rid of the cardiac automatism or the parts of the nervous system in charge of respiratory function. Unless, of course, we dare enter into cyborg territory. Is the world ready for alimentary cyborgs? The future is full of surprises.

Conclusion

Let’s be honest, this post started as fun speculation and gratuitous vegan trolling, but I am actually very serious about the central point. GMOs are mainly discussed in terms of cost, environmental impact or health properties, yet very rarely as an avenue to reduce animal suffering. Many of the ideas discussed here are still beyond what is possible with our current understanding of genetics. Still, we can already identify some interesting research paths that are just waiting to be explored. So, what makes this approach so disturbing? As is often the case, the moral questions turn out to be more difficult than the technical barriers. The major obstacle is not so much the actual genetic engineering, but the lack of good metrics for success – how do you even measure suffering to begin with? On the other hand, if the outcome of a problem cannot be measured or even defined in any meaningful way, maybe it does not matter that much, after all. I would be happy to hear what ethical vegans think about the general approach. What would it take for a top-down reduction of animal suffering to be acceptable to you?

The two-headed bacterium

I like to see categories as fish nets we use to capture ideas. We classify things into categories like individuals, nations or species, and of course it is all arbitrary and doesn’t correspond to anything in the real world. But categories still form useful chunks we can use to make sense of the world. Furthermore, here is a fun exercise: introduce arbitrary changes in the categories, and see what the world looks like through this new lens. As I will argue, there are plenty of things to be discovered this way. Use the standard fish nets, and you get a standard understanding of the world. Try to use slightly larger or smaller nets, and maybe you will discover things you had never noticed before.

Take the individual, for example. One bacterial cell contains exactly one genome and all the necessary equipment to replicate it. Using our human-derived intuition of what makes an individual, it makes sense to see bacteria as unicellular organisms, meaning that one cell = one individual. If you visit the wiki page on prokaryotes (the larger group that encompasses bacteria and archaea) the first thing you hear is that they are unicellular, as if it were the most important thing about them. However, bacteria are so weird, so different from us, that it makes little sense to describe them using the categories we invented while observing humans.

Let’s explore the strange and surprising processes that are uncovered when you change your definition of the individual to make it either wider, or narrower. First, I will start with a hot take: each bacterial lineage is one big multicellular individual. Then I will move on to the super-hot-magma-take: each bacterial cell is actually made of two distinct individuals fused together, facing in opposite directions.

Bacteria as multicellular organisms

First, let’s make our definition of the individual arbitrarily broader, and consider that the whole bacterial culture, descending from a single ancestral cell, is one individual. Is there anything interesting to see here? For starters, some behaviors of bacterial cells don’t really make sense at the level of the individual cell. For example, bacterial cells regularly perform what could only be described as bacterial sacrifice.

The Kelly criterion in prokaryotes

Content warning: bacterial sacrifice

Antibiotics were already in the environment long before humans started using them, usually secreted by other micro-organisms who want to take your precious nutrients for themselves. Imagine being a bacterium growing peacefully – there is always a risk that some bastard fungus will put their filthy pterulone, sparassol or strobilurin in your soup. Fortunately, bacteria figured out a solution: enemies can’t stop you from growing if you are already not growing.

In its simplest form, this works because the antimicrobial compound needs to be actively incorporated in the growth machinery to cause trouble. Think of a grain of sand being caught in a clockwork mechanism and breaking everything – if the mechanism is stopped, the grain of sand doesn’t enter, and you can resume operation later once the grain of sand has been blown away. Obviously, the drawback is that the bacterium is no longer growing, which kind of defeats the whole point. This is why bacteria have invented what we humans know as the Kelly betting system.

Say a gambler bets on something with 2:1 odds, so if she wins the bet, she gains twice as much as what she invested. She knows she has a 60% chance of winning, so the most profitable strategy is of course to invest 100% of her money every time – this way, she maximizes the return of every winning bet! But obviously this is bad, because eventually she will lose a bet, and then have zero monies remaining. For bacteria, this is like having 100% of the cells growing as fast as they can. This maximizes the population growth rate, until the aforementioned bastard fungus secretes some pleuromutilin or whatever and then the entire population takes it up and goes extinct. To avoid this, our gambler should invest only a fraction of her money on each bet, so her funds still grow exponentially (albeit at a slower rate) but in case of loss she still has some funds to continue. For bacteria, this means always having a small fraction of the population that stops growing, as a backup. This is essentially the bacterial population betting on whether there will be antibiotics in the near future. From the perspective of an individual cell, both situations are bad – either you stop growing, while your friends quickly outnumber you by orders of magnitude and you practically disappear, or you are part of the growing fraction and eventually you die from antibiotic overdose. But if you look at the entire colony, you can see the two sub-populations as two essential parts of a single organism, one that figured out some slick decision theory techniques long before the species of John L. Kelly even evolved a brain.
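To make the gambling analogy concrete, here is a minimal Python sketch of the gambler’s problem, using the numbers from the example (a 2:1 payout and a 60% chance of winning, for which the Kelly fraction is 0.4). The bacterial equivalent of the Kelly fraction would be the share of the population that keeps growing.

#!/usr/bin/python3
# Toy comparison of "bet everything" vs. Kelly betting (illustration only).
import random

random.seed(0)
P_WIN, PAYOUT = 0.6, 2                    # 60% chance of winning a 2:1 bet
KELLY = P_WIN - (1 - P_WIN) / PAYOUT      # optimal fraction to invest = 0.4

def gamble(fraction, n_bets=1000):
    money = 1.0
    for _ in range(n_bets):
        stake = fraction * money
        money += PAYOUT * stake if random.random() < P_WIN else -stake
    return money

print("all-in:", gamble(1.0))     # almost surely ruined long before bet #1000
print("kelly :", gamble(KELLY))   # grows exponentially, despite the losses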

Eating the corpses of your siblings

Content warning: eating the corpses of your siblings.

Similarly, one puzzling feature of bacteria is that they sometimes commit apoptosis. This happens, for example, when food is scarce – some cells may spontaneously explode so that other cells can feed on their remains, increasing the chances that at least one of them will make it out alive when resources come back. If you see each cell as an individual, that is weird, and does not fit well with anything methodological individualism would predict. But if you see the whole colony as the individual, then it is just like your good old typical apoptosis – just like, in the fetal stage, your fingers were all connected by cells until some of them honorably committed seppuku so you would be born with fingers instead of webbed paws.

(One fascinating thing with bacterial apoptosis is that every cell which ever activated these pathways is dead. Thus, if you look at a currently living bacterium, at no point in billions of years of evolution did this pathway ever activate in any of its ancestors. Not even by chance. The entire mechanism evolved and improved only by correlation with other cells, without ever activating in the lineages we can now see.)

Action potentials in biofilms

As a third exhibit of things bacteria do that definitely don’t look like unicellular behavior, there is the recent discovery that some bacteria, after organizing themselves as a biofilm, are able to communicate with each other using electrical waves. The way it works is remotely similar to the action potentials we see in neurons. At a resting state, cells are filled with potassium ions, which makes them electrically polarized. Whenever the polarization disappears, ion channels in the envelope open up, and the potassium ions all exit the cell into the extracellular environment. This, in turn, cancels out the polarization of neighboring cells. The result is this:

Video from Prindle et al., 2015, showing waves of potassium propagating in a colony of tens of thousands of cells.

Supposedly, this mechanism makes sure the outer bacteria will stop eating from time to time, so the nutrients can diffuse all the way to the center and prevent the interior cells from starving. If this does not make you scream “multicellular!”, I don’t know what will.
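If you want a feel for how a local fire-and-recover rule produces a travelling wave, here is a crude Python toy – a three-state cartoon of an excitable medium, not a model of the actual potassium dynamics measured by Prindle et al.:

#!/usr/bin/python3
# Toy 1D "excitable medium": resting cells fire when a neighbor fires, then go
# refractory and recover. A cartoon, not the real potassium model.
N = 30
REST, FIRING, REFRACTORY = ".", "#", "o"
cells = [FIRING] + [REST] * (N - 1)        # one cell depolarizes at the edge

for step in range(12):
    print("".join(cells))
    nxt = []
    for i, state in enumerate(cells):
        if state == FIRING:
            nxt.append(REFRACTORY)                    # just fired: refractory
        elif state == REFRACTORY:
            nxt.append(REST)                          # recovers
        elif FIRING in cells[max(0, i - 1):i + 2]:
            nxt.append(FIRING)                        # a neighbor fired: depolarize
        else:
            nxt.append(REST)
    cells = nxt

The refractory state is what keeps the wave moving forward instead of bouncing back and forth forever.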

In short, rather than being just individual cells fighting against each other, bacteria have evolved hard-wired mechanisms that only make sense if you consider the dynamics of the whole colony. A microbiologist could spend her entire career building a perfect model of one bacterial cell, but she would still be far from understanding all facets of the organism. Oh, and if you are ready to hear a similar point about humans (that is, human communities are multi-body individuals), get your largest fish net and check out this review. I will continue with bacteria, because we have barely scratched the lipopolysaccharide of their weirdness.

Bacterial cells are two-faced pairs of individuals

Now, let’s see what happens with a much narrower definition for an individual. Even narrower than a single cell. Put down that extra-large “big game”-rated landing net and bring the tweezers.

Here is our new definition: an individual is what happens between a birth event and a death event. Now we need to find definitions of birth and death that apply to bacteria. Let’s say, a birth event is when a mother cell divides into two daughters (specifically, cytokinesis). A death event is when a cell is irreversibly broken, is torn apart or becomes too damaged to grow. We have a simple and precise definition, now we can look at bacteria and pick apart the individuals.

One generation goes as follows:

  • The cell extends and roughly doubles in length
  • The middle of the cell constricts and two new poles are constructed
  • The cell divides and you get two cells. Each of them has one old pole that was already there in the previous generation, and one shiny new pole:

Where is the individual here? Now you understand why I came up with that bizarre birth-death definition. First, let’s number the poles according to their age (in generations).

Blink very fast while on shrooms and you might see a Koch snowflake in the bottom sequence.

But what if bacteria age? It turns out that, yes, bacteria age. After a number of generations, old poles accumulate damage. Depending on the growth environment, they may still be fine, or grow slower, or explode in an effusion of bacterial blood. To reduce clutter, I’ll consider that poles have a lifespan of 3 generations, and then the cell is dead (in real life, they hold for much longer, but that wouldn’t be sketchable).
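For the bookkeeping-minded, here is a tiny Python sketch of this simplification. Each cell is just a pair of pole ages; at every division each daughter keeps one of the mother’s poles and builds a fresh one, and a cell dies when its inherited pole reaches the 3-generation limit. Following the first column of the output, you can watch each pole lineage get older for three generations and then vanish, while the colony as a whole keeps growing.

#!/usr/bin/python3
# Toy bookkeeping of pole ages, with the simplification that a pole only
# survives for 3 generations. Each cell is a pair (age of pole 1, age of pole 2).
MAX_AGE = 3
cells = [(0, 0)]                          # one ancestral cell, both poles brand new

for generation in range(1, 6):
    daughters = []
    for a, b in cells:
        # each daughter keeps one of the mother's poles (now one generation
        # older) and builds a brand-new pole (age 0) at the division site
        daughters += [(a + 1, 0), (b + 1, 0)]
    # a cell whose inherited pole has reached the age limit is dead
    cells = [(a, b) for a, b in daughters if a < MAX_AGE]
    print(f"generation {generation}:", sorted(cells))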

Coming back to our custom, “birth-to-death” definition of an individual, you can see that each cell is actually made of two of them – one on the left, one on the right.

Here they are very short-lived and die after three generations, but in real life these “half-bacteria” live for much longer, perhaps hundreds of generations if the conditions are not too bad. But the principle remains the same, there are just a lot more of these diagonal individuals.

Using your ancestors as trashcans

Content warning: yeah, that.

But wait, there is more. As I said, in nice conditions the poles can grow basically forever. Yet they still exhibit aging. And yes, this is all sane and coherent. This is where the titles of the papers become really spooky (Age structure landscapes emerge from the equilibrium between aging and rejuvenation in bacterial populations or Cell aging preserves cellular immortality in the presence of lethal levels of damage), showing how far we are from our typical construction of the individual.

To put it very briefly, take the sketches above where half of the cell is young and half of the cell keeps getting older. Old material accumulates in the old pole, so those cells keep growing slower and slower after each generation. Now add some mixing to it: every generation, the older pole gets a little bit of fresh material, and the younger pole gets a little bit of old material. Eventually the old pole reaches an equilibrium where the new material it inherits exactly compensates for the damage from aging. Since the same thing happens, in reverse, for the young pole, you end up with two attractors:

Slightly adapted from Proenca et al., 2018.
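Here is a minimal Python sketch of this idea – my own simplification, not the actual model from Proenca et al. Each generation produces one unit of damage; at division, the old-pole daughter keeps most of the accumulated damage and the new-pole daughter inherits a small fraction of it. Whatever damage the lineages start with, they settle on their respective attractors.

#!/usr/bin/python3
# Toy model of the two damage attractors (my own simplification).
NEW_DAMAGE = 1.0
MIX = 0.3          # fraction of the damage handed to the new-pole daughter

old_lineage, young_lineage = 0.0, 20.0     # wildly different starting damage
for generation in range(30):
    old_lineage = (1 - MIX) * (old_lineage + NEW_DAMAGE)    # always follow the old pole
    young_lineage = MIX * (young_lineage + NEW_DAMAGE)      # always follow the new pole

print(round(old_lineage, 2))    # ~2.33, the "old" attractor: NEW_DAMAGE * (1 - MIX) / MIX
print(round(young_lineage, 2))  # ~0.43, the "young" attractor: NEW_DAMAGE * MIX / (1 - MIX)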

What is the importance of this? There may be no importance at all, since the old cells are quickly outnumbered by young cells, so they only represent a tiny fraction of the colony. However, there is also some evidence that all kinds of garbage, like misfolded proteins or aggregates, tends to accumulate in the old pole. Perhaps this ensures that at least some cells in the population will be in perfect shape, so in case of trouble, they have a good chance of having at least one survivor (a bit like North Korea preparing a team for the Math Olympiads).

But this, of course, brings us back to collective, multicellular behavior. Life is too complicated to fit in a single fish net.

Wholesale Wikipedias – July 2021

https://en.wikipedia.org/wiki/Concealed_shoes

https://en.wikipedia.org/wiki/Umm_al-Qura_Mosque

https://en.wikipedia.org/wiki/Operation_Vegetarian

https://en.wikipedia.org/wiki/Retired_husband_syndrome

https://en.wikipedia.org/wiki/Berners_Street_hoax

https://en.wikipedia.org/wiki/Gilles_de_Rais

https://en.wikipedia.org/wiki/Love_Jihad

https://en.wikipedia.org/wiki/Ejaculatory_prayer

https://en.wikipedia.org/wiki/Herma

https://en.wikipedia.org/wiki/St._Petersburg_paradox

A Random Clock

I may have found a solution to one of my biggest, longest-standing, most irredeemable problems. For most of my life, I have been consistently late. Whether it’s appointments, attending events, taking trains or joining a zoom call, I’m typically 10 minutes late for everything and it’s ruining my life – not because I actually miss the train (though that happens too) but because I’m constantly rushing and panicking. Whatever I do, I start it in a state of maximum stress and guilt. Obviously, I tried pretty much everything to address the problem, including various artificial rewards and punishments, telling a therapist about it, having people call me to remind me to get ready, taking nootropics, and many more ridiculous ideas. So I thought, “how do all these well-adjusted adults manage to be perfectly on time all the time?” and I did what any well-adjusted normie would do: I tried to formally frame the problem in terms of expected utility theory.

Tricking myself: single-player game theory

Imagine I have to attend a very important scientific conference on the effect of dubstep on mosquitos. The figure below plots how much I enjoy the event depending on the time I arrive.

Arriving early by ten minutes or one hour does not make any difference (or so I presume – this never happened to me). Being just a few minutes late is not a big deal either, since it’s just going to be the speaker testing her microphone or other formalities of no importance. Beyond that, it starts becoming really rude (with some variation depending on which culture you live in) and I risk missing some crucial information, like the definition of a concept central to understanding the equations of mosquitos’ taste for Skrillex.

The second aspect of the problem is how much time I can save by arriving later, which is just a straight line:

Why would I arrive ten minutes early to the Skrillex-as-a-cure-for-dengue talk, when I could spend ten more minutes reading about exorcism under fMRI? Summing both aspects of the problem, the grand unified utility curve looks something like this:

There you have it: the utility peak, the most rational outcome, is obtained by being just a few minutes late. I suppose for most people, this basically means you should arrive on time, since the peak is not that far from the start of the event. But chronically-late people like myself have a distorted vision of the utility curves, which looks more like this:

This might look like a desperate situation, but there is one spark of hope: even in this wildly-distorted version of the utility function, the downward part of the curve (problems with being late) is much steeper than the upward part of the curve (time saved by being late). This asymmetry makes it possible to change the location of the peak by adding some uncertainty, in the form of a random clock. Let me explain.

A rookie approach to not-being-late is to shift your watch 10 minutes into the future. This way, it “looks” like you’re already 10 minutes late when you are actually on time, which might make you speed up through some obscure psychological mechanism. Of course, this does not work, since you know perfectly well your clock is 10 minutes ahead and you compensate accordingly. But what if you ask a friend to shift your watch by a random number of minutes, between 0 and 10? Then, you don’t know how much to compensate. Coming back to the utility function above, we are effectively blurring out the utility function. Here is what happens:

Thanks to the asymmetry of the original peak, the maximum utility is now shifted to the left! Say the mosquito conference starts at 8:00, and the random clock says 7:59. Best case scenario, the clock is 10 minutes ahead, and I still have 11 minutes left, so everything is fine and I can take my time. Worst case scenario, the clock is exactly on time, and the show starts in one minute, and I can’t wait any longer. Since I would rather be 10 minutes early than 10 minutes late, I stop reading this very important exorcism paper, and hurry to the conference room.
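You can check the peak-shifting effect numerically. Below is a minimal numpy sketch with a made-up asymmetric utility curve (a shallow reward for arriving later, a steep penalty once you are truly late); blurring it over a 10-minute window moves the optimum earlier.

#!/usr/bin/python3
# Blurring an asymmetric utility curve moves its peak to the left (earlier).
import numpy as np

minutes = np.arange(-30, 31)                                   # arrival time, minutes after the start
utility = 0.3 * minutes - 3.0 * np.maximum(0, minutes - 3)     # made-up curve, peaks a few minutes late

# With the random clock, the perceived utility is averaged over a 10-minute window
blurred = np.convolve(utility, np.ones(10) / 10, mode="same")

print("best arrival, exact clock :", minutes[np.argmax(utility)], "min")   # a few minutes late
print("best arrival, random clock:", minutes[np.argmax(blurred)], "min")   # earlier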

Self-blinding in practice

In the early development phase I asked a trusted friend to pick a number between 0 and 10 and shift my watch by this amount in the future without telling me. This was for prototyping only, since it has some disadvantages:

  • I don’t want to ask friends to change my watch all the time, especially if I have to explain the reasoning behind it every time,
  • My friend could totally troll me in various ways, like shifting my clock two hours in the future. I’m clueless enough not to notice. But she is an amazing person and did not do that.

Then, I used this very simple Python script:

#!/usr/bin/python3
import random
import time

# Current time, shifted forward by a random amount between 0 and 10 minutes
print(time.ctime(time.time() + 60 * 10 * random.random()))

It takes the current time, draws a random number between 0 and 10, and adds the same number of minutes to the time.

I have an advantage for this project: I usually wear a wristwatch at all times. This makes the practical implementation of the random clock much easier – I just need to shift my wristwatch, and rely exclusively on it without ever looking at any other clock. I also have an alarm clock and a regular clock on the wall of my room, so I simply shifted them to match my watch. I also had clocks on my computer and my phone, and there is surely a way to shift them too, but I was lazy and just disabled the time display on both devices (in hindsight, I think removing the clock from computers and smartphones is a healthy decision in its own right, as it forces you to get your eyes off the screen from time to time – you should give it a try). Here is my full randomization procedure:

  • Scramble my watch and alarm clock by a large amount, so I can’t read the time when I randomize them,
  • Wait until I can no longer tell what time it is (to a 10 minutes margin of error),
  • Run the script,
  • Set my watch and clocks to the time prescribed by the script.

And then, it is all about avoiding looking at the various clocks in my environment that display the true time (sometimes the microwave will just proudly display the time without warning). Who will win – my attempt at deliberately adding uncertainty to the world, or my microwave? Let’s do the experiment.

Putting a number on it

For a few days before and after trying out the random clock, I kept track of the time when I arrived at various appointments and events. For the random phase, I would just write down the raw time displayed on my watch; then, before re-randomizing it, I would check what the shift was and subtract it from the data to know at what time I really arrived. My astonishing performance can be witnessed in the figure below:

The horizontal segments represent the median. As you can see, I went from a median lateness of nine minutes to only one minute. I’m still not perfectly calibrated, but this might be the first time in my whole life I am so close to being on time, so I’d consider this a success. In both series, there are a few outliers where I was very very late (up to 35 min), but those are due to larger problems – for example, the green outlier was when my bicycle broke and I had to go to a band rehearsal on foot. Apparently, I am so bad at managing time that my lateness undergoes black swan events.

Contrary to what I expected, it is very easy to just stop looking at all the clocks in the outside world, and only rely on my watch. Of course, the world is full of danger and sometimes I caught a glimpse of whatever wild clock someone carelessly put in my way. In that case, I just had to avoid checking my watch for a few minutes to avoid breaking the randomization. A bigger problem is seeing when events actually start. Whether I like it or not, my system 1 can’t help but infer things about the real time by seeing when other people arrive, or when the conference actually starts, or when some !#$@ says “alright, it’s 10:03, should we start?”. If this narrows the distribution too much, I have to randomize again. I did not find it to be a major problem, only having to re-randomize about once a week. In fact, when I revealed the real shift to myself before re-randomizing, I often found that what I had inferred about the true time was completely wrong. Thus, even if I believe I’ve inferred the real time from external clues, I can tell myself it’s probably not accurate anyway. This only makes my scheme stronger.

A continuously-randomizing clock

Since no randomization is eternal, am I doomed to re-randomize every few weeks all my life? There is actually a pretty simple solution to avoid this, which is to use a continuously-randomizing clock. Instead of manually randomizing it from time to time, the clock is constantly drifting back and forth between +0 min and +10 min, slightly tweaking the length of a second. A very simple way to do that is to add a sine function to the real time:

#!/usr/bin/python3
import math
import time

real_time = time.time()
shift = (1 + math.sin(real_time / 1800)) / 2   # between 0 and 1, oscillating with a period of pi hours
wrong_time = real_time + shift * 60 * 10       # shifted forward by 0 to 10 minutes
print(time.ctime(wrong_time))

In this example, the clock shift will oscillate between 0 and 10 minutes once every π hours. Of course it is not really random anymore, but it does not matter, since we are just trying to trick our system 1 so it cannot figure out the real time against our will. Finding the real time might be possible with some calculations, but those would involve your system 2, and that one is supposed to be under your control. All that matters is that the oscillation period is not an obvious multiple of one hour. The snippet above uses a period of π hours, which is not even rational, so we are pretty safe.

The advantage of using a sine function rather than a fancy random variable is that it is magically synchronized across all clocks that use the same formula. If you use this on two different computers, they will both give the same (wrong) time, without the intervention of any internet. As I said, I am fine with my old analog watch, but if you are the kind of person who uses a smartwatch, give it a try and tell me how it went. Or perhaps I will try to build one of these Arduino watches.

In my tests, I found that my archaic wristwatch-based system is already good enough for my own usage, so I will stick to it for the moment. Maybe it will keep on working, maybe the effect will fade after a while, once the novelty wears off. It is also quite possible that I was more careful than usual because I really wanted the experiment to succeed. Maybe I will get super good at picking up every clue to guess the real time. I will update this post with the latest developments. Anyways, there is something paradoxical about manipulating oneself by deliberately adding uncertainty – a perfectly rational agent would always want more accurate information about the world, and would never deliberately introduce randomness. But I am not a perfectly rational agent, I did introduce uncertainty, and it worked.

Wholesale Wikipedias – June 2021

Chemistry edition. Anyone have a few milligrams to spare?

https://en.wikipedia.org/wiki/Magic_acid

https://en.wikipedia.org/wiki/Chlorine_trifluoride

https://en.wikipedia.org/wiki/Bremelanotide

https://en.wikipedia.org/wiki/Fenestrane

https://en.wikipedia.org/wiki/Resiniferatoxin

https://en.wikipedia.org/wiki/Megaphone_(molecule)

https://en.wikipedia.org/wiki/Isosorbide_dinitrate%2Fhydralazine

https://en.wikipedia.org/wiki/Olympiadane

Wholesale Wikipedias – May 2021

I almost forgot about this.

https://en.wikipedia.org/wiki/Oil_futures_drunk-trading_incident

https://en.wikipedia.org/wiki/Lady_tasting_tea

https://en.wikipedia.org/wiki/Ribs_(recordings) (see also, samizdat)

https://en.wikipedia.org/wiki/Long_line_(topology)

https://en.wikipedia.org/wiki/Bald-hairy

https://en.wikipedia.org/wiki/Blind_Willie_Johnson

https://en.wikipedia.org/wiki/Non-human_electoral_candidates

https://en.wikipedia.org/wiki/Osama_Vinladen

https://en.wikipedia.org/wiki/List_of_lists_of_lists

Wholesale wikipedias – April 2021

https://en.wikipedia.org/wiki/Curse_of_the_Colonel

https://en.m.wikipedia.org/wiki/Anthropodermic_bibliopegy

https://en.wikipedia.org/wiki/Mhoon_Landing

https://en.wikipedia.org/wiki/Lyman-alpha_forest

https://en.wikipedia.org/wiki/Mobro_4000

https://en.wikipedia.org/wiki/Lacrymaria_olor (https://www.youtube.com/watch?v=ZquzlvEEZq8)

https://en.wikipedia.org/wiki/Mathematical_coincidence

The Holy Algorithm

As will surely not have escaped your notice, this weekend is Easter. Why now? The date of Easter is determined by a complicated process called the Computus Ecclesiasticus. I will just quote the Wikipedia page:

The Easter cycle groups days into lunar months, which are either 29 or 30 days long. There is an exception. The month ending in March normally has thirty days, but if 29 February of a leap year falls within it, it contains 31. As these groups are based on the lunar cycle, over the long term the average month in the lunar calendar is a very good approximation of the synodic month, which is 29.53059 days long. There are 12 synodic months in a lunar year, totaling either 354 or 355 days. The lunar year is about 11 days shorter than the calendar year, which is either 365 or 366 days long. These days by which the solar year exceeds the lunar year are called epacts. It is necessary to add them to the day of the solar year to obtain the correct day in the lunar year. Whenever the epact reaches or exceeds 30, an extra intercalary month (or embolismic month) of 30 days must be inserted into the lunar calendar: then 30 must be subtracted from the epact.

If your thirst for knowledge is not satisfied, here is a 140-page document in Latin with more detail.

As far as I understand, during the Roman era the Pope or one of his bureaucrats would perform the computus, then communicate the date to the rest of Christianity, and everybody could eat their chocolates at the same time. Then the Middle Ages happened and communication became much harder, so instead they came up with a formula so people could compute the date of Easter locally. Of course, the initial formulas had problems – with the date of Easter dangerously drifting later and later in the year over the centuries, and don’t even get me started on calendar changes. Eventually Carl Friedrich Gauss entered the game and saved humanity once again with a computationally-efficient algorithm (I am over-simplifying the story so you have more time to eat chocolate).
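If you want to run the computus at home, here is a sketch of the “anonymous Gregorian” algorithm (the Meeus/Jones/Butcher variant, a close relative of Gauss’s method), with the traditional one-letter variable names:

#!/usr/bin/python3
# The "anonymous Gregorian" computus. Returns (month, day) of Easter Sunday.
def easter(year):
    a = year % 19                          # position in the 19-year Metonic cycle
    b, c = divmod(year, 100)
    d, e = divmod(b, 4)
    f = (b + 8) // 25
    g = (b - f + 1) // 3
    h = (19 * a + b - d - g + 15) % 30     # epact-like correction
    i, k = divmod(c, 4)
    l = (32 + 2 * e + 2 * i - h - k) % 7   # days until the following Sunday
    m = (a + 11 * h + 22 * l) // 451
    month, day = divmod(h + l - 7 * m + 114, 31)
    return month, day + 1

print(easter(2021))   # -> (4, 4): Easter Sunday falls on 4 April 2021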

But it is now 2021, and I’m wondering how they run the algorithm in practice. I looked up “how is the date of Easter calculated” but all the results are about the algorithms themselves, not about their practical implementation. I have a few hypotheses:

  1. There are responsible Christians everywhere who own printed tables with the dates of Easter already computed for the next few generations. If your Internet goes down, you can probably access such tables at the local church.
https://upload.wikimedia.org/wikipedia/commons/e/e4/DiagrammePaques_Flammarion.jpg
Here is such a table from 1907 (Wikimedia commons)

Of course this does not really solve the problem: who comes up with these tables in the first place? Who will make new ones when they expire?

2. There is a ceremony in the Vatican where a Latin speaker ceremoniously performs the Holy Algorithm by hand, outputs the date of Easter, prints “Amen” for good measure, and then messengers spread the result to all of Christianity.

3. Responsible Christians everywhere own a Computus Clock, a physical device that tells you if it is Easter or not. When in doubt, you just pay a visit to that-guy-with-the-computus-clock. Then, it is like hypothesis 1 except it never expires.

4. There is a software company (let’s call it Vatican Microsystems®) that managed to persuade the Pope to buy a license for their professional software solution, Computus Pro™ Enterprise Edition 2007 – including 24/7 hotline assistance – that only runs on Windows XP, and they have a dedicated computer in the Vatican that is used once in a while to run its 30,000 lines of hard Haskell or something. Then, it goes just like hypothesis 2.

(Of course, all of these solutions are vulnerable to hacking. It might be as easy as sneaking into a church and replacing their Easter tables with a fake. A talented hacker might even have Easter coincide with April Fools’ Day.)

If an active member of the Christian community reads this and knows how it is done in practice, I am all ears.

Anyways, happy Easter and Amen, I guess.

Wholesale wikipedias – March 2021

Wikipedias for the wikipedia God.

https://en.wikipedia.org/wiki/Everyday_life

https://en.wikipedia.org/wiki/999-year_lease

https://en.wikipedia.org/wiki/UEFA_Champions_League_Anthem

https://en.wikipedia.org/wiki/Artificial_cranial_deformation

https://en.wikipedia.org/wiki/Metro-2

https://en.wikipedia.org/wiki/Cookiecutter_shark

https://en.wikipedia.org/wiki/Quantum_tic-tac-toe

https://en.wikipedia.org/wiki/Rolling_coal

Average North Korean Mathematicians

Here are the top-fifteen countries ranked by how well their teams do at the International Math Olympiads:

When I first saw this ranking, I was surprised to see that North Koreans have such an impressive track record, especially when you factor in their relatively small population. One possible interpretation is that East Asians are just particularly good at mathematics, just like in the stereotypes, even when they live in one of the world’s worst dictatorships.

But I don’t believe that. In fact, I believe North Koreans are, on average, particularly bad at math. More than 40% of the population is undernourished. Many of the students involved in the IMOs grew up in the 1990s, during the March of Suffering, when hundreds of thousands of North Koreans died of famine. That is not exactly the best context to learn mathematics, not to mention the direct effect of nutrients on the brain. There do not seem to be a lot of famous North Korean mathematicians either (there is actually a candidate from the North Korean IMO team who managed to escape during the 2016 Olympiads in Hong Kong; he is now living in South Korea, and I hope he becomes a famous mathematician). Thus, realistically, if all 18-year-olds from North Korea were to take a math test, they would probably score much worse than their South Korean neighbors. And yet, Best Korea reaches almost the same score with only half the source population. What is their secret?

This piece on the current state of mathematics in North Korea gives it away. “The entire nation suffered greatly during and after the March of Suffering, when the economy collapsed. Yet, North Korea maintained its educational system, focusing on the gifted and special schools such as the First High Schools to preserve the next generation. The limited resources were concentrated towards gifted students. Students were tested and selected at the end of elementary school.” In that second interpretation, the primary concern of the North Korean government is to produce a few very brilliant students every year, who will bring back medals from the Olympiads and make the country look good. The rest of the population’s skills at mathematics are less of a concern.

When we receive new information, we update our beliefs to keep them compatible with the new observations, doing an informal version of Bayesian updating. Before learning about the North Korean IMO team, my prior beliefs were something like “most of the country is starving and their education is mostly propaganda, there is no way they can be good at math”. After seeing the IMO results, I had to update. In the first interpretation, we update the mean – the average math skill is higher than I previously thought. In the second interpretation, we leave the mean untouched, but we make the upper tail of the distribution heavier. Most North Koreans are not particularly good at math, but a few of them are heavily nurtured for the sole purpose of winning medals at the IMO. As we will see later in this article, this problem has some pretty important consequences for how we understand society, and those who ignore it might make pretty bad policy decisions.

But first, let’s break it apart and see how it really works. There will be a few formulas, but nothing that can hurt you, I promise. Consider a probability distribution where the outcome x has probability density p(x). For any integer n, the formula below gives what we call the nth moment of the distribution, centered on \mu.

\int_{\mathbb{R}}p(x)(x-\mu)^ndx

To put it simply, moments describe how things are distributed around a center. For example, if a planet is rotating around its center of mass, you can use moments to describe how its mass is distributed around it. But here I will only talk about their use in statistics, where each moment encodes one particular characteristic of a probability distribution. Let’s sketch some plots to see what it is all about.

First moment: replace n with 1 and μ with 0 in the previous formula. We get

\int_{\mathbb{R}} x \, p(x) \, dx

which is – surprise – the definition of the mean. Changing the first moment just shifts the distribution towards higher or lower values, while keeping the same shape.

Second moment: for n = 2, we get

\int_{\mathbb{R}}p(x)(x-\mu)^2dx

If we set μ to be (arbitrarily, for simplicity) equal to the mean, we obtain the definition of the variance! The second moment around the mean describes how values are spread away from the average, while the mean remains constant.

Third moment (n = 3): the third moment describes how skewed (asymmetric) the distribution is, while the mean and the variance remain constant.

Fourth moment (n = 4): this describes how leptokurtic or platykurtic your distribution is, while the mean, variance and skew remain constant. These words basically describe how long the tails of your distribution are, or “how extreme the extreme values are”.

You could go on to higher n, each time bringing in more detail about what the distribution really looks like, until you end up with a perfect description of the distribution. By only mentioning the first few moments, you can describe a population with only a few numbers (rather than infinite), but it only gives a “simplified” version of the true distribution, as on the left graph below:

Say you want to describe the height of humans. As everybody knows, height follows a normal distribution, so you could just give the mean and standard deviation of human height, and get a fairly accurate description of the distribution. But there is always a wise-ass in the back of the room to point out that the normal distribution is defined over \mathbb{R}, so for a large enough population, some humans will have a negative height. The problem here is that we only gave information about the first two moments and neglected all the higher ones. As it turns out, humans are only viable within a certain range of height, below or above which people don’t survive. This erodes the tails of the distribution, effectively making it more platykurtic (if I can get even one reader to use the word platykurtic in real life, I’ll consider this article a success).
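If you prefer code to integrals, here is a quick numpy sketch that estimates the first four moments of a sample and checks the claim above – the 150–190 cm “viability range” is made up purely for illustration:

#!/usr/bin/python3
# Sketch: estimate the first four moments of a sample, and check that chopping
# off the tails of a normal distribution makes it platykurtic (kurtosis < 3).
import numpy as np

def moments(x):
    mu = x.mean()
    m2 = ((x - mu) ** 2).mean()                 # variance
    m3 = ((x - mu) ** 3).mean() / m2 ** 1.5     # skewness
    m4 = ((x - mu) ** 4).mean() / m2 ** 2       # kurtosis (3 for a perfect normal)
    return mu, m2, m3, m4

rng = np.random.default_rng(0)
heights = rng.normal(170, 10, size=1_000_000)          # make-believe human heights, in cm
viable = heights[(heights > 150) & (heights < 190)]    # erode the tails (here, at 2 sd)

print("normal   :", [round(v, 2) for v in moments(heights)])
print("truncated:", [round(v, 2) for v in moments(viable)])   # kurtosis drops below 3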

Let’s come back to the remarkable scores of North Koreans at the Math Olympiads. What these scores teach us is not that North Korean high-schoolers are really good at math, but that many of the high-schoolers who are really good at math are North Koreans. On the distribution plots, it would translate to something like this:

With North Koreans in purple and another country that does worse in the IMOs (say, France) in black. So you are looking at the tails and trying to infer something about the rest of the distribution. Recall the plots above. Which one could it be?

Answer: just by looking at the extreme values, you cannot possibly tell, because any of these plots would potentially match. In Bayesian terms, each moment of the distribution has its own prior, and when you encounter new information, you could in principle update any of them to match the new data. So how can we make sure we are not updating the wrong moment? When you have a large representative sample that reflects the entire distribution, this is easy. When you only have information about the “top 10” extreme values, it is impossible. This is unfortunate because the extreme values are precisely what gets all our attention – most of what we see in the media is about the most talented athletes, the most dishonest politicians, the craziest people, the most violent criminals, and so forth. Thus, when we hear new information about extreme cases, it’s important to be careful about which moment to update.

This problem also occurs in reverse – in the same way looking at the tails doesn’t tell you anything about the average, looking at the average doesn’t tell you anything about the tails. An example: in a typical year, more Americans die from falling than from viral infections. So one could argue that we should dedicate more resources to preventing falls than viral infections. Except the number of deaths from falls is fairly stable (you will never have a pandemic of people starting to slip in their bathtubs 100 times more than usual). On the other hand, virus transmission is a multiplicative process, so most outbreaks will be mostly harmless (remember how SARS-CoV-1 killed fewer than 1,000 people, those were the days) but a few of them will be really bad. In other words, yearly deaths from falls have a higher mean than deaths from viruses, but since the latter are highly skewed and leptokurtic, they might deserve more attention. (For a detailed analysis of this, just ask Nassim Taleb.)

There are a lot of other interesting things to say about the moments of a probability distribution, like the deep connection between them and the partition function in statistical thermodynamics, or the fact that in my drawings the purple line always crosses the black line exactly n times. But these are for nerds, and it’s time to move on to the secret topic of this article. Let’s talk about SEX AND VIOLENCE.

This will not come as a surprise: most criminals are men. In the USA, men represent 93% of the prison population. Of course, discrimination in the justice system explains some part of the gap, but I doubt it accounts for the whole difference. Accordingly, it is a solid cultural stereotype that men use violence and women use communication. Everybody knows that. Nevertheless, having just read the previous paragraphs, you wonder: “are we really updating the right moment?”

A recent meta-analysis by Thöni et al. sheds some light on the question. Published in the journal Psychological Science, it synthesizes 23 studies (with >8000 participants) about gender differences in cooperation. In such studies, participants play cooperation games against each other. These games are essentially a multiplayer, continuous version of the Prisoner’s Dilemma – players can choose to be more or less cooperative, with possible strategies ranging from total selfishness to total selflessness.

So, in cooperation games, we expect women to cooperate more often than men, right? After all, women are socialized to be caring, supportive and empathetic, while men are taught to be selfish and dominant, aren’t they? To find out, Thöni et al. aligned all of these studies on a single cooperativeness scale, and compared the scores of men and women. Here are the averages, for three different game variants:

This is strange. On average, men and women are just equally cooperative. If society really allows men to behave selfishly, it should be visible somewhere in all these studies. I mean, where are all the criminals/rapists/politicians? It’s undeniable that most of them are men, right?

The problem with the graph above is that it only shows averages, so it misses the most important information – that men’s level of cooperation is much more variable than women’s. So if you zoom on the people who were either very selfish or very cooperative, you find a wild majority of men. If you zoom on people who kind-of cooperated but were also kind-of selfish, you find predominantly women.
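Here is a small numpy sketch of why this matters. The numbers are invented and have nothing to do with the Thöni et al. data – the point is simply that, with equal means, the group with the larger variance dominates both tails:

#!/usr/bin/python3
# Two groups with the *same mean* but different variance: look only at the
# extremes, and one group dominates. Purely illustrative numbers.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
women = rng.normal(0.0, 1.0, n)     # cooperation "score", arbitrary units
men   = rng.normal(0.0, 1.3, n)     # same mean, higher variance (made up)

for cut in (2, 3):
    m = (np.abs(men) > cut).sum()
    w = (np.abs(women) > cut).sum()
    print(f"beyond ±{cut}: {m / (m + w):.0%} of the extreme scores come from the high-variance group")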

As I’m sure you’ve noticed, the title of the Thöni et al. paper says “evolutionary perspective”. As far as I’m concerned, I’m fairly skeptical about evolutionary psychology, since it is one of the fields with the worst track record of reproducibility ever. To be fair, a good part of evpsych is just regular psychology where the researchers added a little bit of speculative evolutionary varnish to make it look more exciting. This aside, real evpsych is apparently not so bad. But that’s not the important part of the paper – what matters is that there is increasingly strong evidence that men are indeed more variable than women in behaviors like cooperation. Whether it is due to hormones, culture, discrimination or cultural evolution is up for debate, and I don’t think the current data is remotely sufficient to answer this question.

(Side note: if you must read one paper on the topic, I recommend this German study where they measure the testosterone level of fans of a football team, then have them play Prisoner’s Dilemma against fans of a rival team. I wouldn’t draw any strong conclusion from this just yet, but it’s a fun read.)

The thing is, men are not only found to be more variable in cooperation, but in tons of other things. These include aggression, exam grades, PISA scores, all kinds of cognitive tests, personality, creativity, vocational interests and even some neuroanatomical features. In the last few years, support for the greater male variability hypothesis has accumulated, so much so that it is no longer possible to claim to understand gender or masculinity without taking it into account.

Alas, that’s not how stereotyping works. Instead, we see news reports showing all these male criminals, and assume that our society turns men into violent and selfish creatures, and call them toxic (here is Dworkin: “Men are distinguished from women by their commitment to do violence rather than to be victimized by it. Men are rewarded for learning the practice of violence in virtually any sphere of activity by money, admiration, recognition, respect, and the genuflection of others honoring their sacred and proven masculinity.” Remember – in the above study, the majority of “unconditional cooperators” were men). Internet people make up a hashtag to ridicule those who complain about the generalization. We see all these male IMO medalists, and – depending on your favorite political tradition – either assume that men have an unfair advantage in maths, or that they are inherently better at it. The former worldview serves as a basis for public policy. The question of which moment to update rarely even comes up.

This makes me wonder whether this process of looking at the extremes then updating our beliefs about the mean is just the normal way we learn. If that is the case, how many other things are we missing?