The Almanack of Amazing Things (#2)

In this category of Innumerabilibus I will regularly share some quick ideas, insights and facts that I have recently discovered while listening to podcasts, reading books, newsletters etc. Posts in this series have the tag “The Almanack of Amazing Things”. Links to sources are included!

  • Ancient Earth’s… Google Earth: fascinating website showing a navigable map of our planet at different periods of its history, starting from 750 million years ago. I enjoyed seeing how Europe was a huge archipelago connecting America to Asia and Africa circa 100M years ago, as well as the supercontinents that preceded Pangea.
  • A naming trivia: the word “minute” comes from the Latin pars minuta prima, meaning “first small part” because it’s the first division of the hour. The “second small part” of the hour (Latin: pars minuta secunda) became the word “second”.
  • No one knows how much governments can borrow: economist Noah Smith explains how one of the biggest questions of today’s Economics (how much public debt can a country afford before hyperinflation kicks in?) is fundamentally unresolved and strangely under-researched. That’s quite ironic for a topic where confident opinions abound across the political spectrum.
  • Lichtenberg figures: the pattern created by lightning in the sky is a particular case of a more general phenomenon (Lichtenberg figures) that occurs every time electricity discharges through an insulating material. Lichtenberg figures can appear on wood, glass or even human beings…
Lichtenberg figures on wood

My dog’s theory of mind

I like it when animals behave like humans. But I find even more fascinating those human behaviours that have no animal equivalent: they shed light on how humans are very special animals after all.

In my family we have had dogs since I was 10 and I have enjoyed many of their human-like behaviours. One that has always impressed me is their guilty look after doing something naughty while left alone in the house (there are entire compilations online devoted to this genre).

However, there is one scenario where that cute guilty look doesn’t play out the way it would with humans: when we leave multiple dogs in the house (yes, there are multiple) and only one has done something naughty (say, chewing toilet paper in the bathroom), only that dog looks guilty when we come back. This happens with my mother’s two dogs (Sushi and Trilly), and I always find it comical how the naughty dog (typically Trilly) reveals itself without even trying to play dumb, which is what a child would do. After all, we have no way of knowing which dog chewed the toilet paper!

Are dogs incapable of being deceptive?

The real explanation is actually more interesting and I came across it recently while reading this passage from the book “Knowledge: A Very Short Introduction” by Jennifer Nagel (emphasis mine):

“… human beings can also keep track of the ways in which others are mistaken, and this is something that no other animal does (as far as we can tell). You can see another person as having a false belief: playing a practical joke on a friend, you empty his cereal box and fill it with rubber insects. You know, as you watch him sitting down at the breakfast table, that he is expecting cereal to pour from the box. You know that he has an inner representation that doesn’t match the actual outer reality. No other animal seems to be capable of representing this kind of state, even in situations in which it would be highly advantageous. In experimental tests of whether an animal can keep track of another’s false belief, all non-human animals fail.”

What an elegant explanation for Trilly’s behaviour! The naughty dog has no way of representing its owner’s state of mind, i.e. that the owner is mistaken or uncertain about the culprit[1]. It just implicitly assumes the owner knows what the dog knows: that it is guilty!

Knowing that someone else is wrong – and taking advantage of it – is a very sophisticated cognitive process, and uniquely human. Such a crystal clear demonstration of this has been playing out all along right in front of me, every time Trilly chews some toilet paper.

[1] Dogs don’t have a theory of mind, to use the technical terminology.

The Almanack of Amazing Things (#1)

New category on Innumerabilibus! Here I will regularly share some quick ideas, insights and facts that I have recently discovered while listening to podcasts, reading books, newsletters etc. Posts in this series will have the tag “The Almanack of Amazing Things”. Links to sources are included!

  • Facts don’t change our minds. Friendship does: very insightful essay by James Clear on how to be more effective at convincing people to change their mind. It’s not just a question of proving your point, but also of being welcoming and gentle while proving your point: “Convincing someone to change their mind is really the process of convincing them to change their tribe. If they abandon their beliefs, they run the risk of losing social ties. You can’t expect someone to change their mind if you take away their community too. You have to give them somewhere to go. Nobody wants their worldview torn apart if loneliness is the outcome“.
  • The entire Universe could collapse at any time due to a complex quantum mechanism: the mechanism is called vacuum decay and it involves the famous Higgs boson. The good news is that it is just a hypothesis and, even if true, the process would take hundreds of billions of years anyway. I learned about this and other ways the Universe could end by listening to Katie Mack in a great episode of Sean Carroll’s Mindscape podcast.
  • A naming trivia: the Spanish city of Pamplona, famous for its “running of the bulls“, is named after the Roman general Pompey, who founded it.
  • Education may explain the last 10 years of global politics better than globalisation: What drove the working class away from left-wing parties and towards right-wing leaders? The standard answer is that the Left was guilty of embracing the globalisation agenda that ended up harming many blue collar workers in the West. Data scientist David Shor objects to this theory by pointing out that, for example, support for Trump in the US is strong even in areas that benefited from trade with China (e.g. Iowa). Shor thinks the reason for this electoral shift is instead the fact that liberal values have always been irreconcilable with the values of a majority of blue collar workers: this clash is becoming apparent only now because the huge, recent increase in education has created a core of liberal, mostly young supporters, to whom left-wing parties can finally advertise their “real” policies. Shor discusses this and other ideas about data-driven political campaigns on Julia Galef’s “Rationally Speaking” podcast.

The best argument for Brexit (and my objection)

The most sensible thing to do, when comparing two conflicting ideas, is to focus on the best argument in support of each side. That is where the conflict is most likely to be resolved, and the best idea (or best combination of ideas) to be found. And yet we fail to follow this simple rule virtually all the time. When we attack an idea we don’t like, we systematically aim for its weakest, most grotesque version – after all, we are humans, and we want to win arguments more than we want to discover the truth.

The inevitable consequence of this is that groups supporting opposite ideas end up talking past one another and misunderstanding each other: neither side recognises the ideas that its rivals are attacking, and there is no real progress in the conversation.

In the spirit of fighting this tendency in myself I thought I would periodically share on Innumerabilibus (under the new category called “The Hidden Argument”) what I recognise as the best argument for an idea I don’t support, as well as my (best) objection to that argument.

I will start today with the best argument in support of Brexit that I came across.

The best argument in support of Brexit[1]

The bureaucratic nature of the EU’s institutions makes them unsuitable for effective decision making. In particular, it makes it hard to correct bad policies because no one is really accountable for them. Also, since these policies are often the result of broad compromises among countries, they don’t really reflect anyone’s position, making it even harder to pinpoint who was wrong if things go badly. The effectiveness of a political system lies in its ability to correct errors, and this is best achieved at the smaller scale of single countries, where accountability is clearer.

Why it is a good argument

The argument’s premise is very solid and it is rarely spelled out so clearly: the reason we should like democracy is not some abstract concept of justice or fairness. Rather, it is because democracies allow us to test different policies and then adjust/withdraw them – if unsuccessful – in a relatively smooth and peaceful way through regular elections (this idea dates back to the philosopher Karl Popper). Put this way, it seems true that a supranational body like the EU (where changes in policies are slower as they are based on wide consensus and bureaucracy) is less effective at delivering such error correction than single countries.

Why it didn’t make me change my mind nonetheless

First of all, even though I find the “error correction framing” quite compelling, I don’t think the democratic process can be completely reduced to that. Voters are humans who, as such, are naturally forgetful, short-term focused and inconsistent. Framing their votes as thoughtful error correction of policies that were voted in the previous election seems a bit naïve. Tribalism, charisma, bias and disinformation seem to play a much more important role in deciding whom the average voter is going to vote for.

Secondly, even if we accept the “error correction framing”, we know from game theory that some optimal policies that benefit everyone simply cannot be achieved short of an enforced and supervised collaboration among the players[2]. Any advantage in terms of “agility” linked to country-level decision making should be weighed against the (mathematically-proven) disadvantages that come from removing supranational institutions.

Finally, this whole pro-Brexit argument is more like an argument against proportional electoral systems than against supranational institutions like the EU. Today the EU happens to work (mostly) through bureaucratic and proportional mechanisms for historical reasons, but this doesn’t have to be the case forever. A hypothetical, future “United States Of Europe” may well end up having a majoritarian electoral system, hence becoming a more efficient error correction system. Ironically, what stops the EU from becoming such a political union is mainly the opposition of countries like the UK, which then criticise it for not being as effective as individual countries.

[1] I first came across this pro-Brexit argument through the physicist David Deutsch, great scientist and follower of Popper’s ideas in both epistemology and political science. A thorough summary of his view in support of Brexit can be found in this interview.

[2] For example, the so-called Nash equilibria that are not Pareto optimal.
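The classic textbook illustration of footnote [2] is the prisoner’s dilemma, which can be sketched in a few lines of code. Note that the payoff values below are the standard textbook ones, chosen for illustration, and are not tied to any specific policy example.

```python
# A minimal sketch of a Nash equilibrium that is not Pareto optimal:
# in the prisoner's dilemma, mutual defection is the equilibrium even
# though mutual cooperation would make both players better off.

# payoff[(my_move, their_move)] = my payoff; higher is better
payoff = {
    ("cooperate", "cooperate"): 3,
    ("cooperate", "defect"):    0,
    ("defect",    "cooperate"): 5,
    ("defect",    "defect"):    1,
}

def best_response(their_move):
    """The move that maximises my payoff, given the other player's move."""
    return max(["cooperate", "defect"], key=lambda m: payoff[(m, their_move)])

# Defecting is the best response whatever the other player does...
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"

# ...so (defect, defect) is the Nash equilibrium, yet both players would
# be strictly better off at (cooperate, cooperate): 1 < 3 for each.
print(payoff[("defect", "defect")], "<", payoff[("cooperate", "cooperate")])
```

Without an external enforcer of cooperation, each rational player defects; this is exactly the kind of outcome that supervised collaboration among players can avoid.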

Game the algorithm

Social media has long been blamed for fuelling polarisation and echo chambers. I strongly believe this accusation to be well founded, given my personal experience with e.g. political fights on Facebook with (ex-) acquaintances, or family members trapped in fake news bubbles on YouTube. The way we underestimated social media’s power over society is one of the biggest historical oversights of the early 21st century.

But social media can also be a powerful tool for personal growth, and we rarely talk about that. There is a learning curve, but it can be awesome for learning and for opening your intellectual horizons. As someone said on Twitter, you just have to “game the algorithm until your feed is nothing but wholesome content and wisdom worth implementing“.

To prove this point I thought I would give here a few examples of interesting things I have been learning on Twitter since I stopped using it to fight with strangers about politics or religion and started following inspiring people instead.

Fascinating insights into the future…

… and into the past…

… wisdom at the highest level…

… and deep insights which make me a better person every day, and maybe even a better writer:

New and interesting mistakes

I never cease to be surprised by how smart people can be very stupid. To be more precise, people can be very clever in certain areas and extremely dumb in others. Why do we compartmentalise intelligence?

You see people who are clearly bright – e.g. succeeding in highly competitive and difficult professions – who at the same time believe the most naive conspiracy theories. Or otherwise thoughtful individuals who, when it comes to politics, can only repeat the ideology they uncritically received from their parties or tribes[1].

A particular case of this paradox is the many smart people who are also deeply unhappy. The proven lack of correlation between intelligence and happiness is more puzzling than it may seem at first: after all, “humans are intelligent to the extent that our actions can be expected to achieve our objectives” (Stuart Russell). If the most intelligent individuals don’t achieve the supreme objective (happiness) better than others, then… in what sense are they more intelligent?

I’ve recently come across a brilliant passage that sheds some light on this paradox. In his book “Rationality: From AI to Zombies“, Eliezer Yudkowsky writes:

“When I think about how my younger self very carefully followed the rules of Traditional Rationality in the course of getting the answer wrong, it sheds light on the question of why people who call themselves “rationalists” do not rule the world. You need one whole hell of a lot of rationality before it does anything but lead you into new and interesting mistakes.”

Yudkowsky is really onto something here. Many smart people are smart only in the sense that they can fool themselves in highly sophisticated ways. After all, they are prone to the same biases and prejudices as less smart people, but because they can make a more convincing case for their errors, they cling to them more deeply and for longer.

Sometimes – in some narrow field where there are strong, external incentives to reach the truth, e.g. in their professions or studies – these people are “nudged” into using their sharper cognitive tools to be correct more often than the average person. But in most other areas there is no such incentive: when you spend your night watching the umpteenth video confirming your suspicion that the world is controlled by a Zionist conspiracy, there is no external penalty for indulging in confirmation bias (on the contrary, finding evidence that confirms your belief is quite satisfying), at least not in the same way as there is a penalty for building a bridge that will collapse[2][3].

Maybe the paradox simply rests on a misunderstanding: the implicit assumption that the objective of human intelligence is to reach the truth. If we instead accept that our brain is first of all a tool for survival, and that self-deception used to be often instrumental for survival in our ancestral environment (e.g. by making us stick closer to our own tribe), it is no surprise that intelligent people are wrong as often as anyone else.

I have used an analogy with Artificial Intelligence (AI) before on this blog, but here is another one: current AI systems are very smart in some narrow fields externally designed by human researchers (e.g. the game of Go), but they are genuinely stupid when they try to generalise their skills to other tasks. In the case of AI, this is an objective limit of the algorithms (and this limit is probably the main technical obstacle on the path to human-level artificial intelligence), whereas humans could in principle be rational across the board if they really wanted and had the right incentives.

Unlocking the true potential of our intelligence would greatly benefit all of us. It is a benefit to humanity no less significant than the (much more celebrated) opportunities that true human-level AI could bring about.

[1] I would be tempted to describe these people as affected by “cognitive dissonance”, but that would not be accurate because the contradiction in their minds is not (necessarily) about beliefs: it is about their selective use of rationality. “Epistemic dissonance” is more appropriate.

[2] This example neatly explains why there are so many smart engineers who believe stupid conspiracy theories.

[3] I believe another reason has to do with sheer complexity: after all, truly understanding the causes of complex social dynamics is way more complicated than understanding how to solve equations.

The first and last time humanity is a global village

Modernity is an unprecedented period for many reasons. One that is often cited is telecommunication: for the first time in history, all humankind is connected and able to communicate almost instantaneously across the globe.

I would argue that this fact is so exceptional that ours might be not only the first, but also the last time in history when this is possible.

The reason is simple and has nothing to do with nuclear war and post-apocalyptic scenarios. Rather, it has to do with two facts that we discovered in the 20th century.

A great power with a cosmic limitation

The first fact is that nothing can travel faster than light (i.e. electromagnetic waves). It has been one of the principles underpinning Einstein’s theory of relativity since 1905: just when telegraphs and radios were revolutionising our world, we learned that it was physically impossible to communicate any faster than that technology already allowed.

It isn’t a big deal from a practical point of view most of the time, given that the speed of light is a mind-blowing 300,000 km per second: a beam of light can go around the Earth about 7 times in a second. When it comes to videocalling your friends, it doesn’t matter how far away they are: the communication is practically instantaneous.

Space is big

But here comes the second fact that we discovered in the 20th century: space travel is possible. We haven’t taken advantage of it as much as sci-fi writers were imagining in the 60s, but the technology is continuously improving and getting cheaper. It is only a matter of time before we colonise other planets in our Solar System. When? It is hard to predict but it seems unlikely that it will take much more than a few centuries.

And here is the thing about space: light is not that fast compared to the distances between planets, let alone stars. Just sending a message to Mars, one of the closest and most likely planets to be colonised, takes an average of 13 minutes. Forget any WhatsApp call with your friends on Mars. And these delays become ridiculous when we look at other stars: it takes over 4 years to send a message to the exoplanet closest to us, and around 50,000 years to send a voicemail to the other side of the galaxy.
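The arithmetic behind these delays is simple enough to sketch. The distances below are rough averages I am assuming for illustration (the Earth–Mars distance, for instance, actually varies between about 55 and 400 million km):

```python
# Rough one-way delays for a light-speed signal, using approximate distances.
C_KM_S = 300_000       # speed of light in km/s (rounded)
LY_KM = 9.46e12        # kilometres in one light-year

def delay_seconds(distance_km: float) -> float:
    """One-way travel time, in seconds, for a signal moving at light speed."""
    return distance_km / C_KM_S

mars_avg_km = 225e6              # assumed average Earth-Mars distance
proxima_km = 4.24 * LY_KM        # distance to Proxima Centauri
far_galaxy_km = 50_000 * LY_KM   # rough distance to the far side of the Milky Way

YEAR_S = 3600 * 24 * 365.25
print(f"Mars:    {delay_seconds(mars_avg_km) / 60:.1f} minutes")
print(f"Proxima: {delay_seconds(proxima_km) / YEAR_S:.1f} years")
print(f"Galaxy:  {delay_seconds(far_galaxy_km) / YEAR_S:,.0f} years")
```

The Mars figure comes out around 12–13 minutes, and the interstellar delays are, by construction, just the distances expressed in light-years.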

Back to the past

What will happen when humanity starts colonising other planets, then? There are so many things we cannot even imagine about people living so far in the future, but one thing we can say for sure: they won’t be able to keep in touch. For all their technology and scientific knowledge, when it comes to telecommunication it will be like going back in time. Back to a past when the internet, telephones and even telegraphs weren’t available. When “talking” with someone far away meant sending a letter and waiting months or years for a reply.

Exactly like our ancestors, these future people will be able to have a real-time conversation with only a restricted network of people. This network will include everyone quickly reachable with light signals (i.e. everyone in a ~300,000 km radius), which is a big upgrade from the villages of our ancestors, but it is still insignificant compared to the interstellar distances that humanity will cover and colonise.

Maybe old social dynamics and practices will play out again, just in different forms. Writing letters (or their futuristic equivalent) could again become a common and natural way to keep in touch. Societies will diverge greatly, reverting centuries of cultural globalisation, simply because faraway communities cannot catch up in real time.

Our pre-modern past – a history of fragmentation and bubbles – will come back and there will be no way to escape it. These few centuries of “global village” we are currently living in will be remembered in history as a curious interlude, the temporary gap between the invention of radio and the launch of space colonisation that allowed all humanity to be connected for the first and last time.

The most significant phenomena in nature

“[…] the class of transformations that could happen spontaneously – in the absence of knowledge – is negligibly small compared with the class that could be effected artificially by intelligent beings who wanted those transformations to happen. So the explanations of almost all physically possible phenomena are about how knowledge would be applied to bring these phenomena about. […]

If you want to explain how an object might possibly reach a temperature of ten degrees or a million, you can refer to spontaneous processes and can avoid mentioning people explicitly (even though most processes at those temperatures can be brought about only by people). But if you want to explain how an object might possibly cool down to a millionth of a degree above absolute zero, you cannot avoid explaining in detail what people would do. […]

It follows that humans, people and knowledge are not only objectively significant: they are by far the most significant phenomena in nature – the only ones whose behaviour cannot be understood without understanding everything of fundamental importance”.

From “The Beginning of Infinity: Explanations that Transform The World” by David Deutsch

The secret weapon in the Information War

People who believe fake news and conspiracy theories are often guilty of a specific cognitive error. We could call this error a “failure to apply Occam’s razor”.

William of Ockham, portrayed in a manuscript of his “Summa Logicae”, 1341 (Wikipedia)

For those who are not familiar with Occam’s razor: roughly speaking, it is the principle saying that when multiple theories explain an observation equally well, we should choose the theory that makes the fewest assumptions (the “simplest” one, in other words). Occam’s razor has a deep, elegant probabilistic justification[1], and it was first formalised by the English Franciscan philosopher William of Ockham in the 1300s.

Why do I say that conspiracy theorists fail at applying the principle? Because conspiracy theories are incredibly complex. They make assumptions about the number and level of the people involved in hiding some truth, about their skills and aligned motivations, and about how they managed to keep everything secret so far. All of this to explain observations (the apparent flatness of our planet, terrorist attacks, murders, pandemics) that can easily be accounted for by much more parsimonious theories[2].

I find it really ironic that conspiracy theorists don’t understand Occam’s razor, and for one reason in particular. It turns out there is a group of people who are really good at following Occam’s advice: they are the data scientists working on the fake news algorithms that target the conspiracy theorists.

The secret sauce of machine learning

Machine learning models, such as those that find the right audience for fake news online, can be quite complex. But a crucial factor makes them work: they are never more complex than they need to be. When a model is too complex, data scientists say that it is overfitting, and that’s a huge problem. An overfitting model can be disastrously inaccurate. A big part of modelling in data science is about avoiding overfitting – that is, applying Occam’s razor.

A typical task for a Machine Learning model is finding a curve that separates two groups of points (here represented with different colours). The model on the left is unnecessarily complicated and it is unlikely to perform well on new data (overfitting)

This is true for all kinds of models: those applied to credit risk, marketing, image recognition, text analysis and so on. And, of course, for models whose purpose is to identify the right audience for fake news.

In other words, a big part of the job of those who spread fake news online is applying the logical principle (Occam’s razor) that should make people skeptical of the content of fake news and conspiracy theories!

An old story with new consequences

This is an old story in modern clothes. In history, the source of power has always been the unequal allocation of knowledge, not just of resources.

But there is something new here: today differences in knowledge can make an unprecedented difference in the balance of power. In the era of information, the ability to cut through the overwhelming amount of data and news is crucial to make sense of the world and act accordingly as consumers, voters and citizens.

Those who can master the subtleties of logic (and one of its modern incarnations, machine learning) have an advantage over those who cannot. The former can literally change the reality bubbles of the latter. Recalling the philosophical principle of a Franciscan friar of the 14th century can help us navigate this battle and fight back.

[1] MacKay, David J. C. (2003). Information Theory, Inference, and Learning Algorithms (chapter 28)

[2] This is not to say that all conspiracy theories are necessarily wrong, of course. But they should never be preferred to simpler theories unless they explain the observations better, and in my experience that happens very rarely.

Things I discovered while learning a second language (that no one had warned me about)

I began using English on a daily basis around 7 years ago, when I moved to the UK to do my PhD. Before then, my written English was passable but my oral skills were quite basic. Living and working in London has definitely been the best way to improve, but I wouldn’t call it a smooth or pain-free experience.

In this post I want to share the lessons I drew from it: for people who have never learned a second language, so they know what it is like, and for people who have, whose experience I am sure is similar but not 100% the same. It’s a very personal journey, after all.

  1. At first, it’s demoralising. The sophistication I had finally achieved in my mother tongue (and that I was so proud of) was suddenly useless. I was back to square one, a mostly mute child in a world full of fluent and witty grownups.
  2. Losing the ability to use humour is particularly painful. I have never been a big talker, even in Italian, but I had always felt that my sense of humour was there to compensate – and to make me an interesting person despite my few words. All of it was lost when English became my day-to-day language. My conciseness was no longer witty, just rudimentary.
  3. Things obviously get better with time, although not all in the same way. While your listening keeps improving virtually forever, your speaking skills (especially pronunciation) quickly reach a plateau. After that, improving is still possible but requires an effort that I am rarely willing to make (especially because I’m fluent enough to make myself understood most of the time). But it is frustrating. There are sounds in the English language that I have given up trying to reproduce, and expressions that I “know” but that always escape me when I speak.
  4. You can’t hear your accent. It’s not like I hear a British accent when I hear myself speaking English. But somehow I hear a plain sound, with no distinctive feature. The same thing happens when I listen to other Italians speaking English: everyone else is immediately able to tell they’re Italian; I am not. There is always a surprising asymmetry between the image you have of yourself and the image others have of you: speaking a second language reveals how deep this difference can be.
  5. Your mother tongue is soaked with an emotional legacy that goes back to your childhood, in a way that a second language isn’t. Being praised in your mother tongue is more pleasant, and being insulted is more offensive. The second language has a more objective, almost technical vibe – after all, you learned it as an emotionally balanced adult trying to convey practical information. The downside is it can feel colder. The upside, however, more than compensates for the downside: your second language is a land of new opportunities, of new emotional meanings that you can freely decide to associate with words and expressions, in a way that your mother tongue will never allow.
  6. This is not to say that English doesn’t already evoke memories to me. But they are of a very different quality from my mother tongue’s. What I found is that you’ll often remember words along with the context in which you learned them for the first time. Those initial, “eureka” moments. When I say “wobbly“, I remember a bar in Boston where I asked my friend John what word he would use to describe our unsteady table; I learned the meaning of “belly” reading a book on Greek mythology, on a flight back from Crete (Zeus was releasing his siblings from Cronus’ belly); I looked up the expression “but then again” while watching “Kill Bill”, when Budd tells his brother they deserve to die. I’ll take those images with me forever. The examples are countless.
  7. In mildly noisy places your ability to listen drops to zero very quickly, no matter how familiar you are with the language. You would expect to go through a grey area first, where you can understand 50% of what people are telling you, but that’s not the case. You simply lack the ability that native speakers have of reconstructing the meaning of a sentence from few sounds. It looks like each sound is crucial to reconstruct a sentence in which no piece can be taken for granted.
  8. You’ll forever lose some phrases in your mother tongue that have no equivalent in the second language. The sooner you get over it and find some kind of surrogate, the better. To me, such Italian phrases include “Buon lavoro”, “Boh”, “magari“.
  9. On the other hand, you gain some handy, short expressions that your mother tongue doesn’t have. Some of them are so convenient that I often find myself trying to use them in Italian, with comical effects.
  10. At the end of the day, learning a new language is a humbling experience that makes you a better person. Personally, the frustration of the first years taught me to take myself less seriously and worry less about what other people think. I am definitely more confident and less shy than I would be if I had never seriously learnt a second language. Being forced to go back to square one, a mute child in a world of witty grownups, is painful but refreshing.