Did a 6th century Hebrew fortuneteller accidentally do the first documented experiment?

Who did the first experiment? 13th century scholastics like Roger Bacon are usually credited with the invention of the modern scientific method – in particular with regard to doing experiments. Bacon expanded on the work of Robert Grosseteste, who revived the method of induction and deduction from the ancients and applied it to the natural world (going far beyond Aristotle’s intuitive natural philosophy). Others give credit to even later individuals, such as another Bacon – Francis Bacon – who made the experiment the centerpiece of empirical investigations of the natural world.

However, the first experiment – admittedly with an ad hoc methodology and no clearly spelled-out rationale linking outcomes and interpretation, let alone statistical considerations or power calculations – seems to have predated these developments by well over 600 years.

The backdrop to this experiment is straightforward – the Western Roman Empire fell in 476 to an onslaught of Goths, whereas the Eastern Roman Empire soldiered on. A bit more than 50 years later, the Eastern Roman Emperor (Justinian) tasked his general Belisarius with retaking the lands of the Western Empire from the Goths, first and foremost the Italian peninsula.

The first experiment?

Experimental design and outcome of what is potentially the first documented experiment.

The history of this conflict is relatively well documented, mostly by contemporary chroniclers like Procopius (in the pay of Justinian, so perhaps not entirely impartial). The story in question might be apocryphal and was already flagged as such by Procopius, but this does not really matter in this case – given that it is mostly the idea (of doing an experiment) itself that is of interest here, not its particular outcome. The story of the experiment is invoked to partially explain why Theodatus – the Gothic king in Rome, characterized by Procopius as “unmanly”, without experience in war and devoted to the pursuit of money – did not come to the aid of the denizens of Naples when they were besieged by Belisarius.

To decide whether he should fight Belisarius and try to lift the siege of Naples, Theodatus asked a Hebrew fortuneteller about the likely outcome of the conflict. The fortuneteller told him to place 3 groups of 10 pigs each (presumably randomly selected) in separate huts, label them as Goths, Italians and East Roman soldiers, then wait a specified (but unstated) number of days. After doing just that, Theodatus and the fortuneteller entered the huts and noted that only 2 pigs labeled as Goths had survived, whereas most of the pigs labeled as East Roman soldiers lived. Of the pigs representing native Italians, half survived but had lost all of their hair. Theodatus interpreted the results to mean that the Italians would lose half of their population and all of their possessions, and that the Goths would be defeated, at little cost to the victorious Eastern Romans. Hence his reluctance to confront Belisarius at Naples.

In any event, Theodatus did not intervene and Belisarius ended up capturing both Naples and Rome from the Goths before being recalled by the Emperor. Losing both of these cities without much of a fight led the Goths to rebel openly and replace Theodatus as king. Their chosen successor, Vittigis, ordered the fleeing Theodatus to be apprehended, and Theodatus perished at the hands of his pursuers.

 

Here is the original text (from the beginning of book 9, translated):

So the besieged, without the knowledge of the enemy, sent to Theodatus in Rome begging him to come to their help with all speed. But Theodatus was not making the least preparation for war, being by nature unmanly, as has been said before. And they say that something else happened to him, which terrified him exceedingly and reduced him to still greater anxiety. I, for my part, do not credit this report, but even so it shall be told. Theodatus even before this time had been prone to make enquiries of those who professed to foretell the future, and on the present occasion he was at a loss what to do in the situation which confronted him—a state which more than anything else is accustomed to drive men to seek prophecies; so he enquired of one of the Hebrews, who had a great reputation for prophecy, what sort of an outcome the present war would have. The Hebrew commanded him to confine three groups of ten swine each in three huts, and after giving them respectively the names of Goths, Romans, and the soldiers of the emperor, to wait quietly for a certain number of days. And Theodatus did as he was told. And when the appointed day had come, they both went into the huts and looked at the swine; and they found that of those which had been given the name of Goths all save two were dead, whereas all except a few were living of those which had received the name of the emperor’s soldiers; and as for those which had been called Romans, it so happened that, although the hair of all of them had fallen out, yet about half of them survived. When Theodatus beheld this and divined the outcome of the war, a great fear, they say, came upon him, since he knew well that it would certainly be the fate of the Romans to die to half their number and be deprived of their possessions, but that the Goths would be defeated and their race reduced to a few, and that to the emperor would come, with the loss of but a few of his soldiers, the victory in the war. And for this reason, they say, Theodatus felt no impulse to enter into a struggle with Belisarius. As for this story, then, let each one express his views according to the belief or disbelief which he feels regarding it.

Posted in History, Philosophy, Science | 1 Comment

Why “dressgate”* matters

At this point, we have probably all reached “peak dress”, been oversaturated by all matters dress and are ready to move on. But there is more.

There is no question that “the dress” is the most viral image relevant to science in the history of the internet (well more than 9 million tweets in 2 days). Some people noted that this is quite a self-indulgent debate in light of all the serious problems facing the world today. Don’t we all have more important things to worry about? This is a particularly likely sentiment if someone doesn’t immediately “get” what the big deal is about (people are more likely to “get” it if the interpretation flipped on them as they were looking at the dress, or if someone close to them vigorously insisted that they sincerely see it differently).

While it is true that the world faces many pressing problems, discussing the dress is *not* frivolous. Here is why:

1. It’s not about the dress – it’s about visual perception and human cognition.

2. Perception is inherently a guess and even the best guesses can be wrong. It should not be surprising that different people guess differently – or even the same person differently at different times – if the information one starts with is ambiguous.

Two different colors, top vs. bottom, right?

3. Basically, the brain is playing a game of telephone. Only the eyes have direct access to the physical light energy in the environment. The human visual system consists of at least 30 distinct areas, each of which gets its information from earlier ones. All later areas have to rely on what the earlier areas (and ultimately the eyes) tell them is going on. This transmission is fundamentally unreliable. Information is lost at every step.

4. All of this has been documented many times – and known for over 150 years – in many domains of vision, but *not* in color vision. To my knowledge, this is the first strongly “bistable” stimulus – where interpretations can be radically different – in the color domain.

How about now? Same colors as above, but with different context

5. It has been known for a long time that color vision is strongly susceptible to illumination and context. This is not a bug, but a feature that allows us to identify objects despite varying illumination conditions. This suggests that the reason people differ in their interpretation is that they differ in their assumptions about what the light source might be.

The little links between the two regions might convince you that they are actually the same, but look different. Context does matter.

6. What is particularly intriguing is that the interpretation readily shifts for most bistable stimuli like Rubin’s vase, rotating spheres, etc. – but not for the dress. Some people never shift, others do but on a time scale of hours, and once they flip, they can’t flip back. This suggests that there is some kind of rapid perceptual learning going on, much like in “Dalmatian”-like situations.

Can you see the dalmatian? If not, see below

7. The reasons for this – why different people make different assumptions about the light source and why some people exhibit rapid perceptual learning – are fundamentally unknown. There are no shortcuts here. More research on this is needed and will be done.

8. Which is the last point on this list – to my knowledge, this is the first time a powerful stimulus display has been brought to the attention of science by social media. This also explains why this kind of thing hasn’t happened before. Surely, many people have taken overexposed pictures of fabric (the fabric might matter, too) in poor lighting conditions before. But without social media to amplify the disagreement (social media seems to be best at that), it would have ended there. So what if you disagree with your friend? We all know your friend is a bit crazy and that’s that. Easy to dismiss. But social media changes all that. So we can look forward to more of this.

Can you see the dalmatian now? If you do: Congrats. This can not be unseen. If confronted with a display like the one above, you will always see the dalmatian.

To summarize – and repeat – it’s not about the dress. It’s about visual perception and human cognition. If this is the first time you have encountered the fact that your perception – which you rely on to get safely through the day – is inherently and fundamentally unreliable, you might be skeptical, defensive or shocked. But that doesn’t change the facts.

So yes, ISIS is an obvious concern. But that doesn’t mean “the dress” is trivial. It is not. As a matter of fact, I would argue that this could not be more topical. If we can’t agree about the color of a dress, how can we hope for world peace? How can we foster tolerance if we don’t allow for and don’t understand that other people can sincerely see the world differently from us?

To be perfectly clear: The question is not what the color of the dress actually is. The question is why people disagree so strongly.

What follows from this is that finding the original dress is not a solution to the issue. Neither will any clever analysis of light distributions in the image provide one. There are no shortcuts here. In order to find out why the interpretation has flipped on you or why your interpretation disagrees with that of someone you care about, more research is needed. At this point, the answer is: “We don’t know, but would like to find out”. Dismissing things is easy. Research is hard.

Note: We understand a lot about how vision works and how it doesn’t render a veridical percept of objective reality. But that is not the point. Vision mostly just has to be useful for survival purposes. But that does not account for the fact that different people see the same image of the dress (on the same screen) differently. It is important to acknowledge our ignorance in this regard and plan to do some research to overcome it.

Human perception is more variable than most people realize, both within the same person over time and between people. This goes beyond perception, by the way. Lots of people say things like: “You said x” or “this is offensive” – a more accurate statement would be “I understood x”, or “this is offensive to me”. Big difference. So it is always advisable to keep an open mind.

#itsnotaboutthedress

*Not my creation. This was one of the trending Twitter hashtags.

Images shared with kind permission from Steve Shevell.

Posted in Neuroscience, Psychology, Science, Social commentary | 9 Comments

Lessons from the dress: The fundamental ambiguity of visual perception

The brain lives in a bony shell. The light-tight nature of the skull renders this home a place of complete darkness. So the brain relies on the eyes to supply an image of the outside world, but there are many processing steps between the translation of light energy into electrical impulses that happens in the eye and the neural activity that corresponds to a conscious percept of the outside world. In other words, the brain is playing a game of telephone and – contrary to popular belief – our perception corresponds to the brain’s best guess of what is going on in the outside world, not necessarily to the way things actually are. This has been recognized for at least 150 years, since the time of Hermann von Helmholtz. As there are many parts of the brain that contribute to any given perception, it should not be surprising that different people can reconstruct the outside world in different ways. This is true for many perceptual qualities, including form

Rubin’s vase: A classical example of figure/ground segmentation. The image is fundamentally ambiguous. People perceive a vase or faces, but not both at the same time.

and motion. While this guessing game is going on all the time, it is possible to generate impoverished stimulus displays that are consistent with different, mutually exclusive interpretations, so that in practice the brain will not commit to one interpretation, but switch back and forth. These are known as ambiguous or bistable stimuli, and they illustrate the point that the brain is ultimately only guessing when perceiving the world. It usually just has more information to go by to disambiguate the interpretation.

A bistable motion stimulus. Do you see the dots moving from left to right or up and down?

This is also true for color vision. The fundamental challenge in the perception of color is to identify an object despite changing illumination conditions. The mixture of wavelengths that reaches our eye will be interpreted by the brain as color, but which part is due to the reflectance of the object and which part is due to the illumination?

This is an inherently ambiguous situation, so the brain has to make a decision whether to take the appearance of an object at face value or whether it should try to discount the part of the information that stems from the illumination. As the organism is not primarily interested in the correct representation of hues, but rather in the identification of objects under dramatically varying conditions (e.g. a predominance of long wavelengths in the early morning and late afternoon vs. more short wavelengths at noon), it is commonly accepted that the brain strives for “color constancy” and is doing a pretty good job at that. But in this tradeoff towards discounting, something has to give, and what gives is that we are bad at estimating the absolute hue of objects. For instance, a white surface illuminated by red light will objectively look reddish. The same white surface illuminated by blue light will objectively look blueish. In order to recognize both as the same white surface, the subjective percept needs to discount the color of the light source.
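To make the discounting idea concrete, here is a minimal numerical sketch in Python. It assumes the simplest possible model – the light reaching the eye is surface reflectance multiplied by the illuminant, channel by channel, and “discounting” amounts to dividing the observed signal by whatever illuminant the visual system assumes. All numbers are made up purely for illustration.

```python
import numpy as np

# Toy model of color constancy: observed light = surface reflectance * illuminant,
# per color channel (R, G, B). All values are invented for illustration.
white_surface = np.array([0.9, 0.9, 0.9])   # a surface reflecting all channels equally
blue_light    = np.array([0.4, 0.6, 1.0])   # illuminant biased toward short wavelengths
red_light     = np.array([1.0, 0.6, 0.4])   # illuminant biased toward long wavelengths

observed = white_surface * blue_light        # what actually reaches the eye

def discount(observed_signal, assumed_illuminant):
    """Estimate surface reflectance by dividing out the assumed illuminant."""
    return observed_signal / assumed_illuminant

# Correct assumption (bluish light): the surface is recovered as roughly white.
print(discount(observed, blue_light))   # -> [0.9 0.9 0.9]

# Wrong assumption (reddish light): the very same signal now looks bluish.
print(discount(observed, red_light))    # -> [0.36 0.9  2.25]
```

The same observed signal thus yields very different surface estimates depending on which illuminant the observer assumes – which, as discussed below, appears to be the crux of the dress.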

So it should not be surprising that inference of hue can be dramatically influenced by context. The same shade of grey can look almost black on a bright background but almost white on a dark one.

Lightness illusions are common. The shade of grey above and below the horizon are the same. You can see this by covering the strip in the middle

Note that this is not a bug. It is a necessary tradeoff in the quest to achieve a stable appearance of the same object, regardless of context or illumination.

So far, so good. Now where does the dress come in? The latest sensation to sweep social media has sharply divided observers. Some see the dress as gold on white, others as black on blue.

White/gold vs. Black/blue: Some people perceive the image on the left, others the one on the right. Others switch back and forth.

As noted before, this kind of divergence of interpretation might be rather common with complex stimuli. The importance of the “dress” stimulus is the extent to which intersubjective interpretation differs in the color domain. To my knowledge, this is by far the most extreme such stimulus. Of course one has to allow for the fact that not everyone’s monitor will be calibrated in the same way and viewing angles might differ, but this doesn’t account for the different subjective experience of people viewing the exact same image on the same monitor from the same position. And of course the reason why the “true colors” of the dress are in dispute in the first place is the color constancy phenomenon we alluded to above. This was likely a black/blue dress that was photographed with poor white balance, giving it an ambiguous appearance. But that doesn’t change the fact that some people sincerely perceive it as white/gold.

That the interpretation of the color values itself depends on context can readily be seen if the context is taken away. In the image below, some stripes were extracted from the original image without altering it in any other way. The “white/blue” stripe can now be identified as light blue and the “gold/black” stripe as brown.

Stripes, out of context: One now unambiguously looks like blue, the other like brown.

But why the difference in interpretation? That is where things get interesting. If the ambiguity derives from color constancy (and it looks like it does), the most plausible explanation is that people differ in their interpretation of what the illumination source is. Those who interpret the dress as illuminated by a blue light will discount for this and see it as white/gold, whereas those who interpret the illumination as reddish will tend to see it as black/blue. Interestingly, the image itself does allow for both interpretations: the illumination looks blueish at the top of the image, but yellowish/reddish at the bottom. On a more fundamental level, a blue/black dress illuminated by a white light source might be indistinguishable from a white/gold one with a blueish shadow falling onto it.

But if this is the case, one should be able to consciously override this interpretation once it is pointed out. For most people, however, this does not seem to be possible – in contrast to most other ambiguous “duck-rabbit” displays, where people are able to willfully control what they see.

Inherently ambiguous stimuli. Interpretations switch, they don’t blend. But people can consciously override interpretations once they are pointed out.

This raises several intriguing possibilities. For instance, it has been recognized for quite a while that the human “retinal mosaic” – the distribution of short-, medium- and long-wavelength cones – differs radically between observers, but this seems to have only a minute impact on the actual perception of color. Perhaps differences in the retinal mosaic can account for differences in the perception of this kind of “dress” stimulus. Moreover, there is another type of context to consider, and that is temporal context. We don’t just perceive visual stimuli naively; we perceive them in the context of what we have encountered before – not all stimuli are equally likely. This is known as a “prior”. It is quite conceivable that some people (e.g. larks, owls) have a different prior as to what kind of illumination conditions they encounter more frequently. Or that a complex interaction between the two is at work.
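To make the “prior” idea concrete, here is a toy Bayesian sketch in Python. The hypothesis names, likelihoods and priors are invented purely for illustration; the point is only that the same ambiguous evidence, combined with different priors over illuminants, can tip different observers toward opposite interpretations.

```python
import numpy as np

# Toy Bayesian sketch: two candidate illuminant hypotheses, same ambiguous evidence,
# different priors. All numbers are invented for illustration only.
hypotheses = ["bluish shadow (dress looks white/gold)", "warm light (dress looks blue/black)"]

# Likelihood of the observed image under each hypothesis; the image is ambiguous,
# so the likelihoods are nearly identical.
likelihood = np.array([0.50, 0.52])

def posterior(prior):
    unnormalized = prior * likelihood
    return unnormalized / unnormalized.sum()

# Observer A (say, mostly exposed to daylight and shadows) leans toward bluish illumination.
prior_A = np.array([0.6, 0.4])
# Observer B (say, mostly exposed to warm artificial light) leans the other way.
prior_B = np.array([0.4, 0.6])

for name, prior in [("A", prior_A), ("B", prior_B)]:
    post = posterior(prior)
    print(f"Observer {name}: {hypotheses[int(np.argmax(post))]} (posterior {post.round(2)})")
```

In this sketch neither observer is “wrong” – the evidence simply does not settle the question, so the prior does.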

While we must confess that we currently do not know why some people consistently see the dress one way, others consistently in another way and some switch, it is remarkable that the switching happens on very long timescales. Usually, switching is fast, for instance in the Rubin’s vase stimulus above. This could be particular to color vision. There are no shortcuts other than to do research as to the underlying reason that accounts for this striking difference in subjective perception.

Meanwhile, one lesson that we can take from all of this is that it is wise to assume a position of epistemic humility. Just because we see something in a certain way doesn’t mean that everyone else will see it in the same way. Moreover, it doesn’t mean that our percept necessarily corresponds to anything in the real world. A situation like this calls for the hedging of one’s bets, and that means to keep an open mind.  Something to remember next time you disagree with someone.

Who is right? Does the question even make sense? Or is it important to see the bigger picture?

 

Posted in Neuroscience, Psychology, Science | 4 Comments

Why I am not continuing this online argument

I started our exchange in a spirit of friendship and respectful exploration. I hoped that we could maybe learn something. The reason I’m now discontinuing it is that I realize that this specific communication can no longer be meaningfully characterized in that way. In other words, nothing good can come of it, so it is best not to extend the triggering of adversarial circuitry. It does not mean that I concede the argument, it does not mean that you “won”. On the contrary. There is just no point continuing – victory conditions in an adversarial online exchange are ill-defined. Such exchanges only have a point if they are conducted in a spirit of mutual goodwill. I maintain this goodwill, but feel strongly that this is not reciprocated and that common ground has – for the time being – been lost. Perhaps we can regain it later. But that’s it for now, as we are at serious risk of simply making it worse. Not worth it.

This has never happened – it is a fantasy. Although an alluring one, fueling millions upon millions of pointless internet arguments that are going on right now, as you read this. Let’s not add to that sad tally – it is not adding anything but vitriol, ill will and hatred. Illustration by: Michael “Flux” Caruso @fluxinflux

It is understood that you are wrapped up deeply in your subjective reality.  Trying to alter that will not be easy, so it might simply not be worth anyone’s time, let alone mine.

In the 21st century, one is dealing with a diverse range of people who are likely to construct reality quite differently. In other words, it is crucial not to confuse temporal and faux-spatial proximity online with importance (these heuristics do serve one well in offline interactions – people who are close matter – but the online space literally transcends space). Put differently, it is not only important to engage online; being able to disengage – the earlier the better – is arguably even more important:

Picard disengage

Never forget – victory conditions are ill-defined in such an exchange. It is tempting to think that your next “killer argument” will settle the issue once and for all. But that’s not how it works. Violence kills, arguments don’t.

Posted in In eigener Sache | Leave a comment

Shadowy but present danger: A primer on psychopathy

In the age of social media, it is hard to avoid exposure to popular culture. This is a problem because most of the bugbears that are popular in this culture – like zombies or vampires – do not actually exist. In contrast, those that do exist – notably psychopaths – get comparatively little screen time, and if they do, they are portrayed as a caricature of the actual condition. This increases the likelihood that you will not recognize a psychopath when he enters your life, raising the odds that the experience will truly be life-ruining. At the same time, society appears naive and helpless in the face of psychopaths. While society continuously adds new layers of rules and regulations to all procedures, this does not come without a cost, diminishing the efficiency of carrying out all activities. It is also questionable whether this approach works – in many cases a determined psychopath will find a way to game the system all the same. If they are eventually caught, many psychopaths end up in prison, but at that point the damage is done, and prison as an institution is not specifically geared to deal with the condition.

At this point, the only hope comes in the form of the emerging neuroscience of psychopathy, which helps us detect the condition and understand its neural underpinnings. I review the evidence here.

Psychopaths – few moral considerations enter their decision making calculus. Due to the mask of sanity, you are unlikely to be able to recognize them until it is too late. Unless you know exactly what to look for.

Perhaps this will – in the long run – allow us to harden our society against psychopaths, dramatically lowering the burden of suffering inflicted by the condition on all of us.

Note: This is the primer. The full article is here.

Posted in Psychology, Science | 15 Comments

Positive thinking about positive thinking might just be wishful thinking

Bringing about positive changes in your life is hard. Everyone knows this. But everyone also desires them. So it is seductive to believe – particularly if you have no credible way to actually bring them about – that merely wishing for them will make it so. There are many people who will try to sell you on the idea that the world is your personal genie, only with an infinite number of wishes. Of course, if this were so, there would be no unfulfilled wants or needs. The pied pipers will tell you that you are simply not wishing hard enough. If you think this account is implausible, there is now a scientific account of what works and what doesn’t when it comes to making your wishes a reality.

What is more plausible – that the world is your personal wish genie, or that your blind faith in the power of positive thinking constitutes the very chains that ultimately hold you back?

There is a downside to wishful thinking about wishful thinking or positive thinking about positive thinking. It is the sacred mission of science to replace fairy tales and superstitions with a scrupulous account of what is and is not the case. Whether that account is popular or not.

TLDR: Wishful thinking is not enough. As usual, reality is deeply ironic.

Of course all of this only applies *if* you desire to achieve personal goals and contribute in ways that are valued by society. If that is not the case, none of this applies and you are of course still free to *enjoy*. But I would argue that most (though not all) people have a desire to achieve and contribute, manifesting in different ways, of course.

Posted in Psychology, Science | Leave a comment

On the insinuation of bad intentions

Intentions matter. When assessing the merit or moral value of an action, we do not do so solely based on its outcome, but take intentions into account. For instance, we consider it worse if someone breaks one cup in an attempt to steal cookies than if someone accidentally breaks 10 cups trying to help with the dishes. Only very young children disagree (Piaget, 1932).

Here is the rub: We can only observe actions and their outcomes. Intentions have to be inferred. In everyday life, we have enough data on the people around us (their previous actions, their character, etc.) to make this inference somewhat reliably. As so much rides on a correct inference – the example above shows that the outcome is essentially meaningless in moral terms, unless intention is considered as well – getting it right matters. A lot. Yet, the dedicated social neuroscience circuitry that most of us possess to make these inferences responsibly does critically rely on a rich and subtle dataset of temporal and social context. If this context is missing, this circuitry will still operate, but in a vacuum, making inferences that are no longer reliable (but feel no less compelling to the mind that inhabits that brain).

This is where the modern (social) media landscape comes in. Most of the issues brought before us now are extremely complex, and unless we are experts on everything, we have no way of assessing what is really going on, or what the intentions of the actors might be. More often than not, when we encounter someone’s actions online (often mediated by a third party, the “media”), this is also the first time we encounter that person as well. Put differently, we have no temporal context to judge them by, nor much in the way of character to go on. But we will judge their actions, implicitly or explicitly, and do so with conviction, and we will also ascribe intentions. As these cannot be grounded in empirical fact – we don’t know the person – something else will fill in for that. There are plenty of things that can be used in this fashion. Prior experience with other people comes to mind. If one has the experience of being taken advantage of by other people, one might be forgiven for adopting a generally misanthropic view and distrusting people in general – ascribing an intention to cheat to a freshly encountered person. This can be hard to override empirically. If I am convinced that someone is going to cheat me, even them being nice can be construed as a ruse to gain my trust or put me at ease, seemingly consistent with the ascribed intention.

As a matter of fact, it is hard to conceive of a pattern of behaviors that is inconsistent with this perceived intention, which – in this case – is made up without regard to the current situation. This is not an issue in the use case for which the system was designed (or evolved!) to work. Specific brains create a range of specific actions and by observing them over time and in various situations and environments, I can make inferences about the kinds of intentions this particular brain is prone to, which we call character or personality. In this case, everything works out. Brains generate behaviors to further certain intentions, both of which are consistent with a certain character. Once identified, this can be used to predict future behaviors. But in an online environment, the only thing one has to go by are actions. Also, they are often not observed, but reported, and reported by parties that are rarely disinterested. How is one to infer intentions or character? As pointed out above, there are plenty of sources that will allow conscious or unconscious “filling in” of missing information.

One such source is the report of the behavior itself, as it can emphasize or de-emphasize certain aspects of the action, making a particular intention more or less likely. Moreover, ideology is a key aspect of one’s intention-inference machinery, much of which operates unconsciously and automatically. The problem is that there is very little in the way of “overriding” these conclusions. First, one might not want to override them because they reinforce one’s worldview. Second, there might be no perceived need to do so either, as all possible actions are consistent with the inferred character and intention. For instance, if one held the belief that all rich people are evil and motivated only by greed, there would be quite literally nothing a billionaire could do to dispel this preconceived notion. If the billionaire gave the money to charity, one could argue that it is not enough or should have been done earlier. One could argue that the billionaire is only doing this to soothe a guilty conscience or to whitewash their legacy (this is not without precedent). One could even go so far as to say that the billionaire shouldn’t have had all that money in the first place, that it was effectively stolen from society and that it is only proper that it is now being returned, no thanks to the billionaire. At no point does the billionaire – despite all that money – have an action at his or her disposal that could overturn the preconceived notion of the critic. This has happened. What has not happened – and generally does not happen – is for people to adopt a scientific stance and ask themselves what pattern of empirical facts would lead them to overturn their preconceptions of intention and character. Of course, that might not always be possible. If I am convinced that evil entities are ready to strike at any moment, someone pointing out that no such entities have been observed will not sway me. On the contrary – their tendency to stay just beyond sensor range is further proof of their ill intentions…

But it is critical that we try to do so, as in a modern media environment, we are constantly evaluating scenarios we know nothing about, but which we then use to reinforce our preconceived ideological notions. A recent example of this is the horrendous Ebola outbreak in western Africa. After it became clear that American scientists had developed an experimental serum that was a potentially effective treatment, they were immediately accused of withholding it from the Africans, out of sheer racism. Once they considered making it available, they were accused of using this epidemic as an opportunity to test their half-baked drugs by experimenting on the poor Africans, again out of sheer racism. It is worth stating explicitly: There is nothing people who actually work on improving the human condition by developing cures for terrible scourges of mankind can do to dispel the notion that they are actually terrible racists, once these accusations are raised.

This situation is most acute on platforms like Twitter, where people from the entire world communicate with each other, but often with little prior exposure and with an extremely limited shared basis of empirical facts. It can be hard to tell what someone is trying to say in 140 characters – Germans would call it the “Aussageabsicht”, the intent behind a statement. Consequently, we observe plenty of misunderstandings and vitriol. With such a thin veneer of data, one can ascribe virtually any intention or character to anyone, regardless of what they say. And people do.

This is a dangerous situation and more likely to promote the bad than the common good in the long run, unless we rein in the tendencies brought about by our social neuroscience circuitry, which is operating far outside of its design range (when it comes to social media). It would be a good start to admit that one does not really know the other people making the statements we object to, and that judging them on this basis is really not a good way of judging them as a person on the whole. On the contrary, we should be mindful that being willing to do so on a whim is both unwise and reflects poorly on us as arbiters of empirical facts. Second, it is critical to recognize that any perception of intention or character is an inference – and an inference that is made, given that we know next to nothing about the person, mostly on the basis of our prior beliefs, social norms and expectations, and ideology. It might feel compelling, but it is not necessarily correct. Third, we should ask ourselves what behavior of the person could possibly be inconsistent with our preconceived notions, then go about eliciting such behavior. Everything else could be considered unfair and willfully unjustified. Finally, we should also recognize that seeing someone else make an accusation of ill intentions does not constitute evidence of ill intentions. Sometimes it just constitutes evidence of bad inferences on the part of the person who wrote the tweet. Not all tweets are created equal. Some are written by people desperately trying to advance the human condition as best as they can, whereas others are written by people without achievement or judgment. They should not be weighed the same. Of course, it can be challenging to tell which is which on Twitter.

To realize that not all tweets are created equal is particularly important for “shaming tweets”, tweets intended to shame the target (how one can ascertain that shaming is indeed the intention is the topic of an elaborate meta-discussion). It is important to recognize how devastatingly effective shaming is. Most (not all!) people are wired to protect their reputation at (almost) all cost. This is critical for autonomous agents that evolved to live in a social group. All social groups are beset with persistent principal–agent problems. Reputation is one way to make sure that everyone does what they are supposed to. Whereas the concept is a little murkier in a modern global society, there can be no doubt about the fact that people’s behavior still changes dramatically depending on whether they think they are anonymous or not. But historical examples are most striking. While most countries in WW1 simply drafted soldiers, the United Kingdom held on to its volunteer system for as long as it could. This is a problem if the country is in dire need of manpower but few people are willing to sign up to fill this need. The example of the “Order of the White Feather” shows how effective shaming can be. Men could quite literally be shamed into “volunteering” to fight and – not that unlikely – die or get maimed in the process. Yet, many picked this fate over being shamed. The pillory also comes to mind. In an age where hanging was the norm as punishment for misbehaving and carefully calibrated torture was routinely used in criminal investigations, there was also an entire catalog of punishments specifically designed to shame people, the so-called “Ehrenstrafen” (shaming punishments). It is not a coincidence that we abolished those along with torture and – for the most part – capital punishment.

Social media stocks – not the kind of stock that generally comes to mind when one thinks of Twitter, but there it is – is Twitter just a global and eternal pillory for some?

One should recognize how devastatingly effective shaming actually is. It is so effective that there are entire movements whose sole purpose is to emphasize that “fat shaming” or “victim shaming” is not ok. In addition, Twitter allows for a kind of asymmetric warfare. Anonymous or pseudonymous Twitter users can attempt to shame whoever they feel like, for any reason and for the entire world to see, preserved forever. That is a tremendous power with no checks and balances whatsoever. Anyone trying to shame someone on Twitter for perceived misdeeds is effectively judge, jury and executioner all in one. Where are these checks and balances supposed to come from? The only way I can see this happening is from an evolution of internet culture itself. Perhaps people will learn – as they get more comfortable with the medium – to recognize that not every allegation or insinuation is equally justified and that some are more telling about the mental state of the shamer than reflective of the presumed misdeeds of the shamed. Not everyone with an internet connection and a Twitter account has something insightful to say. People like that can’t be stopped, but can they be ignored?

Meanwhile, can we all try to stick to the known facts? We might all be surprised how little we can actually ascertain for sure. But it might lead to a healthier and less vitriolic online culture.

Posted in Pet peeve, Psychology, Social commentary, Technology | 1 Comment

On “Kardashians” in science and the general relationship between achievement and fame

I am not in the habit of commenting on ephemeral events, but this was brought to my attention by interested parties in a decidedly snarky fashion which obliges me to respond.

Briefly, Neil Hall introduced the “Kardashian index” to quantify the discrepancy between the social media profile of scientists (measured by the number of Twitter followers) and their publication record (measured by citation count). The index is designed to identify “Kardashians” in science – people who are more well-known than merited by their scholarly contributions to the field.
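For concreteness, here is a short sketch of how such an index can be computed. It assumes the follower–citation relation reported in Hall’s paper (expected followers of roughly 43.3 · C^0.32 for C citations, with the K-index defined as the ratio of actual to expected followers); if my recollection of the exact constants is off, treat them as placeholders, and note that the example numbers are made up in any case.

```python
def kardashian_index(twitter_followers: int, citations: int) -> float:
    """K-index: actual followers divided by the followers 'expected' from citations.

    The expected-followers relation F = 43.3 * C**0.32 is the one reported in
    Hall's paper, to the best of my recollection; treat it as illustrative.
    """
    expected_followers = 43.3 * citations ** 0.32
    return twitter_followers / expected_followers

# Hypothetical examples (made-up numbers):
print(kardashian_index(twitter_followers=40_000, citations=200))   # high K: a "Kardashian"
print(kardashian_index(twitter_followers=200, citations=20_000))   # low K: obscure but well cited
```

Note that both the numerator (followers) and the denominator (citations) are themselves just proxies, which is precisely the problem discussed below.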

Of course, the actual Kardashians are entirely indefensible, as they – by their very existence – take scarce and valuable attention away from more worthy issues that are in urgent need of debate. Perhaps this is their systemic purpose. So I won’t attempt a hopeless defense. However, the validity of the allegations implicit in creating such an index for science is questionable at best.

To be sure, Dr. Hall raises an important issue. It is true that there are a lot of self-important gasbags high on the “Kardashian” list who stopped contributing to science decades ago, people who are essentially using their scientific soapbox only to bloviate about ideological pet interests they are ill-equipped to fully comprehend.

Moreover, the Kardashian effect is quite real. While I do not believe that anyone on the list released a sex tape to gain attention, there is now a decided Matthew effect in place where attention begets more attention, whether it was begotten in a legitimate fashion or not. The victims of this regrettable process are the hordes of nameless and faceless graduate students and postdocs who toil in obscurity, making countless personal sacrifices on a daily basis and with scant hope of ever being recognized for their efforts.

Yet, introducing a “Kardashian index” betrays several profound misunderstandings that – if spread – will likely do more harm than good.

First, it exposes a fundamental misunderstanding of the nature of Twitter itself. Celebrity is not the issue here. I follow thousands of accounts on Twitter. Not all of them are even human. I know almost none personally and would be hard pressed to recognize more than a few of them. Put differently, I’m not following them because of who they are, but largely because of what they have to say. If they were actual Kardashians, this would be a no-brainer, as they have nothing valuable to say. The message (information), not the person (celebrity) is at the heart of (at least my) following behavior on Twitter.

Second, there are many ways for the modern academic to make a meaningful contribution. Outreach comes to mind. What is so commendable about narrowly advancing one’s own agenda by publishing essentially the same paper 400 times while taking the funding that fuels this research and is provided by the population at large completely for granted? In an age of scarce economic resources, patience for this kind of approach is wearing thin and outreach efforts should be applauded, not marginalized. There are already far too few incentives for outreach efforts in academia, meaning that the people who are doing outreach despite the lack of extrinsic rewards will tend to be those who are intrinsically motivated to do so. Another “Kardashian” high on the list comes to mind – Neil deGrasse Tyson. So what if he made few (if any) original contributions to science itself – he certainly inspires lots of those who are inclined to be so inspired. Why is that not a valuable contribution to the scientific enterprise more broadly conceived? Of course, it is worth noting that this should be done with integrity and class.

The scientific "Kardashians". As far as I know, none of them released a sex tape to garner initial fame

The scientific “Kardashians”. As far as I know, none of them released a sex tape to garner initial fame

Third, if there are many ways to make a contribution to science, why privilege one of them? Doing so would be justified if there were some kind of unassailable gold standard by which one could measure scientific contributions. But to suggest that mere citation counts could serve as such a standard – at a time of increasing awareness of serious problems inherent to the classic peer review system (to name just a few: it is easy to game, there are rampant replication issues and perverse incentives, all of which are increasingly reflected in ever-rising retraction numbers) – is as naive as it is irresponsible.

Pyrite or fool’s gold. This is a metaphor. Only a fool would take citation counts as a gold standard of scientific merit. It does happen, as fools abound – there is never a shortage of those…

All of these concerns aside, this matter highlights the issue of the link between contributions to society (“achievement”) and recognition by society (“fame”). The implication of the “Kardashian index” is that this link is particularly loose if one uses twitter followers as a metric of fame and citations as a measure of contribution. Of course, we know that this is absurd. We don’t have to speculate – serious scholars like Mikhail Simkin have made it their life’s work to elucidate the link between achievement and fame. In every field Simkin considered to date (and by every metric he looked at), this link is tenuous at best. Importantly, it is particularly loose for the case of citations. As a matter of fact, the distribution of citation counts is *entirely consistent* with luck alone, due to the fact that papers are often not read, despite being cited. For fields where such data is available, the “best” papers are rarely the most cited papers. Put differently, citations are not a suitable proxy of achievement, as they are also just another proxy for fame. This is not entirely without irony, but – to put it frankly – the “Kardashian index” seems to be wholly without any conceptual foundation whatsoever in light of this research.

In fact, most papers are never cited, whereas a few are cited hundreds of thousands of times. Taking the Kardashian index seriously implies that most papers are completely worthless. Perhaps this is so, but maybe most people are just bad at playing this particular self-promotion game? I’m not sure if there is an inequality index akin to the Gini coefficient for citations, but I would venture to guess that it would be staggering and that the top 1% of cited papers would easily account for more citations than the other 99%.
Put differently: Is everyone trying to be a Kardashian, on Twitter or off it, but some people are just better or luckier in the attempt?
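As a rough illustration of how such a figure could be obtained, here is a small sketch that computes a Gini coefficient for a synthetic, heavy-tailed set of citation counts. The distribution and all numbers are invented for illustration; real citation data would be needed for an actual estimate.

```python
import numpy as np

def gini(values) -> float:
    """Gini coefficient: 0 = perfectly equal, 1 = maximally unequal."""
    v = np.sort(np.asarray(values, dtype=float))
    n = v.size
    ranks = np.arange(1, n + 1)
    return (2 * np.sum(ranks * v) / (n * np.sum(v))) - (n + 1) / n

# Synthetic, heavy-tailed "citation counts": many zeros, a few huge hits.
rng = np.random.default_rng(0)
citations = np.floor(rng.pareto(a=1.2, size=10_000) * 3).astype(int)

print(f"Gini coefficient: {gini(citations):.2f}")
print(f"Share of citations held by the top 1% of papers: "
      f"{np.sort(citations)[-100:].sum() / citations.sum():.0%}")
```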

I find this account quite plausible, but what does any of this have to do with scientific contributions?

As already pointed out, attention (or fame) matches real-life contributions rarely, if ever. This is mostly due to the fact that real-life contributions are hard to measure. Science is no exception. Citation counts are likely a better proxy for fame than for genuine contributions. If anything, the “Kardashian index” relates two different kinds of fame: fame inside a particular field of scientific inquiry relative to fame in the more general population. But who is to say that one kind of fame is better than another kind of fame? If one wants to save the “Kardashian index”, it would make sense to rebrand its inverse as the “insularity index” or “obscurity index” – do you get your message out only to people in your field or are you able to reach people more generally?

There are many ways for scientists to make a contribution. Not all of them are directly related to original research. Teaching comes to mind, reviewing too. The job of a modern academic is complex. One can do all of these things with varying degrees of seriousness. The reason why this became an issue in the first place is that outreach efforts (proxy: Twitter followers) and publications (proxy: citations) can be easily quantified. That doesn’t mean that it is valid to do so or that one is necessarily better than the other. A research paper is a claim that is hopefully backed up by good data created by original research. But as the replication crisis shows, many of these claims turn out to be unsubstantiated, born of methodological issues, not seminal insights. To be sure, academia would be well advised to develop better metrics of performance. In general, the evaluation of academic performance is often surprisingly unscientific. From papers to citations to evaluations, it is all remarkably unsophisticated at this point in time.

Ultimately, it is all about the relevance of one’s contribution. But trying to discredit an increasingly important form of academic activity (outreach), by shaming those engaged in it, is probably uncalled for. We need more interactions between scientists and the general public, not less. To be sure, a lot of what is called “outreach” today is more leech than outreach, but the real thing does exist. As for the relevance of citations: Just because someone managed to create a citation cartel where everyone in it cites each other, that doesn’t mean anyone outside of the cartel cares. Calling something an impact factor doesn’t make it so. To be clear, there is nothing wrong with deciding to focus on original research. As a matter of fact, our continued progress as a species depends on it, if done right. But lashing out at others who decide to make a different choice does invite responses that will tend to point out that the situation is akin to dinosaurs celebrating their impending fossil status. At any rate, evolution will continue regardless. Wisdom consists in large part of not being trapped by one’s own reward history when the contingencies change. But good luck staying relevant in the 21st century without a social media strategy.

As everyone knows, Mark Twain observed that it is inadvisable to pick fights with people who buy ink by the barrel. Is the modern day equivalent that it is unwise trying to shame people who buy followers by the bushel?

PS: Of course, all of this bickering about who is more famous, who contributed more and who is more deserving of the fame is somewhat unbecoming and indicative of a deeper problem. As far as I can tell, the root cause for this problem is that society does not scale well in this regard. Historically, humans lived in small groups of under 150 individuals. In such a group, it is trivial to keep track of who did what and how much that contributed to the welfare of the group. There is no question that the circuitry that handles this tallying was not designed to deal with a group size of millions – or even billions (if one were to adopt a global stance). It is not trivial at all to ascertain who contributed something meaningful and who got away with something with a group size that big. Unless we find a way to handle these attribution problems (modern society obviously allows for plenty of untrammeled freeloading) in general – perhaps with further advances in technology that allow more advanced forms of social organization – this issue will keep coming up, unresolved. If you have any suggestions how to solve this or a perspective of your own, please don’t refrain from commenting below.

PPS: Of course, there is an even more general issue behind all of this, namely the relationship between contribution and reward. No society is great at making this relationship perfect. Interested parties can exploit these misallocations by gaming the system. The question is what one should do about this and how one ought to perfect the system. Obviously, one way to deal with this is to shame the people who game the system, but I worry that the false-positive (and miss!) rate might be quite high. If so, that doesn’t really improve the overall justice of the system. A better way might be to make the system more robust against this kind of abuse. Ultimately, the best long-term policy is virtue.

Posted in Pet peeve, Science, Social commentary | 2 Comments

Ideological opportunity cost (IOC)

Ideology interferes with an unbiased appraisal of reality. This – in itself – would be detrimental enough, but ideology is far more insidious than that. By nature, ideology is designed to be extremely self-serving and inherently creates in-groups and out-groups. Put differently, there is never a reason to have an ideological debate with someone. The reason someone brings up a particular topic is not to have an honest discourse about it, but rather to celebrate the beliefs of the in-group. Assuming belief homeostasis, I would expect this response to be particularly strong in those without sincerely held religious beliefs. In other words, someone bringing up an ideological topic is engaging in something more akin to a secular sermon than to a genuine debate or discourse. Its function is to celebrate the in-group and to identify and ostracize the out-group.

Life is inherently hard as reality is extremely complex and nature does not ask for informed consent. Autonomous but inherently limited cognitive agents living in this world can get lucky, but should be expected to suffer the consequences of living in such a reality on a fairly constant basis. Given the ambiguous nature of *interpreting* reality, it can be tempting for individuals to willfully make fundamental attribution errors if they exculpate the individual – shortcomings are externalized sensu “It is not my fault, if only these evil other guys didn’t exist, all would be well with the world.”, particularly if this is socially reinforced. Ideology amounts to outsourcing the causes of the pain associated with existence. Therefore, it is highly likely that psychological and social needs are behind any given ideological position, whereas expressions on issues of fact that touch on the ideological position of the ideologue are merely a smokescreen to obscure this fact, perhaps even to the believers themselves.

All of this raises the issue of ideological opportunity cost (IOC). Given how self-serving and community-building ideological positions are, it is to be expected that their seductive nature will lead the ideologue to bring them up frequently. But there is a terrible cost associated with this. There are plenty of topics where a discussion can lead to a better appreciation of the realities of life for all involved. Such sincere discussions are critical, as a genuine exchange of information transcends the limitations of individual agents sampling reality in solitude. The problem is that ideologues have no interest in such discussion, as they already “know” the nature of reality. From their perspective, there is definitely one (politically) correct view of reality, but many wrong ones. Their only real interest is in spreading and celebrating their secular gospel in their secular sermon (SS). The opportunity cost associated with all of this is that most of these ideological exchanges can seem incredibly engaging in the short run, but will likely be – for the reasons outlined in the previous paragraph – fruitless in the long run. Yet they do displace genuine exchanges concerning the nature of reality, as there is only limited time for social engagements.

This is a problem without a good solution. If people disagree, this kind of exchange creates adversity without any conceivable upside. Even if everyone agrees on the ideological issue in question, the group prior is further distorted from reality and discussions that could set the proper group prior are displaced by this kind of ideological self-gratification.

If you are also concerned about this, you should be on the lookout for people who are likely to drag you into this kind of below-zero-sum exchange. The prediction is that the people most likely to do this are those who feel the pain of existence most acutely – due to a plethora of real and perceived problems – and who are unable to mitigate it by making it meaningful (usually with religion) or bearable (usually with stoic philosophy or meditation). In this view, ideology is a social coping strategy for personal problems that stem from existence itself. You do not have to be an unwitting part of this process.

The only solution I see is a conscious override: Recognizing that – while tempting – the ideological exchange is a trap, that ideological differences cannot be settled by discussion, that there is a tremendous opportunity cost even if everyone agrees and that – therefore – the only way to win is not to play.

Posted in Pet peeve, Philosophy, Social commentary | 3 Comments

What should we call simulated data?

Data is not made. Data is born of a measurement process. Taking measurements (in conjunction with a measurement theory) creates data. But then what should we call – in contrast – the results of simulations, the output of theoretical models? Some might object that this is not an interesting question in the first place, but pointless – and rather nitpicky – semantics. However, I must disagree. The question is of critical importance, as "simulated data" is a contradiction in terms. Data concern the state of the real world, the world of things. The outputs of a simulation belong firmly to the realm of ideas. It is critical not to confuse the two, lest we make ill-informed statements about the real world based solely on observations derived from models or simulations. This matters, as the financial crisis of 2008 impressively showed – when it became apparent that the "if" in "risk is only tamed if the real-world risk fits the risk model" is indeed a big (and uncertain) if. Entire fields can be built on the confusion between models that work for idealized systems and the real world they are trying to account for, as Ricardo and the rise of economics impressively show. That doesn't mean one should attempt this.

Actual simulated data

With that in mind, what should we call "simulated data"? As noted, the term itself is a contradiction in terms, and it bugs me that the rise of data science now makes me qualify actual data with the term "real". Calling model outputs "predictions" would be needlessly imperialist, as many (if not most) models deal solely with postdiction (there is nothing inherently wrong with this). I tried to make "sata" happen, but that has not caught on (yet).

Any suggestions?

 

PS: Lest you think that I'm needlessly pedantic and that this is a distinction without a difference – it really does matter. Simulations, modeling and "simulated data" all have a role in the scientific process. But they are no substitute for data. In an age of sophisticated modeling, it really does matter what one uses as a training (and test) set. Actual, real data – well recorded – is best. To wit: http://www.pnas.org/content/early/2016/06/27/1602413113.long
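
As a minimal, entirely made-up sketch of why the training set matters (a toy setup of my own, not the study linked above): below, the "world" contains a nonlinearity, the "model" is a linear simplification of it, and the same cubic predictor is fit once to measured observations and once to the model's output. The predictor trained on model output faithfully reproduces the model, including its blind spot.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the real world: the measurement process contains a nonlinearity plus noise.
def measure(x):
    return np.sin(x) + 0.1 * rng.standard_normal(x.shape)

# Stand-in for a theoretical model: a linear simplification of that world.
def simulate(x):
    return 0.8 * x

x_train = np.linspace(-2, 2, 200)
x_test = np.linspace(-2, 2, 50)

real_data = measure(x_train)   # data: born of a (noisy) measurement process
sata = simulate(x_train)       # model output: the realm of ideas

# Fit the same cubic-polynomial predictor to each "training set".
fit_on_data = np.polyfit(x_train, real_data, deg=3)
fit_on_sata = np.polyfit(x_train, sata, deg=3)

truth = np.sin(x_test)
mse_data = np.mean((np.polyval(fit_on_data, x_test) - truth) ** 2)
mse_sata = np.mean((np.polyval(fit_on_sata, x_test) - truth) ** 2)
print(f"test error vs. the real world, trained on measured data: {mse_data:.4f}")
print(f"test error vs. the real world, trained on model output:  {mse_sata:.4f}")
```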

Posted in Pet peeve, Philosophy, Science | 1 Comment

A tale of two wars

We are upon the 100-year anniversary of the start of the First World War. Most people alive today don't fully appreciate the cataclysmic forces that were unleashed in this conflict, several of which still shape world events today. Of course, most people are aware of the sequel, World War 2 – a very different, yet closely related conflict. More subtly, WW1 brought on the rise of communism (seeding the cold war) as well as the demise of the Ottoman empire. The way in which weak and inherently unstable nation states like Syria or Iraq were carved out of the corpse of the Ottoman empire troubles the world today. As the removal of the strongmen in the early 2000s revealed, most of these states could only be stabilized and pacified by dictatorships. Which is not a problem per se, unless a psychopathic and expansionist one is in charge of a major regional power (such as Iraq), in a region that decides the energetic fate of the world economy, as the experience of the 1970s illustrated. Put differently, Iraq really was about resources, but in a more subtle way than most people believe.

But why have such an apocalyptic conflict in the first place? Briefly, because everyone wanted to fight, no matter how senseless it was in economic terms. The sequel – WW2 – could only happen because no one (except for Hitler) wanted to fight, or could afford to (given the debt accumulated in WW1).

A great deal can be – and has been – said about what amounts to the suicide of the West, as it was classically conceived. Briefly, I want to emphasize a few points.

*Really everyone in Europe wanted to fight, including Germany (which felt itself surrounded by enemies), Russia (which wanted to come to the aid of its Serbian friends), Serbia (which felt its honor wounded by the Austro-Hungarian empire), Austria-Hungary (which wanted to teach the Serbs a lesson and get revenge), France (which had been in a revanchist mood since 1871 and watched the population development east of the Rhine with great concern), England (which felt its imperial primacy threatened by an upstart Germany) and even Belgium, which was asked by Germany to stand down so that Germany could implement the Schlieffen plan, but instead blocked every road and blew up every bridge it could.

*The irony of the Schlieffen plan: Germany saw itself surrounded by powerful enemies (France and Russia). The way to beat a spatial encirclement is to introduce the concept of time – the Russians were expected to mobilize their armies slowly. This gave the Germans a narrow time window for a decisive blow against Paris, knocking out France before turning around and dealing with Russia. This time pressure was so severe that while Germany had the necessary artillery to knock out the French border forts, it did not have the time to do so. This necessitated going through neutral Belgium, which brought the British and – eventually – the US into the war against Germany. The irony is that this worry about the Russians was misplaced. As a matter of fact, the Russians showed up far earlier than anyone expected and started to invade East Prussia in an attempt to march on Berlin. However, while they were early, they were also a disaster. A single German army defending East Prussia managed to utterly destroy both invading Russian armies – and then some. Given this outcome, there was no need for the highly risky Schlieffen plan.

*The irony of having a rigid war plan in the first place. But this, too, is only obviously a problem in hindsight. In WW1, both sides made disastrous mistakes on a regular basis. As a matter of fact, the rate of learning was itself appallingly slow – infantry operated with outdated tactics and without helmets well into the war. Ultimately, the side that was faster at improvising and made fewer disastrous mistakes won.

*The irony of constructing a high seas fleet for Germany. This got the English into the conflict, which turned it from a small regional engagement into a world war. Immensely costly to build, this fleet did the Germans a world of good – sitting in port for the entire duration of the war (with the exception of a brief and inconclusive engagement in 1916) and providing the seed for revolution in 1918, bringing the entire government down.

*The difference in how the conflicts unfolded at the outset of WW1 and WW2. As mentioned above, everyone wanted to fight in WW1. Consequently, well over a million people were dead within a few months, whereas it took almost two years for WW2 to get "hot".

*It is a legitimate question to wonder what would have happened if the US hadn't intervened in 1917. Without US intervention, there probably wouldn't have been enough strength remaining for either side to conclusively claim victory (Operation Michael in 1918 would probably not have been successful regardless, and without US encouragement, the Allied offensive would likely have suffered the same fate as those of previous years). But the war couldn't conceivably have gone on much longer regardless, due to a global flu epidemic and war weariness on all sides. What would the world look like today if the war had ended in an acknowledged stalemate, a total draw?

*Eternal glory might be worth fighting for, but the time constant of glory in real life is much shorter. Most people have absolutely no idea what the different sides were fighting for specifically, and would be hard pressed to even name a single particular engagement.

*Much has been made of the remarkable coincidences involved in the assassination of Archduke Franz Ferdinand. It is true that the assassin struck his mark only after a series of unlikely events, e.g. Ferdinand's driver getting lost, and the car trying to reverse – and stalling – at the precise moment that the assassin Princip was exiting a deli (Schiller's Delicatessen) where he had gotten a sandwich. Given the significance of subsequent outcomes, one is hard pressed not to see the hand of fate in all of this. However, there might be a massive multiple comparisons problem here. First of all, Princip was not the only assassin. To play it safe, six assassins were sent, and indeed, the first attempt did fail. More importantly, this event was the trigger – the spark that set the world ablaze – but not the cause. Franz Ferdinand makes an unlikely casus belli. Not only was he suspected of harboring tendencies toward tolerance and imperial reform, he had also married someone who was ineligible to enter such a marriage. This was a constant source of scandal in the Austro-Hungarian empire. The marriage was morganatic in nature, his wife was not generally allowed to appear in public with him, and even the funeral was used to snub her. Therefore, the emperor considered the assassination "a relief from great worry". More importantly, it can be argued that this event was just one in a long series, all of which could have led to war. Bismarck correctly remarked in the late 19th century that "some damn thing in the Balkans" would bring about the next European war. Indeed, the Balkans were the scene of constant crises going back to 1874, including those of 1912 and 1913, all of which could have led to a general war. If anything, it can be argued that fate striking in 1888 was more material to the ultimate outcome. In that year, Frederick III, a wise and progressive emperor, died from cancer of the larynx after having reigned for only 99 days. This made way for the much more insecure and belligerent Wilhelm II.

*What remains is the scariness of people ready to go to war even though it made absolutely no economic sense – in a hyperconnected world of globalized trade that closely resembles our own (mutatis mutandis, e.g. the US stands in for the British Empire as the global hegemon). In addition, there was a full – ultimately wasted – month for negotiations between the assassination of Archduke Franz Ferdinand and the beginning of hostilities. This raises the prospect of a repeat. At least, it doesn't rule it out. By 1913, there had been an almost 100-year "refractory period" (a respite from truly serious, all-out war) after the Napoleonic wars as well. However, odds are that if it should happen again – repeating the 20th century in the 21st – it will happen in Asia. Asia has the necessary population density, and a lot of its key countries – China, Japan, Russia, South Korea, India and Pakistan – are toying with extreme nationalism. As the history of the 20th century illustrates, that is a dangerous game to play.

Posted in History, Strategy | 1 Comment

The relative scale of early visual areas

The visual system of primates comprises a large number of distinct cortical areas containing neurons that modulate their activity in response to a visual stimulus and are believed to represent different aspects of the visual scene. It has been recognized since the 1980s that these areas are roughly organized as "early" visual areas (primary visual cortex or V1, and V2) followed by two parallel (dorsal and ventral) but hierarchical visual streams. What is usually underappreciated is how much cortical real estate is taken up by the early visual system, i.e. V1 and V2. This matters, as the biggest bottleneck in the entire system is at the level of V2 outputs (at least within cortex). The scene is likely to be represented at a fine grain in the early visual system, but not afterwards. Put differently, what happens before V2 (mostly) stays in V2. To illustrate this point, we took a page from a popular infographic meme and superimposed the rest of the visual system onto V1 and V2. To do so, we modified a figure from Wallisch et al. (2008), see figure 1. In this figure, the size of each area is scaled in proportion to its cortical surface area.

Figure 1. The higher visual system, superimposed on V1 and V2. As you can see, individual areas of the higher visual system are on the scale of small European principalities while V1 and V2 most resemble land empires in Asia or America. Modified with permission from Wallisch et al., 2008.

As you can see, V1 easily accommodates the entire dorsal stream (and then some), and the ventral stream almost fits into V2 (although not quite, because V4 is so big). Neatly, this is consistent with earlier reports regarding the design limitations of the visual system.
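
For readers who want to play with this kind of scaling themselves, here is a minimal sketch of the construction. The surface-area values below are placeholders for illustration only, not the measurements underlying the figure from Wallisch et al. (2008); the point is simply that the drawn footprint of each box should be proportional to cortical surface area, so linear dimensions scale with its square root.

```python
# Minimal sketch: scale the footprint of each visual area in proportion to its
# cortical surface area. The numbers below are PLACEHOLDERS for illustration only,
# not the actual measurements behind the figure.
surface_area_mm2 = {
    "V1": 1200.0,   # hypothetical value
    "V2": 1000.0,   # hypothetical value
    "V4": 300.0,    # hypothetical value
    "MT": 60.0,     # hypothetical value
    "MST": 50.0,    # hypothetical value
}

reference = "V1"
for area, size in surface_area_mm2.items():
    # The drawn *area* of each box is proportional to cortical surface,
    # so the side length of the box scales with the square root.
    relative_area = size / surface_area_mm2[reference]
    relative_side = relative_area ** 0.5
    print(f"{area}: {relative_area:.2f}x the area of {reference}, "
          f"box side scaled by {relative_side:.2f}")
```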

Posted in Neuroscience, Science | Leave a comment

Ideology poisons everything, as it rotates perceptions of reality

It is obvious where ideology comes from. It solves a lot of problems. A small tribe needs to agree on a distinct course of coherent action. Otherwise, its strength is frittered away, defeating the very point of finding strength in numbers, i.e. of being a tribe in the first place. Ideology also solves a lot of free-rider and principal-agent problems in general. It makes individuals do things for the common good that objectively impede their subjective welfare and that they wouldn't otherwise do. This also makes good sense along other lines. Small tribes perish or flourish as a whole (a genetically highly interrelated group), not as individuals. Ideology promotes fitness on the level where it matters, the group-selection level.

However, we no longer live in a world of small, competing, ever-warring and highly xenophobic tribes. On the contrary, we live in an extremely large society – in terms of the US, about 2-3 million times as large as the typical tribe, the form of social organization that was the norm throughout almost all of human evolution (if one considers the whole world as one big globalized and highly interconnected society, it is about 70 million times as large). So the archaic model of social organization clearly doesn't scale. Yet, its roots are still with us. For a simple test, try watching a soccer game (or indeed any team sport) without rooting for a particular team (or watch a game where you don't care about any of the teams). The athletic display won't be any different, but it will likely be rather un-riveting.

So in modern times, tribalism is a cancer that is threatening to tear society apart. Why? Because most of the remaining societal problems are extremely thorny and complicated (that's why they remain in the first place – we already addressed the easy ones). They usually don't lend themselves to resolution by experimental approaches. But a lot typically rides on the outcome, on the answer to these questions. This gives ideology a perfect opening to take root. For instance, modeling the climate is extremely complicated. All models rely on plenty of assumptions and relatively sparse data, and are so complex that they are even hard to debug (or to know when debugging was successful). It is very hard for anyone to ascertain what is going on, let alone what will be happening in the future, yet ideologues are very keen to either dismiss any probability of warming or to assume dramatic human-caused warming as a certainty. The confidence of both camps far exceeds what the data can cover. Where does it come from? Potential holes in the story are simply filled in by ideology. Similar questions arise – for instance – in history. A key question in history is: What makes a society successful? The most realistic answer is that it likely involves a complex interplay of geography, genetics and culture. It is extremely hard to assign relative weights to these factors, as it is impossible to do experiments on this issue. Yet, one can make a good living writing books asserting that it is all geography (implicitly or explicitly assigning a weight of zero to the other factors), all culture or – recently – by pointing out that the weight of the genetic factor is unlikely to be zero, unfashionable as that might be, given the political climate.

Which position you find most compelling says much less about which position is true – at this point, the evidence is far from conclusive – and much more about you: Which position do you want to be true? Why would you want a particular position to be true? Because it neatly fits in with your worldview or Weltanschauung.

What is the problem with that? The problem is that people at the two ideological poles simply look at the same data from two different vantage points (e.g. left vs. right, see figure 1).

Figure 1: This represents reality. Two ideological camps have positions on issues that vary along the left/right dimension. Some of them are more valid than others, but no camp has a monopoly on validity, given these issues.

But in the mind of the ideologue, no issue comes down to a horizontal difference; it comes down to a vertical one – the ideologue assumes that the positions of their own camp are valid and the others invalid. But when one sincerely perceives a difference in appraisal as a difference in fact (where one is either right or wrong), resolving these issues is basically impossible.

Figure 2: Liberal ideology. From the liberal perspective – which corresponds to a clockwise rotation of reality by 90 degrees – their positions are now perfectly centered in ideological terms. They just happen to be right, whereas the other camp is just wrong about everything.

Figure 3: Conservative ideology. From the conservative perspective – which corresponds to a counterclockwise rotation of reality by 90 degrees – their positions are now perfectly centered in ideological terms. They just happen to be right, whereas the other camp is just wrong about everything.
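
To make the rotation metaphor concrete, here is a minimal sketch (the coordinates are purely illustrative, not data of any kind): positions that differ along the horizontal (left/right) axis in "reality" get mapped onto the vertical (valid/invalid) axis once the plane is rotated by 90 degrees in either direction.

```python
import numpy as np

# Illustrative positions: x = left/right location of an issue position, y = actual validity.
positions = np.array([
    [-1.0, 0.3],   # a position of the left-leaning camp
    [ 1.0, 0.3],   # a position of the right-leaning camp
])

def rotate(points, degrees):
    """Rotate 2D points about the origin by the given angle (counterclockwise)."""
    theta = np.deg2rad(degrees)
    rotation = np.array([[np.cos(theta), -np.sin(theta)],
                         [np.sin(theta),  np.cos(theta)]])
    return points @ rotation.T

# Clockwise rotation by 90 degrees (the "liberal" reading in figure 2):
# the left camp's position lands on top (valid), the other camp's at the bottom (invalid).
print(rotate(positions, -90))
# Counterclockwise rotation by 90 degrees (the "conservative" reading in figure 3): the mirror image.
print(rotate(positions, 90))
```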

The insidious thing is that this happens inadvertently and automatically. People who take a particular position will naturally see the other one as invalid and just plain wrong. Not as a difference in position, but as a matter of (moral) right and wrong. Righteousness vs. wickedness. Feeling this with every fiber of their being leads to an immediate dismissal of the other position. If they can see the truth so clearly, why can't the other side? Surely, they must be willfully ignorant, malicious or both. Ascribing turpitude usually follows. Worse, this destroys any reasoned discourse. A nuanced argument will go unappreciated, as the ideologue will not register its nuance. Instead, it is automatically projected into a low-dimensional ideological space, perpetuating the framework of divisive tribalism, with all of its odious consequences. As we've known for a long time, it is basically impossible to convince someone who holds their position with complete certainty that they are wrong, regardless of how ludicrous that position is or how much it flies in the face of new incoming information.

It is awfully convenient that the same people who disagree with us are also those who are dead wrong about everything. When the issue was deemed important enough, long and brutal (religious) wars have been fought over exactly this.

The problem is not being wrong about something. That happens all the time. The problem is how right it can feel to be so wrong. If you want to experience this for yourself, there is a simulacrum with a somewhat juvenile name that deals with issues like information accumulation, cue validity, confidence and uncertainty. It allows an apt simulation of what social primates can be absolutely convinced of, even in the near absence of valid information. This can be rather scary. An important difference from reality is that in real life there is rarely a reality check (feedback from reality or god) on whether one's beliefs actually correspond to the truth.

What is the way out of this? Acknowledging that this is going on. Metacognition allows for a possible avenue of transcending these biases. Naturally, this will take a lot of training, particularly as most actors (the media, for instance) have every incentive to be as divisive as possible in their rendering of events. But by appreciating the complexity of problems and by embracing the fundamental uncertainty inherent to life, one opens the possibility that this can be done. It won't be easy, but there is no real alternative. A de facto perpetual cold civil war is not one. Certainly not a good one.

The problem with the ideologue is that they are constitutionally unable to learn from feedback. As they already see themselves as right to begin with, the error is irreducible. As such, they are the natural enemy of the scientist. In particular, it is important not to give people a pass for being uncivil (and unhelpful) just because they agree with us ideologically.
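
One way to see why feedback cannot reduce the error is a toy Bayesian calculation (my illustration, not anything claimed above): a prior of complete certainty is immovable, no matter how strongly a piece of evidence points the other way.

```python
# Toy Bayesian illustration: if you assign prior probability 1.0 to your own position,
# no amount of contrary evidence can move the posterior. The numbers are arbitrary.

def posterior(prior, likelihood_if_true, likelihood_if_false):
    """Bayes' rule for a binary hypothesis, given one piece of evidence."""
    numerator = prior * likelihood_if_true
    denominator = numerator + (1.0 - prior) * likelihood_if_false
    return numerator / denominator

evidence = dict(likelihood_if_true=0.05, likelihood_if_false=0.95)  # evidence strongly against

print(posterior(0.80, **evidence))   # an open mind moves: ~0.17
print(posterior(1.00, **evidence))   # perfect certainty is immovable: 1.0
```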

Tribalism had its day, for most of human history. And it was very adaptive. But today, it no longer is. Instead, it is needlessly divisive. Is it really useful to judge people based on what browser or operating system they use, what car they drive, what phone they have, which language they program in, etc.? People are obviously eager to self-righteously do so at the drop of a hat, but is that really helpful in the modern world (the traditional solution being to wipe out the other small tribe)?

Posted in Social commentary | 1 Comment

The social mission of perceptual research

Our perception corresponds to an idiosyncratic model of reality, not reality itself.

This is easy to forget, as we all share a common outside environment in the form of external reality and process it with a cognitive apparatus that has been honed over billions of years to work properly. Yet, this is a profound truth.

Illusory motion

Do you see motion? If so, it was created by your brain. Nothing in the image is actually moving in the outside world. But psychologists now understand the contrast gradients that can be used to make the brain assume the presence of motion.

It is important to recognize that the perceptual model does not necessarily correspond to objective reality. This is not a failing of the system. On the contrary: as it almost always has to work with incomplete information at the front end, gaps in evolutionarily relevant information are filled in from other sources, be they other modalities, correlations with other cues, or regularities learned during ontogeny and phylogeny. Put differently, it is more adaptive for organisms to make educated guesses about what is out there than to take a strictly agnostic position if information is missing.

The perception of depth information is a good example of this. The spatial outside world is (at least) three-dimensional, yet the receptors that transduce the physical energy of photons into electrical signals that the brain can process are arranged in a two-dimensional sheet at the back of the eye, as part of the retina. In other words, an entire dimension of information – how far away things are – is lost up front. Strictly speaking, the brain has no genuine distance information available whatsoever. Yet, we see distance just fine. Why? Because this is a dimension that the brain can ill afford to lose in terms of survival and reproduction. It is critical to know how far away predators and prey are, to say nothing of all other kinds of objects, if only not to bump into them. So what is the brain to do? In short, it uses a great many tricks to recover depth information from two-dimensional images. Most of these "depth cues" are now known and are used by artists to make perfectly flat images look like scenes with great depth, or even to make movies look three-dimensional. Strictly speaking, this constitutes an error of the system, as the images really are two-dimensional, but it is an adaptive kind of error, and given how complicated the problem is – operating with so little available information – even the best guess can still be a wrong one.

A bigger object that is farther away takes up the same space on the retina as a smaller one that is closer. How – then – is the brain supposed to know which one is closer and which is farther? This information is not contained in the size of the retinal image traced by the object itself – as it is a projection – and has to be reconstructed by the brain.
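
A quick worked example (standard projective geometry, nothing more) makes the ambiguity explicit: an object of size $s$ viewed at distance $d$ subtends a visual angle of

$$\theta = 2\arctan\!\left(\frac{s}{2d}\right) \approx \frac{s}{d} \quad \text{(for small angles)},$$

so a 1 m object at 10 m and a 2 m object at 20 m both subtend roughly 0.1 rad (about 5.7 degrees). The retinal images are identical, but the scenes are not – the missing dimension has to be supplied by the brain.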

The recovery of distance information illustrates the principle nicely, but it is far from the only case. Other perceptual aspects like color, shape or motion are derived in a similar fashion. The brain almost always constructs its model on the basis of incomplete information. Many processing steps go into the construction of the model, and most of them are unconscious; only the final percept is conscious. Many brain areas are involved in the construction of this model (in primates, between 30 and 50% of the brain).

One thing that is remarkable about all of this is that the brain fully commits to a particular interpretation of the available information at any given time. Even if the perceptual input is severely degraded, the brain rarely hedges, as long as the result is compatible with a coherent model of the world. The end-user is not informed which parts of the percept correspond to relatively "hard" information and which were "filled in" and really correspond to little more than educated guesses about what is out there. This seems to be a general processing principle of the brain. Moreover, if the available information is inherently ambiguous and compatible with several different interpretations, the system does not flag this fact. Instead, second thoughts manifest as a (sometimes rapid) switching between different interpretations, yet a meta-perspective is never taken by the perceptual system. A particular interpretation of the available information usually comes to the exclusion of all others, if only for a time. Interpretations switch, they don't blend.

Inherently ambiguous images. Interpretations switch, they don't blend.

Arguably, this necessarily has to be so, as this system provides an interpretation of the outside world not for our viewing pleasure, but to be actionable, i.e. to guide and improve motor action. In the natural world, both indecision and dithering incur serious survival disadvantages. To be sure, flip-flopping is not without its perils either, but fully bistable and inherently ambiguous stimulus configurations are probably rare outside the laboratory of the experimenter (such displays are designed specifically to probe the perceptual apparatus, and doing so in a fashion that doesn't bias the stimuli one way or the other is not easy), so evolution probably didn't have to make allowances for them.

To summarize, the brain overcommits to a particular interpretation of the available evidence in a way that is often not entirely warranted by the strength of the evidence itself. It does so for a good reason – survival – but it is nevertheless doing it. This has social implications.

Given slightly disparate – and incomplete – information, individual differences in brain structure and function, as well as a different history of experience accumulation (which in itself colors future perception), it is to be *expected* that different people reconstruct the world differently, sincerely but quite literally seeing things differently.

Different perspectives. Which one is correct about the state of the external world?

The construction of a perceptual model of the world involves so many steps that it is not surprising that the end result can be idiosyncratic. What is surprising is that we have not yet fully wised up to this fact. The fact that it is happening is not a secret; modern psychology and neuroscience are pretty unequivocal that this is basically how perception works. Most of us also know about this from personal experience. Visit any discussion board on any issue and you will see this effect in action. Every individual feels strongly (and often sincerely) that they are right – and by implication that everyone who is at variance with their stance is wrong. This is very dangerous, as we are closing ourselves off to other viable positions. The other side is – at best – misinformed, if not outright malicious or disingenuous. In addition to the illusory certainty of perception, there is often an immediate emotional reaction to things we disagree with – a righteous anger that has cost many a FB friendship and triggered quite a few outrage storms on twitter, with no end in sight.

The traditional way to resolve this kind of dispute is to resort to violence. In the modern world, this kind of conflict resolution is frowned upon, partly due to the domesticating effects of civilization and partly due to the effectiveness of our weapons, which makes this path a little too scary these days. So what most people now do is engage in a kind of verbal sparring as a substitute for frank violence. But this is often rather ineffective, as few people can be convinced in this way. There is pretty good evidence that by talking to the other person, we are just talking to their PR department, which will spin a story in any way possible. So most of these exchanges devolve into attempts at silencing the other side by shaming. All of these tactics are highly divisive. Popular goodwill is a commons, and it is easily polluted. Now, it might be possible that social interactions simply don't scale. Small tribes need to find a consensual solution in order to foster coherent action, but large and diverse societies like ours don't allow this easily.

A different approach opens up if we take the insights from perceptual psychology and neuroscience seriously. If we acknowledge that our brains have a tendency to construct highly plausible but ultimately overconfident models based on scant and – in a large society – necessarily disparate information, dissent is to be expected. It should not (necessarily) be interpreted as a personal attack or slight. Put differently, benign dissent is the default mode to be expected from the structure of our cognitive apparatus. It is not necessary to invoke malice. Nor is it necessary to invoke ignorance. The other side might well be less informed. But it is also plausible that they simply had different experiences (and a different cognitive apparatus to process them).

Once this is acknowledged, there are two ways to go about building common ground. The first would be to – instead of arguing – figure out what kind of evidence the other side is missing and to provide it, if possible. If that is not possible, the second is to point out a different possible interpretation of the evidence available to both.

Of course, this is a particular challenge online, as many statements are not made to have a genuine discourse, but rather for signaling purposes, showing one’s tribe how good of a person one is by toeing the tribal line. Showing tribal allegiance is easy. It also typically doesn’t cost an individual anything, nor does it usually achieve anything. But the cost to the commons is big, namely tribalism. This is indeed a tragedy as we all do have to share this planet together, like it or not. As online discourse matures, it is my sincere hope that this kind of fruitless and toxic pursuit will be flagged as an empty attempt at signaling, discouraging individuals who engage in it from doing so.

There is a hopeful note on which to end: Taking the lessons of perceptual research seriously, we can transcend the tribalism our ancestors used to get to this level, but that now prevents society from advancing further. Once we recognize that it is tribalism that is holding us back, we can use the different perspectives afforded by different people using different brains to get a higher-order imaging of reality than would be possible for any individual brain (as any individual brain necessarily has to take a perspective, see above). Being able to routinely do this kind of simultaneous multi-perspective imaging of reality would be the mark of a truly advanced society. Perceptual research can pave the way. If its lessons are heeded.

 

Posted in Neuroscience, Philosophy, Psychology, Science, Social commentary | 3 Comments

A primer on the neuroscience of happiness

The age-old question of what makes for a happy life is of great interest to almost anyone who is in fact alive. A classic answer, building on Aristotelian notions of happiness, is provided by Charles Murray, who points out that lasting life satisfaction is likely to derive from vocation, family, community and faith. We can now add a fifth element to these considerations, namely the neuroscience of happiness.

That is not to say that studying neuroscience will necessarily provide happiness. But there is mounting evidence as to what will *not* lead to increased human happiness.

Happiness is often symbolized by depicting people with arms spread out, people jumping and people looking into the sun.

As a matter of fact, perhaps the most important insight from neuroscientific considerations is a conceptual one. The brain uses happiness as a means to an end: getting the organism to do the right thing. It is by its very nature transient and elusive. Neither brain nor reality is designed to make happiness easy to come by, nor to make it last. That would almost certainly interfere with survival and reproduction goals. Ironically, this makes happiness *more* valuable and desirable, as evinced by the number of talks and books on the subject. If it were easy, there would be nothing more to say or do, as everyone would just be happy.

Of course, if one is really skilled in the ways of the sage, one can override and sidestep this programming. However, *simple* solutions like acquiring more stuff will not work.

Why? Because if they did, you would already be happy. One simply cannot expect improvements in material wealth to improve one's subjective happiness in the long term. On the contrary.

Consider this: If you are reading this, it is likely that you are living in a magical palace that would have been unimaginable to most of your ancestors, even if your living quarters are modest by contemporary standards. Look around you. You can effortlessly turn on lights with the flick of a switch, at any time of the day or night. You have fairly well insulated windows that maintain a considerable temperature differential to the environment. You have a non-leaky roof over your head. Your walls are made of stone or a similar solid material. You command virtually endless quantities of potable water, both hot and cold, at a trivial cost. You also have a sewage system in place that makes waste disposal largely a non-issue. There are people collecting your trash, making this chore an afterthought. Your kitchen and household appliances replace a veritable horde of servants. Your refrigerator allows you to store large quantities of exotic foods from all over the world for long periods of time without spoilage. Your stove allows you to prepare warm meals without much smoke or serious danger of fire, to say nothing of the wonders of the microwave, which allows you to do all that at the push of a button and in no time. Radiators and air conditioners allow you to keep this place at the desired comfortable temperature at all times, regardless of outside conditions. The most humble TV and radio seamlessly connect you to cultural content and information from around the world, again – remotely. The list is endless.

And this is just one aspect of your life, your living quarters. From the perspective of your ancestors, you live in unimaginable luxury. Yet, you take all of this completely for granted and most likely never give it a second thought.

But there is more. How did you afford this place? Almost certainly not by doing backbreaking work. Not in this day and age. Nor are you likely afraid to lose it all due to war or natural disaster. While these things still do happen, it is major news if they do, and there are local, regional, national and international relief efforts waiting in the wings if disaster should strike.

As a matter of fact, in this age of globalization, you are almost certainly doing business with almost everyone on the entire planet on a daily basis, making seminal contributions to the promotion of peaceful commerce, cooperation and trade without even being aware of it. Due to the way the system is set up, this all magically happens just by going about the things you do. Going to work, shopping for groceries, etc.

On that note, if you are reading this, you are also likely to own a smartphone. This "phone" connects you – at a modest fee – to virtually every other human on the planet, as well as to the combined knowledge of civilization since time immemorial. Wirelessly, without cords of any kind, for long periods of time. On top of that, it can carry electronic versions of thousands of books and songs, it can play videos and – by the miracle of apps – it can double as a flashlight, mirror, wallet, notebook, voice recorder, photo album, map, calculator, camera and video camera, among many other things. Yet, it fits into your hand.

How much do you think such a device would have been valued at a few short decades ago, for instance in the 1970s, if it had even been conceivable?

Now, shortly after their arrival on the world scene, smartphones mostly impact human happiness *negatively* – when something doesn't work quite as expected. Which is the first hint that expectations are the real driver of human happiness. If you expect a maps app to get you to your appointment and it doesn't, you will feel let down. Naturally.

You should be in awe. You really ought to be amazed. Every day, all day. But you are not. Far from it. That would get in the way of achieving your (evolutionary) goals.

Essentially, you fell victim to a confusion of timescales. This is not uncommon. Most people assume that what makes them feel good in the short term (rewards) will make them feel good in the long term. But that is not the case. It's a trap. If you go down that route, you will need ever larger and ever more varied rewards just to feel the same level of satisfaction. Which might be good for the continued growth of GDP, but not necessarily for you.
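
To make this dynamic concrete, here is a minimal toy sketch (my illustration of the argument, not a model taken from the happiness literature): felt satisfaction tracks reward relative to an expectation, and the expectation adapts toward recent rewards.

```python
# Toy sketch of the adaptation argument. All numbers are arbitrary.
def simulate(rewards, adaptation_rate=0.5):
    expectation = 0.0
    for reward in rewards:
        satisfaction = reward - expectation                        # how good it feels
        expectation += adaptation_rate * (reward - expectation)    # expectation catches up
        print(f"reward={reward:5.1f}  expectation={expectation:6.2f}  satisfaction={satisfaction:6.2f}")

# A constant stream of the same (objectively high) reward quickly stops feeling good...
simulate([10.0] * 6)
# ...so rewards have to keep growing just to deliver the same felt lift.
simulate([10.0, 15.0, 20.0, 25.0, 30.0])
```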

The point is that unimaginable increases in the standard of living in rather short periods of time did not make you any happier than people living a couple of generations ago. It is naive to assume that future increases in standards of living will raise human happiness as a whole. At least not in absolute terms. There is evidence that material possessions can impact relative happiness for social reasons, for instance making you happier if you have a bigger house or TV than your neighbor or making you feel bad if your phone is not as advanced as that of your colleague. But that won’t change the level of happiness of society on the whole.

The neuroscience of happiness

Of course, this topic is vast and cannot be adequately covered in anything less than a treatise. So a short primer will have to suffice for now. Here it is.

Posted in In eigener Sache, Neuroscience, Philosophy, Psychology, Social commentary, Technology | 1 Comment

The consolation of temporal perspective

Few things are more discouraging and galling to the righteous than the raging success of the obviously undeserving and unworthy. This can be particularly dispiriting early in life. The wise will recognize that virtue and non-virtue have fundamentally different time constants. Lack of virtue is eventually its own undoing. The catch lies in the “eventually”. It might take a while. But it will happen, due to the inherent nature of virtue and the lack thereof. Assuming history is ergodic.

And therein lies the consolation.

Virtue vs nonvirtue have different time constants

That this is necessarily so follows as a corollary of the notion of a great filter. If great power does not go hand in hand with an equally great sense of ethics and self-restraint, it will ultimately prove self-destructive. This can be observed in many systems, be they civilizations, technologies, lottery winners or celebrities. The key is to minimize the damage – perhaps by non-association – when the inevitable collapse happens. Also, to stick around. Given the situation described above, this is not easy, but it is unavoidable if one doesn't want to end on a down note.

Put differently, the path of virtue is long and arduous, but sustainable. Taking shortcuts helps those who are lucky in the short term, but devastates them in the long run all the same. Solace is provided by Warren Buffett's life story, which essentially shows that exponential growth can add up to one giant and irresistible snowball.
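
To put a rough number on the snowball (this is generic compound growth, not Buffett's actual returns): a quantity growing at rate $r$ for $n$ periods is multiplied by

$$\frac{V_n}{V_0} = (1+r)^n, \qquad \text{e.g. } (1.20)^{10} \approx 6.2, \quad (1.20)^{50} \approx 9{,}100.$$

At a hypothetical 20% annual growth rate, ten years yield about a six-fold gain while fifty years yield roughly a nine-thousand-fold gain: nearly all of the snowball accumulates late, which is exactly why sticking around matters.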

I would like to end this piece by pointing out that I do not believe the great filter to be a plausible explanation of the Fermi paradox. Mitochondria are a much more parsimonious explanation, lending credence to an "early filter". Then again, parsimony ought not necessarily be invoked when explaining complex phenomena.

Posted in Philosophy | Leave a comment

SfN 2013 in San Diego

This post will document my annual pilgrimage to SfN. This year (as in 2004, 2007 and 2010), it will take place in San Diego.

The San Diego Convention Center. Hallowed halls.

See here for how I prepare for the event and what I recommend for how to go about it.

Dealing effectively with SfN is a daunting challenge, equivalent to running two marathons and likely to get you physically sick if you don’t do it right.

This year, I’m presenting on neural correlates of the line motion illusion.

The line motion illusion

In other news, the dynamic posters have finally arrived. I am glad that I was on the leading edge of this development. Of course, future generations won't be able to appreciate this, as – for them – it will have always been this way.

The dynamic posters have arrived.

In other news, the second – “all black” – edition of “Matlab for Neuroscientists” has arrived. Now including LFP analysis, GUIs, parallel computing, etc.

The second edition


Posted in Neuroscience, Science | Leave a comment

You really do need to sleep right

Two years ago, I wrote extensively about why getting sufficient sleep is crucial to a good life and how to go about establishing sufficient levels of sleep quality and quantity.

Since then, the situation has – if anything – gotten even more dire. Despite an overhyped "quantified self" movement, few people seem to actually be serious about self-monitoring, so ZEO went out of business, leaving the market with only rather undesirable options for sleep tracking that deserves the name.

Culturally, there has also been no change in terms of sleep appreciation. High-powered movers and shakers celebrate people who claim that they have adopted the habit of sleeping only every other night, or that they get a headache when they sleep for more than four hours. Of course, these were confabulations to intimidate the competition – sleep is not just a fungible habit, but a fundamental aspect of physiology. Some aspiring masters of the universe have already paid the ultimate price for ignoring basic physiology. Widespread elite failure has been one of the more disappointing aspects of living in these times, and an inadequate sleep culture might well have contributed to it.

I do understand the desire to get more done in a given day, but it is important to recognize that the ability to stretch the day and compress the night by technology (mostly light and stimulants) is not a winning strategy in the long term. As laid out before, lack of sleep doesn’t just make one more tired. It also affects – negatively – every single aspect of cognition (including decision making and creativity) and emotion (motivation, emotional stability, etc.) as well as somatic integrity (aging, immune function, etc.) that has been studied. This includes virtually all features of proper bodily function including counter-intuitive aspects such as bone mineral density, which seems to be rather negatively affected by bad sleep habits.

Sleep disorders are so strongly associated with mental health problems such as depression and neurological disorders like dementia that many now suspect lack of adequate sleep to play a causal role in the genesis of these problems.

Recent research (that has come out since my 2011 piece) strongly corroborates this view.

First, there seems to be nothing "light" about light whatsoever: It has now been established that aberrant light cycles can *directly* (even if the overall amount of sleep is preserved) increase depression-like behaviors and the release of stress hormones (with an attendant decrease in cognitive function).

Second, while it has long been suspected that sleep is somehow involved in "restoring" brain health per se, it has now been shown that this is actually the case. Specifically, brain cells seem to be much smaller during sleep, increasing the interstitial space and leading to a dramatic increase in fluid exchange. This – in turn – seems to remove potential toxins like beta-amyloid from neurons. Put differently, lack of sleep might quite literally be neurotoxic, with a potentially increased long-term dementia risk.

Sleep, from the perspective of the brain?

In this view, sleep has mostly a dishwasher/brainwasher/sewage plant function for the brain. This makes sense, given how conserved and widespread sleep is throughout evolution, even in animals not renowned for their cognitive prowess.

No one likes to be told what to do, particularly when it is perceived as limiting. However, in light of this evidence, it might be advisable to – finally – get some. Neurotoxicity and dementia are not things to play around with. We have previously discussed how barbaric the past appears to us. How could people possibly live like that? Our inadequate sleep culture is high on my list of things that are likely to evoke the same reaction a thousand years hence, once the critical value of adequate sleep for wellbeing is more widely recognized.

Skimping on sleep is not glamorous. It is a barbaric practice that likely imposes unaffordable long-term costs on all involved.

Posted in Neuroscience, Optimization, Science, Social commentary | 2 Comments

The paradox of progress

I often wonder how people managed to get by a thousand years ago, without effective anesthetics or antibiotics or even a fundamental understanding of the underlying causes of illness and disease.

However, I realize that people a thousand years from now will wonder the exact same thing about us. For instance, we still don't have effective antivirals, cancer treatment outcomes have largely stalled in recent decades, and our available "antidepressants" (as well as psychotropic agents in general) are woefully inadequate.

Put differently, not only do we have a long – and increasingly hard – way to go (the low-hanging simple fruit having been plucked a long time ago), our advances might even leave us worse off in the meantime.

For instance, since the Mongols were entirely ignorant of even the most fundamental tenets of medical science – attributing human ailments to animal spirits instead – this entire aspect of human existence wouldn't garner much attention. One goes about one's daily life; if one gets sick, one can pray and hope for the best; and should one succumb to one's illness, well, perhaps that is just fate.

In contrast, we are in a tantalizingly different position. We know that genes play a big role in health and disease, we can now even read individual genomes. However, we are far from being able to interpret the role of individual genes, let alone understand their expression patterns (via epigenetics) or manipulate them if they are broken. Note the contrast: We know that being able to manipulate individual genes would be crucial, but we can’t do it.

Neuropsychology is in a similar position. We know that the state of the brain matters. We can even correlate individual lesions with striking deficits of mental faculties such as attention, in conditions like hemineglect or Balint's syndrome. And that's where the state of the art ends. We are much better at diagnosing these conditions than at being able to do anything about them.

The promised land, as seen from the desert. It is a long way off.

This state of affairs isn't unique to neuropsychology either – it basically characterizes the situation in almost all of the neural sciences. The Mongols didn't appreciate the importance of brain chemistry. We do, but there is precious little we can do about it. For instance, ADHD and ADD are fairly common, and it is a good bet that dopaminergic neurotransmission is impaired in these conditions. However, we can't be entirely sure what is wrong in an individual case, and even drugs effective at modulating dopaminergic neurotransmission, e.g. Ritalin, *downregulate* the expression and sensitivity of dopamine receptors in the long term. Ritalin also tends to interfere with other cognitive systems, such as memory, in susceptible individuals. I have no doubt that in the long term, we will develop much more selective agents that work with the individual's biochemistry and *up*regulate dopamine receptors in these people.

So we are basically in the position of Moses, wandering around in the desert for decades on end. We know the promised land exists and that we are on our way, but the path there will be hard and we will probably not live to see it (while enduring all the suffering along the way). That is a truly harsh fate – having come so far only to realize that true progress still eludes us. Future generations – those living when the progress has been realized – will shake their heads and wonder how anyone could live under such clearly inadequate conditions.

Again, the Mongols didn't have this particular problem. I think we might be well advised to develop philosophical coping mechanisms that let us deal with our peculiar and – I would argue – historically unique position: Due to the very progress we have made, we know that our available treatment options are far from ideal for many (if not most) conditions. We have come a long way, but at this point, that mostly just highlights how far we have yet to go.

Posted in Philosophy, Science, Social commentary, Strategy | 1 Comment

On the importance of consistent mapping

The problem I'm about to write about has persisted for quite a while, and I thought Google would have fixed it by now. Alas, no such luck, thus far.

In a nutshell, we have been aware of the extreme importance of consistent mapping for learning automatic, efficient and error-free behavior ever since Schneider & Shiffrin (1977).

Briefly put – although this differs from the particular experimental design of Shiffrin & Schneider – it is crucially important that the same symbol consistently has the same meaning and is not sometimes associated with other meanings.

And this is the problem with the current way Gmail uses the trashcan symbol. It is used both to delete an entire thread (possibly consisting of hundreds of individual emails) and to discard a recent – and unsent – draft of a message.

"Delete" refers to moving the entire thread - which could contain hundreds of messages - into the trash. "Discard draft" refers to one - as of yet unsent - message.

This is a concern because, while the mouseover disambiguates the symbol, users learn an automatic mapping (e.g. assuming that the trashcan discards a draft), so the mouseover never comes up in efficient mass-email handling. This leaves open the possibility that users accidentally delete entire email threads when they simply want to discard a draft. I know it has happened to me, repeatedly.
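
The underlying design principle is easy to state, and even to check mechanically. Here is a minimal sketch (the icon-to-action table is hypothetical, not Gmail's actual code): every symbol should map to exactly one action, so that users can safely automatize it.

```python
# Minimal consistent-mapping check over a hypothetical icon-to-action table.
from collections import defaultdict

ui_actions = [
    ("trashcan", "delete_thread"),    # destructive: removes the whole conversation
    ("trashcan", "discard_draft"),    # benign: removes one unsent draft
    ("archive_box", "archive_thread"),
]

actions_per_symbol = defaultdict(set)
for symbol, action in ui_actions:
    actions_per_symbol[symbol].add(action)

for symbol, actions in actions_per_symbol.items():
    if len(actions) > 1:
        # Flags the trashcan: one symbol, two very different consequences.
        print(f"Inconsistent mapping: '{symbol}' triggers {sorted(actions)}")
```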

This is a particular concern for Gmail because its whole concept rests upon the notion that one should never have to delete an email. However, because the system allows for accidental deletion, the integrity of the entire corpus is at risk.

Google is usually good about user interface design, but not in this case. Sadly, the ambiguity affects a critical function that can't be undone after 30 days (and probably won't be undone in time if the deletion happened accidentally).

The good news is that this ought to be an easy fix. And then it should only be remembered as a cautionary tale.

Posted in Misc, Optimization, Pet peeve | Leave a comment