Autism and the microbiome

The incidence of autism has been on the rise for 40 years. We don’t know why, but the terrible burden of suffering has spurred people to urgently look for a cause. As all kinds of secular trends unfolded over the same time period, correlating this rise in autism with corresponding changes in environmental parameters has led to the “discovery” of all kinds of spurious or incidental relationships.

When attempting to establish causal relationships, experimental work is indispensable – but such experiments would be unethical in humans.

Now it has been shown that feeding maternal mice a high-fat diet leads to social behavioral deficits reminiscent of autism in their offspring. These deficits were associated with a disrupted microbiome, specifically low levels of L. reuteri. Restoring levels of L. reuteri rescued the social behaviors, an effect linked to increased production of oxytocin.

I’m aware of the inherent limitations of mouse work (does anything ever transfer?), but if this does (and I think it will – given recent advances in our understanding of the gut microbiome in relation to mental health), it will be transformational, not just for autism.

Here is a link to the paper: Microbial reconstitution reverses maternal diet-induced social and synaptic deficits in offspring.

Posted in Neuroscience, Nutrition, Psychology, Science | 1 Comment

A primer on the science of sleep

I’ve written before about sleep, the need to sleep, and how sleep is measured, but in order to foster our #citizenscience efforts at NYU, I want to bring accessible and actionable pieces on the science of sleep together in one place, here.

1. How the brain regulates sleep/wake cycles

2. Regulating sleep: What can you do?

What you can do

3. Sleep: Why does it matter?

Sleep matters

4. What you can do right now if your baby has sleep problems

5. Common sleep myths

6. Sleep is an active process

Sleep is an active process

7. What are sleep stages?

Sleep stages

Click on the links if you want to read more.

If you’re curious what our marriage between #citizenscience, #datascience and #neuroscience is about, read this.


Posted in Life, Neuroscience, Psychology, Science | Leave a comment

Beyond free will

Some say that every time philosophy and neuroscience cross, philosophy wins. The usual reason cited for this? Naive and unsophisticated use of concepts, and of the language to express them, within neuroscience. The prime exhibit is the mereological fallacy – the confusion of the part with the whole (by definition, people see, not the eye or the brain). And yes, all too many scientists are entirely uneducated in these matters, but “winning” might be a function of letting philosophy pick the battleground – language – which philosophy has privileged for over 2,500 years (if for no other reason than its initial lack of empirical methods).

There is no question that all fields are in need of greater conceptual clarity, but what can one expect from getting into fights with people who write just the introduction and discussion sections, call it a paper, and – unburdened by the need to run studies or raise money to do so – have an abundance of time on their hands?

Yet reality might be unmoved by such social games. It needs to be interrogated until it confesses – empirically. There are no shortcuts, particularly if the subject is as thorny as free will or consciousness. See here for the video.

Posted in Neuroscience, Pet peeve, Philosophy | 1 Comment

Explaining color constancy

The brain uses the spectral information of light (the wavelength mix) to aid in the identification of objects. This works because any given object absorbs some wavelengths of the light source (the illuminant) and reflects others. For instance, plants look green because they absorb short and long wavelengths but reflect wavelengths in the middle of the visible spectrum. In that sense, plants – which perform photosynthesis to meet their energy needs – are ineffective solar panels: they don’t absorb all wavelengths of the visible spectrum. If they did, they would be black.

So this information is valuable, as it allows the brain to infer object identity and helps with image parsing: different but adjacent objects usually have different reflectance profiles. But the task is complicated by the fact that the mix of wavelengths reflected off an object depends on the wavelength mix emanating from the light source in the first place. In other words, the brain needs to take the illumination into account when determining object color. Otherwise, object identity would not be constant – the same object would look different depending on the illumination source. Illumination sources can contain dramatically different wavelength mixes, e.g. incandescent light with most of its energy in the long wavelengths vs. cool-white LEDs with a peak in the short wavelengths.

Nor is this a recent problem created by the invention of artificial lighting. Throughout the day, the spectral content of daylight changes – the spectral content of sunlight at midday differs from that in the late afternoon. If color-perceiving organisms didn’t take this into account, the same object would look a radically different color at different times of day. So such organisms need to discount the illuminant, as illustrated here:

Achieving color constancy by discounting the illuminant

The details of how this process happens physiologically are still being worked out, but we do know that it happens. Of course, other factors also go into the color correction of the image performed by the organism. For instance, if you know the “true color” of an object, this will largely override other considerations. Try illuminating strawberries with a green laser pointer. The light bouncing off the strawberries will contain little to no long-wavelength energy, but the strawberries will still look red to you, because you know that strawberries are red. Regardless of these considerations, we do know that color constancy matters quite a bit – even in terms of an assumed illumination, as in the case of #thedress, where the illumination source is ill-defined:

Discounting an assumed illuminant explains what is going on with the dress.

Of course, things might not be as straightforward as that. It won’t always be perfectly clear what the illuminant is. In that case, the brain will make assumptions to disambiguate. A fair assumption is that the illuminant resembles the kind one has encountered most often. For most of human history – and perhaps even today – that means sunlight. In other words, humans could be expected to assume illumination along the daylight axis, i.e. short-wavelength (bluish) illumination, which could account for the fact that most people reported seeing the dress as white and gold.
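Discounting the illuminant can also be sketched computationally. The following is a minimal illustration – not a physiological model – of von Kries-style chromatic adaptation: each color channel is divided by an estimate of the illuminant, and if no estimate is supplied, the code falls back on the gray-world assumption (that the average scene reflectance is neutral). The function name and the toy pixel values are my own, for illustration only:

```python
def discount_illuminant(image, illuminant=None):
    """Von Kries-style chromatic adaptation: divide each RGB channel by an
    estimate of the illuminant. With no estimate given, fall back on the
    gray-world assumption (average scene reflectance is neutral gray)."""
    n = len(image)
    if illuminant is None:
        # gray-world estimate: per-channel mean across all pixels
        illuminant = [sum(px[c] for px in image) / n for c in range(3)]
    return [tuple(px[c] / illuminant[c] for c in range(3)) for px in image]

# Toy scene under a reddish illuminant: a neutral gray surface reflects
# disproportionately much long-wavelength ("red") light
scene = [(0.9, 0.5, 0.4), (0.45, 0.25, 0.2)]  # same surface, two intensities
out = discount_illuminant(scene, illuminant=(0.9, 0.5, 0.4))
print(out)  # both pixels now have equal R, G, B -> perceived as gray
```

Dividing out the reddish illuminant turns the reddish-looking pixels back into neutral grays – which is essentially what the visual system appears to accomplish, albeit by very different means.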

Posted in Neuroscience, Psychology, Science | Leave a comment

The neuroscience of violent rage

Violent rage results from the activation of dedicated neural circuitry that is on the lookout for existential threats to prehistoric lifestyles. Life in civilization is largely devoid of these threats, but this system is still in place, triggering what largely amounts to false alarms with alarming frequency.

We are all at the mercy of our evolutionary heritage. This is perhaps nowhere more evident than in the case of violent rage. Regardless of situation, in the modern world, responding with violent rage is almost never appropriate and almost always inadvisable. At best, it will get you jailed; at worst, it might well get you killed.

So, given these stakes, why does it happen so often, and in response to seemingly trivial situations? Everyone is familiar with the frustrations of daily life that lead to these intense emotions, such as road rage or being stuck in a long checkout line.
The reason is explored in “Why we snap” by Douglas Fields. Briefly, ancient circuitry in the hypothalamus is always on guard for trigger situations that correspond to prehistoric threats. Unless quenched by the prefrontal cortex, this dedicated sentinel system kicks in and produces a violent rage response to meet the threat. The book identifies 9 specific triggers that activate this system, although they arguably boil down to 3 existential threats in prehistoric life:

1) Existential threats to the (increasingly extended) physical self (attacks on oneself, mates, family, tribe)

2) Existential threats to the (increasingly extended) social self (insults against oneself, mates, family, tribe) and

3) Existential threats to the integrity of the territory that sustains these selves (being encroached on or being restrained from exploring one’s territory by others or having one’s resources taken from one’s territory).

Plausibly, these are not even independent – for instance, someone could interpret the territorial encroachment of a perceived inferior as an insult. Similarly, the withholding of resources – e.g. having a paper rejected despite an illustrious publication history – could be taken as a personal insult.

The figure depicts the rage circuitry in the hypothalamus – several nuclei close to the center of the brain (stylized)

Understood in this framework, deploying the “nuclear option” of violent rage so readily starts to make sense – the system thinks it is locked in a life and death struggle and goes all out, as those who could not best these situations perished from the earth long ago, along with their seed.

Of course, in the modern environment, almost none of these trigger situations still represent existential threats, even if they feel like it, such as being stuck at the post office.

In turn, maybe we need to start respecting the ancient circuitry that resides in all of us in order to make sense of seemingly irrational behavior – perhaps even incorporate our emerging understanding of these brain networks into public policy. In the case of violent rage, the stakes are high: preventing the disastrous outcomes of behaviors that keep filling our prisons and that kill or maim their victims.

Read more: Unleashing the beast within

Posted in Neuroscience, Psychology, Science | 1 Comment

Brighter than the sun: Introducing Powerscape

Statistical power needs are often counterintuitive and underestimated. This has deleterious consequences for a number of scientific fields. Most science practitioners cannot reasonably be expected to make power calculations themselves. So we did it for them and visualized this as a “Powerscape”, which makes power needs immediately obvious.
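The kind of calculation underlying such a landscape can be approximated in a few lines. This is a minimal sketch – not the actual Powerscape code – using the normal approximation to the power of a two-sided, two-sample t-test; the function name and the grid values are my own choices for illustration:

```python
from scipy.stats import norm

def power_two_sample(d, n, alpha=0.05):
    """Approximate power of a two-sided, two-sample t-test for effect size d
    (Cohen's d) and n subjects per group, via the normal approximation."""
    z_crit = norm.ppf(1 - alpha / 2)   # critical value for a two-sided test
    ncp = d * (n / 2) ** 0.5           # noncentrality parameter under H1
    # power = probability of exceeding the critical value in either direction
    return norm.sf(z_crit - ncp) + norm.cdf(-z_crit - ncp)

# A miniature "powerscape": power over a grid of effect and sample sizes
for d in (0.2, 0.5, 0.8):
    row = [round(power_two_sample(d, n), 2) for n in (20, 50, 100, 200)]
    print(f"d = {d}: {row}")
```

Even this toy grid makes the counterintuitive part visible: with a small effect (d = 0.2), 200 subjects per group still leave power well below the conventional 80% threshold.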



Read more here:

Posted in Psychology, Science | Leave a comment

Tracking the diversity of popular music since 1940

This is a rather straightforward post. Our lab is doing research on music taste, and one of our projects involves sampling songs from the Billboard Hot 100, which tracks the singles that made it to #1 in the US charts (and how long they stayed on top), going back to 1940.

Working on this together with my students Stephen Spivack and Sara Philibotte, we couldn’t fail to notice a distinct pattern in the diversity of music titles over time. See for yourself:

Musical diversity over time. Note that the data was smoothed by a 3-year moving average. A value of just above 10 in the early 40s means that an average song was on top of the charts for over 4 weeks in a row. The peak levels in the mid-70s mean that the average song was only on top of the charts for little more than a week during that period.

Basically, diversity ramped up soon after the introduction of the Billboard charts and then had distinct peaks in the mid-1960s, mid-1970s and late 1980s. The late-1990s peak is already much diminished, ushering in the current era of the unquestioned dominance of Taylor Swift, Katy Perry, Rihanna and the like. Perhaps this flowering of peak diversity in the 1970s, 1980s and 1990s accounts for the distinct sound that we associate with these decades?

Now, it would of course be interesting to see what drives this development. Perhaps generational or cohort effects involving the proportion of youths in the population at a given time?

Note 1: “Diversity” here means, literally, the number of different songs over a given time period – the temporal difference density. It is quite possible that these “different” songs are actually quite similar, but there is no clear metric by which to compare songs, or a canonical space in which to compare them. Pandora has data on this, but it is proprietary. So if you prefer “annual turnover rate”, that would probably be more precise.

Note 2: My working hypothesis as to what drives these dynamics (which are notably phase-locked to the decade) is some kind of overexposure effect. A new style comes about and coalesces at some point. Then a dominant player emerges, until people get bored of the entire style, which starts the cycle afresh. A paradigmatic case of how decade-specific the popularity of music can be is Phil Collins.

Note 3: Similarity of music might be hard to quantify objectively. Lots of things sound similar to each other, e.g. the background beat/subsound in “Blank space” and the background beat/subsound in the Limitless soundtrack.
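The “annual turnover rate” from Note 1 is simple to compute once the chart data is in hand. A minimal sketch, assuming the data comes as one (year, song) pair per chart week – the function names and toy data are my own:

```python
def annual_turnover(weekly_number_ones):
    """Count distinct #1 songs per year from (year, song) pairs,
    one entry per chart week."""
    per_year = {}
    for year, song in weekly_number_ones:
        per_year.setdefault(year, set()).add(song)
    return {year: len(songs) for year, songs in sorted(per_year.items())}

def moving_average(series, window=3):
    """Centered moving average over a list of yearly values."""
    half = window // 2
    return [sum(series[i - half:i + half + 1]) / window
            for i in range(half, len(series) - half)]

# Toy data: in 1942 one song dominates; in 1975 the top spot turns over weekly
charts = [(1942, "White Christmas")] * 10 + \
         [(1975, f"song_{w}") for w in range(10)]
print(annual_turnover(charts))  # {1942: 1, 1975: 10}
```

With real Billboard data, feeding the yearly counts through the 3-year moving average reproduces the kind of smoothed curve shown in the figure.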

Posted in Science, Social commentary | 4 Comments

Mary revisited: The Brian problem

Generations of philosophers have been fascinated by what has been termed the “Mary problem“. In essence, Mary is the world’s foremost expert on color vision and knows everything there is to know about it. The catch is that she is (depending on the version) either color-blind or has never experienced color before. The question is (depending on the version) whether she is lacking something, or whether she would gain new experiences/knowledge if she were to experience color. Supposedly, this is to show us interesting things about physicalism and qualia.

In reality, it only shows us how philosophy is fundamentally limited. The philosophical apperception of the world (as well as interactions among philosophers) is entirely language-based. Needless to say, this is an extremely impoverished way to conceive of reality. Language has all kinds of strange properties, including being inherently categorical. Its neural basis is poorly understood but known to be rather extraneous – most animals don’t have language, and even in the higher primates, most brain regions are not involved in language processing.

Yet, all of Mary’s knowledge is language-based. So yes, if she were to see a color, the part of her visual cortex that processes spectral information (assuming that part of her cortex was intact) would be activated and she would experience the corresponding color, for reasons that are still not well known.

Brian’s brain

But the heart of the matter – the inherent shortcomings of an understanding that is entirely language-based (invisible to philosophers, as this is something shared by all of philosophy) – can be illustrated by a new problem. So let’s add some color to the Mary problem. Here is the Mary problem reimagined; I call this the “Brian problem”:

Brian is the world’s foremost expert on all things sex. He has read every single paper ever published in sexology, and he also has a keen grasp of the biological and physiological literature. Yet he has never had sex. Now Brian has a chance to have sex – would this give him an opportunity to learn anything that he doesn’t already know, or to have an experience that he didn’t already have? If so, why?

Put in this way, a sample of 100 undergrads were quite clear in their interpretation:

The Brian problem

This nicely illustrates the fundamental problem with using language to assess the state of reality: language has face validity to other people because it is our mode of thinking, and it probably evolved for reasons of social coordination. So arguments can seem compelling (particularly to unrepresentative subcommunities with a shared culture) that do not correspond to an apt description of reality in any other way.

If this is not clear yet: imagine a rat that has never experienced anger but has read every paper on anger. Then you stimulate the ventromedial hypothalamus of this rat with optogenetic methods. Does the rat experience something new? If so, why?

There are endless variations of this. Say, for instance, Stephen has read every paper on LSD. He has an intricate understanding of how the drug works. He knows everything there is to know about serotonin receptors and their interactions with ligands. He even understands mTOR and GCaMP, and knows everything there is to know about phosphorylation as well as methylation (although that might not be directly relevant). Yet he has never taken LSD. Now Stephen is about to take some LSD. Will he experience something that he has not experienced before? Will he learn something? Is this really that hard to understand if one has a lot of prior exposure to philosophical thought?

Posted in Pet peeve, Philosophy | 4 Comments

Pascal’s Pensees, 5 years on

Before October 25th, 2010, I had no social media presence whatsoever – I wasn’t on Twitter, didn’t have a blog, and G+ wasn’t around yet. Frankly, it hadn’t occurred to me before, but it was one of the requirements for serving as a “Social Media Representative” (“Neuroblogger”, in short) for the Society for Neuroscience, covering the 2010 Annual Meeting, which I did.

Outreach efforts were also in an embryonic stage at the Society itself at that point, and by picking a few people to cover a giant meeting, one can either give new people a chance to establish a social media presence, boost existing bloggers or do a mix of both. I’m still grateful to the Society for picking me, but predictably some established bloggers were somewhat upset.

In hindsight, I think the choice by the Society to broaden the tent was justifiable as it pushed me and some others to start outreach efforts of any kind. Efforts which I aim to continue into the far future. At the same time, most of the critical blogs are now defunct.

That said, I still don’t think the “neuroblogger” concept itself is particularly well thought out. People attending the meeting will have better things to do than read a blog. People who didn’t go will have their reasons. The neurobloggers themselves will (in my experience) have some tough choices to make between attending the meeting and writing dispatches from it.

What might work better is a single “neuroblogger” platform (hosted on the SfN website) where people write contributions from the meeting, squarely aimed at the general public (to satisfy the outreach mission) – basically expert commentary on things that are just over the horizon, but without the breathless and unsubstantiated hype one usually gets from journalists. Perhaps have a trusted, joint Twitter account, too.

Anyway. Thanks for giving me my start and I hope to be able to improve on the mission as time goes on. Semper melior.

Posted in Conference, In eigener Sache, Neuroscience | 3 Comments

Did a 6th century Hebrew fortuneteller accidentally do the first documented experiment?

Who did the first experiment? 13th century scholastics like Roger Bacon are usually credited with the invention of the modern scientific method – in particular with regard to doing experiments. Bacon expanded on the work of Robert Grosseteste, who revived the method of induction and deduction from the ancients and applied it to the natural world (going far beyond Aristotle’s intuitive natural philosophy). Others give credit to even later individuals, such as another Bacon – Francis Bacon – who made the experiment the centerpiece of empirical investigations of the natural world.

However, it seems that the first experiment – admittedly with an ad hoc methodology and no clearly spelled-out rationale linking outcomes and interpretation, let alone statistical considerations or power calculations – predated these developments by well over 600 years.

The backdrop to this experiment is straightforward – the Western Roman Empire fell in 476 to an onslaught of Goths, whereas the Eastern Roman Empire soldiered on. A bit more than 50 years later, the Eastern Roman Emperor (Justinian) tasked his General Belisarius to retake the lands of the Western Empire from the Goths, first and foremost the Italian peninsula.

The first experiment?

Experimental design and outcome of what is potentially the first documented experiment.

The history of this conflict is relatively well documented, mostly by contemporary chroniclers like Procopius (in the pay of Justinian, so perhaps not entirely impartial). The story in question might be apocryphal and was already flagged as such by Procopius, but this does not really matter in this case – given that it is mostly the idea (of doing an experiment) itself that is of interest here, not its particular outcome. The story of the experiment is invoked to partially explain why Theodatus – the Gothic king in Rome, characterized by Procopius as “unmanly”, without experience in war and devoted to the pursuit of money – did not come to the aid of the denizens of Naples when they were besieged by Belisarius.

To decide whether he should fight Belisarius and try to lift the siege of Naples, Theodatus asked a Hebrew fortuneteller about the likely outcome of the conflict. The fortuneteller told him to place 3 groups of 10 pigs each (presumably randomly selected) in separate huts, label them as Goths, Italians and East Roman soldiers, and then wait a specified (but unstated) number of days. After doing just that, Theodatus and the fortuneteller entered the huts and noted that only 2 of the pigs labeled as Goths had survived, whereas most of the pigs labeled as East Roman soldiers lived. Of the pigs representing native Italians, half survived, but they had lost all of their hair. Theodatus interpreted the results to mean that the Italians would lose half of their population and all of their possessions, and that the Goths would be defeated, at little cost to the victorious Eastern Romans. Hence his reluctance to confront Belisarius at Naples.

In any event, Theodatus did not intervene, and Belisarius ended up capturing both Naples and Rome from the Goths before being recalled by the Emperor. Losing both of these cities without much of a fight drove the Goths to open rebellion, and they replaced Theodatus as their king. Their chosen successor, Vittigis, ordered the fleeing Theodatus to be apprehended, and Theodatus perished at the hands of his pursuers.


Here is the original text (from the beginning of book 9, translated):

So the besieged, without the knowledge of the enemy, sent to Theodatus in Rome begging him to come to their help with all speed. But Theodatus was not making the least preparation for war, being by nature unmanly, as has been said before. And they say that something else happened to him, which terrified him exceedingly and reduced him to still greater anxiety. I, for my part, do not credit this report, but even so it shall be told. Theodatus even before this time had been prone to make enquiries of those who professed to foretell the future, and on the present occasion he was at a loss what to do in the situation which confronted him—a state which more than anything else is accustomed to drive men to seek prophecies; so he enquired of one of the Hebrews, who had a great reputation for prophecy, what sort of an outcome the present war would have. The Hebrew commanded him to confine three groups of ten swine each in three huts, and after giving them respectively the names of Goths, Romans, and the soldiers of the emperor, to wait quietly for a certain number of days. And Theodatus did as he was told. And when the appointed day had come, they both went into the huts and looked at the swine; and they found that of those which had been given the name of Goths all save two were dead, whereas all except a few were living of those which had received the name of the emperor’s soldiers; and as for those which had been called Romans, it so happened that, although the hair of all of them had fallen out, yet about half of them survived. When Theodatus beheld this and divined the outcome of the war, a great fear, they say, came upon him, since he knew well that it would certainly be the fate of the Romans to die to half their number and be deprived of their possessions, but that the Goths would be defeated and their race reduced to a few, and that to the emperor would come, with the loss of but a few of his soldiers, the victory in the war.
And for this reason, they say, Theodatus felt no impulse to enter into a struggle with Belisarius. As for this story, then, let each one express his views according to the belief or disbelief which he feels regarding it.

Posted in History, Philosophy, Science | 1 Comment

Why “dressgate”* matters

At this point, we have probably all reached “peak dress”, been oversaturated by all matters dress and are ready to move on. But there is more.

There is no question that “the dress” is the most viral image relevant to science in the history of the internet (well over 9 million tweets in 2 days). Some people noted that this is quite a self-indulgent debate in light of all the serious problems facing the world today. Don’t we all have more important things to worry about? This sentiment is particularly likely if someone doesn’t immediately “get” what the big deal is (people are more likely to “get” it if the interpretation flipped on them as they were looking at the dress, or if someone close to them vigorously insisted that they sincerely see it differently).

While it is true that the world faces many pressing problems, discussing the dress is *not* frivolous. Here is why:

1. It’s not about the dress – it’s about visual perception and human cognition.

2. Perception is inherently a guess and even the best guesses can be wrong. It should not be surprising that different people guess differently – or even the same person differently at different times – if the information one starts with is ambiguous.

Two different colors, top vs. bottom, right?

3. Basically, the brain is playing a game of telephone. Only the eyes have direct access to the physical light energy in the environment. The human visual system consists of at least 30 distinct areas, each getting its information from earlier ones. All later areas have to rely on what the earlier areas (and ultimately the eyes) tell them is going on. This transmission is fundamentally unreliable: information is lost at every step.

4. All of this has been documented many times – and known for over 150 years – in many domains of vision, but *not* in color vision. To my knowledge, this is the first strongly “bistable” stimulus in the color domain – one where interpretations can be radically different.

How about now? Same colors as above, but with different context

5. It has been known for a long time that color vision is strongly susceptible to illumination and context. This is not a bug but a feature – one that allows observers to identify objects despite varying illumination conditions. It suggests that people differ in their interpretation because they differ in their assumption about what the light source might be.

The little links between the two regions might convince you that they are actually the same, but look different. Context does matter.

6. What is particularly intriguing is that the interpretation readily shifts for most bistable stimuli like Rubin’s vase, rotating spheres, etc. – but not for the dress. Some people never shift; others do, but on a time scale of hours, and once they flip, they can’t flip back. This suggests that there is some kind of rapid perceptual learning going on, much like in “Dalmatian”-like situations.

Can you see the dalmatian? If not, see below

7. The reasons for this – why different people make different assumptions about the light source, and why some people exhibit rapid perceptual learning – are fundamentally unknown. There are no shortcuts here. More research on this is needed and will be done.

8. Which brings us to the last point on this list – to my knowledge, this is the first time a powerful stimulus display has been brought to the attention of science by social media. This also explains why this kind of thing hasn’t happened before. Surely, many people have taken overexposed pictures of fabric (the fabric might matter, too) in poor lighting conditions before. But without social media to amplify the disagreement (social media seems to be best at that), it would have ended there. So what if you disagree with your friend? We all know your friend is a bit crazy, and that’s that. Easy to dismiss. But social media changes all that. So we can look forward to more of this.

Can you see the dalmatian now? If you do: Congrats. This cannot be unseen. If confronted with a display like the one above, you will always see the dalmatian.

To summarize – and repeat – it’s not about the dress. It’s about visual perception and human cognition. If this is the first time you encountered that your perception – that you rely on to get safely through the day – is inherently and fundamentally unreliable, you might be skeptical, defensive or shocked. But that doesn’t change the facts.

So yes, ISIS is an obvious concern. But that doesn’t mean “the dress” is trivial. It is not. As a matter of fact, I would argue that this could not be more topical. If we can’t agree about the color of a dress, how can we hope for world peace? How can we foster tolerance if we don’t allow for and don’t understand that other people can sincerely see the world differently from us?

To be perfectly clear: The question is not what the color of the dress actually is. The question is why people disagree so strongly.

What follows from this is that finding the original dress will not settle the issue. Neither will any clever analysis of light distributions in the image. There are no shortcuts here. In order to find out why the interpretation has flipped on you, or why your interpretation disagrees with that of someone you care about, more research is needed. At this point, the answer is: “We don’t know, but we would like to find out”. Dismissing things is easy. Research is hard.

Note: We understand a lot about how vision works, and we know that it doesn’t render a veridical percept of objective reality – vision mostly just has to be useful for survival purposes. But none of that accounts for the fact that different people see the same image of the dress (on the same screen) differently. It is important to acknowledge our ignorance in this regard and to plan to do some research to overcome it.

Human perception is more variable than most people realize, both within the same person over time and between people. This goes beyond perception, by the way. Lots of people say things like: “You said x” or “this is offensive” – a more accurate statement would be “I understood x”, or “this is offensive to me”. Big difference. So it is always advisable to keep an open mind.


*Not my creation. This was one of the trending Twitter hashtags.

Images shared with kind permission from Steve Shevell.

Posted in Neuroscience, Psychology, Science, Social commentary | 9 Comments

Lessons from the dress: The fundamental ambiguity of visual perception

The brain lives in a bony shell. The light-tight nature of the skull renders this home a place of complete darkness. So the brain relies on the eyes to supply an image of the outside world, but there are many processing steps between the translation of light energy into electrical impulses that happens in the eye and the neural activity that corresponds to a conscious percept of the outside world. In other words, the brain is playing a game of telephone and – contrary to popular belief – our perception corresponds to the brain’s best guess of what is going on in the outside world, not necessarily to the way things actually are. This has been recognized for at least 150 years, since the time of Hermann von Helmholtz. As there are many parts of the brain that contribute to any given perception, it should not be surprising that different people can reconstruct the outside world in different ways. This is true for many perceptual qualities, including form

Rubin’s vase: A classical example of figure/ground segmentation. The image is fundamentally ambiguous. People perceive a vase or faces, but not both at the same time.

and motion. While this guessing game is going on all the time, it is possible to generate impoverished stimulus displays that are consistent with several mutually exclusive interpretations, in which case the brain will not commit to one interpretation, but switch back and forth. These are known as ambiguous or bistable stimuli, and they illustrate the point that the brain is ultimately only guessing when perceiving the world. It usually just has more information to go by to disambiguate the interpretation.

A bistable motion stimulus. Do you see the dots moving from left to right or up and down?

This is also true for color vision. The fundamental challenge in the perception of color is to identify an object despite changing illumination conditions. The mixture of wavelengths that reaches our eye will be interpreted by the brain as color, but which part is due to the reflectance of the object and which part is due to the illumination?

This is an inherently ambiguous situation, so the brain has to make a decision whether to take the appearance of an object at face value or whether it should try to discount the part of the information that stems from the illumination. As the organism is not primarily interested in the correct representation of hues, but rather in the identification of objects under dramatically varying conditions (e.g. a predominance of long wavelengths in the early morning and late afternoon vs. more short wavelengths at noon), it is commonly accepted that the brain strives for “color constancy” and is doing a pretty good job at that. But in this tradeoff towards discounting, something has to give, and that is that we are bad at estimating the apparent absolute hue of objects. For instance, a white surface illuminated by red light will objectively look reddish. The same white surface illuminated by blue light will objectively look blueish. In order to recognize both as the same white surface, the subjective percept needs to discount the color of the light source.
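The logic of “discounting the illuminant” can be sketched computationally. The classic gray-world assumption – a deliberately crude stand-in for whatever the brain actually does, used here purely for illustration – estimates the illuminant as the average color of the scene and divides it out:

```python
def gray_world_balance(pixels):
    """Crude illuminant discounting via the gray-world assumption:
    treat the scene-average color as the illuminant and rescale each
    channel so that the average comes out neutral gray.
    `pixels` is a list of (r, g, b) tuples with values in 0..255."""
    n = len(pixels)
    avg = [sum(p[c] for p in pixels) / n for c in range(3)]  # estimated illuminant
    gray = sum(avg) / 3  # target neutral level
    return [
        tuple(min(255, round(p[c] * gray / avg[c])) for c in range(3))
        for p in pixels
    ]

# A white surface captured as, say, (230, 180, 170) under reddish light is
# pushed back toward neutral once the scene-wide red cast is divided out.
```

The dress image defeats any such single-estimate scheme precisely because it supports two different illuminant interpretations at once.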

So it should not be surprising that inference of hue can be dramatically influenced by context. The same shade of grey can look almost black on a bright background but almost white on a dark one.

Lightness illusions are common. The shades of grey above and below the horizon are the same. You can see this by covering the strip in the middle.

Note that this is not a bug. It is a necessary tradeoff in the quest to achieve a stable appearance of the same object, regardless of context or illumination.

So far, so good. Now where does the dress come in? The latest sensation to sweep social media has sharply divided observers. Some see the dress as gold on white, others as black on blue.

White/gold vs. Black/blue: Some people perceive the image on the left, others the one on the right. Others switch back and forth.

As noted before, this kind of divergence of interpretation might be rather common with complex stimuli. The importance of the “dress” stimulus is the extent to which intersubjective interpretation differs in the color domain. To my knowledge, this is by far the most extreme such stimulus. Of course one has to allow for the fact that not everyone’s monitor will be calibrated in the same way and viewing angles might differ, but this doesn’t account for the different subjective experience of people viewing the exact same image on the same monitor from the same position. And of course the reason why the “true colors” of the dress are in dispute in the first place is the color constancy phenomenon we alluded to above. This was likely a black/blue dress that was photographed with poor white balance, giving it an ambiguous appearance. But that doesn’t change the fact that some people sincerely perceive it as white/gold.

That the interpretation of the color values itself depends on context can readily be seen if the context is taken away. In the image below, some stripes were extracted from the original image without altering it in any other way. The “white/blue” stripe can now be identified as light blue and the “gold/black” stripe as brown.

Stripes, out of context: One now unambiguously looks like blue, the other like brown.
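This check is easy to reproduce. A minimal sketch (pure Python, with the image represented as rows of (r, g, b) tuples; a real reproduction would read the pixel values from the image file) just averages the raw values inside a cropped patch – with no color-constancy machinery applied, the numbers speak for themselves:

```python
def mean_rgb(image, box):
    """Average the raw RGB values inside a crop of `image`.
    `image` is a list of rows, each a list of (r, g, b) tuples;
    `box` = (left, top, right, bottom) in pixel coordinates."""
    left, top, right, bottom = box
    pixels = [image[y][x] for y in range(top, bottom)
                          for x in range(left, right)]
    n = len(pixels)
    return tuple(round(sum(p[c] for p in pixels) / n) for c in range(3))

# Averaged in isolation, a patch from the "white/blue" stripe comes out as
# an unambiguous light blue (green and blue channels dominating red).
```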

But why the difference in interpretation? That is where things get interesting. If the ambiguity derives from color constancy (and it looks like it does), the most plausible explanation is that people differ in their interpretation of what the illumination source is. Those who interpret the dress as illuminated by a blue light will discount for this and see it as white/gold, whereas those who interpret the illumination as reddish will tend to see it as black/blue. Interestingly, the image itself allows for both interpretations: the illumination looks blueish at the top of the image, but yellowish/reddish at the bottom. On a more fundamental level, a blue/black dress illuminated by a white light source might be indistinguishable from a white/gold one with a blueish shadow falling onto it.

But if this is the case, one should be able to consciously override this interpretation once it is pointed out. For most people, this does not seem to be possible – in contrast to most other ambiguous “duckrabbit” displays, where people are able to willfully control what they see.

Inherently ambiguous stimuli. Interpretations switch, they don’t blend. But people can consciously override interpretations once they are pointed out.

This raises several intriguing possibilities. For instance, it has been recognized for quite a while that the human “retinal mosaic” – the distribution of short-, medium- and long-wavelength cones – is radically different between observers, but this seems to have only a minute impact on the actual perception of color. Perhaps differences in the retinal mosaic can account for differences in the perception of this kind of “dress” stimulus. Moreover, there is another type of context to consider, and that is temporal context. We don’t perceive visual stimuli naively; we perceive them in the context of what we have encountered before – not all stimuli are equally likely. This is known as a “prior”. It is quite conceivable that some people (e.g. larks vs. owls) have a different prior as to what kind of illumination conditions they encounter more frequently. Or perhaps it is a complex interaction between the two.

While we must confess that we currently do not know why some people consistently see the dress one way, others consistently the other way, and some switch, it is remarkable that the switching happens on very long timescales. Usually, switching is fast, as in the Rubin’s vase stimulus above. This could be particular to color vision. There are no shortcuts here: only research will reveal the underlying reason for this striking difference in subjective perception.

Meanwhile, one lesson that we can take from all of this is that it is wise to assume a position of epistemic humility. Just because we see something in a certain way doesn’t mean that everyone else will see it in the same way. Moreover, it doesn’t mean that our percept necessarily corresponds to anything in the real world. A situation like this calls for the hedging of one’s bets, and that means keeping an open mind. Something to remember next time you disagree with someone.

Who is right? Does the question even make sense? Or is it important to see the bigger picture?


Posted in Neuroscience, Psychology, Science | 4 Comments

Why I am not continuing this online argument

I started our exchange in a spirit of friendship and respectful exploration. I hoped that we could maybe learn something. The reason I’m now discontinuing it is that I realize that this specific communication can no longer be meaningfully characterized in that way. In other words, nothing good can come of it, so it is best not to keep triggering adversarial circuitry. It does not mean that I concede the argument; it does not mean that you “won”. On the contrary. There is just no point continuing – victory conditions in an adversarial online exchange are ill-defined. Such exchanges only have a point if they are conducted in a spirit of mutual goodwill. I maintain this goodwill, but feel strongly that this is not reciprocated and that common ground has – for the time being – been lost. Perhaps we can regain it later. But that’s it for now, as we are at serious risk of simply making it worse. Not worth it.

This has never happened

This has never happened – it is a fantasy. Although an alluring one, fueling millions upon millions of pointless internet arguments that are going on right now, as you read this. Let’s not add to that sad tally – it adds nothing but vitriol, ill will and hatred. Illustration by: Michael “Flux” Caruso @fluxinflux

It is understood that you are wrapped up deeply in your subjective reality.  Trying to alter that will not be easy, so it might simply not be worth anyone’s time, let alone mine.

In the 21st century, one is dealing with a diverse range of people who are likely to construct reality quite differently. In other words, it is crucial not to confuse temporal and faux-spatial proximity online with importance (these heuristics do serve one well in offline interactions – people who are close matter – but the online space literally transcends space). Put differently, it is not only important to be able to engage online; being able to disengage – the earlier the better – is arguably even more important:

Picard disengage

Never forget – victory conditions are ill-defined in such an exchange. It is tempting to think that your next “killer argument” will settle the issue once and for all. But that’s not how it works. Violence kills, arguments don’t.

Posted in In eigener Sache | Leave a comment

Shadowy but present danger: A primer on psychopathy

In the age of social media, it is hard to avoid exposure to popular culture. This is a problem because most of the bugbears that are popular in this culture – like zombies or vampires – do not actually exist. In contrast, those that do exist – notably psychopaths – get comparatively little screen time, and if they do, they are portrayed as a caricature of the actual condition. This increases the likelihood that you will not recognize a psychopath when he enters your life, increasing the odds that the experience will be truly life-ruining. At the same time, society appears naive and helpless in the face of psychopaths. While society continuously adds new layers of rules and regulations to all procedures, this does not come without a cost: it diminishes the efficiency of carrying out all activities. Also, it is questionable whether this approach works – in many cases a determined psychopath will find a way to game the system all the same. If they are eventually caught, many psychopaths end up in prison, but at that point, the damage is done, and prison as an institution is not specifically geared to deal with the condition.

At this point, the only hope comes in the form of the emerging neuroscience of psychopathy, which helps us detect the condition and understand its neural underpinnings. I review the evidence here.

Psychopaths – few moral considerations enter their decision making calculus. Due to the mask of sanity, you are unlikely to be able to recognize them until it is too late. Unless you know exactly what to look for.

Perhaps this will – in the long run – allow us to harden our society against psychopaths, dramatically lowering the burden of suffering inflicted by the condition on all of us.

Note: This is the primer. The full article is here.

Posted in Psychology, Science | 12 Comments

Positive thinking about positive thinking might just be wishful thinking

Bringing about positive changes in your life is hard. Everyone knows this. But everyone also desires them. So it is seductive to believe – particularly if you have no credible way to actually bring them about – that merely wishing for them will make it so. There are many people who will try to sell you on the idea that the world is your personal genie, only with an infinite number of wishes. Of course, if this were so, there would be no unfulfilled wants or needs. The pied pipers will tell you that you are simply not wishing hard enough. If you think this account is implausible, there is now a scientific account of what works and what doesn’t when it comes to making your wishes a reality.

What is more plausible – that the world is your personal wish genie, or that your blind faith in the power of positive thinking constitutes the very chains that ultimately hold you back?

There is a downside to wishful thinking about wishful thinking or positive thinking about positive thinking. It is the sacred mission of science to replace fairy tales and superstitions with a scrupulous account of what is and is not the case. Whether that account is popular or not.

TLDR: Wishful thinking is not enough. As usual, reality is deeply ironic.

Of course all of this only applies *if* you desire to achieve personal goals and contribute in ways that are valued by society. If that is not the case, none of this applies and you are of course still free to *enjoy*. But I would argue that most (but of course not all) people have a desire to achieve and contribute, of course manifesting in different ways.

Posted in Psychology, Science | Leave a comment

On the insinuation of bad intentions

Intentions matter. When assessing the merit or moral value of an action, we do not do so solely based on its outcome, but take intentions into account. For instance, we consider it worse if someone breaks one cup in an attempt to steal cookies than if someone accidentally breaks 10 cups trying to help with the dishes. Only very young children disagree (Piaget, 1932).

Here is the rub: We can only observe actions and their outcomes. Intentions have to be inferred. In everyday life, we have enough data on the people around us (their previous actions, their character, etc.) to make this inference somewhat reliably. As so much rides on a correct inference – the example above shows that the outcome is essentially meaningless in moral terms, unless intention is considered as well – getting it right matters. A lot. Yet, the dedicated social neuroscience circuitry that most of us possess to make these inferences responsibly does critically rely on a rich and subtle dataset of temporal and social context. If this context is missing, this circuitry will still operate, but in a vacuum, making inferences that are no longer reliable (but feel no less compelling to the mind that inhabits that brain).

This is where the modern (social) media landscape comes in. Most of the issues brought before us now are extremely complex and unless we are experts on everything, we have no way of assessing what is really going on, or what the intentions of the actors might be. More often than not, when we encounter someone’s actions online (often mediated by a third party, the “media”), this is also the first time we encounter that person as well. Put differently, we have no temporal context to judge them, nor much in the way of character to go by. But we will judge their actions implicitly or explicitly, and to do so with conviction, and we will also ascribe intentions. As they cannot be grounded in empirical fact – as we don’t know them – something else will be filling in for that. There are plenty of things that can be used in this fashion. Prior experience with other people comes to mind. If one has the experience of being taken advantage of by other people, one might be forgiven for adopting a generally misanthropic view and distrusting people in general – ascribing an intention to cheat to a freshly encountered person. This can be hard to override empirically. If I am convinced that someone is going to cheat me, even them being nice can be construed as a ruse to gain my trust or put me at ease, seemingly consistent with the ascribed intention.

As a matter of fact, it is hard to conceive of a pattern of behaviors that is inconsistent with this perceived intention, which – in this case – is made up without regard to the current situation. This is not an issue in the use case for which the system was designed (or evolved!) to work. Specific brains create a range of specific actions and by observing them over time and in various situations and environments, I can make inferences about the kinds of intentions this particular brain is prone to, which we call character or personality. In this case, everything works out. Brains generate behaviors to further certain intentions, both of which are consistent with a certain character. Once identified, this can be used to predict future behaviors. But in an online environment, the only thing one has to go by are actions. Also, they are often not observed, but reported, and reported by parties that are rarely disinterested. How is one to infer intentions or character? As pointed out above, there are plenty of sources that will allow conscious or unconscious “filling in” of missing information.

One such source is the report of the behavior itself, as it can emphasize or de-emphasize certain aspects of the action, making a particular intention more or less likely. Moreover, ideology is a key aspect of one’s intention-inference machinery, much of which is happening unconsciously and automatically. The problem is that there is very little in the way of “overriding” these conclusions. First, one might not want to override them because they reinforce one’s worldview. Second, there might be no perceived need to do so either, as all possible actions are consistent with one’s inferred character and intention. For instance, if one held the belief that all rich people are evil and only motivated by greed, there is quite literally nothing a billionaire could do to dispel this preconceived notion. If the billionaire gave the money to charity, one could argue that it is not enough or should have been done earlier. One could argue that the billionaire is only doing this to soothe a guilty conscience or to whitewash their legacy (this is not without precedent). One could even go so far as to say that the billionaire shouldn’t have had all that money in the first place, that it was effectively stolen from society and that it is only proper that it is now being returned, no thanks to the billionaire. At no point does the billionaire – despite all that money – have an action at his or her disposal that could overturn the preconceived notion of the critic. This has happened. What has not happened – and generally does not happen – is for people to adopt a scientific stance and to ask themselves what pattern of empirical facts would lead them to overturn their preconceptions of intention and character. Of course, that might not always be possible. If I am convinced that evil entities are ready to strike at any moment, someone pointing out that no such entities have been observed will not sway me.
On the contrary – their tendency to stay just beyond sensor range is further proof of their ill intentions…

But it is critical that we try to do so, as in a modern media environment, we are constantly evaluating scenarios we know nothing about, but which we then use to reinforce our preconceived ideological notions. A recent example of this is the horrendous Ebola outbreak in West Africa. After it became clear that American scientists had developed an experimental serum that was a potentially effective treatment, they were immediately accused of withholding it from the Africans, out of sheer racism. Once they considered making it available, they were accused of using this epidemic as an opportunity to test their half-baked drugs by experimenting on the poor Africans, again out of sheer racism. It is worth stating explicitly: there is nothing people who actually work on improving the human condition by developing cures for terrible scourges of mankind can do to dispel the notion that they are actually terrible racists, once these accusations are raised.

This situation is most acute on platforms like Twitter where people from the entire world communicate with each other, but often with little prior exposure and with an extremely limited shared basis of empirical facts. It can be hard to tell what someone is trying to say in 140 characters. Germans would call it the “Aussageabsicht”. Consequently, we observe plenty of misunderstandings and vitriol. With such a thin veneer of data, one can ascribe virtually any intention or character to anyone, regardless of what they say. And people do.

This is a dangerous situation and more likely to promote the bad than the common good in the long run, unless we rein in the tendencies brought about by our social neuroscience circuitry, which is operating far outside of its design range when it comes to social media. It would be a good start to admit that one does not really know the other people making the statements we object to, and that a handful of statements is not a good basis for judging them as a person on the whole. On the contrary, we should be mindful that being willing to do so on a whim is both unwise and reflects poorly on us as arbiters of empirical facts. Second, it is critical to recognize that any perception of intention or character is an inference – and an inference that is made, given that we know next to nothing about the person, mostly on the basis of our prior beliefs, social norms and expectations, and ideology. It might feel compelling, but it is not necessarily correct. Third, we should ask ourselves what behavior of the person could possibly be inconsistent with our preconceived notions, then go about eliciting such behavior. Everything else could be considered unfair and willfully unjustified. Finally, we should also recognize that seeing someone else making an accusation of ill intentions does not constitute evidence of ill intentions. Sometimes, it just constitutes evidence of bad inferences on the part of the person who wrote the tweet. Not all tweets are created equal. Some are written by people desperately trying to advance the human condition as best as they can, whereas others are written by people without achievement or judgment. They should not be weighed the same. Of course, it can be challenging to tell which is which, on Twitter.

To realize that not all tweets are created equal is particularly important for “shaming tweets”, tweets intended to shame the target (how one can ascertain that shaming is indeed the intention is the topic of an elaborate meta-discussion). It is important to recognize how devastatingly effective shaming is. Most (not all!) people are wired to protect their reputation at (almost) all cost. This is critical for autonomous agents that evolved to live in a social group. All social groups are beset with persistent principal-agent problems. Reputation is one way to make sure that everyone does what they are supposed to. Whereas the concept is a little murkier in a modern global society, there can be no doubt about the fact that people’s behavior still changes dramatically depending on whether they think they are anonymous or not. But historical examples are most striking. While most countries in WW1 simply drafted soldiers, the United Kingdom held on to its volunteer system for as long as it could. This is a problem if the country is in dire need of manpower, but few people are willing to sign up to fill this need. The example of the “Order of the White Feather” shows how effective shaming can be. Men could quite literally be shamed into “volunteering” to fight and – not that unlikely – die or get maimed in the process. Yet, many picked this fate over being shamed. The pillory also comes to mind. In an age where hanging was the norm in terms of punishment for misbehaving and carefully calibrated torture was routinely used in criminal investigations, there was also an entire catalog of punishments specifically designed to shame people, the so-called “Ehrenstrafen“. It is not a coincidence that we abolished those along with torture and – for the most part – capital punishment.

Social media stocks – not the kind of stock that generally comes to mind when one thinks of Twitter, but there it is – is Twitter just a global and eternal pillory for some?

One should recognize how devastatingly effective shaming actually is. It is so effective that there are entire movements whose sole purpose is to emphasize that “fat shaming” or “victim shaming” is not ok. In addition, Twitter allows for a kind of asymmetric warfare. Anonymous or pseudonymous Twitter users can attempt to shame whomever they feel like, for any reason and for the entire world to see, preserved forever. That is a tremendous power with no checks and balances whatsoever. Anyone trying to shame someone on Twitter for perceived misdeeds is effectively judge, jury and executioner all in one. Where are these checks and balances supposed to come from? The only way I can see this happening is from an evolution of internet culture itself. Perhaps people will learn – as they get more comfortable with the medium – to recognize that not every allegation or insinuation is equally justified, and that some say more about the mental state of the shamer than about the presumed misdeeds of the shamed. Not everyone with an internet connection and a Twitter account has something insightful to say. People like that can’t be stopped, but can they be ignored?

Meanwhile, can we all try to stick to the known facts? We might all be surprised how little we can actually ascertain for sure. But it might lead to a healthier and less vitriolic online culture.

Posted in Pet peeve, Psychology, Social commentary, Technology | 1 Comment

On “Kardashians” in science and the general relationship between achievement and fame

I am not in the habit of commenting on ephemeral events, but this was brought to my attention by interested parties in a decidedly snarky fashion which obliges me to respond.

Briefly, Neil Hall introduced the “Kardashian index” to quantify the discrepancy between the social media profile of scientists (measured by the number of Twitter followers) and their publication record (measured by citation count). The index is designed to identify “Kardashians” in science – people who are more well-known than merited by their scholarly contributions to the field.
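The index itself is simple arithmetic. As I read Hall’s paper, the “expected” follower count is an empirical fit to citations, F(C) ≈ 43.3 · C^0.32, and the K-index is the ratio of actual to expected followers, with K > 5 marking a “science Kardashian”:

```python
def kardashian_index(followers, citations):
    """K-index as defined in Hall (2014): ratio of actual Twitter
    followers to the follower count 'expected' from the citation
    record, using Hall's empirical fit F(C) = 43.3 * C**0.32."""
    expected = 43.3 * citations ** 0.32
    return followers / expected

# E.g. 20,000 followers on 10,000 citations gives an expected follower
# count of roughly 825, i.e. K of about 24 -- well past Hall's K > 5 cutoff.
```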

Of course, the actual Kardashians are entirely indefensible, as they – by their very existence – take scarce and valuable attention away from more worthy issues that are in urgent need of debate. Perhaps this is their systemic purpose. So I won’t attempt a hopeless defense. However, the validity of the allegations implicit in creating such an index for science is questionable at best.

To be sure, Dr. Hall raises an important issue. It is true that there are a lot of self-important gasbags high on the “Kardashian” list who stopped contributing to science decades ago, people who are essentially using their scientific soapbox only to bloviate about ideological pet interests they are ill-equipped to fully comprehend.

Moreover, the Kardashian effect is quite real. While I do not believe that anyone on the list released a sex tape to gain attention, there is now a decided Matthew effect in place where attention begets more attention, whether it was begotten in a legitimate fashion or not. The victims of this regrettable process are the hordes of nameless and faceless graduate students and postdocs who toil in obscurity, making countless personal sacrifices on a daily basis and with scant hope of ever being recognized for their efforts.

Yet, introducing a “Kardashian index” betrays several profound misunderstandings that – if spread – will likely do more harm than good.

First, it exposes a fundamental misunderstanding of the nature of Twitter itself. Celebrity is not the issue here. I follow thousands of accounts on Twitter. Not all of them are even human. I know almost none personally and would be hard pressed to recognize more than a few of them. Put differently, I’m not following them because of who they are, but largely because of what they have to say. If they were actual Kardashians, this would be a no-brainer, as they have nothing valuable to say. The message (information), not the person (celebrity) is at the heart of (at least my) following behavior on Twitter.

Second, there are many ways for the modern academic to make a meaningful contribution. Outreach comes to mind. What is so commendable about narrowly advancing one’s own agenda by publishing essentially the same paper 400 times while taking the funding that fuels this research and is provided by the population at large completely for granted? In an age of scarce economic resources, patience for this kind of approach is wearing thin and outreach efforts should be applauded, not marginalized. There are already far too few incentives for outreach efforts in academia, meaning that the people who are doing outreach despite the lack of extrinsic rewards will tend to be those who are intrinsically motivated to do so. Another “Kardashian” high on the list comes to mind – Neil deGrasse Tyson. So what if he made few (if any) original contributions to science itself – he certainly inspires lots of those who are inclined to be so inspired. Why is that not a valuable contribution to the scientific enterprise more broadly conceived? Of course, it is worth noting that this should be done with integrity and class.

The scientific “Kardashians”. As far as I know, none of them released a sex tape to garner initial fame

Third, if there are many ways to make a contribution to science, why privilege one of them? Doing so would be justified if there were some kind of unassailable gold standard by which one could measure scientific contributions. But to suggest that mere citation counts could serve as such a standard in a time of increasing awareness of serious problems inherent to the classic peer review system (to name just a few: it is easy to game, there are rampant replication issues and perverse incentives, all of which are increasingly reflected in ever-rising retraction numbers) is as naive as it is irresponsible.

Pyrite or fool’s gold. This is a metaphor. Only a fool would take citation counts as a gold standard of scientific merit. It does happen, as fools abound – there is never a shortage of those…

All of these concerns aside, this matter highlights the issue of the link between contributions to society (“achievement”) and recognition by society (“fame”). The implication of the “Kardashian index” is that this link is particularly loose if one uses Twitter followers as a metric of fame and citations as a measure of contribution. Of course, we know that this is absurd. We don’t have to speculate – serious scholars like Mikhail Simkin have made it their life’s work to elucidate the link between achievement and fame. In every field Simkin has considered to date (and by every metric he looked at), this link is tenuous at best. Importantly, it is particularly loose in the case of citations. As a matter of fact, the distribution of citation counts is *entirely consistent* with luck alone, due to the fact that papers are often cited without being read. For fields where such data is available, the “best” papers are rarely the most cited papers. Put differently, citations are not a suitable proxy for achievement, as they are just another proxy for fame. This is not entirely without irony, but – to put it frankly – in light of this research, the “Kardashian index” seems to be wholly without any conceptual foundation whatsoever.

In fact, most papers are never cited, whereas a few are cited hundreds of thousands of times. Taking the Kardashian index seriously implies that most papers are completely worthless. Perhaps this is so, but maybe most people are just bad at playing this particular self-promotion game? I’m not sure if anyone has computed an inequality index akin to the Gini coefficient for citations, but I would venture to guess that it would be staggering and that the top 1% of cited papers would easily account for more citations than the other 99%.
Put differently: Is everyone trying to be a Kardashian, on Twitter or off it, but some people are just better or luckier in the attempt?
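Such an inequality index does exist: the Gini coefficient, familiar from income inequality, applies to any non-negative distribution, citation counts included. A minimal sketch in Python – note that the citation counts below are entirely made up for illustration:

```python
def gini(values):
    """Gini coefficient: 0 = perfect equality, approaching 1 = one item holds everything."""
    xs = sorted(values)
    n, total = len(xs), sum(xs)
    if total == 0:
        return 0.0
    # Equivalent to the mean-absolute-difference definition:
    # G = sum_i (2i - n - 1) * x_i / (n * sum(x)), with x sorted ascending, i = 1..n
    return sum((2 * (i + 1) - n - 1) * x for i, x in enumerate(xs)) / (n * total)

# Made-up citation distribution: most papers uncited, a handful dominate
citations = [0] * 70 + [1] * 15 + [5] * 10 + [100, 500, 2000, 10000, 50000]
print(round(gini(citations), 3))  # → 0.985
```

On this toy distribution, the handful of highly cited papers account for the overwhelming majority of all citations and the coefficient lands near its theoretical maximum – consistent with the staggering inequality guessed at above.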

I find this account quite plausible, but what does any of this have to do with scientific contributions?

As already pointed out, attention (or fame) rarely, if ever, matches real-life contributions. This is mostly because real-life contributions are hard to measure. Science is no exception. Citation counts are likely a better proxy for fame than for genuine contributions. If anything, the “Kardashian index” relates two different kinds of fame: fame inside a particular field of scientific inquiry relative to fame in the more general population. But who is to say that one kind of fame is better than another? If one wants to save the “Kardashian index”, it would make sense to rebrand its inverse as the “insularity index” or “obscurity index” – do you get your message out only to people in your field, or are you able to reach people more generally?
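For concreteness, the index under discussion is simple arithmetic. If I recall the published version correctly, expected Twitter followers are fitted from citation counts as F(C) ≈ 43.3 · C^0.32, and K is the ratio of actual to expected followers – treat these constants as an assumption quoted from memory, not gospel:

```python
def kardashian_index(followers: float, citations: float) -> float:
    """K-index: actual followers divided by followers 'expected' from citations.
    Fit constants (43.3, 0.32) are quoted from memory and may be off."""
    expected = 43.3 * citations ** 0.32
    return followers / expected

# Hypothetical scientist: 10,000 followers, 1,000 citations
print(round(kardashian_index(10_000, 1_000), 1))  # → 25.3
```

By the proposed cutoff (K > 5, if memory serves), such a scientist would be branded a “Science Kardashian” – which only underscores the point: the index compares two proxies for fame against each other, not fame against achievement.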

There are many ways for scientists to make a contribution. Not all of them are directly related to original research. Teaching comes to mind, reviewing too. The job of a modern academic is complex. One can do all of these things with varying degrees of seriousness. The reason why this became an issue in the first place is that outreach efforts (proxy: Twitter followers) and publications (proxy: citations) can be easily quantified. That doesn’t mean that it is valid to do so, or that one is necessarily better than the other. A research paper is a claim that is hopefully backed up by good data created by original research. But as the replication crisis shows, many of these claims turn out to be unsubstantiated, born from methodological issues, not seminal insights. To be sure, academia would be well advised to develop better metrics of performance. In general, the evaluation of academic performance is often surprisingly unscientific. From papers to citations to evaluations, it is all remarkably unsophisticated at this point in time.

Ultimately, it is all about the relevance of one’s contribution. But trying to discredit an increasingly important form of academic activity (outreach) by shaming those engaged in it is probably uncalled for. We need more interactions between scientists and the general public, not fewer. To be sure, a lot of what is called “outreach” today is more leech than outreach, but the real thing does exist. As for the relevance of citations: Just because someone managed to create a citation cartel where everyone in it cites each other, that doesn’t mean anyone outside of the cartel cares. Calling something an impact factor doesn’t make it so. To be clear, there is nothing wrong with deciding to focus on original research. As a matter of fact, our continued progress as a species depends on it, if done right. But lashing out at others who make a different choice invites the response that the situation is akin to dinosaurs celebrating their impending fossil status. At any rate, evolution will continue regardless. Wisdom consists in large part of not being trapped by one’s own reward history when the contingencies change. But good luck staying relevant in the 21st century without a social media strategy.

As everyone knows, Mark Twain observed that it is inadvisable to pick fights with people who buy ink by the barrel. Is the modern-day equivalent that it is unwise to try to shame people who buy followers by the bushel?

PS: Of course, all of this bickering about who is more famous, who contributed more and who is more deserving of the fame is somewhat unbecoming and indicative of a deeper problem. As far as I can tell, the root cause of this problem is that society does not scale well in this regard. Historically, humans lived in small groups of under 150 individuals. In such a group, it is trivial to keep track of who did what and how much that contributed to the welfare of the group. There is no question that the circuitry that handles this tallying was not designed to deal with a group size of millions – or even billions (if one were to adopt a global stance). It is not trivial at all to ascertain who contributed something meaningful and who got away with something in a group that big. Unless we find a way to handle these attribution problems in general (modern society obviously allows for plenty of untrammeled freeloading) – perhaps with further advances in technology that allow for more advanced forms of social organization – this issue will keep coming up, unresolved. If you have any suggestions for how to solve this, or a perspective of your own, please don’t refrain from commenting below.

PPS: Of course, there is an even more general issue behind all of this, namely the relationship between contribution and reward. No society is great at making this relationship perfect. Interested parties can exploit these misallocations by gaming the system. The question is what one should do about this and how one ought to improve the system. Obviously, one way to deal with this is to shame the people who game the system, but I worry that the false-positive (and miss!) rate might be quite high. If so, that doesn’t really improve the overall justice of the system. A better way might be to make the system more robust against this kind of abuse. Ultimately, the best long-term policy is virtue.

Posted in Pet peeve, Science, Social commentary | 2 Comments

Ideological opportunity cost (IOC)

Ideology interferes with an unbiased appraisal of reality. This – in itself – would be detrimental enough, but ideology is far more insidious than that. By nature, ideology is extremely self-serving and inherently creates in-groups and out-groups. Put differently, there is never a reason to have an ideological debate with someone. The reason someone brings up a particular topic is not to have an honest discourse about it, but rather to celebrate the beliefs of the in-group. Assuming belief homeostasis, I would expect this response to be particularly strong in those without sincerely held religious beliefs. In other words, someone bringing up an ideological topic is engaging in something more akin to a secular sermon than to a genuine debate or discourse. Its function is to celebrate the in-group and to identify and ostracize the out-group.

Life is inherently hard as reality is extremely complex and nature does not ask for informed consent. Autonomous but inherently limited cognitive agents living in this world can get lucky, but should be expected to suffer the consequences of living in such a reality on a fairly constant basis. Given the ambiguous nature of *interpreting* reality, it can be tempting for individuals to willfully make fundamental attribution errors if they exculpate the individual – shortcomings are externalized sensu “It is not my fault, if only these evil other guys didn’t exist, all would be well with the world.”, particularly if this is socially reinforced. Ideology amounts to outsourcing the causes of the pain associated with existence. Therefore, it is highly likely that psychological and social needs are behind any given ideological position, whereas expressions on issues of fact that touch on the ideological position of the ideologue are merely a smokescreen to obscure this fact, perhaps even to the believers themselves.

All of this raises the issue of ideological opportunity cost (IOC). Given how self-serving and community-building ideological positions are, it is to be expected that their seductive nature will lead the ideologue to bring them up frequently. But there is a terrible cost associated with this. There are plenty of topics where a discussion can lead to a better appreciation of the realities of life for all involved. Such sincere discussions are critical, as a genuine exchange of information transcends the limitations of individual agents sampling reality in solitude. The problem is that ideologues have no interest in such discussion, as they already “know” the nature of reality. From their perspective, there is definitely one (politically) correct view of reality, but many wrong ones. Their only real interest is in spreading and celebrating their secular gospel in their secular sermon (SS). The opportunity cost associated with all of this is that ideological exchanges can seem incredibly engaging in the short run, but will likely be – for the reasons outlined in the previous paragraph – fruitless in the long run. Yet they displace genuine exchanges concerning the nature of reality, as there is only limited time for social engagements.

This is a problem without a good solution. If people disagree, this kind of exchange creates adversity without any conceivable upside. Even if everyone agrees on the ideological issue in question, the group prior is further distorted from reality and discussions that could set the proper group prior are displaced by this kind of ideological self-gratification.

If you are also concerned about this, you should be on the lookout for people who are likely to drag you into this kind of below zero-sum exchange. The prediction is that the people most likely to do this are those who feel the pain of existence most acutely – due to a plethora of real and perceived problems – and who are unable to mitigate it by making it meaningful (usually with religion) or bearable (usually with stoic philosophy or meditation). In this view, ideology is a social coping strategy for personal problems resulting from existence itself. You do not have to be an unwitting part of this process.

The only solution I see is a conscious override: Recognizing that – while tempting – the ideological exchange is a trap, that ideological differences cannot be settled by discussion, that there is a tremendous opportunity cost even if everyone agrees and that – therefore – the only way to win is not to play.

Posted in Pet peeve, Philosophy, Social commentary | 3 Comments

What should we call simulated data?

Data is not made. Data is born as the result of a measurement process. Taking measurements (in conjunction with a measurement theory) creates data. But then, what should we call – in contrast – the results of simulations, the output of theoretical models? Some might object that this is not an interesting question in the first place, but pointless – and rather nitpicky at that – semantics. However, I must disagree. The question is of critical importance, as “simulated data” is a contradiction in terms. Data concern the state of the real world, the world of things. The outputs of a simulation firmly belong to the realm of ideas. It is critical not to confuse the two, lest we make ill-informed statements about the real world based solely on observations derived from models or simulations. This matters, as was impressively shown in the financial crisis of 2008 – when it became apparent that the if in “risk is only tamed if the real world risk fits the risk model” is indeed a big (and uncertain) if. Entire fields can be built on the confusion between models that work for idealized systems and the real world they are trying to account for, as the case of Ricardo and the rise of economics shows. That doesn’t mean one should attempt this.

Actual simulated data

With that in mind, what should we call “simulated data”? The term itself is a contradiction in terms and it bugs me that the rise of data science makes me qualify actual data with the term “real”. Predictions would be needlessly imperialist, as many (if not most) models deal solely with postdiction (there is nothing inherently wrong with this). I tried to make “sata” happen, but that did not catch on (yet).

Any suggestions?


PS: Lest you think that I’m needlessly pedantic and that this is a distinction without a difference – it really does matter. Simulations, modeling and “simulated data” all have a role in the scientific process. But they are no substitute for data. In an age of sophisticated modeling, it really does matter what one uses as a training (and test) set. Actual, real data – well recorded – is best.

Posted in Pet peeve, Philosophy, Science | 1 Comment

A tale of two wars

We are upon the 100th anniversary of the start of the First World War. Most people alive today don’t fully appreciate the cataclysmic forces that were unleashed in this conflict, several of which still shape world events today. Of course, most people are aware of the sequel, World War 2 – a very different, yet closely related conflict. More subtly, WW1 brought on the rise of communism (seeding the cold war) as well as the demise of the Ottoman empire. The way in which weak and inherently unstable nation states like Syria or Iraq were carved out of the corpse of the Ottoman empire troubles the world today. As was revealed by removing the strongmen in the early 2000s, most of these states could only be stabilized and pacified by dictatorships. Which is not a problem per se, unless a psychopathic and expansionist one is in charge of a major regional power (such as Iraq), in a region on which the energy supply of the world economy depends, as the experience of the 1970s illustrated. Put differently, Iraq really was about resources, but in a more subtle way than most people believe.

But why have such an apocalyptic, cataclysmic conflict in the first place? Briefly, because everyone wanted to fight, no matter how senseless it was in economic terms. The sequel – WW2 – could only happen because no one (except for Hitler) wanted to fight, or could afford to, given the debt accumulated in WW1.

A great deal can – and has been – said about what amounts to the suicide of the West, as it was classically conceived. Briefly, I want to emphasize a few points.

*Really everyone in Europe wanted to fight, including Germany (who felt itself surrounded by enemies), Russia (who wanted to come to the aid of their Serbian friends), Serbia (who felt its honor wounded by the Austro-Hungarian empire), Austria-Hungary (who wanted to teach the Serbs a lesson and get revenge), France (who had been in a revanchist mood since 1871 and saw the population development east of the Rhine with great concern), England (who felt its imperial primacy threatened by an upstart Germany) and even Belgium, who was asked by Germany to stand down so that Germany could implement the Schlieffen plan but instead blocked every road and blew up every bridge they could.

*The irony of the Schlieffen plan: Germany saw itself surrounded by powerful enemies (France and Russia). The way to beat a spatial encirclement is to introduce the concept of time – the Russians were expected to mobilize their armies slowly. This gave the Germans a narrow time window for a decisive blow against Paris, to knock out France in time before turning around and dealing with Russia. In addition, this time pressure was so severe that while Germany had the necessary artillery to knock out the French border forts, it did not have the time to do so. This necessitated going through neutral Belgium, which brought the British and – eventually – the US into the war against Germany. The irony is that this worry about the Russians was misplaced. As a matter of fact, the Russians showed up way earlier than anyone expected and started to invade East Prussia, in an attempt to march on Berlin. However, while they were early, they were also a disaster. A single German army defending East Prussia managed to utterly destroy both invading Russian armies – and then some. Given this outcome, there was no need for the highly risky Schlieffen plan.

*The irony of having a war plan in the first place. But this, too, is only obviously a problem in hindsight. In WW1, both sides made disastrous mistakes on a regular basis. As a matter of fact, the rate of learning was itself appallingly slow – infantry operated with outdated tactics and without helmets well into the war. Ultimately, the side that was faster at improvising and made fewer disastrous mistakes won.

*The irony of constructing a high seas fleet for Germany. This got the English into the conflict, which turned it from a small regional engagement to a world war. Immensely costly to build, this fleet did the Germans a world of good, sitting in port for the entire duration of the war (with the exception of a brief and inconclusive engagement in 1916) and providing the seed for revolution in 1918, bringing the entire government down.

*The difference in conflict between WW1 and WW2. As mentioned above, everyone wanted to fight in WW1. Consequently, well over a million people were dead within a few months, whereas it took almost two years for WW2 to get “hot”.

*It is a legitimate question to wonder what would have happened if the US hadn’t intervened in 1917. Without US intervention, there probably wouldn’t have been enough strength remaining for either side to conclusively claim victory (Operation Michael in 1918 would probably not have been successful regardless, and without US encouragement, the Allied offensives would likely have suffered the same fate as those of previous years). But the war couldn’t conceivably have gone on much longer anyway, due to a global flu epidemic and war weariness on all sides. What would the world look like today if the war had ended in an acknowledged stalemate, a total draw?

*Eternal glory might be worth fighting for, but the time constant of glory in real life is much shorter. Most people have absolutely no idea what the different sides were fighting for specifically, and would be hard pressed to name even a single particular engagement.

*Much has been made of the remarkable coincidence involving the assassination of Archduke Ferdinand. It is true that the assassin struck his mark only after a series of unlikely events, e.g. Ferdinand’s driver getting lost, and the car trying to reverse – and stalling – at the precise moment that the assassin Princip was exiting a deli (Schiller’s Delicatessen) where he had gotten a sandwich. Given the significance of subsequent outcomes, one is hard pressed not to see the hand of fate in all of this. However, there might be a massive multiple comparisons problem here. First of all, Princip was not the only assassin. To play it safe, six assassins were sent, and indeed, the first attempt did fail. More importantly, this event was the trigger – the spark that set the world ablaze – but not the cause. Franz Ferdinand makes an unlikely casus belli. Not only was he suspected of harboring tendencies supporting tolerance and imperial reform, Ferdinand had married someone who was ineligible to enter such a marriage. This was a constant source of scandal in the Austro-Hungarian empire. The marriage was morganatic in nature, his wife was not generally allowed to appear in public with him and even the funeral was used to snub her. Therefore, the emperor considered the assassination “a relief from great worry”. More importantly, it can be argued that this event was just one in a long series, all of which could have led to war. Bismarck correctly remarked in the late 19th century that “some damn thing in the Balkans” would bring about the next European war. Indeed, the Balkans were the scene of constant crises going back to 1874, including those of 1912 and 1913, all of which could have led to a general war. If anything, it can be argued that fate striking in 1888 was more material to the ultimate outcome. In 1888, Frederick III, a wise and progressive emperor, died of cancer of the larynx after having reigned for only 99 days. This made way for the much more insecure and belligerent Wilhelm II.

*What remains is the scariness of people ready to go to war even though it makes absolutely no economic sense – in a hyperconnected world of globalized trade that closely resembles our own (mutatis mutandis, e.g. the US stands in for the British Empire as the global hegemon). In addition, there was a full – ultimately wasted – month for negotiations between the assassination of Archduke Ferdinand and the beginning of hostilities. This raises the prospect of a repeat; at least, it doesn’t rule one out. In 1913, there had also been an almost 100-year “refractory period” (a respite from truly serious, all-out war) since the Napoleonic wars. However, odds are that if it should happen again, repeating the 20th century in the 21st, it will happen in Asia. Asia has the necessary population density, and a lot of its key countries – China, Japan, Russia, South Korea, India and Pakistan – are toying with extreme nationalism. As the history of the 20th century illustrates, that is a dangerous game to play.

Posted in History, Strategy | 1 Comment