Is the overuse of low memory data types to blame for much of tribalism and overall nonsense one encounters online and offline?

The notion of “data types” is probably the most underrated concept outside of computer science that I can think of right now. Briefly, computers use “typed variables” to represent numbers internally. All numbers are internally represented as a collection of “binary digits” or “bits” (a term introduced by the underrated genius John Tukey, who also gave us the HSD test and the fast Fourier transform, among other useful things), more commonly known to the general public as “zeroes and ones”. An electric switch can either be on (1) or off (0) – usually implemented by voltages that are either high or low. As a computer ultimately represents all numbers as a collection of zeroes and ones, the question is how many of them are used. A set of 8 bits makes up a “byte”, usually the smallest addressable unit of memory. So with one byte of memory, we can represent 2^8 or 256 distinct states of switch positions, i.e. from 00000000 (all off) to 11111111 (all on), and everything in between. And that is what data types build on. For instance, an 8-bit integer takes up one byte in memory, so we can represent 256 distinct states (usually the numbers from 0 to 255) with it. Other data types such as “single precision” floating point numbers take up 32 bits (= 4 bytes) and can represent over 4 billion distinct bit patterns (2^32), whereas “double precision” numbers take up 64 bits (= 8 bytes of memory) and can represent even more. In contrast, the smallest possible data type is a Boolean, which can technically be represented by a single bit and can only represent two states (0 and 1); it is often used when checking conditionals (“if this condition is true (= 1), do this. If it is not true (or 0), do something else”).

Each switch by itself can represent 2 states, 0 (“false”, represented by voltage off) and 1 (“true”, represented by voltage on). Left column: This corresponds to the k in 2^k. Right column: This corresponds to the number of unique states that this set of binary switches can represent.
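For the programmers among us, the gist of this can be reproduced in a few lines. A sketch using NumPy’s type system (the dtype names are NumPy-specific, and note that NumPy stores a boolean in a full byte, even though a single bit would technically suffice):

```python
import numpy as np

# Number of distinct bit patterns each data type can represent: 2^(number of bits)
for dtype in ("bool", "uint8", "float32", "float64"):
    n_bits = np.dtype(dtype).itemsize * 8
    print(f"{dtype}: {n_bits} bits -> {2 ** n_bits:,} distinct bit patterns")
```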

Note that all computer memory is finite (and used to be expensive), so memory economy is paramount. Do you really need to represent every pixel in an image as a double or can you get away with an 8-bit integer? How many shades of grey can you distinguish anyway? If the answer is “probably fewer than 256”, then you can save 87.5% of memory by representing each pixel as an 8-bit integer, compared to representing it as a double. If the answer is that you want to go for maximal contrast, and “black” vs. “white” are the only states you want to represent (no shades of grey), then booleans will do to represent your pixels.
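To put numbers on the pixel example, here is a sketch (using NumPy and a hypothetical 100×100 grayscale image):

```python
import numpy as np

# The same hypothetical 100x100 grayscale image, stored three ways
img_double = np.random.rand(100, 100)            # float64: 8 bytes per pixel
img_256 = (img_double * 255).astype(np.uint8)    # 256 gray levels: 1 byte per pixel
img_bw = img_double > 0.5                        # boolean: black vs. white only

saving = 1 - img_256.nbytes / img_double.nbytes
print(f"8-bit pixels save {saving:.1%} of memory vs. double precision")  # -> 87.5%
```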

But computer memory has gotten cheap and is getting ever cheaper, so why is this still an important consideration?

Because I’m starting to suspect that something similar is going on for cognition and cognitive economy in humans and other organisms. Life is complicated and I wonder how that complexity is represented in human memory. How much nuance does long term memory allow for? Phenomena like the Mandela effect might suggest that the answer is “not much”. Perhaps long term memory only allows for the most sparse, caricature-like representation of objects (“he was for it” or “he was against it”, “the policy was good” or “the policy was bad”). Maybe this is even a feature to avoid subtle nuance-drift over time and keep the representation relatively stable over time, once encoded in long term memory.

But the issue doesn’t seem to be restricted to long term memory. On the contrary. There is a certain simplicity that really doesn’t seem suitable to represent the complexity of reality in all of its nuances, not even close, but people seem to be drawn to it. In fact, often the dictum “the simpler the better” seems to have a particular draw. This goes for personality types (I am willing to bet that much of the popularity of the MBTI in the face of a shocking lack of reliability can be attributed to the fact that it promises to explain the complexity of human interactions with a mere 16 types – or a 4-bit representation), horoscopes (again, it would be nice to be able to predict anything meaningful about human behavior with a mere 12 zodiac signs (about 3.6 bits, if bits could be fractional)), racism (maybe there are 4-8 major races, which could thus be represented with 2-3 bits), and sexism (biological sex used to be conventionally represented with a single bit). There is now even a 2-bit representation of personality that is rapidly gaining popularity – one that is based on the 4 blood types, and that has no validity whatsoever. But this kind of simplicity is hard to beat. In other words, all of these are “low memory plays”. If there is even a modicum of understanding about the world to be gained from such a low memory representation (perhaps even well within the realm of “purely felt effectiveness”, from the perspective of the individual, given the effects of confirmation bias, etc.), it should appeal to people in general, and to those who are memory-limited in particular.
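If you want to check these bit counts yourself: the number of bits needed to distinguish k categories is log2(k) (fractional) or its ceiling (whole bits). A quick sketch, using the category counts from the paragraph above:

```python
import math

# Bits needed for each "low memory play": log2 of the number of categories
schemes = {"MBTI types": 16, "zodiac signs": 12,
           "blood types": 4, "binary sex": 2}
for name, k in schemes.items():
    print(f"{name}: {k} categories = {math.log2(k):.2f} bits "
          f"({math.ceil(math.log2(k))} whole bits)")
```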

Given this account, what remains puzzling – however – is that this kind of almost deliberate lack-of-nuance is even celebrated by those who should know better, i.e. people who are educated and smart enough that they don’t *have to* compromise and represent the world in this way, yet seem to do it anyway: For instance, there are some types of research where preregistration makes a lot of sense. If only to finally close the file drawer. Medication development comes to mind. But there are also some types where it makes less sense and some types where it makes no sense (e.g. creative research on newly emerging topics at the cutting edge of science) – so how appropriate it actually is mostly depends on your research. Surely, it must be possible for sophisticated people to keep a more nuanced position than a purely binary one (“preregistration good, no preregistration bad”) in their head. This goes for other somewhat sophisticated positions where tribalism rules the roost, e.g. “R good, SPSS bad” (reality: This depends entirely on your skill level) or “Python good, Matlab bad” (reality: Depends on what you want – and can – do) or “p-values bad, Bayes good” (reality: Depends on how much data you have and how good your priors are). And so on… 

Part of the reason these dichotomies for otherwise sophisticated topics are so popular must then lie in the fact that such a low-memory, low-nuance representation – after all, it takes all of 6 bits to represent a mere 49 shades of grey, and 49 shades isn’t really all that much – has other hidden benefits. One is perhaps that it optimally preserves action potential (no course of action is easier to adjudicate than a binary choice – you don’t need to be an octopus to represent these 2 options); another is that it engenders tribalism and group cohesion (assuming for the sake of argument that this is actually a good thing). A boolean representation has more action potential and is more conducive to tribalism than a complex and nuanced one, so that’s perhaps what most people instinctively stick with…

But – and I think that is often forgotten in all of this – action potential and group cohesion notwithstanding, there are hidden benefits to being able to represent a complex world in sufficient nuance as well. Choosing a data type that is too coarse might end up representing a worldview that is plagued by undersampling and suffers from aliasing. In other words, you might be able to act fast and decisively, but end up doing the wrong thing because you picked from two alternatives that were not without alternative – you fell prey to a false dichotomy. If a lot is at stake, this could matter tremendously.
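The undersampling metaphor is quite literal in signal processing, where sampling too coarsely makes two genuinely different signals indistinguishable. A minimal sketch:

```python
import numpy as np

# A 9 Hz sine sampled at only 10 Hz (below its Nyquist rate of 18 Hz)
# yields exactly the same samples as a 1 Hz sine of opposite phase --
# classic aliasing: a representation too coarse to tell two very
# different realities apart.
t = np.arange(0, 1, 1 / 10)            # 10 samples per second
fast = np.sin(2 * np.pi * 9 * t)       # the actual signal: 9 Hz
slow = np.sin(-2 * np.pi * 1 * t)      # what the samples suggest: 1 Hz
print(np.allclose(fast, slow))         # -> True
```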

In other words, even the cognitive utility of booleans and other low memory data types is not clear cut – sometimes they are adequate, and sometimes they are not. Which is another win for nuanced data types. Ironically? Because if they are superior, maybe it is a binary choice after all. Or not. Depending on the dimensionality of the space one is evaluating all of this in. And whether it is stationary. And so on.

Posted in Pet peeve, Philosophy

This is what is *really* going on with Laurel and Yanny – why your brain has to guess (without telling you)

At this point, we’re all *well* beyond peak #Yannygate. There have been comprehensive takes, there have been fun ones and there have been somber and downright ominous ones. But there have not been short ones that account for what we know.

This is the one (minute read). Briefly, all vowels that you’ve ever heard have 3 “formant frequencies” – 3 bands of highest loudness in the low (F1: ~500 Hz), middle (F2: ~1500 Hz) and high (F3: ~2500 Hz) frequency range. These bands are usually clearly visible in any given “spectrogram” (think “ghosts”) of speech.
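If you want to see the three bands for yourself, here is a toy sketch – a crude synthetic “vowel” built from three sinusoids at the ballpark formant frequencies above (these are the rough figures from this post, not any particular speaker’s vowel):

```python
import numpy as np

# Crude synthetic "vowel": equal-amplitude components at roughly
# F1 (~500 Hz), F2 (~1500 Hz) and F3 (~2500 Hz)
fs = 10_000                                   # sampling rate in Hz
t = np.arange(0, 0.1, 1 / fs)                 # 100 ms of signal
vowel = sum(np.sin(2 * np.pi * f * t) for f in (500, 1500, 2500))

# The three bands show up as the three largest peaks in the spectrum
spectrum = np.abs(np.fft.rfft(vowel))
freqs = np.fft.rfftfreq(len(vowel), 1 / fs)
top3 = sorted(freqs[np.argsort(spectrum)[-3:]])
print(top3)  # -> [500.0, 1500.0, 2500.0]
```

The LaurelYanny situation corresponds to deleting the middle component and leaving the brain to fill in the blank.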

However, the LaurelYanny sound doesn’t have this signature characteristic of speech. The F2 is missing. But your brain has no (epistemic) modesty. Instead of saying: “I have literally never heard anything like this before, is this even speech?”, it says: “I know exactly what this is” and makes this available to your consciousness as what you hear, without telling you that this is a guess (might be worth mentioning that, no?).

Stylized version of the Laurel and Yanny situation: Diagram of spectrograms. "Laurel" has all 3 formants, but with most power in the low frequencies. "Yanny" has all 3 formants, but with most power in the high frequencies. "LaurelYanny" has both high and low power, but nothing in the middle. So you have to guess.


That’s pretty much it. The signal contains parts of both “Laurel” and “Yanny”, but also misses parts of both, hence the need to guess. WHAT you are guessing and why you hear “Laurel”, “Yanny” or sometimes one, then the other, and what it means for you whether you are a “Laurel” or a “Yanny” is pretty much still open to research.

Action potential: Hopefully, that was a mercifully short read. If you have some more time – specifically another 7-9 minutes – and want to help, click here.

Posted in Psychology, Science

#Yannygate highlights the underrated benefits of keeping foxes around

In May 2018, a phenomenon surfaced that lends itself to differential interpretation – some people hear “Laurel” whereas others hear “Yanny” when listening to the same clip. As far as I’m concerned, this is a direct analogue – in the auditory domain – of the #thedress phenomenon that surfaced in February 2015. Illusions have been studied by scientists for well over a hundred years and philosophers have wondered about them for thousands of years. Yet, this kind of phenomenon is new and interesting, because it opens the issue of differential illusions – illusions that are not strictly bistable, like Rubin’s vase or the duck-rabbit, but that are perceived as a function of the prior experience of an organism. As such, they are very important because it has long been hypothesized that priors (in the form of expectations) play a key role in cognition, and now we have a new tool to study their impact on cognitive computations.

What worries me is that this analogy and the associated implications were lost on a lot of people. Linguists and speech scientists were quick to analyze the spectral (as in ghosts) properties of this signal – and they were quite right with their analysis – but also seemed to miss the bigger picture, as far as I’m concerned, namely the direct analogy to the #thedress situation mentioned above and the deeper implication of the existence of differential illusions. The reason for that is – I think – that Isaiah Berlin was right when he stated:

“The fox knows many things, but the hedgehog knows one big thing.”

The point of this classification is that there are two cognitive styles by which different people approach a problem: Some focus on one issue in depth, others connect the dots between many different issues.

What he didn’t say is that there is a vast numerical discrepancy between these cognitive styles, at least in academia. Put bluntly, hedgehogs thrive in the current academic climate whereas foxes have been brought to the very brink of extinction.


Isaiah Berlin was right about the two types of people. But he was wrong about the relative quantities. It is not a one-to-one ratio. So it shouldn’t be ‘the hedgehog and the fox’, it should be ‘the fox and the hedgehogs’, at least by now…

It is easy to see why. Most scientists start out by studying one type of problem. In the brain – owing to the fact that neuroscience is methods-driven and it is really hard to master any given method (you basically have to be MacGyver to get *any* usable data whatsoever) – this usually manifests as studying one modality such as ‘vision’, ‘hearing’ or ‘smell’, or one cognitive function such as ‘memory’ or ‘motor control’. Once one starts like that, it is easy to see how one could get locked in: Science is a social endeavor, and it is much easier to stick with one’s tribe, in particular when one already knows everyone in a particular field, but no one in any other field. Apart from the social benefits, this has clear advantages for one’s career. If I am looking for a collaborator, and I know who is who in a given field, I can avoid the flakes and those who are too mean to make it worthwhile to collaborate, and seek out those who are decent and good people. It is not always obvious from the published record what differentiates them, but it makes a difference in practice, so knowing one’s colleagues socially comes with lots of clear blessings. In addition, literatures tend to cite each other, silo-style, so once one starts reading the literature of a given field, it is very hard to break out and do this for another field: People tend to use jargon that one picks up over time, but that is rarely explicitly spelled out anywhere. People have a lot of tacit knowledge (also picked up over time, usually in grad school) that they *don’t* put in papers, so reading alien literatures is often a strange and trying experience, especially when compared with the comforts of having a commanding grasp of a given literature where one already knows all of the relevant papers. Many other mechanisms are also geared towards further fostering hedgehogs: One of them is “peer review”, which is de facto review-by-hedgehog – and that can end quite badly for the fox.
Just recently, a program officer told me that my grant application was not funded because the hedgehog panel of reviewers simply did not find it credible that one person could study so many seemingly disparate questions at once. Speaking of funding: Funding agencies are often structured along the lines of a particular problem. For instance, in the US, there is no National Institute of Health – there are the National Institutes of Health, and that subtle plural “s” makes all the difference, because each institute funds projects that are compatible with its mission specifically. For instance, the NEI (the National Eye Institute) funds much of vision research with the underlying goal of curing blindness and eye diseases in general. But also quite specifically. And that’s fine, but what if the answer to that question relies on knowledge from associated, but separate fields (other than the eye or visual cortex)? More on this later, but a brief analogy might suffice to illustrate the problem for now: Can you truly and fully understand a Romance language – say French – without having studied Latin? Even cognition itself seems to be biased in favor of hedgehogs: Most people can attend to only one thing at a time, and can associate an entity with only one thing. Scientists who are known for one thing seem to have the biggest legacy, whereas those with many – often somewhat smaller – disparate contributions seem to get forgotten at a faster rate. In terms of a lasting legacy, it is better to be known for one big thing, e.g. mere exposure, cognitive dissonance, obedience or the ill-conceived and ill-named Stanford Prison Experiment. This is also why I think all of Google’s many notorious forays to branch out into other fields have ultimately failed.
People associate it so strongly with “search”, specifically, that their – many – other ventures just never really catch on, at least not when competing with hedgehogs in those domains, who allocate 100% of their resources to that one thing. E.g., FB (close online social connections – connecting with people you know offline, but online) eviscerated G+ in terms of social connections. Even struggling Twitter (loose online social connections – connecting with people online that you do not know offline) managed to pull ahead (albeit with an assist by Trump himself), and there was simply no cognitive space left for a 3rd, undifferentiated social network from a company that is *already* strongly associated with search. LinkedIn is not a valid counterexample, as it isn’t so much a social network as a formalization of informal professional connections, put online, so it is competing in a different space.

So the playing field is far from level. It is arguably tilted in favor of hedgehogs, has been tilted by hedgehogs and is in danger of driving foxes to complete extinction. The hedgehog-to-fox ratio is already quite high in academia – what if foxes go extinct and the hedgehog singularity hits? The irony is that – if they were to recognize each other’s strengths – foxes and hedgehogs are a match made in heaven. It might even be ok for hedgehogs to outnumber foxes. A fox doesn’t really need another fox to figure stuff out. What the fox needs is solid information dug up by hedgehogs (who are admittedly able to go deeper), so foxes and hedgehogs are natural collaborators. As usual, cognitive diversity is extremely useful and it is important to get this mix right. Maybe foxes are inherently rare. In which case it is even more important to foster, encourage and nurture them. Instead, the anti-fox bias is further reinforced by hyper-specific professional societies that have hyper-focused annual meetings, e.g. VSS (the Vision Sciences Society) puts on an annual meeting that is basically only attended by vision scientists. It’s like a family gathering, if you consider vision science your family. Focus is important and has many benefits – as anyone suffering from ADD will be (un)happy to attest – but this can be a bit tribal. It gets worse – as there are now so many hedgehogs and so few remaining foxes, most people just assume that everyone is a hedgehog. At NYU’s Department of Psychology (where I work), every faculty member is asked to state, on the faculty profile page, the research question they are interested in (the implicit presumption is of course that everyone has exactly one, which is of course true for hedgehogs and works for them. But what is the fox supposed to say? Even colloquially, scientists often ask each other “So, what do you study?”, implicitly expecting a one-word answer like “vision” or “memory”. Again, what is the fox supposed to say here?
Arguably, this is the wrong question entirely, and not a very fox-friendly one at that). This scorn for the fox is not limited to academia; there are all kinds of sayings that are meant to denigrate the fox as a “Jack of all trades, master of none” (“Hansdampf in allen Gassen”, in German), it is common to call foxes “dilettantes”, and of course a fox will appear to lead a bizarre – startling and even disorienting – lifestyle, from the perspective of the hedgehog. And there *are* inherent dangers of spreading oneself too thin. There are plenty of people who dabble in all kinds of things, always talking a good game, but never actually getting anything done. But these people just give real foxes a bad name. There *are* effective foxes, and once stuff like #Yannygate hits, we need them to see the bigger picture. Who else would? Note that this is not in turn meant to denigrate hedgehogs. This is not an anti-hedgehog post. Some of my closest friends are hedgehogs, and some are even nice people (yes, that last part is written in jest, come on, lighten up). No one questions the value of experts. We definitely need people with a lot of domain knowledge to go beyond the surface level on any phenomenon. But whereas no one questions the value of keeping hedgehogs around, I want to make a case for keeping foxes around, too – even valuing them.

What I’m calling for, specifically, is to re-evaluate the implicit or explicit “foxes not welcome here” attitude that currently prevails in academia. Perhaps unsurprisingly, this attitude is a particular problem when studying the brain. While lots of people talk a good game about “interdisciplinary research”, few people are actually doing it and even fewer are doing it well. The reason this is a particular problem when studying the brain is that complex cognitive phenomena might cut across discipline boundaries, but in ways that were unknown when the map of the fields was drawn. To make an analogy: Say you want to know where exactly a river originates – where its headwaters or source are. To find that out, you have to go wherever the river leads you. That might be hard enough in itself – just ask Theodore Roosevelt, who did exactly this with the River of Doubt – and arguably, all phenomena in the brain are a “river of doubt” in their own right, with lots of waterfalls and rapids and other challenges to progress. We don’t need artificial discipline or field boundaries to hinder us even further. We have to be able to go wherever the river leads us, even if that is outside of our comfort zone or outside of artificial discipline boundaries. If you really want to know where the headwaters of a river are, you simply *have to* go where the river leads you. If that is your primary goal, all other considerations are secondary. If we consider the suffering imposed by an incomplete understanding of the brain, reaching the primary objective is arguably quite important.

To mix metaphors just a bit (the point is worth making), we know from history that artificially imposed borders (drawn without regard for the underlying terrain or culture) can cause serious long-term problems, notably in Africa and the Middle East.

All of this boils down to an issue of premature tessellation:

The tessellation problem. Blue: Field boundaries as they should be, to fully understand the phenomena in question. Red: Field boundaries, as they might be, given that they were drawn before understanding the phenomena. This is a catch 22. Note that this is a simplified 2D solution. Real phenomena are probably multidimensional and might even be changing. In addition, they are probably jagged and there are more of them. This is a stylized/simplified version. The point is that the lines have to be drawn beforehand. What are the chances that they will end up on the blue lines, randomly? Probably not high. That's why foxes are needed - because they transcend individual fields, which allows for a fuller understanding of these phenomena.



What if the way you conceived of the problem or phenomenon is not the way in which the brain structures it, when doing computations to solve cognitive challenges? The chance of a proper a priori conceptualization is probably low, given how complicated the brain is. This has bothered me personally since 2001, and other people have noticed this as well.

This piece is getting way too long, so we will end these considerations here.

To summarize briefly, being a hedgehog is prized in academia. But is it wise?

Can we do better? What could we do to encourage foxes to thrive, too? Short of creating “fox grants” or “fox prizes” that explicitly recognize the foxy contributions that (only) foxes can make, I don’t know what can be done to make academia a more friendly habitat for the foxes among us. Should we have a fox appreciation day? If you can think of something, write it in the comments.


Action potential: Of course, I expect no applause for this piece from the many hedgehogs among us. But if this resonates with you and you strongly self-identify as a fox, you could consider joining us on FB.

Posted in In eigener Sache, Neuroscience, Pet peeve, Psychology, Science, Social commentary

Social media and the challenge of managing disagreement positively

Technological change often entails social change. Historically, many of these changes were unintended and could not be foreseen at the time the technological advances were made. For instance, the printing press was invented by Johannes Gutenberg in the 1400s. One can make the argument that this advance led to the Reformation within a little more than 50 years, and to the devastating Thirty Years’ War within another 100 years of that. Arguably, the Thirty Years’ War was an attempt at the violent resolution of fundamental disagreements – about how to interpret the word of God (the Bible), which had suddenly become available for the masses to read. Of course the printing press was probably not sufficient to bring these developments about, but one can make a convincing argument that it was necessary. Millions of people died and the political landscape of central Europe was never quite the same.

Which brings us to social media. I think it is safe to say that most of us were surprised by how fundamentally we disagree with each other as to how to interpret current events. Previously, the tacit assumption was that we all kind of agree about what is going on. This assumption is no longer tenable, which is often quite awkward. Social media got started in earnest about 10 years ago, with the launch of Twitter and the Facebook News Feed. Since then, people have shared innumerable items on social media, and from personal experience, one can be quite surprised by how differently other people interpret the very same event.

Which brings us to my research.

Briefly, people can fundamentally disagree about the merits of any given movie or piece of music, even though they saw the same film or listened to the same clip.

Moreover, they can vehemently disagree about the color of a whole wardrobe of things: Dresses, jackets, flipflops and sneakers. Importantly, nothing anyone can say would change anyone else’s mind in case of disagreement and these disagreements are not due to being malicious, ignorant or color-blind.

So where do they come from? When ascertaining the color of any given object, the brain needs to take the illumination into account, a phenomenon known as color constancy. Insidiously, the brain does not tell us that this is happening; it simply makes the end result of this process available to our conscious experience. The problem – and the disagreement – arises when different people make different assumptions about the illumination.

Why might they do that? Because people assume the kind of light that they usually see, and this will differ between people. For instance, people who get up and go to bed late will experience more artificial lighting than those who get up and go to bed early. It stands to reason that people expect the future to resemble what they have experienced in the past. Someone who has seen lots of horses but not a single unicorn might misperceive a unicorn as a horse, should they finally encounter one. This is what seems to be happening more generally: People who go to bed late are more likely to assume the lighting to be artificial than those who go to bed early.

In other words, prior experience does shape our assumptions, which shapes our conclusions (see diagram).

Conclusions can be anything that the brain makes available to our conscious experience - percepts, decisions, interpretation. Objects above dashed line are often not consciously considered when evaluating the conclusions. Some of them might not be consciously accessible. Note that this is not the only possible difference between individuals. Arguably, it might be that the brains are also different from the very beginning. That is probably true, but we know next to nothing about that. Note that differing assumptions are sufficient to bring about differences in conclusions in this framework. That doesn't mean other factors couldn't matter as well. Also note that we consider two individuals here. Once more than two are involved, the situation would be more complicated yet.

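To make this framework concrete, here is a toy Bayesian sketch (my illustration for this post, not the actual model from the paper): for a perfectly ambiguous stimulus, the evidence favors neither interpretation, so the conclusion is driven entirely by the prior – here, the assumption about the light.

```python
def posterior_artificial(prior_artificial, lik_artificial, lik_natural):
    """P(artificial light | stimulus) by Bayes' rule, for two hypotheses."""
    p = prior_artificial * lik_artificial
    q = (1 - prior_artificial) * lik_natural
    return p / (p + q)

# A perfectly ambiguous stimulus: both illuminants explain it equally
# well, so the likelihoods cancel and only the priors differ.
# (The prior values 0.8 and 0.2 are made up for illustration.)
night_owl = posterior_artificial(0.8, lik_artificial=0.5, lik_natural=0.5)
early_bird = posterior_artificial(0.2, lik_artificial=0.5, lik_natural=0.5)
print(night_owl, early_bird)  # -> 0.8 0.2: same evidence, opposite conclusions
```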

If this is true more generally, three fundamental conclusions are important to keep in mind, if one wants to manage disagreement positively:

1. There is no point in arguing about the outcomes – the conclusions. Nothing that can be said can be expected to change anyone’s mind. Nor is it about the evidence (what actually happened), as the interpretation of that is colored by the assumptions.

2. In order to find common ground, one would be well advised to consider – and question – the assumptions you and others make. Ideally, it would be good to trace someone’s life experience, which is almost certain to differ between people. Of course, this is almost impossible to do. Someone’s life experience is theirs and theirs alone. No one can know what it is like to be someone else. But pondering – and discussing – at this level is probably the way to go. Maybe trying to create common experiences would be a way to transcend the disagreement.

3. As life experiences are radically idiosyncratic, fundamental and radical disagreements should be expected, frequently. The question is how this disagreement is managed. If it is not managed well, history suggests that bad things might be in store for us.

Posted in Uncategorized

My policy on quote review

I understand the need of journalists to simplify quotes and make them more palatable to their audience. Academics have a tendency to hedge every statement. In fact, they would have to be an octopus to account for all the hands involved in a typical statement. From this perspective, it is fair that journalists would try to counteract the kind of nuance that their audience won’t appreciate anyway. However, I’m in the habit of choosing my words carefully, trying to make the strongest possible statement that can be justified based on the available evidence. If journalists then apply their own biases, the resulting statements can veer into the ridiculous. So I’m now quoted – all over the place – saying the damnedest things, none of which I actually said. Sometimes, the quote is the opposite of what I said. This is not ok.

Of course you can write whatever you want. But that doesn’t include what I allegedly said. Note also that I did give journalists the benefit of the doubt in the past. But they demonstrably – for whatever reason, innocent or willful – did not care much for quote accuracy.

Thus – from now on, I must insist on quote review prior to publication. This is not negotiable, as my reputation is on the line and – again – I’m in the habit of speaking very carefully. This policy is also mutually beneficial – wouldn’t any journalist with integrity be concerned about getting the quotes right?

In the meantime, one would be wise to assume the media version of Miranda: “Everything you don’t say will be attributed to you anyway.”

Posted in In eigener Sache | Leave a comment

Retro-viral phenomena: The dress over and over again

It is happening again. Another “dress”-like image just surfaced.


As far as I can tell, more or less the same thing is going on. Ill-defined lighting conditions in the images are being filled in by lighting assumptions, which differ between people due to a variety of factors, including which light they have seen more of. Just as described in my original paper.

As we get better at constructing these (images with ill-defined illumination), I expect more of them to pop up periodically. But people now seem more comfortable with (and less surprised by) the notion that we can see the colors of the same image differently.

The reason these things are still a thing is our tacit assumption that we all more or less see the same reality as everyone else.

So if I’m right (as most people presume of themselves) and someone else disagrees, they have to be wrong, for whatever reason. Color stimuli like this seem to produce categorically and profoundly differing interpretations. Which is what makes them so unsettling.

I think the same thing – more or less – applies to social and political questions. We take our experience at face value and fill the rest in with assumptions that are based on prior experience. As people’s experiences will differ, disagreements abound.

Which is why I find these stimuli so interesting and which is why I study them in my lab.

Hopefully, as these become more common, it will make people more comfortable with the notion that they can fundamentally – but sincerely – disagree with their fellow man.

Because people operate experientially. Here, they experience benign disagreement. In contrast to politics, where the disagreement is often no longer benign.

So this kind of thing could be therapeutic.

We could use it.

Posted in Philosophy, Psychology, Science, Social commentary | Leave a comment

Of psychopaths, musical tastes, media relations and games of telephone

Usually, I publicly comment on our work once it is published, like here, here or here.

So I was quite surprised when I was approached by the Guardian to comment on an unpublished abstract. Neuroscientists typically present these as “work in progress” to their colleagues at the annual Meeting of the Society for Neuroscience, which is held this year in Washington DC in November, and at which our lab has 5 such abstracts. Go to this link if you want to read them.

Given these constraints, the Guardian did a good job at explaining this work to a broader audience, emphasizing its preliminary nature (we won’t even attempt to publish this unless we replicate it internally with a larger sample of participants and songs) as well as some ethical concerns inherent to work like this.

What becomes apparent on the basis of our preliminary work is that we can basically rule out the popular stereotype that people with psychopathic tendencies have a preference for classical music and that we *might* be able to predict these tendencies on the basis of combining data from *many* songs – individual songs won’t do, and neither will categories as broad as genre (or gender, race or SES). To confirm these patterns, we need much more data. That’s it.

What happened next is that a lot of outlets – for reasons that I’m still trying to piece together – made this about rap music and a strong link between a preference for rap music and psychopathic traits.

As far as I can tell, there is no such link, I have never asserted there to be one and I am unsure as to the evidentiary basis of such a link at this point.

It is worth pointing out that I actually did not say most of the things I’m quoted as saying on this topic, or at least not in the form they were presented.

So all of this is a lesson in media communications – between scientists and the media, between media and media, between media and social media, and between social media and people (and all other combinations).

So it is basically a game of telephone: What we did. What the (original) media thinks we did. What the media that copies from the original media think we did. What social media thinks we did. What people understand we did. Apparently, all these links are “leaky” or rather unreliable. Worse, the leaks are probably systematic, accumulating systematic error (or bias) based on a cascade of differential filters (presumably, media filters by what they think will gain attention, whereas readers will filter by personal relevance and worldview). 

Given that, the reaction of the final recipient (the reader) of this research was basically dominated by their prior beliefs (and who could blame them), dismissing this either as obviously flawed “junk science” or so obvious that it doesn’t even need to be stated, depending on whether the media-rendering of the findings clashed with or confirmed these prior beliefs.

Is publicizing necessarily equal to vulgarizing?

I still think the question of identifying psychopaths based on more than their self-report is important. I also still think that doing so by using metrics without obvious socially desirable answers, like music taste, is promising – e.g. given their lack of empathy, psychopaths could be taken by particular lyrics, or given their need for stimulation, particular rhythms or beats could resonate with them more than average. But working all that out will take a lot more – and more nuanced – work.

And to those who have written me in concern, I can reassure you: No taxpayer money was spent on this – to date.

If you are interested in this, stay tuned.

Posted in In eigener Sache, Science, Social commentary | 2 Comments

Vector projections

Hopefully, this will clear up some confusions regarding vector projections onto basis vectors.
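The underlying operation is simple enough to sketch in a few lines. Here is a minimal, self-contained version in Python (rather than the Matlab used for the animation): projecting a vector onto a basis vector just scales that basis vector by a dot-product ratio.

```python
def dot(u, v):
    """Dot product of two vectors given as lists."""
    return sum(a * b for a, b in zip(u, v))

def project(v, b):
    """Projection of v onto b: (v·b / b·b) * b."""
    scale = dot(v, b) / dot(b, b)
    return [scale * x for x in b]

# Projecting onto an orthonormal basis recovers the coordinates,
# and the projections sum back to the original vector.
v = [3.0, 4.0]
e1, e2 = [1.0, 0.0], [0.0, 1.0]
print(project(v, e1))  # [3.0, 0.0]
print(project(v, e2))  # [0.0, 4.0]
```

For an orthonormal basis, the scale factor reduces to the dot product itself, which is why the projections onto the basis vectors are exactly the coordinates of the vector.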



Via Matlab, powered by @pascallisch






Posted in Matlab | Leave a comment

What should we call science?

The term for science – scientia (knowledge) – is terrible. Science is not knowledge. It is simply not (just) a bunch of facts. The German term “Wissenschaft” is slightly better, as it implies a knowledge creation engine – something that creates knowledge, emphasizing that this is a process (and the only valid one we have, as far as I can tell) that generates knowledge. But that doesn’t quite capture it either. Science does not prove anything, nor create any knowledge per se. Science has been wrong many times, and will be wrong in the future. That’s the point. It is a process that detects – via falsification – when we were wrong. Which is extremely valuable. So a better term is in order. How about uncertainty reduction engine? But incertaemeíosikinitiras probably won’t catch on. How about incertiosikini? Probably won’t catch on either.

Posted in Pet peeve, Science | 1 Comment

Predicting movie taste

There is a fundamental tension between how movie critics conceive of their role and how their reviews are utilized by the moviegoing public. Movie critics by and large see their job as educating the public as to what is a good movie and explaining what makes it good. In contrast, the public generally just wants a recommendation as to what they might like to watch. Given this fundamental mismatch, the results of our study that investigated the question whether movie critics are good predictors of individual movie liking should not be surprising.

First, we found that individual movie taste was radically idiosyncratic. The average correlation was only 0.26 – in other words, one would predict an average disagreement of 1.25 stars on a rating scale from 0 to 4 stars. That’s a pretty strong disagreement (the maximal RMSE possible is 1.7). Note that these are individuals who reported having seen *the same* movies.
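For intuition, the step from a correlation to an expected disagreement can be sketched as follows. Assuming two raters with equal means and equal rating standard deviations, the expected RMSE between them is sd · sqrt(2(1 − r)). The SD of roughly 1.03 stars used below is my back-of-envelope assumption for a 0–4 scale, not a number from the paper:

```python
import math

def expected_rmse(r, sd):
    """Expected RMSE between two raters with correlation r, equal means
    and equal rating standard deviations sd:
    E[(X - Y)^2] = 2 * sd^2 * (1 - r), so RMSE = sd * sqrt(2 * (1 - r))."""
    return sd * math.sqrt(2 * (1 - r))

# With an assumed rating SD of ~1.03 stars, r = 0.26 gives ~1.25 stars
print(round(expected_rmse(0.26, 1.03), 2))  # 1.25
```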

Interestingly, whereas movie critics correlated more strongly with each other – at 0.39, as had been reported previously – on average they are not significantly better than a randomly picked non-critic at predicting what a randomly picked person will like. This suggests that vaunted critics like the late Roger Ebert gain prominence not through the reliability of their predictions, but through other factors, such as the force of their writing.

What is the best way to get a good movie recommendation? In the absence of all other information, aggregators of non-critics such as the Internet Movie Database do well (r = 0.49), whereas aggregators of critics such as Rotten Tomatoes underperform, relatively speaking (r = 0.33) – Rotten Tomatoes is better at predicting what a critic would like (r = 0.55), suggesting a fundamental disconnect between critics and non-critics.

Finally, as taste is so highly idiosyncratic, your best bet might be to find a “movie-twin” – someone who shares your taste, but has seen some movies that you have not. Alternatively, companies like Netflix are now employing a “taste cluster” approach, where each individual is assigned to the taste cluster their taste vector is closest to, and the predicted rating would be that of the cluster (as the cluster has presumably seen all movies, whereas individuals, even movie-twins, will not). However, one cautionary note about this approach is that Netflix probably does not have the data it needs to pull this off, as ratings are provided in a self-selective fashion, i.e. over-weighting the movies that people feel most strongly about, potentially biasing the predictions.
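The “taste cluster” idea can be sketched in a few lines (a hypothetical toy version with made-up movies and ratings – not Netflix’s actual algorithm):

```python
def predict_rating(user_ratings, clusters, target_movie):
    """Assign the user to the closest taste cluster (mean squared
    distance over commonly rated movies; assumes at least one shared
    movie per cluster), then predict from that cluster's mean."""
    def dist(cluster):
        shared = [m for m in user_ratings if m in cluster]
        return sum((user_ratings[m] - cluster[m]) ** 2 for m in shared) / len(shared)
    return min(clusters, key=dist)[target_movie]

# Two made-up taste clusters that have "seen" every movie
clusters = [
    {"Alien": 4.0, "Amelie": 1.0, "Heat": 3.5},
    {"Alien": 1.0, "Amelie": 4.0, "Heat": 2.0},
]
me = {"Alien": 3.5, "Amelie": 0.5}   # my ratings; I haven't seen Heat
print(predict_rating(me, clusters, "Heat"))  # 3.5
```

The appeal of the cluster over the movie-twin is exactly the one noted above: the cluster mean covers every movie, while any single twin has gaps.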



Posted in In eigener Sache, Journal club, Psychology, Science | Leave a comment

Revisiting the dress: Lessons for the study of qualia and science

When #thedress first came out in February 2015, vision scientists had plenty of ideas why some people might be seeing it differently than others, but no one knew for sure. Now we have some evidence as to what might be going on. The illumination source in the original image of the dress is unclear. It is unclear whether the image was taken in daylight or artificial light, and whether the light comes from above or behind. If things are unclear, people assume that it was illuminated with the light that they have seen more often in the past. In general, the human visual system has to take the color of the illumination into account when determining the color of objects. This is called color constancy. That’s why a sweater looks largely the same inside a house and outside, even though the wavelengths hitting the retina are very different (due to the different illumination). So if someone assumes blue light, they will mentally subtract that and see the image as yellow. If someone assumes yellow light, they will mentally subtract it and see blue. The sky is blue, so if someone assumes daylight, they will see the dress as gold. Artificial incandescent light is relatively long-wavelength (appearing yellow-ish), so if someone assumes that, they will see it as blue. People who get up in the morning see more daylight in their lifetime and tend to see the dress as white and gold; people who get up later and stay up late see more artificial light in their lifetime and tend to see the dress as black and blue.

This is a flashy result. Which should be concerning, because scientific publishing seems to have traded off rigor for appeal in the past. However, I really do not believe that this was the case here. In terms of scientific standards, the paper has the following features:

*High power: > 13,000 participants

*Conservative p-value: Voluntarily adopted p < 0.01 as a reasonable significance threshold to guard against multiple comparison issues.

*Internal replication prior to publication: This led to a publication delay of over a year, but it is important to be sure.

*No excluding of participants or flexible stopping: Everyone who had taken the survey by the time the paper was submitted to the journal for review was included.

*#CitizenScience: As this effect holds up “in the wild”, it is reasonable to assume that it doesn’t fall apart outside of carefully controlled laboratory conditions.

*Open science: Shortly (once I put the infrastructure in place), data and analysis code will be made openly available for download. Also, the paper was published – on purpose – in an open-access journal.

Good science takes time and usually raises more questions than it answers. This is no exception. If you want to help us out, take this brief 5-minute survey. The more data we have, the more useful the data we already have becomes.


Posted in Journal club, Neuroscience, Psychology, Science | 5 Comments

Autism and the microbiome

The incidence of autism has been on the rise for 40 years. We don’t know why, but the terrible burden of suffering has spurred people to urgently look for a cause. As there are all kinds of secular trends over the same time period, correlating this rise in autism with corresponding changes in environmental parameters has led to the “discovery” of all kinds of spurious or incidental relationships.

When attempting to establish causal relationships, experimental work is indispensable, but unethical to do in humans.

Now, it has been shown that feeding mouse mothers a high-fat diet led to social behavioral deficits reminiscent of autism in their offspring. These deficits were associated with a disrupted microbiome, specifically low levels of L. reuteri. Restoring levels of L. reuteri rescued social behaviors, linked to increased production of oxytocin.

I’m aware of the inherent limitations of mouse work (does anything ever transfer?), but if this does (and I think it will – given recent advances in our understanding of the gut microbiome in relation to mental health), it will be transformational, not just for autism.

Here is a link to the paper: Microbial reconstitution reverses maternal diet-induced social and synaptic deficits in offspring.

Posted in Neuroscience, Nutrition, Psychology, Science | 1 Comment

A primer on the science of sleep

I’ve written about sleep and the need to sleep and how sleep is measured before, but in order to foster our #citizenscience efforts at NYU, I want to bring accessible and actionable pieces on the science of sleep together in one place, here.

1. How the brain regulates sleep/wake cycles

2. Regulating sleep: What can you do?

What you can do

3. Sleep: Why does it matter?

Sleep matters

4. What you can do right now if your baby has sleep problems

5. Common sleep myths

6. Sleep is an active process

Sleep is an active process

7. What are sleep stages?

Sleep stages

Click on the links if you want to read more.

If you’re curious what our marriage between #citizenscience, #datascience and #neuroscience is about, read this.


Posted in Life, Neuroscience, Psychology, Science | Leave a comment

Beyond free will

Some say that every time philosophy and neuroscience cross, philosophy wins. The usual reason cited for this? Naive and unsophisticated use of concepts and the language to express them within neuroscience. The prime exhibit is the mereological fallacy – the confusion of the part with the whole (by definition, people see, not the eye or the brain). And yes, all too many scientists are entirely uneducated in this regard, but “winning” might be a function of letting philosophy pick the battleground – language – which philosophy has privileged for over 2500 years (if for no other reason than the initial lack of empirical methods). There is no question that all fields are in need of greater conceptual clarity, but what can one expect from getting into fights with people who write just the introduction and discussion section, call it a paper, and have – unburdened by the need to run studies or raise money to do so – an abundance of time on their hands? Yet, reality might be unmoved by such social games. It needs to be interrogated until it confesses – empirically. There are no shortcuts. Particularly if the subject is as thorny as free will or consciousness. See here for the video.

Posted in Neuroscience, Pet peeve, Philosophy | 1 Comment

Explaining color constancy

The brain uses the spectral information of light waves (their wavelength mix) to aid in the identification of objects. This works because any given object will absorb some wavelengths of the light source (the illuminant) and reflect others. For instance, plants look green because they absorb short and long wavelengths, but reflect wavelengths in the middle of the visible spectrum. In that sense, plants – which perform photosynthesis to meet their energy needs – are ineffective solar panels: they don’t absorb all wavelengths of the visible spectrum. If they did, they would be black. So this information is valuable, as it allows the brain to infer object identity and helps with image parsing: different but adjacent objects usually have different reflectance profiles.

But this task is complicated by the fact that the mix of wavelengths reflected off an object depends on the wavelength mix emanating from the light source in the first place. In other words, the brain needs to take the illumination into account when determining object color. Otherwise, object identity would not be constant – the same object would look different depending on the illumination source. Illumination sources can contain dramatically different wavelength mixes, e.g. incandescent light with most of its energy in the long wavelengths vs. cool-light LEDs with a peak in the short wavelengths. Nor is this a recent problem due to the invention of artificial lighting: throughout the day, the spectral content of daylight changes – e.g. the spectral content of sunlight differs between midday and late afternoon. If color-perceiving organisms didn’t take this into account, the same object would look a radically different color at different times of day. So such organisms need to discount the illuminant, as illustrated here:

Achieving color constancy by discounting the illuminant
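To make “discounting the illuminant” concrete, here is a toy sketch (a simplified von-Kries-style calculation with made-up three-channel numbers, not a physiological model):

```python
def discount_illuminant(stimulus, illuminant):
    """Estimate surface reflectance by dividing the light reaching the
    eye by the (assumed) illuminant, channel by channel."""
    return [s / i for s, i in zip(stimulus, illuminant)]

# The same surface under bluish daylight vs. yellowish incandescent light
reflectance = [0.8, 0.6, 0.2]                  # "true" surface color (R, G, B)
daylight = [0.8, 1.0, 1.2]
incandescent = [1.2, 1.0, 0.6]

seen_day = [r * i for r, i in zip(reflectance, daylight)]
seen_bulb = [r * i for r, i in zip(reflectance, incandescent)]

# Discounting the correct illuminant recovers the same reflectance
# from very different retinal inputs (up to floating-point error).
print(discount_illuminant(seen_day, daylight))       # ≈ [0.8, 0.6, 0.2]
print(discount_illuminant(seen_bulb, incandescent))  # ≈ [0.8, 0.6, 0.2]
```

If the wrong illuminant is assumed, the same division recovers a shifted color instead – which is the proposed mechanism behind the dress.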

The details of how this process happens physiologically are still being worked out, but we do know that it happens. Of course, there are also other factors feeding into the constant color correction of the image performed by the organism. For instance, if you know the “true color” of an object, this will largely override other considerations. Try illuminating strawberries with a green laser pointer. The light bouncing off the strawberries will contain little to no long wavelengths, but the strawberries will still look red to you because you know that strawberries are red. Regardless of these considerations, we do know that color constancy matters quite a bit, even in terms of an assumed illuminant in the case of #thedress, when the illumination source is ill-defined:

Discounting an assumed illuminant explains what is going on with the dress.

Of course, things might not be as straightforward as that. It won’t always be perfectly clear what the illuminant is. In that case, the brain will make assumptions to disambiguate. A fair assumption would be that it is like the kind of illuminant one has seen most. For most of human history – and perhaps even today – that means sunlight. In other words, humans could be expected to assume illumination along the daylight axis (over the day), which means short-wavelength illumination, which could account for the fact that most people reported seeing the dress as white and gold.

Posted in Neuroscience, Psychology, Science | Leave a comment

The neuroscience of violent rage

Violent rage results from the activation of dedicated neural circuitry that is on the lookout for threats that were existential in prehistoric life. Life in civilization is largely devoid of these threats, but the system is still in place, triggering what largely amounts to false alarms with alarming frequency.

We are all at the mercy of our evolutionary heritage. This is perhaps nowhere more evident than in the case of violent rage. Regardless of situation, in the modern world, responding with violent rage is almost never appropriate and almost always inadvisable. At best, it will get you jailed; at worst, it might well get you killed.

So why does it happen so often, and in response to seemingly trivial situations, given these stakes? Everyone is familiar with the frustrations of daily life that lead to these intense emotions, such as road rage or fury when stuck in a long checkout line.

The reason is explored in “Why we snap” by Douglas Fields. Briefly, ancient circuitry in the hypothalamus is always on guard for trigger situations that correspond to prehistoric threats. Unless quenched by the prefrontal cortex, this dedicated sentinel system kicks in and produces a violent rage response to meet the threat. The book identifies 9 specific triggers that activate this system, although they arguably boil down to 3 existential threats in prehistoric life:

1) Existential threats to the (increasingly extended) physical self (attacks on oneself, mates, family, tribe)

2) Existential threats to the (increasingly extended) social self (insults against oneself, mates, family, tribe) and

3) Existential threats to the integrity of the territory that sustains these selves (being encroached on or being restrained from exploring one’s territory by others or having one’s resources taken from one’s territory).

Plausibly, these are not even independent – for instance, someone could interpret the territorial encroachment of a perceived inferior as an insult. Similarly, the withholding of resources, e.g. having a paper rejected despite an illustrious publication history could be taken as a personal insult.

The figure depicts the rage circuitry in the hypothalamus – several nuclei close to the center of the brain (stylized)

Understood in this framework, deploying the “nuclear option” of violent rage so readily starts to make sense – the system thinks it is locked in a life and death struggle and goes all out, as those who could not best these situations perished from the earth long ago, along with their seed.

Of course, in the modern environment, almost none of these trigger situations still represent existential threats, even if they feel like it, such as being stuck at the post office.

In turn, maybe we need to start respecting the ancient circuitry that resides in all of us in order to make sense of seemingly irrational behavior, perhaps even incorporate our emerging understanding of these brain networks into public policy. In the case of violent rage, the stakes are high, namely preventing the disastrous outcomes of these behaviors that keep filling our prisons and that kill or maim the victims.

Read more: Unleashing the beast within

Posted in Neuroscience, Psychology, Science | 1 Comment

Brighter than the sun: Introducing Powerscape

Statistical power needs are often counterintuitive and underestimated. This has deleterious consequences for a number of scientific fields. Most science practitioners cannot reasonably be expected to make power calculations themselves. So we did it for them and visualized this as a “Powerscape”, which makes power needs immediately obvious.
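To give a flavor of why power needs are so counterintuitive, here is a minimal simulation sketch (a toy z-test with made-up parameters, not the actual Powerscape code): even a “medium” effect with 20 participants per group is detected only about a third of the time.

```python
import random

def power_sim(n, effect, trials=2000, alpha_z=1.96):
    """Estimate statistical power by simulation: draw `trials`
    two-group experiments with `n` participants per group and a true
    mean difference of `effect` (in SD units), then count how often a
    simple two-sided z-test on the group means reaches significance."""
    hits = 0
    for _ in range(trials):
        a = [random.gauss(0, 1) for _ in range(n)]
        b = [random.gauss(effect, 1) for _ in range(n)]
        diff = sum(b) / n - sum(a) / n
        se = (2 / n) ** 0.5           # SE of the mean difference (sigma = 1 known)
        if abs(diff) / se > alpha_z:  # 1.96 -> two-sided alpha = 0.05
            hits += 1
    return hits / trials

random.seed(1)
# A "medium" effect (d = 0.5) with 20 per group is badly underpowered:
print(power_sim(20, 0.5))  # roughly 0.35 – far below the conventional 0.8
```

Sweeping `n` and `effect` over a grid of such simulations yields exactly the kind of landscape the Powerscape visualizes.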



Read more here:

Posted in Psychology, Science | Leave a comment

Tracking the diversity of popular music since 1940

This is a rather straightforward post. Our lab is doing research on music taste, and one of our projects involves sampling songs from the Billboard Hot 100. It tracks the singles that made it to #1 in the US charts (and for how long they stayed on top), going back to 1940.

Working on this together with my students Stephen Spivack and Sara Philibotte, we couldn’t fail to notice a distinct pattern in the diversity of music titles over time. See for yourself:

Musical diversity over time. Note that the data was smoothed by a 3-year moving average. A value of just above 10 in the early 40s means that an average song was on top of the charts for over 4 weeks in a row. The peak levels in the mid-70s mean that the average song was only on top of the charts for little more than a week during that period.

Basically, diversity ramped up soon after the introduction of the Billboard charts, and then had distinct peaks in the mid-1960s, mid-1970s and late 1980s. The late-1990s peak is already much diminished, ushering in the current era of the unquestioned dominance of Taylor Swift, Katy Perry, Rihanna and the like. Perhaps this flowering of peak diversity in the 1970s, 1980s and 1990s accounts for the distinct sound that we associate with these decades?

Now, it would of course be interesting to see what drives this development. Perhaps generational or cohort effects involving the proportion of youths in the population at a given time?

Note 1: “Diversity” here literally means “number of different songs over a given time period” – a temporal turnover density. It is quite possible that these “different” songs are actually quite similar, but there is no clear metric by which to compare songs, or a canonical space in which to compare them. Pandora has data on this, but it is proprietary. So if you prefer, “annual turnover rate” would probably be more precise.
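To make the metric concrete, here is a toy sketch (with made-up chart data, not our actual pipeline) of how diversity-as-turnover and the 3-year moving average can be computed:

```python
def diversity_per_year(weekly_number_ones):
    """'Diversity' as defined in Note 1: the number of distinct #1
    songs per year, from a {year: list of weekly chart-toppers} map."""
    return {year: len(set(songs)) for year, songs in weekly_number_ones.items()}

def moving_average(values, window=3):
    """Trailing moving average used to smooth the yearly counts."""
    return [sum(values[max(0, i - window + 1):i + 1]) /
            (i + 1 - max(0, i - window + 1))
            for i in range(len(values))]

charts = {
    1940: ["A"] * 26 + ["B"] * 26,               # 2 distinct #1s: low diversity
    1941: [f"song{i // 4}" for i in range(52)],  # a new #1 every 4 weeks
}
print(diversity_per_year(charts))  # {1940: 2, 1941: 13}
```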

Note 2: My working hypothesis as to what drives these dynamics (which are notably phase-locked to decade) is some kind of overexposure effect. A new style comes about that coalesces at some point. Then, a dominant player emerges, until people get bored of the entire style, which starts the cycle afresh. A paradigmatic case of how decade-specific the popularity of music can be is Phil Collins.

Note 3: Similarity of music might be hard to quantify objectively. Lots of things sound similar to each other, e.g. the background beat/subsound in Blank space and the background beat/subsound in the Limitless soundtrack.

Posted in Science, Social commentary | 4 Comments

Mary revisited: The Brian problem

Generations of philosophers have been fascinated by what has been termed the “Mary problem”. In essence, Mary is the world’s foremost expert on color vision and knows everything that there is to know about it. The catch is that she is (depending on the version) either color-blind or has never experienced color before. The question is (depending on the version) whether she is lacking something or whether she would gain new experiences/knowledge if she were to experience color. Supposedly, this is to show us interesting things about physicalism and qualia.

In reality, it only shows us how philosophy is fundamentally limited. The philosophical apperception of the world (as well as interactions among philosophers) is entirely language-based. Needless to say, this is an extremely impoverished way to conceive of reality. Language has all kinds of strange properties, including being inherently categorical. Its neural basis is poorly understood but known to be rather extraneous – most animals don’t have it, and even in the higher primates, most brain regions don’t involve language processing.

Yet, all of Mary’s knowledge is language-based. So yes, if she were to see a color, the part of her visual cortex that processes spectral information (assuming that part of her cortex was intact) would be activated and she would experience the corresponding color, for reasons that are still not well known.

Brian’s brain

But the heart of the matter – and the inherent shortcomings of an understanding that is entirely language-based (invisible to philosophers, as this is something shared by all of philosophy) – can be illustrated by a new problem. So let’s add some color to the Mary problem. Here is the Mary problem reimagined – I call this the “Brian problem”:

Brian is the world’s foremost expert on all things sex. He has read every single paper that was ever published in sexology and he also has a keen grasp of the biological and physiological literature. Yet, he has never had sex. Now, Brian has a chance to have sex – would this give him an opportunity to learn anything that he doesn’t already know, or have an experience that he hasn’t already had? If so, why?

Put in this way, a sample of 100 undergrads were quite clear in their interpretation:

The Brian problem

This nicely illustrates the fundamental problem with using language to assess the state of reality: language has face validity to other people, because it is our mode of thinking, and probably evolved for reasons of social coordination. So arguments that do not correspond to an apt description of reality in any other way might still seem compelling (particularly to unrepresentative subcommunities with a shared culture).

If this is not clear yet: Imagine there being a rat who has never experienced anger but has read every paper on anger. Then you stimulate the ventromedial hypothalamus of this rat with optogenetic methods. Does the rat experience something new? If so, why?

There are really infinite variations of this. Say – for instance – Stephen has read every paper on LSD. He has an intricate understanding of how the drug works. He knows everything there is to know about serotonin receptors and their interactions with ligands. He even understands mTOR, GCaMP and knows everything there is to know about phosphorylation as well as methylation (although that might not be directly relevant). Yet, he has never taken LSD. Now, Stephen is about to take some LSD. Will he experience something that he has not experienced before? Will he learn something? Is this really that hard to understand if one has a lot of prior exposure to philosophical thought?

Posted in Pet peeve, Philosophy | 6 Comments

Pascal’s Pensees, 5 years on

Before October 25th, 2010, I had no social media presence whatsoever – I wasn’t on Twitter, didn’t have a blog, and G+ wasn’t around yet. Frankly, it hadn’t occurred to me before, but it was one of the requirements to serve as a “Social Media Representative” (Neuroblogger, in short) for the Society for Neuroscience, to cover the 2010 Annual Meeting, which I did.

Outreach efforts were also in an embryonic stage at the Society itself at that point, and by picking a few people to cover a giant meeting, one can either give new people a chance to establish a social media presence, boost existing bloggers or do a mix of both. I’m still grateful to the Society for picking me, but predictably some established bloggers were somewhat upset.

In hindsight, I think the choice by the Society to broaden the tent was justifiable as it pushed me and some others to start outreach efforts of any kind. Efforts which I aim to continue into the far future. At the same time, most of the critical blogs are now defunct.

That said, I still don’t think the “neuroblogger” concept itself is particularly well thought out. People attending the meeting will have better things to do than read a blog. People who didn’t go will have their reasons. The neurobloggers themselves will (in my experience) have some tough choices to make between attending the meeting and writing dispatches from it.

What might work better is to have a single “neuroblogger” platform (hosted on the SfN website) where people write contributions from the meeting, squarely aimed at the general public (to satisfy the outreach mission) – basically expert commentary on things that are just over the horizon, but without the breathless and unsubstantiated hype one usually gets from journalists. Perhaps have a trusted, joint Twitter account, too.

Anyway. Thanks for giving me my start and I hope to be able to improve on the mission as time goes on. Semper melior.

Posted in Conference, In eigener Sache, Neuroscience | 3 Comments