Explaining why some people see “the sneaker” as pink, even though its pixels are grey

  1. This has nothing to do with being left- or right-brained. People love brain-based explanations, and this phenomenon has one, but a different one – see below.
  2. “This thing, yet again” seems to unsettle some people, but it is neither silly, nor should you feel bad that you see things differently than others. What is going on sheds light on how the mind works normally to bring about perception.
  3. Most people think they see things just how they are, but a lot of things have to be taken into account to make that happen reliably.
  4. For instance, an object that appears to be red could look like that either because its surface properties reflect red light*, or because it is illuminated by red light to begin with.
  5. Our brain contains mechanisms that take the kind of light into account to color-balance the perceptual experience. In other words, if everything is bathed in red light, the brain can take that into account. This allows the brain to render the appearance of the object consistently, regardless of changing light colors. This process is called color constancy.
  6. In some images – like “the sneaker” – a pink sneaker was illuminated with complementary – green – light to make it appear grey. Context is missing, so people cannot apply color constancy mechanisms in the usual way.
  7. However, there are subtle cues – shoelaces are usually white, so the green light will render them green. Some people – who know that shoelaces are usually white – can unconsciously take that into account to deduce that the light must have been green and color-correct to see the sneaker as pink (a minimal numerical sketch of this correction follows below the list).
  8. We know this because, whereas displays like the sneaker, the dress, the Adidas jacket or “Laurel & Yanny” came about accidentally, we applied these principles – which we call SURFPAD – to design and create a new display: the crocs. And we are the first ones to do so successfully.
  9. We used green light to illuminate pink crocs to make the pixels appear grey. But that light will make the socks look green, so those who know that socks like these tend to be white can account for that and perceive the crocs as pink. This knowledge about the whiteness of these socks seems to come from experience. Moreover, people who are able to do this – see the sneaker as pink even though the pixels that make it up are grey – tend to be the same people who see the crocs as pink.
  10. The reason it is not frivolous to ponder these phenomena is that they highlight the tremendous mental work – such as taking experience and context into account – that goes into normal perception, which we typically take for granted. Moreover, in these highly polarized times, it highlights why you might disagree with someone even though you both have access to the same evidence. If someone looks bad in the media, how do you know whether they are actually bad or whether they are merely being put in a bad light by the media? Where you come down on that depends on how well you know the person vs. how much you trust the media.
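To make point 7 concrete, here is a minimal numerical sketch (in Python; the RGB values are invented for illustration and are not measured from the actual sneaker image) of how a known-white reference that the illuminant renders greenish can be used to estimate that illuminant and rebalance a grey-looking pixel back toward pink:

```python
import numpy as np

# Hypothetical RGB values in [0, 1]; purely illustrative, not measured from the image.
laces_observed = np.array([0.55, 0.75, 0.55])   # known-white laces, rendered greenish
sneaker_pixel  = np.array([0.60, 0.60, 0.60])   # a sneaker pixel that measures as grey

# If the laces are assumed to be white, their observed color is itself an
# estimate of the illuminant (a von Kries-style diagonal correction).
illuminant_estimate = laces_observed / laces_observed.max()

corrected = sneaker_pixel / illuminant_estimate
corrected = corrected / corrected.max()

print(corrected.round(2))  # [1.0, 0.73, 1.0]: red and blue now exceed green -> pinkish
```

Observers who do not bring the assumption that the laces are white have no basis for this correction, and are left with the grey pixel values as given.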

If you have a couple of minutes and want to help us get to the bottom of some of the more subtle aspects of sneakers, crocs and the like, you can click here to donate your data.

What color are these crocs?

*We are aware that lights don’t have colors. Notoriously, lights are not themselves colored, as the color experience is created in the brain. However, light – a form of electromagnetic radiation – has a frequency, which corresponds to a certain wavelength. Humans see light in a relatively narrow wavelength range, typically between about 400 and 700 nm. Radiation with longer wavelengths (think of a laser at 650 nm) is typically perceived as red, whereas radiation at 441 nm is perceived as blue. Generally, light with longer wavelengths appears reddish, light with shorter wavelengths bluish. Being aware of all that, we say “red light” as shorthand for light that contains power at predominantly long wavelengths, to make for a more concise text, much like neuroscientists say that neurons have “tuning preferences” (say, for stimuli of a certain orientation or spatial frequency), being fully aware that people have preferences, whereas neurons (lacking agency) do not. In other words, most neuroscientists say that neurons have preferences as a shorthand to make communication more efficient; they are not committing a mereological fallacy, as they are sometimes accused of by philosophers. The same applies here to our use of colored lights or pink shoes – the “pink shoe” is shorthand for a shoe that appears pink to most observers without visual impairments and under typical lighting conditions.


Exploring the roots of disagreement with crocs and socks

Pascal Wallisch & Michael Karlovich

The degree of polarized disagreement about current events is at an all-time high, and rising.

So we need to understand disagreement better in order to avoid disagreeable results.

A key problem when studying discord in politics or economics is that all issues are loaded – people have entrenched positions that might make it hard for them to accept some potential conclusions of such an investigation.

One viable research strategy to circumvent this problem is to explore perceptual disagreements instead. These are arguably sufficiently free of preconceptions – innocent enough – that people are open-minded to the outcomes of such research.

Fortuitously, we were blessed with the dress – an image that evokes vehement disagreement about perception.

However, this image – and others like it – was mostly considered a mere curiosity.

This is a fair point – until now, no one was able to intentionally create such displays, so it was unclear whether the disagreement about the colors of the dress has significance beyond the idiosyncratic quirkiness of that particular image, or if there are wider implications.

We derived a principle underlying the nature of disagreement, which allows us to design perceptually ambiguous displays at will, and – in turn – understand how disagreement comes about in general.

Let’s illustrate this general principle with a specific example, the case of color, and – even more specifically – a particular type of footwear, crocs. We first create uncertainty about the color of the crocs by removing any cues that would be present under typical viewing conditions. We then illuminate the crocs with colored light so that they appear as some shade of grey. We finally add a second object with a characteristic known color – like a white tube sock – which takes on the color of the lighting, so that its appearance is equally consistent with a white sock under colored light or a colored sock under white light.

This – in turn – creates the disagreement. Some observers will take the appearance of the object at face value and perceive grey crocs with colored socks. Others will remember that socks like that are usually white and use this subtle cue to calibrate the lighting of the overall display, perceiving pink crocs – as they would appear under normal lighting – with white socks (see gif).
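As a rough intuition for why this works, here is a minimal sketch of the forward direction, under a simple multiplicative model of reflectance and illumination (all numbers are invented for illustration):

```python
import numpy as np

# Invented reflectance and illuminant values, for illustration only.
pink_reflectance = np.array([0.9, 0.5, 0.7])   # reflects red and blue more than green
green_illuminant = np.array([0.5, 0.9, 0.6])   # supplies mostly "green" light

rendered_crocs = pink_reflectance * green_illuminant
rendered_socks = np.array([1.0, 1.0, 1.0]) * green_illuminant  # a white sock

print(rendered_crocs)  # ~[0.45, 0.45, 0.42]: roughly equal channels, i.e. grey
print(rendered_socks)  # [0.5, 0.9, 0.6]: clearly greenish - the telltale cue
```

The same greenish illuminant that neutralizes the pink surface leaves a clearly greenish trace on a white sock – which is exactly the subtle cue that some observers exploit and others ignore.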

We call this principle SURFPAD (Substantial Uncertainty combined with Ramified or Forked Priors and Assumptions yields Disagreement). We used it to create several color ambiguous displays of crocs and surveyed a large number of observers about their perceptions.

We found that observers indeed disagree about the color of the crocs and that the way an individual observer perceives the crocs depends on how they interpret the socks. Observers who think the socks are white – despite them objectively appearing colored – are likely to see the crocs as if they were illuminated with natural light (pink), whereas those who see the socks as colored don’t. In turn, the propensity to see the socks as white in the first place was linked to one’s experience with these socks. Finally, we show that the individual perception of the crocs has no bearing on how someone sees the dress, highlighting that croc perception involves a different kind of assumption – assumptions about fabrics, not assumptions about light, like in the dress. Put differently, this is not just a warmed over dress effect – it is superficially similar, but separate and novel.

There are several wider implications of this research. First, as you can see for yourself, the effect is stronger if you focus on the red dot – or on a part of the socks instead of the crocs. This could reflect the fact that the color signal coming from photoreceptors is stronger in the fovea (where there are more cones) than in the periphery. Of course, if you already perceive the crocs as grey even if they are illuminated with green light, they should not change color subjectively.

This brings us to some of the more psychological possibilities one could consider. For instance, most people think they see things how they really are. However, in this case, that presents a conundrum: When presenting displays created with SURFPAD principles to observers, we found that whereas some saw the crocs as the pixels really appear on their monitors – grey – others saw the crocs as pink – the color they really are, as the manufacturer intended them to appear when viewed under everyday lighting conditions. So does “really” mean “grey”, as an analysis of the pixels in Photoshop would yield, or does “really” mean “pink”, the color that the manufacturer intended to sell? Related to this is the question of whether someone sees objects in terms of their isolated component elements – the grey pixels – or the colored crocs as wholes in the context of particular lighting. Moreover, this could touch on another personality difference, namely whether someone (perceptually) “lives in the past” – by taking information from prior experience into account more strongly than those who don’t.

What all of these considerations have in common is that they require further research –  these tendencies could reflect general personality characteristics, but it is also possible that these individual effects do not transfer to other displays.

As it is, this research does suggest that perception and cognition are more closely intertwined than previously believed, as one’s beliefs can demonstrably color perception. That is important because if cognition plays a large role in perception, it is plausible that perceptual principles in turn underlie cognitive phenomena. Our findings open up a new avenue of research – instead of studying cognition in a siloed fashion (i.e. studying memory completely independently from studying other cognitive functions like attention or perception), as has been the norm, we can now attempt to use perception as a bridgehead to gain traction on more elusive cognitive phenomena.

For instance, it is clear that cultural effects play a large role in shaping the human experience. However, culture is extremely hard to study. In contrast, studying culture on a perceptual footing – as a set of shared experiences and assumptions – is much more tractable. Imagine a culture where people wear one kind of garment – say, white socks – and another culture where they wear black socks. We now have clear predictions as to what people from these cultures would perceive if they were confronted with displays engineered with SURFPAD principles.

But the real value of this principle might lie in a deeper understanding of disagreement about more controversial topics. While we need to study this directly, it is quite conceivable that the same principles that govern perceptual disagreement are those that underlie conceptual disagreement. It has been the source of considerable uneasiness that people with unorthodox but dearly held beliefs that are central to their identity (such as anti-vaxxers or flat-earthers) are essentially immune to being convinced of alternative views. Introducing challenging evidence does not change their beliefs. If anything, it strengthens them. This might appear puzzling, but makes complete sense in a SURFPAD framework.

Consider the following hypothetical. Imagine that every day, newspapers write an article pointing out that a certain politician is a bad person. Naively, one could think that if the media is doing that, they will paint the figurative socks as really, really green, and readers should be swayed and start to realize that the politician in question is indeed a bad person. And this would work, if people had no preconceptions. But they do. For instance, some people know that the socks are actually white. For those people, seeing really green socks will make them conclude that the lighting is off – it will just allow them to better estimate how off it is. And people will have no problem believing that, as they know that anyone can be put in a bad light, and levels of trust in the media are rather low, so people are quite ready to believe that the media would alter the lighting.

Note that in this model, no updating of the prior beliefs takes place, even with repeated exposure, as the socks are still seen as white, the crocs still as colored and the lighting is still discounted, in a cascade of polarized interpretations. If anything, the belief in the color of the socks and the biased light is strengthened.

So what is one to do if one wants to change someone’s mind, particularly about dearly held beliefs?

Our research suggests that simply presenting new evidence is not going to be compelling, as it will be interpreted in light of the pre-existing framework of assumptions. Instead, there are three potentially effective avenues for changing someone’s mind. First, highlighting assumptions directly and questioning why they are made in the first place promises some success. Second, one could address the potential for confusion between sock and light color directly – and offer a more compelling alternative scenario, i.e. pointing out why in this particular situation, it is more likely that the socks are actually green, and that the lighting is still white. Third, maybe we should incentivize a culture that discourages – not encourages – the ramification of priors.

To summarize, it is clear why the brain has to make these assumptions in order to operate effectively in an uncertain world. The necessary information to act is not always available, so it is prudent to make educated guesses. Under normal conditions, this works reasonably well, which is why we are all still here. However, what is nefarious about this is that your brain does not tell you when it quietly jumps to – unwarranted – conclusions by over-applying assumptions, much like autocorrect is often largely aspirational – it isn’t actually correct all that often.

In the area of politics, this is dangerous, as different people will apply different sets of assumptions (or priors), and there are now entire industries dedicated to the ramification of these priors. We have to come to terms with the ongoing and intentional forking and ramification of priors and its deleterious impact on civil discourse one way or another if we are to avoid the downside of this process. Given uncertainty and forked priors, disagreement might be inevitable, yet conflict might be avoidable. We suggest achieving this by bringing about a new culture of disagreement on the basis of SURFPAD principles.

 

If you want to contribute to a follow-up of this research, you can do so here.


How effective is cultural transmission?

In order to learn from history, one has to know about it first. Even then, it is hard to do. Arguably, human nature is constant, but how it manifests is ever changing, as circumstances change, mostly due to innovation. This has led some to observe that history rhymes more than it repeats – which is itself hard to assess, as there is only one human history; observing counterfactuals is impossible, as there is neither a control group nor the possibility of experiments. All of which makes “lessons from history” far more ambiguous than one would like.

But I rather seriously digress, and in the very first paragraph, no less. Back to the question, which could be phrased as “How aware are we of things that happened before we were born?”, “How well will things that are popular today be remembered in the future?”, or “How fleeting is fame, and what determines which ideas will stand the test of time?”

Anecdotally, the answer to the first question is “not very”. Every year, Beloit College publishes an updated “Mindset List” that attempts to highlight all the things that students who are now entering college are completely unaware of: they have never used a typewriter or floppy disks, don’t know about VHS or cassette tapes, and so on. While amusing, such considerations raise deeper scientific questions: How well do cultural artifacts age in the collective memory? Will future generations have any awareness of things that are popular today?

Of course, there is longstanding interest in the question of what is in the cultural awareness, as exemplified by wild but popular speculations about archetypes, but scientific answers have been wanting until relatively recently. 

One such investigation pertains to American presidents and Chinese leaders, both picked because the list of entries is comprehensive and known. In brief, the results show that collective memory mirrors individual memory – there are strong primacy and recency effects. In terms of American presidents, this manifests as knowing the first few and the most recent few, but most people would be hard-pressed to name an American president from the middle of the 19th century other than Lincoln.

This paints a rather depressing picture of cultural transmission. As time progresses, most events will be “somewhere in the middle”, so are we doomed to live the cultural version of Eternal Sunshine of the Spotless Mind, with relatively little transmission between generations?

A somewhat more positive picture emerges when considering cultural artifacts that people seek out, e.g. music. Looking at one-time number-1 hits, we could show that recognition of these songs does not hit zero about a decade before our participants were born – which is what one would get if one extrapolated the steep drop-off implied by the recency effect.

Instead, recognition settles on a rather stable “plateau” at a moderate level that extends for about three decades. Averaging is somewhat misleading, however, as there is tremendous inter-song variability in this period: some songs are as well recognized as if they were released yesterday, whereas others are completely forgotten.

What drives this difference seems to be exposure, as measured by Spotify playcounts. In other words, we can’t tell whether music from the 60s to 90s was truly special, or whether recognition rates for things people seek out (music) are higher than for things people don’t (political leaders) in general.

The good news is that cultural transmission seems to work better than previously thought, at least for things that people are seeking out. Whether music is a fluke or not in this regard could be investigated by looking at other things like popular movies or books.


Pascal’s Wager 2.0

Obviously, you can believe whatever you want about metaphysics, as there is no observable reality to constrain you. That said, I believe the usual debates about theism vs. atheism miss the point. The real issue is not whether the world was created by a god – with endless debates about who has the burden of proof: theists asserting that there is a god, or atheists that there is not, and others discussing or disputing specific characteristics of this god. This already casts the issue in terms that are comprehensible to human understanding, and there is no a priori reason why we should presuppose that reality is amenable to that – what is really going on might be much more ineffable. Instead, I propose that the real issue is whether the world is meaningful or not. In other words, does existence have a purpose?

I would say so – as it is awfully specific. My mind is linked to my brain, and not yours. Why is it today, right now? It is also profoundly strange – you just got used to it. What exactly did you wake up from, when you woke up this morning? And what happened to yesterday – where did the time go? And not everything about this reality is observable. For instance, mathematical objects (e.g. numbers) are not observable in principle, but mathematics has – Gödel notwithstanding – excellent and rigorous methods to assess the truth status of mathematical statements. Also, why does the universe have a very specific content of mass and energy, and in its current mix/configuration? Why these forces, and not others? Everything about our reality seems to be quite specific. Why even have rules in the first place, and where is the computational overhead of the universe that decides what happens next? What even does it mean to be “next”? Of course you could say that there is a larger – unobservable – multiverse that explains these things, but that is, strictly speaking, also a metaphysical (in principle unfalsifiable) statement about reality.

In other words, the fundamental question is whether the world is meaningful or not. Here is where Pascal’s Wager 2.0 comes in. It literally costs you nothing to assume that the universe is meaningful – or has a purpose – because you lose absolutely nothing if you are wrong. Because then, you are wrong, but nothing matters anyway, as everything is genuinely pointless. You can argue that this is just as cynically utilitarian and therefore without moral value as the original wager, but I don’t think you can argue its basic validity.

So to summarize, there is no way to tell whether reality is meaningful or not, but you lose nothing by assuming it is. The catch is that it is probably impossible to ascertain the purpose of a system from within the system. Of course, this state of affairs is so vexing that it points to there being a purpose – what kind of system could hold such conundrums for no reason at all? It would be a pitiful waste indeed.

This is – by the way – a good example of dialectics:

Level 0: Believe what people around you believe/your culture raised you to believe

Level 1: Pascal’s Wager – belief is not arbitrary; it is rational to believe in god, due to the asymmetric utility of outcomes (a toy payoff sketch follows after Level 3 below). If you believe that there is a god and are wrong, you lose little to nothing, but if you do not believe and are wrong about that, you lose a lot (by going to hell).

Level 2: That’s a fallacy because “belief in god” is not specific enough. Based on the wager, it would be rational to adopt (or create) a religion with beliefs that spell out the greatest discrepancy in outcomes between believers and nonbelievers in terms of the afterlife (ultimate rewards vs. ultimate punishment). It also raises the issue of moral desert, putting the moral value of someone’s actions in question – do even good actions have any moral value, if they are ultimately made for entirely selfish reasons?

Level 3: Pascal’s Wager 2.0 – it is rational to believe that reality/existence has a purpose/is meaningful because you really do not lose anything if it turns out that you are wrong. Because then, nothing matters anyway. In addition – from a purely utilitarian perspective –  as suffering necessarily outweighs pleasure for the vast majority of beings in this plane of existence, having no meaning to make up for this deficit is truly a brutal way of life. Of course, sentient beings torturing each other forever might be the purpose of this place (it would certainly be consistent with a lot of the evidence), as there is no guarantee that the purpose is a good purpose. Just that it is not entirely meaningless.
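For what it’s worth, the asymmetry that Level 1 trades on can be written down as a toy expected-utility calculation (the payoff numbers below are arbitrary illustrations, not anything Pascal specified):

```python
# Toy expected-utility version of the original wager (Level 1).
# The payoff numbers are arbitrary illustrations, not anything Pascal specified.
p_god = 0.5  # any nonzero probability will do
payoff = {
    ("believe", "god exists"):       10,   # salvation
    ("believe", "no god"):            0,   # little or nothing lost
    ("not believe", "god exists"): -100,   # "you lose a lot"
    ("not believe", "no god"):        0,
}

for action in ("believe", "not believe"):
    eu = (p_god * payoff[(action, "god exists")]
          + (1 - p_god) * payoff[(action, "no god")])
    print(action, eu)
# "believe" has the higher expected utility for any p_god > 0 -
# that is the asymmetry the wager exploits.
```

Level 3 makes the analogous move one level up: if the “meaningless” branch pays out nothing no matter what you do, assuming meaning costs nothing.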


Is the overuse of low memory data types to blame for much of tribalism and overall nonsense one encounters online and offline?

The notion of “data types” is probably the most underrated concept outside of computer science that I can think of right now. Briefly, computers use “typed variables” to represent numbers internally. All numbers are internally represented as a collection of “binary digits” or “bits” (a term introduced by the underrated genius John Tukey, who also gave us the HSD test and the fast Fourier transform, among other useful things), more commonly known to the general public as “zeroes and ones”. An electric switch can either be on (1) or off (0) – usually implemented by voltages that are either high or low. So as a computer ultimately represents all numbers as a collection of zeroes and ones, the question is how many of them are used. A set of 8 bits makes up a “byte”, usually the smallest unit of memory. So with one byte of memory, we can represent 2^8 or 256 distinct states of switch positions, i.e. from 00000000 (all off) to 11111111 (all on), and everything in between. And that is what data types are building on. For instance, an 8-bit integer takes up one byte in memory, so we can represent 256 distinct states (usually the numbers from 0 to 255) with it. Other data types such as “single precision” floating point numbers take up 32 bits (= 4 bytes) and can represent over 4 billion states (2^32), whereas “double precision” numbers are represented by 64 bits (= 8 bytes of memory) and can represent even more states (2^64). In contrast, the smallest possible data type is a Boolean, which can in principle be represented by a single bit and can only represent two states (0 and 1); it is often used when checking conditionals (“if these conditions are true (= 1), do this. If they are not true (or 0), do something else”).
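For readers who want to see these numbers rather than take them on faith, here is a minimal sketch using NumPy (the choice of library is mine, purely for illustration):

```python
import numpy as np

# A minimal sketch: how many bit patterns a few common data types can
# encode, and how much memory they use.
for name in ("uint8", "float32", "float64"):
    bits = np.dtype(name).itemsize * 8           # bytes -> bits
    print(f"{name:>8}: {bits} bits, 2**{bits} = {2**bits:,} distinct states")

# Note: NumPy stores each boolean as a full byte, even though a single bit
# would logically suffice for the two states True/False.
print("bool:", np.dtype(bool).itemsize, "byte per element")
```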

Each switch by itself can represent 2 states: 0 (“false”, represented by voltage off) and 1 (“true”, represented by voltage on). Left column: This corresponds to the k in 2^k. Right column: This corresponds to the number of unique states that this set of binary switches can represent.

Note that all computer memory is finite (and used to be expensive), so memory economy is paramount. Do you really need to represent every pixel in an image as a double, or can you get away with an 8-bit integer? How many shades of grey can you distinguish anyway? If the answer is “probably fewer than 256”, then you can save 87.5% of the memory by representing each pixel as an 8-bit integer (1 byte) rather than as a double (8 bytes). If the answer is that you want to go for maximal contrast, and “black” vs. “white” are the only states you want to represent (no shades of grey), then booleans will do to represent your pixels.
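A quick back-of-the-envelope check of that 87.5% figure, again as an illustrative NumPy sketch (the image is just random noise standing in for real pixel data):

```python
import numpy as np

# Hypothetical example: a 1000 x 1000 greyscale image stored two ways.
img_double = np.random.rand(1000, 1000)                    # float64 values in [0, 1]
img_uint8  = (img_double * 255).round().astype(np.uint8)   # 256 grey levels

print(img_double.nbytes)  # 8,000,000 bytes as double
print(img_uint8.nbytes)   # 1,000,000 bytes as 8-bit integer -> 87.5% saved
```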

But computer memory has gotten cheap and is getting ever cheaper, so why is this still an important consideration?

Because I’m starting to suspect that something similar is going on for cognition and cognitive economy in humans and other organisms. Life is complicated, and I wonder how that complexity is represented in human memory. How much nuance does long-term memory allow for? Phenomena like the Mandela effect might suggest that the answer is “not much”. Perhaps long-term memory only allows for the most sparse, caricature-like representation of objects (“he was for it” or “he was against it”, “the policy was good” or “the policy was bad”). Maybe this is even a feature, to avoid subtle nuance-drift and keep the representation relatively stable over time, once encoded in long-term memory.

But the issue doesn’t seem to be restricted to long-term memory. On the contrary. There is a certain simplicity that really doesn’t seem suitable to represent the complexity of reality in all of its nuances, not even close, but people seem to be drawn to it. In fact, the dictum “the simpler the better” often seems to have a particular draw. This goes for personality types (I am willing to bet that much of the popularity of the MBTI in the face of a shocking lack of reliability can be attributed to the fact that it promises to explain the complexity of human interactions with a mere 16 types – a 4-bit representation), horoscopes (again, it would be nice to be able to predict anything meaningful about human behavior with a mere 12 zodiac signs – about 3.6 bits, if fractional bits were allowed), racism (maybe there are 4-8 “major races”, which could be represented with 2-3 bits), and sexism (biological sex used to be conventionally represented with a single bit). There is now even a 2-bit representation of personality that is rapidly gaining popularity – one that is based on the 4 blood types, and that has no validity whatsoever. But this kind of simplicity is hard to beat. In other words, all of these are “low memory plays”. If there is even a modicum of understanding about the world to be gained from such a low-memory representation (perhaps even well within the realm of “purely felt effectiveness”, from the perspective of the individual, given the effects of confirmation bias, etc.), it should appeal to people in general, and to those who are memory-limited in particular.

Given this account, what remains puzzling – however – is that this kind of almost deliberate lack-of-nuance is even celebrated by those who should know better, i.e. people who are educated and smart enough that they don’t *have to* compromise and represent the world in this way, yet seem to do it anyway: For instance, there are some types of research where preregistration makes a lot of sense. If only to finally close the file drawer. Medication development comes to mind. But there are also some types where it makes less sense and some types where it makes no sense (e.g. creative research on newly emerging topics at the cutting edge of science) – so how appropriate it actually is mostly depends on your research. Surely, it must be possible for sophisticated people to keep a more nuanced position than a purely binary one (“preregistration good, no preregistration bad”) in their head. This goes for other somewhat sophisticated positions where tribalism rules the roost, e.g. “R good, SPSS bad” (reality: This depends entirely on your skill level) or “Python good, Matlab bad” (reality: Depends on what you want – and can – do) or “p-values bad, Bayes good” (reality: Depends on how much data you have and how good your priors are). And so on… 

Part of the reason these dichotomies for otherwise sophisticated topics are so popular must then lie in the fact that such a low-memory, low-nuance representation – after all, it takes 6 bits to represent a mere 49 shades of grey, and 49 shades isn’t really all that much – has other hidden benefits. One is perhaps that it optimally preserves action potential (no course of action is easier to adjudicate than a binary choice – you don’t need to be an octopus to represent these 2 options); another is that it engenders tribalism and group cohesion (assuming for the sake of argument that this is actually a good thing). A boolean representation has more action potential and is more conducive to tribalism than a complex and nuanced one, so that’s perhaps what most people instinctively stick with…

But – and I think that is often forgotten in all of this – action potential and group cohesion notwithstanding, there are hidden benefits to being able to represent a complex world in sufficient nuance as well. Choosing a data type that is too coarse might end up representing a worldview that is plagued by undersampling and suffers from aliasing. In other words, you might be able to act fast and decisively, but end up doing the wrong thing because you picked from two alternatives that were not without alternative – you fell prey to a false dichotomy. If a lot is at stake, this could matter tremendously.

In other words, even the cognitive utility of booleans and other low memory data types is not clear cut – sometimes they are adequate, and sometimes they are not. Which is another win for nuanced datatypes. Ironically? Because if they are superior, maybe it is a binary choice after all. Or not. Depending on the dimensionality of the space one is evaluating all of this in. And whether it is stationary. And so on.


This is what is *really* going on with Laurel and Yanny – why your brain has to guess (without telling you)

At this point, we’re all *well* beyond peak #Yannygate. There have been comprehensive takes, there have been fun ones and there have been somber and downright ominous ones. But there have not been short ones that account for what we know.

This is the one (minute read). Briefly, all vowels that you’ve ever heard have 3 “formant frequencies” – 3 bands where acoustic energy is concentrated, in the low (F1: ~500 Hz), middle (F2: ~1500 Hz) and high (F3: ~2500 Hz) frequency range. These bands are usually clearly visible in any given “spectrogram” (think “ghosts”) of speech.

However, the LaurelYanny sound doesn’t have this signature characteristic of speech. The F2 is missing. But your brain has no (epistemic) modesty. Instead of saying: “I have literally never heard anything like this before – is this even speech?”, it says: “I know exactly what this is” and makes this available to your consciousness as what you hear, without telling you that this is a guess (might be worth mentioning that, no?).
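For readers who want to play with this, here is a stylized sketch of such a signal (this is not the actual clip; the band edges and the use of filtered noise as a stand-in for formants are my own simplifications):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 16000                      # assumed sampling rate in Hz
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(0)
noise = rng.standard_normal(t.size)

def band(x, lo, hi, btype="bandpass"):
    sos = butter(4, [lo, hi], btype=btype, fs=fs, output="sos")
    return sosfiltfilt(sos, x)

# Stand-in for a vowel: three "formant-like" bands of narrowband noise.
vowel_like = band(noise, 400, 700) + band(noise, 1300, 1800) + band(noise, 2300, 2800)

# Notch out the middle band (the F2 region): what remains is ambiguous.
ambiguous = band(vowel_like, 1000, 2000, btype="bandstop")
```

A spectrogram of `ambiguous` would show energy at the bottom and the top of the speech range with a hole where F2 usually lives – which is roughly the situation the diagram below depicts.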

Stylized version of the Laurel and Yanny situation: Diagram of spectrograms. “Laurel” has all 3 formants, but with most power in the low frequencies. “Yanny” has all 3 formants, but with most power in the high frequencies. “LaurelYanny” has both high and low power, but nothing in the middle. So you have to guess.

That’s pretty much it. The signal contains parts of both “Laurel” and “Yanny”, but also misses parts of both, hence the need to guess. WHAT you are guessing and why you hear “Laurel”, “Yanny” or sometimes one, then the other, and what it means for you whether you are a “Laurel” or a “Yanny” is pretty much still open to research.

Action potential: Hopefully, that was a mercifully short read. If you have some more time – specifically another 7-9 minutes – and want to help, click here.


#Yannygate highlights the underrated benefits of keeping foxes around

In May 2018, a phenomenon surfaced that lends itself to differential interpretation – some people hear “Laurel” whereas others hear “Yanny” when listening to the same clip. As far as I’m concerned, this is a direct analogue – in the auditory domain – of the #thedress phenomenon that surfaced in February 2015. Illusions have been studied by scientists for well over a hundred years, and philosophers have wondered about them for thousands of years. Yet, this kind of phenomenon is new and interesting, because it opens the issue of differential illusions – illusions that are not strictly bistable, like Rubin’s vase or the duck-rabbit, but that are perceived as a function of the prior experience of an organism. As such, they are very important, because it has long been hypothesized that priors (in the form of expectations) play a key role in cognition, and now we have a new tool to study their impact on cognitive computations.

What worries me is that this analogy and the associated implications were lost on a lot of people. Linguists and speech scientists were quick to analyze the spectral (as in ghosts) properties of this signal – and they were quite right with their analysis – but also seemed to miss the bigger picture, as far as I’m concerned, namely the direct analogy to the #thedress situation mentioned above and the deeper implication of the existence of differential illusions. The reason for that is – I think – that Isaiah Berlin was right when he stated:

“The fox knows many things, but the hedgehog knows one big thing.”

The point of this classification is that there are two cognitive styles by which different people approach a problem: Some focus on one issue in depth, others connect the dots between many different issues.

What he didn’t say is that there is a vast numerical discrepancy between these cognitive styles, at least in academia. Put bluntly, hedgehogs thrive in the current academic climate whereas foxes have been brought to the very brink of extinction.

Isaiah Berlin was right about the two types of people. But he was wrong about the relative quantities. It is not a one-to-one ratio. So it shouldn’t be ‘the hedgehog and the fox’, it should be ‘the fox and the hedgehogs’, at least by now…

It is easy to see why. Most scientists start out by studying one type of problem. In the brain – owing to the fact that neuroscience is methods-driven and it is really hard to master any given method (you basically have to be MacGyver to get *any* usable data whatsoever) – this usually manifests as studying one modality such as ‘vision’, ‘hearing’ or ‘smell’, or one cognitive function such as ‘memory’ or ‘motor control’. Once one starts like that, it is easy to see how one could get locked in: Science is a social endeavor, and it is much easier to stick with one’s tribe, in particular when one already knows everyone in a particular field, but no one in any other field. Apart from the social benefits, this has clear advantages for one’s career. If I am looking for a collaborator, and I know who is who in a given field, I can avoid the flakes and those who are too mean to make it worthwhile to collaborate, and seek out those who are decent and good people. It is not always obvious from the published record what differentiates them, but it makes a difference in practice, so knowing one’s colleagues socially comes with lots of clear blessings.

In addition, literatures tend to cite each other, silo-style, so once one starts reading the literature of a given field, it is very hard to break out and do this for another field: People tend to use jargon that one picks up over time, but that is rarely explicitly spelled out anywhere. People have a lot of tacit knowledge (also picked up over time, usually in grad school) that they *don’t* put in papers, so reading alien literatures is often a strange and trying experience, especially when compared with the comforts of having a commanding grasp of a given literature where one already knows all of the relevant papers.

Many other mechanisms are also geared towards further fostering hedgehogs. One of them is “peer review”, which must be nice because it is de facto review by hedgehog, which can end quite badly for the fox. Just recently, a program officer told me that my grant application was not funded because the hedgehog panel of reviewers simply did not find it credible that one person could study so many seemingly disparate questions at once. Speaking of funding: Funding agencies are often structured along the lines of a particular problem. For instance, in the US, there is no National Institute of Health – there are the National Institutes of Health, and that subtle plural “s” makes all the difference, because each institute funds projects that are compatible with its mission specifically. For instance, the NEI (the National Eye Institute) funds much of vision research with the underlying goal of curing blindness and eye diseases in general. But also quite specifically. And that’s fine, but what if the answer to that question relies on knowledge from associated, but separate, fields (other than the eye or visual cortex)? More on this later, but a brief analogy might suffice to illustrate the problem for now: Can you truly and fully understand a Romance language – say French – without having studied Latin?

Even cognition itself seems to be biased in favor of hedgehogs: Most people can attend to only one thing at a time, and can associate an entity with only one thing. Scientists who are known for one thing seem to have the biggest legacy, whereas those with many – often somewhat smaller – disparate contributions seem to get forgotten at a faster rate. In terms of a lasting legacy, it is better to be known for one big thing, e.g. mere exposure, cognitive dissonance, obedience or the ill-conceived and ill-named Stanford Prison Experiment. This is why I think all of Google’s many notorious forays to branch out into other fields have ultimately failed. People so strongly associate it with “search”, specifically, that their – many – other ventures just never really catch on, at least not when competing with hedgehogs in those domains, who allocate 100% of their resources to that one thing. For instance, FB (close online social connections – connecting with people you know offline, but online) eviscerated G+ in terms of social connections. Even struggling Twitter (loose online social connections – connecting with people online that you do not know offline) managed to pull ahead (albeit with an assist by Trump himself), and there was simply no cognitive space left for a 3rd, undifferentiated social network from a company that is *already* strongly associated with search. LinkedIn is not a valid counterexample, as it isn’t as much a social network as it formalized informal professional connections and put them online, so it is competing in a different space.

So the playing field is far from level. It is arguably tilted in favor of hedgehogs, has been tilted by hedgehogs, and is in danger of driving foxes to complete extinction. The hedgehog-to-fox ratio is already quite high in academia – what if foxes go extinct and the hedgehog singularity hits? The irony is that – if they were to recognize each other’s strengths – foxes and hedgehogs are a match made in heaven. It might even be ok for hedgehogs to outnumber foxes. A fox doesn’t really need another fox to figure stuff out. What the fox needs is solid information dug up by hedgehogs (who are admittedly able to go deeper), so foxes and hedgehogs are natural collaborators. As usual, cognitive diversity is extremely useful, and it is important to get this mix right. Maybe foxes are inherently rare. In which case it is even more important to foster, encourage and nurture them.

Instead, the anti-fox bias is further reinforced by hyper-specific professional societies that have hyper-focused annual meetings, e.g. VSS (the Vision Sciences Society) puts on an annual meeting that is basically only attended by vision scientists. It’s like a family gathering, if you consider vision science your family. Focus is important and has many benefits – as anyone suffering from ADD will be (un)happy to attest – but this can be a bit tribal. It gets worse – as there are now so many hedgehogs and so few remaining foxes, most people just assume that everyone is a hedgehog. At NYU’s Department of Psychology (where I work), every faculty member is asked to state the research question they are interested in on the faculty profile page (the implicit presumption is of course that everyone has exactly one, which is of course true for hedgehogs and works for them. But what is the fox supposed to say?). Even colloquially, scientists often ask each other “So, what do you study?”, implicitly expecting a one-word answer like “vision” or “memory”. Again, what is the fox supposed to say here? Arguably, this is the wrong question entirely, and not a very fox-friendly one at that.

This scorn for the fox is not limited to academia; there are all kinds of sayings that are meant to denigrate the fox as a “Jack of all trades, master of none” (“Hansdampf in allen Gassen”, in German), it is common to call them “dilettantes”, and it is of course clear that a fox will appear to lead a bizarre – startling and even disorienting – lifestyle, from the perspective of the hedgehog. And there *are* inherent dangers of spreading oneself too thin. There are plenty of people who dabble in all kinds of things, always talking a good game, but never actually getting anything done. But these people just give real foxes a bad name. There *are* effective foxes, and once stuff like #Yannygate hits, we need them to see the bigger picture. Who else would?

Note that this is not in turn meant to denigrate hedgehogs. This is not an anti-hedgehog post. Some of my closest friends are hedgehogs, and some are even nice people (yes, that last part is written in jest – come on, lighten up). No one questions the value of experts. We definitely need people with a lot of domain knowledge to go beyond the surface level on any phenomenon. But whereas no one questions the value of keeping hedgehogs around, I want to make a case for keeping foxes around, too – even valuing them.

What I’m calling for specifically is to re-evaluate the implicit or explicit “foxes not welcome here” attitude that currently prevails in academia. Perhaps unsurprisingly, this attitude is a particular problem when studying the brain. While lots of people talk a good game about “interdisciplinary research”, few people are actually doing it, and even fewer are doing it well. The reason this is a particular problem when studying the brain is that complex cognitive phenomena might cut across discipline boundaries, but in ways that were unknown when the map of the fields was drawn. To make an analogy: Say you want to know where exactly a river originates – where its headwaters or source are. To find that out, you have to go wherever the river leads you. That might be hard enough – just ask Theodore Roosevelt, who did this with the River of Doubt – and arguably all phenomena in the brain are a “river of doubt” in their own right, with lots of waterfalls and rapids and other challenges to progress. We don’t need artificial discipline or field boundaries to hinder us even further. We have to be able to go wherever the river leads us, even if that is outside of our comfort zone or outside of artificial discipline boundaries. If you really want to know where the headwaters of a river are, you simply *have to* go where the river leads you. If that is your primary goal, all other considerations are secondary. If we consider the suffering imposed by an incomplete understanding of the brain, reaching the primary objective is arguably quite important.

To mix metaphors just a bit (the point is worth making): we know from history that artificially imposed borders (drawn without regard for the underlying terrain or culture) can cause serious long-term problems, notably in Africa and the Middle East.

All of this boils down to an issue of premature tessellation:

The tessellation problem. Blue: Field boundaries as they should be, to fully understand the phenomena in question. Red: Field boundaries, as they might be, given that they were drawn before understanding the phenomena. This is a catch 22. Note that this is a simplified 2D solution. Real phenomena are probably multidimensional and might even be changing. In addition, they are probably jagged and there are more of them. This is a stylized/simplified version. The point is that the lines have to be drawn beforehand. What are the chances that they will end up on the blue lines, randomly? Probably not high. That’s why foxes are needed – because they transcend individual fields, which allows for a fuller understanding of these phenomena.

 

What if the way you conceived of the problem or phenomenon is not the way in which the brain structures it, when doing computations to solve cognitive challenges? The chance of a proper a priori conceptualization is probably low, given how complicated the brain is. This has bothered me personally since 2001, and other people have noticed this as well.

This piece is getting way too long, so we will end these considerations here.

To summarize briefly, being a hedgehog is prized in academia. But is it wise?

Can we do better? What could we do to encourage foxes to thrive, too? Short of creating “fox grants” or “fox prizes” that explicitly recognize the foxy contributions that (only) foxes can make, I don’t know what can be done to make academia a more friendly habitat for the foxes among us. Should we have a fox appreciation day? If you can think of something, write it in the comments.

 

Action potential: Of course, I expect no applause for this piece from the many hedgehogs among us. But if this resonates with you and you strongly self-identify as a fox, you could consider joining us on FB.


Social media and the challenge of managing disagreement positively

Technological change often entails social change. Historically, many of these changes were unintended and could not be foreseen at the time the technological advances were made. For instance, the printing press was invented by Johannes Gutenberg in the 1400s. One can make the argument that this advance led to the Reformation within a little more than 50 years, and to the devastating Thirty Years’ War within another 100 years of that. Arguably, the Thirty Years’ War was an attempt at the violent resolution of fundamental disagreements – about how to interpret the word of god (the bible), which had suddenly become available for the masses to read. Of course the printing press was probably not sufficient to bring these developments about, but one can make a convincing argument that it was necessary. Millions of people died, and the political landscape of central Europe was never quite the same.

Which brings us to social media. I think it is safe to say that most of us were surprised how fundamentally we disagree with each other as to how to interpret current events. Previously, the tacit assumption was that we all kind of agree about what is going on. This assumption is no longer tenable, which is often quite awkward. Social media got started in earnest about 10 years ago, with the launch of Twitter and the Facebook News Feed. Since then, people have shared innumerable items on social media, and from personal experience, one can be quite surprised by how differently other people interpret the very same event.

Which brings us to my research.

Briefly, people can fundamentally disagree about the merits of any given movie or piece of music, even though they saw the same film or listened to the same clip.

Moreover, they can vehemently disagree about the color of a whole wardrobe of things: Dresses, jackets, flipflops and sneakers. Importantly, nothing anyone can say would change anyone else’s mind in case of disagreement and these disagreements are not due to being malicious, ignorant or color-blind.

So where do they come from? When ascertaining the color of any given object, the brain needs to take illumination into account, a phenomenon known as color constancy. Insidiously, the brain does not tell us that this is happening; it simply makes the end result of this process available to our conscious experience. The problem – and the disagreement – arises when different people make different assumptions about the illumination.

Why might they do that? Because people assume the kind of light that they usually see, and this will differ between people. For instance, people who get up and go to bed late will experience more artificial lighting than those who get up and go to bed early. It stands to reason that people assume to happen in the future what they have experienced in the past. Someone who has seen lots of horses but not a single unicorn might misperceive a unicorn as a horse, should they finally encounter one. This is what seems to be happening more generally: People who go to bed late are more likely to assume the lighting to be artificial than those who go to bed early.

In other words, prior experience does shape our assumptions, which in turn shape our conclusions (see diagram).

Conclusions can be anything that the brain makes available to our conscious experience – percepts, decisions, interpretation. Objects above dashed line are often not consciously considered when evaluating the conclusions. Some of them might not be consciously accessible. Note that this is not the only possible difference between individuals. Arguably, it might be that the brains are also different from the very beginning. That is probably true, but we know next to nothing about that. Note that differing assumptions are sufficient to bring about differences in conclusions in this framework. That doesn’t mean other factors couldn’t matter as well. Also note that we consider two individuals here. Once more than two are involved, the situation would be more complicated yet.
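One way to make the diagram’s logic concrete is a toy Bayesian calculation (all numbers are invented; the “night owl”/“early riser” labels just stand in for different prior experience):

```python
# Toy calculation: two observers judge whether the illumination was "white"
# or "colored", given the same ambiguous evidence. All numbers are made up.
priors = {
    "night owl":   {"white": 0.3, "colored": 0.7},  # mostly sees artificial light
    "early riser": {"white": 0.7, "colored": 0.3},  # mostly sees daylight
}
# Likelihood of the observed image under each hypothesis - identical for
# both observers, because they are looking at the very same pixels.
likelihood = {"white": 0.5, "colored": 0.5}

for observer, prior in priors.items():
    unnorm = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnorm.values())
    posterior = {h: p / total for h, p in unnorm.items()}
    conclusion = max(posterior, key=posterior.get)
    print(observer, posterior, "->", conclusion)
```

With the evidence maximally ambiguous (equal likelihoods), the posterior simply echoes the prior: two observers reach opposite conclusions from identical input, which is the situation sketched in the diagram.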

If this is true more generally, three fundamental conclusions are important to keep in mind, if one wants to manage disagreement positively:

1. There is no point in arguing about the outcomes – the conclusions. Nothing that can be said can be expected to change anyone’s mind. Nor is it about the evidence (what actually happened), as the interpretation of that is colored by the assumptions.

2. In order to find common ground, one would be well advised to consider – and question – one’s own assumptions as well as those of others. Ideally, it would be good to trace someone’s life experience, which is almost certain to differ between people. Of course, this is almost impossible to do. Someone’s life experience is theirs and theirs alone. No one can know what it is like to be someone else. But pondering – and discussing – things on this level is probably the way to go. Maybe trying to create common experiences would be a way to transcend the disagreement.

3. As life experiences are radically idiosyncratic, fundamental and radical disagreements should be expected, frequently. The question is how this disagreement is managed. If it is not managed well, history suggests that bad things might be in store for us.


My policy on quote review

I understand the need of journalists to simplify quotes and make them more palatable to their audience. Academics have a tendency to hedge every statement. In fact, they would have to be an octopus to account for all the hands involved in a typical statement. From this perspective, it is fair that journalists would try to counteract this kind of nuance that their audience won’t appreciate anyway. However, I’m in the habit of choosing my words carefully and try to make the strongest possible statement that can be justified based on the available evidence. If journalists then apply their own biases, the resulting statements can veer into the ridiculous. So I’m now quoted – all over the place – saying the damnedest things, none of which I actually said. Sometimes, the quote is the opposite of what I said. This is not ok.

Of course you can write whatever you want. But that license does not extend to what I allegedly said. Note also that I did give journalists the benefit of the doubt in the past. But they demonstrably – for whatever reason, innocent or willful – did not care much for quote accuracy.

Thus – from now on, I must insist on quote review prior to publication. This is not negotiable, as my reputation is on the line and – again – I’m in the habit of speaking very carefully. This policy is also mutually beneficial – wouldn’t any journalist with integrity be concerned about getting the quotes right?

In the meantime, one would be wise to assume the media version of Miranda: “Everything you don’t say will be attributed to you anyway.”

Posted in In eigener Sache | Leave a comment

Retro-viral phenomena: The dress over and over again

It is happening again. Another “dress”-like image just surfaced.

As far as I can tell, more or less the same thing is going on. Ill-defined lighting conditions in the image are filled in by assumptions about the light, and these assumptions differ between people due to a variety of factors, including which kind of light they have seen more of. Just as described in my original paper.

As we get better at constructing these images with ill-defined illumination, I expect more of them to pop up periodically. But people now seem more comfortable with (and less surprised by) the notion that we can see the colors of the same image differently.

The reason these things are still a thing is our tacit assumption that we all more or less see the same reality.

So if I’m right (which most people presume about themselves) and someone else disagrees, they have to be wrong, for whatever reason. Color stimuli like this seem to produce categorically and profoundly differing interpretations, which is what makes them so unsettling.

I think the same thing – more or less – applies to social and political questions. We take our experience at face value and fill the rest in with assumptions that are based on prior experience. As people’s experiences will differ, disagreements abound.

Which is why I find these stimuli so interesting and which is why I study them in my lab.

Hopefully, as these become more common, it will make people more comfortable with the notion that they can fundamentally – but sincerely – disagree with their fellow man.

Because people operate experientially. Here, they experience benign disagreement. In contrast to politics, where the disagreement is often no longer benign.

So this kind of thing could be therapeutic.

We could use it.

Posted in Philosophy, Psychology, Science, Social commentary | Leave a comment

Of psychopaths, musical tastes, media relations and games of telephone

Usually, I publicly comment on our work once it is published, like here, here or here.

So I was quite surprised when I was approached by the Guardian to comment on an unpublished abstract. Neuroscientists typically present these as “work in progress” to their colleagues at the annual Meeting of the Society for Neuroscience, which this year is held in Washington, DC, in November, and at which our lab has 5 such abstracts. Go to this link if you want to read them.

Given these constraints, the Guardian did a good job at explaining this work to a broader audience, emphasizing its preliminary nature (we won’t even attempt to publish this unless we replicate it internally with a larger sample of participants and songs) as well as some ethical concerns inherent to work like this.

What becomes apparent on the basis of our preliminary work is that we can basically rule out the popular stereotype that people with psychopathic tendencies have a preference for classical music and that we *might* be able to predict these tendencies on the basis of combining data from *many* songs – individual songs won’t do, and neither will categories as broad as genre (or gender, race or SES). To confirm these patterns, we need much more data. That’s it.
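
To make the “many songs, not single songs” point concrete, here is a minimal sketch, using simulated data rather than our actual data or analysis pipeline, of how many individually weak song-level signals can be combined into a useful predictor (all numbers and variable names below are made up for illustration):

% Purely illustrative: simulate liking ratings for many songs, each only
% weakly related to a trait score, then combine them with ridge regression.
rng(1);
nSubjects = 500; nSongs = 200;
X = randn(nSubjects, nSongs);              % hypothetical song-liking ratings (z-scored)
wTrue = 0.05 * randn(nSongs, 1);           % each song carries only a weak signal
y = X * wTrue + randn(nSubjects, 1);       % hypothetical trait score (mostly noise)

train = 1:400; test = 401:500;             % simple split, no tuning
lambda = 10;                               % arbitrary ridge penalty
w = (X(train,:)' * X(train,:) + lambda * eye(nSongs)) \ (X(train,:)' * y(train));

rSingle = corrcoef(X(test,1), y(test));        % predictive value of a single song
rCombined = corrcoef(X(test,:) * w, y(test));  % predictive value of the combination
fprintf('single song r = %.2f, combined r = %.2f\n', rSingle(1,2), rCombined(1,2));

The specific numbers are beside the point; the point is that aggregation across many weak signals is what does the predictive work.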

What happened next is that a lot of outlets – for reasons that I’m still trying to piece together – made this about rap music and a strong link between a preference for rap music and psychopathic traits.

As far as I can tell, there is no such link, I have never asserted there to be one and I am unsure as to the evidentiary basis of such a link at this point.

It is worth pointing out that I actually did not say most of the things I’m quoted as saying on this topic, or at least not in the form they were presented.

So all of this is a lesson in media communications. Between scientists and the media, as well as between media and media, media and social media and social media and people (and all other combinations).

So it is basically a game of telephone: What we did. What the (original) media thinks we did. What the media that copies from the original media think we did. What social media thinks we did. What people understand we did. Apparently, all these links are “leaky” or rather unreliable. Worse, the leaks are probably systematic, accumulating systematic error (or bias) based on a cascade of differential filters (presumably, media filters by what they think will gain attention, whereas readers will filter by personal relevance and worldview). 

Given that, the reaction of the final recipient (the reader) of this research was basically dominated by their prior beliefs (and who could blame them), dismissing this either as obviously flawed “junk science” or so obvious that it doesn’t even need to be stated, depending on whether the media-rendering of the findings clashed with or confirmed these prior beliefs.

Is publicizing necessarily equal to vulgarizing?

I still think the question of identifying psychopaths based on more than their self-report is important. I also still think that doing so with metrics that lack obvious socially desirable answers, like music taste, is promising: given their lack of empathy, psychopaths could be taken by particular lyrics; given their need for stimulation, particular rhythms or beats could resonate with them more than average. But working all of that out will take a lot more – and more nuanced – work.

And to those who have written me in concern, I can reassure you: No taxpayer money was spent on this – to date.

If you are interested in this, stay tuned.

Posted in In eigener Sache, Science, Social commentary | 2 Comments

Vector projections

Hopefully, this will clear up some confusions regarding vector projections onto basis vectors.

 

projections

Via Matlab, powered by @pascallisch
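
In case the figure does not load for you, here is a minimal Matlab sketch of the idea it illustrates; the specific vectors are arbitrary examples of my choosing:

% Project a vector v onto two basis vectors.
v  = [3; 2];                      % the vector to be projected
b1 = [1; 0];                      % first basis vector
b2 = [1; 1] / sqrt(2);            % second basis vector (unit length, but not orthogonal to b1)

% Scalar projections (coordinates along each basis vector):
c1 = dot(v, b1) / dot(b1, b1);
c2 = dot(v, b2) / dot(b2, b2);

% Vector projections (the component of v lying along each basis vector):
p1 = c1 * b1;
p2 = c2 * b2;

% With an orthonormal basis, the projections add back up to v.
% With a non-orthogonal basis like this one, they generally do not:
disp(p1 + p2)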


Posted in Matlab | Leave a comment

What should we call science?

The term for science – scientia (knowledge) – is terrible. Science is not knowledge. It is simply not (just) a bunch of facts. The German term “Wissenschaft” is slightly better, as it implies a knowledge creation engine: something that creates knowledge, emphasizing that this is a process (and the only valid one we have, as far as I can tell) that generates knowledge. But that doesn’t quite capture it either. Science does not prove anything, nor create any knowledge per se. Science has been wrong many times, and will be wrong in the future. That’s the point. It is a process that detects – via falsification – when we were wrong. Which is extremely valuable. So a better term is in order. How about uncertainty reduction engine? But incertaemeíosikinitiras probably won’t catch on.
How about incertiosikini? Probably won’t catch on either.

Posted in Pet peeve, Science | 1 Comment

Predicting movie taste

There is a fundamental tension between how movie critics conceive of their role and how their reviews are used by the moviegoing public. Movie critics by and large see their job as educating the public as to what is a good movie and explaining what makes it good. In contrast, the public generally just wants a recommendation as to what they might like to watch. Given this fundamental mismatch, the results of our study, which investigated whether movie critics are good predictors of individual movie liking, should not be surprising.

First, we found that individual movie taste is radically idiosyncratic. The average correlation between viewers was only 0.26 – in other words, one would predict an average disagreement of 1.25 stars on a rating scale from 0 to 4 stars, which is a pretty strong disagreement (the maximum possible RMSE is 1.7). Note that these are individuals who reported having seen *the same* movies.
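
For concreteness, here is a toy sketch (with made-up ratings, not our data) of how such a pairwise disagreement score is computed for two people who have rated the same movies:

% Star ratings (0 to 4) of the same ten movies by two hypothetical viewers.
rater1 = [4 3 2 4 1 0 3 2 4 1];
rater2 = [2 4 1 3 3 1 4 0 2 3];

r = corrcoef(rater1, rater2);               % how similar are their tastes?
rmse = sqrt(mean((rater1 - rater2).^2));    % typical disagreement, in stars

fprintf('correlation = %.2f, disagreement = %.2f stars\n', r(1,2), rmse);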

Interestingly, whereas movie critics correlate more strongly with each other – at 0.39, which had been reported previously – they are, on average, not significantly better than a randomly picked non-critic at predicting what a randomly picked person will like. This suggests that vaunted critics like the late Roger Ebert gain prominence not through the reliability of their predictions, but through other factors, such as the force of their writing.

What is the best way to get a good movie recommendation? In the absence of any other information, aggregators of non-critics such as the Internet Movie Database do well (r = 0.49), whereas aggregators of critics such as Rotten Tomatoes underperform, relatively speaking (r = 0.33). Rotten Tomatoes is better at predicting what a critic would like (r = 0.55), suggesting a fundamental disconnect between critics and non-critics.

Finally, as taste is so highly idiosyncratic, your best bet might be to find a “movie-twin” – someone who shares your taste but has seen some movies that you have not. Alternatively, companies like Netflix are now employing a “taste cluster” approach, in which each individual is assigned to the taste cluster their taste vector is closest to, and the predicted rating is that of the cluster (as the cluster has presumably seen all movies, whereas individuals, even movie-twins, will not). One cautionary note about this approach is that Netflix probably does not have the data it needs to pull this off, as ratings are provided in a self-selective fashion, over-weighting the movies that people feel most strongly about, which can bias the predictions.
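
Here is a minimal sketch of the “taste cluster” idea as described above; this is my own illustration with made-up numbers, not Netflix’s actual algorithm:

% Hypothetical cluster centroids: average ratings (0 to 4) of five movies
% for three taste clusters (rows = clusters, columns = movies).
clusters = [4.0 3.5 1.0 0.5 2.0;
            1.0 1.5 4.0 3.5 2.5;
            2.5 2.0 2.5 2.0 4.0];

% One person's ratings; NaN marks movies they have not seen.
person = [3.5 4.0 0.5 NaN NaN];
seen = ~isnan(person);

% Distance to each cluster, using only the movies the person has rated
% (uses implicit expansion, i.e. Matlab R2016b or later).
d = sqrt(sum((clusters(:, seen) - person(seen)).^2, 2));
[~, nearest] = min(d);

% Predicted ratings for the unseen movies: the nearest cluster's ratings.
predicted = clusters(nearest, ~seen);
fprintf('assigned to cluster %d, predicted ratings: %s\n', nearest, mat2str(predicted, 2));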

 

Posted in In eigener Sache, Journal club, Psychology, Science | Leave a comment

Revisiting the dress: Lessons for the study of qualia and science

When #thedress first came out in February 2015, vision scientists had plenty of ideas why some people might be seeing it differently than others, but no one knew for sure. Now we have some evidence as to what might be going on. The illumination source in the original image of the dress is unclear: it is unclear whether the image was taken in daylight or artificial light, and whether the light comes from above or behind. If things are unclear, people assume the kind of illumination they have seen more often in the past. In general, the human visual system has to take the color of the illumination into account when determining the color of objects. This is called color constancy. That’s why a sweater looks largely the same inside a house and outside, even though the wavelengths hitting the retina are very different (due to the different illumination). So if someone assumes blue light, they will mentally subtract that and see the image as yellow; if someone assumes yellow light, they will mentally subtract it and see blue. The sky is blue, so if someone assumes daylight, they will see the dress as white and gold. Artificial incandescent light is relatively long-wavelength (appearing yellow-ish), so if someone assumes that, they will see it as black and blue. People who get up in the morning see more daylight in their lifetime and tend to see the dress as white and gold; people who get up later and stay up late see more artificial light in their lifetime and tend to see the dress as black and blue.
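
To make “mentally subtracting the assumed light” concrete, here is a toy sketch; the numbers are invented for illustration and this is not a model of the actual visual system, but it shows how the same ambiguous pixel, corrected under two different assumed illuminants, ends up on opposite sides of the blue-yellow axis:

% A roughly neutral, ambiguous pixel (RGB, scaled 0 to 1):
pixel = [0.55 0.52 0.50];

% Two assumed illuminants (illustrative values only):
daylight     = [0.85 0.95 1.05];   % bluish
incandescent = [1.05 0.95 0.80];   % yellowish

% Discounting the illuminant, von Kries style: divide by the assumed light.
underDaylight     = pixel ./ daylight;
underIncandescent = pixel ./ incandescent;

fprintf('assuming daylight:     R minus B = %+.2f (yellowish, i.e. white/gold)\n', ...
    underDaylight(1) - underDaylight(3));
fprintf('assuming incandescent: R minus B = %+.2f (bluish, i.e. blue/black)\n', ...
    underIncandescent(1) - underIncandescent(3));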

This is a flashy result, which should give one pause, because scientific publishing has at times seemed to trade rigor for appeal. However, I really do not believe that this was the case here. In terms of scientific standards, the paper has the following features:

*High power: > 13,000 participants

*Conservative p-value: Voluntarily adopted p < 0.01 as a reasonable significance threshold to guard against multiple comparison issues.

*Internal replication prior to publication: This led to a publication delay of over a year, but it is important to be sure.

*No exclusion of participants or flexible stopping: Everyone who had taken the survey by the time the paper was submitted for review at the journal was included.

*#CitizenScience: As this effect holds up “in the wild”, it is reasonable to assume that it doesn’t fall apart outside of carefully controlled laboratory conditions.

*Open science: Shortly (once I put the infrastructure in place), data and analysis code will be made openly available for download. Also, the paper was published – on purpose – in an open-access journal.

Good science takes time and usually raises more questions than it answers. This is no exception. If you want to help us out, take this brief 5-minute survey. The more data we have, the more useful the data we already have becomes.

Posted in Journal club, Neuroscience, Psychology, Science | 5 Comments

Autism and the microbiome

The incidence of autism has been on the rise for 40 years. We don’t know why, but the terrible burden of suffering has spurred people to urgently look for a cause. As there are all kinds of secular trends over the same time period, correlating the rise in autism with corresponding changes in environmental parameters has led to the “discovery” of all kinds of spurious or incidental relationships.

When attempting to establish causal relationships, experimental work is indispensable, but unethical to do in humans.

Now, it has been shown that feeding maternal mice a high-fat diet leads to social behavioral deficits reminiscent of autism in their offspring. These deficits were associated with a disrupted microbiome, specifically low levels of L. reuteri. Restoring levels of L. reuteri rescued the social behaviors, an effect linked to increased production of oxytocin.

I’m aware of the inherent limitations of mouse work (does anything ever transfer?), but if this does transfer (and I think it will, given recent advances in our understanding of the gut microbiome in relation to mental health), it will be transformational, not just for autism.

Here is a link to the paper: Microbial reconstitution reverses maternal diet-induced social and synaptic deficits in offspring.

Posted in Neuroscience, Nutrition, Psychology, Science | 1 Comment

A primer on the science of sleep

I’ve written about sleep and the need to sleep and how sleep is measured before, but in order to foster our #citizenscience efforts at NYU, I want to bring accessible and actionable pieces on the science of sleep together in one place, here.

1. How the brain regulates sleep/wake cycles

2. Regulating sleep: What can you do?

What you can do

3. Sleep: Why does it matter?

Sleep matters

4. What you can do right now if your baby has sleep problems

5. Common sleep myths

6. Sleep is an active process

Sleep is an active process

7. What are sleep stages?

Sleep stages

Click on the links if you want to read more.

If you’re curious what our marriage between #citizenscience, #datascience and #neuroscience is about, read this.

 

Posted in Life, Neuroscience, Psychology, Science | Leave a comment

Beyond free will

Some say that every time philosophy and neuroscience cross, philosophy wins. The usual reason cited for this? Naive and unsophisticated use of concepts and the language to express them within neuroscience. Prime exhibit is the mereological fallacy – the confusion of the part with the whole (by definition, people see, not the eye or the brain). And yes, all too many scientists are entirely uneducated, but “winning” might be a function of letting philosophy pick the battleground – language – which philosophy has privileged for over 2500 years (if for no other reason than lack of empirical methods, initially). There is no question that all fields are in need of greater conceptual clarity, but what can one expect from getting into fights with people who write just the introduction and discussion section, then call it a paper and have – unburdened by the need to run studies or raise money to do so – an abundance of time on their hands? Yet, reality might be unmoved by such social games. It needs to be interrogated until it confesses – empirically. There are no shortcuts. Particularly if the subject is as thorny as free will or consciousness. See here for the video.

Posted in Neuroscience, Pet peeve, Philosophy | 1 Comment

Explaining color constancy

The brain uses the spectral information of light waves (their wavelength mix) to aid in the identification of objects. This works because any given object will absorb some wavelengths of the light source (the illuminant) and reflect others. For instance, plants look green because they absorb short and long wavelengths, but reflect wavelengths in the middle of the visible spectrum. In that sense, plants – which perform photosynthesis to meet their energy needs – are ineffective solar panels: they don’t absorb all wavelengths of the visible spectrum. If they did, they would be black. So this information is valuable, as it allows the brain to infer object identity and helps with image parsing: different but adjacent objects usually have different reflectance profiles. But this task is complicated by the fact that the mix of wavelengths reflected off an object depends on the wavelength mix emanating from the light source in the first place. In other words, the brain needs to take the illumination into account when determining object color. Otherwise, object appearance would not be constant – the same object would look different depending on the illumination source. Illumination sources can contain dramatically different wavelength mixes, e.g. incandescent light with most of the energy in the long wavelengths vs. cool-white LEDs with a peak in the short wavelengths. Nor is this a recent problem brought about by the invention of artificial lighting: throughout the day, the spectral content of daylight changes – the spectral content of sunlight at midday differs from that in the late afternoon. If color-perceiving organisms didn’t take this into account, the same object would look radically different in color at different times of day. So such organisms need to discount the illuminant, as illustrated here:

Achieving color constancy by discounting the illuminant
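
As a toy numerical version of that figure (made-up numbers, and a gross simplification of the actual physiology): per wavelength band, the light reaching the eye is roughly the product of the illuminant’s power and the surface’s reflectance, so the surface can be recovered by dividing the illuminant back out:

% Three coarse wavelength bands: short, medium, long.
reflectance  = [0.2 0.3 0.8];      % a "reddish" surface: reflects mostly long wavelengths
incandescent = [0.4 0.8 1.2];      % illuminant heavy in long wavelengths
coolLED      = [1.2 0.8 0.4];      % illuminant heavy in short wavelengths

% What reaches the eye is (roughly) illuminant times reflectance, per band:
signalIncandescent = incandescent .* reflectance;   % [0.08 0.24 0.96]
signalLED          = coolLED      .* reflectance;   % [0.24 0.24 0.32]

% Uncorrected, the same surface yields very different signals. Discounting
% the illuminant (dividing it back out) recovers the same reflectance both times:
disp(signalIncandescent ./ incandescent)   % [0.2 0.3 0.8]
disp(signalLED          ./ coolLED)        % [0.2 0.3 0.8]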

The details of how this process happens physiologically are still being worked out, but we do know that it happens. Of course, there are also other factors going into the constant color correction of the image performed by the organism. For instance, if you know the “true color” of an object, this will largely override other considerations. Try illuminating strawberries with a green laser pointer. The light bouncing off the strawberries will contain little to no long wavelengths, but the strawberries will still look red to you because you know that strawberries are red. Regardless of these considerations, we do know that color constancy matters quite a bit, even in terms of assumed illumination in case of #thedress, when the illumination source is ill-defined:

Discounting an assumed illuminant explains what is going on with the dress.

Of course, things might not always be as straightforward as that. It won’t always be perfectly clear what the illuminant is. In that case, the brain will make assumptions to disambiguate. A fair assumption would be that the illuminant is like the kind of light one has seen most. For most of human history – and perhaps even today – that means sunlight. In other words, humans could be expected to assume illumination along the daylight axis (as it varies over the day), which means relatively short-wavelength illumination, which could account for the fact that most people reported seeing the dress as white and gold.

Posted in Neuroscience, Psychology, Science | Leave a comment

The neuroscience of violent rage

Violent rage results from the activation of dedicated neural circuitry that is on the lookout for existential threats to prehistoric lifestyles. Life in civilization is largely devoid of these threats, but this system is still in place, triggering what largely amounts to false alarms with alarming frequency.

We are all at the mercy of our evolutionary heritage. This is perhaps nowhere more evident than in the case of violent rage. Regardless of situation, in the modern world, responding with violent rage is almost never appropriate and almost always inadvisable. At best, it will get you jailed; at worst, it might well get you killed.

So why does it happen so often, and in response to seemingly trivial situations, given these stakes? Everyone is familiar with the frustrations of daily life that trigger these intense emotions – road rage, for instance, or being stuck in a long checkout line.

The reason is explored in “Why we snap” by Douglas Fields. Briefly, ancient circuitry in the hypothalamus is always on guard for trigger situations that correspond to prehistoric threats. Unless quenched by the prefrontal cortex, this dedicated sentinel system kicks in and produces a violent rage response to meet the threat. The book identifies 9 specific triggers that activate this system, although they arguably boil down to 3 existential threats in prehistoric life:

1) Existential threats to the (increasingly extended) physical self (attacks on oneself, mates, family, tribe)

2) Existential threats to the (increasingly extended) social self (insults against oneself, mates, family, tribe) and

3) Existential threats to the integrity of the territory that sustains these selves (being encroached on or being restrained from exploring one’s territory by others or having one’s resources taken from one’s territory).

Plausibly, these are not even independent – for instance, someone could interpret the territorial encroachment of a perceived inferior as an insult. Similarly, the withholding of resources (e.g. having a paper rejected despite an illustrious publication history) could be taken as a personal insult.

The figure depicts the rage circuitry in the hypothalamus – several nuclei close to the center of the brain (stylized)

Understood in this framework, deploying the “nuclear option” of violent rage so readily starts to make sense – the system thinks it is locked in a life and death struggle and goes all out, as those who could not best these situations perished from the earth long ago, along with their seed.

Of course, in the modern environment, almost none of these trigger situations still represent existential threats, even if they feel like it, such as being stuck at the post office.

In turn, maybe we need to start respecting the ancient circuitry that resides in all of us in order to make sense of seemingly irrational behavior, perhaps even incorporate our emerging understanding of these brain networks into public policy. In the case of violent rage, the stakes are high, namely preventing the disastrous outcomes of these behaviors that keep filling our prisons and that kill or maim the victims.

Read more: Unleashing the beast within

Posted in Neuroscience, Psychology, Science | 1 Comment