Hopefully, this will clear up some confusions regarding vector projections onto basis vectors.
Via Matlab, powered by @pascallisch
The term for science – scientia (knowledge) – is terrible. Science is not knowledge. It is simply not (just) a bunch of facts. The German term “Wissenschaft” is slightly better, as it implies a knowledge creation engine – something that creates knowledge, emphasizing that this is a process (and the only valid one we have, as far as I can tell) that generates knowledge. But that doesn’t quite capture it either. Science does not prove anything, nor create any knowledge per se. Science has been wrong many times, and will be wrong in the future. That’s the point. It is a process that detects – via falsification – when we were wrong. Which is extremely valuable. So a better term is in order. How about uncertainty reduction engine? But incertaemeíosikinitiras probably won’t catch on.
How about incertiosikini? Probably won’t catch on either.
There is a fundamental tension between how movie critics conceive of their role and how their reviews are utilized by the moviegoing public. Movie critics by and large see their job as educating the public as to what is a good movie and explaining what makes it good. In contrast, the public generally just wants a recommendation as to what they might like to watch. Given this fundamental mismatch, the results of our study, which investigated the question of whether movie critics are good predictors of individual movie liking, should not be surprising.
First, we found that individual movie taste is radically idiosyncratic. The average correlation between raters was only 0.26 – in other words, one would predict an average disagreement of 1.25 stars on a rating scale from 0 to 4 stars. That is a pretty strong disagreement (the maximum possible RMSE is 1.7). Note that these are individuals who reported having seen *the same* movies.
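To get an intuition for how a correlation translates into star disagreement, here is a minimal simulation sketch (in Python; the assumed mean and SD of ratings are illustrative guesses, not the study’s values):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate two raters whose star ratings correlate at r, then measure the
# disagreement (RMSE) this implies. Mean 2 and SD 1 on the 0-4 star scale
# are assumptions for illustration.
def rmse_for_correlation(r, n=100_000, mean=2.0, sd=1.0):
    cov = [[sd**2, r * sd**2], [r * sd**2, sd**2]]
    a, b = rng.multivariate_normal([mean, mean], cov, size=n).T
    a, b = np.clip(a, 0, 4), np.clip(b, 0, 4)  # keep ratings on the star scale
    return np.sqrt(np.mean((a - b) ** 2))

print(rmse_for_correlation(0.26))  # roughly 1.2 stars of expected disagreement
```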
Interestingly, while movie critics correlated more strongly with each other – at 0.39, as had been reported previously – on average they are not significantly better than a randomly picked non-critic at predicting what a randomly picked person will like. This suggests that vaunted critics like the late Roger Ebert gained prominence not through the reliability of their predictions, but through other factors, such as the force of their writing.
What is the best way to get a good movie recommendation? In the absence of all other information, aggregators of non-critics such as the Internet Movie Database do well (r = 0.49), whereas aggregators of critics such as Rotten Tomatoes underperform, relatively speaking (r = 0.33). Rotten Tomatoes is better at predicting what a critic would like (r = 0.55), suggesting a fundamental disconnect between critics and non-critics.
Finally, as taste is so highly idiosyncratic, your best bet might be to find a “movie-twin” – someone who shares your taste, but has seen some movies that you have not. Alternatively, companies like Netflix now employ a “taste cluster” approach, where each individual is assigned to the taste cluster their taste vector is closest to, and the predicted rating is that of the cluster (as the cluster has presumably seen all movies, whereas individuals, even movie-twins, will not). One cautionary note about this approach: Netflix probably does not have the data it needs to pull this off, as ratings are provided in a self-selective fashion, i.e. over-weighting the movies people feel most strongly about, potentially biasing the predictions.
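For concreteness, here is a minimal sketch of the taste-cluster idea (hypothetical numbers and a bare-bones nearest-centroid rule – not Netflix’s actual system):

```python
import numpy as np

def predict_from_clusters(user_ratings, centroids):
    """user_ratings: 1D array with np.nan for unseen movies.
    centroids: (k, n_movies) array of mean ratings per taste cluster."""
    seen = ~np.isnan(user_ratings)
    # Distance to each cluster, computed only over movies the user has seen
    dists = np.linalg.norm(centroids[:, seen] - user_ratings[seen], axis=1)
    best = np.argmin(dists)
    return centroids[best]  # predicted ratings for every movie, seen or unseen

centroids = np.array([[4.0, 1.0, 3.5],   # cluster 0's mean ratings
                      [1.0, 4.0, 2.0]])  # cluster 1's mean ratings
user = np.array([3.8, np.nan, np.nan])   # has only seen the first movie
print(predict_from_clusters(user, centroids))  # -> cluster 0's ratings
```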
When #thedress first came out in February 2015, vision scientists had plenty of ideas as to why some people might be seeing it differently than others, but no one knew for sure. Now we have some evidence as to what might be going on. The illumination source in the original image of the dress is unclear: it is not clear whether the image was taken in daylight or artificial light, or whether the light comes from above or behind. When the illumination is ambiguous, people assume the kind of light they have seen most often in the past. In general, the human visual system has to take the color of the illumination into account when determining the color of objects. This is called color constancy. That’s why a sweater looks largely the same inside a house and outside, even though the wavelengths hitting the retina are very different (due to the different illumination). So if someone assumes blue light, they will mentally subtract it and see the image as yellow; if someone assumes yellow light, they will mentally subtract it and see blue. Daylight from the blue sky is relatively short-wavelength, so someone who assumes daylight will see the dress as white and gold. Artificial incandescent light is relatively long-wavelength (appearing yellow-ish), so someone who assumes that will see it as black and blue. People who get up early in the morning see more daylight over their lifetime and tend to see the dress as white and gold; people who get up later and stay up late see more artificial light over their lifetime and tend to see the dress as black and blue.
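A toy way to see this discounting at work (the numbers are made-up RGB values, and division stands in for the “mental subtraction” – in the spirit of von Kries adaptation, not a model of what the visual system actually computes):

```python
import numpy as np

pixel = np.array([140.0, 130.0, 160.0])  # an ambiguous, bluish-grayish pixel

bluish_daylight  = np.array([0.8, 0.9, 1.2])  # assumed short-wavelength light
yellowish_indoor = np.array([1.2, 1.1, 0.8])  # assumed long-wavelength light

# Discounting the assumed illuminant: divide it out of the observed pixel
print(pixel / bluish_daylight)   # comes out warmer -> toward white/gold
print(pixel / yellowish_indoor)  # comes out cooler -> toward blue/black
```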
This is a flashy result, which should be concerning, because scientific publishing seems to have traded rigor for appeal in the past. However, I really do not believe that this was the case here. In terms of scientific standards, the paper has the following features:
*High power: > 13,000 participants
*Conservative p-value: Voluntarily adopted p < 0.01 as a reasonable significance threshold to guard against multiple comparison issues.
*Internal replication prior to publication: This led to a publication delay of over a year, but it is important to be sure.
*No exclusion of participants or flexible stopping: Everyone who had taken the survey by the time the paper was lodged for review at the journal was included.
*#CitizenScience: As this effect holds up “in the wild”, it is reasonable to assume that it doesn’t fall apart outside of carefully controlled laboratory conditions.
*Open science: Shortly (once I put the infrastructure in place), data and analysis code will be made openly available for download. Also, the paper was published – on purpose – in an open-access journal.
Good science takes time and usually raises more questions than it answers. This is no exception. If you want to help us out, take this brief 5-minute survey. The more data we have, the more useful the data we already have becomes.
The incidence of autism has been on the rise for 40 years. We don’t know why, but the terrible burden of suffering has spurred people to urgently look for a cause. As there are all kinds of secular trends over the same time period, correlating this rise in autism with corresponding changes in environmental parameters has led to the “discovery” of all kinds of spurious or incidental relationships.
When attempting to establish causal relationships, experimental work is indispensable, but unethical to do in humans.
Now, it has been shown that feeding mother mice a high-fat diet led to social behavioral deficits reminiscent of autism in their offspring. These deficits were associated with a disrupted microbiome, specifically low levels of L. reuteri. Restoring levels of L. reuteri rescued the social behaviors, an effect linked to increased production of oxytocin.
I’m aware of the inherent limitations of mouse work (does anything ever transfer?), but if this does (and I think it will – given recent advances in our understanding of the gut microbiome in relation to mental health), it will be transformational, not just for autism.
Here is a link to the paper: Microbial reconstitution reverses maternal diet-induced social and synaptic deficits in offspring.
I’ve written about sleep and the need to sleep and how sleep is measured before, but in order to foster our #citizenscience efforts at NYU, I want to bring accessible and actionable pieces on the science of sleep together in one place, here.
Click on the links if you want to read more.
If you’re curious what our marriage between #citizenscience, #datascience and #neuroscience is about, read this.
Some say that every time philosophy and neuroscience cross, philosophy wins. The usual reason cited for this? Naive and unsophisticated use of concepts and the language to express them within neuroscience. Prime exhibit is the mereological fallacy – the confusion of the part with the whole (by definition, people see, not the eye or the brain). And yes, all too many scientists are entirely uneducated, but “winning” might be a function of letting philosophy pick the battleground – language – which philosophy has privileged for over 2500 years (if for no other reason than lack of empirical methods, initially). There is no question that all fields are in need of greater conceptual clarity, but what can one expect from getting into fights with people who write just the introduction and discussion section, then call it a paper and have – unburdened by the need to run studies or raise money to do so – an abundance of time on their hands? Yet, reality might be unmoved by such social games. It needs to be interrogated until it confesses – empirically. There are no shortcuts. Particularly if the subject is as thorny as free will or consciousness. See here for the video.
The brain uses the spectral information of light (the wavelength mix) to aid in the identification of objects. This works because any given object will absorb some wavelengths of the light source (the illuminant) and reflect others. For instance, plants look green because they absorb short and long wavelengths, but reflect wavelengths in the middle of the visible spectrum. In that sense, plants – which perform photosynthesis to meet their energy needs – are inefficient solar panels: they don’t absorb all wavelengths of the visible spectrum. If they did, they would be black. So this information is valuable, as it allows the brain to infer object identity and helps with image parsing: different but adjacent objects usually have different reflectance profiles. But this task is complicated by the fact that the mix of wavelengths reflected off an object depends on the wavelength mix emanating from the light source in the first place. In other words, the brain needs to take the illumination into account when determining object color. Otherwise, object identity would not be constant – the same object would look different depending on the illumination source. Illumination sources can contain dramatically different wavelength mixes, e.g. incandescent light with most of its energy in the long wavelengths vs. cool-light LEDs with a peak in the short wavelengths. Nor is this a recent problem due to the invention of artificial lighting: throughout the day, the spectral content of daylight changes – the spectral content of sunlight is different at midday than in the late afternoon. If color-perceiving organisms didn’t take this into account, the same object would look a radically different color at different times of day. So such organisms need to discount the illuminant, as illustrated here:
The details of how this process happens physiologically are still being worked out, but we do know that it happens. Of course, other factors also go into the constant color correction of the image performed by the organism. For instance, if you know the “true color” of an object, this will largely override other considerations. Try illuminating strawberries with a green laser pointer: the light bouncing off the strawberries will contain little to no long wavelengths, but they will still look red to you because you know that strawberries are red. Regardless of these considerations, we do know that color constancy matters quite a bit, even in terms of assumed illumination in the case of #thedress, when the illumination source is ill-defined:
Of course, things might not be as straightforward as that. It won’t always be perfectly clear what the illuminant is. In that case, the brain has to make assumptions to disambiguate. A fair assumption is that the illuminant is like the kind one has seen most often. For most of human history – and perhaps even today – that means sunlight. In other words, humans could be expected to assume illumination along the daylight axis (which varies over the day), which means short-wavelength illumination – and that could account for the fact that most people reported seeing the dress as white and gold.
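To make the underlying physics concrete: to a first approximation, the wavelength mix reaching the eye is the wavelength-by-wavelength product of the illuminant’s spectrum and the object’s reflectance. A minimal sketch with made-up spectra:

```python
import numpy as np

wavelengths = np.array([450, 550, 650])        # short, middle, long (nm)
leaf_reflectance = np.array([0.1, 0.7, 0.1])   # reflects mostly middle -> green

daylight     = np.array([1.0, 1.0, 1.0])       # roughly flat spectrum
incandescent = np.array([0.3, 0.8, 1.5])       # energy skewed toward long

print(leaf_reflectance * daylight)      # [0.1  0.7  0.1 ]
print(leaf_reflectance * incandescent)  # [0.03 0.56 0.15] -- same leaf, new mix
# Color constancy: to recover the stable reflectance (0.1, 0.7, 0.1) under
# both lights, the brain must effectively divide out the illuminant.
```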
Violent rage results from the activation of dedicated neural circuitry that is on the lookout for existential threats to prehistoric lifestyles. Life in civilization is largely devoid of these threats, but this system is still in place, triggering what largely amounts to false alarms with alarming frequency.
We are all at the mercy of our evolutionary heritage. This is perhaps nowhere more evident than in the case of violent rage. Regardless of situation, in the modern world, responding with violent rage is almost never appropriate and almost always inadvisable. At best, it will get you jailed; at worst, it might well get you killed.
So why does it happen so often, and in response to seemingly trivial situations, given these stakes? Everyone is familiar with the frustrations of daily life that set off these intense emotions – road rage, or rage at being stuck in a long checkout line.
The reason is explored in “Why We Snap” by Douglas Fields. Briefly, ancient circuitry in the hypothalamus is always on guard for trigger situations that correspond to prehistoric threats. Unless quenched by the prefrontal cortex, this dedicated sentinel system kicks in and produces a violent rage response to meet the threat. The book identifies nine specific triggers that activate this system, although they arguably boil down to three existential threats in prehistoric life:
1) Existential threats to the (increasingly extended) physical self (attacks on oneself, mates, family, tribe)
2) Existential threats to the (increasingly extended) social self (insults against oneself, mates, family, tribe) and
3) Existential threats to the integrity of the territory that sustains these selves (being encroached on or being restrained from exploring one’s territory by others or having one’s resources taken from one’s territory).
Plausibly, these are not even independent – for instance, someone could interpret the territorial encroachment of a perceived inferior as an insult. Similarly, the withholding of resources – e.g. having a paper rejected despite an illustrious publication history – could be taken as a personal insult.
Understood in this framework, deploying the “nuclear option” of violent rage so readily starts to make sense – the system thinks it is locked in a life and death struggle and goes all out, as those who could not best these situations perished from the earth long ago, along with their seed.
Of course, in the modern environment, almost none of these trigger situations still represent existential threats, even if they feel like it, such as being stuck at the post office.
In turn, maybe we need to start respecting the ancient circuitry that resides in all of us in order to make sense of seemingly irrational behavior, perhaps even incorporate our emerging understanding of these brain networks into public policy. In the case of violent rage, the stakes are high, namely preventing the disastrous outcomes of these behaviors that keep filling our prisons and that kill or maim the victims.
Read more: Unleashing the beast within
Statistical power needs are often counterintuitive and underestimated. This has deleterious consequences for a number of scientific fields. Most science practitioners cannot reasonably be expected to make power calculations themselves. So we did it for them and visualized this as a “Powerscape”, which makes power needs immediately obvious.
Read more here: http://arxiv.org/abs/1512.09368
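To give a flavor of the idea, here is a back-of-the-envelope sketch (a normal approximation for a two-sample, two-sided t-test – not the code or parameters from the paper): compute power over a grid of effect sizes and sample sizes and read the landscape off directly.

```python
import numpy as np
from scipy.stats import norm

alpha = 0.05
z_crit = norm.ppf(1 - alpha / 2)
effect_sizes = np.arange(0.1, 1.01, 0.1)     # Cohen's d
ns = np.arange(10, 201, 10)                  # per-group sample size

d, n = np.meshgrid(effect_sizes, ns, indexing="ij")
power = norm.cdf(d * np.sqrt(n / 2) - z_crit)  # neglects the tiny other tail

# e.g. a "medium" effect (d = 0.5) with n = 60 per group:
print(power[np.isclose(d, 0.5) & (n == 60)])   # ~0.78 -- less than many expect
```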
Generations of philosophers have been fascinated by what has been termed the “Mary problem”. In essence, Mary is the world’s foremost expert on color vision and knows everything there is to know about it. The catch is that she is (depending on the version) either color-blind or has never experienced color before. The question is (depending on the version) whether she is lacking something, or whether she would gain new experiences or knowledge if she were to experience color. Supposedly, this is to show us interesting things about physicalism and qualia.
In reality, it only shows us how philosophy is fundamentally limited. The philosophical apperception of the world (as well as interactions among philosophers) is entirely language-based. Needless to say, this is an extremely impoverished way to conceive of reality. Language has all kinds of strange properties, including being inherently categorical. Its neural basis is poorly understood, but it is known to be rather extraneous – most animals don’t have it, and even in the higher primates, most brain regions are not involved in language processing.
Yet, all of Mary’s knowledge is language-based. So yes, if she were to see a color, the part of her visual cortex that processes spectral information (assuming that part of her cortex was intact) would be activated and she would experience the corresponding color, for reasons that are still not well known.
But the heart of the matter – and the inherent shortcomings of an understanding that is entirely language-based (invisible to philosophers, as this is something shared by all of philosophy) – can be illustrated by a new problem. So let’s add some color to the Mary problem. Here is the Mary problem reimagined – I call it the “Brian problem”:
Brian is the world’s foremost expert on all things sex. He has read every single paper ever published in sexology, and he also has a keen grasp of the biological and physiological literature. Yet, he has never had sex. Now, Brian has a chance to have sex – would this give him an opportunity to learn anything that he doesn’t already know, or to have an experience that he didn’t already have? If so, why?
Put in this way, a sample of 100 undergrads were quite clear in their interpretation:
This nicely illustrates the fundamental problem with using language to assess the state of reality: language has face validity to other people because it is our mode of thinking, and it probably evolved for reasons of social coordination. So arguments that do not correspond to an apt description of reality in any other way can still seem compelling (particularly to unrepresentative subcommunities with a shared culture).
If this is not clear yet: Imagine there being a rat who has never experienced anger but has read every paper on anger. Then you stimulate the ventromedial hypothalamus of this rat with optogenetic methods. Does the rat experience something new? If so, why?
There are really infinite variations of this. Say – for instance – Stephen has read every paper on LSD. He has an intricate understanding of how the drug works. He knows everything there is to know about serotonin receptors and their interactions with ligands. He even understands mTOR, GCaMP and knows everything there is to know about phosphorylation as well as methylation (although that might not be directly relevant). Yet, he has never taken LSD. Now, Stephen is about to take some LSD. Will he experience something that he has not experienced before? Will he learn something? Is this really that hard to understand if one has a lot of prior exposure to philosophical thought?
Before October 25th, 2010, I had no social media presence whatsoever – I wasn’t on Twitter, didn’t have a blog and G+ wasn’t around yet. Frankly, it hadn’t occurred to me before, but it was one of the requirements to serve as a “Social Media Representative” (Neuroblogger, in short) for the Society for Neuroscience, to cover the 2010 Annual Meeting, which I did.
Outreach efforts were also in an embryonic stage at the Society itself at that point, and by picking a few people to cover a giant meeting, one can either give new people a chance to establish a social media presence, boost existing bloggers or do a mix of both. I’m still grateful to the Society for picking me, but predictably some established bloggers were somewhat upset.
In hindsight, I think the choice by the Society to broaden the tent was justifiable as it pushed me and some others to start outreach efforts of any kind. Efforts which I aim to continue into the far future. At the same time, most of the critical blogs are now defunct.
That said, I still don’t think the “neuroblogger” concept itself is particularly well thought out. People attending the meeting will have better things to do than read a blog. People who didn’t go will have their reasons. The neurobloggers themselves will (in my experience) have some tough choices to make between attending the meeting and writing dispatches from it.
What might work better is to have a single “neuroblogger” platform (hosted on the SfN website) where people write contributions from the meeting, squarely aimed at the general public (to satisfy the outreach mission) – basically expert commentary on things that are just over the horizon, but without the breathless and unsubstantiated hype one usually gets from journalists. Perhaps have a trusted, joint Twitter account, too.
Anyway. Thanks for giving me my start and I hope to be able to improve on the mission as time goes on. Semper melior.
Who did the first experiment? 13th century scholastics like Roger Bacon are usually credited with the invention of the modern scientific method – in particular with regard to doing experiments. Bacon expanded on the work of Robert Grosseteste, who revived the method of induction and deduction from the ancients and applied it to the natural world (going far beyond Aristotle’s intuitive natural philosophy). Others give credit to even later individuals, such as another Bacon – Francis Bacon – who made the experiment the centerpiece of empirical investigations of the natural world.
However, the first experiment – admittedly with an ad hoc methodology and no clearly spelled out rationale linking outcomes and interpretation, let alone statistical considerations or power calculations – seems to have predated these developments by well over 600 years.
The backdrop to this experiment is straightforward – the Western Roman Empire fell in 476 to an onslaught of Goths, whereas the Eastern Roman Empire soldiered on. A bit more than 50 years later, the Eastern Roman Emperor (Justinian) tasked his General Belisarius to retake the lands of the Western Empire from the Goths, first and foremost the Italian peninsula.
The history of this conflict is relatively well documented, mostly by contemporary chroniclers like Procopius (in the pay of Justinian, so perhaps not entirely impartial). The story in question might be apocryphal and was already flagged as such by Procopius, but this does not really matter in this case – given that it is mostly the idea (of doing an experiment) itself that is of interest here, not its particular outcome. The story of the experiment is invoked to partially explain why Theodatus – the Gothic king in Rome, characterized by Procopius as “unmanly”, without experience in war and devoted to the pursuit of money – did not come to the aid of the denizens of Naples when they were besieged by Belisarius.
To make a decision whether he should fight Belisarius and try to lift the siege of Naples, Theodatus asked a Hebrew fortuneteller as to the likely outcome of the conflict. The fortuneteller told him to place 3 groups of 10 pigs each (presumably randomly selected) in separate huts, label them as Goths, Italians and East Roman soldiers, then wait a specified (but unstated) number of days. After doing just that, Theodatus and the fortuneteller entered the huts and noted that only 2 pigs labeled as Goths had survived, whereas most of the pigs labeled as East Roman soldiers lived. Of the pigs representing native Italians, half survived but had lost all of their hair. Theodatus interpreted the results to mean that the Italians would lose half of the population and all of their possessions, and that the Goths would be defeated, at little cost to the victorious Eastern Romans. Hence his reluctance to confront Belisarius at Naples.
In any event, Theodatus did not intervene, and Belisarius ended up capturing both Naples and Rome from the Goths before being recalled by the Emperor. Losing both of these cities without much of a fight drove the Goths to open rebellion, and they replaced Theodatus as king. Their chosen successor, Vittigis, ordered the fleeing Theodatus to be apprehended, and Theodatus perished at the hands of his pursuers.
So the besieged, without the knowledge of the enemy, sent to Theodatus in Rome begging him to come to their help with all speed. But Theodatus was not making the least preparation for war, being by nature unmanly, as has been said before. And they say that something else happened to him, which terrified him exceedingly and reduced him to still greater anxiety. I, for my part, do not credit this report, but even so it shall be told. Theodatus even before this time had been prone to make enquiries of those who professed to foretell the future, and on the present occasion he was at a loss what to do in the situation which confronted him—a state which more than anything else is accustomed to drive men to seek prophecies; so he enquired of one of the Hebrews, who had a great reputation for prophecy, what sort of an outcome the present war would have. The Hebrew commanded him to confine three groups of ten swine each in three huts, and after giving them respectively the names of Goths, Romans, and the soldiers of the emperor, to wait quietly for a certain number of days. And Theodatus did as he was told. And when the appointed day had come, they both went into the huts and looked at the swine; and they found that of those which had been given the name of Goths all save two were dead, whereas all except a few were living of those which had received the name of the emperor’s soldiers; and as for those which had been called Romans, it so happened that, although the hair of all of them had fallen out, yet about half of them survived. When Theodatus beheld this and divined the outcome of the war, a great fear, they say, came upon him, since he knew well that it would certainly be the fate of the Romans to die to half their number and be deprived of their possessions, but that the Goths would be defeated and their race reduced to a few, and that to the emperor would come, with the loss of but a few of his soldiers, the victory in the war. And for this reason, they say, Theodatus felt no impulse to enter into a struggle with Belisarius. As for this story, then, let each one express his views according to the belief or disbelief which he feels regarding it.
The brain lives in a bony shell. The light-tight nature of the skull renders this home a place of complete darkness, so the brain relies on the eyes to supply an image of the outside world. But there are many processing steps between the translation of light energy into electrical impulses that happens in the eye and the neural activity that corresponds to a conscious percept of the outside world. In other words, the brain is playing a game of telephone, and – contrary to popular belief – our perception corresponds to the brain’s best guess of what is going on in the outside world, not necessarily to the way things actually are. This has been recognized for at least 150 years, since the time of Hermann von Helmholtz. As there are many parts of the brain that contribute to any given percept, it should not be surprising that different people can reconstruct the outside world in different ways. This is true for many perceptual qualities, including form and motion. While this guessing game is going on all the time, it is possible to generate impoverished stimulus displays that are consistent with different, mutually exclusive interpretations; in that case, the brain will not commit to one interpretation, but switch back and forth. These are known as ambiguous or bistable stimuli, and they illustrate the point that the brain is ultimately only guessing when perceiving the world. It usually just has more information to go by to disambiguate the interpretation.
This is also true for color vision. The fundamental challenge in the perception of color is to identify an object despite changing illumination conditions. The mixture of wavelengths that reaches our eye will be interpreted by the brain as color, but which part is due to the reflectance of the object and which part is due to the illumination?
This is an inherently ambiguous situation, so the brain has to decide whether to take the appearance of an object at face value or to discount the part of the information that stems from the illumination. As the organism is not primarily interested in the correct representation of hues, but rather in the identification of objects under dramatically varying conditions (e.g. a predominance of long wavelengths in the early morning and late afternoon vs. more short wavelengths at noon), it is commonly accepted that the brain strives for “color constancy” and does a pretty good job at it. But in this tradeoff towards discounting, something has to give, and that is that we are bad at estimating the absolute hue of objects. For instance, the light reaching the eye from a white surface illuminated by red light is objectively reddish; the same white surface illuminated by blue light sends objectively blueish light. In order to recognize both as the same white surface, the subjective percept needs to discount the color of the light source.
So it should not be surprising that inference of hue can be dramatically influenced by context. The same shade of grey can look almost black on a bright background but almost white on a dark one.
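You can construct such a display yourself; a minimal sketch (toy pixel values):

```python
import numpy as np
import matplotlib.pyplot as plt

# Both center patches have the exact same gray value (128),
# but they sit on different backgrounds.
canvas = np.zeros((100, 200))
canvas[:, 100:] = 255           # right half: bright background
canvas[35:65, 35:65] = 128      # gray patch on the dark side
canvas[35:65, 135:165] = 128    # identical gray patch on the bright side

plt.imshow(canvas, cmap="gray", vmin=0, vmax=255)
plt.axis("off")
plt.show()  # the left patch should look lighter than the right one
```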
Note that this is not a bug. It is a necessary tradeoff in the quest to achieve a stable appearance of the same object, regardless of context or illumination.
So far, so good. Now where does the dress come in? The latest sensation to sweep social media has sharply divided observers. Some see the dress as gold on white, others as black on blue.
As noted before, this kind of divergence of interpretation might be rather common with complex stimuli. The importance of the “dress” stimulus is the extent to which intersubjective interpretation differs in the color domain. To my knowledge, this is by far the most extreme such stimulus. Of course one has to allow for the fact that not everyone’s monitor will be calibrated in the same way and viewing angles might differ, but this doesn’t account for the different subjective experience of people viewing the exact same image on the same monitor from the same position. And of course the reason why the “true colors” of the dress are in dispute in the first place is the color constancy phenomenon we alluded to above. This was likely a black/blue dress that was photographed with poor white balance, giving it an ambiguous appearance. But that doesn’t change the fact that some people sincerely perceive it as white/gold.
That the interpretation of the color values itself depends on context can readily be seen if the context is taken away. In the image below, some stripes were extracted from the original image without altering it in any other way. The “white/blue” stripe can now be identified as light blue and the “gold/black” stripe as brown.
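A sketch of how one might do this extraction (the file name and pixel coordinates are hypothetical):

```python
import numpy as np
from PIL import Image

# Crop a band out of the dress photo and report its mean color,
# with all surrounding context removed.
img = np.asarray(Image.open("dress.jpg").convert("RGB"), dtype=float)

stripe = img[100:140, 80:220]  # a band within one "white/blue" stripe
r, g, b = stripe.reshape(-1, 3).mean(axis=0)
print(f"mean RGB: ({r:.0f}, {g:.0f}, {b:.0f})")
# In isolation, the "white" stripe typically averages out to a light blue.
```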
But why the difference in interpretation? That is where things get interesting. If the ambiguity derives from color constancy (and it looks like it does), the most plausible explanation is that people differ in their interpretation of what the illumination source is. Those who interpret the dress as illuminated by a blue light will discount for this and see it as white/gold, whereas those who interpret the illumination as reddish will tend to see it as black/blue. Interestingly, the image itself allows for both interpretations: the illumination looks blueish at the top of the image, but yellowish/reddish at the bottom. On a more fundamental level, a blue/black dress illuminated by a white light source might be indistinguishable from a white/gold one with a blueish shadow falling onto it.
But if this is the case, one should be able to consciously override this interpretation once it is pointed out. Yet – for most people – this does not seem to be possible, in contrast to most other such ambiguous, duckrabbit-style displays, where people are able to willfully control what they see.
This raises several intriguing possibilities. For instance, it has been recognized for quite a while that the human “retinal mosaic” – the distribution of short-, medium- and long-wavelength cones – is radically different between observers, but this seems to have only a minute impact on the actual perception of color. Perhaps differences in the retinal mosaic can account for differences in the perception of this kind of “dress” stimulus. Moreover, there is another type of context to consider, and that is temporal context. We don’t perceive visual stimuli naively; we perceive them in the context of what we have encountered before – not all stimuli are equally likely. This is known as a “prior”. It is quite conceivable that some people (e.g. larks vs. owls) have a different prior as to what kind of illumination conditions they encounter more frequently. Or it could be a complex interaction between the two.
While we must confess that we currently do not know why some people consistently see the dress one way, others consistently see it another way, and some switch, it is remarkable that the switching happens on very long timescales. Usually, switching is fast, as in the Rubin’s vase stimulus above. This could be particular to color vision. There is no shortcut other than to do the research into the underlying reason for this striking difference in subjective perception.
Meanwhile, one lesson that we can take from all of this is that it is wise to assume a position of epistemic humility. Just because we see something in a certain way doesn’t mean that everyone else will see it in the same way. Moreover, it doesn’t mean that our percept necessarily corresponds to anything in the real world. A situation like this calls for the hedging of one’s bets, and that means to keep an open mind. Something to remember next time you disagree with someone.
I started our exchange in a spirit of friendship and respectful exploration. I hoped that we could maybe learn something. The reason I’m now discontinuing it is that I realize that this specific communication can no longer be meaningfully characterized in that way. In other words, nothing good can come of it, so it is best not to extend the triggering of adversarial circuitry. It does not mean that I concede the argument, it does not mean that you “won”. On the contrary. There is just no point continuing – victory conditions in an adversarial online exchange are ill-defined. Such exchanges only have a point if they are conducted in a spirit of mutual goodwill. I maintain this goodwill, but feel strongly that this is not reciprocated and that common ground has – for the time being – been lost. Perhaps we can regain it later. But that’s it for now, as we are at serious risk of simply making it worse. Not worth it.
It is understood that you are wrapped up deeply in your subjective reality. Trying to alter that will not be easy, so it might simply not be worth anyone’s time, let alone mine.
In the 21st century, one is dealing with a diverse range of people who are likely to construct reality quite differently. In other words, it is crucial not to confuse temporal and faux-spatial proximity online with importance (these heuristics do serve one well in offline interactions – people who are close matter – but the online space literally transcends space). Put differently, it is not only important to engage online; being able to disengage – the earlier the better – is arguably even more important:
Never forget – victory conditions are ill-defined in such an exchange. It is tempting to think that your next “killer argument” will settle the issue once and for all. But that’s not how it works. Violence kills; arguments don’t.
In the age of social media, it is hard to avoid exposure to popular culture. This is a problem because most of the bugbears that are popular in this culture – like zombies or vampires – do not actually exist. In contrast, those that do exist – notably psychopaths – get comparatively little screen time, and when they do, they are portrayed as a caricature of the actual condition. This increases the likelihood that you will not recognize a psychopath when he enters your life, increasing the odds that the experience will be truly life-ruining. At the same time, society appears naive and helpless in the face of psychopaths. While society continuously adds new layers of rules and regulations to all procedures, this does not come without a cost, diminishing the efficiency of carrying out all activities. It is also questionable whether this approach works – in many cases a determined psychopath will find a way to game the system all the same. If they are eventually caught, many psychopaths end up in prison, but at that point the damage is done, and prison as an institution is not specifically geared to deal with the condition.
At this point, the only hope comes in the form of the emerging neuroscience of psychopathy, which helps us detect the condition and understand its neural underpinnings. I review the evidence here.
Perhaps this will – in the long run – allow us to harden our society against psychopaths, dramatically lowering the burden of suffering inflicted by the condition on all of us.
Note: This is the primer. The full article is here.
Bringing about positive changes in your life is hard. Everyone knows this. But everyone also desires them. So it is seductive to believe – particularly if you have no credible way to actually bring them about – that merely wishing for them will make it so. There are many people who will try to sell you on the idea that the world is your personal genie, only with an infinite number of wishes. Of course, if this were so, there would be no unfulfilled wants or needs. The pied pipers will tell you that you are simply not wishing hard enough. If you find this account implausible, there is now a scientific account of what works and what doesn’t when it comes to making your wishes a reality.
There is a downside to wishful thinking about wishful thinking or positive thinking about positive thinking. It is the sacred mission of science to replace fairy tales and superstitions with a scrupulous account of what is and is not the case. Whether that account is popular or not.
Of course, all of this only applies *if* you desire to achieve personal goals and contribute in ways that are valued by society. If that is not the case, none of this applies and you are of course still free to *enjoy*. But I would argue that most (though not all) people have a desire to achieve and contribute, manifesting in different ways.