Data were analyzed using Matlab…

It is important to use the right tools for a given job. Science is no exception. In particular, given the vast amounts of data that are now routinely encountered in the field, one will want to use the best available data analysis tools (by whatever metric one prefers – ease of use, speed, efficiency, versatility, etc.).

In neuroscience, there is a prevailing sense that MATLAB currently dominates the market for analysis tools, but that Python has a lot of momentum.

Is Python in the future of brain research?

To get an empirical handle on this, I decided to search Google for a stock phrase employed in the vast majority of methods sections of papers (“Data were analyzed using x”), replacing x with a variety of modern – and presumably commonly used – analysis tools. For the sake of completeness, I also searched for “Data were analyzed in x” and “Data were analyzed with x”, then added up the counts (although the vast majority of phrases included “using”, not “with” or “in”). And yes, this is the passive voice. Most scientists are about as well trained in writing as they are in programming…

The results are below and they strike me as surprising, to say the least. A whopping 8 (in words, eight) hits for Python, 5 for Octave and none for Julia.

Results of the Google search (as of 09/26/2013). The slice representing Python, Octave and Julia together is too small to be visible. Aptly, the data underlying this figure were analyzed using Matlab.
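
For the curious, a tally and figure like this can be put together in a few lines of Matlab. The sketch below is not the original analysis script: the Matlab hit count is a made-up placeholder (the post only reports the Python, Octave and Julia numbers), and a bar chart is used instead of a pie so that the tiny counts remain visible.

```matlab
% Minimal sketch of the tally - not the original analysis script.
tools      = {'Matlab', 'Python', 'Octave', 'Julia'};
matlabHits = 1000;                   % hypothetical placeholder; not reported in the post
totals     = [matlabHits, 8, 5, 0];  % hits summed over "using", "in" and "with"

figure;
bar(totals);                         % bars keep the tiny counts visible
set(gca, 'XTick', 1:numel(tools), 'XTickLabel', tools);
ylabel('Google hits (all three phrasings combined)');
title('"Data were analyzed using/in/with x"');
```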

So what is going on? Are scientists – despite all the enthusiasm for Python, Octave and Julia – not actually using these tools in published papers? Or is there some systematic nuance of phrasing that I am missing?

Regardless of the validity of these particular results, there can be no question that Matlab has cornered the analysis market these days (at least in neuroscience – I presume the heavy use of SAS and Stata takes place in other fields).

Ironically, this is cause for concern. Success leads to dominance. Dominance leads to a sense of arrogant complacency that is not warranted in the field of technology. Just ask Nokia or the ironically named “Research in Motion”, ill-fated maker of the BlackBerry. Once a competitor has gained momentum because the monopolist missed “niche” developments, it is almost impossible to halt it.
To date, MathWorks has completely missed out on capabilities for online deployment of code. This is quite disgraceful, actually, as such deployment is now routine in Python and R. Does MathWorks have to be shamed into doing the right thing on this?

Finally, I hope we can move beyond primitive tribalism on this. I do understand that it comes naturally to people and that it is ubiquitous – be it with regard to computers (Mac vs. PC), cell phones (Android vs. iPhone), sports, etc.; however, this kind of brutish behavior has no place in science. All that matters is that one uses a suitable tool for the job at hand so that one can do the science in question and hopefully move the species forward a bit. Moreover, it is understandable that any self-respecting programmer can’t have things be too easy or straightforward. Otherwise, anyone could do them. That might indeed be the chief problem of Matlab.

Seriously – it doesn’t matter so much which language you use to program, as long as you are in fact programming. There is a simple reason for that: The success of western civilization allows for a second – heavily incentivized – route to rewards, namely social engineering (by hacking some fairly primitive tribalist circuitry). So the waves of BS can rise ever higher. But programming has to work. So the BS can only go so far. And we need more of that kind of constraint. More reality checks (in the literal sense), not more BS. We have too much BS as it is.


A more general relationship between relevance and rigor

Recently, SMBC (one of the few webcomics still worth reading, as its author somehow manages to remain uncorrupted by his own success) posted another inimitable offering.

Except that in this case, it is actually perfectly imitable. This kind of thing can be done for any number of fields, including psychology and neuroscience. The advantage of doing it in a systematic fashion is that one can gauge – by the very reaction to it – how defensive or self-confident a given field is. For didactic purposes, I’ll start with low-hanging fruit.

An obvious retort to this post is that it is extremely derivative. This is true. That doesn’t change the fact that Economics is in good company. Put differently, almost every field faces a tradeoff between the sexiness of the question under study and the availability of rigorous methods to study it, as the two seem to be inversely correlated. Like so:

Come to think of it, this relationship might be generally true across fields and empirically testable. Perhaps that is the only question that is – at this point – both sexy and tractable.

But I might be biased.


Superior motion perception in individuals with autism?

The empirical evidence seems to contradict Betteridge’s law.

For the past 10 years, research on the “spatial suppression effect” has shown that the motion of large stimuli is less readily perceived than that of smaller ones.

Most people would guess that the larger one should be easier to see. But the data suggest that the opposite is the case empirically: large stimuli need to be presented for a longer time to be perceived as accurately as small ones.

However, this relationship doesn’t seem to hold in certain populations, such as those with a history of depression or lower IQ.

These results have been explained by a lack of inhibitory tone. It has been suggested that there is also less GABAergic tone in autism. So the prediction from this research would be that there is also less spatial suppression in autism.

Yet, this is not the case empirically. Larger stimuli *were* more readily perceived by autistic observers (relative to controls), but so were small ones. It is uncanny how good their motion perception was. Just a few frames of motion were sufficient for ready identification of motion direction – blink and you’ll miss it.

Individuals with autism see both small and large moving stimuli faster

See here for a more comprehensive writeup: http://www.jneurosci.org/content/33/37/14631.full?etoc

On a final note, it is time to transcend singular lenses on autism. In the spirit of this excellent piece:

http://www.nimh.nih.gov/about/director/2013/the-four-kingdoms-of-autism.shtml

Its only drawback is the title. These are not kingdoms. They are positions or perspectives. And it is crucial to transcend a singular one. The issue is just more complex than that.


Is there a relationship between weight and success in PhD programs?

TLDR: No. Details on caveats, background, methods and specific results below.

On June 2nd 2013, Geoffrey Miller suggested on Twitter that there might be a strong relationship between excess body fat and adverse outcomes in PhD programs, mediated by willpower.

Predictably, a firestorm of indignation ensued, leading Miller to delete the tweet, lock his account and apologize.

This unfortunate episode raises a whole host of issues, including whether scientists should tweet (a notion that many scientists, including Miller, answer in the affirmative). Another one is the state of educational attainment in PhD programs – only slightly more than half of those who enter finish with a PhD within 10 years. A third concern is the inherent communication structure of Twitter itself. It is almost impossible to lay out any argument with the necessary nuance. Not in 140 characters. Thus, there are clear biases towards absolute statements. Combined with the potential for global (re)sharing of a tweet, this is a recipe for this kind of thing to happen over and over again, particularly if one were to hold non-PC beliefs (defined as those that are likely to inflame passions). The strong form of Miller’s thesis (likely molded by Twitter’s 140-character limit) could be refuted by a single counterexample, a fact that was not lost on those who took exception to the statement.

This leads to the core of the matter, which is the thesis itself. A lot of people found the claim implausible even at face value. Yet, it is not inconceivable to craft a rationale that is internally consistent. For instance, one could argue that grad school is inherently stressful and taxes one’s willpower to the max (as evidenced by the extremely high attrition rates). If one further allows that willpower might be a finite resource (as is indeed suggested by leading figures in willpower research) and also assumes that increased body fat is a good proxy for lack of willpower, one might pose the hypothesis that those with limited willpower resources might be at risk for higher attrition rates.

However, in the complex world of social science, nothing is ever simple or straightforward. This is not physics. For instance, one could plausibly argue that people will anticipate the extremely high willpower demands of grad school and focus on their studies at the expense of everything else, willpower being a limited resource and all (as an aside, this is what happened to me – I gained 50 pounds in grad school and didn’t even really notice it). Indeed, there are so many potential confounds that one would have a hard time coming up with any clear-cut prediction. Are people who enter PhD programs (a highly self-selective population) as athletic as the rest of the population (on average)? Are they outside as much? (There are known links between weight and Vitamin D levels.) In other words, the link between body weight, the probability of entering a PhD program in the first place and other lifestyle choices is extremely complex, even in the absence of the willpower connection. Ironically, recent research suggests that eating those carbs might *help* one find the willpower to finish that PhD, but only if you believe in it.

So what is one to make of this? The tried and true way to settle issues of this kind in the past several hundred years has been an empirical one. If the question hasn’t already been answered in the literature, collect some data and see which model of reality is most likely, given the data.

As this issue has – to my knowledge – not been answered within the existing literature, one has to collect fresh data. Fortunately, the internet now affords the capability of rapid data collection. I was in the lucky position to obtain a suitable dataset. To be clear, even with the wonders of the internet age, it is much easier to make a sweeping statement than to pursue it scientifically. For instance, obesity (the category in the original tweet) is defined by body mass index (BMI). This is an extremely simple measure that normalizes body weight by height, but it has many obvious shortcomings. There is mounting evidence that body fat percentage, not BMI per se, would make a better marker of obesity (an issue that is compounded by the fact that muscle is much denser than fat, and thus weighs more per unit volume). However, BMI is a measure that is extremely easy to assess, whereas body fat percentage is not. Most people do not have ready access to a DEXA scan or any of the other semi-reliable measurement methods. So BMI is still widely used. A more serious problem is the fact that the data needed to address the issue are the BMI at the beginning of a program, not the current BMI of people who obtained their PhD. However, I would not trust anyone to reliably recall their weight over such long time periods, so the data would have to be collected prospectively – as people enter a PhD program (which is prohibitively costly in terms of resources). A reasonable response? Perhaps to use the data one has, not the data one might want or wish to have, and see how far one gets. An even more serious issue is the self-selective nature of survey participation. Given the intensity of the controversy, combined with the fact that some people probably feel insecure about their weight as well as their educational outcomes in a systematic fashion, this is a serious concern. However, it should be noted that the vast majority of psychological studies suffer from this problem. Samples of convenience are the norm; representative samples are rare. This doesn’t keep people from publishing high-profile papers. And yes, everyone knows at this point (that’s a good thing!) that surveys are not experiments, so the data will be entirely correlational in nature.
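
For reference, and because the weight categories in figure 3 below rest on it, this is the standard BMI computation (a hedged aside; the example values are made up, not survey data):

```matlab
% Standard BMI formula and the conventional WHO cutoffs (illustrative values only).
weight_kg = 80;                       % example weight, not from the survey
height_m  = 1.75;                     % example height
bmi = weight_kg / height_m^2;         % BMI = weight divided by height squared
% Conventional categories: < 18.5 underweight, 18.5-25 normal weight,
% 25-30 overweight, >= 30 obese
```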

Being mindful of all these concerns, what do we have?

167 people who claimed to have started and finished (or dropped out of) a PhD program participated in the survey. Of these, 161 answered questions about both their time to completion and BMI. The data of one participant were excluded because she reported a BMI of 3.8, which is incompatible with life (probably a typo), leaving data from 160 participants in the analysis.

The descriptives (see figure 1) look plausible. This is what I would expect a time-to-completion histogram for PhDs to look like (although I would like to meet the person who claims to have finished within a single year). Given the existing literature on PhD completion rates, the only notable thing is the relative absence of ABDs (all but dissertation). But who can blame them? Perhaps this is a sore subject that they would rather not revisit. BMIs ranged from 18 to 53, which also seems reasonable.

Descriptives

Figure 1. y-axis: Counts. x-axis left panel: Time to completion (PhD). A = Abandoned. Right panel: BMI

So far, so good. Onward to the all-important relational analysis. The correlation between time to completion and BMI is a whopping -0.035, p = 0.67. In other words, there is absolutely no trace of a linear relationship between these variables. More evidence of absence than absence of evidence. Note that one can actually legitimately use a Pearson correlation here, as both scales (time to completion and BMI) have the necessary qualities. However, I will spare us a scatterplot here, as the “time to completion” scale is rather granular and the datapoints basically form a big blob. Put differently, the lack of observed correlation is not due to a few outliers.
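
For anyone who wants to run this kind of analysis on their own data, here is a minimal Matlab sketch. The variable names (ttc for time to completion in years, bmi) are assumptions for illustration, not the original script, and corr requires the Statistics Toolbox.

```matlab
% Relational analysis sketch: ttc and bmi are equally long column vectors
% (hypothetical variable names).
valid  = bmi > 10;                      % drop implausible entries (e.g. the BMI of 3.8)
[r, p] = corr(ttc(valid), bmi(valid));  % Pearson correlation and its p-value
fprintf('r = %.3f, p = %.2f (n = %d)\n', r, p, sum(valid));
```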

Time to completion vs. BMI

Figure 2. y-axis: BMI. x-axis: Time to completion. S: Short (1-3 years, n = 18), N: "Normal" (4-6 years, n = 103), L: Long (7+ years, n = 33), A: Abandoned (n=6). Error bars represent standard error of the mean.

There is no statistically reliable difference in terms of BMI between those who took longer for their PhD and those who finished in a “normal” timeframe. The same goes for those who abandoned their hopes of a PhD (p = 0.38). If anything, one could make an argument that those who finished fastest (within 3 years) were somewhat on the heavier side, but there is a lot of variation in this group. And who knows? Maybe they finished fastest because they knew what they were doing, suggesting that they were older students and statistically likely to be heavier (BMI increases with age, on average).
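
A comparison like the one in figure 2 could be sketched in Matlab as follows, assuming a cell array grp of group labels ('S', 'N', 'L', 'A') alongside bmi; these variable names are assumptions, not the original code, and ttest2 requires the Statistics Toolbox.

```matlab
% Group means, standard errors and a two-sample t-test, in the style of Figure 2.
groups = {'S', 'N', 'L', 'A'};
means  = zeros(1, numel(groups));
sems   = zeros(1, numel(groups));
for g = 1:numel(groups)
    x        = bmi(strcmp(grp, groups{g}));
    means(g) = mean(x);
    sems(g)  = std(x) / sqrt(numel(x));   % standard error of the mean
end

figure;
errorbar(1:numel(groups), means, sems, 'o');
set(gca, 'XTick', 1:numel(groups), 'XTickLabel', groups);
xlabel('Time to completion group');
ylabel('BMI');

% Direct comparison of, e.g., the "normal" and the abandoned group
[~, pNA] = ttest2(bmi(strcmp(grp, 'N')), bmi(strcmp(grp, 'A')));
```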

Now, one could bemoan the fact that most people (more than half of this sample) finished their PhD within 4-6 years and are thus put into a single group here. Besides, the original claim was about obesity. So let’s revisit the data with flipped axes in figure 3.

Figure 3

Figure 3. y-axis: Time to completion in years. x-axis: UW = Underweight (n = 1), NW = Normal weight (n = 89), OW = Overweight (n = 37), OB = Obese (n = 27). Error bars represent standard error of the mean.

Looking at this, one could spin an exciting yarn about how underweight people – spending all their precious willpower resources on staying thin – take much longer than average to finish their PhD. Alas, the underweight “group” in this sample consisted of a single person, so this would not be a legitimate claim. As above, there is no evidence of a linear relationship. A direct comparison between the normal weight group (mean time to completion 5.3 years) and the obese group (mean time to completion 5.0 years) is not significant (p = 0.35), and the “trend” is in the wrong direction.

There were other questions in the survey, such as time since PhD and whether body weight has been stable since. If there were any clear trend, this information would allow one to weight more reliable datapoints (from people with recent PhDs and unchanged weights) more heavily. Similarly, one could do an analysis by gender or field. But I don’t see any reason to water down the power of the dataset by such tessellations at this point, as the dataset is not large enough for that kind of thing. But others can explore this, if they want to. Speaking of power: I expect a charge that this investigation is underpowered. It is. Yet, I would have expected *some* trace of some relationship, if it were real. Also, the power issue is easy to remedy. Simply collect more data. Moreover, it should be understood that this is what could be done in a few days. Certainly, people are free to do that, perhaps with refined hypotheses (although I’m not optimistic about strong effects, given all the above).

To summarize, there is not even a hint of a shred of supporting evidence for the original hypothesis. A total nonresult. Less exciting? Not necessarily. Just ask Michelson and Morley. Note that a non-result is actually meaningful here. One could argue that the self-selective nature of the survey – presumably full of people with an axe to grind who want to stick it to Miller – would produce a strong negative correlation. But that is not what we see. Also, judging from the colorful comments, not all participants were aware of the controversy to begin with. One thing to note about null-results is that just because *this* specific way of probing the relationship didn’t work does *not* mean that there is no possible relationship. This is due to the asymmetric nature of scientific logic and generally true for studies based on inferential statistics that report null-results, not just here. However, the lack of relationship found here is so resounding that I will leave it at that. This should not discourage others who want to explore this further. Another issue is that I – deliberately – stayed away from constructs such as conscientiousness or willpower. Like IQ, their measurement is not without substantial controversy in itself, as there are no objective operationalization criteria. Staying clear of these, I decided to focus on the directly observable quantities BMI and educational attainment, without the need for operationalization, which is legitimate given the initial hypothesis.

Now that we know the empirical side of things, I do not believe we should “break the staff” (as the Germans say) over Geoffrey Miller. We routinely give second chances to people who – by their own admission – lead a parasitic existence, contributing nothing. Miller has contributed important research – and we regrettably all make mistakes on a daily basis, so there is perhaps no reason to single him out. The ethics of casting stones and all that…

Miller aside, there is an even bigger issue at stake here, namely the communication between science and society. Personally, I am only interested in the truth status of statements. I suspect many scientists feel the same way, let the dice fall where they may. That is – however – not how society at large generally tends to react. There is typically a lot of passion, and there are vested interests. People *want* things to be true, or *not* to be true. The dialogue between science and society is notoriously tricky. Since the times of Galileo and Darwin, scientists have routinely come under personal attack for proposing extremely unpopular claims. Sadly, the tried and true approach is to attack the source of the unwelcome statements. And it does work, as the sad case of Semmelweis illustrates. In this case, the claim was not true. But what if it had been? It must be possible for scientists to express what they sincerely believe to be the truth, no matter how inconvenient. But how? How should scientists communicate with the public? How should the public treat scientists? What is the mutual responsibility? Besides the obvious need to show mutual sincere respect, I have no ready answers for any of this and would be curious what people think about it. If you do have a position, please feel free to express it in the comments below.


Local and global connectivity – a tale of two datasets

The original images were generated based on Facebook friendship data as well as data on scientific collaborations from Elsevier’s Scopus. The map of scientific collaborations was itself inspired by the Facebook map – I considered a direct comparison to be interesting. Note that – as far as we know – most brain regions (particularly early sensory areas) exhibit a quite similar connectivity pattern: mostly local connectivity with some long-range connections. In this sense, the external social network seems to replicate the internal one.
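
To illustrate that “mostly local plus some long-range” pattern (a toy sketch, not the analysis behind the maps above), a ring lattice with a small fraction of randomly rewired edges – in the spirit of a Watts-Strogatz small-world graph – already captures the qualitative look of such connectivity:

```matlab
% Toy small-world graph: dense local wiring plus a few long-range shortcuts.
n = 100;  k = 3;  beta = 0.05;           % nodes, neighbors per side, rewiring probability
A = zeros(n);
for i = 1:n
    for j = 1:k
        A(i, mod(i + j - 1, n) + 1) = 1; % connect node i to its next k neighbors on the ring
    end
end
A = max(A, A');                          % make the lattice symmetric (undirected)

[src, dst] = find(triu(A));
for e = 1:numel(src)
    if rand < beta                       % rewire a small fraction of edges ...
        A(src(e), dst(e)) = 0;  A(dst(e), src(e)) = 0;
        newDst = src(e);
        while newDst == src(e)
            newDst = randi(n);           % ... to a random, possibly distant, target
        end
        A(src(e), newDst) = 1;  A(newDst, src(e)) = 1;
    end
end

% Plot the nodes on a circle with their connections
theta = linspace(0, 2*pi, n + 1)';  theta(end) = [];
xy = [cos(theta), sin(theta)];
figure;  hold on;
[src, dst] = find(triu(A));
for e = 1:numel(src)
    plot(xy([src(e), dst(e)], 1), xy([src(e), dst(e)], 2), 'k-');
end
plot(xy(:, 1), xy(:, 2), 'ro', 'MarkerFaceColor', 'r');
axis equal;  axis off;
```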

Comparing scientific collaborations and facebook connections

Above: Scientific collaborations. Below: Facebook friendships

The image speaks for itself. Neither is a strict subset of the other, with interesting systematic differences. The scientific one basically seems to be a mapping between “global cities”.

And they say that there is no such thing as a social scientist…

Update: Flight paths seem to exhibit similar patterns as well.


The current mental health crisis and the coming Ketamine revolution

Few FDA approved drugs have a reputation as controversial as Ketamine. This reputation is well earned. Originally developed in the 1960s as a short-acting anesthetic for battlefield use, in recent decades it has become notorious as a date-rape drug (‘Special K’)*, a club drug (‘Vitamin K’) and for its use in veterinary medicine (‘horse tranquilizer’).

However, I venture to bet that Ketamine is about to be rehabilitated for legitimate human uses. Here is why.

The reason for this belief is a most severe crisis in mental health and our approach to treating mental illness. It is not polite to say this, but given the gravity of the situation it needs to be said: It’s not working out. Not really.

Certainly a strong statement, but this sobering reality is becoming increasingly apparent. Every year, upwards of 10 million people experience a major depressive episode in the US alone. The number of people who never see a professional – and are thus never diagnosed – is likely much higher. Each of these episodes can be expected to last for months to years and is characterized by utter misery; a substantial number are terminal (the number of suicides in the US outnumbers the number of violent non-self gun deaths by about 3:1). Currently available “antidepressants” (a term I would use only in quotation marks, as most struggle to beat placebo except for the most extreme cases and might appear statistically more effective than they actually are) – these days usually a selective serotonin reuptake inhibitor (SSRI) – can be expected to help one in ten people, and only after a trial period of 4 weeks to several months. Of course, if the depression happens to coincide with an underlying bipolar tendency, treatment with an SSRI will likely trigger a dysphoric mania. In other words, one will never reach the end of the interval that the SSRI needs to start working (if it ever does). Instead, one will – at best – need to be hospitalized. So-called “treatment-resistant depression” (in reality, depression that didn’t respond to treatment with a few SSRIs or SNRIs, as MAO inhibitors and tricyclics have largely fallen out of favor due to their side effect profile) is either allowed to take its course (with devastating consequences for the individual) or treated with electroconvulsive therapy (ECT). ECT is remarkably effective in addressing this kind of depression, but the “side effects” (more aptly called treatment effects) like deleterious memory loss are a steep price to pay.

This state of affairs is obviously unacceptable. The good news is that it is increasingly being recognized as unacceptable. The National Institute of Mental Health (NIMH) recently announced that it would no longer fund research based on diagnostic criteria as outlined in the DSM. Psychiatric disorders are the only medical conditions that – for historical reasons – are diagnosed entirely based on subjectively reported symptoms. Doing this for any other condition, e.g. cancer or infectious diseases, would obviously be absurd. Molecular biology has seen to that. In the 21st century, this simply will no longer do, and the NIMH is essentially willing to start over from scratch.

Enter Ketamine. When Ketamine was used as a battlefield anesthetic in Vietnam and the first Gulf War, it was anecdotally noted that wounded soldiers so treated developed far fewer cases of PTSD than soldiers with similar injuries who were treated with other anesthetics. A couple of years ago, systematic studies showed that the vast majority of patients suffering from major depressive disorders made a rapid recovery when given a low, sub-anesthetic dose of Ketamine (most studies show a dose of 0.5 mg/kg to be effective).

Ketamine

The clinical effectiveness of Ketamine in these scientific reports sounds too good to be true, particularly when compared with anything else on the market. The effects take hold within days (if not hours, compared with weeks to months for SSRIs), they seem to work for most people, there seem to be few discernible side effects (bladder issues seem to be a concern, but mostly at “recreational” doses and frequencies of use – in stark contrast to, for instance, ECT), and it seems to be equally effective in the treatment of bipolar depression (which is notoriously hard to treat).

So what is the catch? Given the crushing disease burden inflicted by major depressive disorders, the question why this treatment is not readily available arises immediately.

The issue does not seem to be primarily medical in nature. Ketamine *is* a dissociative anesthetic, so the immediate effects on the conscious experience are rather extreme. Based on reports from people who have received low sub-anesthetic doses of Ketamine, the Kantian a priori categories of space and time seem to unravel shortly after the injection. They report that it becomes obvious that shared reality is a construct – brought about by the normal operation of the brain – but that Ketamine suspends this normal construction process, allowing for different reconstructions of reality. At higher doses, the reports speak of “leaving flatland” and becoming aware of higher-dimensional objects that only appear to be separate when projected onto a low-dimensional space (such as the one we commonly perceive). Whether these experience reports sound scary or intriguing, there is no question that these experiences do not last very long. Given the short half-life of Ketamine and depending on individual metabolism and route of administration (IM or IV), these dissociative effects last for an hour or two, not longer.

Dissociative symptoms from Ketamine

One of the most remarkable figures I’ve ever seen. Dissociative symptoms over time. x-axis is nonlinear. Differences at all time points other than the 40 minute mark are not significant. Adapted from Diazgranados, Nancy, et al. “A randomized add-on trial of an N-methyl-D-aspartate antagonist in treatment-resistant bipolar depression.” Archives of general psychiatry 67.8 (2010): 793.

It is also reassuring that the patients could *answer* the CADSS, so they couldn’t have been too far gone. Intriguingly, while there was no significant difference between the placebo and ketamine groups except at the 40-minute mark, *all* the mean scores of the placebo group seem to be slightly above those of the ketamine group. The CADSS features items like: “Do objects look different than you would expect?”. It is not inconceivable that the experience properly calibrated the scale for the Ketamine group.

Obviously, there are no studies on long-term effects in humans at this point, but studies on monkeys are encouraging. Studies of animals that received comparable doses daily over long periods suggest that the dose has to be relatively high and the exposure relatively long before impairment can be demonstrated. Moreover, the doses involved in the treatment of pain are usually considerably higher, without reports of long-term ill effects. To be clear: *Any* potential neurotoxicity is obviously cause for serious concern. However, all medical decisions involve tradeoffs. Depression in itself is increasingly linked to neurotoxicity. Moreover, the spectre of neurotoxicity can lurk where one least expects it, e.g. from antibiotic treatments. In addition, Ketamine seems to potentially *reverse* depression- (or stress-) induced brain damage via synaptogenesis. The moral of the story is that one should not embark on a course of medical treatment unless the expected upside (far) exceeds the expected downside.

Of course, there is a rub. More than one, actually. The two biggest and uncontested issues seem to be unclear mechanism of action as well as sustainability.

The first issue is that we do not really know or understand how Ketamine brings about its antidepressant magic. Many theories are currently being explored in active research. Some of the hottest trails are NMDA antagonism, neurogenesis and dendritic sprouting. Personally, I believe there might be something to the notion that one is “growing a prefrontal forest”, strengthening prefrontal networks that are in turn able to better quench aberrant activity originating in evolutionarily older structures (amygdala, limbic system, etc.). One way or the other, glutamate seems to be involved. While it may sound unsettling that we do not understand the mechanism of a treatment, this situation is far from unique. As a matter of fact, we do not understand the mechanism of action of *any* anesthetic drug. Similarly, there is plenty of evidence that even SSRIs don’t work the way we thought they did. There is mounting empirical support for the notion that the serotonin action is largely incidental, and that neurogenesis is really behind their therapeutic effects.

The second issue – that of sustainability – is more serious. Curiously, the effects don’t seem to last. While Ketamine can rapidly pull someone out of a serious depression, the depression seems to return in time, requiring a “booster” shot to banish the demons again, if only for a while. The time until relapse differs from individual to individual, ranging from weeks to months, but it is intriguing that there is a time constant at all.

However, there are many chronic diseases that require daily administration of medications, including injections. In this regard, Ketamine isn’t even all that different from most other antidepressants, which are – more often than not – a pretty permanent deal. Many have to keep taking them, for fear of relapse. The real reason why these treatments are not more readily available seems to be economic in nature.

Ketamine is an extremely cheap drug, as it has been off patent for almost 40 years. Put differently, there is no money to be made here. The FDA is tasked with protecting the public from harmful treatments. Thus, the approval process is lengthy and costly. In reality, only a major pharmaceutical company has the financial resources to spearhead the approval of a drug. In the case of Ketamine, this is unlikely to happen, as these companies could never realistically expect to recover their expenses. To be clear, Ketamine already *is* FDA approved, but not for the treatment of depression. There are already some courageous pioneers who will administer Ketamine today (for a king’s ransom) in its off-label use for depression. This is not in itself unusual. Many drugs – once FDA approved – are prescribed for off-label uses. For instance, Modafinil was initially approved for the treatment of narcolepsy. Today, the vast majority of prescriptions are not issued by neurologists for narcoleptics, but rather by primary care physicians for people who feel a little tired – or simply because people want to use it. So in principle, Ketamine already is available for the treatment of depression, off-label. However, the overwhelming majority of psychiatrists are unlikely to touch a drug that is a PCP derivative and needs to be injected, no matter how effective. This problem is not unique to Ketamine. Once a drug has acquired a certain infamy, it is hard to change minds. For instance, Thalidomide is now being explored as an effective cancer treatment, precisely because it’s such a potent inhibitor of angiogenesis. But that’s a hard sell, given its historical record.

Where does this leave us? In an uncomfortable (as millions are suffering right now) but hopeful position. The evidence for the antidepressant effectiveness of Ketamine (for whatever yet-to-be-understood reason) is so overwhelming that quite a few pharmaceutical companies are feverishly working on Ketamine analogues and delivery methods (e.g. nasal sprays) as well as alternative NMDA modulators that *can* be patented and thus would be worthwhile to put through the highly demanding FDA approval process. Preliminary results are so promising that one can reasonably hope to have truly effective antidepressants available within another decade or so. If this happens, a mental health revolution will be at hand. And it will be sorely needed. Having rapidly acting and unequivocally effective antidepressants widely available and covered by insurance (akin to antibiotics) will make all the difference.

Note: There is no question that Ketamine is a crude drug when it comes to addressing the pathology that underlies depression. Nevertheless, it is a promising and encouraging start, not necessarily the end. Further drug development will need to home in on the underlying biological target systems (which is why a mental illness classification based on biomarkers is so sorely needed). Moreover, it has not escaped my notice that this discussion has focused on psychopharmacological aspects of depression. There are other aspects that are social, psychological and nutritional in nature, among other things. Etiology is likely complex. Breakdown in social coping structures? Sedentary lifestyles? Overfeeding? Intense and chronic stress? Extreme social competition and comparison? Sleep deprivation? Light pollution? Hormone disruption? There certainly is a discussion to be had about these aspects, but not now. It will take a while for research to disentangle these causal links. Meanwhile, it is important to lighten the burden of disease. Thus, the focus was deliberate, and we will save a deliberation of the other factors for later.

On a final note, it is quite unsettling that virtually all truly effective treatments for mental disorders (e.g. Lithium, ECT, Ketamine, etc.) and psychoactive substances in general (e.g. LSD, Benzos, etc.) were discovered entirely by chance – by pure serendipity. Conversely, all mental health treatments *designed* to do a certain thing, based on our current understanding of the nervous system (e.g. SSRIs, but not just SSRIs – one can always do worse), have basically failed to deliver. This suggests that we do not currently understand the nervous system very well. Recognizing this should break a lance (as the Germans say) for pioneering (some call it basic, but there is nothing basic about it) neuroscience research.

PS: This is another installment in an ongoing series on how language really does matter. Most people suffering from depression would reasonably turn to antidepressants, not dissociative anesthetics, for help. But just because marketing calls them that doesn’t make it so (whatever one’s position, they are on average so ineffective that a vigorous debate about whether or not they are more effective than placebo is even possible – a debate we didn’t see regarding the effectiveness of penicillin. Regardless, there are so many vested interests involved here that the debate is likely to go on. So it is certainly premature to write “Listening to Ketamine”, even as people are sick of listening to overhyped BS. Yet, the suffering of the suffering is so severe that I remain hopeful that reason will prevail in the struggle to make depression history. The notion of having effective antidepressants available is a powerful one). As it turns out, the antidepressant effects of some dissociative anesthetics are in all likelihood much more potent than those of current “antidepressants”. Go figure.

Be that as it may, it is downright scandalous to have people suffer every day – some of them killing themselves – while a solution that is safe (at the right dose) is readily available, yet not allowed to be used. That this is even a possible – and ongoing – state of affairs does not instill confidence in the way things are in general. Downright scary, actually.

Update: The story has now hit the mainstream. Also, there is now solid evidence that repeated low dose administration of Ketamine seems to keep depression at bay, akin to maintenance ECT and without mounting side effects.

Update: In the piece, I expressed surprise at the fact that patients have not been more forceful in advocacy and outreach, demanding the FDA approval of this treatment on an emergency basis. This now seems to be happening. The website also compiles a – growing – directory of health care providers willing to administer Ketamine infusions. This matters, given how bad the suffering can be – note this case of a woman who endured 273 (mostly bilateral) ECT treatments without an appreciable effect on the depression, but experienced dramatic and sustained effects from low-dose Ketamine infusions on a maintenance schedule every 3 weeks. Note that dosage seems to be *critical*, with a very narrow therapeutic range between 0.4 and 0.6 mg/kg.

*It is possible that Ketamine acquired this reputation somewhat unfairly, as legislators might have confused it with the date-rape drug GHB when passing emergency legislation in August 1999 (classifying it as a Schedule III substance in the US). Interestingly enough, GHB is now legally available as a drug for narcolepsy, Xyrem.


Dress images

[Images: TheOriginalDressImage, giphy, LandY]


Can music elicit a visual motion aftereffect?

Briefly, if you look at a large moving scene for a while, you will afterwards experience stationary things as moving in the opposite direction. This “motion aftereffect” was already known to Aristotle, presumably from the visual inspection of waterfalls. It was rediscovered by Purkinje in the 19th century, on the occasion of witnessing a cavalry parade. Now, we were able to show that listening to ascending or descending musical scales produces a visual motion aftereffect in the expected (opposite) direction.

A cavalry parade, much like the one that inspired Purkinje and launched a thousand papers.

I anticipate getting a lot of grief for this, but I consider the study to have been executed in a valid and rigorous way, and it contributes to the often neglected study of multimodal effects – let alone multimodal higher-order effects. But they do exist.

Hedger SC, Nusbaum HC, Lescop O, Wallisch P, Hoeckner B (2013). Music can elicit a visual motion aftereffect. Attention, Perception & Psychophysics.

What is a motion aftereffect (MOA)? Have a look here or below.

MOA

To see a motion aftereffect, look at the red dot (Warning: Browser-dependent, the effect might be a little subtle if the animation renders in a choppy fashion.)
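
For readers who would rather roll their own demo, here is a rough sketch in plain Matlab (no Psychtoolbox, so timing and smoothness are only approximate – which, as noted above, also limits how strong the effect will be): a grating drifts for a while, then a static test pattern is shown, which tends to appear to drift the other way.

```matlab
% Crude motion aftereffect demo: adapt to a drifting grating, then view a static one.
[x, ~] = meshgrid(linspace(-pi, pi, 300));   % 300 x 300 pixel image
sf = 4;                                      % spatial frequency (cycles per image)

figure('Color', 'k');
colormap gray;
for frame = 1:600                            % adaptation period
    grating = sin(sf * x + frame * 0.2);     % advancing the phase makes the grating drift
    imagesc(grating, [-1 1]);
    axis image;  axis off;  hold on;
    plot(150, 150, 'r.', 'MarkerSize', 30);  % fixation dot - keep your eyes on it
    hold off;
    drawnow;
end

imagesc(sin(sf * x), [-1 1]);                % static test: note any illusory drift
axis image;  axis off;  hold on;
plot(150, 150, 'r.', 'MarkerSize', 30);
hold off;
```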

Here are two figures that didn’t make it into the paper. Consider them as supplementary material. For an explanation of what the axes mean, etc. – see the paper itself. It won’t make much sense to go into that here without the context of the experimental setup.

The basic effect

Psychometric functions


Bang or BAM? On respecting complex problems

There are simple problems that can be solved with a single bang. The task of understanding the (human) brain is not a simple problem. On the contrary, the classic quote

The brain, the masterpiece of creation, is almost unknown to us.

attributed to Nicolaus Steno in 1669 is – by and large – still very much true today. This is owed to the fact that brains – let alone human brains – are essentially unparalleled in terms of their complexity, both in their structure and in their function (the activity patterns they are able to produce).

If anything, the past 150 years or so of “modern” research on the brain give us an appreciation for the magnificent scale of the complexity. A single synapse is awe-inspiringly complex. Each single neuron typically has thousands of such synapses (in addition to quite a few other functional parts). Each local circuit contains a plethora of many different varieties of neurons. In the interest of brevity, I will skip a few levels of organization here, but a whole (human) brain contains on the order of 100 billion neurons (or slightly fewer – regardless of the precise number, that is about 10 for every single person on the planet, or in the neighborhood of the number of stars in the galaxy), along with roughly an order of magnitude more glial cells, which might play functional roles as well, all organized in intricate functional structures, constantly producing dynamic electrochemical activity patterns that in turn change the structure of the synapses and neural connectivity that produced them.

From this concise outline, it should be obvious that understanding the human brain (or any brain for that matter) is not a simple problem.

A great deal has already been written about the Brain Activity Map project (BAM), so I will make this short.

As welcome as the money will be to researchers who have gotten used to funding rates in the single digits in the past decade, it is important to be realistic about what 3 billion dollars can buy you.

Depending on the number you use (total program cost, procurement cost, unit cost or flyaway cost), a single Northrop Grumman B-2 Spirit bomber costs on the order of 1-2 billion dollars. For the purposes of this argument, I think it not unreasonable to argue that 3 billion will buy you two operational ones. This is a conservative estimate. What matters for the purposes of this calculation is the ultimate cost to the taxpayer, which is – if anything – higher.

The US made 21 of them (20 are left, after one of them crashed in Guam in 2008); over 130 were originally ordered at the end of the Cold War, but the order was cut back to 21 after the fall of the Soviet Union. Ever since they became available, they have participated in most major campaigns, e.g. pacifying Yugoslavia in the late 1990s.

BANG or BAM? Can we understand the human brain for the cost of these?

As awe-inspiring as these marvels of advanced technology might be, and as unparalleled as their ability to rain down death and destruction with impunity might be, they are designed to solve relatively simple (as in tractable and straightforward) problems.

To meaningfully understand the structure and function of the human brain, I think it will take money of the order of magnitude that would buy a fleet of B-2 Spirits large enough to blacken the sky. And that is just the funding aspect of it. Money will be necessary, but it won’t necessarily be sufficient. Could the Manhattan Project have been pulled off without decades of “basic” research (there is nothing basic about basic research) and some fortuitous insights by Einstein himself? Probably not, no matter how much money one threw at the problem. There is a place for a “targeted science” approach in neuroscience, but I will focus on this in another piece.

It is not a bad thing to have challenging goals. It is also not a bad thing to spend money on research (effectively spending money on understanding the world around us). But it is important to be realistic about the magnitude of the challenge and the magnitude of funds devoted to it. There needs to be a balance. If not, money is either wasted, or one sets oneself up for failure, or both.

Human societies can accomplish a great deal if they put their mind to it (in close analogy to reliable computation with unreliable components). We got Buzz Aldrin to the moon and back, safely. But we could not protect him from being ravaged by depression for decades afterwards. Understanding the brain matters, and we do not understand it yet. Our ability to mitigate suffering originating from the human brain is at the very beginning. For instance, it has been suggested that current therapies for clinical depression (most based on manipulating levels of monoamine neurotransmitters) are beneficial for only one in ten patients.

We did build the B2 bomber. We do not yet understand the human brain. I think the latter can be done, but we need to get our priorities straight and have the scale of our challenges match the scale of our resources. This is a political discussion – the US annually spends hundreds of billions of dollars on defense. It is not a foregone conclusion that money spent on keeping world peace is necessarily misspent, but there are many competing policy goals. In times of scarce resources, nothing should be sacrosanct, all options should be on the table.

I’m all for getting serious about understanding the human brain. But in order to do so, we need to actually get serious about understanding the human brain.

What do we want? A bang? A big bang? A BAM? Or rather, a really, really BIG BAM?

If one wants to begin to understand the human brain or – with apologies to Churchill – end the beginning of understanding the human brain, one could argue that we need a really big BAM, or more than one.

From what we already learned about the brain, it is clear that the gains in understanding are worth the tremendous cost and effort. However, it is equally clear that it won’t be possible to gain substantial further understanding on the cheap or quickly.

Adequate relative scaling matters. For prospects of success.

Does the brain deserve some respect?

Regardless: How much would an actionable understanding of brain function be worth to you? How much would you be willing to pay, as a personal share?

Update: It now seems that the numbers are in. Way to politicize neuroscience for $100 million. For comparison, people in the US paid 320 times that in overdraft fees in 2012 alone. It is not unreasonable to think in dimensions that scale with the magnitude of the problem, even if the absolute numbers are starting to get a little high. The total projected life-cycle cost for the F-35 program is just over 1.5 trillion dollars. The US built over 12,000 B-17 bombers to win WW2. They also built a large number of B-24s, B-25s, B-26s and B-29s. It is all about how serious one wants to be about addressing an issue. An estimated billion people suffer from some kind of brain disorder right now. To say nothing of the untold billions who will suffer from one in the future. So even a multi-trillion dollar investment might not be a bad one in terms of reduction of suffering, if it yields anything tangible. Can we afford it? Can we afford not to do this? Some people wonder how people in the dark ages got by, without the benefit of understanding the source of all their troubles with regard to bacteria and viruses. I wonder if people in the future will ask the same question (mutatis mutandis) about us.

That said, it is undoubtedly true that we are in urgent need of new tools to probe the brain, as our available methods are woefully inadequate for the task. And the new acronym – BRAIN – is also much improved. Hedged excitement as the most suitable way to move ahead?

Regardless, one needs to be cautious of overselling. The history of AI research provides a cautionary tale, showing that in the long term, overly hyping something without being able to deliver inevitably leads to backlash, which leads to funding cuts (“winters”).


PSA: Your “sleep monitor” is probably anything but

As the “quantified self” (probably ill-named) movement gains steam, all kinds of apps that purport to measure important health-related physiological parameters are gaining popularity.

In principle, this development is to be welcomed: individual lifestyle and metabolism are so heterogeneous across the population that most scientific studies on the matter are too noisy to offer more than very limited benefits for optimizing individual lifestyles.

Having more data available is almost always to be welcomed, as it can inform decision making about lifestyle choices far beyond common sense (which is actually far less common than commonly assumed) and old wives’ tales. Moreover, the medical profession is more geared towards limiting downsides and minimizing suffering from acute harm than towards optimizing the upside of life. Preventative medicine is in its infancy.

However, these developments naturally also harbor significant risks. The only decisions worse than those made randomly are those based on bad data, as they tend to be systematically wrong, yet are defended with conviction.

Most scientists are properly trained in the gathering and interpretation of data, as this is how they make their living. As data collection and interpretation move into the mainstream, familiar (to scientists) concerns about the objectivity, reliability and validity of data come to the fore.

Giving a primer on research methods is beyond the scope of this piece (however helpful it might be). Instead, we will focus on one recent – alarming – development.

As we have observed before, language matters. Tremendously. This is also the case here.

In a nutshell, all devices that measure physiological parameters rely on proxies (which are what is actually measured). The validity of the measurement crucially relies on the tightness of the link between the proxy and the quality of interest. This can be tricky if the quality of interest is psychological or neurological in nature. For instance, current and past “lie detectors” are really only called that. In reality, they measure skin conductance changes, which are used as a proxy for sweat gland activity, which is used as a proxy for sympathetic nervous system activity, which is used as a proxy for the probability that someone is lying (because something is personally significant or unsettling). In principle, this is sound, as the autonomic nervous system is not under the voluntary control of most people. However, that’s a lot of proxies, so the link is tenuous at best.

The same is true for sleep, which is still for the most part a mystery. Even if you go in for a clinical “sleep study”, the way sleep is currently assessed is by “polysomnography”. Briefly, several sensors measure the electrical activity on the scalp (via EEG), muscular activity (via EMG), eye movements (via EOG), breathing rate, blood oxygenation and perhaps some other parameters. Of these, EEG, EMG and EOG – in combination – are most diagnostic of gross physiological states, such as sleep. For instance, an EEG that is dominated by high-amplitude, low-frequency (delta) waves characterizes deep sleep. But things can get tricky. The EEG patterns during REM sleep are relatively low-amplitude and high-frequency and can look quite desynchronized. To the untrained observer, it would be hard to distinguish REM sleep from wakefulness based on the EEG alone. That’s where the other parameters come in. During REM, the EMG will show a lack of muscle tone (there is actually an active muscle paralysis going on) whereas the EOG will show characteristic rapid eye movements, both of which set it apart from the waking state. Similarly, it is hard to distinguish deep sleep from REM sleep if one were to look at the EMG alone. There isn’t much movement during either phase, even though these sleep states couldn’t be more different in any other way.

Polysomnography

The gist of this is that one really needs all three parameters in order to properly characterize sleep stages, as they are defined now (as we don’t really understand sleep yet, I anticipate that further, neurotransmitter-based metrics will come into common use in the future). At a minimum, one cannot forgo the EEG, as one won’t be able to distinguish REM from deep sleep without it. There is a reason it is called polysomnography: many parameters need to be measured in order to characterize sleep.
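
To make this concrete, here is a toy Matlab sketch of the kind of rule a sleep stager needs. The feature names and thresholds are illustrative assumptions, not a clinical scoring algorithm – the point is merely which signals have to be on hand.

```matlab
% Toy staging rule - illustrative thresholds only, not a clinical algorithm.
deltaPower    = 0.2;     % relative EEG power in the delta band (example value)
emgTone       = 0.05;    % residual muscle tone from the EMG (example value)
rapidEyeMoves = true;    % does the EOG show rapid eye movements?

if deltaPower > 0.5
    stage = 'deep sleep';             % high-amplitude, low-frequency EEG
elseif emgTone < 0.1 && rapidEyeMoves
    stage = 'REM';                    % desynchronized EEG + atonia + rapid eye movements
elseif emgTone < 0.1
    stage = 'light sleep';
else
    stage = 'awake';
end
disp(stage);

% An accelerometer sees only gross body movement: it cannot tell REM from deep
% sleep (both are nearly motionless), which is exactly the problem described below.
```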

And herein lies the rub. In order to measure the EEG, one has to get an electrode on the scalp. With the advent of wireless technology, brave companies like ZEO have pioneered this approach. While the results fall far short of full polysomnography – as one would do in a clinical setting – they are quite impressive. It is remarkable what modern signal processing can do with a single electrode. Their correlations with polysomnographic recordings are high, implying a high reliability. This kind of “at home” sleep measurement capability opens up the potential for all kinds of investigations, both for research and personal use.

But this technology is not cheap and however important sleep might be, most people turned out to be reluctant to shell out much money for its measurement. Moreover, the ZEO device necessarily required a headband to be worn, and most people couldn’t be bothered. Consequently, ZEO (the company) is struggling for survival.

In contrast, all kinds of smartphone apps and devices that rely on accelerometers – which can be had for a few dollars – are booming. It is understandable that people want to minimize cost and don’t want to bother with headbands, instead utilizing devices that they have anyway for this purpose (e.g. phones, calorie trackers). However, one should *not* confuse these devices with sleep monitors, as they do no such thing. Claims that they allow one to “track”, “monitor” or “measure” sleep are disingenuous at best.

Measuring sleep is not trivial

Inferring sleep from vibrations on the surface of the bed is at least one step too far. A myriad of factors could confound these measures, all of which should be obvious, such as pets jumping on the bed, sharing the bed with a partner, etc. Most serious is the insurmountable hurdle of distinguishing REM sleep from deep sleep based on accelerometer data. Most currently available apps just lump the two together and call it “sleep depth”. This is at best inaccurate (as the two could not be more different physiologically) and at worst dangerous. For instance, it has been shown that a lot of deep sleep is restorative in terms of physical exertion, whereas too much REM sleep is not necessarily a good thing. Instead, it can be indicative of a major depressive episode. Sleep disturbances like that accompany almost all mood disorders. It is an extremely disconcerting prospect that someone with such a disorder could rely on measurements from such a pseudo-sleep-monitor to reassure themselves that their sleep is just fine, when it really is not.

To summarize, actigraph-based metrics can at best measure the quantity, but not the quality, of sleep. Claiming otherwise is (deliberately?) misleading.

No one likes criticism. But in this case, it is warranted. It is understandable why people do what they do, but here I must stand fast. I’m not sure if it will make a dent, but it is imperative to stem this quite unfortunate development.

Of course, there is no harm in people using these cheap accelerometer-based devices if they understand their limitations. However, given how these are currently marketed, I doubt that most people are aware of this problem. It would be a good start to stop calling them “sleep monitors”.

It is up to every individual how seriously they take the development of their own life. As long as they can live with the outcome. Because they will have to live with it.

Update: ZEO has lost the struggle for survival. It is a terrible shame that cheap actigraphs killed the only device that came close to a home sleep monitor. Hopefully, this is just an indication that ZEO was ahead of its time and not indicative of how serious people are willing to get about sleep. Perhaps people *are* rational after all, though. If the vast majority of users is unable to interpret a hypnogram anyway, it makes sense to go with the option that is cheaper, more convenient and – apparently – simpler to interpret. As detailed above, this can – however – be dangerous, as excessive REM sleep is far less refreshing than commonly believed.

In the meantime, I bet there is a sizable minority who would be happy to spend quite a bit for genuine home sleep-monitoring capabilities sensu ZEO. What to do? Kickstarter to the rescue? Maybe mattress manufacturers could sponsor it – if their marketing claims are true, one *should* see a reliable improvement in sleep, as measured.

What ZEO could have given you

Posted in Data driven lifestyle, Optimization, Pet peeve, Social commentary, Technology | 7 Comments

Meet the netmonger – could it be you?

Netmongers (Excerpt from “Advice for a modern investigator”, chapter 5. Elsevier Press, 2011)

The most striking observation regarding this category is that it was entirely absent in the 1898 edition of this book. Meanwhile, it has become by far the most prevalent type, crowding out all the others. In terms of etiology, this can perhaps be explained by the fact that the prevalence of netmongers is closely tied to the presence of certain technologies, namely computers and internet access. In places where these are missing, netmongers are absent as well. Conversely, if fast internet access is available, all other types effectively convert to netmongers, particularly contemplators and bibliophiles, but also misfits and theorists.

What characterizes this – by now – extremely common and serious disease of the will?

Netmongers can be described as exhibiting pure akrasia. It is not the case that the netmonger is apathetic or lethargic. On the contrary, netmongers exhibit sustained and frenetic activity for long periods of time, during which they utilize the most powerful productivity tools ever conceived.  Moreover, they begin the day with the best of intentions to further their important scientific work. Yet, at the end of the day, they leave their place of business without having accomplished anything of relevance.

How is this possible?

This is best explained by looking at the typical day of the netmonger. When he arrives in his office bright and early, he first has to check his facebook page. After that, he checks his email for messages that might necessitate urgent attention. Both activities are repeated every 15 minutes, just in case. The netmonger has a lot of friends, so besides the link to the hilarious video on youtube, which he just watched, there is always something so extreme that he must share it with the rest of the world. Thus, he apprises his large crowd of followers of what is going on by posting it on twitter. Invariably, he receives numerous immediate responses, some of which refer to the latest blogs that are relevant to the issue at hand. Due diligence requires that he read them carefully before crafting a response. By now, some dissenters have voiced their opinion, and wikipedia has to be consulted in order to resolve the dispute. Meanwhile, new messages have arrived, and the circle begins afresh. After doing this for several hours, the netmonger is exhausted and takes a break, during which he reads CNN and the New York Times. Without fail, there is always a curious story to be found, which is itself linked to other stories that are of interest to netmongers. He cannot resist reading these as well. But even more interesting than the stories themselves are the 2500 comments associated with them, some of which are simply outrageous and require a ruthless rebuttal. Of course, there is so much to read that the netmonger is often forced to skim. Consequently, he remembers only very little of what he just read. At this point, there are now 35 browser tabs open and it is almost time to go home. In any case, it is now too late to start something serious. Having spent all day at work, the netmonger indulges in a little guilty pleasure – cracked.com – before going home.

The netmonger is hopelessly caught in the web

The tragedy of the netmonger is that this disease of the will mostly affects people who are genuinely attracted to information. Of course, the internet supplies information in such copious, interconnected and ever-changing amounts that its depths can never be plumbed, not even by the most severe netmonger.

Posted in Life, Social commentary, Technology | Leave a comment

Neurotycho – A farewell to an old man

Tycho Brahe was once a Danish Nobleman. Do not think that I am very much impressed by that as a title of nobility, but it meant a lot to Brahe. He cared nothing for nobility itself, in fact he disliked it, but he learned painfully and thoroughly that it pays the bills. Tycho had been observing the orbits of planets for over 20 years. He was really quite good at it. He was so good that the Danish King had been funding his observatory for this entire period, including galleries and libraries and printing presses and a papermaking works and assistants. I never met anyone as conceited as him. You heard that he got his nose permanently cut off. It was the same attitude that led him to lose his observatory and estate. After the King cut his funding, he had been thinking for months about leaving Denmark. He hadn’t done it because it would be too cruel to deprive her of himself, so it was a very healthful shock when King Christian finally made it clear that he was no longer welcome in the realm.

The next two years, Tycho spent in travel, first to Wandsbeck, near Hamburg, then to Prague, having left his family in Dresden. Tycho was in Prague at the invitation of Rudolph II, Holy Roman Emperor. Tycho was made Imperial Mathematician – a misnomer if I ever saw one – after an audience with the Emperor, who provided new funding to build an observatory. You have to understand that in those days, Prague was not the city that it is today. It was only the stench of the streets, not the city walls, that kept the Turks at bay. You can imagine how a man like Tycho felt about all that. He set up his observatory in a remote castle three dozen miles northeast of the capital. That way, he also avoided the plague that was ravaging Prague at that time.

The funding source

That is where we first met. You see, my father Heinrich Kepler was a traveling mercenary. He had a distinct fondness for hard liquor. My mother Katherine believed deeply in the healing powers of herbs. She was prosecuted as a witch. When I was a child, I caught smallpox, which left me with permanently poor eyesight. Under these circumstances, I am not at liberty to be too particular about my employ. So I make do as a theorist and modeler, also casting horoscopes on the side to supplement my income.

I first met Tycho in February of 1600, when he was almost twice my age. You would think this to be a promising meeting. Tycho had assembled the most comprehensive and accurate body of astronomical data in history. Yet, he had never analyzed any of it. He needed to do so in order to create a new set of tables for planetary positions, the “Rudolphine Tables”, named in honor of the Emperor. I had never in my life laid my hands on any actual data, but was confident in my abilities to model them. I also had a burning desire to get a paying job. I tried to convince Tycho to appoint me as his equal in the quest for the tables. Then, he could simply give me his collection of data and I could model the planetary orbits. But things did not work out that way. All I could manage after a year of negotiations was to attain a position as his assistant. Also, he did not give me free access to his data. I cannot imagine why. He knows of my skill. On top of all this, he calls me ungrateful behind my back and to my face.

I first became aware of his grave condition when I was summoned to his deathbed on October 24th, 1601, less than a year into my appointment.

Tycho and I

Tycho: “I don’t trust you, and you are ungrateful, but you have the greatest mathematical ability of all my assistants. I want you to complete the Rudolphine Tables. You have full access to my entire collection of data. I compel you to put it to good use”.

I: “Thank you, Tycho”

Tycho: “Kepler, you must ensure that I did not live in vain”.

I: “I will, Tycho”

Immediately after this exchange, Tycho died.

Such is the relationship between modelers and experimentalists. It really is an intriguing story. Tycho collected data for 40 years without ever analyzing any of it. Before the pressures of conferences, before publish or perish, before grants. He then is forced to give it to a modeler who – despite his own preconceived notions – figures out what is going on.

Edit: Apparently, “Neurotycho” is a real thing. Who knew?

Posted in Misc, Science | Leave a comment

The shining city

The city of light was attacked by the forces of darkness, but the lights prevailed.

Beaming brighter than ever.

The shining city

Encouraging. Perhaps reason and virtue will prevail after all, checking the barbarians at the gate as well as subduing the barbarians within.

Posted in Misc | Leave a comment

A statistical analysis of Olympic outcomes of the past 28 years

Now that the excitement over the Olympics has abated a bit, it is time to reflect on the outcomes to see if we can discern any long term trends.

Most casual observers seem to be most interested in the outcomes (=medals) as a function of nationality. This is not surprising, as it is perhaps taken as a proxy for the relative competitiveness of the individual’s home country. Therefore, we will focus our analysis on these metrics.

Based on medal counts alone, it can be hard to factor out extra-athletic political developments, which is why we limit the time frame to the last 7 olympics, going back to 1988. Before that, cold war events such as the large scale boycott of Moscow in 1980 or Los Angeles in 1984 by the “other” political bloc led to obvious distortions of the outcomes. For instance, the US won 88 gold medals in 1984, or more than 35% of the total – a highly unusual result, which is likely due to the absence of most athletes from the eastern bloc. This yields an analysis period of 28 years, sufficiently long to discern long term trends.

For these last 7 Olympic Games, figure 1 shows the basic results.

Figure 1. Total medals per nation and year

It may look complicated, but there are some clear trends. Before we go further into that, let’s discuss some necessary preliminaries. Seven countries were picked, based on them being relatively successful (in the top 10 of countries at least once during the time period) and on them showing some development. Nations that were extremely stable – e.g. South Korea, which has consistently won around 30 medals – have been omitted, as have nations that basically show the same general trend as those plotted (e.g. Romania and Hungary show the same trend as Bulgaria, which is shown). Restricting the figure to seven nations allowed the use of seven distinct and easily recognizable colors. I did want to include more, but the graph was already busy enough as it is. Some other remarks about the mechanics of the figure: In the spirit of keeping politics at bay as much as possible, “Soviet Russia” includes the Soviet Union in 1988, the “United Team” in 1992 and the Russian team after 1992 (to indicate legal and organizational continuity). “Germany” combines both East and West Germany in 1988 and unified Germany thereafter. “UK” refers to the team that calls itself Great Britain, further adding to the confusion around the Great Britain/UK distinction. That designation was adopted in 1908 to avoid an Irish boycott, but Ireland has fielded its own team for a long time now and the name is unfair to athletes from Northern Ireland – the anachronism is simply incorrect, which is why the team is labeled “UK” here.

Two major trends become apparent:

1. Communism/central planning boosts medal counts. It is hard to distinguish between the two, as basically all remaining central planning regimes are also communistic in nature. The dramatic decline in medals from countries that abandoned communism (Germany, Bulgaria, Soviet Russia as well as others not shown) contrasts with the continued success of countries that retained it (China, and others not shown, e.g. Cuba, North Korea). The effect is very real, but it is unclear how to interpret it. Do communist countries value the olympics more (perhaps as an outward sign of pride)? Does sports performance lend itself to central planning? The German results highlight that this might be so. In 1988, East Germany won 2.5 times the medals that West Germany did, despite the West having almost 4 times the population at the time. In other words, based on population alone, East Germany was outperforming West Germany by a factor of almost 10:1 (see the quick back-of-the-envelope check after this list). Note that this happened in spite of the famed abundance of economic resources in the west, compared to the east.

2. The outcomes of the UK, Australia and China show that countries that host the olympics see their results boosted not only in the year in which they host the olympics itself (which could be due to some kind of home advantage), but also in the run-up to the olympics (maybe reflecting an increased allocation of resources to athletics in a given country in preparation for the big event, perhaps not uncorrelated with the fact that they were awarded the olympics in the first place). This is consistent with the steep rise in UK performance, which began even before the 2012 olympics were awarded to London in 2005 and is likely due to direct investment from the proceeds of a lottery program. Also, the performance of Australia suggests that hosts might be able to put some persistent sports infrastructure in place, which allows them to retain most – but not all – of their gains. In this sense, it might be interesting to watch the Chinese performance in the future. The rise of China has led to much insecurity in the US, but it remains to be seen whether this rise is sustainable, or whether the strong Chinese showing in 2008 was simply due to a confluence of several of these factors.
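As a quick back-of-the-envelope check of the East vs. West Germany comparison in point 1, the per-capita ratio follows directly from the two rounded ratios quoted above (a minimal sketch; it deliberately does not use exact medal tables or census figures):

```python
# Per-capita comparison, East vs. West Germany in 1988, using only the
# rounded ratios quoted in the text (not exact medal or population figures).
medal_ratio = 2.5       # East won ~2.5x as many medals as the West
population_ratio = 4.0  # the West had ~4x the population of the East

per_capita_ratio = medal_ratio * population_ratio
print(f"Medals per capita, East vs. West: ~{per_capita_ratio:.0f}:1")
```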

Could it be argued that a focus on total medals is perhaps misleading? Is a bronze medal really as reflective of top athletic performance as a gold medal? Is second best good enough? Purists would say no. The standard of the IOC itself is unequivocal: When used in rankings, a gold medal is infinitely more valuable than a silver one. Gold is always first. A nation which wins 1 gold medal but nothing else will always be ranked above all countries that didn’t win a gold medal, regardless of the number of silver or bronze medals won. However, this is likely to be mostly Spiegelfechterei (shadow-boxing). There is scant evidence that the distinction between medals is even reliable. Intuitively, in a field of extremely talented and competitive athletes, sheer luck or daily form might supply the last 0.01% of performance differential that makes the difference between gold and silver these days. This intuition is substantiated by statistical analysis. As figure 2 shows, the correlation between the total medal count (of a country) and the total number of gold medals (of a country) is extremely close to perfect: r = 0.97, p = 4.47e-63. Looks marginally significant.
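For concreteness, here is a minimal sketch of both points – the IOC-style lexicographic ranking and the correlation between total and gold medal counts – using made-up medal tallies rather than the actual 1988–2012 tables:

```python
from scipy.stats import pearsonr

# Hypothetical (gold, silver, bronze) counts per nation -- illustration only,
# not the actual medal tables.
medals = {
    "A": (10, 5, 5),
    "B": (1, 0, 0),
    "C": (0, 20, 20),
    "D": (3, 4, 2),
}

# IOC-style ranking is lexicographic: gold first, then silver, then bronze.
# Nation B (1 gold, 1 medal) outranks nation C (0 gold, 40 medals).
ioc_ranking = sorted(medals, key=lambda nation: medals[nation], reverse=True)
print("IOC-style ranking:", ioc_ranking)

# Correlation between total medal count and gold medal count across nations.
# (With the real tables this comes out near r = 0.97, as reported above;
# these toy numbers will not reproduce that.)
totals = [sum(counts) for counts in medals.values()]
golds = [counts[0] for counts in medals.values()]
r, p = pearsonr(totals, golds)
print(f"r = {r:.2f}, p = {p:.2g}")
```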

Figure 2. Total medals vs. Gold medals

With this in mind, looking at total medals is indeed justified, as there is three times the amount of data available, increasing statistical reliability. Looking only at gold medals will increase the volatility of the results. To check these notions empirically, we look at the same trends as above, but only at gold medals.

Figure 3 shows the same data as figure 1, but only for gold medals. Indeed, we can ascertain that the same trends we observed in figure 1 do hold, but they are more pronounced – e.g. the home advantage (now evident in the US numbers, and even more so for China), the decline of (East) Germany, and the host effects across the board.

Figure 3. Gold medals per nation per year.

To allow for a fair comparison over time, what remains to be done is to normalize the absolute numbers by the number of medals given out at each event. This number has gone up dramatically over the years, by over 30 percent in the last 28 years alone, so looking at absolute numbers can be somewhat misleading. If the current trend holds, more than 1000 medals may be given out in the near future, as can be seen in figure 4 below (although this trend has somewhat slowed in recent years).
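A minimal sketch of that normalization step (with entirely hypothetical counts; the idea is simply to divide each nation’s tally by the total number of medals awarded that year):

```python
# Normalize absolute medal counts by the total number of medals awarded at
# each Games, so that results from different years are comparable.
# All numbers below are hypothetical placeholders.
medal_counts = {
    1988: {"nation_x": 100, "total_awarded": 700},
    2012: {"nation_x": 80, "total_awarded": 950},
}

for year, counts in sorted(medal_counts.items()):
    share = counts["nation_x"] / counts["total_awarded"]
    print(f"{year}: nation X won {share:.1%} of all medals awarded")
```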

Figure 4: Total medals awarded per year

Taking this into account yields figure 5. Broadly speaking, the same trends hold.

Figure 5. Total medals per nation and year, normalized.

This leads us to a simple model (that of course leaves a lot of variance unaccounted for): judging from these graphs, two things matter for the olympic success of high-performing nations:

1. The means to train top athletes, e.g. population and/or financial resources devoted to athletics. However, this is not in itself sufficient for olympic success. This is best exemplified by India, which has a very large population and increasingly available economic resources, but in spite of this earned only 5 medals at the 2012 olympics, fewer than Ethiopia.

2. The (political?) will to do so. It is not enough for resources to be available in a country. For Olympic success, these resources need to be mobilized and focused, which perhaps happens in the run-up to an olympic event, which might be particularly important to communist regimes, and which might lend itself to central long-term planning.

We will see if this analysis holds for the future.

So far the – mostly descriptive – side of things. I am not unaware of the larger debate about the cult around the Olympic Games as such. Some people legitimately question whether athletes exhibit the qualities and virtues we should admire as a society. This is not a trivial question in the age of rampant doping scandals. A second, related position notes that the IOC essentially leads a monopolistic and parasitic existence, feeding off the naive vanities of nations, cities and individuals alike. Totalitarian states certainly do seem to be fond of the Games, as they provide an opportunity to demonstrate the manifest superiority of their system before a world audience and at the same time give their oppressed populations some pride, solace and reassurance. This is a tried and true method. It worked for Hitler (Germany won by far the most medals in 1936), the Soviets, the East Germans and others. Even today, North Korea and Cuba punch far above what could reasonably be expected from them. To say nothing of China. Finally, the oft-touted inspirational effects of the Olympics, supposedly encouraging physical fitness in the general population, might be largely fictional. I am not aware of a correlation between the Olympic success of a country’s athletes and the BMI of its general population.

These are all valid concerns, but I must respectfully insist that these issues need to be addressed at a different time as they require a different level of analysis entirely.

Posted in Science | 1 Comment

Meditations on the proper handling of pigs

Popular wisdom is not short on advice on how to handle pigs. This goes back to at least biblical times; the Bible counsels that one should

Neither cast ye your pearls before swine.

Matthew 7:6.

The point here is that the pigs – being pigs – will not be able to appreciate the pearls for what they are. They mean nothing to them. But one’s own supply of pearls will be rather limited and on top of that, one probably expects something in return for parting with a pearl. In this scenario, no one wins. Not the pigs, and not the pearl caster. So don’t do it.

Actual pigs

Our cultural obsession with pigs does not stop with the bible. On the contrary, pigs are the metaphorical animal of choice for morally and cognitively corrupt characters such as Stalinists, popularized by Orwell’s “Animal Farm”. Of course, none of this is limited to pigs. We like to use animals metaphorically, from black swans (the book is *much* better than the movie) to hedgehogs and foxes.

But back to pigs.

As an old Irish proverb has it:

Never wrestle with a pig. You’ll both get dirty, but the pig will like it.

Observing most of what passes for public discourse these days, one gets the sense that this has been forgotten. It is important to remember, as I do not believe that the rules of engagement for public fights with pigs have changed all that much since time immemorial. I do understand that it drives ratings, but it is not all that helpful. Or at least, one should know what to expect.

A corollary of this – as observed by Heinlein – is that one should

Never try to teach a pig to sing. You waste your time and you annoy the pig.

Robert A. Heinlein (1973)

To be clear, the idea is that one wastes one’s time because it won’t work, due to inability or unwillingness on the part of the pig. And it is not restricted to singing, either. Others have pointed out that it is equally pointless – and even harmful – to try to teach pigs how to fly. A lot of this has been summarized and recast into principles by Dale Carnegie.

So how *should* one interact with pigs? As far as I can tell, the sole sensible piece of practical advice – bacon notwithstanding – is that when it comes to pigs, the only way to win is not to play.

How relevant this is depends on how many pigs there are and how easy it is to distinguish them from non-pigs. Sadly, the population of proverbial pigs seems to be ever growing, likely due to the fact that societal success shields them from evolutionary pressures imposed by reality (to make matters worse, a lot of the pigs also seem to have mastered the art of camouflage). In this sense, a society that is successful in most conceivable ways is self-limiting, as it creates undesirable social evolution because the pigs can get away with it, polluting the commons with their obnoxious behavior (although the original commons was a sheep issue). This will – in the long run – need to be countered with a second order cultural evolution in order to stave off the inevitable societal crash that comes from the pigs taking over completely.

This is a task for an organized social movement. In the meantime, how should individuals handle pigs? Probably by recognizing them for what they are (pigs), by recognizing that they probably can’t help being pigs, and by not having unreasonable expectations. It could be worse. In another fable, the punchline is that you should not be surprised if a snake bites you, because that is what snakes do.

The point is that we now live in a social environment that we are not well evolved for. Does our humanity scale for it? In a social environment with many individuals and diverse positions, verbal battles can be expected to be frequent, but there are no clear victory conditions. It is commonly believed that the pen is mightier than the sword, but in an adverse exchange (particularly on the internet), arguments are very rarely sufficient to change anyone’s mind, no matter how compelling. So the “swine maneuver” can be used to flag such an exchange, and perhaps allow the parties to disengage gracefully, disarm the tribalist primate self-defense systems that have kicked in and perhaps meaningfully re-engage. Perhaps…

Will an invocation of the “swine maneuver” forestall adverse outcomes? Does the gatekeeper go away if he is called out on his behavior? Or does it amount to pouring oil on the fire (not unlikely, as most pigs can read these days – although it could go either way)? That is an empirical question.

On a final note, teaching pigs to fly hasn’t gotten any easier in the internet age. Preaching to people who already share your beliefs is easy. What is hard is to have a productive discussion with someone who emphatically does not share your fundamental premises and perhaps effect positive change. Mostly, these exchanges just devolve into name-calling. Not useful.

To be clear, we are talking about proverbial pigs here. Actual pigs are much more cognitively and socially adept (not to mention cleaner and less lazy but perhaps less happy) than most cultures – and religions – give them credit for (they are probably maligned to help rationalize eating them).

NOTE: Putin himself (however one feels about his politics) has weighed in on the issue and aptly characterized some exchanges as fruitless:

“I’d rather not deal with such questions, because anyway it’s like shearing a pig – lots of screams but little wool.”

A somewhat similar – but not quite the same – situation seems to be given in the avian world:

“If you want to soar with the eagles, you can’t flock with the turkeys” (as they say)

Posted in Optimization, Social commentary, Strategy | Leave a comment

Low contrasts shed light on the neural code for speed

The effects of stimulus contrast on visual perception have been known for a long time.  For example, there is a consensus that at lower contrasts objects appear to be moving slower than they actually are (Thompson, 1982). Several computational models have been posited to account for this observation, although up until now there has been a paucity of neural data with which to validate them.  A recent article in the Journal of Neuroscience seeks to address this issue, combining single-cell recordings in a motion sensitive region of visual cortex with psychophysics and modeling to better elucidate the neural code for speed perception (Krekelberg et al, 2006).
It has been well established that the middle temporal (MT) cortical area in the macaque monkey is of central importance for both  motion processing and perception (for review, see Born and Bradley, 2005).  Importantly, it has recently been shown that lowering stimulus contrast produces a qualitative difference in the extent to which MT cells integrate information over space (Pack et al, 2005).  Hence, it is only natural to wonder how the joint manipulation of stimulus contrast and speed affects both perceptual reports and MT responses.
To answer this question, Krekelberg et al. recorded neural activity from single units in area MT of awake-behaving macaques in response to patches of moving dots that could vary both in terms of speed and contrast. The purpose of the electrophysiological recordings was to elicit neural speed tuning curves at various levels of contrast. In these trials, experimenters presented a single patch that was centered on the receptive field and moving in the preferred direction of the neuron while the monkey maintained fixation. In separate sessions, the authors had human observers and a monkey subject perform a psychophysical speed discrimination task, thus allowing them to compare neural and psychophysical performance. In the psychophysical task, observers had to judge which of two simultaneously presented patches of moving dots appeared faster – one patch was presented at a fixed speed but variable levels of contrast while the other was presented at a fixed contrast level but at variable speeds. This procedure yielded psychometric functions quantifying the shift in apparent speed at lower contrasts.
Figure 1

Using these methods, the authors report several major findings.
Consistent with previous reports (Thompson 1982), the authors show that there is a substantial effect of contrast on speed perception, insofar as the perceived speed of the random dot stimuli is drastically reduced at low contrasts – reducing contrast by one octave reduces perceived speed by about 9%. Moreover, the psychophysical data from the monkey suggest that there is – at least qualitatively – a corresponding effect in macaques (see Krekelberg et al., Figure 2).
Further, reducing contrast had several effects on the neural activity of speed-tuned MT neurons: Generally, speed tuning curves shift such that the peak firing rate is reached at slower speeds (Krekelberg et al., Figures 5A and 6A) – this shift in preferred speed is more pronounced in neurons preferring faster speeds (Figure 5B). Also, most – but not all – cells respond less vigorously at lower contrasts (Krekelberg et al., Figure 4A). It is of particular interest that, due to the shifted peak, some cells (about 30% of neurons) respond more strongly to low-contrast than to high-contrast motion at slow speeds (see Krekelberg et al., Figure 3C).
Finally, the authors use these neural data to test models attempting to account for the observed psychophysical effects. Specifically, they test two exemplars of a family of labeled-line models, each of which had previously been shown to account for human speed perception. In the vector average model, a population of MT cells effectively computes a weighted average of preferred speeds, where each neuron’s contribution is weighted by its normalized firing rate. Surprisingly, when fed with their neural data, the authors find that this model predicts an increase in perceived speed with lower contrast, inconsistent with the psychophysical data (Krekelberg et al., Figure 7).
Similarly, the authors point out that a “ratio model”, in which perceived speed corresponds to the ratio of activity in a fast and a slow channel, also cannot account for the psychophysical effects in terms of the neural data (Krekelberg et al., Figure 8B).
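For readers unfamiliar with these two readout schemes, here is a minimal sketch of both (with hypothetical preferred speeds and firing rates rather than the recorded MT data, and with the generic textbook form of each model rather than the exact fits in the paper):

```python
import numpy as np

# Hypothetical population of speed-tuned MT neurons (illustration only).
preferred_speeds = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0])  # deg/s
firing_rates = np.array([5.0, 12.0, 30.0, 22.0, 9.0, 3.0])     # spikes/s for one stimulus

# Vector-average (labeled-line) readout: each neuron "votes" for its
# preferred speed, weighted by its normalized firing rate.
weights = firing_rates / firing_rates.sum()
speed_vector_average = np.sum(weights * preferred_speeds)

# Ratio readout: perceived speed taken as the ratio of activity in a "fast"
# channel to that in a "slow" channel (split here at an arbitrary 8 deg/s).
fast_channel = firing_rates[preferred_speeds >= 8.0].sum()
slow_channel = firing_rates[preferred_speeds < 8.0].sum()
speed_ratio_model = fast_channel / slow_channel

print(f"vector-average readout: {speed_vector_average:.2f} deg/s")
print(f"fast/slow ratio readout: {speed_ratio_model:.2f} (arbitrary units)")
```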
Hence, the authors state that their data are fundamentally inconsistent with existing labeled-line models of speed coding in area MT. They conclude that the code for speed in MT is likely to differ from the established labeled-line codes that can account for direction perception.
There are several potential problems which will need to be addressed in further work – specifically: stimulus size was held constant (potentially covering a dynamic surround), not only contrast but also luminance varied across conditions, and there was a profound lack of psychophysical data in the macaque. Insofar as we can tell, however, none of these issues threatens the interpretations proffered by the authors.
The authors characterize their effect as the result of imperfect contrast-gain control, and to a large degree the differences in firing rates at different contrasts can indeed be accounted for by simple contrast gain mechanisms (Krekelberg et al., Figure 4C). However, before one can attribute the observed effects to such a mechanism, it is beneficial to first rule out other plausible explanations.
A more exciting view, highlighting the functional role of non-veridical speed perception has recently been proposed. In this Bayesian approach, it is adaptive, when signal strength is low in proportion to the noise, to rely less on current sensory input and more on prior experience, which recent evidence suggests corresponds to slower speeds (Stocker & Simoncelli, 2006).
Of course, this doesn’t detract from the theoretical significance, or the main thrust of the paper, which is highly provocative as it highlights deficits in current models of speed perception. To improve on these models, it might be beneficial to simultaneously gather neural and psychophysical data from animals performing a similar speed discrimination task. This would allow a more direct comparison of neurometric and psychometric performance measures as well as the elucidation of neuro-perceptual correlates. Additionally it would provide modelers with valuable data on the issue of speed coding in area MT.
In conclusion, we believe that this paper poses a formidable challenge to both the neurophysiological and the modeling communities.

References

Born, R. T. & Bradley, D. C. (2005) Structure and function of visual area MT. Annu. Rev. Neurosci., 157-189.

Krekelberg, B., van Wezel, R.J.A., & Albright, T.D. (2006). Interactions between Speed and Contrast Tuning in the Middle Temporal Area: Implications for the Neural code for Speed. J. Neurosci., 8988-8998.

Pack, C. C., Hunter, J. N., & Born, R. T. (2005)  Contrast dependence of suppressive influences in cortical area MT of alert macaque. J. Neurophysiol., 1809-1815.

Stocker, A. A. & Simoncelli E. P. (2006). Noise characteristics and prior expectations in human visual speed perception. Nat. Neurosci., 578-585.

Thompson, P. (1982) Perceived rate of movement depends on contrast. Vision Res., 377–380.

Acknowledgments: This piece was written with substantial input from Andrew M. Clark

Posted in Journal club, Science | Leave a comment

How to successfully attend a major scientific conference

Most professional societies hold an annual meeting. They are an important venue for the exchange of ideas as well as for professional development.

SfN 2009

However, given their scope and scale, these can be overwhelming, particularly – but not exclusively – to the novice. This leads to suboptimal yields, which can be quite disappointing given the substantial investment in terms of time and money, to say nothing of effort.

There is a way to get a lot more out of these conferences – but it requires a lot of knowledge as well as a set of highly specific skills. As you will see, it is well worth it to develop this particular skill set. I already wrote about this topic extensively, using the Annual Meeting of the Society for Neuroscience as a paradigmatic example of a major conference. Here is a compilation:

1. Be extremely selective about what you attend. Your time and attention are quite limited.

2. Be sure to preview the venue before getting there. It is easy to get lost.

3. Use cutting edge media, even in your posters. Create a podster or padster.

4. Make sure you have readily available food with you. Fasting is a bad idea.

5. Understand the difference between poster and slide sessions as well as symposia.

6. Understand the relationship between science and society.

7. Be aware of temporal constraints. How much time is actually available at the meeting?

8. Understand the history and development of the meeting you are attending.

9. Understand that the meeting can be extremely physically taxing.

10. Can the meeting take so much out of you that you get sick?

Hope this helps. Have a good meeting.

Posted in Conference | 2 Comments

A video says much more than a thousand words – rapid communication of scientific concepts via the “Podster”, a historic waystation towards a truly dynamic presentation surface

Background
The efficient communication of complex scientific concepts at an academic conference poses a challenge that is as old as it is formidable. The use of dynamic visual stimuli and intricate experimental designs has exacerbated the problems associated with this issue in recent decades, particularly in the area of visual neuroscience. It is not an easy task to communicate experimental procedures and results to an audience unfamiliar with the study, often under conditions of sleep deprivation and within a short period of time, as people typically visit a multitude of presentations. These issues are partially alleviated in slide sessions, during which the speaker can show the audience his actual stimuli, no matter how sophisticated. Yet, posters are generally much more ubiquitous than talks – at SfN 2006, posters outnumbered talks 10.5:1. This number is fairly typical of scientific conventions in general. Hence, the problem is to incorporate the positive aspects of a talk into the poster format.
Previous solutions typically involve a laptop computer. In one version of this approach, the laptop is held by the presenter; in another, it is suspended from the poster board. Both of these solutions are sub-optimal. In the former case, opportunities to gesture are reduced, possibly impairing cognitive processing in both audience and presenter. Executing the latter approach is technically challenging and typically results in a display that is not at eye level and that crowds out the actual poster. Problems common to all incarnations of the laptop approach include batteries that do not last an entire poster session, restricted viewing angles inherent to many notebook displays (limiting the number of people who can view the display at the same time) and a fundamental lack of interactivity, as it is awkward for both presenter and audience to use the laptop controls in this setup.
To summarize, the common laptop solutions to the presentation challenge solve some problems but introduce others.
Concept
The goal remains to combine the audio-visual advantages of a talk with the interactivity and closeness with the audience that is afforded by a poster presentation.
This goal can be achieved by attaching one or more video iPods to the poster surface. The iPods can be seamlessly integrated into the poster as dynamic figures. These currently feature a 2.5” screen with a resolution of up to 640 x 480 pixels at very broad viewing angles. Efficient power management allows for a battery life that lasts the entire poster session. More importantly, the small size of the unit makes it easy to place multiple iPods at the appropriate places on the poster. The controls make for a very interactive experience, as the presenter can focus the attention of the audience on what is relevant at a given time in the poster narrative. This makes it possible to implement and augment psychologically appropriate presentation techniques. Moreover, the hands of the presenter are free for gesturing and pointing, enhancing the learning experience of the audience. As in a talk, visual displays allow the audience to utilize their powerful visuospatial systems to maximize information transmission. As far as we can tell, the concept introduces no obvious drawbacks. This Podster concept was first implemented at SfN 2006, with overwhelmingly positive audience feedback (see Figure 1).

Figure 1: The first Podster, at SfN 2006

Practical considerations
Overall, the implementation of the concept is very simple. Yet, there are some things to consider to maximize the impact.
• In principle, devices other than iPods can be used to achieve the same effect. Yet, these devices should be white, light and flat to be suitable as dynamic figures that are integrated with the rest of the poster. Moreover, battery power and resolution should be sufficient. Current video iPods fit these specifications, but any other device that does is also suitable.
• One reason why video iPods are particularly useful for the implementation of this concept is the fact that they are already available in large numbers among the general public, allowing for a dual use at no additional cost.
• Try to place the iPod figures at eye level – this will facilitate audience interaction.
• The iPods can be easily attached with tesa® poster powerstrips. Two per iPod are sufficient.
• After mounting the iPods, wipe the screens with alcohol swabs for clarity.
• If audio is desired, the earbuds can be mounted next to the iPod with tacks (see figure 1). Make sure to also wipe them with alcohol swabs between each use to prevent the transmission of ear infections.
• To ensure a smooth removal of the iPods after the presentation, the poster should be laminated. Otherwise, the probability that the poster will rip is high.
• The Podster really affords the flexibility of a talk. In other words, one can update the dynamic figures up until the point of the presentation.
• In practice, this allows the re-use of posters with continuously updated data figures. This consideration is not immaterial, with professional poster printing currently costing around $200.
Summary and Outlook
The biggest advantage of a poster presentation is the direct exchange with the audience. One of the biggest drawbacks is the lack of dynamic visual images to illustrate experimental stimuli, designs and results. Placing small portable video screens on the poster to yield a “podster” overcomes these problems. Hence, the podster combines the visual flexibility of a talk with the interactive narrative of a poster at low cost and little effort. Thus, the podster effectively constitutes a significant advance in the rapid communication of scientific information.
The next thing to look for in the practical implementation of the podster is the full-screen video iPod, which features a 3.5” screen and is scheduled to launch within 6 months.
In the long run, technologies such as electronic ink or convention centers equipped with flat screens or touch screens instead of poster boards are likely to replace conventional poster presentations altogether.
Yet, it is unclear when this bright future will arrive. In the meantime, the podster is a viable and valuable bridge technology towards a dynamic presentation surface, augmenting the rapid communication of scientific information at poster sessions.

Note: This is a reproduction of something I wrote in late 2006, after the debut of the podster concept in a real life setting (SfN, if that counts as “real life”). How times can change – first we went from the podster to the padster, and now truly dynamic posters are not far off. Soon, hordes of scientists taking up all the overhead bins of a plane headed to or from SfN will be a thing of the past.

Posted in Conference, Optimization, Science, Technology | Leave a comment

The need for sleep

Western culture – the US in particular – is pervaded by the notion of achievement through hard work. This has many benefits, but like everything, it comes at a steep cost.

One of the things that is typically shortchanged in the relentless drive for achievement is the need for rest and recovery, sleep being a particular instance of this need. If you want to run the world and be part of the elite, you naturally don’t have time or even a need to sleep.

Thus, it should not come as a surprise that sleep is to self-proclaimed high achievers like studying is to high-school and college students – it is essential to long term success, but no one admits to doing much of it.

Just ask Tiger Woods – he admits to sleeping a mere 4-5 hours per night, even though it is common knowledge that elite athletes have increased sleep needs, on the order of 10 hours or more. Of course, we now all know what he is doing during the rest of the time.

Be that as it may, it is common knowledge that great men have better things to do than to sleep – Da Vinci, Napoleon* and Edison all claimed sleep durations of well under 5 hours per night. After all, there is plenty of time to rest after one is dead, as common knowledge has it. This is not an exclusively American trait, either. A lot of cultures that place a similarly great value on achievement have similar sayings. But why?

That is quite simple. The downside to sleeping is obvious – for practical purposes, physical time is a completely inelastic commodity. In other words, time as a resource really does live in a zero-sum universe. Everyone has the same amount of it, and as one becomes more and more accomplished, one’s time becomes more and more valuable. Sleeping might be the single most expensive thing – due to forgone earnings – these people do on any given day, so it would make good economic sense to cut back on it. If one is sleeping, one can’t do anything and can’t react to anything. Indeed, the absence of motor output is one of the defining characteristics of sleep.

The upside of sleep is much harder to pin down. Ask any scientist why one needs to sleep and few will give an unhedged answer; most answers amount to some form of “we don’t really know”.

The problem is that with this readily apparent downside and no clear upsides, the rational response is to limit sleep in whichever way possible.

Many people do exactly that – in the past 100 years, the average sleep duration per night in the US has dropped from around 8 hours to under 6.75 hours. This is achieved with the pervasive use of artificial lighting, in combination with all available kinds of stimulants, from modafinil to caffeine. People try to cut corners further with gimmicks like polyphasic sleep. The problem with this approach is that it is neither efficient nor sustainable.

Why not? This is best illustrated by a concept called “sleep debt”. Like all debt, it tends to accumulate and even accrue interest. In other words, every time sleep is truncated or delayed with a stimulant like caffeine, a short-term loan is taken out. It will have to be repaid later.
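To make the bookkeeping explicit, here is a minimal sketch of how such a debt accumulates, under the simplifying assumption of a fixed nightly need of 8 hours (the “interest” – the degraded efficiency while running a deficit – is not modeled here, and all numbers are illustrative):

```python
# Toy sleep-debt ledger: every truncated night adds to the cumulative debt.
# Assumes a fixed nightly need of 8 hours; all numbers are illustrative.
nightly_need = 8.0
actual_sleep = [6.5, 7.0, 5.5, 6.0, 8.0, 6.5, 7.5]  # one hypothetical week

debt = 0.0
for night, hours in enumerate(actual_sleep, start=1):
    debt += max(0.0, nightly_need - hours)  # surpluses do not pay down the debt here
    print(f"night {night}: slept {hours:.1f} h, cumulative debt {debt:.1f} h")
```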

But why? Because we sleep for very good reasons. It is important to realize that most scientists hedge their answers for purely epistemological reasons. And just because they do that doesn’t mean that very good reasons don’t exist. They do.

A strong hint comes from the fact that all animals studied – with no known exceptions – do in fact sleep. What’s more, they typically sleep as much as they can get away with, given the constraints of their lifestyle and habitat. As a general rule, predators sleep longer than prey animals (think cats vs. horses). Animals that are strict vegetarians can’t afford to sleep as long, as they have to forage longer to acquire their food which is less energy dense. But they all do sleep and they all sleep as much as they reasonably can.

And this is in fact quite rational. One significant constraint is posed by the need to minimize energy expenditures. An animal in locomotion expends dramatically more energy per unit weight and time than a sedentary one. This is an odd thought from the perspective of the modern world, with its ample food supplies and readily available refrigerators. But there are very few overweight animals in the wild. In other words, every calorie that is potentially obtained by foraging or predation has to be gained by expending and investing calories in locomotion. This is a precarious balance indeed. All of life depends on it.

Now, if you were to design an organism that has to win in life (aka the struggle for survival and reproduction), would you arrange things such that the “engine” runs constantly at the same level, or would you rather build it so that it periodically overclocks itself to unsustainable levels in order to best the competition temporarily, and then recovers by downgrading performance during periods when little competition is expected?

Modulating performance by time of day might be beneficial

The rational and optimal answer to that question depends on outside conditions – can this period of little competition be predicted and expected? As it turns out, it can. Day and night cycles on this planet – dominating the external environment by establishing heat and light levels – have been fairly stable for several hundred million years. Consequently, pretty much all animal life on the planet has adapted to this periodicity. It is only in the past 200 years that man has tried in earnest to wrest control over this state of affairs.

And we did. As warm blooded animals, we no longer strictly depend on heat from the sun to survive, but air conditioning and heating systems are quite nice to keep a fairly stable “temperate” environment. The same thing goes for light. I can have my light on 24/7 if I am able to afford the electricity and if I so choose. Lack of calories is also not a problem whatsoever. On the contrary.

So what is the problem? The problem is that evolution rarely lets a good idea go to waste. Instead, full advantage is taken of it for other purposes (exaptation). The forced downtime that was built into all of our systems by a couple hundred million years of evolution was put to plenty of other good uses. Just because locomotion is prevented doesn’t mean that the organism won’t take this opportunity to run all kinds of “cron jobs” in order to prepare it for more efficient future action. To further elaborate on the metaphor, the brain is not excluded from running a bunch of these scripts. On the contrary.

If you deprive the body of sleep, you are consequently depriving it of an opportunity to run these repair and prepare tasks. You might be able to get away with it once, or for a while, but over time, the system will start to fall apart. Ongoing and thorough system maintenance is not optional if peak performance is to be sustained. Efficiency will necessarily degrade over time if one keeps cutting corners in this department.

Of course, this is a vicious cycle. No one respects sleep (sleeping is for losers, see above), so sleeping is not an option. Instead, one resorts to stimulants to prop up the system and get tasks done. That makes the need for sleep (think maintenance and restoration) even greater, but the ability to sleep even smaller, as tasks keep getting done less and less efficiently. This is truly a downward spiral. At the end of all of this, there is a steady state of highly stimulated, unrested and relatively unexceptional achievement. This is the state to be avoided, but probably the one that a high percentage of people find themselves in at any given time.

There is ample evidence that the cognitive benefits of sleep are legion. There is also a wide range of metabolic and physiological benefits. Here, I would like to mention some extremely well documented cognitive ones explicitly. Briefly, sleep is essential for learning and memory consolidation, creative problem solving and insight, as well as self-control. This is not surprising. Peak mental performance is costly in terms of energy. Firing an action potential requires quite a bit of ATP (most of it actually spent by the sodium-potassium pump that restores the status quo ante). It has been argued that these rapid energy needs cannot plausibly be met by blood glucose alone. Instead, they must be met by glycolysis of glycogen stores, which is also what supplies the energy to the muscles of a sprinter. It is precisely this brain glycogen that is depleted by prolonged wakefulness. Without sleep, the message of this mechanism is clear: No more mental sprints for you. Stimulants can mobilize some reserves for a while, but in the long run, your energy well will necessarily run dry. This is undone by sleep.

Moreover, it seems that amyloid beta accumulates during wakefulness. This is important because levels of amyloid beta are also increased in Alzheimer’s disease. The jury is still out, but I would not be surprised if a causal link between chronic sleep deprivation and dementia were in fact established. In the meantime, it might be wise to play it safe.

Another hint at the crucial importance of sleep for neural function comes from the fact that your neurons will get their sleep one way or the other, synchronized or not. Sleep is characterized by synchronized activity of large-scale neural populations. Recently, it has been shown that individual neurons can “sleep” on their own under conditions of sleep deprivation. Perhaps this correlates with the subjective feeling of sand in the mental gears when there hasn’t been enough sleep. The significance of this is that sleep deprivation is futile. If the need for sleep becomes pressing, some neurons will get their rest after all, but in a rather uncoordinated fashion – and maybe while you are operating heavy machinery. Not safe. There is a reason why sleep renders one immobile.

Finally, some practical considerations. How to get enough sleep?

Here are some pointers:

  • If you must consume caffeine, do so early in the morning. It has a rather long metabolic half-life.
  • Try to enforce “sleep hygiene”. No reading or TV in bed. No reading of upsetting emails before going to bed. Try to associate the bedroom with sleep and establish a routine. If you can’t sleep, get up.
  • If you live in a noisy environment, invest in some premium earplugs. Custom fitted ones are well worth it.
  • If at all possible, try to rise and go to bed roughly at the same time.
  • Due to the reasons outlined above, a lot of circadian rhythms are coordinated and synchronized with light. Of course, we are doing it all wrong if we ignore this. Light control is crucial, and it means several things. Due to the nature of the human phase response curve, bright lights have the ability to shift circadian rhythms in predictable ways. Short-wavelength (blue) light is a particular offender, as the intrinsically photosensitive ganglion cells in the retina are sensitive in this range of the spectrum. In other words, no blue light after sunset. This is particularly true if you suffer from DSPS (delayed sleep phase syndrome – and the suffering is to be taken literally in this case). There are plenty of free and very effective apps which will strip your computer screen (bright enough to have a serious effect) of blue light; I recommend f.lux. It is not overdoing it to replace the light bulbs in your house with ones that lack a blue component. Conversely, you *want* to expose yourself to bright blue light in the morning. The notion that annoying sounds wake us up is ludicrous. The way to physiologically wake up the brain in the morning is to stimulate it with bright blue light. I use a battery of goLites for this purpose. Looking at them peripherally is sufficient. The photoreceptors in question are in the periphery.

Having breakfast

Warning: If you have any bipolar tendencies whatsoever, please be very careful about using bright or blue lights. Even at short exposure durations, these can trigger what is known as a “dysphoric mania” – a state that is closely associated with aggression against yourself or others (and one of the most dangerous states there is). If you try it at all, do not do so without close supervision. Perhaps ironically, those with bipolar tendencies might be among the most tempted to fix their sleep cycle this way, as they are so sensitive to light and “normal” artificial light at night shifts their sleep cycle backwards. For this population, using appropriate blue-blocking glasses (these can even be worn over existing glasses) is a much more suitable – and safer – option. I repeat, there is *nothing* light about light. Nothing.

To conclude, how do you know whether this effort is worth it? It might take a while to normalize your brain and mind after decades of chronic sleep deprivation. In the meantime, I recommend monitoring your sleep on a daily basis. There are now devices available that have sufficient reliability to do this in a valid way. Personally, I use the ZEO device.

A typical night, as seen by ZEO.

This approach has two advantages. First of all, it makes you mindful of the fact that very important work is in fact done at night. Your brain produces interesting patterns of activity. You are typically biased to dismiss this because you are not awake and aware of it when it happens. This device visualizes it. Second,  the downside of disturbed sleep becomes much more apparent. You can readily see what works and what doesn’t (e.g. in terms of the recommendations above). In other words, you can literally learn how to sleep. You probably never did. And once you’ve done so, you can sleep like a winner. And then maybe even go and do great things.

Sleeping like a winner. #beatmyZQ

__________

I recently delivered this content as a talk in the ETS (Essential tools for scientists) series at NYU. The bottom line is the importance of respecting one’s sleep needs, even – and in particular – if one happens to be a scientist. A summary of the talk and slides can be found here.

*There is very little question that Napoleon suffered from Bipolar disorder (making – by extension – the rest of Europe suffer from it as well). It is true that he slept 5 hours or less during his manic episodes, but he slept up to 16 hours a day during depressive ones.

Posted in Neuroscience, Science | 14 Comments

Charlemagne was a Neuroscientist

The exploits of Charlemagne are fairly well documented and widely known. He was both the King of the Franks and the founding emperor of the Holy Roman Empire (technically, the Carolingian Empire). In this capacity, he is renowned for a wide variety of things usually associated with wisdom, such as diplomatic as well as military triumphs, a cultural renaissance, successful economic and administrative reforms and so on. There is a reason why he is called great. Importantly, he died in 814 A.D., well over a thousand years before anything that we would today recognize as Neuroscience.

Charlemagne (742-814)

Thus, it might come as a surprise that he also had a keen interest in mental phenomena. Some of his observations on this topic have been preserved through the ages; for instance:

To have another language is to possess a second soul.

There are several recent papers on this very issue, for instance this one; lo and behold, it seems to be true. It looks like Charlemagne scooped these authors by about 1200 years. Or did he?

It is not the only noteworthy quote. This one is pure gold:

Right action is better than knowledge; but in order to do what is right, we must know what is right.

I could easily spin a dense yarn about how this expression anticipated the results of literally decades worth of intensive research on perception, cognition and action. It would be a marvelous feast. I could discuss dorsal vs. ventral streams of processing, perception vs. action, perception for action vs. perception for cognition and a great many other “hot” topics of contemporary research in sensory and motor systems. The possibilities are almost endless and without reasonable bound.

In fact, I could write a whole book about this. The title would be obvious. Recent history suggests that editors eat this kind of stuff right up and that it would also sell quite well.

So why don’t I?

Because it would be wrong. Charlemagne was not a Neuroscientist. He was not even a scientist. Not even by a stretch. Not a chance. Not even close.

Asserting this would grossly misrepresent the character of science and what scientists do.

Pretty much everyone has intuitions about the workings of the world, including the workings of the mind. Sometimes, these intuitions even turn out to be largely on target.

Charlemagne, German version

But that is not what science is about. For the most part, science is about turning these hunches into testable hypotheses, then testing these hypotheses to the best of our abilities (this part is extremely hard), and then trying to organize the resulting systematic observations into a coherent framework of knowledge that is aimed at uncovering the principles behind the phenomena under study. In other words, we are not trying to describe the shadows per se; we are trying to triangulate and infer the forms from a multitude of (usually multidimensional) shadows, as hard as that might be in practice.

It is not surprising that smart people are curious about a great many things and that their intuition is sometimes correct. In that sense, everyone is a physicist, neuroscientist, psychologist, chemist and so on. In other words, Spartacus was a neuroscientist. But that utterly trivializes science to a point that is entirely ridiculous. Modern science is nothing like that. It is precisely this moving beyond intuition that defines modern science. A very simple – and early – example is the dramatic difference between our naive intuitions about how objects fall and scientific descriptions of how they actually fall.

It works the other way around, too. Some sciences – psychology in particular – have a very big PR problem in the sense that most of their findings seem to be perfectly obvious (or in line with our intuitions) after the fact. This is an illusion, as people are actually not able to predict the results of social psychology experiments better than chance, if forced to do so from common sense and before knowing the outcome. It is important to distinguish this from bad science (or non-science), where the conclusions are not derived from empirical data, but follow from the premises a priori (analytic, not synthetic judgments). In any case, the fact that some scientific results are consistent with our intuitive a priori notions misses the point completely. That is not what science is about at all.

It may be forgivable if a non-scientist does not understand these subtle yet fundamental things and makes embarrassing claims, but this person cannot then claim to be a scientific expert at the same time. One can’t have it both ways. Really.

Doing so would just be wrong.

Knowingly doing something wrong would be disingenuous.

So I don’t and I won’t.

Posted in Neuroscience, Pet peeve, Science, Social commentary | 2 Comments