Feeling Risky Today? How's Your Dopamine?

Jan 28 2009 | Behavioral Neuro

It's that time of the semester again. The time when Sci has to present at her Journal Club. I know I've talked about Journal Club a couple of times, but what is the purpose of a Journal Club? For those not in grad school, what IS a Journal Club?
For my MRU, Journal Club is a small-sized class, usually held weekly, wherein students take turns presenting a paper they find awesome. The class then (theoretically) holds a good discussion based on the paper. I say (theoretically) because grad students are often tired, excessively overworked, and usually don't have time to read the paper beforehand. So the discussion is not always stimulating. But for the best papers it usually is.
So what is the purpose of people presenting papers to each other? Well, in grad school you get a lot of experience in a lot of things. Things like learning new methodologies, trying new techniques, time management, tearing your hair out, and alcoholism. You get training in how to analyze and interpret your data, and how to write that data up for publication. In some programs, you even get experience in how to write your first grant. But there are a couple of things the typical grad student in biomed will not have a lot of experience in:
1) Presenting hefty science. Those of us in biomed do not TA classes to fill our plates and pay our rent (though some of us, like Sci, do anyway). In some programs, it is possible to go an entire year without ever presenting your data to an audience other than your cat. Usually committee meetings are required, and sometimes public seminars, but often not. And so a grad student can emerge from the chrysalis of PhD a TERRIBLE presenter if they are not careful.
2) Telling a good paper from a bad one. In a perfect world, all grad students would be able to tell good papers from bad without a problem. Their lab and mentor would guide them through with a gentle hand. Often, however, this is not the case, and you'll show what you thought was a good paper to your advisor, only to be greeted with "Are you kidding!?! That guy's a crackhead!"
In both of these scenarios, Journal Club is there for you. You get experience presenting a paper full of hefty science, and it's up to you to present what you know to be the best new research in the field. In the small class, there won't be too many people to laugh at you, and you're more likely to get constructive feedback on why the paper was or was not good, what they could have done better, and what they probably ARE doing now for their next publication.
So Journal Club is useful, and it is up to Scicurious to therefore present some good science. Science that is well thought out, elegant, and may even be presented with a little bow on the top. And that is why I'm asking you all for your input! I want to know not only that I will pick a good paper, but also that I can present it in a way that is clear.
With that in mind, let's get started on the first of the three possible offerings which I can sacrifice on the altar of my Journal Club:
St. Onge and Floresco. "Dopaminergic modulation of risk-based decision making". Neuropsychopharmacology, 2009, 34, 681-697.

We all know that some people display more high-risk behavior than others. Evaluating risk is essential to long-term decisions, as well as to the momentary decisions of daily life. Many of the decisions we make are risk-based. The best example, of course, is gambling: you decide how much money to keep laying out based on what your odds are. But how does this work? And how can some people go to Vegas and lose only a few hundred dollars, while others start betting their house, their wife, and their dog, and just can't seem to stop?
It turns out that this has a great deal to do with dopamine. It's been known for a while now that the dopamine system is associated with disorders of risk-based decision making. People with problems in their dopamine systems, such as Parkinson's disease, schizophrenia, or stimulant abuse, show problems with risk-based decision making (Mimura, 2006). The most convincing evidence so far comes from humans being treated with dopamine agonists, usually for Parkinson's or restless leg syndrome. Some of these patients start to show signs of pathological gambling, but only while they are on the agonists (Gallagher, 2007). Not only that, giving amphetamine (a dopamine releaser) to patients with gambling problems increases their urge to gamble (Zack, 2004). These patients have a hard time adjusting their betting strategy, even when multiple past outcomes have been negative, and these effects don't show up when the patients are off their medication. So it looks like increased dopamine system activity can affect risk-based decision making.
But how? Which of the dopamine receptors could be involved? There are five dopamine receptor types, divided into two classes: the D1-like receptors (D1 and D5), which primarily stimulate further activity in the cells where they are found, and the D2-like receptors (D2, D3, and D4), which primarily inhibit it. On top of the receptors, dopamine levels outside the synapse are also controlled by the dopamine transporter, a molecule which sucks dopamine back out of the synapse to be reused. To narrow down what is really causing problems with risk-based decision making, you need to test each of these receptor types. Unfortunately, you can't really do that study in humans; you have to do it in rats.
So how do you make a rat play the slots? First, put the rat on a diet. Not too severe, about 90% of its free-feeding weight. Then, give the rat a choice of two levers to press in its cage. When a light comes on, the rat has a choice. It can press one lever, the small/certain lever, which delivers one food pellet every trial. Or it can press the other lever, the large/risky lever. The large lever delivers 4 food pellets at a time, but with decreasing probability over the session: 100% of the time in the first block of trials, then 75% of the time, then 50%, then 25%, and finally only 12.5%. So you can win more, but the likelihood of hitting the jackpot gets lower and lower as time passes.
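To make the payoff structure concrete, here's a quick back-of-the-envelope sketch. The pellet counts and probabilities come from the description above; everything else (variable names, the printout) is just illustration, not anything from the paper itself:

```python
# Payoff structure of the risky-choice task, per probability block.
RISKY_PELLETS = 4
CERTAIN_PELLETS = 1
BLOCK_PROBS = [1.0, 0.75, 0.50, 0.25, 0.125]  # risky-lever payoff odds per block

def risky_expected_values(probs):
    """Expected pellets per press on the risky lever in each block."""
    return [RISKY_PELLETS * p for p in probs]

for p, ev in zip(BLOCK_PROBS, risky_expected_values(BLOCK_PROBS)):
    if ev > CERTAIN_PELLETS:
        verdict = "risky lever pays more"
    elif ev == CERTAIN_PELLETS:
        verdict = "dead even"
    else:
        verdict = "certain lever pays more"
    print(f"p = {p:>5.3f}: risky EV = {ev:.2f} pellets vs 1.00 -> {verdict}")
```

Notice that by the 25% block the two levers are dead even, and by 12.5% the certain lever is flatly better. That's why a rat still hammering the risky lever late in the session counts as making a "risky" choice.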
A normal rat, trained in the scenario above, will start on the big-reward lever, getting the big payoffs. But as the likelihood of reward decreases, the rats switch over to the small/certain lever. They may not be getting the jackpot, but they are at least getting something every time. Some rats were, of course, riskier than others in their choices, but all of them ended up preferring the certain lever over the uncertain one by the end of each session.
Then they got increasing doses of amphetamine. Amphetamine (marketed as Adderall) is a dopamine releaser, increasing levels of dopamine in the synapse. And as their dopamine levels increased, rats began to show changes in their decision making. They started to choose the high risk lever MORE.
In normal rats the choice of the lever shifted as probability of reward decreased. So when the risky lever worked all the time, they picked it all the time, but when it didn't work, they switched to the more certain lever. BUT, when they were on amphetamine, they did not switch to the certain lever as quickly, prolonging their risky behavior.
Ok, so that's a general dopamine releaser. What about specific receptors?
Well, in pharmacology, there are two main ways that you can test the actions of a specific receptor. You can give an agonist at the receptor, something to stimulate its activity. Or you can give something like a general releaser, and see if you can block the effects by blocking the specific receptor you are interested in. One of the things I like about this paper is that, when in doubt, they did BOTH. And here's what they got:
Stimulating the D1 receptor made the rats' risk-based behavior worse, just like amphetamine. Stimulating the D2 receptor while the rats were on amphetamine made their behavior less risky, but stimulating the D2 receptor on its own made them MORE risky. Stimulating the D3 receptor also decreased risky choice, but did it so well that the rats didn't pick the risky lever even when it was giving rewards 100% of the time. Finally, a D4 receptor agonist decreased the risky behavior induced by amphetamine, but didn't have any effect when given on its own.
So what does all this mean? We've known that amphetamine and other DA releasers promote risky choices, but it now appears that these effects are due to stimulation at D1 and D2 receptors, both of which promoted risky choice when given by themselves. D3 and D4 receptors, on the other hand, appear to decrease risky choice when they are stimulated. There are several ideas as to why this could be happening. It could be that stimulation of the dopamine system in general (with amphetamine) and the D1 and D2 receptors in particular could produce changes in how rewarding we perceive something. It could make that large reward look more worth the risk. Alternately, it could be a decrease in how large the risk is that promotes more risky behavior.
Or it could just be increased patience. Rats on amphetamine may just be willing to take the knocks to get the bigger reward eventually. This could be a dissociation between the tendency to make a risky choice, and the tendency to bet a lot of money. The dopamine system could be affecting the risky choice without affecting how much money (in this case lever pressing effort) the rats are putting in.
Finally, it could be that increasing dopamine or D1 and D2 stimulation makes animals less able to calculate probability accurately. To me, this is similar to an incorrect assessment of how risky the behavior is, and it is this option that the authors seem to think is most likely.
So what's the point of all these rats with levers? Finding out about how our dopamine system influences risk-based decision making could help us find out more about problems like pathological gambling and drug abuse, both of which are behaviors which involve people repeatedly making risk-prone choices. It is possible that identifying drugs which reduce risk-taking could help those trying to get off drugs, by allowing them to more correctly assess how risky their behavior is.
So why do I like this paper? The pharmacologist in Sci finds the organization beautifully elegant. They tested all the possible drugs, with both agonists and antagonists, to pull out the possible directions the effects could take. I do wish they had added tests of the antagonists on their own, but you can tell this was a very time-consuming paper as it is. I also like that the authors spend their discussion carefully going through every option for how the dopamine system could be operating here, in each case including literature for and against each position, until they finally decide upon the hypothesis they find most likely. I'll admit the paper doesn't have a lot of flash. But it answers a question, narrows it down to the receptors involved, and does it very thoroughly. Sci appreciates that in a paper.
Your thoughts?
Next up: Onset of antidepressants and PMS. But for now, Sci needs to fall into bed. There's only so much a grad student can do in a day.
Jennifer R. St Onge & Stan B. Floresco (2008). Dopaminergic modulation of risk-based decision making. Neuropsychopharmacology, 34(3), 681-697. DOI: 10.1038/npp.2008.121

9 responses so far

  • The most important pedagogical purpose of journal clubs is to teach trainees how to *read* scientific papers. This is a far from obvious skill, and the vast majority of trainees go about it completely ass backwards until they learn how to do it correctly. This is because the vast majority of undergraduate classroom training is all about learning conclusions, rather than the process of arriving at conclusions. Thus, when young trainees attack a scientific paper, their immediate instinct is to go looking for the conclusions.
    Of course, the conclusions drawn by the authors of a research paper are completely irrelevant. What matters is the experimental evidence presented and the reasoning process that leads to those conclusions.

  • leigh says:

    CPP, for this exact reason i am bringing down the hammer on my senior undergrads. this early in the semester and i already have cemented in their little heads that i'm a hardass... sweet.
    it's far too late, i've run through eleventy papers about my own sub-project destined for completion, and my brain is toast, so i'm not gonna look at this one tonight. but wtf is up with the "significant" mark on the data points at 12.5% where the error bars overlap? i hope that's significantly different from the 100% and 50% or something, and not a drug effect they're trying to point out. cuz i'm totally not believin' that.

  • Scicurious says:

Hmmm...Leigh, I know how you feel. I thought it was a difference from 100 or 50% myself when I looked at it. I'm looking more closely at the results and stats sections now, and it looks like what they are showing with the stars is a significant effect of test day over all doses compared to vehicle, and a significant test day x probability block interaction. Still, it does look widgey. I've seen stuff like this PLENTY of times when people present monkey data (n's of three will do this to you), so it's possible they were working with small-sample stats, but they didn't mention this in the methods. My theory is that, since they did a three-way, repeated-measures ANOVA, they are using the stars to represent overall differences during the test day, rather than specific differences at those points. There's no record of a post-hoc.
    However, I freely admit that Sci's stats skills are VERY lacking, so you might find something more in it than I did.

  • leigh says:

    Four groups of eight
    n=8/data point. within subjects design.
    Stars denote significant (P
    from the figure legend
    On average, all doses of amphetamine increased choice of the large/risky lever compared to saline (Dunnett's, P
    sooooo wtf happened to the repeated measures ANOVA results?
    oh... panel e is an average of ALL drug doses. but they use dose as one of their ANOVA effects... huh? i don't get that.
    just looking at the graphs it looks like the only significant effect at 12.5% is at 1 mg/kg- i won't believe that overlapping error bars indicates a significant difference. no way. my n=10 data had very small error bars and i still didn't get one or two effects i thought i might see based on looking at the graphs. and my error bars did not overlap.
    i'm not exactly a stats geek either, but i'm highly skeptical. i'm also pissy because i've been awake for a little bit too long tonight. and with that, i'm out.

  • leigh says:

    whoa, i'm way too tired. submitted twice AND totally borked the formatting. nice.

  • Scicurious says:

    s'ok, leigh, I'll fix.

  • Jake Young says:

It is intriguing that in the control animals, even when the expected values of the rewards are 5/10 (risky/sure), the animals still press both levers 50% of the time. This violates the matching rule; if matching held, they should press the risky lever 33% of the time.
    I wonder if that is some sort of endowment effect. There is certainly path dependence: if you started the animals with less reward on the risky lever, you would expect the animals to push the sure lever more than 50%.
    (This was an excellent post, by the way. And I totally hear you about the problems in most journal clubs.)
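Jake's matching-rule arithmetic is easy to reproduce: under strict matching, the proportion of risky presses should equal the risky lever's share of the total expected reward. A two-line sketch, using the final-block numbers from the paper (4 pellets at 12.5%, so an expected value of 0.5, against a certain 1 pellet):

```python
def matching_prediction(ev_risky, ev_certain):
    """Strict matching law: choice proportion = relative expected reward."""
    return ev_risky / (ev_risky + ev_certain)

# Final probability block: risky lever pays 4 pellets at p = 0.125
prediction = matching_prediction(4 * 0.125, 1.0)
print(f"predicted risky-lever choice: {prediction:.1%}")  # ~33%, vs the ~50% observed
```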

  • Scicurious says:

    Leigh: yeah, the more I look at it, the more I have problems with it. I've seen stats like that in more seminars than I can count, and it bothers me EVERY time. I think this one is out of the running.
    Jake: I think it's because they start the animals out on the high reward probability and then shift them down over time that you end up with a 50% risky lever result at the low probability end. Hope springs eternal. And thanks for the compliment! I love your blog!

  • JLK says:

    I'm throwing myself under the scientific bus here, but...
    When I read journal articles, I read everything very carefully......EXCEPT the results section. I usually skim through it, check out the graphs where and if they make sense, and then go on to the conclusion. When I critique an article, I usually focus on the methods and conclusions sections jointly.
    Why do I do that? Well, for one thing, my stats skills do not allow for much of the shit to make sense to me anyway. I look for significance and p values and all that shit, but reading paragraph after paragraph of symbols and numbers makes my head spin. (For now, at least)
    Also, in psychological research, there are often sooooo many variables and "we controlled for variable x and subsequently did a linear regression analysis of variables y and z, and blah blah blah blah blah." Boring. Fucking BOOOORRRRING. And they always analyze a fucktillion number of relationships, most of which are not significant anyway.
    I know, I know. I'm a horrible student. *slinks away in shame*
