News & Events

 

February 1, 2019: Congratulations on receiving Addgene's Blue Flame Award trophy!

Addgene's Blue Flame Award trophy is given to researchers who have at least one plasmid that has been distributed to the research community more than 100 times. Specifically, Mitsuko Watabe-Uchida constructed the pAAV-mCherry-flex-dtA plasmid, which has been requested 134 times so far.

To request our plasmids, please visit the Addgene website (http://www.addgene.org/Naoshige_Uchida/). AAVs are available from the UNC Vector Core website (https://www.med.unc.edu/genetherapy/vectorcore/in-stock-aav-vectors/uchida/).

________________

 

March 22, 2017: Paper published in Neuron!
Somatosensory Cortex Plays an Essential Role in Forelimb Motor Adaptation in Mice 
By Mackenzie Weygandt Mathis, Alexander Mathis, Naoshige Uchida

WHO MOVED MY ARM? LEARNING FROM PERFORMANCE ERROR

You head to your favorite coffee shop, order a cappuccino, and when the barista calls your name, you grab your coffee. Imagine, however, that you grab the wrong cup on the bar and it is empty. You lift the cup and it “flies” toward you, because you applied too much force. When you grab the correct cup to drink, you carefully adjust the strength of your grasp and the force you apply to lift the cup to your mouth, and you can seamlessly account for changes in the weight of the cup as you consume the coffee.

Our ability to monitor the outcomes of our actions and compare them with predictions is critical for executing movements. For instance, just grasping a cup requires the brain to predict how your arm will move, taking into account the dynamics of the arm: how heavy it is (and whether you are wearing a watch that could add weight), or whether you are wearing a stiff leather jacket (which could add resistance and otherwise perturb your arm movement). We can execute simple motor movements like reaching because the brain effortlessly learns to predict these recurring perturbations and “adapts” to them. This process of predicting the consequences of a particular movement, monitoring whether the prediction was correct, and adjusting future actions is called motor adaptation.

How the brain adapts to perturbations has been studied in the laboratory by introducing systematic disturbances either to a movement or to sensory feedback. About 20 years ago, two researchers, Shadmehr and Mussa-Ivaldi, asked subjects to move a joystick that was connected to motors. These motors can, for instance, apply predetermined forces that “kick” the arm laterally during a movement. They investigated how subjects learned to adapt their movements so as to reach a target in the same manner as before the perturbations. The simplicity and elegance of their paradigm attracted much attention, leading to many experimental findings and influential computational theories about how the motor system adapts to systematic perturbations. Yet understanding of the neural mechanisms that support motor adaptation has lagged, largely because the traditional animal models for joystick tasks, non-human primates, are not particularly amenable to modern circuit analysis using population recording and manipulation techniques that employ molecular and genetic methods.

In the present study (Mathis et al., Neuron, 2017, PDF), we established the first mouse model of motor adaptation that naturally lends itself to powerful neural circuit studies. Head-fixed mice were trained to manipulate a joystick: they had to reach toward the joystick, grab it, and pull it from a starting location to a target location to receive a reward. The joystick can be moved in two independent directions; to get rewards, mice had to pull it from the center toward themselves. To study motor adaptation, we used a magnetic force to apply a brief “kick”, or force-field pulse, halfway through a pull, deflecting the joystick perpendicular to the pull direction. Over the course of 100 trials, the mice learned to predictively steer away from the kick before the onset of the pulse and reduced motor errors during the force-pulse period. Furthermore, when the force field was (unpredictably) turned off, they moved the joystick in the opposite direction of the force pulse. Such a movement, called an “aftereffect”, reveals that the mice were executing a counteracting force against the force field, rather than stiffening their arm to resist the perturbation. These results demonstrate that mice can adapt to recurring perturbations to a forelimb movement, an adaptation that bears a striking similarity to that observed in humans and non-human primates.
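Trial-by-trial adaptation of this kind is commonly summarized with a simple error-based update rule. The sketch below is a generic textbook-style model, not the model or analysis from the paper: the learned compensation grows in proportion to each trial's error, and the compensation left over after the force field is switched off produces the aftereffect.

```python
def simulate_adaptation(n_trials=150, force_off_at=100, f=1.0, eta=0.1):
    """Generic error-based adaptation: on each trial the learned compensation u
    is nudged toward the perturbing force in proportion to the error."""
    u = 0.0                             # learned compensation, starts at zero
    errors = []
    for n in range(n_trials):
        perturbation = f if n < force_off_at else 0.0  # force field on, then off
        error = perturbation - u        # lateral error experienced on this trial
        errors.append(error)
        u += eta * error                # error-based update of the compensation
    return errors

errors = simulate_adaptation()
# Errors shrink across the first 100 trials as compensation builds up; on the
# first trial after the force field is switched off, the leftover compensation
# drives the movement the opposite way: the "aftereffect".
```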

What in the brain enabled this adaptation? In our task the mice cannot see the joystick, so they receive only proprioceptive feedback (from the “sense of posture”). There are, however, two main sensory feedback pathways from touch and posture sensors: direct projections from the spinal cord to the cerebellum, and thalamo-cortical projections that carry the information all the way to the neocortex. Previous studies in humans have implicated the cerebellum as a critical brain center in regulating motor adaptation, although its role remains debated. These studies have limitations: because most patients with impairments to the cerebellum also have movement disorders, dissociating deficits in adaptation from those in fine motor control has been difficult. Moreover, no studies had directly tested the role of the other feedback pathway, to the neocortex, so its role in adaptation remained unclear.

To test whether the subcortical pathway alone was sufficient for motor adaptation in our paradigm, we asked whether the thalamo-cortical pathway plays any role. To do this, we took advantage of optogenetics: by expressing a light-gated ion channel, channelrhodopsin-2, in GABAergic inhibitory neurons, one can inactivate a target area with high temporal and spatial specificity. Brief inactivation of the primary somatosensory cortex (S1), applied concurrently with the force field, abolished predictive steering, the reduction in errors, and the aftereffect. Our results therefore demonstrate an essential role for S1 in motor adaptation. Remarkably, the lack of motor adaptation came with striking specificity: the execution of forelimb movements and reward-based learning were not impaired by inactivating S1. These results show that subcortical processing by the cerebellum alone is not sufficient to support motor adaptation in our task. Interestingly, inactivating S1 after mice had partially adapted did not interfere with the execution of already-adapted motor commands, consistent with the idea that S1 plays a critical role in learning to predict the force field, but not in storing the memory of the force field.

Motor adaptation is an extremely exciting field of study, with elaborate behavioral experiments, as well as attractive theories and computational models. Nonetheless, the mechanisms at the biological level are far less understood. The mouse model of adaptation we developed opens up new avenues to study detailed neural circuit mechanisms, and in turn to test, refine, and further develop theories of motor adaptation.

________________

March 6th, 2017: Paper published in Nature Neuroscience!

PREDICTION ERRORS STAY ON TRACK, EVEN WHEN THE RIDE IS UNPREDICTABLE

by Clara Starkweather and Sam Gershman

Imagine you are waiting for the 2PM subway train. Based on your experience, you know that the train always arrives between 1:55PM and 2:05PM. You glance at your watch—it’s 1:55PM, and you return to reading your newspaper. Several minutes later, you check your watch again. Now it’s 2:05PM, and you move closer to the edge of the subway platform. You stare expectantly into the tunnel, and sure enough, you see the train’s lights approaching.

Now imagine a near-identical scenario, in which the train usually arrives between 1:55PM and 2:05PM, but occasionally doesn’t come. At 2:05PM, you check your watch and sigh dejectedly. Is the train late, or is it not coming at all? Maybe it’s time to start planning an alternate route.

These scenarios illustrate that we constantly infer when and if events will occur. By 2:05PM, we increasingly anticipated the reliable train’s arrival because it always arrives by that time. In contrast, we became increasingly pessimistic in the case of the unreliable train. Based on our prior knowledge of arrival timing and probability, we inferred that the train wouldn’t come. If the unreliable train did show up at 2:05PM, it would be a pleasant surprise.

Our new study suggests that a group of cells located deep in the midbrain report a ‘surprise’ signal that, as in real life, uses prior information to make further inferences.

We recorded from midbrain dopamine neurons while thirsty mice performed a classical conditioning task. Rather than waiting for trains to arrive, the mice learned to anticipate water rewards after being presented with certain odors. If the mouse unexpectedly received a reward, dopamine neurons produced a large positive response. If the mouse predicted a reward following an odor presentation, dopamine neurons produced a smaller response. These dopamine signals are called ‘reward prediction errors’ (RPEs), and they represent the discrepancy between actual and expected reward. By signaling surprising positive outcomes, positive dopamine RPEs are thought to reinforce behaviors leading to favorable consequences.

In our study, we trained mice on classical conditioning tasks that mirrored the timing unpredictability of the train scenarios. On any given trial, the time interval between cue and reward was chosen randomly from a normal distribution. In the first task, reward was always delivered (100% rewarded), similar to the reliable train. In the second task, reward was occasionally omitted (90% rewarded), similar to the unreliable train. We found that dopamine RPEs exhibited a striking difference between these two tasks. In the 100% rewarded task, dopamine RPEs were largest if reward was delivered early, and smallest if reward was delivered late. In other words, RPEs became smaller as time elapsed, indicating that reward expectation grew as a function of time. This result parallels the first train example: we increasingly expect the reliable train to arrive as time passes. In the 90% rewarded task, this trend flipped: dopamine RPEs were smallest if reward was delivered early, and largest if reward was delivered late. Dopamine RPEs became larger as time passed, indicating that reward expectation decreased as a function of time. This result tracks our inference in the second train example: as time passes, our belief that the train is simply late yields to the belief that the train will not arrive, making it quite surprising if the train actually arrives at 2:05PM.
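The flip between the two tasks can be reproduced with a small back-of-the-envelope calculation. The sketch below is illustrative only (the delay distribution and discretization are invented, not taken from the study): it computes the probability that reward arrives at each moment, given that it has not arrived yet and that reward is delivered at all with probability 1.0 or 0.9.

```python
def reward_expectation(t, times, probs, p_reward):
    """Probability that reward arrives exactly at time t, given that it has
    not arrived before t and is delivered at all with probability p_reward."""
    arrives_now = sum(pr for tt, pr in zip(times, probs) if tt == t)
    not_yet = sum(pr for tt, pr in zip(times, probs) if tt >= t)
    # Either the reward is still pending, or it was omitted altogether:
    denom = p_reward * not_yet + (1.0 - p_reward)
    return p_reward * arrives_now / denom

# Invented, discretized (roughly bell-shaped) distribution of reward delays:
times = [1, 2, 3, 4, 5]
probs = [0.1, 0.2, 0.4, 0.2, 0.1]

certain = [reward_expectation(t, times, probs, 1.0) for t in times]    # 100% task
uncertain = [reward_expectation(t, times, probs, 0.9) for t in times]  # 90% task

# In the 100% task, the expectation of imminent reward rises monotonically
# with elapsed time (reaching 1.0 at the last possible moment), so late
# rewards are least surprising.  In the 90% task, expectation falls late in
# the trial as the belief that reward was omitted grows, so late rewards are
# most surprising.
```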

Our result shows that dopamine RPEs are exquisitely sensitive to inference about event timing and probability. Although this result may seem intuitive—even obvious—it provides a key theoretical advance for reinforcement learning. Traditionally, the brain’s reinforcement learning circuitry is thought to cache cue-reward associations independent of inference about the environment. This type of system would be just as surprised by the train arriving at 1:55PM as it would be at 2:05PM, in either of the above scenarios. Our data argues against this simple model, and suggests that the brain’s reinforcement learning circuitry taps into inferences about an uncertain environment.

In order to compute prediction errors, the midbrain dopamine system must be able to access accurate predictions. Our results suggest that the dopamine system benefits from the brain’s ability to make inferences across time, ensuring that these predictions are as accurate as possible—even when outcomes are uncertain.

 

Feb 2017 | The Uchida lab will have 2 posters at Cosyne 2017! 
 

January 27th, 2017 | Congratulations! 

Opposite initialization to novel cues in dopamine signaling in ventral and posterior striatum in mice
William Menegas, Benedicte M Babayan, Naoshige Uchida, Mitsuko Watabe-Uchida
https://elifesciences.org/content/6/e21886/article-info

October 19th, 2016 | Congratulations!

Midbrain dopamine neurons signal aversion in a reward-context-dependent manner
Hideyuki Matsumoto, Ju Tian, Naoshige Uchida, Mitsuko Watabe-Uchida, eLife

https://elifesciences.org/content/5/e17328

 

A MULTI-LAYERED NEURAL COMPUTATION FOR SIMPLE ARITHMETIC 

by Mitsuko Watabe-Uchida

September 8th, 2016

(l to r) Mitsuko Watabe-Uchida, Ju Tian, and Naoshige Uchida

 

An exciting aspect of neuroscience is the ability to peek into the brain, measuring the activity of neurons in behaving animals. Electrophysiology allows us to listen to the signals sent by individual neurons, as if intercepting a message in Morse Code over a radio. So far, neuroscientists have been able to discern the function of many types of neurons by presenting animals with stimuli, and observing which stimuli cause activation or inactivation of the neuron of interest. These types of measurements tell us the result of the computation performed by that neuron. A new and challenging puzzle will be to determine what kinds of signals are used in each calculation, and how they are combined.

Electrophysiological recordings in single brain regions typically result in a wide variety of activity recorded from individual neurons. This occurs because different types of neurons are intermingled in the brain. Therefore, to interpret the data, we have to classify neurons based on their molecular profiles and based on the connections that they make with other neurons. Dopamine neurons are a convenient model system for studying computation because they are found in only a few regions of the brain, and have a clear, targetable molecular profile.

In the ventral tegmental area (VTA) of the brain, dopamine neurons seem to have the uniform function of signaling when errors occur in the prediction of a reward (reward prediction error, RPE). We know that reward prediction error can theoretically be calculated with a simple subtraction: actual reward minus expected reward. This signal is important for guiding our future behaviors to maximize reward. If the actual reward is higher than the expectation (positive RPE), we may favor these actions, whereas if the actual reward is lower than our expectation (negative RPE), we may refrain from those actions in the future.
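The subtraction described above, together with the way an RPE can update future expectations, can be written out in a few lines. This is a generic Rescorla-Wagner-style sketch of the textbook computation, not code from any of our studies.

```python
def reward_prediction_error(actual, expected):
    """Dopamine RPE as a simple subtraction: actual reward minus expected reward."""
    return actual - expected

def update_expectation(expected, actual, alpha=0.1):
    """Rescorla-Wagner-style learning: expectation moves toward the actual
    reward in proportion to the prediction error (alpha = learning rate)."""
    return expected + alpha * reward_prediction_error(actual, expected)

reward_prediction_error(1.0, 0.2)   # positive RPE: better than expected
reward_prediction_error(1.0, 1.0)   # zero RPE: fully predicted reward
reward_prediction_error(0.0, 1.0)   # negative RPE: expected reward omitted

# Repeated cue-reward pairings drive the expectation toward the actual reward,
# so the RPE shrinks toward zero as learning proceeds.
expected = 0.0
for _ in range(50):
    expected = update_expectation(expected, 1.0)
```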

Although dopamine is important for our behaviors and dopamine RPE signals have been observed for 20 years, we still do not understand how dopamine neurons compute the RPE. More precisely, although there have been many models that predict the mechanism of RPE computation in dopamine neurons, there was no direct method to test these ideas experimentally. One of the main focuses of these models was, naturally, which brain areas provide the “actual reward” and “expectation” information that lets dopamine neurons calculate “actual reward minus expectation”.

We decided to tackle this question. Our first goal was to specifically label neurons that directly project onto dopamine neurons. To do this, we established a modified rabies-GFP virus system with mouse genetics to infect dopamine neurons and hop trans-synaptically exactly once, into presynaptic neurons. Using this technique, we mapped the monosynaptic inputs to dopamine neurons across the entire brain, 4 years ago (Watabe-Uchida et al., 2012). The next question was: what information do these inputs send to dopamine neurons? Which brain areas provide actual reward or expected reward information?

For this experiment, we used a modified rabies virus carrying channelrhodopsin-2 (ChR2), a light-gated ion channel, so that only dopamine neurons and neurons that project to dopamine neurons expressed ChR2. We recorded the activity of neurons in input areas while mice behaved, and then shined blue light to determine which of the recorded neurons were direct inputs to dopamine neurons: direct inputs were activated by the light with very short latency. We recorded from 7 input-dense areas that have most often been suggested as important sources of signals in RPE models. With these data in hand, we were prepared to answer our initial question: which brain areas provide the actual-reward or expectation information for the computation? Which model is true?

We found that each variable (actual reward and expectation) was distributed among inputs from all of the brain regions we recorded. Furthermore, these variables were already mixed in many input neurons, such that those neurons could themselves signal at least partial RPE information. Thus, it seems that the brain computes partial RPEs at multiple nodes of the neuronal network, and dopamine neurons gather this information together to compute a very precise RPE. In other words, computation in the brain can be distributed and redundant even for arithmetic as simple as the subtraction required for the RPE. This type of redundancy likely contributes to the robustness of computations in the brain.

Overall, our data suggest that computations in the brain are different from (and more complicated than) many proposed models, and that simple arithmetic is embedded in multi-layered neural circuits. When we study the brain, we often focus too much on identifying cascades of brain areas and forget one of its most exciting, though mysterious, aspects: the integration of information. This study took more than 8 years of work to prepare a systematic method for examining both connectivity and activity (Tian et al., 2016). We hope it contributes to the sense of wonder people feel when they think about computations in the brain, and demonstrates how unique and mysterious these neural computations can be compared to simple arithmetic. Finally, we hope this study can help guide explorations of other computations, as the field slowly gathers a repertoire of model computations to look for themes across the brain.

Read more in Neuron or download the PDF

 

 

_____________________________________________________________

August 16th, 2016: Congratulations to Ryu on receiving a Japan Society for the Promotion of Science Postdoctoral Fellowship! Project title: Elucidating the neural circuit mechanism underlying prediction error computation in dopamine neurons. Read more here 

_____________________________________________________________

 

DOPAMINE: A SHATTERPROOF SIGNAL FOR LEARNING 

by Neir Eshel
February 8th, 2016

Dopamine plays an outsized role in the public imagination, acting as a ‘happiness’ chemical, the drug that causes psychosis, or the pill that allows frozen people to move again, as in Oliver Sacks’ famous book, Awakenings. But 20 years ago, experiments with monkeys revealed a more specific role for dopamine: comparing outcomes with expectations. When an outcome is better than expected, dopamine neurons increase their activity. When an outcome is completely expected, dopamine neurons do not respond. And when an outcome is worse than expected, dopamine neurons go silent. This pattern of responses is deemed ‘reward prediction error’ and is thought to be a crucial way that we learn from our experiences. Positive prediction errors reinforce actions that lead to reward, while negative prediction errors prevent actions that lead to punishment.

In our new study, published this week in Nature Neuroscience, we explore how individual dopamine neurons make this calculation. Surprisingly, we discover that each neuron calculates prediction error in exactly the same way. Such a system is exceptionally robust and redundant, ensuring that the prediction error signal can be exploited by the broadest possible array of brain circuits to help us learn.

We recorded from neurons deep in the brain while thirsty mice performed simple tasks for water reward. Sometimes we delivered water out of the blue, completely unexpectedly. Other times we presented an odor that predicted water delivery. Every time this odor was presented, the mouse learned to expect water at a particular time in the future. By delivering different amounts of water, with or without the preceding odor, we could measure the precise method that dopamine neurons used to calculate prediction error. We then compared this method from neuron to neuron.

We found that dopamine neurons calculate prediction error through simple subtraction. This is consistent with previous computational theories, but quite rare to find in the brain. In most other settings, neurons appear to work through multiplication or division, rather than addition or subtraction. In this case, though, subtraction is the best method for a precise calculation, and the brain appears to have evolved accordingly.  

Moreover, each neuron appears to perform this subtraction in exactly the same way. This is even true for dopamine neurons recorded on different days, from different mice. The only difference between neurons was in the magnitude of their responses to unexpected rewards. Given this information, the rest of that neuron’s response was perfectly predictable. Indeed, even the ‘noise’ in dopamine neurons’ responses—that is, the different activity they exhibit from trial to trial, when the stimuli remain the same—was correlated from neuron to neuron. This has two profound implications: 1) that different dopamine neurons likely have overlapping inputs, and 2) that the targets of dopamine release likely receive similar information, regardless of which dopamine neurons they contact.

The homogeneity of dopamine neuron responses reinforces the idea that dopamine neurons broadcast a common signal to the rest of the brain: namely, prediction error. Even if a group of dopamine neurons were to die, the signal would persist. Thus, the system beautifully ensures our ability to perform one fundamental task: learning from our experience.

PATHWAY FOR DISAPPOINTMENT

by Ju Tian and Naoshige Uchida

September 10th, 2015

Imagine you are a child hoping to get a teddy bear from your parents as a birthday gift. What if they gave you a box of candies instead? Or, worse, what if they forgot your birthday entirely? Naturally, you might feel disappointed. On the other hand, you might be quite pleased if your parents gave you the same candies as a surprise on another day. In this case, your response to a gift is dramatically influenced by your expectation. Our brains always compare the rewards we get with what we expected.

But how does this comparison happen in our brains? Neurons that use dopamine as a neurotransmitter ("dopamine neurons") seem to represent the difference between actual reward and expectation. For instance, dopamine neurons transiently pause their spontaneous firing when an expected reward is omitted. Interestingly, this response occurs when reward was expected but was not granted. In other words, when nothing happened! This signal -- a dip in activity -- occurs exactly when reward was expected to arrive. More generally, dopamine neurons are known to signal errors in reward prediction, a.k.a. reward prediction errors. When the outcome is better than expected, dopamine neurons increase their firing rates. When the outcome is worse than expected, their firing rates decrease. How dopamine neurons generate these prediction errors remains unknown.

In our study published in Neuron, we examined the contribution of a region of the brain called the habenula to dopamine prediction error signals. The habenula has long been a mysterious area, located at the very center of the brain, bridging the forebrain and the midbrain. Recent studies revealed that neurons in the lateral habenula signal prediction errors, although the direction of the responses (excitation versus inhibition) is opposite that of dopamine neurons. Given the existence of an inhibitory projection from the lateral habenula onto dopamine neurons, it has been hypothesized that dopamine neurons may relay prediction error signals from the habenula. To test this hypothesis, we removed input from the habenula by making an electrolytic lesion and examined which aspects of prediction error signals were affected in dopamine neurons. We found that, in animals with habenula lesions, the dip caused by reward omission was largely diminished. Surprisingly, the dip caused by aversive stimuli (e.g. an air puff) was either unaffected or even enhanced. Note that negative prediction error can occur, for example, (1) when not receiving an expected reward (disappointment) or (2) when receiving an unexpected negative outcome (punishment). Our study showed that these types of negative prediction error are regulated by different mechanisms.

In our previous study (Eshel et al., 2015), we found that reward expectation reduces reward responses in a subtractive fashion. While divisive gain changes are common in the nervous system, subtraction is rarely found in the brain and its mechanisms are unknown. A key feature of subtractive computation is that dopamine neurons reduce their activity below baseline when reward is smaller than expected. In this new study, we found a key mechanism that pushes down dopamine neuron firing below baseline.
Our study also opens doors for future research. We found that many aspects of prediction error signals in dopamine neurons remain intact after large lesions in the habenula. This implies that other inputs to dopamine neurons are also making important contributions to prediction error coding. Based on our anatomical mapping of dopamine inputs (Menegas et al., 2015; Watabe-Uchida et al., 2012), areas such as the striatum, lateral hypothalamus, and tegmental areas are at the top of the list for future investigation.

PREDICTIONS AND THE BRAIN 

by Neir Eshel

August 31st, 2015

Say you’re at a supermarket, staring at two cartons of ice cream: chocolate and caramel. Before making your choice, you try to predict which will be more delicious. Wasn’t the caramel a bit too sweet last time? Wait, wasn’t the chocolate a little bitter? You hem and haw, and then choose the one you expect to be better.

Our new study demonstrates how the brain makes this type of prediction and uses it to optimize decisions.

We recorded from neurons deep in the brain while mice performed simple tasks. The animals had to learn the association between different odors and different rewards. Rather than ice cream, we used water, which was rewarding to the thirsty mice. Usually, the mice would receive the reward they expected. Occasionally, however, the reward would be bigger or smaller. In those cases when the outcome was different from predicted, the chemical dopamine became especially important. If reward was bigger than predicted, dopamine neurons increased their activity. If reward was smaller than predicted, dopamine neurons decreased their activity. And if reward was the same as predicted, the neurons made no changes. In this way, dopamine neurons calculated the difference between expected and actual reward.

This pattern of responses is called ‘reward prediction error’, and dopamine neurons have been known to calculate it for over 20 years. It is thought that this signal is crucial for animals, including humans, to improve their predictions over time, allowing us to maximize reward (and the chance for a truly delicious ice cream dessert). However, it was never known how dopamine neurons make this calculation. In particular, how do dopamine neurons know how much reward to expect?

In our paper, published this week in the journal Nature, we discovered that a group of neurons intermingled with dopamine neurons provide the expectation signal. A previous paper from our lab had shown that when reward was expected, these inhibitory neurons (called GABA neurons) became active. But it was unknown whether dopamine neurons use this signal to calculate prediction error. In the paper published this week, we artificially increased the activity of GABA neurons, using a technique called optogenetics that makes neurons sensitive to light shined through a fiber-optic in the brain. When we did so, we found that dopamine neuron activity was reduced, as if reward was expected, even though it was not. Conversely, if we artificially decreased the activity of the GABA neurons, dopamine neuron activity was increased, as if the previously expected reward had become surprising. In other words, shifting the level of activity in GABA neurons appeared to shift the level of expectation reflected by the dopamine neurons.

These manipulations also affected mouse behavior. When we artificially increased GABA neuron activity on both sides of the brain, thereby artificially increasing the level of expectation, mice acted as if they were disappointed by the reward they got. The same reward that used to cause high levels of anticipation no longer elicited any anticipation when GABA activity was increased.

Finally, we designed an experiment to understand exactly how this prediction error calculation is made. We gave the mice different sizes of reward and plotted how dopamine neurons respond to these different sizes. Then we taught the mice to expect reward, and watched how expectation shifts the dopamine response. It turns out that dopamine neurons simply subtract the expectation signal, which we now know comes from GABA neurons. This is consistent with classic learning theories, but actually quite surprising in the brain. There are very few other examples where neurons seem capable of pure addition or subtraction; instead, the brain generally works through multiplication or division. In this case, though, subtraction allows for a precise and consistent calculation, and appears to be exactly what the brain evolved to do.
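The difference between subtractive and divisive modulation can be made concrete with a toy response curve (arbitrary numbers, not data from the paper): subtraction shifts the whole curve down by a constant, whereas division rescales it, shrinking large responses more than small ones.

```python
reward_sizes = [0.5, 1.0, 2.0, 4.0, 8.0]

def response(reward):
    # Toy dopamine response to an unexpected reward (arbitrary units).
    return reward ** 0.5

expectation = 1.0

# Subtraction: expectation shifts the whole curve down by a constant.
subtractive = [response(r) - expectation for r in reward_sizes]

# Division: expectation rescales the curve, shrinking big responses more.
divisive = [response(r) / (1.0 + expectation) for r in reward_sizes]

# The gap between the unexpected and expected curves is constant under
# subtraction, but grows with response size under division -- which is how
# the two computations can be told apart in recordings.
gaps_sub = [response(r) - s for r, s in zip(reward_sizes, subtractive)]
gaps_div = [response(r) - d for r, d in zip(reward_sizes, divisive)]
```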

Together, our experiments demonstrate how a small circuit deep in the brain makes a simple calculation that enables a crucially important behavior: learning what’s good and what isn’t.

 

"PERSONALIZED LESSON" MAY NOT BE DOPAMINE'S WAY 

by William Menegas and Mitsuko Watabe-Uchida

September 1st, 2015

Dopamine, originally referred to as a pleasure molecule, is now one of the most well known neurotransmitters. Dopamine neurons are thought to broadcast a teaching signal for reinforcement learning throughout the brain. Dopamine neurons in the midbrain encode reward prediction error, which is the discrepancy between our expectation and reality. This signal potentially guides our behavior to maximize rewards in the future.
 
In a previous study (Watabe-Uchida et al., 2012), we used a genetically modified rabies virus to label all of the monosynaptic inputs to dopamine neurons. We reasoned that finding the inputs to these neurons would help us understand how they function. We found that many brain areas project directly onto dopamine neurons, but wanted to further refine our map of this circuit.
 
In our new study, led by Mitsuko Watabe-Uchida (Menegas et al., 2015), we labeled the inputs to dopamine neurons based on their projection target. The main projection target of midbrain dopamine neurons is the striatum. However, dopamine neurons also project to other brain areas such as the amygdala, habenula, and much of the cortex. If dopamine encodes a teaching signal that guides behavior, then each brain area might improve its “behavior” in parallel with other brain areas. The simplest way of doing this would be for each brain area to send an expectation signal to dopamine neurons and receive an error signal back from that same population.
 
Instead, we found that most populations of dopamine neurons (defined by their projection targets) have a surprisingly similar distribution of inputs and are not embedded in parallel circuits. So, each brain area probably does not learn independently.
 
However, we also found that dopamine neurons projecting to the tail of the striatum differ dramatically from other populations. While most dopamine neurons receive many inputs from regions involved in reinforcement learning, addiction, and appetitive behavior (such as the ventral part of the striatum and hypothalamus), dopamine neurons projecting to the tail of the striatum receive inputs preferentially from regions involved in motor function and arousal (such as the globus pallidus, subthalamic nucleus, and zona incerta). This result suggests that dopamine release in the tail of the striatum might have a unique function, while most other dopamine neurons may encode a teaching signal.
 
This new study (Menegas et al., 2015) used CLARITY, a method for making tissue optically transparent, to allow whole brains to be imaged as intact volumes using a light-sheet microscope. These brains were then aligned in 3D so that they could be compared to each other using a standard set of region boundary definitions. This sets a technical benchmark for future anatomical studies, demonstrating that intact whole-brain imaging can be used to compare the inputs of different populations of cells with high precision in an automated fashion.
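The comparison step can be illustrated in miniature: once brains are aligned to a common atlas, each population's inputs reduce to a vector of per-region fractions that can be compared directly. The regions and counts below are hypothetical, chosen only to show the idea:

```python
# Compare the input distributions of two dopamine populations across
# brain regions. Region names and input counts are hypothetical.
regions = ["ventral striatum", "hypothalamus",
           "globus pallidus", "zona incerta"]
pop_a = [400, 300, 50, 50]     # e.g., a "typical" population
pop_b = [60, 40, 350, 350]     # e.g., a tail-of-striatum-like population

def fractions(counts):
    """Normalize raw input counts to fractions of the total."""
    total = sum(counts)
    return [c / total for c in counts]

def correlation(xs, ys):
    """Pearson correlation between two equal-length vectors."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = correlation(fractions(pop_a), fractions(pop_b))
# A low or negative r flags two populations with dissimilar input maps.
```

Normalizing to fractions before comparing removes differences in overall labeling efficiency between brains, so only the shape of each input distribution matters.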
 
In summary, we pioneered an automated imaging pipeline that lowers the barrier to future systematic anatomy studies and increases their consistency and efficiency. Using this technique, we uncovered the organization of dopamine circuits and identified a unique population of dopamine neurons: those projecting to the tail of the striatum. What is the function of this group of dopamine neurons? We hope that our study opens the door to further investigation.

________________

Congratulations to Marissa Shoji for winning a Hoopes Prize for her thesis in Neurobiology, entitled "Characterization of the Activity of Glutamatergic Neurons in the Pedunculopontine Tegmentum During Decision-Making." See the Dept. News here!

_________________

FEELING GOOD OR BAD LATELY? LISTEN TO SEROTONIN 

by Mackenzie Amoroso and Jeremiah Cohen
March 10th, 2015

Serotonin is one of the most widespread and mysterious neurotransmitters in the brain. It has been proposed to be involved in many aspects of behavior, including regulating mood and our responses to aversive environmental events, and a deficiency in serotonin has been proposed to play a central role in depression. One of the major challenges in testing hypotheses about serotonin's function has been observing the activity of serotonin-releasing neurons during behavior. Historically, when a microelectrode was placed into the midbrain structure that contains serotonin-releasing neurons, it was difficult to know whether the neuron under observation actually released serotonin.

To address this problem, we used a combination of transgenic mice and optogenetics to identify serotonin neurons by their response to light stimulation. We then recorded the activity of these light-identified serotonin neurons as mice performed a task in which the amount of reward or punishment available in the environment varied predictably over time. We found that 40% of serotonin neurons showed slow variations in their activity that correlated with the amount of reward in the environment. This was remarkable because light-identified dopamine neurons, which have long been thought to be involved in reward, did not signal information on these slow timescales. Instead, all of the dopamine neurons we recorded encoded only the immediate properties of the environment (for example, “I'm about to get a reward”), whereas only a fraction of the serotonin neurons signaled these immediately pending rewards. Taken together, serotonin neurons can signal reward and punishment on both slow and fast timescales. These results suggest that serotonin signals could be important for regulating our behavior on slow timescales, and may be involved in generating emotional states like mood.
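The distinction between fast, event-locked signals and slow, reward-rate-like signals can be illustrated with a simple leaky integrator (an illustrative sketch, not the analysis used in the paper):

```python
# Leaky-integrator sketch of a slow "reward rate" signal, contrasted
# with the instantaneous (fast, event-locked) reward trace.
def leaky_rate(rewards, tau=20.0):
    """Exponentially weighted running estimate of reward rate."""
    rate, trace = 0.0, []
    for r in rewards:
        rate += (r - rate) / tau   # slow update integrates recent history
        trace.append(rate)
    return trace

rewards = [1] * 30 + [0] * 30      # a rich block, then a lean block
slow = leaky_rate(rewards)
# The slow signal climbs gradually during the rich block and decays
# during the lean block, unlike the instantaneous reward trace, which
# jumps between 0 and 1 on every trial.
```

A neuron tracking such a slow variable would look tonically elevated in rich environments and depressed in lean ones, which is the kind of slow modulation described above for serotonin neurons.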

Read more in eLife 

Read eLife Insight, a review written by Peter Dayan and Quentin Huys

Three Information Streams in the Brain

August 6th, 2014 - Cell Reports article 

Serotonin and dopamine are major neuromodulators. Here, we used a modified rabies virus to identify monosynaptic inputs to serotonin neurons in the dorsal and median raphe (DR and MR). We found that inputs to DR and MR serotonin neurons are spatially shifted in the forebrain, and MR serotonin neurons receive inputs from more medial structures. Then, we compared these data with inputs to dopamine neurons in the ventral tegmental area (VTA) and substantia nigra pars compacta (SNc). We found that DR serotonin neurons receive inputs from a remarkably similar set of areas as VTA dopamine neurons apart from the striatum, which preferentially targets dopamine neurons. Our results suggest three major input streams: a medial stream regulates MR serotonin neurons, an intermediate stream regulates DR serotonin and VTA dopamine neurons, and a lateral stream regulates SNc dopamine neurons. These results provide fundamental organizational principles of afferent control for serotonin and dopamine.