Part 2: Connecting Movements & Neural Activity During Decision-Making
00:00:15.28 I'm Anne Churchland from Cold Spring Harbor Laboratory in New York,
00:00:18.08 and my lab is interested in understanding decision-making.
00:00:21.08 And today, I'm gonna tell you about connecting movements and neural activity during decision-making.
00:00:27.00 So, we'll start with a definition that you might remember from my previous talk,
00:00:30.17 that for us a decision is a commitment to one out of a number of alternatives.
00:00:35.00 And mostly, in my lab, we study decisions that ultimately lead to action.
00:00:39.11 And there's a few reasons for this.
00:00:41.10 And the first one is simply that many decisions naturally do lead to action.
00:00:45.16 So, consider this mouse.
00:00:47.04 It's deciding whether or not to rear up and eat these blueberries.
00:00:51.15 And if it decides that that is what it wants to do, that decision will naturally lead
00:00:55.27 to the appropriate motor response, or movement response, so that the animal can acquire the blueberries.
00:01:01.08 And that's true of many decisions that we make.
00:01:03.20 Sometimes a decision is the same thing as a decision to act.
00:01:07.08 Not true of all decisions, but certainly a big class of decisions that we and animals make.
00:01:13.01 A second reason that we study decision-making this way is that decisions that inform actions
00:01:18.20 are well suited to animal studies.
00:01:21.07 You might remember in the previous talk that I described a few ways that we
00:01:25.09 study decision-making behavior in the laboratory,
00:01:29.00 like having monkeys report decisions by making a saccadic eye movement,
00:01:31.25 or by having rats report decisions by moving to choice ports that they poke their snout into
00:01:38.05 to communicate to us what they've decided.
00:01:40.17 And this is really important in animals, and really even in studying human decision-making as well,
00:01:45.19 because it allows us as experimenters to have a systematic and efficient way
00:01:50.00 of knowing what it is that the human or animal has decided,
00:01:54.17 and then we can record that and analyze it, connect it to neural activity, and so on.
00:01:59.12 However, there are some challenges to this approach.
00:02:04.01 And one challenge is that when we study decisions that lead to action, if we're doing this
00:02:08.16 while measuring neural activity in the brain, we need to be able to separate out the decision-related activity
00:02:15.06 from the movement-related activity.
00:02:17.19 And this is a problem that's been known for quite some time.
00:02:20.14 And indeed, people in previous decision-making studies have thought about, within a particular area
00:02:25.16 and for a particular movement like an eye movement, what might be the consequences
00:02:31.27 for the brain in terms of the movements that are being planned.
00:02:35.12 But despite this appreciation in the field that movements can
00:02:39.17 modulate neural activity during decision-making, there are many open questions about the nature
00:02:45.24 of that neural activity.
00:02:47.09 And the first question is, well, how widespread is it?
00:02:50.26 Is neural activity related to movements found in just a few specific areas in the brain,
00:02:55.27 maybe limited to the motor cortex, for example?
00:02:58.16 Or does movement-related activity span many, many neural structures all across the cortex,
00:03:04.05 and maybe even subcortical areas as well?
00:03:07.08 The second question is, is this movement-related activity driven only by instructed movements?
00:03:13.22 So, instructed movements are things like the saccades that the monkey uses to report a choice,
00:03:18.01 or the orienting movement that the rat uses to report a choice.
00:03:22.22 And certainly we would expect that those would drive neural activity.
00:03:25.28 But might there be other movements as well?
00:03:28.27 Could it be the case that animals make many uninstructed movements that we haven't been
00:03:33.04 really thinking about, but that have a big impact on neural activity?
00:03:36.24 Well, no one really knows.
00:03:39.13 And finally, is this movement-related activity task aligned or task independent?
00:03:45.13 Let me tell you what I mean by that.
00:03:47.05 We would expect that certain kinds of movements might have to do with specific events
00:03:52.11 in the task that the animal is doing.
00:03:55.09 For instance, if decisions lead to an animal being rewarded at the end of a trial,
00:04:00.10 we might expect there could be movements that are in anticipation of that reward.
00:04:04.20 One example might be that perhaps the pupils become more dilated at the end of the trial,
00:04:08.28 when the animal thinks it's about to get its reward.
00:04:11.23 We would call that task aligned, because it always happens at the same moment in the task,
00:04:17.06 that is, right before the reward.
00:04:18.24 But there might be other kinds of movements that matter for neural activity as well.
00:04:23.28 And I'll refer to those as task independent.
00:04:26.15 And those are spontaneous, uninstructed movements that happen at random times during the trial.
00:04:32.02 And you might want to think of those as kind of more like fidgets
00:04:34.14 -- movements that don't have to do with anticipating the reward or seeing the stimulus specifically,
00:04:40.10 and they're not really a high priority for the experimenter, but might be a high priority for the animal.
00:04:46.02 And we partly wondered about these task independent movements just because looking at what humans do
00:04:50.24 -- even looking at a classroom full of children, or looking at people on the subway --
00:04:54.15 they actually make a lot of spontaneous,
00:04:58.19 seemingly task independent movements all the time.
00:05:01.00 So, these three questions -- is the activity widespread? driven only by instructed movements?
00:05:04.26 task aligned or task independent? -- we didn't know the answers to these.
00:05:08.05 And so we set out on an experimental paradigm that would allow us to answer these questions,
00:05:12.15 and understand better how movement-related activity and
00:05:16.16 decision-related activity interact in the brain.
00:05:20.16 So, this work was led by two postdocs in my lab, Simon Musall and also Matt Kaufman,
00:05:25.28 who now has his own lab at the University of Chicago.
00:05:28.23 So, they came up with a behavioral paradigm to study decision-making in mice.
00:05:33.22 And the mice are presented with an auditory or visual cue
00:05:37.14 -- so a flashing light or a series of clicks --
00:05:39.22 and it could be on one side of the mouse or the other side of the mouse.
00:05:43.01 And the mouse's job is just to figure out what side it's on, and to make a licking movement
00:05:46.21 to that side to report that that's where the stimulus was located.
00:05:50.10 So, I'm gonna show you a movie, now, of what this looks like.
00:05:53.20 And this is a camera that's looking up from underneath at a mouse.
00:05:57.17 You've probably never looked at a mouse this way before.
00:06:00.19 You can see the mouse's mouth at the top of the frame.
00:06:03.18 And next to that are two little squares.
00:06:06.22 Those are lick spouts, and those, later on, are gonna move in and allow the animal
00:06:10.09 to make a choice.
00:06:11.16 And at the bottom, there, you can see two circles, which are gonna move in.
00:06:15.10 Those are little handles.
00:06:16.10 And the animal will grab one or both handles when he's ready to start a trial.
00:06:20.28 This is his way of letting us know that he's ready to initiate a trial.
00:06:25.05 And this is an important component of our research design for a few reasons.
00:06:29.04 So, first, you might remember from the previous talk that an animal's internal state
00:06:34.12 has a big effect on neural activity.
00:06:36.17 And our hope is that by allowing the animal to self-initiate a trial that we can
00:06:40.16 start to control that internal state a little bit better.
00:06:43.09 At least he'll be in a similar state of mind if he's decided himself to initiate a trial,
00:06:48.02 as opposed to being caught off guard.
00:06:50.01 A second reason we have animals initiate trials is that we want them to be in charge of
00:06:54.14 how long the session lasts.
00:06:56.05 It gives us a really nice behavioral readout of their overall amount of engagement
00:07:00.05 and overall comfort if they are the ones that are initiating each trial.
00:07:05.10 So, I'll play the movie now, and you'll see the animal quite engaged and
00:07:10.02 grab the two little handles.
00:07:13.15 So, they move in, and you can see that he makes contact with them.
00:07:17.28 So, after that, it's time for the animal to see a stimulus.
00:07:21.10 So now, this is again a movie of the same mouse, but this time it's taken
00:07:25.03 from the opposite point of view.
00:07:26.13 It's a camera that's behind the mouse, and you can see its nice long tail there
00:07:29.16 at the bottom of the frame.
00:07:30.24 So, there's gonna be a visual stimulus that will appear on the right.
00:07:34.20 And the animal can see this, just the way that you can.
00:07:37.02 There it is.
00:07:38.15 Then the animal has to wait a whole second, which is a pretty long time for a mouse to wait,
00:07:42.05 but they're at least moderately patient.
00:07:45.11 And then after that, the two lick spouts at the top are gonna move in, and then the animal
00:07:49.12 has the opportunity to report its choice.
00:07:52.03 So, here he goes.
00:07:53.08 He reported a choice to one spout, and the other spout moved away.
00:07:57.01 We do that so that the animal can't change its mind.
00:07:59.11 Mice, like humans, sometimes like to decide, and then go back and change their mind.
00:08:03.02 So, once they've committed, the other option is off the table.
00:08:06.06 So, this is what it looks like for mice.
00:08:08.22 And we set up these video cameras for a few reasons.
00:08:11.19 But as you'll see in a moment, having this high-resolution video of what the animal
00:08:16.02 is doing at every moment in time turned out to be absolutely critical to interpreting
00:08:21.06 the neural activity that we measured.
00:08:23.03 But I'm getting ahead of myself.
00:08:24.26 So, I need to tell you an important component of our experimental design, which is that
00:08:29.11 all animals did both auditory and visual trials, but there were two groups of animals.
00:08:34.27 One of them we called the vision experts.
00:08:37.10 And that means they have a lot of experience with the visual stimulus
00:08:40.10 -- that's those flashing lights that you saw a moment ago --
00:08:43.06 and very little experience with the auditory stimulus.
00:08:46.04 And these animals are indicated by the blue lines that you see there.
00:08:49.17 On the vertical axis, that's the percentage of time that they make a correct response.
00:08:54.00 And you can see that for the blue line, when they're given a visual stimulus,
00:08:58.03 they have a high accuracy, usually above 80%.
00:09:02.22 But when they're given an auditory stimulus, their performance is at chance.
00:09:06.16 That's the value corresponding to 0.5 that's right on the white dotted line.
00:09:11.14 The auditory experts you can see in green.
00:09:13.25 And the auditory experts are just the opposite.
00:09:16.01 They're great at the auditory version of the task, where they hear the clicks, and they're chance at vision.
00:09:21.17 And this turned out to be a really useful component of our experimental design.
00:09:26.01 Because if you think about the vision experts versus the auditory experts doing, say, a visual task,
00:09:30.13 they're both getting the same stimulus, they're both making the same response,
00:09:35.17 but they differ in how they interpret that incoming signal.
00:09:38.23 So, for the vision experts, they know what it means.
00:09:41.13 They know that when you get a visual stimulus on the right, you need to plan a lick movement to that side.
00:09:46.00 But if they're an auditory expert, and they get a visual stimulus,
00:09:48.24 they basically make a sensory motor guess.
00:09:51.01 They haven't figured out what the visual stimulus means, and so they make some kind of guess,
00:09:54.23 and they're right half the time.
00:09:55.26 So, their incoming sensory stimulus and their outgoing motor response are the same.
00:10:01.03 But what they do internally differs between those two groups.
00:10:05.14 So, we were open to the idea that there might be decision-making activity
00:10:11.06 and movement-related activity at many different places across the dorsal cortex.
00:10:17.20 And because a number of decision-making areas have been implicated in many studies
00:10:23.04 throughout the past years, especially the last five years for mice,
00:10:26.13 we wanted to look at neural activity really quite broadly
00:10:30.00 to give ourselves the best chance of seeing where
00:10:32.00 decision-making and movement-related activity interact.
00:10:35.16 So, to do this, we developed a wide-field imaging setup, and this was led by the postdoc,
00:10:41.01 Simon Musall, that I mentioned a moment ago.
00:10:43.12 And what we do is that we have two channels.
00:10:46.13 In one channel, we can measure the neural activity, and I'll tell you a bit more in a moment
00:10:50.03 about where that activity comes from.
00:10:52.14 And then in the other channel, we can see the animal's hemodynamic response, that is,
00:10:56.01 the blood flow that goes across the brain to fuel up
00:11:00.14 all of the neurons.
00:11:01.14 And we have techniques that we use for separating those two so we can get a really precise estimate
00:11:05.24 of neural activity.
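The two-channel separation can be sketched as a per-pixel regression: the calcium-insensitive reference channel captures the hemodynamic fluctuation, and subtracting its best-fit contribution from the calcium channel leaves the neural signal. This is a minimal illustration under stated assumptions, not the lab's actual pipeline; the function name and the simple least-squares correction are mine.

```python
import numpy as np

def hemodynamic_correct(signal, reference):
    """Remove the hemodynamic component shared between a calcium-sensitive
    channel and a calcium-insensitive reference channel.

    signal, reference: arrays of shape (n_frames, n_pixels).
    Returns the corrected (mean-subtracted) signal, same shape.
    """
    # Work on mean-subtracted traces so the regression captures fluctuations.
    s = signal - signal.mean(axis=0)
    r = reference - reference.mean(axis=0)
    # Per-pixel least-squares coefficient of reference onto signal.
    beta = (s * r).sum(axis=0) / (r * r).sum(axis=0)
    # Subtract the predicted hemodynamic contribution.
    return s - beta * r

# Toy example: a shared slow "blood flow" drift plus small neural fluctuations.
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 200)
hemo = np.sin(t)[:, None] * np.ones((1, 5))
neural = rng.normal(0, 0.1, (200, 5))
corrected = hemodynamic_correct(neural + hemo, hemo + rng.normal(0, 0.01, (200, 5)))
```

After correction, the slow shared drift is largely gone and mostly the neural fluctuations remain.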
00:11:08.03 A big advantage of this approach is that we can see neural activity through an intact skull.
00:11:14.17 The skull of the mouse is really very thin, and the signals that we have are pretty bright,
00:11:18.13 so we can see the neural activity all the way through the skull.
00:11:21.05 We don't have to do any surgery to remove skull or anything like that.
00:11:24.23 So, it's a quite non-invasive way to measure neural activity.
00:11:27.24 But you might wonder, well, wait a second.
00:11:31.00 How can you see neurons firing?
00:11:33.11 You might know or remember from my previous talk that the way they communicate with each other
00:11:36.21 is electrically.
00:11:37.23 So, for example, if we have this neuron, here, and it wants to send a message to a neighboring neuron,
00:11:42.09 over there, then the neighboring neuron will spike, and traditionally people have
00:11:46.20 recorded that electrical activity using sharp electrodes,
00:11:49.26 like the one that I've schematized here.
00:11:52.20 And this is a great approach, but it won't be really what we want if we want to
00:11:57.00 measure neural activity across the entire dorsal cortex,
00:12:00.09 because that would necessitate putting electrodes all over the dorsal cortex.
00:12:04.07 And in terms of a biological tissue, that wouldn't be a very good thing to do.
00:12:07.16 It would be too many electrodes in the brain, and it would really damage the brain.
00:12:11.01 So, that technique was kind of off the table.
00:12:13.13 So, we decided to take advantage of a different technique, which leverages the fact that
00:12:17.27 when neurons spike, in the way I just described, there's an influx and efflux of calcium
00:12:23.23 in and out of the neuron.
00:12:25.15 And we can attach a fluorophore, something that emits light, to this calcium, so that,
00:12:30.21 when the calcium is going in and out, we can see the calcium.
00:12:34.14 And this gives us an estimate of the neuron's response.
00:12:38.21 So, to make it really simple, when a neuron fires, it turns green.
00:12:42.22 So, if we use transgenic tools to have the calcium indicators in all of the neurons
00:12:48.10 in a mouse, then what it allows us to do is this.
00:12:51.00 So, we take the camera that I described to you a moment ago.
00:12:53.28 And at the bottom, there, you can see six schematic neurons that are all quiet.
00:12:57.25 They're not responding.
00:12:59.06 And then, when the area becomes engaged and the neurons start firing, then they turn green,
00:13:03.21 and we can capture those photons with our wide-field macroscope,
00:13:08.26 and we can do this across the entire brain.
00:13:10.27 So, this allows us to see neurons firing, which is a really helpful tool when we
00:13:17.04 want to look at neural activity across a really broad swath of brain.
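The standard way to turn "when a neuron fires, it turns green" into a number is the fractional fluorescence change, dF/F. Here is a minimal sketch; using a low percentile of the trace as the resting fluorescence F0 is a common convention, but the specific choice here is an assumption, not the talk's method.

```python
import numpy as np

def delta_f_over_f(trace, baseline_pct=10):
    """Convert a raw fluorescence trace to dF/F, using a low-percentile
    baseline as an estimate of the resting fluorescence F0."""
    f0 = np.percentile(trace, baseline_pct)
    return (trace - f0) / f0

# Toy trace: resting fluorescence of 100 with one transient up to 150.
trace = np.full(100, 100.0)
trace[40:50] = 150.0
dff = delta_f_over_f(trace)
```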
00:13:21.03 So, what kinds of structures can we see?
00:13:23.24 I mean, you might be a little skeptical about this technique.
00:13:26.15 And indeed, many in the field were, because it's kind of a new technique that we
00:13:29.24 and only a few other labs are using.
00:13:31.11 Well, we have one kind of proof-of-principle test that tells us the technique
00:13:36.04 is measuring a reasonable signal.
00:13:38.09 And we use a technique called visual mapping, or Fourier analysis, where we show animals
00:13:43.15 vertical bars and horizontal bars that are moving.
00:13:47.27 And then, by looking at the moments in time that we see neural activity for different pixels,
00:13:52.00 we can generate maps of visual space.
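The Fourier analysis can be sketched per pixel: because the bar sweeps the screen periodically, the phase of each pixel's response at the sweep frequency tells you where in the sweep, and hence where in visual space, that pixel responds. A toy illustration, with made-up function name and frame counts:

```python
import numpy as np

def retinotopic_phase(pixel_traces, n_frames_per_sweep):
    """Phase of each pixel's response at the stimulus sweep frequency.

    pixel_traces: (n_frames, n_pixels); the bar sweeps the screen once
    every n_frames_per_sweep frames. The phase at that frequency maps
    each pixel to a position in the sweep, i.e. in visual space."""
    n_frames = pixel_traces.shape[0]
    k = n_frames // n_frames_per_sweep        # harmonic index of the sweep
    spectrum = np.fft.fft(pixel_traces, axis=0)
    return np.angle(spectrum[k])              # phase in radians, per pixel

# Toy example: two pixels responding at different points of the sweep.
frames = np.arange(400)
sweep = 2 * np.pi * frames / 100              # 4 sweeps of 100 frames each
traces = np.stack([np.cos(sweep), np.cos(sweep - np.pi / 2)], axis=1)
phases = retinotopic_phase(traces, 100)
```

Pixels responding a quarter-sweep apart come out a quarter-cycle apart in phase, which is what lets the analysis recover a map of visual space.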
00:13:56.02 And one thing that we were able to recapitulate that others have observed is that each mouse
00:14:02.03 has multiple maps of visual space.
00:14:04.22 So, in primary visual cortex, there's a map of visual space.
00:14:08.16 And then in about six or seven surrounding areas, there is the same map of visual space
00:14:14.06 repeated multiple times.
00:14:15.24 So, this was a technique originally developed by Michael Stryker's lab at UCSF,
00:14:21.03 and it's really useful for us in demonstrating that the wide-field imaging is giving us
00:14:25.18 reasonable signals.
00:14:26.21 But it's also, I think, really cool to ponder.
00:14:28.15 So, mice are similar to other mammals, humans, and non-human primates in that the visual world
00:14:33.11 is represented multiple times in every single brain.
00:14:37.28 And it's interesting to think about why that is.
00:14:40.13 What computations distinguish one map of visual space from a neighboring map of visual space?
00:14:45.02 Why do we have so many?
00:14:47.09 The answer isn't totally known, but it does highlight that for many mammals, including mice,
00:14:52.08 that have a visual system that's not quite as good as primates, that even in mice
00:14:57.08 vision matters enough that they're willing to repeat that visual world
00:15:01.02 six times in every hemisphere of every brain.
00:15:05.01 So, having confirmed that we can see the maps of visual space that are expected
00:15:12.10 in the visual cortex of the mouse, we're then in a position where we can measure neural activity
00:15:16.20 during decision-making.
00:15:18.09 So, this is the same behavioral task that I showed you a moment ago,
00:15:21.20 that starts with the baseline period,
00:15:24.07 then when the visual stimulus comes on,
00:15:27.24 you'll be able to see the back of the brain, primary visual cortex.
00:15:31.02 There it is, lighting up.
00:15:35.09 Then there's the delay period.
00:15:38.05 And finally, when the animal makes its decision, there's really quite a lot of activity
00:15:42.07 all over the brain.
00:15:43.07 So, in this video, it's clear that multiple areas are engaged during decision-making.
00:15:47.22 And the first thing we did when we got these measurements is to do a couple of
00:15:51.00 what we call sanity checks.
00:15:52.19 And this is a really important thing to do, really, in any experiment, but especially
00:15:56.15 if you're using a new technique.
00:15:57.22 So, the first thing that we do is ask, are we seeing the right thing in the right place
00:16:01.28 at the right time?
00:16:03.06 So, this is an example.
00:16:05.01 It shows you just the raw fluorescence signal.
00:16:07.21 These are just basically frames from the movie that you just saw a moment ago.
00:16:11.07 And the first thing that we do is to consider a few areas that have response properties
00:16:15.04 that are well known.
00:16:16.13 So, those include primary visual cortex.
00:16:20.07 And the first thing that we ask is, do we see a difference in primary visual cortex
00:16:24.12 for auditory versus visual trials?
00:16:26.06 Well, we better see a difference, right?
00:16:27.24 If we don't see a difference, then something's wrong with our measurement.
00:16:30.08 So, that's what you see down here.
00:16:31.23 The white line is for visual trials, and the red line is for auditory trials.
00:16:36.02 And those two gray rectangles tell you when the stimulus is on.
00:16:39.05 And much to our relief, we saw a higher response on visual trials than auditory trials
00:16:43.22 in primary visual cortex.
00:16:45.18 Great -- sanity check passed.
00:16:47.22 Next, we look at area RS, which stands for retrosplenial cortex.
00:16:51.27 It's an area that's a little bit more medial.
00:16:53.24 It has some known visual responses.
00:16:56.21 And again, we see that auditory and visual are a little bit different --
00:16:59.15 more of a visual response.
00:17:01.12 Again, confirmed.
00:17:02.20 In areas that are involved in movement planning, we wouldn't expect those to differ very much
00:17:06.11 for auditory versus visual stimuli, because the movement is always the same.
00:17:10.00 And indeed, for a hind limb area, and for secondary motor cortex,
00:17:13.13 we found those were the same.
00:17:15.15 So this, combined with the visual maps, gave us reassurance that our technique was working,
00:17:19.11 and we were seeing the right activity in the right place at the right time.
00:17:22.23 So, then we decided to make a comparison of novice versus expert subjects.
00:17:27.26 And that's what you see here.
00:17:29.05 So, starting with V1, that's at the top.
00:17:31.09 Remember, primary visual cortex.
00:17:33.09 The red line are the untrained subjects, and the white line are the trained subjects.
00:17:37.04 And you can see they're really pretty similar.
00:17:40.28 And we noticed how similar they were, and we thought, huh, well,
00:17:44.08 they're doing something kinda different.
00:17:45.20 Remember, their behavior was very different.
00:17:47.12 But the neural responses seem pretty similar.
00:17:50.09 Okay, well let's look at retrosplenial cortex.
00:17:52.18 Maybe, you know, that's where we'll see a difference between novice and expert decision-makers.
00:17:55.26 But again, we thought, what?
00:17:58.28 The activity is really similar here too.
00:18:01.11 And everywhere we looked, it seemed like the novice and expert decision-makers,
00:18:05.25 even though their behavior was really different, their neural activity was surprisingly similar.
00:18:10.18 So, this led us to say, whoa, well, if decision-making signals aren't driving this neural activity
00:18:16.18 -- because, remember, the decision-making is quite different in the two groups --
00:18:20.12 what is driving the neural activity?
00:18:22.28 We didn't know.
00:18:23.28 So we decided we needed to work a little harder to connect the neural activity to the behavior.
00:18:29.03 And the way that we did this was to build a linear model.
00:18:31.18 And the goal of the model is this.
00:18:32.25 So, you can imagine a particular pixel that we record -- this is just one spot in the brain --
00:18:37.00 that on trial one the fluorescent signal in that pixel might go up and down
00:18:41.21 a bit, and look something like this.
00:18:43.17 Here might be two other trials.
00:18:45.06 So, same part of the brain, but two different trials.
00:18:47.28 Fluorescence signals kind of going up and down.
00:18:49.19 And our goal with our model is that we wanted to model the trial-by-trial fluorescence signal
00:18:55.22 using all of the behavioral parameters we have at our disposal.
00:18:59.04 So, we want to ask, what is it that's driving these fluctuations?
00:19:02.17 Is it the decision?
00:19:03.17 Is it fidgeting movements?
00:19:04.23 Is it pupil dilation?
00:19:06.11 What is it?
00:19:07.11 And we were fortunate to have a lot of behavioral parameters at our disposal.
00:19:11.00 So, what we did is we took all the possible events that might modulate neurons.
00:19:16.07 We didn't know which ones mattered.
00:19:17.28 We started with things like the moment that the stimulus came on.
00:19:21.00 We call those post-events.
00:19:23.05 Peri-events are things like licking movements.
00:19:27.09 Trial events that have to do with decision-making, like whether the past or current trial
00:19:31.19 is a success or failure.
00:19:33.24 And then, finally, a whole lot of analog parameters as well.
00:19:36.24 And these are things like the animal's pupil diameter, which fluctuates
00:19:40.20 over the course of the trial, and also whisking movements, and a number of other movements as well.
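One way to sketch how these event types become regressors: each discrete event is expanded into time-lagged indicator columns, so the fitted weights trace out a temporal response kernel, while analog signals like pupil diameter enter as their own columns. This is an illustrative construction, with hypothetical names, not the study's exact design-matrix code.

```python
import numpy as np

def build_design_matrix(event_times, n_frames, kernel_len, analog=None):
    """Build a design matrix for a linear encoding model.

    Each event (e.g. a stimulus onset or a lick) becomes kernel_len
    columns: copies of its 0/1 indicator shifted by 0..kernel_len-1
    frames, so the fitted weights form a response kernel. Analog
    regressors (e.g. pupil diameter, whisking energy) are appended as-is.
    """
    cols = []
    for times in event_times:
        indicator = np.zeros(n_frames)
        indicator[times] = 1.0
        for lag in range(kernel_len):
            # Shift the indicator and zero out any wrap-around at the start.
            cols.append(np.roll(indicator, lag) * (np.arange(n_frames) >= lag))
    if analog is not None:
        cols.extend(list(analog.T))
    return np.stack(cols, axis=1)

# One stimulus-onset event at frame 10, a 5-frame kernel, plus a pupil trace.
pupil = np.linspace(0, 1, 50)
X = build_design_matrix([[10]], n_frames=50, kernel_len=5, analog=pupil[:, None])
```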
00:19:45.00 And I should say that we were partly inspired by work from Marius Pachitariu and Carsen Stringer
00:19:50.00 who had previously shown, from electrical recordings in V1
00:19:55.02 -- not during decision-making, but nonetheless very relevant --
00:19:57.19 that those neurons cared a lot about facial movements.
00:20:01.00 And so we thought, well, we'll just throw those into the model as well,
00:20:04.01 and see what comes out.
00:20:05.17 So, this is now the whole model.
00:20:07.25 And for those of you who are aficionados, we had to of course think really deeply about
00:20:10.27 how to fit this model, because we have many, many parameters.
00:20:14.16 We want to prevent overfitting.
00:20:16.23 But there are good mathematical techniques to do that that we leveraged.
00:20:20.04 And at the end of the day, we put these all together in a design matrix, and then we fit the model,
00:20:24.07 which just means we assign a weight to each of these variables, just saying,
00:20:29.09 how much does this variable matter for fitting the neural response?
00:20:33.24 And I'm gonna show you now the model's estimate of what the fluorescence signal should
00:20:38.16 do on every trial by overlaying, in red, the model prediction to the actual data, in white.
00:20:43.23 So, you can see that the model is pretty good.
00:20:46.03 So, we were able to capture a lot of the trial-by-trial fluctuations in fluorescence,
00:20:49.24 about 42% of those fluctuations, which was quite encouraging.
00:20:54.08 So, this tells us that the model is working.
00:20:56.17 And now we get to ask the really interesting question, which is, why is the model working?
00:21:00.07 What are the parameters that are really important for making this model so good at predicting
00:21:04.18 what the neural activity is gonna do?
00:21:07.12 So, we separated the variables into two groups.
00:21:10.11 And one of them I'm gonna describe now as task-related variables.
00:21:13.25 And these are things like the animal's choice, success or failure of the previous trial,
00:21:19.16 presence of an auditory or visual stimulus, things like this.
00:21:23.03 And we found that, you know, as we expected, these mattered for the model.
00:21:26.25 They had weights that were nonzero, and that was good.
00:21:29.28 But surprisingly, when we then looked at a different class of variable, movement-related variables,
00:21:34.21 we found that they accounted for a lot more of the variance.
00:21:39.06 The movements of the nose, for example, which we hadn't really expected, and these two parameters
00:21:44.19 called Video and Video ME, which stands for video motion energy.
00:21:48.10 These are just all of the remaining pixels in the video that we weren't expecting to matter at all.
00:21:54.02 They turned out to be really very important for explaining the fluorescence signal.
00:21:58.09 But at this point, you should be skeptical.
00:22:00.05 So, this is our, what I call, skeptics’ corner.
00:22:02.26 And a skeptic would say, oh, come on.
00:22:05.12 This isn't really the right way to do this analysis, because a lot of the variables
00:22:09.11 that you have here are related to each other.
00:22:12.02 For example, the animal's choice, whether it goes right or left, is intimately tied
00:22:16.19 to the animal's licking, because it uses the licking to report the choice.
00:22:21.05 And this is of course a very valid criticism.
00:22:23.00 So, to address this, we went back to the model, and we kicked the parameters out of the model
00:22:27.10 one by one, and looked at how much worse the model did as a result.
00:22:32.11 And this is a much more conservative way of assessing
00:22:35.20 how much each of these behavioral features matters for the brain.
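The kick-one-out analysis can be sketched as a "unique contribution" computation: refit the model without each group of columns and record how much the fit quality drops. A minimal illustration (hypothetical names; in-sample R^2 for brevity, where the real analysis would score held-out data):

```python
import numpy as np

def r2_of_fit(X, y, alpha=1.0):
    """R^2 of a ridge fit (in-sample here, for brevity)."""
    w = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ y)
    pred = X @ w
    return 1 - ((y - pred) ** 2).sum() / ((y - y.mean()) ** 2).sum()

def unique_contribution(X, y, groups):
    """Drop each named group of columns in turn and report how much R^2
    falls: the variance that only that group can explain."""
    full = r2_of_fit(X, y)
    out = {}
    for name, cols in groups.items():
        keep = [c for c in range(X.shape[1]) if c not in cols]
        out[name] = full - r2_of_fit(X[:, keep], y)
    return out

# Toy data: y depends strongly on a "movement" regressor, weakly on "task".
rng = np.random.default_rng(2)
movement = rng.normal(size=500)
task = rng.normal(size=500)
y = 2.0 * movement + 0.2 * task + rng.normal(0, 0.5, 500)
X = np.stack([movement, task], axis=1)
delta = unique_contribution(X, y, {"movement": [0], "task": [1]})
```

Because correlated regressors share explained variance, this drop-one measure is conservative: it credits each group only with what no other regressor can account for.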
00:22:38.19 So, for example, I'll be really concrete here.
00:22:41.05 Let's suppose we kick out the right visual stimulus.
00:22:44.01 Then we're gonna ask, how much worse does the model do?
00:22:48.17 Or another way of putting it, how much does the model suffer?
00:22:52.02 And where does the model suffer?
00:22:54.02 So, here's a way to visualize this.
00:22:55.19 So, now, the colors here tell you how much the model's ability to fit the data
00:23:00.21 -- how well it worked -- suffered when we kicked out the parameter corresponding
00:23:05.21 to the right visual stimulus.
00:23:07.00 Well, good.
00:23:08.00 We found out, when we kicked out the right visual stimulus from the model,
00:23:11.13 we could no longer predict the neural activity in left V1.
00:23:15.07 And remember that visual signals from the right
00:23:17.27 go to the opposite side of the brain -- they cross in the brain.
00:23:21.20 So, this tells us that when we eliminate that parameter we can no longer fit the data as well
00:23:26.16 in primary visual cortex, which is exactly what we would expect.
00:23:30.01 And similarly, when we kick the right handle grab out of the model, we're no longer able
00:23:34.07 to predict activity in the part of the brain that corresponds to the paw motor cortex.
00:23:41.17 So, this tells us that our more conservative approach is doing what it's supposed to do.
00:23:46.06 And we were then able to reanalyze the data and say, well,
00:23:50.02 how much does each of these parameters matter when we use a much more conservative method?
00:23:54.17 And we found that for a lot of the task parameters... you can see here -- well, you can't actually see.
00:23:59.27 These dark green bars tell you how much these parameters matter.
00:24:02.08 They're almost invisible, because when we kicked out most of the decision-making parameters,
00:24:07.01 we could still fit the model really well.
00:24:09.21 And that's because what really mattered for the neural activity was the movement model.
00:24:14.09 So, the dark green bars corresponding to the movement parameters are still much larger,
00:24:19.25 especially compared to the task, telling us that we really need to include those
00:24:24.16 in our model if we want to understand the neural activity.
00:24:27.20 But not so much the task parameters.
00:24:30.06 I have movies here that show you how much each model matters at each location in space in the brain,
00:24:37.26 and also at each moment in time.
00:24:38.26 So, the full model means the model including all of the parameters.
00:24:42.22 The movement model is the one including all of the movements in the plot I just
00:24:46.02 showed you before.
00:24:47.05 And the one labeled task, these are all the decision-related parameters.
00:24:51.01 So, when you see that one of the values on these plots is yellow, that means that
00:24:56.24 that particular parameter really mattered for the model.
00:25:00.25 And blue means that it mattered less.
00:25:02.17 So, you can already see that at the beginning of the trial,
00:25:05.04 the task parameters don't really matter very much at all.
00:25:08.02 And let's see what happens over time.
00:25:11.08 So, the movement model really matters.
00:25:14.08 Again, really the movement model.
00:25:16.22 Okay, stimulus comes on.
00:25:17.24 You can see the task model starts to matter.
00:25:19.21 We need to have the task model to understand visual cortex responses.
00:25:27.23 So, really throughout the entire trial, the main things that we needed to explain
00:25:32.17 the neural activity really had to do with the movements that the animal was making,
00:25:37.00 and much less so the decision-making parameters that we had built into the behavior.
00:25:43.01 And just to be really concrete about what I mean, think about licking, which is a movement variable,
00:25:47.27 versus the animal's choice, which is a task variable.
00:25:50.26 So, the choice parameter is a binary variable, which can be 1 at any moment in the trial,
00:25:56.25 so it can influence the neural activity at any moment in the trial.
00:26:00.26 And so that means if there's any particular moment where a choice is made,
00:26:04.14 that parameter will be a really good one to have.
00:26:06.09 But with licking, what actually happens is the animal makes
00:26:09.04 a few kind of idiosyncratic licking movements at the end of the trial.
00:26:12.22 And if, for example, there's a fluorescence spike every time the animal makes a licking movement,
00:26:16.10 then the licking parameter is what captures the neural activity, and not the choice parameter.
00:26:22.07 And it's dissociations like that which allowed us to uncover that it was
00:26:27.12 really the movements that mattered much more than the abstract decision-related quantities
00:26:31.25 that we had included in the model as well.
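The licking-versus-choice dissociation can be made concrete with a toy regression. This is a hypothetical sketch, not the actual analysis: the trial counts, the lick statistics, and the spike-per-lick assumption are invented for illustration. A choice regressor is constant across the whole trial, while a lick regressor is nonzero only in the bins where licks occur, so activity that is time-locked to licks is soaked up by the lick regressor.

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, T = 40, 100   # trials, and time bins per trial

choice_cols, lick_cols, ys = [], [], []
for _ in range(n_trials):
    choice = float(rng.integers(2))               # binary choice, fixed within a trial
    licks = np.zeros(T)
    licks[rng.integers(T - 10, T, size=3)] = 1.0  # idiosyncratic licks near trial end
    # Simulated fluorescence: a spike at every lick, plus noise.
    # The activity is locked to licking, not to the abstract choice.
    ys.append(3.0 * licks + 0.1 * rng.standard_normal(T))
    choice_cols.append(np.full(T, choice))
    lick_cols.append(licks)

X = np.column_stack([np.concatenate(choice_cols), np.concatenate(lick_cols)])
y = np.concatenate(ys)

# Joint least-squares fit: the lick regressor captures the activity,
# and the choice regressor is left with almost nothing.
w_choice, w_lick = np.linalg.lstsq(X, y, rcond=None)[0]
```

When the two regressors are fit jointly like this, the lick weight lands near the true spike size and the choice weight near zero, which is the sense in which the movement variable, not the task variable, "explains" the neuron.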
00:26:34.28 So, I've told you so far that movement-related variables are really the most important
00:26:40.09 for understanding neural activity.
00:26:42.02 And there are really two kinds of movement-related activity.
00:26:46.20 Some of them are instructed movements, like handle grabs and licks.
00:26:49.27 And remember, early in the talk we wondered whether instructed movements were
00:26:53.07 the only ones that mattered.
00:26:54.15 But there are many uninstructed movements as well.
00:26:56.18 We don't tell the animal to dilate its pupil.
00:26:58.17 We don't tell it to move its hindpaw or move its nose.
00:27:01.04 But you can see that many of these uninstructed movements were really important
00:27:06.05 for fitting the neural activity.
00:27:07.26 And there's kind of a lesson here, I think, that we really learned, which is that
00:27:11.01 those movements aren't important to us, right?
00:27:12.27 We care about the licking and the handle grab,
00:27:15.09 because those are what we built into the experimental design.
00:27:17.23 But apparently these other movements are important to the animal,
00:27:20.06 because it makes a lot of those movements.
00:27:22.04 And apparently they're a high priority for the brain, because we really need to know what,
00:27:26.04 for example, the nose is doing if we want to understand the neural activity.
00:27:31.05 So, these spontaneous movements really were quite important.
00:27:35.13 Just to summarize across all of the different movements, we've grouped them into the task variables
00:27:40.13 -- those are the decision-related ones --
00:27:43.02 and then instructed versus spontaneous movements.
00:27:47.15 And you can see that the spontaneous movements, the green bars, both dark and light,
00:27:51.20 are larger than the instructed movements.
00:27:53.22 So, those spontaneous movements were even more important than the ones that we had
00:27:58.17 built into the decision-making task.
00:28:01.05 So, what does this mean for understanding average neural activity?
00:28:05.13 One thing we often do as scientists is that we average together the responses of
00:28:09.08 many repetitions of the same trial.
00:28:11.14 And you might feel reassured, thinking that once you've averaged responses together,
00:28:15.28 that a lot of these spontaneous movements that I'm talking about
00:28:19.06 won't really matter anymore.
00:28:21.12 The extent to which that's true depended a bit on the area under study.
00:28:26.17 So, here at the top, this is the fluorescence response and also the model fit
00:28:30.22 to average data from primary visual cortex, V1.
00:28:33.13 And there are two lines there, but you can't really see that there are two lines,
00:28:36.12 because the model fits the data really well.
00:28:38.10 So, now I'm gonna break up the model into just the task variables, the decision-making ones.
00:28:44.08 And you can see that we mostly fit the fluorescence activity pretty well,
00:28:47.26 except at the beginning of the trial there's a little bump that we couldn't really fit with the task model.
00:28:52.23 When we included instructed movements, we did better at fitting that bump,
00:28:56.09 because it turned out that that bump was related to the animal grabbing the handle.
00:28:59.16 And this was surprising, because this is in primary visual cortex,
00:29:01.26 where we wouldn't think a handle grab would matter so much, but it did matter.
00:29:05.10 And then, when we looked at the spontaneous movements, again, we found we were able to
00:29:10.02 understand the bump a little bit better when we included those spontaneous movements.
00:29:14.18 So, to summarize, in primary visual cortex, the decision-making parameters in the task category
00:29:20.22 are, really, pretty important, and we only need the movement model for
00:29:24.22 certain kinds of fluctuations in activity.
00:29:27.14 Here, in secondary motor cortex, this is kind of a different story.
00:29:30.26 Here, the task variables were not very helpful in understanding the neural activity at all.
00:29:36.00 The instructed movements were definitely more helpful.
00:29:39.10 And the spontaneous movements were really critical.
00:29:42.02 We really needed all three of these model components together
00:29:47.03 to be able to predict what the neurons were going to do.
00:29:49.26 So, as a result of this analysis so far, we developed what we call a task modulation index,
00:29:56.19 which tells us how much particular brain areas are modulated by these task variables.
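A "task modulation index" of this kind could be defined in several ways; here is one plausible sketch, in which the index is the unique contribution of the task regressors expressed as a fraction of the full model's explained variance. The exact definition, the regularization, and the toy "V1-like" and "ALM-like" signals below are assumptions for illustration, not the paper's formula.

```python
import numpy as np

def r2(X, y, lam=1.0):
    # Ridge fit and in-sample R^2 (the real analysis would cross-validate)
    w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
    resid = y - X @ w
    return 1.0 - resid.var() / y.var()

def task_modulation_index(X_task, X_move, y):
    """Unique task contribution as a fraction of the full model's R^2."""
    full = r2(np.column_stack([X_task, X_move]), y)
    return (full - r2(X_move, y)) / full

rng = np.random.default_rng(2)
X_task = rng.standard_normal((400, 2))
X_move = rng.standard_normal((400, 4))

# A signal with strong task (stimulus) drive, loosely "V1-like"...
y_v1 = X_task @ np.array([1.5, 1.0]) + X_move @ np.array([0.3, 0.3, 0.0, 0.0]) \
       + 0.1 * rng.standard_normal(400)
# ...and one dominated by movement, loosely "ALM-like".
y_alm = X_task @ np.array([0.05, 0.0]) + X_move @ np.array([1.0, 1.0, 0.8, 0.5]) \
        + 0.1 * rng.standard_normal(400)

idx_v1 = task_modulation_index(X_task, X_move, y_v1)
idx_alm = task_modulation_index(X_task, X_move, y_alm)
```

An index defined this way would come out high for an area whose activity genuinely needs the task variables and near zero for an area whose activity is explained by movements alone.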
00:30:01.28 And part of the reason we did this is that I'm hoping people might be skeptical
00:30:05.14 for a second reason.
00:30:06.14 So, back to skeptics’ corner again.
00:30:08.03 A skeptic here would say, okay, it's clear that neural activity
00:30:13.21 is dominated by movements when you measure activity using wide-field imaging.
00:30:18.13 But what is wide-field imaging really measuring?
00:30:21.00 It's a new technique.
00:30:22.00 Who knows?
00:30:23.00 What do you think the single neurons are doing?
00:30:24.00 That's what we've been measuring in the field of neuroscience for decades.
00:30:27.14 And that's of course a valid concern, because wide-field imaging is pooling signals
00:30:31.01 from probably a lot of different kinds of neural activity.
00:30:34.10 So, to address this point, we picked an area called ALM, anterior lateral motor cortex,
00:30:40.00 which is indicated by the dotted white line at the front of the brain.
00:30:44.15 And we decided to zoom in there with our two-photon microscope to see whether the dominance
00:30:49.15 of movement-related activity was evident when we looked at single neurons,
00:30:52.25 the more traditional approach, as well.
00:30:54.25 So, that's ALM, there.
00:30:57.09 And this shows you where we imaged.
00:30:59.00 It's an area.
00:31:00.09 ALM is known to be active during decision-making, and especially when an animal is experiencing
00:31:05.22 a delay -- waiting to execute a motor response to report a decision.
00:31:09.17 There's been a lot of work on this from Karel Svoboda's lab as well as a number of others.
00:31:14.04 So, we imaged there, and used an automated segmentation method to identify
00:31:19.02 where all the individual neurons are.
00:31:20.24 And in the image with the yellow box around it, each one of those little colored dots
00:31:25.05 is a single neuron.
00:31:26.05 So, you can see we can measure the activity from lots of neurons at the same time.
00:31:30.15 And now we're measuring them individually.
00:31:32.15 So, it's the zoom-in view, as opposed to the bird's eye view I told you about previously.
00:31:37.21 We can see what these neurons do at every moment in time during the decision-making behavior.
00:31:42.13 And without going into too much detail, the measured responses that we observed in these neurons
00:31:47.15 were very similar to the responses that people had reported previously.
00:31:52.04 So, this confirms that we're recording neural activity in an area with known response properties,
00:31:57.25 and that we see what others see.
00:31:59.27 So, many neurons have clear task tuning.
00:32:02.22 So now, we're able to take exactly the same approach I told you about before.
00:32:07.00 We take the same model -- same movement parameters, behavior parameters --
00:32:11.02 except instead of trying to model the trial-by-trial fluctuations in fluorescence activity of pixels,
00:32:17.27 we do it for individual neurons.
00:32:19.15 But other than that, the math is all the same, and we can ask all the same questions
00:32:23.17 about whether movement-related activity dominates during decision-making.
00:32:28.15 And this is the outcome of that analysis.
00:32:30.19 You can see that for the two-photon single neuron analysis, we see really a very similar picture
00:32:35.13 to what we saw in the wide-field.
00:32:37.16 Which is, again, movements really dominate neural activity.
00:32:43.02 And here's just a couple of single-cell examples.
00:32:45.07 And I like these, because, for us, they tell us that our intuitions
00:32:48.27 about what kind of computations are reflected in neurons can be kind of misleading.
00:32:53.01 So, this first cell is in area ALM, and its fluorescence response is shown in white.
00:32:58.24 And the predicted response from the movement model and task model are shown in green.
00:33:03.15 And this is a neuron that has a message similar to the message
00:33:07.17 that I've been telling you throughout, which is that the movement model is way better.
00:33:11.12 And that's even true when you look at the response within those gray rectangles,
00:33:16.10 which is the time that the animal is making its decision.
00:33:18.20 There's a really interesting time-varying increase during that period,
00:33:22.12 which we might have thought was related to a decision-making parameter, but in fact,
00:33:26.27 really, it just has to do with the movements that the animal is making.
00:33:30.16 For this cell, it's the opposite story.
00:33:32.02 So, for this cell, again, the real response is in gray, and the model predictions
00:33:35.25 are in blue and green.
00:33:37.01 For this cell, the movement model is really terrible, and the task model is much better.
00:33:41.10 And if I had seen this neuron without having done this analysis, I would have thought,
00:33:45.28 oh yeah, that's a movement neuron.
00:33:47.15 It's building up right before the time of the movement.
00:33:50.00 But that really turned out not to be true.
00:33:51.26 And I think one lesson that we've learned, having analyzed the data in this way,
00:33:57.01 is that sometimes the intuitions we have about what kinds of computations are being reflected
00:34:00.13 in neural activity just really aren't right.
00:34:03.02 And we need to test the hypothesis explicitly, and distinguish movement-related activity
00:34:08.28 from other kinds of cognitive functions like decision-making.
00:34:12.05 So, I started off by asking questions
00:34:17.12 about movement-related activity during decision-making.
00:34:20.19 And you might remember that we had three questions.
00:34:22.18 So, the first question is, is it quite localized to just a few areas, or is it very widespread?
00:34:27.28 And yes, it's very widespread.
00:34:29.22 We saw movement-related activity all across the dorsal cortex, even in primary visual cortex.
00:34:35.28 And for anyone who is a skeptic, the dominance of movement in visual cortex
00:34:39.12 has also been observed by other labs as well, Marius Pachitariu's and Carsen Stringer's specifically.
00:34:45.02 Secondly, is the activity only driven by instructed movements?
00:34:50.17 The ones that we taught the animal to do, like grabbing a handle.
00:34:54.13 Both instructed and spontaneous movements matter.
00:34:56.26 So, the instructed ones matter for us, but apparently for the animals,
00:35:00.13 spontaneous movements also matter, and they're clearly a priority for the brain.
00:35:04.26 Task aligned or task independent?
00:35:06.18 Again, both really matter.
00:35:08.10 So, some of the movements are aligned to events in the trial,
00:35:12.05 and other ones happen at idiosyncratic moments in the trial,
00:35:14.27 and are really more like fidgets, perhaps of the sort that we see humans making a lot as well.
00:35:19.26 So, you might wonder at this point, well, what's really going on?
00:35:23.22 So, this analysis demonstrates that movement-related activity really dominates the cortex
00:35:29.08 even during decision-making.
00:35:30.25 But we know that can't be the whole story, because I showed you at the beginning
00:35:35.26 that the behavior of novices and experts is really different.
00:35:39.06 So, if that's true, there must be places in the brain, or ways of measuring neural activity,
00:35:43.28 that uncover a difference between novice decision-makers, or guessers, and true decision-makers.
00:35:49.22 And we are still trying to uncover that.
00:35:51.26 We think if we use a different kind of analysis in area ALM that we do see a signature
00:35:57.10 of true decision-making.
00:35:59.09 But the main message that I hope you'll remember is that this needs to be very carefully
00:36:04.20 teased apart from signals that are related to the animal's movements.
00:36:09.08 The decision-making signals are there.
00:36:11.10 But they're hard to see.
00:36:12.12 And they interact very intimately with signals related to movements.
00:36:18.00 So, these are the people in my lab who are part of a team that's worked together to do this work.
00:36:24.28 We collaborate a lot within our group, and also with groups outside.
00:36:28.15 And we're also really grateful to our funders for providing the resources that make it possible
00:36:33.17 for us to do that work.
00:36:34.26 So, thank you very much.