-<WRAP centeralign> 
-<typo ff:'Georgia'; fs:36px; fc:purple; fw: bold; fv:small-caps; ls:1px; lh:1.1> 
-Lab 7: fMRI Part 1: Signal and Noise </typo> 
-</WRAP> 
- 
-====== Information, Preparation, Resources, Etc. ====== 
- 
- 
-Today we will begin a multi-part series of lab exercises devoted to the analysis of functional MRI data. In our next session, we will use the FSL (__F__MRIB __S__oftware __L__ibrary) package created by the [[http://www.fmrib.ox.ac.uk/fsl/|Oxford group]] to analyze our data in the manner typical of the field. However, FSL hides most of the analysis steps from direct observation, and describes those steps in complex statistical terms. So today I want you to analyze your data 'by hand' using Matlab commands that are (relatively) easy to understand and to follow. **I want you to interact closely with your data so that you have a good understanding of the signal and noise characteristics of fMRI data**. All analysis, no matter how sophisticated, starts with the raw signal. That is where we begin today! 
-===== Assigned Readings / Videos: ===== 
- 
-  * {{ :psyc410:documents:06.0_pp_34_52_preprocessing_fmri_data.pdf | Preprocessing fMRI data }} 
- 
-/* 
-<WRAP centeralign>//__COMPLETE READING PRIOR TO March XX__//</WRAP> 
- 
-  * {{ :psyc410:documents:essentials_of_fmri.pdf | Wager & Lindquist, 2011.}} Essentials of functional magnetic resonance imaging. 
-*/ 
-===== Goals for this lab: ===== 
- 
-  * Explore different fMRI data sets to observe how a simple task alters voxel intensity in a fMRI time series. 
-    * Visually inspect a data set to identify activated voxels. 
-    * Use a simple correlational approach to identify activated voxels using one or two templates. 
- 
- 
-===== Software introduced in this lab ===== 
- 
-  * n/a 
-===== Laboratory Report ===== 
-<WRAP center round important 70%> 
-<WRAP centeralign><wrap em>Lab Report #7 is due on Apr 1<sup>st</sup> @ 1:10 pm. </wrap></WRAP> 
-  * Throughout this (and all) lab exercise pages you will find instructions for your lab reports within these boxes. 
- 
-  * For many of the figures you are asked to create for this lab report you are __not__ required to create high quality figures. A simple screen shot will suffice. However, it should be cropped so that it only depicts the relevant information. In each lab report box I indicate whether it requires high-quality figures or simple screenshots. 
-</WRAP> 
- 
-===== Housekeeping ===== 
-<WRAP center round todo 70%> 
- 
-(remember to press RETURN after pasting commands into the ''Terminal'' window) 
- 
-**1.** Create your output directory for tonight 
-<code bash> 
-rm -r /Users/hnl/Desktop/output/lab06 
-mkdir -p /Users/hnl/Desktop/output/lab06 
-</code> 
- 
-**2.** Download the analysis scripts for tonight's lab 
-  * Click on [[https://www.dropbox.com/s/fw2jban9prveila/fmri_lab01.zip?dl=0|this link]] 
-  * Download the zip file (click the download arrow, then select ''Direct Download'') 
-  * If the zip file goes to your Downloads directory, then move it to your Desktop 
- 
-**3.** Unzip the scripts and move them to your output directory 
-<code bash> 
-cd ~/Desktop 
-unzip fmri_lab01.zip 
-rm -rf __MACOSX 
-</code>   
- 
-**4. ** Put fMRI scripts in relevant directories 
-<code bash> 
-mv ~/Desktop/fmri_lab01/fmri* ~/Desktop/scripts/ 
-rm ~/Desktop/scripts/*2019* 
-mv ~/Desktop/fmri_lab01/NIfTI_20140122 ~/Documents/MATLAB 
-rm -rf ~/Desktop/fmri_lab01* 
-</code> 
- 
-**5. ** Add path to your MATLAB startup file 
-  * Open ''Matlab'' 
-  * Open your startup file for editing 
-<code matlab>edit startup.m</code> 
-  * Copy and paste the following text into the file. 
-<code matlab> 
-addpath /Users/hnl/Documents/MATLAB/NIfTI_20140122/ 
-addpath /Users/hnl/Desktop/scripts 
-</code> 
-  * Save and close the file ''startup.m'' 
-  * Run the file by typing the following in the command window 
-<code matlab>startup</code> 
- 
-</WRAP> 
- 
- 
- 
- 
- 
- 
- 
- 
- 
- 
- 
- 
- 
- 
- 
- 
- 
-===== Data used in this lab ===== 
- 
-We will use different datasets throughout the lab. They will be described in detail in the relevant parts of the lab. What they all have in common is that they are what we would call **Block designs**. 
- 
-A **"block design"** is an fMRI experiment in which a stimulus (or task) is presented for several seconds, often followed by several seconds of rest. Let's say we were running a block design experiment in which we wanted to investigate color sensitivity in the visual system. We would present blocks of color photographs and blocks of greyscale photographs. Importantly, within a given block the participant would probably be shown several individual stimuli, but they would all be of the same category. So within a 10 second greyscale block the participant might see 10 different greyscale pictures, each displayed for 1 sec. 
- 
- 
- 
- 
- 
- 
- 
- 
- 
- 
- 
- 
- 
- 
- 
- 
-====== Part 1: Visual Inspection of High-Resolution Data ====== 
- 
-We will start here by looking at a visual evoked response, followed by a motor task. Our goal for each task will be to find a voxel in the brain that is activated by a given task. 
- 
-<WRAP center round info 100%> 
-Note that this part is looking at **high-resolution fMRI data** (2mm x 2mm x 2mm), which is different from the data that you will be looking at in Part 2. In this section our data has sufficient anatomical detail (not great, but sufficient) for us to make out the different sulci and gyri. This higher resolution is thanks to recent advances in parallel imaging. Some of our datasets later on will not have this level of detail due to their acquisition with more conventional fMRI acquisition sequences. 
-</WRAP> 
- 
-===== Data for Part 1 ===== 
- 
-Data for Part 1 of the lab can be found in ''/Users/hnl/Desktop/input/fmri/high_res/''. This is high-resolution functional data (2mm x 2mm x 2mm) that will allow us to roughly identify relevant anatomy. Data were collected for two tasks: 
-  * visual response task 
-  * motor task 
-  /* * and a biological motion perception task. */  
-   
-==== Task Design - Part 1 ==== 
-Each of the tasks used a **block design** (see [[#Data used in this lab|above]]) that had the following common construction: 
-  * 115 volumes with TR = 2 sec 
-  * Each run of each task had the following construction 
-    * ''Task A'' => ''Task B'' => ''Task A'' ... 
-    * Each ''Task'' block lasted 12 seconds. 
-    * There were 7 blocks for each task for a total of 14 blocks 
- 
-=== Visual Response Task === 
-  * ''Task A'' checkerboard images presented to the **LEFT** hemifield.  
-  * ''Task B'' checkerboard images presented to the **RIGHT** hemifield.  
- 
-=== Motor Task === 
-  * ''Task A'' participants were asked to repeatedly squeeze their **LEFT** hand. 
-  * ''Task B'' participants were asked to repeatedly squeeze their **RIGHT** hand. 
-  
-//Note:// These tasks are essentially the same as those used in the three original 1992 fMRI papers. 
-===== Visual Task ===== 
- 
- 
-==== Read in and Display data ==== 
- 
-**1.** Open ''Terminal'' and navigate to the data directory 
- 
-<code bash> cd ~/Desktop/input/fmri/high_res</code> 
- 
-**2.** Open [[http://www.andrewengell.com/wiki/doku.php?id=psyc410_s2x:brain_extraction_segmentation#part_1aviewing_mri_images_in_fsleyes|FSLeyes]] 
- 
-<code bash> 
-fsleyes 
-</code> 
- 
-**3.** Load the data file: 
-  * ''File'' -> ''Add from file'' 
-  * select ''tb9611_checkers_run01.nii.gz'' 
- 
-<WRAP center round info 100%> 
-We know right away that this is a T2 (actually, T2*) weighted image because the white matter is grey and the grey matter is white. As noted [[#Data for Part 1|above]] it is an EPI image at a resolution of 2 mm<sup>3</sup>, which is a bit higher than the 3 mm<sup>3</sup> we'd usually see. But this gives us sufficient resolution to identify anatomy. 
- 
-The dataset is 104 x 100 x 60 x 115. This means that we have 104 voxels in the x-dimension, 100 voxels in the y-dimension, 60 voxels in the z-direction. The fourth dimension is **<fc #ff0000>time</fc>.** This makes these data different from the 3D MRI datasets you've viewed earlier in the semester. The fact that the fourth dimension is 115 tells us that 115 volumes were acquired (this means the scan was 230 seconds long, or 3 minutes and 50 seconds). Each volume includes an entire brain (all 60 slices). 
- 
-<WRAP center round tip 85%> 
-You can jump to any of the locations in this 4D matrix using the ''Voxel location'' boxes at the bottom of the window. 
- 
-**Note:** ''FSLeyes'' starts counting at ''0'', not ''1''. So if you wanted to see the volume ''60'', you would set that value to ''59''. 
- 
-Many computer languages count from zero rather than one. That is, if you had three items in a list, you would count them as "zero", "one", and "two". This can be especially confusing when you are interacting with one language that counts from one (e.g., MATLAB) and another that counts from zero (e.g., C++; the language FSL is written in). But it is very important to remember this because being off by "one" can cause big headaches and Type II errors.   
- 
-**Also note:** FSLeyes displays the brains in “radiological” convention. This means that the left hemisphere is displayed on the right of the screen, and vice-versa. 
-</WRAP> 
-</WRAP> 
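-<WRAP center round tip 100%>
-If you'd like to confirm these dimensions yourself, you can do so in ''Matlab'' using the NIfTI toolbox you installed during [[#Housekeeping|Housekeeping]]. A minimal sketch (assuming the toolbox's ''load_untouch_nii'' function and the data path used tonight):
-<code matlab>
-% load the 4D functional data (x, y, z, time)
-nii = load_untouch_nii('/Users/hnl/Desktop/input/fmri/high_res/tb9611_checkers_run01.nii.gz');
-size(nii.img)          % expect 104 x 100 x 60 x 115
-size(nii.img, 4) * 2   % 115 volumes at TR = 2 s --> 230 s of scanning
-</code>
-Note that ''Matlab'' counts from 1: volume ''60'' is ''nii.img(:,:,:,60)'' in ''Matlab'', but volume ''59'' in ''FSLeyes''.
-</WRAP>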
- 
- 
-=== Selecting time points === 
- 
-By default, ''FSLeyes'' displays the very first of the full-brain acquisitions (aka **volumes**). In the **Location** box at the bottom of the screen you will see that the ''Volume'' is ''0''. 
- 
-**4. ** To appreciate that we now have a time dimension, let's watch a movie of the data. 
-  * Click on the ''settings'' button in the upper left corner (see image below) 
-  * Slide the ''Movie update rate'' slider so that it's about three quarters of the way to the right 
-    * This controls the speed at which the movie will play 
-    * Close the **View settings** window 
-  * Click on the ''Movie mode'' button (see image below) and the movie will begin to play  
-    * To have a better view, you might want to turn off the crosshairs by clicking on the ''Crosshair'' button (see image below) 
- 
-{{ :psyc410:images:fsleyes_moviemode.png?600 | }} 
-  
- 
-<WRAP center round tip 100%> 
-Can you observe the motion of the subject's head in this movie mode? Look at the large blood vessels in the neck, and at the eye balls - can you see moment-to-moment variation? You should appreciate that the voxels are changing intensity over time. Some of this variation is signal of interest (i.e., related to task), and some is noise (of various sources). 
-</WRAP> 
- 
-**5. ** Click on a voxel somewhere in the [[https://en.wikipedia.org/wiki/Occipital_lobe|medial occipital region]]. 
- 
-Knowing what you know about the [[#Task Design - Part 1|task parameters]], can you see the task-related variation in the intensity changes in the movie? Look carefully in visual cortex. Do you see the activation signal? 
- 
-You might notice some pulsation, but you will probably find it //very// difficult to pinpoint regions with task-related activity. This should give us a hint that the signal-to-noise ratio is not very favorable and that, given the massive number of voxels, **we cannot easily detect a signal with the naked eye**. 
- 
-**6. ** Stop movie mode by clicking on the ''Movie mode'' button. 
- 
-==== Find a Highly Responsive Voxel ==== 
- 
-A moment ago you viewed a movie of the signal in each voxel (brightness) changing over time. Here, we will instead look at the **time-series** from specific voxels. A time-series for this study would simply be a list of 115 values. At each voxel, we'd have a signal intensity for each of the 115 volumes. We actually have 624,000 separate time-series; one for each voxel (104 x 100 x 60). But fret not, we'll only be looking at one at a time. 
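-In ''Matlab'' terms, those 624,000 time-series are just a reshaped matrix. A minimal sketch (assuming ''data'' holds the 104 x 100 x 60 x 115 array loaded from this file):
-<code matlab>
-% reshape the 4D data into a (voxels x time) matrix
-nvox = 104 * 100 * 60;           % 624,000 voxels
-ts = reshape(data, nvox, 115);   % one 115-point time-series per row
-one_voxel = ts(1, :);            % e.g., the time-series of the first voxel
-</code>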
- 
- 
-If we found a responsive voxel, one that reflects a response to the task, what do you think its time-series would look like? Remember that each hemifield is stimulated in 12-second blocks. What part of the brain do you expect to respond to this stimulation? What do you expect that response to look like? 
- 
-**7. ** Let's look at the time-series data from individual voxels 
-  * ''View'' -> ''Time series'' 
-    * You'll be doing this a lot tonight, so you might want to remember the keyboard shortcut to open a timeseries: ''command'' + ''3'' 
- 
-You should now have a new window at the bottom of the screen that displays the voxel time-series. The y-axis represents the intensity of the BOLD signal and the x-axis represents the different time-points.  
- 
-<WRAP center round info 65%> 
-The change in signal intensity across time depicted as a time waveform is the very same signal variation that was depicted as changes in gray scale intensity when we viewed the movie mode (but for the selected voxel only). 
-</WRAP> 
- 
- 
-**8.** Now click around in the visual cortex until you find a really nice looking time-series. **You might consider focusing your search in and around the [[https://en.wikipedia.org/wiki/Calcarine_sulcus|calcarine sulcus]], which is where V1 and V2 are located.** Find a voxel with a highly responsive time-series in the right hemisphere. In other words, you should see the waveform go up and down with the timing of the task design. Check with me or a classmate to be sure you've found a good one. 
- 
-<WRAP center round tip 100%> 
-Once you're in the right "neighborhood", you might find it easier to use your keyboard arrow keys to move around one voxel at a time, rather than jumping from voxel to voxel using your mouse. Spend some time looking. It'll probably be quite challenging to find a good one, but it will be gratifying when you do!  
- 
-However, don't drive yourself nuts trying to find the //very best// voxel activated by the task. Your goal now is to get familiar with the data, viewing time-series, and finding a voxel that does a good (even if not perfect) job of showing a response. 
- 
-If you've spent **__at least__ several minutes** looking and have been unable to find a responsive voxel, then you can cheat. If you highlight the text below it will reveal good voxel coordinates. But I'll be sad if all of your lab reports show these same voxels. You don't want to make me sad...do you? 
- 
-Highlight below if you're a cheating cheater: \\ 
-<fc #ffff00>GOOD VISUAL TASK VOXELS: </fc> \\ 
-<fc #ffff00>Right hemisphere: 46, 22, 30 </fc> \\ 
-<fc #ffff00>Left hemisphere: 57, 19, 25</fc> 
-</WRAP> 
- 
- 
-===== LAB REPORT Part 1 #1 ===== 
-<WRAP center round important 100%> 
-<WRAP centeralign> 
-<WRAP centeralign> 
-<typo fs:x-large; fc:purple; fw:bold; text-shadow: 2px 2px 2px #ffffff> 
-LAB REPORT Part 1 #1 
-</typo> 
-</WRAP></WRAP> 
- 
-  * Create a figure depicting the time-series from a responsive voxel in the right hemisphere //and// a responsive voxel in the left hemisphere. 
-    * For these screenshots you do __not__ need to create "nice" figures.  
-  * Compare the two time-series. 
-    * Are they the same/different? 
-    * Why? 
-</WRAP> 
-===== Motor Task ===== 
- 
-In this task, participants were asked to squeeze their hands at particular times. Thus, we should find particular regions in the brain (e.g., motor regions) that will show increases in brain activity when the participant is squeezing their hand. 
- 
-**9.** Load the data file: 
-  * ''File'' -> ''Add from file'' 
-  * Select ''tb9611_handsqueeze_run01.nii.gz'' 
-  * Make the checkerboard data invisible by clicking on the blue eye in the **Overlay list**. Alternatively, you can highlight it and click the minus button to remove it altogether. 
- 
-==== Finding the Motor Cortex ==== 
- 
-You can focus your search for a good-looking time-series to the [[https://en.wikipedia.org/wiki/Motor_cortex|motor cortex]]. 
- 
-I'm afraid that finding the hand area of motor cortex will be a bit harder than finding the early visual areas in the previous part of the lab. Below are some MRI images indicating the location of this region (click to embiggify). Note: most of these images are 1 mm<sup>3</sup> T1-weighted images, whereas your data are 8 mm<sup>3</sup> T2*-weighted images.  
- 
-{{:psyc410:images:yousry1.png?&200 | Yousry and colleagues (1997)}} 
-{{ :psyc410:images:yousry2.png?&200|Yousry and colleagues (1997)}} 
-{{ :psyc410:images:yousry3.png?&400 |Yousry and colleagues (1997)}} 
- 
-Another hint to finding the hand area is to first find the superior frontal sulcus, which runs anterior-posterior, indicated by the arrow heads in the image below. The central sulcus with the omega shape (knob) should then be behind it. The star in this image shows the hand area in the left hemisphere. The omega shape is clearly seen in the mirror location of the right hemisphere. 
- 
-{{ :psyc410:images:superior_frontal_sulucs_hand_area.jpg?&350 |}} 
- 
-Even after finding the correct brain region, you might have some trouble finding a good voxel. At this point, I would suggest you use the keyboard arrow keys to move the crosshair by one voxel in a systematic way. 
- 
-<WRAP center round tip 100%> 
-You might remember the motor cortex from the DTI lab. The [[https://en.wikipedia.org/wiki/Corticospinal_tract|cortico-spinal tract]] originates in the motor cortex. 
-</WRAP> 
- 
-Here's a slice that includes the hand-area...can you spot the motor cortex hand region? 
- 
-{{ :psyc410:images:hand_area.png?nolink&400 |}} 
- 
- 
-<WRAP center round tip 100%> 
- 
-If you're unable to find a good voxel despite making an honest __**good effort**__, and if you're still a cheating cheater.... 
- 
-Highlight here if you think you'll be able to sleep at night: \\ 
-<fc #ffff00>GOOD MOTOR TASK VOXELS: </fc> \\ 
-<fc #ffff00>Right motor cortex: 39, 46, 42 </fc> \\ 
-<fc #ffff00>Left motor cortex: 69, 46, 49</fc> 
-</WRAP> 
- 
- 
- 
-===== LAB REPORT Part 1 #2 ===== 
-<WRAP center round important 100%> 
-<WRAP centeralign> 
-<WRAP centeralign> 
-<typo fs:x-large; fc:purple; fw:bold; text-shadow: 2px 2px 2px #ffffff> 
-LAB REPORT Part 1 #2 
-</typo> 
-</WRAP></WRAP> 
- 
-  * Create a figure depicting the time-series from a responsive voxel in the right hemisphere //and// a responsive voxel in the left hemisphere. 
-    * For these screenshots you do __not__ need to create "nice" figures.  
-  * Compare the two time-series. 
-    * Are they the same/different? 
-    * Why? 
- 
-</WRAP> 
- 
-/* 
-===== Biological Motion Task (Optional) ===== 
- 
-In this task, participants saw dots randomly moving on the screen. At certain points in time, the dots displayed coherent motion in a particular direction. 
- 
-**10.** Load the data file: 
-  * ''File'' -> ''Add overlay from file'' 
-  * select ''tb9611_dotmotion_run01.nii.gz'' 
-  * Make the checkerboard data invisible by clicking on the blue eye in the **Overlay list** 
- 
-You might consider looking at early visual areas or area MT, which is sensitive to motion. See the image below for its location shown in green: 
- 
-{{ :psyc410:images:motion.014.jpg?500 |http://www.cns.nyu.edu/~david/courses/perception/lecturenotes/motion/motion-slides/motion.014.jpg }} 
-*/ 
- 
- 
- 
-====== Part 2: Visual inspection of the data ====== 
- 
-Here you will try to find task-activated voxels in the functional MRI data assigned to you. 
- 
-===== Data for Part 2 and beyond ===== 
- 
-Over the course of the next several exercises, you will work with the data from the same localizer experiment. 
- 
-<WRAP center round info 80%> 
-**//WHAT IS A LOCALIZER?//** 
- 
-A “localizer” task is an fMRI paradigm that has been designed to reliably activate a particular functional region. For instance, a face localizer is designed to reliably activate the regions of the brain involved in face perception. A language localizer would be designed to reliably activate language areas of the brain. In lecture, we will discuss how these localizers can be used to ask interesting questions. But for the purposes of the lab, they are used because they will yield strong activations. 
-</WRAP> 
- 
-==== Data Acquisition Parameters ==== 
- 
-^Parameter^Value^ 
-|Field Strength|3.0 T| 
-|Head  Coil|12-channel| 
-|Sequence|Echo Planar Imaging| 
-|Matrix|64 x 64| 
-|Slices|37| 
-|Field of View|22.4 cm| 
-|Voxel size (x,y,z)|3.5 mm 3.5 mm 3.5 mm| 
-|Orientation|Axial| 
-|TR|2000 ms| 
-|TE|25 ms| 
-|Flip angle|90°| 
-|Slice order|1,3,5...,37,2,4,6,...36| 
- 
-==== Data File Names and Locations ==== 
-  * The data are located in this directory: 
-''/Users/hnl/Desktop/input/fmri/loc/data/nifti'' 
-  * There are 17 subdirectories within this ''nifti'' subdirectory, each containing the data for one subject 
-''2545 2552 2553 2554 2585 2731 2766 2767 2814 3850 3851 3855 3866 5743 5744 5769 5770'' 
-  * Within each of these 17 subdirectories are data for as many as four localizer tasks. We will only be using the first two: a face localizer (''face'') and a motor localizer (''motor'') 
-    * **the //face// task: viewing pictures of scenes (houses) vs. pictures of faces** 
-    * **the //motor// task: left vs. right hand movement** 
- 
-/*  
-    * the //bio-motion// task: biological motion (point-light walkers) vs. non-biological motion 
-    * the //language// task: words vs. non-pronounceable nonwords 
-*/   
- 
-<WRAP center round info 65%> 
-A **run** refers to a continuous period of data acquisition. Let's say you want to acquire data for a study that takes a total of 30 minutes. In order to give your participant breaks, you might break the 30 minutes into six 5-minute runs. 
-</WRAP> 
- 
-  * For each of these localizer tasks and subjects, there were 2 or 3 runs. For example, subject ''2545'' has three ''face'' runs, each saved to a separate file: 
-    * ''2545_face_run1.nii.gz'' 
-    * ''2545_face_run2.nii.gz'' 
-    * ''2545_face_run3.nii.gz'' 
-      * //Note:// All subjects have only two ''motor'' runs. Most subjects have three runs of the remaining conditions. 
- 
-=== Your Assigned Subject === 
- 
-^ Name     ^ Task   ^ Subject ID  ^ 
-| Ronan    | Motor  | 2552        | 
-| Stuart   | Motor  | 2553        | 
-| Mallory  | Motor  | 2554        | 
-| Norah    | Motor  | 2767        | 
-| Blythe   | Motor  | 2814        | 
-| Hollen   | Face   | 2552        | 
-| Vaso     | Face   | 2553        | 
-| Angelia  | Face   | 2554        | 
-| Benji    | Face   | 2767        | 
-| Paula    | Face   | 2814        | 
-| Natalie  | Face   | 2814        | 
- 
- 
- 
-==== Task Design - Part 2 ==== 
-Each of the //localizer// tasks used a **block design** (see info box above) that had the following common construction: 
-  * 150 volumes with TR = 2 sec 
-    * 153 volumes were initially collected, but the first 3 volumes were deleted to allow the spins to reach a steady-state magnetization. 
-  * Each run of each task had the following construction 
-    * ''Task A'' => ''Rest'' => ''Task B'' => ''Rest'' => ''Task A'' ... 
-    * Each ''Task'' and ''Rest'' block lasted 12 seconds. 
-    * In each run, there were  
-      * 6 ''Task A'' blocks 
-      * 6 ''Task B'' blocks 
-      * 12 intervening ''Rest'' blocks. 
-    * ''Task A'' started at the 6-second mark of each run with the first brain volume occurring at time zero. 
- 
- 
-/* 
-**NOTE:** This is different for the ''biological motion'' task 
-*/ 
- 
- 
- 
-=== Motor Task === 
-  * Subjects were shown a visual display consisting of one of the following three symbols 
-    * ''<<<<'' 
-    * ''==='' 
-    * ''>>>>'' 
-  * ''<<<<'' was ''Task A'' and indicated that the subject should make rapid alternating button press responses with the index and middle finger of their <wrap em>left hand</wrap> 
-  * ''==='' indicated that the subject should rest and make no button presses 
-  * ''>>>>'' was ''Task B'' and indicated that the subject should make the same alternating press responses with the fingers of their <wrap em>right hand</wrap> 
- 
-=== Face Task === 
-  * ''Task A'' consisted of a series of eight pictures of scenes.  
-  * ''Task B'' consisted of a series of eight pictures of faces. 
-  * Rest consisted of a fixation cross on a gray background. 
-  * For both ''Face'' and ''Scene'' blocks, subjects covertly counted the number of immediate picture repetitions - a so-called "one-back" task. 
-    * Subjects reported their total count of repetitions at the conclusion of the run. 
-    * The purpose of this task was simply to ensure that participants continued to pay attention to the pictures. 
- 
-/* 
-=== Biological Motion Task === 
- 
-  * ''Task A'' consisted a series of six point-light upright figures engaged in short biological motions (e.g., “jumping jacks”). 
-  * ''Task B'' consisted of a series of six inverted point-light figures engaged in semi-random motions (e.g., upside-down jumping jacks with some additional randomizations to dispel the illusion of biological motion). 
-  * ''Rest'' consisted of a fixation cross on a gray background 
-  * For both biological and non-biological motion blocks, subjects covertly counted the number of times the point-light figure did not move - i.e., a static light display lasting 2 secs. Subjects reported their total count of repetitions at the conclusion of the run. 
-  * ''Task A'' started at the 12-sec mark of each run with the first brain volume occurring at time zero. 
- 
-*/ 
- 
-<WRAP center round info 100%> 
-Let's review some things about the task design that you should know: 
-  * You will have alternating **12 seconds of task** and **12 seconds of rest** (except for the first and last rest periods - see below) 
-  * For every 2 seconds, we get one time-point or volume 
-  * This means you get alternating **6 time-points/volumes of task** and **6 time-points/volumes of rest** 
-  * For every task, the first task (''Task A'') started **after 6 seconds** or **at time-point 4** (TP1=0s, TP2=2s, TP3=4s, TP4=6s). 
- 
-<WRAP center round alert 65%> 
-This is similar, but not identical, to the task design we used in Part 1. It is a bit more complicated, as we now have rest blocks and there is a 6-second delay before the start of the first task. Take a minute or two to make sure you feel confident that you understand this design. 
-</WRAP> 
- 
-/* 
-  * For the biological motion task, the first task (Task A) started **after 12 seconds** or **at time-point 7**. 
-*/ 
-</WRAP> 
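-A quick way to translate task times into ''Matlab'' time-points (a sketch; the helper below is hypothetical and not part of tonight's scripts):
-<code matlab>
-TR = 2;                        % one volume acquired every 2 seconds
-secs_to_tp = @(t) t/TR + 1;    % seconds -> 1-based time-point/volume
-secs_to_tp(0)                  % scan start: time-point 1
-secs_to_tp(6)                  % first Task A onset (6 s): time-point 4
-</code>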
- 
- 
- 
- 
- 
- 
-===== Loading your own assigned data ===== 
- 
-<WRAP center round info 90%> 
-  * Whenever you see ''SUBJ'' replace that with the identification number of [[#Task Design - Part 2|your subject]]. 
-  * Whenever you see ''TASK'' replace that with the name of the task [[#Task Design - Part 2|you've been assigned]]. 
-</WRAP> 
- 
-**1. ** Use the steps above (see [[#visual_task|here]] for a refresher) to load your subject into ''FSLeyes'' 
-  * You might want to close ''FSLeyes'' and then reopen it so you have a clean slate. 
-  * You can look at run1, run2, or run3 (if it exists) for these first analyses. 
- 
-**2. **Knowing what you know about the [[#Task Design - Part 2|task and its temporal structure]] (i.e., when blocks come on and off) - try to locate individual voxels whose time courses vary with the task timing. Think about where a motor or face task should activate the brain. 
- 
-<WRAP center round tip 70%> 
-You've already had some experience with finding the [[#motor_task1|hand area]]. Below is an image that shows you the location of the "fusiform face area" along the fusiform gyrus. 
- 
-{{ :psyc410:images:ffa_location.png?nolink&0x200 | }} 
-</WRAP> 
- 
-I can assure you that there are voxels that clearly show task variation, and you may be amazed when you find them. However, <wrap em>**don't spend too much time looking**</wrap> because we are also going to use a simple statistical procedure in the next part to find these voxels. 
- 
-**3. ** If you find a voxel or voxels that appear to be activated by the task, //call me over and show it to me//. 
- 
-===== LAB REPORT Part 2 ===== 
-<WRAP center round important 100%> 
-<WRAP centeralign> 
-<WRAP centeralign> 
-<typo fs:x-large; fc:purple; fw:bold; text-shadow: 2px 2px 2px #ffffff> 
-LAB REPORT Part 2 
-</typo> 
-</WRAP></WRAP> 
- 
-  * Nothing needs to be submitted for this part of the lab report. 
- 
-</WRAP> 
- 
- 
- 
-====== Part 3: Finding the activation using simple statistics ====== 
- 
-===== Statistical Mapping: Getting Started ===== 
- 
-You should now have a pretty good idea that it is //not// easy to locate a functional MRI activation by simply eye-balling the data. The change in the raw MR intensity for active voxels (the 'signal') is not much greater than the change in raw MR intensity for unactivated voxels (the 'noise'). That is, there is poor signal-to-noise for fMRI activation.  
- 
-<WRAP center round info 100%> 
- 
-One way of finding signal in noise is to calculate how well our **expected signal** ''Y'' (i.e., our model) correlates with the **actual signal** ''X''. This is calculated in the following steps: 
- 
-**//Cross-Product//**: For each time-point, subtract each signal's mean from that signal's value and multiply the two deviations; repeat this for all time-points. 
-<WRAP centeralign>Cross-product = (X<sub>i</sub> - X<sub>mean</sub>) * (Y<sub>i</sub> - Y<sub>mean</sub>) </WRAP> 
- 
-**//Covariance//**: To calculate the covariance of our //recorded// signal and our //expected// signal, we simply take the sum of the cross-products and divide by n-1 (where 'n' is the number of time-points). 
- 
-**//Correlation//**: To get the correlation coefficient, we divide the covariance by the product of the two signals' standard deviations (equivalently, we standardize each waveform by dividing it by its standard deviation). 
- 
-We are now going to search for the activation by correlating our expected signal with the raw signal. This is done on a voxel by voxel basis. In other words, we run this correlation analysis for each voxel independently. There are about 152,000 voxels, so we run 152,000 independent correlations. In practice, we'll analyze around 25,000 voxels because we'll exclude voxels that are outside of the brain. 
- 
-<WRAP center round tip 100%> 
-Do you know how I know there are 152,000 voxels? I promise I did not count them. //Hint//: have a look back [[#Data Acquisition Parameters|here]]. 
-</WRAP> 
- 
-</WRAP> 
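-The three steps above can be written out in a few lines of ''Matlab'' (a sketch, assuming ''x'' is one voxel's 150-point time-series and ''y'' is the 150-point expected signal):
-<code matlab>
-n  = numel(x);
-cp = (x - mean(x)) .* (y - mean(y));   % cross-products
-cv = sum(cp) / (n - 1);                % covariance
-r  = cv / (std(x) * std(y));           % correlation coefficient
-% sanity check: r should match Matlab's built-in corr(x(:), y(:))
-</code>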
- 
-We will use a MATLAB script, ''fmri_lab_script2_2025.m'', to help us find voxels at which the signal correlates with our task. You should have [[#Housekeeping|already copied the scripts]] for today's lab to your ''scripts'' directory. 
- 
-**1. ** Edit this script using the following command in ''MATLAB''. 
- 
-<code matlab> 
-  edit '/Users/hnl/Desktop/scripts/fmri_lab_script2_2025.m' 
-</code> 
- 
-<WRAP center round tip 90%> 
-**Look carefully at the script.** It is annotated so that you can easily see what it is doing (lines of documentation are in green text preceded by a ''%''). I want to demystify data analysis today - so please take the time to look at the code. You won't understand everything, but every time you try to understand code, you will chip away and learn a little more. 
-</WRAP> 
- 
-===== Modeling our expected brain signal ===== 
- 
-What do we expect as our **//expected signal//**? Well, in the absence of any better ideas, why don't we use a waveform that matches the [[#Task Design - Part 2|task timing]]? 
- 
-Inside your script is a variable called **''template''**. It is a 1-D matrix (a vector) that contains 150 zeros - one entry for each volume of data within the run. Remember, one volume was acquired every 2s and we have a total of 150 of these volumes in each run. 
- 
-**2. ** Create an expected waveform for your task using 1's and 0's. Put a ''0'' when you expect there to be no activation in a volume, and put a ''1'' when you expect there to be activation in a volume. Note that ''template'' takes up a few lines of the script. The **'', ...''** at the end of each line means that the line is continued to the next line. Thus, ''template'' is a single vector with 150 elements. 
- 
-You may also note that I put the lines together in such a way to facilitate your task. The first and last line of ''0''s represent **rest** while there are six main lines in between that represent the six repetitions of the **Task A** - **Rest** - **Task B** - **Rest** structure. 
- 
-//To complete the template below, think about//  
-  - the duration each block of ''Task A'', each block of ''Task B'', and each block of ''Rest'' 
-  - during which of those block periods we might expect brain activity? 
-    - For now, let's just discriminate between **Task** and **Rest**. That is, don't worry about treating **Task A** and **Task B** differently; we'll do that later in the lab. 
- 
-The initial template should look like this. Fill in the ''1''s where appropriate. 
-<code matlab> 
-template = [0 0 0,... % Rest 
-    0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0,... % Task A - Rest - Task B - Rest 
-    0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0,... % Task A - Rest - Task B - Rest 
-    0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0,... % Task A - Rest - Task B - Rest 
-    0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0,... % Task A - Rest - Task B - Rest 
-    0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0,... % Task A - Rest - Task B - Rest 
-    0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0,... % Task A - Rest - Task B 
-    0 0 0];   % Rest 
-</code> 
- 
-<WRAP center round tip 100%> 
-You might want to fill in the template for one line and if you think it is the same for some of the other lines then just copy and paste. **As you edit the template, remember that the number of elements in the template must always remain at 150.** 
-</WRAP> 
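If typing 150 values by hand gets tedious, a boxcar template can also be built programmatically by repeating one task-rest cycle. The sketch below uses Python/NumPy with **made-up block lengths** - the real durations come from your own task timing, so treat the numbers here purely as placeholders:

```python
import numpy as np

# Hypothetical block lengths IN VOLUMES (TR = 2 s) -- placeholders only;
# substitute the real durations from your own task timing.
rest, task = 3, 9

cycle = np.concatenate([np.ones(task), np.zeros(rest),    # Task - Rest
                        np.ones(task), np.zeros(rest)])   # Task - Rest
template = np.concatenate([np.zeros(rest), np.tile(cycle, 6)])

# Trim/pad so the vector is exactly 150 volumes long, as the script requires.
template = template[:150]
template = np.concatenate([template, np.zeros(150 - template.size)])
print(template.size)   # -> 150
```

However you build it, double-check that the vector has exactly 150 elements and contains only 0s and 1s before pasting it into the MATLAB script.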
- 
- 
- 
- 
-/* 
-<WRAP center round alert 90%> 
-**Do NOT run the script immediately after you change the template. Save your changes and read the section below first (you will actually need to run the script through the MATLAB command-line and NOT through the editor).** 
-</WRAP> 
-=== Understanding the script === 
- 
-Look carefully at the script to see what it does. The heart of the script is the following code snippet. 
- 
-  * Extract the time series for a single voxel from the func.data array. (The squeeze function removes singleton dimensions). 
-  * Use the Matlab 'corr' function to compute a correlation between the timeseries (our raw MR signal) with the expected signal (your template). 
-    * The 'corr' function returns a Pearson r and a probability. 
-  * Store these in a 3D volume, which we will display overlaid upon a brain image to see if we can find the active voxels. 
- 
-<code matlab> 
-  timeseries = squeeze(func.data(x,y,z,1:tdim)); %Extract the time series (the t dimension) at coordinates x,y,z 
-  [r,p] = corr(template,timeseries);             %Calculate the correlation between the time series and the template 
-  output_corr.data(x,y,z) = r;                   %Save the Pearson r (the correlation coefficient -1 to +1) 
-  output_prob.data(x,y,z) = 1-p;                 %Save the significance probability 
-</code> 
-*/ 
- 
- 
-===== Statistical Mapping: Running our model ===== 
- 
-**3. ** Run the script by calling the function from the MATLAB command line 
- 
-<code matlab> 
-fmri_lab_script2_2025('SUBJ','TASK',1) 
-</code> 
- 
-<WRAP center round info 70%> 
-  * Note the single quotes around ''SUBJ'' and ''TASK''. 
-    * If your subject is ''9999'' and your task is ''motor'', your command would be: 
-<code> 
-fmri_lab_script2_2025('9999','motor',1) 
-</code> 
- 
-  * The ''1'' tells the program to do the analysis on run1. 
-    * Set this to whatever run you'd like to look at. 
-</WRAP> 
- 
-<WRAP center round alert 60%> Be patient - it takes several seconds to run. As we move through the lab and add to our analysis, these scripts will take 2-3 minutes to run. 
-</WRAP> 
- 
-The script will produce useful output: 
- 
-  * In your MATLAB command window you will see the ''Number of voxels exceeding minimum correlation of 0.30 => '' 
-    * This is a very rough indication of how many voxels were 'activated' (i.e., identified by correlation with your template). 
-  * ''FSLeyes'' will plot the functional data with the correlation results overlaid. 
-    * You'll probably want to press ''option'' + ''r'' to recenter the brain. 
-    * The contrast limits are set to .3 to .8 with a red-yellow color map. This means that voxels in which the //expected// time-series and the //actual// time-series correlated with a coefficient (//r//) of at least .3 are colored in. The more yellow the voxel, the larger the correlation coefficient. 
-    * By default, the cursor will be placed on the voxel that had the strongest correlation with your template. 
- 
-Your display should look something like this: 
- 
-{{ :psyc410:images:fmri2_1.png?800 |}} 
- 
-<WRAP center round tip 90%> 
-If your brains are cut off in the display, press the button to the right of the ''Reset display on all canvases'' button. This button looks like a magnifying glass and is located between the ''crosshairs'' button and the ''zoom'' slider. 
-</WRAP> 
- 
- 
-Let's take a closer look at the relationship between the //expected// (i.e., your template) and the //actual// time-series. 
- 
-**4. ** Plot the time-series 
-  * Make sure to highlight the functional MRI data in the **Overlay list** (this is the file that does **<fc #ff0000>not</fc>** have ''corr'' in the name). 
-  * Press ''command'' + ''3'' 
-  * Change the **Plotting Mode** to ''Normalised'' 
-    * See the red arrow two figures down 
- 
-Now you should see the time-series from the voxel plotted at the bottom of the screen. The y-axis represents the intensity of the BOLD signal and the x-axis represents different time-points: 
- 
-{{ :psyc410:images:fmri2_2.png?800 |}} 
- 
-You can probably see that the signal goes up and down in time with the task. But we can compare this more directly by overlaying your template. 
- 
-{{ :psyc410:images:fmri2_3.png?600 |}} 
- 
-**5. ** Overlay your model. 
-  * Press the button to **Import data series from a text file** 
-    * See the green arrow in the figure above 
-  * Select the text file you just created with your MATLAB script. 
-    * It will be in your data directory and named ''SUBJ_TASK_model1.txt'' 
-  * Click ''Ok'' in the **Scaling factor** pop-up window 
- 
-You should now see your model (the template you created) and the actual time-series, like this: 
- 
-{{ :psyc410:images:fmri2_4.png?800 |}} 
- 
-<WRAP center round help 70%> 
-When you generate the output, ask yourself: 
-  * Does the 'best fitting' voxel's time-series match your template pretty well?  
-  * Are the peaks and troughs closely aligned temporally? 
-  * How many voxels are activated (see the output in the Matlab command window)? 
-  * Are the activated voxels in areas that we might expect? 
-  * Click around to see how well (or poorly) other voxels correlate with your model. 
-</WRAP> 
- 
-===== LAB REPORT Part 3 #1 ===== 
-<WRAP center round important 100%> 
-<WRAP centeralign> 
-<WRAP centeralign> 
-<typo fs:x-large; fc:purple; fw:bold; text-shadow: 2px 2px 2px #ffffff> 
-LAB REPORT Part 3 #1 
-</typo> 
-</WRAP></WRAP> 
- 
-  * Include a screenshot of your time-series with the overlaid template. 
-    * For this screenshot you do __not__ need to create a "nice" figure.  
-  * Report how many voxels had //r// > .30 (remember, this is printed out to your MATLAB command window) 
- 
-</WRAP> 
- 
-Your results probably look pretty good, but not great. Let's think about how we can improve things ... 
- 
-===== Refining your model of the expected brain signal ===== 
- 
-We created a template that faithfully represented the timing of the task, yet our results may still not be optimal. One reason is that there is a ~4-8 sec lag between the onset of the task and the onset of the slow, blood-flow-related (hemodynamic) response. You can account for this //hemodynamic lag// by time-shifting your template waveform. You can edit your expected activation template by adding zeros to the front end, and removing the same number of zeros from the back end. This has the effect of shifting the expected activation template to the right. To shift to the left, remove zeros from the front end and add them to the back end. Remember, however, you must have 150 elements in this vector.  
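One way to implement the shift is a tiny helper that pads zeros at the front and trims the same number from the back, so the length stays the same. This sketch is in Python/NumPy (in MATLAB you make the same edit by hand in the template vector):

```python
import numpy as np

def shift_right(template, n):
    """Delay a template by n volumes: pad n zeros at the front and
    drop n samples from the back, so the length is unchanged."""
    template = np.asarray(template, dtype=float)
    if n > 0:
        return np.concatenate([np.zeros(n), template[:-n]])
    return template.copy()

# With a TR of 2 s, a ~4-6 s hemodynamic lag is roughly 2-3 volumes.
toy = np.array([0, 0, 1, 1, 1, 0, 0, 0])
print(shift_right(toy, 2))   # -> [0. 0. 0. 0. 1. 1. 1. 0.]
```

Shifting by whole volumes is coarse (2 s steps here), but it is usually enough to capture most of the lag.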
- 
-**6. ** Account for the hemodynamic lag by shifting the template to the right or left by adding and removing zeros from the beginning and end of the vector (again, this shifts it in time). 
- 
-<WRAP center round info 80%> 
-Think through how best to do this. You do not want to trial-and-error your way through it. 
-</WRAP> 
- 
-  * Rerun the analysis with the new model (aka template) 
-  * Observe how this changes the number of activated voxels. 
- 
-As you probably (should have) observed, the statistics are VERY sensitive to getting the expected stimulus waveform right and to accounting for the hemodynamic delay. 
- 
-Now examine the results from the template shift that generated the largest number of activated voxels. 
- 
-<WRAP center round help 70%> 
-  * Can you now readily detect the activated voxels? 
-  * Look at the time series for the voxels with high and low correlation values. 
-  * Look throughout the brain and note the regions where activation is obtained. I know this is difficult on a low-resolution image - but can you tell what brain areas are activated? Try to identify these areas. 
-  * Try adjusting the range of your r statistic (the min and max values) to look for additional activations. 
-{{ :psyc410:images:fsleyes_minmax.png?300 |}} 
-</WRAP> 
- 
-<WRAP center round alert 90%> 
-Adjusting your model for the hemodynamic delay is an important step and a **critical thing for you to understand**. Be sure you understand what you did in this step and why you did it before proceeding. 
-</WRAP> 
- 
-===== LAB REPORT Part 3 #2 ===== 
-<WRAP center round important 100%> 
-<WRAP centeralign> 
-<WRAP centeralign> 
-<typo fs:x-large; fc:purple; fw:bold; text-shadow: 2px 2px 2px #ffffff> 
-LAB REPORT Part 3 #2 
-</typo> 
-</WRAP></WRAP> 
- 
-  * Include a screenshot of your time-series with the overlaid template. 
-    * For this screenshot you do __not__ need to create a "nice" figure.  
-  * Report how many voxels had //r// > .30 (remember, this is printed out to your MATLAB command window) 
- 
- 
-</WRAP> 
- 
-===== Improving your detection of activation by temporal smoothing (filtering) ===== 
- 
-Have you noticed how noisy some of the raw time waveforms appear - even when coming from "significant" voxels? We know that our raw MR time series are composed of signal and noise. Some of the noise is of a higher frequency than the signal, so perhaps we can remove some of it by applying a simple low-pass filter. If we suppress the noise, our raw time series should look more like our expected waveform, and our correlations and probabilities should improve. 
- 
-<WRAP center round info 100%> 
-What I wrote above is simpler than it probably sounds. Imposing a low-pass filter in time smooths the data so that rapid changes become less apparent than slow changes. Let's say I wanted to track the temperature in Gambier over the course of one year. I sit here writing this on a day that is 20-30 degrees colder than the day before (Seriously. Ohio "spring"!). This would be an example of a rapid, or "high-frequency", change in the data. But this change might not be meaningful in terms of the big picture of understanding temperature trends in Gambier. A low-pass filter would minimize the influence of such rapid changes and reveal slower changes, like seasonal warming and cooling.  
- 
-Imagine that the blue line represents our actual data. We observe a lot of high-frequency fluctuations. After low-pass filtering (the red line) the data look much smoother. 
- 
-{{ :psyc410:images:lp_filt.png?300 }} 
- 
-</WRAP> 
- 
-** 7.** Included in your script is a bit of code that applies a very simple moving-average filter. The actual operation is the line in which we calculate the mean of the time series over time points j-len to j+len, where len is the half-width of the averaging kernel (each window spans 2*len+1 points). 
- 
-<WRAP center round tip 60%> 
-A moving average is a very simple low-pass filter. Each data point is replaced by the average of that data point and some of its neighbors. 
-</WRAP> 
- 
- 
-<code matlab> 
-timeseries = squeeze(func.data(x,y,z,1:tdim)); 
-if(mean(timeseries) > threshold)   % only process voxels inside the brain 
-  if(movavg ~= 0)                  % apply the moving-average filter? 
-    temp = timeseries; 
-    for j = len+1:tdim-len 
-      temp(j) = mean(timeseries(j-len:j+len)); % average over 2*len+1 points 
-    end 
-    timeseries(len+1:tdim-len) = temp(len+1:tdim-len); 
-  end 
-end 
-</code> 
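For reference, the same moving average can be expressed as a convolution with a flat kernel. The sketch below (Python/NumPy, toy data) reproduces the script's loop and confirms that the two forms agree on the interior points:

```python
import numpy as np

rng = np.random.default_rng(1)
timeseries = 100 + rng.normal(0, 2, 30)   # toy noisy signal
len_ = 2                                  # half-width; window is 2*len_+1 points

# Loop version, as in the MATLAB script: replace each interior point by the
# mean of itself and len_ neighbors on each side.
smoothed = timeseries.copy()
for j in range(len_, timeseries.size - len_):
    smoothed[j] = timeseries[j - len_: j + len_ + 1].mean()

# Convolution version: a flat kernel of ones divided by the window length.
kernel = np.ones(2 * len_ + 1) / (2 * len_ + 1)
conv = np.convolve(timeseries, kernel, mode='same')

# Interior points agree; the edges differ because 'same' zero-pads there.
assert np.allclose(smoothed[len_:-len_], conv[len_:-len_])
```

Seeing the filter as a convolution also explains why it suppresses high frequencies: rapid wiggles average out within the window, while slow trends survive.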
- 
-Rather than create a new script to implement this moving average filter, we can modify a variable at the top of the script called **''movavg''** (see below). 
-  * If movavg = 0 ('false' in a Matlab logical expression), then the moving average filter will NOT be applied. 
-  * If movavg = 1 ('true' in a Matlab logical expression), then the moving average filter will be applied. 
-  
-Try setting movavg = 1 and then save your script (see image below). Then run the script to visualize the effect of smoothing. 
- 
-{{course:lab07:matlab_02.png?500}} 
- 
-This may take a minute or two. Look at the bottom left-hand corner of the MATLAB window (near “Start”) to see if it says “Busy.” If so, it's still computing… 
- 
-===== LAB REPORT Part 3 #3 ===== 
-<WRAP center round important 100%> 
-<WRAP centeralign> 
-<WRAP centeralign> 
-<typo fs:x-large; fc:purple; fw:bold; text-shadow: 2px 2px 2px #ffffff> 
-LAB REPORT Part 3 #3 
-</typo> 
-</WRAP></WRAP> 
- 
-  * Include a screenshot of your time-series with the overlaid template. 
-    * For this screenshot you do __not__ need to create a "nice" figure.  
-  * Report how many voxels had //r// > .30 (remember, this is printed out to your MATLAB command window) 
- 
-</WRAP> 
-====== Part 4: Discriminating among activations with two templates ====== 
- 
-In fMRI studies, we are usually interested in comparing the activations evoked by different tasks - something that cannot be done with a single template like the one we've been using. In this section, you will use **''fmri_lab_script3_2025.m''** and create //two// templates - one for ''Task A'' and one for ''Task B''. 
- 
-**1.** Edit ''fmri_lab_script3_2025.m'' and modify the **''template1''** and **''template2''** vectors to correspond to the timing of ''Task A'' and ''Task B'' for your [[#Your Assigned Subject|assigned demonstration]] experiment. Use what you learned in [[#part_3finding_the_activation_using_simple_statistics|Part 3]] to optimize your templates. 
- 
-**2.** Then run the script on the same data as you did [[#statistical_mappingrunning_our_model|above]].  
- 
-**3 ** Plot the time-series 
-  * Make sure to highlight the functional MRI data in the **Overlay list** (this is the file that does **not** have ''corr'' in the name). 
-  * Press ''command'' + ''3'' 
-  * Change the **Plotting Mode** to ''Normalised'' 
- 
-**4. ** Overlay your models (Load ''modelA'', then repeat these steps to load ''modelB'') 
-  * Press the button to **Import data series from a text file** 
-  * Select the text file you just created with your MATLAB script. 
-    * It will be in your data directory and named ''SUBJ_TASK_modelA.txt'' 
-  * Click ''Ok'' in the **Scaling factor** pop-up window 
- 
-  * Note that the activations identified by template1 and template2 are color-coded in the final output. 
-    * Voxels that correlated with template1 (**Task A**) are red-yellow, whereas those that correlated with template2 (**Task B**) are blue-lightblue. 
- 
-===== LAB REPORT Part 4 ===== 
-<WRAP center round important 100%> 
-<WRAP centeralign> 
-<WRAP centeralign> 
-<typo fs:x-large; fc:purple; fw:bold; text-shadow: 2px 2px 2px #ffffff> 
-LAB REPORT Part 4 
-</typo> 
-</WRAP></WRAP> 
- 
-//For the requested figures below:// 
-  * //You __do__ need to create high quality figures for __depicting the brain__.// 
-  * //You __do not__ need to create high quality figures for __depicting the time series__.// 
- 
-  * Do you now see differences between the Task A and Task B blocks of your experiment? That is, do you observe the different timing associated with the different blocks? 
-  * How many voxels exceeded //r// = .30 for your two tasks? 
-  * **Motor study analysts**: 
-    * Did you find activation in the primary motor cortices and cerebellum? 
-      * Include a figure depicting motor cortex activation for each task. 
-    * How does the hemisphere of peak activation match the hand of movement in motor cortex and cerebellum? 
-  * **Face-Scene study analysts**: 
-    * Did you find activation in the [[https://en.wikipedia.org/wiki/Fusiform_face_area|fusiform face area (FFA)]] and [[https://en.wikipedia.org/wiki/Parahippocampal_gyrus#Scene_recognition|parahippocampal place area (PPA)]] for faces and scenes, respectively? 
-      * //Hint//: You'll probably find better activation in the right hemisphere. 
-    * Include a figure depicting FFA and PPA activation for the face and scene task, respectively. 
- 
- 
-</WRAP> 
- 
- 
-====== Part 5: Averaging across runs ====== 
- 
-The motor task was run twice and the face task was usually run three times in each subject. This was done to increase the sample size and thus the signal-to-noise ratio (SNR). The figure below shows the time-series from a single voxel for run1 (red), run2 (blue), run3 (green), and the average of all three (black). You can see that the addition of two runs helps smooth out the data and increases SNR. 
- 
-{{ :psyc410:images:fmri_3runs.png?800 |}} 
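The logic behind run averaging can be demonstrated with a quick simulation (Python/NumPy; the signal shape and noise level are invented for the example). Averaging N independent noisy runs shrinks the noise standard deviation by roughly sqrt(N):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy task signal (80 volumes) plus independent noise in each of 3 runs.
signal = np.tile([0, 0, 5, 5, 5, 0, 0, 0], 10).astype(float)
runs = [signal + rng.normal(0, 3, signal.size) for _ in range(3)]

avg = np.mean(runs, axis=0)

# Residual noise shrinks by roughly sqrt(3) after averaging three runs.
noise_single = (runs[0] - signal).std()
noise_avg = (avg - signal).std()
print(noise_single / noise_avg)   # near sqrt(3) ~ 1.7
```

This sqrt(N) improvement is exactly why collecting (and averaging) multiple runs makes the activation maps cleaner and more 'filled-in'.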
- 
-**1.** As our final exercise, use **''fmri_lab_script5_2025.m''** to apply your two template model to the average of two runs. 
-  * You can simply copy and paste the ideal templates (''template1'' and ''template2'') from ''fmri_lab_script3_2025.m'' into ''fmri_lab_script5_2025.m''. 
-  * **Don't forget to look at the time-series with both of your templates overlaid**. 
- 
-<code matlab> 
-fmri_lab_script5_2025('SUBJ','TASK') 
-</code> 
- 
-<WRAP center round help 90%> 
- 
-  * How does the number of activated voxels compare for the run-averaged data compared to the analysis of the individual runs? 
-  * Does your pattern of activations appear more spatially extensive and 'filled-in' for the averaged data? 
-  * How do the waveforms look - are they cleaner for the run-averaged data than for the individual runs? 
-</WRAP> 
- 
-===== LAB REPORT Part 5 ===== 
- 
-<WRAP center round important 100%> 
-<WRAP centeralign> 
-<typo fs:x-large; fc:purple; fw:bold; text-shadow: 2px 2px 2px #ffffff> 
-LAB REPORT Part 5 
-</typo> 
-</WRAP> 
- 
-  * Include screenshots of your time-series overlaid with your templates from good voxels from each Task. 
-    * For these screenshots you do __not__ need to create "nice" figures.  
-  * How many voxels exceeded //r// = .30 for your two tasks? 
- 
-</WRAP> 
-====== Part 6: A preview of a problem - multiple comparisons ====== 
- 
-<WRAP center round box 100%> 
- 
-We have not yet performed null-hypothesis significance tests on our data, but we soon will. Are all of the voxels that exceed p<.01 //really// significant? Given the number of voxels in the brain, many of these 'activations' are 'false positives' - that is, they are expected due to chance alone. 
- 
-Null hypothesis testing gives us a p-value that indicates how likely our results would be assuming that the null hypothesis were true. For example, if you contrast face activations and house activations and find a voxel with p<.05, you know that if face and house activation **did not differ in reality** there would be less than a 5% chance of getting your results. 
- 
-But this necessarily means that if you run enough tests you **will have false positives** (i.e., instances where you see a difference, but that difference is just due to random variability). For example, if you calculate 100 correlations between pairs of samples from a random number generator, __and__ if you set your significance level to p<.05, you would expect to observe about 5 'significant' correlations. Of course, these correlations are not meaningful, because the samples were drawn from a random number generator. In this example, these 5 (or so) significant correlations would be considered //false positives//. The false positive rate is set by the significance level we choose (sometimes called the //alpha level//). 
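You can watch this happen with a quick simulation (Python/NumPy; the p-value uses a Fisher z-transform normal approximation so no extra toolboxes are needed):

```python
import math
import numpy as np

rng = np.random.default_rng(3)
alpha, n_tests, n = 0.05, 100, 50

# Correlate 100 pairs of pure-noise samples and collect a p-value for each.
pvals = []
for _ in range(n_tests):
    x, y = rng.normal(size=n), rng.normal(size=n)
    r = np.corrcoef(x, y)[0, 1]
    # Two-sided p via the Fisher z-transform normal approximation
    # (adequate for n = 50; scipy's t-based p would be similar).
    z = abs(math.atanh(r)) * math.sqrt(n - 3)
    pvals.append(1 - math.erf(z / math.sqrt(2)))

false_pos = sum(p < alpha for p in pvals)                # expect ~5
bonferroni = sum(p < alpha / n_tests for p in pvals)     # expect ~0
print(false_pos, bonferroni)
```

Even though every correlation here is pure chance, roughly alpha * n_tests of them come out 'significant' - and the Bonferroni-corrected threshold removes essentially all of them.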
- 
-One way to correct against false positives is to use a Bonferroni correction. With this method, an adjusted p-value threshold is computed that compensates for the number of comparisons.  
- 
-  corrected p value = nominal p value / number of comparisons 
- 
-So in our example above, we would adjust our p-value to compensate for running 100 tests. Our //corrected// p-threshold would be p<.005. 
- 
-  p<.005 = .05 / 100 
- 
-The brain has tens of thousands of voxels, which means there will be __lots__ of false positives. If there are 20,000 voxels in the brain, and you want a p < .01 significance threshold: 
- 
-  corrected p value = 0.01 / 20000 
-  corrected p value = .0000005 (or .9999995, if expressed as 1-p) 
- 
-The Bonferroni correction is overly conservative for imaging data. This is because the voxels within an image are correlated (the signal from two neighboring voxels doesn't really represent two independent time-series). We will discuss correction for multiple comparisons in imaging data in a future lecture. 
- 
-{{ http://imgs.xkcd.com/comics/significant.png }} 
-</WRAP> 
- 
-===== LAB REPORT Part 6 ===== 
-<WRAP center round important 100%> 
-<WRAP centeralign> 
-<WRAP centeralign> 
-<typo fs:x-large; fc:purple; fw:bold; text-shadow: 2px 2px 2px #ffffff> 
-LAB REPORT Part 6 
-</typo> 
-</WRAP></WRAP> 
- 
-  * Nothing needs to be turned-in for this part of the lab. 
- 
-</WRAP> 
  
psyc410_s2x/fmri_part1.1742654735.txt.gz · Last modified: 2025/03/22 09:45 by admin

Except where otherwise noted, content on this wiki is licensed under the following license: CC Attribution-Share Alike 4.0 International