THIS PAGE IS STILL UNDER CONSTRUCTION!
Feel free to poke around, but do not start the lab as things might change!
In the last exercise (oh so many weeks ago), you analyzed fMRI data 'by hand' and conducted a simple correlation analysis by fitting an expected activation waveform to each voxel's raw time series. While analyzing by hand was (I hope) instructive, it is not practical for modern studies involving many subjects and several experimental factors. Fortunately, there are powerful programs available to do the heavy lifting for us - programs such as FSL, SPM, AFNI, and Brain Voyager. All of these programs are quite capable and, over the past several years, all have converged upon a very similar feature set. Although there are some minor differences in statistical approach, the choice among these programs is mostly one of taste. We will use FSL because (in my opinion) it offers the best balance of power, flexibility, support, and ease of use.
The lab exercise itself is not as long as this wiki page suggests. There are many choices in running an FSL analysis, and I tried to document the steps you should follow in enough detail so that you would not become frustrated by arcane details. However, I don't want you to simply push the buttons I document below without understanding what you are doing (a common pitfall with complex statistics programs).
To help guide you through these analyses, I encourage you to make use of the FSL course materials that are found here. The relevant material is located under the heading FMRI Preprocessing and Model-Based Analysis (FEAT). Lecture 1 is particularly relevant for today's class.
Some of the images used in tonight's lab are from earlier versions of FSL (and/or from FSL running on a Linux machine), so they might not always look identical to what you see when running the program. However, the display should be very similar. I have updated the images in all cases in which the display looks substantially different.
In today's laboratory exercise, you will use FSL to analyze the same block design data that you analyzed 'by hand' in the last lab. Recall that your data sets are typical for localizer tasks; i.e., a short duration task that is used to identify functional regions-of-interest (ROIs) that can be used to simplify analyses of a main experiment. In this case, the localizer was used to identify brain regions activated by faces/scenes or right/left hand movements. Each task is well documented in Lab 6 (Functional MRI Part 1) - so please refer to this lab for details about task design, timing, TRs, etc.
Your lab report is due one week from tonight.
As always, I do not want you rushing through at the expense of comprehension. It is critical that you understand each step of the analysis stream. Do not use your time in lab to write your lab report. This time should be used for working through the lab to best further your understanding of the process. Therefore, you should solely be taking the screenshots and notes necessary to put your lab report together later.
The lab report questions for tonight's lab are listed below (these will be shown again at the relevant sections throughout the lab). Tonight's lab has two parts. Usually, these would be two separate lab reports (LR 7 and LR 8). But I'd like you to submit them as a single MS Word document (with your last name in the filename).
Part 1
Part 2 (design.png)
Part 3: no questions
Part 4: no questions
Part 5
Part 6
Part 7
Some general things to keep in mind while completing your lab report and preparing your figures:
- The letter R should appear next to the right hemisphere in your figures.

The anatomical data files for your subject are:
- SUBJ_highres.nii.gz: T1-weighted anatomical volume acquired at high-resolution (1 x 1 x 1 mm)
- SUBJ_coplanar.nii.gz: T2-weighted anatomical volume at the same resolution as the functional data

The skull stripped anatomical images will be used to coregister the functional brain volumes in our analyses. You cannot coregister the functional images if the anatomical images have skull outlines, which is why we will skull strip these brains in Part 1 of the lab.
Throughout this wiki you should replace:
- SUBJ with the ID of your assigned subject
- TASK with your task name (face or motor)
- RUN with whatever run you are analyzing (e.g., run02)
If you ever see face, run01, etc., do not blindly copy it. You should always be using the filenames that are relevant for your task (face or motor) and your conditions (face or scene & right or left).
A run refers to a single continuous data acquisition period. There are several reasons you would divide your study up into separate chunks. One reason is to give your subjects intermittent rest periods. So if you have an experiment that requires 40 minutes to complete, you might break that up into eight 5-minute runs.
If you think your subject didn't show a great task-evoked response, let me know and we can pick a new subject for you.
Do not proceed using a brain that didn't show good/clean activation.
We will use the skills we developed with FSL/BET in Laboratory Exercise 3 (click here for a refresher on using BET or see the BET User guide) to skull strip both the coplanar and highres anatomical brains. The skull stripped output brains will (by default) be named SUBJ_coplanar_brain.nii.gz and SUBJ_highres_brain.nii.gz. We will need these skull stripped brains for the rest of the exercise, so be careful to specify the correct names.
1. Create a new directory to store the output from this week's lab (first delete any existing directory).
#!bash
# Delete the output directory if it already exists
if [ -d ~/Desktop/output/lab07 ]; then
    rm -rf ~/Desktop/output/lab07
fi

# Create the output directory
mkdir ~/Desktop/output/lab07
2. Skull strip the coplanar and highres anatomical brains:
- /Users/hnl/Desktop/input/fmri/loc/data/nifti/SUBJ_coplanar.nii.gz
- /Users/hnl/Desktop/input/fmri/loc/data/nifti/SUBJ_highres.nii.gz
To save yourself a lot of traversing through file manager boxes to find the relevant files, launch FSL from a terminal in which you first change your current directory to a helpful place. For example, you might first cd /Users/hnl/Desktop/ and then start FSL with fsl &.
Remember that you can read the unstripped brains from the input folder, but you must specify your output folder for the skull stripped brains. For example:

input:  /Users/hnl/Desktop/input/fmri/loc/data/nifti/2767/2767_coplanar
output: /Users/hnl/Desktop/output/lab07/2767_coplanar_brain
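If you prefer the terminal to the BET GUI, the same skull stripping can be done with the bet command. This is just an optional sketch using the example subject 2767 from above; the -f value shown is BET's default fractional intensity threshold, which you may need to adjust if too much (or too little) brain is removed.

#!bash
# Optional: skull strip from the terminal instead of the BET GUI.
# Uses the example subject 2767 shown above; substitute your own SUBJ ID.
cd /Users/hnl/Desktop/input/fmri/loc/data/nifti/2767
bet 2767_coplanar.nii.gz /Users/hnl/Desktop/output/lab07/2767_coplanar_brain.nii.gz -f 0.5
bet 2767_highres.nii.gz /Users/hnl/Desktop/output/lab07/2767_highres_brain.nii.gz -f 0.5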
Preprocessing refers to a sequence of processing steps that precede statistical model fitting. These steps prepare the data for statistical analysis by removing extraneous sources of noise and artifact.
Here are the preprocessing steps we will use this week (each is described under the Pre-stats tab below):
- motion correction (MCFLIRT)
- slice timing correction
- brain extraction (BET) of the functional data
- spatial smoothing
- high-pass temporal filtering
FEAT
To get started, start FSL and then choose the FEAT FMRI analysis option. The following FEAT GUI should appear. Note that the GUI has a tabbed interface with Misc, Data, Pre-Stats, Stats, Post-stats, and Registration tabs. We will enter information on each of these tabs before running the program. For additional details regarding FEAT, or for an alternate explanation of the details provided below, please refer to the FSL documentation.
The options presented by the GUI are determined by the two drop down menus at the top of the GUI. In this part, we will be conducting a First Level Analysis. A First Level Analysis fits the statistical model to the raw time series of a single run from a single subject. Later we will conduct Higher Level Analyses, which combine the first-level results across runs (and, in larger studies, across subjects). In the second drop down menu, we will be doing a Full Analysis.
Note that, by default, FEAT has 'bubble help'. If you hover the mouse over an option, a pop-up help menu will appear after a few seconds (don't be impatient - it takes a while). This can be very helpful. It can be disabled on the MISC tab.
Detailed information regarding the Data tab can be found here.

1. You should first analyze the data from a single run of your data. Select this data set using the Select 4D data button. Then choose the functional data from your first run (~/Desktop/input/fmri/loc/data/nifti/SUBJ_TASK_RUN.nii.gz).
2. Be sure to specify your Output directory. In FSL, the output directories have a very specific format with a standard file naming convention. Set the output directory to /Users/hnl/Desktop/output/lab07/RUN, replacing RUN with the actual run number for the 4D file (e.g., run01).

3. Check parameters. The TR should be detected automatically as 2.0 seconds. Let me know if you see a different value.

4. The Total volumes should be detected as 150. (If you want to double-check these values from the terminal, see the sketch below.)
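If you would like to verify these values yourself, they are stored in the image header and can be read from the terminal with fslval (or fslinfo). A minimal sketch, using the SUBJ/TASK/RUN placeholders described above:

#!bash
# Optional sanity check of the header values FEAT reads automatically.
cd ~/Desktop/input/fmri/loc/data/nifti
fslval SUBJ_TASK_RUN.nii.gz dim4      # number of volumes (should be 150)
fslval SUBJ_TASK_RUN.nii.gz pixdim4   # TR in seconds (should be 2.0)
fslinfo SUBJ_TASK_RUN.nii.gz          # prints the full header summary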
Remember from our MRI lecture on Contrast and Segmentation, TR is the “repetition time” between excitation pulses. Also remember that echo-planar imaging allows us to “fill up” k-space very quickly. After a single excitation we can sequentially acquire data from all of the “slices” in our volume. So the TR is essentially our temporal resolution. If we have TR = 2s then we get a new data sample from a given voxel every 2 seconds. In other words, we get a sample of the whole brain every 2 seconds.
We have 150 volumes with TR = 2s. What is the total duration of our time-series?
Remember, a high-pass filter gets rid of low frequency signal. That is, a high-pass filter is like a bouncer that only allows high frequencies to “pass” into the club.
5. The High pass filter cutoff is a temporal smoothing parameter. It can be problematic to have very low frequencies in the data (e.g., linear trends and slow drifts due to magnetic instabilities). We can eliminate these by high pass filtering. Set the cutoff to 60 sec.
The Pre-stats tab controls several preprocessing steps. Detailed information regarding the Pre-stats tab can be found here.
1. Motion correction. The default motion correction option is to apply MCFLIRT (Motion Correction FMRIB's Linear Image Registration Tool). MCFLIRT applies a rigid body transformation (i.e., 6 DoF) to the data. That is, it translates in 3 dimensions and rotates in 3 dimensions, but does not stretch or scale. The presumption is that the brain is the same on each volume, and that a subject's motion can only cause the brain to either translate or rotate, not to stretch or shear.
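For reference only (FEAT runs this step for you), MCFLIRT can also be called directly from the terminal; a minimal sketch using the same filename placeholders:

#!bash
# Rigid-body (6 DoF) motion correction from the terminal; FEAT does this step for you.
mcflirt -in SUBJ_TASK_RUN.nii.gz -out SUBJ_TASK_RUN_mcf -plots
# -plots saves the six motion parameters (3 translations, 3 rotations) to a .par text file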
2. The Slice timing correction option partially corrects for the fact that the data from different slices are collected at different times. That is, each slice is collected individually in a sequential order across the duration of the TR. Our data were collected using an interleaved acquisition, so choose this option.
3. BET brain extraction refers here to the 4D fMRI data, NOT the structural data that you already skull stripped in Part 1. Check this option (box should be yellow when selected).
4. Leave the Spatial smoothing FWHM set to its default value of 5mm. This will blur your data with a Gaussian kernel with FWHM = 5mm. Increasing this value will increase the amount of smoothness applied to your data. One reason to apply some level of spatial smoothing is to reduce noise through averaging neighboring voxels. Another reason is that it increases the correspondence between different subjects' brains.
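As an aside, the FWHM you enter in the GUI is related to the Gaussian sigma used internally by FWHM = 2.355 x sigma, so a 5 mm FWHM corresponds to a sigma of roughly 2.12 mm. The sketch below shows a plain Gaussian smooth with fslmaths just to illustrate that relationship; note that FEAT's own smoothing step actually uses the SUSAN routine rather than a simple Gaussian filter.

#!bash
# Illustration only: a plain Gaussian smooth at the equivalent kernel size.
# sigma = FWHM / 2.355, so 5 mm FWHM -> sigma of about 2.12 mm.
fslmaths SUBJ_TASK_RUN.nii.gz -s 2.12 SUBJ_TASK_RUN_smooth.nii.gz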
5. Temporal filtering has several options. Because we selected a high pass filter on the Data tab, make sure the Highpass filtering option is selected here. This option will actually apply the high pass filter that we specified on the Data tab.
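For the curious, the 60 s cutoff you entered on the Data tab is converted internally to a Gaussian sigma expressed in volumes, roughly cutoff / (2 x TR) = 60 / 4 = 15 volumes. A command-line sketch of the equivalent filter (FEAT performs this for you, including adding the temporal mean back in):

#!bash
# Equivalent high-pass filter from the terminal; FEAT performs this step itself.
# Sigma in volumes = 60 s / (2 * 2 s TR) = 15.
fslmaths SUBJ_TASK_RUN.nii.gz -Tmean tempMean
fslmaths SUBJ_TASK_RUN.nii.gz -bptf 15 -1 -add tempMean SUBJ_TASK_RUN_hpf.nii.gz
rm tempMean.nii.gz   # -bptf removes the mean, so it is added back and the temp file deleted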
There are several other processes that we are not choosing for this dataset. For example, we are not applying B0 unwarping. This is a process for removing some geometric distortions in the data caused by static variations in the magnetic field over space. While a useful technique, it requires the acquisition of a field map, which we do not have.
The Registration tab sets up the coregistration between this subject's structural images and a standard brain. Detailed information regarding the Registration tab can be found here.
Recall that we previously normalized an individual subject's brain to a template brain earlier in the semester. Well, here we will do this again.
1. Make sure the yellow checkbox next to Expanded functional image is selected. The Expanded functional image is a low resolution structural image that is coplanar with the functional MRI data.
- Select SUBJ_coplanar_brain.nii.gz
- Set the degrees of freedom to 6 DOF
2. The Main structural image is the high resolution structural image for this subject.
- Select SUBJ_highres_brain.nii.gz
- Set the degrees of freedom to 6 DOF
3. The Standard space is the standard template to which you wish to normalize the data.
- Set the degrees of freedom to 12 DOF
During registration, FEAT aligns the functional data to the coplanar image, the coplanar image to the highres image, and the highres image to the standard template; these transformations are then combined so that your results can later be displayed in standard space.
Do not select Go yet. You will continue setting up the analysis in the next section.
Lab Report Part 1
A detailed description of multi-level analysis in FSL can be found here.
FSL conceives of an fMRI analysis as consisting of several levels.
In FSL/FEAT, you have a choice in the GUI to choose First Level Analysis or Higher Level Analysis. The Higher Level Analysis choice is used for second, third, and fourth level analyses. Note that the first level analysis is the only level that applies the statistical model to raw time courses, and the only level in which pre-processing steps are performed. For these reasons, the first level analysis takes longer to compute. If you get to a higher level analysis and decide to change your statistical model, you have to go back and recompute the first level analysis again.
The statistical model is specified in the Stats Tab. This is the most complicated part of the process, and the GUI is quite flexible and allows for the specification of complex experimental designs. Happily, our design is quite simple and easy to specify.
You may wish to consult with the FSL FEAT documentation as you read along with my documentation. Details regarding the Stats Tab can be found here.
Recall that last week you specified a simple single template (and then two templates) of the expected activation and conducted a correlation between that template and the time course of each voxel. Here, instead of using correlation to look for our signal, we will use multiple regression ('general linear model' or GLM). FSL has a brief overview of the GLM approach here.
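To connect this to the regression terminology used below, each voxel's time series is modeled as a weighted sum of the EV regressors plus noise. A minimal statement of the model, in my own notation rather than FSL's (FEAT additionally prewhitens the data to account for temporal autocorrelation, but the basic idea is the same):

\[
Y = X\beta + \varepsilon, \qquad
\hat{\beta} = (X^{\top}X)^{-1}X^{\top}Y, \qquad
\text{contrast} = c^{\top}\hat{\beta}
\]

Here Y is the 150-sample time series of one voxel, the columns of X are the HRF-convolved EV templates, and c is a contrast vector such as [1 -1] for Face > Scene.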
The most important prerequisite for specifying the model is to have an accurate stimulus timing file that specifies when the stimulus occurred (in seconds) relative to the beginning of the fMRI time series of volumes. Each line of the file contains three numbers indicating the stimulus: the onset time (in seconds), the duration (in seconds), and a relative weighting value.
There needs to be a separate timing file for each experimental factor (called explanatory variables, or EVs, in FSL terminology) used in each run. These do the same job that your template matrices did last week. So, in our face-scene localizer task, we will need two stimulus timing files for each run: one for face and another for scene.
The timing file below is for the face condition in the Face Task and for the right hand condition in the Motor Task (these tasks used the same timing and so we can use the same timing file).
30 12 1
78 12 1
126 12 1
174 12 1
222 12 1
270 12 1
This indicates that the first block of faces (or right-hand finger tapping) started at 30 seconds and persisted for 12 seconds. The last column means that the relative weighting of this stimulus was '1'. (Note: For this study, the weightings will all be '1's. However, in studies where a stimulus varies in intensity, we might utilize the weighting factor to express the relative intensity of each stimulus). The second line means that the second block of faces began at 78 seconds and persisted for 12 seconds. There were six blocks of faces in this first data run.
Here is the stimulus timing file for the scene condition in the Face Task and for the left hand condition in the Motor Task.
6 12 1
54 12 1
102 12 1
150 12 1
198 12 1
246 12 1
Note that the scene and face blocks alternate with a 12 second blank period between the end of one block and the start of the next block. For example, the first scene block begins at 6 s and runs until 18 s (12 s duration), but the first face block does not start until 30 s. In our last lab we used 0s to indicate these timepoints, but here we only indicate the times for the periods during which we think we can explain the variance, while all others are assumed to be unexplained.
Normally, good experimental design would require you to change the stimulus timing for each run to avoid order effects. In setting up these experiments, we purposively did not change the stimulus timing, but rather used the identical timing in each run. This was done to simplify these demonstration analyses. You can therefore use the same timing files for faces and scenes for each run, instead of needing to create a unique file for each condition in each run.
1. Create your task timing for each explanatory variable (i.e., face, scene) in a separate text file.
- Use a text editor such as TextWrangler or BBEdit (or the terminal sketch below).
- Save the timing files to ~/Desktop/output/lab07.
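If you would rather work in the terminal, the same two files can be written with here-documents. This sketch simply copies the onset/duration/weight values listed above (face/right-hand and scene/left-hand); rename the files if you prefer different names.

#!bash
# Optional: write the two 3-column stimulus timing files from the terminal.
cat > ~/Desktop/output/lab07/face_timing.txt << 'EOF'
30 12 1
78 12 1
126 12 1
174 12 1
222 12 1
270 12 1
EOF

cat > ~/Desktop/output/lab07/scene_timing.txt << 'EOF'
6 12 1
54 12 1
102 12 1
150 12 1
198 12 1
246 12 1
EOF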
In setting up our statistical regression model, we wish to create 'templates' of the expected activation for the face blocks and for the scene blocks, separately. This is similar to what you did last week 'by hand'. Now, however, you will use the timing file as input and the FEAT program will generate the template based upon that timing. It will also convolve your expected activation template with a hemodynamic response function (HRF) so that the expected activation template has a shape similar to that expected in a real physiological response.
2. To generate your model, begin by clicking on the Full Model Setup button. A small window will appear with a tabbed interface.
- Enter 2 in the Number of Original EVs.
- Name the first EV Face.
- Set the Basic shape to Custom (3 Column format).
- Select your face_timing.txt file (or whatever you named the stimulus timing file).
- Set Convolution to Gamma.
- Leave the three Gamma parameters (phase, standard deviation, and mean lag) at their defaults of 0, 3, and 6, respectively. These values affect the shape and delay of the expected hemodynamic response.
- Deselect the Apply temporal derivative option (the box should be grey).
- Make sure the Apply temporal filtering option is selected (the box should be yellow).
Repeat this process for Tab 2, except provide the name Scene and specify the scene_timing.txt stimulus timing file. All other options should be the same.
3. Now choose Contrasts & F-tests. This section presupposes some knowledge on the user's part about specifying statistical tests. You can read about this in detail here.
We want 4 contrasts, so set this accordingly:
| Title | EV1 | EV2 |
|---|---|---|
| Face | 1 | 0 |
| Scene | 0 | 1 |
| Face > Scene | 1 | -1 |
| Scene > Face | -1 | 1 |
When you are done, click the Done button. A window will pop up showing your design. By convention, your two expected activation templates will be shown vertically, rather than horizontally. It should look similar to the one shown below.
Do not select Go yet. You will continue setting up the analysis in the next section.
Lab Report Part 2
There are several methods offered by FSL/FEAT for testing the significance of the statistical model. Students should be aware that, in FSL, the higher level analyses use the full range of statistics and variances from the lower level analyses. That is, FSL does not 'pass up' thresholded statistics to the next level of analysis. However, when you decide to review the significance of your model, at any level, you will likely want to correct for the number of statistical comparisons you performed.
You may wish to consult the FSL FEAT documentation for the Post-stats tab here.
Multiple Comparisons
The multiple comparisons problem is beyond the scope of this wiki page, and I will discuss it during class. It is important, however, that you understand this problem as it comes up frequently in imaging research (where there can be tens of thousands of voxels, and each is treated as a dependent variable). It also comes up in many other areas of research, such as genetics, where many thousands of gene variations are regressed against thousands of phenotypes.
We will set our method for correcting for multiple comparisons in the Thresholding section of the Post-stats tab. It is nearly impossible to publish results that have not been corrected for multiple comparisons. However, the very fact that there are choices in the method applied suggests that the field is not in agreement upon how to do this. The Post-stats tab offers four thresholding choices:
- None will show the statistical value (z-value) for each and every voxel
- Uncorrected will only show voxels above a specified z-value, but will not correct for multiple comparisons
- Voxel will correct for multiple comparisons using Gaussian Random Field Theory
- Cluster will correct for multiple comparisons based on the number of contiguous "activated" voxels in a cluster
For your first level analyses, it doesn't really matter what correction you apply (or, even no correction) because you will be combining the results of your two runs into a second level analysis. As I've mentioned elsewhere on this wiki page, the full statistics from lower level analysis are passed upward for higher level analyses. In the second level analysis, I will ask you to compare the different corrections for multiple comparisons.
For our first-level analysis let's be very liberal and set Thresholding to Uncorrected with a P threshold of 0.05.
FSL also includes other tests that are not yet accessible through the FEAT GUI, but can be applied through the command line. One relatively new correction for multiple comparisons available through FSL's command line is the False Discovery Rate (FDR). Tom Nichols has an excellent web site that discusses and demonstrates the FDR procedure for imaging data. His website includes a slide presentation that can be found here.
Lab Report Part 3
There are no questions for this part.
1. Once you have entered all of the required information into the FEAT GUI, you should save the file you have created to disk.
- Press the Save button on the bottom of the Data tab.
- Give the file a descriptive name (e.g., SUBJ_run01_preproc).
- FSL will append the .fsf file extension to your designated name.

2. And now for the moment of truth. Pretty exciting, right!? Press the Go button.
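As a side note, the saved .fsf design file can also be run non-interactively from the terminal with the feat command, which is equivalent to pressing Go. The path below assumes you saved the design into your lab07 output folder with the example name given above:

#!bash
# Run the saved design without the GUI (same as pressing Go).
feat ~/Desktop/output/lab07/SUBJ_run01_preproc.fsf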
FSL creates an HTML file (report.html) in your designated output directory. The processing progress will be written into this HTML file. Normally this report.html will automatically open in a web browser and you can watch the progress in real-time.
It will take some time to complete processing (10-12 minutes). While running, the words Still Running appear in red font on your HTML log page.
While FSL is working (assuming that it is running correctly) on run 01, you can start setting up your First Level Analysis for the second and third functional runs for your participant.
1. Set up FEAT for run02
- On the Data tab, select the input file for run02 (via Select 4D data).
- Change the output directory accordingly (e.g., run02).
All other parameters are still set correctly from run01, so you don't have to change anything on any of the other tabs.
2. Press Go
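Purely as an optional shortcut (the GUI steps above are all you need tonight): because only the run label changes between runs, you could also copy the run01 design file, swap the run labels with sed, and launch it with feat. This sketch assumes you saved the run01 design with the example name above and that every run-specific path in it contains the string run01.

#!bash
# Optional: clone the run01 design for run02 and run it from the terminal.
sed 's/run01/run02/g' ~/Desktop/output/lab07/SUBJ_run01_preproc.fsf > ~/Desktop/output/lab07/SUBJ_run02_preproc.fsf
feat ~/Desktop/output/lab07/SUBJ_run02_preproc.fsf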
If you are analyzing data from the Face task, your subject will have a third run of data. Go ahead and repeat the steps above to run the First Level Analysis for run03. Subjects in the Motor Task only have two runs of data.
While you wait for your two (or three) runs of data to finish processing, you should read on through the wiki to preview what steps are coming up next.
The last first-level analysis we will run tonight will be on data that does not first get preprocessed. In your open FEAT window make the following changes:
- Set the Output directory to /Users/hnl/Desktop/output/lab07/run01_nopreproc
- Set Motion correction to None
- Set Slice timing correction to None
- Set the Spatial smoothing FWHM to 0.0
- Deselect the Highpass temporal filtering option on the Pre-stats tab
- In the Full Model Setup, deselect the Apply temporal filtering option for each EV
- Press Go
As long as your HTML log file is not printing pages of error messages, you should be fine. However, if you do observe error messages in your HTML log, read them carefully. Errors will almost certainly be due to an incorrect specification of an input file (wrong name, wrong path, etc.), or because you are trying to write output to a “read only” folder. Check carefully and try to solve these errors on your own before calling me to help. Solving mundane technical problems is a bigger part of science than we care to admit!
Lab Report Part 4
There are no questions for this part.
FEAT will create a directory with whatever output name you specified and the suffix .feat. So if you specified ~/Desktop/output/lab07/run01 as your output directory, FEAT will create the directory ~/Desktop/output/lab07/run01.feat.
Inside this directory there will be a file named report.html that contains a log of activity and results. This file will be displayed as a web page in your internet browser.

The directory will also contain lots of other files and subdirectories. A full list can be found here. For tonight's exercise we will simply look at the report.html file, but next week we will learn how to investigate the output interactively with FSLeyes.
For this section look at the output for any of your first-level analyses except for run01_nopreproc. We'll look at that one later.
You can begin to review your results as soon as the HTML output indicates that the first level analysis is complete (note, while it is running, the words “Still Running” appear in red font on your HTML log page).
There is a lot of information in the HTML file. To get you jump-started, click on the Post-stats hyperlink on your HTML output and scroll down to see the output for the four contrasts that you specified.
zstat1 - C1(Face) shows voxels that are activated by the first condition (face or right hand, depending on your experiment) compared to all non-modeled time points (i.e., the short rest periods of 12 secs between each block).

zstat2 - C2(Scene) shows voxels that are activated by the second condition (scene or left hand, depending on your experiment) compared to all non-modeled time points (i.e., the short rest periods of 12 secs between each block).

Note that you can have the same voxels activated in both of these first two contrasts. For example, you might expect that visual cortex is activated by both faces and scenes, and so those visual cortex voxels should be activated in both of the first two contrasts.
zstat3 - C3(Face > Scene) shows voxels that are activated more by faces than by scenes. This contrast removes voxels that respond equally well to faces and scenes (e.g., early visual cortex), and we can therefore infer that the remaining voxels are particularly sensitive to faces.

zstat4 - C4(Scene > Face) shows voxels that are activated more by scenes than by faces. This contrast removes voxels that respond equally well to faces and scenes (e.g., early visual cortex), and we can therefore infer that the remaining voxels are particularly sensitive to scenes.

Unlike the first two contrasts, there should be no voxels in common between these latter two contrasts.
It will be more informative to fully investigate your activation results after you complete the Second Level Analysis. So, look at your results here in the HTML output, visually compare the activation patterns in the first and second runs to get a sense of the consistency, and then move on.
Lab Report Part 5
Several pre-processing steps were included in the first level analysis. Were they effective? Let's do a direct comparison of our raw data and the preprocessed data using FSLeyes to see the effects of preprocessing. Some will be obvious, whereas others are more subtle.
1. Launch FSLeyes.
2. Load two files:
- ~/Desktop/input/fmri/loc/data/nifti/SUBJ/SUBJ_face_run1.nii.gz
- ~/Desktop/output/lab07/run01.feat/filtered_func_data.nii.gz

Of course, replace SUBJ and face with the appropriate subject ID and task. (If you prefer, both files can be loaded from the terminal, as sketched below.)
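A minimal command-line sketch for loading both data sets at once, using the same placeholders:

#!bash
# Load the raw and preprocessed 4D data sets together from the terminal.
fsleyes ~/Desktop/input/fmri/loc/data/nifti/SUBJ/SUBJ_face_run1.nii.gz \
        ~/Desktop/output/lab07/run01.feat/filtered_func_data.nii.gz &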
3. Open a second viewer to view the files side by side.
- Choose View → Ortho View to open a second Ortho View.
- In one Ortho View, hide the filtered_func_data.nii.gz data by deselecting the blue eye next to it.
- In the other Ortho View, hide the SUBJ_face_run1.nii.gz data by deselecting the blue eye next to it.

Notice that when you click at a location on one of the brains, the cursor will jump to the same location in the other brain.
4. Display the raw and preprocessed data sets' time-series.
- Press command + 3 to open a Time series view.
- Select both data sets in the Overlay list (the list in the Time series window, not in the Ortho View windows).
- Choose Normalised from the Plotting Mode drop down list.

5. Turn off the modeled time-series.
- Select filtered_func_data in the Overlay list of the Time Series window.
- Deselect Plot full model fit in the FEAT settings. (In the screenshot, Plot full model fit has not yet been unchecked.)

6. Compare the raw time-series with the preprocessed time-series. You might want to try this at a few different voxels.
Lab Report Part 6
As discussed in the lecture, you should examine your residuals to see how well your model accounts for the time-course of your activations. The residual of a regression is considered error variance, and the Least Squares approach used in regression analysis seeks to minimize the error variance.
You want the ratio of explained variance to unexplained variance to be as large as possible. The residuals represent the unexplained variance and therefore you want them as small as possible.
1. Load the following files into fsleyes (you can also load them from the terminal, as sketched below):
- run01.feat/filtered_func_data.nii.gz
- run01.feat/thresh_zstat1.nii.gz
- Change the colormap of thresh_zstat1 from Greyscale to Red-Yellow.
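As before, these can be opened from the terminal; FSLeyes lets you set the colour map for an overlay by placing -cm after that overlay's filename. A sketch:

#!bash
# Load the filtered data and the thresholded z-stat map with a Red-Yellow colour map.
cd ~/Desktop/output/lab07/run01.feat
fsleyes filtered_func_data.nii.gz thresh_zstat1.nii.gz -cm red-yellow &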
2. Click on a strongly activated voxel (colored yellow).
3. View the time-series.
- Press command + 3 to open a Time series view.
- Select filtered_func_data in the Overlay list.
- Open the FEAT options…
- Select Plot residuals.
4. Look at the residual for that voxel and judge whether the activation waveshape is largely absent. If it is, it means that your model successfully accounted for the activation, and there is no task-related activation left in the residual error term.
5. Load the data that was analyzed without preprocessing.
- Add run01_nopreproc.feat/filtered_func_data.nii.gz to your display.
It will be best to view these residual timeseries with the Plotting mode set to Normal or Demeaned.
6. You should now see the residuals from each of the two data sets. Remember, these are identical data sets that were analyzed with the exact same model. The only difference was that one was preprocessed and the other was not. Do you observe larger residuals (i.e., more unexplained variance) for the data that was not preprocessed?
Lab Report Part 7