Goals

  • To develop an intuitive understanding of how the General Linear Model (GLM) is used for univariate, single-subject, single-run fMRI data analysis
  • To understand the significance of beta weights in GLM analysis
  • To understand the significance of residuals in GLM analysis
  • To learn to perform single-subject GLM analysis in BrainVoyager
  • To learn to interpret statistical analysis of beta weights

Accompanying Data

Tutorial 2 Data (Download the data from this link)

Background

The general linear model (GLM) is a fundamental statistical tool that is widely applied to fMRI data. In this tutorial, you will learn the basics of modelling univariate BOLD timecourses using the GLM. First, we will explore how GLM analysis works one voxel at a time. Then we will move to BrainVoyager to do the analysis and apply statistical maps over an entire volume.

Recall that the GLM is used to model data using the form y = β0 + β1X1 + β2X2 + ... + βnXn + e, where y is the observed or recorded data, β0 is a constant or intercept adjustment, β1 ... βn are the beta weights or scaling factors, X1 ... Xn are the independent variables or predictors, and e is the error term (the residuals).

Alternatively, if you're not into matrix algebra, you can think of this as modelling a data series (time course) with a combination of predictor time courses scaled by beta weights (the explained variance) plus the "junk" time course (residuals) that is left over once you've explained as much as you can.
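If it helps to make this concrete, here is a minimal sketch (in Python with NumPy, not part of the tutorial materials) that builds a made-up data series from a single predictor plus noise, and then recovers the beta weight and constant by least squares. All of the numbers are invented for illustration.

```python
import numpy as np

n_timepoints = 100
rng = np.random.default_rng(0)

# Hypothetical predictor time course and simulated "data": y = 50 + 2.5*X1 + noise
X1 = np.sin(np.linspace(0, 4 * np.pi, n_timepoints))
y = 50 + 2.5 * X1 + rng.normal(0, 0.5, n_timepoints)

# Design matrix: a column of ones (for the constant, beta0) and the predictor X1
X = np.column_stack([np.ones(n_timepoints), X1])

# Ordinary least squares: the betas that minimize the sum of squared residuals
betas, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
residuals = y - X @ betas

print("beta0 (constant):", betas[0])        # close to 50
print("beta1 (weight on X1):", betas[1])    # close to 2.5
print("sum of squared residuals:", np.sum(residuals ** 2))
```

The interactive plots below let you do the same thing by hand: moving the sliders changes the beta weight and constant, and the residual plot shows whatever the scaled predictor cannot explain.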

In fMRI data analysis, we are often interested in determining how well a set of predictors for expected response fits the observed or recorded signal. The GLM allows a linear combination of predictors to be optimally fit to a timecourse. Let's first consider a simple example using one predictor.

In the following plot, our 'data' is shown in grey, and is presented in arbitrary units over time in seconds. There is also a single predictor X1 shown in pink.

Question 1: Adjust the sliders until you find the beta weight and constant that best fit the data. How can you tell when you've chosen the best possible beta weight and constant? Consider (1) the similarity of the weighted predictor (purple) and the data (black) in the top panel; (2) the appearance of the residual plot; and (3) the squared error and sum of residuals. How does the beta weight affect the weighted predictor? What aspects of the data might adjustments of the beta weight NOT affect that may still affect the goodness of fit and the residuals? How does changing the constant affect the weighted predictor?

In practice, we don't expect to find a perfect fit between predictors and data. The following is a more realistic example with noisy data.

Question 2: Adjust the beta weight and constant in the following plot to find the best fit between the predictor and the data. What values did you choose? What are the resulting squared error and sum of residuals? Describe how the residual appears in relation to the data.

In fMRI analysis, we are usually interested in explaining signal variance in terms of multiple experimental conditions, not just a single one. The following examples use data taken from the course dataset — a 2 condition + baseline variant of the localizer sessions. We chose this variant so we can start with a simpler case than the course data. The participant was shown images of faces and hands in alternating 16-second blocks. The timecourses from three voxels were extracted, and two task-specific predictors were generated according to the block design.
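For reference, here is a rough sketch of how such block-design ("boxcar") predictors could be generated in code. The TR, run length, and block ordering below are assumptions made for illustration only; they are not read from the actual Run01.prt protocol.

```python
import numpy as np

TR = 2.0          # assumed repetition time (seconds)
block_len = 16    # 16-second blocks, as described above
n_vols = 160      # assumed number of volumes in the run

# Assign each volume to a block, assuming the order baseline, Face, baseline, Hand, repeated
time = np.arange(n_vols) * TR
block_idx = (time // block_len).astype(int)

face_pred = (block_idx % 4 == 1).astype(float)   # 1 during Face blocks, 0 elsewhere
hand_pred = (block_idx % 4 == 3).astype(float)   # 1 during Hand blocks, 0 elsewhere

# In practice these boxcars would also be convolved with a haemodynamic response
# function before being used as GLM predictors.
```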

Note: We have moved web servers, and you may have to scroll left and right in the window below to see all of its contents.

Question 3a: For Voxel 1, estimate the best fitting model parameters. Compare the residual to the one you obtained in Question 2. What can you conclude about this Voxel's selectivity for images of faces and/or hands?

Question 3b: For Voxel 2, estimate the best fitting model parameters. Compare the beta weights and squared errors from your estimated models for Voxels 1 and 2. What can you conclude about these voxels' relative selectivity for images of faces and/or hands?

Question 3c: For Voxel 3, estimate the best fitting model parameters. Compare the residual and the squared error to what you observed for Voxels 1 and 2. What can you conclude about these voxels' relative selectivity for images of faces and/or hands?


GLM in BrainVoyager

1) Open BrainVoyager, then Open, navigate to the BrainVoyager Data folder, then select and open P1_anat-S1-BRAIN_IIHC_MNI.vmr.

2) Select the Analysis menu and click Protocol… Next, in the Protocol window, click the Load .PRT... button. Select and open Run01.prt from the PRTs folder. You should now see Face and Hand in the Conditions list. Click the Close button.

3) Click the Analysis menu again, this time selecting Link Volume Time Course (VTC) File... In the window that opens, click Browse, then select and open P1_Run1_S1R1_MNI.vtc, then click OK.

 

You have now loaded all three files: the anatomical volume, the functional data, and the experimental protocol that BrainVoyager will use for GLM analysis.

4) Now, let’s get started with the GLM analysis. Click the Analysis menu and select General Linear Model: Single Study. Click Options and make sure that Exclude first condition ("Rest") is unchecked (by default it seems to be checked). Click Define Preds to define the predictors in terms of the PRT file opened earlier, and tick the Show All checkbox to visualize them. Notice the shape of the predictors – they are identical to the ones we used earlier for Question 3. Now click GO.

 
 

After the GLM has been fit, you should see a statistical heat map overlaid on top of the anatomical volume. By default, this initial map shows the statistical difference between the average of all conditions and the baseline. More on this later.

 
 

In order to create this heat map, BrainVoyager has computed the optimal beta weights at each voxel such that, when multiplied with the predictors, maximal variance in the BOLD signal is explained (under certain assumptions made by the model). An equivalent interpretation is that BrainVoyager is computing the beta weights that minimize the residuals. For each voxel, we then ask, “How well does our model of expected activation fit the observed data?” – which we can answer by computing the ratio of explained to unexplained variance.


It is important to understand that, in our example, BrainVoyager is computing two beta weights for each voxel – one for the Face predictor and one for the Hand predictor. For each voxel, the residuals are obtained by computing the difference between the observed signal and the modelled or predicted signal – which is simply the predictors vertically scaled by the beta weights. This is the same as what you did manually in Question 3.
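To make the idea of "two beta weights per voxel" concrete, here is a sketch using simulated numbers (it is not BrainVoyager's internal code): one design matrix, containing a constant plus Face and Hand predictors, is fit to several voxel time courses at once, giving each voxel its own pair of task betas, a residual time course, and a ratio of explained to total variance.

```python
import numpy as np

rng = np.random.default_rng(1)
n_vols, n_voxels = 160, 5

# Placeholder Face/Hand boxcars (a real analysis would build these from the
# protocol, as sketched earlier, and convolve them with an HRF)
face_pred = (np.arange(n_vols) // 8 % 4 == 1).astype(float)
hand_pred = (np.arange(n_vols) // 8 % 4 == 3).astype(float)

# Simulated voxel time courses (rows = volumes, columns = voxels);
# voxel 0 is made to "prefer" faces
data = rng.normal(100, 1, (n_vols, n_voxels))
data[:, 0] += 3.0 * face_pred

# One design matrix for every voxel: constant, Face predictor, Hand predictor
X = np.column_stack([np.ones(n_vols), face_pred, hand_pred])

# Fit all voxels at once; betas has shape (3, n_voxels):
# a constant, a Face beta, and a Hand beta for each voxel
betas, _, _, _ = np.linalg.lstsq(X, data, rcond=None)
predicted = X @ betas            # the predictors, vertically scaled by the betas
residuals = data - predicted     # what the model could not explain

# Ratio of explained to total variance (R^2) per voxel
ss_res = np.sum(residuals ** 2, axis=0)
ss_tot = np.sum((data - data.mean(axis=0)) ** 2, axis=0)

print("Face betas per voxel:", betas[1])
print("Hand betas per voxel:", betas[2])
print("R^2 per voxel:", 1 - ss_res / ss_tot)
```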

Now that we have computed betas and residuals for each voxel, we want to determine which voxels’ signals are most correlated or anticorrelated with the experimental predictors in our model. BrainVoyager uses t-statistics computed from the beta weights, predictors, and residuals to do this analysis.

For example, we can use a hypothesis test to test whether activation for the Face condition (i.e., the beta weight for Faces) is significantly higher than for Hands (i.e., the beta weight for Hands). Informally, for each voxel we ask, “Was activation for faces significantly higher than activation for hands?”

To answer this, it is insufficient to consider the beta weights alone. We also need to consider how noisy the data is, as reflected by the residuals. Intuitively, we can expect that the relationship between beta weights is more accurate when the residual is small.

Question 4: Why can we be more confident about the relationship between beta weights when the residual is small? If the residual were 0 at all time points, what would this say about our model, and about the beta weights? Think about this in terms of the examples in Questions 1 to 3.

BrainVoyager implements these kinds of hypothesis tests using a feature called contrasts. Contrasts allow you to specify the relationship between beta weights that you want to test.


5) From the Analysis menu, select Overlay General Linear Model..., then click the box next to Predictor 2 until it changes to [-]. Also make sure that Predictor 1 is set to [+]. Click OK to apply the contrast.

By doing this, you are specifying a hypothesis test of whether the beta weight for Faces is significantly different from (and greater than) the beta weight for Hands. This hypothesis test will be applied over all voxels, and the resulting t-statistic and p-value (error probability value) determine the colour intensity in the resulting heat map.
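For those curious about the arithmetic behind the map, here is a sketch of the standard GLM contrast t-test, applied to the same simulated data used in the earlier sketch. It illustrates the general formula; it is not a description of BrainVoyager's exact implementation.

```python
import numpy as np
from scipy import stats

# Same simulated setup as in the earlier sketch
rng = np.random.default_rng(1)
n_vols, n_voxels = 160, 5
face_pred = (np.arange(n_vols) // 8 % 4 == 1).astype(float)
hand_pred = (np.arange(n_vols) // 8 % 4 == 3).astype(float)
data = rng.normal(100, 1, (n_vols, n_voxels))
data[:, 0] += 3.0 * face_pred

X = np.column_stack([np.ones(n_vols), face_pred, hand_pred])
betas, _, _, _ = np.linalg.lstsq(X, data, rcond=None)
residuals = data - X @ betas

# Contrast vector: ignore the constant, take Face minus Hand
c = np.array([0.0, 1.0, -1.0])
dof = X.shape[0] - X.shape[1]                 # timepoints minus predictors

# Residual variance, standard error of the contrast, and t-statistic, per voxel
sigma2 = np.sum(residuals ** 2, axis=0) / dof
se = np.sqrt(sigma2 * (c @ np.linalg.inv(X.T @ X) @ c))
t = (c @ betas) / se

# Two-tailed p-values ("error probabilities") used to threshold the map
p = 2 * stats.t.sf(np.abs(t), dof)
print("t per voxel:", t)
print("p per voxel:", p)
```

Note how the residual variance sits in the denominator: the noisier a voxel's residuals, the larger the standard error, and the smaller the t-statistic for the same difference in beta weights.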

Since this heatmap is generated using a t-statistic at each voxel, a p-value can be used to threshold which voxels are coloured. The smaller the p value, the more conservative the threshold, and the more confidently we can interpret the result of the hypothesis test.

6) Use the Decrease Threshold and Increase Threshold buttons on the left-hand side of the main window to adjust the p-value threshold of this contrast.

Question 5: Decrease the threshold until nearly every voxel has been coloured in. What is the p-value for this threshold? How can you interpret the colours in this heatmap – does a voxel being coloured in orange or blue tell you anything about its beta weights at this threshold?

Question 6: Increase the threshold until you reach a p-value less than 0.05. What can you conclude about voxels that are coloured at this threshold?

Question 7: Increase the threshold a bit more, perhaps until you are satisfied with the number of coloured voxels outside the brain. Notice that there are blobs of blue to green voxels. The green voxels, under the current contrast (Face = 1, Hand = -1), have a highly negative test statistic. What can you conclude about the relationship between beta weights and about explainable variance in these voxels?

Locate Overlay Volume Maps under the Analysis menu. Click it and a dialog will appear. Uncheck Trilinear Interpolation and look at how the colour map on the brain changes.

 

Question 8: a) What kinds of changes do you observe, and can you broadly explain what Trilinear Interpolation does to the data?

Next, check Trilinear Interpolation again and go to the Statistics tab.

 

Question 8: b) What statistical correction is currently applied to the GLM results (i.e., what is BrainVoyager automatically set to use)?

Uncheck FDR and go to the Map Options. Set the p-value to 0.05. You now see a map that is uncorrected and uses a p-value of 0.05 as the cutoff.

 

Next, check Bonferroni and leave the p-value at 0.05.

Question 8: c) What do you see on the map? Can you find any activation that survived this correction?

Uncheck Bonferroni and go back to the Statistics tab. Find the cluster threshold option; if it is not enabled, enable it, then set the cluster size to 10.

 

Question 8: d) What changes do you observe? What does the change in cluster threshold mean? What issue does cluster thresholding try to address?
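Before answering the final question, it may help to see how these thresholds differ numerically. The sketch below applies an uncorrected threshold, a Bonferroni correction, and a Benjamini-Hochberg FDR procedure to a vector of simulated p-values; cluster thresholding is only described in a comment, since it additionally needs the voxels' spatial layout. All numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n_voxels = 10000

# Simulated p-values: mostly null voxels, plus a handful of "truly active" ones
p = rng.random(n_voxels)
p[:50] = rng.random(50) * 1e-4

alpha = 0.05

# Uncorrected: every voxel is tested at alpha
uncorrected = p < alpha

# Bonferroni: alpha is divided by the number of tests (voxels)
bonferroni = p < alpha / n_voxels

# Benjamini-Hochberg FDR: keep the largest k with p_(k) <= (k / n) * alpha
order = np.argsort(p)
ranked = p[order]
below = ranked <= (np.arange(1, n_voxels + 1) / n_voxels) * alpha
fdr = np.zeros(n_voxels, dtype=bool)
if below.any():
    k = np.max(np.where(below)[0])
    fdr[order[:k + 1]] = True

print("voxels surviving, uncorrected:", uncorrected.sum())
print("voxels surviving, Bonferroni:", bonferroni.sum())
print("voxels surviving, FDR:", fdr.sum())

# Cluster thresholding (conceptually): after a voxelwise threshold, keep only
# voxels that belong to spatially contiguous clusters of at least, say, 10 voxels.
```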

Question 8: e) You examined different kinds of multiple comparison corrections. Name pros and cons for each of the examined approaches. Which of the corrections do you think yields the most valid results?