% Statistical Parametric Mapping - SPM2
% ______________________________________________________________________
%
% ___ ____ __ __
% / __)( _ \( \/ )
% \__ \ )___/ ) ( Statistical Parametric Mapping
% (___/(__) (_/\/\_) SPM - http://www.fil.ion.ucl.ac.uk/spm
%
% ______________________________________________________________________
%
% Statistical Parametric Mapping refers to the construction and
% assessment of spatially extended statistical processes used to test
% hypotheses about [neuro]imaging data from SPECT/PET & fMRI. These
% ideas have been instantiated in software that is called SPM.
% This software also deals with other issues in image analysis
% such as spatial registration and normalisation problems.
%
% ----------------
%
% Please refer to this version as "SPM2" in papers and communications.
%
% ----------------
% Contents:
% 1) SPM - The software
% 2) Release notes for SPM2
%
%
% ======================================================================
% 1) S P M - T h e s o f t w a r e
% ======================================================================
%
% SPM was written to organise and interpret our data (at the Wellcome
% Department of Cognitive Neurology, and previously at the MRC
% Cyclotron Unit, London UK). The distributed version is the same as
% that we use ourselves.
%
% SPM is made freely available to the [neuro]imaging community, to
% promote collaboration and a common analysis scheme across
% laboratories.
%
% ______________________________________________________________________
% Authors
%
% SPM is developed under the auspices of The Wellcome Department of
% Imaging Neuroscience, a department of the Institute of Neurology at
% University College London.
%
% SPM94 was written primarily by Karl Friston in the first half of
% 1994, with assistance from John Ashburner (MRC-CU), Jon Heather
% (WDoIN), and Andrew Holmes (Department of Statistics, University of
% Glasgow). Subsequent development, under the direction of Prof. Karl
% Friston at the Wellcome Department of Imaging Neuroscience, has
% benefited from substantial input (technical and theoretical) from:
% John Ashburner (WDoIN), Andrew Holmes (WDoIN & Robertson Centre for
% Biostatistics, University of Glasgow, Scotland), Jean-Baptiste Poline
% (WDoIN & CEA/DRM/SHFJ, Orsay, France), Christian Buechel (WDoIN),
% Matthew Brett (MRC-CBU, Cambridge, England), Chloe Hutton (WDoIN) and
% Keith Worsley (Department of Statistics, McGill University, Montreal,
% Canada).
%
% SPM2 was developed by: Jesper Andersson (unwarping), John Ashburner
% (spatial, image handling), Karl Friston (project director,
% fMRI design), Andrew Holmes (statistics, user interfaces),
% Jean-Baptiste Poline (statistics), Philippe Ciuciu, Tom Nichols
% (statistics), Matthew Brett (WinNT implementation and other stuff),
% Volkmar Glauche (visualisation) & Darren Gitelman (various bits).
% Theoretical input was provided by Christian Buchel, Daniel Glaser,
% Rik Henson, Stefan Kiebel, Will Penny, Keith Worsley and loads of
% other people. We would like to thank everyone who has provided
% feedback on SPM.
%
% We envisage that this software will be used in diverse ways.
% Although SPM has grown out of a PET background, it is now an
% established package for the analysis of fMRI data, and is also used
% for structural data.
%
% ______________________________________________________________________
% Resources
%
% The SPMweb site is the central repository for SPM resources:
% http://www.fil.ion.ucl.ac.uk/spm
% Introductory material, installation details, documentation, course
% details and patches are published on the site.
%
% There is an SPM eMail discussion list, hosted at
% . The list is monitored by the authors, and
% discusses theoretical, methodological and practical issues of
% Statistical Parametric Mapping and SPM. Subscribe by sending the one
% line message: "join spm firstname lastname" to
% . (Users at NIH or UC-Davis should join
% their local SPM feeds.) The SPMweb site has further details:
% http://www.fil.ion.ucl.ac.uk/spm/help
%
% ----------------
%
% In order to use the advanced spatial, statistical modelling and
% inference tools of SPM, it is vital to have at least a conceptual
% understanding of the theoretical underpinnings. Therefore, we
% recommend the theoretical articles published in the peer reviewed
% literature, and the SPMcourse notes (available from the SPMweb site).
%
% ----------------
%
% Please report bugs to the authors at
% Peculiarities may actually be features(!), and should be raised on the
% SPM eMail discussion list, .
% ______________________________________________________________________
% The SPM distribution
%
% The SPM software is a suite of MATLAB functions, scripts and data
% files, with some externally compiled C routines, implementing
% Statistical Parametric Mapping. MATLAB, a commercial engineering
% mathematics package, is required to use SPM. MATLAB is produced by The
% MathWorks, Inc. Natick, MA, USA. http://www.mathworks.com/
% eMail:info@mathworks.com. SPM requires only core MATLAB to run (no
% special toolboxes are required).
%
% SPM2 is written for Matlab versions 6.0, 6.1 and 6.5 under UNIX,
% LINUX and Windows (95/98/NT). SPM2 will not work with versions
% of Matlab prior to 6.0. Later versions of Matlab (released after
% SPM2) will probably need additional patches in order to run. Once
% developed, these will be made available from:
% ftp://ftp.fil.ion.ucl.ac.uk/spm/spm2_updates
%
% Binaries of the external C-mex routines are provided for Solaris,
% Macintosh, Linux and Windows only. Users of other platforms need an
% ANSI C compiler to compile the supplied C source (Makefile provided).
% ______________________________________________________________________
% Copyright & licencing
%
% SPM (being the collection of files given in the manifest in the
% Contents.m file) is free but copyright software, distributed under
% the terms of the GNU General Public Licence as published by the Free
% Software Foundation (either version 2, as given in file
% spm_LICENCE.man, or at your option, any later version). Further
% details on "copyleft" can be found at http://www.gnu.org/copyleft/.
%
% SPM is supplied as is.
% No formal support or maintenance is provided or implied.
% ______________________________________________________________________
% File formats
%
% The various file types included in SPM are:
%
% spm_*.m files: ASCII files that form the main structure for SPM.
% Most of SPM is written as MatLab functions. These are compiled
% at their first invocation, leading to a slight delay in the
% startup of some routines. MatLab script files are occasionally
% used. These are interpreted by MATLAB, but have the advantage
% of working in the base MatLab workspace, such that their
% results are available to the user after completion.
%
% Clearly MatLab is slower than writing everything in fully optimised
% C; however, the didactic value of a pseudo-code specification of this
% sort is judged to outweigh implementational efficiency. Further,
% MatLab *is* optimised for matrix and vector
% operations, which are utilised whenever possible.
%
% spm_*.c: ASCII files that are compiled in a MATLAB-specific
% fashion to produce programs that can be called directly from
% MATLAB. Once compiled these routines are suffixed in a
% platform dependent fashion (e.g. spm_*.mexsol or mexlx). These
% routines implement memory mapping and some numerical and image
% operations that are called frequently. Precompiled Mex files
% are provided for Solaris2, Linux and Windows platforms; a
% Makefile is included for other platforms.
%
% spm_*.man: ASCII files containing manual pages.
%
% *.mat MATLAB specific data files that can be loaded directly
% into MATLAB. These files contain images and other data in matrix
% format, usually in double precision (see the MATLAB user's guide).
%
% Where possible the user interface and computational or analytical
% aspects of the software have been segregated such that spm_*_ui.m
% sets up the user interface and assembles the appropriate input
% arguments for spm_*.m. spm_*.m contains the statistical and
% mathematical implementation of a generic nature. spm_*.m would be of
% greater interest to those who wish to incorporate SPM into an
% existing package with its own 'front end'.
%
% ______________________________________________________________________
% Image formats
%
% SPM uses the simple header and flat binary image file format of
% ANALYZE-7 (Mayo Clinic, Rochester, USA; http://www.mayo.edu/bir/),
% with slight customisations to the header. See "Data Format" in the
% online help [spm_format.man]. It can also read MINC & ECAT-7 images.
%
% You will either need to convert your image files to one of these
% formats (preferably Analyze), or construct an additional module for
% the SPM memory mapping subsystem to read your file format. Image
% conversion utilities for your image file format may be available in
% other packages, or may have been specially written by other SPM
% users. (Consult the SPM email discussion list, described above, by
% first searching the archives, and posting a query if necessary.)
% Unfortunately we have no resources to provide image conversion
% software, although we will collaborate in developing SPM memory
% mapping read-modules for popular image formats for inclusion in SPM.
%
% ======================================================================
% 2) RELEASE NOTES FOR S P M 2
% ======================================================================
%
% 1. OVERVIEW
% 2. NEW FUNCTIONALITY
% 2.1 Non-sphericity and ML estimation
% 2.1.1 User specification
% 2.1.2 Software implementation
% 2.2 Bayesian estimation and inference
% 2.2.1 User specification
% 2.2.2 Software implementation
% 2.3 Inference based on False Discovery rate
% 2.4 Dynamic Causal Modelling
% 2.4.1 User specification
% 2.4.2 Software implementation
% 2.5 Hemodynamic modelling
% 2.5.1 User specification
% 2.5.2 Software implementation
% 2.6 Hemodynamic deconvolution
% 2.7 Bias correction
% 2.8 fMRI Unwarping
% 3. IMPROVEMENTS - STRUCTURAL
% 3.1 Software architecture
% 3.2 File formats
% 4. IMPROVEMENTS - FUNCTIONAL
% 4.1 Interpolation
% 4.2 Realignment
% 4.3 Coregistration
% 4.4 Spatial normalisation
% 4.5 Segmentation
% 4.6 Specifying fMRI designs
% 4.7 Estimation
% 4.8 Results
%
%
%
% 1. OVERVIEW
% SPM2 is a software package for the analysis of brain imaging data
% sequences. The sequences can be a series of images from different cohorts,
% or time-series from the same subject. The current release is designed for
% the analysis of fMRI, PET, SPECT and similar modalities. Future releases
% will incorporate the analysis of EEG and MEG, enabling multimodality
% integration or fusion. Data analysis can be divided into three parts. The
% first comprises spatial pre-processing components that apply spatial
% operators to the data in order to register, warp or smooth the images into
% the same space. Registration and spatial normalisation do not change the
% voxel values but simply relocate them. The subsequent two parts are
% estimation of the model's parameters (e.g. activations) and inferences about
% those estimates. The model used is the general linear model, which can
% embrace simple variants, like a single-sample t-test, through to very
% complicated models such as generalised convolution models for dynamical
% systems. Previously, SPM has used unbiased ordinary least squares (OLS)
% estimators that, under assumptions of identically and independently
% distributed errors (i.e. i.i.d. or sphericity) are the same as maximum
% likelihood (ML) estimators.
% SPM2 now uses restricted maximum likelihood (ReML) estimates of variance
% components (pooled over voxels) to allow departures from i.i.d.
% assumptions. The ensuing non-sphericity estimates are then used to form ML
% estimates using weighted least squares (WLS). This increases the scope of
% data that can be analysed with SPM and provides more efficient and precise
% [hyper-]parameter estimates.
% Classical inference about parameters, or contrasts of the model parameters,
% proceeds using conventional parametric statistics to provide P values
% pertaining to the maxima, spatial extent of voxel-clusters or number of
% clusters. These P values are used to reject the null hypothesis. The P
% values have to be adjusted to protect against family-wise false-positives
% over the search volume. This adjustment uses a Gaussian field correction
% that, for spatially extended continuous data, plays the same role as a
% Bonferroni correction does with discrete data.
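%
% As a toy illustration of the discrete analogue (the Gaussian field
% correction itself also depends on the smoothness of the images and is
% not reproduced here), a Bonferroni correction simply divides the
% family-wise alpha by the number of tests:

```python
def bonferroni_threshold(alpha, n_tests):
    """Per-test P value threshold controlling the family-wise
    error rate at alpha over n_tests independent tests."""
    return alpha / n_tests

# e.g. 100000 voxels tested at a family-wise alpha of 0.05
thr = bonferroni_threshold(0.05, 100000)
```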
% In addition to WLS estimators and classical inference, SPM2 also supports
% conditional or Bayesian estimators and conditional inference. In this
% instance the statistical parametric maps become posterior probability maps
% (PPMs), where the posterior probability is a probability of an effect given
% the data. There is no multiple comparison problem in Bayesian inference and
% the posterior P values do not require adjustment.
% In what follows we describe the key changes from SPM99. These are described
% in terms of new functionality, improved implementation of existing
% functions and, finally, changes to the software architecture.
%
% 2. NEW FUNCTIONALITY
% Many new components of SPM2 rely on an Expectation Maximisation (EM)
% algorithm that is used to estimate restricted maximum likelihood (ReML)
% estimates of various hyperparameters or variance components. This enables a
% number of useful things. For example, non-sphericity can be estimated in
% single-level observation models and, in hierarchical observation models,
% one can adopt a parametric empirical Bayesian (PEB) approach to parameter
% estimation and inference. Furthermore, the EM algorithm allows fully
% Bayesian estimation for any general linear model under Gaussian
% assumptions. Examples of all these applications will be found in SPM2.
%
% 2.1 Non-sphericity and ML estimation
% Sphericity refers to the assumption of identically and independently
% distributed errors. Departures from this assumption can be in terms of
% non-identical distributions (e.g. heterogeneity of variance among
% conditions or groups, known as heteroscedasticity) or in terms of
% non-independence: correlations amongst the errors that may be induced by
% observing different things in the same subject. The two most pertinent
% sources of non-sphericity in neuroimaging are heteroscedasticity (e.g. when
% entering the coefficients of different basis functions into a second-level
% analysis), and serial correlations in fMRI. It is important to estimate
% non-sphericity for two reasons: (i) Proper estimates of the co-variances
% among the error terms are needed to construct valid statistics. In this
% instance non-sphericity enters twice, first in the estimate of the variance
% component hyperparameters (e.g. error variance) and second in the
% computation of the degrees of freedom. The effective or adjusted degrees of
% freedom relies upon the Satterthwaite approximation as described in Worsley
% & Friston (1995) and is commonly employed in things like the
% Greenhouse-Geisser correction. (ii) The second reason why non-sphericity
% estimation is important is that it enables multivariate analyses to be
% implemented within a univariate framework. For example, it is entirely
% possible to combine PET and fMRI data within the same statistical model
% allowing for different error variances and serial correlations between the
% two modalities. The parameter estimates that are obtained from a
% multivariate analysis and the equivalent univariate analysis are identical.
% The only point of departure is at the point of inference. In the univariate
% framework this is based upon an F ratio using the Satterthwaite
% approximation, whereas multivariate inference uses a slightly different
% statistic and distributional approximation (usually one based upon Wilks'
% Lambda). The critical thing here is that multivariate analyses can now
% proceed within the univariate framework provided by SPM2.
% By default SPM2 uses WLS to provide ML estimators based on the
% non-sphericity. In this case the weighting 'whitens' the errors rendering
% them i.i.d. The effective degrees of freedom then revert to the classical
% degrees of freedom and the Satterthwaite approximation becomes exact. It
% is possible to use any WLS estimate (SPM2 will automatically compute the
% effective degrees of freedom) but this is not provided as an option in the
% user interface. This departure from SPM99 has been enabled by highly
% precise non-sphericity estimates, obtained by pooling over voxels using
% ReML.
% In addition to finessing the computation of the t statistics and effective
% degrees of freedom, the non-sphericity estimates are also used to provide
% conditional estimates in the Bayesian analysis stream (see below).
%
% 2.1.1 User specification
% For PET, you are requested to specify whether or not the errors are
% identically and independently distributed over the different levels of each
% factor. In any experimental design there will be one factor whose different
% levels are assumed to be replications. This is necessary in order to pool
% the sum of squared residuals over these levels to estimate the error
% variance hyperparameter. By default, SPM2 assumes that this factor is the
% one with the greatest number of levels. A common use of this non-sphericity
% option would be in the second-level analysis of basis function coefficients
% from a first-level fMRI experiment. Clearly, the coefficients for different
% basis functions will have different scaling and error variance properties
% and will not be identically distributed. Furthermore, they may not be
% independent, because the error terms from any two basis functions may be
% coupled because they were observed in the same subject. Using the
% non-sphericity option allows one to take up multiple contrasts from the
% same subject to a second level, to emulate a mixed or random effects
% analysis. In fMRI the key non-sphericity is attributable to serial
% correlations in the fMRI time series. SPM2 assumes that an auto-regressive
% process with white noise can model these. Critical to this assumption is
% the decomposition of fMRI time-series into three components: induced
% experimental effects embodied in the design matrix, fixed deterministic
% drifts in the confounds (that are used in high-pass filtering) and,
% finally, a stochastic error term that shows short-term correlations
% conforming to an AR(1) process. The AR(1) characterisation is only
% appropriate when long-term correlations due to drift have been modelled
% deterministically through high-pass filtering or, equivalently, by inclusion
% in the design matrix as nuisance variables.
% If non-sphericity is modelled, weighted least squares (WLS) will be used
% during estimation to render the residuals i.i.d. This WLS estimator then
% becomes a ML estimator, which is the most efficient of linear unbiased
% estimators. To over-ride this default WLS, simply specify the required
% weighting matrix before estimation (this matrix is in SPM.xX.W, see below).
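%
% The assumed AR(1) error covariance can be illustrated with a short
% Python/NumPy sketch (hypothetical code, not the SPM implementation;
% in SPM2 the AR coefficient is estimated by ReML rather than fixed):

```python
import numpy as np

def ar1_covariance(n_scans, rho):
    """Correlation matrix of an AR(1) process: V[i, j] = rho**|i - j|,
    so correlation decays with the lag between scans."""
    idx = np.arange(n_scans)
    return rho ** np.abs(idx[:, None] - idx[None, :])

V = ar1_covariance(4, 0.3)   # neighbouring scans correlate at 0.3
```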
%
% 2.1.2 Software implementation
% The hyperparameters or coefficients governing the different variance
% components of the error co-variances are estimated using ReML. The form of
% the variance components is described by a covariance basis set in the
% non-sphericity sub-field of the SPM structure (SPM.xVi.Vi, see below). In
% SPM2 a multiple hyperparameter problem is reduced to a single
% hyperparameter problem by factorising the non-sphericity into a single
% voxel-specific hyperparameter and the remaining hyperparameters that are
% invariant over voxels. By reformulating a multiple hyperparameter problem
% in this way, one can employ standard least squares estimators and classical
% statistics in the usual way. The voxel invariant hyperparameters are
% estimated at voxels that survive the default F test for all effects of
% interest (assuming sphericity) during the estimation stage in SPM. This
% entails a first pass of the data to identify the voxels that contribute to
% the ReML hyperparameter estimation. A second pass then uses the
% non-sphericity V (in SPM.xVi.V) to compute the appropriate weighting matrix
% (in SPM.xX.W) such that WW' = inv(V). [If W is already specified only a
% single estimation pass is required.]
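%
% The weighting computation can be sketched as follows (Python/NumPy
% notation for illustration only; SPM2 itself does this in MATLAB and
% stores the result in SPM.xX.W). A symmetric W with WW' = inv(V) is
% the inverse matrix square root of V, obtained here by
% eigendecomposition:

```python
import numpy as np

def whitening_matrix(V):
    """Symmetric W such that W @ W.T == inv(V), i.e. W = V**(-1/2);
    pre-multiplying the data by W renders correlated errors i.i.d."""
    vals, vecs = np.linalg.eigh(V)
    return vecs @ np.diag(1.0 / np.sqrt(vals)) @ vecs.T

# an AR(1) non-sphericity with coefficient 0.3 over 5 scans
idx = np.arange(5)
V = 0.3 ** np.abs(idx[:, None] - idx[None, :])
W = whitening_matrix(V)   # W @ V @ W.T is the identity
```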
%
% 2.2 Bayesian estimation and inference
% Having performed a weighted least squares estimation one can optionally
% revisit all the 'within mask' voxels to obtain conditional or posterior
% estimates. Conditional estimates are the most likely parameter estimates
% given the data. This is in contrast to the least squares maximum likelihood
% estimators that are the estimates that render the data most likely. The
% critical distinction between the two estimators relies upon prior
% information about the parameters. Effectively, under Gaussian assumptions
% and the sorts of hierarchical models used in SPM2, the conditional
% estimators are shrinkage estimators. This means that the estimates shrink
% toward their prior expectation that is typically zero. The degree of
% shrinkage depends upon the relative variance of the errors and the prior
% distribution. Although the likelihood estimator is the most accurate from
% the point of view of any one voxel, the conditional estimators minimize the
% equivalent cost function over the voxels analysed. Currently, in SPM2,
% these voxels are all 'in mask' or intracranial voxels. In short, the
% conditional estimators will not be the best for each voxel but will, on
% average, be the best over all voxels. The conditional estimators provided in
% SPM2 rely upon an empirical Bayesian approach, where the priors are
% estimated from the data. Empirical Bayes rests on a hierarchical
% observation model. The one employed by SPM2 is perhaps the simplest, and
% harnesses the fact that imaging analyses implicitly look for the same
% effect over many voxels. Consequently, a second level in the observation
% hierarchy is
% provided by voxel to voxel variation in the parameter estimates that is
% used as the prior variance. Put simply, the priors are estimated by
% partitioning the observed variance at each and every voxel into a
% voxel-specific error term and a voxel-wide component generated by voxel to
% voxel variations in the parameter estimates. Given the prior variance, one
% is in a position to work out the posterior or conditional distribution of
% the parameter estimates. In turn, this allows the computation of the
% posterior probability that the estimate exceeds some specified threshold.
% These posterior probabilities constitute a posterior probability map (PPM)
% that is a summary of the posterior density specific to the specified
% threshold.
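%
% For a single parameter the shrinkage estimator and the ensuing
% posterior probability can be written down directly. A hypothetical
% one-parameter Python sketch (the actual multivariate computation is
% performed by spm_spm_Bayes.m):

```python
import math

def posterior_mean_var(b_ml, var_error, var_prior):
    """Conditional (posterior) mean and variance of one parameter
    under a zero-mean Gaussian prior: a shrinkage estimator."""
    post_var = 1.0 / (1.0 / var_error + 1.0 / var_prior)
    post_mean = post_var * b_ml / var_error      # shrunk towards zero
    return post_mean, post_var

def prob_exceeds(mean, var, gamma):
    """Posterior probability that the effect exceeds threshold gamma
    (one entry of a PPM)."""
    return 0.5 * math.erfc((gamma - mean) / math.sqrt(2.0 * var))

m, v = posterior_mean_var(2.0, 1.0, 4.0)   # ML estimate 2.0 shrinks to 1.6
ppm_value = prob_exceeds(m, v, 0.0)
```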
%
% 2.2.1 User specification
% Having performed the usual least squares estimation simply select Bayesian
% estimators and the appropriate SPM.mat file. Additional Beta images will be
% created that are prefixed with a C to denote conditional estimates. When
% proceeding to inference, if the contrast specified is a t contrast (and
% conditional estimates are available) you will be asked whether the
% inference should be Bayesian or Classical. If you select Bayesian you will
% then be prompted for a threshold to form the posterior probability maps. By
% default this is one standard deviation of the prior distribution. The
% resulting PPM is treated in exactly the same way as the SPM, with the
% exception of P value adjustments that are required only in a classical
% context. Inferences based upon PPMs will generally have higher face
% validity, in that they refer to activation effects or differences among
% cohorts that exceed a meaningful size. Furthermore, by thresholding the PPM
% appropriately one can infer that activation did not occur. This is
% important in terms of characterising treatment effects or indeed true
% functional segregation.
%
% 2.2.2 Software implementation
% The expected variance components over voxels attributable to error and
% parameter variation at the second level (i.e. voxels) are estimated using
% ReML over all intracranial voxels. spm_spm_Bayes.m then revisits each voxel
% in turn to compute conditional parameter estimates and voxel-specific
% hyper-parameters. The conditional variance is computed using a first-order
% Taylor expansion about the expectation of the hyperparameters over voxels.
%
% 2.3 Inference based on False Discovery rate
% In addition to the Gaussian field correction, 'adjusted' p values are now
% provided based on FDR. For a given threshold on an SPM, the False Discovery
% Rate is the proportion of supra-threshold voxels which are false positives.
% Recall that the thresholding of each voxel consists of a hypothesis test,
% where the null hypothesis is rejected if the statistic is larger than
% threshold. In this terminology, the FDR is the proportion of rejected
% tests where the null hypothesis is actually true. An FDR procedure produces
% a threshold that controls the expected FDR at or below q. In comparison, a
% traditional multiple comparisons procedure (e.g. Bonferroni or random field
% correction) controls the family-wise error rate (FWER) at or below alpha.
% FWER is the chance of one or more false positives anywhere (not just among
% supra-threshold voxels). If there is truly no signal in the image
% anywhere, then an FDR procedure controls FWER, just as Bonferroni and
% random field methods do. (Precisely, controlling E(FDR) yields weak
% control of FWER.) If there is some signal in the image, an FDR method
% will be more
% powerful than a traditional method.
% For careful definition of FDR-adjusted p-values (and distinction between
% corrected and adjusted p-values) see Yekutieli & Benjamini (1999).
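%
% The Benjamini-Hochberg step-up procedure that underlies the FDR
% threshold can be sketched in a few lines of Python (illustrative
% only, applied here to an arbitrary set of P values rather than the
% voxels of an SPM):

```python
import numpy as np

def fdr_threshold(pvals, q=0.05):
    """Benjamini-Hochberg: the largest p(i) (sorted ascending) with
    p(i) <= (i/m) * q; tests at or below it are declared positive.
    Returns 0.0 if nothing survives."""
    p = np.sort(np.asarray(pvals, dtype=float))
    m = p.size
    crit = q * np.arange(1, m + 1) / m
    passing = np.nonzero(p <= crit)[0]
    return p[passing[-1]] if passing.size else 0.0

thr = fdr_threshold([0.001, 0.008, 0.039, 0.041, 0.30, 0.65], q=0.05)
```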
%
% 2.4 Dynamic Causal Modelling
% Dynamic causal modelling is the final option in the data analysis stream
% and enables inferences about inter-regional coupling. It is based upon a
% generalised convolution model of how experimentally manipulated inputs
% evoked changes in particular cortical areas and how these changes induced
% changes elsewhere in the brain. The underlying model is an
% input-state-output dynamical model that incorporates the hemodynamic model
% described in the literature. Critically, the most important [coupling]
% parameters of this model are connection strengths among pre-selected
% volumes of interest. Conditional estimates of these connection strengths
% are derived using a posterior mode analysis that rests upon exactly the
% same EM algorithm used in the previous sections. In other words, the
% algorithm performs a gradient ascent on the log posterior probability of
% the connection strengths given the data. The utility of dynamic causal
% modelling is that it enables inferences about connection strengths and the
% modulation of connection strengths by experimentally defined effects. For
% example, one can make inferences about the existence of effective
% connections from posterior parietal cortex to V5 and how attention
% increases or decreases this effective connectivity. Interestingly, if one
% took the dynamic causal model, focussed on a single region and linearised
% it through a first-order Taylor expansion, one would end up with exactly
% the same convolution model used in conventional analyses with a hemodynamic
% response function and various partial derivatives. This means that the
% standard approach to fMRI time-series is a special case of a dynamic causal
% model. In contradistinction to previous approaches to effective
% connectivity, dynamic causal modelling views all measured brain responses
% as evoked by experimental design. The only noise or stochastic component is
% observation error. This contrasts with structural equation modelling, and
% other variants, where it is assumed that the dynamics are driven by
% unobservable noise. We regard dynamic causal modelling (DCM) as a much more
% plausible approach to effective connectivity given fMRI data from designed
% experiments.
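%
% The neuronal part of the underlying model is commonly written as a
% bilinear system, dx/dt = (A + sum_j u_j*B_j)x + Cu, where A holds the
% intrinsic connections, B_j their modulation by input j, and C the
% direct influence of the inputs. A toy forward integration
% (hypothetical Python sketch; DCM itself estimates these matrices
% with EM rather than integrating a known model):

```python
import numpy as np

def integrate_bilinear(A, B, C, u, dt=0.1):
    """Euler integration of dx/dt = (A + sum_j u[t, j]*B[j]) x + C u[t],
    starting from x = 0; returns the state trajectory."""
    x = np.zeros(A.shape[0])
    traj = []
    for u_t in u:                              # u has shape (T, n_inputs)
        J = A + sum(u_t[j] * B[j] for j in range(len(B)))
        x = x + dt * (J @ x + C @ u_t)
        traj.append(x.copy())
    return np.array(traj)

# two regions; one input drives region 0 and modulates the 0 -> 1 connection
A = np.array([[-1.0, 0.0], [0.4, -1.0]])       # intrinsic connections
B = [np.array([[0.0, 0.0], [0.3, 0.0]])]       # modulation by input 1
C = np.array([[1.0], [0.0]])                   # direct input to region 0
traj = integrate_bilinear(A, B, C, np.ones((50, 1)))
```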
%
% 2.4.1 User specification
% To specify a DCM, first extract the required volumes of interest using VOI
% in the results section. These are stored in VOI.mat files. You will be
% requested to select from these files. In addition you will be asked to
% specify the experimental inputs or causes that are exactly the same as
% those used to construct the original design matrix. Once the program knows
% the number of regions, and experimental inputs it has to accommodate, it
% will ask you to specify the connectivity structure, first in terms of
% intrinsic connections and then in terms of those that are modulated by each
% input. Once the specification is complete the model parameters will be
% estimated and can be reviewed using the DCM results section. Inferences for
% DCM are based upon posterior probabilities and require no correction for
% multiple comparisons.
%
% 2.4.2 Software implementation
% The identification of a DCM uses a generic set of routines for non-linear
% system identification under Gaussian assumptions. Essentially, these can be
% regarded as Gauss-Newton ascent upon the log of the posterior probability.
% This provides the maximum a posteriori or posterior mode and can therefore
% be regarded as a posterior mode analysis using a Gaussian approximation to
% the posterior density. The posterior probabilities are based upon fully
% Bayesian priors that, in turn, reflect natural constraints upon the system
% that ensure it does not exponentially diverge. Priors on the biophysical
% parameters of the hemodynamic response per se were determined using
% previous empirical studies. See spm_dcm_ui.m and spm_nlsi.m for operational
% details.
%
% 2.5 Hemodynamic modelling
% This, from a technical point of view, is exactly the same as DCM but is
% restricted to a single voxel or region. In this instance the interesting
% parameters are the connection of the experimentally designed inputs or
% causes to flow-inducing state variables of the hemodynamic model. The
% posterior distributions of these connections or efficacies are provided to
% enable inferences about the ability of a particular input or compound of
% causes to evoke a response.
%
% 2.5.1 User specification
% Within the results section, having placed the cursor over the volume of
% interest, you will be asked to specify which experimental causes you want
% to enter into the model. The computation then proceeds automatically and a
% separate figures window will show the results.
%
% 2.5.2 Software implementation
% This is a streamlined version of the DCM. See spm_hdm_ui.m for details.
%
% 2.6 Hemodynamic deconvolution
% There is an option in the main menu to form PPIs from fMRI data. PPIs are
% psychophysiological or physiophysiological interaction terms that are
% usually entered into a linear model as bilinear terms. Because the
% interaction takes place at a neuronal level (as opposed to a hemodynamic
% level), it is necessary to deconvolve the observed fMRI time-series to
% estimate the
% underlying neuronal responses. This deconvolution uses the same EM
% procedure and hierarchical principles used by the Bayesian analyses above.
% It can be regarded as Wiener filtering using empirically determined priors
% on the frequency structure of the neuronal signal.
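%
% In the frequency domain, Wiener filtering amounts to dividing by the
% transfer function of the hemodynamic kernel while suppressing
% frequencies where noise dominates. A hypothetical Python sketch with
% a scalar signal-to-noise prior (SPM2's priors on the frequency
% structure are empirically determined, not a single scalar):

```python
import numpy as np

def wiener_deconvolve(y, h, snr):
    """Recover the underlying signal from y = conv(x, h) + noise,
    given kernel h and an assumed scalar signal-to-noise ratio."""
    n = y.size
    H = np.fft.fft(h, n)
    G = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)   # Wiener filter
    return np.real(np.fft.ifft(G * np.fft.fft(y)))

# a boxcar "neuronal" signal blurred by a smooth (hemodynamic-like) kernel
x = np.zeros(64)
x[20:30] = 1.0
t = np.arange(64)
h = np.exp(-0.5 * ((t - 4.0) / 2.0) ** 2)
h /= h.sum()
y = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))   # blurred signal
x_hat = wiener_deconvolve(y, h, snr=100.0)                # approximates x
```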
%
% 2.7 Bias Correction
% A module for bias correcting MR images has been incorporated into SPM2.
% Applying bias correction to MR images may allow more accurate spatial
% registration (or other processing) to be achieved with images corrupted by
% smooth intensity variations.
% The correction model is non-parametric, and is based on minimising a
% function related to the entropy of the image intensity histogram. A number
% of approaches already exist that minimise the entropy of the intensity
% histogram of the log-transformed image. This approach is slightly different
% in that the cost function is based on histograms of the original
% intensities, but includes a correction so that the solution is not
% biased towards the degenerate one of uniformly scaling the image to
% zero.
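%
% The direction of this cost function can be seen in a toy example
% (a Python/numpy sketch, not the SPM2 implementation): a smooth
% multiplicative bias field blurs a sharp two-tissue intensity
% histogram and therefore raises its entropy, so minimising histogram
% entropy favours removing the bias.

```python
import numpy as np

def histogram_entropy(img, bins=64):
    """Shannon entropy (nats) of the image intensity histogram."""
    counts, _ = np.histogram(img, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# toy two-tissue "image": two sharp intensity clusters
rng = np.random.default_rng(0)
tissue = np.where(rng.random(10000) < 0.5, 1.0, 2.0)
img = tissue + 0.01 * rng.standard_normal(10000)

# a smooth multiplicative bias spreads the clusters out
bias = 1.0 + 0.3 * np.sin(np.linspace(0.0, np.pi, 10000))
e_clean = histogram_entropy(img)
e_biased = histogram_entropy(img * bias)
```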
%
% 2.8 fMRI Unwarping
% In addition to realigning the time series (transforming the images
% according to a rigid-body model), there is the option of modelling the
% residual movement-related variance in an EPI (fMRI) time series that can
% be explained by a model of susceptibility-by-movement interactions. This
% is given by the "Realign & Unwarp" option.
%
% Susceptibility artefacts in EPI time series are a consequence of the
% "field disturbances" caused by the presence of an object in the field.
% Susceptibility is a property of a material, and can be thought of as
% the "resistance" put up by that material against being magnetised.
% At the interface between materials with different susceptibility,
% rather severe field disturbances will ensue, i.e. the field is no
% longer homogeneous. Since varying field strengths (gradients) are used
% to encode position in MRI, these disturbances cause spatial
% misplacement of the RF signal, i.e. geometric distortions. Because of
% the low bandwidth in the phase-encode direction, these distortions
% will be appreciable mainly in that direction (typically the
% anterior-posterior direction). The distortions are noticeable mainly
% near air-tissue interfaces inside the skull, i.e. near the sinuses and
% the auditory canals.
%
% In these areas in particular the observed image is a severely warped
% version of reality, much like a funny mirror at a fairground. When one
% moves in front of such a mirror one's image will distort in different
% ways, and one's head may change from very elongated to seriously
% flattened. If we were to take digital snapshots of the reflection at
% these different positions, it is rather obvious that realignment alone
% will not suffice to bring them into a common space.
%
% The situation is similar with EPI images: an image collected at one
% subject position will not be identical to one collected at another.
% We call this effect the susceptibility-by-movement interaction. The
% "Unwarp" toolbox is predicated on the assumption that the
% susceptibility-by-movement interaction is responsible for a sizeable
% part of the residual movement-related variance.
%
% Assume that we know how the deformations change when the subject changes
% position (i.e. we know the derivatives of the deformations with respect
% to subject position). That means that for a given time series and a
% given set of subject movements we should be able to predict the "shape
% changes" in the object and the ensuing variance in the time series.
% It also means that, in principle, we should be able to formulate the
% inverse problem, i.e. given the observed variance (after realignment)
% and known (estimated) movements we should be able to estimate how
% deformations change with subject movement.
%
% We have made an attempt at formulating such an inverse model, and at
% solving for the "derivative fields". A deformation field can be thought
% of as little vectors at each position in space showing how that particular
% location has been deflected. A "derivative field" is then the rate of
% change of those vectors with respect to subject movement. Given these
% "derivative fields" we should be able to remove the variance caused by
% the susceptibility-by-movement interaction. Since the underlying model
% is so restricted we would also expect experimentally induced variance
% to be preserved.
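%
% The first-order model sketched above can be written down directly (a
% hypothetical Python/numpy illustration with made-up numbers, not the
% Unwarp code): the deformation for a given scan is the static field
% plus the derivative fields scaled by that scan's movement parameters.

```python
import numpy as np

def predicted_deformation(static_field, derivative_fields, movement):
    """First-order model: deformation(q) = static field plus the
    derivative fields scaled by the movement parameters q.

    static_field:      (N,)   displacement at the reference position
    derivative_fields: (P, N) rate of change of displacement per
                              movement parameter (what Unwarp solves for)
    movement:          (P,)   estimated movement for one scan"""
    return static_field + derivative_fields.T @ movement

# toy: one movement parameter (e.g. pitch) over 16 voxels; the numbers
# are purely illustrative
static = np.zeros(16)
dfield = np.linspace(0.0, 2.0, 16)[None, :]  # largest near an interface
q = np.array([0.5])                          # half a unit of pitch
d = predicted_deformation(static, dfield, q)
```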
%
%
%
% 3 IMPROVEMENTS - STRUCTURAL
% Although the look and feel of SPM2 is very similar to SPM99, there have
% been a number of implementation improvements. The general strategy is to
% ensure the code is as simple and accessible as possible and yet maintain a
% degree of robustness and efficiency. Specific changes are detailed below:
%
%
% 3.1 Software architecture
% A number of the revisions detailed above emphasise that modularity has been
% preserved and that we have tried to make the code as simple and readable as
% possible. A key change in terms of the architecture is the consolidation of
% the .mat files that contain the analysis variables and parameters. These
% files have been consolidated in such a way that using SPM in batch mode
% should be much easier. We have done this as a prelude to planned work using
% mark-up languages (XML) to facilitate the review, specification and
% implementation of SPM procedures. In brief, SPM2 sets up a single structure
% (SPM) at the beginning of each analysis and, as the analysis proceeds,
% fills in sub-fields in a hierarchical fashion. This enables one to fill in
% early fields automatically and bypass the user interface prompting. After
% the design specification fields have been filled in the design matrix is
% computed and placed in a design matrix structure. This, along with a data
% structure and non-sphericity structure is used by SPM to compute the
% parameters and hyperparameters. These are saved (as handles to the
% parameter and hyperparameter images) as sub-fields of SPM. A contrast
% sub-structure is generated automatically and can be augmented at any stage.
% This structure array can be filled in automatically after estimation using
% spm_contrasts.m. The hierarchical organisation of the sub-function calls
% and the SPM structure means that, after a few specification fields are set
% in the SPM structure, an entire analysis, complete with contrasts, can be
% implemented automatically. An example of a text file that fills in the
% initial fields of SPM and then calls the required functions can be found in
% spm_batch.m. Key fields include
%
% SPM.xY: - data structure
% SPM.nscan: - vector of scans per session
% SPM.xBF: - Basis function structure
% SPM.Sess: - Session structure
% SPM.xX: - Design matrix structure
% SPM.xGX: - Global variate structure
% SPM.xVi: - Non-sphericity structure
% SPM.xM: - Masking structure
% SPM.xsDes: - Design description structure
% SPM.xCon: - Contrast Structure
%
%
% 3.2 File formats
% Currently, the Neuroimaging Information Technology Initiative (NIfTI) is
% deciding on a strategy to allow different software to share data more
% easily. This may involve the development of a new file format for possible
% adoption by the neuro-imaging community. No decisions have yet been made,
% but eventually, the recommendations by the group should be incorporated
% into SPM. Unfortunately, this will not be SPM2.
% There will however be slight changes to the SPM file format. By default,
% images are currently flipped in the left-right direction during spatial
% normalisation. This is for historical reasons and relates to the
% co-ordinate system of Talairach and Tournoux being a right-handed
% co-ordinate system, whereas the co-ordinate system adopted for storing
% Analyze images is left-handed. This will be streamlined, but will require
% each site to adopt a consistent co-ordinate system handedness for their
% images (specified in a file in the SPM2 distribution). One side effect of
% this is that images spatially normalised using SPM99 or earlier will need
% to be spatially normalised again.
%
% 4 IMPROVEMENTS - FUNCTIONAL
%
% 4.1 Interpolation
% Sinc interpolation is a classical interpolation method that locally
% convolves the image with some form of interpolant. Much more efficient
% re-sampling can be performed using generalised interpolation (Thévenaz,
% 2000), where an image is modelled by a linear combination of basis
% functions. A continuous representation of the image intensities can be
% obtained by fitting the basis functions through the original intensities of
% the image. The matrix of basis functions can be considered as square and
% Toeplitz, and the bases usually have compact support. In particular,
% spatial interpolation in the realignment and spatial normalisation modules
% of SPM2 will use B-splines, which are a family of functions of varying
% degree. Interpolation using B-splines of degree 0 or 1 is almost identical
% to nearest neighbour or linear interpolation respectively. Higher degree
% interpolation (2 and above) begins with a very efficient deconvolution of
% the basis functions from the original image, to produce an image of basis
% function coefficients. Image intensities at new positions can then be
% reconstructed using the appropriate linear combination of the basis
% functions.
% Much of the interpolation code was based on (with permission from Philippe
% Thévenaz) algorithms released by the School of Engineering at the Swiss
% Federal Institute of Technology at Lausanne.
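%
% The square, Toeplitz system described above can be demonstrated
% directly (a Python/numpy sketch, not SPM2 code; mirror boundary
% handling is omitted): solving it "deconvolves" the cubic B-spline
% basis from the samples, and intensities at new positions are then
% linear combinations of the compact-support bases.

```python
import numpy as np

def cubic_bspline(t):
    """Cubic B-spline kernel: compact support, |t| < 2."""
    t = np.abs(np.asarray(t, dtype=float))
    out = np.zeros_like(t)
    near = t < 1.0
    far = (t >= 1.0) & (t < 2.0)
    out[near] = (4.0 - 6.0 * t[near] ** 2 + 3.0 * t[near] ** 3) / 6.0
    out[far] = (2.0 - t[far]) ** 3 / 6.0
    return out

def fit_coefficients(samples):
    """'Deconvolve' the basis from the image: solve the square,
    Toeplitz system B c = samples so the spline passes through the
    original intensities (mirror boundaries omitted here)."""
    idx = np.arange(len(samples))
    B = cubic_bspline(idx[:, None] - idx[None, :])
    return np.linalg.solve(B, samples)

def interpolate(coeffs, pos):
    """Intensity at new (non-integer) positions as a linear
    combination of the compact-support basis functions."""
    idx = np.arange(len(coeffs))
    W = cubic_bspline(np.asarray(pos, dtype=float)[:, None] - idx[None, :])
    return W @ coeffs

x = np.arange(32, dtype=float)
img = np.sin(x / 5.0)               # a smooth 1-D "image"
c = fit_coefficients(img)
vals = interpolate(c, [10.25, 17.5])
```

By construction the spline reproduces the original samples exactly at the
integer positions, which is the defining property of generalised
interpolation.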
%
% 4.2 Realignment
% Image realignment involves estimating a set of 6 rigid-body transformation
% parameters for each image in the time series. For each image, this is done
% by finding the parameters that minimise the mean squared difference between
% it and a reference image. It is not possible to exhaustively search through
% the whole 6-dimensional (7 if the intensity scaling is included) parameter
% space, so the algorithm makes an initial parameter estimate (zero rotations
% and translations), and begins to iteratively search from there. At each
% iteration, the model is evaluated using the current parameter estimates,
% and the cost function re-computed. A judgement is then made about how the
% parameter estimates should be modified, before continuing on to the next
% iteration. This optimisation is terminated when the cost function stops
% decreasing.
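%
% The iterate-while-the-cost-decreases scheme can be sketched in one
% dimension (Python/numpy, illustrative only: a single translation
% parameter stands in for the 6 rigid-body parameters, and a simple
% step-halving search stands in for SPM's actual optimisation):

```python
import numpy as np

def mean_squared_diff(shift, ref, mov, x):
    """Cost function: mean squared difference between the reference
    and the moving signal resampled under a candidate translation."""
    return np.mean((ref - np.interp(x, x - shift, mov)) ** 2)

# 1-D stand-in for rigid-body realignment: recover a pure translation
x = np.linspace(0.0, 10.0, 200)
ref = np.exp(-(x - 5.0) ** 2)
mov = np.exp(-(x - 5.7) ** 2)          # reference shifted by 0.7

# initial estimate of zero, then iterate; when no trial decreases the
# cost, refine the step, and stop once the step is negligible
shift, step = 0.0, 1.0
cost = mean_squared_diff(shift, ref, mov, x)
while step > 1e-4:
    improved = False
    for trial in (shift + step, shift - step):
        c = mean_squared_diff(trial, ref, mov, x)
        if c < cost:
            shift, cost, improved = trial, c, True
    if not improved:
        step /= 2.0
```
%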
% In order to be successful, the cost function needs to be smooth.
% Interpolation artefacts are one reason why the cost function may not be
% smooth. Using trilinear interpolation, sampling in the centre of 8 voxels
% effectively involves taking the weighted average of these voxels, which
% introduces smoothing. Therefore, an image translated by half of a voxel in
% three directions will be smoother than an image that has been translated by
% a whole number of voxels. The mean squared difference between smoothed
% images tends to be slightly lower than that for un-smoothed ones, which has
% the effect of introducing unwanted "texture" into the cost function
% landscape. Dithering the way that the images are sampled has the effect of
% reducing this texture. This has been done for the SPM2 realignment, which
% means that less spatial smoothing is necessary for the algorithm to work.
% Re-sampling the images uses B-spline interpolation, which is more efficient
% than the windowed sinc interpolation of SPM99 and earlier.
% The optional adjustment step has been removed, mostly because it is more
% correct to include the estimated motion parameters as confounds in the
% statistical model than it is to remove them at the stage of image
% realignment. This also means that each image can be re-sliced one at a
% time, which allows more efficient image I/O to be used. This extra
% efficiency should be seen throughout SPM2.
%
% 4.3 Coregistration
% The old default three-step coregistration procedure of SPM99 is now
% obsolete. The approach in SPM2 involves optimising an information theoretic
% objective function. The mutual information coregistration in the
% original SPM99 release exhibited occasional problems due to
% interpolation artefacts, but these have now been largely resolved using
% spatial dithering (see above). Information theoretic measures allow much
% more flexible image
% registration as they make fewer assumptions about the images. A whole range
% of different types of images can now be more easily coregistered in SPM.
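%
% The objective function itself is easy to state (a Python/numpy sketch
% of plain mutual information from a joint intensity histogram; the SPM2
% implementation differs in detail): I(A;B) = H(A) + H(B) - H(A,B) is
% high for any consistent intensity mapping between the images, linear
% or not, which is why fewer assumptions are needed.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """I(A;B) = H(A) + H(B) - H(A,B), estimated from the joint
    intensity histogram of the two images."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()
    pa, pb = p.sum(axis=1), p.sum(axis=0)
    ent = lambda q: -np.sum(q[q > 0] * np.log(q[q > 0]))
    return ent(pa) + ent(pb) - ent(p)

# a non-linear intensity mapping stands in for a second modality:
# MI stays high for any consistent mapping, linear or not
rng = np.random.default_rng(1)
t1 = rng.random((64, 64))
t2 = np.exp(-t1)                                   # "other modality"
misaligned = rng.permutation(t2.ravel()).reshape(64, 64)
```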
%
% 4.4 Spatial Normalisation
% Estimating the spatial transformation that best warps an image to match one
% of the SPM template images is a two step procedure, beginning with a local
% optimisation of the 12-parameters of an affine transformation. This step
% has been made more robust by making the procedure more internally
% consistent. Affine registering image A to match image B should now produce
% a result that is much closer to the inverse of the affine transformation
% that matches image B to image A. The regularisation (a procedure for
% increasing stability) of the affine registration has also changed. The
% penalty for unlikely warps is now based on assuming that the matrix log
% of the affine transformation matrix (after removing rotation and
% translation components) has a multivariate normal distribution.
% The non-linear component has also changed slightly, in that the bending
% energy of the warps is used to regularise the procedure, rather than the
% membrane energy. The bending-energy model seems to produce more realistic
% looking distortions.
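%
% The matrix-log penalty can be sketched as follows (Python/numpy,
% illustrative only; the series-based logarithm and the isotropic
% precision are simplifying assumptions). Note the internal consistency
% it buys: since log(inv(A)) = -log(A), warping A to B is penalised
% exactly as warping B to A.

```python
import numpy as np

def matrix_log(A, terms=30):
    """Matrix logarithm via the series log(I + E); adequate for
    affine matrices close to the identity (zooms and shears only,
    rotations and translations already removed)."""
    n = A.shape[0]
    E = A - np.eye(n)
    L = np.zeros((n, n))
    P = np.eye(n)
    for k in range(1, terms + 1):
        P = P @ E
        L += ((-1.0) ** (k + 1) / k) * P
    return L

def affine_penalty(A, precision=1.0):
    """Penalise unlikely warps: a zero-mean multivariate normal on
    the elements of the matrix log (isotropic precision assumed
    here for simplicity)."""
    L = matrix_log(A)
    return 0.5 * precision * np.sum(L ** 2)

identity = np.eye(3)
mild_zoom = np.diag([1.05, 1.0, 0.95])
big_zoom = np.diag([1.5, 1.0, 0.6])
```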
%
% 4.5 Segmentation
% The segmentation model has been updated in order to improve the bias
% correction component. The version in SPM99 tended to produce a dip in the
% middle of the bias corrected images. This is because the bias correction
% had a tendency towards scaling the image uniformly by zero (lowest entropy
% solution), but was prevented from doing so because the bias field was
% constrained to have an average value of one. Because the bias was only
% determined from grey and white matter, it tended to push down the intensity
% in these regions, and compensated by increasing the intensity of other
% regions. This is now fixed with a new objective function.
% The segmentation procedure also includes a "cleanup" procedure whereby
% small regions of non-brain tissue that are misclassified as brain are
% removed. Also, the prior probability images are scaled such that they have
% less influence on the segmentation. Previously, abnormal brains could be
% extremely problematic if there was (e.g.) CSF where the prior probability
% images suggested that there was 0% probability of obtaining it. The
% modified code may be able to cope slightly better with these abnormalities.
%
% 4.6 Specifying fMRI designs (spm_fmri_spm_ui.m)
% fMRI design specification is now simpler. The distinction between event-
% and epoch-related designs has been effectively removed by allowing the user
% to specify stimulus functions that comprise a series of variable-length
% boxcars. If the length reduces to zero one is implicitly modelling an
% event. This contrasts with SPM99 where the distinction between event- and
% epoch-related responses was made at the level of the basis functions (a
% hemodynamic basis set or a boxcar epoch set). The main reason for putting
% all the design-specific effects in the input functions, as opposed to the
% basis functions, is to finesse the specification of inputs to DCM and
% hemodynamic models. From the point of view of dynamical modelling the basis
% functions simply serve to approximate the impulse response function of the
% input-state-output model that each voxel represents. Consequently, the
% basis functions you specify pertain only to the hemodynamic response
% function, and these basis functions are assumed to be the same for all
% sessions. Onsets and durations of various trials or conditions can now be
% specified in scans or seconds. The option to perform a second-order or
% generalised convolution analysis using Volterra kernels is now assumed to
% be the same for all sessions. You can now also specify negative onset
% times of up to 32 time bins (a time bin defaults to 1/16 of the
% inter-scan interval).
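%
% The variable-length boxcar idea can be sketched as follows
% (Python/numpy; the gamma-shaped kernel is a crude stand-in for the
% basis set in spm_hrf.m, not the real thing): a duration of zero
% collapses the boxcar to a single "stick", i.e. an event.

```python
import numpy as np

def stimulus_function(onsets, durations, n):
    """Boxcar stimulus function on a grid of n time bins; a duration
    of zero collapses the boxcar to a single 'stick' (an event)."""
    u = np.zeros(n)
    for on, dur in zip(onsets, durations):
        u[on:max(on + 1, on + dur)] = 1.0
    return u

def kernel(length=32):
    """Crude gamma-shaped stand-in for the canonical hemodynamic
    response (the real basis set lives in spm_hrf.m)."""
    t = np.arange(length, dtype=float)
    h = t ** 5 * np.exp(-t)
    return h / h.sum()

n = 120
events = stimulus_function([10, 60], [0, 0], n)   # duration 0 -> events
epochs = stimulus_function([30], [20], n)         # 20-bin boxcar epoch
regressor = np.convolve(events + epochs, kernel())[:n]
```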
%
% 4.7 Estimation (spm_spm.m)
% spm_spm.m has been considerably simplified. Firstly, the Y.mad file, which
% used to store the time-series of voxels surviving an F test for all the
% effects of interest, has been removed. This means that you have to keep the
% original data in order to plot the responses. Although this decreases
% robustness it very much simplifies the software and allows you to
% interrogate any brain region using spm_graph.m at the point of inference
% and reviewing your results. The second simplification is that smoothness
% estimation is now performed on the residual fields after estimation. This
% means that spm_spm.m calls separate subroutines after the estimation has
% completed. This smoothness estimation is slower but is much more accurate
% because it computes the partial derivatives of the residual fields using a
% more finessed interpolation scheme.
% As mentioned above, spm_spm.m now performs weighted least squares (WLS),
% which defaults to ML estimation using the estimated or specified
% non-sphericity in SPM.xVi.V.
% The weighting matrix is in SPM.xX.W. If SPM.xVi.V is not specified ReML
% hyperparameters are estimated in a first pass of the data according to the
% covariance components in SPM.xVi.Vi.
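%
% The WLS step can be sketched as follows (Python/numpy, illustrative
% only; SPM's filtering and ReML hyperparameter estimation are omitted):
% with W = V^(-1/2), the estimate pinv(W X) W y is ML when V is the true
% error covariance, and reduces to OLS when V is the identity.

```python
import numpy as np

def wls_fit(X, y, V=None):
    """Weighted least squares: with W = V^(-1/2) (here from an
    eigendecomposition of the non-sphericity V), the estimate
    pinv(W X) W y is ML when V is the true error covariance;
    V=None reduces to ordinary least squares."""
    if V is None:
        W = np.eye(len(y))
    else:
        d, U = np.linalg.eigh(V)
        W = U @ np.diag(1.0 / np.sqrt(d)) @ U.T
    return np.linalg.pinv(W @ X) @ (W @ y)

# toy session: intercept + linear trend, AR(1)-like serial correlations
n = 50
X = np.column_stack([np.ones(n), np.arange(n, dtype=float)])
beta_true = np.array([1.0, 0.5])
lag = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
V = 0.3 ** lag                       # non-sphericity (serial correlation)
y = X @ beta_true                    # noise-free, so WLS recovers beta
beta = wls_fit(X, y, V)
```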
%
% 4.8 Results (spm_results_ui.m)
% The main revision to the results section has been in terms of plotting,
% where the standard error bars have now been replaced by 90% confidence
% intervals. The plotting of event-related responses has been upgraded to
% provide true finite impulse response (FIR) estimates, for both event- and
% epoch-related designs. This is equivalent to a peri-stimulus time
% histogram of hemodynamic responses and is estimated by refitting an FIR
% model at the selected voxel.
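%
% Refitting an FIR model amounts to the following (a Python/numpy
% sketch, not spm_graph.m): one delta regressor per peri-stimulus time
% bin, so the fitted parameters trace out the response like a
% peri-stimulus time histogram.

```python
import numpy as np

def fir_design(onsets, n, order):
    """FIR design matrix: one delta regressor per peri-stimulus time
    bin, so the fitted betas form a peri-stimulus time histogram."""
    X = np.zeros((n, order))
    for on in onsets:
        for j in range(order):
            if on + j < n:
                X[on + j, j] = 1.0
    return X

# simulate a voxel responding identically to every event
n, order = 200, 12
onsets = [10, 50, 90, 130, 170]
true_resp = np.exp(-0.5 * (np.arange(order) - 4.0) ** 2)  # peak at bin 4
X = fir_design(onsets, n, order)
y = X @ true_resp
fir_estimate = np.linalg.lstsq(X, y, rcond=None)[0]
```
%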
% Users can now simultaneously display a table of results and the functional
% activations. After displaying results, click on the Results-Fig button to
% the right of the menu on the graphic figure. This will open a new window.
% Clicking on the volume or cluster buttons will now display the results
% table in this figure. Slices and sections will still display in the main
% graphics figure. The Results-Fig is fully active and clicking on individual
% lines in the table will move the cursor to the appropriate points on the
% displayed sections.
% ______________________________________________________________________
% @(#)spm.man 2.16 The FIL methods group 03/05/12