Looking back at how the "big 3" do scaling, both FSL and SPM do what we do: scale the full time-series by a single value, the grand mean across the acquisition. AFNI, on the other hand, has some compelling reasons to instead normalize each voxel separately by its own acquisition mean. In practice the difference is likely not huge, but it could easily be supported with an optional `axis` flag. If we do add this, we need to account for voxels whose mean may be 0 (e.g. out-of-brain voxels), as scaling would otherwise produce NaNs.
```python
b = Brain_Data('some_file.nii.gz')
b.scale()        # default axis=None: scale every voxel's time-series by the grand mean
b.scale(axis=0)  # AFNI style: scale each voxel by its own mean
```
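The two behaviors (and the zero-mean guard) can be sketched in plain NumPy. This is a hypothetical helper, not the actual `Brain_Data.scale` implementation; the function name `scale_timeseries` and the `target` value of 100 are assumptions for illustration:

```python
import numpy as np

def scale_timeseries(data, target=100.0, axis=None):
    """Scale a (time x voxels) array to a target mean.

    axis=None: divide everything by the grand mean (FSL/SPM style).
    axis=0:    divide each voxel by its own acquisition mean (AFNI style).
    Voxels with a zero mean (e.g. out-of-brain) are set to 0 instead of NaN.
    """
    # keepdims=True keeps the mean broadcastable against `data` for both axes
    mean = data.mean(axis=axis, keepdims=True)
    # `where=mean != 0` skips the division wherever the mean is 0,
    # leaving the pre-filled zeros from `out` instead of producing NaNs
    return np.divide(
        data * target,
        mean,
        out=np.zeros_like(data, dtype=float),
        where=mean != 0,
    )
```

For example, a voxel column of all zeros stays all zeros under `axis=0`, whereas naive division by the per-voxel mean would yield NaNs there.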