[MINC-users] mincblur with FWHM <= voxel dims

Alex Zijdenbos zijdenbos at gmail.com
Sun Jun 24 17:20:54 EDT 2012


On Thursday, 21 June, 2012 at 8:34 AM, Andrew Janke wrote:
> On 21 June 2012 13:53, Alex Zijdenbos <zijdenbos at gmail.com> wrote:
> > Empirically, when you reduce the FWHM to below the voxel dimension, the voxel value of the blurred image increases. Moreover, for an image with 1mm isotropic voxels, the FWHM at which the voxel values of the result are about equivalent to those in the source image seems to be around 1.3, give or take. See this image:
> >  
> > mni_icbm_00102_t1_blurs.png (http://cl.ly/2A1m3Y470P0M3S3J2Q0j)
> >  
> > which shows in the top row the original, 1mm isotropic, volume, followed by the result of running mincblur on it with decreasing FWHM from 1.5 to 0.5mm. I used spectral with a fixed range across all rows to bring out the signal difference.
> >  
> > So - is this expected? Surprising? Follows theory or bad behaviour of mincblur? After scribbling a few pages with normal distributions and looking through the mincblur code I still can't quite figure it out. Anybody can explain this to me?
>  
> I'd say expected (given how mincblur works) and I'd also say is the
> correct behaviour. If you think of the blur as fitting a Gaussian to
> your noisy peak then as the width of this peak decreases, the height
> of it increases to the same given amount of signal.

That is also the explanation I managed to come up with; however, having studied this a bit further, I disagree with this being the "correct" behaviour. (Gaussian) blurring is expected to adhere to "average grey level invariance", meaning that the filter should not modify the mean value of the image. That is not a resolution-dependent principle, so I would say that mincblur should adhere to it regardless of the relationship between the kernel width and the voxel size.
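To illustrate what grey level invariance looks like in practice (this is a sketch using scipy, not mincblur itself): scipy normalizes its sampled Gaussian kernel to unit *sum*, so the image mean is preserved even when the FWHM drops below the voxel size:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
img = rng.random((40, 40, 40))   # pretend 1 mm isotropic voxels

for fwhm in (0.5, 1.0, 1.3, 3.0):                       # mm
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))   # FWHM -> sigma
    # mode='wrap' (circular convolution) so no signal is lost at the edges;
    # with a unit-sum kernel the mean is then preserved exactly
    blurred = gaussian_filter(img, sigma, mode='wrap')
    print(f"FWHM {fwhm}: mean {img.mean():.6f} -> {blurred.mean():.6f}")
```

The means agree at every FWHM, including FWHM < voxel size, which is the behaviour I would expect from mincblur as well.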

I did some experimenting by blurring a binary cube in an image and monitoring the mean value (generously padded and away from the image edge to avoid edge effects). See this graph: mincblur_mean.png (http://cl.ly/2V472I092L3U3R3e0k02) which shows the mean value of the 1mm isotropic image as a function of kernel FWHM running from 0.2 to 5.0mm in 0.1mm steps. You can see that for FWHM < 2*voxel_dim the mean signal starts to go up rather steeply. By contrast, the orange HR line is from the same image, but obtained by first upsampling it to a relatively high (0.1mm) resolution, performing the blur at that resolution, and downsampling the result. I would argue that, barring some resampling effects, these curves should be the same.
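One plausible explanation for the shape of that curve (an assumption about what mincblur does, not taken from its code): if the kernel is a Gaussian sampled at voxel centres but normalized by its *continuous* integral rather than by its discrete sum, the centre tap is over-weighted once the FWHM approaches the voxel size. A 1-D sketch of that model:

```python
import numpy as np

def sampled_gaussian(fwhm, step=1.0, halfwidth=10):
    """Continuous-normalized Gaussian sampled at voxel centres.

    The taps are scaled so that their *integral* (sum times voxel width)
    would be 1 for a continuous Gaussian; the discrete sum is the
    multiplicative gain the blur applies to the image mean.
    """
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    x = np.arange(-halfwidth, halfwidth + 1) * step
    return step * np.exp(-x**2 / (2.0 * sigma**2)) / (sigma * np.sqrt(2.0 * np.pi))

for fwhm in (0.5, 1.0, 1.3, 2.0, 5.0):
    gain = sampled_gaussian(fwhm).sum()
    print(f"FWHM {fwhm}: mean scales by {gain:.4f}")
```

Under this model the gain is essentially 1 for FWHM well above the voxel size but blows up steeply below it, which at least qualitatively matches the measured curve.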

I should note that the blurring implemented in FSL, for example, does adhere to the average grey level invariance principle (regardless of voxel size).
> As for if it should happen when you (or a script) doesn't sanitize
> inputs then you probably have an argument that mincblur should give a
> warning. I would argue that the script should check the input sampling
> though, certainly I remember doing this in a few scripts of mine. I
> won't be so bold to say that I do check in all scripts I write though!

:) neither would I. But I actually think that mincblur should be fixed, in which case it won't need to produce a warning. I would imagine this means changing the kernel normalization. Of course a wrapper script could also fix this simply by scaling the image; but figuring out the right scaling factor would probably involve padding the images prior to blurring, to avoid edge effects.
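A wrapper-level fix could be sketched like this (hypothetical code, not mincblur's: `mincblur_like` is a toy stand-in for a blur whose kernel is normalized by its continuous integral, and the circular-boundary mode stands in for the padding a real wrapper would need): measure the kernel's gain by blurring a constant volume, then divide it out.

```python
import numpy as np
from scipy.ndimage import convolve1d

def mincblur_like(img, fwhm, step=1.0):
    """Toy blur with a continuous-normalized sampled Gaussian kernel
    (an assumption about mincblur's behaviour, not its actual code)."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    x = np.arange(-10, 11) * step
    k = step * np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
    for axis in range(img.ndim):
        # mode='wrap' sidesteps edge effects; a real wrapper would pad instead
        img = convolve1d(img, k, axis=axis, mode='wrap')
    return img

def gain(blur, shape):
    """Blur an all-ones volume to measure the kernel's multiplicative gain."""
    return blur(np.ones(shape)).max()

img = np.random.default_rng(1).random((24, 24, 24))
raw = mincblur_like(img, fwhm=0.5)
fixed = raw / gain(lambda v: mincblur_like(v, fwhm=0.5), img.shape)
print(img.mean(), raw.mean(), fixed.mean())
```

In this sketch the corrected volume recovers the original mean; fixing the normalization inside mincblur itself would of course be cleaner than post-hoc scaling.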

-- A 

