[MINC-development] MINC2 file with floating-point voxels and slice normalization

Andrew Janke a.janke at gmail.com
Wed Mar 20 21:12:27 EDT 2013


> Yup. voxel_loop is designed to be memory efficient (back in the days
> when 32 MB was a lot of memory, or something - it's a bit hazy now).

To me this was a good decision and it has allowed MINC to stand the test
of time. Over time we have of course increased this memory "chunk"
size, but the ability to mash through 100 files without having to read
them all into memory just to average/compute across them is something
that "the others" don't have.
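
For anyone who hasn't poked at voxel_loop, the pattern is roughly the
one below: average N volumes while only ever holding one slice per
input in memory. This is a minimal sketch over raw float volumes with
made-up dimensions and no real error handling, not the actual
voxel_loop API or MINC file format:

#include <stdio.h>
#include <stdlib.h>

#define NSLICES       256            /* made-up volume dimensions */
#define SLICE_VOXELS  (256 * 256)

int main(int argc, char *argv[])
{
   FILE  **in;
   FILE   *out;
   float  *slice;
   double *sum;
   int     nfiles = argc - 2;        /* last argument is the output */
   int     f, s;
   long    v;

   if (nfiles < 1) {
      fprintf(stderr, "Usage: %s in1.raw [in2.raw ...] out.raw\n", argv[0]);
      return 1;
   }

   in    = malloc(nfiles * sizeof(*in));
   slice = malloc(SLICE_VOXELS * sizeof(*slice));
   sum   = malloc(SLICE_VOXELS * sizeof(*sum));

   for (f = 0; f < nfiles; f++) {
      in[f] = fopen(argv[f + 1], "rb");
      if (in[f] == NULL) { perror(argv[f + 1]); return 1; }
   }
   out = fopen(argv[nfiles + 1], "wb");

   /* memory use: one slice per input plus one accumulator,
    * no matter how big the volumes are */
   for (s = 0; s < NSLICES; s++) {
      for (v = 0; v < SLICE_VOXELS; v++)
         sum[v] = 0.0;

      for (f = 0; f < nfiles; f++) {
         fread(slice, sizeof(*slice), SLICE_VOXELS, in[f]);
         for (v = 0; v < SLICE_VOXELS; v++)
            sum[v] += slice[v];
      }

      for (v = 0; v < SLICE_VOXELS; v++)
         slice[v] = (float)(sum[v] / nfiles);
      fwrite(slice, sizeof(*slice), SLICE_VOXELS, out);
   }

   fclose(out);
   return 0;
}

The point is the memory line: it scales with slice size times number
of files, not with volume size, which is what lets you average 100
volumes on a modest machine.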

> I believe that voxel_loop is designed to do no more than an image at a
> time. Perhaps it could be modified to slurp up more data in one shot -
> not sure how hard that would be.

image? I think you mean slice?

> You would end up discretizing the data a second time - that's not ideal.

Agree. It's not me who wants this change; it's for this reason that
I'll keep suggesting that people who think they don't want slice
scaling buy two disks and use float. :)
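
For those who haven't hit it, the double-discretization problem looks
like this. With integer voxels MINC keeps a per-slice real range
(image-min/image-max) and quantizes the reals into the integer type on
write; the numbers below are made up but the arithmetic is the point:

#include <stdio.h>

int main(void)
{
   double image_min = -2.5;          /* this slice's real range (made up) */
   double image_max = 7.1;
   double vmax = 65535.0;            /* unsigned short storage */
   double real = 3.14159;

   /* first discretization: real value -> stored integer */
   unsigned short stored = (unsigned short)
      ((real - image_min) / (image_max - image_min) * vmax + 0.5);

   /* reading it back gives a slightly different real value */
   double recovered = image_min + stored / vmax * (image_max - image_min);

   /* re-scaling into a new slice range would quantize this
    * already-quantized value a second time, compounding the error */
   printf("real %.6f -> stored %u -> recovered %.6f (err %.2e)\n",
          real, (unsigned) stored, recovered, recovered - real);
   return 0;
}

Every re-scale into a new range repeats that rounding on
already-rounded values, which is exactly what storing float avoids.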

> Is it because minc does not support integer, boolean
> or label data? For years, I have felt that this is the big gap in minc
> and should really be addressed. If you know that the data is integer
> or labels (IDs for which proximity in value means nothing, so
> interpolating between 10 and 12 to get 11 is nonsense), don't do nasty
> things to it like scale it to real values or interpolate it.

Agree. And the shift to HDF was a big step in this direction; one of
the initial problems was that MINC2 was still very strongly tied to
the netCDF data types. Vlad has a working HDF-only port of minc_lite
that we are using in production here now. Once we are happy with
its stability we should think pretty seriously about releasing this
code as "MINC2 proper", as it will allow all the recoding that will be
needed to add such discrete types to MINC.

For now pretty much everything can be done with labels, but you need to
be aware of tools like resample_labels from conglomerate.
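
The quoted 10/12 example is easy to make concrete; here is a toy 1-D
sketch with hypothetical label IDs showing why label volumes want
nearest-neighbour rather than linear resampling:

#include <math.h>
#include <stdio.h>

int main(void)
{
   int    labels[4] = {10, 10, 12, 12};  /* hypothetical structure IDs */
   double pos  = 1.5;                    /* sample halfway between voxels 1 and 2 */
   int    lo   = (int) floor(pos);
   int    hi   = lo + 1;
   double frac = pos - lo;

   /* linear interpolation: (10 + 12) / 2 = 11, an ID that may not exist */
   double linear = (1.0 - frac) * labels[lo] + frac * labels[hi];

   /* nearest neighbour: take the closer voxel, always a real ID */
   int nearest = labels[(frac < 0.5) ? lo : hi];

   printf("linear = %g (nonsense), nearest = %d (valid)\n", linear, nearest);
   return 0;
}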

> I suspect that the code changes would be easier and
> would also benefit the big data people (like Andrew showing off with
> his monster volumes :)).

I'll bet it's not only me. Our u-CT and u-MRI machines are now
routinely churning out multi-GB volumes, and we aren't the only ones in
the world with such machines. It's minctracc I now have to fix, as it
seems to have a size limit in there somewhere. ANTS chokes on these
volumes unless you downsample them, so it's back to the well-polished
hammer(s) I know.

Change mincaverage/mincmath/minccalc such that I can't perform a
calculation across 10 x 12 GB volumes because I'd need 120 GB of RAM?
I think not...  it'd be quicker to just use niftilib.


a

