[MINC-users] multi-input output type/range

Alex Zijdenbos zijdenbos at gmail.com
Mon Nov 19 20:57:21 EST 2012


Actually, the motivation for changing this is not a "current itch" - I
can scratch my current itch just fine. It is a concern that the
current default behaviour has a very high probability of producing
"wrong" output that will likely go unnoticed. By "wrong" I mean output
with possibly severe quantization errors that the user did not expect;
and by "user" I mean those who may not know the ins and outs of MINC
as well as some of us do.

I was under the impression that one of the central ideas behind MINC
was that the user wouldn't need to care about, or even be aware of,
the data type. But when you add or multiply two volumes, one of which
is 8-bit and the other 16-bit, you end up with a non-commutative
operation due to quantization: A*B != B*A and A+B != B+A. It gets (a
lot) worse when performing math operations on label volumes, which may
use only a small part of the data type's range. So to avoid this, one
would either have to use a blunt hammer and force a float output
volume, or first figure out which data types the input volumes have
and then do the right (or better) thing. Neither of these is obvious
to the average user who just wants to add two volumes together.
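To make the non-commutativity concrete, here is a minimal sketch in
plain Python/NumPy - not MINC code; the bit depths, valid ranges, and
voxel values are made up - of how scaled-integer storage makes the
stored result of A*B differ from B*A when the output inherits the
first input's type and valid range:

```python
import numpy as np

def quantize(values, nbits, vmin, vmax):
    """Store real values as unsigned nbits-bit integers scaled over
    [vmin, vmax], then map back to real values - mimicking MINC-style
    scaled-integer storage."""
    levels = 2 ** nbits - 1
    scaled = np.round((np.clip(values, vmin, vmax) - vmin) / (vmax - vmin) * levels)
    return scaled / levels * (vmax - vmin) + vmin

# A: an 8-bit volume with valid range [0, 255];
# B: a 16-bit volume with valid range [0, 1000].
a = np.array([10.0, 100.0, 250.0])
b = np.array([3.7, 512.9, 999.1])

# The output volume inherits the type/range of the FIRST input:
ab = quantize(a * b, 8, 0.0, 255.0)     # A*B stored as 8-bit, range [0, 255]
ba = quantize(b * a, 16, 0.0, 1000.0)   # B*A stored as 16-bit, range [0, 1000]

print(ab)                    # products above 255 are clipped away
print(ba)                    # products above 1000 are clipped away
print(np.allclose(ab, ba))   # False: stored A*B != stored B*A
```

The mathematical products are identical; only the output-stage
clipping and quantization to the first input's range differ, and that
alone makes the two stored volumes disagree.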

I actually cannot come up with a situation where I would *want* to
lose data in quantization, so I am curious: can you give an example of
how you relied on the current behaviour?
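For concreteness, the "largest data type/range" rule proposed in my
original message (quoted below) could be sketched like this - purely
hypothetical code, not anything from voxel_loop; the type names and
their ranking are illustrative:

```python
# Hypothetical sketch of the proposed rule: instead of inheriting the
# output type from the first input volume, pick the "widest" storage
# type among all inputs. The ranking here is illustrative only.
TYPE_RANK = ["byte", "short", "int", "float", "double"]

def promote(input_types):
    """Return the widest storage type among the input volumes."""
    return max(input_types, key=TYPE_RANK.index)

print(promote(["byte", "short"]))   # -> "short"
print(promote(["short", "byte"]))   # -> "short": order no longer matters
print(promote(["byte", "float"]))   # -> "float"
```

A real implementation would also have to merge the inputs' valid
ranges, but even this simple promotion makes the operation commutative
with respect to input order.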

-- A


On Mon, Nov 19, 2012 at 6:30 PM, Andrew Janke <a.janke at gmail.com> wrote:
> I would be very averse to changing the default behaviour of tools
> such as mincmath and minccalc; no doubt there are hundreds of buried
> calls to these two tools that would then behave differently than
> expected.
>
> FWIW, the change you are requesting would have to be made in
> voxel_loop.{c,h}. Voxel loop already performs all internal
> calculations in double, so it is only at the output stage that
> quantization can creep in.
>
> That said, I am not sure of the motivation for the change, other
> than your current itch. There are many times in scripts where I have
> relied on this behaviour and crafted minccalc lines to suit... I
> would guess that Peter's original motivation for doing things the
> way they are was disk space. Granted, that isn't so much of an issue
> anymore, but it will still have some impact.
>
> If you want to achieve what Vlad is suggesting, that is already
> easy: just mincreshape all your input files to float and the job is
> done! The file type will propagate through if things are in the
> right order.
>
> Failing that, add some hidden magic to your .bashrc...
>
>    alias minccalc="minccalc -float"
>
> That would be sure to cause headaches...  perhaps a small wrapper
> script would be a better option!
>
>
> a
>
> On 20 November 2012 07:52, Vladimir S. FONOV <vladimir.fonov at gmail.com> wrote:
>> Hello,
>>
>> considering that inter-slice normalization does more evil than
>> good, especially in the case of multiple labels, maybe we should
>> just treat all volumes as float, unless specified otherwise?
>>
>>
>> On 12-11-19 10:31 AM, Alex Zijdenbos wrote:
>>>
>>> Hello all,
>>>
>>> I regularly run into issues with some of the MINC tools,
>>> specifically mincmath and minccalc, because by default they
>>> inherit the data type of the first volume (only). For example,
>>> something like this:
>>>
>>> minccalc -expression "A[0] * A[1]" <mask.mnc> <vol.mnc> <masked.mnc>
>>>
>>> will produce very unexpected results if <mask.mnc> is "unsigned
>>> byte 0 1", or even when it is "unsigned byte 0 255". Of course you
>>> can choose the output type and range that you like; but that
>>> implies being explicitly aware of the data types of the input
>>> volumes to begin with, which is not always a given, especially
>>> when the call is buried deep inside a script.
>>>
>>> I think it would make sense for multi-input MINC tools to use the
>>> "largest" data type/range among the input volumes, rather than
>>> just inheriting from the first volume. That wouldn't be perfect,
>>> but it would reduce quantization errors in quite a few cases.
>>>
>>> Thoughts?
>>
>>
>>
>> --
>> Best regards,
>>
>>  Vladimir S. FONOV ~ vladimir.fonov <at> gmail.com
>> _______________________________________________
>> MINC-users at bic.mni.mcgill.ca
>> http://www.bic.mni.mcgill.ca/mailman/listinfo/minc-users
>
