From minc-development@bic.mni.mcgill.ca Thu Jan 2 15:18:10 2003
From: minc-development@bic.mni.mcgill.ca (David Gobbi)
Date: Thu, 2 Jan 2003 10:18:10 -0500 (EST)
Subject: [MINC-development] Draft proposal for MINC 2.0
In-Reply-To: <20021223144228.A30128@sickkids.ca>
Message-ID: 

It looks like a good proposal overall, but I have a couple of comments,
particularly about compression and labels:

Compression (page 7): "uncompressing the whole dataset into a temporary
disk space"

This doesn't sound reasonable to me, since disk access is so slow (in
fact, disk speed has hardly changed over the past five years, while the
amount of core memory in most PCs has climbed from 32 MB to 256 MB and
CPU speeds have increased from 300 MHz to 2 GHz).  There is a lot of CPU
power available to do the decompression, and a fair bit of memory for
caching the uncompressed data.  Simple math: a 2 GB data set, written
out at a typical disk write speed of around 10 MB/s, takes 200 s (over
3 min) to write and then about another 1.5 min to read, for a total of
about 5 min before the user can view the data.

A caching scheme can be used that keeps a chunk of data in memory and
uncompresses data from the file on the fly into the cache as necessary
(see the sketch following this message).  There will have to be a way of
telling the volume_io library what the maximum cache size can be (and
ideally, the volume_io library should be able to figure out for itself
how much core memory is available).  I don't know enough about volume_io
to say how well its current caching scheme fits into this.

And the tradeoff between compression and random-access speed is not so
bad, given a fast CPU and caching of the uncompressed data.  Saying that
good compression is incompatible with random access is going a bit far;
for example, movie files are compressed and have key frames to make
random access possible.  Likewise, JPEG images are broken into 16x16
macroblocks.

Label format (page 7): Specific voxel values cannot be used as labels,
since a voxel must contain the data as well as the label.  Maybe you
meant that some of the bits in the voxel would be used to store the
label, e.g. the file could contain 16-bit voxels which have 12 bits for
data and 4 bits for labels.  But 4 bits means only 15 different
available labels, which isn't very many, assuming that those 4 bits are
even available.

I think it's best if the labels are stored in a separate variable within
the NetCDF file.  Some reasons for this are:

1) it is not guaranteed that there will be any extra voxel bits
available for labels, on top of those bits required for image data

2) label data will compress extremely well if kept separate from the
data (because labels are generally large blobs of the same colour)

3) if labels are stored in a separate variable, it is very easy for the
MINC tools that don't care about labels to ignore them and simply copy
them from the input file to the output file

Slice scaling (page 8): It will be necessary to at least read per-slice
scaled MINC files in order to maintain backwards compatibility with
existing MINC files.  But I think that all files written by the new MINC
tools should use just one scale factor for the entire data set.  If
per-slice scaling is applied to the data before it is compressed, the
compression will probably suffer as a result.

I hope these comments are useful, and that I'm not stepping on anyone's
toes.

 - David

--
David Gobbi, MSc                       dgobbi@imaging.robarts.ca
Advanced Imaging Research Group
Robarts Research Institute, University of Western Ontario

On Mon, 23 Dec 2002, John G. Sled wrote:

> Hi everyone,
>
> Last time I was in Montreal, I had a chance to meet with Jason and
> Bert and put together an outline for MINC 2.0.  Based on that
> discussion, Leila and I have written a proposal outlining the
> requirements and design of MINC 2.0.  I've attached it with this
> email.  Please comment.  I'm hoping that this will be the basis of
> a meeting in January in which we can all get together to hammer out
> the details.
>
> cheers,
>
> John
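A minimal sketch of the kind of chunk cache David describes above,
assuming a hypothetical decompress_chunk() that inflates one chunk from
the compressed file; the sizes are illustrative and nothing here is
volume_io's actual scheme:

    #include <stdlib.h>

    #define CHUNK_VOXELS (16 * 16 * 16)
    #define CACHE_SLOTS  4096      /* tune to the available core memory */

    /* Hypothetical: inflate compressed chunk 'id' from disk into 'dst'. */
    extern void decompress_chunk(long id, short *dst);

    typedef struct {
        long  id;                  /* which chunk is resident; -1 = empty */
        short voxels[CHUNK_VOXELS];
    } slot_t;

    static slot_t cache[CACHE_SLOTS];

    void cache_init(void)
    {
        long i;
        for (i = 0; i < CACHE_SLOTS; i++)
            cache[i].id = -1;
    }

    /* Return a pointer to the decompressed chunk, inflating it on a
     * cache miss.  Direct-mapped for brevity; a real cache would want
     * LRU eviction and a configurable memory ceiling, as David notes. */
    short *get_chunk(long id)
    {
        slot_t *s = &cache[id % CACHE_SLOTS];
        if (s->id != id) {
            decompress_chunk(id, s->voxels);
            s->id = id;
        }
        return s->voxels;
    }

With 4096 slots of 16^3 shorts, the cache holds 32 MB of decompressed
voxels, comfortably within the 32 MB - 256 MB of core memory cited above.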
From minc-development@bic.mni.mcgill.ca Sun Jan 5 22:47:40 2003
From: minc-development@bic.mni.mcgill.ca (Andrew Janke)
Date: Mon, 6 Jan 2003 08:47:40 +1000
Subject: [MINC-development] Draft proposal for MINC 2.0
In-Reply-To: <20021223145509.C30128@sickkids.ca>
Message-ID: 

On Mon, 23 Dec 2002, John G. Sled wrote:

> Last time I was in Montreal, I had a chance to meet with Jason and
> Bert and put together an outline for MINC 2.0.  Based on that
> discussion, Leila and I have written a proposal outlining the
> requirements and design of MINC 2.0.  I've placed it on the web at
>
>    http://www.bic.mni.mcgill.ca/users/jgsled/outgoing/minc2.0/
>
> Please comment.  I'm hoping that this will be the basis of
> a meeting in January in which we can all get together to hammer out
> the details.

Looks fairly complete from what you have so far.  I get scared by the
statements in #4, though; this would insinuate that MINC 2.0 is going to
be something entirely new.  Is there still sufficient interest in making
MINC more netCDF'ish?  Or has this disappeared now?

Has OS X been dropped from the list of supported systems for a
reason? :)  I get the impression that it will be the platform of choice
amongst "clinician" MINC users in the future, with the niceties that it
provides.

On the implementation side, how detailed do you want this spec to be?
If you want detail, I would like to have these included:

* Support for a -version/-v option in all C/L tools.  This may well
  clash with -verbose, but that's life.

* On this note, write down the standard args a MINC C/L tool should
  support.  Peter obviously had a list in his head when writing them,
  and as such I attempt to follow his lead with new tools, but it isn't
  obvious!

* Inclusion of a minc-config shell script a la the gtk/gsl/gnu stuff,
  such that all a makefile need include is

     INCLUDES = `minc-config --cflags`
     LDINCLUDES = `minc-config --libs`

  Or perhaps go with the new style and use pkg-config, as GTK 2.0 does.
  This is, to me, a far more elegant solution than distro-ing m4 macros
  to budding developers.  (sorry steve!)  pkg-config also just happens
  to be cross-platform.

* Internationalisation (i18n) -- with the international use of MINC
  becoming apparent, this should not be a hard thing to add, especially
  with all the nice GNU gettext utils around for this nowadays (see the
  sketch after this message).  (Yes jason, viewnup is not a good example
  of this! :)

* Finish off what David MacDonald originally intended volume_io to do:
  support multiple input formats (but not OUTPUT formats!
  brough-ha ha ha!).  If only the reading _in_ of ANALYZE/DICOM/etc.
  were supported, world domination (and subsequent development) may
  happen sooner than you realise.

All in all a good read though!  Now get to coding it for me you lot! :)

'other interested party' member #12
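For the i18n bullet above, a minimal sketch of what wiring GNU gettext
into a MINC command-line tool would involve.  The "minc" catalog name
and locale path are assumptions; the libintl calls themselves are the
standard gettext API:

    #include <libintl.h>
    #include <locale.h>
    #include <stdio.h>

    #define _(msgid) gettext(msgid)

    int main(void)
    {
        /* Honour the user's locale, then point gettext at a
         * (hypothetical) "minc" message catalog. */
        setlocale(LC_ALL, "");
        bindtextdomain("minc", "/usr/local/share/locale");
        textdomain("minc");

        /* Every user-visible string goes through _() so that xgettext
         * can harvest it into the .po files translators work from. */
        printf(_("Processing volume...\n"));
        return 0;
    }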
From minc-development@bic.mni.mcgill.ca Mon Jan 6 04:28:17 2003
From: minc-development@bic.mni.mcgill.ca (Peter NEELIN)
Date: Sun, 5 Jan 2003 23:28:17 -0500
Subject: [MINC-development] Draft proposal for MINC 2.0
In-Reply-To: <20021223145509.C30128@sickkids.ca>
Message-ID: 

It is good to see a grand plan for things that I have long dreamed of
doing.  Some thoughts on John and Leila's proposal (and David Gobbi's
and Andrew Janke's responses):

1) NetCDF itself imposes some serious restrictions on what extensions
can be added.  The blocked structure for large datasets should fit well
with the NetCDF model, but any bit-compression scheme will not, and a
more complex organization (colour-transform/wavelet/bit-compression
combination) would make any NetCDF scheme horribly unintelligible, I
think.  One can, of course, store everything in a single-dimensional
byte variable (a file in a file, effectively), but this seems like a big
departure from NetCDF thinking.  I spent a fair bit of time worrying
over this issue, and I initially felt that a scheme could be fitted on
top of NetCDF, but now I'm not so sure (perhaps it was denial - any
change in underlying format could be very time-consuming).  I think that
it is at least worth investigating HDF to see if its combination of a
very general data API with a simpler NetCDF-like API can satisfy all
needs, especially since it can read (but not write) NetCDF files.
(Sorry Andrew, but I think that someone should at least look into it.)

2) David asked why good compression has to be incompatible with random
access, and I think that it is a good question.  Someone has actually
married zlib (I think) to NetCDF to get on-the-fly compression with
nearly-random access (and files are never truly random access anyway).
The question, of course, is how big the compression chunks should be,
with a tradeoff between speed/memory and randomness of access.  I
suspect that one approaches an asymptote in compression with fairly
small chunk sizes.  The problems introduced by disk decompression should
not be ignored (beyond those raised by David about speed are the more
basic ones of available disk space, disk management, etc.).  I never did
really understand why the NetCDF folks did not want to incorporate the
zlib changes.  They did give arguments (probably in the FAQ), but I
don't recall them being compelling.  Of course, standard lossless
compression schemes are only the tip of the compression iceberg, and as
I mentioned above, I don't think that NetCDF is particularly
compression-friendly.

3) Having agreed with David on one point, I must now disagree with him
on another.  I have always felt that supporting separate scaling was
important, since it makes it possible to process large volumes with very
little memory.  The original MINC model of one scaling per slice matched
the simple voxel organization, but more generally the notion is one
scaling per chunk of data, however that might be defined.  That said, it
is not necessary to always have a separate scaling per slice.  In fact,
volume_io applications write out one scale per volume.  It is simply the
generic MINC applications, which try to avoid being memory hogs, that
write out separate scales per slice.  In cases where one has the memory
and it makes more sense to have a single scale, do it.  But should one
impose the requirement of either lots of memory or a two-pass approach
(compute, then re-normalize) on all file-writing applications?
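To make the scaling discussion in point 3 concrete: MINC recovers real
values from stored voxels by a linear map between the voxel type's range
and a stored image-min/image-max pair, kept per slice (or once per
volume).  A sketch of that arithmetic, with illustrative names:

    /* Map a stored voxel value to a real value using the scale pair
     * belonging to the chunk (slice or whole volume) it came from.
     * voxel_min/voxel_max: representable range of the on-disk type
     * (e.g. 0..255 for unsigned byte); image_min/image_max: the pair
     * stored per slice in MINC 1.x, or once per volume. */
    double voxel_to_real(double voxel,
                         double voxel_min, double voxel_max,
                         double image_min, double image_max)
    {
        double t = (voxel - voxel_min) / (voxel_max - voxel_min);
        return image_min + t * (image_max - image_min);
    }

A streaming writer only needs one slice's extrema at a time, which is
why per-slice pairs keep memory low; a single per-volume pair requires
either the whole volume in memory or the two-pass approach Peter
mentions.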
4) Perhaps I'm dense, but I did not quite get the 2N-1 dimension
argument for the blocked data structure (page 6).  If I have a file that
has dimensions (zspace,yspace,xspace) of sizes (1024,1024,1024), then
the 2N structure is (zspace2,yspace2,xspace2,zspace1,yspace1,xspace1)
with sizes (64,64,64,16,16,16), for example.  Going to 2N-1 in the
manner proposed (as I understand it), collapsing the fastest-varying
dimension, would give (zspace2,yspace2,zspace1,yspace1,xspace1) with
sizes (64,64,16,16,1024), which still requires loading the whole volume
to get a sagittal slice.  However, if one collapses the slowest-varying
dimension, one gets (yspace2,xspace2,zspace1,yspace1,xspace1) with sizes
(64,64,1024,16,16), and loading any orientation would require loading
only the neighbourhood data.  Looking at this again, I suppose that one
could also do (zspace2,yspace2,xspace2,zspace1,yspace1) with sizes
(64,64,1024,16,16).  Perhaps this is what was meant.  But after all of
this, what is the advantage of dropping a dimension (it is certainly
less obvious to the casual reader)?

5) The discussion in the proposal on wavelet compression raises the
potential problem of the cost of modifying a single voxel in a file
(i.e. read/write access).  Although MINC was originally designed to
allow read/write access, we have virtually no applications that do this.
Unless something significant changes (and perhaps large volumes prefer
to be modified in place, although I have my doubts about the
advisability of this), this worry seems to be about something that
virtually never happens.  The only type of volume that seems likely to
be read/write is the label volume, and I do not think that this is a
particularly good candidate for wavelets.  Furthermore, any interactive
application that wants a working volume backed by a file would simply be
well advised not to make the backing file wavelet-compressed.  From what
I have seen, the real problem with a wavelet structure is that
decompression at full resolution can be slow compared to an uncompressed
file.  This might mean that wavelet-compressed files are not good
candidates for doing lots of automated computation.  However, one must
compare the decompression time to the subsequent computation that will
be done - I suspect that the decompression would be small compared to
other calculations.

6) The arguments about data-type issues (page 9) seem to assume that
caching will always be done.  The cost of caching is not negligible,
since a certain amount of index checking must be done on every lookup
(remember that the calculation goes back to the volume whenever it needs
a neighbourhood pixel, so this is not just a read problem, but a
computation problem).  If you do not do caching, then for memory reasons
one might want the internal volume representation to be different from
the user representation and the file representation.

7) One thought related to complex numbers (pages 7-8): the current
implementation of complex numbers (which is admittedly poorly supported)
stores them with vector_dimension.  Volume_io does not handle vector
data very well, since it uses a pointer per row (the fastest-varying
dimension).  For short vectors and small types, this can mean more
pointer than data.  This issue should be addressed in some way if
greater use will be made of vector data in the future.
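Back-of-envelope support for point 7, assuming one pointer per
fastest-varying row as described (the layout is as Peter states it; the
volume size is just illustrative):

    #include <stddef.h>
    #include <stdio.h>

    int main(void)
    {
        /* A 256^3 volume of 3-component short vectors, with one pointer
         * per vector "row" (the fastest-varying dimension). */
        size_t nrows      = 256UL * 256UL * 256UL;
        size_t data_bytes = nrows * 3 * sizeof(short);  /*  96 MB        */
        size_t ptr_bytes  = nrows * sizeof(void *);     /* 128 MB, 64-bit */

        printf("vector data:  %lu MB\n", (unsigned long)(data_bytes >> 20));
        printf("row pointers: %lu MB\n", (unsigned long)(ptr_bytes >> 20));
        return 0;
    }

On a 64-bit machine the pointers outweigh the voxels (and even on
32-bit they are two-thirds the size of the data); storing short vectors
contiguously within the row would remove that overhead entirely.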
8) One of the issues that was not clear in my mind was the mechanism for
deciding on output file structure in generic tools.  If I want to do
some volume arithmetic (minccalc) on a block-structured file, should I
get a new block-structured file?  What if the file has wavelet
compression?  Or should the output always have the standard voxel
structure unless I specify a set of output structure options?  But then
every application will need to have file-structure-related options added
whenever a new structure is added.  Related to this is the question of
how much an application needs to know about file structure.  Does the
application need to have intimate knowledge of the output structure
(e.g. doing wavelet-related things), or should that all be in the
library?  If so, then how does the user control this?  (Weird file
names, perhaps?  Ughhh.)  Can one create a completely general API that
allows the application programmer to be unaware of different file
structures?  Can one also provide an API that gives complete control to
applications that really do want to know about the structure (e.g. a
wavelet streaming application)?  What would all of this look like?

9) David raised a question about label and voxel data co-existing.  I
did not read that in the proposal, so perhaps the interpretation depends
on your predisposition.  I do not favour putting label data in with
continuous voxel data, even as separate variables, but rather treating
the two types of data as equivalent and putting them in separate files.
Normally, the user would manage the two files separately (and they are
separate pieces of information that are likely to go in different
directions), with the advantage that tools made for continuous voxel
data can be used on label data (and label data quickly turns into
continuous data with blurring, etc.).  Alternatively, an application
would do the multiple-file management on behalf of the user.  I think
that the disadvantages of separating the types of data would outweigh
the gain.

(Re-reading the above, I just want to make clear that I support the
originally proposed notion of an identifier for label volumes so that
discrete and uninterpolable data can be properly handled.  I just do not
like the idea of making them completely different.  The user should be
able to treat them in similar ways, but applications would sometimes
have to handle them differently.)

That said, I can see the problems that arise in the continuum from
geometric data to continuous voxel data.  One can imagine wanting to
incorporate ROI information (polyhedral data, for example) in a MINC
file.  But label data is really just another form of ROI data.  And then
a fuzzy label volume is just another form of label data.  So where's the
line?  Should MINC incorporate all forms of label data, geometric and
voxel-based?  I think that the answer is to have another level of format
to handle the most general case of label data, one that could support
polyhedra, meshes, voxel grids, etc.  This format would be able to
include MINC data as part of a "scene".  Ideally, one would marry the
MINC format with this format, but the danger is ending up with an
unimplemented monster.  The simplest route is to sidestep the problem by
deferring it to another, meta, format (that may never get implemented,
but at least it would not derail the more focused MINC effort).

10) Andrew raised the question of Mac OS X support.  To my knowledge, no
Mac OS has ever been officially supported by NetCDF (perhaps that has
changed with OS X).  However, building MINC and its command-line friends
should not be a big problem.  Is the issue one of windowing system
rather than OS?
Should the official MINC viewing tools (whatever that means) support
X11, Windows and whatever-OS-X-calls-its-windowing-system?

11) Andrew makes a good point about conversion.  However, I'm not sure
that the volume_io way is really the best route (all applications read
every format).  I still think that the converter route makes life
simpler (especially when you add a new format in the world of static
linking - and beware of dynamic linking if you live in the world of
software quarantines).  Still, putting data conversion into the plan is
a good thing, since it is usually the first step (and a very non-trivial
one).  The world has simplified itself considerably in the past few
years, so good DICOM, Analyze (and maybe Interfile?) converters would go
a long way to making it easier to spread the use of MINC.

12) Would it make sense to develop a rough API definition before having
a meeting, so that the discussions can be more specific?  It is often
easy to have general discussions that talk about general principles but
do not lead to specific design.  Also, I have found that people are more
sensitive to potential problems when they can translate an API into
their own context.  General principles are often too distant for the
omissions to show.  John, Leila, got the time?

Peter
----
Peter Neelin (neelin@bic.mni.mcgill.ca)

From minc-development@bic.mni.mcgill.ca Tue Jan 7 21:46:54 2003
From: minc-development@bic.mni.mcgill.ca (Leila Baghdadi)
Date: Tue, 7 Jan 2003 16:46:54 -0500 (EST)
Subject: [MINC-development] MINC 2.0 draft
Message-ID: 

Hi everyone,

Peter, I read your mail and I believe you have explained some of the
very important points; in fact, believe it or not, a couple of points
are now clear to me.

1) NetCDF versus HDF: I agree that it is at least worth investigating,
and because I am very curious to understand how it works (at least on an
abstract level), I will look into it myself.  I know that in '98 they
introduced HDF5, which was motivated by some of the limitations of HDF
itself, such as no support for files larger than 2 GB.  (A short HDF5
sketch follows this message.)

2) As for compression and NetCDF, they have made the point that it is
possible but not optimal, with no reasons given, at least to the extent
of my reading.  Mysterious NetCDF.  In or out?

3) I believe you have raised an important issue and explained it very
clearly.  Perhaps the option of lots of memory is in fact something we
are trying to avoid to begin with, so we have to come up with a good
strategy for file-writing.

4) My understanding is that 2N-1 is the number we found for the blocking
scheme.  Apart from fewer pointers in C, I am not sure what else I can
comment on right now; maybe John can come up with a better explanation.

5) Believe it or not, we have been having the same discussion about
SetVoxel() using wavelet compression, and came up with some ridiculously
huge number.  We are still not sure if wavelets are worth pursuing!!

6) This is also one of those points where further investigation might be
a good idea.  I am especially curious about the computation cost.

7) I was not aware that volume_io does not handle vector data very well,
but I think maybe it is not a bad idea to have a new version of
volume_io without these limitations.

I also like the idea of an API definition.  I think it will simplify
things and make our development process faster, which is one of our main
priorities.  Hopefully we can come up with a rough draft before the
meeting!

Thanks,
Leila
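Since the NetCDF-versus-HDF question keeps coming up: a sketch of how
HDF5 exposes chunked, zlib-compressed datasets, where the chunk is
exactly the compression/random-access unit discussed in Peter's point 2.
This uses HDF5 1.8+ calling conventions with error checking omitted, and
the file and dataset names are illustrative:

    #include <hdf5.h>

    /* Write a 256^3 short volume as 16^3 chunks, each deflated with
     * zlib.  A reader then decompresses only the chunks that a given
     * hyperslab request actually touches. */
    int write_compressed_volume(const short *data)
    {
        hsize_t dims[3]  = {256, 256, 256};
        hsize_t chunk[3] = {16, 16, 16};

        hid_t file  = H5Fcreate("volume.h5", H5F_ACC_TRUNC,
                                H5P_DEFAULT, H5P_DEFAULT);
        hid_t space = H5Screate_simple(3, dims, NULL);

        hid_t dcpl = H5Pcreate(H5P_DATASET_CREATE);
        H5Pset_chunk(dcpl, 3, chunk);   /* chunking enables per-block IO */
        H5Pset_deflate(dcpl, 6);        /* zlib compression, level 6     */

        hid_t dset = H5Dcreate(file, "/image", H5T_NATIVE_SHORT, space,
                               H5P_DEFAULT, dcpl, H5P_DEFAULT);
        H5Dwrite(dset, H5T_NATIVE_SHORT, H5S_ALL, H5S_ALL,
                 H5P_DEFAULT, data);

        H5Dclose(dset);
        H5Pclose(dcpl);
        H5Sclose(space);
        H5Fclose(file);
        return 0;
    }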
From minc-development@bic.mni.mcgill.ca Fri Jan 10 05:58:38 2003
From: minc-development@bic.mni.mcgill.ca (Steve ROBBINS)
Date: Fri, 10 Jan 2003 00:58:38 -0500
Subject: [MINC-development] Re: time for 1.1?
In-Reply-To: <20021211102153.C1431417@shadow.bic.mni.mcgill.ca>; from stever@shadow.bic.mni.mcgill.ca on Wed, Dec 11, 2002 at 10:21:54AM -0500
References: <20021211102153.C1431417@shadow.bic.mni.mcgill.ca>
Message-ID: <20030110005838.B3096582@shadow.bic.mni.mcgill.ca>

Howdy,

Bert and I are pleased to announce the first pre-release of MINC 1.1.
Since the whole build system has been revamped, I'm making a PRE-release
source package available at

    www.bic.mni.mcgill.ca/~stever/Software/Prerelease

The package now uses automake and libtool, so the build should be a
conventional "./configure; make" procedure.  I'd be grateful if folks
could test out the build and let me know of any rough spots.  Please be
sure to run "make check" and let me know if any of the checks fail.

The build docs in README and GETTING_STARTED are not yet updated, so
just ignore them.  If you have netcdf in a weird location, e.g.
/opt/foo/{include,lib}, configure using --with-build-path=/opt/foo .

We haven't put any energy into the fortran stuff.  If the FORTRAN
bindings are important to you, let us know!

-Steve

From minc-development@bic.mni.mcgill.ca Sun Jan 12 03:21:02 2003
From: minc-development@bic.mni.mcgill.ca (Peter NEELIN)
Date: Sat, 11 Jan 2003 22:21:02 -0500
Subject: [MINC-development] Re: time for 1.1?
In-Reply-To: <20030110005838.B3096582@shadow.bic.mni.mcgill.ca>
Message-ID: 

On Fri, 10 Jan 2003, Steve ROBBINS wrote:

> Since the whole build system has been revamped, I'm making a
> PRE-release source package available at
>
>    www.bic.mni.mcgill.ca/~stever/Software/Prerelease

This is great!

One very minor point, but perhaps of interest for historical accuracy:
MINC development was started in 1992, not 1993 (see the AUTHORS file).
The first RCS check-in was on September 9, 1992 (according to cvs log,
which agrees with my memory), although library development happened
mostly in the summer (while Alan was away for five weeks canoeing down
the Mackenzie).  Likewise, volume_io development started in 1992 (check
Prog_utils/alloc.c), although MINC support was not added until early
1993.

A note about minccalc: I tried to make sure that the build would not try
to run flex or bison unless the sources were changed, so that people
without these tools could still build the package.  That is why the
corresponding .c files are checked in.  It is important to ensure that
the .c files are checked in after the .y and .l files (so that make will
not see them as out of date) and that the build process will only run
flex and bison if necessary.  It seems to do this (although reading the
make output is quite something!), but I thought that I would mention it
as something to keep in mind.

> We haven't put any energy into the fortran stuff.  If the FORTRAN
> bindings are important to you, let us know!

I stopped building the fortran by default, but left it in for those who
might be interested.  Since the old Makefile (now broken) and the .c
files (yes, the fortran interface is in C) are all there, it should be
fairly easy for someone who cares to build the wrappers by hand and add
them to the library (I think that it was only ever used under irix).
Peter
----
Peter Neelin (neelin@bic.mni.mcgill.ca)

From minc-development@bic.mni.mcgill.ca Sun Jan 12 16:17:48 2003
From: minc-development@bic.mni.mcgill.ca (Steve ROBBINS)
Date: Sun, 12 Jan 2003 11:17:48 -0500
Subject: [MINC-development] Re: time for 1.1?
In-Reply-To: ; from neelin@bic.mni.mcgill.ca on Sat, Jan 11, 2003 at 10:21:02PM -0500
References: <20030110005838.B3096582@shadow.bic.mni.mcgill.ca>
Message-ID: <20030112111748.A3882964@shadow.bic.mni.mcgill.ca>

On Sat, Jan 11, 2003 at 10:21:02PM -0500, Peter NEELIN wrote:
> On Fri, 10 Jan 2003, Steve ROBBINS wrote:
>
> > Since the whole build system has been revamped, I'm making a
> > PRE-release source package available at
> >
> >    www.bic.mni.mcgill.ca/~stever/Software/Prerelease
>
> This is great!
>
> One very minor point, but perhaps of interest for historical accuracy:
> MINC development was started in 1992, not 1993

Thanks!  I think I took the dates from the copyright statement in one of
the files.  Clearly, I should have checked more carefully.  For example,
the file "minc.h" says it was created on July 24, 1992.  I'll correct
the dates in the files.  If you have any other historical information to
add (minor or otherwise), I'd be delighted to have it included.

> A note about minccalc: I tried to make sure that the build would not
> try to run flex or bison unless the sources were changed, so that
> people without these tools could still build the package.  That is why
> the corresponding .c files are checked in.  It is important to ensure
> that the .c files are checked in after the .y and .l files (so that
> make will not see them as out of date) and that the build process will
> only run flex and bison if necessary.

That's a good point.  Here's what happens now.

1. The configure script will probe for yacc or bison and for lex or
flex.  Even if they are not found, the build will succeed.  You will get
a diagnostic if the .c file is out of date with respect to the .y file.
Presumably this is okay: anyone who is modifying the .y file ought to
know that yacc is required to rebuild successfully.

2. The tarball is built using "make dist", which depends on "make all",
so the distribution should be built with up-to-date .c files.
Obviously, it is vital that the distribution-maker (me, this time) has
yacc & lex installed.  But someone building from the tarball need not
have them.

> > We haven't put any energy into the fortran stuff.  If the FORTRAN
> > bindings are important to you, let us know!
>
> I stopped building the fortran by default, but left it in for those
> who might be interested.

Yes, that's basically my attitude at the moment.  If there's a critical
mass of people that require FORTRAN bindings, I'll look at automakifying
it all.

-Steve

From minc-development@bic.mni.mcgill.ca Tue Jan 14 06:21:40 2003
From: minc-development@bic.mni.mcgill.ca (Steve ROBBINS)
Date: Tue, 14 Jan 2003 01:21:40 -0500
Subject: [MINC-development] MINC 2.0 draft
In-Reply-To: ; from baghdadi@sickkids.ca on Tue, Jan 07, 2003 at 04:46:54PM -0500
References: 
Message-ID: <20030114012140.A4043802@shadow.bic.mni.mcgill.ca>

Howdy,

I've finally read the proposal and the various emails about it.  The
changes proposed for the library code itself all seem reasonable.
I worry a bit about changing the on-disk layout, since that will add
complexity that can never be removed.

On Tue, Jan 07, 2003 at 04:46:54PM -0500, Leila Baghdadi wrote:

> I also like the idea of an API definition.  I think it will simplify
> things and make our development process faster, which is one of our
> main priorities.

I wonder if it would also be valuable to sketch out the kinds of
applications that use MINC and what the requirements of each would be.
It might be helpful in thinking about the on-disk layout and algorithms
appropriate to the lowest level of MINC.  I can only think of a few
kinds of applications.

1. Simple processing.  Scan through a file performing a computation on
each voxel (or a neighbourhood of, say, 5x5x5 voxels) and perhaps
writing the result to a new file.  Could be multiple input files.  Think
of mincmath or minccalc.  (This pattern is sketched in code after this
message.)

2. Visualization.  Typically wants planar slices through the data.
Read-only.

3. Complicated processing.  Input files might be read in arbitrary
order.  For example, one input may be scanned and locations mapped
through a transform into the second volume -- imagine computing an image
similarity measure between one image and a transformed version of a
second image.

I'm surely oversimplifying life here.  What kinds of access patterns
occur in your applications?

Now I'm trying to figure out how the three listed requirements (large
datasets, multi-resolution, and compression) impact these kinds of
applications.

1. Simple processing.  Can handle large files with little memory using
voxel_loop (or a generalization).  Reading a file in multiresolution
form seems likely to require more disk seeking, which would be
detrimental.  Ditto for "blocked" files, unless you got lucky and all
the inputs have the same block structure.  Ditto for compression that is
block-oriented.

2. Visualization.  For large files, a multiresolution scheme that allows
you to navigate through a low-res version and progressively fills in
detail seems like a good idea.  It's less clear to me whether a
block-structured file would help.  Would it?  Compression would likely
be detrimental.

3. Complicated processing.  If the volume that you need to access in an
unpredictable fashion won't fit in memory, you'd pretty much be forced
to do a lot of disk seeks, I suspect.  If the file were multi-resolution
or compressed, you'd expect even more slowdowns.  I suspect that whether
the file is block-structured or not wouldn't matter much.  I'm just
guessing here -- does anyone have hard data on this?

In summary, my limited understanding suggests that applications in the
first category are best served by the simplest disk layout, such as the
current MINC.  Visualizations would likely benefit from a multi-res
layout if the file is too large to fit in memory.  And category #3 is
doomed no matter what you do.

To help better balance the inevitable tradeoffs, we should also consider
how often each type of application is used.  Clearly visualization is
important, and it is speed-sensitive.  For the "processing"
applications, the ones that I can think of either fall into the "simple"
category (e.g. most of the minc tools) or they keep the entire volume(s)
in memory (minctracc, for example).

Could we possibly come up with an extension to MINC that is both
forward- and backward-compatible?  For example, use the current MINC
format with some extra variables that multi-res-aware visualization
tools can take advantage of?  [Of course the compression would break
forwards-compatibility, but presumably one could use a "mincuncompress"
utility to get an uncompressed new-style MINC file.]

-Steve
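A sketch of the "simple processing" access pattern from category 1.
This is only the shape of a voxel_loop-style pass over a raw volume, not
the real voxel_loop API; all names here are invented:

    #include <stdio.h>
    #include <stdlib.h>

    typedef double (*voxel_fn)(double);

    /* Stream a raw volume slice by slice so that memory use is
     * O(slice), not O(volume): read, transform each voxel, write. */
    void slice_loop(FILE *in, FILE *out,
                    size_t nslices, size_t slice_voxels, voxel_fn fn)
    {
        double *slice = malloc(slice_voxels * sizeof *slice);
        size_t z, i;

        for (z = 0; z < nslices; z++) {
            fread(slice, sizeof *slice, slice_voxels, in);
            for (i = 0; i < slice_voxels; i++)
                slice[i] = fn(slice[i]);   /* the per-voxel computation */
            fwrite(slice, sizeof *slice, slice_voxels, out);
        }
        free(slice);
    }

Sequential access like this is why the simplest disk layout serves
category 1 best: every byte is touched exactly once, in file order.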
From minc-development@bic.mni.mcgill.ca Tue Jan 14 06:35:34 2003
From: minc-development@bic.mni.mcgill.ca (Andrew Janke)
Date: Tue, 14 Jan 2003 16:35:34 +1000
Subject: [MINC-development] MINC 2.0 draft
In-Reply-To: <20030114012140.A4043802@shadow.bic.mni.mcgill.ca>
Message-ID: 

On Tue, 14 Jan 2003, Steve ROBBINS wrote:

> The changes proposed for the library code itself all seem reasonable.
> I worry a bit about changing the on-disk layout, since that will
> add complexity that can never be removed.

> 1. Simple processing.  Scan through a file performing a computation on
> each voxel (or a neighbourhood of, say, 5x5x5 voxels) and perhaps
> writing the result to a new file.  Could be multiple input files.
> Think of mincmath or minccalc.

While you mention this, the fastest way I have found to prototype code
to do this is a mangy perl script of mine that calls mincreshape,
minc_modify_header and minccalc... i.e. create a few copies of the same
file for all the offsets you are interested in, and then just use
minccalc to fly through the volumes in question doing your neighbourhood
operation. :)

> I'm surely oversimplifying life here.  What kinds of access patterns
> occur in your applications?

Very little apart from the ones you mention.

> 2. Visualization.  For large files, a multiresolution scheme that
> allows you to navigate through a low-res version and progressively
> fills in detail seems like a good idea.  It's less clear to me whether
> a block-structured file would help.  Would it?  Compression would
> likely be detrimental.

For the purposes of block-structured data and visualisation, this is a
bit of a non-issue, as OpenGL provides mip-mapping as it is (from a
high-res volume).  However, for the sizes of most MINC volumes
(excluding the MICe mob!) 256x256 is ample.  This is all implemented in
the recalcitrant viewnup.

> Visualizations would likely benefit from a multi-res layout if the
> file is too large to fit in memory.

Likely for home-grown visualisation, but not for OpenGL apps that use
texture mapping (the most logical choice for voxel data).

a

From minc-development@bic.mni.mcgill.ca Tue Jan 14 22:07:29 2003
From: minc-development@bic.mni.mcgill.ca (Steve ROBBINS)
Date: Tue, 14 Jan 2003 17:07:29 -0500
Subject: [MINC-development] MINC 2.0 draft
In-Reply-To: ; from rotor@cmr.uq.edu.au on Tue, Jan 14, 2003 at 04:35:34PM +1000
References: <20030114012140.A4043802@shadow.bic.mni.mcgill.ca>
Message-ID: <20030114170729.A3898021@shadow.bic.mni.mcgill.ca>

On Tue, Jan 14, 2003 at 04:35:34PM +1000, Andrew Janke wrote:
> On Tue, 14 Jan 2003, Steve ROBBINS wrote:
> > 1. Simple processing.  Scan through a file performing a computation
> > on each voxel (or a neighbourhood of, say, 5x5x5 voxels) and perhaps
> > writing the result to a new file.  Could be multiple input files.
> > Think of mincmath or minccalc.
>
> While you mention this, the fastest way I have found to prototype code
> to do this is a mangy perl script of mine that calls mincreshape,
> minc_modify_header and minccalc... i.e. create a few copies of the
> same file for all the offsets you are interested in, and then just use
> minccalc to fly through the volumes in question doing your
> neighbourhood operation. :)

A true hack!  Impressive.

> > 2. Visualization.
> > For large files, a multiresolution scheme that allows you to
> > navigate through a low-res version and progressively fills in detail
> > seems like a good idea.  It's less clear to me whether a
> > block-structured file would help.  Would it?  Compression would
> > likely be detrimental.
>
> For the purposes of block-structured data and visualisation, this is a
> bit of a non-issue, as OpenGL provides mip-mapping as it is (from a
> high-res volume).  However, for the sizes of most MINC volumes
> (excluding the MICe mob!) 256x256 is ample.

Yeah.  But for the purposes of this discussion, "large files" means
those of the MICe mob and Jason's high-res cadaver head (10k by 8k by 6k
voxels).

-S

From minc-development@bic.mni.mcgill.ca Wed Jan 15 01:31:57 2003
From: minc-development@bic.mni.mcgill.ca (Andrew Janke)
Date: Wed, 15 Jan 2003 11:31:57 +1000
Subject: [MINC-development] MINC 2.0 draft
In-Reply-To: <20030114170729.A3898021@shadow.bic.mni.mcgill.ca>
Message-ID: 

On Tue, 14 Jan 2003, Steve ROBBINS wrote:

> Yeah.  But for the purposes of this discussion, "large files" means
> those of the MICe mob and Jason's high-res cadaver head (10k by 8k by
> 6k voxels).

O..

a

From minc-development@bic.mni.mcgill.ca Fri Jan 17 22:33:12 2003
From: minc-development@bic.mni.mcgill.ca (Steve ROBBINS)
Date: Fri, 17 Jan 2003 17:33:12 -0500
Subject: [MINC-development] MINC 1.1
Message-ID: <20030117173311.C4133215@shadow.bic.mni.mcgill.ca>

The MINC development team is pleased to announce the release of MINC
1.1.  The source code is available now at

    http://www.bic.mni.mcgill.ca/software/distribution/packages/minc-1.1.tar.gz

User-visible changes since 1.0 include:

* All MINC programs have a manpage.
* Rawtominc has a new "-skip" option, to allow skipping header
  information.
* Mincstats option "-max_bins" renamed to "-int_max_bins", to avoid a
  clash with "-max".
* Minccalc has new functions: tan, asin, acos, and atan.

Bugs have been fixed relating to:

* building on 64-bit platforms
* a coredump in mincstats
* inverted concatenated transforms

The build process has been revamped (using automake and libtool): shared
libraries are now easily buildable.

Have fun!

From minc-development@bic.mni.mcgill.ca Thu Jan 23 07:09:13 2003
From: minc-development@bic.mni.mcgill.ca (Andrew Janke)
Date: Thu, 23 Jan 2003 17:09:13 +1000
Subject: [MINC-development] mincpik
Message-ID: 

I have updated the source in /s/s/minc_dev/mincpik.  There was a bug in
which some images may not have been oriented correctly.  Hrm... (the
+direction option doesn't exactly do what it might without a -dimsize
{z,y,x}space=-1 option or 3.)  Hrm.

--
Andrew Janke ( rotor@cmr.uq.edu.au || www.cmr.uq.edu.au/~rotor )
Australia->University of Queensland->Centre for Magnetic Resonance
Work: +61 7 3365 4100 || Home: +61 7 3800 4042

From minc-development@bic.mni.mcgill.ca Thu Jan 23 14:52:08 2003
From: minc-development@bic.mni.mcgill.ca (Steve ROBBINS)
Date: Thu, 23 Jan 2003 09:52:08 -0500
Subject: [MINC-development] mincpik
In-Reply-To: ; from rotor@cmr.uq.edu.au on Thu, Jan 23, 2003 at 05:09:13PM +1000
References: 
Message-ID: <20030123095208.A4478036@shadow.bic.mni.mcgill.ca>

On Thu, Jan 23, 2003 at 05:09:13PM +1000, Andrew Janke wrote:
>
> I have updated the source in /s/s/minc_dev/mincpik.

Mincpik is a very useful tool.  Is there any reason we shouldn't
incorporate it into the MINC sources?
-S

From minc-development@bic.mni.mcgill.ca Thu Jan 23 23:34:30 2003
From: minc-development@bic.mni.mcgill.ca (Andrew Janke)
Date: Fri, 24 Jan 2003 09:34:30 +1000
Subject: [MINC-development] mincpik
In-Reply-To: <20030123095208.A4478036@shadow.bic.mni.mcgill.ca>
Message-ID: 

On Thu, 23 Jan 2003, Steve ROBBINS wrote:

> > I have updated the source in /s/s/minc_dev/mincpik.
>
> Mincpik is a very useful tool.  Is there any reason we shouldn't
> incorporate it into the MINC sources?

Peter expressed a (fairly good) reason for this a while back to me.  He
thinks it shouldn't be included, as it then brings perl into the base
minc distro.  In addition, it really should be called volpik, as it
doesn't work (I don't think) for 2D volumes. :(.  As such, I have been
working on my suite of "volxxx" tools and envisaged making a separate
voltools distro in the future.

a

From minc-development@bic.mni.mcgill.ca Fri Jan 24 00:36:13 2003
From: minc-development@bic.mni.mcgill.ca (Peter NEELIN)
Date: Thu, 23 Jan 2003 19:36:13 -0500
Subject: [MINC-development] mincpik
In-Reply-To: 
Message-ID: 

On Fri, 24 Jan 2003, Andrew Janke wrote:

> Peter expressed a (fairly good) reason for this a while back to me.
> He thinks it shouldn't be included, as it then brings perl into the
> base minc distro.

Times have changed; perl is fairly ubiquitous, so perhaps it is time to
drop that rule...  Also, with a couple of eager packagers, the
configuration could be made to do the work of figuring out where perl
lives (if anywhere) and do the right thing.  On the other hand, perhaps
a separate package of minc extras might make it easier for those only
interested in "core" minc.  Thoughts, Steve?

Peter
----
Peter Neelin (neelin@bic.mni.mcgill.ca)

From minc-development@bic.mni.mcgill.ca Fri Jan 24 01:38:27 2003
From: minc-development@bic.mni.mcgill.ca (Steve ROBBINS)
Date: Thu, 23 Jan 2003 20:38:27 -0500
Subject: [MINC-development] mincpik
In-Reply-To: ; from rotor@cmr.uq.edu.au on Fri, Jan 24, 2003 at 09:34:30AM +1000
References: <20030123095208.A4478036@shadow.bic.mni.mcgill.ca>
Message-ID: <20030123203827.B4478036@shadow.bic.mni.mcgill.ca>

On Fri, Jan 24, 2003 at 09:34:30AM +1000, Andrew Janke wrote:
> On Thu, 23 Jan 2003, Steve ROBBINS wrote:
>
> > > I have updated the source in /s/s/minc_dev/mincpik.
> >
> > Mincpik is a very useful tool.  Is there any reason we shouldn't
> > incorporate it into the MINC sources?
>
> Peter expressed a (fairly good) reason for this a while back to me.
> He thinks it shouldn't be included, as it then brings perl into the
> base minc distro.

I'm not convinced by this argument.  MINC already has a tool (albeit a
rather obscure one) with nontrivial requirements: mincview uses "xv".

On Thu, Jan 23, 2003 at 07:36:13PM -0500, Peter NEELIN wrote:

> Times have changed; perl is fairly ubiquitous, so perhaps it is time
> to drop that rule...  Also, with a couple of eager packagers, the
> configuration could be made to do the work of figuring out where perl
> lives (if anywhere) and do the right thing.

Yes, absolutely.  We do this already for mni_autoreg, e.g.

> On the other hand, perhaps a separate package of minc extras might
> make it easier for those only interested in "core" minc.  Thoughts,
> Steve?

I think small scripts like volpik fit well in core MINC, myself.
It's along the lines of mincdiff and mincedit (and Andrew's
minchistory).  It's much easier on someone installing from source to get
everything at once.  The binary package makers are still free to split
things up: Debian's packaging has separate "minc-tools" and
"libminc-dev" packages, for example.

-Steve

From minc-development@bic.mni.mcgill.ca Fri Jan 24 05:19:37 2003
From: minc-development@bic.mni.mcgill.ca (Andrew Janke)
Date: Fri, 24 Jan 2003 15:19:37 +1000
Subject: [MINC-development] mincpik
In-Reply-To: <20030123203827.B4478036@shadow.bic.mni.mcgill.ca>
Message-ID: 

On Thu, 23 Jan 2003, Steve ROBBINS wrote:

> > Peter expressed a (fairly good) reason for this a while back to me.
> > He thinks it shouldn't be included, as it then brings perl into the
> > base minc distro.
>
> I'm not convinced by this argument.  MINC already has a tool (albeit a
> rather obscure one) with nontrivial requirements: mincview uses "xv".

And mincpik would add a dependency on ImageMagick (convert), the
"competition" to xv. :)

> > Times have changed; perl is fairly ubiquitous, so perhaps it is time
> > to drop that rule...  Also, with a couple of eager packagers, the
> > configuration could be made to do the work of figuring out where
> > perl lives (if anywhere) and do the right thing.
>
> Yes, absolutely.  We do this already for mni_autoreg, e.g.

I thought '#! /usr/bin/env perl' was the right thing?

> > On the other hand, perhaps a separate package of minc extras
>
> I think small scripts like volpik fit well in core MINC, myself.  It's
> along the lines of mincdiff and mincedit (and Andrew's minchistory).
> It's much easier on someone installing from source to get everything
> at once.  The binary package makers are still free to split things up:
> Debian's packaging has separate "minc-tools" and "libminc-dev"
> packages, for example.

Fine by me if you wish to add it, but in that case I should do a bit of
work such that it will work somewhat reliably for 2D/3D/4D data.  Anyone
have any thoughts on what should happen when one asks for a coronal
slice from a minc volume that consists of a single transverse slice?
What the user asked for? :)

--
Andrew Janke ( rotor@cmr.uq.edu.au || www.cmr.uq.edu.au/~rotor )
Australia->University of Queensland->Centre for Magnetic Resonance
Work: +61 7 3365 4100 || Home: +61 7 3800 4042

From minc-development@bic.mni.mcgill.ca Mon Jan 27 16:06:05 2003
From: minc-development@bic.mni.mcgill.ca (Robert VINCENT)
Date: Mon, 27 Jan 2003 11:06:05 -0500
Subject: [MINC-development] Windoze
Message-ID: 

Hi all,

I entertained myself this weekend by building NetCDF 3.5 and MINC 1.1 on
Windows ME (I lead an exciting life, huh?).  I have no idea if this was
actually a useful exercise.  Can I get a show of hands?  How many folks
out there could use MINC on Windows if it were generally available?  And
what other pieces besides the core distribution would have to be ported
for the whole package to be truly useful?

I created Makefiles for both GNU make (from Cygwin) and Microsoft NMAKE.
I just used Microsoft's compiler; it handles 99.9% of the code just
fine - I had to add a few #ifdefs here and there, perhaps changing 10 or
20 lines of code.  I tried several of the NetCDF and MINC tests and
everything checked out fine, although there is a minor but annoying
inconsistency in floating-point output from printf().

There are several large questions to be answered yet, though - for
example, how to handle shell and perl scripts.
Obviously there are solutions; I'm curious if anyone knows of an
especially good perl implementation for Win32.

If anyone is interested, I have a binary ZIP file I can make available
for evaluation & testing purposes.  Otherwise I'll just find other ways
to entertain myself next weekend...

-bert

From minc-development@bic.mni.mcgill.ca Mon Jan 27 16:12:12 2003
From: minc-development@bic.mni.mcgill.ca (Jason Lerch)
Date: 27 Jan 2003 11:12:12 -0500
Subject: [MINC-development] Windoze
In-Reply-To: 
References: 
Message-ID: <1043683932.14807.6.camel@dennis.bic.mni.mcgill.ca>

> I entertained myself this weekend by building NetCDF 3.5 and MINC 1.1
> on Windows ME (I lead an exciting life, huh?).  I have no idea if this
> was actually a useful exercise.  Can I get a show of hands?  How many
> folks out there could use MINC on Windows if it were generally
> available?

Not I.

> There are several large questions to be answered yet, though - for
> example, how to handle shell and perl scripts.  Obviously there are
> solutions; I'm curious if anyone knows of an especially good perl
> implementation for Win32.

Try ActiveState - they usually provide the most solid distro of things
such as python and perl for MS folks.

    http://www.activestate.com/

Cheers, and thanks for doing this work!

Jason

From minc-development@bic.mni.mcgill.ca Wed Jan 29 08:56:22 2003
From: minc-development@bic.mni.mcgill.ca (Andrew Janke)
Date: Wed, 29 Jan 2003 18:56:22 +1000
Subject: [MINC-development] volregrid.
Message-ID: 

For those who are interested (none? :) the -range option now works...
i.e. it can output data in ranges other than 0..1.  It even rescales
data correctly now.  What more could you ask?

--
Andrew Janke ( rotor@cmr.uq.edu.au || www.cmr.uq.edu.au/~rotor )
Australia->University of Queensland->Centre for Magnetic Resonance
Work: +61 7 3365 4100 || Home: +61 7 3800 4042
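The -range behaviour Andrew describes amounts to a linear remap of the
data range.  A minimal sketch of that rescaling (illustrative only, not
volregrid's actual code):

    #include <stddef.h>

    /* Remap values from [in_min, in_max] onto a requested output range
     * [out_min, out_max], the way a -range style option implies. */
    void rescale(double *v, size_t n,
                 double in_min, double in_max,
                 double out_min, double out_max)
    {
        double s = (out_max - out_min) / (in_max - in_min);
        size_t i;

        for (i = 0; i < n; i++)
            v[i] = out_min + (v[i] - in_min) * s;
    }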