Skandha4/Slisp Feature Wish List
A collection of items that would be great to add to Skandha4/Slisp
at some point.
Skandha wish list
- Look at exporting geometry to global data structure in
anticipation of Skandha running inside an AVS module.
- Support fiducials for Dave. Save in 'master.fid' as text sexp?
- Some sort of ortho draw mode to support 2d drawing with inner
loop not in lisp.
- Read/write pbm or fbm or rgb or ... files, to provide image
processing in SkandhaMorpho via a standard image-processing
toolset.
- Complete emulation of Skandha3 in Skandha4:
- Improve Skandha4 graphics functionality:
- figure out how to represent patch adjacency info
- operations on polygon sets, esp. sectioning by a plane
- figure out nice way to save animation scripts
- figure out how to save mixed skandha3 and skandha4 files (?)
- finish extrusion capability so it's trivial to make
noodles etc for John.
- Import/export geometry in some standard binary format, maybe dxf
(autocad) -- that's what people seem to refer to most often.
- Rationalize demos
- discard obsolete ones,
- make xlisp give access to all of the remaining ones.
- Update xcore with latest xlisp.
Skandha Morpho wish list
- contour editing
- a "step" variable for next/previous slices (currently
hard-wired to step one slice at a time, but variable step would
be nice for editing a subset of the Visible Human images)
Additional projects from Jeff's "parting software overview" (January 1997)
- Set up skandha so it can run as a process on a Web server.
-
Jim's had no problem running Skandha as a
client-side process; Running it server-side can be
done just the same, except that small cgi-bin
hacks will be writing to the named pipe instead of
client-side stuff, and the Skandha4 display won't
appear as naturally on the client's screen.
Solutions to the latter include using the DISPLAY
environment variable to redirect Skandha4 output
to the client machine (probably seldom a useful
approach -- in cases where this would work, just
using the existing client-side solution would work
better) and running off screen snapshots. (See
next.)
- Function to take a snapshot of the current rendering and create a gif image.
-
Taking screen snapshots is already done in the
Skandha3 emulator (see
src/slisp/xsk3/lsp/xsk3-class-quicktime.lsp) --
one just uses the slisp "system" function to run a
script or program which ultimately uses the SGI
'snapshot' program to grab a given screen area,
the result of which can then be massaged from
native SGI image format into any desired format
using various filters including the netpbm ones
(many are installed in /usr/local/bin) and the SGI
4Dgifts ones (some of which are likewise in
/usr/local/bin). Many useful filters and programs
are hidden in the 4Dgifts hierarchy but otherwise
undocumented -- it is worth skimming this file
hierarchy to see what is available.
- Document how to set up skandha to run in a named
pipe on the same machine as a Web server so it
does not have to be started each time. (An
alternative to Scott's approach).
-
Scott has since resolved this issue -- his
solution sounds essentially optimal to me.
- Function to cut current 3-D models (more general brain models, not skandha3
models) in an arbitrary plane, optionally capping
the two ends, and returning two models.
-
A zeroth approximation is very easy: Write a C
primitive which tests every vertex of every
triangle against the plane equation ax+by+cz=d,
and sets the triangle to be invisible if any of
them are positive/negative (as preferred). We
have bitvectors and can use them to control
visibility of triangles, so this is a snap.
For extra credit, accept a set of planes rather
than just a single one, and set the polygon
invisible only if it passes all of the tests --
this lets you drill square holes &tc rather than
just slice half the world off, and the incremental
extra coding effort is quite low. I'd write it to
accept the set of planes in the form of a graphic
relation of triangles, since this is a datatype we
already have: Then you can use the existing
insert-box &tc tools to create and position the
set of cutting planes.
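To make the zeroth approximation concrete, here is
a rough C sketch. The Vertex/Triangle/Plane layouts
and the per-triangle visibility bytes are
hypothetical stand-ins for Skandha4's actual
graphic-relation internals -- the shape of the
primitive, not drop-in code:

    #include <stddef.h>

    typedef struct { double x, y, z; }    Vertex;
    typedef struct { int v0, v1, v2; }    Triangle;
    typedef struct { double a, b, c, d; } Plane;  /* ax + by + cz = d */

    /* Signed side of a vertex relative to a plane. */
    static double side (const Plane *p, const Vertex *v)
    {
        return p->a * v->x + p->b * v->y + p->c * v->z - p->d;
    }

    /* Mark triangle i invisible if, for EVERY plane in the set, at
     * least one of its vertices is on the positive side -- i.e. it
     * passes all the tests.  With nplanes == 1 this degenerates to
     * the simple slice-half-the-world-off case. */
    void cut_by_planes (const Vertex   *verts,
                        const Triangle *tris,   size_t ntris,
                        const Plane    *planes, size_t nplanes,
                        unsigned char  *visible /* 1 byte/triangle */)
    {
        size_t i, j;
        for (i = 0; i < ntris; ++i) {
            const Vertex *v0 = &verts[tris[i].v0];
            const Vertex *v1 = &verts[tris[i].v1];
            const Vertex *v2 = &verts[tris[i].v2];
            visible[i] = 1;
            for (j = 0; j < nplanes; ++j)
                if (side(&planes[j], v0) <= 0 &&
                    side(&planes[j], v1) <= 0 &&
                    side(&planes[j], v2) <= 0)
                    break;       /* fails this plane's test: keep it */
            if (j == nplanes)    /* on the cut side of all planes */
                visible[i] = 0;
        }
    }

In a real primitive the visibility bytes would of
course be the existing triangle bitvector rather
than an array of chars.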
For a first approximation, the program should
also insert new triangles representing the visible
part of bisected triangles, so as to produce a
smooth rather than jagged edge. This can be
viewed as a much simpler version of the existing
marching cubes code, if desired -- detailwork
of how to allocate space for the triangles &tc
can be copied from there if
desired. (src/slisp/xvol/c/*)
For pizzazz, the cutting plane can be displayed
either as a plain polygon, as a polygon with
applied texture, or as a set of polygons
simulating texture via color-sampled vertices and
Gouraud shading (faster than texturemapping on our
machines, and looks just about as good). This is
all do-able directly in xlisp without any new C
primitives. Kevin has working examples of this
sort of stuff in his scanner program; The brain
mapper also has demonstration code of this sort
-- do "show one dataset" in it, then select
"Display: 3-D Surfaces".
To really do it up right, of course, one should
construct a cap to cover the section sliced
off, rather than just put a plane over that
entire end. Really doing this right is quite
a bit of work, although well worthwhile -- this
would be a very general and popular tool.
Particular special cases might be coded up with
easy hacks -- for example, if you knew you were
only going to be cutting a nearly spherical
object, you might be able to get away with
constructing a cap just by determining a midpoint
(average all the points where the cutting plane
intersects a triangle, say) and then drawing a
pie-section from that midpoint out to the two
points where the cutting plane intersects the
triangle, for each sliced triangle in the surface,
say. This is an almost purely local algorithm
which can be done triangle-by-triangle (other
than determining the midpoint) so it would be
easy to hack up.
Another approximate hack might be to use grey
values to determine polygon translucency: If
the current SGI boxes support this on a
triangle-by-triangle basis (or as part of an
applied texture), it is possible that deriving
translucency from MRI grey value by thresholding
would be sufficient to suppress the part of the
cutting plane outside the head, while leaving
a convincing cap.
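The thresholding itself is one line per vertex; a
minimal sketch, assuming 16-bit grey values and a
per-vertex alpha array the hardware will honor:

    #include <stddef.h>

    /* Opaque inside the head (grey above threshold), fully
     * transparent outside. */
    void grey_to_alpha (const unsigned short *grey, float *alpha,
                        size_t n, unsigned short threshold)
    {
        size_t i;
        for (i = 0; i < n; ++i)
            alpha[i] = (grey[i] >= threshold) ? 1.0f : 0.0f;
    }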
The general case requires:
-
Make a collection of all the line segments forming
intersections between the cutting plane and any
triangle in the sliced surface.
-
Organize these segments into closed contours. (If
the original surface was closed, its intersection
with the cutting plane will of course consist only
of closed contours; Otherwise, you must be
prepared to handle open curves as well as closed
contours.) This is most likely best done by a
two-step process of first linking all line
segments which share an endpoint, then doing
connected component analysis on the resulting
graph -- pick a segment, and color all segments
reachable from it, then pick another uncolored
segment and color all segments reachable from
it, and so forth until all segments are colored.
The first step is most simply done by brute
force -- I'd do this first time around. That
will be O(N^2), so you may want to use a sorted
index or hash table to speed it up, later. A
general sort function for graphic relations
would be a cool C-coded primitive to add to
Skandha4, useful for things like this. The
second step can of course be done in linear
time using normal depthfirst search coding.
(See the C sketch following this list.)
-
Tile these contours with polygons. Skandha3's
lidmaker solves exactly this problem, so its code
can be adapted if desired:
skandha3/lid/lid.c:lidEdit_CleanTiling. If you
want to fake texturemapping using Gouraud shading,
you may need to limit the size of the triangles
created by subdividing any edges longer than
some threshold value.
Note that the inserted polygons can in all
cases be placed either in a new graphic
relation, or in the old one: In the
latter case, zapping the triangle-invisibility
bitvector and then resetting the fill-pointer
suffice to restore the original dataset,
so the operation isn't as destructive as
it might appear at first blush.
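To pin down the two-step contour organization, here
is a brute-force C sketch; the Segment record and
the endpoint tolerance are assumptions, not the
real cut-segment representation:

    #include <math.h>
    #include <stddef.h>

    typedef struct { double x, y, z; } Pt;
    typedef struct { Pt end[2]; int color; } Segment;

    static int same_point (const Pt *a, const Pt *b)
    {
        const double eps = 1e-9;   /* shared-endpoint tolerance */
        return fabs(a->x - b->x) < eps &&
               fabs(a->y - b->y) < eps &&
               fabs(a->z - b->z) < eps;
    }

    static int touches (const Segment *a, const Segment *b)
    {
        int i, j;
        for (i = 0; i < 2; ++i)
            for (j = 0; j < 2; ++j)
                if (same_point(&a->end[i], &b->end[j]))
                    return 1;
        return 0;
    }

    /* Color everything reachable from segment i.  Recursion depth
     * is bounded by contour length; an explicit stack would be
     * safer for huge datasets. */
    static void color_from (Segment *segs, size_t n, size_t i, int color)
    {
        size_t j;
        segs[i].color = color;
        for (j = 0; j < n; ++j)       /* brute force: O(N^2) overall */
            if (segs[j].color == 0 && touches(&segs[i], &segs[j]))
                color_from(segs, n, j, color);
    }

    /* Returns the number of contours (connected components). */
    int color_contours (Segment *segs, size_t n)
    {
        size_t i;
        int color = 0;
        for (i = 0; i < n; ++i) segs[i].color = 0;  /* 0 == uncolored */
        for (i = 0; i < n; ++i)
            if (segs[i].color == 0)
                color_from(segs, n, i, ++color);
        return color;
    }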
- Better visualization and interaction for
brain mapper.
- These will be very good for showing
off skandha and the brain project.
Some simple improvements might include:
- Drawing a cube around the model.
Kevin does this, and it is very simple and
effective. Pure-lisp exercise.
- Glass ball manipulation.
Write code to allow rotation (and possibly more)
of the rendered brain via mouse drag in the
viewport rather than sliders. Check what
AVS does for an example of how this can look
and feel. Pure-lisp exercise. Be sure to
rotate around the center of the bounding
box of the object. (See below.)
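One standard way to code the glass ball is the
classic virtual-trackball construction (not
necessarily what AVS does): map the two mouse
positions onto a sphere over the viewport and
derive a rotation axis and angle from them. A C
sketch, assuming mouse coordinates already
normalized to [-1,1]; the resulting rotation should
be applied about the bounding-box center as noted
above:

    #include <math.h>

    typedef struct { double x, y, z; } Vec3;

    /* Project a 2D point onto the unit sphere, or onto a hyperbolic
     * sheet outside it -- the usual trick to keep drags smooth near
     * the silhouette. */
    static Vec3 to_sphere (double x, double y)
    {
        Vec3 p = { x, y, 0.0 };
        double rr = x * x + y * y;
        if (rr < 0.5) p.z = sqrt(1.0 - rr);
        else          p.z = 0.5 / sqrt(rr);
        return p;
    }

    /* Rotation taking drag (x0,y0) -> (x1,y1): axis is the cross
     * product of the two sphere points, angle comes from their dot
     * product. */
    void drag_to_rotation (double x0, double y0, double x1, double y1,
                           Vec3 *axis, double *angle)
    {
        Vec3 a = to_sphere(x0, y0), b = to_sphere(x1, y1);
        double la, lb, dot;
        axis->x = a.y * b.z - a.z * b.y;
        axis->y = a.z * b.x - a.x * b.z;
        axis->z = a.x * b.y - a.y * b.x;
        la  = sqrt(a.x*a.x + a.y*a.y + a.z*a.z);
        lb  = sqrt(b.x*b.x + b.y*b.y + b.z*b.z);
        dot = (a.x*b.x + a.y*b.y + a.z*b.z) / (la * lb);
        if (dot >  1.0) dot =  1.0;   /* clamp against roundoff */
        if (dot < -1.0) dot = -1.0;
        *angle = acos(dot);
    }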
- Cartooning during rotations
When there are too many polygons to draw in 1/10
second, it would be nice during active positioning
of the brain (or whatever) to replace it by some
reduced-resolution representation, be it only a
bounding box. Note that the :max and :min
messages to arrays may be used to efficiently
compute the bounding box, and the insert-box call
to create one: Just swap thinglists in the
downclick-fn, then swap them back in the
upclick-fn, of the glass ball or slider logic
controlling positioning.
For extra credit, dig up a convex-hull algorithm
and use it to generate a better cartoon than the
bounding box provides: Just take a random sample
of 1% of the points and compute a convex hull of
them, say, for a 500,000-triangle brain. The
insert-convex-hull function will need to be a
c-coded primitive, of course.
- Speeding up picking on the brain reconstruction
This means building some sort of spatial index.
Probably the most quick-and-dirty solution would
be to
-
Divide the brain into (say) 16 different graphic
relations, along the natural coordinate axes.
Since the polygons come from marching-cubes, this
can be done without having to subdivide any of
them.
-
Construct another 16 graphic relations, each
just containing the bounding box of the
corresponding relation from the previous
set.
-
To process a hit, check the mouseclick
against each bounding-box: If the click
doesn't hit it, all polygons enclosed
by the box can be ignored. Otherwise,
do a hit-test on the full polygon set
for that relation. Take the closest
of the up to 16 hits resulting.
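To pin down the logic (even though, as noted next,
it could live almost entirely in lisp), here is a
C-flavored sketch using a standard ray-vs-box slab
test for the cheap cull; the Box/Ray layouts and
hit_test_polygons() are hypothetical stand-ins for
the existing pick machinery:

    #include <float.h>

    typedef struct { double lo[3], hi[3]; }   Box;
    typedef struct { double org[3], dir[3]; } Ray;

    /* Standard slab test: nonzero iff the ray passes through the
     * axis-aligned box. */
    static int ray_hits_box (const Ray *r, const Box *b)
    {
        double tmin = -DBL_MAX, tmax = DBL_MAX;
        int i;
        for (i = 0; i < 3; ++i) {
            if (r->dir[i] == 0.0) {
                if (r->org[i] < b->lo[i] || r->org[i] > b->hi[i])
                    return 0;
            } else {
                double t0 = (b->lo[i] - r->org[i]) / r->dir[i];
                double t1 = (b->hi[i] - r->org[i]) / r->dir[i];
                if (t0 > t1) { double t = t0; t0 = t1; t1 = t; }
                if (t0 > tmin) tmin = t0;
                if (t1 < tmax) tmax = t1;
                if (tmin > tmax) return 0;
            }
        }
        return 1;
    }

    /* Hypothetical full hit test on one sub-relation: distance along
     * the ray to the nearest polygon hit, or negative for a miss. */
    extern double hit_test_polygons (const Ray *r, int relation);

    /* Pick over nrel sub-relations: cull by box, then keep the
     * closest of the surviving full hit tests. */
    int pick (const Ray *r, const Box *boxes, int nrel)
    {
        int i, best = -1;
        double best_t = DBL_MAX;
        for (i = 0; i < nrel; ++i) {
            double t;
            if (!ray_hits_box(r, &boxes[i]))
                continue;                  /* whole relation ignored */
            t = hit_test_polygons(r, i);
            if (t >= 0.0 && t < best_t) { best_t = t; best = i; }
        }
        return best;
    }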
The above scheme has the advantage of requiring
almost no new datastructures or C coding, and not
requiring (say) any changes to the existing pick
code: It can be done almost entirely in lisp. The
only place C code is likely to be needed is in
efficiently breaking the original dataset up into
the 16 datasets. The existing pointwise operators
should be sufficient for labelling each point as to
which of the 16 subspaces it is in; One would then
like a new C-prim for copying from one graphic
relation to another all points with a given tag,
and all polygons referring to a point with a
given tag. Not quite trivial, but fairly
straightforward.
The above scheme has the disadvantage of being
fairly clumsy. A nicer solution would be to build
some sort of separate index to an existing graphic
relation, possibly partly or wholly in the form
of new arrays added to the relation, which
index it appropriately without requiring
disruption of the basic data, and then to rewrite
code such as the pick logic to transparently
take advantage of such indices when they exist.
This would be a decidedly nontrivial project
to implement, but once working it would be a
tremendously nice, general tool for the xlisp
programmer.
- Draw open or closed regions on the 3-D model
Pure-lisp exercise.
Look at the existing code in the brain-mapper to
see how to recover the 3D coordinates of a point
clicked on.
(xmri/lsp/xmri.lsp:xmri-map-move-site-popup-camera-upclick-fn
is the heart of the logic. It's a bit convoluted
due to mode-change support so downclicks in that
window can do different things at different
times.)
If you want to draw contours as lines, just set up
a thing of line segments which is separate from
the brainsurface thing of triangles but part of
the same thinglist (see where
xmri/lsp/xmri.lsp:xmri-map-init sets the :things
property of poly-camera) and then insert
linesegments into it based on the 3D xyz coords
you get back from the mouseclicks on the brain,
using insert-facet. You probably don't want the
points/lines exactly on the surface of the brain;
they're likely to get lost: I would take the
middle of the bounding box of the brain as its
center, then move the surface points a few
millimeters out along the radial from the center
to themselves -- easy code to write in lisp
(sketched in C below).
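The radial offset, sketched in C for concreteness
(Pt is a stand-in type; offset is the few
millimeters, in dataset units):

    #include <math.h>

    typedef struct { double x, y, z; } Pt;

    /* Push p outward from center by 'offset' along the radial. */
    Pt push_out (Pt p, Pt center, double offset)
    {
        double dx = p.x - center.x;
        double dy = p.y - center.y;
        double dz = p.z - center.z;
        double len = sqrt(dx*dx + dy*dy + dz*dz);
        if (len > 0.0) {
            double s = (len + offset) / len;
            p.x = center.x + dx * s;
            p.y = center.y + dy * s;
            p.z = center.z + dz * s;
        }
        return p;
    }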
If you want to draw regions as colors on the
brain instead of as lines above it, just
insert vertex-red/green/blue arrays in the
relation holding the brain, then when you
handle a mouseclick on it, use the pointwise
operators to color all vertices within some
radius. Set up a slider to control the
radius, and you now have airbrush painting
capabilities in 3D. Make color drop off
with radius if you like.
Main problem is that the whole brain will get
processed for each click, which will be about as
slow as picking currently is. The same hack
suggested above to speed up picking can be tried,
of course, cutting the brain into 16 smaller
relations. Or if you precompute polygon adjacency
information, you can use it to find all the
polygons close to the polygon clicked on without
further brain-global operations. I'd add to each
facet :neighbor-0 :neighbor-1 :neighbor-2 integer
arrays recording which polygons are adjacent
to each of the three edges of the current polygon:
This fits the current graphic relation structure
neatly without requiring a new relation for
(say) edges. To be really spiffy, you should
use a general graphic-relation sort to compute
the :neighbor-* information in O(N log N) time,
but as a first hack doing it by brute force in
O(N^2) time might be do-able: 100,000 polygons
implies 10^10 operations, which on a machine
doing 10^8 operations per second might only
take a couple of minutes.
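The brute-force version of the :neighbor-*
computation might look like this C sketch, with the
Facet layout standing in for the real per-facet
arrays:

    #include <stddef.h>

    typedef struct {
        int v[3];        /* vertex indices */
        int neighbor[3]; /* facet sharing edge v[i]--v[(i+1)%3] */
    } Facet;

    /* Does facet f contain the (unordered) edge a--b? */
    static int has_edge (const Facet *f, int a, int b)
    {
        int i;
        for (i = 0; i < 3; ++i) {
            int p = f->v[i], q = f->v[(i + 1) % 3];
            if ((p == a && q == b) || (p == b && q == a))
                return 1;
        }
        return 0;
    }

    void compute_neighbors (Facet *f, size_t n)
    {
        size_t i, j;
        int e;
        for (i = 0; i < n; ++i)
            for (e = 0; e < 3; ++e) {
                int a = f[i].v[e], b = f[i].v[(e + 1) % 3];
                f[i].neighbor[e] = -1;      /* -1 == boundary edge */
                for (j = 0; j < n; ++j)     /* O(N^2): first hack */
                    if (j != i && has_edge(&f[j], a, b)) {
                        f[i].neighbor[e] = (int) j;
                        break;
                    }
            }
    }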
- Attach a label string to a boundary (eg central sulcus).
Trivial lisp exercise: Prompt for the label,
click on the brain for the location, use the
insert-hershey-string prim to insert the
text in a thing of linesegments (see above).
You'll want to offset the text some direction,
likely, and no doubt want a line segment leading
from the actual indicated point on the surface
up to the hershey text in space. For a spiffier
effect, experiment with putting the text in
a thing of triangles or rectangles. You'll
need to play with the parameters and fonts a
bit to look good, and you'll be amazed at the
number of polygons it takes. You'll probably
decide to stick with line-segment based text
for practical purposes.
- For a closed region, attach a label to the region interior (eg superior
temporal gyrus).
Should be about the same as above.
- For a given 3-D point on the surface determine the closed surface region, if
any, of which it is a member.
If you're using the line segment representation of
surface regions, find the closest point on a
boundary, and assign the point to the surface
region enclosed by that boundary. This isn't
totally foolproof, but I bet it will work just
fine for what we want. To find the closest point,
brute-force computation should be quite
sufficient; we won't have so many contour-line
points that searching them all will take a
perceptible amount of time: Just look at how the
existing brain-mapper logic
(xmri/lsp/xmri.lsp:xmri-map-move-site-popup-camera-downclick-fn,
check out use of :dist2 array) determines which
site on the brain is being dragged, and clone the
logic. For this, you'll want to attach to each
vertex in the boundary thing of lines relation
an integer value indicating which area it belongs
to: This is easy to do while you're creating
the relation. Keep the label text in a separate
thing-of-lines in this case.
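The closest-boundary-point search is the same
squared-distance scan the :dist2 logic uses; a C
sketch, with BoundaryPt standing in for the
boundary vertices plus their region tags:

    #include <float.h>
    #include <stddef.h>

    typedef struct { double x, y, z; int region; } BoundaryPt;

    /* Region tag of the boundary point nearest (x,y,z), or -1 if
     * there are no boundary points at all. */
    int region_of (double x, double y, double z,
                   const BoundaryPt *pts, size_t n)
    {
        size_t i;
        int    best = -1;
        double best_d2 = DBL_MAX;
        for (i = 0; i < n; ++i) {
            double dx = pts[i].x - x;
            double dy = pts[i].y - y;
            double dz = pts[i].z - z;
            double d2 = dx*dx + dy*dy + dz*dz;  /* no sqrt needed */
            if (d2 < best_d2) { best_d2 = d2; best = pts[i].region; }
        }
        return best;
    }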
If you're using colors on the surface to represent
regions, this is even easier: When you get back
the surface point picked, you can ask for the
polygon picked on, look up a vertex, and see
what color it is. If your colors are not
easy to interpret, you may want to add a
vertex-label relation to the point relation,
and set it at the same time you set the color
values. You might make the vertex-label values
small integers indexing a small vector of strings,
so you can give the name of the region when it
is clicked on. Easy as pie :)
- Save surface lines, regions and labels as an extension to the map file.
Trivial -- just use the native file i/o support
to read/write the relevant graphic relations.
Kevin does this in xscanner, so you can lift
the detailwork from his code.
- Display coronal and sagittal MR slice wrt model.
Trivial -- peek at the "Display: 3-D Surfaces"
support in the brain mapper, and rig it to display
those at the same time, in the same window, as the
brain reconstruction proper. Displaying them in
conjunction is just a matter of putting two things
in the same thinglist: Again, see where :things is
set on poly-camera. One also has to get the
coordinate systems matched: The logic which draws
the crosshairs on the MRI subwindows in the brain
mapper
(xmri/lsp/xmri.lsp:xmri-map-move-site-popup-camera-upclick-fn)
demonstrates conversion from brain to voxel coordinates.
Kevin's xscanner application also has displays
which do almost exactly this.
- Display Hohne-like view of 3-D MR volume, with a
quarter cut away, optionally show the model in the
cut-away quarter.
Showing sliced and diced MR volumes is a pretty
trivial extension of the above: Instead of
inserting a single plane, one is inserting a
number of planes forming the outside of a
box or more complex polyhedron. Computers
being good at repeating a trick once they've
learned it, this is just a matter of setting
up an insert-textured-rectangle function
encapsulating the MRI-texture extraction
logic, then calling it once for each rectangular
face on the desired solid shape.
Slicing the model itself is more work, but
discussed above.
- Clean up all skandha code, document in info all the C functions and as
many Lisp functions as you can. Any other documentation helpful.
-
Heh. All the C-coded lisp primitives should be
decently described in the info docs at this
point; For internal C functions, the comments
in the code provide often-decent documentation.
The lisp-in-lisp code is more lightly commented,
but not unworkably so.
- Give suggestions for someone of Scott's ability to port skandha to another
architecture, say Intel running Linux.
-
The basic C code is by and large very portable.
The xlisp core started out on CP/M; The
overwhelming mass of Skandha4 internals is
self-contained. The link/library stuff in
src/slisp/systems is of course system-dependent.
By design, and I think in practice, the main
problems will be porting over the graphics
driver code under src/slisp/xgplotlib/c/dist/*.
This stuff is a mess, in large part due to
the legal restrictions on the gplotlib code.
The only stuff that really matters in the
entire above subtree is
src/slisp/xgplotlib/c/dist/gplotlib/gtplot.h
src/slisp/xgplotlib/c/dist/term/iris4d.trm
src/slisp/xgplotlib/c/dist/term/iris4d.tri
src/slisp/xgplotlib/c/dist/term/iris4d.seg
where the latter three are my Skandha4
driver for GL, and the former defines
the datastructures passed to the driver
by the various calls.
I would strongly recommend that the first
stage of doing a port be moving the
above four files into the
src/slisp/xgplotlib/c/
directory,
then nuking the entire
src/slisp/xgplotlib/c/dist/
directory,
and rehacking until Skandha4 passes selftest
again.
After that, I would clone the GL driver and
rehack it to be an OpenGL driver: SGI has
a GL -> OpenGL portability guide that should
ease this.
This should almost all be a straightforward matter
of renaming function calls in minor ways to their
OpenGL equivalents. The major exception, I
believe, will be the lack of mouse input support
in OpenGL: OpenGL leaves all mouse I/O issues
to the host window system.
This means that you'll need to explicitly link
Skandha4 against the X Windows libraries, and then
rewrite the driver to use X calls for mouse input
rather than the old GL calls. This may go quite
easily or prove a bit of a pain, depending what
unexpected and inconvenient differences between
the GL and X I/O models pop up -- in particular,
how one interleaves/intermixes keyboard and mouse
I/O. I'd not expect major problems here, however,
just annoyances.
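For flavor, a bare-bones Xlib loop of the sort the
rewritten driver would need in place of the old GL
qread()-style input calls (compile with -lX11;
window setup pared to the minimum):

    #include <X11/Xlib.h>
    #include <stdio.h>

    int main (void)
    {
        Display *dpy = XOpenDisplay(NULL);
        Window   win;
        XEvent   ev;

        if (dpy == NULL) return 1;
        win = XCreateSimpleWindow(dpy, DefaultRootWindow(dpy),
                                  0, 0, 400, 400, 0, 0, 0);
        /* Ask X for exactly the events the driver cares about. */
        XSelectInput(dpy, win, ButtonPressMask | ButtonReleaseMask |
                               PointerMotionMask | KeyPressMask);
        XMapWindow(dpy, win);

        for (;;) {
            XNextEvent(dpy, &ev);  /* blocks; mixes mouse and keys */
            switch (ev.type) {
            case ButtonPress:
                printf("down button %u at (%d,%d)\n",
                       ev.xbutton.button, ev.xbutton.x, ev.xbutton.y);
                break;
            case ButtonRelease:
                printf("up   button %u at (%d,%d)\n",
                       ev.xbutton.button, ev.xbutton.x, ev.xbutton.y);
                break;
            case MotionNotify:
                printf("drag at (%d,%d)\n", ev.xmotion.x, ev.xmotion.y);
                break;
            }
        }
    }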
It would be good to set up Imake as part of the
Skandha4 build process when switching to
explicitly linking to X -- this will greatly
ease later ports to other X-based systems
such as Linux, if done.
After the above is working, I'd try linking
against the Mesa implementation of OpenGL
instead of the native SGI graphics libraries:
If one got this passing selftest ok, one should
have a really unix-portable Skandha4 capable
of running on just about any unix system, in
particular including Linux.
If one then wanted to run on Windows NT, one
should just write a new driver. Since Microsoft
is officially supporting OpenGL on Windows, the
initial port could consist mostly of rewriting
the mouse and keyboard I/O yet again, to use
the Windows model instead of the X model. If
this is done by the person who did the port
from GL to X I/O, it should be very familiar
work that will go quite quickly.
If the above port works and gets used heavily, you
may then wind up wanting to port from OpenGL under
Windows to Microsoft's Direct3D interface.
Direct3D is officially a
less-ambitious interface than OpenGL, but it looks
like it is becoming the interface of choice for PC
3D graphics acceleration hardware, and I wouldn't
be surprised if it is also free vs Microsoft
charging a fee for OpenGL, or some such: In any
event, Direct3D is likely to wind up more common,
faster and cheaper than OpenGL on the Windows
platform, I think, and to do all we really need in
Skandha4 for most purposes.
That should be mostly it for porting Skandha4, in
a strict sense.
Other useful general maintenance stuff on
Skandha4 would include switching to proper ANSI C
prototype declarations everywhere (instead of the
current K&R C ones) and upgrading to a more recent
version of the xlisp interpreter.
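For reference, the flavor of the K&R-to-ANSI
conversion, using a made-up two-argument function:

    /* Old-style K&R definition, as in the current sources: the
     * parameter types sit between the header and the body, and
     * callers get no argument checking. */
    double scale_knr (value, factor)
        double value;
        double factor;
    {
        return value * factor;
    }

    /* The ANSI equivalent: a prototype the compiler can check. */
    double scale_ansi (double value, double factor)
    {
        return value * factor;
    }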
If the xlisp upgrade is attempted, be wary of
changes in the garbage collector, which may break
Skandha4 in all sorts of hard-to-track-down ways.
In particular, I believe recent versions of xlisp
offer a garbage collector which shuffles objects
to compact ram. If the old non-shuffling garbage
collector is still available as a compile option
(it was, last I checked), one should probably
select it instead, at least as a first cut, and
quite likely permanently.
For general portability between unix systems, an
autoconf script which figures out what local
libraries are available &tc and updates the
Makefiles appropriately would be a welcome
alternative to the current "systems" file
approach. The GNU 'autoconf' package can be
used to generate a bourne shell script which will
do this; or, if one prefers, something similar can
probably be written more clearly and easily in
Perl, at a slight loss in portability (since
Bourne shell is on every Unix system, but Perl is
not yet quite so universal).
- Suggest best configuration for an Intel-Linux box that can run Web server
applications - eg a skandha server, a repository
manager server, and a knowledge server - all
with a Lisp front end. Or suggest a better front
end.
-
Bill is actually at least as knowledgeable about
hardware configuration issues, of course. I use
Debian Linux because it is a completely free
effort, but the commercial Red Hat Linux
distribution is winning over even dedicated
freeware aficionados: I'd probably go with it for
the lab. Other than that, any 486 or better PC
box is probably capable of doing a workable job;
it's just a matter of paying for as much
performance as one is comfortable with. The knee
of the curve these days is probably something like
a 155MHz Pentium-class chip, 128Meg of ram, a
10/100Mbps 3com ethernet card, and 1-2G of SCSI
disk.
The slowest SCSI disk is twice as fast as the
fastest EIDE drive, so at least one SCSI drive for
the swapfile and files being actively served to
the net would be good; Lesser-used stuff and
backups can go onto EIDE drives if preferred. If
the fileset being served is relatively small
(100Meg, say), an alternative may be just buying
enough ram to hold it -- Linux is very good about
using all free ram as file cache. In this case,
just using an EIDE drive and no SCSI might be
quite sensible.
If the Linux box is going to run Skandha4 graphics
interactively, some hardware acceleration would
be sensible, of course: The offerings change on a
month-to-month basis right now, however.
Some other Skandha4 projects which might make sense:
- Import 16-bit AVS field files.
-
This would be particularly useful for importing
Bharath's masks or pre-masked data as a voxel-set.
The code for this is about half-written: Look at
src/slisp/xvol/c/xvol.c:xvolh2_Read_AVS_Field_From_File_Fn,
in particular the SHORT_FIELD case. This entire
function started out as a quick hack for a
specific project, and is being gradually
generalized over time to handle a wider
variety of AVS fields: Lots more work
could be done on this front.
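For orientation, the overall shape of the
SHORT_FIELD path: an AVS field file is an ASCII
header terminated by two form-feed characters,
followed by raw binary data. A much-simplified C
sketch (byte order and most error handling elided;
nvoxels would really come from parsing the dim
lines in the header):

    #include <stdio.h>
    #include <stdlib.h>

    short *read_avs_short_field (const char *path, long nvoxels)
    {
        FILE  *fp = fopen(path, "rb");
        short *data;
        int    c, prev = 0;

        if (fp == NULL) return NULL;
        /* Skip the ASCII header: it ends with two ^L bytes. */
        while ((c = getc(fp)) != EOF) {
            if (c == '\f' && prev == '\f')
                break;
            prev = c;
        }
        data = malloc((size_t) nvoxels * sizeof *data);
        if (data != NULL &&
            fread(data, sizeof *data, (size_t) nvoxels, fp)
                != (size_t) nvoxels) {
            free(data);      /* short read: treat as failure */
            data = NULL;
        }
        fclose(fp);
        return data;
    }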
- Finish support for deducing optimal camera angle in brain mapper
(I'm still hoping to finish this before I leave!)
-
This has been taken to the point of being able
to compute the matrix; the problem is just
plugging it into GL correctly. The fun comes
from the fact that GL uses separate viewing
and perspective transforms, and maybe does
some additional independent scaling or translation
after them, and the computed matrix needs to be
correctly dovetailed into all this.
See src/slisp/xmri/lsp/xmri.lsp:xmri-eye-solve,
which currently computes and prints out the
optimal mapping matrix, plus the viewing and
perspective matrices actually used to produce
the display. The
(send transform :setf newval row col)
call can be used to stuff any desired values
into the viewing and perspective matrices; it
is just a matter of deciding what values to
use. The perspective transform doesn't
actually have to use perspective, of course,
and almost certainly should not in this case,
since the photo is effectively from optical
infinity and attempting to deduce and match
the small degree of perspective present in
the photo is very likely to do more harm
than good.
I believe the SGI hardware always clips to a (-1,1)^3
cube, after the
viewing transform and before the perspective
transform, so I think the basic task is to
transform the viewing frustum into this
cube in the viewing transform, then transform
this cube to screen coordinates in the
perspective transform, possibly ignoring
the window position and/or camera position
within the window, which may be independently
added in later by separate hardware.
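In the no-perspective case argued for above, the
viewing-side matrix is just the standard
orthographic map of a box [l,r]x[b,t]x[n,f] onto
the (-1,1)^3 clipping cube; a C sketch,
column-major as GL expects. The computed mapping
from xmri-eye-solve would then be composed with
something of this shape:

    /* Standard orthographic projection matrix. */
    void ortho_matrix (double m[16],
                       double l, double r, double b, double t,
                       double n, double f)
    {
        int i;
        for (i = 0; i < 16; ++i) m[i] = 0.0;
        m[0]  =  2.0 / (r - l);       /* scale x into (-1,1) */
        m[5]  =  2.0 / (t - b);       /* scale y into (-1,1) */
        m[10] = -2.0 / (f - n);       /* scale z into (-1,1) */
        m[12] = -(r + l) / (r - l);   /* center to the origin */
        m[13] = -(t + b) / (t - b);
        m[14] = -(f + n) / (f - n);
        m[15] =  1.0;
    }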
- Replicate some of the remaining missing
Skandha3 functionality.
-
Importing the Skandha3 surface-tiling code would
be nice, but will probably never be a priority
with Jim. But better support for re/computing
surface normals once a surface has been
constructed by some means might be relatively
easy and useful.
Last modified: Wed Jan 15 17:33:39 PST 2003 by Kevin Hinshaw (khinshaw@u.washington.edu)