This is a brief summary of some of the work carried out by former students and me at the IP Lab. Where appropriate, contact links are provided for these former students.

Among the main developments were the multiresolution
stochastic models developed with Simon Clippingdale
and published in the
1989 *IEEE ICASSP Conference Proceedings*. An example of Simon's thesis
work is shown in the next image: the restoration of a noisy image
by an algorithm based on a simple multiresolution model. In this example, the
original image is corrupted by additive white Gaussian noise at a
signal-to-noise ratio of 10 dB. The algorithm,
which is edge-adaptive and computationally equivalent to a 3x3 filter,
does a pretty good job of removing the noise without blurring the edges.
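The flavour of such an edge-adaptive 3x3 smoother can be sketched as follows. This is a minimal illustration, not Simon's actual algorithm: it weights each neighbour by how similar it is to the centre pixel, so averaging is strong in flat regions and suppressed across edges (numpy is assumed; the weighting function and `sigma` parameter are choices made here for illustration).

```python
import numpy as np

def edge_adaptive_3x3(img, sigma=20.0):
    """Denoise with a 3x3 neighbourhood whose weights shrink across edges.

    A minimal sketch of an edge-adaptive smoother: each neighbour is
    weighted by exp(-(difference/sigma)^2), so smoothing is strong in
    flat regions and weak across intensity edges.
    """
    img = img.astype(float)
    pad = np.pad(img, 1, mode='edge')
    H, W = img.shape
    acc = np.zeros_like(img)    # weighted sum of neighbours
    wsum = np.zeros_like(img)   # sum of weights, for normalisation
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            nb = pad[1 + dy:1 + dy + H, 1 + dx:1 + dx + W]
            w = np.exp(-((nb - img) / sigma) ** 2)
            acc += w * nb
            wsum += w
    return acc / wsum
```

On a step edge corrupted by noise, this reduces the mean squared error in the flat regions while leaving the step largely intact, since neighbours across the edge receive near-zero weight.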

A more general analysis tool is the Multiresolution Fourier Transform,
which Andrew Calway worked on for his PhD. The original idea behind the
multiresolution approach is that images in particular are better modelled
as evolving from coarse to fine scale, so that the innovations represent
details added at each resolution, than by the conventional top-left
to bottom-right Markov Random Field models, or even the more recent noncausal
MRF models. This is obviously related to wavelet representations.
Our more recent work emphasises a different point of view, however. This
is that *any* signal model is only likely to be valid locally over
some domain of limited but varying extent. To get a compact representation
of data in terms of such models, we apply them in a multiresolution control
structure, in which any region for which the model turns out to be inadequate
is split into a number of *child* regions, which are again tested
for the applicability of the model. The process terminates either when
the model fits or when the data volume remaining is insufficient to test
the model.
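The control structure just described can be sketched as a recursion. This is a hypothetical illustration, assuming numpy and the simplest possible local model (constant intensity, tested by residual variance against a threshold `tol`); the real models used in the lab's work are of course richer.

```python
import numpy as np

def quadtree_fit(block, y=0, x=0, tol=25.0, min_size=2, regions=None):
    """Recursively test a simple model (here: constant intensity) on a block.

    If the model is inadequate (residual variance > tol) and enough data
    remains, split into four child regions and test each again; terminate
    when the model fits or the block is too small to test it.
    """
    if regions is None:
        regions = []
    h, w = block.shape
    if block.var() <= tol or h <= min_size or w <= min_size:
        regions.append((y, x, h, w, float(block.mean())))
        return regions
    h2, w2 = h // 2, w // 2
    for dy, dx in ((0, 0), (0, w2), (h2, 0), (h2, w2)):
        quadtree_fit(block[dy:dy + h2, dx:dx + w2],
                     y + dy, x + dx, tol, min_size, regions)
    return regions
```

For an image that is constant on each quadrant, the root test fails and each of the four children then passes, giving exactly four leaf regions.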
An example of the use of these techniques is shown in this image, which
shows the extraction of line and edge features from a well known image,
using quadtree hypothesis testing. The computation was done using a tool
called the Multiresolution Fourier Transform, which was described in the
1992 *IEEE Transactions on Information Theory* paper by Andrew Calway,
Edward Pearson and myself. The problem of identifying the edges was formulated
as a composite hypothesis test, based on *Maximum Likelihood* estimates
of the position and orientation of each edge feature within the nominal
quadtree block. The model is unusual not only because it is applied in a
multiresolution framework, but because it is a Fourier domain model of
edges, in fact an autoregressive model in the direction
normal to the edge - a case of edge detection without an edge detector.
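The Fourier-domain view of an edge can be illustrated simply. The sketch below is not the MFT/ML estimator of the paper: it just uses the fact that a step edge concentrates spectral energy along the frequency direction normal to the edge, so the principal axis of the power spectrum recovers that normal (numpy assumed; square blocks assumed).

```python
import numpy as np

def edge_orientation(block):
    """Estimate the edge-normal direction from the Fourier magnitude.

    A step edge concentrates |F|^2 along the frequency line normal to the
    edge; the dominant eigenvector of the spectral second-moment matrix
    gives that direction, with no spatial edge detector involved.
    """
    n = block.shape[0]                      # assume a square block
    F = np.fft.fftshift(np.fft.fft2(block - block.mean()))
    P = np.abs(F) ** 2
    fy, fx = np.meshgrid(np.arange(n) - n // 2,
                         np.arange(n) - n // 2, indexing='ij')
    # second-moment (scatter) matrix of the spectral energy
    m = np.array([[np.sum(P * fx * fx), np.sum(P * fx * fy)],
                  [np.sum(P * fx * fy), np.sum(P * fy * fy)]])
    vals, vecs = np.linalg.eigh(m)
    normal = vecs[:, np.argmax(vals)]       # dominant frequency direction
    return np.degrees(np.arctan2(normal[1], normal[0])) % 180.0
```

For a vertical step edge, the energy lies along the horizontal frequency axis and the returned angle is close to 0 degrees.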

A second example
of multiresolution modelling is shown in the next figure, which was produced
by Chang-Tsun Li in his PhD work
on texture segmentation. Here, a multiresolution
hidden Markov model is used to define a labelling process, whose state
at each scale is estimated from measurements using the appropriate scale
of MFT. The multiresolution approach greatly speeds up computation in two
ways: it allows the segmentation at coarse scales to constrain those at
higher resolutions and it provides a good initial condition for the simulated
annealing which is used to arrive at a *Maximum a Posteriori*
labelling of the image. This particular algorithm has a number of novel
features: it uses a multiresolution MRF, it combines region and boundary
processes and it uses features based on a *deformable* model of texture,
which was part of Tao-I Hsu's PhD work here and was published
in the *IEEE Transactions on Image Processing* in October 1998.
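The coarse-to-fine speed-up can be made concrete with a small sketch. This is a hypothetical illustration, not Chang-Tsun's algorithm: coarse-scale labels are replicated to the next resolution and relaxed with a few deterministic (ICM-style) local updates rather than full simulated annealing, which is exactly the "good initial condition" point made above (numpy assumed; the data and smoothness terms are choices made here).

```python
import numpy as np

def refine_labels(coarse_labels, features, means, n_iter=2):
    """Coarse-to-fine initialisation for a labelling process (sketch).

    Labels estimated at a coarse scale are upsampled to the next
    resolution, then relaxed by local updates combining a data term
    (distance to each class mean) and a smoothness term (agreement
    with the 4-neighbour labels).
    """
    labels = np.kron(coarse_labels, np.ones((2, 2), dtype=int))  # upsample 2x
    H, W = labels.shape
    for _ in range(n_iter):
        for y in range(H):
            for x in range(W):
                # data term: squared distance of the feature to each mean
                cost = (features[y, x] - means) ** 2
                # smoothness term: penalise disagreement with neighbours
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < H and 0 <= nx < W:
                        cost += 0.5 * (np.arange(len(means)) != labels[ny, nx])
                labels[y, x] = int(np.argmin(cost))
    return labels
```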

Another recent piece of work uses 3-D wavelet transforms to compress a video sequence. This work forms part of Ian Levy's PhD thesis. The coder uses a vector quantizer operating in the 3-D wavelet domain, on 4x4x4 blocks of pixels. In effect, each codebook vector represents a spatial pattern moving at a velocity defined by the ratio between the temporal and spatial wavelengths of the wavelet. By using the same codebook for different combinations of orientation and scale in time and space, we can accommodate a range of velocities with minimal computation. The example shown here is a short sequence of 256x256 pixel images, coded at a bit rate of 0.031 bpp (a compression ratio of more than 200:1). The SNR is approximately 36 dB for this sequence.
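As a rough illustration of the transform side (not the thesis coder itself), a one-level separable 3-D Haar transform, taken here as the simplest wavelet, shows how a 4x4x4 video block splits into spatio-temporal subbands on which a vector quantizer could then operate. The bit-rate arithmetic is also easy to check: a K-word codebook spends log2(K) bits on each 64-pixel block, i.e. log2(K)/64 bpp before any further coding of the indices.

```python
import numpy as np

def haar_1d(a, axis):
    """One level of the orthonormal Haar transform along one axis."""
    a = np.moveaxis(a, axis, 0)
    lo = (a[0::2] + a[1::2]) / np.sqrt(2.0)   # lowpass (averages)
    hi = (a[0::2] - a[1::2]) / np.sqrt(2.0)   # highpass (differences)
    return np.moveaxis(np.concatenate([lo, hi]), 0, axis)

def haar_3d(block):
    """Separable one-level 3-D Haar transform of a (t, y, x) video block."""
    for axis in range(3):
        block = haar_1d(block, axis)
    return block
```

A static (constant) block puts all its energy into the pure-lowpass corner; a pattern that moves between frames spreads energy into the temporal-highpass subbands, which is what lets codebook vectors in this domain stand for patterns moving at particular velocities.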

Just to show we do other stuff here, the next image is a type of Kohonen
neural network, which evolves under the impetus of randomly chosen
translations, dilations and rotations acting on a random array of Gaussian
functions. The circles have a radius showing 1 standard deviation for the
(circular) Gaussian functions. The idea behind this work is to show that
visual motion can produce a structure much like that found in the retina of
animals, with small receptive fields and high density in the centre, or *fovea*,
with larger fields and lower density in the periphery. In this way, it connects to much of the other work in emphasising
the importance of symmetry and statistics in image analysis and vision. Simon
Clippingdale and I worked on this project, the results of which
were reported in our paper in *Neural Networks* in 1996.
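The basic Kohonen update underlying such networks is simple to state. The sketch below is the standard self-organising-map step, not the retinal simulation itself: the unit nearest the input wins, and it and its array neighbours are pulled towards the input (numpy assumed; learning rate and neighbourhood radius are illustrative choices).

```python
import numpy as np

def kohonen_step(centres, x, lr=0.2, radius=1):
    """One Kohonen (SOM) update on a 1-D array of unit centres.

    The winning unit is the centre nearest the input x; it and its
    neighbours within `radius` array positions move a fraction `lr`
    of the way towards x.
    """
    win = int(np.argmin(np.linalg.norm(centres - x, axis=1)))
    for i in range(max(0, win - radius), min(len(centres), win + radius + 1)):
        centres[i] += lr * (x - centres[i])
    return win
```

Repeated over many inputs drawn from some stimulus distribution, updates like this concentrate units where inputs are dense, which is the mechanism behind the fovea-like density gradient described above.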

Some new work, which we call MGMM, takes
the fusion of statistics and symmetry ideas one step further, as a general
tool for image analysis and computer vision. It is an approach to statistical
approximation which combines a multiresolution approach with Gaussian mixture
modelling - hence MGMM. It can be applied to a wide variety of problems. The
example shows the segmentation of the Lena image using MGMM. This gives a
*tree* description, in which leaf nodes represent single components
in the mixture model. The animation takes you down the tree, from its root to
the 8 leaves; in the background is the least squares reconstruction corresponding
to the given level in the MGMM tree.
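The tree-of-Gaussians idea can be sketched with a toy recursion. This is a hypothetical illustration of the MGMM structure, not the published algorithm: each node records the mean and covariance of its data, and splits its points by the sign of their projection onto the principal axis, producing two child components per node (numpy assumed; the split rule is a stand-in for proper mixture estimation).

```python
import numpy as np

def mgmm_tree(data, depth=3):
    """Build a binary tree of Gaussian components by recursive splitting.

    Each node holds the mean and covariance of its data; it splits its
    points by the sign of their projection onto the dominant eigenvector
    of the covariance, and leaf nodes represent single components.
    """
    mean = data.mean(axis=0)
    node = {'mean': mean, 'cov': np.cov(data.T), 'children': []}
    if depth > 0 and len(data) > 2:
        centred = data - mean
        _, vecs = np.linalg.eigh(np.cov(data.T))   # eigenvalues ascending
        side = centred @ vecs[:, -1] >= 0          # split on principal axis
        if side.any() and (~side).any():
            node['children'] = [mgmm_tree(data[side], depth - 1),
                                mgmm_tree(data[~side], depth - 1)]
    return node
```

A least-squares reconstruction at any level, as in the animation, would simply replace each point by the mean of the component it belongs to at that level of the tree.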

Another project, in the new field of proteomics, is a further application of motion estimation techniques: the geometry correction of images of electrophoretic gels, which are widely used in the analysis of proteins. Each spot you see on these images represents an individual protein, which has migrated under electric fields to a position roughly corresponding to its isoelectric point (pI) and molecular weight. Unfortunately, the preparation of these gels is not consistent enough for pixels to be translated directly into these meaningful co-ordinates without some geometry correction. The algorithm uses multiresolution correlation to estimate the geometric distortion between two images. The image sequence shows four such gel images all warped into the same geometry; without such correction, comparison of the protein abundances between images - the main goal of this analysis - is practically impossible. This work is a collaboration with Oxford GlycoSciences.
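The correlation step can be sketched at a single scale. This is not the full multiresolution algorithm: it estimates one global integer translation by phase correlation via the FFT, whereas the real scheme would apply such estimates block-wise and refine them from coarse to fine scales to capture a spatially varying distortion (numpy assumed).

```python
import numpy as np

def correlation_shift(ref, img):
    """Estimate the integer translation between two images (sketch).

    Phase correlation: the normalised cross-power spectrum of the two
    images has a sharp inverse-FFT peak at the relative displacement.
    Returns (dy, dx) such that rolling img by (dy, dx) aligns it with ref.
    """
    F = np.fft.fft2(ref) * np.conj(np.fft.fft2(img))
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-9)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # unwrap peak coordinates into signed shifts
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))
```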

If you would like to know more about these ideas, you can try
our technical reports on the Department's Research Reports
Page or Email Roland.Wilson@dcs.warwick.ac.uk.

Unless otherwise stated, all title and copyright in and to any material on this website or any copies of the foregoing are owned by Roland Wilson. Copyright © 2000, Roland Wilson. All rights reserved. Any unauthorised copying or redistribution in any media is strictly prohibited.