Reprocessing of Cassini camera #2 images
The Cassini space probe is currently orbiting the planet Saturn. It uses a high-quality telescope and imaging system to take photographs of Saturn and its moons. Those pictures are of outstanding scientific value and graphical quality. Another system on Cassini, named camera #2, takes far less clean pictures. One problem with those pictures is that they are smeared with horizontal lines of noise. For example, this is picture 233_251_1.jpg:
I wrote a command-line program to reprocess the images and remove most of the horizontal noise lines. Like this:
The software is called "Unundul". Its source code is available at this address:
http://www.ericbrasseur.org/unundul1.01.tar.gz
It is written in C++ and released under the GNU General Public License.
Picture-reprocessing software can run into many problems. The worst ones, when it comes to scientific data, are that the software can either invent things that don't exist or remove valuable data. I remember once seeing a picture of a star field taken by the Hubble Space Telescope. Some stars were little blue discs. I really thought Hubble had captured the surface of those stars and that I was looking at a close-up of a star. Actually the discs were just inventions made by an image-scaling program. The real size of the stars was a minute fraction of the size of a single pixel of the image. In order to cope with this problem, the unundul command produces a second image, which is the subtraction of the reprocessed image from the original image. That check image proves nothing, yet it gives a hint that little or no information was lost. Should something important have been removed from the image, it will show up in the check image. The little dots you can see are due to the fact that those pixels in the reprocessed image had brightnesses above the maximum of 255. Hence they were clipped to 255. If the original image is compared with the reprocessed one before the brightness clipping, no dots appear. I could not decide which procedure was the more honest one, so I chose the one that yields the least pleasing result:
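For illustration, here is a minimal C++ sketch of how such a check image can be computed. The 8-bit grayscale buffers and the function name are my assumptions, not the actual Unundul code:

    #include <cstddef>
    #include <cstdint>
    #include <cstdlib>
    #include <vector>

    // Minimal sketch: the check image is the absolute difference between
    // the original and the reprocessed image, pixel by pixel.
    std::vector<std::uint8_t> check_image(const std::vector<std::uint8_t>& original,
                                          const std::vector<std::uint8_t>& reprocessed)
    {
        std::vector<std::uint8_t> check(original.size());
        for (std::size_t i = 0; i < original.size(); ++i) {
            int d = std::abs(int(original[i]) - int(reprocessed[i]));
            check[i] = std::uint8_t(d);  // 0 wherever nothing was changed
        }
        return check;
    }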
How does unundul operate?
The brightness of each image pixel is expressed as a number between
0
and 255. A brightness of 0 means the pixel is pitch dark. A
brightness
of 255 means the pixel is bright white. 128 means mid-gray.
Basically, the software compares a given picture line to its neighbor lines above and below. It makes the line lighter or darker so that it matches the brightness of the neighbor lines.
The first problem was to determine how exactly a line should be made brighter. These are the basic possibilities:
- The brightness of each line pixel is multiplied by a factor. Take this short line of six pixels: 100 102 98 45 99 101. Making these pixels 5% brighter means multiplying their brightness by 1.05. This yields 105 107 103 47 104 106.
- A constant is added to each pixel's luminosity. Adding 5 to the luminosity of the same pixels yields 105 107 103 50 104 106.
The answer is that a constant has to be added.
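A tiny C++ program reproducing both adjustments on the example line:

    #include <cstdio>

    int main()
    {
        const int line[6] = {100, 102, 98, 45, 99, 101};

        for (int p : line)                        // 5% brighter: multiply by 1.05
            std::printf("%d ", int(p * 1.05 + 0.5));
        std::printf("\n");                        // prints: 105 107 103 47 104 106

        for (int p : line)                        // add a constant of 5
            std::printf("%d ", p + 5);
        std::printf("\n");                        // prints: 105 107 103 50 104 106
    }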
The second problem is how two lines should be compared. The real question is how exactly the constant must be computed. Suppose these two lines:
110 110 110 255 110 110
100 100 100 100 100 100
For us humans it is obvious that the constant to be added to the second line of pixels is 10. The pixel with a brightness of 255 in the first line must be an error or an object, and we instinctively neglect it. How can we make computer software behave in the same intelligent way? Actually, no intelligence should be involved in such software. Intelligence means possible errors. Still, we have to find a mechanism.
A basic approach is to compute the average brightness of each line. The average of the first line is 134. The average of the second line is 100. This says a constant of 134 - 100 = 34 has to be added to each pixel of the second line, in order to make that second line as bright as the first line. Obviously this is bad: the single 255 pixel drags the constant far away from the correct value of 10.
The mechanism I finally found involves computing the sum of the absolute values of the differences between the pixels. For example, the difference between the first pixel of the first line and the first pixel of the second line is 10. This yields an absolute value of 10. For the fourth pixels this yields 155. For all six pixel pairs this yields 10 + 10 + 10 + 155 + 10 + 10 = 205. Now, this calculation has to be carried out for every possible value of the constant. For example, if we add a constant of 9 to the second line, the computation of the absolute values yields 1 + 1 + 1 + 146 + 1 + 1 = 151. If we add 10, this yields 0 + 0 + 0 + 145 + 0 + 0 = 145. If we add 11, this yields 1 + 1 + 1 + 144 + 1 + 1 = 149. The smallest value we got is 145, for a constant of 10. Hence 10 is the correct answer. The principle of this mechanism is to favor the pixels that can be made close to each other and to neglect the pixels with very different values.
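Here is a minimal C++ sketch of that mechanism; the names and the brute-force search range are illustrative, not the actual Unundul source:

    #include <cmath>
    #include <cstddef>
    #include <vector>

    // For each candidate constant, add it to the lower line and sum the
    // absolute pixel differences; keep the constant with the smallest sum.
    double best_constant(const std::vector<double>& upper,
                         const std::vector<double>& lower)
    {
        double best_c = 0.0;
        double best_sum = 1e300;
        // Integer steps over a generous range, for simplicity.
        for (double c = -255.0; c <= 255.0; c += 1.0) {
            double sum = 0.0;
            for (std::size_t i = 0; i < upper.size(); ++i)
                sum += std::fabs(upper[i] - (lower[i] + c));
            if (sum < best_sum) { best_sum = sum; best_c = c; }
        }
        return best_c;
    }

Called on the two example lines above, it returns 10, with a smallest sum of 145.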
In fact the software does not try out every possible constant. It
uses
an approximation algorithm to converge quickly towards the right
constant.
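The article does not say which algorithm that is. As a side note, the sum of absolute differences is minimized exactly by a median of the per-pixel differences, so an exact shortcut (not necessarily what Unundul does) could look like this:

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    // Alternative exact method: the constant minimizing the sum of absolute
    // differences is a median of the differences upper[i] - lower[i].
    double median_constant(const std::vector<double>& upper,
                           const std::vector<double>& lower)
    {
        std::vector<double> diff(upper.size());
        for (std::size_t i = 0; i < upper.size(); ++i)
            diff[i] = upper[i] - lower[i];
        std::nth_element(diff.begin(), diff.begin() + diff.size() / 2, diff.end());
        return diff[diff.size() / 2];
    }

On the example lines the differences are 10 10 10 155 10 10, whose median is 10, the same answer as the brute-force search.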
Also, the software does not use the brightness of each pixel directly. Indeed, the pixels are too noisy; comparing them one by one does not yield the best result. Instead, for each pixel the software uses the average of the pixel and its two left and two right neighbors. Don't ask me why two neighbors left and right; I just tested different values between 16 and 0 and found that 2 yields the smoothest result.
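A minimal sketch of that smoothing; how the window is handled at the borders (clamped here) is my assumption:

    #include <cstddef>
    #include <vector>

    // Smooth a line for comparison purposes: each pixel is replaced by the
    // average of itself and its two left and two right neighbors.
    std::vector<double> smooth_line(const std::vector<double>& line)
    {
        const int radius = 2;            // two neighbors on each side
        const int n = int(line.size());
        std::vector<double> out(line.size());
        for (int i = 0; i < n; ++i) {
            double sum = 0.0;
            int count = 0;
            for (int j = i - radius; j <= i + radius; ++j)
                if (j >= 0 && j < n) { sum += line[j]; ++count; }
            out[i] = sum / count;        // average over the clamped window
        }
        return out;
    }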
If you compare each line that way to the line above, and you add the necessary constant to the line in order to make it as bright as the line above, you will certainly get a smoother picture. The problem is that you get a strong divergence. Two lines close to each other will get a seemingly equal brightness, but two lines, say, ten lines apart will have significantly different brightnesses. That's because each time you compare two lines, you get a small error. After ten lines the errors accumulate and you get a strong error. There is a need for a regulation mechanism. I tried out many mechanisms; the best I found is to compare each line with all 32 lines above it. Then an average of the constants is made. This decreases the amount of the error. The exact way the software proceeds is this: it starts from the line in the middle of the picture and makes it even with the 32 lines above. Then it does the same for the next line below, and so on towards the bottom line. Then the process starts again, this time from a line well below the middle line. All lines are processed, towards the top line. Now all lines in the picture have roughly equal luminosities. A slight variation of brightness remains between the top line and the bottom line, but I believe that does no harm.
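Pieced together from this description, the downward sweep might look roughly like the sketch below. The line comparison reuses the median shortcut from earlier, the handling of the first 32 lines is my assumption, and the upward sweep is assumed to mirror it; the real source may differ in all of these details:

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    using Image = std::vector<std::vector<double>>;

    // Placeholder line comparison (median of per-pixel differences, as in
    // the earlier snippet); Unundul also smooths the pixels first.
    static double constant_against(const Image& img, int line, int ref)
    {
        std::vector<double> diff(img[line].size());
        for (std::size_t i = 0; i < diff.size(); ++i)
            diff[i] = img[ref][i] - img[line][i];
        std::nth_element(diff.begin(), diff.begin() + diff.size() / 2, diff.end());
        return diff[diff.size() / 2];
    }

    // Downward sweep: starting from the middle line, even each line out
    // against the average of the constants computed from the 32 lines
    // above it.
    void sweep_down(Image& img)
    {
        const int window = 32;
        const int height = int(img.size());
        for (int line = height / 2; line < height; ++line) {
            double sum = 0.0;
            int count = 0;
            for (int ref = line - window; ref < line; ++ref)
                if (ref >= 0) { sum += constant_against(img, line, ref); ++count; }
            const double c = (count > 0) ? sum / count : 0.0;
            for (double& pixel : img[line]) pixel += c;  // even this line out
        }
    }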
The last operation is to compute the average brightness of the original picture and force the output picture to have the same average brightness. Say the average brightness of the pixels of the original image is 89 and the average brightness of the output is 92; the software will then subtract 3 from the luminosity of each pixel of the output picture. This is merely cosmetic. It makes the output look as close as possible to the original.
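A sketch of that final step, assuming floating-point pixel buffers of equal size; the names are mine, not Unundul's:

    #include <cstddef>
    #include <vector>

    // Shift every output pixel so the output's average brightness matches
    // the original's average.
    void match_average(const std::vector<double>& original,
                       std::vector<double>& output)
    {
        double orig_sum = 0.0;
        double out_sum = 0.0;
        for (double p : original) orig_sum += p;
        for (double p : output)   out_sum  += p;
        // e.g. shift = -3 when the original averages 89 and the output 92
        const double shift = (orig_sum - out_sum) / double(output.size());
        for (double& p : output) p += shift;
    }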
Technical remarks:
- The input picture must be in BMP format and must not be RLE-encoded.
- The examples above may make you believe the computations are made with integers. Actually the software uses floating-point numbers. This allows for optimal precision with no effort, yet it makes the software slower.
- I tested the software on a set of pictures and got good results for all of them. Yet bad results can be produced; the software is not perfect. If problems are encountered with some pictures, the software can probably be adapted to cope with them.
- The source code tarball proposed above contains a README file that explains how to compile and run the software. It was developed on a CRUX 2.0 Linux box.
- Computation time for the example above was 2 minutes on a 1.5 GHz Pentium IV Celeron processor.
I wish to thank Pierre Backers for having found and reported a bug in the first version of this software.
Eric Brasseur

February 3 2005