Reprocessing of Cassini camera #2 images
The Cassini space probe is currently orbiting the planet Saturn. It uses a
high-quality telescope imaging system to take photographs of Saturn and its
moons. Those pictures are of outstanding scientific value and graphical
quality.
Another system on Cassini, named camera #2, takes far less polished
pictures. One problem those pictures have is that they are smeared with
horizontal lines of noise. For example this picture:
I wrote a command-line program to reprocess the images and remove most
of the horizontal noise lines. Like this:
The software is called "Unundul". Its source code is available at
It is written in C++ and placed under the GNU General Public License.
Picture reprocessing software can encounter many problems. The worst
problems, when it comes to scientific data, are that the software can
invent things that don't exist or remove valuable data. I remember
once I saw a picture of a star field taken by the Hubble Space
Telescope. Some stars were little blue discs. I really thought Hubble
had captured the surface of those stars and that I was looking at the
actual disc of a star. Actually the discs were just artifacts produced by
a picture scaling software. The real size of the stars was a minute
fraction of the size of one sole pixel of the image. In order to cope with
this problem, the unundul command produces a second image which is a
subtraction of the original image and the reprocessed image. That
check image proves nothing, yet it gives a hint that little or no data
was lost. Should something important have been removed from the picture,
it will be present on the check image. The little dots you can see are
due to the fact that those pixels on the reprocessed image had a brightness
above the maximum of 255. Hence they were clipped to 255. If the check image
is compared with the reprocessed one before the brightness clipping, no
dots appear. I could not decide which procedure was the most honest
one, so I chose the one that yields the least pleasing result:
How does unundul operate?
The brightness of each image pixel is expressed as a number between 0
and 255. A brightness of 0 means the pixel is pitch dark. A brightness
of 255 means the pixel is bright white. 128 means mid-gray.
Basically, the software compares a given line to its neighbor lines
above and below. It makes the line lighter or darker so it matches the
brightness of those neighbors.
The first problem was to determine how exactly a line should be made
brighter. These are the basic possibilities:
- The brightness of each line pixel is multiplied by a factor.
  Suppose this short line of six pixels: 100 102 98 45 99 101.
  Making these pixels 5% brighter means multiplying their brightness
  by 1.05. This yields 105 107 103 47 104 106.
- A constant is added to each pixel's luminosity. Suppose the same
  pixels as above. Adding 5 to their luminosity yields
  105 107 103 50 104 106.
The answer is that a constant has to be added.
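The additive correction can be sketched as a small helper. This is an illustration, not the actual unundul code; the function name `brighten_add` is mine, and the clamp to 0..255 reflects the clipping behavior described earlier:

```cpp
#include <vector>

// Brighten (or darken) a line by adding a constant to each pixel,
// clamping the result to the valid 0..255 range.
std::vector<int> brighten_add(const std::vector<int>& line, int constant) {
    std::vector<int> out;
    for (int p : line) {
        int v = p + constant;
        if (v < 0) v = 0;
        if (v > 255) v = 255;
        out.push_back(v);
    }
    return out;
}
```

Applied to the example line with a constant of 5, it gives 105 107 103 50 104 106.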
The second problem is how two lines should be compared. The real question
is how exactly the constant must be computed. Suppose these two lines:
110 110 110 255 110 110
100 100 100 100 100 100
For us humans it is obvious that the constant to be added to the second
line of pixels is 10. The pixel with a brightness of 255 in the first line
must be an error or an object, and we instinctively neglect it. How can
we make a computer software behave the same intelligent way? Ideally,
no intelligence should be implied in such a software. Intelligence
means possible errors. We have to find a mechanism anyway.
A basic approach is to compute the average brightness of each line.
The average of the first line is 134. The average of the second line is
100. This yields a constant of 134 - 100 = 34 that has to be added to each
pixel of the second line, in order to make that second line as bright as
the first line. Obviously this is bad.
The mechanism I finally found involves computing the sum of the absolute
values of the differences between the pixels. For example, the
difference between the first pixel of the second line and the first
pixel of the first line is -10. This yields an absolute value of 10.
For the fourth pixels this yields 155. For all six pixel pairs this
yields 10 + 10 + 10 + 155 + 10 + 10 = 205. Now, this calculation has to
be performed for every possible value of the constant. For example,
if we suppose we add a constant of 9 to the second line, the
computation of the absolute values yields 1 + 1 + 1 + 146 + 1 + 1 =
151. If we suppose we add 10, this yields 0 + 0 + 0 + 145 + 0 + 0 =
145. If we add 11, this yields 1 + 1 + 1 + 144 + 1 + 1 = 149. The
smallest value we got is 145, for a constant of 10. Hence 10 is the
correct answer. The principle of this mechanism is to favor the
pixels that can be made close to each other and neglect the pixels with
very different values.
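The search can be sketched as follows. The real program converges with an approximation algorithm (see below); this illustration simply brute-forces every constant in a plausible range, and the function names are mine:

```cpp
#include <cstdlib>
#include <vector>

// Sum of absolute differences between line `a` and line `b` after
// adding `constant` to every pixel of `b` -- the score to minimize.
int sad_score(const std::vector<int>& a, const std::vector<int>& b,
              int constant) {
    int sum = 0;
    for (size_t i = 0; i < a.size(); ++i)
        sum += std::abs(a[i] - (b[i] + constant));
    return sum;
}

// Brute-force search for the constant with the lowest score.
int best_constant(const std::vector<int>& a, const std::vector<int>& b) {
    int best_c = -255;
    int best = sad_score(a, b, best_c);
    for (int c = -254; c <= 255; ++c) {
        int s = sad_score(a, b, c);
        if (s < best) { best = s; best_c = c; }
    }
    return best_c;
}
```

On the two example lines, the minimum score of 145 is reached for a constant of 10: the five matching pixels pull the answer their way and the outlier at 255 is effectively neglected.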
In fact the software does not try out every possible constant. It uses
an approximation algorithm to converge quickly towards the right value.
Also, the software does not use the raw brightness of each pixel, because
the pixels are too noisy. Comparing them directly does not yield the
best result. Instead, for each pixel, the software uses an average of that
pixel and its two left and two right neighbors. Don't ask me why two
neighbors left and right; I just tested out different values between
16 and 0 and found out that 2 yields the smoothest result.
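The neighbor averaging amounts to a 5-pixel horizontal box filter. A minimal sketch, with a name of my choosing; how unundul treats the edge pixels is not stated, so here each edge pixel simply averages the neighbors that exist:

```cpp
#include <vector>

// Replace each pixel by the average of itself and its two left and
// two right neighbors, to tame noise before lines are compared.
std::vector<double> smooth_line(const std::vector<int>& line) {
    std::vector<double> out;
    int n = (int)line.size();
    for (int i = 0; i < n; ++i) {
        double sum = 0;
        int count = 0;
        for (int j = i - 2; j <= i + 2; ++j) {
            if (j >= 0 && j < n) {
                sum += line[j];
                ++count;
            }
        }
        out.push_back(sum / count);
    }
    return out;
}
```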
If you compare each line that way to the line above, and you add the
necessary constant to the line in order to make it as bright as the
line above, you sure will get a smoother picture. The problem is you
get a strong divergence. Two lines close to each other will get a
seemingly equal brightness. But two lines, say ten lines apart, will
have significantly different brightnesses. That's because each
time you compare two lines, you get a small error. After ten lines the
errors accumulate and you get a strong error. There is a need for a
regulation mechanism. I tried out many mechanisms; finally the best one I
found is to compare each line with all the 32 lines above it. Then an
average of the constants is made. This decreases the amount of the
error. The exact way the software proceeds is this: it starts from a
line in the middle of the picture and makes it even with the 32 lines
above. Then it does the same for the next line below, and so on down to
the bottom line. Then the process starts again, this time starting from
a line well below the middle line. All lines are processed, up to the
top line. Now all lines in the picture have roughly equal brightness.
A slight variation of brightness remains between the top line and the
bottom line, but I believe that does not hamper anything.
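The downward half of that sweep can be sketched as follows. This is a simplified illustration, not the actual unundul code: the names are mine, and `offset_between` here is just a difference of line means standing in for the sum-of-absolute-differences constant search described above, to keep the sketch short:

```cpp
#include <vector>

using Row = std::vector<double>;

// Stand-in for the constant search: difference of the line means.
// (The real program minimizes the sum of absolute differences.)
double offset_between(const Row& above, const Row& line) {
    double sa = 0, sb = 0;
    for (double p : above) sa += p;
    for (double p : line) sb += p;
    return sa / above.size() - sb / line.size();
}

// Average of the constants needed to match up to 32 lines above `row`.
double average_offset_to_above(const std::vector<Row>& image, int row) {
    double sum = 0;
    int count = 0;
    for (int r = row - 1; r >= 0 && r >= row - 32; --r) {
        sum += offset_between(image[r], image[row]);
        ++count;
    }
    return count ? sum / count : 0.0;
}

// Downward half of the sweep: start at the middle line and equalize
// each line against the lines above it, down to the bottom line.
// (The program then repeats the idea upwards from below the middle.)
void equalize_downwards(std::vector<Row>& image) {
    for (int row = (int)image.size() / 2; row < (int)image.size(); ++row) {
        double c = average_offset_to_above(image, row);
        for (double& p : image[row]) p += c;
    }
}
```

Averaging over 32 reference lines instead of one keeps the small per-comparison errors from accumulating into a visible drift across the picture.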
The last operation is to compute the average brightness of the original
picture and force the output picture to have the same average
brightness. Say the average brightness of the pixels of the original
image is 89 and the average brightness of the output is 92; the
software will then subtract 3 from the luminosity of each pixel of the
output picture. This is merely cosmetic. It makes the output seem as close
as possible to the original.
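This final normalization is a one-line shift. A minimal sketch with names of my choosing, treating the image as a flat list of pixel brightnesses:

```cpp
#include <vector>

// Average brightness of a set of pixels.
double mean_of(const std::vector<double>& pixels) {
    double sum = 0;
    for (double p : pixels) sum += p;
    return sum / pixels.size();
}

// Shift the whole output so its average brightness matches the
// original image's average.
void match_average(const std::vector<double>& original,
                   std::vector<double>& output) {
    double shift = mean_of(original) - mean_of(output);
    for (double& p : output) p += shift;
}
```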
- The input picture must be in BMP format and not RLE-encoded.
- The examples above can make you believe the computations are performed
  with integers. Actually the software uses floating point numbers. This
  allows for optimal precision with no effort. Yet it makes the
  computations slower.
- I tested out the software on a set of pictures and got good
  results for all of them. Yet bad results can be produced; the software
  is not perfect. If problems are encountered with some pictures, the
  software probably can be adapted to cope with those pictures.
- The source code tarball proposed above contains a README file
  that explains how to compile and run the software. It was developed on
  a CRUX 2.0 Linux box.
- Computation time for the example above was 2 minutes on a 1.5 GHz
  Pentium IV Celeron processor.
I wish to thank Pierre Backers for having found and reported a bug in
the first version of this software.
February 3 2005