A proposal to find Earth-like planets
Planets around stars are difficult to find. Many have been found and
several search techniques exist. These are the techniques I know of:
- Doppler effect. When a
heavy planet orbits a sun, it makes that sun wobble. The
wobble causes faint shifts in the sun's spectrum and those shifts
can be measured.
- Occultation (transit). Occasionally a
planet passes in front of its sun. That makes the sun's
brightness decrease a little.
- White dwarfs. That
kind of star has lost most of its mass. Hence the planets around the
star orbit farther out, which makes them easier to detect.
- Gravitational lenses.
Sometimes a sun and one of its planets are suitably aligned
with another star and the Earth. That causes a gravitational
lensing effect that reveals the presence of the planet.
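To get a feel for how tiny the first two signals are, here is a back-of-the-envelope sketch using standard values for the Sun, Jupiter and Earth (all figures below are my own illustration, not from the text):

```python
# Rough sizes of the Doppler and transit signals, with standard
# Sun/Jupiter/Earth values (illustrative assumptions only).

M_SUN = 1.989e30      # kg
M_JUP = 1.898e27      # kg
V_JUP = 13.07e3       # Jupiter's orbital speed, m/s
R_SUN = 6.957e8       # m
R_EARTH = 6.371e6     # m

# Doppler method: the star's reflex speed is the planet's orbital
# speed scaled down by the mass ratio.
v_star = V_JUP * M_JUP / M_SUN
print(f"Sun's wobble due to Jupiter: {v_star:.1f} m/s")

# Transit method: the brightness dip is the ratio of the disc areas.
depth = (R_EARTH / R_SUN) ** 2
print(f"Dip when Earth crosses the Sun: {depth:.1e}")
```

The Sun's wobble due to Jupiter comes out near 12 m/s, and an Earth-sized transit dims a Sun-sized star by less than one part in ten thousand, which is why these methods favor huge planets and rare events.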
Most of these methods detect the planet in an indirect way, detect only
huge Jupiter-like planets, or depend on very special and rare
events. Better telescopes are planned that will allow direct
detection of remote planets. The purpose of this text is to propose
a method that should contribute to detecting Earth-like planets.
Let's suppose the Universe is clean and our telescopes are
excellent. What would we see looking towards a star orbited by an
Earth-like planet? We dream of seeing something like this, with the
star on the left and the Earth-like planet as a little dot on the
right:
Two things are correct in that picture:
- The distance between the star and the planet, compared to their sizes.
- The white and blue color tints of the star and the planet (if
we suppose the star is similar to our Sun and the planet is
similar to our home Earth).
The big error is the luminosity of the star compared to that of the
planet. Also the picture suggests the planet has a diameter of 1
pixel while the star has a diameter of 4 pixels.
Suppose we focus on the pixel where the planet is situated. We use a
hypothetical hyper-telescope to zoom on it 100 times. This is what
we would see:
On that picture we really can see the planet, as a disc 2 pixels in
diameter. That little 100 x 100 picture
is made of 9,996 black pixels and the planet's 4 colored pixels. The
contribution of the planet to the picture's global brightness is far
too faint to be noticeable. If we shrink that picture back down to
1 pixel, that pixel will be pitch black, the same perfect black
as all the other pixels in the first picture. So in fact the planet
should have been invisible on the first picture.
What's more, we supposed the star and the planet have comparable
surface brightnesses. That's false too. Just compare the Sun's
brightness with the Moon's brightness when both are visible in the
sky. The Moon's surface brightness is comparable to the Earth's.
So the planet's total brightness is tremendously weaker than that of
its star.
A simple space-based telescope would have no chance to spot a planet,
because the star's brightness would overwhelm the telescope's camera.
Just have a look at photographs of galaxies. The stars of our own
galaxy present in the picture field virtually burn away the camera's
sensor over a radius of several pixels. Those stars appear quite huge,
while their real diameter is only a minute fraction of a pixel. Masking
away the star will help. Masking devices are planned for future
space telescopes.
I suppose a simple Earth-based telescope has yet another problem
spotting a planet: the light produced by the star will overwhelm
everything, due to light scattering in the Earth's atmosphere:
Maybe a mask in astrosynchronous orbit could help the Earth-based
telescope... Or a high-altitude dirigible.
Whatever technique is used to dim away the star's light, detecting the
faint brightness of the planet will be difficult.
There is a way to detect whether a pixel shows a slight brightness
increase. To use it, a huge number of photographs must be taken, all
the same as the one just above. Maybe a million of them. They must
be fed into a computer. Let's explain this with a practical example.
Suppose a team of astronomers believes a planet is situated inside an
8 x 8 pixel square at this position around the star:
Our job is to determine whether the planet is present and where exactly
it is situated inside the 8 x 8 square. To that end we will
use photos of the 8 x 8 region. Each photo will be
strongly exposed. Say we aim for an average luminosity of 100. That
yields little photos like this one:
That little photo seems all gray. In fact some pixels are slightly
brighter and some are slightly darker. That's unavoidable when
taking a photo. At random there will always be slight differences
between the pixels, even if what you photograph is perfectly uniform
and your camera is "perfect". Below is the little photo zoomed in. If
you've good eyes and a quality display you will be able to see the
brightness of the pixels varies slightly. It may help to increase
your display's contrast:
This table shows the brightnesses of the 8 x 8 pixels:
In the top row there is a pixel with a brightness of 101. This
doesn't prove at all that the planet is situated there. The brightness
of the planet I simulated is 0.01. It would never have made the
brightness of a pixel rise from 100 to 101 on its own. Besides,
other pixels have a value of 101 too. Those are unavoidable errors,
just like the pixels with brightness 99. The only thing we can state
is that the pixel where the planet is located will read 101 slightly
more often than 99 across the many photos.
Let's take 1,000,000 such little photos and sum them up with the
computer. We sum up all 1,000,000 brightness values for each pixel.
That yields this table:
Clearly one cell contains a value significantly higher than the
others. That's the cell in bold, the fifth one on the third row.
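The summation can be sketched in a few lines. The planet's pixel, the average brightness of 100 and the planet brightness of 0.01 come from the text; the Gaussian noise model, the random seed and the chunked loop are my own assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 1_000_000                 # number of little photos
planet = (2, 4)               # third row, fifth column (zero-indexed)
total = np.zeros((8, 8))

# Sum the photos in chunks so memory use stays small.
for _ in range(100):
    photos = rng.normal(100.0, 1.0, size=(N // 100, 8, 8))
    photos[:, planet[0], planet[1]] += 0.01   # the planet's faint light
    total += photos.sum(axis=0)

# The planet's pixel received an extra 0.01 * 1,000,000 = 10,000,
# while each pixel's random noise only grew to about
# sqrt(1,000,000) = 1,000. The maximum points at the planet.
print(np.unravel_index(total.argmax(), total.shape))   # → (2, 4)
```

Even though the planet never changes any single photo by a visible amount, its contribution grows linearly with the number of photos while the noise grows only as the square root, so the summed table singles it out.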
Let's make the table simpler by making the numbers range from 0 to
255. That is, the lowest value becomes 0 and the highest becomes 255,
with the formula: scaled = round(255 x (value - lowest) / (highest - lowest)).
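With invented numbers (only the rescaling itself matters here), that normalization to the 0-255 range looks like this:

```python
import numpy as np

# Hypothetical summed table; any values will do for the sketch.
table = np.array([[100000123., 99999871.],
                  [100010042., 99999950.]])

# Lowest value maps to 0, highest to 255, everything else in between.
lo, hi = table.min(), table.max()
scaled = np.round(255 * (table - lo) / (hi - lo)).astype(np.uint8)
print(scaled)
```

The result can be displayed directly as an 8-bit grayscale picture.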
We can turn that table back into a picture and it yields this:
The position of the planet is clearly visible. Let's increase the
size just for fun:
Let's ask for a cubic scaling to get the kind of picture popular
science papers like:
This method allows the measurement of each pixel's brightness
to become very precise. So if one pixel is just slightly brighter
than its neighbors, the computer will be able to reveal it. It will
be able to tell us where the planet is located. Once again, alas, this
won't work. This method is excellent and it is widely used. But in
this given case it won't help, because the space around, in front
of and behind the star isn't clean at all. There are numerous dust
clouds, distant galaxies, stellar wind bursts... All that means there
will be brightness variations in just about every pixel. We will get
bright pixels smeared all over. The planets very probably won't be
amongst the brightest.
So the only solution may be to increase the telescopes' resolution,
until we get images fine enough for the planets to emerge clearly out
of the noise. Then just a few sets of pictures taken a few months
apart will make it possible to be sure the dot we see is a planet
orbiting that given star. Wavelength analysis of the dot's color will
help too (planets have their specific wavelength signatures).
The proposal I would like to make resembles the technique described
above, that is, taking millions or billions of photos and feeding them
into a computer. Yet with two main differences:
- The photos have to be taken spread over a long time. Say each
second a photo is taken, during a whole year.
- Simply summing up these photos would be nonsense, since the
planet moves around the star. Instead, all possible planet
orbits around the star must be tried by a computer. When
considering one possible planet orbit, each photo is rotated and
flattened before the pixels are summed up. That way, should the
orbit being considered be the right one, the planet will remain
at the same place on all transformed photos. Hence it will appear
clearly in the final sum. In other words: all orbits tried out
will yield nothing, except the one that by chance is the right
one. That day a planet is detected.
This method relies on the planets' motion to detect them. Indeed, all
other sources of noise on the photos will move along different paths,
so they will cancel each other out in the final sum.
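The idea can be simulated in miniature. Instead of rotating whole images, this sketch follows the pixel where each candidate orbit predicts the planet to be and sums just that pixel across all photos, which amounts to the same thing for a circular orbit. Every number here (image size, noise level, planet brightness, orbital radius and periods) is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
SIZE, T = 32, 5000                    # image size and number of photos
cx = cy = SIZE // 2                   # the star, centred on every photo
r_true = 10                           # planet's orbital radius, pixels
w_true = 2 * np.pi / 1000             # planet's angular speed, rad/photo

# Simulated photos: uniform sky noise plus a faint planet moving
# along a circular orbit around the star.
t = np.arange(T)
photos = rng.normal(100.0, 1.0, size=(T, SIZE, SIZE))
px = np.round(cx + r_true * np.cos(w_true * t)).astype(int)
py = np.round(cy + r_true * np.sin(w_true * t)).astype(int)
photos[t, py, px] += 0.2              # the planet's faint light

def stacked_brightness(w, r):
    """Sum, over all photos, the pixel where a planet of angular
    speed w and radius r would sit (start phase fixed at 0)."""
    x = np.round(cx + r * np.cos(w * t)).astype(int)
    y = np.round(cy + r * np.sin(w * t)).astype(int)
    return photos[t, y, x].sum()

# Try several candidate orbital periods; only the track that matches
# the planet's real motion accumulates its light photo after photo.
periods = [500, 750, 1000, 1250]
best = max(periods, key=lambda p: stacked_brightness(2 * np.pi / p, r_true))
print(best)   # → 1000
```

Wrong periods sample mostly noise, so their sums stay near the background level, while the correct period collects the planet's light in every single photo.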
Let's illustrate this with an example. Suppose we took 31,536,000
photos during an Earth year. This is photo number one:
One of the many possible orbits for a planet around the star is
shown beneath, in blue. The white dot on the right is a possible
start position for the planet. The start position doesn't matter; I
just pick one to make the explanation more obvious:
After three Earth months the planet would be at this position,
above right of the star on the photo:
What must the computer do to add these two pictures? First it must
flatten the images to make the orbit circular:
Then the images must be rotated so the position of the fictitious
planet becomes the same (whatever that position is):
When these two pictures are added, centered around the star, the
brightness of the planet adds to itself in the same pixel or set of
close pixels. Hence if we add enough different pictures, we're sure
the brightness of the planet will become distinguishable. Again,
there is little chance that the one given orbit we supposed is the
right one. By far the most probable outcome is that we get nothing in
the final result. But one of the many orbits we try out will be the
right one.
At the least, a supercomputer will be necessary to compute all
possible orbits, distort the images accordingly and sum them up. A
high-technology telescope of today is needed to get pictures sharp
enough for the experiment. I don't know if such a telescope can be
dedicated to taking millions or billions of photos of one single star
and its close surroundings. Anyway, I believe this method can yield
results with today's technology.
The computer does not have to compute all possible orbits. It
just has to compute all possible planet rotation speeds
around the star, whatever the distance between the star and the
planet. Then it must compute the ways those orbits are flattened
or expanded towards circles (because of the angle at which we look
at the star). Once a planet is found, its apparent distance
from the star and the supposed rotation speed will immediately yield
the star's mass. Also, at first only near-circular orbits are to be
considered, like those of most planets around the Sun. Only if no
planet is found should elliptical orbits be tried out too (if
funds for the supercomputer are still available). If the star has at
least one big planet like Jupiter, it will be detected faster and
will yield data on the star that will boost the finding of other
planets.
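The mass deduction goes through Kepler's third law, M = 4 pi^2 a^3 / (G T^2). A quick sketch with standard constants (the function name and the sanity check are my own):

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
AU = 1.496e11        # metres per astronomical unit
YEAR = 3.156e7       # seconds per year

def star_mass(a_au, period_years):
    """Kepler's third law: M = 4 pi^2 a^3 / (G T^2)."""
    a = a_au * AU
    T = period_years * YEAR
    return 4 * math.pi ** 2 * a ** 3 / (G * T ** 2)

# Sanity check: an Earth-like orbit (1 AU, 1 year) gives back
# about one solar mass (1.989e30 kg).
m = star_mass(1.0, 1.0)
print(m / 1.989e30)   # → roughly 1.0
```

So the very same orbital fit that reveals the planet also weighs the star for free.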
The mathematical method shown here is not the only possible way to
carry out the calculations. Depending on the type of supercomputer or
the mathematical skills of the researchers, other methods can be
used: simplified integer algorithms that use no picture scaling, a
Fourier transform to get the planet's orbit as a peak on a graph...
Yet the principle remains the same.
What telescope resolution is needed to get a result? Actually this
is not a key question. Rather, there is a trade-off between the
telescope resolution and the number of photos needed to get a
result. The lower the telescope resolution, the more pictures you
need to feed the computer. Even a telescope that merges the star
and the planet into the same pixel could do the job, provided a
reliable method exists to center the star on the photos and billions
upon billions of photos are taken and computed. For example a whole
starfield can be photographed. The star cloud as a whole can be used
to center each star in a reliable way. Then the search for the orbits
of all planets around all stars can be computed.
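The trade-off follows photon statistics: stacking N photos grows the planet's summed signal N times while random noise grows only by the square root of N, so the signal-to-noise ratio improves as sqrt(N). A sketch (the detection threshold of 5 sigma is my assumption):

```python
import math

def photos_needed(snr_per_photo, target_snr=5.0):
    """Stacking N photos multiplies the signal-to-noise ratio by
    sqrt(N), so reaching the target needs N = (target / single)^2."""
    return math.ceil((target_snr / snr_per_photo) ** 2)

print(photos_needed(0.01))    # → 250000
print(photos_needed(0.001))   # → 25000000: 10x worse SNR, 100x photos
```

Halving the telescope's per-photo sensitivity thus quadruples the number of photos needed, which is why even a very modest instrument could work given enough exposures.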
Fresnel telescopes seem promising:
The method was used in 2013 to detect the moon S/2004 N 1 of Neptune.
Eric Brasseur - July 5, 2004 till July 17, 2013