   



A proposal to find Earth-like planets






Planets around stars are difficult to find. Many have been found and several search techniques exist. These are the techniques I know about:
Most of these methods detect the planet in an indirect way, detect only huge Jupiter-like planets, or depend on very special and rare events. Better telescopes are planned that will allow direct detection of remote planets. The purpose of this text is to propose a method that should help detect Earth-like planets.


Let's suppose the Universe is clean and our telescopes are excellent. What would we see looking towards a star orbited by an Earth-like planet? We dream of seeing something like this, with the star on the left and the Earth-like planet as a little dot on the right:



Two things are correct in that picture:
The big error is the luminosity of the star compared to that of the planet. The picture also suggests the planet has a diameter of 1 pixel while the star has a diameter of 4 pixels.

Suppose we focus on the pixel where the planet is situated. We use a hypothetical hyper-telescope to zoom in on it 100 times. This is what we would see:



On that picture we really can see the planet, as a disc 2 pixels in diameter. That little 100 x 100 picture is made of 9,996 black pixels and the planet's 4 colored pixels. The contribution of the planet to the picture's overall brightness is far too faint to be noticeable. If we shrink that picture back down to 1 pixel, that pixel will be pitch black, the same perfect black as all the other pixels in the first picture. So in fact the planet should have been invisible in the first picture.
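To see why, note that shrinking the 100 x 100 picture to a single pixel amounts to averaging its 10,000 values. Here is a minimal sketch, assuming the planet's 4 pixels are at full brightness (255) and placed at an arbitrary position:

    import numpy as np

    # 100 x 100 field: all black except the planet's 4 pixels at full brightness
    field = np.zeros((100, 100))
    field[50:52, 50:52] = 255     # the 2-pixel-wide disc, at an arbitrary position

    # shrinking the picture to one pixel averages all 10,000 values
    print(field.mean())           # about 0.1 out of 255: indistinguishable from black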

What's more, we supposed the star and the planet have comparable surface brightnesses. That's false too. Just compare the Sun's brightness with the Moon's brightness when both are visible in the sky. The Moon's surface brightness is comparable to that of the Earth. So the planet's total brightness is tremendously weaker than that of the star.
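As a rough illustration of that comparison, the Sun-to-Moon brightness ratio can be worked out from their approximate apparent magnitudes (the magnitude scale is logarithmic: a difference of 5 magnitudes is a factor of 100 in brightness):

    # approximate apparent magnitudes: Sun about -26.7, full Moon about -12.7
    m_sun, m_moon = -26.7, -12.7
    ratio = 10 ** ((m_moon - m_sun) / 2.5)
    print(ratio)    # roughly 400,000: the Sun outshines the full Moon by this factor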

A simple space-based telescope would have no chance of spotting the planet, because the star's brightness will overwhelm the telescope's camera. Just have a look at photographs of galaxies. The stars of our own galaxy present in the field virtually burn away the camera's sensor over a radius of a few pixels. Those stars appear quite huge, while their real diameter is only a minute fraction of a pixel. Masking away the star will help. Masking devices are planned for future space telescopes.

I suppose a simple Earth-based telescope faces yet another problem when trying to spot the planet: the light produced by its star will overwhelm everything, due to light scattering in the Earth's atmosphere:



Maybe a mask in astrosynchronous orbit could help the Earth-based telescope... Or a high-altitude dirigible.

Whatever technique is used to dim away the star's light, detecting the faint brightness of the planet will be difficult.

There is a way to detect whether a pixel shows a slight brightness increase. To do so, a huge number of photographs must be taken, all of the same view as the one just above. Maybe a million of them. They must be fed into a computer. Let's explain this with a practical example. Suppose a team of astronomers believes a planet is situated inside an 8 x 8 pixel square at this position around the star:



Our job is to determine whether the planet is present and where exactly it is situated inside the 8 x 8 square. For that we will use photos of the 8 x 8 region. Each photo will be strongly exposed. Say we seek an average luminosity of 100. That yields little photos like this one:



That little photo seems all gray. In fact some pixels are slightly brighter and some are slightly darker. That's unavoidable when taking a photo. At random there will always be slight differences between the pixels, even if what you photograph is perfectly uniform and your camera is "perfect". Below is the little photo zoomed x16. If you have good eyes and a quality display you will be able to see that the brightness of the pixels varies slightly. Maybe help yourself by increasing your display's contrast:



This table shows the brightnesses of the 8 x 8 pixels in numbers:

100 100 100 101 100 100 100 100
100 99 99 100 100 101 100 100
99 100 100 100 99 100 99 100
101 100 100 99 101 100 101 100
100 100 100 100 101 100 101 100
101 99 99 100 101 100 101 99
100 100 101 100 100 99 99 101
100 100 100 101 99 100 100 99

In the top row there is a pixel with a brightness of 101. This doesn't prove at all that the planet is situated there. The brightness of the planet I simulated is 0.01. It would never have made the brightness of a pixel rise from 100 to 101 on its own. Besides, other pixels have a value of 101 too. Those are unavoidable errors, just like the pixels with brightness 99. The only thing we can state is that the pixel where the planet is located will read 101 slightly more often than 99 over the many photos.

Let's take 1,000,000 such little photos and sum them up with the computer. We sum up all 1,000,000 brightness values for each pixel. That yields this table:

100000053 100000393 100000385 99999508 100000941 100000470 100000081 99999820
99997783 100000508 99998497 100001408 100000061 99999286 100001270 100001734
99999250 99999024 99999406 100000668 100009910 99999918 99999647 100000261
99999297 100000310 100000249 99999935 100000042 99999176 100000227 99999233
100000574 99999072 100000182 100000313 100001088 99999329 100000546 100000926
99999716 100001071 100001168 99999342 99999512 100000999 100000133 100000895
99999438 99999494 100000283 100000695 100001030 99999697 99999639 99999414
100000159 99999906 100000208 100000750 99999914 99999609 99998716 100000286

Clearly one cell contains a value significantly higher than the others: the fifth one on the third row, 100009910.
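For the curious, the whole experiment can be simulated. The sketch below assumes a noise model of my own choosing (each pixel reads 100 plus a small Gaussian error, rounded to an integer) and models the planet's average 0.01 contribution as a rare extra count; the planet's pixel position is also an arbitrary assumption:

    import numpy as np

    rng = np.random.default_rng(0)

    N_FRAMES = 1_000_000      # number of little photos
    CHUNK = 10_000            # process the frames in batches to keep memory low
    PLANET = (2, 4)           # assumed planet pixel: third row, fifth column
    PLANET_FLUX = 0.01        # the planet's tiny average contribution per photo

    total = np.zeros((8, 8))
    for _ in range(N_FRAMES // CHUNK):
        # each pixel reads about 100 with a small random error, rounded to an
        # integer, so single frames only show values like 99, 100 or 101
        frames = np.rint(rng.normal(100.0, 0.4, size=(CHUNK, 8, 8)))
        # the planet adds on average 0.01 per photo, far below the rounding step;
        # here it is modeled as a rare extra count arriving in a few of the frames
        frames[:, PLANET[0], PLANET[1]] += rng.poisson(PLANET_FLUX, size=CHUNK)
        total += frames.sum(axis=0)

    print(total.astype(np.int64))     # one cell stands roughly 10,000 above the rest
    print(np.unravel_index(total.argmax(), total.shape))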

Let's make the table simpler by making the numbers range from 0 to 255. That is, the lowest value becomes 0 and the highest becomes 255, using the formula scaled = 255 x (value - minimum) / (maximum - minimum), rounded down to an integer.
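A minimal sketch of that rescaling, assuming the summed table is held in a numpy array:

    import numpy as np

    def rescale_0_255(table):
        # linearly map the lowest value to 0 and the highest to 255,
        # truncating to whole numbers
        t = np.asarray(table, dtype=float)
        return (255 * (t - t.min()) / (t.max() - t.min())).astype(np.uint8)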



We get this:

47 54 54 36 66 56 48 42
0 57 15 76 47 31 73 83
30 26 34 60 255 44 39 52
31 53 51 45 47 29 51 30
58 27 50 53 69 32 58 66
40 69 71 32 36 67 49 65
34 35 52 61 68 40 39 34
49 44 50 62 44 38 19 52

We can turn that table back into a picture, and it yields this:



The position of the planet is clearly visible. Let's increase the size, just for fun:



Let's ask for cubic interpolation when scaling up, to get the kind of picture popular-science papers like:



This method makes the measurement of the brightness of each pixel very precise. So if one pixel is just slightly brighter than its neighbors, the computer will be able to reveal it. It will be able to tell us where the planet is located. Once again, alas, this won't work. The method is excellent and it is widely used, but in this particular case it won't help, because the space around, in front of and behind the star isn't clean at all. There are numerous dust clouds, distant galaxies, stellar wind bursts... All that means there will be brightness variations in just about every pixel. We will get bright pixels smeared all over. The planets very probably won't be among the brightest.

So the only solution may be to increase the telescopes' resolution, until we get images fine enough for the planets to emerge clearly out of the noise. Then just a few sets of pictures taken a few months apart will allow us to be sure the dot we see is a planet orbiting that given star. Analyzing the dot's light at different wavelengths will help too (planets have their own specific spectral signatures).

The proposal I would like to make resembles the technique described above, that is, taking millions or billions of photos and feeding them to a computer, yet with two main differences:
This method relies on the planets' motion to detect them. Indeed, all other sources of noise in the photos will move along different paths, so they will cancel each other out in the final sum.

Let's illustrate this with an example. Suppose we took 31,536,000 photos during an Earth year (one every second). This is photo number one:



One of the many possible orbits for a planet around the star is shown below, in blue. The white dot on the right is a possible starting position for the planet. The starting position doesn't matter; I just assume one to make the explanation more obvious:



After three Earth months the planet would be at this position, above and to the right of the star on the photo:



What must the computer do to add these two pictures? First it must flatten the images to make the orbit circular:

 

Then the images must be rotated so that the position of the fictive planet becomes the same (whatever that position is):

 

When these two pictures are added, centered on the star, the brightness of the planet adds to itself in the same pixel or set of close pixels. Hence if we add enough different pictures, we're sure the brightness of the planet will become distinguishable. Again, there is little chance that the one given orbit we supposed is the right one. By far the most probable outcome is that we get nothing in the final result. But one of the many orbits we try out will be a good one...
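Below is a minimal sketch of that "flatten, rotate, sum" loop for one trial orbit. It assumes each frame is already centered on the star, that the trial orbit is circular with its squashed axis aligned with the image rows, and that scipy's image transforms are available; the function name and parameters are mine, not an established method:

    import numpy as np
    from scipy import ndimage

    def stack_for_trial_orbit(frames, times, period, inclination_deg):
        # frames: sequence of square 2-D images, each already centered on the star
        # times:  observation time of each frame, in the same unit as `period`
        # period: trial orbital period; inclination_deg: trial tilt (0 = face-on)
        shape = frames[0].shape
        center = (np.array(shape) - 1) / 2.0
        squash = np.cos(np.radians(inclination_deg))   # foreshortening of the orbit
        acc = np.zeros(shape, dtype=float)

        for img, t in zip(frames, times):
            phase = 2.0 * np.pi * t / period           # angle swept since the first photo
            c, s = np.cos(phase), np.sin(phase)
            # map de-projected, de-rotated output coordinates back onto the observed
            # image: rotate by the swept angle, then re-apply the foreshortening
            matrix = np.array([[squash, 0.0], [0.0, 1.0]]) @ np.array([[c, -s], [s, c]])
            offset = center - matrix @ center          # keep the star at the center
            acc += ndimage.affine_transform(img, matrix, offset=offset, order=1)
        return acc

A planet whose real orbit matches the trial period and inclination piles up in one pixel of the result, while everything else is smeared along circles and averages out. In a real search the period, inclination, position angle and rotation sense would all be scanned, and any run whose stacked image shows one pixel far above the noise flags a candidate orbit.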

At the very least a supercomputer will be necessary to compute all the possible orbits, distort the images accordingly and sum them up. A high-technology telescope of today is needed to get pictures sharp enough for the experiment. I don't know if such a telescope can be dedicated to taking millions or billions of photos of one single star and its close surroundings. Anyway, I believe this method can yield results with today's technology.

The computer does not have to compute all possible orbits. It just has to compute all possible planet rotation speeds around the star, whatever the distance between the star and the planet. Then it must compute the ways those orbits are flattened or expanded towards circles (because of the angle at which we look at the star). Once a planet is found, the distance from the star at which it appears and the assumed rotation speed immediately yield the star's mass. Also, at first only near-circular orbits are to be considered, like those of most planets around the Sun. Only if no planet is found need elliptical orbits be tried out too (if funds for the supercomputer are still available). If the star has at least one big planet like Jupiter, it will be detected faster and will yield data on the star that will boost the finding of other, fainter planets.
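The mass estimate follows from Kepler's third law, M = 4 pi^2 a^3 / (G P^2), where a is the orbit's radius and P its period. A quick check with the Earth's orbit recovers the Sun's mass:

    import numpy as np

    G = 6.674e-11                      # gravitational constant, m^3 kg^-1 s^-2

    def star_mass(orbit_radius_m, period_s):
        # Kepler's third law for a circular orbit, neglecting the planet's own mass
        return 4 * np.pi ** 2 * orbit_radius_m ** 3 / (G * period_s ** 2)

    print(star_mass(1.496e11, 365.25 * 86400))   # about 2e30 kg, the Sun's mass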

The mathematical method shown here is not the only possible way to carry out the calculations. Depending on the type of supercomputer or the mathematical skills of the researchers, other methods can be used: simplified integer algorithms that use no picture scaling, a Fourier transform to get the planet's orbit as a peak on a graph... Yet the principle remains the same.

What telescope resolution is needed to get a result? Actually this is not a key question. Rather, there is a trade-off between the telescope resolution and the number of photos needed to get a result. The lower the telescope's resolution, the more pictures you need to feed the computer. Even a telescope that merges the star and the planet into the same pixel could do the job, provided a reliable method exists to center the star on the photos and millions of billions of billions of photos are taken and computed. For example, a whole star field can be photographed. The star cloud as a whole can be used to center each star in a reliable way. Then the orbits of all the planets around all the stars can be computed.


Fresnel telescopes seem promising:



The method was used in 2013 to detect the moon S/2004 N 1 of Neptune:



Eric Brasseur  -  July 5 2004  till  July 17 2013