Highlight: Woolf | Leger | Fischer | Young | W. Borucki | Boss | Penny | Beichman

Session 5: Leger: opportunities in astronomy

A. Leger - Introduction
We start with what is actually expected: not the ideal spacecraft, but what will be the actual instrument, Darwin or TPF, and I hope both together. We are going to look for terrestrial planets around typically 100 or 200 stars. We're not going to explore the galaxy; on the other hand, we don't want to examine only 10 stars. We want a reasonable sample, and we want this as soon as possible. And the key point is that this is possible with spectroscopy at a resolution of typically 20, and possibly 100 on a few objects. This is something that we were already considering very useful. It would be very useful to have a spectrometer that can work in both regimes under optimum conditions. Of course, we have seen from previous talks that when you have something intriguing you really want to go deeper into it, and to go deeper you have to spend more time and you need a higher resolution. Maybe one hundred is much better; 50 is a minimum.

This is for a 3.5 meter telescope. We might have to reduce that aperture because of various constraints, such as technical ones. Basically the number of planetary detections we can look for would be similar, but then the number of objects that we can investigate by spectroscopy would be somewhat fewer. Of course there is a requirement to try to increase the spectral resolution as much as we can. Yesterday afternoon we were dreaming a little bit: we were thinking of emission lines that would be visible only with a resolution of 0.1 inverse centimeters, which is not really relevant here. But if we want to increase the resolution, we have to increase the collecting area. It's because of the quantum nature of light: as Dr. Woolf has pointed out, if you want a given signal-to-noise ratio, you have to respect the statistics of photon counting, and there is no way to escape that. So a possible recommendation would be to merge the efforts and capabilities of NASA and ESA on this project. Because, clearly, if we make a single mission with twice the budget, or something similar to that, we can do much better and be much more efficient. That would be my first suggestion for a recommendation.
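
A rough numerical sketch of that photon-counting argument (not from the talk, and using a purely illustrative planetary photon rate): for a photon-noise-limited spectrum, the signal-to-noise ratio per spectral channel goes as the square root of the collected photons, so holding the SNR fixed while raising the spectral resolution requires a proportional increase in collecting area or integration time.

```python
# Illustrative sketch (assumed numbers): photon-noise scaling between spectral
# resolution R, collecting area A, and integration time t, for an idealized,
# purely photon-noise-limited measurement.

import math

def snr_per_channel(area_m2, t_hours, resolution, photon_rate_per_m2_hr=50.0):
    """SNR in one spectral channel, assuming the planet delivers
    `photon_rate_per_m2_hr` detected photons per m^2 per hour across the band
    (a made-up fiducial number), split evenly over `resolution` channels."""
    n_photons = photon_rate_per_m2_hr * area_m2 * t_hours / resolution
    return math.sqrt(n_photons)          # Poisson statistics: SNR = sqrt(N)

base = snr_per_channel(area_m2=9.6, t_hours=100, resolution=20)    # ~3.5 m aperture
higher_r = snr_per_channel(area_m2=9.6, t_hours=100, resolution=100)
compensated = snr_per_channel(area_m2=48.0, t_hours=100, resolution=100)

print(base, higher_r, compensated)
# Going from R=20 to R=100 at fixed area costs a factor sqrt(5) in SNR;
# recovering it requires 5x the collecting area (or 5x the integration time).
```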

It is very unpleasant to prepare such a mission without knowing what you are expecting to look for. So we would like to have an idea of what planets we are going to find. Are they big, are they small? That's crucial. Can we simply develop a theory for that? Maybe I am wrong, but my understanding of the situation is that, presently, theory can explain things. It can explain what is observed. It can explain the existence of terrestrial planets. It can explain something a little bit trickier and more difficult, namely the existence of giant planets, which are the only kind of planet that we are sure indeed exists, because our solar system could be an exception. Remember, because of the Anthropic Principle, our solar system might be an exception: our solar system configuration is necessary for life to occur, and this is the reason we are here. So we have to be cautious with the solar system. But anyway we know there are giant exoplanets, and we can explain them with some difficulty, and we can explain the 51 Pegasi class objects. Still, this theory has very little predictive power today. It basically relies on one key parameter, which is the surface density of the protoplanetary disc, and up to now we have no way to estimate that density. So that's the question I ask: in the future, can we think of improving the theory to make it more useful? Maybe by observing protostars we can get an idea of these surface density parameters. That is needed for the theory to have some real predictive power.

Another way is to try to push the detection methods. This would be the case where we are not able to do the spectroscopy, but we could do the detection. And this is basically the transit method. There is some interest, of course, in microlensing. I have been pushing microlensing for a long time, but, to be honest, I'm not sure that microlensing is very capable of detecting terrestrial planets. It might be, but we need to test it first on the detection of giant planets. So I don't think we can rely on microlensing to get data that are statistically useful for predicting what we are going to find with Darwin or TPF.

The only alternative is the transit method. As you saw two days ago, there was no consensus on that. Why? This method can provide you with an estimate of the statistical abundance of terrestrial planets as a function of their radius and of the spectral type of the star. It is only statistical because, as you know, we rely on the fact that the system must be viewed almost edge-on, and of course there is no particular reason for a nearby star to have a planetary system that is oriented edge-on with respect to our viewpoint. We have to look at a much broader sample, so we are not looking only at nearby targets; typically we are looking at objects that are up to 500 parsecs from us, whereas TPF will look at objects that are perhaps only as much as 20 parsecs away. The information is only statistical, but it is quite useful. If we knew that 20% of stars have a planet bigger than twice the size of the Earth, and 30% have planets with sizes between those larger planets and the Earth, we would have a nice histogram of the planet size distribution, which would be useful.
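
A small illustrative calculation (assumed, simplified geometry) of why the transit method gives only statistical information: for randomly oriented circular orbits, the chance that a given planet transits its star is roughly the stellar radius divided by the orbital distance.

```python
# Illustrative sketch: geometric transit probability for randomly oriented
# orbits, p ~ R_star / a (circular orbit, grazing-transit details ignored).

R_SUN_M = 6.957e8        # solar radius [m]
AU_M = 1.496e11          # astronomical unit [m]

def transit_probability(r_star_m=R_SUN_M, a_m=AU_M):
    return r_star_m / a_m

p = transit_probability()                  # Earth analog around a Sun-like star
print(f"transit probability ~ {p:.3%}")    # ~0.5%
print(f"stars to monitor per transiting Earth analog ~ {1/p:.0f}")
# Even if every star had an Earth, only ~1 in 200 would show transits, which
# is why a transit survey must watch thousands of distant stars rather than
# the ~100-200 nearby TPF/Darwin targets.
```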

The counter-argument is that the transit method does not provide a target list specifically for Darwin or TPF. Charles Beichman has mentioned that point, and I think most of us agree on it. Therefore, in any case, the first year of the Darwin/TPF mission will be spent in the detection mode, looking at about 100 to 200 stars. However, I think that the transit method will provide us with very useful information just to define a strategy. For instance, when you are in the detection phase of the Darwin/TPF mission, are you going to spend a short integration time on many stars, or are you going to spend a long integration time on fewer stars? Clearly, if big planets are abundant, the first strategy is a good one. If 10% of the stars have big planets, then you have to look at a very big sample, spending a short integration time on each of the stars, and if there is a big planet you would find it. Be careful: when I talk about a big planet, I am talking about a big terrestrial planet, not a giant gaseous planet; I am talking about a planet with a radius 1.5 or 2 times that of the Earth. And if we know there are very few of them, then it is better to make a long integration to obtain high sensitivity and detect an Earth-size object. So I think we discussed whether we should put the effort at that level.

Then we discussed the next step after this mission, and we reached a consensus on it. Basically we have to choose a priori between building a new mission with either higher spectral resolution or with imaging capability. We discussed it as a group; several attendees spoke, and they all pointed to higher spectral resolution as the better approach. Clearly you can get much information from higher spectral resolution. Imaging, although attractive, has a terrible cost: even though a 25 by 25 pixel image would be wonderful to see, you would get information of uncertain value at a very high price. Maybe an exception to this assessment would be looking for moons; where you would want better imaging capability is to detect a moon around a giant planet residing in the habitable zone.

OK, these were clear conclusions. Although this capability will not be available for the next five years, of course, we agree that after that, going to higher spectral resolution is the thing to do. Finally, this effort is intrinsically a multidisciplinary subject, because there is a definite need for biological and atmospheric chemical input so that, when we have measured a spectrum, we can determine whether it has something to do with life. I think we're all convinced of that.

Woolf. The talk last night pointed me in a direction that I hadn't been thinking about before. It was the viewgraph that showed the difference between looking at Upsilon Andromedae with an interferometer and looking at it with a spectrograph. The long-period, far-out planets are the ones that show up with interferometric astrometry. We have decided that a lot of what we will find in the way of development of an Earth-like planet will depend on the amount of volatiles collected. That can depend rather crucially on the presence or absence of planets both at medium distances, a Jupiter-like orbit, and at large distances, a Neptune-like orbit. Now the horrible problem of observing those is the length of the observation period needed. If we keep SIM up for 10 years, we will still only have gone through, say, one-third of an orbit of a Saturn-like planet. I think we need to do something very early that we can do from the ground now. That is long-baseline astrometric interferometry in both hemispheres, focused on the set of stars within 10 parsecs. It's a much smaller sample than the one that we have identified for TPF/Darwin, considerably less than 100 stars, but those will be the ones that will show up best for these planets at large distances and let us understand more about what is out there.

Leger. Do you mean that if we find a Jupiter, or we don't find one, you are going to skip this object, or are you going to get it?

Woolf. I'm suggesting that after we observe with TPF, we may or may not know much about the distribution of giant planets. But at some point, we will know that information for the nearest stars, and we will know it best by having started very early in observing them. The Palomar test interferometer is already, I think, a device that can give us a lot of information, and I presume that on Mount Hopkins we also could be working toward these objectives. But I'm not sure we have set up any devices that are really optimized. The Keck interferometer, for example, is being applied more to using the Keck big telescopes, but this could be done with relatively modest apertures.

Leger. Don't forget that TPF will have the capability to observe bigger planets and more distant ones also.

Woolf. But they get too cool. They'll get out of our spectral range.

Leger. You're right.

Fischer. One of the perceptions that I've had sitting here the last few days is that we are like the five blind men trying to describe the elephant, and we don't have to be like that. I think a lot more effort could be put into ramping up for TPF. I haven't heard a lot of talk about SIM. I'm a little surprised, but SIM, the Keck interferometer, and the radial velocity work, all of this is complementary. All of it should provide the information that you need to go and find the very best targets for TPF: quiet stars, close stars, stars without unknown companions. The problem is actually worse for a Saturn-like companion, because if you only see 1/3 of the orbit with an astrometric instrument, then a lot of that motion is absorbed into the proper motion solution. Imagine you're seeing a sine wave, and you're just looking at this proper motion in the plane of the sky. So the problem is actually quite bad for those distant planets.
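
A small numerical sketch (with assumed, illustrative numbers) of the absorption effect Fischer describes: when only a fraction of a long orbital period is observed, fitting and removing a linear proper motion removes most of the planetary signal as well.

```python
# Illustrative sketch (assumed numbers): how a long-period astrometric signal
# is partly absorbed into the proper-motion solution when only a fraction of
# the orbit is observed.

import numpy as np

period_yr = 29.5                  # Saturn-like orbital period
amplitude_uas = 100.0             # assumed astrometric semi-amplitude
t = np.linspace(0.0, 10.0, 200)   # a 10-year mission: about 1/3 of the orbit

signal = amplitude_uas * np.sin(2 * np.pi * t / period_yr)

# Fit position offset + proper motion (a straight line) and subtract it,
# as a standard astrometric reduction would.
coeffs = np.polyfit(t, signal, 1)
residual = signal - np.polyval(coeffs, t)

print(f"full semi-amplitude : {amplitude_uas:.0f} micro-arcsec")
print(f"residual RMS        : {residual.std():.0f} micro-arcsec")
# Most of the planetary signal disappears into the linear proper-motion term,
# so the detectable signature is far smaller than the nominal amplitude.
```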

Leger. So what is your conclusion?

Fischer. I think a lot of effort should go immediately into making the missions that are on the table in the very near future the very best they can be. For example, even for SIM, there is work that I would like to see done on the photometric stability of stars and so forth, which would involve putting a small Meade telescope up on the Space Station, something like that. We don't know about the activity of stars, how much they are going to vary. SIM will begin to gather that information. But there is a lot of preparatory work that we could do to begin to probe and investigate all of these issues.

Leger. OK, but you agree it's not for terrestrial objects; all of these techniques you mentioned are basically able to detect giant or Uranus-size objects, which is quite interesting also.

Fischer. Absolutely, right, but that gives us the first baseline. That's all we can do, so let's do it and do it as well as we can.

Leger. There is still the transit photometry method, which is the real method to get information on terrestrial-size objects.

Fischer. Sure, now transits of Earth-like planets in habitable zones are going to occur roughly once every one or two years, and for a mission like the Kepler mission, you'll see three transits if you work real hard. Maybe Dr. Borucki could respond to that.

Young. I guess I didn't quite catch why you said that the transit method couldn't provide a target list for TPF.

Leger. As I mentioned, you don't get a target list, because you are not looking at the same stars you will probably be examining with TPF; for the transit you need the system to be edge-on. But what you are going to get is very interesting information, which is statistical evidence about how many planets of what size are around the stars. You are going to know, for example, how many planets are similar to the Earth in size or approximately 20% larger, how many planets have a radius 1.5 times that of the Earth, how many planets have twice this radius. And this is very important, even in designing the spectrometer for these missions: if we have a lot of planets (by a lot I mean 10% is enough) with twice the radius of the Earth, then, remember, twice the radius means four times the area, which means 16 times less required integration time. It's very interesting information.
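
A short sketch of the scaling Leger quotes (assuming a photon- or background-noise-limited measurement, with temperature and distance held fixed): the flux scales with the planet's emitting area, and the integration time needed for a given signal-to-noise ratio scales as the inverse square of the flux.

```python
# Illustrative sketch: a planet with twice the Earth's radius has four times
# the emitting area, hence four times the signal; for a noise-limited
# measurement the required integration time goes as 1/signal^2, i.e. 16x less.

def relative_integration_time(radius_in_earth_radii):
    """Integration time needed, relative to an Earth-size planet,
    to reach the same SNR (same temperature and distance assumed)."""
    signal = radius_in_earth_radii ** 2      # flux ~ emitting area ~ R^2
    return 1.0 / signal ** 2                 # t ~ 1/signal^2

for r in (1.0, 1.5, 2.0):
    print(f"R = {r} R_Earth  ->  t = {relative_integration_time(r):.3f} x t_Earth")
# R = 2 R_Earth gives t = 1/16 of the Earth case, as quoted above.
```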

W. Borucki. I think we all see the strategy here. We want to get spectra if we are going to understand biology in other atmospheres. So spectra are an absolute requirement. TPF is the prime way, the only way that we see right now, for getting those spectra. We've got to null the star out. We've got to look at the light from that planet and get spectroscopy on it. If we don't know the fraction of stars that have Earth-like planets, we can't design TPF properly. If most stars have Earth-like planets, it should be a fairly straightforward problem. People today are working on it; they have some good ideas as to how to handle the problem. If it turns out that Earth-like planets are very rare, 1% or something, we'll still have to use something like TPF, but it's going to be bigger. We have to design it properly. Now, what we would like to know, and know very quickly, is what fraction of stars have Earth-like planets, what kind of stars they are, and how many of those planets are in habitable zones. In other words, just finding an Earth-like planet is interesting all right, but we want to find it in a habitable zone. Transit photometry can do that today. We have been proposing a project like that to NASA Headquarters every two years for the Discovery Program, and generally people have liked the science. But they have asked us to go and do a technology demonstration. We are doing that technology demonstration; I don't see a problem with that. But transit photometry is a nice intermediate step that allows us to do the other steps in an appropriate fashion. It gives us the information that we need for those designs, and it does it so quickly. It also helps us convince the public that there are Earth-like planets to go and look for. TPF is a big project. It takes a lot of effort to make that work, and we are going to have to convince people that it's worth the effort to keep at it, even though we may run into some difficulties. So I think planetary transits can dramatically help us get to our objective, which is spectra of planets in habitable zones.

Leger. Who wants to comment on that? This is a pro argument, clearly. Who wants to discuss the counter-argument, or to agree? As you know, there is a European mission, Corot. It is a pretty small mission, a $50 million mission, very limited. It is a 30 cm telescope in low orbit, so it can observe the same field for only six months. So it is not going to detect Earth-like planets; it is going to detect planets that are somewhat bigger. At the margin, a very hot Earth, hotter than Mercury, an Earth-sized object in a 51 Peg-like orbit, could be detected. Also, Corot will get, in the habitable zone, objects with 1.5 times the radius of the Earth, as I said. It is supposed to be launched in 2003. It will provide us with some of the information, but not at the level of terrestrial objects in the habitable zone, as I said.

Woolf. My impression is somewhat different. If Earth-like planets are rather rare, we will probably still be trying for Terrestrial Planet Finder at the size that we have designed it, simply because we would have to demonstrate it at that size before we could possibly demonstrate anything bigger. But I do see a real advantage. If we knew, perhaps, that Earth-like planets are very, very common, we would probably not design Planet Finder the way that we were going to. Rather, we would be focusing on the Planet Finder design that is appropriate for more detailed observations of the very closest Earth-like planets, because there would be the chance to learn a great deal more with about the same amount of money. What we have done is design Planet Finder/Darwin toward both finding them and looking at them. If we knew that they were very common, we could limit the finding process and instead focus on looking at them, so I do see that transit photometry would be very helpful.

Anonymous. Would you really change the design that much, or would you just change the mission strategy?

Woolf. One of the things that we are caught on right now is that the heat shields for telescopes that are needed to keep them really cold tend to be sufficiently large that if you are going to have free fliers, you have to keep them apart. We might well decide that if we were concentrating on the closest stars that we would get by with telescopes connected by a boom and a single heat shield and that would allow us to focus more on stars that were closer in. So we would have perhaps a different kind of approach.

Leger. Even on the scheduling of the mission it would be quite useful. As you mentioned, either you spend a short time on many objects or a long time on fewer of them. You can do it by chance, but it is better to know.

Anonymous. But isn't that something you could adjust within the course of the mission? I mean, right now it's a year of surveying. If you went and in four months you found an Earth-size planet in the habitable zone around every star, you could re-adjust, couldn't you?

Leger. No, no, because, for instance, if 10% of the stars have big terrestrial planets, or even if 5% of stars have them, then we have to investigate a big sample, like 500 stars, just to get a significant number of them. That implies a very well-defined strategy: you have to look at each star for a short time if you must examine 500 of them, and then you are going to miss the Earth-size objects. So, no, it is very valuable to know this a priori.
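
A back-of-the-envelope sketch of the sample-size argument (illustrative numbers only): the expected yield of a survey is roughly the occurrence rate times the number of stars examined, which is why the occurrence rate has to be known before the integration-time strategy is fixed.

```python
# Illustrative sketch: if only a fraction f of stars host a large terrestrial
# planet, the expected number of detections in a sample of N surveyed stars is
# roughly f * N (detection efficiency ~1 assumed for the big planets).

def expected_detections(occurrence_rate, n_stars_surveyed):
    return occurrence_rate * n_stars_surveyed

for f in (0.05, 0.10):
    for n in (100, 500):
        print(f"f = {f:.0%}, N = {n:4d}  ->  ~{expected_detections(f, n):.0f} planets")
# With f ~ 5-10%, a 100-star sample yields only ~5-10 detections, whereas
# ~500 stars give a statistically useful number; hence the choice between
# short integrations on many stars and long integrations on few should be
# made before the mission, not during it.
```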

Boss. I think you did a good job of summarizing the problems with the theoretical prediction approach. In particular, you emphasized the need to understand surface densities, or solid densities, within the planet-forming disks. There will be a great leap in our understanding from those observations in the next few years, to the extent that the US and Europe agree to build the large millimeter array. That should basically decrease the resolution element from being on the order of a few tens to hundreds of AU down to on the order of 1 to a few AU. So we will really be able to probe the inner regions of planet-forming disks at the scale we need in order to understand terrestrial-type planet formation. Thus I think we could easily make a recommendation that will cost NASA nothing, which is to support use of the large millimeter array for probing protoplanetary disks, because that is basic to understanding not only terrestrial planet formation but also giant planet formation. The second point that I wanted to make is that long-period Jupiters are in some sense a signpost for whether or not there is likely to be a habitable Earth. If we believe that we need a Jupiter to shelter an inner terrestrial planet from cometary influx, and if we also want to make sure that the Jupiter isn't a short-period Jupiter that is going to prevent the formation of a terrestrial planet, then we should be able to identify ahead of time the systems that have long-period Jupiters, and no short-period Jupiters, as the systems that will look promising. And so, yes, as Deborah mentioned and as Nick pointed out, these are things we can search for right now by astrometry at least, perhaps not by radial velocity, but certainly by astrometry. We should make a strong push to say: let's know as much as we can right now from the ground, and eventually with SIM as well. We should find the nearby stars that do contain long-period Jupiters.

Leger. Does everyone agree on that, or not?

Penny. Coming back to the case where there is a two-Earth-radius planet five parsecs away: there you would be able to achieve spectral resolutions significantly above 100. That would actually need a different design of the spectrometer from what is currently envisaged. So, do you build that capability in, in case you find them? And then we need the theoreticians to tell us in much finer detail what we could do with it.

Leger. It's a point in favor of pushing a transit mission. And to answer the question you ask: are you going to design the instrument in a different way depending on whether such objects are known to exist? The spectral resolution that you need, that you would like to have on your instrument, depends on whether the planets are very big objects or not, because we might want to push the spectral resolution to 200, which is a problem, you know. You have to go from a prism to a grating instrument, which has a different temperature regime, so it is a real choice. To make this choice knowing what you are doing, you have to have an idea of whether there are such big planets, easy to observe.

Anonymous. I'd also like to comment on something that Alan said about the millimeter array. I mean, the whole problem of the exozodiacal dust disk is very important, both for confusing the observations and also perhaps for giving the mass of the planet. We need to know an awful lot more about that, so all of the work that is going on regarding disks around nearby stars, things like LBT and Keck and Fronta (?), should be supported very strongly.

Leger. By the way, do you think that the large millimeter array will be able to detect exozodiacal light to some level? Let us say 100 times the amount of dust in our solar system or ten times the amount. Do you know about that?

Anonymous. I don't really know. I would have thought there was more to look at in the thermal infrared rather than at millimeter wavelengths. But I'm not sure. If Imke were here, maybe she could say.

Beichman. I think right now they're doing the very brightest, the fabulous four from IRAS, on the JCMT. But they are going to increase their collecting area by a factor of ten with the MMA. You know, operating at millimeter wavelengths, I think they will push down, with the good spatial resolution that the array will have, certainly into the range of 100 times the solar dust amount, and really let you look at the dust in some detail. I think it will be more like 100 times solar than down in the range observed by the Keck telescopes, because the emission is really much weaker than if you were sitting right in the thermal infrared. But they will have good spatial resolution, so I think that we will advance our understanding of the zodiacal clouds in the brighter cases, and perhaps help knit together the picture. I think you still have to have Keck and LBT really looking at the 300 degree dust at 1 AU, at closer to solar system levels. But I think the combination of SIRTF for the outer clouds, the MMA for structures in close, and the interferometers together will give us a good package on the zodiacal dust.

Leger. In your estimation, by what date should we get this information?

Beichman. Certainly by 2001 or 2002.