QUESTION: Why are we stuck using communications technology that is 10 years obsolete?

ANSWER from Steven Tyler on January 15, 1996:

Hi, I'm the cognizant engineer for the Galileo Orbiter telecommunications with the Deep Space Stations on earth. Your question is why must we be stuck using communications technology that is 10 years obsolete? Why can't the spacecraft communications be made programmable? If they were programmable, then when new technology came along, we could just send up a change in software.

First, the real situation with Galileo is even worse than you suspected. The technology is not 10 years old; it's between 20 and 25 years old! I'm not speaking of the Probe which entered Jupiter's atmosphere and relayed its data back; the Probe's technology may be somewhat newer. I'm speaking of the Orbiter.

The Orbiter's major communications parts are the transponder (a combination of receiver and exciter), the power amplifier (a traveling wave tube), a telemetry modulation unit (which generates the telemetry subcarrier and converts telemetry data bits into convolutionally coded symbols for transmission to earth), and a command detector unit (which demodulates the command waveform from the radio receiver's output and sends the command data bits to the on-board computer). All of these parts are leftovers or "spares" from the Voyager spacecraft project. Voyager was launched in 1977, so it uses radio technology from the early 70s. And, truth be known, the fundamental designs of the transponder and traveling wave tube amplifiers date to the Mariner spacecraft of the early 60s to early 70s, and the telemetry modulation and command detection to the Viking spacecraft designed in the early 70s. Unlike some other subsystems on the Orbiter, none of the telecom equipment has any digital components that can be programmed.

Why? When Galileo got started in the late 70s, the communications system was not a "driver".
The leftover Voyager equipment not only was far less expensive than a new design, it was also "proven" through extensive Voyager pre-launch testing and in-flight operation. Even the unfurlable high gain antenna, the one that didn't unfurl for us in 1991, was a modification of an existing design for the earth-orbiting TDRS (tracking and data relay satellite).

The existing designs, with the high gain antenna, could send back data at 134.4 kilobits per second. This was as fast as was needed for the complement of science instruments on the spacecraft. There was no need to have communications equipment that could be "programmed" with later technology to send back data faster. There was no need to have a command detector that could receive commands faster than the 32 bits per second used on Galileo.

The communications "power", on both the telemetry downlink and the command uplink, was in the ground station. The telemetry downlink could be relatively inefficient by today's standards because the ground stations had 70-meter antennas with 18 kelvin noise temperature. The command uplink could be inefficient because the ground stations had 20 kW (and 100 kW) transmitters.

The evolving design philosophy for robotic spacecraft has changed in the last 20-25 years. The NASA administrator, Daniel Goldin, now requires NASA and JPL to design their spacecraft "FBC" -- that's faster, better, and cheaper. If I understand your question right, I suspect the newer FBC projects now underway do have communications systems that are programmable. I don't know for sure that their software could be updated to take advantage of new (meaning, as yet unproven) technology.

An analogy would be "hi-fi" FM receivers in the early 60s, before the advent of stereo broadcasting. I was just out of college at that time, and I remember buying an FM receiver that had a "multiplex input" jack for an as-yet unavailable stereo adapter.
Now that things are done in software rather than with analog circuits like the Galileo radio or my 60s FM radio, intelligent design would allow for updates after launch.

Finally, let me say that even though the spacecraft telecom equipment is old, unchangeable hardware, this isn't the case with the other end of the radio link, the Deep Space Network. As a matter of fact, there is one piece of the spacecraft that is programmable and *has* been reprogrammed to make communications more efficient. A portion of the software in the central computer (CDS, or command and data subsystem) will be updated later this spring to form a "software convolutional coder". This software coder will work in series with the old hardware convolutional coder to produce an overall code that is more efficient, reducing the signal-to-noise ratio required to receive a specified quality of telemetry data on earth. This upgrade is part of what is called Phase 2, which will be used to return science data from each of the Jovian satellite encounters, this summer and beyond.

At the ground stations, many improvements have been made in anticipation of Phase 1 (the current configuration, for returning the data from the Probe) and Phase 2. You can find descriptions of Phase 2 on the Galileo home page at http://www.jpl.nasa.gov/galileo/.

In brief summary, here are some of the changes that allow us to obtain 70% of the mission's science value even though we operate with a low gain antenna on the spacecraft that has about 1/10,000 the communications capability of the high gain antenna that didn't unfurl. A version of the following material was provided on a Galileo web site by Steve Licata of the Galileo Mission Control Team.

Several modifications by the DSN, to both hardware and software, have been made to increase the Galileo telemetry rate capability by about an order of magnitude above that normally expected for the low-gain antenna downlink from Jupiter.
This has been accomplished through a variety of methods. The five principal improvements are listed below, then explained in greater detail.

- The effective antenna gain has been increased through both intersite and intrasite arraying. Arraying is combining the outputs of two or more antennas to form a composite output that has greater performance.

- The antenna system noise has been lowered at DSS-43 (the 70-meter antenna near Canberra, Australia). Because Galileo has a southerly declination in 1996 and 1997, the Australian site has the longest view period and the highest elevation angles. It can therefore benefit the most from the use of an ultra-low noise front end called an ultracone (see the first paragraph below).

- An improved receiver, the Block V or BVR, is installed at each 70-meter station, and at the other antennas used in arraying, as they become available.

- The recovery of downlinked data has been enhanced through advanced encoding algorithms.

- Telemetry gaps at each site are minimized during signal acquisition, data rate changes, and switching between 1-way and 2-way communications through the use of enhanced signal processing and front-end signal recorders.

Ultracone: The ultracone installed at DSS-43 (placed in operation on October 9, 1995) reduces the zenith system noise temperature from 13.6 to 10.5 kelvins, which translates into about a 0.8 dB gain in telemetry performance. Use of the ultracone precludes transmitting, so all such tracking passes are 1-way or "listen only". The ultracone can, however, be switched out temporarily to support 2-way telemetry, and then back into position again on the same tracking pass.

Block V Receivers: Advanced digital receivers (Block V receivers or BVRs) are employed at all the participating DSS antennas. These are phase-locked receivers with narrower tracking loops than the older Block IV receivers. The narrower loops result in less total noise power and therefore improved performance for telemetry.
These receivers should result in an approximate doubling of the telemetry throughput as compared to the Block IV receivers. In addition, the data symbol output streams from multiple BVRs can be readily combined. The new receivers also support the use of suppressed carrier modulation (phase modulation index 90 degrees), where all the radio energy can be put into the telemetry data. The Galileo Project switched over to suppressed carrier as the prime telemetry mode on September 18, 1995.

Advanced Coding Algorithms: Variable-redundancy Reed-Solomon (R-S) coding algorithms implemented on the spacecraft are expected to improve overall telemetry performance by an estimated 0.6 decibels (15 percent) when compared to a single R-S parity length. R-S coding of the downlink telemetry frame is in addition to the convolutional coding already used in previous mission phases and a new Phase 2 software-implemented convolutional code.

Operational DSN (Deep Space Network) modifications: Several modifications to operations will support Galileo Phase 2. Most notable are intersite and intrasite arraying, open loop control of the receiving antenna to anticipate and maintain signal lock during data rate changes, the processing of telemetry into decoded, synchronized frames as standard formatted data units at the DSN site (rather than at JPL), and the introduction of a latency, or delay, of 10-60 minutes in the receipt of data at JPL. This latency is caused by efforts to accommodate the arraying of up to five antennas simultaneously and to recover telemetry transfer frames normally lost during periods of symbol lock-up. Two redundant data streams from the main antenna (and arrayed antennas) are processed by independent methods, and a telemetry block is assembled by merging the "best" data from each stream.
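As an aside, the decibel figures quoted above can be cross-checked with simple arithmetic: a decibel value converts to a linear power ratio as 10^(dB/10). The short sketch below (plain Python, nothing Galileo-specific assumed) confirms that the quoted 0.6 dB Reed-Solomon coding gain corresponds to roughly 15 percent, and shows the equivalent ratio for the ultracone's 0.8 dB.

```python
def db_to_ratio(db):
    """Convert a decibel figure to a linear power ratio: 10^(dB/10)."""
    return 10 ** (db / 10)

# 0.6 dB gain quoted for variable-redundancy Reed-Solomon coding:
rs_gain = db_to_ratio(0.6)           # ~1.148, i.e. about 15 percent
# 0.8 dB gain quoted for the DSS-43 ultracone:
ultracone_gain = db_to_ratio(0.8)    # ~1.202, i.e. about 20 percent

print(f"0.6 dB -> {100 * (rs_gain - 1):.1f}% improvement")
print(f"0.8 dB -> {100 * (ultracone_gain - 1):.1f}% improvement")
```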
A complete record of the full and partial telemetry frames and symbol data is saved in a Post Pass Transfer file, which is sent electronically to JPL within 4 hours of the end of each pass to aid in the filling of data gaps. These enhancements are coordinated at each of the three sites (Goldstone, Calif. and Madrid, Spain, as well as Canberra) through the DSCC (Deep Space Communications Complex) Galileo Telemetry (DGT) Subsystem, now undergoing test and to be fully in place for Phase 2 arraying by November 1996.
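To make the "software coder in series with the hardware coder" idea concrete, here is a miniature sketch of cascading two convolutional encoders. The generators below are small textbook values (a rate-1/2, constraint-length-3 code, generators 7 and 5 octal) chosen only so the cascade structure is visible; they are not the actual codes Galileo flies, which use longer constraint lengths. The point is that feeding one rate-1/2 encoder's symbol stream into another yields an overall rate-1/4 code, at the cost of more symbols per data bit.

```python
# Toy cascade of two convolutional encoders (illustrative parameters only).

def conv_encode(bits, generators, K):
    """Rate 1/len(generators) convolutional encoder with constraint length K."""
    state = [0] * (K - 1)
    symbols = []
    for b in bits:
        regs = [b] + state            # newest bit enters the shift register
        for g in generators:          # one output symbol per generator tap set
            symbols.append(sum(r * t for r, t in zip(regs, g)) % 2)
        state = regs[:-1]             # shift the register for the next bit
    return symbols

G = [(1, 1, 1), (1, 0, 1)]            # taps for octal generators 7 and 5

data = [1, 0, 1, 1]
software_out = conv_encode(data, G, 3)          # rate 1/2: 2 symbols per bit
hardware_out = conv_encode(software_out, G, 3)  # cascaded: overall rate 1/4
print(len(data), len(software_out), len(hardware_out))  # 4 8 16
```

On the real spacecraft, the lower overall code rate is what buys the reduced signal-to-noise requirement described above: the ground decoder has four coded symbols, rather than two, from which to recover each data bit.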