CONTENTS

INNOVATIVE INSULATOR

Lightweight aerogels are a strong contender to replace CFC-propelled foams in refrigerators.

THE B FACTORY

A proposed collider designed to produce copious quantities of B mesons would allow physicists to explore one of nature's basic mysteries.

TRAPPED INDOORS

Indoor air has been found to harbor higher concentrations of pollutants than outdoor air.

EXTRA ELECTRONS

Beams of negative ions-atoms with added electrons-will play a positive role in fusion-energy experiments and accelerator applications.

IMAGING THE ECLIPSE

An LBL researcher leads high-school teachers on a once-in-a-lifetime tour into the realm of astronomy.

Plaudits & Patents

NEWS REVIEW

ON THE COVER: Scattering light so that they appear bluish against a dark background, aerogels look ethereal. But the material has some down-to-earth uses (see page 2).

INNOVATIVE INSULATOR

LIGHTWEIGHT AEROGELS CAN MAKE A SOLID CONTRIBUTION TO ENERGY CONSERVATION

AT FIRST SIGHT, silica aerogels cause most people to do a double take. They see a ghostly substance which looks like fog molded into a distinct form. Resembling a hologram, an aerogel appears to be a projection rather than a solid object.

Aerogels consist of more than 96 percent air. The remaining four percent is a wispy matrix of silica (silicon dioxide), a principal raw material for glass. Aerogels, consequently, are one of the lightest weight solids ever conceived.

Arlon Hunt was working in LBL's solar energy and energy conversion research program in 1981 when he first saw an aerogel, which had been brought to the Laboratory by a visiting Swedish professor. Hunt says he was immediately fascinated by the material.

''I was intrigued by how lightweight, transparent, and amazingly porous the stuff was," recalls Hunt. ''Porous materials scatter light and almost always are opaque or whitish. The near transparency of the material implied extremely fine pore structure. Later, I found out just how fine."

As Hunt learned of the unique thermal, optical, and acoustical properties of aerogels, he became further intrigued. Since 1982, Hunt and his co-workers in the Applied Science Division have explored fundamental questions about the properties of aerogels and have developed processes for creating thermally and optically enhanced versions. They have also evaluated aerogels for many applications and developed chemical production methods suitable for commercial manufacturing.

Aerogels can be fabricated in slabs, pellets, or almost any shape desired and have a range of potential uses. By mass or by volume, silica aerogels are the best solid insulator ever discovered, says Hunt. They transmit heat only one hundredth as well as normal density glass. Sandwiched between two layers of glass, aerogels could be used to produce double-pane windows with high thermal resistance. Aerogels alone, however, could not be used as windows because the foam-like material easily crumbles into powder.

In the past few years, Hunt has been working on another, very promising application: refrigerator insulation. Aerogels are a more efficient, lighter weight, and less bulky form of insulation than the polyurethane foam currently used in refrigerators, refrigerated vehicles, and containers. And foams have a critical disadvantage: they are blown into refrigerator walls by chlorofluorocarbon (CFC) propellants, the chemicals identified as the chief cause of the depletion of the Earth's stratospheric ozone layer.

This layer acts as a shield, protecting life on Earth from the damaging effects of ultraviolet light. Earlier this year, the Environmental Protection Agency reported a 4.5 to 5 percent depletion of the ozone layer over the United States during the last decade; it also projected a significant increase in future skin cancer cases as a result of the additional ultraviolet exposure.

In recognition of the severity of the problem, the U.S. and other countries agreed last year to phase out ozone-depleting chemicals more quickly than planned. Under the 1987 Montreal Protocol, CFC use was to be halved by 1998; under the new protocol, CFC use must be halved by 1995, cut 85 percent by 1997, and phased out entirely by 2000.

Since refrigerators account for an estimated two percent of the United States' annual CFC usage, an alternative insulator must be found. Aerogels could be the answer.

The new generation of aerogels that Hunt is creating is based on the groundwork laid down in the 1930s by Stanford University's Steven Kistler. Kistler worked with gels and in a 1932 paper published in Nature answered many of the basic questions about this odd form of matter. He showed that a gel is an open structure composed of a matrix of solid pore walls and a liquid fill. Subsequently, he invented a way to dry a gel without collapsing or shrinking it. Kistler called his new material an aerogel.

Aerogels-the name attests to the near paradox of creating a hybrid between a gel and thin air-are not known to exist in nature. Jellyfish are an example of a natural gel. Unlike aerogels, which do not lose volume as they dry out, a dead, desiccated jellyfish ultimately shrinks to 10 percent of its former size.

After Kistler's initial work, aerogels remained a forgotten phenomenon for three decades. They reemerged briefly in the scientific literature in the 1960s but were not fully resurrected as an object of significant scientific curiosity until the 1980s.

At that time, aerogels were cloudy rather than transparent. Before they could be used in double-pane windows or skylights, their clarity had to be improved. While they were good insulators, to become a cost-effective alternative to existing products, they had to be made even more thermally resistant.

In addition, there were several obstacles to fabrication. The existing processing technology was too expensive, and it was potentially explosive. Finally, processing required toxic compounds. Taken together, a formidable technological barrier prevented aerogels from making the leap from the laboratory to the consumer.

Over the past eight years, Hunt has confronted each of these obstacles. He has conducted fundamental studies that have resulted in advances toward resolving each of the major shortcomings. Along the way, he founded a private aerogel-manufacturing firm, based on the technology initially developed at LBL. Thermalux, L.P., is the only U.S. aerogel firm and has set up a development-stage pilot plant in Richmond, California. A Swedish company that produces aerogels for use in radiation counters is the only other commercial aerogel manufacturer in the world.

Whether they are the commercial aerogels Thermalux is fabricating for tests and assessment by the refrigeration industry or the experimental compounds Hunt is producing in his laboratory, all aerogels start out as a gel. A gel consists of chains of linked particles, or polymers, permeated by a liquid. To transform a gel into an aerogel, the liquid must be removed without collapsing the solid framework.

Kistler discovered the secret to drying a gel without collapsing it. He dried his gels at high temperatures and pressures, transforming the liquid to a supercritical state in which there is no longer a distinction between a liquid and a gas. Pressure was slowly released. In this way, the supercritical fluid could be vented out of the gel matrix without any surface tension effects.

Aerogels were exquisite structures, but when Hunt first began working with them, they were formulated with a standard starting compound known to damage the cornea of the eye. The toxic material, tetramethylorthosilicate (TMOS), had been introduced in the 1960s as a means of reducing the preparation time for aerogels from several weeks to a few hours. Hunt and his colleagues Rick Russo, Mike Rubin, Kevin Lofflus, Paul Berdahl, and Param Tewari began looking for safer preparations and processes.

Russo favored the alternate compound tetraethylorthosilicate (TEOS). However, the only aerogels ever made with TEOS were less transparent and more shrunken than those made with TMOS. The LBL group conducted a number of experiments with TEOS and focused on the base catalysis process. Ultimately, they tried ammonium fluoride, an acid catalyst. The result was a clearer aerogel and less shrinkage.

Sven Henning, one of the few other scientists in the world then doing aerogel work, was visiting Hunt's group in 1984 when word arrived that his small aerogel manufacturing facility in Sweden, the world's first, had exploded. Gases escaping the autoclave aerogel drying apparatus had ignited, blown the roof off the plant, demolished the building, and injured several employees.

This motivated Hunt to explore alternate drying processes.

The drying process in use at the time relied on alcohol. When the gel was ready for drying, it was loaded into a pressure vessel, alcohol was added, and heat was applied. At 280°C and 1800 pounds per square inch of pressure, the alcohol became a supercritical fluid. Then pressure was slowly released and the supercritical alcohol gradually vented from the vessel.

Aside from the potential for explosions, Hunt realized that this process was too costly for successful commercialization. The high pressures and temperatures required massive, expensive fabrication chambers, and the process was energy-consuming. Hunt looked for a substitute for alcohol. The surrogate substance had to become supercritical at a lower temperature and pressure, and it had to be nonflammable.

Liquid carbon dioxide proved to be ideal. Under pressure, it becomes liquid at near room temperature. And whereas alcohol can be bomb-like, carbon dioxide is fire quenching. Hunt developed a carbon-dioxide aerogel drying process that has been patented.
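As a rough illustration of why carbon dioxide makes the drying step so much gentler, the sketch below (in Python) compares operating conditions against each solvent's critical point. The critical-point figures are approximate literature values, not numbers from the article.

    # Rough comparison of the two drying routes described above. The critical-
    # point figures are approximate literature values, not numbers from the
    # article.
    SOLVENTS = {
        # name: (critical temperature, deg C; critical pressure, psi)
        "ethanol":        (241.0,  890.0),
        "carbon dioxide": (31.1,  1070.0),
    }

    def is_supercritical(solvent, temp_c, pressure_psi):
        """True if the given conditions exceed the solvent's critical point."""
        tc, pc = SOLVENTS[solvent]
        return temp_c > tc and pressure_psi > pc

    # The alcohol route quoted in the article runs at about 280 C and 1800 psi;
    # carbon dioxide passes its critical point just above room temperature.
    print(is_supercritical("ethanol", 280, 1800))        # True
    print(is_supercritical("carbon dioxide", 40, 1100))  # True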

Aerogel fabrication begins with the mixing of TEOS and water. To allow these two fluids to mix, alcohol is added. The water breaks apart the TEOS, attacking the silicon bonds, and creating an intermediate ester that condenses into pure silica particles. With the aid of a catalyst, ammonium fluoride, and a solution of ammonium hydroxide to control the pH, the silica particles grow and link, forming an alcogel. This clear gel is sufficiently strong so that when a bottle is half filled with it and turned upside down, the gel will not flow.

The gel is then inserted into a pressure vessel where liquid carbon dioxide flushes out and replaces the alcohol. Pressure is increased, the carbon dioxide becomes supercritical, and as it is slowly vented, the alcogel dries into an aerogel.

Hunt continues to fine-tune the drying process. ''In principle," he says, ''the carbon dioxide process is straightforward, but you have to practice. At 600 to 800 pounds per square inch, there are a whole world of things going on inside that pressure vessel. It's like driving a sports car on a mountain road. You have to slow down, speed up, and make adjustments."

Through these multiple refinements, Hunt and his co-workers created a safer, more energy-efficient process that requires less massive and costly equipment. They turned next to the problem of clarity.

Aerogels were transparent, but they were not transparent enough to be used in double-paned windows. They scatter light through a natural process first described by Lord Rayleigh in the late 19th century. This phenomenon-Rayleigh scattering-is why the sky looks blue against the dark background of outer space and why the same sky looks yellow when viewed in the direction of a setting sun. Hunt's aerogels scatter light in a very similar manner. Placed against a dark background, they appear bluish; against a light background, they are yellowish.

Hunt decided to adjust his recipe, altering the quantities of the five compounds that go into the gel and the temperature in an effort to increase clarity. Some 500 formulations were tested, and additional variations were evaluated using an experimental technique called factorial design analysis that helps pinpoint the roles of different ingredients.
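For readers curious about what factorial design analysis does, here is a toy sketch: vary several recipe factors between low and high settings, evaluate a response for every combination, and estimate each factor's main effect. The factor names and the made-up "clarity" response below are purely illustrative, not Hunt's actual variables.

    import itertools

    # Toy factorial-design analysis: run every combination of coded low/high
    # settings and estimate each factor's main effect on the response.
    factors = ["water ratio", "catalyst", "temperature"]
    levels = [-1, +1]                     # coded low/high settings

    def fake_clarity(settings):
        # stand-in for a measured response; real runs would be lab measurements
        w, c, t = settings
        return 50 + 8 * c - 3 * w + 1.5 * w * t

    runs = list(itertools.product(levels, repeat=len(factors)))
    responses = [fake_clarity(r) for r in runs]

    for i, name in enumerate(factors):
        high = [y for r, y in zip(runs, responses) if r[i] == +1]
        low = [y for r, y in zip(runs, responses) if r[i] == -1]
        effect = sum(high) / len(high) - sum(low) / len(low)
        print(f"main effect of {name}: {effect:+.1f}")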

Additionally, Hunt drew on his doctoral thesis work, employing a mothballed device he had devised to measure light scattering: a scanning polarization modulated nephelometer. The nephelometer measured several of the 16 separate elements of the light scattering matrix of various experimental formulations of aerogels, allowing Hunt to isolate and identify the structures responsible for scattering.

''My measurements revealed that the largest of the pores was responsible for the scattering and the haziness in aerogels," says Hunt. ''The cross-linked silica particles are extremely fine, 20 to 40 angstroms in diameter. That is smaller than the wavelengths of visible light and too small to cause scattering, which is good news. The average pore size was 200 angstroms, but the largest pores in our TEOS gels were 3000 angstroms."
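A minimal sketch of that scattering argument, using the sizes Hunt quotes; the 5500-angstrom wavelength is simply an assumed mid-visible value, not a figure from the article.

    import math

    # Size parameter x = 2*pi*r / wavelength: features with x much less than 1
    # sit in the Rayleigh regime and scatter weakly; features with x near 1
    # scatter strongly.
    wavelength = 5500.0   # angstroms, representative visible light

    features = {          # radii in angstroms (half the quoted diameters)
        "silica particle (30 A across)": 15.0,
        "average pore (200 A across)":   100.0,
        "largest pore (3000 A across)":  1500.0,
    }

    for name, radius in features.items():
        x = 2 * math.pi * radius / wavelength
        print(f"{name}: x = {x:.2f}")
    # The particles give x of about 0.02, far too small to scatter visibly;
    # the largest pores give x of about 1.7, which is what produces the haze.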

By filtering out impurities in the starting solution, improving the overall cleanliness of operations, and providing more uniform gelling conditions, the researchers have been able to eliminate pores larger than 500 angstroms. This has considerably improved the clarity of the aerogels, making them suitable for use in skylights or atrium coverings. But further research and development work is necessary before aerogels can be made totally transparent. Until then, the promise of aerogel-insulated double-paned windows will remain just out of sight.

Aerogels could make their debut as insulation in refrigerators sooner.

Refrigerators and freezers account for about 20 percent of residential electricity consumption in the U.S. Because of a vast potential for energy savings through the use of available, cost-effective technology, Congress passed the National Appliance Energy Conservation Act in 1987. Implementing the act, the Department of Energy (DOE) has announced rules requiring improved energy efficiencies in appliances, with new standards for refrigerators taking effect January 1, 1993. Only a handful of the 2000 models now on the American market meet the 1993 standards. DOE has pledged it will impose even more stringent standards in the future as soon as new and affordable technology makes this practical. (The 1993 DOE standards are based on technical and economic analyses performed by Isaac Turiel and Jim McMahon in LBL's Applied Science Division.)

Every year in the U.S., 300 million square feet of insulation are used in new refrigerators. Currently, the insulation is polyurethane foam; since it is expanded into refrigerator walls by CFCs, an alternative insulator must be found.

Three insulating materials, all silica-based, are the leading candidates to replace foams. The competition pits aerogels against silica powder and glass beads that are sealed inside steel sheets. All three insulating systems would be sealed in a partial vacuum to increase their thermal resistance.

In a partial vacuum, aerogels outperform silica powder and glass beads. Inch-thick aerogels have the same R-value (a measure of thermal resistance) as inch-thick foams, about R-7. But when 90 percent of the air is evacuated from a plastic-sealed aerogel packet, that R-value nearly triples. To match the R-value of aerogels at this vacuum of one-tenth of an atmosphere, silica powder has to be evacuated to a few thousandths of an atmosphere. Glass beads require one-billionth of an atmosphere.

Achieving a vacuum of one-tenth of an atmosphere and sustaining it for the lifetime of a refrigerator is not a problem. Existing plastic vacuum-packing techniques can do the job. Maintaining a vacuum of one-thousandth of an atmosphere or better is a major technological challenge.

While Hunt doesn't have to worry about vacuum-sealant technology, he does want to reduce the cost of aerogels. About $20 of foam goes into a 1991 model refrigerator using 40 square feet of polyurethane insulation. Insulating the same refrigerator with aerogels would cost more than $80. The aerogels, however, would have double the R-value of foam, and in two years the energy savings would offset the $60 in additional costs.
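The payback arithmetic implied by those figures is simple enough to spell out; the annual saving below is inferred from the stated two-year payback rather than quoted directly in the article.

    # Simple-payback arithmetic from the figures in the paragraph above.
    foam_cost = 20.0       # dollars of polyurethane foam per refrigerator
    aerogel_cost = 80.0    # dollars of aerogel for the same refrigerator
    payback_years = 2.0    # stated payback period

    extra_cost = aerogel_cost - foam_cost
    implied_annual_saving = extra_cost / payback_years
    print(extra_cost, implied_annual_saving)   # $60 premium, about $30/year saved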

Hunt has conducted fundamental studies on how heat is transmitted through aerogels, in an effort to further improve the material. The less aerogel necessary for a given application, the lower the cost.

Research shows that the little remaining heat which is conducted through an aerogel under vacuum is attributable to solid conduction through the silica lattice and to radiant heat transfer. The conductive and radiative components each account for about half of the heat that passes. Focusing on eliminating the radiative transfer, Hunt conducted analyses which showed that aerogels block the passage of most wavelengths but are transparent to infrared radiation between three and eight microns in wavelength.
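An idealized way to see what that split implies: if conduction and radiation each carry half the remaining heat, then blocking radiation entirely can at most double the R-value. The sketch below is an upper-bound estimate, not a model of the actual material.

    # Idealized arithmetic for the heat-flow split described above.
    conductive_share = 0.5
    radiative_share = 0.5
    fraction_blocked = 1.0   # ideal case: all radiative transfer suppressed

    remaining_conductance = conductive_share + radiative_share * (1 - fraction_blocked)
    max_r_value_gain = 1.0 / remaining_conductance
    print(max_r_value_gain)  # 2.0: an upper bound on the improvement from doping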

Hunt began a search for an additive that would absorb infrared radiation in this region, be available in small particle sizes, not interfere with the gelation or drying process, disperse uniformly without clumping, and be nontoxic and inexpensive.

Hunt tried carbon black. It worked. Doped (mixed) with carbon, aerogels turn black and become better insulators. Inch-thick, carbon-doped aerogels have been tested and rated at R-25 per inch.

Hunt remains entranced by aerogels. They were the best solid insulator known when he first saw them, and he has made them even more impervious to heat. Currently, he is contriving to fabricate aerogels using still less raw material so that they are cheaper and lighter, just a wisp of solid within a filigree of air. Beautiful as they are, Hunt intends to create ever more elegant aerogels, materials that consumers and manufacturers will find irresistible.

-JEFFERY KAHN


B FACTORY

A proposed collider to produce copious quantities of B mesons would allow physicists to explore one of nature's basic mysteries.

A NEW KIND OF ACCELERATOR specifically designed to probe a fundamental mystery of space and time is being proposed by a collaboration that includes LBL, Stanford Linear Accelerator Center (SLAC), and the Lawrence Livermore National Laboratory (LLNL). The new machine would be built at SLAC.

The proposed machine, formally known as the PEP-II asymmetric collider, is also being called a "B factory" because it would make copious quantities of particles known as B mesons-particles that contain the heaviest known fundamental building block, or quark. A formal proposal describing the project was submitted to the Department of Energy in February 1991, requesting funding to begin construction in fiscal year 1993 for a physics start-up date in 1997.

The asymmetric collider idea was first conceived more than three years ago by LBL Physics Division leader (now Deputy Director) Pier Oddone as a novel means of exploring how B mesons-discovered in 1977 in experiments at the Fermi National Accelerator Laboratory (Fermilab)-decay into other types of matter and energy.

One such decay, that of the neutral B meson, offers a powerful way to study the phenomenon known as CP (charge-conjugation/parity) symmetry violation (see ''The Curious Physics of CP Violation," page 18). The use of B particles to study CP violation had been proposed almost 10 years ago by physicists Tony Sanda and Ashton Carter, then at Rockefeller University. However, prior to Oddone's suggestion of an asymmetric collider design, there seemed no practical way to implement the idea.

Colliders-along with cyclotrons, synchrotrons, storage rings, and linear accelerators-are machines designed to produce highly energetic particles of matter, comparable to those existing in the early universe shortly after the Big Bang. In colliders, the creation of these particles is accomplished through head-on collisions between beams of particles moving in opposite directions.

In all colliders up until the present (including the SPEAR and PEP colliders at SLAC and the planned Superconducting Super Collider), the energies of the two colliding beams have been equal.

In the proposed B factory, however, the two beams (composed of ''bunches" of electrons and their antiparticles, positrons) will be of unequal energies (hence the term ''asymmetric"). The newly created B mesons, which would be nearly stationary if produced in a symmetric collider, will be carried along down the beam line in the direction of the more energetic beam.

It was this particular set of conditions-high-energy B mesons moving forward in the laboratory reference frame-that Oddone realized could present a unique opportunity for interesting physics. Because the decay products of the moving B mesons and their antiparticles can be separated in space and time, they may exhibit easily detectable indications of CP violation as well as other phenomena.

Oddone collaborated with a design team in the Accelerator and Fusion Research Division, led by AFRD exploratory studies group leader Swapan Chattopadhyay, in exploring the feasibility of the idea. One of the important questions-whether an optics design to focus and then separate beams of two different energies was practical-was convincingly answered in the affirmative by AFRD's Al Garren, an internationally recognized expert in the design of accelerator optics.

Oddone's suggestion, augmented by the efforts of the AFRD design team and collaborators from SLAC, Caltech, and LLNL, has now evolved into a major high-energy physics initiative of international scope.

The proposal calls for the construction of the new accelerator at SLAC, utilizing the ring of the current PEP accelerator (hence the name ''PEP-II"), in about three years and at a cost of under $200 million in current dollars. This is considered a fairly modest price for a forefront high-energy accelerator these days. Design and construction would be a joint project of LBL, SLAC, and LLNL. The experimental physics program would be in the hands of an international collaboration.

The design of the proposed accelerator specifies two rings of equal circumference, housed one above the other in the existing PEP tunnel. One (the existing PEP ring) will carry electrons of 9 billion electron volts (GeV); the other (a new ring) will carry positrons of 3.1 billion electron volts. The two beams would collide at one, or possibly two, points of intersection.
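A short calculation shows why these particular unequal energies were chosen; the formula below assumes head-on, highly relativistic beams.

    import math

    # For head-on, highly relativistic beams of unequal energy, the
    # center-of-mass energy is approximately 2*sqrt(E1*E2). With the ring
    # energies given above, the collisions come out at 10.56 billion electron
    # volts, the energy quoted later in this article for producing the
    # upsilon(4S).
    e_electron = 9.0   # GeV, existing PEP ring
    e_positron = 3.1   # GeV, new low-energy ring

    sqrt_s = 2 * math.sqrt(e_electron * e_positron)
    print(f"center-of-mass energy = {sqrt_s:.2f} GeV")   # about 10.56 GeV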

The machine is designed to produce the particle known as the upsilon 4S, which decays into a B meson and its antiparticle, the anti-B (also called the ''B-bar"). The B and the B-bar in turn decay into many charged and neutral particles. Because any one of these decays is rare, however, the collision rate of the accelerator must be extremely high. Nearly a hundred million B/B-bar pairs must be collected.

To describe an accelerator's capabilities, physicists use the term ''luminosity," which is related to both the collision rate and the apparent size (or ''cross-section") of the colliding particles. The design goal for the B factory is a luminosity of 3 x 10^33 (3 followed by 33 zeros) collisions per square centimeter per second-about 15 times higher than the best ever achieved anywhere in the world.
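To get a feel for what that luminosity buys, here is a rough running-time estimate. The roughly one-nanobarn production cross-section for the upsilon(4S) is an approximate outside value, not a figure from the article.

    # Rough running-time estimate implied by the luminosity goal.
    luminosity = 3e33        # cm^-2 s^-1, the design goal
    cross_section = 1e-33    # cm^2 (about 1 nanobarn, an assumed value)
    target_pairs = 1e8       # "nearly a hundred million B/B-bar pairs"

    events_per_second = luminosity * cross_section     # roughly 3 per second
    seconds_needed = target_pairs / events_per_second
    print(f"about {seconds_needed / 3.15e7:.1f} years of continuous running")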

Despite the difficulty, LBL scientists are enthusiastic about the new project-both because of its intrinsic physics interest and because of the challenge it presents to accelerator designers. Mike Zisman, leader of the LBL part of the machine-design effort, believes that the B factory represents one of the most interesting accelerator-design ventures in the world.

Zisman explains, ''The B factory will extend the range of accelerator technology into new domains. In the past, all of the challenges in accelerator design (for example, the Superconducting Super Collider under development in Texas) have been along the 'energy frontier'-reaching for higher and higher energies. But the B factory will be a first bold step over the 'luminosity frontier,' a frontier that must be crossed as we look for rarer and rarer phenomena. Reaching higher energies wouldn't help in this case-in fact, the optimum center-of-mass energy needed to make B particles, about 10 billion electron volts, is not particularly high. But we have to make an awful lot of them!"

To solve the myriad research and development problems presented by the B-factory project, the three collaborating institutions-LBL, SLAC, and LLNL (plus an additional large group working on the detector design and physics program) have organized themselves into teams that may be unique in accelerator physics.

''We like to say we're operating as a 'true collaboration,"' says Zisman. ''In most big accelerator and detector projects, the work is split up, and jobs are farmed out among the participating groups-one lab designs the end-cap detectors, another is responsible for a vertex chamber, a third builds the vacuum system, and so on.

''We're trying something different. We've grouped ourselves into trans-institutional teams focused on specific tasks, with members from all the labs. For example, the team working on the rf (radio frequency cavity) design is headed by Glen Lambertson of our Laboratory, but its 15 members include six people from LBL, eight from SLAC, and one from Livermore."

There is an exceptional level of coordination between teams designing the accelerator and those planning the detector and the physics programs, coordinated by a ''machine/detector interface group" headed by Hobey DeStaebler of SLAC (see "Problems by the Bunch," page 14).

For LBL and its collaborators, however, the challenges presented by the B-factory proposal go far beyond the technical ones. A recent article in Science reflects the fact that B-factory proposals are springing up all over the world. Within the U.S., competition for the SLAC/LBL/LLNL proposal comes from scientists at Cornell, who have submitted a B-factory proposal to the National Science Foundation.

Just as the SLAC/LBL/LLNL proposal uses an existing machine, PEP, as a starting point, the Cornell plan incorporates the Cornell Electron Storage Ring (CESR)-currently the highest-luminosity collider in the world. Unlike the LBL design, which uses existing technologies wherever possible, the Cornell design involves two new technologies. One is a superconducting rf system (never before tried with the very high beam currents required for a B factory); the other is a novel kind of beam intersection technique known as ''crab crossing," in which bunches actually collide at an angle but are rotated in such a way that they behave as though the collisions were head-on.

It is unlikely that both B-factory proposals will be funded, and because of the involvement of two independent funding agencies, the Department of Energy and the National Science Foundation, the decision-making process will be more complex than usual.

Other competition comes from overseas. The Japanese are planning a B factory in the tunnel of the existing TRISTAN accelerator at KEK, the high-energy physics laboratory at Tsukuba. CERN, the European center for particle physics in Geneva, Switzerland, has published a feasibility study of a B factory in an existing tunnel that once housed the Intersecting Storage Rings (ISR), but no immediate proposal is in the works. The Soviet Union has approved a B factory to be built in Novosibirsk, but it is unknown when the actual funding will become available.

The SLAC/LBL/LLNL B-factory project is headed by Jonathan Dorfan of SLAC. The design team is led by Zisman of LBL and Andrew Hutton of SLAC. In addition to members from LBL, SLAC, and LLNL, the machine-design group includes collaborators from Caltech, UC San Diego, Fermilab, and the University of Massachusetts. Physics workshops to explore a research program for the new machine and design a detector are under the leadership of Dave Hitlin of Caltech and Jonathan Dorfan of SLAC.

At a recent briefing for the Department of Energy's annual review of LBL's high-energy-physics program, Pier Oddone looked ahead to the next decade and found some of the brightest prospects to be in the area of B-meson physics.

Said Oddone, ''We see LBL involved in exciting B physics on three time scales-short term, medium term, and long term." In the short term, the Laboratory's collaborations at Fermilab-both the Collider Detector Facility (CDF) at the Tevatron and the 'E789' fixed-target experiment-promise exciting developments: 'strange' B mesons (a combination of a B quark and a strange quark), B baryons (a combination of three quarks, at least one of which is a B), and the measurement of the lifetimes of many different B particles. For the medium term, two major Fermilab detectors-the CDF and the D-zero-have a shot at finding ''fringe effects" of CP violation.

But for the long-term goal, Oddone emphasized, there is no substitute for the asymmetric B factory and a comprehensive study of CP violation.

Said Oddone: ''The breadth of the attack that the asymmetric B factory will make on the CP-violation puzzle is unequalled. B-factory experiments will measure a wide array of B decay channels, including those involving neutral particles. The Standard Model will be put in a straitjacket because the pattern of CP violation for these channels is well prescribed in that model. Any deviations, and we have new physics!"

-JUDITH GOLDHABER


PROBLEMS BY THE BUNCH

''In the B-factory project, communication between the accelerator designers and the detector-design teams must be very close," says Hobey DeStaebler of SLAC, chair of the liaison group that meets weekly to keep on top of problems in this area as they arise. ''With such high luminosity in the machine, background radiation is a very important issue.

''High background levels can damage the detector and can obscure the events of interest. Bunches of particles will be passing through the interaction point every 4.2 nanoseconds, and the memory of the detector elements is considerably longer than that. To keep the detector from being overwhelmed by the confusion of signals, it will be essential to eliminate background as much as possible."

There are two sources of this background noise. One is synchrotron radiation-low-energy photons that are produced whenever the beam goes through a magnet.

''Though the energy of these photons is low, there are a lot of them," says DeStaebler. ''The problem requires very detailed calculations to figure out how many photons you're creating with every magnet placement, where they go, how they scatter, and what their consequences on the detector will be. Then you can try to modify all these factors to reduce the effect."

The other principal source of background noise is residual gas in the beam pipe. Though pressure in the beam line is reduced to a near vacuum, some gas always remains. Most of it is hydrogen, which has little effect on the beam because it is so light, but there is usually some residual carbon dioxide and carbon monoxide, and the beam particles can interact with these, eventually causing showers of secondary particles. "Ultimately," says DeStaebler, "it comes down to an economic decision: at what point is it just too expensive to improve the vacuum any further?"

In dealing with both kinds of background, DeStaebler says, the first step is to quantify it by means of computer simulations and comparison with experience at other machines; the second is to find ways to reduce it. ''One approach," he says, ''is to design shielding that intercepts the background radiation and keeps it from reaching the detectors. We've done some of that-for example, we'll be placing masks inside the beam pipe at certain key points."

An alternative approach is to modify the design so as to reduce the sources of background. An example is the shape of the beam at the collision point. Originally, the machine design group explored the possibility of using a round beam, which was thought to be the best shape for achieving high luminosity. But when computer simulations revealed that a round beam would cause more background problems than a flat one, the machine designers were willing to change their approach.

Another example involves the position and strength of the quadrupole magnets, which give rise to synchrotron radiation.

''The placement of each of these magnets became a subject for negotiation between machine-design teams and detector teams," says DeStaebler.

Despite his total immersion in problems of background reduction, DeStaebler does not consider this area the number one challenge for the B-factory design team. That dubious distinction, he believes, is reserved for the rf (radio-frequency) feedback system, a complicated system of electronic controls that keeps the beam motion stable.

''The complications caused by the very high beam currents in the machine are really extraordinary," says DeStaebler. ''I don't think I'd care to change places with Glen Lambertson and his team."

Lambertson himself tends to agree. ''The current of circulating particles (electrons and positrons) in the B factory is about 10 times higher than in any previous accelerator or collider. There are nearly 1700 bunches in each of the two beams, and they pass by a given point every 4.2 nanoseconds.

''To restore the energy lost by each beam in one trip around the collider ring and to keep each of these bunches from spreading in its direction of travel, you need a strong rf system. We use a series of 20 rf cavities (also called resonating chambers) strung like beads on each ring.

''But such a strong rf system has unwanted effects. One bunch of particles may oscillate slightly as it passes through a cavity, and this excites the cavity at high frequency, which then affects the next bunch that passes through, and pretty soon all the bunches in the storage ring are oscillating-both longitudinally (that is, along the beam line) and also transversely. This is known as coupled bunch instability, and it can destroy a beam in much less time than it takes to tell about it."
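As a quick consistency check on those numbers, the bunch count and spacing fix the revolution time and hence the ring circumference; the result comes out close to the size of the existing PEP tunnel (roughly 2.2 kilometers, a figure not quoted in the article).

    # Consistency sketch for the numbers Lambertson quotes.
    bunches = 1658
    bunch_spacing = 4.2e-9      # seconds between bunch passages
    speed_of_light = 3.0e8      # m/s, essentially the beam speed

    revolution_time = bunches * bunch_spacing        # about 7.0 microseconds
    circumference = revolution_time * speed_of_light
    print(f"{revolution_time * 1e6:.1f} microseconds, {circumference / 1e3:.1f} km")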

To cope with coupled bunch instability, Lambertson and his co-workers have adopted a two-pronged approach. On the one hand, they have modified the shape of the cavities-adding spoke-like extensions through which the higher frequencies can escape.

"We've been experimenting with different shapes," says Lambertson, "doing computer calculations and check ing them against a model cavity here at LBL. We've come up with a cavity shape that should be 500 times less disturbing to the bunches, and are now ready to specify it for the design of the accelerator."

Lambertson's group is also exploring the ultimate cure for bunch instability problems: a technique known as bunch-by-bunch active feedback. This technique involves detecting the motion of each bunch, measuring its excursions away from normal, and correcting it with a ''kicker" rf pulse on the next orbit. Active feedback must go on continuously for the entire storage time, or the beam will be lost.
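A deliberately simplified caricature of that feedback loop is sketched below; the growth rate, gain, and bunch count are invented numbers chosen only to show how a per-turn correction can outrun a per-turn instability.

    import random

    # Caricature of bunch-by-bunch feedback: every turn, each bunch's excursion
    # from its nominal position grows a little from the instability and is then
    # reduced by a corrective kick proportional to the measured excursion.
    n_bunches = 8
    growth = 1.02        # per-turn amplitude growth from the instability (made up)
    gain = 0.05          # fraction of the measured excursion removed per turn (made up)

    excursions = [random.uniform(-1.0, 1.0) for _ in range(n_bunches)]
    for turn in range(2000):
        for i in range(n_bunches):
            excursions[i] *= growth                 # instability drives the motion
            excursions[i] -= gain * excursions[i]   # corrective kick from the feedback

    # Because the per-turn correction outweighs the per-turn growth, the
    # excursions shrink instead of blowing up.
    print(max(abs(x) for x in excursions))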

''Active feedback has been done in a much more rudimentary fashion on some previous machines (for example, after injection in Fermilab's Tevatron) but never to such a challenging extent," says Lambertson. ''However, we are making progress. John Fox and Don Briggs, two of our team members at SLAC, have developed a promising design for a detector that can track the motion of each of the 1658 bunches separately, and here at LBL we've been working on the design of the 'kicker.'"

"What's needed next is a broad-band system to carry the signals from the detector to the kicker, keeping each bunch separate. It can be very expensive to develop this, so a lot of our effort is going into ways to reduce costs.

''An interesting point is that these problems are all similar to the ones our group encountered in designing LBL's Advanced Light Source. The ALS is also a very high current machine-though it will carry only one-fifth the current of the B factory. But the expertise we developed working on the ALS is coming in very handy on this job."

Lambertson notes that international interest in suppressing coupled-bunch instabilities is growing and extends far beyond the B-factory project. At an accelerator design conference held in San Francisco this spring, he was somewhat surprised to find at least 12 papers on 'bunch control' being presented, from all over the world. ''At a similar conference last year," he says, ''there were only three papers on the subject, including ours."

-JG


EDM: ANOTHER TEST OF SYMMETRY VIOLATION

Electricity and Magnetism-the two great interrelated forces that play such a fundamental role in the universe and our everyday world-have several points of similarity and a few striking points of dissimilarity. Magnetic charge is always pictured as a "dipole": two separate units of force, "north" and "south," that are separated in space and always occur in pairs. Electric charge, on the other hand, is ordinarily regarded as point-like.

However, the Standard Model (that set of rules and observations that is the underpinning of our current understanding of force and matter) predicts that the interaction of the electron's charge with surrounding force fields should make a portion of that charge behave as if it were split. In analogy with magnetic phenomena, this apparent division is expressed as a "dipole moment."

The Standard Model predicts an extremely small electric dipole moment (EDM) for the electron, but any EDM at all must violate CP symmetry. An electron EDM, if discovered, would constitute a second clear-cut case of CP violation (see "The Curious Physics of CP Violation," page 18).

A team of researchers from LBL and UC Berkeley recently reported an important new limit for this oddity of particle physics. Because their experiment was many times more sensitive than earlier ones, the researchers were able to set a new experimental limit for the electron EDM seven times lower than the previous one.

The team included physicists Eugene Commins and Harvey Gould, graduate student Stephen Ross, postdoctoral fellow Conny Carlberg, and former graduate student Kamal Abdullah. Gould and Commins are researchers in LBL's Chemical Sciences Division, and Commins is also a professor in the UC Berkeley Physics department. They described their experiment in Physical Review Letters in November 1990, and a review article in Science (April 5, 1991) noted "several innovative features" of their approach.

Attempts to find and measure EDMs date back to 1950, when Harvard physicist Norman Ramsey, with Edward Purcell, first pointed out the possible significance of the effect and devised a way to search for it in the neutron. Ramsey has continued the search for over 40 years, and other teams have followed, with the limit for both neutrons and electrons being pushed steadily downwards.

The basic idea behind these experiments remains closely modeled on the technique outlined by Ramsey. If a particle in an electric field shows an asymmetry in its charge distribution (an EDM), the particle's energy will be larger when its spin is aligned opposite to the external field. To measure that excess energy is not too difficult for a neutral particle, but it is not feasible in the case of a charged particle like the electron. When the field is switched on, such a charged particle simply moves away.

A way around this problem was found in the idea that an electron's EDM may be passed along to a neutral atom in which it resides. The EDM measurement can then be carried out on the neutral particle (the atom) rather than the charged one (the electron). Moreover, as was shown by Patrick Sanders of Oxford University in 1965, the "induced" EDM of the atom may be much larger than the original EDM of the electron, an enhancement that makes it easier to detect.

LBL's Harvey Gould did his first EDM measurement more than 20 years ago as his thesis experiment at Brandeis University. Realizing that the experiment could be done with much more precision, Gould approached Commins - a leading expert in atomic beam experiments and related studies of parity violation - with his proposal. Together, they planned an experiment utilizing a beam of thallium atoms (chosen because in thallium the induced EDM is 600 times greater than the original EDM) and the classic "atomic beam magnetic resonance" technique.

As noted earlier, the basic idea of the experiment is that if a particle in an electric field has an EDM (an asymmetry in its charge distribution), the particle's energy will be larger when its spin is aligned opposite to the external field.

The experiment involved several steps. First, a beam of thallium atoms was configured so that the spins of the atoms were lined up with an external magnetic field. The experimenters measured the frequency of the electron transitions in these atoms. The thallium atoms were then rotated 180 degrees with respect to the magnetic field and their electron transitions measured again. While in the second ("antialigned") state, the atoms were allowed to interact with an external electric field.

Finally, the atoms were re-rotated to a new alignment with the magnetic field, and the system was analyzed to see how the frequency of electron transitions was affected by the electric field. These steps were then repeated with the electric field reversed. If there were an EDM, then the transition frequency would be different for the two directions of the electric field. No such effect was observed.

The sensitivity of the experiment was such that if the electron had an EDM larger than that of one electron-charge unit separated by a distance of 10 to the -26 centimeters, it would have been observed. This is about one hundred thousand trillion (10 to the 17) times smaller than the most familiar dipole moment of chemistry - that of a molecule containing two atoms.
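To put that limit in experimental terms, the sketch below converts it into the frequency shift the apparatus must resolve. The 600-fold thallium enhancement comes from the article; the 100 kV/cm applied field is an assumed representative value, not a number from the text.

    # Order-of-magnitude sketch of the frequency shift such a limit implies.
    e_charge = 1.602e-19    # coulombs
    planck_h = 6.626e-34    # joule-seconds

    d_electron = 1e-26 * 1e-2 * e_charge    # 10^-26 e-cm, expressed in coulomb-meters
    d_thallium = 600 * d_electron           # induced atomic EDM (enhancement from article)
    e_field = 1.0e7                         # V/m (assumed 100 kV/cm applied field)

    energy_difference = 2 * d_thallium * e_field   # field versus reversed field
    frequency_shift = energy_difference / planck_h
    print(f"frequency shift ~ {frequency_shift:.1e} Hz")   # a few times 1e-4 Hz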

Because the expected effect is so small, the most difficult part of the experiment is to eliminate all sources of systematic error - especially stray magnetic fields. Among the ingenious stratagems the research team devised to cancel out systematic effects was to use two beams of thallium (running in opposite directions) rather than one and to eliminate the effects of gravity on the trajectory of the atoms.

To help scientists decide among several proposed extensions of the Standard Model, each of which predicts a different value for the EDM, the Berkeley team hopes to extend their experimental limit still further. For this they have planned a follow-up experiment, 10 times more sensitive than the first.

-JG


B FACTORY

THE CURIOUS PHYSICS OF CP SYMMETRY VIOLATION

Nature loves symmetry-in snowflakes, in flowers, in the motion of heavenly bodies, in the behavior of subatomic particles. But now and then, it seems, nature makes an exception.

One of the liveliest areas of investigation in particle physics today involves a case in which nature seems to violate what was once thought to be a sacrosanct kind of balance known as ''CP symmetry."

The violation of CP symmetry consists of the observed fact that, on rare occasions, the rate at which a physical event takes place is altered by switching the event to its mirror image and matter to antimatter.

By extension, CP violation implies that nature distinguishes between the forward and backward directions of time: as you play the videotape backwards, you get to a different beginning!

Imagine a physical reaction involving two bits of matter. Particle A bumps into particle B somewhere in outer space. The two particles collide, but do not break up; instead they ricochet like two ping-pong balls-a process known as elastic scattering. Take a snapshot of this event.

Now imagine a mirror placed at the point where the two particles collide, so that the event is inverted in space. Particle A now enters from the left instead of the right; particle B flies off in an upwards direction rather than downwards, and so on. At the same time, switch particle with antiparticle, so that both are composed of antimatter rather than matter. Take a snapshot of this second event.

To an observer, the two events should be indistinguishable-that is, they should proceed at the same rate.

That's the rule. Then there's the exception. And thereon hangs the curious story of CP symmetry violation-a story that may hold an important key to why the universe is the way it is. The seemingly arcane phenomenon of CP violation is believed to be the reason there is so much matter in the universe (protons, neutrons, and electrons) as compared to antimatter (antiprotons, antineutrons, and positrons).

Vast amounts of both matter and antimatter should have been produced in the Big Bang-so what happened to all the antimatter? Why are antiparticles so rare today that we encounter them only in particle accelerators and similar man-made environments? In 1967, the Soviet physicist Andrei Sakharov proposed an ingenious solution to the problem-one that is still widely believed to be the right answer.

According to Sakharov's theory, the two kinds of particles were not produced in precisely the same numbers in the Big Bang; because of CP violation in certain processes, there was a surplus of matter over antimatter. Matter and antimatter particles then proceeded to annihilate each other, leaving only the ''tiny" remnant of matter that constitutes the universe as we know it today.

Our understanding of CP symmetry violation dates back to 1957. The first part of the puzzle emerged in a series of experiments by C. S. Wu and co-workers, following a theoretical suggestion by T.D. Lee and C. N. Yang (for which they were awarded the Nobel Prize). In these experiments, it became clear that in certain types of reactions known as the weak interactions, ''left-handed" processes are more likely to occur than ''right-handed" ones (or vice versa), and reactions involving matter proceed differently from those involving antimatter. This implied basic asymmetries in nature that became known as parity (P) and charge conjugation (C) violation.

Though puzzled, physicists felt that they could make things right again by proposing a new combined symmetry law. The idea was that if you switch image with mirror image but at the same time switch particle to antiparticle, everything comes out even again, and symmetry is preserved.

This explanation was comforting, but didn't last long. In 1964, in an experiment that won the Nobel Prize for James Cronin and Val Fitch, it was shown that CP symmetry also breaks down in certain reactions. Now symmetry could be restored to the universe only by combining CP with another variable, time, to make a new ''higher" symmetry known as CPT invariance. If image is changed to mirror image, particle to antiparticle, and the direction of time is reversed, symmetry is once again rescued.

But note the odd consequence of this reformulation. In cases where CP symmetry is violated but CPT symmetry is preserved, time reversal (T) must itself be violated. Thus the anti-common-sense but inescapable conclusion that going backwards in time can bring you to a different beginning.

Today, CP symmetry violation remains a mystery and a possible exception to the neat classification scheme for particles and forces known as the Standard Model.

When he talks to visiting committees, colloquia, and lay audiences about these matters (which he does fairly often these days), LBL physicist Pier Oddone sometimes shows a cartoon in which an ostrich with its head in the sand is likened to the Standard Model's view of CP violation.

Oddone doesn't mean to imply that the Standard Model can't accommodate CP violation (it can and does) or that his fellow physicists have ignored it (they haven't). The point is a subtler one-one that LBL physicist Natalie Roe expresses as follows: ''CP violation is one of the few numbers still to be measured that has the potential for upsetting the Standard Model. In fact, it has the potential to blow the Standard Model to smithereens, and point us in a whole new direction."

Small wonder that there is a growing consensus among physicists and astrophysicists that resolving the unsolved questions related to CP violation is one of the most important pieces of unfinished business for their field in this century.

CP violation has been observed many times since its discovery almost 30 years ago, but so far always in the reaction where it was first observed-the decay of the neutral K meson. This has made physicists wonder whether it is actually a general (though hard to observe) physical phenomenon that must somehow be reconciled with the Standard Model, or something completely different-perhaps the effect of an unknown ''superweak" force.

Clearly, then, the question of CP violation is important, and the key to understanding would seem to lie in finding it operating in another reaction. That is the reason for all the interest in accelerators like the B factory-and also in experiments like the measurement of the electric dipole moment (EDM) of the electron (see sidebar).

The B meson is one of the particles most analogous to the K meson in the sub-nuclear zoo. Both are built out of a massive, unusual quark (''strange" for the K, ''bottom" for the B) paired with an antiquark from the family that makes up ordinary matter (''up" or ''down"). Also, in both K and B mesons, particles and antiparticles share certain decay modes.

Thus it seems likely that if CP violation is going to be seen anywhere else in nature, it may be in the decay of the neutral B meson. Indeed, because the bottom quark has a relatively long lifetime, the CP violation probability for B mesons is expected to be 100 times greater than it is for K mesons.

The problem becomes, then, can one make enough neutral B mesons to do an experiment? That's where ingenious accelerator design comes in. The goal is to collide beams of electrons and positrons in such a way that copious quantities of B mesons are produced with sufficient velocity to travel a measurable distance before decaying.

In the B factory, this will be done by tuning the machine to a collision energy of 10.56 billion electron volts (GeV), at which the particle known as the upsilon (4S) is created. This particle decays spontaneously into a B meson and an anti-B meson, which, in turn, live for about a trillionth of a second before falling apart. Because of the collider's asymmetric design, the B's travel a fraction of a millimeter before decaying, so that it will be possible to separate the decay products of the two types of particles and study them for indications of CP violation.
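The numbers behind that "fraction of a millimeter" are easy to check; the 1.5-picosecond B lifetime used below is an approximate measured value consistent with the article's "about a trillionth of a second."

    import math

    # The B and anti-B are nearly at rest in the upsilon(4S) frame, so they
    # inherit the boost of the unequal beams.
    e_high, e_low = 9.0, 3.1                 # GeV, the two beam energies
    momentum = e_high - e_low                # GeV, net momentum of the system
    mass = 2 * math.sqrt(e_high * e_low)     # GeV, about the upsilon(4S) mass
    beta_gamma = momentum / mass             # boost factor, about 0.56

    speed_of_light = 3.0e8                   # m/s
    b_lifetime = 1.5e-12                     # s, approximate B-meson lifetime
    decay_length = beta_gamma * speed_of_light * b_lifetime
    print(f"typical flight path ~ {decay_length * 1e3:.2f} mm")   # about 0.25 mm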

Physicist Roe has been spending a good deal of her time lately thinking about what she and her colleagues might do with a sample of a hundred million B meson pairs.

''The neutral B meson and its antiparticle can decay in hundreds of different ways," says Roe, ''and the Standard Model predicts that several of these decay modes may exhibit CP violation. Out of these, in turn, no more than a few are accessible to study with the B factory.

''What we're looking for is actually an enormous effect in a very rare process: it's a trade-off," says Roe.

''If CP is violated, we should see a dramatic difference between the B and the anti-B in certain rare decay modes. In these reactions, the Standard Model predicts the asymmetries to be very large-up to 30 percent or so, as opposed to an asymmetry of a few tenths of one percent in the only other CP-violating process we know about-the decay of the K meson. The various proposed extensions of the Standard Model-for example, the SuperWeak theory-each have their own, different set of predicted asymmetries."

Asymmetry between the B and the anti-B, Roe explains, will reveal itself in the time evolution of the decay-the differing rates at which the B and anti-B mesons decay into a particular final state. In order to monitor this process, the researchers will have to be able to measure the distance that the B and the anti-B travel between their point of creation and the point where each decays and to identify the particles in the final state. This is the challenge for the detector-design teams.

Roe (an active member of the Fermilab D-zero detector collaboration in addition to her role in the B-factory physics effort) feels that the opportunities for exciting physics at the B factory are exceptional.

''As physicists, we like to fit everything into symmetries," says Roe, ''but when we look around at the real world, we see something completely different. At some point in the history of the universe, the symmetry between matter and antimatter has obviously been violated, and CP violation is a possible mechanism consistent with particle physics. It's exciting to be working in a field that has such far-reaching implications for how the universe was formed."

-JG


TRAPPED INDOORS

TIGHTENING BUILDINGS TO SAVE ENERGY CAN LOCK POLLUTANTS INSIDE

Think of pollution, and in your mind's eye you might see great, tall factory smokestacks billowing noxious clouds of yellow and brown into an unsuspecting sky. Or you might think of the smeared brown and orange of a sunset over automobile-jammed Los Angeles. The word pollution would probably not bring to mind the air quality inside your own home or the office building where you work.

But scientists at Lawrence Berkeley Laboratory have discovered that there is good reason to think of the indoor environment when you think about pollution: some of the most polluted spaces in the world are indoors where we humans spend 85 to 90 percent of our time.

For the past 15 years, researchers at LBL's Indoor Environment Program-the nation's oldest indoor-air research program-have been investigating the relationships between indoor air quality, energy use in buildings, and the health and comfort of the people who inhabit these spaces.

According to program leader and chemist Joan Daisey, LBL's program in the Applied Science Division grew out of concerns about the impact of energy conservation on buildings. ''If you tighten up buildings to save energy, you can adversely affect indoor air quality," she says.

Indoor air was found to harbor higher concentrations of pollutants than outdoor air. These pollutants were discovered lurking in such places as carpeting, particle-board cabinets, and kerosene-stove emissions. Lack of air moving in and out of a building-a more common phenomenon in newer, more tightly constructed buildings-could allow the concentrations of pollutants to grow.

Even the materials that people were using to tighten up drafty spaces in older buildings sometimes released toxic chemicals: Some caulking and sealants used to reduce the infiltration of outside air through cracks around windows and doors, for example, emitted volatile organic compounds. Urea formaldehyde foam, blown into walls as insulation, degraded and emitted formaldehyde.

LBL's Indoor Environment Program includes studies of ventilation-the dominant mechanism for removing indoor pollutants-as well as research on the sources and characteristics of a variety of pollutants. The aim is to arrive at ways to both conserve energy and maintain human health.

Worker health is the focus of a new study undertaken by LBL, the California Department of Health Services (CDHS), and the University of California, Berkeley's, School of Public Health. The object of the California Healthy Buildings Pilot Study is to learn more about the causes of ''sick building" syndrome.

Workers in such buildings complain of headaches, runny noses, and irritated eyes. Sometimes they feel tired all day. When they leave the building, their symptoms disappear.

The incidence of complaints among building occupants appears to have increased simultaneously with the rise in energy conservation efforts to tighten buildings and the use of synthetic materials and new office equipment. But when tests are done, the concentrations of individual pollutants inside the buildings are rarely high enough to elicit the symptoms office workers exhibit.

Probably, says Bill Fisk, a mechanical engineer and leader of the Ventilation and Indoor Air Quality Control project, there are multiple factors-the mixture of organic chemicals emitted from building materials, poorly functioning ventilation systems, and environmental factors such as temperature and humidity.

Three types of office buildings in the San Francisco Bay Area were selected for the California Healthy Buildings Pilot Study: naturally ventilated with windows that open; mechanically ventilated with windows that open and fans that draw in and circulate outside air; and sealed, mechanically ventilated, air-conditioned buildings. The buildings were chosen by ventilation type and were not necessarily ''sick" buildings.

The employees being studied-mostly information processors and clerical workers-were asked to complete a questionnaire about their symptoms. Scientists monitored their indoor environment, measuring temperature, humidity, carbon dioxide, carbon monoxide, and numerous volatile organic compounds, including benzene, aldehydes, hydrocarbons, and ketones. The team also measured concentrations of bacteria and fungi.

Because some compounds are much more irritating than others, each substance was rated not only according to its concentration but according to its potential to cause irritation (formaldehyde, which is highly irritating, was given the highest multiplier, for example).
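
A minimal sketch of how such an irritation-weighted index could be tallied; the pollutant list, concentrations, and multipliers below are hypothetical, chosen only to illustrate the weighting scheme described above.

```python
# Hypothetical irritation-weighted exposure index.
# Concentrations are in parts per billion; the weights are illustrative
# multipliers, with formaldehyde assigned the largest value, as in the study.

concentrations_ppb = {"formaldehyde": 40.0, "benzene": 2.5, "acetone": 120.0}
irritation_weight = {"formaldehyde": 10.0, "benzene": 3.0, "acetone": 1.0}

def weighted_index(conc, weights):
    """Sum each pollutant's concentration scaled by its irritation weight."""
    return sum(conc[name] * weights[name] for name in conc)

print(f"Irritation-weighted index: {weighted_index(concentrations_ppb, irritation_weight):.1f}")
```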

Preliminary findings from the Healthy Buildings study suggest significantly higher rates of sick building symptoms in mechanically ventilated buildings, with and without air conditioning. However, within each class of buildings are wide variations in the prevalence of symptoms. Says Fisk: ''We can now focus more detailed studies on buildings where people have unusually high or low numbers of symptoms and investigate the causes."

The researchers are in the process of correlating worker symptoms with indoor air quality, temperature, and job factors, such as stress. Fisk is also proposing to undertake a more extensive study of buildings in other parts of the country at different times of the year. Such studies would greatly further the understanding of the sick-building syndrome.

Just how badly human beings are hurt by exposure to indoor pollutants is a question being investigated in a collaboration between LBL and UC Berkeley's Department of Biomedical and Health Sciences. In the past, estimates of health risks to humans were based on simple extrapolations from animal studies or studies of humans exposed to very high concentrations of pollutants. Now, the investigators are working to develop risk-assessment methods based on fundamental human biology-how substances are metabolized, for example, and how they reach particular tissues.

One especially deadly chemical pollutant-carbon monoxide-is the focus of work by Greg Traynor and his colleagues in the Indoor Exposure Assessment group. Carbon monoxide, which Traynor calls ''the most significant air-pollution problem in the country," can cause death overnight. Carbon-monoxide poisoning can result from burning charcoal indoors, using unvented kerosene stoves or gas-fired space heaters, or having cracked heat exchangers or old, cracked floor furnaces.

Traynor and his colleagues have found that homes where people used unvented sources of heat almost always have indoor pollutant concentrations exceeding EPA's outdoor air quality standards for nitrogen dioxide and sometimes for carbon monoxide.

On the basis of both laboratory measurements of emission rates and controlled field studies, the researchers developed computer models for predicting pollutant concentrations, taking into account information such as air-exchange rates and reactivity of the pollutants. The team tested the model with gas stoves and kerosene heaters in unoccupied houses and found that their field tests agreed very well with the model's predictions.
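
The kind of model described can be pictured as a single-zone mass balance on the indoor air. The sketch below is illustrative only and is not the LBL group's code; the house volume, emission rate, air-exchange rate, reactivity, and outdoor level are all assumed values.

```python
# Minimal single-zone indoor-pollutant mass balance (illustrative sketch).
# dC/dt = S/V + a*C_out - (a + k)*C
#   C      indoor concentration (ppm)
#   S      source emission rate from an unvented appliance (ppm * m^3 / h)
#   V      house volume (m^3)
#   a      air-exchange rate (1/h)
#   k      first-order loss from reactivity/deposition (1/h)
#   C_out  outdoor concentration (ppm)

def simulate(hours, dt=0.01, S=900.0, V=300.0, a=0.5, k=0.05, C_out=1.0, C0=1.0):
    C = C0
    for _ in range(int(hours / dt)):
        dCdt = S / V + a * C_out - (a + k) * C
        C += dCdt * dt
    return C

steady_state = (900.0 / 300.0 + 0.5 * 1.0) / (0.5 + 0.05)
print(f"After 4 h of operation: {simulate(4):.2f} ppm")
print(f"Steady state:           {steady_state:.2f} ppm")
```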

Recently, the Indoor Exposure Assessment group has come up with a much more complicated model that employs information supplied by utility companies in four different regions of the United States about house volume and insulation levels, number of appliances, leakiness of the home, and outdoor temperature and wind speed, among other factors.

Traynor and Mike Apte are developing a passive monitor to do a field study on indoor carbon-monoxide levels, with emphasis on finding homes with malfunctioning vented appliances. Because the monitor will be small, simple to use, and easy to send through the mail, it will provide LBL scientists with much more information than they now have on carbon monoxide.

Another dangerous indoor pollutant is tobacco smoke. Like any burning material, tobacco produces particles and gases as it burns. The intent of a new LBL study is to see how the sizes of tobacco smoke particles change with variations in environmental conditions and how particles of different sizes are distributed.

In this study, scientists will measure particle sizes in indoor air as a function of such factors as the concentration of cigarette smoke, the amount of air circulation, the relative humidity, and the temperature. The work will be done both in real buildings and in a controlled environmental chamber.

LBL physicist Rich Sextro and his colleagues are interested in the size of the tobacco smoke particles because their aerodynamics, how quickly they coagulate and deposit on walls and other surfaces, how the particles are inhaled, and where they go in the lungs all depend on their size.

''Cigarette smoke itself is a fairly complex mixture of many toxic and carcinogenic chemicals, such as benzene and benzo(a)pyrene," says Sextro. ''The amount of chemicals the lungs receive is dependent on how big the particles are."

Tobacco smoke is also related to the health risks of radon in the air, says Anthony Nero, deputy leader of the Indoor Environment Program. Radon is a naturally occurring decay product of the radioactive element radium and is found in varying concentrations in soil throughout the planet. Radon gas, in turn, produces radioactive decay products, which are the largest source of exposure to radiation. Studies have shown that people who smoke are more likely to get lung cancer when they are exposed to these products than those who do not.

LBL's radon studies-the world's most comprehensive research program on indoor radon-began in 1980 under Nero's leadership. He and his colleagues at LBL found a wide variation in the concentration of radon in houses in different locations. They also determined the mechanism for radon's entry into buildings.

Says Sextro: ''In the early 1980s, we and others began to appreciate that what were thought to be traditional sources of radon-particularly building materials like wallboard and concrete or some indoor uses of water or natural gas-weren't large enough producers of radon to account for the kinds of indoor concentrations that were found in some houses."

The researchers' attention shifted to what is now known to be the major source of radon in houses: soil gas. They undertook a number of detailed studies to investigate radon entry into houses and found that soil gas gets into buildings because of pressure differences.

Basements (and the slab areas of houses without basements) are at a lower pressure than the surrounding soil, so a crack between the inside of the building and the outside will allow gas to flow in. Anything that penetrates the building shell, such as sewer pipes, water pipes, and natural gas and other utility lines, will allow the soil gas to enter.

Because experiments show houses behave in very complicated ways, the Indoor Radon Group-in collaboration with researchers in the Earth Sciences Division-created a highly controlled environment to do more experiments on radon entry. In 1988 and 1989, they built two small basement structures in the Santa Cruz Mountains of Northern California, on a site where the radon concentrations in the soil are higher than average and where the soil is permeable and allows gas to flow.

The structures are as deep as ordinary basements but not as large. All but 6 inches of the walls are below ground so that the wind's effect is minimal, and the structure has been designed so that air pressure can be closely controlled. Instruments monitor radon concentrations, pressure, temperature, and wind speed and direction. The project will help the researchers validate computer models that simulate radon transport through soil and into houses.

Understanding the way radon enters buildings has prompted Daisey and her team from the Indoor Organic Chemistry group to investigate the possibility that other substances get into houses the same way. As part of the large Superfund program at UC Berkeley, the scientists are examining the transport of volatile organic compounds into houses from contaminated soil gas near hazardous waste sites.

Volatile organics are carbon-containing compounds such as hexane, octane, alcohol, ketones, aldehydes, benzene, vinyl chloride, and perchloroethylene, which vaporize at room temperature. The group has done experiments in a house near a landfill that showed there were many organic compounds in the soil gases around the house, including freons, which do not occur naturally. When the house was depressurized, the compounds moved in.

Various researchers around the country have tried to estimate human exposures to these compounds at sites such as landfills, typically by combining measurement data and models, says Daisey. They look at how much is inhaled from the concentrations outside (which are often two to 10 times lower than those found in a house) and how much is ingested from any contaminated water. More recently, researchers have taken into account chemicals which would degas from contaminated water-from hot showers, for example, which can release volatile organic compounds into a house.

''But," says Daisey, ''nobody has looked at the soil gas transport pathway. And it's not so simple to figure out how much is coming from that pathway. You can go into any house in the United States and find these compounds, because we use a lot of products and materials that contain them."

The LBL group is considering various approaches to determine how much of these chemicals enter a house from soil gases. One is to use radon as a tracer: measure both radon and volatile organics in the soil gas around the house, measure them in the house, and assume the proportion remains constant.
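
A back-of-the-envelope version of that radon-tracer approach, with invented measurement values purely to show the arithmetic:

```python
# Radon-as-tracer estimate (illustrative numbers only).
# Assume the ratio of a VOC to radon in soil gas is preserved as the gas is
# drawn indoors, so the soil-gas contribution to the indoor VOC level scales
# with the indoor radon level.

radon_soil_gas   = 20_000.0   # Bq/m^3 in soil gas (assumed)
voc_soil_gas     = 50.0       # micrograms/m^3 of a VOC in soil gas (assumed)
radon_indoor     = 150.0      # Bq/m^3 measured indoors (assumed)
voc_indoor_total = 8.0        # micrograms/m^3 measured indoors (assumed)

soil_gas_voc = radon_indoor * (voc_soil_gas / radon_soil_gas)
fraction_from_soil = soil_gas_voc / voc_indoor_total

print(f"Estimated indoor VOC from soil gas: {soil_gas_voc:.2f} ug/m^3")
print(f"Fraction of indoor VOC attributable to soil gas: {fraction_from_soil:.0%}")
```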

Because soil gases may be the main source of volatile organic compounds in certain houses, Daisey's group wants to find out what kind of soil and site characteristics make this pathway significant. If soil gas transport is the main route, a control method similar to the one used against radon entry-giving the gas an alternate, low-pressure path to follow-could be applied.

The study of exposure to hazardous chemicals from landfills is ''an area of research that is virtually untapped," says Daisey. ''We've just started."

During the past 15 years, research at LBL has proved that indoor air is the major way people are exposed to many harmful pollutants, including toxic chemicals and radiation in the form of radon decay products. Scientists here have shown that identifying and removing the sources of these unhealthy substances is the best way to improve indoor air quality.

Work continues in this young field of research with the exploration of how contaminants get into our homes, how they can be kept out, how different chemicals affect people, and how they interact with each other. Standards for building materials such as carpeting may result from tests done at LBL.

Just as people have awakened to the dangers of polluted, smoggy outdoor air and have begun to find the means to deal with it, so, with the help of research into indoor air, they may find a way to create a clean, healthy environment indoors.

-DIANE LAMACCHIA


EXTRA ELECTRONS

NEGATIVE-ION BEAMS CAN PLAY A POSITIVE ROLE IN FUSION ENERGY

What does a safe, nonpolluting, and virtually limitless potential source of energy have in common with a machine that will recreate on earth the conditions of the universe as it was in its infancy? The answer is that someday both fusion reactors and the Superconducting Super Collider are likely to rely on negative-ion sources developed by the Magnetic Fusion Energy (MFE) group of LBL.

When atoms capture extra electrons so that their electrons outnumber their protons, they take on a negative electrical charge and become ''negative ions." Negative ions have a wide variety of scientific applications because they can be accelerated to high energies. Furthermore, a beam of negative ions can be efficiently converted to a beam of neutral atoms or a beam of positively charged ions. This was one of the reasons that the scientists and engineers in the MFE group first got into the business of designing negative-ion sources nearly two decades ago.

In 1971, the MFE group, which is a part of the Accelerator and Fusion Research Division (AFRD) and is led by physicist Wulf Kunkel, was given responsibility for developing a neutral-beam injector system. The primary purpose of this system would be to heat the fuel in a magnetically confined thermonuclear fusion reactor.

Thermonuclear fusion-the process by which the sun and other stars ''burn" and hydrogen bombs explode-is the melding together of two atoms of hydrogen to form a single atom of helium. Fusing together hydrogen nuclei unleashes enormous quantities of energy that, sustained and controlled, could be used to generate vast amounts of electrical power with much less radioactive waste than is produced by fission. Furthermore, as a source of energy, hydrogen is clean-burning, which means it makes no contribution to the ''greenhouse effect," and, for all practical purposes, is infinitely abundant in supply. For example, first-generation fusion reactors will probably use as fuel a mixture of deuterium and tritium, heavier isotopes of hydrogen that fuse more readily. Enough deuterium can be readily obtained from one gallon of seawater to produce the same amount of energy obtained from 300 gallons of gasoline.

Thermonuclear fusion takes place under conditions of high temperature and density, such as those created in the core of the sun by immense gravitational pressure. To mimic these conditions on earth, fusionable material must be heated to temperatures of 100 million degrees Celsius and confined in a vacuum chamber long enough for the heated nuclei to interact. One way of doing this is with magnetic fields generated by electrical currents. Fusionable material is initially held in a gaseous state. As the temperature of the gas climbs, its electrons and atomic nuclei separate, and the gas becomes ionized.

An ionized gas is called a ''plasma," and it can be confined within a magnetic field because charged particles cannot readily move across magnetic lines of force. Since neutral particles, on the other hand, pass freely across such a barrier, a beam of energetic neutral atoms can be used to heat a magnetically confined plasma. What happens is that once inside the confinement field, a neutral beam's atoms will collide with the plasma's ions. These collisions strip electrons from the atoms, giving them a charge-which means they, too, are trapped. The additional energy from the trapped particles of the neutral beam raises the plasma's temperature.

The general rule of thumb has been that each electron volt (eV) of energy injected into a plasma raises the temperature of one of its constituent ions by about 10,000 degrees Celsius. This means that lifting the temperature of the entire plasma requires a neutral beam densely packed with highly energetic atoms. Consequently, a neutral-beam injector system starts off with ions of hydrogen or deuterium that are accelerated to a desired kinetic energy and then converted to neutral atoms for passage across the magnetic barrier.
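
Applying the rule of thumb at face value shows the scale of energy involved; the short calculation below simply divides the 100-million-degree target by the 10,000-degrees-per-eV figure quoted above.

```python
# Applying the rule of thumb quoted above: roughly 10,000 degrees Celsius
# of ion temperature per electron volt of injected energy per ion.

degrees_per_eV = 1.0e4       # rule of thumb from the text
target_temp_C  = 100.0e6     # ~100 million degrees Celsius for fusion

energy_per_ion_eV = target_temp_C / degrees_per_eV
print(f"Energy needed per plasma ion: about {energy_per_ion_eV / 1e3:.0f} keV")
# => about 10 keV per ion, which is why the beam must be densely packed
#    with atoms carrying far more energy than that.
```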

When the MFE group first began working on a system for injecting neutral beams into a magnetically confined plasma, they used sources that made positive hydrogen and deuterium ions. Positive ions of hydrogen and its isotopes are easier to make than negative ions because hydrogen has a low ''electron affinity," meaning it prefers to give up rather than accept an electron.

The MFE group's positive-ion-source work culminated in the development of the Common Long Pulse Source (CLPS), a component of the neutral-beam injection system that was designed for use in the Tokamak Fusion Test Reactor at Princeton University and the DIII-D Tokamak at General Atomics in La Jolla, California. (A tokamak is a type of reactor that contains its plasma fuel in a doughnut-shaped toroidal chamber encircled by magnetic coils.) Consisting of an ion generator and an accelerator, the CLPS can produce beams of deuterium atoms at 120,000 electron volts (120 keV) of energy and a current of 75 amperes.

"A problem arose when performance requirements called for energies above 200 keV," says MFE's Ka-Ngo Leung. ''At those higher energies, the efficiency with which positive hydrogen ions can be neutralized drops to less than 20 percent."

However, negative ions of hydrogen can be converted to hydrogen atoms at a rate of efficiency between 60 and 99 percent, depending upon the method of neutralization.

In 1979, Leung and Ken Ehlers achieved a major advance in negative-ion sources with the development of what they called a ''surface-conversion" source that generated the first steady-state beam of negative hydrogen or deuterium ions at a current greater than one ampere. This source consisted of a chamber, about 37 liters in volume, that would be filled with hydrogen or deuterium gas. An electrical discharge from a tungsten filament ionized the gas, and a ''converter" featuring a surface made of molybdenum was introduced into the plasma. The converter was partially coated with cesium, an element that readily gives up electrons, and negatively biased with respect to the plasma so that the positive ions in the plasma were drawn to it. There they reacted with the cesiated molybdenum surface to create negative hydrogen ions.

The electrical bias that drew the ions to the converter when they were positively charged repelled them when they became negative. Because a curvature in the converter's surface directed the repulsed negative ions through an exit aperture, the source was considered to be a ''self-extraction" device. A unique multicusp magnetic field (the name comes from the geometry of the magnetic lines of force) surrounded the source chamber to contain the plasma and help guide the ions. A second magnetic field from a pair of magnets at the aperture helped preserve the purity of the emerging ion beam by reflecting most of the free electrons traveling with the hydrogen ions back into the chamber.

''The advantage of the surface-conversion source is a high rate of conversion for the energy expended," says Leung. ''Since the negative-ion output is proportional to the surface area of the converter, higher beam currents can be achieved simply by using a larger converter."

Surface-conversion sources designed by the MFE group have supplied negative ions to a number of particle accelerators around the world. Among these was the source that fed TRISTAN, an electron-positron collider at KEK, the National Laboratory for High Energy Physics at Tsukuba, Japan. In 1986, TRISTAN used electrons stripped from negative ions to set what was then a world record for center-of-mass energy in an electron-positron collision at 50 GeV (billion electron volts).

Although surface-conversion sources have yet to be used in neutral-beam injector systems for tokamaks, as was originally intended, that is the plan for the proposed International Thermonuclear Experimental Reactor (ITER) project, a reactor designed to demonstrate ignition and long-term sustained burn. (Detailed design work is expected to begin next year.)

To penetrate ITER's enormous magnetic confinement field and the large diameter of its plasma, a neutral-beam system's particles will have to carry 1.3 million electron volts (MeV) of energy-10 times greater than what is now generated from positive-ion sources. The beam must also be delivered in pulse lengths of two weeks, compared to the pulse lengths of tens of seconds of existing neutral-beam systems. To meet these performance specifications, the MFE group would couple a surface-conversion type of source to the CCVV (constant-current, variable-voltage) accelerator they have designed, which uses electrostatic quadrupoles to handle large charge flows.

In their surface-conversion-type ion sources, Leung and Ehlers minimized the energy needed to produce negative ions and maximized the number of ions reaped by adding cesium to the plasma. While the addition of cesium increased ion production by more than five times, it also presented problems. If cesium escapes out of the source chamber, it can cause voltage breakdowns in electrical equipment. This is especially troublesome if a surface-conversion source is supplying negative ions to a particle accelerator.

In 1985, the MFE group developed a different kind of negative-ion source, one that did not require the addition of cesium. Called a ''volume-production" source, it produces negative ions through atomic processes taking place within a confined volume of plasma, rather than through a reaction with a metal surface. A plasma of pure hydrogen is produced inside a chamber and again confined by a multicusp magnetic field. The plasma contains negative as well as positive hydrogen ions and both high-energy (hot) and low-energy (cold) electrons. All but the high-energy electrons are able to penetrate a magnetic filter dividing the chamber's interior into separate source and extraction zones. In the extraction zone, collisions between cold electrons and neutrals, particularly heavily vibrating molecules, create negative hydrogen ions.

Negative ions from a volume-production source are typically low in temperature-about 1 eV compared to the 5 eV temperature of ions from a surface-conversion source. This means that these ions are not moving around as much and lend themselves to forming an intense, straight, easily focused beam-a composite quality referred to as ''brightness" that is highly desirable for use in particle accelerators. Besides being operable without cesium (although cesium can be added to the plasma to enhance the output current), these sources require little startup time to begin producing ions and offer long-term, dependable service. However, such sources do need a large volume of hydrogen plasma, and in the first versions designed by the MFE group, the plasma was produced by electrons emitted from one or more tungsten filaments. Attaining high currents demands a lot of electrical power to heat the filament, which shortens its lifetime for steady state operations or pulsed operations with a high repetition rate.

The MFE group's newest incarnation of the volume-production source overcomes these drawbacks by replacing the filament with a glass-coated, copper-coil antenna. Voltage applied to the antenna gives rise to a radio-frequency electric field, which in turn produces enough energetic electrons to transform a gas of hydrogen or deuterium into a source plasma. Given an input power of about 25 kilowatts, the radio-frequency (rf) ion source will furnish an extracted ion current density of about 200 milliamps per square centimeter-about 40 percent higher than a filament discharge would achieve at the same input power. Says Leung, ''The rf-driven negative-ion source is simple to make, easy to use, and rates high in all aspects of performance. Its lifetime is practically unlimited, and it can provide a beam continuously, or in short or long pulses."

One model of the MFE group's rf-driven ion source was developed for use in the Large Electron-Positron (LEP) collider facility at CERN, the European center for particle physics, in Geneva, Switzerland. (LEP is an even larger machine than TRISTAN, capable of providing collisions at an energy of 120 GeV.) The rf-driven source will inject negative ions into a linear accelerator that provides a calibration beam for L3, one of LEP's main detectors. So impressive have the test performances of this rf-driven ion source been that an upgraded model now being designed by the MFE group is expected to be the primary ion source for what will be the largest and most powerful particle accelerator ever built-the Superconducting Super Collider (SSC).

The SSC is expected to take scientists deep into the heart of matter as it was at the very beginning of time, allowing them to search for fundamental particles that have been postulated but never observed. Ten thousand superconducting magnets will guide two beams of protons in opposite directions around an oval path 52 miles in circumference. When the energy in each proton beam reaches 20 trillion electron volts (TeV)-the equivalent of accelerating a baseball to a mass of about three tons-the beams will be made to intersect at a combined collision energy of 40 TeV.
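
The baseball comparison can be checked with the relativistic mass increase: at 20 TeV a proton carries roughly 21,000 times its rest energy, and a 145-gram baseball scaled by the same factor would indeed weigh about three tons. A sketch of that arithmetic (the baseball mass is an assumed standard value):

```python
# Back-of-the-envelope check of the "three-ton baseball" comparison.
proton_rest_energy_GeV = 0.938        # proton rest-mass energy
beam_energy_GeV        = 20_000.0     # 20 TeV per proton in the SSC

lorentz_factor = beam_energy_GeV / proton_rest_energy_GeV

baseball_mass_kg   = 0.145            # a regulation baseball (assumed)
equivalent_mass_kg = baseball_mass_kg * lorentz_factor

print(f"Lorentz factor at 20 TeV: {lorentz_factor:,.0f}")
print(f"Baseball scaled by the same factor: {equivalent_mass_kg / 1000:.1f} metric tons")
```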

Even though the SSC ultimately accelerates and collides protons, it will start with the easier task of accelerating negative ions. Once these negative ions attain a desired energy, they will be converted into protons. This gives the protons a running start on their way up to collision energies. Originally, the MFE group was designing an rf-driven negative-ion source that would calibrate the SSC's detectors and also serve as a backup to a magnetron-a more conventional type of negative-ion source pioneered in the Soviet Union.

Says Leung, ''The SSC people needed a backup ion source because they wanted their beam to be available 95 percent of the time, and the magnetron may not meet this condition. Because we've been able to operate our rf source so successfully elsewhere, the SSC people now want to test it for their injection system."

Given its simplicity, versatility, and high-quality performance, Leung and his colleagues believe that their rf-driven negative-ion source will have many other applications in addition to particle accelerators. For example, these sources could be used to produce high-current continuous beams of deuterium ions for the neutral-beam injection systems of future tokamak reactors. Rf-driven sources might even be used to make positive ions of argon, boron, carbon, or nitrogen, for implanting into the surfaces of other materials to alter their properties or for manufacturing semiconductor chips. However, it remains to be seen whether the glass coating on the copper antenna that helps prevent voltage breakdown can bear up under the corrosive side-effects of exotic positive ions.

In another ongoing development, MFE researchers are applying their volume-production techniques to the use of a small multicusp source (with a source chamber 15 centimeters in diameter) to make negative ions of carbon much more efficiently than sources now in use. Negative carbon ions could be used as low-dosage, highly sensitive radioactive tracers for medical research. They are also well-suited for use in the ''cyclotrino"-a miniature cyclotron invented by LBL physicist Richard Muller for precisely dating ancient objects. The cyclotrino determines the age of samples by measuring the ratio of carbon-14 to carbon-12. Carbon-14 is an unstable isotope that decays at a known rate, so the ratio of carbon-14 to stable carbon-12 declines predictably with a sample's age.
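
A minimal sketch of the dating arithmetic the cyclotrino relies on, using the standard carbon-14 half-life of about 5,730 years; the measured isotope ratio below is hypothetical.

```python
import math

# Radiocarbon age from the carbon-14 / carbon-12 ratio.
# The measurement gives how far the ratio has fallen below the value in
# living material; the radioactive decay law then gives the age.

HALF_LIFE_YEARS = 5730.0                  # carbon-14 half-life
ratio_measured_to_modern = 0.25           # hypothetical: 1/4 of the modern ratio

age_years = HALF_LIFE_YEARS / math.log(2) * math.log(1.0 / ratio_measured_to_modern)
print(f"Estimated age: about {age_years:,.0f} years")   # two half-lives, ~11,460 years
```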

Says Leung, ''Most of the sources in use now produce negative carbon ions by bombarding graphite with cesium, which can contaminate the cyclotrino's electrodes. Our way is much better because we extract the ions directly out of the carbon-monoxide or carbon-dioxide discharge plasma."

One of the MFE group's negative-ion sources also figures to play a role in space, where neutral beams may have a variety of research applications. Space particle beam experiments now being planned call for an LBL-type volume-production source.

''Our negative-ion sources are in use all over the world," Leung has said. He may soon have to amend that statement to ''all over the world and beyond."

-LYNN YARRIS


IMAGING THE ECLIPSE

NATURE AND TECHNOLOGY COMBINE TO GIVE TEACHERS AN EXTRAORDINARY LESSON IN SCIENCE

AS SOON AS THE TEACHER TURNED HIS BACK the students in the class began doing exactly as they pleased. Repeated exhortations-``People, people, could I just have your attention for a moment"-fell on deaf ears. Finally the teacher, Dick Treffers of UC Berkeley's Astronomy Department, shrugged, threw up his hands, and smiled to a colleague: "I have completely lost their attention, and that's great. If this is what happens with high-school students in a real classroom, we'll have made our point."

What was happening in Treffers' improvised classroom in the Intercontinental Hotel on Maui, Hawaii, was that his carefully organized tutorial on how to process astronomical images on personal computers had been completely swamped by the enthusiasm and excitement of the high-school teachers who were his students.

Each teacher - having his or her own PC work station stocked with images of nebulae, planets, and distant galaxies - couldn't resist "playing" with the program, learning within minutes to manipulate the images - zoom in on interesting features, colorize the display to bring out variations in intensity, add and subtract images to highlight changes.

Watching in amazement, Treffers remarked: "An experienced colleague once told me: `To use computers in teaching, just boot up the program and go out to lunch.' Now I see what he meant." Another observer commented, "Can you imagine a high-school teacher saying the same thing about a textbook-`Just pass it out and go to lunch'?"

The 10-day teacher-training workshop was a part of the Hands-On Universe, a program to develop a high-school math and physics curriculum based on the use of astronomical images. The program, currently under development, calls for high-school classrooms eventually to be connected via computer networks to telescopes and image memory archives all over the world.

LBL scientists played a leading role in the workshop, which coincided with the July 11 total solar eclipse, the last total eclipse to be visible in the U.S. until the year 2017. Astrophysicist Carl Pennypacker, leader of the automated supernova search team at LBL, was chairman of the workshop.

The workshop involved daytime lectures and teacher training, and nighttime image recording with a telescope and camera system based on CCDs (charge-coupled devices). The teachers then played an active role in taking images of the total eclipse on the Big Island of Hawaii. Research at LBL and the University of California at Santa Cruz has led to the development of a special large-format CCD camera to allow some of the most detailed and sensitive images of an eclipse ever acquired.

Following the workshop, participant Tim Barclay commented, ''In a lifetime as a science teacher, this was my first opportunity to do real science. The experience was unforgettable."

The participating high-school teachers, each with a demonstrated background in astronomy, physics, or math teaching, were chosen on the basis of ability and interest, the need for such a program in their home districts, local district support, and evidence of skill and commitment in conducting the necessary follow-up workshops.

The eclipse images taken during the workshop will be stored on a computer at LBL and distributed either electronically or on floppy disks. After their return home, each of the teachers will conduct regional workshops, organizing groups of approximately 10 other teachers prepared to further test and develop the system. In this way, the sponsors hope that they can reach their goal of 200 classrooms with the new curriculum by the late fall of 1992. Activities tested or created in workshop sessions will undergo field testing with high school students during the 1991-92 school year.

A highlight of the workshop was the contribution (in words and pictures) of NASA astronaut Marcia Ivins. Ivins was a crew member on the space shuttle mission which retrieved the LDEF (Long Duration Exposure Facility) satellite-best remembered as the source of the ''tomato seeds from space" distributed to students all over America.

In addition to her regular duties as a mission specialist, Ivins is in charge of photography for all shuttle missions. She astounded her audience at the workshop by explaining that ''from the beginning of the U.S. space program to this day" there has been no formal requirement for getting pictures from space.

The first few U.S. space missions left no photographic records except for technical, black-and-white engineering photos, says Ivins. Finally, astronaut Wally Schirra changed things by carrying his own Hasselblad camera into space on an early manned flight. The pictures he took were so enthusiastically received by the public that since then there has been some limited provision for photographs on most space flights, but never as a formal part of the mission.

Until 1990, when Ivins received permission to try to improve the situation, virtually all photographic equipment involved 20-year-old technology. With Ivins' enthusiastic nudging, the NASA photographic program is now expanding into autofocus and other features that ordinary consumers take for granted, and eventually, she hopes, into state-of-the-art technology like CCD cameras.

In addition to Ivins and Treffers, speakers during the Hands-On Astronomy workshop included teachers Mary Christian, Jana Gray, Jennifer and Paul Hickman, and Jeff Lockwood, who spoke on various aspects of developing high-school curricula in science and math; LBL physicists Sid Bludman (a visitor in the Nuclear Science Division), Gerson Goldhaber, Carl Pennypacker (chairman of the workshop), and Saul Perlmutter, who talked about topics in physics and astronomy; Andi DiSessa, professor and associate dean in UCB's School of Education, who spoke on computers in education; and Tim Barclay, curriculum developer at the Technical Education Research Centers (TERC) in Massachusetts, who discussed networks.

Teachers participating in the program included: Bruce Downing, Albany High School, Albany, CA; Jennifer Hickman, Phillips Academy, Andover, MA; Paul Hickman, Belmont High School, Belmont, MA; Curtis Craig, American Fork High School, American Fork, UT; John Koser, Wayzeta High School, Bloomington, MN; Bob Marzewski, Berthoud, CO; Mary Christian, Providence, RI; Jatila Van der Veen, Adolfo-Camarillo High School, CA; Jana Gray, Sir Francis Drake High School, San Rafael, CA; Dan Gray, Napa, CA; Jeff Lockwood, Tucson, AZ; Hughes Pack, Northfield-Mt. Herman Academy, Northfield, MA.

LBL staff participants included Elizabeth Arsem, Jigna Desai, and Gerard Monsen of the automated supernova search group. LBL staff who did not make the Hawaii trip but were active in the preparations included Rollie Otto and Eileen Engel of the LBL Center for Science and Engineering Education (CSEE) and LBL mechanical technicians Jack Borde, Tony Freeman, Alan Lyon, Don Krieger and Rod Post (optics).

Participating groups in the Hands-On Astronomy workshop were LBL/CSEE and the Technical Education Research Centers (TERC) of Massachusetts, a nonprofit educational research organization. The host institution in Hawaii was the Hawaii Space Center.

-JUDITH GOLDHABER


As Darkness Falls . . .

Up until 20 minutes before totality, the prevailing mood among the LBL eclipse-watchers assembled on Hawaii's Hapuna Beach was-like the weather-gloomy. It seemed that those who had come so far, waited so long, and got so soaking wet would be rewarded with no view of a total eclipse of the sun-not from this beach, not today.

The rain, though soft and warm as always in Hawaii, had fallen steadily throughout the night, and since local park authorities had declined to relax their ban on tents, eclipse-watchers rose at dawn from sleeping bags that sloshed and squished. The sky that greeted them was not promising-a grey mass of clouds that stretched from the ocean horizon in the west to the foothills of the Kohala mountains in the east.

"Aside from that," as the LBL team reassured each other bravely, the occasion-the LBL Hands-On Astronomy Workshop-had indeed been a success. For the past week a dedicated team of scientists (led by LBL astrophysicist Carl Pennypacker), high-school teachers from all over the United States, and even an astronaut (NASA mission specialist Marcia Ivins) had been preparing for this moment and-along the way-learning how to use astronomical images to excite and inspire students in high-school science classes.

In a frenzy of activity that had found many of them working from seven in the morning till past midnight, the teachers and scientists had developed and learned how to use state-of-the-art image-gathering systems: ultrasensitive charge-coupled device (CCD) cameras married to borrowed telescopes.

The night before, the teams had scattered-one contingent (led by Hughes Pack, a high-school teacher from Massachusetts and a member of LBL's Teacher Research Associate program) to an observatory on Mauna Loa; another to Ka Lae on the southernmost tip of the big island; and a third-the largest group-to Hapuna Beach Park on the dry side of the island.

The local snack bar started serving coffee and donuts to the gloomy campers around 5:30 a.m., and spirits began to lift. Still two hours to go before totality-after all, a lot could happen before then....

And a lot did. By 6:45, 15 minutes into the eclipse, most of the hundreds of campers on the beach had moved to the higher ground, staked out patches of turf, and turned their eyes eastward, where a faint brightening was definitely becoming noticeable. From backpacks and ditty bags came a remarkable assortment of eclipse viewing devices, pinhole cameras, astronomical instruments, cameras, and oh, yes!-eclipse tee shirts. Shortly after seven, a roar went up from the crowd: the sun-never more resplendent-sailed out from behind the curtains of clouds with a small bite of darkness in its upper rim and was to remain in sight virtually continuously until about 45 minutes past totality.

The LBL group surged into action. Pennypacker, Dick Treffers of UC Berkeley's Astronomy Department, and high-school teacher Curtis Craig (from American Fork, Utah) snapped images through the CCD camera; Gerard Monsen, a summer intern and undergraduate student in the UC Berkeley Astronomy Department, shot standard photographic pictures with a tripod-mounted camera borrowed from LBL mechanical technician Jack Borde. Others in the party made themselves useful between bouts of gaping at the sky through their mylar sun-peeps or #14 welder's goggles like everyone else.

Virtually all eclipse-watchers on the beach were astonished to discover how little of the sun is needed to light up the whole world. Minutes from totality, a slight darkening-no more than on a cloudy day-became evident for the first time. Seconds from totality, with no sun left except for a scattering of "Baily's beads" (formed by rays passing between mountains on the edge of the moon), the day still could pass for an ordinary darkish winter morning.

And then, totality. No, the stars didn't come out-not on Hapuna Beach, anyway-but the world became strangely quiet. Then the combined gasp of hundreds of voices-"the corona, the corona!"-and there it was, just as advertised, pearly white, about as bright as a full moon on a clear night, extending out from the black disc of the sun so as almost to double its apparent size. Slowly, viewers on the beach made out a large, red, flame-shaped prominence in the upper region. (There were several such prominences, the photographs later showed, but only one was clearly visible to most eclipse-watchers on Hapuna Beach.)

And then it was over. The first returning ray of sunlight (the "diamond ring effect") returned virtual normality to the landscape within seconds. The slow ebbing of a total eclipse is an anticlimax. The crowd that had watched transfixed through the hour leading up to totality started packing gear or heading for the swimming beach almost immediately.

For the LBL crew, it was time to call up the images on the computer screens and find out-for the first time-whether all the gadgets and equipment-CCDs, telescopes, computer hardware, software-had worked as they were supposed to. A helpful concessionaire allowed the team to plug in to the snack bar's electricity, and the jubilant team watched the instant replay of the images of totality on the computer screen. Later, back in the workshop's headquarters in Maui, the teachers would immediately begin to manipulate the images and incorporate them into teaching materials developed earlier.

-JG


PLAUDITS & PATENTS

Distinguished honors bestowed on members of the LBL scientific staff between mid-April 1991 and mid-July 1991 include the following:

Two members of LBL's Physics Division-theoretical physicist Mary K. Gaillard and mathematician Alexandre J. Chorin-were elected to the National Academy of Sciences, one of the highest honors an American scientist can receive. Gaillard has been a leader in the study of theories that attempt to explain the origin of the universe and unite all of its fundamental forces into one. Chorin is especially noted for his development of the vorticity method for solving a wide variety of problems in fluid dynamics.

Applied Science researcher Ashok Gadgil was awarded a $150,000 grant from the Pew Scholars Program in Conservation and the Environment. He received the award for work in energy-efficiency policy issues in developing countries. Gadgil's research has targeted a root cause of biodiversity loss: energy inefficiency that creates demand for more resources, such as hydroelectric power or fuelwood, and consequently puts pressure on wildlife habitats. In one case, the result is submergence of ecosystems in hydroelectric reservoirs and in the other, deforestation. Gadgil will be given $50,000 annually for three years to support any project of his choosing.

Angelica Stacy of LBL's Materials Sciences Division and UC Berkeley's College of Chemistry is a recipient of a 1991 Distinguished Teaching Award. Stacy, whose research focuses on the synthesis, structure, and electronic properties of solids, has lectured and written extensively on the chemistry of high-temperature superconductors. She has also worked with high school students and teachers to improve science education.

Bing Jap of the Cell and Molecular Biology Division has been awarded a Humboldt Research Award for Senior U.S. Scientists, allowing him to spend a year at the Max Planck Institute for Biochemistry in Martinsried, Germany. Jap's work focuses on studies of membrane proteins using electron crystallography.

Bruce Novak of LBL's Materials Sciences Division and UC Berkeley's College of Chemistry has won a Presidential Young Investigator Award from the National Science Foundation. Novak has been working on the synthesis and study of advanced polymeric materials, conducting polymers, optical storage devices, and nonlinear optics. Novak and Paul Alivisatos, also a Materials Sciences researcher and chemistry professor, have both been awarded Sloan Research Fellowships, which are presented annually to outstanding young scientists. Each fellowship provides $30,000 over two years, and the researchers are given wide latitude in how they use the funds for their research.

Jorge Llacer of the Engineering Division received a fellowship from the Dutch Organization for Scientific Research. He is spending six months at the University of Utrecht in The Netherlands conducting research on medical image reconstruction.

Steven Martin of LBL's Cell and Molecular Biology Division and UC Berkeley's Molecular and Cell Biology Department has been awarded a Guggenheim Fellowship for 1991. His current research involves studies of yeast and the chemical reactions that control growth and division in these single-celled organisms.

Ron Scanlan of the Accelerator and Fusion Research Division was awarded the 1991 Particle Accelerator Conference Technology Award by the Institute of Electrical and Electronics Engineers. He shared the award with David Larbalestier of the University of Wisconsin for the development of niobium-titanium superconducting materials for application in superconducting magnets.

At the same conference, American Physical Society Fellowship Awards were given to two LBL researchers: Alper Garren of AFRD, in recognition of his work in synchrotron lattice design, including his work on Fermilab's Tevatron and for the Superconducting Super Collider; and Klaus Halbach of the Engineering Division, for his development of the permanent magnet wiggler and undulator insertion devices used in synchrotron radiation sources. Glen Lambertson of AFRD received one of the conference's two U.S. Particle Accelerator School Prizes for his accelerator work, including beam extraction, beam impedance, and particle detection.

Nobel laureate Yuan T. Lee, a researcher in LBL's Chemical Sciences Division and professor in UC Berkeley's College of Chemistry, has become the 17th member of the UC system to be awarded the honorary title, ''University Professor."

John Prausnitz of LBL's Chemical Sciences Division and UC Berkeley's Chemical Engineering Department received the Corcoran Award from the American Society for Engineering Education. The annual award is given in recognition of outstanding contributions to the journal Chemical Engineering Education. Prausnitz was the coauthor of a paper entitled, "Chemical Engineering in the Spectrum of Knowledge."

Judith Klinman of Applied Science and the UC Berkeley College of Chemistry has been named a Miller Research Professor for the 1992-93 academic year. The award gives scholars a semester break from teaching and administrative duties to concentrate fully on research.

Larry Myer of the Earth Sciences Division received an award at the U.S. Symposium on Rock Mechanics in the basic research category for his paper, "Transmission of Seismic Waves Across Single Natural Fractures." The paper was co-authored by Neville Cook of ESD and former graduate student Laura Pyrak-Nolte.

Patents Awarded

Patents were awarded recently for inventions by these LBL researchers:

Mehdi Balooch, Donald Olander, and Richard Russo for their long-laser-pulse method of producing thin films;

Donald Morris for preparation of highly oxidized RBa2Cu4O8 superconductors;

Stephen Derenzo and William Moses for lead carbonate scintillator materials;

William Moses for scintillator materials containing lanthanum fluorides;

Ian Brown, Robert MacGill, and James Galvin for an apparatus for coating a surface with a metal utilizing a plasma source;

Billy Loo and Frederick Goulding for a method and apparatus for measuring lung density by Compton backscattering;

Kenneth Raymond, Barry Engelstad, John Huberty, and David White for imaging agents for in vivo magnetic resonance and scintigraphic imaging.


Rock-pore imaging may aid oil recovery

If a picture is worth a thousand words, can it also be worth its weight in gold-black gold, that is? Maybe so, if it is in the hands of a group of researchers in the Earth Sciences Division. Using pictures of the pores in a sample of sedimentary rock, they are accurately predicting the rock's permeability-a critical factor in oil recovery.

Permeability is a measure of the ease with which liquids or gases move through a rock. It is largely determined by the size and shape of the rock's pores. During oil-recovery operations, in which water pushes oil through permeable rocks, pores become blocked with water, reducing oil permeability to zero.

"This is the reason why about 50 percent of the oil remains in the field and is never recovered," says LBL geophysicist Robert Zimmerman. If a substantially greater percentage of oil is to ever be recovered from reservoir rock, Zimmerman says, geologists will need a better understanding of how pore structures and fluids in pores affect permeability, displacement, and other basic hydraulic properties. Such effects are difficult to measure, however, because permeability can vary greatly over a small area.

Zimmerman and colleagues Erika Schlueter, Neville Cook, and Paul Witherspoon have designed a model that predicts hydraulic permeability based on the microscopic geometry of rock pore space.

"We assume that the pores are an interconnected network of cylindrical tubes, with various cross-sectional shapes, arranged on a cubic lattice," Zimmerman says. "We then do a cross-sectional analysis of the area and perimeter of individual pores from two-dimensional scanning electron micrographs of rock sections."

Prior to imaging, the pores are filled with a liquid metal alloy that, after it solidifies, enables the computer to determine what in the micrograph is rock and what is pore space.

"Once the computer has analyzed the image, the permeability of each individual pore is calculated from the basic equations of fluid mechanics," Zimmerman says.

"An effective medium theory originally developed for solid- state physics is then used to find an effective average permeability of the pores in the network.

"Our model is successful for predicting permeability in a core sample when there is a single type of fluid. We want to extend it now to relative permeabilities involving combinations of fluids and gases."


NEWS review

Milk gene research may benefit biotechnology

In research that could lead to the manufacture of medically useful proteins, LBL cell biologists Mina Bissell and Christian Schmidhauser are collaborating with Gerald Casperson of Monsanto Corporation to explore the specifics of gene regulation-how a gene turns on and off. Besides the potential pharmaceutical applications, understanding normal gene regulation may be a crucial step toward understanding misregulated cells-knowledge that would be tremendously useful for understanding how cells become cancerous when normal cellular regulation is somehow lost.

Ten years ago, Bissell and her colleagues at LBL proposed that, in combination with hormones, the extracellular matrix (ECM)-the material that surrounds the cells-can actually send signals to the cell and regulate tissue-specific genes. For example, such a signal can initiate the production of beta casein (a milk protein) in mammary cells. The concept has since been amply confirmed in Bissell's laboratory and many others; what remains is the complex task of understanding the molecular mechanisms involved.

Like other proteins, milk proteins are produced in response to messenger RNA from a specific gene. If the genes do not send the message, protein synthesis will not begin. Although the genes for many milk proteins have been identified, located on the DNA molecule, and cloned, their regulation-and when it occurs-is poorly understood.

Bissell's group in the Cell and Molecular Biology Division has shown that milk protein genes require a particular cellular environment before the protein "recipe" can be transcribed, or copied, from DNA into messenger RNA. Once it has been transcribed, the protein recipe can be carried from the cell's nucleus into its cytoplasm to initiate protein synthesis.

Working with the gene for a bovine milk protein (bovine beta casein), the LBL-Monsanto team has come close to pinpointing the location of this gene regulation in the "promoter region"-a sequence of nucleotides adjacent to the gene along the DNA molecule. From a segment of DNA approximately 1700 nucleotide base pairs in length, they have narrowed their search to only 160 base pairs.

To locate the segment of DNA critical for transcribing the milk protein gene, the scientists used a technique involving a "reporter" gene, chloramphenicol acetyltransferase, which does not occur in mammalian cells. To this reporter gene, they hooked up different lengths of nucleotide sequences from the promoter region. Where the reporter gene became highly active, the team identified a stretch of DNA which clearly was involved in regulation.

"When we compared a long segment of the promoter region to a shorter segment, we found that activity dropped dramatically in the shorter one," says Schmidhauser. "We knew that in the remaining region of the longer segment there is a sequence that regulates the gene."

The researchers studied two different extracellular matrices in various combinations with three hormones to arrive at the correct formula for triggering gene expression. They concluded that the milk gene requires a particular configuration of extracellular matrix and three lactogenic hormones-insulin, hydrocortisone, and especially prolactin-to start the transcription that begins the process of protein synthesis. Expression of the chloramphenicol acetyltransferase gene increased as much as 150-fold when the cells were grown on the extracellular matrix rather than on plastic.

"If you plate cells on normal tissue culture plastic, even in the presence of all three lactogenic hormones, the cells don't produce milk protein," says Schmidhauser. "This clearly shows that extracellular matrix regulates beta casein expression."

The LBL scientists received Exploratory Research and Development funds to pursue the study. "Monsanto became interested because, by mimicking a mammary gland in culture and 'tricking' the cells into thinking they are producing milk, we could get them to produce copious amounts of a desired protein," Bissell says. "Our interaction with Monsanto has been very fruitful, and we hope it will lead to additional technology transfer."

Geologist unearths Hayward Fault history

LBL earthquake geologist Pat Williams has unearthed the beginnings of a chronological history of the Hayward Fault. His findings help refine the forecasting of future activity of the fault, which runs from north to south through the East Bay Area in Northern California.

After a months-long analysis of new geologic evidence that he uncovered in Fremont, Calif., Williams has found that the two most recent ruptures of the fault evident at that site occurred no more than 190 years apart.

Williams excavated two trenches across the southern Hayward Fault. This segment of the fault-the 35 kilometers between San Leandro and Fremont-last ruptured in a major quake in 1868. If the recurrence interval (190 years) repeats itself, the segment will break again some time before the year 2058.

Along the northern 45-kilometer segment of the fault between San Leandro and San Pablo Bay, the last major rupture was in 1836. Researchers have yet to determine the dates of the prior quakes along this segment.

"In trying to forecast when the next large Hayward Fault earthquake will occur," Williams says, "geologists have had very little evidence to guide them. Because the Hayward Fault runs through an urban area, decades of building and development have scrambled much of the geologic record. Landslides also have obscured the record.

''It is apparent to those of us who have examined the fault zone that a number of prior earthquakes have occurred, but nobody has known when, how often, or the magnitude of the events. The trenches we have excavated begin to give us a picture of the seismic history of the fault."

Williams found evidence of five ruptures of the fault occurring over the last 2000 years and equivocal evidence of two other events. Sedimentary processes during particular periods at this site may have obscured additional seismic events during the past 2000 years.

Williams notes that his findings reinforce the conclusions reached in August 1990 by the Working Group on Earthquake Probabilities in the San Francisco Bay Area. The group concluded that there is a 45 percent likelihood of a major rupture of the Hayward Fault within the next 30 years.

Soviet energy study reveals inefficiencies

Energy use in the Soviet Union is far less efficient than in the United States or Europe, says LBL energy analyst Lee Schipper of the Applied Science Division. The Soviet Union produces more steel and cement, and ships much more freight per capita, than do Japan, the United States, or the Western European countries. But Soviet citizens have less space in their homes, fewer appliances, far fewer cars, and travel only one- third as much as their Western counterparts. These are some of the findings of a new study by Schipper and Ruth Caron Cooper, formerly of LBL.

Their comparison of the energy intensities of key activities in the industrial, transportation, residential, and service sectors shows that in most cases the Soviet Union uses more energy than Western countries per unit of activity.

"Opportunities for energy conservation in the Soviet Union are truly enormous," the report states. At the same efficiency level as Europe, Soviet fuel use in 1985 could have been one-third lower than it was, for example. But the same types of institutional obstacles that have engendered other economic problems-such as artificially low energy costs and the fact that energy users do not pay directly for the energy they use-have so far prevented the realization of these potential savings. Many individual apartments do not have utility meters or controls, and buildings are often not metered so energy is wasted.

The most important ingredient for promoting energy conservation is economic reform, Schipper says. "The energy problem mirrors the general economic problem in the Soviet Union. Everything is used inefficiently. Food is priced way below what it really costs, and so is housing; the economy is full of distortions."

Schipper says the process of "getting the Soviets to think differently about energy use" was as important as getting the results. "The energy authorities weren't thinking of people as part of the energy problem; they don't have people in the system in their models."

LBL team records magnetic signals from human heart

Only a few months after their first demonstration of supersensitive magnetometers made of multilayer thin films of high-temperature superconductors, physicist John Clarke and his co-workers have used their detectors to record faint magnetic fields produced by the human heart. The work was done with the team's collaborators in Conductus, a start-up company in Sunnyvale, Calif.

With a detector emplaced in a copper-and-steel-shielded enclosure, the team measured magnetic fields from the hearts of three of their own team members. The magnetometers can detect magnetic fields of about 10 nanogauss-approximately one hundred million times weaker than the magnetic field of the earth.
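
A quick order-of-magnitude check of that comparison, assuming a typical Earth-surface field of about half a gauss:

```python
# Comparing the magnetometer's sensitivity with the Earth's magnetic field.
earth_field_gauss = 0.5       # typical surface value (assumed; varies with location)
detectable_gauss  = 10e-9     # 10 nanogauss, from the measurement above

ratio = earth_field_gauss / detectable_gauss
print(f"Earth's field is roughly {ratio:.0e} times stronger")
# ~5e7, the same order of magnitude as the "one hundred million" figure quoted.
```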

The noninvasive detection of magnetic signals from the human heart and brain (normal by-products of the electrical activity of these organs) is among the most promising applications of the new class of superconducting instruments pioneered by Clarke's group. Other applications include laboratory instruments, geophysical surveying, and nondestructive testing.

The magnetometer makes use of thin films of the high-temperature superconductor yttrium-barium-copper oxide (YBCO). It is in two parts-a superconducting quantum interference device (SQUID) and a flux transformer, which picks up magnetic signals over a comparatively large area and concentrates them in a much smaller, multiturn coil.

Magnetocardiograms are not in themselves new; many groups around the world have made similar measurements using SQUIDs based on low-temperature superconductors for more than a decade. However, these systems involve relatively cumbersome and expensive dewars (to contain the necessary liquid helium). In this case, the inner glass dewar of a Thermos bottle was used to hold the liquid nitrogen needed to operate the SQUID. "This instrument is still very much in the prototype stage," Clarke says. "To make a viable system, one would like appreciably higher sensitivity, and an array of 50 to 100 magnetometers. However, we believe this is the first example of a real measurement by a high-temperature, thin-film SQUID. It shows just how rapidly the field is evolving."