SULI
CCI
PST
FaST

Physics Abstracts:

1-D Simulations of Metallic Foams Heated by Ion Beam Energy Deposition. ALEX ZYLSTRA (Pomona College, Claremont, CA, 91711) JOHN BARNARD (Lawrence Berkeley National Laboratory, Berkeley, CA, 94720)

One-dimensional simulations of aluminum foams of various initial average densities (modeled as slabs of solid metal separated by low density regions) heated by volumetric energy deposition have been conducted with a Lagrangian hydrodynamics code, DISH (Deeply Simplified Hydrodynamics by R. More), using a van der Waals equation of state (EOS). The resulting behavior has been described to facilitate the design of future warm dense matter (WDM) experiments. Deposition in the simulations ranges from 15 to 30 kJ/g total energy and from 0.075 to 0.9 ns total pulse length, resulting in temperatures from 1 to 4 eV. The peak temperature reached in the foam was found to scale faster than linearly with the energy deposition; it increases with increasing density up to a peak at approximately 75% of solid initial average density, decreases rapidly with density beyond that peak, and is essentially independent of the pulse length for pulses shorter than the macro hydro time of approximately 1 ns. The peak pressure increases rapidly with increasing density, increases with increasing energy, and is roughly independent of the pulse length for lengths on the order of the macro hydro time. For pulse lengths of approximately the hydro time for one slab of the foam (~0.1 ns), an increase in the maximum pressure is observed. The expansion velocity is proportional to the density for pulses on the order of the hydro time of one slab of the foam; for longer pulses a dramatic increase in the expansion velocity is observed at approximately 75% of solid initial density. We find that the homogenization time of the foam increases with increasing pulse length, and the remaining inhomogeneities in the homogenized foam decrease with increasing density. These results will help future experiments examine the equation of state in the WDM regime.
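
For context, a van der Waals EOS of the kind named above relates pressure, density, and temperature through two material constants. Written per unit mass, in the generic textbook form rather than DISH's specific parameterization (a and b are the attraction and excluded-volume constants, R the specific gas constant):

    p = \frac{\rho R T}{1 - b\rho} - a\rho^2

The first term stiffens as the excluded volume is approached, while the attractive second term permits the liquid-vapor coexistence relevant to WDM expansion.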

3 Inch Double GEM for an X-Ray Fluorescence Detector. DERREK ANDERSON and JAMEL GRAY (Southern University, Baton Rouge, LA, 70813) DR. D. PETER SIDDONS (Brookhaven National Laboratory, Upton, NY, 11973)

Two 3-inch diameter gas electron multipliers (GEMs) are used to build a high-gain X-ray gas detector for Extended X-Ray Absorption Fine Structure (EXAFS). The X-rays ionize the gas and the electrons drift towards the first GEM. The strong electric field in the GEM multiplies the electrons by impact ionization. The second-stage GEM further amplifies the electrons by the same process. The advantage of the double GEM is that it provides two stages of electron amplification, improving the signal magnitude without introducing noise. The charge collected from the second GEM is read out by a Keithley amplifier. We have tested the double GEM by detecting dilute amounts of Mn and Fe in a tree leaf sample.

A Comparative Study of GEM Foils from Tech Etch and CERN. JONATHAN HERSTOFF (Muhlenberg College, Allentown, PA, 18104) CRAIG WOODY (Brookhaven National Laboratory, Upton, NY, 11973)

Gas Electron Multipliers (GEMs) were originally developed at CERN and are now being used in applications such as charged particle tracking. They consist of a thin polyimide foil which is copper clad on both sides and contains a large number of small holes extending through the foil. When high voltage is applied across the copper electrodes, a large electric field is developed inside the holes, which is used to produce gas gain. Different methods of manufacturing these GEMs can affect how the GEMs behave under high voltage. In this study, a company called Tech Etch produced three different batches of foils, each using a different chemical etching method. Although the measurements of gain versus time varied widely from foil to foil, contrary to what was expected, there does appear to be a correlation between the size of the holes and the performance of the GEM. In general, foils with holes that have a larger polyimide area exposed tended to exhibit poorer gain stability than those with less exposed polyimide.

A Systematic Study of the Effect of Magnetized Oxygen on a Photon Beam. ALISHIA FERRELL (Florida A & M University, Tallahassee, FL, 32307) DR. CAROL Y. SCARLETT (Brookhaven National Laboratory, Upton, NY, 11973)

A systematic study of the effects of oxygen oscillation on a laser beam propagating through an electromagnetic field (EMF) was deemed necessary due to the physical setup of the main experiment concerning space-time curvature. The control or "shunt" measurements for the main experiment were made by propagating the laser outside of a vacuum chamber alongside a superconducting magnet. However, this caused the beam to travel extremely close to the lead wires. This raised the question: was the oxygen movement created by the EMF deviating the laser beam enough to corrupt the control data? To answer this question, a systematic study had to be done. The study used a 514 nm helium neon laser generator, several focusing and defocusing optical lenses, a quad-cell photoreceiver, and a quadrupole magnet capable of oscillating its current. After a number of calibrations of the photoreceiver, we ramped a 10 A alternating current through the quadrupole while averaging over the recorded events. While the light was hitting the photoreceiver, data were collected with a data acquisition system and then converted into text files. A FORTRAN program using fast Fourier transforms (FFTs) was written to analyze these numbers; it showed a small amount of movement in the X direction, which produced a noticeable signal. Further studies must be done to ensure that this signal is not merely coincidental.
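
As an illustration of the analysis step described above, the sketch below (Python standing in for the authors' FORTRAN; all sampling parameters are hypothetical) searches quad-cell position data for a spectral line at the magnet drive frequency:

    import numpy as np

    # Sketch: look for beam motion at the magnet modulation frequency in
    # quad-cell position data via an FFT. All numbers are hypothetical.
    fs = 1000.0                              # sampling rate, Hz
    t = np.arange(0, 10, 1 / fs)             # 10 s of data
    f_drive = 5.0                            # assumed magnet drive frequency, Hz
    x = 0.01 * np.sin(2 * np.pi * f_drive * t) \
        + np.random.normal(0, 0.05, t.size)  # position signal buried in noise

    power = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(t.size, d=1 / fs)
    peak = freqs[np.argmax(power[1:]) + 1]   # skip the DC bin
    print(f"strongest spectral line at {peak:.2f} Hz")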

A Systematic Study of the Effect of Magnetized Oxygen on a Photon Beam. JOSEPH HEARD (Community College of Philadelphia, Philadelphia, Pa, 19130) CAROL SCARLETT (Brookhaven National Laboratory, Upton, NY, 11973)

A systematic study of the effect that oxygen oscillation, arising from the paramagnetism of oxygen, has on a laser beam propagating through an electromagnetic field (EMF) was deemed necessary due to the physical setup of the experiment concerning space-time curvature. The control or "shunt" measurements for the main experiment were taken by propagating the laser outside of the vacuum chamber along its side. However, this caused the beam to travel extremely close to the lead wires of the superconducting electromagnet. This raised the question: was the oxygen movement created by the EMF of the lead wires deviating the laser beam enough to corrupt the control data? To perform a systematic study of this concern we used a 514 nm helium neon laser generator, several focusing and defocusing optical lenses, a quad-cell photoreceiver, and a quadrupole magnet capable of oscillating its current. FORTRAN programs containing FFTs were used to accomplish the Fourier analysis necessary for the data analysis. The systematic study was conducted by propagating the laser through an EMF ramped at 80 MHz. Different amounts of oxygen were then exposed to the field, and the photoreceiver tracked the laser beam movement. The results were analyzed to see if the oxygen oscillation was visible in the data. We observed a large peak in the "x" (horizontal) direction, which might indicate that the signal seen is partially due to magnetized oxygen. However, further studies must be conducted before conclusive results can be reached. Future studies would include varying the type of magnet, adjusting the percentage of gaseous oxygen flowing through the magnet, and changing the position of the magnet in relation to the photon beam.

A Systematic Study of the Effect of Magnetized Oxygen on a Photon Beam. RACHAEL MILLINGS (Suffolk County Community College, Selden, NY, 11784) DR. CAROL SCARLETT (Brookhaven National Laboratory, Upton, NY, 11973)

When light is propagated through a ramped magnetic field into a photoreceiver in the presence of air, there exists a possibility that the observed laser beam deviation is due to the paramagnetic behavior of gaseous oxygen in the air. If the signal observed is not negligible, then the deviation cannot be used as a shunt for beam deviation due to light propagated in a vacuum. The purpose of this systematic study is to determine the significance of gaseous oxygen's magnetic susceptibility relative to the observed beam deviation. A photon beam from a 514 nm HeNe laser generator was propagated through a defocusing lens and a focusing lens to focus the beam; two mirrors were then used to reflect the beam through a quadrupole magnet and into a photoreceiver. As an alternating electric current of 10 amperes was directed through the magnet, the amount of light entering the photoreceiver was measured using a data acquisition system to interface the photoreceiver with a computer. After the data were converted into text files, a code written in FORTRAN analyzed them by the method of fast Fourier transforms. On a graph of the amplitude of the light as a function of time, a signal was observed at the same frequency as that at which the current in the magnet was alternating, as expected. While the signal was relatively large, future studies must be conducted to yield conclusive results. This systematic study is part of a larger experiment researching the space-time curvature of light passing through a magnetic field in a vacuum as a possible validation of physical theory which postulates the existence of gravity in the absence of mass.

Analysis of Beta-Decay of 51,52K. EMILY JACKSON (Knox College, Galesburg, IL, 61401) MICHAEL CARPENTER (Argonne National Laboratory, Argonne, IL, 60439)

The beta decay of 51,52K has been analyzed from data taken at TRIUMF (TRI-University Meson Facility) in Vancouver, Canada. The high-purity Ge detectors were calibrated with respect to energy and efficiency using standard calibration sources (152Eu, 133Ba, and 57Co). The peaks in the beta decay spectra from the two K isotopes were identified, and their energies and intensities were fitted. These results were compared with the energies and intensities published by F. Perrot et al. to check for consistency; they were found to agree with the published results. From another data set, obtained at the ATLAS accelerator at Argonne with the Gammasphere array, the level scheme for 52Ti was established and expanded considerably in comparison to the known level scheme. Any further research into these neutron-rich nuclei will require a more powerful accelerator than the one used in this experiment, in addition to a radioactive beam.
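
By way of illustration, an energy-efficiency calibration of this kind fits a smooth function through the known source lines. The sketch below (Python, with invented efficiency values and a log-log polynomial, one common parameterization; not the analysis actually used here) shows the idea:

    import numpy as np

    # Hypothetical measured efficiencies at known calibration lines (keV).
    E = np.array([121.8, 244.7, 344.3, 356.0, 779.0, 964.1, 1112.1, 1408.0])
    eff = np.array([0.012, 0.0085, 0.0068, 0.0066, 0.0037, 0.0031, 0.0028, 0.0023])

    # Fit log(efficiency) as a cubic in log(energy), a standard HPGe form.
    coeffs = np.polyfit(np.log(E), np.log(eff), deg=3)

    def efficiency(energy_kev):
        return np.exp(np.polyval(coeffs, np.log(energy_kev)))

    print(f"interpolated efficiency at 500 keV: {efficiency(500):.4f}")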

Analysis of Consistency in Channel Pedestal Readings for the Track Imaging Cerenkov Experiment (TrICE) Camera as a Function of Temperature and Time. ANA CHACHIAN (Florida International University, Miami, FL, 33199) KAREN BYRUM (Argonne National Laboratory, Argonne, IL, 60439)

Track Imaging Cerenkov Experiment (TrICE) is a telescope prototype on site at Argonne National Laboratory. Its camera is composed of an array of 16 high definition multi-anode photomultiplier tubes (MAPMTs) that give an angular pixel spacing (0.08deg) better than most existing Cerenkov shower detecting telescopes (~0.15deg). The TrICE telescope is a testbed for the development of a next-generation gamma-ray telescope. TrICE has been observing cosmic rays since earlier this year. The stability of the TrICE camera performance was analyzed through the study of background noise pedestals recorded by its channels to determine if these are constant under the background sky. The method involved generating histograms that compared the pedestal signals for each channel over different days, times, and temperatures, using a C++ interfaced with Root macro. The results of this analysis concluded that the pedestal means were constant over a variety of conditions and are therefore reliable to reproduce accurate Cerenkov signals. The result of this analysis is the first step in understanding the data taken by the camera. Further steps to this end include research of each channel’s gain as a function of these pedestal fluctuations.
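
A minimal sketch of this kind of pedestal-stability check (Python with synthetic data, standing in for the C++/ROOT macro described above; array shapes and the drift tolerance are hypothetical):

    import numpy as np

    # Synthetic pedestal data: runs taken on different days/temperatures.
    n_runs, n_channels, n_events = 5, 256, 1000
    runs = np.random.normal(100.0, 2.0, (n_runs, n_channels, n_events))  # ADC

    means = runs.mean(axis=2)                      # pedestal mean per run/channel
    drift = means.max(axis=0) - means.min(axis=0)  # spread across runs
    unstable = np.where(drift > 1.0)[0]            # flag drift > 1 ADC count
    print(f"{unstable.size} of {n_channels} channels exceed the drift tolerance")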

Analysis of Nuclear Semi-Inclusive Deep Inelastic Scattering Events for Charged Pions Using FORTRAN. BRYAN RAMSON (Howard University, Washington, DC, 20059) KAWTAR HAFIDI (Argonne National Laboratory, Argonne, IL, 60439)

Because of the nature of the strong interaction, it is impossible to directly observe free quarks. Therefore their fundamental properties must be studied through the results of deep-inelastic scattering of electrons off stationary nuclei. The Continuous Electron Beam Accelerator Facility (CEBAF) at the Thomas Jefferson National Accelerator Facility (JLab) provides an electron beam of sufficient energy (5.014 GeV) to study such reactions. The electron beam was used on targets of deuterium, carbon, iron, and lead. Particles produced in the reactions were detected by the CEBAF Large Acceptance Spectrometer (CLAS), and analysis of the data is being conducted through a collaboration of teams from JLab and Argonne National Laboratory. One area of analysis is the production of pions in the nuclear medium and the relationship that their production has with the properties of quark propagation in the nuclear medium. At the time of writing, the analysis was not complete.
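
For context, the semi-inclusive DIS variables that organize such an analysis follow directly from the measured electron and pion. A sketch with illustrative values (not CLAS data; the scattered-electron and pion energies below are invented):

    import numpy as np

    M = 0.938272                              # proton mass, GeV
    E_beam = 5.014                            # beam energy from the abstract, GeV
    E_prime, theta = 2.5, np.radians(20.0)    # scattered electron (hypothetical)
    E_pi = 1.2                                # detected pion energy (hypothetical)

    Q2 = 4.0 * E_beam * E_prime * np.sin(theta / 2.0) ** 2  # photon virtuality
    nu = E_beam - E_prime                                   # energy transfer
    x = Q2 / (2.0 * M * nu)                                 # Bjorken x
    z = E_pi / nu                                           # pion energy fraction
    print(f"Q2 = {Q2:.2f} GeV^2, x = {x:.3f}, z = {z:.2f}")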

Analysis of the Particle Identification Capabilities of the Proposed Helical Orbit Spectrometer (HELIOS). ZACHARY GRELEWICZ (University of Chicago, Chicago, IL, 60637) DR. BIRGER BACK (Argonne National Laboratory, Argonne, IL, 60439)

In order to study nuclear reactions involving short-lived nuclei, inverse kinematic reactions must be used. Therefore, a novel spectrometer, HELIOS, has been designed to optimize the detection of particles in inverse kinematic reactions. In principle, the cyclotron period of an ejectile traveling along a helical orbit in a uniform magnetic field corresponds to a unique charge-to-mass ratio. However, if the ejectile is intercepted before completing a full period, the extended geometry of the detector may be used to determine not only a charge-to-mass ratio, but a unique mass. Using the Geant4 toolkit provided by the European Organization for Nuclear Research (CERN), as well as analytical techniques, the data collected by the detector from proton, deuteron, triton, helium-3, and alpha particle ejectiles were simulated. Then a program for identifying particles based on time-of-flight, energy of impact, and distance traveled along the axis of the detector, as well as an analysis of the characteristics of unidentifiable particles, was developed using the C++ programming language, with visualizations provided by CERN's ROOT system. It was found that successful particle identification depends most strongly on the lab angle of the ejected particles, with different lab angle ranges and acceptances for the five particles. Most particles may be identified by their location in the phase space, with few areas of phase space containing overlapping particles.
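
The identification principle rests on the velocity-independent cyclotron period, T = 2*pi*m/(qB). A small sketch (Python; the 2 T field is illustrative, not the actual HELIOS design value) shows how the five ejectiles separate by mass-to-charge ratio:

    import numpy as np

    u = 1.66053906660e-27   # atomic mass unit, kg
    e = 1.602176634e-19     # elementary charge, C
    B = 2.0                 # assumed solenoid field, tesla

    # (mass in u, charge state); note triton and 3He have nearly equal mass
    # but different charge, hence clearly different periods.
    particles = {"p": (1.0073, 1), "d": (2.0141, 1), "t": (3.0160, 1),
                 "3He": (3.0160, 2), "alpha": (4.0026, 2)}
    for name, (mass_u, q) in particles.items():
        T = 2 * np.pi * mass_u * u / (q * e * B)
        print(f"{name:>5}: T_cyc = {T * 1e9:6.1f} ns")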

Analysis of the CDF II Data in Search of the Higgs Boson Decaying to Two Photons. CALLIE DEMAY (University of Illinois, Urbana-Champaign, IL, 61820) CRAIG GROUP (Fermi National Accelerator Laboratory, Batavia, IL, 60510)

Although the Standard Model is remarkably accurate, the one fundamental particle needed to complete the model, the massive Higgs boson, has not yet been discovered. In this investigation, we searched for a non-Standard Model, fermiophobic Higgs, which would decay to two photons. This decay mode is predicted to have a very small branching fraction according to the Standard Model; however, some models predict a higher branching fraction. We initially optimized selection criteria in order to become as sensitive as possible in the signal region. Using the 2 fb-1 of data provided by Fermilab's Tevatron and collected by the Collider Detector at Fermilab (CDF) experiment, we were unable to see a signal for the Higgs; therefore, the focus of our research shifted to placing a limit on the cross section for the Higgs in the fermiophobic model. We were able to place a lower limit of 99 GeV on the mass of the fermiophobic Higgs. This limit is currently the best in the world for a hadron collider. The previous limits at Fermilab included one by CDF at 82 GeV and one by DZero at about 90 GeV. However, the world's best limit of 109.7 GeV was set by the Large Electron-Positron Collider (LEP) in Switzerland. Although we were not able to find a Higgs signal, the techniques used to improve sensitivity to photon events will be useful in the next generation of collider experiments, which will be more sensitive to the small branching fraction of diphoton events.

Analysis of the Directivity of a Defective R-F (Radio-Frequency) Coupler Using a Network Analyzer. ZEPHRA BELL (Southern University and A & M College, Baton Rouge, LA, 70807) DR. TERRENCE REESE (Fermi National Accelerator Laboratory, Batavia, IL, 60510)

The R-F couplers that we tested were used in a project called HINS (High Intensity Neutrino Source) at Fermi National Accelerator Laboratory (Fermilab). These couplers are arranged in a transmission line at Fermilab and are crucial for the transfer of power in an R-F cavity. However, one of the couplers in the transmission line appeared to be defective: with high power sent down the line, the signal at the reflected-power port was too low relative to that at the forward port, measuring only 13 dB when it should have been 60 dB. Initially, three different methods were used to test the defective coupler. 1) The attenuation of the connector cables, which attach the port of the coupler to other pieces of the R-F cavity, was measured; the cable attenuation and the reflection load were good, meaning the coupler was not being fed through faulty cables. 2) Low power was generated through the switch; this still gave a low signal of 13 dB. 3) Finally, the coupler was physically removed and tested directly; it still gave a low signal of 13 dB. Hence, the coupler was brought to our team to see if we could determine its defects through rigorous testing of the ports and internal modifications.

Analysis of X-Ray Spectra Emitted from the VENUS ECR Ion Source. JANILEE BENITEZ (California State University, East Bay, Hayward, CA, 94542) DANIELA LEITNER (Lawrence Berkeley National Laboratory, Berkeley, CA, 94720)

The Versatile Electron Cyclotron resonance ion source for NUclear Science, VENUS, produces its record-breaking ion beam currents and high charge state distributions because it uses strong magnetic fields to confine the plasma and high microwave frequencies to heat it. The magnetic fields are produced using liquid-helium-cooled superconducting coils. While in operation, VENUS produces significant quantities of bremsstrahlung, in the form of x-rays, through two processes: 1) electron-ion collisions within the plasma, and 2) collisions of electrons lost from the plasma with the plasma chamber wall. The energy deposited by electron collisions with the chamber wall presents a significant heat load on the cryostat needed to keep the coils superconducting. In order for VENUS to reach its maximum operating potential at 10 kW of 28 GHz microwave heating, the heat load posed by the emitted bremsstrahlung must be understood. A code has been written in the Python programming language to analyze the recorded bremsstrahlung spectra. The code outputs a spectral temperature and a total integrated count number for each spectrum. Bremsstrahlung spectra are analyzed and compared while varying two parameters: 1) the heating frequency, 18 and 28 GHz, and 2) the magnetic field gradient at the electron resonance zone, 44% and 70%.
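
As a sketch of the spectral-temperature extraction (Python, like the code described, but with synthetic data and a simple exponential-tail model rather than the actual VENUS analysis): the high-energy tail of a bremsstrahlung spectrum falls roughly as exp(-E/kT), so a straight-line fit to log(counts) versus E yields -1/kT.

    import numpy as np

    kT_true = 50.0                                   # assumed temperature, keV
    E = np.linspace(20, 300, 141)                    # photon energy bins, keV
    counts = 1e5 * np.exp(-E / kT_true) * np.random.uniform(0.9, 1.1, E.size)

    slope, intercept = np.polyfit(E, np.log(counts), 1)
    print(f"fitted spectral temperature: {-1.0 / slope:.1f} keV")
    print(f"total integrated counts: {counts.sum():.3e}")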

Application to Determine and Control Twiss Parameters of the SNS Accelerator Beam. JENS VON DER LINDEN (University of Pennsylvania, Philadelphia, PA, 19104) SARAH COUSINEAU (Oak Ridge National Laboratory, Oak Ridge, TN, 37831)

The accelerator ion beam at the Spallation Neutron Source (SNS) is repeatedly focused and defocused by a series of quadrupole magnets as it travels to the target to create neutrons by means of spallation. Physicists are interested in characterizing the accelerator beam in order to understand and improve the focusing and transport of the beam. Wire scans are employed to measure the transverse density profile of the beam. With a minimum of 3 distinct wire scans, the ion beam Twiss parameters, which characterize the phase space properties of the beam, can be determined. In this project, a Graphical User Interface (GUI) application was developed in Java to automate the determination of Twiss parameters from the wire scan data files and to determine how quadrupoles should be varied to change the Twiss parameters at multiple arbitrary locations. The software is coded within the XAL framework, an existing Java library developed at the SNS accelerator which is used for all GUI-based physics software applications. A major contribution of the GUI developed in this project is that it is generally applicable to any area of the accelerator containing a minimum of 3 wire scanners, whereas existing applications were tied to specific parts of the accelerator. Twiss results can be saved and compared through graphing and averaging. As the SNS is a high-intensity accelerator which requires strict control over beam losses, it is very important that the beam in the accelerator be transported according to the optimum design configuration. Any deviation from the optimum transport configuration can lead to beam loss that can limit the obtainable beam power in the accelerator. This application will aid in measuring the beam state at any point in the accelerator, and will subsequently allow a user to make adjustments to the beam state in order to restore the optimum configuration and ensure well-controlled beam transport.
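
For context, the standard three-monitor reconstruction behind such a tool is linear in the beam sigma-matrix elements: at scanner i with transfer matrix M_i from the reconstruction point, the measured rms size obeys sigma_i^2 = M11_i^2 s11 + 2 M11_i M12_i s12 + M12_i^2 s22. A sketch in Python (the application itself is Java/XAL; the matrices and beam sizes below are invented):

    import numpy as np

    # Transfer matrices to three wire scanners, and rms sizes measured there.
    M = [np.array([[1.0, 2.0], [0.1, 1.2]]),
         np.array([[0.5, 4.0], [-0.2, 0.4]]),
         np.array([[-0.8, 3.0], [-0.3, -0.5]])]
    sizes = np.array([2.1, 3.4, 2.8])            # consistent units assumed

    A = np.array([[m[0, 0]**2, 2 * m[0, 0] * m[0, 1], m[0, 1]**2] for m in M])
    s11, s12, s22 = np.linalg.solve(A, sizes**2)

    emit = np.sqrt(s11 * s22 - s12**2)           # rms emittance
    beta, alpha = s11 / emit, -s12 / emit        # Twiss parameters
    print(f"emittance {emit:.3f}, beta {beta:.2f}, alpha {alpha:.2f}")

With more than three scanners the same system is simply solved in the least-squares sense, which is what makes the tool applicable to any region with at least three wire scanners.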

Assembly of a Time-Correlated Single Photon Counting Experimental Setup. SEAN SWEETNAM (Carleton College, Northfield, MN, 55057) RANDY ELLINGSON (National Renewable Energy Laboratory, Golden, CO, 80401)

Recent developments in nanotechnology have created materials capable of improving the efficiency of solar cells, provided the basic photophysics involved are well understood. To properly characterize and understand the charge carrier processes which occur in nanomaterials, it is necessary to use very fast light-gathering techniques, with time resolutions as small as tens of picoseconds. Time-Correlated Single Photon Counting (TCSPC) is such a technique, capable of providing sufficiently fast time resolution to resolve important processes in materials by utilizing the fast response mechanisms of detectors and electronic components, and by using efficient trigger timing. TCSPC also extends the range of observable photoluminescence with its long-wavelength and low-power detection capabilities. The goal of the work discussed in this paper is to develop a TCSPC system unique in its time resolution and range of detection. In this paper the principles and components of TCSPC are described, and the preparation of a TCSPC experimental setup is discussed, in particular noting the systematic errors encountered and their solutions. The system was run with a Tsunami Ti:Sapphire laser operating at 864-868 nm with a Si avalanche photodiode (APD) detector, yielding a time resolution of less than 800 ps. Emission lifetime measurements of 3.6 nm diameter PbS quantum dots with this setup yield a lifetime greater than 1300 ns; this value varied across emission wavelengths of 965-1030 nm. Because the time resolution is more than three orders of magnitude shorter than the lifetime of the PbS quantum dots, it is concluded that the system is sufficiently fast for typical carrier lifetime characterization. Further development will be necessary to improve the time resolution and infrared capabilities of the system; in particular, the inclusion of an InGaAs APD will improve the time resolution and extend the detection range to as far as 1600 nm. A flexible setup permitting fast switches between the InGaAs and Si detectors will increase the usefulness of the setup by extending the full range of sensitivity to 400-1600 nm. Further experimentation will be necessary to determine the cause of the emission lifetime variation associated with emission wavelength.
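
A minimal sketch of the lifetime extraction from a TCSPC arrival-time histogram (Python with synthetic single-exponential data; a real analysis would also deconvolve the instrument response function):

    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(0)
    arrival_ns = rng.exponential(scale=1300.0, size=50_000)   # true tau ~ 1300 ns
    counts, edges = np.histogram(arrival_ns, bins=200, range=(0, 8000))
    centers = 0.5 * (edges[:-1] + edges[1:])

    def decay(t, A, tau):
        return A * np.exp(-t / tau)          # mono-exponential decay model

    (A_fit, tau_fit), _ = curve_fit(decay, centers, counts,
                                    p0=(counts[0], 1000.0))
    print(f"fitted lifetime: {tau_fit:.0f} ns")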

Automation of the Vacuum System along the Advanced Penning Trap Beam Line. LAYRA REZA (University of Texas at El Paso, El Paso, TX, 79968) GUY SAVARD, PHD (Argonne National Laboratory, Argonne, IL, 60439)

A key component of the Canadian Penning Trap (CPT) mass spectrometer, located in the Argonne Tandem-Linear Accelerator System, is an Advanced Penning Trap (APT) filled with gas, the purpose of which is to purify an ion sample before mass measurements. The APT is one of the few components where gas is required; however, high-precision mass measurements must take place in an ultra-high vacuum (UHV) environment, including in the APT, to avoid contamination of the ions. A UHV environment is required continuously, even when operators are not present. The goal of the project, then, is complete automation of the APT beam line in a way that is fast and error-free. The achievement of high vacuum involves a vacuum system composed of ion and thermocouple gauges, mechanical and turbomolecular pumps, and pneumatic and solenoid valves. These components can be automated with the use of a Programmable Logic Controller (PLC). To achieve automation of the APT experimental setup, several steps need to be completed. First, a procedure for the safe operation of all the components has been created, a detailed list of components has been constructed, and the missing parts have been ordered. Moreover, a program in ladder logic has been written to control the system and avoid both operator and instrumental errors that might damage the system or its components. The project will continue until all new components are installed and wired into the PLC.

Building X-ray Diffraction Calibration Software. JOSHUA LANDE (Marlboro College, Marlboro, VT, 05344) SAMUEL WEBB (Stanford Linear Accelerator Center, Stanford, CA, 94025)

X-ray diffraction is a technique used to analyze the structure of crystals. It records the interference pattern created when x-rays travel through a crystal; three-dimensional structure can be inferred from these two-dimensional diffraction patterns. Before the patterns can be analyzed, the diffraction data must be precisely calibrated. Calibration determines the experimental parameters of the particular experiment by fitting them to the diffraction pattern of a well-understood crystal. Fit2D is a software package commonly used to do this calibration, but it leaves much to be desired. In particular, it does not give very much control over the calibration of the data, requires a significant amount of manual input, does not allow for the calibration of highly tilted geometries, does not properly explain the assumptions that it is making, and cannot be modified. We built code to do this calibration while overcoming the limitations of Fit2D. This paper describes the development of the calibration software and the assumptions that are made in doing the calibration.

Bunch by Bunch Profiling with a Rotating X-Ray Mask. CHRISTOPHER LEE (University of California, San Diego, La Jolla, CA, 92093) ALAN FISHER (Stanford Linear Accelerator Center, Stanford, CA, 94025)

It is desirable to monitor the cross section of each positron bunch in the Low Energy Ring (LER) of the Positron Electron Project II (PEP-II) storage rings located at the Stanford Linear Accelerator Center. One method is to pass the x-rays given off by each bunch through a scintillator, thereby producing a visible image for study. A rotating x-ray mask with three slots scans the beam image in three different orientations, allowing us to mechanically collect data to characterize and profile each image. Progress was made in designing the x-ray mask, researching and procuring parts, and advancing project plans. However, due to time constraints and difficulties in procuring special parts, the full system was not completed. A simpler setup was built to test the hardware as well as the feasibility of characterizing a circular image with a rotating mask. A blinking green light-emitting diode (LED) simulated a single positron bunch stored in the LER. The selected hardware handled this simulation setup well and produced data that led to a reasonable estimate of the LED image diameter.

Calculation of Charge-Changing Cross Sections of Ions or Atoms Colliding with Fast Ions Using Classical Trajectory Method. HARRISON MEBANE (Harvard University, Cambridge, MA, 02138) IGOR KAGANOVICH (Princeton Plasma Physics Laboratory, Princeton, NJ, 08543)

Evaluation of ion-atom charge-changing cross sections is needed for many accelerator applications. Ions lose energy when passing through background gases, beam transport lines, and detectors. A classical trajectory Monte Carlo simulation has been used to calculate ionization and charge exchange cross sections. For benchmarking purposes, an extensive study has been performed for the simple case of hydrogen and helium targets in collisions with various ions. To improve computational efficiency, several integration methods, including Runge-Kutta with adaptive stepsize and Bulirsch-Stoer with Stoermer's Rule, were compared. The algorithm was also upgraded to simulate the trajectories of two electrons for a helium target. Despite the fact that the simulation only accounts for classical mechanics, the calculations are comparable to experimental results for projectile velocities in the region corresponding to the vicinity of the maximum cross section. The accuracy of a purely classical simulation allows for simpler and faster calculations of cross sections in the vicinity of maximum cross section, avoiding slower and more complex quantum mechanical calculations. In the future, support will be added for simulations of multiple electron trajectories in more complicated targets, and the algorithms will be further refined to improve speed and accuracy.
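
A compact classical-trajectory sketch of the kind of calculation described (Python, with scipy's adaptive Runge-Kutta standing in for the integrators named above; the projectile charge, velocity, and impact parameter are illustrative, in atomic units):

    import numpy as np
    from scipy.integrate import solve_ivp

    # One electron bound to a proton at the origin; a bare ion of charge Zp
    # passes on a straight line at velocity v, impact parameter b.
    Zp, v, b = 1.0, 1.5, 1.0

    def proj_pos(t):
        return np.array([b, 0.0, v * t])

    def deriv(t, y):
        r = y[:3]
        a = -r / np.linalg.norm(r)**3             # attraction to target proton
        d = r - proj_pos(t)
        a += -Zp * d / np.linalg.norm(d)**3       # attraction to projectile
        return np.concatenate([y[3:], a])

    y0 = np.array([1.0, 0.0, 0.0, 0.0, 1.0, 0.0])  # circular orbit, r = 1 a.u.
    sol = solve_ivp(deriv, (-200.0, 200.0), y0, rtol=1e-8, atol=1e-10)

    r, p = sol.y[:3, -1], sol.y[3:, -1]
    E_target = 0.5 * p @ p - 1.0 / np.linalg.norm(r)
    p_rel = p - np.array([0.0, 0.0, v])
    E_proj = 0.5 * p_rel @ p_rel - Zp / np.linalg.norm(r - proj_pos(sol.t[-1]))
    if E_target < 0:  print("electron remains bound to target")
    elif E_proj < 0:  print("charge exchange: captured by projectile")
    else:             print("ionization")

Running many such trajectories over sampled initial orbital phases and impact parameters, and counting outcome fractions, yields the Monte Carlo cross sections.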

Calculation of Divertor Thermal Response as a Function of Material Composition in the National Spherical Torus Experiment. MICHAEL CHAFFIN (Reed College, Portland, OR, 97202) RAJESH MAINGI (Princeton Plasma Physics Laboratory, Princeton, NJ, 08543)

Present tokamak designs use a magnetic divertor to deposit heat from the edge plasma onto Plasma Facing Components (PFCs) designed to remove the heat. Studying how this heat is distributed under various discharge conditions gives insight into how heat deposition can be optimized, and how different materials respond to plasma heating. In the National Spherical Torus eXperiment (NSTX), infrared cameras are used to measure divertor surface temperature, from which heat flux is computed using a one dimensional (1D) semi-infinite slab model with constant thermal conductivity. Here, a 1D simulation of the PFCs incorporating material-dependent thermal properties is used to compute heat flux profiles resolved across time and tile thickness. The PFC response to a given heat flux is also computed, and comparisons of resulting temperature profiles are made for a variety of materials including ATJ graphite (a low thermal expansion coefficient polycrystalline graphite presently in the NSTX divertor), pyrolytic graphite, molybdenum, and tungsten. The relatively high conductivity of pyrolytic graphite allows for greater thermal penetration of the PFCs, resulting in much lower temperatures at the PFC boundary. Using pyrolytic graphite instead of ATJ graphite in future fusion devices would mitigate the effects of higher flux deposition onto the PFCs. Further study is needed to determine the appropriateness of using high conductivity materials in particular reactor designs.
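
A minimal sketch of the 1D thermal model (Python, explicit finite differences; constant graphite-like properties stand in for the tabulated temperature- and material-dependent ones the study uses, and all numbers are illustrative rather than NSTX values):

    import numpy as np

    k, rho, cp = 100.0, 1800.0, 1400.0   # W/m/K, kg/m^3, J/kg/K
    L, nx = 0.02, 200                    # 2 cm tile, 200 cells
    dx = L / nx
    alpha = k / (rho * cp)
    dt = 0.2 * dx**2 / alpha             # stable explicit time step

    T = np.full(nx, 300.0)               # initial temperature, K
    q = 5e6                              # incident heat flux, W/m^2
    for _ in range(int(0.5 / dt)):       # a 0.5 s heat pulse
        Tn = T.copy()
        T[1:-1] += alpha * dt * (Tn[2:] - 2 * Tn[1:-1] + Tn[:-2]) / dx**2
        # flux boundary at the plasma-facing surface (finite-volume form)
        T[0] += alpha * dt * (Tn[1] - Tn[0]) / dx**2 + dt * q / (rho * cp * dx)
        T[-1] = T[-2]                    # insulated back face
    print(f"surface temperature after pulse: {T[0]:.0f} K")

Swapping in per-material k(T), rho, and cp tables is what lets such a model compare ATJ graphite, pyrolytic graphite, molybdenum, and tungsten.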

Calibration of the Camera of the LSST. ANDREW SCACCO (University of Colorado, Boulder, CO, 80015) DAVID BURKE (Stanford Linear Accelerator Center, Stanford, CA, 94025)

The camera of the Large Synoptic Survey Telescope (LSST) is analyzed theoretically using the ZEMAX optical design software. The purpose of this analysis is to have a theoretical model for the testing and calibration of the optics before they are installed in the telescope. The most effective way to perform this testing and calibration is also investigated. The calibration of the lenses and sensors in the telescope will be performed using either a highly focused laser beam or a filtered quartz lamp with a monochromator, enabling very precise measurements to be made. The image the light source produces on the focal plane of the camera will be compared to the image predicted by the ZEMAX software and the optics and sensors for the camera will be adjusted until the desired agreement is reached. The minimal size of the spot produced by the light source is determined for a large sampling of angles and locations on the focal plane. A spot size that matches the spot size of the point spread function (PSF) of the telescope can be produced for light that strikes the focal plane at its center or for light that strikes the focal plane parallel to the optical axis of the camera, but not for light that strikes the focal plane off center at a significant angle. This work is a starting point for the testing and calibration of the LSST camera, which will be implemented and modified as necessary as the camera is built, assembled and tested.

Characterization of a Burle Planacon Microchannel Plate Photomultiplier Tube for Use in Picosecond Time-of-Flight Detectors. CAMDEN ERTLEY (University of Akron, Akron, OH, 44325) KAREN BYRUM (Argonne National Laboratory, Argonne, IL, 60439)

Particle accelerators use time-of-flight (TOF) detectors to distinguish between lighter and heavier particles of the same momentum. Current TOF detectors have a timing resolution of ~100 picoseconds. A higher-precision TOF detector would allow more accurate measurement of the particles’ energy in a detector such as CDF at the Fermilab Tevatron. The purpose of this project was to characterize the gain and response uniformity of the Burle Planacon microchannel plate photomultiplier tube (MCPPMT) and to begin the development of a laser test stand. The characterization of the MCPPMT was the beginning stage in the development of a TOF detector with a 1-picosecond resolution. A dark box containing a light-emitting diode, filter wheel, and reference photomultiplier tube was used to test the MCPPMT. The diode and filter wheel were used to control the amount of light used to illuminate single pixels of the MCP. The output was recorded and put into a histogramming program. The gain and number of photoelectrons were calculated from these data. The intrinsic timing resolution of the electronic components in a laser test stand has been tested. The gain mapping was not finished due to technical problems. The timing resolution of the CAMAC control module has been found to be 25 ps. The next step for this research will be characterizing the timing resolution of the MCP in a laser test stand.

Characterization of a Microchannel Plate Photomultiplier Tube for Use in Picosecond Time-of-Flight Detectors. CAMDEN ERTLEY (University of Akron, Akron, OH, 44325) KAREN BYRUM (Argonne National Laboratory, Argonne, IL, 60439)

Particle accelerators use time-of-flight (TOF) detectors to distinguish between lighter and heavier particles of the same momentum. Current TOF detectors have a timing resolution of ~100 picoseconds. A higher-precision TOF detector would allow more accurate measurement of the particles’ energy in a detector such as the Collider Detector at Fermilab. The purpose of this project was to test the timing resolution of the Burle Planacon microchannel plate photomultiplier tube (MCPPMT) in a laser test stand. The laser test stand consisted of a Hamamatsu picosecond laser pulser and lenses to focus the laser on the MCPPMT. The timing resolution of the MCPPMT was found to be 70 picoseconds when in a single-photoelectron mode and 32 picoseconds when the number of photoelectrons was high, ~150. A dark box containing a light-emitting diode, filter wheel, and reference photomultiplier tube was used to test the gain and response of the MCPPMT. The diode and filter wheel were used to control the amount of light used to illuminate single pixels of the MCP. The output was recorded and put into a histogramming program. The gain and number of photoelectrons were calculated from these data. The next step for this research is to determine the timing resolution between two MCPPMTs. The ultimate goal is to develop a TOF detector with a 1-picosecond resolution.
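
For context, the gain and photoelectron numbers quoted above follow from simple pulse statistics. A sketch with synthetic numbers (the N_pe estimate assumes fluctuations are dominated by Poisson photoelectron statistics):

    import numpy as np

    e = 1.602e-19
    # Hypothetical anode charge spectrum from repeated LED pulses, in coulombs.
    charges = np.random.normal(3.0e-12, 0.35e-12, 10_000)

    mean, sigma = charges.mean(), charges.std()
    n_pe = (mean / sigma) ** 2      # photoelectrons per pulse (Poisson)
    gain = mean / (n_pe * e)        # electrons out per photoelectron
    print(f"N_pe ~ {n_pe:.0f}, gain ~ {gain:.2e}")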

Characterization of the Magnetic Field of a Large-Bore Superconducting Solenoid Magnet. JACK WINKELBAUER (Western Michigan University, Kalamazoo, MI, 49009) BIRGER BACK (Argonne National Laboratory, Argonne, IL, 60439)

At Argonne National Laboratory a new type of spectrometer is being developed, the HELIcal Orbit Spectrometer (HELIOS). HELIOS utilizes a 90 cm bore superconducting Magnetic Resonance Imaging (MRI) magnet. To ensure that the magnet will be adequate for the project, the magnetic field will be mapped; of particular importance are the field's homogeneity and axis of symmetry. To map the magnetic field in this cylindrical region (345 cm long, 90 cm diameter), an apparatus was designed and built to position a gaussmeter probe at precise cylindrical coordinates. In order to collect these data efficiently, a program was created using the graphical programming software LabVIEW. The field mapping data will eventually be applied to existing simulations to improve predictions.

Characterizing Surface Layers in Nitinol Using X-ray Photoelectron Spectroscopy. REBECCA CHRISTOPFEL (Western Washington University, Bellingham, WA, 98225) APURVA MEHTA (Stanford Linear Accelerator Center, Stanford, CA, 94025)

Nitinol is a shape memory alloy whose properties allow for large reversible deformations and a return to its original geometry. This nickel-titanium alloy has become widely used in the biomedical field as a stent material to open up collapsed arteries. Both ambient and biological conditions cause surface oxidation in these devices, which in turn changes their biocompatibility. Depending on the type and abundance of the chemical species on or near the surface, highly toxic metal ions can leak into the body, causing cell damage or even cell death. Thus, biocompatibility of such devices is crucial. By using highly surface-sensitive x-ray photoelectron spectroscopy to probe the surface of these structures, it is possible to determine both layer composition and layer thickness. Two samples were investigated, both mechanically polished, with one then exposed to a phosphate buffered saline solution to mimic the chemical properties of blood. It was found that the latter sample had a slightly thicker oxide layer and, more significantly, a phosphate layer very near the surface, suggesting that toxic metal components are well contained within the sample. These are strong indications of a biocompatible device.

Characterizing the Noise Performance of the KPiX ASIC Readout Chip. JEROME CARMAN (Cabrillo College, Aptos, CA, 95003) TIMOTHY KNIGHT NELSON (Stanford Linear Accelerator Center, Stanford, CA, 94025)

KPiX is a prototype front-end readout chip designed for the Silicon Detector Design Concept for the International Linear Collider (ILC). It is targeted at readout of the outer tracker and the silicon-tungsten calorimeter and is under consideration for the hadronic calorimeter and muon systems. This chip takes advantage of the ILC timing structure by implementing pulsed-power operation to reduce power and cooling requirements and buffered readout to minimize material. Successful implementation of this chip requires optimal noise performance, of which there are two measures. The first is the noise on the output signal, previously measured at 1500 e-, which is much larger than the anticipated 500 e-. The other is the noise on the trigger logic branch, which determines where thresholds must be set in order to eliminate noise hits, thus defining the smallest signals to which the chip can be sensitive. A test procedure has been developed to measure the noise in the trigger branch by scanning across the pedestal in trigger threshold and taking self-triggered data to measure the accept rate at each threshold. This technique measures the integral of the pedestal shape. Shifts in the pedestal mean from injection of known calibration charges are used to normalize the distribution in units of charge. The shape of the pedestal is fit well by a Gaussian, the width of which is determined to be 2480 e-, far in excess of the expected noise. The variation of the noise as a function of several key parameters was studied, but no significant source has been clearly isolated. However, several problems have been identified that are being addressed or are under further investigation. Meanwhile, the techniques developed here will be critical in ultimately verifying the performance goals of the KPiX chip.
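
A sketch of the threshold-scan analysis described above (Python with synthetic numbers; in the real measurement a known calibration charge converts the fitted width from ADC units to electrons): the self-trigger accept fraction versus threshold traces the integral of a Gaussian pedestal, so a complementary-error-function fit recovers the pedestal mean and noise width.

    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.special import erfc

    def accept_fraction(thr, mu, sigma):
        return 0.5 * erfc((thr - mu) / (np.sqrt(2.0) * sigma))

    mu_true, sigma_true = 1000.0, 50.0            # arbitrary ADC units
    thr = np.linspace(800, 1200, 41)
    data = accept_fraction(thr, mu_true, sigma_true) \
        + np.random.normal(0, 0.01, thr.size)     # measured accept rates

    (mu_fit, sig_fit), _ = curve_fit(accept_fraction, thr, data, p0=(900.0, 30.0))
    print(f"pedestal mean = {mu_fit:.1f}, noise width = {sig_fit:.1f} (ADC)")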

CMOS Monolithic Pixel Sensors with in-pixel CDS and fast readout for the ILC Vertex Tracker. TERRI SCOTT (New York University, New York City, NY, 10003) MARCO BATTAGLIA (Lawrence Berkeley National Laboratory, Berkeley, CA, 94720)

The International Linear Collider (ILC) Vertex Tracker requires detectors of new design in order to meet its physics requirements in terms of material, accuracy, and readout speed. The detectors must be sufficiently thin that incident particles may pass through several layers of sensors without substantial scattering. Readout should be fast enough that occupancy caused by machine-induced background does not spoil the track pattern recognition. Monolithic silicon pixel sensors provide a solution to these constraints due to their high resolution and ability to be thinned to several tens of micrometers. One detector currently under testing is the LDRD2 chip, designed and developed at the Lawrence Berkeley National Laboratory. The detector features 20 x 20 µm pixels with in-pixel charge storage for correlated double sampling, a technique by which the difference is taken between a reference voltage and the pixel signal voltage. Half of the chip utilizes 5 x 5 µm diodes; the other half uses 3 x 3 µm diodes. To characterize its performance, lab tests were conducted using a pulsed laser and an 55Fe x-ray source. In particular, the chip was read out at several frequencies to determine the effect of the readout speed and the charge integration time on efficiency and noise. It was found that the LDRD2 chip responds to both laser pulses and incident x-rays at readout frequencies up to the highest design frequency of 25 MHz. This work is part of an ongoing R&D program at LBNL which will continue to investigate the LDRD2 and further generations of pixel sensors.

Coarsening of Superconducting Froths. ANDREW FIDLER (Albion College, Albion, MI, 49224) RUSLAN PROZOROV (Ames Laboratory, Ames, IA, 50011)

The structure and dynamics of foams and froths have been a subject of intense interest due to the desire to understand the behavior of complex systems in which topological complexity prohibits exact derivations based on minimum energy arguments. Though exact solutions have proven unattainable, general laws that govern the overall structural evolution have been developed. This is particularly true in the case of two-dimensional foams consisting of arrays of polygonal cells with three edges per vertex. The mathematics for describing the cellular evolution in this system has proven to be surprisingly simple in form, and it applies only to the system as a whole. This gives hope that, while the behavior of individual cells may be difficult to analyze, the overall system can be described by relatively simple rules. Using magneto-optical imaging, it was recently demonstrated that the intermediate state in superconducting lead exhibits patterns that appear very similar to soap foams. While visually alike, physically these systems are quite different: in conventional foams the foaming agent is always some form of matter, while the structure of the intermediate state in lead is characterized by superconducting and normal-state regions. In this project, we have investigated these analogies to see whether the general equations mentioned above apply equally well to magnetic superconducting foams. It has been determined that laws describing foam evolution, such as von Neumann's law, work remarkably well for superconducting froth, but some parameters differ from those of conventional foams. The biggest difference between the two systems is that the agent providing the foaming in superconducting lead, the superconducting region, decreases as the field evolves, whereas in conventional soaps the amount of the foaming agent is constant. Nevertheless, the statistics of the polygons and the structural dependence on the applied magnetic field and temperature have proven analogous to the time dependence of conventional foams. Topological transformations of the cells have also proven to be identical to those of conventional foams. This new type of superconducting foam could contribute greatly to insight into the general physics of foams, since its structure can be controlled to a greater extent by reversible manipulation of magnetic field and temperature, which is impossible in the case of conventional foams, where time cannot be reversed.
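
For reference, von Neumann's law for two-dimensional froths, the result tested here, states that a cell's area A_n changes at a rate set only by its number of sides n. In the common convention where a wall moves at a speed proportional to its curvature (mobility M, line tension gamma), the standard textbook form is:

    \frac{dA_n}{dt} = \frac{\pi M \gamma}{3}\,(n - 6)

Cells with more than six sides grow and those with fewer shrink; only the prefactor depends on material details, consistent with the finding that the law holds for superconducting froth with different parameters.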

Coil Configurations Study for Bi-2212 Subscale Magnets. CHRISTOPHER ENGLISH (Texas A&M University, College Station, TX, 77841) HELENE FELICE (Lawrence Berkeley National Laboratory, Berkeley, CA, 94720)

The Superconducting Magnet Group at Lawrence Berkeley National Laboratory is developing subscale magnets consisting of Bi-2212 (Bi2Sr2CaCu2Ox) racetrack coils as part of its subscale program. Several configurations are being considered: the stand-alone racetrack, subscale common coil, subscale dipole, and subscale hybrid dipole. In order to prepare for the assembly and testing of these magnets, a study has been carried out to determine the short sample current (Iss) and the Lorentz forces for each configuration. OPERA 3D has been used to determine the field distributions on the coils. The maximum field on the conductor determined the load line of each subscale magnet. The intersection of these load lines with the engineering critical current density versus magnetic field curve (JE(B)) for Bi-2212 round wire subsequently determined the Iss. The results show little variation in the Iss of each configuration due to the small slope of JE(B) in the field range of 5-10 T. The Lorentz forces, also determined with OPERA 3D, have been analyzed by defining the magnetic pressure on the coils. Results from the analysis show that a possible testing sequence for the subscale program could be the stand-alone racetrack, subscale common coil, subscale dipole, and finally the subscale hybrid dipole, in order of increasing magnetic pressure. Future simulations for hybrid dipoles based on varying the current in the Nb3Sn coil independently of the current in the Bi-2212 coil are recommended.
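
A sketch of the Iss determination (Python; the transfer function and the JE(B) parameterization below are hypothetical stand-ins for the OPERA 3D field calculation and the measured Bi-2212 curve): the short sample current is simply the current at which the magnet load line crosses the conductor limit.

    import numpy as np
    from scipy.optimize import brentq

    c = 3.0                                  # peak field per unit current, T/kA
    A_cond = 3.0                             # conductor cross-section, mm^2

    def Je(B):                               # engineering critical current
        return 0.8 * np.exp(-B / 10.0)       # density, kA/mm^2 (shallow slope)

    def mismatch(I):                         # zero where the load line B = c*I
        return Je(c * I) * A_cond - I        # crosses the conductor limit

    Iss = brentq(mismatch, 1e-3, 10.0)
    print(f"Iss ~ {Iss * 1e3:.0f} A at a peak field of {c * Iss:.1f} T")

The shallow slope of Je(B) in this field range is what makes Iss nearly the same for all coil configurations, as the abstract reports.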

Computational Development of H- Ion Sources for the Spallation Neutron Source. JUSTIN CARMICHAEL (Worcester Polytechnic Institute, Worcester, MA, 01609) ROBERT F. WELTON (Oak Ridge National Laboratory, Oak Ridge, TN, 37831)

The US Spallation Neutron Source (SNS) requires a high-power H- ion source in order to achieve the desired neutron flux. Over the next several years, the SNS will require substantially higher average H- beam current than can be produced by conventional H- ion sources, including our baseline source. H- currents of 70-100 mA with an RMS emittance of 0.20-0.35 mm mrad and a ~7% duty factor will be needed for the SNS power upgrade project. To date, external antenna sources based on Al2O3 plasma chambers have been developed and shown to produce beam currents of 25-35 mA with a duty factor of 2-3%. Computer simulations employing the Finite Element Method (FEM) with coupled fluid dynamic, heat transfer, and thermal stress and deformation capabilities have been performed to investigate the design of plasma chambers operating at higher duty factors. These simulations show that a plasma chamber made from AlN can be designed to meet the full duty-factor requirement. In order to meet the beam current requirements, efforts are being made to (i) increase the source plasma density by using magnetic confinement and (ii) improve the efficiency of ion extraction from the plasma. Toward these ends, simulations are being performed using LORENTZ for magnetic field modeling and COSMOS for thermal analysis of the electron dumping electrode. An AlN plasma chamber, a solenoid confinement magnet, and an electron dumping electrode have been designed. It is anticipated that substantially greater beam currents can be achieved with these improvements to the ion source.

Concept to Employ Magnetohydrodynamic Conversion in a Two Gigawatt Inertial Fusion Energy Direct Drive Power Reactor. BRETT ANDERSON (St. Olaf College, Northfield, MN, 55057) CHARLES GENTILE (Princeton Plasma Physics Laboratory, Princeton, NJ, 08543)

A two gigawatt Inertial Fusion Energy (IFE) direct drive power reactor, currently in conceptual design, injects deuterium-tritium targets into the reactor chamber at the rate of five hertz and uniformly illuminates each target with ultraviolet laser light, resulting in detonation. The conceptual design of this IFE reactor may provide an opportunity to directly harness the power in the post-detonation ion fields. This can be accomplished by utilizing a magnetic cusp field to guide the ions into collectors located in the equatorial and polar regions of the reactor. The shaped ion fields resulting from this magnetic intervention configuration pose a distinct challenge, as their intensity may have the potential to damage certain areas within the ion collectors. One method of addressing this challenge is to employ magnetohydrodynamic (MHD) conversion to transform the internal energy of the ion fields directly into electrical energy, a process that would also attenuate the strength of the fields. In order to analyze the potential of MHD conversion in IFE, previous work on MHD conversion in other applications is examined in the context of this proposed IFE reactor configuration. Other conversion techniques are also investigated, including Compact Fusion Advanced Rankine II (CFARII) MHD conversion, radio frequency (RF) particle deceleration, and direct conversion. Analysis reveals that MHD conversion may be a promising solution depending on the intensity of the ion fields. However, a number of engineering and operational concerns need to be addressed; for example, the materials need to be able to withstand extreme conditions. In addition, some elements of the other methods for energy conversion could be incorporated into an MHD conversion design. The next logical step in the development of this aspect of the IFE reactor would be a scaled experimental test facility where material tests and methods can be advanced. This work is in support of efforts to develop an efficient, economical, and clean fusion energy source.

Conceptual Design for a 2 GW Inertial Fusion Energy Direct-Drive Power Reactor Employing a Mechanical Vacuum Pumping System. KELSEY TRESEMER (George Fox University, Newberg, OR, 97132) CHARLES GENTILE (Princeton Plasma Physics Laboratory, Princeton, NJ, 08543)

Presented is a conceptual design for a 2 gigawatt Inertial Fusion Energy (IFE) direct-drive power reactor. The reactor operates at 5 Hz, consuming approximately 450,000 deuterium-tritium targets per day, injected at speeds greater than 100 m/s into the target chamber and uniformly illuminated by laser light, leading to detonation. The resulting post-detonation ions are directed away from the first wall of the target chamber and into equatorial and polar caches using a magnetically induced cusp field. The reactor is designed to breed and recycle fuel through the use of breeder blankets and a fuel recovery system. To minimize target-particle interference, the chamber will be kept at less than 0.5 millitorr through the use of turbomolecular pumps (TMPs) and corresponding mechanical backing pumps. Initially, these pumps were dry-bearing TMPs; however, an investigation comparing bearing-based TMPs to magnetically levitated TMPs revealed other vacuum pump options. All pumps were evaluated against a wide range of specifications, the most crucial being the maximum hydrogen pumping speed, the greatest mean time between failures (MTBF), and the least amount of oil (if any) present in the vacuum system. Information collected from journal articles, industry, and operational TMP experience in other fusion-related venues indicates that magnetically levitated TMPs appear to be a superior vacuum pumping solution in the IFE environment. Thus, as a direct result of this research, magnetically levitated TMPs will be adopted into the IFE reactor design.

Construction and Commissioning of a Micro-Mott Polarimeter for Photocathode Research and Development. APRIL COOK (Monmouth College, Monmouth, IL, 61462) MARCY STUTZMAN (Thomas Jefferson National Accelerator Facility, Newport News, VA, 23606)

Thomas Jefferson National Accelerator Facility uses polarized electrons to further the understanding of the atomic nucleus. The polarized source produces electrons by directing laser light onto a specially prepared gallium arsenide (GaAs) photocathode. During the course of this project, an off-beamline micro-Mott polarimeter has been built and commissioned within the Source Lab for photocathode research and development. A polarimeter measures the polarization, or spin direction, of electrons. The micro-Mott runs at 30 keV and can be used directly in the Source Lab, off of the main accelerator beamline. Construction of the Mott system began with a polarized source, which consists of a vacuum chamber complete with a cesiator and nitrogen trifluoride (NF3) to activate the photocathode, a residual gas analyzer (RGA), ultra-high vacuum pumps, an electrostatic deflector to bend the electron beam 90 degrees, and electrostatic lenses. The polarimeter is housed in an adjacent vacuum chamber. The circularly polarized laser light enters the polarized source, hits the GaAs photocathode, and liberates polarized electrons. The originally longitudinally-polarized electrons are transformed into transversely-polarized electrons by the electrostatic bend. They are then directed onto a gold target inside the Mott and scattered for data analysis. The polarized source has been commissioned, achieving photoemission from the activated GaAs crystal, and the electrostatic optics have been tuned to direct the electrons onto the gold target. Nearly ten percent of the electrons from the photocathode reach the target, giving adequate current for polarization measurement. The micro-Mott polarimeter will aid in photocathode research and pre-qualification of material for use in the injector.

Construction of a Monitoring System for the Solenoidal Tracker At RHIC (STAR) Forward Meson Spectrometer (FMS). SHAWN PEREZ (State University of New York at Stony Brook, Stony Brook, NY, 11794) LES BLAND (Brookhaven National Laboratory, Upton, NY, 11973)

The Forward Meson Spectrometer (FMS) at Brookhaven National Laboratory consists of a matrix of lead-glass bars viewed by photomultiplier tubes that surround the colliding beam axis. The FMS detects the two photons associated with the decay of a π0 meson, or other photons, electrons, or positrons produced in the collisions. The dependence of particle production on pseudorapidity, η = -ln(tan(θ/2)), can be analyzed to explore parton distributions within the proton. We are currently designing an LED light-pulsing system which will be used to monitor the performance of these 1264 lead-glass detectors. Using the Very High Speed Integrated Circuit (VHSIC) Hardware Description Language (VHDL) to program the Field Programmable Gate Array (FPGA) via the Xilinx ISE WebPack development environment, the goal is to have the LED panel mimic events generated by the proton–proton collisions by pulsing different patterns of light, with amplitude control, into the FMS. The aspect of this project that I have been assisting with is the PC board design, which integrates the electronic components with the circuitry and mechanics necessary for the monitoring system to function. The software required for this design comprises Microsoft Office Visio 2007 to generate the block diagram of the processes between the electronics, Cadence OrCAD Capture to represent the system's circuitry, and PADS PCB 2007, the PC board layout tool from Mentor Graphics. The LED monitoring system, once completed, will provide a means of testing the FMS and monitoring its performance during RHIC operations.

Cosmic Ray Studies. MARGE BARDEEN (University of Illinois, Urbana, IL, 60137) MICHAEL BARDEEN (Fermi National Accelerator Laboratory, Batavia, IL, 60510)

I am writing this memorandum to suggest we move in a slightly different direction for the QuarkNet evaluation. Based on last year’s Review, we are charged with collecting “metrics.” The issue, as I see it, is that the review recommendations are based on an unrealistic view of QuarkNet, and that the current goals no longer fit what the program has evolved into.

Definition of a Twelve-Point Polygonal SAA Boundary for the GLAST Mission. SABRA DJOMEHRI (University of California, Santa Cruz, Santa Cruz, CA, 94024) MARKUS ACKERMANN (Stanford Linear Accelerator Center, Stanford, CA, 94025)

The Gamma-Ray Large Area Space Telescope (GLAST), set to launch in early 2008, detects gamma rays within a huge energy range of 100 MeV - 300 GeV. Background cosmic radiation interferes with such detection, making it difficult to distinguish incident cosmic rays from gamma rays. This quandary is resolved by encasing GLAST’s Large Area Telescope (LAT) in an Anti-Coincidence Detector (ACD), a device which identifies and vetoes charged particles. The ACD accomplishes this through plastic scintillator tiles; when cosmic rays strike, the photons produced induce currents in Photomultiplier Tubes (PMTs) attached to these tiles. However, as GLAST orbits Earth at altitudes of ~550 km and latitudes between -26° and 26°, it will confront the South Atlantic Anomaly (SAA), a region of high particle flux caused by trapped radiation in the geomagnetic field. Since the SAA flux would degrade the sensitivity of the ACD’s PMTs over time, a boundary enclosing this region must be determined, signaling when to lower the voltage on the PMTs as a protective measure. The operational constraints on such a boundary require a convex SAA polygon with twelve edges whose area is minimal, ensuring GLAST has maximum observation time. The AP8 and PSB97 models describing the behavior of trapped radiation were used in analyzing the SAA and defining a convex twelve-sided SAA boundary. The smallest possible boundary was found to cost 14.58% of GLAST’s observation time. Further analysis of a boundary safety margin to account for inaccuracies in the models reveals that if the total SAA hull area is increased by ~20%, the loss of total observational area is < 5%. The twelve coordinates defining the SAA flux region are ready for implementation on the GLAST satellite.
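
The boundary optimization described above amounts to minimizing the area of a twelve-vertex convex polygon that still encloses the SAA flux region. A minimal sketch of the area bookkeeping, assuming planar coordinates (Earth curvature, the AP8/PSB97 flux maps, and the actual vertex search are omitted; the vertices below are hypothetical):

def shoelace_area(vertices):
    """Area of a simple polygon given as [(x1, y1), ..., (xn, yn)]."""
    total = 0.0
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        total += x1 * y2 - x2 * y1
    return abs(total) / 2.0

# A candidate boundary would be scored by its area (roughly proportional
# to lost observation time) subject to enclosing all high-flux points.
unit_square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
print(shoelace_area(unit_square))  # 1.0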

Density Measurements of a 3He Target Cell. SARA MOHON (College of William and Mary, Williamsburg, VA, 23186) JIAN-PING CHEN (Thomas Jefferson National Accelerator Facility, Newport News, VA, 23606)

The discovery and study of elementary particles in nature have always attracted scientists. The spin structure of the neutron is one such popular study and typically involves collisions between a high energy electron beam and a neutron target. At the Thomas Jefferson National Accelerator Facility (TJNAF) in Virginia, glass cells full of Helium-3 (3He) serve as effective polarized neutron targets in its particle accelerator. A nucleus of 3He has two protons and one neutron. Typically, ninety percent of a 3He sample has two protons with opposite spins, so that the overall spin of the nucleus is determined by the spin of the neutron. Before experiments can begin, characteristics of the cell must be measured and determined, including its 3He density. The purpose of this project was to describe how to measure the density of 3He target cells. A laser and optics setup was used to measure how much laser light the 3He within the target cell would absorb at certain frequencies and temperatures. At each temperature, the data were curve-fitted using ROOT and statistically analyzed to give a density measurement. As expected, the wider the absorption, the larger the density of 3He within the cell. Also, the density of the cell was larger at higher temperatures. These density measurements will be used to calculate the cell’s maximum polarization, correct data measurements from the particle accelerator experiment, and help further investigate the spin structure of the neutron. In the future, improvements concerning cell alignment in the oven should be made to provide more accurate results.

Designing an LED Monitoring System for the FMS. JONATHAN LANGDON (State University of New York at Stony Brook, Stony Brook, NY, 11794) LES BLAND (Brookhaven National Laboratory, Upton, NY, 11973)

The Forward Meson Spectrometer (FMS) at Brookhaven National Laboratory’s STAR experiment is composed of lead glass cells which are used to detect photons produced in high energy collisions of gold nuclei or protons. In past iterations of the FMS, panels of fiber optics were used to provide light from light emitting diodes (LEDs) as a calibration signal. This test signal could be observed as an event distribution within the data, far removed from other physical events. For the FMS, the goal is to create a more comprehensive and adaptable LED testing system. Unlike previous iterations, the goal is to provide variable light sources to cell clusters, allowing patterns of light to be used instead of simple pulses of light. This would provide a means of proving the functionality of the FMS’s triggering system. To accomplish this, a type of microchip known as a Field Programmable Gate Array (FPGA) will be installed to control the LED output. FPGAs are also known as programmable logic chips, since one can use a computer to define their behavior after they have been installed. Making use of an FPGA development kit in conjunction with an "integrated software environment" (ISE), known as Xilinx ISE WebPack, a functioning system for controlling the light panels has been developed. However, in addition to the hardware aspect, it has also been necessary to develop graphical interface tools for loading light pattern instructions in real time. This was accomplished by way of Microsoft’s "Visual Basic.NET 2005 Express" interactive development environment (IDE). The final product, known as the "Light Panel Control System," is the product of bidirectional development, working from the computer out to the development board and from the FPGA back.

Designing the Gamma Calorimeters for the Future International Linear Collider. ERIC JONES (State University of New York at Stony Brook, Stony Brook, NY, 11790) WILLIAM MORSE (Brookhaven National Laboratory, Upton, NY, 11973)

The electron-positron beams of the future International Linear Collider (ILC) must be monitored by utilizing feedback measurements of the bunch characteristics in order to keep them properly aligned for the optimum resulting luminosity once they collide. The Gamma Calorimeter (GamCal) is one of the proposed calorimeter designs to be placed in the very-forward region of the ILC that will be used to gather information about the beam interactions in order to maintain this alignment. It will measure the energy of photons produced by so-called beamstrahlung, a process which results from the intensification of the electromagnetic fields of the bunches as they pass through each other; however, their energy will not be measured directly. The beamstrahlung photons will first be converted into electron-positron pairs by directing them into a 10⁻⁵ m thick diamond foil, and then the positrons will be magnetically deflected into a detector grid that will measure their energy. In order to obtain a quantity proportional to the luminosity, this information will then be combined with information from the Beam Calorimeter (BeamCal) that will detect the pair particles produced in the collisions. We have used Daniel Schulte’s simulation program, the Generator of Unwanted Interactions for Numerical Experiment Analysis Program, Interfaced with Geometry and Tracking (GEANT) (GUINEA-PIG), in order to understand the effects of changing important collision parameters such as the beam offset, the incoming beam angles, and the bunch lengths on the produced pair particles and beamstrahlung photons. The resulting GUINEA-PIG data were analyzed using the program Physics Analysis Workstation (PAW) and Excel. The photon energy and angular distributions will be used to optimize the detector placement in the GamCal, while other output data shall be used to determine if the luminosity can be optimized without data from the BeamCal during preliminary runs of the beams. Future studies shall also determine how well the converting foil will survive when the beams fail to interact, as the electron-positron beams are intense enough to punch holes through the foil, and these holes will decrease the acceptance of our detector. What we understand now are the number and energy acceptances for nominal bunch parameters with varying offsets and incoming angles, and that the foil should remain reliable to about 1% error.

Detection of Ultra High Energy Cosmic Rays Using Radar. STEVEN HICK (State University of New York at Stony Brook, Stony Brook, NY, 11794) HELIO TAKAI (Brookhaven National Laboratory, Upton, NY, 11973)

Ultra High Energy Cosmic Rays (UHECR) are constantly bombarding our planet from unknown sources outside the solar system, and uncovering their mysteries can provide insight into the origins and evolution of the universe. The Mixed Apparatus for Radar Investigation of Cosmic-rays of High Ionization (MARIACHI) project is a collaborative effort between research scientists, educators, and students that will explore UHECR. Understanding UHECR will be a major accomplishment in the physics community, since their energies are orders of magnitude higher than those we can produce on Earth with current particle accelerators. MARIACHI will scatter radio waves off the ionization trails that are created when UHECR interact with our atmosphere, and will detect these signals using scintillator arrays that are strategically located across Long Island. Along with the scintillator arrays, antennas will be constructed and used for the detection of UHECR, and calibrating these antennas will be a major step forward for MARIACHI. The antenna design is a simple double dipole, which we have been experimenting with all summer. Designing, calibrating, and implementing the antennas as a complement to the scintillator arrays is a main goal of the project. Furthermore, MARIACHI will be developing ways to subtract unwanted background signals from meteors and other sources that can mask the detection of UHECR. The MARIACHI project is still in its initial phases, and there is high risk involved, since it is unknown whether it is possible to use radar for UHECR detection. Despite this risk, the concept of MARIACHI is highly attractive, since the forward scattering technique is extremely inexpensive, which allows a wide range of users to participate. The project is mainly concerned with pure scientific research, but is unique in that research scientists, educators, and students are all participants. Detecting UHECR and extracting maximal information from the MARIACHI project has promise and potential for opening many new doors in physics and education.

Determining the Components of an Iron Beam at the NASA Space Radiation Laboratory. JENNIFER MABANTA (St. Joseph's College, Patchogue, NY, 11772) MICHAEL SIVERTZ (Brookhaven National Laboratory, Upton, NY, 11973)

Before extended space missions can occur, protective measures must be put in place for astronauts, since prolonged exposure to radiation fields can have adverse effects. The purpose of the research done at the NASA Space Radiation Laboratory (NSRL) at Brookhaven National Laboratory (BNL) is to gain a better understanding of the cosmic rays in space and develop the most efficient countermeasures for the voyagers. Proton and heavy-ion beams from the BNL Booster accelerator are directed along a beam line to NSRL. These beams mimic cosmic rays, providing a controlled area in which to study the effects of the rays. The most harmful of the rays is iron, and the least destructive and most abundant is hydrogen. Of particular interest to NASA are the iron beams, as they are the most destructive form of radiation facing astronauts. Since completely shielding astronauts from these heavy beams would not be feasible given the constraints in space, scientists are seeking to utilize the process of fragmentation in their shielding methods. Fragmentation is the way in which heavy ions break up into lighter, less dangerous ions. In order to study this process, it is necessary to measure the components within the beam. To achieve this, a scintillator detector is placed within the beam. However, the response of this scintillator is not linear with the deposited energy; it follows a relation known as Birks’ law. In order to study the different components of the beam, the response function of the scintillator must be determined. Once this response function is made linear, each elemental ion within the iron beam is more easily identified. To create the best fit, the fewest number of parameters must be used while still keeping the value of chi squared at a minimum. Using the spreadsheet software Excel, a fitting routine was created that could be applied to each of the components of the iron beam, from hydrogen to iron. Using this routine, the centroids of the peaks of each element were determined and used to develop the response function. A second-order polynomial, y = 0.000121x^2 + 1.0x, was found to be adequate to fit the response of the scintillator. A comparison of the scintillator response function before and after unfolding shows that the response can be portrayed as a linear function within a given range. Using this function, scientists will be better able to characterize the iron beam in order to develop the best shielding methods for harmful space radiation.
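
Because the fitted response quoted above is a simple quadratic, the unfolding step can be written in closed form. A minimal sketch using the polynomial from the abstract (the test values are arbitrary):

import math

A, B = 0.000121, 1.0  # fitted response y = A*x**2 + B*x from the abstract

def response(x):
    return A * x**2 + B * x

def unfold(y):
    """Invert the response: positive root of A*x**2 + B*x - y = 0."""
    return (-B + math.sqrt(B * B + 4.0 * A * y)) / (2.0 * A)

for x in (10.0, 100.0, 1000.0):
    print(x, unfold(response(x)))  # recovers x, confirming the linearization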

Development of an Apparatus for Analysis of Monolayers by Grazing Incidence X-ray Diffraction (GIXD) and Brewster Angle Microscopy (BAM). MORGAN JACOBS (University of California, Berkeley, Berkeley, CA, 94707) JAMES VICCARO (Argonne National Laboratory, Argonne, IL, 60439)

The simultaneous use of grazing incidence x-ray diffraction (GIXD) and Brewster angle microscopy (BAM) is a powerful tool for imaging surfactant-water interfaces in Langmuir troughs. The use of both techniques allows imaging on both the angstrom and the micron scale. Previously, each technique has been used individually; however, due to the geometrical limitations of the Langmuir trough, it is difficult to use both techniques simultaneously. X-ray diffraction requires that the surfactant be in an inert atmosphere, and BAM requires that a microscope be placed close to the surface being analyzed. The BAM setup previously used at Argonne National Laboratory has served as a starting point from which to make modifications. The trough is not large enough to contain the BAM microscope in its entirety, and it is therefore not possible to seal the trough; as such, an inert atmosphere is no longer practical. It may be possible to place the BAM microscope outside of the trough, using a coherent fiber optic bundle to transport the light from the inside of the trough to the microscope. However, there are a few issues that one must consider when using fiber optics, foremost among these collecting enough light, keeping high enough resolution, and maintaining polarization. The purpose of this project is to develop an apparatus based on an investigation of these problems. The IG-163 wound fiber optic bundle from Schott Fiber Optics is a promising candidate for our setup, as it appears to fit our criteria, but some testing will be required to determine whether it will be suitable.

Development of Integrated PV Reporting System. MARIANO PADILLA (Fullerton College, Fullerton, CA, 0) WILLEM BLOKLAND (Oak Ridge National Laboratory, Oak Ridge, TN, 37831)

The Spallation Neutron Source (SNS) at Oak Ridge National Laboratory is a state-of-the-art accelerator-based neutron source. Neutron-scattering research helps develop new materials for superconductors, magnets, and plastics. SNS uses a pulsed hydrogen ion beam to bombard a liquid mercury target and produce the neutrons. Operators control the accelerator complex using console screens that can display and set process variables (PVs) on the Input/Output Controller (IOC) devices. Reports on the statistics of accelerator operation are needed to evaluate the performance of the accelerator, so providing timely and accurate reports of the overall health of the machine in an automated, efficient, and intuitive manner is essential. The reporting system requirements are an intuitive multi-platform, web-browser-based user interface; integration with Oracle and e-mail systems; and Portable Document Format (PDF) generation. The reporting system provides the user with a web-based interface to specify which PVs to acquire, how to process them, and how to publish the results. An integrated reporting system was developed using PHP, Java, JavaScript, Java Server Pages (JSP), and Business Intelligence and Reporting Tools (BIRT) for the Eclipse Integrated Development Environment (IDE). The Oracle database already in production use at SNS is the primary storage location for the data collected from the PVs at rates up to 1 Hz. The integrated reporting system will provide physicists, operators, and engineers with a simple platform to monitor, analyze, and report on the operation of the SNS accelerator.
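
As a toy illustration of the acquire/process/publish flow described above (this is not the SNS code: the SQLite back end, table name, and PV name are stand-ins for the Oracle schema):

import sqlite3
import statistics

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pv_samples (pv_name TEXT, ts REAL, value REAL)")
conn.executemany(
    "INSERT INTO pv_samples VALUES (?, ?, ?)",
    [("beam_current", float(t), 25.0 + 0.1 * (t % 3)) for t in range(3600)],
)

values = [row[0] for row in conn.execute(
    "SELECT value FROM pv_samples WHERE pv_name = ?", ("beam_current",))]
print(f"beam_current: n={len(values)}, mean={statistics.mean(values):.2f}, "
      f"stdev={statistics.stdev(values):.2f}")  # one line of a report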

Effect of Biotin Density on 2D Streptavidin Crystallization on Lipid Monolayers at the Liquid-Vapor Interface. MATTHEW LOHR (Pennsylvania State University, University Park, PA, 16802) MASAFUMI FUKUTO (Brookhaven National Laboratory, Upton, NY, 11973)

Phospholipid monolayers at the gas-liquid interface are interesting because of their ability to act as templates for two-dimensional (2D) crystallization of various proteins. Such behavior could lead to utilization for assembly of practical bio-nanostructures. Previously studied examples of this phenomenon include the crystallization of streptavidin through binding with biotinylated phospholipids. In this study, we examine the effect of biotin surface density on streptavidin crystallization by observing the behavior of streptavidin at an ionic subphase-vapor interface coated with a phospholipid monolayer composed of a binary mixture of dimyristoylphosphatidylcholine (DMPC) and biotin-capped dipalmitoylphosphatidylethanolamine (DPPE-x-Biotin). We monitor the formation of 2D streptavidin domains on lipid surfaces using Brewster-angle microscopy (BAM) over 20 hours. The mean molecular area of lipids is fixed at 75 Å²/lipid, and the mole fraction of DPPE-x-Biotin ranges from 100% to 0.01%. These observations have yielded several distinct regimes of crystallization behavior. At 8% DPPE-x-Biotin composition and above (937.5 Å² per biotin and below), bowtie-shaped crystal domains form almost immediately after injection of streptavidin, grow with time, and eventually cover the entire surface of the sample. At 5% and 4.3% DPPE-x-Biotin composition (1500 and 1744 Å² per biotin), crystal domains only partially cover the available sample surface. At 3.9% and 3.5% DPPE-x-Biotin composition (1923 and 2142 Å² per biotin), several crystal domains are observed, but they are not uniformly distributed over the entire surface. At 3% DPPE-x-Biotin and below (2500 Å² per biotin or larger), the surface shows no crystal domains that can be discerned by BAM (resolution ≈ 10 µm). According to previous diffraction studies of streptavidin crystals under similar conditions, the proteins arrange in two-dimensional unit cells with an average of 1610 Å² per biotin binding site. This observation and our BAM studies provide strong evidence that in order for the 2D crystallization of streptavidin to take place, the surface density of biotin linkers must be comparable to or larger than the binding site density in the 2D crystal. This observation may be key to understanding the mechanisms behind streptavidin’s crystallization behavior. These results supplement ongoing studies of streptavidin crystallization, including x-ray and optical studies of the effects of subphase pH.
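
The quoted areas per biotin follow directly from the fixed mean molecular area divided by the DPPE-x-Biotin mole fraction; a quick arithmetic check reproducing (to rounding) the numbers in the abstract:

AREA_PER_LIPID = 75.0  # fixed mean molecular area, angstrom^2 per lipid

for fraction in (0.08, 0.05, 0.043, 0.039, 0.035, 0.03):
    area_per_biotin = AREA_PER_LIPID / fraction
    print(f"{100 * fraction:5.1f}% -> {area_per_biotin:7.1f} A^2 per biotin")
# 8.0% -> 937.5; 5.0% -> 1500.0; 4.3% -> 1744.2; 3.9% -> 1923.1;
# 3.5% -> 2142.9; 3.0% -> 2500.0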

Effect of Pipes in a Tank to be Purged of Oxygen. LAURA ZANTOUT (University of Minnesota, Minneapolis, MN, 55455) STEPHEN PORDES (Fermi National Accelerator Laboratory, Batavia, IL, 60510)

To produce a viable 50 kton LArTPC (liquid argon time projection chamber), as proposed, the liquid argon within must reach a purity level of 10 ppt oxygen. The purpose of the Daisy experiment is to determine whether pipes used as structural components inside this detector would trap air and cause virtual leaks. A 2 cubic foot tank filled with 50 half-inch diameter pipes was used to simulate the structure that would be within the detector. As argon flowed through the tank, oxygen levels were monitored both at the gas outlet and within the tank, using monitors of various sensitivities. Several variations were run: with an internal fan, without a fan, and with one end of the pipes capped. Plots of percent oxygen vs. time for all of the runs were fit well by perfect-mixing equations. This suggests that oxygen was not trapped in the pipes, but instead diffused out quickly. Other turbulence (such as convection currents) may have also accounted for some mixing, especially in the capped run. It appears that building a structure of much longer pipes will not contaminate the liquid argon inside a detector via virtual leaks, as long as mixing through diffusion is given time to progress or is sped up by a fan.
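
The perfect-mixing model invoked above assumes the outflowing gas always carries the current average oxygen concentration, which gives a simple exponential decay. A minimal sketch (the purge flow rate and starting concentration are hypothetical):

import math

V = 2.0    # tank volume, cubic feet (from the abstract)
Q = 0.5    # argon purge flow, cubic feet per minute (hypothetical)
C0 = 20.9  # starting oxygen percentage of air

def oxygen_percent(t_min):
    """Perfect mixing: dC/dt = -(Q/V)*C, so C(t) = C0*exp(-Q*t/V)."""
    return C0 * math.exp(-Q * t_min / V)

for t in (0, 5, 10, 20, 40):
    print(f"t = {t:3d} min -> O2 = {oxygen_percent(t):.4f} %")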

Effects of Elemental Impurities on TiNiSn Solution Growths. THOMAS BRENNER (Carleton College, Northfield, MN, 55057) PAUL CANFIELD (Ames Laboratory, Ames, IA, 50011)

The intermetallic compound TiNiSn has been of interest to researchers because of its semiconducting and thermoelectric properties. We attempted to determine the effects of constituent element purity on the growth and electrical properties of TiNiSn single crystals.  We grew TiNiSn single crystals from a Sn flux, each time varying the purity of our constituent elements. In order to test the hypothesis that chlorine impurities affect the crystal formation and electrical properties of TiNiSn, we added nickel chloride powder to the starting elements for several growths. Qualitative differences in crystal growth were observed, and secondary compounds were identified by x-ray diffraction whenever possible. Resistivity measurements were made between 2 and 375 K on TiNiSn crystals from each growth, so that the effect of elemental purity on resistivity could be determined. Using the resistivity data, we calculated the semiconducting gap for each sample.  Lower purity Ti led to variation in growth products, but changing Sn and Ni purity did not produce variation. The addition of nickel chloride produced several changes in the growth. For all samples room temperature semiconducting gaps were between 110 and 180 meV. No trends were observed in either gap energy or resistivity with respect to elemental purity.
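
The gap extraction mentioned above conventionally assumes intrinsically activated transport, rho(T) ~ rho0 * exp(Eg / (2 kB T)), so the gap follows from the slope of ln(rho) versus 1/T. A minimal sketch with synthetic data (this is not the authors’ analysis code):

import numpy as np

KB_EV = 8.617e-5  # Boltzmann constant, eV/K

def gap_from_resistivity(T, rho):
    """Fit ln(rho) vs 1/T; the slope is Eg / (2 kB)."""
    slope, _ = np.polyfit(1.0 / T, np.log(rho), 1)
    return 2.0 * KB_EV * slope  # Eg in eV

# Synthetic data with a 150 meV gap recovers the input value:
T = np.linspace(200.0, 375.0, 50)
rho = 1.0e-3 * np.exp(0.150 / (2.0 * KB_EV * T))
print(f"Eg = {1e3 * gap_from_resistivity(T, rho):.0f} meV")  # ~150 meV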

Electron Cloud Modeling for the Positron Damping Ring Wigglers in the International Linear Collider. JENNIFER YU (Cornell University, Ithaca, NY, 14853) C.M. CELATA (Lawrence Berkeley National Laboratory, Berkeley, CA, 94720)

The positron-electron collisions in the ILC must be simple, clean, and precise. However, the presence of an electron cloud in the positron damping ring would lead to a positron beam with unfavorable behavior and position. The research conducted for the Center for Beam Physics examined the formation of the electron clouds using computer simulations, specifically in the wiggler section of the ring. The wiggler section is two hundred meters of magnetic dipoles that produce vertical fields of alternating sign. In the wigglers, the accelerated beam emits intense synchrotron radiation. When this radiation hits the chamber walls, it initiates the build-up of electrons. These primary electrons, trapped in the wigglers, also hit the vacuum walls and make even more electrons, called secondary electrons. The secondary electrons in turn produce electrons as the positron beam attracts the electron cloud and swings it to the opposite wall. The electron cloud can send the beam off track, so that the beam misses its collision or, worse, hits the chamber walls. The electrons can also give the positron beam energy, which can make the beam harder to focus. The research tracked the formation of the electron clouds using Posinst, a benchmarked 2D code, and Warp, a code with both 2D and 3D capabilities. A 3D code is needed to follow the electrons in the changing magnetic fields of the ILC wiggler. However, as a first step, the research aimed to match the results of 2D Posinst-like Warp with Posinst, which has been checked against numerous electron cloud experiments and other benchmarked codes. The agreement of 2D Warp and Posinst demonstrates the accuracy and precision of Warp and will lead to electron cloud modeling with 3D Warp. The computer simulations in 3D Warp will track the electron cloud in the ILC wiggler and will help find limiting parameters for the cloud.

Electron Cyclotron Resonance Ion Source Interlock Design. FRANCISCO RAMIREZ (Yuba Community College, Marysville, CA, 95901) DR. RICHARD PARDO (Argonne National Laboratory, Argonne, IL, 60439)

ATLAS (Argonne Tandem Linac Accelerator System) is a series of machines whose purpose is to accelerate ions and deliver them to several targets. ATLAS has two electron cyclotron resonance (ECR) ion sources. In the event of a failure, several of the ECR source components, which include solenoids, voltage sources, radiofrequency generators, and magnets, can damage themselves as well as other machinery or injure workers around the source. A digital interlock is being designed so that the source cannot damage itself or harm humans working around it. This interlock device is a digital circuit constructed of small electronic circuits called logic gates, which are simple electrically controlled switches. The interlock circuit will receive inputs indicating water flow, temperature, and other conditions, and will shut down the source or the appropriate component if a failure is detected on any of these inputs. A panic button will be provided which will shut down the source in case of an emergency. In addition, a reset button will be included in the interlock system; its purpose is to allow the interlock system to function again after a failure has occurred or the panic button has been pressed. The interlock is being designed in two different ways: TTL (Transistor-Transistor Logic) and relay logic. The TTL circuit is still being designed; the frame and part of the relay logic have already been built.
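
The latching behavior described (trip on any failed input or on the panic button; run again only after reset with healthy inputs) can be sketched in a few lines. Signal names are hypothetical, and the real system is TTL and relay hardware rather than software:

class Interlock:
    """Software model of a latched hardware interlock."""

    def __init__(self):
        self.tripped = False

    def update(self, water_ok, temp_ok, panic, reset):
        if panic or not (water_ok and temp_ok):
            self.tripped = True            # latch the fault
        elif reset and water_ok and temp_ok:
            self.tripped = False           # operator reset clears the latch
        return not self.tripped            # True -> source permitted to run

ilk = Interlock()
print(ilk.update(True, True, panic=False, reset=False))   # True: running
print(ilk.update(False, True, panic=False, reset=False))  # False: tripped
print(ilk.update(True, True, panic=False, reset=False))   # False: still latched
print(ilk.update(True, True, panic=False, reset=True))    # True: reset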

Electronic Structure of Nanowire Arrays. SAM OCKO (Brown University, Providence, RI, 02912) WEI KU (Brookhaven National Laboratory, Upton, NY, 11973)

It has recently been proposed that electrons will localize at the intersections of nanowires. This is a purely quantum effect, originating from the wave nature of electrons, as there is no attractive potential to keep them in the intersection. This phenomenon of localization is very important because it might provide the basis for a functional device which uses localization to exhibit useful properties such as ferromagnetism, anti-ferromagnetic insulation, and perhaps even superconductivity. We investigate the electronic structure of an array of nanowires using software we have written, which uses a self-adaptive bi-orthogonal wavelet basis to model the electrons’ wave functions. Instead of solving Schrödinger’s equation directly and treating the problem as an eigenvalue equation, we use a functional minimization method based on conjugate gradient and steepest descent methods. The program we have developed is easily extensible, as other systems can be treated simply by changing the energy functional, which makes our program able to model many-body effects and solve other quantum mechanical systems. Our program has successfully demonstrated localization and showed that the strength of localization is inversely proportional to the diameter of the wires. In application, a grid of nanowires could possibly be printed on a piece of silicon, with complete control over the properties of the grid. By properly choosing the diameter of the nanowires and the density of the grid, we could control the kinetic energy and electron interactions of the nanowires. By modulating the gate voltage, we can tune the number of electrons in the grid. A grid of nanowires whose properties are chosen carefully might show many interesting electrical properties, including ferromagnetism, anti-ferromagnetic insulation, and perhaps even superconductivity, which we hope to investigate further.
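
The localization at a wire crossing can be reproduced with a much cruder method than the wavelet machinery described above. As an illustrative sketch (the grid size, wire width, and hard-wall model are all assumptions), imaginary-time relaxation of a free particle confined to a "+"-shaped region converges to a ground state peaked at the intersection:

import numpy as np

N, w, tau = 101, 7, 0.2          # grid size, wire half-width, time step
mask = np.zeros((N, N), bool)    # True inside the two crossing wires
c = N // 2
mask[c - w:c + w + 1, :] = True  # horizontal wire
mask[:, c - w:c + w + 1] = True  # vertical wire
mask[0, :] = mask[-1, :] = mask[:, 0] = mask[:, -1] = False  # hard walls

rng = np.random.default_rng(0)
psi = rng.random((N, N)) * mask
for _ in range(15000):           # psi <- psi - tau*H*psi, H = -(1/2)*laplacian
    lap = (np.roll(psi, 1, 0) + np.roll(psi, -1, 0) +
           np.roll(psi, 1, 1) + np.roll(psi, -1, 1) - 4.0 * psi)
    psi = (psi + 0.5 * tau * lap) * mask
    psi /= np.sqrt((psi ** 2).sum())

i, j = np.unravel_index(np.argmax(psi ** 2), psi.shape)
print(f"probability peaks at ({i}, {j}); the intersection is at ({c}, {c})")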

Elemental Analysis of a Shrub-Steppe Soil. RACHAEL KALUZNY (Western Michigan University, Kalamazoo, MI, 49071) JAMES MCKINLEY (Pacific Northwest National Laboratory, Richland, WA, 99352)

Scanning electron microscopy is an important research tool used widely today in areas such as medical evaluation, forensic evidence examination, and scientific research. Electron microscopes use a beam of highly energetic electrons to examine objects on a microscopic scale. This examination can yield topography, morphology, composition, and crystallographic structure. Advanced automation on the ASPEX Personal Scanning Electron Microscope (PSEM) 3025 was used to acquire elemental data on a shrub-steppe soil sample obtained from the Yakima Valley near Sunnyside, WA. The PSEM 3025 was designed for semi-automated imaging and analysis of inorganic specimens in the millimeter to sub-micron range. The shrub-steppe soil was analyzed at 20 kV with a working distance of 18.4 mm and an emission current of 112 µA. The automated run performed energy dispersive x-ray spectroscopy (EDS) on each particle. EDS is a technique based on the characteristic x-ray peaks generated when an electron beam interacts with the specimen; characteristic x-rays are produced for each element present in the region being analyzed. Comparison of the intensities of the x-ray peaks is then used to determine the relative abundance of each element in the analyzed region. A total of 6611 particles were analyzed on the soil sample. Rule files were developed to define membership classes based on chemical properties and elemental ratios, and particles were grouped into the defined classes as the data were acquired. From these results it can be seen that the shrub-steppe sample consists primarily of silicates containing iron, aluminum, and calcium. This is consistent with the composition of silt loam soils in the Yakima Valley.
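
Rule-file classification of the kind described reduces to threshold tests on relative elemental intensities. A minimal sketch (the thresholds and class names are hypothetical, not the actual ASPEX rule files):

def classify(fractions):
    """Assign a particle to a class from its relative EDS intensities."""
    if fractions.get("Si", 0.0) > 0.3:
        for metal in ("Fe", "Al", "Ca"):
            if fractions.get(metal, 0.0) > 0.1:
                return f"{metal}-bearing silicate"
        return "silicate"
    return "other"

print(classify({"Si": 0.45, "Fe": 0.20, "O": 0.35}))  # Fe-bearing silicate
print(classify({"Ca": 0.60, "O": 0.40}))              # other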

Establishing Atmospheric Background Ion Levels for the Stand-Off Detection of Ion Sources. MARC PENALVER AGUILA (Montgomery College, Rockville, MD, 20850) JEFFREY GRIFFIN (Pacific Northwest National Laboratory, Richland, WA, 99352)

Airborne ion counts can be used to estimate the source and intensity of combustive and electrostatic activity. To determine the minimum threshold for stand-off detection of ion sources, it is necessary to establish the background levels of ions in the lower atmosphere. Source detection depends on the ability to distinguish between regular background variations and exceptional activity. Natural occurrences such as the diurnal cycle, clouds passing overhead, or changing weather conditions may all contribute to increased ion formation. Gerdien condensers, which draw a constant stream of air through an electric field, were used for sampling atmospheric ions. All samples were taken through the exhaust of a fume hood. Due to a possible thermal response from the operational amplifiers used to magnify the signal, the electronics were calibrated for temperature. An analysis of the frequency of ion level variation was performed. Abnormal weather phenomena were noted and correlated to ion levels. Finally, ion sources were placed in the hood to determine the sensitivity of the gerdiens. The electronics were found to have a minimal thermal response within the range of temperatures observed during the experiments; no multiplier or offset was needed to normalize the data. Fourier analysis revealed that the diurnal cycle was the only regular period linked to a variation in ion levels. A significant change in ion levels was associated with a thunderstorm. Moreover, the gerdiens were found to be highly sensitive to ions drawn through the fume hood. Nearby ion sources could easily be detected by the gerdiens, despite regular temperature and diurnal variations. The range of detection of known activity merits further investigation, as does the discrimination of sources by ion polarity. Possible applications of ion sensors include off-site and stand-off detection of motor vehicles, abnormal laboratory conditions, and other ionizing sources.

Experimental Study of Effects due to Perturbations on Boundary Conditions to Couette Flows. FREDERICK MANLEY (University of Illinois, Champaign, IL, 61820) HANTAO JI (Princeton Plasma Physics Laboratory, Princeton, NJ, 08543)

When fluid flows between two independently rotating cylinders at low aspect ratios (the ratio of the height to the difference in radii), the flow is seen to deviate substantially from ideal Couette flow due to Ekman circulation along the end caps. In the case where the end caps are attached to the outer cylinder, fluid with less angular momentum is advected into the bulk flow, which decreases the mean velocity as predicted by the ideal case. In order to study the stability of Ekman circulation, an experiment was devised to perturb the Ekman boundary layer by modifying the inner cylinder. Water flows between an aluminum inner cylinder and acrylic outer cylinder and its velocity is measured using a Laser Doppler Velocimeter (LDV) scanned radially from underneath to obtain 2-D velocity profiles. The robustness of the Ekman layer was studied against perturbations of varying magnitudes. Though perturbing the inner cylinder boundary did produce profiles closer to the ideal Couette case, the Ekman layer proved to be more robust than predicted. Both a 7mm offset and four o-rings placed on the inner cylinder were needed to produce profiles resembling the ideal Couette case. A new apparatus will be built with a larger aspect ratio to observe the effects of similar perturbations on the less stable Ekman flow. In the future, less viscous fluids may be used to determine the effects of larger Reynolds numbers on Ekman stability.
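
For reference, the ideal Couette profile against which the measured profiles are compared is the standard solution for azimuthal flow between rotating cylinders; a minimal sketch (the radii and rotation rates below are hypothetical):

import numpy as np

def ideal_couette(r, r1, r2, omega1, omega2):
    """u_theta(r) = A*r + B/r between cylinders of radii r1 < r2."""
    A = (omega2 * r2**2 - omega1 * r1**2) / (r2**2 - r1**2)
    B = (omega1 - omega2) * r1**2 * r2**2 / (r2**2 - r1**2)
    return A * r + B / r

r = np.linspace(0.07, 0.20, 5)                  # radii, m
print(ideal_couette(r, 0.07, 0.20, 10.0, 1.0))  # boundary rates in rad/s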

Extensions to DivGeo, a Graphical Tool for Editing 2D Edge Plasma Quasi-Orthogonal Computational Meshes. ALAN CHIN (Princeton University, Princeton, NJ, 08540) DAREN P. STOTLER (Princeton Plasma Physics Laboratory, Princeton, NJ, 08543)

Transport of plasma and neutral particles across magnetic flux surfaces in tokamak fusion experiments is a highly complex dynamical system of much practical interest in designing efficient fusion reactors. Codes that have been written to simulate the behavior of such systems include B2 and Eirene, used to model plasma and neutral transport behavior, respectively, in the divertors of ITER, and DEGAS 2, used to model neutral transport during Gas Puff Imaging experiments on the National Spherical Torus eXperiment (NSTX); both approximate the plasma region by 2D computational meshes that are designed to be quasi-orthogonal to the poloidal magnetic flux surfaces inside the tokamak. Because the distribution of mesh cells and the topology of the mesh are specific to each experiment, a customized mesh must be created for each study undertaken. DivGeo (DG) is a graphical user interface used, in combination with mesh-generating codes such as Carre and Sonnet, to create and modify such meshes. Using the C programming language and GNU utilities in a Red Hat Linux environment, the source code of DG was modified and subjected to testing by the author and users of DG at Princeton Plasma Physics Laboratory (PPPL) and ITER. After the modifications, DG could be compiled using the freely available Open Motif 2.x graphics library, which allows it to run reliably on the Linux machines at PPPL. In addition, several new features were added to DG, including an auto-save feature, the ability to recognize concave mesh cells and the segment of the reactor determining the outer bound of the mesh, and the ability to view the mesh at arbitrary angles and aspect ratios. Together, these improvements allow precisely tailored and general meshes to be generated more quickly and easily, accelerating the progress of computational studies on tokamak plasmas.

Fast Track Finding in the ILC's Proposed SiD Detector. DAVID BAKER (Carnegie Mellon University, Pittsburgh, PA, 15289) DR. NORMAN GRAF (Stanford Linear Accelerator Center, Stanford, CA, 94025)

A fast track finder is presented which, unlike its more efficient but more computationally costly O(n³)-time counterparts, tracks particles in O(n) time (for n the number of hits). Developed as a tool for processing data from the ILC’s proposed SiD detector, this fast track finder began with the algorithm proposed by Pablo Yepes in 1996 [1], adjusted to accommodate the geometry of the SiD detector. First, space within the detector is voxellated, with hits assigned to voxels according to their r, φ, and η coordinates. A hit on the outermost layer is selected, and a "sample space" is built from the hits in the selected hit’s surrounding voxels. The hit in the sample space with the smallest distance to the first is then selected, and the sample space is recalculated for this hit. This process continues until the list of hits becomes large enough, at which point the helical circle in the x, y plane is conformally mapped to a line in the x’, y’ plane, and hits are chosen from the sample spaces of the previous fit by selecting the hits which fit a line to the previously selected points with the smallest χ². Track finding terminates when the innermost layer has been reached or no hit in the sample space fits those previously selected with an acceptable χ². Again, a hit on the outermost layer is selected, and the process repeats until no assignable hits remain. The algorithm proved to be very efficient on artificial diagnostic events, such as one hundred muons scattered at momenta of 1 GeV/c to 10 GeV/c. Unfortunately, when tracking simulated events corresponding to actual physics, the track finder’s efficiency decreased drastically (mostly due to signal noise), though future data cleaning programs could noticeably increase its efficiency on these events.
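
The conformal-mapping step above exploits the fact that a circle passing through the origin maps to a straight line under u = x/(x² + y²), v = y/(x² + y²), so circle fitting reduces to cheap line fitting. A quick numerical check (the hit positions are synthetic):

import numpy as np

t = np.linspace(0.3, 2.5, 30)
x = 3.0 + 5.0 * np.cos(t)   # hits on a circle of radius 5 through the origin
y = 4.0 + 5.0 * np.sin(t)

r2 = x**2 + y**2
u, v = x / r2, y / r2       # conformal map
slope, intercept = np.polyfit(u, v, 1)
print(slope, intercept)     # -0.75, 0.125: the circle 1 = 6u + 8v, now a line
print(np.abs(v - (slope * u + intercept)).max())  # ~0: points are collinear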

Fast Vertexing Studies for the STAR Experiment at RHIC. MICHAEL ERICKSTAD (University of Minnesota, Minneapolis, MN, 55414) HOWARD MATIS (Lawrence Berkeley National Laboratory, Berkeley, CA, 94720)

On-line or fast vertexing (the real-time determination of the position of a collision) can be used as a trigger in data acquisition. Triggers are used in experiments to select specific classes of events and to reduce the storage of uninteresting events. Fast vertexing, when combined with fast secondary vertexing, also enables a detector to focus on certain physics topics (e.g., B-tagging, a method of determining the presence of a bottom quark based on decay length, the distance of travel before decay). The goal of this project was to test the effectiveness of a few potential fast vertexing algorithms for use with the proposed Heavy Flavor Tracker (HFT) detector at the Solenoidal Tracker at RHIC (Relativistic Heavy Ion Collider) (STAR) experiment. Simulations, which use simulated event data to predict the efficiency of these algorithms, were created; this code was written in the C++ programming language. The layers of the HFT have a cylindrical geometry and coordinate system with the beam line as the Z-axis, and the collisions take place along the beam line. The most effective algorithm tested separates the hits (on two detector layers) into groups in Φ and then uses all combinations of two hits (one on each layer) to fill and fit a histogram of the Z-intercept values for a line through the two points. It was determined that this method of linear approximation can measure the collision vertex to a σ of 430 ± 20 µm for central Au + Au events, 950 ± 20 µm for minimum-bias Au + Au collisions, and 6,100 ± 200 µm for p + p events, when used with the Silicon Strip Detector and the Intermediate Silicon Tracker. It was also found that using this method with the two layers of the Pixel detector can approximate a vertex to 113 ± 7 µm for central Au + Au events, 507 ± 30 µm for minimum-bias Au + Au collisions, and 1310 ± 90 µm for p + p events. Each simulation had incident particles at 200 GeV/A. A Pixel detector can achieve these results with this algorithm if it detects particles with a suitable technology possessing no significant pile-up. These results indicate that this algorithm has potential for implementation in the STAR experiment, for quick identification of the vertex as well as for use in B-tagging and other decay-length-based particle identification methods.
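
The histogram-of-intercepts idea above can be sketched compactly: each two-hit combination defines a line in the (r, z) plane whose intercept at r = 0 is histogrammed, and the peak estimates the vertex. In this toy version the layer radii, hit resolution, and track distribution are hypothetical, and only true hit pairs (no combinatoric background) are used:

import numpy as np

rng = np.random.default_rng(1)
r1, r2 = 1.5, 5.0                           # layer radii, cm
z_true = 0.8                                # true vertex position, cm
theta = rng.uniform(0.3, np.pi - 0.3, 200)  # track polar angles
z1 = z_true + r1 / np.tan(theta) + rng.normal(0.0, 0.01, theta.size)
z2 = z_true + r2 / np.tan(theta) + rng.normal(0.0, 0.01, theta.size)

z0 = z1 - r1 * (z2 - z1) / (r2 - r1)        # line through both hits at r = 0
counts, edges = np.histogram(z0, bins=200, range=(-10.0, 10.0))
peak = 0.5 * (edges[np.argmax(counts)] + edges[np.argmax(counts) + 1])
print(f"vertex estimate: {peak:.2f} cm (true: {z_true} cm)")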

Finding Variable Stars. DANIEL WONG (University of California, Berkeley, Berkeley, CA, 94709) CECILIA ARAGON (Lawrence Berkeley National Laboratory, Berkeley, CA, 94720)

The Nearby Supernova Factory (SNF) seeks to observe several hundred type Ia supernovae, but its search is partially impeded by the presence of variable stars in the images it collects. To aid in the search, we tried to develop a catalog of known variable stars using the archival data, consisting of over 1 million images with 1.5 billion photometric measurements. By comparing magnitudes measured on different dates, variability could be detected and a classification could be assigned to each star. Early versions of our selection algorithm were able to detect variability in a subset of stars that we gave as input. Upon closer inspection, we found that the apparent variability in the candidates was most likely due to calibration errors rather than intrinsic variability. To calibrate the data we supplied to our algorithm, we used the United States Naval Observatory catalog as a reference. We supposed that the difference between our instrumental magnitudes (a quantity known as mag_i) and those listed in the catalog (known as mag_c, to differentiate from mag_i) would be approximately constant within each image. Upon reexamination, we found that a non-constant relationship existed between the difference in magnitude and mag_c. To describe this relationship, we developed an iterative fitting technique to be carried out separately for each image. We did not attempt to develop a physical model of this relationship, as an empirical understanding was sufficient for our variability study. To evaluate our new calibration technique, we reexamined the candidates previously selected by our algorithm. We found that many of the oddities of the original light curves were removed as a result of the new calibration. To see if the new technique provided consistent improvement, we compared the standard deviation in magnitude for each star under the original calibration technique with that under the new technique. The reasoning is that most stars are not variable; hence, the best calibration technique should minimize the standard deviation through all magnitude ranges. We would have considered our new technique a success if it decreased the spread in all magnitude ranges. Although there was a marked improvement in the one case we examined closely, the overall improvement was not as significant. Further investigation of why this occurred should be conducted in the future.
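
An iterative per-image fit of the kind described, with outlier rejection so that genuinely variable stars do not bias the calibration, might look like the following minimal sketch (the polynomial order, clipping threshold, and synthetic data are assumptions):

import numpy as np

def calibrate_image(mag_i, mag_c, order=2, nsig=3.0, niter=5):
    """Fit mag_i - mag_c as a polynomial in mag_c, sigma-clipping outliers."""
    keep = np.ones(mag_c.size, dtype=bool)
    coef = np.zeros(order + 1)
    for _ in range(niter):
        coef = np.polyfit(mag_c[keep], (mag_i - mag_c)[keep], order)
        resid = (mag_i - mag_c) - np.polyval(coef, mag_c)
        keep = np.abs(resid) < nsig * resid[keep].std()
    return coef

rng = np.random.default_rng(2)
mag_c = rng.uniform(12.0, 18.0, 500)
mag_i = mag_c + 0.2 + 0.02 * (mag_c - 15.0) ** 2 + rng.normal(0.0, 0.03, 500)
print(calibrate_image(mag_i, mag_c))  # ~ [0.02, -0.6, 4.7]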

Flux Jumps in Nb3Sn Superconducting Accelerator Magnets and Implications for a Current Dependent Quench Protection System. CONOR DONNELLY and SAID RAHIMZADEH-KALALEH (Embry-Riddle Aeronautical University, Daytona Beach, FL, 32114) MICHAEL TARTAGLIA (Fermi National Accelerator Laboratory, Batavia, IL, 60510)

With the upcoming inauguration of the Large Hadron Collider (LHC), the practical limit in the performance of Niobium Titanium (NbTi) superconductors will be reached based on the properties of the material. Fermi National Accelerator Laboratory has been developing a new generation of superconducting accelerator magnets based on Niobium Tin (Nb3Sn) intended for a luminosity upgrade to the LHC. The performance of these magnets has been found to be below the expected level due to thermomagnetic instabilities present in the superconductor. In order to develop better magnets for future machines, it is essential to fully understand these instabilities, which are characterized by fast voltage excursions or spikes. For this purpose, a new software application was developed to automate the analysis of voltage spikes in superconducting Nb3Sn magnets. Using the new software, the current dependence of spike amplitudes was determined, which demonstrated the possibility to use a current dependent quench protection threshold. Such a threshold could avoid premature system trips at low current due to non-quenching voltage spikes. This work presents quantitative and analytical studies on voltage spikes arising from flux jumps in superconducting Nb3Sn magnets as well as the features of the software developed for the analysis.

Frequency Quadrupled DUV Laser for Ytterbium 2+ Spectroscopy. JOHN OGREN (The University of New Mexico, Albuquerque, NM, 87131) JUSTIN TORGERSON (Los Alamos National Laboratory, Los Alamos, NM, 87545)

In the case presented, Ytterbium 2+ (Yb2+) was being sought in a linear RF trap. In order to detect and then laser cool the Yb2+, the 1S0 to 3P1 transition at 252 nm was chosen due to its fast, 231 ns transition period. In order to excite this transition, a laser system at 252 nm was needed that was widely tunable and had a narrow linewidth. There were no commercially produced laser systems that fulfilled these requirements. The proposed solution to this problem was to frequency quadruple a commercially available Titanium-Sapphire (Ti-Sapph) laser from 1008 nm to 252 nm. The titanium-doped sapphire crystal was within a monolithic block resonator (MBR) and, when lasing, was tunable over a wide range of wavelengths (approximately 700 nm to 1100 nm) with a linewidth of approximately 100 kHz. This provided the flexibility needed for the spectroscopic and cooling applications. A Potassium Niobate (KNbO3) crystal was mounted within the MBR cavity and frequency doubled the initial Ti-Sapph beam from 1008 nm to 504 nm. The resulting beam was then doubled again using an external doubling cavity with a β-Barium Borate (BBO) crystal. Currently, 12 W from a Diode Pumped Solid-State Continuous Wave (DPSS CW) laser, operating at 532 nm, pumps the MBR, which produces roughly 50 mW at 503 nm through the KNbO3. Recently, 2 mW of power has been obtained from the BBO. In addition, the linewidth is no more than 400 kHz and is tunable over several GHz. These excellent results encourage the use of frequency quadrupling techniques to retain the original characteristics of commercial lasers while fitting the system to the specific needs and goals of a project.

Functionalized Particles As Templates for Nanoparticles Self-Assembly. CHRIS KNOROWSKI (Virginia Tech, Blacksburg, VA, 24061) DR. ALEX TRAVESSET (Ames Laboratory, Ames, IA, 50011)

Block copolymer solutions or melts exhibit an amazing variety of phases and structures, with vast possibilities for the design of novel materials. A promising design strategy is the transfer of a polymer structure to an inorganic component present in solution, where the goal is to obtain a self-assembled inorganic crystal exhibiting the mesoscopic order imposed by the polymeric phase, which serves as a template. Establishing general conditions for successful templating is therefore a major theoretical challenge, with fundamental implications for both basic and applied science. In this paper we investigate general conditions leading to successful templating by considering a generic pluronic coexisting with inorganic particles, herein referred to as nanoparticles. We assume that nanoparticles tend to crystallize and thus attract each other with a characteristic attractive energy N. We also functionalize the ends of the polymer with an affinity for the nanoparticles to facilitate templation, thus introducing a new energy scale F, the energy gain for a nanoparticle binding to the functionalized group. We use a short ABA triblock copolymer where the A blocks are hydrophilic and the B blocks are hydrophobic, and model the water implicitly. Using coarse-grained MD simulations, we investigate the region where this triblock copolymer forms a hexagonal phase. When the nanoparticles are added to the system, an extraordinary variety of exotic phases is realized as we vary the attraction energies N and F. Over a large range we see a double gyroid, with the nanoparticles forming one gyroid (space group Ia-3d) and the hydrophobic blocks forming another gyroid, the two interlocking around each other. Over a small region we see a perforated lamellar phase with hexagonal ordering, with the nanoparticles and the hydrophobic blocks forming alternating planes; the hydrophobic blocks form the perforated lamellae, and the nanoparticle lamellae connect through the perforations in the hydrophobic planes. As we continue to increase both N and F we see a noncentrosymmetric double gyroid (space group I4_132). There are also several areas where coexistence between phases occurs, and many regions where further exploration could reveal new phases of the diagram.

HELIOS Detector Simulations with Geant4. NATANIA ANTLER (Massachusetts Institute of Technology, Cambridge, MA, 02139) BIRGER BACK (Argonne National Laboratory, Argonne, IL, 60439)


HHFW Propagation and Heating in NSTX. JEFFREY PARKER (Cornell University, Ithaca, NY, 14853) CYNTHIA PHILLIPS (Princeton Plasma Physics Laboratory, Princeton, NJ, 08543)

Recent experiments on the National Spherical Torus Experiment (NSTX), a fusion research device, show that the high harmonic fast wave (HHFW) core heating efficiency depends on the antenna phasing and plasma conditions. Power losses in the edge due to rf sheath formation or other parasitic absorption processes could occur if the waves propagate nearly parallel to the wall in the edge regions and intersect nearby vessel structures. To investigate this possibility, the 3D HHFW propagation in NSTX has been studied both analytically and numerically with the ray tracing code GENRAY. Initial calculations show that for certain values of the launched parallel wave number and magnetic field, the waves in NSTX are launched at a shallow angle to the vessel wall. In contrast, for ion cyclotron radio frequency (ICRF) heating in the Alcator C-Mod device at MIT or the not yet built ITER test reactor, the initial ray trajectories tend to be more radially oriented. Comparisons of the GENRAY results with 2D TORIC full wave simulations for the power deposition will also be discussed.

How to Control a 7000 Ton Giant with Your Fingertips: An Interactive 3D Visualization of the ATLAS Experiment. EMILY GREENBERG (Dartmouth College, Hanover, NH, 03755) MICHAEL BARNETT, JOAO PEQUENAO (Lawrence Berkeley National Laboratory, Berkeley, CA, 94720)

ATLAS, the particle detector currently under construction at CERN, Geneva, Switzerland, is scheduled to begin observing collisions between protons accelerated by the LHC (Large Hadron Collider) in 2008. In an effort to make the exciting events occurring at CERN accessible to students, interested members of the public, and fellow scientists, the ATLAS Collaboration is working to develop AMELIA (ATLAS Multimedia Educational Lab for Interactive Analysis), a real-time educational 3D visualization program featuring the ATLAS detector. Supported by the DOE, NSF, and CERN, AMELIA is expected to enter classrooms with the debut of the ATLAS detector next year. Before interacting with a 3D detector, AMELIA users will be able to learn basic concepts about particle physics and ATLAS through dynamic multimedia and written references. They will then enter a virtual environment where they can visualize an event and select tracks produced by the detected particles in order to analyze real and simulated collision data. After users have analyzed collision data and have found an interesting event, they will be invited to share their findings with the scientific community at a website that displays the analyses done by AMELIA users and provides links to related sites and web-based educational tools. AMELIA is being developed in C++ using wxWidgets, Mozilla, and the Irrlicht 3d Engine, a library typically used in the computer video game industry for real-time 3D graphics. Progress was made on the development of a streamlined interactive interface and menu, as well as the conceptualization of a pedagogical structure to ensure that the program will be comprehensible and engaging for users of differing interest levels and backgrounds.

ILC Electron Source Injector Simulations. MANU LAKSHMANAN (Cornell University, Ithaca, NY, 14853) AXEL BRACHMANN (Stanford Linear Accelerator Center, Stanford, CA, 94025)

As part of the global project aimed at proposing an efficient design for the ILC (International Linear Collider), we simulated possible setups for the electron source injector, which will provide insight into how the electron injector for the ILC should be designed in order to efficiently accelerate the electron beams through the bunching system. This study uses three types of software: E-Gun to simulate electron beam emission, Superfish to calculate solenoidal magnetic fields, and GPT (General Particle Tracer) to trace charged particles after emission through magnetic fields and subharmonic bunchers. We performed simulations of the electron source injector using various electron gun bias voltages (140–200 kV), emitted beam lengths (500 ps – 1 ns) and radii (7–10 mm), and electromagnetic field strengths of the first subharmonic buncher (5–20 MV/m). The results of the simulations show that for the current setup of the ILC, a modest electron gun bias voltage (~140 kV) is sufficient to achieve the required bunching of the beam in the injector. Extensive simulations of parameters also involving the second subharmonic buncher should be performed in order to gain more insight into possible efficient designs for the ILC electron source injector.

Imaging Diagnostic Systems for the Spallation Neutron Source. KATHLEEN GOETZ (Middlebury College, Middlebury, VT, 05753) TOM SHEA (Oak Ridge National Laboratory, Oak Ridge, TN, 37831)

Since the start of commissioning, imaging beam diagnostics have been utilized widely at the Spallation Neutron Source (SNS), both as quick, intuitive diagnostic measures and for the calibration of other diagnostic instrumentation. Because other imaging systems such as the Video Foil Monitor proved indispensable, there was a drive to create new systems such as the temporary Target Viewscreen (TVS) and the Small Angle Neutron Scattering (SANS) Neutron Beam Stop Monitor (NBSM). My work on the TVS was performed over three semesters, with this summer’s focus being on system documentation. Although the temporary TVS, a system that my mentor and I designed and implemented, has already served its purpose and has been decommissioned, my work on the project continues in the form of a paper and a presentation at the International Accelerator Applications conference held at the end of July 2007. I am also part of a team that is working on plans for a second-generation Target Viewscreen to be implemented next year. The NBSM is a new project, with initial design work to be completed by the end of August 2007. Earlier this summer, I performed calculations to estimate the light that will be collected by the second-generation TVS and NBSM optics. Presently, my work on the NBSM includes a yet-to-be-completed experiment at HFIR to investigate the types of optics required for a successful system. I am also currently designing the experimental setup.

Implicit Simulation for Beam Problems. KAM MAK (Contra Costa College, San Pablo, CA, 94806) ALEX FRIEDMAN (Lawrence Berkeley National Laboratory, Berkeley, CA, 94720)

Experimental facilities that generate particle beams are expensive, while computers are getting more powerful. Computer simulation using the methods of computational plasma physics has become a reliable and effective approach to understanding beam behavior. Such simulation enables analysis of more realistic situations than are analytically tractable, so that researchers can explain the behavior of an existing machine, improve its performance, and predict the performance of a future machine. The research group in the Heavy Ion Fusion Science Virtual National Laboratory (HIFS-VNL) at Lawrence Berkeley National Laboratory (LBNL) develops and uses plasma and beam simulation programs. The most adaptable and reliable tools for the study of plasma behavior are Particle In Cell (PIC) simulations, in which a few thousand to many million particles are used to model the beam and/or plasma. These particles follow the Newton-Lorentz equations of motion in fields governed by Maxwell’s equations. However, traditional PIC codes require use of a time step shorter than the plasma period to maintain numerical stability. When plasma oscillations are not important and a larger time step is required, implicit methods can maintain the stability and model the macroscopic behavior. In such methods, the particle positions at the advanced time level depend on their accelerations due to the electric field at that time level. But that field itself depends on the density of particles at their new positions. A direct method for obtaining this field solves a modified field equation which takes account of the effect of that field on itself through the particle motion. After the predicted field is known, the particles are advanced to the new time level. In this project we combined several old codes into a modern 1-D implicit test-bed, and made the new code user-steerable using the Python language, linked to Fortran by the Forthon system. We then studied the performance of various algorithms on a plasma expansion problem, and studied the behavior of a test electron in an electrostatic potential well that models the well caused by an ion beam. Regimes of reliable algorithmic performance were clarified.
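
The essence of the implicit step described above can be illustrated with a minimal Python sketch (a toy under simplified assumptions, not the HIFS-VNL test-bed code): for a single particle in a linear electrostatic well, the backward-Euler update depends on the force at the new position, but the step can be solved directly, and it stays stable at time steps far beyond the explicit limit.

    # Direct implicit push for one particle in a harmonic well (force = -k*x).
    # The implicit step x_new = x + dt*v_new, v_new = v - dt*k*x_new is solved
    # algebraically; an explicit push at the same oversized dt is shown for contrast.
    k, dt, steps = 1.0, 4.0, 200          # dt far above the explicit stability limit
    x_i, v_i = 1.0, 0.0                   # implicit (backward-Euler) particle
    x_e, v_e = 1.0, 0.0                   # explicit (forward-Euler) particle
    for _ in range(steps):
        x_i = (x_i + dt * v_i) / (1.0 + k * dt**2)
        v_i = v_i - dt * k * x_i
        x_e, v_e = x_e + dt * v_e, v_e - dt * k * x_e
    print(f"implicit |x| = {abs(x_i):.3e}, explicit |x| = {abs(x_e):.3e}")

The implicit trajectory remains bounded for any dt, while the explicit one diverges, which is why implicit movers permit time steps longer than the plasma period when plasma oscillations need not be resolved.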

Improved Calculations of Particle Orbit Times in Tokamaks. ALEXANDER EGAN (University of Pennsylvania, Philadelphia, PA, 19104) JONATHAN MENARD (Princeton Plasma Physics Laboratory, Princeton, NJ, 08543)

Stabilizing the resistive wall mode (RWM) is important to maximize the plasma pressure in tokamaks and spherical tori. Rotational stabilization of the RWM is predicted from kinetic damping theory to depend strongly on particle bounce and transit times. Previous analytic calculations of bounce and transit times have assumed high aspect ratio and circular flux surfaces. For the low aspect ratio and strongly shaped plasmas of the National Spherical Torus Experiment, recently developed calculations of the particle orbit times in general geometry find that the commonly used analytic approximation is inaccurate by as much as a factor of two. However, the analytic formula is convenient since it is based on a relatively simple elliptic integral function. General geometry extensions to the existing analytic theory are being pursued for RWM stability and other applications. It is expected that a short series of elliptic integral terms added to the current model will concisely capture the aforementioned deviation. This simple form would greatly reduce the computational overhead currently required for accurate bounce and transit time calculations. Applications of this result will include the enhancement of RWM modeling in the widely used MARS stability code.
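
For reference, the convenient analytic form mentioned above reduces, in the high-aspect-ratio circular-flux-surface limit, to a bounce time proportional to the complete elliptic integral of the first kind. A minimal sketch (all prefactors normalized away, so only the shape of the dependence on the trapping parameter is shown) is:

    # Normalized trapped-particle bounce time vs. trapping parameter kappa in
    # the standard high-aspect-ratio approximation: tau ~ K(kappa), which
    # diverges logarithmically at the trapped/passing boundary (kappa -> 1).
    import numpy as np
    from scipy.special import ellipk      # SciPy's ellipk takes m = kappa**2

    for kappa in np.linspace(0.05, 0.99, 5):
        print(f"kappa = {kappa:.2f} -> normalized bounce time = {ellipk(kappa**2):.3f}")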

Improving LER Coupling and Increasing PEP-II Luminosity with Model-Independent Analysis. LACEY KITCH (Massachusetts Institute of Technology, Cambridge, MA, 02139) YITON YAN (Stanford Linear Accelerator Center, Stanford, CA, 94025)

The PEP-II storage ring at SLAC houses electrons (in the High-Energy Ring, or HER) and positrons (in the Low-Energy Ring, or LER) for collision. The goal of this project was to improve the linear optics of the LER in order to decrease coupling, thereby decreasing emittance and increasing luminosity. To do this, we first took turn-by-turn BPM (Beam Position Monitor) data of a single positron bunch at two betatron resonance excitations, extracted orbits from these data using Model-Independent Analysis, and from these orbits formed a virtual model of the accelerator. We then took this virtual model and found an accelerator configuration which we predicted would, by creating vertical symmetric sextupole bumps and adjusting the strengths of several key quadrupole magnets, improve the coupling and decrease the emittance in the LER. We dialed this configuration into the LER and observed the coupling, emittance, and luminosity. Coupling immediately improved, as predicted, and the y emittance dropped by a dramatic 40%. After the HER was adjusted to match the LER at the Interaction Point (IP), we saw a 10% increase in luminosity, from 10.2 × 10^33 cm^-2 s^-1 to 11.2 × 10^33 cm^-2 s^-1, and achieved a record peak specific luminosity.
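
The orbit-extraction step of Model-Independent Analysis rests on a singular value decomposition of the turn-by-turn BPM matrix. The toy sketch below (synthetic data and an assumed tune, not the PEP-II analysis code) shows how the dominant betatron modes stand out from noise:

    # SVD of a synthetic turn-by-turn BPM matrix: a single excited betatron
    # resonance appears as a pair of dominant singular values (sine/cosine
    # modes); the corresponding rows of Vt are the spatial orbit patterns.
    import numpy as np

    turns, bpms, tune = 1024, 100, 0.31            # tune value is illustrative
    rng = np.random.default_rng(0)
    phases = rng.uniform(0, 2 * np.pi, bpms)       # stand-in BPM phase advances
    t = np.arange(turns)[:, None]
    data = np.cos(2 * np.pi * tune * t + phases)
    data += 0.05 * rng.standard_normal((turns, bpms))

    U, s, Vt = np.linalg.svd(data - data.mean(axis=0), full_matrices=False)
    print("leading singular values:", np.round(s[:4], 1))
    orbit_modes = Vt[:2]      # spatial patterns that feed the virtual model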

Ion Beam Analysis for the Investigation of Bozzolo-Ferrante-Smith Predictions of Interface Stability. KASEY LUND (Montana State University, Bozeman, MT, 59715) SHUTTHA SHUTTHANANDAN (Pacific Northwest National Laboratory, Richland, WA, 99352)

The world of nanotechnology is upon us. Today there are a myriad of techniques for growing thin films only nanometers thick. With all the knowledge we have of the laws of physics, we still do not fully understand the interactions of thin metal-metal structures. Currently there is a large effort to grow films with flat, chemically abrupt interfaces that would be applicable to industries making thin film sensors and magnetic data storage devices. One model, known as the Bozzolo-Ferrante-Smith (BFS) model, predicts that a thin layer of either titanium or vanadium can be used to suppress the interdiffusion of aluminum and iron at the Al-Fe interface. Studies of Ti behavior have already been performed; however, little is known about the behavior of V. In order to contribute more information to this comprehensive model that would reliably predict atomic behavior at the nanoscale, a thin layer of vanadium was deposited between iron and aluminum. Using an RF magnetron sputtering chamber, different combinations of Al, Fe, and V layers were deposited onto a silicon substrate. The samples were analyzed with Rutherford Backscattering Spectroscopy (RBS) to identify the thickness of the layers and to look for interdiffusion between layers, and with X-ray Reflectivity (XRR) to complement the RBS data. The samples were then annealed for various times and reanalyzed with the same techniques. From the RBS and XRR it can be seen that diffusion occurs at the Al-Fe interface during growth and that the diffusion layer increases after annealing. When Ti was placed between Al and Fe, the Ti hindered the interdiffusion of Al and Fe before and after annealing up to 350 °C. However, the V did little to suppress the interdiffusion before or after annealing. The BFS model correctly predicted the behavior of the Al/Ti/Fe structure, but it was incorrect with regard to the behavior of V. With these results it can be seen that the BFS predictions are not completely correct. Future experiments will be conducted to further improve the BFS calculations.

Large Hadron Collider. BOYAN TABAKOV (University of California, Berkeley, Berkeley, CA, 94720) DR. WEI-MING YAO (Lawrence Berkeley National Laboratory, Berkeley, CA, 94720)

As the Large Hadron Collider is due to start operating in 2008, final performance tests are carried out to confirm the working parameters of its components. The alignment of the middle disk of Endcap A of the Pixel Detector of ATLAS is examined using data from test runs with cosmic muons. In the data, the x and y coordinates of the points of intersection of a particle trajectory with the disks were previously recorded. Based on records for the first and last disks, the position of an expected intersection point on the middle disk is calculated. The differences Δx and Δy between the coordinates of the expected and the actual intersection points are attributed to a hypothetical misalignment. With the help of geometrical arguments, a linear transformation that uses as parameters the offsets x0, y0, and z0 and the displacements due to rotations α, β, and γ about the principal axes is developed to represent the misalignment. If the transformation is applied to the actual coordinates of an intersection point on the middle disk with the right parameter values, the actual point would be sent into the expected point. Hence, the values for the six parameters that provide the closest match between actual data and expected coordinates can be found from Δx and Δy by χ² minimization. The preliminary conclusion is that the disk is perfectly aligned. The relatively high systematic uncertainties and χ² values are subject to current research.
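
A hedged sketch of this minimization (synthetic hit positions and small-angle rotations, not the ATLAS analysis code) fits the six rigid-body parameters so that transformed hit positions best match their expected counterparts; for simplicity the toy residuals use all three coordinates rather than only Δx and Δy.

    # Fit offsets (x0, y0, z0) and small-angle rotations (alpha, beta, gamma
    # about the x, y, z axes) of a rigid disk by least squares on residuals
    # between transformed and expected intersection points.
    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(1)
    pts = rng.uniform(-50, 50, (200, 3))                  # toy hit positions (mm)
    true = np.array([0.3, -0.2, 0.5, 1e-3, -2e-3, 5e-4])  # injected misalignment

    def transform(p, params):
        x0, y0, z0, a, b, g = params
        R = np.array([[1.0, -g,   b  ],   # first-order rotation matrix
                      [g,    1.0, -a ],
                      [-b,   a,   1.0]])
        return p @ R.T + np.array([x0, y0, z0])

    expected = transform(pts, true) + 0.05 * rng.standard_normal(pts.shape)
    fit = least_squares(lambda q: (transform(pts, q) - expected).ravel(),
                        np.zeros(6))
    print("recovered parameters:", np.round(fit.x, 4))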

Long-term X-ray Variability of NGC 4945. AMARA MILLER (University of California, Davis, Davis, CA, 95616) GRZEGORZ MADEJSKI (Stanford Linear Accelerator Center, Stanford, CA, 94025)

Though short-term X-ray variability has been studied for the active galaxy NGC 4945, long-term studies promise to contribute to our understanding of the processes involved in accretion onto supermassive black holes. In order to understand the relationship between black hole mass and breaks in the power spectral density (PSD), the long-term X-ray variability of NGC 4945 was studied over the energy range 8-30 keV. Observations occurred over the year 2006 using the Rossi X-ray Timing Explorer. The data were reduced using the FTOOLS package, most notably the scripts Rex and faxbary. Light curves were produced and a PSD was obtained using a Fast Fourier Transform algorithm. Preliminary studies of the light curve show greater X-ray variability at higher frequencies. This result complements previous studies of NGC 4945 by Martin Mueller. However, the PSD produced requires further study before accurate results can be obtained. A way to account for the window function of the PSD must be found before the behavior at lower frequencies can be studied with accuracy and the relationship between black hole mass and the break in NGC 4945's PSD can be better understood. Further work includes exploration into ways to subtract the window function from the PSD, as well as a closer analysis of the PSD produced by averaging the data into logarithmic bins. A better way to bin the data should be considered so that the window function is minimized.
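
A minimal sketch of the PSD-with-logarithmic-binning step (synthetic red-noise light curve; the real analysis must also contend with the window function of the actual sampling):

    # FFT-based PSD of an evenly sampled light curve, averaged into
    # logarithmically spaced frequency bins.
    import numpy as np

    n, dt = 4096, 1.0
    rng = np.random.default_rng(2)
    flux = np.cumsum(rng.standard_normal(n))       # crude red-noise light curve
    flux -= flux.mean()

    freq = np.fft.rfftfreq(n, dt)[1:]
    psd = (np.abs(np.fft.rfft(flux))**2)[1:] * 2.0 * dt / n   # one common norm

    edges = np.logspace(np.log10(freq[0]), np.log10(freq[-1]), 12)
    idx = np.digitize(freq, edges)
    for i in range(1, len(edges)):
        if np.any(idx == i):
            print(f"f = {freq[idx == i].mean():.4f}  PSD = {psd[idx == i].mean():.3e}")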

Measurement of Charge Diffusion in a Thick, High Resistivity Charge-Coupled Device. ANNA DERBAKOVA (University of North Carolina at Chapel Hill, Chapel Hill, NC, 27516) PETER TAKACS (Brookhaven National Laboratory, Upton, NY, 11973)

When a diffraction-limited point source of light is focused onto a charge-coupled device (CCD), the recorded image becomes blurred due to a number of effects, one of which is the lateral diffusion of photogenerated charge within the semiconductor material. The quality of the signal is characterized by the point spread function (PSF), which measures the amount of blurring that is present. The PSF of the present system is a convolution of a known contribution of the pixel array geometry with the charge diffusion within the silicon wafer, which can be controlled by varying the applied electric field. The purpose of this experiment was to use two different methods to measure the charge diffusion coefficient of a high resolution prototype CCD which will later form the basis of the CCD camera in the Large Synoptic Survey Telescope (LSST). In the first method, a light spot from a diode laser coupled to a 4 µm optical fiber was projected onto the surface of a CCD. The spot size was characterized via a knife-edge scan technique. An autofocus mechanism was developed to keep the micron-size spot focused on the surface, and images were acquired as the point source was stepped across the CCD in sub-pixel increments. By summing the intensity in a fixed pixel window as the light spot is scanned across the edge of the window, the charge diffusion width can be obtained (the “virtual knife-edge” technique). Subtracting the known input light-spot size in quadrature from the width of the fit to the virtual knife-edge scans results in an estimate of the diffusion width of 5.95 µm when the electric field in the sensor is 5 kV/cm. In the second method, a Michelson interferometer with a motorized tilt adjustment mirror mount was constructed and used to project a sinusoidal interference pattern with a variable spatial frequency onto the surface of the detector. The intensity readout of the detector was fit with a sinusoidal curve to extract the amplitude and calculate the modulation of the image, Mi. This process was shown to work and in the future will be repeated for a range of spatial frequencies to characterize Mi as a function of spatial frequency and obtain the modulation transfer function (MTF). The charge diffusion coefficient will then be obtained by deconvolving the known sinc function contribution of the CCD pixel array and the Gaussian diffusion function from the MTF. The results from each method will serve as a cross check in determining the actual intrinsic sensor performance.
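
A hedged sketch of the virtual knife-edge analysis (synthetic scan values, not the LSST test-bench code): the summed window intensity versus spot position is fit with an error-function edge, and the known input spot size is removed in quadrature.

    # Fit an erf edge profile to a simulated knife-edge scan, then subtract
    # the known spot size in quadrature to estimate the diffusion width.
    import numpy as np
    from scipy.special import erf
    from scipy.optimize import curve_fit

    def edge(x, amp, x0, sigma, offset):
        return offset + 0.5 * amp * (1 + erf((x - x0) / (np.sqrt(2) * sigma)))

    x = np.linspace(-20, 20, 81)                 # spot positions (um)
    sigma_total, spot = 7.0, 4.0                 # illustrative widths (um)
    rng = np.random.default_rng(3)
    y = edge(x, 1.0, 0.0, sigma_total, 0.02) + 0.01 * rng.standard_normal(x.size)

    popt, _ = curve_fit(edge, x, y, p0=[1.0, 0.0, 5.0, 0.0])
    fitted = abs(popt[2])
    diffusion = np.sqrt(fitted**2 - spot**2)     # quadrature subtraction
    print(f"total width {fitted:.2f} um -> diffusion width {diffusion:.2f} um")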

Measurement of Copper Deposition Rate and Uniformity Utilizing Electron Cyclotron Resonance Plasma Sputtering Techniques. KELLY GREENLAND (Lock Haven University of Pennsylvania, Lock Haven, PA, 17745) DR. ANDREW ZWICKER (Princeton Plasma Physics Laboratory, Princeton, NJ, 08543)

ECR (electron cyclotron resonance) plasma is used in processing such as circuit manufacturing and thin-film deposition due to its ability to produce more energetic, denser, and more uniform plasma than other techniques. In this project, an ECR plasma, with argon as the base gas, was used to sputter copper onto silicon wafers at various pressures, powers, and geometries; the wafers were then analyzed using a scanning electron microscope to determine the thickness, uniformity, and contamination of the copper layer. In addition, a spectroscopic method was developed to measure the electron temperature of the plasma from the intensity of certain spectral lines in the light emitted from the plasma. Typical plasma parameters were a microwave power of 2500 W, a target bias of 125 V, and an argon pressure of 0.46 mTorr. Measurements showed that these conditions deposited 360 angstroms per minute of copper onto a three-inch round wafer sample, and the plasma electron temperature was found to be approximately 7.88 eV. These results will aid in additional research, including replacing the copper target with a graphite target in order to apply ultra-hard thin films for high-performance applications such as laser windows and heat-resistant circuit boards.

Measurement of Density of Polarized 3He Target Cell Using Laser Interferometer. ANDREW LEISTER (The College of William and Mary, Williamsburg, VA, 23187) JIAN-PING CHEN (Thomas Jefferson National Accelerator Facility, Newport News, VA, 23606)

As scientists continue to learn more about fundamental matter, they are trying to fully understand the structure of nucleons (protons and neutrons). One of the difficulties in studying the neutron structure is that the isolated neutron decays after approximately ten minutes and is therefore not ideal for experimentation. Fortunately, it has been found that the structure of polarized Helium-3 (3He) allows it to serve as an effective polarized neutron target for studying neutron spin-structure. This is because the spins of the two protons in 3He are anti-aligned about 90% of the time; the remaining spin is therefore derived entirely from the neutron. There are several factors that determine the maximum polarization of a target 3He cell, one of the key factors being the density of the cell. The focus of this study is to determine an accurate value for the density of a target 3He cell, "Aaron". To calculate the density, a laser with a specified frequency range was projected through the 3He target cell while the intensity was read by a photo-diode. These data were then read by the LabVIEW and ROOT programs, which generated a Lorentzian curve to fit the data. One of the parameters of this Lorentzian curve is directly proportional to the density of the cell. It was found that the density of the target cell was 8.547 ± 0.333 amagats (amg) at sufficiently high temperatures. Since the density of the cell is now known, the maximum polarization of the cell can be determined. With this value known, there is greater knowledge of the characteristics of "Aaron" and the results of further experiments with the cell can be understood more fully. This will ultimately lead to a deeper understanding of the spin-structure of nucleons.
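
A minimal sketch of the fitting step (synthetic absorption profile; the actual analysis used LabVIEW and ROOT): a Lorentzian is fit to the transmission data and the parameter proportional to the density is read off with its uncertainty.

    # Fit a Lorentzian absorption profile and extract the parameter that is
    # proportional to the cell density (the proportionality constant comes
    # from the physics of the transition and is not modeled here).
    import numpy as np
    from scipy.optimize import curve_fit

    def lorentzian(f, area, f0, gamma, base):
        return base - area * (gamma / (2 * np.pi)) / ((f - f0)**2 + (gamma / 2)**2)

    f = np.linspace(-10, 10, 400)                # laser detuning (arb. units)
    rng = np.random.default_rng(4)
    y = lorentzian(f, 5.0, 0.3, 2.0, 1.0) + 0.01 * rng.standard_normal(f.size)

    popt, pcov = curve_fit(lorentzian, f, y, p0=[1.0, 0.0, 1.0, 1.0])
    area, err = popt[0], np.sqrt(pcov[0, 0])
    print(f"density-proportional parameter: {area:.3f} +/- {err:.3f}")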

Measurements of High-Field THz Induced Photocurrents in Semiconductors. MICHAEL WICZER (University of Illinois, Urbana, IL, 61801) AARON LINDENBERG (Stanford Linear Accelerator Center, Stanford, CA, 94025)

THz pulses have provided a useful tool for probing, with time resolution, the free carriers in a system. The development of methods to produce intense THz radiation has been slow since spectroscopists and condensed matter physicists first began probing materials with THz pulses. We have developed a method for producing intense ultra-short THz pulses, which have a full width at half maximum of 300 fs, approximately a half cycle of THz radiation. These intense half cycle pulses (HCPs) allow us to use THz radiation not only as a probe of the free carriers in a system but also as a source of excitation to alter a system in some way. In particular, HCPs perturb free carriers considerably on short time scales but have minimal effect on individual free carriers over long times. By exposing the semiconductor indium antimonide (InSb) to our intense THz HCP radiation, we have observed non-linear optical effects which suggest the generation of new free carriers by below band-gap THz photons. This generation of free carriers appears to be caused by an avalanche multiplication process, which should amplify the number of free carriers already in the system and then induce a current on the timescale of our THz pulse. This amplification on such a short timescale suggests the possibility of an ultra-fast detector of weak above band-gap radiation. We constructed a device which detects these currents by painting an electrode structure on the surface of the semiconductor. The currents induced across the electrodes by this avalanche multiplication process were measured and compared with other measurements of this non-linear optical process. We successfully measured THz-induced currents in InSb, which indicate promise towards the development of an ultra-fast detector, and we gained insight into a possible physical explanation of the THz-induced free carriers we observe in InSb.

Measuring Strain Using X-Ray Diffraction. JAMES BELASCO (Villanova University, Villanova, PA, 19085) APURVA MEHTA (Stanford Linear Accelerator Center, Stanford, CA, 94025)

Determining the strain in a material has often been a crucial component in determining the mechanical behavior and integrity of a structural component. While continuum mechanics provides a foundation for dealing with strain on the bulk scale, how a material responds to strain at the very local level (the understanding of which is fundamental to the development of a cohesive framework for the behavior of strained material) is still not well understood. One of the critical components in determining the behavior of materials under strain at a local scale is an understanding of how global average deformation, as a response to an externally applied load, gets distributed locally. This is critical, and very poorly understood, for polycrystalline materials, the material of choice for a large variety of structural components. We studied this problem for BCC iron using x-ray diffraction. By using a nanocrystalline iron sample and taking x-ray diffraction patterns at different load levels and at different rotation angles, a complete 2nd-rank strain tensor was determined for the three sets of crystallites with three distinct crystallographic orientations. The determination of the strain tensors subsequently allowed the calculation of the elastic modulus along each crystallographic plane. When compared to measured values from single crystals for the corresponding crystal orientations, the data from our polycrystalline sample demonstrated a higher degree of correlation to the single crystal data than expected. The crystallographic planes demonstrated a high degree of anisotropy; therefore, to maintain displacement continuity, there must be a secondary mode of strain accommodation in a regime that is conventionally thought to be purely elastic.
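
The tensor-determination step can be sketched as a linear least-squares problem (synthetic numbers, not the actual beamline analysis): lattice strains ε(n) measured along unit directions n, from different reflections and rotation angles, are fit to ε(n) = nᵀE n for the six independent components of the symmetric tensor E.

    # Recover a symmetric 2nd-rank strain tensor from directional strain
    # measurements eps(n) = n . E . n by linear least squares.
    import numpy as np

    rng = np.random.default_rng(5)
    E_true = np.array([[ 1.0e-3, 2.0e-4, 0.0   ],
                       [ 2.0e-4,-3.0e-4, 1.0e-4],
                       [ 0.0,    1.0e-4, 5.0e-4]])   # injected strain tensor

    n = rng.standard_normal((30, 3))
    n /= np.linalg.norm(n, axis=1, keepdims=True)    # measurement directions
    eps = np.einsum('ki,ij,kj->k', n, E_true, n) + 1e-5 * rng.standard_normal(30)

    n1, n2, n3 = n.T                                 # design matrix columns
    A = np.column_stack([n1**2, n2**2, n3**2, 2*n1*n2, 2*n1*n3, 2*n2*n3])
    c, *_ = np.linalg.lstsq(A, eps, rcond=None)
    print("E11,E22,E33,E12,E13,E23 =", np.round(c, 5))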

Module Alignment and Resolution Studies for ATLAS Pixel Detector Endcap A. ANA OVCHAROVA (University of California, Berkeley, Berkeley, CA, 94720) WEI-MING YAO (Lawrence Berkeley National Laboratory, Berkeley, CA, 94720)

Endcap A, a component of the ATLAS Pixel Detector, consists of 144 modules mounted onto three disks, whose centers lie on the Large Hadron Collider (LHC) beam pipe. Each module contains 47,232 rectangular pixels, which are individually connected to a readout system. When in operation, the pixels containing sufficient charge deposited by particles are read out and the obtained information is used to reconstruct the paths of particles passing through the detector. Through offline analysis of data collected from cosmic muons passing through Endcap A, the current study investigates the relative alignment of the modules using the areas of overlap between adjacent modules. The method used parallels the one used in a previous study of the alignment; however, the current results incorporate masking of noise identified on Endcap A in order to determine more precisely the resolution of the detector. The analysis involves treating the module as a rigid body with 4 allowed degrees of freedom, including translation in the x, y, z local coordinates of the module and rotation around z. The parameters (alignment constants) obtained in this manner indicate quantitatively the deviation of the position of each module, relative to an adjacent module, from the nominal detector geometry. The alignment constants are determined by examining the mean distance between the actual positions of clusters on one module and their expected positions as seen from an adjacent module. Comparing the values of the parameters to those obtained by surveying the geometry of Endcap A during assembly shows good agreement. Using the obtained alignment constants, the values of the mean distances are recalculated. The resolution obtained from the recalculated values is 16.5 ± 0.2 µm and 118 ± 1 µm in the short and long pixel dimensions, respectively. Resolution varies according to the number of pixels that are triggered by a passing particle. For two pixels triggered simultaneously, the resolution is 13.8 ± 0.3 µm, compared to 14.7 ± 0.4 µm for a single pixel. This method has proven effective with the limited data available. When ATLAS is in operation in 2008, more data will provide sufficient statistics to increase the accuracy of the results.

Monte-Carlo Based Simulation of Double-Image Gravitational Lensing by Cosmic Strings. ERIC ALBIN (California Polytechnic State University, San Luis Obispo, CA, 93405) GEORGE SMOOT (Lawrence Berkeley National Laboratory, Berkeley, CA, 94720)

Cosmic Strings have yet to be observed, but they are still experimentally sought after. The simulation of a Cosmic String lensing event signal is invaluable for testing detection criteria. Light sources from I-band survey flexible image transport system (FITS) images taken by the Advanced Camera for Surveys (ACS) aboard the Hubble Space Telescope (HST) are identified using the open-source astronomical software Source Extractor v2.2.2. All identified sources are then pseudo-randomly assigned a redshift based on a parameterization of the measured redshift distribution as a function of source absolute magnitudes. Each identified source is also isolated from the FITS image file so that a particular Cosmic String can be simulated by re-integrating selected isolated sources back into the FITS image, assuming a ΛCDM cosmology with Ωm = 0.3. Simulations are limited to perfectly straight Cosmic Strings which span the entire FITS image used, as well as to strongly-lensed events. Simulated Cosmic String lensing event signals are then applied towards calculating detection efficiencies in v1.0 data from the Great Observatories Origins Deep Survey (GOODS).
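
The redshift-assignment step amounts to inverse-CDF sampling from a magnitude-dependent redshift distribution. The sketch below uses an illustrative toy parameterization, not the one actually fit to the survey data:

    # Draw pseudo-random redshifts for a source of given absolute magnitude
    # by inverting the cumulative distribution of an assumed n(z).
    import numpy as np

    rng = np.random.default_rng(6)

    def sample_redshift(abs_mag, n_draws=1):
        z0 = 0.5 + 0.02 * (-20.0 - abs_mag)      # toy magnitude dependence
        z = np.linspace(0.0, 6.0, 2000)
        pdf = z**2 * np.exp(-z / max(z0, 0.05))  # toy n(z) shape
        cdf = np.cumsum(pdf)
        cdf /= cdf[-1]
        return np.interp(rng.uniform(size=n_draws), cdf, z)

    print(np.round(sample_redshift(-21.0, 5), 2))   # five draws for one source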

Noise Studies in ATLAS Pixel Detector Endcap A. KEVIN LUNG (University of California at Berkeley, Berkeley, CA, 94720) DR. WEI-MING YAO (Lawrence Berkeley National Laboratory, Berkeley, CA, 94720)

In order to improve the results of the ATLAS detector data, it is necessary to remove noise from the data sample before running other analyses. The noise removal involves the definition of hot pixels. These hot pixels are indicative of noise because they have registered data more often than is expected for the typical cosmic ray signal. The goal of this noise removal is to achieve the greatest noise suppression, resulting in a more clearly defined signal. The noise suppression is measured through the module occupancy, the number of pixel hits per pixel per module per event readout. For this study, a noise occupancy on the order of 10^-10 is obtained for Run 1129, which is much lower than the 10^-7 noise occupancy of the MC simulation. This noise occupancy is far below the signal occupancy, which is on the order of 10^-7, demonstrating a large degree of noise suppression.
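
A minimal sketch of the hot-pixel definition (synthetic occupancies and an illustrative cut, not the ATLAS offline selection):

    # Flag pixels whose hit counts are far above the typical level and
    # recompute the occupancy with those pixels masked.
    import numpy as np

    rng = np.random.default_rng(7)
    n_pix, n_events = 50_000, 100_000
    counts = rng.poisson(n_events * 1e-4, n_pix)          # quiet pixels
    hot = rng.choice(n_pix, 20, replace=False)
    counts[hot] += rng.poisson(n_events * 0.05, 20)       # injected hot pixels

    hot_mask = counts > 10 * max(np.median(counts), 1)    # illustrative cut
    occ_before = counts.sum() / (n_pix * n_events)
    occ_after = counts[~hot_mask].sum() / ((~hot_mask).sum() * n_events)
    print(f"masked {hot_mask.sum()} pixels; "
          f"occupancy {occ_before:.2e} -> {occ_after:.2e}")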

Observing Water Uptake in NaCl Nanoparticles using Non-contact AFM. MATTHEW STRASBERG (Cornell University, Ithaca, NY, 14853) ANTONIO CHECOO (Brookhaven National Laboratory, Upton, NY, 11973)

Aerosol particles (nano- and micron-sized particles suspended in air) affect atmospheric radiation and cloud microphysics. A correct description of their behavior is crucial to accurate climate modeling. The processes of aerosol “aging” (a phenomenon in which non-hygroscopic particles become hygroscopic) and aerosol deliquescence (the water uptake of hygroscopic aerosols and their eventual transition to liquid state) are of particular interest. Understanding these processes may improve climate models as well as elucidate the physics of nanoscale wetting. This experiment demonstrates the use of Non-contact Atomic Force Microscopy for studying the initial stages of water uptake and eventual deliquescence of Sodium Chloride nanoparticles deposited on a silicon substrate. These exploratory studies will eventually be extended to other aerosol particles that are relevant to atmospheric science.

Ohmic Heating of Gallium Arsenide Photocathodes. MATT WOOD (Fort Hays State University, Hays, KS, 67601) MARCY L. STUTZMAN (Thomas Jefferson National Accelerator Facility, Newport News, VA, 23606)

The polarized electron beam at Thomas Jefferson National Accelerator Facility (Jefferson Lab) is produced using photoemission from a gallium arsenide (GaAs) cathode. To turn a GaAs crystal into a photocathode, it must be heated to above 500°C to clean the surface, then “activated” using chemicals applied to the surface. At Jefferson Lab, the crystal is heated by conduction using commercial heaters; however, because of a lack of resources, universities and smaller government labs must heat crystals through ohmic heating by passing current through the crystal. These labs have not been successful in using special strained GaAs crystals that can provide high polarization. The goal of this project was to determine if ohmically heating high polarization GaAs crystals can produce a quantum efficiency that is comparable to crystals heated conductively. The two heating methods were compared in the same vacuum conditions, and the effectiveness of each was assessed by monitoring the photoemission from each cathode. The resistive heating method was found to be comparable to the conductive heating method for high polarization crystals. As expected, the vacuum quality and thermal conductivity are the likely reasons that other labs have not had success with high polarization crystals. This suggests that in order to produce high polarization electron beams, other labs need to focus on improving the quality of their vacuum.

Optical and Mechanical Design Features of the Qweak Main Detector. ELLIOTT JOHNSON (North Dakota State University, Fargo, ND, 58105) JAMES SPENCER (Stanford Linear Accelerator Center, Stanford, CA, 94025)

Photonic Band-Gap (PBG) fibers are a periodic array of optical materials arranged in a lattice called a photonic crystal. The use of PBG fibers for particle acceleration is being studied by the Advanced Accelerator Research Department (AARD) at Stanford Linear Accelerator Center. By introducing defects in such fibers, e.g. removing one or more capillaries from a hexagonal lattice, spatially confined modes suitable for particle acceleration may be created. The AARD has acquired several test samples of PBG fiber arrays with varying refractive index, capillary size, and length from an external vendor for testing. The PBGs were inspected with a microscope and characteristics of the capillaries including radii, spacing, and errors in construction were determined. Transmission tests were performed on these samples using a broad-range spectrophotometer. In addition, detailed E-field simulations of different PBG configurations were done using the CUDOS and RSOFT codes. Several accelerating modes for different configurations were found and studied in detail.

Parameterization of Polarized 3He Quasi-Elastic Scattering Cross Sections. OCTAVIAN GEAGLA (University of Virginia, Charlottesville, VA, 22905) XIAOCHAO ZHENG (Thomas Jefferson National Accelerator Facility, Newport News, VA, 23606)

The substructure of the atom can be revealed in electron scattering experiments through the analysis of the scattering cross-section. The scattering can be classified as elastic, quasi-elastic, or inelastic from the relative size of the energy loss of the electron and the four-momentum transfer squared of the virtual photon. In quasi-elastic scattering, each proton and neutron reacts to the electron beam independently. In order to apply radiative corrections to the 3He nucleus, an accurate parameterization of these cross-section data is needed. Current world parameterizations do not have access to polarized quasi-elastic scattering cross-sections for the 3He nucleus, but instead use data from other nuclei and combine them with theoretical predictions for the polarized 3He nucleus. However, 3He nuclear effects are neglected. This leads to great discrepancies and uncertainties in the results. In order to perform the parameterization, various computational methods were used to create a physical model of the scattering using magnetic and electric form factors which would not neglect the 3He nuclear effects. The Thomas Jefferson National Accelerator Facility data were fit to various nonlinear distribution models and the best fits were found for each beam energy. A global fit was created by fitting the parameters of these distributions. These results can be used to predict polarized quasi-elastic cross sections for unmeasured kinematics and for applying radiative corrections where such parameterizations are needed.

PDF Contributions and Parity Violation at High Bjorken x. TIMOTHY HOBBS (The University of Chicago, Chicago, IL, 60637) WALLY MELNITCHOUK (Thomas Jefferson National Accelerator Facility, Newport News, VA, 23606)

In recent decades, leptonic deep inelastic scattering (DIS) has been widely used to probe nucleon structure. Despite continued success, a number of surprising results have complicated the original picture of a quark-dominated nucleon. Among the most significant recent studies of parton distributions, the European Muon Collaboration (EMC) and NuTeV at Fermilab have challenged these old assumptions in nuclear structure. Problematically, precision data characterizing the d-quark parton distribution function (PDF) and the PDF ratio d/u at high values of the momentum fraction (Bjorken x > 0.7) are fairly incomplete. Calculations of the d/u PDF ratio contribution to parity-violating asymmetries in unpolarized DIS are performed for a range of values of the squared momentum transfer Q^2; for completeness, calculations involve several PDF models and target/polarization schemes for the neutral/electromagnetic interference current. So far, models demonstrate a significant dependence of beam asymmetries upon the d/u PDF ratio, a confirmation of theoretical expectation. This evaluation of PDF effects through d/u concurs with and expands earlier findings in nucleon structure, thereby driving further interest in and tests of the Quark-Parton Model (QPM). Moreover, these calculations complement a discussion of parity and charge symmetry violation with implications for ongoing study in sub-nuclear theory.

Photo-Excitation of RF Magnetron Sputter Grown SrRuO3 Observed by Electron Crystallography. BRIAN COOPER (Temple University, Philadelphia, PA, 19122) IVAN BOZOVIC AND VLADIMIR BUTKO (Brookhaven National Laboratory, Upton, NY, 11973)

Strongly-correlated electron phenomena such as high temperature superconductivity, colossal magnetoresistance, and heavy fermion behavior are still not well understood by solid state physicists. Since the conventional free electron approximation breaks down in these strongly-correlated systems, it is imperative that experimentalists examine all aspects of materials that exhibit these properties. To better understand the effects of photo-excitation on the strongly-correlated electron systems of some perovskite crystals, we are attempting to grow thin films of SrRuO3 (SRO) on substrates of LaSrAlO4 (LSAO) and SrTiO3 (STO) using RF magnetron sputtering. The impetus for growing SRO films arose from data gathered during ultra-fast electron crystallography (UEC) performed on epitaxially grown oxygen doped La2CuO4+δ (LCO). Since a charge mismatch exists between the planes of the crystal, a significant electric field exists between them. During UEC the lattice constants of LCO increased by amounts that would require equivalent thermal energy transfers associated with temperatures around 2500 K. This prompted us to believe that this effect could be a result of the strong electric field present between the planes of LCO. Since SRO is electrically neutral between its planes, but represents a strongly-correlated electron system with a nearly identical crystal structure (perovskite), it is ideal for comparative study. To grow SRO, wafers of commercially grown LSAO (or STO) are first coated with photo-resist, covered with an etched quartz mask, and patterned by lithography. Then, using an argon ion (Ar+) mill, a highly collimated beam of Ar+ is used to remove layers from the substrate. After troughs approximately 600 Å in depth are cut out of the substrates and the photo-resist is washed away with acetone, the substrates are deposited with the SRO. After the SRO films are characterized by various methods, we will send these samples to the California Institute of Technology to undergo UEC. We expect to see a reduced effect (if any at all) in the SRO films, but even if the effect is still prominent, the results will shed further light on the effects of photo-excitation on strongly-correlated electron systems of perovskites.

Plateauing Cosmic Ray Detectors to Achieve Optimum Operating Voltage. ELISSA KNOFF (Northwestern University, Evanston, IL, 60208) ROBERT PETERSON (Fermi National Accelerator Laboratory, Batavia, IL, 60510)

Through QuarkNet, students across the country have access to cosmic ray detectors in their high school classrooms. These detectors operate using scintillator and a photomultiplier tube (PMT). A data acquisition (DAQ) board counts cosmic ray hits from the counters. Through an online e-Lab, students can analyze and share their data. In order to collect viable data, the PMTs should operate at their plateau voltages. In these plateau ranges, the number of counts per minute remains relatively constant with small changes in PMT voltage. We sought to plateau the counters in the test array and to clarify the plateauing procedure itself. In order to most effectively plateau the counters, the counters should be stacked and programmed to record the number of coincident hits as well as their singles rates. We also changed the threshold value that a signal must exceed in order to record a hit and replateaued the counters. For counter 1, counter 2, and counter 3, we found plateau voltages around 1 V. The singles rate plateau was very small, while the coincidence plateau was very long. The plateau voltages corresponded to a singles rate of 700-850 counts per minute. Changing the threshold voltages had very little effect. Our chosen plateau voltages produced good performance studies on the e-Lab. Keeping in mind the nature of the experiments conducted by the high school students, we recommend a streamlined plateauing process. Because changing the threshold did not drastically affect the plateau voltage or the performance study, students should choose a threshold value, construct plateau graphs, and analyze their data using a performance study. Even if the counters operate slightly off their plateau voltage, they should deliver good performance studies and return reliable results.
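
The plateau can also be located programmatically from a counts-versus-voltage scan. The sketch below uses made-up numbers (not QuarkNet data) and takes the operating point where the fractional change in rate per volt is smallest:

    # Locate the plateau of a counter from a voltage scan: the plateau is
    # where the relative slope of the rate curve is flattest.
    import numpy as np

    voltage = np.array([0.70, 0.75, 0.80, 0.85, 0.90, 0.95, 1.00, 1.05, 1.10])
    rate = np.array([120., 310., 560., 700., 760., 790., 810., 830., 1400.])

    rel_slope = np.gradient(rate, voltage) / rate   # fractional change per volt
    best = np.argmin(np.abs(rel_slope))
    print(f"suggested operating voltage: {voltage[best]:.2f} V "
          f"({rate[best]:.0f} counts/min)")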

Quark and Antiquark Distributions in the Proton. RONA BANAI (Cornell University, Ithaca, NY, 14853) PAUL E. REIMER (Argonne National Laboratory, Argonne, IL, 60439)

Quark distributions in the proton cannot be determined theoretically, but require experimental measurements. Data are lacking for the large-x region, where one quark carries most of the momentum of the proton. Experiment 866, completed at Fermilab, measured the Drell-Yan cross section for pp and pd interactions, which is sensitive to the quark distributions of the interacting hadrons. By examining the data from 175,000 dimuon events in the range 4.2 ≤ M ≤ 16.85 GeV and -0.05 ≤ xF ≤ 0.8, we are able to examine the light quark and anti-quark distributions in the nucleon. In the experiment both hydrogen and deuterium targets were used in order to solve for the absolute quark distributions in the proton and the neutron. In this project we verified the cross section determined as a function of (xF, M) and determined the functional dependence on (x1, x2). The Monte Carlo data show a general agreement with the next-to-leading order experimental data, demonstrating an understanding of the theory behind the experiment. Through a comparison of the experimental data to the Monte Carlo results and next-to-leading order cross section calculations, a better determination of the parton distributions will be achieved.

Research and Development of Solution SAXS Data Analysis Methods. JULIETTE ZERICK (University of Mary Washington, Fredericksburg, VA, 22401) DR. KENNETH FRANKEL (Lawrence Berkeley National Laboratory, Berkeley, CA, 94720)

The analysis of solution small angle X-ray scattering (SAXS) data is made difficult by the lack of rigorous, open-source data reduction tools. Although singular value decomposition (SVD) has proven effective and practical in the analysis of SAXS data, it is not robust to experimental noise. Additional techniques must be utilized to extract meaningful results from a dataset; autocorrelation tests, chi-square tests for goodness of fit, visual inspection, and other methods are used for this purpose. However, little research has been done on the mathematical justification for their use. In this paper, the soundness and effectiveness of the application of these methods to experimental data were evaluated. It was found that all methods used were either unsupported by the mathematics or were insufficient to resolve the number of components in solution. Therefore, in an attempt to reduce the effects of experimental noise on the application of SVD, two new “pruning” methods were developed and tested on experimental SAXS data. By “pruning” from the dataset points that exhibited a small signal-to-noise ratio, or contained experimental error beyond a defined threshold, the application of SVD to the truncated dataset resolved the number of components in solution with greater accuracy than previous methods alone. The research and development of these methods were performed in order to enhance currently-used techniques and assist in the data analysis of two projects: Structural Cell Biology of DNA Repair Machines (SBDR) and Molecular Assemblies, Genes, and Genomics Integrated Efficiently (MAGGIE). The preliminary results of this undertaking will be released to the SAXS community to spur development of better data analysis methods. The software implementation of the recommended methods will be released under the GNU General Public License (open source), available at the Structurally-Integrated Biology for Life Sciences (SIBYLS) beamline website.
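
The pruning idea can be sketched compactly (synthetic two-component data; the thresholds are illustrative, not the ones developed in the study): dropping low signal-to-noise points before the SVD sharpens the gap between real-component and noise singular values.

    # SVD of a synthetic two-component SAXS dataset before and after pruning
    # rows (q-points) whose signal-to-noise ratio falls below a threshold.
    import numpy as np

    rng = np.random.default_rng(8)
    q = np.linspace(0.01, 0.5, 300)
    comp = np.stack([np.exp(-40 * q**2), np.exp(-8 * q)])   # two components
    conc = rng.uniform(0.2, 1.0, (2, 25))                   # 25 frames
    clean = comp.T @ conc
    noise_scale = 0.02 * (1 + 5 * q)[:, None]               # noisier at high q
    data = clean + noise_scale * rng.standard_normal(clean.shape)

    snr = np.abs(data).mean(axis=1) / noise_scale.ravel()
    for label, d in (("raw   ", data), ("pruned", data[snr > 5.0])):
        s = np.linalg.svd(d, compute_uv=False)
        print(label, "singular values:", np.round(s[:4], 2))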

Rietveld Full Profile Refinement of MnO and MnAs. DALGIS MESA and RAHUL PATEL (Florida International University, Miami, FL, 33199) JAIME FERNANDEZ-BACA (Oak Ridge National Laboratory, Oak Ridge, TN, 37831)

In the present study, elastic neutron scattering was used to obtain a powder diffraction pattern for MnO at the Wide Angle Neutron Diffractometer (WAND) located at the High Flux Isotope Reactor (HFIR) at Oak Ridge National Laboratory (ORNL). The diffraction pattern obtained from the sample was analyzed using the Rietveld method of full profile refinement in order to refine the relevant crystallographic parameters of the specimen. The Rietveld method is based on a least squares fit of the full diffraction pattern to a model that takes into account the crystallographic as well as the instrumental parameters to calculate the full profile. The latter include the Caglioti resolution parameters (U = 2.36, V = -1.18, W = 0.43) and the wavelength (λ) of 0.1476 nm. As a resource, the Inorganic Crystal Structure Database (ICSD) was used to obtain the initial crystallographic parameters for MnAs [1] and MnO [2]. The following crystallographic parameters were refined for MnO using the software program FullProf© (Fortran 90 version, Copyright 2006: The FullProf© Team): lattice constant values (a, b, c) of (4.44, 4.44, 4.44) and individual isotropic thermal parameters (B-factors) with an initial default value (D.V.) of 1. The results yielded a percent change of 0.53 for the lattice constant, and percent changes of 52.48 and 44.40 for the B-factors of Mn and O, respectively. For MnAs, the refined crystallographic parameters included the lattice constants (3.72, 3.72, 5.72), the atomic occupancies (D.V.), as well as the B-factors (D.V.). The percentage changes for the refined parameters of MnAs were (0.26, 0.26, 0.30) for the cell parameters and 54 and 36 for the atomic occupancies of Mn and As, respectively, and the obtained values for the B-factors were 4.38 and 3.25 for Mn and As, correspondingly. The experimental procedure and results will be discussed in detail along with new ideas for possible improvement of the present study.  [1] Nowotny, H., Funk, R., Pesl, J., Kristallchemische Untersuchungen in den Systemen Mn-As, V-Sb, Ti-Sb, Monatshefte fuer Chemie 82, pages 513-525, 1951.  [2] Jay, A. H., Andrews, K. W., Note on Oxide Systems pertaining to Steel-making Furnace Slags: FeO-MnO, FeO-MgO, CaO-MnO, MgO-MnO, Journal of the Iron and Steel Institute, 152, pages 15-18, 1946.

Roughness Analysis of Variously Polished Niobium Surfaces. GUILHEM RIBEILL (North Carolina State University, Raleigh, NC, 27606) CHARLES REECE (Thomas Jefferson National Accelerator Facility, Newport News, VA, 23606)

Niobium superconducting radio frequency (SRF) cavities have gained widespread use in accelerator systems. It has been shown that surface roughness is a determining factor in the cavities’ efficiency and the maximum accelerating potential achievable through this technology. Irregularities in the surface can lead to spot heating, undesirable local electrical field enhancement, and electron multipacting. Surface quality is typically ensured through the use of acid etching in a Buffered Chemical Polish (BCP) bath and electropolishing (EP). In this study, the effects of these techniques on surface morphology have been investigated in depth. The surface of niobium samples polished using different combinations of these techniques has been characterized through atomic force microscopy (AFM) and stylus profilometry across a range of length scales. The surface morphology was analyzed using spectral techniques to determine roughness and characteristic dimensions. Furthermore, electrical impedance spectroscopy (EIS) was used to investigate the electrical properties of the niobium-acid interaction during electropolishing. Experimentation has shown that the spectral method is a valuable tool that provides quantitative information about surface roughness at different length scales, and the study explored the use of EIS in electropolishing. It demonstrated that light BCP pretreatment and lower electrolyte temperature favor a smoother electropolish. These results will allow for the design of a superior polishing process for niobium SRF cavities and therefore increased accelerator operating efficiency and power.
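
The spectral analysis can be sketched as follows (synthetic height profile, not the actual AFM/profilometry pipeline): the power spectral density of a detrended trace is computed, and the RMS roughness within a chosen band of spatial frequencies follows from integrating the PSD over that band.

    # Band-limited RMS roughness from the PSD of a 1-D height profile.
    import numpy as np

    n, dx = 4096, 1e-3                        # samples and spacing (um)
    rng = np.random.default_rng(9)
    h = np.cumsum(rng.standard_normal(n))     # synthetic rough profile
    i = np.arange(n)
    h -= np.polyval(np.polyfit(i, h, 1), i)   # remove tilt/offset

    f = np.fft.rfftfreq(n, dx)[1:]            # spatial frequency (1/um)
    psd = (np.abs(np.fft.rfft(h))**2)[1:] * 2.0 * dx / n

    band = (f > 1.0) & (f < 100.0)
    rms = np.sqrt(np.trapz(psd[band], f[band]))
    print(f"RMS roughness in band: {rms:.3f} (profile height units)")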

Scanning 3rd-Order Cross Correlator to Measure Contrast of Ultrashort Laser Pulses. EUGENE EVANS (University of California, Berkeley, Berkeley, CA, 94720) WIM LEEMANS (Lawrence Berkeley National Laboratory, Berkeley, CA, 94720)

The measurement of laser contrast and the identification of undesirable beam features in high peak power pulse lasers are key aspects of the development and use of such lasers in the LOASIS group. The Scanning 3rd-Order Cross Correlator (S3OCC), a robust and accurate instrument based on the principles of third-order cross-correlation, has been developed to analyze ultrashort laser pulses. The instrument allows laser contrast measurement and pulse artifact detection in a femtosecond-scale time regime by cross-correlating the pulse with itself in nonlinear crystals. The S3OCC is composed of a number of optics, several nonlinear crystals, a motor-actuated delay stage, and a photomultiplier tube (PMT). Scans of an 800 nm infrared beam from the LOASIS Ti:Sapphire laser used for laser wakefield acceleration were performed to calibrate the zero position of the delay stage, confirm the detection and amplitude of intentionally induced beam artifacts, and characterize the linear range of the PMT and transimpedance amplifier. A new LabVIEW-based control system was also designed and implemented. As a result, the S3OCC can consistently measure the correct position and amplitude of pulse artifacts such as pre- and post-pulses, allowing the device to measure laser contrast with almost seven orders of magnitude of dynamic range. However, calibration data also revealed nonlinear responses in the PMT-amplifier system, suggesting that the operating characteristics of each component should be investigated separately to determine the origin of the nonlinearities.

Scintillation in CF4 Gas due to the Passage of Highly Ionizing Particles. JOHN SINSHEIMER (Ohio State University, Columbus, OH, 43210) CRAIG WOODY (Brookhaven National Laboratory, Upton, NY, 11973)

The Hadron Blind Detector (HBD) in the Pioneering High Energy Nuclear Interaction Experiment (PHENIX) at Brookhaven National Laboratory’s Relativistic Heavy Ion Collider (RHIC) utilizes Cherenkov radiation produced in CF4 gas for the detection of electron pairs. During a RHIC collision, both electrons and many heavy particles pass through the HBD. The passage of heavy particles through CF4 creates background scintillation light. A measurement of this background scintillation, in terms of photon yield per MeV deposited, will allow for better analysis of data produced by the HBD and may be applicable for other Cherenkov detectors around the world. In this study, scintillation light was produced in CF4 gas using Americium-241 as an alpha particle source. As alpha particles traverse a known distance through the gas, a known portion of the total light produced is absorbed by a CsI photocathode. Photoelectrons are emitted from the CsI and then drawn in and multiplied by a triple Gas Electron Multiplier (GEM) to produce a measurable signal. By measuring the gain of the GEMs and the energy each alpha particle deposits in the gas, one can compute the photon yield per MeV of deposited energy. Measurements and calculations show 110 photons produced per MeV over 4π solid angle.
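
The yield determination reduces to a single ratio. In the worked example below, every number except the 5.5 MeV Am-241 alpha energy is an assumed stand-in, chosen only to show how a figure of this magnitude is assembled:

    # photons/MeV = photoelectrons / (QE * geometric acceptance * E_deposited)
    n_photoelectrons = 12.0   # assumed mean per alpha, from the measured GEM gain
    quantum_eff = 0.25        # assumed CsI quantum efficiency in the CF4 band
    acceptance = 0.08         # assumed fraction of 4*pi seen by the photocathode
    e_dep = 5.5               # MeV carried by an Am-241 alpha stopped in the gas

    yield_per_mev = n_photoelectrons / (quantum_eff * acceptance * e_dep)
    print(f"scintillation yield ~ {yield_per_mev:.0f} photons/MeV over 4*pi")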

Search for Scintillators in Barium Compounds: Europium Doping of Barium Titanates and Barium Zirconates and Cerium Doping of Barium Gadolinium Halides. SHAMEKA PARMS (North Carolina Agricultural and Technical State University, Greensboro, NC, 27127) STEPHEN DERENZO (Lawrence Berkeley National Laboratory, Berkeley, CA, 94720)

Scintillators are most commonly known in the world of medical imaging for their role in detecting radiation emitted from the body. They are also used to detect radiation from dangerous sources, providing security for American citizens. As security concerns grow, the demand on the research team increases and the push to develop new, and improve existing, radiation detectors speeds up. The High Throughput Synthesis and Characterization Facility (HTSCF) allows the researchers to increase production and screening of possible scintillators. Several scintillators are currently being used in industry; however, scintillators closer to ideal are needed to meet growing concerns. The goal is to find dense, highly luminous scintillators exhibiting fast decay times. In this manuscript we report the results of this high throughput search for scintillators in the following families: Ba-Ti-O, Ba-Zr-O, and BaGdX4 (where X = Cl, Br, and I). Three new scintillators were found during this search, but further work needs to be done on these compounds.

Sediment Depth Measurement Using Conventional Ultrasound Techniques in Support of Scaled Mixing Tests Related to the Hanford Site Waste Treatment Plant. ABBY HEIEREN (University of Idaho, Moscow, ID, 83544) RICHARD PAPPAS (Pacific Northwest National Laboratory, Richland, WA, 99352)

The Waste Treatment Plant (WTP) design proposes mixing waste in tanks using pulse-jet mixers. To help determine the mixing effectiveness of the design, two ultrasonic techniques were proposed: the Ultrasonic Doppler Velocimeter (UDV) and conventional ultrasonic pulse-echo techniques. The UDV device is designed to quantify the motion of particles suspended in fluids. The classic pulse-echo method determines the position of stationary boundaries based on reflections due to acoustic impedance mismatches. The applicability of an ultrasound-based monitoring system to ascertain the presence of particle build-up on the bottom of pulse-jet mixing (PJM) tanks required further proof-of-concept through quantitative investigations. To establish the applicability of the pulse-echo method, a 2.25 MHz transducer with a plastic-matched wearplate was mounted, with a 25.4 mm acrylic standoff, on an acrylic, 16-in diameter hemispherical bowl with 3.175 mm wall thickness. The sound velocities through acrylic and water were determined, using partial sound reflections, to be 2.68 mm/µs and 1.48 mm/µs, respectively, before the particle data were collected. Glass beads of diameter 60-80 µm, 150-212 µm, and 600-850 µm were then added to the water individually, retrieved, and combined, to simulate mixing process parameters, while approximating sediment depth and measuring time-of-flight. If the transducer was able to detect the layer of particles, the oscilloscope would show a static-dynamic interface, which could be verified by perturbing the sediment surface. Surface perturbation was not observed using the 2.25 MHz transducer with the 600-850 µm bead size. Lower frequency transducers were tested, and a 1.0 MHz transducer was then introduced to achieve improved penetration while maintaining suitable sensitivity. The velocity of sound through each type of particle and combination was approximated to determine sediment depth within the bench-top simulation. The analyzed data indicated the accuracy and applicability of the conventional method for monitoring the PJM tanks. The ultrasound monitoring system (the UDV approach coupled with conventional ultrasonic methods) was determined to be suitable for this application in a bench-top scenario. If the ultrasound monitoring system is determined viable for the WTP project, future developments will incorporate performance features that are responsive to PJM test requirements. PNNL-SA-56618
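
The depth estimate itself is simple time-of-flight arithmetic. In this sketch the echo times and the sound speed in the settled bead layer are hypothetical stand-ins (only the acrylic and water speeds quoted above were measured):

    # Sediment thickness from pulse-echo round-trip times: half the time
    # difference between the tank-wall echo and the static/dynamic interface
    # echo, multiplied by the sound speed in the settled bead layer.
    v_sediment = 1.60                    # mm/us, assumed for the glass beads

    t_wall, t_interface = 21.0, 60.0     # hypothetical round-trip times (us)
    thickness = 0.5 * (t_interface - t_wall) * v_sediment
    print(f"estimated sediment thickness: {thickness:.1f} mm")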

Shielded Active Integrators for use in Plasma Magnetic Field Diagnostics. IAN FAUST (University of Michigan, Ann Arbor, MI, 48109) TOM INTRATOR (Los Alamos National Laboratory, Los Alamos, NM, 87545)

The Field Reversed Experiment (FRX-L) at Los Alamos National Laboratory confines deuterium plasma in a compact toroidal shape known as a Field Reversed Configuration (FRC). The characterization of the plasma necessitates the analysis of its magnetic fields. Through the use of Faraday’s induction law, an electrical signal from a “b-dot” probe must be integrated accurately to understand the strength and nature of the induced magnetic field. Integration of the signal is done precisely using an analog active integrator, which allows for substantial gain, automatic zeroing of the baseline integrated signal, and no signal droop, reducing the error. Data accuracy is preserved through diligent shielding from electromagnetic interference, accomplished by mounting the cards within a standard rack chassis isolated in a Faraday-cage-like arrangement. The design has been optimized to reduce cross-interference, ease repairs, and reduce temperatures. The overall error and noise fluctuations are expected to be reduced by many orders of magnitude. Magnetic signal analyses, which comprise the primary diagnostic for the FRC, require one to difference two integrated dB/dt signals. This differential integrator provides a simple way to do this electronically without manually zeroing out integrated signals. The increased precision will allow for better characterization of the plasma in the upcoming tests on FRX-L. Depending on the time scales and characteristics of future plasmas, adjustments to longer timescales and greater shielding will necessitate modification of the active integrator.
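
The underlying relation is Faraday's law, V = -NA·dB/dt, so the field is the scaled time integral of the probe voltage. The sketch below performs that integration numerically on a synthetic signal (the FRX-L system does it with the analog active integrator described above):

    # Recover B(t) from a synthetic b-dot probe voltage by numerical
    # integration: B = -(1/(N*A)) * integral(V dt).
    import numpy as np
    from scipy.integrate import cumulative_trapezoid

    N, A = 10, 1e-4                         # assumed turns and loop area (m^2)
    t = np.linspace(0.0, 10e-6, 2000)       # 10-us record
    B_true = 0.5 * np.sin(2 * np.pi * 2e5 * t)        # hypothetical field (T)
    V = -N * A * np.gradient(B_true, t)               # ideal probe signal

    B_rec = -cumulative_trapezoid(V, t, initial=0.0) / (N * A)
    print(f"max reconstruction error: {np.max(np.abs(B_rec - B_true)):.2e} T")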

Silicon Photo Multiplier for Scintillation Hodoscope. ERIK SKAU (North Carolina State University, Raleigh, NC, 27695) STEPAN STEPANYAN (Thomas Jefferson National Accelerator Facility, Newport News, VA, 23606)

Photon counting is a key component in many high energy physics experiments. With the silicon photo multiplier (SiPM), a type of multi-pixel photon counter (MPPC), previous obstacles such as magnetic fields are being surmounted. Though operating at a relatively low voltage (~70 V), high gain (~10^6) can be expected of a SiPM. The focus of this study was to determine whether the efficiency of a SiPM with a 1 mm active area is suitable for charged particle detection in a scintillator layer, read out via green wavelength-shifting fiber. A test setup was designed to study the efficiency of a Hamamatsu SiPM system. By situating a SiPM on the end of the green wavelength-shifting fiber attached to a scintillator, the MPPC was able to detect the light generated by charged particles passing through the scintillators. Experiments demonstrated that the efficiency of the SiPM setup was acceptable. The one millimeter active area is sufficient for detecting photons from a scintillator and wavelength-shifting fiber system. With the advantage of being unaffected by magnetic fields, unlike photomultiplier tubes, MPPCs may quickly become an alternative. With the diversity of newly developed MPPCs available, scientists are now able to extend previous experimental boundaries.

Simulating Beam Passage Through the Injection Chicane of the Spallation Neutron Source. MATTHEW PERKETT (Denison University, Granville, OH, 43023) JEFF HOLMES (Oak Ridge National Laboratory, Oak Ridge, TN, 37831)

The Spallation Neutron Source (SNS) is an accelerator-based neutron source that will become the most powerful device of its kind when the full power of 1.4 MW is achieved in 2009. Such a powerful source has a correspondingly small tolerance for uncontrolled beam loss, which leads to activation of the facility and longer maintenance times. To meet radiation requirements at such high energies, only 1/10,000 of the particles can be lost to collisions with the beam pipe. According to recent measurements, the area with the worst beam loss is in the injection chicane and beam dump line. Due to a structural flaw, this region will eventually need to undergo physical modification, so it is critical to accurately track the particles’ paths. Previous studies have been conducted using a piece-wise symplectic magnetic field approximation, but it is now essential to track particles with greater precision using a 3D multipole expansion representation for the magnetic field. To achieve this, a large portion of time was devoted to coding, testing, and adding C++ modules to the new Python wrapper of the Objective Ring Beam Injection and Tracking code (pyORBIT). PyORBIT is an accelerator physics code that utilizes a Message Passing Interface (MPI) for parallel computing capabilities; it is being developed at ORNL and used by accelerator facilities worldwide. Multiple benchmarks completed in the weeks leading to the final measurements agreed well with results calculated by hand. Tracking a Gaussian distribution of particles with SNS injection parameters from the primary stripping foil to the secondary stripping foil at a kinetic energy of 1.0 GeV resulted in physically reasonable positions and momenta, confirming code integrity. It was found that the 3D magnetic field produced a 66 mm separation between the H0 and H- components, while the piece-wise symplectic approximation found only a 61 mm separation. This larger separation could account for the greater difficulty getting the beam to enter the dump line and the observed high losses in that region. The next logical project will be to utilize the new modules in pyORBIT for a 3D magnetic field from the secondary foil down the dump line.
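
The tracking itself is ordinary integration of the Lorentz force through a field map. The sketch below is a bare-bones nonrelativistic RK4 tracker through a hard-edge field region (toy field and particle values, not pyORBIT's 3D multipole machinery):

    # RK4 integration of a charged particle through a uniform-field region.
    import numpy as np

    q_over_m = 9.58e7                       # C/kg, roughly a proton (toy value)

    def B(r):                               # hard-edge 0.1 T dipole, 1 m long
        return np.array([0.0, 0.1, 0.0]) if 0.0 < r[2] < 1.0 else np.zeros(3)

    def deriv(y):
        r, v = y[:3], y[3:]
        return np.concatenate([v, q_over_m * np.cross(v, B(r))])

    y = np.array([0.0, 0.0, -0.5, 0.0, 0.0, 2.5e7])   # start upstream, v_z only
    dt = 1.0e-10
    for _ in range(900):                    # roughly 2 m of path at this speed
        k1 = deriv(y)
        k2 = deriv(y + 0.5 * dt * k1)
        k3 = deriv(y + 0.5 * dt * k2)
        k4 = deriv(y + dt * k3)
        y += dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    print(f"transverse deflection after the magnet: {y[0] * 1e3:.1f} mm")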

Simulating Multipacting in Tapered Waveguides using Xing RK4. DAN ZOU (University Of Wisconsin - Madison, Madison, WI, 53706) HAIPENG WANG (Thomas Jefferson National Accelerator Facility, Newport News, VA, 23606)

Radio frequency (RF) waveguides propagate and couple the high RF power used to accelerate charged particles over short distances. Under certain resonant conditions in the waveguide, this high power can lead to an undesirable discharge, or multipacting, which consumes RF power and can damage the waveguide. One method of preventing multipacting is to avoid these specific resonant conditions, which are most efficiently found with appropriate computer simulation software. Xing RK4, written in FORTRAN, is one such code, though it was originally limited to rectangular waveguide analysis. The purpose of this study was to port Xing RK4 to C++ and then to expand it to analyze multipacting in tapered waveguides. Expansion of Xing focused primarily on deriving appropriate empirical and analytic formulae for the electromagnetic (EM) fields within tapered structures. The majority of other necessary functions were inherited from the original code, slightly modified to accommodate the new geometry. EM field implementation was not completed due to time constraints and the complexity of the analysis. Once completed, however, Xing RK4's new capabilities will allow scientists to determine the resonant conditions to avoid when using tapered RF waveguides and to benchmark other simulation codes. Additional accuracy may be achieved through further fine-tuning of the analytic formulae.
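
To illustrate the kind of resonance search involved (a toy parallel-plate model with illustrative numbers, not Xing RK4's tapered-waveguide analysis), one can integrate a single electron's motion in an RF field and flag launch phases whose transit takes an odd number of half RF periods and ends with an impact energy in a secondary-yield-above-unity window:

    import numpy as np

    E_CHARGE = 1.602176634e-19  # [C]
    M_E = 9.1093837015e-31      # [kg]

    def transit(e0, freq, gap, phase, v0=0.0, dt_frac=2000):
        """Integrate 1-D electron motion x'' = -(e/m) E0 sin(wt + phi) across a
        plate gap. Returns (transit_time, impact_energy_eV), or None if the
        electron returns to the starting plate or never crosses."""
        w = 2.0 * np.pi * freq
        dt = 1.0 / (freq * dt_frac)
        x, v, t = 0.0, v0, 0.0
        while 0.0 <= x < gap and t < 50.0 / freq:
            a = -(E_CHARGE / M_E) * e0 * np.sin(w * t + phase)
            v += a * dt
            x += v * dt
            t += dt
        if x >= gap:
            return t, 0.5 * M_E * v**2 / E_CHARGE
        return None

    # Scan launch phase for one hypothetical field level and gap; print any
    # candidates whose transit is near an odd number of half RF periods with
    # impact energy in a rough SEY > 1 window for copper.
    freq, gap, e0 = 1.5e9, 0.002, 1.0e5   # [Hz], [m], [V/m] -- illustrative values
    for phase in np.linspace(0.0, np.pi, 19):
        hit = transit(e0, freq, gap, phase)
        if hit:
            t, energy = hit
            half_periods = 2.0 * t * freq
            resonant = (abs(half_periods - round(half_periods)) < 0.1
                        and round(half_periods) % 2 == 1)
            if resonant and 50.0 < energy < 1500.0:
                print(f"phase {phase:.2f} rad: {half_periods:.2f} half-periods, {energy:.0f} eV")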

Simulation of a Laser Plasma Accelerator Operating in the Bubble Regime and Using Laser Assisted Injection. JOHN BAILEY, III (University of Alabama in Huntsville, Huntsville, AL, 35899) DR. YUELIN LI (Argonne National Laboratory, Argonne, IL, 60439)

Quark distributions in the proton cannot be determined theoretically, but require experimental measurement. Data are lacking for the large-x region, where one quark carries most of the momentum of the proton. Experiment 866, completed at Fermilab, measured the Drell-Yan cross section for pp and pd interactions, which is sensitive to the quark distributions of the interacting hadrons. By examining the data from 175,000 dimuon events in the range 4.2 ≤ M ≤ 16.85 GeV and -0.05 ≤ xF ≤ 0.8, we are able to examine the light quark and antiquark distributions in the nucleon. In the experiment, both hydrogen and deuterium targets were used in order to solve for the absolute quark distributions in the proton and the neutron. In this project we verified the cross section determined as a function of (xF, M) and determined the functional dependence on (x1, x2). The Monte Carlo results show general agreement with the experimental data and with next-to-leading order calculations, demonstrating an understanding of the theory behind the experiment. Through a comparison of the experimental data to the Monte Carlo results and next-to-leading order cross section calculations, a better determination of the parton distributions will be achieved.
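
For reference, the mapping between the measured variables (xF, M) and the parton momentum fractions (x1, x2) follows from leading-order Drell-Yan kinematics (neglecting transverse momentum):

    x_1 x_2 = \frac{M^2}{s}, \qquad x_F = x_1 - x_2
    \quad\Longrightarrow\quad
    x_{1,2} = \frac{\sqrt{x_F^2 + 4M^2/s} \pm x_F}{2}

where \sqrt{s} is the hadron-hadron center-of-mass energy.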

Simulation of Electron Trajectories Inside an Annular Dielectric. JULIE MANAGAN (Vanderbilt University, Nashville, TN, 37235) JOHN R. HARRIS (Lawrence Livermore National Laboratory, Livermore, CA, 94550)

Development of a compact Dielectric Wall Accelerator (DWA) is desirable for scientific and medical applications such as proton therapy, but requires vacuum insulators to withstand electric fields (E-field) near 100 MV/m without surface flashover. It is believed that flashover is caused by secondary electron emission avalanches (SEEA), requiring field-emitted electrons to re-strike the insulator surface. In a fast voltage pulse, the changing E-field creates a displacement current and induces a magnetic field (B-field), which will influence the trajectories of field-emitted electrons. This may cause them to strike the insulator, triggering SEEA. In a previous study, B-field simulations showed this effect for particles emitted from the exterior of a cylindrical dielectric. This abstract describes a study simulating the B-field effect on electrons emitted from the interior of an annular dielectric capped by disc electrodes. The particle-in-cell code LSP was used to simulate field-emitted electrons under the E- and B-fields created on the rising edge of a trapezoidal voltage pulse, while varying outer diameter, inner diameter, dielectric constant, emission time, voltage pulse length, and emission energy. Since conventional flashover research does not account for B-field effects due to displacement current, the code was modified so that the B-field could be eliminated, and all tests were run with and without it. The sign and strength of the B-field determined particle motion: when the E-field had a positive rate of change the B-field deflected electrons away from the surface, but when it had a negative rate of change the B-field deflected electrons toward the surface. Ringing appeared at the end of the voltage ramp, creating a negative rate of change in the E-field. Variations in timing and geometry influenced the fields seen by the particles and changed their trajectories accordingly. Where no B-field was present, the electrons traveled within 1 mm of the surface. Under most conditions, the B-field deflected electrons away from the interior surface during the rising edge of the voltage pulse, which is opposite to its effect on the exterior surface. This benefits the DWA system by decreasing possible triggers of SEEA. This study improved understanding of vacuum insulators, and will aid development of materials that are better suited to the high E-fields of the DWA.
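
The displacement-current mechanism invoked above can be summarized with the Ampere-Maxwell law; for a spatially uniform axial E-field ramping at rate dE/dt inside radius r (an idealization of the pulsed geometry, not the LSP field solution), the induced azimuthal field is

    \oint \mathbf{B}\cdot d\boldsymbol{\ell} = \mu_0 \epsilon_0 \epsilon_r \frac{d}{dt}\int \mathbf{E}\cdot d\mathbf{A}
    \quad\Longrightarrow\quad
    B_\phi(r) = \frac{\mu_0 \epsilon_0 \epsilon_r\, r}{2}\,\frac{dE}{dt}

so the sign of B_\phi tracks the sign of dE/dt, which is why the rising edge and the subsequent ringing deflect electrons in opposite directions.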

Simulation of Neutron Backgrounds from the ILC Extraction Line Beam Dump. SIVA DARBHA (University of Toronto, Toronto, ON, M5S 1A1) TAKASHI MARUYAMA (Stanford Linear Accelerator Center, Stanford, CA, 94025)

The operation of the International Linear Collider (ILC) as a precision measurement machine is dependent upon the quality of the charge-coupled device (CCD) silicon vertex detector. A neutron flux of 10^10 neutrons/cm^2 incident upon the vertex detector will degrade its performance by causing displacement damage in the silicon. One source of a neutron background arises from the dumping of the spent electron and positron beams into the extraction line beam dumps. The Monte Carlo program FLUKA was used to simulate the collision of the electron beam with the dump and to determine the resulting neutron fluence at the interaction point (IP). A collimator and tunnel were added and their effect on the fluence was analyzed. A neutron source was then generated and directed along the extraction line towards a model of the BeamCal, vertex detector, and beampipe to determine the neutron fluence in the silicon layers of the detector. Scattering in the BeamCal and beampipe was studied by manipulating the composition of the BeamCal. The fluence in the first silicon layer for the current tungsten BeamCal geometry was corrected according to the 1 MeV equivalent silicon displacement damage to obtain a comparable value for the damage done to the CCD vertex detector. The IP fluence was determined to be (3.65 ± 2.34)×10^10 neutrons/cm^2/year when the tunnel and collimator were in place, with no appreciable increase in statistics when the tunnel was removed. The BeamCal was discovered to act as a collimator by significantly impeding the flow of neutrons towards the detector. The majority of damage done to the first layer of the detector was found to come from neutrons with a direct line of sight from the quadrupole, with only a small fraction scattering off of the beampipe and into the detector. The 1 MeV equivalent neutron fluence was determined to be 1.85×10^9 neutrons/cm^2/year when the positron beam was considered, or 9.27×10^8 neutrons/cm^2/year by one beam alone, which contributes 18.5% of the threshold value in one year. Future work will improve the detector model by adding the endcap sections of the silicon detector, and will study in detail the neutron scattering off of the tunnel walls. Other sources of neutron backgrounds will also be analyzed, including electron-positron pairs, Beamstrahlung photons, and radiative Bhabha scattering, in order to obtain a complete picture of the overall neutron damage done to the vertex detector.
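
The 1 MeV equivalent correction mentioned above is conventionally a damage-weighted sum over the neutron energy spectrum (NIEL scaling). A minimal Python sketch, with a hypothetical hardness-factor table standing in for the tabulated silicon displacement-damage curves actually used in such analyses:

    import numpy as np

    # Hypothetical table of silicon displacement-damage hardness factors
    # D(E)/D(1 MeV) at a few energies [MeV]; a real analysis uses tabulated
    # NIEL curves (e.g., ASTM E722), not these illustrative numbers.
    energy   = np.array([0.1, 0.5, 1.0, 5.0, 20.0])
    hardness = np.array([0.5, 0.9, 1.0, 1.5, 1.8])

    def equivalent_fluence(e_bins, phi):
        """1 MeV equivalent fluence: fluence per energy bin weighted by D(E)/D(1 MeV)."""
        k = np.interp(e_bins, energy, hardness)
        return np.sum(k * phi)

    # Example: binned fluence [n/cm^2/year] from a simulated spectrum
    e_bins = np.array([0.2, 1.0, 3.0, 10.0])
    phi    = np.array([4e8, 3e8, 2e8, 1e8])
    print(f"1 MeV eq. fluence: {equivalent_fluence(e_bins, phi):.3g} n/cm^2/year")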

Soil Activation from Alternating Gradient Synchrotron and Booster. JULIA TILLES (University of Massachusetts Amherst, Amherst, MA, 01003) KIN YIP (Brookhaven National Laboratory, Upton, NY, 11973)

At Brookhaven National Laboratory (BNL), the soil shielding at the Alternating Gradient Synchrotron (AGS) and its Booster counterpart contains some spots that have been subjected to more radiation than others. In keeping with BNL's efforts to account for the environmental effects of the lab's operations, the soil radiation levels in those hot spots will be put on record. In simulation, the protons making up the AGS and Booster beams are collided with a steel target, using the three-dimensional geometry of the present AGS beam dump and the past Booster beam dump, where excess beam particles were dumped. The simulated radiation levels in the surrounding environment were mapped in two-dimensional plots using MCNPX (Monte Carlo N-Particle eXtended), a Monte Carlo simulation program developed at Los Alamos National Laboratory. The end result is a compilation of several two-dimensional plots, each of which overlays the simulated radiation levels on the blueprint of the respective site and plane. The information gathered will go on file for public access.

Structure of ZnO Nanorods using X-Ray Diffraction. MARCI HOWDYSHELL (Albion College, Albion, MI, 49224) MICHAEL TONEY (Stanford Linear Accelerator Center, Stanford, CA, 94025)

Many properties of zinc oxide, including wide bandgap semiconductivity, photoconductivity, and chemical sensing, make it a very promising material for areas such as optoelectronics and sensors. This research involves analysis of the formation, or nucleation, of zinc oxide by electrochemical deposition in order to gain a better understanding of the effect of different controlled parameters on the subsequently formed nanostructures. Electrochemical deposition involves the application of a potential to an electrolytic solution containing the species of interest, which causes the ions within to precipitate on one of the electrodes. While there are other ways of forming zinc oxide, this particular process is done at relatively low temperatures, and with the high x-ray flux available at SSRL it is possible to observe such nucleation in situ. Additionally, several deposition parameters can be controlled; the concentration of Zn2+ and the applied potential were controlled during this project. The research involved both gathering x-ray diffraction data on SSRL beamline 11-3 and analyzing it using fit2d, Origin 6.0, and Microsoft Excel. A time series showed that both the in-plane and out-of-plane components of the ZnO nanorods grew steadily at approximately the same rate throughout deposition. Additionally, analysis of post-deposition scans showed that as the potential goes from less negative to more negative, the resulting nanostructures become more oriented.
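
The abstract does not state the analysis formulas, but crystallite dimensions along different directions are commonly extracted from the widths of in-plane and out-of-plane diffraction peaks via the Scherrer equation:

    \tau = \frac{K\lambda}{\beta\cos\theta}

where \tau is the crystallite size along the scattering direction, K ≈ 0.9 is a shape factor, \lambda the x-ray wavelength, \beta the peak's full width at half maximum in radians, and \theta the Bragg angle.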

Studies of a Free Electron Laser Driven by a Laser-Plasma Accelerator. ANDREA MONTGOMERY (Butler University, Indianapolis, IN, 46208) CARL B. SCHROEDER (Lawrence Berkeley National Laboratory, Berkeley, CA, 94720)

A free electron laser (FEL) uses an undulator, a set of alternating magnets producing a periodic magnetic field, to stimulate emission of coherent radiation from a relativistic electron beam. The Lasers, Optical Accelerator Systems Integrated Studies (LOASIS) group at Lawrence Berkeley National Laboratory (LBNL) will use an innovative laser-plasma wakefield accelerator to produce an electron beam to drive a proposed FEL. In order to optimize the FEL performance, the dependence on electron beam and undulator parameters must be understood. Numerical modeling of the FEL using the simulation code GINGER predicts the experimental results for given input parameters. Among the parameters studied were electron beam energy spread, emittance, and mismatch with the undulator focusing. Vacuum-chamber wakefields were also simulated to study their effect on FEL performance. Energy spread was found to be the most influential factor, with output FEL radiation power sharply decreasing for relative energy spreads greater than 0.33%. Vacuum chamber wakefields and beam mismatch had little effect on the simulated LOASIS FEL at the currents considered. This study concludes that continued improvement of the laser-plasma wakefield accelerator electron beam will allow the LOASIS FEL to operate in an optimal regime, producing high-quality XUV and x-ray pulses.
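
The quoted 0.33% threshold is consistent with the standard high-gain FEL criterion that the relative energy spread stay below the Pierce parameter \rho, which also sets the 1D power gain length (this is general FEL theory, not a result quoted in the abstract):

    \frac{\sigma_\gamma}{\gamma} \lesssim \rho, \qquad L_{g,\mathrm{1D}} = \frac{\lambda_u}{4\pi\sqrt{3}\,\rho}

where \lambda_u is the undulator period; for typical XUV designs \rho is of order 10^{-3}, in line with the sensitivity found here.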

Studies of Flux Pinning in Superconducting YBa2Cu3O7 Disks. ANGELA KOU (Columbia University, New York, NY, 10027) SOREN PRESTEMON (Lawrence Berkeley National Laboratory, Berkeley, CA, 94720)

YBa2Cu3O7 (YBCO) is a high temperature superconductor that has been shown to pin high magnetic fields when it is placed in an applied field greater than its lower critical field, Hc1, and less than its upper critical field, Hc2. The pinning properties of a bulk YBCO sample were studied in order to probe its ability to pin uniform high magnetic fields for extended periods of time. The YBCO sample was field cooled from 100 K to 13 K using a 3He cryostat in an applied magnetic field of 5 T. A Hall probe was used to monitor the field within the sample. The sample was found to trap a maximum field of 3.5 T. Flux jumps were observed during ramping down of the applied field. Flux creep was also observed: the sample lost less than 0.1% of the trapped field over a period of 120 s. The slow rate of degradation of the field over time demonstrates the possibility of using bulk samples of YBCO as "permanent magnets". Finally, the trapped field decreased with increasing temperature and was lost entirely above 100 K, indicating a critical temperature of 104 K for this sample. Critical state models were used to model the trapped field within the sample. This work is part of an ongoing project to investigate the feasibility of using bulk YBCO with trapped fields as the source of a uniform magnetic field in condensed matter scattering experiments.
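
The simplest such critical state description is the Bean model, in which the flux gradient inside the superconductor is set by the critical current density, so a fully magnetized slab of half-width a traps a peak field proportional to its size (the abstract does not specify which critical state model was used; the Bean model is the standard starting point):

    \mu_0 J_c = \left|\frac{dB}{dx}\right|, \qquad B_{\mathrm{max}} \approx \mu_0 J_c\, a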

Study of KS Production with the BaBar Experiment. THOMAS COLVIN (The Ohio State University, Columbus, OH, 43201) JOCHEN DINGFELDER (Stanford Linear Accelerator Center, Stanford, CA, 94025)

We study the inclusive production of short-lived neutral kaons (KS) with the BaBar experiment at the Stanford Linear Accelerator Center. The study is based on a sample of 383 million B-Bbar pairs produced in e+e- collisions at the Y(4S) resonance, in which one B meson has been fully reconstructed. We select a clean sample of KS mesons and compare kinematic spectra for data and simulation. We find that the simulation overestimates the total production rate of KS, and we see differences in the shape of the KS momentum spectra. We derive correction factors for different momentum intervals to bring the simulation into better agreement with the observed data.

Study of Pedestal Fluctuations of Channels in the Track Imaging Cherenkov Experiment Camera. EMILY GOSPODARCZYK (Sauk Valley Community College, Dixon, IL, 61021) KAREN BYRUM (Argonne National Laboratory, Argonne, IL, 60439)

The Track Imaging Cherenkov Experiment (TrICE) is a prototype telescope on-site at Argonne National Laboratory. It is designed to measure the composition of cosmic rays through the detection of direct Cherenkov radiation. TrICE is exploring the capabilities of a camera composed of a 4×4 array of 16-channel Hamamatsu R8900 multianode photomultiplier tubes (MaPMTs) and their corresponding electronics. The higher angular resolution explored in the TrICE prototype telescope can be applied to the next generation of high energy gamma-ray telescopes. The objective of the research was to study TrICE pedestal data and look for pedestal fluctuations in each channel over the time structure of an event. Data were recorded at a rate of 53 MHz (19 ns sampling), and each event contained a snapshot of eight 19 ns time slices of PMT signals. A macro, written in C++ and interfaced with the ROOT analysis framework, calculates the mean ADC counts of each time slice for each channel and plots the means for all time slices, each represented by a different superimposed marker. Running the macro on individual pedestal files showed that the pedestal remained relatively consistent between time slices. A further examination compared the Gaussian means of each time slice for channel one; the Gaussian means of all the time slices fell within a small range and within the standard deviation. Subsequent steps should involve analyzing multiple pedestal files and looking for variations in the pedestal as a function of time and temperature.
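
The calculation itself is compact; re-expressed as a Python/numpy sketch on synthetic pedestal data (the original macro was C++/ROOT, and the array shape below is only an assumption about the readout):

    import numpy as np

    # adc[event, channel, slice]: hypothetical pedestal data shaped like the
    # TrICE readout -- 8 time slices of 19 ns for each camera channel.
    rng = np.random.default_rng(0)
    adc = rng.normal(loc=100.0, scale=2.0, size=(2000, 256, 8))

    # Mean ADC count per channel and time slice, averaged over events
    mean_per_slice = adc.mean(axis=0)            # shape (256, 8)

    # Stability check for one channel: do the slice means agree within the spread?
    ch = 0
    means = mean_per_slice[ch]
    sigma = adc[:, ch, :].std(axis=0)
    print("channel 0 slice means:", np.round(means, 2))
    print("consistent within 1 sigma:", np.all(np.abs(means - means.mean()) < sigma))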

Super-hydrophobic Behavior on Nano-structured Surfaces. DANIEL SCHAEFFER (Brigham Young University-Idaho, Rexburg, ID, 83460) JOHN T. SIMPSON (Oak Ridge National Laboratory, Oak Ridge, TN, 37831)

Super-hydrophobic behavior has been observed in various natural settings, such as the leaves of the lotus plant, and has been thoroughly studied over the past few years. The water-repellent properties of water drops on uniform arrays of vertically aligned nano-cones were investigated to determine the highest achievable contact angle (a measure of water drop repellency), measured between the reference plane on which the drop sits and the tangent to the drop surface at the point where the drop meets the plane. At low aspect ratios (the ratio of nano-cone height to width), surface tension pulls the water into the nano-cone array, resulting in a wetted surface. Higher aspect ratios reverse the effect of the surface tension, resulting in a larger contact angle that causes water drops to roll off the surface. Fiber drawing, bundling, and redrawing are used to produce the structured-array glass composite surface. Triple-drawn fibers are fused together, annealed, and sliced into thin wafers. The surface of the composite glass is etched with H2O:NH4F:HF etching solutions to form nano-cones through a differential etching process and then coated with a fluorinated self-assembled monolayer. Cone aspect ratios can be varied through changes in the chemistry and concentration of the etching acid solution. Super-hydrophobic behavior occurs at contact angles >150°; it was predicted and measured that optimal behavior is achieved at an aspect ratio of 4:1, which displays contact angles of approximately 175°. Super-hydrophobic behavior on uniform arrays of vertically aligned nano-cones demonstrates that synthetic fabrication of super-hydrophobic surfaces is genuinely achievable by this process.
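
The wetted-versus-suspended behavior described above is conventionally captured by the Wenzel and Cassie-Baxter equations (the abstract does not name them, but they are the standard framework):

    \cos\theta_W = r\cos\theta, \qquad \cos\theta_{CB} = f\,(1+\cos\theta) - 1

where \theta is the contact angle on a flat surface of the same coating, r ≥ 1 the roughness ratio, and f the fraction of the drop's footprint in contact with solid. Tall, sharp cones drive f toward zero, pushing the Cassie-Baxter angle toward 180°, consistent with the ~175° measured at the 4:1 aspect ratio.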

Testing a Novel Laser Polarimeter Design. JOAN DREILING (Fort Hays State University, Hays, KS, 67601) MARCY STUTZMAN (Thomas Jefferson National Accelerator Facility, Newport News, VA, 23606)

Polarized electron beams are used at Thomas Jefferson National Accelerator Facility to study the properties of nuclei. When circularly polarized light of the appropriate wavelength illuminates a gallium arsenide cathode, polarized electrons are emitted. Right-hand circularly polarized light excites electrons of one spin polarization state, and left-hand circularly polarized light excites the opposite spin polarization state. The polarization of the electrons is typically measured with complex polarimeters, either in the lab or in the accelerator and experimental halls. A novel polarimetry technique was explored to determine if a simple optical setup could be used to measure the polarization of the electron beam from a gallium arsenide cathode. A pump and probe system similar to those used in atomic absorption spectroscopy was employed. A circularly polarized pump laser was used in an attempt to deplete one polarization state of the crystal, while the probe laser was varied between the same and opposite circular polarizations. If the pump laser depleted one polarization state, the probe laser would cause additional photocurrent only when the two lasers had opposite polarizations. It was found that the photocurrent did not vary when the polarization of the probe laser was changed. The absence of a statistically significant difference in photocurrent suggests that depletion of electron states was not achieved. Therefore, this proposed simple, low-cost method of polarimetry is not feasible, and the more complex polarimeters are still required when knowledge of electron beam polarization is needed.

Testing an Algorithm to Find Cosmic Strings in the CMB. HEATHER BUSK (Ventura College, Ventura, CA, 93003) GEORGE SMOOT (Lawrence Berkeley National Laboratory, Berkeley, CA, 94720)

A number of theories, such as Grand Unified Theories (GUTs) and superstring theories, predict that cosmic strings would have formed shortly after the Big Bang; they may even continue to exist today. Since most of the predictions of these theories could only be observed at much higher energies than are produced on Earth, strings provide one of the few ways to test the theories. Eunwha Jeong and George Smoot developed a technique to search the Cosmic Microwave Background (CMB) for strings by looking for a temperature jump (step) across a moving string, via the Kaiser-Stebbins effect. I simulated a square patch of the CMB containing a temperature step, using normally distributed random numbers, and then implemented the algorithm to calculate the properties of the patch. I tested how well the algorithm returned the correct values for a range of input values of the background temperature (T0), temperature step (ΔT), standard deviation of the random noise (σG), and patch size (n). The preliminary results suggest that there is a small decrease in accuracy for increasing background temperatures, most noticeable in the calculated T0, the fraction of blueshifted pixels, and σG. Increasing n significantly improves the results, while accuracy decays with larger σG. The results had large errors for low ΔT, but these attenuated with increasing ΔT.
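
A toy version of the simulated input and of simple estimators for (T0, ΔT, blueshifted fraction) can be sketched in Python; this is only a stand-in for the Jeong-Smoot algorithm, with hypothetical parameter values:

    import numpy as np

    def make_patch(n, t0, step, sigma_g, rng):
        """n x n CMB patch: Gaussian noise about t0, with a temperature step
        added to the right half (a crude stand-in for a straight string)."""
        patch = rng.normal(t0, sigma_g, size=(n, n))
        patch[:, n // 2:] += step
        return patch

    def estimate(patch, t0_guess):
        """Toy estimators: background and step size from half-patch means, and
        the fraction of pixels above the background guess (the blueshifted,
        i.e. hotter, side of the step)."""
        n = patch.shape[1]
        left, right = patch[:, : n // 2], patch[:, n // 2:]
        t0_hat = left.mean()
        step_hat = right.mean() - left.mean()
        blue_frac = np.mean(patch > t0_guess)
        return t0_hat, step_hat, blue_frac

    rng = np.random.default_rng(1)
    patch = make_patch(n=64, t0=2.725, step=1e-4, sigma_g=5e-5, rng=rng)
    print(estimate(patch, t0_guess=2.725))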

The Assembly and Testing of the BigBite Hadron Detector System. GORDON LOTT (Virginia Tech, Blacksburg, VA, 24061) DOUGLAS HIGINBOTHAM (Thomas Jefferson National Accelerator Facility, Newport News, VA, 23606)

Scintillator detectors are a basic part of nuclear and high energy physics research. When a charged particle passes through a scintillator, it creates photons. The photons travel to the ends of the scintillator and into photomultiplier tubes (PMTs), which turn the photons into an electric pulse. The BigBite Hadron particle detector package has a scintillator plane made of two layers of 24 scintillator bars. This project focused on assembling the scintillator plane and electronics of this detector package, which will be used in a Thomas Jefferson National Accelerator Facility Hall A experiment next year. All the PMTs were connected to labeled high voltage (HV) cables and signal cables. The signal cables were connected to a series of nuclear instrumentation modules (NIM), which amplify the signal and convert the analog signal to a digital signal. The different NIM modules needed for logic and triggering were arranged and cabled in an organized fashion for easy troubleshooting and repair. The HV cables were connected to LeCroy high voltage supplies. The whole system was tested with cosmic rays to find problems. This assembly and testing put the detector in complete working order and verified the quality of the setup. The prepared package can now be moved into Hall A for the experiment.

The Effects of Measurement Errors on Neutrino Angular Resolution in the IceCube Neutrino Detector. LESLIE UPTON (Hampton University, Hampton, VA, 23668) AZRIEL GOLDSCHMIDT (Lawrence Berkeley National Laboratory, Berkeley, CA, 94720)

The IceCube collaboration is actively pursuing neutrino detection to study astrophysical sources. These neutrinos are identified by the secondary muons detected within the IceCube detector array. The muon track is reconstructed using the arrival times of Cherenkov photons at the illuminated digital optical modules (DOMs) within the detector. However, it is imperative to calculate how different measurement errors affect the reconstruction of the muon. A Monte Carlo simulation was developed in order to study these effects on the resolution of the muon reconstruction. The simulation, developed in ROOT, creates a muon in an array detector and uses the time information from illuminated DOMs, together with Minuit, to reconstruct the parameters of the muon without any knowledge of the original coordinates of the muon. Minuit provides precise results, with the distribution of the space angle between the original and reconstructed muon tracks sharply peaked at zero. There are correlations between the number of illuminated DOMs, the muon track length, and the angular resolution of the reconstructed track. Further work includes exploring photon statistics, energy dependence, and more precise DOM information.
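
The reconstruction idea can be sketched in Python with scipy standing in for Minuit; the timing model below (light emitted at the track's closest-approach point, propagating at c/n) is a deliberate simplification of the true Cherenkov geometry, purely for illustration:

    import numpy as np
    from scipy.optimize import minimize

    C_VAC = 0.2998          # muon speed [m/ns]
    C_ICE = 0.2998 / 1.33   # photon speed in ice [m/ns], toy group index

    def predict_times(params, doms):
        """The muon passes r0 = (x0, y0, 0) at time t0 along direction
        (theta, phi); each DOM is assumed to see light emitted at the track's
        closest-approach point, travelling at c/n."""
        x0, y0, theta, phi, t0 = params
        u = np.array([np.sin(theta) * np.cos(phi),
                      np.sin(theta) * np.sin(phi),
                      np.cos(theta)])
        rel = doms - np.array([x0, y0, 0.0])
        l = rel @ u                                        # along-track distance
        d = np.linalg.norm(rel - np.outer(l, u), axis=1)   # closest-approach distance
        return t0 + l / C_VAC + d / C_ICE

    def chi2(params, doms, t_meas, sigma_t=3.0):
        """Chi-square of measured vs. predicted hit times (sigma_t in ns)."""
        return np.sum(((t_meas - predict_times(params, doms)) / sigma_t) ** 2)

    # Toy event: smeared hit times at random DOM positions for a known track
    rng = np.random.default_rng(2)
    true = np.array([10.0, -5.0, 0.4, 1.0, 0.0])
    doms = rng.uniform(-100, 100, size=(30, 3))
    t_meas = predict_times(true, doms) + rng.normal(0.0, 3.0, size=30)

    fit = minimize(chi2, x0=true + 0.1, args=(doms, t_meas), method="Nelder-Mead")
    print("recovered parameters:", np.round(fit.x, 3))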

The Kaon Charge Ratio in Accelerator Experiments. ANDREW HOFFMAN (Yale University, New Haven, CT, 06520) MAURY GOODMAN (Argonne National Laboratory, Argonne, IL, 60439)

Charged kaons (K+ and K-) are a type of particle that can be produced in various high-energy proton-nucleus interactions, by both cosmic rays and accelerators. Kaons can decay into muons (which are seen by detectors), so the K+/K- ratio affects the muon charge ratio. A search was conducted for articles relevant to the kaon charge ratio, to compare accelerator results with the Main Injector Neutrino Oscillation Search (MINOS) Far Detector (FD) interpretation of the ratio of production rates (K+/K- ≈ 2) derived from cosmic ray muons. Most accelerator experiments that were found used colliding proton beams to produce the kaons, whereas one used a proton beam incident on a carbon target and another used collisions of lead ions. Protons colliding with air would be ideal for studying the atmospheric muon charge ratio, but few such experiments have been done. The accelerator results are consistent with kaon and pion charge ratios that increase with Feynman x (a scaling variable closely related to laboratory momentum); it appears that the K+/K- ratio is most consistent with MINOS for x ≈ 0.15-0.20. Several of the experiments used such a value, whereas one used a lower one and others used a range of values. There is a parameterization of the kaon charge ratio as a function of x that seems rather consistent with the accelerator results. The muon and kaon charge ratios are important for neutrino physics because decays and interactions of these particles can produce either neutrinos or antineutrinos, depending on the charge of the parent particle. Since it has been shown that the muon charge ratio and the neutrino-antineutrino ratio are closely related in the atmosphere, the muon and kaon charge ratios will be useful for interpreting results from neutrino detectors such as the MINOS FD.

Ultraviolet Induced Motion of a Fluorescent Dust Cloud in an Argon Direct Current Glow Discharge Plasma. MICHAEL HVASTA (The College of New Jersey, Ewing, NJ, 08628) ANDREW ZWICKER (Princeton Plasma Physics Laboratory, Princeton, NJ, 08543)

Dusty plasmas consist of electrons, ions, neutrals, and comparatively large particles (dust). In man-made plasmas this dust may represent impurities in a tokamak or in plasma processing; in astronomical plasmas this dust forms structures such as planetary rings and comet tails. To study dusty plasma dynamics, an experiment was designed in which a cloud of silica and fluorescent dust is suspended in a low-current (< 2 mA) argon direct current glow discharge. When the cloud is exposed to UV light (100 watts, λ = 365 nm), the mixture fluoresces, moves ~2 mm towards the light source, and begins rotating in a clockwise manner (as seen from the cathode). Using a Charge-Coupled Device camera, dust clouds with diameters ranging from 6-10 mm have been observed, with particle rotational velocities in excess of 3 mm/s near their periphery; particle velocities decrease towards the center of the cloud. By calibrating a UV lamp and adjusting the relative intensity of the UV with a variable transformer, it was found that both the translational and rotational velocities are a function of UV intensity. Additionally, it was determined that bulk cloud rotation is not seen when the dust tray is electrically floated, while bulk translation is. This ongoing experiment represents a novel way to control and localize contamination efficiently in man-made plasmas, as well as a pathway to better understanding UV-bathed plasma systems in space.

United States A Toroidal Large Hadron Collider Apparatus (US Atlas) Load Test Scheduler. MICHAEL MAY (Loyola College in Maryland, Baltimore, MD, 21210) JAY PACKARD (Brookhaven National Laboratory, Upton, NY, 11973)

The United States A Toroidal Large Hadron Collider Apparatus (US Atlas) Load Test Scheduler was created by the United States Department of Energy (DOE) at Brookhaven National Laboratory (BNL) to allow users to make a reservation to test the local input/output (I/O) performance at a site (farm) as would be seen by an ATLAS user or analysis job. To accomplish this, the Java programming language was used in the Eclipse development environment, along with Hibernate mapping and a MySQL database. In this way a user can request a reservation to run a test on a site, entering the start date, end date, farm name, cluster name, node name, and the submitter's distinct number, and an administrator can search the reservation requests and approve or deny each reservation. This project provides an organized way to reserve a load test on a site to measure its I/O performance.

Using "R" to Analyze Radio Signals. CANDICE HUMPHERYS (Brigham Young University-Idaho, Rexburg, ID, 83460) HELIO TAKAI (Brookhaven National Laboratory, Upton, NY, 11973)

Radio data from a distant transmitter are being collected by a MARIACHI (Mixed Apparatus for Radio Investigation of Cosmic rays of High Ionization) antenna in an attempt to understand the background signals to cosmic ray showers. Part of this background comes from meteors. To understand and then analyze these signals, we have elected to use the statistical package "R". "R" has many advantages over other statistical packages: it is open-source software, it runs on multiple platforms, and it can run in batch mode, so it can constantly monitor the data being acquired by the MARIACHI antenna. Data are written to a text file every hour, and each file must then be read into "R" for analysis. The intent of MARIACHI is to post graphs of the meteor data on the internet each hour. To accomplish this, I wrote a script in the "R" programming environment that reads in each text file and creates graphs displaying when meteors occurred. Analysis of this data will begin as soon as enough data have been gathered into "R". Information about the amplitude and width (time duration) of the meteor signals is also recorded. Analysis of this information has begun, and a strong correlation between amplitude and width has been found. Analysis of the average signal width will also commence when enough data are gathered. One thing I am working to accomplish before the average signal width analysis becomes meaningful is to distinguish between meteor signals and airplane signals and to exclude the latter, which have a much longer width than meteor signals. The purpose of collecting and studying this meteor data, and the main goal of the MARIACHI project, is to determine whether studying cosmic rays by scattered radio signals is feasible.

Validating the Computer Simulation of the Effects of Secondary Neutrals on the Motional Stark Effect Diagnostic Gas-Filled-Torus Calibration. WILLIAM SCHUMAKER (Lawrence Technological University, Southfield, MI, 48075) HOWARD YUH (Princeton Plasma Physics Laboratory, Princeton, NJ, 08543)

The motional Stark effect (MSE) diagnostic, an important method of determining the magnetic field pitch angle in tokamak plasmas, measures the polarization angle of light emitted from injected neutral atoms that are affected by Stark splitting due to the Lorentz electric field. A common procedure of injecting a neutral beam into a gas-filled-torus with known magnetic fields in vacuum is one technique used to calibrate MSE diagnostics on many tokamak devices. The usefulness of this calibration has been limited on many installations due to anomalies in the measured pitch angles. Recently, this anomaly was explained as a consequence of beam neutrals that ionize after collisions, travel along the magnetic field lines, re-neutralize via a charge exchange, and rapidly produce emission spectra. Under certain conditions, these secondary neutrals emit hydrogen-alpha spectra that have the proper Doppler shift to pass through the MSE optical filters yet have a different polarization angle than those from the primary beam neutrals, thus contaminating the pitch angle measurement. In an effort to study these contaminations, computer code had previously been written to simulate the gas-filled-torus calibration of the MSE diagnostic on the National Spherical Torus Experiment (NSTX). To characterize the effects of secondary neutrals on the MSE gas-filled-torus calibration technique, new programming modules were written to extend the code to simulate other tokamak geometries and neutral beams. Several consistency checks involving numerical integration were meticulously performed on the modules, ensuring that they had been implemented correctly. Using these new modules, a sensitivity study involving various gas pressures, beam injection angles, magnetic field pitch angles, and system resolutions is in the process of validating the code against respective experimental data. If successful, this validation will help resolve a significant calibration issue with a major diagnostic used in current tokamak fusion research.
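
For context, the Lorentz electric field seen by a beam neutral and the quantity the calibration ultimately constrains can be written as

    \mathbf{E}_L = \mathbf{v}_b \times \mathbf{B}, \qquad \tan\gamma_p = \frac{B_{\mathrm{pol}}}{B_{\mathrm{tor}}}

where v_b is the beam neutral velocity and \gamma_p the magnetic field pitch angle. Secondary neutrals re-neutralized after travelling along field lines have a different v_b, hence a different E_L and polarization direction, which is the contamination mechanism described above.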

Vibration Analysis of a Cryocooler for CDMS. LAUREN COUTANT (University of Illinois, Champaign-Urbana, IL, 61820) DAN BAUER (Fermi National Accelerator Laboratory, Batavia, IL, 60510)

A new pulse tube cryocooler was obtained by the Cryogenic Dark Matter Search (CDMS) research group with the intent of replacing their Gifford-McMahon cryocooler which is currently installed at their research site in the Soudan Mine. A study was done on the new cryocooler to determine its vibration spectrum. More analysis needs to be completed to accurately compare the two cryocoolers, but it appears that the new pulse tube system has less intense vibrations. In addition to measuring vibrations, vibration reduction methods were investigated. Installing flexible materials in the cryocooler is the reduction method recommended for the CDMS project.

VISION Simplified Interface for Education. JOSEPH GRIMM (Brigham Young University-Idaho, Rexburg, ID, 83460) JACOB J. JACOBSON (Idaho National Laboratory, Idaho Falls, ID, 83415)

VISION (Verifiable Fuel Cycle Simulation Model), a dynamic model of the US commercial nuclear fuel cycle, allows the user to manipulate the parameters of the model to analyze multiple nuclear fuel cycle scenarios. The model was created in a program called Powersim, with inputs and outputs through Excel; analysis is possible using the output graphs, charts, and tables in Powersim and Excel. There is an advanced version of VISION that allows the user to run the fuel cycle via three methods: base cases, manual mode, and user-defined base cases. The base case mode allows the user to click on one of more than 60 base case scenarios; following the selection, the remaining variables are set automatically. Manual mode allows the user to set parameters via seven different interface pages containing multiple graphs, table inputs, slider bars, and buttons. User-defined base cases allow the user to set all of the parameters through the Excel input files, which is extremely complex. Due to the advanced nature of the VISION interface, it was unable to meet the needs of potential educational institutions for use in fuel cycle classes. Therefore it became necessary to create a simplified version of the VISION interface, usable and understandable in an educational setting such as a college classroom or elsewhere outside of INL. Using the capabilities of Powersim, in which VISION was created, my task was to take the four base case scenarios and the three fuel cycle parameters chosen by my mentor and create a new, simplified, educationally friendly user interface. The four base cases set all of the parameters within the model automatically except for the three parameters shown on the interface screen. The new interface allows the user to run the VISION model without having to work through the gauntlet of setting all the user inputs, so they can concentrate on the system behavior rather than on the modeling environment. The new interface will be used this fall by five major universities in the fuel cycle classes of their nuclear programs.

What the Formation of the First Stars Left in its Wake. CHRISTENE LYNCH (Gettysburg College, Gettysburg, PA, 17325) MARCELO ALVAREZ (Stanford Linear Accelerator Center, Stanford, CA, 94025)

The formation of the first stars marked a crucial transition in the formation of structure in the universe. Through their feedback effects, which include ionization by their radiation and the supernovae or black holes formed at the end of their lives, they were able to influence the evolution of their surroundings. In this paper we present a new visualization and use analytical calculations to study the influence of these first stars. The visualization was created using both Enzo, a simulation program that uses adaptive mesh refinement, and Amira, a 3D volume rendering program. The visualization allows for a better understanding of the impact these stars had on their surroundings and conveys the importance of these stars to a broader audience. The analytical calculations investigate the possibility that black holes left by the first stars could be seeds for the 10^9 solar mass black holes seen as quasars at redshift z~6. We found that if a remnant black hole were to begin Eddington accretion at z~20, it would be able to form the 10^9 solar mass quasars by z~6, but there is likely to be a delay in the onset of accretion onto the seed black hole because of the radiative feedback of its progenitor. Future, more detailed numerical calculations will be necessary to understand whether the black holes left by the first stars could be seeds for quasar formation.
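
The analytic estimate behind such a seed-growth calculation is exponential Eddington-limited accretion (standard values are assumed here; the abstract does not quote them):

    M(t) = M_0 \exp\!\left(\frac{1-\epsilon}{\epsilon}\,\frac{t}{t_{\mathrm{Edd}}}\right), \qquad t_{\mathrm{Edd}} = \frac{\sigma_T c}{4\pi G m_p} \approx 0.45\ \mathrm{Gyr}

With a radiative efficiency \epsilon ≈ 0.1 and a seed of order 100 solar masses, growth to 10^9 solar masses takes roughly 0.8 Gyr of uninterrupted accretion, comparable to the cosmic time between z ≈ 20 and z ≈ 6, which is why any feedback-induced delay in the onset of accretion matters.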