
Physics Abstracts:

24Al Level Structure and the Corresponding 23Mg(p,γ)24Al Astrophysical Reaction Rate. CHRISTOPHER DEATRICK (Western Michigan University, Kalamazoo, MI, 49008) DARIUSZ SEWERYNIAK (Argonne National Laboratory, Argonne, IL, 60439)

In order to better understand the processes involved in heavy nuclide production in explosive stellar environments, the breakout process from the CNO cycles to the NeNa cycle and to the MgAl cycle must be quantified. Better numerical values of proton capture rates are deduced for the 23Mg(p,γ)24Al reaction by studying nuclear energy levels in 24Al above the proton capture threshold using high-resolution γ-ray spectroscopy and γ-ray angular distribution analysis. 24Al nuclei were produced by colliding an 16O beam delivered by the Argonne Tandem Linac Accelerator System with a 10B target. Excited states in 24Al were populated after evaporating two neutrons from the compound system. Gamma rays emitted from these states were detected with the Gammasphere array of Compton-suppressed Ge detectors. The Argonne Fragment Mass Analyzer was used to separate reaction products from the beam and assign mass and atomic numbers. As a result, states above and below the proton threshold were studied in detail, resulting in an improved 24Al level scheme. The analysis of the first state above the proton threshold indicates that the reaction rate contribution of this state could differ by a factor of up to 9 from that of previous calculations in the 0.1-0.5 GK temperature range.
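
For orientation, the contribution of a single narrow resonance to a proton-capture rate is commonly estimated with the standard narrow-resonance approximation; the sketch below (Python) uses placeholder resonance parameters, not the measured 24Al values.

import numpy as np

def narrow_resonance_rate(T9, E_r, omega_gamma, mu):
    # Standard narrow-resonance thermonuclear rate, N_A<sigma v> in
    # cm^3 s^-1 mol^-1; E_r and omega_gamma in MeV, T9 in GK, mu = reduced
    # mass in amu.
    return 1.5399e11 * (mu * T9) ** -1.5 * omega_gamma * np.exp(-11.605 * E_r / T9)

# Illustrative only: hypothetical resonance energy and strength.
for T9 in (0.1, 0.3, 0.5):
    print(T9, narrow_resonance_rate(T9, E_r=0.45, omega_gamma=1.0e-8, mu=23.0 / 24.0))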

3D Simulation for the ATLAS Education and Outreach Group. BRIAN AMADIO (Rensselaer Polytechnic Institute, Troy, NY, 12180) MICHAEL BARNETT (Lawrence Berkeley National Laboratory, Berkeley, CA, 94720)

ATLAS is a particle detector under construction at the Large Hadron Collider facility at the CERN Laboratory in Geneva, Switzerland. The project will be among the largest physics experiments ever attempted. The ATLAS Education and Outreach Group was started to provide information to students and the general public about the importance of this project. A three-dimensional interactive simulation of ATLAS was created, which allows users to explore the detector. This simulation, named the ATLAS Multimedia Educational Laboratory for Interactive Analysis (AMELIA), allows users to view detailed models of each part of the detector, as well as view event data in 3D. A similar project, ATLANTIS, allows users to examine events in only two dimensions. Currently ATLANTIS allows more sophisticated analysis of events. AMELIA will provide similar functionality, but in a more intuitive way that will be much friendlier to the public.

A Catalog of Candidate High-Redshift Blazars for GLAST. TERSI ARIAS (San Francisco State University, San Francisco, CA, 94132) JENNIFER CARSON (Stanford Linear Accelerator Center, Stanford, CA, 94025)

High-redshift blazars are promising candidates for detection by the Gamma-ray Large Area Space Telescope (GLAST). GLAST, expected to be launched in the fall of 2007, is a high-energy gamma-ray observatory designed for making observations of celestial gamma-ray sources in the energy band extending from 10 MeV to more than 200 GeV. It is estimated that GLAST will find several thousand blazars. The motivations for measuring the gamma-ray emission from distant blazars include the study of the high-energy emission processes occurring in these sources and an indirect measurement of the extragalactic background light. In anticipation of the launch of GLAST we have compiled a catalog of candidate high-redshift blazars. The criteria for sources chosen for the catalog were: high radio emission, high redshift, and a flat radio spectrum. A preliminary list of 307 radio sources brighter than 70 mJy with a redshift z ≥ 2.5 was acquired using data from the NASA Extragalactic Database. Flux measurements of each source were obtained at two or more radio frequencies from surveys and catalogs to calculate their radio spectral indices. The sources with a flat radio spectrum (α ≤ 0.5) were selected for the catalog, and the final catalog includes about 200 sources.
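
As a minimal sketch of the spectral-index selection (Python), assuming the usual convention S ∝ ν^-α, so that small α means a flat spectrum; the flux values below are illustrative, not catalog entries.

import numpy as np

def spectral_index(nu1, s1, nu2, s2):
    # alpha defined by S proportional to nu**(-alpha); conventions vary,
    # so the sign should be checked against the survey being used.
    return -np.log(s2 / s1) / np.log(nu2 / nu1)

# Hypothetical flux densities at 1.4 GHz and 5 GHz, in mJy.
alpha = spectral_index(1.4e9, 500.0, 5.0e9, 420.0)
print(alpha, "flat" if alpha <= 0.5 else "steep")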

A Geant4 Simulation of the COUPP Bubble Chamber. CHARLES CAPPS (Carnegie Mellon University, Pittsburgh, PA, 15289) ANDREW SONNENSCHEIN (Fermi National Accelerator Laboratory, Batavia, IL, 60510)

It is known that a sensitivity on the order of 1 event per year per ton of detector material is necessary to detect a WIMP (Weakly Interacting Massive Particle) dark matter candidate. After successful veto of cosmic radiation, the neutron background will become the greatest obstacle for COUPP (Chicagoland Observatory for Underground Particle Physics) to achieve this level of sensitivity. Thus, understanding the COUPP bubble chamber's response to low-energy neutrons (< 50 MeV) is crucial. A Geant4 simulation of the COUPP bubble chamber response to an Am/Be neutron source is described. The recoil energy spectra given by the simulation are presented. Simulation results of event rate as a function of chamber pressure are compared to experimental data. Moreover, multiple bubble events--indicative of neutrons--are examined. The ratio of single to multiple bubble events is determined for different energy thresholds. To verify Geant4 for neutrons in this energy regime, cross-sections and differential cross-sections are computed from the simulation and compared to the JENDL, JEFF, and ENDF nuclear databases. Elements present in the COUPP experiment are considered. Good agreement is found between simulation cross-sections and the above nuclear databases.

A Numerical Model of the Critical Charge Density Surface of Ultra High Energy Cosmic Ray Induced Extensive Air Showers Using the SCILAB Programming Language. ALLEN SHARPER (Florida A&M University, Tallahassee, FL, 32301) HELIO TAKAI (Brookhaven National Laboratory, Upton, NY, 11973)

A numerical model of the critical charge density surface of ultra high-energy cosmic ray (UHECR) induced extensive air showers (EAS) has been computed. The critical charge density surface defines the surface for specular reflection of radio waves with frequency less than the natural oscillation frequency (plasma frequency) of the EAS charges. Using a numerical model to understand how radio waves reflect from the air shower will help improve the design of devices (antennas, arrays) used to detect the reflected waves. The numerical model will allow the power and direction of the reflected waves to be calculated, which will provide a map of the spatial distribution of reflected wave power and polarization incident to the surface of the earth. The program of the numerical model, written in the SCILAB language, calculates the density of ionization electrons as a function of radial distance from the shower axis and location along the axis. The cosmic ray tracing model is part of the Mixed Apparatus for Radar Investigation of Cosmic-rays of High Ionization (MARIACHI) project. The MARIACHI project investigates an unconventional way of detecting UHECRs, based upon a method successfully used to detect meteors entering the upper atmosphere: MARIACHI seeks to listen for television signals reflected off the ionization trail of a UHECR.

A Plasma Gun for the Next Generation of Spallation Neutron Source H- Ion Sources. JUSTIN CARMICHAEL (Worcester Polytechnic Institute, Worcester, MA, 01609) ROBERT F. WELTON (Oak Ridge National Laboratory, Oak Ridge, TN, 37831)

The ion source for the Spallation Neutron Source (SNS) is required to produce 40-50 mA of H- current, depending on emittance, with a duty factor of ~7% for baseline facility operation. The SNS Power Upgrade Project requires this current to be increased to 75-100 mA at the same duty factor. In its present form, the baseline SNS ion source is unable to deliver this performance over sustained periods of time. A new generation of RF-driven, multicusp ion sources based on external antennas is therefore being designed to meet these requirements. It was found that by injecting a stream of plasma particles from a simple, steady-state, DC glow discharge into the RF plasma (i) H- production can be dramatically increased and (ii) H- pulse rise time can be significantly reduced. The design of a suitable plasma gun is presented, which features a hollow anode and mechanical compatibility with the new ion sources. The Finite Element Method (FEM) has been employed to optimize the design: coupled fluid dynamic, heat transfer, mechanical stress and deformation, and ion/electron trajectory simulations were performed. Several design improvements over earlier versions were implemented, such as the addition of an extraction system. The FEM simulations showed that the design of the new plasma gun is sufficient to handle the thermal stresses resulting from a 1 kW load on the cathode face. The ion/electron simulations demonstrated a high degree of control over the plasma beam, allowing for manipulation of the intensity, mean energy, and divergence of the streaming plasma. The extraction system also allows for selective emission of electrons or ions. It is anticipated that the plasma beam can be optimized with the extraction system to significantly increase the H- current in the new ion sources.

A Portable Water Cherenkov Detector: Measuring Particle Flux at Different Altitudes. LUKAS BAUMGARTEL (University of New Mexico, Albuquerque, NM, 87131) BRENDA DINGUS (Los Alamos National Laboratory, Los Alamos, NM, 87545)

High-energy cosmic particles initiate extensive air showers (EAS) as they interact with the air molecules in Earth's upper atmosphere. If the primary particle carries sufficient energy, the shower reaches the ground. With an array of photo-multiplier tubes (PMTs) located in a pool of water, the direction and energy of the primary particle can be reconstructed from the Cherenkov light generated as the EAS hits the detector. Milagro is one such detector, and has observed high-energy (~1 TeV) gammas and protons from high-energy cosmic phenomena such as active galactic nuclei and supernovae. In preparation for a new detector called HAWC (High Altitude Water Cherenkov), similar to Milagro but with new electronics and a high-altitude location (4000 m), data were taken at the University of New Mexico to perfect the measurement method and to characterize how other factors, such as weather, tank configuration, and power sources, affect count rates. Once these variables were well understood, the detector was used to measure particle flux at four different altitudes: 1540 m, 2650 m, 3231 m, and 4308 m. The rates of the low-energy electromagnetic particles were found to increase with altitude, and had values of 4.38 kHz, 4.82 kHz, 5.50 kHz, and 6.92 kHz, respectively. The 2650 m data was taken at the Milagro site as a reference point. Based on the current data acquisition and analysis algorithm used for Milagro, singles rates at the HAWC site should be less than a factor of two greater than they are at Milagro. The rates measured at >4000 m were 1.44 times greater than the rates at Milagro, leading to the conclusion that singles rates at a HAWC site >4000 m will have a negligible effect on triggering and pulse height measurement.

A Preliminary CCD Cosmetic Grading System for the Dark Energy Survey Camera Focal Plane CCDs. SARAH CARLSON (DePauw University, Greencastle, IN, 46135) JUAN ESTRADA (Fermi National Accelerator Laboratory, Batavia, IL, 60510)

The Dark Energy Survey (DES) is a 5000-square-degree sky survey that will strive to make more precise measurements of dark energy. The DES team at Fermilab is responsible for the construction of the Dark Energy Camera (DECam) that will be mounted along with corrective optics and electronics on the Blanco 4-meter telescope at the Cerro Tololo Inter-American Observatory (CTIO) in Chile. The camera will comprise 62 image charge-coupled devices (CCDs) and 8 guiding, focusing, and aligning CCDs. These CCDs are made of silicon and manufactured at Lawrence Berkeley National Laboratory (LBNL). Part of the task of building the DECam is understanding how each CCD functions, including knowing the limitations of each CCD. Cosmetic defects can be crippling to the performance of a CCD. Cosmetic defects include white and dark pixels and bad columns, as well as defects caused by dark current and quantum efficiency (QE) non-uniformity problems. Dark current is a small electric current generated by the thermal motion of the silicon atoms in the CCD. QE is the measure of the CCD's sensitivity to a certain wavelength. Using the popular astronomical source detection program Source Extractor we make two separate analyses: a flat-fielding analysis and a uniformity analysis, which includes both the dark current and QE uniformity. Catalogs of all the defects found are created for each analysis and analyzed. To be an acceptable candidate for the DECam focal plane, each CCD must meet the requirement of no more than 5% non-usable image area. Using this requirement as a starting point, we have devised a preliminary cosmetic grading system to be used for each CCD. Each CCD will be given two grades, one for the flat-fielding analysis and one for the uniformity analysis. The CCD will receive a grade of 0 if the affected area is 2.5% or less of the total CCD area. A grade of 1 will be given if the affected area is between 2.5% and 5% of the total CCD area. A grade of 2 will be given if the affected area is 5% or more of the total CCD area. Our logic for giving each CCD two grades instead of one overall grade is that we will be able to better characterize CCDs, and if the occasion should arise that we need to pick between groups of CCDs for the DECam focal plane, the two grades will assist us in making our choice.
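
The grading thresholds quoted above map directly onto a small decision rule; a minimal sketch (Python) with illustrative affected-area fractions:

def cosmetic_grade(affected_fraction):
    # Grade 0: affected area <= 2.5%; grade 1: between 2.5% and 5%;
    # grade 2: >= 5% of the total CCD area.
    if affected_fraction <= 0.025:
        return 0
    if affected_fraction < 0.05:
        return 1
    return 2

# Hypothetical CCD: flat-fielding and uniformity results graded separately.
print(cosmetic_grade(0.018), cosmetic_grade(0.041))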

A Study of Gas Electron Multiplier (GEM) Foils. MATTHEW RUMORE (Worcester Polytechnic Institute, Worcester, MA, 01609) CRAIG WOODY (Brookhaven National Laboratory, Upton, NY, 11973)

Advances have been made in the field of high-energy nuclear physics due to the increased usage of Gas Electron Multiplier (GEM) foils, which are known for their versatility and their ability to detect and amplify charge. For instance, GEM foils will be implemented in the Pioneering High Energy Nuclear Interaction eXperiment (PHENIX) and Solenoidal Tracker At RHIC (STAR) experiments at the Relativistic Heavy Ion Collider (RHIC) for the detection of signals from high-energy particle interactions. The reliable production of GEM foils depends on a study of their properties and operating conditions. Two of the most important properties are the absolute gain and the gain stability over time. To test the gain, each foil is placed in an argon/carbon dioxide environment in the ratio of 70:30. An alpha particle emitted above the foil by an Americium-241 source ionizes the gas and produces a cluster of electrons. This primary charge is then collected and amplified by the GEM foil. The amplified signals are read out through a conductive pad on the bottom of the GEM detector using a digital oscilloscope. Because the amount of primary charge is known, the absolute gain can be calculated for each GEM foil as a function of voltage. The gain stability measurements entail taking successive gain measurements over time at a constant voltage. The manufacturing process strongly influences the performance of each GEM foil. As a result, a number of foils produced under different manufacturing conditions were studied in terms of their overall gain and gain stability. This study, which will most likely continue until August 2006, will allow the scientific community to understand the properties of the GEM foils and improve the ability to manufacture better foils in the future.

A Twin Ionization Chamber Arrangement for the Study of 12C(α,γ)16O Through the β-Delayed α Spectrum of 16N. ALESSANDRO LAURO (University of Chicago, Chicago, IL, 60637) ERNST REHM (Argonne National Laboratory, Argonne, IL, 60439)

The most fundamental reactions that describe the complex process of helium burning during stellar evolution include the 12C(α,γ)16O reaction and the triple-alpha reaction, 3 4He → 12C + γ. While the latter has been studied quantitatively over the last decades, the properties of the 12C(α,γ)16O reaction are surrounded by a great deal of uncertainty due to its very small cross section of 10^-41 cm^2. Despite this restriction, the 12C(α,γ)16O reaction can be studied indirectly by observing the β-delayed α spectrum of 16N. While it is energetically impossible for ground-state 16O to α-decay to 12C, it is possible to examine 16O*, excited states of 16O, that result from the β-decay of 16N. Even though the branching ratio of this decay favors γ decay over α decay by a factor of about 10^5, a precise measurement of α-particles can be carried out by a specially designed twin ionization chamber located at the Argonne Tandem Linac Accelerator System (ATLAS) at Argonne National Laboratory. An important step in this experiment involves the calibration of the ionization chamber using a Pu-Be neutron source in order to test the α emission produced by the 10B(n,α)7Li and 6Li(n,α)t reactions. It is from this procedure that a new method of obtaining information about the emission angle of α-particles from the source has been found.

Afterglow Radiation from Gamma-Ray Bursts. HUGH DESMOND (Katholieke Universiteit Leuven, Leuven, Belgium, 3000) WEIQUN ZHANG (Stanford Linear Accelerator Center, Stanford, CA, 94025)

Gamma-ray bursts (GRBs) are huge fluxes of gamma rays that appear randomly in the sky about once a day. It is now commonly accepted that GRBs are caused by a stellar object shooting off a powerful plasma jet along its rotation axis. After the initial outburst of gamma rays, a lower-intensity radiation remains, called the afterglow. Using the data from a hydrodynamical numerical simulation that models the dynamics of the jet, we calculated the expected light curve of the afterglow radiation that would be observed on Earth. We calculated the light curve and spectrum and compared them to the light curves and spectra predicted by two analytical models of the expansion of the jet (Sari's model and Granot's model, both based on the Blandford-McKee solution for a relativistic isotropic expansion). We found that the light curve did not decay as fast as predicted by Sari; the predictions by Granot were largely corroborated. Some results, however, did not match Granot's predictions, and more research is needed to explain these discrepancies.

AMELIA: ATLAS Multimedia Educational Lab for Interactive Analysis. DAVID MEDOVOY (Columbia University, New York, NY, 10027) MICHAEL BARNETT (Lawrence Berkeley National Laboratory, Berkeley, CA, 94720)

AMELIA is an educational software program designed to allow the public to view, on a home computer, a 3D model of the ATLAS detector at CERN, as well as visualizations of recorded particle tracks. Models of the detector's geometry, created in '3ds Max,' are loaded by the software, which is written in the C++ language using the 'Irrlicht' visualization engine. Particle track data in the JiveXML file format is loaded and displayed simultaneously. The use of the standard JiveXML format allows for file-type compatibility with other software, such as the 2D visualization tool ATLANTIS. The 'camera' is fully movable by the user, and custom cutaway views can be created based on the camera's position, to facilitate viewing the interior parts of the detector, as well as the particle tracks within. Tracks are color-coded based on particle type, and will soon be individually selectable. Programs exist to visualize particle track data in 3D, and to simplify scientific data for outreach purposes, but only AMELIA is designed for both. Further, AMELIA is the only project of its kind designed to take advantage of technology developed for video games. An early 2007 public release is anticipated.

Analysis of a Proposed Very Long Baseline Neutrino Oscillation Experiment. CHRISTINE LEWIS (Columbia University, New York, NY, 10027) MILIND DIWAN (Brookhaven National Laboratory, Upton, NY, 11973)

The Very Long Baseline Neutrino Oscillation (VLBNO) study aims to determine how to best design a second-generation experiment to measure the neutrino oscillation parameters and possible violation of charge/parity (CP) invariance. Using the General Long Baseline Experiment Simulation (GLoBES) software and considering a 500 kT water Cherenkov detector at 1300 km, corresponding to a baseline from Fermilab to the Homestake mine, we calculate the sensitivity to θ13 and the CP phase. We find that with 2500 kT*MW*10^7 s of neutrino running and 5000 kT*MW*10^7 s of antineutrino running the experiment could measure sin^2(2θ13) to 9-14% and δCP to ~15° at 1σ. Moreover, the experiment is sensitive to non-zero sin^2(2θ13) as low as 4x10^-3 at 99% confidence.
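
For intuition about why a 1300 km baseline suits this measurement, below is a minimal sketch (Python) of the leading-order vacuum appearance probability; the matter effects and CP-phase terms that GLoBES handles are deliberately omitted, and all parameter values are illustrative.

import numpy as np

def p_nue_appearance(L_km, E_GeV, sin2_2th13, sin2_th23=0.5, dm31_sq=2.5e-3):
    # Leading-order vacuum term of P(nu_mu -> nu_e); dm31_sq in eV^2.
    delta = 1.267 * dm31_sq * L_km / E_GeV
    return sin2_th23 * sin2_2th13 * np.sin(delta) ** 2

# Fermilab-to-Homestake baseline near the first oscillation maximum.
print(p_nue_appearance(L_km=1300.0, E_GeV=2.5, sin2_2th13=4e-3))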

Analysis of Beam Deviation Due to Quadrupole Misalignment Caused by Ambient Ground Motion in the Relativistic Heavy Ion Collider. BRANDON BELEW (Rensselaer Polytechnic Institute, Troy, NY, 12180) CHRISTOPH MONTAG (Brookhaven National Laboratory, Upton, NY, 11973)

Within the Relativistic Heavy Ion Collider (RHIC), and particle accelerators in general, proper alignment of the focusing and defocusing quadrupole magnets is essential to maintaining a stable beam orbit. However, some misalignment due to ambient ground motion is unavoidable. The extent of this displacement is given by a simple linear formula of time, distance, and a site-specific constant A (the 'ATL rule': the mean-square relative displacement of two points a distance L apart grows as A*T*L after a time T). Using known values of the beta function, phase advance, and focusing strength at the RHIC quadrupoles, code was written to simulate expected beam deviation at specific monitor points according to the ATL rule, for arbitrary values of the constant A. This expected deviation was then compared with actual logged beam data over the course of several months to arrive at the site-specific value of A. The end result, the constant of ambient ground motion specific to the RHIC location, was calculated to be around 3x10^-11 mm^2/(m*s). Knowing this value will allow for more accurate predictions of future beam deviation, and facilitate the proper application of correcting dipole magnets to counteract these effects.
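
A minimal sketch of such a simulation is given below (Python). The lattice values (tune, beta functions, quadrupole strengths) are toy numbers, not the RHIC optics; only the structure of the calculation, ATL-rule offsets fed into the standard closed-orbit response, follows the description above.

import numpy as np

rng = np.random.default_rng(0)
A = 3e-11              # mm^2/(m*s), the fitted site constant
T = 30 * 86400.0       # elapsed time, s
nu = 28.68             # toy betatron tune

# Toy lattice: 100 quadrupoles with illustrative optics values.
s = np.linspace(0.0, 3834.0, 100)      # positions along the ring, m
beta = np.full(s.size, 30.0)           # beta function, m
phi = 2 * np.pi * nu * s / s[-1]       # crude phase advance
kl = np.full(s.size, 0.05)             # integrated quad strength, 1/m

# ATL rule realized as a random walk in s: variance of each step = A*T*ds.
steps = rng.normal(0.0, np.sqrt(A * T * np.diff(s, prepend=s[0])))
dy = np.cumsum(steps)                  # quad offsets, mm

# Closed-orbit shift at a monitor (index 0) from the resulting quad kicks.
kick = kl * dy                         # mrad for offsets in mm
x_co = np.sum(np.sqrt(beta[0] * beta) / (2 * np.sin(np.pi * nu))
              * np.cos(np.abs(phi[0] - phi) - np.pi * nu) * kick)
print("orbit deviation at monitor:", x_co, "mm")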

Analysis of Creep in Polyvinyl Chloride for the NOvA Detector. CHRISTINE MIDDLETON (Wesleyan University, Middletown, CT, 06459) HANS JOSTLEIN (Fermi National Accelerator Laboratory, Batavia, IL, 60510)

This analysis is an attempt to predict creep in the NOvA detector, a proposed electron neutrino detector for the NuMI beamline at Fermilab. The NOvA detector is constructed of large PVC extrusions which contain 25 ktons of scintillating oil. Due to the scale of this detector and the proposed experiment length of 20 years, creep in the structural PVC of the detector is a concern. Creep data was taken on 18 samples over 188 days. Stress levels ranged from 500 PSI to 2100 PSI, with 2 samples being tested at each stress. A variety of models and fit functions from the literature were used; however, a best-fit function was not readily apparent. Although the data could be fit reasonably well, these functions were unable to give a reasonable prediction for the creep after 20 years. In order to improve these results, more data is needed that represents the secondary and tertiary creep stages. We anticipate being able to study this behavior using accelerated high-temperature creep tests.

Analysis of Off-Nuclear X-Ray Sources in Galaxy NGC 4945. SARAH HARRISON (Massachusetts Institute of Technology, Cambridge, MA, 02139) GRZEGORZ MADEJSKI (Stanford Linear Accelerator Center, Stanford, CA, 94025)

Recently, X-ray astronomy has been used to investigate objects such as galaxies, clusters of galaxies, Active Galactic Nuclei (AGN), quasars, starburst superbubbles of hot gas, X-ray binary systems, stars, supernova remnants, and interstellar and intergalactic material. By studying the X-ray emission patterns of these objects, we can gain a greater understanding of their structure and evolution. We analyze X-ray emission from the galaxy NGC 4945 using data taken by the Chandra X-ray Observatory. The Chandra Interactive Analysis of Observations (CIAO) software package was used to extract and fit energy spectra and to extract light curves for the brightest off-nuclear sources in two different observations of NGC 4945 (January 2000 and May 2004). A majority of sources were closely fit by both absorbed power-law and absorbed bremsstrahlung models, with a significantly poorer χ2/dof for the absorbed blackbody model, and most sources had little variability. This indicates that the sources are accreting binary systems with either a neutron star or black hole as the compact object. The calculated luminosities were about 10^38 erg/s, which implies that the mass of the accreting object is close to 10 solar masses and must be a black hole.

Analysis of the Properties of Particles Emerging from Deep Inelastic Scattering off a Range of Nuclei. SERERES JOHNSTON (Andrews University, Berrien Springs, MI, 49103) KAWTAR HAFIDI (Argonne National Laboratory, Argonne, IL, 60439)

Hadronization, the process by which a struck quark evolves into a hadron, is not well understood in the nuclear medium. Experiments done with medium-energy electron beams and multiple nuclear targets can investigate hadronization at nuclear scales. Understanding this process would provide insight into the confinement property of the nuclear strong force. The data collected by the E02-104 Nuclear Semi-Inclusive Deep Inelastic Scattering experiment, performed at Jefferson Laboratory with a 5 GeV electron beam, can be used to characterize hadronization as a function of multiple variables. E02-104 ran with several solid targets of differing nuclear radii, and the data taken are sensitive to early hadronization processes. Programs were written which compared the hadron attenuation and transverse momentum broadening in the three nuclear targets: carbon, iron, and lead. Greater attenuation is observed in large nuclei. Hadron attenuation is described by the multiplicity ratio, R_M^h, which is a multivariable function. The high statistics of the E02-104 data allowed its dependence on four different variables to be examined in detail. The quark energy loss indicated by the transverse momentum broadening is seen to increase with the square of the nuclear distance traveled. This agrees with QCD predictions based on quark energy loss through gluon radiation. There is also some evidence that the transverse momentum broadening approaches a limit in larger nuclei. Not enough nuclear targets were examined for this last result to be definitive.
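
A minimal sketch of the multiplicity-ratio observable (Python); the normalization to a light reference target, taken here to be deuterium, and the count values are assumptions for illustration.

def multiplicity_ratio(n_h_A, n_e_A, n_h_ref, n_e_ref):
    # R_M^h: hadrons per DIS electron on nucleus A, normalized to the
    # same ratio on a light reference target (deuterium assumed here).
    return (n_h_A / n_e_A) / (n_h_ref / n_e_ref)

# Hypothetical counts for an iron target versus the reference.
print(multiplicity_ratio(1.2e4, 5.0e5, 2.1e4, 7.0e5))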

Analysis Strategy of Powder Diffraction Data with a 2-D Detector. ABHIK KUMAR (Austin College, Sherman, TX, 75090) APURVA MEHTA (Stanford Linear Accelerator Center, Stanford, CA, 94025)

To gain a clearer understanding of orientation and grain deformation of crystalline materials, x-ray powder diffraction has played an integral role in extracting three-dimensional structural information from one-dimensional diffraction patterns. The geometry of powder diffraction is identical to that of the intersection of a right circular cone with a plane. The purpose of this paper is to develop a general expression defining the conic sections based on the geometry of a powder diffraction experiment. Applying the derived formulation of a diffraction arc to experimental data will give insight into the molecular and structural properties of the sample in question. Instead of using complex three-dimensional Euclidean geometry, we define the problem-solving technique with a simpler two-dimensional transformation approach to arrive at a final equation describing the conic sections. Using the diffraction geometry parameters, we can use this equation to calibrate the diffractometer from the diffraction pattern of a known reference material, or to determine the crystalline lattice structure of the compound.

Analytical Data Acquisition via Radar of Ionization Electrons in Cosmic-ray Extensive Air Showers. JEREMY MARTIN (FAMU-FSU College of Engineering, Tallahassee, FL, 32310) HELIO TAKAI (Brookhaven National Laboratory, Upton, NY, 11973)

Ultrahigh-energy cosmic rays initiate showers of high-energy, electrically charged particles upon interaction with the atmosphere of the earth. Evidence of radio wave reflection is collected from ultrahigh-energy cosmic-ray (UHECR) induced extensive air showers (EAS) of high-energy electrically charged particles when entering the stratosphere. Optimal detection of UHECRs would further the understanding of these high-energy particles and their celestial origins. The challenge is using a type of radar detection to separate the radio signals reflected by EAS from radio waves reflected from other sources (e.g., clouds, meteor trails, aircraft, emissions from lightning). This requires an approach involving Fourier transforms, power spectrum analysis, and other series evaluation techniques to discriminate between the EAS-reflected waves, which contain data from high-ionization particle interactions, and oscillations that are irrelevant to this research. To distinguish EAS radio waves from other atmospheric sources of radio waves, a set of software tools was developed. Using this data acquisition software, recorded radio waveforms can be processed to isolate the power spectrum of the EAS waveforms and to identify these particular signatures more efficiently. By analyzing the frequencies of these signals, the intention is to later demonstrate a correlation between radio waves occurring in specific VHF radio frequencies and cosmic-ray events detected by other means. This research is part of a process whose goal is to develop a basis for further study of UHECRs within the Mixed Apparatus for Radar Investigation of Cosmic-rays of High Ionizations (MARIACHI) project, which seeks to develop radar detection techniques for related studies of high-energy physics.
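
As a minimal sketch of the power-spectrum step (Python, on a synthetic trace; the sampling rate, echo frequency, and windowing choice are illustrative assumptions, not the MARIACHI configuration):

import numpy as np

def power_spectrum(trace, fs):
    # One-sided power spectrum of a sampled radio trace; fs in Hz.
    windowed = trace * np.hanning(trace.size)
    freqs = np.fft.rfftfreq(trace.size, d=1.0 / fs)
    return freqs, np.abs(np.fft.rfft(windowed)) ** 2

# Synthetic trace: noise plus a brief 60 MHz echo, sampled at 200 MS/s.
fs = 200e6
t = np.arange(4096) / fs
trace = np.random.default_rng(1).normal(0.0, 1.0, t.size)
trace[1000:1200] += 3.0 * np.sin(2 * np.pi * 60e6 * t[1000:1200])
freqs, p = power_spectrum(trace, fs)
print("strongest bin near", freqs[np.argmax(p)] / 1e6, "MHz")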

Calculation of Particle Bounce and Transit Times on General Geometry Flux Surfaces. DOUGLAS SWANSON (Yale University, New Haven, CT, 06520) DR. JONATHAN MENARD (Princeton Plasma Physics Laboratory, Princeton, NJ, 08543)

Minimizing magnetohydrodynamic (MHD) instabilities is essential to maximizing the plasma pressure and the fusion power output from toroidal plasmas. One such instability is the resistive wall mode (RWM). Plasma rotation above a critical frequency has been observed to stabilize the RWM. The critical frequency is predicted in some theories to depend strongly on the characteristic bounce and transit times that particles take to complete orbits. Bounce times are orbit times for particles with large magnetic moments that are trapped poloidally in banana orbits. Transit times are orbit times for particles with small magnetic moments that are able to complete full poloidal circuits around the plasma. Previous calculations of these bounce and transit times have assumed high aspect ratio and circular flux surfaces, approximations unsuitable for the National Spherical Torus Experiment (NSTX). Analytic solutions for the bounce and transit times were derived as functions of particle energy and magnetic moment for low aspect ratio and elliptical flux surfaces. Numeric solutions for arbitrary aspect ratio and flux surface geometry were also computed using Mathematica and IDL and agree with the analytic forms. The solutions were found to scale as the elongation at low aspect ratio, and as the square root of the elongation at high aspect ratio. For typical values of the parameters the bounce and transit times can differ from the high aspect ratio, circular results by as much as 40%. Analytic transformations to map the high aspect ratio, circular solutions into the general geometry solutions are being investigated. Such transformations could be easily incorporated into existing stability codes such as MARS to refine models of RWM rotational stabilization.

Calibration of Small Pb-Glass Photomultiplier Cells in the FPD++ (Forward Pion Detector). SHAWN PEREZ (State University of New York at Stony Brook, Stony Brook, NY, 11790) LESLIE BLAND (Brookhaven National Laboratory, Upton, NY, 11973)

The FPD++ (Forward Pion Detector) at Brookhaven National Lab consists of two matrices of Pb-glass bars viewed by photomultiplier tubes that are positioned left and right of the colliding beam axis. These detectors are used to explore transverse single-spin asymmetries through analysis of forward pion production and its corresponding jet shape. In order to extract information from polarized proton-proton collisions, the FPD++ had to be calibrated cell by cell. Reconstructed di-photon invariant mass is associated with the highest-energy cell in the inner matrix. Distributions of high-tower invariant mass are fit by a Gamma function to describe background in the detectors and a Gaussian to describe the π0 peak. The absolute gain of each tower is then varied until the π0 peak is centered at its known position of 0.135 GeV/c^2. Once the relative gain correction factors of each iteration performed have converged, the cells have been calibrated. Currently the small cells of the FPD++ are calibrated within an accuracy of 2%, while the large cells still need to be calibrated. Comparing the summed energy spectra of polarized up and down proton collisions in the west-north and west-south modules of the FPD++ will reveal more information about transverse single-spin asymmetries and possibly the relative contributions from the Collins and Sivers effects toward these asymmetries observed in forward pion production. The Collins and Sivers effects are theoretical models developed to explain transverse single-spin asymmetries, dependent on spin- and transverse-momentum-dependent distribution functions or fragmentation functions. Analyzing the pseudorapidity (η = -ln(tan(θ/2))) dependence of particle production will explore parton distributions within the proton.

CCD Quantum Efficiency Characterization for LSST. XIAOQIAN ZHANG (Cornell University, Ithaca, NY, 14853) JAMES S. FRANK (Brookhaven National Laboratory, Upton, NY, 11973)

The optical performance of charge-coupled devices (CCDs), the fundamental units used for digital cameras, can be characterized by their quantum efficiency. The Large Synoptic Survey Telescope (LSST) project, an ongoing project aiming for completion in 2013, needs a high-efficiency digital camera with 3.2 Giga-pixels of CCDs for its acquisition of astronomical images. The CCDs used in this camera need to attain nearly 50% quantum efficiency in the near-infrared (1000 nm) while operating in a vacuum Dewar at a temperature of 173 K. To test this requirement, instrumentation of a device that measures quantum efficiency of manufactured CCDs is under development at Brookhaven National Laboratory (BNL), based partially on similar instrumentation developed in 2004 and 2005 at Lawrence Berkeley National Laboratory (LBNL). This device consists of multiple light sources, a shutter, several filters, a coupled monochromator, a 12-inch diameter integrating sphere, a black box, and a Dewar where the cooled CCD is to be placed. The equipment is assembled in the order listed above so that the monochromator first selects the desired wavelength of light emitted from the light source. This beam is then monitored in the integrating sphere, and made uniform through the black box before it reaches the CCD in the Dewar. Picoammeters and photodiodes are placed in several locations for light intensity measurements. After examining the light leakage of the integrating sphere, calibration of the coupled monochromator was performed using a Hg light source. Properties of gratings, filters, and slits were studied by comparing measured spectral lines of Hg and Xe light sources. A LabVIEW program was developed and used to operate the assembled devices and take readings from the photodiodes. These programs, devices, and data will be used to measure the quantum efficiency as a function of wavelength in CCDs currently under development for LSST.
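
A minimal sketch of the quantum-efficiency arithmetic implied by this setup (Python): the photon rate is inferred from a calibrated photodiode, geometric throughput factors are assumed to be unity, and all readings are illustrative.

def quantum_efficiency(ccd_electrons_per_s, diode_current_A, diode_qe):
    # Photon rate inferred from a calibrated photodiode of known QE;
    # any sphere/window throughput correction is folded in as 1 here.
    e_charge = 1.602e-19  # C
    photon_rate = diode_current_A / (e_charge * diode_qe)
    return ccd_electrons_per_s / photon_rate

# Hypothetical readings at 1000 nm.
print(quantum_efficiency(3.1e9, 1.0e-9, diode_qe=0.75))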

Centrality Determination in Heavy Ion Collisions for the Pioneering High Energy Nuclear Interaction eXperiment (PHENIX) at the Relativistic Heavy Ion Collider (RHIC). ELI LANSEY (Yeshiva University, New York, NY, 10033) ALEXANDER MILOV (Brookhaven National Laboratory, Upton, NY, 11973)

In the physics of Relativistic Heavy Ions (RHI), the centrality-related parameters (such as the number of participating nucleons or the number of binary collisions between nucleons) are essential characteristics of the collisions. The majority of publications from all four RHIC experiments related to RHI physics present their results as functions of one or more centrality-related parameters. A precise determination of centrality is therefore critical to understanding most RHI results. The distribution of the number of participating nucleons can be obtained with the commonly used Glauber model. In the PHENIX experiment, this distribution is related to the number of particle hits in the Beam-Beam Counters via the statistics of the Negative Binomial Distribution (NBD). These properties allow us to achieve two principal goals: to validate the commonly used theoretical model and to establish an accurate relationship between the observable quantity (number of hits) and the number of participating nucleons. Using the data collected during the full-energy (200 GeV) Au+Au Run4 of the PHENIX experiment, we studied the parameters of the NBD, their systematic dependencies, and the accuracy to which they can be determined. The work is done using the MINUIT minimization tool in the ROOT environment. This work will contribute to future analysis by many members of the PHENIX collaboration, yielding better measurements of the centrality-related parameters.
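
For reference, the NBD in the (mean, shape) parameterization used for multiplicity distributions maps onto a standard library form as sketched below (Python); the parameter values are toy numbers, not the fitted Run4 results.

from scipy.stats import nbinom

def nbd_pmf(n_hits, mu, k):
    # NBD with mean mu and shape k has variance mu*(1 + mu/k);
    # scipy's nbinom(k, p) matches with p = k/(k + mu).
    return nbinom.pmf(n_hits, k, k / (k + mu))

# Toy example: probability of 40 Beam-Beam Counter hits for mu = 35, k = 2.
print(nbd_pmf(40, mu=35.0, k=2.0))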

Characterization of an Electromagnetic Calorimeter for the Proposed International Linear Collider. MERIDETH FREY (Wellesley College, Wellesley, MA, 02481) NORMAN GRAF (Stanford Linear Accelerator Center, Stanford, CA, 94025)

The International Linear Collider (ILC) is part of a new generation of accelerators enabling physicists to gain a deeper understanding of the fundamental components of the universe. The proposed ILC will accelerate positrons and electrons towards each other with two facing linear accelerators, each twenty kilometers long. Designing and planning for the future accelerator has been undertaken as a global collaboration, with groups working on several possible detectors to be used at the ILC. The following research at the Stanford Linear Accelerator Center (SLAC) pertained to the design of an electromagnetic calorimeter. The energy and spatial resolution of the calorimeter were tested by using computer simulations of proposed detectors. In order to optimize this accuracy, different designs of the electromagnetic calorimeter were investigated along with various methods to analyze the data from the simulated detector. A low-cost calorimeter design was found to provide energy resolution comparable to more expensive designs, and new clustering algorithms offered better spatial resolution. Energy distribution and shape characteristics of electromagnetic showers were also identified to differentiate various showers in the calorimeter. With further research, a well-designed detector will enable the ILC to observe new realms of physics.

Characterization of Long Cosmic Ray Muon Tracks in the IceCube Detector at the South Pole. DANIEL HART (Southern University, Baton Rouge, LA, 70813) AZRIEL GOLDSCHMIDT (Lawrence Berkeley National Laboratory, Berkeley, CA, 94720)

IceCube is a high-energy neutrino astronomy experiment. IceCube uses an array of optical modules to detect the faint Cherenkov light produced by muons, which are the result of neutrino interactions with matter. Using the information received from data acquisition systems at the South Pole, software was developed in the C language to read these data and use them to produce possible muon paths. Events were filtered by placing cuts that kept calculated paths that passed through the full geometry of IceCube, had velocities within 5% of the speed of light, and were of low multiplicity. This resulted in path-distance distributions showing an exponential decay, with distance, in the number of modules receiving light. The probability curves produced followed the same trend. However, the distance distributions were not as smooth as would be expected, and the selection of paths to be considered as neutrino candidates behaved similarly. These impurities are interpreted as the integration of multiple muons in a single event. In further studies, the plan is not only to add in more data from additional days, but also to employ more sophisticated methods for separating true events from those produced by multiple muons.

Characterization of Sub-diffusion within Benard-Rayleigh Advective Cells by Examination of a Velocity Field with Additive Noise. MARSHA LAROSEE (University of Michigan, Dearborn, MI, 48128) BEN CARRERAS (Oak Ridge National Laboratory, Oak Ridge, TN, 37831)

Normal diffusion, worked out by Einstein and Taylor, is modeled by averaged particle 'Brownian motion' such that a given particle's motion is determined by random collisions with surrounding particles. Less well understood is the subject of anomalous diffusion, which is studied in many fields where diffusion influences the system (e.g., heat, fluids, chemical kinetics). The distinction between normal diffusion, a random mechanism, and anomalous diffusion, a mixture of random and deterministic processes, is the time scale at which the transport occurs. Both normal and anomalous diffusion follow a power-law relation <r^s>^(1/s) ~ t^q(s), where q(s) = 1/2 for diffusion, q(s) < 1/2 for sub-diffusion, and q(s) > 1/2 for super-diffusion. Thus, sub-diffusion and super-diffusion scale with time differently than random motion predicts. In order to study sub-diffusion, a deterministic model must be used while adding randomness, or noise, to the system. A model referred to as the random walk with pauses, or trapping events, was investigated in order to characterize sub-diffusion in a fluid system. The system that was studied is an array of Benard-Rayleigh advective cells where the velocity fields cause 10,000 tracer particles to circulate within a cell. Noise added to the velocity field causes diffusion between cells. Moments of the displacement were calculated as a function of time while varying the frequency and magnitude of noise in order to magnify the region where sub-diffusion is observed. Increasing the frequency of the additive noise extended the time frame in which sub-diffusion was observed, and appears to extend it non-linearly. Moments of the displacement show that the diffusive exponent q(s) is the same for all higher moments, which indicates scale invariance, or q(s) = constant. This property is characteristic of both anomalous and normal diffusion. The observed exponent, q(s) ~ 0.4, was larger than the typical exponent of sub-diffusive systems, q(s) ~ 1/3. The reason for this is undetermined but may indicate an influence of normal diffusion within the system, and future investigation is planned.
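
A minimal sketch of a random walk with trapping events (Python): heavy-tailed pauses are one standard way to produce sub-diffusion, here drawn from a Pareto law with exponent alpha, for which q(2) is expected near alpha/2. All parameters are illustrative; this is not the Benard-Rayleigh model above.

import numpy as np

rng = np.random.default_rng(0)
alpha, n_walkers, t_max = 0.6, 500, 1.0e3
grid = np.logspace(0, 3, 30)          # observation times
msd = np.zeros_like(grid)

for _ in range(n_walkers):
    t, x, i = 0.0, 0.0, 0
    pos = np.zeros_like(grid)
    while t < t_max and i < grid.size:
        t += 1.0 + rng.pareto(alpha)  # heavy-tailed pause (trapping event)
        while i < grid.size and grid[i] < t:
            pos[i] = x                # position is frozen while trapped
            i += 1
        x += rng.normal()             # unit random jump after the pause
    msd += pos ** 2

msd /= n_walkers
q2 = np.polyfit(np.log(grid[10:]), np.log(np.sqrt(msd[10:])), 1)[0]
print("q(2) =", q2)                   # well below 1/2: sub-diffusive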

Characterizing the Charge Collection of the 0.13 µm IBMPIX Prototype Pixel Detector. ANJALI TRIPATHI (Massachusetts Institute of Technology, Cambridge, MA, 02139) RONALD LIPTON (Fermi National Accelerator Laboratory, Batavia, IL, 60510)

Central to collider physics experiments are silicon detectors, which track the trajectory of particles produced during collisions. With the drive for ever more precise resolution in particle tracking, a prototype pixel detector (IBMPIX) manufactured by IBM in a 0.13 µm RF CMOS process, was investigated to understand its charge collection and individual pixel behavior. As a Monolithic Active Pixel Sensor, it was comprised of arrays of pixels (10µm by 150µm) divided into three diode types - a standard N-well, a deep (or triple) N-well, and a control without a diode. To measure the pixel to pixel variations from the readout electronics, a signal generator pulsed charge directly into the analog circuitry, bypassing the diodes, at different threshold settings. Upon characterizing the response of the readout electronics for each pixel, a pulsed 1.06 µm Nd:YAG laser was used to determine the relationship between the input charge and the output pulse width. This pulse width was the amount of time that the input charge was above a set threshold. With the data from both the laser and the signal generator, a mathematical model was made for charge diffusion across the chip. From the signal generator tests, the readout electronics showed significant pixel to pixel variation. This variation was nearly proportional to the threshold current setting. Additionally, testing of the diodes yielded a precise equation relating pulse-width to charge. For an idealized laser beam of zero width, a diffusion length of approximately 80 microns was determined. The source of the pixel to pixel variation can be attributed to gain variations due to the fabrication. Further studies should employ a laser with a spot size contained within one pixel, include a diffusion model incorporating variable beam width, and use an additional current source to set the threshold value.

Comparison of Non-Redundant Array and Double Pinhole Coherence Measurements with Soft X-rays. GABRIEL WEIL (Northwestern University, Evanston, IL, 60201) JAN LUNING (Stanford Linear Accelerator Center, Stanford, CA, 94025)

Experiments on the future Linac Coherent Light Source (LCLS) and other Free Electron Lasers will need to be performed on a single-shot basis. The double pinhole method of measuring spatial coherence requires a separate measurement, with a different pinhole separation distance, for each length scale sampled. This limits its utility for LCLS. A potential alternative uses a Non-Redundant Array (NRA) of apertures designed to probe the coherence over the range of length scales defined by their physical extent, in a single measurement. This approach was tested by comparing diffraction patterns from soft x-rays incident on double pinhole and NRA absorption mask structures. The double pinhole fringe visibility data serve as discrete reference points that verify the continuous spectrum of the NRA coherence data. The results present a quantitative analysis of the double pinhole coherence measurements and a qualitative comparison to the NRA images.
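
As a minimal sketch of the fringe-visibility quantity underlying the double-pinhole measurement (Python, on a synthetic fringe pattern; a real analysis would account for the diffraction envelope rather than take global extrema):

import numpy as np

def fringe_visibility(intensity):
    # V = (Imax - Imin) / (Imax + Imin), a proxy for the degree of
    # coherence at the pinhole separation.
    i_max, i_min = intensity.max(), intensity.min()
    return (i_max - i_min) / (i_max + i_min)

# Synthetic double-pinhole fringes with 70% modulation.
x = np.linspace(-1.0, 1.0, 1000)
pattern = 1.0 + 0.7 * np.cos(40.0 * np.pi * x)
print(fringe_visibility(pattern))   # ~0.7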

Conceptual Overview of the NPDGamma Experiment. JOSEPH JANOSIK (University of Dayton, Dayton, OH, 45469) W. SCOTT WILBURN (Los Alamos National Laboratory, Los Alamos, NM, 87545)

The strength of the hadronic weak interaction is not yet well determined. The NPDGamma experiment at the Los Alamos Neutron Science Center will soon be taking data that will place bounds on the hadronic weak coupling constant h_π^1. This experiment uses polarized cold neutrons which capture on a liquid hydrogen (proton) target and emit gamma rays whose directional dependence on the spin of the incident neutrons arises only from the weak force. Thus, a measured asymmetry in gamma-ray detection can be used to extract the weak coupling. The asymmetry is expected to be very small, around 5 x 10^-8, so careful experimental design and construction have been executed to ensure the accuracy of this measurement. This document provides a conceptual overview of the workings of this experiment.
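
A minimal sketch of the asymmetry extraction (Python); the counts are toy numbers, and the raw value would in practice be corrected for beam polarization, detector geometry, and backgrounds.

def raw_asymmetry(n_up, n_down):
    # Gamma-ray yields for the two neutron spin states.
    return (n_up - n_down) / (n_up + n_down)

# Toy counts carrying a 5e-8 asymmetry; since sigma_A ~ 1/sqrt(N),
# resolving it at one standard deviation takes roughly (1/5e-8)^2 ~ 4e14
# detected gamma rays.
print(raw_asymmetry(1_000_000_050, 999_999_950))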

Construction and Commissioning of a Micro-Mott Polarimeter for Photocathode Research and Development. APRIL COOK (Monmouth College, Monmouth, IL, 61462) MARCY STUTZMAN (Thomas Jefferson National Accelerator Facility, Newport News, VA, 23606)

Thomas Jefferson National Accelerator Facility uses polarized electrons to further the understanding of the atomic nucleus. The polarized source produces electrons by directing laser light onto a specially prepared gallium arsenide (GaAs) photocathode. During the course of this project, an off-beamline micro-Mott polarimeter has been built and commissioned within the Source Lab for photocathode research and development. A polarimeter measures the polarization, or spin direction, of electrons. The micro-Mott runs at 30 keV and can be used directly in the Source Lab, off the main accelerator beamline. Construction of the Mott system began with a polarized source, which consists of a vacuum chamber complete with a cesiator and nitrogen trifluoride (NF3) to activate the photocathode, a residual gas analyzer (RGA), ultra-high-vacuum pumps, an electrostatic deflector to bend the electron beam 90 degrees, and electrostatic lenses. The polarimeter is housed in an adjacent vacuum chamber. The circularly polarized laser light enters the polarized source, hits the GaAs photocathode, and liberates polarized electrons. The original longitudinally polarized electrons are transformed into transversely polarized electrons by the electrostatic bend. They are then directed onto a gold target inside the Mott and scattered for data analysis. The polarized source has been commissioned, achieving photoemission from the activated GaAs crystal, and the electrostatic optics have been tuned to direct the electrons onto the gold target. Nearly ten percent of the electrons from the photocathode reach the target, giving adequate current for polarization measurement. The micro-Mott polarimeter will aid in photocathode research and pre-qualification of material for use in the injector.
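
For reference, Mott polarimetry infers the transverse polarization from the left-right scattering asymmetry; a minimal sketch (Python) with toy counts and an assumed effective Sherman function for the gold target:

def mott_polarization(n_left, n_right, s_eff):
    # P = A / S_eff, where A is the left-right count asymmetry and S_eff
    # is the effective Sherman function (an apparatus calibration constant).
    a = (n_left - n_right) / (n_left + n_right)
    return a / s_eff

# Toy counts and an assumed S_eff of 0.20.
print(mott_polarization(12500, 9800, s_eff=0.20))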

Construction of the La-Bi-O Phase Equilibria: The Search for Inorganic Scintillators. STEVEN VILAYVONG (North Carolina A&T State University, Greensboro, NC, 27411) YETTA PORTER-CHAPMAN (Lawrence Berkeley National Laboratory, Berkeley, CA, 94720)

Scintillators have been known for a long time; however, the demand for scintillators has risen sharply since World War II. This need for viable scintillators to be used in the detection of ionizing radiation has spurred research around the world. The New Detector Group of the Department of Functional Imaging in the Life Sciences Division of Lawrence Berkeley National Laboratory conducts systematic searches of various compounds to find the most effective scintillators. Much of their work focuses on compounds that contain bismuth(III) and lanthanum(III) ions. Bismuth(III) ions can be luminescent, sometimes providing intrinsic scintillation as seen in the commonly used scintillator Bi4Ge3O12 (BGO). Phases containing lanthanum(III) ions are also investigated in order to utilize their sites for doping with cerium(III), another luminescent ion. In this work, various molar ratios of La2O3 and Bi2O3 were reacted by solid-state chemistry techniques to find phases that may exhibit scintillation. To determine whether a phase is a good scintillator, it must be characterized by x-ray diffraction (XRD), fluorescence spectroscopy, and pulsed x-ray measurements. Four La-Bi-O phases (La0.176Bi0.824O1.5, La0.12Bi1.88O3, La4Bi2O9, and an unknown phase) were found; however, none are good scintillators.

Design, Fabrication and Measurement of Nb/Si Multilayers and Nb Transmission Filters. SUNEIDY LEMOS FIGUEREO (University of Puerto Rico, Rio Piedras, PR, 00979) DAVID ATTWOOD (Lawrence Berkeley National Laboratory, Berkeley, CA, 94720)

The extreme ultraviolet (EUV) region of the electromagnetic spectrum is being used in multilayer optical systems to design technology projected for use in the fabrication of nano-electronics. Multilayer optical systems with high reflectivity have been produced in the soft x-ray and EUV regions of the spectrum. Due to the limited understanding of Nb/Si optical systems, our research group fabricated and measured Nb/Si multilayers and Nb transmission filters for the soft x-ray and EUV regions. Multilayer optical systems are used in applications ranging from EUV lithography to synchrotron radiation. The films were deposited using DC magnetron sputtering in the Center for X-Ray Optics at the Lawrence Berkeley National Laboratory. Reflectivity and transmission measurements were performed at the Advanced Light Source beamline 6.3.2. The Nb/Si multilayer mirrors fabricated have a reflectivity of approximately 65% in the extreme ultraviolet region, which makes these systems practical for applications where a high reflectivity is required, such as astronomy and instrumentation development. Transmission measurements of up to 90% were observed in the soft x-ray and EUV regions as well. Future work in the research group includes the design and fabrication of an Nb/Si multilayer with a B4C interface. The Nb/B4C/Si optical systems are expected to have a higher reflectivity than Nb/Si systems.

Detection of Ionizing Radiation Based on Metastable States of Polymer Dispersed Liquid Crystals. TIMOTHY PHUNG (University of California, Berkeley, Berkeley, CA, 94720) CARL HABER (Lawrence Berkeley National Laboratory, Berkeley, CA, 94720)

Polymer dispersed liquid crystals (PDLCs) may be a suitable medium for future tracking detectors used in particle physics. Such a detector will work in analogy with the bubble chamber by using the metastable states in LC materials. A PDLC cell is fabricated and the optical transmission of the cell is measured as a function of the voltage applied across the cell and the temperature. The optical transmission of the PDLC is found to be temperature dependent below the nematic-isotropic phase transition temperature when a field is applied across the cell as is reported in the literature. When no field is applied, the PDLC cell is strongly temperature dependent near the nematic-isotropic phase transition temperature, which also agrees with previous results. Future research in this area will focus on finding the metastable phenomena that exist near phase transitions of LC materials and on the use of electric fields to shift the transition temperature.

Determining Micromechanical Strain in Nitinol. MATTHEW STRASBERG (Cornell University, Ithaca, NY, 14850) APURVA MEHTA (Stanford Linear Accelerator Center, Stanford, CA, 94025)

Nitinol is a superelastic alloy made of equal parts nickel and titanium. Due to its unique shape-memory properties, nitinol is used to make medical stents, lifesaving devices used to restore blood flow in occluded arteries. Micromechanical models and even nitinol-specific finite element analysis (FEA) software are insufficient for unerringly predicting fatigue and resultant failure. Due to the sensitive nature of its application, a better understanding of nitinol on a granular scale is being pursued through X-ray diffraction techniques at the Stanford Synchrotron Radiation Laboratory (SSRL) at the Stanford Linear Accelerator Center (SLAC). Through analysis of powder diffraction patterns of nitinol under increasing tensile loads, localized strain can be calculated. We compare these results with micromechanical predictions in order to advance nitinol-relevant FEA tools. From this we hope to gain a greater understanding of how nitinol fatigues under multi-axial loads.
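
A minimal sketch of how localized strain follows from a diffraction-peak shift (Python): Bragg's law gives d = λ/(2 sin θ), so the wavelength cancels in the ratio; the peak positions below are illustrative.

import numpy as np

def lattice_strain(two_theta_deg, two_theta0_deg):
    # Strain of one (hkl) family from the Bragg-peak shift:
    # eps = d/d0 - 1 = sin(theta0)/sin(theta) - 1.
    d_ratio = (np.sin(np.radians(two_theta0_deg) / 2.0)
               / np.sin(np.radians(two_theta_deg) / 2.0))
    return d_ratio - 1.0

# Peak shifted to lower angle under tension -> positive (tensile) strain.
print(lattice_strain(21.95, 22.00))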

Development of Emittance Analysis Software for Ion Beam Characterization. MARIANO PADILLA (Fullerton College, Fullerton, CA, 92832) YUAN LIU (Oak Ridge National Laboratory, Oak Ridge, TN, 37831)

Transverse beam emittance is a crucial property which describes the angular and spatial spread of charged-particle beams. It is a figure of merit frequently used to determine the quality of ion beams, the compatibility of an ion beam with a given beam transport system, and the ability to suppress neighboring isotopes at on-line mass separator facilities. Generally a high-quality beam is characterized by a small emittance. In order to determine and improve the quality of ion beams used at the Holifield Radioactive Ion Beam Facility (HRIBF) for nuclear physics and nuclear astrophysics research, the emittances of the ion beams are measured at the off-line Ion Source Test Facilities. In this project, emittance analysis software was developed to perform various data processing tasks for noise reduction and to evaluate root-mean-square emittance, Twiss parameters, and area emittance of different beam fractions. The software also provides 2D and 3D graphical views of the emittance data, beam profiles, emittance contours, and ellipses. Noise exclusion is essential for accurate determination of beam emittance values. A Self-Consistent, Unbiased Elliptical Exclusion (SCUBEEx) method is employed. Numerical data analysis techniques such as interpolation and nonlinear fitting are also incorporated into the software. The software will provide a simplified and fast tool for comprehensive emittance analyses. The main functions of the software package have been completed. In preliminary tests with real experimental emittance data, the analysis results using the software were proven to be correct.
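
For reference, the rms emittance and Twiss parameters follow from second moments of the measured (x, x') distribution; a minimal sketch (Python) on synthetic data:

import numpy as np

def rms_emittance(x, xp):
    # eps_rms = sqrt(<x^2><x'^2> - <xx'>^2); beta = <x^2>/eps,
    # gamma = <x'^2>/eps, alpha = -<xx'>/eps (centered moments).
    x = x - x.mean()
    xp = xp - xp.mean()
    sxx, spp, sxp = (x * x).mean(), (xp * xp).mean(), (x * xp).mean()
    eps = np.sqrt(sxx * spp - sxp ** 2)
    return eps, -sxp / eps, sxx / eps, spp / eps  # eps, alpha, beta, gamma

# Synthetic correlated beam: x in mm, x' in mrad.
rng = np.random.default_rng(2)
x = rng.normal(0.0, 1.5, 10000)
xp = 0.3 * x + rng.normal(0.0, 0.4, 10000)
print(rms_emittance(x, xp))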

Development of Nanofluidic Cells for Ultrafast X-ray Studies of Water. MELVIN IRIZARRY (University of Puerto Rico, Mayaguez, PR, 00667) AARON LINDENBERG (Stanford Linear Accelerator Center, Stanford, CA, 94025)

In order to study the molecular structure and dynamics of liquid water with soft x-ray probes, samples with nanoscale dimensions are needed. This paper describes a simple method for preparing nanofluidic water cells in which a thin layer of water is confined between two silicon nitride windows. The windows are 1 mm × 1 mm and 0.5 mm × 0.5 mm in size and have a thickness of 150 nm. The thickness of the water layer was measured experimentally by probing the infrared spectrum of water in the cells with a Fourier transform infrared (FTIR) apparatus and from static soft x-ray measurements at the Advanced Light Source (ALS) at Lawrence Berkeley National Laboratory. Water layers ranging from 10 nm to more than 2 µm were observed. Evidence for changes in the water structure compared to bulk water is observed in the ultrathin cells.

Development of Powder Diffraction Analysis Tools for a Nanocrystalline Specimen: An Emphasis upon NiTi (Nitinol). ERICH OWENS (Albion College, Albion, MI, 49224) APURVA MEHTA (Stanford Linear Accelerator Center, Stanford, CA, 94025)

Powder diffraction is a specialized technique whose investigatory limits are set by the scale of the crystallites being probed relative to the probe beam. When the photon spot size is larger than the individual crystallites, many crystallites are illuminated at once, and the resulting diffraction image is cast from all possible incident angles, forming concentric diffraction arcs (segments of Debye-Scherrer rings) that contain information about the crystalline structure of the material under examination. Of particular interest to our collaboration is the structure of Nitinol, a superelastic nickel-titanium alloy with wide-ranging biomedical uses, whose phase transformations and load-bearing deformations can be studied by diffraction. Analysis of these data is complicated by phase transformation and material fluorescence, which make computational modeling of the peaks within the concentric arcs difficult. We constructed a series of computational tools (collectively known as 2DPeakFinder) for refining and extracting the relevant data, toward the end of employing previously developed algorithms in the material's structural analysis. We succeeded to a large degree, using an iterative algorithm to navigate the radial complexity of the signal while retaining the distinction between useful signal and superfluous background noise. The tools developed in this project are a small step toward streamlining the analysis and physical modeling of a nanocrystalline material's structural properties.
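
One common iterative background-estimation scheme is sketched below in Python, under the assumption that the 2D image has been reduced to a 1D radial or azimuthal profile; this is illustrative and not necessarily the 2DPeakFinder algorithm itself:

    import numpy as np

    def iterative_background(y, n_iter=30, window=21):
        """Estimate a slowly varying background under diffraction peaks by
        repeatedly clipping points that rise above a local smoothing."""
        bg = np.asarray(y, dtype=float).copy()
        kernel = np.ones(window) / window
        for _ in range(n_iter):
            smooth = np.convolve(bg, kernel, mode="same")
            bg = np.minimum(bg, smooth)   # pull peaks down toward the baseline
        return bg

    # peaks = y - iterative_background(y) isolates the diffraction signal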

Diffusion-Controlled Apparatuses for Microgravity. PRADEEP RAJENDRAN (Stanford University, Stanford, CA, 94305) DR. P. THIYAGARAJAN (Argonne National Laboratory, Argonne, IL, 60439)

Large photoactive yellow protein (PYP) crystals are being grown using diffusion-controlled apparatuses for microgravity (DCAMs) for proposed neutron crystallography experiments. The basis for this experiment is that a short strong hydrogen bond (SSHB) appears to play an important role in the function of PYP in the photocycle. In order to fully understand the structure and dynamics of the SSHB, it is necessary to accurately locate the nuclear position of the proton (or deuteron) with respect to the heavy atoms involved in the hydrogen bond. Previous attempts to grow PYP crystals using the batch and hanging-drop methods have had limited success: the resolution of diffraction data collected from crystals grown by these methods was between 0.95 Å and 1.40 Å. PYP crystallizes best in a concentration range of 2.3 M to 2.5 M. Two DCAM units have been set up. The units have concentrations of 1.6 M and 2.0 M (ND4)2SO4 in their small chambers, respectively, and a concentration of 3.0 M (ND4)2SO4 in the large chambers. As the higher-concentration solution in the large chamber equilibrates with the lower-concentration solution in the small chamber through the diffusion-control plug, the concentration increases within the "button" containing the PYP sample, causing crystallization to begin. Large PYP crystals have been grown following these procedures using DCAMs.

Eddy Current Non-destructive Inspection Using Giant Magnetoresistive Technology. STEVEN GARDNER (Brigham Young University - Idaho, Rexburg, ID, 83460) DENNIS C. KUNERTH, PH.D. (Idaho National Laboratory, Idaho Falls, ID, 83415)

A prototype eddy current probe utilizing giant magnetoresistive (GMR) technology was found to have the following benefits: (1) it functions at very low frequencies (DC to 30 kHz) and large sample depths; (2) the probe can be calibrated to reveal the thickness of an aluminum sample up to 10 mm thick, and much thicker samples in stainless steel and other less conductive metals; (3) the probe design uses a noise-reduction coil and shield system which increases the signal-to-noise ratio by nulling drive-coil noise. This probe would be ideal for testing the thickness of metals with low magnetic permeability. It is also effective at locating defects larger than 0.25 mm on the surface or within the depth of penetration, given by δ = 1/√(π f μ0 μr σ), where f is the drive frequency, μ0 is the permeability of free space, μr is the relative magnetic permeability, and σ is the electrical conductivity. Its limitations include: (1) it was unable to function at frequencies greater than 30 kHz; (2) it was not able to detect defects smaller than 0.25 mm, or surface defects which did not extend this same distance or more into the metal. Attempts were also made to operate the probe using pulse and linear-sweep drives; these attempts did not produce satisfactory results, largely because it was not possible to null the drive signal using the noise-reduction coil for either the pulse or the sweep drive method. The inability to resolve small defects and the limitations on the frequency range appear to be caused by the large coil geometry and the inductance change introduced by the iron shield.
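
The depth-of-penetration relationship above is straightforward to evaluate; the following sketch uses an assumed conductivity for aluminum to show the scale at a typical drive frequency:

    import numpy as np

    MU0 = 4e-7 * np.pi  # permeability of free space (H/m)

    def skin_depth(f_hz, sigma, mu_r=1.0):
        """Standard eddy current depth of penetration (m)."""
        return 1.0 / np.sqrt(np.pi * f_hz * MU0 * mu_r * sigma)

    # aluminum (sigma ~ 3.5e7 S/m) at 1 kHz: roughly 2.7 mm
    print(skin_depth(1e3, 3.5e7))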

Effect of Acid Agitation on Buffered Chemical Polishing of Niobium for Radiofrequency Cavities. SARA MOHON (The College of William and Mary, Williamsburg, VA, 23186) ANDY WU (Thomas Jefferson National Accelerator Facility, Newport News, VA, 23606)

The performance of niobium superconducting radiofrequency (SRF) cavities can be affected by the smoothness of their inner surfaces; smoother inner surfaces tend to result in better performance. Normally, smooth niobium surfaces are obtained by buffered chemical polishing (BCP), which is necessary to remove the damaged layer created during fabrication and machining of the cavities. Previous experiments conducted in the Surface Science Lab at Thomas Jefferson National Accelerator Facility have shown that the morphology of niobium surfaces may be altered via different agitation conditions during BCP. The focus of this research is a systematic study of the effect of agitation on the BCP treatment of niobium. Six samples of niobium, each one square centimeter in size, were prepared using a 1:1:2 BCP mixture for 75 minutes; a control sample with no BCP treatment was also analyzed. Each BCP-treated sample was agitated after a certain amount of time, varying from 0 to 75 minutes. After this treatment, the samples were examined with a 3D profilometer, from which quantitative information about surface morphology was extracted. Qualitative inspection of the surface of each sample was performed with a metallographic optical microscope (MOM). It was found that the surface roughness increased up to a certain asymptotic limit as the time interval between agitations increased, and that an unidentified green cloud-like material appeared above the inner surface area of the niobium samples when there was no agitation. The MOM photographs show evidence that the BCP mixture attacked the grain boundaries and defect locations more than it did elsewhere, making a BCP-treated surface rougher in comparison to some other polishing methods. The treated samples became smoother as the time interval between agitations decreased, although they never became as smooth as the control sample. Smoother surfaces were also found in areas where the green cloud formed than in areas where it was absent. A model is proposed to qualitatively explain the experimental results. Further investigation is warranted for different BCP ratios to see whether a smoother surface finish is possible and what agitation it requires. In addition, a BCP study of how larger-grain samples affect niobium surface morphology is promising for the improvement of SRF cavities. These endeavors and the experimental results are useful for the BCP treatment of niobium SRF cavities to be used in particle accelerators.

Effects of Tellurium Precipitates in CdZnTe (CZT) Radiation Detectors. KYLE KOHMAN (Kansas State University, Manhattan, KS, 66506) ALEKSEY BOLOTNIKOV (Brookhaven National Laboratory, Upton, NY, 11973)

CdZnTe (CZT) is a material with great potential for high-resolution detection of gamma and X-rays because of its high gamma-ray stopping power, durability, and ability to operate at room temperature in portable devices. It is well known that tellurium (Te) precipitates may affect the performance of CZT detectors; however, it is not specifically known how, or to what extent, these inclusions affect the spectral response. This study sought a quantitative correlation between the precipitate sizes and concentrations of CZT material and the spectral response of the material as a detector. Te precipitate sizes and concentrations were observed in CZT crystals using an infrared (IR) microscope coupled with a digital camera. A program written in Interactive Data Language (IDL) counted Te precipitates within sample images at different magnifications throughout the material and characterized each individual inclusion by size and location. The crystals were then made into Frisch-ring detectors by placing gold contacts on each side and wrapping them in Teflon tape and copper. Spectral responses of the detectors were measured by placing them into a pre-amplifier connected to standard nuclear instrumentation module components. The results indicate that detector response has a strong dependence on the concentration of Te precipitates larger than 5 µm. Precipitates of 1 µm were very close to the resolution limit of the IR system, so the accuracy was insufficient for quantitative measurements of correlations between device spectral responses, precipitate concentrations, and crystal thicknesses (an etch-pit technique would be more appropriate for measurements of such small precipitates). Nevertheless, based on these measurements, an upper limit can be given. Te precipitates with diameters
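
The counting step described above was written in IDL; a schematic Python analogue of the same idea (threshold the IR transmission image, label connected components, and convert areas to equivalent diameters) might look like this, with the threshold and pixel scale as assumed inputs:

    import numpy as np
    from scipy import ndimage

    def precipitate_diameters(image, threshold, um_per_pixel):
        """Label dark Te inclusions in an IR image and return
        an equivalent-circle diameter (in um) for each one."""
        mask = image < threshold                  # inclusions absorb IR light
        labels, n = ndimage.label(mask)           # connected-component labeling
        areas = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
        return 2.0 * np.sqrt(areas / np.pi) * um_per_pixel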

Effects of UV Light on a Fluorescent Dust Cloud. ENRIQUE MERINO (Ramapo College of New Jersey, Mahwah, NJ, 07430) ANDREW POST-ZWICKER (Princeton Plasma Physics Laboratory, Princeton, NJ, 08543)

Dusty plasma research has applications ranging from microchip fabrication to the study of planetary rings. Typically, the behavior of laboratory dusty plasmas is studied by laser scattering techniques. A new diagnostic technique developed at the Science Education Laboratory at the Princeton Plasma Physics Laboratory uses a 100 W ultra-violet (UV) light to illuminate a fluorescent organic dust cloud created in an argon DC glow-discharge plasma. By using the UV light, the fluorescent particles can be clearly seen during and after cloud formation and their 3D properties analyzed. This technique has been successfully used to study the formation and transport of the dust cloud. Observations have shown that after the dust cloud has formed, the UV light causes rotation of the edge of the cloud (~3 mm/s), while particles in the center of the cloud remain stable. Displacements of several millimeters upward and toward the UV light have also been recorded by modulating the UV light. Through the use of a Langmuir probe, changes in the charge of the plasma were recorded whenever UV light was introduced; these changes occurred both in the presence of a dust cloud and in a dust-free plasma. Although these experiments were helpful in demonstrating that UV light has an effect on the plasma, determining the effect of UV light on the dust particles themselves and proposing analytical models for the observed displacements are left as future work.

Energizing a Superconducting Solenoid for Applications in Precise Mass Measurements. DAVID DANAHER (Monmouth College, Monmouth, IL, 61462) GUY SAVARD (Argonne National Laboratory, Argonne, IL, 60439)

The precise mass measurement of ions is the main objective of the Canadian Penning Trap (CPT) collaboration at Argonne National Laboratory. The mass measurements require a beam from the Argonne Tandem Linear Accelerator System (ATLAS) to fuse with a target to create the desired reaction products. A new beamline has nearly been completed in Area II of ATLAS for measuring the masses of ions with large divergence angles, which diverge due to alpha decay and were previously difficult or nearly impossible to measure. A solenoid refocuses these ions before they are sent through the rest of the system and, consequently, measured. This component of the new beamline in Area II is a superconducting solenoid with a 68-centimeter inner bore and a central field of 1.5 Tesla, weighing over 10 tons with all of the necessary shielding and support in place. For the solenoid to produce a magnetic field, it must be energized and must continually carry a current of about 200 amperes within its coils. Energizing such a device is a subtle exercise: it requires creating a vacuum within the solenoid, pre-cooling the cryostat with liquid nitrogen, filling the cryostat with liquid helium, running a cryo-compressor to limit helium boil-off, and using well-regulated power sources to bring the solenoid to maximum field. Once carefully assembled, however, the components of the energization process may be used to safely energize and de-energize the system repeatedly. My project was to help locate all the components of the energization process, facilitate the interconnections between components, and, finally, to aid in powering the magnet.

Error Reduction in Polarization Measurement. DONALD JONES (Acadia University, Wolfville, NS) ROBERT MICHAELS (Thomas Jefferson National Accelerator Facility, Newport News, VA, 23606)

One of the greatest barriers to definitive conclusions in any scientific experiment is the accumulation of errors. Due to the precision required in parity-violation experiments, an effort has been made in Hall A at Jefferson Lab (JLab) to reduce the cumulative error by targeting specific sources. A particular focus has been placed on reducing the error in beam polarization measurements. Compton polarimetry is utilized at JLab because of its unique advantage of allowing polarization to be determined while an experiment is running, without interrupting the beam. Electrical signals produced by scattered photons and electrons are used to determine beam polarization. The helicity of the beam is reversed every 33 milliseconds (ms), and the asymmetry that arises from pulse measurements during consecutive 33 ms intervals is used to calculate beam polarization. While this asymmetry has in the past been formed by counting photons, electrons, or electron-photon coincidences, that method gives rise to many systematic errors, and new methods are being sought to calculate polarization more accurately. The focus of this research has been to determine whether signal integration can be used to effectively reduce the error to under the 1% level within a feasible time frame. Extensive tests have been done to determine the reliability of signal integration across the full 33 ms gate, in order to determine whether the background noise is too great for this technique to be useful. Because of difficulties encountered, and the lack of beam operation during the time this research was done, a pulse generator was used to simulate the electrical pulses that arise from electron scattering in the Compton polarimeter. The data from the pulse-generated asymmetry indicate that polarization can be accurately determined within three hours of beam operation; because experiments can last for days, this is a reasonable length of time. To ensure the reliability of this technique, the results were then verified using Monte Carlo simulations. The results of this research show that this method of beam polarization measurement has great promise of reducing the measurement error from the present 3% to below 1%. The method has yet to be tested during beam operation, but its success would enhance future parity-violation experiments.
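
The underlying arithmetic is compact: for integrated signals S+ and S- from consecutive helicity windows, the raw asymmetry divided by the analyzing power gives the polarization. The sketch below is illustrative, with a constant analyzing power as an assumption:

    import numpy as np

    def beam_polarization(s_plus, s_minus, analyzing_power):
        """Polarization from pairs of integrated Compton signals,
        P = <(S+ - S-)/(S+ + S-)> / A."""
        s_plus, s_minus = np.asarray(s_plus, float), np.asarray(s_minus, float)
        a_raw = (s_plus - s_minus) / (s_plus + s_minus)   # per-pair asymmetry
        p = a_raw.mean() / analyzing_power
        p_err = a_raw.std(ddof=1) / np.sqrt(a_raw.size) / analyzing_power
        return p, p_err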

Experimental Studies of Electrode Biased Compact Toroid Plasmas in the Magnetic Reconnection Experiment. ELIJAH MARTIN (North Carolina State University, Raleigh, NC, 27609) DR. MASAAKI YAMADA (Princeton Plasma Physics Laboratory, Princeton, NJ, 08543)

Compact toroid (CT) plasmas such as field-reversed configurations (FRCs) and spheromaks are known to exhibit a global instability known as the tilt mode, in which the magnetic moment of the CT tilts to align itself with the external magnetic field, as well as other non-rigid-body instabilities. Possible stabilizing mechanisms for these instabilities include external field shaping, nearby passive stabilizers, and plasma rotation. This research focuses on reducing the growth of the tilt instability by introducing toroidal rotation in spheromaks formed in the Magnetic Reconnection Experiment (MRX). Rotation is introduced by the use of interior and exterior electrodes; the result is a J_bias × B_internal torque on the CT plasma, which in turn leads to toroidal rotation. In order to power the bias electrode, a 450 V, 8800 µF capacitor bank capable of delivering up to 450 amperes was constructed, along with the required control and triggering circuitry. Solid-state switches allow fast turn-on and turn-off of J_bias. The bias current and the voltage drop across the electrodes are measured using a current shunt and a voltage divider, respectively, and the resulting flow in the CT plasma is measured with a Mach probe. Internal arrays of magnetic probes and optical diagnostics will be used to parameterize the performance of the CT plasma during bias. Construction and testing of all necessary components and diagnostics are complete, and preliminary experiments were designed so that the resistivity of the plasma could be determined. It was found that a typical CT plasma has a resistivity of 34.1 ± 3.6 ohm·m; a leaky-capacitor model of the CT plasma was applied to determine the resistivity theoretically, yielding 4.9 × 10^-3 ohm·m for the conditions of a typical CT plasma. The strong disagreement between the experimentally and theoretically determined values is hypothesized to be due to non-optimal control of CT plasmas formed in MRX. The focus of future research will be optimizing control of the CT plasma; agreement between the model and experiment will then be studied, along with experiments designed to induce toroidal rotation.

Formation and Transport of a Fluorescent Dust Cloud: a New Diagnostic Technique in Dusty Plasma Research. ENRIQUE MERINO (Ramapo College of New Jersey, Mahwah, NJ, 07430) ANDREW POST-ZWICKER (Princeton Plasma Physics Laboratory, Princeton, NJ, 08543)

Dusty plasma research has applications ranging from microchip fabrication to the study of planetary rings. Typically, the behavior of laboratory dusty plasmas is studied by laser scattering techniques, which give a 2D slice of the dust cloud, or by 3D particle image velocimetry (PIV) methods. Although these diagnostic techniques allow for the study of cloud formation, they require extensive computer processing or simulations to study the formation processes. A new diagnostic technique has been devised to study 3D behavior and cloud formation without the need for the complicated and usually expensive methods mentioned above. A fluorescent organic dust is used to create a cloud in an argon DC glow-discharge plasma, illuminated by a 100 W ultra-violet (UV) light. By using UV light, the fluorescent particles can be clearly seen during cloud formation and their 3D properties analyzed. One question remaining, however, is whether the UV light perturbs the plasma by changing the local charge balance or actively changes the dust cloud charge by photoelectric emission. In fact, initial observations show that particles in the back of the dust cloud experience a displacement toward the UV light, while particles in the front move away from it. Velocities on the order of 1.5 mm/s to 3 mm/s were recorded for different areas of the cloud. Future work will use a Langmuir probe to separate changes in plasma parameters from changes in dust charge.

G0++: Creating a User Interface for Fastbus Data. JONATHAN HOOD (University of Maryland, College Park, MD, 20742) TANJA HORN (Thomas Jefferson National Accelerator Facility, Newport News, VA, 23606)

The G0 collaboration at Thomas Jefferson National Accelerator Facility (Jefferson Lab) investigates the contribution of strange quarks to the fundamental properties of the nucleon. To do this, the G0 experiment sends a highly energetic electron beam into a target, such as liquid hydrogen. When the electrons in the beam collide with nucleons in the target, a spray of particles radiates from the collision point. This spray of particles travels across a magnetic field into a variety of particle detectors, resulting in large amounts of unanalyzed data. Data taken during the G0 experiment pertaining to the time of flight and signal amplitudes of particle trajectories are referred to as Fastbus data, which are organized with ROOT, a C++ framework designed for manipulating large numbers of data points. For the G0 experiment, Fastbus data provide a general evaluation of the performance of the detector and electronic systems; an efficient analysis of these data is therefore essential at the startup and during the running of an experiment. The goal of this project was to make the Fastbus data more accessible, especially to people who are not familiar with ROOT. To do this, a good understanding of the data acquisition and its analysis was required. By collaborating with scientists at Jefferson Lab, a design was constructed and implemented. Scientists need to look at the data in many different ways, and the program was designed to include a variety of tools that allow them to do so. In order to analyze the data quickly and effectively, a graphical user interface (GUI) was programmed to give users a convenient way to create, display, and manipulate graphs. The interface includes many tools that aid scientists on the G0 experiment, such as the ability to apply cuts to remove unwanted data points and the ability to fit the data with numerical functions. The program, titled "G0++: GoFast Goes Fast," will be used by G0 scientists, and eventually it could be used for general Hall C data analysis in a C++ framework.

Gain Mapping and Response Uniformity Testing of the Hamamatsu R8900 Multianode Photomultiplier Tube and the Burle Planacon Microchannel Plate Photomultiplier Tube for the Picosecond Timing Project. MELINDA MORANG (University of Chicago, Chicago, IL, 60637) KAREN BYRUM (Argonne National Laboratory, Argonne, IL, 60439)

Research is currently underway for the development of a microchannel plate photomultiplier tube with picosecond timing capabilities, a property that would be extremely useful in many fields of physics and in radiology. In a research and development study such as this one, it is important to fully understand the currently available technology; the study presented in this paper focuses on characterizing gain and response uniformity in the Hamamatsu R8900-00-M16 multianode photomultiplier tube (R8900) and the Burle Planacon 85011-501 microchannel plate photomultiplier tube (MCPPMT). The tubes were tested using a dark-box setup in which a moveable fiber carrying an LED pulse could be directed into each pixel of a tube. The gains for each pixel of ten R8900 tubes and for part of an MCPPMT were calculated, and a horizontal scan in 1 mm steps was performed across the breadth of the R8900 and across one quadrant of an MCPPMT. Plots of the signal output in the horizontal scans showed that the MCPPMT had much greater response uniformity across the tube. Due to time and equipment constraints, these results are only preliminary; more extensive study of uniformity is necessary, as is study of many other properties of these tubes.

GEANT Simulations of the Preshower Calorimeter for CLAS12 Upgrade of the Forward Electromagnetic Calorimeter. KRISTIN WHITLOW (University of Florida, Gainesville, FL, 32612) STEPAN STEPANYAN (Thomas Jefferson National Accelerator Facility, Newport News, VA, 23606)

Hall B at the Thomas Jefferson National Accelerator Facility uses the CEBAF (Continuous Electron Beam Accelerator Facility) Large Acceptance Spectrometer (CLAS) to study the structure of the nucleon. The accelerator is currently planning to upgrade from a 6 GeV beam to a 12 GeV beam. With the beam upgrade, more high-energy pions will be created from the interaction of the beam and the target. Above 6 GeV, the angle between the two decay photons of high-energy neutral pions becomes too small for the current electromagnetic calorimeter (EC) of CLAS to differentiate between two-photon clusters and single-photon events. Thus, a preshower calorimeter will be added in front of the EC to provide finer granularity and ensure better cluster separation for all CLAS experiments at higher energies. In order to optimize cost without compromising the calorimeter's performance, three versions of the preshower, varying in the number of scintillator and lead layers, were compared by their resolution and efficiency. Through the use of GSIM, a GEANT detector simulation program for CLAS, π0's and single photons were passed through the CLAS detector with the added preshower calorimeter (CLAS12 EC). The resolution of the CLAS12 EC was calculated from the Gaussian fit of the sampling fraction, the ratio of the energy detected in the CLAS12 EC to the Monte Carlo momentum. The single-photon detection efficiency was determined from the energy and position of the photon hits. The measured resolution was 0.0972 for the five-module version, 0.111 for the four-module version, and 0.149 for the three-module version. Both the five- and four-module versions had 99% efficiency above 0.5 GeV, while the three-module version had 99% efficiency above 1.5 GeV. Based on these results, the suggested preshower configuration is the four-module version, containing twelve layers of scintillator and fifteen layers of lead, because it is the most realistic choice to construct in terms of resolution, efficiency, and cost. The next step will be additional GSIM simulations to verify that the four-module version has acceptable π0 mass reconstruction, and continued research and development (R&D) analysis of the preshower calorimeter.

Generation of Theoretical Spectra for Hot Dense Matter. ELIAS ALLISON, MATILDA FERNANDEZ, & AJIT HIRA (Northern New Mexico Community College, Espanola, NM, 87532) DR. RICHARD W. LEE (Lawrence Livermore National Laboratory, Livermore, CA, 94550)

The focus of this project is to use computer codes to generate spectroscopic data for plasmas and hot dense matter, and to use these theoretical spectra to interpret experimental results. These theoretical spectra are important because of their applications in the study of ionized matter and magnetic fields near the surfaces of the sun and stars, the origin of cosmic rays, the dispersion and broadening of signals traveling through interstellar space, the realization of controlled release of nuclear fusion energy, and the development of new electronic devices, among others. The computer codes model specified interactions for elements ranging from helium to iron, with particular interest in lithium-like, helium-like, and hydrogenic ionic species. Some of the important processes included in the model are collisional ionization and recombination, radiative bound-free processes, bound-bound processes, autoionization, and electron capture. At present the focus of the project is the design and implementation of a good dynamic interface to improve the use of the existing codes.

Hold-up Time Measurements for Various Targets. EMILY PRETTYMAN (DePaul University, Chicago, IL, 60614) KEN CARTER (Oak Ridge National Laboratory, Oak Ridge, TN, 37831)

At Oak Ridge National Laboratory the Holifield Radioactive Ion Beam Facility (HRIBF) produces radioactive ion beams (RIBs), typically by proton-induced fission on an actinide target. The RIB yields depend on the chemical and physical properties of the target used, such as the chemical compound, density, and temperature. The rates at which chemical elements are released from the target ion source, characterized by hold-up times, give yield information and information about the movement of chemical elements within the target material and the ion source. This information may be useful in designing optimal targets and choosing operating conditions to maximize production of specific isotopes. Hold-up times are measured using the UNISOR isotope separator, connected to the tandem accelerator. Typically, the proton beam is left on until the element of interest reaches equilibrium between production and release; the proton beam is then turned off and the decrease in the release is observed using a moving tape collector and a germanium detector. Hold-up times have been measured using a pressed-powder uranium carbide (UCx) target at different temperatures for four isotopes: 78Ge, 92Sr, 128Sn, and 130Sb. In addition, hold-up times have been measured using two ThO2 targets of different densities for 132Te. The current analysis was done by fitting the data with two exponential decay functions. As expected, the hold-up times decrease as the target temperature increases, and the porous ThO2 target has drastically shorter hold-up times and higher yields for 132Te than the denser target. A further approach would be to fit the data with equations that take into account diffusion and effusion, with the goal of determining the ratio between the two processes to see which dominates. More work must be done before conclusions can be drawn about the best design of a target and the most efficient operating conditions to maximize the yields.
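
The two-exponential fit mentioned above can be sketched as follows (illustrative names and starting guesses; this is not the actual analysis code):

    import numpy as np
    from scipy.optimize import curve_fit

    def two_exp(t, a1, tau1, a2, tau2):
        """Two-component release curve after the proton beam is turned off."""
        return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

    def fit_holdup(t, y):
        """t: time since beam-off; y: gamma-line rate from the tape collector."""
        p0 = (0.7 * y[0], 10.0, 0.3 * y[0], 100.0)  # assumed initial guesses
        popt, pcov = curve_fit(two_exp, t, y, p0=p0)
        return popt, np.sqrt(np.diag(pcov))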

Host Galaxies of X-Shaped Radio Sources. ALESSONDRA SPRINGMANN (Wellesley College, Wellesley, MA, 02481) TEDDY CHEUNG (Stanford Linear Accelerator Center, Stanford, CA, 94025)

The majority of radiation from galaxies containing active galactic nuclei (AGNs) is emitted not by the stars composing the galaxy, but from an active source at the galactic center, most likely a supermassive black hole. Of particular interest are radio galaxies, the active galaxies emitting much of their radiation at radio wavelengths. Within each radio galaxy, an AGN powers a pair of collimated jets of relativistic particles, forming a pair of giant lobes at the ends of the jets and thus giving a characteristic double-lobed appearance. A particular class of radio galaxies has an "X"-shaped morphology: in these, two pairs of lobes appear to originate from the galactic center, producing a distinctive X shape. Two main mechanisms have been proposed to explain this morphology: the merger of a binary supermassive black hole system, or radio jets expanding into an asymmetric medium. By analyzing the shapes of the radio host galaxies, we probe the distribution of stellar mass to compare the differing model expectations regarding the distribution of the surrounding gas and stellar material about the AGN.

Hydrocode Simulations of Mach Stem Formation. STEPHEN DAUGHERTY (Vanderbilt University, Nashville, TN, 37235) DENNIS PAISLEY (Los Alamos National Laboratory, Los Alamos, NM, 87545)

The study of shock waves often makes use of metallic disks propelled at high velocities to act as impactors. There are a number of ways to supply these flyers with energy, but achieving extreme accelerations takes careful design; a laser can focus a large amount of energy into a small area, but at some point thermal effects take over, scattering much of the pulse energy and heating the sample. One solution is to strike a cone with a high-energy pulse and allow the resulting shock to converge toward the center. The shock reflected from the center moves at supersonic speed behind the incident shock, causing a flat disk of energy, a Mach wave, to propagate through the center of the cone. The Mach wave carries a high energy density to the base of the cone, where it can be transferred to a flyer.

Improvement of PEP-II Linear Optics with a MIA-Derived Virtual Accelerator. BEN CERIO (Colgate University, Hamilton, NY, 13346) YITON YAN (Stanford Linear Accelerator Center, Stanford, CA, 94025)

In several past studies, model independent analysis, in conjunction with a virtual accelerator model, has been successful in improving PEP-II linear geometric optics, and in many cases the optics improvement yielded an increase in machine luminosity. In this study, an updated characterization of the linear optics is presented. With the PEP-II beam position monitor (BPM) system, four independent beam centroid orbits were extracted and used to determine phase advances and linear Green's functions among BPM locations. A magnetic lattice model was then constructed with a singular-value-decomposition-enhanced least-squares fitting of phase advances and Green's functions, which are functions of quadrupole strengths, sextupole feed-downs, and BPM errors, to the corresponding measured quantities. The fitting process yielded a machine model that matched the measured linear optics of the real machine and was therefore deemed the virtual accelerator. Large beta beating, as well as linear coupling, was observed in both the LER and HER of the virtual accelerator. Since the beta beating was larger in the LER, focus was shifted to the improvement of this ring. By adjusting selected quadrupoles of the virtual LER and fitting the resulting beta functions and phase advances to those of the desired lattice, the average beta beat of the virtual machine was effectively reduced. The new magnet configuration was dialed into the LER on August 10, 2006, and the beta beat was reduced by a factor of three. After fine-tuning the HER to match the improved LER for optimal collisions, a record peak luminosity of 12.069 × 10^33 cm^-2 s^-1 was attained on August 16, 2006.
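
The numerical core of an SVD-enhanced least-squares fit is the truncation of small singular values so that noisy directions do not corrupt the fitted quadrupole strengths. A generic sketch, with the Jacobian and residual vector as placeholders:

    import numpy as np

    def svd_least_squares(J, residuals, rcond=1e-3):
        """Solve J @ dk = residuals for parameter corrections dk,
        discarding singular values below rcond * s_max."""
        U, s, Vt = np.linalg.svd(J, full_matrices=False)
        keep = s > rcond * s[0]                   # truncate noisy modes
        return Vt[keep].T @ ((U[:, keep].T @ residuals) / s[keep])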

Indium Zinc Oxide Active Channel Layer in Transparent Thin Film Transistors. ANDREW CAVENDOR (Colorado School of Mines, Golden, CO, 80401) DAVID GINLEY (National Renewable Energy Laboratory, Golden, CO, 80401)

Amorphous indium zinc oxide (IZO) shows promise as the active channel layer in a transparent thin film transistor (TTFT). In the last two years, oxide-based transistors have begun to be investigated, showing promise to replace amorphous silicon (a-Si) and microcrystalline silicon TFTs and lead to transparent electronics. IZO is a leading candidate because it can be deposited at room temperature and is amorphous, making it suitable for flexible substrates. Also, IZO has a higher Hall mobility (µh > 30 cm^2 V^-1 s^-1) than conventional materials (< 1 cm^2 V^-1 s^-1), which makes faster TFT switching times possible. We used DC magnetron sputtering with 0-10% O2 in the O2/Ar sputter gas to optimize the active channel layer for transistors. Addition of O2 to the sputter gas reduces the carrier concentration (n) while preserving the high mobility. Amorphous IZO films were produced at 70/30 atomic percent In/Zn with µh as high as ~55 cm^2 V^-1 s^-1 and n as low as ~10^16 cm^-3. TFT devices were made in the gate-down geometry through photolithography. The source and drain were produced at 84/16 atomic percent In/Zn to achieve good conductivity, ~2000 S cm^-1. Films were incorporated into functional transistors which showed on-to-off current ratios of ~10^6.

Investigating the Polarization Effects of Free Electrons Through a Longitudinal Stern-Gerlach Apparatus. RACHEL SPARKS (Old Dominion University, Norfolk, VA, 23517) DR. DOUGLAS W. HIGINBOTHAM (Thomas Jefferson National Accelerator Facility, Newport News, VA, 23606)

Otto Stern and Walther Gerlach completed an experiment in 1922 that opened many important questions and led to key discoveries about the quantum mechanical world. In the Stern-Gerlach experiment, a beam of neutral silver atoms was passed through an inhomogeneous transverse magnetic field and then projected onto a screen. Classically, one would expect the screen to display a single blob of silver atoms; however, two blobs appeared. The breakthrough from this experiment was the realization that the electron has an intrinsic spin, similar to a rotating top. Attempts have been made to conduct the Stern-Gerlach experiment with a beam of free electrons, but the Lorentz force, along with the Heisenberg uncertainty principle, blurs the splitting of the beam. In 1928, Brillouin suggested that a longitudinal Stern-Gerlach apparatus would minimize the effect of the Lorentz force. While this idea was dismissed in 1930 by Wolfgang Pauli, recent papers have shown that Brillouin's idea may have been valid. A computer simulation of an experiment using the longitudinal Stern-Gerlach geometry was completed this summer, analyzing the effect of different spin separations of the beam. Enhancement of the polarization was seen even with a realistic spread in the electrons' momenta. Thus, it appears feasible to conduct an experiment to investigate Brillouin's idea. In the future, this could be done in the Jefferson Lab test cave with equipment that is readily available.

Investigation of the Optical Properties of Light-Emitting Diodes for Use in Fluorescence-Based Detection of Biological Threat Particles. SHAWN BALLENGER (Florence-Darlington Technical College, Florence, SC, 29501) NORM ANHEIER (Pacific Northwest National Laboratory, Richland, WA, 99352)

The optical properties of ultra-violet (UV) light-emitting diodes (LEDs) are being investigated to determine their applicability to detecting biological particles via intrinsic fluorescence. When a biological material fluoresces, it gives off a specific spectrum, which can be analyzed to distinguish threat particles from background interferents. Pulsed laser excitation has previously been used to induce fluorescence in biological materials; however, the continued improvement in LED technology has made LEDs a viable alternative to lasers, which would enable the development of economical detectors for biological threats. Digital-logic pulsing techniques were used to drive the LEDs, and an integrating sphere was used to measure the LEDs' average optical power as the duty cycle and pulse repetition frequency (PRF) were varied. The digital-logic pulsing techniques were successful in driving the LEDs; however, alternative pulsing techniques need to be developed to extract additional optical power from the LEDs. Once the LED driver circuit is properly developed, experiments investigating the LEDs' ability to produce fluorescence in biological particles can be done to determine whether the LED is a viable alternative to the more costly laser source.

Java Based Interface for Particle Fragmentation Studies at the NASA Space Radiation Laboratory. JENNIFER MABANTA (St. Joseph's College, Patchogue, NY, 11772) DR. MICHAEL SIVERTZ (Brookhaven National Laboratory, Upton, NY, 11973)

Before extended space missions can occur, protective measures must be in place for astronauts, as prolonged exposure to radiation fields can have harmful and sometimes permanent effects. The purpose of the research done at the NASA Space Radiation Laboratory (NSRL) at Brookhaven National Laboratory (BNL) is to gain a better understanding of the cosmic rays in space and to develop the most efficient countermeasures for the voyagers. Proton and heavy-ion beams from the BNL Booster accelerator are directed along a beam line to NSRL. By mimicking cosmic rays with the beam, the lab provides a controlled area in which to study the effects of the rays. The most harmful of the ions is iron, and the least destructive and most abundant is hydrogen. Of particular interest to NASA is the process of fragmentation, the way in which heavy ions, like iron, break up into lighter, less dangerous ions, such as protons. The methods previously used to analyze fragmentation data were cumbersome at best, and a program was needed to facilitate the studies being conducted at NSRL. A program called "Fragmentation Process" was developed which provides a graphical user interface (GUI) for scientists interested in fragmentation data. This Java-based program, written using an Emacs text editor, allows the user to collect data while the beam is on and provides graphs generated from the data in real time. These graphs can be made while data are still being taken, or at the end with the total data sample collected. The graphs are displayed using a program called Ghostview and provide the scientist with a measure of the fragmentation in the beam. Previously, the work required to generate these graphs could take up to a week; with the program, the calculations and resulting graphs are done immediately. Comparison of these graphs with graphs generated prior to the creation of the GUI shows that the program displays accurate information. The increased efficiency in studying fragmentation data at NSRL will be of great value to NASA, and researchers will be able to gain a deeper understanding of fragmentation and develop the most effective shielding or other countermeasures for our astronauts.

Jet-Like Event Simulations for the FPD++ Calorimeter at STAR. CHRISTOPHER MILLER (Stony Brook University, Stony Brook, NY, 11794) LES BLAND (Brookhaven National Laboratory, Upton, NY, 11973)

The Forward Pion Detector upgrade (FPD++) to the STAR experiment at Brookhaven National Laboratory is capable of detecting photons at high rapidity. Most of these photons are produced from the decay of neutral pions (π0), which are fragments of quark or gluon jets that have undergone small-angle scattering in proton+proton collisions. My project was to quantify the average production of these photons and its azimuthal symmetry about the forward jet axis. Asymmetries can arise in the fragmentation of forward-scattered quarks and gluons from particles produced by spectators to the quark/gluon scattering, and these asymmetries can serve as backgrounds to measurements that aim to establish the spin dependence of forward jet production. To accomplish this, I used the Monte Carlo generator PYTHIA 6.222 to simulate particle production at a total center-of-mass energy of 200 GeV. The correlation between the energy of each photon and its distance from the thrust axis, calculated from the photons' momentum-sum vector, was histogrammed. The results were averaged over many events and a visualization of the jet shape was created, which allowed a further analysis to quantify its azimuthal symmetry. In the end, the simulation showed a falling exponential trend of the jet shape away from the leading pion and a small asymmetry between the left and right sections of the jet. These results give an estimate of the underlying event background.
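
The binning step can be sketched as follows, assuming per-event arrays of photon four-momenta (E, px, py, pz); averaging the returned histograms over events gives the jet shape. Names are illustrative:

    import numpy as np

    def jet_shape(photon_p4s, n_bins=50, r_max=1.0):
        """Energy-weighted histogram of photon angle from the thrust axis,
        taken here as the normalized momentum-sum vector of the photons."""
        p = photon_p4s[:, 1:]
        axis = p.sum(axis=0)
        axis /= np.linalg.norm(axis)                 # thrust-axis direction
        cosang = (p @ axis) / np.linalg.norm(p, axis=1)
        r = np.arccos(np.clip(cosang, -1.0, 1.0))    # angle from the axis (rad)
        return np.histogram(r, bins=n_bins, range=(0.0, r_max),
                            weights=photon_p4s[:, 0])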

Jovian Planet Formation in 50 AU Binary Star Systems. CHRISTIAN LYTLE (University of St. Thomas, St. Paul, MN, 55105) ANDY NELSON (Los Alamos National Laboratory, Los Alamos, NM, 87545)

The detection of Jovian planets in large-separation binaries (>100 AU) has motivated investigation into the probability of planet formation in systems of approximately 50 AU and smaller. We have run smoothed-particle hydrodynamics (SPH) simulations of binary systems with circumstellar disks and compared our results with others in the literature. Cooling prescriptions based both on a fraction of the orbital period and on a fully radiative model were implemented, but neither produced gravitational instabilities of the magnitude required for long-term fragmentation of the disks, due primarily to the strong heating which occurs when the disks are near periapse. These results conflict with simulations from the literature that produced fragmentation in disks with morphologies similar to ours. We propose that the inconsistencies are attributable to numerical deficiencies (low resolution and fixed gravitational softening) and unrealistic initial conditions in the previous work.

Laser Induced Fluorescence Motional Stark Effect Control & Data Acquisition Application. PATRICK MALONEY (Carleton College, Northfield, MN, 55057) JILL FOLEY (Princeton Plasma Physics Laboratory, Princeton, NJ, 08543)

The motional Stark effect (MSE) is a standard technique for measuring spatially resolved magnetic fields in plasma experiments. These fields are found with a beam of neutral hydrogen atoms which, when traveling through a magnetic field, perceive a Lorentz electric field in their own frame. The resulting electric field causes observable splitting of hydrogen's spectral lines that is proportional to the electric field, and the polarization of the lines gives the field direction. However, while MSE is extremely effective for fields > 1 T, it can be difficult to measure weaker fields as a consequence of spectral line overlap. A dominant source of the overlap is different emission lines being red- and blue-shifted across finite-sized collection optics, an effect referred to as geometric broadening. To counter this effect, a method using laser-induced fluorescence (LIF) to excite a single atomic transition at a time in the hydrogen beam has been proposed. Using this technique, the exact energy, and thus wavelength, of the incoming photons is set by the laser, allowing geometric broadening effects to be ignored entirely. The combination of MSE and LIF has already been able to measure magnetic fields on the order of tens of gauss in neutral gases but has yet to be tested on plasma. The purpose of this project has been to implement a data acquisition and control system in LabVIEW for the MSE-LIF development laboratory, including the new Spiral Antenna Helicon High Intensity Background (SAHHIB) experiment, a plasma test bed for LIF-MSE. The resulting program displays and records a variety of measurements, including pressures at critical points in the vacuum system, laser and RF power characteristics, and, most importantly, time-dependent LIF signals used to find the magnetic field magnitude and direction. The program is valuable because previous measurements were primarily taken by hand, making the collection of experimental parameters tedious. The data acquisition and control application developed to study LIF-MSE may also be employed for studying different instabilities in the helicon plasma source SAHHIB. Should the LIF-MSE device on SAHHIB prove successful, it will eventually be employed at the National Spherical Torus Experiment (NSTX).

Limited Streamer Tube System for Detecting Contamination in the Gas Used in the BaBar Instrumented Flux Return. LAURA HUNTLEY (Franklin & Marshall College, Lancaster, PA, 17603) MARK CONVERY (Stanford Linear Accelerator Center, Stanford, CA, 94025)

The Resistive Plate Chambers (RPCs) initially installed in the Instrumented Flux Return (IFR) of the BaBar particle detector have proven unreliable and inefficient for detecting muons and neutral hadrons. In the summer of 2004, the BaBar Collaboration began replacing the RPCs with Limited Streamer Tubes (LSTs). LST operation requires a mixture of very pure gases and an operating voltage of 5500 V to achieve maximum efficiency. In the past, gas supplies obtained by the BaBar Collaboration have contained contaminants that caused the efficiency of the IFR LSTs to drop from approximately 90% to approximately 60%, so it was necessary to develop a method for testing the gas for contaminants. An LST test system was designed and built using two existing LSTs, one placed 1 cm above the other; these LSTs detect cosmic muons in place of particles created during the BaBar experiment. The effect of gas contamination was mimicked by reducing the operating voltage of the test system in order to lower the detection efficiency. When contaminated gas was simulated, the coincidence rate and the percent coincidence between the LSTs in the test system dropped significantly, demonstrating that the test system can be used as an indicator of gas purity. In the fall of 2006, the LST test system will be installed in the gas storage area near the BaBar facility to test the gas being sent to the IFR.

Measuring the Point Spread Function of a High Resolution Charge-Coupled Device. ANNA DERBAKOVA (University of North Carolina - Chapel Hill, Chapel Hill, NC, 27028) PAUL O'CONNOR (Brookhaven National Laboratory, Upton, NY, 11973)

When a diffraction-limited point source of light passes through an optical system, it acquires aberrations from imperfections in the optics and becomes blurred due to lateral diffusion of photogenerated charge within the semiconductor detector. The quality of the optical system is characterized by the point spread function (PSF), which measures the amount of blurring that is present. The purpose of this experiment was to produce a light spot much smaller than a CCD pixel (15 µm × 15 µm), characterize this light spot, and then use it to measure the PSF of a high-resolution prototype charge-coupled device (CCD), which will later form the basis of the CCD camera in the Large Synoptic Survey Telescope (LSST). The point source for this experiment was obtained by coupling a diode laser to an optical fiber 4 µm in diameter, and a light spot was obtained by imaging this point source using a long-working-distance microscope objective. To characterize this light spot, the knife-edge scan technique was used: the point source was directed at a detector (photodiode) and a razor blade was scanned laterally through the focal region in increments of 0.1 µm at various axial positions. The amount of light incident on the detector was measured as a current using a picoammeter. The intensity was plotted versus position and fitted with an erf function, from which the rms spread of the spot was determined to be σ = 1.18 µm. This spot was then projected onto the surface of the CCD and images were acquired as the point source was stepped across the CCD in sub-pixel increments. By summing the intensity in a fixed pixel window as the light spot is scanned across the edge of the window, the PSF can be obtained (the "virtual knife-edge" technique). Subtracting the light-spot size in quadrature from the width of the fit to the virtual knife-edge scans gives an estimate of the PSF of the CCD: σ = 7.25 µm, or a FWHM of 17.06 µm, which is the contribution of charge diffusion to the broadening of the image in the CCD. The results of this experiment will be useful in determining the optimum parameters for future prototype CCDs for the LSST high-resolution CCD camera.
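
The two fitting steps reduce to an erf fit for the edge width and a quadrature subtraction for the CCD contribution; a compact sketch with illustrative names:

    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.special import erf

    def edge_model(x, x0, sigma, amp, offset):
        """Integrated Gaussian profile seen in a knife-edge scan."""
        return offset + 0.5 * amp * (1.0 + erf((x - x0) / (np.sqrt(2) * sigma)))

    def edge_sigma(x, intensity):
        p0 = (np.median(x), 1.0, intensity.max() - intensity.min(), intensity.min())
        popt, _ = curve_fit(edge_model, x, intensity, p0=p0)
        return abs(popt[1])

    # CCD charge-diffusion PSF by quadrature subtraction of the spot size:
    # sigma_ccd = np.sqrt(sigma_virtual_edge**2 - sigma_spot**2)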

Measurement of Fair Weather Air Conductivity at the Hanford Site. DALLAS MONROE JR., NATISSA MCCLESTER, & KAMIL ZAKHOUR (Florence Darlington Technical College, Florence, SC, 29501) BRUCE WATSON, PAUL SABIN, SAKHER KHAYYAT, & JEFF GRIFFIN (Pacific Northwest National Laboratory, Richland, WA, 99352)

For the past four years, staff at the US Department of Energy's Pacific Northwest National Laboratory have been investigating the generation and transport of radiation-induced ions near the ground. Baseline measurements of fair-weather atmospheric conductivity are required in order to estimate ion lifetimes and to predict ion detectability downwind of a radioactive source. Using a Gerdien condenser, atmospheric conductivity measurements were made over a two-week period, July 10-21, 2006, in the 300 Area of the Hanford Site. The experimental data from that period show some uniformity, with atmospheric conductivity values ranging from 1.4 to 1.8 × 10^-14 S/m. These results are consistent with published values for arid rural desert regions throughout the world. Weather conditions were similar over the two weeks during which the experiments were performed; therefore, to obtain more representative background atmospheric conductivities, future experiments should examine variable weather conditions and evaluate their effects on atmospheric conductivities at the site.

Mixed Apparatus Radar Investigation of Atmospheric Cosmic Rays of High Ionization Via the Triple Inverted V Array Antenna. ALISHIA FERRELL (Florida A & M University, Tallahassee, FL, 32307) HELIO TAKAI (Brookhaven National Laboratory, Upton, NY, 11973)

The observation of ultra-high-energy cosmic rays (UHECRs) is an ongoing mystery. There are two main questions surrounding these particles: how are they accelerated to such high energies, and where in our universe do their sources reside? By studying these intense phenomena we are only brushing the tip of the iceberg of our elusive universe. The Mixed Apparatus for Radar Investigation of Atmospheric Cosmic Rays of High Ionization (MARIACHI) will search for UHECRs by detecting radio signals reflected from the extensive air showers (EAS) of charged particles created when a UHECR interacts with the earth's atmosphere. The task given was to design and build a portable, reconfigurable receiving antenna that is tunable to multiple Very High Frequency (VHF) bands. This receiving antenna, known as the Triple Inverted V Array Antenna (TIVAA), is based on a half-wave inverted-V dipole design. The TIVAA consists of three dipoles made of metal tubes. The length of each dipole is variable, leading to a change in the optimally detected frequency; the angle of each V and the orientation of the dipole array may also be altered. Mounted on a modified patio-umbrella frame that can be folded down in a straight line for ease of portability, the TIVAA is positioned at a specific height above the ground to minimize destructive interference with ground-reflected waves. The design has been chosen to optimize detection of radio signals which may consist of multiple polarizations. Tests with radio signals from local television and FM radio stations demonstrated that the TIVAA tunes well and has good isotropic reception, and that it can also detect other phenomena such as meteors and lightning. The TIVAA design may form the basis of a future array of antennas providing timing, direction, and ranging information for UHECR-induced EAS.
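
Retuning the TIVAA amounts to setting the element length from the target frequency: for a half-wave dipole the tip-to-tip length is about half the free-space wavelength. The velocity factor below is an assumed correction, not a measured property of the TIVAA:

    C = 2.998e8  # speed of light (m/s)

    def halfwave_dipole_length(f_hz, velocity_factor=0.95):
        """Approximate tip-to-tip length (m) of a half-wave dipole at f_hz."""
        return velocity_factor * C / (2.0 * f_hz)

    # e.g., an FM broadcast signal near 100 MHz needs roughly a 1.4 m dipole
    print(halfwave_dipole_length(100e6))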

Model Independent Analysis of Beam Position Monitor Data in the Spallation Neutron Source Accumulator Ring. STEPHEN THORSON (University of Wisconsin, Madison, WI, 53706) SARAH COUSINEAU (Oak Ridge National Laboratory, Oak Ridge, TN, 37831)

Model independent analysis (MIA) has been proposed as a method to obtain certain particle accelerator parameters from turn-by-turn beam position monitor (BPM) data. In particular, the accelerator physics group at the Spallation Neutron Source (SNS) is interested in using MIA to determine the magnetic optics of the SNS accumulator ring, especially the betatron function. This analysis was performed using the MATLAB software package and an online model of the SNS accelerator. The MIA method has several advantages over other optics measurement techniques: it requires few details about the accelerator specifications, is stable against random measurement noise, and does not require alteration of magnet currents. Its drawbacks are that the betatron function is determined only up to a scale factor which is difficult to calculate, and that the method requires good BPM gain calibration and at least 150 turns of good BPM data, as determined by sensitivity tests with simulated data. After analysis of several sets of measurements, it was determined that the MIA beta function measurement deviates by up to 20% from the theoretical model. It is unclear at this time whether this is due to a problem with the BPM data or whether the beta function in the machine truly varies this much from the model. Further work in this area will include using MIA to calculate the measured chromaticity of the beam, and measurement and analysis of BPM data sets with more than 200 turns.
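
A sketch of the MIA decomposition (the SNS analysis was done in MATLAB; this Python version is illustrative): the SVD of the turns-by-BPMs orbit matrix yields two dominant betatron modes, and the quadrature sum of their spatial vectors is proportional to the beta function at each BPM, the unknown overall scale being the limitation noted above:

    import numpy as np

    def mia_beta(orbits):
        """orbits: (n_turns, n_bpms) turn-by-turn BPM readings.
        Returns a quantity proportional to beta at each BPM, assuming the
        betatron oscillation dominates the two largest singular values."""
        B = orbits - orbits.mean(axis=0)             # remove the closed orbit
        U, s, Vt = np.linalg.svd(B, full_matrices=False)
        mode1, mode2 = s[0] * Vt[0], s[1] * Vt[1]    # betatron spatial modes
        return mode1 ** 2 + mode2 ** 2               # proportional to beta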

Modeling Soil Activation Underneath the Alternating Gradient Synchrotron (AGS) Complex, Building 912, at Brookhaven National Laboratory. ISADEL EDDY (Mount Allison University, Sackville, NB, 0) DR. KIN YIP (Brookhaven National Laboratory, Upton, NY, 11973)

When high-energy protons hit a target in Building 912 of the AGS complex at Brookhaven National Laboratory, large numbers of secondary particles are produced. These secondary particles then bombard all materials around the target and activate some of them; i.e., some of the atoms become radioactive. Most of these radioactive atoms quickly decay back to stable atoms; however, some long-lived activity is induced in the soil and is potentially dispersible. Tritium, which easily leaches into the soil, and sodium-22, which leaches relatively slowly, are fairly long-lived isotopes with half-lives of 12.43 and 2.60 years respectively, making them the radionuclides of primary concern for this project. Shielding, made primarily of steel, concrete, and earth, was built around the targets to contain the spread of radiation. With the three-dimensional geometry of the Monte Carlo N-Particle eXtended (MCNPX) simulation software, the detailed structure of all matter near the targets was modeled. MCNPX was used to simulate the particle interactions between the matter surrounding the target, the proton beams, and the secondary particles in order to evaluate the radiation flux, in neutrons per cm² per incident proton, present in the soil underneath Building 912. Three regions of radiation flux were ultimately identified. The highest flux was found near the targets, where it reached 1.4×10⁻⁴ neutrons per cm² per incident proton around Target A and 2.7×10⁻⁴ around Target D. The flux was, however, relatively high all along the beam lines, and it decreased dramatically with soil depth. At five meters below Targets A and D, the flux was 4×10⁻⁷ and 5×10⁻⁷ neutrons per cm² per incident proton respectively. The radiation flux, along with the total number of protons accelerated, is needed in order to determine soil activation. This work is part of a project to document the extent of soil activation underneath Building 912 at Brookhaven National Laboratory.
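
The half-lives quoted above determine how quickly the induced activity decays. The following Python sketch evaluates the remaining fraction N(t)/N0 = 2^(-t/T_half) for tritium and sodium-22; the sample times are illustrative.

    # Sketch: fraction of an initial activity remaining after t years,
    # using the half-lives quoted in the abstract. Illustrative only.

    def remaining_fraction(t_years: float, half_life_years: float) -> float:
        """Radioactive decay: N(t)/N0 = 2**(-t / T_half)."""
        return 2.0 ** (-t_years / half_life_years)

    for t in (1.0, 10.0, 50.0):
        print(f"after {t:4.0f} y: H-3 {remaining_fraction(t, 12.43):.3f}, "
              f"Na-22 {remaining_fraction(t, 2.60):.3f}")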

Modeling X-ray Imaging with Monte Carlo N-Particle Software. JEREMIAH RUESCH (California State University Chico, Chico, CA, 95929) DR. TIMOTHY RONEY (Idaho National Laboratory, Idaho Falls, ID, 83415)

Monte Carlo simulation uses stochastic (random sampling) processes to solve equations that are difficult to solve by other means. For radiation transport, the Boltzmann equation may be solved by Monte Carlo methods; Monte Carlo N-Particle (MCNP), a stochastic approach to solving the Boltzmann equation, is utilized for our study. To ensure agreement between the output of MCNP and known analytic results, a simple geometry was created with a monoenergetic pencil-beam source shooting through a homogeneous material to a single detector element. The simulation was performed for three material thicknesses. The simulated detector response was used to derive the x-ray linear attenuation coefficient (LAC) of the material from Beer's Law and compared to the known LAC of the material at the energy of the source. The processed MCNP results differed from the known LAC by less than one percent. To begin modeling the radiographic imaging process, the material was changed to be spatially varying, the single detector element was replaced by a linear detector array of six elements, and the source was changed from a pencil beam to a cone-beam configuration. The source was centered vertically on the linear detector array with the face of the array perpendicular to the source, and the distance between source and detector was held fixed. The object is defined as a rectangular block of iron with a rectangular aluminum insert; it is located with its face perpendicular to the source, two centimeters from the detector array, and held fixed in this location. Moving the source and detector array to one end of the object, a run of MCNP is taken, creating a single vertical line of image data. The source and detector array are then moved a distance of one detector width and the simulation is repeated, yielding an adjoining detector line. This process is repeated until the entire object is scanned and a two-dimensional image is produced. The images produced in this manner have qualitative spatial and contrast features in agreement with our intuition. We have demonstrated the potential for using a stochastic particle transport modeling code (MCNP) to simulate x-ray imaging and have developed confidence in the numerical results by comparing derived quantities with known quantities (linear attenuation coefficients). Comparisons with experimental data are needed to validate the methods employed; this will be our next step.
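
The Beer's-law step described above reduces to mu = -ln(I/I0)/t. A minimal Python sketch, using illustrative placeholder intensities rather than actual MCNP output:

    # Sketch of the Beer's-law check: derive the linear attenuation
    # coefficient (LAC) from detector responses at several thicknesses.
    # Intensities below are illustrative placeholders, not MCNP output.

    import math

    I0 = 1.0                               # unattenuated (open-beam) response
    thickness_cm = [1.0, 2.0, 3.0]
    intensity    = [0.606, 0.368, 0.223]   # hypothetical simulated responses

    for t, I in zip(thickness_cm, intensity):
        mu = -math.log(I / I0) / t         # Beer's law: I = I0 * exp(-mu * t)
        print(f"t = {t:.1f} cm -> mu = {mu:.3f} 1/cm")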

Monitoring Displays for GLAST: Building ISOC Status Displays for the Large Area Telescope Aboard the Gamma Ray Large Area Space Telescope (GLAST) Observatory. CHRISTINA KETCHUM (Lewis and Clark College, Portland, OR, 97219) ROB CAMERON (Stanford Linear Accelerator Center, Stanford, CA, 94025)

In September 2007 the Gamma Ray Large Area Space Telescope (GLAST) is scheduled to launch aboard a Delta II rocket in order to put two high-energy gamma-ray detectors, the Large Area Telescope (LAT) and the GLAST Burst Monitor (GBM), into low Earth orbit. The Instrument Science Operations Center (ISOC) at SLAC is responsible for LAT operations for the duration of the mission, and will therefore build an operations center, including a monitoring station at SLAC, to inform operations staff and visitors of the status of the LAT instrument and GLAST. This monitoring station is to include sky maps showing the location of GLAST in its orbit as well as the LAT's projected field of view on the sky containing known gamma-ray sources. The display also requires a world map showing the locations of GLAST and three Tracking and Data Relay Satellites (TDRS) relative to the ground, their trail lines, and "footprint" circles indicating the range of communications for each satellite. The final display will also include a space view showing the orbit and pointing information of GLAST and the TDRS satellites. To build the displays, the astronomy programs XEphem, DS9, SatTrack, and STK were employed to model the position of GLAST and the pointing of the LAT instrument, and Python scripts run from cron under Unix were used to obtain updated information from databases and load it into the programs at regular intervals. Through these methods the indicated displays were created and combined to produce a monitoring display for the LAT and GLAST.

Nanosecond-length Electron Pulses for a Time-of-Flight Mass Spectrometer. LIANNE MARTINEZ (University of Nevada Las Vegas, Las Vegas, NV, 89123) HERBERT FUNSTEN AND PAUL JANZEN (Los Alamos National Laboratory, Los Alamos, NM, 87545)

The Spatially Isochronous Time-of-Flight (SITOF) mass spectrometer performs rapid mass analysis of gaseous samples at high mass resolution in a small volume. The mass spectrometer incorporates a pulsed electron ionization source within the drift region itself, eliminating a separate ion source and its associated mass, power, and volume resources. Gas throughout the drift region is ionized simultaneously by the pulsed electron source, and the ions are accelerated by a linear electric field in the drift region so that their time-of-flight is independent of the location at which they were ionized. The current proof-of-concept pulsed electron source uses a channel electron multiplier stimulated by a weak radioactive source to produce electron pulses approximately 10 ns long. These pulses have been characterized, and development has started on a pulsed electron source that uses a microchannel plate stack to multiply photoelectrons produced from a fast ultraviolet LED. This method produces electron pulses of shorter duration, over a larger area, at a controllable frequency. We discuss the time dispersion of the pulsed electron source, its dependence on detector bias and gain, and its impact on the mass resolution of the SITOF mass spectrometer.

Noise Performance of Next-Generation Electronics for the SuperCDMS 25kg Experiment. PETER BROOKS (Stanford University, Stanford, CA, 94305) FRITZ DEJONGH (Fermi National Accelerator Laboratory, Batavia, IL, 60510)

The Cryogenic Dark Matter Search (CDMS) is a direct detection experiment looking for signatures of elastic scattering between dark matter particles and atomic nuclei in high-purity, cryogenic silicon and germanium crystals. In order to discover rare dark matter events against an overwhelming background of natural radioactive decay processes, the experiment relies on high-precision measurements of the ionization and phonon signals from these interactions. To increase the sensitivity of the detector, it will be necessary to scale the experiment beyond the current 5.85 kg of detector mass. In order to scale the necessary electronics to the 25 kg scale for the proposed SuperCDMS experiment, a prototype electronics board has been developed which condenses the entire current electronics system onto a single circuit board by digitizing the signal as early as possible and analyzing it offline in software. Statistical and Fourier analyses have been performed on the prototype board, both without signals and with test signals mimicking the actual detector signal, in a number of operating modes. These analyses show that the noise performance of the prototype board is comparable to that of the current system, and it is believed that with further refinement the noise could be reduced to offer better precision than the current CDMS electronics. This development is an important step in demonstrating the feasibility of a 25 kg SuperCDMS experiment.
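
The Fourier analysis described above amounts to estimating a noise power spectral density from digitized baseline records. A minimal Python sketch, with an assumed sampling rate and a synthetic placeholder trace in place of board data:

    # Sketch: one-sided noise power spectral density of a digitized baseline
    # trace. Generic illustration; 'trace' stands in for a signal-free record
    # from the prototype board, and fs is an assumed sampling rate.

    import numpy as np

    fs = 1.0e6                                       # assumed sampling rate, Hz
    trace = np.random.normal(0.0, 1.0, 2**14)        # placeholder baseline record

    window = np.hanning(trace.size)
    spectrum = np.fft.rfft(trace * window)
    freqs = np.fft.rfftfreq(trace.size, d=1.0 / fs)
    # periodogram, corrected for the power lost to the window
    psd = 2.0 * np.abs(spectrum)**2 / (fs * np.sum(window**2))
    print(f"mean PSD above 1 kHz: {psd[freqs > 1e3].mean():.3e} (arb.^2/Hz)")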

Novel Coarsening of Pb Nanostructures on Si(111) 7×7. CHARLES PYE (University of Kansas, Lawrence, KS, 66044) MICHAEL TRINGIDES (Ames Laboratory, Ames, IA, 50011)

This work investigated the feasibility of using electron backscatter diffraction (EBSD) to associate, or differentiate, metal fracture fragments. The objective was to determine an empirical basis for the hypothesis that a minimum sequence of grains can be used to identify a metal fracture line beyond a reasonable doubt. Crystallographic misorientations between individual grains were determined using EBSD along several lines (point-to-origin vectors) of metal crystals within the microstructure of a 304 stainless steel. From a given starting crystal, a grid of vectors was used for a relative-reference comparison of the grain orientation profiles along each vector. A radial grid was used with 5° spacing between vectors at a radius of 11.5 times the average grain diameter. The misorientation angle between grains was calculated by averaging the misorientation angles within a single grain and referencing this average back to the origin grain. The average vector matched 2.6±2 grains with adjacent vectors out of 16.9±7 grains characterized per vector. In point-by-point matching, vectors placed 5° apart had 83±11% of data points match in the first 2.5 grains away from the origin, falling to 36±10% in the last 2.5 grains characterized in the vector. The extent of point-to-point matching confirms that this method can properly identify when similar or dissimilar profiles are being compared. This leads to the conclusion that a relatively low number of grains needs to be analyzed to uniquely characterize a fracture line by relative crystallographic orientations. It also opens the possibility of a more extensive statistical review of concepts and data.
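
The misorientation angle between two grains follows from the rotation connecting their orientation matrices, theta = arccos((tr R - 1)/2). The following Python sketch illustrates this step; for brevity it omits the cubic symmetry operators over which a full EBSD treatment would minimize the angle.

    # Sketch: misorientation angle between two grain orientations A and B.
    # A full EBSD analysis for cubic crystals would minimize over all 24
    # cubic symmetry equivalents; that step is omitted here for brevity.

    import numpy as np

    def misorientation_deg(A: np.ndarray, B: np.ndarray) -> float:
        """Angle of the rotation taking orientation A into orientation B."""
        R = B @ A.T                                   # relative rotation
        cos_theta = (np.trace(R) - 1.0) / 2.0
        return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

    def rot_z(deg):
        """Rotation matrix about the z axis, for the demo below."""
        c, s = np.cos(np.radians(deg)), np.sin(np.radians(deg))
        return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

    print(misorientation_deg(rot_z(10.0), rot_z(25.0)))   # -> 15.0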

Numerical Simulations of Electric and Magnetic Fields for the Pulse Line Ion Accelerator. SAMUEL PEREZ (Contra Costa College, San Pablo, CA, 94806) ENRIQUE HENESTROZA (Lawrence Berkeley National Laboratory, Berkeley, CA, 94720)

The Pulse Line Ion Accelerator (PLIA) is a slow-wave helical structure whose purpose is to accelerate charged particles. It uses a voltage pulse to generate a traveling electric field along its length. In order to efficiently produce a working PLIA and learn how to use it, a simulation of the PLIA was created and the electric and magnetic fields it produces were plotted using the electromagnetic field solver CST Microwave Studio. In addition to the plots offered by Microwave Studio, more specialized plots were created using the particle code WARP. The geometry is simple: a helix embedded in dielectric material inside a conducting box, with a cylindrical vacuum region through its center. The voltage pulse is applied on a loop around one end of the helix, coupling inductively to the helix. A coarse mesh was used at first so that adjustments could be made quickly; once the simulation correctly produced a traveling wave, the helix was changed to its final dimensions and a finer mesh was used to generate sufficient data. The plots created by CST Microwave Studio show that the electric field travels the length of the PLIA at about 1/60 the speed of light, the expected value given the permittivity of the dielectric material, the helix dimensions, and the beam pipe radius. The field does not begin to move until after the voltage pulse has ended. There are actually two fields, a positive and a negative one, with the negative field traveling ahead of the positive field. The field is also defocusing in the transverse direction, so that positive charges will be sent outward into the walls of the PLIA. In order to use the PLIA successfully, the beam will have to enter the structure so that it interacts only with the positive field, and powerful solenoids around the PLIA will be needed to focus the beam. Continued work on the PLIA could establish it as an inexpensive slow-wave accelerator for use in fusion research.

Nucleation Properties of Materials Deposited onto Carbon Nanotubes at Low Temperatures. DAMION CAMPBELL & LIONEL COHEN (The City College of New York, New York, NY, 10031) MYRON STRONGIN (Brookhaven National Laboratory, Upton, NY, 11973)

Nano-science is recognized as the new emerging field for the 21st century. In order to reliably manufacture nano-sized devices, the electronic properties of low-dimensional materials, which may be drastically different from their bulk values, need to be understood. Carbon nanotubes (CNTs) are fascinating one-dimensional objects: they can be insulating, metallic, or even superconducting depending on their chirality. They have also been found to be ideal templates for engineering one-dimensional nano-wires. In this work, thin gold films were deposited onto CNTs coated with amorphous Ge in ultra-high vacuum at low temperatures. The morphology of the Au films under different annealing procedures was examined using high-resolution transmission electron microscopy (TEM). Under suitable conditions, Au nano-clusters are seen to form percolated continuous structures on CNTs, making them effectively one-dimensional nano-wires. Preliminary transport and optical studies have been carried out on arrays of CNTs and CNTs coated with Au. Different behaviors of CNT arrays have been observed with and without Au coating, providing a method to alter their electronic properties. This work is a small portion of a much larger project aimed at understanding charge dynamics in low-dimensional systems and CNT-based nano-composite materials. This research and education project is an ongoing collaboration between the City College of New York and Brookhaven National Laboratory (BNL) and will continue well beyond the 10-week period of the joint DOE/NSF FaST (Faculty and Student Team) summer program. The City College students involved will have an opportunity to work at the nation's premier large-scale facilities, such as the National Synchrotron Light Source and the Center for Functional Nano-materials at BNL.

Optical and Mechanical Design Features of the Qweak Main Detector. ELLIOTT JOHNSON (North Dakota State University, Fargo, ND, 58105) DAVE MACK (Thomas Jefferson National Accelerator Facility, Newport News, VA, 23606)

Cerenkov radiation is emitted when charged particles exceed the local speed of light within a transparent medium. In experimental particle physics, this radiation can be used to detect high-energy particles. The Cerenkov radiation detector being used in the Qweak experiment at Thomas Jefferson National Accelerator Facility (Jefferson Lab) has unique design features compared with previous detectors, imposing more specific requirements on the detector's radiator components. The new design includes a glue joint in the beam pathway and side-mounted photomultiplier tubes (PMTs) instead of the traditional end-on layout, a change that allows increased detector mobility during beam operation. The system implements aluminum trays to support the glue joints and terminal cap pieces that hold the ends of the quartz bars. Ultra-pure artificial quartz (Spectrosil 2000) bars with high resistance to radiation damage are needed for high transmission of UV photons. Dimensional and optical specifications of these bars were assessed with a Coordinate Measuring Machine (CMM) and a traveling microscope. Presented here is a synopsis of the high-precision measurements performed on the radiator bars, as well as a summary of the R&D conducted to develop a usable detector integrating the new design features. The quality assurance procedures developed will be essential for future detector projects involving quartz detectors.
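
Cerenkov emission requires beta > 1/n, which fixes a threshold energy for each particle species. A minimal Python sketch; the refractive index n = 1.47 is an assumed illustrative value for fused silica in the visible, not a Qweak design number.

    # Sketch: Cerenkov threshold from beta > 1/n. The refractive index is an
    # assumed illustrative value, not a parameter of the Qweak radiators.

    import math

    def cerenkov_threshold_energy_mev(mass_mev: float, n: float) -> float:
        """Total energy at which a particle of given mass starts to radiate."""
        beta = 1.0 / n                       # threshold velocity
        gamma = 1.0 / math.sqrt(1.0 - beta**2)
        return gamma * mass_mev

    print(f"electron: {cerenkov_threshold_energy_mev(0.511, 1.47):.3f} MeV")
    print(f"muon:     {cerenkov_threshold_energy_mev(105.66, 1.47):.1f} MeV")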

Optimization of Shield Mass for a Low Power, Fast-Spectrum Liquid-Metal Cooled Surface Reactor System. ROBERT FORESMAN (Pomona College, Claremont, CA, 91711) DAVID I. POSTON (Los Alamos National Laboratory, Los Alamos, NM, 87545)

Extending our presence farther into space furnishes opportunities for research science, potential human colonization beyond Earth, and a more mature understanding of the cosmos. One of the foremost challenges in this effort is the identification of a low-cost power source that will accommodate scientific instrumentation and mission necessities for long periods of time. Low-power nuclear reactors that utilize well-tested materials and concepts, such as stainless steel and water shields, can operate in the 25 kW electrical range. By avoiding exotic designs, these reactors reduce the cost and time of development in the face of a strict US budget. The gamma radiation dose to the Stirling alternator power conversion system and the total astronaut dose can be kept to their nominal limits of 20 Mrad and 5 rem/yr by burying the reactor in lunar surface material (regolith) on Moon missions.¹ An alternative option is to install a permanent water shield on the reactor. Minimization of the additional shield mass for such a system is an engineering problem ideally suited to the radiation transport code MCNPX (Monte Carlo N-Particle eXtended). MCNPX output files contain criticality statistics to ensure a stable reaction and allow direct determination of doses. Shield thickness, placement of interstitial high-Z shield elements, and boron concentration in the shield water were treated as variables for system optimization. Initial simulations show that roughly 93% of the gamma dose to astronauts at 800 meters from the reactor core is due to radiative capture in the water shield. A borated water shield that meets astronaut dose requirements and has a total mass of roughly 5,000 kg can be constructed utilizing interstitial stainless-steel layers of varied thickness. This total shield mass must be compared to the total mass of a buried system configuration, including the burying equipment. Further investigations include the addition of different shielding materials, such as depleted uranium, throughout the shield, as well as moving the reactor core off-center within the water shield. ¹Marcille, T. F., Dixon, D. D., Fischer, G. A., Doherty, S. P., Poston, D. I., and Kapernick, R. J. Design of a Low Power, Fast-Spectrum, Liquid-Metal Cooled Surface Reactor System. Nuclear Systems Design Group, Los Alamos National Laboratory, Los Alamos, NM 87544, pp. 1-3.

Particle Physics in the High School Classroom. CANDICE HUMPHERYS (Brigham Young University - Idaho, Rexburg, ID, 83460) HELIO TAKAI (Brookhaven National Laboratory, Upton, NY, 11973)

The main goal of the Mixed Apparatus for Radar Investigation of Atmospheric Cosmic rays of High Ionization (MARIACHI) project is to explore the detection of ultra high energy cosmic rays via radio signals. To confirm that the data received by radio are legitimate, scintillator detectors (a proven way to detect cosmic rays) have also been set up to collect data and have been placed in high school classrooms on Long Island. MARIACHI involves students as participants in the project and faces two main problems: (a) the world of modern physics is absent from the high school physics curriculum, and (b) techniques for data analysis need to be introduced. To bridge the gap between a cosmic ray shower being recorded by the scintillator setup and the underlying physics, we have developed two low-cost experiments. First, the existence of atoms can be shown through the phenomenon of Brownian motion (particles in a fluid move randomly); software for tracking a particular particle, plotting its position, and showing its randomness was tested successfully. The second experimental apparatus is a novel cloud chamber with a strong magnetic field. With this cloud chamber in the classroom, tracks from cosmic ray particles (as well as particles from other sources of radiation) can be visualized, and students can get a grasp of what the scintillators are detecting. This particular cloud chamber exposes students to a wide range of modern physics, from the existence of elementary particles to the relation between energy and matter through electron-positron pair production events. To introduce the first steps of data analysis, tutorials were developed to allow students and teachers alike to analyze the scintillator data and look for possible problems with the detectors or interesting correlations with external parameters such as barometric pressure. The tutorials were tested by two high school students, and their feedback was used to improve them.

Phase Separation Between Perpendicular and Parallel Ferromagnetic Ordering in a Quantum Well Model of Ga₁₋ₓMnₓAs. ALEX BRANDT (Minnesota State University Moorhead, Moorhead, MN, 56563) RANDY FISHMAN (Oak Ridge National Laboratory, Oak Ridge, TN, 37831)

Dilute magnetic semiconductors (DMS) are promising materials for spintronic applications (electronics that take advantage of spin to carry information). Ga₁₋ₓMnₓAs is the most commonly studied DMS system. The direction of the ferromagnetic ordering in a quantum well model of Ga₁₋ₓMnₓAs with doping x is determined. The magnetic order is perpendicular to the plane for low carrier densities, n < n₁, and becomes parallel to the plane for higher carrier densities, n > n₂. However, it is not known whether phase separation occurs between regions having carrier density n₁ and moments pointing perpendicular to the plane, and others having n₂ > n₁ and moments within the plane. The first two wave functions of the carriers in the quantum well are used in calculating the energy of the system as a function of angle and filling. The energy is then minimized with respect to angle, and the chemical potential is calculated. Phase separation occurs in the system if the chemical potential at two different fillings is the same. A Maxwell construction indicates that phase separation occurs between n₁ = 3.1% and n₂ = 5.3% for x = 0.35. The complete phase diagram for this system is calculated. Phase separation of the ferromagnetic phases in a quantum well may imply that some regions remain magnetically ordered up to higher temperatures. This might allow the design of spintronic devices that can operate at room temperature.

Programmatic Analysis of the FPD++ "Large" Cell Module. JONATHAN LANGDON (Stony Brook University, Stony Brook, NY, 11794) LESLIE BLAND (Brookhaven National Laboratory, Upton, NY, 11973)

The Forward Pion Detector second edition (FPD++) at Brookhaven National Laboratory's STAR experiment is composed of lead glass cells. These cells detect photons produced in high-energy collisions of gold nuclei or protons. The calorimeter contains two distinct types of cells, referred to as the "small cells" and the "large cells." The smaller cells are preferable due to the higher spatial granularity that can be achieved, and in the previous version of the Forward Pion Detector (FPD) they were the only type used. However, in order to improve the spatial coverage of the calorimeter in the future Forward Meson Spectrometer (FMS), large cells must be used. To assist in the characterization of the large cells, I have written analysis routines to assure the quality of the calibration and the future analysis of the large cells. The ultimate goal is a robust, quantitative comparison of the cluster properties of the two cell types. Theoretically, it is possible to show correlations between the transverse sizes of showers formed by photons impacting the calorimeter in either the small cell module or the large cell module. While awaiting calibration, it is possible to characterize some properties of clusters formed in the large cells. To accomplish this, I wrote a Physics Analysis Workstation (PAW) script along with a FORTRAN program to analyze raw data from the FPD++. I found evidence of cross-module pion decay. This gives confidence that once the calibration of the large cells is completed, it should be possible to draw comparisons between the two cell types and eventually allow thorough cross-type reconstructions.

Properties of Light Vector Mesons in Dense Nuclei. SCARLET NORBERG (Kent State University, Kent, OH, 44243) DENNIS WEYGAND (Thomas Jefferson National Accelerator Facility, Newport News, VA, 23606)

The strong force is a fundamental force that describes interactions in the nucleus and is expressed by Quantum Chromodynamics (QCD). In dense and hot matter, particle masses may change while the underlying symmetries remain intact; coupling constants may change as well. Thus properties of vector mesons, such as their masses and widths, may change in dense or hot matter. These modifications are related to the partial restoration of chiral symmetry at high density and/or temperature. The g7 experiment was performed using the CEBAF Large Acceptance Spectrometer (CLAS) at Jefferson Lab with a tagged photon beam of energies up to 4 GeV on various nuclear targets. Because the photon can penetrate the nuclear volume, the interaction occurs approximately uniformly throughout the nuclear medium. The properties of the lightest vector mesons, rho, omega, and phi, are investigated through their rare leptonic decay to e⁺e⁻. This decay channel is preferred over hadronic modes in order to eliminate final-state interactions of the decay products in the nuclear matter. The goal of this study is to examine any changes in the mass and/or width of the rho, omega, and phi produced in the nuclear medium. In this study, the mass and width of the rho meson have been measured in a light nuclear target (deuterium), an intermediate target (carbon), and a heavy target (iron). The spectral function of the rho meson was compared to the expected resonance form, a Breit-Wigner function modified by an electromagnetic propagator factor (1/m³) required by the e⁺e⁻ decay. It was found that the mass and width of the rho meson do not change in either carbon or deuterium. In iron, while the mass of the meson is stable, there is an indication that the width changes by about 2 sigma, consistent with collisional broadening effects. The results of g7 will have a major impact on the interpretation of experiments on low-mass e⁺e⁻ pairs currently being performed in high-energy heavy-ion collisions at the Relativistic Heavy Ion Collider (RHIC) at Brookhaven.
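
The spectral shape referred to above, a Breit-Wigner modified by the 1/m³ electromagnetic propagator factor, can be written down compactly. A minimal Python sketch using one common relativistic Breit-Wigner parameterization and nominal vacuum rho parameters, for illustration only:

    # Sketch: e+e- invariant-mass shape for the rho meson, a relativistic
    # Breit-Wigner weighted by the 1/m^3 propagator factor noted above.
    # One common parameterization with nominal vacuum parameters; not a fit.

    import numpy as np

    M0, GAMMA = 0.775, 0.149     # GeV: nominal rho mass and width

    def rho_spectral(m: np.ndarray) -> np.ndarray:
        """Unnormalized e+e- invariant-mass shape for the rho meson."""
        bw = m**2 * GAMMA**2 / ((m**2 - M0**2)**2 + M0**2 * GAMMA**2)
        return bw / m**3          # e+e- decay weights the shape by 1/m^3

    m = np.linspace(0.4, 1.2, 401)
    print(f"peak at m = {m[np.argmax(rho_spectral(m))]:.3f} GeV")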

Quark Propagation and Hadronization in the Nuclear Medium. GRANT LARSEN (University of Chicago, Chicago, IL, 60637) KAWTAR HAFIDI (Argonne National Laboratory, Argonne, IL, 60439)

Hadrons are color-neutral combinations of quarks. The theory of the strong, or color, force is not yet entirely clear on how quarks break free of gluons (the particles that bind hadrons together) and form new hadrons, a process known as hadronization. Because of confinement, an exotic effect of the strong force, no color-charged particle (such as a quark) can exist in isolation; therefore, to measure such particles, an indirect process is required. Firing electrons at targets in the right energy range can strike a quark hard enough to initiate hadronization, but at low enough energy that the process completes within the small space of the target's nucleus. This was done with a 5.014 GeV beam at Thomas Jefferson National Accelerator Facility in Newport News, Virginia, on targets of deuterium, carbon, iron, and lead. The number of pions emitted from each heavier target compared to the number emitted from deuterium (normalized by the total number of events recorded in each case) was measured for the carbon, iron, and lead targets. This multiplicity ratio was compared between the different nuclei and for pions that had different energies available, took different fractions of that energy, and lost different amounts of that energy to gluon radiation. Also measured were quantities directly related to the energy loss of quarks propagating in the nuclear medium and the distance they travel before becoming hadrons. Using these data, relationships between the momentum and energy the electron imparts to the quark, the energy lost to gluon radiation, the distance the quark travels before hadronizing, the fraction of the available energy that the hadron ends up with, and the likelihood that the nucleus reabsorbs the hadron can all be inferred. These results confirm theoretical expectations as well as the results of related experiments, extending them to different kinematic ranges, and give theorists more information with which to understand these processes. Still more interesting kinematic ranges, details of the effects of nuclear size, and the dependence of these processes on the flavor of the struck quark are all yet to be investigated. All of these goals can be met using the forthcoming 11 GeV upgrade to Jefferson Lab.

Radioactive Ion Beam Production from a Thorium Oxide Target. DAVID BUNCE (University of North Texas, Denton, TX, 76203) H. K. CARTER (Oak Ridge National Laboratory, Oak Ridge, TN, 37831)

Radioactive ion beams (RIBs) are crucial for nuclear astrophysics and nuclear structure studies. Such beams can be produced at the Holifield Radioactive Ion Beam Facility at ORNL. Previously, uranium carbide was used as a target to create RIBs. A thorium oxide (ThO₂) target is expected to have a higher yield than uranium carbide for isotopes with atomic masses between 80 and 90. Previously, an 8 g/cm³ target was used to create the RIB, but the yields were much lower than expected. A 0.8 g/cm³ target might produce better isotope yields; however, it is not dense enough to stop a 40 MeV proton beam in the current setup. The goal of this project is to correct for a fully stopped beam in the low-density target and to compare isotope yields between the high- and low-density targets. To correct for a fully stopped beam, the isotope yields are measured for two different proton energies, 30 MeV and 40 MeV; from this comparison, an estimate for a fully stopped beam can be made. For the measurement of the RIB isotope yield, individual masses are separated from the RIB using an online mass separator. The RIB is collected onto a tape, which switches from the collection position to a measurement position very rapidly. The beam composition is analyzed by gamma ray spectroscopy using a germanium detector and peak-fitting software. This project has shown that the isotope yields for the 30 MeV beam and the 40 MeV beam are very similar within the studied isotopes. Also, the low-density ThO₂ target had a much better isotope yield than the high-density target. However, the yield is still much lower than that of uranium carbide. It is clear that the physical properties of a ThO₂ target affect the isotope yields, but it is not known why the yields for ThO₂ are still not high enough to compete with a uranium carbide target for atomic masses 80-90. Further study is required to increase the total yield of isotopes with atomic masses of 80 through 90 beyond that of uranium carbide.

RF TE011 Cavity Prototyping and Characterization for Surface Science Studies. JARED NANCE (Beloit College, Beloit, WI, 53511) HAIPENG WANG (Thomas Jefferson National Accelerator Facility, Newport News, VA, 23606)

Understanding the properties of superconducting materials is crucial to the future development of accelerating structures. According to BCS theory, superconductors possess a small but nonzero surface impedance at RF frequencies. For superconducting particle accelerators operating at RF frequencies, such as the Continuous Electron Beam Accelerator Facility (CEBAF), this means that there is a mechanism by which the energy stored in the RF fields can be dissipated and the accelerator's performance degraded. The SRF (Superconducting RF) group at Thomas Jefferson National Accelerator Facility has developed a cavity that will be used for testing the surface impedance of flat superconducting samples. The cavity is optimized for operation at the superconducting temperature of 2 K. The intent of the design is to store energy in a dominant TE011 mode resonance at 7.5 GHz. The TE011 mode is well suited for surface science studies because of its inherently high ratio of stored energy to dissipated power (the quality (Q) factor), as well as its simple and well-understood cylindrical symmetry. The complex geometry of the cavity itself is not, however, exclusive to the TE011 mode; a large number of resonances are permitted, all with varying characteristics and symmetries. Without conclusive knowledge of which resonant mode the samples are exposed to, the field at the surface of the sample cannot be known exactly. Therefore, correct identification and characterization of the TE011 mode is critical for the experiment. Vector network analysis techniques were used to characterize the fields by monitoring the response of the scattering (S) parameters to perturbations of the cavity geometry. The S-parameters of a network (here, the cavity) are a measure of the reflectance and transmittance of that network; resonance conditions in the cavity can therefore be detected by interpreting peaks in the S-parameter spectra as stored or dissipated energy in the cavity. The results of the analysis indicate that the TE011 mode is in fact the dominant main cavity mode at 7.5 GHz, in agreement with theory.
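
A loaded quality factor can be read off a transmission (S21) resonance as Q_L = f0/df, where df is the full width between the -3 dB points. A minimal Python sketch on a synthetic Lorentzian trace, not measured cavity data:

    # Sketch: loaded Q from the -3 dB bandwidth of a transmission resonance.
    # The trace below is a synthetic Lorentzian, not measured S21 data.

    import numpy as np

    f = np.linspace(7.49e9, 7.51e9, 20001)           # Hz, around 7.5 GHz
    f0_true, q_true = 7.5e9, 1.0e5                   # assumed for the demo
    s21 = 1.0 / np.sqrt(1.0 + (2.0 * q_true * (f - f0_true) / f0_true)**2)

    peak = np.argmax(s21)
    half = s21[peak] / np.sqrt(2.0)                  # -3 dB level in amplitude
    above = np.where(s21 >= half)[0]
    bandwidth = f[above[-1]] - f[above[0]]           # full width at -3 dB
    print(f"Q_L ~ {f[peak] / bandwidth:.3g}")        # recovers ~1e5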

Sensitivity Study of the Relative Fraction of top/anti-top events Produced via Gluon-fusion. KELLY GREENLAND (Lock Haven University of Pennsylvania, Lock Haven, PA, 17745) DR. RICARDO EUSEBI (Fermi National Accelerator Laboratory, Batavia, IL, 60510)

In high-energy experimental particle physics, a primary goal is to study the particles that scientists create; the results of such studies help confirm the validity of the standard model of particle physics. To date, the standard model is the best representative model of fundamental particles, and confirming it reassures physicists that the model is indeed on the right path. At the Fermi National Accelerator Laboratory a circular particle accelerator, called the Tevatron, accelerates protons and anti-protons to approximately 0.999 times the speed of light before colliding them with each other. The protons' constituents, gluons and quarks, may interact with enough energy to create particles not initially present. If a quark and anti-quark interact, they annihilate and may produce a top/anti-top quark pair; a gluon may also fuse with another gluon to produce a top/anti-top pair. This study focuses on finding the accuracy with which the ratio of top/anti-top pairs created by quark/anti-quark annihilation to those created by gluon fusion can be measured. The degree of accuracy was determined via computer simulation. Initially, gluon-fusion and quark-annihilation top/anti-top events were generated by Monte Carlo simulations. Next, matrix element probabilities were determined for each event. Using these probabilities, a set of templates was created. Finally, in preparation to analyze the data, a likelihood was constructed based on the templates. The likelihood was never run on data, but only on Monte Carlo simulations, to obtain the degree of accuracy, or sensitivity, that this method can provide. Past studies claim an accuracy of 22%; this study shows a clear improvement, to approximately 15%. While this is an improvement, the expected degree of accuracy cannot yet ensure that the ratios predicted by the standard model for the production of top/anti-top events are indeed correct.

Simulation of a Very Long Baseline Neutrino Beam. JORDAN HEIM (Purdue University, West Lafayette, IN, 47906) MARY BISHAI (Brookhaven National Laboratory, Upton, NY, 11973)

Observations indicating that the number of solar neutrinos incident upon Earth is smaller than predicted by the Standard Solar Model have led to the theory that neutrinos experience a phenomenon known as oscillation, in which one flavor of neutrino becomes another. Confirming this theory is crucial to understanding the behavior of neutrinos. By employing a man-made neutrino beam, the energies and types of neutrinos can be carefully controlled, and the expected numbers of neutrinos, taking oscillation into account, are thus known. Computer simulations of the particle interactions throughout the beamline's many components were carried out with various pieces of software, including MARS, GEANT, and Fluka05. The results were then analyzed using packages such as GLoBES and ROOT to create histograms of the physical phenomena. Repeating these exercises while varying simulation parameters such as target properties, beam energy, and the length of the decay pipe allowed the optimal characteristics to be found. With a predetermined far-detector distance of 1300 km, it was found that as the target length and density are increased, more particle interactions can be observed, as expected. Also, a beam energy of forty to sixty GeV and a shorter but wider decay tunnel provide the best balance between reducing the unwanted high-energy contribution and maintaining a reasonable flux rate at the far detector. These results provide guidelines for building a successful project that will yield a value for the mixing parameter that dictates the oscillating behavior of the particles and will reveal CP (charge-parity) violation.
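
The baseline and energy optimization is driven by the standard two-flavor oscillation probability P = sin²(2θ) sin²(1.27 Δm² L/E). A minimal Python sketch with illustrative mixing parameters, not fitted values:

    # Sketch: two-flavor neutrino oscillation probability,
    # P = sin^2(2 theta) * sin^2(1.27 * dm2[eV^2] * L[km] / E[GeV]).
    # The mixing parameters below are illustrative assumptions.

    import math

    def p_oscillation(L_km: float, E_gev: float,
                      dm2_ev2: float = 2.5e-3, sin2_2theta: float = 0.1) -> float:
        """Two-flavor appearance probability for baseline L and energy E."""
        return sin2_2theta * math.sin(1.27 * dm2_ev2 * L_km / E_gev)**2

    for E in (1.0, 2.0, 4.0):   # neutrino energies in GeV
        print(f"E = {E:.1f} GeV, L = 1300 km: P = {p_oscillation(1300.0, E):.4f}")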

Simulation of Radioactive Ions for the Tevatron. JOSEPH BOUIE III (Southern University, Baton Rouge, LA, 70813) TERRENCE REESE (Fermi National Accelerator Laboratory, Batavia, IL, 60510)

To generate a very pure and intense neutrino beam for neutrino physics experiments, it has been proposed to use the beta-decay of radioactive ions stored in a high-energy decay ring. The original proposal was to use parts of the existing CERN infrastructure, namely the Proton Synchrotron (PS) and the Super Proton Synchrotron (SPS), to accelerate the ions to 100 GeV, but recent work has shown that it would be advantageous to go to even higher energies. The Tevatron, located at Fermi National Accelerator Laboratory, will be retired from collider physics in a few years and could be used for this purpose. However, the decay products from the ions will present a significant heat load to the superconducting magnets, which will limit the number of ions that can be accelerated. To understand where the limit lies, a simulation of heat deposition from the decay products is needed.

Simulation of the BaBar Drift Chamber. RACHEL ANDERSON (University of Wisconsin - Eau Claire, Eau Claire, WI, 54701) JOCHEN DINGFELDER (Stanford Linear Accelerator Center, Stanford, CA, 94025)

The BaBar drift chamber (DCH) is used to measure the properties of charged particles created in e⁺e⁻ collisions in the PEP-II asymmetric-energy storage rings by making precise measurements of position, momentum, and ionization energy loss (dE/dx). In October 2005, the PEP-II storage rings operated at a luminosity of 10×10³³ cm⁻² s⁻¹; the goal for 2007 is a luminosity of 20×10³³ cm⁻² s⁻¹, which will increase the readout dead time, causing uncertainty in drift chamber measurements to become more significant in physics results. The research described in this paper aims to reduce position and dE/dx uncertainties by improving our understanding of the BaBar drift chamber performance. A simulation program called GARFIELD is used to model the behavior of the drift chamber with adjustable parameters such as gas mixture, wire diameter, voltage, and magnetic field. By exploring the simulation options offered in GARFIELD, we successfully produced a simulation model of the BaBar drift chamber. We compared the time-to-distance calibration from BaBar to that calculated by GARFIELD to validate our model and to check for discrepancies between the simulated and calibrated time-to-distance functions; we found that for a 0° entrance angle there is very good agreement between calibrations, but at an entrance angle of 90° the calibration breaks down. Using this model, we also systematically varied the gas mixture to find one that would optimize chamber operation, which showed that the 80:20 helium:isobutane mixture is a good operating point, though more calculations are needed to confirm that it is optimal.

Simulations of the ILC Electron Gun and Electron Bunching System. CHRISTIAN HAAKONSEN (McGill University, Montreal, QC, 0) AXEL BRACHMANN (Stanford Linear Accelerator Center, Stanford, CA, 94025)

The International Linear Collider (ILC) is a proposed electron-positron collider, expected to provide insight into important questions in particle physics. A part of the global R&D effort for the ILC is the design of its electron gun and electron bunching system. The present design of the bunching system has two sub-harmonic bunchers, one operating at 108 MHz and one at 433 MHz, and two 5-cell 1.3 GHz (L-band) bunchers. This bunching system has previously been simulated using the Phase and Radial Motion in Electron Linear Accelerators (PARMELA) software, and those simulations indicated that the design provides sufficient bunching and acceleration. Due to the complicated dynamics governing the electrons in the bunching system, we decided to verify and expand the PARMELA results using the more recent and independent simulation software General Particle Tracer (GPT). GPT tracks the motion and interactions of a set of macro-particles, each of which represents a number of electrons, and provides a variety of analysis capabilities. To provide initial conditions for the macro-particles, a method was developed for deriving them from detailed simulations of particle trajectories in the electron gun, performed using the Egun software. For realistic simulation of the L-band bunching cavities, their electric and magnetic fields were calculated using the Superfish software and imported into GPT. The GPT simulations arrived at results similar to the PARMELA simulations for sub-harmonic bunching. However, using GPT we were unable to achieve efficient bunching performance from the first L-band bunching cavity. To correct this, the first L-band buncher cell was decoupled from the remaining four cells and driven as an independent cavity. With this modification we attained results similar to the PARMELA simulations. Although the modified bunching system design performed as required, the modifications are technically challenging to implement, and further work is needed to optimize the L-band buncher design.

Single and Double GEM X-Ray based Detectors. ERIC HUEY (Southern University A&M, Baton Rouge, LA, 70811) DR. DAVID PETER SIDDONS (Brookhaven National Laboratory, Upton, NY, 11973)

The purpose of this research is to present results obtained from testing single and double GEM X-ray gaseous detectors. The detectors were designed, assembled, and tested by the NSLS controls and detectors group at BNL. The single and double GEM detectors are intended to provide noise- and discharge-free amplification for Extended X-ray Absorption Fine Structure (EXAFS) measurements. A voltage of 450 volts across the single GEM provided a maximum gain of 900. A voltage of 350 volts across each of the double GEMs provided a gain of 9000 without stressing the GEMs and creating the possibility of discharges.

Software for The Perfect PID. KURTIS GEERLINGS (Michigan State University, East Lansing, MI, 48825) HOLGER MEYER (Fermi National Accelerator Laboratory, Batavia, IL, 60510)

The Main Injector Particle Production Experiment (MIPP, FNAL E-907) promises to shed light on many aspects of the particle physics world, from proton radiography to nuclear physics and from neutrino flux measurements to non-perturbative QCD. MIPP uses several detectors in order to obtain nearly 100 percent acceptance for charged particles over a vast momentum range, yielding nearly perfect particle identification. The detectors include two beam Cerenkov counters, a time projection chamber (TPC), a threshold Cerenkov counter, a ring-imaging Cerenkov counter (RICH), a time-of-flight system, and an electromagnetic/hadronic calorimeter. While the detectors are very important, without software to analyze the data no physics can be discovered. The MIPP software, written mainly in C++, includes packages to reconstruct tracks of charged particles in the TPC and packages to fit rings to the Cerenkov radiation in the RICH. Equally important is the Monte Carlo package, which allows one to compare simulations with data. To most effectively test the analysis software with the simulation, the Monte Carlo must output its data in the same format as the experimental data. This process is often called digitization. In the case of the threshold Cerenkov counter, it means converting exact timing information from the simulation into a time-to-digital converter (TDC) signal, and the number of photoelectrons detected into an analog-to-digital converter (ADC) signal. This way, the Monte Carlo information resembles the data coming from the detectors. Another important aspect is the testing and debugging of existing code. Since many people have contributed to the MIPP software, it is important to debug and test code written by others. This helps to ensure that the software does what it is supposed to do, and that results are not based on faulty analysis.
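
Digitization of this kind is a simple mapping from simulation truth to integer detector words. A minimal Python sketch; the TDC bin width, pedestal, and gain below are hypothetical constants, not MIPP calibration values:

    # Sketch: convert exact Monte Carlo hit times and photoelectron counts
    # into integer TDC/ADC words, so simulation output matches the data
    # format. All calibration constants here are hypothetical.

    TDC_NS_PER_COUNT = 0.5      # assumed TDC bin width, ns per count
    ADC_PEDESTAL = 100          # assumed pedestal offset, counts
    ADC_COUNTS_PER_PE = 20.0    # assumed gain, counts per photoelectron

    def digitize_hit(time_ns: float, n_photoelectrons: float) -> tuple[int, int]:
        """Convert exact simulation truth into (TDC, ADC) integer words."""
        tdc = int(time_ns / TDC_NS_PER_COUNT)
        adc = ADC_PEDESTAL + int(round(n_photoelectrons * ADC_COUNTS_PER_PE))
        return tdc, adc

    print(digitize_hit(37.3, 4.6))   # -> (74, 192)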

Speeding up the Raster Scanning Methods used in the X-Ray Fluorescence Imaging of the Archimedes Palimpsest. MANISHA TURNER (Norfolk State University, Norfolk, VA, 23504) UWE BERGMANN (Stanford Linear Accelerator Center, Stanford, CA, 94025)

Progress has been made at the Stanford Linear Accelerator Center (SLAC) toward deciphering the remaining 10-20% of the ancient Greek text contained in the Archimedes palimpsest. The text is known to contain valuable works by the mathematician, including the Method of Mechanical Theorems, the Equilibrium of Planes, and On Floating Bodies, as well as several diagrams. The only surviving copy of the text was recycled into a prayer book in the Middle Ages. The ink used to write on the goat-skin parchment is partly composed of iron, which is visible under x-ray radiation. To image the palimpsest pages, the parchment is framed and placed on a stage that moves according to the raster method. When the x-ray beam strikes the parchment, the iron in the ink is detected by a germanium detector, and the resulting signal is converted to a gray-scale image by the imaging program Rasplot. It is extremely important that each line of data is perfectly aligned with the line that came before it, because the image is scanned in two directions. The objectives of this experiment were to determine the best parameters for producing well-aligned images and to reduce the scanning time. Imaging half a page of parchment during previous beam time for this project took thirty hours. Equations were produced to evaluate count time, shutter time, and the number of pixels in this experiment. On Beamline 6-2 at the Stanford Synchrotron Radiation Laboratory (SSRL), actual scanning time was reduced by one fourth. The remaining pages were successfully imaged and sent to ancient Greek experts for translation.
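
The scan-time bookkeeping reduces to a per-pixel cost plus a per-line overhead. A minimal Python sketch with illustrative numbers, not the actual SSRL beamline settings:

    # Sketch: total raster-scan time as per-pixel count/move costs plus a
    # per-line shutter/turnaround cost. All numbers are illustrative
    # assumptions, not measured beamline parameters.

    def scan_time_hours(nx: int, ny: int, count_s: float,
                        move_s: float, shutter_s: float) -> float:
        """Estimate total raster time for an nx-by-ny pixel scan."""
        per_pixel = nx * ny * (count_s + move_s)
        per_line = ny * shutter_s
        return (per_pixel + per_line) / 3600.0

    # Halving the per-pixel count time roughly halves the dominant term:
    print(scan_time_hours(600, 600, 0.25, 0.05, 2.0))   # ~30 h
    print(scan_time_hours(600, 600, 0.10, 0.05, 2.0))   # ~15 h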

Star Formation and Feedback in Adaptive Mesh Refinement Cosmological Simulations. SAM SKILLMAN (Harvey Mudd College, Claremont, CA, 91711) BRIAN O'SHEA (Los Alamos National Laboratory, Los Alamos, NM, 87545)

Correctly simulating star formation within large-scale cosmological simulations is currently a problem of great interest, in part due to the recent explosion of observational data from projects such as the Sloan Digital Sky Survey, DEEP, and 2dF. Using ENZO, an adaptive mesh refinement (AMR) cosmological code with N-body dynamics plus hydrodynamics, I explore three star formation algorithms which lead to a range of star formation histories. Stellar feedback models are coupled to each of the star formation algorithms and provide thermal, kinetic, and metal feedback in our simulations. Each star formation algorithm uses several parameters, which I vary in order to understand their effect on the star formation history of the universe.

Structural and Functional Studies of Multidrug Binding Protein, AcrR. DENAE CLAMPITT (Western Illinois University, Macomb, IL, 61455) EDWARD YU (Ames Laboratory, Ames, IA, 50011)

This project addresses fundamental questions regarding the nature of multi-ligand recognition in transcriptional regulators. The primary target is the Escherichia coli AcrR repressor, which regulates the multidrug transporter AcrB. The 215-residue AcrR consists of two domains: the C-terminal multi-ligand binding domain and the N-terminal DNA binding domain. Upon binding a wide variety of structurally diverse ligands in the C-terminal region, AcrR undergoes a conformational change at the N-terminal domain, prohibiting its binding to the target DNA. The net result is over-expression of the AcrB transporter, which, in turn, promotes efflux from the cell, thus protecting it from toxic substances. How can AcrR and other transcriptional repressors recognize a variety of toxic chemicals? To gain insight into the mechanism that AcrR uses to recognize multiple ligands, we crystallized the AcrR protein and studied its function using circular dichroism and tryptophan fluorescence measurements. We also measured the binding affinities of rhodamine-6G, ciprofloxacin, and ethidium bromide with AcrR. As AcrR is capable of recognizing a variety of toxic chemicals, this research has the potential to contribute to the development of protein-based chemical sensors.

Study of the Effect of Larger Wire Diameter on Drift Chamber Performance. NATALIE HANSEN (Brigham Young University, Provo, UT, 84606) MAC MESTAYER (Thomas Jefferson National Accelerator Facility, Newport News, VA, 23606)

A wire chamber, used to detect charged particles, is essentially a gas-filled box traversed by anode (sense) and cathode (field) wires held at a high electric potential difference. Electrons are freed as a particle ionizes gas atoms in its path; these electrons travel along field lines and register as a current when they reach the sense wires. The distance between wires, the gas composition, and the voltage difference between wires all determine the detection efficiency. The drift chambers for the Continuous Electron Beam Accelerator Facility (CEBAF) Large Acceptance Spectrometer (CLAS) detector at Jefferson Lab use a 90% argon and 10% CO₂ gas mixture and sense wires of 20 µm diameter. Proposed changes for a planned upgrade include increasing the sense wire diameter to 30 µm, which, however, may lead to increased noise levels. This project focused on assembling a prototype chamber to test the effect of changing the sense wire diameter. A previously built chamber was restrung with 30 µm wire, leaving one 20 µm wire for comparison; the associated electronic equipment and gas system were set up, and the chamber is operational. The experimental results were plots of count rate versus voltage for different discriminator settings. Individual plots show a hint of a high-voltage plateau around 1500-1525 V, despite the fact that much of the data were inconsistent, presumably due to bursts of external electronic noise. When the hit rate was plotted versus gas gain rather than voltage, the individual graphs for different discriminator settings came very close to a universal curve. Any slight deviations from that curve may be due to the increased probability of spontaneous electron emission (noise) from the field wire. Increasing the sense wire radius and the voltage seems to yield only a modest increase in noise levels.

Suitability of a New Calorimeter for Identifying Exotic Meson Candidates. CRAIG BOOKWALTER (Florida State University, Tallahassee, FL, 32306) PAUL EUGENIO (Thomas Jefferson National Accelerator Facility, Newport News, VA, 23606)

Exotic mesons, particles with quantum numbers inaccessible to conventional quark-model mesons, are predicted by quantum chromodynamics (QCD), but past experiments seeking to identify exotic candidates have produced controversial results. The HyCLAS experiment (E04005) at Thomas Jefferson National Accelerator Facility (TJNAF) proposes the use of the Continuous Electron Beam Accelerator Facility (CEBAF) Large Acceptance Spectrometer (CLAS) in Hall B to study the photoproduction of exotic mesons. However, the base detector package at CLAS is not ideal for observing and measuring neutral particles, particularly at forward angles. The Deeply Virtual Compton Scattering (DVCS) experiment at TJNAF has commissioned a new calorimeter for detecting small-angle photons, but studies must be performed to determine its suitability for a meson spectroscopy experiment. The ηπ system has been under special scrutiny in the community as a source of potential exotics, so the new calorimeter's ability to reconstruct these resonances must be evaluated. To achieve this, the invariant masses of showers in the calorimeter are reconstructed. Also, two electroproduction reaction channels analogous to photoproduction channels of interest to HyCLAS are examined in DVCS data. It is found that, while not ideal, the new calorimeter will allow access to additional reaction channels, and its inclusion in HyCLAS is warranted. Results in basic shower reconstruction show that the calorimeter has good efficiency in resolving π⁰ decays, but its η reconstruction is not as strong. In the ep → epπ⁰η channel, preliminary reconstruction of the ηπ⁰ system shows faint signals in the a₀(980) region. In the ep → enπ⁺η channel, preliminary reconstruction of the ηπ⁺ system gave good signals in the a₀(980) and a₂(1320) regions, but statistics were poor. While more analyses are necessary to improve statistics and remove background, these preliminary results support the claim that the DVCS calorimeter will be a valuable addition to CLAS for upcoming exotic meson searches in photoproduction.
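
Reconstructing the invariant mass of a photon pair from calorimeter showers uses m = sqrt(2 E1 E2 (1 - cos θ)). A minimal Python sketch with an illustrative symmetric π⁰ decay:

    # Sketch: invariant mass of two photon showers with energies E1, E2 and
    # opening angle theta. The pi0 example below uses illustrative numbers.

    import math

    def diphoton_mass_gev(e1: float, e2: float, theta_rad: float) -> float:
        """Invariant mass of two massless clusters (photon showers)."""
        return math.sqrt(2.0 * e1 * e2 * (1.0 - math.cos(theta_rad)))

    # Symmetric decay of a 2 GeV pi0: each photon carries 1 GeV and the
    # opening angle satisfies m = 2 E sin(theta / 2).
    m_pi0 = 0.1350                          # GeV
    theta = 2.0 * math.asin(m_pi0 / 2.0)    # for E1 = E2 = 1 GeV
    print(f"{diphoton_mass_gev(1.0, 1.0, theta):.4f} GeV")   # ~0.1350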

Systematic Search for Lanthanum or Bismuth Oxide Scintillators. ALEISHA BAKER (North Carolina Agricultural & Technical State University, Greensboro, NC, 27411) STEPHEN DERENZO (Lawrence Berkeley National Laboratory, Berkeley, CA, 94720)

More than ever, there is a need for new or improved scintillators to keep up with advancements in radiation detection technology. Many known scintillators decay slowly, have low light output, or are difficult to manufacture. In the effort to discover new, more nearly ideal scintillators, researchers must search for, synthesize, and characterize compounds that exhibit luminescent traits. This research focused on 12 bismuth and lead compounds. Candidates were synthesized by means of a solid-state chemistry technique known as the ceramic method. The products were characterized using x-ray diffraction, fluorescence spectroscopy, and pulsed x-ray measurements. The compounds studied did not show the attributes of a good scintillator. However, since several of them have band gaps greater than 3.5 eV, they may be modified to form semiconductor scintillators in the future.

Systematic Studies of the Faraday Effect on Glass Interfaces. DANIEL CARRERO (Stony Brook University, Stony Brook, NY, 11794) DR. CAROL SCARLETT (Brookhaven National Laboratory, Upton, NY, 11973)

An investigation into anomalous space-time curvature has produced strong signals of consequence to this experiment. These signals must be studied to determine whether they arise from photon interactions with an alternating magnetic field, and hence from anomalous space-time curvature, or from other effects, including systematic anomalies. Among the systematics, this work studied birefringence due to a possible Faraday effect in the glass windows used to enclose a vacuum of 1x10^-9 Torr. A Relativistic Heavy Ion Collider (RHIC) quadrupole magnet housing this vacuum will be ramped at a frequency of 80 mHz, and its residual magnetic field, on the order of 200 gauss, will present a changing flux through the glass caps at its ends. Photons from an external laser propagating through the vacuum are subject to any birefringent effects of the glass. Such effects can deflect the beam at the characteristic 80 mHz frequency and, when resolved on a photoreceiver, can mask any true, non-systematic effects. To study this, an independent test system that isolates the glass windows was built. Two permanent magnets, each of magnitude 0.1 T, were mounted on a motorized rotating stage positioned 2 cm from the window interface, such that the overall magnetic field through the window was on the order of 200 gauss. A laser beam with a wavelength of 514 nm was then propagated through the window and the magnetic field onto a photoreceiver. With the stage rotating at a frequency of 80 mHz, the changing magnetic flux would resolve any birefringence onto the photoreceiver at that characteristic frequency. Data were collected from the photoreceiver in 5-minute intervals totaling 25 hours and were Fourier analyzed with Fortran code so that any signal in the 0-0.1 Hz range could be identified. Within the sensitivity of the photoreceiver and the analysis code, no signal was found. This result rules out one systematic effect that could produce illegitimate results and opens other avenues for investigation, ultimately helping to minimize systematics and isolate true physical effects.
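The frequency-domain search amounts to Fourier-transforming the photoreceiver record and inspecting the low-frequency band for a peak at the 80 mHz rotation frequency. A minimal sketch in Python, with synthetic noise standing in for one 5-minute record:

    import numpy as np

    dt = 0.1                                    # sample interval (s)
    t = np.arange(0.0, 300.0, dt)               # one 5-minute record
    data = np.random.normal(0.0, 1.0, t.size)   # noise-only stand-in; a real
                                                # Faraday signal would add a
                                                # peak at 80 mHz

    amp = np.abs(np.fft.rfft(data))
    freqs = np.fft.rfftfreq(t.size, d=dt)

    band = (freqs > 0.0) & (freqs <= 0.1)
    peak = freqs[band][np.argmax(amp[band])]
    print(f"strongest component in the 0-0.1 Hz band: {1000.0 * peak:.0f} mHz")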

The ChicaneTracker Module in the ORBIT Injection Upgrade. DANIEL COUPLAND (Albion College, Albion, MI, 49224) SARAH COUSINEAU (Oak Ridge National Laboratory, Oak Ridge, TN, 37831)

The accumulator ring of the Spallation Neutron Source (SNS) will accumulate up to 10^14 protons at a time from a 1 GeV linear accelerator to produce neutrons by spallation. To satisfy radiation hazard requirements, no more than 0.02% of the beam in the SNS ring may be lost through collisions with the beam pipe. Since most accelerators lose 1-2% of their beam, the SNS must control beam loss far more effectively than any previous high-intensity accelerator, which requires a very precise understanding of the mechanisms that contribute to loss. One of the main sources of loss is the injection area of the ring, where H- particles are converted to protons by passing through a thin foil that strips off the electrons. Some H- particles are not fully stripped and survive briefly as H0 atoms in excited quantum states before decaying in the ring's magnetic fields. If the magnetic fields are not optimized, particles that decay may be deflected from the paths intended for either H- particles or protons and be lost in the accelerator. The SNS project will upgrade the beam power in 2010, and this will require a redesign of the injection region of the ring. New design schemes will be tested primarily with the Objective Ring Beam Injection and Tracking Code (ORBIT), a modular simulation package designed at SNS specifically for modeling high-intensity rings. In this project, the ORBIT code was upgraded to allow precise simulations of the injection region. This work included installing code that performs detailed tracking of all three relevant states of hydrogen through the dipole magnetic fields in the injection region and determines, on each step, which neutral particles decay into protons. The physical configuration of the chicane is customizable, allowing particle positions to be compared to the real location of the beam pipe and losses to be determined. The code is configured to allow parallel runs on multiple machines to accommodate computationally intensive tests. Results from this code were benchmarked against previous studies of excited-state decay by Danilov, Galambos, et al., and showed excellent agreement. Future modifications to the ORBIT injection region code may include a foil heating routine and propagation through higher-order magnetic fields. The updated ORBIT code will provide an important tool for optimizing the new injection region design.
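The neutral-particle bookkeeping can be illustrated schematically: step an H0 through the injection region and decide stochastically on each step whether it strips to a proton. A minimal sketch in Python; the lifetime, step size, and region length are placeholders, not SNS design numbers, and the real excited-state lifetimes depend on the local field:

    import math
    import random

    def track_h0(lifetime_s, step_m, length_m, beta=0.875):
        """Return the distance (m) at which an H0 decays, or None if it survives.

        beta=0.875 corresponds to a ~1 GeV kinetic-energy proton.
        """
        gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
        c = 2.998e8
        step_time = step_m / (beta * c)          # lab-frame time per step
        p_decay = 1.0 - math.exp(-step_time / (gamma * lifetime_s))
        z = 0.0
        while z < length_m:
            z += step_m
            if random.random() < p_decay:
                return z
        return None

    # Fraction of a hypothetical excited state stripping inside a 2 m region:
    results = [track_h0(1e-9, 0.001, 2.0) for _ in range(10000)]
    stripped = sum(r is not None for r in results)
    print(f"stripped in region: {100.0 * stripped / len(results):.1f}%")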

The Development of a Calibration System for Pulsed Laser Tests of Silicon Pixel Detectors for the International Linear Collider. KATHERINE PHILLIPS (North Carolina State University, Raleigh, NC, 27607) MARCO BATTAGLIA (Lawrence Berkeley National Laboratory, Berkeley, CA, 94720)

The International Linear Collider is the next large-scale facility in accelerator particle physics, designed to study the Higgs sector and new physics beyond the Standard Model. Efforts are underway to develop silicon pixel detectors for use in its vertex tracker. These detectors must meet strict specifications, including the requirement that the chip thickness not exceed about 50 microns. To assess the feasibility of meeting this requirement with monolithic CMOS pixel sensors, a number of prototype chips, named MimosaV, have been back-thinned from 450 microns to 50 and 39 microns. The MimosaV chip response is tested with electrons, radioactive sources, and lasers before and after back-thinning. Each laser consists of a solid-state diode coupled to an optical fiber terminated with a collimating lens doublet. The goal of this project was to set up a laser calibration system to ensure that any variations in chip response are due to the chip post-processing and not to instabilities of the laser source. The experimental setup consists of a 1060 nm laser, a photodiode to read the laser's output, readout electronics, and a data acquisition (DAQ) analog-to-digital converter board. Laser pulses of 100 ns length are converted to voltages by the photodiode, and the signal is sent to the readout electronics, which amplify, shape, and lengthen the pulse. The signal then goes to the DAQ board, which samples the input every 10 microseconds. A LabVIEW program stores peak values and computes the average peak value, the standard deviation, and the error; statistical analysis removes peaks due to noise. It was observed that the setup requires a twenty-minute warm-up before it gathers precise and consistent peak values. After this warm-up period, however, the peak values are stable to better than 1%, well within the tolerance of the measurements. When the pulse length is shortened, the peak value decreases nonlinearly. The stability could be improved by adding reset functionality to the readout electronics, and more investigation is needed into possible causes of the warm-up time. It will also be necessary to develop a more consistent way to align and secure the laser in place. The present laser calibration system will be added to the chip test setup and will provide calibration for all future tests of monolithic pixel chips.
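The peak-value bookkeeping done by the LabVIEW program can be mimicked in a few lines: collect digitized peaks, reject noise outliers, and report the mean, spread, and standard error. A minimal sketch in Python with simulated peak voltages in place of photodiode data:

    import numpy as np

    peaks = np.random.normal(2.50, 0.02, 500)   # stand-in peak voltages (V)
    peaks[::50] = 0.3                           # occasional noise-only triggers

    # Reject peaks far from the median, as the noise-removal step does.
    good = peaks[np.abs(peaks - np.median(peaks)) < 3.0 * np.std(peaks)]

    mean = good.mean()
    std = good.std(ddof=1)
    err = std / np.sqrt(good.size)
    print(f"peak = {mean:.4f} +/- {err:.4f} V  (spread {100.0 * std / mean:.2f}%)")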

The Effect of the Earth's Atmosphere on LSST Photometry. ALEXANDRA RAHLIN (Massachusetts Institute of Technology, Cambridge, MA, 02139) DAVID L. BURKE (Stanford Linear Accelerator Center, Stanford, CA, 94025)

The Large Synoptic Survey Telescope (LSST), a ground-based telescope currently under development, will allow a thorough study of dark energy by measuring the expansion rate of the universe and the large-scale structure of its matter more completely and accurately than before. The telescope uses a broadband photometric system of six wavelength bands to measure the redshifts of distant objects. The Earth's atmosphere makes it difficult to acquire accurate data, since some of the light passing through is scattered or absorbed by Rayleigh scattering, molecular absorption, and aerosol scattering. Changes in the atmospheric extinction due to each of these three processes were simulated by altering the parameters of a sample atmospheric distribution, and spectral energy distributions of standard stars were used to simulate data acquired by the telescope. The effects of changes in the atmospheric parameters on the photon flux through each wavelength band were examined to determine which atmospheric conditions must be monitored most closely to achieve the desired 1% uncertainty on flux values. Changes in the Rayleigh scattering parameter produced the most significant variations in the data; the molecular volume density (pressure) must therefore be measured with at most 8% uncertainty. The molecular absorption parameters produced less significant variations and could be measured with at most 62% uncertainty. The aerosol scattering parameters produced almost negligible variations and could be measured with >100% uncertainty. These atmospheric effects were found to be almost independent of the redshift of the light source. The results of this study will aid the design of the atmospheric monitoring systems for the LSST.
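The simulated measurement is, in essence, a synthetic-photometry integral: the band flux is the stellar spectrum times the atmospheric transmission times the band response, integrated over wavelength. A minimal sketch in Python; all three curves are toy functions, not the study's SEDs or extinction model:

    import numpy as np

    wl = np.linspace(300.0, 1100.0, 2001)                # wavelength grid (nm)
    dwl = wl[1] - wl[0]
    sed = 1.0 / wl ** 2                                  # toy stellar spectrum
    g_band = np.exp(-0.5 * ((wl - 480.0) / 60.0) ** 2)   # toy g-like bandpass

    def rayleigh(scale):
        # Toy Rayleigh extinction with a lambda^-4 optical depth.
        return np.exp(-scale * (wl / 400.0) ** -4)

    def band_flux(transmission):
        return np.sum(sed * transmission * g_band) * dwl

    nominal = band_flux(rayleigh(0.10))
    perturbed = band_flux(rayleigh(0.10 * 1.08))         # +8% molecular density
    print(f"flux shift from an 8% Rayleigh change: "
          f"{100.0 * (perturbed / nominal - 1.0):+.2f}%")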

The Efficiency of Stripe Removal from a Galactic Map. CAROLYN MELDGIN (Harvey Mudd College, Claremont, CA, 91711) GEORGE SMOOT (Lawrence Berkeley National Laboratory, Berkeley, CA, 94720)

The Galactic Emissions Mapping project (GEM) aims to isolate the radiation emitted by the Milky Way and other galaxies from the cosmic microwave background radiation. Two-dimensional scans of the sky at different frequencies allow the cosmic microwave background to be separated from the galactic signals. Signals are most informative in the microwave band from 0.5 to 5 GHz, where the intensity of the galactic signal relative to the cosmic microwave background changes most rapidly. In recent years, widely available technologies such as cell phones and microwave ovens have increased terrestrial noise, making the galactic signal extremely difficult to analyze. GEM therefore uses data taken in 1999, when terrestrial sources of microwaves were less common. These scans were taken at 2.3 GHz in Cachoeira Paulista, Brazil, and contain striped flaws caused by microwaves from a nearby radio station diffracting over the top of the shielding around the antenna. This paper describes the use of filtering techniques based on Fourier transforms to reduce or remove the striped flaws. It also describes a metric, based on the Principle of Maximum Entropy, for evaluating the efficiency of different stripe-removal filters.
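Fourier-domain stripe removal can be shown in miniature: transform the map, notch out the narrow frequency components responsible for the stripes, and transform back. A minimal sketch in Python with a synthetic map and an assumed stripe frequency:

    import numpy as np

    sky = np.random.normal(0.0, 1.0, (256, 256))          # stand-in sky map
    stripes = 0.5 * np.sin(2.0 * np.pi * 12.0 * np.arange(256) / 256)[:, None]
    striped_map = sky + stripes

    f = np.fft.fft2(striped_map)
    f[10:15, 0] = 0.0        # notch the stripe frequencies (rows 10-14) ...
    f[-14:-9, 0] = 0.0       # ... and their complex conjugates
    cleaned = np.real(np.fft.ifft2(f))

    print(f"variance removed with the stripes: "
          f"{np.var(striped_map) - np.var(cleaned):.3f}")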

The Occasional Appearance of Carbon in Type Ia Supernovae. JEROD PARRENT (University of Oklahoma, Norman, OK, 73071) R. C. THOMAS (Lawrence Berkeley National Laboratory, Berkeley, CA, 94720)

Recent observations by the Nearby Supernova Factory reveal C II absorption features below 14,000 km/s in the early photospheric spectra of the Type Ia supernova (SN Ia) 2006D. The largest of these features is attributed to C II λ6580 and dissipates before maximum brightness. Only a handful of SNe Ia have been observed to contain the C II λ6580 feature; this is the largest carbon feature observed in a SN Ia thus far, and it is additional evidence that carbon, and hence unburned material, can be detected at early times in SN Ia spectra. Presently, certain 3D hydrodynamical SN Ia deflagration models contain unburned carbon and oxygen below the W7 (1D deflagration model) cutoff of 14,000 km/s. These observations support explosion models that retain unburned material at low velocities, such as 3D deflagrations and certain delayed detonations. The sporadic nature of observed carbon features in SNe Ia and its implications for explosion models will be discussed. We also emphasize the importance of obtaining spectropolarimetry data in order to test for asymmetries in the supernova.

The Systematic Study of the Faraday Effect on Glass. JOSEPH HEARD (Community College of Philadelphia, Philadelphia, PA, 19150) DR. CAROL SCARLETT (Brookhaven National Laboratory, Upton, NY, 11973)

In the midst of ongoing experiments to detect axion interactions, a question arose concerning systematic effects on the experiment: specifically, whether deviation of a laser beam traversing a vacuum can skew the data. The concern stems from birefringence induced by electromagnetic fields at the glass interfaces enclosing the vacuum, and a separate experiment dedicated to determining whether such an effect exists was needed. The experiment used a Newport motorized rotating stage on which two permanent magnets, each of magnitude 0.1 T, were mounted. The stage was placed 2 cm from a glass window interface, and the magnetic field generated at the window was measured at 200 gauss. A laser beam of wavelength 514 nm was directed through the glass and the magnetic field onto a photoreceiver, and the stage was set rotating at a frequency of 80 mHz. The photoreceiver collected data every 0.1 seconds for 25 hours, in five-minute cycles; the 300 data periods were then averaged together to suppress random fluctuations. The data were analyzed using Fast Fourier Transforms (FFTs): the FFT of the raw data produces plots of amplitude versus frequency, and because the slow drift of the laser appears in these plots as a smooth 1/f-like curve, taking the derivative of the curve flattens out the drift. Random fluctuations are reduced significantly, improving the resolution of any true signal. The results make it apparent that any Faraday effect present does not deviate the laser beam's path enough to influence the overall experimental data. Thus, axion interaction experiments that employ this configuration do not suffer from corrupted data due to this effect.
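The drift-flattening trick can be demonstrated directly: the amplitude spectrum of slowly drifting data falls off smoothly, so its derivative with respect to frequency suppresses the drift while a narrow real signal survives as a sharp feature. A minimal sketch in Python with synthetic data and an injected 80 mHz test tone:

    import numpy as np

    dt = 0.1
    t = np.arange(0.0, 300.0, dt)
    data = 0.01 * t + np.random.normal(0.0, 0.5, t.size)  # slow drift + noise
    data += 0.2 * np.sin(2.0 * np.pi * 0.08 * t)          # 80 mHz test signal

    freqs = np.fft.rfftfreq(t.size, d=dt)
    amp = np.abs(np.fft.rfft(data))
    flattened = np.gradient(amp, freqs)     # d(amplitude)/d(frequency)

    band = (freqs > 0.05) & (freqs < 0.1)
    found = freqs[band][np.argmax(np.abs(flattened[band]))]
    print(f"sharpest feature in the search band: {1000.0 * found:.0f} mHz")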

The Target View Screen and Related Imaging Systems at the Spallation Neutron Source. KATHLEEN GOETZ (Middlebury College, Middlebury, VT, 05753) THOMAS J. SHEA (Oak Ridge National Laboratory, Oak Ridge, TN, 37831)

The most intuitive method for monitoring a beam is to create a dynamic image at the location of interest. At present, the Target View Screen (TVS) system is employed to monitor the proton beam immediately before the spallation target. The system has produced very useful data during the commissioning of SNS but will soon succumb to radiation damage, so a similar radiation-resistant system is being designed for permanent installation. Other proposed systems using similar imaging technology include a view screen for the ring-to-target beam transport (RTBT) line, a pepper-pot emittance meter for the linear accelerator front end, and a neutron beam imager for neutron beam line six. A large portion of time is also devoted to running the TVS system in its last days of operation and analyzing the data produced; approximately one terabyte of data has been acquired.

Thermal Analysis of the ILC Superconducting Magnets. IAN ROSS (Rose-Hulman Institute of Technology, Terre Haute, IN, 47803) JOHN WEISEND (Stanford Linear Accelerator Center, Stanford, CA, 94025)

Critical to a particle accelerator's functioning, superconducting magnets serve to focus and steer the particle beam. The Stanford Linear Accelerator Center (SLAC) has received a prototype superconducting quadrupole, designed and built by the Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas (CIEMAT), to be evaluated for the International Linear Collider (ILC) project. To function properly, the magnet must be maintained at cryogenic temperatures by a cooling system using liquid nitrogen and liquid helium. The cool-down period of a low-temperature cryostat is critical to the success of an experiment, especially for a prototype setup such as this one: the magnet and the dewar each have unique heat leaks and material properties, and these differences can lead to tremendous thermal stresses. The system was analyzed mathematically, yielding recommended liquid helium and liquid nitrogen flow rates for the magnet's cool-down to 4.2 K, along with a reasonable estimate of how long the cool-down will take. With a nitrogen flow rate of ten gas-liters per minute, the nitrogen shield will take approximately five hours to cool to 77 K. With a gaseous helium flow rate of sixty liters per minute, the magnet will take at least nineteen hours to cool to 4.2 K.
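The flow-rate estimate is, at heart, an energy balance: the time to cool a mass is the heat to be removed divided by the cooling power of the gas stream. A back-of-the-envelope sketch in Python for the nitrogen stage; the cold mass, average specific heat, and mass flow below are illustrative assumptions, not CIEMAT design values:

    # Nitrogen-shield stage, 300 K -> 77 K.
    mass_kg = 500.0          # assumed cold mass
    avg_cp = 200.0           # rough average specific heat, J/(kg K)
    dT = 300.0 - 77.0

    flow_kg_s = 0.012        # assumed nitrogen gas mass flow
    gas_cp = 1040.0          # specific heat of N2 gas, J/(kg K)
    cooling_w = flow_kg_s * gas_cp * dT   # ideal (upper-bound) cooling power

    hours = mass_kg * avg_cp * dT / cooling_w / 3600.0
    print(f"estimated nitrogen-stage cool-down: {hours:.1f} h")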

Time Synchronization Between Data Acquisition Boards Using GPS Signals. JOSEPH WYCKOFF (Eastern Illinois University, Charleston, IL, 61920) THOMAS JORDAN (Fermi National Accelerator Laboratory, Batavia, IL, 60510)

Because cosmic rays move at nearly the speed of light, precise timing is essential in the detectors used to measure them. QuarkNet operates many sites around the country running cosmic-ray shower studies. It uses data acquisition (DAQ) boards to convert timing signals from photomultiplier tubes (PMTs) into an ASCII format readable with a terminal-emulator program on a computer. The sites upload their data to a central server so users can run analyses using data from all over the nation. Timing is therefore important so that students using data from different GPS sources can sort the data and determine whether an event happened at the same time in two different areas. An experiment was performed using two different GPS antennae to test the timing difference between two DAQ boards that received signals at identical times; a pulse generator set at 10 Hz mimicked the PMT signal. The experiment was run for two thirty-minute periods, with the GPS antennae swapped between the two DAQ boards after the first period. The results showed a timing difference of 121 nanoseconds between the detectors, with one of the detectors having a bias of 12 nanoseconds. The experiment showed that, using the QuarkNet detectors, a user could confidently say which detector fired first only if the signals differ by more than 100 ns. These results agree with previous calculations of the timing difference between detectors using separate GPS antennae. Further tests should examine whether the timing difference between DAQ boards can be reduced, and whether it poses a problem for ongoing research.
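The offset measurement reduces to differencing the timestamps the two boards record for the same pulses. A minimal sketch in Python, with simulated timestamp streams in place of the DAQ output; the jitter and bias values are assumptions chosen to mirror the quoted scale:

    import numpy as np

    n = 18000                                          # 30 min at 10 Hz
    truth = np.arange(n) * 0.1                         # common pulse times (s)
    board_a = truth + np.random.normal(0.0, 60e-9, n)  # assumed GPS jitter
    board_b = truth + 12e-9 + np.random.normal(0.0, 60e-9, n)  # + 12 ns bias

    diff_ns = (board_b - board_a) * 1e9
    print(f"mean offset: {diff_ns.mean():.1f} ns, spread: {diff_ns.std():.1f} ns")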

Upgrading the Digital Electronics of the Bunch Current Monitors at the Stanford Linear Accelerator Center. JOSH KLINE (Sacramento State, Sacramento, CA, 95813) ALAN FISHER (Stanford Linear Accelerator Center, Stanford, CA, 94025)

This paper describes the testing of the upgrade prototype for the bunch current monitors (BCMs) in the PEP-II storage rings at the Stanford Linear Accelerator Center (SLAC). Bunch current monitors measure the charge in the electron and positron bunches traveling in particle storage rings. The BCMs in the PEP-II rings need to be upgraded because components of the current system have failed and are known to be failure-prone with age, and several of the integrated circuits are no longer produced, making repairs difficult if not impossible. The main upgrade replaces twelve old (1995) field programmable gate arrays (FPGAs) with a single Virtex II FPGA. The prototype was tested using computer synthesis tools, a commercial signal generator, and a fast pulse generator.

Using Boosted Decision Trees to Separate Signal and Background in B→Xsγ Decays. JAMES BARBER (University of Massachusetts, Amherst, Amherst, MA, 01003) PHILIP BECHTLE (Stanford Linear Accelerator Center, Stanford, CA, 94025)

The measurement of the branching fraction of the flavor-changing neutral current transition B→Xsγ can be used to expose physics outside the Standard Model. A precise measurement of this inclusive branching fraction requires effective separation of signal and background in the data. To achieve better separation, an algorithm based on Boosted Decision Trees (BDTs) was implemented. Using Monte Carlo simulated events, 'forests' of trees were trained and tested with different sets of parameters, and this parameter space was studied with the goal of maximizing the figure of merit Q, the measure of separation quality used in this analysis. It is found that using 1000 trees, with 100 values tested for each variable at each node and 50 events required for a node to continue splitting, yields the highest figure of merit, Q = 18.37.
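The training loop can be sketched with an off-the-shelf implementation; here scikit-learn's histogram-based gradient boosting stands in for the analysis' own BDT code, a toy two-class dataset stands in for the Monte Carlo, and Q is taken to be S/sqrt(S+B), an assumed definition:

    import numpy as np
    from sklearn.ensemble import HistGradientBoostingClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0.5, 1.0, (5000, 8)),    # toy "signal"
                   rng.normal(-0.5, 1.0, (5000, 8))])  # toy "background"
    y = np.array([1] * 5000 + [0] * 5000)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5,
                                              random_state=0)

    # Hyperparameters mirroring the quoted best point: 1000 trees, 100 cut
    # values tested per variable, at least 50 events per node.
    bdt = HistGradientBoostingClassifier(max_iter=1000, max_bins=100,
                                         min_samples_leaf=50)
    bdt.fit(X_tr, y_tr)

    score = bdt.predict_proba(X_te)[:, 1]
    S = np.sum(score[y_te == 1] > 0.9)   # signal passing the cut
    B = np.sum(score[y_te == 0] > 0.9)   # background passing the cut
    print(f"Q = S / sqrt(S + B) = {S / np.sqrt(S + B):.2f}")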

Using LabVIEW for Complete Systems Control of an ECR Thin Film Deposition System. BRANDON BENTZLEY (The College of New Jersey, Ewing, NJ, 08628) ANDREW POST-ZWICKER (Princeton Plasma Physics Laboratory, Princeton, NJ, 08543)

Interest in studying an electron cyclotron resonance (ECR) deposition system is fueled by ECR's better deposition rates and precision relative to traditional deposition systems; however, the ECR deposition system is extremely complex and requires that its gas flow control, magnets, 2.45 GHz source, and other components all work in concert. Operating the system requires an experienced user who constantly compensates for the dynamics of the system, such as argon gas pressure and magnetic field strength. Computer automation, for example with LabVIEW, permits the system to operate itself, allowing for less experienced operators, reproducible conditions, and a safer working environment. LabVIEW, in conjunction with National Instruments hardware, sends and receives voltage signals and serial commands to control microwave power, magnet current, target bias voltage, vacuum and compressed-gas valve position, chamber pressure, and robotics. The VI takes many factors into account simultaneously, such as chamber pressure, ion current, and spectroscopic data, in order to make decisions about the system state. LabVIEW was found to produce easy-to-manage, consistent, and reproducible conditions by reducing complex procedures, such as system startup routines and robotics commands, to the click of a button, by compensating accurately and quickly for changes in plasma conditions, and by checking the state of the system to prevent malfunctions.
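The flavor of closed-loop decision the VI makes can be shown with a toy feedback loop: read a process variable, compare it to a setpoint, and nudge an actuator proportionally. A minimal sketch in Python with mocked device I/O; in the real system these reads and writes go through National Instruments hardware, and the setpoint and gain below are assumptions:

    import random

    setpoint_mtorr = 5.0      # assumed target chamber pressure
    valve = 50.0              # valve position, percent open
    gain = 4.0                # assumed proportional gain

    def read_pressure(valve_pos):
        """Mock gauge: pressure tracks valve opening, with readout noise."""
        return 0.1 * valve_pos + random.gauss(0.0, 0.05)

    for step in range(20):
        error = setpoint_mtorr - read_pressure(valve)
        valve = min(100.0, max(0.0, valve + gain * error))

    print(f"settled valve position: {valve:.1f}% open")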

Using Sub-Terahertz Spectroscopy to Detect Unique Absorption Spectra of Trace Explosives. ROBERT DIETZ (Iowa State University, Ames, IA, 50010) HUAL-TE CHIEN AND SAMI GOPALSAMI (Argonne National Laboratory, Argonne, IL, 60439)

A heat-cell spectrometer consisting of a heat cell, a backward wave oscillator (BWO), and a hot-electron bolometer was employed to identify unique absorption spectra of trace explosives at sub-terahertz frequencies (~220-370 GHz) and low vapor pressures. Close agreement between the Jet Propulsion Laboratory's simulated absorption spectrum for acetonitrile (CH3CN) and our experimentally determined spectrum lends confidence to the accuracy of our results for the heavier explosive molecules. Samples of trace explosives were initially loaded in the liquid phase, contained in solutions of acetonitrile, methanol, and/or ethanol, but difficulty in distinguishing explosive absorption lines from solvent absorption lines prompted a different method: sample solutions were poured into a small copper cup and evaporated over several days to leave only the solid residue of the explosive material. We believe that we have located unique rotational and vibrational spectra for TNT and PETN. At least three consistent and readily identifiable absorption lines for TNT are noted, and new lines appear at higher temperatures, suggesting molecular fragmentation. Some uncertainty remains due to system contamination, however: sample vapors may condense onto fiberglass heating cables inside the spectrometer and emerge later, during other tests, when the temperature of the cables increases. We believe that any contaminant species can be identified and distinguished from the sample species, but prudence dictates redesigning the spectrometer, so that all fibers and materials that could trap sample vapors lie outside the stainless steel tubes, before continuing with data collection.
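Line identification is, operationally, a matching problem: compare each measured absorption-peak frequency against a reference list within an instrumental tolerance. A minimal sketch in Python; both frequency lists are invented placeholders, not JPL catalog values or measured peaks:

    reference_ghz = [220.7, 238.9, 257.5, 294.2, 331.1]  # assumed catalog lines
    measured_ghz = [220.68, 257.54, 310.00, 331.15]      # assumed measured peaks
    tolerance_ghz = 0.05

    for peak in measured_ghz:
        matches = [r for r in reference_ghz if abs(r - peak) <= tolerance_ghz]
        tag = f"matches {matches[0]} GHz" if matches else "unidentified"
        print(f"{peak:7.2f} GHz: {tag}")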

We Control the Phonons: Coherent Control of Surface Phonons with a Picosecond Pulse Laser. RYAN LEWIS (Whitman College, Walla Walla, WA, 99362) DAVID HURLEY (Idaho National Laboratory, Idaho Falls, ID, 83415)

All over the world, scientists use lasers to produce very high frequency sound waves, often using a pump beam to generate the phonons and a probe beam to detect them. In this experiment, we shine the pump beam onto a semiconductor sample (either Si or GaAs) beneath an Al grating, thereby generating a coherent surface acoustic wave. We then employ a Michelson interferometer to produce a second, time-delayed pump pulse, which launches a second surface acoustic wave. With fine tuning of the micrometer, we achieve both constructive and destructive interference of the two waves. Such coherent control will be useful in piezoelectric eddy current imaging, as well as in managing induced strain in quantum structures.
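The delay dependence of the two-wave superposition is easy to model: two identical surface acoustic waves launched a time tau apart add when tau is a full period and cancel when it is a half period. A minimal sketch in Python with an assumed 1 GHz SAW frequency:

    import numpy as np

    f_saw = 1.0e9                        # assumed SAW frequency (Hz)
    t = np.linspace(0.0, 5e-9, 2000)

    def superpose(tau):
        """Sum of two identical waves, the second delayed by tau."""
        return np.sin(2 * np.pi * f_saw * t) + np.sin(2 * np.pi * f_saw * (t - tau))

    for tau, label in [(1.0 / f_saw, "full period, constructive"),
                       (0.5 / f_saw, "half period, destructive")]:
        amp = np.max(np.abs(superpose(tau)))
        print(f"delay {1e9 * tau:.2f} ns ({label}): peak amplitude {amp:.2f}")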