SULI
CCI
PST
FaST

Engineering Abstracts:

Adding a parallel HEPA filtered exhaust to lab 164 in building 331. KEVIN MERKLING (University of Idaho, Moscow, ID, 83843) SHAN T. BELEW (Pacific Northwest National Laboratory, Richland, WA, 99352)

The high airflow of 2000 CFM (cubic feet per minute) through a HEPA (high efficiency particulate air) filtered exhaust rated for 1000 CFM created the need for a second HEPA filter. This filter will lower the pressure drop across the existing filter and increase the fan's efficiency at removing unwanted particles from the lab. The task was to determine where and how to install the new HEPA filter. Upon receiving the service request, a visit to the site where the work would be done was needed. After taking many measurements and pictures of the small work space, a quick sketch of where the new HEPA filter would be installed was drawn up. A meeting with sheet metal and air balance craftsmen was set up to make sure the design was feasible. The building engineer, our customer for this project, was also kept informed to make sure the design met his needs. Once all pictures, measurements, hand sketches, and meetings were completed, some research on PNNL’s specifications for installing heating, ventilation, and air conditioning (HVAC) units was needed. The specifications were applied and customized to fit the job. Test ports for measuring airflow and a pressure gauge also needed to be researched and added to the specifications. After the specifications were complete, along with the hand sketches, a designer was taken to the work site to see what was being worked on. The designer’s job was to draw the updates in AutoCAD, a computer drawing program that creates 2-dimensional drawings in a neat and presentable format for approval. Once the designer finished the drawings, the lead engineer and I reviewed the design, and upon our approval the design package was submitted to the Facility Review Board (FRB) process for management approval. 
Although a relatively small job, it was a good introduction for an intern to what facility design engineers do at PNNL, while also improving the quality of the laboratories for research.
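To illustrate why the second filter helps, a minimal sketch (not part of the original design work; the linear pressure-drop scaling and the 1.0 in. w.g. rated drop are assumed values) of how parallel filters split the flow:

```python
# Sketch (hypothetical numbers): flow split across parallel HEPA filters.
# Assumes pressure drop across HEPA media scales roughly linearly with
# airflow (viscous-dominated flow), a common first approximation.

def parallel_filter_drop(total_cfm, rated_cfm, rated_drop_in_wg, n_filters):
    """Estimate per-filter airflow and pressure drop for n identical
    filters sharing the total exhaust flow equally."""
    per_filter_cfm = total_cfm / n_filters
    # Linear scaling: drop at rated flow, scaled by actual/rated flow.
    drop = rated_drop_in_wg * (per_filter_cfm / rated_cfm)
    return per_filter_cfm, drop

# One 1000 CFM filter carrying 2000 CFM vs. two filters in parallel:
flow1, dp1 = parallel_filter_drop(2000, 1000, 1.0, 1)  # overloaded single filter
flow2, dp2 = parallel_filter_drop(2000, 1000, 1.0, 2)  # proposed parallel pair
```

With the second filter each unit carries its rated 1000 CFM, and the per-filter pressure drop falls by half under this linear model.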


ADVANCES IN DUST DETECTION AND REMOVAL FOR TOKAMAKS. ALEJANDRO CAMPOS (University of Rochester, Rochester, NY, 14627) CHARLES H. SKINNER (Princeton Plasma Physics Laboratory, Princeton, NJ, 08543)

Dust diagnostics and removal techniques are vital for the safe operation of next-step fusion devices such as ITER. An electrostatic dust detector developed in the laboratory is being applied to the National Spherical Torus Experiment (NSTX). In the tokamak environment, large particles or fibers can fall on the electrostatic detector, potentially causing a permanent short. We report on the development of a gas puff system that uses helium to clear such particles from the detector. Experiments at atmospheric pressure with varying nozzle designs, backing pressures, puff durations, and exit flow orientations yielded an optimal configuration that effectively removes particles from a 25 cm² area. Similar removal efficiencies were observed under a vacuum pressure of 10 mTorr. Dust removal from next-step tokamaks will be required to meet regulatory dust limits. A tripolar grid of fine interdigitated traces has been designed that generates an electrostatic traveling wave for conveying dust particles to a 'drain’. First trials with only two working electrodes have shown particle motion in optical microscope images. Grids with three working electrodes will be tested and should improve particle movement.


An Analysis of the Fiducial Timing System for the Linear Coherent Light Source Radio Frequency Timing System. JACHIN SPENCER (University of Delaware, Newark, DE, 19805) RON AKRE (Stanford Linear Accelerator Center, Stanford, CA, 94025)

In preparation for the completion of the Linear Coherent Light Source (LCLS) in 2009, many upgrades are being considered for the Stanford Linear Accelerator (linac). The radio frequency (RF) timing system is of particular importance to LCLS because it is the standard reference for the linac's electron beam and is used to trigger automated events along the two-mile linac. Two primary questions under consideration are the feasibility of using a missing-pulse triggering scheme and the potential benefits of upgrading to a bipolar fiducial pulse. To evaluate potential changes to the RF timing system, a simulation of the Main Drive Line (MDL) system response and fiducial pulse was developed. Using the MDL model to simulate the effect of the MDL on the monopolar and bipolar fiducial pulses, it has been confirmed that upgrading to the bipolar pulse would provide no significant benefit to the RF timing system: for the purpose of timing, the monopolar and bipolar pulses are interchangeable. The key difference between the pulses can be observed in sectors 0 and 30. In sector 0 the bipolar pulse has 2.0079 times the power of the monopolar pulse; in sector 30 the power ratio is approximately 0.4972. The sector 30 power ratio is a compelling argument in favor of the monopolar pulse. Because the majority of the power for the monopolar pulse comes from low-frequency (down to DC) components in the Fourier transform of the signal, the nonlinear response of the MDL causes the waveform to spread and encroach on the cycles of the carrier signal following the pulse, but this is not a major issue for the purpose of detecting a trigger. Because of the power of modern signal processing techniques, a monopolar pulse is effectively equivalent to a bipolar pulse for the purpose of timing, meaning the negative aspects of both signals can be handled with an equivalent amount of effort. 
Using the MDL model it has also been confirmed that a missing pulse is precluded as a possible triggering scheme for the monopolar fiducial pulse due to pulse spreading. The bipolar fiducial pulse cannot be used in a missing pulse configuration due to severe attenuation. Future work in the analysis of the RF timing system would include adding frequency dispersion to the MDL model, building a prototype bipolar pulser, and refining the MDL model.
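The monopolar/bipolar power comparison can be sketched numerically. The pulse shapes below (a unit-peak Gaussian and a peak-normalized Gaussian derivative) and their widths are illustrative assumptions, not the actual SLAC fiducial waveforms, so the ratio here does not reproduce the 2.0079 figure from the analysis:

```python
# Illustrative sketch: total power comparison of two equal-peak pulse shapes.
import math

def pulse_energy(samples, dt):
    """Discrete approximation of waveform energy: integral of v(t)^2 dt."""
    return sum(v * v for v in samples) * dt

dt = 1e-10      # 0.1 ns sample step (assumed)
sigma = 2e-9    # 2 ns pulse width (assumed)
t = [(i - 500) * dt for i in range(1001)]

# Monopolar: unit-peak Gaussian. Bipolar: Gaussian derivative scaled to unit peak.
mono = [math.exp(-x * x / (2 * sigma * sigma)) for x in t]
bi = [-(x / sigma) * math.exp(0.5 - x * x / (2 * sigma * sigma)) for x in t]

ratio = pulse_energy(bi, dt) / pulse_energy(mono, dt)  # e/2 for these shapes
```

For these particular shapes the analytic ratio works out to e/2 ≈ 1.36; the point is only that the ratio depends entirely on the pulse shapes and the frequency content the transmission line preserves.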


Analysis of a Helium Brayton Power Cycle for a Direct-Drive Inertial Fusion Energy Power Reactor. SCOTT WAGNER (University of Michigan, Ann Arbor, MI, 48109) CHARLES A. GENTILE (Princeton Plasma Physics Laboratory, Princeton, NJ, 08543)

Presented is an engineering analysis of a helium Brayton power cycle for a direct-drive laser inertial fusion energy (IFE) power reactor. The Brayton cycle is the ideal operating cycle for gas turbines, consisting of four internally reversible processes. Preliminary reactor design goals include production of 2 GW of thermal power and an estimated 700 MW of electricity. Economic considerations require that the power cycle be as efficient as possible to offset high capital investment. The closed, indirect helium Brayton cycle is very efficient at the high inlet temperatures where traditional power conversion cycles are not adequate; helium is also very advantageous in a fusion reactor design. A thermodynamic model using baseline technology specifications and generalized thermodynamic assumptions is shown to produce a 51% gross cycle efficiency, increasing to 64% with expected near-term technological advancements. A comparative study of in-cycle component solutions and operational and engineering considerations is also presented.
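For context, the textbook ideal-Brayton efficiency is a far simpler relation than the detailed model behind the 51%/64% figures (which includes component losses and recuperation); the pressure ratio in the example below is an arbitrary illustration:

```python
# Ideal (cold-gas-standard) Brayton cycle thermal efficiency.
def brayton_ideal_efficiency(pressure_ratio, gamma):
    """Efficiency of an ideal Brayton cycle: 1 - r^(-(gamma-1)/gamma)."""
    return 1.0 - pressure_ratio ** (-(gamma - 1.0) / gamma)

GAMMA_HE = 5.0 / 3.0  # monatomic ideal gas (helium)
eta = brayton_ideal_efficiency(2.5, GAMMA_HE)  # pressure ratio 2.5 is illustrative
```

Real closed-cycle designs exceed this simple figure at a given pressure ratio mainly through recuperation and intercooling, which is why the modeled gross efficiency can reach the 51–64% range.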


Analysis of the ITER Draining and Drying System. CHRISTIN WALKER (East Tennessee State University, Johnson City, TN, 37614) JUAN FERRADA (Oak Ridge National Laboratory, Oak Ridge, TN, 37831)

The purpose of the ITER project is to build a fusion reactor that will serve as the basis for designing future power plants to produce electricity. Fusion energy would reduce the emissions of greenhouse gases into the atmosphere, which should slow global climate change. The ITER project involves collaborators from many countries, and the various countries have different project responsibilities. One of the tasks assigned to the United States is managing the water cooling system. This summer’s research involved verifying the engineering designs for the draining and drying systems, which must be done for the effective procurement of the process experiment for this project. Modeling and simulation programs in FLOW (an ORNL software platform) were used to accomplish this. I prepared a module in FLOW that simulates compressors. The module consisted of code, written in Python, that uses the temperature, pressure, and mass flow of the incoming stream, as well as the adiabatic efficiency and outlet pressure. The module uses these data as inputs and calculates the volumetric flow rate, the adiabatic head, the density, and the power. Results from the FLOW simulations corroborated several of the design parameters; in fact, my modeling analysis almost exactly corroborated modeling analysis performed earlier by the project team. I am in the PST program and will someday have students, so I will have to find a way to relate what I am doing for the ITER project to what my future students will be learning. My future students will be studying the flow of energy from one organism to the next, just as I have learned how energy flows from one model to the next. Students will be asked to build a flow sheet showing the way energy flows through an organism, similar to the way flow sheets are made to show the flow of energy through the ITER project.
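A simplified, ideal-gas stand-in for the compressor calculation described above (the actual FLOW module is not reproduced here; the gas properties and operating point in the example are hypothetical choices):

```python
# Simplified compressor model: same inputs/outputs as described in the
# abstract, assuming ideal-gas behavior and an isentropic reference path.
R = 8.314  # universal gas constant, J/(mol K)

def compressor(t_in_k, p_in_pa, mdot_kg_s, eta_adiabatic, p_out_pa,
               molar_mass_kg_mol, gamma):
    """Return volumetric flow (m^3/s), adiabatic head (J/kg),
    inlet density (kg/m^3), and shaft power (W)."""
    rho = p_in_pa * molar_mass_kg_mol / (R * t_in_k)  # ideal-gas density
    q = mdot_kg_s / rho                               # inlet volumetric flow
    k = (gamma - 1.0) / gamma
    # Adiabatic (isentropic) head for an ideal gas between p_in and p_out.
    head = (R * t_in_k / molar_mass_kg_mol) / k * ((p_out_pa / p_in_pa) ** k - 1.0)
    power = mdot_kg_s * head / eta_adiabatic          # shaft power
    return q, head, rho, power

# Example: air-like gas, 300 K, 1 bar -> 3 bar, 1 kg/s, 75% adiabatic efficiency.
q, head, rho, power = compressor(300.0, 1e5, 1.0, 0.75, 3e5, 0.029, 1.4)
```

A design check like the one described would compare these computed outputs against the parameters in the engineering drawings.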


Analyzing Exhaust Gas Recirculation Contaminated Engine Oil. ANDREW LARKIN (University of Illinois at Urbana/champaign, Champaign, IL, 61820) ROBERT ERCK (Argonne National Laboratory, Argonne, IL, 60439)

Exhaust gas recirculation (EGR) is commonly used in diesel engines to reduce emissions of nitrogen oxides. However, EGR also contaminates engine oil by adding soot particulates and acidic molecules. The goal of this study was to determine the extent and mechanism of wear on several different materials using EGR-contaminated oils of varying soot content and acidity. The contaminated oils were obtained from a Cummins M11 test with EGR. They had soot contents of 8.4, 10.3, 9.0, and 12.0 percent, with total acid numbers of 1.68, 1.91, 1.91, and 2.36, respectively. The materials tested were 52100 steel, Al2O3, ZrO2, and Si3N4. The materials were tested in a Falex four-ball tribological test machine according to the American Society for Testing and Materials protocol D-4172. The oxides showed very high friction coefficients of about 0.12, while Si3N4 was nearly the same as steel at about 0.1. ZrO2 also showed a strange drop in friction during the tests, from about 0.12 to as low as 0.02. This may be due to a phase transition caused by the high pressures of the rig. It may also have been a false reading caused by the ball's increasing contact area over the course of the test, which shifted the contact into hydrodynamic lift as opposed to boundary lubrication. The balls were then examined for wear with an optical profilometer using interferometry. In these tests, Si3N4 suffered some surface cracking but no measurable material removal, while the oxides took above-average wear using 52100 steel as a baseline. The wear was determined to be abrasive in nature rather than corrosive; alumina wore by grain pullout. Si3N4 should be retested to determine the maximum loads that it can bear in contaminated oils. The strange friction transition of ZrO2 should also be probed.


Arc Flash Analysis for Laboratory-wide Equipment Rating. JONATHAN TODZIA (Rensselaer Polytechnic Institute, Troy, NY, 12180) SWAPNA MUKHERJI (Brookhaven National Laboratory, Upton, NY, 11973)

At Brookhaven National Laboratory, all electrical equipment is rated for arc flash hazards in order to conform to NFPA 70E-2004 standards. Part of an ongoing project, the purpose of the analysis and eventual labeling is to safeguard all workers against both electric shock and arc flash dangers by making them aware of the risk at hand. Labels warning of the hazards involved will be affixed to all equipment. To begin the analysis, data were obtained from equipment on site via manufacturers’ labels, existing drawings, electricians, etc., to create one-line drawings of each building or group of buildings. Information such as panel ampacity, fuse and breaker ratings, feeder lengths and sizes, and transformer ratings was needed to produce an accurate one-line diagram. The incident energy, required personal protective equipment (PPE), and safe boundary distances were calculated from the information in these drawings. The drawings and calculations were produced using SKM Power*Tools software, which also provided the final rating of each piece of equipment. The ratings consist of a number from 0 to 4 (4 being the highest danger) corresponding to the level of PPE required to operate the equipment safely. The entire site's electrical power system needs to be examined; with roughly fifty buildings completed this summer, the project is about 80% complete. Upon completion of the project, all equipment on site will be given new labels stating any arc flash hazards in addition to the proper PPE required for safe operation.
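The kind of calculation behind each label can be sketched with the theoretical Lee method; the project itself used SKM's IEEE 1584-based models, which are more detailed, and the fault parameters in the example below are hypothetical:

```python
# Hedged sketch: Lee-method incident energy and NFPA 70E category mapping.
def lee_incident_energy(v_kv, i_bf_ka, t_s, d_mm):
    """Lee-method incident energy in cal/cm^2 (theoretical, open-air).
    v_kv: system voltage (kV), i_bf_ka: bolted fault current (kA),
    t_s: arc duration (s), d_mm: working distance (mm)."""
    return 5.12e5 * v_kv * i_bf_ka * t_s / (d_mm ** 2)

def ppe_category(energy_cal_cm2):
    """Map incident energy to a hazard/risk category (0-4), using the
    common 1.2/4/8/25/40 cal/cm^2 clothing-rating thresholds."""
    for cat, limit in enumerate((1.2, 4.0, 8.0, 25.0, 40.0)):
        if energy_cal_cm2 <= limit:
            return cat
    raise ValueError("above category 4: energized work not permitted")

# Hypothetical panel: 480 V, 20 kA bolted fault, 6-cycle clearing, 455 mm.
e = lee_incident_energy(0.48, 20.0, 0.1, 455.0)
```

The computed energy would then set both the label's category number and the flash protection boundary.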


Arc Flash Analysis Project. JOHN BOUCHER (Middlebury College, Middlebury, VT, 05753) ALAN RAPHAEL (Brookhaven National Laboratory, Upton, NY, 11973)

Before an arc flash accident prompted Brookhaven National Laboratory (BNL) to devise the Arc Flash Analysis Project, a project designed to achieve a complete electrical systems analysis of all BNL systems and buildings, many of BNL’s older facilities had not been inspected to determine whether they satisfied the National Fire Protection Association’s "Standard for Electrical Safety in the Workplace" (NFPA 70E-2004). For the Arc Flash Analysis Project, equipment layout and manufacturing and operating information for all electrical components such as panels, fuses, and circuit breakers, as well as cable sizes, types, and approximate lengths, were obtained by manually inspecting and tracing out the facilities’ electrical systems. Using SKM PTW Power Tools Software (PTW), this information was organized, illustrated, and then analyzed to establish the electrical systems’ susceptibility to and energy available for arc flash. The work done for this study produced single-line electrical diagrams containing all electrical equipment down through the lowest-rated panels (480 Volt or 208 Volt) to any 3-phase 480 Volt or 3-phase 208 Volt, 225 Amp or greater equipment. PTW was used to compute information such as the arc flash incident energy level at each equipment location, the flash protection boundary, and the recommended Personal Protective Equipment (PPE). This study sought to achieve greater safety for those working on BNL electrical systems by providing recommendations for necessary PPE for electrical workers; collecting data to be archived, managed, updated as necessary, and made accessible to facility engineers for future electrical work; and affixing up-to-date arc flash warning labels to all appropriate electrical equipment.


Argonne National Laboratory Facility Electrical Equipment Inspections. STEVEN KOMPERDA (Bradley University, Peoria, IL, 61625) EUGENE KENDALL (Argonne National Laboratory, Argonne, IL, 60439)

In an effort to ensure that all electrical equipment at Argonne National Laboratory is in safe working condition, the Department of Energy has begun a rigorous inspection process for electrical safety standards. The Facilities Management and Services Division is responsible for the inspections of the facility equipment at Argonne, such as transformers, electric motors, lighting systems, and circuit breaker panels. It is the job of a Designated Electrical Equipment Inspector (DEEI) to make sure this equipment is properly installed and in safe working condition. A DEEI must inspect all equipment that has been installed or assembled on site, as well as equipment that has not been listed by a Nationally Recognized Testing Laboratory (NRTL). NRTL-listed equipment has already been rigorously tested to verify safe operation by the public and requires no further examination. The goal set forth by Argonne is to have inspected 40% of the estimated 50,000 unlisted items in the laboratory by the end of the fiscal year. Approximately 35% of the equipment has been inspected thus far.


Baseline Performance Modeling and Hydrofoil Survey for an Axial-Flow Tidal Turbine. DANNY SALE (University of Tennessee, Knoxville, TN, 37916) WALT MUSIAL (National Renewable Energy Laboratory, Golden, CO, 80401)

The National Wind Technology Center (NWTC), a division of the National Renewable Energy Laboratory, has expanded its research into ocean renewable energy technologies. The NWTC’s current interest is in the preliminary design and baseline performance modeling of axial-flow tidal turbines. These axial-flow tidal turbines operate on many of the same principles that wind turbines do, and are a class of “zero-head” hydropower that uses the kinetic energy of flowing water to drive a turbine. This paper focuses on the use of the NWTC’s existing wind turbine performance code WT_Perf, which relies on Blade Element Momentum theory, to analyze an existing tidal turbine rotor. The assumption was made that as long as cavitation is introduced as a constraint on a wind-turbine-type model, Blade Element Momentum theory remains valid and reasonably realistic results can be achieved. Once the modeling input parameters were verified to accurately represent the tidal turbine configuration, the code’s accuracy was verified by comparing its predictions to experimental test data. Variations of rotor geometry and hydrofoil options were then modeled in order to find configurations for optimal performance. Candidate rotor configurations and hydrofoils were judged on their favorable characteristics for energy capture at low flow speeds, hydrodynamic stall regulation, and resistance to cavitation. The relative performance of the candidate hydrofoils was then compared to the performance of the baseline turbine configuration. Agreement between the measured and predicted values was excellent for flow speeds less than the rated speed of the turbine. As the flow speed approached and exceeded the rated speed of the turbine, the measured and predicted values began to deviate. 
The deviation between measured and predicted values may be attributed to cavitation, dynamic effects, modeling of the turbine wake, and experimental error in the measured values. Recommendations for improvements to the WT_Perf hydrodynamic model and cavitation prediction models are also discussed in this report.
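The cavitation constraint mentioned above can be expressed as a simple check. The seawater properties and operating point below are illustrative assumptions; WT_Perf's internal treatment is not described in the abstract:

```python
# Sketch of the cavitation-onset criterion for a blade section.
def cavitation_number(depth_m, flow_speed_m_s,
                      p_atm=101325.0, p_vapor=2340.0, rho=1025.0, g=9.81):
    """Local cavitation number sigma: (static pressure - vapor pressure)
    divided by the dynamic pressure at the section."""
    return (p_atm + rho * g * depth_m - p_vapor) / (0.5 * rho * flow_speed_m_s ** 2)

def cavitates(depth_m, rel_speed_m_s, cp_min):
    """Cavitation onset when sigma falls below -Cp_min, where Cp_min is
    the hydrofoil's minimum pressure coefficient (a negative number)."""
    return cavitation_number(depth_m, rel_speed_m_s) < -cp_min

# Blade section 10 m deep seeing a 12 m/s relative flow (illustrative):
sigma = cavitation_number(10.0, 12.0)
```

Hydrofoils with a shallower suction peak (less negative Cp_min) therefore resist cavitation at higher tip speeds, which is one of the selection criteria named in the survey.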


Bridging Dynamic Needs to Solid Engineering. ERIC MADDEN (Columbia Basin College, Pasco, WA, 99301) RAUL CARRENO (Pacific Northwest National Laboratory, Richland, WA, 99352)

Facility needs for fuel cell research and development (R&D) are dynamic. Consequently, laboratories will require periodic modification to meet the demands of this R&D. Current fuel cell prototypes require thousand-hour testing capabilities, for which the facilities need to be able to maintain constant control of ambient conditions. This research will require backup power and gas supplies for caustic, explosive, toxic, and possibly highly toxic gases. It has been proposed to modify an existing laboratory building to support the necessities of this long-term testing. The effort starts with a Service Request (the means to initiate any service or facility modification within PNNL). The Service Request was initiated and submitted by the researchers and building management in order to develop a Project Baseline Document (PBD), which delineates the scope of work, cost, and schedule for the proposed facility renovations. Once the Service Request was received by the Project & Construction Management Group Manager, it was assigned. The Project Manager is the individual responsible for the overall management of the effort, which includes the development of the PBD and the coordination of all the required subject matter experts to obtain the necessary information contained in the PBD and subsequent project phases such as definitive design, construction management, contracting, and safety. My mentor, a Project Manager, assigned this project to me. As a Project Manager I facilitated communications between the involved disciplines (researchers, engineers, estimators, designers, specialists, and other student interns), and by doing so a team was formed. As a team we worked together to establish the required functional criteria, recognize and resolve problems, and identify risks and their mitigating alternatives. 
By communicating with the researchers who would occupy this facility, we found that they would need two custom 12’×5’ fume hoods with backup generator power for the HVAC systems. Piping leading outside the building will route gases from gas storage racks to the fume hoods. The PBD will serve as the basis for the building manager to formally request the required funds for Fiscal Year 2009; provided the funds are approved, this modification effort will be fully developed and executed next fiscal year.


Building Power Monitoring. ADRIAN ONTIVEROS (Illinois Institute of Technology, Chicago, IL, 60616) PAUL DOMAGALA (Argonne National Laboratory, Argonne, IL, 60439)

The objective of this project is to convert from existing analog power meters to digital, remotely monitored power meters. In many of the buildings on the Argonne National Laboratory site, analog meters are still in place, requiring personnel to physically read the meters and wasting valuable manpower and money. To remedy this situation, this project will investigate the needs and requirements for installing a digital meter that can be read remotely. The meter should measure voltage, current, power factor, and power usage, and have Ethernet capability so it can connect to the Metasys system. The Metasys system will serve as the online building management system that consolidates information from the various buildings. The meter will be chosen from Schneider Electric's ION series because of its compatibility with Metasys. This project will also address power consumption and show that Argonne National Laboratory can decrease its power usage and ultimately save money. Experiments such as having employees shut down their computers over the weekend will show that a considerable amount of power can be saved. Along with identifying ways to conserve power and implementing new meters, this project will prove beneficial for the Laboratory.


Characterization of Nano-Sized Hydrodesulfurization Catalyst Derived from Metal Carbonyls by Sonication. IRINA DOVGANI (Stony Brook University, Stony Brook, NY, 11794) D. MAHAJAN (Brookhaven National Laboratory, Upton, NY, 11973)

Strict environmental regulations limiting sulfur in fuels create a need for improved hydrodesulfurization catalysts. Nano-sized MoS2 was synthesized in 90% yield in hexadecane by sonolysis of a metal carbonyl (Mo(CO)6) with sulfur at ~50°C for 35 hours. The resultant MoS2 was characterized for particle size, shape, and crystalline structure using SEM, XRD, and Computed Microtomography (CMT) techniques. The resultant product of Mo(CO)6 and S in 1:2.5 and 1:5 molar ratios has a cluster-like nature in which individual particles are indistinguishable. The EDAX elemental analysis of both products confirmed the presence of Mo, S, C, and O in variable concentrations. The XRD pattern of the product showed a broad peak at 2θ = 40°, which may be a combination of the peaks at 2θ = 32.7° and 39.5° assigned to the (100) and (103) lattice planes of MoS2, respectively. Moreover, another broad peak at 2θ = 58° corresponding to (110), together with CMT 2-D and 3-D images, confirmed the synthesis of nano-sized MoS2 particles with high defect densities and the absence of large crystallites.


CHEMICAL VAPOR DEPOSITION OF AN ALUMINUM AND ALUMINUM NITRIDE COMPOSITE. NICHOLAS SEWELL (University of Tennessee, Knoxville, TN, 37916) THEODORE BESMANN (Oak Ridge National Laboratory, Oak Ridge, TN, 37831)

The aim of this study is to improve the thermal expansion match of adjacent layers in insulated gate bipolar transistors (IGBT) to reduce thermomechanical stress by increasing the thermal expansion coefficient of the dielectric layer. It is proposed to accomplish this with a composite of aluminum and aluminum nitride replacing the current pure aluminum nitride dielectric. The addition of the aluminum will increase the thermal expansion coefficient and, through controlled amounts and microstructure, will not reduce the insulating properties of the dielectric. This is being accomplished by way of metal-organic chemical vapor deposition (MOCVD). CVD is a process for depositing coatings onto substrates using a volatile precursor that undergoes a chemical reaction driven by high temperature; it is used extensively in the semiconductor industry and in other areas for protective coatings. Dimethylethylamine alane (DMEAA) was selected as the CVD precursor for its volatility and high reactivity. The CVD process was run several times to begin to ascertain the appropriate parameters for preparing the Al-AlN composite. Initially, aluminum was deposited as a crystalline material by using the precursor alone. Then, Al-AlN composition was investigated by varying the temperature in increments of 50°C from 200°C to 400°C. While the targeted Al-AlN composition has not yet been realized, the first few compositional data points have been obtained.


Comparison of Sodium and Potassium Carbonates as Lithium Zirconate Modifiers for High-Temperature Carbon Dioxide Capture from Biomass-Derived Synthesis Gas. JESSICA OLSTAD (Colorado School of Mines, Golden, CO, 80401) STEVEN PHILLIPS (National Renewable Energy Laboratory, Golden, CO, 80401)

The process of gasification converts biomass into synthesis gas (syngas), which can be used to synthesize biofuels. During the gasifying process, large amounts of carbon dioxide (CO2) are created along with the syngas, requiring the process to have larger equipment and use more energy. CO2 can also cause the formation of unwanted byproducts in the process, thus creating the need for CO2 removal. Solid-phase sorbents were investigated for the removal of CO2 from a N2/CO2 gas stream using a CO2 concentration similar to that found in a gasification process. A thermogravimetric analyzer was used to test the absorption rates of sorbents composed of lithium zirconate (Li2ZrO3), as well as mixtures of Li2ZrO3 with potassium carbonate (K2CO3) and sodium carbonate (Na2CO3). The experimental results show that Li2ZrO3 alone has a low CO2 absorption rate, but sorbents containing combinations of Li2ZrO3 and the K2CO3 and Na2CO3 additives have high uptake rates. The CO2 absorption and regeneration stability of the solid-phase sorbents were also examined. A sorbent composed of Li2ZrO3 and 12.1 weight % Na2CO3 was shown to be stable, based on its consistent CO2 uptake rates. Sorbents prepared with Li2ZrO3, 17.6 weight % K2CO3, and 18.1 weight % Na2CO3 showed instability during regeneration cycles in air at 800°C; sorbent stability improved during regeneration cycles at 700°C. Further testing of the Li2ZrO3 sorbent under real syngas conditions, including higher pressures and realistic gas compositions, should be done. Once the optimum sorbent has been found, a suitable support system will be needed to use the sorbent in a real reactor. In conclusion, it was shown that Li2ZrO3 mixed with Na2CO3 gives a CO2 uptake comparable to that of the Li2ZrO3 and K2CO3 mixture. It was also shown that there is an optimum mixture of both carbonates that gives a better uptake rate than either carbonate by itself. These results support the use of solid-phase sorbents as a way to remove CO2 from syngas.


Concepts of Planning and Designing. TAHIEM WILLIAMS (Bethune Cookman University, Daytona Beach, FL, 32114) BOBBY MCKEE (Stanford Linear Accelerator Center, Stanford, CA, 94025)

The field of engineering demands that a sequence of procedures be carried out before any practical design of a product is manufactured. These procedures consist of the following: recognition of need, definition of problem, synthesis, analysis and optimization, evaluation and presentation. Most of these procedures were performed during the completion of the projects that were assigned. The assessment of relocating a steel ladder required both analysis and optimization. The pursuit of a surveillance camera for End Station A required evaluation and presentation. Lastly, the installation of metal sheets on the Radio Frequency distribution structure to prevent sections of the structure from shifting required synthesis and evaluation.


Creation and Implementation of Dynamic Blocks. JOHN GANGL (Columbia Basin College, Pasco, WA, 99301) SHAUNA ANDERSON (Pacific Northwest National Laboratory, Richland, WA, 99352)

The static CAD block library for the Design & Drafting Group is becoming more dynamic. AutoCAD is a computer-aided drafting (CAD) program that manipulates lines and arcs to create 2D shapes and objects. The Facility Projects & Engineering Services’ Design & Drafting Group uses this program to create floor plan and elevation drawings for laboratory construction and modification. Fundamentally, blocks are separate entities composed of lines and arcs that form static pictures of objects like cabinetry, doors, windows, or any repetitive item that would otherwise have to be drawn several times or copied out of other drawings. These blocks can be inserted like a picture into an open CAD file, saving the drafter the time they would otherwise spend drawing every detail and object that a block could represent. Dynamic blocks can be easily manipulated to represent multiple variations of an object, such as a door that could have multiple widths. Our goal was to learn how to develop dynamic blocks to replace existing static blocks and other objects that still needed to be represented as blocks. The purpose was to promote uniformity and consistency across drawings and to minimize errors that might occur when copying entities out of other drawings.


Design and Fabrication of a Q-BPM Calibration Tone Generator for the ATF2. IAN BULLOCK (Harvey Mudd College, Claremont, CA, 91711) DOUG MCCORMICK (Stanford Linear Accelerator Center, Stanford, CA, 94025)

The Accelerator Test Facility (ATF) in Japan uses cavity beam position monitors (BPMs) attached to quadrupole magnets to determine the position of the electron beam. The beam position monitors are connected to a downmix circuit which produces a 20 MHz signal from the 6426 MHz input from the BPM. A circuit has been made to produce a switchable calibration signal at the same frequency as the 6426 MHz signal from the BPM cavity. This circuit, which will use a 714 MHz damping ring source as its input, can then be used to calibrate the gain of the downmix electronics to provide more accurate measurements of beam position. Both SMA-connectorized and printed circuit board versions of the calibration tone device were produced. Both gave a little less than 0 dBm output at 6426 MHz with a 10 dBm, 714 MHz input. The switch isolation for each design was found to be 90 dB or better, which should ensure that the residual calibration signal does not prevent accurate measurement of the electron bunch.
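The 90 dB isolation figure translates directly into a residual power level. A quick sketch of the decibel arithmetic (standard conversions, nothing specific to the ATF2 hardware):

```python
# Standard decibel conversions applied to the switch-isolation figure.
def db_to_power_ratio(db):
    """Convert a decibel value to a linear power ratio."""
    return 10.0 ** (db / 10.0)

def dbm_to_mw(dbm):
    """Convert dBm (dB relative to 1 mW) to milliwatts."""
    return 10.0 ** (dbm / 10.0)

# A 0 dBm calibration tone behind a switch with 90 dB isolation leaks
# only 0 - 90 = -90 dBm, i.e. one billionth of the original power:
leakage_dbm = 0.0 - 90.0
leakage_mw = dbm_to_mw(leakage_dbm)
```

At a billionth of the tone power, the leakage sits far below the beam-induced cavity signal, which is why 90 dB of isolation is sufficient.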


Design for Shielding Photomultiplier Tube from External Magnetic Field. TRAVIS MADLEM (University of Virginia, Charlottesville, VA, 23904) VALERY KUBAROVSKY (Thomas Jefferson National Accelerator Facility, Newport News, VA, 23606)

Photomultiplier tubes are used to collect faint traces of light in detectors at Thomas Jefferson National Accelerator Facility. The Photonis XP4508B photomultiplier tube (PMT) will be used to detect light from Cherenkov radiation in the High Threshold Cherenkov Counter in the CEBAF Large Acceptance Spectrometer (CLAS). Incident photons strike a photocathode, which emits electrons (via the photoelectric effect) that are gathered and multiplied to produce an electric signal. This process is sensitive to magnetic fields above 0.2 gauss (G), and the location of this PMT in the detector will be subject to magnetic fields up to 50 G. The focus of this study was to design a ferromagnetic shielding model to reduce the magnetic field at the location of the photocathode to around 0.2 G in an external field of 30 to 50 G. The program Poisson Superfish was used to model the shielding and external field and to calculate the field strength at the photocathode. A variety of models were tested, varying in number of shield layers (up to three), size, and material type. It was found that a three-layer shield composed of an outer shield of hyperm49 and inner shields of conetic mu-metal reduced the external magnetic field to the required level for correct operation of the PMT. This result confirms that a three-layer model is necessary to provide adequate shielding from external fields of around 50 G. It also shows that models using regular iron or soft iron will not reduce the external magnetic field to a level low enough to allow the PMT to operate efficiently. For external field strengths up to 50 G, it is clear that the materials hyperm49 and conetic mu-metal possess superior magnetic properties for shielding PMTs, and the calculations suggest that increasing the number of shields and their thicknesses will further reduce the magnetic field strength.
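A back-of-the-envelope version of why one layer is not enough can be made with the standard thin-shell shielding estimate (this is not the Poisson Superfish model; the permeability and geometry below are hypothetical values):

```python
# Thin cylindrical shell in a transverse field: shielding factor ~ mu_r * t / D.
def shell_factor(mu_r, thickness_m, diameter_m):
    """Approximate transverse shielding factor of a thin cylindrical
    shell with relative permeability mu_r, wall thickness t, diameter D."""
    return mu_r * thickness_m / diameter_m

# Attenuation needed to bring a 50 G external field down to 0.2 G:
required = 50.0 / 0.2                          # 250x
# A single 1 mm shell of a high-permeability alloy around an ~80 mm tube
# (mu_r = 12000 is a hypothetical soft-magnetic value):
one_shell = shell_factor(12000.0, 1e-3, 0.08)  # 150x: falls short of 250x
```

The single shell falls short of the required attenuation, and real materials also saturate in strong fields, which is consistent with the study's finding that a nested three-layer design is necessary.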


Design of a 1 kW Proton Exchange Membrane (PEM) Fuel Cell with Dual Cooling Systems. MICHAEL ESPINOZA (Stony Brook University, Stony Brook, NY, 11794) DEVINDER MAHAJAN (Brookhaven National Laboratory, Upton, NY, 11973)

Across the United States, the growing recognition of hydrogen's potential as a fuel has increased hydrogen research and development. The theoretical efficiency of a fuel cell is higher than that of the internal combustion engine, which is the main source of energy for today's transportation vehicles. In a Proton Exchange Membrane (PEM) fuel cell, the power output is influenced by the humidity and temperature inside the power stack. The electrochemical reaction inside a fuel cell releases roughly 50% of its energy as electric current and 50% as heat. Therefore, a cooling system must be designed as part of the power stack in order for the fuel cell to function safely, not exceed 80ºC, and maintain the Membrane Electrode Assembly (MEA) in good working condition. In this project, various cooling systems were designed, using Inventor CAD software, to absorb this excess heat and keep the fuel cell operating at a safe temperature. Each design was transferred seamlessly to Finite Element Analysis (FEA) software (Algor) to perform the heat transfer and cooling analysis that determined the optimum cooling system and the overall power stack configuration. Dual air cooling systems using fins and air flow conduits were shown to be the best method, requiring only the minimal parasitic power needed to operate a cooling fan; a liquid coolant would draw more parasitic power from the fuel cell and yield lower efficiency. The FEA shows that the finned design cools about 10ºC more than conduits alone and indicates how much air flow is needed through the conduits to keep the stack under 80ºC. The final stage of this project was to optimize the design of the power stack so that it contained an optimal number of cooling plates. Comparing the fixed cost and the running cost showed that the running cost of a cooling fan was almost negligible. With this information, the 1 kW fuel cell can be optimally cooled.
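The air-flow requirement implied by the 50/50 split between electricity and heat can be checked with a simple energy balance, ṁ = Q/(cp·ΔT). The numbers below (air properties, allowable air temperature rise) are illustrative assumptions, not design values from the project.

```python
# Back-of-the-envelope sizing of the cooling air flow, assuming the
# 50/50 electric/heat split stated above. The allowed air temperature
# rise and inlet air properties are illustrative assumptions.

P_electric = 1000.0      # W, rated stack output
Q_heat = P_electric      # W, waste heat for a ~50% electrical efficiency
cp_air = 1005.0          # J/(kg*K), specific heat of air
rho_air = 1.2            # kg/m^3, air density near room temperature
dT_air = 30.0            # K, assumed allowable air temperature rise

m_dot = Q_heat / (cp_air * dT_air)   # kg/s of cooling air
vol_flow = m_dot / rho_air           # m^3/s
print(f"{m_dot:.4f} kg/s = {vol_flow*1000:.1f} L/s of cooling air")
```

A balance like this is what the FEA refines: it resolves where in the conduits and fins that air flow must go to keep every cell below 80ºC.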


Design of an Experiment to Examine Molten Salt as a Heat Transfer Medium. JOHN JORDAN (Texas A&M University-Kingsville, Kingsville, TX, 78363) DR. GRAYDON YODER (Oak Ridge National Laboratory, Oak Ridge, TN, 37831)

This study supports the Advanced High Temperature Reactor (AHTR) concept, a high temperature molten salt cooled reactor design, by designing a simplified experiment to study the corrosion characteristics of molten salts. Historically, corrosion studies have been performed using relatively complicated natural circulation loops. The objective here is to determine whether a simplified natural convection geometry can be used to acquire data on the corrosivity of molten salt (LiF-NaF-KF). The test unit is 76 cm tall and uses a furnace to heat a pool of molten salt in a nickel crucible. A heated nickel rod is submerged in the center of the molten salt to initiate natural convection circulation. Using the proposed geometry, the experiment can be implemented at a lower cost and smaller scale than natural circulation loops. We completed a thermal analysis before the experiment was performed. Known parameters are entered into the computer algebra system Mathematica to perform the calculations. The calculations are plotted in Excel to obtain values for temperature, salt velocity, heat flux, and heat loss. The computational fluid dynamics software FLUENT numerically simulates the flow patterns in the molten salt that result from natural convection. During the experiment, salt velocity is measured by Laser Doppler Velocimetry using a Class IV 514.5-nm laser probe and an 899-mm convex lens. Temperature values are measured using thermocouples. An infrared camera will capture pictures of the salt through sapphire viewports. The Excel plots revealed a maximum velocity of approximately 4 cm/s, depending on the radial distance and height from the bottom of the central heater. These plots are based on a temperature range of 700 to 740 °C between the heater and the bulk of the molten salt. FLUENT flow patterns show the natural convection movement that is needed to accurately simulate corrosion characteristics on the nickel surfaces that exist in real reactors.
Experiments are scheduled for August 2008. We will compare the empirical results of these experiments to the numerical results to test the model's validity. The analysis of the natural convection geometry provides valuable information to demonstrate that the smaller design can be used to acquire data in the same way as full-sized natural circulation loops. Multiple simplified natural convection experiments can be conducted to test heater rods of different materials and sizes.
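Before running FLUENT, the expected order of magnitude of the buoyant velocity can be estimated from the classic natural-convection scale u ~ sqrt(g·β·ΔT·L). The expansion coefficient and length scale below are rough assumptions for a FLiNaK-like salt, not measured properties from the experiment.

```python
import math

# Order-of-magnitude natural-convection velocity scale, u ~ sqrt(g*beta*dT*L).
# beta and L are rough assumptions, not measured salt properties.
g = 9.81        # m/s^2
beta = 3.0e-4   # 1/K, assumed thermal expansion coefficient of the salt
dT = 40.0       # K, heater-to-bulk temperature difference (700-740 C above)
L = 0.05        # m, assumed length scale along the heated rod

u = math.sqrt(g * beta * dT * L)   # m/s
print(f"u ~ {u*100:.1f} cm/s")
```

A scale estimate like this lands within the same order of magnitude as the ~4 cm/s peak from the Excel plots, which is the kind of sanity check one would want before trusting the detailed CFD flow field.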


Determining the Ability to Monitor the Viability of Transplant Rat Glioma Cells with an Optically Enhanced Catheter. RACHEL DYER (St. Olaf College, Northfield, MN, 55057) BOYD M. EVANS III (Oak Ridge National Laboratory, Oak Ridge, TN, 37831)

Approximately fifty thousand cases of Parkinson's disease are diagnosed in the United States each year. This debilitating disease results from the dissolution of dopamine-dependent communication between the substantia nigra and the striatum of the brain. Cellular replacement therapy, in which stem cells are introduced to supplant dead or stressed cells, has shown promise in animal models. However, the viability of transplanted cells and their survival rate are poorly accounted for by early tests. A novel design coupling a surgical catheter with fiber optic technology provides a tissue delivery platform that can monitor cell viability with sensing techniques widely accepted in the medical industry. The goal of this work is to monitor the health of transplant cells in real time at the final point of delivery using the optically enhanced catheter. Rat glioma cells were separately labeled with CellTracker Orange (CTO) (Invitrogen) and JC1 stain from BioVision's MitoCapture Mitochondrial Apoptosis Detection Kit, and fluorescence was characterized by confocal microscopy. CTO exhibited a single emission peak at 570 nm upon excitation with a 488 nm argon laser. JC1 exhibited two emission peaks corresponding to fluorescence of viable cells and apoptotic cells, 595 and 540 nm, respectively. JC1 was used to monitor the viability of cells under apoptotic conditions induced by incubating JC1-labeled cells with carbonyl cyanide 3-chlorophenylhydrazone or etoposide. Observation of fluorescence using a mercury fluorescence microscope over a four-hour period demonstrated JC1's ability to shift in color to reflect cell viability. To detect cell movement through the catheter, cells were labeled with CTO and excited by an argon ion laser at a 501 nm wavelength, and a peak emission at 570 nm was detected by an Ocean Optics spectrometer. JC1 was also used to detect the movement and the viability of cells through the catheter.
Cells excited by an argon ion laser with a 488 nm wavelength exhibited emission peaks at 540 and 595 nm, demonstrating the ability to detect both viable and apoptotic cells at the final point of delivery. From the detection of rat glioma cells labeled with CTO and JC1 using the diagnostic catheter, and the characterized response of JC1-labeled cells to apoptotic conditions, it can be concluded that these fluorescent probes are suitable for tracking and monitoring the viability of transplant cells through the optically enhanced catheter.
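The two JC1 emission peaks lend themselves to a simple ratiometric viability metric: the 595 nm (viable) intensity divided by the 540 nm (apoptotic) intensity. The sketch below shows how such a ratio might be computed from spectrometer counts; the threshold and the example intensities are hypothetical, not values from the experiment.

```python
# Ratiometric viability check from the two JC1 emission-band intensities.
# The threshold of 1.0 and the sample counts are hypothetical.

def jc1_ratio(i_595, i_540):
    """595/540 nm intensity ratio; higher values indicate healthier cells."""
    return i_595 / i_540

def looks_viable(i_595, i_540, threshold=1.0):
    """Classify a reading as viable when the ratio exceeds the threshold."""
    return jc1_ratio(i_595, i_540) > threshold

# Example spectrometer counts (hypothetical):
healthy = (3200.0, 1100.0)    # emission dominated by the 595 nm peak
apoptotic = (900.0, 2600.0)   # emission shifted toward the 540 nm peak

print(looks_viable(*healthy))    # True
print(looks_viable(*apoptotic))  # False
```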


Developing Genomics-Scale Throughput for Small Angle X-Ray Scattering at the Advanced Light Source. PATRICK MCGUIRE (California Polytechnic State University, San Luis Obispo, CA, 93407) GREG HURA (Lawrence Berkeley National Laboratory, Berkeley, CA, 94720)

Small Angle X-Ray Scattering (SAXS) has become an attractive tool for structural biology to observe macromolecules in solution. Maintaining the biological interactions that occur in solution provides great insight into the conformations, assembly, and shape changes associated with protein folding and function. The short times required for sample preparation, exposure, data acquisition, and solution structure modeling further establish SAXS as a high-throughput technique. Two example proteins are analyzed to display the potential of the SAXS technique. The design of a multi-well sample cell increases the number of experiments that can be performed while reducing the accumulation of residue from continued use. Sample cells of varying thicknesses are interchangeable to optimize the scattering intensity based on absorption properties. Not only does the quality of data collection improve drastically, but from a biological standpoint the design also reduces the concentration of protein needed for an experiment.


Development of a Simulation Program for Photovoltaic and Wind Integrated CAES Facilities. JESSE MCMANUS (Tulane University, New Orleans, LA, 70118) VASILIS FTHENAKIS (Brookhaven National Laboratory, Upton, NY, 11973)

Increased interest in the use of renewable energy sources, including photovoltaic (PV) arrays and wind turbines, has brought much attention to their effective deployment. With both wind and PV energy sources, intermittency and output variability complicate their successful use in a plant setting. Integration with a Compressed Air Energy Storage (CAES) facility may provide a means to overcome this hurdle by storing excess energy as compressed air for generation during times of greater power demand; however, this integration has yet to be effectively modeled. In this investigation, a CAES plant model was developed through thermodynamic analyses of the turbomachinery from an existing, non-PV/wind integrated CAES plant in McIntosh, Alabama. The thermodynamic plant model was then coded into a simulation program using Visual Basic for Applications (VBA), complete with an interactive user interface and a Microsoft Excel data exportation module. Insolation data were then taken from the National Renewable Energy Laboratory (NREL) databases and combined with hourly wind profiles generated using NREL's HOMER software. These PV and wind resource data were subsequently integrated into the simulation program with a time step of one hour. This VBA simulation program allows the user to specify any configuration for a PV/wind integrated CAES plant with solar and wind data from any time or location, returning the hourly plant operation profile for any duration. This software will provide insight into the feasibility of integrated CAES plants as a way to meet baseload or spinning reserve energy demand profiles. Additionally, this CAES plant simulation program could be further integrated into NREL's HOMER software to yield a powerful micropower optimization model that would aid in the development of worldwide PV/wind integrated CAES power systems.
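The core of an hourly simulation like the one described can be reduced to a small dispatch loop: in each hour, surplus renewable power is stored as compressed air (subject to cavern capacity and conversion efficiency), and deficits are served from storage. The sketch below is a deliberately simplified stand-in for the VBA program; the capacities, efficiencies, and sample profiles are hypothetical.

```python
# Minimal hourly dispatch loop for a renewable-plus-CAES plant.
# All capacities, efficiencies, and profiles are hypothetical.

def simulate(renewable_mw, demand_mw, cap_mwh, eff_in=0.8, eff_out=0.75):
    """Return (hourly stored-energy profile, total unserved MWh)."""
    stored = 0.0
    profile, unserved = [], 0.0
    for gen, load in zip(renewable_mw, demand_mw):
        surplus = gen - load
        if surplus >= 0:
            # Charge: compress air with the excess, up to cavern capacity.
            stored = min(cap_mwh, stored + surplus * eff_in)
        else:
            # Discharge: expand stored air through the turbine train.
            need = -surplus
            delivered = min(need, stored * eff_out)
            stored -= delivered / eff_out
            unserved += need - delivered
        profile.append(stored)
    return profile, unserved

wind = [50, 80, 120, 90, 30, 10]   # MW, hypothetical hourly generation
load = [60, 60, 60, 60, 60, 60]    # MW, flat hypothetical demand
profile, unserved = simulate(wind, load, cap_mwh=100)
print(profile, unserved)
```

The actual program layers the thermodynamic turbomachinery model and the NREL resource data on top of a loop of this general shape.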


Development of a Supersonic Gas Jet for Ion Beam Diagnostic Purposes. REYNALDO LOPEZ (University of California, Los Angeles, Los Angeles, CA, 90095) MATTHAEUS LEITNER (Lawrence Berkeley National Laboratory, Berkeley, CA, 94720)

A supersonic gas jet test stand has been developed for non-intercepting ion beam diagnostic applications. This research focuses on the first efforts to characterize the supersonic gas jet by ionizing the jet stream and visually measuring the location of the first density knot (Mach disk) formed by the self-focusing behavior of an underexpanded supersonic gas flow. The Mach disk will be used as an ion beam target to characterize any ion beam profile. A progress report describing the system design, experimental setup, and observations is presented. Future work will include a redesign of the electrode structure, the addition of more powerful vacuum pumps, and the development of experiments to characterize the density profile of the supersonic gas jet stream.


Distributed Energy Communications and Control (DECC) Laboratory Enhancement. PHILIP IRMINGER (University of Tennessee - Knoxville, Knoxville, TN, 37996) JOHN KUECK (Oak Ridge National Laboratory, Oak Ridge, TN, 37831)

Distributed Energy Resources (DER) are being integrated into the electric grid at Oak Ridge National Laboratory (ORNL) in order to provide reliability services such as voltage regulation. The objective of this study was to help identify the instrument and control requirements for DER involving a remote inverter at the Distributed Energy Communications and Control laboratory. DER can come in many forms, but the primary technologies we are currently focusing on are microturbines and photovoltaic systems. While researching instrument and control requirements, I used the existing inverter setup at ORNL as the basis for equipment selection. I examined what equipment is currently in place and compared it to some of the new instruments and controls offered today, including new control equipment that will be used to automate our research areas for both control and safety. I found that most of the instrumentation requirements carried over from our existing inverter system to the new remote inverter system; however, because the new system sits at a remote facility, many of the controls had to be modified accordingly. Through extensive discussion with colleagues at the laboratory, an Automated Logic system emerged as a good solution for controlling and monitoring our remote inverter system. At this point we have decided on the instrumentation and control requirements for the remote inverter site and are ordering the equipment needed to implement the system. The next step will be to explore possible physical configurations for the remote inverter test system and gain hands-on experience with the control and monitoring equipment made by Automated Logic.


Dynamic Blocks - Creating an Advanced Architectural and Civil Library. KAYLA COALY (Columbia Basin College, Pasco, WA, 99301) SHAUNA ANDERSON (Pacific Northwest National Laboratory, Richland, WA, 99352)

The facilities at the Pacific Northwest National Laboratory are constantly undergoing change through new construction, demolition, or remodeling. The drafters are kept incredibly busy with all the drawings, details, notes, and specifications needed for these projects. Currently, the blocks used by the Engineering and Design group are time-consuming to modify and lack accuracy and consistency. Since the ability to create dynamic blocks was added in AutoCAD 2006, the design group has begun to replace its old legacy blocks with "smart" dynamic blocks that can be created with adjustable geometry. Using AutoCAD 2008, we composed a library of dynamic blocks for architectural and civil use. These dynamic blocks can be modified in specific ways applicable to the type of block, whether it is a piece of casework available in several different sizes or a piece of equipment with many different views. Having a library of dynamic blocks will enable drafters and architects to maintain consistency throughout drawings. Dynamic blocks will also enhance efficiency through the automated adjustment built into each block.


Educational Materials Developed about Biodiesel as an Alternative Fuel at Sagamore Hill National Historic Site. KAITLIN THOMASSEN (State University of New York at Geneseo, Geneseo, NY, 11787) DR. THOMAS BUTCHER (Brookhaven National Laboratory, Upton, NY, 11973)

Biodiesel is a renewable, environmentally friendly "green fuel" that can replace no. 2 oil in traditional space-heating applications. Thus biodiesel, which can be domestically produced from various feedstocks including virgin vegetable oils, waste vegetable oils, and animal fats, directly promotes the conservation of our nation's natural resources. There are, however, several combustion and material compatibility characteristics of biodiesel that differ slightly from those of its petroleum-based counterpart, no. 2 fuel oil. In order to determine the feasibility of using biodiesel in traditional oil-fired space heating equipment, Brookhaven National Laboratory, in conjunction with the National Park Service, began running a blend of 20% biodiesel and 80% no. 2 oil (B20) at the Sagamore Hill National Historic Site in Oyster Bay, NY. The overall purpose of this study was to characterize the feasibility of converting home heating systems to run on biodiesel and to provide the public with educational materials about the benefits of "green fuels" and the use of biodiesel at Sagamore Hill. This summer, the main focus of this multi-year project was to develop these educational materials. First, a tri-fold brochure was developed in Microsoft Word that gave a brief history of Theodore Roosevelt's conservationist efforts, an overview of biodiesel and its environmental benefits, the purpose for burning biodiesel at Sagamore Hill, and other important information regarding alternative fuels. After approval from the public relations department of Brookhaven National Laboratory, the New York State Energy Research and Development Authority, and the National Park Service, this brochure will be available to the public at the Visitor's Center at Sagamore Hill.
Additionally, a PowerPoint presentation on the use of biodiesel at Sagamore Hill was created to provide educators with a more in-depth look at biodiesel and its properties, as well as President Theodore Roosevelt's contributions toward conservation in the United States. To make this presentation as versatile as possible for educators, both history and science concepts were used extensively. The goal is to have this PowerPoint presentation available on the Sagamore Hill National Historic Site website as soon as possible so that teachers from all over the nation will be able to access the information.


Effect of Chemistry on the Life and Performance of High-Power Lithium-Ion Cells. MAGDALENA FURCZON (University of Illinois at Chicago, Chicago, IL, 60680) DANIEL ABRAHAM (Ames Laboratory, Ames, IA, 50011)

High-power battery technology is key to the commercial success of hybrid electric vehicles (HEVs). These vehicles combine the advantages of the extended driving range and rapid refueling capability of a conventional vehicle with the increased fuel economy and reduced exhaust gases of an electric vehicle. The relatively high specific-energy and specific-power characteristics of rechargeable lithium-ion batteries make them an attractive alternative to the nickel metal-hydride batteries used in hybrid vehicles currently on the market. The goal of this project is to determine the suitability of various electrode-electrolyte combinations for HEV applications. The cells typically contain a layered oxide-based positive electrode, a graphite-based negative electrode, and an electrolyte containing an organic solvent and lithium-bearing salts (such as LiPF6). Project activities to date have involved investigation of the effect of alternative salts, such as LiF2B(C2O4) and LiB(C2O4)2, on cell cycling performance. Experiments were conducted on ~2 mAh coin cells and on ~35 mAh cells containing a lithium-tin reference electrode. The cells were electrochemically cycled or subjected to above-ambient temperatures (up to 55 °C). Capacity and impedance measurements were made periodically to determine the deterioration of cell performance with age. Initial data indicate that cells containing the LiF2B(C2O4) salt show better long-term performance than cells containing the LiPF6 and LiB(C2O4)2 salts.


Effects of Friction Stir Processing on Scuffing Resistance, Friction and Wear Performance, and Hardness of Bronze. JOHN JAST (University of Illinois, Urbana-Champaign, IL, 61801) CINTA LORENZO-MARTIN, OYELAYO AJAYI (Argonne National Laboratory, Argonne, IL, 60439)

Friction-stir processing (FSP) is a surface-engineering technology used to locally modify a material's microstructure and eliminate casting defects to enhance specific properties. FSP often results in increased resistance to corrosion, improved strength and hardness, and enhanced formability. The technology involves a rapidly rotating tool with a small-diameter pin (6 mm) and a larger-diameter (16 mm) concentric shoulder being plunged into a surface (a few mm) and traversed along the surface in the direction of interest. Friction between the tool and the workpiece generates heat and deforms the metal (modifying grain size) without melting it. This frictional heating and extreme deformation cause the plasticized material to flow around the tool and merge behind it. Overall, the process creates a finer, homogeneous microstructure with equiaxed grains, increases yield and tensile strengths, and virtually eliminates casting defects such as porosity. The objectives of this project were to determine how FSP affects the friction and wear performance of a material, its resistance to scuffing, and its hardness. To complete these objectives, base bronze and friction-stirred bronze samples were tested using a block-on-ring test rig for friction and wear performance and Vickers tests for quantifying hardness and creating surface hardness profiles. Friction and wear data were compiled and analyzed to determine whether the finer grain size due to FSP resulted in improved friction and wear performance and an increase in scuffing resistance, as expected. In addition, the effect of FSP on a material's hardness was explored. The data show that FSP resulted in a 19-33% increase in the surface hardness of bronze. In addition, FSP resulted in a 72.8% decrease in wear amount and up to a 58.0% decrease in surface friction coefficient. However, the FSP technology had no significant effect on the resistance of bronze to scuffing.
As this investigation continues, the effects of FSP on the friction and wear properties of other materials need to be explored. In addition, further studies need to be conducted to determine the extent to which FSP processing parameters, including tool speed and number of tool passes, affect surface hardness, friction and wear performance, and scuffing resistance.


Effects of Varying Wind Speed on Analytically Predicted Thermal Efficiencies of Solar Collectors. EMILY RADER (North Carolina State University, Raleigh, NC, 27695) TIM MERRIGAN (National Renewable Energy Laboratory, Golden, CO, 80401)

A numerical first-principles-based prediction method could be substituted for experimental testing in order to rate hot-water solar collectors. In addition, such a tool could be used to normalize efficiency results obtained under different wind conditions during testing to a common wind speed. The Collector Design Program (CoDePro) was tested for suitability in meeting these objectives. Efficiencies predicted by CoDePro were compared to experimental test reports for four glazed collectors and three unglazed collectors. Due to uncertainties in basic input data (such as absorber coating properties), the predicted absolute values differed from the test data by up to 20%, so the first objective appears infeasible. These collectors were then analyzed at wind velocities varying from 0 to 10 mph, and an efficiency equation was calculated as a function of wind speed. The efficiencies of the unglazed collectors varied more drastically with wind velocity than did those of the glazed collectors. While CoDePro experienced many problems calculating the efficiencies of unglazed collectors, the predicted efficiencies compared favorably to the experimental results.
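Normalizing test efficiencies to a common wind speed, the second objective above, amounts to evaluating a fitted efficiency-versus-wind-speed relation at a reference speed. The sketch below assumes a simple linear model η(w) = η0 − c·w; the sensitivity coefficients and the sample numbers are hypothetical, not fitted CoDePro output.

```python
# Normalize a measured collector efficiency to a common reference wind
# speed, assuming a linear efficiency-vs-wind model eta(w) = eta0 - c*w.
# The sensitivity c and all sample numbers below are hypothetical.

def normalize_efficiency(eta_test, w_test, w_ref, c):
    """Shift an efficiency measured at wind speed w_test to w_ref (mph)."""
    return eta_test + c * (w_test - w_ref)

# Unglazed collectors are more wind-sensitive than glazed ones,
# so they get a larger assumed coefficient.
eta_glazed = normalize_efficiency(0.70, w_test=7.0, w_ref=3.0, c=0.004)
eta_unglazed = normalize_efficiency(0.62, w_test=7.0, w_ref=3.0, c=0.020)
print(f"glazed: {eta_glazed:.3f}  unglazed: {eta_unglazed:.3f}")
```

The larger assumed coefficient for the unglazed case mirrors the finding above that unglazed efficiencies vary more drastically with wind velocity.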


Electrical Equipment Inspections in Argonne National Laboratory Facilities. CHRISTOPHER CHEN (Purdue University, West Lafayette, IN, 47906) FRANK PERROTTA (Argonne National Laboratory, Argonne, IL, 60439)

Electrical equipment inspections at Argonne National Laboratory are based on eight criteria to ensure the safety of the users. National laboratories have seen many accidents due to improper maintenance or installation of electrical equipment; therefore, the Department of Energy now asks its laboratories to hire Designated Electrical Equipment Inspectors (DEEI). The first criterion used by a DEEI is a solid ground wire that can withstand an instantaneous current of 10 to 65,000 amps. This will prevent electrocution of anyone in close proximity to the equipment. Proper grounding is stressed in the National Electrical Code (NEC), which shows how important a proper ground can be in saving a life. All electricians nationwide, and some worldwide, must follow the NEC whenever any electrical work is done, no matter the scope. This project involved inspections of the electrical equipment at Argonne National Laboratory. Each inspection examined proper grounding, durability of equipment, wire placement, electrical connectivity, heating effects, arcing effects, proper labeling, and the interlocking system. These rules create the safe work environment that has kept the number of workplace accidents at Argonne National Laboratory small over the past five years.


Electricity Market Complex Adaptive System (EMCAS) for Energy Planning. ANGEL REYES-HERNANDEZ and ZULINETTE RODRIGUEZ COLON (University of Puerto Rico at Mayaguez, Mayaguez, PR, 00680) GUENTER CONZELMANN (Argonne National Laboratory, Argonne, IL, 60439)

Energy has been one of the biggest concerns of modern times. This makes it imperative to use tools that can forecast the future needs of electricity generating systems. In these tools, decision analysis techniques are used to help system planners study future changes in energy systems. An example of system planning software is the Wien Automatic System Planning (WASP) program, developed by the International Atomic Energy Agency (IAEA). This tool helps decision makers study the development of an electric generation system and predict appropriate technology for long-term expansion. WASP has been used globally to aid decision makers working with energy systems for the last 30 years. However, recent changes in the power industry, notably privatization and restructuring, have altered the way investment decisions in new power plants are made. Therefore, a new tool, the Electricity Market Complex Adaptive System (EMCAS), has been developed at Argonne National Laboratory to study power system expansion in traditional as well as restructured power markets. EMCAS can help plan short-term operations and long-term system expansion. The EMCAS model simulates the generation investment decisions of decentralized generating companies. This software uses a probabilistic dispatch algorithm to calculate prices and profits for new candidate units in different future states of the system. The objective of this project is to analyze EMCAS. The goal is to produce results under a variety of scenarios and compare them with results from the WASP model in order to gain a deeper understanding of the interactions of generation companies and their impact on investment decisions. This will be particularly valuable when studying a system's long-term expansion. By presenting a simple case in both programs and comparing the results, the value of the new insights provided by EMCAS can be demonstrated.
When fuel prices were raised, the investment decisions produced by EMCAS changed accordingly. In addition, the differences in the results when competition is taken into consideration are presented, showing that adding competition improves the representation of actual markets in EMCAS. As future work, other sensitivity analyses will be performed to further improve our insights into multi-agent investment decisions using the EMCAS software.


Engine Test Cell Preparation and Experimental Apparatus Design. RYAN THORPE (University of Colorado at Boulder, Boulder, CO, 80310) STEVE CIATTI (Argonne National Laboratory, Argonne, IL, 60439)

Biodiesel has been receiving a lot of attention lately with regard to its use as a transportation fuel. Scientists at Argonne's Center for Transportation Research will begin experimenting with biodiesel/butanol fuel. To do research on the biodiesel/butanol fuel, a GM 1.9L diesel engine test cell must be fully operational, safe, and easy to use. Making the engine fully operational required engine disassembly and diagnostic tests. The engine was disassembled to set proper timing, and diagnostic tests were performed to identify DTCs (Diagnostic Trouble Codes). Unfortunately, the GM 1.9L engine still failed to fully operate. Since the ECU's (Engine Control Unit) programming is GM's (General Motors) proprietary information, its architecture is locked, which makes it difficult to troubleshoot engine problems. To fix that problem, an open-architecture ECU will be purchased, allowing researchers to fully monitor ECU activity. That could result in the GM 1.9L engine test cell being fully operational in the future. While preparing the test cell, an optical pressure vessel was designed to explore a novel optical diagnostic technique. If this technique becomes well understood, temperature, pressure, and molecular species can be discerned with temporal resolution from an engine operating at full load. To design the pressure vessel safely, many pressure vessel code handbooks had to be consulted. These handbooks contained the material strength data and solid mechanics formulas that were essential to the design process and enabled the designer to specify the thickness of an optical quartz window that would prevent sudden fracture. The resulting quartz window thickness was calculated to be 7.7 mm. Although 7.7 mm is thicker than most optical quartz windows, the extra thickness is warranted because nearly 25 bar, an unusually high pressure, will be acting on the window.
The final pressure vessel design was simple, compatible with common laboratory fittings and provided a factor of safety of 10 to resist a pressurized explosion. Future work could consist of applying this technique to a production internal combustion engine. That would allow the combustion process to be studied under realistic operating conditions. Keywords: Biofuels, Biodiesel, Butanol, Optical Diagnostics, Test Cell Preparation, Safety, Low Temperature Combustion
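Window-thickness calculations of the kind described above typically follow the flat circular window relation t = r·sqrt(K·SF·P/σ), where K is an edge-support factor, SF the safety factor, P the pressure, and σ the allowable stress. The values below (clamped-edge K, an assumed fused-quartz strength, an assumed aperture radius) are illustrative only, chosen to show the form of the calculation rather than the handbook numbers used in the actual design.

```python
import math

# Flat circular pressure-window sizing, t = r*sqrt(K*SF*P/sigma).
# K, SF, sigma, and the aperture radius are illustrative assumptions,
# not the handbook values used in the design described above.

def window_thickness(radius_m, pressure_pa, sigma_pa, k=0.75, sf=4.0):
    """Minimum thickness of a clamped flat circular window (meters)."""
    return radius_m * math.sqrt(k * sf * pressure_pa / sigma_pa)

t = window_thickness(
    radius_m=0.020,       # assumed 20 mm unsupported aperture radius
    pressure_pa=25e5,     # ~25 bar, as quoted above
    sigma_pa=50e6,        # assumed allowable stress for fused quartz
)
print(f"t = {t*1000:.1f} mm")
```

With these illustrative inputs the formula yields a thickness of the same order as the 7.7 mm quoted in the abstract, which is the kind of cross-check a designer would perform against the handbook result.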


Engineering an Affordable and Handheld Nucleic Acid Dipstick System Prototype for Rapid Field Pathogen Detection. DAVID GEB (University of California, Los Angeles, Los Angeles, CA, 90024) TORSTEN STAAB (Los Alamos National Laboratory, Los Alamos, NM, 87545)

The development of an inexpensive, reliable, user-friendly, and self-contained molecular diagnostic assay is needed for diagnosis of present and emerging infectious diseases in resource-limited settings, and for high-throughput disease screening in developed countries. To facilitate the necessary biochemical reactions, an effective embedded heater and a micro-fluidic dispensing and flow control mechanism must be engineered and controlled by a microprocessor. In engineering the prototype of this device, the heater was designed and located to promote efficient heat transfer to the solution. The liquid dispensing device uses a nickel-chromium alloy heating element to sear open a pressurized capsule. The subsequent pressure equalization in the dispenser expels the liquid that was held in by capillary forces. The micro-fluidic flow control mechanism uses a shape memory alloy as an actuator to regulate flow. A microprocessor electronically controls the heater and the micro-fluidic dispensing and control mechanisms, permitting an autonomous system. Each of the individual components above has been modeled and demonstrated successfully. The next step is to assemble them into a functional Dipstick prototype. This prototype will then serve as a model for the final design stage prior to manufacturing. An effectively engineered system will allow the Dipstick to perform at its full potential. Such a design has the potential to revolutionize clinical diagnostics and afford improved health care to people throughout the world.


Ensuring the Coplanarity of Tiled Optical Surfaces. MICHAEL DAWSON-HAGGERTY (Tufts University, Medford, MA, 02155) PAUL O'CONNOR (Brookhaven National Laboratory, Upton, NY, 11973)

The Large Synoptic Survey Telescope (LSST) is a proposed ground-based telescope that would have a uniquely large field of view, allowing it to image a large section of sky. This introduces numerous engineering challenges, one of which, addressed in this project, is the coplanarity requirement of the focal plane: a peak-to-valley requirement of 6.5 microns. The surface consists of a mosaic of imaging chips divided onto “rafts,” 120-millimeter square packages each holding nine chips. Each chip is mounted at three points, with a sixty-micron-per-turn differential screw at each. These mounting points allow the surface of the sensors to be adjusted with a resolution of less than one micron by combining instant position feedback from a 0.1-micron resolution confocal laser displacement sensor with the precise threads. To bring the surface to flatness, the displacement sensor was positioned over the mounted tiles by a gantry system, and the value was read. To eliminate the mechanical error from the gantry, we then repeated the scan on a control surface, a commercial optical flat with a specified coplanarity of several wavelengths of light, and subtracted the control surface from the data. There was a significant amount of drift in the gantry due to temperature, but we were able to bypass this by scanning fewer points. We then fit a plane to each sensor and used the average plane to calculate the adjustment for each differential screw. By repeating this process we were able to bring the average planes flat to 1.8 microns and the total peak-to-valley flatness under 6.5 microns. While we were able to bring the surface to the required flatness, there are concerns about the stability of the positioning once the raft is cooled to its operating temperature. Further study and extensive stability testing will be required to ensure that the final focal plane meets all specifications.
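The plane-fitting and screw-adjustment step described above can be sketched as follows; the least-squares formulation, the function names, and the use of NumPy are illustrative assumptions rather than the project's actual code:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane z = a*x + b*y + c through (x, y, z) samples."""
    pts = np.asarray(points, dtype=float)
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    coeffs, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    return coeffs  # (a, b, c)

def screw_turns(height_error_um, pitch_um_per_turn=60.0):
    """Turns of a sixty-micron-per-turn differential screw needed to
    null a residual height error at a mount point."""
    return height_error_um / pitch_um_per_turn
```

For example, a residual height error of 30 microns at a mount point corresponds to half a turn of the 60-micron-per-turn screw.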


Enzyme Use in Fuel Cell Applications. SAMUEL LABARGE (Rensselaer Polytechnic Institute, Troy, NY, 12180) ABHIJEET BOROLE (Oak Ridge National Laboratory, Oak Ridge, TN, 37831)

Fuel cells are ideal for many applications, but their commercial development has been hindered significantly by problems associated with catalysis. Platinum-catalyzed fuel cells, for example, provide higher power densities than fuel cells catalyzed by other metals. Unfortunately, platinum is expensive and quantities are limited. One alternative to catalysis by platinum is the use of microbes and of enzymes purified from microbes and fungi. Most fuel cells catalyzed by enzymes use a planar electrode with alcohols and glucose as the fuel. Our research focuses on developing a three-dimensional enzymatic fuel cell (EFC). We used laccase from Trametes versicolor (a polypore mushroom) as the catalyst. This laccase is a copper-centered enzyme that reduces oxygen in air to water. Our device consists of an anode compartment with a platinum-carbon catalyst separated from a cathode compartment by an ion-selective Nafion® membrane supported on a 1-cm2 by 0.5-cm carbon felt. The cathode compartment contains a carbon support and a solution of laccase. We attempted to optimize the cathode side of the fuel cell by evaluating its power density in response to (1) variations in concentrations of the enzyme and electron mediator, and (2) differences in the concentration of buffer and solution-loading volume. In response to these variations, we measured fuel cell power densities ranging from 1 to 12 W/m2. While these power densities are still an order of magnitude lower than those of Pt fuel cells, EFCs have the potential to provide cost-effective mobile power subject to further improvements in power density. Continuing development will characterize and optimize the stability of the enzyme on the carbon support to provide longer lifetimes and higher power outputs.


Ergonomics Evaluation of Single Channel Pipettes. MONICA LICHTY (University of Michigan, Ann Arbor, MI, 48104) IRA JANOWITZ (Lawrence Berkeley National Laboratory, Berkeley, CA, 94720)

At the Lawrence Berkeley National Laboratory (LBNL) and other science laboratories worldwide, ergonomics is an important issue. Repetitive laboratory activities such as pipetting can cause cumulative trauma disorders (CTDs) for scientists and technicians. The role of ergonomics in a laboratory setting is to assist in the layout of the workplace and the design of tools to fit the worker, reducing risk factors that can contribute to CTDs. The main purposes of this study were to identify relationships between user characteristics and pipette preferences and to identify trends in user preferences. The study consisted of 21 users who completed a pipetting task with 5 manual and 5 electronic pipettes. Prior to completing the task, each participant completed a brand preference questionnaire to identify their existing biases towards certain pipette brands and models. Their hand length, breadth, strength, and digit 1 (thumb) length were measured. They then filled one 8-well row of a 96-well plate with each pipette, completing 3 volume changes during the cycle. After filling a row, they completed a rating sheet that utilized a series of visual analog scales (VAS) to rate the pipettes on a variety of attributes. A Friedman test with a Tukey follow-up was used to determine the statistical significance of the ratings. Although no single pipette was significantly better than all of the others for all of the attributes tested, there were some significant relationships among pipettes for single attributes. Two sets of rankings were formed using median ratings; the Rainin Pipet-Lite was the dominant manual pipette while the Thermo Finnpipette Novus was the top-ranking electronic pipette. The Finnpipette surpassed all of the manual pipettes except for the Rainin Pipet-Lite. This is a notable advance for electronic pipettes, since they provide lower plunger forces and reduced repetition through the use of programs. There was no significant correlation between any measured personal characteristics and the ratings of individual pipettes. These results, combined with the remarks given by many participants, emphasize that the task and setting of use contribute to pipette preferences, although the significance of the rankings suggests that certain pipette features are broadly desirable.


Evaluating Data Center Indoor Air Quality. BENJAMIN CHU (University of California Berkeley, Berkeley, CA, 94720) ASHOK GADGIL (Lawrence Berkeley National Laboratory, Berkeley, CA, 94720)

Data centers are important locations for storing information, but their energy demands are steadily rising. An emerging technology to alleviate this problem is the air-side economizer. Economizers work by turning off the power-consuming chillers and bringing in large amounts of cool outside air. This cutback of chiller operating hours results in energy savings. However, urban air contaminants can lower the reliability of the electronics in the data centers. Economizing seems like a promising way to reduce energy use, yet there is no research that documents the effect of economizers on indoor air quality. The industry is hesitant to install economizers because indoor contamination could negate any energy savings. This experiment seeks to determine whether the indoor air quality inside an economizer-based data center is clean enough for the servers. To accomplish this, the concentration and sizes of the particulate matter were measured both inside and outside of the data center. Temperature and relative humidity sensors were used to analyze the indoor environment of the data center. Speciation of the aerosol constituents is to be done using ion chromatography. A comparison of indoor and outdoor air shows that indoor particle concentrations rise when the economizer is on, due to the intake of outside air. Despite this increase, the concentrations are well below EPA standards, which indicates that economizer use does not pose a risk to servers.


Evaluation of Driving and Charge Cycle Impacts on PHEV Technologies. MICHAEL KUSS (Embry-Riddle Aeronautical University, Daytona Beach, FL, 32114) TONY MARKEL (National Renewable Energy Laboratory, Golden, CO, 80401)

Plug-in hybrid electric vehicles (PHEVs) have the ability to displace substantial portions of gasoline consumption and vehicle tailpipe emissions compared to conventional vehicles and non-plug-in hybrid electric vehicles (HEVs). However, these fuel savings and emissions reductions are highly dependent upon the distance driven and individual driver behavior, as well as the frequency at which the batteries are charged and discharged. A more quantitative understanding of these cycling effects is needed, and thus several vehicle models were created and simulated over thousands of real-world driving cycles collected as part of regional transportation surveys. Several energy storage system (ESS) sizes and charging methods are simulated. Fuel consumption and emissions results under various cycling conditions are compared in order to quantify the fuel savings achievable with smaller, lower-cost energy storage systems.


Evaluation of the Catalytic Performance of Copper in the Oxidation of Biodiesel. CHRISTOPHER BROWN (Clarkson University, Potsdam, NY, 13699) C.R. KRISHNA (Brookhaven National Laboratory, Upton, NY, 11973)

Fatty Acid Methyl Ester (FAME) made to the American Society for Testing and Materials (ASTM) standard D6751 is a renewable fuel that, when domestically produced, can decrease dependency on foreign energy while reducing harmful emissions including sulfur oxides, particulates, and CO2. FAME, commonly known as biodiesel, is often associated with natural degradation and poor stability properties. Given this association, the industry is concerned that copper fuel system components will catalyze biodiesel degradation. The purpose of this study is to quantitatively evaluate the catalytic performance of copper fuel system components in the oxidation of biodiesel over time. It was hypothesized that, with age, copper tubing used in the field would have significant coatings and deposits from previous fuel degradation that would limit the catalytic effects of copper on biodiesel through minimal surface area contact. In order to determine the catalytic effect of new copper on biodiesel, samples of 100 percent biodiesel (B100) were aged in tubing utilized by industry fuel systems. To age the fuel samples, a temperature-controlled environment was utilized to thermally accelerate the fuel’s effective exposure to the copper. The aged fuel was then analyzed through Fourier Transform Infrared (FTIR) spectroscopy to measure the change in percent of infrared (IR) absorption across the IR spectrum. In the oxidation of biodiesel, peroxides are among the first components to be produced. Based on the IR absorption signatures that specific compounds produce, it is possible to correlate an increase in absorption of IR light near the wavenumber of 3500 cm-1 with the formation of peroxides. The relative quantity of peroxide formation is indicated by the total change in IR absorption. Through this technique, it is possible to determine the magnitude of oxidative degradation in biodiesel.
The FTIR results have shown that the catalytic performance of new copper in biodiesel is not measurable. New copper performed identically to quartz and stainless steel. Because of this, it was not necessary to test the performance of aged copper. These results have produced a broad mapping of peroxide formation in biodiesel. From this, it can be concluded that the overall catalytic performance of copper is low under these conditions, and that copper-based fuel systems are biodiesel compatible.
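As an illustration of the analysis described above, the growth of the peroxide band can be quantified as the change in integrated absorbance around 3500 cm-1; the 3400-3600 cm-1 integration window and the function names below are assumptions, not the study's actual processing:

```python
import numpy as np

def band_area(wavenumbers, spectrum, band=(3400.0, 3600.0)):
    """Trapezoidal integral of absorbance over a wavenumber window."""
    w = np.asarray(wavenumbers, dtype=float)
    y = np.asarray(spectrum, dtype=float)
    mask = (w >= band[0]) & (w <= band[1])
    w, y = w[mask], y[mask]
    return float(np.sum((y[1:] + y[:-1]) * np.diff(w)) / 2.0)

def peroxide_absorbance_change(wavenumbers, fresh, aged, band=(3400.0, 3600.0)):
    """Increase in integrated absorbance in the peroxide O-H band
    between a fresh and an aged spectrum."""
    return band_area(wavenumbers, aged, band) - band_area(wavenumbers, fresh, band)
```

A larger change corresponds to more peroxide formation and hence more oxidative degradation.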


Explaining Leakage in HTS Wires and Cables. JAMES ELLIAS (University of Southern California, Los Angeles, CA, 90089) ARNO GODEKE (Lawrence Berkeley National Laboratory, Berkeley, CA, 94720)

Bi-2212 superconducting wire has a much higher critical temperature and critical current than NbTi, making it a promising replacement material for the construction of high-field accelerator magnets. Bi-2212 powder filaments are placed in a silver matrix within silver wires. These wires are then wound into cables and heat treated at 890 °C in a pure oxygen environment to allow the powder grains to sinter together. When these cables are covered in insulation, Bi-2212 is observed to leak through the casing during heat treatment. Possible reasons for leakage include cracks in the wires, diffusion of Bi-2212 through the matrix, and cracks created in the winding process. This project tests all three of these possibilities. Chemical penetrants are applied to new wires to detect possible cracks. Scanning electron microscopy (SEM) is used to search for cracks in both green and heat-treated wires. SEM is also used to examine cross sections of wires in order to find Bi-2212 constituents outside the filaments, which would indicate diffusion through the silver matrix. Slurries made from oxide powders are applied to the surface of wires. Slurries will also be applied to the edges of cables to find out if the cable winding process increases the chances of leakage. The wires and cables are then heat treated and leakage is observed with both an optical microscope and an SEM. Penetrant tests and SEM examination revealed no signs of cracks on green wires. SEM examination of the surface of heat-treated wires found no Bi-2212 constituents. SEM examination of wire cross sections found a ring of Bi-2212 constituents between the outer casing of the wire and the silver matrix. This indicates that intense heat treatments may create a void between the casing and the silver-supported matrix, which might stimulate leakage. The slurry tests discovered leakage on some of the wire samples and one of the cable samples, indicating leakage during heat treatment. Leakage observed in cable samples was much greater, implying that the cable winding process exacerbates, but does not cause, leakage.


Formation and Dissociation of Methane Hydrates in Consolidated Sand: Duplicating Methane Hydrate Dynamics beneath the Seafloor. KRISTINE HORVAT (Stony Brook University, Stony Brook, NY, 11794) DEVINDER MAHAJAN (Brookhaven National Laboratory, Upton, NY, 11973)

Methane hydrates, inclusion compounds in which water molecules form an icy cage around a methane molecule, occur in permafrost and marine environments where high-pressure and low-temperature conditions coexist. Estimates that more methane exists in nature in the form of hydrates than in any other fossil fuel have sparked interest in hydrates as a potential energy supply for decades. However, there is growing evidence that methane hydrates play a crucial role in seafloor stability and global warming. To address these problems, more must be understood about sediment-hydrate interaction during dissociation, so in this study seafloor conditions were duplicated in the Flexible Integrated Study of Hydrates (FISH) Unit to observe the pressure, temperature, and gas output responses upon methane hydrate formation and dissociation. The FISH Unit consists of a pressurized Temco cell filled with water-saturated Ottawa sand maintained at low temperature and high pressure. After charging with methane gas, methane hydrate formation pressure-temperature (PT) kinetics are monitored over time until the pore pressure asymptotes at the hydrate equilibrium pressure. Dissociation is achieved through depressurization based on equilibrium predictions for pure water and pure methane, and the cooling due to the endothermic nature of hydrate dissociation is observed throughout the sample using thermocouples placed at different lateral and radial positions. During dissociation, temperature monitoring shows that the center of the cell drops in temperature more rapidly than the middle radius, indicating that hydrates start to dissociate from the center of the sample towards the walls. In addition, calculations using the collected PT data yield an enthalpy of dissociation of 59.45 kJ/mol, which is consistent with the previously reported value of 56.43 kJ/mol. Also, the post-depressurization PT equilibrium fit theoretical data for methane hydrates; therefore, methane hydrates were indeed formed.
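For reference, an enthalpy of dissociation can be estimated from PT equilibrium data via the Clausius-Clapeyron relation; this sketch assumes just two equilibrium points and a gas compressibility factor of 1, a simplification of whatever calculation the study actually used:

```python
import math

R = 8.314  # universal gas constant, J/(mol*K)

def dissociation_enthalpy(p1, t1, p2, t2, z=1.0):
    """Clausius-Clapeyron estimate of hydrate dissociation enthalpy (J/mol)
    from two equilibrium points: dH = -z * R * d(ln P)/d(1/T).
    Pressures in any consistent unit; temperatures in kelvin."""
    slope = (math.log(p2) - math.log(p1)) / (1.0 / t2 - 1.0 / t1)
    return -z * R * slope
```

In practice one would fit the slope over many equilibrium points rather than just two, and include a non-unity compressibility factor.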


Geographic Information Analysis for Wind Resource Assessments and Cost Path Optimization. BENJAMIN NGUYEN (University of Washington, Seattle, WA, 98195) MICHAEL KINTNER-MEYER (Pacific Northwest National Laboratory, Richland, WA, 99352)

Twenty-three states have committed to developing more renewable electric generation technologies in the next 10-15 years. There is a growing need for analysis tools to estimate the cost associated with interconnecting future wind farms to the existing electric grid. In this study, resource assessment tools were implemented that first estimated the available wind resources, in terms of installed capacity, for the state of Washington. The capacity was a function of the area within a given proximity to existing bulk power transmission lines. For estimating the cost associated with connecting a future wind energy site, an optimal routing tool was used in a geographic information system (GIS) to determine the least-cost path from a future site to the existing grid. A cost function was defined that based transmission-line construction cost on terrain, slope, distance, and land use. The resulting assessments showed the total potential capacities of wind farms within a range of 5-15 km from existing power lines. The optimal routing function accurately estimated the costs of building transmission lines, and it clearly showed the ability to find the least-cost path; the paths created avoided high-cost zones such as mountainous terrain. The study was a stepping stone for larger and broader applications such as the development and optimization of other natural resources. Other projects might include cost analysis of building new highways and laying oil pipelines underground. With the majority of wind resources located in the Midwest, the hope is to perform large-scale assessment and analysis of future wind farms within those states.
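Least-cost routing over a gridded cost surface is typically a shortest-path search; the following sketch uses Dijkstra's algorithm on a small cost raster and is only illustrative, since the GIS tool's actual algorithm and cost weights are not specified here:

```python
import heapq

def least_cost_path(cost, start, goal):
    """Dijkstra over a 2-D cost raster; stepping into a cell incurs
    that cell's weight. Returns (total cost, list of cells)."""
    rows, cols = len(cost), len(cost[0])
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, cell = heapq.heappop(pq)
        if cell == goal:
            path = [cell]
            while cell in prev:
                cell = prev[cell]
                path.append(cell)
            return d, path[::-1]
        if d > dist.get(cell, float("inf")):
            continue  # stale queue entry
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = cell
                    heapq.heappush(pq, (nd, (nr, nc)))
    return float("inf"), []
```

On a raster with one expensive column, the cheapest route detours around it, mirroring how the GIS paths avoided mountainous terrain.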


Greenhouse Gas Emissions Life Cycle Assessment and Carbon Payback Times for Parabolic Trough Concentrating Solar Power Storage Systems. TERESE DECKER (University of Colorado at Boulder, Boulder, CO, 80310) CHUCK KUTSCHER (National Renewable Energy Laboratory, Golden, CO, 80401)

Concentrating solar power (CSP) systems have the potential to provide the world with clean, renewable, cost-competitive power on a large scale. CSP plants emit virtually no greenhouse gases during operation because they utilize solar energy and do not burn any fossil fuel. One key advantage of CSP plants over other renewable technologies is their ability to store energy and produce electricity during the period of highest power demand, thus helping to meet the utility’s peak demand. This study determines how much carbon will be emitted for two CSP storage systems and how long it will take each system to “pay back” the energy that was consumed, or the emissions that were caused, over the system’s entire life cycle. The plant studied in this project is a hypothetical 100 MW parabolic trough plant located in the southwest United States. After completion of a thorough materials list, several life cycle inventory (LCI) databases and programs were used to track the greenhouse gas emissions associated with each material and so determine a carbon footprint for the system. In most cases, the output from the LCI program includes the emissions to air, water, and soil from all steps of production, including varying levels of transportation. In the LCA discussed in this paper, the focus is on greenhouse gas emissions to the air from the materials included in the thermal energy storage system. Two LCAs were performed and compared in order to determine which storage system has the lesser carbon footprint over its life cycle. This study has shown that a two-tank indirect storage system emits approximately 90 million kg of carbon over its lifetime while a thermocline storage system emits 29.4 million kg of carbon over its lifetime. This equates to a carbon payback time of 3.5 years for the two-tank system and 1.1 years for the thermocline system. Since the thermocline system would reduce both total costs and carbon emissions, those involved with the CSP plants currently in development should consider accelerating its development in order to employ it in CSP plants as soon as possible. Since the molten salt is the largest contributor to the system’s climate impact, it would be helpful to research other fluids and salts in order to develop a storage medium with a smaller carbon footprint, thereby reducing the total footprint of the plant.
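The payback arithmetic is simply life-cycle emissions divided by the emissions avoided each year of operation; the annual avoided-emissions figure used below (about 26 million kg per year) is an assumption back-computed from the abstract's numbers, not a value from the study:

```python
def carbon_payback_years(lifecycle_kg_co2, annual_avoided_kg_co2):
    """Years for avoided grid emissions to repay the storage system's
    life-cycle emissions."""
    return lifecycle_kg_co2 / annual_avoided_kg_co2

# Illustrative: both systems assumed to avoid ~26 million kg/yr
two_tank = carbon_payback_years(90e6, 26e6)      # ~3.5 years
thermocline = carbon_payback_years(29.4e6, 26e6)  # ~1.1 years
```

This reproduces the roughly threefold shorter payback of the thermocline system reported above.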


Heavy Truck Duty Cycle Data Collection. FIONA DUNNE (University of California, Santa Barbara, Santa Barbara, CA, 93106) GARY CAPPS (Oak Ridge National Laboratory, Oak Ridge, TN, 37831)

Real-world data on Class-8 truck operation is necessary for fuel efficiency studies as well as for use in vehicle powertrain design software. To gather this data, six Class-8 trucks were instrumented with a data acquisition system (DAS) and a set of sensors to monitor numerous vehicle performance parameters from engine to tires, as well as weather conditions, road slope, and load weight, over a one-year period. First, each truck’s J1939 vehicle network was tested to learn what vehicle performance information was available on it and how to retrieve the data of interest. The other sensors and the DAS were then installed on each truck, and all data was recorded to the DAS as the trucks continued in regular operation. During operation, data was checked weekly for errors to determine whether the equipment was functioning correctly. By checking the data, it was discovered that weather sensors began failing from water entry due to unexpected pressure washing of the trucks. Load weight data was found to be inaccurate, as truck drivers had not correctly calibrated the weighing system. Road slope and vehicle network data results were as expected. It was concluded that weather sensors should be covered during pressure washing, and an alternative method for calibrating the weighing system was devised. It was also concluded that the method used to obtain road slope, a derivation from GPS vertical and ground velocity data, was adequate. Finally, it was determined that no changes needed to be made in the method of communication with the vehicle network. Data will continue to be checked for errors throughout the remainder of the one-year test, and changes will be made as necessary.
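The road-slope derivation mentioned above (grade from GPS vertical velocity and ground speed) reduces to a simple ratio; this is a sketch under that simplification, since the project's exact formula and filtering are not described:

```python
def road_slope_percent(vertical_velocity, ground_speed):
    """Road grade (%) from GPS vertical velocity and ground speed,
    both in the same units (e.g., m/s). Valid only while moving."""
    if ground_speed <= 0:
        raise ValueError("vehicle must be moving")
    return 100.0 * vertical_velocity / ground_speed
```

For example, climbing at 1 m/s while traveling 20 m/s over the ground corresponds to a 5% grade; real data would also need smoothing of the noisy GPS velocities.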


High Flux Isotope Reactor Modern Safety System. ABDIEL QUETZ (Andrews University, Berrien Springs, MI, 49104) CARL SCHEPENS (Oak Ridge National Laboratory, Oak Ridge, TN, 37831)

Many components of the Reactor Safety System (RSS) at the ORNL High Flux Isotope Reactor (HFIR) are 1960s vintage: they are becoming increasingly difficult to maintain or replace. The objective of this study was to design, construct, and test a new, state-of-the-art, computer-based system to replace the HFIR’s current RSS. The concept for the new RSS is a digital Programmable Logic Controller (PLC) equipped with algorithms designed to replicate the calculations and outputs of the original analog equipment. The PLC will use existing monitored plant parameters as inputs. The algorithm uses heat power (in megawatts) and the actual measured neutron flux over time to determine a corrected neutron flux level referenced to the more accurate heat power readings. Testing of the new PLC system demonstrated that, with identical inputs, the algorithm in the PLC reproduced the results of the present system. The PLC has self-diagnostic capabilities, which will make the equipment easier to monitor. Using this state-of-the-art system will facilitate system maintenance, increase system reliability, and reduce the time and costs to maintain and troubleshoot HFIR operations.
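Purely as an illustration of how a flux reading could be referenced to heat power (the actual HFIR algorithm is not reproduced here), one could scale the measured flux by the ratio of heat power to flux-derived power, applying only a fraction of the correction per update step:

```python
def corrected_flux(measured_flux, heat_power_mw, flux_power_mw, gain=0.1):
    """Hypothetical one-step correction that nudges the flux-derived
    power toward the more accurate heat-power reading. The gain
    controls how much of the discrepancy is applied per step."""
    if flux_power_mw <= 0:
        raise ValueError("flux-derived power must be positive")
    ratio = heat_power_mw / flux_power_mw
    return measured_flux * (1.0 + gain * (ratio - 1.0))
```

With agreement between the two power readings the flux passes through unchanged; a persistent discrepancy gradually rescales it.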


Imaging Residual Immiscible Fluids at Different Wettability Conditions of Permeable Media. MAUDE JOHNSON, MEAGAN PINKNEY, and LINDSEY THOMAS (Southern University and A&M College, Baton Rouge, LA, 70813) RIYADH AL-RAOUSH (Argonne National Laboratory, Argonne, IL, 60439)

In this research, microtomography was used to image the residual saturation and pore-scale distribution of an organic immiscible fluid (Soltrol 220) at different wettability conditions. Boron carbide with a grain size of 250 microns was chosen as the porous medium to represent natural sand particles. The boron carbide was treated to obtain systems with different fractional wettability. Four levels of wettability were studied, expressed as a percentage ratio of water-wet (ww) to oil-wet (ow) porous media: 100%ww / 0%ow, 75%ww / 25%ow, 50%ww / 50%ow, and 0%ww / 100%ow. A column with an inside diameter of 6.0 mm and a height of 83.5 mm was used to conduct the flow and entrapment of the immiscible organic fluid at the residual level. The wettability was changed using the Bradford and Leij (1997) procedure: adding two percent by volume of octadecyltrichlorosilane (OTS) to ethanol and saturating the boron carbide completely. Doping the boron carbide with OTS provides contrast between the de-aired water and the organic immiscible fluid in the imaging. Results show that the 100%ww / 0%ow system contains a minimal amount of residual saturation, meaning the Soltrol 220 is located in small pores where it appears trapped between tightly packed grains of boron carbide, while the 0%ww / 100%ow system has the most residual saturation, with the Soltrol 220 largely distributed among the large and medium pores and the de-aired water located in smaller pores. The research shows that microtomography is an effective and efficient tool to non-destructively create 3-D images of the oil’s distribution. For future research, characterization of the pore-scale distribution of the organic fluid, including locations, shapes, sizes, interfacial areas, orientations, and correlation at different wettability conditions of the porous media, will be studied.


Implementation of a Proactive Diagnostic Testing Algorithm for Building Automation Systems. SCOTT KENEALY (University of Nebraska, Omaha, NE, 68182) SRINIVAS KATIPAMULA (Pacific Northwest National Laboratory, Richland, WA, 99352)

Building automation systems (BASs) have come into widespread use in commercial buildings as computing power has become easily available, communication protocols have become reliable and standardized, and open control system architectures have driven down the price and commitment necessary to implement a BAS. These systems have the potential to increase building energy efficiency by 15 to 30 percent. In many cases, improper maintenance causes BASs to operate sub-optimally. Because a BAS can only work efficiently with accurate data, many of these problems are rooted in sensors that need recalibration or replacement. Researchers at Pacific Northwest National Laboratory and Portland Energy Conservation have created an algorithm to automatically detect and diagnose faults in BASs caused by faulty sensors and malfunctioning equipment. This algorithm monitors building conditions through a supervisory control and data acquisition (SCADA) network. If a fault is detected, the BAS initiates tests to isolate the problem and recover from it. A test for diagnosing problems in temperature sensors was implemented in the Python programming language and loaded into an existing SCADA network. This test determines which of three sensors has faulted. Once the faulty sensor is identified, the fault is diagnosed and, based on the results, the sensor is recalibrated or a replacement request is issued. This test operates using only existing sensors, coupled with the knowledge that at least one sensor is inaccurate. Automated testing ensures accurate data for systems management, which, in turn, reduces wear and tear on the system, increases occupant comfort, and decreases operating expense. This work allows one such test to be performed utilizing existing systems, without the need to purchase additional hardware.
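A minimal sketch of how three redundant readings can isolate a single bad sensor; the median-deviation test and the tolerance value below are illustrative assumptions, not the implemented diagnostic:

```python
def faulty_sensor_index(readings, tolerance=0.5):
    """Identify the odd sensor out among three redundant temperature
    readings: the one farthest from the median, if it disagrees by
    more than `tolerance` degrees. Returns None if all agree."""
    if len(readings) != 3:
        raise ValueError("expects exactly three readings")
    median = sorted(readings)[1]
    deviations = [abs(r - median) for r in readings]
    worst = max(range(3), key=lambda i: deviations[i])
    return worst if deviations[worst] > tolerance else None
```

With two sensors in agreement and one far off, the outlier is flagged for recalibration or replacement; when all three agree the test passes.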


Improving High-Performance CdTe Solar Cell Processing Through Programming. ISAAC SACHS-QUINTANA (New Mexico Institute of Mining and Technology, Socorro, NM, 87801) BREN NELSON (National Renewable Energy Laboratory, Golden, CO, 80401)

Programming can be used to improve the quality and the quantity of photovoltaic research and development by controlling experiments, gathering data, and performing calculations. Programs were created in each of these areas using the LabVIEW programming language. The programs were designed, tested, and revised using the best programming techniques as defined by the LabVIEW community. Empirical tuning of a proportional-integral-derivative (PID) controller algorithm was investigated. It was found that the best LabVIEW programming practices are necessary in large applications; however, they do not always apply when rapid prototyping is required or when there are large amounts of calculation. The programs created improved photovoltaic research and development even though some of them deviate from the best programming practices.
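For context, the controller being tuned follows the standard PID law, combining proportional, integral, and derivative terms on the error; the discrete sketch below is written in Python rather than LabVIEW, with hypothetical gains, and is illustrative only:

```python
class PID:
    """Textbook discrete PID controller: u = Kp*e + Ki*sum(e*dt) + Kd*de/dt."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = 0.0
        if self.prev_error is not None:
            derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)
```

Empirical tuning then amounts to adjusting the three gains until the controlled variable settles quickly without excessive overshoot.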


Inkjet Printing of Metal Front Contacts on Cu(In1-xGax)Se2 Thin Film Solar Cells. JOHN KREUDER (Rochester Institute of Technology, Rochester, NY, 14623) MAIKEL VAN HEST (National Renewable Energy Laboratory, Golden, CO, 80401)

Inkjet printing as a manufacturing process is poised to become a viable means of mass producing solar cells at the industrial level. This experiment focuses on optimizing the method of inkjet printing nickel and silver front contacts onto copper-indium-gallium-diselenide (CIGS) thin film solar cells. The proprietary organometallic inks developed by the National Renewable Energy Laboratory (NREL) are deposited using a Fujifilm Dimatix™ Materials Printer retrofitted with a hot plate to allow substrate temperatures of up to 300°C. Substrate temperature and drop spacing are manipulated to produce uniform, continuous metal lines with low electrical resistivity. A Transfer Length Method (TLM) test is performed to ensure low electrical contact resistance. Contacts on CIGS cells measuring approximately 0.45 cm2 were printed and characterized. Nickel-silver layered contacts were printed on the CIGS cells, with nickel acting as a barrier to stop silver from diffusing into the solar cell at the elevated printing temperature. The resulting CIGS cells achieved efficiencies as high as 11.58%, comparable to a reference cell with evaporated contacts. Higher efficiencies are achievable if the printing process can be shortened or the required substrate temperature for ink decomposition can be lowered.
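In a TLM measurement, the total resistance between contact pads is fitted linearly against pad spacing: the intercept gives twice the contact resistance and the slope reflects the sheet resistance. A sketch with hypothetical data (the function name and values are not from the experiment):

```python
import numpy as np

def tlm_contact_resistance(spacings_um, resistances_ohm):
    """Transfer Length Method: linear fit of total resistance vs. pad
    spacing. Intercept = 2 * contact resistance; slope is proportional
    to the sheet resistance of the layer under test."""
    slope, intercept = np.polyfit(np.asarray(spacings_um, dtype=float),
                                  np.asarray(resistances_ohm, dtype=float), 1)
    return intercept / 2.0, slope
```

A small intercept relative to the fitted line confirms the low contact resistance the TLM test is meant to verify.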


Investigating Energy Use in the United States Through the Update of an Industrial Modeling System. AMY HAMMERVOLD (Oregon State University, Corvallis, OR, 97330) JOE ROOP (Pacific Northwest National Laboratory, Richland, WA, 99352)

Simulations of energy use in the United States are important to help policy makers determine the influence of energy supply and demand on the economy. One way to investigate this influence is by utilizing a model and associated database called CIMS (Consolidated Impacts Modeling System). CIMS is a set of economic, energy, and materials models designed to provide information to policy makers on the ability of their policies to achieve specific objectives and the likely costs of achieving those objectives under the model’s simulations. CIMS covers the entire United States economy and requires both engineering and economic information. The engineering part of the simulation illustrates the energy consumption of each type of energy service technology and explains the relationships between energy services. The economic side of the simulation helps to establish which technologies are preferable through cost minimization. The CIMS model has been updated by adding a new sector in addition to updating sectors already in the model. The petrochemical industry was added to the CIMS model to differentiate it from petroleum refining, in order to gain a better understanding of the distinction between the manufacturing of petrochemicals and the manufacturing of refined petroleum products. The model was also updated within the mining sector. To build the new sector, data was compiled from a variety of sources available through online databases and peer-reviewed journals. The update of the CIMS model helps create a thorough representation of energy use in the United States and provides policy makers with specific information that will help inform their future technology acquisition decisions. This research is part of a larger ongoing study, and the model will continue to be updated in the future.


Life Cycle Cost Analysis of Battery Storage for Regulation Services. ERIN SOLVIE (Oklahoma Christian University, Edmond, OK, 73136) MICHAEL KINTNER-MEYER (Pacific Northwest National Laboratory, Richland, WA, 99352)

With the anticipated growing contribution of intermittent renewable energy resources to the generation mix, capacity requirements for regulation services are expected to increase to manage the fluctuations in the new electricity supply. Commonly, combustion turbines (CT) are used to provide regulation services in the electric grid. Batteries are an alternative technology for regulation services. Recently, the use of electric vehicles (PHEVs and EVs) for grid services has received public attention. This concept, usually referred to as Vehicle-to-Grid (V2G), is based on the assumption that the battery in a PHEV or EV could be utilized for regulation services while the vehicle is parked for long periods during the day. This research analyzes the cost competitiveness of replacing CTs as the technology of choice with stationary batteries. The comparison was based on a twenty-year life cycle cost (LCC) and used the cost, performance, and life degradation characteristics of currently available batteries designed for transportation purposes. The results of the LCC analysis suggest that current battery technology would reduce emissions but would be more expensive than CTs. However, if battery technology achieves the cost and performance targets set by the United States Advanced Battery Consortium (USABC) for 2016, then replacing the CTs with battery storage would provide significant savings.
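The abstract's twenty-year life cycle cost comparison can be sketched as a net-present-value calculation. This is a minimal illustration only; the capital and O&M figures below are hypothetical placeholders, not the study's actual inputs, and the real analysis also included battery degradation and emissions.

```python
# Illustrative 20-year life cycle cost (LCC) comparison between a combustion
# turbine (CT) and a battery system for regulation services. All dollar
# figures and the discount rate are hypothetical placeholders.

def lcc(capital, annual_om, years=20, discount_rate=0.05):
    """Net present value: capital cost plus discounted annual O&M costs."""
    pv_om = sum(annual_om / (1 + discount_rate) ** t
                for t in range(1, years + 1))
    return capital + pv_om

ct_lcc = lcc(capital=800_000, annual_om=60_000)
battery_lcc = lcc(capital=1_500_000, annual_om=25_000)
print(f"CT LCC:      ${ct_lcc:,.0f}")
print(f"Battery LCC: ${battery_lcc:,.0f}")
```

Whichever technology yields the lower NPV over the planning horizon is the cost-competitive choice; the study's conclusion turns on where real battery costs fall relative to the USABC targets.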


Linking Field-Programmable Gate Arrays to a Linux PC. MARCUS BAINES (Norfolk State University, Norfolk, VA, 23504) RYAN HERBST (Stanford Linear Accelerator Center, Stanford, CA, 94025)

The purpose of this project is to provide a high-speed data link between FPGA-based designs and a Linux PC. This is a general-purpose design that will be used in many applications, with the primary target being ILC detector development. The project is broken into three pieces: the hardware interface, the software API (application programming interface), and the data transfer between the hardware and the software. Using two languages, C++ and VHDL, the hardware and software will be able to break packets that are too large (larger than 8 KB) into smaller frames and send them over Ethernet, using the PGP protocol, at a theoretical rate of 125 MB per second. Also, through this protocol, the user interface will be rate-matched with the Ethernet MAC so that handshaking is not required from the receiver of the frames.
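The frame segmentation step described above can be sketched as follows. The 8 KB limit comes from the abstract; the framing itself is a deliberate simplification of the actual PGP protocol, and the function name is illustrative.

```python
# Sketch of payload segmentation: packets larger than the 8 KB frame limit
# are split into smaller frames before transmission. A simplification of
# the real PGP framing, for illustration only.

MAX_FRAME = 8 * 1024  # 8 KB frame payload limit

def segment(payload: bytes, max_frame: int = MAX_FRAME):
    """Return successive frames of at most max_frame bytes."""
    return [payload[i:i + max_frame]
            for i in range(0, len(payload), max_frame)]

frames = segment(bytes(20 * 1024))  # a 20 KB packet
print([len(f) for f in frames])     # [8192, 8192, 4096]
```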


Medium Truck Duty Cycle Study. BRIAN MCDIVITT (Pensacola Christian College, Pensacola, FL, 32503) GARY CAPPS (Oak Ridge National Laboratory, Oak Ridge, TN, 37831)

Currently, there exists little knowledge of real-world duty-cycle operations for class-6 and class-7 commercial trucks. The goal of the Medium Truck Duty Cycle (MTDC) project is to provide a comprehensive database that will help improve the efficiency of these classes of vehicles. A data acquisition system (DAS) will be placed on each of six field operational test vehicles to collect the duty cycle information. The data are gathered during normal vehicle operations in order to give more accurate, real-world information about vehicle usage. Each DAS is configured via the internet, using a Raven X wireless modem incorporated in each DAS. Also contained within each DAS is an eDAQ Lite data logger, which performs the data acquisition function. A VBOX II Lite, which logs information such as GPS location, velocity, and acceleration, transmits its information to the eDAQ via the J1939 controller area network. Also installed on each test vehicle is an Air-Weigh self-weighing device, which monitors the vehicle’s weight throughout the testing period and sends this information to the eDAQ through the same network. A major milestone in the project was the installation of a wireless modem in each DAS. Previously, in a similar project involving class-8, long-haul tractor-trailers, periodic manual travel to the fleet’s home base was necessary for on-site data uploads from the DAS units. Through the Raven X modem, the eDAQ webpage is remotely accessible and allows for changes in system and hardware settings. The eDAQ’s Test Control Environment software can be used on a remote computer to view the database information in real time. The vehicle latitude and longitude information from the VBOX can also be loaded into a mapping program to graphically display the route taken during a specified time. The implementation of the DAS units is not yet complete. 
After being installed, they will gather duty cycle information for a 12-month period, thus creating a database with seasonal representation. This project provides data that can lead to improvements in the efficiency and safety of class-6 and class-7 commercial vehicles. Future studies could focus on characterizing driving patterns, fuel trends, and the effects of geometrics, weather, and other factors on vehicle performance.


Modal Calculations and Harmonic Observations of the Carbon Fiber Mounting for the NSLS II Beam Position Monitor. ZAID AZIZ (Rensselaer Polytechnic Institute, Troy, NY, 12180) SUSHIL SHARMA (Brookhaven National Laboratory, Upton, NY, 11973)

This paper is a study of the carbon fiber mountings of the NSLS II beam position monitor. For the NSLS II beam line to maintain what is known as a "golden orbit," in which the X-ray beam fluctuates within tolerances of 5 microns in the horizontal direction and 0.008 microns in the vertical, beam position monitors are installed on the storage ring to electronically correct the beam's divergence from its designed path. In this paper we discuss the experimental process and the calculations for the carbon fiber support of the beam position monitor, in order to ensure that the support's deflection does not compromise the monitor's strict tolerances.


Modeling Wind Turbines in the GridLAB-D Software Environment. JASON FULLER (Washington State University Tri-Cities, Richland, WA, 99352) KEVIN P. SCHNEIDER (Pacific Northwest National Laboratory, Richland, WA, 99352)

In recent years, the rapid expansion of wind power has resulted in a need to more accurately model the effects of wind penetration on the electricity infrastructure. GridLAB-D is a new simulation environment developed for the U.S. Department of Energy (DOE) by the Pacific Northwest National Laboratory (PNNL), in cooperation with academic and industrial partners. GridLAB-D was originally written and designed to help integrate end-use smart grid technologies, and is currently being expanded to include a number of other technologies, including Distributed Energy Resources (DER). The specific goal of this project is to create a preliminary wind turbine generator (WTG) model for integration into GridLAB-D. As wind power penetration increases, models are needed to accurately study its effects; this project is a first step toward examining these effects within the GridLAB-D environment. Aerodynamic, mechanical, and electrical power models were designed to simulate the process by which mechanical power is extracted by a wind turbine and converted into electrical energy. The process was modeled using historic atmospheric data, collected over a period of thirty years, as the primary energy input. This input was then combined with preliminary models for synchronous and induction generators. Additionally, basic control methods were implemented, using either constant power factor or constant power modes. The model was then compiled into the GridLAB-D simulation environment, and the power outputs were compared against manufacturers’ data; a variation of the IEEE 4-node test feeder was then used to examine the model’s behavior. Results showed the designs were sufficient for a prototype model and provided output power similar to the available manufacturers’ data. The prototype model is designed as a template for the creation of new modules, with turbine-specific parameters to be added by the user.
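The aerodynamic step of such a model is conventionally the power-in-the-wind relation P = ½ρACpv³ capped at rated power. The sketch below is a minimal illustration of that step only; the rotor radius, power coefficient, and rated power are assumed values, not GridLAB-D's.

```python
import math

# Minimal sketch of the aerodynamic stage of a wind turbine generator model:
# mechanical power extracted from the wind, P = 0.5 * rho * A * Cp * v^3,
# limited to rated power. Radius, Cp, and rating are illustrative.

def wind_power(v, radius=40.0, rho=1.225, cp=0.4, rated=1.5e6):
    """Mechanical power (W) extracted at wind speed v (m/s), capped at rated."""
    area = math.pi * radius ** 2        # swept rotor area, m^2
    p = 0.5 * rho * area * cp * v ** 3  # power extracted from the wind
    return min(p, rated)

for v in (5, 10, 15):
    print(f"{v} m/s -> {wind_power(v) / 1e3:.0f} kW")
```

In a fuller model this mechanical power then feeds the synchronous or induction generator model, which determines the electrical output under the chosen control mode.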


Net Zero Energy Community: A Preliminary Evaluation. CHRISTOPHER BOWSER (Lipscomb University, Nashville, TN, 37204) SRINIVAS KATIPAMULA (Pacific Northwest National Laboratory, Richland, WA, 99352)

As a result of concerns over the security of the national energy supply and the adverse environmental effects of traditional power generation, the Department of Energy has developed a strategy for reducing the energy demand of buildings through its Building Technologies Program. The goal of marketable net zero energy buildings by 2025 will significantly reduce the growing energy demand of the nation’s largest energy user. In general, net zero energy describes a building that produces as much energy on-site through renewable sources as it draws from the existing power grid. The concept has been applied on an individual building basis, although the benefits achieved through economies of scale suggest that an aggregate of buildings would be more apt to achieve the net zero energy goal. Consequently, a literature review was conducted to determine if any community-scale research on the concept had been undertaken. The literature review produced much information on the progress of technology development and endeavors into net zero energy home and commercial building design. Also, community-scale renewable energy systems have received much attention as a power solution for isolated communities, while in the U. S., distributed resources are clustered together as support for the traditional grid. Additionally, the introduction of solar power technologies suitable for community-scale projects promises better efficiencies and lower capital costs than photovoltaic (PV) arrays. Combining these elements holds promise for the possibility of developing a net zero energy community, which has not been presented in scholarly works or developed in the real world. Looking ahead to the prospect of exploring the net zero energy community concept, a basic outline for research has been presented that includes four components: selecting a model community, gathering initial data, sizing a renewable energy system, and evaluating the theoretical system.


Nitric Oxide Emissions Conversion to Load- and Distance-Specific Units: Methods and Impact on Reported Results. ISAAC PERRON (Rutgers University, New Brunswick, NJ, 08854) THOMAS WALLNER (Argonne National Laboratory, Argonne, IL, 60439)

The use of hydrogen as an internal combustion engine fuel is being researched because of its favorable physical properties. Research engineers are seeking to reach certain targets in order to comply with U.S. Department of Energy (DOE) standards. DOE has set challenging goals, aiming for 45% peak brake thermal efficiency and nitric oxide (NOx) emissions as low as 0.07 g/mile. However, NOx emissions are measured in parts per million (ppm) at the emissions bench, and the conversion is not easily done or readily available. Therefore, most researchers choose to report NOx emissions in ppm, which can be misleading and disadvantageous. This paper shows two methods of calculating load-specific NOx emissions, which can then be converted to distance-specific units; the first method uses the balanced molecular equation of hydrogen combustion, while the second uses the measured oxygen, hydrogen, and nitric oxide content in the exhaust to calculate the volumetric percentages. Both methods account for humidity and for water flow from an experimental water injection mechanism. After analyzing numerous parts per million versus grams per mile plots, some trends stand out. There is a high linear correlation between similar tests at the same load point. However, the linear correlation is unique to each engine and load point. Therefore, one cannot compare different engines or different load points when NOx is reported in ppm, a serious disadvantage. Also, NOx emissions reported in ppm can be deceiving, showing a low volumetric concentration but in fact corresponding to a high grams per mile reading. Though the conversion to grams per mile is not necessary at all times, it prevents emissions readings from being specific to each engine and load point, and, by using simplifying assumptions, allows research engineers to see how close or far they are from the DOE NOx target.
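The essence of such a conversion is that ppm is a volume fraction, so a distance-specific figure also needs the exhaust volumetric flow and the vehicle speed. The sketch below is a simplified illustration under ideal-gas assumptions; the flow, speed, and ppm values are hypothetical, and the paper's two methods derive the exhaust composition far more rigorously.

```python
# Simplified illustration of converting a volumetric NOx reading (ppm) to a
# distance-specific rate (g/mile). All input values are hypothetical.

M_NO2 = 46.01      # g/mol; NOx is conventionally reported as NO2
MOLAR_VOL = 24.46  # L/mol of ideal gas at 25 C, 1 atm

def nox_g_per_mile(ppm, exhaust_lpm, speed_mph):
    """Distance-specific NOx rate from concentration, exhaust flow, speed."""
    nox_lpm = exhaust_lpm * ppm / 1e6        # volumetric NOx flow, L/min
    g_per_min = nox_lpm / MOLAR_VOL * M_NO2  # mass flow of NOx, g/min
    miles_per_min = speed_mph / 60.0
    return g_per_min / miles_per_min

print(f"{nox_g_per_mile(100, 2000, 40):.3f} g/mile")
```

The dependence on exhaust flow and speed is exactly why two engines (or two load points) with the same ppm reading can have very different grams-per-mile values.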


Noise Characterization of the SIDECAR ASIC Readout Chip. MARTIN DIAZ (University of Nebraska-Lincoln, Lincoln, NE, 68505) LEONID SAPOZHNIKOV (Stanford Linear Accelerator Center, Stanford, CA, 94025)

The System for Image Digitization Enhancement, Control and Retrieval (SIDECAR) Application Specific Integrated Circuit (ASIC) is intended to be used in the Supernova/Acceleration Probe (SNAP) to provide some understanding of the accelerating expansion of the universe. This single chip serves as an alternative to previous discrete electronics in order to minimize space, weight, and power consumption. With the ability to provide a pure digital interface for operating an infrared array and to operate under cryogenic temperatures down to 30 K, the SIDECAR ASIC is ideal for all aspects of imaging array operation and output digitization. Optimal noise performance is required for the successful implementation of the SIDECAR ASIC. This project is focused on reducing the noise throughout the SIDECAR through optimization of the microcode. By adjusting key parameters of the Pre Amplifier configuration, the noise level was reduced from a root mean square value of 9.7 to 4.8, which is slightly greater than the desired acceptable noise level of 3.8.


Noninvasive Optical Detection of Retinal Neurotransmitters and Metabolic Coenzymes. WHITNEY ENGLAND (Earlham College, Richmond, IN, 47374) JUSTIN BABA (Oak Ridge National Laboratory, Oak Ridge, TN, 37831)

Changes in physiological concentrations of the metabolic coenzymes nicotinamide adenine dinucleotide (NADH) and flavin adenine dinucleotide (FAD) and of neurotransmitters such as glutamate can be indicative of a number of different disease states. For example, an increase in the ratio of NADH to FAD in a given tissue signifies a decreased rate of aerobic cellular respiration, an indicator of decreased tissue viability for transplanted organs. Also, increases in the concentration of the retinal neurotransmitter glutamate in the vitreous humor have been linked to glaucoma, an ocular disorder that is one of the leading causes of blindness in the United States. Current in vivo detection methods for these molecules are invasive, indirect, or time-delayed. Research was conducted to explore the use of noninvasive optical methods to detect NADH and five of the major neurotransmitters of the retina: glutamate, glycine, γ-aminobutyric acid (GABA), serotonin, and dopamine. Solutions of NADH were examined by ultraviolet-to-visible (UV-Vis) absorption spectroscopy and fluorescence spectroscopy. Additionally, solutions of glutamate, glycine, GABA, serotonin, and dopamine were analyzed by UV-Vis absorption and Raman spectroscopies. Varying concentrations of the analytes were used to determine the lowest detectable concentrations, and spectra were analyzed to find distinctive characteristics to enable identification of individual analytes in a complex biological matrix. Porcine ocular fluid samples were collected and analyzed ex vivo using Raman spectroscopy to check for identifiability of the individual species. Fluorescence spectroscopy results indicated that NADH fluoresces detectably with a peak emission at 495 nm when using 405 nm excitation. No other molecule studied was found to have unique absorptions or emissions in the visible range. 
Distinctive Raman signatures were obtained in vitro for glutamate, GABA, and glycine, and glutamate was determined to be detectable at 0.5M concentration. No analytes were detectable in ocular fluid samples ex vivo. These results indicate that fluorescence and Raman spectroscopy can be used to detect NADH and retinal neurotransmitters, respectively, in vitro and that these techniques may have potential for detecting these molecules in vivo. Future work will be directed to measuring these analytes in ex vivo biological samples by developing a system with sufficient sensitivity to detect physiological concentrations.


Performance of Military Specification Lubricant in Extreme Tribological Environments. JARED BERG (Illinois Institute of Technology, Chicago, IL, 60616) OYELAYO AJAYI (Argonne National Laboratory, Argonne, IL, 60439)

Military ground vehicles involved in combat operations present a particularly extreme tribological environment. This environment includes lubricant contamination by various particulate matter and loss of lubrication due to mechanical damage sustained during battle. It is advantageous for a vehicle to be able to return its occupants to safety under these circumstances. To evaluate a current MIL-L-2104 specification engine lubricant (Shell Rotella T SAE 30), two types of tests were conducted. The first involved mixing the specified oil with test dusts of four different particle diameters (average diameters 4.5, 10, 12, and 30 µm) at four different concentrations (5 g/kg, 1 g/kg, 0.5 g/kg, and 0.1 g/kg). These contaminated oils and a clean control oil were tested in a four-ball test rig according to ASTM D-4172. The data from these tests, together with an analysis of the balls using optical profilometry, were used to determine friction and wear characteristics. It was observed that smaller particle sizes caused higher wear, possibly due to particle coagulation or a comparatively greater number of particles in the oil. Higher levels of contaminant loading also led to higher wear. Additionally, high contaminant loadings caused the friction to temporarily rise, and then fall and stabilize at a lower level for the remainder of the test. The "oil-off" test utilized a block-on-ring rig. A 500 N load was applied, the supply of lubricant was removed, and the time to scuffing failure was recorded. Various liquid and solid lubricant additives were evaluated. The most promising were Elektrion R (10:1), with an average performance increase of 73%, and molybdenum disulfide (1.6 wt%), with an average performance increase of 64%. Further research into lubricant formulation and the addition of solid lubricants should increase the reliability and robustness of vehicles in the aforementioned environments.


Performance Prediction of a Solar Thermal Collector Array. JOACHIM DESMANGLES (Tallahassee Community College, Tallahassee, FL, 32304) THOMAS BUTCHER (Brookhaven National Laboratory, Upton, NY, 11973)

The aim of this summer’s project was to predict the performance of a solar collector array that will be part of a prototype solar thermal combisystem. A solar combisystem uses both solar and non-solar energy to provide heat and hot water to a building. The demo system, to be installed in the next 12 to 24 months, will be used to supply part of the heating and domestic hot water demand of Building 30 on the Brookhaven National Laboratory (BNL) campus. The solar array will consist of 5 collectors covering a total area of 13.70 m². Various resources were used during the course of the project. An extensive bibliography provided valuable knowledge about the factors influencing the performance of a solar collector. Simulation software such as Volker Quaschning’s "Sun Position Calculator" was used to confirm some calculated data. Typical Meteorological Year (TMY3) data sets were utilized to find local meteorological information. The final results were computed by using Microsoft Excel and Visual Basic Editor to combine TMY records with the collector efficiency calculations. The five-panel solar array under consideration for installation at BNL should produce over 32 million BTU per year and save a total of 231 gal of oil. The predicted energy outputs will be compared to data gathered from the solar array after its installation. If the estimates are accurate, the Excel files will be used in the planning of future solar thermal projects.
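The hourly calculation such a workbook performs can be sketched with a standard collector efficiency curve applied to irradiance and temperature records. This is a minimal illustration: the efficiency coefficients, the weather records, and the function name are assumed for the example and are not the project's actual values; only the 13.70 m² array area comes from the abstract.

```python
# Sketch of an hourly solar-collector output calculation using a standard
# efficiency curve, eta = eta0 - a1*dT/G - a2*dT^2/G. Coefficients and
# weather values are illustrative placeholders.

AREA = 13.70  # m^2, total array area from the abstract

def collector_output(irradiance, t_mean, t_ambient,
                     eta0=0.75, a1=3.5, a2=0.015):
    """Useful heat (W) from the array for one hourly record."""
    if irradiance <= 0:
        return 0.0
    dt = t_mean - t_ambient
    eta = eta0 - a1 * dt / irradiance - a2 * dt ** 2 / irradiance
    return max(eta, 0.0) * irradiance * AREA

# Hypothetical (irradiance W/m^2, collector mean temp C, ambient C) records
hours = [(0, 50, 15), (400, 50, 20), (800, 55, 25)]
total_wh = sum(collector_output(*h) for h in hours)  # 1 h per record
print(f"{total_wh:.0f} Wh over {len(hours)} records")
```

Summing such hourly outputs over a full TMY3 year gives the annual energy yield that the abstract reports as over 32 million BTU.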


Performance-Based Brake Tester Data Correlation. MARKTHOMAS CUTONE (Clarkson University, Potsdam, NY, 13699) GARY CAPPS (Oak Ridge National Laboratory, Oak Ridge, TN, 37831)

On US roadways approximately 42,642 deaths occurred in 2006 (preliminary data), of which 4,995 involved commercial vehicles. During an investigation from 2001 - 2003, 30% of all trucks involved in the study had brake deficiencies. Efforts are being made by law enforcement to reduce the number of accidents involving heavy vehicles through the North American Standard (NAS) inspection program. This inspection program results in 55.3% of vehicles inspected being placed out of service (OOS) because of vehicle brake issues. This study seeks to explore a possible correlation between the Performance-Based Brake Tester (PBBT) and NAS Level-1 inspection results. A PBBT is a device that can assess the braking capabilities of a vehicle through the measurement of brake forces as a vehicle engages in a braking event while on a PBBT. A direct correlation could lead to the use of the PBBT machine as a major brake inspection tool. If found to be viable, the PBBT could reduce inspection time, increase the number of vehicles receiving brake inspections, and potentially, increase roadway safety. Since July 2007, the Tennessee Department of Safety has been using a PBBT to collect data as they perform their NAS Level-1 vehicle inspections. Using this collected data, an analysis involving 277 trucks and their 2518 wheel-ends was conducted. Based on the OOS and Overall Efficiency results, the inspections and the PBBT agreed 69.31% ± 5.44% of the time. Discrepancies occurred when the vehicle failed inspection and passed on PBBT, occurring 19.86% ± 4.7% of the time. Inspections in which a vehicle passed but failed PBBT testing occurred at a rate of 10.83% ± 3.66%, all with 95% confidence. The wheel-end analysis resulted in a matched result rate of 67.38%. The inspection pass - PBBT fail discrepancy (P-F) occurred on average 29.9% of the time; while the inspection fail - PBBT pass (F-P) inconsistency occurred 2.71% of the time on average. 
This high correlation (near 70%) shows value in using a PBBT as an enforcement tool. It is evident from the P-F wheel-end statistics that the PBBT picks up on specific wheel-ends incurring a violation. This feature can allow for possible diagnosis of failed brake hoses or other internal air system components. Finally, the F-P scenario suggests that there are certain instances in which the PBBT cannot mechanically find a fault, indicating that there will still be a need for officers to visually inspect vehicles.
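The ± figures quoted above appear consistent with a normal-approximation 95% confidence interval on a binomial proportion, which can be checked in a few lines using the reported agreement rate (69.31% of the n = 277 vehicles). The calculation method is an inference on our part, not stated in the abstract.

```python
import math

# Half-width of the normal-approximation (Wald) 95% confidence interval
# for a binomial proportion p observed over n trials.

def proportion_ci(p, n, z=1.96):
    """Half-width of the normal-approximation CI for a proportion."""
    return z * math.sqrt(p * (1 - p) / n)

half = proportion_ci(0.6931, 277)
print(f"69.31% +/- {100 * half:.2f}%")  # close to the reported +/- 5.44%
```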


Phase Inversion Behavior of Liquid-Liquid Dispersion in Centrifugal Contactors. MEECKRAL WILLIAMS (Prairie View A&M University, Prairie View, TX, 77446) DR. COSTAS TSOURIS (Oak Ridge National Laboratory, Oak Ridge, TN, 37831)

Phase inversion is a phenomenon in which the dispersed phase in a liquid-liquid dispersion, such as an oil-water dispersion, becomes continuous, and the continuous phase becomes dispersed. It occurs in liquid-liquid extraction systems in various applications, including hydrometallurgy, alternative fuel reprocessing, and chemical separations. The goal of this project is to better understand and improve the performance of centrifugal contactors. Phase inversion is determined by the physical properties of the liquids, geometrical factors, and operating parameters. In the experiments reported here, two immiscible fluids of different densities, an aqueous solution and an organic solvent, were fed into a centrifugal contactor from different inlets. The two fluids form a dispersion by means of the shear rate generated in the contactor, before being separated by centrifugal force. The fluids then exit through the designated outlets. In this project, we focused on the ambivalence region, where the role of each phase is interchangeable. If the volume fraction of the organic phase is greater than the upper bound of the ambivalence region, then the organic phase is always continuous and the aqueous is always dispersed; this means that the dispersion consists of water droplets in oil. Conversely, if the volume fraction of the organic phase is less than the lower bound of the ambivalence region, the aqueous phase is always continuous and the organic is always dispersed. A conductivity probe placed inside the rotor inlet of the centrifugal contactor is used to determine phase inversion. The operating parameters that we manipulated include the rotational speed of the rotor and the flow rates of the two phases. In this experiment, five agitation rates were used, ranging from 3000 to 4200 rpm. For the lower boundary curve, we started the flow rates at 200:200 organic to aqueous, which falls within the ambivalence region. 
The organic flow rate then was reduced and the aqueous flow rate was increased stepwise until phase inversion occurred. For the upper boundary curve, we started the flow rate at 100:300 organic to aqueous. The organic flow rate was increased and the aqueous flow rate was reduced stepwise until phase inversion occurred. Each point was tested three times to characterize reproducibility. Results show that agitation speed does not affect the phase inversion point for the conditions used: the phase inversion point remained similar throughout the experiment.


Planetary Gearbox Bearing Calibration for Wind Turbine Gearboxes. BRADEN KAPPIUS (Colorado School of Mines, Golden, CO, 80401) HAL LINK (National Renewable Energy Laboratory, Golden, CO, 80401)

Wind turbine gearbox reliability has been a major concern due to premature failure. The National Renewable Energy Laboratory (NREL) created an industry-wide collaborative, the Gearbox Reliability Collaborative (GRC), to better understand gearbox failure mechanisms. One goal of the GRC is to understand how loads and events translate to bearing response, including reactions, load distribution, displacements, temperature, stress, and slip. Computer models predict these responses, but data obtained from real gearboxes are needed to validate these models. By instrumenting bearings in two gearboxes, one to be field tested and the other to be tested in the National Wind Technology Center’s dynamometer, bearing load data can be compared and analyzed. Before the bearings are installed in the gearboxes, baseline load values must be gathered by a bearing calibration method. The bearing calibration applies radial loads ranging from 10,000 to 100,000 lbs to two coaxial bearings mounted on a shaft inside two circular rollers. A model of bearing load distribution provides a relationship between total load and load on each roller. The data collected are used to derive the loading as a single roller element passes. By increasing the load in specific increments and correlating it to the gage response, a direct relationship between the two can be determined. The relationship is expressed as L_roller = A · e_gage + B, where L_roller is the roller load in pounds, A is the regression slope (lbs/µε), e_gage is the gage response (µε), and B is the regression offset (lbs). The data for each gage are presented in the text. This relationship will be applied to interpret dynamometer and field test data to determine loads on the bearings. Initial results of this research indicated a correlation with the regression coefficient for the roller load equation of greater than 0.98 for the majority of the lines. 
This high regression coefficient and the consistency of the coefficients indicate that the calibration method is valid. Future calibration tests could optimize the alignment procedure of the test apparatus and/or allow for testing of a single bearing at a time.
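Fitting the calibration relationship L_roller = A · e_gage + B amounts to an ordinary least-squares regression of load on gage response, which can be sketched as follows. The strain and load values below are synthetic placeholders; the real calibration data are not reproduced here.

```python
# Ordinary least-squares fit of the calibration line L_roller = A*e_gage + B
# using synthetic gage readings (illustrative values only).

def linear_fit(xs, ys):
    """Least-squares slope A and intercept B for y = A*x + B."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Hypothetical microstrain responses at increasing applied roller loads (lbs)
strain = [100, 200, 300, 400, 500]
load = [10_050, 19_900, 30_100, 39_950, 50_000]
A, B = linear_fit(strain, load)
print(f"A = {A:.1f} lbs/ue, B = {B:.1f} lbs")
```

The regression coefficient reported in the abstract (greater than 0.98) measures how tightly such a fitted line matches the measured points.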


Polysaccharide Accumulation in Nitrogen-Limited Cultures of Two Photosynthetic Microorganisms - Plectonema boryanum and Chlamydomonas reinhardtii. LEO KUCEK (University of Minnesota, Minneapolis, MN, 55455) DR. MICHAEL HUESEMANN (Pacific Northwest National Laboratory, Richland, WA, 99352)

Plectonema boryanum and Chlamydomonas reinhardtii are two distinct photosynthetic microorganisms that are capable of accumulating polysaccharide storage products (glycogen and starch, respectively) under nitrogen-limited culture conditions. In the presence of light, these high-energy intracellular storage products can be converted by these two microorganisms to hydrogen gas in a process called indirect biophotolysis. Previously published experiments concluded that in cultures of P. boryanum, higher production rates of glycogen led to higher subsequent production rates of hydrogen. The purpose of this study was to determine the effects of different culture conditions on the accumulation of these two polysaccharide storage products. Specifically, the effect of dilution rates on glycogen production was studied in continuous cultures of P. boryanum, while the production of starch by C. reinhardtii was examined using different culturing conditions: batch, fed-batch, semi-continuous, and continuous. Both of these microorganisms were grown in continuously agitated and illuminated 1 L Roux bottles sparged with air/CO2 (99.7%/0.3% v/v). For the continuous cultures of P. boryanum, a range of dilution rates (D = F/V, where F is the medium feed flow rate and V is the culture volume) was tested (0.03, 0.05, 0.13, and 0.20 day-1), and it was determined that culture dilution rates between 0.05 and 0.12 day-1 resulted in the highest glycogen productivities in continuous cultures. Moreover, this information complements previous data, confirming that throughout this range of dilution rates, continuous cultures performed less effectively than batch and fed-batch cultures in producing glycogen. The second part of this study was conducted to determine the effects of different culturing conditions on starch productivity in C. reinhardtii. The highest starch productivity was observed during the initial growth phase in the batch and fed-batch cultures. 
Progressively lower productivities were exhibited in the pseudo-steady-state phases of the semi-continuous, fed-batch and continuous cultures.


Primer on a Supercritical Carbon Dioxide Gas Brayton Cycle. THOMAS TABB (University of Massachusetts Amherst, Amherst, MA, 01002) DAVID EASON (Argonne National Laboratory, Argonne, IL, 60439)

This study compared and contrasted the steam Rankine power cycle with a supercritical carbon dioxide gas Brayton cycle for naval nuclear propulsion. The thermophysical properties of supercritical carbon dioxide and water were studied and evaluated. An ideal gas Brayton cycle model was developed using MathCAD software to calculate net work output, thermal cycle efficiency, and turbomachinery and heat exchanger energy loads. A simplified gas Brayton cycle model was then developed that accounted for parasitic pressure losses in piping and heat exchangers, followed by a detailed model that accounted for pressure losses due to specific pipe dimensions and heat exchanger geometries. Heat transfer areas were calculated for a shell-and-U-tube, a shell-and-straight-tube, and a printed circuit heat exchanger. Turbomachinery operation was evaluated over a range of inlet conditions using performance maps, calculated flow rates, and isentropic efficiencies. Finally, small- and large-scale leak analyses were performed, and the economic and physical advantages and disadvantages of implementing a supercritical carbon dioxide gas Brayton cycle were discussed.
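The ideal-cycle portion of such a model can be sketched in a few lines (in Python here rather than MathCAD). This is an ideal-gas illustration only: the gamma, cp, pressure ratio, and temperatures below are placeholder air-like values, not supercritical CO2 properties, and real supercritical-CO2 analysis requires real-gas property data.

```python
# Minimal ideal-gas Brayton cycle sketch: thermal efficiency from the
# pressure ratio, and net specific work from the turbine and compressor
# temperature changes. All property values are illustrative placeholders.

def brayton(pressure_ratio, t_inlet_k, t_turbine_k, gamma=1.4, cp=1005.0):
    """Ideal Brayton cycle: (thermal efficiency, net specific work in J/kg)."""
    exp = (gamma - 1) / gamma
    eta = 1.0 - pressure_ratio ** (-exp)            # ideal cycle efficiency
    t2 = t_inlet_k * pressure_ratio ** exp          # temp after compression
    t4 = t_turbine_k * pressure_ratio ** (-exp)     # temp after expansion
    w_net = cp * ((t_turbine_k - t4) - (t2 - t_inlet_k))
    return eta, w_net

eta, w = brayton(pressure_ratio=10, t_inlet_k=300, t_turbine_k=1200)
print(f"efficiency = {eta:.3f}, net work = {w / 1000:.0f} kJ/kg")
```

The simplified and detailed models described above layer pressure losses and component geometry on top of this ideal baseline.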


Prototype Roof and Attic for the Residential Retrofit Market. TRAVIS COWART (Bevill State Community College, Fayette, AL, 35555) DR. WILLIAM (BILL) A. MILLER (Oak Ridge National Laboratory, Oak Ridge, TN, 37831)

We designed, constructed, and installed a prototype roof assembly to determine the energy and economic justification that would promote its acceptance in the retrofit market, where the roof is replaced roughly every 15 years. We computed the daily and weekly energy savings for the prototype assembly versus the conventional roof assembly. Our research revealed a 90% drop in the peak heat transfer for the prototype roof, compared to the conventional asphalt shingle roof. The prototype roof has expanded polystyrene (EPS) insulation with aluminum-foil facing affixed to a standard shingle base. Oriented strand board (OSB), also with aluminum-foil facing, was attached (foil facing down) to the EPS insulation. Between the EPS insulation and the OSB is an air space of about 2.5 cm, which allows natural convection to carry heat away from the attic. Heat flux transducers and thermocouples embedded in the roof deck and the attic floor allow measurements of thermal performance; the data were written electronically to an Excel file. Three weeks of field measurements have been completed. These measurements show that the integrated heat flow for the prototype system is about 38% less than the heat penetrating through the conventional roof and attic system. The temperature measurements showed that the EPS insulation never exceeded 38 °C. Thus, the prototype roof does not suffer the severe temperature swings that usually occur in conventional roof systems. Only minor improvement was seen in the 24-h ceiling heat ratings. However, we found, on average, a 38% reduction in heat flow during the day and a 22% reduction at night. These results demonstrate that the prototype assembly reduces heat transfer at all times of the day. The prototype roof system costs about $250 per square (100 square feet) to install on an existing shingle roof, compared to $100 per square for conventional shingles and labor (in the Southeast). 
Annualized energy savings can be used to judge the cost premium for retrofitting a conventional roof system with the prototype system.
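The economic comparison above can be sketched as a simple-payback calculation. The installed costs per square are from the abstract; the annual energy savings per square is an assumed placeholder value for illustration only, since the abstract reports heat-flow reductions rather than dollar savings.

```python
# Hypothetical simple-payback estimate for the prototype roof retrofit.
# Costs per "square" (100 sq ft) are from the abstract; the annual energy
# savings figure is an assumed illustration, not a measured result.

def simple_payback(cost_premium, annual_savings):
    """Years to recover the retrofit cost premium from energy savings."""
    return cost_premium / annual_savings

prototype_cost = 250.0       # $/square, prototype system (from abstract)
conventional_cost = 100.0    # $/square, conventional shingles (from abstract)
premium = prototype_cost - conventional_cost

assumed_annual_savings = 15.0  # $/square/year -- assumed for illustration

print(f"Cost premium: ${premium:.0f}/square")
print(f"Simple payback: {simple_payback(premium, assumed_annual_savings):.0f} years")
```

With these assumed savings the premium is recovered in a decade; a real assessment would derive the savings term from the measured 38%/22% heat-flow reductions and local energy prices.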


Reactor Design for Double Diffusion Experimentation. BRIAN PAYNE (Missouri University of Science and Technology, Rolla, MO, 65401) GEORGE REDDEN (Idaho National Laboratory, Idaho Falls, ID, 83415)

Strontium-90 (90Sr) is a uranium fission product, a by-product of nuclear energy production. The radionuclide 90Sr is a contaminant of significant concern at Idaho National Laboratory and other Department of Energy (DOE) sites. The radionuclide is chemically similar to calcium, allowing the body to readily take it up. However, this similarity to calcium also makes it favorable for 90Sr to co-precipitate with calcium carbonate (CaCO3), a naturally occurring mineral. In addition, 90Sr has a relatively short half-life of 29.1 years. If 90Sr can be bound up in a stable CaCO3 precipitate, its concentration will be reduced by 99.9% within a few hundred years while the contaminant is prevented from migrating into groundwater. The success of in situ remediation depends on being able to accurately predict and control the precipitation of 90Sr and CaCO3 in the subsurface environment. Unfortunately, the subsurface is a less than ideal reactor for controlling chemical reactions. It is often heterogeneous, not fully characterized, and limits mixing to diffusion and dispersion along chemical gradients. Due to this poor mixing, volume-averaged rates are likely to be inaccurate and inapplicable. Further complicating mixing is the alteration of flowpaths as precipitates form. The goal of this project is to develop a reactor in which the precipitation of CaCO3 can be monitored under conditions simulating the diffusion-driven, poorly mixed subsurface environment. Spatial and temporal changes in subsurface conditions due to deposition of mineral phases will be noted. A double diffusion reactor was designed and built by linking two 1-L reservoirs with a small-diameter glass tube about 5 cm in length. Each reservoir was then filled with either a calcium chloride or a sodium carbonate solution. To help prevent advection through the column, various gel configurations were tested. The gel was optimized to provide a minimum reduction in hydraulic conductivity while still preventing advection. 
Testing showed that the gel was effective at limiting mixing to diffusion only, but all the gel configurations tested slowly began to dissolve and were not stable for more than a day once placed in the reactors and immersed in water. Additional sand-filled columns simulating subsurface media were successfully designed, built, and operated in the reactor. While time constraints prevented further testing with the reactors, they were successfully built, allowing for additional experimentation in the future.
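The reliance on diffusion-only mixing implies long experiment timescales, which a standard scaling estimate makes concrete. The characteristic-time formula t ~ L^2 / (2D) and the diffusivity value below are textbook order-of-magnitude assumptions, not values from the experiment.

```python
# Order-of-magnitude estimate of the time for solutes to diffuse across the
# 5 cm tube linking the two reservoirs. D is a typical aqueous-ion
# diffusivity (~1e-9 m^2/s); assumed for illustration.

L = 0.05   # m, tube length (from abstract)
D = 1e-9   # m^2/s, assumed aqueous diffusion coefficient

t_seconds = L**2 / (2 * D)   # characteristic diffusion time scale
t_days = t_seconds / 86400

print(f"Characteristic diffusion time: {t_days:.0f} days")
```

An estimate of roughly two weeks for full cross-tube diffusion underscores why gel stability over many days mattered for the reactor design.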


Reflective Memory vs. Ethernet: Evaluating Data Network Solutions for LCLS Fast Feedback Controls. MARYA PEARSON (Norfolk State University, Norfolk, VA, 23504) ERNEST WILLIAMS (Stanford Linear Accelerator Center, Stanford, CA, 94025)

For reliable beam performance and X-ray Free Electron Laser delivery, the Linac Coherent Light Source (LCLS) requires a feedback system. LCLS has software in place for temporary use, but currently no dedicated data network exists for feedback. While software is an essential factor in the feedback system, the focus of this study is an appropriate data network that can support 120 Hz beam operation, provide reliable data transfer, and remain scalable for future modifications. Reflective memory and Ethernet are particularly interesting candidates for this task, as each may provide a deterministic, scalable networking option. Reflective memory handles data by simultaneously replicating and storing data to multiple memories in the network architecture. Ethernet, a common local area network technology, transports data according to MAC address and other higher-level protocols. A review of the advantages and disadvantages of each data network solution was conducted based on cost, performance, topology, and compatibility. Although no measurements were collected in favor of either solution, this assessment suggests that Ethernet with multicast capability will fulfill the performance requirements.
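The 120 Hz requirement can be made concrete with a back-of-the-envelope cycle budget: the pulse period bounds the total time available for acquisition, computation, network transport, and actuation in each feedback cycle. The split shown below is purely illustrative, not an LCLS specification.

```python
# Latency budget for 120 Hz beam-synchronous feedback. Only the pulse
# period follows from the abstract; the allocation is an assumed example.

rate_hz = 120
period_ms = 1000.0 / rate_hz   # time per beam pulse, ~8.33 ms

# Assumed (illustrative) allocation of the budget:
acquisition_ms = 2.0
computation_ms = 2.0
actuation_ms = 1.0
network_budget_ms = period_ms - (acquisition_ms + computation_ms + actuation_ms)

print(f"Pulse period: {period_ms:.2f} ms")
print(f"Remaining budget for network transport: {network_budget_ms:.2f} ms")
```

Any candidate network, reflective memory or multicast Ethernet, must deliver worst-case (not just average) transport latency inside the few milliseconds left after the other stages.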


Rheological Studies on Physical Simulants of Hanford Legacy Tank Waste Using the Anton Paar HVA 6 Automated High Shear Capillary Viscometer. MICHAEL WORKMAN (St. Petersburg College, St. Petersburg, FL, 34683) LYNETTE K JAGODA (Pacific Northwest National Laboratory, Richland, WA, 99352)

The 53 million gallons of nuclear and chemical legacy waste stored in 177 underground tanks at the Hanford Site are one of the foremost environmental concerns in the world. In an effort to remediate the site, the Department of Energy, Bechtel National, Inc., and the Pacific Northwest National Laboratory are designing, constructing, and commissioning the Waste Treatment Plant (WTP), which will process the waste by blending it with molten glass for safer storage. A principal concern of the project is how to transport the waste from the tanks to the WTP; to this end, engineers analyzed the tank waste rheology, the study of deformation and the flow it induces. In order to efficiently and effectively test Hanford tank waste rheology, physical simulants that imitate the physical properties of actual waste were created and analyzed. The Anton Paar HVA 6 Automated High Shear Capillary Viscometer was used to determine the material law of each sample from measured parameters. Through calculation, the relationship between viscosity and shear rate could be obtained. Also used in the analysis were the Anton Paar MCR 501 Rheometer and the TA AR-2000 Rheometer, each with a pressure cell unit. Hanford tank wastes have exhibited both Newtonian and non-Newtonian fluid properties. Thus, distilled water and viscosity standards from Brookfield Engineering Laboratories, Inc. of 9.1 centipoise (cP), 48 cP, and 98.2 cP were tested along with kaolin clay mixtures of 5%, 10%, and 15% solids by weight to obtain data for both types of fluids. The viscosity curves were analyzed. The data may lead to more ambitious analysis of different types of samples exhibiting legacy tank waste properties. 
The Anton Paar HVA 6 also allows manipulation of temperature through a heater-chiller unit; has a special pressure head, the Non-Sedimentation Device (NSD), which allows analysis of samples with suspended solids while negating settling issues; and allows high shear stresses to be reached that cannot be mimicked by other forms of viscometer (rotational, falling-ball, etc.). Therefore, extensive research with the HVA 6 may lead to more cost-effective and efficient operating designs and procedures for the WTP.
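The viscosity-versus-shear-rate relationship mentioned above follows from the textbook Hagen-Poiseuille relations behind capillary viscometry: the wall shear stress comes from the measured pressure drop, the wall shear rate from the volumetric flow rate, and their ratio is the apparent (Newtonian) viscosity. The numbers below are illustrative, not HVA 6 instrument data.

```python
import math

def capillary_viscosity(dP, Q, R, L):
    """Apparent viscosity (Pa*s) of a Newtonian fluid in a round capillary.

    dP: pressure drop (Pa), Q: volumetric flow rate (m^3/s),
    R: capillary radius (m), L: capillary length (m).
    """
    tau_wall = dP * R / (2.0 * L)             # wall shear stress, Pa
    shear_rate = 4.0 * Q / (math.pi * R**3)   # wall shear rate, 1/s
    return tau_wall / shear_rate

# Example: water-like fluid in a 0.5 mm radius, 100 mm long capillary
mu = capillary_viscosity(dP=5000.0, Q=1e-6, R=5e-4, L=0.1)
print(f"Apparent viscosity: {mu * 1000:.2f} cP")  # ~1.2 cP, close to water
```

For non-Newtonian samples such as the kaolin slurries, repeating this at several flow rates traces out the viscosity curve, with a Rabinowitsch-type correction applied to the shear rate.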


Roadway Condition Assessments and Traffic Flow Studies at Argonne National Laboratory. ALEXANDER BRAND (University of Illinois, Urbana-Champaign, IL, 61820) PHILIP RASH (Argonne National Laboratory, Argonne, IL, 60439)

Roadway conditions decline over time as a result of age, load application, traffic volume, weather, and climate. The 27.3 miles of roadways at Argonne National Laboratory require yearly maintenance to ensure the safety of drivers. This maintenance can range from filling a pothole to routine crack sealing. Normally, however, only the main roadways receive this maintenance. The majority of roadways at Argonne are over twenty years old, and most of those older than 35 years are in very poor to near-failure condition. To determine which roadways are in the worst condition, each Argonne roadway was assessed according to the distresses involved and the type, volume, and severity of each distress. From that assessment data, a Pavement Condition Index (PCI) value was calculated. In addition, a pneumatic traffic counter was placed on each main section of roadway, and the data collected were used to classify that section as major, minor, local, or low-use. Priority was then determined by combining the PCI and the classification. This project focuses on the conditions and classifications of the roadways and what needs to be done to repair them. The overall average PCI reflects that most roadways currently need reconstruction or will need it in the coming years. The traffic counts provide conclusive evidence that certain roadways, such as Watertower Road, can be closed during the winter to save on the costs of salting and snow plowing. By calculating priority values, roadways such as Westgate Road are deemed critical and must be repaired.
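Combining PCI with the traffic classification can be sketched as a weighted priority score. The weighting scheme, PCI values, and thresholds below are hypothetical illustrations, not the method or data actually used in the Argonne assessment.

```python
# Hypothetical repair-priority scoring: worse pavement (lower PCI) and a
# higher functional class both raise the score. Weights and PCI values
# are assumed for illustration only.

CLASS_WEIGHT = {"major": 4, "minor": 3, "local": 2, "low-use": 1}

def repair_priority(pci, road_class):
    """Higher score = more urgent. PCI runs 0 (failed) to 100 (excellent)."""
    return (100 - pci) * CLASS_WEIGHT[road_class]

roads = [("Westgate Road", 25, "major"),      # PCI values assumed
         ("Watertower Road", 60, "low-use")]

for name, pci, cls in sorted(roads, key=lambda r: -repair_priority(r[1], r[2])):
    print(f"{name}: priority {repair_priority(pci, cls)}")
```

Under this kind of scheme, a badly deteriorated major road outranks a low-use road in similar condition, matching the project's conclusion that roads like Westgate Road come first.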


Safety Set-Point Allocation. ELISABETH BYRD (Georgia Institute of Technology, Atlanta, GA, 30332) DAVE LOUSTEAU (Oak Ridge National Laboratory, Oak Ridge, TN, 37831)

The objective of this study was to establish a safety mechanism to prevent a catastrophic failure due to a flow blockage, occurring primarily in the heat exchanger, inside the Spallation Neutron Source (SNS) mercury target system. The target service bay houses the mercury loop, consisting of the SNS target, pump, storage tank, and heat exchanger, along with the various piping connecting the components. If a blockage occurs in any of the heat exchanger tubes, the pressure around the system will rise and the cooling of the mercury will be inhibited, leading to unsafe conditions. A pressure set point will be established, based on these results, to trigger proton beam shutdown if this occurs. In order to understand the effects of blockage in the heat exchanger tubing, a fluid analysis was performed by modeling the mercury loop in a computer program. A flow modeling program called Fathom was used to map out the loop components. Data on each part of the loop, including pipe elbows, venturis, wyes, and various other components, were entered into the program. An existing 3D model of the loop, previously made in ProE, was the source of input data for the Fathom model, which performed flow analysis with differing percentages of blocked tubes in the heat exchanger. Pressures at the inlet and exit of the pump were calculated for varying percentages of heat exchanger tube blockage. A forty percent blockage results in a ten percent increase in pressure at the pump outlet. Blockage of this magnitude would be created by a hundred-square-inch obstruction, which is unrealistic. Thus, a pressure set point will not be indicative of a heat exchanger flow blockage, and other parts of the mercury loop need to be considered for blockage. Similarly, the thermocouples in the flow do not respond significantly to blockage in the heat exchanger. 
The use of flow meters upstream of the target is now being investigated, in conjunction with computational fluid dynamics modeling of the target, to determine what flow conditions cause an unsafe rise in temperature. When designing a system, safety is always the number one concern. This research will facilitate monitoring possible blockage in the mercury loop in order to keep the loop's components from failing and the immediate area safe.


Separation and Recovery of Materials from Shredder Residue. ERIC TAYLOR (Midlands Technical College, Columbia, SC, 29170) JEFFREY SPANGENBERGER (Argonne National Laboratory, Argonne, IL, 60439)

Shredder residue (SR) is the waste that is generated when automobiles, home appliances, and other durable goods are shredded and their metals content is recovered. Presently, about 5 million tons of SR is landfilled in the U.S. and about 15 million tons worldwide. Shredder residue contains valuable polymers (about 30% of the SR weight) and residual metals (about 10% of the SR weight) that could be recovered and reused instead of being landfilled. Today, there is no commercial process to separate and recover marketable materials from SR. Argonne National Laboratory has been developing a process to separate and recover plastics, metals, foam, and rubber from SR. The process consists of two parts. First, a dry mechanical separation process separates the SR into a polymer concentrate containing over 20 different plastics and rubber materials, a ferrous metals concentrate, a non-ferrous metals concentrate, a fines fraction, and other materials. The second part of the process is a wet separation facility that separates individual polymers from the polymer concentrate. The solution added to each of the tanks in the wet process is prepared to allow the selective flotation or sinking of one or more of the polymers in the polymer concentrate. My participation in the research included running the pilot plant, determining operating conditions to enable separation of individual materials, and analyzing recovered polymers by Fourier transform infrared (FTIR) spectroscopy to determine the purity and yield of the recovered fractions.


Sewage Treatment Plant Optimization for Reduction of Nitrates. ALEXANDER HOIMES (The Pennsylvania State University, University Park, PA, 16801) ROBERT LEE (Brookhaven National Laboratory, Upton, NY, 11973)

Sanitary and process wastewater generated by Brookhaven National Laboratory (BNL) operations is conveyed to the Sewage Treatment Plant (STP) for processing before discharge to the Peconic River. Under a permit issued by the New York State Department of Environmental Conservation (NYSDEC), the STP effluent is released through a discharge point authorized under the State Pollutant Discharge Elimination System (SPDES), which regulates wastewater effluent at the laboratory. Starting in February 2005, STP effluent total nitrogen concentrations began to exceed the SPDES limit of 10 mg/L. As a result, BNL has been investigating the potential sources of the elevated nitrogen concentrations observed at the STP. Lower-than-normal flow conditions and decreased nutrients in the waste have been identified as the most likely causes of the increased nitrogen levels in the discharge. This research evaluated and determined the correlation between dissolved oxygen, total nitrate, and nutrient loading at the STP. With this information, the STP process can be optimized to minimize the release of nitrogen into the Peconic River. Using commercially available test equipment, the levels of nitrate (the predominant component of the STP total nitrogen measurements) and dissolved oxygen were measured during the STP treatment sequence. The concentrations were tracked and plotted during periods of supplemental nutrient addition to the STP and during periods of sustained nutrient addition. Rates of nutrient addition were also measured daily and compared to base-load nutrients in the STP influent. Nutrient loads were measured by contract analytical laboratory analysis for biochemical oxygen demand (BOD), based on 24-hour composite samples. From this, we concluded that the addition of nutrients to the STP is necessary to keep the nitrate levels from exceeding the SPDES limit.


Sidearm. JESSE FRITZ (Tennessee Technological University, Cookeville, TN, 38505) JAMES YOUNKIN (Oak Ridge National Laboratory, Oak Ridge, TN, 37831)

A microprocessor-based device was designed, implemented, and tested for the detection and communication of events to the Automated Weapon Inventory System (AWIS). AWIS is a weapon and related-asset tracking system that uses low-frequency radio tags. The device designed, called a Sidearm, detects events such as motion, light-beam breaks, or pressure-pad activation, and sends a message to the AWIS server, which then initiates a scan and updates inventory status. The development of Sidearm was made possible by the Rabbit Semiconductor RCM 4200 core module. This device has the ability to serve web pages, link equipment to the internet, and control remote devices. In addition, Dynamic C programming software was used to write Sidearm’s firmware. An Oak Ridge National Laboratory custom printed circuit board combines the RCM 4200 and firmware to create the Sidearm package. The Sidearm program code runs on the RCM 4200 and performs event detection and transmission. The device also features a web-based graphical user interface (GUI) for user configuration. The GUI displays, and provides the ability to configure, the AWIS server internet protocol (IP) address, the Sidearm’s IP address, and a time-delay filter variable used in the processing of event triggers. The GUI uses rigorous error-checking functions: errors are immediately reported, and only valid entries are accepted from the user. The Sidearm’s purpose is to provide flexible event notification to the AWIS system. The device is critical to the AWIS system, as it is responsible for detecting the environmental events that cause the system to scan for inventory changes. The web-based GUI makes it easy to change variable values so that the device can be configured for different operating parameters without requiring special tools or knowledge.


Simulating an RF Cavity in Real Time using an FPGA. JASON TRINIDAD (Inter American University of Puerto Rico, Bayamon, PR, 00956) WARREN SCHAPPERT (Fermi National Accelerator Laboratory, Batavia, IL, 60510)

An RF cavity simulator can be used to test control electronics. One way to build such a simulator is with an FPGA. We configured a Xilinx® XtremeDSP Development Kit, Virtex-4 Edition, to perform such simulations. An interface was also created in MATLAB using C MEX-files, which allowed easy communication with the FPGA. A discrete difference equation was implemented in the firmware to perform the simulations. After the FPGA was configured, it was tested. The output was as expected, though some minor fixes remain. In essence, the core of the simulator was completed successfully, but some components still need to be added to make the full simulator.
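A discrete difference equation of the kind implemented in such firmware can be illustrated with a minimal baseband cavity model: a first-order complex recursion for the cavity field envelope driven by the RF input. The bandwidth, detuning, and time-step values below are illustrative assumptions, not the parameters of the actual firmware.

```python
import cmath

def simulate_cavity(drive, half_bandwidth, detuning, dt):
    """Step V[n+1] = a*V[n] + (1-a)*u[n], with a = exp((-w_half + 1j*dw)*dt).

    drive: complex drive samples, half_bandwidth: rad/s, detuning: rad/s,
    dt: sample period in seconds. Returns the envelope samples.
    """
    a = cmath.exp((-half_bandwidth + 1j * detuning) * dt)
    v = 0j
    out = []
    for u in drive:
        v = a * v + (1 - a) * u   # one fixed-point-friendly update per sample
        out.append(v)
    return out

# Step response: with no detuning the envelope fills toward the drive level
env = simulate_cavity([1.0] * 2000,
                      half_bandwidth=2 * 3.14159 * 100,  # assumed 100 Hz
                      detuning=0.0,
                      dt=1e-5)
print(abs(env[-1]))  # approaches 1.0
```

In an FPGA the same recursion reduces to a handful of multiply-accumulate operations per sample, which is what makes real-time simulation feasible.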


Simulation of Hydrogen Occurrence in Parabolic Trough Power Plants. DOUGLAS DEVOTO (University of Delaware, Newark, DE, 19716) GREG GLATZMAIER (National Renewable Energy Laboratory, Golden, CO, 89401)

Hydrogen occurrence in the annulus of heat collection element (HCE) receiver tubes reduces the thermal performance of HCEs and decreases the overall efficiency of trough solar thermal power plants. Degradation reactions in the heat transfer fluid (HTF) used in power plants, most commonly Therminol VP-1™, produce hydrogen as a byproduct. Hydrogen is distributed through all parts of the plant where the HTF is present and eventually permeates through piping and equipment to ambient air. Hydrogen permeating through HCE receiver tube walls forms an equilibrium partial pressure in the annulus region and increases thermal losses. This report describes the development and preliminary results of a hydrogen model created in Engineering Equation Solver (EES). The model simulates the hydrogen generation and permeation processes to predict the annulus pressure developed in HCEs. It also evaluates several solutions to minimize hydrogen pressure in the annuli of HCEs. Combining several solutions (relocating a trough plant’s expansion vessel directly before the solar array, including a permeation-barrier coating on HCEs, and adding hydrogen venting) would help reduce the annular partial pressure in HCEs from original levels of 0.9-1 torr to acceptable levels of 0.001-0.004 torr.


Study of Membrane Poisoning by Sulfur Compounds. ASHLEY JONES (Prairie View A&M University, Prairie View, TX, 77446) COSTAS TSOURIS (Oak Ridge National Laboratory, Oak Ridge, TN, 37831)

The objective of this study was to propose a mechanism for palladium (Pd) poisoning and to simulate hydrogen (H2) flow in simple and composite membranes. We show how mathematical models can be used to describe and predict multiphase separation processes, such as those involving the separation of H2 from other gases by Pd membranes. This is a theoretical study that includes mathematical derivations and the development of computer codes for computations performed on a personal computer. Pd alloy membranes have been used to separate H2 from coal gasification and steam reforming product streams. In many cases, hydrogen sulfide (H2S) in the process streams deactivates the metal membrane and reduces H2 permeation and selectivity. H2 is a very important gas in many industrial fields. Currently, hydrocarbons are the main source for H2 production, and membrane reactors are used to separate the H2 from the other gases produced; these reactors combine reaction and separation in the same unit operation. The proposed model predicts H2 fluxes through composite membranes of several layers for standard operating conditions. The model was constructed for H2 permeation through Pd and accounts for external mass transfer, surface adsorption and desorption, transitions to and from the bulk metal, and diffusion within the metal. The main assumptions in this modeling approach are (i) absence of external mass transfer resistance, and (ii) diffusion-limited permeation for pure Pd at temperatures above 573 K and membrane thicknesses down to 1 micrometer. Low-temperature permeation of H2 is limited by desorption, while adsorption is expected to impact permeation only at very low upstream H2 partial pressures or under conditions of substantially reduced sticking due to surface contamination. This project deals with ways to design better membranes by developing a model for the poisoning of the membranes by H2S.
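In the diffusion-limited regime assumed above, the flux through a dense Pd membrane follows the standard Sieverts'-law relation, with flux proportional to the difference of the square roots of the H2 partial pressures. The permeability value below is an assumed order-of-magnitude figure for illustration, not a result from the study.

```python
import math

def sieverts_flux(permeability, thickness, p_feed, p_perm):
    """Diffusion-limited H2 flux (mol m^-2 s^-1) through a dense Pd membrane.

    permeability: mol m^-1 s^-1 Pa^-0.5, thickness: m, pressures: Pa.
    The square-root pressure dependence reflects dissociative adsorption
    of H2 into atomic H in the metal (Sieverts' law).
    """
    return (permeability / thickness) * (math.sqrt(p_feed) - math.sqrt(p_perm))

flux = sieverts_flux(permeability=1e-8,   # assumed order of magnitude
                     thickness=25e-6,     # 25 micrometer membrane
                     p_feed=4e5,          # 4 bar feed side
                     p_perm=1e5)          # 1 bar permeate side
print(f"H2 flux: {flux:.2f} mol m^-2 s^-1")
```

H2S poisoning effectively reduces the available surface sites, so a poisoned membrane departs from this clean-surface limit; capturing that departure is the point of the proposed model.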


Study of Techniques Used for Demolition of the High Flux Beam Reactor Exhaust Stack at Brookhaven National Laboratory. CRAIG LEBEL (Clarkson University, Potsdam, NY, 13699) THOMAS DANIELS (Brookhaven National Laboratory, Upton, NY, 11973)

The High Flux Beam Reactor (HFBR) stack at Brookhaven National Laboratory (BNL) is a 320 ft structure, still in use by the Hot Laboratory (Building 00801), that was once also used to ventilate exhausts from buildings in the HFBR complex and the Brookhaven Graphite Research Reactor (BGRR). The stack is 59 years old and is being evaluated for Decontamination and Decommissioning (D&D). The stack D&D project involves the evaluation of alternative D&D methods, the preparation of technical specifications, and detailed project planning, which includes implementing work plans and procedures. Due to the stack’s close proximity to operating facilities and the internal contamination within the stack itself, the safe methods for removal of the stack are limited. After a study of conventional stack demolition methodologies was performed to select those most likely to lead to a successful D&D of the stack, rubblization and segmentation were chosen for further investigation. Based on standard elements of environmental restoration projects as discussed in the Environmental Restoration Projects (ERP) department, a set of criteria was created to help choose the most appropriate method for demolition. The criteria used for evaluating the methods included worker safety, costs, scheduling, contamination control, and waste transportation and disposal. When the established criteria were used to compare the two methods, rubblization was found to be the more practical approach for removing the stack. Rubblization is safer than segmentation because the concrete-cutting device used to crush the concrete can be remotely operated, so workers will not be operating machinery at elevated heights once the device is in place. Further, crushing and cutting the concrete into small pieces makes packaging and shipping of waste easier and more affordable. Finally, the spread of dust will be minimal due to the application of water spray nozzles on the cutting device. 
The method of rubblization was chosen as the demolition technique due to its low safety risks, lower costs, and efficiency. Segmentation was not chosen because of the increased risk of working at elevated heights and the troublesome process of size-reducing the pieces for disposal. In addition, loading and shipping the large pieces increases the risk of accidents at the workplace. Budgetary quotes have been acquired from proposed vendors and an approximate work schedule has been created so that the duration and cost of the work can be calculated. Funding is not available to begin work on the stack due to the costs of other activities under way at the BGRR and the HFBR complex. Once funding is available from the Department of Energy, work can begin.


Superluminal RF. IAN HIGGINSON (University of Idaho, Moscow, ID, 83843) LAWRENCE EARLEY (Los Alamos National Laboratory, Los Alamos, NM, 87545)

A source of electromagnetic radiation is said to be superluminal when its velocity exceeds that of its emitted waves (v > c in vacuo). This allows a superluminal source to make multiple contributions to the electromagnetic field at an observation point. In particular, pulsar emissions have been demonstrated to result from modulated waves formed by superluminal distribution patterns of the polarization current rotating within the pulsar’s plasma atmosphere [2]. Radiation from a rotating superluminal source forms a cusp, where waves emitted over a long period of source time are received simultaneously by an observer on the cusp [4]. The power emitted by the cusp decays as the inverse of the distance from the source (1/R, rather than the conventional inverse-square law), presenting unique potential for applications in radar and directed-energy technologies, secure communications, and astrophysics [4]. To simulate this phenomenon experimentally, a specialized circular antenna was designed, composed of 72 individual antenna elements, each receiving a different modulated signal input. By varying the phase relationship between these signals, the angular velocity of the excitation around the antenna, multiplied by the antenna’s radius, yields a pattern speed v > c. Phase one of the project involved determining the most effective method of achieving signal modulation with full amplitude and 360 degrees of phase control using ADL5390 vector multipliers. To complete the preliminary portion of the project, a control system was designed to specify amplitude and phase in an 8-channel prototype of the eventual 72-channel system. Implemented in LabVIEW code, the control system is capable of setting the signal to within 0.5 dB of the desired amplitude and one degree of the desired phase.
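The phase relationship that produces a superluminal pattern can be sketched numerically: with N elements on a ring, advancing each element's modulation by a fixed phase step makes the excitation sweep around the ring at a chosen rotation frequency, and the pattern's tangential speed is the ring radius times the angular velocity. The ring radius and rotation frequency below are assumed for illustration, not the experiment's values.

```python
import math

c = 2.998e8  # m/s, speed of light

def pattern_speed(radius, f_rotation):
    """Tangential speed of an excitation pattern rotating at f_rotation (Hz)
    around a ring of the given radius (m)."""
    return radius * 2 * math.pi * f_rotation

N = 72          # antenna elements (from abstract)
R = 1.0         # m, assumed ring radius
f_rot = 60e6    # Hz, assumed pattern rotation frequency

v = pattern_speed(R, f_rot)
phase_step_deg = 360.0 / N   # phase increment between adjacent elements

print(f"Pattern speed: {v / c:.2f} c")
print(f"Per-element phase step: {phase_step_deg:.1f} deg")
```

No physical object moves faster than light here; only the phase pattern of the polarization current does, which is what makes the scheme consistent with special relativity.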


Technical Documentation of TTB and LTB Beam Stops, REV00 Aug 15, 2008. EUI SANG SONG (Queensborough Community College, Bayside, NY, 11364) VINCENT J CASTILLO (Brookhaven National Laboratory, Upton, NY, 11973)

The beam stops in the Tandem Van de Graaff to Booster (TTB) and Linear Accelerator to Booster (LTB) beam lines are part of many subsystems in the Collider-Accelerator (C-A) complex at Brookhaven National Laboratory (BNL). Beam stops are used as an initial safety response if beam-control parameters stray from the required specifications. They perform this task by blocking the path of the accelerator beam during accelerator operations. Documentation of these subsystems is desired in the C-A complex at BNL to centralize and consolidate information so that it is readily available to C-A department staff, particularly newly hired employees. I gathered information on the location, construction, and operation of the beam stops from C-A staff members, manufacturers' manuals, and electrical schematics. With this information, I have written a document detailing four important aspects of the LTB and TTB beam stops: one, their locations; two, the physical features by which to recognize the beam stops themselves; three, how they function, mechanically and electrically; and four, the requirements for entry. This work is a small component of a larger effort to document the various systems and subsystems in the C-A complex in order to provide information to employees, who often work in hazardous areas, so that they can work as safely and quickly as possible.


TECHNOLOGY FORECAST FOR BROADBAND COMMUNICATION. KIRPATRICK DORSEY and BRANDON NELSON (Jackson State University, Jackson, MS, 39217) PAUL D. EWING (Oak Ridge National Laboratory, Oak Ridge, TN, 37831)

We looked at current communication technology and forecast the evolution of broadband communication networks as they become more efficient, more resilient, and more robust. Our study pays special attention to a competent solution to the post-disaster communication barriers currently faced by first responders in the United States. We reviewed 15 wireless broadband deployments, referenced 20 journal articles, tracked broadband communication news articles dating back seven years, and referenced the white papers of fixed and wireless technology providers to develop an informed prediction of the leading broadband communication technology. Our findings show that IEEE Standard 802.16e-2005 is the next innovation to facilitate a broadband communication network that is more efficient, more resilient, and more robust than that currently in operation. Worldwide Interoperability for Microwave Access (WiMAX), the trade name for this open IEEE standard, provides "last-mile, best-effort" broadband data transmission of up to 30 megabits per second (Mbps) to residential customers and small businesses. In contrast, proprietary transmission rates are up to 6 Mbps for Digital Subscriber Lines (DSL) and up to 12 Mbps for coaxial cable. WiMAX is a robust and resilient Internet Protocol (IP) based data transmission platform using smart Multiple Input, Multiple Output (MIMO) antennas with beamforming in a mesh network configuration, along with Orthogonal Frequency Division Multiplexing (OFDM), to transmit data over 4.82 kilometers; IEEE 802.11 Wireless Fidelity (Wi-Fi) has a transmission range of only about 100 meters. WiMAX base stations (BS) can ideally be used as an internet backhaul to securely transmit data 14.48 kilometers in a non-line-of-sight (NLOS) environment and 48.28 kilometers in line-of-sight (LOS) conditions. 
These innovations present WiMAX as a justifiable option for communication in areas where data network infrastructure is minimal or non-existent, providing cost-effective broadband data transmission in rural areas. We suggest ongoing evaluation of WiMAX deployments to determine the best use by region in the United States with regard to disaster response and rural deployment. The WiMAX standard’s ability to create more efficient, more robust, and more resilient communication systems, thereby limiting the number of lives lost in times of peril due to a lack of interoperability, demonstrates the value of this research to the global community.


The Development of the Radionuclide Inventory Workbook for Building 315 Vault 40. ASHLEY MEKLIS (University of Notre Dame, Notre Dame, IN, 46556) ARTHUR FRIGO (Argonne National Laboratory, Argonne, IL, 60439)

Staff at Argonne National Laboratory’s Building 315 Vault 40 (Vault 40) are currently compiling information into an Excel spreadsheet to use as a reference. The primary purpose of this project was to produce an Excel document containing all the information necessary for the nuclear material inventory of Vault 40. The Radionuclide Inventory Workbook (RIW) is designed to include as much information as needed and to be used to track the de-inventory of Vault 40. In addition, each workbook is designed to perform Hazard Category and Pu-239 Fissile Gram Equivalence calculations as well as any other desired computations. Beginning with checking data from hard-copy postings, this project evolved into using the RIW to produce and verify postings for the Vault 40 individual storage cells. The RIW underwent a validation and verification process in order to become a usable document for tracking the de-inventory of Vault 40.


The Effects of Polymer Electrolyte Membrane Fuel Cell Membrane Electrode Assembly Manufacturing Defects on Cell Performance. CHRISTOPHER BOCHERT (Colorado School of Mines, Golden, CO, 80401) MICHAEL ULSH (National Renewable Energy Laboratory, Golden, CO, 89401)

Decreasing reliance on foreign energy sources is an important goal for America. Fuel cells powered by clean hydrogen and air are one renewable energy technology being researched and developed. To get fuel cells on the market, manufacturing tolerances must be established. This study attempted to correlate manufacturing defects with fuel cell performance so that validated manufacturing tolerances for fuel cell components can be established. The accuracy of the fuel cell testing station equipment was verified according to United States Fuel Cell Council (USFCC) protocol. Pristine and “flawed” (manufactured with a defect, e.g., a pinhole) polymer electrolyte membrane fuel cell (PEMFC) membrane electrode assemblies (MEAs) were tested according to USFCC protocol. Ex-situ characterization of the MEAs was conducted by means of thickness, in-plane and through-plane resistance measurements, and optical microscopy. Each MEA was first “broken in” to fully humidify and condition the cell. In-situ characterization techniques were then employed, including fuel cell polarization, hydrogen crossover, and electrochemical surface area measurements. The fuel cell polarization curves (cell voltage vs. current density) were compared to published data as well as to each other to gauge performance (cell voltage at 0.8 A/cm²: BASF published = 0.68 V; pristine MEA = 0.54 V; flawed MEA = 0.45 V). The flawed MEA performed worse than the pristine MEA. Hydrogen crossover data showed conclusively that the flawed MEA with the pinhole experienced significant hydrogen crossover while the pristine MEA did not (11.4 and 0.7 mA/cm², respectively), as expected. However, a greater than acceptable high frequency resistance (HFR) was observed for the MEAs (100 mΩ·cm² is acceptable, while 150-400 mΩ·cm² was observed).
The high HFR, paired with a lack of repeatability of results due to time restrictions (only one pristine and one flawed MEA were tested), does not allow a conclusive statement to be made at this time concerning the effects of PEMFC MEA manufacturing defects on cell performance. Further studies, in which repeatability of results can be established while decreasing the HFR, using multiple pristine and multiple flawed MEAs, are required to effectively correlate the effects of MEA manufacturing defects with cell performance.
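For reference, the voltage figures quoted above translate directly into power densities at the 0.8 A/cm² operating point (P = V × j). The sketch below uses only the numbers reported in this abstract; it performs no new measurement.

```python
# Comparison of the cell voltages quoted in this abstract at the 0.8 A/cm^2
# operating point; power density is simply P = V * j.
# All values are taken directly from the abstract.
j = 0.8  # current density, A/cm^2
voltages = {"BASF published": 0.68, "pristine MEA": 0.54, "flawed MEA": 0.45}
power_density = {name: v * j for name, v in voltages.items()}  # W/cm^2
crossover = {"pristine MEA": 0.7, "flawed MEA": 11.4}  # mA/cm^2, from abstract

for name, v in voltages.items():
    print(f"{name}: {v:.2f} V -> {power_density[name]:.3f} W/cm^2")
```

The pinhole's effect is visible in both metrics: roughly a 17% power-density penalty relative to the pristine MEA, and a crossover current more than an order of magnitude higher.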


The Hybrid Cu-Cl Thermochemical Cycle: Crystallization and Hydrolysis of CuCl2. ADAM GROSS (University of Rochester, Rochester, NY, 14627) MICHELE LEWIS (Argonne National Laboratory, Argonne, IL, 60439)

The crystallization and hydrolysis of CuCl2 are two key steps in the Cu-Cl thermochemical cycle. The hydrolysis of CuCl2 is a reaction with water which forms Cu2OCl2 and HCl at 400°C. The main difficulties with this reaction have been the need for excess water and a competing decomposition reaction which forms CuCl(s) and Cl2(g). The operational parameters for a continuous spray-type hydrolysis reactor were studied and optimized by analyzing the composition of the solid products obtained via X-ray diffraction techniques. Overall, tests have shown that reactor temperatures at or above 400°C tend to favor the formation of CuCl and that increasing the residence time of the copper (II) chloride solution is essential for achieving complete conversion of CuCl2. Another step in the cycle’s process design, the crystallization of CuCl2, was also studied. In the process design for the Cu-Cl thermochemical cycle, CuCl2 for use in the hydrolysis reactor is purified via crystallization. The solubility of CuCl2 in pure water was determined and compared to the available literature data between 0 and 100°C. Data were also collected and compared to literature data for the solubility of CuCl and CuCl2 in water at various concentrations of HCl. This was done at room temperature for CuCl, and at 10, 25, and 55°C for CuCl2. Although all data collected for the solubility of CuCl2 agreed with the available literature, the data obtained for the solubility of CuCl remain fairly inconclusive.


The Use of Polypropylene Heat Exchangers in Oil-Burning Boilers. JULIAN TAWFIK (Stony Brook University, Stony Brook, NY, 11794) THOMAS BUTCHER (Brookhaven National Laboratory, Upton, NY, 11973)

Condensing boilers achieve high thermal efficiency by recapturing latent heat from the flue gases. The condensate that forms is corrosive, requiring stainless steels. Plastics, an alternative, meet the temperature requirements but have low thermal conductivity. In this project, the potential use of nanoparticles to improve the thermal conductivity of plastics and the heat transfer conditions in heat exchangers were studied. Metal tubes used in oil boiler heat exchangers are not cost-effective, are difficult to fabricate, and are prone to corrosion and fouling. These factors lead to heat loss from flue gases that could otherwise be retained to preheat boiler water and improve heating unit efficiency. Polypropylene (PP) is relatively cheap and easy to manufacture, with no corrosion or fouling difficulties. A literature review, calculations, and experiments were done to determine whether the low thermal conductivity of PP is a major inhibitor when it is used as a heat exchanger, and whether this could be offset through the use of nanoparticles. The thermal conductivity (K) of PP is approximately 0.22 W/mK, low compared to the metals used in heat exchangers such as Al (K = 250 W/mK). Use of nanoparticle fillers such as talc (30 vol %) can raise the thermal conductivity to 2.5 W/mK, which is still low compared to Al. It was discovered that the main inhibitor to heat conduction through the exchanger is not the thermal conductivity (K) but rather the convective film between the flue gases and the inner wall of the plastic tube, which contributes over 99% of the resistance experienced by the heat exchanger. Correspondingly, nanoparticles had little effect on the exchanger’s ability to move heat from the flue gases to the boiler water. PP can also withstand the high temperatures of the flue gases, since it contacts the cooler boiler water (80°F) while PP’s melting point is 320°F. To conclude, the thermal conductivity of PP is not an obstacle, and the plastic can withstand the temperatures it is exposed to.
The use of nanoparticles does not make a significant enough difference in the ability of PP to recapture waste heat. Thus, a plastic heat exchanger in an oil boiler is feasible, since it is not vulnerable to corrosive flue gases as metals are. Plastic heat exchangers can replace metal heat exchangers, resulting in equal heat-recapture efficiency without corrosion effects. Experimental work also confirmed the calculations.
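The dominance of the gas-side film can be seen from a back-of-the-envelope series-resistance estimate: per unit area, the wall contributes R_cond = t/K and the film contributes R_film = 1/h. The wall thickness and flue-gas film coefficient below are assumed example values, not measurements from this work.

```python
# Back-of-the-envelope series-resistance estimate (per unit area) showing why
# the gas-side convective film, not the PP wall, limits heat transfer.
# Wall thickness and flue-gas film coefficient are assumed example values.
t_wall = 1.0e-3   # PP wall thickness, m (assumed)
k_pp = 0.22       # W/m*K, from the abstract
h_gas = 20.0      # W/m^2*K, assumed typical flue-gas film coefficient

R_cond = t_wall / k_pp   # conduction resistance of the wall, m^2*K/W
R_film = 1.0 / h_gas     # convective film resistance, m^2*K/W
film_fraction = R_film / (R_film + R_cond)
print(f"film contributes {film_fraction:.0%} of the total resistance")
```

Even with these rough assumptions the film dominates, which is why raising K from 0.22 to 2.5 W/mK with talc barely changes the overall heat transfer.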


Thermal Analysis Validation through Thermal Imaging. MANUEL CASTRO, GUADALUPE CORTES, and HENRIETTA TSOSIE (Illinois Institute of Technology, Chicago, IL, 60067) JEFF COLLINS (Argonne National Laboratory, Argonne, IL, 60439)

All beam-interacting components in the Advanced Photon Source (APS) require thermal and stress analyses using computer-aided software prior to fabrication and installation to ensure that thermo-mechanical limits are not exceeded during operation. A common question is how accurate these results are and whether adequate operational safety margins have been incorporated in the component design. This study was conducted to validate that the theoretical findings from computer thermal simulations match the experimental findings for a GlidCop® (aluminum oxide dispersion-strengthened copper) block sample as it is heated by the APS x-ray source. The block is water cooled and uses wire coil inserts for enhanced heat transfer. The U-shaped water channel in the block uses forced convection to keep the block from overheating. The heat transfer coefficients for this configuration are experimentally determined as a function of coolant flow rate. The emissivity of the test piece is experimentally determined as a function of temperature, surface finish, and orientation. These parameters are fed into the computer simulation model and compared to experimental results obtained at the beamline. An infrared camera is used to measure the surface temperature of the block, and the enthalpy rise of the water is measured to determine the total power absorbed in the block. When compared, the final results confirm that thermal simulations provide fairly accurate results and can be used with confidence in the design of new beamline components.
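The enthalpy-rise measurement mentioned above reduces to Q = ṁ·c_p·ΔT. The sketch below uses assumed example numbers for flow rate and temperature rise, not the actual beamline measurements.

```python
# Sketch of the coolant enthalpy-rise calculation used to find the total
# power absorbed by the block: Q = mdot * cp * dT.
# Flow rate and temperature rise are assumed examples, not measured values.
m_dot = 0.05    # coolant mass flow rate, kg/s (assumed)
cp = 4186.0     # specific heat of water, J/(kg*K)
dT = 2.0        # coolant temperature rise, K (assumed)

Q = m_dot * cp * dT  # absorbed power, W
print(f"absorbed power: {Q:.1f} W")
```

Comparing this absorbed power and the IR-measured surface temperatures against the simulation outputs is what closes the validation loop.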


Thermal investigation and management of a PEM (Polymer Electrolyte Membrane) fuel cell stack. CHRISTIAN GRIGOLEIT (Farmingdale State College, Farmingdale, NY, 11735) DIVINDER MAHAJAN (Brookhaven National Laboratory, Upton, NY, 11973)

Thermal management systems to control temperature and humidity inside a PEM fuel cell are crucial to optimizing its performance and efficiency. Temperatures higher than 80°C increase the membrane’s malleability and its permeability to reactant gases, both of which reduce the efficiency of the electrochemical process, making thermal management crucial. Accordingly, an experimental setup was built to monitor the internal temperature of a fuel cell power stack during normal operating conditions and under variable power loading levels. A ten-cell power stack with three plates, at the beginning, middle, and end, each fitted with five thermocouples, was used to monitor the internal temperature of the stack. Tests were each run at a different humidity, above and below 30°C, because the humidity requirements of the stack change with temperature. During each test, the current of the electrical load was gradually increased while all humidity levels, power outputs, and stack temperatures were monitored and recorded. When the heat and electrical power produced were graphed against time, both increased at about the same rate, indicating that the ratio of heat to electrical power was about 1:1. Therefore, higher cooling loads are expected at higher currents, and more parasitic energy is required to cool down the fuel cell stack. Thus, it is recommended to run the fuel cell stack at lower currents to increase efficiency and minimize the parasitic energy needed to operate cooling systems.
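The roughly 1:1 heat-to-electric split is consistent with simple cell thermodynamics: heat power scales with (V_tn − V_cell) and electric power with V_cell, where V_tn is the thermoneutral voltage. The sketch below assumes the lower-heating-value thermoneutral voltage of about 1.25 V and a hypothetical operating point; neither number is a measurement from this work.

```python
# Simple-thermodynamics check of the ~1:1 heat-to-electric split:
# heat power ~ (V_tn - V_cell), electric power ~ V_cell.
# V_tn is the LHV thermoneutral voltage; V_cell is a hypothetical
# operating point, not a value measured in this work.
V_tn = 1.25      # V, lower-heating-value thermoneutral voltage per cell
V_cell = 0.625   # V, assumed operating voltage per cell

heat_to_electric = (V_tn - V_cell) / V_cell
print(heat_to_electric)  # 1.0, i.e., a 1:1 split at this operating point
```

Higher currents pull V_cell down, pushing the ratio above 1:1, which is why cooling loads and parasitic losses grow with load.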


Tribological Evaluation of Multicomponent Nanoparticle-based Lubricant Additives. JOSHUA WEBER (Grinnell College, Grinnell, IA, 50112) OSMAN L. ERYILMAZ AND ALI ERDEMIR (Argonne National Laboratory, Argonne, IL, 60439)

Friction and wear are significant sources of energy and material losses in mechanical processes, and thus lubrication, which can mitigate these losses, is a principal focus of efforts to improve energy efficiency and mechanical durability. Multicomponent nanoparticle-based additives have shown promise as an environmentally neutral alternative to the damaging chemical additives currently used to enhance lubrication by reducing friction and wear. This investigation focused on inorganic-organic nanocomposite concentrate lubricants consisting of MoS2-based inorganic nano-solid lubricants that are intercalated and encapsulated with an organic canola oil medium. These lubricants were provided by NanoMech, LLC, and were evaluated under a DOE-funded project with Caterpillar and the University of Arkansas. A trial on a pin-on-disk tribometer with a five-newton load over a range of speeds (1-50 rpm) was used to evaluate the performance of each lubricant in the hydrodynamic, mixed, and boundary regimes of lubrication. The performance of the lubricants under prolonged extreme conditions was assessed by day-long, low-speed (1 and 3 rpm) testing. After each test, an optical profilometer was used to analyze wear. In the variable-speed test, compared to the base oil alone, the additives lowered the coefficient of friction by over 50% for speeds over 10 rpm and by approximately 10-30% for lower speeds. Moreover, with the additives, the contact surfaces showed considerably less wear. For both long-term tests, both additives had lower maximum coefficients of friction than the base oil, but their steady-state coefficients were approximately 10% higher for the 3-rpm test and 10-30% higher for the 1-rpm test. Furthermore, the additives either did not affect or increased the amount of wear.
While the nanoparticle-based additives reduce friction over the short term, the results suggest that over time they cause wear that increases friction. As this investigation continues, more trials will be performed to confirm these initial results, to explore the differences between the additives, and to examine their performance under various conditions. The preliminary results show potential, so the lubricants will be tested in a block-on-ring tribometer, which more closely simulates engine conditions, and the mechanisms by which these environmentally innocuous additives reduce friction and wear will be analyzed.


Use of Congo Red Assay and HK Assay for Cellulase Characterization. JASMINE CANCINO (Ohlone Community College, Fremont, CA, 94539) MASOOD HADI (Lawrence Berkeley National Laboratory, Berkeley, CA, 94720)

This research was undertaken to determine optimal methods to characterize cellulase. The glucose hexokinase (HK) assay was used to measure glucose in solution. The Congo Red assay was evaluated to determine its sensitivity and usefulness for analyzing enzyme activity. Remazol Brilliant Blue R (RBBR) is a dye used in the Azo-CMC assay protocol; its molar extinction coefficient and wavelength of maximum absorbance were unknown and were to be determined. A series of glucose concentrations were mixed with HK reagent and read in the multimode detector. CMC/agar plates of different pHs were spotted with different concentrations of enzymes and incubated for two nights. Several concentrations of RBBR were measured in the UV-Vis spectrometer. The glucose HK assay showed a positive relationship between enzyme activity and area. The sensitivity of the Congo Red assay was not sufficient to detect a significant difference in enzyme activity based on pH. The molar extinction coefficient of RBBR was 4346.3/(M·cm), and the wavelength of maximum absorbance is in the range from 590 nm to 597 nm. With the glucose HK assay protocol established, the standard curve is ready for future use with enzymes. The Congo Red assay is not sensitive enough to detect significant differences in enzyme activity.
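The extinction coefficient follows from the Beer-Lambert law, A = ε·c·l, as the slope of absorbance versus concentration at fixed path length. The absorbance values in the sketch below are synthetic, generated from the reported ε purely to illustrate the fit; they are not measured data, and the path length and concentrations are assumptions.

```python
# Illustration of extracting a molar extinction coefficient from the
# Beer-Lambert law, A = eps * c * l. Absorbance values are synthetic,
# generated from the reported eps = 4346.3 /(M*cm); not measured data.
eps_reported = 4346.3          # /(M*cm), from this work
path_cm = 1.0                  # cuvette path length, cm (assumed)
concs_M = [1e-4, 2e-4, 4e-4]   # RBBR concentrations, M (illustrative)
absorbances = [eps_reported * c * path_cm for c in concs_M]

# Least-squares slope through the origin recovers eps:
eps_fit = (sum(a * c for a, c in zip(absorbances, concs_M))
           / sum(c * c for c in concs_M) / path_cm)
print(f"fitted eps = {eps_fit:.1f} /(M*cm)")
```

In practice the slope would be fitted to absorbance readings taken at the wavelength of maximum absorbance (590-597 nm for RBBR).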


Use of National Instruments’ LabVIEW to Create a Database of Tested Batteries. MICHAEL COZZA (University of Illinois, Urbana-Champaign, IL, 61820) JOHN BASCO (Argonne National Laboratory, Argonne, IL, 60439)

LabVIEW (Laboratory Virtual Instrumentation Engineering Workbench) is a graphical computer language, developed by National Instruments, that uses virtual wires to connect nodes in a flowchart manner, rather than the text code common in languages such as C, Visual BASIC, and Python. Each new program made in LabVIEW is called a virtual instrument, or VI. LabVIEW was used to create a VI implementing a database from the ground up. The database was needed to replace an outdated non-LabVIEW program that stored battery specifications, including the battery identification number (BIN), manufacturer, dimensions, voltage, and peak power, among other information. This program is needed because the research group is transitioning to National Instruments software and hardware; the old equipment is being phased out, and a new way to store battery information was needed.


Vehicle-to-Grid Technology: Using Plug-In Hybrid Vehicles to Facilitate the Integration of Wind Power into the Electric System. EMILY HUMPHREYS (Stanford University, Stanford, CA, 94305) TONY MARKEL (National Renewable Energy Laboratory, Golden, CO, 80401)

Plug-in hybrid electric vehicles (PHEVs) with bidirectional power flow capability can provide vehicle-to-grid (V2G) services for utilities by charging and discharging to help balance electricity supply and demand. V2G has the potential to help utilities integrate more wind power into their mix by compensating for the error inherent in wind forecasting. This service would increase the value of PHEVs and wind power simultaneously, allowing each to reach higher penetration levels. For this project, simulations were run in Matlab to determine the degree to which a PHEV fleet in the Xcel Energy service territory could compensate for the forecast error associated with a 20% wind power penetration. Twenty-seven scenarios were examined, each with a different combination of three variables: PHEV fleet penetration (0.57%, 10%, and 20%), PHEV electric range (10, 20, and 40 miles), and charging strategy (deep cycling, shallow cycling, and a combination). The additional benefits offered by increasing PHEV penetration levels dropped off quickly, and even the 20% penetration case was unable to compensate for all of the forecast error. PHEV20s and PHEV40s presented only modest increases in forecast error compensation compared to the PHEV10s. The shallow charging strategy decreased error compensation significantly, but the mixed strategy was nearly as effective as the deep strategy. The impact of this V2G service on PHEV battery life and fuel consumption was also examined. In the low penetration case, battery life and fuel consumption were both impacted by 30-50% compared to a similar PHEV fleet without V2G capability. The PHEV10s generally experienced the least impact on battery life and fuel consumption. The mixed charging strategy saved significant amounts of fuel compared to the deep charging strategy, especially at higher penetration levels, without impacting battery life significantly more than the shallow charging strategy.
In conclusion, it appears that high PHEV penetration levels will be needed to compensate for the majority of forecast error associated with a 20% wind penetration; PHEV10s provide adequate forecast error compensation while saving significantly in battery costs; and a mixed charging strategy is optimal for providing sufficient error compensation while maximizing battery life and minimizing fuel consumption. Further work is necessary to evaluate the economic impact of forecast error compensation on utilities, rate-payers, and PHEV owners.
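The fleet-capacity reasoning behind these conclusions can be sketched in a few lines: at any instant, the fleet can absorb forecast error only up to the aggregate power of the plugged-in vehicles. All numbers below are illustrative assumptions, not values from the Xcel Energy simulations.

```python
# Minimal sketch of the fleet-capacity argument: at one moment, the fleet
# can offset wind forecast error only up to the aggregate power of the
# plugged-in vehicles. All numbers are illustrative assumptions, not
# values from the Xcel Energy simulations.
def compensable_error_mw(n_vehicles, plugged_fraction, kw_per_vehicle,
                         forecast_error_mw):
    """Forecast error (MW) the fleet can offset at one instant."""
    fleet_capacity_mw = n_vehicles * plugged_fraction * kw_per_vehicle / 1000.0
    return min(fleet_capacity_mw, forecast_error_mw)

# Hypothetical fleet: 100,000 PHEVs, half plugged in, 6 kW each.
print(compensable_error_mw(100_000, 0.5, 6.0, 500.0))  # 300.0 MW of 500 MW
```

The min() captures why benefits saturate: once fleet capacity exceeds the forecast error, adding vehicles (or range) buys little additional compensation.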


Wireless Traffic Density Counter. KENNETH SWAN (Jackson State University, Jackson, MS, 39207) PAUL D. EWING (Oak Ridge National Laboratory, Oak Ridge, TN, 37831)

The objective of this project is to develop a wireless traffic density counter to aid in pre-disaster emergency evacuation. It will be small, lightweight, and easy to deploy with minimal set-up or calibration. It will also be relatively inexpensive and easy to maintain, requiring very little attention over extended periods of time. These qualities ensure that, if an event similar to Hurricane Katrina occurs, there is a means of monitoring not only the main evacuation routes but also secondary routes to aid in expeditious and effective evacuation. The following factors were considered: type of sensors, type of wireless transmitting device, what information would be most useful, and deployment methods. The sensor choices considered were Hall effect, magnetic proximity, ultrasonic, and passive infrared (PIR). The wireless devices considered were the Archrock Wireless Sensor Nodes, the Rabbit Technologies Zigbee Application Kit, and the Texas Instruments MSP430 eZ430-RF2500 Development Tool. The most useful information was determined to be the number of vehicles, speed, size, and type. The methods of deployment considered included gluing the unit to the road, embedding it in the road, mounting it under a bridge, and mounting it on a sign beside the road. With the emphasis on low power consumption, the selected design uses PIRs connected to a TI MSP430, mounted on a sign beside the road, first to measure speed and count the number of vehicles. The next step was to build a test circuit based on logic gates. The circuit has no memory, and its only output is 16 LEDs. The circuit uses one PIR to start a counter and one PIR to stop the counter. After the counter is stopped, the LEDs will either be on or off, indicating a binary number that can be translated into a decimal number and converted to miles per hour. The reaction of the PIRs and the count were consistent under laboratory conditions.
The next phase of the project will be to connect and program the automated circuit and perform final testing. In terms of pre-disaster emergency evacuation, a system like this will be invaluable in aiding the safe and expeditious flow of thousands of people. It will have an impact on more than just hurricane evacuation: other applications could include use at the perimeter of large urban areas in the event of a disaster requiring evacuation, e.g., chemical spills, nuclear hazards, or wildfires.
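The speed measurement described above reduces to distance over time, with the time interval recovered from the binary counter value. In the sketch below, the sensor spacing and counter clock rate are assumed example values, not the actual circuit parameters.

```python
# The two-PIR speed measurement reduces to distance over time: the first PIR
# starts a counter, the second stops it, and the binary count read from the
# LEDs encodes the elapsed time. Sensor spacing and counter clock rate are
# assumed example values, not the actual circuit parameters.
SPACING_FT = 10.0    # distance between the two PIR beams, ft (assumed)
CLOCK_HZ = 1000.0    # counter tick rate, Hz (assumed)

def mph_from_count(count):
    """Convert a counter value (ticks between PIR triggers) to mph."""
    seconds = count / CLOCK_HZ
    ft_per_s = SPACING_FT / seconds
    return ft_per_s * 3600.0 / 5280.0  # ft/s -> miles per hour

print(round(mph_from_count(227), 1))  # ~30 mph at these assumed parameters
```

A 16-LED counter gives counts up to 65535, so the clock rate sets the trade-off between speed resolution and the slowest vehicle the counter can register before overflowing.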