Science.gov

Sample records for experiment cepex design

  1. Central Equatorial Pacific Experiment (CEPEX). Design document

    SciTech Connect

    Not Available

    1993-04-01

    The Earth's climate has varied significantly in the past, yet climate records reveal that in the tropics, sea surface temperatures seem to have been remarkably stable, varying by less than a few degrees Celsius over geologic time. Today, the large warm pool of the western Pacific shows similar characteristics. Its surface temperature always exceeds 27°C, but never 31°C. Heightened interest in this observation has been stimulated by questions of global climate change and the exploration of stabilizing climate feedback processes. Efforts to understand the observed weak sensitivity of tropical sea surface temperatures to climate forcing have led to a number of competing ideas about the nature of this apparent thermostat. Although there remains disagreement on the processes that regulate tropical sea surface temperature, most agree that further progress in resolving these differences requires comprehensive field observations of three-dimensional water vapor concentrations, solar and infrared radiative fluxes, surface fluxes of heat and water vapor, and cloud microphysical properties. This document describes the Central Equatorial Pacific Experiment (CEPEX) plan to collect such observations over the central equatorial Pacific Ocean during March of 1993.

  2. Central Equatorial Pacific Experiment (CEPEX)

    SciTech Connect

    Not Available

    1993-01-01

    The Earth's climate has varied significantly in the past, yet climate records reveal that in the tropics, sea surface temperatures seem to have been remarkably stable, varying by less than a few degrees Celsius over geologic time. Today, the large warm pool of the western Pacific shows similar characteristics. Its surface temperature always exceeds 27°C, but never 31°C. Heightened interest in this observation has been stimulated by questions of global climate change and the exploration of stabilizing climate feedback processes. Efforts to understand the observed weak sensitivity of tropical sea surface temperatures to climate forcing have led to a number of competing ideas about the nature of this apparent thermostat. Although there remains disagreement on the processes that regulate tropical sea surface temperature, most agree that further progress in resolving these differences requires comprehensive field observations of three-dimensional water vapor concentrations, solar and infrared radiative fluxes, surface fluxes of heat and water vapor, and cloud microphysical properties. This document describes the Central Equatorial Pacific Experiment (CEPEX) plan to collect such observations over the central equatorial Pacific Ocean during March of 1993.

  3. High albedos of cirrus in the tropical Pacific warm pool: Microphysical interpretation from CEPEX and from Kwajalein, Marshall Islands

    SciTech Connect

    Heymsfield, A.J.; McFarquhar, G.M.

    1996-09-01

    Recent studies suggest that extensive shields of cirrus clouds over the equatorial Pacific "warm pool" may have a significant influence on the global climate, yet details of the links between cloud microphysical properties, upper-tropospheric latent and radiative heating rates, and climate are poorly understood. This study addresses whether relatively reflective ice crystals with dimensions smaller than about 100 µm near the tops of tropical cirrus clouds, produced by deep convection when the sea surface temperature exceeds 300 K, are principally responsible for the high albedos observed in this region. In situ measurements of ice crystal size distributions and shapes, acquired during the Central Equatorial Pacific Experiment (CEPEX), are used to derive cloud ice water content (IWC), particle cross-sectional area (A), and other microphysical and optical properties from particles with sizes down to 5 µm. These measurements are needed to ascertain the microphysical properties primarily responsible for determining cloud optical depth and albedo in visible wavelengths. Analysis shows that IWC, A, and various measures of particle size all tend to decrease with decreasing temperature and increasing altitude, although considerable scatter is observed. Small ice crystals make up more than half the mass and cause more than half the extinction on average in the upper, colder parts of the cirrus; however, the predominantly large particles in the lower, warmer parts of the cirrus contain at least an order of magnitude greater mass and are dominant in producing the high observed albedos. An examination of the lidar and radiometer data acquired onboard the NASA ER-2, which overflew the Learjet during CEPEX, supports the conclusion that the higher, colder regions of the cirrus typically have volume extinction coefficients that are only about 10% of those in the lower, warmer regions. 36 refs., 25 figs., 4 tabs.

  4. High Albedos of Cirrus in the Tropical Pacific Warm Pool: Microphysical Interpretations from CEPEX and from Kwajalein, Marshall Islands.

    NASA Astrophysics Data System (ADS)

    Heymsfield, Andrew J.; McFarquhar, Greg M.

    1996-09-01

    Recent studies suggest that extensive shields of cirrus clouds over the equatorial Pacific 'warm pool' may have a significant influence on the global climate, yet details of the links between cloud microphysical properties, upper-tropospheric latent and radiative heating rates, and climate are poorly understood. This study addresses whether relatively reflective ice crystals with dimensions smaller than about 100 µm near the tops of tropical cirrus clouds, produced by deep convection when the sea surface temperature exceeds 300 K, are principally responsible for the high albedos observed in this region. In situ measurements of ice crystal size distributions and shapes, acquired during the Central Equatorial Pacific Experiment (CEPEX), are used to derive cloud ice water content (IWC), particle cross-sectional area (A), and other microphysical and optical properties from particles with sizes down to 5 µm. These measurements are needed to ascertain the microphysical properties primarily responsible for determining cloud optical depth and albedo in visible wavelengths and were acquired by a Learjet flying in tropical cirrus and occasionally in convection between altitudes of 8 and 14 km (-20°C to -70°C). Previously unanalyzed microphysical measurements in the vicinity of Kwajalein, Marshall Islands, acquired in the mid-1970s from a WB57F aircraft between altitudes of 5 and 17 km, are also used to study the variation in microphysical properties from cirrus base to top, using a combination of constant-altitude penetrations and steep ascents and descents through cloud. Analysis shows that IWC, A, and various measures of particle size all tend to decrease with decreasing temperature and increasing altitude, although considerable scatter is observed. Small ice crystals make up more than half the mass and cause more than half the extinction on average in the upper, colder parts of the cirrus; however, the predominantly large particles in the lower, warmer parts of the cirrus

  5. SEDS experiment design definition

    NASA Technical Reports Server (NTRS)

    Carroll, Joseph A.; Alexander, Charles M.; Oldson, John C.

    1990-01-01

    The Small Expendable-tether Deployment System (SEDS) program was established to design, build, integrate, fly, and safely deploy and release an expendable tether. A suitable concept for an on-orbit test of SEDS was developed. The following tasks were performed: (1) Define experiment objectives and requirements; (2) Define experiment concepts to reach those objectives; (3) Support NASA in experiment concept selection and definition; (4) Perform analyses and tests of SEDS hardware; (5) Refine the selected SEDS experiment concept; and (6) Support interactive SEDS system definition process. Results and conclusions are given.

  6. Design Experiments in Educational Research.

    ERIC Educational Resources Information Center

    Cobb, Paul; Confrey, Jere; diSessa, Andrea; Lehrer, Richard; Schauble, Leona

    2003-01-01

    Indicates the range of purposes and variety of settings in which design experiments have been conducted, delineating five crosscutting features that collectively differentiate design experiments from other methodologies. Clarifies what is involved in preparing for and carrying out a design experiment and in conducting a retrospective analysis of…

  7. Structural Assembly Demonstration Experiment (SADE) experiment design

    NASA Technical Reports Server (NTRS)

    Akin, D. L.; Bowden, M. L.

    1982-01-01

    The Structural Assembly Demonstration Experiment concept is to erect a hybrid deployed/assembled structure as an early space experiment in large space structures technology. The basic objectives can be broken down into three generic areas: (1) by performing assembly tasks both in space and in neutral buoyancy simulation, a mathematical basis will be found for the validity conditions of neutral buoyancy, thus enhancing the utility of water as a medium for simulation of weightlessness; (2) a data base will be established describing the capabilities and limitations of EVA crewmembers, including effects of such things as hardware size and crew restraints; and (3) experience of the M.I.T. Space Systems Lab in neutral buoyancy simulation of large space structures assembly indicates that the assembly procedure may create the largest loads that a structure will experience during its lifetime. Data obtained from the experiment will help establish an accurate loading model to aid designers of future space structures.

  8. Designing experiments through compressed sensing.

    SciTech Connect

    Young, Joseph G.; Ridzal, Denis

    2013-06-01

    In the following paper, we discuss how to design an ensemble of experiments through the use of compressed sensing. Specifically, we show how to conduct a small number of physical experiments and then use compressed sensing to reconstruct a larger set of data. In order to accomplish this, we organize our results into four sections. We begin by extending the theory of compressed sensing to a finite product of Hilbert spaces. Then, we show how these results apply to experiment design. Next, we develop an efficient reconstruction algorithm that allows us to reconstruct experimental data projected onto a finite element basis. Finally, we verify our approach with two computational experiments.
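
    As an illustration of the reconstruction idea summarized above, the following is a minimal, hedged sketch of generic compressed sensing: a sparse coefficient vector is recovered from a small number of linear "measurements" by iterative soft thresholding (ISTA) applied to an l1-regularized least-squares problem. The problem sizes, the random measurement matrix, and all variable names are illustrative assumptions, not the authors' Hilbert-space formulation or their finite-element reconstruction algorithm.

      # Sketch: recover a sparse x from a few measurements y = A x by solving
      #   min_x 0.5*||A x - y||^2 + lam*||x||_1  with ISTA (illustrative only).
      import numpy as np

      rng = np.random.default_rng(0)
      n, m, k = 256, 64, 8                      # signal length, number of "experiments", sparsity
      x_true = np.zeros(n)
      x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)

      A = rng.normal(size=(m, n)) / np.sqrt(m)  # random measurement design (assumption)
      y = A @ x_true                            # the m experimental observations

      lam = 0.01
      L = np.linalg.norm(A, 2) ** 2             # Lipschitz constant of the smooth term
      x = np.zeros(n)
      for _ in range(2000):                     # iterative soft-thresholding updates
          z = x - A.T @ (A @ x - y) / L
          x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)

      print("relative reconstruction error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))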

  9. Designing Effective Undergraduate Research Experiences

    NASA Astrophysics Data System (ADS)

    Severson, S.

    2010-12-01

    I present a model for designing student research internships that is informed by the best practices of the Center for Adaptive Optics (CfAO) Professional Development Program. The dual strands of the CfAO education program include: the preparation of early-career scientists and engineers in effective teaching; and changing the learning experiences of students (e.g., undergraduate interns) through inquiry-based "teaching laboratories." This paper will focus on the carry-over of these ideas into the design of laboratory research internships such as the CfAO Mainland internship program as well as NSF REU (Research Experiences for Undergraduates) and senior-thesis or "capstone" research programs. Key ideas in maximizing student learning outcomes and generating productive research during internships include: defining explicit content, scientific process, and attitudinal goals for the project; assessment of student prior knowledge and experience, then following up with formative assessment throughout the project; setting reasonable goals with timetables and addressing motivation; and giving students ownership of the research by implementing aspects of the inquiry process within the internship.

  10. Role-Based Design: Design Experiences

    ERIC Educational Resources Information Center

    Miller, Charles; Hokanson, Brad; Doering, Aaron; Brandt, Tom

    2010-01-01

    This is the fourth and final installment in a series of articles presenting a new outlook on the methods of instructional design. These articles examine the nature of the process of instructional design and are meant to stimulate discussion about the roles of designers in the fields of instructional design, the learning sciences, and interaction…

  11. Experiment Design and Analysis Guide - Neutronics & Physics

    SciTech Connect

    Misti A Lillo

    2014-06-01

    The purpose of this guide is to provide a consistent, standardized approach to performing neutronics/physics analysis for experiments inserted into the Advanced Test Reactor (ATR). This document provides neutronics/physics analysis guidance to support experiment design and analysis needs for experiments irradiated in the ATR. This guide addresses neutronics/physics analysis in support of experiment design, experiment safety, and experiment program objectives and goals. The intent of this guide is to provide a standardized approach for performing typical neutronics/physics analyses. Deviation from this guide is allowed provided that neutronics/physics analysis details are properly documented in an analysis report.

  12. Nova pulse power design and operational experience

    NASA Astrophysics Data System (ADS)

    Whitham, K.; Larson, D.; Merritt, B.; Christie, D.

    1987-01-01

    Nova is a 100 TW Nd3+ solid state laser designed for experiments with laser fusion at Lawrence Livermore National Laboratory (LLNL). The pulsed power for Nova includes a 58 MJ capacitor bank driving 5336 flashlamps with millisecond pulses and subnanosecond high voltages for electro-optics. This paper summarizes the pulsed power designs and the operational experience to date.

  13. Design of the QPS Experiment

    NASA Astrophysics Data System (ADS)

    Nelson, B. E.

    2002-11-01

    The Quasi-Poloidal Stellarator, QPS, is a low-aspect-ratio (R/a = 2.7), compact stellarator in the final stages of conceptual design. The device parameters are R = 0.9 m, a = 0.33 m, and B = 1 T for 1 s with 1-3 MW of plasma heating. The stellarator core consists of a modular coil set that provides the primary magnetic field configuration. It has two field periods with eight modular coils per period. Due to stellarator symmetry, there are only four different coil types. Flexible copper cable conductor will be wound on a stainless steel form, vacuum impregnated with epoxy, and canned for vacuum compatibility. The coil form allows the coils to be connected into an integral structure. Unlike most stellarators, a vacuum vessel surrounds the coils rather than fitting inside the coils, allowing excellent access for plasma diagnostics and heating. In addition there are external vertical field and toroidal field coils and an ohmic current solenoid for configuration flexibility. First plasma operation is planned for September 2007. Details of the engineering design and analysis will be presented.

  14. Design of the QPS Experiment.

    NASA Astrophysics Data System (ADS)

    Nelson, B. E.

    2003-10-01

    The Quasi-Poloidal Stellarator, QPS, is a low-aspect-ratio (R/a = 2.7), compact stellarator under design at ORNL. The device parameters are R = 0.95 m, a = 0.35 m, and B = 1 T for 1.5 s with 2 MW of ECH and 3.5 MW of ICRF for plasma heating. A nonplanar modular coil set provides the primary magnetic field configuration. It has two field periods with ten modular coils per period. Due to stellarator symmetry, there are only five different coil types. Flexible copper cable conductor will be wound on a stainless steel form, vacuum impregnated with epoxy, and canned for vacuum compatibility. The coil form allows the coils to be connected into an integral structural shell. Unlike most stellarators, a vacuum vessel surrounds the coils rather than fitting inside the coils, allowing excellent access for plasma diagnostics and heating. In addition there are external vertical field and toroidal field coils and an ohmic current solenoid for configuration flexibility. First plasma operation is planned for the end of 2007. Details of the engineering design and analysis will be presented.

  15. Design of the QPS Experiment

    NASA Astrophysics Data System (ADS)

    Nelson, Brad; Qps Team

    2004-11-01

    The Quasi-Poloidal Stellarator, QPS, is a low-aspect-ratio (R/a = 2.7), compact stellarator under design at ORNL. The device parameters are R = 0.95 m, a = 0.35 m, and B = 1 T for 1.5 s with 2 MW of ECH and 3.5 MW of ICRF for plasma heating. A nonplanar modular coil set provides the primary magnetic field configuration. It has two field periods with ten modular coils per period. Due to stellarator symmetry, there are only five different coil types. Flexible copper cable conductor will be wound on a stainless steel form, vacuum impregnated with epoxy, and canned for vacuum compatibility. The coil form allows the coils to be connected into an integral structural shell. Unlike most stellarators, a vacuum vessel surrounds the coils rather than fitting inside the coils, allowing excellent access for plasma diagnostics and heating. In addition there are external vertical field and toroidal field coils and an ohmic current solenoid for configuration flexibility. First plasma operation is planned for early 2009. Details of the engineering design and analysis will be presented.

  16. Spaceflight payload design flight experience G-408

    NASA Technical Reports Server (NTRS)

    Durgin, William W.; Looft, Fred J.; Sacco, Albert, Jr.; Thompson, Robert; Dixon, Anthony G.; Roberti, Dino; Labonte, Robert; Moschini, Larry

    1992-01-01

    Worcester Polytechnic Institute's first payload of spaceflight experiments flew aboard Columbia, STS-40, during June of 1991 and culminated eight years of work by students and faculty. The Get Away Special (GAS) payload was installed on the GAS bridge assembly at the aft end of the cargo bay behind the Spacelab Life Sciences (SLS-1) laboratory. The experiments were turned on by astronaut signal after reaching orbit and then functioned for 72 hours. Environmental and experimental measurements were recorded on three cassette tapes which, together with zeolite crystals grown on orbit, formed the basis of subsequent analyses. The experiments were developed over a number of years by undergraduate students meeting their project requirements for graduation. The experiments included zeolite crystal growth, fluid behavior, and microgravity acceleration measurement in addition to environmental data acquisition. Preparation also included structural design, thermal design, payload integration, and experiment control. All of the experiments functioned on orbit and the payload system performed within design estimates.

  17. Super Spool: An Experiment in Powerplant Design

    ERIC Educational Resources Information Center

    Kesler, Ronald

    1974-01-01

    Discusses the use of rubberbands, an empty wooden thread spool, two wooden matches, a wax washer, and a small nail to conduct an experiment or demonstration in powerplant design. Detailed procedures and suggested activities are included. (CC)

  18. Design for Engaging Experience and Social Interaction

    ERIC Educational Resources Information Center

    Harteveld, Casper; ten Thij, Eleonore; Copier, Marinka

    2011-01-01

    One of the goals of game designers is to design for an engaging experience and for social interaction. The question is how. We know that games can be engaging and allow for social interaction, but how do we achieve this or even improve on it? This article provides an overview of several scientific approaches that deal with this question. It…

  19. Adaptive design of visual perception experiments

    NASA Astrophysics Data System (ADS)

    O'Connor, John D.; Hixson, Jonathan; Thomas, James M., Jr.; Peterson, Matthew S.; Parasuraman, Raja

    2010-04-01

    Meticulous experimental design may not always prevent confounds from affecting experimental data acquired during visual perception experiments. Although experimental controls reduce the potential effects of foreseen sources of interference, interaction, or noise, they are not always adequate for preventing the confounding effects of unforeseen forces. Visual perception experimentation is vulnerable to unforeseen confounds because of the nature of the associated cognitive processes involved in the decision task. Some confounds are beyond the control of experimentation, such as what a participant does immediately prior to experimental participation, or the participant's attitude or emotional state. Other confounds may occur through ignorance of practical control methods on the part of the experiment's designer. The authors conducted experiments related to experimental fatigue and initially achieved significant results that were, upon re-examination, attributable to a lack of adequate controls. Re-examination of the original results and the processes and events that led to them yielded a second experimental design with more experimental controls and significantly different results. The authors propose that designers of visual perception experiments can benefit from planning to use a test-fix-test or adaptive experimental design cycle, so that unforeseen confounds in the initial design can be remedied.

  20. Analysis of designed experiments with complex aliasing

    SciTech Connect

    Hamada, M.; Wu, C.F.J.

    1992-07-01

    Traditionally, Plackett-Burman (PB) designs have been used in screening experiments for identifying important main effects. The PB designs whose run sizes are not a power of two have been criticized for their complex aliasing patterns, which according to conventional wisdom give confusing results. This paper goes beyond the traditional approach by proposing an analysis strategy that entertains interactions in addition to main effects. Based on the precepts of effect sparsity and effect heredity, the proposed procedure exploits the designs' complex aliasing patterns, thereby turning their 'liability' into an advantage. Demonstration of the procedure on three real experiments shows the potential for extracting important information available in the data that has, until now, been missed. Some limitations are discussed, and extensions to overcome them are given. The proposed procedure also applies to more general mixed-level designs that have become increasingly popular. 16 refs.
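
    To make the effect-sparsity and effect-heredity precepts concrete, here is a hedged sketch (not the authors' actual procedure) of forward selection over the main effects and two-factor interactions of a 12-run Plackett-Burman design, in which an interaction may enter the model only if at least one parent main effect is already included. The simulated response and the number of selected terms are assumptions.

      # Sketch: heredity-constrained forward selection on a PB(12) design (illustrative only).
      import itertools
      import numpy as np

      # 12-run Plackett-Burman design: cyclic shifts of the standard generator plus a row of -1.
      gen = np.array([1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1])
      X = np.array([np.roll(gen, i) for i in range(11)] + [-np.ones(11, dtype=int)])

      rng = np.random.default_rng(1)
      # Simulated response: two main effects and one interaction that obeys heredity, plus noise.
      y = 3.0 * X[:, 0] + 2.0 * X[:, 2] + 1.5 * X[:, 0] * X[:, 2] + rng.normal(0, 0.3, 12)

      def rss(cols):
          """Residual sum of squares of an intercept-plus-selected-terms least-squares fit."""
          M = np.column_stack([np.ones(12)] + cols)
          beta, *_ = np.linalg.lstsq(M, y, rcond=None)
          r = y - M @ beta
          return r @ r

      candidates = {f"x{i}": X[:, i] for i in range(11)}
      candidates.update({f"x{i}*x{j}": X[:, i] * X[:, j]
                         for i, j in itertools.combinations(range(11), 2)})

      model, cols = [], []
      for _ in range(4):                        # add up to four terms
          allowed = {}
          for name, col in candidates.items():
              if name in model:
                  continue
              if "*" in name:                   # effect heredity: require a parent main effect
                  a, b = name.split("*")
                  if a not in model and b not in model:
                      continue
              allowed[name] = col
          best = min(allowed, key=lambda nm: rss(cols + [allowed[nm]]))
          model.append(best)
          cols.append(allowed[best])

      print("selected terms:", model)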

  1. Affective loop experiences: designing for interactional embodiment

    PubMed Central

    Höök, Kristina

    2009-01-01

    Involving our corporeal bodies in interaction can create strong affective experiences. Systems that both can be influenced by and influence users corporeally exhibit a use quality we name an affective loop experience. In an affective loop experience, (i) emotions are seen as processes, constructed in the interaction, starting from everyday bodily, cognitive or social experiences; (ii) the system responds in ways that pull the user into the interaction, touching upon end users' physical experiences; and (iii) throughout the interaction the user is an active, meaning-making individual choosing how to express themselves—the interpretation responsibility does not lie with the system. We have built several systems that attempt to create affective loop experiences with more or less successful results. For example, eMoto lets users send text messages between mobile phones, but in addition to text, the messages also have colourful and animated shapes in the background chosen through emotion-gestures with a sensor-enabled stylus pen. Affective Diary is a digital diary with which users can scribble their notes, but it also allows for bodily memorabilia to be recorded from body sensors mapping to users' movement and arousal and placed along a timeline. Users can see patterns in their bodily reactions and relate them to various events going on in their lives. The experiences of building and deploying these systems gave us insights into design requirements for addressing affective loop experiences, such as how to design for turn-taking between user and system, how to create for ‘open’ surfaces in the design that can carry users' own meaning-making processes, how to combine modalities to create for a ‘unity’ of expression, and the importance of mirroring user experience in familiar ways that touch upon their everyday social and corporeal experiences. But a more important lesson gained from deploying the systems is how emotion processes are co-constructed and

  2. Affective loop experiences: designing for interactional embodiment.

    PubMed

    Höök, Kristina

    2009-12-12

    Involving our corporeal bodies in interaction can create strong affective experiences. Systems that both can be influenced by and influence users corporeally exhibit a use quality we name an affective loop experience. In an affective loop experience, (i) emotions are seen as processes, constructed in the interaction, starting from everyday bodily, cognitive or social experiences; (ii) the system responds in ways that pull the user into the interaction, touching upon end users' physical experiences; and (iii) throughout the interaction the user is an active, meaning-making individual choosing how to express themselves-the interpretation responsibility does not lie with the system. We have built several systems that attempt to create affective loop experiences with more or less successful results. For example, eMoto lets users send text messages between mobile phones, but in addition to text, the messages also have colourful and animated shapes in the background chosen through emotion-gestures with a sensor-enabled stylus pen. Affective Diary is a digital diary with which users can scribble their notes, but it also allows for bodily memorabilia to be recorded from body sensors mapping to users' movement and arousal and placed along a timeline. Users can see patterns in their bodily reactions and relate them to various events going on in their lives. The experiences of building and deploying these systems gave us insights into design requirements for addressing affective loop experiences, such as how to design for turn-taking between user and system, how to create for 'open' surfaces in the design that can carry users' own meaning-making processes, how to combine modalities to create for a 'unity' of expression, and the importance of mirroring user experience in familiar ways that touch upon their everyday social and corporeal experiences. But a more important lesson gained from deploying the systems is how emotion processes are co-constructed and experienced

  3. Power and replication - designing powerful experiments

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Biological research is expensive, with monetary costs to granting agencies and emotional costs to researchers. As such, biological researchers should always follow the mantra, "failure is not an option." A failed experimental design is generally manifested as an experiment with high P-values, leavin...

  4. Designing a Curriculum for Clinical Experiences

    ERIC Educational Resources Information Center

    Henning, John E.; Erb, Dorothy J.; Randles, Halle Schoener; Fults, Nanette; Webb, Kathy

    2016-01-01

    The purpose of this article is to describe a collaborative effort among five teacher preparation programs to create a conceptual tool designed to put clinical experiences at the center of our programs. The authors refer to the resulting product as a clinical curriculum. The clinical curriculum describes a developmental sequence of clinical…

  5. THE QUEST TO DESIGN BETTER EXPERIMENTS.

    PubMed

    Perkel, Jeffrey

    2016-01-01

    First suggested by R.A. Fisher in the 1930s, design of experiments (DOE) strategies are finding their way into modern life science research. Jeffrey Perkel looks at how DOE is impacting everything from genome editing to mass spectrometry. PMID:27401668

  6. Design and Simulation of Hybridization Experiments

    Energy Science and Technology Software Center (ESTSC)

    1995-11-28

    DB EXP DESIGN is a suite of three UNIX shell-like programs: DWC, which computes oligomer composition of DNA texts using directed acyclic word data structures; DWO, which simulates hybridization experiments; and DMI, which calculates the information content of individual probes, their mutual information content, and their joint information content through estimation of Markov trees.
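
    For illustration, a minimal sketch of the oligomer-composition step is given below as a plain k-mer count over a DNA text; the actual DWC program uses directed acyclic word data structures, and the sequence, k, and function name here are assumptions.

      # Sketch: count every length-k oligomer in a DNA text (illustrative stand-in for DWC).
      from collections import Counter

      def oligomer_composition(seq, k):
          seq = seq.upper()
          return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

      print(oligomer_composition("ACGTACGGACGT", 3).most_common(3))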

  7. Conceptual design for spacelab pool boiling experiment

    NASA Technical Reports Server (NTRS)

    Lienhard, J. H.; Peck, R. E.

    1978-01-01

    A pool boiling heat transfer experiment to be incorporated with a larger two-phase flow experiment on Spacelab was designed to confirm (or alter) the results of earth-normal gravity experiments which indicate that the hydrodynamic peak and minimum pool boiling heat fluxes vanish at very low gravity. Twelve small sealed test cells containing water, methanol or Freon 113 and cylindrical heaters of various sizes are to be built. Each cell will be subjected to one or more 45 sec tests in which the surface heat flux on the heaters is increased linearly until the surface temperature reaches a limiting value of 500 C. The entire boiling process will be photographed in slow-motion. Boiling curves will be constructed from thermocouple and electric input data, for comparison with the motion picture records. The conduct of the experiment will require no more than a few hours of operator time.

  8. Advanced ISDN satellite designs and experiments

    NASA Technical Reports Server (NTRS)

    Pepin, Gerard R.

    1992-01-01

    The research performed by GTE Government Systems and the University of Colorado in support of the NASA Satellite Communications Applications Research (SCAR) Program is summarized. Two levels of research were undertaken. The first dealt with providing interim services Integrated Services Digital Network (ISDN) satellite (ISIS) capabilities that accented basic rate ISDN with a ground control similar to that of the Advanced Communications Technology Satellite (ACTS). The ISIS Network Model development represents satellite systems like the ACTS orbiting switch. The ultimate aim is to move these ACTS ground control functions on-board the next generation of ISDN communications satellite to provide full-service ISDN satellite (FSIS) capabilities. The technical and operational parameters for the advanced ISDN communications satellite design are obtainable from the simulation of ISIS and FSIS engineering software models of the major subsystems of the ISDN communications satellite architecture. Discrete event simulation experiments would generate data for analysis against NASA SCAR performance measures and the data obtained from the ISDN satellite terminal adapter hardware (ISTA) experiments, also developed in the program. The Basic and Option 1 phases of the program are also described and include the following: literature search, traffic model, network model, scenario specifications, performance measures definitions, hardware experiment design, hardware experiment development, simulator design, and simulator development.

  9. Simulator design for advanced ISDN satellite design and experiments

    NASA Technical Reports Server (NTRS)

    Pepin, Gerald R.

    1992-01-01

    This simulation design task completion report documents the simulation techniques associated with the network models of both the Interim Service ISDN (integrated services digital network) Satellite (ISIS) and the Full Service ISDN Satellite (FSIS) architectures. The ISIS network model design represents satellite systems like the Advanced Communication Technology Satellite (ACTS) orbiting switch. The FSIS architecture, the ultimate aim of this element of the Satellite Communications Applications Research (SCAR) program, moves all control and switching functions on-board the next generation ISDN communication satellite. The technical and operational parameters for the advanced ISDN communications satellite design will be obtained from the simulation of ISIS and FSIS engineering software models for their major subsystems. Discrete event simulation experiments will be performed with these models using various traffic scenarios, design parameters and operational procedures. The data from these simulations will be used to determine the engineering parameters for the advanced ISDN communications satellite.

  10. Statistical considerations in design of spacelab experiments

    NASA Technical Reports Server (NTRS)

    Robinson, J.

    1978-01-01

    After making an analysis of experimental error sources, statistical models were developed for the design and analysis of potential Space Shuttle experiments. Guidelines for statistical significance and/or confidence limits of expected results were also included. The models were then tested out on the following proposed Space Shuttle biomedical experiments: (1) bone density by computer tomography; (2) basal metabolism; and (3) total body water. Analysis of those results and therefore of the models proved inconclusive due to the lack of previous research data and statistical values. However, the models were seen as possible guides to making some predictions and decisions.

  11. Simulation of integrated beam experiment designs

    SciTech Connect

    Grote, D.P.; Sharp, W.M.

    2004-06-11

    Simulations of Integrated Beam Experiment (IBX) class accelerator designs have been carried out. These simulations are an important tool for validating such designs. Issues such as envelope mismatch and emittance growth can be examined in a self-consistent manner, including the details of injection, accelerator transitions, long-term transport, and longitudinal compression. The simulations are three-dimensional and time-dependent, and begin at the source. They continue up through the end of the acceleration region, at which point the data is passed on to a separate simulation of the drift compression. Results are presented.

  12. CMM Interim Check Design of Experiments (U)

    SciTech Connect

    Montano, Joshua Daniel

    2015-07-29

    Coordinate Measuring Machines (CMM) are widely used in industry, throughout the Nuclear Weapons Complex and at Los Alamos National Laboratory (LANL) to verify part conformance to design definition. Calibration cycles for CMMs at LANL are predominantly one year in length and include a weekly interim check to reduce risk. The CMM interim check makes use of Renishaw's Machine Checking Gauge, an off-the-shelf product that simulates a large sphere within a CMM's measurement volume and allows for error estimation. As verification of the interim check process, a design of experiments investigation was proposed to test two key factors (location and inspector). The results from the two-factor factorial experiment showed that location influenced results more than the inspector or the interaction did.
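
    A hedged sketch of the kind of two-factor factorial analysis described above follows; the data are simulated and the factor levels, replicate count, and effect sizes are assumptions rather than the LANL measurements. It computes a balanced two-way ANOVA (location, inspector, and their interaction) directly.

      # Sketch: balanced two-way ANOVA for a location x inspector factorial (illustrative data).
      import numpy as np

      rng = np.random.default_rng(2)
      locations, inspectors, reps = 3, 2, 4
      # Simulated interim-check errors: location has a real effect, inspector does not.
      data = (0.5 * np.arange(locations)[:, None, None]
              + rng.normal(0, 0.2, (locations, inspectors, reps)))

      grand = data.mean()
      loc_means = data.mean(axis=(1, 2))
      insp_means = data.mean(axis=(0, 2))
      cell_means = data.mean(axis=2)

      ss_loc = inspectors * reps * np.sum((loc_means - grand) ** 2)
      ss_insp = locations * reps * np.sum((insp_means - grand) ** 2)
      ss_int = reps * np.sum((cell_means - loc_means[:, None] - insp_means[None, :] + grand) ** 2)
      ss_err = np.sum((data - cell_means[..., None]) ** 2)

      df_loc, df_insp = locations - 1, inspectors - 1
      df_int = df_loc * df_insp
      df_err = locations * inspectors * (reps - 1)

      for name, ss, df in [("location", ss_loc, df_loc), ("inspector", ss_insp, df_insp),
                           ("interaction", ss_int, df_int)]:
          print(f"{name:12s} F = {(ss / df) / (ss_err / df_err):6.2f}")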

  13. Design Calculations For NIF Convergent Ablator Experiments

    SciTech Connect

    Olson, R E; Hicks, D G; Meezan, N B; Callahan, D A; Landen, O L; Jones, O S; Langer, S H; Kline, J L; Wilson, D C; Rinderknecht, H; Zylstra, A; Petrasso, R D

    2011-10-25

    The NIF convergent ablation tuning effort is underway. In the early experiments, we have discovered that the design code simulations over-predict the capsule implosion velocity and shock flash ρR (areal density), but under-predict the hohlraum x-ray flux measurements. The apparent inconsistency between the x-ray flux and radiography data implies that there are important unexplained aspects of the hohlraum and/or capsule behavior.

  14. Design Calculations for NIF Convergent Ablator Experiments

    NASA Astrophysics Data System (ADS)

    Olson, R. E.; Callahan, D. A.; Hicks, D. G.; Landen, O. L.; Langer, S. H.; Meezan, N. B.; Spears, B. K.; Widmann, K.; Kline, J. L.; Wilson, D. C.; Petrasso, R. D.; Leeper, R. J.

    2010-11-01

    Design calculations for NIF convergent ablator experiments will be described. The convergent ablator experiments measure the implosion trajectory, velocity, and ablation rate of an x-ray driven capsule and are an important component of the U. S. National Ignition Campaign at NIF. The design calculations are post-processed to provide simulations of the key diagnostics -- 1) Dante measurements of hohlraum x-ray flux and spectrum, 2) streaked radiographs of the imploding ablator shell, 3) wedge range filter measurements of D-He3 proton output spectra, and 4) GXD measurements of the imploded core. The simulated diagnostics will be compared to the experimental measurements to provide an assessment of the accuracy of the design code predictions of hohlraum radiation temperature, capsule ablation rate, implosion velocity, shock flash areal density, and x-ray bang time. Post-shot versions of the design calculations are used to enhance the understanding of the experimental measurements and will assist in choosing parameters for subsequent shots and the path towards optimal ignition capsule tuning. *SNL, LLNL, and LANL are operated under US DOE contracts DE-AC04-94AL85000, DE-AC52-07NA27344, and DE-AC04-94AL85000.

  15. Design calculations for NIF convergent ablator experiments.

    SciTech Connect

    Callahan, Debra; Leeper, Ramon Joe; Spears, B. K.; Zylstra, A.; Seguin, F.; Landen, Otto L.; Petrasso, R. D.; Rinderknecht, H.; Kline, J. L.; Frenje, J.; Wilson, D. C.; Langer, S. H.; Widmann, K.; Meezan, Nathan B.; Hicks, Damien G.; Olson, Richard Edward

    2010-11-01

    Design calculations for NIF convergent ablator experiments will be described. The convergent ablator experiments measure the implosion trajectory, velocity, and ablation rate of an x-ray driven capsule and are an important component of the U. S. National Ignition Campaign at NIF. The design calculations are post-processed to provide simulations of the key diagnostics: (1) Dante measurements of hohlraum x-ray flux and spectrum, (2) streaked radiographs of the imploding ablator shell, (3) wedge range filter measurements of D-He3 proton output spectra, and (4) GXD measurements of the imploded core. The simulated diagnostics will be compared to the experimental measurements to provide an assessment of the accuracy of the design code predictions of hohlraum radiation temperature, capsule ablation rate, implosion velocity, shock flash areal density, and x-ray bang time. Post-shot versions of the design calculations are used to enhance the understanding of the experimental measurements and will assist in choosing parameters for subsequent shots and the path towards optimal ignition capsule tuning.

  16. Design of a water electrolysis flight experiment

    NASA Technical Reports Server (NTRS)

    Lee, M. Gene; Grigger, David J.; Thompson, C. Dean; Cusick, Robert J.

    1993-01-01

    Supply of oxygen (O2) and hydrogen (H2) by electrolyzing water in space will play an important role in meeting the National Aeronautics and Space Administration's (NASA's) needs and goals for future space missions. Both O2 and H2 are envisioned to be used in a variety of processes including crew life support, spacecraft propulsion, extravehicular activity, electrical power generation/storage as well as in scientific experiment and manufacturing processes. The Electrolysis Performance Improvement Concept Study (EPICS) flight experiment described herein is sponsored by NASA Headquarters as a part of the In-Space Technology Experiment Program (IN-STEP). The objective of the EPICS is to further contribute to the improvement of the SFE technology, specifically by demonstrating and validating the SFE electrochemical process in microgravity as well as investigating performance improvements projected possible in a microgravity environment. This paper defines the experiment objective and presents the results of the preliminary design of the EPICS. The experiment will include testing three subscale self-contained SFE units: one containing baseline components, and two units having variations in key component materials. Tests will be conducted at varying current and thermal conditions.

  17. The POLARBEAR Experiment: Design and Characterization

    NASA Astrophysics Data System (ADS)

    Kermish, Zigmund David

    We present the design and characterization of the POLARBEAR experiment. POLARBEAR is a millimeter-wave polarimeter that will measure the Cosmic Microwave Background (CMB) polarization. It was designed to have both the sensitivity and angular resolution to detect the expected B-mode polarization due to gravitational lensing at small angular scales while still enabling a search for the degree scale B-mode polarization caused by inflationary gravitational waves. The instrument utilizes the Huan Tran Telescope (HTT), a 2.5-meter primary mirror telescope, coupled to a unique focal plane of 1,274 antenna-coupled transition-edge sensor (TES) detectors to achieve unprecedented sensitivity from angular scales of the experiment's 4 arcminute beam to several degrees. This dissertation focuses on the design, integration and characterization of the cryogenic receiver for the POLARBEAR instrument. The receiver cools the ~20 cm focal plane to 0.25 Kelvin, with detector readout provided by a digital frequency-multiplexed SQUID system. The POLARBEAR receiver was successfully deployed on the HTT for an engineering run in the Eastern Sierras of California and is currently deployed on Cerro Toco in the Atacama Desert of Chile. We present results from lab tests done to characterize the instrument, results from the engineering run, and preliminary results from Chile.

  18. The design of macromolecular crystallography diffraction experiments

    SciTech Connect

    Evans, Gwyndaf Axford, Danny; Owen, Robin L.

    2011-04-01

    Thoughts about the decisions made in designing macromolecular X-ray crystallography experiments at synchrotron beamlines are presented. The measurement of X-ray diffraction data from macromolecular crystals for the purpose of structure determination is the convergence of two processes: the preparation of diffraction-quality crystal samples on the one hand and the construction and optimization of an X-ray beamline and end station on the other. Like sample preparation, a macromolecular crystallography beamline is geared to obtaining the best possible diffraction measurements from crystals provided by the synchrotron user. This paper describes the thoughts behind an experiment that fully exploits both the sample and the beamline and how these map into everyday decisions that users can and should make when visiting a beamline with their most precious crystals.

  19. JASMINE project Instrument design and centroiding experiment

    NASA Astrophysics Data System (ADS)

    Yano, Taihei; Gouda, Naoteru; Kobayashi, Yukiyasu; Yamada, Yoshiyuki

    JASMINE will study the fundamental structure and evolution of the Milky Way Galaxy. To accomplish these objectives, JASMINE will measure trigonometric parallaxes, positions and proper motions of about 10 million stars with a precision of 10 μarcsec at z = 14 mag. In this paper the instrument design (optics, detectors, etc.) of JASMINE is presented. We also show a CCD centroiding experiment for estimating positions of star images. The experimental result shows that the accuracy of estimated distances has a variance of less than 0.01 pixel.
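
    To make the centroiding step concrete, here is a hedged sketch of a plain intensity-weighted CCD centroid on a simulated star image; the Gaussian point-spread function, noise level, and window size are assumptions, and the JASMINE analysis may use a more refined estimator than this.

      # Sketch: sub-pixel star position from an intensity-weighted centroid (illustrative only).
      import numpy as np

      rng = np.random.default_rng(3)
      true_x, true_y = 7.3, 6.8                          # assumed sub-pixel star position
      yy, xx = np.mgrid[0:15, 0:15]
      psf = np.exp(-((xx - true_x) ** 2 + (yy - true_y) ** 2) / (2 * 1.5 ** 2))
      image = 1000 * psf + rng.normal(0, 2, psf.shape)   # star plus read noise

      w = np.clip(image, 0, None)                        # suppress negative noise before weighting
      cx = (w * xx).sum() / w.sum()
      cy = (w * yy).sum() / w.sum()
      print(f"estimated centroid: ({cx:.3f}, {cy:.3f})   true: ({true_x}, {true_y})")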

  20. Investment casting design of experiment. Final report

    SciTech Connect

    Owens, R.

    1997-10-01

    Specific steps in the investment casting process were analyzed in a designed experiment. The casting's sensitivity to changes in these process steps was experimentally determined. Dimensional and radiographic inspection were used to judge the sensitivity of the casting. Thirty-six castings of different pedigrees were poured and measured. Some of the dimensional inspection was conducted during the processing. It was confirmed that wax fixturing, number of gates, gate location, pour and mold temperature, pour speed, and cooling profile all affected the radiographic quality of the casting. Gate and runner assembly techniques, number of gates, and mold temperature affect the dimensional quality of the casting.

  1. Principal component analysis for designed experiments

    PubMed Central

    2015-01-01

    Background: Principal component analysis is used to summarize matrix data, such as found in transcriptome, proteome or metabolome and medical examinations, into fewer dimensions by fitting the matrix to orthogonal axes. Although this methodology is frequently used in multivariate analyses, it has disadvantages when applied to experimental data. First, the identified principal components have poor generality; since the size and directions of the components are dependent on the particular data set, the components are valid only within the data set. Second, the method is sensitive to experimental noise and bias between sample groups. It cannot reflect the experimental design that is planned to manage the noise and bias; rather, it assigns the same weight and independence to all the samples in the matrix. Third, the resulting components are often difficult to interpret. To address these issues, several options were introduced to the methodology. First, the principal axes were identified using training data sets and shared across experiments. These training data reflect the design of experiments, and their preparation allows noise to be reduced and group bias to be removed. Second, the center of the rotation was determined in accordance with the experimental design. Third, the resulting components were scaled to unify their size unit. Results: The effects of these options were observed in microarray experiments, and showed an improvement in the separation of groups and robustness to noise. The range of scaled scores was unaffected by the number of items. Additionally, unknown samples were appropriately classified using pre-arranged axes. Furthermore, these axes well reflected the characteristics of groups in the experiments. As was observed, the scaling of the components and sharing of axes enabled comparisons of the components beyond experiments. The use of training data reduced the effects of noise and bias in the data, facilitating the physical interpretation of the
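
    The following is a minimal sketch of the central idea above, under illustrative assumptions (the matrix sizes, simulated data, and the choice of two retained axes are made up): the principal axes and the rotation centre are fixed from a designated training set, shared across experiments, and the scores are scaled to a common unit before any further samples are projected onto them.

      # Sketch: PCA axes fitted on training data, then shared with new samples (illustrative only).
      import numpy as np

      rng = np.random.default_rng(4)
      n_train, n_new, p = 30, 10, 200
      train = rng.normal(size=(n_train, p))          # e.g. control-group profiles (assumption)
      new = rng.normal(size=(n_new, p)) + 0.5        # e.g. treated samples (assumption)

      center = train.mean(axis=0)                    # rotation centre taken from the training design
      U, s, Vt = np.linalg.svd(train - center, full_matrices=False)
      axes = Vt[:2]                                  # two shared principal axes

      def scores(X):
          sc = (X - center) @ axes.T
          return sc / s[:2] * np.sqrt(n_train - 1)   # scale so training scores have unit variance

      print("training scores:\n", scores(train)[:3])
      print("new-sample scores:\n", scores(new)[:3])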

  2. Preliminary design of the redundant software experiment

    NASA Technical Reports Server (NTRS)

    Campbell, Roy; Deimel, Lionel; Eckhardt, Dave, Jr.; Kelly, John; Knight, John; Lauterbach, Linda; Lee, Larry; Mcallister, Dave; Mchugh, John

    1985-01-01

    The goal of the present experiment is to characterize the fault distributions of highly reliable software replicates, constructed using techniques and environments which are similar to those used in contemporary industrial software facilities. The fault distributions and their effect on the reliability of fault tolerant configurations of the software will be determined through extensive life testing of the replicates against carefully constructed randomly generated test data. Each detected error will be carefully analyzed to provide insight into its nature and cause. A direct objective is to develop techniques for reducing the intensity of coincident errors, thus increasing the reliability gain which can be achieved with fault tolerance. Data on the reliability gains realized, and the cost of the fault tolerant configurations can be used to design a companion experiment to determine the cost effectiveness of the fault tolerant strategy. Finally, the data and analysis produced by this experiment will be valuable to the software engineering community as a whole because it will provide a useful insight into the nature and cause of hard to find, subtle faults which escape standard software engineering validation techniques and thus persist far into the software life cycle.

  3. Average power laser experiment (APLE) design

    NASA Astrophysics Data System (ADS)

    Parazzoli, C. G.; Rodenburg, R. E.; Dowell, D. H.; Greegor, R. B.; Kennedy, R. C.; Romero, J. B.; Siciliano, J. A.; Tong, K.-O.; Vetter, A. M.; Adamski, J. L.; Pistoresi, D. J.; Shoffstall, D. R.; Quimby, D. C.

    1992-07-01

    We describe the details and the design requirements for the 100 kW CW radio frequency free electron laser at 10 μm to be built at Boeing Aerospace and Electronics Division in Seattle with the collaboration of Los Alamos National Laboratory. APLE is a single-accelerator master-oscillator and power-amplifier (SAMOPA) device. The goal of this experiment is to demonstrate a fully operational RF-FEL at 10 μm with an average power of 100 kW. The approach and wavelength were chosen on the basis of maximum cost effectiveness, including utilization of existing hardware and reasonable risk, and potential for future applications. Current plans call for an initial oscillator power demonstration in the fall of 1994 and full SAMOPA operation by December 1995.

  4. Interim Service ISDN Satellite (ISIS) hardware experiment design for advanced ISDN satellite design and experiments

    NASA Technical Reports Server (NTRS)

    Pepin, Gerard R.

    1992-01-01

    The Interim Service Integrated Services Digital Network (ISDN) Satellite (ISIS) Hardware Experiment Design for Advanced Satellite Designs describes the design of the ISDN Satellite Terminal Adapter (ISTA) capable of translating ISDN protocol traffic into time division multiple access (TDMA) signals for use by a communications satellite. The ISTA connects the Type 1 Network Termination (NT1) via the U-interface on the line termination side of the CPE to the V.35 interface for satellite uplink. In the opposite direction, the same ISTA converts V.35 data back to U-interface data with a simple switch setting.

  5. Design Point for a Spheromak Compression Experiment

    NASA Astrophysics Data System (ADS)

    Woodruff, Simon; Romero-Talamas, Carlos A.; O'Bryan, John; Stuber, James; Darpa Spheromak Team

    2015-11-01

    Two principal issues for the spheromak concept remain to be addressed experimentally: formation efficiency and confinement scaling. We are therefore developing a design point for a spheromak experiment that will be heated by adiabatic compression, utilizing the CORSICA and NIMROD codes as well as analytic modeling, with target parameters R_initial = 0.3 m, R_final = 0.1 m, T_initial = 0.2 keV, T_final = 1.8 keV, n_initial = 10^19 m^-3 and n_final = 10^21 m^-3, with a radial convergence of C = 3. This low convergence differentiates the concept from MTF with C = 10 or more, since the plasma will be held in equilibrium throughout compression. We present results from CORSICA showing the placement of coils and passive structure to ensure stability during compression, and design of the capacitor bank needed to both form the target plasma and compress it. We specify target parameters for the compression in terms of plasma beta, formation efficiency and energy confinement. Work performed under DARPA grant N66001-14-1-4044.

  6. Design and implementation of the STAR experiment's DAQ

    SciTech Connect

    Ljubicic, A. Jr.; Botlo, M.; Heistermann, F.

    1997-12-01

    The STAR experiment is one of the two large detectors currently being built at the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory, Upton, New York, USA. The major issue of STAR's DAQ is the large amount of data that has to be processed as fast as possible. The required data rate is of the order of 90 Gbits/s which has to be processed and scaled down to about 15 MBytes/s and stored to tape or other permanent archiving media. To be able to do so the STAR DAQ uses a custom built ASIC which preprocesses the raw data for later use by a software Level 3 trigger. The Level 3 trigger selects events to be archived depending on physics criteria based upon the particle track information extracted during Level 3 processing. The design presented is a massively parallel multi-processor system which consists of front end microprocessors hierarchically organized within a VME crate system. Each VME crate contains 6 custom made Receiver Boards with 3 Intel I960HD processors per board for a total of 18 processors per crate. The STAR's TPC detector uses 24 such crates and the SVT detector will use 4 crates for a total of 504 microprocessors.

  7. Shear wall experiments and design in Japan

    SciTech Connect

    Park, Y.J.; Hofmayer, C.

    1994-12-01

    This paper summarizes the results of recent survey studies on the available experimental data bases and design codes/standards for reinforced concrete (RC) shear wall structures in Japan. Information related to the seismic design of RC reactor buildings and containment structures was emphasized in the survey. The seismic requirements for concrete structures, particularly those related to shear strength design, are outlined. Detailed descriptions are presented on the development of Japanese shear wall equations, design requirements for containment structures, and ductility requirements.

  8. Distributed Design and Analysis of Computer Experiments

    SciTech Connect

    Doak, Justin

    2002-11-11

    DDACE is a C++ object-oriented software library for the design and analysis of computer experiments. DDACE can be used to generate samples from a variety of sampling techniques. These samples may be used as input to an application code. DDACE also contains statistical tools such as response surface models and correlation coefficients to analyze input/output relationships between variables in an application code. DDACE can generate input values for uncertain variables within a user's application. For example, a user might like to vary a temperature variable as well as some material variables in a series of simulations. Through the series of simulations the user might be looking for optimal settings of parameters based on some user criteria. Or the user may be interested in the sensitivity to input variability shown by an output variable. In either case, the user may provide information about the suspected ranges and distributions of a set of input variables, along with a sampling scheme, and DDACE will generate input points based on these specifications. The input values generated by DDACE and the one or more outputs computed through the user's application code can be analyzed with a variety of statistical methods. This can lead to a wealth of information about the relationships between the variables in the problem. While statistical and mathematical packages may be employed to carry out the analysis on the input/output relationships, DDACE also contains some tools for analyzing the simulation data. DDACE incorporates a software package called MARS (Multivariate Adaptive Regression Splines), developed by Jerome Friedman. MARS is used for generating a spline surface fit of the data. With MARS, a model simplification may be calculated using the input and corresponding output values for the user's application problem. The MARS grid data may be used for generating 3-dimensional response surface plots of the simulation data. DDACE also contains an implementation of an

  9. Distributed Design and Analysis of Computer Experiments

    Energy Science and Technology Software Center (ESTSC)

    2002-11-11

    DDACE is a C++ object-oriented software library for the design and analysis of computer experiments. DDACE can be used to generate samples from a variety of sampling techniques. These samples may be used as input to an application code. DDACE also contains statistical tools such as response surface models and correlation coefficients to analyze input/output relationships between variables in an application code. DDACE can generate input values for uncertain variables within a user's application. For example, a user might like to vary a temperature variable as well as some material variables in a series of simulations. Through the series of simulations the user might be looking for optimal settings of parameters based on some user criteria. Or the user may be interested in the sensitivity to input variability shown by an output variable. In either case, the user may provide information about the suspected ranges and distributions of a set of input variables, along with a sampling scheme, and DDACE will generate input points based on these specifications. The input values generated by DDACE and the one or more outputs computed through the user's application code can be analyzed with a variety of statistical methods. This can lead to a wealth of information about the relationships between the variables in the problem. While statistical and mathematical packages may be employed to carry out the analysis on the input/output relationships, DDACE also contains some tools for analyzing the simulation data. DDACE incorporates a software package called MARS (Multivariate Adaptive Regression Splines), developed by Jerome Friedman. MARS is used for generating a spline surface fit of the data. With MARS, a model simplification may be calculated using the input and corresponding output values for the user's application problem. The MARS grid data may be used for generating 3-dimensional response surface plots of the simulation data. DDACE also contains an implementation
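
    The two records above describe the DDACE workflow in prose only, so here is a hedged sketch of that general workflow in Python rather than DDACE's actual C++ API: Latin hypercube sampling of uncertain inputs, evaluation of a stand-in "application code", and a simple quadratic response-surface fit in place of MARS. The variable ranges, the toy simulation, and all names are assumptions.

      # Sketch: sample inputs, run a stand-in simulation, fit a response surface (illustrative only).
      import numpy as np

      rng = np.random.default_rng(5)

      def latin_hypercube(n, ranges):
          """One stratified sample per interval for each variable, in randomly permuted order."""
          d = len(ranges)
          u = (rng.permuted(np.tile(np.arange(n), (d, 1)), axis=1).T + rng.random((n, d))) / n
          lo, hi = np.array(ranges).T
          return lo + u * (hi - lo)

      # e.g. a temperature and a material property, with suspected ranges (assumptions)
      X = latin_hypercube(50, [(300.0, 400.0), (0.1, 0.5)])

      def application_code(x):                 # stand-in for the user's simulation code
          T, k = x
          return 2.0 + 0.01 * (T - 350.0) - 5.0 * (k - 0.3) ** 2 + rng.normal(0, 0.05)

      y = np.array([application_code(x) for x in X])

      # Quadratic response surface in the two inputs: 1, T, k, T^2, k^2, T*k
      T, k = X[:, 0], X[:, 1]
      M = np.column_stack([np.ones(len(X)), T, k, T ** 2, k ** 2, T * k])
      coef, *_ = np.linalg.lstsq(M, y, rcond=None)
      print("response-surface coefficients:", np.round(coef, 4))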

  10. An Undergraduate Experiment in Alarm System Design.

    ERIC Educational Resources Information Center

    Martini, R. A.; And Others

    1988-01-01

    Describes an experiment involving data acquisition by a computer, digital signal transmission from the computer to a digital logic circuit and signal interpretation by this circuit. The system is being used at the Illinois Institute of Technology. Discusses the fundamental concepts involved. Demonstrates the alarm experiment as it is used in…

  11. Designing Effective Research Experiences for Undergraduates (Invited)

    NASA Astrophysics Data System (ADS)

    Jones Whyte, P.; Dalbotten, D. M.

    2009-12-01

    The undergraduate research experience has been recognized as a valuable component of preparation for graduate study. As competition for spaces in graduate schools becomes keener, students benefit from a formal introduction to the life of a scholar. Over the last twenty years, a model of preparing students for graduate study with the research experience as its base has been refined at the University of Minnesota. The experience includes assignment with a faculty member and a series of seminars that support the experience. The seminars cover topics including academic writing, scholarly literature review, writing of the abstract, research subject protection protocols, GRE test preparation, opportunities to interact with graduate students, preparing the graduate school application, and preparation of a poster to demonstrate the results of the research. The next phase of the process is to determine the role of the undergraduate research experience in the graduate school admission process.

  12. A Photogate Design for Air Track Experiments.

    ERIC Educational Resources Information Center

    Hinrichsen, P. F.

    1988-01-01

    Introduces a photogate arrangement using a photo-reflective sensor for air track experiments. Reports that the sensitivity to sunlight can be eliminated and a mechanically more convenient package produced. Shows the mounting, circuit, and usage of the photogate. (YP)

  13. Statistical design of a uranium corrosion experiment

    SciTech Connect

    Wendelberger, Joanne R; Moore, Leslie M

    2009-01-01

    This work supports an experiment being conducted by Roland Schulze and Mary Ann Hill to study hydride formation, one of the most important forms of corrosion observed in uranium and uranium alloys. The study goals and objectives are described in Schulze and Hill (2008), and the work described here focuses on development of a statistical experiment plan being used for the study. The results of this study will contribute to the development of a uranium hydriding model for use in lifetime prediction models. A parametric study of the effect of hydrogen pressure, gap size and abrasion on hydride initiation and growth is being planned where results can be analyzed statistically to determine individual effects as well as multi-variable interactions. Input to ESC from this experiment will include expected hydride nucleation, size, distribution, and volume on various uranium surface situations (geometry) as a function of age. This study will also address the effect of hydrogen threshold pressure on corrosion nucleation and the effect of oxide abrasion/breach on hydriding processes. Statistical experiment plans provide for efficient collection of data that aids in understanding the impact of specific experiment factors on initiation and growth of corrosion. The experiment planning methods used here also allow for robust data collection accommodating other sources of variation such as the density of inclusions, assumed to vary linearly along the cast rods from which samples are obtained.

  14. Hybrid Rocket Experiment Station for Capstone Design

    NASA Technical Reports Server (NTRS)

    Conley, Edgar; Hull, Bethanne J.

    2012-01-01

    Portable hybrid rocket motors and test stands can be seen in many papers, but none have reported on a design or instrumentation at such a small scale. This hybrid rocket and test stand are designed to be small and portable (suitcase size). The basic apparatus will be used for demonstrations in rocket propulsion. The design had to include all of the hardware needed to operate the hybrid rocket unit (with the exception of the external oxygen tank). The design of this project includes making the correlation between the rocket's thrust and its size, selecting the appropriate transducers (physical size, resolution, range, and cost), compatibility with a laptop analog card, the ease of setup, and its portability.

  15. EXPERIENCES IN DESIGNING SOLVENTS FOR THE ENVIRONMENT

    EPA Science Inventory

    To meet the great need of replacing many harmful solvents commonly used by industry and the public with environmentally benign substitute solvents, the PARIS II solvent design software has been developed. Although the difficulty of successfully finding replacements increases with...

  16. Thermal Characterization of Functionally Graded Materials: Design of Optimum Experiments

    NASA Technical Reports Server (NTRS)

    Cole, Kevin D.

    2003-01-01

    This paper is a study of optimal experiment design applied to the measurement of thermal properties in functionally graded materials. As a first step, a material with linearly-varying thermal properties is analyzed, and several different transient experimental designs are discussed. An optimality criterion, based on sensitivity coefficients, is used to identify the best experimental design. Simulated experimental results are analyzed to verify that the identified best experiment design has the smallest errors in the estimated parameters. This procedure is general and can be applied to design of experiments for a variety of materials.
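
    The optimality criterion mentioned above is commonly a function of the determinant of the information matrix built from the sensitivity coefficients. The sketch below (illustrative only; the two-parameter model and the candidate measurement schedules are assumptions, not the paper's actual designs) ranks two candidate transient designs by a D-optimality score:

```python
import numpy as np

def d_criterion(sensitivities):
    """det(X^T X) for a matrix of sensitivity coefficients (rows = measurement times)."""
    x = np.asarray(sensitivities)
    return np.linalg.det(x.T @ x)

# Hypothetical sensitivities of the measured temperature to two thermal properties,
# evaluated at the measurement times of two candidate transient designs.
design_a = [[0.10, 0.02], [0.40, 0.10], [0.80, 0.30]]
design_b = [[0.05, 0.01], [0.20, 0.05], [0.90, 0.60]]
scores = {"A": d_criterion(design_a), "B": d_criterion(design_b)}
print(scores, "-> prefer", max(scores, key=scores.get))
```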

  17. Designing a successful HMD-based experience

    NASA Technical Reports Server (NTRS)

    Pierce, J. S.; Pausch, R.; Sturgill, C. B.; Christiansen, K. D.; Kaiser, M. K. (Principal Investigator)

    1999-01-01

    For entertainment applications, a successful virtual experience based on a head-mounted display (HMD) needs to overcome some or all of the following problems: entering a virtual world is a jarring experience, people do not naturally turn their heads or talk to each other while wearing an HMD, putting on the equipment is hard, and people do not realize when the experience is over. In the Electric Garden at SIGGRAPH 97, we presented the Mad Hatter's Tea Party, a shared virtual environment experienced by more than 1,500 SIGGRAPH attendees. We addressed these HMD-related problems with a combination of back story, see-through HMDs, virtual characters, continuity of real and virtual objects, and the layout of the physical and virtual environments.

  18. STELLA Experiment: Design and Model Predictions

    SciTech Connect

    Kimura, W. D.; Babzien, M.; Ben-Zvi, I.; Campbell, L. P.; Cline, D. B.; Fiorito, R. B.; Gallardo, J. C.; Gottschalk, S. C.; He, P.; Kusche, K. P.; Liu, Y.; Pantell, R. H.; Pogorelsky, I. V.; Quimby, D. C.; Robinson, K. E.; Rule, D. W.; Sandweiss, J.; Skaritka, J.; van Steenbergen, A.; Steinhauer, L. C.; Yakimenko, V.

    1998-07-05

    The STaged ELectron Laser Acceleration (STELLA) experiment will be one of the first to examine the critical issue of staging the laser acceleration process. The BNL inverse free electron laser (IFEL) will serve as a prebuncher to generate ~1 µm long microbunches. These microbunches will be accelerated by an inverse Cerenkov acceleration (ICA) stage. A comprehensive model of the STELLA experiment is described. This model includes the IFEL prebunching, drift and focusing of the microbunches into the ICA stage, and their subsequent acceleration. The model predictions will be presented, including the results of a system error study to determine the sensitivity to uncertainties in various system parameters.

  19. Hypersonic drone vehicle design: A multidisciplinary experience

    NASA Technical Reports Server (NTRS)

    1988-01-01

    UCLA's Advanced Aeronautic Design group focused their efforts on design problems of an unmanned hypersonic vehicle. It is felt that a scaled hypersonic drone is necessary to bridge the gap between present theory on hypersonics and the future reality of the National Aerospace Plane (NASP) for two reasons: (1) to fulfill a need for experimental data in the hypersonic regime, and (2) to provide a testbed for the scramjet engine, which is to be the primary mode of propulsion for the NASP. The group concentrated on three areas of great concern to NASP design: propulsion, thermal management, and flight systems. Problem solving in these areas was directed toward design of the drone with the idea that the same design techniques could be applied to the NASP. A 70 deg swept double-delta wing configuration, developed in the 1970s at NASA Langley, was chosen as the aerodynamic and geometric model for the drone. This vehicle would be air launched from a B-1 at Mach 0.8 and 48,000 feet, rocket boosted by two internal engines to Mach 10 and 100,000 feet, and allowed to cruise under power of the scramjet engine until burnout. It would then return to base for an unpowered landing. Preliminary energy calculations based on flight requirements give the drone a gross launch weight of 134,000 pounds and an overall length of 85 feet.

  20. Batch sequential designs for computer experiments

    SciTech Connect

    Moore, Leslie M; Williams, Brian J; Loeppky, Jason L

    2009-01-01

    Computer models simulating a physical process are used in many areas of science. Due to the complex nature of these codes it is often necessary to approximate the code, which is typically done using a Gaussian process. In many situations the number of code runs available to build the Gaussian process approximation is limited. When the initial design is small or the underlying response surface is complicated, this can lead to poor approximations of the code output. In order to improve the fit of the model, sequential design strategies must be employed. In this paper we introduce two simple distance-based metrics that can be used to augment an initial design in a batch sequential manner. In addition we propose a sequential updating strategy to an orthogonal array based Latin hypercube sample. We show via various real and simulated examples that the distance metrics and the extension of the orthogonal array based Latin hypercubes work well in practice.
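
    A minimal sketch of one distance-based augmentation of this flavor (not necessarily the metric proposed in the paper; the candidate pool and sizes are hypothetical): each new batch point is chosen from a candidate pool so that it maximizes its minimum distance to the design collected so far.

```python
import numpy as np

def augment_maximin(design, candidates, batch_size):
    """Greedily add candidate points that maximize the minimum distance to the design so far."""
    design = [np.asarray(p) for p in design]
    for _ in range(batch_size):
        dists = [min(np.linalg.norm(c - d) for d in design) for c in candidates]
        design.append(np.asarray(candidates[int(np.argmax(dists))]))
    return np.array(design)

rng = np.random.default_rng(0)
initial = rng.random((5, 2))    # small initial design in [0, 1]^2
pool = rng.random((200, 2))     # candidate pool of possible new runs
print(augment_maximin(initial, pool, batch_size=3).shape)   # (8, 2)
```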

  1. Designing Learning Experiences for Deeper Understanding

    ERIC Educational Resources Information Center

    Stripling, Barbara K.; Harada, Violet H.

    2012-01-01

    Planning is the less visible part of the teaching and learning process; however, it serves as the blueprint for student learning. To conceptualize the unit or project as a holistic learning experience, the authors created the C.L.E.A.R. G.O.A.L.S. guidelines that address the major elements of unit planning. An essential step is identifying the…

  2. Experience in Constructions: Designing a Wall

    ERIC Educational Resources Information Center

    Glenn, Barbara

    1978-01-01

    Viewing a contemporary artist's works to learn about the artist and his/her personal vision is one thing for elementary school students. Adding an actual experience of doing makes the exposure much more alive. Students at Snail Lake Elementary School in Moundsview, Minnesota, viewed a Louise Nevelson exhibit and were inspired to new uses of art…

  3. Principles of Designing Interpretable Optogenetic Behavior Experiments

    ERIC Educational Resources Information Center

    Allen, Brian D.; Singer, Annabelle C.; Boyden, Edward S.

    2015-01-01

    Over the last decade, there has been much excitement about the use of optogenetic tools to test whether specific cells, regions, and projection pathways are necessary or sufficient for initiating, sustaining, or altering behavior. However, the use of such tools can result in side effects that can complicate experimental design or interpretation.…

  4. Hypersonic drone design: A multidisciplinary experience

    NASA Technical Reports Server (NTRS)

    1988-01-01

    Efforts were focused on design problems of an unmanned hypersonic vehicle. It is felt that a scaled hypersonic drone is necessary to bridge the gap between present theory on hypersonics and the future reality of the National Aerospace Plane (NASP) for two reasons: to fulfill a need for experimental data in the hypersonic regime, and to provide a testbed for the scramjet engine which is to be the primary mode of propulsion for the NASP. Three areas of great concern to NASP design were examined: propulsion, thermal management, and flight systems. Problem solving in these areas was directed towards design of the drone with the idea that the same design techniques could be applied to the NASP. A seventy degree swept double delta wing configuration, developed in the 1970s at NASA Langley, was chosen as the aerodynamic and geometric model for the drone. This vehicle would be air-launched from a B-1 at Mach 0.8 and 48,000 feet, rocket boosted by two internal engines to Mach 10 and 100,000 feet, and allowed to cruise under power of the scramjet engine until burnout. It would then return to base for an unpowered landing. Preliminary energy calculations based upon the flight requirements give the drone a gross launch weight of 134,000 lb. and an overall length of 85 feet.

  5. Learning Experience as Transaction: A Framework for Instructional Design

    ERIC Educational Resources Information Center

    Parrish, Patrick E.; Wilson, Brent G.; Dunlap, Joanna C.

    2011-01-01

    This article presents a framework for understanding learning experience as an object for instructional design--as an object for design as well as research and understanding. Compared to traditional behavioral objectives or discrete cognitive skills, the object of experience is more holistic, requiring simultaneous attention to cognition, behavior,…

  6. Physics design options for compact ignition experiments

    SciTech Connect

    Uckan, N.A.

    1985-01-01

    This paper considers the following topics: (1) physics assessments-design and engineering impact, (2) zero-dimensional confinement studies relating to physics requirements and options for ignited plasmas, classes of devices with equivalent performance, and sensitivity to variations in confinement models, and (3) one and one-half dimensional confinement studies relating to dynamic simulations, critical physics issues, startup analyses, and volt-second consumption. (MOW)

  7. Experiment to measure vacuum birefringence: Conceptual design

    NASA Astrophysics Data System (ADS)

    Mueller, Guido; Tanner, David; Doebrich, Babette; Poeld, Jan; Lindner, Axel; Willke, Benno

    2016-03-01

    Vacuum birefringence is another lingering challenge which will soon become accessible to experimental verification. The effect was first calculated by Euler and Heisenberg in 1936 and is these days described as a one-loop correction to the differential index of refraction between light which is polarized parallel and perpendicular to an external magnetic field. Our plan is to realize (and slightly modify) an idea which was originally published by Hall, Ye, and Ma using advanced LIGO and LISA technology and the infrastructure of the ALPS light-shining-through-walls experiment following the ALPS IIc science run. This work is supported by the Deutsche Forschungsgemeinschaft and the Heising-Simons Foundation.

  8. Student designed experiments to learn fluids

    NASA Astrophysics Data System (ADS)

    Stern, Catalina

    2013-11-01

    Lasers and high speed cameras are a wonderful tool to visualize the very complex behavior of fluids, and to help students grasp concepts like turbulence, surface tension and vorticity. In this work we present experiments done by physics students in their senior year at the School of Science of the National University of Mexico as a final project in the continuum mechanics course. Every semester, the students make an oral presentation of their work and videos and images are kept in the web page ``Pasión por los Fluidos''. I acknowledge support from the Physics Department of Facultad de Ciencias, Universidad Nacional Autónoma de México.

  9. Proper battery system design for GAS experiments

    NASA Technical Reports Server (NTRS)

    Calogero, Stephen A.

    1992-01-01

    The purpose of this paper is to help the GAS experimenter to design a battery system that meets mission success requirements while at the same time reducing the hazards associated with the battery system. Lead-acid, silver-zinc and alkaline chemistry batteries will be discussed. Lithium batteries will be briefly discussed with emphasis on back-up power supply capabilities. The hazards associated with different battery configurations will be discussed along with the controls necessary to make the battery system two-fault tolerant.

  10. Design for a High Energy Density Kelvin-Helmholtz Experiment

    SciTech Connect

    Hurricane, O A

    2007-10-29

    While many high energy density physics (HEDP) Rayleigh-Taylor and Richtmyer-Meshkov instability experiments have been fielded as part of basic HEDP and astrophysics studies, not one HEDP Kelvin-Helmholtz (KH) experiment has been successfully performed. Herein, a design for a novel HEDP x-ray driven KH experiment is presented along with supporting radiation-hydrodynamic simulation and theory.

  11. JASMINE Project --Instrument Design and Centroiding Experiment--

    NASA Astrophysics Data System (ADS)

    Yano, T.; Gouda, N.; Kobayashi, Y.; Yamada, Y.; Jasmine Working Group

    JASMINE is the acronym of the Japan Astrometry Satellite Mission for INfrared (z-band, 0.9 micron) Exploration and is planned to be launched around 2015. The main objective of JASMINE is to study the fundamental structure and evolution of the Milky Way Galaxy. Another important objective is to investigate stellar physics. In order to accomplish these objectives, JASMINE will measure trigonometric parallaxes, positions and proper motions of about ten million stars during the observational program with a precision of 10 microarcsec at z = 14 mag. We present the instrument design of JASMINE (optics, detectors, etc.) and techniques for estimating the centroiding of star images to accomplish the objectives. In order to obtain measurements of astrometric parameters with high accuracy, optics with a long focal length and a wide focal plane are required. The Korsch system (a 3-mirror system) is one of the convincing models. However, the center of the field is totally vignetted because of the fold mirror. Therefore we consider an improved Korsch system in which the center of the field is not vignetted. We obtain a diffraction-limited optical design with small distortion. We place dozens of CCD arrays with high quantum efficiency at z-band on the focal plane. This new type of detector is now being developed mainly at the National Astronomical Observatory of Japan. In order to accomplish the objective, we must estimate positions of star images on the CCD array with sub-pixel accuracy. Therefore we need a technique to obtain precise positions of star

  12. Design of a Microgravity Spray Cooling Experiment

    NASA Technical Reports Server (NTRS)

    Baysinger, Kerri M.; Yerkes, Kirk L.; Michalak, Travis E.; Harris, Richard J.; McQuillen, John

    2004-01-01

    An analytical and experimental study was conducted for the application of spray cooling in a microgravity and high-g environment. Experiments were carried out aboard the NASA KC-135 reduced gravity aircraft, which provided the microgravity and high-g environments. In reduced gravity, surface tension flow was observed around the spray nozzle, due to unconstrained liquid in the test chamber and flow reversal at the heat source. A transient analytical model was developed to predict the temperature and the spray heat transfer coefficient within the heated region. Comparison of the experimental transient temperature variation with analytical results showed good agreement for low heat input values. The transient analysis also verified that thermal equilibrium within the heated region could be reached during the 20-25s reduced gravity portion of the flight profile.

  13. The design of macromolecular crystallography diffraction experiments

    PubMed Central

    Evans, Gwyndaf; Axford, Danny; Owen, Robin L.

    2011-01-01

    The measurement of X-ray diffraction data from macromolecular crystals for the purpose of structure determination is the convergence of two processes: the preparation of diffraction-quality crystal samples on the one hand and the construction and optimization of an X-ray beamline and end station on the other. Like sample preparation, a macromolecular crystallography beamline is geared to obtaining the best possible diffraction measurements from crystals provided by the synchrotron user. This paper describes the thoughts behind an experiment that fully exploits both the sample and the beamline and how these map into everyday decisions that users can and should make when visiting a beamline with their most precious crystals. PMID:21460444

  14. [Design of a pedagogical project: collective experience].

    PubMed

    da Silva, Maria Josefina; Araújo, Maria Fátima Maciel; Leitão, Glória da Conceição Mesquita

    2003-01-01

    This study describes the ongoing experience of reforming the pedagogical project of the undergraduate nursing program at the Federal University of Ceará, based on the curricular guidelines approved by CNE/CES in 2001. It describes the process of compliance that professors and students had to go through in order to abide by the proposal introduced by LDB/96 (Basic Guidelines Law/96), as well as the different stages needed to build its referential, conceptual, and philosophical landmarks. The difficulties experienced relate to poor commitment of professors to the process, due to the peculiar situation of the program: many professors retiring and a high number of substitute professors under provisional circumstances, which kept them from long-term commitment. The adoption of a political-pedagogical project aimed at a reform of nursing teaching cannot occur under these conditions. PMID:14699756

  15. Hypersonic Wind Tunnel Calibration Using the Modern Design of Experiments

    NASA Technical Reports Server (NTRS)

    Rhode, Matthew N.; DeLoach, Richard

    2005-01-01

    A calibration of a hypersonic wind tunnel has been conducted using formal experiment design techniques and response surface modeling. Data from a compact, highly efficient experiment were used to create a regression model of the pitot pressure as a function of the facility operating conditions as well as the longitudinal location within the test section. The new calibration utilized far fewer design points than prior experiments, but covered a wider range of the facility's operating envelope while revealing interactions between factors not captured in previous calibrations. A series of points chosen randomly within the design space was used to verify the accuracy of the response model. The development of the experiment design is discussed along with tactics used in the execution of the experiment to defend against systematic variation in the results. Trends in the data are illustrated, and comparisons are made to earlier findings.
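
    For readers new to response surface modeling, the toy sketch below (the factor names, data values, and quadratic model form are assumptions, not the facility's actual calibration) fits pitot pressure as a polynomial in an operating condition and the longitudinal station by ordinary least squares:

```python
import numpy as np

# Hypothetical calibration data: reservoir pressure p0, test-section station x,
# and the measured pitot pressure at each combination.
p0 = np.array([350.0, 350.0, 500.0, 500.0, 650.0, 650.0])
x = np.array([0.0, 1.0, 0.0, 1.0, 0.0, 1.0])
pitot = np.array([10.1, 9.8, 14.6, 14.1, 19.2, 18.5])

# Response surface with an interaction term: b0 + b1*p0 + b2*x + b3*p0*x + b4*p0**2.
X = np.column_stack([np.ones_like(p0), p0, x, p0 * x, p0**2])
coef, *_ = np.linalg.lstsq(X, pitot, rcond=None)
print(coef)
```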

  16. Design and Analysis of a Static Aeroelastic Experiment

    NASA Astrophysics Data System (ADS)

    Hou, Ying-Yu; Yuan, Kai-Hua; Lv, Ji-Nan; Liu, Zi-Qiang

    2016-06-01

    Static aeroelastic experiments are very common in the United States and Russia. The objective of static aeroelastic experiments is to investigate the deformation and loads of an elastic structure in a flow field. Generally speaking, a prerequisite of such an experiment is that the stiffness distribution of the structure is known. This paper describes a method for designing experimental models in the case where the stiffness distribution and boundary conditions of a real aircraft are both uncertain. The stiffness distribution of the structure can be calculated via finite element modeling and simulation, and F141 steel and rigid foam are used to make the elastic model. The design and manufacturing process of static aeroelastic models is presented, a set of experiment models was designed to simulate the stiffness of the designed wings, and a set of experiments was designed to check the results. The test results show that the experimental method can effectively complete the design of the elastic model. This paper introduces the whole process of the static aeroelastic experiment, and the experimental results are analyzed. This work developed a static aeroelasticity experiment technique and established an experiment model targeting the swept wing of a large-aspect-ratio aircraft.

  17. Divertor design for the Tokamak Physics Experiment

    SciTech Connect

    Hill, D.N.; Braams, B.; Brooks, J.N.

    1994-05-01

    In this paper we discuss the present divertor design for the planned TPX tokamak, which will explore the physics and technology of steady-state (1000 s pulses) heat and particle removal in high-confinement (2-4× L-mode), high-beta (βN ≥ 3) divertor plasmas sustained by non-inductive current drive. The TPX device will operate in the double-null divertor configuration, with actively cooled graphite targets forming a deep (0.5 m) slot at the outer strike point. The peak heat flux on the highly tilted (74° from normal), re-entrant targets (tilted to recycle ions back toward the separatrix) will be in the range of 4-6 MW/m² with 18 MW of neutral beams and RF heating power. The combination of active pumping and gas puffing (deuterium plus impurities), along with higher heating power (45 MW maximum), will allow testing of radiative divertor concepts at ITER-like power densities.

  18. Designing Artificial Selection Experiments for Specific Objectives

    PubMed Central

    Bohren, B. B.

    1975-01-01

    The observed genetic gain (ΔP) from selection in a finite population is the possible expected genetic gain E(ΔG) minus the difference in inbreeding depression effects in the selected and control lines. The inbreeding depression can be avoided by crossing the control and selected ♂ and ♀ parents to unrelated mates and summing the observed gains. The possible expected gain will be reduced by an amount D from the predicted gain because of the effects of the genetic limit and random genetic drift, the magnitude of which is a function of effective population size, N. The expected value of D is zero in unselected control populations and in the first generation for selected populations. Therefore, this source of bias can be reduced by increasing N in the selected populations and can be avoided by selecting for a single generation. To obtain observed responses which are unbiased estimates of the predicted response from which to estimate the realized heritability (or regression) in the zero generation, or to test genetic theory based on infinite population size, single-generation selection with many replications would be most efficient. To measure the "total" effect or genetic efficiency of a selection criterion or method, including the effect of different selection intensities, effective population sizes, and space requirements, more than one generation of selection is required to estimate the expected response in breeding values. The efficiency, in the sense of minimum variance, of estimating the expected breeding values at any generation t will decline as the number of generations t increases. The variance of either the estimated mean gain or the regression of gain on selection differential can be reduced more by increasing the number of replicates K than by increasing the number of generations t. Also the general pattern of the response over t can be estimated if the N's are known. Therefore, two- or not more than three-generation selection experiments with many

  19. Recent experience with design and manufacture of cine lenses

    NASA Astrophysics Data System (ADS)

    Thorpe, Michael D.; Dalzell, Kristen E.

    2015-09-01

    Modern cine lenses require a high degree of aberration correction over a large and ever expanding image size. At low to medium volume production levels, these highly corrected designs also require a workable tolerance set and compensation scheme for successful manufacture. In this paper we discuss the design and manufacture of cine lenses with reference to current designs both internal and in the patent literature and some experience in design, tolerancing and manufacturing these lenses in medium volume production.

  20. User experience interaction design for digital educational games

    NASA Astrophysics Data System (ADS)

    Yuan, Jiugen; Zhang, Wenting; Xing, Ruonan

    2014-04-01

    Introducing game elements into education is one of the newest teaching concepts in the field of educational technology: healthy games are used to stimulate and sustain the learner's motivation, improve learning efficiency, and provide the experience of learning by playing. This article first introduces the concepts of digital games and user experience and clarifies the essence of digital games, then constructs a framework for user experience interaction design in digital educational games and offers a design approach for the development of related products, in the hope that digital games will bring continuous innovation to the learning experience.

  1. Design Considerations for Large Mass Ultra-Low Background Experiments

    SciTech Connect

    Aguayo Navarrete, Estanislao; Reid, Douglas J.; Fast, James E.; Orrell, John L.

    2011-07-01

    The objective of this document is to present the designers of the next generation of large-mass, ultra-low background experiments with lessons learned and design strategies from previous experimental work. Design issues, divided by topic into mechanical, thermal and electrical requirements, are addressed. Large-mass low-background experiments have been recognized by the scientific community as appropriate tools to aid in the refinement of the standard model. The design of these experiments is very costly and a rigorous engineering review is required for their success. The extreme conditions that the components of the experiment must withstand (heavy shielding, vacuum/pressure and temperature gradients), in combination with unprecedented noise levels, necessitate engineering guidance to support quality construction and safe operating conditions. Physical properties and analytical results of typical construction materials are presented. Design considerations for achieving ultra-low-noise data acquisition systems are addressed. Five large-mass, low-background conceptual designs for the one-tonne scale germanium experiment are proposed and analyzed. The result is a series of recommendations for future experiment engineering and for the Majorana simulation task group to evaluate the different design approaches.

  2. Conceptual design of liquid droplet radiator shuttle-attached experiment

    NASA Technical Reports Server (NTRS)

    Pfeiffer, Shlomo L.

    1989-01-01

    The conceptual design of a shuttle-attached liquid droplet radiator (LDR) experiment is discussed. The LDR is an advanced, lightweight heat rejection concept that can be used to reject heat from future high-powered space platforms. In the LDR concept, submillimeter-sized droplets are generated, pass through space, radiate heat before they are collected, and are recirculated back to the heat source. The LDR experiment is designed to be attached to the shuttle longeron and integrated into the shuttle bay using standard shuttle/experiment interfaces. Overall power, weight, and data requirements of the experiment are detailed. The conceptual designs of the droplet radiator, droplet collector, and the optical diagnostic system are discussed in detail. Shuttle integration and safety design issues are also discussed.

  3. Electrical design of payload G-534: The Pool Boiling Experiment

    NASA Technical Reports Server (NTRS)

    Francisco, David R.

    1992-01-01

    Payload G-534, the Pool Boiling Experiment (PBE), is a Get Away Special that is scheduled to fly on the shuttle in 1992. This paper will give a brief overall description of the experiment with the main discussion being the electrical design with a detailed description of the power system and interface to the GAS electronics. The batteries used and their interface to the experiment Power Control Unit (PCU) and GAS electronics will be examined. The design philosophy for the PCU will be discussed in detail. The criteria for selection of fuses, relays, power semiconductors and other electrical components along with grounding and shielding policy for the entire experiment will be presented. The intent of this paper is to discuss the use of military tested parts and basic design guidelines to build a quality experiment for minimal additional cost.

  4. 2011 AERA Presidential Address: Designing Resilient Ecologies--Social Design Experiments and a New Social Imagination

    ERIC Educational Resources Information Center

    Gutiérrez, Kris D.

    2016-01-01

    This article is about designing for educational possibilities--designs that in their inception, social organization, and implementation squarely address issues of cultural diversity, social inequality, and robust learning. I discuss an approach to design-based research, social design experiments, that privileges a social scientific inquiry…

  5. Adaptive multibeam phased array design for a Spacelab experiment

    NASA Technical Reports Server (NTRS)

    Noji, T. T.; Fass, S.; Fuoco, A. M.; Wang, C. D.

    1977-01-01

    The parametric tradeoff analyses and design for an Adaptive Multibeam Phased Array (AMPA) for a Spacelab experiment are described. This AMPA Experiment System was designed with particular emphasis to maximize channel capacity and minimize implementation and cost impacts for future austere maritime and aeronautical users, operating with a low gain hemispherical coverage antenna element, low effective radiated power, and low antenna gain-to-system noise temperature ratio.

  6. Apollo experience report: Thermal design of Apollo lunar surface experiments package

    NASA Technical Reports Server (NTRS)

    Harris, R. S., Jr.

    1972-01-01

    The evolution of the thermal design of the Apollo lunar surface experiments package central station from the basic concept to the final flight hardware is discussed, including results of development, prototype, and qualification tests that were used to verify that the flight hardware would operate adequately on the lunar surface. In addition, brief discussions of the thermal design of experiments included in the experiments package are presented. The flight thermal performance is compared with analytical results and thermal-vacuum test results, and design modifications for future lunar surface experiment packages are presented.

  7. EXPERIMENTAL DESIGN AND INSTRUMENTATION FOR A FIELD EXPERIMENT

    EPA Science Inventory

    This report concerns the design of a field experiment for a military setting in which the effects of carbon monoxide on neurobehavioral variables are to be studied. A field experiment is distinguished from a survey by the fact that independent variables are manipulated, just as in t...

  8. Shuttle wave experiments. [space plasma investigations: design and instrumentation

    NASA Technical Reports Server (NTRS)

    Calvert, W.

    1976-01-01

    Wave experiments on shuttle are needed to verify dispersion relations, to study nonlinear and exotic phenomena, to support other plasma experiments, and to test engineering designs. Techniques based on coherent detection and bistatic geometry are described. New instrumentation required to provide modules for a variety of missions and to incorporate advanced signal processing and control techniques is discussed. An experiment for Z to 0 coupling is included.

  9. Teaching Optimal Design of Experiments Using a Spreadsheet

    ERIC Educational Resources Information Center

    Goos, Peter; Leemans, Herlinde

    2004-01-01

    In this paper, we present an interactive teaching approach to introduce the concept of optimal design of experiments to students. Our approach is based on the use of spreadsheets. One advantage of this approach is that no complex mathematical theory is needed nor that any design construction algorithm has to be discussed at the introductory stage.…

  10. Building a Framework for Engineering Design Experiences in High School

    ERIC Educational Resources Information Center

    Denson, Cameron D.; Lammi, Matthew

    2014-01-01

    In this article, Denson and Lammi put forth a conceptual framework that will help promote the successful infusion of engineering design experiences into high school settings. When considering a conceptual framework of engineering design in high school settings, it is important to consider the complex issue at hand. For the purposes of this…

  11. Investigating a Method of Scaffolding Student-Designed Experiments

    ERIC Educational Resources Information Center

    Morgan, Kelly; Brooks, David W.

    2012-01-01

    The process of designing an experiment is a difficult one. Students often struggle to perform such tasks as the design process places a large cognitive load on students. Scaffolding is the process of providing support for a student to allow them to complete tasks they would otherwise not have been able to complete. This study sought to investigate…

  12. Thinking about "Design Thinking": A Study of Teacher Experiences

    ERIC Educational Resources Information Center

    Retna, Kala S.

    2016-01-01

    Schools are continuously looking for new ways of enhancing student learning to equip students with skills that would enable them to cope with twenty-first century demands. One promising approach focuses on design thinking. This study examines teachers' perceptions, experiences and challenges faced in adopting design thinking. There is a lack of…

  13. Recreation Programming: Designing Leisure Experiences. 5th Edition

    ERIC Educational Resources Information Center

    Rossman, J. Robert; Schlatter, Barbara Elwood

    2008-01-01

    Originally published in 1989, "Recreation Programming: Designing Leisure Experiences" has become a standard in the park, recreation, and leisure service industry. This title has been used to teach beginning and experienced programmers in over 100 higher-education institutions, both nationally and internationally. Designed in a user-friendly…

  14. Factorial Design: An Eight Factor Experiment Using Paper Helicopters

    NASA Technical Reports Server (NTRS)

    Kozma, Michael

    1996-01-01

    The goal of this paper is to present the analysis of the multi-factor experiment (factorial design) conducted in EG490, Junior Design at Loyola College in Maryland. The discussion of this paper concludes the experimental analysis and ties the individual class papers together.
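
    As background on multi-factor experiments of this kind, the sketch below generates a 16-run, two-level design for eight factors using a standard 2^(8-4) fraction (an assumption for illustration; it is not necessarily the design used in the class project):

```python
import itertools
import numpy as np

# Full two-level factorial in the four base factors A, B, C, D (coded -1 / +1).
base = np.array(list(itertools.product([-1, 1], repeat=4)))
A, B, C, D = base.T

# Generators for the remaining factors: E = ABC, F = ABD, G = ACD, H = BCD.
E, F, G, H = A * B * C, A * B * D, A * C * D, B * C * D
design = np.column_stack([A, B, C, D, E, F, G, H])
print(design.shape)   # (16, 8): sixteen helicopter builds covering eight factors
```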

  15. Working Theory into and out of Design Experiments

    ERIC Educational Resources Information Center

    Palincsar, Annemarie Sullivan

    2005-01-01

    In this response, I advocate for the value of considering theory in the design-based research that Gersten describes in Behind the Scenes of an Intervention Research Study. I argue that such an emphasis: is consistent with the literature on design experiments, is integral to advancing knowledge building within domains, serves to advance the work…

  16. What Kind of Creature Is a Design Experiment?

    ERIC Educational Resources Information Center

    Gorard, Stephen; Roberts, Karen; Taylor, Chris

    2004-01-01

    This article considers the emerging method of design experimentation, and its developing use in educational research. It considers the extent to which design experiments are different from other, more established, methods and the extent to which elements of established methods can be adapted for use in conjunction with them. One major issue to be…

  17. On Design Experiment Teaching in Engineering Quality Cultivation

    ERIC Educational Resources Information Center

    Chen, Xiao

    2008-01-01

    A design experiment is one designed and conducted by students independently, and it is an important method for cultivating students' comprehensive quality. According to the development and requirements of experimental teaching, this article carries out a study and analysis of the purpose, significance, denotation, connotation and…

  18. Design of spatial experiments: Model fitting and prediction

    SciTech Connect

    Fedorov, V.V.

    1996-03-01

    The main objective of the paper is to describe and develop model oriented methods and algorithms for the design of spatial experiments. Unlike many other publications in this area, the approach proposed here is essentially based on the ideas of convex design theory.

  19. Thermal design, analysis and testing of the Halogen Occultation Experiment

    NASA Technical Reports Server (NTRS)

    Foss, Richard A.; Smith, Dewey M.

    1987-01-01

    This paper briefly introduces the Halogen Occultation Experiment (HALOE) and describes the thermal requirements in some detail. The thermal design of the HALOE is described, together with the design process and the analytical techniques used to arrive at this design. The flight hardware has undergone environmental testing in a thermal vacuum chamber to validate the thermal design. The HALOE is a unique problem in thermal control due to its variable solar loading, its extremely sensitive optical components and the high degree of pointing accuracy required. This paper describes the flight hardware, the design process and its verification.

  20. Advances in Experiment Design for High Performance Aircraft

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    1998-01-01

    A general overview and summary of recent advances in experiment design for high performance aircraft is presented, along with results from flight tests. General theoretical background is included, with some discussion of various approaches to maneuver design. Flight test examples from the F-18 High Alpha Research Vehicle (HARV) are used to illustrate applications of the theory. Input forms are compared using Cramer-Rao bounds for the standard errors of estimated model parameters. Directions for future research in experiment design for high performance aircraft are identified.
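
    As a reminder of how the Cramer-Rao comparison works, the sketch below (hypothetical sensitivity values and noise level, not the HARV flight data) computes the lower bounds on parameter standard errors from the Fisher information matrix:

```python
import numpy as np

def cramer_rao_bounds(sensitivity, noise_std):
    """Lower bounds on parameter standard errors from output sensitivities and noise level."""
    S = np.asarray(sensitivity)
    fisher = S.T @ S / noise_std**2
    return np.sqrt(np.diag(np.linalg.inv(fisher)))

# Hypothetical sensitivities of a measured response to two model parameters,
# sampled at four time points during a maneuver.
S = [[0.2, 0.1], [0.5, 0.3], [0.9, 0.2], [1.2, 0.6]]
print(cramer_rao_bounds(S, noise_std=0.05))
```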

  1. Vestibular Function Research (VFR) experiment. Phase B: Design definition study

    NASA Technical Reports Server (NTRS)

    1978-01-01

    The Vestibular Functions Research (VFR) Experiment was established to investigate the neurosensory and related physiological processes believed to be associated with the space flight nausea syndrome and to develop logical means for its prediction, prevention and treatment. The VFR Project consists of ground and spaceflight experimentation using frogs as specimens. The Phase B Preliminary Design Study provided for the preliminary design of the experiment hardware, preparation of performance and hardware specifications and a Phase C/D development plan, establishment of STS (Space Transportation System) interfaces and mission operations, and the study of a variety of hardware, experiment and mission options. The study consists of three major tasks: (1) mission mode trade-off; (2) conceptual design; and (3) preliminary design.

  2. Design of Orion Soil Impact Study using the Modern Design of Experiments

    NASA Technical Reports Server (NTRS)

    DeLoach, Richard

    2010-01-01

    Two conventional One Factor At a Time (OFAT) test matrices under consideration for an Orion Landing System subscale soil impact study are reviewed. Certain weaknesses in the designs, systemic to OFAT experiment designs generally, are identified. An alternative test matrix is proposed that is based in the Modern Design of Experiments (MDOE), which achieves certain synergies by combining the original two test matrices into one. The attendant resource savings are quantified and the impact on uncertainty is discussed.

  3. Design of Experiments, Model Calibration and Data Assimilation

    SciTech Connect

    Williams, Brian J.

    2014-07-30

    This presentation provides an overview of emulation, calibration and experiment design for computer experiments. Emulation refers to building a statistical surrogate from a carefully selected and limited set of model runs to predict unsampled outputs. The standard kriging approach to emulation of complex computer models is presented. Calibration refers to the process of probabilistically constraining uncertain physics/engineering model inputs to be consistent with observed experimental data. An initial probability distribution for these parameters is updated using the experimental information. Markov chain Monte Carlo (MCMC) algorithms are often used to sample the calibrated parameter distribution. Several MCMC algorithms commonly employed in practice are presented, along with a popular diagnostic for evaluating chain behavior. Space-filling approaches to experiment design for selecting model runs to build effective emulators are discussed, including Latin Hypercube Design and extensions based on orthogonal array skeleton designs and imposed symmetry requirements. Optimization criteria that further enforce space-filling, possibly in projections of the input space, are mentioned. Designs to screen for important input variations are summarized and used for variable selection in a nuclear fuels performance application. This is followed by illustration of sequential experiment design strategies for optimization, global prediction, and rare event inference.
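
    As a concrete illustration of the MCMC sampling step mentioned above (a sketch under simplifying assumptions: a Gaussian likelihood, a uniform prior, and a toy one-parameter model, none of which come from the presentation itself), a random-walk Metropolis sampler might look like this:

```python
import numpy as np

rng = np.random.default_rng(0)
data = np.array([1.9, 2.1, 2.0, 2.2])   # hypothetical experimental observations
model = lambda theta: theta              # toy "simulator": output equals the parameter

def log_post(theta, sigma=0.1):
    if not 0.0 < theta < 5.0:            # uniform prior on (0, 5)
        return -np.inf
    return -0.5 * np.sum((data - model(theta))**2) / sigma**2

theta, samples = 1.0, []
for _ in range(5000):
    prop = theta + 0.1 * rng.standard_normal()
    if np.log(rng.random()) < log_post(prop) - log_post(theta):
        theta = prop                     # accept the proposed move
    samples.append(theta)
print(np.mean(samples[1000:]))           # posterior mean after burn-in
```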

  4. Investigating a Method of Scaffolding Student-Designed Experiments

    NASA Astrophysics Data System (ADS)

    Morgan, Kelly; Brooks, David W.

    2012-08-01

    The process of designing an experiment is a difficult one. Students often struggle to perform such tasks as the design process places a large cognitive load on students. Scaffolding is the process of providing support for a student to allow them to complete tasks they would otherwise not have been able to complete. This study sought to investigate backwards-design, one form of scaffolding the experimental design process for students. Students were guided through the design process in a backwards manner (designing the results section first and working backwards through typical report components to the materials and safety sections). The use of reflective prompts as a possible scaffold for metacognitive processes was also studied. Scaffolding was in the form of a computer application built specifically for this purpose. Four versions of the computer application were randomly assigned to 102 high school chemistry students, and students were asked to design an experiment, producing a report. The use of backwards-design scaffolding resulted in significantly higher performance on lab reports. The addition of reflective prompts reduced the effect of backwards-design scaffolding in lower-level students.

  5. Structural Design Feasibility Study for the Global Climate Experiment

    SciTech Connect

    Lewin,K.F.; Nagy, J.

    2008-12-01

    Neon, Inc. is proposing to establish a Global Change Experiment (GCE) Facility to increase our understanding of how ecological systems differ in their vulnerability to changes in climate and other relevant global change drivers, as well as provide the mechanistic basis for forecasting ecological change in the future. The experimental design was initially envisioned to consist of two complementary components; (A) a multi-factor experiment manipulating CO{sub 2}, temperature and water availability and (B) a water balance experiment. As the design analysis and cost estimates progressed, it became clear that (1) the technical difficulties of obtaining tight temperature control and maintaining elevated atmospheric carbon dioxide levels within an enclosure were greater than had been expected and (2) the envisioned study would not fit into the expected budget envelope if this was done in a partially or completely enclosed structure. After discussions between NEON management, the GCE science team, and Keith Lewin, NEON, Inc. requested Keith Lewin to expand the scope of this design study to include open-field exposure systems. In order to develop the GCE design to the point where it can be presented within a proposal for funding, a feasibility study of climate manipulation structures must be conducted to determine design approaches and rough cost estimates, and to identify advantages and disadvantages of these approaches including the associated experimental artifacts. NEON, Inc requested this design study in order to develop concepts for the climate manipulation structures to support the NEON Global Climate Experiment. This study summarizes the design concepts considered for constructing and operating the GCE Facility and their associated construction, maintenance and operations costs. Comparisons and comments about experimental artifacts, construction challenges and operational uncertainties are provided to assist in selecting the final facility design. The overall goal

  6. Transforming Inclusion: Designing in the Experience of Greater Technological Possibility.

    PubMed

    Bridge, Catherine; Demirbilek, Oya; Mintzes, Alicia

    2016-01-01

    Universal Design seeks to contribute to the sustainability and inclusivity of communities and co-design and participatory methods are a critical tool in this evolution. The fact that technology permeates our society is undeniable and the form and materials that technology takes in turn shape the basics of human life such as being able to shower and toilet oneself. In contrast, the various existing approaches to co-design have very different sorts of metaphysical, epistemological and normative assumptions behind them. As a result, design has recognised a set of problems surrounding the position of the "user" in design innovation. Additionally, there are many different perspectives on technology and the role of technology in co-design methods. Consequently, there are a number of different ways of conceiving of the "problem" of integrating technologies into co-design methods. Traditionally, participatory design has been viewed as merely the insertion of a more public dialog of the potential target market within technological design practices. Our research indicates that most if not all co-designers rely on their own personal and collective knowledge and experience and that if this is not actively explored as a part of a co-design methodology that both participation and innovation will be less than hoped for. For instance, assuming only known fixtures, fittings with current codes and standards is unlikely to result in product innovation. PMID:27534298

  7. Conceptual design for scaled truss antenna flight experiment

    NASA Technical Reports Server (NTRS)

    Lee, W. H.

    1984-01-01

    The conceptual design for a scaled truss antenna structures experiment program (STASEP) is presented. The hardware analysis of the scaled truss antenna structure (STAS) was performed with the Interactive Design and Evaluation of Advanced Spacecraft (IDEAS) computer-aided, interactive design and analysis program. Four STAS's were designed to be launched by the Shuttle, tested by using the space technology experiments platform (STEP) and space transportation system (STS), and then free flown in short-lifetime orbits. Data were gathered on deployment, structural characteristics, geometric accuracies, thermal performance, and drag and lifetime as an orbiting spacecraft. Structural and thermal properties were determined for the STAS, including mass properties, thermal loading, structural natural frequencies, and mode shapes. The necessary analysis, scaling, and ground testing are discussed.

  8. Photon detection system designs for the Deep Underground Neutrino Experiment

    NASA Astrophysics Data System (ADS)

    Whittington, D.

    2016-05-01

    The Deep Underground Neutrino Experiment (DUNE) will be a premier facility for exploring long-standing questions about the boundaries of the standard model. Acting in concert with the liquid argon time projection chambers underpinning the far detector design, the DUNE photon detection system will capture ultraviolet scintillation light in order to provide valuable timing information for event reconstruction. To maximize the active area while maintaining a small photocathode coverage, the experiment will utilize a design based on plastic light guides coated with a wavelength-shifting compound, along with silicon photomultipliers, to collect and record scintillation light from liquid argon. This report presents recent preliminary performance measurements of this baseline design and several alternative designs which promise significant improvements in sensitivity to low-energy interactions.

  9. Photon Detection System Designs for the Deep Underground Neutrino Experiment

    SciTech Connect

    Whittington, Denver

    2015-11-19

    The Deep Underground Neutrino Experiment (DUNE) will be a premier facility for exploring long-standing questions about the boundaries of the standard model. Acting in concert with the liquid argon time projection chambers underpinning the far detector design, the DUNE photon detection system will capture ultraviolet scintillation light in order to provide valuable timing information for event reconstruction. To maximize the active area while maintaining a small photocathode coverage, the experiment will utilize a design based on plastic light guides coated with a wavelength-shifting compound, along with silicon photomultipliers, to collect and record scintillation light from liquid argon. This report presents recent preliminary performance measurements of this baseline design and several alternative designs which promise significant improvements in sensitivity to low-energy interactions.

  10. Preliminary Design Program: Vapor Compression Distillation Flight Experiment Program

    NASA Technical Reports Server (NTRS)

    Schubert, F. H.; Boyda, R. B.

    1995-01-01

    This document provides a description of the results of a program to prepare a preliminary design of a flight experiment to demonstrate the function of a Vapor Compression Distillation (VCD) Wastewater Processor (WWP) in microgravity. This report describes the test sequence to be performed and the hardware, control/monitor instrumentation, and software designs prepared to perform the defined tests. The purpose of the flight experiment is to significantly reduce the technical and programmatic risks associated with implementing a VCD-based WWP on board the International Space Station Alpha.

  11. Linear design considerations for TO-10 candidate experiment

    SciTech Connect

    Atchison, Walter A; Rousculp, Christopher L

    2011-01-12

    As part of the LANL/VNIIEF collaboration, a high-velocity cylindrical liner driven Hugoniot experiment is being designed to be driven by a VNIIEF Disk Explosive Magnetic (flux compression) Generator (DEMG). Several variations in drive current and liner thickness have been proposed. This presentation will describe the LANL 1D and 2D simulations used to evaluate those designs. The presentation will also propose an analysis technique to assess a high-current drive system's ability to stably and optimally drive a cylindrical aluminum liner for this type of experiment.

  12. Design of a microwave calorimeter for the microwave tokamak experiment

    SciTech Connect

    Marinak, M.

    1988-10-07

    The initial design of a microwave calorimeter for the Microwave Tokamak Experiment is presented. The design is optimized to measure the refraction and absorption of millimeter rf microwaves as they traverse the toroidal plasma of the Alcator C tokamak. Techniques utilized can be adapted for use in measuring high intensity pulsed output from a microwave device in an environment of ultra high vacuum, intense fields of ionizing and non-ionizing radiation and intense magnetic fields. 16 refs.

  13. A Bubble Mixture Experiment Project for Use in an Advanced Design of Experiments Class

    ERIC Educational Resources Information Center

    Steiner, Stefan H.; Hamada, Michael; White, Bethany J.Giddings; Kutsyy, Vadim; Mosesova, Sofia; Salloum, Geoffrey

    2007-01-01

    This article gives an example of how student-conducted experiments can enhance a course in the design of experiments. We focus on a project whose aim is to find a good mixture of water, soap and glycerin for making soap bubbles. This project is relatively straightforward to implement and understand. At its most basic level the project introduces…
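    As a rough illustration of the record above (not the authors' actual project design), the Python sketch below enumerates a simplex-lattice candidate set for a three-component mixture such as water, soap, and glycerin, where blend proportions must sum to one; the lattice spacing is an assumed illustrative choice.

        # Hypothetical sketch: candidate blends for a three-component mixture
        # experiment lie on the simplex (proportions sum to 1), so a simple
        # simplex-lattice enumeration replaces the usual factorial grid.
        from itertools import product
        from fractions import Fraction

        def simplex_lattice(n_components=3, m=4):
            """All blends whose proportions are multiples of 1/m and sum to 1."""
            levels = [Fraction(i, m) for i in range(m + 1)]
            return [p for p in product(levels, repeat=n_components) if sum(p) == 1]

        for water, soap, glycerin in simplex_lattice():
            print(f"water={float(water):.2f}  soap={float(soap):.2f}  glycerin={float(glycerin):.2f}")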

  14. Designing a Hybrid Laminar-Flow Control Experiment: The CFD-Experiment Connection

    NASA Technical Reports Server (NTRS)

    Streett, C. L.

    2003-01-01

    The NASA/Boeing hybrid laminar flow control (HLFC) experiment, designed during 1993-1994 and conducted in the NASA LaRC 8-foot Transonic Pressure Tunnel in 1995, utilized computational fluid dynamics and numerical simulation of complex fluid mechanics to an unprecedented extent for the design of the test article and measurement equipment. CFD was used in: the design of the test wing, which was carried from definition of desired disturbance growth characteristics through to the final airfoil shape that would produce those growth characteristics; the design of the suction-surface perforation pattern that produced enhanced crossflow-disturbance growth; and the design of the hot-wire traverse system that produced minimal influence on measured disturbance growth. These and other aspects of the design of the test are discussed, after the historical and technical context of the experiment is described.

  15. Design and performance of a Cryogenic Heat Pipe Experiment (CRYOHP)

    NASA Technical Reports Server (NTRS)

    Beam, Jerry; Brennan, Patrick J.; Bello, Mel

    1992-01-01

    The Cryogenic Heat Pipe Experiment which is designed to demonstrate the thermal performance of two different axially grooved oxygen heat pipes in microgravity is discussed. The CRYOHP is manifested for flight aboard STS-53. The first heat pipe design is based on an extrapolated 0-g transport capability of about 20 W-m with oxygen in the range of 80-100 K. The second heat pipe design permits 0-g 'dry-out' in the CRYOHP and offers improved ground testability for 1-g correlation.

  16. Design and performance of a Cryogenic Heat Pipe Experiment (CRYOHP)

    SciTech Connect

    Beam, J.; Brennan, P.J.; Bello, M. (OAO Corp., Greenbelt, MD; Aerospace Corp., Los Angeles, CA)

    1992-07-01

    The Cryogenic Heat Pipe Experiment which is designed to demonstrate the thermal performance of two different axially grooved oxygen heat pipes in microgravity is discussed. The CRYOHP is manifested for flight aboard STS-53. The first heat pipe design is based on an extrapolated 0-g transport capability of about 20 W-m with oxygen in the range of 80-100 K. The second heat pipe design permits 0-g 'dry-out' in the CRYOHP and offers improved ground testability for 1-g correlation. 5 refs.

  17. Preliminary design of two Space Shuttle fluid physics experiments

    NASA Technical Reports Server (NTRS)

    Gat, N.; Kropp, J. L.

    1984-01-01

    The mid-deck lockers of the STS and the requirements for operating an experiment in this region are described. The designs of the surface tension induced convection and the free surface phenomenon experiments use a two-locker volume with an experiment-unique structure as a housing. A manual mode is developed for the Surface Tension Induced Convection experiment. The fluid is maintained in an accumulator pre-flight. To begin the experiment, a pressurized gas drives the fluid into the experiment container. The fluid is an inert silicone oil, and the container material is selected to be compatible with it. A wound-wire heater, located axisymmetrically above the fluid, can deliver three wattages to a spot on the fluid surface; these wattages range from 1 to 15 watts. Fluid flow is observed through the motion of particles in the fluid. A 5 mW He/Ne laser illuminates the container, and scattered light is recorded by a 35 mm camera. The free surface phenomena experiment consists of a trapezoidal cell which is filled from the bottom. The fluid is photographed at high speed using a 35 mm camera that includes the entire cell length in the field of view. The assembly can incorporate four cells in one flight. For each experiment, an electronics block diagram is provided. A control panel concept is given for the surface tension induced convection experiment. Both experiments are within the mid-deck locker weight and center-of-gravity limits.

  18. Optimizing an experimental design for an electromagnetic experiment

    NASA Astrophysics Data System (ADS)

    Roux, Estelle; Garcia, Xavier

    2013-04-01

    Most geophysical studies focus on data acquisition and analysis, but another aspect gaining importance is the discussion of how to acquire suitable datasets. This can be done through the design of an optimal experiment. Optimizing an experimental design implies a compromise between maximizing the information we get about the target and reducing the cost of the experiment, considering a wide range of constraints (logistical, financial, experimental, etc.). We are currently developing a method to design an optimal controlled-source electromagnetic (CSEM) experiment to detect a potential CO2 reservoir and monitor this reservoir during and after CO2 injection. Our statistical algorithm combines the use of linearized inverse theory (to evaluate the quality of a given design via the objective function) with stochastic optimization methods such as genetic algorithms (to examine a wide range of possible surveys). The particularity of our method is that it uses a multi-objective genetic algorithm that searches for designs that fit several objective functions simultaneously. One main advantage of this kind of technique for designing an experiment is that it does not require the acquisition of any data and can thus easily be applied before any geophysical survey. Our new experimental design algorithm has been tested with a realistic one-dimensional resistivity model of the Earth in the region of study (the northern Spain CO2 sequestration test site). We show that a small number of well-distributed observations has the potential to resolve the target. This simple test also points out the importance of a well-chosen objective function. Finally, in the context of the CO2 sequestration that motivates this study, we might be interested in maximizing the information we get about the reservoir layer. In that case, we show how the combination of two different objective functions considerably improves its resolution.
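    The abstract's coupling of an objective function with a stochastic search can be illustrated, in a deliberately simplified single-objective form, by the sketch below; the receiver-spacing objective is only a placeholder standing in for a measure derived from linearized inverse theory, and all names and parameter values are invented.

        # Illustrative sketch only (not the authors' algorithm): a toy genetic
        # search over receiver positions on a 1-D survey line, scoring each
        # layout with a placeholder objective that rewards even coverage.
        import random

        N_RECEIVERS, LINE_KM = 5, 20.0

        def objective(layout):
            xs = sorted(layout)
            gaps = [b - a for a, b in zip(xs, xs[1:])]
            return min(gaps) if gaps else 0.0      # reward well-spread stations

        def evolve(pop_size=40, generations=60, mutation=0.2):
            pop = [[random.uniform(0, LINE_KM) for _ in range(N_RECEIVERS)]
                   for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=objective, reverse=True)
                parents = pop[: pop_size // 2]
                children = []
                while len(children) < pop_size - len(parents):
                    a, b = random.sample(parents, 2)
                    child = [random.choice(pair) for pair in zip(a, b)]   # uniform crossover
                    if random.random() < mutation:
                        child[random.randrange(N_RECEIVERS)] = random.uniform(0, LINE_KM)
                    children.append(child)
                pop = parents + children
            return max(pop, key=objective)

        print(sorted(round(x, 1) for x in evolve()))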

  19. Experiences of Computer Science Curriculum Design: A Phenomenological Study

    ERIC Educational Resources Information Center

    Sloan, Arthur; Bowe, Brian

    2015-01-01

    This paper presents a qualitative study of 12 computer science lecturers' experiences of curriculum design of several degree programmes during a time of transition from year-long to semesterised courses, due to institutional policy change. The background to the study is outlined, as are the reasons for choosing the research methodology. The main…

  20. From Content to Context: Videogames as Designed Experience

    ERIC Educational Resources Information Center

    Squire, Kurt

    2006-01-01

    Interactive immersive entertainment, or videogame playing, has emerged as a major entertainment and educational medium. As research and development initiatives proliferate, educational researchers might benefit by developing more grounded theories about them. This article argues for framing game play as a "designed experience." Players'…

  1. The Design of Learning Experiences: A Connection to Physical Environments.

    ERIC Educational Resources Information Center

    Stueck, Lawrence E.; Tanner, C. Kenneth

    The school environment must create a rich, beautiful, dynamic, meaningful experience for students to learn; however, architects, school boards, and the state focus almost exclusively only on the building when making design decisions. This document lists specific aspects to developing a visionary campus: one that provides a three-dimensional…

  2. Conceptual Design of the Harbin Reconnection eXperiment (HRX)

    NASA Astrophysics Data System (ADS)

    Mao, Aohua; E, Peng; Wang, Xiaogang; Ji, Hantao; Ren, Yang

    2015-11-01

    A new terrella device, called the Space Environment Simulation and Research Infrastructure (SESRI), is under construction at the Harbin Institute of Technology, and the Harbin Reconnection eXperiment (HRX) system is one of its most important components. The goal of the HRX design is to provide a unique platform for studying reconnection processes relevant to those in the magnetopause and magnetotail. Most existing terrella experiments have focused on global phenomena, e.g. the bow shock, in either linear or toroidal geometry, conditions typically very different from those in magnetosphere plasmas. The new HRX regime explores both local and global reconnection dynamics by driving reconnection with a unique set of coils in a dipole magnetic field configuration, making it possible to investigate a range of important reconnection issues in magnetosphere geometry. The design of the HRX device approximately follows the Vlasov similarity laws between the laboratory plasma of the device and the magnetosphere plasma in order to match local reconnection dynamics. The motivation, the design criteria for the HRX experiments, and the preliminary experiment proposal will be discussed.

  3. Designing Undergraduate Research Experiences: A Multiplicity of Options

    NASA Astrophysics Data System (ADS)

    Manduca, C. A.

    2001-12-01

    Research experiences for undergraduate students can serve many goals including: developing student understanding of the process of science; providing opportunities for students to develop professional skills or test career plans; completing publishable research; enabling faculty professional development; or enhancing the visibility of a science program. The large range of choices made in the design of an undergraduate research program or opportunity must reflect the goals of the program, the needs and abilities of the students and faculty, and the available resources including both time and money. Effective program design, execution, and evaluation can all be enhanced if the goals of the program are clearly articulated. Student research experiences can be divided into four components: 1) defining the research problem; 2) developing the research plan or experiment design; 3) collecting and interpreting data, and 4) communicating results. In each of these components, the program can be structured in a wide variety of ways and students can be given more or less guidance or freedom. While a feeling of ownership of the research project appears to be very important, examples of successful projects displaying a wide range of design decisions are available. Work with the Keck Geology Consortium suggests that four strategies can enhance the likelihood of successful student experiences: 1) students are well-prepared for research experience (project design must match student preparation); 2) timelines and events are structured to move students through intermediate goals to project completion; 3) support for the emotional, financial, academic and technical challenges of a research project is in place; 4) strong communications between students and faculty set clear expectations and enable mid-course corrections in the program or project design. Creating a research culture for the participants or embedding a project in an existing research culture can also assist students in

  4. Reduction of animal use: experimental design and quality of experiments.

    PubMed

    Festing, M F

    1994-07-01

    Poorly designed and analysed experiments can lead to a waste of scientific resources, and may even reach the wrong conclusions. Surveys of published papers by a number of authors have shown that many experiments are poorly analysed statistically, and one survey suggested that about a third of experiments may be unnecessarily large. Few toxicologists attempted to control variability using blocking or covariance analysis. In this study experimental design and statistical methods in 3 papers published in toxicological journals were used as case studies and were examined in detail. The first used dogs to study the effects of ethanol on blood and hepatic parameters following chronic alcohol consumption in a 2 x 4 factorial experimental design. However, the authors used mongrel dogs of both sexes and different ages with a wide range of body weights without any attempt to control the variation. They had also attempted to analyse a factorial design using Student's t-test rather than the analysis of variance. Means of 2 blood parameters presented with one decimal place had apparently been rounded to the nearest 5 units. It is suggested that this experiment could equally well have been done in 3 blocks using 24 instead of 46 dogs. The second case study was an investigation of the response of 2 strains of mice to a toxic agent causing bladder injury. The first experiment involved 40 treatment combinations (2 strains x 4 doses x 5 days) with 3-6 mice per combination. There was no explanation of how the experiment involving approximately 180 mice had actually been done, but unequal subclass numbers suggest that the experiment may have been done on an ad hoc basis rather than being properly designed. It is suggested that the experiment could have been done as 2 blocks involving 80 instead of about 180 mice. The third study again involved a factorial design with 4 dose levels of a compound and 2 sexes, with a total of 80 mice. Open field behaviour was examined. The author
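    To make the statistical point above concrete — a factorial layout calls for an analysis of variance with main effects and an interaction term rather than repeated Student's t-tests — the following sketch analyses a simulated 2 x 4 (sex x dose) data set; the numbers are fabricated for illustration and are not taken from the papers reviewed.

        # Hedged illustration: two-way factorial ANOVA on simulated data.
        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(1)
        sex = np.repeat(["M", "F"], 24)
        dose = np.tile(np.repeat([0, 1, 2, 4], 6), 2)
        y = 10 + 1.5 * dose + (sex == "F") * 0.8 + rng.normal(0, 1.0, 48)

        df = pd.DataFrame({"sex": sex, "dose": dose, "y": y})
        model = smf.ols("y ~ C(sex) * C(dose)", data=df).fit()
        print(sm.stats.anova_lm(model, typ=2))   # F-tests for sex, dose, interaction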

  5. System design of the ATS-F RFI measurement experiment.

    NASA Technical Reports Server (NTRS)

    Henry, V. F.; Schaefer, G.

    1972-01-01

    Description of the system design of an RFI measurement experiment regarding optimal sharing of the 5.925- to 6.425-GHz frequency band between the ATS-F synchronous satellite and terrestrial telecommunication systems. The parametric measurements made will include transmitted and received power levels, propagation-path loss and variations as a function of range, elevation angle, RF polarization, and geographical location of interference sources. The technical objectives of the C-band RFI experiment are outlined, a functional diagram of the total system for the RFI measurement experiment is presented, and the design features of the receiver RF and IF circuits, the filters and detectors, and the computer control are summarized. A basic RFI measurement plan is presented which defines and briefly states the measurement procedure, which involves a number of different measurement modes, each of which is described in detail.

  6. Facilitating an accelerated experience-based co-design project.

    PubMed

    Tollyfield, Ruth

    This article describes an accelerated experience-based co-design (AEBCD) quality improvement project that was undertaken in an adult critical care setting and the facilitation of that process. In doing so the aim is to encourage other clinical settings to engage with their patients, carers and staff alike and undertake their own quality improvement project. Patient, carer and staff experience and its place in the quality sphere is outlined and the importance of capturing patient, carer and staff feedback established. Experience-based co-design (EBCD) is described along with the recently tested accelerated version of the process. An overview of the project and outline of the organisational tasks and activities undertaken by the facilitator are given. The facilitation of the process and key outcomes are discussed and reflected on. Recommendations for future undertakings of the accelerated process are given and conclusions drawn. PMID:24526020

  7. Optimal Experiment Design for Thermal Characterization of Functionally Graded Materials

    NASA Technical Reports Server (NTRS)

    Cole, Kevin D.

    2003-01-01

    The purpose of the project was to investigate methods to accurately verify that designed materials meet thermal specifications. The project involved heat transfer calculations and optimization studies, and no laboratory experiments were performed. One part of the research involved the study of materials in which conduction heat transfer predominates. Results include techniques to choose among several experimental designs, and protocols for determining the optimum experimental conditions for determination of thermal properties. Metal foam materials were also studied in which both conduction and radiation heat transfer are present. Results of this work include procedures to optimize the design of experiments to accurately measure both conductive and radiative thermal properties. Detailed results in the form of three journal papers have been appended to this report.

  8. Explorations in Teaching Sustainable Design: A Studio Experience in Interior Design/Architecture

    ERIC Educational Resources Information Center

    Gurel, Meltem O.

    2010-01-01

    This article argues that a design studio can be a dynamic medium to explore the creative potential of the complexity of sustainability from its technological to social ends. The study seeks to determine the impact of an interior design/architecture studio experience that was initiated to teach diverse meanings of sustainability and to engage the…

  9. Design of a proof of principle high current transport experiment

    SciTech Connect

    Lund, S.M.; Bangerter, R.O.; Barnard, J.J.; Celata, C.M.; Faltens, A.; Friedman, A.; Kwan, J.W.; Lee, E.P.; Seidl, P.A.

    2000-01-15

    Preliminary designs of an intense heavy-ion beam transport experiment to test issues for Heavy Ion Fusion (HIF) are presented. This transport channel will represent a single high-current-density beam at full driver scale and will evaluate practical issues such as aperture filling factors, electrons, halo, imperfect vacuum, etc., that cannot be fully tested using scaled experiments. Various machine configurations are evaluated in the context of the range of physics and technology issues that can be explored in a manner relevant to a full-scale driver. It is anticipated that results from this experiment will allow confident construction of next-generation "Integrated Research Experiments" leading to a full-scale driver for energy production.

  10. Developing Conceptual Hypersonic Airbreathing Engines Using Design of Experiments Methods

    NASA Technical Reports Server (NTRS)

    Ferlemann, Shelly M.; Robinson, Jeffrey S.; Martin, John G.; Leonard, Charles P.; Taylor, Lawrence W.; Kamhawi, Hilmi

    2000-01-01

    Designing a hypersonic vehicle is a complicated process due to the multi-disciplinary synergy that is required. The greatest challenge involves propulsion-airframe integration. In the past, a two-dimensional flowpath was generated based on the engine performance required for a proposed mission. A three-dimensional CAD geometry was produced from the two-dimensional flowpath for aerodynamic analysis, structural design, and packaging. The aerodynamics, engine performance, and mass properties are inputs to the vehicle performance tool to determine if the mission goals were met. If the mission goals were not met, then a flowpath and vehicle redesign would begin. This design process might have to be performed several times to produce a "closed" vehicle. This paper will describe an attempt to design a hypersonic cruise vehicle propulsion flowpath using a Design of Experiments method to reduce the resources necessary to produce a conceptual design with fewer iterations of the design cycle. These methods also allow for more flexible mission analysis and incorporation of additional design constraints at any point. A design system was developed using an object-based software package that would quickly generate each flowpath in the study given the values of the geometric independent variables. These flowpath geometries were put into a hypersonic propulsion code and the engine performance was generated. The propulsion results were loaded into statistical software to produce regression equations that were combined with an aerodynamic database to optimize the flowpath at the vehicle performance level. For this example, the design process was executed twice. The first pass was a cursory look at the independent variables selected to determine which variables are the most important and to test all of the inputs to the optimization process. The second cycle is a more in-depth study with more cases and higher-order equations representing the design space.
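    The regression step described above — turning a handful of propulsion-code runs into equations that a vehicle-level optimizer can query cheaply — might be sketched as follows; the two coded geometric variables, the quadratic model form, and the thrust numbers are all assumptions made for illustration.

        # Minimal response-surface sketch: fit a second-order regression to a
        # small set of (simulated) design cases, then evaluate it instead of
        # re-running the expensive propulsion code.
        import numpy as np

        rng = np.random.default_rng(0)
        x1 = rng.uniform(-1, 1, 15)          # coded geometric variable 1 (assumed)
        x2 = rng.uniform(-1, 1, 15)          # coded geometric variable 2 (assumed)
        thrust = 100 + 8*x1 - 5*x2 + 3*x1*x2 - 4*x1**2 + rng.normal(0, 0.5, 15)

        # Quadratic model terms: 1, x1, x2, x1*x2, x1^2, x2^2
        X = np.column_stack([np.ones_like(x1), x1, x2, x1*x2, x1**2, x2**2])
        coeffs, *_ = np.linalg.lstsq(X, thrust, rcond=None)
        print("regression coefficients:", np.round(coeffs, 2))

        def predict(a, b):
            return coeffs @ np.array([1, a, b, a*b, a**2, b**2])

        print("predicted thrust at (0.5, -0.2):", round(predict(0.5, -0.2), 1))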

  11. Statistically designed experiments to screen chemical mixtures for possible interactions.

    PubMed Central

    Groten, J P; Tajima, O; Feron, V J; Schoen, E D

    1998-01-01

    For the accurate analysis of possible interactive effects of chemicals in a defined mixture, statistical designs are necessary to develop clear and manageable experiments. For instance, factorial designs have been successfully used to detect two-factor interactions. Particularly useful for this purpose are fractionated factorial designs, requiring only a fraction of all possible combinations of a full factorial design. Once the potential interaction has been detected with a fractionated design, a more accurate analysis can be performed for the particular binary mixtures to ensure and characterize these interactions. In this paper this approach is illustrated using an in vitro cytotoxicity assay to detect the presence of mixtures of Fusarium mycotoxins in contaminated food samples. We have investigated interactions between five mycotoxin species (Trichothecenes, Fumonisins, and Zearalenone) using the DNA synthesis inhibition assay in L929 fibroblasts. First, a central composite design was applied to identify possible interactive effects between mycotoxins in the mixtures (27 combinations from the 5^5 possible combinations). Then two-factor interactions of particular interest were further analyzed by the use of a full factorial design (5 x 5 design) to characterize the nature of those interactions more precisely. Results show that combined exposure to several classes of mycotoxins generally results in effect addition with a few minor exceptions indicating synergistic interactions. In general, the nature of the interactions characterized in the full factorial design was similar to the nature of those observed in the central composite design. However, the magnitude of interaction was relatively small in the full factorial design. PMID:9860893
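    The screening idea behind fractionated designs can be shown with a deliberately simpler two-level example than the central composite design used in the paper: a 2^(5-2) fractional factorial covers five factors in 8 runs instead of 32 by aliasing two columns onto interactions (the generators D = AB and E = AC are an arbitrary illustrative choice).

        # Sketch of a 2^(5-2) fractional factorial screening design (8 runs).
        from itertools import product

        runs = []
        for a, b, c in product([-1, 1], repeat=3):   # full factorial in A, B, C
            d, e = a * b, a * c                      # aliased generator columns
            runs.append((a, b, c, d, e))

        print("run   A   B   C   D   E")
        for i, row in enumerate(runs, 1):
            print(f"{i:3d}  " + "  ".join(f"{v:+d}" for v in row))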

  12. Optimized design and analysis of sparse-sampling FMRI experiments.

    PubMed

    Perrachione, Tyler K; Ghosh, Satrajit S

    2013-01-01

    Sparse-sampling is an important methodological advance in functional magnetic resonance imaging (fMRI), in which silent delays are introduced between MR volume acquisitions, allowing for the presentation of auditory stimuli without contamination by acoustic scanner noise and for overt vocal responses without motion-induced artifacts in the functional time series. As such, the sparse-sampling technique has become a mainstay of principled fMRI research into the cognitive and systems neuroscience of speech, language, hearing, and music. Despite being in use for over a decade, there has been little systematic investigation of the acquisition parameters, experimental design considerations, and statistical analysis approaches that bear on the results and interpretation of sparse-sampling fMRI experiments. In this report, we examined how design and analysis choices related to the duration of repetition time (TR) delay (an acquisition parameter), stimulation rate (an experimental design parameter), and model basis function (an analysis parameter) act independently and interactively to affect the neural activation profiles observed in fMRI. First, we conducted a series of computational simulations to explore the parameter space of sparse design and analysis with respect to these variables; second, we validated the results of these simulations in a series of sparse-sampling fMRI experiments. Overall, these experiments suggest the employment of three methodological approaches that can, in many situations, substantially improve the detection of neurophysiological response in sparse fMRI: (1) Sparse analyses should utilize a physiologically informed model that incorporates hemodynamic response convolution to reduce model error. (2) The design of sparse fMRI experiments should maintain a high rate of stimulus presentation to maximize effect size. (3) TR delays of short to intermediate length can be used between acquisitions of sparse-sampled functional image volumes to increase
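    A minimal simulation in the spirit of the computational experiments described above (not the authors' code) is sketched below: a rapid stimulus train is convolved with a canonical double-gamma haemodynamic response and then sampled only at widely spaced acquisition times, mimicking sparse temporal sampling; the event rate, TR, and HRF parameters are illustrative assumptions.

        # Illustrative sketch: stimulus train -> HRF convolution -> sparse sampling.
        import numpy as np
        from scipy.stats import gamma

        dt = 0.1                                          # simulation step (s)
        t = np.arange(0, 32, dt)
        hrf = gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 16)   # double-gamma HRF shape
        hrf /= hrf.sum()

        duration = 300.0
        time = np.arange(0, duration, dt)
        stim = np.zeros_like(time)
        stim[(time % 4.0) < dt] = 1.0                     # one event every 4 s

        bold = np.convolve(stim, hrf)[: time.size]        # idealized BOLD signal
        sparse_tr = 10.0                                  # acquisition every 10 s
        samples = bold[(time % sparse_tr) < dt]
        print(f"{samples.size} sparse volumes; mean sampled signal = {samples.mean():.4f}")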

  13. Design of a reacceleration experiment using the Choppertron

    SciTech Connect

    Fiorentini, G.M.; Wang, C.; Houck, T.L.

    1993-01-01

    The Microwave Source Facility at the Lawrence Livermore National Laboratory is commencing a series of experiments involving reacceleration of a modulated beam alternating with extraction of energy in the form of X-band microwaves. The Choppertron, a high-power microwave generator, is used to modulate a 5-MV, 1-kA induction accelerator beam. The modulated beam is then passed through a series of traveling-wave output structures separated by induction cells. In this paper we report on computer simulations used in the design of these experiments. Simulations include analysis of beam transport, modulation, power extraction and transverse instabilities.

  14. SSSFD manipulator engineering using statistical experiment design techniques

    NASA Technical Reports Server (NTRS)

    Barnes, John

    1991-01-01

    The Satellite Servicer System Flight Demonstration (SSSFD) program is a series of Shuttle flights designed to verify major on-orbit satellite servicing capabilities, such as rendezvous and docking of free flyers, Orbital Replacement Unit (ORU) exchange, and fluid transfer. A major part of this system is the manipulator system that will perform the ORU exchange. The manipulator must possess adequate toolplate dexterity to maneuver a variety of EVA-type tools into position to interface with ORU fasteners, connectors, latches, and handles on the satellite, and to move workpieces and ORUs through 6 degree of freedom (dof) space from the Target Vehicle (TV) to the Support Module (SM) and back. Two cost efficient tools were combined to perform a study of robot manipulator design parameters. These tools are graphical computer simulations and Taguchi Design of Experiment methods. Using a graphics platform, an off-the-shelf robot simulation software package, and an experiment designed with Taguchi's approach, the sensitivities of various manipulator kinematic design parameters to performance characteristics are determined with minimal cost.
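    A stripped-down version of the Taguchi-style screening described above might look like the sketch below: three two-level manipulator parameters are assigned to an orthogonal array (here the full 2^3 array), each run is scored by a placeholder dexterity metric standing in for the graphical simulation, and main effects rank the parameters; the factor names and response model are invented.

        # Hypothetical main-effects screening over a small orthogonal array.
        import numpy as np
        from itertools import product

        array = np.array(list(product([-1, 1], repeat=3)))   # 8 runs x 3 factors
        factor_names = ["link_length", "joint_offset", "wrist_pitch_range"]

        def dexterity(run):
            # Placeholder for a score returned by the robot simulation.
            link, offset, wrist = run
            return 5.0 + 1.2 * link - 0.4 * offset + 0.9 * wrist + 0.3 * link * wrist

        scores = np.array([dexterity(run) for run in array])
        for j, name in enumerate(factor_names):
            effect = scores[array[:, j] == 1].mean() - scores[array[:, j] == -1].mean()
            print(f"{name:20s} main effect = {effect:+.2f}")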

  15. Design of experiments applications in bioprocessing: concepts and approach.

    PubMed

    Kumar, Vijesh; Bhalla, Akriti; Rathore, Anurag S

    2014-01-01

    Most biotechnology unit operations are complex in nature, with numerous process variables, feed material attributes, and raw material attributes that can have a significant impact on the performance of the process. A design of experiments (DOE)-based approach offers a solution to this conundrum and allows for efficient estimation of the main effects and the interactions with a minimal number of experiments. Numerous publications illustrate the application of DOE towards development of different bioprocessing unit operations. However, a systematic approach for evaluating the different DOE designs and choosing the optimal design for a given application has not yet been published. Through this work we have compared the I-optimal and D-optimal designs to the commonly used central composite and Box-Behnken designs for bioprocess applications. A systematic methodology is proposed for construction of the model and for precise prediction of the responses for three case studies involving some of the commonly used unit operations in downstream processing. Use of the Akaike information criterion for model selection has been examined and found to be suitable for the applications under consideration. PMID:24123959
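    The model-selection step mentioned above — using the Akaike information criterion to choose between candidate response models fitted to DOE data — can be sketched generically as follows; the data are simulated and the two candidate models (main effects only versus main effects plus interaction) are illustrative.

        # Hedged sketch: compare two least-squares models by AIC = n*ln(RSS/n) + 2k.
        import numpy as np

        rng = np.random.default_rng(3)
        x1, x2 = rng.uniform(-1, 1, (2, 20))
        y = 2.0 + 1.5 * x1 - 0.8 * x2 + 1.1 * x1 * x2 + rng.normal(0, 0.2, 20)

        def aic(X, y):
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            rss = np.sum((y - X @ beta) ** 2)
            n, k = X.shape
            return n * np.log(rss / n) + 2 * k

        main_only  = np.column_stack([np.ones(20), x1, x2])
        with_inter = np.column_stack([np.ones(20), x1, x2, x1 * x2])
        print("AIC, main effects only :", round(aic(main_only, y), 1))
        print("AIC, with interaction  :", round(aic(with_inter, y), 1))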

  16. Beryllium ignition target design for indirect drive NIF experiments

    NASA Astrophysics Data System (ADS)

    Simakov, A. N.; Wilson, D. C.; Yi, S. A.; Kline, J. L.; Salmonson, J. D.; Clark, D. S.; Milovich, J. L.; Marinak, M. M.

    2016-03-01

    A beryllium (Be) ablator offers multiple advantages over carbon-based ablators for indirectly driven NIF ICF ignition targets: a higher mass ablation rate, ablation pressure, and ablation velocity; lower capsule albedo; and higher thermal conductivity at cryogenic temperatures. Such advantages can be used to improve target robustness and performance. While previous NIF Be target designs exist, they were developed some time ago and do not incorporate the latest physical understanding and models based upon NIF experiments. Herein, we propose a new NIF Be ignition target design at 1.45 MJ and 430 TW that takes all this knowledge into account.

  17. Experimental Design and Power Calculation for RNA-seq Experiments.

    PubMed

    Wu, Zhijin; Wu, Hao

    2016-01-01

    Power calculation is a critical component of RNA-seq experimental design. The flexibility of RNA-seq experiments and the wide dynamic range of transcription they measure make the technology attractive for whole-transcriptome analysis. These features, in addition to the high dimensionality of RNA-seq data, add complexity to experimental design, making an analytical power calculation no longer realistic. In this chapter we review the major factors that influence the statistical power of detecting differential expression, and give examples of power assessment using the R package PROPER. PMID:27008024
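    PROPER itself is an R package; the underlying idea of simulation-based power assessment can nevertheless be sketched generically in Python as below. Counts for one gene are drawn from a negative binomial model for two groups, a simple test is applied to each simulated replicate, and the rejection rate estimates power; the mean, dispersion, fold change, and the use of a t-test on log counts (rather than a dedicated differential-expression test) are all simplifying assumptions and not PROPER's method.

        # Generic Monte-Carlo power sketch for one gene under a negative binomial model.
        import numpy as np
        from scipy.stats import ttest_ind

        def nb_counts(rng, mean, dispersion, size):
            # numpy parameterization: n = 1/dispersion successes, p = n / (n + mean)
            n = 1.0 / dispersion
            return rng.negative_binomial(n, n / (n + mean), size)

        def estimate_power(n_per_group=5, mean=100.0, fold=2.0, dispersion=0.1,
                           alpha=0.05, n_sim=2000, seed=0):
            rng = np.random.default_rng(seed)
            hits = 0
            for _ in range(n_sim):
                a = nb_counts(rng, mean, dispersion, n_per_group)
                b = nb_counts(rng, mean * fold, dispersion, n_per_group)
                # crude stand-in for a real differential-expression test
                if ttest_ind(np.log2(a + 1.0), np.log2(b + 1.0)).pvalue < alpha:
                    hits += 1
            return hits / n_sim

        for n in (3, 5, 10):
            print(f"n = {n:2d} replicates per group -> estimated power {estimate_power(n_per_group=n):.2f}")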

  18. Skylab Medical Experiments Altitude Test /SMEAT/ facility design and operation.

    NASA Technical Reports Server (NTRS)

    Hinners, A. H., Jr.; Correale, J. V.

    1973-01-01

    This paper presents the design approaches and test facility operation methods used to successfully accomplish a 56-day test for Skylab to permit evaluation of selected Skylab medical experiments in a ground test simulation of the Skylab environment with an astronaut crew. The systems designed for this test include the two-gas environmental control system, the fire suppression and detection system, equipment transfer lock, ground support equipment, safety systems, potable water system, waste management system, lighting and power system, television monitoring, communications and recreation systems, and food freezer.

  19. Aircraft integrated design and analysis: A classroom experience

    NASA Technical Reports Server (NTRS)

    Weisshaar, Terrence A.

    1989-01-01

    AAE 451 is the capstone course required of all senior undergraduates in the School of Aeronautics and Astronautics at Purdue University. During the past year the first steps of a long evolutionary process were taken to change the content and expectations of this course. These changes are the result of the availability of advanced computational capabilities and sophisticated electronic media availability at Purdue. This presentation will describe both the long range objectives and this year's experience using the High Speed Commercial Transport design, the AIAA Long Duration Aircraft design and RPV design proposal as project objectives. The central goal of these efforts is to provide a user-friendly, computer-software-based environment to supplement traditional design course methodology. The Purdue University Computer Center (PUCC), the Engineering Computer Network (ECN) and stand-alone PC's are being used for this development. This year's accomplishments center primarily on aerodynamics software obtained from NASA/Langley and its integration into the classroom. Word processor capability for oral and written work and computer graphics were also blended into the course. A total of ten HSCT designs were generated, ranging from twin-fuselage aircraft, forward swept wing aircraft to the more traditional delta and double-delta wing aircraft. Four Long Duration Aircraft designs were submitted, together with one RPV design tailored for photographic surveillance.

  20. Scaling studies and conceptual experiment designs for NGNP CFD assessment

    SciTech Connect

    D. M. McEligot; G. E. McCreery

    2004-11-01

    The objective of this report is to document scaling studies and conceptual designs for flow and heat transfer experiments intended to assess CFD codes and their turbulence models proposed for application to prismatic NGNP concepts. The general approach of the project is to develop new benchmark experiments for assessment in parallel with CFD and coupled CFD/systems code calculations for the same geometry. Two aspects of the complex flow in an NGNP are being addressed: (1) flow and thermal mixing in the lower plenum ("hot streaking" issue) and (2) turbulence and resulting temperature distributions in reactor cooling channels ("hot channel" issue). Current prismatic NGNP concepts are being examined to identify their proposed flow conditions and geometries over the range from normal operation to decay heat removal in a pressurized cooldown. Approximate analyses have been applied to determine key non-dimensional parameters and their magnitudes over this operating range. For normal operation, the flow in the coolant channels can be considered to be dominantly turbulent forced convection with slight transverse property variation. In a pressurized cooldown (LOFA) simulation, the flow quickly becomes laminar with some possible buoyancy influences. The flow in the lower plenum can locally be considered to be a situation of multiple hot jets into a confined crossflow -- with obstructions. Flow is expected to be turbulent with momentum-dominated turbulent jets entering; buoyancy influences are estimated to be negligible in normal full power operation. Experiments are needed for the combined features of the lower plenum flows. Missing from the typical jet experiments available are interactions with nearby circular posts and with vertical posts in the vicinity of vertical walls - with near stagnant surroundings at one extreme and significant crossflow at the other. Two types of heat transfer experiments are being considered. One addresses the "hot channel" problem, if necessary

  1. Thermal design support for the Explorer gamma ray experiment telescope

    NASA Technical Reports Server (NTRS)

    Almgren, D. W.; Lee, W. D.; Mathias, S.

    1975-01-01

    The results of a thermal design definition study for the GSFC Explorer Gamma Ray Experiment Telescope (EGRET) were documented. A thermal computer model of EGRET with 241 nodes was developed and used to analyze the thermal performance of the experiment for a range of orbits, payload orientations, and internal power dissipations. The recommended thermal design utilizes a small radiator with an area of 1.78 square feet on the anti-sun side of the mission adaptor and circumferential heat pipes on the interior of the same adaptor to transfer heat from the electronics compartments to the single radiator. Fifty watts of thermostatically controlled heater power are used to control the temperature level to 10 C + or - 20 C inside the insulated dome structure.

  2. Design of the NASA Lewis 4-Port Wave Rotor Experiment

    NASA Technical Reports Server (NTRS)

    Wilson, Jack

    1997-01-01

    Pressure exchange wave rotors, used in a topping stage, are currently being considered as a possible means of increasing the specific power, and reducing the specific fuel consumption of gas turbine engines. Despite this interest, there is very little information on the performance of a wave rotor operating on the cycle (i.e., set of waves) appropriate for use in a topping stage. One such cycle, which has the advantage of being relatively easy to incorporate into an engine, is the four-port cycle. Consequently, an experiment to measure the performance of a four-port wave rotor for temperature ratios relevant to application as a topping cycle for a gas turbine engine has been designed and built at NASA Lewis. The design of the wave rotor is described, together with the constraints on the experiment.

  3. Tokamak Fusion Core Experiment: design studies based on superconducting and hybrid toroidal field coils. Design overview

    SciTech Connect

    Flanagan, C.A.

    1984-10-01

    This document is a design overview that describes the scoping studies and preconceptual design effort performed in FY 1983 on the Tokamak Fusion Core Experiment (TFCX) class of device. These studies focussed on devices with all-superconducting toroidal field (TF) coils and on devices with superconducting TF coils supplemented with copper TF coil inserts located in the bore of the TF coils in the shield region. Each class of device is designed to satisfy the mission of ignition and long pulse equilibrium burn. Typical design parameters are: major radius = 3.75 m, minor radius = 1.0 m, field on axis = 4.5 T, plasma current = 7.0 MA. These designs rely on lower hybrid (LHRH) current rampup and heating to ignition using the ion cyclotron range of frequencies (ICRF). A pumped limiter has been assumed for impurity control. The present document is a design overview; a more detailed design description is contained in a companion document.

  4. The Modern Design of Experiments: A Technical and Marketing Framework

    NASA Technical Reports Server (NTRS)

    DeLoach, R.

    2000-01-01

    A new wind tunnel testing process under development at NASA Langley Research Center, called Modern Design of Experiments (MDOE), differs from conventional wind tunnel testing techniques on a number of levels. Chief among these is that MDOE focuses on the generation of adequate prediction models rather than high-volume data collection. Some cultural issues attached to this and other distinctions between MDOE and conventional wind tunnel testing are addressed in this paper.

  5. Optimizing the design of geophysical experiments: Is it worthwhile?

    NASA Astrophysics Data System (ADS)

    Curtis, Andrew; Maurer, Hansruedi

    Determining the structure, composition, and state of the Earth's subsurface from measured data is the principal task of many geophysical experiments and surveys. Standard procedures involve the recording of appropriate data sets followed by the application of data analysis techniques to extract the desired information. While the importance of new tools for the analysis stage of an experiment is well recognized, much less attention seems to be paid to improving the data acquisition. A measure of the effort allocated to data analysis research relative to that devoted to data acquisition research is presented in Figure 1. Since 1955 there have been more than 10,000 publications on inversion methods alone, but in the same period only 100 papers on experimental design have appeared in journals. Considering that the acquisition component of an experiment defines what information will be contained in the data, and that no amount of data analysis can compensate for the lack of such information, we suggest that greater effort be made to improve survey planning techniques. Furthermore, given that logistical and financial constraints are often stringent and that relationships between geophysical data and model parameters describing the Earth's subsurface are generally complicated, optimizing the design of an experiment may be quite challenging. Here we review experimental design procedures that optimize the benefit of a field survey, such that maximum information about the target structures is obtained at minimum cost. We also announce a new Web site and e-mail group set up as a forum for communication on survey design research and application.

  6. Stable vacuum UV CCD detectors designed for space flight experiments

    NASA Technical Reports Server (NTRS)

    Socker, Dennis G.; Marchywka, Mike; Taylor, G. C.; Levine, P.; Rios, R.; Shallcross, F.; Hughes, G.

    1993-01-01

    Thinned, backside-illuminated, p-channel CCD imagers are under development that can exploit the surface potential in VUV applications, yielding enhanced quantum efficiency at wavelengths as short as 1100 A. The current goal is production of large-format, 5-micron-pixel imagers for spectrographic and imaging VUV spaceflight experiments. Model predictions of the effect of device design on quantum efficiency, well capacity, and crosstalk are presented for pixel sizes approaching 5 microns.

  7. DIME Students Discuss Final Drop Tower Experiment Design

    NASA Technical Reports Server (NTRS)

    2002-01-01

    Students discuss fine points of their final design for the Drop Tower experiment during the second Dropping in a Microgravity Environment (DIME) competition held April 23-25, 2002, at NASA's Glenn Research Center. Competitors included two teams from Sycamore High School, Cincinnati, OH, and one each from Bay High School, Bay Village, OH, and COSI Academy, Columbus, OH. DIME is part of NASA's education and outreach activities. Details are on line at http://microgravity.grc.nasa.gov/DIME_2002.html.

  8. Analysis of Variance in the Modern Design of Experiments

    NASA Technical Reports Server (NTRS)

    Deloach, Richard

    2010-01-01

    This paper is a tutorial introduction to the analysis of variance (ANOVA), intended as a reference for aerospace researchers who are being introduced to the analytical methods of the Modern Design of Experiments (MDOE), or who may have other opportunities to apply this method. One-way and two-way fixed-effects ANOVA, as well as random effects ANOVA, are illustrated in practical terms that will be familiar to most practicing aerospace researchers.
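    As a small, self-contained companion to the tutorial described above (not an excerpt from it), the sketch below runs a one-way fixed-effects ANOVA on simulated measurements from three wind-tunnel settings; the settings, sample sizes, and effect sizes are invented.

        # One-way fixed-effects ANOVA on simulated replicate measurements.
        import numpy as np
        from scipy.stats import f_oneway

        rng = np.random.default_rng(7)
        setting_a = rng.normal(10.0, 0.5, 8)   # replicate measurements, setting A
        setting_b = rng.normal(10.4, 0.5, 8)   # setting B
        setting_c = rng.normal(11.1, 0.5, 8)   # setting C

        result = f_oneway(setting_a, setting_b, setting_c)
        print(f"F = {result.statistic:.2f}, p = {result.pvalue:.4f}")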

  9. Recent developments in virtual experience design and production

    NASA Astrophysics Data System (ADS)

    Fisher, Scott S.

    1995-03-01

    Today, the media of VR and Telepresence are in their infancy and the emphasis is still on technology and engineering. But, it is not the hardware people might use that will determine whether VR becomes a powerful medium--instead, it will be the experiences that they are able to have that will drive its acceptance and impact. A critical challenge in the elaboration of these telepresence capabilities will be the development of environments that are as unpredictable and rich in interconnected processes as an actual location or experience. This paper will describe the recent development of several Virtual Experiences including: `Menagerie', an immersive Virtual Environment inhabited by virtual characters designed to respond to and interact with its users; and `The Virtual Brewery', an immersive public VR installation that provides multiple levels of interaction in an artistic interpretation of the brewing process.

  10. Design of a Magnetic Reconnection Experiment in the Collisionless Regime

    NASA Astrophysics Data System (ADS)

    Egedal, J.; Le, A.; Daughton, W. S.

    2012-12-01

    A new model for effective heating of electrons during reconnection is now gaining support from spacecraft observations, theoretical considerations, and kinetic simulations [1]. The key ingredient in the model is the physics of trapped electrons, whose dynamics causes the electron pressure tensor to be strongly anisotropic [2]. The heating mechanism becomes highly efficient for geometries with low upstream electron pressure, conditions relevant to the magnetotail. We propose a new experiment that will be optimized for the study of kinetic reconnection including the dynamics of trapped electrons and associated pressure anisotropy. This requires an experiment that accesses plasmas with much lower collisionality and lower plasma beta than are available in present reconnection experiments. The new experiment will be designed such that a large variety of magnetic configurations can be established and tailored for continuation of our ongoing study of spontaneous 3D reconnection [3]. The flexible design will also allow for configurations suitable for the study of merging magnetic islands, which may be a source of superthermal electrons in naturally occurring plasmas. [1] Egedal J et al., Nature Physics, 8, 321 (2012). [2] Le A et al., Phys. Rev. Lett. 102, 085001 (2009). [3] Katz N et al., Phys. Rev. Lett. 104, 255004 (2010).

  11. An industrial approach to design compelling VR and AR experience

    NASA Astrophysics Data System (ADS)

    Richir, Simon; Fuchs, Philippe; Lourdeaux, Domitile; Buche, Cédric; Querrec, Ronan

    2013-03-01

    The convergence of technologies currently observed in the fields of VR, AR, robotics, and consumer electronics reinforces the trend of new applications appearing every day. But when transferring knowledge acquired from research to businesses, research laboratories are often at a loss because of a lack of knowledge of the design and integration processes involved in creating an industrial-scale product. In fact, the innovation approaches that take a good idea from the laboratory to a successful industrial product are often little known to researchers. The objective of this paper is to present the results of the work of several research teams that have finalized a working method for researchers and manufacturers that allows them to design virtual or augmented reality systems and enable their users to enjoy "a compelling VR experience". That approach, called "the I2I method", presents 11 phases from "Establishing technological and competitive intelligence and industrial property" to "Improvements" through the "Definition of the Behavioral Interface, Virtual Environment and Behavioral Software Assistance". As a result of the experience gained by various research teams, this design approach benefits from contributions from current VR and AR research. Our objective is to validate and continuously move such multidisciplinary design team methods forward.

  12. Report on the first VLHC photon stop cryogenic design experiment

    SciTech Connect

    Michael Geynisman et al.

    2003-09-15

    As part of Fermilab's study of a Very Large Hadron Collider, a water-cooled photon stop was proposed as a device to intercept the synchrotron radiation emitted by the high-energy proton beams in the high field superconducting magnets with minimal plug-cooling power. Photon stops are radiation absorbers operating at room temperature that protrude into the beam tube at the end of each bending magnet to scrape the synchrotron light emitted by the beam one magnet up-stream. Among the technological challenges regarding photon stops is their cryo-design. The photon stop is water-cooled and operates in a cryogenic environment. A careful cryo-design is therefore essential to enable operation at minimum heat transfer between the room temperature sections and the cryogenic parts. A photon stop cryo-design was developed and a prototype was built. This paper presents the results of the cryogenic experiments conducted on the first VLHC photon stop prototype.

  13. HTGR nuclear heat source component design and experience

    SciTech Connect

    Peinado, C.O.; Wunderlich, R.G.; Simon, W.A.

    1982-05-01

    The high-temperature gas-cooled reactor (HTGR) nuclear heat source components have been under design and development since the mid-1950's. Two power plants have been designed, constructed, and operated: the Peach Bottom Atomic Power Station and the Fort St. Vrain Nuclear Generating Station. Recently, development has focused on the primary system components for a 2240-MW(t) steam cycle HTGR capable of generating about 900 MW(e) electric power or alternately producing high-grade steam and cogenerating electric power. These components include the steam generators, core auxiliary heat exchangers, primary and auxiliary circulators, reactor internals, and thermal barrier system. A discussion of the design and operating experience of these components is included.

  14. Thermal Design and Analysis for the Cryogenic MIDAS Experiment

    NASA Technical Reports Server (NTRS)

    Amundsen, Ruth McElroy

    1997-01-01

    The Materials In Devices As Superconductors (MIDAS) spaceflight experiment is a NASA payload which launched in September 1996 on the Shuttle, and was transferred to the Mir Space Station for several months of operation. MIDAS was developed and built at NASA Langley Research Center (LaRC). The primary objective of the experiment was to determine the effects of microgravity and spaceflight on the electrical properties of high-temperature superconductive (HTS) materials. The thermal challenge on MIDAS was to maintain the superconductive specimens at or below 80 K for the entire operation of the experiment, including all ground testing and 90 days of spaceflight operation. Cooling was provided by a small tactical cryocooler. The superconductive specimens and the coldfinger of the cryocooler were mounted in a vacuum chamber, with vacuum levels maintained by an ion pump. The entire experiment was mounted for operation in a stowage locker inside Mir, with the only heat dissipation capability provided by a cooling fan exhausting to the habitable compartment. The thermal environment on Mir can potentially vary over the range 5 to 40 C; this was the range used in testing, and this wide range adds to the difficulty in managing the power dissipated from the experiment's active components. Many issues in the thermal design are discussed, including: thermal isolation methods for the cryogenic samples; design for cooling to cryogenic temperatures; cryogenic epoxy bonds; management of ambient temperature components self-heating; and fan cooling of the enclosed locker. Results of the design are also considered, including the thermal gradients across the HTS samples and cryogenic thermal strap, electronics and thermal sensor cryogenic performance, and differences between ground and flight performance. Modeling was performed in both SINDA-85 and MSC/PATRAN (with direct geometry import from the CAD design tool Pro/Engineer). Advantages of both types of models are discussed

  15. Aircraft integrated design and analysis: A classroom experience

    NASA Technical Reports Server (NTRS)

    1988-01-01

    AAE 451 is the capstone course required of all senior undergraduates in the School of Aeronautics and Astronautics at Purdue University. During the past year the first steps of a long evolutionary process were taken to change the content and expectations of this course. These changes are the result of the availability of advanced computational capabilities and sophisticated electronic media availability at Purdue. This presentation will describe both the long range objectives and this year's experience using the High Speed Commercial Transport (HSCT) design, the AIAA Long Duration Aircraft design and a Remotely Piloted Vehicle (RPV) design proposal as project objectives. The central goal of these efforts was to provide a user-friendly, computer-software-based, environment to supplement traditional design course methodology. The Purdue University Computer Center (PUCC), the Engineering Computer Network (ECN), and stand-alone PC's were used for this development. This year's accomplishments centered primarily on aerodynamics software obtained from the NASA Langley Research Center and its integration into the classroom. Word processor capability for oral and written work and computer graphics were also blended into the course. A total of 10 HSCT designs were generated, ranging from twin-fuselage and forward-swept wing aircraft, to the more traditional delta and double-delta wing aircraft. Four Long Duration Aircraft designs were submitted, together with one RPV design tailored for photographic surveillance. Supporting these activities were three video satellite lectures beamed from NASA/Langley to Purdue. These lectures covered diverse areas such as an overview of HSCT design, supersonic-aircraft stability and control, and optimization of aircraft performance. Plans for next year's effort will be reviewed, including dedicated computer workstation utilization, remote satellite lectures, and university/industrial cooperative efforts.

  16. Creating meaningful learning experiences: Understanding students' perspectives of engineering design

    NASA Astrophysics Data System (ADS)

    Aleong, Richard James Chung Mun

    , relevance, and transfer. With this framework of student learning, engineering educators can enhance learning experiences by engaging all three levels of students' understanding. The curriculum studies orientation applied the three holistic elements of curriculum---subject matter, society, and the individual---to conceptualize design considerations for engineering curriculum and teaching practice. This research supports the characterization of students' learning experiences to help educators and students optimize their teaching and learning of design education.

  17. Designing a future Conditions Database based on LHC experience

    NASA Astrophysics Data System (ADS)

    Barberis, D.; Formica, A.; Gallas, E. J.; Govi, G.; Lehman Miotto, G.; Pfeiffer, A.

    2015-12-01

    Starting from the experience collected by the ATLAS and CMS experiments in handling condition data during the first LHC run, we present a proposal for a new generation of condition databases, which could be implemented by 2020. We will present the identified relevant data flows for condition data and underline the common use cases that lead to a joint effort for the development of a new system. Condition data is needed in any scientific experiment. It includes any ancillary data associated with primary data taking, such as detector configuration, state, or calibration, or the environment in which the detector is operating. Condition data typically reside outside the primary data store for various reasons (size, complexity, or availability) and are best accessed at the point of processing or analysis (including for Monte Carlo simulations). The ability of any experiment to produce correct and timely results depends on the complete and efficient availability of needed conditions for each stage of data handling. Therefore, any experiment needs a condition data architecture which can not only store conditions but also deliver the data efficiently, on demand, to a potentially diverse and geographically distributed set of clients. The architecture design should consider facilities to ease conditions management and the monitoring of condition entry, access, and usage.
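    The core access pattern such a system must serve — return the payload valid for a given time or run under a chosen tag — can be sketched with a toy interval-of-validity (IOV) store; the class name, schema, and payloads below are invented for illustration and are not the ATLAS or CMS interfaces.

        # Toy interval-of-validity store: each payload is valid from its "since"
        # time until the next entry's "since" time under the same tag.
        import bisect

        class ConditionsTag:
            def __init__(self):
                self._starts = []          # IOV start times, kept sorted
                self._payloads = []        # payload valid from the matching start

            def insert(self, since, payload):
                i = bisect.bisect_left(self._starts, since)
                self._starts.insert(i, since)
                self._payloads.insert(i, payload)

            def lookup(self, time):
                i = bisect.bisect_right(self._starts, time) - 1
                if i < 0:
                    raise KeyError(f"no conditions valid at t={time}")
                return self._payloads[i]

        calib = ConditionsTag()
        calib.insert(0,    {"pedestal": 1.02})
        calib.insert(1000, {"pedestal": 1.05})   # new calibration from run 1000 onward
        print(calib.lookup(250), calib.lookup(1500))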

  18. Design and MHD modeling of ATLAS experiments to study friction

    SciTech Connect

    Faehl, R. J.; Hammerberg, J. E.

    2002-01-01

    Transverse shear at the interface of two solids occurs when these solids move at different velocities. This frictional phenomenon is being studied in a series of experiments on the ATLAS capacitor bank at Los Alamos. Cylindrical targets to test friction force models are composed of alternating regions of high- and low-shock speed materials. When the target is impacted by a cylindrical, magnetically-accelerated aluminum liner, the differential shock velocity in the two materials establishes the desired shear at the interface. One- and two-dimensional MHD calculations have been performed to design liners with suitable properties to drive these 'friction-like' ATLAS experiments. A thick impactor allows the shock to be maintained for several microseconds. The ATLAS experiments use a liner that is approximately 10 mm thick at impact, with an inner surface velocity of approximately 1.4-1.5 km/s. Interaction of this thick liner with the electrodes, or glide planes, results in significant deformation of the hardened stainless steel electrodes. Data from the ATLAS experiments and comparisons with the calculations will be presented, along with plans for future experiments.

  19. Design and modeling of small scale multiple fracturing experiments

    SciTech Connect

    Cuderman, J F

    1981-12-01

    Recent experiments at the Nevada Test Site (NTS) have demonstrated the existence of three distinct fracture regimes. Depending on the pressure rise time in a borehole, one can obtain hydraulic, multiple, or explosive fracturing behavior. The use of propellants rather than explosives in tamped boreholes permits tailoring of the pressure risetime over a wide range since propellants having a wide range of burn rates are available. This technique of using the combustion gases from a full bore propellant charge to produce controlled borehole pressurization is termed High Energy Gas Fracturing (HEGF). Several series of HEGF, in 0.15 m and 0.2 m diameter boreholes at 12 m depths, have been completed in a tunnel complex at NTS where mineback permitted direct observation of fracturing obtained. Because such large experiments are costly and time consuming, smaller scale experiments are desirable, provided results from small experiments can be used to predict fracture behavior in larger boreholes. In order to design small scale gas fracture experiments, the available data from previous HEGF experiments were carefully reviewed, analytical elastic wave modeling was initiated, and semi-empirical modeling was conducted which combined predictions for statically pressurized boreholes with experimental data. The results of these efforts include (1) the definition of what constitutes small scale experiments for emplacement in a tunnel complex at the Nevada Test Site, (2) prediction of average crack radius, in ash fall tuff, as a function of borehole size and energy input per unit length, (3) definition of multiple-hydraulic and multiple-explosive fracture boundaries as a function of boreholes size and surface wave velocity, (4) semi-empirical criteria for estimating stress and acceleration, and (5) a proposal that multiple fracture orientations may be governed by in situ stresses.

  20. Designing Experiments to Discriminate Families of Logic Models.

    PubMed

    Videla, Santiago; Konokotina, Irina; Alexopoulos, Leonidas G; Saez-Rodriguez, Julio; Schaub, Torsten; Siegel, Anne; Guziolowski, Carito

    2015-01-01

    Logic models are a promising way of building effective in silico functional models of a cell, in particular of its signaling pathways. The automated learning of Boolean logic models describing signaling pathways can be achieved by training on phosphoproteomics data, which is particularly useful if the data are measured under different combinations of perturbations in a high-throughput fashion. In practice, however, the number and type of allowed perturbations are not exhaustive. Moreover, experimental data are unavoidably subject to noise. As a result, the learning process yields a family of feasible logical networks rather than a single model. This family is composed of logic models implementing different internal wirings for the system, and therefore the predictions of experiments from this family may present a significant level of variability, and hence uncertainty. In this paper, we introduce a method based on Answer Set Programming to propose an optimal experimental design that aims to narrow down the variability (in terms of input-output behaviors) within families of logical models learned from experimental data. We study how the fitness with respect to the data can be improved after an optimal selection of signaling perturbations and how optimal logic models can be learned with a minimal number of experiments. The methods are applied to signaling pathways in human liver cells and phosphoproteomics experimental data. Using 25% of the experiments, we obtained logical models with fitness scores (mean square error) within 15% of those obtained using all experiments, illustrating the impact that our approach can have on the design of experiments for efficient model calibration. PMID:26389116
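
    A toy Python sketch of the underlying idea, choosing the perturbation that best separates the predictions of a family of Boolean models, is given below. The three models, the perturbations, and the disagreement score are invented for illustration and do not represent the Answer Set Programming method of the paper.

        from itertools import product

        # Three hypothetical Boolean models of the same pathway; each maps a
        # perturbation (stimulate 'a'?, stimulate 'b'?) to a readout value.
        def model1(a, b): return a and not b
        def model2(a, b): return a or b
        def model3(a, b): return a
        family = [model1, model2, model3]

        def disagreement(experiment, models):
            """Number of distinct predictions the family makes for one experiment."""
            a, b = experiment
            return len({m(a, b) for m in models})

        # Candidate experiments: all combinations of the two perturbations.
        candidates = list(product([False, True], repeat=2))
        best = max(candidates, key=lambda e: disagreement(e, family))
        print("most discriminating perturbation:", best)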

  1. Tuning Parameters in Heuristics by Using Design of Experiments Methods

    NASA Technical Reports Server (NTRS)

    Arin, Arif; Rabadi, Ghaith; Unal, Resit

    2010-01-01

    With the growing complexity of today's large scale problems, it has become more difficult to find optimal solutions by using exact mathematical methods. The need to find near-optimal solutions in an acceptable time frame requires heuristic approaches. In many cases, however, heuristics have several parameters that need to be "tuned" before they can reach good results. The problem then turns into finding the best parameter setting for the heuristics to solve the problems efficiently and in a timely manner. The One-Factor-At-a-Time (OFAT) approach to parameter tuning neglects the interactions between parameters. Design of Experiments (DOE) tools can instead be employed to tune the parameters more effectively. In this paper, we seek the best parameter setting for a Genetic Algorithm (GA) to solve the single machine total weighted tardiness problem, in which n jobs must be scheduled on a single machine without preemption and the objective is to minimize the total weighted tardiness. Benchmark instances for the problem are available in the literature. To fine-tune the GA parameters in the most efficient way, we compare multiple DOE models including 2-level (2^k) full factorial design, orthogonal array design, central composite design, D-optimal design, and signal-to-noise (S/N) ratios. In each DOE method, a mathematical model is created using regression analysis and solved to obtain the best parameter setting. After verification runs using the tuned parameter setting, preliminary results show that optimal solutions for multiple instances were found efficiently.
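
    The following Python sketch illustrates the general 2-level full factorial plus regression workflow described above, applied to a stand-in objective rather than a real GA; the parameter names, levels, and the placeholder run_ga function are assumptions made for the example.

        import itertools
        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical GA parameters with low/high levels (coded -1/+1).
        levels = {"pop_size": (50, 200),
                  "mutation_rate": (0.01, 0.1),
                  "crossover_rate": (0.6, 0.9)}

        def run_ga(pop_size, mutation_rate, crossover_rate):
            # Placeholder for a real GA run; returns the total weighted
            # tardiness achieved with this setting (lower is better).
            base = 1000 - 2.0*pop_size + 5000*mutation_rate - 800*crossover_rate
            return base + 10*rng.standard_normal()

        # 2^k full factorial design in coded units.
        names = list(levels)
        design = np.array(list(itertools.product([-1, 1], repeat=len(names))))

        def decode(row):
            return {n: levels[n][0] if c < 0 else levels[n][1]
                    for n, c in zip(names, row)}

        y = np.array([run_ga(**decode(row)) for row in design])

        # First-order regression model y = b0 + sum(b_i * x_i), fit by least squares.
        X = np.column_stack([np.ones(len(design)), design])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        print(dict(zip(["intercept"] + names, np.round(coef, 1))))

        # Choose, for each factor, the level predicted to minimize the response.
        best = {n: levels[n][0] if b > 0 else levels[n][1]
                for n, b in zip(names, coef[1:])}
        print("suggested setting:", best)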

  2. Using nearest-neighbor designs and analyses in ecological experiments

    SciTech Connect

    Dixon, P.M. )

    1994-06-01

    Most ecological experiments handle spatial variation either by ignoring it (e.g. completely randomized designs) or by identifying putatively homogeneous areas (e.g. a blocked design). Analysis of data from two experiments, estimating density effects on frog growth and variety differences in yield of triticale, shows that there can be large correlations (r = 0.51 and r = 0.72) between residuals on adjacent plots. Nearest-neighbor ANOVA methods use the spatial correlation among residuals to improve the estimation of treatment effects. Incorporating the spatial correlation reduces the variance of treatment effects by 50-75%, depending on the size of the correlation and the arrangement of plots in the field. This decreased variance is equivalent to that from experiments that are 2[times] or 4[times] larger. Potential problems include estimating the spatial correlation, adjusting the error degrees of freedom, and confounding the treatment and spatial effects if there are few replicates. The consequences of these problems will be illustrated.
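
    A minimal Python sketch of a Papadakis-style nearest-neighbor adjustment, one common way of using neighboring residuals as a covariate, is shown below; the plot layout, treatments, and yields are simulated, and the specific analysis used in the paper may differ.

        import numpy as np

        # Invented example: 12 plots in a row, 3 treatments, yields with a
        # smooth spatial trend plus noise.
        rng = np.random.default_rng(1)
        treatment = np.tile([0, 1, 2], 4)
        true_effect = np.array([0.0, 1.0, 2.0])
        trend = np.linspace(0, 3, 12)             # spatial gradient along the row
        y = true_effect[treatment] + trend + rng.normal(0, 0.3, 12)

        # Step 1: residuals from an ordinary treatment-means fit.
        means = np.array([y[treatment == t].mean() for t in range(3)])
        resid = y - means[treatment]

        # Step 2: covariate = mean residual of the two nearest neighbors.
        nn_cov = np.array([np.mean([resid[j] for j in (i - 1, i + 1) if 0 <= j < 12])
                           for i in range(12)])

        # Step 3: analysis of covariance, adjusting each yield for its neighbors.
        X = np.column_stack([np.eye(3)[treatment], nn_cov])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        print("adjusted treatment means:", np.round(beta[:3], 2))
        print("nearest-neighbor covariate coefficient:", round(beta[3], 2))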

  3. Conceptual design of a massive aerometric tracer experiment (MATEX)

    SciTech Connect

    Hidy, G.M.

    1987-10-01

    A hypothetical field experiment is evaluated that relates, through tracer releases, reactive pollutant emissions to long range transport and deposition. The feasibility of such an approach is established provided certain requirements can be met. The experiment must: (a) trace emissions from several sources simultaneously and repetitively over an extended period of time, (b) link a tracer to the chemical behavior of emissions, and (c) apply a statistically sound method of guidance for deducing empirical source-receptor relationships (SRRs) while accounting for natural variability. One design approach would use perfluorocarbon tracers (PFTs), which are nonreactive in the atmosphere, to simulate the transport and dispersion of reactive species such as sulfur and nitrogen oxides. Conversion and loss factors would be calibrated using isotopic sulfur and nitrogen compounds with PFTs, in combination with aerometric and deposition observations. An experimental concept is described that determines SRRs for deposition from observations and their interpolation, synthesized by an empirical model. If implemented, the experiment would be very expensive and would carry a high design risk for achieving its goals given present knowledge.

  4. Engineering design of the National Spherical Torus Experiment

    SciTech Connect

    C. Neumeyer; P. Heitzenroeder; J. Spitzer; J. Chrzanowski; et al

    2000-05-11

    NSTX is a proof-of-principle experiment aimed at exploring the physics of the ``spherical torus'' (ST) configuration, which is predicted to exhibit more efficient magnetic confinement than conventional large aspect ratio tokamaks, amongst other advantages. The low aspect ratio (R/a, typically 1.2--2 in ST designs compared to 4--5 in conventional tokamaks) decreases the available cross sectional area through the center of the torus for toroidal and poloidal field coil conductors, vacuum vessel wall, plasma facing components, etc., thus increasing the need to deploy all components within the so-called ``center stack'' in the most efficient manner possible. Several unique design features have been developed for the NSTX center stack, and careful engineering of this region of the machine, utilizing materials up to their engineering allowables, has been key to meeting the desired objectives. The design and construction of the machine has been accomplished in a rapid and cost effective manner thanks to the availability of extensive facilities, a strong experience base from the TFTR era, and good cooperation between institutions.

  5. Object oriented design and programming for experiment online applications---Experiences with a prototype application

    SciTech Connect

    Oleynik, G.

    1991-03-01

    The increase in the variety of computer platforms incorporated into online data acquisition systems at Fermilab compels consideration of how best to design and implement applications to be maintainable, reusable and portable. To this end we have evaluated the applicability of Object Oriented Design techniques and Object Oriented Programming languages to online applications. We report on this evaluation. We are designing a specific application which provides a framework for experimenters to access and display their raw data on UNIX workstations that form part of their distributed online data acquisition system. We have chosen to implement this using the C++ OOP language. We report on our experiences in object oriented design and lessons learned which we will apply to future software development. 14 refs.

  6. Propagation of Computational Uncertainty Using the Modern Design of Experiments

    NASA Technical Reports Server (NTRS)

    DeLoach, Richard

    2007-01-01

    This paper describes the use of formally designed experiments to aid in the error analysis of a computational experiment. A method is described by which the underlying code is approximated with relatively low-order polynomial graduating functions represented by truncated Taylor series approximations to the true underlying response function. A resource-minimal approach is outlined by which such graduating functions can be estimated from a minimum number of case runs of the underlying computational code. Certain practical considerations are discussed, including ways and means of coping with high-order response functions. The distributional properties of prediction residuals are presented and discussed. A practical method is presented for quantifying that component of the prediction uncertainty of a computational code that can be attributed to imperfect knowledge of independent variable levels. This method is illustrated with a recent assessment of uncertainty in computational estimates of Space Shuttle thermal and structural reentry loads attributable to ice and foam debris impact on ascent.
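
    As a rough illustration of the approach, the Python sketch below fits a quadratic graduating function to a handful of runs of a stand-in "code" and then propagates an assumed uncertainty in the independent variable levels through the cheap surrogate by Monte Carlo; the design points, the test function, and the input sigma are all invented for the example.

        import numpy as np

        def expensive_code(x1, x2):
            # Stand-in for the real computational code.
            return 3.0 + 1.5*x1 - 0.8*x2 + 0.4*x1*x2 + 0.2*x1**2

        # A small face-centered design in coded units (-1, 0, +1).
        pts = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1], [0, 0],
                        [-1, 0], [1, 0], [0, -1], [0, 1]], float)
        y = np.array([expensive_code(*p) for p in pts])

        # Quadratic graduating function: 1, x1, x2, x1*x2, x1^2, x2^2.
        def basis(p):
            x1, x2 = p
            return [1, x1, x2, x1*x2, x1**2, x2**2]

        B = np.array([basis(p) for p in pts])
        coef, *_ = np.linalg.lstsq(B, y, rcond=None)

        # Propagate uncertainty in the independent variables (assumed
        # sigma = 0.05 in coded units) through the surrogate by Monte Carlo.
        rng = np.random.default_rng(0)
        nominal = np.array([0.3, -0.2])
        samples = nominal + rng.normal(0, 0.05, size=(20000, 2))
        pred = np.array([basis(s) for s in samples]) @ coef
        print(f"response sigma due to input-level uncertainty: {pred.std():.4f}")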

  7. Design challenges and safety concept for the AVANTI experiment

    NASA Astrophysics Data System (ADS)

    Gaias, G.; Ardaens, J.-S.

    2016-06-01

    AVANTI is a formation-flight experiment involving two noncooperative satellites. After a brief overview of the challenges induced by the experiment design and scenario, this paper presents the safety concept retained to guarantee the safety of the formation. The peculiarity of the proposed approach is that it does not rely on the continuous availability of tracking data for the client spacecraft but rather exploits the concept of passive safety of special relative trajectories. To this end, the formation safety criterion based on the minimum distance normal to the flight direction has been extended so that it is also applicable to drifting relative orbits, which result from a non-vanishing relative semi-major axis encountered during a rendezvous or produced by the action of differential aerodynamic drag.

  8. Vanguard/PLACE experiment system design and test plan

    NASA Technical Reports Server (NTRS)

    Taylor, R. E.

    1973-01-01

    A system design and test plan are described for operational evaluation of the NASA-Goddard position location and aircraft communications equipment (PLACE), at C band (4/6GHz), using NASA's ship, the USNS Vanguard, and the ATS 3 and ATS 5 synchronous satellites. The Sea Test phase, extending from March 29, 1973 to April 15, 1973 was successfully completed; the principal objectives of the experiment were achieved. Typical PLACE-computed, position-location data is shown for the Vanguard. Position location and voice-quality measurements were excellent; ship position was determined within 2 nmi; high-quality, 2-way voice transmissions resulted as determined from audience participation, intelligibility and articulation-index analysis. A C band/L band satellite trilateration experiment is discussed.

  9. Design and status of the Mu2e experiment

    NASA Astrophysics Data System (ADS)

    Miscetti, Stefano

    2016-04-01

    The Mu2e experiment aims to measure the charged-lepton flavor violating neutrino-less conversion of a negative muon into an electron in the field of an aluminum nucleus. The objective is to improve on the previous measurement by four orders of magnitude using a similar technique. For the conversion process, the signal will appear as a mono-energetic electron with an energy very close to the muon rest mass. In the Standard Model these processes have negligible rates; however, in many Beyond the Standard Model scenarios their rates are within the reach of next-generation experiments. In this paper, we explain the sensitivity to the new-physics scale and the complementarity of approach and reach provided by Mu2e with respect to Mu3e and the MEG upgrade. The Mu2e experimental technique, design and status are the focus of this paper.

  10. Criteria for the design or selection of a bioelectromagnetics experiment

    NASA Astrophysics Data System (ADS)

    Pickard, W. F.

    1985-06-01

    By athermal, one normally means an effect which: (1) has been explained explicitly and unambiguously in terms of mechanisms other than increased random molecular motion (i.e., heating), or (2) occurs at absorbed power levels so low that a thermal mechanism seems unlikely, or (3) displays so unexpected a dependence upon some experimental variable that it is hard to see how heating could lie behind it. The aim of this report is to discuss guidelines by which an experiment directed toward the discovery or elucidation of an athermal effect might be designed or evaluated. It consists of three main sections. First, the great commandment of athermal bioelectromagnetics is discussed: that any reported, putatively athermal bioeffect be robustly reproducible. Six criteria are propounded for judging the credibility of a reported experiment in athermal bioelectromagnetics. It is concluded that experimentation in athermal bioelectromagnetics would be markedly facilitated by the identification of the mechanisms underlying the observed effects.

  11. CELSS experiment model and design concept of gas recycle system

    NASA Technical Reports Server (NTRS)

    Nitta, K.; Oguchi, M.; Kanda, S.

    1986-01-01

    In order to prolong the duration of manned missions around the Earth and to extend the region of human presence from the Earth to other destinations, such as a Lunar Base or a manned Mars flight mission, the controlled ecological life support system (CELSS) becomes an essential factor of the future technology to be developed through utilization of the space station. Preliminary system engineering and integration efforts regarding CELSS have been carried out by the Japanese CELSS concept study group to clarify the feasibility of hardware development for space station experiments and to obtain the time-phased mission sets after FY 1992. The results of these studies are briefly summarized, and the design and utilization methods of a Gas Recycle System for CELSS experiments are discussed.

  12. Design and experience with large-size CFB boilers

    SciTech Connect

    Darling, S.L.

    1994-12-31

    CFB boilers have been in operation for many years in industrial steam and power generation applications demonstrating the low SO{sub x}/NO{sub x} emissions and fuel flexibility of the technology. In the past few years, several large-size CFB boilers (over 100 MWe) have entered service and are operating successfully. On the basis of this experience, CFB boilers up to 400 MWe in size are now being offered with full commercial guarantees. Such large CFB boilers will be of interest to countries with strict emission regulations or the need to reduce emissions, and to countries with both a large need for additional power and low grade indigenous solid fuel. This paper will describe Ahlstrom Pyropower's scale-up of the AHLSTROM PYROFLOW CFB boiler, experience with large-size CFB boilers and the design features of CFB boilers in the 400 MWe size range.

  13. On Becoming a Civic-Minded Instructional Designer: An Ethnographic Study of an Instructional Design Experience

    ERIC Educational Resources Information Center

    Yusop, Farrah Dina; Correia, Ana-Paula

    2014-01-01

    This ethnographic study took place in a graduate course at a large research university in the Midwestern United States. It presents an in-depth examination of the experiences and challenges of a group of four students learning to be Instructional Design and Technology professionals who are concerned with the well-being of all members of a society,…

  14. Scaling and design of landslide and debris-flow experiments

    USGS Publications Warehouse

    Iverson, Richard M.

    2015-01-01

    Scaling plays a crucial role in designing experiments aimed at understanding the behavior of landslides, debris flows, and other geomorphic phenomena involving grain-fluid mixtures. Scaling can be addressed by using dimensional analysis or – more rigorously – by normalizing differential equations that describe the evolving dynamics of the system. Both of these approaches show that, relative to full-scale natural events, miniaturized landslides and debris flows exhibit disproportionately large effects of viscous shear resistance and cohesion as well as disproportionately small effects of excess pore-fluid pressure that is generated by debris dilation or contraction. This behavioral divergence grows in proportion to H^3, where H is the thickness of a moving mass. Therefore, to maximize geomorphological relevance, experiments with wet landslides and debris flows must be conducted at the largest feasible scales. Another important consideration is that, unlike stream flows, landslides and debris flows accelerate from statically balanced initial states. Thus, no characteristic macroscopic velocity exists to guide experiment scaling and design. On the other hand, macroscopic gravity-driven motion of landslides and debris flows evolves over a characteristic time scale (L/g)^1/2, where g is the magnitude of gravitational acceleration and L is the characteristic length of the moving mass. Grain-scale stress generation within the mass occurs on a shorter time scale, H/(gL)^1/2, which is inversely proportional to the depth-averaged material shear rate. A separation of these two time scales exists if the criterion H/L << 1 is satisfied, as is commonly the case. This time scale separation indicates that steady-state experiments can be used to study some details of landslide and debris-flow behavior but cannot be used to study macroscopic landslide or debris-flow dynamics.
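
    A quick numerical check of the two time scales, using illustrative values of H and L that are not taken from the paper, shows that their ratio reduces exactly to H/L, the separation criterion quoted above.

        import math

        g = 9.81  # m/s^2
        # Illustrative geometries (not from the paper): a large flume
        # experiment and a small bench-top tank.
        for label, H, L in [("large flume", 0.1, 100.0), ("bench-top tank", 0.01, 0.5)]:
            t_macro = math.sqrt(L / g)        # macroscopic gravity-driven motion
            t_grain = H / math.sqrt(g * L)    # grain-scale stress generation
            # Note: t_grain / t_macro = H / L, the separation criterion.
            print(f"{label:14s} H/L = {H/L:.3f}  t_macro = {t_macro:.2f} s  "
                  f"t_grain = {t_grain:.4f} s  ratio = {t_grain/t_macro:.3f}")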

  15. Active array design for FAME: Freeform Active Mirror Experiment

    NASA Astrophysics Data System (ADS)

    Jaskó, Attila; Aitink-Kroes, Gabby; Agócs, Tibor; Venema, Lars; Hugot, Emmanuel; Schnetler, Hermine; Bányai, Evelin

    2014-07-01

    In this paper a status report is given on the development of the FAME (Freeform Active Mirror Experiment) active array. Further information regarding this project can be found in the paper by Venema et al. (this conference). Freeform optics provide the opportunity to drastically reduce the complexity of future optical instruments. In order to produce these non-axisymmetric freeform optics with up to 1 mm deviation from the best-fit sphere, it is necessary to develop new design and manufacturing methods. The way we would like to create novel freeform optics is by fine-tuning a preformed high-surface-quality thin mirror using an array which is actively controlled by actuators. In the following we introduce the tools deployed to create and assess the individual designs. The result is an active array having an optimal number and layout of actuators.

  16. Spacecraft and mission design for the SP-100 flight experiment

    NASA Technical Reports Server (NTRS)

    Deininger, William D.; Vondra, Robert J.

    1988-01-01

    The design and performance of a spacecraft employing arcjet nuclear electric propulsion, suitable for use in the SP-100 Space Reactor Power System (SRPS) Flight Experiment, are outlined. The vehicle design is based on a 93 kW(e) ammonia arcjet system operating at an experimentally measured specific impulse of 1031 s and an efficiency of 42.3 percent. The arcjet/gimbal assemblies, power conditioning subsystem, propellant feed system, propulsion system thermal control, spacecraft diagnostic instrumentation, and the telemetry requirements are described. A 100 kW(e) SRPS is assumed. The spacecraft mass is baselined at 5675 kg excluding the propellant and propellant feed system. Four mission scenarios are described which are capable of demonstrating the full capability of the SRPS. The missions considered include spacecraft deployment to possible surveillance platform orbits, a spacecraft storage mission, and an orbit raising round trip corresponding to possible orbit transfer vehicle (OTV) missions.

  17. The design and analysis of transposon insertion sequencing experiments.

    PubMed

    Chao, Michael C; Abel, Sören; Davis, Brigid M; Waldor, Matthew K

    2016-02-01

    Transposon insertion sequencing (TIS) is a powerful approach that can be extensively applied to the genome-wide definition of loci that are required for bacterial growth under diverse conditions. However, experimental design choices and stochastic biological processes can heavily influence the results of TIS experiments and affect downstream statistical analysis. In this Opinion article, we discuss TIS experimental parameters and how these factors relate to the benefits and limitations of the various statistical frameworks that can be applied to the computational analysis of TIS data. PMID:26775926

  18. Illumination system development using design and analysis of computer experiments

    NASA Astrophysics Data System (ADS)

    Keresztes, Janos C.; De Ketelaere, Bart; Audenaert, Jan; Koshel, R. J.; Saeys, Wouter

    2015-09-01

    Computer-assisted optimal illumination design is crucial when developing cost-effective machine vision systems. Standard local optimization methods, such as downhill simplex optimization (DHSO), often yield a solution that is influenced by the starting point by converging to a local minimum, especially when dealing with high-dimensional illumination designs or nonlinear merit spaces. This work presents a novel nonlinear optimization approach based on design and analysis of computer experiments (DACE). The methodology is first illustrated with a 2D case study of four light sources symmetrically positioned along a fixed arc in order to obtain optimal irradiance uniformity on a flat Lambertian reflecting target at the arc center. The first step consists of choosing angular positions with no overlap between sources using a fast, flexible space-filling design. Ray-tracing simulations are then performed at the design points, and a merit function is used for each configuration to quantify the homogeneity of the irradiance at the target. The homogeneities obtained at the design points are then used as input to a Gaussian Process (GP), which provides a preliminary model of the expected merit space. Global optimization is then performed on the GP, making it more likely that optimal parameters are found. Next, the light-positioning case study is further investigated by varying the radius of the arc and by adding two spots symmetrically positioned along an arc diametrically opposed to the first one. In terms of convergence, the DACE approach reached the same uniformity of 97% about 6 times faster than the standard simplex method. The results were successfully validated experimentally, with a relative error of 10%, using a short-wavelength infrared (SWIR) hyperspectral imager monitoring a Spectralon panel illuminated by tungsten halogen sources.
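
    The Python sketch below mimics the general DACE workflow described above (space-filling design, evaluation of a merit function at the design points, a Gaussian Process surrogate, and optimization on the surrogate), using scikit-learn and SciPy with a placeholder merit function in place of the ray-tracing simulations; the source count, bounds, and kernel settings are assumptions for the example.

        import numpy as np
        from scipy.optimize import minimize
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        def merit(angles):
            # Placeholder for the ray-traced non-uniformity of the irradiance
            # produced by sources at these angular positions (degrees).
            a = np.asarray(angles)
            return np.sum((np.sort(a) - np.linspace(20, 160, a.size))**2) / 1e3 + 0.05

        rng = np.random.default_rng(2)
        dim = 4
        # Space-filling design (random sample here) over [0, 180] degrees.
        X = rng.uniform(0, 180, size=(30, dim))
        y = np.array([merit(x) for x in X])

        # Gaussian Process surrogate of the merit space.
        gp = GaussianProcessRegressor(kernel=RBF(length_scale=30.0), normalize_y=True)
        gp.fit(X, y)

        # Global-ish optimization on the cheap surrogate via multi-start local search.
        best = None
        for x0 in rng.uniform(0, 180, size=(20, dim)):
            res = minimize(lambda x: gp.predict(x.reshape(1, -1))[0], x0,
                           bounds=[(0, 180)] * dim, method="L-BFGS-B")
            if best is None or res.fun < best.fun:
                best = res
        print("surrogate optimum (deg):", np.round(best.x, 1),
              "predicted merit:", round(float(best.fun), 4))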

  19. Optimal experiment design for time-lapse traveltime tomography

    SciTech Connect

    Ajo-Franklin, J.B.

    2009-10-01

    Geophysical monitoring techniques offer the only noninvasive approach capable of assessing both the spatial and temporal dynamics of subsurface fluid processes. Increasingly, permanent sensor arrays in boreholes and on the ocean floor are being deployed to improve the repeatability and increase the temporal sampling of monitoring surveys. Because permanent arrays require a large up-front capital investment and are difficult (or impossible) to re-configure once installed, a premium is placed on selecting a geometry capable of imaging the desired target at minimum cost. We present a simple approach to optimizing downhole sensor configurations for monitoring experiments making use of differential seismic traveltimes. In our case, we use a design quality metric based on the accuracy of tomographic reconstructions for a suite of imaging targets. By not requiring an explicit singular value decomposition of the forward operator, evaluation of this objective function scales to problems with a large number of unknowns. We also restrict the design problem by recasting the array geometry into a low-dimensional form more suitable for optimization at a reasonable computational cost. We test two search algorithms on the design problem: the Nelder-Mead downhill simplex method and the Multilevel Coordinate Search algorithm. The algorithms are tested on four crosswell acquisition scenarios relevant to continuous seismic monitoring: a two-parameter array optimization, several scenarios involving four-parameter length/offset optimizations, and a comparison of optimal multi-source designs. In the last case, we also examine trade-offs between source sparsity and the quality of tomographic reconstructions. One general observation is that asymmetric array lengths improve localized image quality in crosswell experiments with a small number of sources and a large number of receivers. Preliminary results also suggest that high-quality differential images can be generated using only a small
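
    A toy version of this design-optimization loop is sketched below in Python: the receiver array is described by two parameters (length and top offset), and the Nelder-Mead simplex method minimizes a simple ray-coverage proxy for reconstruction quality. The geometry and the proxy objective are invented and are much simpler than the tomographic accuracy metric used in the paper.

        import numpy as np
        from scipy.optimize import minimize

        sources = [(0.0, z) for z in np.linspace(0, 100, 5)]   # fixed source well
        target = (50.0, 60.0)                                   # imaging target (x, z)

        def coverage_misfit(params):
            """Proxy design objective: penalize geometries whose source-receiver
            rays pass far from the target cell (poor illumination)."""
            length, offset = params
            length = np.clip(length, 1.0, 100.0)
            offset = np.clip(offset, 0.0, 100.0 - length)
            receivers = [(100.0, z) for z in np.linspace(offset, offset + length, 10)]
            dists = []
            for sx, sz in sources:
                for rx, rz in receivers:
                    # Distance from the target to the straight source-receiver ray.
                    p, q, t = np.array([sx, sz]), np.array([rx, rz]), np.array(target)
                    u = np.clip(np.dot(t - p, q - p) / np.dot(q - p, q - p), 0, 1)
                    dists.append(np.linalg.norm(t - (p + u * (q - p))))
            return float(np.mean(dists))

        res = minimize(coverage_misfit, x0=[30.0, 10.0], method="Nelder-Mead")
        print("optimal [array length, top offset]:", np.round(res.x, 1),
              "mean ray-target distance:", round(res.fun, 2))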

  20. Designing Statistical Language Learners: Experiments on Noun Compounds

    NASA Astrophysics Data System (ADS)

    Lauer, Mark

    1996-09-01

    The goal of this thesis is to advance the exploration of the statistical language learning design space. In pursuit of that goal, the thesis makes two main theoretical contributions: (i) it identifies a new class of designs by specifying an architecture for natural language analysis in which probabilities are given to semantic forms rather than to more superficial linguistic elements; and (ii) it explores the development of a mathematical theory to predict the expected accuracy of statistical language learning systems in terms of the volume of data used to train them. The theoretical work is illustrated by applying statistical language learning designs to the analysis of noun compounds. Both syntactic and semantic analysis of noun compounds are attempted using the proposed architecture. Empirical comparisons demonstrate that the proposed syntactic model is significantly better than those previously suggested, approaching the performance of human judges on the same task, and that the proposed semantic model, the first statistical approach to this problem, exhibits significantly better accuracy than the baseline strategy. These results suggest that the new class of designs identified is a promising one. The experiments also serve to highlight the need for a widely applicable theory of data requirements.

  1. Frac-and-pack stimulation: Application, design, and field experience

    SciTech Connect

    Roodhart, L.P.; Fokker, P.A.; Davies, D.R.; Shlyapobersky, J.; Wong, G.K.

    1994-03-01

    This paper discusses the criteria for selecting wells to be frac-and-packed. The authors show how systematic study of the inflow performance can be used to assess the potential of frac-and-packed wells, to identify the controlling factors, and to optimize design parameters. They also show that fracture conductivity is often the key to successful treatment. This conductivity depends largely on proppant size; formation permeability damage around the created fracture has less effect. Appropriate allowance needs to be made for flow restrictions caused by the presence of the perforations, partial penetration, and non-Darcy effects. They describe the application of the overpressure-calibrated hydraulic fracture model in frac-and-pack treatment design, and discuss some operational considerations with reference to field examples. The full potential of this promising new completion method can be achieved only if the design is tailored to the individual well. This demands high-quality input data, which can be obtained only from a calibration test. This paper presents their strategy for frac-and-pack design, drawing on examples from field experience. They also point out several areas that the industry needs to address, such as the sizing of proppant in soft formations and the interaction between fracturing fluids and resin in resin-coated proppant.

  2. Safeguard By Design Lessons Learned from DOE Experience Integrating Safety into Design

    SciTech Connect

    Hockert, John; Burbank, Roberta L.

    2010-04-13

    This paper identifies the lessons to be learned for the institutionalization of Safeguards by Design (SBD) from the Department of Energy (DOE) experience developing and implementing DOE-STD-1189-2008, Integration of Safety into the Design Process. The experience is valuable because of the similarity of the challenges of integrating safety and safeguards into the design process. The paper reviews the content and development of DOE-STD-1189-2008 from its initial concept in January 2006 to its issuance in March 2008. Lessons learned are identified in the areas of the development and structure of requirements for the SBD process; the target audience for SBD requirements and guidance, the need for a graded approach to SBD, and a possible strategy for development and implementation of SBD within DOE.

  3. Control system design for spacecraft formation flying: Theory and experiment

    NASA Astrophysics Data System (ADS)

    Robertson, Andrew Dunbar

    Spacecraft formation flying is an enabling technology for many future space science missions, such as separated spacecraft interferometers (SSI). However the sensing, control and coordination of such instruments pose many new design challenges. SSI missions will require precise relative sensing and control, fuel-efficient, fuel-balanced operation to maximize mission life and group-level autonomy to reduce operations costs. Enabling these new formation flying capabilities requires precise relative sensing and estimation, enhanced control capabilities such as cooperative control (multiple independent spacecraft acting together), group-level formation management and informed design of a system architecture to manage distributed sensing and control-system resources. This research defines an end-to-end control system, including the key elements unique to the formation flying problem: cooperative control, relative sensing, coordination, and the control-system architecture. A new control-system design optimizes performance under typical spacecraft constraints (e.g., on-off actuators, finite fuel, limited computation power, limited contact with ground control, etc.). Standard control techniques have been extended, and new ones synthesized to meet these goals. In designing this control system, several contributions have been made to the field of spacecraft formation flying control including: an analytic two-vehicle fuel-time-optimal cooperative control algorithm, a fast numeric multi-vehicle, optimal cooperative control algorithm that can be used as a feedforward or a feedback controller, a fleet-level coordinator for autonomous fuel balancing, validation of GPS-based relative sensing for formation flying, and trade studies of the relative control and relative-estimation-architecture design problems. These research contributions are mapped to possible applications for three spacecraft formation flying missions currently in development. The lessons learned from this research

  4. Design of MagLIF experiments using the Z facility

    NASA Astrophysics Data System (ADS)

    Sefkow, Adam

    2013-10-01

    The MagLIF (Magnetized Liner Inertial Fusion) concept has been presented as a path toward obtaining substantial fusion yields using the Z facility, and related experiments have begun in earnest at Sandia National Laboratories. We present fully integrated numerical magnetohydrodynamic simulations of the MagLIF concept, which include laser preheating of the fuel, the presence of electrodes, and end-loss effects. These simulations have been used to design neutron-producing integrated MagLIF experiments on the Z facility for the capabilities that presently exist, namely, D2 fuel, peak currents of Imax ~ 15-18 MA, pre-seeded axial magnetic fields of Bz0 = 7-10 T, and laser preheat energies of Elaser = 2-3 kJ delivered in 2 ns. The first fully integrated experiments, based on these simulations, are planned to occur in 2013. Neutron yields in excess of 10^11 are predicted with the available laser preheat energy and accelerator drive energy. In several years, we plan to upgrade the laser to increase Elaser by several more kJ, provide Bz0 up to 30 T, deliver Imax ~ 22 MA or more to the load, and develop the capability to use DT fuel. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  5. Mission and Design of the Fusion Ignition Research Experiment (FIRE)

    SciTech Connect

    Meade, D. M.; Jardin, S. C.; Schmidt, J. A.; Thome, R. J.; Sauthoff, N. R.; Heitzenroeder, P.; Nelson, Brad E; Ulrickson, M. A.; Kessel, C. E.; Mandrekas, J.; Neumeyer, C. L.; Schultz, J. H.; Rutherford, P. H.; Wesley, J. C.; Young, K. M.; Nevins, W. M.; Houlberg, Wayne A; Uckan, Nermin A; Woolley, R. W.; Baker, C. C.

    2001-01-01

    Experiments are needed to test and extend present understanding of confinement, macroscopic stability, alpha-driven instabilities, and particle/power exhaust in plasmas dominated by alpha heating. A key issue is to what extent pressure profile evolution driven by strong alpha heating will act to self-organize advanced configurations with large bootstrap current fractions and internal transport barriers. A design study of a Fusion Ignition Research Experiment (FIRE) is underway to assess near-term opportunities for advancing the scientific understanding of self-heated fusion plasmas. The emphasis is on understanding the behavior of fusion plasmas dominated by alpha heating (Q ≥ 5) that are sustained for durations comparable to the characteristic plasma time scales (≥ 20 τE and ~ τskin, where τskin is the time for the plasma current profile to redistribute at fixed current). The programmatic mission of FIRE is to attain, explore, understand and optimize alpha-dominated plasmas to provide knowledge for the design of attractive magnetic fusion energy systems. The programmatic strategy is to access the alpha-heating-dominated regime with confidence using the present advanced tokamak database (e.g., ELMy H-mode, ≤ 0.75 Greenwald density) while maintaining the flexibility for accessing and exploring other advanced tokamak modes (e.g., reversed shear, pellet-enhanced performance) at lower magnetic fields and fusion power for longer durations in later stages of the experimental program. A major goal is to develop a design concept that could meet these physics objectives with a construction cost in the range of $1B.

  6. Lower hybrid system design for the Tokamak physics experiment

    SciTech Connect

    Goranson, P.L.; Conner, D.L.; Swain, D.W.; Yugo, J.J.; Bernabei, S.; Greenough, N.

    1995-12-31

    The lower hybrid (LH) launcher configuration has been redesigned to integrate the functions of the vertical four-way power splitter and the front waveguide array (front array). This permits 256 waveguide channels to be fed by only 64 waveguides at the vacuum window interface. The resulting configuration is a more compact coupler, which incorporates the simplicity of a multijunction coupler while preserving the spectral flexibility of a conventional lower hybrid launcher. Other spin-offs of the redesign are reduction in thermal incompatibility between the front array and vacuum windows, improved maintainability, in situ vacuum window replacement, a reduced number of radio frequency (rf) connections, and a weight reduction of 7300 kg. There should be a significant cost reduction as well. Issues associated with the launcher design and fabrication have been addressed by a research and development program that includes brazing of the front array and testing of the power splitter configuration to confirm that phase errors due to reflections in the shorted splitter legs will not significantly impact the rf spectrum. The Conceptual Design Review requires that radiation levels at the torus radial port mounting flange and outer surface of the toroidal field coils should be sufficiently low to permit hands-on maintenance. Low activation materials and neutron shielding are incorporated in the launcher design to meet these requirements. The launcher is configured to couple 3 MW of steady state LH heating/LH current drive power at 3.7 GHz to the Tokamak Physics Experiment plasma.

  7. Target Station Design for the Mu2e Experiment

    SciTech Connect

    Pronskikh, Vitaly; Ambrosio, Giorgio; Campbell, Michael; Coleman, Richard; Ginther, George; Kashikhin, Vadim; Krempetz, Kurt; Lamm, Michael; Lee, Ang; Leveling, Anthony; Mokhov, Nikolai; Nagaslaev, Vladimir; Stefanik, Andrew; Striganov, Sergei; Werkema, Steven; Bartoszek, Larry; Densham, Chris; Loveridge, Peter; Lynch, Kevin; Popp, James

    2014-07-01

    The Mu2e experiment at Fermilab is devoted to the search for the conversion of a negative muon into an electron in the field of a nucleus without the emission of neutrinos. One of the main parts of the Mu2e experimental setup is its Target Station, in which negative pions are generated in interactions of the 8-GeV primary proton beam with a tungsten target. A large-aperture 5-T superconducting production solenoid (PS) enhances pion collection, and an S-shaped transport solenoid (TS) delivers muons and pions to the Mu2e detector. The heat and radiation shield (HRS) protects the PS and the first TS coils. A beam dump absorbs the spent beam. In order for the PS superconducting magnet to operate reliably, the sophisticated HRS was designed and optimized for performance and cost. The beam dump was designed to absorb the spent beam while keeping its temperature and the air activation in the hall at allowable levels. Comprehensive MARS15 simulations have been carried out to optimize all the parts while maximizing muon yield. Results of simulations of critical radiation quantities and their implications for the overall Target Station design and integration will be reported.

  8. Design and Implementation of a Laboratory-Based Drug Design and Synthesis Advanced Pharmacy Practice Experience

    PubMed Central

    Philip, Ashok; Stephens, Mark; Mitchell, Sheila L.

    2015-01-01

    Objective. To provide students with an opportunity to participate in medicinal chemistry research within the doctor of pharmacy (PharmD) curriculum. Design. We designed and implemented a 3-course sequence in drug design or drug synthesis for pharmacy students consisting of a 1-month advanced elective followed by two 1-month research advanced pharmacy practice experiences (APPEs). To maximize student involvement, this 3-course sequence was offered to third-year and fourth-year students twice per calendar year. Assessment. Students were evaluated based on their commitment to the project’s success, productivity, and professionalism. Students also evaluated the course sequence using a 14-item course evaluation rubric. Student feedback was overwhelmingly positive. Students found the experience to be a valuable component of their pharmacy curriculum. Conclusion. We successfully designed and implemented a 3-course research sequence that allows PharmD students in the traditional 4-year program to participate in drug design and synthesis research. Students report the sequence enhanced their critical-thinking and problem-solving skills and helped them develop as independent learners. Based on the success achieved with this sequence, efforts are underway to develop research APPEs in other areas of the pharmaceutical sciences. PMID:25995518

  9. Laser communication experiment. Volume 1: Design study report: Spacecraft transceiver. Part 3: LCE design specifications

    NASA Technical Reports Server (NTRS)

    1970-01-01

    The requirements for the design, fabrication, performance, and testing of a 10.6 micron optical heterodyne receiver subsystem for use in a laser communication system are presented. The receiver subsystem, as a part of the laser communication experiment operates in the ATS 6 satellite and in a transportable ground station establishing two-way laser communications between the spacecraft and the transportable ground station. The conditions under which environmental tests are conducted are reported.

  10. Computational design aspects of a NASP nozzle/afterbody experiment

    NASA Technical Reports Server (NTRS)

    Ruffin, Stephen M.; Venkatapathy, Ethiraj; Keener, Earl R.; Nagaraj, N.

    1989-01-01

    This paper highlights the influence of computational methods on design of a wind tunnel experiment which generically models the nozzle/afterbody flow field of the proposed National Aerospace Plane. The rectangular slot nozzle plume flow field is computed using a three-dimensional, upwind, implicit Navier-Stokes solver. Freestream Mach numbers of 5.3, 7.3, and 10 are investigated. Two-dimensional parametric studies of various Mach numbers, pressure ratios, and ramp angles are used to help determine model loads and afterbody ramp angle and length. It was found that the center of pressure on the ramp occurs at nearly the same location for all ramp angles and test conditions computed. Also, to prevent air liquefaction, it is suggested that a helium-air mixture be used as the jet gas for the highest Mach number test case.

  11. Gender Consideration in Experiment Design for Air Break in Prebreathe

    NASA Technical Reports Server (NTRS)

    Conkin, Johnny; Dervay, Joseph P.; Gernhardt, Michael L.

    2007-01-01

    If gender is a confounder of the decompression sickness (DCS) or venous gas emboli (VGE) outcomes of a proposed air break in oxygen prebreathe (PB) project, then decisions about the final experiment design must be made. We evaluated if the incidence of DCS and VGE from tests in altitude chambers over 20 years were different between men and women after resting and exercise PB protocols. Nitrogen washout during PB is our primary risk mitigation strategy to prevent subsequent DCS and VGE in subjects. Bubbles in the pulmonary artery (venous blood) were detected from the precordial position using Doppler ultrasound bubble detectors. The subjects were monitored for VGE for four min at about 15 min intervals for the duration of the altitude exposure, with maximum bubble grade assigned a Spencer Grade of IV.

  12. The MARTE VNIR Imaging Spectrometer Experiment: Design and Analysis

    NASA Astrophysics Data System (ADS)

    Brown, Adrian J.; Sutter, Brad; Dunagan, Stephen

    2008-10-01

    We report on the design, operation, and data analysis methods employed on the VNIR imaging spectrometer instrument that was part of the Mars Astrobiology Research and Technology Experiment (MARTE). The imaging spectrometer is a hyperspectral scanning pushbroom device sensitive to VNIR wavelengths from 400-1000 nm. During the MARTE project, the spectrometer was deployed to the Río Tinto region of Spain. We analyzed subsets of three cores from Río Tinto using a new band modeling technique. We found most of the MARTE drill cores to contain predominantly goethite, though spatially coherent areas of hematite were identified in Core 23. We also distinguished non Fe-bearing minerals that were subsequently analyzed by X-ray diffraction (XRD) and found to be primarily muscovite. We present drill core maps that include spectra of goethite, hematite, and non Fe-bearing minerals.

  13. Design of Experiments Results for the Feedthru Insulator

    SciTech Connect

    BENAVIDES,GILBERT L.; VAN ORNUM,DAVID J.; BACA,MAUREEN R.; APPEL,PATRICIA E.

    1999-12-01

    A design of experiments (DoE) was performed at Ceramtec to improve the yield of a cermet part known as the feedthru insulator. The factors chosen to be varied in this DoE were syringe orifice size, fill condition, solvent, and surfactant. These factors were chosen because of their anticipated effect on the cermet slurry and its consequences to the feedthru insulator in succeeding fabrication operations. Response variables to the DoE were chosen to be indirect indicators of production yield for the feedthru insulator. The solvent amount used to mix the cermet slurry had the greatest overall effect on the response variables. Based upon this DoE, there is the potential to improve the yield not only for the feedthru insulator but for other cermet parts as well. This report thoroughly documents the DoE and contains additional information regarding the feedthru insulator.

  14. Design, Construction, Alignment, and Calibration of a Compact Velocimetry Experiment

    SciTech Connect

    Kaufman, Morris I.; Malone, Robert M.; Frogget, Brent C.; Esquibel, David L.; Romero, Vincent T.; Lare, Gregory A.; Briggs, Bart; Iverson, Adam J.; Frayer, Daniel K.; DeVore, Douglas; Cata, Brian

    2007-09-21

    A velocimetry experiment has been designed to measure shock properties for small cylindrical metal targets (8-mm-diameter by 2-mm thick). A target is accelerated by high explosives, caught, and retrieved for later inspection. The target is expected to move at a velocity of 0.1 to 3 km/sec. The complete experiment canister is approximately 105 mm in diameter and 380 mm long. Optical velocimetry diagnostics include the Velocity Interferometer System for Any Reflector (VISAR) and Photon Doppler Velocimetry (PDV). The packaging of the velocity diagnostics is not allowed to interfere with the catchment or an X-ray imaging diagnostic. A single optical relay, using commercial lenses, collects Doppler-shifted light for both VISAR and PDV. The use of fiber optics allows measurement of point velocities on the target surface during accelerations occurring over 15 mm of travel. The VISAR operates at 532 nm and has separate illumination fibers requiring alignment. The PDV diagnostic operates at 1550 nm, but is aligned and focused at 670 nm. The VISAR and PDV diagnostics are complementary measurements and they image spots in close proximity on the target surface. Because the optical relay uses commercial glass, the axial positions of the optical fibers for PDV and VISAR are offset to compensate for chromatic aberrations. The optomechanical design requires careful attention to fiber management, mechanical assembly and disassembly, positioning of the foam catchment, and X-ray diagnostic field-of-view. Calibration and alignment data are archived at each stage of the assembly sequence.

  15. PV-Diesel Hybrid SCADA Experiment Network Design

    NASA Technical Reports Server (NTRS)

    Kalu, Alex; Durand, S.; Emrich, Carol; Ventre, G.; Wilson, W.; Acosta, R.

    1999-01-01

    The essential features of an experimental network for renewable power system satellite-based supervisory, control and data acquisition (SCADA) are communication links, controllers, diagnostic equipment and a hybrid power system. Required components for implementing the network consist of two satellite ground stations, two satellite modems, two 486 PCs, two telephone receivers, two telephone modems, two analog telephone lines, one digital telephone line, a hybrid power system equipped with a controller, and a satellite spacecraft. In the technology verification experiment (TVE) conducted by Savannah State University and the Florida Solar Energy Center, the renewable energy hybrid system is the Apex-1000 Mini-Hybrid, which is equipped with the NGC3188 for user interface and remote control and the NGC2010 for monitoring and basic control tasks. This power system is connected to a satellite modem via a smart interface, RS232. Commands are sent to the power system control unit through a control PC designated PC1. PC1 is thus connected to a satellite modem through RS232. A second PC, designated PC2, the diagnostic PC, is connected to both satellite modems via separate analog telephone lines for checking the modems' health. PC2 is also connected to PC1 via a telephone line. Due to the unavailability of a second ground station for the ACTS, one ground station is used to serve both the sending and receiving functions in this experiment. Signal is sent from the control PC to the hybrid system at a frequency f(sub 1), different from f(sub 2), the frequency of the signal from the hybrid system to the control PC. f(sub 1) and f(sub 2) are sufficiently separated to avoid interference.

  16. Designing an experiment to measure cellular interaction forces

    NASA Astrophysics Data System (ADS)

    McAlinden, Niall; Glass, David G.; Millington, Owain R.; Wright, Amanda J.

    2013-09-01

    Optical trapping is a powerful tool in Life Science research and is becoming commonplace in many microscopy laboratories and facilities. The force applied by the laser beam on the trapped object can be accurately determined, allowing any external forces acting on the trapped object to be deduced. We aim to design a series of experiments that use an optical trap to measure and quantify the interaction force between immune cells. In order to cause minimum perturbation to the sample we plan to directly trap T cells and remove the need to introduce exogenous beads to the sample. This poses a series of challenges and raises questions that need to be answered in order to design a set of effective end-point experiments. A typical cell is large compared to the beads normally trapped and highly non-uniform - can we reliably trap such objects and prevent them from rolling and re-orientating? In this paper we show how a spatial light modulator can produce a triple-spot trap, as opposed to a single-spot trap, giving complete control over the object's orientation and preventing it from rolling due, for example, to Brownian motion. To use an optical trap as a force transducer to measure an external force you must first have a reliably calibrated system. The optical trapping force is typically calibrated either by invoking equipartition and observing the Brownian motion of the trapped object, or by using an escape force method, e.g. the viscous drag force method. In this paper we examine the relationship between force and displacement, as well as measuring the maximum displacement from the equilibrium position before an object falls out of the trap, hence determining the conditions under which the different calibration methods should be applied.
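
    The equipartition calibration mentioned above can be summarized in a few lines of Python: the trap stiffness follows from k = kB*T/<x^2>, and a calibrated stiffness converts a measured mean displacement into a force. The temperature, stiffness, and displacement below are illustrative values, not data from the paper.

        import numpy as np

        kB = 1.380649e-23          # J/K
        T = 295.0                  # K, assumed sample temperature

        # Synthetic position record (metres) of a trapped object fluctuating
        # in a harmonic potential of "true" stiffness 2e-5 N/m.
        rng = np.random.default_rng(7)
        k_true = 2e-5
        x = rng.normal(0.0, np.sqrt(kB * T / k_true), size=200_000)

        # Equipartition: (1/2) k <x^2> = (1/2) kB T  =>  k = kB T / var(x)
        k_est = kB * T / np.var(x)
        print(f"estimated stiffness: {k_est:.2e} N/m (true {k_true:.1e})")

        # With a calibrated stiffness, an external force is read out from the
        # mean displacement of the trapped object from the trap centre.
        mean_displacement = 50e-9  # m, hypothetical
        print(f"inferred interaction force: {k_est * mean_displacement * 1e12:.1f} pN")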

  17. Thermal Design of a Bitter-Type Electromagnet for Dusty Plasma Experiments: Prototype Design and Construction

    NASA Astrophysics Data System (ADS)

    Birmingham, W. J.; Bates, E. M.; Romero-Talamás, Carlos; Rivera, W. F.

    2015-11-01

    For the purpose of analyzing magnetized dusty plasma at the University of Maryland Baltimore County (UMBC) Dusty Plasma Laboratory, we are designing a resistive, water-cooled Bitter-type electromagnet. When completed, the magnet will be programmable to generate fields of up to 10 T for at least 10 seconds and up to several minutes. An analytic thermal design method was developed for establishing the location of elongated axial cooling passages. Comparison with finite element analysis (FEA) data reveals that the thermal design method is capable of generating cooling channel patterns that establish manageable temperature profiles within the magnet. With our analytic method, cooling hole patterns can be generated in seconds instead of hours with FEA software. To further validate our thermal analysis as well as the manufacturing techniques of our magnet design, we are now constructing a prototype electromagnet. The prototype is designed to operate continuously at 1 T with a current of 750 A, and has four diagnostic ports that can accommodate thermocouples and optical access to the water flow. A 1.25 inch diameter bore allows for axial field measurements and provides space for small-scale experiments. Thermal analysis and specifics of the electromagnet design are presented.

  18. Design of a miniature explosive isentropic compression experiment

    SciTech Connect

    Tasker, Douglas G

    2010-01-01

    The purpose of this design study is to adapt the High Explosive Pulsed Power Isentropic Compression Experiment (HEPP-ICE) to milligram quantities of materials at stresses of {approx}100 GPa. For this miniature application we assume that a parallel plate stripline of {approx}2.5 mm width is needed to compress the samples. In any parallel plate load, the rising currents flow preferentially along the outside edges of the load where the specific impedance is a minimum [1]. Therefore, the peak current must be between 1 and 2 MA to reach a stress of 100 GPa in the center of a 2.5 mm wide parallel plate load; these are small relative to typical HEPP-ICE currents. We show that a capacitor bank alone exceeds the requirements of this miniature ICE experiment and a flux compression generator (FCG) is not necessary. The proposed circuit will comprise one half of the 2.4-MJ bank, i.e., the 6-mF, 20-kV, 1.2 MJ capacitor bank used in the original HEPP-ICE circuit. Explosive opening and closing switches will still be required because the rise time of the capacitor circuit would be of the order of 30 {micro}s without them. For isentropic loading in these small samples, stress rise times of {approx}200 ns are required.
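
    The quoted current range can be checked with the ideal strip-line relations B = mu0*I/w and P = B^2/(2*mu0); the short Python sketch below evaluates this estimate, which ignores the edge current concentration discussed above and therefore underestimates the current needed to reach a given stress at the load center.

        import math

        mu0 = 4e-7 * math.pi      # T*m/A
        w = 2.5e-3                # m, strip-line (load) width

        def drive_pressure(I):
            """Ideal magnetic drive pressure for a parallel-plate load of width w."""
            B = mu0 * I / w              # field between the plates
            return B**2 / (2 * mu0)      # magnetic pressure, Pa

        for I in (1.0e6, 1.5e6, 2.0e6):
            print(f"I = {I/1e6:.1f} MA  ->  P = {drive_pressure(I)/1e9:.0f} GPa")
        # Under these idealized assumptions ~1 MA already gives ~100 GPa; the
        # current concentration at the load edges discussed above pushes the
        # requirement at the load center toward the 1-2 MA range quoted here.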

  19. Fast ignition integrated experiments and high-gain point design

    NASA Astrophysics Data System (ADS)

    Shiraga, H.; Nagatomo, H.; Theobald, W.; Solodov, A. A.; Tabak, M.

    2014-05-01

    Integrated fast ignition experiments were performed at ILE, Osaka, and LLE, Rochester, in which a nanosecond driver laser implodes a deuterated plastic shell in front of the tip of a hollow metal cone and an intense ultrashort-pulse laser is injected through the cone to heat the compressed plasma. Based on the initial successful results of fast electron heating of cone-in-shell targets, large-energy short-pulse laser beam lines were constructed and became operational: OMEGA-EP at Rochester and LFEX at Osaka. Neutron enhancement due to heating with a ˜kJ short-pulse laser has been demonstrated in the integrated experiments at Osaka and Rochester. The neutron yields are being analysed by comparing the experimental results with simulations. Details of the fast electron beam transport and the electron energy deposition in the imploded fuel plasma are complicated and further studies are imperative. The hydrodynamics of the implosion was studied including the interaction of the imploded core plasma with the cone tip. Theory and simulation studies are presented on the hydrodynamics of a high-gain target for a fast ignition point design.

  20. Fast ignition integrated experiments and high-gain point design

    SciTech Connect

    Shiraga, H.; Nagatomo, H.; Theobald, W.; Solodov, A. A.; Tabak, M.

    2014-04-17

    Here, integrated fast ignition experiments were performed at ILE, Osaka, and LLE, Rochester, in which a nanosecond driver laser implodes a deuterated plastic shell in front of the tip of a hollow metal cone and an intense ultrashort-pulse laser is injected through the cone to heat the compressed plasma. Based on the initial successful results of fast electron heating of cone-in-shell targets, large-energy short-pulse laser beam lines were constructed and became operational: OMEGA-EP at Rochester and LFEX at Osaka. Neutron enhancement due to heating with a ~kJ short-pulse laser has been demonstrated in the integrated experiments at Osaka and Rochester. The neutron yields are being analyzed by comparing the experimental results with simulations. Details of the fast electron beam transport and the electron energy deposition in the imploded fuel plasma are complicated and further studies are imperative. The hydrodynamics of the implosion was studied including the interaction of the imploded core plasma with the cone tip. Theory and simulation studies are presented on the hydrodynamics of a high-gain target for a fast ignition point design.

  1. Design study for a diverging supernova explosion experiment on NIF

    NASA Astrophysics Data System (ADS)

    Flaig, Markus; Plewa, Tomasz; Keiter, Paul; Grosskopf, Michael; Kuranz, Carolyn; Drake, Paul; Park, Hye-Sook

    2013-10-01

    We report on design simulations of a spherically-diverging, multi-interface, supernova-relevant Rayleigh-Taylor experiment (DivSNRT) to be carried out at the National Ignition Facility (NIF). The simulations are performed in two and three dimensions using the block-adaptive, multi-group radiative diffusion hydrodynamics code CRASH and the FLASH-based MHD code Proteus. In the present study, we concentrate mainly on a planar variant of the experiment. We assess the sensitivity of the Rayleigh-Taylor instability growth on numerical discretization, variations in the laser drive energy and the manufacturing noise at the material interfaces. We find that a simple buoyancy-drag model accurately predicts the mixed-layer width obtained in the simulations. We use synthetic radiographs to optimize the diagnostic system and the experimental setup. Finally, we perform a series of exploratory MHD simulations and investigate the self-generation of magnetic fields and their role in the system evolution. Supported by the DOE grant DE-SC0008823.

  2. Design and Assembly of the Magnetized Dusty Plasma Experiment (MDPX)

    NASA Astrophysics Data System (ADS)

    Fisher, Ross; Artis, Darrick; Lynch, Brian; Wood, Keith; Shaw, Joseph; Gilmore, Kevin; Robinson, Daniel; Polka, Christian; Konopka, Uwe; Thomas, Edward; Merlino, Robert; Rosenberg, Marlene

    2013-10-01

    Over the last two years, the Magnetized Dusty Plasma Experiment (MDPX) has been under construction at Auburn University. This new research device, whose assembly will be completed in late summer 2013, uses a four-coil, superconducting, high magnetic field system (|B| >= 4 Tesla) to investigate the confinement, charging, transport, and instabilities in a dusty plasma. A new feature of the MDPX device is the ability to operate the magnetic coils independently to allow a variety of magnetic configurations from highly uniform to quadrupole-like. Envisioned as a multi-user facility, the MDPX device features a cylindrical vacuum vessel whose primary experimental region is an octagonal chamber that has a 35.5 cm inner diameter and is 19 cm tall. There is substantial diagnostic and optical access through eight 10.2 cm × 12.7 cm side ports. The chamber can also be equipped with two 15.2 cm diameter, 76 cm long extensions to allow long plasma column experiments, particularly long-wavelength dust wave studies. This presentation will discuss the final design, assembly, and installation of the MDPX device and will describe its supporting laboratory facility. This work is supported by a National Science Foundation - Major Research Instrumentation (NSF-MRI) award, PHY-1126067.

  3. Propagation-related AMT design aspects and supporting experiments

    NASA Technical Reports Server (NTRS)

    Dessouky, Khaled; Estabrook, Polly

    1991-01-01

    The ACTS Mobile Terminal (AMT) is presently being developed with the goal of significantly extending commercial satellite applications and their user base. A thorough knowledge of the Ka-band channel characteristics is essential to the proper design of a commercially viable system that efficiently utilizes valuable resources. To date, only limited tests have been performed to characterize the Ka-band channel, and they have focused on the needs of fixed terminals. Part of the value of the AMT as a Ka-band test bed is its function as a vehicle through which tests specifically applicable to mobile satellite communications can be performed. The exact propagation environment is encountered, with the proper set of elevation angles, vehicle antenna gains and patterns, roadside shadowing, rain, and Doppler. The ability to measure all of the above, as well as to correlate their effects with observed communication system performance, creates an invaluable opportunity to understand in depth Ka-band's potential for supporting mobile and personal communications. This paper discusses the propagation information required for system design, the setup with ACTS that will enable obtaining this information, and finally the types of experiments to be performed and data to be gathered by the AMT to meet this objective.

  4. The design of the Tokamak Physics Experiment (TPX)

    NASA Astrophysics Data System (ADS)

    Schmidt, J. A.; Thomassen, K. I.; Goldston, R. J.; Neilson, G. H.; Nevins, W. M.; Sinnis, J. C.; Andersen, P.; Bair, W.; Barr, W. L.; Batchelor, D. B.; Baxi, C.; Berg, G.; Bernabei, S.; Bialek, J. M.; Bonoli, P. T.; Boozer, A.; Bowers, D.; Bronner, G.; Brooks, J. N.; Brown, T. G.; Bulmer, R.; Butner, D.; Campbell, R.; Casper, T.; Chaniotakis, E.; Chaplin, M.; Chen, S. J.; Chin, E.; Chrzanowski, J.; Citrolo, J.; Cole, M. J.; Dahlgren, F.; Davis, F. C.; Davis, J.; Davis, S.; Diatchenko, N.; Dinkevich, S.; Feldshteyn, Y.; Felker, B.; Feng, T.; Fenstermacher, M. E.; Fleming, R.; Fogarty, P. J.; Fragetta, W.; Fredd, E.; Gabler, M.; Galambos, J.; Gohar, Y.; Goranson, P. L.; Greenough, N.; Grisham, L. R.; Haines, J.; Haney, S.; Hassenzahl, W.; Heim, J.; Heitzenroeder, P. J.; Hill, D. N.; Hodapp, T.; Houlberg, W. A.; Hubbard, A.; Hyatt, A.; Jackson, M.; Jaeger, E. F.; Jardin, S. C.; Johnson, J.; Jones, G. H.; Juliano, D. R.; Junge, R.; Kalish, M.; Kessel, C. E.; Knutson, D.; LaHaye, R. J.; Lang, D. D.; Langley, R. A.; Liew, S.-L.; Lu, E.; Mantz, H.; Manickam, J.; Mau, T. K.; Medley, S.; Mikkelsen, D. R.; Miller, R.; Monticello, D.; Morgan, D.; Moroz, P.; Motloch, C.; Mueller, J.; Myatt, L.; Nelson, B. E.; Neumeyer, C. L.; Nilson, D.; O'Conner, T.; Pearlstein, L. D.; Peebles, W. A.; Pelovitz, M.; Perkins, F. W.; Perkins, L. J.; Petersen, D.; Pillsbury, R.; Politzer, P. A.; Pomphrey, N.; Porkolab, M.; Posey, A.; Radovinsky, A.; Raftopoulis, S.; Ramakrishnan, S.; Ramos, J.; Rauch, W.; Ravenscroft, D.; Redler, K.; Reiersen, W. T.; Reiman, A.; Reis, E.; Rewoldt, G.; Richards, D. J.; Rocco, R.; Rognlien, T. D.; Ruzic, D.; Sabbagh, S.; Sapp, J.; Sayer, R. O.; Scharer, J. E.; Schmitz, L.; Schnitz, J.; Sevier, L.; Shipley, S. E.; Simmons, R. T.; Slack, D.; Smith, G. R.; Stambaugh, R.; Steill, G.; Stevenson, T.; Stoenescu, S.; Onge, K. T. St.; Stotler, D. P.; Strait, T.; Strickler, D. J.; Swain, D. W.; Tang, W.; Tuszewski, M.; Ulrickson, M. A.; VonHalle, A.; Walker, M. S.; Wang, C.; Wang, P.; Warren, J.; Werley, K. A.; West, W. P.; Williams, F.; Wong, R.; Wright, K.; Wurden, G. A.; Yugo, J. J.; Zakharov, L.; Zbasnik, J.

    1993-09-01

    The Tokamak Physics Experiment is designed to develop the scientific basis for a compact and continuously operating tokamak fusion reactor. It is based on an emerging class of tokamak operating modes, characterized by beta limits well in excess of the Troyon limit, confinement scaling well in excess of H-mode, and bootstrap current fractions approaching unity. Such modes are attainable through the use of advanced, steady state plasma controls including strong shaping, current profile control, and active particle recycling control. Key design features of the TPX are superconducting toroidal and poloidal field coils; actively-cooled plasma-facing components; a flexible heating and current drive system; and a spacious divertor for flexibility. Substantial deuterium plasma operation is made possible with an in-vessel remote maintenance system, a low-activation titanium vacuum vessel, and shielding of ex-vessel components. The facility will be constructed as a national project with substantial participation by U.S. industry. Operation will begin with first plasma in the year 2000.

  5. Computational Design of Short Pulse Laser Driven Iron Opacity Experiments

    NASA Astrophysics Data System (ADS)

    Martin, Madison E.; London, Richard A.; Goluoglu, Sedat; Whitley, Heather D.

    2015-11-01

    Opacity is a critical parameter in the transport of radiation in systems such as inertial confinement fusion capsules and stars. The resolution of current disagreements between solar models and helioseismological observations would benefit from experimental validation of theoretical opacity models. Short pulse lasers can be used to heat targets to higher temperatures and densities than long pulse lasers and pulsed power machines, thus potentially enabling access to emission spectra at conditions relevant to solar models. In order to ensure that the relevant plasma conditions are accessible and that an emission measurement is practical, we use computational design of experiments to optimize the target characteristics and laser conditions. Radiation-hydrodynamic modeling, using HYDRA, is used to investigate the effects of modifying laser irradiance, target dimensions, and dopant dilution on the plasma conditions and emission of an iron opacity target. Several optimized designs reaching temperatures and densities relevant to the radiative zone of the sun will be discussed. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. Lawrence Livermore National Security, LLC.

  6. The application of statistically designed experiments to resistance spot welding

    NASA Technical Reports Server (NTRS)

    Hafley, Robert A.; Hales, Stephen J.

    1991-01-01

    State-of-the-art Resistance Spot Welding (RSW) equipment has the potential to permit real-time monitoring of operations through advances in computerized process control. In order to realize adaptive feedback capabilities, it is necessary to establish correlations among process variables, welder outputs, and weldment properties. The initial step toward achieving this goal must involve assessment of the effect of specific process inputs and the interactions among these variables on spot weld characteristics. This investigation evaluated these effects through the application of a statistically designed experiment to the RSW process. A half-factorial, Taguchi L16 design was used to understand and refine an RSW schedule developed for welding dissimilar aluminum-lithium alloys of different thickness. The baseline schedule had been established previously by traditional trial-and-error methods based on engineering judgment and one-factor-at-a-time studies. A hierarchy of inputs with respect to each other was established, and the significance of these inputs with respect to experimental noise was determined. Useful insight was gained into the effect of interactions among process variables, particularly with respect to weldment defects. The effects of equipment-related changes associated with disassembly and recalibration were also identified. In spite of an apparent decrease in equipment performance, a significant improvement in the maximum strength for defect-free welds compared to the baseline schedule was achieved.
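
    For readers unfamiliar with such screening matrices, the sketch below builds a 16-run, two-level half-fraction design for five factors (the fifth factor aliased onto the four-factor interaction), which is the same size as the L16 array mentioned above. The factor names are placeholders, not the welding parameters from the study.

```python
from itertools import product

# Placeholder factor names, not the welding parameters used in the study.
factors = ["current", "force", "weld_time", "hold_time", "electrode_dia"]

runs = []
for a, b, c, d in product((-1, +1), repeat=4):   # full two-level design in the first four factors
    e = a * b * c * d                            # alias the fifth factor onto the ABCD interaction
    runs.append(dict(zip(factors, (a, b, c, d, e))))

for i, run in enumerate(runs, start=1):
    print(i, run)
print(f"{len(runs)} runs instead of {2 ** len(factors)} for the full factorial")
```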

  7. National Spherical Torus Experiment (NSTX) Torus Design, Fabrication and Assembly

    SciTech Connect

    C. Neumeyer; G. Barnes; J.H. Chrzanowski; P. Heitzenroeder; et al

    1999-11-01

    The National Spherical Torus Experiment (NSTX) is a low aspect ratio spherical torus (ST) located at Princeton Plasma Physics Laboratory (PPPL). Fabrication, assembly, and initial power tests were completed in February of 1999. The majority of the design and construction efforts were concentrated on the Torus system components. The Torus system includes the centerstack assembly, external Poloidal and Toroidal coil systems, vacuum vessel, torus support structure and plasma facing components (PFC's). NSTX's low aspect ratio required that the centerstack be made with the smallest radius possible. This, and the need to bake NSTX's carbon-carbon composite plasma facing components at 350 degrees C, were major drivers in the design of NSTX. The Centerstack Assembly consists of the inner legs of the Toroidal Field (TF) windings, the Ohmic Heating (OH) solenoid and its associated tension cylinder, three inner Poloidal Field (PF) coils, thermal insulation, diagnostics and an Inconel casing which forms the inner wall of the vacuum vessel boundary. It took approximately nine months to complete the assembly of the Centerstack. The tight radial clearances and the extreme length of the major components added complexity to the assembly of the Centerstack components. The vacuum vessel was constructed of 304 stainless steel and required approximately seven months to complete and deliver to the Test Cell. Several of the issues associated with the construction of the vacuum vessel were control of dimensional stability following welding and control of the permeability of the welds. A great deal of time and effort was devoted to defining the correct weld process and material selection to meet our design requirements. The PFCs will be baked out at 350 degrees C while the vessel is maintained at 150 degrees C. This required care in designing the supports so they can accommodate the high electromagnetic loads resulting from plasma disruptions as well as the relative thermal expansions.

  8. Design issues in toxicogenomics using DNA microarray experiment

    SciTech Connect

    Lee, Kyoung-Mu; Kim, Ju-Han; Kang, Daehee. E-mail: dhkang@snu.ac.kr

    2005-09-01

    The methods of toxicogenomics might be classified into omics studies (e.g., genomics, proteomics, and metabolomics) and population studies focusing on risk assessment and gene-environment interaction. In omics studies, the microarray is the most popular approach. Up to 20,000 genes falling into several categories (e.g., xenobiotic metabolism, cell cycle control, DNA repair, etc.) can be selected according to an a priori hypothesis. The appropriate type of samples and species should be selected in advance. Multiple doses and varied exposure durations are suggested to identify those genes clearly linked to toxic response. Microarray experiments can be affected by numerous nuisance variables, including experimental designs, sample extraction, and type of scanner. The number of slides might be determined from the magnitude and variance of the expression change, the false-positive rate, and the desired power; pooling samples is an alternative. Online databases on chemicals with known exposure-disease outcomes and genetic information can aid the interpretation of the normalized results. Gene function can be inferred from microarray data analyzed by bioinformatics methods such as cluster analysis. Population studies often adopt a hospital-based or nested case-control design. Biases in subject selection and exposure assessment should be minimized, and confounding should be controlled for in stratified or multiple regression analyses. Optimal sample sizes depend on the statistical test for gene-to-environment or gene-to-gene interaction. The design issues addressed in this mini-review are crucial in conducting toxicogenomics studies. In addition, an integrative approach combining exposure assessment, epidemiology, and clinical trials is required.
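
    The slide-number guidance above can be made concrete with the usual two-sample normal-approximation sample-size formula, n = 2*(z_{1-alpha/2} + z_{power})^2 * sigma^2 / delta^2 per group. The sketch below uses illustrative numbers, not values from the review.

```python
import math
from scipy.stats import norm

def arrays_per_group(delta_log2, sigma_log2, alpha_per_gene, power):
    """Two-sample normal approximation: n = 2*(z_{1-alpha/2} + z_{power})**2 * sigma**2 / delta**2."""
    z = norm.ppf(1 - alpha_per_gene / 2) + norm.ppf(power)
    return math.ceil(2 * (z * sigma_log2 / delta_log2) ** 2)

# Illustrative numbers: detect a 2-fold change (1.0 on the log2 scale) with SD 0.7,
# a per-gene alpha of 0.001 as a rough multiple-testing allowance, and 90% power.
print(arrays_per_group(delta_log2=1.0, sigma_log2=0.7, alpha_per_gene=1e-3, power=0.9))
```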

  9. Interim Service ISDN Satellite (ISIS) hardware experiment development for advanced ISDN satellite designs and experiments

    NASA Technical Reports Server (NTRS)

    Pepin, Gerard R.

    1992-01-01

    The Interim Service Integrated Service Digital Network (ISDN) Satellite (ISIS) Hardware Experiment Development for Advanced Satellite Designs describes the development of the ISDN Satellite Terminal Adapter (ISTA) capable of translating ISDN protocol traffic into Time Division Multiple Access (TDMA) signals for use by a communications satellite. The ISTA connects the Type 1 Network Termination (NT1) via the U-interface on the line termination side of the CPE to the RS-499 interface for satellite uplink. In the opposite direction, the same ISTA converts the RS-499 data back to U-interface data with a simple switch setting.

  10. Having One's Cake and Eating It, Too: Combining True Experiments with Regression Discontinuity Designs

    ERIC Educational Resources Information Center

    Mandell, Marvin B.

    2008-01-01

    Both true experiments and regression discontinuity (RD) designs produce unbiased estimates of effects. However, true experiments are, of course, often criticized on equity grounds, whereas RD designs entail sacrifices in terms of statistical precision. In this article, a hybrid of true experiments and RD designs is considered. This hybrid entails…

  11. Interlopers 3D: experiences designing a stereoscopic game

    NASA Astrophysics Data System (ADS)

    Weaver, James; Holliman, Nicolas S.

    2014-03-01

    Background In recent years 3D-enabled televisions, VR headsets and computer displays have become more readily available in the home. This presents an opportunity for game designers to explore new stereoscopic game mechanics and techniques that have previously been unavailable in monocular gaming. Aims To investigate the visual cues that are present in binocular and monocular vision, identifying which are relevant when gaming using a stereoscopic display. To implement a game whose mechanics are so reliant on binocular cues that the game becomes impossible or at least very difficult to play in non-stereoscopic mode. Method A stereoscopic 3D game was developed whose objective was to shoot down advancing enemies (the Interlopers) before they reached their destination. Scoring highly required players to make accurate depth judgments and target the closest enemies first. A group of twenty participants played both a basic and advanced version of the game in both monoscopic 2D and stereoscopic 3D. Results The results show that in both the basic and advanced game, participants achieved higher scores when playing in stereoscopic 3D. The advanced game showed that by disrupting the depth-from-motion cue the game became more difficult in monoscopic 2D. Results also show a certain amount of learning taking place, with players able to score higher and finish the game faster as the experiment progressed. Conclusions Although the game was not impossible to play in monoscopic 2D, participants' results show that doing so put them at a significant disadvantage compared to playing in stereoscopic 3D.

  12. Global 21 cm signal experiments: A designer's guide

    NASA Astrophysics Data System (ADS)

    Liu, Adrian; Pritchard, Jonathan R.; Tegmark, Max; Loeb, Abraham

    2013-02-01

    The global (i.e., spatially averaged) spectrum of the redshifted 21 cm line has generated much experimental interest lately, thanks to its potential to be a direct probe of the epoch of reionization and the dark ages, during which the first luminous objects formed. Since the cosmological signal in question has a purely spectral signature, most experiments that have been built, designed, or proposed have essentially no angular sensitivity. This can be problematic because with only spectral information, the expected global 21 cm signal can be difficult to distinguish from foreground contaminants such as galactic synchrotron radiation, since both are spectrally smooth and the latter is many orders of magnitude brighter. In this paper, we establish a systematic mathematical framework for global signal data analysis. The framework removes foregrounds in an optimal manner, complementing spectra with angular information. We use our formalism to explore various experimental design trade-offs, and find that (1) with spectral-only methods, it is mathematically impossible to mitigate errors that arise from uncertainties in one’s foreground model; (2) foreground contamination can be significantly reduced for experiments with fine angular resolution; (3) most of the statistical significance in a positive detection during the dark ages comes from a characteristic high-redshift trough in the 21 cm brightness temperature; (4) measurement errors decrease more rapidly with integration time for instruments with fine angular resolution; and (5) better foreground models can help reduce errors, but once a modeling accuracy of a few percent is reached, significant improvements in accuracy will be required to further improve the measurements. We show that if observations and data analysis algorithms are optimized based on these findings, an instrument with a 5° wide beam can achieve highly significant detections (greater than 5σ) of even extended (high Δz) reionization scenarios
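
    Finding (1) can be illustrated with a toy calculation (not the paper's formalism): a smooth polynomial fit in log-frequency, standing in for an imperfect foreground model, absorbs part of a simulated global-signal trough. All amplitudes and shapes below are made up.

```python
import numpy as np

nu = np.linspace(40.0, 120.0, 200)                        # frequency, MHz
foreground = 3000.0 * (nu / 80.0) ** -2.5                 # smooth synchrotron-like power law, K
signal = -0.1 * np.exp(-0.5 * ((nu - 70.0) / 10.0) ** 2)  # made-up ~100 mK absorption trough, K
sky = foreground + signal

# Fit a smooth 5th-order polynomial in log-frequency as a stand-in foreground model.
coeffs = np.polyfit(np.log(nu), np.log(sky), deg=5)
residual = sky - np.exp(np.polyval(coeffs, np.log(nu)))

print(f"true trough depth:     {signal.min() * 1e3:6.1f} mK")
print(f"residual trough depth: {residual.min() * 1e3:6.1f} mK (partly absorbed by the smooth fit)")
```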

  13. Experiments and other methods for developing expertise with design of experiments in a classroom setting

    NASA Technical Reports Server (NTRS)

    Patterson, John W.

    1990-01-01

    The only way to gain genuine expertise in Statistical Process Control (SPC) and the design of experiments (DOX) is with repeated practice, but not on canned problems with dead data sets. Rather, one must negotiate a wide variety of problems, each with its own peculiarities and its own constantly changing data. The problems should not be of the type for which there is a single, well-defined answer that can be looked up in a fraternity file or in some text. The problems should match as closely as possible the open-ended types for which there is always an abundance of uncertainty. These are the only kinds that arise in real research, whether that be basic research in academe or engineering research in industry. To gain this kind of experience, either as a professional consultant or as an industrial employee, takes years. Vast amounts of money, not to mention careers, must be put at risk. The purpose here is to outline some realistic simulation-type lab exercises that are so simple and inexpensive to run that the students can repeat them as often as desired at virtually no cost. Simulations also allow the instructor to design problems whose outcomes are as noisy as desired but still predictable within limits. Also, the instructor and the students can learn a great deal more from the postmortem conducted after the exercise is completed. One never knows for sure what the true data should have been when dealing only with real-life experiments. To add a bit more realism to the exercises, it is sometimes desirable to make the students pay for each experimental result from a make-believe budget allocation for the problem.

  14. Bio-inspired design of dental multilayers: experiments and model.

    PubMed

    Niu, Xinrui; Rahbar, Nima; Farias, Stephen; Soboyejo, Wole

    2009-12-01

    This paper combines experiments, simulations and analytical modeling that are inspired by the stress reductions associated with the functionally graded structures of the dentin-enamel-junctions (DEJs) in natural teeth. Unlike conventional crown structures in which ceramic crowns are bonded to the bottom layer with an adhesive layer, real teeth do not have a distinct "adhesive layer" between the enamel and the dentin layers. Instead, there is a graded transition from enamel to dentin within an approximately 10 to 100 µm thick regime that is called the Dentin Enamel Junction (DEJ). In this paper, a micro-scale, bio-inspired functionally graded structure is used to bond the top ceramic layer (zirconia) to a dentin-like ceramic-filled polymer substrate. The bio-inspired functionally graded material (FGM) is shown to exhibit higher critical loads over a wide range of loading rates. The measured critical loads are predicted using a rate-dependent slow crack growth (RDEASCG) model. The implications of the results are then discussed for the design of bio-inspired dental multilayers. PMID:19716103

  15. Irradiation Experiment Conceptual Design Parameters for NBSR Fuel Conversion

    SciTech Connect

    Brown, N. R.; Baek, J. S.; Hanson, A. L.; Cuadra, A.; Cheng, L. Y.; Diamond, D. J.

    2014-04-30

    It has been proposed to convert the National Institute of Standards and Technology (NIST) research reactor, known as the NBSR, from high-enriched uranium (HEU) fuel to low-enriched uranium (LEU) fuel. The motivation to convert the NBSR to LEU fuel is to reduce the risk of proliferation of special nuclear material. This report is a compilation of relevant information from recent studies related to the proposed conversion using a metal alloy of LEU with 10 w/o molybdenum. The objective is to inform the design of the mini-plate and full-size plate irradiation experiments that are being planned. This report provides relevant dimensions of the fuel elements, and the following parameters at steady state: average and maximum fission rate density and fission density, fuel temperature distribution for the plate with maximum local temperature, and two-dimensional heat flux profiles of fuel plates with high power densities. The latter profiles are given for plates in both the inner and outer core zones and for cores with both fresh and depleted shim arms (reactivity control devices). A summary of the methodology to obtain these results is presented. Fuel element tolerance assumptions and hot channel factors used in the safety analysis are also given.

  16. Prototype internal target design for storage ring experiments

    NASA Astrophysics Data System (ADS)

    Petridis, N.; Grisenti, R. E.; Litvinov, Yu A.; Stöhlker, Th

    2015-11-01

    The introduction of cryogenically cooled, few-micrometer-sized nozzle geometries and an essential modification of the experimental storage ring (ESR) target station allowed for a reliable operation using low-Z gases at target area densities in the range of 10^13-10^14 cm^-2. Therefore, a remarkably versatile target source was established, enabling operation over the whole range of desired target gases (from H2 to Xe) and area densities (~10^10 to ~10^14 cm^-2). Moreover, the considerably smaller orifice diameter of the new target source enables a much more compact inlet chamber while, at the same time, maintaining the demanding vacuum requirements of a storage ring. A completely new inlet chamber design is presented here, which, besides the improvements regarding the achievable area densities, will feature a variable beam width down to 1 mm at the ion beam interaction region. This is of paramount importance with respect to the realization of high precision experiments, e.g. by reducing the inaccuracy of the observation angle causing the relativistic Doppler broadening. While being intended for the deployment at the future high energy storage ring within the SPARC collaboration, the new inlet chamber can also replace the current one at the ESR or serve as an internal target for CRYRING.

  17. Conceptual design study for Infrared Limb Experiment (IRLE)

    NASA Technical Reports Server (NTRS)

    Baker, Doran J.; Ulwick, Jim; Esplin, Roy; Batty, J. C.; Ware, Gene; Tew, Craig

    1989-01-01

    The phase A engineering design study for the Infrared Limb Experiment (IRLE) instrument, the infrared portion of the Mesosphere-Lower Thermosphere Explorer (MELTER) satellite payload, is given. The IRLE instrument is a satellite instrument, based on the heritage of the Limb Infrared Monitor of the Stratosphere (LIMS) program, that will make global measurements of O3, CO2, NO, NO2, H2O, and OH from earth limb emissions. These measurements will be used to provide improved understanding of the photochemistry, radiation, dynamics, energetics, and transport phenomena in the lower thermosphere, mesosphere, and stratosphere. MELTER is being proposed to NASA Goddard by a consortium consisting of the University of Michigan, the University of Colorado, and NASA Langley. It is proposed that the Space Dynamics Laboratory at Utah State University (SDL/USU) build the IRLE instrument for NASA Langley. MELTER is scheduled for launch in November 1994 into a sun-synchronous, 650-km circular orbit with an inclination angle of 97.8 deg and an ascending node at 3:00 p.m. local time.

  18. Magnetohydrodynamic Augmented Propulsion Experiment: I. Performance Analysis and Design

    NASA Technical Reports Server (NTRS)

    Litchford, R. J.; Cole, J. W.; Lineberry, J. T.; Chapman, J. N.; Schmidt, H. J.; Lineberry, C. W.

    2003-01-01

    The performance of conventional thermal propulsion systems is fundamentally constrained by the specific energy limitations associated with chemical fuels and the thermal limits of available materials. Electromagnetic thrust augmentation represents one intriguing possibility for improving the fuel composition of thermal propulsion systems, thereby increasing overall specific energy characteristics; however, realization of such a system requires an extremely high-energy-density electrical power source as well as an efficient plasma acceleration device. This Technical Publication describes the development of an experimental research facility for investigating the use of cross-field magnetohydrodynamic (MHD) accelerators as a possible thrust augmentation device for thermal propulsion systems. In this experiment, a 1.5-MWe Aerotherm arc heater is used to drive a 2-MWe MHD accelerator. The heatsink MHD accelerator is configured as an externally diagonalized, segmented channel, which is inserted into a large-bore, 2-T electromagnet. The performance analysis and engineering design of the flow path are described as well as the parameter measurements and flow diagnostics planned for the initial series of test runs.
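
    The scale of the cross-field (j x B) push can be sketched with a one-line estimate of the Lorentz body force and the mechanical power it adds to the flow. Only the 2 T field comes from the text above; the current density, flow velocity, and channel volume are assumed for illustration.

```python
def mhd_push(j_a_per_m2, b_tesla, u_m_per_s, volume_m3):
    force_density = j_a_per_m2 * b_tesla                  # Lorentz body force j*B, N/m^3
    push_power = force_density * u_m_per_s * volume_m3    # mechanical power added to the flow, W
    return force_density, push_power

# Only the 2 T field is taken from the text; the other values are assumed for illustration.
f, p = mhd_push(j_a_per_m2=5e4, b_tesla=2.0, u_m_per_s=3000.0, volume_m3=5e-3)
print(f"body force ~ {f / 1e3:.0f} kN/m^3, push power ~ {p / 1e6:.1f} MW")
```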

  19. Irradiation Experiment Conceptual Design Parameters for NBSR Fuel Conversion

    SciTech Connect

    Brown, N. R.; Baek, J. S.; Hanson, A. L.; Cuadra, A.; Cheng, L. Y.; Diamond, D. J.

    2013-03-31

    It has been proposed to convert the National Institute of Standards and Technology (NIST) research reactor, known as the NBSR, from high-enriched uranium (HEU) fuel to low-enriched uranium (LEU) fuel. The motivation to convert the NBSR to LEU fuel is to reduce the risk of proliferation of special nuclear material. This report is a compilation of relevant information from recent studies related to the proposed conversion using a metal alloy of LEU with 10 w/o molybdenum. The objective is to inform the design of the mini-plate and full-size plate irradiation experiments that are being planned. This report provides relevant dimensions of the fuel elements, and the following parameters at steady state: average and maximum fission rate density and fission density, fuel temperature distribution for the plate with maximum local temperature, and two-dimensional heat flux profiles of fuel plates with high power densities. The latter profiles are given for plates in both the inner and outer core zones and for cores with both fresh and depleted shim arms (reactivity control devices). In addition, a summary of the methodology to obtain these results is presented.

  20. GPS Antenna Characterization Experiment (ACE): Receiver Design and Initial Results

    NASA Technical Reports Server (NTRS)

    Martzen, Phillip; Highsmith, Dolan E.; Valdez, Jennifer E.; Parker, Joel J. K.; Moreau, Michael C.

    2015-01-01

    The GPS Antenna Characterization Experiment (ACE) is a research collaboration between Aerospace and NASA Goddard to characterize the gain patterns of the GPS L1 transmit antennas. High altitude GPS observations are collected at a ground station through a transponder-based or "bent-pipe" architecture where the GPS L1 RF spectrum is received at a platform in geosynchronous orbit and relayed to the ground for processing. The focus of this paper is the unique receiver algorithm design and implementation. The high-sensitivity GPS C/A-code receiver uses high fidelity code and carrier estimates and externally supplied GPS message bit data in a batch algorithm with settings for a 0 dB-Hz threshold. The resulting carrier-to-noise measurements are used in a GPS L1 transmit antenna pattern reconstruction. This paper shows initial transmit gain patterns averaged over each block of GPS satellites, including comparisons to available pre-flight gain measurements from the GPS vehicle contractors. These results provide never-before-seen assessments of the full, in-flight transmit gain patterns.
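
    The general idea behind reconstructing a transmit gain pattern from carrier-to-noise measurements is a link-budget inversion, sketched below. This is not the ACE processing chain itself; the transmit power, receive gain, noise temperature, and slant range are placeholders.

```python
import math

K_BOLTZ = 1.380649e-23    # Boltzmann constant, J/K
C_LIGHT = 299792458.0     # speed of light, m/s
F_L1 = 1575.42e6          # GPS L1 carrier, Hz

def free_space_loss_db(distance_m, freq_hz=F_L1):
    lam = C_LIGHT / freq_hz
    return 20.0 * math.log10(4.0 * math.pi * distance_m / lam)

def transmit_gain_dbi(cn0_dbhz, tx_power_dbw, rx_gain_dbi, distance_m, t_sys_k):
    """Invert C/N0 = Pt + Gt + Gr - FSPL - 10*log10(k*Tsys) for the transmit gain Gt."""
    noise_density_dbw_hz = 10.0 * math.log10(K_BOLTZ * t_sys_k)
    return (cn0_dbhz - tx_power_dbw - rx_gain_dbi
            + free_space_loss_db(distance_m) + noise_density_dbw_hz)

# Illustrative observation: 40 dB-Hz over a ~60,000 km GPS-to-GEO slant range.
print(f"{transmit_gain_dbi(40.0, tx_power_dbw=14.0, rx_gain_dbi=6.0, distance_m=6.0e7, t_sys_k=250.0):.1f} dBi")
```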

  1. The OECI certification/designation program: the Genoa experience.

    PubMed

    Orengo, Giovanni; Pronzato, Paolo; Ferrarini, Manlio

    2015-12-31

    Accreditation and designation procedures by the Organisation of European Cancer Institutes (OECI) have represented a considerable challenge for most of the Italian cancer centers. We summarize the experience of the San Martino-IST in Genoa, which, on the whole, was satisfactory, albeit demanding for the staff. The reorganization of most oncology/hematology operations within the disease management teams was probably the key point that allowed us to obtain approval, as it made it possible to introduce uniform methods of diagnosis/treatment, increase patient recruitment in clinical trials, and foster translational research by promoting collaboration between clinicians and laboratory investigators. The creation of a more cohesive supportive and terminal care team facilitated both the OECI procedures and the operations within the institution. Finally, some considerations are added on the doctor and nurse management roles in Italian hospitals, which are characterized by noticeable differences from northern Europe. These differences may represent an extra challenge for hospital management and evaluator teams more used to the northern European type of organization. PMID:27096267

  2. Introduction to the Design and Optimization of Experiments Using Response Surface Methodology. A Gas Chromatography Experiment for the Instrumentation Laboratory

    ERIC Educational Resources Information Center

    Lang, Patricia L.; Miller, Benjamin I.; Nowak, Abigail Tuttle

    2006-01-01

    The study describes how to design and optimize an experiment with multiple factors and multiple responses. The experiment uses fractional factorial analysis only as a screening experiment to identify important instrumental factors; it does not use response surface methodology to find the optimal set of conditions.

  3. Clinical experiences with three different designs of ankle prostheses.

    PubMed

    Rippstein, Pascal F

    2002-12-01

    Until 1995, fusion was, in our institution, the only rational surgical option for severe ankle arthrosis. Consistent reports about good mid- and long-term results with ankle replacement allowed us to change our minds. Ankle replacement became the gold standard and fusion was then almost totally banished. Because ankle arthrosis can be morphologically different from one patient to another, we soon believed that one single type of ankle prosthesis would not be the universal optimal solution for all patients. We therefore divided ankle arthrosis into three groups. For each group, the best solution was identified from among the ankle prostheses with which we had gained experience (Agility, STAR, and BP). The Agility prosthesis, which was indicated for ankles with extremely damaged geometry, did not sufficiently restore ankle motion. Preoperatively stiff ankles remained stiff postoperatively. Additionally, significant residual pain was more likely to occur in those patients. These cases did not show significant advantages compared with ankle fusion, especially from a functional point of view. Fusion for these stiff ankles is therefore today our first treatment of choice. In our experience, the malleolar joints do not have to be replaced. Even a severe arthrosis at this level does not produce significant pain, provided that osteophytes have been removed and joint height has been restored by the implanted prosthesis. It is our strong belief that these malleolar joints are also less sensitive to pain, similar to the femoropatellar joint. For these reasons, a replacement of the malleolar joints and the resurfacing of the talar sides is not necessary. Leaving the talar sides untouched requires less bone resection and makes the implantation of the talar component easier. Although we obtained good results with the STAR prosthesis, we progressively abandoned it for these reasons and came to prefer the BP prosthesis. The BP prosthesis works on the same biomechanical principle as

  4. Gender Consideration in Experiment Design for Air Break in Prebreathe

    NASA Technical Reports Server (NTRS)

    Conkin, Johnny; Gernhardt, Michael I.; Dervay, Joseph P.

    2007-01-01

    If gender is a confounder of the decompression sickness (DCS) or venous gas emboli (VGE) outcomes of a proposed air break in oxygen prebreathe (PB) project, then decisions about the final experiment design must be made. We evaluated whether the incidence of DCS and VGE from tests in altitude chambers over 20 years differed between men and women after resting and exercise prebreathe protocols. Nitrogen washout during PB is our primary risk mitigation strategy to prevent subsequent DCS and VGE in subjects. Bubbles in the pulmonary artery (venous blood) were detected from the precordial position using Doppler ultrasound bubble detectors. The subjects were monitored for VGE for four min at about 15 min intervals for the duration of the altitude exposure, with the maximum bubble grade assigned a Spencer Grade of IV. There was no difference in DCS incidence between men and women in either PB protocol. The incidence of VGE and Grade IV VGE is statistically lower in women compared to men after resting PB. Even when 10 tests in which both men (n = 168) and women (n = 92) appeared were compared with the Mantel-Haenszel chi-square test, the p-value for VGE incidence was still significant at 0.03. The incidence of VGE and Grade IV VGE is not statistically lower in women compared to men after exercise PB. Even when six tests in which both men (n = 165) and women (n = 49) appeared were compared with the Mantel-Haenszel chi-square test, the p-value for VGE incidence was still not significant at 0.90. Our goal is to understand the risk of brief air breaks during PB without other confounding variables invalidating our conclusions. The cost to additionally account for the confounding role of gender on VGE outcome after resting PB is judged excessive. Our decision is to evaluate air breaks only in the exercise PB protocol. Therefore, there is no restriction on recruiting women as test subjects.
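
    For reference, the stratified comparison cited above is the Mantel-Haenszel chi-square test, a minimal implementation of which is sketched below. The 2x2 strata in the example are made-up placeholders, not the study's data.

```python
from scipy.stats import chi2

def mantel_haenszel_chi2(strata):
    """strata: list of 2x2 tables ((a, b), (c, d)), rows = group, columns = VGE yes/no."""
    a_obs = a_exp = var = 0.0
    for (a, b), (c, d) in strata:
        n = a + b + c + d
        a_obs += a
        a_exp += (a + b) * (a + c) / n
        var += (a + b) * (c + d) * (a + c) * (b + d) / (n ** 2 * (n - 1))
    stat = (abs(a_obs - a_exp) - 0.5) ** 2 / var    # with continuity correction
    return stat, chi2.sf(stat, df=1)

# Made-up strata (one per altitude test), not the study's data.
strata = [((4, 16), (2, 18)), ((6, 24), (1, 19)), ((5, 15), (3, 17))]
stat, p = mantel_haenszel_chi2(strata)
print(f"Mantel-Haenszel chi-square = {stat:.2f}, p = {p:.3f}")
```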

  5. Predictive Model for the Design of Zwitterionic Polymer Brushes: A Statistical Design of Experiments Approach.

    PubMed

    Kumar, Ramya; Lahann, Joerg

    2016-07-01

    The performance of polymer interfaces in biology is governed by a wide spectrum of interfacial properties. With the ultimate goal of identifying design parameters for stem cell culture coatings, we developed a statistical model that describes the dependence of brush properties on surface-initiated polymerization (SIP) parameters. Employing a design of experiments (DOE) approach, we identified operating boundaries within which four gel architecture regimes can be realized, including a new regime of associated brushes in thin films. Our statistical model can accurately predict the brush thickness and the degree of intermolecular association of poly[{2-(methacryloyloxy) ethyl} dimethyl-(3-sulfopropyl) ammonium hydroxide] (PMEDSAH), a previously reported synthetic substrate for feeder-free and xeno-free culture of human embryonic stem cells. DOE-based multifunctional predictions offer a powerful quantitative framework for designing polymer interfaces. For example, model predictions can be used to decrease the critical thickness at which the wettability transition occurs by simply increasing the catalyst quantity from 1 to 3 mol %. PMID:27268965
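
    The kind of second-order (response-surface) model such a DOE analysis produces can be sketched as an ordinary least-squares fit of thickness against two polymerization factors. The factor names and the synthetic data below are placeholders, not measurements from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
catalyst = rng.uniform(1.0, 3.0, 20)        # mol %, placeholder factor
time_h = rng.uniform(1.0, 24.0, 20)         # polymerization time in hours, placeholder factor
thickness = (20 + 15 * catalyst + 3 * time_h - 0.05 * time_h ** 2
             + 1.5 * catalyst * time_h + rng.normal(0, 2, 20))   # synthetic response, nm

# Second-order (response-surface) model terms: 1, x1, x2, x1^2, x2^2, x1*x2.
X = np.column_stack([np.ones_like(catalyst), catalyst, time_h,
                     catalyst ** 2, time_h ** 2, catalyst * time_h])
beta, *_ = np.linalg.lstsq(X, thickness, rcond=None)

def predict(cat, t):
    return np.array([1.0, cat, t, cat ** 2, t ** 2, cat * t]) @ beta

print(f"predicted thickness at 1 mol%, 12 h: {predict(1.0, 12.0):.1f} nm")
print(f"predicted thickness at 3 mol%, 12 h: {predict(3.0, 12.0):.1f} nm")
```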

  6. Design and Implementation of an experiment-specific Payload Orientation Platform for balloon-borne Experiment .

    NASA Astrophysics Data System (ADS)

    Devarajan, Anand; Rodi, Ashish; Ojha, Devendra

    2012-07-01

    To investigate the mesospheric dynamics and its coupling to the upper atmospheric regions above, a Balloon-borne optical Investigation of Regional-atmospheric Dynamics (BIRD) experiment was jointly conducted by the Physical Research Laboratory, Ahmedabad, and Boston University on 08 March 2010 from the TIFR Balloon Facility, Hyderabad. Along with the BIRD payload, a nano payload of the University of York, Canada was also flown for aerosol studies during sunset. The balloon carrying the 335 kg BIRD payload was launched at 1052 hrs, reached a float altitude of 34.8 km amsl at 1245 hrs and was allowed to float till 1825 hrs before it was parachuted down. To achieve the experimental objectives, it was essential that the payload gondola, comprising two optical spectrographs, be programmed to rotate azimuthally in 3 steps of 30 degrees each from the East-West (E-W) to the North-South (N-S) direction, stop at each step for 5 minutes for data acquisition, return to the original E-W position and keep repeating the sequence continuously, with a provision to start or stop the orientation from the ground station through telecommand. To meet these unique requirements, we designed, developed and implemented a Payload Orientation Platform (POP) using a flux-gate magnetometer for direction finding, which worked satisfactorily in the BIRD flight. This paper presents an overview of the POP implemented, focuses on the design considerations of the associated electronics, and finally presents the results of its performance during the entire balloon flight.
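
    The stated pointing sequence (three 30-degree steps from E-W to N-S, a five-minute dwell at each, return, repeat while enabled by telecommand) can be summarized as a simple control loop. The hardware interface functions in the sketch are hypothetical stubs, not the flight software.

```python
import time

DWELL_S = 5 * 60                      # five-minute dwell at each step
STEPS_DEG = (0, 30, 60, 90)           # E-W (0 deg) to N-S (90 deg) in 30-degree steps

def run_orientation_sequence(rotate_to_deg, telecommand_enabled):
    """Rotate through the azimuth steps, dwell for data acquisition, return and repeat.

    rotate_to_deg and telecommand_enabled are hypothetical hardware/telemetry stubs;
    in the real system the flux-gate magnetometer closes the pointing loop.
    """
    while telecommand_enabled():
        for target in STEPS_DEG:
            rotate_to_deg(target)
            time.sleep(DWELL_S)       # spectrograph data acquisition at this azimuth
        rotate_to_deg(STEPS_DEG[0])   # return to the original E-W position
```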

  7. Skylab Earth Resource Experiment Package critical design review. [conference

    NASA Technical Reports Server (NTRS)

    1973-01-01

    An outline of the conference for reviewing the design of the EREP is presented. Systems designs reviewed include the tape recorder, support equipment, viewfinder/tracking, support hardware, and the control and display panel.

  8. Viking dynamics experience with application to future payload design

    NASA Technical Reports Server (NTRS)

    Barrett, S.; Rader, W. P.; Payne, K. R.

    1978-01-01

    Analytical and test techniques are discussed. Areas in which hindsight indicated erroneous, redundant, or unnecessarily severe design and test specifications are identified. Recommendations are made for improvements in the dynamic design and criteria philosophy, aimed at reducing costs for payloads.

  9. Model-Driven Design: Systematically Building Integrated Blended Learning Experiences

    ERIC Educational Resources Information Center

    Laster, Stephen

    2010-01-01

    Developing and delivering curricula that are integrated and that use blended learning techniques requires a highly orchestrated design. While institutions have demonstrated the ability to design complex curricula on an ad-hoc basis, these projects are generally successful at a great human and capital cost. Model-driven design provides a…

  10. A Learning, Research and Development Framework to Design for a "Holistic" Learning Experience

    ERIC Educational Resources Information Center

    Carroll, Fiona; Kop, Rita

    2011-01-01

    The design of experiences and, in particular, educational experiences is a complex matter and involves not only using effective technologies and applying cognitive triggers, but there is a need to "think outside the box" in order to also design for the affective dimension of human experiences; the impressions, feelings and interactions that a…

  11. The Historical and Situated Nature of Design Experiments--Implications for Data Analysis

    ERIC Educational Resources Information Center

    Krange, I.; Ludvigsen, Sten

    2009-01-01

    This article is a methodological contribution to the use of design experiments in educational research. We will discuss the implications of a historical and situated interpretation to design experiments, the consequences this has for the analysis of the collected data and empirically based suggestions to improve the designs of the computer-based…

  12. How Design Experiments Can Inform a Rethinking of Transfer and Vice Versa.

    ERIC Educational Resources Information Center

    Lobato, Joanne

    2003-01-01

    Proposes that the theoretical assumptions underlying a researcher's model of transfer affect how design decisions are made. Discusses limitations with the two most common approaches to transfer in design experiments, exploring how design experiments can challenge the basic assumptions underlying transfer models, and illustrating this point by…

  13. Taxonomic Organization Scaffolds Young Children's Learning from Storybooks: A Design Experiment

    ERIC Educational Resources Information Center

    Kaefer, Tanya; Pinkham, Ashley M.; Neuman, Susan B.

    2010-01-01

    The purpose of this design experiment was to research, test and iteratively design a set of taxonomically-organized storybooks that served to scaffold young children's word learning and concept development. Specifically, Phase 1 of the design experiment asked: (1) What are the effects of taxonomic organization on children's ability to acquire…

  14. On the design of experiments to study extreme field limits

    SciTech Connect

    Bulanov, S. S.; Chen, M.; Schroeder, C. B.; Esarey, E.; Leemans, W. P.; Bulanov, S. V.; Esirkepov, T. Zh.; Kando, M.; Koga, J. K.; Zhidkov, A. G.; Chen, P.; Mur, V. D.; Narozhny, N. B.; Popov, V. S.; Thomas, A. G. R.; Korn, G.

    2012-12-21

    We propose experiments on the collision of high intensity electromagnetic pulses with electron bunches and on the collision of multiple electromagnetic pulses for studying extreme field limits in the nonlinear interaction of electromagnetic waves. The effects of nonlinear QED will be revealed in these laser plasma experiments.

  15. Applying modeling results in designing a global tropospheric experiment

    NASA Technical Reports Server (NTRS)

    1982-01-01

    A set of field experiments and advanced modeling studies which provide a strategy for a program of global tropospheric experiments was identified. An expanded effort to develop space applications for tropospheric air quality monitoring and studies was recommended. The tropospheric ozone, carbon, nitrogen, and sulfur cycles are addressed. Stratospheric-tropospheric exchange is discussed. Fast photochemical processes in the free troposphere are considered.

  16. Ignition experiment design based on γ-pumping gas lasers

    NASA Astrophysics Data System (ADS)

    Bonyushkin, E. K.; Il'kaev, R. I.; Morovov, A. P.; Pavlovskii, A. I.; Lazhintsev, B. V.; Basov, N.; Gus'kov, S. Yu.; Rosanov, V. B.; Zmitrenko, N. V.

    1996-05-01

    A comparative analysis of gas lasers pumped by γ-radiation for an ignition experiment is carried out. The possibilities of frequency-time pulse shaping are discussed for these kinds of laser drivers. A new type of ICF target (LIGHT-target), which is able to provide uniform deposition of the laser driver energy, is proposed as a target for the ignition experiment.

  17. Education Through the Dance Experience. Designed for Children Series.

    ERIC Educational Resources Information Center

    Docherty, David

    This text presents a creative, child-centered approach to the teaching of dance in the elementary school based on the theories and methods of Rudolf Laban and Joyce Boorman. The content area of dance is briefly described so that the practical experiences presented later in the text can be viewed in perspective. Dance experiences are presented that…

  18. The BWR advanced fuel design experience using Studsvik CMS

    SciTech Connect

    DiGiovine, A.S.; Gibbon, S.H.; Wiksell, G.

    1996-12-31

    The current trend within the nuclear industry is to maximize generation by extending cycle lengths and taking outages as infrequently as possible. As a result, many utilities have begun to use fuel designed to meet these more demanding requirements. These fuel designs are significantly more heterogeneous in mechanical and neutronic detail than prior designs. The question arises as to how existing in-core fuel management codes, such as Studsvik CMS, perform in modeling cores containing these designs. While this issue pertains to both pressurized water reactors (PWRs) and boiling water reactors (BWRs), this summary focuses on BWR applications.

  19. Laser communication experiment. Volume 1: Design study report: Spacecraft transceiver. Part 1: Transceiver design

    NASA Technical Reports Server (NTRS)

    1970-01-01

    The ATS-F Laser Communications Experiment (LCE) is the first significant step in the application of laser systems to space communications. The space-qualified laser communications system being developed in this experiment, and the data resulting from its successful deployment in space, will be applicable to the use of laser communications systems in a wide variety of manned as well as unmanned space missions, both near earth and in deep space. Particular future NASA missions which can benefit from this effort are the Tracking and Data Relay Satellite System and the Earth Resources Satellites. The LCE makes use of carbon dioxide lasers to establish simultaneous, two-way communication between the ATS-F synchronous satellite and a ground station. In addition, the LCE is designed to permit communication with a similar spacecraft transceiver proposed to be flown on ATS-G, nominally one year after the launch of ATS-F. This would be the first attempt to employ lasers for satellite-to-satellite communications.

  20. Use of Taguchi design of experiments to optimize and increase robustness of preliminary designs

    NASA Technical Reports Server (NTRS)

    Carrasco, Hector R.

    1992-01-01

    The research performed this summer includes the completion of work begun last summer in support of the Air Launched Personnel Launch System parametric study, providing support on the development of the test matrices for the plume experiments in the Plume Model Investigation Team Project, and aiding in the conceptual design of a lunar habitat. After the conclusion of last year's Summer Program, the Systems Definition Branch continued with the Air Launched Personnel Launch System (ALPLS) study by running three experiments defined by L27 Orthogonal Arrays. Although the data was evaluated during the academic year, the analysis of variance and the final project review were completed this summer. The Plume Model Investigation Team (PLUMMIT) was formed by the Engineering Directorate to develop a consensus position on plume impingement loads and to validate plume flowfield models. In order to obtain a large number of individual correlated data sets for model validation, a series of plume experiments was planned. A preliminary 'full factorial' test matrix indicated that 73,024 jet firings would be necessary to obtain all of the information requested. As this was approximately 100 times more firings than the scheduled use of Vacuum Chamber A would permit, considerable effort was needed to reduce the test matrix and optimize it with respect to the specific objectives of the program. Part of the First Lunar Outpost Project deals with the Lunar Habitat. Requirements for the habitat include radiation protection, a safe haven for occasional solar flare storms, an airlock module, as well as consumables to support 34 extravehicular activities during a 45-day mission. The objective of the proposed work was to collaborate with the Habitat Team on the development and reusability of the Logistics Modules.

  1. Use of Taguchi design of experiments to optimize and increase robustness of preliminary designs

    NASA Astrophysics Data System (ADS)

    Carrasco, Hector R.

    1992-12-01

    The research performed this summer includes the completion of work begun last summer in support of the Air Launched Personnel Launch System parametric study, providing support on the development of the test matrices for the plume experiments in the Plume Model Investigation Team Project, and aiding in the conceptual design of a lunar habitat. After the conclusion of last year's Summer Program, the Systems Definition Branch continued with the Air Launched Personnel Launch System (ALPLS) study by running three experiments defined by L27 Orthogonal Arrays. Although the data was evaluated during the academic year, the analysis of variance and the final project review were completed this summer. The Plume Model Investigation Team (PLUMMIT) was formed by the Engineering Directorate to develop a consensus position on plume impingement loads and to validate plume flowfield models. In order to obtain a large number of individual correlated data sets for model validation, a series of plume experiments was planned. A preliminary 'full factorial' test matrix indicated that 73,024 jet firings would be necessary to obtain all of the information requested. As this was approximately 100 times more firings than the scheduled use of Vacuum Chamber A would permit, considerable effort was needed to reduce the test matrix and optimize it with respect to the specific objectives of the program. Part of the First Lunar Outpost Project deals with the Lunar Habitat. Requirements for the habitat include radiation protection, a safe haven for occasional solar flare storms, an airlock module, as well as consumables to support 34 extravehicular activities during a 45-day mission. The objective of the proposed work was to collaborate with the Habitat Team on the development and reusability of the Logistics Modules.

  2. Physics design requirements for the Tokamak Physics Experiment (TPX)

    SciTech Connect

    Neilson, G.H.; Goldston, R.J.; Jardin, S.C.; Reiersen, W.T.; Nevins, W.M.; Porkolab, M.; Ulrickson, M.

    1993-11-01

    The design of TPX is driven by physics requirements that follow from its mission. The tokamak and heating systems provide the performance and profile controls needed to study advanced steady state tokamak operating modes. The magnetic control systems provide substantial flexibility for the study of regimes with high beta and bootstrap current. The divertor is designed for high steady state power and particle exhaust.

  3. The Future of Management as Design: A Thought Experiment

    ERIC Educational Resources Information Center

    Bouchard, Veronique; del Forno, Leon

    2012-01-01

    Purpose: Management practices and education are presently in a stage of reappraisal and a growing number of scholars and experts are suggesting that managers should be taught and adopt the approach and methodologies of designers. The purpose of this paper is to imagine the impact of this move and to try and foresee whether "management as design"…

  4. Experiences performing conceptual design optimization of transport aircraft

    NASA Technical Reports Server (NTRS)

    Arbuckle, P. D.; Sliwa, S. M.

    1984-01-01

    Optimum Preliminary Design of Transports (OPDOT) is a computer program developed at NASA Langley Research Center for evaluating the impact of new technologies upon transport aircraft. For example, it provides the capability to look at configurations which have been resized to take advantage of active controls and provides an indication of economic sensitivity to their use. Although this tool returns a conceptual design configuration as its output, it does not have the accuracy, in absolute terms, to yield satisfactory point designs for immediate use by aircraft manufacturers. However, the relative accuracy of comparing OPDOT-generated configurations while varying technological assumptions has been demonstrated to be highly reliable. Hence, OPDOT is a useful tool for ascertaining the synergistic benefits of active controls, composite structures, improved engine efficiencies and other advanced technological developments. The approach used by OPDOT is a direct numerical optimization of an economic performance index. A set of independent design variables is iterated, given a set of design constants and data. The design variables include wing geometry, tail geometry, fuselage size, and engine size. This iteration continues until the optimum performance index is found which satisfies all the constraint functions. The analyst interacts with OPDOT by varying the input parameters to either the constraint functions or the design constants. Note that the optimization of aircraft geometry parameters is equivalent to finding the ideal aircraft size, but with more degrees of freedom than classical design procedures will allow.
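
    The direct numerical optimization loop described above can be sketched as a constrained minimization of an economic index over the sizing variables. The objective and constraint below are made-up placeholders, not OPDOT's models.

```python
import numpy as np
from scipy.optimize import minimize

# x = [wing_area_m2, tail_area_m2, fuselage_length_m, engine_thrust_kN]
def economic_index(x):
    wing, tail, fuselage, thrust = x
    return 0.04 * wing + 0.02 * tail + 0.03 * fuselage + 0.05 * thrust   # toy cost surrogate

def field_length_margin(x):
    wing, _, _, thrust = x
    return wing * thrust - 12000.0    # >= 0 : toy takeoff-performance constraint

result = minimize(
    economic_index,
    x0=np.array([120.0, 30.0, 40.0, 110.0]),
    bounds=[(80, 200), (15, 60), (30, 55), (80, 180)],
    constraints=[{"type": "ineq", "fun": field_length_margin}],
)
print(result.x, result.fun)
```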

  5. Space Shuttle Orbiter thermal protection system design and flight experience

    NASA Technical Reports Server (NTRS)

    Curry, Donald M.

    1993-01-01

    The Space Shuttle Orbiter Thermal Protection System materials, design approaches associated with each material, and the operational performance experienced during fifty-five successful flights are described. The flights to date indicate that the thermal and structural design requirements were met and that the overall performance was outstanding.

  6. Redesigning the Urban Design Studio: Two Learning Experiments

    ERIC Educational Resources Information Center

    Pak, Burak; Verbeke, Johan

    2013-01-01

    The main aim of this paper is to discuss how the combination of Web 2.0, social media and geographic technologies can provide opportunities for learning and new forms of participation in an urban design studio. This discussion is mainly based on our recent findings from two experimental urban design studio setups as well as former research and…

  7. Developing Teachers' Competences for Designing Inclusive Learning Experiences

    ERIC Educational Resources Information Center

    Navarro, Silvia Baldiris; Zervas, Panagiotis; Gesa, Ramon Fabregat; Sampson, Demetrios G.

    2016-01-01

    Inclusive education, namely the process of providing all learners with equal educational opportunities, is a major challenge for many educational systems worldwide. In order to address this issue, a widely used framework has been developed, namely the Universal Design for Learning (UDL), which aims to provide specific educational design guidelines…

  8. Kinetics experiments and bench-scale system: Background, design, and preliminary experiments

    SciTech Connect

    Rofer, C.K.

    1987-10-01

    The project, Supercritical Water Oxidation of Hazardous Chemical Waste, is a Hazardous Waste Remedial Actions Program (HAZWRAP) Research and Development task being carried out by the Los Alamos National Laboratory. Its objective is to obtain information for use in understanding the basic technology and for scaling up and applying oxidation in supercritical water as a viable process for treating a variety of DOE-DP waste streams. This report gives the background and rationale for kinetics experiments on oxidation in supercritical water being carried out as a part of this HAZWRAP Research and Development task. It discusses supercritical fluid properties and their relevance to applying this process to the destruction of hazardous wastes. An overview is given of the small emerging industry based on applications of supercritical water oxidation. Factors that could lead to additional applications are listed. Modeling studies are described as a basis for the experimental design. The report describes plug flow reactor and batch reactor systems, and presents preliminary results. 28 refs., 4 figs., 5 tabs.
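    The kind of screening calculation behind such modeling studies can be sketched briefly: destruction efficiency in an isothermal plug-flow reactor under an assumed first-order Arrhenius rate law. The rate parameters below are hypothetical placeholders, not values measured or reported in this work.

        # Sketch of a plug-flow-reactor screening calculation: destruction efficiency
        # of an organic under an assumed first-order Arrhenius rate law.
        # Pre-exponential factor and activation energy are hypothetical placeholders.
        import math

        R = 8.314    # gas constant, J/(mol*K)
        A = 1.0e10   # pre-exponential factor, 1/s (assumed)
        Ea = 1.5e5   # activation energy, J/mol (assumed)

        def destruction_efficiency(temperature_k: float, residence_time_s: float) -> float:
            """Return 1 - C/C0 for first-order decay in an isothermal plug-flow reactor."""
            k = A * math.exp(-Ea / (R * temperature_k))
            return 1.0 - math.exp(-k * residence_time_s)

        # Scan temperature and residence time above the critical point of water (~647 K).
        for T in (650, 700, 750, 800):
            for tau in (5, 30, 60):
                de = destruction_efficiency(T, tau)
                print(f"T = {T} K, tau = {tau:3d} s -> destruction efficiency = {de:.4f}")

    Even this toy calculation shows the strong temperature sensitivity that makes kinetics measurements, rather than extrapolation, necessary for reactor scale-up.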

  9. Upside-down protein crystallization: designing microbatch experiments for microgravity.

    PubMed

    Khurshid, Sahir; Chayen, Naomi E

    2006-09-01

    The benefits of protein crystal growth in microgravity are well documented. The crystallization vessels currently employed for microgravity crystallization are far from optimal with regard to cost, sample volume, size, and ease of use. The use of microbatch experiments is a favorable alternative in each respect: 96 experiments of 0.5-2 microL volumes can be performed in a single microtiter tray measuring 5 x 8 cm and costing 1 pound sterling each. To date, the use of microbatch has not been pursued on account of concerns about oil leakage. To address this issue, a novel approach to microbatch crystallization experiments is described in which the microbatch plates are inverted throughout the duration of the experiment. The findings suggest that the microbatch method is applicable to space flight and has the potential to drastically increase the output of microgravity crystallization research. PMID:17124125

  10. Design reuse experience of space and hazardous operations robots

    NASA Technical Reports Server (NTRS)

    Oneil, P. Graham

    1994-01-01

    A comparison of design drivers for space and hazardous nuclear waste operating robots details similarities and differences in operations, performance and environmental parameters for these critical environments. The similarities are exploited to provide low risk system components based on reuse principles and design knowledge. Risk reduction techniques are used for bridging areas of significant differences. As an example, risk reduction of a new sensor design for nuclear environment operations is employed to provide upgradeable replacement units in a reusable architecture for significantly higher levels of radiation.