Sample records for simulation requires extensive

  1. Payload crew training complex simulation engineer's handbook

    NASA Technical Reports Server (NTRS)

    Shipman, D. L.

    1984-01-01

    The Simulation Engineer's Handbook is a guide for new engineers assigned to Experiment Simulation and a reference for engineers previously assigned. The experiment simulation process, development of experiment simulator requirements, development of experiment simulator hardware and software, and the verification of experiment simulators are discussed. The training required for experiment simulation is extensive and is only referenced in the handbook.

  2. A glacier runoff extension to the Precipitation Runoff Modeling System

    Treesearch

    A. E. Van Beusekom; R. J. Viger

    2016-01-01

    A module to simulate glacier runoff, PRMSglacier, was added to PRMS (Precipitation Runoff Modeling System), a distributed-parameter, physical-process hydrological simulation code. The extension does not require extensive on-glacier measurements or computational expense but still relies on physical principles over empirical relations as much as is feasible while...
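
    A minimal sketch of the kind of low-data glacier melt calculation such a module favors: a classic temperature-index (degree-day) model that needs only air temperature. The function, degree-day factor, and threshold below are illustrative assumptions, not the PRMSglacier formulation.

    ```python
    # Degree-day glacier melt sketch (illustrative; not the PRMSglacier code).
    def daily_glacier_melt(mean_temp_c, ddf_mm_per_degc=4.0, t_threshold_c=0.0):
        """Melt in mm water equivalent per day from mean daily air temperature."""
        return max(0.0, mean_temp_c - t_threshold_c) * ddf_mm_per_degc

    season = [-2.0, 1.5, 4.0, 6.5, 3.0]           # mean daily temperatures (deg C)
    melt = sum(daily_glacier_melt(t) for t in season)
    print(f"seasonal melt: {melt:.1f} mm w.e.")   # 60.0 mm w.e.
    ```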

  3. Extensible Adaptable Simulation Systems: Supporting Multiple Fidelity Simulations in a Common Environment

    NASA Technical Reports Server (NTRS)

    McLaughlin, Brian J.; Barrett, Larry K.

    2012-01-01

Common practice in the development of simulation systems is meeting all user requirements within a single instantiation. The Joint Polar Satellite System (JPSS) presents a unique challenge: establishing a simulation environment that meets the needs of a diverse user community while also spanning a multi-mission environment over decades of operation. In response, the JPSS Flight Vehicle Test Suite (FVTS) is architected with an extensible infrastructure that supports the operation of multiple observatory simulations for a single mission, and for multiple missions, within a common system perimeter. For the JPSS-1 satellite, multiple-fidelity flight observatory simulations are necessary to support the distinct user communities consisting of the Common Ground System development team, the Common Ground System Integration & Test team, and the Mission Rehearsal Team/Mission Operations Team. These key requirements present several challenges to FVTS development. First, the FVTS must ensure all critical user requirements are satisfied by at least one fidelity instance of the observatory simulation. Second, the FVTS must allow for tailoring of the system instances to function in diverse operational environments, from the high-security operations environment at the NOAA Satellite Operations Facility (NSOF) to the ground system factory floor. Finally, the FVTS must provide the ability to execute sustaining engineering activities on a subset of the system without impacting system availability to parallel users. The FVTS approach of allowing for multiple fidelity copies of observatory simulations represents a unique concept in simulator capability development and corresponds to the JPSS Ground System goals of establishing a capability that is flexible, extensible, and adaptable.

  4. Python-based geometry preparation and simulation visualization toolkits for STEPS

    PubMed Central

    Chen, Weiliang; De Schutter, Erik

    2014-01-01

    STEPS is a stochastic reaction-diffusion simulation engine that implements a spatial extension of Gillespie's Stochastic Simulation Algorithm (SSA) in complex tetrahedral geometries. An extensive Python-based interface is provided to STEPS so that it can interact with the large number of scientific packages in Python. However, a gap existed between the interfaces of these packages and the STEPS user interface, where supporting toolkits could reduce the amount of scripting required for research projects. This paper introduces two new supporting toolkits that support geometry preparation and visualization for STEPS simulations. PMID:24782754
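
    For context, the algorithm STEPS extends spatially is Gillespie's direct-method SSA. Below is a minimal, non-spatial Python sketch for a single bimolecular reaction; it illustrates the algorithm only and does not use the STEPS API.

    ```python
    import random

    # Gillespie direct method for the single channel A + B -> C (well mixed).
    def ssa(a, b, c, k=0.01, t_end=10.0):
        t = 0.0
        while a > 0 and b > 0:
            tau = random.expovariate(k * a * b)  # waiting time to next reaction
            if t + tau > t_end:
                break
            t += tau
            a, b, c = a - 1, b - 1, c + 1        # fire the reaction
        return a, b, c

    print(ssa(100, 100, 0))
    ```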

  5. A Modular and Extensible Architecture Integrating Sensors, Dynamic Displays of Anatomy and Physiology, and Automated Instruction for Innovations in Clinical Education

    ERIC Educational Resources Information Center

    Nelson, Douglas Allen, Jr.

    2017-01-01

    Adoption of simulation in healthcare education has increased tremendously over the past two decades. However, the resources necessary to perform simulation are immense. Simulators are large capital investments and require specialized training for both instructors and simulation support staff to develop curriculum using the simulator and to use the…

  6. A Collaborative Extensible User Environment for Simulation and Knowledge Management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Freedman, Vicky L.; Lansing, Carina S.; Porter, Ellen A.

    2015-06-01

In scientific simulation, scientists use measured data to create numerical models, execute simulations and analyze results from advanced simulators executing on high performance computing platforms. This process usually requires a team of scientists collaborating on data collection, model creation and analysis, and on authorship of publications and data. This paper shows that scientific teams can benefit from a user environment called Akuna that permits subsurface scientists in disparate locations to collaborate on numerical modeling and analysis projects. The Akuna user environment is built on the Velo framework that provides both a rich client environment for conducting and analyzing simulations and a Web environment for data sharing and annotation. Akuna is an extensible toolset that integrates with Velo, and is designed to support any type of simulator. This is achieved through data-driven user interface generation, use of a customizable knowledge management platform, and an extensible framework for simulation execution, monitoring and analysis. This paper describes how the customized Velo content management system and the Akuna toolset are used to integrate and enhance an effective collaborative research and application environment. The extensible architecture of Akuna is also described and demonstrated through the creation and execution of a 3D subsurface simulation.

  7. Simulations of stretching a flexible polyelectrolyte with varying charge separation

    DOE PAGES

    Stevens, Mark J.; Saleh, Omar A.

    2016-07-22

We calculated the force-extension curves for a flexible polyelectrolyte chain with varying charge separations by performing Monte Carlo simulations of a 5000-bead chain using a screened Coulomb interaction. At all charge separations, the force-extension curves exhibit a Pincus-like scaling regime at intermediate forces and a logarithmic regime at large forces. As the charge separation increases, the Pincus regime shifts to a larger range of forces and the logarithmic regime starts at larger forces. We also found that the force-extension curve for the corresponding neutral chain has a logarithmic regime. Decreasing the diameter of the bead in the neutral chain simulations removed the logarithmic regime, and the force-extension curve tends to the freely jointed chain limit. In conclusion, this result shows that only excluded volume is required for the high-force logarithmic regime to occur.
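
    For readers unfamiliar with the interaction named above, the sketch below sums a screened Coulomb (Yukawa) pair energy over a straight bead chain; the Bjerrum length, screening constant, and chain geometry are illustrative stand-ins, not the paper's parameters.

    ```python
    import math

    def screened_coulomb(r, bjerrum=1.0, kappa=0.5):
        """Pair energy (in kT, reduced units) of unit charges at separation r."""
        return bjerrum * math.exp(-kappa * r) / r

    def chain_energy(n_beads=50, d=2.0):
        """Electrostatic energy of a straight chain with charge separation d."""
        return sum(screened_coulomb(d * (j - i))
                   for i in range(n_beads) for j in range(i + 1, n_beads))

    print(f"E/kT = {chain_energy():.3f}")
    ```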

  8. Simulating Multivariate Nonnormal Data Using an Iterative Algorithm

    ERIC Educational Resources Information Center

    Ruscio, John; Kaczetow, Walter

    2008-01-01

    Simulating multivariate nonnormal data with specified correlation matrices is difficult. One especially popular method is Vale and Maurelli's (1983) extension of Fleishman's (1978) polynomial transformation technique to multivariate applications. This requires the specification of distributional moments and the calculation of an intermediate…
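
    The Vale-Maurelli procedure can be summarized in a few lines: draw correlated standard normals, then push each margin through a Fleishman cubic b0 + b1*Z + b2*Z^2 + b3*Z^3. In the sketch below the intermediate correlation matrix and polynomial coefficients are placeholders; in practice both are solved for numerically from the target moments and correlations.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    R_intermediate = np.array([[1.0, 0.5],        # placeholder intermediate
                               [0.5, 1.0]])       # correlation matrix
    L = np.linalg.cholesky(R_intermediate)
    Z = rng.standard_normal((10_000, 2)) @ L.T    # correlated standard normals

    b0, b1, b2, b3 = -0.1, 0.95, 0.1, 0.02        # placeholder Fleishman weights
    X = b0 + b1 * Z + b2 * Z**2 + b3 * Z**3       # nonnormal margins
    print(np.corrcoef(X, rowvar=False).round(3))
    ```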

  9. Monte Carlo Simulation Using HyperCard and Lotus 1-2-3.

    ERIC Educational Resources Information Center

    Oulman, Charles S.; Lee, Motoko Y.

    Monte Carlo simulation is a computer modeling procedure for mimicking observations on a random variable. A random number generator is used in generating the outcome for the events that are being modeled. The simulation can be used to obtain results that otherwise require extensive testing or complicated computations. This paper describes how Monte…
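
    In the same spirit as the paper (though in Python rather than HyperCard or Lotus 1-2-3), the sketch below uses a uniform random number generator to mimic observations of a discrete random variable and estimates a probability by repeated trials; the outcomes and probabilities are invented for illustration.

    ```python
    import random

    outcomes = [("pass", 0.7), ("rework", 0.2), ("fail", 0.1)]  # hypothetical

    def draw():
        """Inverse-CDF draw from the discrete distribution above."""
        u, cum = random.random(), 0.0
        for value, p in outcomes:
            cum += p
            if u < cum:
                return value
        return outcomes[-1][0]

    trials = [draw() for _ in range(100_000)]
    print(trials.count("fail") / len(trials))     # approximately 0.1
    ```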

  10. Simulations of stretching a flexible polyelectrolyte with varying charge separation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stevens, Mark J.; Saleh, Omar A.

We calculated the force-extension curves for a flexible polyelectrolyte chain with varying charge separations by performing Monte Carlo simulations of a 5000-bead chain using a screened Coulomb interaction. At all charge separations, the force-extension curves exhibit a Pincus-like scaling regime at intermediate forces and a logarithmic regime at large forces. As the charge separation increases, the Pincus regime shifts to a larger range of forces and the logarithmic regime starts at larger forces. We also found that the force-extension curve for the corresponding neutral chain has a logarithmic regime. Decreasing the diameter of the bead in the neutral chain simulations removed the logarithmic regime, and the force-extension curve tends to the freely jointed chain limit. In conclusion, this result shows that only excluded volume is required for the high-force logarithmic regime to occur.

  11. Impact of model development, calibration and validation decisions on hydrological simulations in West Lake Erie Basin

    USDA-ARS?s Scientific Manuscript database

    Watershed simulation models are used extensively to investigate hydrologic processes, landuse and climate change impacts, pollutant load assessments and best management practices (BMPs). Developing, calibrating and validating these models require a number of critical decisions that will influence t...
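
    One concrete calibration decision such studies weigh is the objective function used to score a model run. A common choice is the Nash-Sutcliffe efficiency, sketched below with made-up flow values.

    ```python
    def nse(observed, simulated):
        """Nash-Sutcliffe efficiency: 1 is a perfect fit, <= 0 is poor."""
        mean_obs = sum(observed) / len(observed)
        sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))
        var = sum((o - mean_obs) ** 2 for o in observed)
        return 1.0 - sse / var

    obs = [12.0, 30.5, 22.1, 8.4, 5.9]   # illustrative daily flows (m^3/s)
    sim = [10.8, 28.0, 25.3, 9.1, 6.4]
    print(f"NSE = {nse(obs, sim):.3f}")
    ```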

  12. NASA/ESA CV-990 spacelab simulation

    NASA Technical Reports Server (NTRS)

    Reller, J. O., Jr.

    1976-01-01

    Simplified techniques were applied to conduct an extensive spacelab simulation using the airborne laboratory. The scientific payload was selected to perform studies in upper atmospheric physics and infrared astronomy. The mission was successful and provided extensive data relevant to spacelab objectives on overall management of a complex international payload; experiment preparation, testing, and integration; training for proxy operation in space; data handling; multiexperimenter use of common experimenter facilities (telescopes); multiexperiment operation by experiment operators; selection criteria for spacelab experiment operators; and schedule requirements to prepare for such a spacelab mission.

  13. Improved simulation of river water and groundwater exchange in an alluvial plain using the SWAT model

    USDA-ARS?s Scientific Manuscript database

    Hydrological interaction between surface and subsurface water systems has a significant impact on water quality, ecosystems and biogeochemistry cycling of both systems. Distributed models have been developed to simulate this function, but they require detailed spatial inputs and extensive computati...

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

Arion is a library and tool set that enables researchers to holistically define test system models. Defining a complex system for testing an algorithm or control scheme requires expertise across multiple domains, and simulating such a system requires the integration of multiple simulators and test hardware, each with their own specification languages and concepts. This demands an extensive set of knowledge and capabilities. Arion was developed to alleviate this challenge. Arion is a set of Java libraries that abstracts the concepts from supported simulators into a cohesive model language, allowing someone to build models to their needed level of fidelity and expertise. Arion is also a software tool that translates the user's model back into the specification languages of the simulators and test hardware needed for execution.

  15. Monte Carlo Simulations and Generation of the SPI Response

    NASA Technical Reports Server (NTRS)

    Sturner, S. J.; Shrader, C. R.; Weidenspointner, G.; Teegarden, B. J.; Attie, D.; Diehl, R.; Ferguson, C.; Jean, P.; vonKienlin, A.

    2003-01-01

    In this paper we discuss the methods developed for the production of the INTEGRAL/SPI instrument response. The response files were produced using a suite of Monte Carlo simulation software developed at NASA/GSFC based on the GEANT-3 package available from CERN. The production of the INTEGRAL/SPI instrument response also required the development of a detailed computer mass model for SPI. We discuss our extensive investigations into methods to reduce both the computation time and storage requirements for the SPI response. We also discuss corrections to the simulated response based on our comparison of ground and inflight calibration data with MGEANT simulation.

  16. Monte Carlo Simulations and Generation of the SPI Response

    NASA Technical Reports Server (NTRS)

    Sturner, S. J.; Shrader, C. R.; Weidenspointner, G.; Teegarden, B. J.; Attie, D.; Cordier, B.; Diehl, R.; Ferguson, C.; Jean, P.; vonKienlin, A.

    2003-01-01

In this paper we discuss the methods developed for the production of the INTEGRAL/SPI instrument response. The response files were produced using a suite of Monte Carlo simulation software developed at NASA/GSFC based on the GEANT-3 package available from CERN. The production of the INTEGRAL/SPI instrument response also required the development of a detailed computer mass model for SPI. We discuss our extensive investigations into methods to reduce both the computation time and storage requirements for the SPI response. We also discuss corrections to the simulated response based on our comparison of ground and inflight calibration data with MGEANT simulations.

  17. Rapid Monte Carlo Simulation of Gravitational Wave Galaxies

    NASA Astrophysics Data System (ADS)

    Breivik, Katelyn; Larson, Shane L.

    2015-01-01

With the detection of gravitational waves on the horizon, astrophysical catalogs produced by gravitational wave observatories can be used to characterize the populations of sources and validate different galactic population models. Efforts to simulate gravitational wave catalogs and source populations generally focus on population synthesis models that require extensive time and computational power to produce a single simulated galaxy. Monte Carlo simulations of gravitational wave source populations can also be used to generate observation catalogs from the gravitational wave source population. Monte Carlo simulations have the advantages of flexibility and speed, enabling rapid galactic realizations as a function of galactic binary parameters with less time and fewer computational resources required. We present a Monte Carlo method for rapid galactic simulations of gravitational wave binary populations.
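
    The sketch below conveys the spirit of such a rapid Monte Carlo realization (it is not the authors' code): each binary's parameters are drawn directly from assumed distributions instead of being evolved with full population synthesis. All distributions and cuts are illustrative.

    ```python
    import random

    def draw_binary():
        log_p = random.gauss(3.0, 1.0)        # log10 orbital period in seconds
        m1 = random.uniform(0.2, 1.2)         # white-dwarf masses (M_sun)
        m2 = random.uniform(0.2, m1)
        f_gw = 2.0 / 10**log_p                # dominant GW frequency (Hz)
        return m1, m2, f_gw

    galaxy = [draw_binary() for _ in range(100_000)]   # one fast realization
    print(sum(f > 1e-3 for _, _, f in galaxy), "binaries above 1 mHz")
    ```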

  18. Grandmothering and cognitive resources are required for the emergence of menopause and extensive post-reproductive lifespan.

    PubMed

    Aimé, Carla; André, Jean-Baptiste; Raymond, Michel

    2017-07-01

Menopause, the permanent cessation of ovulation, occurs in humans well before the end of the expected lifespan, leading to an extensive post-reproductive period which remains a puzzle for evolutionary biologists. All human populations display this particularity; thus, it is difficult to empirically evaluate the conditions for its emergence. In this study, we used artificial neural networks to model the emergence and evolution of allocation decisions related to reproduction in simulated populations. When allocation decisions were allowed to freely evolve, both menopause and an extensive post-reproductive lifespan emerged under some ecological conditions. This result allowed us to test various hypotheses about the required conditions for the emergence of menopause and extensive post-reproductive lifespan. Our findings did not support the Maternal Hypothesis (menopause has evolved to avoid the risk of dying in childbirth, which is higher in older women). In contrast, results supported a shared prediction of the Grandmother Hypothesis and the Embodied Capital Model. Indeed, we found that an extensive post-reproductive lifespan allows resource reallocation to increase the fertility of the children and the survival of the grandchildren. Furthermore, neural capital development and the skill intensiveness of the foraging niche, rather than strength, played a major role in shaping the age profile of somatic and cognitive senescence in our simulated populations. This result supports the Embodied Capital Model rather than the Grandmother Hypothesis. Finally, in simulated populations where menopause had already evolved, we found that a reduced post-reproductive lifespan led to reduced fertility of the children and reduced survival of the grandchildren. The results are discussed in the context of the evolutionary emergence of menopause and extensive post-reproductive lifespan.

  19. LANES 1 Users' Guide

    NASA Technical Reports Server (NTRS)

    Jordan, J.

    1985-01-01

    This document is intended for users of the Local Area Network Extensible Simulator, version I. This simulator models the performance of a Fiber Optic network under a variety of loading conditions and network characteristics. The options available to the user for defining the network conditions are described in this document. Computer hardware and software requirements are also defined.

  20. Faculty Perspectives on Effective Integration of Simulation into a Baccalaureate Nursing Curriculum

    ERIC Educational Resources Information Center

    Howell, Linda Jane

    2017-01-01

    Research shows that use of high fidelity simulation (HFS) as a teaching strategy requires extensive amounts of faculty time and financial resources for faculty development and equipment. This project study addressed the challenges encountered in the integration of HFS into a Midwestern metropolitan baccalaureate nursing program. The purpose of…

  1. Preliminary study for a numerical aerodynamic simulation facility. Phase 1: Extension

    NASA Technical Reports Server (NTRS)

    Lincoln, N. R.

    1978-01-01

    Functional requirements and preliminary design data were identified for use in the design of all system components and in the construction of a facility to perform aerodynamic simulation for airframe design. A skeleton structure of specifications for the flow model processor and monitor, the operating system, and the language and its compiler is presented.

  2. Variance in binary stellar population synthesis

    NASA Astrophysics Data System (ADS)

    Breivik, Katelyn; Larson, Shane L.

    2016-03-01

    In the years preceding LISA, Milky Way compact binary population simulations can be used to inform the science capabilities of the mission. Galactic population simulation efforts generally focus on high fidelity models that require extensive computational power to produce a single simulated population for each model. Each simulated population represents an incomplete sample of the functions governing compact binary evolution, thus introducing variance from one simulation to another. We present a rapid Monte Carlo population simulation technique that can simulate thousands of populations in less than a week, thus allowing a full exploration of the variance associated with a binary stellar evolution model.

  3. Studying Variance in the Galactic Ultra-compact Binary Population

    NASA Astrophysics Data System (ADS)

    Larson, Shane L.; Breivik, Katelyn

    2017-01-01

    In the years preceding LISA, Milky Way compact binary population simulations can be used to inform the science capabilities of the mission. Galactic population simulation efforts generally focus on high fidelity models that require extensive computational power to produce a single simulated population for each model. Each simulated population represents an incomplete sample of the functions governing compact binary evolution, thus introducing variance from one simulation to another. We present a rapid Monte Carlo population simulation technique that can simulate thousands of populations on week-long timescales, thus allowing a full exploration of the variance associated with a binary stellar evolution model.

  4. High performance real-time flight simulation at NASA Langley

    NASA Technical Reports Server (NTRS)

    Cleveland, Jeff I., II

    1994-01-01

In order to meet the stringent time-critical requirements for real-time man-in-the-loop flight simulation, computer processing operations must be deterministic and be completed in as short a time as possible. This includes simulation mathematical model computation and data input/output to the simulators. In 1986, in response to increased demands for flight simulation performance, personnel at NASA's Langley Research Center (LaRC), working with the contractor, developed extensions to a standard input/output system to provide for high-bandwidth, low-latency data acquisition and distribution. The Computer Automated Measurement and Control technology (IEEE standard 595) was extended to meet the performance requirements for real-time simulation. This technology extension increased the effective bandwidth by a factor of ten and increased the performance of modules necessary for simulator communications. This technology is being used by more than 80 leading technological developers in the United States, Canada, and Europe. Included among the commercial applications of this technology are nuclear process control, power grid analysis, process monitoring, real-time simulation, and radar data acquisition. Personnel at LaRC have completed the development of the use of supercomputers for simulation mathematical model computation to support real-time flight simulation. This includes the development of a real-time operating system and the development of specialized software and hardware for the CAMAC simulator network. This work, coupled with the use of an open systems software architecture, has advanced the state of the art in real-time flight simulation. The data acquisition technology innovation and experience with recent developments in this technology are described.

  5. ATTDES: An Expert System for Satellite Attitude Determination and Control. 2

    NASA Technical Reports Server (NTRS)

    Mackison, Donald L.; Gifford, Kevin

    1996-01-01

The design, analysis, and flight operations of satellite attitude determination and attitude control systems require extensive mathematical formulations, optimization studies, and computer simulation. This is best done by an analyst with extensive education and experience. The development of programs such as ATTDES permits the use of advanced techniques by those with less experience. Typical tasks include mission analysis to select stabilization and damping schemes, attitude determination sensors and algorithms, and control system designs to meet program requirements. ATTDES is a system that includes all of these activities, including high-fidelity orbit environment models that can be used for preliminary analysis, parameter selection, stabilization schemes, the development of estimators, covariance analyses, and optimization, and can support ongoing orbit activities. The modification of existing simulations to model new configurations for these purposes can be an expensive, time-consuming activity that becomes a pacing item in the development and operation of such new systems. The use of an integrated tool such as ATTDES significantly reduces the effort and time required for these tasks.

  6. An Electrostatic Precipitator System for the Martian Environment

    NASA Technical Reports Server (NTRS)

    Calle, C. I.; Mackey, P. J.; Hogue, M. D.; Johansen, M. R.; Phillips, J. R., III; Clements, J. S.

    2012-01-01

Human exploration missions to Mars will require the development of technologies for the utilization of the planet's own resources for the production of commodities. However, the Martian atmosphere contains large amounts of dust, and the extraction of commodities from this atmosphere requires prior removal of this dust. We report on our development of an electrostatic precipitator able to collect simulated Martian dust particles in atmospheric conditions approaching those of Mars. Extensive experiments with an initial prototype in a simulated Martian atmosphere showed efficiencies of 99%. The design of a second prototype with aerosolized simulated Martian dust in a flow-through configuration is described. Keywords: space applications, electrostatic precipitator, particle control, particle charging.

  7. Quantitative Technique for Comparing Simulant Materials through Figures of Merit

    NASA Technical Reports Server (NTRS)

    Rickman, Doug; Hoelzer, Hans; Fourroux, Kathy; Owens, Charles; McLemore, Carole; Fikes, John

    2007-01-01

The 1989 workshop report Workshop on Production and Uses of Simulated Lunar Materials and the NASA Technical Publication Lunar Regolith Simulant Materials: Recommendations for Standardization, Production, and Usage both identified and reinforced a need for a set of standards and requirements for the production and usage of Lunar simulant materials. As NASA prepares to return to the Moon, and to set out for Mars, a set of early requirements has been developed for simulant materials, and the initial methods to produce and measure those simulants have been defined. Addressed in the requirements document are: 1) a method for evaluating the quality of any simulant of a regolith, 2) the minimum characteristics for simulants of Lunar regolith, and 3) a method to produce simulants needed for NASA's Exploration mission. As an extension of the requirements document, a method to evaluate new and current simulants has been rigorously defined through the mathematics of Figures of Merit (FoM). Requirements and techniques have been developed that allow the simulant provider to compare their product to a standard reference material through Figures of Merit. Standard reference material may be physical material such as the Apollo core samples or material properties predicted for any landing site. The simulant provider is not restricted to providing a single "high fidelity" simulant, which may be costly to produce. The provider can now develop "lower fidelity" simulants for engineering applications such as drilling and mobility applications.

  8. A federated design for a neurobiological simulation engine: the CBI federated software architecture.

    PubMed

    Cornelis, Hugo; Coop, Allan D; Bower, James M

    2012-01-01

Simulator interoperability and extensibility have become growing requirements in computational biology. To address this, we have developed a federated software architecture. It is federated by its union of independent disparate systems under a single cohesive view, provides interoperability through its capability to communicate, execute programs, or transfer data among different independent applications, and supports extensibility by enabling simulator expansion or enhancement without the need for major changes to system infrastructure. Historically, simulator interoperability has relied on development of declarative markup languages such as the neuron modeling language NeuroML, while simulator extension typically occurred through modification of existing functionality. The software architecture we describe here allows for both these approaches. However, it is designed to support alternative paradigms of interoperability and extensibility through the provision of logical relationships and defined application programming interfaces. They allow any appropriately configured component or software application to be incorporated into a simulator. The architecture defines independent functional modules that run stand-alone. They are arranged in logical layers that naturally correspond to the occurrence of high-level data (biological concepts) versus low-level data (numerical values) and distinguish data from control functions. The modular nature of the architecture and its independence from a given technology facilitates communication about similar concepts and functions for both users and developers. It provides several advantages for multiple independent contributions to software development. Importantly, these include: (1) Reduction in complexity of individual simulator components when compared to the complexity of a complete simulator, (2) Documentation of individual components in terms of their inputs and outputs, (3) Easy removal or replacement of unnecessary or obsoleted components, (4) Stand-alone testing of components, and (5) Clear delineation of the development scope of new components.

  9. A Federated Design for a Neurobiological Simulation Engine: The CBI Federated Software Architecture

    PubMed Central

    Cornelis, Hugo; Coop, Allan D.; Bower, James M.

    2012-01-01

Simulator interoperability and extensibility have become growing requirements in computational biology. To address this, we have developed a federated software architecture. It is federated by its union of independent disparate systems under a single cohesive view, provides interoperability through its capability to communicate, execute programs, or transfer data among different independent applications, and supports extensibility by enabling simulator expansion or enhancement without the need for major changes to system infrastructure. Historically, simulator interoperability has relied on development of declarative markup languages such as the neuron modeling language NeuroML, while simulator extension typically occurred through modification of existing functionality. The software architecture we describe here allows for both these approaches. However, it is designed to support alternative paradigms of interoperability and extensibility through the provision of logical relationships and defined application programming interfaces. They allow any appropriately configured component or software application to be incorporated into a simulator. The architecture defines independent functional modules that run stand-alone. They are arranged in logical layers that naturally correspond to the occurrence of high-level data (biological concepts) versus low-level data (numerical values) and distinguish data from control functions. The modular nature of the architecture and its independence from a given technology facilitates communication about similar concepts and functions for both users and developers. It provides several advantages for multiple independent contributions to software development. Importantly, these include: (1) Reduction in complexity of individual simulator components when compared to the complexity of a complete simulator, (2) Documentation of individual components in terms of their inputs and outputs, (3) Easy removal or replacement of unnecessary or obsoleted components, (4) Stand-alone testing of components, and (5) Clear delineation of the development scope of new components. PMID:22242154

  10. 3D numerical simulations of multiphase continental rifting

    NASA Astrophysics Data System (ADS)

    Naliboff, J.; Glerum, A.; Brune, S.

    2017-12-01

Observations of rifted margin architecture suggest continental breakup occurs through multiple phases of extension with distinct styles of deformation. The initial rifting stages are often characterized by slow extension rates and distributed normal faulting in the upper crust decoupled from deformation in the lower crust and mantle lithosphere. Further rifting marks a transition to higher extension rates and coupling between the crust and mantle lithosphere, with deformation typically focused along large-scale detachment faults. Significantly, recent detailed reconstructions and high-resolution 2D numerical simulations suggest that rather than remaining focused on a single long-lived detachment fault, deformation in this phase may progress toward lithospheric breakup through a complex process of fault interaction and development. The numerical simulations also suggest that an initial phase of distributed normal faulting can play a key role in the development of these complex fault networks and the resulting finite deformation patterns. Motivated by these findings, we will present 3D numerical simulations of continental rifting that examine the role of temporal increases in extension velocity on rifted margin structure. The numerical simulations are developed with the massively parallel finite-element code ASPECT. While originally designed to model mantle convection using advanced solvers and adaptive mesh refinement techniques, ASPECT has been extended to model visco-plastic deformation that combines a Drucker-Prager yield criterion with non-linear dislocation and diffusion creep. To promote deformation localization, the internal friction angle and cohesion weaken as a function of accumulated plastic strain. Rather than prescribing a single zone of weakness to initiate deformation, an initial random perturbation of the plastic strain field combined with rapid strain weakening produces distributed normal faulting at relatively slow rates of extension in both 2D and 3D simulations. Our presentation will focus on both the numerical assumptions required to produce these results and variations in 3D rifted margin architecture arising from a transition from slow to rapid rates of extension.
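
    As a minimal sketch of the localization mechanism described above, the snippet below weakens the friction angle and cohesion linearly with accumulated plastic strain and evaluates a standard 2D Drucker-Prager yield stress; all parameter values are illustrative rather than those of the ASPECT models.

    ```python
    import math

    def weakened(v0, vf, strain, interval=(0.0, 1.0)):
        """Linear decay of a material property over a plastic-strain interval."""
        frac = (strain - interval[0]) / (interval[1] - interval[0])
        return v0 + min(max(frac, 0.0), 1.0) * (vf - v0)

    def dp_yield_stress(pressure, strain, phi0=30.0, phif=15.0, c0=20e6, cf=4e6):
        phi = math.radians(weakened(phi0, phif, strain))
        c = weakened(c0, cf, strain)
        return c * math.cos(phi) + pressure * math.sin(phi)  # 2D Drucker-Prager

    for eps in (0.0, 0.5, 1.0):
        print(eps, f"{dp_yield_stress(100e6, eps) / 1e6:.1f} MPa")
    ```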

  11. Urbanization and watershed sustainability: Collaborative simulation modeling of future development states

    NASA Astrophysics Data System (ADS)

    Randhir, Timothy O.; Raposa, Sarah

    2014-11-01

Urbanization has a significant impact on water resources and requires a watershed-based approach to evaluate the impacts of land use and urban development on watershed processes. This study uses simulation with urban policy scenarios to model and develop transferable recommendations for municipalities and cities to guide urban decisions using watershed ecohydrologic principles. The watershed simulation model is used to evaluate intensive (policy within existing built regions) and extensive (policy outside existing built regions) urban development scenarios with and without implementation of Best Management Practices (BMPs). Water quantity and quality changes are simulated to assess the effectiveness of five urban development scenarios. It is observed that an optimal combination of intensive and extensive strategies can be used to sustain urban ecosystems. BMPs are found to be critical in reducing stormwater and water quality impacts of urban development. Conservation zoning and incentives for voluntary adoption of BMPs can be used in sustaining urbanizing watersheds.

  12. Sensitivity Observing System Experiment (SOSE)-a new effective NWP-based tool in designing the global observing system

    NASA Astrophysics Data System (ADS)

    Marseille, Gert-Jan; Stoffelen, Ad; Barkmeijer, Jan

    2008-03-01

Lacking an established methodology to test the potential impact of prospective extensions to the global observing system (GOS) in real atmospheric cases, we developed such a method, called the Sensitivity Observing System Experiment (SOSE). For example, since the GOS is non-uniform, it is of interest to investigate the benefit of complementary observing systems filling its gaps. In a SOSE, adjoint sensitivity structures are used to define a pseudo-true atmospheric state for the simulation of the prospective observing system. Next, the synthetic observations are used together with real observations from the existing GOS in a state-of-the-art Numerical Weather Prediction (NWP) model to assess the potential added value of the new observing system. Unlike full observing system simulation experiments (OSSEs), SOSE can be applied to real extreme events that were badly forecast operationally and only requires the simulation of the new instrument. As such, SOSE is an effective tool, for example, to define observation requirements for extensions to the GOS. These observation requirements may serve as input for the design of an operational network of prospective observing systems. In a companion paper we use SOSE to simulate potential future spaceborne Doppler Wind Lidar (DWL) scenarios and assess their capability to sample meteorologically sensitive areas not well captured by the current GOS, in particular over the Northern Hemisphere oceans.

  13. Modular space station, phase B extension. Information management advanced development. Volume 4: Data processing assembly

    NASA Technical Reports Server (NTRS)

    Gerber, C. R.

    1972-01-01

    The computation and logical functions which are performed by the data processing assembly of the modular space station are defined. The subjects discussed are: (1) requirements analysis, (2) baseline data processing assembly configuration, (3) information flow study, (4) throughput simulation, (5) redundancy study, (6) memory studies, and (7) design requirements specification.

  14. The Kepler End-to-End Model: Creating High-Fidelity Simulations to Test Kepler Ground Processing

    NASA Technical Reports Server (NTRS)

    Bryson, Stephen T.; Jenkins, Jon M.; Peters, Dan J.; Tenenbaum, Peter P.; Klaus, Todd C.; Gunter, Jay P.; Cote, Miles T.; Caldwell, Douglas A.

    2010-01-01

    The Kepler mission is designed to detect the transit of Earth-like planets around Sun-like stars by observing 100,000 stellar targets. Developing and testing the Kepler ground-segment processing system, in particular the data analysis pipeline, requires high-fidelity simulated data. This simulated data is provided by the Kepler End-to-End Model (ETEM). ETEM simulates the astrophysics of planetary transits and other phenomena, properties of the Kepler spacecraft and the format of the downlinked data. Major challenges addressed by ETEM include the rapid production of large amounts of simulated data, extensibility and maintainability.

  15. Modeling and simulation for space medicine operations: preliminary requirements considered

    NASA Technical Reports Server (NTRS)

    Dawson, D. L.; Billica, R. D.; McDonald, P. V.

    2001-01-01

The NASA Space Medicine program is now developing plans for more extensive use of high-fidelity medical simulation systems. The use of simulation is seen as a means to use the limited time available for astronaut medical training more effectively. Training systems should be adaptable for use in a variety of training environments, including classrooms or laboratories, space vehicle mockups, analog environments, and in microgravity. Modeling and simulation can also provide the space medicine development program a mechanism for evaluation of other medical technologies under operationally realistic conditions. Systems and procedures need preflight verification with ground-based testing. Traditionally, component testing has been accomplished, but practical means for "human in the loop" verification of patient care systems have been lacking. Medical modeling and simulation technology offers a potential means to accomplish such validation work. Initial considerations in the development of functional requirements and design standards for simulation systems for space medicine are discussed.

  16. Requirements for Modeling and Simulation for Space Medicine Operations: Preliminary Considerations

    NASA Technical Reports Server (NTRS)

    Dawson, David L.; Billica, Roger D.; Logan, James; McDonald, P. Vernon

    2001-01-01

The NASA Space Medicine program is now developing plans for more extensive use of high-fidelity medical simulation systems. The use of simulation is seen as a means to use the limited time available for astronaut medical training more effectively. Training systems should be adaptable for use in a variety of training environments, including classrooms or laboratories, space vehicle mockups, analog environments, and in microgravity. Modeling and simulation can also provide the space medicine development program a mechanism for evaluation of other medical technologies under operationally realistic conditions. Systems and procedures need preflight verification with ground-based testing. Traditionally, component testing has been accomplished, but practical means for "human in the loop" verification of patient care systems have been lacking. Medical modeling and simulation technology offers a potential means to accomplish such validation work. Initial considerations in the development of functional requirements and design standards for simulation systems for space medicine are discussed.

  17. Normal Brain-Skull Development with Hybrid Deformable VR Models Simulation.

    PubMed

    Jin, Jing; De Ribaupierre, Sandrine; Eagleson, Roy

    2016-01-01

This paper describes a simulation framework for a clinical application involving skull-brain co-development in infants, leading to a platform for craniosynostosis modeling. Craniosynostosis occurs when one or more sutures are fused early in life, resulting in an abnormal skull shape. Surgery is required to reopen the suture and reduce intracranial pressure, but is difficult without any predictive model to assist surgical planning. We aim to study normal brain-skull growth by computer simulation, which requires a head model and appropriate mathematical methods for brain and skull growth, respectively. On the basis of our previous model, we further divided the suture model into fibrous and cartilaginous sutures and developed an algorithm for skull extension. We evaluate the resulting simulation by comparison with datasets of cases and normal growth.

  18. Experimental model: dye penetration of extensive interim restorations used during endodontic treatment while under load in a multiple axis chewing simulator.

    PubMed

    Jensen, Arna-Lee; Abbott, Paul V

    2007-10-01

    The purpose of this study was to design an experimental model that allowed extensive endodontic interim restorations to be tested for dye penetration while under simulated masticatory load. Extracted premolar teeth had standardized mesio-occluso-distal cavities prepared, and the root canals were instrumented. A cotton wool pellet was placed in the pulp chamber, and the cavities were restored with Cavit, IRM, Ketac-Fil Plus, Ketac-Silver, or composite resin (Z100). They were subjected to the equivalent of 3 months of clinical load while exposed to methylene blue dye. Results of this study could not support IRM as a suitable interim endodontic restorative material to use in extensive cavities. The dye penetration in the Ketac-Fil Plus and Ketac-Silver specimens was not predictable, and the results suggested Cavit and Z100 composite resin require further investigations as potentially useful materials for this purpose.

  19. Extending quantum mechanics entails extending special relativity

    NASA Astrophysics Data System (ADS)

    Aravinda, S.; Srikanth, R.

    2016-05-01

    The complementarity between signaling and randomness in any communicated resource that can simulate singlet statistics is generalized by relaxing the assumption of free will in the choice of measurement settings. We show how to construct an ontological extension for quantum mechanics (QMs) through the oblivious embedding of a sound simulation protocol in a Newtonian spacetime. Minkowski or other intermediate spacetimes are ruled out as the locus of the embedding by virtue of hidden influence inequalities. The complementarity transferred from a simulation to the extension unifies a number of results about quantum non-locality, and implies that special relativity has a different significance for the ontological model and for the operational theory it reproduces. Only the latter, being experimentally accessible, is required to be Lorentz covariant. There may be certain Lorentz non-covariant elements at the ontological level, but they will be inaccessible at the operational level in a valid extension. Certain arguments against the extendability of QM, due to Conway and Kochen (2009) and Colbeck and Renner (2012), are attributed to their assumption that the spacetime at the ontological level has Minkowski causal structure.

  20. A dynamic motion simulator for future European docking systems

    NASA Technical Reports Server (NTRS)

    Brondino, G.; Marchal, PH.; Grimbert, D.; Noirault, P.

    1990-01-01

Europe's first confrontation with docking in space will require extensive testing to verify design and performance and to qualify hardware. For this purpose, a Docking Dynamics Test Facility (DDTF) was developed. It allows reproduction on the ground of the same impact loads and relative motion dynamics which would occur in space during docking. It uses a 9-degree-of-freedom servo-motion system, controlled by a real-time computer, which simulates the docking spacecraft in a zero-g environment. The test technique involves an active loop based on six-axis force and torque detection, a mathematical simulation of individual spacecraft dynamics, and a 9-degree-of-freedom servo motion of which 3 DOFs allow extension of the kinematic range to 5 m. The configuration was checked out by closed-loop tests involving spacecraft control models and real sensor hardware. The test facility at present has an extensive configuration that allows evaluation of both proximity control and docking systems. It provides a versatile tool to verify system design, hardware items and performance capabilities in the ongoing HERMES and COLUMBUS programs. The test system is described and its capabilities are summarized.

  1. Control of Transitional and Turbulent Flows Using Plasma-Based Actuators

    DTIC Science & Technology

    2006-06-01

Control of transitional and turbulent flows by means of asymmetric dielectric-barrier-discharge (DBD) actuators is presented. The flow fields are simulated employing an extensively validated... effective use of DBD devices. As a consequence, meaningful computations require the use of three-dimensional large-eddy simulation approaches capable of... a counter-flow DBD actuator is shown to provide an effective on-demand tripping device. This property is exploited for the suppression of laminar...

  2. Taming Log Files from Game/Simulation-Based Assessments: Data Models and Data Analysis Tools. Research Report. ETS RR-16-10

    ERIC Educational Resources Information Center

    Hao, Jiangang; Smith, Lawrence; Mislevy, Robert; von Davier, Alina; Bauer, Malcolm

    2016-01-01

    Extracting information efficiently from game/simulation-based assessment (G/SBA) logs requires two things: a well-structured log file and a set of analysis methods. In this report, we propose a generic data model specified as an extensible markup language (XML) schema for the log files of G/SBAs. We also propose a set of analysis methods for…
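
    A hypothetical illustration of the idea: one XML event record of the kind such a schema might govern, built and then queried for analysis. The element and attribute names are invented here and are not the ETS data model.

    ```python
    import xml.etree.ElementTree as ET

    # Build a toy log with one timestamped action event (names are invented).
    log = ET.Element("log", attrib={"session": "demo-001"})
    event = ET.SubElement(log, "event", attrib={"time": "12.5", "type": "action"})
    ET.SubElement(event, "target").text = "valve_3"
    ET.SubElement(event, "result").text = "opened"
    print(ET.tostring(log, encoding="unicode"))

    # Analysis side: pull out all action events for feature extraction.
    actions = [e for e in log.iter("event") if e.get("type") == "action"]
    print(len(actions), "action event(s)")
    ```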

  3. Design of a cylindrical LED substrate without radiator

    NASA Astrophysics Data System (ADS)

    Tang, Fan; Guo, Zhenning

    2017-12-01

To reduce the weight and production costs of light-emitting diode (LED) lamps, we applied the principle of the chimney effect to design a cylindrical LED substrate without a radiator. We built a 3D model by using Solidworks software and applied the flow simulation plug-in to conduct model simulation, thereby optimizing the heat source distribution and substrate thickness. The results indicate that the design achieved optimal cooling with a substrate with an upper extension length of 35 mm, a lower extension length of 8 mm, and a thickness of 1 mm. For a substrate of those dimensions, the highest LED chip temperature was 64.78 °C, the weight of the substrate was 35.09 g, and R_jb = 7.00 K/W. If the substrate is powered at 8, 10, and 12 W, its temperature meets LED safety requirements. In physical tests, the highest temperature for a physical 8 W cylindrical LED substrate was 66 °C, which differed by only 1.22 °C from the simulation results, verifying the validity of the simulation. The designed cylindrical LED substrate can be used in high-power LED lamps that do not require radiators. This design is not only excellent for heat dissipation, but also for its low weight, low cost, and simplicity of manufacture.
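
    The quoted junction-to-board resistance supports a quick sanity check of the form T_junction = T_ref + P * R_jb. The reference temperature and per-path power below are hypothetical, chosen only to show the arithmetic; they are not values from the paper.

    ```python
    # Back-of-envelope thermal-resistance estimate (hypothetical inputs).
    def junction_temp(t_ref_c, power_w, r_jb_k_per_w=7.00):
        """Junction temperature from a reference temperature and power flow."""
        return t_ref_c + power_w * r_jb_k_per_w

    # e.g. 2 W conducted through one path from a 50 degC reference:
    print(f"{junction_temp(50.0, 2.0):.1f} degC")   # 64.0 degC
    ```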

  4. Use of high performance networks and supercomputers for real-time flight simulation

    NASA Technical Reports Server (NTRS)

    Cleveland, Jeff I., II

    1993-01-01

    In order to meet the stringent time-critical requirements for real-time man-in-the-loop flight simulation, computer processing operations must be consistent in processing time and be completed in as short a time as possible. These operations include simulation mathematical model computation and data input/output to the simulators. In 1986, in response to increased demands for flight simulation performance, NASA's Langley Research Center (LaRC), working with the contractor, developed extensions to the Computer Automated Measurement and Control (CAMAC) technology which resulted in a factor of ten increase in the effective bandwidth and reduced latency of modules necessary for simulator communication. This technology extension is being used by more than 80 leading technological developers in the United States, Canada, and Europe. Included among the commercial applications are nuclear process control, power grid analysis, process monitoring, real-time simulation, and radar data acquisition. Personnel at LaRC are completing the development of the use of supercomputers for mathematical model computation to support real-time flight simulation. This includes the development of a real-time operating system and development of specialized software and hardware for the simulator network. This paper describes the data acquisition technology and the development of supercomputing for flight simulation.

  5. Simulation of 10 A electron-beam formation and collection for a high current electron-beam ion source

    NASA Astrophysics Data System (ADS)

    Kponou, A.; Beebe, E.; Pikin, A.; Kuznetsov, G.; Batazova, M.; Tiunov, M.

    1998-02-01

Presented is a report on the development of an electron-beam ion source (EBIS) for the Relativistic Heavy Ion Collider at Brookhaven National Laboratory (BNL), which requires operation with a 10 A electron beam. This is approximately an order of magnitude higher current than in any existing EBIS device. A test stand is presently being designed and constructed where EBIS components will be tested; it will be reported on in a separate paper at this conference. The design of the 10 A electron gun, drift tubes, and electron collector requires extensive computer simulations. Calculations have been performed at Novosibirsk and BNL using two different programs, SAM and EGUN. Results of these simulations will be presented.

  6. OpenMM 7: Rapid development of high performance algorithms for molecular dynamics

    PubMed Central

    Swails, Jason; Zhao, Yutong; Beauchamp, Kyle A.; Wang, Lee-Ping; Stern, Chaya D.; Brooks, Bernard R.; Pande, Vijay S.

    2017-01-01

    OpenMM is a molecular dynamics simulation toolkit with a unique focus on extensibility. It allows users to easily add new features, including forces with novel functional forms, new integration algorithms, and new simulation protocols. Those features automatically work on all supported hardware types (including both CPUs and GPUs) and perform well on all of them. In many cases they require minimal coding, just a mathematical description of the desired function. They also require no modification to OpenMM itself and can be distributed independently of OpenMM. This makes it an ideal tool for researchers developing new simulation methods, and also allows those new methods to be immediately available to the larger community. PMID:28746339
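
    A short sketch of the extensibility highlighted above: in OpenMM a new nonbonded interaction can be specified as an algebraic expression string, with no C++ or GPU code. The screened-Coulomb form and parameter values are illustrative; the import path assumes a recent OpenMM (older 7.x releases import from simtk.openmm).

    ```python
    import openmm as mm

    system = mm.System()
    for _ in range(2):
        system.addParticle(39.9)               # two argon-mass particles (amu)

    # Custom pair potential defined purely by a mathematical expression.
    force = mm.CustomNonbondedForce("A*exp(-kappa*r)/r")
    force.addGlobalParameter("A", 10.0)        # kJ/mol*nm, illustrative
    force.addGlobalParameter("kappa", 2.0)     # 1/nm, illustrative
    for _ in range(2):
        force.addParticle([])                  # no per-particle parameters
    system.addForce(force)

    context = mm.Context(system, mm.VerletIntegrator(0.001))
    context.setPositions([(0.0, 0.0, 0.0), (0.5, 0.0, 0.0)])  # nm
    print(context.getState(getEnergy=True).getPotentialEnergy())
    ```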

  7. Quadriceps force during knee extension in different replacement scenarios with a modular partial prosthesis.

    PubMed

    Calliess, Tilman; Schado, Ssuheib; Richter, Berna I; Becher, Christoph; Ezechieli, Marco; Ostermeier, Sven

    2014-02-01

Previous biomechanical studies have shown that bi-cruciate retaining knee replacement does not significantly alter normal knee kinematics; however, there are no data on the influence of a combined medial and patellofemoral bi-compartmental arthroplasty. The purpose of this in vitro study was to evaluate the effect of different replacement scenarios with a modular partial knee replacement system on the amount of quadriceps force required to extend the knee during an isokinetic extension cycle. Ten human knee specimens were tested in a kinematic knee simulator under (1) physiologic conditions and after subsequent implantation of (2) a medial unicondylar and (3) a trochlear replacement. An isokinetic extension cycle of the knee with a constant extension moment of 31 Nm was simulated. The resulting quadriceps extension force was measured from 120° to full knee extension. The quadriceps force curve described a typically sinusoidal characteristic before and after each replacement scenario. The isolated medial replacement resulted in a slightly but significantly higher maximum quadriceps force (1510 N vs. 1585 N, P = 0.006), and the subsequent trochlear replacement produced an additional increase (1801 N, P = 0.008). However, for both replacements no significant difference from the untreated condition could be detected in mid-flexion (10-50°). When considering a bi-compartmental replacement, the increase in maximum quadriceps force needed to extend the knee must be kept in mind. However, the close-to-physiological movement in mid-flexion suggests that patients with a bi-cruciate retaining arthroplasty might have an advantage in knee stability compared to total knee arthroplasty.

  8. Nanosecond Enhancements of the Atmospheric Electron Density by Extensive Air Showers

    NASA Astrophysics Data System (ADS)

    Rutjes, C.; Camporeale, E.; Ebert, U.; Buitink, S.; Scholten, O.; Trinh, G. T. N.; Witteveen, J.

    2015-12-01

As is well known, a sufficient density of free electrons and strong electric fields are the basic requirements to start any electrical discharge. In the context of thunderstorm discharges it has become clear that, in addition, droplets and/or ice particles are required to enhance the electric field to values above breakdown. In our recent study [1] we have shown that these three ingredients have to interplay to allow for lightning inception triggered by an extensive air shower event. Extensive air showers are a very stochastic natural phenomenon, creating highly coherent sub-nanosecond enhancements of the atmospheric electron density. To predict these electron density enhancements accurately, one has to take the uncertainty of the input variables into account. For this study we use the initial energy, inclination and altitude of first interaction, which influence the evolution of the shower significantly. To this end, we use the stochastic collocation method [2] to post-process our detailed Monte Carlo extensive air shower simulations, done with the CORSIKA [3] software package, which provides an efficient and elegant way to determine the distribution of the atmospheric electron density enhancements. [1] Dubinova, A., Rutjes, C., Ebert, E., Buitink, S., Scholten, O., and Trinh, G. T. N. "Prediction of Lightning Inception by Large Ice Particles and Extensive Air Showers." PRL 115 015002 (2015) [2] G.J.A. Loeven, J.A.S. Witteveen, H. Bijl, Probabilistic collocation: an efficient nonintrusive approach for arbitrarily distributed parametric uncertainties, 45th AIAA Aerospace Sciences Meeting, Reno, Nevada, 2007, AIAA-2007-317 [3] Heck, Dieter, et al. CORSIKA: A Monte Carlo code to simulate extensive air showers. No. FZKA-6019. 1998.

  9. An Extension of a Parallel-Distributed Processing Framework of Reading Aloud in Japanese: Human Nonword Reading Accuracy Does Not Require a Sequential Mechanism

    ERIC Educational Resources Information Center

    Ikeda, Kenji; Ueno, Taiji; Ito, Yuichi; Kitagami, Shinji; Kawaguchi, Jun

    2017-01-01

    Humans can pronounce a nonword (e.g., rint). Some researchers have interpreted this behavior as requiring a sequential mechanism by which a grapheme-phoneme correspondence rule is applied to each grapheme in turn. However, several parallel-distributed processing (PDP) models in English have simulated human nonword reading accuracy without a…

  10. Exploring the use of high-fidelity simulation training to enhance clinical skills.

    PubMed

    Ann Kirkham, Lucy

    2018-02-07

The use of interprofessional simulation training to enhance nursing students' performance of technical and non-technical clinical skills is becoming increasingly common. Simulation training can involve the use of role play, virtual reality or patient simulator manikins to replicate clinical scenarios and assess the nursing student's ability to, for example, undertake clinical observations or work as part of a team. Simulation training enables nursing students to practise clinical skills in a safe environment. Effective simulation training requires extensive preparation, and debriefing is necessary following a simulated training session to review any positive or negative aspects of the learning experience. This article discusses a high-fidelity simulated training session that was used to assess a group of third-year nursing students and foundation level 1 medical students. This involved the use of a patient simulator manikin in a scenario that required the collaborative management of a deteriorating patient.

  11. NASA/ESA CV-990 airborne simulation of Spacelab

    NASA Technical Reports Server (NTRS)

    Mulholland, D.; Neel, C.; De Waard, J.; Lovelett, R.; Weaver, L.; Parker, R.

    1975-01-01

    The paper describes the joint NASA/ESA extensive Spacelab simulation using the NASA CV-990 airborne laboratory. The scientific payload was selected to conduct studies in upper atmospheric physics and infrared astronomy. Two experiment operators from Europe and two from the U.S. were selected to live aboard the aircraft along with a mission manager for a six-day period and operate the experiments on behalf of the principal scientists. The mission was successful and provided extensive data relevant to Spacelab objectives on overall management of a complex international payload; experiment preparation, testing, and integration; training for proxy operation in space; data handling; multiexperimenter use of common experimenter facilities (telescopes); and schedule requirements to prepare for such a Spacelab mission.

  12. GPU accelerated Discrete Element Method (DEM) molecular dynamics for conservative, faceted particle simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spellings, Matthew; Marson, Ryan L.

    Faceted shapes, such as polyhedra, are commonly found in systems of nanoscale, colloidal, and granular particles. Many interesting physical phenomena, like crystal nucleation and growth, vacancy motion, and glassy dynamics are challenging to model in these systems because they require detailed dynamical information at the individual particle level. Within the granular materials community the Discrete Element Method has been used extensively to model systems of anisotropic particles under gravity, with friction. We provide an implementation of this method intended for simulation of hard, faceted nanoparticles, with a conservative Weeks–Chandler–Andersen (WCA) interparticle potential, coupled to a thermodynamic ensemble. This method is a natural extension of classical molecular dynamics and enables rigorous thermodynamic calculations for faceted particles.
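
    Since the abstract names the conservative WCA interparticle potential, a short sketch of that standard pair potential may be useful; this is the textbook WCA form, not code from the cited implementation.

      # Standard Weeks-Chandler-Andersen (WCA) pair potential: Lennard-Jones cut
      # at its minimum (r = 2^(1/6) * sigma) and shifted up by epsilon, giving a
      # purely repulsive interaction that is continuous at the cutoff.
      import numpy as np

      def wca(r, epsilon=1.0, sigma=1.0):
          r = np.asarray(r, dtype=float)
          cutoff = 2.0 ** (1.0 / 6.0) * sigma
          sr6 = (sigma / r) ** 6
          u = 4.0 * epsilon * (sr6 ** 2 - sr6) + epsilon
          return np.where(r < cutoff, u, 0.0)

      print(wca([1.0, 1.1, 1.5]))  # repulsive near contact, zero past the cutoff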

  13. ASC FY17 Implementation Plan, Rev. 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamilton, P. G.

    The Stockpile Stewardship Program (SSP) is an integrated technical program for maintaining the safety, surety, and reliability of the U.S. nuclear stockpile. The SSP uses nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of experimental facilities and programs, and the computational capabilities to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources that support annual stockpile assessment and certification, study advanced nuclear weapons design and manufacturing processes, analyze accident scenarios and weapons aging, and provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balance of resources, including technical staff, hardware, simulation software, and computer science solutions.

  14. Surgical model-view-controller simulation software framework for local and collaborative applications

    PubMed Central

    Sankaranarayanan, Ganesh; Halic, Tansel; Arikatla, Venkata Sreekanth; Lu, Zhonghua; De, Suvranu

    2010-01-01

    Purpose: Surgical simulations require haptic interactions and collaboration in a shared virtual environment. A software framework for decoupled surgical simulation based on a multi-controller and multi-viewer model-view-controller (MVC) pattern was developed and tested. Methods: A software framework for multimodal virtual environments was designed, supporting both visual interactions and haptic feedback while providing developers with an integration tool for heterogeneous architectures maintaining high performance, simplicity of implementation, and straightforward extension. The framework uses decoupled simulation with updates of over 1,000 Hz for haptics and accommodates networked simulation with delays of over 1,000 ms without performance penalty. Results: The simulation software framework was implemented and was used to support the design of virtual reality-based surgery simulation systems. The framework supports the high level of complexity of such applications and the fast response required for interaction with haptics. The efficacy of the framework was tested by implementation of a minimally invasive surgery simulator. Conclusion: A decoupled simulation approach can be implemented as a framework to handle simultaneous processes of the system at the various frame rates each process requires. The framework was successfully used to develop collaborative virtual environments (VEs) involving geographically distributed users connected through a network, with the results comparable to VEs for local users. PMID:20714933

  15. Surgical model-view-controller simulation software framework for local and collaborative applications.

    PubMed

    Maciel, Anderson; Sankaranarayanan, Ganesh; Halic, Tansel; Arikatla, Venkata Sreekanth; Lu, Zhonghua; De, Suvranu

    2011-07-01

    Surgical simulations require haptic interactions and collaboration in a shared virtual environment. A software framework for decoupled surgical simulation based on a multi-controller and multi-viewer model-view-controller (MVC) pattern was developed and tested. A software framework for multimodal virtual environments was designed, supporting both visual interactions and haptic feedback while providing developers with an integration tool for heterogeneous architectures maintaining high performance, simplicity of implementation, and straightforward extension. The framework uses decoupled simulation with updates of over 1,000 Hz for haptics and accommodates networked simulation with delays of over 1,000 ms without performance penalty. The simulation software framework was implemented and was used to support the design of virtual reality-based surgery simulation systems. The framework supports the high level of complexity of such applications and the fast response required for interaction with haptics. The efficacy of the framework was tested by implementation of a minimally invasive surgery simulator. A decoupled simulation approach can be implemented as a framework to handle simultaneous processes of the system at the various frame rates each process requires. The framework was successfully used to develop collaborative virtual environments (VEs) involving geographically distributed users connected through a network, with the results comparable to VEs for local users.
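
    The decoupled-simulation idea in these two records (separate processes updating at the rates they require) can be sketched in a few lines. This is only an illustration under assumed names, not the authors' MVC framework: a haptics loop and a render loop run in separate threads at very different rates, sharing state under a lock.

      # Hedged sketch of decoupled simulation loops (illustrative only): each
      # loop runs at its own rate and touches a shared scene state via a lock.
      import threading, time

      state = {"tool_pos": 0.0}
      lock = threading.Lock()
      stop = threading.Event()

      def loop(rate_hz, step):
          period = 1.0 / rate_hz
          while not stop.is_set():
              with lock:
                  step(state)
              time.sleep(period)

      haptics = threading.Thread(
          target=loop, args=(1000, lambda s: s.update(tool_pos=s["tool_pos"] + 1e-4)))
      render = threading.Thread(target=loop, args=(60, lambda s: None))  # redraw here
      haptics.start(); render.start()
      time.sleep(0.1); stop.set()
      haptics.join(); render.join()
      print(state["tool_pos"])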

  16. Spontaneous thrombosis of congenital extrahepatic portosystemic shunt (Abernethy malformation) simulating inguinal hernia incarceration.

    PubMed

    Afzal, Samara; Nair, Amit; Grainger, Jennie; Latif, Sherif; Rehman, Atiq-ur

    2010-08-01

    Tender lumps in the inguinal region are often explored emergently to treat suspected hernial strangulation. We discuss the case of an adult male who presented acutely with a tender inguinal swelling and raised inflammatory markers and was therefore deemed to require surgical exploration. However, preoperative abdominal computerized tomography (CT) revealed an extensive thrombosing congenital venous malformation of portosystemic origin with extension into the symptomatic inguinal canal. A potentially lethal exsanguination from surgery was thus avoided.

  17. Development of Improved Models, Stochasticity, and Frameworks for the MIT Extensible Air Network Simulation

    NASA Technical Reports Server (NTRS)

    Clarke, John-Paul

    2004-01-01

    MEANS, the MIT Extensible Air Network Simulation, was created in February of 2001, and has been developed with support from NASA Ames since August of 2001. MEANS is a simulation tool which is designed to maximize fidelity without requiring data of such a low level as to preclude easy examination of alternative scenarios. To this end, MEANS is structured in a modular fashion to allow more detailed components to be brought in when desired, and left out when they would only be an impediment. Traditionally, one of the difficulties with high-fidelity models is that they require a level of detail in their data that is difficult to obtain. For analysis of past scenarios, the required data may not have been collected, or may be considered proprietary and thus difficult for independent researchers to obtain. For hypothetical scenarios, generation of the data is sufficiently difficult to be a task in and of itself. Often, simulations designed by a researcher will model exactly one element of the problem well and in detail, while assuming away other parts of the problem which are not of interest or for which data is not available. While these models are useful for working with the task at hand, they are very often not applicable to future problems. The MEANS simulation attempts to address these problems by using a modular design which provides components of varying fidelity for each aspect of the simulation. This allows for the most accurate model for which data is available to be used. It also provides for easy analysis of sensitivity to data accuracy. This can be particularly useful in the case where accurate data is available for some subset of the situations that are to be considered. Furthermore, the ability to use the same model while examining effects on different parts of a system reduces the time spent learning the simulation, and provides for easier comparisons between changes to different parts of the system.

  18. Sub-half-micron contact window design with 3D photolithography simulator

    NASA Astrophysics Data System (ADS)

    Brainerd, Steve K.; Bernard, Douglas A.; Rey, Juan C.; Li, Jiangwei; Granik, Yuri; Boksha, Victor V.

    1997-07-01

    In state-of-the-art IC design and manufacturing, certain lithography layers have unique requirements. Latitudes and tolerances that apply to contacts and polysilicon gates are tight for such critical layers. Industry experts are discussing the most cost-effective ways to use feature-oriented equipment and materials already developed for these layers. Such requirements introduce new dimensions into the traditionally challenging task for the photolithography engineer when considering various combinations of multiple factors to optimize and control the process. In addition, he/she faces a rapidly increasing cost of experiments, limited time, and scarce access to equipment to conduct them. All the reasons presented above support simulation as an ideal method to satisfy these demands. However, lithography engineers may be easily dissatisfied with a simulation tool when discovering disagreement between the simulation and experimental data. The problem is that several parameters used in photolithography simulation are very process specific. Calibration, i.e. matching experimental and simulation data using a specific set of procedures, allows one to effectively use the simulation tool. We present results of a simulation-based approach to optimize photolithography processes for sub-0.5 micron contact windows. Our approach consists of: (1) 3D simulation to explore different lithographic options, (2) calibration to a range of process conditions with extensive use of specifically developed optimization techniques. The choice of a 3D simulator is essential because of the 3D nature of the problem of contact window design. We use DEPICT 4.1. This program performs fast aerial image simulation as presented before. For 3D exposure the program uses an extension to three dimensions of the high numerical aperture model combined with Fast Fourier Transforms for maximum performance and accuracy. We use the Kim (U.C. Berkeley) model and the fast marching Level Set method, respectively, for the calculation of resist development rates and resist surface movement during the development process. Calibration efforts were aimed at matching experimental results on contact windows obtained after exposure of a binary mask. Additionally, simulation was applied to conduct quantitative analysis of PSM design capabilities, optical proximity correction, and stepper parameter optimization. Extensive experiments covered exposure (ASML 5500/100D stepper), pre- and post-exposure bake, and development (2.38% TMAH, puddle process) of JSR IX725D2G and TOK iP3500 photoresist films on 200 mm test wafers. 'Aquatar' was used as the top antireflective coating. SEM pictures of developed patterns were analyzed and compared with simulation results for different values of defocus, exposure energy, numerical aperture, and partial coherence.

  19. Simulation of wave interactions with MHD

    NASA Astrophysics Data System (ADS)

    Batchelor, D.; Alba, C.; Bateman, G.; Bernholdt, D.; Berry, L.; Bonoli, P.; Bramley, R.; Breslau, J.; Chance, M.; Chen, J.; Choi, M.; Elwasif, W.; Fu, G.; Harvey, R.; Jaeger, E.; Jardin, S.; Jenkins, T.; Keyes, D.; Klasky, S.; Kruger, S.; Ku, L.; Lynch, V.; McCune, D.; Ramos, J.; Schissel, D.; Schnack, D.; Wright, J.

    2008-07-01

    The broad scientific objectives of the SWIM (Simulation of Wave Interaction with MHD) project are twofold: (1) improve our understanding of interactions that both radio frequency (RF) wave and particle sources have on extended-MHD phenomena, and substantially improve our capability for predicting and optimizing the performance of burning plasmas in devices such as ITER; and (2) develop an integrated computational system for treating multiphysics phenomena with the required flexibility and extensibility to serve as a prototype for the Fusion Simulation Project. The Integrated Plasma Simulator (IPS) has been implemented. Presented here are initial physics results on RF effects on MHD instabilities in tokamaks as well as simulation results for tokamak discharge evolution using the IPS.

  20. Extension of a Kinetic Approach to Chemical Reactions to Electronic Energy Levels and Reactions Involving Charged Species with Application to DSMC Simulations

    NASA Technical Reports Server (NTRS)

    Liechty, Derek S.

    2014-01-01

    The ability to compute rarefied, ionized hypersonic flows is becoming more important as missions such as Earth reentry, landing high mass payloads on Mars, and the exploration of the outer planets and their satellites are being considered. Recently introduced molecular-level chemistry models that predict equilibrium and nonequilibrium reaction rates using only kinetic theory and fundamental molecular properties are extended in the current work to include electronic energy level transitions and reactions involving charged particles. These extensions are shown to agree favorably with reported transition and reaction rates from the literature for near-equilibrium conditions. Also, the extensions are applied to the second flight of the Project FIRE flight experiment at 1634 seconds with a Knudsen number of 0.001 at an altitude of 76.4 km. In order to accomplish this, NASA's direct simulation Monte Carlo code DAC was rewritten to include the ability to simulate charge-neutral ionized flows, take advantage of the recently introduced chemistry model, and to include the extensions presented in this work. The 1634 second data point was chosen for comparisons to be made in order to include a CFD solution. The Knudsen number at this point in time is such that the DSMC simulations are still tractable and the CFD computations are at the edge of what is considered valid because, although near-transitional, the flow is still considered to be continuum. It is shown that the inclusion of electronic energy levels in the DSMC simulation is necessary for flows of this nature and is required for comparison to the CFD solution. The flow field solutions are also post-processed by the nonequilibrium radiation code HARA to compute the radiative portion.

  1. Computers for real time flight simulation: A market survey

    NASA Technical Reports Server (NTRS)

    Bekey, G. A.; Karplus, W. J.

    1977-01-01

    An extensive computer market survey was made to determine those available systems suitable for current and future flight simulation studies at Ames Research Center. The primary requirement is for the computation of relatively high frequency content (5 Hz) math models representing powered lift flight vehicles. The Rotor Systems Research Aircraft (RSRA) was used as a benchmark vehicle for computation comparison studies. The general nature of helicopter simulations and a description of the benchmark model are presented, and some of the sources of simulation difficulties are examined. A description of various applicable computer architectures is presented, along with detailed discussions of leading candidate systems and comparisons between them.

  2. A glacier runoff extension to the Precipitation Runoff Modeling System

    USGS Publications Warehouse

    Van Beusekom, Ashley E.; Viger, Roland

    2016-01-01

    A module to simulate glacier runoff, PRMSglacier, was added to PRMS (Precipitation Runoff Modeling System), a distributed-parameter, physical-process hydrological simulation code. The extension does not require extensive on-glacier measurements or computational expense but still relies on physical principles over empirical relations as much as is feasible while maintaining model usability. PRMSglacier is validated on two basins in Alaska, the Wolverine and Gulkana Glacier basins, which have been studied since 1966 and have a substantial amount of data with which to test model performance over a long period of time covering a wide range of climatic and hydrologic conditions. When error in field measurements is considered, the Nash-Sutcliffe efficiencies of streamflow are 0.87 and 0.86, the absolute bias fractions of the winter mass balance simulations are 0.10 and 0.08, and the absolute bias fractions of the summer mass balances are 0.01 and 0.03, all computed over 42 years for the Wolverine and Gulkana Glacier basins, respectively. Without taking into account measurement error, the values are still within the range achieved by the more computationally expensive codes tested over shorter time periods.
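
    For reference, the Nash-Sutcliffe efficiency quoted above is a standard skill score; a minimal computation (with made-up arrays, not the Alaska data) is:

      # Nash-Sutcliffe efficiency: 1 - SSE / variance-sum of the observations.
      # 1 is a perfect match; 0 means no better than predicting the observed mean.
      import numpy as np

      def nash_sutcliffe(obs, sim):
          obs, sim = np.asarray(obs, float), np.asarray(sim, float)
          return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

      obs = np.array([1.0, 2.0, 3.0, 2.5])  # observed streamflow (illustrative)
      sim = np.array([1.1, 1.9, 3.2, 2.4])  # simulated streamflow (illustrative)
      print(nash_sutcliffe(obs, sim))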

  3. Tool for Rapid Analysis of Monte Carlo Simulations

    NASA Technical Reports Server (NTRS)

    Restrepo, Carolina; McCall, Kurt E.; Hurtado, John E.

    2011-01-01

    Designing a spacecraft, or any other complex engineering system, requires extensive simulation and analysis work. Oftentimes, the large amounts of simulation data generated are very difficult and time consuming to analyze, with the added risk of overlooking potentially critical problems in the design. The authors have developed a generic data analysis tool that can quickly sort through large data sets and point an analyst to the areas in the data set that cause specific types of failures. The Tool for Rapid Analysis of Monte Carlo simulations (TRAM) has been used in recent design and analysis work for the Orion vehicle, greatly decreasing the time it takes to evaluate performance requirements. A previous version of this tool was developed to automatically identify driving design variables in Monte Carlo data sets. This paper describes a new, parallel version of TRAM implemented on a graphical processing unit, and presents analysis results for NASA's Orion Monte Carlo data to demonstrate its capabilities.
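
    As a hedged illustration of the kind of sorting such a tool automates (the paper does not detail TRAM's algorithm, and the sketch below is not it): one simple approach ranks input variables by how differently they are distributed in failing versus passing Monte Carlo cases.

      # Hypothetical sketch: rank candidate driving variables by the two-sample
      # KS statistic between their values in failed and passing runs.
      import numpy as np
      from scipy.stats import ks_2samp

      rng = np.random.default_rng(0)
      n = 1000
      inputs = {"mass": rng.normal(0, 1, n), "cg_offset": rng.normal(0, 1, n)}
      failed = inputs["cg_offset"] > 1.0  # pretend failures track cg_offset

      scores = {name: ks_2samp(x[failed], x[~failed]).statistic
                for name, x in inputs.items()}
      for name, s in sorted(scores.items(), key=lambda kv: -kv[1]):
          print(f"{name}: KS = {s:.2f}")  # cg_offset should rank first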

  4. SIMULATING RADIONUCLIDE FATE AND TRANSPORT IN THE UNSATURATED ZONE: EVALUATION AND SENSITIVITY ANALYSES OF SELECT COMPUTER MODELS

    EPA Science Inventory

    Numerical, mathematical models of water and chemical movement in soils are used as decision aids for determining soil screening levels (SSLs) of radionuclides in the unsaturated zone. Many models require extensive input parameters which include uncertainty due to soil variabil...

  5. J-adaptive estimation with estimated noise statistics

    NASA Technical Reports Server (NTRS)

    Jazwinski, A. H.; Hipkins, C.

    1973-01-01

    The J-adaptive sequential estimator is extended to include simultaneous estimation of the noise statistics in a model for system dynamics. This extension completely automates the estimator, eliminating the requirement of an analyst in the loop. Simulations in satellite orbit determination demonstrate the efficacy of the sequential estimation algorithm.

  6. Simulated Tempering Distributed Replica Sampling, Virtual Replica Exchange, and Other Generalized-Ensemble Methods for Conformational Sampling.

    PubMed

    Rauscher, Sarah; Neale, Chris; Pomès, Régis

    2009-10-13

    Generalized-ensemble algorithms in temperature space have become popular tools to enhance conformational sampling in biomolecular simulations. A random walk in temperature leads to a corresponding random walk in potential energy, which can be used to cross over energetic barriers and overcome the problem of quasi-nonergodicity. In this paper, we introduce two novel methods: simulated tempering distributed replica sampling (STDR) and virtual replica exchange (VREX). These methods are designed to address the practical issues inherent in the replica exchange (RE), simulated tempering (ST), and serial replica exchange (SREM) algorithms. RE requires a large, dedicated, and homogeneous cluster of CPUs to function efficiently when applied to complex systems. ST and SREM both have the drawback of requiring extensive initial simulations, possibly adaptive, for the calculation of weight factors or potential energy distribution functions. STDR and VREX alleviate the need for lengthy initial simulations, and for synchronization and extensive communication between replicas. Both methods are therefore suitable for distributed or heterogeneous computing platforms. We perform an objective comparison of all five algorithms in terms of both implementation issues and sampling efficiency. We use disordered peptides in explicit water as test systems, for a total simulation time of over 42 μs. Efficiency is defined in terms of both structural convergence and temperature diffusion, and we show that these definitions of efficiency are in fact correlated. Importantly, we find that ST-based methods exhibit faster temperature diffusion and correspondingly faster convergence of structural properties compared to RE-based methods. Within the RE-based methods, VREX is superior to both SREM and RE. On the basis of our observations, we conclude that ST is ideal for simple systems, while STDR is well-suited for complex systems.
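
    A minimal sketch of the temperature move underlying the ST-based methods discussed (the generic textbook form, not the STDR or VREX implementations): a jump from inverse temperature beta_k to a neighboring beta_j at potential energy E is accepted with probability min{1, exp[(beta_k - beta_j) E + (w_j - w_k)]}, where the w's are the precomputed weights the abstract refers to.

      # Generic simulated-tempering temperature move (textbook form): Metropolis
      # acceptance with assumed weights w[k] that flatten temperature visitation.
      import math, random

      def attempt_temp_move(E, k, betas, w):
          j = k + random.choice((-1, 1))
          if not 0 <= j < len(betas):
              return k  # no neighbor on that side; stay put
          log_acc = (betas[k] - betas[j]) * E + (w[j] - w[k])
          return j if math.log(random.random()) < log_acc else k

      betas = [1.0 / t for t in (300.0, 320.0, 342.0, 366.0)]  # assumed ladder
      w = [0.0, 0.5, 1.1, 1.8]                                 # assumed weights
      print(attempt_temp_move(E=-120.0, k=1, betas=betas, w=w))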

  7. Simulation of rotor blade element turbulence

    NASA Technical Reports Server (NTRS)

    Mcfarland, R. E.; Duisenberg, Ken

    1995-01-01

    A piloted, motion-based simulation of Sikorsky's Black Hawk helicopter was used as a platform for the investigation of rotorcraft responses to vertical turbulence. By using an innovative temporal and geometrical distribution algorithm that preserved the statistical characteristics of the turbulence over the rotor disc, stochastic velocity components were applied at each of twenty blade-element stations. This model was implemented on NASA Ames' Vertical Motion Simulator (VMS), and ten test pilots were used to establish that the model created realistic cues. The objectives of this research included the establishment of a simulation-technology basis for future investigation into real-time turbulence modeling. This goal was achieved; our extensive additions to the rotor model added less than 10 percent computational overhead. Using a VAX 9000 computer, the entire simulation required a cycle time of less than 12 msec. Pilot opinion during this simulation was generally quite favorable. For low-speed flight the consensus was that SORBET (acronym for title) was better than the conventional body-fixed model used for comparison purposes, which was judged too violent (like a washboard). For high-speed flight the pilots could not identify differences between these models. These opinions were something of a surprise because only the vertical turbulence component on the rotor system was implemented in SORBET. Because of the finite-element distribution of the inputs, induced outputs were observed in all translational and rotational axes. Extensive post-simulation spectral analyses of the SORBET model suggest that proper rotorcraft turbulence modeling requires that vertical atmospheric disturbances not be superimposed at the vehicle center of gravity but, rather, be input into the rotor system, where the rotor-to-body transfer function severely attenuates high-frequency rotorcraft responses.
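
    One way to picture the geometrical distribution described above (a hedged sketch, not the SORBET algorithm): sample a frozen vertical-gust field at the instantaneous position of each blade-element station as the rotor turns and the field convects past, so each station sees a different but spatially consistent disturbance.

      # Hedged sketch (not SORBET): frozen-field turbulence sampled at each
      # blade-element station; all numbers below are illustrative assumptions.
      import numpy as np

      rng = np.random.default_rng(1)
      field = rng.normal(0.0, 1.0, (256, 256))  # precomputed gust grid [m/s]
      dx = 1.0                                  # grid spacing [m]

      def gust_at(px, py):
          i, j = int(px / dx) % 256, int(py / dx) % 256
          return field[i, j]

      R, n_elem, omega, V = 8.2, 20, 27.0, 50.0  # radius, stations, rad/s, m/s
      t = 0.37                                   # current simulation time [s]
      psi = omega * t                            # blade azimuth [rad]
      radii = np.linspace(R / n_elem, R, n_elem)
      x = radii * np.cos(psi) + V * t            # field convects past the rotor
      y = radii * np.sin(psi)
      w = np.array([gust_at(xi, yi) for xi, yi in zip(x, y)])
      print(w[:5])  # vertical gust at the five innermost stations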

  8. Rapidly Re-Configurable Flight Simulator Tools for Crew Vehicle Integration Research and Design

    NASA Technical Reports Server (NTRS)

    Schutte, Paul C.; Trujillo, Anna; Pritchett, Amy R.

    2000-01-01

    While simulation is a valuable research and design tool, the time and difficulty required to create new simulations (or re-use existing simulations) often limits their application. This report describes the design of the software architecture for the Reconfigurable Flight Simulator (RFS), which provides a robust simulation framework that allows the simulator to fulfill multiple research and development goals. The core of the architecture provides the interface standards for simulation components, registers and initializes components, and handles the communication between simulation components. The simulation components are each a pre-compiled library 'plug-in' module. This modularity allows independent development and sharing of individual simulation components. Additional interfaces can be provided through the use of Object Data/Method Extensions (OD/ME). RFS provides a programmable run-time environment for real-time access and manipulation, and has networking capabilities using the High Level Architecture (HLA).

  9. Rapidly Re-Configurable Flight Simulator Tools for Crew Vehicle Integration Research and Design

    NASA Technical Reports Server (NTRS)

    Pritchett, Amy R.

    2002-01-01

    While simulation is a valuable research and design tool, the time and difficulty required to create new simulations (or re-use existing simulations) often limits their application. This report describes the design of the software architecture for the Reconfigurable Flight Simulator (RFS), which provides a robust simulation framework that allows the simulator to fulfill multiple research and development goals. The core of the architecture provides the interface standards for simulation components, registers and initializes components, and handles the communication between simulation components. The simulation components are each a pre-compiled library 'plugin' module. This modularity allows independent development and sharing of individual simulation components. Additional interfaces can be provided through the use of Object Data/Method Extensions (OD/ME). RFS provides a programmable run-time environment for real-time access and manipulation, and has networking capabilities using the High Level Architecture (HLA).

  10. Simulating Self-Assembly with Simple Models

    NASA Astrophysics Data System (ADS)

    Rapaport, D. C.

    Results from recent molecular dynamics simulations of virus capsid self-assembly are described. The model is based on rigid trapezoidal particles designed to form polyhedral shells of size 60, together with an atomistic solvent. The underlying bonding process is fully reversible. More extensive computations are required than in previous work on icosahedral shells built from triangular particles, but the outcome is a high yield of closed shells. Intermediate clusters have a variety of forms, and bond counts provide a useful classification scheme.

  11. Interleaved concatenated codes: new perspectives on approaching the Shannon limit.

    PubMed

    Viterbi, A J; Viterbi, A M; Sindhushayana, N T

    1997-09-02

    The last few years have witnessed a significant decrease in the gap between the Shannon channel capacity limit and what is practically achievable. Progress has resulted from novel extensions of previously known coding techniques involving interleaved concatenated codes. A considerable body of simulation results is now available, supported by an important but limited theoretical basis. This paper presents a computational technique which further ties simulation results to the known theory and reveals a considerable reduction in the complexity required to approach the Shannon limit.

  12. CSM digital autopilot testing in support of ASTP experiments control requirements

    NASA Technical Reports Server (NTRS)

    Rue, D. L.

    1975-01-01

    Results are presented of CSM digital autopilot (DAP) testing. The testing was performed to demonstrate and evaluate control modes which are currently planned or could be considered for use in support of experiments on the ASTP mission. The testing was performed on the Lockheed Guidance, Navigation, and Control System Functional Simulator (GNCFS). This simulator, which was designed to test the Apollo and Skylab DAP control system, has been used extensively and is a proven tool for CSM DAP analysis.

  13. Cislan-2 extension final document by University of Twente (Netherlands)

    NASA Astrophysics Data System (ADS)

    Niemegeers, Ignas; Baumann, Frank; Beuwer, Wim; Jordense, Marcel; Pras, Aiko; Schutte, Leon; Tracey, Ian

    1992-01-01

    Results of work performed under the so-called Cislan extension contract are presented. The adaptation of the Cislan 2 prototype design to an environment of interconnected Local Area Networks (LANs) instead of a single 802.5 token ring LAN is considered. In order to extend the network architecture, the Interconnection Function (IF) protocol layer was subdivided into two protocol layers: a new IF layer and, below it, the Medium Enhancement (ME) protocol layer. Some small enhancements to the distributed bandwidth allocation protocol were developed, which in fact are also applicable to the 'normal' Cislan 2 system. The new services and protocols are described together with some scenarios and requirements for the new internetting Cislan 2 system. How to overcome the degradation of the quality of speech due to packet loss on the LAN subsystem was studied, and experiments were planned to measure this speech quality degradation. Two Cislan subsystems were simulated: the bandwidth allocation protocol and the clock synchronization mechanism. Results of both simulations, performed on SUN workstations using QNAP as a simulation tool, are given.

  14. Java Architecture for Detect and Avoid Extensibility and Modeling

    NASA Technical Reports Server (NTRS)

    Santiago, Confesor; Mueller, Eric Richard; Johnson, Marcus A.; Abramson, Michael; Snow, James William

    2015-01-01

    Unmanned aircraft will be equipped with a detect-and-avoid (DAA) system that enables them to comply with the requirement to "see and avoid" other aircraft, an important layer in the overall set of procedural, strategic and tactical separation methods designed to prevent mid-air collisions. This paper describes a capability called Java Architecture for Detect and Avoid Extensibility and Modeling (JADEM), developed to prototype and help evaluate various DAA technological requirements by providing a flexible and extensible software platform that models all major detect-and-avoid functions. Figure 1 illustrates JADEM's architecture. The surveillance module can be actual equipment on the unmanned aircraft or simulators that model the process by which on-board sensors detect other aircraft and provide track data to the traffic display. The track evaluation function evaluates each detected aircraft and decides whether to provide an alert to the pilot and its severity. Guidance is a combination of intruder track information, alerting, and avoidance/advisory algorithms behind the tools shown on the traffic display to aid the pilot in determining a maneuver to avoid a loss of well clear. All these functions are designed with a common interface and configurable implementation, which is critical in exploring DAA requirements. To date, JADEM has been utilized in three computer simulations of the National Airspace System, three pilot-in-the-loop experiments using a total of 37 professional UAS pilots, and two flight tests using NASA's Predator-B unmanned aircraft, named Ikhana. The data collected have directly informed the quantitative separation standard for "well clear", the safety case, requirements development, and the operational environment for the DAA minimum operational performance standards. This work was performed by the Separation Assurance/Sense and Avoid Interoperability team under NASA's UAS Integration in the NAS project.

  15. GenASiS Basics: Object-oriented utilitarian functionality for large-scale physics simulations

    DOE PAGES

    Cardall, Christian Y.; Budiardja, Reuben D.

    2015-06-11

    Aside from numerical algorithms and problem setup, large-scale physics simulations on distributed-memory supercomputers require more basic utilitarian functionality, such as physical units and constants; display to the screen or standard output device; message passing; I/O to disk; and runtime parameter management and usage statistics. Here we describe and make available Fortran 2003 classes furnishing extensible object-oriented implementations of this sort of rudimentary functionality, along with individual 'unit test' programs and larger example problems demonstrating their use. Lastly, these classes compose the Basics division of our developing astrophysics simulation code GenASiS (General Astrophysical Simulation System), but their fundamental nature makes them useful for physics simulations in many fields.

  16. On extending parallelism to serial simulators

    NASA Technical Reports Server (NTRS)

    Nicol, David; Heidelberger, Philip

    1994-01-01

    This paper describes an approach to discrete event simulation modeling that appears to be effective for developing portable and efficient parallel execution of models of large distributed systems and communication networks. In this approach, the modeler develops submodels using an existing sequential simulation modeling tool, using the full expressive power of the tool. A set of modeling language extensions permit automatically synchronized communication between submodels; however, the automation requires that any such communication must take a nonzero amount of simulation time. Within this modeling paradigm, a variety of conservative synchronization protocols can transparently support conservative execution of submodels on potentially different processors. A specific implementation of this approach, U.P.S. (Utilitarian Parallel Simulator), is described, along with performance results on the Intel Paragon.
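
    The nonzero-delay requirement is what makes conservative synchronization possible: it acts as a lookahead, so each submodel can safely advance to the minimum over its inbound channels of the sender's clock plus that channel's lookahead. A generic sketch follows (illustrative names, not U.P.S. code):

      # Generic conservative-synchronization sketch: events with timestamps up to
      # the "safe time" can be processed without risk of a straggler message.
      def safe_time(inbound):
          # inbound: (sender_clock, lookahead) pairs; lookahead > 0 by construction
          return min(clock + la for clock, la in inbound)

      channels = [(10.0, 0.5), (9.2, 2.0), (10.4, 0.1)]
      print(safe_time(channels))  # -> 10.5; all events at or before 10.5 are safe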

  17. Charge plasma technique based dopingless accumulation mode junctionless cylindrical surrounding gate MOSFET: analog performance improvement

    NASA Astrophysics Data System (ADS)

    Trivedi, Nitin; Kumar, Manoj; Haldar, Subhasis; Deswal, S. S.; Gupta, Mridula; Gupta, R. S.

    2017-09-01

    A charge plasma technique based dopingless (DL) accumulation mode (AM) junctionless (JL) cylindrical surrounding gate (CSG) MOSFET has been proposed and extensively investigated. The proposed device has no physical junctions at the source-channel and channel-drain interfaces. The complete silicon pillar is considered undoped. The high free-electron density, or induced N+ region, is created by keeping the work function of the source/drain metal contacts lower than the work function of undoped silicon. Thus, fabrication complexity is drastically reduced by removing the need for high-temperature doping techniques. The electrical/analog characteristics of the proposed device have been extensively investigated using numerical simulation and are compared with a conventional junctionless cylindrical surrounding gate (JL-CSG) MOSFET of identical dimensions. The ATLAS-3D device simulator is used for the numerical simulations. The results show that the proposed device is more immune to short-channel effects than the conventional JL-CSG MOSFET and, due to its higher I_ON/I_OFF ratio, is suitable for faster switching applications.

  18. Sample Size Estimation in Cluster Randomized Educational Trials: An Empirical Bayes Approach

    ERIC Educational Resources Information Center

    Rotondi, Michael A.; Donner, Allan

    2009-01-01

    The educational field has now accumulated an extensive literature reporting on values of the intraclass correlation coefficient, a parameter essential to determining the required size of a planned cluster randomized trial. We propose here a simple simulation-based approach including all relevant information that can facilitate this task. An…

  19. Enterprise Requirements and Acquisition Model (ERAM) Analysis and Extension

    DTIC Science & Technology

    2014-02-20

    add them to the ERAM simulation. Reference: Arena, M. V., Obaid, Y., Galway, L. A., Fox, B., Graser, J. C., Sollinger, J. M., Wu, F., & Wong, C. (2006). Impossible certainty: Cost risk analysis for Air Force systems (MG-415).

  20. Extensible 3D (X3D) Earth Technical Requirements Workshop Summary Report

    DTIC Science & Technology

    2007-08-01

    world in detail already, but rarely interconnect to one another • The most interesting part of "virtual reality" (VR) is reality – which means physics... Two Web-Enabled Modeling and Simulation (WebSim) symposia have demonstrated that large partnerships can work • Server-side 3D graphics...

  1. The folding transition state of Protein L is extensive with non-native interactions (and not small and polarized)

    PubMed Central

    Yoo, Tae Yeon; Adhikari, Aashish; Xia, Zhen; Huynh, Tien; Freed, Karl F.; Zhou, Ruhong; Sosnick, Tobin R.

    2012-01-01

    Progress in understanding protein folding relies heavily upon an interplay between experiment and theory. In particular, readily interpretable experimental data are required that can be meaningfully compared to simulations. According to standard mutational φ analysis, the transition state for Protein L contains only a single hairpin. However, we demonstrate here using ψ analysis with engineered metal ion binding sites that the transition state is extensive, containing the entire four-stranded β sheet. Underreporting of the structural content of the transition state by φ analysis also occurs for acyl phosphatase [1], ubiquitin [2], and BdpA [3]. The carboxy-terminal hairpin in the transition state of Protein L is found to be non-native, a significant result that agrees with our PDB-based backbone sampling and all-atom simulations. The non-native character partially explains the failure of accepted experimental and native-centric computational approaches to adequately describe the transition state. Hence, caution is required even when an apparent agreement exists between experiment and theory, thus highlighting the importance of having alternative methods for characterizing transition states. PMID:22522126

  2. The sixth generation robot in space

    NASA Technical Reports Server (NTRS)

    Butcher, A.; Das, A.; Reddy, Y. V.; Singh, H.

    1990-01-01

    The knowledge-based simulator developed in the artificial intelligence laboratory has become a working test bed for experimenting with intelligent reasoning architectures. Recently, small experiments have been performed with this simulator with the aim of simulating robot behavior that avoids colliding paths. An automatic extension of such experiments to intelligently planning robots in space demands advanced reasoning architectures. One such architecture for general purpose problem solving is explored. The robot, seen as a knowledge base machine, proceeds via a predesigned abstraction mechanism for problem understanding and response generation. The three phases in one such abstraction scheme are: abstraction for representation, abstraction for evaluation, and abstraction for resolution. Such abstractions require multimodality, which in turn requires the use of intensional variables to deal with beliefs in the system. Abstraction mechanisms help in synthesizing possible propagating lattices for such beliefs. The machine controller enters into a sixth generation paradigm.

  3. Neutralizer Hollow Cathode Simulations and Comparisons with Ground Test Data

    NASA Technical Reports Server (NTRS)

    Mikellides, Ioannis G.; Snyder, John S.; Goebel, Dan M.; Katz, Ira; Herman, Daniel A.

    2009-01-01

    The fidelity of electric propulsion physics-based models depends largely on the validity of their predictions over a range of operating conditions and geometries. In general, increased complexity of the physics requires more extensive comparisons with laboratory data to identify the region(s) that lie outside the validity of the model assumptions and to quantify the uncertainties within its range of application. This paper presents numerical simulations of neutralizer hollow cathodes at various operating conditions and orifice sizes. The simulations were performed using a two-dimensional axisymmetric model that solves numerically a relatively extensive system of conservation laws for the partially ionized gas in these devices. A summary of the comparisons between simulation results and Langmuir probe measurements is provided. The model has also been employed to provide insight into recent ground test observations of the neutralizer cathode in NEXT. It is found that a likely cause of the observed keeper voltage drop is cathode orifice erosion. However, due to the small magnitude of this change, approximately 0.5 V (less than 5% of the beginning-of-life value) over 10 khrs, and in light of the large uncertainties of the cathode material sputtering yield at low ion energies, other causes cannot be excluded. Preliminary simulations to understand the transition to plume mode suggest that in the range of 3-5 sccm the existing 2-D model reproduces fairly well the rise of the keeper voltage in the NEXT neutralizer as observed in the laboratory. At lower flow rates the simulation produces oscillations in the keeper current and voltage that require prohibitively small time-steps to resolve with the existing algorithms.

  4. Effectiveness of Rotation-free Triangular and Quadrilateral Shell Elements in Sheet-metal Forming Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brunet, M.; Sabourin, F.

    2005-08-05

    This paper is concerned with the effectiveness of a triangular 3-node shell element without rotational d.o.f. and its extension to a new 4-node quadrilateral shell element called S4 with only 3 translational degrees of freedom per node and one-point integration. The curvatures are computed by resorting to the surrounding elements. Extension from the rotation-free triangular element to a quadrilateral element requires internal curvatures in order to avoid singular bending stiffness. Two numerical examples with regular and irregular meshes are performed to show the convergence and accuracy. Deep-drawing of a box, spring-back analysis of a U-shape strip sheet, and the crash simulation of a beam-box complete the demonstration of the bending capabilities of the proposed rotation-free triangular and quadrilateral elements.

  5. PARALLEL HOP: A SCALABLE HALO FINDER FOR MASSIVE COSMOLOGICAL DATA SETS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Skory, Stephen; Turk, Matthew J.; Norman, Michael L.

    2010-11-15

    Modern N-body cosmological simulations contain billions (10^9) of dark matter particles. These simulations require hundreds to thousands of gigabytes of memory and employ hundreds to tens of thousands of processing cores on many compute nodes. In order to study the distribution of dark matter in a cosmological simulation, the dark matter halos must be identified using a halo finder, which establishes the halo membership of every particle in the simulation. The resources required for halo finding are similar to the requirements for the simulation itself. In particular, simulations have become too extensive to use commonly employed halo finders, such that the computational requirements to identify halos must now be spread across multiple nodes and cores. Here, we present a scalable-parallel halo finding method called Parallel HOP for large-scale cosmological simulation data. Based on the halo finder HOP, it utilizes message passing interface and domain decomposition to distribute the halo finding workload across multiple compute nodes, enabling analysis of much larger data sets than is possible with the strictly serial or previous parallel implementations of HOP. We provide a reference implementation of this method as a part of the toolkit yt, an analysis toolkit for adaptive mesh refinement data that includes complementary analysis modules. Additionally, we discuss a suite of benchmarks that demonstrate that this method scales well up to several hundred tasks and data sets in excess of 2000^3 particles. The Parallel HOP method and our implementation can be readily applied to any kind of N-body simulation data and is therefore widely applicable.
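
    As a hedged illustration of the serial HOP kernel that Parallel HOP distributes (a sketch, not the yt implementation): each particle estimates a local density from its k nearest neighbors and then "hops" to the densest particle among them; following hops until they reach a fixed point (a local density maximum) labels each particle's halo.

      # Minimal serial HOP sketch (not the parallel yt code): kNN density, hop to
      # the densest neighbour, then follow hops to a local maximum per particle.
      import numpy as np
      from scipy.spatial import cKDTree

      rng = np.random.default_rng(2)
      pos = np.vstack([rng.normal(c, 0.05, (200, 3)) for c in (0.2, 0.8)])

      tree = cKDTree(pos)
      dist, idx = tree.query(pos, k=16)  # neighbour lists include the particle itself
      density = 16 / (4.0 / 3.0 * np.pi * dist[:, -1] ** 3)  # simple kNN estimate

      hop = idx[np.arange(len(pos)), np.argmax(density[idx], axis=1)]
      labels = hop.copy()
      for _ in range(100):               # follow hops until only fixed points remain
          nxt = hop[labels]
          if np.all(nxt == labels):
              break
          labels = nxt
      print(len(np.unique(labels)))      # a handful of density maxima (~2 clumps)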

  6. Low Gravity Freefall Facilities

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Composite of Marshall Space Flight Center's Low-Gravity Free Fall Facilities. These facilities include a 100-meter drop tower and a 100-meter drop tube. The drop tower simulates in-flight microgravity conditions for up to 4.2 seconds for containerless processing experiments, immiscible fluids and materials research, pre-flight hardware design test and flight experiment simulation. The drop tube simulates in-flight microgravity conditions for up to 4.6 seconds and is used extensively for ground-based microgravity convection research in which extremely small samples are studied. The facility can provide deep undercooling for containerless processing experiments that require materials to remain in a liquid phase when cooled below the normal solidification temperature.

  7. Microgravity

    NASA Image and Video Library

    1981-03-30

    Composite of Marshall Space Flight Center's Low-Gravity Free Fall Facilities. These facilities include a 100-meter drop tower and a 100-meter drop tube. The drop tower simulates in-flight microgravity conditions for up to 4.2 seconds for containerless processing experiments, immiscible fluids and materials research, pre-flight hardware design test and flight experiment simulation. The drop tube simulates in-flight microgravity conditions for up to 4.6 seconds and is used extensively for ground-based microgravity convection research in which extremely small samples are studied. The facility can provide deep undercooling for containerless processing experiments that require materials to remain in a liquid phase when cooled below the normal solidification temperature.

  8. Monte Carlo simulations of neutron-scattering instruments using McStas

    NASA Astrophysics Data System (ADS)

    Nielsen, K.; Lefmann, K.

    2000-06-01

    Monte Carlo simulations have become an essential tool for improving the performance of neutron-scattering instruments, since the level of sophistication in the design of instruments is defeating purely analytical methods. The program McStas, being developed at Risø National Laboratory, includes an extension language that makes it easy to adapt it to the particular requirements of individual instruments, and thus provides a powerful and flexible tool for constructing such simulations. McStas has been successfully applied in such areas as neutron guide design, flux optimization, non-Gaussian resolution functions of triple-axis spectrometers, and time-focusing in time-of-flight instruments.

  9. Dissolution curve comparisons through the F(2) parameter, a Bayesian extension of the f(2) statistic.

    PubMed

    Novick, Steven; Shen, Yan; Yang, Harry; Peterson, John; LeBlond, Dave; Altan, Stan

    2015-01-01

    Dissolution (or in vitro release) studies constitute an important aspect of pharmaceutical drug development. One important use of such studies is for justifying a biowaiver for post-approval changes, which requires establishing equivalence between the new and old product. We propose a statistically rigorous modeling approach for this purpose based on the estimation of what we refer to as the F2 parameter, an extension of the commonly used f2 statistic. A Bayesian test procedure is proposed in relation to a set of composite hypotheses that capture the similarity requirement on the absolute mean differences between test and reference dissolution profiles. Several examples are provided to illustrate the application. Results of our simulation study comparing the performance of f2 and the proposed method show that our Bayesian approach is comparable to or in many cases superior to the f2 statistic as a decision rule. Further useful extensions of the method, such as the use of continuous-time dissolution modeling, are considered.
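
    For context, the conventional f2 statistic that the F2 parameter extends has a simple closed form: f2 = 50 log10(100 / sqrt(1 + mean squared difference between profiles)), with f2 >= 50 the usual similarity criterion. A minimal computation follows (the standard regulatory formula, not the authors' Bayesian procedure; the profiles are made up):

      # Conventional f2 dissolution-similarity statistic (standard formula only;
      # the paper's Bayesian F2 extension is not reproduced here).
      import numpy as np

      def f2(ref, test):
          ref, test = np.asarray(ref, float), np.asarray(test, float)
          msd = np.mean((ref - test) ** 2)  # mean squared profile difference
          return 50.0 * np.log10(100.0 / np.sqrt(1.0 + msd))

      ref = [20, 40, 60, 80, 90]    # % dissolved at successive time points
      test = [18, 37, 58, 79, 92]
      print(f2(ref, test))          # >= 50 indicates similar profiles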

  10. The Numerical Propulsion System Simulation: An Overview

    NASA Technical Reports Server (NTRS)

    Lytle, John K.

    2000-01-01

    Advances in computational technology and in physics-based modeling are making large-scale, detailed simulations of complex systems possible within the design environment. For example, the integration of computing, communications, and aerodynamics has reduced the time required to analyze major propulsion system components from days and weeks to minutes and hours. This breakthrough has enabled the detailed simulation of major propulsion system components to become a routine part of designing systems, providing the designer with critical information about the components early in the design process. This paper describes the development of the numerical propulsion system simulation (NPSS), a modular and extensible framework for the integration of multicomponent and multidisciplinary analysis tools using geographically distributed resources such as computing platforms, data bases, and people. The analysis is currently focused on large-scale modeling of complete aircraft engines. This will provide the product developer with a "virtual wind tunnel" that will reduce the number of hardware builds and tests required during the development of advanced aerospace propulsion systems.

  11. Enhanced TCAS 2/CDTI traffic Sensor digital simulation model and program description

    NASA Technical Reports Server (NTRS)

    Goka, T.

    1984-01-01

    Digital simulation models of enhanced TCAS 2/CDTI traffic sensors are developed, based on actual or projected operational and performance characteristics. Two enhanced Traffic (or Threat) Alert and Collision Avoidance Systems are considered. A digital simulation program is developed in FORTRAN. The program contains an executive with a semi-real-time batch processing capability. The simulation program can be interfaced with other modules with minimal requirements. Both the traffic sensor and CAS logic modules are validated by means of extensive simulation runs. Selected validation cases are discussed in detail, and capabilities and limitations of the actual and simulated systems are noted. The TCAS systems are not specifically intended for Cockpit Display of Traffic Information (CDTI) applications, but they are sufficiently general to allow implementation of CDTI functions within the real systems' constraints.

  12. Viscous relaxation as a prerequisite for tectonic resurfacing on Ganymede: Insights from numerical models of lithospheric extension

    USGS Publications Warehouse

    Bland, Michael T.; McKinnon, William B.

    2018-01-01

    Ganymede’s bright terrain formed during a near-global resurfacing event (or events) that produced both heavily tectonized and relatively smooth terrains. The mechanism(s) by which resurfacing occurred on Ganymede (e.g., cryovolcanic or tectonic), and the relationship between the older, dark and the younger, bright terrain are fundamental to understanding the geological evolution of the satellite. Using a two-dimensional numerical model of lithospheric extension that has previously been used to successfully simulate surface deformation consistent with grooved terrain morphologies, we investigate whether large-amplitude preexisting topography can be resurfaced (erased) by extension (i.e., tectonic resurfacing). Using synthetically produced initial topography, we show that when the total relief of the initial topography is larger than 25–50 m, periodic groove-like structures fail to form. Instead, extension is localized in a few individual, isolated troughs. These results pose a challenge to the tectonic resurfacing hypothesis. We further investigate the effects of preexisting topography by performing suites of simulations initialized with topography derived from digital terrain models of Ganymede’s surface. These include dark terrain, fresh (relatively deep) impact craters, smooth bright terrain, and a viscously relaxed impact crater. The simulations using dark terrain and fresh impact craters are consistent with our simulations using synthetic topography: periodic groove-like deformation fails to form. In contrast, when simulations were initialized with bright smooth terrain topography, groove-like deformation results from a wide variety of heat flow and surface temperature conditions. Similarly, when a viscously relaxed impact crater was used, groove-like structures were able to form during extension. These results suggest that tectonic resurfacing may require that the amplitude of the initial topography be reduced before extension begins. We emphasize that viscous relaxation may be the key to enabling tectonic resurfacing, as the heat fluxes associated with groove terrain formation are also capable of reducing crater topography through viscous relaxation. For long-wavelength topography (large craters) viscous relaxation is unavoidable. We propose that the resurfacing of Ganymede occurred through a combination of viscous relaxation, tectonic resurfacing, cryovolcanism and, at least in a few cases, band formation. Variations in heat flow and strain magnitudes across Ganymede likely produced the complex variety of terrain types currently observed.

  13. Viscous relaxation as a prerequisite for tectonic resurfacing on Ganymede: Insights from numerical models of lithospheric extension

    NASA Astrophysics Data System (ADS)

    Bland, Michael T.; McKinnon, William B.

    2018-05-01

    Ganymede's bright terrain formed during a near-global resurfacing event (or events) that produced both heavily tectonized and relatively smooth terrains. The mechanism(s) by which resurfacing occurred on Ganymede (e.g., cryovolcanic or tectonic), and the relationship between the older, dark and the younger, bright terrain are fundamental to understanding the geological evolution of the satellite. Using a two-dimensional numerical model of lithospheric extension that has previously been used to successfully simulate surface deformation consistent with grooved terrain morphologies, we investigate whether large-amplitude preexisting topography can be resurfaced (erased) by extension (i.e., tectonic resurfacing). Using synthetically produced initial topography, we show that when the total relief of the initial topography is larger than 25-50 m, periodic groove-like structures fail to form. Instead, extension is localized in a few individual, isolated troughs. These results pose a challenge to the tectonic resurfacing hypothesis. We further investigate the effects of preexisting topography by performing suites of simulations initialized with topography derived from digital terrain models of Ganymede's surface. These include dark terrain, fresh (relatively deep) impact craters, smooth bright terrain, and a viscously relaxed impact crater. The simulations using dark terrain and fresh impact craters are consistent with our simulations using synthetic topography: periodic groove-like deformation fails to form. In contrast, when simulations were initialized with bright smooth terrain topography, groove-like deformation results from a wide variety of heat flow and surface temperature conditions. Similarly, when a viscously relaxed impact crater was used, groove-like structures were able to form during extension. These results suggest that tectonic resurfacing may require that the amplitude of the initial topography be reduced before extension begins. We emphasize that viscous relaxation may be the key to enabling tectonic resurfacing, as the heat fluxes associated with groove terrain formation are also capable of reducing crater topography through viscous relaxation. For long-wavelength topography (large craters) viscous relaxation is unavoidable. We propose that the resurfacing of Ganymede occurred through a combination of viscous relaxation, tectonic resurfacing, cryovolcanism and, at least in a few cases, band formation. Variations in heat flow and strain magnitudes across Ganymede likely produced the complex variety of terrain types currently observed.

  14. The research of hourglass worm dynamic balancing simulation based on SolidWorks motion

    NASA Astrophysics Data System (ADS)

    Wang, Zhuangzhuang; Yang, Jie; Liu, Pingyi; Zhao, Junpeng

    2018-02-01

    The hourglass worm is used extensively in industry because of its heavy load capacity and large reduction ratio. Single-head worm designs, however, exhibit unbalanced mass distributions of varying magnitude. As machines develop toward higher speed and precision, the vibration and shock caused by the unbalanced mass of rotating parts must be considered, and the balance grade of these parts must meet more stringent requirements. This paper presents a method based on theoretical analysis and simulation in SolidWorks Motion: a virtual dynamic-balance test of the hourglass worm is carried out during product design, ensuring that the worm meets dynamic-balance requirements before manufacture. The approach effectively supports the structural design of the hourglass worm and offers a design methodology applicable to similar products.

  15. Interleaved concatenated codes: New perspectives on approaching the Shannon limit

    PubMed Central

    Viterbi, A. J.; Viterbi, A. M.; Sindhushayana, N. T.

    1997-01-01

    The last few years have witnessed a significant decrease in the gap between the Shannon channel capacity limit and what is practically achievable. Progress has resulted from novel extensions of previously known coding techniques involving interleaved concatenated codes. A considerable body of simulation results is now available, supported by an important but limited theoretical basis. This paper presents a computational technique which further ties simulation results to the known theory and reveals a considerable reduction in the complexity required to approach the Shannon limit. PMID:11038568
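    For context, the Shannon limit invoked here follows from the AWGN capacity formula: reliable communication at spectral efficiency eta (bits/s/Hz) requires Eb/N0 >= (2^eta - 1)/eta, which tends to ln 2 (about -1.59 dB) as eta approaches zero. A minimal worked example of this standard textbook bound (not the paper's computational technique):

```python
import math

# Minimum Eb/N0 for reliable communication on an AWGN channel at spectral
# efficiency eta (bits/s/Hz): Eb/N0 >= (2**eta - 1) / eta. As eta -> 0 this
# tends to ln 2 (about -1.59 dB), the ultimate Shannon limit.
for eta in (2.0, 1.0, 0.5, 0.1, 0.01):
    ebn0 = (2.0 ** eta - 1.0) / eta
    print(f"eta = {eta:5.2f} b/s/Hz -> Eb/N0 >= {10.0 * math.log10(ebn0):6.2f} dB")
print(f"eta -> 0 limit: {10.0 * math.log10(math.log(2.0)):.2f} dB")
```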

  16. Extension of a Kinetic Approach to Chemical Reactions to Electronic Energy Levels and Reactions Involving Charged Species With Application to DSMC Simulations

    NASA Technical Reports Server (NTRS)

    Liechty, Derek S.

    2013-01-01

    The ability to compute rarefied, ionized hypersonic flows is becoming more important as missions such as Earth reentry, landing high mass payloads on Mars, and the exploration of the outer planets and their satellites are being considered. Recently introduced molecular-level chemistry models that predict equilibrium and nonequilibrium reaction rates using only kinetic theory and fundamental molecular properties are extended in the current work to include electronic energy level transitions and reactions involving charged particles. These extensions are shown to agree favorably with reported transition and reaction rates from the literature for near-equilibrium conditions. Also, the extensions are applied to the second flight of the Project FIRE flight experiment at 1634 seconds with a Knudsen number of 0.001 at an altitude of 76.4 km. In order to accomplish this, NASA's direct simulation Monte Carlo code DAC was rewritten to include the ability to simulate charge-neutral ionized flows, take advantage of the recently introduced chemistry model, and include the extensions presented in this work. The 1634-second data point was chosen so that comparisons could be made with a CFD solution. The Knudsen number at this point in time is such that the DSMC simulations are still tractable and the CFD computations are at the edge of what is considered valid because, although near-transitional, the flow is still considered to be continuum. It is shown that the inclusion of electronic energy levels in the DSMC simulation is necessary for flows of this nature and is required for comparison to the CFD solution. The flow field solutions are also post-processed by the nonequilibrium radiation code HARA to compute the radiative portion of the heating, which is then compared to the total heating measured in flight.
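    To give a flavor of a reaction rate built purely from kinetic theory and molecular constants (a far simpler construction than the paper's models; all parameter values below are generic assumptions), a line-of-centers collision-theory estimate takes the form k(T) = sigma * <v_rel> * exp(-Ea / kB T):

```python
import math

# Line-of-centers collision-theory rate coefficient, built only from kinetic
# theory and molecular constants (generic illustrative values, not the
# paper's chemistry model):
#   k(T) = sigma * <v_rel> * exp(-Ea / kB T), <v_rel> = sqrt(8 kB T / (pi mu))
kB = 1.380649e-23        # Boltzmann constant, J/K
sigma = 3.0e-19          # collision cross section, m^2 (assumed)
mu = 2.3e-26             # reduced mass of the colliding pair, kg (assumed)
Ea = 1.0e-19             # activation energy, J (assumed)

for T in (5000.0, 10000.0, 20000.0):
    v_rel = math.sqrt(8.0 * kB * T / (math.pi * mu))
    k = sigma * v_rel * math.exp(-Ea / (kB * T))
    print(f"T = {T:7.0f} K   k = {k:.3e} m^3/s")
```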

  17. Model Predictions and Observed Performance of JWST's Cryogenic Position Metrology System

    NASA Technical Reports Server (NTRS)

    Lunt, Sharon R.; Rhodes, David; DiAntonio, Andrew; Boland, John; Wells, Conrad; Gigliotti, Trevis; Johanning, Gary

    2016-01-01

    The James Webb Space Telescope cryogenic testing requires measurement systems that both achieve a very high degree of accuracy and can function in that environment. Close-range photogrammetry was identified as meeting those criteria. Testing the capability of a close-range photogrammetric system prior to its existence is a challenging problem. Computer simulation was chosen over building a scaled mock-up to allow for increased flexibility in testing various configurations. Extensive validation work was done to ensure that the actual as-built system met accuracy and repeatability requirements. The simulated image data predicted the uncertainty in measurement to be within specification, and this prediction was borne out experimentally. Uncertainty at all levels was verified experimentally to be less than 0.1 millimeters.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Thomas M.; Berndt, Markus; Baglietto, Emilio

    The purpose of this report is to document a multi-year plan for enhancing turbulence modeling in Hydra-TH for the Consortium for Advanced Simulation of Light Water Reactors (CASL) program. Hydra-TH is being developed to meet the high-fidelity, high-Reynolds-number, CFD-based thermal hydraulic simulation needs of the program. This work is being conducted within the thermal hydraulics methods (THM) focus area. This report is an extension of THM CASL milestone L3:THM.CFD.P10.02 [33] (March 2015) and picks up where it left off. It will also serve to meet the requirements of CASL THM level three milestone L3:THM.CFD.P11.04, scheduled for completion September 30, 2015. The objectives of this plan will be met by maturation of recently added turbulence models, strategic design and development of new models, and systematic, rigorous testing of existing and new models and model extensions. While multi-phase turbulent flow simulations are important to the program, only single-phase modeling is considered in this report. Large Eddy Simulation (LES) is also an important modeling methodology; however, at least in the first year, the focus is on steady-state Reynolds-Averaged Navier-Stokes (RANS) turbulence modeling.

  19. NASA/ESA CV-990 spacelab simulation

    NASA Technical Reports Server (NTRS)

    1975-01-01

    Due to interest in the application of simplified techniques used to conduct airborne science missions at NASA's Ames Research Center, a joint NASA/ESA endeavor was established to conduct an extensive Spacelab simulation using the NASA CV-990 airborne laboratory. The scientific payload was selected to perform studies in upper atmospheric physics and infrared astronomy, with principal investigators from France, the Netherlands, England, and several groups from the United States. Communication links between the 'Spacelab' and a ground-based mission operations center were limited, consistent with Spacelab plans. The mission was successful and provided extensive data relevant to Spacelab objectives on overall management of a complex international payload; experiment preparation, testing, and integration; training for proxy operation in space; data handling; multiexperimenter use of common experimenter facilities (telescopes); multiexperiment operation by experiment operators; selection criteria for Spacelab experiment operators; and schedule requirements to prepare for such a Spacelab mission.

  20. simulation of the DNA force-extension curve

    NASA Astrophysics Data System (ADS)

    Shinaberry, Gregory; Mikhaylov, Ivan; Balaeff, Alexander

    A molecular dynamics simulation study of the force-extension curve of double-stranded DNA is presented. Extended simulations of the DNA at multiple points along the force-extension curve are conducted with DNA end-to-end length constrained at each point. The calculated force-extension curve qualitatively reproduces the experimental one. The DNA conformational ensemble at each extension shows that the famous plateau of the force-extension curve results from B-DNA melting, whereas the formation of the earlier-predicted novel DNA conformation called 'zip-DNA' takes place at extensions past the plateau. An extensive analysis of the DNA conformational ensemble in terms of base configuration, backbone configuration, solvent interaction energy, etc., is conducted in order to elucidate the physical origin of DNA elasticity and the main interactions responsible for the shape of the force-extension curve.
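    For reference, the entropic baseline against which such simulations are usually compared is the Marko-Siggia worm-like-chain (WLC) interpolation; it captures the low-force rise of the curve but not the melting plateau or the zip-DNA regime described above. A minimal sketch with typical dsDNA parameters (the contour length is an assumed example value):

```python
# Marko-Siggia worm-like-chain interpolation:
#   F(x) = (kB*T/P) * [ 1/(4(1 - x/L)^2) - 1/4 + x/L ]
kBT = 4.11e-21    # J, thermal energy near room temperature
P = 50e-9         # persistence length, m (typical dsDNA)
L = 1.0e-6        # contour length, m (assumed)

for frac in (0.2, 0.5, 0.8, 0.95):
    F = (kBT / P) * (0.25 / (1.0 - frac) ** 2 - 0.25 + frac)
    print(f"x/L = {frac:4.2f}   F = {F * 1e12:7.2f} pN   x = {frac * L * 1e9:6.0f} nm")
```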

  1. Automating the solution of PDEs on the sphere and other manifolds in FEniCS 1.2

    NASA Astrophysics Data System (ADS)

    Rognes, M. E.; Ham, D. A.; Cotter, C. J.; McRae, A. T. T.

    2013-12-01

    Differential equations posed over immersed manifolds are of particular importance in studying geophysical flows; for instance, ocean and atmosphere simulations crucially rely on the capability to solve equations over the sphere. This paper presents the extension of the FEniCS software components to the automated solution of finite element formulations of differential equations defined over general, immersed manifolds. We describe the implementation and, in particular, detail how the required extensions essentially reduce to extending the FEniCS form compiler to cover this case. The resulting implementation has all the properties of the FEniCS pipeline and we demonstrate its flexibility by an extensive range of numerical examples covering a number of geophysical benchmark examples and test cases. The results are all in agreement with the expected values. The description here relates to DOLFIN/FEniCS 1.2.

  2. Automating the solution of PDEs on the sphere and other manifolds in FEniCS 1.2

    NASA Astrophysics Data System (ADS)

    Rognes, M. E.; Ham, D. A.; Cotter, C. J.; McRae, A. T. T.

    2013-07-01

    Differential equations posed over immersed manifolds are of particular importance in studying geophysical flows; for instance, ocean and atmosphere simulations crucially rely on the capability to solve equations over the sphere. This paper presents the extension of the FEniCS software components to the automated solution of finite element formulations of differential equations defined over general, immersed manifolds. We describe the implementation and in particular detail how the required extensions essentially reduce to the extension of the FEniCS form compiler to cover this case. The resulting implementation has all the properties of the FEniCS pipeline and we demonstrate its flexibility by an extensive range of numerical examples covering a number of geophysical benchmark examples and test cases. The results are all in agreement with the expected values. The description here relates to DOLFIN/FEniCS 1.2.

  3. The effect of resistance level and stability demands on recruitment patterns and internal loading of spine in dynamic flexion and extension using a simple trunk model.

    PubMed

    Zeinali-Davarani, Shahrokh; Shirazi-Adl, Aboulfazl; Dariush, Behzad; Hemami, Hooshang; Parnianpour, Mohamad

    2011-07-01

    The effects of external resistance on the recruitment of trunk muscles in sagittal movements, and the coactivation mechanism used to maintain spinal stability, were investigated using a simple computational model of iso-resistive spine sagittal movements. Neural excitation of muscles was attained based on an inverse dynamics approach along with a stability-based optimisation. Trunk flexion and extension movements between 60° flexion and the upright posture against various resistance levels were simulated. Incorporation of the stability constraint in the optimisation algorithm required higher antagonistic activities for all resistance levels, mostly close to the upright position. Extension movements showed higher coactivation with higher resistance, whereas flexion movements demonstrated lower coactivation, indicating a greater stability demand in backward extension movements against higher resistance in the neighbourhood of the upright posture. Optimal extension profiles based on minimum jerk, work, and power had distinct kinematic profiles, which led to recruitment patterns with different timing and amplitude of activation.
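    A toy version of such a stability-constrained recruitment problem can be posed with just two muscles and scipy: minimize squared activations subject to a moment-balance equality and a minimum-"stiffness" inequality. All numbers are illustrative assumptions and the stiffness model is a crude stand-in for the paper's stability analysis, but the sketch shows how the stability floor forces antagonistic coactivation:

```python
import numpy as np
from scipy.optimize import minimize

# Two-muscle toy: an extensor and a flexor antagonist must produce a required
# net sagittal moment while "stiffness" (modeled here as proportional to total
# activation) stays above a floor. Purely illustrative numbers.
r = np.array([0.05, -0.04])          # moment arms, m (extensor +, flexor -)
Fmax = np.array([2000.0, 1500.0])    # maximum muscle forces, N
q = 30.0                             # stiffness per unit activation (assumed)
M_req = 40.0                         # required net extension moment, N*m
S_min = 25.0                         # stability (stiffness) floor (assumed)

cons = [
    {"type": "eq", "fun": lambda a: r @ (a * Fmax) - M_req},   # moment balance
    {"type": "ineq", "fun": lambda a: q * a.sum() - S_min},    # stability floor
]
res = minimize(lambda a: np.sum(a ** 2), x0=np.array([0.5, 0.1]),
               bounds=[(0.0, 1.0), (0.0, 1.0)], constraints=cons)
print("activations (extensor, flexor):", np.round(res.x, 3))
# Dropping the stability constraint removes the antagonistic (flexor) activity.
```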

  4. BlazeDEM3D-GPU A Large Scale DEM simulation code for GPUs

    NASA Astrophysics Data System (ADS)

    Govender, Nicolin; Wilke, Daniel; Pizette, Patrick; Khinast, Johannes

    2017-06-01

    Accurately predicting the dynamics of particulate materials is of importance to numerous scientific and industrial areas, with applications ranging across particle scales from powder flow to ore crushing. Computational discrete element simulations are a viable option to aid in the understanding of particulate dynamics and the design of devices such as mixers, silos, and ball mills, as laboratory-scale tests come at a significant cost. However, the computational time required for an industrial-scale simulation consisting of tens of millions of particles can run to months on large CPU clusters, making the Discrete Element Method (DEM) infeasible for industrial applications. Simulations are therefore typically restricted to tens of thousands of particles with highly detailed particle shapes, or a few million particles with often oversimplified particle shapes. However, a number of applications require accurate representation of the particle shape to capture the macroscopic behaviour of the particulate system. In this paper we give an overview of the recent extensions to the open-source GPU-based DEM code, BlazeDEM3D-GPU, that can simulate millions of polyhedra and tens of millions of spheres on a desktop computer with a single or multiple GPUs.
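    The per-contact kernel that such GPU codes parallelize over millions of pairs can be sketched with a linear spring-dashpot model for two spheres. The parameters below are illustrative; BlazeDEM3D-GPU's actual contact models, particle shapes, and data layout differ:

```python
import numpy as np

# One linear spring-dashpot contact between two spheres, integrated with
# explicit Euler steps. Illustrative parameters only.
kn, cn, dt = 1.0e5, 5.0, 1.0e-6        # normal stiffness N/m, damping, step s
r, m = 0.01, 4.0e-3                    # sphere radius m, mass kg
x = np.array([[0.0, 0.0, 0.0], [0.019, 0.0, 0.0]])    # slight initial overlap
v = np.array([[0.1, 0.0, 0.0], [-0.1, 0.0, 0.0]])     # approaching

for _ in range(1000):
    d = x[1] - x[0]
    dist = np.linalg.norm(d)
    overlap = 2.0 * r - dist
    f1 = np.zeros(3)
    if overlap > 0.0:                  # in contact: spring pushes apart,
        n = d / dist                   # dashpot damps the normal approach
        vn = np.dot(v[1] - v[0], n)
        f1 = (kn * overlap - cn * vn) * n
    v[1] += (f1 / m) * dt              # equal and opposite forces
    v[0] -= (f1 / m) * dt
    x += v * dt

print(f"final separation: {np.linalg.norm(x[1] - x[0]) * 1e3:.2f} mm")
```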

  5. Advanced Simulation and Computing Fiscal Year 14 Implementation Plan, Rev. 0.5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meisner, Robert; McCoy, Michel; Archer, Bill

    2013-09-11

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of experimental facilities and programs, and the computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources that support annual stockpile assessment and certification, study advanced nuclear weapons design and manufacturing processes, analyze accident scenarios and weapons aging, and provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balance of resources, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is now focused on increasing predictive capabilities in a three-dimensional (3D) simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (sufficient resolution, dimensionality, and scientific details), quantify critical margins and uncertainties, and resolve increasingly difficult analyses needed for the SSP. Moreover, ASC's business model is integrated and focused on requirements-driven products that address long-standing technical questions related to enhanced predictive capability in the simulation tools.

  6. RT 24 - Architecture, Modeling & Simulation, and Software Design

    DTIC Science & Technology

    2010-11-01

    Focus on tool extensions (UPDM, SysML, SoaML, BPMN); leverage "best of breed" architecture methodologies; provide tooling to support the methodology. Remaining slide fragments cover a DoDAF capability example in BPMN, the mapping of the DoDAF 2.0 metamodel to the BPMN metamodel, and the mapping of SysML to DoDAF 2.0 (DoDAF V2.0 models such as OV-2 rendered as SysML diagrams, including requirements).

  7. Coverage Extension and Balancing the Transmitted Power of the Moving Relay Node at LTE-A Cellular Network

    PubMed Central

    Aldhaibani, Jaafar A.; Yahya, Abid; Ahmad, R. Badlishah

    2014-01-01

    The poor capacity at cell boundaries is not enough to meet the growing demand and the stringent designs that require high capacity and throughput irrespective of the user's location in the cellular network. In this paper, we propose new schemes for optimum fixed relay node (RN) placement in an LTE-A cellular network to enhance throughput and extend coverage at the cell-edge region. The proposed approach mitigates interference between all nodes and ensures optimum utilization through optimization of the transmitted power. Moreover, we propose a new algorithm to balance the transmitted power of a moving relay node (MR) over the cell size, providing the required SNR and throughput to users inside the vehicle while reducing the power consumed by the MR. The numerical analysis, along with the simulation results, indicates a 40% improvement in downlink capacity for users relative to cell capacity. Furthermore, the results reveal a saving of nearly 75% in MR transmitted power after applying the proposed balancing algorithm. The ATDI simulator, which handles real digital cartographic and standard terrain formats, was used to verify the numerical results. PMID:24672378

  8. Coverage extension and balancing the transmitted power of the moving relay node at LTE-A cellular network.

    PubMed

    Aldhaibani, Jaafar A; Yahya, Abid; Ahmad, R Badlishah

    2014-01-01

    The poor capacity at cell boundaries is not enough to meet the growing demand and the stringent designs that require high capacity and throughput irrespective of the user's location in the cellular network. In this paper, we propose new schemes for optimum fixed relay node (RN) placement in an LTE-A cellular network to enhance throughput and extend coverage at the cell-edge region. The proposed approach mitigates interference between all nodes and ensures optimum utilization through optimization of the transmitted power. Moreover, we propose a new algorithm to balance the transmitted power of a moving relay node (MR) over the cell size, providing the required SNR and throughput to users inside the vehicle while reducing the power consumed by the MR. The numerical analysis, along with the simulation results, indicates a 40% improvement in downlink capacity for users relative to cell capacity. Furthermore, the results reveal a saving of nearly 75% in MR transmitted power after applying the proposed balancing algorithm. The ATDI simulator, which handles real digital cartographic and standard terrain formats, was used to verify the numerical results.
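    A back-of-envelope link budget shows why a relay near the cell edge raises user SNR. The log-distance path-loss constants, transmit powers, and distances below are illustrative assumptions, not the paper's parameters:

```python
import math

# Log-distance path-loss link budget; all constants are illustrative.
def rx_power_dbm(tx_dbm, d_m, ple, pl0_db=38.0, d0_m=1.0):
    """Received power (dBm) under a log-distance path-loss model."""
    return tx_dbm - (pl0_db + 10.0 * ple * math.log10(d_m / d0_m))

noise_dbm = -101.0   # approx. thermal noise over a 10 MHz LTE channel
direct = rx_power_dbm(46.0, 1200.0, 3.7)   # eNB at 46 dBm, user 1200 m away
relayed = rx_power_dbm(30.0, 150.0, 3.0)   # RN at 30 dBm, user 150 m away
print(f"direct SNR : {direct - noise_dbm:5.1f} dB")
print(f"relayed SNR: {relayed - noise_dbm:5.1f} dB")
```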

  9. Piezoresistive Cantilever Performance—Part I: Analytical Model for Sensitivity

    PubMed Central

    Park, Sung-Jin; Doll, Joseph C.; Pruitt, Beth L.

    2010-01-01

    An accurate analytical model for the change in resistance of a piezoresistor is necessary for the design of silicon piezoresistive transducers. Ion implantation requires a high-temperature oxidation or annealing process to activate the dopant atoms, and this treatment results in a distorted dopant profile due to diffusion. Existing analytical models do not account for the concentration dependence of piezoresistance and are not accurate for nonuniform dopant profiles. We extend previous analytical work by introducing two nondimensional factors, namely, the efficiency and geometry factors. A practical benefit of this efficiency factor is that it separates the process parameters from the design parameters; thus, designers may address requirements for cantilever geometry and fabrication process independently. To facilitate the design process, we provide a lookup table for the efficiency factor over an extensive range of process conditions. The model was validated by comparing simulation results with the experimentally determined sensitivities of piezoresistive cantilevers. We performed 9200 TSUPREM4 simulations and fabricated 50 devices from six unique process flows; we systematically explored the design space relating process parameters and cantilever sensitivity. Our treatment focuses on piezoresistive cantilevers, but the analytical sensitivity model is extensible to other piezoresistive transducers such as membrane pressure sensors. PMID:20336183

  10. Piezoresistive Cantilever Performance-Part I: Analytical Model for Sensitivity.

    PubMed

    Park, Sung-Jin; Doll, Joseph C; Pruitt, Beth L

    2010-02-01

    An accurate analytical model for the change in resistance of a piezoresistor is necessary for the design of silicon piezoresistive transducers. Ion implantation requires a high-temperature oxidation or annealing process to activate the dopant atoms, and this treatment results in a distorted dopant profile due to diffusion. Existing analytical models do not account for the concentration dependence of piezoresistance and are not accurate for nonuniform dopant profiles. We extend previous analytical work by introducing two nondimensional factors, namely, the efficiency and geometry factors. A practical benefit of this efficiency factor is that it separates the process parameters from the design parameters; thus, designers may address requirements for cantilever geometry and fabrication process independently. To facilitate the design process, we provide a lookup table for the efficiency factor over an extensive range of process conditions. The model was validated by comparing simulation results with the experimentally determined sensitivities of piezoresistive cantilevers. We performed 9200 TSUPREM4 simulations and fabricated 50 devices from six unique process flows; we systematically explored the design space relating process parameters and cantilever sensitivity. Our treatment focuses on piezoresistive cantilevers, but the analytical sensitivity model is extensible to other piezoresistive transducers such as membrane pressure sensors.

  11. Evaluation of total knee mechanics using a crouching simulator with a synthetic knee substitute.

    PubMed

    Lowry, Michael; Rosenbaum, Heather; Walker, Peter S

    2016-05-01

    Mechanical evaluation of total knees is frequently required for aspects such as wear, strength, kinematics, contact areas, and force transmission. In order to carry out such tests, we developed a crouching simulator, based on the Oxford-type machine, with novel features including a synthetic knee with ligaments. The instrumentation and data processing methods enabled the determination of contact area locations and interface forces and moments, for a full flexion-extension cycle. To demonstrate the use of the simulator, we carried out a comparison of two different total knee designs, cruciate retaining and substituting. The first part of the study describes the simulator design and the methodology for testing the knees without requiring cadaveric knee specimens. The degrees of freedom of the anatomic hip and ankle joints were reproduced. Flexion-extension was obtained by changing quadriceps length, while variable hamstring forces were applied using springs. The knee joint was represented by three-dimensional printed blocks on to which the total knee components were fixed. Pretensioned elastomeric bands of realistic stiffnesses passed through holes in the block at anatomical locations to represent ligaments. Motion capture of the knees during flexion, together with laser scanning and computer modeling, was used to reconstruct contact areas on the bearing surfaces. A method was also developed for measuring tibial component interface forces and moments as a comparative assessment of fixation. The method involved interposing Tekscan pads at locations on the interface. Overall, the crouching machine and the methodology could be used for many different mechanical measurements of total knee designs, adapted especially for comparative or parametric studies. © IMechE 2016.

  12. Simulation studies for the PANDA experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kopf, B.

    2005-10-26

    One main component of the planned Facility for Antiproton and Ion Research (FAIR) is the High Energy Storage Ring (HESR) at GSI, Darmstadt, which will provide cooled antiprotons with momenta between 1.5 and 15 GeV/c. The PANDA experiment will investigate p-bar annihilations with internal hydrogen and nuclear targets. Due to the planned extensive physics program, a multipurpose detector with nearly complete solid angle coverage, proper particle identification over a large momentum range, and high resolution calorimetry for neutral particles is required. For the optimization of the detector design, simulation studies of several benchmark channels covering the most relevant physics topics are in progress. Some important simulation results are discussed here.

  13. OSCAR a Matlab based optical FFT code

    NASA Astrophysics Data System (ADS)

    Degallaix, Jérôme

    2010-05-01

    Optical simulation software packages are essential tools for designing and commissioning laser interferometers. This article aims to introduce OSCAR, a Matlab-based FFT code, to the experimentalist community. OSCAR (Optical Simulation Containing Ansys Results) is used to simulate the steady-state electric fields in optical cavities with realistic mirrors. The main advantage of OSCAR over other similar packages is the simplicity of its code, which requires only a short time to master. As a result, even for a beginner, it is relatively easy to modify OSCAR to suit other specific purposes. OSCAR includes an extensive manual and numerous detailed examples, such as simulating thermal aberration, calculating cavity eigenmodes and diffraction loss, and simulating flat-beam cavities and three-mirror ring cavities. An example is also provided showing how to run OSCAR on the GPU of a modern graphics card instead of the CPU, making the simulation up to 20 times faster.
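    The kernel such FFT codes iterate is angular-spectrum propagation. A minimal NumPy sketch of that single operation (OSCAR itself is Matlab and far more complete; the beam and grid parameters here are assumed) propagates a Gaussian field and verifies that it spreads:

```python
import numpy as np

# Angular-spectrum propagation of a Gaussian beam over a distance z, the
# kernel an FFT cavity code iterates between mirrors. Parameters assumed.
n, width = 256, 0.02                   # samples, grid width (m)
lam, z, w0 = 1064e-9, 20.0, 2e-3       # wavelength, distance, waist (m)
x = np.linspace(-width / 2, width / 2, n)
X, Y = np.meshgrid(x, x)
E = np.exp(-(X ** 2 + Y ** 2) / w0 ** 2)      # field at the waist

fx = np.fft.fftfreq(n, d=width / n)           # spatial frequencies (1/m)
FX, FY = np.meshgrid(fx, fx)
kz = 2 * np.pi * np.sqrt(np.maximum(0.0, lam ** -2 - FX ** 2 - FY ** 2))
E_z = np.fft.ifft2(np.fft.fft2(E) * np.exp(1j * kz * z))

def rms_width(I):
    """Intensity-weighted rms width along x."""
    return np.sqrt((I * X ** 2).sum() / I.sum())

I0, Iz = np.abs(E) ** 2, np.abs(E_z) ** 2
print(f"rms width: {rms_width(I0)*1e3:.2f} mm -> {rms_width(Iz)*1e3:.2f} mm")
```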

  14. The Fire and Fuels Extension to the Forest Vegetation Simulator

    Treesearch

    Elizabeth Reinhardt; Nicholas L. Crookston

    2003-01-01

    The Fire and Fuels Extension (FFE) to the Forest Vegetation Simulator (FVS) simulates fuel dynamics and potential fire behavior over time, in the context of stand development and management. Existing models of fire behavior and fire effects were added to FVS to form this extension. New submodels representing snag and fuel dynamics were created to complete the linkages...

  15. Modeling AFM-induced PEVK extension and the reversible unfolding of Ig/FNIII domains in single and multiple titin molecules.

    PubMed Central

    Zhang, B; Evans, J S

    2001-01-01

    Molecular elasticity is associated with a select number of polypeptides and proteins, such as titin, Lustrin A, silk fibroin, and spider silk dragline protein. In the case of titin, the globular (Ig) and non-globular (PEVK) regions act as extensible springs under stretch; however, their unfolding behavior and force extension characteristics are different. Using our time-dependent macroscopic method for simulating AFM-induced titin Ig domain unfolding and refolding, we simulate the extension and relaxation of hypothetical titin chains containing Ig domains and a PEVK region. Two different models are explored: 1) a series-linked WLC expression that treats the PEVK region as a distinct entropic spring, and 2) a summation of N single WLC expressions that simulates the extension and release of a discrete number of parallel titin chains containing constant or variable amounts of PEVK. In addition to these simulations, we also modeled the extension of a hypothetical PEVK domain using a linear Hooke's spring model to account for "enthalpic" contributions to PEVK elasticity. We find that the modified WLC simulations feature chain length compensation, Ig domain unfolding/refolding, and force-extension behavior that more closely approximate AFM, laser tweezer, and immunolocalization experimental data. In addition, our simulations reveal the following: 1) PEVK extension overlaps with the onset of Ig domain unfolding, and 2) variations in PEVK content within a titin chain ensemble lead to elastic diversity within that ensemble. PMID:11159428
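    The first model above can be sketched directly: two worm-like chains in series carry the same tension, so their extensions add, and inverting the Marko-Siggia relation numerically for each segment gives the composite force-extension curve. The persistence and contour lengths below are illustrative, not fitted titin values:

```python
import numpy as np
from scipy.optimize import brentq

# Two WLC segments in series share the tension; their extensions add.
kBT = 4.11e-21   # J

def ms_force(frac, P):
    """Marko-Siggia WLC tension at relative extension frac."""
    return (kBT / P) * (0.25 / (1.0 - frac) ** 2 - 0.25 + frac)

def extension(F, P, L):
    """Extension of one WLC segment at tension F (numerical inversion)."""
    return L * brentq(lambda f: ms_force(f, P) - F, 0.0, 1.0 - 1e-9)

P_ig, L_ig = 10e-9, 200e-9       # folded Ig-domain chain (assumed values)
P_pevk, L_pevk = 1e-9, 80e-9     # floppier PEVK region (assumed values)

for F in (1e-12, 5e-12, 20e-12):     # tensions, N
    x_tot = extension(F, P_ig, L_ig) + extension(F, P_pevk, L_pevk)
    print(f"F = {F * 1e12:5.1f} pN   total extension = {x_tot * 1e9:6.1f} nm")
```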

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCoy, Michel; Archer, Bill; Hendrickson, Bruce

    The Stockpile Stewardship Program (SSP) is an integrated technical program for maintaining the safety, surety, and reliability of the U.S. nuclear stockpile. The SSP uses nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of experimental facilities and programs, and the computational capabilities to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources that support annual stockpile assessment and certification, study advanced nuclear weapons design and manufacturing processes, analyze accident scenarios and weapons aging, and provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balance of resources, including technical staff, hardware, simulation software, and computer science solutions. ASC is now focused on increasing predictive capabilities in a three-dimensional (3D) simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (sufficient resolution, dimensionality, and scientific details), and quantifying critical margins and uncertainties. Resolving each issue requires increasingly difficult analyses because the aging process has progressively moved the stockpile further away from the original test base. Where possible, the program also enables the use of high performance computing (HPC) and simulation tools to address broader national security needs, such as foreign nuclear weapon assessments and counter nuclear terrorism.

  17. What Makes a Simulation Useful

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eubank, S.G.

    1999-10-12

    Modern computers make possible a new blending of systems, man, and cybernetics in the detailed simulation of large sociotechnical systems. Several such simulations are currently under development at Los Alamos National Laboratory and elsewhere. When deployed, they will affect the daily lives of hundreds of millions of people and the allocation of billions of dollars. Whether they are deployed depends entirely on their perceived usefulness, which in turn depends on answers to the following: What kinds of questions does the simulation address and what kinds of solutions does it provide? How can the solutions be validated? Is simulation more cost-effective than other methods? Answers to these questions lead us to define a useful simulation as one which efficiently provides correct, robust estimates required by decision-making needs, together with well understood variability for the outcomes in hypothetical situations. This paper examines the implications of this criterion for the design of TRANSIMS, a regional transportation network simulation, and by extension, for simulations of other sociotechnical systems.

  18. Cryo Cooler Induced Micro-Vibration Disturbances to the Hubble Space Telescope

    NASA Technical Reports Server (NTRS)

    Jedrich, Nick; Zimbelman, Darrell; Turczyn, Mark; Sills, Joel; Voorhees, Carl; Clapp, Brian; Brumfield, Mark (Technical Monitor)

    2002-01-01

    This paper presents an overview of the Hubble Space Telescope (HST) Near Infrared Camera and Multi-Object Spectrometer (NICMOS) Cryo Cooler (NCC) system, a description of the micro-vibration characterization testing performed, and a discussion of the simulated performance. The NCC is a reverse Brayton cycle system that employs micro turbo-machinery to provide cooling to the NICMOS instrument. Extensive testing was conducted to quantify the expected on-orbit disturbances caused by the micro turbo-machinery and to provide input to a flexible-body dynamic simulation demonstrating compliance with the HST 7 milli-arcsecond root-mean-square jitter requirement.

  19. An overview of the fire and fuels extension to the forest vegetation simulator

    Treesearch

    Sarah J. Beukema; Elizabeth D. Reinhardt; Werner A. Kurz; Nicholas L. Crookston

    2000-01-01

    The Fire and Fuels Extension (FFE) to the Forest Vegetation Simulator (FVS) has been developed to assess the risk, behavior, and impact of fire in forest ecosystems. This extension to the widely-used stand-dynamics model FVS simulates the dynamics of snags and surface fuels as they are affected by stand management (of trees or fuels), live tree growth and mortality,...

  20. ECHO: A Computer Based Test for the Measurement of Individualistic, Cooperative, Defensive, and Aggressive Models of Behavior. Occasional Paper No. 30.

    ERIC Educational Resources Information Center

    Krus, David J.; And Others

    This paper describes a test which attempts to measure a group of personality traits by analyzing the actual behavior of the participant in a computer-simulated game. ECHO evolved from an extension and computerization of Horstein and Deutsch's allocation game. The computerized version of ECHO requires subjects to make decisions about the allocation…

  1. M&S Journal. Volume 8, Issue 2, Summer 2013

    DTIC Science & Technology

    2013-01-01

    Business Process Modeling Notation (BPMN) [White and Miers, 2008], and the integration of the modeling notation with executable simulation engines [Anupindi 2005] ... activities and the supporting IT in BPMN and use that to compute MOE for a mission instance. Requirements for Modeling Missions: To understand the ... representation versus impact computation tradeoffs we selected BPMN, along with some proposed extensions to represent information dependencies, as the ...

  2. Evolution of the INMARSAT aeronautical system: Service, system, and business considerations

    NASA Technical Reports Server (NTRS)

    Sengupta, Jay R.

    1995-01-01

    A market-driven approach was adopted to develop enhancements to the Inmarsat-Aeronautical system, to address the requirements of potential new market segments. An evolutionary approach and a well differentiated product/service portfolio were required, to minimize system upgrade costs and maximize market penetration, respectively. The evolved system definition serves to minimize equipment cost/size/mass for short/medium range aircraft, by reducing the antenna gain requirement and relaxing the performance requirements for non-safety-related communications. A validation program involving simulation, laboratory tests, over-satellite tests, and flight trials is being conducted to confirm the system definition. Extensive market research has been conducted to determine user requirements and to quantify market demand for the future Inmarsat Aero-1 AES, using sophisticated computer-assisted survey techniques.

  3. Simulation of Constrained Musculoskeletal Systems in Task Space.

    PubMed

    Stanev, Dimitar; Moustakas, Konstantinos

    2018-02-01

    This paper proposes an operational task space formalization of constrained musculoskeletal systems, motivated by its promising results in the field of robotics. The change of representation requires different algorithms for solving the inverse and forward dynamics simulation in the task space domain. We propose an extension to the direct marker control and an adaptation of the computed muscle control algorithms for solving the inverse kinematics and muscle redundancy problems, respectively. Experimental evaluation demonstrates that this framework is not only successful in dealing with the inverse dynamics problem, but also provides an intuitive way of studying and designing simulations, facilitating assessment prior to any experimental data collection. The incorporation of constraints in the derivation unveils an important extension of this framework toward addressing systems that use absolute coordinates and topologies that contain closed kinematic chains. Task space projection reveals a more intuitive encoding of the motion planning problem, allows for better correspondence between observed and estimated variables, provides the means to effectively study the role of kinematic redundancy, and most importantly, offers an abstract point of view and control, which can be advantageous toward further integration with high level models of the precommand level. Task-based approaches could be adopted in the design of simulation related to the study of constrained musculoskeletal systems.

  4. Semi-physical simulation test for micro CMOS star sensor

    NASA Astrophysics Data System (ADS)

    Yang, Jian; Zhang, Guang-jun; Jiang, Jie; Fan, Qiao-yun

    2008-03-01

    A newly designed star sensor must be extensively tested before launch. Testing a star sensor is a complicated process demanding considerable time and resources. Even observing the sky from the ground is a challenging and time-consuming job, requiring complicated and expensive equipment and a suitable time and location, and it is prone to interference from weather; moreover, not all stars distributed across the sky can be observed by this method. Semi-physical simulation in the laboratory reduces testing cost and helps to debug, analyze, and evaluate the star sensor system while the model is being developed. The test system is composed of an optical platform, a star field simulator, a star field simulator computer, the star sensor, and the central data processing computer. The test system simulates starlight with high accuracy and good parallelism, and creates static or dynamic images in the FOV (field of view). The conditions of the test are close to those of observing the real sky. With this system, the test of a micro star tracker designed by Beijing University of Aeronautics and Astronautics has been performed successfully. Indices including full-sky autonomous star identification time, attitude update frequency, and attitude precision meet the design requirements of the star sensor. Error sources of the testing system are also analyzed. It is concluded that the testing system is cost-saving and efficient, and that it contributes to optimizing the embedded algorithms, shortening the development cycle, and improving the engineering design process.
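    The core of a star-field simulator of this kind is a pinhole projection of catalog star unit vectors onto the detector plane. A toy sketch (the focal length, pixel pitch, and random "catalog" are illustrative assumptions, not the parameters of the system described):

```python
import numpy as np

# Pinhole projection of star unit vectors onto a detector; the optics and
# the random "catalog" are illustrative assumptions.
rng = np.random.default_rng(1)
f_mm, pix_um, n_pix = 16.0, 5.5, 1024

# random unit vectors biased toward the boresight (+z)
v = rng.normal(size=(200, 3))
v[:, 2] = np.abs(v[:, 2]) * 5.0
v /= np.linalg.norm(v, axis=1, keepdims=True)

uv_mm = f_mm * v[:, :2] / v[:, 2:3]            # (u, v) = f * (x/z, y/z)
uv_pix = uv_mm * 1e3 / pix_um + n_pix / 2.0    # mm -> pixels, centered origin
on_det = np.all((uv_pix >= 0) & (uv_pix < n_pix), axis=1)
print(f"{int(on_det.sum())} of {len(v)} simulated stars fall on the detector")
```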

  5. Guidance law simulation studies for complex approaches using the Microwave Landing System (MLS)

    NASA Technical Reports Server (NTRS)

    Feather, J. B.

    1986-01-01

    This report documents results for MLS guidance algorithm development conducted by DAC for NASA under the Advance Transport Operating Systems (ATOPS) Technology Studies program (NAS1-18028). The study consisted of evaluating guidance laws for vertical and lateral path control, as well as speed control, by simulating an MLS approach for the Washington National Airport. This work is an extension and generalization of a previous ATOPS contract (NAS1-16202) completed by DAC in 1985. The Washington river approach was simulated by six waypoints and one glideslope change and consisted of an eleven nautical mile approach path. Tracking performance was generated for 10 cases representing several different conditions, which included MLS noise, steady wind, turbulence, and windshear. Results of this simulation phase are suitable for use in future fixed-base simulator evaluations employing actual hardware (autopilot and a performance management system), as well as crew procedures and information requirements for MLS.

  6. Monte Carlo simulations of particle acceleration at oblique shocks: Including cross-field diffusion

    NASA Technical Reports Server (NTRS)

    Baring, M. G.; Ellison, D. C.; Jones, F. C.

    1995-01-01

    The Monte Carlo technique of simulating diffusive particle acceleration at shocks has made spectral predictions that compare extremely well with particle distributions observed at the quasi-parallel region of the earth's bow shock. The current extension of this work to compare simulation predictions with particle spectra at oblique interplanetary shocks has required the inclusion of significant cross-field diffusion (strong scattering) in the simulation technique, since oblique shocks are intrinsically inefficient in the limit of weak scattering. In this paper, we present results from the method we have developed for the inclusion of cross-field diffusion in our simulations, namely model predictions of particle spectra downstream of oblique subluminal shocks. While the high-energy spectral index is independent of the shock obliquity and the strength of the scattering, the latter is observed to profoundly influence the efficiency of injection of cosmic rays into the acceleration process.
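    The essence of diffusive (first-order Fermi) acceleration can be captured in a few lines: each shock-crossing cycle multiplies a particle's energy by a fixed factor, and the particle escapes downstream with a fixed probability, which yields a power-law spectrum with integral index -ln(1 - p_esc)/ln(1 + gain). A toy Monte Carlo with generic gain and escape values (the paper's simulations treat obliquity and cross-field diffusion in far more detail):

```python
import numpy as np

# Each cycle: multiply energy by (1 + gain); escape with probability p_esc.
# Integral spectrum: N(>E) ~ E**(-q), q = -ln(1 - p_esc) / ln(1 + gain).
rng = np.random.default_rng(0)
n, gain, p_esc = 200_000, 0.10, 0.10
E = np.ones(n)
alive = np.ones(n, dtype=bool)
while alive.any():
    E[alive] *= 1.0 + gain
    alive &= rng.random(n) >= p_esc      # survivors stay for another cycle

q_theory = -np.log(1.0 - p_esc) / np.log(1.0 + gain)
E_sorted = np.sort(E)
N_gt = np.arange(n, 0, -1)               # rank -> N(>E)
mask = (E_sorted > 2.0) & (E_sorted < 50.0)
slope = np.polyfit(np.log(E_sorted[mask]), np.log(N_gt[mask]), 1)[0]
print(f"measured integral index {-slope:.2f} vs theory {q_theory:.2f}")
```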

  7. Feasibility of four-dimensional preoperative simulation for elbow debridement arthroplasty.

    PubMed

    Yamamoto, Michiro; Murakami, Yukimi; Iwatsuki, Katsuyuki; Kurimoto, Shigeru; Hirata, Hitoshi

    2016-04-02

    Recent advances in imaging modalities have enabled three-dimensional preoperative simulation. A four-dimensional preoperative simulation system would be useful for debridement arthroplasty of primary degenerative elbow osteoarthritis because it would be able to detect the impingement lesions. We developed a four-dimensional simulation system by adding the anatomical axis to the three-dimensional computed tomography scan data of the affected arm in one position. Eleven patients with primary degenerative elbow osteoarthritis were included. A "two rings" method was used to calculate the flexion-extension axis of the elbow by converting the surface of the trochlea and capitellum into two rings. A four-dimensional simulation movie was created and showed the optimal range of motion and the impingement area requiring excision. To evaluate the reliability of the flexion-extension axis, interobserver and intraobserver reliabilities regarding the assessment of bony overlap volumes were calculated twice for each patient by two authors. Patients were treated by open or arthroscopic debridement arthroplasties. Pre- and postoperative examinations included elbow range of motion measurement, and completion of the patient-rated questionnaire Hand20, Japanese Orthopaedic Association-Japan Elbow Society Elbow Function Score, and the Mayo Elbow Performance Score. Measurement of the bony overlap volume showed an intraobserver intraclass correlation coefficient of 0.93 and 0.90, and an interobserver intraclass correlation coefficient of 0.94. The mean elbow flexion-extension arc significantly improved from 101° to 125°. The mean Hand20 score significantly improved from 52 to 22. The mean Japanese Orthopaedic Association-Japan Elbow Society Elbow Function Score significantly improved from 67 to 88. The mean Mayo Elbow Performance Score significantly improved from 71 to 91 at the final follow-up evaluation. We showed that four-dimensional, preoperative simulation can be generated by adding the rotation axis to the one-position, three-dimensional computed tomography image of the affected arm. This method is feasible for elbow debridement arthroplasty.
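    A simplified version of the "two rings" construction can be sketched as follows: sample points on the trochlear and capitellar rims, estimate each ring's centre, and take the line through the two centres as the flexion-extension axis. The rings below are synthetic, and centroids stand in for proper circle fits; a real pipeline would segment the rims from the CT surface:

```python
import numpy as np

# Synthetic rims for the trochlea and capitellum; each ring's centre is
# estimated by its centroid, and the axis joins the two centres.
rng = np.random.default_rng(0)

def ring(centre, radius, n=50, noise=0.2):
    """Noisy circle of points in a plane normal to the x axis (mm)."""
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    pts = np.stack([np.zeros(n), radius * np.cos(t), radius * np.sin(t)], axis=1)
    return centre + pts + rng.normal(scale=noise, size=(n, 3))

trochlea = ring(np.array([0.0, 0.0, 0.0]), 12.0)      # assumed geometry, mm
capitellum = ring(np.array([25.0, 0.5, -0.3]), 9.0)

c1, c2 = trochlea.mean(axis=0), capitellum.mean(axis=0)
axis = (c2 - c1) / np.linalg.norm(c2 - c1)
print("flexion-extension axis direction:", np.round(axis, 3))
```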

  8. Design of a 100 MW X-band klystron

    NASA Astrophysics Data System (ADS)

    Eppley, Kenneth

    1989-02-01

    Future linear colliders will require klystrons with higher peak power at higher frequency than are currently in use. SLAC is currently designing a 100 MW klystron at 11.4 GHz as a prototype for such a tube. The gun has been designed for 440 kV and 510 A. Transporting this beam through a 5 mm radius X-band drift tube presents the major design problem. The area convergence ratio of 190 to one is over ten times higher than is found in conventional klystrons. Even with high magnetic fields of 6 to 7 kilogauss, careful matching is required to prevent excessive scalloping. Extensive EGUN and CONDOR simulations have been made to optimize the transmission and RF efficiency. The EGUN simulations indicate that better matching is possible by using resonant magnetic focusing. CONDOR calculations indicate efficiencies of 45 percent are possible with a double output cavity. We will discuss the results of the simulations and the status of the experimental program.
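    A worked check of the quoted numbers: if the beam filled the 5 mm drift tube, a 190:1 area convergence would imply a cathode radius of roughly sqrt(190) x 5 mm, and the gun voltage and current give the beam perveance:

```python
import math

r_drift = 5.0e-3                        # drift-tube radius, m (from abstract)
area_ratio = 190.0                      # area convergence (from abstract)
r_cathode = r_drift * math.sqrt(area_ratio)
print(f"implied cathode radius: {r_cathode * 1e3:.0f} mm")   # about 69 mm

V, I = 440e3, 510.0                     # gun voltage and current (from abstract)
microperveance = I / V ** 1.5 * 1e6     # I / V**(3/2), in uA/V^1.5
print(f"beam microperveance: {microperveance:.2f}")          # about 1.75
```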

  9. Force-extension behavior of DNA in the presence of DNA-bending nucleoid associated proteins

    NASA Astrophysics Data System (ADS)

    Dahlke, K.; Sing, C. E.

    2018-02-01

    Interactions between nucleoid associated proteins (NAPs) and DNA affect DNA polymer conformation, leading to phenomena such as concentration dependent force-extension behavior. These effects, in turn, also impact the local binding behavior of the protein, such as high forces causing proteins to unbind, or proteins binding favorably to locally bent DNA. We develop a coarse-grained NAP-DNA simulation model that incorporates both force- and concentration-dependent behaviors, in order to study the interplay between NAP binding and DNA conformation. This model system includes multi-state protein binding and unbinding, motivated by prior work, but is now dependent on the local structure of the DNA, which is related to external forces acting on the DNA strand. We observe the expected qualitative binding behavior, where more proteins are bound at lower forces than at higher forces. Our model also includes NAP-induced DNA bending, which affects DNA elasticity. We see semi-quantitative matching of our simulated force-extension behavior to the reported experimental data. By using a coarse-grained simulation, we are also able to look at non-equilibrium behaviors, such as dynamic extension of a DNA strand. We stretch a DNA strand at different rates and at different NAP concentrations to observe how the time scales of the system (such as pulling time and unbinding time) work in concert. When these time scales are similar, we observe measurable rate-dependent changes in the system, which include the number of proteins bound and the force required to extend the DNA molecule. This suggests that the relative time scales of different dynamic processes play an important role in the behavior of NAP-DNA systems.
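    The qualitative force dependence described here (fewer proteins bound at higher force) can be illustrated with a Bell-model off-rate, k_off(F) = k0 * exp(F * dx / kB T), and the resulting equilibrium bound fraction k_on/(k_on + k_off). The rates and the distance dx below are assumed, and the paper's multi-state, bending-dependent model is much richer than this two-state picture:

```python
import math

# Bell-model off-rate grows exponentially with tension, so the equilibrium
# bound fraction k_on / (k_on + k_off) falls as force rises.
kBT = 4.11e-21                     # J
k_on, k0, dx = 10.0, 1.0, 1.0e-9   # 1/s, 1/s, m (all assumed)

for F_pN in (0.0, 2.0, 5.0, 10.0):
    k_off = k0 * math.exp(F_pN * 1e-12 * dx / kBT)
    bound = k_on / (k_on + k_off)
    print(f"F = {F_pN:4.1f} pN   bound fraction = {bound:.3f}")
```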

  10. Practices to enable the geophysical research spectrum: from fundamentals to applications

    NASA Astrophysics Data System (ADS)

    Kang, S.; Cockett, R.; Heagy, L. J.; Oldenburg, D.

    2016-12-01

    In a geophysical survey, a source injects energy into the earth and a response is measured. These physical systems are governed by partial differential equations and their numerical solutions are obtained by discretizing the earth. Geophysical simulations and inversions are tools for understanding physical responses and constructing models of the subsurface given a finite amount of data. SimPEG (http://simpeg.xyz) is our effort to synthesize geophysical forward and inverse methodologies into a consistent framework. The primary focus of our initial development has been on the electromagnetics (EM) package, with recent extensions to magnetotelluric, direct current (DC), and induced polarization. Across these methods, and applied geophysics in general, we require tools to explore and build an understanding of the physics (behaviour of fields, fluxes), and work with data to produce models through reproducible inversions. If we consider DC or EM experiments, with the aim of understanding responses from subsurface conductors, we require resources that provide multiple "entry points" into the geophysical problem. To understand the physical responses and measured data, we must simulate the physical system and visualize electric fields, currents, and charges. Performing an inversion requires that many moving pieces be brought together: simulation, physics, linear algebra, data processing, optimization, etc. Each component must be trusted, accessible to interrogation and manipulation, and readily combined in order to enable investigation into inversion methodologies. To support such research, we not only require "entry points" into the software, but also extensibility to new situations. In our development of SimPEG, we have sought to use leading practices in software development with the aim of supporting and promoting collaborations across a spectrum of geophysical research: from fundamentals to applications. Designing software to enable this spectrum puts unique constraints on both the architecture of the codebase as well as the development practices that are employed. In this presentation, we will share some lessons learned and, in particular, how our prioritization of testing, documentation, and refactoring has impacted our own research and fostered collaborations.

  11. Developing a multi-method approach to data collection and analysis for explaining the learning during simulation in undergraduate nurse education.

    PubMed

    Bland, Andrew J; Tobbell, Jane

    2015-11-01

    Simulation has become an established feature of undergraduate nurse education and as such requires extensive investigation. Research limited to pre-constructed categories imposed by some questionnaire and interview methods may only provide partial understanding. This is problematic in understanding the mechanisms of learning in simulation-based education as contemporary distributed theories of learning posit that learning can be understood as the interaction of individual identity with context. This paper details a method of data collection and analysis that captures interaction of individuals within the simulation experience which can be analysed through multiple lenses, including context and through the lens of both researcher and learner. The study utilised a grounded theory approach involving 31 under-graduate third year student nurses. Data was collected and analysed through non-participant observation, digital recordings of simulation activity and focus group deconstruction of their recorded simulation by the participants and researcher. Focus group interviews enabled further clarification. The method revealed multiple levels of dynamic data, concluding that in order to better understand how students learn in social and active learning strategies, dynamic data is required enabling researchers and participants to unpack what is happening as it unfolds in action. Copyright © 2015 Elsevier Ltd. All rights reserved.

  12. [Anesthesia simulators and training devices].

    PubMed

    Hartmannsgruber, M; Good, M; Carovano, R; Lampotang, S; Gravenstein, J S

    1993-07-01

    Simulators and training devices are used extensively by educators in 'high-tech' occupations, especially those requiring an understanding of complex systems and co-ordinated psychomotor skills. Because of advances in computer technology, anaesthetised patients can now be realistically simulated. This paper describes several training devices and a simulator currently being employed in the training of anaesthesia personnel at the University of Florida. This Gainesville Anesthesia Simulator (GAS) comprises a patient mannequin, anaesthesia gas machine, and a full set of normally operating monitoring instruments. The patient can spontaneously breathe, has audible heart and breath sounds, and palpable pulses. The mannequin contains a sophisticated lung model that consumes and eliminates gas according to physiological principles. Interconnected computers controlling the physical signs of the mannequin enable the presentation of a multitude of clinical signs. In addition, the anaesthesia machine, which is functionally intact, has hidden fault activators to challenge the user to correct equipment malfunctions. Concealed sensors monitor the users' actions and responses. A robust data acquisition and control system and a user-friendly scripting language for programming simulation scenarios are key features of GAS and make this system applicable for the training of both the beginning resident and the experienced practitioner. GAS enhances clinical education in anaesthesia by providing a non-threatening environment that fosters learning by doing. Exercises with the simulator are supported by sessions on a number of training devices. These present theoretical and practical interactive courses on the anaesthesia machine and on monitors. An extensive system, for example, introduces the student to the physics and clinical application of transoesophageal echocardiography.(ABSTRACT TRUNCATED AT 250 WORDS)

  13. Influence of lumbar spine extension on vertical jump height during maximal squat jumping.

    PubMed

    Blache, Yoann; Monteil, Karine

    2014-01-01

    The purpose of this study was to determine the influence of lumbar spine extension and erector spinae muscle activation on vertical jump height during maximal squat jumping. Eight male athletes performed maximal squat jumps. Electromyograms of the erector spinae were recorded during these jumps. A simulation model of the musculoskeletal system was used to simulate maximal squat jumping with and without spine extension. The effect of changing erector spinae strength on vertical jump height was also tested through the simulated jumps. In the participants' jumps, the kinematics indicated spine extension and erector spinae activation. In the simulated jumps, vertical jump height was about 5.4 cm lower for the squat jump without trunk extension than for the normal squat jump. This difference was explained by greater total muscle work during the normal squat jump, most notably the work of the erector spinae (+119.5 J). The erector spinae may contribute to spine extension during maximal squat jumping. The simulated jumps confirmed this hypothesis, showing that vertical jump height decreased when this muscle was not included in the model. It is therefore concluded that the erector spinae should be considered a trunk extensor that increases total muscle work and consequently vertical jump height.

  14. Advanced Simulation & Computing FY15 Implementation Plan Volume 2, Rev. 0.5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCoy, Michel; Archer, Bill; Matzen, M. Keith

    2014-09-16

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of experimental facilities and programs, and the computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources that support annual stockpile assessment and certification, study advanced nuclear weapons design and manufacturing processes, analyze accident scenarios and weapons aging, and provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balance of resources, including technical staff, hardware, simulation software, and computer science solutions. As the program approaches the end of its second decade, ASC is intently focused on increasing predictive capabilities in a three-dimensional (3D) simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (sufficient resolution, dimensionality, and scientific details), quantify critical margins and uncertainties, and resolve increasingly difficult analyses needed for the SSP. Where possible, the program also enables the use of high-performance simulation and computing tools to address broader national security needs, such as foreign nuclear weapon assessments and counter nuclear terrorism.

  15. Integrated restructurable flight control system demonstration results

    NASA Technical Reports Server (NTRS)

    Weiss, Jerold L.; Hsu, John Y.

    1987-01-01

    The purpose of this study was to examine the complementary capabilities of several restructurable flight control system (RFCS) concepts through the integration of these technologies into a complete system. Performance issues were addressed through a re-examination of RFCS functional requirements, and through a qualitative analysis of the design issues that, if properly addressed during integration, will lead to the highest possible degree of fault-tolerant performance. Software developed under previous phases of this contract and under NAS1-18004 was modified and integrated into a complete RFCS subroutine for NASA's B-737 simulation. The integration of these modules involved the development of methods for dealing with the mismatch between the outputs of the failure detection module and the input requirements of the automatic control system redesign module. The performance of this demonstration system was examined through extensive simulation trials.

  16. Flight test experience and controlled impact of a large, four-engine, remotely piloted airplane

    NASA Technical Reports Server (NTRS)

    Kempel, R. W.; Horton, T. W.

    1985-01-01

    A controlled impact demonstration (CID) program using a large, four-engine, remotely piloted transport airplane was conducted. Closed-loop primary flight control was performed from a ground-based cockpit and digital computer in conjunction with an up/down telemetry link. Uplink commands were received aboard the airplane and transferred through uplink interface systems to a highly modified Bendix PB-20D autopilot. Both proportional and discrete commands were generated by the ground pilot. Prior to flight tests, extensive simulation was conducted during the development of ground-based digital control laws. The control laws included primary control, secondary control, and racetrack and final approach guidance. Extensive ground checks were performed on all remotely piloted systems. However, manned flight tests were the primary method of verification and validation of control law concepts developed from simulation. The design, development, and flight testing of control laws and the systems required to accomplish the remotely piloted mission are discussed.

  17. Three-Dimensional Data Registration Based on Human Perception

    DTIC Science & Technology

    2006-01-01

    sets. The new algorithm was tested extensively on simulated sensor images in several scenarios key to successful application to autonomous ground... that humans perceive visual images, an assumption of stationarity can be applied to the data sets to compensate for any new data... proximity to each other that an assumption of, or preference for, stationarity would require corresponding data in the data sets that is not new

  18. Fuels planning: science synthesis and integration; environmental consequences fact sheet 09: Fire and Fuels Extension to the Forest Vegetation Simulator (FFE-FVS)

    Treesearch

    Elizabeth Reinhardt

    2005-01-01

    FFE-FVS is a model linking stand development, fuel dynamics, fire behavior and fire effects. It allows comparison of mid- to long-term effects of management alternatives including harvest, mechanical fuel treatment, prescribed fire, salvage, and no action. This fact sheet identifies the intended users and uses, required inputs, what the model does, and tells the user...

  19. Fatigue reassessment for lifetime extension of offshore wind monopile substructures

    NASA Astrophysics Data System (ADS)

    Ziegler, Lisa; Muskulus, Michael

    2016-09-01

    Fatigue reassessment is required to decide on lifetime extension of aging offshore wind farms. This paper presents a methodology to identify important parameters to monitor during the operational phase of offshore wind turbines. An elementary effects method is applied to analyze the global sensitivity of residual fatigue lifetimes to environmental, structural, and operational parameters. To this end, renewed lifetime simulations are performed for a case study consisting of a 5 MW turbine on a monopile substructure in 20 m water depth. Results show that corrosion, turbine availability, and turbulence intensity are the most influential parameters. This can vary strongly for other settings (water depth, turbine size, etc.), making case-specific assessments necessary.
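
    The elementary effects screening at the core of this methodology is compact enough to sketch. Below is a minimal Python illustration of a one-at-a-time elementary effects estimate; the model function, bounds, and sample counts are placeholders, and the paper's full trajectory design (and its fatigue model) is more involved.

        import numpy as np

        def elementary_effects(model, lo, hi, r=10, delta=0.1, seed=0):
            # model: maps a parameter vector (e.g. corrosion rate, availability,
            # turbulence intensity) to a scalar output such as residual lifetime.
            rng = np.random.default_rng(seed)
            lo, hi = np.asarray(lo, float), np.asarray(hi, float)
            k = lo.size
            effects = np.empty((r, k))
            for i in range(r):
                x = lo + rng.random(k) * (hi - lo)        # random base point
                y0 = model(x)
                for j in range(k):                        # one-at-a-time steps
                    xp = x.copy()
                    xp[j] += delta * (hi[j] - lo[j])
                    effects[i, j] = (model(xp) - y0) / delta
            # mean absolute effect ranks influence; std flags nonlinearity/interaction
            return np.abs(effects).mean(axis=0), effects.std(axis=0)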

  20. The Application of the Human Engineering Modeling and Performance Laboratory for Space Vehicle Ground Processing Tasks at Kennedy Space Center

    NASA Technical Reports Server (NTRS)

    Woodbury, Sarah K.

    2008-01-01

    The introduction of United Space Alliance's Human Engineering Modeling and Performance Laboratory began in early 2007 in an attempt to address the problematic workspace design issues that the Space Shuttle has imposed on technicians performing maintenance and inspection operations. The Space Shuttle was not expected to require the extensive maintenance it undergoes between flights. As a result, extensive, costly resources have been expended on workarounds and modifications to accommodate ground processing personnel. Consideration of basic human factors principles for design of maintenance is essential during the design phase of future space vehicles, facilities, and equipment. Simulation will be needed to test and validate designs before implementation.

  1. Fail Safe, High Temperature Magnetic Bearings

    NASA Technical Reports Server (NTRS)

    Minihan, Thomas; Palazzolo, Alan; Kim, Yeonkyu; Lei, Shu-Liang; Kenny, Andrew; Na, Uhn Joo; Tucker, Randy; Preuss, Jason; Hunt, Andrew; Carter, Bart

    2002-01-01

    This paper contributes to the magnetic bearing literature in two distinct areas: high temperature and redundant actuation. Design considerations and test results are given for the first published high-speed rotating test of a magnetic bearing at 538 °C (1000 °F). Second, a significant extension of the flux-isolation-based redundant actuator control algorithm is proposed to eliminate the prior deficiency of changing position stiffness after failure. The benefit of the novel extension was not experimentally demonstrated because of a high active stiffness requirement. In addition, test results are given for actuator failure tests at 399 °C (750 °F) and 12,500 rpm. Finally, simulation results are presented confirming the experimental data and validating the redundant control algorithm.

  2. Advanced EUV mask and imaging modeling

    NASA Astrophysics Data System (ADS)

    Evanschitzky, Peter; Erdmann, Andreas

    2017-10-01

    The exploration and optimization of image formation in partially coherent EUV projection systems with complex source shapes requires flexible, accurate, and efficient simulation models. This paper reviews advanced mask diffraction and imaging models for the highly accurate and fast simulation of EUV lithography systems, addressing important aspects of the current technical developments. The simulation of light diffraction from the mask employs an extended rigorous coupled wave analysis (RCWA) approach, which is optimized for EUV applications. In order to be able to deal with current EUV simulation requirements, several additional models are included in the extended RCWA approach: a field decomposition and a field stitching technique enable the simulation of larger complex structured mask areas. An EUV multilayer defect model including a database approach makes the fast and fully rigorous defect simulation and defect repair simulation possible. A hybrid mask simulation approach combining real and ideal mask parts allows the detailed investigation of the origin of different mask 3-D effects. The image computation is done with a fully vectorial Abbe-based approach. Arbitrary illumination and polarization schemes and adapted rigorous mask simulations guarantee a high accuracy. A fully vectorial sampling-free description of the pupil with Zernikes and Jones pupils and an optimized representation of the diffraction spectrum enable the computation of high-resolution images with high accuracy and short simulation times. A new pellicle model supports the simulation of arbitrary membrane stacks, pellicle distortions, and particles/defects on top of the pellicle. Finally, an extension for highly accurate anamorphic imaging simulations is included. The application of the models is demonstrated by typical use cases.
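
    The Abbe-based image computation can be written compactly. As a scalar, discretized sketch consistent with the abstract (the paper's actual formulation is fully vectorial, with Jones pupils and polarized illumination), the image is a weighted sum over source points j:

    \[
    I(\mathbf{x}) \;=\; \sum_{j} w_j \left| \sum_{\mathbf{f}} \tilde m_j(\mathbf{f})\, P(\mathbf{f}+\mathbf{f}_j)\, e^{\,2\pi i\,\mathbf{f}\cdot\mathbf{x}} \right|^{2},
    \]

    where w_j weights source point j at illumination frequency f_j, \tilde m_j is the rigorously computed (RCWA) mask diffraction spectrum for that illumination direction, and P is the projector pupil function.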

  3. Energy requirements in pressure irrigation systems

    NASA Astrophysics Data System (ADS)

    Sánchez, R.; Rodríguez-Sinobas, L.; Juana, L.; Laguna, F. V.; Castañón, G.; Gil, M.; Benítez, J.

    2012-04-01

    Modernization of irrigation schemes, generally understood as the transformation of surface irrigation systems into pressure (sprinkler and trickle) irrigation systems, aims, among other goals, at improving irrigation efficiency and reducing the operation and maintenance effort required of irrigators. Pressure irrigation systems, however, carry a serious energy cost. Energy requirements depend on management decisions taken during the operation phase, which are conditioned by earlier decisions taken in the design of the different elements that compose the irrigation system. Most countries where irrigation activity is significant recognize that irrigation modernization must play a key role in agricultural infrastructure policies. The objective of this study is to characterize and estimate the mean and variation of the energy consumed by common types of irrigation systems and their management possibilities. The work includes all processes involved, from the diversion of water into irrigation-specific infrastructure to water discharge by the emitters installed on the crop fields. Simulation taking into account all elements comprising the irrigation system has been used to estimate the energy requirements of typical irrigation systems for several crop production systems. It has been applied to extensive and intensive crop systems, such as extensive winter crops, summer crops and olive trees, fruit trees and vineyards, and intensive horticulture in greenhouses. The simulation of various types of irrigation systems and management strategies, in the framework imposed by particular cropping systems, should help to develop criteria for improving the energy balance in relation to irrigation water supply productivity.
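
    As a back-of-the-envelope companion to such simulations, the hydraulic part of the energy requirement follows from the standard relation P = ρgQH/η. The flow rate, head, efficiency, and seasonal hours in this Python sketch are illustrative values only, not results from the study.

        RHO, G = 1000.0, 9.81        # water density (kg/m^3), gravity (m/s^2)

        def pump_energy_kwh(q_m3_s, head_m, eta, hours):
            # Hydraulic power divided by pump efficiency, accumulated over
            # the irrigation season and converted from W*h to kWh.
            power_w = RHO * G * q_m3_s * head_m / eta
            return power_w * hours / 1000.0

        # e.g. 50 L/s lifted against 45 m of head at 65% efficiency for 800 h:
        print(pump_energy_kwh(0.05, 45.0, 0.65, 800.0))  # ~27,000 kWh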

  4. Lunar Regolith Simulant Materials: Recommendations for Standardization, Production, and Usage

    NASA Technical Reports Server (NTRS)

    Sibille, L.; Carpenter, P.; Schlagheck, R.; French, R. A.

    2006-01-01

    Experience gained during the Apollo program demonstrated the need for extensive testing of surface systems in relevant environments, including regolith materials similar to those encountered on the lunar surface. As NASA embarks on a return to the Moon, it is clear that the current lunar sample inventory is not only insufficient to support lunar surface technology and system development, but its scientific value is too great to be consumed by destructive studies. Every effort must be made to utilize standard simulant materials, which will allow developers to reduce the cost, development, and operational risks to surface systems. The Lunar Regolith Simulant Materials Workshop held in Huntsville, AL, on January 24-26, 2005, identified the need for widely accepted standard reference lunar simulant materials to perform research and development of technologies required for lunar operations. The workshop also established a need for a common, traceable, and repeatable process regarding the standardization, characterization, and distribution of lunar simulants. This document presents recommendations for the standardization, production, and usage of lunar regolith simulant materials.

  5. Onyx-Advanced Aeropropulsion Simulation Framework Created

    NASA Technical Reports Server (NTRS)

    Reed, John A.

    2001-01-01

    The Numerical Propulsion System Simulation (NPSS) project at the NASA Glenn Research Center is developing a new software environment for analyzing and designing aircraft engines and, eventually, space transportation systems. Its purpose is to dramatically reduce the time, effort, and expense necessary to design and test jet engines by creating sophisticated computer simulations of an aerospace object or system (refs. 1 and 2). Through a university grant as part of that effort, researchers at the University of Toledo have developed Onyx, an extensible Java-based (Sun Microsystems, Inc.), object-oriented simulation framework, to investigate how advanced software design techniques can be successfully applied to aeropropulsion system simulation (refs. 3 and 4). The design of Onyx's architecture enables users to customize and extend the framework to add new functionality or adapt simulation behavior as required. It exploits object-oriented technologies, such as design patterns, domain frameworks, and software components, to develop a modular system in which users can dynamically replace components with others having different functionality.

  6. Physical and digital simulations for IVA robotics

    NASA Technical Reports Server (NTRS)

    Hinman, Elaine; Workman, Gary L.

    1992-01-01

    Space based materials processing experiments can be enhanced through the use of IVA robotic systems. A program to determine requirements for the implementation of robotic systems in a microgravity environment and to develop some preliminary concepts for acceleration control of small, lightweight arms has been initiated with the development of physical and digital simulation capabilities. The physical simulation facilities incorporate a robotic workcell containing a Zymark Zymate II robot instrumented for acceleration measurements, which is able to perform materials transfer functions while flying on NASA's KC-135 aircraft during parabolic maneuvers to simulate reduced gravity. Measurements of accelerations occurring during the reduced gravity periods will be used to characterize impacts of robotic accelerations in a microgravity environment in space. Digital simulations are being performed with TREETOPS, a NASA developed software package which is used for the dynamic analysis of systems with a tree topology. Extensive use of both simulation tools will enable the design of robotic systems with enhanced acceleration control for use in the space manufacturing environment.

  7. Modeling strength data for CREW CHIEF

    NASA Technical Reports Server (NTRS)

    Mcdaniel, Joe W.

    1990-01-01

    The Air Force has developed CREW CHIEF, a computer-aided design (CAD) tool for simulating and evaluating aircraft maintenance to determine if the required activities are feasible. CREW CHIEF gives the designer the ability to simulate maintenance activities with respect to reach, accessibility, strength, hand tool operation, and materials handling. While developing CREW CHIEF, extensive research was performed to describe workers' strength capabilities for using hand tools and manually handling objects. More than 100,000 strength measures were collected and modeled for CREW CHIEF. These measures involved both male and female subjects in the 12 maintenance postures included in CREW CHIEF. The data collection and modeling efforts are described.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patnaik, P. C.

    The SIGMET mesoscale meteorology simulation code represents an extension, in terms of physical modelling detail and numerical approach, of the work of Anthes (1972) and Anthes and Warner (1974). The code utilizes a finite difference technique to solve the so-called primitive equations which describe transient flow in the atmosphere. The SIGMET modelling contains all of the physics required to simulate the time dependent meteorology of a region with description of both the planetary boundary layer and upper level flow as they are affected by synoptic forcing and complex terrain. The mathematical formulation of the SIGMET model and the various physical effects incorporated into it are summarized.

  9. Solar thermal vacuum tests of Magellan spacecraft

    NASA Technical Reports Server (NTRS)

    Neuman, James C.

    1990-01-01

    The Magellan solar/thermal/vacuum test involved a number of unique requirements and approaches. Because of the need to operate in orbit around Venus, the solar intensity requirement ranged up to 2.3 suns, or Earth-equivalent solar constants. Extensive modifications to the solar simulator portion of the test facility were required to achieve this solar intensity. Venus albedo and infrared emission were simulated using temperature-controlled movable louver panels that allowed the spacecraft to view either a selectable-temperature black heat source behind closed louvers or the chamber cold wall behind open louvers. The test conditions included widely varying solar intensities, multiple sun angles, alternate hardware configurations, steady state and transient cases, and cruise and orbital power profiles. Margin testing was also performed, wherein supplemental heaters were mounted to internal thermal blankets to verify spacecraft performance at higher than expected temperatures. The test was successful, uncovering some spacecraft anomalies and verifying the thermal design. The test support equipment experienced some anomalous behavior and a significant failure during the test.

  10. Poverty Simulations: Building Relationships among Extension, Schools, and the Community

    ERIC Educational Resources Information Center

    Franck, Karen L.; Barnes, Shelly; Harrison, Julie

    2016-01-01

    Poverty simulations can be effective experiential learning tools for educating community members about the impact of poverty on families. The project described here includes survey results from three simulations with community leaders and teachers. This project illustrated how such workshops can help Extension professionals extend their reach and…

  11. Real-time maritime scene simulation for ladar sensors

    NASA Astrophysics Data System (ADS)

    Christie, Chad L.; Gouthas, Efthimios; Swierkowski, Leszek; Williams, Owen M.

    2011-06-01

    Continuing interest exists in the development of cost-effective synthetic environments for testing Laser Detection and Ranging (ladar) sensors. In this paper we describe a PC-based system for real-time ladar scene simulation of ships and small boats in a dynamic maritime environment. In particular, we describe the techniques employed to generate range imagery accompanied by passive radiance imagery. Our ladar scene generation system is an evolutionary extension of the VIRSuite infrared scene simulation program and includes all previous features such as ocean wave simulation, the physically-realistic representation of boat and ship dynamics, wake generation and simulation of whitecaps, spray, wake trails and foam. A terrain simulation extension is also under development. In this paper we outline the development, capabilities and limitations of the VIRSuite extensions.

  12. Epithelial cancers and photon migration: Monte Carlo simulations and diffuse reflectance measurements

    NASA Astrophysics Data System (ADS)

    Tubiana, Jerome; Kass, Alex J.; Newman, Maya Y.; Levitz, David

    2015-07-01

    Detecting pre-cancer in epithelial tissues such as the cervix is a challenging task in low-resource settings. In an effort to achieve a low-cost cervical cancer screening and diagnostic method for use in low-resource settings, mobile colposcopes that use a smartphone as their engine have been developed. Designing image analysis software suited for this task requires proper modeling of light propagation from the abnormalities inside tissues to the camera of the smartphone. Different simulation methods have been developed in the past, by solving light diffusion equations or by running Monte Carlo simulations. Several algorithms exist for the latter, including MCML and the recently developed MCX. For imaging purposes, the observable parameter of interest is the reflectance profile of a tissue under a specific pattern of illumination and optical setup. Extensions of the MCX algorithm to simulate this observable under these conditions were developed. These extensions were validated against MCML and diffusion theory for the simple case of contact measurements, and reflectance profiles under colposcopy imaging geometry were also simulated. To validate this model, the diffuse reflectance profiles of various homogeneous tissue phantoms were measured with a spectrometer under several illumination and optical settings. The measured reflectance profiles showed a non-trivial deviation across the spectrum. Measurements from an added-absorber experiment on a series of phantoms showed that absorption of the dye scales linearly when fit to both MCX and diffusion models. More work is needed to integrate a pupil into the experiment.

  13. FermiLib v0.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MCCLEAN, JARROD; HANER, THOMAS; STEIGER, DAMIAN

    FermiLib is an open source software package designed to facilitate the development and testing of algorithms for simulations of fermionic systems on quantum computers. Fermionic simulations represent an important application of early quantum devices, with many potential high-value targets such as quantum chemistry for the development of new catalysts. This software strives to provide a link between the required domain expertise in specific fermionic applications and quantum computing, to enable more users to directly interface with, and develop for, these applications. It is an extensible Python library designed to interface with the high-performance quantum simulator ProjectQ, as well as application-specific software such as PSI4 from the domain of quantum chemistry. Such software is key to enabling effective user facilities in quantum computation research.
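
    A minimal sketch of the intended workflow, assuming the operator and transform interfaces (FermionOperator, jordan_wigner) of early FermiLib releases; exact module paths and output formatting may differ from the released package.

        from fermilib.ops import FermionOperator
        from fermilib.transforms import jordan_wigner

        # A two-site hopping term: a^dagger_0 a_1 + a^dagger_1 a_0.
        hopping = FermionOperator('0^ 1') + FermionOperator('1^ 0')

        # Map the fermionic operator to qubit (Pauli) operators so it can be
        # handed to a quantum simulator backend such as ProjectQ.
        print(jordan_wigner(hopping))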

  14. Tool for Rapid Analysis of Monte Carlo Simulations

    NASA Technical Reports Server (NTRS)

    Restrepo, Carolina; McCall, Kurt E.; Hurtado, John E.

    2013-01-01

    Designing a spacecraft, or any other complex engineering system, requires extensive simulation and analysis work. Oftentimes, the large amounts of simulation data generated are very difficult and time consuming to analyze, with the added risk of overlooking potentially critical problems in the design. The authors have developed a generic data analysis tool that can quickly sort through large data sets and point an analyst to the areas in the data set that cause specific types of failures. The first version of this tool was a serial code and the current version is a parallel code, which has greatly increased the analysis capabilities. This paper describes the new implementation of this analysis tool on a graphical processing unit, and presents analysis results for NASA's Orion Monte Carlo data to demonstrate its capabilities.
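
    The kind of triage such a tool performs can be illustrated with a toy metric: rank dispersed inputs by how strongly they separate failed runs from passing ones. This Python sketch is a hypothetical stand-in for the tool's actual analysis and GPU implementation; the metric and names are illustrative.

        import numpy as np

        def rank_failure_drivers(inputs, failed, names):
            # inputs: (n_runs, n_params) array of dispersed Monte Carlo inputs
            # failed: boolean array marking runs that violated a failure criterion
            mu_f = inputs[failed].mean(axis=0)
            mu_p = inputs[~failed].mean(axis=0)
            score = np.abs(mu_f - mu_p) / (inputs.std(axis=0) + 1e-12)
            order = np.argsort(score)[::-1]       # most discriminating first
            return [(names[i], float(score[i])) for i in order]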

  15. Simulation of nanoparticle-mediated near-infrared thermal therapy using GATE

    PubMed Central

    Cuplov, Vesna; Pain, Frédéric; Jan, Sébastien

    2017-01-01

    Application of nanotechnology for biomedicine in cancer therapy allows for direct delivery of anticancer agents to tumors. An example of such therapies is the nanoparticle-mediated near-infrared hyperthermia treatment. In order to investigate the influence of nanoparticle properties on the spatial distribution of heat in the tumor and healthy tissues, accurate simulations are required. The Geant4 Application for Emission Tomography (GATE) open-source simulation platform, based on the Geant4 toolkit, is widely used by the research community involved in molecular imaging, radiotherapy and optical imaging. We present an extension of GATE that can model nanoparticle-mediated hyperthermal therapy as well as simple heat diffusion in biological tissues. This new feature of GATE combined with optical imaging allows for the simulation of a theranostic scenario in which the patient is injected with theranostic nanosystems that can simultaneously deliver therapeutic (i.e. hyperthermia therapy) and imaging agents (i.e. fluorescence imaging). PMID:28663855

  16. Detached Eddy Simulation Results for a Space Launch System Configuration at Liftoff Conditions and Comparison with Experiment

    NASA Technical Reports Server (NTRS)

    Krist, Steven E.; Ghaffari, Farhad

    2015-01-01

    Computational simulations for a Space Launch System configuration at liftoff conditions for incidence angles from 0 to 90 degrees were conducted in order to generate integrated force and moment data and longitudinal lineloads. While the integrated force and moment coefficients can be obtained from wind tunnel testing, computational analyses are indispensable in obtaining the extensive amount of surface information required to generate proper lineloads. However, beyond an incidence angle of about 15 degrees, the effects of massive flow separation on the leeward pressure field are not well captured with state of the art Reynolds Averaged Navier-Stokes methods, necessitating the employment of a Detached Eddy Simulation method. Results from these simulations are compared to the liftoff force and moment database and surface pressure data derived from a test in the NASA Langley 14- by 22-Foot Subsonic Wind Tunnel.

  17. Lightweight Object Oriented Structure analysis: Tools for building Tools to Analyze Molecular Dynamics Simulations

    PubMed Central

    Romo, Tod D.; Leioatts, Nicholas; Grossfield, Alan

    2014-01-01

    LOOS (Lightweight Object-Oriented Structure-analysis) is a C++ library designed to facilitate making novel tools for analyzing molecular dynamics simulations by abstracting out the repetitive tasks, allowing developers to focus on the scientifically relevant part of the problem. LOOS supports input using the native file formats of most common biomolecular simulation packages, including CHARMM, NAMD, Amber, Tinker, and Gromacs. A dynamic atom selection language based on the C expression syntax is included and is easily accessible to the tool-writer. In addition, LOOS is bundled with over 120 pre-built tools, including suites of tools for analyzing simulation convergence, 3D histograms, and elastic network models. Through modern C++ design, LOOS is both simple to develop with (requiring knowledge of only 4 core classes and a few utility functions) and is easily extensible. A python interface to the core classes is also provided, further facilitating tool development. PMID:25327784

  18. Lightweight object oriented structure analysis: tools for building tools to analyze molecular dynamics simulations.

    PubMed

    Romo, Tod D; Leioatts, Nicholas; Grossfield, Alan

    2014-12-15

    LOOS (Lightweight Object Oriented Structure-analysis) is a C++ library designed to facilitate making novel tools for analyzing molecular dynamics simulations by abstracting out the repetitive tasks, allowing developers to focus on the scientifically relevant part of the problem. LOOS supports input using the native file formats of most common biomolecular simulation packages, including CHARMM, NAMD, Amber, Tinker, and Gromacs. A dynamic atom selection language based on the C expression syntax is included and is easily accessible to the tool-writer. In addition, LOOS is bundled with over 140 prebuilt tools, including suites of tools for analyzing simulation convergence, three-dimensional histograms, and elastic network models. Through modern C++ design, LOOS is both simple to develop with (requiring knowledge of only four core classes and a few utility functions) and is easily extensible. A python interface to the core classes is also provided, further facilitating tool development. © 2014 Wiley Periodicals, Inc.
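
    A short sketch of the bundled Python interface described in these two records, assuming the PyLOOS entry points (createSystem, selectAtoms, pyloos.Trajectory); the file names are placeholders.

        import loos
        import loos.pyloos

        model = loos.createSystem('model.pdb')             # topology/coordinates
        traj = loos.pyloos.Trajectory('sim.dcd', model)    # iterable trajectory
        calphas = loos.selectAtoms(model, 'name == "CA"')  # C-like selection language

        for frame in traj:            # coordinates update in place each frame
            print(calphas.centroid()) # per-frame centroid of the selection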

  19. Development and mechanical properties of construction materials from lunar simulants

    NASA Technical Reports Server (NTRS)

    Desai, Chandra S.

    1990-01-01

    The development of construction materials such as concrete from lunar soils without the use of water requires a different methodology than that used for conventional terrestrial concrete. Currently, this research involves two aspects: (1) liquefaction of lunar simulants with various additives in a furnace so as to produce a construction material like an intermediate ceramic; and (2) cyclic loading of simulant with different initial vacuums and densities with respect to the theoretical maximum densities (TMD). In both cases, bending, triaxial compression, extension, and hydrostatic tests will be performed to define the stress-strain strength response of the resulting materials. In the case of the intermediate ceramic, bending and available multiaxial test devices will be used, while for the compacted case, tests will be performed directly in the new device. The tests will be performed by simulating in situ confining conditions. A preliminary review of high-purity metal is also conducted.

  20. Analytic calculation of radio emission from parametrized extensive air showers: A tool to extract shower parameters

    NASA Astrophysics Data System (ADS)

    Scholten, O.; Trinh, T. N. G.; de Vries, K. D.; Hare, B. M.

    2018-01-01

    The radio intensity and polarization footprint of a cosmic-ray induced extensive air shower is determined by the time-dependent structure of the current distribution residing in the plasma cloud at the shower front. In turn, the time dependence of the integrated charge-current distribution in the plasma cloud, the longitudinal shower structure, is determined by interesting physics which one would like to extract, such as the location and multiplicity of the primary cosmic-ray collision or the values of electric fields in the atmosphere during thunderstorms. To extract the structure of a shower from its footprint requires solving a complicated inverse problem. For this purpose we have developed a code that semianalytically calculates the radio footprint of an extensive air shower given an arbitrary longitudinal structure. This code can be used in an optimization procedure to extract the optimal longitudinal shower structure given a radio footprint. On the basis of air-shower universality we propose a simple parametrization of the structure of the plasma cloud. This parametrization is based on the results of Monte Carlo shower simulations. Deriving the parametrization also teaches which aspects of the plasma cloud are important for understanding the features seen in the radio-emission footprint. The calculated radio footprints are compared with microscopic CoREAS simulations.

  1. Analysis of hybrid electric/thermofluidic inputs for wet shape memory alloy actuators

    NASA Astrophysics Data System (ADS)

    Flemming, Leslie; Mascaro, Stephen

    2013-01-01

    A wet shape memory alloy (SMA) actuator is characterized by an SMA wire embedded within a compliant fluid-filled tube. Heating and cooling of the SMA wire produces a linear contraction and extension of the wire. Thermal energy can be transferred to and from the wire using combinations of resistive heating and free/forced convection. This paper analyzes the speed and efficiency of a simulated wet SMA actuator using a variety of control strategies involving different combinations of electrical and thermofluidic inputs. A computational fluid dynamics (CFD) model is used in conjunction with a temperature-strain model of the SMA wire to simulate the thermal response of the wire and compute strains, contraction/extension times and efficiency. The simulations produce cycle rates of up to 5 Hz for electrical heating and fluidic cooling, and up to 2 Hz for fluidic heating and cooling. The simulated results demonstrate efficiencies up to 0.5% for electric heating and up to 0.2% for fluidic heating. Using both electric and fluidic inputs concurrently improves the speed and efficiency of the actuator and allows for the actuator to remain contracted without continually delivering energy to the actuator, because of the thermal capacitance of the hot fluid. The characterized speeds and efficiencies are key requirements for implementing broader research efforts involving the intelligent control of electric and thermofluidic networks to optimize the speed and efficiency of wet actuator arrays.
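
    A lumped-capacitance caricature of the actuator's thermal balance makes the electric/fluidic trade concrete: resistive input I^2 R competes with convective exchange hA(T - T_fluid). This Python sketch uses illustrative parameter values; the paper's CFD and temperature-strain models resolve far more physics.

        def wire_temperature(current_a, h, t_fluid_c, t_end_s, dt=1e-3,
                             r_ohm=10.0, mass_kg=1e-4, c_j=450.0,
                             area_m2=1e-4, t0_c=20.0):
            # Explicit-Euler lumped energy balance for the SMA wire:
            #   m c dT/dt = I^2 R - h A (T - T_fluid)
            t = t0_c
            for _ in range(int(t_end_s / dt)):
                t += (current_a**2 * r_ohm
                      - h * area_m2 * (t - t_fluid_c)) / (mass_kg * c_j) * dt
            return t

        # Electric heating in still fluid vs. heating by hot forced flow:
        print(wire_temperature(1.0, h=50.0, t_fluid_c=20.0, t_end_s=2.0))
        print(wire_temperature(0.0, h=500.0, t_fluid_c=80.0, t_end_s=2.0))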

  2. The perspectives of Australian midwifery academics on barriers and enablers for simulation in midwifery education in Australia: a focus group study.

    PubMed

    Fox-Young, Stephanie; Brady, Susannah; Brealey, Wendy; Cooper, Simon; McKenna, Lisa; Hall, Helen; Bogossian, Fiona

    2012-08-01

    To describe Australian midwifery academics' perceptions of the current barriers and enablers for simulation in midwifery education in Australia and the potential and resources required for simulation to be increased. A series of 11 focus groups/interviews were held in all states and territories of Australia with 46 participating academics nominated by their heads of discipline from universities across the country. Three themes were identified relating to barriers to the extension of the use of simulated learning environments (SLEs) ('there are things that you can't simulate'; 'not having the appropriate resources'; and professional accreditation requirements) and three themes were identified to facilitate SLE use ('for the bits that you're not likely to see very often in clinical'; '[for students] to figure something out before [they] get to go out there and do it on the real person'; and good resources and support). Although barriers exist to the adoption and spread of simulated learning in midwifery, there is a long history of simulation and a great willingness to enhance its use among midwifery academics in Australia. While some aspects of midwifery practice may be impossible to simulate, more collaboration and sharing in the development and use of simulation scenarios, equipment, space and other physical and personnel resources would make the uptake of simulation in midwifery education more widespread. Students would therefore be exposed to the best available preparation for clinical practice, contributing to the safety and quality of midwifery care.

  3. Mechanical Analysis of W78/88-1 Life Extension Program Warhead Design Options

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spencer, Nathan

    2014-09-01

    Life Extension Program (LEP) is a program to repair/replace components of nuclear weapons to ensure the ability to meet military requirements. The W78/88-1 LEP encompasses the modernization of two major nuclear weapon reentry systems into an interoperable warhead. Several design concepts exist to provide different options for robust safety and security themes, maximum non-nuclear commonality, and cost. Simulation is one capability used to evaluate the mechanical performance of the designs in various operational environments, plan for system and component qualification efforts, and provide insight into the survivability of the warhead in environments that are not currently testable. The simulation efforts use several Sandia-developed tools through the Advanced Simulation and Computing program, including Cubit for mesh generation, the DART Model Manager, SIERRA codes running on the HPC TLCC2 platforms, DAKOTA, and ParaView. Several programmatic objectives were met using the simulation capability including: (1) providing early environmental specification estimates that may be used by component designers to understand the severity of the loads their components will need to survive, (2) providing guidance for load levels and configurations for subassembly tests intended to represent operational environments, and (3) recommending design options including modified geometry and material properties. These objectives were accomplished through regular interactions with component, system, and test engineers while using the laboratory's computational infrastructure to effectively perform ensembles of simulations. Because NNSA has decided to defer the LEP program, simulation results are being documented and models are being archived for future reference. However, some advanced and exploratory efforts will continue to mature key technologies, using the results from these and ongoing simulations for design insights, test planning, and model validation.

  4. The Generic Resolution Advisor and Conflict Evaluator (GRACE) for Detect-And-Avoid Systems

    NASA Technical Reports Server (NTRS)

    Abramson, Michael; Refai, Mohamad; Santiago, Confesor

    2017-01-01

    Java Architecture for Detect-And-Avoid (DAA) Extensibility and Modeling (JADEM) was developed at NASA Ames Research Center as a research and modeling tool for Unmanned Aircraft Systems (UAS) Integration in the National Airspace System (NAS). UAS will be required to have DAA systems in order to fulfill the regulatory requirement to remain "well clear" of other traffic. JADEM supports research on technological requirements and Minimum Operational Performance Standards (MOPS) for UAS DAA systems by providing a flexible and extensible software platform that includes models and algorithms for all major DAA functions. This paper describes one of these algorithms, the Generic Resolution Advisor and Conflict Evaluator (GRACE). GRACE supports two core DAA functions: threat evaluation and guidance. GRACE is generic in the sense that it is designed to work with any aircraft or sensor type (both cooperative and non-cooperative), and to be used in various applications and DAA guidance concepts, thus supporting evolving MOPS requirements and research. GRACE combines flexibility, robustness, and computational efficiency. It has modest memory requirements and can handle multiple cooperative and noncooperative intruders. GRACE has been used as a core JADEM component in several real-time and fast-time experiments, including human-in-the-loop simulations and live flight tests.
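
    As a generic illustration of the threat-evaluation half of such a function (not GRACE's actual algorithm, whose metrics and thresholds are defined by the evolving DAA MOPS), a closest-point-of-approach test against a single intruder might look like this Python sketch:

        import numpy as np

        def cpa_threat(rel_pos_m, rel_vel_mps, dist_thresh_m, horizon_s):
            # Relative position/velocity of one intruder with respect to ownship.
            v2 = float(np.dot(rel_vel_mps, rel_vel_mps))
            t_cpa = 0.0 if v2 == 0.0 else max(0.0, -np.dot(rel_pos_m, rel_vel_mps) / v2)
            d_cpa = float(np.linalg.norm(rel_pos_m + rel_vel_mps * t_cpa))
            # Threat if the predicted miss distance violates the separation
            # threshold within the look-ahead horizon.
            return d_cpa < dist_thresh_m and t_cpa <= horizon_s, t_cpa, d_cpa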

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    Digital instrumentation and control system techniques are being introduced in newly constructed research reactors and in life extensions of older research reactors. Digital systems are easy to change and optimize, but a validated process for them is required. Also, to reduce project risk and cost, we have to make sure that the configuration and control functions are right before the commissioning phase of a research reactor. For this purpose, simulators have been widely used in developing control systems in the automotive and aerospace industries. In the literature, however, very few examples can be found of testing research reactor control systems with a simulator. Therefore, this paper proposes a simulation platform to verify the performance of the RRS (Reactor Regulating System) for a research reactor. This simulation platform consists of the reactor simulation model and the interface module. It was applied to the I&C upgrade project of a TRIGA reactor, and many problems in the RRS configuration were found and solved. The work proved that dynamic performance testing based on a simulator enables significant time savings and improves economics and quality for the RRS in the system test phase.

  6. Quantum analogue computing.

    PubMed

    Kendon, Vivien M; Nemoto, Kae; Munro, William J

    2010-08-13

    We briefly review what a quantum computer is, what it promises to do for us and why it is so hard to build one. Among the first applications anticipated to bear fruit is the quantum simulation of quantum systems. While most quantum computation is an extension of classical digital computation, quantum simulation differs fundamentally in how the data are encoded in the quantum computer. To perform a quantum simulation, the Hilbert space of the system to be simulated is mapped directly onto the Hilbert space of the (logical) qubits in the quantum computer. This type of direct correspondence is how data are encoded in a classical analogue computer. There is no binary encoding, and increasing precision becomes exponentially costly: an extra bit of precision doubles the size of the computer. This has important consequences for both the precision and error-correction requirements of quantum simulation, and significant open questions remain about its practicality. It also means that the quantum version of analogue computers, continuous-variable quantum computers, becomes an equally efficient architecture for quantum simulation. Lessons from past use of classical analogue computers can help us to build better quantum simulators in future.
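
    Restating the abstract's scaling claim explicitly: the simulated system's Hilbert space is mapped directly onto that of the qubits,

    \[
    \mathcal{H}_{\mathrm{system}} \;\hookrightarrow\; (\mathbb{C}^{2})^{\otimes n},
    \]

    and because there is no binary encoding, each additional bit of precision doubles the machine, \(\mathrm{size}(b+1) = 2\,\mathrm{size}(b)\), so \(\mathrm{size}(b) \propto 2^{b}\); a digital register, by contrast, gains a bit of precision at the cost of a single extra qubit.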

  7. Building the ECON extension: Functionality and lessons learned

    Treesearch

    Fred C. Martin

    2008-01-01

    The functionality of the ECON extension to FVS is described with emphasis on the ability to dynamically interact with all elements of the FVS simulation process. Like other extensions, ECON is fully integrated within FVS. This integration allows: (1) analysis of multiple alternative tree-removal actions within a single simulation without altering “normal” stand...

  8. LANES - LOCAL AREA NETWORK EXTENSIBLE SIMULATOR

    NASA Technical Reports Server (NTRS)

    Gibson, J.

    1994-01-01

    The Local Area Network Extensible Simulator (LANES) provides a method for simulating the performance of high speed local area network (LAN) technology. LANES was developed as a design and analysis tool for networking on board the Space Station. The load, network, link and physical layers of a layered network architecture are all modeled. LANES models two different lower-layer protocols, the Fiber Distributed Data Interface (FDDI) and the Star*Bus. The load and network layers are included in the model as a means of introducing upper-layer processing delays associated with message transmission; they do not model any particular protocols. FDDI is an American National Standard and an International Organization for Standardization (ISO) draft standard for a 100 megabit-per-second fiber-optic token ring. Specifications for the LANES model of FDDI are taken from the Draft Proposed American National Standard FDDI Token Ring Media Access Control (MAC), document number X3T9.5/83-16 Rev. 10, February 28, 1986. This is a mature document describing the FDDI media-access-control protocol. Star*Bus, also known as the Fiber Optic Demonstration System, is a protocol for a 100 megabit-per-second fiber-optic star-topology LAN. This protocol, along with a hardware prototype, was developed by Sperry Corporation under contract to NASA Goddard Space Flight Center as a candidate LAN protocol for the Space Station. LANES can be used to analyze performance of a networking system based on either FDDI or Star*Bus under a variety of loading conditions. Delays due to upper-layer processing can easily be nullified, allowing analysis of FDDI or Star*Bus as stand-alone protocols. LANES is a parameter-driven simulation; it provides considerable flexibility in specifying both protocol and run-time parameters. Code has been optimized for fast execution and detailed tracing facilities have been included. LANES was written in FORTRAN 77 for implementation on a DEC VAX under VMS 4.6. It consists of two programs, a simulation program and a user-interface program. The simulation program requires the SLAM II simulation library from Pritsker and Associates, W. Lafayette, IN; the user interface is implemented using the Ingres database manager from Relational Technology, Inc. Information about running the simulation program without the user-interface program is contained in the documentation. The memory requirement is 129,024 bytes. LANES was developed in 1988.

  9. Anticipation of the landing shock phenomenon in flight simulation

    NASA Technical Reports Server (NTRS)

    Mcfarland, Richard E.

    1987-01-01

    An aircraft landing may be described as a controlled crash because a runway surface is intercepted. In a simulation model the transition from aerodynamic flight to weight on wheels involves a single computational cycle during which stiff differential equations are activated; with a significant probability these initial conditions are unrealistic. This occurs because of the finite cycle time, during which large restorative forces will accompany unrealistic initial oleo compressions. This problem was recognized a few years ago at Ames Research Center during simulation studies of a supersonic transport. The mathematical model of this vehicle severely taxed computational resources, and required a large cycle time. The ground strike problem was solved by a described technique called anticipation equations. This extensively used technique has not been previously reported. The technique of anticipating a significant event is a useful tool in the general field of discrete flight simulation. For the differential equations representing a landing gear model stiffness, rate of interception and cycle time may combine to produce an unrealistic simulation of the continuum.
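
    The anticipation idea can be sketched as a guard on the integration step: predict whether the next cycle would drive the gear below the runway, and if so split the cycle at the predicted contact point. This Python sketch is a simplified illustration with hypothetical aero_step/gear_step integrators, not the Ames implementation.

        def advance_cycle(state, dt, aero_step, gear_step):
            # state.alt: gear height above runway (m); state.sink: sink rate (+down)
            alt_pred = state.alt - state.sink * dt       # one-cycle-ahead prediction
            if state.alt > 0.0 and alt_pred < 0.0:
                f = state.alt / (state.alt - alt_pred)   # fraction of dt to contact
                state = aero_step(state, f * dt)         # fly up to the runway
                return gear_step(state, (1.0 - f) * dt)  # oleo starts from ~zero
                                                         # compression, realistically
            if state.alt <= 0.0:
                return gear_step(state, dt)              # on gear: stiff dynamics
            return aero_step(state, dt)                  # airborne: aero model only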

  10. An Investigation of the Impact of Aerodynamic Model Fidelity on Close-In Combat Effectiveness Prediction in Piloted Simulation

    NASA Technical Reports Server (NTRS)

    Persing, T. Ray; Bellish, Christine A.; Brandon, Jay; Kenney, P. Sean; Carzoo, Susan; Buttrill, Catherine; Guenther, Arlene

    2005-01-01

    Several aircraft airframe modeling approaches are currently being used in the DoD community for acquisition, threat evaluation, training, and other purposes. To date there has been no clear empirical study of the impact of airframe simulation fidelity on piloted real-time aircraft simulation study results, or when use of a particular level of fidelity is indicated. This paper documents a series of piloted simulation studies using three different levels of airframe model fidelity. This study was conducted using the NASA Langley Differential Maneuvering Simulator. Evaluations were conducted with three pilots for scenarios requiring extensive maneuvering of the airplanes during air combat. In many cases, a low-fidelity modified point-mass model may be sufficient to evaluate the combat effectiveness of the aircraft. However, in cases where high angle-of-attack flying qualities and aerodynamic performance are a factor or when precision tracking ability of the aircraft must be represented, use of high-fidelity models is indicated.

  11. An Efficient Next Hop Selection Algorithm for Multi-Hop Body Area Networks

    PubMed Central

    Ayatollahitafti, Vahid; Ngadi, Md Asri; Mohamad Sharif, Johan bin; Abdullahi, Mohammed

    2016-01-01

    Body Area Networks (BANs) consist of various sensors which gather patients' vital signs and deliver them to doctors. One of the most significant challenges faced is the design of an energy-efficient next-hop selection algorithm that satisfies Quality of Service (QoS) requirements for different healthcare applications. In this paper, a novel efficient next-hop selection algorithm for multi-hop BANs is proposed. This algorithm uses the minimum hop count and a link cost function jointly in each node to choose the best next-hop node. The link cost function includes the residual energy, free buffer size, and link reliability of the neighboring nodes, and is used to balance energy consumption and to satisfy QoS requirements in terms of end-to-end delay and reliability. Extensive simulation experiments were performed to evaluate the efficiency of the proposed algorithm using the NS-2 simulator. Simulation results show that our proposed algorithm provides significant improvement in terms of energy consumption, number of packets forwarded, end-to-end delay, and packet delivery ratio compared to the existing routing protocol. PMID:26771586
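
    The selection rule described can be sketched directly in Python: restrict to neighbors with minimum hop count, then minimize a link cost built from residual energy, free buffer, and link reliability. The weighted-inverse cost form and the weights are illustrative assumptions, not the paper's exact function.

        def next_hop(neighbors, w_energy=0.4, w_buffer=0.3, w_rel=0.3):
            # Each neighbor: dict with 'hops' to sink and normalized (0..1]
            # 'energy' (residual), 'buffer' (free fraction), 'reliability'.
            min_hops = min(n['hops'] for n in neighbors)
            shortest = [n for n in neighbors if n['hops'] == min_hops]
            cost = lambda n: (w_energy / n['energy']
                              + w_buffer / n['buffer']
                              + w_rel / n['reliability'])
            return min(shortest, key=cost)  # cheapest link among shortest paths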

  12. A local quasicontinuum method for 3D multilattice crystalline materials: Application to shape-memory alloys

    NASA Astrophysics Data System (ADS)

    Sorkin, V.; Elliott, R. S.; Tadmor, E. B.

    2014-07-01

    The quasicontinuum (QC) method, in its local (continuum) limit, is applied to materials with a multilattice crystal structure. Cauchy-Born (CB) kinematics, which accounts for the shifts of the crystal motif, is used to relate atomic motions to continuum deformation gradients. To avoid failures of CB kinematics, QC is augmented with a phonon stability analysis that detects lattice period extensions and identifies the minimum required periodic cell size. This approach is referred to as Cascading Cauchy-Born kinematics (CCB). In this paper, the method is described and developed. It is then used, along with an effective interaction potential (EIP) model for shape-memory alloys, to simulate the shape-memory effect and pseudoelasticity in a finite specimen. The results of these simulations show that (i) the CCB methodology is an essential tool that is required in order for QC-type simulations to correctly capture the first-order phase transitions responsible for these material behaviors, and (ii) that the EIP model adopted in this work coupled with the QC/CCB methodology is capable of predicting the characteristic behavior found in shape-memory alloys.
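
    In multilattice Cauchy-Born kinematics the continuum deformation gradient is supplemented by internal shifts of the crystal motif. A standard statement consistent with the abstract: atom alpha of the motif in the cell at reference position X deforms as

    \[
    \mathbf{y}_{\alpha}(\mathbf{X}) \;=\; \mathbf{F}\,\mathbf{X} + \mathbf{s}_{\alpha},
    \qquad
    \{\mathbf{s}_{\alpha}\} = \arg\min_{\{\mathbf{s}\}} W\!\left(\mathbf{F}, \{\mathbf{s}\}\right),
    \]

    where W is the energy density of the periodic cell at fixed F. The phonon stability analysis in CCB detects when this minimization becomes unstable and the periodic cell must be enlarged (a lattice period extension).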

  13. A service life extension (SLEP) approach to operating aging aircraft beyond their original design lives

    NASA Astrophysics Data System (ADS)

    Pentz, Alan Carter

    With today's uncertain funding climate (including sequestration and continuing budget resolutions), decision makers face severe budgetary challenges to maintain dominance through all aspects of the Department of Defense (DoD). To meet war-fighting capabilities, the DoD continues to extend aircraft programs beyond their design service lives by up to ten years, and occasionally much more. The budget requires a new approach to traditional extension strategies (i.e., reuse, reset, and reclamation) for structural hardware. While extending service life without careful controls can present a safety concern, future operations planning does not consider how much risk is present when operating within sound structural principles. Traditional structural hardware extension methods drive increased costs. Decision makers often overlook the inherent damage tolerance and fatigue capability of structural components and rely on simple time- and flight-based cycle accumulation when determining aircraft retirement lives. This study demonstrates that decision makers should consider risk in addition to the current extension strategies. Through an evaluation of eight military aircraft programs and the application and simulation of F-18 turbine engine usage data, this dissertation shows that insight into actual aircraft mission data, consideration of fatigue capability, and service extension length are key factors to consider. Aircraft structural components, as well as many critical safety components and system designs, have a predefined level of conservatism and inherent damage tolerance. The methods applied in this study would apply to extensions of other critical structures such as bridges. Understanding how much damage tolerance is built into the design compared to the original design usage requirements presents the opportunity to manage systems based on risk. The study presents the sensitivity of these factors and recommends avenues for further research.

  14. Chapter 2: Fire and Fuels Extension: Model description

    Treesearch

    Sarah J. Beukema; Elizabeth D. Reinhardt; Julee A. Greenough; Donald C. E. Robinson; Werner A. Kurz

    2003-01-01

    The Fire and Fuels Extension to the Forest Vegetation Simulator is a model that simulates fuel dynamics and potential fire behavior over time, in the context of stand development and management. Existing models are used to represent forest stand development (the Forest Vegetation Simulator, Wykoff and others 1982), fire behavior (Rothermel 1972, Van Wagner 1977, and...

  15. Extending the Binomial Checkpointing Technique for Resilience

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walther, Andrea; Narayanan, Sri Hari Krishna

    In terms of computing time, adjoint methods offer a very attractive alternative to compute gradient information, required, e.g., for optimization purposes. However, together with this very favorable temporal complexity result comes a memory requirement that is in essence proportional to the operation count of the underlying function, e.g., if algorithmic differentiation is used to provide the adjoints. For this reason, checkpointing approaches in many variants have become popular. This paper analyzes an extension of the so-called binomial approach to cover also possible failures of the computing systems. Such a measure of precaution is of special interest for massively parallel simulations and adjoint calculations where the mean time between failures of the large-scale computing system is smaller than the time needed to complete the calculation of the adjoint information. We describe the extensions of standard checkpointing approaches required for such resilience, provide a corresponding implementation, and discuss numerical results.
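
    For orientation, the classical result that this work extends: with s checkpoints and at most t forward sweeps, a chain of up to C(s+t, s) steps can be reversed (Griewank's binomial/revolve bound). A quick check in Python; the resilience extension in the paper modifies the schedule itself, which is not reproduced here.

        from math import comb

        def max_reversible_steps(snaps, sweeps):
            # Binomial checkpointing bound: length of the longest evaluation
            # chain whose adjoint can be computed with `snaps` checkpoints
            # and at most `sweeps` repeated forward integrations.
            return comb(snaps + sweeps, snaps)

        print(max_reversible_steps(10, 3))   # 286 steps
        print(max_reversible_steps(10, 5))   # 3003 steps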

  16. Effect of tidal fluctuations on transient dispersion of simulated contaminant concentrations in coastal aquifers

    USGS Publications Warehouse

    La Licata, Ivana; Langevin, Christian D.; Dausman, Alyssa M.; Alberti, Luca

    2011-01-01

    Variable-density groundwater models require extensive computational resources, particularly for simulations representing short-term hydrologic variability such as tidal fluctuations. Saltwater-intrusion models usually neglect tidal fluctuations and this may introduce errors in simulated concentrations. The effects of tides on simulated concentrations in a coastal aquifer were assessed. Three analyses are reported: in the first, simulations with and without tides were compared for three different dispersivity values. Tides do not significantly affect the transfer of a hypothetical contaminant into the ocean; however, the concentration difference between tidal and non-tidal simulations could be as much as 15%. In the second analysis, the dispersivity value for the model without tides was increased in a zone near the ocean boundary. By slightly increasing dispersivity in this zone, the maximum concentration difference between the simulations with and without tides was reduced to as low as 7%. In the last analysis, an apparent dispersivity value was calculated for each model cell using the simulated velocity variations from the model with tides. Use of apparent dispersivity values in models with a constant ocean boundary seems to provide a reasonable approach for approximating tidal effects in simulations where explicit representation of tidal fluctuations is not feasible.

  17. Effect of tidal fluctuations on transient dispersion of simulated contaminant concentrations in coastal aquifers

    USGS Publications Warehouse

    La Licata, Ivana; Langevin, Christian D.; Dausman, Alyssa M.; Alberti, Luca

    2013-01-01

    Variable-density groundwater models require extensive computational resources, particularly for simulations representing short-term hydrologic variability such as tidal fluctuations. Saltwater-intrusion models usually neglect tidal fluctuations and this may introduce errors in simulated concentrations. The effects of tides on simulated concentrations in a coastal aquifer were assessed. Three analyses are reported: in the first, simulations with and without tides were compared for three different dispersivity values. Tides do not significantly affect the transfer of a hypothetical contaminant into the ocean; however, the concentration difference between tidal and non-tidal simulations could be as much as 15%. In the second analysis, the dispersivity value for the model without tides was increased in a zone near the ocean boundary. By slightly increasing dispersivity in this zone, the maximum concentration difference between the simulations with and without tides was reduced to as low as 7%. In the last analysis, an apparent dispersivity value was calculated for each model cell using the simulated velocity variations from the model with tides. Use of apparent dispersivity values in models with a constant ocean boundary seems to provide a reasonable approach for approximating tidal effects in simulations where explicit representation of tidal fluctuations is not feasible.

  18. Pit formation observed in a multilayer dielectric coating as a result of simulated space environmental exposure

    NASA Astrophysics Data System (ADS)

    Fuqua, Peter D.; Presser, Nathan; Barrie, James D.; Meshishnek, Michael J.; Coleman, Dianne J.

    2002-06-01

    Certain spaceborne telescope designs require that dielectric-coated lenses be exposed to the energetic electrons and protons associated with the space environment. Test coupons that were exposed to a simulated space environment showed extensive pitting as a result of dielectric breakdown. A typical pit was 50-100 µm across at the surface and extended to the substrate material, in which a 10-µm-diameter melt region was found. Pitting was not observed on similar samples that had also been overcoated with a transparent conductive thin film. Measurement of the bidirectional reflectance distribution function showed that pitting caused a fivefold to tenfold increase in the scattering of visible light.

  18. Simulation of router action on a lathe to test the cutting tool performance in edge-trimming of graphite/epoxy composite

    NASA Astrophysics Data System (ADS)

    Ramulu, M.; Rogers, E.

    1994-04-01

    The predominant machining application for graphite/epoxy composite materials in the aerospace industry is peripheral trimming. The computer numerically controlled (CNC) high-speed routers required for edge-trimming work are generally scheduled for production work in industry and are not available for extensive cutter testing. Therefore, an experimental method of simulating the conditions of peripheral trimming using a lathe is developed in this paper. The validity of the test technique is demonstrated by conducting carbide tool wear tests under dry cutting conditions. The experimental results are analyzed to characterize the wear behavior of carbide cutting tools in machining these composite materials.

  19. Numerical aerodynamic simulation facility. Preliminary study extension

    NASA Technical Reports Server (NTRS)

    1978-01-01

    The production of an optimized design of key elements of the candidate facility was the primary objective of this report. This was accomplished by effort in the following tasks: (1) to further develop, optimize, and document the functional description of the custom hardware; (2) to delineate trade-off areas between performance, reliability, availability, serviceability, and programmability; (3) to develop metrics and models for validation of the candidate system's performance; (4) to conduct a functional simulation of the system design; (5) to perform a reliability analysis of the system design; and (6) to develop the software specifications, including a user-level high-level programming language and a correspondence between the programming language and the instruction set, and to outline the operating system requirements.

  1. Streaks and vortices in near-wall turbulence.

    PubMed

    Chernyshenko, S I; Baig, M F

    2005-05-15

    This paper presents evidence that organization of wall-normal motions plays almost no role in the creation of streaks. This evidence consists of a theory of streak generation that does not require the existence of organized vortices; extensive quantitative comparisons between the theory and direct numerical simulations, including examples of large variation in the average spacing of the streaks of different scalars simultaneously present in the flow; and an example of scalar streaks in an artificially created, purely random flow.

  2. Dual-spin attitude control for outer planet missions

    NASA Technical Reports Server (NTRS)

    Ward, R. S.; Tauke, G. J.

    1977-01-01

    The applicability of dual-spin technology to a Jupiter orbiter with probe mission was investigated. Basic mission and system level attitude control requirements were established and preliminary mechanization and control concepts developed. A comprehensive 18-degree-of-freedom digital simulation was utilized extensively to establish control laws, study dynamic interactions, and determine key sensitivities. Fundamental system/subsystem constraints were identified, and the applicability of dual-spin technology to a Jupiter orbiter with probe mission was validated.

  3. Research in digital adaptive flight controllers

    NASA Technical Reports Server (NTRS)

    Kaufman, H.

    1976-01-01

    A design study of adaptive control logic suitable for implementation in modern airborne digital flight computers was conducted. Both explicit controllers, which directly utilize parameter identification, and implicit controllers, which do not require identification, were considered. Extensive analytical and simulation efforts resulted in the recommendation of two explicit digital adaptive flight controllers. These interface weighted-least-squares estimation procedures with control logic based either on optimal regulator theory or on single-stage performance indices.

  4. An Aircraft Lifecycle Approach for the Cost-Benefit Analysis of Prognostics and Condition-Based Maintenance-Based on Discrete-Event Simulation

    DTIC Science & Technology

    2014-10-02

    MPD. This manufacturer documentation contains maintenance tasks with specification of intervals and required man-hours that are to be carried out...failures, without consideration of false alarms and missed failures (see also section 4.1.3). The task redundancy rate is the percentage of preventive...Prognostics and Health Management ROI return on investment RUL remaining useful life TCG task code group SB Service Bulletin XML Extensible Markup

  5. Bidirectional reaction steps in metabolic networks: I. Modeling and simulation of carbon isotope labeling experiments.

    PubMed

    Wiechert, W; de Graaf, A A

    1997-07-05

    The extension of metabolite balancing with carbon labeling experiments, as described by Marx et al. (Biotechnol. Bioeng. 49: 11-29), results in a much more detailed stationary metabolic flux analysis. As opposed to basic metabolite flux balancing alone, this method enables both flux directions of bidirectional reaction steps to be quantitated. However, the mathematical treatment of carbon labeling systems is much more complicated, because it requires the solution of numerous balance equations that are bilinear with respect to fluxes and fractional labeling. In this study, a universal modeling framework is presented for describing the metabolite and carbon atom flux in a metabolic network. Bidirectional reaction steps are extensively treated and their impact on the system's labeling state is investigated. Various kinds of modeling assumptions, as usually made for metabolic fluxes, are expressed by linear constraint equations. A numerical algorithm for the solution of the resulting linear constrained set of nonlinear equations is developed. The numerical stability problems caused by large bidirectional fluxes are solved by a specially developed transformation method. Finally, the simulation of carbon labeling experiments is facilitated by a flexible software tool for network synthesis. An illustrative simulation study on flux identifiability from available flux and labeling measurements in the cyclic pentose phosphate pathway of a recombinant strain of Zymomonas mobilis concludes this contribution.
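
    As a toy illustration of the bilinearity, consider a hypothetical linear pathway A → B ⇌ C → out: the labeling balances are linear in the fractional labelings once the fluxes are fixed, with the fluxes themselves appearing as coefficients (all values below are illustrative):

```python
import numpy as np

# fluxes: A -v1-> B, B <-> C (vf forward, vb backward), C -v2-> out
v1, vf, vb = 1.0, 2.0, 1.0
v2 = vf - vb            # steady-state metabolite balance around B and C
x_A = 1.0               # fractional labeling of the fully labeled substrate

# carbon labeling balances for x_B and x_C (bilinear overall: flux * labeling):
#   B:  vf*x_B = v1*x_A + vb*x_C
#   C:  (vb + v2)*x_C = vf*x_B
A = np.array([[vf, -vb],
              [-vf, vb + v2]])
b = np.array([v1 * x_A, 0.0])
x_B, x_C = np.linalg.solve(A, b)    # -> 1.0, 1.0 at isotopic steady state
```

    In the full problem both the fluxes and the labelings are unknown, which is what makes the coupled system bilinear and numerically delicate for large bidirectional fluxes.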

  6. Design of a Modular Monolithic Implicit Solver for Multi-Physics Applications

    NASA Technical Reports Server (NTRS)

    Carton De Wiart, Corentin; Diosady, Laslo T.; Garai, Anirban; Burgess, Nicholas; Blonigan, Patrick; Ekelschot, Dirk; Murman, Scott M.

    2018-01-01

    The design of a modular multi-physics high-order space-time finite-element framework is presented together with its extension to allow monolithic coupling of different physics. One of the main objectives of the framework is to perform efficient high-fidelity simulations of capsule/parachute systems. This problem requires simulating multiple physics including, but not limited to, the compressible Navier-Stokes equations, the dynamics of a moving body with mesh deformations and adaptation, the linear shell equations, non-reflective boundary conditions and wall modeling. The solver is based on high-order space-time finite-element methods. Continuous, discontinuous and C1-discontinuous Galerkin methods are implemented, allowing one to discretize various physical models. Tangent and adjoint sensitivity analyses are also targeted in order to conduct gradient-based optimization, error estimation, mesh adaptation, and flow control, adding another layer of complexity to the framework. The decisions made to tackle these challenges are presented. The discussion focuses first on the "single-physics" solver and later on its extension to the monolithic coupling of different physics. The implementation of different physics modules, relevant to the capsule/parachute system, is also presented. Finally, examples of coupled computations are presented, paving the way to the simulation of the full capsule/parachute system.

  7. LBflow: An extensible lattice Boltzmann framework for the simulation of geophysical flows. Part II: usage and validation

    NASA Astrophysics Data System (ADS)

    Llewellin, E. W.

    2010-02-01

    LBflow is a flexible, extensible implementation of the lattice Boltzmann method, developed with geophysical applications in mind. The theoretical basis for LBflow, and its implementation, are presented in the companion paper, 'Part I'. This article covers the practical usage of LBflow and presents guidelines for obtaining optimal results from available computing power. The relationships among simulation resolution, accuracy, runtime and memory requirements are investigated in detail. Particular attention is paid to the origin, quantification and minimization of errors. LBflow is validated against analytical, numerical and experimental results for a range of three-dimensional flow geometries. The fluid conductance of prismatic pipes with various cross sections is calculated with LBflow and found to be in excellent agreement with published results. Simulated flow along sinusoidally constricted pipes gives good agreement with experimental data for a wide range of Reynolds number. The permeability of packs of spheres is determined and shown to be in excellent agreement with analytical results. The accuracy of internal flow patterns within the investigated geometries is also in excellent quantitative agreement with published data. The development of vortices within a sinusoidally constricted pipe with increasing Reynolds number is shown, demonstrating the insight that LBflow can offer as a 'virtual laboratory' for fluid flow.

  8. The architecture of a video image processor for the space station

    NASA Technical Reports Server (NTRS)

    Yalamanchili, S.; Lee, D.; Fritze, K.; Carpenter, T.; Hoyme, K.; Murray, N.

    1987-01-01

    The architecture of a video image processor for space station applications is described. The architecture was derived from a study of the requirements of algorithms that are necessary to produce the desired functionality of many of these applications. Architectural options were selected based on a simulation of the execution of these algorithms on various architectural organizations. A great deal of emphasis was placed on the ability of the system to evolve and grow over the lifetime of the space station. The result is a hierarchical parallel architecture that is characterized by high-level-language programmability, modularity, and extensibility, and that can meet the required performance goals.

  9. Design and evaluation of a DAMQ multiprocessor network with self-compacting buffers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, J.; O'Krafka, B.W.O.; Vassiliadis, S.

    1994-12-31

    This paper describes a new approach to implementing Dynamically Allocated Multi-Queue (DAMQ) switching elements using a technique called "self-compacting buffers". This technique is efficient in that the amount of hardware required to manage the buffers is relatively small; it offers high performance since it is an implementation of a DAMQ. The first part of this paper describes the self-compacting buffer architecture in detail and compares it against a competing DAMQ switch design. The second part presents extensive simulation results comparing the performance of a self-compacting buffer switch against an ideal switch, including several examples of k-ary n-cubes and delta networks. In addition, simulation results show how the performance of an entire network can be quickly and accurately approximated by simulating just a single switching element.
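
    A toy software analogue of the self-compacting idea (a hypothetical Python class, not the hardware design in the paper): one linear memory is shared by all output queues, each queue occupies a contiguous region, and removals shift trailing entries up so that all free space stays compacted at the end.

```python
class SelfCompactingBuffer:
    def __init__(self, capacity, n_queues):
        self.mem = []                   # single shared linear memory
        self.capacity = capacity
        self.counts = [0] * n_queues    # current length of each queue's region

    def _end(self, q):
        # index one past the last entry of queue q's contiguous region
        return sum(self.counts[:q + 1])

    def enqueue(self, q, item):
        if len(self.mem) == self.capacity:
            return False                # dynamically shared space exhausted
        self.mem.insert(self._end(q), item)   # shifts trailing regions down
        self.counts[q] += 1
        return True

    def dequeue(self, q):
        if self.counts[q] == 0:
            return None
        head = sum(self.counts[:q])     # front of queue q's region
        self.counts[q] -= 1
        return self.mem.pop(head)       # shifting back up is the "compaction"
```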

  10. Features of MCNP6 Relevant to Medical Radiation Physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hughes, H. Grady III; Goorley, John T.

    2012-08-29

    MCNP (Monte Carlo N-Particle) is a general-purpose Monte Carlo code for simulating the transport of neutrons, photons, electrons, positrons, and more recently other fundamental particles and heavy ions. Over many years MCNP has found a wide range of applications in many different fields, including medical radiation physics. In this presentation we will describe and illustrate a number of significant recently-developed features in the current version of the code, MCNP6, having particular utility for medical physics. Among these are major extensions of the ability to simulate large, complex geometries, improvements in memory requirements and speed for large lattices, introduction of mesh-based isotopic reaction tallies, advances in radiography simulation, expanded variance-reduction capabilities, especially for pulse-height tallies, and a large number of enhancements in photon/electron transport.

  11. A nonlocal electron conduction model for multidimensional radiation hydrodynamics codes

    NASA Astrophysics Data System (ADS)

    Schurtz, G. P.; Nicolaï, Ph. D.; Busquet, M.

    2000-10-01

    Numerical simulation of laser driven Inertial Confinement Fusion (ICF) related experiments requires the use of large multidimensional hydro codes. Though these codes include detailed physics for numerous phenomena, they deal poorly with electron conduction, which is the leading energy transport mechanism of these systems. Electron heat flow is known, since the work of Luciani, Mora, and Virmont (LMV) [Phys. Rev. Lett. 51, 1664 (1983)], to be a nonlocal process, which the local Spitzer-Harm theory, even flux limited, is unable to account for. The present work aims at extending the original formula of LMV to two or three dimensions of space. This multidimensional extension leads to an equivalent transport equation suitable for easy implementation in a two-dimensional radiation-hydrodynamic code. Simulations are presented and compared to Fokker-Planck simulations in one and two dimensions of space.
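
    A one-dimensional cartoon of the LMV-style construction (the kernel shape, width and coefficients below are schematic stand-ins, not the paper's multidimensional formulation): the local Spitzer-Harm flux is smeared by a delocalization kernel, which limits the flux at steep fronts and produces preheat ahead of them.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 400)
dx = x[1] - x[0]
T = 1.0 + 0.5 * np.tanh((x - 0.5) / 0.05)       # steep temperature front
kappa, lam = 1.0, 0.03                           # conductivity scale, delocalization length

q_local = -kappa * T**2.5 * np.gradient(T, dx)   # Spitzer-Harm-like local flux

q_nonlocal = np.empty_like(q_local)
for i in range(x.size):
    w = np.exp(-np.abs(x - x[i]) / lam)          # exponential delocalization kernel
    q_nonlocal[i] = (w * q_local).sum() / w.sum()
```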

  12. Mass production of extensive air showers for the Pierre Auger Collaboration using Grid Technology

    NASA Astrophysics Data System (ADS)

    Lozano Bahilo, Julio; Pierre Auger Collaboration

    2012-06-01

    When ultra-high energy cosmic rays enter the atmosphere they interact producing extensive air showers (EAS) which are the objects studied by the Pierre Auger Observatory. The number of particles involved in an EAS at these energies is of the order of billions and the generation of a single simulated EAS requires many hours of computing time with current processors. In addition, the storage space consumed by the output of one simulated EAS is very high. Therefore we have to make use of Grid resources to be able to generate sufficient quantities of showers for our physics studies in reasonable time periods. We have developed a set of highly automated scripts written in common software scripting languages in order to deal with the high number of jobs which we have to submit regularly to the Grid. In spite of the low number of sites supporting our Virtual Organization (VO) we have reached the top spot on CPU consumption among non LHC (Large Hadron Collider) VOs within EGI (European Grid Infrastructure).

  13. Modernization and optimization of a legacy open-source CFD code for high-performance computing architectures

    NASA Astrophysics Data System (ADS)

    Gel, Aytekin; Hu, Jonathan; Ould-Ahmed-Vall, ElMoustapha; Kalinkin, Alexander A.

    2017-02-01

    Legacy codes remain a crucial element of today's simulation-based engineering ecosystem due to the extensive validation process and investment in such software. The rapid evolution of high-performance computing architectures necessitates the modernization of these codes. One approach to modernization is a complete overhaul of the code. However, this could require extensive investments, such as rewriting in modern languages, new data constructs, etc., which will necessitate systematic verification and validation to re-establish the credibility of the computational models. The current study advocates a more incremental approach and is a culmination of several modernization efforts of the legacy code MFIX, which is an open-source computational fluid dynamics code that has evolved over several decades, is widely used in multiphase flows and is still being developed by the National Energy Technology Laboratory. Two different modernization approaches, 'bottom-up' and 'top-down', are illustrated. Preliminary results show that up to an 8.5x improvement at the selected kernel level was achieved with the first approach, and up to a 50% improvement in total simulated time with the latter, for the demonstration cases and target HPC systems employed.

  14. Stochastic and Deterministic Crystal Structure Solution Methods in GSAS-II: Monte Carlo/Simulated Annealing Versus Charge Flipping

    DOE PAGES

    Von Dreele, Robert

    2017-08-29

    One of the goals in developing GSAS-II was to expand from the capabilities of the original General Structure Analysis System (GSAS), which largely encompassed just structure refinement and post-refinement analysis. GSAS-II has been written almost entirely in Python loaded with graphics, GUI and mathematical packages (matplotlib, pyOpenGL, wxpython, numpy and scipy). Thus, GSAS-II has a fully developed modern GUI as well as extensive graphical display of data and results. However, the structure and operation of Python has required new approaches to many of the algorithms used in crystal structure analysis. The extensions beyond GSAS include image calibration/integration as well as peak fitting and unit cell indexing for powder data, which are precursors for structure solution. Structure solution within GSAS-II begins with either Pawley or LeBail extracted structure factors from powder data or those measured in a single crystal experiment. Both charge flipping and Monte Carlo/simulated annealing techniques are available; the former can be applied to (3+1) incommensurate structures as well as conventional 3D structures.
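
    The charge-flipping iteration itself is remarkably compact; a minimal sketch of the basic Oszlányi-Sütő scheme (array shapes, the flipping threshold and the random start are illustrative, and this is not GSAS-II's implementation):

```python
import numpy as np

def charge_flip(amplitudes, delta, iters=200, seed=0):
    """Recover a density map from structure-factor amplitudes alone."""
    rng = np.random.default_rng(seed)
    F = amplitudes * np.exp(2j * np.pi * rng.random(amplitudes.shape))  # random phases
    for _ in range(iters):
        rho = np.fft.ifftn(F).real                  # current density map
        rho = np.where(rho < delta, -rho, rho)      # flip all weak density
        G = np.fft.fftn(rho)
        F = amplitudes * np.exp(1j * np.angle(G))   # re-impose observed amplitudes
    return rho
```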

  15. Electron distribution functions in electric field environments

    NASA Technical Reports Server (NTRS)

    Rudolph, Terence H.

    1991-01-01

    The amount of current carried by an electric discharge in its early stages of growth is strongly dependent on its geometrical shape. Discharges with a large number of branches, each funnelling current to a common stem, tend to carry more current than those with fewer branches. The fractal character of typical discharges was simulated using stochastic models based on solutions of the Laplace equation. Extension of these models requires the use of electron distribution functions to describe the behavior of electrons in the undisturbed medium ahead of the discharge. These electrons, interacting with the electric field, determine the propagation of branches in the discharge and the way in which further branching occurs. The first phase in the extension of the referenced models, the calculation of simple electron distribution functions in an air/electric field medium, is discussed. Two techniques are investigated: (1) the solution of the Boltzmann equation in homogeneous, steady state environments, and (2) the use of Monte Carlo simulations. Distribution functions calculated from both techniques are illustrated. Advantages and disadvantages of each technique are discussed.

  16. Extensions to the Dynamic Aerospace Vehicle Exchange Markup Language

    NASA Technical Reports Server (NTRS)

    Brian, Geoffrey J.; Jackson, E. Bruce

    2011-01-01

    The Dynamic Aerospace Vehicle Exchange Markup Language (DAVE-ML) is a syntactical language for exchanging flight vehicle dynamic model data. It provides a framework for encoding entire flight vehicle dynamic model data packages for exchange and/or long-term archiving. Version 2.0.1 of DAVE-ML provides much of the functionality envisioned for exchanging aerospace vehicle data; however, it is limited in only supporting scalar time-independent data. Additional functionality is required to support vector and matrix data, abstracting sub-system models, detailing dynamics system models (both discrete and continuous), and defining a dynamic data format (such as time sequenced data) for validation of dynamics system models and vehicle simulation packages. Extensions to DAVE-ML have been proposed to manage data as vectors and n-dimensional matrices, and record dynamic data in a compatible form. These capabilities will improve the clarity of data being exchanged, simplify the naming of parameters, and permit static and dynamic data to be stored using a common syntax within a single file; thereby enhancing the framework provided by DAVE-ML for exchanging entire flight vehicle dynamic simulation models.

  17. DAKOTA Design Analysis Kit for Optimization and Terascale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Brian M.; Dalbey, Keith R.; Eldred, Michael S.

    2010-02-24

    The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes (computational models) and iterative analysis methods. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and analysis of computational models on high-performance computers. A user provides a set of DAKOTA commands in an input file and launches DAKOTA. DAKOTA invokes instances of the computational models, collects their results, and performs systems analyses. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, polynomial chaos, stochastic collocation, and epistemic methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as hybrid optimization, surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. Services for parallel computing, simulation interfacing, approximation modeling, fault tolerance, restart, and graphics are also included.

  18. Flight test experience and controlled impact of a remotely piloted jet transport aircraft

    NASA Technical Reports Server (NTRS)

    Horton, Timothy W.; Kempel, Robert W.

    1988-01-01

    The Dryden Flight Research Center Facility of NASA Ames Research Center (Ames-Dryden) and the FAA conducted the controlled impact demonstration (CID) program using a large, four-engine, remotely piloted jet transport airplane. Closed-loop primary flight control was provided through the existing onboard PB-20D autopilot, which had been modified for the CID program. Uplink commands were sent from a ground-based cockpit and digital computer in conjunction with an up-down telemetry link. These uplink commands were received aboard the airplane and transferred through uplink interface systems to the modified PB-20D autopilot. Both proportional and discrete commands were produced by the ground system. Prior to flight tests, extensive simulation was conducted during the development of ground-based digital control laws. The control laws included primary control, secondary control, and racetrack and final approach guidance. Extensive ground checks were performed on all remotely piloted systems; however, piloted flight tests were the primary method of validating the control law concepts developed from simulation. The design, development, and flight testing of the control laws and systems required to accomplish the remotely piloted mission are discussed.

  19. Appendices to the model description document for a computer program for the emulation/simulation of a space station environmental control and life support system

    NASA Technical Reports Server (NTRS)

    Yanosy, James L.

    1988-01-01

    A Model Description Document for the Emulation/Simulation Computer Model was published previously. The model consisted of a detailed model (emulation) of a SAWD CO2 removal subsystem which operated with much less detailed (simulation) models of a cabin, crew, and condensing and sensible heat exchangers. The purpose was to explore the utility of such an emulation/simulation combination in the design, development, and test of a piece of ARS hardware, SAWD. Extensions to this original effort are presented. The first extension is an update of the model to reflect changes in the SAWD control logic which resulted from test. Slight changes were also made to the SAWD model to permit restarting and to improve the iteration technique. The second extension is the development of simulation models for more pieces of air and water processing equipment. Models are presented for: EDC, Molecular Sieve, Bosch, Sabatier, a new condensing heat exchanger, SPE, SFWES, Catalytic Oxidizer, and multifiltration. The third extension is to create two system simulations using these models. The first system presented consists of one air and one water processing system. The second consists of a potential air revitalization system.

  20. Appendices to the user's manual for a computer program for the emulation/simulation of a space station environmental control and life support system

    NASA Technical Reports Server (NTRS)

    Yanosy, James L.

    1988-01-01

    A user's Manual for the Emulation Simulation Computer Model was published previously. The model consisted of a detailed model (emulation) of a SAWD CO2 removal subsystem which operated with much less detailed (simulation) models of a cabin, crew, and condensing and sensible heat exchangers. The purpose was to explore the utility of such an emulation/simulation combination in the design, development, and test of a piece of ARS hardware - SAWD. Extensions to this original effort are presented. The first extension is an update of the model to reflect changes in the SAWD control logic which resulted from the test. In addition, slight changes were also made to the SAWD model to permit restarting and to improve the iteration technique. The second extension is the development of simulation models for more pieces of air and water processing equipment. Models are presented for: EDC, Molecular Sieve, Bosch, Sabatier, a new condensing heat exchanger, SPE, SFWES, Catalytic Oxidizer, and multifiltration. The third extension is to create two system simulations using these models. The first system presented consists of one air and one water processing system, the second a potential Space Station air revitalization system.

  1. A Method for Incorporating Changing Structural Characteristics Due to Propellant Mass Usage in a Launch Vehicle Ascent Simulation

    NASA Technical Reports Server (NTRS)

    McGhee, D. S.

    2004-01-01

    Launch vehicles consume large quantities of propellant quickly, causing the mass properties and structural dynamics of the vehicle to change dramatically. Currently, structural load assessments account for this change with a large collection of structural models representing various propellant fill levels. This creates a large database of models, complicating the delivery of reduced models and requiring extensive work for model changes. Presented here is a method to account for these mass changes in a more efficient manner. The method allows for the subtraction of propellant mass as the propellant is used in the simulation. This subtraction is done in the modal domain of the vehicle generalized model. The additional computation required is primarily for constructing the used propellant mass matrix from an initial propellant model and for further matrix multiplications and subtractions. An additional eigenvalue solution is required to uncouple the new equations of motion; however, this is a much simpler calculation starting from a system that is already substantially uncoupled. The method was successfully tested in a simulation of Saturn V loads. Results from the method are compared to results from separate structural models for several propellant levels, showing excellent agreement. Further development to encompass more complicated propellant models, including slosh dynamics, is possible.
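
    A small NumPy sketch of the core linear algebra (the spring-mass chain and the drained degree of freedom are illustrative stand-ins for a launch vehicle model):

```python
import numpy as np
from scipy.linalg import eigh

# toy full-order model: a uniform spring-mass chain
n = 6
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
M = np.eye(n)

w2, Phi = eigh(K, M)                 # mass-normalized modes: Phi.T @ M @ Phi = I

# propellant mass drained from DOF 0 (illustrative lumped mass dm)
dm = 0.3
dM = np.zeros((n, n))
dM[0, 0] = dm

# subtract the used propellant in the modal domain of the generalized model
M_gen = np.eye(n) - Phi.T @ dM @ Phi
K_gen = np.diag(w2)

# re-uncouple the new equations of motion: a cheap eigenproblem, since the
# generalized system is already nearly diagonal
w2_new, Q = eigh(K_gen, M_gen)
```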

  2. Simulation of floods caused by overloaded sewer systems: extensions of shallow-water equations

    NASA Astrophysics Data System (ADS)

    Hilden, Michael

    2005-03-01

    The outflow of water from a manhole onto a street is a typical flow problem within the simulation of floods in urban areas that are caused by overloaded sewer systems in the event of heavy rains. The reliable assessment of the flood risk for the connected houses requires accurate simulations of the water flow processes in the sewer system and in the street. The Navier-Stokes equations (NSEs) describe the free-surface flow of the fluid water accurately, but since their numerical solution requires high CPU times and much memory, their application is not practical. However, their solutions for selected flow problems are applied as reference states to assess the results of other model approaches. The classical shallow-water equations (SWEs) require only a fraction (a factor of 1/100) of the NSEs' computational effort. They assume hydrostatic pressure distribution and depth-averaged horizontal velocities, and neglect vertical velocities. These shallow-water assumptions are not fulfilled for the outflow of water from a manhole onto the street. Accordingly, calculations show differences between NSEs and SWEs solutions. The SWEs are extended in order to assess the flood risks in urban areas reliably within applicable computational efforts. Separating vortex regions from the main flow and approximating vertical velocities to involve their contributions in a pressure correction yield suitable results.

  3. Radial Mixing and Ru-Mo Isotope Systematics Under Different Accretion Scenarios

    NASA Astrophysics Data System (ADS)

    Fischer, R. A.; Nimmo, F.; O'Brien, D. P.

    2017-12-01

    The Ru-Mo isotopic compositions of inner Solar System bodies may reflect the provenance of accreted material and how it evolved with time, both of which are controlled by the accretion scenario these bodies experienced. Here we use a total of 116 N-body simulations of terrestrial planet accretion, run in the Eccentric Jupiter and Saturn (EJS), Circular Jupiter and Saturn (CJS), and Grand Tack scenarios, to model the Ru-Mo anomalies of Earth, Mars, and Theia analogues. This model starts by applying an initial step function in Ru-Mo isotopic composition, with compositions reflecting those in meteorites, and traces compositional evolution as planets accrete. The mass-weighted provenance of the resulting planets reveals more radial mixing in Grand Tack simulations than in EJS/CJS simulations, and more efficient mixing among late-accreted material than during the main phase of accretion in EJS/CJS simulations. We find that an extensive homogeneous inner disk region is required to reproduce Earth's observed Ru-Mo composition. EJS/CJS simulations require a homogeneous reservoir in the inner disk extending to ≥3-4 AU (≥74-98% of initial mass) to reproduce Earth's composition, while Grand Tack simulations require a homogeneous reservoir extending to ≥3-10 AU (≥97-99% of initial mass), and likely to ≥7-10 AU. In the Grand Tack model, Jupiter's initial location (the most likely location for a discontinuity in isotopic composition) is 3.5 AU; however, this step location has only a 33% likelihood of producing an Earth with the correct Ru-Mo isotopic signature for the most plausible model conditions. Our results give the testable predictions that Mars has zero Ru anomaly and small or zero Mo anomaly, and that the Moon has zero Mo anomaly. These predictions are insensitive to wide variations in parameter choices.

  4. Caregiver involvement in a large clinical systems project.

    PubMed Central

    Sales, S.; Mathews, P.; Gamblin, D.; Gee, S.

    1994-01-01

    The Kaiser Permanente Northern California Region (KPNCR) CareGiver Workstation (CGW) Project's mission is to develop and implement a clinical workstation system that will enhance each caregiver-member interaction and aid in the decision-making processes of direct patient care in the inpatient and outpatient settings. The requirements analysis approach for CareGiver Workstation (CGW) is based on the belief that extensive caregiver involvement will provide a better understanding of the diverse needs of Kaiser Permanente Northern California Region (KPNCR). In order to involve as many caregivers as reasonably possible, CGW included a 16 member caregiver core team and 6 different Medical Centers in the requirements definition process. The Medical Centers are referred to as "focus facilities". A "focus group" (caregiver team) at each selected focus facility consisted of a site coordinator and a 24-30 person multidisciplinary team involving physicians, nurses, therapists and other caregivers. The Medical Center selection process identified facilities that provided the best cross-sectional representation of KPNCR. The Lead Focus Facility participated in the initial round of requirements definition activities. These sessions assisted in the design of a simulation that was used at five additional Medical Centers to validate requirements. The five additional Focus Facilities participated in simulation review sessions. Feedback from these sessions was used to revise the simulation and update the requirements document. Caregivers from all six focus facilities and other identified groups participated in a requirements survey to assist CGW with identification of high priority features. Caregiver commitment and continuing involvement are essential for the success of CGW.(ABSTRACT TRUNCATED AT 250 WORDS) PMID:7949953

  5. Maximization of the Supportable Number of Sensors in QoS-Aware Cluster-Based Underwater Acoustic Sensor Networks

    PubMed Central

    Nguyen, Thi-Tham; Van Le, Duc; Yoon, Seokhoon

    2014-01-01

    This paper proposes a practical low-complexity MAC (medium access control) scheme for quality of service (QoS)-aware and cluster-based underwater acoustic sensor networks (UASN), in which the provision of differentiated QoS is required. In such a network, underwater sensors (U-sensor) in a cluster are divided into several classes, each of which has a different QoS requirement. The major problem considered in this paper is the maximization of the number of nodes that a cluster can accommodate while still providing the required QoS for each class in terms of the PDR (packet delivery ratio). In order to address the problem, we first estimate the packet delivery probability (PDP) and use it to formulate an optimization problem to determine the optimal value of the maximum packet retransmissions for each QoS class. The custom greedy and interior-point algorithms are used to find the optimal solutions, which are verified by extensive simulations. The simulation results show that, by solving the proposed optimization problem, the supportable number of underwater sensor nodes can be maximized while satisfying the QoS requirements for each class. PMID:24608009

  6. Maximization of the supportable number of sensors in QoS-aware cluster-based underwater acoustic sensor networks.

    PubMed

    Nguyen, Thi-Tham; Le, Duc Van; Yoon, Seokhoon

    2014-03-07

    This paper proposes a practical low-complexity MAC (medium access control) scheme for quality of service (QoS)-aware and cluster-based underwater acoustic sensor networks (UASN), in which the provision of differentiated QoS is required. In such a network, underwater sensors (U-sensor) in a cluster are divided into several classes, each of which has a different QoS requirement. The major problem considered in this paper is the maximization of the number of nodes that a cluster can accommodate while still providing the required QoS for each class in terms of the PDR (packet delivery ratio). In order to address the problem, we first estimate the packet delivery probability (PDP) and use it to formulate an optimization problem to determine the optimal value of the maximum packet retransmissions for each QoS class. The custom greedy and interior-point algorithms are used to find the optimal solutions, which are verified by extensive simulations. The simulation results show that, by solving the proposed optimization problem, the supportable number of underwater sensor nodes can be maximized while satisfying the QoS requirements for each class.
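
    For intuition, a toy version of the per-class sizing under an idealized i.i.d. loss model (hedged: the paper's packet delivery probability estimate reflects the underwater MAC details, which this sketch ignores): choose the smallest retransmission limit whose predicted delivery ratio meets each class's PDR target.

```python
def min_retransmissions(p_success, pdr_target):
    """Smallest limit m such that 1 - (1 - p)**(m + 1) >= pdr_target."""
    m = 0
    while 1.0 - (1.0 - p_success) ** (m + 1) < pdr_target:
        m += 1
    return m

# three QoS classes with increasingly strict PDR targets, per-attempt success 0.7
for target in (0.90, 0.99, 0.999):
    print(target, min_retransmissions(0.7, target))   # -> 1, 3, 5
```

    Larger retransmission limits consume more channel time per packet, which is why raising them for every class shrinks the number of sensors a cluster can support; the optimization balances the two.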

  7. Distal radius osteotomy with volar locking plates based on computer simulation.

    PubMed

    Miyake, Junichi; Murase, Tsuyoshi; Moritomo, Hisao; Sugamoto, Kazuomi; Yoshikawa, Hideki

    2011-06-01

    Corrective osteotomy using dorsal plates and structural bone graft has usually been used for treating symptomatic distal radius malunions. However, the procedure is technically demanding and requires an extensive dorsal approach. Residual deformity is a relatively frequent complication of this technique. We evaluated the clinical applicability of a three-dimensional osteotomy using computer-aided design and manufacturing techniques with volar locking plates for distal radius malunions. Ten patients with metaphyseal radius malunions were treated. Corrective osteotomy was simulated with the help of three-dimensional bone surface models created using CT data. We simulated the most appropriate screw holes in the deformed radius using computer-aided design data of a locking plate. During surgery, using a custom-made surgical template, we predrilled the screw holes as simulated. After osteotomy, plate fixation using predrilled screw holes enabled automatic reduction of the distal radial fragment. Autogenous iliac cancellous bone was grafted after plate fixation. The median volar tilt, radial inclination, and ulnar variance improved from -20°, 13°, and 6 mm, respectively, before surgery to 12°, 24°, and 1 mm, respectively, after surgery. The median wrist flexion improved from 33° before surgery to 60° after surgery. The median wrist extension was 70° before surgery and 65° after surgery. All patients experienced wrist pain before surgery, which disappeared or decreased after surgery. Surgeons can operate precisely and easily using this advanced technique. It is a new treatment option for malunion of distal radius fractures.

  8. Direct Monte Carlo simulation of chemical reaction systems: Simple bimolecular reactions

    NASA Astrophysics Data System (ADS)

    Piersall, Shannon D.; Anderson, James B.

    1991-07-01

    In applications to several simple reaction systems we have explored a "direct simulation" method for predicting and understanding the behavior of gas phase chemical reaction systems. This Monte Carlo method, originated by Bird, has been found remarkably successful in treating a number of difficult problems in rarefied gas dynamics. Extension to chemical reactions offers a powerful tool for treating reaction systems with nonthermal distributions, with coupled gas-dynamic and reaction effects, with emission and adsorption of radiation, and with many other effects difficult to treat in any other way. The usual differential equations of chemical kinetics are eliminated. For a bimolecular reaction of the type A+B→C+D with a rate sufficiently low to allow a continued thermal equilibrium of reactants we find that direct simulation reproduces the expected second order kinetics. Simulations for a range of temperatures yield the activation energies expected for the reaction models specified. For faster reactions under conditions leading to a depletion of energetic reactant species, the expected slowing of reaction rates and departures from equilibrium distributions are observed. The minimum sample sizes required for adequate simulations are as low as 1000 molecules for these cases. The calculations are found to be simple and straightforward for the homogeneous systems considered. Although computation requirements may be excessively high for very slow reactions, they are reasonably low for fast reactions, for which nonequilibrium effects are most important.
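
    A minimal well-mixed stochastic surrogate for the A+B→C+D case (hedged: Bird's method tracks molecular velocities and collision pairs, which this counting-only sketch omits; rate constants and units are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
k, dt, V = 1e-3, 0.01, 1.0      # rate constant, time step, volume (arbitrary units)
NA = NB = 1000                   # initial numbers of A and B molecules

t, history = 0.0, [(0.0, NA)]
while t < 5.0 and NA > 0:
    n_react = rng.poisson(k * NA * NB / V * dt)   # expected A+B -> C+D events
    n_react = min(n_react, NA, NB)
    NA -= n_react
    NB -= n_react
    t += dt
    history.append((t, NA))

# with NA0 == NB0, second-order kinetics predicts 1/NA - 1/NA0 = k*t/V
```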

  9. Accelerating Sequential Gaussian Simulation with a constant path

    NASA Astrophysics Data System (ADS)

    Nussbaumer, Raphaël; Mariethoz, Grégoire; Gravey, Mathieu; Gloaguen, Erwan; Holliger, Klaus

    2018-03-01

    Sequential Gaussian Simulation (SGS) is a stochastic simulation technique commonly employed for generating realizations of Gaussian random fields. Arguably, the main limitation of this technique is the high computational cost associated with determining the kriging weights. This problem is compounded by the fact that often many realizations are required to allow for an adequate uncertainty assessment. A seemingly simple way to address this problem is to keep the same simulation path for all realizations. This results in identical neighbourhood configurations and hence the kriging weights only need to be determined once and can then be re-used in all subsequent realizations. This approach is generally not recommended because it is expected to result in correlation between the realizations. Here, we challenge this common preconception and make the case for the use of a constant path approach in SGS by systematically evaluating the associated benefits and limitations. We present a detailed implementation, particularly regarding parallelization and memory requirements. Extensive numerical tests demonstrate that using a constant path allows for substantial computational gains with very limited loss of simulation accuracy. This is especially the case for a constant multi-grid path. The computational savings can be used to increase the neighbourhood size, thus allowing for a better reproduction of the spatial statistics. The outcome of this study is a recommendation for an optimal implementation of SGS that maximizes accurate reproduction of the covariance structure as well as computational efficiency.
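
    A compact 1D illustration of the constant-path idea (hedged assumptions: exponential covariance with unit sill, all previously simulated nodes used as neighbours, no multi-grid path): the kriging systems are solved once and the stored weights are re-used by every realization.

```python
import numpy as np

rng = np.random.default_rng(0)
n, a = 50, 10.0
x = np.arange(n, dtype=float)
cov = lambda h: np.exp(-np.abs(h) / a)   # exponential covariance, unit sill

path = rng.permutation(n)                # the constant path, shared by all realizations

# pass 1: solve each kriging system once; store weights and conditional std
weights, stds = [], []
for i, node in enumerate(path):
    nbr = x[path[:i]]
    if nbr.size == 0:
        weights.append(np.empty(0)); stds.append(1.0); continue
    Cnn = cov(nbr[:, None] - nbr[None, :])
    c0 = cov(nbr - x[node])
    w = np.linalg.solve(Cnn, c0)
    weights.append(w)
    stds.append(np.sqrt(max(1.0 - w @ c0, 0.0)))   # simple-kriging variance

# pass 2: generate many realizations re-using the stored weights
def realization():
    z = np.empty(n)
    for i, node in enumerate(path):
        mean = weights[i] @ z[path[:i]] if i else 0.0
        z[node] = mean + stds[i] * rng.standard_normal()
    return z

fields = [realization() for _ in range(100)]
```

    Only pass 2 is repeated per realization, so the expensive kriging solves are amortized over the whole ensemble, which is the computational gain the paper quantifies.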

  10. Collecting conditions usage metadata to optimize current and future ATLAS software and processing

    NASA Astrophysics Data System (ADS)

    Rinaldi, L.; Barberis, D.; Formica, A.; Gallas, E. J.; Oda, S.; Rybkin, G.; Verducci, M.; ATLAS Collaboration

    2017-10-01

    Conditions data (for example: alignment, calibration, data quality) are used extensively in the processing of real and simulated data in ATLAS. The volume and variety of the conditions data needed by different types of processing are quite diverse, so optimizing its access requires a careful understanding of conditions usage patterns. These patterns can be quantified by mining representative log files from each type of processing and gathering detailed information about conditions usage for that type of processing into a central repository.

  11. Microcanonical model for interface formation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rucklidge, A.; Zaleski, S.

    1988-04-01

    We describe a new cellular automaton model which allows us to simulate separation of phases. The model is an extension of existing cellular automata for the Ising model, such as Q2R. It conserves particle number and presents the qualitative features of spinodal decomposition. The dynamics is deterministic and does not require random number generators. The spins exchange energy with small local reservoirs or demons. The rate of relaxation to equilibrium is investigated, and the results are compared to the Lifshitz-Slyozov theory.
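
    A toy Kawasaki-exchange variant with local demons (hedged: the sweep order, initial pattern and demon budget are illustrative, not the authors' exact rule): neighbouring opposite spins swap only when the local demon can absorb or supply the bond-energy change, so particle number and total energy are both conserved and no random number generator is needed during the dynamics.

```python
import numpy as np

L = 64
spins = np.where(np.indices((L, L)).sum(0) % 3 == 0, 1, -1)  # deterministic initial mix
demons = np.full((L, L), 4.0)        # small local energy reservoirs

for sweep in range(100):
    for i in range(L):
        for j in range(L):
            a, b = spins[i, j], spins[i, (j + 1) % L]
            if a == b:
                continue
            # neighbour sums of the two sites, excluding their shared bond
            na = spins[(i+1) % L, j] + spins[(i-1) % L, j] + spins[i, (j-1) % L]
            nb = spins[(i+1) % L, (j+1) % L] + spins[(i-1) % L, (j+1) % L] + spins[i, (j+2) % L]
            dE = (a - b) * (na - nb)          # energy change of swapping the pair
            if demons[i, j] >= dE:            # demon pays (or banks) the difference
                demons[i, j] -= dE
                spins[i, j], spins[i, (j + 1) % L] = b, a
```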

  12. Accommodating complexity and human behaviors in decision analysis.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Backus, George A.; Siirola, John Daniel; Schoenwald, David Alan

    2007-11-01

    This is the final report for an LDRD effort to address human behavior in decision support systems. One sister LDRD effort reports the extension of this work to include actual human choices and additional simulation analyses. Another provides the background for this effort and the programmatic directions for future work. This specific effort considered the feasibility of five aspects of model development required for analysis viability. To avoid the use of classified information, healthcare decisions and the system embedding them became the illustrative example for assessment.

  13. Relationship between movement time and hip moment impulse in the sagittal plane during sit-to-stand movement: a combined experimental and computer simulation study.

    PubMed

    Inai, Takuma; Takabayashi, Tomoya; Edama, Mutsuaki; Kubo, Masayoshi

    2018-04-27

    The association between repetitive hip moment impulse and the progression of hip osteoarthritis is a recently recognized area of study. A sit-to-stand movement is essential for daily life and requires hip extension moment. Although a change in the sit-to-stand movement time may influence the hip moment impulse in the sagittal plane, this effect has not been examined. The purpose of this study was to clarify the relationship between sit-to-stand movement time and hip moment impulse in the sagittal plane. Twenty subjects performed the sit-to-stand movement at a self-selected natural speed. The hip, knee, and ankle joint angles obtained from experimental trials were used to perform two computer simulations. In the first simulation, the actual sit-to-stand movement time obtained from the experiment was entered. In the second simulation, sit-to-stand movement times ranging from 0.5 to 4.0 s at intervals of 0.25 s were entered. Hip joint moments and hip moment impulses in the sagittal plane during sit-to-stand movements were calculated for both computer simulations. The reliability of the simulation model was confirmed, as indicated by the similarities in the hip joint moment waveforms (r = 0.99) and the hip moment impulses in the sagittal plane between the first computer simulation and the experiment. In the second computer simulation, the hip moment impulse in the sagittal plane decreased with a decrease in the sit-to-stand movement time, although the peak hip extension moment increased with a decrease in the movement time. These findings clarify the association between the sit-to-stand movement time and hip moment impulse in the sagittal plane and may contribute to the prevention of the progression of hip osteoarthritis.
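
    The quantity of interest is simply the time integral of the hip moment; a toy computation with a synthetic bell-shaped moment profile (illustrative magnitudes, peak held fixed for simplicity) shows how the impulse scales with movement time:

```python
import numpy as np

def hip_impulse(T, peak=60.0, n=301):
    """Impulse (N*m*s) of a bell-shaped hip extension moment over movement time T (s)."""
    t = np.linspace(0.0, T, n)
    moment = peak * np.sin(np.pi * t / T)    # synthetic moment profile in N*m
    return np.trapz(moment, t)

for T in (0.5, 1.0, 2.0, 4.0):
    print(T, round(hip_impulse(T), 1))       # impulse grows with movement time
```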

  14. Reliable before-fabrication forecasting of normal and touch mode MEMS capacitive pressure sensor: modeling and simulation

    NASA Astrophysics Data System (ADS)

    Jindal, Sumit Kumar; Mahajan, Ankush; Raghuwanshi, Sanjeev Kumar

    2017-10-01

    An analytical model and numerical simulation of the performance of MEMS capacitive pressure sensors in both normal and touch modes is required to predict the expected behavior of a sensor prior to fabrication. Obtaining such information should be based on a complete analysis of performance parameters such as deflection of the diaphragm, change of capacitance when the diaphragm deflects, and sensitivity of the sensor. In the literature, limited work has been carried out on this issue; moreover, due to the approximation factors of the polynomials used, a tolerance error cannot be ruled out. Reliable before-fabrication forecasting requires exact mathematical calculation of the parameters involved. A second-order polynomial equation is calculated mathematically for the key performance parameters of both modes. This eliminates the approximation factor, and an exact result can be studied while maintaining high accuracy. The elimination of approximation factors and the approach to exact results are based on a new design parameter (δ) that we propose. The design parameter gives an initial hint to designers on how the sensor will behave once it is fabricated. The complete work is aided by extensive mathematical detailing of all the parameters involved. Next, we verified our claims using MATLAB® simulation. Since MATLAB® effectively provides the simulation theory for the design approach, the more complicated finite element method is not used.
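
    For the normal mode, a sketch using standard small-deflection plate theory (hedged: the silicon-like material values, geometry, and local parallel-plate integration below are illustrative and do not reproduce the paper's exact second-order polynomials or its design parameter δ):

```python
import numpy as np

# clamped circular diaphragm in the small-deflection (normal mode) regime
E, nu, h = 170e9, 0.22, 10e-6          # Young's modulus (Pa), Poisson ratio, thickness (m)
a, g, eps0 = 200e-6, 2e-6, 8.854e-12   # radius (m), zero-pressure gap (m), permittivity
D = E * h**3 / (12 * (1 - nu**2))      # flexural rigidity

def capacitance(P, n=2000):
    r = np.linspace(0.0, a, n)
    w = P * a**4 / (64 * D) * (1 - (r / a)**2)**2        # deflection profile w(r)
    return np.trapz(2 * np.pi * r * eps0 / (g - w), r)   # local parallel-plate model

for P in (0.0, 5e4, 1e5):              # pressure in Pa
    print(P, capacitance(P))           # capacitance grows as the diaphragm deflects
```

    Touch mode, where the diaphragm contacts the insulated cavity floor, requires a different deflection profile and lies outside this sketch.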

  15. Adaption of G-TAG Software for Validating Touch and Go Asteroid Sample Return Design Methodology

    NASA Technical Reports Server (NTRS)

    Blackmore, Lars James C.; Acikmese, Behcet; Mandic, Milan

    2012-01-01

    A software tool is used to demonstrate the feasibility of Touch and Go (TAG) sampling for Asteroid Sample Return missions. TAG is a concept whereby a spacecraft is in contact with the surface of a small body, such as a comet or asteroid, for a few seconds or less before ascending to a safe location away from the small body. Previous work at JPL developed the G-TAG simulation tool, which provides a software environment for fast, multi-body simulations of the TAG event. G-TAG is described in Multibody Simulation Software Testbed for Small-Body Exploration and Sampling, (NPO-47196) NASA Tech Briefs, Vol. 35, No. 11 (November 2011), p.54. This current innovation adapts this tool to a mission that intends to return a sample from the surface of an asteroid. In order to demonstrate the feasibility of the TAG concept, the new software tool was used to generate extensive simulations that demonstrate the designed spacecraft meets key requirements. These requirements state that contact force and duration must be sufficient to ensure that enough material from the surface is collected in the brushwheel sampler (BWS), and that the spacecraft must survive the contact and must be able to recover and ascend to a safe position, and maintain velocity and orientation after the contact.

  16. Towards an internal model in pilot training.

    PubMed

    Braune, R J; Trollip, S R

    1982-10-01

    Optimal decision making requires an information-seeking behavior which reflects comprehension of the overall system dynamics. Research in the area of human monitors in man-machine systems supports the notion of an internal model with built-in expectancies. It is doubtful that the current approach to pilot training helps develop this internal model in the most efficient way, yet this is crucial since the role of the pilot is changing to that of a systems manager and decision maker. An extension of the behavioral framework of pilot training might help to prepare the pilot better for the increasingly complex flight environment. This extension is based on schema theory, a theoretical framework which evolved out of psychological research. The technological advances in aircraft simulators and in-flight performance measurement devices allow investigation of the still-unresolved issues.

  17. Modelling erosion on a daily basis, an adaptation of the MMF approach

    NASA Astrophysics Data System (ADS)

    Shrestha, Dhruba Pikha; Jetten, Victor G.

    2018-02-01

    The negative impact of soil erosion on ecosystem services and food security is well known. At the same time, the total precipitation received in an area can vary from year to year and can include extreme rains. To assess annual erosion rates, various empirical models have been used extensively in all climatic regions. While these models are simple to operate and do not require a lot of input data, the effect of extreme rain is not taken into account. Although physically based models are available to simulate erosion processes, including particle detachment, transportation and deposition of sediments during a storm, they are not applicable for assessing annual soil loss rates. Moreover, storm event data may not be available everywhere, prohibiting their extensive use.

  18. Modelling phosphorus (P), sulfur (S) and iron (Fe) interactions for dynamic simulations of anaerobic digestion processes.

    PubMed

    Flores-Alsina, Xavier; Solon, Kimberly; Kazadi Mbamba, Christian; Tait, Stephan; Gernaey, Krist V; Jeppsson, Ulf; Batstone, Damien J

    2016-05-15

    This paper proposes a series of extensions to functionally upgrade the IWA Anaerobic Digestion Model No. 1 (ADM1) to allow for plant-wide phosphorus (P) simulation. The close interplay between the P, sulfur (S) and iron (Fe) cycles requires a substantial (and unavoidable) increase in model complexity due to the involved three-phase physico-chemical and biological transformations. The ADM1 version, implemented in the plant-wide context provided by the Benchmark Simulation Model No. 2 (BSM2), is used as the basic platform (A0). Three different model extensions (A1, A2, A3) are implemented, simulated and evaluated. The first extension (A1) considers P transformations by accounting for the kinetic decay of polyphosphates (XPP) and potential uptake of volatile fatty acids (VFA) to produce polyhydroxyalkanoates (XPHA) by phosphorus accumulating organisms (XPAO). Two variant extensions (A2,1/A2,2) describe biological production of sulfides (SIS) by means of sulfate reducing bacteria (XSRB) utilising hydrogen only (autolithotrophically) or hydrogen plus organic acids (heterorganotrophically) as electron sources, respectively. These two approaches also consider a potential hydrogen sulfide (H2S) inhibition effect and stripping to the gas phase (H2S(g)). The third extension (A3) accounts for chemical iron (III) (Fe3+) reduction to iron (II) (Fe2+) using hydrogen (H2) and sulfides (SIS) as electron donors. A set of pre/post interfaces between the Activated Sludge Model No. 2d (ASM2d) and ADM1 are furthermore proposed in order to allow for plant-wide (model-based) analysis and study of the interactions between the water and sludge lines. Simulation (A1-A3) results show that the ratio between soluble/particulate P compounds strongly depends on the pH and cationic load, which determines the capacity to form (or not) precipitation products. Implementations A1 and A2,1/A2,2 lead to a reduction in the predicted methane/biogas production (and potential energy recovery) compared to reference ADM1 predictions (A0). This reduction is attributed to two factors: (1) loss of electron equivalents due to sulfate (SO42-) reduction by XSRB and storage of XPHA by XPAO; and (2) decrease of acetoclastic and hydrogenotrophic methanogenesis due to H2S inhibition. Model A3 shows the potential for iron to remove free SIS (and consequently inhibition) and instead promote iron sulfide (XFeS) precipitation. It also reduces the quantities of struvite (MgNH4PO4·6H2O) and calcium phosphate (Ca3(PO4)2) that are formed, due to its higher affinity for phosphate anions. This study provides a detailed analysis of the different model assumptions, the effect that operational/design conditions have on the model predictions and the practical implications of the proposed model extensions in view of plant-wide modelling/development of resource recovery strategies.

  19. Disc replacement adjacent to cervical fusion: a biomechanical comparison of hybrid construct versus two-level fusion.

    PubMed

    Lee, Michael J; Dumonski, Mark; Phillips, Frank M; Voronov, Leonard I; Renner, Susan M; Carandang, Gerard; Havey, Robert M; Patwardhan, Avinash G

    2011-11-01

    A cadaveric biomechanical study. To investigate the biomechanical behavior of the cervical spine after cervical total disc replacement (TDR) adjacent to a fusion as compared to a two-level fusion. There are concerns regarding the biomechanical effects of cervical fusion on the mobile motion segments. Although previous biomechanical studies have demonstrated that cervical disc replacement normalizes adjacent segment motion, there is little information regarding the function of a cervical disc replacement adjacent to an anterior cervical decompression and fusion, a potentially common clinical application. Nine cadaveric cervical spines (C3-T1, age: 60.2 ± 3.5 years) were tested under load- and displacement-control testing. After intact testing, a simulated fusion was performed at C4-C5, followed by C6-C7. The simulated fusion was then reversed, and the response of TDR at C5-C6 was measured. A hybrid construct was then tested with the TDR either below or above a single-level fusion and contrasted with a simulated two-level fusion (C4-C6 and C5-C7). The external fixator device used to simulate fusion significantly reduced range of motion (ROM) at C4-C5 and C6-C7 by 74.7 ± 8.1% and 78.1 ± 11.5%, respectively (P < 0.05). Removal of the fusion construct restored the motion response of the spinal segments to their intact state. Arthroplasty performed at C5-C6 using the porous-coated motion disc prosthesis maintained the total flexion-extension ROM at the level of the intact controls when used as a stand-alone procedure or when implanted adjacent to a single-level fusion (P > 0.05). The location of the single-level fusion, whether above or below the arthroplasty, did not significantly affect the motion response of the arthroplasty in the hybrid construct. Performing a two-level fusion significantly increased the motion demands on the nonoperated segments as compared to a hybrid TDR-plus-fusion construct when the spine was required to reach the same motion end points. The spine with a hybrid construct required significantly less extension moment than the spine with a two-level fusion to reach the same extension end point. The porous-coated motion cervical prosthesis restored the ROM of the treated level to the intact state. When the porous-coated motion prosthesis was used in a hybrid construct, the TDR response was not adversely affected. A hybrid construct seems to offer significant biomechanical advantages over two-level fusion in terms of reducing compensatory adjacent-level hypermobility and also the loads required to achieve a predetermined ROM.

  20. Early Validation of Failure Detection, Isolation, and Recovery Design Using Heterogeneous Modelling and Simulation

    NASA Astrophysics Data System (ADS)

    van der Plas, Peter; Guerriero, Suzanne; Cristiano, Leorato; Rugina, Ana

    2012-08-01

    Modelling and simulation can support a number of use cases across the spacecraft development life-cycle. Given the increasing complexity of space missions, the observed general trend is for a more extensive usage of simulation already in the early phases. A major perceived advantage is that modelling and simulation can enable the validation of critical aspects of the spacecraft design before the actual development is started, as such reducing the risk in later phases. Failure Detection, Isolation, and Recovery (FDIR) is one of the areas with a high potential to benefit from early modelling and simulation. With the increasing level of required spacecraft autonomy, FDIR specifications can grow in such a way that the traditional document-based review process soon becomes inadequate. This paper shows that FDIR modelling and simulation in a system context can provide a powerful tool to support the FDIR verification process. It is highlighted that FDIR modelling at this early stage requires heterogeneous modelling tools and languages, in order to provide an adequate functional description of the different components (i.e. FDIR functions, environment, equipment, etc.) to be modelled. For this reason, an FDIR simulation framework is proposed in this paper. This framework is based on a number of tools already available in the Avionics Systems Laboratory at ESTEC, which are the Avionics Test Bench Functional Engineering Simulator (ATB FES), Matlab/Simulink, TASTE, and Real Time Developer Studio (RTDS). The paper then discusses the application of the proposed simulation framework to a real case-study, i.e. the FDIR modelling of a satellite in support of an actual ESA mission. Challenges and benefits of the approach are described. Finally, lessons learned and the generality of the proposed approach are discussed.

  1. An actuator extension transformation for a motion simulator and an inverse transformation applying Newton-Raphson's method

    NASA Technical Reports Server (NTRS)

    Dieudonne, J. E.

    1972-01-01

    A set of equations which transform position and angular orientation of the centroid of the payload platform of a six-degree-of-freedom motion simulator into extensions of the simulator's actuators has been derived and is based on a geometrical representation of the system. An iterative scheme, Newton-Raphson's method, has been successfully used in a real time environment in the calculation of the position and angular orientation of the centroid of the payload platform when the magnitude of the actuator extensions is known. Sufficient accuracy is obtained by using only one Newton-Raphson iteration per integration step of the real time environment.
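
    The forward transform and the Newton-Raphson inverse described above map naturally onto a short numerical sketch. The Python fragment below is illustrative only (the report's derivation is geometric and its code predates modern tooling); the six base/platform attachment-point arrays and the yaw-pitch-roll rotation convention are assumptions. It computes actuator extensions from a platform pose, and recovers the pose from measured extensions by Newton-Raphson with a numerical Jacobian:

      import numpy as np

      def actuator_extensions(pose, base_pts, plat_pts):
          """Forward transform: centroid pose -> six actuator lengths.
          pose = (x, y, z, roll, pitch, yaw); base_pts and plat_pts are 6x3
          arrays of attachment points (hypothetical geometry)."""
          x, y, z, phi, theta, psi = pose
          cr, sr = np.cos(phi), np.sin(phi)
          cp, sp = np.cos(theta), np.sin(theta)
          cy, sy = np.cos(psi), np.sin(psi)
          R = np.array([[cy*cp, cy*sp*sr - sy*cr, cy*sp*cr + sy*sr],
                        [sy*cp, sy*sp*sr + cy*cr, sy*sp*cr - cy*sr],
                        [-sp,   cp*sr,            cp*cr]])
          legs = np.array([x, y, z]) + plat_pts @ R.T - base_pts
          return np.linalg.norm(legs, axis=1)

      def pose_from_extensions(lengths, pose0, base_pts, plat_pts, iters=3):
          """Inverse transform by Newton-Raphson on a numerical Jacobian;
          as the abstract notes, one iteration per real-time integration
          step can be sufficient when the pose changes little per frame."""
          pose = np.asarray(pose0, dtype=float)
          h = 1e-6
          for _ in range(iters):
              err = actuator_extensions(pose, base_pts, plat_pts) - lengths
              J = np.empty((6, 6))
              for j in range(6):
                  bumped = pose.copy()
                  bumped[j] += h
                  J[:, j] = (actuator_extensions(bumped, base_pts, plat_pts)
                             - lengths - err) / h
              pose -= np.linalg.solve(J, err)
          return pose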

  2. Test of Hadronic Interaction Models with the KASCADE Hadron Calorimeter

    NASA Astrophysics Data System (ADS)

    Milke, J.; KASCADE Collaboration

    The interpretation of extensive air shower (EAS) measurements often requires comparison with EAS simulations based on high-energy hadronic interaction models. These interaction models have to extrapolate into kinematical regions and energy ranges beyond the limits of present accelerators. Therefore, it is necessary to test whether these models are able to describe the EAS development in a consistent way. By measuring the hadronic, electromagnetic, and muonic parts of an EAS simultaneously, the KASCADE experiment offers the best facilities for checking the models. For the EAS simulations the program CORSIKA is used, with several hadronic event generators implemented. Different hadronic observables, e.g. hadron number, energy spectrum, and lateral distribution, are investigated, as well as their correlations with the electromagnetic and muonic shower sizes. By comparing measurements and simulations, the consistency of the description of the EAS development is checked. First results with the new interaction model NEXUS and version II.5 of the model DPMJET, recently included in CORSIKA, are presented and compared with QGSJET simulations.

  3. Parallel Discrete Molecular Dynamics Simulation With Speculation and In-Order Commitment

    PubMed Central

    Khan, Md. Ashfaquzzaman; Herbordt, Martin C.

    2011-01-01

    Discrete molecular dynamics simulation (DMD) uses simplified and discretized models enabling simulations to advance by event rather than by timestep. DMD is an instance of discrete event simulation and so is difficult to scale: even in this multi-core era, all reported DMD codes are serial. In this paper we discuss the inherent difficulties of scaling DMD and present our method of parallelizing DMD through event-based decomposition. Our method is microarchitecture inspired: speculative processing of events exposes parallelism, while in-order commitment ensures correctness. We analyze the potential of this parallelization method for shared-memory multiprocessors. Achieving scalability required extensive experimentation with scheduling and synchronization methods to mitigate serialization. The speed-up achieved for a variety of system sizes and complexities is nearly 6× on an 8-core and over 9× on a 12-core processor. We present and verify analytical models that account for the achieved performance as a function of available concurrency and architectural limitations. PMID:21822327

  4. Parallel Discrete Molecular Dynamics Simulation With Speculation and In-Order Commitment.

    PubMed

    Khan, Md Ashfaquzzaman; Herbordt, Martin C

    2011-07-20

    Discrete molecular dynamics simulation (DMD) uses simplified and discretized models enabling simulations to advance by event rather than by timestep. DMD is an instance of discrete event simulation and so is difficult to scale: even in this multi-core era, all reported DMD codes are serial. In this paper we discuss the inherent difficulties of scaling DMD and present our method of parallelizing DMD through event-based decomposition. Our method is microarchitecture inspired: speculative processing of events exposes parallelism, while in-order commitment ensures correctness. We analyze the potential of this parallelization method for shared-memory multiprocessors. Achieving scalability required extensive experimentation with scheduling and synchronization methods to mitigate serialization. The speed-up achieved for a variety of system sizes and complexities is nearly 6× on an 8-core and over 9× on a 12-core processor. We present and verify analytical models that account for the achieved performance as a function of available concurrency and architectural limitations.
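
    The scaling difficulty both records describe stems from the strict global event order of discrete event simulation. The toy Python loop below (an illustrative sketch, not the authors' code) shows the serial pattern: the earliest event is always popped first and may spawn causally dependent later events, which is exactly the ordering that speculative processing with in-order commitment must preserve when run in parallel:

      import heapq
      import random

      def run_des(n_seed_events=10, horizon=100.0, seed=1):
          """Toy serial discrete-event loop in the DMD pattern: pop the
          earliest event, process it, possibly schedule a causally
          dependent later event (illustrative sketch only)."""
          rng = random.Random(seed)
          queue = [(rng.uniform(0.0, horizon), i) for i in range(n_seed_events)]
          heapq.heapify(queue)
          processed = 0
          while queue:
              t, eid = heapq.heappop(queue)          # strictly earliest-first
              if t > horizon:
                  break
              processed += 1
              if rng.random() < 0.5:                 # handler spawns a new event
                  heapq.heappush(queue, (t + rng.expovariate(1.0), eid))
          return processed

      print(run_des())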

  5. The role of disk self-gravity on gap formation of the HL Tau proto-planetary disk

    DOE PAGES

    Li, Shengtai; Li, Hui

    2016-05-31

    Here, we use extensive global hydrodynamic disk gas+dust simulations with embedded planets to model the dust ring and gap structures in the HL Tau protoplanetary disk observed with the Atacama Large Millimeter/Submillimeter Array (ALMA). Since HL Tau is a relatively massive disk, we find that disk self-gravity (DSG) plays an important role in the gap formation induced by the planets. Our simulation results demonstrate that DSG is necessary to explain the dust ring and gap structures in the HL Tau disk. The comparison of simulation results shows that the dust ring and gap structures are more evident when the fully 2D DSG (non-axisymmetric components included) is used than when the 1D axisymmetric DSG (only the axisymmetric component included) is used, or when disk self-gravity is not considered. We also find that coupled dust+gas+planet simulations are required because the gap and ring structures differ between the dust and gas surface densities.

  6. A standard library for modeling satellite orbits on a microcomputer

    NASA Astrophysics Data System (ADS)

    Beutel, Kenneth L.

    1988-03-01

    Introductory students of astrodynamics and the space environment are required to have a fundamental understanding of the kinematic behavior of satellite orbits. This thesis develops a standard library that contains the basic formulas for modeling Earth-orbiting satellites. This library is used as a basis for implementing a satellite motion simulator that can be used to demonstrate orbital phenomena in the classroom. The equations of orbital elements, coordinate systems, and analytic formulas are surveyed and consolidated into a standard method for modeling Earth-orbiting satellites. The standard library is written in the C programming language and is designed to be highly portable between a variety of computer environments. The simulation draws heavily on the standards established by the library to produce a graphics-based orbit simulation program written for the Apple Macintosh computer. The simulation demonstrates the utility of the standard library functions but, because of its extensive use of the Macintosh user interface, is not portable to other operating systems.
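
    A representative member of such a formula library is the conversion from orbital elements to position, which hinges on solving Kepler's equation E - e sin E = M. The short Python sketch below is illustrative only (the thesis library is in C, and the sample orbit values are assumptions); it applies Newton iteration and returns the in-plane perifocal coordinates:

      import math

      def kepler_position(a, e, M, tol=1e-12):
          """Solve Kepler's equation E - e*sin(E) = M by Newton iteration
          and return the in-plane (perifocal) position. a in km, M in
          radians; a minimal sketch of the kind of routine such a
          standard library collects."""
          E = M if e < 0.8 else math.pi            # common starting guess
          while True:
              dE = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
              E -= dE
              if abs(dE) < tol:
                  break
          nu = 2.0 * math.atan2(math.sqrt(1.0 + e) * math.sin(E / 2.0),
                                math.sqrt(1.0 - e) * math.cos(E / 2.0))
          r = a * (1.0 - e * math.cos(E))          # orbital radius
          return r * math.cos(nu), r * math.sin(nu)

      print(kepler_position(a=7000.0, e=0.01, M=1.0))   # near-circular LEO example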

  7. Effects of including surface depressions in the application of the Precipitation-Runoff Modeling System in the Upper Flint River Basin, Georgia

    USGS Publications Warehouse

    Viger, Roland J.; Hay, Lauren E.; Jones, John W.; Buell, Gary R.

    2010-01-01

    This report documents an extension of the Precipitation Runoff Modeling System that accounts for the effect of a large number of water-holding depressions in the land surface on the hydrologic response of a basin. Several techniques for developing the inputs needed by this extension also are presented. These techniques include the delineation of the surface depressions, the generation of volume estimates for the surface depressions, and the derivation of model parameters required to describe these surface depressions. This extension is valuable for applications in basins where surface depressions are too small or numerous to conveniently model as discrete spatial units, but where the aggregated storage capacity of these units is large enough to have a substantial effect on streamflow. In addition, this report documents several new model concepts that were evaluated in conjunction with the depression storage functionality, including 'hydrologically effective' imperviousness, rates of hydraulic conductivity, and daily streamflow routing. All of these techniques are demonstrated as part of an application in the Upper Flint River Basin, Georgia. Simulated solar radiation, potential evapotranspiration, and water balances match observations well, with small errors in the first two quantities in June and August because of differences in temperature between the calibration and evaluation periods for those months. Daily runoff simulations show increasing accuracy with streamflow and a good fit overall. Including surface depression storage in the model has the effect of decreasing daily streamflow for all but the lowest flow values. The report discusses the choices and resultant effects involved in delineating and parameterizing these features. The remaining enhancements to the model and its application provide a more realistic description of basin geography and hydrology that serve to constrain the calibration process to more physically realistic parameter values.

  8. Boat, wake, and wave real-time simulation

    NASA Astrophysics Data System (ADS)

    Świerkowski, Leszek; Gouthas, Efthimios; Christie, Chad L.; Williams, Owen M.

    2009-05-01

    We describe the extension of our real-time scene generation software VIRSuite to include the dynamic simulation of small boats and their wakes within an ocean environment. Extensive use has been made of the programmability available in the current generation of GPUs. We have demonstrated that real-time simulation is feasible, even including such complexities as dynamical calculation of the boat motion, wake generation, and calculation of an FFT-generated sea state.

  9. Establishment of a rotor model basis

    NASA Technical Reports Server (NTRS)

    Mcfarland, R. E.

    1982-01-01

    Radial-dimension computations in the RSRA's blade-element model are modified for both the acquisition of extensive baseline data and for real-time simulation use. The baseline data, which are for the evaluation of model changes, use very small increments and are of high quality. The modifications to the real-time simulation model are for accuracy improvement, especially when a minimal number of blade segments is required for real-time synchronization. An accurate technique for handling tip loss in discrete blade models is developed. The mathematical consistency and convergence properties of summation algorithms for blade forces and moments are examined and generalized integration coefficients are applied to equal-annuli midpoint spacing. Rotor conditions identified as 'constrained' and 'balanced' are used and the propagation of error is analyzed.

  10. Reynolds shear stress and heat flux calculations in a fully developed turbulent duct flow

    NASA Technical Reports Server (NTRS)

    Antonia, R. A.; Kim, J.

    1991-01-01

    The use of a modified form of the Van Driest mixing length for a fully developed turbulent channel flow leads to mean velocity and Reynolds stress distributions that are in close agreement with data obtained either from experiments or direct numerical simulations. The calculations are then extended to a nonisothermal flow by assuming a constant turbulent Prandtl number, the value of which depends on the molecular Prandtl number. Calculated distributions of mean temperature and lateral heat flux are in reasonable agreement with the simulations. The extension of the calculations to higher Reynolds numbers provides some idea of the Reynolds number required for scaling on wall variables to apply in the inner region of the flow.
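
    For fully developed channel flow, the closure described above reduces to a one-dimensional integration: with a Van Driest damped mixing length l+ = κy+(1 - exp(-y+/A+)) and total stress τ+ = 1 - y+/Reτ, the mean-velocity gradient follows from du+/dy+ = 2τ+/(1 + sqrt(1 + 4 l+^2 τ+)). The Python sketch below uses the classical unmodified form, not the paper's modification; the constants κ = 0.41, A+ = 26 and the choice Reτ = 395 are assumptions:

      import numpy as np

      # Classical Van Driest mixing-length mean-velocity profile for fully
      # developed channel flow, integrated by the trapezoid rule in wall units.
      kappa, A_plus, Re_tau = 0.41, 26.0, 395.0
      y_plus = np.linspace(0.0, Re_tau, 4000)
      tau = 1.0 - y_plus / Re_tau                      # total stress, wall units
      l_mix = kappa * y_plus * (1.0 - np.exp(-y_plus / A_plus))
      dudy = 2.0 * tau / (1.0 + np.sqrt(1.0 + 4.0 * l_mix**2 * tau))
      u_plus = np.concatenate(([0.0],
                np.cumsum(0.5 * (dudy[1:] + dudy[:-1]) * np.diff(y_plus))))
      print("centerline u+ =", u_plus[-1])             # ~ (1/kappa)*ln(Re_tau) + B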

  11. MSFC Three Point Docking Mechanism design review

    NASA Technical Reports Server (NTRS)

    Schaefer, Otto; Ambrosio, Anthony

    1992-01-01

    In the next few decades, we will be launching expensive satellites and space platforms that will require recovery for economic reasons, because of initial malfunction, for servicing and repairs, or out of concern for post-lifetime debris removal. The planned availability of a Three Point Docking Mechanism (TPDM) is a positive step towards an operational satellite retrieval infrastructure. This study effort supports NASA/MSFC engineering work in developing an automated docking capability. The work was performed by the Grumman Space & Electronics Group as a concept evaluation/test for the Tumbling Satellite Retrieval Kit. Simulation of a TPDM capture was performed in Grumman's Large Amplitude Space Simulator (LASS) using mockups of both parts (the mechanism and payload). Similar TPDM simulation activities and more extensive hardware testing were performed at NASA/MSFC in the Flight Robotics Laboratory and the Space Station/Space Operations Mechanism Test Bed (6-DOF Facility).

  12. Measurement and numerical simulation of a small centrifugal compressor characteristics at small or negative flow rate

    NASA Astrophysics Data System (ADS)

    Tsukamoto, Kaname; Okada, Mizuki; Inokuchi, Yuzo; Yamasaki, Nobuhiko; Yamagata, Akihiro

    2017-04-01

    For centrifugal compressors used in automotive turbochargers, extension of the surge margin is demanded because of lower engine speeds. Estimating the surge line accurately requires acquiring the compressor characteristics at small or negative flow rates. In this paper, measurement and numerical simulation of the characteristics at small or negative flow rate are carried out. In the measurement, an experimental facility with a valve immediately downstream of the compressor is used to suppress surge. In the numerical work, a new boundary condition that specifies the mass flow rate at the outlet boundary is used to simulate the characteristics around the zero-flow-rate region. Furthermore, flow field analyses at small or negative flow rate are performed with the numerical results. The separated and recirculated flow fields are investigated by visualization to identify the origin of losses.

  13. MAGE (M-file/Mif Automatic GEnerator): A graphical interface tool for automatic generation of Object Oriented Micromagnetic Framework configuration files and Matlab scripts for results analysis

    NASA Astrophysics Data System (ADS)

    Chęciński, Jakub; Frankowski, Marek

    2016-10-01

    We present a tool for fully automated generation of both simulation configuration files (Mif) and Matlab data-analysis scripts, dedicated to the Object Oriented Micromagnetic Framework (OOMMF). We introduce an extended graphical user interface (GUI) that allows fast, error-proof and easy creation of Mifs, without the programming skills usually required for manual Mif writing. With MAGE we provide OOMMF extensions that complement it with magnetoresistance and spin-transfer-torque calculations, as well as selection of local magnetization data for output. Our software allows the creation of advanced simulation conditions such as simultaneous parameter sweeps and synchronized application of excitations. Furthermore, since the output of such simulations can be long and complicated, we provide another GUI for automated creation of Matlab scripts that analyze the data with Fourier and wavelet transforms as well as user-defined operations.
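
    The core idea, generating Mif files automatically from GUI parameters, can be sketched in a few lines of Python. This is schematic only: the block names follow common OOMMF Mif 2.1 usage, but the geometry values, swept field, and file naming are assumptions, and a real generator such as MAGE emits far richer files:

      # Schematic Mif-generation sketch in the spirit of MAGE.
      MIF_TEMPLATE = """# MIF 2.1
      Specify Oxs_BoxAtlas:atlas {{
        xrange {{0 {length}}}
        yrange {{0 {width}}}
        zrange {{0 {thickness}}}
      }}
      Specify Oxs_FixedZeeman:applied {{
        field {{ Oxs_UniformVectorField {{ vector {{{bx} {by} {bz}}} }} }}
      }}
      """

      def write_mifs(geometry, bx_sweep):
          """Write one Mif file per swept field value."""
          for i, bx in enumerate(bx_sweep):
              with open(f"sim_{i:03d}.mif", "w") as fh:
                  fh.write(MIF_TEMPLATE.format(bx=bx, by=0.0, bz=0.0, **geometry))

      write_mifs({"length": 200e-9, "width": 100e-9, "thickness": 5e-9},
                 bx_sweep=[0.0, 0.01, 0.02])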

  14. Software systems for modeling articulated figures

    NASA Technical Reports Server (NTRS)

    Phillips, Cary B.

    1989-01-01

    Research in computer animation and simulation of human task performance requires sophisticated geometric modeling and user interface tools. The software for a research environment should present the programmer with a powerful but flexible substrate of facilities for displaying and manipulating geometric objects, yet ensure that future tools have a consistent and friendly user interface. Jack is a system which provides a flexible and extensible programmer and user interface for displaying and manipulating complex geometric figures, particularly human figures in a 3D working environment. It is a basic software framework for high-performance Silicon Graphics IRIS workstations for modeling and manipulating geometric objects in a general but powerful way. It provides a consistent and user-friendly interface across various applications in computer animation and simulation of human task performance. Currently, Jack provides input and control for applications including lighting specification and image rendering, anthropometric modeling, figure positioning, inverse kinematics, dynamic simulation, and keyframe animation.

  15. Analysis of in-trail following dynamics of CDTI-equipped aircraft. [Cockpit Displays of Traffic Information

    NASA Technical Reports Server (NTRS)

    Sorensen, J. A.; Goka, T.

    1982-01-01

    In connection with the necessity to provide greater terminal area capacity, attention is given to approaches in which the required increase in capacity will be obtained by making use of more automation and by involving the pilot to a larger degree in the air traffic control (ATC) process. It was recommended that NASA should make extensive use of its research aircraft and cockpit simulators to assist the FAA in examining the capabilities and limitations of cockpit displays of traffic information (CDTI). A program was organized which utilizes FAA ATC (ground-based) simulators and NASA aircraft and associated cockpit simulators in a research project which explores applications of the CDTI system. The present investigation is concerned with several questions related to the CDTI-based terminal area traffic tactical control concepts. Attention is given to longitudinal separation criteria, a longitudinal following model, longitudinal capture, combined longitudinal/vertical control, and lateral control.

  16. GENASIS Mathematics : Object-oriented manifolds, operations, and solvers for large-scale physics simulations

    NASA Astrophysics Data System (ADS)

    Cardall, Christian Y.; Budiardja, Reuben D.

    2018-01-01

    The large-scale computer simulation of a system of physical fields governed by partial differential equations requires some means of approximating the mathematical limit of continuity. For example, conservation laws are often treated with a 'finite-volume' approach in which space is partitioned into a large number of small 'cells,' with fluxes through cell faces providing an intuitive discretization modeled on the mathematical definition of the divergence operator. Here we describe and make available Fortran 2003 classes furnishing extensible object-oriented implementations of simple meshes and the evolution of generic conserved currents thereon, along with individual 'unit test' programs and larger example problems demonstrating their use. These classes inaugurate the Mathematics division of our developing astrophysics simulation code GENASIS (General Astrophysical Simulation System), which will be expanded over time to include additional meshing options, mathematical operations, solver types, and solver variations appropriate for many multiphysics applications.
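
    The finite-volume idea sketched above (cell averages updated only by fluxes through cell faces) can be shown in a few lines. The Python fragment below is a minimal 1D illustration under assumed parameters (periodic domain, constant advection speed, first-order upwind fluxes); it is not GENASIS code, which is Fortran 2003 and far more general:

      import numpy as np

      # Minimal 1D finite-volume update of a conserved quantity: cell
      # averages change only through upwind fluxes at cell faces.
      nx, c, cfl = 200, 1.0, 0.5
      dx = 1.0 / nx
      dt = cfl * dx / c
      x = (np.arange(nx) + 0.5) * dx
      u = np.exp(-200.0 * (x - 0.3) ** 2)          # initial cell averages
      for _ in range(int(0.4 / dt)):
          flux = c * np.roll(u, 1)                 # upwind flux at each left face
          u += dt / dx * (flux - np.roll(flux, -1))
      print("total 'mass' conserved:", u.sum() * dx)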

  17. A comparison of stress in cracked fibrous tissue specimens with varied crack location, loading, and orientation using finite element analysis.

    PubMed

    Peloquin, John M; Elliott, Dawn M

    2016-04-01

    Cracks in fibrous soft tissue, such as intervertebral disc annulus fibrosus and knee meniscus, cause pain and compromise joint mechanics. A crack concentrates stress at its tip, making further failure and crack extension (fracture) more likely. Ex vivo mechanical testing is an important tool for studying the loading conditions required for crack extension, but prior work has shown that it is difficult to reproduce crack extension. Most prior work used edge crack specimens in uniaxial tension, with the crack 90° to the edge of the specimen. This configuration does not necessarily represent the loading conditions that cause in vivo crack extension. To find a potentially better choice for experiments aiming to reproduce crack extension, we used finite element analysis to compare, in factorial combination, (1) center crack vs. edge crack location, (2) biaxial vs. uniaxial loading, and (3) crack-fiber angles ranging from 0° to 90°. The simulated material was annulus fibrosus fibrocartilage with a single fiber family. We hypothesized that one of the simulated test cases would produce a stronger stress concentration than the commonly used uniaxially loaded 90° crack-fiber angle edge crack case. Stress concentrations were compared between cases in terms of fiber-parallel stress (representing risk of fiber rupture), fiber-perpendicular stress (representing risk of matrix rupture), and fiber shear stress (representing risk of fiber sliding). Fiber-perpendicular stress and fiber shear stress concentrations were greatest in edge crack specimens (of any crack-fiber angle) and center crack specimens with a 90° crack-fiber angle. However, unless the crack is parallel to the fiber direction, these stress components alone are insufficient to cause crack opening and extension. Fiber-parallel stress concentrations were greatest in center crack specimens with a 45° crack-fiber angle, either biaxially or uniaxially loaded. We therefore recommend that the 45° center crack case be tried in future experiments intended to study crack extension by fiber rupture. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. Real-time simulation of thermal shadows with EMIT

    NASA Astrophysics Data System (ADS)

    Klein, Andreas; Oberhofer, Stefan; Schätz, Peter; Nischwitz, Alfred; Obermeier, Paul

    2016-05-01

    Modern missile systems use infrared imaging for tracking or target detection algorithms. The development and validation processes of these missile systems need high-fidelity simulations capable of stimulating the sensors in real time with infrared image sequences from a synthetic 3D environment. The Extensible Multispectral Image Generation Toolset (EMIT) is a modular software library developed at MBDA Germany for the generation of physics-based infrared images in real-time. EMIT is able to render radiance images in full 32-bit floating point precision using state-of-the-art computer graphics cards and advanced shader programs. An important functionality of an infrared image generation toolset is the simulation of thermal shadows, as these may cause matching errors in tracking algorithms. However, for real-time simulations, such as hardware-in-the-loop (HWIL) simulations of infrared seekers, thermal shadows are often neglected or precomputed, as they require a thermal-balance calculation in four dimensions (3D geometry over time, reaching up to several hours into the past). In this paper we show the novel real-time thermal simulation of EMIT. Our thermal simulation is capable of simulating thermal effects in real-time environments, such as thermal shadows resulting from the occlusion of direct and indirect irradiance. We conclude our paper with the practical use of EMIT in a missile HWIL simulation.

  19. Particle kinetic simulation of high altitude hypervelocity flight

    NASA Technical Reports Server (NTRS)

    Boyd, Iain; Haas, Brian L.

    1994-01-01

    Rarefied flows about hypersonic vehicles entering the upper atmosphere or through nozzles expanding into a near vacuum may only be simulated accurately with a direct simulation Monte Carlo (DSMC) method. Under this grant, researchers enhanced the models employed in the DSMC method and performed simulations in support of existing NASA projects or missions. DSMC models were developed and validated for simulating rotational, vibrational, and chemical relaxation in high-temperature flows, including effects of quantized anharmonic oscillators and temperature-dependent relaxation rates. State-of-the-art advancements were made in simulating coupled vibration-dissociation recombination for post-shock flows. Models were also developed to compute vehicle surface temperatures directly in the code rather than requiring isothermal estimates. These codes were instrumental in simulating aerobraking of NASA's Magellan spacecraft during orbital maneuvers to assess heat transfer and aerodynamic properties of the delicate satellite. NASA also depended upon simulations of entry of the Galileo probe into the atmosphere of Jupiter to provide drag and flow field information essential for accurate interpretation of an onboard experiment. Finally, the codes have been used extensively to simulate expanding nozzle flows in low-power thrusters in support of propulsion activities at NASA-Lewis. Detailed comparisons between continuum calculations and DSMC results helped to quantify the limitations of continuum CFD codes in rarefied applications.

  20. Atmospheric Modeling And Sensor Simulation (AMASS) study

    NASA Technical Reports Server (NTRS)

    Parker, K. G.

    1985-01-01

    A 4800-baud synchronous communications link was established between the Perkin-Elmer (P-E) 3250 Atmospheric Modeling and Sensor Simulation (AMASS) system and the Cyber 205 located at the Goddard Space Flight Center. An extension study of off-the-shelf array processors offering a standard interface to the Perkin-Elmer was conducted to determine which would meet the computational requirements of the division. A Floating Point Systems AP-120B was borrowed from another Marshall Space Flight Center laboratory for evaluation. It was determined that available array processors did not offer significantly more capability than the borrowed unit, although at least three other vendors indicated that standard Perkin-Elmer interfaces would be marketed in the future. Therefore, the recommendation was made to continue to utilize the AP-120B and to keep monitoring the array processor market. Hardware necessary to support the requirements of the ASD as well as to enhance system performance was specified and procured. Filters were implemented on the Harris/McIDAS system, including two-dimensional lowpass, gradient, Laplacian, and bicubic interpolation routines.
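
    Filters of the kind listed in the last sentence are small convolution kernels. The Python/NumPy sketch below is an illustration, not the original Harris/McIDAS code, and the random array stands in for a meteorological field:

      import numpy as np
      from scipy.ndimage import convolve

      field = np.random.rand(64, 64)

      # 5-point Laplacian kernel
      laplacian = convolve(field, np.array([[0.0,  1.0, 0.0],
                                            [1.0, -4.0, 1.0],
                                            [0.0,  1.0, 0.0]]))

      # Gradient components via central differences
      grad_x = convolve(field, np.array([[-0.5, 0.0, 0.5]]))
      grad_y = convolve(field, np.array([[-0.5], [0.0], [0.5]]))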

  1. The Quantum Socket: Wiring for Superconducting Qubits - Part 1

    NASA Astrophysics Data System (ADS)

    McConkey, T. G.; Bejanin, J. H.; Rinehart, J. R.; Bateman, J. D.; Earnest, C. T.; McRae, C. H.; Rohanizadegan, Y.; Shiri, D.; Mariantoni, M.; Penava, B.; Breul, P.; Royak, S.; Zapatka, M.; Fowler, A. G.

    Quantum systems with ten superconducting quantum bits (qubits) have been realized, making it possible to demonstrate basic quantum error correction (QEC) algorithms. However, a truly scalable architecture has not been developed yet. QEC requires a two-dimensional array of qubits, restricting any interconnection to external classical systems to the third axis. In this talk, we introduce an interconnect solution for solid-state qubits: the quantum socket. The quantum socket employs three-dimensional wires and makes it possible to connect classical electronics with quantum circuits more densely and accurately than methods based on wire bonding. The three-dimensional wires are based on spring-loaded pins engineered to ensure compatibility with quantum computing applications. Extensive design work and machining were required, with a focus on material quality to prevent magnetic impurities. Microwave simulations were undertaken to optimize the design, focusing on the interface between the micro-connector and an on-chip coplanar waveguide pad. Simulations revealed good performance from DC to 10 GHz and were later confirmed against experimental measurements.

  2. Continuum and discrete approach in modeling biofilm development and structure: a review.

    PubMed

    Mattei, M R; Frunzo, L; D'Acunto, B; Pechaud, Y; Pirozzi, F; Esposito, G

    2018-03-01

    The scientific community has recognized that almost 99% of the microbial life on earth is represented by biofilms. Considering the impacts of their sessile lifestyle on both natural and human activities, extensive experimental activity has been carried out to understand how biofilms grow and interact with the environment. Many mathematical models have also been developed to simulate and elucidate the main processes characterizing the biofilm growth. Two main mathematical approaches for biomass representation can be distinguished: continuum and discrete. This review is aimed at exploring the main characteristics of each approach. Continuum models can simulate the biofilm processes in a quantitative and deterministic way. However, they require a multidimensional formulation to take into account the biofilm spatial heterogeneity, which makes the models quite complicated, requiring significant computational effort. Discrete models are more recent and can represent the typical multidimensional structural heterogeneity of biofilm reflecting the experimental expectations, but they generate computational results including elements of randomness and introduce stochastic effects into the solutions.

  3. Extending a Flight Management Computer for Simulation and Flight Experiments

    NASA Technical Reports Server (NTRS)

    Madden, Michael M.; Sugden, Paul C.

    2005-01-01

    In modern transport aircraft, the flight management computer (FMC) has evolved from a flight planning aid to an important hub for pilot information and origin-to-destination optimization of flight performance. Current trends indicate increasing roles of the FMC in aviation safety, aviation security, increasing airport capacity, and improving environmental impact from aircraft. Related research conducted at the Langley Research Center (LaRC) often requires functional extension of a modern, full-featured FMC. Ideally, transport simulations would include an FMC simulation that could be tailored and extended for experiments. However, due to the complexity of a modern FMC, a large investment (millions of dollars over several years) and scarce domain knowledge are needed to create such a simulation for transport aircraft. As an intermediate alternative, the Flight Research Services Directorate (FRSD) at LaRC created a set of reusable software products to extend flight management functionality upstream of a Boeing-757 FMC, transparently simulating or sharing its operator interfaces. The paper details the design of these products and highlights their use on NASA projects.

  4. AMPS data management concepts. [Atmospheric, Magnetospheric and Plasma in Space experiment

    NASA Technical Reports Server (NTRS)

    Metzelaar, P. N.

    1975-01-01

    Five typical AMPS experiments were formulated to allow simulation studies to verify data management concepts. Design studies were conducted to analyze these experiments in terms of the applicable procedures, data processing and displaying functions. Design concepts for AMPS data management system are presented which permit both automatic repetitive measurement sequences and experimenter-controlled step-by-step procedures. Extensive use is made of a cathode ray tube display, the experimenters' alphanumeric keyboard, and the computer. The types of computer software required by the system and the possible choices of control and display procedures available to the experimenter are described for several examples. An electromagnetic wave transmission experiment illustrates the methods used to analyze data processing requirements.

  5. A Module Experimental Process System Development Unit (MEPSDU)

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Subsequent to the design review, a series of tests was conducted on simulated modules to demonstrate that all environmental specifications (wind loading, hailstone impact, thermal cycling, and humidity cycling) are satisfied by the design. All tests, except hailstone impact, were successfully completed. The assembly sequence was simplified by virtue of eliminating the frame components and assembly steps. Performance was improved by reducing the module edge border required to accommodate the frame of the preliminary design module. An ultrasonic rolling spot bonding technique was selected for use in the machine to perform the aluminum interconnect to cell metallization electrical joints required in the MEPSDU module configuration. This selection was based on extensive experimental tests and economic analyses.

  6. Determination of the thermodynamic correction factor of fluids confined in nano-metric slit pores from molecular simulation

    NASA Astrophysics Data System (ADS)

    Collell, Julien; Galliero, Guillaume

    2014-05-01

    The multi-component diffusive mass transport is generally quantified by means of the Maxwell-Stefan diffusion coefficients when using molecular simulations. These coefficients can be related to the Fick diffusion coefficients using the thermodynamic correction factor matrix, which requires running several simulations to estimate all the elements of the matrix. In a recent work, Schnell et al. ["Thermodynamics of small systems embedded in a reservoir: A detailed analysis of finite size effects," Mol. Phys. 110, 1069-1079 (2012)] developed an approach to determine the full matrix of thermodynamic factors from a single simulation in bulk. This approach relies on the finite size effects of small systems on the density fluctuations. We present here an extension of their work to inhomogeneous Lennard Jones fluids confined in slit pores. We first verified this extension by cross-validating the results obtained from this approach with the results obtained from the simulated adsorption isotherms, which allows the thermodynamic factor in a porous medium to be determined. We then studied the effects of the pore width (from 1 to 15 molecular sizes), of the solid-fluid interaction potential (Lennard Jones 9-3, hard wall potential) and of the reduced fluid density (from 0.1 to 0.7 at a reduced temperature T* = 2) on the thermodynamic factor. The deviation of the thermodynamic factor from its equivalent bulk value decreases when increasing the pore width and becomes insignificant for reduced pore widths above 15. We also found that the thermodynamic factor is sensitive to the magnitude of the fluid-fluid and solid-fluid interactions, which softens or exacerbates the density fluctuations.
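
    The small-system route mentioned above estimates the thermodynamic factor from particle-number fluctuations in subvolumes: for a single component, Gamma approaches <N>/(<N^2> - <N>^2) as the subvolume grows, and the bulk value is recovered by extrapolating Gamma(L) in 1/L to zero. The Python sketch below works under assumed inputs (one snapshot of ideal-gas-like positions in a cubic periodic box); it is not the authors' implementation, and a real analysis would average over many configurations:

      import numpy as np

      def thermodynamic_factor_profile(positions, box, n_sizes=15, n_samples=200):
          """Estimate Gamma(L) = <N> / (<N^2> - <N>^2) from particle counts
          in randomly placed cubic subvolumes of edge L (periodic box)."""
          rng = np.random.default_rng(0)
          profile = []
          for L in np.linspace(0.1 * box, 0.4 * box, n_sizes):
              origins = rng.uniform(0.0, box, size=(n_samples, 3))
              counts = np.array([np.all((positions - o) % box < L, axis=1).sum()
                                 for o in origins], dtype=float)
              if counts.var() > 0:
                  profile.append((L, counts.mean() / counts.var()))
          return profile   # extrapolate Gamma(L) versus 1/L to 1/L -> 0

      # Ideal-gas-like test: Gamma should be close to 1 at every subvolume size.
      pos = np.random.default_rng(1).uniform(0.0, 10.0, size=(8000, 3))
      print(thermodynamic_factor_profile(pos, box=10.0)[-1])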

  7. Determination of the thermodynamic correction factor of fluids confined in nano-metric slit pores from molecular simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collell, Julien; Galliero, Guillaume, E-mail: guillaume.galliero@univ-pau.fr

    2014-05-21

    The multi-component diffusive mass transport is generally quantified by means of the Maxwell-Stefan diffusion coefficients when using molecular simulations. These coefficients can be related to the Fick diffusion coefficients using the thermodynamic correction factor matrix, which requires running several simulations to estimate all the elements of the matrix. In a recent work, Schnell et al. [“Thermodynamics of small systems embedded in a reservoir: A detailed analysis of finite size effects,” Mol. Phys. 110, 1069–1079 (2012)] developed an approach to determine the full matrix of thermodynamic factors from a single simulation in bulk. This approach relies on the finite size effects of small systems on the density fluctuations. We present here an extension of their work to inhomogeneous Lennard Jones fluids confined in slit pores. We first verified this extension by cross-validating the results obtained from this approach with the results obtained from the simulated adsorption isotherms, which allows the thermodynamic factor in a porous medium to be determined. We then studied the effects of the pore width (from 1 to 15 molecular sizes), of the solid-fluid interaction potential (Lennard Jones 9-3, hard wall potential) and of the reduced fluid density (from 0.1 to 0.7 at a reduced temperature T* = 2) on the thermodynamic factor. The deviation of the thermodynamic factor from its equivalent bulk value decreases when increasing the pore width and becomes insignificant for reduced pore widths above 15. We also found that the thermodynamic factor is sensitive to the magnitude of the fluid-fluid and solid-fluid interactions, which softens or exacerbates the density fluctuations.

  8. Using Modeling and Simulation to Complement Testing for Increased Understanding of Weapon Subassembly Response.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wong, Michael K.; Davidson, Megan

    As part of Sandia’s nuclear deterrence mission, the B61-12 Life Extension Program (LEP) aims to modernize the aging weapon system. Modernization requires requalification and Sandia is using high performance computing to perform advanced computational simulations to better understand, evaluate, and verify weapon system performance in conjunction with limited physical testing. The Nose Bomb Subassembly (NBSA) of the B61-12 is responsible for producing a fuzing signal upon ground impact. The fuzing signal is dependent upon electromechanical impact sensors producing valid electrical fuzing signals at impact. Computer generated models were used to assess the timing between the impact sensor’s response to the deceleration of impact and damage to major components and system subassemblies. The modeling and simulation team worked alongside the physical test team to design a large-scale reverse ballistic test to not only assess system performance, but to also validate their computational models. The reverse ballistic test conducted at Sandia’s sled test facility sent a rocket sled with a representative target into a stationary B61-12 (NBSA) to characterize the nose crush and functional response of NBSA components. Data obtained from data recorders and high-speed photometrics were integrated with previously generated computer models in order to refine and validate the model’s ability to reliably simulate real-world effects. Large-scale tests are impractical to conduct for every single impact scenario. By creating reliable computer models, we can perform simulations that identify trends and produce estimates of outcomes over the entire range of required impact conditions. Sandia’s HPCs enable geometric resolution that was unachievable before, allowing for more fidelity and detail, and creating simulations that can provide insight to support evaluation of requirements and performance margins. As computing resources continue to improve, researchers at Sandia are hoping to improve these simulations so they provide increasingly credible analysis of the system response and performance over the full range of conditions.

  9. Evaluation of urban surface parameterizations in the WRF model using measurements during the Texas Air Quality Study 2006 field campaign

    NASA Astrophysics Data System (ADS)

    Lee, S.-H.; Kim, S.-W.; Angevine, W. M.; Bianco, L.; McKeen, S. A.; Senff, C. J.; Trainer, M.; Tucker, S. C.; Zamora, R. J.

    2011-03-01

    The performance of different urban surface parameterizations in the WRF (Weather Research and Forecasting) model in simulating the urban boundary layer (UBL) was investigated using extensive measurements during the Texas Air Quality Study 2006 field campaign. The extensive field measurements collected at surface (meteorological, wind profiler, energy balance flux) sites, from a research aircraft, and from a research vessel characterized 3-dimensional atmospheric boundary layer structures over the Houston-Galveston Bay area, providing a unique opportunity for the evaluation of the physical parameterizations. The model simulations were performed over the Houston metropolitan area for a summertime period (12-17 August) using a bulk urban parameterization in the Noah land surface model (original LSM), a modified LSM, and a single-layer urban canopy model (UCM). The UCM simulation compared quite well with the observations over the Houston urban areas, reducing the systematic model biases in the original LSM simulation by 1-2 °C in near-surface air temperature and by 200-400 m in UBL height, on average. A more realistic turbulent (sensible and latent heat) energy partitioning contributed to the improvements in the UCM simulation. The original LSM significantly overestimated the sensible heat flux (~200 W m-2) over the urban areas, resulting in a warmer and higher UBL. The modified LSM slightly reduced the warm and high biases in near-surface air temperature (0.5-1 °C) and UBL height (~100 m) as a result of the effects of urban vegetation. The relatively strong thermal contrast between the Houston area and the water bodies (Galveston Bay and the Gulf of Mexico) in the LSM simulations enhanced the sea/bay breezes, but the model performance in predicting local wind fields was similar among the simulations in terms of statistical evaluations. These results suggest that a proper surface representation (e.g. urban vegetation, surface morphology) and explicit parameterizations of urban physical processes are required for accurate urban atmospheric numerical modeling.

  10. Lightweight computational steering of very large scale molecular dynamics simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beazley, D.M.; Lomdahl, P.S.

    1996-09-01

    We present a computational steering approach for controlling, analyzing, and visualizing very large scale molecular dynamics simulations involving tens to hundreds of millions of atoms. Our approach relies on extensible scripting languages and an easy to use tool for building extensions and modules. The system is extremely easy to modify, works with existing C code, is memory efficient, and can be used from inexpensive workstations and networks. We demonstrate how we have used this system to manipulate data from production MD simulations involving as many as 104 million atoms running on the CM-5 and Cray T3D. We also show how this approach can be used to build systems that integrate common scripting languages (including Tcl/Tk, Perl, and Python), simulation code, user extensions, and commercial data analysis packages.
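
    The steering architecture described here interposes a scripting layer between the user and a running simulation. The Python sketch below shows the core pattern in miniature (names are illustrative, and the original system wrapped C code for Tcl/Perl/Python rather than using Python threads): the simulation loop polls a command queue each step so parameters can be inspected or changed in flight:

      import queue
      import threading
      import time

      commands = queue.Queue()

      def simulate(steps=200):
          params = {"temperature": 300.0}
          for step in range(steps):
              while not commands.empty():
                  key, value = commands.get()   # apply a steering command
                  params[key] = value
              time.sleep(0.001)                 # stand-in for one MD timestep
          print("final temperature:", params["temperature"])

      worker = threading.Thread(target=simulate)
      worker.start()
      commands.put(("temperature", 350.0))      # issued from the interactive session
      worker.join()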

  11. Cutting the Wires: Modularization of Cellular Networks for Experimental Design

    PubMed Central

    Lang, Moritz; Summers, Sean; Stelling, Jörg

    2014-01-01

    Understanding naturally evolved cellular networks requires the consecutive identification and revision of the interactions between relevant molecular species. In this process, initially often simplified and incomplete networks are extended by integrating new reactions or whole subnetworks to increase consistency between model predictions and new measurement data. However, increased consistency with experimental data alone is not sufficient to show the existence of biomolecular interactions, because the interplay of different potential extensions might lead to overall similar dynamics. Here, we present a graph-based modularization approach to facilitate the design of experiments targeted at independently validating the existence of several potential network extensions. Our method is based on selecting the outputs to measure during an experiment, such that each potential network extension becomes virtually insulated from all others during data analysis. Each output defines a module that only depends on one hypothetical network extension, and all other outputs act as virtual inputs to achieve insulation. Given appropriate experimental time-series measurements of the outputs, our modules can be analyzed, simulated, and compared to the experimental data separately. Our approach exemplifies the close relationship between structural systems identification and modularization, an interplay that promises development of related approaches in the future. PMID:24411264

  12. 37 CFR 1.730 - Applicant for extension of patent term; signature requirements.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    37 CFR 1.730 (2010-07-01 edition), Patents, Trademarks, and Copyrights: Applicant for extension of patent term; signature requirements. (a) Any application for extension of a patent...

  13. Efficient simulations of the aqueous bio-interface of graphitic nanostructures with a polarisable model

    NASA Astrophysics Data System (ADS)

    Hughes, Zak E.; Tomásio, Susana M.; Walsh, Tiffany R.

    2014-04-01

    To fully harness the enormous potential offered by interfaces between graphitic nanostructures and biomolecules, detailed connections between adsorbed conformations and adsorption behaviour are needed. To elucidate these links, a key approach, in partnership with experimental techniques, is molecular simulation. For this, a force-field (FF) that can appropriately capture the relevant physics and chemistry of these complex bio-interfaces, while allowing extensive conformational sampling, and also supporting inter-operability with known biological FFs, is a pivotal requirement. Here, we present and apply such a force-field, GRAPPA, designed to work with the CHARMM FF. GRAPPA is an efficiently implemented polarisable force-field, informed by extensive plane-wave DFT calculations using the revPBE-vdW-DF functional. GRAPPA adequately recovers the spatial and orientational structuring of the aqueous interface of graphene and carbon nanotubes, compared with more sophisticated approaches. We apply GRAPPA to determine the free energy of adsorption for a range of amino acids, identifying Trp, Tyr and Arg to have the strongest binding affinity and Asp to be a weak binder. The GRAPPA FF can be readily incorporated into mainstream simulation packages, and will enable large-scale polarisable biointerfacial simulations at graphitic interfaces, that will aid the development of biomolecule-mediated, solution-based graphene processing and self-assembly strategies.

  14. Dissipative particle dynamics simulations of polymer chains: scaling laws and shearing response compared to DNA experiments.

    PubMed

    Symeonidis, Vasileios; Em Karniadakis, George; Caswell, Bruce

    2005-08-12

    Dissipative particle dynamics simulations of several bead-spring representations of polymer chains in dilute solution are used to demonstrate the correct static scaling laws for the radius of gyration. Shear flow results for the wormlike chain simulating single DNA molecules compare well with average extensions from experiments, irrespective of the number of beads. However, coarse graining with more than a few beads degrades the agreement of the autocorrelation of the extension.
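
    The static scaling law in question, Rg ~ N^nu, is easy to probe numerically. The Python sketch below checks the ideal-chain exponent nu = 1/2 by generating random-walk bead chains; this is an illustrative stand-in, since the paper's DPD chains include solvent and excluded-volume effects, which shift nu toward roughly 3/5:

      import numpy as np

      # Numerical check of Rg ~ N^nu for ideal (random-walk) bead chains.
      rng = np.random.default_rng(0)
      for n_beads in (16, 64, 256, 1024):
          rg2 = []
          for _ in range(200):
              chain = np.cumsum(rng.normal(size=(n_beads, 3)), axis=0)
              rg2.append(((chain - chain.mean(axis=0)) ** 2).sum(axis=1).mean())
          print(n_beads, np.sqrt(np.mean(rg2)))   # log-log slope vs N gives nu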

  15. Modernization and optimization of a legacy open-source CFD code for high-performance computing architectures

    DOE PAGES

    Gel, Aytekin; Hu, Jonathan; Ould-Ahmed-Vall, ElMoustapha; ...

    2017-03-20

    Legacy codes remain a crucial element of today's simulation-based engineering ecosystem due to the extensive validation process and investment in such software. The rapid evolution of high-performance computing architectures necessitates the modernization of these codes. One approach to modernization is a complete overhaul of the code. However, this could require extensive investments, such as rewriting in modern languages, new data constructs, etc., which will necessitate systematic verification and validation to re-establish the credibility of the computational models. The current study advocates using a more incremental approach and is a culmination of several modernization efforts of the legacy code MFIX, which is an open-source computational fluid dynamics code that has evolved over several decades, is widely used in multiphase flows, and is still being developed by the National Energy Technology Laboratory. Two different modernization approaches, ‘bottom-up’ and ‘top-down’, are illustrated. Preliminary results show up to 8.5x improvement at the selected kernel level with the first approach and up to 50% improvement in total simulated time with the latter for the demonstration cases and target HPC systems employed.

  16. Modernization and optimization of a legacy open-source CFD code for high-performance computing architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gel, Aytekin; Hu, Jonathan; Ould-Ahmed-Vall, ElMoustapha

    Legacy codes remain a crucial element of today's simulation-based engineering ecosystem due to the extensive validation process and investment in such software. The rapid evolution of high-performance computing architectures necessitates the modernization of these codes. One approach to modernization is a complete overhaul of the code. However, this could require extensive investments, such as rewriting in modern languages, new data constructs, etc., which will necessitate systematic verification and validation to re-establish the credibility of the computational models. The current study advocates using a more incremental approach and is a culmination of several modernization efforts of the legacy code MFIX, which is an open-source computational fluid dynamics code that has evolved over several decades, is widely used in multiphase flows, and is still being developed by the National Energy Technology Laboratory. Two different modernization approaches, ‘bottom-up’ and ‘top-down’, are illustrated. Preliminary results show up to 8.5x improvement at the selected kernel level with the first approach and up to 50% improvement in total simulated time with the latter for the demonstration cases and target HPC systems employed.

  17. Simulation of Wind Profile Perturbations for Launch Vehicle Design

    NASA Technical Reports Server (NTRS)

    Adelfang, S. I.

    2004-01-01

    Ideally, a statistically representative sample of measured high-resolution wind profiles with wavelengths as small as tens of meters is required in design studies to establish aerodynamic load indicator dispersions and vehicle control system capability. At most potential launch sites, high-resolution wind profiles may not exist. Representative samples of Rawinsonde wind profiles to altitudes of 30 km are more likely to be available from the extensive network of measurement sites established for routine sampling in support of weather observing and forecasting activity. Such a sample, large enough to be statistically representative of relatively large wavelength perturbations, would be inadequate for launch vehicle design assessments because the Rawinsonde system accurately measures wind perturbations with wavelengths no smaller than 2000 m (1000 m altitude increment). The Kennedy Space Center (KSC) Jimsphere wind profiles (150/month and seasonal 2 and 3.5-hr pairs) are the only adequate samples of high-resolution profiles (approx. 150 to 300 m effective resolution, though over-sampled at 25 m intervals) that have been used extensively for launch vehicle design assessments. Therefore, a simulation process has been developed for enhancement of measured low-resolution Rawinsonde profiles that would be applicable in preliminary launch vehicle design studies at launch sites other than KSC.

  18. TADSim: Discrete Event-based Performance Prediction for Temperature Accelerated Dynamics

    DOE PAGES

    Mniszewski, Susan M.; Junghans, Christoph; Voter, Arthur F.; ...

    2015-04-16

    Next-generation high-performance computing will require more scalable and flexible performance prediction tools to evaluate software-hardware co-design choices relevant to scientific applications and hardware architectures. Here, we present a new class of tools called application simulators: parameterized, fast-running proxies of large-scale scientific applications using parallel discrete event simulation. Parameterized choices for the algorithmic method and hardware options provide a rich space for design exploration and allow us to quickly find well-performing software-hardware combinations. We demonstrate our approach with a TADSim simulator that models the temperature-accelerated dynamics (TAD) method, an algorithmically complex and parameter-rich member of the accelerated molecular dynamics (AMD) family of molecular dynamics methods. The essence of the TAD application is captured without the computational expense and resource usage of the full code. We accomplish this by identifying the time-intensive elements, quantifying algorithm steps in terms of those elements, abstracting them out, and replacing them by the passage of time. We use TADSim to quickly characterize the runtime performance and algorithmic behavior for the otherwise long-running simulation code. We extend TADSim to model algorithm extensions, such as speculative spawning of the compute-bound stages, and predict performance improvements without having to implement such a method. Validation against the actual TAD code shows close agreement for the evolution of an example physical system, a silver surface. Finally, focused parameter scans have allowed us to study algorithm parameter choices over far more scenarios than would be possible with the actual simulation. This has led to interesting performance-related insights and suggested extensions.

  19. Supercomputing with TOUGH2 family codes for coupled multi-physics simulations of geologic carbon sequestration

    NASA Astrophysics Data System (ADS)

    Yamamoto, H.; Nakajima, K.; Zhang, K.; Nanai, S.

    2015-12-01

    Powerful numerical codes that are capable of modeling complex coupled processes of physics and chemistry have been developed for predicting the fate of CO2 in reservoirs as well as its potential impacts on groundwater and subsurface environments. However, they are often computationally demanding for solving highly non-linear models in sufficient spatial and temporal resolutions. Geological heterogeneity and uncertainties further increase the challenges in modeling work. Two-phase flow simulations in heterogeneous media usually require much longer computational time than in homogeneous media, and uncertainties in reservoir properties may necessitate stochastic simulations with multiple realizations. Recently, massively parallel supercomputers with thousands of processors have become available to scientific and engineering communities. Such supercomputers may attract attention from geoscientists and reservoir engineers for solving large, non-linear models at higher resolutions within a reasonable time. However, to make them a useful tool, it is essential to tackle several practical obstacles to utilizing large numbers of processors effectively in general-purpose reservoir simulators. We have implemented massively parallel versions of two TOUGH2 family codes (the multi-phase flow simulator TOUGH2 and the chemically reactive transport simulator TOUGHREACT) on two different types (vector- and scalar-type) of supercomputers with a thousand to tens of thousands of processors. After completing implementation and extensive tune-up on the supercomputers, the computational performance was measured for three simulations with multi-million-cell grid models, including a simulation of the dissolution-diffusion-convection process that requires high spatial and temporal resolutions to simulate the growth of small convective fingers of CO2-dissolved water to larger ones at reservoir scale. The performance measurements confirmed that both simulators exhibit excellent scalability, showing almost linear speedup with the number of processors up to over ten thousand cores. This generally allows us to perform coupled multi-physics (THC) simulations on high-resolution, multi-million-cell geologic models in a practical time (e.g., less than a second per time step).

  20. Simulating propagation of coherent light in random media using the Fredholm type integral equation

    NASA Astrophysics Data System (ADS)

    Kraszewski, Maciej; Pluciński, Jerzy

    2017-06-01

    Studying the propagation of light in random scattering materials is important for both basic and applied research. Such studies often require the use of numerical methods for simulating the behavior of light beams in random media. However, if such simulations must account for the coherence properties of light, they become complex numerical problems. There are well-established methods for simulating multiple scattering of light (e.g., Radiative Transfer Theory and Monte Carlo methods), but they do not treat coherence properties of light directly. Some variants of these methods allow the behavior of coherent light to be predicted, but only for an averaged realization of the scattering medium. This limits their application in studying many physical phenomena connected to a specific distribution of scattering particles (e.g., laser speckle). In general, numerical simulation of coherent light propagation in a specific realization of a random medium is a time- and memory-consuming problem. The goal of the presented research was to develop a new, efficient method for solving this problem. The method, presented in our earlier works, is based on solving the Fredholm type integral equation that describes the multiple light scattering process. This equation can be discretized and solved numerically using various algorithms, e.g., by directly solving the corresponding linear system, or by using iterative or Monte Carlo solvers. Here we present recent developments of this method, including its comparison with well-known analytical results and finite-difference-type simulations. We also present an extension of the method to multiple scattering of polarized light by large spherical particles, which joins the presented mathematical formalism with Mie theory.
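
    A minimal Nyström-style sketch of this approach, discretizing a Fredholm equation of the second kind on a 1D grid and solving the resulting dense linear system directly; the kernel, wavenumber, and scattering strength below are illustrative placeholders, not the scattering kernel of the cited work:

    ```python
    import numpy as np

    def solve_fredholm(x, psi_inc, kernel, weights=None):
        """Solve psi(x) = psi_inc(x) + int K(x, x') psi(x') dx' by
        direct solution of the discretized linear system. A uniform
        grid is assumed when quadrature weights are not supplied.
        """
        n = x.size
        w = weights if weights is not None else np.full(n, x[1] - x[0])
        K = kernel(x[:, None], x[None, :]) * w[None, :]   # quadrature matrix
        A = np.eye(n, dtype=complex) - K
        return np.linalg.solve(A, psi_inc)

    # Illustrative 1D example: a free-space-like kernel with a small
    # scattering strength eta on a unit slab (all values hypothetical).
    k0, eta = 2.0 * np.pi, 0.05
    x = np.linspace(0.0, 1.0, 400)
    psi_inc = np.exp(1j * k0 * x)
    kern = lambda x1, x2: 1j * eta * np.exp(1j * k0 * np.abs(x1 - x2))
    psi = solve_fredholm(x, psi_inc, kern)
    ```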

  1. SiMon: Simulation Monitor for Computational Astrophysics

    NASA Astrophysics Data System (ADS)

    Xuran Qian, Penny; Cai, Maxwell Xu; Portegies Zwart, Simon; Zhu, Ming

    2017-09-01

    Scientific discovery via numerical simulations is important in modern astrophysics. This relatively new branch of astrophysics has become possible due to the development of reliable numerical algorithms and the high performance of modern computing technologies. These enable the analysis of large collections of observational data and the acquisition of new data via simulations at unprecedented accuracy and resolution. Ideally, simulations run until they reach some pre-determined termination condition, but often other factors cause extensive numerical approaches to break down at an earlier stage: processes tend to be interrupted due to unexpected events in the software or the hardware. In those cases, the scientist handles the interrupt manually, which is time-consuming and prone to errors. We present the Simulation Monitor (SiMon) to automate the farming of large and extensive simulation processes. Our method is light-weight, fully automates the entire workflow management, operates concurrently across multiple platforms, and can be installed in user space. Inspired by the process of crop farming, we perceive each simulation as a crop in the field, so that running simulations becomes analogous to growing crops. With the development of SiMon we ease the technical aspects of simulation management. The initial package was developed for extensive parameter searches in numerical simulations, but it turns out to work equally well for automating the computational processing and reduction of observational data.
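
    A toy farming loop in the spirit of SiMon, though not SiMon's actual implementation: launch each simulation as a subprocess, poll for unexpected exits, and restart crashed runs up to a retry limit.

    ```python
    import subprocess
    import time

    def farm(commands, max_restarts=3, poll_interval=10.0):
        """Launch all simulations, then poll: a run that exits with a
        non-zero code is treated as a failed crop and replanted until
        its retry budget is exhausted."""
        jobs = {i: {"cmd": cmd, "proc": subprocess.Popen(cmd), "restarts": 0}
                for i, cmd in enumerate(commands)}
        while any(j["proc"].poll() is None or
                  (j["proc"].returncode != 0 and j["restarts"] < max_restarts)
                  for j in jobs.values()):
            for j in jobs.values():
                rc = j["proc"].poll()
                if rc is not None and rc != 0 and j["restarts"] < max_restarts:
                    j["restarts"] += 1
                    j["proc"] = subprocess.Popen(j["cmd"])  # restart the run
            time.sleep(poll_interval)

    # e.g. farm([["python", "run_nbody.py", "--seed", str(s)] for s in range(4)])
    # (script name and flags are hypothetical)
    ```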

  2. Toward a Principled Sampling Theory for Quasi-Orders

    PubMed Central

    Ünlü, Ali; Schrepp, Martin

    2016-01-01

    Quasi-orders, that is, reflexive and transitive binary relations, have numerous applications. In educational theories, the dependencies of mastery among the problems of a test can be modeled by quasi-orders. Methods such as item tree or Boolean analysis that mine for quasi-orders in empirical data are sensitive to the underlying quasi-order structure. These data mining techniques have to be compared based on extensive simulation studies, with unbiased samples of randomly generated quasi-orders at their basis. In this paper, we develop techniques that can provide the required quasi-order samples. We introduce a discrete doubly inductive procedure for incrementally constructing the set of all quasi-orders on a finite item set. A randomization of this deterministic procedure allows us to generate representative samples of random quasi-orders. With an outer level inductive algorithm, we consider the uniform random extensions of the trace quasi-orders to higher dimension. This is combined with an inner level inductive algorithm to correct the extensions that violate the transitivity property. The inner level correction step entails sampling biases. We propose three algorithms for bias correction and investigate them in simulation. It is evident that, even for item sets of up to 50 items, the new algorithms create close-to-representative quasi-order samples within acceptable computing time. Hence, the principled approach is a significant improvement to existing methods that are used to draw quasi-orders uniformly at random but cannot cope with reasonably large item sets. PMID:27965601
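
    For contrast, the naive baseline that the paper improves upon can be sketched as follows: draw a random relation and force reflexivity and transitivity with a Warshall-style closure. The closure step is exactly what introduces the sampling bias that the paper's inductive algorithms are designed to avoid (the function name and edge probability are illustrative):

    ```python
    import numpy as np

    def random_quasi_order(n, p=0.2, rng=None):
        """Naive, biased sampler (NOT the paper's doubly inductive
        method): random binary relation with edge probability p, made
        reflexive and then transitively closed."""
        rng = rng or np.random.default_rng()
        R = rng.random((n, n)) < p
        np.fill_diagonal(R, True)                # reflexivity
        for k in range(n):                       # Warshall transitive closure
            R |= np.outer(R[:, k], R[k, :])
        return R
    ```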

  3. Toward a Principled Sampling Theory for Quasi-Orders.

    PubMed

    Ünlü, Ali; Schrepp, Martin

    2016-01-01

    Quasi-orders, that is, reflexive and transitive binary relations, have numerous applications. In educational theories, the dependencies of mastery among the problems of a test can be modeled by quasi-orders. Methods such as item tree or Boolean analysis that mine for quasi-orders in empirical data are sensitive to the underlying quasi-order structure. These data mining techniques have to be compared based on extensive simulation studies, with unbiased samples of randomly generated quasi-orders at their basis. In this paper, we develop techniques that can provide the required quasi-order samples. We introduce a discrete doubly inductive procedure for incrementally constructing the set of all quasi-orders on a finite item set. A randomization of this deterministic procedure allows us to generate representative samples of random quasi-orders. With an outer level inductive algorithm, we consider the uniform random extensions of the trace quasi-orders to higher dimension. This is combined with an inner level inductive algorithm to correct the extensions that violate the transitivity property. The inner level correction step entails sampling biases. We propose three algorithms for bias correction and investigate them in simulation. It is evident that, even for item sets of up to 50 items, the new algorithms create close-to-representative quasi-order samples within acceptable computing time. Hence, the principled approach is a significant improvement to existing methods that are used to draw quasi-orders uniformly at random but cannot cope with reasonably large item sets.

  4. Normal functional range of motion of the cervical spine during 15 activities of daily living.

    PubMed

    Bible, Jesse E; Biswas, Debdut; Miller, Christopher P; Whang, Peter G; Grauer, Jonathan N

    2010-02-01

    Prospective clinical study. The purpose of this investigation was to quantify normal cervical range of motion (ROM) and compare these results to those used to perform 15 simulated activities of daily living (ADLs) in asymptomatic subjects. Previous studies looking at cervical ROM during ADLs have been limited and used measuring devices that do not record continuous motion. A noninvasive electrogoniometer and torsiometer were used to measure the ROM of the cervical spine. The accuracy and reliability of the devices were confirmed by comparing the ROM values acquired from dynamic flexion/extension and lateral bending radiographs to those provided by the device, which was activated while the radiographs were obtained. Intraobserver reliability was established by calculating the intraclass correlation coefficient for repeated measurements on the same subjects by 1 investigator on consecutive days. These tools were employed in a clinical laboratory setting to evaluate the full active ROM of the cervical spines (ie, flexion/extension, lateral bending, and axial rotation) of 60 asymptomatic subjects (30 females and 30 males; age, 20 to 75 y) as well as to assess the functional ROM required to complete 15 simulated ADLs. When compared with radiographic measurements, the electrogoniometer was found to be accurate within 2.3+/-2.2 degrees (mean+/-SD) and the intraobserver reliabilities for measuring the full active and functional ROM were both excellent (intraclass correlation coefficient of 0.96 and 0.92, respectively). The absolute ROM and percentage of full active cervical spinal ROM used during the 15 ADLs was 13 to 32 degrees and 15% to 32% (median, 20 degrees/19%) for flexion/extension, 9 to 21 degrees and 11% to 27% (14 degrees/18%) for lateral bending, and 13 to 57 degrees and 12% to 92% (18 degrees/19%) for rotation. Backing up a car required the most ROM of all the ADLs, involving 32% of sagittal, 26% of lateral, and 92% of rotational motion. In general, personal hygiene ADLs such as washing hands and hair, shaving, and applying make-up entailed a significantly greater ROM relative to locomotive ADLs including walking and traveling up and down a set of stairs (P<0.0001); in addition, compared with climbing up these steps, significantly more sagittal and rotational motion was used when descending stairs (P=0.003 and P=0.016, respectively). When picking up an object from the ground, a squatting technique required a lower percentage of lateral and rotational ROM than bending at the waist (P=0.002 and P<0.0001). By quantifying the amounts of cervical motion required to execute a series of simulated ADLs, this study indicates that most individuals use a relatively small percentage of their full active ROM when performing such activities. These findings provide baseline data which may allow clinicians to accurately assess preoperative impairment and postsurgical outcomes.

  5. Data compression for satellite images

    NASA Technical Reports Server (NTRS)

    Chen, P. H.; Wintz, P. A.

    1976-01-01

    An efficient data compression system is presented for satellite pictures and two grey level pictures derived from satellite pictures. The compression techniques take advantage of the correlation between adjacent picture elements. Several source coding methods are investigated. Double delta coding is presented and shown to be the most efficient. Both the predictive differential quantizing technique and double delta coding can be significantly improved by applying a background skipping technique. An extension code is constructed. This code requires very little storage space and operates efficiently. Simulation results are presented for various coding schemes and source codes.
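
    A minimal sketch of double delta coding as described, transmitting second differences of the pixel stream so that smooth runs map to near-zero symbols; the paper's variable-length source code and background-skipping stages are omitted:

    ```python
    def double_delta_encode(pixels):
        """Emit the second difference of the scan line."""
        out, prev, prev_delta = [], 0, 0
        for p in pixels:
            delta = p - prev
            out.append(delta - prev_delta)   # second difference
            prev, prev_delta = p, delta
        return out

    def double_delta_decode(codes):
        """Invert the encoder by accumulating differences twice."""
        pixels, prev, prev_delta = [], 0, 0
        for c in codes:
            delta = prev_delta + c
            prev += delta
            pixels.append(prev)
            prev_delta = delta
        return pixels

    assert double_delta_decode(double_delta_encode([10, 12, 14, 16, 90])) == \
           [10, 12, 14, 16, 90]
    ```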

  6. Ascent Guidance for a Winged Boost Vehicle. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Corvin, Michael Alexander

    1988-01-01

    The objective of the advanced ascent guidance study was to investigate guidance concepts which could contribute to increased autonomy during ascent operations in a winged boost vehicle such as the proposed Shuttle II. The guidance scheme was required to yield a near-optimal ascent in the presence of vehicle system and environmental dispersions. The study included consideration of trajectory shaping issues, trajectory design, closed-loop and predictive adaptive guidance techniques, and control of dynamic pressure by throttling. An extensive ascent vehicle simulation capability was developed for use in the study.

  7. An Auto-Calibrating Knee Flexion-Extension Axis Estimator Using Principal Component Analysis with Inertial Sensors.

    PubMed

    McGrath, Timothy; Fineman, Richard; Stirling, Leia

    2018-06-08

    Inertial measurement units (IMUs) have been demonstrated to reliably measure human joint angles, an essential quantity in the study of biomechanics. However, most previous literature proposed IMU-based joint angle measurement systems that required manual alignment or prescribed calibration motions. This paper presents a simple, physically-intuitive method for IMU-based measurement of the knee flexion/extension angle in gait without requiring alignment or discrete calibration, based on computationally-efficient and easy-to-implement Principal Component Analysis (PCA). The method is compared against an optical motion capture knee flexion/extension angle modeled through OpenSim. The method is evaluated using both measured and simulated IMU data in an observational study (n = 15) with an absolute root-mean-square error (RMSE) of 9.24° and a zero-mean RMSE of 3.49°. Variation in error across subjects was found, revealed by a larger subject population than previous literature considered. Finally, the paper presents an explanatory model of RMSE as a function of IMU mounting location. The observational data suggest that the RMSE of the method is a function of thigh IMU perturbation and axis estimation quality. However, the effect size of these parameters is small in comparison to potential gains from improved IMU orientation estimation. Results also highlight the need to set relevant datums from which to interpret joint angles, for both truth references and estimated data.
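
    The PCA step at the heart of the method can be sketched as follows; this is a simplification of the authors' pipeline, assuming an (N, 3) array of gyroscope samples recorded during gait:

    ```python
    import numpy as np

    def flexion_extension_axis(gyro):
        """During walking most angular velocity lies along the knee
        flexion/extension axis, so it emerges as the first principal
        component of the mean-centered gyroscope data."""
        g = gyro - gyro.mean(axis=0)
        _, _, vt = np.linalg.svd(g, full_matrices=False)
        axis = vt[0]                      # first right singular vector
        return axis / np.linalg.norm(axis)

    # The knee angle can then be tracked by projecting the relative
    # thigh-shank angular velocity onto this axis and integrating.
    ```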

  8. An Object Oriented Extensible Architecture for Affordable Aerospace Propulsion Systems

    NASA Technical Reports Server (NTRS)

    Follen, Gregory J.

    2003-01-01

    Driven by a need to explore and develop propulsion systems that exceeded current computing capabilities, NASA Glenn embarked on a novel strategy leading to the development of an architecture that enables propulsion simulations never before thought possible. Full-engine, three-dimensional computational fluid dynamics simulations of propulsion systems were deemed impossible due to the impracticality of the hardware and software computing systems required. However, with a software paradigm shift and an embracing of parallel and distributed processing, an architecture was designed to meet the needs of future propulsion system modeling. The author suggests that the architecture designed at the NASA Glenn Research Center for propulsion system modeling has potential for impacting the direction of development of affordable weapons systems currently under consideration by the Applied Vehicle Technology Panel (AVT).

  9. Two-flavor simulations of ρ(770) and the role of the KK̄ channel

    DOE PAGES

    Hu, B.; Molina, R.; Döring, M.; ...

    2016-09-15

    Here, the ρ(770) meson is the most extensively studied resonance in lattice QCD simulations in two (Nf = 2) and three (Nf = 2 + 1) flavor formulations. We analyze Nf = 2 lattice scattering data using unitarized chiral perturbation theory, allowing not only for the extrapolation in mass but also in flavor, Nf = 2 → Nf = 2 + 1. The flavor extrapolation requires information from a global fit to ππ and πK phase shifts from experiment. While the chiral extrapolation of Nf = 2 lattice data leads to masses of the ρ(770) meson far below the experimental one, we find that the missing KK̄ channel is able to explain this discrepancy.

  10. On the design and optimisation of new fractal antenna using PSO

    NASA Astrophysics Data System (ADS)

    Rani, Shweta; Singh, A. P.

    2013-10-01

    An optimisation technique for a newly shaped fractal structure using particle swarm optimisation with curve fitting is presented in this article. The aim of the particle swarm optimisation is to find the geometry of the antenna for the required user-defined frequency. To assess the effectiveness of the presented method, a set of representative numerical simulations has been performed and the results are compared with measurements from experimental prototypes built according to the design specifications coming from the optimisation procedure. The proposed fractal antenna resonates at the 5.8 GHz industrial, scientific and medical band, which is suitable for wireless telemedicine applications. The antenna characteristics have been studied using extensive numerical simulations and are experimentally verified. The antenna exhibits well-defined radiation patterns over the band.
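
    A generic particle swarm optimiser of the kind used here can be sketched as follows; in the article's setting, `cost` would wrap an electromagnetic simulation returning the mismatch between resonant and target frequencies, and all parameter values below are illustrative:

    ```python
    import numpy as np

    def pso(cost, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
        """Standard global-best PSO over box-bounded geometry parameters."""
        rng = np.random.default_rng()
        lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
        x = rng.uniform(lo, hi, (n_particles, lo.size))
        v = np.zeros_like(x)
        pbest, pcost = x.copy(), np.array([cost(p) for p in x])
        g = pbest[pcost.argmin()].copy()
        for _ in range(iters):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
            x = np.clip(x + v, lo, hi)          # keep particles in bounds
            c = np.array([cost(p) for p in x])
            better = c < pcost
            pbest[better], pcost[better] = x[better], c[better]
            g = pbest[pcost.argmin()].copy()
        return g, pcost.min()

    # e.g. best, err = pso(lambda p: abs(simulate_f_res(p) - 5.8e9), bounds)
    # (simulate_f_res is a hypothetical EM-solver wrapper)
    ```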

  11. Modeling Europa's dust plumes

    NASA Astrophysics Data System (ADS)

    Southworth, B. S.; Kempf, S.; Schmidt, J.

    2015-12-01

    The discovery that Jupiter's moon Europa maintains a probably sporadic water vapor plume constitutes a major scientific opportunity for NASA's upcoming mission to this Galilean moon. Measuring properties of material emerging from interior sources offers a unique chance to understand conditions in Europa's subsurface ocean. Exploiting results obtained for the Enceladus plume, we simulate possible Europa plume configurations, analyze particle number density and surface deposition results, and estimate the expected flux of ice grains on a spacecraft. Due to Europa's high escape speed, observing an active plume will require low-altitude flybys, preferably at altitudes of 5-100 km. At higher altitudes a plume may escape detection. Our simulations provide an extensive library documenting the possible structure of Europa dust plumes, which can be quickly refined as more data on Europa dust plumes are collected.

  12. Simulating dynamic and mixed-severity fire regimes: a process-based fire extension for LANDIS-II

    Treesearch

    Brian R. Sturtevant; Robert M. Scheller; Brian R. Miranda; Douglas Shinneman; Alexandra Syphard

    2009-01-01

    Fire regimes result from reciprocal interactions between vegetation and fire that may be further affected by other disturbances, including climate, landform, and terrain. In this paper, we describe fire and fuel extensions for the forest landscape simulation model, LANDIS-II, that allow dynamic interactions among fire, vegetation, climate, and landscape structure, and...

  13. Development of modelling method selection tool for health services management: from problem structuring methods to modelling and simulation methods.

    PubMed

    Jun, Gyuchan T; Morris, Zoe; Eldabi, Tillal; Harper, Paul; Naseer, Aisha; Patel, Brijesh; Clarkson, John P

    2011-05-19

    There is an increasing recognition that modelling and simulation can assist in the process of designing health care policies, strategies and operations. However, the current use is limited and answers to questions such as what methods to use and when remain somewhat underdeveloped. The aim of this study is to provide a mechanism for decision makers in health services planning and management to compare a broad range of modelling and simulation methods so that they can better select and use them or better commission relevant modelling and simulation work. This paper proposes a modelling and simulation method comparison and selection tool developed from a comprehensive literature review, the research team's extensive expertise and inputs from potential users. Twenty-eight different methods were identified, characterised by their relevance to different application areas, project life cycle stages, types of output and levels of insight, and four input resources required (time, money, knowledge and data). The characterisation is presented in matrix form to allow quick comparison and selection. This paper also highlights significant knowledge gaps in the existing literature when assessing the applicability of particular approaches to health services management, where modelling and simulation skills are scarce, let alone money and time. A modelling and simulation method comparison and selection tool is developed to assist with the selection of methods appropriate to supporting specific decision-making processes. In particular it addresses the issue of which method is most appropriate to which specific health services management problem, what the user might expect to be obtained from the method, and what is required to use the method. In summary, we believe the tool adds value to the scarce existing literature on method comparison and selection.
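
    The comparison-and-selection matrix can be thought of as a simple lookup structure; a toy version with made-up entries, not the paper's actual 28-method characterisation, might look like:

    ```python
    # Filter candidate methods by required inputs and desired output.
    methods = {
        "discrete-event simulation": {"data": "high", "output": "quantitative"},
        "system dynamics":           {"data": "low",  "output": "quantitative"},
        "soft systems methodology":  {"data": "low",  "output": "qualitative"},
    }
    need = {"data": "low", "output": "quantitative"}
    print([m for m, c in methods.items()
           if all(c[k] == v for k, v in need.items())])  # -> ['system dynamics']
    ```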

  14. Challenges in reducing the computational time of QSTS simulations for distribution system analysis.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deboever, Jeremiah; Zhang, Xiaochen; Reno, Matthew J.

    The rapid increase in penetration of distributed energy resources on the electric power distribution system has created a need for more comprehensive interconnection modelling and impact analysis. Unlike conventional scenario-based studies, quasi-static time-series (QSTS) simulations can realistically model time-dependent voltage controllers and the diversity of potential impacts that can occur at different times of year. However, to accurately model a distribution system with all its controllable devices, a yearlong simulation at 1-second resolution is often required, which could take conventional computers a computational time of 10 to 120 hours when an actual unbalanced distribution feeder is modeled. This computational burden is a clear limitation to the adoption of QSTS simulations in interconnection studies and for determining optimal control solutions for utility operations. Our ongoing research to improve the speed of QSTS simulation has revealed many unique aspects of distribution system modelling and sequential power flow analysis that make fast QSTS a very difficult problem to solve. In this report, the most relevant challenges in reducing the computational time of QSTS simulations are presented: number of power flows to solve, circuit complexity, time dependence between time steps, multiple valid power flow solutions, controllable element interactions, and extensive accurate simulation analysis.
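
    The scale of the burden follows directly from the number of sequential power flows; a back-of-the-envelope check, with per-power-flow solve times chosen purely for illustration:

    ```python
    # A yearlong QSTS run at 1-second resolution is one power flow per
    # simulated second, and they must be solved sequentially.
    seconds_per_year = 365 * 24 * 3600            # 31,536,000 power flows

    for ms_per_flow in (1.2, 14.0):               # illustrative solve times
        hours = seconds_per_year * ms_per_flow / 1e3 / 3600
        print(f"{ms_per_flow:5.1f} ms/flow -> {hours:6.1f} h")
    # ~1 ms/flow gives ~10 h and ~14 ms/flow gives ~120 h, spanning the
    # 10-120 hour range quoted above.
    ```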

  15. Electron Thermal Transport due to Magnetic Diffusion in the MST RFP

    NASA Astrophysics Data System (ADS)

    Reusch, J. A.; Anderson, J. K.; den Hartog, D. J.; Forest, C. B.; Kasten, C. P.; Schnack, D. D.; Stephens, H. D.

    2011-10-01

    Comparison of measurements made in the MST RFP to the results from extensive nonlinear resistive MHD simulations has provided two key observations. First, trapped particles reduce electron thermal diffusion; inclusion of this effect is required for quantitative agreement of simulation with measurement. Second, the structure and evolution of long-wavelength temperature fluctuations measured in MST show remarkable qualitative similarity to fluctuations appearing in a finite-pressure simulation. These simulations were run at parameters matching those of 400 kA discharges in MST (S ~ 4×10^6). In a zero-β simulation, the measured χ_e is compared to the thermal diffusion due to parallel losses along diffusing magnetic field lines, χ_st = v_∥ D_mag. Agreement is only found if the reduction in χ_st due to trapped particles is taken into account. In a second simulation, the pressure field was evolved self-consistently assuming Ohmic heating and anisotropic thermal conduction. Fluctuations in the simulated temperature are very similar in character and time evolution to temperature fluctuations measured in MST. This includes m = 1, n = 6 fluctuations that flatten the temperature profile as well as m = 1, n = 5 fluctuations that generate hot island structures near the core shortly after sawtooth crashes. This work was supported by the US DOE and NSF.

  16. NEVESIM: event-driven neural simulation framework with a Python interface.

    PubMed

    Pecevski, Dejan; Kappel, David; Jonke, Zeno

    2014-01-01

    NEVESIM is a software package for event-driven simulation of networks of spiking neurons with a fast simulation core in C++, and a scripting user interface in the Python programming language. It supports simulation of heterogeneous networks with different types of neurons and synapses, and can be easily extended by the user with new neuron and synapse types. To enable heterogeneous networks and extensibility, NEVESIM is designed to decouple the simulation logic of communicating events (spikes) between the neurons at a network level from the implementation of the internal dynamics of individual neurons. In this paper we will present the simulation framework of NEVESIM, its concepts and features, as well as some aspects of the object-oriented design approaches and simulation strategies that were utilized to efficiently implement the concepts and functionalities of the framework. We will also give an overview of the Python user interface, its basic commands and constructs, and also discuss the benefits of integrating NEVESIM with Python. One of the valuable capabilities of the simulator is to simulate exactly and efficiently networks of stochastic spiking neurons from the recently developed theoretical framework of neural sampling. This functionality was implemented as an extension on top of the basic NEVESIM framework. Altogether, the intended purpose of the NEVESIM framework is to provide a basis for further extensions that support simulation of various neural network models incorporating different neuron and synapse types that can potentially also use different simulation strategies.

  17. NEVESIM: event-driven neural simulation framework with a Python interface

    PubMed Central

    Pecevski, Dejan; Kappel, David; Jonke, Zeno

    2014-01-01

    NEVESIM is a software package for event-driven simulation of networks of spiking neurons with a fast simulation core in C++, and a scripting user interface in the Python programming language. It supports simulation of heterogeneous networks with different types of neurons and synapses, and can be easily extended by the user with new neuron and synapse types. To enable heterogeneous networks and extensibility, NEVESIM is designed to decouple the simulation logic of communicating events (spikes) between the neurons at a network level from the implementation of the internal dynamics of individual neurons. In this paper we will present the simulation framework of NEVESIM, its concepts and features, as well as some aspects of the object-oriented design approaches and simulation strategies that were utilized to efficiently implement the concepts and functionalities of the framework. We will also give an overview of the Python user interface, its basic commands and constructs, and also discuss the benefits of integrating NEVESIM with Python. One of the valuable capabilities of the simulator is to simulate exactly and efficiently networks of stochastic spiking neurons from the recently developed theoretical framework of neural sampling. This functionality was implemented as an extension on top of the basic NEVESIM framework. Altogether, the intended purpose of the NEVESIM framework is to provide a basis for further extensions that support simulation of various neural network models incorporating different neuron and synapse types that can potentially also use different simulation strategies. PMID:25177291

  18. Operations planning simulation model extension study. Volume 1: Long duration exposure facility ST-01-A automated payload

    NASA Technical Reports Server (NTRS)

    Marks, D. A.; Gendiellee, R. E.; Kelly, T. M.; Giovannello, M. A.

    1974-01-01

    Ground processing and operation activities for selected automated and sortie payloads are evaluated. Functional flow activities are expanded to identify payload launch site facility and support requirements. Payload definitions are analyzed from the launch site ground processing viewpoint and then processed through the expanded functional flow activities. The requirements generated from the evaluation are compared with those contained in the data sheets. The following payloads were included in the evaluation: Long Duration Exposure Facility; Life Sciences Shuttle Laboratory; Biomedical Experiments Scientific Satellite; Dedicated Solar Sortie Mission; Magnetic Spectrometer; and Mariner Jupiter Orbiter. The expanded functional flow activities and descriptions for the automated and sortie payloads at the launch site are presented.

  19. Sample size determination for mediation analysis of longitudinal data.

    PubMed

    Pan, Haitao; Liu, Suyu; Miao, Danmin; Yuan, Ying

    2018-03-27

    Sample size planning for longitudinal data is crucial when designing mediation studies because sufficient statistical power is not only required in grant applications and peer-reviewed publications, but is essential to reliable research results. However, sample size determination is not straightforward for mediation analysis of longitudinal designs. To facilitate planning the sample size for longitudinal mediation studies with a multilevel mediation model, this article provides the sample sizes required to achieve 80% power, obtained by simulation under various sizes of the mediation effect, within-subject correlations and numbers of repeated measures. The sample size calculation is based on three commonly used mediation tests: Sobel's method, the distribution of the product method and the bootstrap method. Among the three methods of testing the mediation effects, Sobel's method required the largest sample size to achieve 80% power. Bootstrapping and the distribution of the product method performed similarly and were more powerful than Sobel's method, as reflected by the relatively smaller sample sizes. For all three methods, the sample size required to achieve 80% power depended on the value of the ICC (i.e., the within-subject correlation); a larger ICC typically required a larger sample size. Simulation results also illustrated the advantage of the longitudinal study design. Sample size tables for the scenarios most often encountered in practice have also been published for convenient use. Extensive simulation studies showed that the distribution of the product method and the bootstrapping method have superior performance to Sobel's method, but the product method is recommended for use in practice because of its lower computational load compared with the bootstrapping method. An R package has been developed for the product method of sample size determination in longitudinal mediation study design.
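
    Sobel's normal-theory test, one of the three methods compared, reduces to a one-line statistic; the paper's power analysis repeats tests like this over many simulated longitudinal datasets. The effect estimates in the example are hypothetical:

    ```python
    import numpy as np

    def sobel_z(a, se_a, b, se_b):
        """Sobel z-statistic for the mediated effect a*b, using the
        first-order delta-method standard error."""
        se_ab = np.sqrt(b**2 * se_a**2 + a**2 * se_b**2)
        return a * b / se_ab

    # Example: a = 0.4 (SE 0.1), b = 0.3 (SE 0.1)
    z = sobel_z(0.4, 0.1, 0.3, 0.1)   # -> 2.4, significant at alpha = .05
    ```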

  20. Remote visualization and scale analysis of large turbulence datasets

    NASA Astrophysics Data System (ADS)

    Livescu, D.; Pulido, J.; Burns, R.; Canada, C.; Ahrens, J.; Hamann, B.

    2015-12-01

    Accurate simulations of turbulent flows require solving all the dynamically relevant scales of motions. This technique, called Direct Numerical Simulation, has been successfully applied to a variety of simple flows; however, the large-scale flows encountered in Geophysical Fluid Dynamics (GFD) would require meshes outside the range of the most powerful supercomputers for the foreseeable future. Nevertheless, the current generation of petascale computers has enabled unprecedented simulations of many types of turbulent flows which focus on various GFD aspects, from the idealized configurations extensively studied in the past to more complex flows closer to the practical applications. The pace at which such simulations are performed only continues to increase; however, the simulations themselves are restricted to a small number of groups with access to large computational platforms. Yet the petabytes of turbulence data offer almost limitless information on many different aspects of the flow, from the hierarchy of turbulence moments, spectra and correlations, to structure-functions, geometrical properties, etc. The ability to share such datasets with other groups can significantly reduce the time to analyze the data, help the creative process and increase the pace of discovery. Using the largest DOE supercomputing platforms, we have performed some of the biggest turbulence simulations to date, in various configurations, addressing specific aspects of turbulence production and mixing mechanisms. Until recently, the visualization and analysis of such datasets was restricted by access to large supercomputers. The public Johns Hopkins Turbulence database simplifies the access to multi-Terabyte turbulence datasets and facilitates turbulence analysis through the use of commodity hardware. First, one of our datasets, which is part of the database, will be described and then a framework that adds high-speed visualization and wavelet support for multi-resolution analysis of turbulence will be highlighted. The addition of wavelet support reduces the latency and bandwidth requirements for visualization, allowing for many concurrent users, and enables new types of analyses, including scale decomposition and coherent feature extraction.
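
    The kind of wavelet-based scale decomposition described can be sketched with an off-the-shelf wavelet library standing in for the database's server-side implementation; the signal, wavelet choice, and decomposition depth are illustrative:

    ```python
    import numpy as np
    import pywt  # PyWavelets

    # Decompose a stand-in 1D velocity signal into wavelet scales and
    # report the energy per scale, the basic operation behind scale
    # decomposition and coherent-feature extraction.
    u = np.random.standard_normal(2**14)          # placeholder signal
    coeffs = pywt.wavedec(u, "db4", level=6)      # [cA6, cD6, ..., cD1]
    energy_by_scale = [float(np.sum(c**2)) for c in coeffs]
    print(energy_by_scale)
    ```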

  1. Radial mixing and Ru-Mo isotope systematics under different accretion scenarios

    NASA Astrophysics Data System (ADS)

    Fischer, Rebecca A.; Nimmo, Francis; O'Brien, David P.

    2018-01-01

    The Ru-Mo isotopic compositions of inner Solar System bodies may reflect the provenance of accreted material and how it evolved with time, both of which are controlled by the accretion scenario these bodies experienced. Here we use a total of 116 N-body simulations of terrestrial planet accretion, run in the Eccentric Jupiter and Saturn (EJS), Circular Jupiter and Saturn (CJS), and Grand Tack scenarios, to model the Ru-Mo anomalies of Earth, Mars, and Theia analogues. This model starts by applying an initial step function in Ru-Mo isotopic composition, with compositions reflecting those in meteorites, and traces compositional evolution as planets accrete. The mass-weighted provenance of the resulting planets reveals more radial mixing in Grand Tack simulations than in EJS/CJS simulations, and more efficient mixing among late-accreted material than during the main phase of accretion in EJS/CJS simulations. We find that an extensive homogeneous inner disk region is required to reproduce Earth's observed Ru-Mo composition. EJS/CJS simulations require a homogeneous reservoir in the inner disk extending to ≥3-4 AU (≥74-98% of initial mass) to reproduce Earth's composition, while Grand Tack simulations require a homogeneous reservoir extending to ≥3-10 AU (≥97-99% of initial mass), and likely to ≥6-10 AU. In the Grand Tack model, Jupiter's initial location (the most likely location for a discontinuity in isotopic composition) is ∼3.5 AU; however, this step location has only a 33% likelihood of producing an Earth with the correct Ru-Mo isotopic signature for the most plausible model conditions. Our results give the testable predictions that Mars has zero Ru anomaly and small or zero Mo anomaly, and the Moon has zero Mo anomaly. These predictions are insensitive to wide variations in parameter choices.
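
    The mapping from accreted material to a planet's final anomaly is a mass-weighted average over the step-function initial condition; a minimal sketch, in which the step location, endmember compositions, and example masses are illustrative:

    ```python
    import numpy as np

    def final_anomaly(masses, origins, step_au=3.5, inner=0.0, outer=1.0):
        """Mass-weighted anomaly of an accreted planet: material from
        inside `step_au` carries the inner-disk composition, material
        from beyond it the outer composition (arbitrary units)."""
        masses = np.asarray(masses, dtype=float)
        comp = np.where(np.asarray(origins) < step_au, inner, outer)
        return np.average(comp, weights=masses)

    # e.g. five accreted embryos with origins straddling the step:
    print(final_anomaly([0.1, 0.3, 0.2, 0.25, 0.15],
                        [1.0, 2.2, 3.1, 4.0, 6.5]))  # -> 0.4
    ```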

  2. Workflow Management Systems for Molecular Dynamics on Leadership Computers

    NASA Astrophysics Data System (ADS)

    Wells, Jack; Panitkin, Sergey; Oleynik, Danila; Jha, Shantenu

    Molecular Dynamics (MD) simulations play an important role in a range of disciplines from materials science to biophysical systems and account for a large fraction of cycles consumed on computing resources. Increasingly, science problems require the successful execution of "many" MD simulations as opposed to a single MD simulation. There is a need to provide scalable and flexible approaches to the execution of the workload. We present preliminary results on the Titan computer at the Oak Ridge Leadership Computing Facility that demonstrate a general capability to manage workload execution agnostic of a specific MD simulation kernel or execution pattern, and in a manner that integrates disparate grid-based and supercomputing resources. Our results build upon our extensive experience of distributed workload management in the high-energy physics ATLAS project using PanDA (Production and Distributed Analysis System), coupled with recent conceptual advances in our understanding of workload management on heterogeneous resources. We will discuss how we will generalize these initial capabilities towards a more production-level service on DOE leadership resources. This research is sponsored by US DOE/ASCR and used resources of the OLCF computing facility.

  3. GeNN: a code generation framework for accelerated brain simulations

    NASA Astrophysics Data System (ADS)

    Yavuz, Esin; Turner, James; Nowotny, Thomas

    2016-01-01

    Large-scale numerical simulations of detailed brain circuit models are important for identifying hypotheses on brain functions and testing their consistency and plausibility. An ongoing challenge for simulating realistic models is, however, computational speed. In this paper, we present the GeNN (GPU-enhanced Neuronal Networks) framework, which aims to facilitate the use of graphics accelerators for computational models of large-scale neuronal networks to address this challenge. GeNN is an open source library that generates code to accelerate the execution of network simulations on NVIDIA GPUs, through a flexible and extensible interface, which does not require in-depth technical knowledge from the users. We present performance benchmarks showing that 200-fold speedup compared to a single core of a CPU can be achieved for a network of one million conductance based Hodgkin-Huxley neurons but that for other models the speedup can differ. GeNN is available for Linux, Mac OS X and Windows platforms. The source code, user manual, tutorials, Wiki, in-depth example projects and all other related information can be found on the project website http://genn-team.github.io/genn/.
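
    The flavor of such code generation can be conveyed with a toy template expander; this is illustrative only, since GeNN generates optimized CUDA C++ from model descriptions and neither this template nor the kernel is GeNN's actual output:

    ```python
    # Toy template-based generation of a GPU update kernel from a
    # user-supplied dynamics expression written in terms of V[i].
    NEURON_UPDATE = """
    __global__ void update_{name}(float* V, int n) {{
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) {{ V[i] += {dt}f * ({dynamics}); }}
    }}
    """

    def generate_kernel(name, dynamics, dt=0.1):
        """Emit CUDA kernel source for a simple Euler update."""
        return NEURON_UPDATE.format(name=name, dynamics=dynamics, dt=dt)

    print(generate_kernel("lif", "(-V[i]) / 20.0"))  # leaky integrator toy
    ```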

  4. GeNN: a code generation framework for accelerated brain simulations.

    PubMed

    Yavuz, Esin; Turner, James; Nowotny, Thomas

    2016-01-07

    Large-scale numerical simulations of detailed brain circuit models are important for identifying hypotheses on brain functions and testing their consistency and plausibility. An ongoing challenge for simulating realistic models is, however, computational speed. In this paper, we present the GeNN (GPU-enhanced Neuronal Networks) framework, which aims to facilitate the use of graphics accelerators for computational models of large-scale neuronal networks to address this challenge. GeNN is an open source library that generates code to accelerate the execution of network simulations on NVIDIA GPUs, through a flexible and extensible interface, which does not require in-depth technical knowledge from the users. We present performance benchmarks showing that 200-fold speedup compared to a single core of a CPU can be achieved for a network of one million conductance based Hodgkin-Huxley neurons but that for other models the speedup can differ. GeNN is available for Linux, Mac OS X and Windows platforms. The source code, user manual, tutorials, Wiki, in-depth example projects and all other related information can be found on the project website http://genn-team.github.io/genn/.

  5. GeNN: a code generation framework for accelerated brain simulations

    PubMed Central

    Yavuz, Esin; Turner, James; Nowotny, Thomas

    2016-01-01

    Large-scale numerical simulations of detailed brain circuit models are important for identifying hypotheses on brain functions and testing their consistency and plausibility. An ongoing challenge for simulating realistic models is, however, computational speed. In this paper, we present the GeNN (GPU-enhanced Neuronal Networks) framework, which aims to facilitate the use of graphics accelerators for computational models of large-scale neuronal networks to address this challenge. GeNN is an open source library that generates code to accelerate the execution of network simulations on NVIDIA GPUs, through a flexible and extensible interface, which does not require in-depth technical knowledge from the users. We present performance benchmarks showing that 200-fold speedup compared to a single core of a CPU can be achieved for a network of one million conductance based Hodgkin-Huxley neurons but that for other models the speedup can differ. GeNN is available for Linux, Mac OS X and Windows platforms. The source code, user manual, tutorials, Wiki, in-depth example projects and all other related information can be found on the project website http://genn-team.github.io/genn/. PMID:26740369

  6. Human factors evaluations of Free Flight Issues solved and issues remaining.

    PubMed

    Ruigrok, Rob C J; Hoekstra, Jacco M

    2007-07-01

    The Dutch National Aerospace Laboratory (NLR) has conducted extensive human-in-the-loop simulation experiments in NLR's Research Flight Simulator (RFS), focussed on human factors evaluation of Free Flight. Eight years of research, in co-operation with partners in the United States and Europe, has shown that Free Flight has the potential to increase airspace capacity by at least a factor of 3. Expected traffic loads and conflict rates for the year 2020 appear to be no major problem for professional airline crews participating in flight simulation experiments. Flight efficiency is significantly improved by user-preferred routings, including cruise climbs, while pilot workload is only slightly increased compared to today's reference. Detailed results from three projects and six human-in-the-loop experiments in NLR's Research Flight Simulator are reported. The main focus of these results is on human factors issues and particularly workload, measured both subjectively and objectively. An extensive discussion is included on many human factors issues resolved during the experiments, but also open issues are identified. An intent-based Conflict Detection and Resolution (CD&R) system provides "benefits" in terms of reduced pilot workload, but also "costs" in terms of complexity, need for priority rules, potential compatibility problems between different brands of Flight Management Systems and large bandwidth. Moreover, the intent-based system is not effective at solving multi-aircraft conflicts. A state-based CD&R system also provides "benefits" and "costs". Benefits compared to the full intent-based system are simplicity, low bandwidth requirements, easy to retrofit (no requirements to change avionics infrastructure) and the ability to solve multi-aircraft conflicts in parallel. The "costs" involve a somewhat higher pilot workload in similar circumstances, the smaller look-ahead time which results in less efficient resolution manoeuvres and the sometimes false/nuisance alerts due to missing intent information. The optimal CD&R system (in terms of costs versus benefits) has been suggested to be state-based CD&R with the addition of intended or target flight level. This combination of state-based CD&R with a limited amount of intent provides "the best of both worlds". Studying this CD&R system is still an open issue.

  7. Track structure modeling in liquid water: A review of the Geant4-DNA very low energy extension of the Geant4 Monte Carlo simulation toolkit.

    PubMed

    Bernal, M A; Bordage, M C; Brown, J M C; Davídková, M; Delage, E; El Bitar, Z; Enger, S A; Francis, Z; Guatelli, S; Ivanchenko, V N; Karamitros, M; Kyriakou, I; Maigne, L; Meylan, S; Murakami, K; Okada, S; Payno, H; Perrot, Y; Petrovic, I; Pham, Q T; Ristic-Fira, A; Sasaki, T; Štěpán, V; Tran, H N; Villagrasa, C; Incerti, S

    2015-12-01

    Understanding the fundamental mechanisms involved in the induction of biological damage by ionizing radiation remains a major challenge of today's radiobiology research. The Monte Carlo simulation of physical, physicochemical and chemical processes involved may provide a powerful tool for the simulation of early damage induction. The Geant4-DNA extension of the general purpose Monte Carlo Geant4 simulation toolkit aims to provide the scientific community with an open source access platform for the mechanistic simulation of such early damage. This paper presents the most recent review of the Geant4-DNA extension, as available to Geant4 users since June 2015 (release 10.2 Beta). In particular, the review includes the description of new physical models for the description of electron elastic and inelastic interactions in liquid water, as well as new examples dedicated to the simulation of physicochemical and chemical stages of water radiolysis. Several implementations of geometrical models of biological targets are presented as well, and the list of Geant4-DNA examples is described.

  8. Interactive X-ray and proton therapy training and simulation.

    PubMed

    Hamza-Lup, Felix G; Farrar, Shane; Leon, Erik

    2015-10-01

    External beam X-ray therapy (XRT) and proton therapy (PT) are effective and widely accepted forms of treatment for many types of cancer. However, the procedures require extensive computerized planning. Current planning systems for both XRT and PT have insufficient visual aid to combine real patient data with the treatment device geometry to account for unforeseen collisions among system components and the patient. The 3D surface representation (S-rep) is a widely used scheme to create 3D models of physical objects. 3D S-reps have been successfully used in CAD/CAM and, in conjunction with texture mapping, in the modern gaming industry to customize avatars and improve the gaming realism and sense of presence. We are proposing a cost-effective method to extract patient-specific S-reps in real time and combine them with the treatment system geometry to provide a comprehensive simulation of the XRT/PT treatment room. The X3D standard is used to implement and deploy the simulator on the web, enabling its use not only for remote specialists' collaboration, simulation, and training, but also for patient education. An objective assessment of the accuracy of the S-reps obtained proves the potential of the simulator for clinical use.

  9. Mine fire experiments and simulation with MFIRE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Laage, L.W.; Yang, Hang

    1995-12-31

    A major concern of mine fires is the heat-generated ventilation disturbances, which can move products of combustion (POC) through unexpected passageways. Fire emergency planning requires simulation of the interaction of the fire and ventilation system to predict the state of the ventilation system and the subsequent distribution of temperatures and POC. Several computer models were developed by the U.S. Bureau of Mines (USBM) to perform this simulation. The most recent, MFIRE, simulates a mine's ventilation system and its response to altered ventilation parameters such as the development of new mine workings or changes in ventilation control structures, external influences such as varying outside temperatures, and internal influences such as fires. Extensive output allows quantitative analysis of the effects of the proposed alteration to the ventilation system. This paper describes recent USBM research to validate MFIRE's calculation of temperature distribution in an airway due to a mine fire, as temperatures are the most significant source of ventilation disturbances. Fire tests were conducted at the Waldo Mine near Magdalena, NM. From these experiments, temperature profiles were developed as functions of time and distance from the fire and compared with simulations from MFIRE.

  10. Frictional velocity-weakening in landslides on Earth and on other planetary bodies.

    PubMed

    Lucas, Antoine; Mangeney, Anne; Ampuero, Jean Paul

    2014-03-04

    One of the ultimate goals in landslide hazard assessment is to predict maximum landslide extension and velocity. Despite much work, the physical processes governing energy dissipation during these natural granular flows remain uncertain. Field observations show that large landslides travel over unexpectedly long distances, suggesting low dissipation. Numerical simulations of landslides require a small friction coefficient to reproduce the extension of their deposits. Here, based on analytical and numerical solutions for granular flows constrained by remote-sensing observations, we develop a consistent method to estimate the effective friction coefficient of landslides. This method uses a constant basal friction coefficient that reproduces the first-order landslide properties. We show that friction decreases with increasing volume or, more fundamentally, with increasing sliding velocity. Inspired by frictional weakening mechanisms thought to operate during earthquakes, we propose an empirical velocity-weakening friction law under a unifying phenomenological framework applicable to small and large landslides observed on Earth and beyond.
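
    The qualitative shape of such an empirical law can be written down directly; the functional form and constants below are illustrative, not the paper's fitted law:

    ```python
    def effective_friction(v, mu0=0.6, v_w=4.0):
        """Velocity-weakening friction: the effective coefficient
        decays from the static-like value mu0 toward zero as sliding
        velocity v (m/s) exceeds the weakening scale v_w, so larger,
        faster landslides dissipate less and run out farther."""
        return mu0 / (1.0 + v / v_w)

    for v in (0.1, 1.0, 10.0, 50.0):
        print(f"v = {v:5.1f} m/s -> mu_eff = {effective_friction(v):.3f}")
    ```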

  11. An Extension of the Chi-Square Procedure for Non-NORMAL Statistics, with Application to Solar Neutrino Data

    NASA Astrophysics Data System (ADS)

    Sturrock, P. A.

    2008-01-01

    Using the chi-square statistic, one may conveniently test whether a series of measurements of a variable are consistent with a constant value. However, that test is predicated on the assumption that the appropriate probability distribution function (pdf) is normal in form. This requirement is usually not satisfied by experimental measurements of the solar neutrino flux. This article presents an extension of the chi-square procedure that is valid for any form of the pdf. This procedure is applied to the GALLEX-GNO dataset, and it is shown that the results are in good agreement with the results of Monte Carlo simulations. Whereas application of the standard chi-square test to symmetrized data yields evidence significant at the 1% level for variability of the solar neutrino flux, application of the extended chi-square test to the unsymmetrized data yields only weak evidence (significant at the 4% level) of variability.
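
    A Monte Carlo sketch of the extended procedure: the chi-square-like statistic is computed from the data, but its p-value is calibrated against synthetic datasets drawn from the actual, possibly non-normal, measurement pdf rather than against the analytic chi-square distribution. Function names and the form of `sampler` are assumptions:

    ```python
    import numpy as np

    def chi2_stat(x, sigma):
        """Chi-square-like statistic of the data about its weighted mean."""
        w = 1.0 / sigma**2
        mu = np.sum(w * x) / np.sum(w)
        return np.sum(((x - mu) / sigma)**2)

    def constancy_pvalue_mc(x, sigma, sampler, n_mc=10000, rng=None):
        """p-value from Monte Carlo calibration: `sampler(rng)` must
        return one synthetic dataset of the same length as `x`, drawn
        from the measurement pdf under the constant-flux hypothesis."""
        rng = rng or np.random.default_rng()
        x = np.asarray(x, float)
        sigma = np.asarray(sigma, float)
        s_obs = chi2_stat(x, sigma)
        s_mc = np.array([chi2_stat(np.asarray(sampler(rng), float), sigma)
                         for _ in range(n_mc)])
        return float((s_mc >= s_obs).mean())
    ```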

  12. Design and implementation of laser target simulator in hardware-in-the-loop simulation system based on LabWindows/CVI and RTX

    NASA Astrophysics Data System (ADS)

    Tong, Qiujie; Wang, Qianqian; Li, Xiaoyang; Shan, Bin; Cui, Xuntai; Li, Chenyu; Peng, Zhong

    2016-11-01

    In order to satisfy real-time and generality requirements, a laser target simulator for semi-physical simulation systems based on an RTX + LabWindows/CVI platform is proposed in this paper. Compared with the upper/lower-computer simulation platform architecture used in most current real-time systems, this system has better maintainability and portability. The system runs on the Windows platform, using the Windows RTX real-time extension subsystem to ensure real-time performance, combined with a reflective memory network to complete real-time tasks such as calculating the simulation model, transmitting the simulation data, and maintaining real-time communication. The real-time tasks of the simulation system run under the RTSS process. At the same time, LabWindows/CVI is used to build a graphical interface and to handle non-real-time tasks in the simulation process, such as man-machine interaction and the display and storage of simulation data, which run under the Win32 process. Through the design of RTX shared memory and a task-scheduling algorithm, data interaction between the real-time RTSS process and the non-real-time Win32 process is accomplished. The experimental results show that this system has strong real-time performance, high stability, and high simulation accuracy. It also provides good human-computer interaction.

  13. Time domain simulations of preliminary breakdown pulses in natural lightning.

    PubMed

    Carlson, B E; Liang, C; Bitzer, P; Christian, H

    2015-06-16

    Lightning discharge is a complicated process with relevant physical scales spanning many orders of magnitude. In an effort to understand the electrodynamics of lightning and connect physical properties of the channel to observed behavior, we construct a simulation of charge and current flow on a narrow conducting channel embedded in three-dimensional space with the time domain electric field integral equation, the method of moments, and the thin-wire approximation. The method includes approximate treatment of resistance evolution due to lightning channel heating and the corona sheath of charge surrounding the lightning channel. Focusing our attention on preliminary breakdown in natural lightning by simulating stepwise channel extension with a simplified geometry, our simulation reproduces the broad features observed in data collected with the Huntsville Alabama Marx Meter Array. Some deviations in pulse shape details are evident, suggesting future work focusing on the detailed properties of the stepping mechanism. Key points: preliminary breakdown pulses can be reproduced by simulated channel extension; channel heating and corona sheath formation are crucial to proper pulse shape; extension processes and channel orientation significantly affect observations.

  14. Time domain simulations of preliminary breakdown pulses in natural lightning

    PubMed Central

    Carlson, B E; Liang, C; Bitzer, P; Christian, H

    2015-01-01

    Lightning discharge is a complicated process with relevant physical scales spanning many orders of magnitude. In an effort to understand the electrodynamics of lightning and connect physical properties of the channel to observed behavior, we construct a simulation of charge and current flow on a narrow conducting channel embedded in three-dimensional space with the time domain electric field integral equation, the method of moments, and the thin-wire approximation. The method includes approximate treatment of resistance evolution due to lightning channel heating and the corona sheath of charge surrounding the lightning channel. Focusing our attention on preliminary breakdown in natural lightning by simulating stepwise channel extension with a simplified geometry, our simulation reproduces the broad features observed in data collected with the Huntsville Alabama Marx Meter Array. Some deviations in pulse shape details are evident, suggesting future work focusing on the detailed properties of the stepping mechanism. Key points: preliminary breakdown pulses can be reproduced by simulated channel extension; channel heating and corona sheath formation are crucial to proper pulse shape; extension processes and channel orientation significantly affect observations. PMID:26664815

  15. Contributions of the ARM Program to Radiative Transfer Modeling for Climate and Weather Applications

    NASA Technical Reports Server (NTRS)

    Mlawer, Eli J.; Iacono, Michael J.; Pincus, Robert; Barker, Howard W.; Oreopoulos, Lazaros; Mitchell, David L.

    2016-01-01

    Accurate climate and weather simulations must account for all relevant physical processes and their complex interactions. Each of these atmospheric, ocean, and land processes must be considered on an appropriate spatial and temporal scale, which imposes a substantial computational burden on these simulations. One especially critical physical process is the flow of solar and thermal radiant energy through the atmosphere, which controls planetary heating and cooling and drives the large-scale dynamics that move energy from the tropics toward the poles. Radiation calculations are therefore essential for climate and weather simulations, but are themselves quite complex even without considering the effects of variable and inhomogeneous clouds. Clear-sky radiative transfer calculations have to account for thousands of absorption lines due to water vapor, carbon dioxide, and other gases, which are irregularly distributed across the spectrum and have shapes dependent on pressure and temperature. The line-by-line (LBL) codes that treat these details have a far greater computational cost than can be afforded by global models. Therefore, the crucial requirement for accurate radiation calculations in climate and weather prediction models must be satisfied by fast solar and thermal radiation parameterizations with a high level of accuracy that has been demonstrated through extensive comparisons with LBL codes. See attachment for continuation.
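
    As a toy illustration of why band-averaged parameterizations must be validated against line-by-line results, the sketch below compares spectrally resolved transmittance with a one-band approximation over a synthetic absorption spectrum; the lines, strengths, and absorber amount are all made up for illustration.

    ```python
    import numpy as np

    # Toy contrast between a "line-by-line" calculation and a one-parameter
    # band average. The synthetic Lorentzian lines below stand in for the
    # thousands of real gas lines; all numbers are illustrative.
    rng = np.random.default_rng(3)
    nu = np.linspace(0.0, 100.0, 20001)              # spectral grid [cm^-1]
    sigma = np.zeros_like(nu)
    for center in rng.uniform(0, 100, 300):          # 300 synthetic lines
        strength = rng.uniform(0.5, 2.0)
        sigma += strength * 0.01 / ((nu - center) ** 2 + 0.01)

    u = 2.0                                          # absorber amount (assumed)
    T_lbl = np.exp(-sigma * u).mean()                # resolve, then average
    T_band = np.exp(-sigma.mean() * u)               # average cross-section first

    # By Jensen's inequality T_lbl >= T_band: the crude one-band average
    # systematically overestimates absorption, which is why parameterizations
    # need careful construction and LBL benchmarking.
    print(f"line-by-line mean transmittance: {T_lbl:.3f}")
    print(f"one-band approximation:          {T_band:.3f}")
    ```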

  16. Development of a Haptic Interface for Natural Orifice Translumenal Endoscopic Surgery Simulation

    PubMed Central

    Dargar, Saurabh; Sankaranarayanan, Ganesh

    2016-01-01

    Natural orifice translumenal endoscopic surgery (NOTES) is a minimally invasive procedure that utilizes the body's natural orifices to gain access to the peritoneal cavity. The NOTES procedure is designed to minimize external scarring and patient trauma; however, flexible-endoscopy-based pure NOTES procedures require critical scope handling skills. The delicate nature of the NOTES procedure requires extensive training, so to improve access to training while reducing risk to patients we have designed and developed the VTEST©, a virtual reality NOTES simulator. As part of the simulator, a novel decoupled 2-DOF haptic device was developed to provide realistic force feedback to the user in training. A series of experiments were performed to determine the behavioral characteristics of the device. The device was found capable of rendering up to 5.62 N of continuous force and 0.190 N·m of continuous torque in the translational and rotational DOF, respectively. The device possesses 18.1 Hz and 5.7 Hz of force bandwidth in the translational and rotational DOF, respectively. A feedforward friction compensator was also successfully implemented to minimize the negative impact of friction during interaction with the device. In this work we have presented the detailed development and evaluation of the haptic device for the VTEST©. PMID:27008674
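
    The abstract does not give the compensator's form; a common feedforward choice combines Coulomb and viscous terms, as in the hypothetical sketch below (the coefficients and velocity deadband are assumed values, not the VTEST© device's identified parameters).

    ```python
    import numpy as np

    # Illustrative feedforward friction compensator (Coulomb + viscous model).
    F_COULOMB = 0.8      # N, Coulomb friction level (assumed)
    B_VISCOUS = 0.15     # N*s/m, viscous coefficient (assumed)
    V_EPS = 1e-3         # m/s, deadband to avoid chattering near zero velocity

    def compensated_force(f_desired: float, velocity: float) -> float:
        """Add a feedforward estimate of friction to the commanded force."""
        if abs(velocity) < V_EPS:
            friction = 0.0                   # inside deadband: no compensation
        else:
            friction = F_COULOMB * np.sign(velocity) + B_VISCOUS * velocity
        return f_desired + friction

    # Example: command 2 N while the handle moves at 0.05 m/s
    print(compensated_force(2.0, 0.05))      # ~2.81 N sent to the actuator
    ```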

  17. Experimental and numerical study of wastewater pollution in Yuhui channel, Jiashan city

    NASA Astrophysics Data System (ADS)

    Fu, Lei; Peng, Zhenhua; You, Aiju

    2018-02-01

    Due to economic and social development in China, the huge amount of wastewater has become a serious problem in most Chinese cities. Therefore, the construction of wastewater treatment plants draws much more attention than before. The discharge from a wastewater treatment plant is then considered as a point source in most of the important rivers and channels in China. In this study, a typical wastewater treatment plant extension project is introduced as a case study. A field monitoring experiment was designed and executed to collect the required data, and a two-dimensional model was then established to simulate the water quality downstream of the wastewater treatment plant, with CODCr considered as a typical pollutant during the simulation. The simulation results indicate that different discharge conditions lead to different CODCr concentrations downstream of the wastewater treatment plant, and that an emergency plan should be prepared to minimize the risk of pollution in the channel under unusual and accident conditions.
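
    The study's model is two-dimensional; as a minimal sketch of the same transport physics, the following one-dimensional advection-dispersion-decay scheme propagates a CODCr point source downstream. All parameters (velocity, dispersion, decay rate, inflow concentration) are illustrative assumptions, not values from the Yuhui channel.

    ```python
    import numpy as np

    # Minimal 1-D advection-dispersion-decay sketch of downstream CODCr
    # transport; the study itself used a full 2-D water quality model.
    L, N = 5000.0, 500             # channel length [m], grid cells
    dx = L / N
    u = 0.3                        # flow velocity [m/s] (assumed)
    D = 5.0                        # dispersion coefficient [m^2/s] (assumed)
    k = 1e-5                       # first-order CODCr decay rate [1/s] (assumed)
    dt = 0.4 * min(dx / u, dx**2 / (2 * D))   # respect CFL and diffusion limits

    c = np.zeros(N)
    c_inflow = 40.0                # CODCr at the outfall [mg/L] (assumed)

    for _ in range(int(3600 * 6 / dt)):       # simulate six hours
        c[0] = c_inflow                       # continuous point-source boundary
        adv = -u * (c[1:-1] - c[:-2]) / dx    # first-order upwind advection
        dif = D * (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2
        c[1:-1] += dt * (adv + dif - k * c[1:-1])
        c[-1] = c[-2]                         # zero-gradient outflow boundary

    print(f"CODCr 1 km downstream: {c[int(1000 / dx)]:.1f} mg/L")
    ```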

  18. Evidence from molecular dynamics simulations of conformational preorganization in the ribonuclease H active site

    PubMed Central

    Stafford, Kate A.; Palmer III, Arthur G.

    2014-01-01

    Ribonuclease H1 (RNase H) enzymes are well-conserved endonucleases that are present in all domains of life and are particularly important in the life cycle of retroviruses as domains within reverse transcriptase. Despite extensive study, especially of the E. coli homolog, the interaction of the highly negatively charged active site with catalytically required magnesium ions remains poorly understood. In this work, we describe molecular dynamics simulations of the E. coli homolog in complex with magnesium ions, as well as simulations of other homologs in their apo states. Collectively, these results suggest that the active site is highly rigid in the apo state of all homologs studied and is conformationally preorganized to favor the binding of a magnesium ion. Notably, representatives of bacterial, eukaryotic, and retroviral RNases H all exhibit similar active-site rigidity, suggesting that this dynamic feature is only subtly modulated by amino acid sequence and is primarily imposed by the distinctive RNase H protein fold. PMID:25075292

  19. A parametric study of surface roughness and bonding mechanisms of aluminum alloys with epoxies: a molecular dynamics simulation

    NASA Astrophysics Data System (ADS)

    Timilsina, Rajendra; Termaath, Stephanie

    The marine environment is highly aggressive toward most materials. However, aluminium-magnesium alloys (Al-Mg, specifically the 5xxx series) have an exceptionally long service life in such aggressive marine environments. For instance, the Al-Mg alloy AA5083 is extensively used in naval structures because of its good mechanical strength, formability, seawater corrosion resistance, and weldability. However, the bonding mechanisms of these alloys with epoxies on rough surfaces are not yet fully understood and require rigorous investigation at the molecular or atomic level. We performed molecular dynamics simulations to study adherend surface preparation and the surface bonding mechanisms of the Al-Mg alloy (AA5083) with different epoxies by developing several computer models. Various surface roughness distributions were introduced into the models, and molecular dynamics simulations were performed. The formation of a beta phase (Al3Mg2), microstructures, bonding energies at the interface, bonding strengths, and durability are investigated. Office of Naval Research.

  20. Hot interstellar tunnels. 1: Simulation of interacting supernova remnants

    NASA Technical Reports Server (NTRS)

    Smith, B. W.

    1976-01-01

    The theory required to build a numerical simulation of interacting supernova remnants is developed. The hot cavities within a population of remnants will become connected, with varying ease and speed, for a variety of assumed conditions in the outer shells of old remnants. Apparently neither radiative cooling nor thermal conduction in a large-scale galactic magnetic field can destroy hot cavity regions, if they grow, faster than they are reheated by supernova shock waves, but interstellar mass motions disrupt the contiguity of extensive cavities necessary for the dispersal of these shocks over a wide volume. Monte Carlo simulations show that a quasi-equilibrium is reached in the test space within 10 million yrs of the first supernova and is characterized by an average cavity filling fraction of the interstellar volume. Aspects of this equilibrium are discussed for a range of supernova rates. Two predictions are not confirmed within this range: critical growth of hot regions to encompass the entire medium, and the efficient quenching of a remnant's expansion by interaction with other cavities.

  1. An object oriented Python interface for atomistic simulations

    NASA Astrophysics Data System (ADS)

    Hynninen, T.; Himanen, L.; Parkkinen, V.; Musso, T.; Corander, J.; Foster, A. S.

    2016-01-01

    Programmable simulation environments allow one to monitor and control calculations efficiently and automatically before, during, and after runtime. Environments directly accessible in a programming environment can be interfaced with powerful external analysis tools and extensions to enhance the functionality of the core program, and by incorporating a flexible object-based structure, the environments make building and analysing computational setups intuitive. In this work, we present a classical atomistic force field with an interface written in the Python language. The program is an extension for an existing object-based atomistic simulation environment.
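
    The abstract does not name the interface's classes; the sketch below shows the flavor of an object-based atomistic setup in Python, with hypothetical Atom/PairPotential/System names and a Lennard-Jones pair energy standing in for the actual force field.

    ```python
    # Hypothetical sketch of an object-based atomistic interface; the class
    # and method names are illustrative, not the paper's actual API.
    from dataclasses import dataclass, field
    from typing import List, Optional, Tuple

    @dataclass
    class Atom:
        symbol: str
        position: Tuple[float, float, float]   # Angstrom

    @dataclass
    class PairPotential:
        epsilon: float    # well depth [eV]
        sigma: float      # Lennard-Jones length scale [Angstrom]

        def energy(self, r: float) -> float:
            sr6 = (self.sigma / r) ** 6
            return 4.0 * self.epsilon * (sr6 ** 2 - sr6)

    @dataclass
    class System:
        atoms: List[Atom] = field(default_factory=list)
        potential: Optional[PairPotential] = None

        def total_energy(self) -> float:
            e = 0.0
            for i, a in enumerate(self.atoms):
                for b in self.atoms[i + 1:]:
                    r = sum((p - q) ** 2
                            for p, q in zip(a.position, b.position)) ** 0.5
                    e += self.potential.energy(r)
            return e

    # Usage: build a dimer and query its energy before, during, or after a run.
    dimer = System(
        atoms=[Atom("Ar", (0.0, 0.0, 0.0)), Atom("Ar", (0.0, 0.0, 3.8))],
        potential=PairPotential(epsilon=0.0104, sigma=3.40),
    )
    print(f"LJ dimer energy: {dimer.total_energy():.5f} eV")
    ```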

  2. Experimental Evaluation of an Integrated Datalink and Automation-Based Strategic Trajectory Concept

    NASA Technical Reports Server (NTRS)

    Mueller, Eric

    2007-01-01

    This paper presents research on the interoperability of trajectory-based automation concepts and technologies with the modern Flight Management Systems and datalink communication available on many of today's commercial aircraft. A tight integration of trajectory-based ground automation systems with the aircraft Flight Management System through datalink will enable mid-term and far-term benefits from trajectory-based automation methods. A two-way datalink connection between the trajectory-based automation resident in the Center/TRACON Automation System and the Future Air Navigation System-1 integrated FMS/datalink in the NASA Ames B747-400 Level D simulator has been established, and extensive simulation of the use of datalink messages to generate strategic trajectories has been completed. A strategic trajectory is defined as an aircraft deviation needed to solve a conflict or honor a route request and then merge the aircraft back to its nominal preferred trajectory using a single continuous trajectory clearance. Engineers on the ground side of the datalink generated lateral and vertical trajectory clearances and transmitted them to the Flight Management System of the 747; the airborne automation then flew the new trajectory without human intervention, requiring the flight crew only to review and accept the trajectory. This simulation established the protocols needed for a significant majority of the trajectory change types required to solve a traffic conflict or deviate around weather. This demonstration provides a basis for understanding the requirements for integration of trajectory-based automation with current Flight Management Systems and datalink to support future National Airspace System operations.

  3. New Prediction Model for Probe Specificity in an Allele-Specific Extension Reaction for Haplotype-Specific Extraction (HSE) of Y Chromosome Mixtures

    PubMed Central

    Rothe, Jessica; Watkins, Norman E.; Nagy, Marion

    2012-01-01

    Allele-specific extension reactions (ASERs) use 3′ terminus-specific primers for the selective extension of completely annealed matches by polymerase. The ability of the polymerase to extend non-specific 3′ terminal mismatches leads to a failure of the reaction, a process that is only partly understood and predictable, and often requires time-consuming assay design. In our studies we investigated haplotype-specific extraction (HSE) for the separation of male DNA mixtures. HSE is an ASER and provides the ability to distinguish between diploid chromosomes from one or more individuals. Here, we show that the success of HSE and allele-specific extension depend strongly on the concentration difference between complete match and 3′ terminal mismatch. Using the oligonucleotide-modeling platform Visual Omp, we demonstrated the dependency of the discrimination power of the polymerase on match- and mismatch-target hybridization between different probe lengths. Therefore, the probe specificity in HSE could be predicted by performing a relative comparison of different probe designs with their simulated differences between the duplex concentration of target-probe match and mismatches. We tested this new model for probe design in more than 300 HSE reactions with 137 different probes and obtained an accordance of 88%. PMID:23049901
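
    The probe-design criterion rests on comparing simulated duplex concentrations for match versus mismatch hybridization. A minimal two-state equilibrium sketch of that comparison follows; the free energies, concentrations, and temperature are illustrative placeholders, not Visual Omp output or values from the paper.

    ```python
    import math

    # Two-state duplex thermodynamics: probe + target <-> duplex, with
    # association constant K = exp(-dG / RT). All numbers are illustrative.
    R = 1.987e-3          # kcal/(mol*K)
    T = 273.15 + 37.0     # hybridization temperature [K] (assumed)

    def duplex_fraction(dG: float, target_conc: float, probe_conc: float) -> float:
        """Fraction of probe bound at equilibrium (dG in kcal/mol, conc in M)."""
        K = math.exp(-dG / (R * T))
        # solve K = x / ((probe - x)(target - x)); take the physical root
        a = K
        b = -(K * (probe_conc + target_conc) + 1.0)
        c = K * probe_conc * target_conc
        x = (-b - math.sqrt(b * b - 4 * a * c)) / (2 * a)
        return x / probe_conc

    match_dG, mismatch_dG = -14.0, -10.5      # kcal/mol (illustrative)
    f_match = duplex_fraction(match_dG, 1e-7, 1e-8)
    f_mm = duplex_fraction(mismatch_dG, 1e-7, 1e-8)
    print(f"match bound: {f_match:.3f}, mismatch bound: {f_mm:.3f}, "
          f"discrimination ratio: {f_match / f_mm:.1f}")
    ```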

  4. Cutting the wires: modularization of cellular networks for experimental design.

    PubMed

    Lang, Moritz; Summers, Sean; Stelling, Jörg

    2014-01-07

    Understanding naturally evolved cellular networks requires the consecutive identification and revision of the interactions between relevant molecular species. In this process, initially often simplified and incomplete networks are extended by integrating new reactions or whole subnetworks to increase consistency between model predictions and new measurement data. However, increased consistency with experimental data alone is not sufficient to show the existence of biomolecular interactions, because the interplay of different potential extensions might lead to overall similar dynamics. Here, we present a graph-based modularization approach to facilitate the design of experiments targeted at independently validating the existence of several potential network extensions. Our method is based on selecting the outputs to measure during an experiment, such that each potential network extension becomes virtually insulated from all others during data analysis. Each output defines a module that only depends on one hypothetical network extension, and all other outputs act as virtual inputs to achieve insulation. Given appropriate experimental time-series measurements of the outputs, our modules can be analyzed, simulated, and compared to the experimental data separately. Our approach exemplifies the close relationship between structural systems identification and modularization, an interplay that promises development of related approaches in the future. Copyright © 2014 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  5. New prediction model for probe specificity in an allele-specific extension reaction for haplotype-specific extraction (HSE) of Y chromosome mixtures.

    PubMed

    Rothe, Jessica; Watkins, Norman E; Nagy, Marion

    2012-01-01

    Allele-specific extension reactions (ASERs) use 3' terminus-specific primers for the selective extension of completely annealed matches by polymerase. The ability of the polymerase to extend non-specific 3' terminal mismatches leads to a failure of the reaction, a process that is only partly understood and predictable, and often requires time-consuming assay design. In our studies we investigated haplotype-specific extraction (HSE) for the separation of male DNA mixtures. HSE is an ASER and provides the ability to distinguish between diploid chromosomes from one or more individuals. Here, we show that the success of HSE and allele-specific extension depend strongly on the concentration difference between complete match and 3' terminal mismatch. Using the oligonucleotide-modeling platform Visual Omp, we demonstrated the dependency of the discrimination power of the polymerase on match- and mismatch-target hybridization between different probe lengths. Therefore, the probe specificity in HSE could be predicted by performing a relative comparison of different probe designs with their simulated differences between the duplex concentration of target-probe match and mismatches. We tested this new model for probe design in more than 300 HSE reactions with 137 different probes and obtained an accordance of 88%.

  6. Modeling Effects of Annealing on Coal Char Reactivity to O2 and CO2, Based on Preparation Conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holland, Troy; Bhat, Sham; Marcy, Peter

    Oxy-fired coal combustion is a promising potential carbon capture technology. Predictive computational fluid dynamics (CFD) simulations are valuable tools in evaluating and deploying oxyfuel and other carbon capture technologies, either as retrofit technologies or for new construction. However, accurate predictive combustor simulations require physically realistic submodels with low computational requirements. A recent sensitivity analysis of a detailed char conversion model (Char Conversion Kinetics (CCK)) found thermal annealing to be an extremely sensitive submodel. In the present work, further analysis of the previous annealing model revealed significant disagreement with numerous datasets from experiments performed after that annealing model was developed. The annealing model was accordingly extended to reflect experimentally observed reactivity loss due to the thermal annealing of a variety of coals under diverse char preparation conditions. The model extension was informed by a Bayesian calibration analysis. In addition, since oxyfuel conditions include extraordinarily high levels of CO2, the development of a first-ever CO2 reactivity loss model due to annealing is presented.

  7. Modeling Effects of Annealing on Coal Char Reactivity to O2 and CO2, Based on Preparation Conditions

    DOE PAGES

    Holland, Troy; Bhat, Sham; Marcy, Peter; ...

    2017-08-25

    Oxy-fired coal combustion is a promising potential carbon capture technology. Predictive computational fluid dynamics (CFD) simulations are valuable tools in evaluating and deploying oxyfuel and other carbon capture technologies, either as retrofit technologies or for new construction. However, accurate predictive combustor simulations require physically realistic submodels with low computational requirements. A recent sensitivity analysis of a detailed char conversion model (Char Conversion Kinetics (CCK)) found thermal annealing to be an extremely sensitive submodel. In the present work, further analysis of the previous annealing model revealed significant disagreement with numerous datasets from experiments performed after that annealing model was developed. The annealing model was accordingly extended to reflect experimentally observed reactivity loss due to the thermal annealing of a variety of coals under diverse char preparation conditions. The model extension was informed by a Bayesian calibration analysis. In addition, since oxyfuel conditions include extraordinarily high levels of CO2, the development of a first-ever CO2 reactivity loss model due to annealing is presented.

  8. Section 4. The GIS Weasel User's Manual

    USGS Publications Warehouse

    Viger, Roland J.; Leavesley, George H.

    2007-01-01

    The GIS Weasel was designed to aid in the preparation of spatial information for input to lumped and distributed parameter hydrologic or other environmental models. The GIS Weasel provides geographic information system (GIS) tools to help create maps of geographic features relevant to a user's model and to generate parameters from those maps. The operation of the GIS Weasel does not require the user to be a GIS expert, only that the user have an understanding of the spatial information requirements of the environmental simulation model being used. The GIS Weasel software system uses a GIS-based graphical user interface (GUI), the C programming language, and external scripting languages. The software will run on any computing platform where ArcInfo Workstation (version 8.0.2 or later) and the GRID extension are accessible. The user controls the processing of the GIS Weasel by interacting with menus, maps, and tables. The purpose of this document is to describe the operation of the software. This document is not intended to describe the usage of this software in support of any particular environmental simulation model. Such guides are published separately.

  9. Statistical analysis of CSP plants by simulating extensive meteorological series

    NASA Astrophysics Data System (ADS)

    Pavón, Manuel; Fernández, Carlos M.; Silva, Manuel; Moreno, Sara; Guisado, María V.; Bernardos, Ana

    2017-06-01

    The feasibility analysis of any power plant project needs an estimate of the amount of energy it will be able to deliver to the grid during its lifetime. To achieve this, the feasibility study requires precise knowledge of the solar resource over a long-term period. In Concentrating Solar Power (CSP) projects, financing institutions typically require several statistical probability-of-exceedance scenarios for the expected electric energy output. Currently, the industry assumes a correlation between probabilities of exceedance of annual Direct Normal Irradiance (DNI) and energy yield. In this work, this assumption is tested by simulating the energy yield of CSP plants using as input a 34-year series of measured meteorological parameters and solar irradiance. The results of this work show that, even if some correspondence between the probabilities of exceedance of annual DNI values and energy yields is found, the intra-annual distribution of DNI may significantly affect this correlation. This result highlights the need for standardized procedures for the elaboration of DNI time series representative of a given probability of exceedance of annual DNI.
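
    For annual DNI, the probability-of-exceedance scenarios mentioned above reduce to percentiles of a multi-year series; a minimal sketch follows, with a synthetic 34-year series standing in for the measured one.

    ```python
    import numpy as np

    # Probability-of-exceedance values from an annual DNI series.
    # The 34 annual sums below are synthetic, for illustration only.
    rng = np.random.default_rng(7)
    annual_dni = rng.normal(loc=2100.0, scale=120.0, size=34)   # kWh/m2/yr

    # Pxx = value exceeded in xx% of years, i.e. the (100 - xx)th percentile
    p50 = np.percentile(annual_dni, 50)
    p90 = np.percentile(annual_dni, 10)
    print(f"P50 annual DNI: {p50:.0f} kWh/m2, P90 annual DNI: {p90:.0f} kWh/m2")
    ```

    The paper's point is precisely that mapping these DNI percentiles directly onto energy-yield percentiles can fail when the intra-annual DNI distribution varies, so the plant simulation must be run on the full meteorological series rather than on annual sums alone.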

  10. Plant-Level Modeling and Simulation of Used Nuclear Fuel Dissolution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    de Almeida, Valmor F.

    2012-09-07

    Plant-level modeling and simulation of a used nuclear fuel prototype dissolver is presented. Emphasis is given to developing a modeling and simulation approach to be explored by other processes involved in the recycle of used fuel. The commonality concepts presented in a previous communication were used to create a model and realize its software module. An initial model was established based on a theory of chemical thermomechanical network transport outlined previously. A software module prototype was developed with the required external behavior and internal mathematical structure. Results obtained demonstrate the generality of the design approach and establish an extensible mathematical model with its corresponding software module for a wide range of dissolvers. Scale-up numerical tests were made varying the type of used fuel (breeder and light-water reactors) and the capacity of dissolution (0.5 t/d to 1.7 t/d). These tests were motivated by user requirements in the area of nuclear materials safeguards. A computer module written in high-level programming languages (MATLAB and Octave) was developed, tested, and provided as open-source code (MATLAB) for integration into the Separations and Safeguards Performance Model application in development at Sandia National Laboratories. The modeling approach presented here is intended to serve as a template for a rational modeling of all plant-level modules. This will facilitate the practical application of the commonality features underlying the unifying network transport theory proposed recently. In addition, by example, this model describes, explicitly, the needed data from sub-scale models, and logical extensions for future model development. For example, from thermodynamics, an off-line simulation of molecular dynamics could quantify partial molar volumes for the species in the liquid phase; this simulation is currently within reach for high-performance computing. From fluid mechanics, a hold-up capacity function is needed for the dissolver device; this simulation is currently within reach for computational fluid mechanics given the existing CAD geometry. From chemical transport phenomena, a simulation of the particle-scale dissolution front is needed to derive an improved solid dissolution kinetics law by predicting the local surface area change; an example was provided in this report. In addition, the associated reaction mechanisms for dissolution are presently largely untested and simplified, hence even a parallel experimental program in reaction kinetics is needed to support modeling and simulation efforts. Last but not least, a simple account of finite rates of solid feed and transfer can be readily introduced via a coupled delayed model. These are some of the theoretical benefits of a rational plant-level modeling approach, which guides the development of smaller length- and time-scale modeling. Practical and other theoretical benefits were presented in a previous report.

  11. Sampling ARG of multiple populations under complex configurations of subdivision and admixture.

    PubMed

    Carrieri, Anna Paola; Utro, Filippo; Parida, Laxmi

    2016-04-01

    Simulating complex evolution scenarios of multiple populations is an important task for answering many basic questions relating to population genomics. Apart from the population samples, the underlying Ancestral Recombination Graph (ARG) is an additional important means in hypothesis checking and reconstruction studies. Furthermore, complex simulations require a plethora of interdependent parameters, making even the scenario specification highly non-trivial. We present an algorithm, SimRA, that simulates a generic multiple-population evolution model with admixture. It is based on random graphs and improves dramatically on the time and space requirements of the classical single-population algorithm. Using the underlying random-graphs model, we also derive closed forms for the expected values of ARG characteristics, i.e., the height of the graph, the number of recombinations, the number of mutations, and the population diversity, in terms of its defining parameters. This is crucial in aiding the user to specify meaningful parameters for complex scenario simulations, not through trial and error based on raw compute power but through intelligent parameter estimation. To the best of our knowledge this is the first time closed-form expressions have been computed for the ARG properties. We show that the expected values closely match the empirical values through simulations. Finally, we demonstrate that SimRA produces the ARG in compact form without compromising any accuracy. We demonstrate the compactness and accuracy through extensive experiments. SimRA (Simulation based on Random graph Algorithms) source, executable, user manual and sample input-output sets are available for download at: https://github.com/ComputationalGenomics/SimRA Contact: parida@us.ibm.com Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
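
    SimRA's closed forms are not reproduced in the abstract; as a flavor of the "closed form versus simulation" check it describes, the sketch below verifies the textbook single-population coalescent expectation E[T_MRCA] = 2(1 - 1/n), in units of N generations, by Monte Carlo. This is standard coalescent theory, not SimRA's multi-population formulas.

    ```python
    import random

    # With k lineages, the coalescent waiting time is Exp(k(k-1)/2), so
    # E[T_MRCA] = sum over k of 2/(k(k-1)) = 2(1 - 1/n).
    def simulate_tmrca(n: int) -> float:
        t, k = 0.0, n
        while k > 1:
            rate = k * (k - 1) / 2.0          # coalescence rate with k lineages
            t += random.expovariate(rate)     # exponential waiting time
            k -= 1
        return t

    n, trials = 10, 20000
    empirical = sum(simulate_tmrca(n) for _ in range(trials)) / trials
    closed_form = 2.0 * (1.0 - 1.0 / n)
    print(f"empirical E[T_MRCA]: {empirical:.3f}, closed form: {closed_form:.3f}")
    ```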

  12. OpenSHS: Open Smart Home Simulator.

    PubMed

    Alshammari, Nasser; Alshammari, Talal; Sedky, Mohamed; Champion, Justin; Bauer, Carolin

    2017-05-02

    This paper develops a new hybrid, open-source, cross-platform 3D smart home simulator, OpenSHS, for dataset generation. OpenSHS offers an opportunity for researchers in the field of the Internet of Things (IoT) and machine learning to test and evaluate their models. Following a hybrid approach, OpenSHS combines advantages from both interactive and model-based approaches. This approach reduces the time and effort required to generate simulated smart home datasets. We have designed a replication algorithm for extending and expanding a dataset. A small sample dataset produced by OpenSHS can be extended without affecting the logical order of the events. The replication provides a solution for generating large representative smart home datasets. We have built an extensible library of smart devices that facilitates the simulation of current and future smart home environments. Our tool divides the dataset generation process into three distinct phases: first, design: the researcher designs the initial virtual environment by building the home, importing smart devices and creating contexts; second, simulation: the participant simulates his/her context-specific events; and third, aggregation: the researcher applies the replication algorithm to generate the final dataset. We conducted a study to assess the ease of use of our tool on the System Usability Scale (SUS).

  13. Simulation supported POD for RT test case-concept and modeling

    NASA Astrophysics Data System (ADS)

    Gollwitzer, C.; Bellon, C.; Deresch, A.; Ewert, U.; Jaenisch, G.-R.; Zscherpel, U.; Mistral, Q.

    2012-05-01

    Within the framework of the European project PICASSO, the radiographic simulator aRTist (analytical Radiographic Testing inspection simulation tool) developed by BAM has been extended for reliability assessment of film and digital radiography. NDT of safety-relevant components in the aerospace industry requires proof of the probability of detection (POD) of the inspection. Modeling tools can reduce the expense of such extended, time-consuming NDT trials if the simulation results fit the experiment. Our analytic simulation tool consists of three modules for the description of the radiation source, the interaction of radiation with test pieces and flaws, and the detection process, with special focus on film and digital industrial radiography. It features high processing speed with near-interactive frame rates and a high level of realism. A concept has been developed, as well as a software extension for reliability investigations, completed by a user interface for planning automatic simulations with varying parameters and defects. Furthermore, an automatic image analysis procedure is included to evaluate defect visibility. The radiographic modeling from 3D CAD of aero engine components and quality test samples is compared as a precondition for real trials. This enables the evaluation and optimization of film replacement for the application of modern digital equipment for economical NDT and defined POD.

  14. RF wave simulation for cold edge plasmas using the MFEM library

    NASA Astrophysics Data System (ADS)

    Shiraiwa, S.; Wright, J. C.; Bonoli, P. T.; Kolev, T.; Stowell, M.

    2017-10-01

    A newly developed generic electro-magnetic (EM) simulation tool for modeling RF wave propagation in SOL plasmas is presented. The primary motivation of this development is to extend the domain partitioning approach for incorporating arbitrarily shaped SOL plasmas and antennas into the TORIC core ICRF solver, previously demonstrated in 2D geometry [S. Shiraiwa, et al., "HISTORIC: extending core ICRF wave simulation to include realistic SOL plasmas", Nucl. Fusion, in press], to larger and more complicated simulations by including a realistic 3D antenna and integrating an RF rectified sheath potential model. Such an extension requires a scalable, high-fidelity 3D edge plasma wave simulation. We used MFEM [http://mfem.org], an open-source scalable C++ finite element method library, developed a Python wrapper for MFEM (PyMFEM), and then built a radio frequency (RF) wave physics module in Python. This approach allows a physics layer to be built rapidly while keeping the physics implementation separate from the numerical FEM implementation. An interactive modeling interface was built on pScope [S. Shiraiwa, et al., Fusion Eng. Des. 112, 835] to work with an RF simulation model in a complicated geometry.
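
    The sketch below is not PyMFEM or the authors' module; it is a minimal one-dimensional frequency-domain stand-in for the same class of problem, solving d²E/dx² + k₀²ε(x)E = 0 across a density ramp with a driven (antenna) boundary and an absorbing layer, with all parameters assumed.

    ```python
    import numpy as np

    # 1-D Helmholtz sketch of an RF wave launched into a cold edge plasma.
    N = 400
    L = 1.0                                  # domain length [m]
    x = np.linspace(0.0, L, N)
    dx = x[1] - x[0]
    k0 = 40.0                                # vacuum wavenumber [1/m] (assumed)

    # density ramp: eps falls from vacuum (1) through cutoff and beyond
    eps = 1.0 - 1.5 * (x / L) + 0j
    eps[x > 0.85 * L] += 0.3j                # absorbing layer near the far wall

    A = np.zeros((N, N), dtype=complex)
    rhs = np.zeros(N, dtype=complex)
    for i in range(1, N - 1):
        A[i, i - 1] = A[i, i + 1] = 1.0 / dx**2
        A[i, i] = -2.0 / dx**2 + k0**2 * eps[i]
    A[0, 0] = 1.0; rhs[0] = 1.0              # antenna drive: E = 1 at the edge
    A[-1, -1] = 1.0                          # E = 0 behind the absorber

    E = np.linalg.solve(A, rhs)
    i_cut = np.argmax(eps.real < 0)          # wave is evanescent past cutoff
    print(f"cutoff at x = {x[i_cut]:.2f} m; |E| there: {abs(E[i_cut]):.2f}")
    ```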

  15. OpenSHS: Open Smart Home Simulator

    PubMed Central

    Alshammari, Nasser; Alshammari, Talal; Sedky, Mohamed; Champion, Justin; Bauer, Carolin

    2017-01-01

    This paper develops a new hybrid, open-source, cross-platform 3D smart home simulator, OpenSHS, for dataset generation. OpenSHS offers an opportunity for researchers in the field of the Internet of Things (IoT) and machine learning to test and evaluate their models. Following a hybrid approach, OpenSHS combines advantages from both interactive and model-based approaches. This approach reduces the time and effort required to generate simulated smart home datasets. We have designed a replication algorithm for extending and expanding a dataset. A small sample dataset produced by OpenSHS can be extended without affecting the logical order of the events. The replication provides a solution for generating large representative smart home datasets. We have built an extensible library of smart devices that facilitates the simulation of current and future smart home environments. Our tool divides the dataset generation process into three distinct phases: first, design: the researcher designs the initial virtual environment by building the home, importing smart devices and creating contexts; second, simulation: the participant simulates his/her context-specific events; and third, aggregation: the researcher applies the replication algorithm to generate the final dataset. We conducted a study to assess the ease of use of our tool on the System Usability Scale (SUS). PMID:28468330

  16. Simulation of groundwater flow and streamflow depletion in the Branch Brook, Merriland River, and parts of the Mousam River watersheds in southern Maine

    USGS Publications Warehouse

    Nielsen, Martha G.; Locke, Daniel B.

    2015-01-01

    The study evaluated two different methods of calculating in-stream flow requirements for Branch Brook and the Merriland River: a set of statewide equations used to calculate monthly median flows, and the MOVE.1 record-extension technique applied to site-specific streamflow measurements. The August median in-stream flow requirement in the Merriland River was calculated as 7.18 ft3/s using the statewide equations but as 3.07 ft3/s using the MOVE.1 analysis. In Branch Brook, the August median in-stream flow requirements were calculated as 20.3 ft3/s using the statewide equations and 11.8 ft3/s using the MOVE.1 analysis. In each case, using site-specific data yields an estimate of in-stream flow that is much lower than the estimate provided by the statewide equations.
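
    MOVE.1 (Maintenance Of Variance Extension, type 1) estimates flows at a short-record site from an index site by matching the means and standard deviations of the concurrent records, conventionally on log-transformed flows. A sketch with synthetic concurrent measurements follows; the numbers are illustrative, not the study's data.

    ```python
    import numpy as np

    # MOVE.1 record extension in its standard form (Hirsch, 1982):
    # y_hat = ybar + (s_y / s_x) * (x - xbar), on log-transformed flows.
    x_concurrent = np.log([12.0, 25.0, 7.5, 40.0, 18.0, 9.2])   # index site
    y_concurrent = np.log([4.1, 9.0, 2.2, 15.5, 6.3, 2.9])      # short-record site

    xbar, ybar = x_concurrent.mean(), y_concurrent.mean()
    sx, sy = x_concurrent.std(ddof=1), y_concurrent.std(ddof=1)

    def move1(x_new: float) -> float:
        """Estimate the short-record-site flow for an index-site flow x_new.
        Assumes the sites are positively correlated, as streamflows usually are."""
        return float(np.exp(ybar + (sy / sx) * (np.log(x_new) - xbar)))

    # Extend the record: estimate the site's flow when the index site reads 30
    print(f"MOVE.1 estimate: {move1(30.0):.2f}")
    ```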

  17. Display/control requirements for automated VTOL aircraft

    NASA Technical Reports Server (NTRS)

    Hoffman, W. C.; Kleinman, D. L.; Young, L. R.

    1976-01-01

    A systematic design methodology for pilot displays in advanced commercial VTOL aircraft was developed and refined. The analyst is provided with a step-by-step procedure for conducting conceptual display/control configurations evaluations for simultaneous monitoring and control pilot tasks. The approach consists of three phases: formulation of information requirements, configuration evaluation, and system selection. Both the monitoring and control performance models are based upon the optimal control model of the human operator. Extensions to the conventional optimal control model required in the display design methodology include explicit optimization of control/monitoring attention; simultaneous monitoring and control performance predictions; and indifference threshold effects. The methodology was applied to NASA's experimental CH-47 helicopter in support of the VALT program. The CH-47 application examined the system performance of six flight conditions. Four candidate configurations are suggested for evaluation in pilot-in-the-loop simulations and eventual flight tests.

  18. Flight test experience with high-alpha control system techniques on the F-14 airplane

    NASA Technical Reports Server (NTRS)

    Gera, J.; Wilson, R. J.; Enevoldson, E. K.; Nguyen, L. T.

    1981-01-01

    Improved handling qualities of fighter aircraft at high angles of attack can be provided by various stability and control augmentation techniques. NASA and the U.S. Navy are conducting a joint flight demonstration of these techniques on an F-14 airplane. This paper reports on the flight test experience with a newly designed lateral-directional control system which suppresses such high angle of attack handling qualities problems as roll reversal, wing rock, and directional divergence while simultaneously improving departure/spin resistance. The technique of integrating a piloted simulation into the flight program was used extensively in this program. This technique had not been applied previously to high angle of attack testing and required the development of a valid model to simulate the test airplane at extremely high angles of attack.

  19. Stochastic soil water balance under seasonal climates

    PubMed Central

    Feng, Xue; Porporato, Amilcare; Rodriguez-Iturbe, Ignacio

    2015-01-01

    The analysis of soil water partitioning in seasonally dry climates necessarily requires careful consideration of the periodic climatic forcing at the intra-annual timescale in addition to daily scale variabilities. Here, we introduce three new extensions to a stochastic soil moisture model which yield the seasonal evolution of soil moisture and relevant hydrological fluxes. These approximations allow seasonal climatic forcings (e.g., rainfall and potential evapotranspiration) to be fully resolved, extending the analysis of soil water partitioning to account explicitly for the seasonal amplitude and the phase difference between the climatic forcings. The results provide accurate descriptions of probabilistic soil moisture dynamics under seasonal climates without requiring extensive numerical simulations. We also find that the transfer of soil moisture from the wet to the dry season is responsible for hysteresis in the hydrological response, showing asymmetrical trajectories in the mean soil moisture and in the transient Budyko's curves during the 'dry-down' versus the 'rewetting' phases of the year. Furthermore, in some dry climates where rainfall and potential evapotranspiration are in phase, annual evapotranspiration can be shown to increase because of inter-seasonal soil moisture transfer, highlighting the importance of soil water storage in the seasonal context. PMID:25663808
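
    In the spirit of the framework above (though far simpler than the paper's analytical treatment), the sketch below runs a daily soil water bucket with marked-Poisson rainfall and a seasonally varying evapotranspiration demand; every parameter is an illustrative assumption.

    ```python
    import numpy as np

    # Daily bucket model: Poisson storm arrivals with exponential depths,
    # linear evapotranspiration losses, seasonal forcing out of phase.
    rng = np.random.default_rng(0)
    nZr = 200.0                       # active soil water storage [mm] (assumed)
    days = 365 * 20
    s = np.empty(days)
    s[0] = 0.5                        # relative soil moisture, 0..1

    for t in range(1, days):
        doy = t % 365
        lam = 0.25 + 0.15 * np.cos(2 * np.pi * doy / 365)    # storms/day
        et_max = 4.0 - 2.0 * np.cos(2 * np.pi * doy / 365)   # mm/day
        rain = rng.exponential(10.0) if rng.random() < lam else 0.0  # mm/storm
        ds = (rain - et_max * s[t - 1]) / nZr
        s[t] = min(max(s[t - 1] + ds, 0.0), 1.0)   # excess -> runoff/leakage

    years = s.reshape(-1, 365)
    print(f"mean wet-season s: {years[:, :90].mean():.2f}, "
          f"mean dry-season s: {years[:, 180:270].mean():.2f}")
    ```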

  20. High frequency direct drive generation using white noise sources

    NASA Astrophysics Data System (ADS)

    Frazier, S.; Sebacher, K.; Lawry, D.; Prather, W.; Hoffer, G.

    1994-12-01

    Damped-sinusoid direct drive injection on interconnecting cable bundles between subsystems has long been used as a technique for determining susceptibility to electromagnetic transients in military weapon systems. Questions arise, however, about the adequacy of this method of individually injected single sinusoids in assuring subsystem strength against broad-band threats. This issue has recently been raised in the latest revision of MIL-STD-461, which requires that subsystems exhibit no malfunctions when subjected to a repetitive square wave pulse with fast rise and fall times (CS115). An extension to this approach would be to test subsystems using arbitrary waveforms. In recent years, arbitrary waveform generators (AWGs) have been used to duplicate, with a high degree of fidelity, the waveforms measured on cable bundles in a system illuminated by fields in a system-level EMP simulator. However, the operating speeds of present AWGs do not allow the extension of this approach to meet new threats such as MIL-STD-2169A. A novel alternative approach for generating the required signals, being developed in a cooperative effort between the Naval Air Warfare Center and Phillips Laboratory, is the use of white noise signals conditioned in such a manner as to produce the desired direct drive waveforms.
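
    "Conditioning" a white-noise source amounts to shaping its spectrum toward the desired drive band; the sketch below band-limits broadband noise with a Butterworth filter as a generic illustration (the sample rate and band edges are assumed, not the program's specifications).

    ```python
    import numpy as np
    from scipy.signal import butter, lfilter

    # Shape broadband noise into a band-limited drive signal.
    fs = 2e9                                    # samples/s (assumed)
    t = np.arange(0, 1e-6, 1 / fs)              # 1 microsecond record
    noise = np.random.default_rng(5).standard_normal(t.size)

    # 4th-order Butterworth bandpass over an assumed 50-400 MHz drive band
    b, a = butter(4, [50e6, 400e6], btype="bandpass", fs=fs)
    drive = lfilter(b, a, noise)
    print(f"raw rms: {noise.std():.2f}, band-limited rms: {drive.std():.2f}")
    ```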

  1. Imaging of earthquake faults using small UAVs as a pathfinder for air and space observations

    USGS Publications Warehouse

    Donnellan, Andrea; Green, Joseph; Ansar, Adnan; Aletky, Joseph; Glasscoe, Margaret; Ben-Zion, Yehuda; Arrowsmith, J. Ramón; DeLong, Stephen B.

    2017-01-01

    Large earthquakes cause billions of dollars in damage and extensive loss of life and property. Geodetic and topographic imaging provide measurements of transient and long-term crustal deformation needed to monitor fault zones and understand earthquakes. Earthquake-induced strain and rupture characteristics are expressed in topographic features imprinted on the landscapes of fault zones. Small UAVs provide an efficient and flexible means to collect multi-angle imagery to reconstruct fine scale fault zone topography and provide surrogate data to determine requirements for and to simulate future platforms for air- and space-based multi-angle imaging.

  2. An Electron/Photon/Relaxation Data Library for MCNP6

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hughes, III, H. Grady

    The capabilities of the MCNP6 Monte Carlo code in simulation of electron transport, photon transport, and atomic relaxation have recently been significantly expanded. The enhancements include not only the extension of existing data and methods to lower energies, but also the introduction of new categories of data and methods. Support of these new capabilities has required major additions to and redesign of the associated data tables. In this paper we present the first complete documentation of the contents and format of the new electron-photon-relaxation data library now available with the initial production release of MCNP6.

  3. Dissipative preparation of entangled many-body states with Rydberg atoms

    NASA Astrophysics Data System (ADS)

    Roghani, Maryam; Weimer, Hendrik

    2018-07-01

    We investigate a one-dimensional atomic lattice laser-driven to a Rydberg state, in which engineered dissipation channels lead to entanglement in the many-body system. In particular, we demonstrate the efficient generation of ground states of a frustration-free Hamiltonian, as well as states closely related to W states. We discuss the realization of the required coherent and dissipative terms, and we perform extensive numerical simulations characterizing the fidelity of the state preparation procedure. We identify the optimum parameters for high fidelity entanglement preparation and investigate the scaling with the size of the system.

  4. NASA supercritical laminar flow control airfoil experiment

    NASA Technical Reports Server (NTRS)

    Harvey, W. D.

    1982-01-01

    The design and goals of experimental investigations of supercritical LFC airfoils conducted in the NASA Langley 8-ft Transonic Pressure Tunnel beginning in March 1982 are reviewed. Topics addressed include laminarization aspects; flow-quality requirements; simulation of flight parameters; the setup of screens, honeycomb, and sonic throat; the design cycle; theoretical pressure distributions and shock-free limits; drag divergence and stability analysis; and the LFC suction system. Consideration is given to the LFC airfoil model, the air-flow control system, airfoil-surface instrumentation, liner design and hardware, and test options. Extensive diagrams, drawings, graphs, photographs, and tables of numerical data are provided.

  5. Algorithm and code development for unsteady three-dimensional Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Obayashi, Shigeru

    1994-01-01

    Aeroelastic tests involve extensive cost and risk. An aeroelastic wind-tunnel experiment is an order of magnitude more expensive than a parallel experiment involving only aerodynamics. By complementing the wind-tunnel experiments with numerical simulations, the overall cost of aircraft development can be considerably reduced. In order to accurately compute aeroelastic phenomena it is necessary to solve the unsteady Euler/Navier-Stokes equations simultaneously with the structural equations of motion. These equations accurately describe the flow phenomena for aeroelastic applications. At ARC a code, ENSAERO, is being developed for computing the unsteady aerodynamics and aeroelasticity of aircraft, and it solves the Euler/Navier-Stokes equations. The purpose of this cooperative agreement was to enhance ENSAERO in both algorithmic and geometric capabilities. During the last five years, the algorithms of the code have been enhanced extensively by using high-resolution upwind algorithms and efficient implicit solvers. The zonal capability of the code has been extended from a one-to-one grid interface to a mismatching unsteady zonal interface. The geometric capability of the code has been extended from a single oscillating wing case to a full-span wing-body configuration with oscillating control surfaces. Each time a new capability was added, a proper validation case was simulated, and the capability of the code was demonstrated.

  6. Extensions to decision curve analysis, a novel method for evaluating diagnostic tests, prediction models and molecular markers

    PubMed Central

    Vickers, Andrew J; Cronin, Angel M; Elkin, Elena B; Gonen, Mithat

    2008-01-01

    Background: Decision curve analysis is a novel method for evaluating diagnostic tests, prediction models and molecular markers. It combines the mathematical simplicity of accuracy measures, such as sensitivity and specificity, with the clinical applicability of decision analytic approaches. Most critically, decision curve analysis can be applied directly to a data set, and does not require the sort of external data on costs, benefits and preferences typically required by traditional decision analytic techniques. Methods: In this paper we present several extensions to decision curve analysis including correction for overfit, confidence intervals, application to censored data (including competing risk) and calculation of decision curves directly from predicted probabilities. All of these extensions are based on straightforward methods that have previously been described in the literature for application to analogous statistical techniques. Results: Simulation studies showed that repeated 10-fold cross-validation provided the best method for correcting a decision curve for overfit. The method for applying decision curves to censored data had little bias and coverage was excellent; for competing risk, decision curves were appropriately affected by the incidence of the competing risk and the association between the competing risk and the predictor of interest. Calculation of decision curves directly from predicted probabilities led to a smoothing of the decision curve. Conclusion: Decision curve analysis can be easily extended to many of the applications common to performance measures for prediction models. Software to implement decision curve analysis is provided. PMID:19036144
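
    The core quantity of decision curve analysis is the net benefit at a threshold probability pt, NB(pt) = TP/n - (FP/n)·pt/(1 - pt); the sketch below evaluates it for a risk model and for the treat-everyone strategy, using synthetic predictions and outcomes rather than the paper's data.

    ```python
    import numpy as np

    # Net benefit across threshold probabilities for a risk model.
    rng = np.random.default_rng(1)
    n = 2000
    risk = rng.beta(2, 5, size=n)                 # model-predicted probabilities
    outcome = rng.random(n) < risk                # simulated true events

    def net_benefit(pred, y, pt: float) -> float:
        treat = pred >= pt                        # "treat" everyone above pt
        tp = np.sum(treat & y) / len(y)
        fp = np.sum(treat & ~y) / len(y)
        return tp - fp * pt / (1.0 - pt)          # weight FPs by odds of pt

    for pt in (0.05, 0.10, 0.20, 0.40):
        nb_model = net_benefit(risk, outcome, pt)
        nb_all = net_benefit(np.ones(n), outcome, pt)   # treat-all strategy
        print(f"pt={pt:.2f}  model NB={nb_model:+.3f}  treat-all NB={nb_all:+.3f}")
    ```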

  7. Extensions to decision curve analysis, a novel method for evaluating diagnostic tests, prediction models and molecular markers.

    PubMed

    Vickers, Andrew J; Cronin, Angel M; Elkin, Elena B; Gonen, Mithat

    2008-11-26

    Decision curve analysis is a novel method for evaluating diagnostic tests, prediction models and molecular markers. It combines the mathematical simplicity of accuracy measures, such as sensitivity and specificity, with the clinical applicability of decision analytic approaches. Most critically, decision curve analysis can be applied directly to a data set, and does not require the sort of external data on costs, benefits and preferences typically required by traditional decision analytic techniques. In this paper we present several extensions to decision curve analysis including correction for overfit, confidence intervals, application to censored data (including competing risk) and calculation of decision curves directly from predicted probabilities. All of these extensions are based on straightforward methods that have previously been described in the literature for application to analogous statistical techniques. Simulation studies showed that repeated 10-fold cross-validation provided the best method for correcting a decision curve for overfit. The method for applying decision curves to censored data had little bias and coverage was excellent; for competing risk, decision curves were appropriately affected by the incidence of the competing risk and the association between the competing risk and the predictor of interest. Calculation of decision curves directly from predicted probabilities led to a smoothing of the decision curve. Decision curve analysis can be easily extended to many of the applications common to performance measures for prediction models. Software to implement decision curve analysis is provided.

  8. Mixed Criticality Scheduling for Industrial Wireless Sensor Networks

    PubMed Central

    Jin, Xi; Xia, Changqing; Xu, Huiting; Wang, Jintao; Zeng, Peng

    2016-01-01

    Wireless sensor networks (WSNs) have been widely used in industrial systems. Their real-time performance and reliability are fundamental to industrial production. Many works have studied these two aspects, but focus only on single-criticality WSNs. Mixed-criticality requirements exist in many advanced applications in which different data flows have different levels of importance (or criticality). In this paper, first, we propose a scheduling algorithm which guarantees the real-time performance and reliability requirements of data flows with different levels of criticality. The algorithm supports centralized optimization and adaptive adjustment. It is able to improve both scheduling performance and flexibility. Then, we provide the schedulability test through rigorous theoretical analysis. We conduct extensive simulations, and the results demonstrate that the proposed scheduling algorithm and analysis significantly outperform existing ones. PMID:27589741
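
    The abstract leaves the algorithm's details to the paper; the sketch below shows only the basic mixed-criticality dispatching idea, higher criticality first and earliest deadline within a level, with hypothetical flows (the paper's actual algorithm and schedulability analysis are more involved).

    ```python
    import heapq

    # Criticality-then-deadline dispatching for network flows.
    # Each flow: (criticality, deadline, id); higher criticality wins,
    # ties broken by earlier deadline (EDF within a criticality level).
    flows = [
        (1, 40, "temperature"),   # low criticality
        (2, 30, "pressure"),      # high criticality
        (2, 25, "valve-cmd"),     # high criticality, tighter deadline
        (1, 10, "logging"),
    ]
    ready = [(-crit, dl, name) for crit, dl, name in flows]
    heapq.heapify(ready)          # min-heap: most critical, earliest deadline

    slot = 0
    while ready:
        _, dl, name = heapq.heappop(ready)
        print(f"slot {slot}: transmit {name} (deadline {dl})")
        slot += 1
    ```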

  9. Machine learning based cloud mask algorithm driven by radiative transfer modeling

    NASA Astrophysics Data System (ADS)

    Chen, N.; Li, W.; Tanikawa, T.; Hori, M.; Shimada, R.; Stamnes, K. H.

    2017-12-01

    Cloud detection is a critically important first step required to derive many satellite data products. Traditional threshold-based cloud mask algorithms require a complicated design process and fine-tuning for each sensor, and have difficulty over snow/ice-covered areas. With the advance of computational power and machine learning techniques, we have developed a new algorithm based on a neural network classifier driven by extensive radiative transfer modeling. Statistical validation results obtained by using collocated CALIOP and MODIS data show that its performance is consistent over different ecosystems and significantly better than the MODIS Cloud Mask (MOD35 C6) during the winter seasons over mid-latitude snow-covered areas. Simulations using a reduced number of satellite channels also show satisfactory results, indicating its flexibility to be configured for different sensors.
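
    The sketch below mirrors the approach at toy scale: a small neural network trained on synthetic "radiative-transfer" features in place of the authors' extensive simulated training set. The feature distributions and network size are assumptions.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split

    # Synthetic two-channel training set; the assumed behavior is that clouds
    # brighten the visible band and cool the thermal-infrared band.
    rng = np.random.default_rng(42)
    n = 5000
    cloudy = rng.random(n) < 0.5
    vis = np.where(cloudy, rng.normal(0.6, 0.1, n), rng.normal(0.2, 0.08, n))
    ir = np.where(cloudy, rng.normal(240, 15, n), rng.normal(285, 10, n))
    X = np.column_stack([vis, ir])
    X_tr, X_te, y_tr, y_te = train_test_split(X, cloudy, test_size=0.3,
                                              random_state=0)

    clf = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=500,
                        random_state=0)
    clf.fit(X_tr, y_tr)
    print(f"held-out cloud-mask accuracy: {clf.score(X_te, y_te):.3f}")
    ```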

  10. Live tree carbon stock equivalence of fire and fuels extension to the Forest Vegetation Simulator and Forest Inventory and Analysis approaches

    Treesearch

    James E. Smith; Coeli M. Hoover

    2017-01-01

    The carbon reports in the Fire and Fuels Extension (FFE) to the Forest Vegetation Simulator (FVS) provide two alternate approaches to carbon estimates for live trees (Rebain 2010). These are (1) the FFE biomass algorithms, which are volume-based biomass equations, and (2) the Jenkins allometric equations (Jenkins and others 2003), which are diameter based. Here, we...
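
    The Jenkins approach predicts aboveground live biomass from diameter alone via bm = exp(b0 + b1·ln(dbh)); the sketch below uses that form with placeholder coefficients of the published magnitude (the species-group values are tabulated in Jenkins and others 2003) and a conventional factor of 0.5 from dry biomass to carbon.

    ```python
    import math

    # Jenkins-style diameter-based allometry: dbh in cm, biomass in kg.
    # Coefficients are illustrative placeholders, not a published species group.
    B0, B1 = -2.5, 2.43

    def aboveground_biomass_kg(dbh_cm: float) -> float:
        return math.exp(B0 + B1 * math.log(dbh_cm))

    def live_tree_carbon_kg(dbh_cm: float) -> float:
        # carbon is conventionally taken as ~half of dry biomass
        return 0.5 * aboveground_biomass_kg(dbh_cm)

    print(f"30 cm tree: {live_tree_carbon_kg(30.0):.0f} kg C")
    ```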

  11. Evaluating indoor exposure modeling alternatives for LCA: A case study in the vehicle repair industry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Demou, Evangelia; Hellweg, Stefanie; Wilson, Michael P.

    2009-05-01

    We evaluated three exposure models with data obtained from measurements among workers who use "aerosol" solvent products in the vehicle repair industry and with field experiments using these products to simulate the same exposure conditions. The three exposure models were (1) the homogeneously mixed one-box model, (2) the multi-zone model, and (3) the eddy-diffusion model. Temporally differentiated real-time breathing zone volatile organic compound (VOC) concentration measurements, integrated far-field area samples, and simulated experiments were used in estimating parameters such as emission rates, diffusivity, and near-field dimensions. We assessed differences in model input requirements and their efficacy for predictive modeling. The one-box model was not able to resemble the temporal profile of exposure concentrations, but it performed well for time-weighted exposure over extended time periods. However, this model required an adjustment for spatial concentration gradients. Multi-zone models and diffusion models may solve this problem. However, we found that the reliable use of both these models requires extensive field data to appropriately define pivotal parameters such as diffusivity or near-field dimensions. We conclude that it is difficult to apply these models for predicting VOC exposures in the workplace. However, for comparative exposure scenarios in life-cycle assessment they may be useful.
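
    The homogeneously mixed one-box model has a closed-form response for a constant source, C(t) = (G/Q)(1 - e^(-Qt/V)); the sketch below evaluates it with illustrative emission, volume, and ventilation values, not the study's field-derived parameters.

    ```python
    import numpy as np

    # Well-mixed one-box model: dC/dt = G/V - (Q/V) C, C(0) = 0.
    G = 50.0      # VOC emission rate [mg/min] (assumed)
    V = 60.0      # room volume [m^3] (assumed)
    Q = 3.0       # ventilation rate [m^3/min] (assumed)

    t = np.linspace(0.0, 120.0, 7)                # minutes
    C = (G / Q) * (1.0 - np.exp(-Q * t / V))      # mg/m^3
    for ti, ci in zip(t, C):
        print(f"t={ti:5.0f} min  C={ci:6.2f} mg/m^3")
    print(f"steady state G/Q = {G / Q:.2f} mg/m^3")
    ```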

  12. Bifurcation of the Kuroshio Extension at the Shatsky Rise

    NASA Astrophysics Data System (ADS)

    Hurlburt, Harley E.; Metzger, E. Joseph

    1998-04-01

    A 1/16° six-layer Pacific Ocean model north of 20°S is used to investigate the bifurcation of the Kuroshio Extension at the main Shatsky Rise and the pathway of the northern branch from the bifurcation to the subarctic front. Upper ocean-topographic coupling via a mixed barotropic-baroclinic instability is essential to this bifurcation and to the formation and mean pathway of the northern branch as are several aspects of the Shatsky Rise complex of topography and the latitude of the Kuroshio Extension in relation to the topography. The flow instabilities transfer energy to the abyssal layer where it is constrained by geostrophic contours of the bottom topography. The topographically constrained abyssal currents in turn steer upper ocean currents, which do not directly impinge on the bottom topography. This includes steering of mean pathways. Obtaining sufficient coupling requires very fine resolution of mesoscale variability and sufficient eastward penetration of the Kuroshio as an unstable inertial jet. Resolution of 1/8° for each variable was not sufficient in this case. The latitudinal extent of the main Shatsky Rise (31°N-36°N) and the shape of the downward slope on the north side are crucial to the bifurcation at the main Shatsky Rise, with both branches passing north of the peak. The well-defined, relatively steep and straight eastern edge of the Shatsky Rise topographic complex (30°N-42°N) and the southwestward abyssal flow along it play a critical role in forming the rest of the Kuroshio northern branch which flows in the opposite direction. A deep pass between the main Shatsky Rise and the rest of the ridge to the northeast helps to link the northern fork of the bifurcation at the main rise to the rest of the northern branch. Two 1/16° "identical twin" interannual simulations forced by daily winds 1981-1995 show that the variability in this region is mostly nondeterministic on all timescales that could be examined (up to 7 years in these 15-year simulations). A comparison of climatologically forced and interannual simulations over the region 150°E-180°E, 29°N-47°N showed greatly enhanced abyssal and upper ocean eddy kinetic energy and much stronger mean abyssal currents east of the Emperor Seamount Chain (about 170°E) in the interannual simulations but little difference west of 170°E. This greatly enhanced the upper ocean-topographic coupling in the interannual simulations east of 170°E. This coupling affected the latitudinal positioning of the eastward branches of the Kuroshio Extension and tended to reduce latitudinal movement compared to the climatologically forced simulation, including a particularly noticeable impact from the Hess Rise. Especially in the interannual simulations, effects of almost all topographic features in the region could be seen in the mean upper ocean currents (more so than in instantaneous currents), including meanders and bifurcations of major and minor currents, closed circulations, and impacts from depressions and rises of large and small amplitudes.

  13. Space Suit Portable Life Support System (PLSS) 2.0 Unmanned Vacuum Environment Testing

    NASA Technical Reports Server (NTRS)

    Watts, Carly; Vogel, Matthew

    2016-01-01

    For the first time in more than 30 years, an advanced space suit Portable Life Support System (PLSS) design was operated inside a vacuum chamber representative of the flight operating environment. The test article, PLSS 2.0, was the second system-level integrated prototype of the advanced PLSS design, following the PLSS 1.0 Breadboard that was developed and tested throughout 2011. Whereas PLSS 1.0 included five technology development components, with the balance of the system simulated using commercial-off-the-shelf items, PLSS 2.0 featured first-generation or later prototypes for all components except instrumentation, tubing, and fittings. Developed throughout 2012, PLSS 2.0 was the first attempt to package the system into a flight-like representative volume. PLSS 2.0 testing included an extensive functional evaluation known as Pre-Installation Acceptance (PIA) testing; Human-in-the-Loop testing, in which the PLSS 2.0 prototype was integrated via umbilicals to a manned prototype space suit for 19 two-hour simulated EVAs; and unmanned vacuum environment testing. Unmanned vacuum environment testing took place from 1/9/15 to 7/9/15 with PLSS 2.0 located inside a vacuum chamber. Test sequences included performance mapping of several components, carbon dioxide removal evaluations at simulated intravehicular activity (IVA) conditions, and a regulator pressure schedule assessment, and culminated with 25 simulated extravehicular activities (EVAs). During the unmanned vacuum environment test series, PLSS 2.0 accumulated 378 hours of integrated testing, including 291 hours of operation in a vacuum environment and 199 hours of simulated EVA time. The PLSS prototype performed nominally throughout the test series, with two notable exceptions, a pump failure and a Spacesuit Water Membrane Evaporator (SWME) leak, for which post-test failure investigations were performed. In addition to generating an extensive database of PLSS 2.0 performance data, achievements included requirements and operational concepts verification, as well as demonstration of vehicular interfaces, consumables sizing and recharge, and water quality control.

  14. mGrid: A load-balanced distributed computing environment for the remote execution of the user-defined Matlab code

    PubMed Central

    Karpievitch, Yuliya V; Almeida, Jonas S

    2006-01-01

    Background: Matlab, a powerful and productive language that allows for rapid prototyping, modeling, and simulation, is widely used in computational biology. Modeling and simulation of large biological systems often require more computational resources than are available on a single computer. Existing distributed computing environments like the Distributed Computing Toolbox, MatlabMPI, Matlab*G, and others allow for the remote (and possibly parallel) execution of Matlab commands with varying support for features like an easy-to-use application programming interface, load-balanced utilization of resources, extensibility over the wide area network, and minimal system administration skill requirements. However, all of these environments require some level of access to participating machines to manually distribute the user-defined libraries that the remote call may invoke. Results: mGrid augments the usual process distribution seen in other similar distributed systems by adding facilities for user code distribution. mGrid's client-side interface is an easy-to-use native Matlab toolbox that transparently executes user-defined code on remote machines (i.e., the user is unaware that the code is executing somewhere else). Run-time variables are automatically packed and distributed with the user-defined code, and automated load-balancing of remote resources enables smooth concurrent execution. mGrid is an open source environment. Apart from the programming language itself, all other components are also open source, freely available tools: light-weight PHP scripts and the Apache web server. Conclusion: Transparent, load-balanced distribution of user-defined Matlab toolboxes and rapid prototyping of many simple parallel applications can now be done with a single easy-to-use Matlab command. Because mGrid utilizes only Matlab, light-weight PHP scripts, and the Apache web server, installation and configuration are very simple. Moreover, the web-based infrastructure of mGrid allows for it to be easily extensible over the Internet. PMID:16539707
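
    As a rough illustration of the distribution pattern described above (not mGrid's actual Matlab/PHP interface), the following Python sketch shows the same idea using only standard-library tools: a user-defined function and its run-time arguments are packed, shipped to worker processes, and load-balanced automatically. The function name and task list are hypothetical.

        # Hedged sketch: mimics the mGrid idea (user code plus packed run-time
        # variables dispatched to load-balanced workers) with Python's standard
        # library. This is NOT mGrid's API; all names here are illustrative.
        from concurrent.futures import ProcessPoolExecutor

        def user_model(params):
            # stand-in for arbitrary user-defined simulation code
            a, b = params
            return a * b + 1.0

        if __name__ == "__main__":
            tasks = [(i, i + 1.0) for i in range(100)]   # run-time variables
            with ProcessPoolExecutor(max_workers=4) as pool:
                # map() packs each task, sends it to an idle worker, and
                # collects results; load balancing comes for free.
                results = list(pool.map(user_model, tasks))
            print(results[:5])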

  15. mGrid: a load-balanced distributed computing environment for the remote execution of the user-defined Matlab code.

    PubMed

    Karpievitch, Yuliya V; Almeida, Jonas S

    2006-03-15

    Matlab, a powerful and productive language that allows for rapid prototyping, modeling, and simulation, is widely used in computational biology. Modeling and simulation of large biological systems often require more computational resources than are available on a single computer. Existing distributed computing environments like the Distributed Computing Toolbox, MatlabMPI, Matlab*G, and others allow for the remote (and possibly parallel) execution of Matlab commands with varying support for features like an easy-to-use application programming interface, load-balanced utilization of resources, extensibility over the wide area network, and minimal system administration skill requirements. However, all of these environments require some level of access to participating machines to manually distribute the user-defined libraries that the remote call may invoke. mGrid augments the usual process distribution seen in other similar distributed systems by adding facilities for user code distribution. mGrid's client-side interface is an easy-to-use native Matlab toolbox that transparently executes user-defined code on remote machines (i.e., the user is unaware that the code is executing somewhere else). Run-time variables are automatically packed and distributed with the user-defined code, and automated load-balancing of remote resources enables smooth concurrent execution. mGrid is an open source environment. Apart from the programming language itself, all other components are also open source, freely available tools: light-weight PHP scripts and the Apache web server. Transparent, load-balanced distribution of user-defined Matlab toolboxes and rapid prototyping of many simple parallel applications can now be done with a single easy-to-use Matlab command. Because mGrid utilizes only Matlab, light-weight PHP scripts, and the Apache web server, installation and configuration are very simple. Moreover, the web-based infrastructure of mGrid allows for it to be easily extensible over the Internet.

  16. 75 FR 48921 - Administrative Guidance for Multistate Extension Activities and Integrated Research and Extension...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-12

    .... Section 105 of AREERA amended the Smith-Lever Act to require that a specified amount of agricultural... Hatch Act and Smith-Lever Act to require that a specified amount of agricultural research and extension... Smith- Lever Act funds on multistate extension activities and 25 percent on integrated research and...

  17. Unipolar distributions of junctional Myosin II identify cell stripe boundaries that drive cell intercalation throughout Drosophila axis extension

    PubMed Central

    Tetley, Robert J; Blanchard, Guy B; Fletcher, Alexander G; Adams, Richard J; Sanson, Bénédicte

    2016-01-01

    Convergence and extension movements elongate tissues during development. Drosophila germ-band extension (GBE) is one example, which requires active cell rearrangements driven by Myosin II planar polarization. Here, we develop novel computational methods to analyse the spatiotemporal dynamics of Myosin II during GBE, at the scale of the tissue. We show that initial Myosin II bipolar cell polarization gives way to unipolar enrichment at parasegmental boundaries and two further boundaries within each parasegment, concomitant with a doubling of cell number as the tissue elongates. These boundaries are the primary sites of cell intercalation, behaving as mechanical barriers and providing a mechanism for how cells remain ordered during GBE. Enrichment at parasegment boundaries during GBE is independent of Wingless signaling, suggesting pair-rule gene control. Our results are consistent with recent work showing that a combinatorial code of Toll-like receptors downstream of pair-rule genes contributes to Myosin II polarization via local cell-cell interactions. We propose an updated cell-cell interaction model for Myosin II polarization that we tested in a vertex-based simulation. DOI: http://dx.doi.org/10.7554/eLife.12094.001 PMID:27183005

  18. Non-destructive inspection in industrial equipment using robotic mobile manipulation

    NASA Astrophysics Data System (ADS)

    Maurtua, Iñaki; Susperregi, Loreto; Ansuategui, Ander; Fernández, Ane; Ibarguren, Aitor; Molina, Jorge; Tubio, Carlos; Villasante, Cristobal; Felsch, Torsten; Pérez, Carmen; Rodriguez, Jorge R.; Ghrissi, Meftah

    2016-05-01

    The MAINBOT project has developed service-robot-based applications to autonomously execute inspection tasks in extensive industrial plants, on equipment that is arranged horizontally (using ground robots) or vertically (climbing robots). The industrial objective has been to provide a means to help measure several physical parameters at multiple points with autonomous robots able to navigate and climb structures while handling non-destructive testing sensors. MAINBOT has validated the solutions in two solar thermal plants (cylindrical-parabolic collectors and central tower), which are very demanding from a mobile manipulation point of view, mainly due to their extent (e.g., a 50 MW thermal solar plant with 400 hectares, 400,000 mirrors, 180 km of absorber tubes, and a 140 m tall tower), the variability of conditions (outdoor, day-night), safety requirements, etc. Once the technology was validated in simulation, the system was deployed in real setups and different validation tests were carried out. In this paper, two of the achievements related to the ground mobile inspection system are presented: (1) autonomous navigation, localization, and planning algorithms to manage navigation over huge extents, and (2) non-destructive inspection operations: thermography-based detection algorithms to provide automatic inspection abilities to the robots.

  19. Hierarchical Testing with Automated Document Generation for Amanzi, ASCEM's Subsurface Flow and Reactive Transport Simulator

    NASA Astrophysics Data System (ADS)

    Moulton, J. D.; Steefel, C. I.; Yabusaki, S.; Castleton, K.; Scheibe, T. D.; Keating, E. H.; Freedman, V. L.

    2013-12-01

    The Advanced Simulation Capability for Environmental Management (ASCEM) program is developing an approach and open-source tool suite for standardized risk and performance assessments at legacy nuclear waste sites. These assessments use a graded and iterative approach, beginning with simplified, highly abstracted models and adding geometric and geologic complexity as understanding is gained. To build confidence in this assessment capability, extensive testing of the underlying tools is needed. Since the tools themselves, such as the subsurface flow and reactive-transport simulator Amanzi, are under active development, testing must be both hierarchical and highly automated. In this presentation we show how we have met these requirements by leveraging the Python-based open-source documentation system Sphinx together with several other open-source tools. Sphinx builds on the reStructuredText tool docutils, with important extensions that include high-quality formatting of equations and integrated plotting through matplotlib. This allows the documentation, as well as the input files for test, benchmark, and tutorial problems, to be maintained with the source code under a version control system. In addition, it enables developers to build documentation in several different formats (e.g., html and pdf) from a single source. We will highlight these features and discuss important benefits of this approach for Amanzi. In addition, we will show that some of ASCEM's other tools, such as the sampling provided by the Uncertainty Quantification toolset, are naturally leveraged to enable more comprehensive testing. Finally, we will highlight the integration of this hierarchical testing and documentation framework with our build system and tools (CMake, CTest, and CDash).
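
    The toolchain described here can be sketched in a few lines. The conf.py below is a minimal, hypothetical configuration (the project name and extension choices are assumptions, not Amanzi's actual settings) showing equation rendering and integrated matplotlib plotting from a single reStructuredText source tree:

        # Minimal, hypothetical Sphinx conf.py in the spirit described above.
        project = "amanzi-docs"        # assumed name, not the real project file
        master_doc = "index"
        extensions = [
            "sphinx.ext.mathjax",                   # high-quality equations (HTML)
            "matplotlib.sphinxext.plot_directive",  # plots generated at build time
        ]
        # One source tree, several output formats, e.g.:
        #   sphinx-build -b html  source/ build/html
        #   sphinx-build -b latex source/ build/latex   (then run pdflatex)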

  20. Ultra-high resolution electron microscopy

    DOE PAGES

    Oxley, Mark P.; Lupini, Andrew R.; Pennycook, Stephen J.

    2016-12-23

    The last two decades have seen dramatic advances in the resolution of the electron microscope, brought about by the successful correction of lens aberrations that previously limited resolution for most of its history. Here we briefly review these advances, the achievement of sub-Ångstrom resolution, and the ability to identify individual atoms, their bonding configurations, and even their dynamics and diffusion pathways. We then present a review of the basic physics of electron scattering, lens aberrations and their correction, and an approximate imaging theory for thin crystals which provides physical insight into the various different imaging modes. Then we proceed to describe a more exact imaging theory starting from Yoshioka's formulation and covering full image simulation methods using Bloch waves, the multislice formulation, and the frozen phonon/quantum excitation of phonons models. Delocalization of inelastic scattering has become an important limiting factor at atomic resolution. We therefore discuss this issue extensively, showing how the full-width-half-maximum is the appropriate measure for predicting image contrast, while the diameter containing 50% of the excitation is an important measure of the range of the interaction. These two measures can differ by a factor of 5 and are not a simple function of binding energy, so full image simulations are required to match experiment. The Z-dependence of annular dark field images is also discussed extensively, both for single atoms and for crystals, and we show that temporal incoherence must be included accurately if atomic species are to be identified through matching experimental intensities to simulations. Finally, we mention a few promising directions for future investigation.

  1. Test results of six-month test of two water electrolysis systems

    NASA Technical Reports Server (NTRS)

    Mills, E. S.; Wells, G. W.

    1972-01-01

    The two water electrolysis systems used in the NASA space station simulation 90-day manned test of a regenerative life support system were refurbished as required and subjected to 26 weeks of testing. The two electrolysis units are both promising systems for oxygen and hydrogen generation, and both needed extensive long-term testing to evaluate the performance of the respective cell designs and provide guidance for further development. Testing was conducted to evaluate performance in terms of current, pressure, variable oxygen demands, and orbital simulation. An automatic monitoring system was used to record, monitor, and print out performance data at one-minute, ten-minute, or one-hour intervals. Performance data are presented for each day of system operation for each module used during the day. Failures are analyzed, remedial action taken to eliminate problems is discussed, and recommendations for redesign for future space applications are stated.

  2. Effect of mutation at the interface of Trp-repressor dimeric protein: a steered molecular dynamics simulation.

    PubMed

    Miño, German; Baez, Mauricio; Gutierrez, Gonzalo

    2013-09-01

    The strength of key interfacial contacts that stabilize protein-protein interactions has been studied by computer simulation. Experimentally, changes in the interface are evaluated by generating specific mutations at one or more points of the protein structure. Here, such an evaluation is performed by means of steered molecular dynamics, using a dimeric model of the tryptophan repressor and in silico mutants as a test case. Analysis of four particular cases shows that, in principle, it is possible to distinguish between wild-type and mutant forms by examination of the total energy and force-extension profiles. In particular, detailed atomic-level structural analysis indicates that specific mutations at the interface of the dimeric model (positions 19 and 39) alter interactions that appear in the wild-type form of the tryptophan repressor, reducing the energy and force required to separate the two subunits.

  3. A class of finite-time dual neural networks for solving quadratic programming problems and its k-winners-take-all application.

    PubMed

    Li, Shuai; Li, Yangming; Wang, Zheng

    2013-03-01

    This paper presents a class of recurrent neural networks to solve quadratic programming problems. Different from most existing recurrent neural networks for solving quadratic programming problems, the proposed neural network model converges in finite time and the activation function is not required to be a hard-limiting function for finite convergence time. The stability, finite-time convergence property and the optimality of the proposed neural network for solving the original quadratic programming problem are proven in theory. Extensive simulations are performed to evaluate the performance of the neural network with different parameters. In addition, the proposed neural network is applied to solving the k-winner-take-all (k-WTA) problem. Both theoretical analysis and numerical simulations validate the effectiveness of our method for solving the k-WTA problem. Copyright © 2012 Elsevier Ltd. All rights reserved.
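
    To make the k-WTA application concrete, the sketch below integrates a simple single-dual-variable network in Python. It is a generic illustration of the dual-network idea under assumed dynamics and parameters, not the specific finite-time model proposed in the paper:

        # Generic dual-network sketch for k-WTA (not the paper's model):
        # one dual variable rho acts as common inhibition; at equilibrium
        # sum_i clip(v_i - rho, 0, 1) = k, so the k largest inputs carry
        # the highest activations and rounding recovers the winner set.
        import numpy as np

        def kwta(v, k, eps=0.01, steps=20000):
            rho = 0.0                                # dual variable
            x = np.zeros_like(v)
            for _ in range(steps):
                x = np.clip(v - rho, 0.0, 1.0)       # output neurons
                rho += eps * (x.sum() - k)           # more inhibition if >k winners
            return x

        v = np.array([0.3, 2.0, 1.1, -0.5, 1.7])
        print(np.round(kwta(v, k=2)))                # ~[0. 1. 0. 0. 1.]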

  4. Calculation of open and closed system elastic coefficients for multicomponent solids

    NASA Astrophysics Data System (ADS)

    Mishin, Y.

    2015-06-01

    Thermodynamic equilibrium in multicomponent solids subject to mechanical stresses is a complex nonlinear problem whose exact solution requires extensive computations. A few decades ago, Larché and Cahn proposed a linearized solution of the mechanochemical equilibrium problem by introducing the concept of open system elastic coefficients [Acta Metall. 21, 1051 (1973), 10.1016/0001-6160(73)90021-7]. Using the Ni-Al solid solution as a model system, we demonstrate that open system elastic coefficients can be readily computed by semigrand canonical Monte Carlo simulations in conjunction with the shape fluctuation approach. Such coefficients can be derived from a single simulation run, together with other thermodynamic properties needed for prediction of compositional fields in solid solutions containing defects. The proposed calculation approach enables streamlined solutions of mechanochemical equilibrium problems in complex alloys. Second order corrections to the linear theory are extended to multicomponent systems.

  5. Shield evaluation and performance testing at the USBM's Strategic Structures Testing Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barczak, T.M.; Gearhart, D.F.

    1996-12-31

    Historically, shield performance testing has been conducted by the support manufacturers at European facilities. The U.S. Bureau of Mines (USBM) has conducted extensive research in shield mechanics and is now opening its Strategic Structures Testing (SST) Laboratory to the mining industry for shield performance testing. The SST Laboratory provides unique shield testing capabilities using the Mine Roof Simulator (MRS) load frame. The MRS provides realistic and cost-effective shield evaluation by combining both vertical and horizontal loading into a single load cycle, whereas several load cycles would be required to obtain this loading in a static frame. In addition to these advantages, the USBM acts as an independent research organization to provide an unbiased assessment of shield performance. This paper describes the USBM's shield testing program, which is designed specifically to simulate in-service mining conditions using the unique capabilities of the SST Laboratory.

  6. Scanning electron microscope fine tuning using four-bar piezoelectric actuated mechanism

    NASA Astrophysics Data System (ADS)

    Hatamleh, Khaled S.; Khasawneh, Qais A.; Al-Ghasem, Adnan; Jaradat, Mohammad A.; Sawaqed, Laith; Al-Shabi, Mohammad

    2018-01-01

    Scanning electron microscopes are extensively used for accurate micro/nano image exploration. Several strategies have been proposed to fine tune those microscopes in the past few years. This work presents a new fine-tuning strategy for a scanning electron microscope sample table using four-bar piezoelectric-actuated mechanisms. The paper presents an algorithm to find all possible inverse kinematics solutions of the proposed mechanism. In addition, another algorithm is presented to search for the optimal inverse kinematic solution. Both algorithms are used simultaneously in a simulation study to fine tune a scanning electron microscope sample table through a pre-specified circular or linear path of motion. Results of the study show that the proposed algorithms were able to reduce the power required to drive the piezoelectric-actuated mechanism by 97.5% for all simulated paths of motion when compared to the general non-optimized solution.
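
    The "all possible inverse kinematics solutions" idea can be illustrated on a generic planar four-bar linkage, where Freudenstein's equation yields two assembly branches per crank angle. The sketch below uses illustrative link lengths and is not the paper's piezoelectric mechanism model:

        # Position analysis of a generic planar four-bar via Freudenstein's
        # equation: given crank angle th2, return both possible rocker angles
        # th4 (the two "elbow" branches). Link lengths a (crank), b (coupler),
        # c (rocker), d (ground) are illustrative values only.
        import numpy as np

        def rocker_angles(th2, a=2.0, b=7.0, c=9.0, d=6.0):
            K1, K2 = d / a, d / c
            K3 = (a**2 - b**2 + c**2 + d**2) / (2 * a * c)
            A = np.cos(th2) - K1 - K2 * np.cos(th2) + K3
            B = -2 * np.sin(th2)
            C = K1 - (K2 + 1) * np.cos(th2) + K3
            disc = B**2 - 4 * A * C
            if disc < 0:
                return []                        # crank angle unreachable
            roots = [(-B + s * np.sqrt(disc)) / (2 * A) for s in (1, -1)]
            return [2 * np.arctan(t) for t in roots]

        for th4 in rocker_angles(np.radians(30.0)):
            print(np.degrees(th4))               # the two assembly branches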

  7. On computing stress in polymer systems involving multi-body potentials from molecular dynamics simulation

    NASA Astrophysics Data System (ADS)

    Fu, Yao; Song, Jeong-Hoon

    2014-08-01

    The Hardy stress definition has been restricted to pair potentials and embedded-atom method potentials due to the basic assumptions in the derivation of a symmetric microscopic stress tensor. The force decomposition required in the Hardy stress expression becomes obscure for multi-body potentials. In this work, we demonstrate the invariance of the Hardy stress expression for a polymer system modeled with multi-body interatomic potentials, including up to four-atom interactions, by applying central force decomposition of the atomic force. The balance of momentum has been demonstrated to be valid theoretically and tested under various numerical simulation conditions. The validity of momentum conservation justifies the extension of the Hardy stress expression to multi-body potential systems. The computed Hardy stress has been observed to converge to the virial stress of the system with increasing spatial averaging volume. This work provides a feasible and reliable linkage between the atomistic and continuum scales for multi-body potential systems.
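
    As a point of reference for the convergence claim above, the potential (virial) part of the stress for a simple pair potential can be computed directly. The Lennard-Jones system below is an illustrative stand-in, not the polymer model of the paper:

        # Potential part of the virial stress for a Lennard-Jones pair
        # potential (illustrative stand-in; the kinetic term, cutoffs, and
        # periodic minimum-image conventions are omitted for brevity).
        import numpy as np

        def pair_virial_stress(pos, volume, eps=1.0, sigma=1.0):
            stress = np.zeros((3, 3))
            n = len(pos)
            for i in range(n):
                for j in range(i + 1, n):
                    r = pos[i] - pos[j]
                    d = np.linalg.norm(r)
                    # dU/dr for U(r) = 4*eps*((sigma/r)^12 - (sigma/r)^6)
                    dudr = 24 * eps * (sigma**6 / d**7 - 2 * sigma**12 / d**13)
                    f = -dudr * r / d          # force on atom i from atom j
                    stress += np.outer(r, f)   # r_ij (x) f_ij contribution
            return stress / volume             # tension-positive convention

        # toy configuration: 27 atoms on a cubic lattice with unit spacing
        pos = np.array([[x, y, z] for x in range(3)
                        for y in range(3) for z in range(3)], dtype=float)
        print(pair_virial_stress(pos, volume=27.0))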

  8. Surgical simulation software for insertion of pedicle screws.

    PubMed

    Eftekhar, Behzad; Ghodsi, Mohammad; Ketabchi, Ebrahim; Rasaee, Saman

    2002-01-01

    As the first step toward finding noninvasive alternatives to the traditional methods of surgical training, we have developed a small, stand-alone computer program that simulates insertion of pedicle screws in different spinal vertebrae (T10-L5). We used Delphi 5.0 and the DirectX 7.0 extension for Microsoft Windows. This is a stand-alone and portable program that can run on most personal computers. It provides the trainee with visual feedback during practice of the technique. At present, it uses predefined three-dimensional images of the vertebrae, but we are attempting to adapt the program to three-dimensional objects based on real computed tomographic scans of patients. The program can be downloaded at no cost from the web site www.tums.ac.ir/downloads. As a preliminary work, it requires further development, particularly toward better visual, auditory, and even proprioceptive feedback and use of the individual patient's data.

  9. Experimental, Numerical, and Analytical Slosh Dynamics of Water and Liquid Nitrogen in a Spherical Tank

    NASA Technical Reports Server (NTRS)

    Storey, Jedediah Morse

    2016-01-01

    Understanding, predicting, and controlling fluid slosh dynamics is critical to safety and improving performance of space missions when a significant percentage of the spacecraft's mass is a liquid. Computational fluid dynamics simulations can be used to predict the dynamics of slosh, but these programs require extensive validation. Many experimental and numerical studies of water slosh have been conducted. However, slosh data for cryogenic liquids is lacking. Water and cryogenic liquid nitrogen are used in various ground-based tests with a spherical tank to characterize damping, slosh mode frequencies, and slosh forces. A single ring baffle is installed in the tank for some of the tests. Analytical models for slosh modes, slosh forces, and baffle damping are constructed based on prior work. Select experiments are simulated using a commercial CFD software, and the numerical results are compared to the analytical and experimental results for the purposes of validation and methodology-improvement.

  10. Lane-changing behavior and its effect on energy dissipation using full velocity difference model

    NASA Astrophysics Data System (ADS)

    Wang, Jian; Ding, Jian-Xun; Shi, Qin; Kühne, Reinhart D.

    2016-07-01

    In real urban traffic, roadways are usually multilane with lane-specific velocity limits. Most previous research derives from single-lane car-following theory, which has been extensively investigated and applied in past years. In this paper, we extend the continuous single-lane car-following model (the full velocity difference model) to simulate lane-changing behavior on an urban roadway consisting of three lanes. To meet incentive and security requirements, a comprehensive lane-changing rule set is constructed, taking safety distance and velocity difference into consideration and setting a lane-specific speed restriction for each lane. We also investigate the effect of lane-changing behavior on the distribution of cars, velocity, headway, the fundamental diagram of traffic, and energy dissipation. Simulation results demonstrate asymmetric lane-changing "attraction" on a changeable lane-specific speed-limited roadway, which leads to dramatically increased energy dissipation.
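
    For reference, the single-lane full velocity difference model being extended has the form dv/dt = kappa*[V(dx) - v] + lambda*dv_front. The Python sketch below integrates it on a ring road with an assumed tanh optimal-velocity function and illustrative parameter values:

        # Single-lane FVD car-following sketch on a periodic (ring) road.
        # Parameters and the optimal-velocity function are illustrative.
        import numpy as np

        def V(dx, vmax=2.0, hc=4.0):
            return vmax * (np.tanh(dx - hc) + np.tanh(hc)) / 2.0  # optimal velocity

        def step(x, v, kappa=0.41, lam=0.5, dt=0.1, road=400.0):
            dx = np.roll(x, -1) - x                # headway to the car ahead
            dx[-1] += road                         # close the ring for the last car
            dv = np.roll(v, -1) - v                # velocity difference to leader
            a = kappa * (V(dx) - v) + lam * dv     # FVD acceleration
            return x + v * dt, np.clip(v + a * dt, 0.0, None)

        n, road = 40, 400.0
        x = np.linspace(0.0, road, n, endpoint=False)   # uniform initial spacing
        v = np.full(n, 0.5)
        for _ in range(5000):
            x, v = step(x, v, road=road)
        print(v.mean())   # relaxes toward the optimal velocity for headway road/n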

  11. A system for the simulation and evaluation of satellite communication networks

    NASA Technical Reports Server (NTRS)

    Bagwell, J. W.

    1983-01-01

    With the emergence of a new era in satellite communications brought about by NASA's thrust into the Ka band with multibeam and onboard processing technologies, new and innovative techniques for evaluating these concepts and systems are required. To this end, NASA, in conjunction with its extensive program for advanced communications technology development, has undertaken to develop a concept for the simulation and evaluation of a complete communications network. Incorporated in this network will be proof-of-concept models of the latest technologies proposed for future satellite communications systems. These include low-noise receivers, matrix switches, baseband processors, and solid-state and tube-type high-power amplifiers. To accomplish this, numerous supporting technologies must be added to the aforementioned proof-of-concept models. These include controllers for synchronization, order wire, and resource allocation; gain compensation; signal leveling; power augmentation; and rain-fade and range-delay simulation. Taken together, these will be assembled to comprise a system capable of addressing numerous design and performance questions. The simulation and evaluation system as planned will be modular in design and implementation, capable of modification and updating to track and evaluate a continuum of emerging concepts and technologies.

  12. Scramjet exhaust simulation technique for hypersonic aircraft nozzle design and aerodynamic tests

    NASA Technical Reports Server (NTRS)

    Hunt, J. L.; Talcott, N. A., Jr.; Cubbage, J. M.

    1977-01-01

    Current design philosophy for scramjet-powered hypersonic aircraft results in configurations with the entire lower fuselage surface utilized as part of the propulsion system. The lower aft-end of the vehicle acts as a high expansion ratio nozzle. Not only must the external nozzle be designed to extract the maximum possible thrust force from the high energy flow at the combustor exit, but the forces produced by the nozzle must be aligned such that they do not unduly affect aerodynamic balance. The strong coupling between the propulsion system and aerodynamics of the aircraft makes imperative at least a partial simulation of the inlet, exhaust, and external flows of the hydrogen-burning scramjet in conventional facilities for both nozzle formulation and aerodynamic-force data acquisition. Aerodynamic testing methods offer no contemporary approach for such vehicle design requirements. NASA-Langley has pursued an extensive scramjet/airframe integration R&D program for several years and has recently developed a promising technique for simulation of the scramjet exhaust flow for hypersonic aircraft. Current results of the research program to develop a scramjet flow simulation technique through the use of substitute gas blends are described in this paper.

  13. Simulations of drift-Alfven turbulence in LAPD using BOUT

    NASA Astrophysics Data System (ADS)

    Popovich, Pavel; Umansky, Maxim; Carter, Troy; Cowley, Steve

    2008-11-01

    The LArge Plasma Device (LAPD) at UCLA is a 17 m long, 60 cm diameter magnetized plasma column with typical plasma parameters ne ~ 1x10^12 cm^-3, Te ~ 10 eV, and B ~ 1 kG. The simple geometry and extensive measurement capability of LAPD allow for detailed comparison with and validation of numerical simulations of turbulence and transport. We analyse the LAPD results using simulations with the boundary plasma turbulence code BOUT. BOUT models 3D electromagnetic plasma turbulence by solving a system of fluid moment equations in a general tokamak geometry near the boundary. We will discuss the physical model and the modifications of the BOUT code required for the LAPD configuration, and present the first results of the simulations and comparison to experimental measurements. In particular, a confinement transition is observed in LAPD under the application of bias-driven rotation. Also, intermittent generation and convection of filamentary structures ("blobs" and "holes") is observed in the LAPD edge. Application of BOUT to modeling of these two phenomena will be discussed. E. Maggs, T.A. Carter, and R.J. Taylor, Phys. Plasmas 14 (2007); T.A. Carter, Phys. Plasmas 13 (2006).

  14. On the Simulation of Sea States with High Significant Wave Height for the Validation of Parameter Retrieval Algorithms for Future Altimetry Missions

    NASA Astrophysics Data System (ADS)

    Kuschenerus, Mieke; Cullen, Robert

    2016-08-01

    To ensure reliability and precision of wave height estimates for future satellite altimetry missions such as Sentinel 6, reliable parameter retrieval algorithms that can extract significant wave heights up to 20 m have to be established, and the retrieval methods need to be validated extensively on a wide range of possible significant wave heights. Although current missions require wave height retrievals up to 20 m, there is little evidence of systematic validation of parameter retrieval methods for sea states with wave heights above 10 m. This paper provides a definition of a set of simulated sea states with significant wave heights up to 20 m that allow simulation of radar altimeter response echoes for extreme sea states in SAR and low resolution mode. The simulated radar responses are used to derive significant wave height estimates, which can be compared with the initial models, allowing precision estimates of the applied parameter retrieval methods. We thus establish a validation method for significant wave height retrieval for sea states causing high significant wave heights, allowing improved understanding and planning of future satellite altimetry mission validation.
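
    A minimal sketch of the kind of sea-state simulation described, under an assumed spectral shape and sampling choices: superpose random-phase harmonics drawn from a spectrum scaled so that the significant wave height Hs = 4*sqrt(m0) hits a 20 m target, then verify on the synthetic record:

        # Random-phase sea-surface synthesis for a prescribed Hs.
        # Spectral shape, frequency band, and record length are illustrative.
        import numpy as np

        rng = np.random.default_rng(0)
        hs_target = 20.0                                  # target Hs [m]
        f = np.linspace(0.03, 0.3, 400)                   # frequencies [Hz]
        df = f[1] - f[0]
        S = np.exp(-0.5 * ((f - 0.08) / 0.02) ** 2)       # toy spectral shape
        S *= (hs_target / (4 * np.sqrt(np.sum(S * df)))) ** 2   # scale to Hs
        amp = np.sqrt(2 * S * df)                         # component amplitudes
        phase = rng.uniform(0, 2 * np.pi, f.size)
        t = np.arange(0, 3600.0, 0.5)                     # one hour at 2 Hz
        eta = (amp * np.cos(2 * np.pi * np.outer(t, f) + phase)).sum(axis=1)
        print(4 * eta.std())                              # ~ hs_target (about 20 m)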

  15. A Review of Simulators with Haptic Devices for Medical Training.

    PubMed

    Escobar-Castillejos, David; Noguez, Julieta; Neri, Luis; Magana, Alejandra; Benes, Bedrich

    2016-04-01

    Medical procedures often involve the use of the tactile sense to manipulate organs or tissues by using special tools. Doctors require extensive preparation in order to perform them successfully; for example, research shows that a minimum of 750 operations are needed to acquire sufficient experience to perform medical procedures correctly. Haptic devices have become an important training alternative and they have been considered to improve medical training because they let users interact with virtual environments by adding the sense of touch to the simulation. Previous articles in the field state that haptic devices enhance the learning of surgeons compared to current training environments used in medical schools (corpses, animals, or synthetic skin and organs). Consequently, virtual environments use haptic devices to improve realism. The goal of this paper is to provide a state of the art review of recent medical simulators that use haptic devices. In particular we focus on stitching, palpation, dental procedures, endoscopy, laparoscopy, and orthopaedics. These simulators are reviewed and compared from the viewpoint of used technology, the number of degrees of freedom, degrees of force feedback, perceived realism, immersion, and feedback provided to the user. In the conclusion, several observations per area and suggestions for future work are provided.

  16. Simulation of lung alveolar epithelial wound healing in vitro.

    PubMed

    Kim, Sean H J; Matthay, Michael A; Mostov, Keith; Hunt, C Anthony

    2010-08-06

    The mechanisms that enable and regulate alveolar type II (AT II) epithelial cell wound healing in vitro and in vivo remain largely unknown and need further elucidation. We used an in silico AT II cell-mimetic analogue to explore and better understand plausible wound healing mechanisms for two conditions: cyst repair in three-dimensional cultures and monolayer wound healing. Starting with an analogue previously validated for key features of AT II cystogenesis in vitro, we devised an additional cell rearrangement action enabling cyst repair. Monolayer repair was enabled by providing 'cells' a control mechanism to switch automatically to a repair mode in the presence of a distress signal. In cyst wound simulations, the revised analogue closed wounds by adhering to essentially the same axioms available for alveolar-like cystogenesis; in silico cell proliferation was not needed. The analogue recovered within a few simulation cycles but required a longer recovery time for larger or multiple wounds. In simulated monolayer wound repair, diffusive factor-mediated 'cell' migration led to repair patterns comparable to those of in vitro cultures exposed to different growth factors. Simulations predicted directional cell locomotion to be critical for successful in vitro wound repair. We anticipate that with further use and refinement, the methods used will develop into a rigorous, extensible means of unravelling mechanisms of lung alveolar repair and regeneration.

  17. 7 CFR 3419.3 - Determination of non-Federal sources of funds.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... RESEARCH, EDUCATION, AND EXTENSION SERVICE, DEPARTMENT OF AGRICULTURE MATCHING FUNDS REQUIREMENT FOR AGRICULTURAL RESEARCH AND EXTENSION FORMULA FUNDS AT 1890 LAND-GRANT INSTITUTIONS, INCLUDING TUSKEGEE... agricultural research, extension, and qualified educational activity to meet the matching requirements of...

  18. Autophagy and leucine promote chronological longevity and respiration proficiency during calorie restriction in yeast.

    PubMed

    Aris, John P; Alvers, Ashley L; Ferraiuolo, Roy A; Fishwick, Laura K; Hanvivatpong, Amanda; Hu, Doreen; Kirlew, Christine; Leonard, Michael T; Losin, Kyle J; Marraffini, Michelle; Seo, Arnold Y; Swanberg, Veronica; Westcott, Jennifer L; Wood, Michael S; Leeuwenburgh, Christiaan; Dunn, William A

    2013-10-01

    We have previously shown that autophagy is required for chronological longevity in the budding yeast Saccharomyces cerevisiae. Here we examine the requirements for autophagy during extension of chronological life span (CLS) by calorie restriction (CR). We find that autophagy is upregulated by two CR interventions that extend CLS: water wash CR and low glucose CR. Autophagy is required for full extension of CLS during water wash CR under all growth conditions tested. In contrast, autophagy was not uniformly required for full extension of CLS during low glucose CR, depending on the atg allele and strain genetic background. Leucine status influenced CLS during CR. Eliminating the leucine requirement in yeast strains or adding supplemental leucine to growth media extended CLS during CR. In addition, we observed that both water wash and low glucose CR promote mitochondrial respiration proficiency during aging of autophagy-deficient yeast. In general, the extension of CLS by water wash or low glucose CR was inversely related to respiration deficiency in autophagy-deficient cells. Also, autophagy is required for full extension of CLS under non-CR conditions in buffered media, suggesting that extension of CLS during CR is not solely due to reduced medium acidity. Thus, our findings show that autophagy is: (1) induced by CR, (2) required for full extension of CLS by CR in most cases (depending on atg allele, strain, and leucine availability) and, (3) promotes mitochondrial respiration proficiency during aging under CR conditions. Copyright © 2013 Elsevier Inc. All rights reserved.

  19. Autophagy and leucine promote chronological longevity and respiration proficiency during calorie restriction in yeast

    PubMed Central

    Aris, John P.; Alvers, Ashley L.; Ferraiuolo, Roy A.; Fishwick, Laura K.; Hanvivatpong, Amanda; Hu, Doreen; Kirlew, Christine; Leonard, Michael T.; Losin, Kyle J.; Marraffini, Michelle; Seo, Arnold Y.; Swanberg, Veronica; Westcott, Jennifer L.; Wood, Michael S.; Leeuwenburgh, Christiaan; Dunn, William A.

    2013-01-01

    We have previously shown that autophagy is required for chronological longevity in the budding yeast Saccharomyces cerevisiae. Here we examine the requirements for autophagy during extension of chronological life span (CLS) by calorie restriction (CR). We find that autophagy is upregulated by two CR interventions that extend CLS: water wash CR and low glucose CR. Autophagy is required for full extension of CLS during water wash CR under all growth conditions tested. In contrast, autophagy was not uniformly required for full extension of CLS during low glucose CR, depending on the atg allele and strain genetic background. Leucine status influenced CLS during CR. Eliminating the leucine requirement in yeast strains or adding supplemental leucine to growth media extended CLS during CR. In addition, we observed that both water wash and low glucose CR promote mitochondrial respiration proficiency during aging of autophagy-deficient yeast. In general, the extension of CLS by water wash or low glucose CR was inversely related to respiration deficiency in autophagy-deficient cells. Also, autophagy is required for full extension of CLS under non-CR conditions in buffered media, suggesting that extension of CLS during CR is not solely due to reduced medium acidity. Thus, our findings show that autophagy is: (1) induced by CR, (2) required for full extension of CLS by CR in most cases (depending on atg allele, strain, and leucine availability) and, (3) promotes mitochondrial respiration proficiency during aging under CR conditions. PMID:23337777

  20. The use of emulator-based simulators for on-board software maintenance

    NASA Astrophysics Data System (ADS)

    Irvine, M. M.; Dartnell, A.

    2002-07-01

    Traditionally, onboard software maintenance activities within the space sector are performed using hardware-based facilities. These facilities are developed around the use of hardware emulation or breadboards containing target processors. Some sort of environment is provided around the hardware to support the maintenance activities. However, these environments are not easy to use to set up the required test scenarios, particularly when the onboard software executes in a dynamic I/O environment, e.g., attitude control software or data handling software. In addition, the hardware and/or environment may not support the test setup required during investigations into software anomalies (e.g., raise a spurious interrupt, fail memory, etc.), and the overall "visibility" of the executing software may be limited. The Software Maintenance Simulator (SOMSIM) is a tool that can support the traditional maintenance facilities. The following list contains some of the main benefits that SOMSIM can provide: a low-cost, flexible extension to an existing product - an operational simulator containing a software processor emulator; a system-level high-fidelity test-bed in which the software "executes"; a high degree of control/configuration over the entire "system", including contingency conditions perhaps not possible with real hardware; and high visibility and control over execution of the emulated software. This paper describes the SOMSIM concept in more detail, and also describes the SOMSIM study being carried out for ESA/ESOC by VEGA IT GmbH.

  1. A database and tool for boundary conditions for regional air quality modeling: description and evaluation

    NASA Astrophysics Data System (ADS)

    Henderson, B. H.; Akhtar, F.; Pye, H. O. T.; Napelenok, S. L.; Hutzell, W. T.

    2013-09-01

    Transported air pollutants receive increasing attention as regulations tighten and global concentrations increase. The need to represent international transport in regional air quality assessments requires improved representation of boundary concentrations. Currently available observations are too sparse vertically to provide boundary information, particularly for ozone precursors, but global simulations can be used to generate spatially and temporally varying Lateral Boundary Conditions (LBC). This study presents a public database of global simulations designed and evaluated for use as LBC for air quality models (AQMs). The database covers the contiguous United States (CONUS) for the years 2000-2010 and contains hourly varying concentrations of ozone, aerosols, and their precursors. The database is complemented by a tool for configuring the global results as inputs to regional-scale models (e.g., the Community Multiscale Air Quality model or the Comprehensive Air quality Model with extensions). This study also presents an example application based on the CONUS domain, which is evaluated against satellite-retrieved ozone vertical profiles. The results show performance is largely within uncertainty estimates for the Tropospheric Emission Spectrometer (TES), with some exceptions; the major difference is a high bias in the upper troposphere along the southern boundary in January. This publication documents the global simulation database, the tool for conversion to LBC, and the fidelity of concentrations on the boundaries. This documentation is intended to support applications that require representation of long-range transport of air pollutants.
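
    The core extraction step of such a tool can be sketched as follows; the grids, array shapes, and nearest-neighbor mapping are illustrative assumptions, not the actual implementation behind the database:

        # Sketch of an LBC extraction step: pick the global-model columns
        # nearest to each regional boundary cell and copy the vertical
        # species profiles. All numbers and names below are toy values.
        import numpy as np

        def extract_lbc(global_field, glat, glon, blat, blon):
            """global_field: (lev, lat, lon) mixing ratios; b*: boundary points."""
            out = np.empty((global_field.shape[0], len(blat)))
            for n, (la, lo) in enumerate(zip(blat, blon)):
                i = np.abs(glat - la).argmin()       # nearest global latitude
                j = np.abs(glon - lo).argmin()       # nearest global longitude
                out[:, n] = global_field[:, i, j]    # copy vertical profile
            return out

        glat = np.linspace(-90, 90, 91)
        glon = np.linspace(-180, 180, 145)
        o3 = np.random.rand(56, 91, 145) * 1e-7      # toy global ozone field
        blat = np.full(100, 20.0)                    # southern-edge boundary cells
        blon = np.linspace(-130, -60, 100)
        print(extract_lbc(o3, glat, glon, blat, blon).shape)   # (56, 100)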

  2. Adaptation of G-TAG Software for Validating Touch-and-Go Comet Surface Sampling Design Methodology

    NASA Technical Reports Server (NTRS)

    Mandic, Milan; Acikmese, Behcet; Blackmore, Lars

    2011-01-01

    The G-TAG software tool was developed under the R&TD on Integrated Autonomous Guidance, Navigation, and Control for Comet Sample Return, and represents a novel, multi-body dynamics simulation software tool for studying TAG sampling. The G-TAG multi-body simulation tool provides a simulation environment in which a Touch-and-Go (TAG) sampling event can be extensively tested. TAG sampling requires the spacecraft to descend to the surface, contact the surface with a sampling collection device, and then to ascend to a safe altitude. The TAG event lasts only a few seconds but is mission-critical with potentially high risk. Consequently, there is a need for the TAG event to be well characterized and studied by simulation and analysis in order for the proposal teams to converge on a reliable spacecraft design. This adaptation of the G-TAG tool was developed to support the Comet Odyssey proposal effort, and is specifically focused to address comet sample return missions. In this application, the spacecraft descends to and samples from the surface of a comet. Performance of the spacecraft during TAG is assessed based on survivability and sample collection performance. For the adaptation of the G-TAG simulation tool to comet scenarios, models are developed that accurately describe the properties of the spacecraft, approach trajectories, and descent velocities, as well as the models of the external forces and torques acting on the spacecraft. The adapted models of the spacecraft, descent profiles, and external sampling forces/torques were more sophisticated and customized for comets than those available in the basic G-TAG simulation tool. Scenarios implemented include the study of variations in requirements, spacecraft design (size, locations, etc. of the spacecraft components), and the environment (surface properties, slope, disturbances, etc.). The simulations, along with their visual representations using G-View, contributed to the Comet Odyssey New Frontiers proposal effort by indicating problems and/or benefits of different approaches and designs.

  3. A Graphical-User Interface for the U. S. Geological Survey's SUTRA Code using Argus ONE (for simulation of variable-density saturated-unsaturated ground-water flow with solute or energy transport)

    USGS Publications Warehouse

    Voss, Clifford I.; Boldt, David; Shapiro, Allen M.

    1997-01-01

    This report describes a Graphical-User Interface (GUI) for SUTRA, the U.S. Geological Survey (USGS) model for saturated-unsaturated variable-fluid-density ground-water flow with solute or energy transport, which combines a USGS-developed code that interfaces SUTRA with Argus ONE, a commercial software product developed by Argus Interware. This product, known as Argus Open Numerical Environments (Argus ONE™), is a programmable system with geographic-information-system-like (GIS-like) functionality that includes automated gridding and meshing capabilities for linking geospatial information with finite-difference and finite-element numerical model discretizations. The GUI for SUTRA is based on a public-domain Plug-In Extension (PIE) to Argus ONE that automates the use of Argus ONE to: automatically create the appropriate geospatial information coverages (information layers) for SUTRA, provide menus and dialogs for inputting geospatial information and simulation control parameters for SUTRA, and allow visualization of SUTRA simulation results. Following simulation control data and geospatial data input by the user through the GUI, Argus ONE creates text files in the format required for normal input to SUTRA, and SUTRA can be executed within the Argus ONE environment. Then, hydraulic head, pressure, solute concentration, temperature, saturation, and velocity results from the SUTRA simulation may be visualized. Although the GUI for SUTRA discussed in this report provides all of the graphical pre- and post-processor functions required for running SUTRA, it is also possible for advanced users to apply programmable features within Argus ONE to modify the GUI to meet the unique demands of particular ground-water modeling projects.

  4. Investigation in Simulated Vertical Descent of the Characteristics of a Cargo-Dropping Device having Extensible Rotating Blades

    NASA Technical Reports Server (NTRS)

    Stone, Ralph W., Jr.; Hultz, Burton E.

    1949-01-01

    The characteristics of a cargo-dropping device having extensible rotating blades as load-carrying surfaces have been studied in simulated vertical descent in the Langley 20-foot free-spinning tunnel. The investigation included tests to determine the variation in vertical sinking speed with load. A study of the blade characteristics and of the test results indicated a method of dynamically balancing the blades to permit proper functioning of the device.

  5. Prediction, Refinement and Persistency of Transmembrane Helix Dimers in Lipid Bilayers using Implicit and Explicit Solvent/Lipid Representations: Microsecond Molecular Dynamics Simulations of ErbB1/B2 and EphA1

    PubMed Central

    Zhang, Liqun; Sodt, Alexander J.; Venable, Richard M.; Pastor, Richard W.; Buck, Matthias

    2012-01-01

    All-atom simulations are carried out on ErbB1/B2 and EphA1 transmembrane helix dimers in lipid bilayers starting from their solution/DMPC bicelle NMR structures. Over the course of microsecond trajectories, the structures remain in close proximity to the initial configuration and satisfy the great majority of experimental tertiary contact restraints. These results further validate CHARMM protein/lipid force fields and simulation protocols on Anton. Separately, dimer conformations are generated using replica exchange in conjunction with an implicit solvent and lipid representation. The implicit model requires further improvement, and this study investigates whether lengthy all-atom molecular dynamics simulations can alleviate the shortcomings of the initial conditions. The simulations correct many of the deficiencies. For example excessive helix twisting is eliminated over a period of hundreds of nanoseconds. The helix tilt, crossing angles and dimer contacts approximate those of the NMR derived structure, although the detailed contact surface remains off-set for one of two helices in both systems. Hence, even microsecond simulations are not long enough for extensive helix rotations. The alternate structures can be rationalized with reference to interaction motifs and may represent still sought after receptor states that are important in ErbB1/B2 and EphA1 signaling. PMID:23042146

  6. From conscious thought to automatic action: A simulation account of action planning.

    PubMed

    Martiny-Huenger, Torsten; Martiny, Sarah E; Parks-Stamm, Elizabeth J; Pfeiffer, Elisa; Gollwitzer, Peter M

    2017-10-01

    We provide a theoretical framework and empirical evidence for how verbally planning an action creates direct perception-action links and behavioral automaticity. We argue that planning actions in an if (situation)-then (action) format induces sensorimotor simulations (i.e., activity patterns reenacting the event in the sensory and motor brain areas) of the anticipated situation and the intended action. Due to their temporal overlap, these activity patterns become linked. Whenever the previously simulated situation is encountered, the previously simulated action is partially reactivated through spreading activation and thus more likely to be executed. In 4 experiments (N = 363), we investigated the relation between specific if-then action plans worded to activate simulations of elbow flexion versus extension movements and actual elbow flexion versus extension movements in a subsequent, ostensibly unrelated categorization task. As expected, linking a critical stimulus to intended actions that implied elbow flexion movements (e.g., grabbing it for consumption) subsequently facilitated elbow flexion movements upon encountering the critical stimulus. However, linking a critical stimulus to actions that implied elbow extension movements (e.g., pointing at it) subsequently facilitated elbow extension movements upon encountering the critical stimulus. Thus, minor differences (i.e., exchanging the words "point at" with "grab") in verbally formulated action plans (i.e., conscious thought) had systematic consequences on subsequent actions. The question of how conscious thought can induce stimulus-triggered action is illuminated by the provided theoretical framework and the respective empirical evidence, facilitating the understanding of behavioral automaticity and human agency. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  7. Generating topography through tectonic deformation of ice lithospheres: Simulating the formation of Ganymede's grooves

    NASA Astrophysics Data System (ADS)

    Bland, M. T.; McKinnon, W. B.

    2010-12-01

    Ganymede’s iconic topography offers clues to both the satellite’s thermal evolution, and the mechanics of tectonic deformation on icy satellites. Much of Ganymede’s surface consists of bright, young terrain, with a characteristic morphology dubbed “groove terrain”. As reviewed in Pappalardo et al. (2004), in Jupiter - The Planet, Satellites, and Magnetosphere (CUP), grooved terrain consists of sets of quasi-parallel, periodically-spaced, ridges and troughs. Peak-to-trough groove amplitudes are ~500 m, with low topographic slopes (~5°). Groove spacing is strongly periodic within a single groove set, ranging from 3-17 km; shorter wavelength deformation is also apparent in high-resolution images. Grooved terrain likely formed via unstable extension of Ganymede’s ice lithosphere, which was deformed into periodically-spaced pinches and swells, and accommodated by tilt-block normal faulting. Analytical models of unstable extension support this formation mechanism [Dombard and McKinnon 2001, Icarus 154], but initial numerical models of extending ice lithospheres struggled to produce large-amplitude, groove-like deformation [Bland and Showman 2007, Icarus 189]. Here we present simulations that reproduce many of the characteristics of Ganymede’s grooves [Bland et al. 2010, Icarus in press]. By more realistically simulating the decrease in material strength after initial fault development, our model allows strain to become readily localized into discrete zones. Such strain localization leads to the formation of periodic structures with amplitudes of 200-500 m, and wavelengths of 3-20 km. The morphology of the deformation depends on both the lithospheric thermal gradient, and the rate at which material strength decreases with increasing plastic strain. Large-amplitude, graben-like structures form when material weakening occurs rapidly with increasing strain, while lower-amplitude, periodic structures form when the ice retains its strength. Thus, extension can result in complex surface deformation, consistent with the variety of surface morphologies observed within the grooved terrain. Our modeling indicates that moderate thermal gradients (10 K km-1) may be sufficient to explain many of Ganymede’s groove morphologies. The implied heat flow (~50 mW m-2), however, is a factor of two greater than the expected radiogenic heat flux, suggesting additional energy input (e.g., tidal dissipation) may be required. Our modeling of groove formation suggests that understanding tectonic deformation on icy satellites requires a detailed understanding of the mechanical behavior of ice and ice lithospheres, and demonstrates the need for new tectonic models that include localization, realistic plasticity, and energy dissipation.

  8. Design of an autonomous lunar construction utility vehicle

    NASA Technical Reports Server (NTRS)

    1990-01-01

    In order to prepare a site for a lunar base, an autonomously operated construction vehicle is necessary. Discussed here is a Lunar Construction Utility Vehicle (LCUV), which uses interchangeable construction implements. Design of an elastic loop track system has advanced to the testing stage. A standard coupling device has been designed to insure a proper connection between the different construction tools and the LCUV. Autonomous control of the track drive motors was simulated successfully through the use of a joystick and a computer interface. A study of hydrogen-oxygen fuel cells produced estimates of reactant and product requirements and identified multilayer insulation needs. Research on the 100-kW heat rejection system determined that it is necessary to transport the radiator panel on a utility trailer. Extensive logistical support for the 720 hour use cycle requires further study.

  9. Swarm Counter-Asymmetric-Threat (CAT) 6-DOF Dynamics Simulation

    DTIC Science & Technology

    2005-07-01

    NAWCWD TP 8593: Swarm Counter-Asymmetric-Threat (CAT) 6-DOF Dynamics Simulation, by James Bobinchak and Gary Hewer, Weapons and Energetics ... mathematical models used in the swarm counter-asymmetric-threat (CAT) simulation and the results of extensive Monte Carlo simulations. The swarm CAT ...

  10. Simulation and Experimental Studies of a 2.45GHz Magnetron Source for an SRF Cavity with Field Amplitude and Phase Controls

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Haipeng; Plawski, Tomasz E.; Rimmer, Robert A.

    2016-06-01

    Phase locking of a magnetron to an SRF cavity using an injection signal through the magnetron's output waveguide has been demonstrated [1, 3]. Amplitude control using magnetic field trimming and anode voltage modulation has been studied using MATLAB/Simulink simulations [2]. Based on these, we are planning to use an FPGA-based digital LLRF system, which allows applying various types of control algorithms in order to achieve the required accelerating field stability. Since the 1497 MHz magnetron is still in the design stage, proof-of-principle measurements of a commercial 2450 MHz magnetron are carried out to characterize the anode I-V curve, output power (the tube electronic efficiency), frequency dependence on the anode current (frequency pushing), and the Rieke diagram (frequency pulling by the reactive load). Based on the early Simulink simulation, the experimental data, and an extension of the Adler equation governing injection phase stability by Chen's model, the specification of the new LLRF control chassis for both the 2450 and 1497 MHz systems is presented in this paper.
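
    The injection-locking behavior referred to above follows Adler's equation, dphi/dt = delta_omega - omega_lock*sin(phi): the phase settles to a constant offset when the detuning lies within the locking bandwidth. A small sketch with illustrative values:

        # Numerically integrate Adler's equation for injection locking.
        # The locking bandwidth and detunings below are illustrative only.
        import numpy as np

        def adler(delta_omega, omega_lock, phi0=0.0, dt=1e-9, steps=200000):
            phi = phi0
            for _ in range(steps):
                phi += dt * (delta_omega - omega_lock * np.sin(phi))
            return phi

        w_lock = 2 * np.pi * 50e3        # assumed locking bandwidth [rad/s]
        for dw in (0.5 * w_lock, 1.5 * w_lock):
            phi = adler(dw, w_lock)
            print(f"detuning {dw/w_lock:.1f}*w_lock -> final phase {phi:.2f} rad")
        # Inside the band the phase converges to arcsin(dw/w_lock); outside
        # it grows without bound (the oscillator is pulled but not locked).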

  11. Dynamic stresses in a Francis model turbine at deep part load

    NASA Astrophysics Data System (ADS)

    Weber, Wilhelm; von Locquenghien, Florian; Conrad, Philipp; Koutnik, Jiri

    2017-04-01

    A comparison between numerically obtained dynamic stresses in a Francis model turbine at deep part load and experimental ones is presented. Due to the change in the electrical power mix toward more new renewable energy sources, Francis turbines are forced to operate at deep part load in order to compensate for the stochastic nature of wind and solar power and to ensure grid stability. For the extension of the operating range toward deep part load, improved understanding of the harsh flow conditions and their impact on material fatigue of hydraulic components is required in order to ensure a long lifetime of the power unit. In this paper, pressure loads on a model turbine runner from an unsteady two-phase computational fluid dynamics simulation at deep part load are used for calculation of mechanical stresses by finite element analysis. Therewith, the stress distribution over time is determined. Since only a few runner rotations are simulated due to the enormous numerical cost, more effort has to be spent on the evaluation procedure in order to obtain objective results. By comparing the numerical results with measured strains, the accuracy of the whole simulation procedure is verified.

  12. Application of a lower-upper implicit scheme and an interactive grid generation for turbomachinery flow field simulations

    NASA Technical Reports Server (NTRS)

    Choo, Yung K.; Soh, Woo-Yung; Yoon, Seokkwan

    1989-01-01

    A finite-volume lower-upper (LU) implicit scheme is used to simulate an inviscid flow in a turbine cascade. This approximate factorization scheme requires only the inversion of sparse lower and upper triangular matrices, which can be done efficiently without extensive storage. As an implicit scheme it allows a large time step to reach the steady state. An interactive grid generation program (TURBO), which is being developed, is used to generate grids. This program uses the control point form of algebraic grid generation, which uses a sparse collection of control points from which the shape and position of coordinate curves can be adjusted. A distinct advantage of TURBO compared with other grid generation programs is that it allows the easy change of local mesh structure without affecting the grid outside the domain of independence. Sample grids are generated by TURBO for a compressor rotor blade and a turbine cascade. The turbine cascade flow is simulated by using the LU implicit scheme on the grid generated by TURBO.
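
    The storage economy mentioned above comes from the fact that each implicit step reduces to one forward and one back substitution with the sparse triangular factors. A minimal sketch of that step, with hypothetical factors standing in for the scheme's L and U (illustrative only, not the actual LU/TURBO code):

```python
# Sketch: each implicit step of an LU-type approximate factorization reduces
# to two sparse triangular solves (forward, then back substitution), with no
# extra fill-in storage. Hypothetical factors; not the actual scheme's code.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve_triangular

n = 1000
# Stand-ins for the scheme's sparse lower and upper triangular factors
L = sp.eye(n, format="csr") + sp.diags([-0.3], [-1], (n, n), format="csr")
U = sp.eye(n, format="csr") + sp.diags([-0.3], [1], (n, n), format="csr")
b = np.ones(n)                              # right-hand side (residual)

y = spsolve_triangular(L, b, lower=True)    # forward substitution
x = spsolve_triangular(U, y, lower=False)   # back substitution
print(x[:3])
```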

  13. An implementation of discrete electron transport models for gold in the Geant4 simulation toolkit

    NASA Astrophysics Data System (ADS)

    Sakata, D.; Incerti, S.; Bordage, M. C.; Lampe, N.; Okada, S.; Emfietzoglou, D.; Kyriakou, I.; Murakami, K.; Sasaki, T.; Tran, H.; Guatelli, S.; Ivantchenko, V. N.

    2016-12-01

    Gold nanoparticle (GNP) boosted radiation therapy can enhance the biological effectiveness of radiation treatments by increasing the quantity of direct and indirect radiation-induced cellular damage. As the physical effects of GNP boosted radiotherapy occur across energy scales that descend down to 10 eV, Monte Carlo simulations require discrete physics models down to these very low energies in order to avoid underestimating the absorbed dose and secondary particle generation. Discrete physics models for electron transport down to 10 eV have been implemented within the Geant4-DNA low energy extension of Geant4. Such models allow the investigation of GNP effects at the nanoscale. At low energies, the new models have better agreement with experimental data on the backscattering coefficient, and they show similar performance for transmission coefficient data as the Livermore and Penelope models already implemented in Geant4. These new models are applicable in simulations aimed at estimating the relative biological effectiveness of radiation in GNP boosted radiotherapy applications with photon and electron radiation sources.

  14. Extension of the quantum-kinetic model to lunar and Mars return physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liechty, D. S.; Lewis, M. J.

    The ability to compute rarefied, ionized hypersonic flows is becoming more important as missions such as Earth reentry, landing high-mass payloads on Mars, and the exploration of the outer planets and their satellites are being considered. A recently introduced molecular-level chemistry model, the quantum-kinetic, or Q-K, model that predicts reaction rates for gases in thermal equilibrium and non-equilibrium using only kinetic theory and fundamental molecular properties, is extended in the current work to include electronic energy level transitions and reactions involving charged particles. Like the Q-K procedures for neutral species chemical reactions, these new models are phenomenological procedures that aim to reproduce the reaction/transition rates but do not necessarily capture the exact physics. These engineering models are necessarily efficient due to the requirement to compute billions of simulated collisions in direct simulation Monte Carlo (DSMC) simulations. The new models are shown to generally agree within the spread of reported transition and reaction rates from the literature for near equilibrium conditions.
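
    To make the kinetic-theory flavor of the Q-K approach concrete, the sketch below shows a simplified Q-K style dissociation test for a single DSMC collision in the simple-harmonic-oscillator form: dissociation occurs when the highest vibrational level reachable with the collision energy meets the dissociation limit. The constants are merely N2-like placeholders, not values from this work:

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K

def qk_dissociates(E_coll_J, theta_v=3371.0, theta_d=113500.0):
    """Simplified Q-K style dissociation test for one DSMC collision
    (simple harmonic oscillator): dissociate when the highest vibrational
    level reachable with the collision energy meets the dissociation limit.
    theta_v, theta_d are N2-like characteristic temperatures in kelvin."""
    i_max = math.floor(E_coll_J / (k_B * theta_v))   # highest reachable level
    return i_max * theta_v >= theta_d

print(qk_dissociates(2.0e-18))   # ~12.5 eV collision energy -> True here
```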

  15. Etch Profile Simulation Using Level Set Methods

    NASA Technical Reports Server (NTRS)

    Hwang, Helen H.; Meyyappan, Meyya; Arnold, James O. (Technical Monitor)

    1997-01-01

    Etching and deposition of materials are critical steps in semiconductor processing for device manufacturing. Both etching and deposition may have isotropic and anisotropic components, due to directional sputtering and redeposition of materials, for example. Previous attempts at modeling profile evolution have used so-called "string theory" to simulate the moving solid-gas interface between the semiconductor and the plasma. One complication of this method is that extensive de-looping schemes are required at the profile corners. We will present a 2D profile evolution simulation using level set theory to model the surface. By embedding the location of the interface in a field variable, the need for de-looping schemes is eliminated and profile corners are more accurately modeled. This level set profile evolution model will calculate both isotropic and anisotropic etch and deposition rates of a substrate in low pressure (tens of mTorr) plasmas, considering the incident ion energy angular distribution functions and neutral fluxes. We will present etching profiles of Si substrates in Ar/Cl2 discharges for various incident ion energies and trench geometries.
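
    A minimal sketch of the level set idea described above, assuming a first-order upwind (Godunov) discretization of phi_t + F|grad phi| = 0 on a uniform grid with periodic boundaries for brevity; this is illustrative, not the authors' code:

```python
# Minimal 2D level-set update for phi_t + F|grad phi| = 0 with first-order
# Godunov upwinding; periodic boundaries via np.roll for brevity. The
# interface is carried implicitly as the zero contour of phi, so no
# de-looping is ever needed. Illustrative only.
import numpy as np

def evolve(phi, F, dx, dt, steps):
    for _ in range(steps):
        dxm = (phi - np.roll(phi, 1, axis=0)) / dx   # backward differences
        dxp = (np.roll(phi, -1, axis=0) - phi) / dx  # forward differences
        dym = (phi - np.roll(phi, 1, axis=1)) / dx
        dyp = (np.roll(phi, -1, axis=1) - phi) / dx
        grad_p = np.sqrt(np.maximum(dxm, 0)**2 + np.minimum(dxp, 0)**2 +
                         np.maximum(dym, 0)**2 + np.minimum(dyp, 0)**2)
        grad_m = np.sqrt(np.minimum(dxm, 0)**2 + np.maximum(dxp, 0)**2 +
                         np.minimum(dym, 0)**2 + np.maximum(dyp, 0)**2)
        phi = phi - dt * (np.maximum(F, 0) * grad_p + np.minimum(F, 0) * grad_m)
    return phi   # the zero contour of phi is the evolved interface

# Example: a circular interface grown outward at uniform speed F = 1
x, y = np.meshgrid(np.linspace(-1, 1, 100), np.linspace(-1, 1, 100))
phi = np.sqrt(x**2 + y**2) - 0.3             # signed distance to a circle
phi = evolve(phi, F=1.0, dx=0.02, dt=0.005, steps=40)
```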

  16. CFD Simulation On The Pressure Distribution For An Isolated Single-Story House With Extension: Grid Sensitivity Analysis

    NASA Astrophysics Data System (ADS)

    Yahya, W. N. W.; Zaini, S. S.; Ismail, M. A.; Majid, T. A.; Deraman, S. N. C.; Abdullah, J.

    2018-04-01

    Damage due to wind-related disasters is increasing as a result of global climate change. Many studies have examined the wind effects surrounding low-rise buildings using wind tunnel tests or numerical simulations. Numerical simulation is relatively cheap but requires very good command in handling the software, acquiring the correct input parameters, and obtaining the optimum grid or mesh. Before a study can be conducted, a grid sensitivity test must therefore be performed to determine a suitable cell count for the final model, ensuring accurate results with less computing time. This study demonstrates the numerical procedures for conducting a grid sensitivity analysis using five models with different grid schemes. The pressure coefficients (CP) were observed along the wall and roof profile and compared between the models. The results showed that the medium grid scheme can be used and produces results whose accuracy is comparable to the finer grid schemes, as the differences in the CP values were found to be insignificant.
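
    The acceptance logic described above can be sketched as a simple comparison of CP values across grids; the arrays below are illustrative placeholders, not the study's data:

```python
# Grid-sensitivity acceptance check: compare wall/roof pressure coefficients
# from coarse, medium, and fine meshes and accept the medium grid when its
# difference from the fine grid is negligible. Illustrative CP values only.
import numpy as np

cp_coarse = np.array([0.82, 0.41, -0.35, -0.88, -0.52])
cp_medium = np.array([0.85, 0.44, -0.31, -0.84, -0.49])
cp_fine   = np.array([0.86, 0.45, -0.30, -0.83, -0.485])

diff = np.max(np.abs(cp_medium - cp_fine))
print(f"max |CP_medium - CP_fine| = {diff:.3f}")
if diff < 0.02:                      # tolerance chosen for illustration
    print("medium grid acceptable: fine-grid accuracy at lower cost")
```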

  17. Using McStas for modelling complex optics, using simple building bricks

    NASA Astrophysics Data System (ADS)

    Willendrup, Peter K.; Udby, Linda; Knudsen, Erik; Farhi, Emmanuel; Lefmann, Kim

    2011-04-01

    The McStas neutron ray-tracing simulation package is a versatile tool for producing accurate neutron simulations, extensively used for design and optimization of instruments, virtual experiments, data analysis, and user training. In McStas, component organization and simulation flow are intrinsically linear: the neutron interacts with the beamline components in a sequential order, one by one. Historically, a beamline component with several parts had to be implemented with a complete, internal description of all these parts, e.g. a guide component including all four mirror plates and the logic required to allow scattering between the mirrors. For quite a while, users have requested the ability to allow "components inside components", or meta-components, combining the functionality of several simple components to achieve more complex behaviour, i.e. four single mirror plates together defining a guide. We show here that it is now possible to define meta-components in McStas, and present a set of detailed, validated examples including a guide with an embedded, wedged, polarizing mirror system of the Helmholtz-Zentrum Berlin type.

  18. Comparison of different models for non-invasive FFR estimation

    NASA Astrophysics Data System (ADS)

    Mirramezani, Mehran; Shadden, Shawn

    2017-11-01

    Coronary artery disease is a leading cause of death worldwide. Fractional flow reserve (FFR), derived from invasively measuring the pressure drop across a stenosis, is considered the gold standard to diagnose disease severity and need for treatment. Non-invasive estimation of FFR has gained recent attention for its potential to reduce patient risk and procedural cost versus invasive FFR measurement. Non-invasive FFR can be obtained by using image-based computational fluid dynamics to simulate blood flow and pressure in a patient-specific coronary model. However, 3D simulations require extensive effort for model construction and numerical computation, which limits their routine use. In this study we compare (ordered by increasing computational cost/complexity): reduced-order algebraic models of pressure drop across a stenosis; 1D, 2D (multiring) and 3D CFD models; as well as 3D FSI for the computation of FFR in idealized and patient-specific stenosis geometries. We demonstrate the ability of an appropriate reduced order algebraic model to closely predict FFR when compared to FFR from a full 3D simulation. This work was supported by the NIH, Grant No. R01-HL103419.
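
    As an illustration of the cheapest end of that spectrum, a generic algebraic stenosis model of the kind compared in such studies can be written as a viscous term plus an expansion-loss term; the functional form is common in the literature, and the coefficients below are purely illustrative:

```python
# Generic algebraic stenosis model: dP = Kv*Q + Kt*Q**2 (viscous plus
# turbulent/expansion losses), then FFR = Pd / Pa. Coefficients are
# illustrative stand-ins, not fitted values from the paper.
def ffr_estimate(Pa_mmHg, Q_mL_s, Kv=0.8, Kt=0.05):
    """Pa: mean aortic pressure (mmHg); Q: hyperemic flow (mL/s)."""
    dP = Kv * Q_mL_s + Kt * Q_mL_s ** 2    # pressure drop across the stenosis
    Pd = Pa_mmHg - dP                      # distal (post-stenotic) pressure
    return Pd / Pa_mmHg

print(ffr_estimate(Pa_mmHg=93.0, Q_mL_s=4.0))   # ~0.96 here (mild stenosis)
```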

  19. In Vivo Investigation of the Effectiveness of a Hyper-viscoelastic Model in Simulating Brain Retraction

    NASA Astrophysics Data System (ADS)

    Li, Ping; Wang, Weiwei; Zhang, Chenxi; An, Yong; Song, Zhijian

    2016-07-01

    Intraoperative brain retraction leads to a misalignment between the intraoperative positions of the brain structures and their previous positions, as determined from preoperative images. In vitro swine brain sample uniaxial tests showed that the mechanical response of brain tissue to compression and extension could be described by the hyper-viscoelasticity theory. The brain retraction caused by the mechanical process is a combination of brain tissue compression and extension. In this paper, we first constructed a hyper-viscoelastic framework based on the extended finite element method (XFEM) to simulate intraoperative brain retraction. To explore its effectiveness, we then applied this framework to an in vivo brain retraction simulation. The simulation strictly followed the clinical scenario, in which seven swine were subjected to brain retraction. Our experimental results showed that the hyper-viscoelastic XFEM framework is capable of simulating intraoperative brain retraction and improving the navigation accuracy of an image-guided neurosurgery system (IGNS).

  20. Impact Detection for Characterization of Complex Multiphase Flows

    NASA Astrophysics Data System (ADS)

    Chan, Wai Hong Ronald; Urzay, Javier; Mani, Ali; Moin, Parviz

    2016-11-01

    Multiphase flows often involve a wide range of impact events, such as liquid droplets impinging on a liquid pool or gas bubbles coalescing in a liquid medium. These events contribute to a myriad of large-scale phenomena, including breaking waves on ocean surfaces. As impacts between surfaces necessarily occur at isolated points, numerical simulations of impact events will require the resolution of molecular scales near the impact points for accurate modeling. This can be prohibitively expensive unless subgrid impact and breakup models are formulated to capture the effects of the interactions. The first step in a large-eddy simulation (LES) based computational methodology for complex multiphase flows like air-sea interactions requires effective detection of these impact events. The starting point of this work is a collision detection algorithm for structured grids on a coupled level set / volume of fluid (CLSVOF) solver adapted from an earlier algorithm for cloth animations that triangulates the interface with the marching cubes method. We explore the extension of collision detection to a geometric VOF solver and to unstructured grids. Supported by ONR/A*STAR. Agency of Science, Technology and Research, Singapore; Office of Naval Research, USA.

  1. Modeling impact of small Kansas landfills on underlying aquifers

    USGS Publications Warehouse

    Sophocleous, M.; Stadnyk, N.G.; Stotts, M.

    1996-01-01

    Small landfills are exempt from compliance with Resource Conservation and Recovery Act Subtitle D standards for liner and leachate collection. We investigate the ramifications of this exemption under western Kansas semiarid environments and explore the conditions under which naturally occurring geologic settings provide sufficient protection against ground-water contamination. The methodology we employed was to run water budget simulations using the Hydrologic Evaluation of Landfill Performance (HELP) model, and fate and transport simulations using the Multimedia Exposure Assessment Model (MULTIMED) for several western Kansas small landfill scenarios in combination with extensive sensitivity analyses. We demonstrate that requiring landfill cover, leachate collection system (LCS), and compacted soil liner will reduce leachate production by 56%, whereas requiring only a cover without LCS and liner will reduce leachate by half as much. The most vulnerable small landfills are shown to be the ones with no vegetative cover underlain by both a relatively thin vadose zone and aquifer and which overlie an aquifer characterized by cool temperatures and low hydraulic gradients. The aquifer-related physical and chemical parameters proved to be more important than vadose zone and biodegradation parameters in controlling leachate concentrations at the point of compliance. © ASCE.

  2. Investigation of severe lightning strike incidents to two USAF F-106A aircraft

    NASA Technical Reports Server (NTRS)

    Plumer, J. A.

    1981-01-01

    The results of the inspection and analysis of two F-106A aircraft that were struck by separate lightning strikes within a few minutes of each other are presented. Each aircraft sustained severe lightning strikes to the pitot booms, resulting in extensive damage to the pitot heater power harness, number 8 ground wire, and lightning suppressors, but there was no damage to either aircraft's electrical or avionic systems. A simulated lightning current of 226 kA and 3.8 × 10⁶ A²·s was required to reproduce the damage to the ground wires in the radomes. Photographs and detailed assessments of the damage are included.

  3. Progress Towards a Microgravity CFD Validation Study Using the ISS SPHERES-SLOSH Experiment

    NASA Technical Reports Server (NTRS)

    Storey, Jedediah M.; Kirk, Daniel; Marsell, Brandon (Editor); Schallhorn, Paul (Editor)

    2017-01-01

    Understanding, predicting, and controlling fluid slosh dynamics is critical to safety and improving performance of space missions when a significant percentage of the spacecraft's mass is a liquid. Computational fluid dynamics simulations can be used to predict the dynamics of slosh, but these programs require extensive validation. Many CFD programs have been validated by slosh experiments using various fluids in earth gravity, but prior to the ISS SPHERES-Slosh experiment [1], little experimental data for long-duration, zero-gravity slosh existed. This paper presents the current status of an ongoing CFD validation study using the ISS SPHERES-Slosh experimental data.

  4. Progress Towards a Microgravity CFD Validation Study Using the ISS SPHERES-SLOSH Experiment

    NASA Technical Reports Server (NTRS)

    Storey, Jed; Kirk, Daniel (Editor); Marsell, Brandon (Editor); Schallhorn, Paul (Editor)

    2017-01-01

    Understanding, predicting, and controlling fluid slosh dynamics is critical to safety and improving performance of space missions when a significant percentage of the spacecraft's mass is a liquid. Computational fluid dynamics simulations can be used to predict the dynamics of slosh, but these programs require extensive validation. Many CFD programs have been validated by slosh experiments using various fluids in earth gravity, but prior to the ISS SPHERES-Slosh experiment, little experimental data for long-duration, zero-gravity slosh existed. This paper presents the current status of an ongoing CFD validation study using the ISS SPHERES-Slosh experimental data.

  5. Modeling non-locality of plasmonic excitations with a fictitious film

    NASA Astrophysics Data System (ADS)

    Kong, Jiantao; Shvonski, Alexander; Kempa, Krzysztof

    Non-local effects, requiring a wavevector (q) dependent dielectric response, are becoming increasingly important in studies of plasmonic and metamaterial structures. The phenomenological hydrodynamic approximation (HDA) is the simplest and most often used model, but it often fails. We show that the d-function formalism, exact to first order in q, is a powerful and simple-to-use alternative. Recently, we developed a mapping of the d-function formalism into a purely local fictitious film. This geometric mapping allows for non-local extensions of any local calculation scheme, including FDTD. We demonstrate here that such a mapped FDTD simulation of metallic nanoclusters agrees very well with various experiments.

  6. A Parallel Independent Component Analysis Approach to Investigate Genomic Influence on Brain Function

    PubMed Central

    Liu, Jingyu; Demirci, Oguz; Calhoun, Vince D.

    2009-01-01

    Relationships between genomic data and functional brain images are of great interest but require new analysis approaches to integrate the high-dimensional data types. This letter presents an extension of a technique called parallel independent component analysis (paraICA), which enables the joint analysis of multiple modalities including interconnections between them. We extend our earlier work by allowing for multiple interconnections and by providing important overfitting controls. Performance was assessed by simulations under different conditions, and indicated reliable results can be extracted by properly balancing overfitting and underfitting. An application to functional magnetic resonance images and single nucleotide polymorphism array produced interesting findings. PMID:19834575

  7. A Parallel Independent Component Analysis Approach to Investigate Genomic Influence on Brain Function.

    PubMed

    Liu, Jingyu; Demirci, Oguz; Calhoun, Vince D

    2008-01-01

    Relationships between genomic data and functional brain images are of great interest but require new analysis approaches to integrate the high-dimensional data types. This letter presents an extension of a technique called parallel independent component analysis (paraICA), which enables the joint analysis of multiple modalities including interconnections between them. We extend our earlier work by allowing for multiple interconnections and by providing important overfitting controls. Performance was assessed by simulations under different conditions, and indicated reliable results can be extracted by properly balancing overfitting and underfitting. An application to functional magnetic resonance images and single nucleotide polymorphism array produced interesting findings.

  8. Drug discovery using very large numbers of patents. General strategy with extensive use of match and edit operations

    NASA Astrophysics Data System (ADS)

    Robson, Barry; Li, Jin; Dettinger, Richard; Peters, Amanda; Boyer, Stephen K.

    2011-05-01

    A patent database of 6.7 million compounds generated by a very high performance computer (Blue Gene) requires new techniques for exploitation when extensive use of chemical similarity is involved. Such exploitation includes the taxonomic classification of chemical themes, and data mining to assess mutual information between themes and companies. Importantly, we also launch candidates that evolve by "natural selection", driven by the failure of partial matches against the patent database and by their ability to bind appropriately to the protein target, assessed by simulation on Blue Gene. An unusual feature of our method is that algorithms and workflows rely on dynamic interaction between match-and-edit instructions, which in practice are regular expressions. Similarity testing by these uses SMILES strings and, less frequently, graph or connectivity representations. Examining how this performs in high throughput, we note that chemical similarity and novelty are human concepts that largely have meaning by utility in specific contexts. For some purposes, mutual information involving chemical themes might be a better concept.
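
    A toy example of a match-and-edit operation of the kind described, using a regular expression over a SMILES string with Python's re module; the pattern and the edit are illustrative and far simpler than real patent-mining rules:

```python
# Sketch of a "match-and-edit" operation on a SMILES string using regular
# expressions, in the spirit described in the abstract. Illustrative only.
import re

smiles = "CC(=O)Oc1ccccc1C(=O)O"     # aspirin

# Match: a terminal carboxylic acid written as C(=O)O (the lookahead keeps
# the ester's C(=O)O-c from matching, since 'c' is aromatic lowercase)
if re.search(r"C\(=O\)O(?![a-z])", smiles):
    # Edit: swap the acid for an amide to generate a candidate variant
    candidate = re.sub(r"C\(=O\)O(?![a-z])", "C(=O)N", smiles, count=1)
    print(candidate)                  # CC(=O)Oc1ccccc1C(=O)N
```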

  9. Estimation of the Invisible Energy in Extensive Air Showers with the Data Collected by the Pierre Auger Observatory

    NASA Astrophysics Data System (ADS)

    Mariazzi, Analisa

    The determination of the energy of primary cosmic rays from their extensive air showers using the fluorescence technique requires an estimation of the energy carried away by particles reaching the ground that do not deposit all their energy in the atmosphere. This estimation is typically made using Monte Carlo simulations and depends on the assumed primary particle mass and on model predictions for hadron-air collisions at high energies. In this work we review the method that the Pierre Auger Collaboration uses to obtain the invisible energy directly from hybrid events measured simultaneously with the fluorescence and the surface detectors of the Pierre Auger Observatory. As a corroboration of these results, a new method for the determination of the invisible energy using an independent data set is also presented. Both methods agree within systematic uncertainties, reducing significantly the biases related to differences between the high energy hadronic interaction models and data.

  10. Toward a Predictive Understanding of Earth’s Microbiomes to Address 21st Century Challenges

    PubMed Central

    Blaser, Martin J.; Cardon, Zoe G.; Cho, Mildred K.; Dangl, Jeffrey L.; Green, Jessica L.; Knight, Rob; Maxon, Mary E.; Northen, Trent R.; Pollard, Katherine S.

    2016-01-01

    Microorganisms have shaped our planet and its inhabitants for over 3.5 billion years. Humankind has had a profound influence on the biosphere, manifested as global climate and land use changes, and extensive urbanization in response to a growing population. The challenges we face to supply food, energy, and clean water while maintaining and improving the health of our population and ecosystems are significant. Given the extensive influence of microorganisms across our biosphere, we propose that a coordinated, cross-disciplinary effort is required to understand, predict, and harness microbiome function. From the parallelization of gene function testing to precision manipulation of genes, communities, and model ecosystems and development of novel analytical and simulation approaches, we outline strategies to move microbiome research into an era of causality. These efforts will improve prediction of ecosystem response and enable the development of new, responsible, microbiome-based solutions to significant challenges of our time. PMID:27178263

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Von Dreele, Robert

    One of the goals in developing GSAS-II was to expand from the capabilities of the original General Structure Analysis System (GSAS), which largely encompassed just structure refinement and post-refinement analysis. GSAS-II has been written almost entirely in Python loaded with graphics, GUI and mathematical packages (matplotlib, pyOpenGL, wxpython, numpy and scipy). Thus, GSAS-II has a fully developed modern GUI as well as extensive graphical display of data and results. However, the structure and operation of Python has required new approaches to many of the algorithms used in crystal structure analysis. The extensions beyond GSAS include image calibration/integration as well as peak fitting and unit cell indexing for powder data, which are precursors for structure solution. Structure solution within GSAS-II begins with either Pawley or LeBail extracted structure factors from powder data or those measured in a single crystal experiment. Both charge flipping and Monte Carlo-Simulated Annealing techniques are available; the former can be applied to (3+1) incommensurate structures as well as conventional 3D structures.

  12. Toward a Predictive Understanding of Earth's Microbiomes to Address 21st Century Challenges.

    PubMed

    Blaser, Martin J; Cardon, Zoe G; Cho, Mildred K; Dangl, Jeffrey L; Donohue, Timothy J; Green, Jessica L; Knight, Rob; Maxon, Mary E; Northen, Trent R; Pollard, Katherine S; Brodie, Eoin L

    2016-05-13

    Microorganisms have shaped our planet and its inhabitants for over 3.5 billion years. Humankind has had a profound influence on the biosphere, manifested as global climate and land use changes, and extensive urbanization in response to a growing population. The challenges we face to supply food, energy, and clean water while maintaining and improving the health of our population and ecosystems are significant. Given the extensive influence of microorganisms across our biosphere, we propose that a coordinated, cross-disciplinary effort is required to understand, predict, and harness microbiome function. From the parallelization of gene function testing to precision manipulation of genes, communities, and model ecosystems and development of novel analytical and simulation approaches, we outline strategies to move microbiome research into an era of causality. These efforts will improve prediction of ecosystem response and enable the development of new, responsible, microbiome-based solutions to significant challenges of our time. Copyright © 2016 Blaser et al.

  13. Development of total maximum daily loads for bacteria impaired watershed using the comprehensive hydrology and water quality simulation model.

    PubMed

    Kim, Sang M; Brannan, Kevin M; Zeckoski, Rebecca W; Benham, Brian L

    2014-01-01

    The objective of this study was to develop bacteria total maximum daily loads (TMDLs) for the Hardware River watershed in the Commonwealth of Virginia, USA. The TMDL program is an integrated watershed management approach required by the Clean Water Act. The TMDLs were developed to meet Virginia's water quality standard for bacteria at the time, which stated that the calendar-month geometric mean concentration of Escherichia coli should not exceed 126 cfu/100 mL, and that no single sample should exceed a concentration of 235 cfu/100 mL. The bacteria impairment TMDLs were developed using the Hydrological Simulation Program-FORTRAN (HSPF). The hydrology and water quality components of HSPF were calibrated and validated using data from the Hardware River watershed to ensure that the model adequately simulated runoff and bacteria concentrations. The calibrated and validated HSPF model was used to estimate the contributions from the various bacteria sources in the Hardware River watershed to the in-stream concentration. Bacteria loads were estimated through an extensive source characterization process. Simulation results for existing conditions indicated that the majority of the bacteria came from livestock and wildlife direct deposits and pervious lands. Different source reduction scenarios were evaluated to identify scenarios that meet both the geometric mean and single sample maximum E. coli criteria with zero violations. The resulting scenarios required extreme and impractical reductions from livestock and wildlife sources. Results from studies similar to this across Virginia partially contributed to a reconsideration of the standard's applicability to TMDL development.
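
    The two numeric criteria quoted above are easy to state in code; a minimal check, assuming one calendar month of samples in cfu/100 mL:

```python
# Check of the two E. coli criteria quoted above: calendar-month geometric
# mean <= 126 cfu/100 mL and no single sample above 235 cfu/100 mL.
import numpy as np

def meets_criteria(month_samples_cfu):
    s = np.asarray(month_samples_cfu, dtype=float)
    geomean = np.exp(np.mean(np.log(s)))
    return geomean <= 126.0 and s.max() <= 235.0

print(meets_criteria([80, 120, 100, 180]))   # True: geomean ~115, max 180
```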

  14. MITK-based segmentation of co-registered MRI for subject-related regional anesthesia simulation

    NASA Astrophysics Data System (ADS)

    Teich, Christian; Liao, Wei; Ullrich, Sebastian; Kuhlen, Torsten; Ntouba, Alexandre; Rossaint, Rolf; Ullisch, Marcus; Deserno, Thomas M.

    2008-03-01

    Although indications for regional anesthesia are steadily increasing, it is still trained directly on the patient. To develop a virtual reality (VR)-based simulation, a patient model is needed containing several tissues, which have to be extracted from individual magnetic resonance imaging (MRI) volume datasets. Due to the given modality and the different characteristics of the single tissues, an adequate segmentation can only be achieved by using a combination of segmentation algorithms. In this paper, we present a framework for creating an individual model from MRI scans of the patient. Our work splits into two parts. First, an easy-to-use and extensible tool for handling the segmentation task on arbitrary datasets is provided. The key idea is to let the user create a segmentation for the given subject by running different processing steps in a purposive order and store them in a segmentation script for reuse on new datasets. For data handling and visualization, we utilize the Medical Imaging Interaction Toolkit (MITK), which is based on the Visualization Toolkit (VTK) and the Insight Segmentation and Registration Toolkit (ITK). The second part is to find suitable segmentation algorithms and corresponding parameters for differentiating the tissues required by the RA simulation. For this purpose, a fuzzy c-means clustering algorithm combined with mathematical morphology operators and a geometric active contour-based approach is chosen. The segmentation process itself aims at operating with minimal user interaction, and the resulting model fits the requirements of the simulation. First results are shown for both male and female MRI of the pelvis.
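
    For orientation, the clustering step named above can be sketched as a generic fuzzy c-means iteration over voxel intensities; this is the textbook algorithm, not the MITK-based pipeline itself:

```python
# Textbook fuzzy c-means on voxel intensities (1D feature space); a stand-in
# for the clustering step named in the abstract, not the MITK pipeline.
import numpy as np

def fuzzy_cmeans(image, n_clusters=3, m=2.0, iters=50):
    x = image.ravel().astype(float)
    centers = np.linspace(x.min(), x.max(), n_clusters)
    for _ in range(iters):
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12       # distances
        # membership u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
        u = 1.0 / np.sum((d[:, None, :] / d[None, :, :]) ** (2 / (m - 1)),
                         axis=1)
        w = u ** m
        centers = (w @ x) / w.sum(axis=1)     # fuzzily weighted class means
    return centers, u                          # hard labels: u.argmax(axis=0)

slice_ = np.random.default_rng(1).normal(size=(64, 64))  # stand-in MR slice
centers, u = fuzzy_cmeans(slice_)
print(centers)
```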

  15. EXTENSION EDUCATION SYMPOSIUM: reinventing extension as a resource--what does the future hold?

    PubMed

    Mirando, M A; Bewley, J M; Blue, J; Amaral-Phillips, D M; Corriher, V A; Whittet, K M; Arthur, N; Patterson, D J

    2012-10-01

    The mission of the Cooperative Extension Service, as a component of the land-grant university system, is to disseminate new knowledge and to foster its application and use. Opportunities and challenges facing animal agriculture in the United States have changed dramatically over the past few decades and require the use of new approaches and emerging technologies that are available to extension professionals. Increased federal competitive grant funding for extension, the creation of eXtension, the development of smartphone and related electronic technologies, and the rapidly increasing popularity of social media created new opportunities for extension educators to disseminate knowledge to a variety of audiences and engage these audiences in electronic discussions. Competitive grant funding opportunities for extension efforts to advance animal agriculture became available from the USDA National Institute of Food and Agriculture (NIFA) and have increased dramatically in recent years. The majority of NIFA funding opportunities require extension efforts to be integrated with research, and NIFA encourages the use of eXtension and other cutting-edge approaches to extend research to traditional clientele and nontraditional audiences. A case study is presented to illustrate how research and extension were integrated to improve the adoption of AI by beef producers. Those in agriculture are increasingly resorting to the use of social media venues such as Facebook, YouTube, LinkedIn, and Twitter to access information required to support their enterprises. Use of these various approaches by extension educators requires appreciation of the technology and an understanding of how the target audiences access information available on social media. Technology to deliver information is changing rapidly, and Cooperative Extension Service professionals will need to continuously evaluate digital technology and social media tools to appropriately integrate them into learning and educational opportunities.

  16. Simulation of Subsurface Multiphase Contaminant Extraction Using a Bioslurping Well Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matos de Souza, Michelle; Oostrom, Mart; White, Mark D.

    2016-07-12

    Subsurface simulation of multiphase extraction from wells is notoriously difficult. Explicit representation of well geometry requires small grid resolution, potentially leading to large computational demands. To reduce the problem dimensionality, multiphase extraction is mostly modeled using vertically-averaged approaches. In this paper, a multiphase well model approach is presented as an alternative to simplify the application. The well model, a multiphase extension of the classic Peaceman model, has been implemented in the STOMP simulator. The numerical solution approach accounts for local conditions and gradients in the exchange of fluids between the well and the aquifer. Advantages of this well model implementation include the option to simulate the effects of well characteristics and operation. Simulations were conducted investigating the effects of extraction location, applied vacuum pressure, and a number of hydraulic properties. The obtained results were all consistent and logical. A major outcome of the test simulations is that, in contrast with common recommendations to extract from either the gas-NAPL or the NAPL-aqueous phase interface, the optimum extraction location should be in between these two levels. The new model implementation was also used to simulate extraction at a field site in Brazil. The simulation shows a good match with the field data, suggesting that the new STOMP well module may correctly represent oil removal. The field simulations depend on the quality of the site conceptual model, including the porous media and contaminant properties and the boundary and extraction conditions adopted. The new module may potentially be used to design field applications and analyze extraction data.
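
    For background, a sketch of the classic Peaceman well index that the described model extends to multiphase conditions; the formula uses the standard equivalent radius for a square block, and all values are illustrative:

```python
# Classic Peaceman well index: flow between a well and its grid block is
# Q = WI * mobility * (p_block - p_well), with WI built from an equivalent
# block radius r_eq ~ 0.2*dx (square block). Illustrative values only.
import numpy as np

def peaceman_wi(k, h, dx, rw, skin=0.0):
    """k: permeability (m^2); h: block thickness (m); dx: block size (m);
    rw: wellbore radius (m); skin: dimensionless skin factor."""
    r_eq = 0.2 * dx                          # Peaceman equivalent radius
    return 2.0 * np.pi * k * h / (np.log(r_eq / rw) + skin)

WI = peaceman_wi(k=1e-13, h=2.0, dx=5.0, rw=0.1)
mobility = 1.0 / 1e-3                        # phase mobility ~ k_r / mu
Q = WI * mobility * (2.0e5 - 1.5e5)          # inflow for 0.5 bar drawdown, m^3/s
print(WI, Q)
```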

  17. 75 FR 22844 - Construction Fall Protection Systems Criteria and Practices and Training Requirements; Extension...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-30

    ...] Construction Fall Protection Systems Criteria and Practices and Training Requirements; Extension of the Office of Management and Budget's (OMB) Approval of Information Collection (Paperwork) Requirements AGENCY... requirements contained in the construction standards on Fall Protection Systems Criteria and Practices (29 CFR...

  18. Estimating canopy bulk density and canopy base height for conifer stands in the interior Western United States using the Forest Vegetation Simulator Fire and Fuels Extension.

    Treesearch

    Seth Ex; Frederick Smith; Tara Keyser; Stephanie Rebain

    2017-01-01

    The Forest Vegetation Simulator Fire and Fuels Extension (FFE-FVS) is often used to estimate canopy bulk density (CBD) and canopy base height (CBH), which are key indicators of crown fire hazard for conifer stands in the Western United States. Estimated CBD from FFE-FVS is calculated as the maximum 4 m running mean bulk density of predefined 0.3 m thick canopy layers (...

  19. Closed Environment Module - Modularization and extension of the Virtual Habitat

    NASA Astrophysics Data System (ADS)

    Plötner, Peter; Czupalla, Markus; Zhukov, Anton

    2013-12-01

    The Virtual Habitat (V-HAB), is a Life Support System (LSS) simulation, created to perform dynamic simulation of LSS's for future human spaceflight missions. It allows the testing of LSS robustness by means of computer simulations, e.g. of worst case scenarios.

  20. A Component-Based Extension Framework for Large-Scale Parallel Simulations in NEURON

    PubMed Central

    King, James G.; Hines, Michael; Hill, Sean; Goodman, Philip H.; Markram, Henry; Schürmann, Felix

    2008-01-01

    As neuronal simulations approach larger scales with increasing levels of detail, the neurosimulator software represents only a part of a chain of tools ranging from setup, simulation, and interaction with virtual environments to analysis and visualizations. Previously published approaches to abstracting simulator engines have not received widespread acceptance, which in part may be due to the fact that they tried to address the challenge of solving the model specification problem. Here, we present an approach that uses a neurosimulator, in this case NEURON, to describe and instantiate the network model in the simulator's native model language but then replaces the main integration loop with its own. Existing parallel network models are easily adapted to run in the presented framework. The presented approach is thus an extension to NEURON but uses a component-based architecture to allow for replaceable spike exchange components and pluggable components for monitoring, analysis, or control that can run in this framework alongside the simulation. PMID:19430597
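
    The architecture described can be pictured as a framework-owned loop that calls replaceable components at each step; the sketch below is a generic illustration of that pattern, with hypothetical names rather than the published API:

```python
# Generic sketch of a framework-owned main loop with pluggable components;
# names are hypothetical, not the published API.
class Component:
    def step(self, t, state): ...

class SpikeExchange(Component):
    def step(self, t, state):
        state.setdefault("delivered", []).append(t)   # stand-in for exchange

class Monitor(Component):
    def step(self, t, state):
        if t % 10 == 0:
            print(f"t={t} ms, spike batches delivered: {len(state['delivered'])}")

def run(components, t_stop, dt=1):
    state = {}
    for t in range(0, t_stop, dt):
        # ...advance the neurosimulator by dt here (e.g., NEURON's fadvance)...
        for c in components:
            c.step(t, state)

run([SpikeExchange(), Monitor()], t_stop=30)
```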

  1. Simulating the effects of ground-water withdrawals on streamflow in a precipitation-runoff model

    USGS Publications Warehouse

    Zarriello, Philip J.; Barlow, P.M.; Duda, P.B.

    2004-01-01

    Precipitation-runoff models are used to assess the effects of water use and management alternatives on streamflow. Often, ground-water withdrawals are a major water-use component that affects streamflow, but the ability of surface-water models to simulate ground-water withdrawals is limited. As part of a Hydrologic Simulation Program-FORTRAN (HSPF) precipitation-runoff model developed to analyze the effect of ground-water and surface-water withdrawals on streamflow in the Ipswich River in northeastern Massachusetts, an analytical technique (STRMDEPL) was developed for calculating the effects of pumped wells on streamflow. STRMDEPL is a FORTRAN program based on two analytical solutions that solve equations for ground-water flow to a well completed in a semi-infinite, homogeneous, and isotropic aquifer in direct hydraulic connection to a fully penetrating stream. One analytical method calculates unimpeded flow at the stream-aquifer boundary and the other method calculates the resistance to flow caused by semipervious streambed and streambank material. The principle of superposition is used with these analytical equations to calculate time-varying streamflow depletions due to daily pumping. The HSPF model can readily incorporate streamflow depletions caused by a well or surface-water withdrawal, or by multiple wells or surface-water withdrawals, or both, as a combined time-varying outflow demand from affected channel reaches. These demands are stored as a time series in the Watershed Data Management (WDM) file. This time-series data is read into the model as an external source used to specify flow from the first outflow gate in the reach where these withdrawals are located. Although the STRMDEPL program can be run independently of the HSPF model, an extension was developed to run this program within GenScn, a scenario generator and graphical user interface developed for use with the HSPF model. This extension requires that actual pumping rates for each well be stored in a unique WDM dataset identified by an attribute that associates each well with the model reach from which water is withdrawn. Other attributes identify the type and characteristics of the data. The interface allows users to easily add new pumping wells, delete existing pumping wells, or change properties of the simulated aquifer or well. Development of this application enhanced the ability of the HSPF model to simulate complex water-use conditions in the Ipswich River Basin. The STRMDEPL program and the GenScn extension provide a valuable tool for water managers to evaluate the effects of pumped wells on streamflow and to test alternative water-use scenarios. Copyright ASCE 2004.
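
    The superposition idea at the heart of such a technique can be sketched with the Glover-Balmer analytical solution, in which a constant pumping rate depletes streamflow in proportion to erfc(sqrt(d^2 S / (4 T t))); daily rate changes are then superposed in time. This is a simplified stand-in for STRMDEPL, with illustrative parameters:

```python
# Streamflow depletion by superposition of the Glover-Balmer solution.
# Simplified stand-in for STRMDEPL; parameter values are illustrative.
import numpy as np
from scipy.special import erfc

def depletion(Q_daily, d, S, T):
    """Q_daily: pumping rate per day; d: well-to-stream distance;
    S: storativity; T: transmissivity (consistent units, time in days)."""
    n = len(Q_daily)
    dQ = np.diff(np.concatenate(([0.0], Q_daily)))   # daily rate changes
    dep = np.zeros(n)
    for i, step in enumerate(dQ):
        if step == 0.0:
            continue
        t = np.arange(1, n - i + 1)                  # days since the change
        dep[i:] += step * erfc(np.sqrt(d * d * S / (4.0 * T * t)))
    return dep

# Example: 30 days pumping at 500 m^3/d, then off; residual depletion remains
print(depletion(np.array([500.0] * 30 + [0.0] * 30), d=100, S=0.2, T=150)[-1])
```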

  2. Incorporating Non-Linear Sorption into High Fidelity Subsurface Reactive Transport Models

    NASA Astrophysics Data System (ADS)

    Matott, L. S.; Rabideau, A. J.; Allen-King, R. M.

    2014-12-01

    A variety of studies, including multiple NRC (National Research Council) reports, have stressed the need for simulation models that can provide realistic predictions of contaminant behavior during the groundwater remediation process, most recently highlighting the specific technical challenges of "back diffusion and desorption in plume models". For a typically-sized remediation site, a minimum of about 70 million grid cells are required to achieve desired cm-level thickness among low-permeability lenses responsible for driving the back-diffusion phenomena. Such discretization is nearly three orders of magnitude more than is typically seen in modeling practice using public domain codes like RT3D (Reactive Transport in Three Dimensions). Consequently, various extensions have been made to the RT3D code to support efficient modeling of recently proposed dual-mode non-linear sorption processes (e.g. Polanyi with linear partitioning) at high-fidelity scales of grid resolution. These extensions have facilitated development of exploratory models in which contaminants are introduced into an aquifer via an extended multi-decade "release period" and allowed to migrate under natural conditions for centuries. These realistic simulations of contaminant loading and migration provide high fidelity representation of the underlying diffusion and sorption processes that control remediation. Coupling such models with decision support processes is expected to facilitate improved long-term management of complex remediation sites that have proven intractable to conventional remediation strategies.
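
    One commonly used dual-mode form of the kind named above combines a Polanyi-type adsorption term with linear partitioning; the sketch below uses that generic form with illustrative parameters, not values fitted in the study:

```python
# One commonly used dual-mode isotherm: a Polanyi-type adsorption term plus
# linear partitioning, q(C) = Q0 * 10**(a * log10(Cs/C)**b) + Kd * C.
# Parameters below are illustrative, not values from the study.
import numpy as np

def dual_mode_sorbed(C, Cs=1100.0, Q0=55.0, a=-1.2, b=1.5, Kd=0.4):
    """C: aqueous concentration (mg/L); Cs: solubility; returns mg/kg."""
    C = np.asarray(C, dtype=float)
    polanyi = Q0 * 10.0 ** (a * np.log10(Cs / C) ** b)   # adsorption domain
    return polanyi + Kd * C                              # plus partitioning

print(dual_mode_sorbed([0.1, 1.0, 10.0, 100.0]))
```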

  3. Evaluation of dispersive mixing, extension rate and bubble size distribution using numerical simulation of a non-Newtonian fluid in a twin-screw mixer

    NASA Astrophysics Data System (ADS)

    Rathod, Maureen L.

    Initially, 3D FEM simulation of a simplified mixer was used to examine the effect of mixer configuration and operating conditions on dispersive mixing of a non-Newtonian fluid. Horizontal and vertical velocity magnitudes increased with increasing mixer speed, while maximum axial velocity and shear rate were greater with staggered paddles. In contrast, parallel paddles produced an area of efficient dispersive mixing between the center of the paddle and the barrel wall. This study was expanded to encompass the complete nine-paddle mixing section using power-law and Bird-Carreau fluid models. In the center of the mixer, simple shear flow was seen, corresponding with high [special character omitted]. Efficient dispersive mixing appeared near the barrel wall at all flow rates and near the barrel center with parallel paddles. Areas of backflow, improving fluid retention time, occurred with staggered paddles. The Bird-Carreau fluid showed greater influence of paddle motion under the same operating conditions due to the inelastic nature of the fluid. Shear-thinning behavior also resulted in greater maximum shear rate as shearing became easier with decreasing fluid viscosity. Shear rate distributions are frequently calculated, but extension rate calculations have not been made in a complex geometry since Debbaut and Crochet (1988) defined extension rate as the ratio of the third to the second invariant of the strain rate tensor. Extension rate was assumed to be negligible in most studies, but here extension rate is shown to be significant. It is possible to calculate maximum stable bubble diameter from capillary number if shear and extension rates in a flow field are known. Extension rate distributions were calculated for Newtonian and non-Newtonian fluids. High extension and shear rates were found in the intermeshing region. Extension is the major influence on critical capillary number and maximum stable bubble diameter, but when extension rate values are low, shear rate has a larger impact. Examination of maximum stable bubble diameter through the mixer predicted areas of higher bubble dispersion based on flow type. This research has advanced the simulation of non-Newtonian fluids and shown that direct calculation of extension rate is possible, demonstrating the effect of extension rate on bubble break-up.
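
    The two quantities central to this analysis are straightforward to sketch: an extension rate as the ratio of the third to the second invariant of the strain-rate tensor (after Debbaut and Crochet), and a maximum stable bubble diameter from a critical capillary number. The tensor and parameter values below are illustrative only:

```python
# Extension rate as the ratio of the third to the second invariant of the
# strain-rate tensor (after Debbaut and Crochet), and a maximum stable
# bubble diameter from a critical capillary number. Illustrative values.
import numpy as np

def extension_rate(D):
    """D: symmetric strain-rate tensor, shape (3, 3)."""
    II = 0.5 * (np.trace(D) ** 2 - np.trace(D @ D))   # second invariant
    III = np.linalg.det(D)                            # third invariant
    return III / II

def d_max(Ca_crit, sigma, mu, gdot):
    """From Ca = mu * gdot * (d/2) / sigma at the critical value."""
    return 2.0 * Ca_crit * sigma / (mu * gdot)

D = np.array([[2.0, 0.5, 0.0],
              [0.5, -1.0, 0.0],
              [0.0, 0.0, -1.0]])                      # traceless example
print(extension_rate(D), d_max(0.5, sigma=0.03, mu=10.0, gdot=50.0))
```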

  4. Incision extension is the optimal method of difficult gallbladder extraction at laparoscopic cholecystectomy.

    PubMed

    Bordelon, B M; Hobday, K A; Hunter, J G

    1992-01-01

    An unsolved problem of laparoscopic cholecystectomy is the optimal method of removing the gallbladder with thick walls and a large stone burden. Proposed solutions include fascial dilatation, stone crushing, and ultrasonic, high-speed rotary, or laser lithotripsy. Our observation was that extension of the fascial incision to remove the impacted gallbladder was time efficient and did not increase postoperative pain. We reviewed the narcotic requirements of 107 consecutive patients undergoing laparoscopic cholecystectomy. Fifty-two patients required extension of the umbilical incision, and 55 patients did not have their fascial incision enlarged. Parenteral meperidine use was 39.5 +/- 63.6 mg in the patients requiring fascial incision extension and 66.3 +/- 79.2 mg in those not requiring fascial incision extension (mean +/- standard deviation). Oral narcotic requirements were 1.1 +/- 1.5 doses vs 1.3 +/- 1.7 doses in patients with and without incision extension, respectively. The wide range of narcotic use in both groups makes these apparent differences not statistically significant. We conclude that protracted attempts at stone crushing or expensive stone fragmentation devices are unnecessary for the extraction of a difficult gallbladder during laparoscopic cholecystectomy.

  5. ARC-1969-AC-42137

    NASA Image and Video Library

    1969-02-05

    Height-Control Test Apparatus (HICONTA) Simulator mounted to the exterior of the 40x80ft W.T. Building N-221B and provided extensive vertical motion simulating airplanes, helicopters, and V/STOL aircraft

  6. Signal treatments to reduce heavy vehicle crash-risk at metropolitan highway intersections.

    PubMed

    Archer, Jeffery; Young, William

    2009-05-01

    Heavy vehicle red-light running at intersections is a common safety problem that has severe consequences. This paper investigates alternative signal treatments that address this issue. A micro-simulation analysis approach was adopted as a precursor to a field trial. The simulation model emulated traffic conditions at a known problem intersection and provided a baseline measure to compare the effects of: an extension of amber time; an extension of green for heavy vehicles detected in the dilemma zone at the onset of amber; an extension of the all-red safety-clearance time based on the detection of vehicles considered likely to run the red light at two detector locations during amber; an extension of the all-red safety-clearance time based on the detection of potential red-light runners during amber or red; and a combination of the second and fourth alternatives. Results suggested safety improvements for all treatments. An extension of amber provided the best safety effect but is known to be prone to behavioural adaptation effects and wastes traffic movement time unnecessarily. A green extension for heavy vehicles detected in the dilemma zone and an all-red extension for potential red-light runners were deemed to provide a sustainable safety improvement and operational efficiency.
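
    The green-extension treatment, for instance, reduces to a simple end-of-green rule; the sketch below is a generic illustration with hypothetical timing values, not the simulated intersection's actual settings:

```python
# End-of-green rule for the "green extension" treatment: hold the green a
# little longer if a heavy vehicle is in the dilemma zone. Timing values
# here are illustrative only.
def end_of_green_action(heavy_vehicle_positions_m, speed_m_s=22.0,
                        dz_near_s=2.5, dz_far_s=5.5, max_extension_s=3.0):
    """Return extra green time if any heavy vehicle sits in the dilemma zone,
    defined here as 2.5-5.5 s travel time from the stop line."""
    for x in heavy_vehicle_positions_m:
        t = x / speed_m_s                    # travel time to the stop line
        if dz_near_s <= t <= dz_far_s:
            return max_extension_s           # extend green, clear the zone
    return 0.0                               # terminate green as scheduled

print(end_of_green_action([80.0]))           # 80 m at 22 m/s ~ 3.6 s -> extend
```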

  7. Automatic discovery of optimal classes

    NASA Technical Reports Server (NTRS)

    Cheeseman, Peter; Stutz, John; Freeman, Don; Self, Matthew

    1986-01-01

    A criterion, based on Bayes' theorem, is described that defines the optimal set of classes (a classification) for a given set of examples. This criterion is transformed into an equivalent minimum message length criterion with an intuitive information interpretation. This criterion does not require that the number of classes be specified in advance; the number is determined by the data. The minimum message length criterion includes the message length required to describe the classes, so there is a built-in bias against adding new classes unless they lead to a reduction in the message length required to describe the data. Unfortunately, the search space of possible classifications is too large to search exhaustively, so heuristic search methods, such as simulated annealing, are applied. Tutored learning and probabilistic prediction in particular cases are an important indirect result of optimal class discovery. Extensions to the basic class induction program include the ability to combine category and real value data, hierarchical classes, independent classifications, and deciding for each class which attributes are relevant.
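
    A toy version of the two-part trade-off is sketched below: adding classes shortens the data description but lengthens the model description (including the bits needed to state each example's class), so the total message length selects the class count. This is a simplified stand-in, not the actual criterion of the program described:

```python
# Toy two-part minimum-message-length trade-off: model bits + assignment
# bits + residual (fit) bits. Simplified stand-in, not the real criterion.
import numpy as np
from math import log2

def message_length(data, k, bits_per_class=32):
    data = np.sort(data)
    groups = np.array_split(data, k)        # crude stand-in for a classification
    model_bits = k * bits_per_class         # cost of stating k class models
    assign_bits = len(data) * log2(k) if k > 1 else 0.0
    fit_bits = 0.0
    for g in groups:
        sd = max(g.std(), 1e-3)
        # bits to encode members under that class's Gaussian (constants dropped)
        fit_bits += (np.sum(0.5 * ((g - g.mean()) / sd) ** 2) / np.log(2)
                     + len(g) * (log2(sd) + 2))
    return model_bits + assign_bits + fit_bits

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0, 1, 200), rng.normal(8, 1, 200)])
best_k = min(range(1, 6), key=lambda k: message_length(data, k))
print(best_k)   # typically 2: extra classes stop paying for their description
```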

  8. Simulation framework for electromagnetic effects in plasmonics, filter apertures, wafer scattering, grating mirrors, and nano-crystals

    NASA Astrophysics Data System (ADS)

    Ceperley, Daniel Peter

    This thesis presents a Finite-Difference Time-Domain simulation framework as well as both scientific observations and quantitative design data for emerging optical devices. These emerging applications required the development of simulation capabilities to carefully control numerical experimental conditions, isolate and quantify specific scattering processes, and overcome memory and run-time limitations on large device structures. The framework consists of a new version 7 of TEMPEST and auxiliary tools implemented as Matlab scripts. In improving the geometry representation and absorbing boundary conditions from TEMPEST v6, accuracy has been sustained while key changes have yielded application-specific gains in speed and accuracy. These extensions include pulsed methods, PML for plasmon termination, and plasmon and scattered field sources. The auxiliary tools include application-specific methods such as signal flow graphs of plasmon couplers, Bloch mode expansions of sub-wavelength grating waves, and back-propagation methods to characterize edge scattering in diffraction masks. Each application posed different numerical hurdles and physical questions for the simulation framework. The Terrestrial Planet Finder Coronagraph required accurate modeling of diffraction mask structures too large for solely FDTD analysis. This analysis was achieved through a combination of targeted TEMPEST simulations and a full-system simulator, based on thin-mask scalar diffraction models, developed by Ball Aerospace for JPL. TEMPEST simulation showed that vertical sidewalls were the strongest scatterers, adding nearly 2λ of light per mask edge, which could be reduced by 20° undercuts. TEMPEST assessment of coupling in rapid thermal annealing was complicated by extremely sub-wavelength features and fine meshes. Near-100% coupling and low variability were confirmed even in the presence of unidirectional dense metal gates. Accurate analysis of surface plasmon coupling efficiency by small surface features required capabilities to isolate these features and cleanly illuminate them with plasmons and plane-waves. These features were shown to have coupling cross-sections up to and slightly exceeding their physical size. Long run-times for TEMPEST simulations of finite-length gratings were overcome with a signal flow graph method. With these methods a plasmon coupler with a 100% capture length of over 10λ was demonstrated. Simulation of 3D nano-particle arrays utilized TEMPEST v7's pulsed methods to minimize the number of multi-day simulations. These simulations led to the discovery that interstitial plasmons were responsible for resonant absorption and transmission but not reflection. Simulation of a sub-wavelength grating mirror using pulsed sources to map resonant spectra showed that neither coupled guided waves nor coupled isolated resonators accurately described the operation. However, a new model based on vertical propagation of lateral Bloch modes with zero phase progression efficiently characterized the device and provided principles for designing similar devices at other wavelengths.
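
    At its core, an FDTD engine of this kind advances interleaved E and H fields in time; a toy 1D Yee update loop (free space, normalized units, illustrative only) looks like this:

```python
# Toy 1D FDTD (Yee) update loop: interleaved E and H fields advanced in
# time with a soft Gaussian source. Free space, normalized units; this is
# the core time-stepping idea, not TEMPEST itself.
import numpy as np

n, steps = 400, 600
Ez, Hy = np.zeros(n), np.zeros(n - 1)
for t in range(steps):
    Hy += 0.5 * (Ez[1:] - Ez[:-1])                 # H update from curl of E
    Ez[1:-1] += 0.5 * (Hy[1:] - Hy[:-1])           # E update from curl of H
    Ez[n // 4] += np.exp(-((t - 30) / 10.0) ** 2)  # soft Gaussian source
print(float(np.abs(Ez).max()))
```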

  9. Assessing uncertainties in crop and pasture ensemble model simulations of productivity and N2O emissions

    USDA-ARS?s Scientific Manuscript database

    Simulation models are extensively used to predict agricultural productivity and greenhouse gas (GHG) emissions. However, the uncertainties of (reduced) model ensemble simulations have not been assessed systematically for variables affecting food security and climate change mitigation, within multisp...

  10. Dual Interlocked Logic for Single-Event Transient Mitigation

    DTIC Science & Technology

    2017-03-01

    SPICE simulation and fault-injection analysis. Exemplar SPICE simulations have been performed in a 32nm partially-depleted silicon-on-insulator...in this work. The model has been validated at the 32nm SOI technology node with extensive heavy-ion data [7]. For the SPICE simulations, three

  11. Extending MAM5 Meta-Model and JaCalIVE Framework to Integrate Smart Devices from Real Environments.

    PubMed

    Rincon, J A; Poza-Lujan, Jose-Luis; Julian, V; Posadas-Yagüe, Juan-Luis; Carrascosa, C

    2016-01-01

    This paper presents the extension of a meta-model (MAM5) and a framework based on the model (JaCalIVE) for developing intelligent virtual environments. The goal of this extension is to develop augmented mirror worlds that represent a real and a virtual world coupled, so that the virtual world not only reflects the real one, but also complements it. A new component called a smart resource artifact, which enables modelling and developing devices to access the real physical world, and a human-in-the-loop agent to place a human in the system have been included in the meta-model and framework. The proposed extension of MAM5 has been tested by simulating a light control system where agents can access both virtual and real sensors/actuators through the smart resources developed. The results show that the use of real-environment interactive elements (smart resource artifacts) in agent-based simulations makes it possible to minimize the error between the simulated and the real system.

  12. Extending MAM5 Meta-Model and JaCalIVE Framework to Integrate Smart Devices from Real Environments

    PubMed Central

    2016-01-01

    This paper presents the extension of a meta-model (MAM5) and a framework based on the model (JaCalIVE) for developing intelligent virtual environments. The goal of this extension is to develop augmented mirror worlds that represent a real and a virtual world coupled, so that the virtual world not only reflects the real one, but also complements it. A new component called a smart resource artifact, which enables modelling and developing devices to access the real physical world, and a human-in-the-loop agent to place a human in the system have been included in the meta-model and framework. The proposed extension of MAM5 has been tested by simulating a light control system where agents can access both virtual and real sensors/actuators through the smart resources developed. The results show that the use of real-environment interactive elements (smart resource artifacts) in agent-based simulations makes it possible to minimize the error between the simulated and the real system. PMID:26926691

  13. LOOS: an extensible platform for the structural analysis of simulations.

    PubMed

    Romo, Tod D; Grossfield, Alan

    2009-01-01

    We have developed LOOS (Lightweight Object-Oriented Structure-analysis library) as an object-oriented library designed to facilitate the rapid development of tools for the structural analysis of simulations. LOOS supports the native file formats of most common simulation packages including AMBER, CHARMM, CNS, Gromacs, NAMD, Tinker, and X-PLOR. Encapsulation and polymorphism are used to simultaneously provide a stable interface to the programmer and make LOOS easily extensible. A rich atom selection language based on the C expression syntax is included as part of the library. LOOS enables students and casual programmer-scientists to rapidly write their own analytical tools in a compact and expressive manner resembling scripting. LOOS is written in C++ and makes extensive use of the Standard Template Library and Boost, and is freely available under the GNU General Public License (version 3). LOOS has been tested on Linux and MacOS X, but is written to be portable and should work on most Unix-based platforms.

  14. Testing in Support of Space Fission System Development and Qualification

    NASA Technical Reports Server (NTRS)

    Houts, Mike; Bragg-Sitton, Shannon; Garber, Anne; Godfrey, Tom; Martin, Jim; Pearson, Boise; Webster, Kenny

    2007-01-01

    Extensive data would be required for the qualification of a fission surface power (FSP) system. The strategy for qualifying a FSP system could have a significant programmatic impact. This paper explores potential options that could be used for qualifying FSP systems, including cost-effective means for obtaining required data. Three methods for obtaining qualification data are analysis, non-nuclear testing, and nuclear testing. It has been over 40 years since the US qualified a space reactor for launch. During that time, advances have been made related to all three methods. Perhaps the greatest advancement has occurred in the area of computational tools for design and analysis. Tools that have been developed, coupled with modern computers, would have a significant impact on a FSP qualification. This would be especially true for systems with materials and fuels operating well within temperature, irradiation damage, and burnup limits. The ability to perform highly realistic non-nuclear testing has also advanced throughout the past four decades. Instrumented thermal simulators were developed during the 1970s and 1980s to assist in the development, operation, and assessment of terrestrial fission systems. Instrumented thermal simulators optimized for assisting in the development, operation, and assessment of modern FSP systems have been under development (and utilized) since 1998. These thermal simulators enable heat from fission to be closely mimicked (axial power profile, radial power profile, temperature, heat flux, etc.) and extensive data to be taken from the core region. Both steady-state and transient operation can be tested. For transient testing, reactivity feedback is calculated (or measured in cold/warm criticals) based on reactor temperature and/or dimensional changes. Pin power during a transient is then calculated based on the reactivity feedback that would occur given measured values of temperature and/or dimensional change. In this way non-nuclear testing can be used to provide very realistic information related to nuclear operation. Non-nuclear testing can be used at all levels, including component, subsystem, and integrated system testing. Realistic non-nuclear testing is most useful for systems operating within known temperature, irradiation damage, and burnup capabilities.
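
    The transient-testing feedback loop described above can be sketched in a few lines (a schematic illustration with assumed coefficients, not a qualified reactor model): measured temperature yields reactivity through a feedback coefficient, and the commanded pin power of the thermal simulator is updated accordingly.

        alpha = -0.002   # assumed reactivity feedback coefficient, per K
        T_ref = 900.0    # reference core temperature, K
        power = 1.0      # normalized pin power commanded to the simulator
        T = 900.0        # measured core-block temperature, K
        for step in range(10):
            rho = alpha * (T - T_ref)      # reactivity from measured temperature
            power *= (1.0 + 0.5 * rho)     # crude power response to reactivity
            T += 50.0 * power - 45.0       # crude thermal response of core block
            print(f"step {step}: T = {T:6.1f} K, power = {power:5.3f}")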

  15. Simulating ungulate herbivory across forest landscapes: A browsing extension for LANDIS-II

    USGS Publications Warehouse

    DeJager, Nathan R.; Drohan, Patrick J.; Miranda, Brian M.; Sturtevant, Brian R.; Stout, Susan L.; Royo, Alejandro; Gustafson, Eric J.; Romanski, Mark C.

    2017-01-01

    Browsing ungulates alter forest productivity and vegetation succession through selective foraging on species that often dominate early succession. However, the long-term and large-scale effects of browsing on forest succession are not possible to project without the use of simulation models. To explore the effects of ungulates on succession in a spatially explicit manner, we developed a Browse Extension that simulates the effects of browsing ungulates on the growth and survival of plant species cohorts within the LANDIS-II spatially dynamic forest landscape simulation model framework. We demonstrate the capabilities of the new extension and explore the spatial effects of ungulates on forest composition and dynamics using two case studies. The first case study examined the long-term effects of persistently high white-tailed deer browsing rates in the northern hardwood forests of the Allegheny National Forest, USA. In the second case study, we incorporated a dynamic ungulate population model to simulate interactions between the moose population and boreal forest landscape of Isle Royale National Park, USA. In both model applications, browsing reduced total aboveground live biomass and caused shifts in forest composition. Simulations that included effects of browsing resulted in successional patterns that were more similar to those observed in the study regions compared to simulations that did not incorporate browsing effects. Further, model estimates of moose population density and available forage biomass were similar to previously published field estimates at Isle Royale and in other moose-boreal forest systems. Our simulations suggest that neglecting effects of browsing when modeling forest succession in ecosystems known to be influenced by ungulates may result in flawed predictions of aboveground biomass and tree species composition.
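
    A minimal sketch of the browse logic (an assumed form for illustration, not the Browse Extension source code): forage is removed from palatable young cohorts in preference order until the ungulate demand is met.

        cohorts = [  # (species, age, biomass, browse preference 0-1), toy values
            ("aspen", 5, 120.0, 0.9),
            ("maple", 8, 200.0, 0.6),
            ("spruce", 6, 150.0, 0.1),
        ]
        demand = 60.0    # assumed forage demand at this site, biomass units
        removed = {}
        for sp, age, biomass, pref in sorted(cohorts, key=lambda c: -c[3]):
            take = min(demand, 0.3 * biomass * pref)  # only a fraction is browsable
            removed[sp] = take
            demand -= take
            if demand <= 0:
                break
        print(removed)   # biomass removed per cohort; heavy removal slows growth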

  16. Mechanisms of electron acceptor utilization: Implications for simulating anaerobic biodegradation

    USGS Publications Warehouse

    Schreiber, M.E.; Carey, G.R.; Feinstein, D.T.; Bahr, J.M.

    2004-01-01

    Simulation of biodegradation reactions within a reactive transport framework requires information on mechanisms of terminal electron acceptor processes (TEAPs). In initial modeling efforts, TEAPs were approximated as occurring sequentially, with the highest energy-yielding electron acceptors (e.g. oxygen) consumed before those that yield less energy (e.g., sulfate). Within this framework in a steady state plume, sequential electron acceptor utilization would theoretically produce methane at an organic-rich source and Fe(II) further downgradient, resulting in a limited zone of Fe(II) and methane overlap. However, contaminant plumes often display much more extensive zones of overlapping Fe(II) and methane. The extensive overlap could be caused by several abiotic and biotic processes including vertical mixing of byproducts in long-screened monitoring wells, adsorption of Fe(II) onto aquifer solids, or microscale heterogeneity in Fe(III) concentrations. Alternatively, the overlap could be due to simultaneous utilization of terminal electron acceptors. Because biodegradation rates are controlled by TEAPs, evaluating the mechanisms of electron acceptor utilization is critical for improving prediction of contaminant mass losses due to biodegradation. Using BioRedox-MT3DMS, a three-dimensional, multi-species reactive transport code, we simulated the current configurations of a BTEX plume and TEAP zones at a petroleum-contaminated field site in Wisconsin. Simulation results suggest that BTEX mass loss due to biodegradation is greatest under oxygen-reducing conditions, with smaller but similar contributions to mass loss from biodegradation under Fe(III)-reducing, sulfate-reducing, and methanogenic conditions. Results of sensitivity calculations document that BTEX losses due to biodegradation are most sensitive to the age of the plume, while the shape of the BTEX plume is most sensitive to effective porosity and rate constants for biodegradation under Fe(III)-reducing and methanogenic conditions. Using this transport model, we had limited success in simulating overlap of redox products using reasonable ranges of parameters within a strictly sequential electron acceptor utilization framework. Simulation results indicate that overlap of redox products cannot be accurately simulated using the constructed model, suggesting either that Fe(III) reduction and methanogenesis are occurring simultaneously in the source area, or that heterogeneities in Fe(III) concentration and/or mineral type cause the observed overlap. Additional field, experimental, and modeling studies will be needed to address these questions. © 2004 Elsevier B.V. All rights reserved.
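
    The strictly sequential TEAP framework contrasted above is simple to state in code (a toy illustration with assumed 1:1 stoichiometry, not the BioRedox-MT3DMS kinetics): substrate is oxidized with the highest-energy electron acceptor still present in a cell.

        acceptors = [("O2", 2.0), ("Fe(III)", 5.0), ("SO4", 3.0)]  # mol available
        substrate = 8.0  # mol of BTEX-equivalents to oxidize in this cell
        for name, avail in acceptors:
            used = min(substrate, avail)   # consume in strict energy order
            substrate -= used
            print(f"{name}: consumed {used} mol")
            if substrate == 0:
                break
        if substrate > 0:
            print(f"methanogenesis degrades the remaining {substrate} mol")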

  17. OpenMM 4: A Reusable, Extensible, Hardware Independent Library for High Performance Molecular Simulation.

    PubMed

    Eastman, Peter; Friedrichs, Mark S; Chodera, John D; Radmer, Randall J; Bruns, Christopher M; Ku, Joy P; Beauchamp, Kyle A; Lane, Thomas J; Wang, Lee-Ping; Shukla, Diwakar; Tye, Tony; Houston, Mike; Stich, Timo; Klein, Christoph; Shirts, Michael R; Pande, Vijay S

    2013-01-08

    OpenMM is a software toolkit for performing molecular simulations on a range of high performance computing architectures. It is based on a layered architecture: the lower layers function as a reusable library that can be invoked by any application, while the upper layers form a complete environment for running molecular simulations. The library API hides all hardware-specific dependencies and optimizations from the users and developers of simulation programs: they can be run without modification on any hardware on which the API has been implemented. The current implementations of OpenMM include support for graphics processing units using the OpenCL and CUDA frameworks. In addition, OpenMM was designed to be extensible, so new hardware architectures can be accommodated and new functionality (e.g., energy terms and integrators) can be easily added.
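
    A minimal example of the layered design, written against the Python application layer of recent OpenMM releases (module paths have changed since version 4; the input file name is an assumption). The same script runs unmodified on CPU, CUDA, or OpenCL platforms because the API hides hardware specifics:

        from openmm import app, unit, LangevinMiddleIntegrator

        pdb = app.PDBFile("input.pdb")                     # assumed input structure
        ff = app.ForceField("amber14-all.xml", "amber14/tip3p.xml")
        system = ff.createSystem(pdb.topology,
                                 nonbondedMethod=app.PME,
                                 nonbondedCutoff=1.0 * unit.nanometer,
                                 constraints=app.HBonds)
        integrator = LangevinMiddleIntegrator(300 * unit.kelvin,
                                              1.0 / unit.picosecond,
                                              2.0 * unit.femtoseconds)
        sim = app.Simulation(pdb.topology, system, integrator)  # platform auto-chosen
        sim.context.setPositions(pdb.positions)
        sim.minimizeEnergy()
        sim.step(1000)   # 2 ps of dynamics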

  18. OpenMM 4: A Reusable, Extensible, Hardware Independent Library for High Performance Molecular Simulation

    PubMed Central

    Eastman, Peter; Friedrichs, Mark S.; Chodera, John D.; Radmer, Randall J.; Bruns, Christopher M.; Ku, Joy P.; Beauchamp, Kyle A.; Lane, Thomas J.; Wang, Lee-Ping; Shukla, Diwakar; Tye, Tony; Houston, Mike; Stich, Timo; Klein, Christoph; Shirts, Michael R.; Pande, Vijay S.

    2012-01-01

    OpenMM is a software toolkit for performing molecular simulations on a range of high performance computing architectures. It is based on a layered architecture: the lower layers function as a reusable library that can be invoked by any application, while the upper layers form a complete environment for running molecular simulations. The library API hides all hardware-specific dependencies and optimizations from the users and developers of simulation programs: they can be run without modification on any hardware on which the API has been implemented. The current implementations of OpenMM include support for graphics processing units using the OpenCL and CUDA frameworks. In addition, OpenMM was designed to be extensible, so new hardware architectures can be accommodated and new functionality (e.g., energy terms and integrators) can be easily added. PMID:23316124

  19. Experimental validation of ultrasonic NDE simulation software

    NASA Astrophysics Data System (ADS)

    Dib, Gerges; Larche, Michael; Diaz, Aaron A.; Crawford, Susan L.; Prowant, Matthew S.; Anderson, Michael T.

    2016-02-01

    Computer modeling and simulation is becoming an essential tool for transducer design and insight into ultrasonic nondestructive evaluation (UT-NDE). As the popularity of simulation tools for UT-NDE increases, it becomes important to assess their reliability to model acoustic responses from defects in operating components and provide information that is consistent with in-field inspection data. This includes information about the detectability of different defect types for a given UT probe. Recently, a cooperative program between the Electric Power Research Institute and the U.S. Nuclear Regulatory Commission was established to validate numerical modeling software commonly used for simulating UT-NDE of nuclear power plant components. In the first phase of this program, extensive experimental UT measurements were conducted on machined notches with varying depth, length, and orientation in stainless steel plates. Then, the notches were modeled in CIVA, a semi-analytical NDE simulation platform developed by the French Commissariat à l'Énergie Atomique, and their responses were compared with the experimental measurements. Discrepancies between experimental and simulation results are due either to improper inputs to the simulation model or to incorrect approximations and assumptions in the numerical models. To address the former, a variation study was conducted on the different parameters that are required as inputs for the model, specifically the specimen and transducer properties. Then, the ability of simulations to give accurate predictions regarding the detectability of the different defects was demonstrated. This includes the results in terms of the variations in defect amplitude indications, and the ratios between tip-diffracted and specular signal amplitudes.

  20. 78 FR 21159 - Additional Requirements for Special Dipping and Coating Operations (Dip Tanks); Extension of the...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-09

    ...] Additional Requirements for Special Dipping and Coating Operations (Dip Tanks); Extension of the Office of Management and Budget's Approval of the Information Collection (Paperwork) Requirement AGENCY: Occupational... requirement specified in its Standard on Dipping and Coating Operations (Dip Tanks) (29 CFR 1910.126(g)(4...

  1. Comparative study on gene set and pathway topology-based enrichment methods.

    PubMed

    Bayerlová, Michaela; Jung, Klaus; Kramer, Frank; Klemm, Florian; Bleckmann, Annalen; Beißbarth, Tim

    2015-10-22

    Enrichment analysis is a popular approach to identify pathways or sets of genes which are significantly enriched in the context of differentially expressed genes. The traditional gene set enrichment approach considers a pathway as a simple gene list, disregarding any knowledge of gene or protein interactions. In contrast, the newer group of so-called pathway topology-based methods integrates the topological structure of a pathway into the analysis. We comparatively investigated gene set and pathway topology-based enrichment approaches, considering three gene set and four topological methods. These methods were compared in two extensive simulation studies and on a benchmark of 36 real datasets, providing the same pathway input data for all methods. In the benchmark data analysis, both types of methods showed a comparable ability to detect enriched pathways. The first simulation study was conducted with KEGG pathways, which show considerable gene overlaps with each other. In this study with original KEGG pathways, none of the topology-based methods outperformed the gene set approach. Therefore, a second simulation study was performed on non-overlapping pathways created from unique gene IDs. Here, methods accounting for pathway topology reached higher accuracy than the gene set methods; however, their sensitivity was lower. We conducted one of the first comprehensive comparative works evaluating gene set against pathway topology-based enrichment methods. The topological methods showed better performance in the simulation scenarios with non-overlapping pathways; however, they were not conclusively better in the other scenarios. This suggests that the simple gene set approach might be sufficient to detect an enriched pathway under realistic circumstances. Nevertheless, more extensive studies and further benchmark data are needed to systematically evaluate these methods and to assess what gain and cost pathway topology information introduces into enrichment analysis. Both types of methods for enrichment analysis require further improvements in order to deal with the problem of pathway overlaps.
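
    The simplest gene set variant referred to above, over-representation analysis, reduces to a one-sided hypergeometric test that ignores topology entirely (toy numbers, not from the study):

        from scipy.stats import hypergeom

        M = 20000   # genes in the universe
        n = 150     # genes annotated to the pathway
        N = 500     # differentially expressed (DE) genes
        k = 12      # DE genes that fall in the pathway
        p = hypergeom.sf(k - 1, M, n, N)   # P(X >= k) under random draws
        print(f"enrichment p-value: {p:.3g}")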

  2. A focused ultrasound treatment system for moving targets (part I): generic system design and in-silico first-stage evaluation.

    PubMed

    Schwenke, Michael; Strehlow, Jan; Demedts, Daniel; Haase, Sabrina; Barrios Romero, Diego; Rothlübbers, Sven; von Dresky, Caroline; Zidowitz, Stephan; Georgii, Joachim; Mihcin, Senay; Bezzi, Mario; Tanner, Christine; Sat, Giora; Levy, Yoav; Jenne, Jürgen; Günther, Matthias; Melzer, Andreas; Preusser, Tobias

    2017-01-01

    Focused ultrasound (FUS) is entering clinical routine as a treatment option. Currently, no clinically available FUS treatment system features automated respiratory motion compensation. The required quality standards make developing such a system challenging. A novel FUS treatment system with motion compensation is described, developed with the goal of clinical use. The system comprises a clinically available MR device and FUS transducer system. The controller is very generic and could use any suitable MR or FUS device. MR image sequences (echo planar imaging) are acquired for both motion observation and thermometry. Based on anatomical feature tracking, motion predictions are estimated to compensate for processing delays. FUS control parameters are computed repeatedly and sent to the hardware to steer the focus to the (estimated) target position. All involved calculations produce individually known errors, yet their impact on therapy outcome is unclear. This is solved by defining an intuitive quality measure that compares the achieved temperature to the static scenario, resulting in an overall efficiency with respect to temperature rise. To allow for extensive testing of the system over wide ranges of parameters and algorithmic choices, we replace the actual MR and FUS devices by a virtual system. It emulates the hardware and, using numerical simulations of FUS during motion, predicts the local temperature rise in the tissue resulting from the controls it receives. With a clinically available monitoring image rate of 6.67 Hz and 20 FUS control updates per second, normal respiratory motion is estimated to be compensable with an estimated efficiency of 80%. This reduces to about 70% for motion scaled by 1.5. Extensive testing (6347 simulated sonications) over wide ranges of parameters shows that the main source of error is the temporal motion prediction. A history-based motion prediction method performs better than a simple linear extrapolator. The estimated efficiency of the new treatment system is already suited for clinical applications. The simulation-based in-silico testing as a first-stage validation reduces the efforts of real-world testing. Due to the extensible modular design, the described approach might lead to faster translations from research to clinical practice.
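
    The delay-compensation idea can be illustrated with the simple linear extrapolator mentioned above (the paper's history-based predictor performed better; numbers are assumptions):

        def predict_linear(t0, p0, t1, p1, delay):
            """Extrapolate tracked positions (t0, p0), (t1, p1) to t1 + delay."""
            v = (p1 - p0) / (t1 - t0)   # estimated target velocity, mm/s
            return p1 + v * delay

        # imaging at 6.67 Hz gives ~150 ms between observations; assume the
        # processing and steering pipeline adds a 100 ms delay to compensate
        print(predict_linear(0.00, 10.0, 0.15, 11.2, 0.10))  # predicted mm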

  3. Human Factors in Training - Space Flight Resource Management Training

    NASA Technical Reports Server (NTRS)

    Bryne, Vicky; Connell, Erin; Barshi, Immanuel; Arsintescu, L.

    2009-01-01

    Accidents and incidents show that high workload-induced stress and poor teamwork skills lead to performance decrements and errors. Research on teamwork shows that effective teams are able to adapt to stressful situations, and to reduce workload by using successful strategies for communication and decision making, and through dynamic redistribution of tasks among team members. Furthermore, superior teams are able to recognize signs and symptoms of workload-induced stress early, and to adapt their coordination and communication strategies to the high workload or stress conditions. Mission Control Center (MCC) teams often face demanding situations in which they must operate as an effective team to solve problems with crew and vehicle during on-orbit operations. To be successful as a team, flight controllers (FCers) must learn effective teamwork strategies. Such strategies are the focus of Space Flight Resource Management (SFRM) training. SFRM training in MOD has been structured to include some classroom presentations of basic concepts and case studies, with the assumption that skill development happens in mission simulation. Integrated mission simulations do provide excellent opportunities for FCers to practice teamwork, but also require extensive technical knowledge of vehicle systems, mission operations, and crew actions. Such technical knowledge requires lengthy training. When SFRM training is relegated to integrated simulations, FCers can only practice SFRM after they have already mastered the technical knowledge necessary for these simulations. Given the centrality of teamwork to the success of MCC, holding SFRM training until late in the flow is inefficient. But to be able to train SFRM earlier in the flow, the training cannot rely on extensive mission-specific technical knowledge. Hence, the need for a generic SFRM training framework that would allow FCers to develop basic teamwork skills which are mission relevant, but without the required mission knowledge. Work on SFRM training has been conducted in collaboration with the Expedition Vehicle Division at the Mission Operations Directorate (MOD) and with United Space Alliance (USA) which provides training to Flight Controllers. The space flight resource management training work is part of the Human Factors in Training Directed Research Project (DRP) of the Space Human Factors Engineering (SHFE) Project under the Space Human Factors and Habitability (SHFH) Element of the Human Research Program (HRP). Human factors researchers at the Ames Research Center have been investigating team work and distributed decision making processes to develop a generic SFRM training framework for flight controllers. The work proposed for FY10 continues to build on this strong collaboration with MOD and the USA Training Group as well as previous research in relevant domains such as aviation. In FY10, the work focuses on documenting and analyzing problem solving strategies and decision making processes used in MCC by experienced FCers.

  4. Computationally efficient simulation of unsteady aerodynamics using POD on the fly

    NASA Astrophysics Data System (ADS)

    Moreno-Ramos, Ruben; Vega, José M.; Varas, Fernando

    2016-12-01

    Modern industrial aircraft design requires a large amount of sufficiently accurate aerodynamic and aeroelastic simulations. Current computational fluid dynamics (CFD) solvers with aeroelastic capabilities, such as the NASA URANS unstructured solver FUN3D, require very large computational resources. Since a very large amount of simulation is necessary, the CFD cost is simply unaffordable in an industrial production environment and must be significantly reduced. Thus, a less expensive, yet sufficiently precise solver is strongly needed. An opportunity to approach this goal could follow some recent results (Terragni and Vega 2014 SIAM J. Appl. Dyn. Syst. 13 330-65; Rapun et al 2015 Int. J. Numer. Meth. Eng. 104 844-68) on an adaptive reduced order model that combines 'on the fly' a standard numerical solver (to compute some representative snapshots), proper orthogonal decomposition (POD) (to extract modes from the snapshots), Galerkin projection (onto the set of POD modes), and several additional ingredients such as projecting the equations using a limited amount of points and fairly generic mode libraries. When applied to the complex Ginzburg-Landau equation, the method produces acceleration factors (compared with standard numerical solvers) of the order of 20 and 300 in one and two space dimensions, respectively. Unfortunately, the extension of the method to unsteady, compressible flows around deformable geometries requires new approaches to deal with deformable meshes, high Reynolds numbers, and compressibility. A first step in this direction is presented considering the unsteady compressible, two-dimensional flow around an oscillating airfoil using a CFD solver in a rigidly moving mesh. POD on the Fly gives results whose accuracy is comparable to that of the CFD solver used to compute the snapshots.
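
    The POD step at the heart of this approach is compact enough to sketch (toy data; the cited work adds adaptive snapshot computation and Galerkin projection of the governing equations):

        import numpy as np

        snapshots = np.random.rand(1000, 40)     # 40 snapshots of a 1000-DOF field
        U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
        # keep enough modes to capture 99.9% of the snapshot energy
        r = np.searchsorted(np.cumsum(s**2) / np.sum(s**2), 0.999) + 1
        modes = U[:, :r]                         # retained POD basis
        state = snapshots[:, -1]
        amplitudes = modes.T @ state             # projection coefficients
        reconstruction = modes @ amplitudes
        print(r, np.linalg.norm(state - reconstruction) / np.linalg.norm(state))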

  5. Simulated Radioscapholunate Fusion Alters Carpal Kinematics While Preserving Dart-Thrower's Motion

    PubMed Central

    Calfee, Ryan P.; Leventhal, Evan L.; Wilkerson, Jim; Moore, Douglas C.; Akelman, Edward; Crisco, Joseph J.

    2014-01-01

    Purpose Midcarpal degeneration is well documented after radioscapholunate fusion. This study tested the hypothesis that radioscapholunate fusion alters the kinematic behavior of the remaining lunotriquetral and midcarpal joints, with specific focus on the dart-thrower's motion. Methods Simulated radioscapholunate fusions were performed on 6 cadaveric wrists in an anatomically neutral posture. Two 0.060-in. carbon fiber pins were placed from proximal to distal across the radiolunate and radioscaphoid joints, respectively. The wrists were passively positioned in a custom jig toward a full range of motion along the orthogonal axes as well as oblique motions, with additional intermediate positions along the dart-thrower's path. Using a computed tomography-based markerless bone registration technique, each carpal bone's three-dimensional rotation was defined as a function of wrist flexion/extension from the pinned neutral position. Kinematic data was analyzed against data collected on the same wrist prior to fixation using hierarchical linear regression analysis and paired Student's t-tests. Results After simulated fusion, wrist motion was restricted to an average flexion-extension arc of 48°, reduced from 77°, and radial-ulnar deviation arc of 19°, reduced from 33°. The remaining motion was maximally preserved along the dart-thrower's path from radial-extension toward ulnar-flexion. The simulated fusion significantly increased rotation through the scaphotrapezial joint, scaphocapitate joint, triquetrohamate joint, and lunotriquetral joint. For example, in the pinned wrist, the rotation of the hamate relative to the triquetrum increased 85%. Therefore, during every 10° of total wrist motion, the hamate rotated an average of nearly 8° relative to the triquetrum after pinning versus 4° in the normal state. Conclusions Simulated radioscapholunate fusion altered midcarpal and lunotriquetral kinematics. The increased rotations across these remaining joints provide one potential explanation for midcarpal degeneration after radioscapholunate fusion. Additionally, this fusion model confirms the dart-thrower's hypothesis, as wrist motion after simulated radioscapholunate fusion was primarily preserved from radial-extension toward ulnar-flexion. PMID:18406953

  6. Computing the total atmospheric refraction for real-time optical imaging sensor simulation

    NASA Astrophysics Data System (ADS)

    Olson, Richard F.

    2015-05-01

    Fast and accurate computation of light path deviation due to atmospheric refraction is an important requirement for real-time simulation of optical imaging sensor systems. A large body of existing literature covers various methods for application of Snell's Law to the light path ray tracing problem. This paper provides a discussion of the adaptation to real-time simulation of atmospheric refraction ray tracing techniques used in mid-1980s LOWTRAN releases. The refraction ray trace algorithm published in a LOWTRAN-6 technical report by Kneizys et al. has been coded in MATLAB for development, and in C-language for simulation use. To this published algorithm we have added tuning parameters for variable path segment lengths, and extensions for Earth-grazing and exoatmospheric "near Earth" ray paths. Model atmosphere properties used to exercise the refraction algorithm were obtained from tables published in another LOWTRAN-6 related report. The LOWTRAN-6 based refraction model is applicable to atmospheric propagation at wavelengths in the IR and visible bands of the electromagnetic spectrum. It has been used during the past two years by engineers at the U.S. Army Aviation and Missile Research, Development and Engineering Center (AMRDEC) in support of several advanced imaging sensor simulations. Recently, a faster (but sufficiently accurate) method using Gauss-Chebyshev Quadrature integration for evaluating the refraction integral was adopted.
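
    The core Snell's-law stepping that such a trace performs at each layer boundary can be sketched as follows (a flat-layer simplification with an assumed toy index profile, not the LOWTRAN spherical-shell algorithm):

        import math

        def trace_flat_layers(theta0_deg, n_profile):
            """Bend a ray through flat layers with indices n_profile (top down)."""
            theta = math.radians(theta0_deg)   # angle from the layer normal
            for n1, n2 in zip(n_profile, n_profile[1:]):
                s = n1 * math.sin(theta) / n2  # Snell's law at the interface
                theta = math.asin(min(1.0, s)) # guard against grazing incidence
            return math.degrees(theta)

        # refractive index increasing toward the ground (assumed toy profile)
        print(trace_flat_layers(45.0, [1.00000, 1.00010, 1.00020, 1.00028]))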

  7. Simulation of lung alveolar epithelial wound healing in vitro

    PubMed Central

    Kim, Sean H. J.; Matthay, Michael A.; Mostov, Keith; Hunt, C. Anthony

    2010-01-01

    The mechanisms that enable and regulate alveolar type II (AT II) epithelial cell wound healing in vitro and in vivo remain largely unknown and need further elucidation. We used an in silico AT II cell-mimetic analogue to explore and better understand plausible wound healing mechanisms for two conditions: cyst repair in three-dimensional cultures and monolayer wound healing. Starting with the analogue that validated for key features of AT II cystogenesis in vitro, we devised an additional cell rearrangement action enabling cyst repair. Monolayer repair was enabled by providing ‘cells’ a control mechanism to switch automatically to a repair mode in the presence of a distress signal. In cyst wound simulations, the revised analogue closed wounds by adhering to essentially the same axioms available for alveolar-like cystogenesis. In silico cell proliferation was not needed. The analogue recovered within a few simulation cycles but required a longer recovery time for larger or multiple wounds. In simulated monolayer wound repair, diffusive factor-mediated ‘cell’ migration led to repair patterns comparable to those of in vitro cultures exposed to different growth factors. Simulations predicted directional cell locomotion to be critical for successful in vitro wound repair. We anticipate that with further use and refinement, the methods used will develop as a rigorous, extensible means of unravelling mechanisms of lung alveolar repair and regeneration. PMID:20236957

  8. Parallel computing method for simulating hydrological processesof large rivers under climate change

    NASA Astrophysics Data System (ADS)

    Wang, H.; Chen, Y.

    2016-12-01

    Climate change is one of the most widely recognized global environmental problems. It has altered the temporal and spatial distribution of watershed hydrological processes, especially in the world's large rivers. Simulating watershed hydrological processes with physically based distributed hydrological models can yield better results than lumped models. However, such simulation involves a very large amount of computation, especially for large rivers, and thus demands computing resources that may not be steadily available to researchers, or are available only at high expense; this has seriously restricted research and application. Existing parallel methods mostly parallelize the computation in the space and time dimensions, calculating the natural features of a distributed hydrological model grid by grid (unit by unit, basin by basin) from upstream to downstream. This article proposes a high-performance computing method for hydrological process simulation with a high speedup ratio and parallel efficiency. It combines the temporal and spatial runoff characteristics of distributed hydrological models with distributed data storage, in-memory databases, distributed computing, and parallel computing based on computing power units. The method is highly adaptable and extensible: it makes full use of the available computing and storage resources even when those resources are limited, and its computing efficiency improves linearly as computing resources are added. The method can satisfy the parallel computing requirements of hydrological process simulation in small, medium, and large rivers.
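
    A minimal sketch of the parallel pattern under simple assumptions (mutually independent sub-basins for one time step are dispatched to a process pool, then combined for downstream routing; the runoff physics is a stand-in):

        from multiprocessing import Pool

        def simulate_subbasin(args):
            basin_id, rainfall = args
            runoff = 0.6 * rainfall     # stand-in for the real runoff physics
            return basin_id, runoff

        if __name__ == "__main__":
            tasks = [(i, 10.0 + i) for i in range(8)]   # toy forcing per basin
            with Pool(4) as pool:
                results = dict(pool.map(simulate_subbasin, tasks))
            total_inflow = sum(results.values())  # input to downstream routing
            print(results, total_inflow)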

  9. Methodology for the systems engineering process. Volume 3: Operational availability

    NASA Technical Reports Server (NTRS)

    Nelson, J. H.

    1972-01-01

    A detailed description and explanation of the operational availability parameter is presented. The fundamental mathematical basis for operational availability is developed, and its relationship to a system's overall performance effectiveness is illustrated within the context of identifying specific availability requirements. Thus, in attempting to provide a general methodology for treating both hypothetical and existing availability requirements, the concept of an availability state, in conjunction with the more conventional probability-time capability, is investigated. In this respect, emphasis is focused upon a balanced analytical and pragmatic treatment of operational availability within the system design process. For example, several applications of operational availability to typical aerospace systems are presented, encompassing the techniques of Monte Carlo simulation, system performance availability trade-off studies, analytical modeling of specific scenarios, as well as the determination of launch-on-time probabilities. Finally, an extensive bibliography is provided to indicate further levels of depth and detail of the operational availability parameter.
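
    One of the techniques listed above, Monte Carlo simulation of operational availability, reduces to a short sketch (toy parameters; alternating exponential up and down times should recover A ≈ MTBF / (MTBF + MTTR)):

        import random

        def simulate_availability(mtbf, mttr, horizon, trials=2000):
            up_total = 0.0
            for _ in range(trials):
                t, up = 0.0, 0.0
                while t < horizon:
                    dt_up = random.expovariate(1.0 / mtbf)   # time to failure
                    up += min(dt_up, horizon - t)            # clip at horizon
                    t += dt_up + random.expovariate(1.0 / mttr)  # plus repair
                up_total += up / horizon
            return up_total / trials

        print(simulate_availability(mtbf=100.0, mttr=5.0, horizon=1000.0))
        # analytic check: 100 / 105 ≈ 0.952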

  10. Grown organic matter as a fuel raw material resource

    NASA Technical Reports Server (NTRS)

    Roller, W. L.; Keener, H. M.; Kline, R. D.; Mederski, H. J.; Curry, R. B.

    1975-01-01

    An extensive search was made on biomass production from the standpoint of climatic zones, water, nutrients, costs and energy requirements for many species. No exotic species were uncovered that gave hope for a bonanza of biomass production under culture, location, and management markedly different from those of existing agricultural concepts. A simulation analysis of biomass production was carried out for six species using conventional production methods, including their production costs and energy requirements. These estimates were compared with data on food, fiber, and feed production. The alternative possibility of using residues from food, feed, or lumber was evaluated. It was concluded that great doubt must be cast on the feasibility of producing grown organic matter for fuel, in competition with food, feed, or fiber. The feasibility of collecting residues may be nearer, but the competition for the residues for return to the soil or cellulosic production is formidable.

  11. Internet stream synchronization using Concord

    NASA Astrophysics Data System (ADS)

    Sreenan, Cormac J.; Narendran, B.; Agrawal, Prathima; Shivakumar, Narayanan

    1996-03-01

    Using packet networks to transport multimedia introduces delay variations within and across streams, necessitating synchronization at the receiver. This requires stream data to be buffered prior to presentation, which also increases its total end-to-end delay. Concord recognizes that applications may wish to influence the underlying synchronization policy in terms of its effect on quality of service. It provides a single framework for synchronization within and across streams and employs an application-specific tradeoff between packet losses, delay and inter-stream skew. We present a new predictive approach for synchronization and a selection of results from an extensive evaluation of Concord for use in the Internet. A trace-driven simulator is used, allowing a direct comparison with alternative approaches. We demonstrate that Concord can operate with lower maximum delay and less variation in total end-to-end delay, which in turn can allow receiver buffer requirements to be reduced.
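
    The loss/delay tradeoff at the heart of such policies can be illustrated in a few lines (an illustration only, not the Concord algorithm; delays are made-up trace values): choosing the playout buffer delay as a percentile of the observed packet-delay distribution trades late-packet loss against added delay.

        delays_ms = [38, 41, 40, 55, 39, 90, 42, 44, 47, 43, 41, 120, 45, 40]

        def playout_delay(delays, accept_loss):
            """Smallest buffer delay leaving at most accept_loss packets late."""
            idx = min(len(delays) - 1, int((1.0 - accept_loss) * len(delays)))
            return sorted(delays)[idx]

        for loss in (0.0, 0.05, 0.20):
            print(f"accepted loss {loss:4.0%} -> buffer "
                  f"{playout_delay(delays_ms, loss)} ms")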

  12. An Overview of the GIS Weasel

    USGS Publications Warehouse

    Viger, Roland J.

    2008-01-01

    This fact sheet provides a high-level description of the GIS Weasel, a software system designed to aid users in preparing spatial information as input to lumped and distributed parameter environmental simulation models (ESMs). The GIS Weasel provides geographic information system (GIS) tools to help create maps of geographic features relevant to the application of a user?s ESM and to generate parameters from those maps. The operation of the GIS Weasel does not require a user to be a GIS expert, only that a user has an understanding of the spatial information requirements of the model. The GIS Weasel software system provides a GIS-based graphical user interface (GUI), C programming language executables, and general utility scripts. The software will run on any computing platform where ArcInfo Workstation (version 8.1 or later) and the GRID extension are accessible. The user controls the GIS Weasel by interacting with menus, maps, and tables.

  13. Using Magnetic Field Gradients to Simulate Variable Gravity in Fluids and Materials Experiments

    NASA Technical Reports Server (NTRS)

    Ramachandran, Narayanan

    2006-01-01

    Fluid flow due to a gravitational field is caused by sedimentation, thermal buoyancy, or solutal buoyancy induced convection. During crystal growth, for example, these flows are undesirable and can lead to crystal imperfections. While crystallization in microgravity can approach diffusion limited growth conditions (no convection), terrestrially strong magnetic fields can be used to control fluid flow and sedimentation effects. In this work, a theory is presented on the stability of solutal convection of a magnetized fluid (weakly paramagnetic) in the presence of a magnetic field. The requirements for stability are developed and compared to experiments performed within the bore of a superconducting magnet. The theoretical predictions are in good agreement with the experiments. The technique can also be extended to study artificial-gravity requirements for long-duration exploration missions. Discussion of this application with preliminary experiments and application of the technique to crystal growth will be provided.
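
    The body force being exploited can be put in numbers: a weakly magnetic fluid in a field gradient feels a Kelvin force per unit volume f = (χ/μ0) B dB/dz, which balances gravity in magnitude when f = ρg (values below are for water, a diamagnetic fluid):

        mu0 = 4e-7 * 3.141592653589793   # vacuum permeability, T·m/A
        chi = -9.0e-6                    # volume susceptibility of water
        rho = 1000.0                     # density, kg/m^3
        g = 9.81                         # m/s^2

        # field-gradient product whose Kelvin force matches the fluid's weight
        BdBdz = mu0 * rho * g / abs(chi)
        print(f"balancing gravity requires B*dB/dz ≈ {BdBdz:.0f} T^2/m")  # ~1370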

  14. Warped linear mixed models for the genetic analysis of transformed phenotypes

    PubMed Central

    Fusi, Nicolo; Lippert, Christoph; Lawrence, Neil D.; Stegle, Oliver

    2014-01-01

    Linear mixed models (LMMs) are a powerful and established tool for studying genotype–phenotype relationships. A limitation of the LMM is that the model assumes Gaussian distributed residuals, a requirement that rarely holds in practice. Violations of this assumption can lead to false conclusions and loss in power. To mitigate this problem, it is common practice to pre-process the phenotypic values to make them as Gaussian as possible, for instance by applying logarithmic or other nonlinear transformations. Unfortunately, different phenotypes require different transformations, and choosing an appropriate transformation is challenging and subjective. Here we present an extension of the LMM that estimates an optimal transformation from the observed data. In simulations and applications to real data from human, mouse and yeast, we show that using transformations inferred by our model increases power in genome-wide association studies and increases the accuracy of heritability estimation and phenotype prediction. PMID:25234577

  15. Warped linear mixed models for the genetic analysis of transformed phenotypes.

    PubMed

    Fusi, Nicolo; Lippert, Christoph; Lawrence, Neil D; Stegle, Oliver

    2014-09-19

    Linear mixed models (LMMs) are a powerful and established tool for studying genotype-phenotype relationships. A limitation of the LMM is that the model assumes Gaussian distributed residuals, a requirement that rarely holds in practice. Violations of this assumption can lead to false conclusions and loss in power. To mitigate this problem, it is common practice to pre-process the phenotypic values to make them as Gaussian as possible, for instance by applying logarithmic or other nonlinear transformations. Unfortunately, different phenotypes require different transformations, and choosing an appropriate transformation is challenging and subjective. Here we present an extension of the LMM that estimates an optimal transformation from the observed data. In simulations and applications to real data from human, mouse and yeast, we show that using transformations inferred by our model increases power in genome-wide association studies and increases the accuracy of heritability estimation and phenotype prediction.
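
    A fixed-form stand-in for the learned warping described above is a Box-Cox transformation fitted by maximum likelihood (the paper's method instead infers a flexible transformation jointly with the LMM; data here are synthetic):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        phenotype = rng.lognormal(mean=1.0, sigma=0.5, size=500)  # skewed trait
        transformed, lam = stats.boxcox(phenotype)                # fit lambda by ML
        print(f"fitted lambda = {lam:.2f}, skewness "
              f"{stats.skew(phenotype):.2f} -> {stats.skew(transformed):.2f}")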

  16. Reinventing atomic magnetic simulations with spin-orbit coupling

    DOE PAGES

    Perera, Meewanage Dilina N.; Eisenbach, Markus; Nicholson, Don M.; ...

    2016-02-10

    We propose a powerful extension to the combined molecular and spin dynamics method that fully captures the coupling between the atomic and spin subsystems via spin-orbit interactions. Moreover, the foundation of this method lies in the inclusion of the local magnetic anisotropies that arise as a consequence of the lattice symmetry breaking due to phonons or crystallographic defects. By using canonical simulations of bcc iron with the system coupled to a phonon heat bath, we show that our extension enables the previously unachievable angular momentum exchange between the atomic and spin degrees of freedom.

  17. Approximation of Engine Casing Temperature Constraints for Casing Mounted Electronics

    NASA Technical Reports Server (NTRS)

    Kratz, Jonathan L.; Culley, Dennis E.; Chapman, Jeffryes W.

    2017-01-01

    The performance of propulsion engine systems is sensitive to weight and volume considerations. This can severely constrain the configuration and complexity of the control system hardware. Distributed Engine Control technology is a response to these concerns by providing more flexibility in designing the control system, and by extension, more functionality leading to higher performing engine systems. Consequently, there can be a weight benefit to mounting modular electronic hardware on the engine core casing in a high temperature environment. This paper attempts to quantify the in-flight temperature constraints for engine casing mounted electronics. In addition, an attempt is made at studying heat soak back effects. The Commercial Modular Aero Propulsion System Simulation 40k (C-MAPSS40k) software is leveraged with real flight data as the inputs to the simulation. A two-dimensional (2-D) heat transfer model is integrated with the engine simulation to approximate the temperature along the length of the engine casing. This modification to the existing C-MAPSS40k software will provide tools and methodologies to develop a better understanding of the requirements for the embedded electronics hardware in future engine systems. Results of the simulations are presented and their implications on temperature constraints for engine casing mounted electronics is discussed.
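
    A reduced sketch of the casing thermal model (1-D explicit conduction along the casing rather than the paper's 2-D model; all material and boundary values are assumptions): casing temperature relaxes toward an imposed gas-path profile.

        import numpy as np

        n, dx, dt = 50, 0.02, 0.01            # nodes, node spacing (m), step (s)
        alpha = 4e-6                          # assumed thermal diffusivity, m^2/s
        h_eff = 0.5                           # assumed convective coupling, 1/s
        T = np.full(n, 300.0)                 # initial casing temperature, K
        T_gas = np.linspace(350.0, 800.0, n)  # assumed gas-path profile, K

        for _ in range(20000):                # 200 s of simulated heat soak
            lap = np.zeros(n)
            lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
            T += dt * (alpha * lap + h_eff * (T_gas - T))
        print(T.min(), T.max())               # envelope for mounted electronics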

  18. Approximation of Engine Casing Temperature Constraints for Casing Mounted Electronics

    NASA Technical Reports Server (NTRS)

    Kratz, Jonathan; Culley, Dennis; Chapman, Jeffryes

    2016-01-01

    The performance of propulsion engine systems is sensitive to weight and volume considerations. This can severely constrain the configuration and complexity of the control system hardware. Distributed Engine Control technology is a response to these concerns by providing more flexibility in designing the control system, and by extension, more functionality leading to higher performing engine systems. Consequently, there can be a weight benefit to mounting modular electronic hardware on the engine core casing in a high temperature environment. This paper attempts to quantify the in-flight temperature constraints for engine casing mounted electronics. In addition, an attempt is made at studying heat soak back effects. The Commercial Modular Aero Propulsion System Simulation 40k (C-MAPSS40k) software is leveraged with real flight data as the inputs to the simulation. A two-dimensional (2-D) heat transfer model is integrated with the engine simulation to approximate the temperature along the length of the engine casing. This modification to the existing C-MAPSS40k software will provide tools and methodologies to develop a better understanding of the requirements for the embedded electronics hardware in future engine systems. Results of the simulations are presented and their implications on temperature constraints for engine casing mounted electronics is discussed.

  19. Deep Part Load Flow Analysis in a Francis Model turbine by means of two-phase unsteady flow simulations

    NASA Astrophysics Data System (ADS)

    Conrad, Philipp; Weber, Wilhelm; Jung, Alexander

    2017-04-01

    Hydropower plants are indispensable to stabilize the grid by reacting quickly to changes of the energy demand. However, an extension of the operating range towards high and deep part load conditions without fatigue of the hydraulic components is desirable to increase their flexibility. In this paper a model-sized Francis turbine at low discharge operating conditions (Q/QBEP = 0.27) is analyzed by means of computational fluid dynamics (CFD). Unsteady two-phase simulations for two Thoma-number conditions are conducted. Stochastic pressure oscillations, observed on the test rig at low discharge, require sophisticated numerical models together with small time steps, large grid sizes and long simulation times to cope with these fluctuations. In this paper the BSL-EARSM model (Explicit Algebraic Reynolds Stress) was applied as a compromise between scale-resolving and two-equation turbulence models with respect to computational effort and accuracy. Simulation results are compared to pressure measurements showing reasonable agreement in resolving the frequency spectra and amplitude. Inner blade vortices were predicted successfully in shape and size. Surface streamlines in blade-to-blade view are presented, giving insights into the formation of the inner blade vortices. The acquired time-dependent pressure fields can be used for quasi-static structural analysis (FEA) for fatigue calculations in the future.

  20. POLYVIEW-MM: web-based platform for animation and analysis of molecular simulations

    PubMed Central

    Porollo, Aleksey; Meller, Jaroslaw

    2010-01-01

    Molecular simulations offer important mechanistic and functional clues in studies of proteins and other macromolecules. However, interpreting the results of such simulations increasingly requires tools that can combine information from multiple structural databases and other web resources, and provide highly integrated and versatile analysis tools. Here, we present a new web server that integrates high-quality animation of molecular motion (MM) with structural and functional analysis of macromolecules. The new tool, dubbed POLYVIEW-MM, enables animation of trajectories generated by molecular dynamics and related simulation techniques, as well as visualization of alternative conformers, e.g. obtained as a result of protein structure prediction methods or small molecule docking. To facilitate structural analysis, POLYVIEW-MM combines interactive view and analysis of conformational changes using Jmol and its tailored extensions, publication quality animation using PyMol, and customizable 2D summary plots that provide an overview of MM, e.g. in terms of changes in secondary structure states and relative solvent accessibility of individual residues in proteins. Furthermore, POLYVIEW-MM integrates visualization with various structural annotations, including automated mapping of known interaction sites from structural homologs, mapping of cavities and ligand binding sites, transmembrane regions and protein domains. URL: http://polyview.cchmc.org/conform.html. PMID:20504857

  1. Born-Oppenheimer ab initio QM/MM Molecular Dynamics Simulations of Enzyme Reactions

    PubMed Central

    Zhou, Yanzi; Wang, Shenglong; Li, Yongle; Zhang, Yingkai

    2016-01-01

    There are two key requirements for reliably simulating enzyme reactions: one is a reasonably accurate potential energy surface to describe the bond forming/breaking process as well as to adequately model the heterogeneous enzyme environment; the other is to perform extensive sampling since an enzyme system consists of at least thousands of atoms and its energy landscape is very complex. One attractive approach to meet both daunting tasks is Born-Oppenheimer ab initio QM/MM molecular dynamics simulation (aiQM/MM-MD) with umbrella sampling. In this chapter, we describe our recently developed pseudobond Q-Chem–Amber interface, which employs a combined electrostatic-mechanical embedding scheme with periodic boundary condition and the particle mesh Ewald method for long-range electrostatics interactions. In our implementation, Q-Chem and the sander module of Amber are combined at the source code level without using system calls, and all necessary data communications between QM and MM calculations are achieved via computer memory. We demonstrate the applicability of this pseudobond Q-Chem–Amber interface by presenting two examples, one reaction in aqueous solution and one enzyme reaction. Finally, we describe our established aiQM/MM-MD enzyme simulation protocol, which has been successfully applied to study more than a dozen enzymes. PMID:27498636
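
    The umbrella-sampling component can be illustrated on a toy system (a 1-D Metropolis walker with a harmonic bias; the actual work biases aiQM/MM-MD trajectories along a reaction coordinate):

        import math, random

        def U(x):                    # toy double-well potential
            return (x**2 - 1.0) ** 2

        def sample_window(center, k=20.0, beta=1.0, steps=20000):
            """Sample U restrained near `center` by a harmonic umbrella bias."""
            x, samples = center, []
            for _ in range(steps):
                xn = x + random.uniform(-0.1, 0.1)
                dE = (U(xn) + 0.5 * k * (xn - center) ** 2) - \
                     (U(x) + 0.5 * k * (x - center) ** 2)
                if dE <= 0 or random.random() < math.exp(-beta * dE):
                    x = xn
                samples.append(x)
            return samples

        windows = [sample_window(c) for c in (-1.0, -0.5, 0.0, 0.5, 1.0)]
        # overlapping window histograms would then be combined (e.g. via WHAM)
        print([sum(w) / len(w) for w in windows])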

  2. Simulation of seagrass bed mapping by satellite images based on the radiative transfer model

    NASA Astrophysics Data System (ADS)

    Sagawa, Tatsuyuki; Komatsu, Teruhisa

    2015-06-01

    Seagrass and seaweed beds play important roles in coastal marine ecosystems. They are food sources and habitats for many marine organisms, and influence the physical, chemical, and biological environment. They are sensitive to human impacts such as reclamation and pollution. Therefore, their management and preservation are necessary for a healthy coastal environment. Satellite remote sensing is a useful tool for mapping and monitoring seagrass beds. The efficiency of seagrass mapping, seagrass bed classification in particular, has been evaluated by mapping accuracy using an error matrix. However, mapping accuracies are influenced by coastal environments such as seawater transparency, bathymetry, and substrate type. Coastal management requires sufficient accuracy and an understanding of mapping limitations for monitoring coastal habitats including seagrass beds. Previous studies are mainly based on case studies in specific regions and seasons. Extensive data are required to generalise assessments of classification accuracy from case studies, which has proven difficult. This study aims to build a simulator based on a radiative transfer model to produce modelled satellite images and assess the visual detectability of seagrass beds under different transparencies and seagrass coverages, as well as to examine mapping limitations and classification accuracy. Our simulations led to the development of a model relating water transparency to mapping depth limits, and indicated the possibility of seagrass density mapping under certain ideal conditions. The results show that modelling satellite images is useful in evaluating the accuracy of classification and that establishing seagrass bed monitoring by remote sensing is a reliable tool.
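
    The shallow-water signal such a simulator models can be reduced to the standard two-flow approximation (a minimal Lyzenga-style form with assumed reflectances and attenuation, not the authors' full radiative transfer model): bottom reflectance decays toward the deep-water signal with depth.

        import math

        def shallow_reflectance(R_bottom, R_deep, Kd, z):
            """R(z) = R_deep + (R_bottom - R_deep) * exp(-2 * Kd * z)."""
            return R_deep + (R_bottom - R_deep) * math.exp(-2.0 * Kd * z)

        # seagrass (dark) vs sand (bright) separability shrinks with depth
        for z in (1.0, 3.0, 6.0):
            sand = shallow_reflectance(0.30, 0.02, Kd=0.15, z=z)
            grass = shallow_reflectance(0.06, 0.02, Kd=0.15, z=z)
            print(f"z = {z:3.1f} m: sand-grass contrast = {sand - grass:.3f}")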

  3. 76 FR 3062 - Extension of Comment Period on Change to the Reporting Date for Certain Data Elements Required...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-19

    ...-9252-4] Extension of Comment Period on Change to the Reporting Date for Certain Data Elements Required... Change to the Reporting Date for Certain Data Elements Required Under the Mandatory Reporting of... the Reporting Date for Certain Data Elements Required Under the Mandatory Reporting of Greenhouse...

  4. Optimization of GATE and PHITS Monte Carlo code parameters for uniform scanning proton beam based on simulation with FLUKA general-purpose code

    NASA Astrophysics Data System (ADS)

    Kurosu, Keita; Takashina, Masaaki; Koizumi, Masahiko; Das, Indra J.; Moskvin, Vadim P.

    2014-10-01

    Although the three general-purpose Monte Carlo (MC) simulation tools Geant4, FLUKA, and PHITS have been used extensively, differences in calculation results have been reported. The major causes are the implementation of the physical models, the preset value of the ionization potential, and the definition of the maximum step size. In order to achieve artifact-free MC simulation, an optimized parameter list for each simulation system is required. Several authors have already proposed optimized lists, but those studies were performed with simple systems such as a water phantom alone. Since particle beams undergo transport, interaction, and electromagnetic processes during beam delivery, establishing an optimized parameter list for the whole beam delivery system is of major importance. The purpose of this study was to determine the optimized parameter lists for GATE and PHITS using a computational model of a proton treatment nozzle. The simulation was performed with the broad scanning proton beam. The influences of the customizing parameters on the percentage depth dose (PDD) profile and the proton range were investigated by comparison with the results of FLUKA, and the optimal parameters were then determined. The PDD profile and the proton range obtained with our optimized parameter lists showed different characteristics from the results obtained with the simple system. This led to the conclusion that the physical model, particle transport mechanics, and different geometry-based descriptions need accurate customization when planning computational experiments for artifact-free MC simulation.
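
    A helper of the kind used in such comparisons, extracting a proton range from a PDD curve (assumed convention: range taken as the distal depth where the PDD falls to 80% of its maximum; the curve below is a toy stand-in for simulated data):

        import numpy as np

        def distal_r80(depth_cm, pdd):
            """Distal depth at which the normalized PDD crosses 80%."""
            pdd = np.asarray(pdd) / np.max(pdd) * 100.0
            i = np.argmax(pdd)                     # index of the Bragg peak
            distal_d, distal_p = depth_cm[i:], pdd[i:]
            j = np.searchsorted(-distal_p, -80.0)  # first distal point <= 80%
            d0, d1 = distal_d[j - 1], distal_d[j]  # bracketing points
            p0, p1 = distal_p[j - 1], distal_p[j]
            return d0 + (p0 - 80.0) * (d1 - d0) / (p0 - p1)  # interpolate

        depth = np.linspace(0, 20, 201)
        pdd = np.exp(-((depth - 15.0) ** 2) / 2.0) * 100.0  # toy Bragg-like curve
        print(distal_r80(depth, pdd))   # ≈ 15.67 cm for this toy curve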

  5. Accelerated weight histogram method for exploring free energy landscapes

    NASA Astrophysics Data System (ADS)

    Lindahl, V.; Lidmar, J.; Hess, B.

    2014-07-01

    Calculating free energies is an important and notoriously difficult task for molecular simulations. The rapid increase in computational power has made it possible to probe increasingly complex systems, yet extracting accurate free energies from these simulations remains a major challenge. Fully exploring the free energy landscape of, say, a biological macromolecule typically requires sampling large conformational changes and slow transitions. Often, the only feasible way to study such a system is to simulate it using an enhanced sampling method. The accelerated weight histogram (AWH) method is a new, efficient extended ensemble sampling technique which adaptively biases the simulation to promote exploration of the free energy landscape. The AWH method uses a probability weight histogram which allows for efficient free energy updates and results in an easy discretization procedure. A major advantage of the method is its general formulation, making it a powerful platform for developing further extensions and analyzing its relation to already existing methods. Here, we demonstrate its efficiency and general applicability by calculating the potential of mean force along a reaction coordinate for both a single dimension and multiple dimensions. We make use of a non-uniform, free energy dependent target distribution in reaction coordinate space so that computational efforts are not wasted on physically irrelevant regions. We present numerical results for molecular dynamics simulations of lithium acetate in solution and chignolin, a 10-residue long peptide that folds into a β-hairpin. We further present practical guidelines for setting up and running an AWH simulation.
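
    A simplified flat-histogram analogue conveys the adaptive-bias idea (a Wang-Landau-style update on a toy discrete landscape; AWH itself maintains a probability weight histogram with a different update schedule):

        import math, random

        bins = 20
        F = [0.0] * bins        # running bias estimate over the coordinate
        hist = [0] * bins
        f = 1.0                 # increment, halved whenever the histogram flattens
        x = 0

        def energy(i):          # toy rugged landscape
            return 2.0 * math.sin(i)

        for step in range(200000):
            xn = (x + random.choice((-1, 1))) % bins
            dB = (energy(xn) + F[xn]) - (energy(x) + F[x])
            if dB <= 0 or random.random() < math.exp(-dB):
                x = xn
            F[x] += f           # bias visited states, pushing exploration outward
            hist[x] += 1
            if min(hist) > 0.8 * sum(hist) / bins:   # histogram roughly flat
                f *= 0.5
                hist = [0] * bins
        print([round(v - min(F), 2) for v in F])  # accumulated bias mirrors landscape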

  6. Accelerated weight histogram method for exploring free energy landscapes.

    PubMed

    Lindahl, V; Lidmar, J; Hess, B

    2014-07-28

    Calculating free energies is an important and notoriously difficult task for molecular simulations. The rapid increase in computational power has made it possible to probe increasingly complex systems, yet extracting accurate free energies from these simulations remains a major challenge. Fully exploring the free energy landscape of, say, a biological macromolecule typically requires sampling large conformational changes and slow transitions. Often, the only feasible way to study such a system is to simulate it using an enhanced sampling method. The accelerated weight histogram (AWH) method is a new, efficient extended ensemble sampling technique which adaptively biases the simulation to promote exploration of the free energy landscape. The AWH method uses a probability weight histogram which allows for efficient free energy updates and results in an easy discretization procedure. A major advantage of the method is its general formulation, making it a powerful platform for developing further extensions and analyzing its relation to already existing methods. Here, we demonstrate its efficiency and general applicability by calculating the potential of mean force along a reaction coordinate for both a single dimension and multiple dimensions. We make use of a non-uniform, free energy dependent target distribution in reaction coordinate space so that computational efforts are not wasted on physically irrelevant regions. We present numerical results for molecular dynamics simulations of lithium acetate in solution and chignolin, a 10-residue long peptide that folds into a β-hairpin. We further present practical guidelines for setting up and running an AWH simulation.

  7. In Vivo Investigation of the Effectiveness of a Hyper-viscoelastic Model in Simulating Brain Retraction

    PubMed Central

    Li, Ping; Wang, Weiwei; Zhang, Chenxi; An, Yong; Song, Zhijian

    2016-01-01

    Intraoperative brain retraction leads to a misalignment between the intraoperative positions of the brain structures and their previous positions, as determined from preoperative images. In vitro swine brain sample uniaxial tests showed that the mechanical response of brain tissue to compression and extension could be described by the hyper-viscoelasticity theory. The brain retraction caused by the mechanical process is a combination of brain tissue compression and extension. In this paper, we first constructed a hyper-viscoelastic framework based on the extended finite element method (XFEM) to simulate intraoperative brain retraction. To explore its effectiveness, we then applied this framework to an in vivo brain retraction simulation. The simulation strictly followed the clinical scenario, in which seven swine were subjected to brain retraction. Our experimental results showed that the hyper-viscoelastic XFEM framework is capable of simulating intraoperative brain retraction and improving the navigation accuracy of an image-guided neurosurgery system (IGNS). PMID:27387301

  8. 29 CFR 9.4 - Exclusions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... purchase orders at issue and would require extensive training to learn new technology or processes that... require extensive training to learn new technology or processes that would not be required of a new...) Managerial and supervisory employees. This part does not apply to employees who are managerial or supervisory...

  9. 29 CFR 9.4 - Exclusions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... purchase orders at issue and would require extensive training to learn new technology or processes that... require extensive training to learn new technology or processes that would not be required of a new...) Managerial and supervisory employees. This part does not apply to employees who are managerial or supervisory...

  10. 29 CFR 9.4 - Exclusions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... purchase orders at issue and would require extensive training to learn new technology or processes that... require extensive training to learn new technology or processes that would not be required of a new...) Managerial and supervisory employees. This part does not apply to employees who are managerial or supervisory...

  11. Energetic requirements of green sturgeon (Acipenser medirostris) feeding on burrowing shrimp (Neotrypaea californiensis) in estuaries: importance of temperature, reproductive investment, and residence time

    USGS Publications Warehouse

    Borin, Joshua M.; Moser, Mary L.; Hansen, Adam G.; Beauchamp, David A.; Corbett, Stephen C.; Dumbauld, Brett R.; Pruitt, Casey; Ruesink, Jennifer L.; Donoghue, Cinde

    2017-01-01

    Habitat use can be complex, as tradeoffs among physiology, resource abundance, and predator avoidance affect the suitability of different environments for different species. Green sturgeon (Acipenser medirostris), an imperiled species along the west coast of North America, undertake extensive coastal migrations and occupy estuaries during the summer and early fall. Warm water and abundant prey in estuaries may afford a growth opportunity. We applied a bioenergetics model to investigate how variation in estuarine temperature, spawning frequency, and duration of estuarine residence affect consumption and growth potential for individual green sturgeon. We assumed that green sturgeon achieve observed annual growth by feeding solely in conditions represented by Willapa Bay, Washington, an estuary annually frequented by green sturgeon and containing extensive tidal flats that harbor a major prey source (burrowing shrimp, Neotrypaea californiensis). Modeled consumption rates increased little with reproductive investment (<0.4%), but responded strongly (10–50%) to water temperature and duration of residence, as higher temperatures and longer residence required greater consumption to achieve equivalent growth. Accordingly, although green sturgeon occupy Willapa Bay from May through September, acoustically-tagged individuals are observed over much shorter durations (34 d ± 41 d SD, N = 89). Simulations of <34 d estuarine residence required unrealistically high consumption rates to achieve observed growth, whereas longer durations required sustained feeding, and therefore higher total intake, to compensate for prolonged exposure to warm temperatures. Model results provide a range of per capita consumption rates by green sturgeon feeding in estuaries to inform management decisions regarding resource and habitat protection for this protected species.
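
    The energy-balance reasoning behind such a bioenergetics model can be made concrete: total intake must cover observed growth plus temperature-dependent metabolic costs over the residence period. The sketch below uses invented placeholder coefficients, not the study's parameters, and reproduces only the qualitative result that short residence forces a high daily ration while long residence inflates total intake.

        # Minimal energy-balance sketch; all coefficients are illustrative placeholders.
        def daily_respiration_j(mass_g, temp_c, a=0.05, q10=2.0, t_ref=15.0):
            """Temperature-dependent metabolic cost (J/day), Q10-style scaling."""
            return a * mass_g * q10 ** ((temp_c - t_ref) / 10.0) * 1000.0

        def required_consumption_j(growth_j, mass_g, temp_c, days, assimilation=0.7):
            """Total intake (J) so that assimilated energy covers growth + respiration."""
            return (growth_j + daily_respiration_j(mass_g, temp_c) * days) / assimilation

        for days in (34, 90, 150):   # shorter vs. longer estuarine residence
            total = required_consumption_j(growth_j=2.0e7, mass_g=20_000,
                                           temp_c=18.0, days=days)
            print(f"{days:3d} d -> {total / days / 1000:6.0f} kJ/day, {total / 1e6:5.0f} MJ total")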

  12. Use of Flowtran Simulation in Education

    ERIC Educational Resources Information Center

    Clark, J. Peter; Sommerfeld, Jude T.

    1976-01-01

    Describes the use in chemical engineering education of FLOWTRAN, a large steady-state simulator of chemical processes with extensive facilities for physical and thermodynamic data-handling and a large library of equipment modules, including cost estimation capability. (MLH)

  13. Demonstrating an Order-of-Magnitude Sampling Enhancement in Molecular Dynamics Simulations of Complex Protein Systems.

    PubMed

    Pan, Albert C; Weinreich, Thomas M; Piana, Stefano; Shaw, David E

    2016-03-08

    Molecular dynamics (MD) simulations can describe protein motions in atomic detail, but transitions between protein conformational states sometimes take place on time scales that are infeasible or very expensive to reach by direct simulation. Enhanced sampling methods, the aim of which is to increase the sampling efficiency of MD simulations, have thus been extensively employed. The effectiveness of such methods when applied to complex biological systems like proteins, however, has been difficult to establish because even enhanced sampling simulations of such systems do not typically reach time scales at which convergence is extensive enough to reliably quantify sampling efficiency. Here, we obtain sufficiently converged simulations of three proteins to evaluate the performance of simulated tempering, a member of a widely used class of enhanced sampling methods that use elevated temperature to accelerate sampling. Simulated tempering simulations with individual lengths of up to 100 μs were compared to (previously published) conventional MD simulations with individual lengths of up to 1 ms. With two proteins, BPTI and ubiquitin, we evaluated the efficiency of sampling of conformational states near the native state, and for the third, the villin headpiece, we examined the rate of folding and unfolding. Our comparisons demonstrate that simulated tempering can consistently achieve a substantial sampling speedup of an order of magnitude or more relative to conventional MD.
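
    The core of simulated tempering is a random walk in temperature alongside the coordinate walk: a proposed jump from inverse temperature beta_k to beta_j is accepted with probability min(1, exp((beta_k - beta_j) U(x) + g_j - g_k)), where the g_k are tunable weights. Below is a minimal sketch on a toy double well, assuming a four-rung ladder and flat weights (production use tunes them adaptively); it is illustrative, not the protocol of the paper.

        import numpy as np

        rng = np.random.default_rng(1)

        temps = np.array([1.0, 1.5, 2.25, 3.4])   # temperature ladder
        betas = 1.0 / temps
        weights = np.zeros_like(temps)            # g_k; flat here, adaptive in practice
        U = lambda x: 6.0 * (x**2 - 1.0)**2       # double well with a barrier at x = 0

        x, k, crossings = -1.0, 0, 0
        for step in range(100_000):
            trial = x + rng.normal(0.0, 0.2)                    # coordinate move at T_k
            if rng.random() < np.exp(-betas[k] * (U(trial) - U(x))):
                if trial * x < 0.0:
                    crossings += 1                              # hopped over the barrier
                x = trial
            if step % 10 == 0:                                  # temperature move
                j = min(max(k + rng.choice((-1, 1)), 0), len(temps) - 1)
                if rng.random() < np.exp((betas[k] - betas[j]) * U(x)
                                         + weights[j] - weights[k]):
                    k = j

        print("barrier crossings:", crossings)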

  14. Establishing a Modern Ground Network for Space Geodesy Applications

    NASA Technical Reports Server (NTRS)

    Pearlman, M.; Pavlis, E.; Altamimi, Z.; Noll, C.

    2010-01-01

    Ground-based networks of co-located space-geodesy techniques (VLBI, SLR, GNSS, DORIS) are the basis for the development and maintenance of the International Terrestrial Reference Frame (ITRF), which underpins our metric measurements of global change. The Global Geodetic Observing System (GGOS) within the International Association of Geodesy has established a task to develop a strategy to design, integrate and maintain the fundamental geodetic network and supporting infrastructure in a sustainable way to satisfy the long-term requirements for the reference frame. The GGOS goal is an origin definition at 1 mm or better and a temporal stability on the order of 0.1 mm/yr, with similar numbers for the scale and orientation components. These goals are based on scientific requirements to address sea level rise with confidence. As a first step, simulations focused on establishing the optimal global SLR and VLBI network, since these two techniques alone are sufficient to define the reference frame. The GNSS constellations will then distribute the reference frame to users anywhere on the Earth. Using simulated data to be collected by the future networks, we investigated various designs and the resulting accuracy in the origin, scale and orientation of the resulting ITRF. We present here the results of extensive simulation studies aimed at designing optimal global geodetic networks to support GGOS science products. Current estimates are that the network will require 24-32 globally distributed co-location sites. Stations in the near-global network will require geologically stable sites with good weather, established infrastructure, and local support and personnel. GGOS will seek groups that are interested in participation and intends to issue a Call for Participation for groups that would like to take part in the network implementation and operation. Some examples of integrated stations currently in operation or under development will be presented. We will examine necessary conditions and challenges in designing a co-location station.

  15. Numerical model of solar dynamic radiator for parametric analysis

    NASA Technical Reports Server (NTRS)

    Rhatigan, Jennifer L.

    1989-01-01

    Growth power requirements for Space Station Freedom will be met through addition of 25 kW solar dynamic (SD) power modules. Extensive thermal and power cycle modeling capabilities have been developed which are powerful tools in Station design and analysis, but which prove cumbersome and costly for simple component preliminary design studies. In order to aid in refining the SD radiator to the mature design stage, a simple and flexible numerical model was developed. The model simulates heat transfer and fluid flow performance of the radiator and calculates area, mass, and impact survivability for many combinations of flow tube and panel configurations, fluid and material properties, and environmental and cycle variations.
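
    The kind of quick parametric trade such a model supports can be illustrated with a one-line radiator sizing from the Stefan-Boltzmann law; the numbers below are illustrative placeholders, not Space Station Freedom design values.

        SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

        def radiator_area(q_watts, t_panel_k, t_sink_k, emissivity=0.85):
            """Area needed to reject q_watts: A = Q / (eps * sigma * (Tp^4 - Ts^4))."""
            return q_watts / (emissivity * SIGMA * (t_panel_k**4 - t_sink_k**4))

        print(f"area to reject 25 kW: {radiator_area(25_000, 350.0, 250.0):.1f} m^2")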

  16. Weather Research and Forecasting Model with Vertical Nesting Capability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2014-08-01

    The Weather Research and Forecasting (WRF) model with vertical nesting capability is an extension of the WRF model, which is available in the public domain, from www.wrf-model.org. The new code modifies the nesting procedure, which passes lateral boundary conditions between computational domains in the WRF model. Previously, the same vertical grid was required on all domains, while the new code allows different vertical grids to be used on concurrently run domains. This new functionality improves WRF's ability to produce high-resolution simulations of the atmosphere by allowing a wider range of scales to be efficiently resolved and more accurate lateral boundary conditions to be provided through the nesting procedure.

  17. MPI-Defrost: Extension of Defrost to MPI-based Cluster Environment

    NASA Astrophysics Data System (ADS)

    Amin, Mustafa A.; Easther, Richard; Finkel, Hal

    2011-06-01

    MPI-Defrost extends Frolov’s Defrost to an MPI-based cluster environment. This version has been restricted to a single field. Restoring two-field support should be straightforward, but will require some code changes. Some output options may also not be fully supported under MPI. This code was produced to support our own work, and has been made available for the benefit of anyone interested in either oscillon simulations or an MPI capable version of Defrost, and it is provided on an "as-is" basis. Andrei Frolov is the primary developer of Defrost and we thank him for placing his work under the GPL (GNU General Public License), and thus allowing us to distribute this modified version.

  18. Design, qualification, manufacturing and integration of IXV Ablative Thermal Protection System

    NASA Astrophysics Data System (ADS)

    Cioeta, Mario; Di Vita, Gandolfo; Signorelli Maria, Teresa; Bianco, Gianluca; Cutroni, Maurizio; Damiani, Francesco; Ferretti, Viviana; Rotondo, Adriano

    2016-07-01

    In the present paper, all the activities carried out by Avio S.p.A in order to define, qualify, manufacture and integrate the IXV Ablative TPS will be presented. In particular the extensive numerical simulation in both small and full scale testing activities will be overviewed. Wide-ranging testing activity has been carried out in order to verify, confirm and correlate the numerical models used for TPS sizing. Tests ranged from classical thermo-mechanical characterization traction specimens to tests in plasma wind tunnels on dedicated prototypes. Finally manufacturing and integration activities will be described emphasizing technological aspects solved in order to meet the stringent requirements in terms of shape accuracy and integration tolerances.

  19. Glass Waste Forms for Oak Ridge Tank Wastes: Fiscal Year 1998 Report for Task Plan SR-16WT-31, Task B

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andrews, M.K.

    1999-05-10

    Using ORNL information on the characterization of the tank waste sludges, SRTC performed extensive bench-scale vitrification studies using simulants. Several glass systems were tested to ensure the optimum glass composition (based on the glass liquidus temperature, viscosity and durability) is determined. This optimum composition will balance waste loading, melt temperature, waste form performance and disposal requirements. By optimizing the glass composition, a cost savings can be realized during vitrification of the waste. The preferred glass formulation was selected from the bench-scale studies and recommended to ORNL for further testing with samples of actual OR waste tank sludges.

  20. Fault detection in mechanical systems with friction phenomena: an online neural approximation approach.

    PubMed

    Papadimitropoulos, Adam; Rovithakis, George A; Parisini, Thomas

    2007-07-01

    In this paper, the problem of fault detection in mechanical systems performing linear motion under the action of friction phenomena is addressed. The friction effects are modeled through the dynamic LuGre model. The proposed architecture is built upon an online neural network (NN) approximator, which requires only the system's position and velocity. The friction internal state is not assumed to be available for measurement. The neural fault detection methodology is analyzed with respect to its robustness and sensitivity properties. Rigorous fault detectability conditions and upper bounds for the detection time are also derived. Extensive simulation results showing the effectiveness of the proposed methodology are provided, including a real case study on an industrial actuator.
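
    For reference, the LuGre model named in the abstract drives an internal bristle state z by zdot = v - sigma0 * |v| * z / g(v) and outputs F = sigma0 * z + sigma1 * zdot + sigma2 * v, where g(v) is the Stribeck curve. A minimal forward-Euler integration under an imposed sliding velocity, with arbitrary parameter values rather than the paper's:

        import numpy as np

        sigma0, sigma1, sigma2 = 1e5, 300.0, 0.4   # bristle stiffness/damping, viscous
        Fc, Fs, vs = 1.0, 1.5, 0.01                # Coulomb level, stiction, Stribeck vel.

        def g(v):
            """Stribeck curve: steady-state friction level vs. velocity."""
            return Fc + (Fs - Fc) * np.exp(-(v / vs) ** 2)

        dt, z = 1e-5, 0.0
        for t in np.arange(0.0, 0.2, dt):
            v = 0.02 * np.sin(2.0 * np.pi * 5.0 * t)   # imposed sliding velocity
            zdot = v - sigma0 * abs(v) * z / g(v)      # bristle state: internal and,
            z += zdot * dt                             # as the abstract notes, unmeasured
            F = sigma0 * z + sigma1 * zdot + sigma2 * v

        print(f"final bristle deflection z = {z:.2e}, friction force F = {F:.3f}")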

  1. Preliminary assessment of the robustness of dynamic inversion based flight control laws

    NASA Technical Reports Server (NTRS)

    Snell, S. A.

    1992-01-01

    Dynamic-inversion-based flight control laws present an attractive alternative to conventional gain-scheduled designs for high angle-of-attack maneuvering, where nonlinearities dominate the dynamics. Dynamic inversion is easily applied to the aircraft dynamics requiring a knowledge of the nonlinear equations of motion alone, rather than an extensive set of linearizations. However, the robustness properties of the dynamic inversion are questionable especially when considering the uncertainties involved with the aerodynamic database during post-stall flight. This paper presents a simple analysis and some preliminary results of simulations with a perturbed database. It is shown that incorporating integrators into the control loops helps to improve the performance in the presence of these perturbations.
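
    The inversion idea is easiest to see for a scalar system xdot = f(x) + g(x) u: choosing u = (v - f(x)) / g(x) cancels the modeled nonlinearity and leaves the commanded dynamics v. The toy sketch below, with an assumed model error standing in for aerodynamic database uncertainty, also shows an integral term restoring steady-state accuracy, in line with the paper's conclusion; it is a sketch, not the paper's control law.

        import numpy as np

        f_true  = lambda x: np.sin(x) + 0.5 * x**2        # actual plant dynamics
        f_model = lambda x: 1.1 * np.sin(x) + 0.5 * x**2  # perturbed model (database error)
        g       = lambda x: 1.0 + 0.1 * np.cos(x)         # control effectiveness

        dt, k, x_cmd = 1e-3, 5.0, 1.0
        x, xi = 0.0, 0.0
        for _ in range(10_000):
            v = -k * (x - x_cmd) + 2.0 * xi        # commanded dynamics + integral term
            xi += (x_cmd - x) * dt
            u = (v - f_model(x)) / g(x)            # inversion uses the imperfect model
            x += (f_true(x) + g(x) * u) * dt       # true plant integration

        print(f"steady-state x = {x:.4f} (command {x_cmd})")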

  2. Ceramic Technology for Advanced Heat Engines Project. Semiannual progress report, October 1984-March 1985

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1985-09-01

    A five-year project plan was developed with extensive input from private industry. The objective of the project is to develop the industrial technology base required for reliable ceramics for application in advanced automotive heat engines. The project approach includes determining the mechanisms controlling reliability, improving processes for fabricating existing ceramics, developing new materials with increased reliability, and testing these materials in simulated engine environments to confirm reliability. Although this is a generic materials project, the focus is on structural ceramics for advanced gas turbine and diesel engines, ceramic bearings and attachments, and ceramic coatings for thermal barrier and wear applications in these engines.

  3. Ceramic technology for advanced heat engines project: Semiannual progress report for April through September 1986

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1987-03-01

    An assessment of needs was completed, and a five-year project plan was developed with extensive input from private industry. The objective is to develop the industrial technology base required for reliable ceramics for application in advanced automotive heat engines. The project approach includes determining the mechanisms controlling reliability, improving processes for fabricating existing ceramics, developing new materials with increased reliability, and testing these materials in simulated engine environments to confirm reliability. Although this is a generic materials project, the focus is on structural ceramics for advanced gas turbine and diesel engines, ceramic bearings and attachments, and ceramic coatings for thermal barrier and wear applications in these engines.

  4. Remote sensing and field test capabilities at U.S. Army Dugway Proving Ground

    NASA Astrophysics Data System (ADS)

    Pearson, James T.; Herron, Joshua P.; Marshall, Martin S.

    2011-11-01

    U.S. Army Dugway Proving Ground (DPG) is a Major Range and Test Facility Base (MRTFB) with the mission of testing chemical and biological defense systems and materials. DPG facilities include state-of-the-art laboratories, extensive test grids, controlled environment calibration facilities, and a variety of referee instruments for required test measurements. Among these referee instruments, DPG has built up a significant remote sensing capability for both chemical and biological detection. Technologies employed for remote sensing include FTIR spectroscopy, UV spectroscopy, Raman-shifted eye-safe lidar, and other elastic backscatter lidar systems. These systems provide referee data for bio-simulants, chemical simulants, toxic industrial chemicals (TICs), and toxic industrial materials (TIMs). In order to realize a successful large scale open-air test, each type of system requires calibration and characterization. DPG has developed specific calibration facilities to meet this need. These facilities are the Joint Ambient Breeze Tunnel (JABT), and the Active Standoff Chamber (ASC). The JABT and ASC are open ended controlled environment tunnels. Each includes validation instrumentation to characterize simulants that are disseminated. Standoff systems are positioned at typical field test distances to measure characterized simulants within the tunnel. Data from different types of systems can be easily correlated using this method, making later open air test results more meaningful. DPG has a variety of large scale test grids available for field tests. After and during testing, data from the various referee instruments is provided in a visual format to more easily draw conclusions on the results. This presentation provides an overview of DPG's standoff testing facilities and capabilities, as well as example data from different test scenarios.

  5. Remote sensing and field test capabilities at U.S. Army Dugway Proving Ground

    NASA Astrophysics Data System (ADS)

    Pearson, James T.; Herron, Joshua P.; Marshall, Martin S.

    2012-05-01

    U.S. Army Dugway Proving Ground (DPG) is a Major Range and Test Facility Base (MRTFB) with the mission of testing chemical and biological defense systems and materials. DPG facilities include state-of-the-art laboratories, extensive test grids, controlled environment calibration facilities, and a variety of referee instruments for required test measurements. Among these referee instruments, DPG has built up a significant remote sensing capability for both chemical and biological detection. Technologies employed for remote sensing include FTIR spectroscopy, UV spectroscopy, Raman-shifted eye-safe lidar, and other elastic backscatter lidar systems. These systems provide referee data for bio-simulants, chemical simulants, toxic industrial chemicals (TICs), and toxic industrial materials (TIMs). In order to realize a successful large scale open-air test, each type of system requires calibration and characterization. DPG has developed specific calibration facilities to meet this need. These facilities are the Joint Ambient Breeze Tunnel (JABT), and the Active Standoff Chamber (ASC). The JABT and ASC are open ended controlled environment tunnels. Each includes validation instrumentation to characterize simulants that are disseminated. Standoff systems are positioned at typical field test distances to measure characterized simulants within the tunnel. Data from different types of systems can be easily correlated using this method, making later open air test results more meaningful. DPG has a variety of large scale test grids available for field tests. After and during testing, data from the various referee instruments is provided in a visual format to more easily draw conclusions on the results. This presentation provides an overview of DPG's standoff testing facilities and capabilities, as well as example data from different test scenarios.

  6. MOtoNMS: A MATLAB toolbox to process motion data for neuromusculoskeletal modeling and simulation.

    PubMed

    Mantoan, Alice; Pizzolato, Claudio; Sartori, Massimo; Sawacha, Zimi; Cobelli, Claudio; Reggiani, Monica

    2015-01-01

    Neuromusculoskeletal modeling and simulation enable investigation of the neuromusculoskeletal system and its role in human movement dynamics. These methods are progressively introduced into daily clinical practice. However, a major factor limiting this translation is the lack of robust tools for the pre-processing of experimental movement data for their use in neuromusculoskeletal modeling software. This paper presents MOtoNMS (matlab MOtion data elaboration TOolbox for NeuroMusculoSkeletal applications), a toolbox freely available to the community that aims to fill this gap. MOtoNMS processes experimental data from different motion analysis devices and generates input data for neuromusculoskeletal modeling and simulation software, such as OpenSim and CEINMS (Calibrated EMG-Informed NMS Modelling Toolbox). MOtoNMS implements commonly required processing steps and its generic architecture simplifies the integration of new user-defined processing components. MOtoNMS allows users to set up their laboratory configurations and processing procedures through user-friendly graphical interfaces, without requiring advanced computer skills. Finally, configuration choices can be stored, enabling the full reproduction of the processing steps. MOtoNMS is released under the GNU General Public License and it is available at the SimTK website and from the GitHub repository. Motion data collected at four institutions demonstrate that, despite differences in laboratory instrumentation and procedures, MOtoNMS succeeds in processing data and producing consistent inputs for OpenSim and CEINMS. MOtoNMS fills the gap between motion analysis and neuromusculoskeletal modeling and simulation. Its support for several devices, a complete implementation of the pre-processing procedures, its simple extensibility, the available user interfaces, and its free availability can boost the translation of neuromusculoskeletal methods into daily and clinical practice.

  7. Shingle 2.0: generalising self-consistent and automated domain discretisation for multi-scale geophysical models

    NASA Astrophysics Data System (ADS)

    Candy, Adam S.; Pietrzak, Julie D.

    2018-01-01

    The approaches taken to describe and develop spatial discretisations of the domains required for geophysical simulation models are commonly ad hoc, model- or application-specific, and under-documented. This is particularly acute for simulation models that are flexible in their use of multi-scale, anisotropic, fully unstructured meshes where a relatively large number of heterogeneous parameters are required to constrain their full description. As a consequence, it can be difficult to reproduce simulations, to ensure a provenance in model data handling and initialisation, and a challenge to conduct model intercomparisons rigorously. This paper takes a novel approach to spatial discretisation, considering it much like a numerical simulation model problem of its own. It introduces a generalised, extensible, self-documenting approach to carefully describe, and necessarily fully, the constraints over the heterogeneous parameter space that determine how a domain is spatially discretised. This additionally provides a method to accurately record these constraints, using high-level natural language based abstractions that enable full accounts of provenance, sharing, and distribution. Together with this description, a generalised consistent approach to unstructured mesh generation for geophysical models is developed that is automated, robust and repeatable, quick-to-draft, rigorously verified, and consistent with the source data throughout. This interprets the description above to execute a self-consistent spatial discretisation process, which is automatically validated to expected discrete characteristics and metrics. Library code, verification tests, and examples available in the repository at https://github.com/shingleproject/Shingle. Further details of the project presented at http://shingleproject.org.

  8. The Text Encoding Initiative: Flexible and Extensible Document Encoding.

    ERIC Educational Resources Information Center

    Barnard, David T.; Ide, Nancy M.

    1997-01-01

    The Text Encoding Initiative (TEI), an international collaboration aimed at producing a common encoding scheme for complex texts, examines the requirement for generality versus the requirement to handle specialized text types. Discusses how documents and users tax the limits of fixed schemes requiring flexible extensible encoding to support…

  9. Detector Simulations with DD4hep

    NASA Astrophysics Data System (ADS)

    Petrič, M.; Frank, M.; Gaede, F.; Lu, S.; Nikiforou, N.; Sailer, A.

    2017-10-01

    Detector description is a key component of detector design studies, test beam analyses, and most particle physics experiments that require the simulation of more and more different detector geometries and event types. This paper describes DD4hep, which is an easy-to-use yet flexible and powerful detector description framework that can be used for detector simulation and also extended to specific needs for a particular working environment. Linear collider detector concepts ILD, SiD and CLICdp as well as detector development collaborations CALICE and FCal have chosen to adopt the DD4hep geometry framework and its DDG4 pathway to Geant4 as their core simulation and reconstruction tools. The DDG4 plugins suite includes a wide variety of input formats, provides access to the Geant4 particle gun or general particle source, and allows for handling of Monte Carlo truth information, e.g., by linking hits and the primary particle that caused them, which is indispensable for performance and efficiency studies. An extendable array of segmentations and sensitive detectors allows the simulation of a wide variety of detector technologies. This paper shows how DD4hep makes it possible to perform complex Geant4 detector simulations without compiling a single line of additional code, by providing a palette of sub-detector components that can be combined and configured via compact XML files. Simulation is controlled either completely via the command line or via simple Python steering files interpreted by a Python executable. It also discusses how additional plugins and extensions can be created to increase the functionality.
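
    A ddsim steering file of the sort described here is itself a small Python script that sets attributes on a DD4hepSimulation object. The sketch below follows the documented option names as best recalled (compactFile, numberOfEvents, enableGun, outputFile); the geometry file name is a placeholder, so verify the details against the DD4hep documentation before relying on them.

        # steer.py -- run with: ddsim --steeringFile steer.py
        from DDSim.DD4hepSimulation import DD4hepSimulation

        SIM = DD4hepSimulation()
        SIM.compactFile = "MyDetector.xml"   # compact XML geometry (placeholder name)
        SIM.numberOfEvents = 100
        SIM.enableGun = True                 # use the built-in Geant4 particle gun
        SIM.outputFile = "sim.slcio"         # hits plus Monte Carlo truth links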

  10. Study of GLAO-corrected PSF evolution for the MUSE Wide Field Mode. Expected performance and requirements for PSF reconstruction

    NASA Astrophysics Data System (ADS)

    Fusco, T.; Villecroze, R.; Jarno, A.; Bacon, R.

    2011-09-01

    The second generation instrument MUSE for the VLT has been designed to profit from the ESO Adaptive Optics Facility (AOF). The two Adaptive Optics (AO) modes (GLAO in Wide Field Mode [WFM] and LTAO in Narrow Field Mode [NFM]) will be used. To achieve its key science goals, MUSE will require information on the full system (atmosphere, AO, telescope and instrument) image quality and its variation with field position and wavelength. For example, optimal summation of a large number of deep field exposures in WFM will require a good knowledge of the PSF. In this paper, we present an exhaustive analysis of the MUSE Wide Field Mode PSF evolution, both spatially and spectrally. For that purpose we have coupled a complete AO simulation tool developed at ONERA with the MUSE instrumental PSF simulation. The relative impact of atmospheric and system parameters (seeing, Cn^2, LGS and NGS positions, etc.) with respect to differential MUSE aberrations per channel (i.e. slicer and IFU) is analysed. The results allow us (in close collaboration with astronomers) to define pertinent parameters (fit parameters using a Moffat function) for a PSF reconstruction process (estimation of these parameters using GLAO telemetry) and to propose an efficient and robust algorithm to be implemented in the MUSE pipeline. The extension of the spatial and spectral PSF analysis to the NFM case is discussed and preliminary results are given. Some specific requirements for the generalisation of the GLAO PSF reconstruction process to the LTAO case are derived from these early results.
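
    For concreteness, the Moffat parametrization proposed above is I(r) = I0 * (1 + (r/alpha)^2)^(-beta), whose FWHM is 2 * alpha * sqrt(2^(1/beta) - 1). A small sketch with illustrative values (not MUSE-derived fits):

        import numpy as np

        def moffat(r, alpha, beta, i0=1.0):
            """Moffat PSF profile: I(r) = I0 * (1 + (r/alpha)^2)^(-beta)."""
            return i0 * (1.0 + (r / alpha) ** 2) ** (-beta)

        alpha, beta = 0.6, 2.5   # arcsec; shape parameter (placeholder values)
        fwhm = 2.0 * alpha * np.sqrt(2.0 ** (1.0 / beta) - 1.0)
        print(f"FWHM = {fwhm:.3f} arcsec")
        print("I(r):", [round(float(moffat(r, alpha, beta)), 3) for r in (0.0, 0.3, 0.6, 1.2)])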

  11. IITET and shadow TT: an innovative approach to training at the point of need

    NASA Astrophysics Data System (ADS)

    Gross, Andrew; Lopez, Favio; Dirkse, James; Anderson, Darran; Berglie, Stephen; May, Christopher; Harkrider, Susan

    2014-06-01

    The Image Intensification and Thermal Equipment Training (IITET) project is a joint effort between Night Vision and Electronics Sensors Directorate (NVESD) Modeling and Simulation Division (MSD) and the Army Research Institute (ARI) Fort Benning Research Unit. The IITET effort develops a reusable and extensible training architecture that supports the Army Learning Model and trains Manned-Unmanned Teaming (MUM-T) concepts to Shadow Unmanned Aerial Systems (UAS) payload operators. The training challenge of MUM-T during aviation operations is that UAS payload operators traditionally learn few of the scout-reconnaissance skills and coordination appropriate to MUM-T at the schoolhouse. The IITET effort leveraged the simulation experience and capabilities at NVESD and ARI's research to develop a novel payload operator training approach consistent with the Army Learning Model. Based on the training and system requirements, the team researched and identified candidate capabilities in several distinct technology areas. The training capability will support a variety of training missions as well as a full campaign. Data from these missions will be captured in a fully integrated AAR capability, which will provide objective feedback to the user in near-real-time. IITET will be delivered via a combination of browser and video streaming technologies, eliminating the requirement for a client download and reducing user computer system requirements. The result is a novel UAS Payload Operator training capability, nested within an architecture capable of supporting a wide variety of training needs for air and ground tactical platforms and sensors, and potentially several other areas requiring vignette-based serious games training.

  12. Los Angeles International Airport Runway Incursion Studies: Phase III--Center-Taxiway Simulation

    NASA Technical Reports Server (NTRS)

    Madson, Michael D.

    2004-01-01

    Phase III of the Los Angeles International Airport Runway Incursion Studies was conducted, under an agreement with HNTB Corporation, at the NASA Ames FutureFlight Central (FFC) facility in June 2003. The objective of the study was the evaluation of a new center-taxiway concept at LAX. This study is an extension of the Phase I and Phase II studies previously conducted at FFC. This report presents results from Phase III of the study, in which a center-taxiway concept between runways 25L and 25R was simulated and evaluated. Phase III data were compared objectively against the Baseline data. Subjective evaluations by participating LAX controllers were obtained with regard to workload, efficiency, and safety criteria. To facilitate a valid comparison between Baseline and Phase III data, the same scenarios were used for Phase III that were tested during Phases I and II. This required briefing participating controllers on differences in airport and airline operations between 2001 and today.

  13. Liver of the "visible man".

    PubMed

    Fasel, J H; Gingins, P; Kalra, P; Magnenat-Thalmann, N; Baur, C; Cuttat, J F; Muster, M; Gailloud, P

    1997-01-01

    Endoscopic surgery, also called minimally invasive surgery, is presumed to drastically reduce postoperative morbidity and thus to offer both human and economic benefits. For the surgeon, however, this approach leads to a number of gestural challenges that require extensive training to master. In order to replace experimentation on animals and patients, we developed a simulator for endoscopic surgery. To achieve this goal, a first step was to develop a working prototype, a "standard patient," on which the informatics and microengineering tools could be validated. We used the visible man dataset for this purpose. The external shape of the visible man's liver, his biliary passages, and his extrahepatic portal system turned out to be fully within the standard pattern of normal anatomy. Anatomic variations were observed in the intrahepatic right portal vein, the hepatic veins, and the arterial blood supply to the liver. Thus, the visible man dataset reveals itself to be well suited for the simulation of minimally invasive surgical operations such as endoscopic cholecystectomy.

  14. On computing stress in polymer systems involving multi-body potentials from molecular dynamics simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fu, Yao, E-mail: fu5@mailbox.sc.edu, E-mail: jhsong@cec.sc.edu; Song, Jeong-Hoon, E-mail: fu5@mailbox.sc.edu, E-mail: jhsong@cec.sc.edu

    2014-08-07

    Hardy stress definition has been restricted to pair potentials and embedded-atom method potentials due to the basic assumptions in the derivation of a symmetric microscopic stress tensor. Force decomposition required in the Hardy stress expression becomes obscure for multi-body potentials. In this work, we demonstrate the invariance of the Hardy stress expression for a polymer system modeled with multi-body interatomic potentials including up to four atoms interaction, by applying central force decomposition of the atomic force. The balance of momentum has been demonstrated to be valid theoretically and tested under various numerical simulation conditions. The validity of momentum conservation justifies the extension of Hardy stress expression to multi-body potential systems. Computed Hardy stress has been observed to converge to the virial stress of the system with increasing spatial averaging volume. This work provides a feasible and reliable linkage between the atomistic and continuum scales for multi-body potential systems.
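
    The comparison quantity, the virial stress, is straightforward for a pair potential; the Python sketch below evaluates it for a small Lennard-Jones cluster. The paper's actual contribution, carrying the Hardy expression over to 3- and 4-body terms via central force decomposition, is not reproduced here.

        import numpy as np

        rng = np.random.default_rng(2)

        def lj_force(rvec, eps=1.0, sig=1.0):
            """Force on atom i due to atom j, rvec = r_i - r_j (reduced LJ units)."""
            r2 = rvec @ rvec
            sr6 = (sig * sig / r2) ** 3
            return 24.0 * eps * (2.0 * sr6 * sr6 - sr6) / r2 * rvec

        # 2 x 2 x 2 jittered lattice, spacing 1.2 sigma, in a box of volume 2.4^3
        pos = np.mgrid[0:2, 0:2, 0:2].reshape(3, -1).T * 1.2 + rng.normal(0, 0.03, (8, 3))
        vel = rng.normal(0.0, 0.3, size=(8, 3))
        mass, volume = 1.0, 2.4**3

        stress = -mass * np.einsum('ia,ib->ab', vel, vel)   # kinetic contribution
        for i in range(len(pos)):
            for j in range(i + 1, len(pos)):
                rij = pos[i] - pos[j]
                stress -= np.outer(rij, lj_force(rij))      # pair virial contribution
        stress /= volume

        print("virial stress tensor (reduced units):\n", np.round(stress, 4))
        print("pressure estimate:", round(float(-np.trace(stress) / 3.0), 4))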

  15. Communicating with sentences: A multi-word naming game model

    NASA Astrophysics Data System (ADS)

    Lou, Yang; Chen, Guanrong; Hu, Jianwei

    2018-01-01

    The naming game simulates the process of naming an object by a single word, in which a population of communicating agents can reach global consensus asymptotically through iterative pair-wise conversations. We propose an extension of the single-word model to a multi-word naming game (MWNG), simulating the case of describing a complex object by a sentence (multiple words). Words are defined in categories, and then organized as sentences by combining them from different categories. We refer to a formatted combination of several words as a pattern. In such an MWNG, a pair-wise conversation requires the hearer to reach consensus with the speaker on both every single word in the sentence and the sentence pattern, so as to guarantee the correct meaning of the saying; otherwise, they fail to reach consensus in the interaction. We validate the model on three typical topologies as the underlying communication network, employing both conventional and hand-designed patterns in performing the MWNG.
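
    A minimal Python sketch of the MWNG conversation rule follows: success requires the hearer to hold every word of the sentence and its pattern, in which case both agents collapse their inventories to the agreed items; otherwise the hearer adopts what it was missing. The categories, vocabularies, and update details below are simplified assumptions, not the paper's exact protocol.

        import random

        random.seed(3)

        CATEGORIES = ("colour", "shape")
        WORDS = {"colour": ["red", "blue"], "shape": ["disc", "ring"]}
        PATTERNS = ["colour-shape", "shape-colour"]

        class Agent:
            def __init__(self):
                self.words = {c: set() for c in CATEGORIES}  # word inventory per category
                self.patterns = set()

            def speak(self):
                for c in CATEGORIES:                  # invent items if inventory is empty
                    if not self.words[c]:
                        self.words[c].add(random.choice(WORDS[c]))
                if not self.patterns:
                    self.patterns.add(random.choice(PATTERNS))
                return ({c: random.choice(sorted(self.words[c])) for c in CATEGORIES},
                        random.choice(sorted(self.patterns)))

        def converse(speaker, hearer):
            sentence, pattern = speaker.speak()
            success = (pattern in hearer.patterns and
                       all(sentence[c] in hearer.words[c] for c in CATEGORIES))
            if success:                               # both collapse to the agreed items
                for agent in (speaker, hearer):
                    agent.patterns = {pattern}
                    for c in CATEGORIES:
                        agent.words[c] = {sentence[c]}
            else:                                     # hearer learns the missing items
                hearer.patterns.add(pattern)
                for c in CATEGORIES:
                    hearer.words[c].add(sentence[c])

        agents = [Agent() for _ in range(20)]
        for _ in range(10_000):
            s, h = random.sample(agents, 2)
            converse(s, h)

        first = agents[0]
        print("global consensus:",
              all(a.patterns == first.patterns and a.words == first.words for a in agents))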

  16. Gaussian representation of high-intensity focused ultrasound beams.

    PubMed

    Soneson, Joshua E; Myers, Matthew R

    2007-11-01

    A method for fast numerical simulation of high-intensity focused ultrasound beams is derived. The method is based on the frequency-domain representation of the Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation, and assumes for each harmonic a Gaussian transverse pressure distribution at all distances from the transducer face. The beamwidths of the harmonics are constrained to vary inversely with the square root of the harmonic number, and as such this method may be viewed as an extension of a quasilinear approximation. The technique is capable of determining pressure or intensity fields of moderately nonlinear high-intensity focused ultrasound beams in water or biological tissue, usually requiring less than a minute of computer time on a modern workstation. Moreover, this method is particularly well suited to high-gain simulations since, unlike traditional finite-difference methods, it is not subject to resolution limitations in the transverse direction. Results are shown to be in reasonable agreement with numerical solutions of the full KZK equation in both tissue and water for moderately nonlinear beams.
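
    Under this ansatz, evaluating the transverse field at one axial station reduces to summing Gaussians whose widths shrink as 1/sqrt(n). The sketch below uses made-up harmonic amplitudes (in the method they come from integrating coupled ODEs along the axis) and ignores phase, so it is a magnitude-only illustration:

        import numpy as np

        w1 = 2.0e-3                                # fundamental beamwidth here (m), assumed
        amps = np.array([1.0, 0.35, 0.15, 0.07])   # harmonic amplitudes (MPa), placeholders

        r = np.linspace(0.0, 4.0e-3, 5)            # transverse radius samples
        p = sum(a * np.exp(-(r * np.sqrt(n) / w1) ** 2)
                for n, a in enumerate(amps, start=1))

        for ri, pi in zip(r, p):
            print(f"r = {ri * 1e3:.1f} mm -> |p| ~ {pi:.3f} MPa")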

  17. On the interplay of gas dynamics and the electromagnetic field in an atmospheric Ar/H2 microwave plasma torch

    NASA Astrophysics Data System (ADS)

    Synek, Petr; Obrusník, Adam; Hübner, Simon; Nijdam, Sander; Zajíčková, Lenka

    2015-04-01

    A complementary simulation and experimental study of an atmospheric pressure microwave torch operating in pure argon or argon/hydrogen mixtures is presented. The modelling part describes a numerical model coupling the gas dynamics and mixing to the electromagnetic field simulations. Since the numerical model is not fully self-consistent and requires the electron density as an input, quite extensive spatially resolved Stark broadening measurements were performed for various gas compositions and input powers. In addition, the experimental part includes Rayleigh scattering measurements, which are used for the validation of the model. The paper comments on the changes in the gas temperature and hydrogen dissociation with the gas composition and input power, showing in particular that the dependence on the gas composition is relatively strong and non-monotonic. In addition, the work provides interesting insight into the plasma sustainment mechanism by showing that the power absorption profile in the plasma has two distinct maxima: one at the nozzle tip and one further upstream.

  18. Occupant Responses in a Full-Scale Crash Test of the Sikorsky ACAP Helicopter

    NASA Technical Reports Server (NTRS)

    Jackson, Karen E.; Fasanella, Edwin L.; Boitnott, Richard L.; McEntire, Joseph; Lewis, Alan

    2002-01-01

    A full-scale crash test of the Sikorsky Advanced Composite Airframe Program (ACAP) helicopter was performed in 1999 to generate experimental data for correlation with a crash simulation developed using an explicit nonlinear, transient dynamic finite element code. The airframe was the residual flight test hardware from the ACAP program. For the test, the aircraft was outfitted with two crew and two troop seats, and four anthropomorphic test dummies. While the results of the impact test and crash simulation have been documented fairly extensively in the literature, the focus of this paper is to present the detailed occupant response data obtained from the crash test and to correlate the results with injury prediction models. These injury models include the Dynamic Response Index (DRI), the Head Injury Criteria (HIC), the spinal load requirement defined in FAR Part 27.562(c), and a comparison of the duration and magnitude of the occupant vertical acceleration responses with the Eiband whole-body acceleration tolerance curve.

  19. Extension of a Kolmogorov Atmospheric Turbulence Model for Time-Based Simulation Implementation

    NASA Technical Reports Server (NTRS)

    McMinn, John D.

    1997-01-01

    The development of any super/hypersonic aircraft requires the interaction of a wide variety of technical disciplines to maximize vehicle performance. For flight and engine control system design and development on this class of vehicle, realistic mathematical simulation models of atmospheric turbulence, including winds and the varying thermodynamic properties of the atmosphere, are needed. A model which has been tentatively selected by a government/industry group of flight and engine/inlet controls representatives working on the High Speed Civil Transport is one based on the Kolmogorov spectrum function. This report compares the Dryden and Kolmogorov turbulence forms, and describes enhancements that add functionality to the selected Kolmogorov model. These added features are: an altitude variation of the eddy dissipation rate based on Dryden data, the mapping of the eddy dissipation rate database onto a regular latitude and longitude grid, a method to account for flight at large vehicle attitude angles, and a procedure for transitioning smoothly across turbulence segments.
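
    A generic way to realize such a spectrum in a time-based simulation is spectral synthesis: shape random phases in the frequency domain by the target spectrum, here S(f) proportional to eps^(2/3) * f^(-5/3), and transform back to a gust time history. The sketch below is a generic illustration with placeholder values, not the report's implementation, which adds the altitude-varying eddy dissipation rate, the gridded database, and the attitude handling described above.

        import numpy as np

        rng = np.random.default_rng(4)

        n, dt = 2**14, 0.01          # samples and sample period (s)
        eps = 1.0e-4                 # eddy dissipation rate (m^2 s^-3), placeholder

        freqs = np.fft.rfftfreq(n, dt)
        psd = np.zeros_like(freqs)
        psd[1:] = eps ** (2.0 / 3.0) * freqs[1:] ** (-5.0 / 3.0)   # inertial-range slope

        phases = np.exp(2j * np.pi * rng.random(len(freqs)))       # random phases
        u = np.fft.irfft(np.sqrt(psd * n / (2.0 * dt)) * phases, n)

        print(f"rms gust velocity: {u.std():.3f} m/s over {n * dt:.0f} s")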

  20. Integrating Reliability Analysis with a Performance Tool

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Palumbo, Daniel L.; Ulrey, Michael

    1995-01-01

    A large number of commercial simulation tools support performance oriented studies of complex computer and communication systems. Reliability of these systems, when desired, must be obtained by remodeling the system in a different tool. This has obvious drawbacks: (1) substantial extra effort is required to create the reliability model; (2) through modeling error the reliability model may not reflect precisely the same system as the performance model; (3) as the performance model evolves one must continuously reevaluate the validity of assumptions made in that model. In this paper we describe an approach, and a tool that implements this approach, for integrating a reliability analysis engine into a production quality simulation based performance modeling tool, and for modeling within such an integrated tool. The integrated tool allows one to use the same modeling formalisms to conduct both performance and reliability studies. We describe how the reliability analysis engine is integrated into the performance tool, describe the extensions made to the performance tool to support the reliability analysis, and consider the tool's performance.

  1. Modeling the Physics of Sliding Objects on Rotating Space Elevators and Other Non-relativistic Strings

    NASA Astrophysics Data System (ADS)

    Golubovic, Leonardo; Knudsen, Steven

    2017-01-01

    We consider the general problem of modeling the dynamics of objects sliding on moving strings. We introduce a powerful computational algorithm that can be used to investigate the dynamics of objects sliding along non-relativistic strings. We use the algorithm to numerically explore the fundamental physics of sliding climbers on a unique class of dynamical systems, Rotating Space Elevators (RSE). Objects sliding along RSE strings do not require internal engines or propulsion to be transported from the Earth's surface into outer space. Through extensive numerical simulations, we find that sliding climbers may display interesting non-linear dynamics exhibiting both quasi-periodic and chaotic states of motion. While our main interest in this study is in the climber dynamics on RSEs, our results for the dynamics of sliding objects are of more general interest. In particular, we designed tools capable of dealing with strongly nonlinear phenomena involving moving strings of any kind, such as the chaotic dynamics of sliding climbers observed in our simulations.

  2. Modeling plug-in electric vehicle charging demand with BEAM: the framework for behavior energy autonomy mobility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sheppard, Colin; Waraich, Rashid; Campbell, Andrew

    This report summarizes the BEAM modeling framework (Behavior, Energy, Mobility, and Autonomy) and its application to simulating plug-in electric vehicle (PEV) mobility, energy consumption, and spatiotemporal charging demand. BEAM is an agent-based model of PEV mobility and charging behavior designed as an extension to MATSim (the Multi-Agent Transportation Simulation model). We apply BEAM to the San Francisco Bay Area and conduct a preliminary calibration and validation of its prediction of charging load based on observed charging infrastructure utilization for the region in 2016. We then explore the impact of a variety of common modeling assumptions in the literature regarding charging infrastructure availability and driver behavior. We find that accurately reproducing observed charging patterns requires an explicit representation of spatially disaggregated charging infrastructure as well as a more nuanced model of the decision to charge that balances tradeoffs people make with regards to time, cost, convenience, and range anxiety.
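
    What such a more nuanced charging decision might look like can be sketched as a utility score trading range anxiety against price and detour time; every weight and term below is invented for illustration and is not BEAM's actual decision model.

        def charge_utility(soc, price_per_kwh, detour_min, trip_km_left, range_km):
            """Positive utility -> plug in now; all weights are invented placeholders."""
            margin_km = range_km * soc - trip_km_left      # range left after this trip
            anxiety = max(0.0, 1.0 - margin_km / 50.0)     # grows as the margin shrinks
            cost = 0.05 * price_per_kwh * 30.0             # assumes a ~30 kWh session
            return 2.0 * anxiety - cost - 0.02 * detour_min

        for soc in (0.9, 0.5, 0.2):
            u = charge_utility(soc, price_per_kwh=0.30, detour_min=10.0,
                               trip_km_left=60.0, range_km=300.0)
            print(f"SOC {soc:.0%}: utility {u:+.2f} ->",
                  "charge now" if u > 0.0 else "keep driving")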

  3. GTRF: a game theory approach for regulating node behavior in real-time wireless sensor networks.

    PubMed

    Lin, Chi; Wu, Guowei; Pirozmand, Poria

    2015-06-04

    The selfish behaviors of nodes (or selfish nodes) cause packet loss, network congestion or even void regions in real-time wireless sensor networks, which greatly decrease the network performance. Previous methods have focused on detecting selfish nodes or avoiding selfish behavior, but little attention has been paid to regulating selfish behavior. In this paper, a Game Theory-based Real-time & Fault-tolerant (GTRF) routing protocol is proposed. GTRF is composed of two stages. In the first stage, a game theory model named VA is developed to regulate nodes' behaviors and meanwhile balance energy cost. In the second stage, a jumping transmission method is adopted, which ensures that real-time packets can be successfully delivered to the sink before a specific deadline. We prove that GTRF theoretically meets real-time requirements with low energy cost. Finally, extensive simulations are conducted to demonstrate the performance of our scheme. Simulation results show that GTRF not only balances the energy cost of the network, but also prolongs network lifetime.

  4. Simulating nitrogen budgets in complex farming systems using INCA: calibration and scenario analyses for the Kervidy catchment (W. France)

    NASA Astrophysics Data System (ADS)

    Durand, P.

    The integrated nitrogen model INCA (Integrated Nitrogen in Catchments) was used to analyse the nitrogen dynamics in a small rural catchment in Western France. The agrosystem studied is very complex, with extensive use of different organic fertilisers, a variety of crop rotations, a structural excess of nitrogen (i.e., more animal N produced by the intensive farming than the N required by the crops and pastures), and nitrate retention in both hydrological stores and riparian zones. The original model features were adapted here to describe this complexity. The calibration results are satisfactory, although the daily variations in stream nitrate are not simulated in detail. Different climate scenarios, based on observed climate records, were tested; all produced a worsening of the pollution in the short term. Scenarios of alternative agricultural practices (reduced fertilisation and catch crops) were also analysed, suggesting that a 40% reduction in fertilisation combined with the introduction of catch crops would be necessary to stop the degradation of water quality.

  5. Global and critical test of the perturbation density-functional theory based on extensive simulation of Lennard-Jones fluid near an interface and in confined systems.

    PubMed

    Zhou, Shiqi; Jamnik, Andrej

    2005-09-22

    The structure of a Lennard-Jones (LJ) fluid subjected to diverse external fields maintaining the equilibrium with the bulk LJ fluid is studied on the basis of the third-order+second-order perturbation density-functional approximation (DFA). The chosen density and potential parameters for the bulk fluid correspond to the conditions situated at "dangerous" regions of the phase diagram, i.e., near the critical temperature or close to the gas-liquid coexistence curve. The accuracy of DFA predictions is tested against the results of a grand canonical ensemble Monte Carlo simulation. It is found that the DFA theory presented in this work performs successfully for the nonuniform LJ fluid only on the condition of high accuracy of the required bulk second-order direct correlation function. The present report further indicates that the proposed perturbation DFA is efficient and suitable for both supercritical and subcritical temperatures.

  6. Ewald Summation Approach to Potential Models of Aqueous Electrolytes Involving Gaussian Charges and Induced Dipoles: Formal and Simulation Results

    DOE PAGES

    Chialvo, Ariel A.; Vlcek, Lukas

    2014-11-01

    We present a detailed derivation of the complete set of expressions required for the implementation of an Ewald summation approach to handle the long-range electrostatic interactions of polar and ionic model systems involving Gaussian charges and induced dipole moments, with a particular application to the isobaric-isothermal molecular dynamics simulation of our Gaussian Charge Polarizable (GCP) water model and its extension to aqueous electrolyte solutions. The set comprises the individual components of the potential energy, electrostatic potential, electrostatic field and gradient, the electrostatic force, and the corresponding virial. Moreover, we show how the derived expressions converge to known point-based electrostatic counterparts when the parameters defining the Gaussian charge and induced-dipole distributions are extrapolated to their limiting point values. Finally, we illustrate the Ewald implementation against the current reaction field approach by isothermal-isobaric molecular dynamics of ambient GCP water, for which we compare the outcomes of the thermodynamic, microstructural, and polarization behavior.
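
    The quoted limiting behaviour is easy to check numerically for a single pair: two spherical Gaussian charge clouds interact with energy q_i * q_j * erf(r / sqrt(2 * (sigma_i^2 + sigma_j^2))) / r, a standard closed form in Gaussian units rather than the paper's full Ewald machinery, and this tends to the point-charge Coulomb energy as the widths shrink:

        from math import erf, sqrt

        def gaussian_pair_energy(qi, qj, r, sig_i, sig_j):
            """Coulomb energy of two spherical Gaussian charge clouds at separation r."""
            return qi * qj * erf(r / sqrt(2.0 * (sig_i**2 + sig_j**2))) / r

        r, qi, qj = 3.0, 1.0, -1.0
        for sig in (1.0, 0.5, 0.1, 0.01):
            e = gaussian_pair_energy(qi, qj, r, sig, sig)
            print(f"sigma = {sig:5.2f} -> E = {e:+.6f} (point limit {qi * qj / r:+.6f})")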

  7. Computational simulation of concurrent engineering for aerospace propulsion systems

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.; Singhal, S. N.

    1992-01-01

    Results are summarized of an investigation to assess the infrastructure available and the technology readiness in order to develop computational simulation methods/software for concurrent engineering. These results demonstrate that development of computational simulations methods for concurrent engineering is timely. Extensive infrastructure, in terms of multi-discipline simulation, component-specific simulation, system simulators, fabrication process simulation, and simulation of uncertainties - fundamental in developing such methods, is available. An approach is recommended which can be used to develop computational simulation methods for concurrent engineering for propulsion systems and systems in general. Benefits and facets needing early attention in the development are outlined.

  8. Computational simulation for concurrent engineering of aerospace propulsion systems

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.; Singhal, S. N.

    1993-01-01

    Results are summarized for an investigation to assess the infrastructure available and the technology readiness in order to develop computational simulation methods/software for concurrent engineering. These results demonstrate that development of computational simulation methods for concurrent engineering is timely. Extensive infrastructure, in terms of multi-discipline simulation, component-specific simulation, system simulators, fabrication process simulation, and simulation of uncertainties--fundamental to develop such methods, is available. An approach is recommended which can be used to develop computational simulation methods for concurrent engineering of propulsion systems and systems in general. Benefits and issues needing early attention in the development are outlined.

  9. Computational simulation for concurrent engineering of aerospace propulsion systems

    NASA Astrophysics Data System (ADS)

    Chamis, C. C.; Singhal, S. N.

    1993-02-01

    Results are summarized for an investigation to assess the infrastructure available and the technology readiness in order to develop computational simulation methods/software for concurrent engineering. These results demonstrate that development of computational simulation methods for concurrent engineering is timely. Extensive infrastructure, in terms of multi-discipline simulation, component-specific simulation, system simulators, fabrication process simulation, and simulation of uncertainties--fundamental to develop such methods, is available. An approach is recommended which can be used to develop computational simulation methods for concurrent engineering of propulsion systems and systems in general. Benefits and issues needing early attention in the development are outlined.

  10. A time accurate prediction of the viscous flow in a turbine stage including a rotor in motion

    NASA Astrophysics Data System (ADS)

    Shavalikul, Akamol

    In the current study, the flow field in the Pennsylvania State University Axial Flow Turbine Research Facility (AFTRF) was simulated. This study examined four sets of simulations. The first two sets are for an individual NGV and for an individual rotor. The last two sets use a multiple reference frames approach for a complete turbine stage with two different interface models: a steady circumferential average approach called a mixing plane model, and a time accurate flow simulation approach called a sliding mesh model. The NGV passage flow field was simulated using a three-dimensional Reynolds Averaged Navier-Stokes finite volume solver (RANS) with a standard kappa -- epsilon turbulence model. The mean flow distributions on the NGV surfaces and endwall surfaces were computed. The numerical solutions indicate that two passage vortices begin to be observed approximately at the mid axial chord of the NGV suction surface. The first vortex is a casing passage vortex which occurs at the corner formed by the NGV suction surface and the casing. This vortex is created by the interaction of the passage flow and the radially inward flow, while the second vortex, the hub passage vortex, is observed near the hub. These two vortices become stronger towards the NGV trailing edge. By comparing the results from the X/Cx = 1.025 plane and the X/Cx = 1.09 plane, it can be concluded that the NGV wake decays rapidly within a short axial distance downstream of the NGV. For the rotor, a set of simulations was carried out to examine the flow fields associated with different pressure side tip extension configurations, which are designed to reduce the tip leakage flow. The simulation results show that significant reductions in tip leakage mass flow rate and aerodynamic loss are possible by using suitable tip platform extensions located near the pressure side corner of the blade tip. The computations used realistic turbine rotor inlet flow conditions in a linear cascade arrangement in the relative frame of reference; the boundary conditions for the computations were obtained from inlet flow measurements performed in the AFTRF. A complete turbine stage, including an NGV and a rotor row, was simulated using the RANS solver with the SST kappa -- omega turbulence model, with two different computational models for the interface between the rotating component and the stationary component. The first interface model, the circumferentially averaged mixing plane model, was solved for a fixed position of the rotor blades relative to the NGV in the stationary frame of reference. The information transferred between the NGV and rotor domains is obtained by averaging across the entire interface. The quasi-steady state flow characteristics of the AFTRF can be obtained from this interface model. After the model was validated with the existing experimental data, this model was used to investigate not only the flow characteristics in the turbine stage but also the effects of using pressure side rotor tip extensions. The tip leakage flow fields simulated from this model and from the linear cascade model show similar trends. More detailed understanding of unsteady characteristics of a turbine flow field can be obtained using the second type of interface model, the time accurate sliding mesh model. The potential flow interactions, wake characteristics, their effects on secondary flow formation, and the wake mixing process in a rotor passage were examined using this model. 
Furthermore, turbine stage efficiency and effects of tip clearance height on the turbine stage efficiency were also investigated. A comparison between the results from the circumferential average model and the time accurate flow model results is presented. It was found that the circumferential average model cannot accurately simulate flow interaction characteristics on the interface plane between the NGV trailing edge and the rotor leading edge. However, the circumferential average model does give accurate flow characteristics in the NGV domain and the rotor domain with less computational time and computer memory requirements. In contrast, the time accurate flow simulation can predict all unsteady flow characteristics occurring in the turbine stage, but with high computational resource requirements. (Abstract shortened by UMI.)
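
    The mixing plane idea above lends itself to a short illustration. The following NumPy sketch pitchwise-averages flow quantities on the NGV-exit interface before handing them to the rotor domain; the array names, shapes, and the plain area average (production solvers typically use mass- or flux-weighted averages) are assumptions for illustration, not details taken from this study.

```python
import numpy as np

def mixing_plane_average(q, axis_pitch=1):
    """Pitchwise (circumferential) average of flow quantities on an
    interface plane, in the spirit of a steady mixing plane model.

    q : ndarray of shape (n_radial, n_pitch, n_vars) holding e.g.
        density, velocity components, and pressure at the NGV exit.
    Returns an (n_radial, n_vars) array of spanwise profiles that the
    downstream (rotor) domain receives as its inlet condition.
    """
    return q.mean(axis=axis_pitch)

# Toy usage: 32 radial stations, 64 pitchwise points, 5 flow variables.
ngv_exit = np.random.rand(32, 64, 5)
rotor_inlet_profile = mixing_plane_average(ngv_exit)
print(rotor_inlet_profile.shape)  # (32, 5): pitchwise variation removed
```

    Averaging away the pitchwise variation is exactly why such a model cannot capture the NGV-rotor interaction on the interface plane, while the sliding mesh model, which retains it, can.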

  11. The Programming Language Python In Earth System Simulations

    NASA Astrophysics Data System (ADS)

    Gross, L.; Imranullah, A.; Mora, P.; Saez, E.; Smillie, J.; Wang, C.

    2004-12-01

    Mathematical models in the earth sciences are based on the solution of systems of coupled, non-linear, time-dependent partial differential equations (PDEs). The spatial and time scales vary from planetary scale and millions of years for convection problems to 100 km and 10 years for fault system simulations. Various techniques are in use to deal with the time dependency (e.g. Crank-Nicolson), with the non-linearity (e.g. Newton-Raphson), and with weakly coupled equations (e.g. non-linear Gauss-Seidel). Besides these high-level solution algorithms, discretization methods (e.g. the finite element method (FEM) and the boundary element method (BEM)) are used to deal with spatial derivatives. Typically, large-scale, three-dimensional meshes are required to resolve geometrical complexity (e.g. in the case of fault systems) or features in the solution (e.g. in mantle convection simulations). The modelling environment escript allows the rapid implementation of new physics as required for the development of simulation codes in the earth sciences. Its main objective is to provide a programming language in which the user can define new models and rapidly develop high-level solution algorithms. The current implementation is linked with the finite element package finley as a PDE solver. However, the design is open, and other discretization technologies such as finite differences and boundary element methods could be included. escript is implemented as an extension of the interactive programming environment python (see www.python.org). Key concepts introduced are Data objects, which hold values on the nodes or elements of the finite element mesh, and linearPDE objects, which define linear partial differential equations to be solved by the underlying discretization technology. In this paper we present the basic concepts of escript and show how it is used to implement a simulation code for interacting fault systems. We also show results of large-scale, parallel simulations on an SGI Altix system. Acknowledgements: Project work is supported by the Australian Commonwealth Government through the Australian Computational Earth Systems Simulator Major National Research Facility, the Queensland State Government Smart State Research Facility Fund, The University of Queensland and SGI.
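
    To make the Data/linearPDE concepts concrete, here is a minimal sketch of the workflow the abstract describes, solving a Poisson-type problem on a finley mesh. It follows the documented esys.escript interface, but class and argument names vary between escript releases, so treat the exact calls as assumptions rather than a verified program:

```python
# Minimal escript/finley sketch (requires an escript installation):
# a Data object as a PDE coefficient and a LinearPDE solved on a mesh.
from esys.escript import kronecker, whereZero, Scalar, Function
from esys.escript.linearPDEs import LinearPDE
from esys.finley import Rectangle

domain = Rectangle(n0=40, n1=40, l0=1., l1=1.)   # 2D finite element mesh

# Solve -div(A grad u) = Y with u = 0 on the left edge (x0 = 0).
pde = LinearPDE(domain)
pde.setValue(A=kronecker(domain),               # isotropic coefficient
             Y=Scalar(1., Function(domain)),    # Data object: source term
             q=whereZero(domain.getX()[0]))     # Dirichlet condition mask
u = pde.getSolution()                           # handled by finley
```

    The point of the design is visible even in this sketch: the model (coefficients A, Y and the constraint q) is written at the PDE level, while the discretization is delegated entirely to the underlying solver library.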

  12. Assessment of crew operations during internal servicing of the Columbus Free-Flyer by Hermes

    NASA Astrophysics Data System (ADS)

    Winisdoerffer, F.; Lamothe, A.; Bourdeau'hui, J. C.

    The Hermes system has been adopted as a European programme at The Hague ministerial-level meeting. The primary mission of the Hermes spaceplane will be the servicing of the Columbus Free-Flyer (CFF) in order to bring new experiments into orbit, recover the results of old ones, and refurbish and maintain the various subsystems. This mission will rely on the extensive use of the three crewmembers on board Hermes to perform Intra-Vehicular (IVA) and/or Extra-Vehicular (EVA) activities. This paper focuses on the internal operations; the dimensions of the various payloads of the basic reference cargo set are presented, and the main constraints associated with their manipulation are assessed independently of the configuration. During the spaceplane definition process, various configurations were developed. The operations were simulated using the CAD CATIA software with representative anthropometric models of the potential Hermes user population. These simulations helped to assess the various configurations and to refine the general concept of the spaceplane. The geometrical feasibility is demonstrated through those simulations. However, full-scale tests are required to confirm the data and to assess the duration of the operations.

  13. Diablo 2.0: A modern DNS/LES code for the incompressible NSE leveraging new time-stepping and multigrid algorithms

    NASA Astrophysics Data System (ADS)

    Cavaglieri, Daniele; Bewley, Thomas; Mashayek, Ali

    2015-11-01

    We present a new code, Diablo 2.0, for the simulation of the incompressible NSE in channel and duct flows with strong grid stretching near walls. The code leverages the fractional step approach with a few twists. New low-storage IMEX (implicit-explicit) Runge-Kutta time-marching schemes are tested that prove superior to the traditional and widely used CN/RKW3 (Crank-Nicolson/Runge-Kutta-Wray) approach; the new schemes are L-stable in their implicit component and offer improved overall order of accuracy and stability with, remarkably, similar computational cost and storage requirements. For duct flow simulations, our new code also introduces a new smoother for the multigrid solver for the pressure Poisson equation. The classic approach, involving alternating-direction zebra relaxation, is replaced by a new scheme, dubbed tweed relaxation, which achieves the same convergence rate with roughly half the computational cost. The code is then tested on the simulation of a shear flow instability in a duct, a classic problem in fluid mechanics which has been the object of extensive numerical modelling for its role as a canonical pathway to energetic turbulence in several fields of science and engineering.
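
    As a concrete illustration of the implicit-explicit idea behind such schemes (not Diablo 2.0's actual algorithm), here is a minimal first-order IMEX step for a 1D advection-diffusion problem, treating the stiff diffusion term implicitly and advection explicitly; the grid, operators, and parameter values are assumptions made for the sketch:

```python
import numpy as np

# Minimal IMEX (implicit-explicit) Euler step for u_t = N(u) + L u,
# with diffusion L treated implicitly and advection N explicitly.
# First order only; production codes use multi-stage IMEX Runge-Kutta.
n, dx, dt, nu, c = 128, 1.0 / 128, 1e-3, 1e-2, 1.0
x = np.arange(n) * dx
u = np.exp(-200 * (x - 0.5) ** 2)          # initial Gaussian pulse

# Periodic second-difference (diffusion) matrix L
L = (np.diag(-2 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1))
L[0, -1] = L[-1, 0] = 1.0
L *= nu / dx ** 2

A = np.eye(n) - dt * L                     # implicit system matrix

def advection(u):
    # Explicit upwind discretization of -c u_x (c > 0), periodic
    return -c * (u - np.roll(u, 1)) / dx

for _ in range(1000):
    # Solve (I - dt L) u_new = u + dt N(u): implicit diffusion,
    # explicit advection, so dt is not limited by the diffusive CFL.
    u = np.linalg.solve(A, u + dt * advection(u))
```

    The payoff of the IMEX split is that the severe diffusive time-step restriction near strongly stretched wall grids disappears, while the cheap explicit treatment of the nonlinear term avoids a nonlinear solve at each step.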

  14. Background simulations for the wide field imager aboard the ATHENA X-ray Observatory

    NASA Astrophysics Data System (ADS)

    Hauf, Steffen; Kuster, Markus; Hoffmann, Dieter H. H.; Lang, Philipp-Michael; Neff, Stephan; Pia, Maria Grazia; Strüder, Lothar

    2012-09-01

    The ATHENA X-ray observatory was a European Space Agency project for an L-class mission. ATHENA was to be based upon a simplified IXO design, with the number of instruments and the focal length of the Wolter optics reduced. One of the two instruments, the Wide Field Imager (WFI), was to be a DEPFET-based focal plane pixel detector allowing high time and spatial resolution spectroscopy in the energy range between 0.1 and 15 keV. In order to fulfill the mission goals a high sensitivity is essential, especially for studying faint and extended sources; thus a detailed understanding of the detector background induced by cosmic ray particles is crucial. During mission design, extensive Monte Carlo simulations are generally used to estimate the detector background in order to optimize shielding components and software rejection algorithms. The Geant4 toolkit [1,2] is frequently the tool of choice for this purpose. Alongside validation of the simulation environment with XMM-Newton EPIC-pn and Space Shuttle STS-53 data, we present estimates for the ATHENA WFI cosmic ray induced background, including long-term activation, which demonstrate that DEPFET-technology based detectors are able to achieve the required sensitivity.

  15. Stochastic Rotation Dynamics simulations of wetting multi-phase flows

    NASA Astrophysics Data System (ADS)

    Hiller, Thomas; Sanchez de La Lama, Marta; Brinkmann, Martin

    2016-06-01

    Multi-color Stochastic Rotation Dynamics (SRDmc) was introduced by Inoue et al. [1,2] as a particle-based simulation method to study the flow of emulsion droplets in non-wetting microchannels. In this work, we extend the multi-color method to also account for different wetting conditions. This is achieved by assigning the color information not only to fluid particles but also to the virtual wall particles that are required to enforce proper no-slip boundary conditions. To extend the scope of the original SRDmc algorithm to, e.g., immiscible two-phase flow with viscosity contrast, we implement an angular momentum conserving scheme (SRD+mc). We perform extensive benchmark simulations to show that a mono-phase SRDmc fluid exhibits bulk properties identical to a standard SRD fluid and that SRDmc fluids are applicable to a wide range of immiscible two-phase flows. To quantify the adhesion of an SRD+mc fluid in contact with the walls, we measure the apparent contact angle from sessile droplets in mechanical equilibrium. For further verification of our wettability implementation, we compare the dewetting of a liquid film from a wetting stripe to experimental and numerical studies of interfacial morphologies on chemically structured surfaces.
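
    For readers unfamiliar with SRD, the sketch below shows the standard single-color 2D collision step that the multi-color variants build on: particles are binned into cells and their velocities are rotated about the cell-mean velocity. The cell size, rotation angle, and array layout are assumptions for the illustration; it omits the color and wall-particle machinery of SRDmc.

```python
import numpy as np

rng = np.random.default_rng(0)

def srd_collision_step(pos, vel, box, a=1.0, alpha=np.pi / 2):
    """One standard 2D SRD collision step: bin particles into cells of
    size a, then rotate each velocity relative to its cell mean by
    +alpha or -alpha, with the sign chosen at random per cell."""
    # Random grid shift restores Galilean invariance (Ihle & Kroll).
    shift = rng.uniform(0, a, size=2)
    cells = np.floor(((pos + shift) % box) / a).astype(int)
    ncy = int(box[1] // a)
    cell_id = cells[:, 0] * ncy + cells[:, 1]

    for cid in np.unique(cell_id):
        idx = np.where(cell_id == cid)[0]
        vmean = vel[idx].mean(axis=0)
        s = rng.choice([-1.0, 1.0])            # random rotation sense
        c, sn = np.cos(alpha), s * np.sin(alpha)
        R = np.array([[c, -sn], [sn, c]])
        vel[idx] = vmean + (vel[idx] - vmean) @ R.T
    return vel

# Toy usage: 5000 particles in a 20 x 20 periodic box
box = np.array([20.0, 20.0])
pos = rng.uniform(0, 20, size=(5000, 2))
vel = rng.normal(size=(5000, 2))
vel = srd_collision_step(pos, vel, box)
```

    Because the rotation preserves the cell-mean velocity and kinetic energy, mass, momentum, and energy are conserved locally; the angular momentum conserving SRD+ variant modifies this rotation step further.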

  16. On the hydration of subnanometric antifouling organosilane adlayers: a molecular dynamics simulation.

    PubMed

    Sheikh, Sonia; Blaszykowski, Christophe; Nolan, Robert; Thompson, Damien; Thompson, Michael

    2015-01-01

    How antifouling relates to surface hydration is a fascinating but daunting question. Herein, we use molecular dynamics (MD) computer simulations to gain further insight into the role of surface functionalities in the molecular-level structuration of water (surface kosmotropicity), within and atop subnanometric organosilane adlayers that were shown in previous experimental work to display varied antifouling behavior. Our simulations support the hypothesized intimate link between surface hydration and antifouling, in particular the importance of both internal and interfacial hydrophilicity and kosmotropicity. The antifouling mechanism is also discussed in terms of surface dehydration energy and water dynamicity (lability and mobility), notably the crucial requirement for clustered water molecules to remain tightly bound for extensive periods of time, i.e. to exhibit slow exchange dynamics. A substrate effect on surface hydration, which would also participate in endowing antifouling adlayers with hydrogel-like characteristics, is also proposed. In contrast, adlayer flexibility, if it contributes at all, is assigned a secondary role in these ultrathin structures made of short building blocks. The conclusions from this work are well in line with those previously drawn in the literature.
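
    The slow-exchange criterion above is commonly quantified with a residence-time survival probability. As a hedged illustration, the sketch below computes the continuous survival probability P(t) from a hypothetical boolean bound/unbound trajectory array; the input format and the cutoff-based definition of "bound" are assumptions, not this study's actual analysis:

```python
import numpy as np

def survival_probability(bound, max_lag):
    """Continuous-residence survival probability P(t): the probability
    that a water molecule bound to the surface at frame t0 remains bound
    through every frame up to t0 + t. A slowly decaying P(t) indicates
    the tightly bound, slowly exchanging water discussed above.
    bound: (n_frames, n_waters) boolean array, e.g. from a distance cutoff."""
    n_frames, _ = bound.shape
    p = np.zeros(max_lag + 1)
    for lag in range(max_lag + 1):
        window = bound[:n_frames - lag].copy()
        for k in range(1, lag + 1):           # require uninterrupted binding
            window &= bound[k:k + n_frames - lag]
        p[lag] = window.sum() / max(bound[:n_frames - lag].sum(), 1)
    return p

# Toy input: 500 frames, 100 waters, random binding for demonstration only
rng = np.random.default_rng(1)
bound = rng.random((500, 100)) < 0.3
print(survival_probability(bound, max_lag=5))
```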

  17. Numerical Simulation of Flow Through an Artificial Heart

    NASA Technical Reports Server (NTRS)

    Rogers, Stuart E.; Kutler, Paul; Kwak, Dochan; Kiris, Cetin

    1989-01-01

    A solution procedure was developed that solves the unsteady, incompressible Navier-Stokes equations, and was used to numerically simulate viscous incompressible flow through a model of the Pennsylvania State artificial heart. The solution algorithm is based on the artificial compressibility method, and uses flux-difference splitting to upwind the convective terms; a line-relaxation scheme is used to solve the equations. The time-accuracy of the method is obtained by iteratively solving the equations at each physical time step. The artificial heart geometry involves a piston-type action with a moving solid wall. A single H-grid is fit inside the heart chamber. The grid is continuously compressed and expanded with a constant number of grid points to accommodate the moving piston. The computational domain ends at the valve openings, where nonreflective boundary conditions based on the method of characteristics are applied. Although a number of simplifying assumptions were made regarding the geometry, the computational results agreed reasonably well with an experimental picture. The computer time requirements for this flow simulation, however, are quite extensive. Computational study of this type of geometry would benefit greatly from improvements in computer hardware speed and algorithm efficiency enhancements.
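
    The artificial compressibility method named above replaces the incompressibility constraint with a pseudo-time pressure equation, dp/dtau + beta div(u) = 0, and subiterates within each physical time step until the velocity field is divergence-free. The 1D toy sketch below illustrates only that idea; the grid, beta, explicit viscous damping, and forcing are invented for the illustration and are unrelated to the actual heart solver:

```python
import numpy as np

n, dx = 64, 1.0 / 64
beta, dtau, nu = 5.0, 1e-3, 0.05           # AC parameter, pseudo step, damping

u = np.zeros(n)                             # 1D stand-in velocity field
p = np.zeros(n)
f = np.sin(2 * np.pi * np.arange(n) * dx)   # body-force placeholder

def ddx(q):   # periodic central difference
    return (np.roll(q, -1) - np.roll(q, 1)) / (2 * dx)

def lap(q):   # periodic second difference (viscous damping)
    return (np.roll(q, -1) - 2 * q + np.roll(q, 1)) / dx ** 2

for it in range(20000):                     # pseudo-time subiterations
    u += dtau * (f - ddx(p) + nu * lap(u))  # momentum: du/dtau = f - dp/dx + ...
    p -= dtau * beta * ddx(u)               # artificial continuity equation
    if np.abs(ddx(u)).max() < 1e-6:         # converged: u is divergence-free
        break
```

    At convergence the pseudo-time derivatives vanish, so the remaining fields satisfy the steady incompressible equations; embedding this subiteration inside each physical time step is what restores time accuracy in the full method.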

  18. Design and performance evaluation of a simplified dynamic model for combined sewer overflows in pumped sewer systems

    NASA Astrophysics Data System (ADS)

    van Daal-Rombouts, Petra; Sun, Siao; Langeveld, Jeroen; Bertrand-Krajewski, Jean-Luc; Clemens, François

    2016-07-01

    Optimisation or real time control (RTC) studies in wastewater systems increasingly require rapid simulations of sewer systems in extensive catchments. To reduce simulation times, calibrated simplified models are applied, with performance generally judged by the goodness of fit of the calibration. In this research, the performance of three simplified models and a full hydrodynamic (FH) model is compared for two catchments, based on the correct determination of combined sewer overflow (CSO) event occurrences and of the total volumes discharged to the surface water. Simplified model M1 consists of a rainfall runoff outflow (RRO) model only. M2 combines the RRO model with a static reservoir model for the sewer behaviour. M3 comprises the RRO model and a dynamic reservoir model, whose characteristics were derived from FH model simulations. It was found that M2 and M3 are able to describe the sewer behaviour of the catchments, contrary to M1. The preferred model structure depends on the quality of the information (geometrical database and monitoring data) available for the design and calibration of the model. Finally, calibrated simplified models are shown to be preferable to uncalibrated FH models when performing optimisation or RTC studies.
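
    To give a feel for the simplest of these structures, here is a minimal static reservoir CSO model in the spirit of M2: the sewer is a single tank with a fixed pumped outflow, and whatever exceeds the storage volume spills as overflow. All parameter values and the inflow series are invented for the illustration, not taken from the studied catchments.

```python
import numpy as np

def static_reservoir_cso(inflow, storage_max, pump_capacity, dt=600.0):
    """Static reservoir sketch: route runoff through a single tank with a
    constant pumped outflow; excess above storage_max spills as CSO.
    inflow: runoff into the sewer [m3/s]; dt: time step [s]."""
    storage, cso = 0.0, np.zeros_like(inflow)
    for i, q_in in enumerate(inflow):
        storage += (q_in - pump_capacity) * dt
        storage = max(storage, 0.0)           # the pump can empty the tank
        if storage > storage_max:             # spill the excess volume
            cso[i] = (storage - storage_max) / dt
            storage = storage_max
    return cso

# Toy storm: triangular runoff pulse feeding a 5000 m3 system
t = np.arange(144)
inflow = np.maximum(0.0, 2.0 - np.abs(t - 36) / 12)   # m3/s, 10-min steps
overflow = static_reservoir_cso(inflow, storage_max=5e3, pump_capacity=0.5)
print("CSO event occurred:", overflow.max() > 0,
      "| spilled volume [m3]:", overflow.sum() * 600.0)
```

    Even this crude water balance reproduces the two quantities the paper scores models on, CSO event occurrence and total spilled volume, which is why a calibrated reservoir model can stand in for a full hydrodynamic model in RTC studies.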

  19. An effective absorbing layer for the boundary condition in acoustic seismic wave simulation

    NASA Astrophysics Data System (ADS)

    Yao, Gang; da Silva, Nuno V.; Wu, Di

    2018-04-01

    Efficient numerical simulation of seismic wavefields generally involves truncating the Earth model in order to keep computing time and memory requirements down. Absorbing boundary conditions are therefore applied to remove the boundary reflections caused by this truncation, thereby allowing for accurate modeling of wavefields. In this paper, we derive an effective absorbing boundary condition for both acoustic and elastic wave simulation through simplification of the damping term of the split perfectly matched layer (SPML) boundary condition. This new boundary condition is accurate, cost-effective, and easily implemented, especially for high-performance computing. Stability analysis shows that this boundary condition is effectively as stable as the normal (non-absorbing) wave equations for explicit time-stepping finite differences. We found that for full-waveform inversion (FWI), the strengths of the effective absorbing layer (a reduction of the computational and memory cost coupled with a simple implementation) significantly outweigh the limitation of incomplete absorption of outgoing waves relative to the SPML. More importantly, we demonstrate that this limitation can easily be overcome through the use of two strategies in FWI, namely variable cell size and model extension, thereby fully compensating for the imperfection of the proposed absorbing boundary condition.
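
    The general idea of damping outgoing waves in a boundary strip can be sketched in a few lines. The 1D acoustic finite-difference example below uses a generic sponge-style damping term, illustrative of the concept rather than the paper's exact SPML-derived condition; the grid, damping profile, and source are assumptions.

```python
import numpy as np

n, dx, dt, c = 400, 5.0, 5e-4, 3000.0          # grid, steps [m, s], velocity
npml = 40                                      # absorbing layer width [cells]

# Damping coefficient: zero in the interior, growing quadratically
# toward the outer edges of the truncated model.
d = np.zeros(n)
ramp = (np.arange(npml) / npml) ** 2
d[:npml] = 1500.0 * ramp[::-1]
d[-npml:] = 1500.0 * ramp

u_prev = np.zeros(n)
u = np.zeros(n)
for it in range(1500):
    lap = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx ** 2
    # Damped wave equation u_tt + 2 d u_t = c^2 u_xx, discretized with
    # central differences in time and space (explicit time stepping).
    u_next = (2 * u - (1 - d * dt) * u_prev
              + (c * dt) ** 2 * lap) / (1 + d * dt)
    if it < 100:
        u_next[n // 2] += np.sin(2 * np.pi * 25.0 * it * dt)  # 25 Hz source
    u_prev, u = u, u_next
```

    Because the damping enters only as a zeroth-order term in the update, the scheme inherits the stability of the undamped explicit wave equation, which mirrors the stability property claimed for the effective absorbing layer above.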

  20. Gsflow-py: An integrated hydrologic model development tool

    NASA Astrophysics Data System (ADS)

    Gardner, M.; Niswonger, R. G.; Morton, C.; Henson, W.; Huntington, J. L.

    2017-12-01

    Integrated hydrologic modeling encompasses a vast number of processes and specifications, variable in time and space, and development of model datasets can be arduous. Model input construction techniques have not been formalized or made easily reproducible. Creating the input files for integrated hydrologic models (IHMs) requires complex GIS processing of raster and vector datasets from various sources. Developing stream network topology that is consistent with the model-resolution digital elevation model is important for robust simulation of surface water and groundwater exchanges. Distributing meteorologic parameters over the model domain is difficult in complex terrain at the model resolution scale, but is necessary to drive realistic simulations. Historically, development of input data for IHMs has required extensive GIS and computer programming expertise, which has restricted the use of IHMs to research groups with available financial, human, and technical resources. Here we present a series of Python scripts that provide a formalized technique for the parameterization and development of integrated hydrologic model inputs for GSFLOW. With some modifications, this process could be applied to any regular grid hydrologic model. This Python toolkit automates many of the necessary and laborious processes of parameterization, including stream network development and cascade routing, land coverages, and meteorological distribution over the model domain.
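
    One of the steps named above, distributing station meteorology over the model grid, can be illustrated with a simple inverse-distance weighting scheme. This is a generic stand-in for the meteorological distribution step, not necessarily the method the toolkit itself uses; the station and cell coordinates are invented for the example.

```python
import numpy as np

def idw_distribute(station_xy, station_values, cell_xy, power=2.0):
    """Distribute a meteorological variable (e.g. daily precipitation)
    from point stations onto model grid cells by inverse-distance
    weighting. station_xy: (n_sta, 2) and cell_xy: (n_cells, 2) are
    projected coordinates; returns one value per grid cell."""
    d = np.linalg.norm(cell_xy[:, None, :] - station_xy[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-9) ** power     # guard against zero distance
    return (w * station_values).sum(axis=1) / w.sum(axis=1)

# Toy example: three stations distributed onto a four-cell domain
stations = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 8.0]])
precip = np.array([12.0, 4.0, 7.5])            # mm/day at each station
cells = np.array([[2.0, 1.0], [8.0, 1.0], [5.0, 4.0], [5.0, 7.0]])
print(idw_distribute(stations, precip, cells))
```

    In complex terrain, schemes of this kind are usually augmented with elevation adjustments (e.g. lapse rates), which is part of what makes automated, reproducible parameterization scripts valuable.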
