Cunningham, C E; Siegel, L S
1987-06-01
Groups of 30 ADD-H boys and 90 normal boys were divided into 30 mixed dyads composed of a normal and an ADD-H boy, and 30 normal dyads composed of 2 normal boys. Dyads were videotaped interacting in 15-minute free-play, 15-minute cooperative task, and 15-minute simulated classroom settings. Mixed dyads engaged in more controlling interaction than normal dyads in both free-play and simulated classroom settings. In the simulated classroom, mixed dyads completed fewer math problems and were less compliant with the commands of peers. ADD-H children spent less simulated classroom time on task and scored lower on drawing tasks than normal peers. Older dyads proved less controlling, more compliant with peer commands, more inclined to play and work independently, less active, and more likely to remain on task during the cooperative task and simulated classroom settings. Results suggest that the ADD-H child prompts a more controlling, less cooperative pattern of responses from normal peers.
Program For Parallel Discrete-Event Simulation
NASA Technical Reports Server (NTRS)
Beckman, Brian C.; Blume, Leo R.; Geiselman, John S.; Presley, Matthew T.; Wedel, John J., Jr.; Bellenot, Steven F.; Diloreto, Michael; Hontalas, Philip J.; Reiher, Peter L.; Weiland, Frederick P.
1991-01-01
User does not have to add any special logic to aid in synchronization. Time Warp Operating System (TWOS) computer program is special-purpose operating system designed to support parallel discrete-event simulation. Complete implementation of Time Warp mechanism. Supports only simulations and other computations designed for virtual time. Time Warp Simulator (TWSIM) subdirectory contains sequential simulation engine interface-compatible with TWOS. TWOS and TWSIM written in, and support simulations in, C programming language.
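The rollback idea behind Time Warp can be sketched in a few lines. The following is a toy illustration of optimistic synchronization under simplifying assumptions (a single process, additive state updates, no anti-messages and no global-virtual-time computation), not the TWOS implementation, which is written in C:

```python
import bisect

class TimeWarpProcess:
    """Toy sketch of Time Warp optimistic synchronization.

    A state snapshot is saved after every event; an event arriving
    'in the past' (a straggler) rolls the state back to the last
    snapshot before it, after which later events are re-executed in
    timestamp order. Anti-messages and GVT are omitted for brevity.
    """

    def __init__(self):
        self.state = 0                 # additive state, for illustration
        self.vt = 0.0                  # local virtual time
        self.snapshots = [(0.0, 0)]    # saved (virtual time, state) pairs
        self.queue = []                # all received events: (time, delta)

    def receive(self, t, delta):
        bisect.insort(self.queue, (t, delta))
        if t < self.vt:                # straggler: roll back
            while len(self.snapshots) > 1 and self.snapshots[-1][0] >= t:
                self.snapshots.pop()
            self.vt, self.state = self.snapshots[-1]
        # (re)execute every queued event beyond the current virtual time
        for et, ed in self.queue:
            if et > self.vt:
                self.state += ed
                self.vt = et
                self.snapshots.append((et, self.state))

p = TimeWarpProcess()
p.receive(1.0, 5)
p.receive(3.0, 2)
p.receive(2.0, 1)    # straggler: triggers rollback and re-execution
```

After the straggler is handled, the final state equals what strict timestamp-order execution would have produced, which is the correctness property Time Warp preserves without requiring user-supplied synchronization logic.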
Management of Wood Products Manufacturing Using Simulation/Animation
D. Earl Kline; J.K. Wiedenbeck; Philip A. Araman
1992-01-01
Managers of hardwood processing facilities need timely information on which to base important decisions such as when to add costly equipment or how to improve profitability subject to time-varying demands. The overall purpose of this paper is to introduce a method that can effectively provide such timely information. A simulation/animation modeling procedure is...
Design and Evaluation of Wood Processing Facilities Using Object-Oriented Simulation
D. Earl Kline; Philip A. Araman
1992-01-01
Managers of hardwood processing facilities need timely information on which to base important decisions such as when to add costly equipment or how to improve profitability subject to time-varying demands. The overall purpose of this paper is to introduce a tool that can effectively provide such timely information. A simulation/animation modeling procedure is described...
AN/SLQ-32 EW System Model: An Expandable, Object-Oriented, Process-Based Simulation
1992-09-01
CONST
  threshold = 0.1;
  timetol = 0.01;
  orientol = 5.8;
VAR
  rec, recLast : BufferBeamRecType;
  time, power : REAL;
  powerl, orientation : REAL;
BEGIN
  NEW...PulseGroup);
  rec := ASK BufferBeam Remove();
  time := rec.time;
  orientation := rec.orientation;
  OUTPUT("ORIENREF", orientation);
  recLast := ASK BufferBeam Last...
  TO Add(rec);
  IF (rec = recLast)
    EXIT;
  END IF;
  rec := ASK BufferBeam Remove();
ELSE
  ASK BufferBeam TO Add(rec);
  IF (rec = recLast)
    EXIT;
  END IF;
  rec
Simulation modeling for the health care manager.
Kennedy, Michael H
2009-01-01
This article addresses the use of simulation software to solve administrative problems faced by health care managers. Spreadsheet add-ins, process simulation software, and discrete event simulation software are available at a range of costs and complexity. All use the Monte Carlo method to realistically integrate probability distributions into models of the health care environment. Problems typically addressed by health care simulation modeling are facility planning, resource allocation, staffing, patient flow and wait time, routing and transportation, supply chain management, and process improvement.
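As a minimal sketch of the Monte Carlo approach these tools share, the model below draws interarrival and service times from exponential distributions to estimate average patient wait at a single service point. The distributions, parameters, and function name are illustrative assumptions, not taken from the article:

```python
import random

def simulate_clinic(n_patients, mean_interarrival, mean_service, seed=0):
    """Monte Carlo model of a single-server clinic queue.

    Interarrival and service times are drawn from exponential
    distributions (an assumption for illustration); returns the
    average patient wait in the same time units as the inputs.
    """
    rng = random.Random(seed)
    clock = 0.0          # arrival time of the current patient
    server_free = 0.0    # time at which the server next becomes idle
    total_wait = 0.0
    for _ in range(n_patients):
        clock += rng.expovariate(1.0 / mean_interarrival)
        start = max(clock, server_free)       # wait if server is busy
        total_wait += start - clock
        server_free = start + rng.expovariate(1.0 / mean_service)
    return total_wait / n_patients

# 80% utilisation: waits are substantial but bounded
avg_wait = simulate_clinic(10_000, mean_interarrival=10.0, mean_service=8.0)
```

Running such a model with different staffing or scheduling parameters is exactly the kind of what-if analysis the article describes for facility planning and patient-flow questions.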
Real-time visual simulation of APT system based on RTW and Vega
NASA Astrophysics Data System (ADS)
Xiong, Shuai; Fu, Chengyu; Tang, Tao
2012-10-01
The Matlab/Simulink simulation model of an APT (acquisition, pointing and tracking) system is analyzed and established. The model's C code, which can be used for real-time simulation, is then generated by RTW (Real-Time Workshop). Practical experiments show that running the generated C code gives the same simulation result as running the Simulink model directly in the Matlab environment. MultiGen-Vega is a real-time 3D scene simulation software system. With it and OpenGL, the APT scene simulation platform is developed and used to render and display the virtual scenes of the APT system. To add necessary graphics effects to the virtual scenes in real time, GLSL (OpenGL Shading Language) shaders are used on a programmable GPU. By calling the C code, the scene simulation platform can adjust the system parameters on-line and obtain the APT system's real-time simulation data to drive the scenes. Practical application shows that this visual simulation platform has high efficiency, low cost and good simulation effect.
Investigation of the Vehicle Mobility in Fording
2016-05-29
Conference on Multibody System Dynamics, May 29 – June 2, 2016, Montréal, Canada. Arman Pazouki ... The strategy outlined has been implemented in Chrono as a dedicated add-on called Chrono::FSI [3]. Figure 1 shows a vehicle model used in a fording simulation ... rigid objects. Chrono::FSI has been used for vehicle mobility in fording operations as shown in Figure 2. The computational time per simulation time ...
Distribution Feeder Modeling for Time-Series Simulation of Voltage Management Strategies: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Giraldez Miner, Julieta I; Gotseff, Peter; Nagarajan, Adarsh
This paper presents techniques to create baseline distribution models using a utility feeder from Hawai'ian Electric Company. It describes the software-to-software conversion, steady-state, and time-series validations of a utility feeder model. It also presents a methodology to add secondary low-voltage circuit models to accurately capture the voltage at the customer meter level. This enables preparing models to perform studies that simulate how customer-sited resources integrate into legacy utility distribution system operations.
Real-time failure control (SAFD)
NASA Technical Reports Server (NTRS)
Panossian, Hagop V.; Kemp, Victoria R.; Eckerling, Sherry J.
1990-01-01
The Real Time Failure Control program involves development of a failure detection algorithm, referred to as the System for Failure and Anomaly Detection (SAFD), for the Space Shuttle Main Engine (SSME). This failure detection approach is signal-based: it entails monitoring SSME measurement signals against predetermined and computed mean values and standard deviations. Twenty-four engine measurements are included in the algorithm, and provisions are made to add more parameters if needed. Six major sections of research are presented: (1) SAFD algorithm development; (2) SAFD simulations; (3) Digital Transient Model failure simulation; (4) closed-loop simulation; (5) current SAFD limitations; and (6) planned enhancements.
draco: Analysis and simulation of drift scan radio data
NASA Astrophysics Data System (ADS)
Shaw, J. Richard
2017-12-01
draco analyzes transit radio data with the m-mode formalism. It is telescope agnostic, and is used as part of the analysis and simulation pipeline for the CHIME (Canadian Hydrogen Intensity Mapping Experiment) telescope. It can simulate time stream data from maps of the sky (using the m-mode formalism) and add gain fluctuations and correctly correlated instrumental noise (i.e. Wishart distributed). Further, it can perform various cuts on the data and make maps of the sky from data using the m-mode formalism.
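The gain-fluctuation step can be pictured as multiplying a simulated (channel × time) stream by slowly varying random gains. The smoothing model below (Gaussian draws smoothed by a boxcar) is an illustrative assumption, not draco's actual gain model:

```python
import numpy as np

def add_gain_fluctuations(timestream, sigma_gain=0.01, corr_len=100, seed=0):
    """Multiply a simulated (nchan, ntime) time stream by slowly
    varying per-channel gains g(t) = 1 + dg(t).

    dg is white Gaussian noise smoothed over `corr_len` samples to
    mimic slow gain drift; both the amplitude and the correlation
    model are assumptions made for illustration.
    """
    rng = np.random.default_rng(seed)
    nchan, ntime = timestream.shape
    dg = rng.normal(0.0, sigma_gain, size=(nchan, ntime))
    kernel = np.ones(corr_len) / corr_len    # boxcar smoothing -> slow drift
    dg = np.apply_along_axis(
        lambda x: np.convolve(x, kernel, mode="same"), 1, dg)
    return timestream * (1.0 + dg)

ts = np.ones((4, 1000))                      # flat stream, 4 channels
noisy = add_gain_fluctuations(ts)
```

The smoothing sets the correlation time of the injected fluctuations; the Wishart-distributed instrumental noise mentioned in the abstract would require correlating channels as well, which this sketch does not attempt.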
Flight investigation of a four-dimensional terminal area guidance system for STOL aircraft
NASA Technical Reports Server (NTRS)
Neuman, F.; Hardy, G. H.
1981-01-01
A series of flight tests and fast-time simulations was conducted, using the augmentor wing jet STOL research aircraft and the STOLAND 4D-RNAV system, to add to the growing data base of 4D-RNAV system performance capabilities. To obtain statistically meaningful results, the limited amount of flight data was supplemented by a statistically significant amount of data obtained from fast-time simulation. The results of these tests are reported. Included are comparisons of the 4D-RNAV estimated winds with actual winds encountered in flight, as well as data on along-track navigation and guidance errors, and time-of-arrival errors at the final approach waypoint. In addition, a slight improvement to the STOLAND 4D-RNAV system is proposed and demonstrated using the fast-time simulation.
Sequence-dependent folding landscapes of adenine riboswitch aptamers.
Lin, Jong-Chin; Hyeon, Changbong; Thirumalai, D
2014-04-14
Expression of a large fraction of genes in bacteria is controlled by riboswitches, which are found in the untranslated region of mRNA. Structurally riboswitches have a conserved aptamer domain to which a metabolite binds, resulting in a conformational change in the downstream expression platform. Prediction of the functions of riboswitches requires a quantitative description of the folding landscape so that the barriers and time scales for the conformational change in the switching region in the aptamer can be estimated. Using a combination of all atom molecular dynamics (MD) and coarse-grained model simulations we studied the response of adenine (A) binding add and pbuE A-riboswitches to mechanical force. The two riboswitches contain a structurally similar three-way junction formed by three paired helices, P1, P2, and P3, but carry out different functions. Using pulling simulations, with structures generated in MD simulations, we show that after P1 rips the dominant unfolding pathway in the add A-riboswitch is the rupture of P2 followed by unraveling of P3. In the pbuE A-riboswitch, after P1 unfolds P3 ruptures ahead of P2. The order of unfolding of the helices, which is in accord with single molecule pulling experiments, is determined by the relative stabilities of the individual helices. Our results show that the stability of isolated helices determines the order of assembly and response to force in these non-coding regions. We use the simulated free energy profile for the pbuE A-riboswitch to estimate the time scale for allosteric switching, which shows that this riboswitch is under kinetic control lending additional support to the conclusion based on single molecule pulling experiments. A consequence of the stability hypothesis is that a single point mutation (U28C) in the P2 helix of the add A-riboswitch, which increases the stability of P2, would make the folding landscapes of the two riboswitches similar. 
This prediction can be tested in single molecule pulling experiments.
Add Control: plant virtualization for control solutions in WWTP.
Maiza, M; Bengoechea, A; Grau, P; De Keyser, W; Nopens, I; Brockmann, D; Steyer, J P; Claeys, F; Urchegui, G; Fernández, O; Ayesa, E
2013-01-01
This paper summarizes part of the research work carried out in the Add Control project, which proposes an extension of the wastewater treatment plant (WWTP) models and modelling architectures used in traditional WWTP simulation tools, addressing, in addition to the classical mass transformations (transport, physico-chemical phenomena, biological reactions), all the instrumentation, actuation and automation & control components (sensors, actuators, controllers), considering their real behaviour (signal delays, noise, failures and power consumption of actuators). Its ultimate objective is to allow a rapid transition from the simulation of the control strategy to its implementation at full-scale plants. Thus, this paper presents the application of the Add Control simulation platform for the design and implementation of new control strategies at the WWTP of Mekolalde.
Impact of add-on laboratory testing at an academic medical center: a five year retrospective study.
Nelson, Louis S; Davis, Scott R; Humble, Robert M; Kulhavy, Jeff; Aman, Dean R; Krasowski, Matthew D
2015-01-01
Clinical laboratories frequently receive orders to perform additional tests on existing specimens ('add-ons'). Previous studies have examined add-on ordering patterns over short periods of time. The objective of this study was to analyze add-on ordering patterns over an extended time period. We also analyzed the impact of a robotic specimen archival/retrieval system on add-on testing procedures and manual effort. In this retrospective study at an academic medical center, electronic health records were searched to obtain all add-on orders that were placed in the time period of May 2, 2009 to December 31, 2014. During the time period of the retrospective study, 880,359 add-on tests were ordered on 96,244 different patients. Add-on testing comprised 3.3 % of total test volumes. There were 443,411 unique ordering instances, leading to an average of 1.99 add-on tests per instance. Some patients had multiple episodes of add-on test orders at different points in time, leading to an average of 9.15 add-on tests per patient. The majority of add-on orders were for chemistry tests (78.8 % of total add-ons), with the next most frequent being hematology and coagulation tests (11.2 % of total add-ons). Inpatient orders accounted for 66.8 % of total add-on orders, while the emergency department and outpatient clinics had 14.8 % and 18.4 % of total add-on orders, respectively. The majority of add-ons were placed within 8 hours (87.3 %) and nearly all by 24 hours (96.8 %). Nearly 100 % of add-on orders within the emergency department were placed within 8 hours. The introduction of a robotic specimen archival/retrieval unit saved an average of 2.75 minutes of laboratory staff manual time per unique add-on order. This translates to 24.1 hours/day less manual effort in dealing with add-on orders. Our study reflects the previous literature in showing that add-on orders significantly impact the workload of the clinical laboratory.
The majority of add-on orders are clinical chemistry tests, and most add-on orders occur within 24 hours of original specimen collection. Robotic specimen archival/retrieval units can reduce manual effort in the clinical laboratory associated with add-on orders.
A third-order silicon racetrack add-drop filter with a moderate feature size
NASA Astrophysics Data System (ADS)
Wang, Ying; Zhou, Xin; Chen, Qian; Shao, Yue; Chen, Xiangning; Huang, Qingzhong; Jiang, Wei
2018-01-01
In this work, we design and fabricate a highly compact third-order racetrack add-drop filter consisting of silicon waveguides with modified widths on a silicon-on-insulator (SOI) wafer. Compared to the previous approach, which requires an exceedingly narrow coupling gap of less than 100 nm, we propose a new approach that enlarges the minimum feature size of the whole device to 300 nm, relaxing the fabrication requirements. The three-dimensional finite-difference time-domain (3D-FDTD) method is used for simulation. Experimental results show good agreement with the simulated properties. In the experiment, the filter shows a nearly box-like channel-dropping response, with a large flat 3-dB bandwidth (~3 nm), a relatively large FSR (~13.3 nm), and out-of-band rejection larger than 14 dB at the drop port, within a footprint of 0.0006 mm². The device is small and simple enough to have a wide range of applications in large-scale on-chip photonic integrated circuits.
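The reported free spectral range can be sanity-checked against the standard resonator relation FSR ≈ λ²/(n_g·L). The group index and roundtrip length below are assumed values chosen for illustration, not figures from the paper:

```python
def ring_fsr_nm(wavelength_nm, group_index, roundtrip_um):
    """Free spectral range of a ring/racetrack resonator:
    FSR = lambda^2 / (n_g * L), all lengths converted to nm."""
    roundtrip_nm = roundtrip_um * 1e3
    return wavelength_nm ** 2 / (group_index * roundtrip_nm)

# Illustrative values only: a ~42 um racetrack roundtrip in SOI with
# an assumed group index of ~4.3 gives an FSR near the ~13 nm reported.
fsr = ring_fsr_nm(1550.0, 4.3, 42.0)   # ~13.3 nm
```

The inverse relation also explains the design tension the paper navigates: a larger FSR demands a shorter roundtrip, which pushes toward the tight coupling gaps the modified-width approach avoids.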
NASA Astrophysics Data System (ADS)
Wang, Jiali; Kotamarthi, Veerabhadra R.
2014-07-01
The Weather Research and Forecasting (WRF) model is used for dynamic downscaling of 2.5-degree National Centers for Environmental Prediction-U.S. Department of Energy Reanalysis II (NCEP-R2) data for 1980-2010 at 12 km resolution over most of North America. The model's performance for surface air temperature and precipitation is evaluated by comparison with high-resolution observational data sets. The model's ability to add value is investigated by comparison with NCEP-R2 data and a 50 km regional climate simulation. The causes for major model bias are studied through additional sensitivity experiments with various model setup/integration approaches and physics representations. The WRF captures the main features of the spatial patterns and annual cycles of air temperature and precipitation over most of the contiguous United States. However, simulated air temperatures over the south central region and precipitation over the Great Plains and the Southwest have significant biases. Allowing longer spin-up time, reducing the nudging strength, or replacing the WRF Single-Moment six-class microphysics with Morrison microphysics reduces the bias over some subregions. However, replacing the Grell-Devenyi cumulus parameterization with Kain-Fritsch shows no improvement. The 12 km simulation does add value above the NCEP-R2 data and the 50 km simulation over mountainous and coastal zones.
Time reversal and charge conjugation in an embedding quantum simulator.
Zhang, Xiang; Shen, Yangchao; Zhang, Junhua; Casanova, Jorge; Lamata, Lucas; Solano, Enrique; Yung, Man-Hong; Zhang, Jing-Ning; Kim, Kihwan
2015-08-04
A quantum simulator is an important device that may soon outperform current classical computations. A basic arithmetic operation, the complex conjugate, however, is considered impossible to implement in such a quantum system because of the linear character of quantum mechanics. Here, we present the experimental quantum simulation of such an unphysical operation beyond the regime of unitary and dissipative evolutions through the embedding of a quantum dynamics in the electronic multilevels of a (171)Yb(+) ion. We perform time reversal and charge conjugation, which are paradigmatic examples of antiunitary symmetry operators, in the evolution of a Majorana equation without the tomographic knowledge of the evolving state. Thus, these operations can be applied regardless of the system size. Our approach offers the possibility to add unphysical operations to the toolbox of quantum simulation, and provides a route to efficiently compute otherwise intractable quantities, such as entanglement monotones.
Development of advanced avionics systems applicable to terminal-configured vehicles
NASA Technical Reports Server (NTRS)
Heimbold, R. L.; Lee, H. P.; Leffler, M. F.
1980-01-01
A technique to add the time constraint to the automatic descent feature of the existing L-1011 aircraft Flight Management System (FMS) was developed. Software modifications were incorporated in the FMS computer program and the results checked by lab simulation and on a series of eleven test flights. An arrival time dispersion (2 sigma) of 19 seconds was achieved. The 4 D descent technique can be integrated with the time-based metering method of air traffic control. Substantial reductions in delays at today's busy airports should result.
Desai, Darshan B; Aldawsari, Mabkhoot Mudith S; Alharbi, Bandar Mohammed H; Sen, Sanchari; Grave de Peralta, Luis
2015-09-01
We show that various setups for optical microscopy which are commonly used in biomedical laboratories behave like efficient microscope condensers that are responsible for observed subwavelength resolution. We present a series of experiments and simulations that reveal how inclined illumination from such unexpected condensers occurs when the sample is perpendicularly illuminated by a microscope's built-in white-light source. In addition, we demonstrate an inexpensive add-on optical module that serves as an efficient and lightweight microscope condenser. Using such add-on optical module in combination with a low-numerical-aperture objective lens and Fourier plane imaging microscopy technique, we demonstrate detection of photonic crystals with a period nearly eight times smaller than the Rayleigh resolution limit.
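The Rayleigh limit referred to above follows from the classic formula 0.61λ/NA. The wavelength and numerical aperture below are assumed illustrative values, not the ones used in the experiments:

```python
def rayleigh_limit_nm(wavelength_nm, numerical_aperture):
    """Rayleigh resolution limit of an objective: 0.61 * lambda / NA."""
    return 0.61 * wavelength_nm / numerical_aperture

# e.g. green light (~550 nm) with an assumed low-NA (0.25) objective
limit = rayleigh_limit_nm(550.0, 0.25)   # 1342 nm
# A photonic crystal ~8x below this limit would have a ~170 nm period.
```

This makes the claimed result concrete: with a low-NA lens the Rayleigh limit sits above a micron, so detecting sub-200-nm periods requires the inclined illumination the unexpected condensers provide.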
NASA Astrophysics Data System (ADS)
Langlois, Serge; Fouquet, Olivier; Gouy, Yann; Riant, David
2014-08-01
On-Board Computers (OBC) increasingly use integrated systems-on-chip (SOC) that embed processors running from 50 MHz up to several hundred MHz, around which are plugged dedicated communication controllers together with other input/output channels. For ground testing and On-Board SoftWare (OBSW) validation purposes, a representative simulation of these systems, faster than real time and with cycle-true timing of execution, is not achieved with current purely software simulators. For several years, hybrid solutions have been put in place ([1], [2]), including hardware in the loop so as to add accuracy and performance to the computer software simulation. This paper presents the results of the work engaged by Thales Alenia Space (TAS-F) at the end of 2010, which led to a validated HW simulator of the UT699 by mid-2012 and which is now qualified and fully used in operational contexts.
Identification of deficiencies in seasonal rainfall simulated by CMIP5 climate models
NASA Astrophysics Data System (ADS)
Dunning, Caroline M.; Allan, Richard P.; Black, Emily
2017-11-01
An objective technique for analysing seasonality, in terms of regime, progression and timing of the wet seasons, is applied in the evaluation of CMIP5 simulations across continental Africa. Atmosphere-only and coupled integrations capture the gross observed patterns of seasonal progression and give mean onset/cessation dates within 18 days of the observational dates for 11 of the 13 regions considered. Accurate representation of seasonality over central-southern Africa and West Africa (excluding the southern coastline) adds credence for future projected changes in seasonality here. However, coupled simulations exhibit timing biases over the Horn of Africa, with the long rains 20 days late on average. Although both sets of simulations detect biannual rainfall seasonal cycles for East and Central Africa, coupled simulations fail to capture the biannual regime over the southern West African coastline. This is linked with errors in the Gulf of Guinea sea surface temperature (SST) and deficient representation of the SST/rainfall relationship.
Benoit, Julia S; Chan, Wenyaw; Doody, Rachelle S
2015-01-01
Parameter dependency within data sets in simulation studies is common, especially in models such as Continuous-Time Markov Chains (CTMC). Additionally, the literature lacks a comprehensive examination of estimation performance for the likelihood-based general multi-state CTMC. Among studies attempting to assess the estimation, none have accounted for dependency among parameter estimates. The purpose of this research is twofold: (1) to develop a multivariate approach for assessing accuracy and precision in simulation studies; and (2) to add to the literature a comprehensive examination of the estimation of a general 3-state CTMC model. Simulation studies are conducted to analyze longitudinal data with a trinomial outcome using a CTMC with and without covariates. Measures of performance including bias, component-wise coverage probabilities, and joint coverage probabilities are calculated. An application is presented using Alzheimer's disease caregiver stress levels. Comparisons of joint and component-wise parameter estimates yield conflicting inferential results in simulations from models with and without covariates. In conclusion, caution should be taken when conducting simulation studies aiming to assess performance, and the choice of inference should properly reflect the purpose of the simulation.
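A general 3-state CTMC of the kind examined can be simulated directly from its generator matrix. The sketch below uses an arbitrary generator chosen for illustration, not the fitted caregiver-stress model:

```python
import random

def simulate_ctmc(Q, state, t_end, seed=0):
    """Gillespie-style simulation of a continuous-time Markov chain.

    Q is the generator matrix (rows sum to zero, off-diagonals are
    transition rates). Returns the list of (time, state) jump events
    up to t_end.
    """
    rng = random.Random(seed)
    t, path = 0.0, [(0.0, state)]
    while True:
        rate = -Q[state][state]          # total exit rate of current state
        if rate <= 0:
            break                        # absorbing state
        t += rng.expovariate(rate)       # holding time is exponential
        if t >= t_end:
            break
        # choose the next state with probability Q[i][j] / rate
        r, acc = rng.random() * rate, 0.0
        for j, q in enumerate(Q[state]):
            if j == state:
                continue
            acc += q
            if r < acc:
                state = j
                break
        path.append((t, state))
    return path

# Illustrative 3-state generator (rows sum to zero)
Q = [[-0.30, 0.20, 0.10],
     [ 0.10, -0.40, 0.30],
     [ 0.05, 0.05, -0.10]]
path = simulate_ctmc(Q, 0, 100.0)
```

Repeating such simulations and refitting Q to each realized path is the basic loop a performance study like this one runs; the multivariate assessment then examines the joint distribution of the resulting estimates rather than each rate in isolation.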
NASA Astrophysics Data System (ADS)
Griffiths, Mike; Fedun, Viktor; Mumford, Stuart; Gent, Frederick
2013-06-01
The Sheffield Advanced Code (SAC) is a fully non-linear MHD code designed for simulations of linear and non-linear wave propagation in gravitationally strongly stratified magnetized plasma. It was developed primarily for the forward modelling of helioseismological processes and for the coupling processes in the solar interior, photosphere, and corona; it is built on the well-known VAC platform that allows robust simulation of the macroscopic processes in gravitationally stratified (non-)magnetized plasmas. The code has no limitations of simulation length in time imposed by complications originating from the upper boundary, nor does it require implementation of special procedures to treat the upper boundaries. SAC inherited its modular structure from VAC, thereby allowing modification to easily add new physics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Jiali; Kotamarthi, Veerabhadra R.
The Weather Research and Forecasting (WRF) model is used for dynamic downscaling of 2.5-degree National Centers for Environmental Prediction-U.S. Department of Energy Reanalysis II (NCEP-R2) data for 1980-2010 at 12 km resolution over most of North America. The model's performance for surface air temperature and precipitation is evaluated by comparison with high-resolution observational data sets. The model's ability to add value is investigated by comparison with NCEP-R2 data and a 50 km regional climate simulation. The causes for major model bias are studied through additional sensitivity experiments with various model setup/integration approaches and physics representations. The WRF captures the main features of the spatial patterns and annual cycles of air temperature and precipitation over most of the contiguous United States. However, simulated air temperatures over the south central region and precipitation over the Great Plains and the Southwest have significant biases. Allowing longer spin-up time, reducing the nudging strength, or replacing the WRF Single-Moment 6-class microphysics with Morrison microphysics reduces the bias over some subregions. However, replacing the Grell-Devenyi cumulus parameterization with Kain-Fritsch shows no improvement. The 12 km simulation does add value above the NCEP-R2 data and the 50 km simulation over mountainous and coastal zones.
Spatial and spectral simulation of LANDSAT images of agricultural areas
NASA Technical Reports Server (NTRS)
Pont, W. F., Jr. (Principal Investigator)
1982-01-01
A LANDSAT scene simulation capability was developed to study the effects of small fields and misregistration on LANDSAT-based crop proportion estimation procedures. The simulation employs a pattern of ground polygons, each with a crop ID, planting date, and scale factor. Historical greenness/brightness crop development profiles generate the mean signal values for each polygon. Historical within-field covariances add texture to the pixels in each polygon. The planting dates and scale factors create between-field/within-crop variation. Between-field and between-crop variation is achieved by the above together with crop-profile differences. The LANDSAT point spread function is used to add correlation between nearby pixels. The net effect of the point spread function is to blur the image. Mixed pixels and misregistration are also simulated.
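The point-spread-function step can be illustrated by a direct 2-D convolution. The kernel below is a generic smoothing PSF chosen for illustration, not the actual LANDSAT PSF:

```python
import numpy as np

def apply_psf(image, psf):
    """Blur an image by convolving with a point spread function,
    adding spatial correlation between nearby pixels (a simple
    stand-in for the sensor PSF described in the abstract)."""
    ih, iw = image.shape
    kh, kw = psf.shape
    psf = psf / psf.sum()                          # conserve total signal
    padded = np.pad(image, ((kh // 2, kh // 2), (kw // 2, kw // 2)),
                    mode="edge")
    out = np.zeros_like(image, dtype=float)
    for dy in range(kh):                           # direct convolution
        for dx in range(kw):
            out += psf[dy, dx] * padded[dy:dy + ih, dx:dx + iw]
    return out

# A sharp field boundary becomes a mixed-pixel transition after blurring
field = np.zeros((8, 8))
field[:, 4:] = 100.0
psf = np.array([[1., 2., 1.], [2., 4., 2.], [1., 2., 1.]])
blurred = apply_psf(field, psf)
```

The blurred boundary column takes intermediate values, which is exactly the mixed-pixel effect the simulation was built to study.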
Analysis of Time Filters in Multistep Methods
NASA Astrophysics Data System (ADS)
Hurl, Nicholas
Geophysical flow simulations have evolved sophisticated implicit-explicit time stepping methods (based on fast-slow wave splittings) followed by time filters to control any unstable modes that result. Time filters are modular and parallel. Their effect on the stability of the overall process has been tested in numerous simulations, but never analyzed. Stability is proven herein, by energy methods, for the Crank-Nicolson Leapfrog (CNLF) method with the Robert-Asselin (RA) time filter and for the Crank-Nicolson Leapfrog method with the Robert-Asselin-Williams (RAW) time filter for systems. We derive an equivalent multistep method for CNLF+RA and CNLF+RAW, and stability regions are obtained. The time step restriction for energy stability of CNLF+RA is smaller than for CNLF, and the CNLF+RAW time step restriction is smaller still. Numerical tests find that RA and RAW add numerical dissipation. This thesis also shows that all modes of the Crank-Nicolson Leapfrog (CNLF) method are asymptotically stable under the standard timestep condition.
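The Robert-Asselin filter itself is a one-line correction to the middle time level. The sketch below applies it to leapfrog on the oscillation test equation u' = iωu (my choice of test problem, not one from the thesis); the exact solution has |u| = 1, so the slow decay of the filtered amplitude exhibits the numerical dissipation the tests report:

```python
import cmath

def leapfrog_ra(omega=1.0, dt=0.05, nsteps=2000, nu=0.01):
    """Leapfrog integration of u' = i*omega*u with the Robert-Asselin
    time filter applied to the middle time level:
        u_n <- u_n + nu * (u_{n+1} - 2*u_n + u_{n-1})
    The filter damps the leapfrog computational mode, at the cost of
    slight damping of the physical mode. Returns |u| at the end.
    """
    u_prev = 1.0 + 0.0j
    u_curr = cmath.exp(1j * omega * dt)                # exact second level
    for _ in range(nsteps):
        u_next = u_prev + 2.0 * dt * 1j * omega * u_curr   # leapfrog step
        u_curr += nu * (u_next - 2.0 * u_curr + u_prev)    # RA filter
        u_prev, u_curr = u_curr, u_next
    return abs(u_curr)

unfiltered = leapfrog_ra(nu=0.0)    # neutrally stable: stays near 1
filtered = leapfrog_ra(nu=0.01)     # slightly damped by the filter
```

The RAW variant adds a second correction to the newest level so that less of this damping falls on the physical mode, which is consistent with the thesis finding that its energy-stable time step restriction differs from plain RA.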
Management of queues in out-patient departments: the use of computer simulation.
Aharonson-Daniel, L; Paul, R J; Hedley, A J
1996-01-01
Notes that patients attending public outpatient departments in Hong Kong spend a long time waiting for a short consultation, that clinics are congested and that both staff and patients are dissatisfied. Points out that experimentation of management changes in a busy clinical environment can be both expensive and difficult. Demonstrates computerized simulation modelling as a potential tool for clarifying processes occurring within such systems, improving clinic operation by suggesting possible answers to problems identified and evaluating the solutions, without interfering with the clinic routine. Adds that solutions can be implemented after they had proved to be successful on the model. Demonstrates some ways in which managers in health care facilities can benefit from the use of computerized simulation modelling. Specifically, shows the effect of changing the duration of consultation and the effect of the application of an appointment system on patients' waiting time.
Czaplicki, Jerzy; Cornélissen, Germaine; Halberg, Franz
2009-01-01
Transyears in biology have been documented thus far by the extended cosinor approach, including linear-nonlinear rhythmometry. We here confirm the existence of transyears by simulated annealing, a method originally developed for a much broader use, but described and introduced herein for validating its application to time series. The method is illustrated both on an artificial test case with known components and on biological data. We provide a table comparing results by the two methods and trust that the procedure will serve the budding sciences of chronobiology (the study of mechanisms underlying biological time structure), chronomics (the mapping of time structures in and around us), and chronobioethics, using the foregoing disciplines to add to concern for illnesses of individuals, and to budding focus on diseases of nations and civilizations.
An Extension to the Multilevel Logic Simulator for Microcomputers.
1987-06-01
gates ... 3. Add or delete inputs ... 4. Add or delete outputs ... the gates affected by the deletion. The only modification that will be done in the circuit is the insertion (deletion) of ... recompilation of the circuit. TABLE 25: THE DELINP CASE FOR THE ALU CIRCUIT
Hawthorne, Kamila; Denney, Mei Ling; Bewick, Mike; Wakeford, Richard
2006-01-01
WHAT IS ALREADY KNOWN IN THIS AREA • The Simulated Surgery module of the MRCGP examination has been shown to be a valid and reliable assessment of clinical consulting skills. WHAT THIS WORK ADDS • This paper describes the further development of the methodology of the Simulated Surgery, showing the type of data analysis currently used to assure its quality and reliability. The measures taken to tighten up case quality are discussed. SUGGESTIONS FOR FUTURE RESEARCH • The future development of clinical skills assessments in general practice is discussed. More work is needed on the effectiveness and reliability of lay assessors in complex integrated clinical cases. New methods to test areas that are difficult to reproduce in a simulated environment (such as acute emergencies and cases with the very young or very old) are also needed.
A Pipeline for Large Data Processing Using Regular Sampling for Unstructured Grids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berres, Anne Sabine; Adhinarayanan, Vignesh; Turton, Terece
2017-05-12
Large simulation data requires a lot of time and computational resources to compute, store, analyze, visualize, and run user studies on. Today, the largest cost of a supercomputer is not hardware but maintenance, in particular energy consumption. Our goal is to balance energy consumption and the cognitive value of visualizations of the resulting data. This requires us to go through the entire processing pipeline, from simulation to user studies. To reduce the amount of resources, data can be sampled or compressed. While this adds more computation time, the computational overhead is negligible compared to the simulation time. We built a processing pipeline using the example of regular sampling. The reasons for this choice are two-fold: a simple example reduces unnecessary complexity, as we know what to expect from the results; furthermore, it provides a good baseline for future, more elaborate sampling methods. We measured time and energy for each test we ran, and we conducted user studies on Amazon Mechanical Turk (AMT) for a range of different results produced through sampling.
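As a toy illustration of the storage-versus-fidelity trade-off (not the paper's pipeline; the grid, field, and stride below are invented), regular sampling with a nearest-neighbour reconstruction can be sketched as:

```python
import numpy as np

def regular_sample(field, stride):
    """Keep every stride-th grid point along each axis."""
    return field[::stride, ::stride]

def nearest_reconstruct(sampled, stride, shape):
    """Nearest-neighbour blow-up back to the original resolution."""
    up = np.repeat(np.repeat(sampled, stride, axis=0), stride, axis=1)
    return up[:shape[0], :shape[1]]

# smooth synthetic "simulation result" on a 256x256 grid
x = np.linspace(0.0, 1.0, 256)
field = np.sin(4 * np.pi * x)[:, None] * np.cos(2 * np.pi * x)[None, :]

small = regular_sample(field, 4)            # 16x less data to store
approx = nearest_reconstruct(small, 4, field.shape)
err = np.abs(field - approx).max()
print(small.shape, round(float(err), 3))    # (64, 64) and a bounded error
```

For a smooth field the maximum error is bounded by the gradient times the sample spacing, which is the kind of quantitative baseline a simple sampler provides for comparing more elaborate methods.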
Simulation Training for the Office-Based Anesthesia Team.
Ritt, Richard M; Bennett, Jeffrey D; Todd, David W
2017-05-01
An OMS office is a complex environment. Within such an environment, a diverse scope of complex surgical procedures is performed with different levels of anesthesia, ranging from local anesthesia to general anesthesia, on patients with varying comorbidities. Optimal patient outcomes require a functional surgical and anesthetic team, who are familiar with both standard operational principles and emergency recognition and management. Offices with high volume and time pressure add further stress and potential risk to the office environment. Creating and maintaining a functional surgical and anesthetic team that is competent with a culture of patient safety and risk reduction is a significant challenge that requires time, commitment, planning, and dedication. This article focuses on the role of simulation training in office training and preparation. Copyright © 2017 Elsevier Inc. All rights reserved.
Cunningham, C E; Siegel, L S; Offord, D R
1985-11-01
Mixed dyads of 42 normal and 42 ADD boys were videotaped in free play, co-operative task, and simulated classrooms. ADD boys received placebo, 0.15 mg/kg, and 0.50 mg/kg of methylphenidate. ADD boys were more active and off task, watched peers less, and scored lower on mathematics and visual-motor tasks. Older boys interacted less, ignored peer interactions and play more frequently, were less controlling, and more compliant. In class, methylphenidate improved visual motor scores, and reduced the controlling behaviour, activity level, and off task behaviour of ADD boys. Normal peers displayed reciprocal reductions in controlling behaviour, activity level, and off task behaviour.
Design of a real-time wind turbine simulator using a custom parallel architecture
NASA Technical Reports Server (NTRS)
Hoffman, John A.; Gluck, R.; Sridhar, S.
1995-01-01
The design of a new parallel-processing digital simulator is described. The new simulator has been developed specifically for analysis of wind energy systems in real time. The new processor has been named the Wind Energy System Time-domain simulator, version 3 (WEST-3). Like previous WEST versions, WEST-3 performs many computations in parallel. The modules in WEST-3 are pure digital processors, however. These digital processors can be programmed individually and operated in concert to achieve real-time simulation of wind turbine systems. Because of this programmability, WEST-3 is much more flexible and general than its two predecessors. The design features of WEST-3 are described to show how the system produces high-speed solutions of nonlinear time-domain equations. WEST-3 has two very fast Computational Units (CU's) that use minicomputer technology plus special architectural features that make them many times faster than a microcomputer. These CU's are needed to perform the complex computations associated with the wind turbine rotor system in real time. The parallel architecture of the CU allows several tasks to be done in each cycle, including an I/O operation and a combined multiply, add, and store. The WEST-3 simulator can be expanded at any time for additional computational power. This is possible because the CU's are interfaced to each other and to other portions of the simulator using special serial buses. These buses can be 'patched' together in essentially any configuration (in a manner very similar to the programming methods used in analog computation) to balance the input/output requirements. CU's can be added in any number to share a given computational load. This flexible bus feature is very different from that of many other parallel processors, which usually have a throughput limit because of rigid bus architecture.
The effect of a scanning flat fold mirror on a cosmic microwave background B-mode experiment.
Grainger, William F; North, Chris E; Ade, Peter A R
2011-06-01
We investigate the possibility of using a flat-fold beam-steering mirror for a cosmic microwave background B-mode experiment. An aluminium flat-fold mirror is found to add ∼0.075% polarization, which varies in a scan-synchronous way. Time-domain simulations of a realistic scanning pattern are performed, the effect on the power spectrum is illustrated, and a possible method of correction is applied. © 2011 American Institute of Physics
Solving Problems With SINDA/FLUINT
NASA Technical Reports Server (NTRS)
2002-01-01
SINDA/FLUINT, the NASA standard software system for thermohydraulic analysis, provides computational simulation of interacting thermal and fluid effects in designs modeled as heat transfer and fluid flow networks. The product saves time and money by making the user's design process faster and easier, and allowing the user to gain a better understanding of complex systems. The code is completely extensible, allowing the user to choose the features, accuracy and approximation levels, and outputs. Users can also add their own customizations as needed to handle unique design tasks or to automate repetitive tasks. Applications for SINDA/FLUINT include the pharmaceutical, petrochemical, biomedical, electronics, and energy industries. The system has been used to simulate nuclear reactors, windshield wipers, and human windpipes. In the automotive industry, it simulates the transient liquid/vapor flows within air conditioning systems.
Onyx-Advanced Aeropropulsion Simulation Framework Created
NASA Technical Reports Server (NTRS)
Reed, John A.
2001-01-01
The Numerical Propulsion System Simulation (NPSS) project at the NASA Glenn Research Center is developing a new software environment for analyzing and designing aircraft engines and, eventually, space transportation systems. Its purpose is to dramatically reduce the time, effort, and expense necessary to design and test jet engines by creating sophisticated computer simulations of an aerospace object or system (refs. 1 and 2). Through a university grant as part of that effort, researchers at the University of Toledo have developed Onyx, an extensible Java-based (Sun Microsystems, Inc.), object-oriented simulation framework, to investigate how advanced software design techniques can be successfully applied to aeropropulsion system simulation (refs. 3 and 4). The design of Onyx's architecture enables users to customize and extend the framework to add new functionality or adapt simulation behavior as required. It exploits object-oriented technologies, such as design patterns, domain frameworks, and software components, to develop a modular system in which users can dynamically replace components with others having different functionality.
NASA Astrophysics Data System (ADS)
Li, Chunhua; Lv, Dashuai; Zhang, Lei; Yang, Feng; Wang, Cunxin; Su, Jiguo; Zhang, Yang
2016-07-01
Riboswitches are noncoding mRNA segments that can regulate gene expression by altering their structures in response to specific metabolite binding. We propose a coarse-grained Gaussian network model (GNM) to examine the unfolding and folding dynamics of the adenosine deaminase (add) A-riboswitch upon adenine dissociation, in which the RNA is modeled by a nucleotide chain with interaction networks formed by connecting adjoining atomic contacts. It was shown that adenine binding is critical to the folding of the add A-riboswitch, while removal of the ligand can result in a drastic increase of the thermodynamic fluctuations, especially in the junction regions between helix domains. Under the assumption that the native contacts with the highest thermodynamic fluctuations break first, the iterative GNM simulations showed that the unfolding process of the adenine-free add A-riboswitch starts with denaturation of the terminal helix stem, followed by the loops and junctions involving the ligand binding pocket, and then the central helix domains. Despite the simplified coarse-grained modeling, the unfolding dynamics and pathways are in close agreement with results from atomic-level MD simulations and from NMR and single-molecule force spectroscopy experiments. Overall, the study demonstrates a new avenue to investigate the binding and folding dynamics of the add A-riboswitch molecule, which can be readily extended to other RNA molecules.
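A minimal sketch of the GNM machinery such a study builds on (the coordinates, cutoff, and chain length here are synthetic stand-ins, not the riboswitch data): build the Kirchhoff (connectivity) matrix from a distance cutoff, then read mean-square fluctuations off the diagonal of its pseudo-inverse. In an iterative unfolding scheme, the contact at the most fluctuating site would be broken first and the matrix rebuilt.

```python
import numpy as np

def gnm_fluctuations(coords, cutoff=7.0):
    """GNM mean-square fluctuations: Kirchhoff matrix of the contact
    network, then the diagonal of its pseudo-inverse (zero mode dropped)."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    kirchhoff = -(d < cutoff).astype(float)      # -1 for each contact pair
    np.fill_diagonal(kirchhoff, 0.0)             # no self-contacts
    np.fill_diagonal(kirchhoff, -kirchhoff.sum(axis=1))  # degree on diagonal
    return np.diag(np.linalg.pinv(kirchhoff))

# toy 20-site chain standing in for a coarse-grained RNA (synthetic coords)
rng = np.random.default_rng(0)
coords = np.cumsum(rng.normal(0.0, 3.0, size=(20, 3)), axis=0)
msf = gnm_fluctuations(coords)
print(int(np.argmax(msf)))   # site predicted to fluctuate most (break first)
```

The pseudo-inverse discards the zero eigenmode of the graph Laplacian, which corresponds to rigid-body translation of the whole network.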
Compact, self-contained enhanced-vision system (EVS) sensor simulator
NASA Astrophysics Data System (ADS)
Tiana, Carlo
2007-04-01
We describe the model SIM-100 PC-based simulator, for imaging sensors used, or planned for use, in Enhanced Vision System (EVS) applications. Typically housed in a small-form-factor PC, it can be easily integrated into existing out-the-window visual simulators for fixed-wing or rotorcraft, to add realistic sensor imagery to the simulator cockpit. Multiple bands of infrared (short-wave, midwave, extended-midwave and longwave) as well as active millimeter-wave RADAR systems can all be simulated in real time. Various aspects of physical and electronic image formation and processing in the sensor are accurately (and optionally) simulated, including sensor random and fixed pattern noise, dead pixels, blooming, B-C scope transformation (MMWR). The effects of various obscurants (fog, rain, etc.) on the sensor imagery are faithfully represented and can be selected by an operator remotely and in real-time. The images generated by the system are ideally suited for many applications, ranging from sensor development engineering tradeoffs (Field Of View, resolution, etc.), to pilot familiarization and operational training, and certification support. The realistic appearance of the simulated images goes well beyond that of currently deployed systems, and beyond that required by certification authorities; this level of realism will become necessary as operational experience with EVS systems grows.
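The noise effects mentioned above (fixed-pattern noise, random noise, dead pixels) can be mimicked in a few lines; the parameter values below are arbitrary illustrations, not the SIM-100's sensor models.

```python
import numpy as np

def simulate_sensor(scene, fpn_sigma=0.02, read_sigma=0.01,
                    dead_frac=0.001, seed=0):
    """Apply per-pixel gain fixed-pattern noise, per-frame read noise,
    and stuck-at-zero dead pixels to an ideal scene in [0, 1]."""
    rng = np.random.default_rng(seed)
    gain = 1.0 + rng.normal(0.0, fpn_sigma, scene.shape)  # static gain map
    frame = scene * gain + rng.normal(0.0, read_sigma, scene.shape)
    dead = rng.random(scene.shape) < dead_frac            # dead pixel mask
    frame[dead] = 0.0
    return np.clip(frame, 0.0, 1.0)

scene = np.full((128, 128), 0.5)        # flat mid-grey test scene
frame = simulate_sensor(scene)
print(frame.shape, round(float(frame.mean()), 2))
```

Because the gain map is fixed across frames while the read noise is redrawn each frame, averaging frames suppresses only the latter, which is the distinction a training simulator needs to render convincingly.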
On the interpretation of kernels - Computer simulation of responses to impulse pairs
NASA Technical Reports Server (NTRS)
Hung, G.; Stark, L.; Eykhoff, P.
1983-01-01
A method is presented for the use of a unit impulse response and responses to impulse pairs of variable separation in the calculation of the second-degree kernels of a quadratic system. A quadratic system may be built from simple linear terms of known dynamics and a multiplier. Computer simulation results on quadratic systems with building elements of various time constants indicate reasonably that the larger time constant term before multiplication dominates in the envelope of the off-diagonal kernel curves as these move perpendicular to and away from the main diagonal. The smaller time constant term before multiplication combines with the effect of the time constant after multiplication to dominate in the kernel curves in the direction of the second-degree impulse response, i.e., parallel to the main diagonal. Such types of insight may be helpful in recognizing essential aspects of (second-degree) kernels; they may be used in simplifying the model structure and, perhaps, add to the physical/physiological understanding of the underlying processes.
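The impulse-pair idea can be sketched for a discrete-time quadratic system built, as described above, from linear branches and a multiplier (the two hypothetical first-order branches and their constants are ours, for illustration): subtracting the two single-impulse responses from the pair response isolates the second-degree interaction term.

```python
import numpy as np

def lowpass(u, a):
    """First-order discrete low-pass: y[n] = a*y[n-1] + (1-a)*u[n]."""
    y = np.zeros_like(u, dtype=float)
    acc = 0.0
    for n, x in enumerate(u):
        acc = a * acc + (1 - a) * x
        y[n] = acc
    return y

def quadratic_system(u):
    """Two linear branches with different time constants, then a multiplier."""
    return lowpass(u, 0.9) * lowpass(u, 0.5)

def impulse(n, k):
    u = np.zeros(n)
    u[k] = 1.0
    return u

# response to the impulse pair minus the two single-impulse responses
# isolates the second-degree (off-diagonal kernel) interaction
n, sep = 100, 5
y_pair = quadratic_system(impulse(n, 0) + impulse(n, sep))
y_a = quadratic_system(impulse(n, 0))
y_b = quadratic_system(impulse(n, sep))
cross = y_pair - y_a - y_b     # zero until the second impulse arrives
print(round(float(cross.max()), 4))
```

For a purely linear system this difference would be identically zero, so any nonzero residue is direct evidence of the second-degree term, which is the principle behind probing kernels with impulse pairs of variable separation.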
Vu, Lien T; Chen, Chao-Chang A; Lee, Chia-Cheng; Yu, Chia-Wei
2018-04-20
This study aims to develop a compensating method to minimize the shrinkage error of the shell mold (SM) in the injection molding (IM) process to obtain uniform optical power in the central optical zone of soft axial symmetric multifocal contact lenses (CL). The Z-shrinkage error along the Z axis or axial axis of the anterior SM corresponding to the anterior surface of a dry contact lens in the IM process can be minimized by optimizing IM process parameters and then by compensating for additional (Add) powers in the central zone of the original lens design. First, the shrinkage error is minimized by optimizing three levels of four IM parameters, including mold temperature, injection velocity, packing pressure, and cooling time in 18 IM simulations based on an orthogonal array L 18 (2 1 ×3 4 ). Then, based on the Z-shrinkage error from IM simulation, three new contact lens designs are obtained by increasing the Add power in the central zone of the original multifocal CL design to compensate for the optical power errors. Results obtained from IM process simulations and the optical simulations show that the new CL design with 0.1 D increasing in Add power has the closest shrinkage profile to the original anterior SM profile with percentage of reduction in absolute Z-shrinkage error of 55% and more uniform power in the central zone than in the other two cases. Moreover, actual experiments of IM of SM for casting soft multifocal CLs have been performed. The final product of wet CLs has been completed for the original design and the new design. Results of the optical performance have verified the improvement of the compensated design of CLs. The feasibility of this compensating method has been proven based on the measurement results of the produced soft multifocal CLs of the new design. Results of this study can be further applied to predict or compensate for the total optical power errors of the soft multifocal CLs.
Simulation-based bronchoscopy training: systematic review and meta-analysis.
Kennedy, Cassie C; Maldonado, Fabien; Cook, David A
2013-07-01
Simulation-based bronchoscopy training is increasingly used, but effectiveness remains uncertain. We sought to perform a comprehensive synthesis of published work on simulation-based bronchoscopy training. We searched MEDLINE, EMBASE, CINAHL, PsycINFO, ERIC, Web of Science, and Scopus for eligible articles through May 11, 2011. We included all original studies involving health professionals that evaluated, in comparison with no intervention or an alternative instructional approach, simulation-based training for flexible or rigid bronchoscopy. Study selection and data abstraction were performed independently and in duplicate. We pooled results using random effects meta-analysis. From an initial pool of 10,903 articles, we identified 17 studies evaluating simulation-based bronchoscopy training. In comparison with no intervention, simulation training was associated with large benefits on skills and behaviors (pooled effect size, 1.21 [95% CI, 0.82-1.60]; n=8 studies) and moderate benefits on time (0.62 [95% CI, 0.12-1.13]; n=7). In comparison with clinical instruction, behaviors with real patients showed nonsignificant effects favoring simulation for time (0.61 [95% CI, -1.47 to 2.69]) and process (0.33 [95% CI, -1.46 to 2.11]) outcomes (n=2 studies each), although variation in training time might account for these differences. Four studies compared alternate simulation-based training approaches. Inductive analysis to inform instructional design suggested that longer or more structured training is more effective, authentic clinical context adds value, and animal models and plastic part-task models may be superior to more costly virtual-reality simulators. Simulation-based bronchoscopy training is effective in comparison with no intervention. Comparative effectiveness studies are few.
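The pooling step behind effect sizes such as 1.21 (95% CI, 0.82-1.60) can be illustrated with a standard DerSimonian-Laird random-effects calculation. The four study effects and variances below are made up for the sketch, not the review's data.

```python
import math

def dersimonian_laird(effects, variances):
    """Random-effects pooled effect (DerSimonian-Laird tau^2) with 95% CI."""
    w = [1 / v for v in variances]
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                 # between-study variance
    w_re = [1 / (v + tau2) for v in variances]    # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# hypothetical standardized mean differences from four skills studies
effects = [1.8, 0.6, 1.6, 0.9]
variances = [0.10, 0.08, 0.20, 0.12]
pooled, ci = dersimonian_laird(effects, variances)
print(round(pooled, 2), tuple(round(x, 2) for x in ci))
```

When the heterogeneity statistic Q exceeds its degrees of freedom, tau^2 becomes positive and the confidence interval widens relative to a fixed-effect pooling, reflecting genuine between-study variation.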
Adding Badging to a Marketing Simulation to Increase Motivation to Learn
ERIC Educational Resources Information Center
Saxton, M. Kim
2015-01-01
Badging has become a popular tool for obtaining social recognition for personal accomplishments. This innovation describes a way to add badging to a marketing simulation to increase student motivation to achieve the simulation's goals. Assessments indicate that badging both motivates students to perform better and helps explain students' perceived…
Mapping land cover through time with the Rapid Land Cover Mapper—Documentation and user manual
Cotillon, Suzanne E.; Mathis, Melissa L.
2017-02-15
The Rapid Land Cover Mapper is an Esri ArcGIS® Desktop add-in, which was created as an alternative to automated or semiautomated mapping methods. Based on a manual photo interpretation technique, the tool facilitates mapping over large areas and through time, and produces time-series raster maps and associated statistics that characterize the changing landscapes. The Rapid Land Cover Mapper add-in can be used with any imagery source to map various themes (for instance, land cover, soils, or forest) at any chosen mapping resolution. The user manual contains all essential information for the user to make full use of the Rapid Land Cover Mapper add-in. This manual includes a description of the add-in functions and capabilities, and step-by-step procedures for using the add-in. The Rapid Land Cover Mapper add-in was successfully used by the U.S. Geological Survey West Africa Land Use Dynamics team to accurately map land use and land cover in 17 West African countries through time (1975, 2000, and 2013).
Eliminating time dispersion from seismic wave modeling
NASA Astrophysics Data System (ADS)
Koene, Erik F. M.; Robertsson, Johan O. A.; Broggini, Filippo; Andersson, Fredrik
2018-04-01
We derive an expression for the error introduced by the second-order accurate temporal finite-difference (FD) operator, as present in the FD, pseudospectral and spectral element methods for seismic wave modeling applied to time-invariant media. The `time-dispersion' error speeds up the signal as a function of frequency and time step only. Time dispersion is thus independent of the propagation path, medium or spatial modeling error. We derive two transforms to either add or remove time dispersion from synthetic seismograms after a simulation. The transforms are compared to previous related work and demonstrated on wave modeling in acoustic as well as elastic media. In addition, an application to imaging is shown. The transforms enable accurate computation of synthetic seismograms at reduced cost, benefitting modeling applications in both exploration and global seismology.
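The claim that time dispersion depends on frequency and time step only can be checked on the simplest possible case, a harmonic oscillator advanced with the second-order temporal FD operator. This toy check is ours, not the authors' transform; for this scheme the numerical frequency follows Omega = (2/dt) * arcsin(omega*dt/2), which exceeds omega, i.e. the signal is sped up.

```python
import numpy as np

def leapfrog_oscillator(omega, dt, steps):
    """Second-order temporal FD integration of u'' = -omega^2 u, u(0) = 1."""
    u = np.empty(steps)
    u[0] = 1.0
    u[1] = 1.0 - 0.5 * (omega * dt) ** 2        # exact FD start-up value
    for n in range(1, steps - 1):
        u[n + 1] = 2 * u[n] - u[n - 1] - (omega * dt) ** 2 * u[n]
    return u

omega, dt, steps = 2 * np.pi, 0.1, 8000         # 1 Hz signal, 800 s
u = leapfrog_oscillator(omega, dt, steps)

# numerical frequency from upward zero crossings vs. the dispersion relation
crossings = int(np.sum((u[:-1] < 0) & (u[1:] >= 0)))
f_meas = crossings / (steps * dt)
f_pred = (2 / dt) * np.arcsin(omega * dt / 2) / (2 * np.pi)
print(round(f_meas, 4), round(f_pred, 4))       # both near 1.018, not 1.000
```

Since the error law involves only omega and dt, a post-hoc frequency-domain phase correction can remove it from any seismogram computed with the same time step, which is the basis of the transforms derived in the paper.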
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Meng, E-mail: mengwu@stanford.edu; Fahrig, Rebecca
2014-11-01
Purpose: The scanning beam digital x-ray system (SBDX) is an inverse-geometry fluoroscopic system with high dose efficiency and the ability to perform continuous real-time tomosynthesis in multiple planes. This system could be used for image guidance during lung nodule biopsy. However, the reconstructed images suffer from strong out-of-plane artifact due to the small tomographic angle of the system. Methods: The authors propose an out-of-plane artifact subtraction tomosynthesis (OPAST) algorithm that utilizes a prior CT volume to augment the run-time image processing. A blur-and-add (BAA) analytical model, derived from the project-to-backproject physical model, permits the generation of tomosynthesis images that are a good approximation to the shift-and-add (SAA) reconstructed image. A computationally practical algorithm is proposed to simulate images and out-of-plane artifacts from patient-specific prior CT volumes using the BAA model. A 3D image registration algorithm to align the simulated and reconstructed images is described. The accuracy of the BAA analytical model and the OPAST algorithm was evaluated using three lung cancer patients' CT data. The OPAST and image registration algorithms were also tested with added nonrigid respiratory motions. Results: Image similarity measurements, including the correlation coefficient, mean squared error, and structural similarity index, indicated that the BAA model is very accurate in simulating the SAA images from the prior CT for the SBDX system. The shift-variant effect of the BAA model can be ignored when the shifts between SBDX images and CT volumes are within ±10 mm in the x and y directions. The nodule visibility and depth resolution are improved by subtracting simulated artifacts from the reconstructions. The image registration and OPAST are robust in the presence of added respiratory motions.
The dominant artifacts in the subtraction images are caused by the mismatches between the real object and the prior CT volume. Conclusions: The proposed prior CT-augmented OPAST reconstruction algorithm improves lung nodule visibility and depth resolution for the SBDX system.
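The shift-and-add (SAA) reconstruction that the BAA model approximates can be sketched in one dimension; the seven-view geometry and per-view pixel parallaxes below are invented for illustration, not SBDX parameters.

```python
import numpy as np

def shift_and_add(projections, shifts, plane_shift):
    """SAA tomosynthesis: shift each projection to register a chosen depth
    plane, then average; objects off that plane smear into blur artifact."""
    acc = np.zeros_like(projections[0], dtype=float)
    for proj, s in zip(projections, shifts):
        acc += np.roll(proj, int(round(s * plane_shift)), axis=1)
    return acc / len(projections)

# two point objects at different depths, seen from 7 source positions;
# parallax per view is proportional to depth
n_views, width = 7, 64
views = []
for v in range(-3, 4):
    img = np.zeros((1, width))
    img[0, 32 + v * 1] += 1.0      # object at depth 1 (parallax 1 px/view)
    img[0, 16 + v * 2] += 1.0      # object at depth 2 (parallax 2 px/view)
    views.append(img)
shifts = list(range(-3, 4))

plane1 = shift_and_add(views, shifts, plane_shift=-1)  # focus on depth 1
print(round(float(plane1[0, 32]), 2))   # in-focus point adds coherently
```

The in-focus point recovers its full amplitude while the off-plane point is diluted across seven positions; the latter smear is exactly the out-of-plane artifact that OPAST predicts from the prior CT and subtracts.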
Bulalo field, Philippines: Reservoir modeling for prediction of limits to sustainable generation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strobel, Calvin J.
1993-01-28
The Bulalo geothermal field, located in Laguna province, Philippines, supplies 12% of the electricity on the island of Luzon. The first 110 MWe power plant was on line in May 1979; the current 330 MWe (gross) installed capacity was reached in 1984. Since then, the field has operated at an average plant factor of 76%. The National Power Corporation plans to add 40 MWe base load and 40 MWe standby in 1995. A numerical simulation model for the Bulalo field has been created that matches historic pressure changes, enthalpy and steam flash trends, and cumulative steam production. Gravity modeling provided independent verification of mass balances and the time rate of change of liquid desaturation in the rock matrix. Gravity modeling, in conjunction with reservoir simulation, provides a means of predicting matrix dry-out and the time to limiting conditions for sustainable levelized steam deliverability and power generation.
3D Simulations of the ``Keyhole'' Hohlraum for Shock Timing on NIF
NASA Astrophysics Data System (ADS)
Robey, H. F.; Marinak, M. M.; Munro, D. H.; Jones, O. S.
2007-11-01
Ignition implosions planned for the National Ignition Facility (NIF) require a pulse shape with a carefully designed series of steps, which launch a series of shocks through the ablator and DT fuel. The relative timing of these shocks must be tuned to better than +/- 100ps to maintain the DT fuel on a sufficiently low adiabat. To meet these requirements, pre-ignition tuning experiments using a modified hohlraum geometry are being planned. This modified geometry, known as the ``keyhole'' hohlraum, adds a re-entrant gold cone, which passes through the hohlraum and capsule walls, to provide an optical line-of-sight to directly measure the shocks as they break out of the ablator. In order to assess the surrogacy of this modified geometry, 3D simulations using HYDRA [1] have been performed. The drive conditions and the resulting effect on shock timing in the keyhole hohlraum will be compared with the corresponding results for the standard ignition hohlraum. [1] M.M. Marinak, et al., Phys. Plasmas 8, 2275 (2001).
Modeling snail breeding in a bioregenerative life support system
NASA Astrophysics Data System (ADS)
Kovalev, V. S.; Manukovsky, N. S.; Tikhomirov, A. A.; Kolmakova, A. A.
2015-07-01
The discrete-time model of snail breeding consists of two sequentially linked submodels: "Stoichiometry" and "Population". In both submodels, a snail population is split up into twelve age groups within one year of age. The first submodel is used to simulate the metabolism of a single snail in each age group via the stoichiometric equation; the second submodel is used to optimize the age structure and the size of the snail population. Daily intake of snail meat by crewmen is a guideline which specifies the population productivity. The mass exchange of the snail unit inhabited by land snails of Achatina fulica is given as an outcome of step-by-step modeling. All simulations are performed using Solver Add-In of Excel 2007.
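A back-of-envelope version of the population-sizing task can be written directly; all rates and masses below are invented placeholders, not the paper's stoichiometric values or its Solver formulation.

```python
# steady-state sizing of a twelve-age-group snail population: given a
# monthly survival rate and the meat mass of a harvested adult, find the
# group sizes that sustain a target daily meat supply (hypothetical numbers)
SURVIVAL = 0.95          # month-to-month survival in each age group
MEAT_PER_SNAIL = 10.0    # g meat per harvested 12-month-old
TARGET = 100.0           # g meat needed per day

harvest_per_day = TARGET / MEAT_PER_SNAIL        # adults harvested daily
harvest_per_month = harvest_per_day * 30

# the oldest group must hold one month of harvest, and each younger group
# must be large enough to refill the next one after mortality
groups = [harvest_per_month]
for _ in range(11):
    groups.append(groups[-1] / SURVIVAL)
groups.reverse()                                 # index 0 = youngest group

print(round(groups[0], 1), round(sum(groups), 1))
```

Working backwards from the harvest requirement through the survival chain gives both the hatch rate (youngest group) and the total standing population, the same quantities the paper's "Population" submodel optimizes.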
Helicopter time-domain electromagnetic numerical simulation based on Leapfrog ADI-FDTD
NASA Astrophysics Data System (ADS)
Guan, S.; Ji, Y.; Li, D.; Wu, Y.; Wang, A.
2017-12-01
We present a three-dimensional (3D) leapfrog Alternating Direction Implicit Finite-Difference Time-Domain (leapfrog ADI-FDTD) method for the simulation of helicopter time-domain electromagnetic (HTEM) detection. This method differs from both the traditional explicit FDTD and ADI-FDTD. Compared with explicit FDTD, the leapfrog ADI-FDTD algorithm is no longer limited by the Courant-Friedrichs-Lewy (CFL) condition, so the time step can be longer. Compared with ADI-FDTD, we reduce the equations from 12 to 6, and the leapfrog ADI-FDTD method is easier to apply in general simulations. First, we determine initial conditions, adopted from the existing method presented by Wang and Tripp (1993). Second, we derive the Maxwell equations using a new finite-difference formulation via the leapfrog ADI-FDTD method; the purpose is to eliminate the sub-time step while retaining unconditional stability. Third, we add the convolutional perfectly matched layer (CPML) absorbing boundary condition to the leapfrog ADI-FDTD simulation and study the absorbing effect of different parameters; since different parameters affect the absorbing ability, we find suitable values after many numerical experiments. Fourth, we compare the response with a 1-D numerical result for a homogeneous half-space to verify the correctness of our algorithm. When the model contains 107*107*53 grid points and the conductivity is 0.05 S/m, the results show that leapfrog ADI-FDTD needs less simulation time and computer storage space than ADI-FDTD: the calculation time decreases nearly four times, and memory occupation decreases by about 32.53%. Thus, this algorithm is more efficient than the conventional ADI-FDTD method for HTEM detection, and more precise than explicit FDTD at late times.
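The CFL limit that constrains explicit FDTD, and that the leapfrog ADI scheme escapes, can be computed directly. The 25 m grid spacing, the vacuum wave speed, and the factor-of-ten ADI step below are illustrative assumptions, not the paper's model values.

```python
import math

def cfl_max_dt(dx, dy, dz, c=3e8):
    """Largest stable time step for explicit 3D FDTD (Courant limit)."""
    return 1.0 / (c * math.sqrt(1 / dx**2 + 1 / dy**2 + 1 / dz**2))

# hypothetical HTEM grid with 25 m cells
dt_explicit = cfl_max_dt(25.0, 25.0, 25.0)
dt_adi = 10 * dt_explicit     # an ADI scheme is unconditionally stable,
                              # so its step is set by accuracy, not by CFL
print(f"{dt_explicit:.3e}", f"{dt_adi:.3e}")
```

For an explicit scheme the step must shrink with the cell size, whereas the implicit leapfrog ADI update may take steps well beyond the Courant limit, trading per-step cost for far fewer steps over a long late-time transient.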
Coupling of Peridynamics and Finite Element Formulation for Multiscale Simulations
2012-10-16
Development items described include numerical testing for different grid-width-to-horizon ratios, an approach to add another material variable within the given framework, and application of the partition of unity principle. Cited: S. A. Silling, M. Epton; unidirectional fiber-reinforced composites, Computer Methods in Applied Mechanics and Engineering 217 (2012) 247-261.
Euclid Cosmological Simulations Requirements and Implementation Plan
NASA Technical Reports Server (NTRS)
Kiessling, Alina
2012-01-01
Simulations are essential for the successful undertaking of the Euclid mission. The simulations requirements for the Euclid mission are vast! It is an enormous undertaking that includes development of software and acquisition of hardware facilities. The simulations requirements are currently being finalised; please contact me or Elisabetta Semboloni if you would like to add or modify any requirements (or if you would like to be involved in the development of the simulations).
The natural history of the anterior knee instability by stress radiography
de Rezende, Márcia Uchôa; Hernandez, Arnaldo José; Camanho, Gilberto Luis
2014-01-01
OBJECTIVE: To analyze the anteroposterior displacement of the knee by means of stress radiography in individuals with unilateral anterior knee instability, and to relate it to the duration of instability. METHODS: Sixty individuals with intact knees (control group) and 125 patients with unilateral anterior instability (AI group) agreed to participate in the study. Gender, age, weight, height, age at injury, time between injury and testing, and surgical findings were studied. Both groups underwent anterior and posterior stress radiographs of both knees. The anterior (ADD) and posterior (PDD) displacement differences were calculated between sides. RESULTS: In the control group, ADD and PDD averaged zero, whereas in the AI group ADD averaged 9.8 mm and PDD 1.92 mm. Gender, age, weight, height, age at trauma, and the presence of meniscal lesions did not influence the values of ADD and PDD. Meniscal injuries increased with time. ADD and PDD were not related to the presence or absence of associated meniscal lesions. ADD and PDD were related to each other and increased with time. CONCLUSION: There is a permanent anterior subluxation of the injured knee that is related to the amount of anterior displacement, which increases with time. Level of Evidence III, Case-control study. PMID:25246846
40 CFR 63.4500 - What are my general requirements for complying with this subpart?
Code of Federal Regulations, 2014 CFR
2014-07-01
...) for which you use the compliant material option or the emission rate without add-on controls option... § 63.4490 at all times. (2) Any coating operation(s) for which you use the emission rate with add-on... for emission capture systems and add-on control devices required by § 63.4492 at all times except...
40 CFR 63.4500 - What are my general requirements for complying with this subpart?
Code of Federal Regulations, 2013 CFR
2013-07-01
...) for which you use the compliant material option or the emission rate without add-on controls option... § 63.4490 at all times. (2) Any coating operation(s) for which you use the emission rate with add-on... for emission capture systems and add-on control devices required by § 63.4492 at all times except...
40 CFR 63.3900 - What are my general requirements for complying with this subpart?
Code of Federal Regulations, 2012 CFR
2012-07-01
... operation(s) for which you use the compliant material option or the emission rate without add-on controls... § 63.3890 at all times. (2) Any coating operation(s) for which you use the emission rate with add-on... for emission capture systems and add-on control devices required by § 63.3892 at all times except...
40 CFR 63.4500 - What are my general requirements for complying with this subpart?
Code of Federal Regulations, 2012 CFR
2012-07-01
...) for which you use the compliant material option or the emission rate without add-on controls option... § 63.4490 at all times. (2) Any coating operation(s) for which you use the emission rate with add-on... for emission capture systems and add-on control devices required by § 63.4492 at all times except...
40 CFR 63.3900 - What are my general requirements for complying with this subpart?
Code of Federal Regulations, 2013 CFR
2013-07-01
... operation(s) for which you use the compliant material option or the emission rate without add-on controls... § 63.3890 at all times. (2) Any coating operation(s) for which you use the emission rate with add-on... for emission capture systems and add-on control devices required by § 63.3892 at all times except...
40 CFR 63.3900 - What are my general requirements for complying with this subpart?
Code of Federal Regulations, 2014 CFR
2014-07-01
... operation(s) for which you use the compliant material option or the emission rate without add-on controls... § 63.3890 at all times. (2) Any coating operation(s) for which you use the emission rate with add-on... for emission capture systems and add-on control devices required by § 63.3892 at all times except...
40 CFR 63.3900 - What are my general requirements for complying with this subpart?
Code of Federal Regulations, 2011 CFR
2011-07-01
...) for which you use the compliant material option or the emission rate without add-on controls option... § 63.3890 at all times. (2) Any coating operation(s) for which you use the emission rate with add-on... for emission capture systems and add-on control devices required by § 63.3892 at all times except...
40 CFR 63.3900 - What are my general requirements for complying with this subpart?
Code of Federal Regulations, 2010 CFR
2010-07-01
...) for which you use the compliant material option or the emission rate without add-on controls option... § 63.3890 at all times. (2) Any coating operation(s) for which you use the emission rate with add-on... for emission capture systems and add-on control devices required by § 63.3892 at all times except...
Classroom as Reality: Demonstrating Campaign Effects through Live Simulation
ERIC Educational Resources Information Center
Coffey, Daniel J.; Miller, William J.; Feuerstein, Derek
2011-01-01
Scholastic research has demonstrated that when conducted properly, active learning exercises are successful at increasing student awareness, student interest, and knowledge retention. Face-to-face simulations, in particular, have been demonstrated to add positively to classrooms focusing on comparative politics, international relations, public…
Simulation-Based Bronchoscopy Training
Kennedy, Cassie C.; Maldonado, Fabien
2013-01-01
Background: Simulation-based bronchoscopy training is increasingly used, but effectiveness remains uncertain. We sought to perform a comprehensive synthesis of published work on simulation-based bronchoscopy training. Methods: We searched MEDLINE, EMBASE, CINAHL, PsycINFO, ERIC, Web of Science, and Scopus for eligible articles through May 11, 2011. We included all original studies involving health professionals that evaluated, in comparison with no intervention or an alternative instructional approach, simulation-based training for flexible or rigid bronchoscopy. Study selection and data abstraction were performed independently and in duplicate. We pooled results using random effects meta-analysis. Results: From an initial pool of 10,903 articles, we identified 17 studies evaluating simulation-based bronchoscopy training. In comparison with no intervention, simulation training was associated with large benefits on skills and behaviors (pooled effect size, 1.21 [95% CI, 0.82-1.60]; n = 8 studies) and moderate benefits on time (0.62 [95% CI, 0.12-1.13]; n = 7). In comparison with clinical instruction, behaviors with real patients showed nonsignificant effects favoring simulation for time (0.61 [95% CI, −1.47 to 2.69]) and process (0.33 [95% CI, −1.46 to 2.11]) outcomes (n = 2 studies each), although variation in training time might account for these differences. Four studies compared alternate simulation-based training approaches. Inductive analysis to inform instructional design suggested that longer or more structured training is more effective, authentic clinical context adds value, and animal models and plastic part-task models may be superior to more costly virtual-reality simulators. Conclusions: Simulation-based bronchoscopy training is effective in comparison with no intervention. Comparative effectiveness studies are few. PMID:23370487
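The pooled effect sizes quoted above come from random-effects meta-analysis. A minimal DerSimonian-Laird sketch is shown below; the per-study effects and variances are hypothetical stand-ins, not the review's data.

```python
import math

def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate via the DerSimonian-Laird method.

    effects: per-study effect sizes; variances: their within-study
    variances. Returns (pooled effect, 95% CI half-width)."""
    w = [1.0 / v for v in variances]                 # fixed-effect weights
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                    # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]   # random-effects weights
    pooled = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    half_width = 1.96 / math.sqrt(sum(w_star))
    return pooled, half_width

# Hypothetical effect sizes from eight studies (not the review's data).
effects = [1.5, 0.9, 1.3, 1.0, 1.6, 0.8, 1.4, 1.2]
variances = [0.10, 0.08, 0.12, 0.09, 0.15, 0.07, 0.11, 0.10]
pooled, hw = dersimonian_laird(effects, variances)
```

When the heterogeneity statistic Q falls below its degrees of freedom, the between-study variance estimate is clipped to zero and the result reduces to the fixed-effect pool.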
Measurement of Gamma-Irradiated Corneal Patch Graft Thickness After Aqueous Drainage Device Surgery.
de Luna, Regina A; Moledina, Ameera; Wang, Jiangxia; Jampel, Henry D
2017-09-01
Exposure of the tube of an aqueous drainage device (ADD) through the conjunctiva is a serious complication of ADD surgery. Although placement of gamma-irradiated sterile cornea (GISC) as a patch graft over the tube is commonly performed, exposures still occur. To measure GISC patch graft thickness as a function of time after surgery, estimate the rate of graft thinning, and determine risk factors for graft thinning. Cross-sectional study of graft thickness using anterior segment optic coherence tomography (AS-OCT) was conducted at the Wilmer Eye Institute at Johns Hopkins Hospital. A total of 107 patients (120 eyes, 120 ADDs) 18 years or older who underwent ADD surgery at Johns Hopkins with GISC patch graft between July 1, 2010, and October 31, 2016, were enrolled. Implantation of ADD with placement of GISC patch graft over the tube. Graft thickness vs time after ADD surgery and risk factors for undetectable graft. Of the 107 patients included in the analysis, the mean (SD) age of the cohort was 64 (16.2) years, 49 (45.8%) were male, and 43 (40.2%) were African American. The mean time of measurement after surgery was 1.7 years (range, 1 day to 6 years). Thinner grafts were observed as the time after surgery lengthened (β regression coefficient, -60 µm per year since surgery; 95% CI, -80 µm to -40 µm). The odds ratio of undetectable grafts per year after ADD surgery was 2.1 (95% CI, 1.5-3.0; P < .001). Age, sex, race, type of ADD, quadrant of ADD placement, diagnosis of uveitis or dry eye, and prior conjunctival surgery were not correlated with the presence or absence of the graft. Gamma-irradiated sterile corneal patch grafts do not always retain their integrity after ADD surgery. Data from this cross-sectional study showed that on average, the longer the time after surgery, the thinner the graft. These findings suggest that placement of a GISC patch graft is no guarantee against tube exposure, and that better strategies are needed for preventing this complication.
Dynamic Modeling of Starting Aerodynamics and Stage Matching in an Axi-Centrifugal Compressor
NASA Technical Reports Server (NTRS)
Wilkes, Kevin; OBrien, Walter F.; Owen, A. Karl
1996-01-01
A DYNamic Turbine Engine Compressor Code (DYNTECC) has been modified to model speed transients from 0-100% of compressor design speed. The impetus for this enhancement was to investigate stage matching and stalling behavior during a start sequence as compared to rotating stall events above ground idle. The model can simulate speed and throttle excursions simultaneously as well as time varying bleed flow schedules. Results of a start simulation are presented and compared to experimental data obtained from an axi-centrifugal turboshaft engine and companion compressor rig. Stage by stage comparisons reveal the front stages to be operating in or near rotating stall through most of the start sequence. The model matches the starting operating line quite well in the forward stages with deviations appearing in the rearward stages near the start bleed. Overall, the performance of the model is very promising and adds significantly to the dynamic simulation capabilities of DYNTECC.
Visualization of spatial-temporal data based on 3D virtual scene
NASA Astrophysics Data System (ADS)
Wang, Xianghong; Liu, Jiping; Wang, Yong; Bi, Junfang
2009-10-01
The main purpose of this paper is to realize three-dimensional dynamic visualization of spatial-temporal data in a three-dimensional virtual scene, combining three-dimensional visualization technology with GIS so that people's ability to cognize time and space is enhanced and improved through dynamic symbol design and interactive expression. Using particle systems, three-dimensional simulation, virtual reality, and other visual means, we can simulate situations produced by changes in the spatial location and property information of geographical entities over time, explore and analyze their movement and transformation rules interactively, and replay history or forecast the future. The main research objects in this paper are vehicle tracks and typhoon paths: through three-dimensional dynamic simulation of their tracks, we realize timely monitoring of trends and replay of historical tracks. Visualization techniques for spatial-temporal data in a three-dimensional virtual scene provide an excellent cognitive instrument for spatial-temporal information: they not only show changes and developments in the situation clearly, but can also be used for prediction and deduction of future developments and changes.
Fatigue Damage of Collagenous Tissues: Experiment, Modeling and Simulation Studies
Martin, Caitlin; Sun, Wei
2017-01-01
Mechanical fatigue damage is a critical issue for soft tissues and tissue-derived materials, particularly for musculoskeletal and cardiovascular applications; yet, our understanding of the fatigue damage process is incomplete. Soft tissue fatigue experiments are often difficult and time-consuming to perform, which has hindered progress in this area. However, the recent development of soft-tissue fatigue-damage constitutive models has enabled simulation-based fatigue analyses of tissues under various conditions. Computational simulations facilitate highly controlled and quantitative analyses to study the distinct effects of various loading conditions and design features on tissue durability; thus, they are advantageous over complex fatigue experiments. Although significant work to calibrate the constitutive models from fatigue experiments and to validate predictability remains, further development in these areas will add to our knowledge of soft-tissue fatigue damage and will facilitate the design of durable treatments and devices. In this review, the experimental, modeling, and simulation efforts to study collagenous tissue fatigue damage are summarized and critically assessed. PMID:25955007
ERIC Educational Resources Information Center
Cook, Mark S.; Kernahan, Peter J.
2017-01-01
Cadaveric simulations are an effective way to add clinical context to an anatomy course. In this study, unembalmed (fresh) cadavers were uniquely prepared to simulate pleural effusion to teach chest percussion and review thoracic anatomy. Thirty first-year medical students were assigned to either an intervention (Group A) or control group (Group…
Experimental Verification of Bayesian Planet Detection Algorithms with a Shaped Pupil Coronagraph
NASA Astrophysics Data System (ADS)
Savransky, D.; Groff, T. D.; Kasdin, N. J.
2010-10-01
We evaluate the feasibility of applying Bayesian detection techniques to discovering exoplanets using high contrast laboratory data with simulated planetary signals. Background images are generated at the Princeton High Contrast Imaging Lab (HCIL), with a coronagraphic system utilizing a shaped pupil and two deformable mirrors (DMs) in series. Estimates of the electric field at the science camera are used to correct for quasi-static speckle and produce symmetric high contrast dark regions in the image plane. Planetary signals are added in software, or via a physical star-planet simulator which adds a second off-axis point source before the coronagraph with a beam recombiner, calibrated to a fixed contrast level relative to the source. We produce a variety of images, with varying integration times and simulated planetary brightness. We then apply automated detection algorithms such as matched filtering to attempt to extract the planetary signals. This allows us to evaluate the efficiency of these techniques in detecting planets in a high noise regime and eliminating false positives, as well as to test existing algorithms for calculating the required integration times for these techniques to be applicable.
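Matched filtering of the kind applied here can be sketched as cross-correlation of the image with a zero-mean, unit-energy template, normalised so the map reads as signal-to-noise. The PSF shape, frame size, and injected contrast below are invented for illustration and are not the HCIL configuration.

```python
import numpy as np

def matched_filter_snr(image, template):
    """Matched-filter detection map: correlate the image with the expected
    (zero-mean, unit-energy) source template and normalise by the image
    noise, so each pixel of the map is an approximate signal-to-noise."""
    t = template - template.mean()
    t /= np.sqrt((t ** 2).sum())
    th, tw = t.shape
    h, w = image.shape
    out = np.zeros((h - th + 1, w - tw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = (image[y:y + th, x:x + tw] * t).sum()
    return out / image.std()

rng = np.random.default_rng(0)
# Gaussian PSF template and a noisy frame with one injected point source.
yy, xx = np.mgrid[:7, :7]
psf = np.exp(-((yy - 3) ** 2 + (xx - 3) ** 2) / 4.0)
frame = rng.normal(0.0, 1.0, (40, 40))
frame[10:17, 20:27] += 8.0 * psf          # simulated planet signal
snr = matched_filter_snr(frame, psf)
peak = np.unravel_index(snr.argmax(), snr.shape)
```

Thresholding `snr` at a few sigma gives the detection rule; the trade between missed detections and false positives is set by that threshold, which is what the Bayesian treatment in the paper formalizes.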
RTSPM: real-time Linux control software for scanning probe microscopy.
Chandrasekhar, V; Mehta, M M
2013-01-01
Real time computer control is an essential feature of scanning probe microscopes, which have become important tools for the characterization and investigation of nanometer scale samples. Most commercial (and some open-source) scanning probe data acquisition software uses digital signal processors to handle the real time data processing and control, which adds to the expense and complexity of the control software. We describe here scan control software that uses a single computer and a data acquisition card to acquire scan data. The computer runs an open-source real time Linux kernel, which permits fast acquisition and control while maintaining a responsive graphical user interface. Images from a simulated tuning-fork based microscope as well as a standard topographical sample are also presented, showing some of the capabilities of the software.
Observing Strategies for the Detection of Jupiter Analogs
NASA Astrophysics Data System (ADS)
Wittenmyer, Robert A.; Tinney, C. G.; Horner, J.; Butler, R. P.; Jones, H. R. A.; O'Toole, S. J.; Bailey, J.; Carter, B. D.; Salter, G. S.; Wright, D.
2013-04-01
To understand the frequency, and thus the formation and evolution, of planetary systems like our own solar system, it is critical to detect Jupiter-like planets in Jupiter-like orbits. For long-term radial-velocity monitoring, it is useful to estimate the observational effort required to reliably detect such objects, particularly in light of severe competition for limited telescope time. We perform detailed simulations of observational campaigns, maximizing the realism of the sampling of a set of simulated observations. We then compute the detection limits for each campaign to quantify the effect of increasing the number of observational epochs and varying their time coverage. We show that once there is sufficient time baseline to detect a given orbital period, it becomes less effective to add further time coverage—rather, the detectability of a planet scales roughly as the square root of the number of observations, independently of the number of orbital cycles included in the data string. We also show that no noise floor is reached, with a continuing improvement in detectability at the maximum number of observations N = 500 tested here.
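The square-root-of-N scaling can be made concrete with a toy white-noise detection limit: for a least-squares sinusoid fit, the amplitude uncertainty is roughly sigma * sqrt(2/N), so a threshold-sigma limit shrinks as 1/sqrt(N). The threshold and noise level below are illustrative, not the survey's.

```python
import math

def amplitude_limit(n_obs, sigma, threshold=3.0):
    """Toy detection limit for a sinusoid in white noise: the least-squares
    amplitude uncertainty is about sigma * sqrt(2 / N), so the threshold-
    sigma detection limit falls off as 1 / sqrt(N)."""
    return threshold * sigma * math.sqrt(2.0 / n_obs)

# Doubling the number of epochs improves the limit by sqrt(2), independent
# of how many orbital cycles the baseline spans in this simple picture.
limits = {n: amplitude_limit(n, sigma=3.0) for n in (50, 100, 200, 500)}
```

This reproduces the qualitative conclusion above: once the baseline covers the period, extra epochs buy only a sqrt(N) improvement, with no noise floor in the white-noise idealization.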
Lazy Updating of hubs can enable more realistic models by speeding up stochastic simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ehlert, Kurt; Loewe, Laurence, E-mail: loewe@wisc.edu; Wisconsin Institute for Discovery, University of Wisconsin-Madison, Madison, Wisconsin 53715
2014-11-28
To respect the nature of discrete parts in a system, stochastic simulation algorithms (SSAs) must update for each action (i) all part counts and (ii) each action's probability of occurring next and its timing. This makes it expensive to simulate biological networks with well-connected “hubs” such as ATP that affect many actions. Temperature and volume also affect many actions and may be changed significantly in small steps by the network itself during fever and cell growth, respectively. Such trends matter for evolutionary questions, as cell volume determines doubling times and fever may affect survival, both key traits for biological evolution. Yet simulations often ignore such trends and assume constant environments to avoid many costly probability updates. Such computational convenience precludes analyses of important aspects of evolution. Here we present “Lazy Updating,” an add-on for SSAs designed to reduce the cost of simulating hubs. When a hub changes, Lazy Updating postpones all probability updates for reactions depending on this hub, until a threshold is crossed. Speedup is substantial if most computing time is spent on such updates. We implemented Lazy Updating for the Sorting Direct Method and it is easily integrated into other SSAs such as Gillespie's Direct Method or the Next Reaction Method. Testing on several toy models and a cellular metabolism model showed >10× faster simulations for its use-cases—with a small loss of accuracy. Thus we see Lazy Updating as a valuable tool for some special but important simulation problems that are difficult to address efficiently otherwise.
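The core approximation of Lazy Updating can be sketched by letting hub-dependent propensities see a frozen copy of the hub count until the true count drifts past a relative threshold. The toy network and rate constants below are invented, and the sketch shows the approximation itself, not the deferred-bookkeeping data structures that make the real method fast.

```python
import random

def lazy_gillespie(x, reactions, hub, threshold=0.05, t_end=0.5, seed=1):
    """Gillespie direct method with a Lazy-Updating-style hub approximation.

    Each reaction is (rate_fn, stoichiometry, uses_hub). Propensities of
    hub-dependent reactions see a frozen copy of the hub count, refreshed
    only when the true count drifts more than `threshold` (relative)."""
    rng = random.Random(seed)
    t, steps, hub_ref = 0.0, 0, x[hub]
    while t < t_end:
        frozen = dict(x)
        frozen[hub] = hub_ref                      # stale hub value
        props = [rate(frozen) if uses_hub else rate(x)
                 for rate, _, uses_hub in reactions]
        total = sum(props)
        if total <= 0:
            break
        t += rng.expovariate(total)                # time to next event
        r, acc, k = rng.uniform(0.0, total), 0.0, 0
        for k, p in enumerate(props):              # pick a reaction
            acc += p
            if r <= acc:
                break
        for species, change in reactions[k][1].items():
            x[species] += change                   # apply stoichiometry
        steps += 1
        if abs(x[hub] - hub_ref) / max(hub_ref, 1) > threshold:
            hub_ref = x[hub]                       # lazy refresh on drift
    return t, steps, x

# Toy network with 'atp' as the hub (all rate constants invented).
state = {"atp": 1000, "a": 100, "b": 0}
rxns = [
    (lambda s: 0.01 * s["atp"] * s["a"], {"atp": -1, "a": -1, "b": 1}, True),
    (lambda s: 0.5 * s["b"],             {"b": -1, "a": 1},            False),
    (lambda s: 5.0,                      {"atp": 1},                   False),
]
t_final, n_steps, state = lazy_gillespie(state, rxns, "atp")
```

The small relative threshold is what bounds the accuracy loss the paper reports: the larger the hub count, the more updates can be postponed per unit of drift.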
Population Simulation, AKA: Grahz, Rahbitz and Fawkzes
NASA Technical Reports Server (NTRS)
Bangert, Tyler R.
2008-01-01
In an effort to give students a more visceral experience of science and instill a deeper working knowledge of concepts, activities that utilize hands-on, laboratory and simulated experiences are recommended because these activities have a greater impact on student learning, especially for Native American students. Because it is not usually feasible to take large and/or multiple classes of high school science students into the field to count numbers of organisms of a particular species, especially over a long period of time and covering a large area of an environment, the population simulation presented in this paper was created to aid students in understanding population dynamics by working with a simulated environment, which can be done in the classroom. Students create an environment and populate the environment with imaginary species. Then, using a sequence of "rules" that allow organisms to eat, reproduce, move and age, students see how the population of a species changes over time. In particular, students practice collecting data, summarizing information, plotting graphs, and interpreting graphs for such information as carrying capacity, predator-prey relationships, and how specific species factors impact population and the environment. Students draw conclusions from their results and suggest further research, which may involve changes in simulation parameters, prediction of outcomes, and testing predictions. The population simulation has demonstrated success in the above student activities using a "board game" version of the simulation. A computer version of the population simulation needs more testing, but preliminary runs are promising. A second, more complicated computer simulation will model the same dynamics and will add simulated population genetics.
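The eat/reproduce/age rule sequence described above can be sketched as a toy two-species update. All rates and the carrying capacity below are invented for illustration; they are not the rules from the classroom board game.

```python
import random

def step(prey, predators, rng, capacity=500):
    """One round of the eat / reproduce / age rules for a toy two-species
    environment (all rates and the carrying capacity are invented)."""
    eaten = min(prey, int(0.01 * prey * predators))        # eat
    prey -= eaten
    prey += int(0.3 * prey * (1.0 - prey / capacity))      # reproduce (logistic)
    predators += eaten // 5                                # well-fed predators breed
    deaths = sum(1 for _ in range(predators) if rng.random() < 0.1)
    predators -= deaths                                    # age and die
    return max(prey, 0), max(predators, 0)

rng = random.Random(42)
prey, predators = 200, 10
history = [(prey, predators)]
for _ in range(50):                  # 50 simulated rounds of data collection
    prey, predators = step(prey, predators, rng)
    history.append((prey, predators))
```

Plotting `history` gives exactly the graphs students are asked to interpret: the prey curve levels off near the carrying capacity while predator numbers lag and oscillate against it.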
Dupont, Nana; Fertner, Mette; Kristensen, Charlotte Sonne; Toft, Nils; Stege, Helle
2016-05-03
Transparent calculation methods are crucial when investigating trends in antimicrobial consumption over time and between populations. Until 2011, a single standardized method was applied when quantifying the Danish pig antimicrobial consumption with the unit "Animal Daily Dose" (ADD). However, two new methods for assigning values for ADDs have recently emerged, one implemented by DANMAP, responsible for publishing annual reports on antimicrobial consumption, and one by the Danish Veterinary and Food Administration (DVFA), responsible for the Yellow Card initiative. In addition to new ADD assignment methods, Denmark has also experienced a shift in the production pattern, towards a larger export of live pigs. The aims of this paper were to (1) describe previous and current ADD assignment methods used by the major Danish institutions and (2) to illustrate how ADD assignment method and choice of population and population measurement affect the calculated national antimicrobial consumption in pigs (2007-2013). The old VetStat ADD-values were based on Summaries of Product Characteristics (SPCs) in contrast to the new ADD-values, which were based on active compound, concentration and administration route. The new ADD-values stated by both DANMAP and DVFA were only identical for 48 % of antimicrobial products approved for use in pigs. From 2007 to 2013, the total number of ADDs per year increased by 9 % when using the new DVFA ADD-values, but decreased by 2 and 7 % when using the new DANMAP ADD-values or the old VetStat ADD-values, respectively. Through 2007 to 2013, the production of pigs increased from 26.1 million pigs per year with 18 % exported live to 28.7 million with 34 % exported live. In the same time span, the annual pig antimicrobial consumption increased by 22.2 %, when calculated using the new DVFA ADD-values and pigs slaughtered per year as population measurement (13.0 ADDs/pig/year to 15.9 ADDs/pig/year). 
However, when based on the old VetStat ADD values and pigs produced per year (including live export), a 10.9 % decrease was seen (10.6 ADDs/pig/year to 9.4 ADDs/pig/year). The findings of this paper clearly highlight that calculated national antimicrobial consumption is highly affected by chosen population measurement and the applied ADD-values.
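The sensitivity to the population measurement is simple division: ADDs per pig per year is total ADDs over the chosen denominator. The sketch below uses the paper's production figures (26.1 and 28.7 million pigs, with 18 % and 34 % exported live) but an invented total-ADD series, to show how the same consumption can look sharply rising per slaughtered pig yet nearly flat per produced pig.

```python
def adds_per_pig(total_adds, population):
    """ADDs per pig per year: total Animal Daily Doses divided by the
    chosen population measurement."""
    return total_adds / population

# Production figures follow the paper; the total-ADD series is invented.
produced = {"2007": 26.1e6, "2013": 28.7e6}
slaughtered = {"2007": 26.1e6 * (1 - 0.18), "2013": 28.7e6 * (1 - 0.34)}
total_adds = {"2007": 300e6, "2013": 330e6}   # hypothetical: +10%

per_slaughtered = {y: adds_per_pig(total_adds[y], slaughtered[y])
                   for y in ("2007", "2013")}
per_produced = {y: adds_per_pig(total_adds[y], produced[y])
                for y in ("2007", "2013")}
```

Because the share of live exports grew, the slaughtered-pig denominator shrank while the produced-pig denominator grew, which is exactly the divergence between the 22.2 % increase and the 10.9 % decrease reported above.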
Towards real-time photon Monte Carlo dose calculation in the cloud
NASA Astrophysics Data System (ADS)
Ziegenhein, Peter; Kozin, Igor N.; Kamerling, Cornelis Ph; Oelfke, Uwe
2017-06-01
Near real-time application of Monte Carlo (MC) dose calculation in clinic and research is hindered by the long computational runtimes of established software. Currently, fast MC software solutions are available utilising accelerators such as graphical processing units (GPUs) or clusters based on central processing units (CPUs). Both platforms are expensive in terms of purchase costs and maintenance and, in case of the GPU, provide only limited scalability. In this work we propose a cloud-based MC solution, which offers high scalability of accurate photon dose calculations. The MC simulations run on a private virtual supercomputer that is formed in the cloud. Computational resources can be provisioned dynamically at low cost without upfront investment in expensive hardware. A client-server software solution has been developed which controls the simulations and transports data to and from the cloud efficiently and securely. The client application integrates seamlessly into a treatment planning system. It runs the MC simulation workflow automatically and securely exchanges simulation data with the server side application that controls the virtual supercomputer. Advanced encryption standards were used to add an additional security layer, which encrypts and decrypts patient data on-the-fly at the processor register level. We could show that our cloud-based MC framework enables near real-time dose computation. It delivers excellent linear scaling for high-resolution datasets with absolute runtimes of 1.1 seconds to 10.9 seconds for simulating a clinical prostate and liver case up to 1% statistical uncertainty. The computation runtimes include the transportation of data to and from the cloud as well as process scheduling and synchronisation overhead. Cloud-based MC simulations offer a fast, affordable and easily accessible alternative for near real-time accurate dose calculations to currently used GPU or cluster solutions.
TWOS - TIME WARP OPERATING SYSTEM, VERSION 2.5.1
NASA Technical Reports Server (NTRS)
Bellenot, S. F.
1994-01-01
The Time Warp Operating System (TWOS) is a special-purpose operating system designed to support parallel discrete-event simulation. TWOS is a complete implementation of the Time Warp mechanism, a distributed protocol for virtual time synchronization based on process rollback and message annihilation. Version 2.5.1 supports simulations and other computations using both virtual time and dynamic load balancing; it does not support general time-sharing or multi-process jobs using conventional message synchronization and communication. The program utilizes the underlying operating system's resources. TWOS runs a single simulation at a time, executing it concurrently on as many processors of a distributed system as are allocated. The simulation needs only to be decomposed into objects (logical processes) that interact through time-stamped messages. TWOS provides transparent synchronization. The user does not have to add any special logic to aid in synchronization, nor give any synchronization advice, nor even understand much about how the Time Warp mechanism works. The Time Warp Simulator (TWSIM) subdirectory contains a sequential simulation engine that is interface compatible with TWOS. This means that an application designer and programmer who wish to use TWOS can prototype code on TWSIM on a single processor and/or workstation before having to deal with the complexity of working on a distributed system. TWSIM also provides statistics about the application which may be helpful for determining the correctness of an application and for achieving good performance on TWOS. Version 2.5.1 has an updated interface that is not compatible with 2.0. The program's user manual assists the simulation programmer in the design, coding, and implementation of discrete-event simulations running on TWOS. The manual also includes a practical user's guide to the TWOS application benchmark, Colliding Pucks. TWOS supports simulations written in the C programming language. 
It is designed to run on the Sun3/Sun4 series computers and the BBN "Butterfly" GP-1000 computer. The standard distribution medium for this package is a .25 inch tape cartridge in TAR format. TWOS was developed in 1989 and updated in 1991. This program is a copyrighted work with all copyright vested in NASA. Sun3 and Sun4 are trademarks of Sun Microsystems, Inc.
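The Time Warp mechanism that TWOS implements, optimistic event processing with rollback, can be sketched for a single object: log the pre-event state, and when a straggler arrives with a timestamp below local virtual time, restore the last consistent state and replay. Anti-messages, message annihilation, and GVT computation are omitted; the "event" here is just an integer accumulator, not a TWOS object.

```python
class TimeWarpObject:
    """Minimal single-object sketch of Time Warp optimistic
    synchronization: process events as they arrive, log the pre-event
    state, and roll back and replay when a straggler shows up."""

    def __init__(self):
        self.lvt = 0.0     # local virtual time
        self.state = 0
        self.log = []      # (event_ts, value, prev_lvt, prev_state)

    def handle(self, ts, value):
        redo = []
        while self.log and self.log[-1][0] > ts:      # straggler: roll back
            ev_ts, ev_val, self.lvt, self.state = self.log.pop()
            redo.append((ev_ts, ev_val))
        self.log.append((ts, value, self.lvt, self.state))
        self.state += value                           # the "event" itself
        self.lvt = ts
        for event in sorted(redo):                    # replay undone events
            self.handle(*event)

obj = TimeWarpObject()
for ts, v in [(1.0, 10), (3.0, 30), (2.0, 20)]:       # t=2.0 arrives late
    obj.handle(ts, v)
```

After the straggler at t=2.0 arrives, the event at t=3.0 is rolled back and replayed, so the final log is in timestamp order, which is the transparency the abstract describes: the simulation object never sees the misordering.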
Forecasting peaks of seasonal influenza epidemics.
Nsoesie, Elaine; Marathe, Madhav; Brownstein, John
2013-06-21
We present a framework for near real-time forecast of influenza epidemics using a simulation optimization approach. The method combines an individual-based model and a simple root finding optimization method for parameter estimation and forecasting. In this study, retrospective forecasts were generated for seasonal influenza epidemics using web-based estimates of influenza activity from Google Flu Trends for 2004-2005, 2007-2008 and 2012-2013 flu seasons. In some cases, the peak could be forecasted 5-6 weeks ahead. This study adds to existing resources for influenza forecasting and the proposed method can be used in conjunction with other approaches in an ensemble framework.
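The simulation-optimization idea, fit a transmission parameter with a simple root finder and then read the peak off the fitted model, can be sketched with a discrete SIR model standing in for the paper's individual-based model. All parameters below are invented for illustration.

```python
def sir_incidence(beta, gamma=0.5, i0=0.01, weeks=40):
    """Discrete-time SIR; returns weekly new-infection fractions."""
    s, i, out = 1.0 - i0, i0, []
    for _ in range(weeks):
        new = min(beta * s * i, s)    # cap so susceptibles never go negative
        s, i = s - new, i + new - gamma * i
        out.append(new)
    return out

def calibrate_beta(week, observed_cum, lo=0.5, hi=3.0, iters=50):
    """Bisection on beta so that simulated cumulative incidence through
    `week` matches the observed value; cumulative incidence by a fixed
    week grows monotonically with beta, so a simple root finder works."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if sum(sir_incidence(mid)[:week + 1]) < observed_cum:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Synthetic "surveillance" generated with beta = 1.6; calibrate on the
# first six weeks only, then forecast the peak from the fitted model.
truth = sir_incidence(1.6)
beta_hat = calibrate_beta(5, sum(truth[:6]))
fitted = sir_incidence(beta_hat)
peak_week = max(range(len(fitted)), key=fitted.__getitem__)
```

In the paper, the observed series comes from Google Flu Trends and the forward model is an individual-based simulation, but the forecast loop has this same shape: refit as each new week of data arrives, then read the peak timing from the calibrated run.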
Simulations of Solar Wind Turbulence
NASA Technical Reports Server (NTRS)
Goldstein, Melvyn L.; Usmanov, A. V.; Roberts, D. A.
2008-01-01
Recently we have restructured our approach to simulating magnetohydrodynamic (MHD) turbulence in the solar wind. Previously, we had defined a 'virtual' heliosphere that contained, for example, a tilted rotating current sheet, microstreams, quasi-two-dimensional fluctuations as well as Alfven waves. In this new version of the code, we use the global, time-stationary, WKB Alfven wave-driven solar wind model developed by Usmanov and described in Usmanov and Goldstein [2003] to define the initial state of the system. Consequently, current sheets, and fast and slow streams are computed self-consistently from an inner, photospheric, boundary. To this steady-state configuration, we add fluctuations close to, but above, the surface where the flow becomes super-Alfvenic. The time-dependent MHD equations are then solved using a semi-discrete third-order Central Weighted Essentially Non-Oscillatory (CWENO) numerical scheme. The computational domain now includes the entire sphere; the geometrical singularity at the poles is removed using the multiple grid approach described in Usmanov [1996]. Wave packets are introduced at the inner boundary such as to satisfy Faraday's Law [Yeh and Dryer, 1985] and their nonlinear evolution is followed in time.
Beaulieu-Bonneau, Simon; Fortier-Brochu, Émilie; Ivers, Hans; Morin, Charles M
2017-03-01
The objectives of this study were to compare individuals with traumatic brain injury (TBI) and healthy controls on neuropsychological tests of attention and driving simulation performance, and explore their relationships with participants' characteristics, sleep, sleepiness, and fatigue. Participants were 22 adults with moderate or severe TBI (time since injury ≥ one year) and 22 matched controls. They completed three neuropsychological tests of attention, a driving simulator task, night-time polysomnographic recordings, and subjective ratings of sleepiness and fatigue. Results showed that participants with TBI exhibited poorer performance compared to controls on measures tapping speed of information processing and sustained attention, but not on selective attention measures. On the driving simulator task, a greater variability of the vehicle lateral position was observed in the TBI group. Poorer performance on specific subsets of neuropsychological variables was associated with poorer sleep continuity in the TBI group, and with a greater increase in subjective sleepiness in both groups. No significant relationship was found between cognitive performance and fatigue. These findings add to the existing evidence that speed of information processing is still impaired several years after moderate to severe TBI. Sustained attention could also be compromised. Attention seems to be associated with sleep continuity and daytime sleepiness; this interaction needs to be explored further.
Electrolyzers Enhancing Flexibility in Electric Grids
Mohanpurkar, Manish; Luo, Yusheng; Terlip, Danny; ...
2017-11-10
This paper presents a real-time simulation with a hardware-in-the-loop (HIL)-based approach for verifying the performance of electrolyzer systems in providing grid support. Hydrogen refueling stations may use electrolyzer systems to generate hydrogen and are proposed to have the potential of becoming smarter loads that can proactively provide grid services. On the basis of experimental findings, electrolyzer systems with balance of plant are observed to have a high level of controllability and hence can add flexibility to the grid from the demand side. A generic front end controller (FEC) is proposed, which enables an optimal operation of the load on the basis of market and grid conditions. This controller has been simulated and tested in a real-time environment with electrolyzer hardware for a performance assessment. It can optimize the operation of electrolyzer systems on the basis of the information collected by a communication module. Real-time simulation tests are performed to verify the performance of the FEC-driven electrolyzers in providing grid support that enables flexibility and greater economic revenue for hydrogen producers under dynamic conditions. In conclusion, although the FEC proposed in this paper is tested with electrolyzers, it is proposed as a generic control topology that is applicable to any load.
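As a toy illustration of the kind of rule a front-end controller might implement, here is a hypothetical dispatch function (all thresholds, parameter names, and numbers are invented, not taken from the paper) that sets an electrolyzer power setpoint from an electricity price and a grid frequency deviation:

```python
def fec_setpoint(price, freq_dev, p_min=0.1, p_max=1.0, price_cap=50.0, droop=5.0):
    # Hypothetical front-end controller rule: run at full power when
    # electricity is cheap, back off to minimum load when it is expensive,
    # and shed load in proportion to any negative grid frequency deviation
    # (demand-side frequency support).
    p = p_max if price <= price_cap else p_min
    if freq_dev < 0.0:
        p = p + droop * freq_dev       # droop-style load reduction
    return min(p_max, max(p_min, p))   # respect electrolyzer operating limits
```

A real FEC would additionally account for hydrogen demand, ramp limits, and market signals gathered by the communication module.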
Additional confirmation of the validity of laboratory simulation of cloud radiances
NASA Technical Reports Server (NTRS)
Davis, J. M.; Cox, S. K.
1986-01-01
The results of a laboratory experiment are presented that provide additional verification of the methodology adopted for simulation of the radiances reflected from fields of optically thick clouds using the Cloud Field Optical Simulator (CFOS) at Colorado State University. The comparison of these data with their theoretically derived counterparts indicates that the crucial mechanism of cloud-to-cloud radiance field interaction is accurately simulated in the CFOS experiments and adds confidence to the manner in which the optical depth is scaled.
Impact of Frequent Interruption on Nurses' Patient-Controlled Analgesia Programming Performance.
Campoe, Kristi R; Giuliano, Karen K
2017-12-01
The purpose was to add to the body of knowledge regarding the impact of interruption on acute care nurses' cognitive workload, total task completion times, nurse frustration, and medication administration error while programming a patient-controlled analgesia (PCA) pump. Data support that the severity of medication administration error increases with the number of interruptions, which is especially critical during the administration of high-risk medications. Bar code technology, interruption-free zones, and medication safety vests have been shown to decrease administration-related errors. However, there are few published data regarding the impact of number of interruptions on nurses' clinical performance during PCA programming. Nine acute care nurses completed three PCA pump programming tasks in a simulation laboratory. Programming tasks were completed under three conditions where the number of interruptions varied between two, four, and six. Outcome measures included cognitive workload (six NASA Task Load Index [NASA-TLX] subscales), total task completion time (seconds), nurse frustration (NASA-TLX Subscale 6), and PCA medication administration error (incorrect final programming). Increases in the number of interruptions were associated with significant increases in total task completion time (p = .003). We also found increases in nurses' cognitive workload, nurse frustration, and PCA pump programming errors, but these increases were not statistically significant. Complex technology use permeates the acute care nursing practice environment. These results add new knowledge on nurses' clinical performance during PCA pump programming and high-risk medication administration.
Burn severity mapping using simulation modeling and satellite imagery
Eva C. Karau; Robert E. Keane
2010-01-01
Although burn severity maps derived from satellite imagery provide a landscape view of fire impacts, fire effects simulation models can provide spatial fire severity estimates and add a biotic context in which to interpret severity. In this project, we evaluated two methods of mapping burn severity in the context of rapid post-fire assessment for four wildfires in...
Toward textbook multigrid efficiency for fully implicit resistive magnetohydrodynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adams, Mark F.; Samtaney, Ravi, E-mail: samtaney@pppl.go; Brandt, Achi
2010-09-01
Multigrid methods can solve some classes of elliptic and parabolic equations to accuracy below the truncation error with a work-cost equivalent to a few residual calculations - so-called 'textbook' multigrid efficiency. We investigate methods to solve the system of equations that arise in time dependent magnetohydrodynamics (MHD) simulations with textbook multigrid efficiency. We apply multigrid techniques such as geometric interpolation, full approximate storage, Gauss-Seidel smoothers, and defect correction for fully implicit, nonlinear, second-order finite volume discretizations of MHD. We apply these methods to a standard resistive MHD benchmark problem, the GEM reconnection problem, and add a strong magnetic guide field, which is a critical characteristic of magnetically confined fusion plasmas. We show that our multigrid methods can achieve near textbook efficiency on fully implicit resistive MHD simulations.
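A minimal geometric multigrid V-cycle for the 1D Poisson problem, with the Gauss-Seidel smoothing and defect correction named in the abstract, gives the flavor of the method. This is standard textbook material under simplifying assumptions (1D, linear, Dirichlet boundaries), not the authors' fully implicit MHD solver:

```python
import numpy as np

def gauss_seidel(u, f, h, sweeps=2):
    # Lexicographic Gauss-Seidel smoothing for -u'' = f, Dirichlet boundaries.
    for _ in range(sweeps):
        for i in range(1, len(u) - 1):
            u[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2.0 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def v_cycle(u, f, h):
    n = len(u) - 1                     # intervals; assumed a power of two
    if n == 2:                         # coarsest grid: one unknown, exact solve
        u[1] = 0.5 * h * h * f[1]
        return u
    u = gauss_seidel(u, f, h)          # pre-smoothing
    r = residual(u, f, h)
    rc = np.zeros(n // 2 + 1)          # restrict residual by full weighting
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
    ec = v_cycle(np.zeros(n // 2 + 1), rc, 2.0 * h)
    e = np.zeros_like(u)               # prolong the coarse-grid error
    e[::2] = ec
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])
    u += e                             # defect correction
    return gauss_seidel(u, f, h)       # post-smoothing

# Solve -u'' = pi^2 sin(pi x) on [0, 1]; the exact solution is sin(pi x).
n = 64
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
f = np.pi ** 2 * np.sin(np.pi * x)
u = np.zeros(n + 1)
for _ in range(12):
    u = v_cycle(u, f, h)
err = np.max(np.abs(u - np.sin(np.pi * x)))
res = np.max(np.abs(residual(u, f, h)))
```

Each V-cycle costs a few residual evaluations, and the residual contracts by a roughly constant factor per cycle, which is the behavior "textbook efficiency" refers to.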
ERIC Educational Resources Information Center
Hartel, Hermann
2000-01-01
Finds that computer simulations can be used to visualize the processes involved in lunar tides. Technology adds value, opening new paths for more detailed analysis and improved learning outcomes. (Author/CCM)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romanov, Gennady; /Fermilab
CST Particle Studio combines electromagnetic field simulation, multi-particle tracking, adequate post-processing, and an advanced probabilistic emission model, which is the most important new capability in multipactor simulation. The emission model brings the stochastic properties of emission into the simulation and adds primary-electron elastic and inelastic reflection from the surfaces. Simulations of multipactor in coaxial waveguides have been performed to study the effects of these innovations on the multipactor threshold and the range over which multipactor can occur. The results, compared with available previous experiments and simulations, as well as the technique of multipactor simulation with CST PS, are presented and discussed.
Nanoscale plasmonic waveguides for filtering and demultiplexing devices
NASA Astrophysics Data System (ADS)
Akjouj, A.; Noual, A.; Pennec, Y.; Djafari-Rouhani, B.
2010-05-01
Numerical simulations, based on a FDTD (finite-difference-time-domain) method, of infrared light propagation for add/drop filtering in two-dimensional (2D) Ag-SiO2-Ag resonators are reported to design 2D Y-bent plasmonic waveguides with possible applications in telecommunication WDM (wavelength demultiplexing). First, we study optical transmission and reflection of a nanoscale SiO2 waveguide coupled to a nanocavity of the same insulator located either inside or on the side of a linear waveguide sandwiched between Ag. According to the inside or outside positioning of the nanocavity with respect to the waveguide, the transmission spectrum displays peaks or dips, respectively, which occur at the same central frequency. A fundamental study of the possible cavity modes in the near-infrared frequency band is also given. These filtering properties are then exploited to propose a nanoscale demultiplexer based on a Y-shaped plasmonic waveguide for separation of two different wavelengths, in selection or rejection, from an input broadband signal around 1550 nm. We detail coupling of the 2D add/drop Y connector to two cavities inserted on each of its branches.
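The FDTD method underlying these simulations reduces, in one dimension and in vacuum, to a few lines of leapfrog updates on staggered E and H grids. The sketch below (normalized units, illustrative grid sizes, soft Gaussian source) shows the scheme's skeleton; it omits the dispersive metal model and 2D geometry a real plasmonic simulation would need:

```python
import numpy as np

def fdtd_1d(nx=500, nt=150, src=250):
    # 1D Yee/FDTD in vacuum, normalized units with the "magic" step dt = dx/c,
    # so pulses travel exactly one cell per step with no numerical dispersion.
    ez = np.zeros(nx)          # E-field at integer grid points
    hy = np.zeros(nx - 1)      # H-field on the staggered half grid
    for t in range(nt):
        hy += np.diff(ez)                           # H update from curl E
        ez[1:-1] += np.diff(hy)                     # E update from curl H
        ez[src] += np.exp(-((t - 30) / 10.0) ** 2)  # soft Gaussian source
    return ez

ez = fdtd_1d()
```

The injected pulse splits into left- and right-going halves that, after 150 steps, sit roughly 120 cells on either side of the source.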
Surgical simulators in urological training--views of UK Training Programme Directors.
Forster, James A; Browning, Anthony J; Paul, Alan B; Biyani, C Shekhar
2012-09-01
What's known on the subject? and What does the study add? The role of surgical simulators is currently being debated in urological and other surgical specialties. Simulators are not presently implemented in the UK urology training curriculum. The availability of simulators and the opinions of Training Programme Directors (TPDs) on their role have not been described. In the present questionnaire-based survey, the trainees of most, but not all, UK TPDs had access to laparoscopic simulators, and all responding TPDs thought that simulators improved laparoscopic training. We hope that the present study will be a positive step towards an agreement to formally introduce simulators into the UK urology training curriculum. To discuss the current situation on the use of simulators in surgical training. To determine the views of UK Urology Training Programme Directors (TPDs) on the availability and use of simulators in urology at present, and to discuss the role that simulators may have in future training. An online questionnaire survey was distributed to all UK Urology TPDs. In all, 16 of 21 TPDs responded. All 16 thought that laparoscopic simulators improved the quality of laparoscopic training. The trainees of 13 TPDs had access to a laparoscopic simulator (either in their own hospital or another hospital in the deanery). Most TPDs thought that trainees should use simulators in their free time, in quiet time during work hours, or in teaching sessions (rather than incorporated into the weekly timetable). We feel that the current apprentice-style method of training in urological surgery is outdated. We think that all TPDs and trainees should have access to a simulator, and that a formal competency-based simulation training programme should be incorporated into the urology training curriculum, with trainees reaching a minimum proficiency on a simulator before undertaking surgical procedures. © 2012 The Authors. BJU International © 2012 BJU International.
Robotics On-Board Trainer (ROBoT)
NASA Technical Reports Server (NTRS)
Johnson, Genevieve; Alexander, Greg
2013-01-01
ROBoT is an on-orbit version of the ground-based Dynamics Skills Trainer (DST) that astronauts use for training on a frequent basis. This software consists of two primary software groups. The first series of components is responsible for displaying the graphical scenes. The remaining components are responsible for simulating the Mobile Servicing System (MSS), the Japanese Experiment Module Remote Manipulator System (JEMRMS), and the H-II Transfer Vehicle (HTV) Free Flyer Robotics Operations. The MSS simulation software includes: Robotic Workstation (RWS) simulation, a simulation of the Space Station Remote Manipulator System (SSRMS), a simulation of the ISS Command and Control System (CCS), and a portion of the Portable Computer System (PCS) software necessary for MSS operations. These components all run under the CentOS4.5 Linux operating system. The JEMRMS simulation software includes real-time, hardware-in-the-loop (HIL) dynamics, manipulator multi-body dynamics, and a moving-object contact model with Trick's discrete-time scheduling. The JEMRMS DST will be used as a functional proficiency and skills trainer for flight crews. The HTV Free Flyer Robotics Operations simulation software adds a functional simulation of HTV vehicle controllers, sensors, and data to the MSS simulation software. These components are intended to support HTV ISS visiting vehicle analysis and training. The scene generation software will use DOUG (Dynamic On-orbit Ubiquitous Graphics) to render the graphical scenes. DOUG runs on a laptop running the CentOS4.5 Linux operating system. DOUG is an Open GL-based 3D computer graphics rendering package. It uses pre-built three-dimensional models of on-orbit ISS and space shuttle systems elements, and provides real-time views of various station and shuttle configurations.
Investigation of parabolic computational techniques for internal high-speed viscous flows
NASA Technical Reports Server (NTRS)
Anderson, O. L.; Power, G. D.
1985-01-01
A feasibility study was conducted to assess the applicability of an existing parabolic analysis (ADD-Axisymmetric Diffuser Duct), developed previously for subsonic viscous internal flows, to mixed supersonic/subsonic flows with heat addition simulating a SCRAMJET combustor. A study was conducted with the ADD code modified to include additional convection effects in the normal momentum equation when supersonic expansion and compression waves were present. It is concluded from the present study that for the class of problems where strong viscous/inviscid interactions are present a global iteration procedure is required.
ADHD Training Modules for Rural Health Care Providers, Educators and Parents.
ERIC Educational Resources Information Center
Woodrum, D. T.; And Others
Teachers, psychologists, medical personnel, and parents need to work together with a common base of knowledge to provide appropriate services to children with attention deficit disorder (ADD). The history of ADD symptoms begins in the late 19th century but the term ADD was not coined until 1980. Since that time, definitions and terms have…
Review of Flight Training Technology
1976-07-01
the cockpit. They might be used to train pilots in procedures to cope with NOE-altitude emergencies; however, a combination of cinematic simulation...airplanes. Although cockpit motion adds realism, thereby improving pilot performance in the simulator (Feddersen, ...; Guercio and Wall, ...). Ince...operations. Light aircraft, part-task trainers, motion pictures and video tapes, cinematic simulators, and digital teaching machines are among the
NASA Astrophysics Data System (ADS)
Tajuddin, Wan Ahmad
1994-02-01
Ease in finding the configuration at the global energy minimum in a symmetric neural network is important for combinatorial optimization problems. We carry out a comprehensive survey of available strategies for seeking global minima by comparing their performances in the binary representation problem. We recall our previous comparison of steepest descent with analog dynamics, genetic hill-climbing, simulated diffusion, simulated annealing, threshold accepting and simulated tunneling. To this, we add comparisons to other strategies including taboo search and one with field-ordered updating.
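Of the strategies surveyed, simulated annealing is the most straightforward to sketch. The toy below anneals a bit string toward a target configuration, using Hamming distance as a stand-in energy; the problem, cooling schedule, and parameters are illustrative, not those of the paper:

```python
import math
import random

def energy(bits, target):
    # Hamming distance to a target configuration: a stand-in "network energy"
    # with a single global minimum at bits == target.
    return sum(b != t for b, t in zip(bits, target))

def simulated_annealing(target, steps=5000, t0=2.0, seed=1):
    rng = random.Random(seed)
    n = len(target)
    bits = [rng.randint(0, 1) for _ in range(n)]
    e = energy(bits, target)
    for k in range(steps):
        temp = t0 * (1.0 - k / steps) + 1e-9   # linear cooling schedule
        i = rng.randrange(n)                   # propose a single-bit flip
        bits[i] ^= 1
        e_new = energy(bits, target)
        if e_new <= e or rng.random() < math.exp(-(e_new - e) / temp):
            e = e_new                          # accept (always if downhill)
        else:
            bits[i] ^= 1                       # reject: undo the flip
    return bits, e

target = [1, 0] * 10
bits, e = simulated_annealing(target)
```

Threshold accepting differs only in the acceptance test (accept if the energy increase is below a deterministic threshold rather than with a Boltzmann probability).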
MISR Level 2 TOA/Cloud Versioning
Atmospheric Science Data Center
2017-10-11
... public release. Add trap singular matrix condition. Add test for invalid look vectors. Use different metadata to test for validity of time tags. Fix incorrectly addressed array. Introduced bug ...
ARES Modeling of High-foot Implosions (NNSA Milestone #5466)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hurricane, O. A.
ARES “capsule only” simulations demonstrated results of applying an ASC code to a suite of high-foot ICF implosion experiments. While a capability to apply an asymmetric FDS drive to the capsule-only model using add-on Python routines exists, it was not exercised here. The ARES simulation results resemble the results from HYDRA simulations documented in A. Kritcher, et al., Phys. Plasmas, 23, 052709 (2016); namely, 1D simulation and data are in reasonable agreement for the lowest velocity experiments, but diverge from each other at higher velocities.
Adjoint sensitivity analysis of plasmonic structures using the FDTD method.
Zhang, Yu; Ahmed, Osman S; Bakr, Mohamed H
2014-05-15
We present an adjoint variable method for estimating the sensitivities of arbitrary responses with respect to the parameters of dispersive discontinuities in nanoplasmonic devices. Our theory is formulated in terms of the electric field components at the vicinity of perturbed discontinuities. The adjoint sensitivities are computed using at most one extra finite-difference time-domain (FDTD) simulation regardless of the number of parameters. Our approach is illustrated through the sensitivity analysis of an add-drop coupler consisting of a square ring resonator between two parallel waveguides. The computed adjoint sensitivities of the scattering parameters are compared with those obtained using the accurate but computationally expensive central finite difference approach.
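The "accurate but computationally expensive central finite difference approach" used as the comparison baseline can be sketched generically: it costs two extra objective evaluations per parameter, versus at most one extra (adjoint) simulation in the AVM. The function and toy objective below are illustrative, not the paper's FDTD setup:

```python
def central_diff_grad(f, p, h=1e-6):
    # Central finite differences: 2 * len(p) extra evaluations of the
    # response f, one forward and one backward perturbation per parameter.
    grad = []
    for i in range(len(p)):
        up, dn = list(p), list(p)
        up[i] += h
        dn[i] -= h
        grad.append((f(up) - f(dn)) / (2.0 * h))
    return grad

# Toy "response": a smooth function of two parameters, not an FDTD objective.
g = central_diff_grad(lambda q: q[0] ** 2 + 3.0 * q[1], [2.0, 1.0])
```

For a device with dozens of geometric parameters, this linear growth in simulation count is exactly what the adjoint approach avoids.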
An assessment of the potential of PFEM-2 for solving long real-time industrial applications
NASA Astrophysics Data System (ADS)
Gimenez, Juan M.; Ramajo, Damián E.; Márquez Damián, Santiago; Nigro, Norberto M.; Idelsohn, Sergio R.
2017-07-01
The latest generation of the particle finite element method (PFEM-2) is a numerical method based on the Lagrangian formulation of the equations, which presents advantages in terms of robustness and efficiency over classical Eulerian methodologies when certain kinds of flows are simulated, especially those where convection plays an important role. These situations are often encountered in real engineering problems, where very complex geometries and operating conditions require very large and long computations. The advantages of parallelism in computational fluid dynamics are well known: it makes computations with very fine spatial discretizations affordable. However, time cannot be parallelized in the same way, despite the effort being dedicated to space-time formulations. In this sense, PFEM-2 adds a valuable feature: its strong stability, with little loss of accuracy, provides an interesting way of satisfying real-life computation needs. Having already demonstrated in previous publications its ability to reach academic solutions with a good compromise between accuracy and efficiency, in this work the method is revisited and employed to solve several nonacademic problems of technological interest that fall into that category. Simulations concerning oil-water separation, waste-water treatment, metallurgical foundries, and safety assessment are presented. These cases are selected for their particular requirements of long simulation times and/or intensive interface treatment. Thus, large time-steps may be employed with PFEM-2 without compromising the accuracy and robustness of the simulation, as occurs with Eulerian alternatives, showing the potential of the methodology for solving not only academic tests but also real engineering problems.
Ensemble-Biased Metadynamics: A Molecular Simulation Method to Sample Experimental Distributions
Marinelli, Fabrizio; Faraldo-Gómez, José D.
2015-01-01
We introduce an enhanced-sampling method for molecular dynamics (MD) simulations referred to as ensemble-biased metadynamics (EBMetaD). The method biases a conventional MD simulation to sample a molecular ensemble that is consistent with one or more probability distributions known a priori, e.g., experimental intramolecular distance distributions obtained by double electron-electron resonance or other spectroscopic techniques. To this end, EBMetaD adds an adaptive biasing potential throughout the simulation that discourages sampling of configurations inconsistent with the target probability distributions. The bias introduced is the minimum necessary to fulfill the target distributions, i.e., EBMetaD satisfies the maximum-entropy principle. Unlike other methods, EBMetaD does not require multiple simulation replicas or the introduction of Lagrange multipliers, and is therefore computationally efficient and straightforward in practice. We demonstrate the performance and accuracy of the method for a model system as well as for spin-labeled T4 lysozyme in explicit water, and show how EBMetaD reproduces three double electron-electron resonance distance distributions concurrently within a few tens of nanoseconds of simulation time. EBMetaD is integrated in the open-source PLUMED plug-in (www.plumed-code.org), and can be therefore readily used with multiple MD engines. PMID:26083917
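Plain metadynamics, from which EBMetaD inherits its adaptive Gaussian bias, can be sketched on a 1D double well: repulsive hills deposited at visited positions gradually fill the starting basin until the walker escapes. EBMetaD additionally weights deposition by the target experimental distribution, which this illustrative sketch omits; all parameters are invented:

```python
import math
import random

def metadynamics_1d(steps=6000, w=0.1, sigma=0.4, beta=5.0, stride=10, seed=2):
    # Metropolis walker on a double well U(x) = (x^2 - 1)^2 with metadynamics:
    # Gaussian hills deposited at visited positions raise the energy there,
    # so the starting basin fills until the barrier at x = 0 can be crossed.
    rng = random.Random(seed)
    centers = []
    def bias(x):
        return sum(w * math.exp(-(x - c) ** 2 / (2 * sigma ** 2)) for c in centers)
    x, crossed = -1.0, False
    for step in range(steps):
        xn = x + rng.uniform(-0.2, 0.2)
        du = (xn ** 2 - 1) ** 2 + bias(xn) - (x ** 2 - 1) ** 2 - bias(x)
        if du <= 0 or rng.random() < math.exp(-beta * du):
            x = xn                      # Metropolis acceptance
        if step % stride == 0:
            centers.append(x)           # deposit a hill (the adaptive bias)
        crossed = crossed or x > 0.8
    return crossed, bias(-1.0)

crossed, filled = metadynamics_1d()
```

In EBMetaD the deposition is scaled so the accumulated bias drives the sampled distribution of a collective variable toward the experimental one, rather than flattening it uniformly.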
Simulation and training in Urology - in collaboration with ESU/ESUT.
Veneziano, Domenico; Cacciamani, Giovanni; Shekhar Biyani, Chandra
2018-01-01
Being a surgeon today means taking countless responsibilities on your shoulders. It is definitely a high-stakes job, yet, unlike other equally risky professions with their intense, focused, and demanding training schedules, it does not yet require any practical training certification. Simulation was introduced in aviation in the early 1930s with the "Link Trainer", designed to reproduce the most difficult flying scenario: landing on an aircraft carrier. Almost a century later, flight simulation is still becoming more sophisticated, while surgical training is only slowly starting to fill the gap. The aim of a simulator is to produce an "imitation of the operation of a real-world process or system over time". This short but effective definition explains why simulators are utilized across different fields. There is no doubt that surgeons continuously operate under stress, even in non-threatening situations, while performing a procedure. This condition adds a relevant variable to surgery: mastering technical skills is not always equal to "safe surgery". This is why "non-technical skills" (NTS) training should be part of any simulation-based training opportunity and will probably become an ever larger part of hands-on training programs.
12 CFR 390.41 - Construction of time limits.
Code of Federal Regulations, 2012 CFR
2012-01-01
... time prescribed by this subpart, the date of the act or event that commences the designated period of... follows: (1) If service is made by first class, registered, or certified mail, add three calendar days to the prescribed period; (2) If service is made by express mail or overnight delivery service, add one...
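The computation rule quoted above (exclude the day of the triggering act or event, then append days according to the method of service) can be expressed directly. The rule for electronic service is truncated in the excerpt, so it is deliberately omitted here; this is an illustrative sketch of the arithmetic, not legal advice:

```python
from datetime import date, timedelta

# Extra calendar days by method of service, per the rule text above.
SERVICE_EXTRA = {"first class": 3, "registered": 3, "certified": 3,
                 "express": 1, "overnight": 1}

def response_deadline(event_date, period_days, service):
    # The day of the act/event is not counted, so the period runs from the
    # following day; service-method days are then appended to the period.
    return event_date + timedelta(days=period_days + SERVICE_EXTRA[service])

# A 10-day period triggered June 1, served by certified mail: due June 14.
due = response_deadline(date(2012, 6, 1), 10, "certified")
```

The same period served by overnight delivery would instead end on June 12 (10 days plus 1).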
12 CFR 390.41 - Construction of time limits.
Code of Federal Regulations, 2014 CFR
2014-01-01
... time prescribed by this subpart, the date of the act or event that commences the designated period of... follows: (1) If service is made by first class, registered, or certified mail, add three calendar days to the prescribed period; (2) If service is made by express mail or overnight delivery service, add one...
12 CFR 1081.114 - Construction of time limits.
Code of Federal Regulations, 2012 CFR
2012-01-01
... statute, the date of the act or event that commences the designated period of time is not included. The... follows: (1) If service is made by first class, registered, or certified mail, add three calendar days to the prescribed period; (2) If service is made by express mail or overnight delivery service, add one...
12 CFR 1081.114 - Construction of time limits.
Code of Federal Regulations, 2013 CFR
2013-01-01
... statute, the date of the act or event that commences the designated period of time is not included. The... follows: (1) If service is made by First Class Mail, Registered Mail, or Certified Mail, add three... service, add one calendar day to the prescribed period; or (3) If service is made by electronic...
12 CFR 109.12 - Construction of time limits.
Code of Federal Regulations, 2012 CFR
2012-01-01
... or event that commences the designated period of time is not included. The last day so computed is... certified mail, add three calendar days to the prescribed period; (2) If service is made by express mail or overnight delivery service, add one calendar day to the prescribed period; or (3) If service is made by...
How predictable is the timing of a summer ice-free Arctic?
NASA Astrophysics Data System (ADS)
Jahn, Alexandra; Kay, Jennifer E.; Holland, Marika M.; Hall, David M.
2016-09-01
Climate model simulations give a large range of over 100 years for predictions of when the Arctic could first become ice free in the summer, and many studies have attempted to narrow this uncertainty range. However, given the chaotic nature of the climate system, what amount of spread in the prediction of an ice-free summer Arctic is inevitable? Based on results from large ensemble simulations with the Community Earth System Model, we show that internal variability alone leads to a prediction uncertainty of about two decades, while scenario uncertainty between the strong (Representative Concentration Pathway (RCP) 8.5) and medium (RCP4.5) forcing scenarios adds at least another 5 years. Common metrics of the past and present mean sea ice state (such as ice extent, volume, and thickness) as well as global mean temperatures do not allow a reduction of the prediction uncertainty from internal variability.
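The internal-variability spread can be illustrated with a synthetic ensemble: members share a forced decline in September sea-ice extent but differ by random noise standing in for internal variability, and the first year below the conventional 1 million km² "ice-free" threshold is recorded per member. All numbers below are invented for illustration, not CESM output:

```python
import numpy as np

def first_ice_free_year(extent, years, threshold=1.0):
    # First year with extent (in million km^2) below the ice-free threshold.
    below = np.nonzero(extent < threshold)[0]
    return int(years[below[0]]) if below.size else None

rng = np.random.default_rng(0)
years = np.arange(2000, 2081)
# Shared forced decline (illustrative slope) plus member-specific noise.
forced = np.clip(6.0 - 0.07 * (years - 2000), 0.0, None)
members = [forced + rng.normal(0.0, 0.5, years.size) for _ in range(30)]
firsts = [first_ice_free_year(m, years) for m in members]
spread = max(firsts) - min(firsts)   # prediction range from "internal variability"
```

Even with an identical forced trajectory, the noise alone produces a multi-decadal range of first ice-free years, mirroring the paper's point.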
Introduction to a system for implementing neural net connections on SIMD architectures
NASA Technical Reports Server (NTRS)
Tomboulian, Sherryl
1988-01-01
Neural networks have attracted much interest recently, and using parallel architectures to simulate neural networks is a natural and necessary application. The SIMD model of parallel computation is chosen, because systems of this type can be built with large numbers of processing elements. However, such systems are not naturally suited to generalized communication. A method is proposed that allows an implementation of neural network connections on massively parallel SIMD architectures. The key to this system is an algorithm permitting the formation of arbitrary connections between the neurons. A feature is the ability to add new connections quickly. It also has error recovery ability and is robust over a variety of network topologies. Simulations of the general connection system, and its implementation on the Connection Machine, indicate that the time and space requirements are proportional to the product of the average number of connections per neuron and the diameter of the interconnection network.
SWIFT: SPH With Inter-dependent Fine-grained Tasking
NASA Astrophysics Data System (ADS)
Schaller, Matthieu; Gonnet, Pedro; Chalk, Aidan B. G.; Draper, Peter W.
2018-05-01
SWIFT runs cosmological simulations on peta-scale machines for solving gravity and SPH. It uses the Fast Multipole Method (FMM) to calculate gravitational forces between nearby particles, combining these with long-range forces provided by a mesh that captures both the periodic nature of the calculation and the expansion of the simulated universe. SWIFT currently uses a single fixed but time-variable softening length for all the particles. Many useful external potentials are also available, such as galaxy haloes or stratified boxes that are used in idealised problems. SWIFT implements a standard LCDM cosmology background expansion and solves the equations in a comoving frame; equations of state of dark energy evolve with scale factor. The structure of the code allows modified-gravity solvers or self-interacting dark matter schemes to be implemented. Many hydrodynamics schemes are implemented in SWIFT and the software allows users to add their own.
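The short-range pairwise gravity that codes like SWIFT combine with mesh-based long-range forces can be caricatured by a direct-sum loop with a single fixed Plummer softening length, echoing the fixed-softening choice mentioned above. This is an O(N²) illustration, not the FMM the code actually uses:

```python
import numpy as np

def accelerations(pos, mass, eps=0.05, G=1.0):
    # Direct-sum, Plummer-softened self-gravity with one fixed softening
    # length eps for all particles (illustrative units, G = 1).
    acc = np.zeros_like(pos)
    for i in range(len(pos)):
        d = pos - pos[i]                        # vectors to all particles
        r2 = (d * d).sum(axis=1) + eps * eps    # softened squared distance
        w = G * mass / r2 ** 1.5
        w[i] = 0.0                              # exclude self-interaction
        acc[i] = (w[:, None] * d).sum(axis=0)
    return acc

# Two equal masses attract each other with equal and opposite accelerations.
pos = np.array([[-0.5, 0.0, 0.0], [0.5, 0.0, 0.0]])
mass = np.array([1.0, 1.0])
acc = accelerations(pos, mass)
```

The softening prevents the force from diverging at small separations, at the cost of suppressing it slightly below Newtonian at distances comparable to eps.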
Hot super-Earths stripped by their host stars.
Lundkvist, M S; Kjeldsen, H; Albrecht, S; Davies, G R; Basu, S; Huber, D; Justesen, A B; Karoff, C; Silva Aguirre, V; Van Eylen, V; Vang, C; Arentoft, T; Barclay, T; Bedding, T R; Campante, T L; Chaplin, W J; Christensen-Dalsgaard, J; Elsworth, Y P; Gilliland, R L; Handberg, R; Hekker, S; Kawaler, S D; Lund, M N; Metcalfe, T S; Miglio, A; Rowe, J F; Stello, D; Tingley, B; White, T R
2016-04-11
Simulations predict that hot super-Earth sized exoplanets can have their envelopes stripped by photoevaporation, which would present itself as a lack of these exoplanets. However, this absence in the exoplanet population has escaped a firm detection. Here we demonstrate, using asteroseismology on a sample of exoplanets and exoplanet candidates observed during the Kepler mission, that while there is an abundance of super-Earth sized exoplanets with low incident fluxes, none are found with high incident fluxes. We do not find any exoplanets with radii between 2.2 and 3.8 Earth radii with incident flux above 650 times the incident flux on Earth. This gap in the population of exoplanets is explained by evaporation of volatile elements and thus supports the predictions. The confirmation of a hot-super-Earth desert caused by evaporation will add an important constraint on simulations of planetary systems, since they must be able to reproduce the dearth of close-in super-Earths.
Gordon, M E; Edwards, M S; Sweeney, C R; Jerina, M L
2013-08-01
The objective of this study was to test the hypothesis that an equine diet formulated with chelated trace minerals, organic selenium, yeast culture, direct-fed microbials (DFM) and Yucca schidigera extract would decrease excretion of nutrients that have potential for environmental impact. Horses were acclimated to 100% pelleted diets formulated with (ADD) and without (CTRL) the aforementioned additives. Chelated sources of Cu, Zn, Mn, and Co were included in the ADD diet at a 100% replacement rate of the sulfate forms used in the CTRL diet. Additionally, the ADD diet included organic selenium yeast, DFM, and Yucca schidigera extract. Ten horses were fed the 2 experimental diets during two 42-d periods in a crossover design. Total fecal and urine collection occurred during the last 14 d of each period. Results indicate no significant differences in Cu, Zn, Mn, and Co concentrations excreted via urine (P > 0.05) due to dietary treatment. There was no difference between fecal Cu and Mn concentrations (P > 0.05) based on diet consumed. Mean fecal Zn and Co concentrations excreted by horses consuming ADD were greater than CTRL (P < 0.003). Differences due to diet were found for selenium fecal (P < 0.0001) and urine (P < 0.0001) excretions, with decreased concentrations found for horses consuming organic selenium yeast (ADD). In contrast, fecal K (%) was greater (P = 0.0421) for horses consuming ADD, whereas concentrations of fecal solids, total N, ammonia N, P, total ammonia, and fecal output did not differ between dietary treatments (P > 0.05). In feces stockpiled to simulate a crude composting method, no differences (P > 0.05) due to diet were detected for particle size, temperature, moisture, OM, total N, P, phosphate, K, potash, or ammonia N. Although no difference (P = 0.2737) in feces stockpile temperature due to diet was found, temperature differences over time were documented (P < 0.0001).
In conclusion, the addition of certain chelated mineral sources, organic Se yeast, DFM, and Yucca schidigera extract did not decrease most nutrient concentrations excreted. Horses consuming organic selenium as part of the additive diet had lower fecal and urine Se concentrations, as well as greater fecal K concentrations.
Decloedt, A I; Bailly-Chouriberry, L; Vanden Bussche, J; Garcia, P; Popot, M-A; Bonnaire, Y; Vanhaecke, L
2015-08-01
Traditionally, steroids other than testosterone are considered to be synthetic, anabolic steroids. Nevertheless, in stallions, it has been shown that β-Bol can originate from naturally present testosterone. Other precursors, including phytosterols from feed, have been put forward to explain the prevalence of low levels of steroids (including β-Bol and ADD) in the urine of mares and geldings. However, the possible biotransformation and the identification of the precursors have thus far not been investigated in horses. To study the possible endogenous digestive transformation, in vitro simulations of the horse hindgut were set up, using fecal inocula obtained from eight different horses. The functionality of the in vitro model was confirmed by monitoring the formation of short-chain fatty acids and the consumption of amino acids and carbohydrates throughout the digestion process. In vitro digestion samples were analyzed with a validated UHPLC-MS/MS method. The addition of β-Bol gave rise to the formation of ADD (androsta-1,4-diene-3,17-dione) or αT. Upon addition of ADD to the in vitro digestions, the transformation of ADD to β-Bol was observed for all eight horses' inocula, in line with previously obtained in vivo results, again confirming the functionality of the in vitro model. The transformation ratio proved to be inoculum, and thus horse, dependent. The addition of pure phytosterols (50% β-sitosterol) or phytosterol-rich herbal supplements, on the other hand, did not induce the detection of β-Bol; only low concentrations of AED, a testosterone precursor, could be found (0.1 ng/mL). As such, the digestive transformation of ADD could be linked to the detection of β-Bol, and the consumption of phytosterols to low concentrations of AED, but there is no direct link between phytosterols and β-Bol.
Raguin, Olivier; Gruaz-Guyon, Anne; Barbet, Jacques
2002-11-01
An add-in to Microsoft Excel was developed to simulate multiple binding equilibria. A partition function, readily written even when the equilibrium is complex, describes the experimental system. It involves the concentrations of the different free molecular species and of the different complexes present in the experiment. As a result, the software is not restricted to a series of predefined experimental setups but can handle a large variety of problems involving up to nine independent molecular species. Binding parameters are estimated by nonlinear least-squares fitting of experimental measurements as supplied by the user. The fitting process allows user-defined weighting of the experimental data. The flexibility of the software and the way it may be used to describe common experimental situations and to deal with usual problems such as tracer reactivity or nonspecific binding are demonstrated by a few examples. The software is available free of charge upon request.
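The fitting approach described above can be sketched in miniature. The snippet below (Python, standard library only) fits the simplest possible case, 1:1 binding with y = Bmax·L/(Kd + L), by nonlinear least squares, using a grid search over Kd with Bmax solved analytically at each grid point; the actual add-in handles far more general partition functions and user-defined weighting. The function name and data here are illustrative, not taken from the software.

```python
# Minimal sketch (assumption: a 1:1 binding model y = Bmax*L/(Kd+L);
# the Excel add-in described above handles far more general equilibria).
def fit_one_site(conc, bound, kd_grid):
    """Least-squares fit of (Kd, Bmax) for simple 1:1 binding.

    For a fixed Kd the model is linear in Bmax, so Bmax is solved in
    closed form and only Kd is scanned over the grid.
    """
    best = None
    for kd in kd_grid:
        f = [c / (kd + c) for c in conc]  # fractional occupancy at each ligand conc
        bmax = sum(y * fi for y, fi in zip(bound, f)) / sum(fi * fi for fi in f)
        sse = sum((y - bmax * fi) ** 2 for y, fi in zip(bound, f))
        if best is None or sse < best[0]:
            best = (sse, kd, bmax)
    return best[1], best[2]

# Noise-free synthetic data generated from Kd = 2.0, Bmax = 10.0
conc = [0.5, 1, 2, 4, 8, 16]
data = [10.0 * c / (2.0 + c) for c in conc]
kd, bmax = fit_one_site(conc, data, [i / 100 for i in range(1, 1001)])
```

With noise-free data the grid search recovers the generating parameters, since the grid contains Kd = 2.0 exactly; real data would of course give only estimates.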
Design of a multifunctional plasmonic optical device using a micro-ring system
NASA Astrophysics Data System (ADS)
Pornsuwancharoen, N.; Youplao, P.; Amiri, I. S.; Ali, J.; Yupapin, P.
2018-03-01
A multi-function electronic device based on a plasmonic circuit is designed and simulated using a micro-ring system. A nonlinear micro-ring resonator is employed, and selected electronic-device functions such as rectifier, amplifier, regulator and filter are investigated. The system consists of a nonlinear micro-ring resonator, known as a modified add-drop filter, made of an InGaAsP/InP material. A stacked waveguide of InGaAsP/InP-graphene-gold/silver forms part of the device, and the required output signals are formed by specific control of the input signals via the input and add ports. The material and device aspects are reviewed. The simulation results are obtained using the OptiWave and MATLAB software packages; all device parameters are based on fabrication technology capability.
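For background on the modified add-drop filter mentioned above, the standard textbook power-transfer formulas for a generic two-coupler ring resonator can be sketched as follows. This is the conventional lossless-coupler model; the coupling coefficients and loss used below are illustrative and are not the parameters of the InGaAsP/InP plasmonic device in the paper.

```python
import math

# Generic add-drop ring resonator power transfer (standard textbook model,
# not the specific plasmonic stack described in the abstract).
def add_drop_response(phi, t1, t2, a):
    """Return (through, drop) port power transmission.

    phi -- round-trip phase; t1, t2 -- coupler through-coefficients;
    a   -- single-pass amplitude transmission (1.0 = lossless ring).
    """
    denom = 1 - 2 * t1 * t2 * a * math.cos(phi) + (t1 * t2 * a) ** 2
    through = (t2**2 * a**2 - 2 * t1 * t2 * a * math.cos(phi) + t1**2) / denom
    drop = ((1 - t1**2) * (1 - t2**2) * a) / denom
    return through, drop

# A lossless, symmetric ring drops all power at resonance (phi = 0)
thru, drop = add_drop_response(0.0, t1=0.9, t2=0.9, a=1.0)
```

The symmetric lossless case illustrates critical coupling: at resonance the through port goes dark and all power exits the drop port, which is the behavior a filter function exploits.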
Time-Scale Modification of Complex Acoustic Signals in Noise
1994-02-04
[Extraction fragments from the report's list of figures: Fig. 5, response from a closing stapler; Fig. 6, short-time processing of long waveforms; Fig. 7, time-scale expansion (x2) of a sequence of transients using filter bank/overlap-add; Fig. 8, time-scale expansion (x2) of a closing stapler using filter bank/overlap-add; Fig. 9, composite subband time-scale modification.] INTRODUCTION: Short-duration complex sounds, as from the closing of a stapler or the tapping of a drum stick, often consist of a series of brief
USACDEC Experimentation Manual
1981-10-01
Commander, Instrumentation Command (Prov), who is responsible for the cinematic form of the films. The writing requirements for discrete sections of the... level of simulated realism required. Higher levels of simulated realism will require higher degrees of control to insure the test events occur as... experimentation, the "enemy" created to add realism. Aggressor forces may be represented by live troops in the field or by mechanical targets with or
Simulated electronic heterodyne recording and processing of pulsed-laser holograms
NASA Technical Reports Server (NTRS)
Decker, A. J.
1979-01-01
The electronic recording of pulsed-laser holograms is proposed. The polarization sensitivity of each resolution element of the detector is controlled independently to add an arbitrary phase to the image waves. This method, which can be used to simulate heterodyne recording and to process three-dimensional optical images, is based on a similar method for heterodyne recording and processing of continuous-wave holograms.
Evaluation of search strategies for microcalcifications and masses in 3D images
NASA Astrophysics Data System (ADS)
Eckstein, Miguel P.; Lago, Miguel A.; Abbey, Craig K.
2018-03-01
Medical imaging is quickly evolving towards 3D image modalities such as computed tomography (CT), magnetic resonance imaging (MRI) and digital breast tomosynthesis (DBT). These 3D image modalities add volumetric information but further increase the need for radiologists to search through the image data set. Although much is known about search strategies in 2D images, less is known about the functional consequences of different 3D search strategies. We instructed readers to use two different search strategies: drillers had their eye movements restricted to a few regions while they quickly scrolled through the image stack, while scanners explored the 2D slices through eye movements. We used real-time eye position monitoring to ensure observers followed the drilling or the scanning strategy while approximately preserving the percentage of the volumetric data covered by the useful field of view. We investigated search for two signals: a simulated microcalcification and a larger simulated mass. Results show an interaction between the search strategy and lesion type. In particular, scanning provided significantly better detectability for microcalcifications at the cost of five times longer search times, while there was little change in the detectability for the larger simulated masses. Analyses of eye movements support the hypothesis that the effectiveness of a search strategy in 3D imaging arises from the interaction of the fixational sampling of visual information and the signals' visibility in the visual periphery.
Numerical investigation of internal high-speed viscous flows using a parabolic technique
NASA Technical Reports Server (NTRS)
Anderson, O. L.; Power, G. D.
1985-01-01
A feasibility study has been conducted to assess the applicability of an existing parabolic analysis (ADD-Axisymmetric Diffuser Duct), developed previously for subsonic viscous internal flows, to mixed supersonic/subsonic flows with heat addition simulating a SCRAMJET combustor. A study was conducted with the ADD code modified to include additional convection effects in the normal momentum equation when supersonic expansion and compression waves are present. A set of test problems with weak shock and expansion waves have been analyzed with this modified ADD method and stable and accurate solutions were demonstrated provided the streamwise step size was maintained at levels larger than the boundary layer displacement thickness. Calculations made with further reductions in step size encountered departure solutions consistent with strong interaction theory. Calculations were also performed for a flow field with a flame front in which a specific heat release was imposed to simulate a SCRAMJET combustor. In this case the flame front generated relatively thick shear layers which aggravated the departure solution problem. Qualitatively correct results were obtained for these cases using a marching technique with the convective terms in the normal momentum equation suppressed. It is concluded from the present study that for the class of problems where strong viscous/inviscid interactions are present a global iteration procedure is required.
Chen, Xi; Shi, Yuechun; Lou, Fei; Chen, Yiting; Yan, Min; Wosinski, Lech; Qiu, Min
2014-10-20
An optically pumped thermo-optic (TO) silicon ring add-drop filter with fast thermal response is experimentally demonstrated. We propose that a metal-insulator-metal (MIM) light absorber can be integrated into silicon TO devices, acting as a localized heat source that can be activated remotely by a pump beam. The MIM absorber design introduces less thermal capacity to the device, compared to conventional electrically-driven approaches. Experimentally, the absorber-integrated add-drop filter shows an optical response time of 13.7 μs following the 10%-90% rule (equivalent to an exponential time constant of 5 μs) and a wavelength shift over pump power of 60 pm/mW. The photothermally tunable add-drop filter may provide new perspectives for all-optical routing and switching in integrated Si photonic circuits.
40 CFR 63.4100 - What are my general requirements for complying with this subpart?
Code of Federal Regulations, 2014 CFR
2014-07-01
... compliant material option or the emission rate without add-on controls option, as specified in § 63.4091(a... coating operation(s) for which you use the emission rate with add-on controls option, as specified in § 63... operating limits for emission capture systems and add-on control devices required by § 63.4092 at all times...
40 CFR 63.3500 - What are my general requirements for complying with this subpart?
Code of Federal Regulations, 2010 CFR
2010-07-01
... material option or the emission rate without add-on controls option, as specified in § 63.3491(a) and (b... which you use the emission rate with add-on controls option, as specified in § 63.3491(c), or the... systems and add-on control devices required by § 63.3492 at all times, except for those for which you use...
40 CFR 63.4700 - What are my general requirements for complying with this subpart?
Code of Federal Regulations, 2010 CFR
2010-07-01
... material option or the emission rate without add-on controls option, as specified in § 63.4691(a) and (b... operation(s) for which you use the emission rate with add-on controls option, as specified in § 63.4691(c... and add-on control devices required by § 63.4692 at all times, except during periods of SSM, and...
40 CFR 63.4700 - What are my general requirements for complying with this subpart?
Code of Federal Regulations, 2012 CFR
2012-07-01
... compliant material option or the emission rate without add-on controls option, as specified in § 63.4691(a... coating operation(s) for which you use the emission rate with add-on controls option, as specified in § 63... systems and add-on control devices required by § 63.4692 at all times, except during periods of SSM, and...
40 CFR 63.4100 - What are my general requirements for complying with this subpart?
Code of Federal Regulations, 2013 CFR
2013-07-01
... compliant material option or the emission rate without add-on controls option, as specified in § 63.4091(a... coating operation(s) for which you use the emission rate with add-on controls option, as specified in § 63... operating limits for emission capture systems and add-on control devices required by § 63.4092 at all times...
40 CFR 63.4700 - What are my general requirements for complying with this subpart?
Code of Federal Regulations, 2013 CFR
2013-07-01
... compliant material option or the emission rate without add-on controls option, as specified in § 63.4691(a... coating operation(s) for which you use the emission rate with add-on controls option, as specified in § 63... systems and add-on control devices required by § 63.4692 at all times, except during periods of SSM, and...
40 CFR 63.3500 - What are my general requirements for complying with this subpart?
Code of Federal Regulations, 2014 CFR
2014-07-01
... compliant material option or the emission rate without add-on controls option, as specified in § 63.3491(a... operation(s) for which you use the emission rate with add-on controls option, as specified in § 63.3491(c... capture systems and add-on control devices required by § 63.3492 at all times, except for those for which...
40 CFR 63.3500 - What are my general requirements for complying with this subpart?
Code of Federal Regulations, 2012 CFR
2012-07-01
... compliant material option or the emission rate without add-on controls option, as specified in § 63.3491(a... operation(s) for which you use the emission rate with add-on controls option, as specified in § 63.3491(c... capture systems and add-on control devices required by § 63.3492 at all times, except for those for which...
40 CFR 63.4100 - What are my general requirements for complying with this subpart?
Code of Federal Regulations, 2012 CFR
2012-07-01
... compliant material option or the emission rate without add-on controls option, as specified in § 63.4091(a... coating operation(s) for which you use the emission rate with add-on controls option, as specified in § 63... operating limits for emission capture systems and add-on control devices required by § 63.4092 at all times...
40 CFR 63.4700 - What are my general requirements for complying with this subpart?
Code of Federal Regulations, 2011 CFR
2011-07-01
... material option or the emission rate without add-on controls option, as specified in § 63.4691(a) and (b... operation(s) for which you use the emission rate with add-on controls option, as specified in § 63.4691(c... and add-on control devices required by § 63.4692 at all times, except during periods of SSM, and...
40 CFR 63.3500 - What are my general requirements for complying with this subpart?
Code of Federal Regulations, 2011 CFR
2011-07-01
... material option or the emission rate without add-on controls option, as specified in § 63.3491(a) and (b... which you use the emission rate with add-on controls option, as specified in § 63.3491(c), or the... systems and add-on control devices required by § 63.3492 at all times, except for those for which you use...
40 CFR 63.3500 - What are my general requirements for complying with this subpart?
Code of Federal Regulations, 2013 CFR
2013-07-01
... compliant material option or the emission rate without add-on controls option, as specified in § 63.3491(a... operation(s) for which you use the emission rate with add-on controls option, as specified in § 63.3491(c... capture systems and add-on control devices required by § 63.3492 at all times, except for those for which...
40 CFR 63.4700 - What are my general requirements for complying with this subpart?
Code of Federal Regulations, 2014 CFR
2014-07-01
... compliant material option or the emission rate without add-on controls option, as specified in § 63.4691(a... coating operation(s) for which you use the emission rate with add-on controls option, as specified in § 63... systems and add-on control devices required by § 63.4692 at all times, except during periods of SSM, and...
Filling the Simulated Sandtrap
2009-06-30
Rover team members Mike Seibert (left) and Paolo Bellutta add a barrowful of soil mixture to the sloped box where a test rover will be used for assessing possible maneuvers for the NASA rover Spirit to use in escaping from a sandtrap on Mars.
NASA Astrophysics Data System (ADS)
Francés, J.; Bleda, S.; Neipp, C.; Márquez, A.; Pascual, I.; Beléndez, A.
2013-03-01
The finite-difference time-domain method (FDTD) allows electromagnetic field distribution analysis as a function of time and space. The method is applied to analyze holographic volume gratings (HVGs) for the near-field distribution at optical wavelengths. Usually, this application requires the simulation of wide areas, which implies more memory and time processing. In this work, we propose a specific implementation of the FDTD method including several add-ons for a precise simulation of optical diffractive elements. Values in the near-field region are computed considering the illumination of the grating by means of a plane wave for different angles of incidence and including absorbing boundaries as well. We compare the results obtained by FDTD with those obtained using a matrix method (MM) applied to diffraction gratings. In addition, we have developed two optimized versions of the algorithm, for both CPU and GPU, in order to analyze the improvement of using the new NVIDIA Fermi GPU architecture versus highly tuned multi-core CPU as a function of the size simulation. In particular, the optimized CPU implementation takes advantage of the arithmetic and data transfer streaming SIMD (single instruction multiple data) extensions (SSE) included explicitly in the code and also of multi-threading by means of OpenMP directives. A good agreement between the results obtained using both FDTD and MM methods is obtained, thus validating our methodology. Moreover, the performance of the GPU is compared to the SSE+OpenMP CPU implementation, and it is quantitatively determined that a highly optimized CPU program can be competitive for a wider range of simulation sizes, whereas GPU computing becomes more powerful for large-scale simulations.
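The core of the FDTD method discussed above is a leapfrog update of interleaved electric and magnetic fields. A minimal one-dimensional sketch in normalized units (Courant number S = 1, crude reflecting boundaries, a soft Gaussian source) is shown below; it omits the paper's absorbing boundaries, angled plane-wave illumination, and SSE/OpenMP/GPU optimizations, and all sizes are illustrative.

```python
import math

# Minimal 1-D FDTD sketch (Yee leapfrog scheme in normalized units).
# Boundaries are simply left untouched (reflecting), unlike the absorbing
# boundaries used in the paper's grating simulations.
def fdtd_1d(steps, size, src_pos):
    ez = [0.0] * size  # E field at integer grid points
    hy = [0.0] * size  # H field at half-integer grid points
    for n in range(steps):
        for i in range(size - 1):          # update H from the curl of E
            hy[i] += ez[i + 1] - ez[i]
        for i in range(1, size):           # update E from the curl of H
            ez[i] += hy[i] - hy[i - 1]
        # Soft additive Gaussian source injected at one grid point
        ez[src_pos] += math.exp(-(((n - 30) / 10.0) ** 2))
    return ez

ez = fdtd_1d(steps=100, size=200, src_pos=100)
```

The two inner loops are what the paper's SSE and GPU versions vectorize: each field update is independent across grid points, which is why the method maps well to SIMD units and to large-scale GPU simulation.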
NASA Astrophysics Data System (ADS)
Satrio, Reza Indra; Subiyanto
2018-03-01
The growth of electric loads has a direct impact on power distribution systems. Voltage drop and power losses are among the important considerations in power distribution. This paper presents a modelling approach used to restructure the electrical network configuration, reduce voltage drop, reduce power losses, and add a new distribution transformer to enhance the reliability of the power distribution system. Restructuring the electrical network was aimed at analysing and investigating the electric loads of a distribution transformer. Measurements of real voltage and real current were performed twice for each consumer, once in the morning and once at night during peak load. Design and simulation were conducted using the ETAP Power Station software. Based on the simulation results and real measurements, the percentage of voltage drop and the total power losses did not comply with SPLN (Standard PLN) 72:1987. After a new distribution transformer was added and the electricity network configuration restructured, the simulation showed a reduction in voltage drop from 1.3%-31.3% to 8.1%-9.6% and in power losses from 646.7 watts to 233.29 watts. The results show that restructuring the electricity network configuration and adding a new distribution transformer can be applied as an effective method to reduce voltage drop and power losses.
Jun, Gyuchan T; Morris, Zoe; Eldabi, Tillal; Harper, Paul; Naseer, Aisha; Patel, Brijesh; Clarkson, John P
2011-05-19
There is an increasing recognition that modelling and simulation can assist in the process of designing health care policies, strategies and operations. However, the current use is limited and answers to questions such as what methods to use and when remain somewhat underdeveloped. The aim of this study is to provide a mechanism for decision makers in health services planning and management to compare a broad range of modelling and simulation methods so that they can better select and use them or better commission relevant modelling and simulation work. This paper proposes a modelling and simulation method comparison and selection tool developed from a comprehensive literature review, the research team's extensive expertise and inputs from potential users. Twenty-eight different methods were identified, characterised by their relevance to different application areas, project life cycle stages, types of output and levels of insight, and four input resources required (time, money, knowledge and data). The characterisation is presented in matrix forms to allow quick comparison and selection. This paper also highlights significant knowledge gaps in the existing literature when assessing the applicability of particular approaches to health services management, where modelling and simulation skills are scarce let alone money and time. A modelling and simulation method comparison and selection tool is developed to assist with the selection of methods appropriate to supporting specific decision making processes. In particular it addresses the issue of which method is most appropriate to which specific health services management problem, what the user might expect to be obtained from the method, and what is required to use the method. In summary, we believe the tool adds value to the scarce existing literature on methods comparison and selection.
Karain, Wael I
2017-11-28
Proteins undergo conformational transitions over different time scales. These transitions are closely intertwined with the protein's function. Numerous standard techniques such as principal component analysis are used to detect these transitions in molecular dynamics simulations. In this work, we add a new method that has the ability to detect transitions in dynamics based on the recurrences in the dynamical system. It combines bootstrapping and recurrence quantification analysis. We start from the assumption that a protein has a "baseline" recurrence structure over a given period of time. Any statistically significant deviation from this recurrence structure, as inferred from complexity measures provided by recurrence quantification analysis, is considered a transition in the dynamics of the protein. We apply this technique to a 132 ns long molecular dynamics simulation of the β-Lactamase Inhibitory Protein BLIP. We are able to detect conformational transitions in the nanosecond range in the recurrence dynamics of the BLIP protein during the simulation. The results compare favorably to those extracted using the principal component analysis technique. The recurrence quantification analysis based bootstrap technique is able to detect transitions between different dynamics states for a protein over different time scales. It is not limited to linear dynamics regimes, and can be generalized to any time scale. It also has the potential to be used to cluster frames in molecular dynamics trajectories according to the nature of their recurrence dynamics. One shortcoming of this method is the need for large enough time windows to ensure good statistical quality for the recurrence complexity measures needed to detect the transitions.
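The recurrence measures underlying the method above can be illustrated with the simplest RQA quantity, the recurrence rate: the fraction of state pairs that return within a distance eps of each other. The sketch below uses illustrative one-dimensional series rather than MD coordinates, and omits embedding, windowing, and the bootstrap step described in the abstract.

```python
import math

# Minimal recurrence-rate (RR) sketch; real RQA adds embedding, diagonal-
# and vertical-line measures, and (in the paper) bootstrapped windows.
def recurrence_rate(series, eps):
    """Fraction of (i, j) state pairs with |x_i - x_j| <= eps."""
    n = len(series)
    hits = sum(
        1
        for i in range(n)
        for j in range(n)
        if abs(series[i] - series[j]) <= eps
    )
    return hits / (n * n)

# A periodic signal revisits its states often; a drifting one almost never does
periodic = [math.sin(0.5 * i) for i in range(100)]
drifting = [0.5 * i for i in range(100)]
rr_periodic = recurrence_rate(periodic, eps=0.1)
rr_drifting = recurrence_rate(drifting, eps=0.1)
```

A shift in such measures between time windows, beyond what bootstrap resampling of a baseline window allows, is what the paper treats as a conformational transition.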
Rebecca Ralston; Joseph Buongiorno; Benedict Schulte; Jeremy Fried
2003-01-01
WestPro is an add-in program designed to work with Microsoft Excel to simulate the growth of uneven-aged Douglas-fir (Pseudotsuga menziesii (Mirb.) Franco) stands in the Pacific Northwest region of the United States. Given the initial stand state, defined as the number of softwood and hardwood trees per acre by diameter class, WestPro predicts the...
ERIC Educational Resources Information Center
Shanklin, Stephen B.; Ehlen, Craig R.
2017-01-01
This paper extends the use of the Monopoly® board game as an economic simulation exercise designed to reinforce an understanding of how the accounting cycle impacts the financial statements used to evaluate management performance. This extension adds elements of debt not previously utilized to allow for an introduction of the fundamentals of ratio…
Simulation of Ultra-Small MOSFETs Using a 2-D Quantum-Corrected Drift-Diffusion Model
NASA Technical Reports Server (NTRS)
Biegel, Bryan A.; Rafferty, Conor S.; Yu, Zhiping; Dutton, Robert W.; Ancona, Mario G.; Saini, Subhash (Technical Monitor)
1998-01-01
We describe an electronic transport model and an implementation approach that respond to the challenges of device modeling for gigascale integration. We use the density-gradient (DG) transport model, which adds tunneling and quantum smoothing of carrier density profiles to the drift-diffusion model. We present the current implementation of the DG model in PROPHET, a partial differential equation solver developed by Lucent Technologies. This implementation approach permits rapid development and enhancement of models, as well as run-time modifications and model switching. We show that even in typical bulk transport devices such as P-N diodes and BJTs, DG quantum effects can significantly modify the I-V characteristics. Quantum effects are shown to be even more significant in small, surface transport devices, such as sub-0.1 micron MOSFETs. In thin-oxide MOS capacitors, we find that quantum effects may reduce gate capacitance by 25% or more. The inclusion of quantum effects in simulations dramatically improves the match between C-V simulations and measurements. Significant quantum corrections also occur in the I-V characteristics of short-channel MOSFETs due to the gate capacitance correction.
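The density-gradient correction described above can be sketched schematically. In one common formulation (after Ancona's equation-of-state generalization; signs, prefactors, and the effective-mass statistics factor vary between references), the drift-diffusion flux keeps its classical form but the electrostatic potential is augmented by a quantum potential Λ_n:

```latex
% Schematic density-gradient correction (prefactors vary by reference):
% phi -> phi + Lambda_n in the electron drift-diffusion flux, with
\Lambda_n \;=\; 2\, b_n\, \frac{\nabla^2 \sqrt{n}}{\sqrt{n}},
\qquad
b_n \;\propto\; \frac{\hbar^2}{q\, m_n^{*}}
```

The ∇²√n term penalizes sharp carrier-density gradients, which smooths the density profile and lets carriers leak into classically forbidden regions, giving the tunneling and quantum-confinement corrections to C-V and I-V characteristics reported in the abstract.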
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pinilla, Maria Isabel
This report seeks to study and benchmark code predictions against experimental data; determine parameters to match MCNP-simulated detector response functions to experimental stilbene measurements; add stilbene processing capabilities to DRiFT; and improve NEUANCE detector array modeling and analysis using new MCNP6 and DRiFT features.
A Functional Comparison of Lunar Regoliths and Their Simulants
NASA Technical Reports Server (NTRS)
Rickman, D.; Edmunson, J.; McLemore, C.
2012-01-01
Lunar regolith simulants are essential to the development of technology for human exploration of the Moon. Any equipment that will interact with the surface environment must be tested with simulant to mitigate risk. To reduce the greatest amount of risk, the simulant must replicate the lunar surface as well as possible. To quantify the similarities and differences between simulants, the Figures of Merit were developed. The Figures of Merit software compares the simulants and regolith by particle size, particle shape, density, and bulk chemistry and mineralogy; these four properties dictate the majority of the remaining characteristics of a geologic material. There are limitations to both the current Figures of Merit approach and simulants in general. The effect of particle textures is lacking in the Figures of Merit software, and research into this topic has only recently begun with applications to simulants. In addition, not all of the properties for lunar regolith are defined sufficiently for simulant reproduction or comparison; for example, the size distribution of particles greater than 1 centimeter and the makeup of particles less than 10 micrometers is not well known. For simulants, contamination by terrestrial weathering products or undesired trace phases in feedstock material is a major issue. Vapor deposited rims have not yet been created for simulants. Fortunately, previous limitations such as the lack of agglutinates in simulants have been addressed and commercial companies are now making agglutinate material for simulants. Despite some limitations, the Figures of Merit sufficiently quantify the comparison between simulants and regolith for useful application in lunar surface technology. Over time, the compilation and analysis of simulant user data will add an advantageous predictive capability to the Figures of Merit, accurately relating Figures of Merit characteristics to simulant user parameters.
NIMROD Simulations of Spheromak Formation, Magnetic Reconnection and Energy Confinement in SSPX
NASA Astrophysics Data System (ADS)
Hooper, E. B.; Sovinec, C. R.
2005-10-01
The SSPX spheromak is formed and driven by a coaxial electrostatic gun that injects current and magnetic flux. Magnetic fluctuations are associated with the conversion of toroidal to poloidal magnetic flux during formation. After formation, fluctuations that break axisymmetry degrade magnetic surfaces, and are anti-correlated with the core temperature and energy confinement time. We report NIMROD simulations extending earlier work^1 supporting the SSPX experiment through predictions of performance and providing insight. The simulations are in fairly good agreement with features observed in SSPX and underscore the importance of current profile control in mitigating magnetic fluctuation amplitudes and improving confinement. The simulations yield insight into magnetic reconnection and the relationship of fluctuations to field line stochasticity. We have added external circuit equations for the new 32 module capacitor bank in SSPX that will add flexibility in shaping the injector current pulses and substantially increase the injected currents and the magnetic energy. New NIMROD simulations of SSPX lead to higher temperature plasmas than in previous simulations. *Work supported by U.S. DOE, under Contr. No. W-7405-ENG-48 at U. Cal. LLNL and under grant FG02-01ER54661 at U. Wisc Madison. ^1C. R. Sovinec, B. I. Cohen, et al., Phys. Rev. Lett. 94, 035003 (2005); B. I. Cohen, E. B. Hooper, et al., Phys. Plasmas 12, 056106 (2005).
Neural Processing of Musical and Vocal Emotions Through Cochlear Implants Simulation.
Ahmed, Duha G; Paquette, Sebastian; Zeitouni, Anthony; Lehmann, Alexandre
2018-05-01
Cochlear implants (CIs) partially restore the sense of hearing in the deaf. However, the ability to recognize emotions in speech and music is reduced due to the implant's electrical signal limitations and the patient's altered neural pathways. Electrophysiological correlates of these limitations are not yet well established. Here we aimed to characterize the effect of CIs on auditory emotion processing and, for the first time, directly compare vocal and musical emotion processing through a CI simulator. We recorded 16 normal-hearing participants' electroencephalographic activity while listening to vocal and musical emotional bursts in their original form and in a degraded (CI-simulated) condition. We found prolonged P50 latency and reduced N100-P200 complex amplitude in the CI-simulated condition. This points to a limitation in encoding sound signals processed through CI simulation. When comparing the processing of vocal and musical bursts, we found a delay in latency for the musical bursts compared to the vocal bursts in both conditions (original and CI-simulated). This suggests that despite the cochlear implant's limitations, the auditory cortex can distinguish between vocal and musical stimuli. In addition, it adds to the literature supporting the complexity of musical emotion. Replicating this study with actual CI users might lead to characterizing emotional processing in CI users and could ultimately help develop optimal rehabilitation programs or device processing strategies to improve CI users' quality of life.
NASA Astrophysics Data System (ADS)
Thorne, Ben; Alonso, David; Naess, Sigurd; Dunkley, Jo
2017-04-01
PySM generates full-sky simulations of Galactic foregrounds in intensity and polarization relevant for CMB experiments. The components simulated are thermal dust, synchrotron, AME, free-free, and CMB at a given Nside, with options to integrate over a top-hat bandpass, to add white instrument noise, and to smooth with a given beam. PySM is based on the large-scale Galactic part of the Planck Sky Model code and uses some of its inputs.
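The top-hat bandpass option can be illustrated with a minimal sketch. This is not the PySM API; the power-law spectral index, amplitude, and band edges are assumed demonstration values for a synchrotron-like foreground.

```python
import math

# Illustrative sketch (not the PySM API): averaging a power-law foreground
# spectrum over a top-hat bandpass. Spectral index -3 and the 27-33 GHz band
# are assumed values for demonstration.

def tophat_bandpass_mean(sed, nu_min, nu_max, n=1000):
    """Average an SED over a top-hat bandpass by midpoint quadrature."""
    step = (nu_max - nu_min) / n
    freqs = [nu_min + (i + 0.5) * step for i in range(n)]
    return sum(sed(nu) for nu in freqs) / n

# Synchrotron-like power law normalized to 1 at 30 GHz
sed = lambda nu: 1.0 * (nu / 30.0) ** -3.0
mean_amp = tophat_bandpass_mean(sed, 27.0, 33.0)  # slightly above the 30 GHz value
```

Because the spectrum is convex, the band-averaged amplitude exceeds the band-center value, which is exactly why bandpass integration matters for foreground modeling.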
Hidalgo-Vega, Alvaro; Ramos-Goñi, Juan Manuel; Villoro, Renata
2014-12-01
Ranolazine is an antianginal agent that was approved in the EU in 2008 as an add-on therapy for symptomatic chronic angina pectoris treatment in patients who are inadequately controlled by, or are intolerant to, first-line antianginal therapies. These patients' quality of life is significantly affected by more frequent angina events, which increase the risk of revascularization. To assess the cost-utility of ranolazine versus placebo as an add-on therapy for the symptomatic treatment of patients with chronic angina pectoris in Spain. A decision tree model with 1-year time horizon was designed. Transition probabilities and utility values for different angina frequencies were obtained from the literature. Costs were obtained from Spanish official DRGs for patients with chronic angina pectoris. We calculated the incremental cost-utility ratio of using ranolazine compared with a placebo. Sensitivity analyses, by means of Monte Carlo simulations, were performed. Acceptability curves and expected value of perfect information were calculated. The incremental cost-utility ratio was €8,455 per quality-adjusted life-year (QALY) per patient in Spain. Sensitivity analyses showed that if the decision makers' willingness to pay is €15,000 per QALY, the treatment with ranolazine will be cost effective at a 95 % level of confidence. The incremental cost-utility ratio is particularly sensitive to changes in utility values of those non-hospitalized patients with mild or moderate angina frequency. Ranolazine is a highly efficient add-on therapy for the symptomatic treatment of chronic angina pectoris in patients who are inadequately controlled by, or intolerant to, first-line antianginal therapies in Spain.
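The core calculation, an incremental cost-utility ratio checked against a willingness-to-pay threshold via Monte Carlo sampling, can be sketched as follows. All costs, QALY gains, and distributions below are invented for illustration; they are not the values from the Spanish study.

```python
import random

# Hedged sketch of an incremental cost-utility ratio (ICUR) with a simple
# probabilistic sensitivity analysis. All numbers are hypothetical.

def icur(cost_new, cost_old, qaly_new, qaly_old):
    """Incremental cost-utility ratio: extra cost per extra QALY."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

random.seed(1)
threshold = 15000.0  # willingness to pay per QALY (EUR), as in the abstract
acceptable = 0
trials = 10000
for _ in range(trials):
    # Sample uncertain inputs from assumed normal distributions
    d_cost = random.gauss(850.0, 150.0)   # incremental cost, EUR
    d_qaly = random.gauss(0.10, 0.02)     # incremental QALYs
    if d_qaly > 0 and d_cost / d_qaly <= threshold:
        acceptable += 1

print(acceptable / trials)  # share of simulations deemed cost effective
```

Repeating this over a grid of thresholds yields the acceptability curve the study reports.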
ST-analyzer: a web-based user interface for simulation trajectory analysis.
Jeong, Jong Cheol; Jo, Sunhwan; Wu, Emilia L; Qi, Yifei; Monje-Galvan, Viviana; Yeom, Min Sun; Gorenstein, Lev; Chen, Feng; Klauda, Jeffery B; Im, Wonpil
2014-05-05
Molecular dynamics (MD) simulation has become one of the key tools to obtain deeper insights into biological systems using various levels of description such as all-atom, united-atom, and coarse-grained models. Recent advances in computing resources and MD programs have significantly accelerated the simulation time and thus increased the amount of trajectory data. Although many laboratories routinely perform MD simulations, analyzing MD trajectories is still time consuming and often a difficult task. ST-analyzer, http://im.bioinformatics.ku.edu/st-analyzer, is a standalone graphical user interface (GUI) toolset to perform various trajectory analyses. ST-analyzer has several outstanding features compared to other existing analysis tools: (i) handling various formats of trajectory files from MD programs such as CHARMM, NAMD, GROMACS, and Amber; (ii) an intuitive web-based GUI environment that minimizes administrative load and reduces the burden on users of adapting to new software environments; (iii) platform-independent design that works with any existing operating system; (iv) easy integration into job queuing systems, providing options for batch processing either on the cluster or in an interactive mode; and (v) independence between the foreground GUI and background modules, making it easier to add personal modules or to recycle/integrate pre-existing scripts utilizing other analysis tools. The current ST-analyzer contains nine main analysis modules that together contain 18 options, including density profile, lipid deuterium order parameters, surface area per lipid, and membrane hydrophobic thickness. This article introduces ST-analyzer with its design, implementation, and features, and also illustrates practical analysis of lipid bilayer simulations. Copyright © 2014 Wiley Periodicals, Inc.
A Flight Training Simulator for Instructing the Helicopter Autorotation Maneuver (Enhanced Version)
NASA Technical Reports Server (NTRS)
Rogers, Steven P.; Asbury, Charles N.
2000-01-01
Autorotation is a maneuver that permits a safe helicopter landing when the engine loses power. A catastrophe may occur if the pilot's control inputs are incorrect, insufficient, excessive, or poorly timed. Due to the danger involved, full-touchdown autorotations are very rarely practiced. Because in-flight autorotation training is risky, time-consuming, and expensive, the objective of the project was to develop the first helicopter flight simulator expressly designed to train students in this critical maneuver. A central feature of the project was the inclusion of an enhanced version of the Pilot-Rotorcraft Intelligent Symbology Management Simulator (PRISMS), a virtual-reality system developed by Anacapa Sciences and Thought Wave. A task analysis was performed to identify the procedural steps in the autorotation, to inventory the information needed to support student task performance, to identify typical errors, and to structure the simulator's practice environment. The system provides immediate knowledge of results, extensive practice of perceptual-motor skills, part-task training, and augmented cueing in a realistic cockpit environment. Additional work, described in this report, extended the capabilities of the simulator in three areas: 1. Incorporation of visual training aids to assist the student in learning the proper appearance of the visual scene when the maneuver is being properly performed; 2. Introduction of the requirement to land at a particular spot, as opposed to the wide, flat open field initially used, and development of appropriate metrics of success; and 3. Inclusion of wind speed and wind direction settings (and random variability settings) to add a more realistic challenge in "hitting the spot."
Chakraborty, Arindom
2016-12-01
A common objective in longitudinal studies is to characterize the relationship between a longitudinal response process and time-to-event data. The ordinal nature of the response and possibly missing information on covariates add complications to the joint model. In such circumstances, influential observations often present in the data may upset the analysis. In this paper, a joint model based on an ordinal partial mixed model and an accelerated failure time model is used to account for the repeated ordered response and the time-to-event data, respectively. We propose an influence function-based robust estimation method. A Monte Carlo expectation-maximization algorithm is used for parameter estimation. A detailed simulation study has been done to evaluate the performance of the proposed method. As an application, data on muscular dystrophy among children are used. Robust estimates are then compared with classical maximum likelihood estimates. © The Author(s) 2014.
Serrano-Marugán, Isabel; Herrera, Begoña; Romero, Sara; Nogales, Ramón; Poch-Broto, Joaquín; Quintero, Javier; Ortiz, Tomás
2014-02-24
Tactile stimulation is key for posterior brain reorganization and attention processes; however, the impact of tactile stimulation on attention deficit disorder (ADD) in blind children remains unexplored. We carried out a study with blind children with and without ADD (four per group). The subjects were exposed for six months to a tactile stimulation protocol consisting of two daily 30-minute sessions (one in the morning and one in the afternoon). We measured the ability to detect an infrequent tactile stimulus, reaction time, latency of the P300, sources of brain activity, and ADD clinical symptoms before and after tactile training. Passive tactile stimulation significantly improves ADD clinical symptoms, particularly attention, behavior, and self-control of involuntary movements and tics. In addition, tactile stimulation changes the pattern of brain activity in blind children with ADD, inducing activity in frontal and occipital areas, which could be associated with compensation of the attention deficit. Passive tactile stimulation training may improve ADD clinical symptoms and can reorganize the pattern of brain activity in blind children with ADD.
Simulation of the Universal-Time Diurnal Variation of the Global Electric Circuit Charging Rate
NASA Technical Reports Server (NTRS)
Mackerras, D.; Darvenzia, M.; Orville, R. E.; Williams, E. R.; Goodman, S. J.
1999-01-01
A global lightning model that includes diurnal and annual lightning variation, and total flash density versus latitude for each major land and ocean, has been used as the basis for simulating the global electric circuit charging rate. A particular objective has been to reconcile the difference in amplitude ratios [AR=(max-min)/mean] between global lightning diurnal variation (AR approx. = 0.8) and the diurnal variation of typical atmospheric potential gradient curves (AR approx. = 0.35). A constraint on the simulation is that the annual mean charging current should be about 1000 A. The global lightning model shows that negative ground flashes can contribute, at most, about 10-15% of the required current. For the purpose of the charging rate simulation, it was assumed that each ground flash contributes 5 C to the charging process. It was necessary to assume that all electrified clouds contribute to charging by means other than lightning, that the total flash rate can serve as an indirect indicator of the rate of charge transfer, and that oceanic electrified clouds contribute to charging even though they are relatively inefficient in producing lightning. It was also found necessary to add a diurnally invariant charging current component. By trial and error it was found that charging rate diurnal variation curves in Universal time (UT) could be produced with amplitude ratios and general shapes similar to those of the potential gradient diurnal variation curves measured over ocean and arctic regions during voyages of the Carnegie Institute research vessels.
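The amplitude ratio defined above, AR = (max - min)/mean, is straightforward to compute. The toy diurnal curve below is invented, chosen so its AR lands near the 0.35 typical of the potential gradient curves rather than the 0.8 of raw lightning diurnal variation.

```python
import math

# Amplitude ratio from the abstract: AR = (max - min) / mean, applied to a
# toy 24-hour charging-rate curve. Hourly values are invented for illustration.

def amplitude_ratio(curve):
    mean = sum(curve) / len(curve)
    return (max(curve) - min(curve)) / mean

# Hypothetical diurnal curve peaking in the afternoon (arbitrary units);
# baseline 1000 mimics the ~1000 A annual-mean charging current constraint.
curve = [1000.0 + 175.0 * math.sin(2 * math.pi * (h - 9) / 24) for h in range(24)]
print(round(amplitude_ratio(curve), 2))  # -> 0.35
```

In the study's terms, the large diurnally invariant baseline is what pulls the amplitude ratio down from the lightning value toward the potential-gradient value.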
Interactive investigations into planetary interiors
NASA Astrophysics Data System (ADS)
Rose, I.
2015-12-01
Many processes in Earth science are difficult to observe or visualize due to the large timescales and lengthscales over which they operate. The dynamics of planetary mantles are particularly challenging as we cannot even look at the rocks involved. As a result, much teaching material on mantle dynamics relies on static images and cartoons, many of which are decades old. Recent improvements in computing power and technology (largely driven by game and web development) have allowed for advances in real-time physics simulations and visualizations, but these have been slow to affect Earth science education. Here I demonstrate a teaching tool for mantle convection and seismology which solves the equations for conservation of mass, momentum, and energy in real time, allowing users to make changes to the simulation and immediately see the effects. The user can ask and answer questions about what happens when they add heat in one place, take it away from another place, or increase the temperature at the base of the mantle. They can also pause the simulation and, while it is paused, create and visualize seismic waves traveling through the mantle. These allow for investigations into and discussions about plate tectonics, earthquakes, hot spot volcanism, and planetary cooling. The simulation is rendered to the screen using OpenGL and is cross-platform. It can be run as a native application for maximum performance, but it can also be embedded in a web browser for easy deployment and portability.
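A minimal sketch of the kind of time-stepping loop such a real-time tool runs is explicit 1-D heat diffusion with fixed boundary temperatures. The real tool solves full mass, momentum, and energy conservation in higher dimensions; the grid size, diffusion coefficient, and temperatures below are arbitrary.

```python
# Minimal sketch of a real-time update loop: explicit forward-Euler diffusion
# of temperature in 1-D, with endpoints held fixed (Dirichlet boundaries).
# All values are arbitrary demonstration numbers.

def diffuse_step(T, coeff):
    """One explicit step of dT/dt = kappa * d2T/dx2; coeff = kappa*dt/dx^2."""
    new = T[:]
    for i in range(1, len(T) - 1):
        new[i] = T[i] + coeff * (T[i - 1] - 2 * T[i] + T[i + 1])
    return new

# Hot base (index 0), cold interior, cool surface (last index), in kelvin
T = [1600.0] + [800.0] * 18 + [300.0]
for _ in range(500):           # the "user changes something, sim responds" loop
    T = diffuse_step(T, 0.25)  # coeff <= 0.5 keeps the explicit scheme stable
```

Each frame of an interactive tool does one or a few such steps, so a user-injected heat anomaly visibly diffuses away over subsequent frames.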
Goedhart, Paul W; van der Voet, Hilko; Baldacchino, Ferdinando; Arpaia, Salvatore
2014-04-01
Genetic modification of plants may result in unintended effects causing potentially adverse effects on the environment. A comparative safety assessment is therefore required by authorities, such as the European Food Safety Authority, in which the genetically modified plant is compared with its conventional counterpart. Part of the environmental risk assessment is a comparative field experiment in which the effect on non-target organisms is compared. Statistical analysis of such trials comes in two flavors: difference testing and equivalence testing. It is important to know the statistical properties of these, for example, the power to detect environmental change of a given magnitude, before the start of an experiment. Such prospective power analysis can best be studied by means of a statistical simulation model. This paper describes a general framework for simulating data typically encountered in environmental risk assessment of genetically modified plants. The simulation model, available as Supplementary Material, can be used to generate count data having different statistical distributions, possibly with excess zeros. In addition the model employs completely randomized or randomized block experiments, can be used to simulate single or multiple trials across environments, enables genotype by environment interaction by adding random variety effects, and finally includes repeated measures in time following a constant, linear, or quadratic pattern in time, possibly with some form of autocorrelation. The model also allows adding a set of reference varieties to the GM plant and its comparator to assess the natural variation, which can then be used to set limits of concern for equivalence testing. The different count distributions are described in some detail, and some examples of how to use the simulation model to study various aspects, including a prospective power analysis, are provided.
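One ingredient of such a framework, drawing counts with excess zeros, can be sketched with a zero-inflated Poisson sampler. The zero-inflation probability and Poisson mean below are arbitrary demonstration values, not part of the published model.

```python
import math
import random

# Sketch of generating count data with excess zeros: a zero-inflated Poisson.
# p_zero and lam are arbitrary demonstration values.

def zip_sample(lam, p_zero, rng):
    """Draw from a zero-inflated Poisson: an extra zero with probability p_zero."""
    if rng.random() < p_zero:
        return 0
    # Inverse-transform sampling from Poisson(lam)
    u, k, p = rng.random(), 0, math.exp(-lam)
    cdf = p
    while u > cdf:
        k += 1
        p *= lam / k
        cdf += p
    return k

rng = random.Random(42)
counts = [zip_sample(lam=4.0, p_zero=0.3, rng=rng) for _ in range(5000)]
frac_zero = sum(c == 0 for c in counts) / len(counts)
# Expected zero fraction = p_zero + (1 - p_zero) * exp(-lam), about 0.313 here
```

A prospective power analysis would wrap such a generator in many simulated trials and count how often the difference or equivalence test reaches the desired conclusion.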
Increasing operating room efficiency through electronic medical record analysis.
Attaallah, A F; Elzamzamy, O M; Phelps, A L; Ranganthan, P; Vallejo, M C
2016-05-01
We used electronic medical record (EMR) analysis to determine errors in operating room (OR) time utilisation. Over a two-year period, EMR data from 44,503 surgical procedures were analysed for OR duration, on-time, first case, and add-on time performance within 19 surgical specialties. Maximal OR time utilisation at our institution could have saved over 302,620 min (5,044 hours) of OR time over the two-year period. Most specialties (78.95%) had inaccurately scheduled procedure times and therefore used the OR more than their scheduled allotment. Significant differences occurred between the mean scheduled surgical durations (101.38 ± 87.11 min) and actual durations (108.18 ± 102.27 min; P < 0.001). Significant differences also occurred between the mean scheduled add-on durations (111.4 ± 75.5 min) and the actual add-on durations (118.6 ± 90.1 min; P < 0.001). EMR quality improvement analysis can be used to determine scheduling error and bias in order to improve efficiency and increase OR time utilisation.
Code of Federal Regulations, 2011 CFR
2011-01-01
..., including vehicle simulations using industry standard model (need to add name and location of this open.... All such information and data must include assumptions made in their preparation and the range of... any product (vehicle or component) to be produced by or through the project, including relevant data...
Integrated Spatio-Temporal Ecological Modeling System
1998-07-01
models that we hold in our conscious (and subconscious) minds. Chapter 3 explores how this approach is being augmented with the more formal capture...This approach makes it possible to add new simulation model components to I-STEMS without having to reprogram existing components. The steps required
Kirkendall, Abbie M; Waldrop, Deborah
2013-09-01
The purpose of the study was to describe the perceptions of community residence (CR) staff who have cared for older adults with developmental disabilities (ADDs) at the end of life. This exploratory, descriptive study utilized qualitative methods involving semistructured interviews with CR staff members. The setting was a CR that was also an intermediate care facility (ICF) providing 24-hour residential treatment for medical and/or behavioral needs; at least one registered nurse was present at all times. A CR with at least one resident who was over the age of 40 and had a diagnosis of a life-limiting illness was chosen. Participants included three frontline workers, four managers, and one registered nurse. In-person interviews included open-ended questions about end-of-life care for older adults with developmental disabilities. Demographics such as age, length of time working with this population, and education were analyzed using descriptive statistics. Interviews were digitally recorded, transcribed, and analyzed using grounded theory techniques. Five themes illuminated unique elements of the provision of end-of-life care in a CR: (1) influence of relationships, (2) expression of individuality, (3) contribution of hospice, (4) grief and bereavement, and (5) challenges to end-of-life care. The results provided insight into the unique needs of older adults with developmental disabilities at the end of life and how these influence their care. Emphasis was also placed on the importance of specialized care involving collaborations with hospice for older adults who remain in a CR at the end of life.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-27
...-adjusted option series of the Market-Maker's appointed class that have a time to expiration of less than... liquidity in these few series, it would only last for a short period of time (until the following trading... language to Exchange Rules 8.5 and 8.17 to exclude intra-day add-on series (``Intra-day Adds'') on the day...
Magnetic Resonance Mediated Radiofrequency Ablation.
Hue, Yik-Kiong; Guimaraes, Alexander R; Cohen, Ouri; Nevo, Erez; Roth, Abraham; Ackerman, Jerome L
2018-02-01
To introduce magnetic resonance mediated radiofrequency ablation (MR-RFA), in which the MRI scanner uniquely serves both diagnostic and therapeutic roles. In MR-RFA, scanner-induced RF heating is channeled to the ablation site via a Larmor-frequency RF pickup device and needle system, and controlled via the pulse sequence. MR-RFA was evaluated with simulation of electric and magnetic fields to predict the increase in local specific absorption rate (SAR). Temperature-time profiles were measured for different configurations of the device in agar phantoms and ex vivo bovine liver in a 1.5 T scanner. Temperature rise in MR-RFA was imaged using the proton resonance frequency method validated with fiber-optic thermometry. MR-RFA was performed on the livers of two healthy live pigs. Simulations indicated a near tenfold increase in SAR at the RFA needle tip. Temperature-time profiles depended significantly on the physical parameters of the device, although both configurations tested yielded temperature increases sufficient for ablation. Resected livers from live ablations exhibited clear thermal lesions. MR-RFA holds potential for integrating RF ablation tumor therapy with MRI scanning. MR-RFA may add value to MRI with the addition of a potentially disposable ablation device, while retaining MRI's ability to provide real-time procedure guidance and measurement of tissue temperature, perfusion, and coagulation.
Hot super-Earths stripped by their host stars
Lundkvist, M. S.; Kjeldsen, H.; Albrecht, S.; Davies, G. R.; Basu, S.; Huber, D.; Justesen, A. B.; Karoff, C.; Silva Aguirre, V.; Van Eylen, V.; Vang, C.; Arentoft, T.; Barclay, T.; Bedding, T. R.; Campante, T. L.; Chaplin, W. J.; Christensen-Dalsgaard, J.; Elsworth, Y. P.; Gilliland, R. L.; Handberg, R.; Hekker, S.; Kawaler, S. D.; Lund, M. N.; Metcalfe, T. S.; Miglio, A.; Rowe, J. F.; Stello, D.; Tingley, B.; White, T. R.
2016-01-01
Simulations predict that hot super-Earth sized exoplanets can have their envelopes stripped by photoevaporation, which would present itself as a lack of these exoplanets. However, this absence in the exoplanet population has escaped a firm detection. Here we demonstrate, using asteroseismology on a sample of exoplanets and exoplanet candidates observed during the Kepler mission, that while there is an abundance of super-Earth sized exoplanets with low incident fluxes, none are found with high incident fluxes. We do not find any exoplanets with radii between 2.2 and 3.8 Earth radii with incident flux above 650 times the incident flux on Earth. This gap in the population of exoplanets is explained by evaporation of volatile elements and thus supports the predictions. The confirmation of a hot-super-Earth desert caused by evaporation will add an important constraint on simulations of planetary systems, since they must be able to reproduce the dearth of close-in super-Earths. PMID:27062914
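The reported desert can be phrased as a simple predicate on radius and incident flux using the thresholds quoted in the abstract (2.2-3.8 Earth radii, incident flux above 650 times Earth's); the catalog entries below are invented for illustration.

```python
# The "hot-super-Earth desert" as a predicate. Thresholds are from the text;
# the planet list is hypothetical demonstration data.

def in_desert(radius_earth, flux_earth):
    """True if a planet falls in the (reportedly empty) evaporation desert."""
    return 2.2 <= radius_earth <= 3.8 and flux_earth > 650.0

# Hypothetical entries: (radius in Earth radii, incident flux / Earth's flux)
catalog = [(1.8, 900.0), (2.5, 120.0), (3.0, 700.0), (4.1, 800.0)]
desert_members = [p for p in catalog if in_desert(*p)]
# Only the (3.0, 700.0) entry falls inside the evaporation desert
```

The study's claim is that, for real Kepler planets with asteroseismically constrained radii, this list comes back empty.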
Simulating The Prompt Electromagnetic Pulse In 3D Using Vector Spherical Harmonics
NASA Astrophysics Data System (ADS)
Friedman, Alex; Cohen, Bruce I.; Eng, Chester D.; Farmer, William A.; Grote, David P.; Kruger, Hans W.; Larson, David J.
2017-10-01
We describe a new, efficient code for simulating the prompt electromagnetic pulse. In SHEMP ("Spherical Harmonic EMP"), we extend to 3-D the methods pioneered in C. Longmire's CHAP code. The geomagnetic field and air density are consistent with CHAP's assumed spherical symmetry only for narrow domains of influence about the line of sight, limiting validity to very early times. Also, we seek to model inherently 3-D situations. In CHAP and our own CHAP-lite, the independent coordinates are r (the distance from the source) and τ = t-r/c; the pulse varies slowly with r at fixed τ, so a coarse radial grid suffices. We add non-spherically-symmetric physics via a vector spherical harmonic decomposition. For each (l,m) harmonic, the radial equation is similar to that in CHAP and CHAP-lite. We present our methodology and results on model problems. This work was performed under the auspices of the U.S. DOE by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
Extension of a Kolmogorov Atmospheric Turbulence Model for Time-Based Simulation Implementation
NASA Technical Reports Server (NTRS)
McMinn, John D.
1997-01-01
The development of any super/hypersonic aircraft requires the interaction of a wide variety of technical disciplines to maximize vehicle performance. For flight and engine control system design and development on this class of vehicle, realistic mathematical simulation models of atmospheric turbulence, including winds and the varying thermodynamic properties of the atmosphere, are needed. A model which has been tentatively selected by a government/industry group of flight and engine/inlet controls representatives working on the High Speed Civil Transport is one based on the Kolmogorov spectrum function. This report compares the Dryden and Kolmogorov turbulence forms, and describes enhancements that add functionality to the selected Kolmogorov model. These added features are: an altitude variation of the eddy dissipation rate based on Dryden data, the mapping of the eddy dissipation rate database onto a regular latitude and longitude grid, a method to account for flight at large vehicle attitude angles, and a procedure for transitioning smoothly across turbulence segments.
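The two spectral forms being compared can be evaluated side by side. The sketch below uses commonly quoted one-dimensional expressions with an assumed scale length, turbulence intensity, and eddy dissipation rate (not the report's database values) and checks the log-log slopes that distinguish the models.

```python
import math

# Hedged comparison of the Dryden and Kolmogorov spectral forms. The scale
# length L, intensity sigma, dissipation rate eps, and constant a are assumed
# demonstration values, not the report's parameters.

def dryden_psd(omega, sigma=1.0, L=762.0):
    """Commonly quoted Dryden longitudinal spectrum; omega in rad/m."""
    return sigma**2 * (2.0 * L / math.pi) / (1.0 + (L * omega) ** 2)

def kolmogorov_psd(omega, eps=1e-4, a=0.15):
    """Inertial-range power law proportional to eps^(2/3) * omega^(-5/3)."""
    return a * eps ** (2.0 / 3.0) * omega ** (-5.0 / 3.0)

# At high spatial frequency the Dryden form rolls off as omega^-2, steeper
# than Kolmogorov's omega^-5/3 inertial-range slope:
w1, w2 = 0.01, 0.1
dryden_slope = math.log(dryden_psd(w2) / dryden_psd(w1)) / math.log(w2 / w1)
kolmo_slope = math.log(kolmogorov_psd(w2) / kolmogorov_psd(w1)) / math.log(w2 / w1)
```

The differing high-frequency slopes are one practical reason the choice of turbulence form matters for engine/inlet control design on this class of vehicle.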
Enhanced modeling features within TREETOPS
NASA Technical Reports Server (NTRS)
Vandervoort, R. J.; Kumar, Manoj N.
1989-01-01
The original motivation for TREETOPS was to build a generic multi-body simulation and remove the burden of writing multi-body equations from the engineers. The motivation of the enhancement was twofold: (1) to extend the menu of built-in features (sensors, actuators, constraints, etc.) that did not require user code; and (2) to extend the control system design capabilities by linking with other government funded software (NASTRAN and MATLAB). These enhancements also serve to bridge the gap between structures and control groups. It is common on large space programs for the structures groups to build hi-fidelity models of the structure using NASTRAN and for the controls group to build lower order models because they lack the tools to incorporate the former into their analysis. Now the controls engineers can accept the hi-fidelity NASTRAN models into TREETOPS, add sensors and actuators, perform model reduction and couple the result directly into MATLAB to perform their design. The controller can then be imported directly into TREETOPS for non-linear, time-history simulation.
Introducing Computational Fluid Dynamics Simulation into Olfactory Display
NASA Astrophysics Data System (ADS)
Ishida, Hiroshi; Yoshida, Hitoshi; Nakamoto, Takamichi
An olfactory display is a device that delivers various odors to the user's nose. It can be used to add special effects to movies and games by releasing odors relevant to the scenes shown on the screen. In order to provide high-presence olfactory stimuli to the users, the display must be able to generate realistic odors with appropriate concentrations in a timely manner together with visual and audio playbacks. In this paper, we propose to use computational fluid dynamics (CFD) simulations in conjunction with the olfactory display. Odor molecules released from their source are transported mainly by turbulent flow, and their behavior can be extremely complicated even in a simple indoor environment. In the proposed system, a CFD solver is employed to calculate the airflow field and the odor dispersal in the given environment. An odor blender is used to generate the odor with the concentration determined based on the calculated odor distribution. Experimental results on presenting odor stimuli synchronously with movie clips show the effectiveness of the proposed system.
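As a crude 1-D stand-in for the CFD step, an analytic advection-diffusion (Gaussian puff) solution shows how a downwind odor concentration evolves in time. A real solver resolves turbulent indoor flow in 3-D; the wind speed u, diffusivity D, and source strength Q below are arbitrary assumptions.

```python
import math

# Crude 1-D stand-in for the CFD step: analytic concentration of an odor puff
# advected by a mean wind u and spread by diffusivity D. u, D, Q are arbitrary.

def puff_concentration(x, t, Q=1.0, u=0.3, D=0.01):
    """1-D advection-diffusion Green's function for an instantaneous release at x=0."""
    return Q / math.sqrt(4.0 * math.pi * D * t) * math.exp(-((x - u * t) ** 2) / (4.0 * D * t))

# Concentration 1.5 m downwind as the puff drifts past (peak near t = 5 s, when u*t = x)
samples = [puff_concentration(1.5, t) for t in (2.0, 5.0, 8.0)]
# The olfactory display would map each value to a blended odor intensity.
```

Timing the odor-blender output to such a concentration trace is what lets the stimulus arrive in sync with the on-screen scene.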
Spatial frequency spectrum of the x-ray scatter distribution in CBCT projections
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bootsma, G. J.; Verhaegen, F.; Department of Oncology, Medical Physics Unit, McGill University, Montreal, Quebec H3G 1A4
2013-11-15
Purpose: X-ray scatter is a source of significant image quality loss in cone-beam computed tomography (CBCT). The use of Monte Carlo (MC) simulations separating primary and scattered photons has allowed the structure and nature of the scatter distribution in CBCT to become better elucidated. This work seeks to quantify the structure and determine a suitable basis function for the scatter distribution by examining its spectral components using Fourier analysis.Methods: The scatter distribution projection data were simulated using a CBCT MC model based on the EGSnrc code. CBCT projection data, with separated primary and scatter signal, were generated for a 30.6more » cm diameter water cylinder [single angle projection with varying axis-to-detector distance (ADD) and bowtie filters] and two anthropomorphic phantoms (head and pelvis, 360 projections sampled every 1°, with and without a compensator). The Fourier transform of the resulting scatter distributions was computed and analyzed both qualitatively and quantitatively. A novel metric called the scatter frequency width (SFW) is introduced to determine the scatter distribution's frequency content. The frequency content results are used to determine a set basis functions, consisting of low-frequency sine and cosine functions, to fit and denoise the scatter distribution generated from MC simulations using a reduced number of photons and projections. The signal recovery is implemented using Fourier filtering (low-pass Butterworth filter) and interpolation. Estimates of the scatter distribution are used to correct and reconstruct simulated projections.Results: The spatial and angular frequencies are contained within a maximum frequency of 0.1 cm{sup −1} and 7/(2π) rad{sup −1} for the imaging scenarios examined, with these values varying depending on the object and imaging setup (e.g., ADD and compensator). 
These data indicate spatial and angular sampling every 5 cm and π/7 rad (∼25°) can be used to properly capture the scatter distribution, with reduced sampling possible depending on the imaging scenario. Using a low-pass Butterworth filter, tuned with the SFW values, to denoise the scatter projection data generated from MC simulations using 10⁶ photons resulted in an error reduction of greater than 85% for estimating the scatter in single and multiple projections. Analysis showed that the use of a compensator helped reduce the error in estimating the scatter distribution from limited-photon simulations by more than 37% when compared to the case without a compensator for the head and pelvis phantoms. Reconstructions of simulated head phantom projections corrected by the filtered and interpolated scatter estimates showed improvements in overall image quality. Conclusions: The spatial frequency content of the scatter distribution in CBCT is found to be contained within the low-frequency domain. The frequency content is modulated both by object and imaging parameters (ADD and compensator). The low-frequency nature of the scatter distribution allows for a limited set of sine and cosine basis functions to be used to accurately represent the scatter signal in the presence of noise and reduced data sampling, decreasing MC-based scatter estimation time. Compensator-induced modulation of the scatter distribution reduces the frequency content and improves the fitting results.
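The Nyquist relation behind the 5 cm sampling claim, and the Butterworth denoising step, can be sketched as follows. This is a minimal illustration using only numpy; the detector size, pixel spacing, signal shape, and noise level are hypothetical stand-ins, not the paper's data.

```python
import numpy as np

# Nyquist: a band limit of 0.1 cycles/cm implies sampling at least every
# 1 / (2 * 0.1) = 5 cm, matching the sampling interval quoted above.
f_max = 0.1                                   # cm^-1, from the abstract
sample_spacing_cm = 1.0 / (2.0 * f_max)       # = 5.0 cm

def butterworth_lowpass(signal, dx, f_cut, order=4):
    """Denoise a 1-D scatter profile with a frequency-domain Butterworth
    low-pass filter: |H(f)| = 1 / sqrt(1 + (f/f_cut)^(2*order))."""
    n = len(signal)
    freqs = np.fft.rfftfreq(n, d=dx)          # spatial frequencies, cycles/cm
    gain = 1.0 / np.sqrt(1.0 + (freqs / f_cut) ** (2 * order))
    return np.fft.irfft(np.fft.rfft(signal) * gain, n=n)

# Hypothetical profile: a smooth low-frequency "scatter" signal plus noise
# standing in for a low-photon MC run.
rng = np.random.default_rng(0)
x = np.arange(400) * 0.1                      # 40 cm detector, 0.1 cm pixels
true_scatter = 100.0 + 30.0 * np.cos(2 * np.pi * 0.05 * x)
noisy = true_scatter + rng.normal(0.0, 10.0, x.size)
denoised = butterworth_lowpass(noisy, dx=0.1, f_cut=f_max)

rms_noisy = np.sqrt(np.mean((noisy - true_scatter) ** 2))
rms_denoised = np.sqrt(np.mean((denoised - true_scatter) ** 2))
```

Because the underlying signal lies below the cutoff, nearly all of the broadband noise is removed while the scatter estimate is essentially untouched.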
Code of Federal Regulations, 2010 CFR
2010-01-01
..., including vehicle simulations using industry standard model (need to add name and location of this open source model) to show projected fuel economy; (d) A detailed estimate of the total project costs together..., equity, and debt, and the liability of parties associated with the project; (f) Applicant's business plan...
LC-MSsim – a simulation software for liquid chromatography mass spectrometry data
Schulz-Trieglaff, Ole; Pfeifer, Nico; Gröpl, Clemens; Kohlbacher, Oliver; Reinert, Knut
2008-01-01
Background Mass Spectrometry coupled to Liquid Chromatography (LC-MS) is commonly used to analyze the protein content of biological samples in large scale studies. The data resulting from an LC-MS experiment is huge, highly complex and noisy. Accordingly, it has sparked new developments in Bioinformatics, especially in the fields of algorithm development, statistics and software engineering. In a quantitative label-free mass spectrometry experiment, crucial steps are the detection of peptide features in the mass spectra and the alignment of samples by correcting for shifts in retention time. At the moment, it is difficult to compare the plethora of algorithms for these tasks. So far, curated benchmark data exists only for peptide identification algorithms but no data that represents a ground truth for the evaluation of feature detection, alignment and filtering algorithms. Results We present LC-MSsim, a simulation software for LC-ESI-MS experiments. It simulates ESI spectra on the MS level. It reads a list of proteins from a FASTA file and digests the protein mixture using a user-defined enzyme. The software creates an LC-MS data set using a predictor for the retention time of the peptides and a model for peak shapes and elution profiles of the mass spectral peaks. Our software also offers the possibility to add contaminants, to change the background noise level and includes a model for the detectability of peptides in mass spectra. After the simulation, LC-MSsim writes the simulated data to mzData, a public XML format. The software also stores the positions (monoisotopic m/z and retention time) and ion counts of the simulated ions in separate files. Conclusion LC-MSsim generates simulated LC-MS data sets and incorporates models for peak shapes and contaminations. Algorithm developers can match the results of feature detection and alignment algorithms against the simulated ion lists and meaningful error rates can be computed. 
We anticipate that LC-MSsim will be useful to the wider community to perform benchmark studies and comparisons between computational tools. PMID:18842122
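The benchmarking the authors enable can be sketched as matching detected features against the simulated ion list within m/z and retention-time tolerances and computing error rates. The tolerances and feature values below are illustrative assumptions, not LC-MSsim defaults:

```python
def match_features(truth, detected, mz_tol=0.01, rt_tol=30.0):
    """Greedily match detected (m/z, retention-time) features against a
    simulated ground-truth ion list, at most one truth feature per
    detection, and return (precision, recall)."""
    unmatched = list(range(len(truth)))
    tp = 0
    for mz, rt in detected:
        hit = next((i for i in unmatched
                    if abs(truth[i][0] - mz) <= mz_tol
                    and abs(truth[i][1] - rt) <= rt_tol), None)
        if hit is not None:
            unmatched.remove(hit)   # each truth feature may be matched once
            tp += 1
    precision = tp / len(detected) if detected else 0.0
    recall = tp / len(truth) if truth else 0.0
    return precision, recall

# Hypothetical ion list (monoisotopic m/z, RT in seconds) and detector
# output: two true positives and one spurious detection.
truth = [(445.120, 1200.0), (512.300, 1450.0), (730.400, 1800.0)]
detected = [(445.121, 1205.0), (512.305, 1440.0), (999.990, 100.0)]
precision, recall = match_features(truth, detected)
```

Here one truth feature is missed and one detection is spurious, so precision and recall are both 2/3.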
Self-directed versus traditional classroom training for neonatal resuscitation.
Weiner, Gary M; Menghini, Karin; Zaichkin, Jeanette; Caid, Ann E; Jacoby, Carrie J; Simon, Wendy M
2011-04-01
Neonatal Resuscitation Program instructors spend most of their classroom time giving lectures and demonstrating basic skills. We hypothesized that a self-directed education program could shift acquisition of these skills outside the classroom, shorten the duration of the class, and allow instructors to use their time to facilitate low-fidelity simulation and debriefing. Novice providers were randomly allocated to self-directed education or a traditional class. Self-directed participants received a textbook, instructional video, and portable equipment kit and attended a 90-minute simulation session with an instructor. The traditional class included 6 hours of lectures and instructor-directed skill stations. Outcome measures included resuscitation skill (megacode assessment score), content knowledge, participant satisfaction, and self-confidence. Forty-six subjects completed the study. There was no significant difference between the study groups in either the megacode assessment score (23.8 [traditional] vs 24.5 [self-directed]; P = .46) or fraction that passed the "megacode" (final skills assessment) (56% [traditional] vs 65% [self-directed]; P = .76). There were no significant differences in content knowledge, course satisfaction, or postcourse self-confidence. Content knowledge, years of experience, and self-confidence did not predict resuscitation skill. Self-directed education improves the educational efficiency of the neonatal resuscitation course by shifting the acquisition of cognitive and basic procedural skills outside of the classroom, which allows the instructor to add low-fidelity simulation and debriefing while significantly decreasing the duration of the course.
NASA Technical Reports Server (NTRS)
Haley, D. C.; Almand, B. J.; Thomas, M. M.; Krauze, L. D.; Gremban, K. D.; Sanborn, J. C.; Kelly, J. H.; Depkovich, T. M.; Wolfe, W. J.; Nguyen, T.
1986-01-01
The purpose of the Robotic Simulation (ROBSIM) program is to provide a broad range of computer capabilities to assist in the design, verification, simulation, and study of robotic systems. ROBSIM is programmed in FORTRAN 77 and implemented on a VAX 11/750 computer using the VMS operating system. The programmer's guide describes the ROBSIM implementation and program logic flow, and the functions and structures of the different subroutines. With the manual and the in-code documentation, an experienced programmer can incorporate additional routines and modify existing ones to add desired capabilities.
NASA Technical Reports Server (NTRS)
Haley, D. C.; Almand, B. J.; Thomas, M. M.; Krauze, L. D.; Gremban, K. D.; Sanborn, J. C.; Kelly, J. H.; Depkovich, T. M.
1984-01-01
The purpose of the Robotics Simulation (ROBSIM) program is to provide a broad range of computer capabilities to assist in the design, verification, simulation, and study of robotic systems. ROBSIM is programmed in FORTRAN 77 and implemented on a VAX 11/750 computer using the VMS operating system. This programmer's guide describes the ROBSIM implementation and program logic flow, and the functions and structures of the different subroutines. With this manual and the in-code documentation, an experienced programmer can incorporate additional routines and modify existing ones to add desired capabilities.
BOAST 2 for the IBM 3090 and RISC 6000
NASA Astrophysics Data System (ADS)
Hebert, P.; Bourgoyne, A. T., Jr.; Tyler, J.
1993-05-01
BOAST 2 simulates isothermal, Darcy flow in three dimensions. It assumes that reservoir liquids can be described in three fluid phases (oil, gas, and water) of constant composition, with physical properties that depend on pressure only. These reservoir fluid approximations are acceptable for a large percentage of the world's oil and gas reservoirs. Consequently, BOAST 2 has a wide range of applicability. BOAST 2 can simulate oil and/or gas recovery by fluid expansion, displacement, gravity drainage, and capillary imbibition mechanisms. Typical field production problems that BOAST 2 can handle include primary depletion studies, pressure maintenance by water and/or gas injection, and evaluation of secondary recovery waterflooding and displacement operations. Technically, BOAST 2 is a finite-difference, implicit pressure-explicit saturation (IMPES) numerical simulator. It applies both direct and iterative solution techniques for solving systems of algebraic equations. The well model allows specification of rate or pressure constraints on well performance, and the user is free to add or to recomplete wells during the simulation. In addition, the user can define multiple rock and PVT regions and can choose from three aquifer models. BOAST 2 also provides flexible initialization, a bubble-point tracking scheme, automatic time-step control, and a material balance check on solution stability. The user controls output, which includes a run summary and line-printer plots of fieldwide performance.
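The material balance check mentioned above can be illustrated with a toy single-phase volume audit: the change in fluid stored in the pore space over a time step should equal the net volume injected. This is an incompressible sketch with hypothetical numbers; BOAST 2's actual check also accounts for formation volume factors and pressure-dependent properties.

```python
import numpy as np

def material_balance_error(phi, vol, s_old, s_new, q_net, dt):
    """Relative material-balance error for one phase over one time step.

    phi: porosity per cell; vol: bulk volume per cell; s_old/s_new: phase
    saturation before/after the step; q_net: net injection rate of the
    phase (production negative), reservoir volume per unit time.
    A conservative solution satisfies sum(phi*vol*(s_new - s_old)) == q_net*dt.
    """
    accumulation = np.sum(phi * vol * (s_new - s_old))
    in_place = np.sum(phi * vol * s_new)
    return (accumulation - q_net * dt) / in_place

# Illustrative 3-cell model: 0.6 reservoir volumes of water injected.
phi = np.array([0.2, 0.2, 0.2])
vol = np.array([10.0, 10.0, 10.0])
s_old = np.array([0.2, 0.2, 0.2])
s_new = np.array([0.4, 0.3, 0.2])   # saturations after the IMPES update
err = material_balance_error(phi, vol, s_old, s_new, q_net=0.6, dt=1.0)
```

Here the stored-volume change (0.6) exactly equals the injected volume, so the error is zero; a drifting error in a real run signals solution instability.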
Rotation of a synchronous viscoelastic shell
NASA Astrophysics Data System (ADS)
Noyelles, Benoît
2018-03-01
Several natural satellites of the giant planets have shown evidence of a global internal ocean, coated by a thin, icy crust. This crust is probably viscoelastic, which would alter its rotational response. This response would translate into several rotational quantities, i.e. the obliquity, and the librations at different frequencies, for which the crustal elasticity reacts differently. This study aims at modelling the global response of the viscoelastic crust. For that, I derive the time-dependence of the tensor of inertia, which I combine with the time evolution of the rotational quantities, thanks to an iterative algorithm. This algorithm combines numerical simulations of the rotation with a digital filtering of the resulting tensor of inertia. The algorithm works very well in the elastic case, provided the problem is not resonant. However, considering tidal dissipation adds different phase lags to the oscillating contributions, which challenge the convergence of the algorithm.
Anomalous electron heating effects on the E region ionosphere in TIEGCM
NASA Astrophysics Data System (ADS)
Liu, Jing; Wang, Wenbin; Oppenheim, Meers; Dimant, Yakov; Wiltberger, Michael; Merkin, Slava
2016-03-01
We have recently implemented a new module that includes both the anomalous electron heating and the electron-neutral cooling rate correction associated with the Farley-Buneman Instability (FBI) in the thermosphere-ionosphere electrodynamics global circulation model (TIEGCM). This implementation provides, for the first time, a modeling capability to describe macroscopic effects of the FBI on the ionosphere and thermosphere in the context of a first-principle, self-consistent model. The added heating sources primarily operate between 100 and 130 km altitude, and their magnitudes often exceed auroral precipitation heating in the TIEGCM. The induced changes in E region electron temperature in the auroral oval and polar cap by the FBI are remarkable with a maximum Te approaching 2200 K. This is about 4 times larger than the TIEGCM run without FBI heating. This investigation demonstrates how researchers can add the important effects of the FBI to magnetosphere-ionosphere-thermosphere models and simulators.
An accurate model for predicting high frequency noise of nanoscale NMOS SOI transistors
NASA Astrophysics Data System (ADS)
Shen, Yanfei; Cui, Jie; Mohammadi, Saeed
2017-05-01
A nonlinear and scalable model suitable for predicting high frequency noise of N-type Metal Oxide Semiconductor (NMOS) transistors is presented. The model is developed for a commercial 45 nm CMOS SOI technology and its accuracy is validated through comparison with measured performance of a microwave low noise amplifier. The model employs the virtual source nonlinear core and adds parasitic elements to accurately simulate the RF behavior of multi-finger NMOS transistors up to 40 GHz. For the first time, the traditional long-channel thermal noise model is supplemented with an injection noise model to accurately represent the noise behavior of these short-channel transistors up to 26 GHz. The developed model is simple and easy to extract, yet very accurate.
Li, Dian-Jeng; Wang, Fu-Chiang; Chu, Che-Sheng; Chen, Tien-Yu; Tang, Chia-Hung; Yang, Wei-Cheng; Chow, Philip Chik-Keung; Wu, Ching-Kuan; Tseng, Ping-Tao; Lin, Pao-Yen
2017-01-01
Add-on ketamine anesthesia in electroconvulsive therapy (ECT) has been studied in depressive patients in several clinical trials with inconclusive findings. The two most recent meta-analyses reported insignificant findings with regard to the treatment effect of add-on ketamine anesthesia in ECT in depressive patients. The aim of this study is to update the current evidence and investigate the role of add-on ketamine anesthesia in ECT in depressive patients via a systematic review and meta-analysis. We performed a thorough literature search of the PubMed and ScienceDirect databases, and extracted all relevant clinical variables to compare the antidepressive outcomes between add-on ketamine anesthesia and other anesthetics in ECT. A total of 16 articles, with 346 patients receiving add-on ketamine anesthesia in ECT and 329 controls, were included. We found that the antidepressive treatment effect of add-on ketamine anesthesia in ECT in depressive patients was significantly higher than that of other anesthetics (p<0.001). This significance persisted in both short-term (1-2 weeks) and moderate-term (3-4 weeks) treatment courses (all p<0.05). However, the side effect and recovery time profiles were significantly worse in the add-on ketamine anesthesia group than in the control group. Our meta-analysis highlights the significantly higher antidepressive treatment effect of add-on ketamine in depressive patients receiving ECT compared to other anesthetics. However, clinicians need to take undesirable side effects into consideration when using add-on ketamine anesthesia in ECT in depressive patients. Copyright © 2016 Elsevier B.V. and ECNP. All rights reserved.
Systems Epidemiology: What’s in a Name?
Dammann, O.; Gray, P.; Gressens, P.; Wolkenhauer, O.; Leviton, A.
2014-01-01
Systems biology is an interdisciplinary effort to integrate molecular, cellular, tissue, organ, and organism levels of function into computational models that facilitate the identification of general principles. Systems medicine adds a disease focus. Systems epidemiology adds yet another level consisting of antecedents that might contribute to the disease process in populations. In etiologic and prevention research, systems-type thinking about multiple levels of causation will allow epidemiologists to identify contributors to disease at multiple levels as well as their interactions. In public health, systems epidemiology will contribute to the improvement of syndromic surveillance methods. We encourage the creation of computational simulation models that integrate information about disease etiology, pathogenetic data, and the expertise of investigators from different disciplines. PMID:25598870
Adding Value: Online Student Engagement
ERIC Educational Resources Information Center
Everett, Donna R.
2015-01-01
This paper seeks to add to the emerging literature related to online student engagement with additional suggestions for instructional strategies. Student engagement is one of the tenets of effective online instruction; as such, particular attention to how it adds value to student learning is crucial and worth the time and effort to enhance…
Meeting Learning Challenges: Working with the Child Who Has ADD
ERIC Educational Resources Information Center
Greenspan, Stanley I.
2006-01-01
The terms ADD (Attention Deficit Disorder) and ADHD (Attention Deficit Hyperactivity Disorder) are applied to several symptoms, including: difficulty in paying attention, distractibility, having a hard time following through on things, and sometimes over-activity and impulsivity. There are many different reasons why children have these symptoms.…
40 CFR 63.4120 - What reports must I submit?
Code of Federal Regulations, 2010 CFR
2010-07-01
... emission limitation (including any periods when emissions bypassed the add-on control device and were... emission reduction during the compliance period by emission capture systems and add-on control devices... of the CPMS. (5) The date of the latest CPMS certification or audit. (6) The date and time that each...
Monte Carlo simulation of chemistry following radiolysis with TOPAS-nBio
NASA Astrophysics Data System (ADS)
Ramos-Méndez, J.; Perl, J.; Schuemann, J.; McNamara, A.; Paganetti, H.; Faddegon, B.
2018-05-01
Simulation of water radiolysis and the subsequent chemistry provides important information on the effect of ionizing radiation on biological material. The Geant4 Monte Carlo toolkit has added chemical processes via the Geant4-DNA project. The TOPAS tool simplifies the modeling of complex radiotherapy applications with Geant4 without requiring advanced computational skills, extending the pool of users. Thus, a new extension to TOPAS, TOPAS-nBio, is under development to facilitate the configuration of track-structure simulations as well as water radiolysis simulations with Geant4-DNA for radiobiological studies. In this work, radiolysis simulations were implemented in TOPAS-nBio. Users may now easily add chemical species and their reactions, and set parameters including branching ratios, dissociation schemes, diffusion coefficients, and reaction rates. In addition, parameters for the chemical stage were re-evaluated and updated from those used by default in Geant4-DNA to improve the accuracy of chemical yields. Simulation results of time-dependent and LET-dependent primary yields Gx (chemical species per 100 eV deposited) produced at neutral pH and 25 °C by short track-segments of charged particles were compared to published measurements. The LET range was 0.05–230 keV µm‑1. The calculated Gx values for electrons satisfied the material balance equation within 0.3%, similar for protons albeit with long calculation time. A smaller geometry was used to speed up proton and alpha simulations, with an acceptable difference in the balance equation of 1.3%. Available experimental data of time-dependent G-values for agreed with simulated results within 7% ± 8% over the entire time range; for over the full time range within 3% ± 4% for H2O2 from 49% ± 7% at earliest stages and 3% ± 12% at saturation. 
For the LET-dependent Gx, the mean ratios to the experimental data were 1.11 ± 0.98, 1.21 ± 1.11, 1.05 ± 0.52, 1.23 ± 0.59 and 1.49 ± 0.63 (1 standard deviation) for , , H2, H2O2 and , respectively. In conclusion, radiolysis and subsequent chemistry with Geant4-DNA has been successfully incorporated in TOPAS-nBio. Results are in reasonable agreement with published measured and simulated data.
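For reference, the primary yield Gx reported above is simply the number of molecules of a species scored per 100 eV of deposited energy, and the comparison metric is a mean of pointwise simulated-to-measured ratios. A minimal sketch with hypothetical tallies (not the paper's data):

```python
def g_value(n_species, energy_dep_eV):
    """Primary yield Gx: molecules of a chemical species produced
    per 100 eV of energy deposited."""
    return 100.0 * n_species / energy_dep_eV

def mean_ratio(simulated, measured):
    """Mean of pointwise simulated/measured ratios, the comparison
    metric quoted above for the LET-dependent yields."""
    return sum(s / m for s, m in zip(simulated, measured)) / len(measured)

# e.g. 270 radicals scored after 10 keV deposited gives Gx = 2.7 per 100 eV
gx = g_value(270, 1.0e4)
ratio = mean_ratio([2.7, 0.45], [2.5, 0.42])   # hypothetical Gx pairs
```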
Demographic history and gene flow during silkworm domestication
2014-01-01
Background Gene flow plays an important role in the history of domesticated species. However, little is known about the demographic history of the domesticated silkworm, including gene flow with its wild relative. Results In this study, four model-based evolutionary scenarios to describe the demographic history of B. mori were hypothesized. Using the Approximate Bayesian Computation method and DNA sequence data from 29 nuclear loci, we found that the gene-flow-at-bottleneck model is the most likely scenario for silkworm domestication. The starting time of silkworm domestication was estimated to be approximately 7,500 years ago; the time of domestication termination was 3,984 years ago. Using coalescent simulation analysis, we also found that bi-directional gene flow occurred during silkworm domestication. Conclusions Estimates of silkworm domestication time are nearly consistent with the archeological evidence and our previous results. Importantly, we found that bi-directional gene flow might have occurred during silkworm domestication. Our findings add a dimension to highlight the important role of gene flow in the domestication of crops and animals. PMID:25123546
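The Approximate Bayesian Computation method used above can be sketched in its simplest rejection form: draw a parameter from the prior, simulate a summary statistic under the model, and keep the draw if it lands close to the observed summary. The toy model below (a Gaussian summary standing in for, e.g., a diversity statistic under a domestication scenario) is purely illustrative:

```python
import random

def abc_rejection(observed, simulate, prior_sample, eps, n_draws=20000, seed=1):
    """ABC by rejection: accept a prior draw when its simulated summary
    statistic falls within eps of the observed summary."""
    rng = random.Random(seed)
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample(rng)
        if abs(simulate(theta, rng) - observed) < eps:
            accepted.append(theta)
    return accepted   # an approximate sample from the posterior

# Toy model: the summary is the mean of 50 Gaussian draws centered on the
# parameter of interest; "observed" summary corresponds to a true value 7.5.
def simulate(theta, rng):
    return sum(rng.gauss(theta, 1.0) for _ in range(50)) / 50

posterior = abc_rejection(7.5, simulate, lambda rng: rng.uniform(0.0, 20.0), eps=0.2)
estimate = sum(posterior) / len(posterior)
```

The accepted draws concentrate around the true parameter; in the real analysis the "summary" is a vector of population-genetic statistics and the scenarios are compared by their acceptance rates.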
Physical habitat simulation system reference manual: version II
Milhous, Robert T.; Updike, Marlys A.; Schneider, Diane M.
1989-01-01
There are four major components of a stream system that determine the productivity of the fishery (Karr and Dudley 1978). These are: (1) flow regime, (2) physical habitat structure (channel form, substrate distribution, and riparian vegetation), (3) water quality (including temperature), and (4) energy inputs from the watershed (sediments, nutrients, and organic matter). The complex interaction of these components determines the primary production, secondary production, and fish population of the stream reach. The basic components and interactions needed to simulate fish populations as a function of management alternatives are illustrated in Figure I.1. The assessment process utilizes a hierarchical and modular approach combined with computer simulation techniques. The modular components represent the "building blocks" for the simulation. The quality of the physical habitat is a function of flow and, therefore, varies in quality and quantity over the range of the flow regime. The conceptual framework of the Incremental Methodology and guidelines for its application are described in "A Guide to Stream Habitat Analysis Using the Instream Flow Incremental Methodology" (Bovee 1982). Simulation of physical habitat is accomplished using the physical structure of the stream and streamflow. The modification of physical habitat by temperature and water quality is analyzed separately from physical habitat simulation. Temperature in a stream varies with the seasons, local meteorological conditions, stream network configuration, and the flow regime; thus, the temperature influences on habitat must be analysed on a stream system basis. Water quality under natural conditions is strongly influenced by climate and the geological materials, with the result that there is considerable natural variation in water quality. When we add the activities of man, the range of possible water qualities becomes rather large.
Consequently, water quality must also be analysed on a stream system basis. Such analysis is outside the scope of this manual, which concentrates on simulation of physical habitat based on depth, velocity, and a channel index. The results from PHABSIM can be used alone or with a series of habitat time series programs that have been developed to generate monthly or daily habitat time series from the Weighted Usable Area versus streamflow table resulting from the habitat simulation programs and from streamflow time series data. Monthly and daily streamflow time series may be obtained from USGS gages near the study site or as the output of river system management models.
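The step from a Weighted Usable Area versus streamflow table to a habitat time series is a table lookup with interpolation. A minimal numpy sketch, with a hypothetical WUA table rather than real PHABSIM output:

```python
import numpy as np

def habitat_time_series(flows, table_q, table_wua):
    """Convert a streamflow time series into a habitat (WUA) time series by
    linear interpolation in the WUA-versus-discharge table produced by the
    habitat simulation programs. Flows outside the table are clamped to its
    end values (np.interp behaviour)."""
    return np.interp(flows, table_q, table_wua)

# Hypothetical table: discharge (m^3/s) vs WUA (m^2 per 1000 m of stream).
# WUA typically peaks at an intermediate flow, as sketched here.
q_table = np.array([1.0, 5.0, 10.0, 20.0, 40.0])
wua_table = np.array([200.0, 900.0, 1400.0, 1100.0, 600.0])

monthly_q = np.array([2.0, 5.0, 15.0, 30.0])   # monthly flows from a gage
wua = habitat_time_series(monthly_q, q_table, wua_table)
```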
A carrier sensed multiple access protocol for high data rate ring networks
NASA Technical Reports Server (NTRS)
Foudriat, E. C.; Maly, Kurt J.; Overstreet, C. Michael; Khanna, S.; Paterra, Frank
1990-01-01
The results of the study of a simple but effective media access protocol for high data rate networks are presented. The protocol is based on the fact that at high data rates networks can contain multiple messages simultaneously over their span, and that in a ring, nodes can detect the presence of a message arriving from the immediate upstream neighbor. When an incoming signal is detected, the node must either abort or truncate a message it is presently sending. Thus, the protocol, with local carrier sensing and multiple access, is designated CSMA/RN. The performance of CSMA/RN with attempt and truncate is studied using analytic and simulation models. Three performance factors are presented: wait or access time, service time, and response or end-to-end travel time. The service time is basically a function of the network rate; it changes by a factor of 1 between no load and full load. Wait time, which is zero for no load, remains small for load factors up to 70 percent of full load. Response time, which adds travel time while on the network to wait and service time, is mainly a function of network length, especially for longer distance networks. Simulation results are shown for CSMA/RN where messages are removed at the destination. A wide range of local and metropolitan area network parameters, including variations in message size, network length, and node count, are studied. Finally, a scaling factor based upon the ratio of message to network length demonstrates that the results, and hence the CSMA/RN protocol, are applicable to wide area networks.
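The decomposition of response time into wait, service, and travel components can be made concrete with a back-of-the-envelope helper. The propagation speed and message parameters below are illustrative assumptions, not the authors' analytic model:

```python
PROP_SPEED_M_PER_S = 2.0e8   # assumed signal speed in the ring medium

def service_time(message_bits, rate_bps):
    """Time to clock a message onto the ring; a function of network rate."""
    return message_bits / rate_bps

def response_time(message_bits, rate_bps, distance_m, wait_s=0.0):
    """Response (end-to-end) time = wait + service + travel to destination."""
    return wait_s + service_time(message_bits, rate_bps) + distance_m / PROP_SPEED_M_PER_S

# A 10-kilobit message on a 1 Gb/s metropolitan ring, destination 50 km away:
# service is 10 microseconds while travel contributes 250 microseconds,
# illustrating why response time is dominated by network length.
t = response_time(1.0e4, 1.0e9, 5.0e4)
```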
Simulating Hadronic-to-Quark-Matter with Burn-UD: Recent work and astrophysical applications
NASA Astrophysics Data System (ADS)
Welbanks, Luis; Ouyed, Amir; Koning, Nico; Ouyed, Rachid
2017-06-01
We present the new developments in Burn-UD, our in-house hydrodynamic combustion code used to model the phase transition of hadronic-to-quark matter. Our two new modules add neutrino transport and the time evolution of a (u, d, s) quark star (QS). Preliminary simulations show that the inclusion of neutrino transport points towards new hydrodynamic instabilities that increase the burning speed. A higher burning speed could elicit the deflagration-to-detonation transition of a neutron star (NS) into a QS. We propose that a Quark-Nova (QN: the explosive transition of a NS to a QS) could help explain some of the most energetic astronomical events observed to date: superluminous supernovae (SLSNe). Our models consider a QN occurring in a massive binary experiencing two common-envelope stages, and a QN occurring after the supernova explosion of a Wolf-Rayet (WO) star. Both models have been successful in explaining the double-humped light curves of over half a dozen SLSNe. We also introduce SiRop, our r-process simulation code, and propose that a QN site has the hot temperatures and neutron densities required to make it an ideal site for the r-process.
Leake, S.A.; Lilly, M.R.
1995-01-01
The Fairbanks, Alaska, area has many contaminated sites in a shallow alluvial aquifer. A ground-water flow model is being developed using the MODFLOW finite-difference ground-water flow model program with the River Package. The modeled area is discretized in the horizontal dimensions into 118 rows and 158 columns of approximately 150-meter square cells. The fine grid spacing has the advantage of providing needed detail at the contaminated sites and surface-water features that bound the aquifer. However, the fine spacing of cells adds difficulty to simulating interaction between the aquifer and the large, braided Tanana River. In particular, the assignment of a river head is difficult if cells are much smaller than the river width. This was solved by developing a procedure for interpolating and extrapolating river head using a river distance function. Another problem is that future transient simulations would require excessive numbers of input records using the current version of the River Package. The proposed solution to this problem is to modify the River Package to linearly interpolate river head for time steps within each stress period, thereby reducing the number of stress periods required.
Parametric investigations of plasma characteristics in a remote inductively coupled plasma system
NASA Astrophysics Data System (ADS)
Shukla, Prasoon; Roy, Abhra; Jain, Kunal; Bhoj, Ananth
2016-09-01
Designing a remote plasma system involves source chamber sizing, selection of coils and/or electrodes to power the plasma, designing the downstream tubes, selection of materials used in the source and downstream regions, locations of inlets and outlets, and finally optimizing the process parameter space of pressure, gas flow rates, and power delivery. Simulations can aid in spatial and temporal plasma characterization in what are often inaccessible locations for experimental probes in the source chamber. In this paper, we report on simulations of a remote inductively coupled argon plasma system using the modeling platform CFD-ACE+. The coupled multiphysics model description successfully addresses flow, chemistry, electromagnetics, heat transfer, and plasma transport in the remote plasma system. The SimManager tool enables easy setup of parametric simulations to investigate the effect of varying the pressure, power, frequency, flow rates, and downstream tube lengths. It can also enable the automatic solution of the varied parameters to optimize a user-defined objective function, which may be the integral ion and radical fluxes at the wafer. The fast run time, coupled with the parametric and optimization capabilities, can add significant insight and value in design and optimization.
Coarse-graining to the meso and continuum scales with molecular-dynamics-like models
NASA Astrophysics Data System (ADS)
Plimpton, Steve
Many engineering-scale problems that industry or the national labs try to address with particle-based simulations occur at length and time scales well beyond the most optimistic hopes of traditional coarse-graining methods for molecular dynamics (MD), which typically start at the atomic scale and build upward. However classical MD can be viewed as an engine for simulating particles at literally any length or time scale, depending on the models used for individual particles and their interactions. To illustrate I'll highlight several coarse-grained (CG) materials models, some of which are likely familiar to molecular-scale modelers, but others probably not. These include models for water droplet freezing on surfaces, dissipative particle dynamics (DPD) models of explosives where particles have internal state, CG models of nano or colloidal particles in solution, models for aspherical particles, Peridynamics models for fracture, and models of granular materials at the scale of industrial processing. All of these can be implemented as MD-style models for either soft or hard materials; in fact they are all part of our LAMMPS MD package, added either by our group or contributed by collaborators. Unlike most all-atom MD simulations, CG simulations at these scales often involve highly non-uniform particle densities. So I'll also discuss a load-balancing method we've implemented for these kinds of models, which can improve parallel efficiencies. From the physics point-of-view, these models may be viewed as non-traditional or ad hoc. But because they are MD-style simulations, there's an opportunity for physicists to add statistical mechanics rigor to individual models. Or, in keeping with a theme of this session, to devise methods that more accurately bridge models from one scale to the next.
NASA Astrophysics Data System (ADS)
Gauthier, Robert C.; Mnaymneh, Khaled
2005-09-01
The key feature that gives photonic crystals (PhCs) their ability to form photonic band gaps (PBGs) analogous to the electronic band gaps of semiconductors is their translational symmetries. In recent years, however, it has been found that structures that possess only rotational symmetries can also have PBGs. In addition, these structures, known as photonic quasicrystals (PhQs), have other interesting qualities that set them apart from their translational cousins. One interesting feature is how defect states can be created in PhQs. If the rotational symmetry is disturbed, defect states analogous to those created in PhCs can be obtained. Simulation results for these defect states and other propagation properties of planar 12-fold photonic quasicrystal patterns, and their physical implementation in Silicon-On-Insulator (SOI), are presented. The main mechanisms required to make any optical multiplexing system are propagation, stop bands, and add/drop ports. With the rotational symmetry of the PhQ providing the stop bands, line defects facilitating propagation, and these specially designed defect states acting as add/drop ports, a physical implementation of an optical add-drop multiplexer (OADM) can be presented. Theoretical, practical, and manufacturing benefits of PhQs are discussed. Simulated transmission plots are shown for various fill factors, dielectric contrasts, and propagation directions. It is shown that low-index waveguides can be produced using the quasicrystal photonic crystal pattern. Fabrication steps and results are shown.
Acquisition of Ice-Tethered Profilers with Velocity (ITP-V) Instruments for Future Arctic Studies
2016-11-15
instrument that measures sea water temperature and salinity versus depth, the ITP-V adds a multi-axis acoustic travel-time current meter and ... housing capped by an ultra-high-molecular-weight polyethylene dome. The electronics case sits within a foam body designed to provide buoyancy for ... then transmits them by satellite to a logger computer at WHOI. The ITP-V instruments add a multi-axis acoustic travel-time current meter and
Cultural Norms of Clinical Simulation in Undergraduate Nursing Education
2015-01-01
Simulated practice of clinical skills has occurred in skills laboratories for generations, and there is strong evidence to support high-fidelity clinical simulation as an effective tool for learning performance-based skills. Less well known are the processes within clinical simulation environments that facilitate the learning of socially bound and integrated components of nursing practice. Our purpose in this study was to ethnographically describe the situated learning within a simulation laboratory for baccalaureate nursing students in the western United States. We gathered and analyzed data from observations of simulation sessions as well as interviews with students and faculty to produce a rich contextualization of the relationships, beliefs, practices, environmental factors, and theoretical underpinnings encoded in cultural norms of the students' situated practice within simulation. Our findings add to the evidence linking learning in simulation to the development of broad practice-based skills and clinical reasoning for undergraduate nursing students. PMID:28462300
Effect of monthly areal rainfall uncertainty on streamflow simulation
NASA Astrophysics Data System (ADS)
Ndiritu, J. G.; Mkhize, N.
2017-08-01
Areal rainfall is mostly obtained from point rainfall measurements that are sparsely located, and several studies have shown that this results in large areal rainfall uncertainties at the daily time step. However, water resources assessment is often carried out at a monthly time step, and streamflow simulation is usually an essential component of this assessment. This study set out to quantify monthly areal rainfall uncertainties and assess their effect on streamflow simulation. This was achieved by (i) quantifying areal rainfall uncertainties and using these to generate stochastic monthly areal rainfalls, and (ii) finding out how the quality of monthly streamflow simulation and streamflow variability change if stochastic areal rainfalls are used instead of historic areal rainfalls. Tests on monthly rainfall uncertainty were carried out using data from two South African catchments, while streamflow simulation was confined to one of them. A non-parametric model that had been applied at a daily time step was used for stochastic areal rainfall generation, and the Pitman catchment model calibrated with the SCE-UA optimizer was used for streamflow simulation. 100 randomly-initialised calibration-validation runs using 100 stochastic areal rainfalls were compared with 100 runs obtained using the single historic areal rainfall series. By using 4 rain gauges alternately to obtain areal rainfall, the resulting differences in areal rainfall averaged 20% of the mean monthly areal rainfall; rainfall uncertainty was therefore highly significant. Pitman model simulations obtained coefficients of efficiency averaging 0.66 and 0.64 in calibration and validation using historic rainfalls, while the respective values using stochastic areal rainfalls were 0.59 and 0.57. Average bias was less than 5% in all cases.
The streamflow ranges using historic rainfalls averaged 29% of the mean naturalised flow in calibration and validation, and the respective average ranges using stochastic monthly rainfalls were 86 and 90% of the mean naturalised streamflow. In calibration, 33% of the naturalised flows were located within the streamflow ranges with historic rainfall simulations, and using stochastic rainfalls increased this to 66%. In validation, the respective percentages of naturalised flows located within the simulated streamflow ranges were 32 and 72%. The analysis reveals that monthly areal rainfall uncertainty is significant, and incorporating it into streamflow simulation would add validity to the results.
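The idea of quantifying areal rainfall uncertainty by using the 4 gauges alternately can be sketched as a leave-one-out exercise. This is an illustrative sketch with synthetic gauge data, not the study's method or data.

```python
# Sketch: estimate monthly areal-rainfall uncertainty by alternately
# leaving one of 4 rain gauges out of the areal average, then expressing
# the spread across the estimates relative to the all-gauge mean.
import random, statistics

random.seed(7)
n_months, n_gauges = 120, 4
# Synthetic monthly gauge totals (mm); real data would be observed series
gauges = [[max(0.0, random.gauss(80, 35)) for _ in range(n_months)]
          for _ in range(n_gauges)]

rel_ranges = []
for m in range(n_months):
    # One areal estimate per leave-one-out gauge combination
    estimates = [statistics.mean(gauges[g][m] for g in range(n_gauges)
                                 if g != skip)
                 for skip in range(n_gauges)]
    mean_all = statistics.mean(g[m] for g in gauges)
    if mean_all > 0:
        rel_ranges.append((max(estimates) - min(estimates)) / mean_all)

print(f"average range: {100 * statistics.mean(rel_ranges):.0f}% of monthly mean")
```

With real sparse networks, a spread of this kind averaging around 20% of the monthly mean, as reported above, signals substantial input uncertainty for calibration.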
Cognitive task analysis for instruction in single-injection ultrasound-guided regional anesthesia
NASA Astrophysics Data System (ADS)
Gucev, Gligor V.
Cognitive task analysis (CTA) is a methodology for eliciting knowledge from subject matter experts. CTA has been used to capture the cognitive processes, decision-making, and judgments that underlie expert behaviors. A review of the literature revealed that CTA had not yet been used to capture the knowledge required to perform ultrasound-guided regional anesthesia (UGRA). The purpose of this study was to use CTA to extract knowledge from UGRA experts and to determine whether instruction based on a CTA of UGRA produces results superior to those of traditional training. This study adds to the knowledge base of CTA in being the first to effectively capture the expert knowledge of UGRA. The derived protocol was used in a randomized, double-blinded experiment involving UGRA instruction for 39 novice learners. The results of this study strongly support the hypothesis that CTA-based instruction in UGRA is more effective than conventional clinical instruction, as measured by conceptual pre- and post-tests, performance of a simulated UGRA procedure, and the time necessary for task performance. This study adds to the number of studies that have demonstrated the superiority of CTA-informed instruction. Finally, it produced several validated instruments that can be used in instructing and evaluating UGRA.
Stability and stabilization of the lattice Boltzmann method
NASA Astrophysics Data System (ADS)
Brownlee, R. A.; Gorban, A. N.; Levesley, J.
2007-03-01
We revisit the classical stability-versus-accuracy dilemma for lattice Boltzmann methods (LBM). Our goal is a stable method of second-order accuracy for fluid dynamics based on the lattice Bhatnagar-Gross-Krook method (LBGK). The LBGK scheme can be recognized as a discrete dynamical system generated by free flight and entropic involution. In this framework the stability and accuracy analysis are more natural. We find the necessary and sufficient conditions for second-order accurate fluid dynamics modeling. In particular, it is proven that in order to guarantee second-order accuracy the distribution should belong to a distinguished surface, the invariant film (up to second order in the time step). This surface is the trajectory of the (quasi)equilibrium distribution surface under free flight. The main instability mechanisms are identified. The simplest recipes for stabilization add no artificial dissipation (up to second order) and provide second-order accuracy of the method. Two other prescriptions add some artificial dissipation locally and prevent the system from loss of positivity and local blowup. Demonstrations of the proposed stable LBGK schemes are provided by the numerical simulation of a one-dimensional (1D) shock tube and the unsteady 2D flow around a square cylinder up to Reynolds number Re ≈ 20,000.
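The free-flight-plus-collision structure of LBGK can be seen in a minimal 1D sketch. This is the standard D1Q3 BGK scheme on a shock-tube-like initial condition, for orientation only; it is not the authors' stabilized variant, and all parameters are illustrative.

```python
# Minimal D1Q3 LBGK: streaming ("free flight") followed by BGK relaxation
# toward a second-order equilibrium; lattice sound speed c_s^2 = 1/3.
import numpy as np

N, tau = 200, 0.8
c = np.array([-1, 0, 1])            # lattice velocities
w = np.array([1/6, 2/3, 1/6])       # lattice weights

def equilibrium(rho, u):
    cu = np.outer(c, u)
    return w[:, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*u**2)

# Shock-tube-like initial condition: a density step at rest
rho = np.where(np.arange(N) < N // 2, 1.5, 1.0)
f = equilibrium(rho, np.zeros(N))

for _ in range(100):
    for i, ci in enumerate(c):              # streaming (periodic)
        f[i] = np.roll(f[i], ci)
    rho = f.sum(axis=0)                     # moments
    u = (c @ f) / rho
    f += -(f - equilibrium(rho, u)) / tau   # BGK collision
print(rho.sum())  # mass is conserved by both streaming and collision
```

Because the equilibrium shares the density and momentum moments of f, the collision step conserves mass and momentum exactly; the stabilization recipes discussed in the abstract modify this collision step near the invariant film.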
Integrated Medical Model (IMM) 4.0 Enhanced Functionalities
NASA Technical Reports Server (NTRS)
Young, M.; Keenan, A. B.; Saile, L.; Boley, L. A.; Walton, M. E.; Shah, R. V.; Kerstman, E. L.; Myers, J. G.
2015-01-01
The Integrated Medical Model is a probabilistic simulation model that uses input data on 100 medical conditions to simulate expected medical events, the resources required to treat them, and the resulting impact to the mission for specific crew and mission characteristics. The newest development version of IMM, IMM v4.0, adds capabilities that remove some of the conservative assumptions that underlie the current operational version, IMM v3. While IMM v3 provides the framework to simulate whether a medical event occurred, IMM v4 also simulates when the event occurred during a mission timeline. This allows for more accurate estimation of mission time lost and resource utilization. In addition to the mission timeline, IMM v4.0 features two enhancements that address IMM v3 assumptions regarding medical event treatment. Medical events in IMM v3 are assigned the untreated outcome if any resource required to treat the event was unavailable. IMM v4 allows for partially treated outcomes that are proportional to the amount of required resources available, thus removing the dichotomous treatment assumption. An additional capability of IMM v4 is to use an alternative medical resource when the primary resource assigned to the condition is depleted, more accurately reflecting the real-world system. The additional capabilities defining IMM v4.0 (the mission timeline, partial treatment, and the alternate drug capability) result in more realistic predicted mission outcomes. The primary model outcomes of IMM v4.0 for the ISS6 mission, including mission time lost, probability of evacuation, and probability of loss of crew life, are compared to those produced by the current operational version of IMM to showcase the enhanced prediction capabilities.
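The partial-treatment and alternate-resource ideas can be sketched as simple resource accounting. The resource names and quantities below are hypothetical illustrations of the described logic, not IMM data or code.

```python
# Sketch of "partial treatment" and "alternate resource" logic: treatment
# benefit scales with the fraction of required resources available, and a
# depleted primary resource falls back to a designated alternate.
def treat(event_needs, stock, alternates):
    """Return the fraction of the event treated; updates stock in place."""
    got, needed = 0.0, 0.0
    for resource, qty in event_needs.items():
        needed += qty
        take = min(qty, stock.get(resource, 0.0))
        stock[resource] = stock.get(resource, 0.0) - take
        shortfall = qty - take
        alt = alternates.get(resource)
        if shortfall and alt is not None:       # fall back to the alternate
            alt_take = min(shortfall, stock.get(alt, 0.0))
            stock[alt] = stock.get(alt, 0.0) - alt_take
            take += alt_take
        got += take
    return got / needed if needed else 1.0

stock = {"drug_A": 3.0, "drug_B": 5.0}
alternates = {"drug_A": "drug_B"}
f1 = treat({"drug_A": 2.0}, stock, alternates)   # fully treated
f2 = treat({"drug_A": 8.0}, stock, alternates)   # 1 unit of A + 5 of B left
print(f1, f2)  # 1.0 0.75
```

Under the dichotomous IMM v3 assumption described above, the second event would simply be scored as untreated; the proportional outcome (0.75 here) is what removes that conservatism.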
Jani, Vinod; Sonavane, Uddhavesh; Joshi, Rajendra
2016-07-01
Protein folding is a multi-microsecond-timescale event and involves many conformational transitions. Crucial conformational transitions responsible for the biological functions of biomolecules are difficult to capture using current state-of-the-art molecular dynamics (MD) simulations. Protein folding, being a stochastic process, witnesses these transitions as rare events. Many new methodologies have been proposed for observing these rare events. In this work, temperature-aided cascade MD is proposed as a technique for studying conformational transitions. Folding studies for the Engrailed homeodomain and the immunoglobulin-binding domain B of protein A have been carried out. Using this methodology, unfolded structures with an RMSD of 20 Å were folded to structures with an RMSD of 2 Å. Three sets of cascade MD runs were carried out using implicit solvation, explicit solvation, and a charge-updating scheme. In the charge-updating scheme, charges based on the conformation obtained are calculated and updated in the topology file. In all the simulations, a structure of 2 Å was reached within a few nanoseconds using these methods. Umbrella sampling was performed using snapshots from the temperature-aided cascade MD trajectory to build an entire conformational transition pathway. The advantage of the method is that the possible pathways for a particular reaction can be explored within a short duration of simulation time; the disadvantage is that knowledge of the start and end states is required. The charge-updating scheme adds polarization effects to the force field. This improves the electrostatic interactions among the atoms, which may help the protein to fold faster.
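The temperature-aided cascade idea, in which high-temperature stages help the system escape traps before cooling locks in a target basin, can be shown on a toy landscape. This is a one-dimensional Metropolis caricature with made-up parameters, not the authors' MD protocol.

```python
# Toy "temperature cascade": Metropolis sampling on a 1D double well at a
# sequence of decreasing temperatures. High-T stages cross barriers that
# would trap a low-T run started in the wrong basin.
import math, random

random.seed(3)
U = lambda x: (x**2 - 1.0)**2        # double well with minima at x = -1, +1

def metropolis(x, T, steps=4000, dx=0.3):
    for _ in range(steps):
        xn = x + random.uniform(-dx, dx)
        dU = U(xn) - U(x)
        if dU <= 0 or random.random() < math.exp(-dU / T):
            x = xn
    return x

x = 1.0                               # start in one basin
for T in (2.0, 1.0, 0.5, 0.1, 0.02):  # cascade: hot stages first, then cool
    x = metropolis(x, T)
print(x)  # ends settled near one of the minima, x close to +1 or -1
```

The cascade explores broadly while hot and then anneals into a well; in the actual study the analogous snapshots feed umbrella sampling to reconstruct the transition pathway.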
NASA Astrophysics Data System (ADS)
Wilusz, D. C.; Maxwell, R. M.; Buda, A. R.; Ball, W. P.; Harman, C. J.
2016-12-01
The catchment transit-time distribution (TTD) is the time-varying, probabilistic distribution of water travel times through a watershed. The TTD is increasingly recognized as a useful descriptor of a catchment's flow and transport processes. However, TTDs are temporally complex and cannot be observed directly at watershed scale. Estimates of TTDs depend on available environmental tracers (such as stable water isotopes) and an assumed model whose parameters can be inverted from tracer data. All tracers have limitations though, such as (typically) short periods of observation or non-conservative behavior. As a result, models that faithfully simulate tracer observations may nonetheless yield TTD estimates with significant errors at certain times and water ages, conditioned on the tracer data available and the model structure. Recent advances have shown that time-varying catchment TTDs can be parsimoniously modeled by the lumped parameter rank StorAge Selection (rSAS) model, in which an rSAS function relates the distribution of water ages in outflows to the composition of age-ranked water in storage. Like other TTD models, rSAS is calibrated and evaluated against environmental tracer data, and the relative influence of tracer-dependent and model-dependent error on its TTD estimates is poorly understood. The purpose of this study is to benchmark the ability of different rSAS formulations to simulate TTDs in a complex, synthetic watershed where the lumped model can be calibrated and directly compared to a virtually "true" TTD. This experimental design allows for isolation of model-dependent error from tracer-dependent error. The integrated hydrologic model ParFlow with SLIM-FAST particle tracking code is used to simulate the watershed and its true TTD. To add field intelligence, the ParFlow model is populated with over forty years of hydrometric and physiographic data from the WE-38 subwatershed of the USDA's Mahantango Creek experimental catchment in PA, USA. 
The results are intended to give practical insight into tradeoffs between rSAS model structure and skill, and define a new performance benchmark to which other transit time models can be compared.
The Psychologist Said Quickly, “Dialogue Descriptions Modulate Reading Speed!”
Stites, Mallory C.; Luke, Steven G.; Christianson, Kiel
2012-01-01
The current study investigates whether the semantic content of a dialogue description can affect reading times on an embedded quote to determine if the speed at which a character is described as saying a quote influences how quickly it is read. Yao and Scheepers (2011) previously found that readers were faster to read direct quotes when the preceding context implied that the talker generally spoke quickly, an effect attributed to perceptual simulation of talker speed. The current study manipulated the speed of a physical action performed by the speaker independently from character talking rate to determine if these sources have separable effects on perceptual simulation of a direct quote. Results showed that readers spent less time reading direct quotes described as being said quickly compared to slowly (e.g., John walked/bolted into the room and said energetically/nonchalantly, “I finally found my car keys”), an effect that was not present when a nearly identical phrase was presented as an indirect quote (e.g., John…said energetically that he finally found his car keys). The speed of the character’s movement did not affect direct quote reading times. Furthermore, fast adverbs were themselves read significantly faster than slow adverbs, an effect we attribute to implicit effects on the eye movement program stemming from automatically activated semantic features of the adverbs. Our findings add to the literature on perceptual simulation by showing that these effects can be instantiated with only a single adverb, and are strong enough to override effects of global sentence speed. PMID:22927027
Selecting Mangas and Graphic Novels
ERIC Educational Resources Information Center
Nylund, Carol
2007-01-01
The decision to add graphic novels, and particularly the Japanese style called manga, was one the author had debated for a long time. In this article, the author shares her experience of purchasing graphic novels and mangas to add to her library collection. She shares how graphic novels and mangas have revitalized the library.
Interannual rainfall variability over China in the MetUM GA6 and GC2 configurations
NASA Astrophysics Data System (ADS)
Stephan, Claudia Christine; Klingaman, Nicholas P.; Vidale, Pier Luigi; Turner, Andrew G.; Demory, Marie-Estelle; Guo, Liang
2018-05-01
Six climate simulations of the Met Office Unified Model Global Atmosphere 6.0 and Global Coupled 2.0 configurations are evaluated against observations and reanalysis data for their ability to simulate the mean state and year-to-year variability of precipitation over China. To analyse the sensitivity to air-sea coupling and horizontal resolution, atmosphere-only and coupled integrations at atmospheric horizontal resolutions of N96, N216 and N512 (corresponding to approximately 200, 90 and 40 km in the zonal direction at the equator, respectively) are analysed. The mean and interannual variance of seasonal precipitation are too high in all simulations over China but improve with finer resolution and coupling. Empirical orthogonal teleconnection (EOT) analysis is applied to simulated and observed precipitation to identify spatial patterns of temporally coherent interannual variability in seasonal precipitation. To connect these patterns to large-scale atmospheric and coupled air-sea processes, atmospheric and oceanic fields are regressed onto the corresponding seasonal mean time series. All simulations reproduce the observed leading pattern of interannual rainfall variability in winter, spring and autumn; the leading pattern in summer is present in all but one simulation. However, only in two simulations are the four leading patterns associated with the observed physical mechanisms. Coupled simulations capture more observed patterns of variability and associate more of them with the correct physical mechanism, compared to atmosphere-only simulations at the same resolution. However, finer resolution does not improve the fidelity of these patterns or their associated mechanisms. This shows that evaluating climate models by only geographical distribution of mean precipitation and its interannual variance is insufficient. The EOT analysis adds knowledge about coherent variability and associated mechanisms.
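EOT analysis, as commonly described, iteratively picks the grid point whose time series explains the most variance summed over all points, regresses that series out, and repeats. The sketch below illustrates that procedure on synthetic data; it is not the study's code, and the data are invented.

```python
# Sketch of empirical orthogonal teleconnection (EOT) analysis: greedily
# select base points whose time series explain the most total variance,
# removing each selected signal by linear regression before the next pass.
import numpy as np

def eot(data, n_modes=2):
    """data: (time, points) matrix. Returns base-point indices and series."""
    X = data - data.mean(axis=0)
    bases, series = [], []
    for _ in range(n_modes):
        cov = X.T @ X                              # (points, points)
        var = (X**2).sum(axis=0)
        # explained variance if point j is the predictor: sum_i cov_ij^2/var_j
        score = (cov**2).sum(axis=0) / np.where(var > 0, var, np.inf)
        j = int(np.argmax(score))
        t = X[:, j].copy()
        bases.append(j); series.append(t)
        beta = (X.T @ t) / (t @ t)                 # regress t out of all points
        X = X - np.outer(t, beta)
    return bases, series

rng = np.random.default_rng(0)
t1 = rng.standard_normal(200)
data = np.column_stack([
    t1,                                            # base of a teleconnection
    0.9 * t1 + 0.2 * rng.standard_normal(200),
    -0.7 * t1 + 0.2 * rng.standard_normal(200),
    rng.standard_normal(200),                      # unrelated points
    rng.standard_normal(200),
])
bases, _ = eot(data, n_modes=1)
print(bases[0])  # one of the three mutually correlated points (0, 1 or 2)
```

Regressing each selected time series onto physical fields, as done in the study, then links each EOT pattern to its candidate mechanism.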
Cuerva, Marcos J; Piñel, Carlos S; Martin, Lourdes; Espinosa, Jose A; Corral, Octavio J; Mendoza, Nicolás
2018-02-12
The design of optimal courses for undergraduate obstetric teaching is a relevant question. This study evaluates two designs of a simulator-based learning activity on childbirth with regard to respect for the patient, obstetric manoeuvres, interpretation of cardiotocography (CTG) tracings, and infection prevention. In this randomised experimental study, two groups of undergraduate students performed simulator-based learning activities on childbirth that differed in the content of their briefing sessions. For the first group, the briefing session included the teachers' demonstration of a properly performed scenario according to the Spanish clinical practice guidelines on care in normal childbirth; for the second group it did not, and the students observed the properly performed scenario only after the simulation. The group that observed a properly performed scenario after the simulation obtained worse grades during the simulation, but better grades during the debriefing and evaluation. Simulator use in childbirth may therefore be more fruitful when medical students observe correct performance at the completion of the scenario rather than at the start. Impact statement What is already known on this subject? There is a scarcity of literature about the design of optimal high-fidelity simulation training in childbirth. It is known that preparing simulator-based learning activities is a complex process. Simulator-based learning includes the following steps: briefing, simulation, debriefing and evaluation. The most important part of high-fidelity simulation is the debriefing. A good briefing and simulation are of high relevance in order to have a fruitful debriefing session. What do the results of this study add? Our study describes a full simulator-based learning activity on childbirth that can be reproduced in similar facilities.
The findings of this study add that high-fidelity simulation training in childbirth is favoured by a short briefing session and an abrupt start to the scenario, rather than a long briefing session that includes direct instruction in the scenario. What are the implications of these findings for clinical practice and/or further research? The findings of this study reveal what to include in the briefing of simulator-based learning activities on childbirth. These findings have implications in medical teaching and in medical practice.
NASA Technical Reports Server (NTRS)
1993-01-01
MOOG, Inc. supplies hydraulic actuators for the Space Shuttle. When MOOG learned NASA was interested in electric actuators for possible future use, the company designed them with assistance from Marshall Space Flight Center. It also decided to pursue the system's commercial potential. This led to a partnership with InterActive Simulation, Inc. for production of cabin flight simulators for museums, expositions, etc. The resulting products, the Magic Motion Simulator 30 Series, are the first electric-powered simulators. Movements are computer-guided, including free fall to heighten the sense of moving through space. A projection system provides visual effects, and the 11 speakers of a digital, laser-based sound system add to the realism. The electric actuators are easier to install and have lower operating costs, noise, heat, and staff requirements. The U.S. Space & Rocket Center and several other organizations have purchased the simulators.
Extended frequency turbofan model
NASA Technical Reports Server (NTRS)
Mason, J. R.; Park, J. W.; Jaekel, R. F.
1980-01-01
The fan model was developed using two dimensional modeling techniques to add dynamic radial coupling between the core stream and the bypass stream of the fan. When incorporated into a complete TF-30 engine simulation, the fan model greatly improved compression system frequency response to planar inlet pressure disturbances up to 100 Hz. The improved simulation also matched engine stability limits at 15 Hz, whereas the one dimensional fan model required twice the inlet pressure amplitude to stall the simulation. With verification of the two dimensional fan model, this program formulated a high frequency F-100(3) engine simulation using row by row compression system characteristics. In addition to the F-100(3) remote splitter fan, the program modified the model fan characteristics to simulate a proximate splitter version of the F-100(3) engine.
Simulated characteristics of the DEGAS γ-detector array
NASA Astrophysics Data System (ADS)
Li, G. S.; Lizarazo, C.; Gerl, J.; Kojouharov, I.; Schaffner, H.; Górska, M.; Pietralla, N.; Saha, S.; Liu, M. L.; Wang, J. G.
2018-05-01
The performance of the novel HPGe cluster array DEGAS, to be used at FAIR, has been studied through GEANT4 simulations using accurate geometries of most of the detector components. The simulation framework was tested by comparison with experimental data from various detector setups. The study showed that the DEGAS system could provide a clear improvement in photo-peak efficiency compared to the previous RISING array. In addition, the active BGO back-catcher could greatly enhance the background suppression capability. The add-back analysis revealed that even at a γ multiplicity of six, the sensitivity is improved by adding back the energy depositions of neighboring Ge crystals.
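Add-back, as described above, sums energy depositions in neighboring Ge crystals so a Compton-scattered gamma is reconstructed as one photo-peak hit. The sketch below shows the generic clustering idea; the crystal geometry and energies are made up, not the DEGAS configuration.

```python
# Generic add-back sketch: merge energy depositions in touching crystals
# (connected clusters in the adjacency map) into single reconstructed hits.
def add_back(hits, neighbors):
    """hits: {crystal_id: energy_keV}; neighbors: {id: adjacent ids}.
    Returns reconstructed cluster energies, largest first."""
    remaining = dict(hits)
    clusters = []
    while remaining:
        seed = next(iter(remaining))
        stack, cluster_e = [seed], 0.0
        while stack:                       # flood-fill the connected cluster
            cid = stack.pop()
            if cid in remaining:
                cluster_e += remaining.pop(cid)
                stack.extend(neighbors.get(cid, ()))
        clusters.append(cluster_e)
    return sorted(clusters, reverse=True)

# A 662 keV gamma Compton-scatters between crystals 0 and 1; crystal 5 is
# an unrelated hit elsewhere in the array (geometry here is hypothetical).
neighbors = {0: {1}, 1: {0, 2}, 2: {1}, 5: set()}
print(add_back({0: 400.0, 1: 262.0, 5: 120.0}, neighbors))  # [662.0, 120.0]
```

At high gamma multiplicity the risk is summing depositions from different gammas, which is why the reported gain at multiplicity six is a meaningful result.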
Cattle Uterus: A Novel Animal Laboratory Model for Advanced Hysteroscopic Surgery Training
Ewies, Ayman A. A.; Khan, Zahid R.
2015-01-01
In recent years, due to reduced training opportunities, the major shift in surgical training is towards the use of simulation and animal laboratories. Despite the merits of Virtual Reality Simulators, they are far from representing the real challenges encountered in theatres. We introduce the “Cattle Uterus Model” in the hope that it will be adopted in training courses as a low cost and easy-to-set-up tool. It adds new dimensions to the advanced hysteroscopic surgery training experience by providing tactile sensation and simulating intraoperative difficulties. It complements conventional surgical training, aiming to maximise clinical exposure and minimise patients' harm. PMID:26265918
The Sustainable Technology Division has recently completed an implementation of the U.S. EPA's Waste Reduction (WAR) Algorithm that can be directly accessed from a Cape-Open compliant process modeling environment. The WAR Algorithm add-in can be used in AmsterChem's COFE (Cape-Op...
Enterprise Requirements and Acquisition Model (ERAM) Analysis and Extension
2014-02-20
add them to the ERAM simulation. Reference: Arena, M. V., Obaid, Y., Galway, L. A., Fox, B., Graser, J. C., Sollinger, J. M., Wu, F., & Wong, C. (2006). Impossible certainty: Cost risk analysis for Air Force systems (MG-415).
Busse, Harald; Riedel, Tim; Garnov, Nikita; Thörmer, Gregor; Kahn, Thomas; Moche, Michael
2015-01-01
MRI is of great clinical utility for the guidance of special diagnostic and therapeutic interventions. The majority of such procedures are performed iteratively ("in-and-out") in standard, closed-bore MRI systems, with control imaging inside the bore and needle adjustments outside the bore. The fundamental limitations of such an approach have led to the development of various assistance techniques, from simple guidance tools to advanced navigation systems. The purpose of this work was to thoroughly assess the targeting accuracy, workflow, and usability of a clinical add-on navigation solution on 240 simulated biopsies by different medical operators. Navigation relied on a virtual 3D MRI scene with real-time overlay of the optically tracked biopsy needle. Smart reference markers on a freely adjustable arm ensured proper registration. Twenty-four operators, attending radiologists (AR), resident radiologists (RR), and medical students (MS), performed well-controlled biopsies of 10 embedded model targets (mean diameter: 8.5 mm, insertion depths: 17-76 mm). Targeting accuracy, procedure times, and 13 Likert scores on system performance were determined (strong agreement: 5.0). Differences in diagnostic success rates (AR: 93%, RR: 88%, MS: 81%) were not significant. In contrast, between-group differences in biopsy times (AR: 4:15, RR: 4:40, MS: 5:06 min:sec) were significant (p<0.01). The mean overall rating was 4.2. The average operator would use the system again (4.8) and stated that the outcome justifies the extra effort (4.4). The lowest agreement was reported for robustness against external perturbations (2.8). The described combination of optical tracking technology with automatic MRI registration appears to be sufficiently accurate for instrument guidance in a standard (closed-bore) MRI environment. High targeting accuracy and usability were demonstrated on a relatively large number of procedures and operators.
Between groups with different expertise there were significant differences in experimental procedure times but not in the number of successful biopsies.
Exploring the optimum step size for defocus curves.
Wolffsohn, James S; Jinabhai, Amit N; Kingsnorth, Alec; Sheppard, Amy L; Naroo, Shehzad A; Shah, Sunil; Buckhurst, Phillip; Hall, Lee A; Young, Graeme
2013-06-01
To evaluate the effect of reducing the number of visual acuity measurements made in a defocus curve on the quality of data quantified. Midland Eye, Solihull, United Kingdom. Evaluation of a technique. Defocus curves were constructed by measuring visual acuity on a distance logMAR letter chart, randomizing the test letters between lens presentations. The lens powers evaluated ranged between +1.50 diopters (D) and -5.00 D in 0.50 D steps, which were also presented in a randomized order. Defocus curves were measured binocularly with the Tecnis diffractive, Rezoom refractive, Lentis rotationally asymmetric segmented (+3.00 D addition [add]), and Finevision trifocal multifocal intraocular lenses (IOLs) implanted bilaterally, and also for the diffractive IOL and refractive or rotationally asymmetric segmented (+3.00 D and +1.50 D adds) multifocal IOLs implanted contralaterally. Relative and absolute range of clear-focus metrics and area metrics were calculated for curves fitted using 0.50 D, 1.00 D, and 1.50 D steps and a near add-specific profile (ie, distance, half the near add, and the full near-add powers). A significant difference in simulated results was found in at least 1 of the relative or absolute range of clear-focus or area metrics for each of the multifocal designs examined when the defocus-curve step size was increased (P<.05). Faster methods of capturing defocus curves from multifocal IOL designs appear to distort the metric results and are therefore not valid. No author has a financial or proprietary interest in any material or method mentioned. Copyright © 2013 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
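One of the metrics discussed, the range of clear focus, can be computed from a sampled defocus curve by linear interpolation against an acuity criterion. The sketch below is illustrative only; the curve values and the 0.3 logMAR criterion are hypothetical, not the study's data.

```python
# Sketch of a "range of clear focus" metric: the total defocus span over
# which acuity is better (lower logMAR) than a criterion, with linear
# interpolation between the sampled lens powers.
def clear_range(defocus, logmar, criterion=0.3):
    """defocus in diopters (ascending); logmar acuity per lens power."""
    total = 0.0
    for i in range(len(defocus) - 1):
        d1, d2, a1, a2 = defocus[i], defocus[i + 1], logmar[i], logmar[i + 1]
        in1, in2 = a1 <= criterion, a2 <= criterion
        if in1 and in2:                   # whole segment is "clear"
            total += d2 - d1
        elif in1 != in2:                  # interpolate the crossing point
            dc = d1 + (criterion - a1) / (a2 - a1) * (d2 - d1)
            total += (dc - d1) if in1 else (d2 - dc)
    return total

# Hypothetical defocus curve sampled in 1.00 D steps
defocus = [-3.0, -2.0, -1.0, 0.0, 1.0]
logmar = [0.6, 0.4, 0.1, 0.0, 0.5]
print(clear_range(defocus, logmar))  # about 2.27 D of clear focus
```

Coarser step sizes change where the interpolated crossings fall, which is one mechanism by which the faster protocols examined above can distort the metric.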
Digging Postholes Adds Depth and Authenticity to a Shallow Curriculum
ERIC Educational Resources Information Center
Virtue, David C.; Buchanan, Anne; Vogler, Kenneth E.
2012-01-01
In the current era of high-stakes testing and accountability, many social studies teachers struggle to find creative ways to add depth and authenticity to a broad, shallow curriculum. Teachers can use the time after tests are administered for students to reflect back on the social studies curriculum and select topics they want to study more deeply…
It's Never Too Early: Why Economics Education in the Elementary Classroom
ERIC Educational Resources Information Center
Meszaros, Bonnie T.; Evans, Stella
2010-01-01
There never seems to be enough time to teach everything that administrators, policy advocates, parents, legislators, and the general public think should be addressed in the elementary classroom. Each year, elementary teachers are asked to add more and more to their already crowded curriculum. Add to this the pressures of state standards and making…
40 CFR 63.1505 - Emission standards for affected sources and emission units.
Code of Federal Regulations, 2014 CFR
2014-07-01
... any PM add-on air pollution control device if a continuous opacity monitor (COM) or visible emissions... percent opacity from any PM add-on air pollution control device if a COM is chosen as the monitoring.../delacquering kiln/decoating kiln is equipped with an afterburner having a design residence time of at least 1...
40 CFR 63.4720 - What reports must I submit?
Code of Federal Regulations, 2010 CFR
2010-07-01
... an emission limitation (including any periods when emissions bypassed the add-on control device and... systems and add-on control devices, using Equations 1 and 1A through 1D of § 63.4761, and Equations 2, 3.... (v) The date of the latest CPMS certification or audit. (vi) The date and time that each CPMS was...
Simulation of Ultra-Small MOSFETs Using a 2-D Quantum-Corrected Drift-Diffusion Model
NASA Technical Reports Server (NTRS)
Biegal, Bryan A.; Rafferty, Connor S.; Yu, Zhiping; Ancona, Mario G.; Dutton, Robert W.; Saini, Subhash (Technical Monitor)
1998-01-01
The continued down-scaling of electronic devices, in particular the commercially dominant MOSFET, will force a fundamental change in the process of new electronics technology development in the next five to ten years. The cost of developing new technology generations is soaring along with the price of new fabrication facilities, even as competitive pressure intensifies to bring this new technology to market faster than ever before. To reduce cost and time to market, device simulation must become a more fundamental, indeed dominant, part of the technology development cycle. In order to produce these benefits, simulation accuracy must improve markedly. At the same time, device physics will become more complex, with the rapid increase in various small-geometry and quantum effects. This work describes both an approach to device simulator development and a physical model which advance the effort to meet the tremendous electronic device simulation challenge described above. The device simulation approach is to specify the physical model at a high level to a general-purpose (but highly efficient) partial differential equation solver (in this case PROPHET, developed by Lucent Technologies), which then simulates the model in 1-D, 2-D, or 3-D for a specified device and test regime. This approach allows for the rapid investigation of a wide range of device models and effects, which is certainly essential for device simulation to catch up with, and then stay ahead of, electronic device technology of the present and future. The physical device model used in this work is the density-gradient (DG) quantum correction to the drift-diffusion model [Ancona, Phys. Rev. B 35(5), 7959 (1987)]. This model adds tunneling and quantum smoothing of carrier density profiles to the drift-diffusion model. We used the DG model in 1-D and 2-D (for the first time) to simulate both bipolar and unipolar devices. 
Simulations of heavily-doped, short-base diodes indicated that the DG quantum corrections do not have a large effect on the I-V characteristics of electronic devices without heterojunctions. On the other hand, ultra-small MOSFETs certainly exhibit important quantum effects that the DG model will include: quantum repulsion of the inversion and gate charges from the oxide interfaces, and quantum tunneling through thin gate oxides. We present initial results of 2-D DG simulations of ultra-small MOSFETs. Subtle but important issues involving the specification of the model, boundary conditions, and interface constraints for DG simulation of MOSFETs will also be illuminated.
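As a hedged sketch of the physical model named above (sign conventions and the degeneracy factor vary between formulations; Ancona's papers give the exact statement), the density-gradient correction can be written as an extra term, driven by the Laplacian of the square root of the carrier density, in the electron quasi-Fermi potential of the drift-diffusion current equation; setting the coefficient b_n to zero recovers classical drift-diffusion:

```latex
\mathbf{J}_n = q\,\mu_n\, n\,\nabla \varphi_n,
\qquad
\varphi_n = \phi - \frac{kT}{q}\ln\frac{n}{n_i}
          + 2\,b_n\,\frac{\nabla^2\sqrt{n}}{\sqrt{n}},
\qquad
b_n = \frac{\hbar^2}{12\,q\,m_n^{*}} .
```

The ∇²√n term is what produces the quantum smoothing of carrier profiles and the repulsion of charge from oxide interfaces described in the abstract.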
Constructing a multidimensional free energy surface like a spider weaving a web.
Chen, Changjun
2017-10-15
The complete free energy surface in collective variable space provides important information about the reaction mechanisms of a molecule, but sufficient sampling of that space is not easy: the space expands quickly with the number of collective variables. To solve the problem, many methods use artificial biasing potentials to flatten out the original free energy surface of the molecule during the simulation. Their performance is sensitive to the definition of the biasing potential: a fast-growing biasing potential accelerates sampling but decreases the accuracy of the free energy result, while a slow-growing biasing potential gives an optimized result but needs more simulation time. In this article, we propose an alternative method. It adds the biasing potential to a representative point of the molecule in collective variable space to improve conformational sampling, and the free energy surface is calculated from the free energy gradient in the constrained simulation, not given by the negative of the biasing potential as in previous methods. The presented method therefore does not require the biasing potential to remove all the barriers and basins on the free energy surface exactly. Practical applications show that the method is able to produce accurate free energy surfaces for different molecules in a short time, and the free energy errors remain small for a variety of biasing potentials. © 2017 Wiley Periodicals, Inc.
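As an illustrative toy of the paper's central idea (computing the surface from the free energy gradient rather than from the negative of the bias), the sketch below reconstructs a 1-D double-well "free energy surface" by integrating a mean force. The analytic potential and the direct gradient evaluation are stand-ins for the constrained-simulation averages the method actually uses:

```python
import numpy as np

def potential(x):
    # toy double-well "free energy surface" with minima at x = +/-1
    return (x**2 - 1.0)**2

def mean_force(x, h=1e-5):
    # in a real constrained simulation this would be the time-averaged
    # constraint force at the representative point; here we evaluate
    # the gradient of the toy potential directly
    return -(potential(x + h) - potential(x - h)) / (2 * h)

xs = np.linspace(-1.5, 1.5, 61)
forces = np.array([mean_force(x) for x in xs])

# reconstruct the surface by trapezoidal integration of the gradient
seg = 0.5 * (forces[1:] + forces[:-1]) * np.diff(xs)
fes = -np.concatenate(([0.0], np.cumsum(seg)))
fes -= fes.min()

ref = potential(xs) - potential(xs).min()
```

The reconstructed `fes` matches the reference surface up to the additive constant removed by the final shift, which is all a free energy surface is defined up to.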
NASA Technical Reports Server (NTRS)
Estes, Samantha; Parker, Nelson C. (Technical Monitor)
2001-01-01
Virtual reality and simulation applications are becoming widespread in human task analysis. These programs have many benefits for the Human Factors Engineering field: creating and using virtual environments for human engineering analyses not only saves money and time, it also promotes user experimentation and increases the quality of analyses. This paper explains the human engineering task analysis performed on the Environmental Control and Life Support System (ECLSS) space station rack and its Distillation Assembly (DA) subsystem using EAI's human modeling simulation software, Jack. When installed on the International Space Station (ISS), ECLSS will provide the life and environment support needed to adequately sustain crew life. The DA is an Orbital Replaceable Unit (ORU) that provides a means of reclaiming wastewater (primarily urine from the flight crew and experimental animals). Jack was used to create a model of the weightless environment of the ISS Node 3, where the ECLSS is housed. Computer-aided drawings of the ECLSS rack and DA system were also brought into the environment. Anthropometric models of a 95th percentile male and a 5th percentile female were used to examine the human interfaces encountered during various ECLSS and DA tasks. The results of the task analyses were used to suggest modifications to hardware and crew task procedures to improve accessibility, conserve crew time, and add convenience for the crew. This paper will address some of those suggested modifications and the method of presenting final analyses for requirements verification.
Hierarchical Simulation to Assess Hardware and Software Dependability
NASA Technical Reports Server (NTRS)
Ries, Gregory Lawrence
1997-01-01
This thesis presents a method for conducting hierarchical simulations to assess system hardware and software dependability. The method is intended to model embedded microprocessor systems. A key contribution of the thesis is the idea of using fault dictionaries to propagate fault effects upward from the level of abstraction where a fault model is assumed to the system level where the ultimate impact of the fault is observed. A second important contribution is the analysis of the software behavior under faults as well as the hardware behavior. The simulation method is demonstrated and validated in four case studies analyzing Myrinet, a commercial, high-speed networking system. One key result from the case studies shows that the simulation method predicts the same fault impact 87.5% of the time as is obtained by similar fault injections into a real Myrinet system. Reasons for the remaining discrepancy are examined in the thesis. A second key result shows the reduction in the number of simulations needed due to the fault dictionary method. In one case study, 500 faults were injected at the chip level, but only 255 propagated to the system level. Of these 255 faults, 110 shared identical fault dictionary entries at the system level and so did not need to be resimulated. The necessary number of system-level simulations was therefore reduced from 500 to 145. Finally, the case studies show how the simulation method can be used to improve the dependability of the target system. The simulation analysis was used to add recovery to the target software for the most common fault propagation mechanisms that would cause the software to hang. After the modification, the number of hangs was reduced by 60% for fault injections into the real system.
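The fault-dictionary reduction can be sketched as follows. The chip-level outcome function, the masking rate, and the signature format below are all invented placeholders, but the bookkeeping mirrors the thesis's idea that only unique system-level dictionary entries need to be resimulated:

```python
import random

def chip_level_outcome(fault_id, rng):
    # hypothetical chip-level simulation result: None if the fault is
    # masked and never propagates; otherwise a system-visible signature
    # (the fault-dictionary key) that many distinct faults may share
    if rng.random() < 0.5:
        return None
    return (rng.randrange(4), rng.randrange(8))  # e.g. (bus, timing bucket)

rng = random.Random(0)
dictionary = {}
for fault in range(500):                 # 500 chip-level injections
    sig = chip_level_outcome(fault, rng)
    if sig is not None:
        dictionary.setdefault(sig, []).append(fault)

propagated = sum(len(v) for v in dictionary.values())
system_sims_needed = len(dictionary)     # one system-level run per entry
```

Faults sharing a dictionary entry get the system-level result of a single simulation, which is the source of the 500-to-145 reduction reported in the case study.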
Napoli, Pietro Emanuele; Coronella, Franco; Satta, Giovanni Maria; Galantuomo, Maria Silvana; Fossarello, Maurizio
2014-01-01
The aim was to determine the influence of meibomian gland dysfunction (MGD) and aqueous tear deficiency dry eye (ADDE) on the adhesive properties of the central cornea by means of optical coherence tomography (OCT), and to investigate the relationship between corneal adhesiveness and classical tear tests, as well as the reliability of results, in these lacrimal functional unit disorders. Prospective, case-control study. Twenty-eight patients with MGD and 27 patients with ADDE were studied. A group of 32 healthy subjects of similar age and gender distribution served as a control group. The adhesive properties of the anterior corneal surface were measured by OCT in all participants, based on the retention time of an adhesion marker above it. Excellent (≥5 minutes), borderline (3-5 minutes), fair (1-3 minutes) and poor (<1 minute) values of corneal adhesiveness were found, respectively, in 0%, 7.1%, 64.3% and 28.6% of MGD patients, in 0%, 7.4%, 63% and 29.6% of ADDE patients, and in 31.3%, 65.6%, 3.1% and 0% of healthy subjects. The differences in time of corneal adhesiveness between MGD and healthy subjects, as well as between ADDE and healthy subjects, were statistically significant (p<0.001 for both). Conversely, no statistically significant difference between MGD and ADDE was found (p = 0.952). Data analysis revealed a statistically significant correlation between corneal adhesiveness and clinical tests of dry eye, as well as an excellent degree of inter-rater reliability and reproducibility for OCT measurements (p<0.001). ADDE and MGD share similar abnormalities on OCT imaging. Decreased adhesive properties of the anterior cornea were identified as a common feature of MGD and ADDE. This simple OCT approach may provide new clues to the mechanism and evaluation of dry eye syndrome.
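The four-tier grading used above can be written as a small classifier. The handling of the exact boundary values (e.g. whether exactly 3 minutes counts as borderline or fair) is an assumption, since the abstract's ranges leave it ambiguous:

```python
def grade_adhesiveness(retention_min):
    """Grade adhesion-marker retention time (minutes) on the four-tier
    scale from the study: excellent / borderline / fair / poor.

    Boundary values are assigned to the higher tier (an assumption)."""
    if retention_min >= 5:
        return "excellent"
    if retention_min >= 3:
        return "borderline"
    if retention_min >= 1:
        return "fair"
    return "poor"
```

For example, a retention time of 4 minutes grades as borderline, and anything under a minute as poor.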
1977-06-01
[OCR-damaged report front matter] Research simulator report by Raymond O. Forrest, Systems Research and Development Service, Federal Aviation Administration; Ames Research Center, Moffett Field, CA 94035. The legible fragment of the abstract describes a method for obtaining the dynamic stability derivatives of a complete airplane; the method utilizes potential flow theory to compute the surface flow fields and pressures.
Integrated G and C Implementation within IDOS: A Simulink Based Reusable Launch Vehicle Simulation
NASA Technical Reports Server (NTRS)
Fisher, Joseph E.; Bevacqua, Tim; Lawrence, Douglas A.; Zhu, J. Jim; Mahoney, Michael
2003-01-01
The implementation of multiple Integrated Guidance and Control (IG&C) algorithms per flight phase within a vehicle simulation poses a daunting task: coordinating algorithm interactions with the other G&C components and with vehicle subsystems. Currently being developed by Universal Space Lines LLC (USL) under contract from NASA, the Integrated Development and Operations System (IDOS) contains a high-fidelity Simulink vehicle simulation, which provides a means to test cutting-edge G&C technologies. Combining the modularity of this vehicle simulation with Simulink's built-in primitive blocks provides a quick way to implement algorithms. To add discrete-event functionality to the unfinished IDOS simulation, Vehicle Event Manager (VEM) and Integrated Vehicle Health Monitoring (IVHM) subsystems were created to provide discrete-event and pseudo-health-monitoring processing capabilities. Matlab's Stateflow is used to create the IVHM and Event Manager subsystems and to implement a supervisory logic controller, referred to as the Auto-commander, as part of the IG&C to coordinate control system adaptation and reconfiguration and to select the control and guidance algorithms for a given flight phase. Manual creation of the Stateflow charts for all of these subsystems is a tedious and time-consuming process. The Stateflow Auto-builder was developed as a Matlab-based software tool for the automatic generation of a Stateflow chart from information contained in a database. This paper describes the IG&C, VEM and IVHM implementations in IDOS. In addition, this paper describes the Stateflow Auto-builder.
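The Auto-builder's table-to-chart idea can be sketched outside of Matlab as well. The flight phases, events, and database rows below are invented placeholders, and a Python dict of transitions stands in for a generated Stateflow chart:

```python
# rows as they might come out of a configuration database:
# (current state, triggering event, next state) -- hypothetical names
rows = [
    ("Ascent", "MECO", "Coast"),
    ("Coast", "EntryInterface", "Entry"),
    ("Entry", "DrogueDeploy", "Descent"),
]

def build_chart(rows):
    # fold the flat rows into a nested transition table:
    # chart[state][event] -> next state
    chart = {}
    for state, event, nxt in rows:
        chart.setdefault(state, {})[event] = nxt
    return chart

def run(chart, start, events):
    # drive the chart; events with no transition from the current
    # state are ignored, as a supervisory controller typically would
    state = start
    for e in events:
        state = chart.get(state, {}).get(e, state)
    return state

chart = build_chart(rows)
```

Generating the chart from data rather than drawing it by hand is exactly what makes the manual Stateflow process automatable.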
Cyclone Simulation via Action Minimization
NASA Astrophysics Data System (ADS)
Plotkin, D. A.; Weare, J.; Abbot, D. S.
2016-12-01
A postulated impact of climate change is an increase in the intensity of tropical cyclones (TCs). This hypothesized effect results from the fact that TCs are powered by subsaturated boundary layer air picking up water vapor from the surface ocean as it flows inwards towards the eye. This water vapor serves as the energy input for TCs, which can be idealized as heat engines. The inflowing air has nearly the same temperature as the surface ocean; therefore, warming of the surface leads to a warmer atmospheric boundary layer. By the Clausius-Clapeyron relationship, warmer boundary layer air can hold more water vapor and thus produces more energetic storms. Changes in TC intensity are difficult to predict due to the presence of fine structures (e.g. convective structures and rainbands) with length scales of less than 1 km, while general circulation models (GCMs) generally have horizontal resolutions of tens of kilometers. The models are therefore unable to capture these features, which are critical to accurately simulating cyclone structure and intensity. Further, strong TCs are rare events, meaning that long multi-decadal simulations are necessary to generate meaningful statistics about intense TC activity. This adds to the computational expense, making it yet more difficult to generate accurate statistics about long-term changes in TC intensity due to global warming via direct simulation. We take an alternative approach, applying action minimization techniques developed in molecular dynamics to the WRF weather/climate model. We construct artificial model trajectories that lead from quiescent (TC-free) states to TC states, then minimize the deviation of these trajectories from true model dynamics. We can thus create Monte Carlo model ensembles that are biased towards cyclogenesis, which reduces computational expense by limiting time spent in non-TC states.
This allows for: 1) selective interrogation of model states with TCs; 2) finding the likeliest paths for transitions between TC-free and TC states; and 3) an increase in horizontal resolution due to computational savings achieved by reducing time spent simulating TC-free states. This increase in resolution, coupled with a decrease in simulation time, allows for prediction of the change in TC frequency and intensity distributions resulting from climate change.
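A minimal sketch of the action-minimization step, under strong simplifying assumptions: a one-variable linear map stands in for the WRF model, the endpoints are pinned at a quiescent state and a "TC" state, and plain gradient descent minimizes the summed squared deviation of the path from the model dynamics:

```python
import numpy as np

def step(x):
    # hypothetical one-step model dynamics (a stand-in for WRF);
    # 0 is the stable quiescent fixed point, so a transition to 1
    # requires deviations from the dynamics -- the "action"
    return 0.9 * x

T = 20
x = np.linspace(0.0, 1.0, T + 1)  # initial guess: straight-line path

def action(path):
    # total squared deviation of the path from the true dynamics
    return float(np.sum((path[1:] - step(path[:-1]))**2))

initial_action = action(x)
lr = 0.2
for _ in range(2000):
    r = x[1:] - step(x[:-1])   # per-step residuals
    g = np.zeros_like(x)
    g[1:] += 2.0 * r           # derivative w.r.t. the later point of each pair
    g[:-1] -= 1.8 * r          # derivative w.r.t. the earlier point (chain rule)
    g[0] = g[-1] = 0.0         # endpoints pinned: quiescent start, "TC" end
    x -= lr * g
final_action = action(x)
```

The minimizer concentrates the unavoidable deviation where the dynamics resist least, which is the 1-D analogue of finding the likeliest cyclogenesis pathway.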
Integrated Flight Path Planning System and Flight Control System for Unmanned Helicopters
Jan, Shau Shiun; Lin, Yu Hsiang
2011-01-01
This paper focuses on the design of an integrated navigation and guidance system for unmanned helicopters. The integrated navigation system comprises two systems: the Flight Path Planning System (FPPS) and the Flight Control System (FCS). The FPPS finds the shortest flight path by the A-Star (A*) algorithm in an adaptive manner for different flight conditions, and the FPPS can add a forbidden zone to stop the unmanned helicopter from crossing over into dangerous areas. In this paper, the FPPS computation time is reduced by the multi-resolution scheme, and the flight path quality is improved by the path smoothing methods. Meanwhile, the FCS includes the fuzzy inference systems (FISs) based on the fuzzy logic. By using expert knowledge and experience to train the FIS, the controller can operate the unmanned helicopter without dynamic models. The integrated system of the FPPS and the FCS is aimed at providing navigation and guidance to the mission destination and it is implemented by coupling the flight simulation software, X-Plane, and the computing software, MATLAB. Simulations are performed and shown in real time three-dimensional animations. Finally, the integrated system is demonstrated to work successfully in controlling the unmanned helicopter to operate in various terrains of a digital elevation model (DEM). PMID:22164029
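A minimal grid-based sketch of the FPPS idea, assuming a 4-connected grid with a Manhattan-distance heuristic; cells marked 1 play the role of forbidden zones the helicopter must not cross (the real system plans over a digital elevation model, not a toy grid):

```python
import heapq

def a_star(grid, start, goal):
    """A* shortest path on a 4-connected grid; 1-cells are forbidden."""
    rows, cols = len(grid), len(grid[0])
    def h(p):  # Manhattan distance: admissible on a 4-connected grid
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    g = {start: 0}
    parent = {start: None}
    pq = [(h(start), start)]
    closed = set()
    while pq:
        _, cur = heapq.heappop(pq)
        if cur in closed:
            continue
        closed.add(cur)
        if cur == goal:                 # reconstruct the path
            path = []
            while cur is not None:
                path.append(cur)
                cur = parent[cur]
            return path[::-1]
        r, c = cur
        for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nb
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g[cur] + 1
                if ng < g.get(nb, float("inf")):
                    g[nb] = ng
                    parent[nb] = cur
                    heapq.heappush(pq, (ng + h(nb), nb))
    return None  # goal walled off by forbidden zones

grid = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],   # a forbidden zone blocking the direct route
    [0, 1, 0, 0],
    [0, 0, 0, 0],
]
path = a_star(grid, (0, 0), (3, 3))
```

Adding a forbidden zone is just marking cells as 1: the planner routes around them or reports the destination unreachable.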
Simulation of Satellite Vibration Test
NASA Astrophysics Data System (ADS)
Bettacchioli, Alain
2014-06-01
During every mechanical qualification test of satellites on a vibrator, we systematically notice beating phenomena that appear every time we cross a mode's frequency. These can lead to an over-qualification of the tested specimen when the beating reaches a maximum and an under-qualification when the beating passes through a minimum. On a satellite, three lateral modes raise this problem in a recurring way: the first structural mode (between 10 and 15 hertz) and the two tank modes (between 35 and 50 hertz). To make progress on this problem, we are developing a simulator based on the identification of the responses of the accelerometers fixed on the satellite and on the shaker slip table. The estimated transfer functions then allow us to reconstruct both the sensor responses and the drive which generated them. For the simulation, we do not select all the sensors but only those on the slip table and those used to limit the input level (notching). We may also add those which were close to generating a notching. To perform its calculations, the simulator reproduces, on the one hand, the unity-amplitude signal (cola) which serves as the frequency reference for the sweep (generally 3 octaves per minute from 5 to 100, or even 150, hertz) and, on the other hand, the vibrator control loop. The drive amplitude is calculated at each cola period, taking into account a compression factor. The control applied through the amplifier to the shaker coil is the product of this amplitude and the cola. The simulated measurements are updated at each sampling period through propagation of the identified model. The superposition of these curves on those supplied by real sensors during the tests allows us to validate the simulation. It therefore seems possible to actively control the beatings with a real-time corrector that uses these identifications.
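The drive-amplitude update described above can be sketched as a per-cola-period multiplicative correction. The update law and the plant gain below are assumptions chosen only to illustrate how a compression factor softens the loop so it does not chase the beating too aggressively:

```python
def update_drive(amp, measured, target, compression=4.0):
    # one cola-period correction of the shaker drive amplitude;
    # the compression factor takes only a fractional step toward
    # the full correction (target / measured)
    return amp * (target / measured) ** (1.0 / compression)

amp = 1.0
plant_gain = 0.5  # hypothetical measured response per unit drive
for _ in range(40):
    measured = plant_gain * amp
    amp = update_drive(amp, measured, target=1.0)
```

With a steady plant the loop settles at the drive level whose response equals the target; a larger compression factor slows convergence but reacts less to transient beating.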
Optimizing smoke and plume rise modeling approaches at local scales
Derek V. Mallia; Adam K. Kochanski; Shawn P. Urbanski; John C. Lin
2018-01-01
Heating from wildfires adds buoyancy to the overlying air, often producing plumes that vertically distribute fire emissions throughout the atmospheric column over the fire. The height of the rising wildfire plume is a complex function of the size of the wildfire, fire heat flux, plume geometry, and atmospheric conditions, which can make simulating plume rises difficult...
Description of the LASSO Alpha 1 Release
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gustafson, William I.; Vogelmann, Andrew M.; Cheng, Xiaoping
The Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Climate Research Facility began a pilot project in May 2015 to design a routine, high-resolution modeling capability to complement ARM's extensive suite of measurements. This modeling capability has been named the Large-Eddy Simulation (LES) ARM Symbiotic Simulation and Observation (LASSO) project. The availability of LES simulations with concurrent observations will serve many purposes. LES helps bridge the scale gap between DOE ARM observations and models, and the use of routine LES adds value to observations. It provides a self-consistent representation of the atmosphere and a dynamical context for the observations. Further, it elucidates unobservable processes and properties. LASSO will generate a simulation library for researchers that enables statistical approaches beyond a single-case mentality. It will also provide the tools necessary for modelers to reproduce the LES and conduct their own sensitivity experiments. Many different uses are envisioned for the combined LASSO LES and observational library. For an observationalist, LASSO can help inform instrument remote-sensing retrievals, conduct Observation System Simulation Experiments (OSSEs), and test the implications of radar scan strategies or flight paths. For a theoretician, LASSO will help calculate estimates of fluxes and co-variability of values, and test relationships without having to run the model oneself. For a modeler, LASSO will help one know ahead of time which days have good forcing, have co-registered observations at high-resolution scales, and have simulation inputs and corresponding outputs to test parameterizations. Further details on the overall LASSO project are available at http://www.arm.gov/science/themes/lasso.
Modular Manufacturing Simulator: Users Manual
NASA Technical Reports Server (NTRS)
1997-01-01
The Modular Manufacturing Simulator (MMS) has been developed for the beginning user of computer simulations. Consequently, the MMS cannot model complex systems that require branching and convergence logic. Once a user becomes more proficient in computer simulation and wants to add more complexity, the user is encouraged to use one of the many available commercial simulation systems. The MMS is based on the SSE5, which was developed in the early 1990s by the University of Alabama in Huntsville (UAH). A recent survey by MSFC indicated that the simulator has been a major contributor to the economic impact of the MSFC technology transfer program. Many manufacturers have requested additional features for the SSE5. Consequently, the following features have been added to the MMS that are not available in the SSE5: it runs under Windows; a print option for both input parameters and output statistics; an operator can be fixed at a station or assigned to a group of stations; and operator movement can be based on a time limit, part limit, or work-in-process (WIP) limit at the next station. The movement options for a moveable operator are: go to the station with the largest WIP; rabbit chase, where the operator moves in a circular sequence between stations; and push/pull, where the operator moves back and forth between stations. This user's manual contains the necessary information for installing the MMS on a PC, a description of the various MMS commands, and the solutions to a number of sample problems using the MMS. Also included at the beginning of this report is a brief discussion of technology transfer.
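The three operator movement options can be sketched as a dispatch function. The station ids and the tie-breaking behavior are assumptions, since the manual's abstract only names the rules:

```python
def next_station(rule, stations, wip, current):
    """Pick a moveable operator's next station.

    rule     -- 'largest_wip', 'rabbit_chase', or 'push_pull'
    stations -- ordered list of station ids the operator covers
    wip      -- dict mapping station id to queued part count
    current  -- the station the operator occupies now
    """
    i = stations.index(current)
    if rule == "largest_wip":
        # go to the station with the largest work-in-process queue
        return max(stations, key=lambda s: wip[s])
    if rule == "rabbit_chase":
        # circular sequence: advance to the next station, wrapping around
        return stations[(i + 1) % len(stations)]
    if rule == "push_pull":
        # back and forth between the two ends of a station pair
        return stations[1] if current == stations[0] else stations[0]
    raise ValueError(f"unknown rule: {rule}")
```

A full simulator would call this each time the operator's time, part, or WIP limit triggers a move.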
Magnetized SASI: its mechanism and possible connection to some QPOs in XRBs
NASA Astrophysics Data System (ADS)
Dhang, Prasun; Sharma, Prateek; Mukhopadhyay, Banibrata
2018-05-01
The presence of a surface at the inner boundary, such as in a neutron star or a white dwarf, allows the existence of a standing shock in steady spherical accretion. The standing shock can become unstable in 2D or 3D; this is called the standing accretion shock instability (SASI). Two mechanisms - advective-acoustic and purely acoustic - have been proposed to explain SASI. Using axisymmetric hydrodynamic and magnetohydrodynamic simulations, we find that the advective-acoustic mechanism better matches the observed oscillation time-scales in our simulations. The global shock oscillations present in the accretion flow can explain many observed high-frequency (≳100 Hz) quasi-periodic oscillations (QPOs) in X-ray binaries. The presence of a moderately strong magnetic field adds more features to the shock oscillation pattern, giving rise to low-frequency modulation in the computed light curve. This low-frequency modulation can be responsible for ~100 Hz QPOs (known as hHz QPOs). We propose that the appearance of hHz QPOs determines the separation of twin-peak QPOs of higher frequencies.
Code of Federal Regulations, 2012 CFR
2012-04-01
... modify the INA grantee's plan to add funds or, if required by Congressional action, to reduce the amount... add, expand, delete, or diminish any service allowable under the regulations in this part. The INA... event that further clarification or modification is required, we may extend the thirty (30) day time...
Code of Federal Regulations, 2014 CFR
2014-04-01
... modify the INA grantee's plan to add funds or, if required by Congressional action, to reduce the amount... add, expand, delete, or diminish any service allowable under the regulations in this part. The INA... event that further clarification or modification is required, we may extend the thirty (30) day time...
Code of Federal Regulations, 2013 CFR
2013-04-01
... modify the INA grantee's plan to add funds or, if required by Congressional action, to reduce the amount... add, expand, delete, or diminish any service allowable under the regulations in this part. The INA... event that further clarification or modification is required, we may extend the thirty (30) day time...
Project DyAdd: Implicit Learning in Adult Dyslexia and ADHD
ERIC Educational Resources Information Center
Laasonen, Marja; Väre, Jenni; Oksanen-Hennah, Henna; Leppämäki, Sami; Tani, Pekka; Harno, Hanna; Hokkanen, Laura; Pothos, Emmanuel; Cleeremans, Axel
2014-01-01
In this study of the project DyAdd, implicit learning was investigated through two paradigms in adults (18-55 years) with dyslexia (n = 36) or with attention deficit/hyperactivity disorder (ADHD, n = 22) and in controls (n = 35). In the serial reaction time (SRT) task, there were no group differences in learning. However, those with ADHD exhibited…
Mediants Make (Number) Sense of Fraction Foibles
ERIC Educational Resources Information Center
McDowell, Eric L.
2016-01-01
By the time they reach middle school, all students have been taught to add fractions. However, not all have "learned" to add fractions. The common mistake in adding fractions is to report that a/b + c/d is equal to (a + c)/(b + d). It is certainly necessary to correct this mistake when a student makes it. However, this occasion also…
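The mistake and its correction are easy to demonstrate with exact rational arithmetic:

```python
from fractions import Fraction

def mediant(a, b, c, d):
    # the common student mistake: add numerators, add denominators
    return Fraction(a + c, b + d)

def correct_sum(a, b, c, d):
    # the true sum over the common denominator bd: (ad + cb) / (bd)
    return Fraction(a * d + c * b, b * d)
```

For 1/2 + 1/3 the mediant gives 2/5 while the true sum is 5/6. The mediant always lies between the two fractions being "added", which is part of why the error can look plausible to students, and why, as the article's title suggests, it is worth examining rather than merely correcting.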
Examination of the Arborsonic Decay Detector for Detecting Bacterial Wetwood in Red Oaks
Zicai Xu; Theodor D. Leininger; James G. Williams; Frank H. Tainter
2000-01-01
The Arborsonic Decay Detector (ADD; Fujikura Europe Limited, Wiltshire, England) was used to measure the time it took an ultrasound wave to cross 280 diameters in red oak trees with varying degrees of bacterial wetwood or heartwood decay. Linear regressions derived from the ADD readings of trees in Mississippi and South Carolina with wetwood and heartwood decay...
40 CFR 63.3920 - What reports must I submit?
Code of Federal Regulations, 2010 CFR
2010-07-01
... limitation (including any periods when emissions bypassed the add-on control device and were diverted to the... month by emission capture systems and add-on control devices using Equations 1 and 1A through 1D of § 63... description of the CPMS. (v) The date of the latest CPMS certification or audit. (vi) The date and time that...
Thermal fluctuation based study of aqueous deficient dry eyes by non-invasive thermal imaging.
Azharuddin, Mohammad; Bera, Sumanta Kr; Datta, Himadri; Dasgupta, Anjan Kr
2014-03-01
In this paper we have studied the thermal fluctuation patterns occurring at the ocular surface of the left and right eyes for aqueous deficient dry eye (ADDE) patients and control subjects by thermal imaging. We conducted our experiment on 42 patients (84 eyes) with aqueous deficient dry eyes and compared them with 36 healthy volunteers (72 eyes) without any history of ocular surface disorder. Schirmer's test, tear break-up time, tear meniscus height and fluorescein staining tests were conducted. Ocular surface temperature was measured using a FLIR thermal camera, and thermal fluctuation in the left and right eyes was calculated and analyzed using MATLAB. The time series containing the sum of squares of the temperature fluctuation on the ocular surface were compared for aqueous deficient dry eye and control subjects. A statistically significant difference between the fluctuation patterns for control and ADDE subjects was observed (p < 0.001 at the 95% confidence interval). Thermal fluctuations in the left and right eyes are significantly correlated in controls but not in ADDE subjects. The possible origin of this correlation in controls, and its absence in ADDE subjects, is discussed in the text. Copyright © 2014 Elsevier Ltd. All rights reserved.
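A sketch of the fluctuation analysis, with synthetic temperature traces in place of the thermal-camera recordings. The noise levels and the shared-source model of the control eyes are assumptions made only to show how the sum-of-squares fluctuation measure and the interocular correlation would be computed:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300  # hypothetical number of thermal frames per recording

# control-like pair: both eyes share a common fluctuation source
common = rng.normal(0, 0.05, n)
left_ctrl = 34.0 + common + rng.normal(0, 0.01, n)
right_ctrl = 34.1 + common + rng.normal(0, 0.01, n)

# ADDE-like pair: fluctuations independent between the two eyes
left_adde = 34.0 + rng.normal(0, 0.05, n)
right_adde = 34.1 + rng.normal(0, 0.05, n)

def fluctuation_power(series):
    # sum of squared deviations from the mean surface temperature
    d = series - series.mean()
    return float(np.sum(d**2))

def interocular_r(a, b):
    # Pearson correlation of the two eyes' fluctuation traces
    return float(np.corrcoef(a - a.mean(), b - b.mean())[0, 1])
```

Under this toy model the control pair correlates strongly while the ADDE-like pair does not, mirroring the paper's qualitative finding.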
Monte Carlo simulation of chemistry following radiolysis with TOPAS-nBio.
Ramos-Méndez, J; Perl, J; Schuemann, J; McNamara, A; Paganetti, H; Faddegon, B
2018-05-17
Simulation of water radiolysis and the subsequent chemistry provides important information on the effect of ionizing radiation on biological material. The Geant4 Monte Carlo toolkit has added chemical processes via the Geant4-DNA project. The TOPAS tool simplifies the modeling of complex radiotherapy applications with Geant4 without requiring advanced computational skills, extending the pool of users. Thus, a new extension to TOPAS, TOPAS-nBio, is under development to facilitate the configuration of track-structure simulations as well as water radiolysis simulations with Geant4-DNA for radiobiological studies. In this work, radiolysis simulations were implemented in TOPAS-nBio. Users may now easily add chemical species and their reactions, and set parameters including branching ratios, dissociation schemes, diffusion coefficients, and reaction rates. In addition, parameters for the chemical stage were re-evaluated and updated from those used by default in Geant4-DNA to improve the accuracy of chemical yields. Simulation results of time-dependent and LET-dependent primary yields G_x (chemical species per 100 eV deposited) produced at neutral pH and 25 °C by short track-segments of charged particles were compared to published measurements. The LET range was 0.05-230 keV µm^-1. The calculated G_x values for electrons satisfied the material balance equation within 0.3%; results were similar for protons, albeit with a long calculation time. A smaller geometry was used to speed up proton and alpha simulations, with an acceptable difference in the balance equation of 1.3%. Available experimental data of time-dependent G-values for [Formula: see text] agreed with simulated results within 7% ± 8% over the entire time range; for [Formula: see text], within 3% ± 4% over the full time range; for H2O2, from 49% ± 7% at the earliest stages to 3% ± 12% at saturation.
For the LET-dependent G_x, the mean ratios to the experimental data were 1.11 ± 0.98, 1.21 ± 1.11, 1.05 ± 0.52, 1.23 ± 0.59 and 1.49 ± 0.63 (1 standard deviation) for [Formula: see text], [Formula: see text], H2, H2O2 and [Formula: see text], respectively. In conclusion, radiolysis and the subsequent chemistry with Geant4-DNA have been successfully incorporated in TOPAS-nBio. Results are in reasonable agreement with published measured and simulated data.
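The G-value bookkeeping itself is just a ratio, which a one-liner makes explicit; the species count and deposited energy in the example are invented for illustration, not taken from the paper:

```python
def g_value(n_species, energy_dep_ev):
    """Primary yield G_x: species produced per 100 eV of deposited energy.

    n_species     -- count of the chemical species of interest
    energy_dep_ev -- total energy deposited in the volume, in eV
    """
    return 100.0 * n_species / energy_dep_ev
```

For example, 2600 molecules of a species produced while depositing 100 keV corresponds to G = 2.6 per 100 eV, the order of magnitude typical of water radiolysis primary yields.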
Software Development: 3D Animations and Creating User Interfaces for Realistic Simulations
NASA Technical Reports Server (NTRS)
Gordillo, Orlando Enrique
2015-01-01
My fall 2015 semester was spent at the Lyndon B. Johnson Space Center working in the Integrated Graphics, Operations, and Analysis Laboratory (IGOAL). My first project was to create a video animation that could tell the story of OMICS. OMICS is a term being used in the field of biomedical science to describe the collective technologies that study biological systems, such as what makes up a cell and how it functions with other systems. In the IGOAL I used a large 23 inch Wacom monitor to draw storyboards, graphics, and line art animations. I used Blender as the 3D environment to sculpt, shape, cut or modify the several scenes and models for the video. A challenge creating this video was to take a term used in biomedical science and describe it in such a way that an 8th grade student can understand. I used a line art style because it would visually set the tone for what we thought was an educational style. In order to get a handle on the perspective and overall feel for the animation without overloading my workspace, I split up the 2 minute animation into several scenes. I used Blender's python scripting capabilities which allowed for the addition of plugins to add or modify tools. The scripts can also directly interact with the objects to create naturalistic patterns or movements. After collecting the rendered scenes, I used Blender's built-in video editing workspace to output the animation. My second project was to write software that emulates a physical system's interface. The interface was to simulate a boat, ROV, and winch system. Simulations are a time and cost effective way to test complicated data and provide training for operators without having to use expensive hardware. We created the virtual controls with 3-D Blender models and 2-D graphics, and then add functionality in C# using the Unity game engine. The Unity engine provides several essential behaviors of a simulator, such as the start and update functions. 
A framework for Unity, which was developed in the lab, provided a way to place the different widgets on the virtual console dock and have them resize correctly based on the window dimensions. My task in this project was to create the controls and visualizations for the data coming in from the simulator for the boat portion of the project. I wrote a class for each control window to handle the functionality of that widget. I implemented 11 widgets that make up the ship portion of the simulator. The members of the lab were each masters of their craft, and I'm glad I had the opportunity to learn from them. I learned to plan strategically so I could finish this project on time, allotting time for storyboarding, development, and refinement. With regard to animating, I learned to use modifiers like the lattice, Boolean, and build deformers. I also learned how to animate with drivers, how to use the dope sheet, and how to use the graph editor. In coding, I learned to limit the chances for bugs by privatizing functions that should be exclusive to their class. I learned how to use the Git repository to commit, stash, and pull the latest build. I learned a bit of everything because I had the chance to see the entire application development process, from the artwork to the implementation.
Mild Normobaric Hypoxia Exposure for Human-Autonomy System Testing
NASA Technical Reports Server (NTRS)
Stephens, Chad L.; Kennedy, Kellie D.; Crook, Brenda L.; Williams, Ralph A.; Schutte, Paul
2017-01-01
An experiment investigated the impact of normobaric hypoxia induction on aircraft pilot performance, specifically evaluating the use of hypoxia as a method to induce mild cognitive impairment and explore human-autonomous systems integration opportunities. Results of this exploratory study show that a simulated altitude of 15,000 feet did not induce cognitive deficits, as indicated by performance on written, computer-based, or simulated flight tasks. However, the subjective data demonstrated increased effort by the human test subject pilots to maintain equivalent performance in a flight simulation task. This study adds to the existing knowledge of performance decrement and pilot workload assessment, with the aim of improving automation support and increasing aviation safety.
Description of the GMAO OSSE for Weather Analysis Software Package: Version 3
NASA Technical Reports Server (NTRS)
Koster, Randal D. (Editor); Errico, Ronald M.; Prive, Nikki C.; Carvalho, David; Sienkiewicz, Meta; El Akkraoui, Amal; Guo, Jing; Todling, Ricardo; McCarty, Will; Putman, William M.;
2017-01-01
The Global Modeling and Assimilation Office (GMAO) at the NASA Goddard Space Flight Center has developed software and products for conducting observing system simulation experiments (OSSEs) for weather analysis applications. Such applications include estimating the potential effects of new observing instruments or data assimilation techniques on improving weather analysis and forecasts. The GMAO software creates simulated observations from nature run (NR) data sets and adds simulated errors to those observations. The algorithms employed are considerably more sophisticated, and add a much greater degree of realism, than those of OSSE systems currently available elsewhere. The algorithms, software design, and validation procedures are described in this document. Instructions for using the software are also provided.
MODFLOW-OWHM v2: The next generation of fully integrated hydrologic simulation software
NASA Astrophysics Data System (ADS)
Boyce, S. E.; Hanson, R. T.; Ferguson, I. M.; Reimann, T.; Henson, W.; Mehl, S.; Leake, S.; Maddock, T.
2016-12-01
The One-Water Hydrologic Flow Model (One-Water) is a MODFLOW-based integrated hydrologic flow model designed for the analysis of a broad range of conjunctive-use and climate-related issues. One-Water fully links the movement and use of groundwater, surface water, and imported water for consumption by agriculture and natural vegetation on the landscape, and for potable and other uses within a supply-and-demand framework. One-Water includes linkages for deformation-, flow-, and head-dependent flows; additional observation and parameter options for higher-order calibrations; and redesigned code for facilitation of self-updating models and faster simulation run times. The next version of One-Water, currently under development, will include a new surface-water operations module that simulates dynamic reservoir operations, a new sustainability analysis package that facilitates the estimation and simulation of reduced storage depletion and captured discharge, a conduit-flow process for karst aquifers and leaky pipe networks, a soil zone process that adds an enhanced infiltration process, interflow, deep percolation, and soil moisture, and a new subsidence and aquifer compaction package. It will also include enhancements to local grid refinement, and additional features to facilitate easier model updates, faster execution, better error messages, and more integration and cross-communication between the traditional MODFLOW packages. By retaining and tracking the water within the hydrosphere, One-Water accounts for "all of the water everywhere and all of the time." This philosophy provides more confidence in the water accounting by the scientific community and provides the public a foundation needed to address wider classes of problems. Ultimately, increasingly complex questions are being asked about water resources, and answering them requires a more complete treatment of conjunctive-use and climate-related issues.
Relation Between the Cell Volume and the Cell Cycle Dynamics in Mammalian cell
NASA Astrophysics Data System (ADS)
Magno, A. C. G.; Oliveira, I. L.; Hauck, J. V. S.
2016-08-01
The main goal of this work is to add and analyze an equation representing cell volume in a dynamical model of the mammalian cell cycle proposed by Gérard and Goldbeter (2011) [1]. Cell division occurs when the cyclin B/Cdk1 complex is totally degraded (Tyson and Novak, 2011) [2] and reaches a minimum value. At this point, the cell divides into two newborn daughter cells, each containing half of the cytoplasmic content of the mother cell. The equations of our base model are valid only if the cell volume, where the reactions occur, is constant. If the cell volume is not constant, that is, if the rate of change of its volume with respect to time is explicitly taken into account in the mathematical model, then the equations of the original model are no longer valid. Therefore, all equations were modified, using the mass conservation principle, to consider a volume that changes with time. Through this approach, the cell volume affects all model variables. Two different dynamic simulation methods were employed: deterministic and stochastic. In the stochastic simulation, the volume affects every model parameter with molar units, whereas in the deterministic one it is incorporated into the differential equations. In the deterministic simulation, the biochemical species may be expressed in concentration units, while in the stochastic simulation such species must be converted to numbers of molecules, which are directly proportional to the cell volume. In an effort to understand the influence of the new equation, a stability analysis was performed, elucidating how the growth factor impacts the stability of the model's limit cycles. In conclusion, a more precise model, in comparison to the base model, was created for the cell cycle, as it now takes into consideration the variation of the cell volume.
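The unit bookkeeping described above (concentrations for the deterministic equations, molecule numbers proportional to volume for the stochastic ones, and halving of volume and content at division) can be sketched as follows; the species name and all numeric values are hypothetical illustrations of the conversion, not the paper's model.

```python
AVOGADRO = 6.022e23  # molecules per mole

def conc_to_molecules(conc_uM, volume_pL):
    """Convert a micromolar concentration to a molecule count for a cell
    volume given in picolitres (stochastic simulations need counts)."""
    moles = (conc_uM * 1e-6) * (volume_pL * 1e-12)   # (mol/L) * L = mol
    return moles * AVOGADRO

def divide(volume_pL, molecules):
    """At division each daughter inherits half the mother's volume and,
    on average, half of each molecular species."""
    return volume_pL / 2.0, {k: n / 2.0 for k, n in molecules.items()}

# Hypothetical example: 0.5 uM of a cyclin-like species in a 2 pL cell
n = conc_to_molecules(0.5, 2.0)                  # ~6.0e5 molecules
vol, counts = divide(2.0, {"cyclinB_Cdk1": n})
```

Because the count scales linearly with volume, any growth of the cell between divisions changes the molecule numbers seen by a stochastic simulator even at constant concentration.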
Advanced Stirling Convertor Testing at NASA Glenn Research Center
NASA Technical Reports Server (NTRS)
Oriti, Salvatore M.; Blaze, Gina M.
2007-01-01
The U.S. Department of Energy (DOE), Lockheed Martin Space Systems (LMSS), Sunpower Inc., and NASA Glenn Research Center (GRC) have been developing an Advanced Stirling Radioisotope Generator (ASRG) for use as a power system on space science and exploration missions. This generator will make use of free-piston Stirling convertors to achieve higher conversion efficiency than currently available alternatives. The ASRG will utilize two Advanced Stirling Convertors (ASCs) to convert thermal energy from a radioisotope heat source to electricity. NASA GRC has initiated several experiments to demonstrate the functionality of the ASC, including in-air extended operation, thermal vacuum extended operation, and ASRG simulation for mobile applications. The in-air and thermal vacuum test articles are intended to provide convertor performance data over an extended operating time. These test articles mimic some features of the ASRG without the requirement of low system mass. Operation in thermal vacuum adds the element of simulating deep space, and this test article is being used to gather convertor performance and thermal data in a relevant environment. The ASRG simulator was designed to incorporate a minimum amount of support equipment, allowing integration onto devices powered directly by the convertors, such as a rover. This paper discusses the design, fabrication, and implementation of these experiments.
Du, Zhuo-Ying; Gao, Xiang; Zhang, Xiao-Luo; Wang, Zhi-Qiu; Tang, Wei-Jun
2010-09-01
In this paper the authors' goal was to evaluate the feasibility and efficacy of a virtual reality (VR) system in preoperative planning for microvascular decompression (MVD) procedures treating idiopathic trigeminal neuralgia and hemifacial spasm. The system's role in surgical simulation and training was also assessed. Between May 2008 and April 2009, the authors used the Dextroscope system to visualize the neurovascular complex and simulate MVD in the cerebellopontine angle in a VR environment in 16 patients (6 patients had trigeminal neuralgia and 10 had hemifacial spasm). Reconstructions were carried out 2-3 days before MVD. Images were printed in a red-blue stereoscopic format for teaching and discussion and were brought into the operating room to be compared with real-time intraoperative findings. The VR environment was a powerful aid for spatial understanding of the neurovascular relationship in MVD for operating surgeons and trainees. Through an initial series of comparison/confirmation experiences, the senior neurosurgeon became accustomed to the system. He could predict intraoperative problems and simulate surgical maneuvering, which increased his confidence in performing the procedure. The Dextroscope system is an easy and rapid method to create a stereoscopic neurovascular model for MVD that is highly concordant with intraoperative findings. It effectively shortens the learning curve and adds to the surgeon's confidence.
Automated evaluation of AIMS images: an approach to minimize evaluation variability
NASA Astrophysics Data System (ADS)
Dürr, Arndt C.; Arndt, Martin; Fiebig, Jan; Weiss, Samuel
2006-05-01
Defect disposition and qualification with stepper-simulating AIMS tools on advanced masks of the 90 nm node and below is key to matching customers' expectations for "defect free" masks, i.e. masks containing only non-printing design variations. Recently available AIMS tools allow for a large degree of automated measurement, enhancing the throughput of masks and hence reducing cycle time: up to 50 images can be recorded per hour. However, this amount of data still has to be evaluated by hand, which is not only time-consuming but also error-prone, and exhibits a variability depending on the person doing the evaluation, which adds to the tool-intrinsic variability and decreases the reliability of the evaluation. In this paper we present the results of a MATLAB-based algorithm that automatically evaluates AIMS images. We investigate its capabilities regarding throughput, reliability, and matching with manual evaluation for a large variety of dark and clear defects, and discuss the limitations of an automated AIMS evaluation algorithm.
Farley-Buneman Instability Effects on the Ionosphere and Thermosphere
NASA Astrophysics Data System (ADS)
Liu, J.; Wang, W.; Oppenheim, M. M.; Dimant, Y. S.; Wiltberger, M. J.; Merkin, V. G.
2016-12-01
We have recently implemented a new module that includes both the anomalous electron heating and the electron-neutral cooling rate correction associated with the Farley-Buneman instability (FBI) in the thermosphere-ionosphere electrodynamics general circulation model (TIEGCM). This implementation provides, for the first time, a modeling capability to describe macroscopic effects of the FBI on the ionosphere and thermosphere in the context of a first-principles, self-consistent model. The added heating sources primarily operate between 100 and 130 km altitude, and their magnitudes often exceed auroral precipitation heating in the TIEGCM. The induced changes in E-region electron temperature in the auroral oval and polar cap are remarkable, with a maximum Te approaching 2200 K, about 4 times larger than in the TIEGCM run without FBI heating. Thermosphere wind and composition changes associated with the FBI will also be investigated. This investigation demonstrates how researchers can add the important effects of the FBI to magnetosphere-ionosphere-thermosphere models and simulators.
Unified tensor model for space-frequency spreading-multiplexing (SFSM) MIMO communication systems
NASA Astrophysics Data System (ADS)
de Almeida, André LF; Favier, Gérard
2013-12-01
This paper presents a unified tensor model for space-frequency spreading-multiplexing (SFSM) multiple-input multiple-output (MIMO) wireless communication systems that combine space- and frequency-domain spreadings, followed by a space-frequency multiplexing. Spreading across space (transmit antennas) and frequency (subcarriers) adds resilience against deep channel fades and provides space and frequency diversities, while orthogonal space-frequency multiplexing enables multi-stream transmission. We adopt a tensor-based formulation for the proposed SFSM MIMO system that incorporates space, frequency, time, and code dimensions by means of the parallel factor model. The developed SFSM tensor model unifies the tensorial formulation of some existing multiple-access/multicarrier MIMO signaling schemes as special cases, while revealing interesting tradeoffs due to combined space, frequency, and time diversities which are of practical relevance for joint symbol-channel-code estimation. The performance of the proposed SFSM MIMO system using either a zero forcing receiver or a semi-blind tensor-based receiver is illustrated by means of computer simulation results under realistic channel and system parameters.
Dynamic Analysis of a Reaction-Diffusion Rumor Propagation Model
NASA Astrophysics Data System (ADS)
Zhao, Hongyong; Zhu, Linhe
2016-06-01
The rapid development of the Internet, especially the emergence of social networks, has led rumor propagation into a new media era. Rumor propagation in social networks has brought new challenges to network security and social stability. This paper, based on partial differential equations (PDEs), proposes a new SIS rumor propagation model that considers the effect of communication between different rumor-infected users on rumor propagation. The stabilities of a no-rumor equilibrium point and a rumor-spreading equilibrium point are discussed by the linearization technique and the method of upper and lower solutions, and the existence of a traveling wave solution is established by the cross-iteration scheme accompanied by the technique of upper and lower solutions and Schauder's fixed point theorem. Furthermore, we add a time delay to rumor propagation and deduce the conditions for Hopf bifurcation and stability switches of the rumor-spreading equilibrium point by taking the time delay as the bifurcation parameter. Finally, numerical simulations are performed to illustrate the theoretical results.
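As a toy illustration of the kind of model described (an SIS-type reaction-diffusion equation for the density of rumor-spreading users), one explicit finite-difference step on a 1-D domain might look like the following; the parameter names and values are illustrative stand-ins, not the paper's equations, and the time delay is omitted.

```python
def step(I, dx, dt, D=0.1, beta=0.5, gamma=0.2):
    """One explicit Euler step of dI/dt = D*I_xx + beta*I*(1-I) - gamma*I
    with zero-flux (reflecting) boundaries; I is the spreader density."""
    n = len(I)
    new = I[:]
    for i in range(n):
        left = I[i - 1] if i > 0 else I[i + 1]     # reflecting boundary
        right = I[i + 1] if i < n - 1 else I[i - 1]
        lap = (left - 2 * I[i] + right) / dx**2
        new[i] = I[i] + dt * (D * lap + beta * I[i] * (1 - I[i]) - gamma * I[i])
    return new

I = [0.0] * 50
I[25] = 0.5          # localised initial rumor
for _ in range(200):
    I = step(I, dx=1.0, dt=0.1)
```

With beta > gamma the spreader density relaxes toward the positive equilibrium 1 - gamma/beta while diffusion spreads it spatially, which is the qualitative behavior the traveling-wave analysis makes rigorous.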
Rosa, Roberto; Veronesi, Paolo; Leonelli, Cristina
2013-09-01
The thermal development of latent fingerprints on paper surfaces is a simple, safe, and chemical-free method, based on the faster heating of the substrate underlying the print residue. Microwave heating is proposed for the first time for the development of latent fingerprints on cellulose-based substrates, in order to add to the thermal development mechanism the further ability to heat the fingerprint residues to a different extent than the substrate, owing to the intrinsic difference in their dielectric properties. Numerical simulation was performed to confirm and highlight the selectivity of microwaves, and preliminary experimental results point out the great potential of this technique, which developed both latent sebaceous-rich and latent eccrine-rich fingerprints on different porous surfaces in less than 30 s with an applied output power of 500 W. Microwaves proved more effective in the development of eccrine-rich residues, aged up to 12 weeks. © 2013 American Academy of Forensic Sciences.
NASA Astrophysics Data System (ADS)
Salman Arafath, Mohammed; Rahman Khan, Khaleel Ur; Sunitha, K. V. N.
2018-01-01
Nowadays most telecommunication standards development organizations are focusing on device-to-device communication so that they can provide proximity-based services and add-on services on top of the available cellular infrastructure. Oppnets and wireless sensor networks play a prominent role here. Routing in these networks is significant in fields such as traffic management and packet delivery, and remains a broad research area with diverse unresolved issues. This paper first focuses on the importance of opportunistic routing and its concept; the focus then shifts to a prime aspect, packet reception ratio, which is one of the key QoS-awareness parameters. The paper discusses two important functions of routing in wireless sensor networks (WSNs): route selection, using a least routing time algorithm (LRTA), and data forwarding, using a clustering technique. Finally, the simulation results reveal that LRTA performs better than the existing system in terms of average packet reception ratio and connectivity.
NASA Astrophysics Data System (ADS)
Noual, A.; Akjouj, A.; Pennec, Y.; Gillet, J.-N.; Djafari-Rouhani, B.
2009-10-01
Numerical simulations, based on a finite-difference-time-domain (FDTD) method, of infrared light propagation for add/drop filtering in two-dimensional (2D) metal-insulator-metal (Ag-SiO2-Ag) resonators are reported to design 2D Y-bent plasmonic waveguides with possible applications in telecommunication wavelength demultiplexing (WDM). First, we study optical transmission and reflection of a nanoscale SiO2 waveguide coupled to a nanocavity of the same insulator located either inside or on the side of a linear waveguide sandwiched between Ag. According to the inside or outside positioning of the nanocavity with respect to the waveguide, the transmission spectrum displays peaks or dips, respectively, which occur at the same central frequency. A fundamental study of the possible cavity modes in the near-infrared frequency band is also given. These filtering properties are then exploited to propose a nanoscale demultiplexer based on a Y-shaped plasmonic waveguide for separation of two different wavelengths, in selection or rejection, from an input broadband signal around 1550 nm. We detail coupling of the 2D add/drop Y connector to two cavities inserted on each of its branches. Selection or rejection of a pair of different wavelengths depends on the inside or outside locations (respectively) of each cavity in the Y plasmonic device.
Temporal rainfall estimation using input data reduction and model inversion
NASA Astrophysics Data System (ADS)
Wright, A. J.; Vrugt, J. A.; Walker, J. P.; Pauwels, V. R. N.
2016-12-01
Floods are devastating natural hazards. To provide accurate, precise, and timely flood forecasts, there is a need to understand the uncertainties associated with temporal rainfall and model parameters. The estimation of temporal rainfall and model parameter distributions from streamflow observations in complex dynamic catchments adds skill to current areal rainfall estimation methods, allows the uncertainty of rainfall input to be considered when estimating model parameters, and provides the ability to estimate rainfall for poorly gauged catchments. Current methods to estimate temporal rainfall distributions from streamflow are unable to adequately explain and invert complex non-linear hydrologic systems. This study uses the Discrete Wavelet Transform (DWT) to reduce rainfall dimensionality for the catchment of Warwick, Queensland, Australia. The reduction of rainfall to DWT coefficients allows the input rainfall time series to be estimated simultaneously with the model parameters. The estimation process is conducted using multi-chain Markov chain Monte Carlo simulation with the DREAM(ZS) algorithm. The use of a likelihood function that considers both rainfall and streamflow error allows model parameter and temporal rainfall distributions to be estimated. Estimation of the wavelet approximation coefficients of lower-order decomposition structures produced the most realistic temporal rainfall distributions. These rainfall estimates were all able to simulate streamflow superior to the results of a traditional calibration approach. It is shown that the choice of wavelet has a considerable impact on the robustness of the inversion. The results demonstrate that streamflow data contain sufficient information to estimate temporal rainfall and model parameter distributions.
The range and variance of rainfall time series that simulate streamflow superior to that of a traditional calibration approach are a demonstration of equifinality. The use of a likelihood function that considers both rainfall and streamflow error, combined with the use of the DWT as a data reduction technique, allows the joint inference of hydrologic model parameters along with rainfall.
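The dimensionality-reduction idea above (representing the rainfall series by a short vector of wavelet approximation coefficients that can be inverted back to a time series) can be sketched with a plain Haar transform; this generic single-level transform and the sample values are only an illustration, not the decomposition or catchment data used in the study.

```python
import math

def haar_analysis(x):
    """One level of the Haar DWT: (approximation, detail) coefficients.
    The approximation vector is half the length of the input series."""
    s = math.sqrt(2.0)
    approx = [(x[2 * i] + x[2 * i + 1]) / s for i in range(len(x) // 2)]
    detail = [(x[2 * i] - x[2 * i + 1]) / s for i in range(len(x) // 2)]
    return approx, detail

def haar_synthesis(approx, detail):
    """Invert one Haar level; passing zeros for `detail` yields the smoothed
    series implied by the approximation coefficients alone."""
    s = math.sqrt(2.0)
    x = []
    for a, d in zip(approx, detail):
        x += [(a + d) / s, (a - d) / s]
    return x

rain = [0.0, 0.0, 3.2, 5.1, 1.0, 0.4, 0.0, 0.0]   # hypothetical mm/h series
a, d = haar_analysis(rain)
smoothed = haar_synthesis(a, [0.0] * len(a))      # 4 coefficients -> 8 values
```

In an inversion, only the short vector `a` would be sampled by the MCMC chain, and `haar_synthesis` would map each proposal back to a full rainfall series to drive the hydrologic model.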
Code of Federal Regulations, 2010 CFR
2010-04-01
... the INA grantee's plan to add funds or, if required by Congressional action, to reduce the amount of funds available for expenditure. (b) The INA grantee may request approval to modify its plan to add... event that further clarification or modification is required, we may extend the thirty (30) day time...
Code of Federal Regulations, 2011 CFR
2011-04-01
... the INA grantee's plan to add funds or, if required by Congressional action, to reduce the amount of funds available for expenditure. (b) The INA grantee may request approval to modify its plan to add... event that further clarification or modification is required, we may extend the thirty (30) day time...
MessyBoard: Lowering the Cost of Communication and Making it More Enjoyable
2005-05-02
Figure 2.9. The MessyBoard main menu ...nized in real time. Users add content to the board by using a menu or by dragging and dropping or cutting and pasting from other applications...initiated.) MessyBoard also allows users to add objects to the space using a menu (Figure 2.9) that appears when the user clicks the right mouse
Cognitive simulators for medical education and training.
Kahol, Kanav; Vankipuram, Mithra; Smith, Marshall L
2009-08-01
Simulators for honing procedural skills (such as surgical skills and central venous catheter placement) have proven to be valuable tools for medical educators and students. While such simulations represent an effective paradigm in surgical education, there is an opportunity to add a layer of cognitive exercises to these basic simulations that can facilitate robust skill learning in residents. This paper describes a controlled methodology, inspired by neuropsychological assessment tasks and embodied cognition, for developing cognitive simulators for laparoscopic surgery. These simulators provide psychomotor skill training and offer the additional challenge of accomplishing cognitive tasks in realistic environments. A generic framework for the design, development, and evaluation of such simulators is described. The presented framework is generalizable and can be applied to different task domains; it is independent of the types of sensors, simulation environment, and feedback mechanisms that the simulators use. A proof of concept of the framework is provided by developing a simulator that adds cognitive variations to a basic psychomotor task. The results of two pilot studies are presented that show the validity of the methodology in providing effective evaluation and learning environments for surgeons.
The Umbra Simulation and Integration Framework Applied to Emergency Response Training
NASA Technical Reports Server (NTRS)
Hamilton, Paul Lawrence; Britain, Robert
2010-01-01
The Mine Emergency Response Interactive Training Simulation (MERITS) is intended to prepare personnel to manage an emergency in an underground coal mine. The creation of an effective training environment required realistic emergent behavior in response to simulation events and trainee interventions, exploratory modification of miner behavior rules, realistic physics, and incorporation of legacy code. It also required the ability to add rich media to the simulation without conflicting with normal desktop security settings. Our Umbra Simulation and Integration Framework facilitated agent-based modeling of miners and rescuers and made it possible to work with subject matter experts to quickly adjust behavior through script editing, rather than through lengthy programming and recompilation. Integration of Umbra code with the WebKit browser engine allowed the use of JavaScript-enabled local web pages for media support. This project greatly extended the capabilities of Umbra in support of training simulations and has implications for simulations that combine human behavior, physics, and rich media.
Linkage analysis of systolic blood pressure: a score statistic and computer implementation
Wang, Kai; Peng, Yingwei
2003-01-01
A genome-wide linkage analysis was conducted on systolic blood pressure using a score statistic. The randomly selected Replicate 34 of the simulated data was used. The score statistic was applied to the sibships derived from the general pedigrees. An add-on R program to GENEHUNTER was developed for this analysis and is freely available. PMID:14975145
PDF added value of a high resolution climate simulation for precipitation
NASA Astrophysics Data System (ADS)
Soares, Pedro M. M.; Cardoso, Rita M.
2015-04-01
General Circulation Models (GCMs) are suitable for studying the global atmospheric system, its evolution, and its response to changes in external forcing, namely increasing emissions of CO2. However, the resolution of GCMs, of the order of 1°, is not sufficient to reproduce finer-scale features of the atmospheric flow related to complex topography, coastal processes, and boundary-layer processes, and higher-resolution models are needed to describe observed weather and climate. The latter are known as Regional Climate Models (RCMs); they are widely used to downscale GCM results for many regions of the globe and are able to capture physically consistent regional and local circulations. Most RCM evaluations rely on comparing results with observations, either from weather station networks or regular gridded datasets, revealing the ability of RCMs to describe local climatic properties and, most of the time, assuming their higher performance in comparison with the forcing GCMs. The additional climatic detail given by RCMs, compared with the results of the driving models, is usually named added value, and its evaluation is still scarce and controversial in the literature. Recently, some studies have proposed different methodologies, for different applications and processes, to characterize the added value of specific RCMs. A number of examples reveal that some RCMs do add value to GCMs in some properties or regions, and also the opposite, highlighting that RCMs may add value to GCM results, but improvements depend on the type of application, model setup, atmospheric property, and location. Precipitation can be characterized by histograms of daily precipitation, also known as probability density functions (PDFs). There are different strategies to evaluate the quality of both GCMs and RCMs in describing the precipitation PDFs when compared to observations.
Here, we present a new method to measure the PDF added value obtained from dynamical downscaling, based on simple PDF skill scores. The measure can assess the full quality of the PDFs and at the same time integrates a flexible way to weight the PDF tails differently. In this study we apply this method to characterize the PDF added value of a high-resolution simulation with the WRF model. Results come from a WRF climate simulation centred on the Iberian Peninsula with two nested grids, a larger one at 27 km and a smaller one at 9 km, forced by ERA-Interim. The observational data used range from rain-gauge precipitation records to regular observational grids of daily precipitation. Two regular gridded precipitation datasets are used: a Portuguese precipitation grid developed at 0.2° × 0.2° from observed rain-gauge daily precipitation, and the ENSEMBLES observational gridded dataset for Europe, which includes daily precipitation values at 0.25°. The analysis shows an important PDF added value from the higher-resolution simulation, regarding both the full PDF and the extremes. The method shows high potential to be applied to other simulation exercises and to evaluate other variables.
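A minimal version of such a PDF skill score, in the spirit of the common histogram-overlap measure, sums the bin-wise minimum of two normalised histograms: it equals 1 for identical PDFs and approaches 0 as they diverge. The bin edges and sample values below are hypothetical, and the paper's flexible tail weighting is omitted.

```python
def pdf_skill(obs, model, edges):
    """Overlap skill score: sum over bins of the minimum of the two
    normalised frequencies of daily precipitation."""
    def hist(data):
        counts = [0] * (len(edges) - 1)
        for v in data:
            for b in range(len(edges) - 1):
                if edges[b] <= v < edges[b + 1]:
                    counts[b] += 1
                    break
        total = sum(counts)
        return [c / total for c in counts]
    p, q = hist(obs), hist(model)
    return sum(min(pi, qi) for pi, qi in zip(p, q))

edges = [0, 1, 5, 10, 20, 50, 200]           # hypothetical mm/day bins
obs   = [0.2, 0.0, 3.0, 12.0, 4.0, 0.5, 25.0, 0.1]
model = [0.1, 0.3, 2.0, 15.0, 6.0, 0.0, 30.0, 0.4]
score = pdf_skill(obs, model, edges)          # in (0, 1]
```

Added value can then be assessed by comparing the score of the high-resolution RCM against that of the driving model over the same observations; a tail-weighted variant would multiply each bin's contribution by a weight before summing.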
Genetic Adaptive Control for PZT Actuators
NASA Technical Reports Server (NTRS)
Kim, Jeongwook; Stover, Shelley K.; Madisetti, Vijay K.
1995-01-01
A piezoelectric transducer (PZT) is capable of providing linear motion if controlled correctly, and could replace traditional heavy and large servo systems based on motors. This paper focuses on a genetic model reference adaptive control technique (GMRAC) for a PZT moving a mirror, where the goal is to keep the mirror velocity constant. Genetic Algorithms (GAs) are an integral part of the GMRAC technique, acting as the search engine for an optimal PID controller. Two methods are suggested to control the actuator in this research: the first is to change the PID parameters, and the other is to add an additional reference input to the system. The simulation results of these two methods are compared. Simulated Annealing (SA) is also used to solve the problem, and the results of the GA and SA approaches are compared; the GA gives the best results. The entire model is designed using the MathWorks Simulink tool.
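The GA-as-search-engine idea can be illustrated with a deliberately small sketch: a genetic algorithm evolving PID gains to minimise the tracking error of a toy first-order plant. The plant model, fitness function, and GA settings are hypothetical stand-ins for the paper's PZT model and GMRAC scheme, not a reproduction of them.

```python
import random

def simulate(kp, ki, kd, ref=1.0, dt=0.01, steps=200):
    """Fitness: integrated squared velocity-tracking error for a toy
    first-order plant v' = -v + u driven by a PID controller."""
    v, integ, prev_err, cost = 0.0, 0.0, ref, 0.0
    for _ in range(steps):
        err = ref - v
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv
        v += dt * (-v + u)
        prev_err = err
        cost += err * err * dt
        if v != v or abs(v) > 1e6:   # diverged: penalise heavily and stop
            return 1e9
    return cost

def evolve(pop_size=30, gens=40, seed=1):
    """Elitist GA: keep the better half, breed children by averaging two
    parents and adding Gaussian mutation, clamped to non-negative gains."""
    rng = random.Random(seed)
    pop = [[rng.uniform(0.0, 5.0) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda g: simulate(*g))
        elite = pop[: pop_size // 2]
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = [(x + y) / 2.0 + rng.gauss(0.0, 0.3) for x, y in zip(a, b)]
            children.append([max(0.0, g) for g in child])
        pop = elite + children
    return min(pop, key=lambda g: simulate(*g))

best_gains = evolve()   # [kp, ki, kd] found by the search
```

Simulated annealing would replace `evolve` with a single candidate perturbed at a decreasing temperature; the fitness function stays the same, which is what makes the two search strategies directly comparable.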
System Engineering Infrastructure Evolution Galileo IOV and the Steps Beyond
NASA Astrophysics Data System (ADS)
Eickhoff, J.; Herpel, H.-J.; Steinle, T.; Birn, R.; Steiner, W.-D.; Eisenmann, H.; Ludwig, T.
2009-05-01
The trend toward increasingly constrained financial budgets in satellite engineering requires permanent optimization of spacecraft system engineering processes and infrastructure. In recent years Astrium has built up a system simulation infrastructure, the "Model-based Development & Verification Environment" (MDVE), which is well known across Europe and is established as Astrium's standard approach for ESA and DLR projects, and now even for the EU/ESA project Galileo IOV. The key feature of the MDVE/FVE approach is to provide an entire spacecraft simulation (with a full-featured OBC simulation) already in early phases, to start OBSW code tests on a simulated spacecraft, and later to add hardware in the loop step by step, up to an entire "Engineering Functional Model (EFM)" or "FlatSat". The subsequent enhancements to this simulator infrastructure with respect to spacecraft design data handling are reported in the following sections.
Immersive volume rendering of blood vessels
NASA Astrophysics Data System (ADS)
Long, Gregory; Kim, Han Suk; Marsden, Alison; Bazilevs, Yuri; Schulze, Jürgen P.
2012-03-01
In this paper, we present a novel method of visualizing flow in blood vessels. Our approach reads unstructured tetrahedral data, resamples it, and uses slice-based 3D texture volume rendering. Due to the sparse structure of blood vessels, we utilize an octree to efficiently store the resampled data by discarding empty regions of the volume. We use animation to convey time series data, a wireframe surface to give structure, and the StarCAVE, a 3D virtual reality environment, to add a fully immersive element to the visualization. Our tool has great value in interdisciplinary work, helping scientists collaborate with clinicians by improving the understanding of blood flow simulations. Full immersion in the flow field allows for a more intuitive understanding of the flow phenomena and can be a great help to medical experts for treatment planning.
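The sparse-storage idea (subdivide the volume, discard empty regions) can be sketched in a few lines. This is a minimal illustration assuming a cubic, power-of-two volume, not the paper's actual data structure:

```python
import numpy as np

def build_octree(vol, origin=(0, 0, 0), min_size=2):
    """Recursively subdivide a cubic volume; empty (all-zero) regions are
    discarded, so only occupied bricks are stored. Returns a nested dict,
    or None for a completely empty region."""
    if not vol.any():
        return None                                   # empty region: store nothing
    n = vol.shape[0]
    if n <= min_size:
        return {"origin": origin, "data": vol.copy()}  # leaf brick with voxel data
    h = n // 2
    children = []
    for dz in (0, h):
        for dy in (0, h):
            for dx in (0, h):
                sub = vol[dz:dz + h, dy:dy + h, dx:dx + h]
                child = build_octree(
                    sub, (origin[0] + dz, origin[1] + dy, origin[2] + dx), min_size)
                if child is not None:
                    children.append(child)
    return {"origin": origin, "children": children}

def count_leaf_voxels(node):
    """Total voxels actually stored (for comparing against the dense size)."""
    if "data" in node:
        return node["data"].size
    return sum(count_leaf_voxels(c) for c in node["children"])
```

For a volume where vessels occupy only a small fraction of the bounding box, the stored-voxel count is far below the dense grid size, which is exactly the saving the paper exploits.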
Polarization of submillimetre lines from interstellar medium
NASA Astrophysics Data System (ADS)
Zhang, Heshou; Yan, Huirong
2018-04-01
Magnetic fields play important roles in many astrophysical processes. However, there is no universal diagnostic for the magnetic fields in the interstellar medium (ISM), and each magnetic tracer has its limitations. Any new detection method is thus valuable. Theoretical studies have shown that submillimetre fine-structure lines are polarized due to atomic alignment by ultraviolet photon-excitation, which opens up a new avenue to probe interstellar magnetic fields. We will, for the first time, perform synthetic observations on the simulated three-dimensional ISM to demonstrate the measurability of the polarization of submillimetre atomic lines. The maximum polarization expected for different absorption and emission lines from various sources, including star-forming regions, is provided. Our results demonstrate that the polarization of submillimetre atomic lines is a powerful magnetic tracer and adds great value to observational studies in submillimetre astronomy.
Punzalan, Florencio Rusty; Kunieda, Yoshitoshi; Amano, Akira
2015-01-01
Clinical and experimental studies involving human hearts can have certain limitations. Methods such as computer simulations can be an important alternative or supplemental tool. Physiological simulation at the tissue or organ level typically involves the handling of partial differential equations (PDEs). Boundary conditions and distributed parameters, such as those used in pharmacokinetics simulation, add to the complexity of the PDE solution. These factors can tailor PDE solutions and their corresponding program code to specific problems. Boundary condition and parameter changes in the customized code are usually error-prone and time-consuming. We propose a general approach for handling PDEs and boundary conditions in computational models using a replacement scheme for discretization. This study is an extension of a program generator that we introduced in a previous publication. The program generator can generate code for multi-cell simulations of cardiac electrophysiology. Improvements to the system allow it to handle simultaneous equations in the biological function model as well as implicit PDE numerical schemes. The replacement scheme involves substituting all partial differential terms with numerical solution equations. Once the model and boundary equations are discretized with the numerical solution scheme, instances of the equations are generated to undergo dependency analysis. The result of the dependency analysis is then used to generate the program code. The resulting program code is in the Java or C programming language. To validate the automatic handling of boundary conditions in the program code generator, we generated simulation code using the FHN, Luo-Rudy 1, and Hund-Rudy cell models and ran cell-to-cell coupling and action potential propagation simulations. One of the simulations is based on a published experiment, and the simulation results are compared with the experimental data.
We conclude that the proposed program code generator can be used to generate code for physiological simulations and provides a tool for studying cardiac electrophysiology. PMID:26356082
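The replacement scheme, substituting partial differential terms with their numerical solution equations, can be illustrated by hand for a 1-D cable-type equation. The plant equation, the mirrored no-flux boundary treatment and the explicit Euler scheme below are illustrative choices, not the generator's actual output:

```python
def step_cable(v, dt, dx, d, ionic_current):
    """One explicit time step of a 1-D cable-type equation
    dv/dt = d * d2v/dx2 - I_ion(v), with the spatial derivative term
    replaced by its finite-difference equation (the replacement scheme).
    Sealed-end (no-flux) boundaries are handled by mirroring neighbors."""
    n = len(v)
    new_v = [0.0] * n
    for i in range(n):
        left = v[i - 1] if i > 0 else v[i + 1]       # mirrored boundary cell
        right = v[i + 1] if i < n - 1 else v[i - 1]  # mirrored boundary cell
        d2v = (left - 2.0 * v[i] + right) / (dx * dx)
        new_v[i] = v[i] + dt * (d * d2v - ionic_current(v[i]))
    return new_v
```

A code generator following the paper's approach would emit equations of exactly this discretized form for each cell, with the boundary variants substituted automatically instead of edited by hand.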
NASA Astrophysics Data System (ADS)
De Lucia, Marco; Kempka, Thomas; Jatnieks, Janis; Kühn, Michael
2017-04-01
Reactive transport simulations - where geochemical reactions are coupled with hydrodynamic transport of reactants - are extremely time consuming and suffer from significant numerical issues. Given the high uncertainties inherently associated with the geochemical models, which also constitute the major computational bottleneck, such computational demands may seem inappropriate and probably constitute the main limitation to their wide application. A promising way to ease and speed up such coupled simulations is to employ statistical surrogates instead of "full-physics" geochemical models [1]. Data-driven surrogates are reduced models trained on a set of pre-calculated "full-physics" simulations, capturing their principal features while being extremely fast to compute. Model reduction of course comes at the price of a precision loss; however, this appears justified in the presence of large uncertainties regarding the parametrization of geochemical processes. This contribution illustrates the integration of surrogates into the flexible simulation framework currently being developed by the authors' research group [2]. The high-level language of choice for obtaining and dealing with surrogate models is R, which profits from state-of-the-art methods for the statistical analysis of large simulation ensembles. A stand-alone advective mass transport module was furthermore developed in order to add such capability to any multiphase finite volume hydrodynamic simulator within the simulation framework. We present 2D and 3D case studies benchmarking the performance of surrogates and "full-physics" chemistry in scenarios pertaining to the assessment of geological subsurface utilization. [1] Jatnieks, J., De Lucia, M., Dransch, D., Sips, M.: "Data-driven surrogate model approach for improving the performance of reactive transport simulations.", Energy Procedia 97, 2016, p. 447-453.
[2] Kempka, T., Nakaten, B., De Lucia, M., Nakaten, N., Otto, C., Pohl, M., Chabab [Tillner], E., Kühn, M.: "Flexible Simulation Framework to Couple Processes in Complex 3D Models for Subsurface Utilization Assessment.", Energy Procedia, 97, 2016 p. 494-501.
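The surrogate idea, replacing a slow full-physics geochemical call with a cheap data-driven fit trained on a pre-calculated ensemble, can be sketched as below. The authors work in R; this sketch uses Python instead, and the "full physics" function, sample design and polynomial degree are hypothetical stand-ins:

```python
import numpy as np

def expensive_geochemistry(c):
    """Hypothetical stand-in for a full-physics geochemical model:
    maps a reactant concentration to an equilibrium product amount."""
    return 1.0 - np.exp(-2.0 * c)

# Offline: run the full model once on a training design (the ensemble).
train_c = np.linspace(0.0, 2.0, 50)
train_y = expensive_geochemistry(train_c)

# Fit a cheap surrogate capturing the principal features of the response.
surrogate = np.poly1d(np.polyfit(train_c, train_y, deg=5))

# Online: the transport loop calls the surrogate instead of the full model,
# trading a small precision loss for a large speed-up.
test_c = np.linspace(0.1, 1.9, 7)
max_err = float(np.max(np.abs(surrogate(test_c) - expensive_geochemistry(test_c))))
```

The residual `max_err` quantifies the precision loss of the reduced model, which, as the abstract argues, can be acceptable relative to the parametric uncertainty of the geochemistry itself.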
Nintedanib with Add-on Pirfenidone in Idiopathic Pulmonary Fibrosis. Results of the INJOURNEY Trial.
Vancheri, Carlo; Kreuter, Michael; Richeldi, Luca; Ryerson, Christopher J; Valeyre, Dominique; Grutters, Jan C; Wiebe, Sabrina; Stansen, Wibke; Quaresma, Manuel; Stowasser, Susanne; Wuyts, Wim A
2018-02-01
Nintedanib and pirfenidone slow the progression of idiopathic pulmonary fibrosis (IPF), but the disease continues to progress. More data are needed on the safety and efficacy of combination therapy with nintedanib and add-on pirfenidone. To investigate safety, tolerability, and pharmacokinetic and exploratory efficacy endpoints in patients treated with nintedanib and add-on pirfenidone versus nintedanib alone. Patients with IPF and FVC greater than or equal to 50% predicted at screening who completed a 4- to 5-week run-in with nintedanib 150 mg twice daily without dose reduction or treatment interruption were randomized to receive nintedanib 150 mg twice daily with add-on pirfenidone (titrated to 801 mg three times daily) or nintedanib 150 mg twice daily alone in an open-label manner for 12 weeks. The primary endpoint was the percentage of patients with on-treatment gastrointestinal adverse events from baseline to Week 12. Analyses were descriptive and exploratory. On-treatment gastrointestinal adverse events were reported in 37 of 53 patients (69.8%) treated with nintedanib with add-on pirfenidone and 27 of 51 patients (52.9%) treated with nintedanib alone. Predose plasma trough concentrations of nintedanib were similar when it was administered alone or with add-on pirfenidone. Mean (SE) changes from baseline in FVC at Week 12 were -13.3 (17.4) ml and -40.9 (31.4) ml in patients treated with nintedanib with add-on pirfenidone (n = 48) and nintedanib alone (n = 44), respectively. Nintedanib with add-on pirfenidone had a manageable safety and tolerability profile in patients with IPF, in line with the adverse event profiles of each drug. These data support further research into combination regimens in the treatment of IPF. Clinical trial registered with www.clinicaltrials.gov (NCT02579603).
A study of dynamic data placement for ATLAS distributed data management
NASA Astrophysics Data System (ADS)
Beermann, T.; Stewart, G. A.; Maettig, P.
2015-12-01
This contribution presents a study on the applicability and usefulness of dynamic data placement methods for data-intensive systems, such as ATLAS distributed data management (DDM). In this system the jobs are sent to the data, so a good distribution of data is important. Ways of forecasting workload patterns are examined, which are then used to redistribute data to achieve a better overall utilisation of computing resources and to reduce the waiting time for jobs before they can run on the grid. The method is based on a tracer infrastructure that is able to monitor and store historical data accesses and which is used to create popularity reports. These reports provide detailed summaries about data accesses in the past, including information about the accessed files, the involved users and the sites. From this past data it is possible to make near-term forecasts of data popularity. This study evaluates simple prediction methods as well as more complex methods such as neural networks. Based on the outcome of the predictions, a redistribution algorithm deletes unused replicas and adds new replicas for potentially popular datasets. Finally, a grid simulator is used to examine the effects of the redistribution. The simulator replays workload on different data distributions while measuring the job waiting time and site usage. The study examines how the average waiting time is affected by the amount of data that is moved, how it differs between the various forecasting methods and how it compares to the optimal data distribution.
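The redistribution step can be sketched with the simplest of the prediction methods the study evaluates, a recent-average forecast. The thresholds and dataset names below are purely illustrative:

```python
def forecast_accesses(history, window=3):
    """Near-term popularity forecast: the mean of the last `window` entries
    of a dataset's access-count history (the simplest predictor, against
    which more complex methods like neural networks can be compared)."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def plan_redistribution(histories, replicas, hot=10.0, cold=1.0):
    """Delete replicas of datasets forecast to be unpopular and add replicas
    for datasets forecast to be hot. Threshold values are illustrative."""
    plan = {}
    for name, hist in histories.items():
        predicted = forecast_accesses(hist)
        current = replicas.get(name, 1)
        if predicted >= hot:
            plan[name] = current + 1            # add a replica for a hot dataset
        elif predicted <= cold and current > 1:
            plan[name] = current - 1            # drop an unused replica
        else:
            plan[name] = current
    return plan
```

A grid simulator in the spirit of the study would then replay the recorded workload against both the original and the planned replica distributions and compare job waiting times.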
State-and-transition simulation models: a framework for forecasting landscape change
Daniel, Colin; Frid, Leonardo; Sleeter, Benjamin M.; Fortin, Marie-Josée
2016-01-01
Summary: A wide range of spatially explicit simulation models have been developed to forecast landscape dynamics, including models for projecting changes in both vegetation and land use. While these models have generally been developed as separate applications, each with a separate purpose and audience, they share many common features. We present a general framework, called a state-and-transition simulation model (STSM), which captures a number of these common features, accompanied by a software product, called ST-Sim, to build and run such models. The STSM method divides a landscape into a set of discrete spatial units and simulates the discrete state of each cell forward as a discrete-time inhomogeneous stochastic process. The method differs from a spatially interacting Markov chain in several important ways, including the ability to add discrete counters such as age and time-since-transition as state variables, to specify one-step transition rates as either probabilities or target areas, and to represent multiple types of transitions between pairs of states. We demonstrate the STSM method using a model of land-use/land-cover (LULC) change for the state of Hawai'i, USA. Processes represented in this example include expansion/contraction of agricultural lands, urbanization, wildfire, shrub encroachment into grassland and harvest of tree plantations; the model also projects shifts in moisture zones due to climate change. Key model output includes projections of the future spatial and temporal distribution of LULC classes and moisture zones across the landscape over the next 50 years. State-and-transition simulation models can be applied to a wide range of landscapes, including questions of both land-use change and vegetation dynamics. Because the method is inherently stochastic, it is well suited for characterizing uncertainty in model projections.
When combined with the ST-Sim software, STSMs offer a simple yet powerful means for developing a wide range of models of landscape dynamics.
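The core of the STSM method, per-cell discrete states evolved as a stochastic process with discrete counters such as time-since-transition, can be sketched as follows. The states and transition probabilities are invented for illustration and are not from the Hawai'i model:

```python
import random

def run_stsm(cells, transitions, steps, seed=0):
    """Minimal state-and-transition simulation sketch: each cell carries a
    discrete state plus an age counter (time since last transition), and
    transitions fire stochastically with per-state one-step probabilities."""
    rng = random.Random(seed)
    for _ in range(steps):
        for cell in cells:                     # cells: [{"state": str, "age": int}]
            fired = False
            for (src, dst, prob) in transitions:
                if cell["state"] == src and rng.random() < prob:
                    cell["state"] = dst
                    cell["age"] = 0            # reset time-since-transition counter
                    fired = True
                    break
            if not fired:
                cell["age"] += 1               # state persists, age accumulates
    return cells

# Illustrative transition set: shrub encroachment versus a reset process.
transitions = [
    ("grassland", "shrubland", 0.10),
    ("shrubland", "grassland", 0.02),
]
```

Running many such stochastic replicates, as ST-Sim does, yields a distribution of future landscapes rather than a single deterministic projection, which is the basis for the uncertainty characterization the abstract highlights.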
Schrank, Simone; Jedinger, Nicole; Wu, Shengqian; Piller, Michael; Roblegg, Eva
2016-07-25
In this work, calcium stearate (CaSt) multi-particulates loaded with codeine phosphate (COP) were developed in an attempt to provide extended release (ER) combined with alcohol dose dumping (ADD) resistance. The pellets were prepared via wet extrusion/spheronization, and ER characteristics were obtained after fluid bed drying at 30°C. Pore blockers (i.e., xanthan, guar gum and TiO2) were integrated to control the uptake of ethanolic media, the CaSt swelling and, consequently, the COP release. While all three pore blockers are insoluble in ethanol, xanthan dissolves in water, guar gum swells and TiO2 does not interact with water. The incorporation of 10 and 15% TiO2 still provided ER characteristics and yielded ADD resistance in up to 40 v% ethanol. The in-vitro data were subjected to PK simulations, which revealed comparable codeine plasma levels even when the medication is used concomitantly with alcoholic beverages. Taken together, the in-vitro and in-silico results demonstrate that the incorporation of appropriate pore blockers presents a promising strategy to provide ADD resistance in multi-particulate systems. Copyright © 2016 Elsevier B.V. All rights reserved.
Merging a Pair of Supermassive Black Holes
NASA Astrophysics Data System (ADS)
Kohler, Susanna
2016-10-01
When galaxies merge, the supermassive black holes (SMBHs) at the galaxies' centers are thought to coalesce, forming a new, larger black hole. But can this merger process take place on timescales short enough that we could actually observe it? Results from a new simulation suggest that it can! When Galaxies Collide: These stills demonstrate the time evolution of the galaxy merger after the beginning of the authors' simulation (starting from z=3.6). The red and blue dots mark the positions of the SMBHs. [Adapted from Khan et al. 2016] At present, it's not well understood how the merger of two SMBHs proceeds from the merger of their host galaxies. What's more, there are concerns about whether the SMBHs can coalesce on reasonable timescales; in many simulations and models, the inspiral of these behemoths stalls out when they are about a parsec apart, in what's known as the final parsec problem. Why are these mergers poorly understood? Modeling them from the initial interactions of the host galaxies all the way down to the final coalescence of their SMBHs in a burst of gravitational waves is notoriously complicated, due to the enormous range of scales and different processes that must be accounted for. But in a recent study, a team of scientists led by Fazeel Khan (Institute of Space Technology in Pakistan) has presented a simulation that successfully manages to track the entire merger, making it the first multi-scale simulation to model the complete evolution of an SMBH binary that forms within a cosmological galaxy merger. Stages of a Simulation: Khan and collaborators tackled the challenges of this simulation by using a multi-tiered approach. Beginning with the output of a cosmological hydrodynamical simulation, the authors select a merger of two typical massive galaxies at z=3.6 and use this as the starting point for their simulation.
They increase the resolution and add in two supermassive black holes, one at the center of each galaxy. They then continue to evolve the galaxies hydrodynamically, simulating the final stages of the galaxy merger. When the separation of the two SMBHs is small enough, the authors extract a spherical region of 5 kpc from around the pair and evolve this as an N-body simulation. Finally, the separation of the SMBHs becomes so small (0.01 pc) that gravitational-wave emission is the dominant loss of energy driving the inspiral. The authors add post-Newtonian terms into the N-body simulation to account for this. Time evolution of the separation between the SMBHs, beginning with the hydrodynamical simulation (blue), then transitioning to the direct N-body calculation (red), and ending with the introduction of post-Newtonian terms (green) to account for gravitational-wave emission. [Adapted from Khan et al. 2016] Successful Coalescence: Khan and collaborators' complex approach allows them to simulate the entire process of the merger and SMBH coalescence, resulting in several key determinations. First, they demonstrate that the SMBHs can coalesce on timescales of only tens of Myr, which is roughly two orders of magnitude smaller than what was typically estimated before. They find that gas dissipation before the merger is instrumental in creating the conditions that allow for this rapid orbital decay. The authors also demonstrate that the gravitational potential of the galaxy merger remnant is triaxial throughout the merger. Khan and collaborators' simulations confirm that this non-spherical potential solves the final parsec problem by sending stars on plunging orbits around the SMBHs. These more distant stars cause the SMBHs to lose angular momentum through dynamical friction and continue their inspiral, even when the stars immediately surrounding the SMBHs have been depleted. This simulation is an important step toward a better understanding of SMBH mergers.
Its outcomes are especially promising for future gravitational-wave campaigns, as the short SMBH coalescence timescales indicate that these mergers could indeed be observable! Citation: Fazeel Mahmood Khan et al 2016 ApJ 828 73. doi:10.3847/0004-637X/828/2/73
Performance Analysis of Distributed Object-Oriented Applications
NASA Technical Reports Server (NTRS)
Schoeffler, James D.
1998-01-01
The purpose of this research was to evaluate the efficiency of a distributed simulation architecture that creates individual modules made self-scheduling through the use of a message-based communication system for requesting input data from another module that is the source of that data. To make the architecture as general as possible, the message-based communication architecture was implemented using standard remote object architectures (Common Object Request Broker Architecture (CORBA) and/or Distributed Component Object Model (DCOM)). A series of experiments was run in which different systems were distributed in a variety of ways across multiple computers and the performance evaluated. The experiments were duplicated in each case so that the overhead due to message communication and data transmission could be separated from the time required to actually perform the computational update of a module each iteration. The software used to distribute the modules across multiple computers was developed in the first year of the current grant and was modified considerably to add a message-based communication scheme supported by the DCOM distributed object architecture. The resulting performance was analyzed using a model created during the first year of this grant which predicts the overhead due to CORBA and DCOM remote procedure calls and includes the effects of data passed to and from the remote objects. A report covering the distributed simulation software and the results of the performance experiments has been submitted separately. That report also discusses possible future work to apply the methodology to dynamically distribute the simulation modules so as to minimize overall computation time.
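The duplicated-experiment design, separating message and data-transmission overhead from the computational update, amounts to a simple additive timing model. The per-call overhead and bandwidth constants below are illustrative assumptions, not measured CORBA/DCOM figures:

```python
def iteration_time(compute_s, calls, payload_bytes,
                   call_overhead_s=0.5e-3, bandwidth_bps=1e8):
    """Overhead model in the spirit of the performance analysis: one
    iteration costs the module's computational update plus, per remote
    request, a fixed remote-procedure-call cost and a payload term.
    The default overhead and bandwidth values are illustrative only."""
    comm = calls * (call_overhead_s + payload_bytes * 8 / bandwidth_bps)
    return compute_s + comm

# Duplicated-experiment idea: run with and without communication so that
# message overhead can be separated from the computational update itself.
with_comm = iteration_time(compute_s=0.010, calls=20, payload_bytes=4096)
without_comm = iteration_time(compute_s=0.010, calls=0, payload_bytes=0)
overhead = with_comm - without_comm
```

Fitting the two constants of this model to timings of the duplicated runs is one way such a predictive overhead model can be calibrated.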
NASA Technical Reports Server (NTRS)
Hakkinen, Sirpa
1999-01-01
Sea surface height (SSH) from altimeter observations from 1992 onward and from modeling results is investigated to determine the modes of variability and the linkages to the state of oceanic circulation in the North Atlantic. First, the altimeter and model-simulated SSH are analyzed using empirical orthogonal function (EOF) analysis. They are found to share a similar leading mode in which the center of action is along the Gulf Stream and North Atlantic Current, with opposite-sign anomalies in the subpolar gyre and in the slope waters along the Eastern Seaboard. The time series of the leading EOF mode from the altimeter data shows that between the winters of 1995 and 1996, SSH over the Gulf Stream decreased by about 12 cm, a change that is reproduced by the model simulation. Based on the relationship in the model simulations between the time series of the SSH EOF1 and meridional heat transport, it is suggested that, associated with this SSH change in 1995-96, the overturning slowed down from its highs in the early 1990s. Furthermore, it is shown that decadal variability in the leading SSH mode originates from the thermal forcing component. This adds confidence to the qualitative relationship between the state of overturning/meridional heat transport and SSH in the limited area described by the EOF1. SSH variability on the eastern side of the North Atlantic basin, outside the western boundary current region, is determined by local and remote (Rossby waves) wind stress curl forcing.
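EOF analysis of an anomaly field like SSH is a principal-component decomposition; a minimal sketch via SVD on a synthetic (time, space) field follows. The synthetic field is invented for illustration:

```python
import numpy as np

def eof_analysis(field, n_modes=1):
    """Empirical orthogonal function analysis of a (time, space) field via
    SVD of the time-mean-removed anomalies: returns the leading spatial
    patterns (EOFs), their principal-component time series, and the
    explained-variance fraction of each retained mode."""
    anomalies = field - field.mean(axis=0)       # remove the time mean
    u, s, vt = np.linalg.svd(anomalies, full_matrices=False)
    variance = s**2 / np.sum(s**2)
    eofs = vt[:n_modes]                           # spatial patterns
    pcs = u[:, :n_modes] * s[:n_modes]            # time series of each mode
    return eofs, pcs, variance[:n_modes]

# Synthetic test field: one dominant standing pattern plus weak noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 4 * np.pi, 120)
pattern = np.sin(np.linspace(0, np.pi, 50))
field = np.outer(np.sin(t), pattern) + 0.05 * rng.standard_normal((120, 50))
eofs, pcs, var = eof_analysis(field)
```

For a field dominated by one standing oscillation, as in the leading Gulf Stream mode described above, the first EOF captures nearly all the variance and its PC time series tracks the oscillation.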
Realistic Simulation for Body Area and Body-To-Body Networks
Alam, Muhammad Mahtab; Ben Hamida, Elyes; Ben Arbia, Dhafer; Maman, Mickael; Mani, Francesco; Denis, Benoit; D’Errico, Raffaele
2016-01-01
In this paper, we present an accurate and realistic simulation for body area networks (BAN) and body-to-body networks (BBN) using deterministic and semi-deterministic approaches. First, in the semi-deterministic approach, a real-time measurement campaign is performed, which is further characterized through statistical analysis. It is able to generate link-correlated and time-varying realistic traces (i.e., with consistent mobility patterns) for on-body and body-to-body shadowing and fading, including body orientations and rotations, by means of stochastic channel models. The full deterministic approach is particularly targeted to enhance IEEE 802.15.6 proposed channel models by introducing space and time variations (i.e., dynamic distances) through biomechanical modeling. In addition, it helps to accurately model the radio link by identifying the link types and corresponding path loss factors for line of sight (LOS) and non-line of sight (NLOS). This approach is particularly important for links that vary over time due to mobility. It is also important to add that the communication and protocol stack, including the physical (PHY), medium access control (MAC) and networking models, is developed for BAN and BBN, and the IEEE 802.15.6 compliance standard is provided as a benchmark for future research works of the community. Finally, the two approaches are compared in terms of the successful packet delivery ratio, packet delay and energy efficiency. The results show that the semi-deterministic approach is the best option; however, for the diversity of the mobility patterns and scenarios applicable, biomechanical modeling and the deterministic approach are better choices. PMID:27104537
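The link-type distinction, different path loss factors for LOS and body-shadowed NLOS links, can be sketched with a standard log-distance model. The exponents, carrier frequency and reference distance below are illustrative, not the values identified in the paper:

```python
import math

def path_loss_db(distance_m, los, f_mhz=2400.0,
                 n_los=2.0, n_nlos=3.8, d0=1.0):
    """Log-distance path loss with a link-type-dependent exponent: LOS and
    NLOS links get different path loss factors, as in deterministic BAN/BBN
    channel modeling. Exponent values here are illustrative assumptions."""
    n = n_los if los else n_nlos
    # Free-space (Friis) loss at the reference distance d0, in dB,
    # for distance in metres and frequency in MHz.
    pl_d0 = 20 * math.log10(d0) + 20 * math.log10(f_mhz) - 27.55
    return pl_d0 + 10 * n * math.log10(distance_m / d0)
```

In a deterministic simulation, the biomechanical model supplies the time-varying `distance_m` and the LOS/NLOS classification per link at each instant, and this function turns them into the instantaneous channel attenuation.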
A new look at ocean ventilation time scales and their uncertainties
NASA Astrophysics Data System (ADS)
Fine, Rana A.; Peacock, Synte; Maltrud, Mathew E.; Bryan, Frank O.
2017-05-01
A suite of eddy-resolving ocean transient tracer model simulations is first compared to observations. Observational and model pCFC-11 ages agree quite well, with the eddy-resolving model adding detail. The CFC ages show that the thermocline is a barrier to interior ocean exchange with the atmosphere on time scales of 45 years, the measurable CFC transient, although there are exceptions. Next, model simulations are used to quantify the effects on tracer ages of the spatial dependence of internal ocean tracer variability due to stirring from eddies, and of biases from nonstationarity of the atmospheric transient when there is mixing. These add to tracer age uncertainties and biases, which are large in frontal boundary regions and small in subtropical gyre interiors. These uncertainties and biases are used to reinterpret observed temporal trends in tracer-derived ventilation time scales taken from observations more than a decade apart, and to assess whether interpretations of changes in tracer ages as being due to changes in ocean ventilation hold water. For the southern hemisphere subtropical gyres, we infer that the rate of ocean ventilation at 26-27.2 σθ increased between the mid-1990s and the decade of the 2000s. However, between the mid-1990s and the decade of the 2010s, there is no significant trend, perhaps except for the South Atlantic. Observed age/AOU/ventilation changes are linked to a combination of natural cycles and climate change, and there is regional variability. Thus, for the future it is not clear how strong or steady in space and time ocean ventilation changes will be.
Modeling and simulation in biomedicine.
Aarts, J.; Möller, D.; van Wijk van Brievingh, R.
1991-01-01
A group of researchers and educators in The Netherlands, Germany and Czechoslovakia have developed and adapted mathematical computer models of phenomena in the field of physiology and biomedicine for use in higher education. The models are graphical and highly interactive, and are all written in TurboPascal or the mathematical simulation language PSI. An educational shell has been developed to launch the models. The shell allows students to interact with the models and teachers to edit the models, to add new models and to monitor the achievements of the students. The models and the shell have been implemented on an MS-DOS personal computer. This paper describes the features of the modeling package and presents the modeling and simulation of the heart muscle as an example. PMID:1807745
OpenMM 7: Rapid development of high performance algorithms for molecular dynamics
Swails, Jason; Zhao, Yutong; Beauchamp, Kyle A.; Wang, Lee-Ping; Stern, Chaya D.; Brooks, Bernard R.; Pande, Vijay S.
2017-01-01
OpenMM is a molecular dynamics simulation toolkit with a unique focus on extensibility. It allows users to easily add new features, including forces with novel functional forms, new integration algorithms, and new simulation protocols. Those features automatically work on all supported hardware types (including both CPUs and GPUs) and perform well on all of them. In many cases they require minimal coding, just a mathematical description of the desired function. They also require no modification to OpenMM itself and can be distributed independently of OpenMM. This makes it an ideal tool for researchers developing new simulation methods, and also allows those new methods to be immediately available to the larger community. PMID:28746339
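OpenMM's "minimal coding, just a mathematical description" idea can be mimicked in a few lines: turn an energy expression into a force by differentiation. This sketch only imitates the concept with numerical differentiation; the real library compiles such expressions in its custom force classes for efficient CPU and GPU execution:

```python
import math

def make_custom_force(expression, **params):
    """Sketch of a force defined only by a mathematical description: build
    an energy function E(r) from an expression string, then obtain the
    force as F(r) = -dE/dr by central-difference numerical differentiation.
    Illustrative only; uses eval, so the expression must be trusted."""
    def energy(r):
        return eval(expression, {"r": r, "math": math, **params})

    def force(r, h=1e-6):
        return -(energy(r + h) - energy(r - h)) / (2 * h)

    return energy, force

# A harmonic bond written as plain text, as a user might supply it.
energy, force = make_custom_force("0.5 * k * (r - r0)**2", k=100.0, r0=1.0)
```

The appeal of the real API is the same as in this toy: the user states the functional form once and never writes the force or its hardware-specific implementation.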
Comparing multiple turbulence restoration algorithms performance on noisy anisoplanatic imagery
NASA Astrophysics Data System (ADS)
Rucci, Michael A.; Hardie, Russell C.; Dapore, Alexander J.
2017-05-01
In this paper, we compare the performance of multiple turbulence mitigation algorithms to restore imagery degraded by atmospheric turbulence and camera noise. In order to quantify and compare algorithm performance, imaging scenes were simulated by applying noise and varying levels of turbulence. For the simulation, a Monte-Carlo wave optics approach is used to simulate the spatially and temporally varying turbulence in an image sequence. A Poisson-Gaussian noise mixture model is then used to add noise to the observed turbulence image set. These degraded image sets are processed with three separate restoration algorithms: Lucky Look imaging, bispectral speckle imaging, and a block matching method with restoration filter. These algorithms were chosen because they incorporate different approaches and processing techniques. The results quantitatively show how well the algorithms are able to restore the simulated degraded imagery.
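The Poisson-Gaussian degradation step can be sketched as follows: signal-dependent shot noise followed by additive Gaussian read noise. The gain and read-noise sigma are illustrative camera parameters, not the values used in the paper.

```python
import random, math

def add_poisson_gaussian_noise(image, gain=1.0, read_sigma=2.0, rng=None):
    """Degrade an image with a Poisson-Gaussian mixture: signal-dependent
    shot noise plus additive Gaussian read noise (illustrative parameters).
    """
    rng = rng or random.Random(0)

    def poisson(lam):
        # Knuth's algorithm; adequate for the modest intensities used here
        l, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= l:
                return k
            k += 1

    return [[gain * poisson(px / gain) + rng.gauss(0.0, read_sigma)
             for px in row] for row in image]

clean = [[50.0] * 8 for _ in range(8)]
noisy = add_poisson_gaussian_noise(clean)
assert len(noisy) == 8 and len(noisy[0]) == 8
```

In the paper's pipeline this step is applied to the turbulence-degraded frames before the restoration algorithms are run.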
Advanced methods for modeling water-levels and estimating drawdowns with SeriesSEE, an Excel add-in
Halford, Keith; Garcia, C. Amanda; Fenelon, Joe; Mirus, Benjamin B.
2012-12-21
Water-level modeling is used for multiple-well aquifer tests to reliably differentiate pumping responses from natural water-level changes in wells, or “environmental fluctuations.” Synthetic water levels are created during water-level modeling and represent the summation of multiple component fluctuations, including those caused by environmental forcing and pumping. Pumping signals are modeled by transforming step-wise pumping records into water-level changes by using superimposed Theis functions. Water-levels can be modeled robustly with this Theis-transform approach because environmental fluctuations and pumping signals are simulated simultaneously. Water-level modeling with Theis transforms has been implemented in the program SeriesSEE, which is a Microsoft® Excel add-in. Moving average, Theis, pneumatic-lag, and gamma functions transform time series of measured values into water-level model components in SeriesSEE. Earth tides and step transforms are additional computed water-level model components. Water-level models are calibrated by minimizing a sum-of-squares objective function where singular value decomposition and Tikhonov regularization stabilize results. Drawdown estimates from a water-level model are the summation of all Theis transforms minus residual differences between synthetic and measured water levels. The accuracy of drawdown estimates is limited primarily by noise in the data sets, not the Theis-transform approach. Drawdowns much smaller than environmental fluctuations have been detected across major fault structures, at distances of more than 1 mile from the pumping well, and with limited pre-pumping and recovery data at sites across the United States. In addition to water-level modeling, utilities exist in SeriesSEE for viewing, cleaning, manipulating, and analyzing time-series data.
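The Theis transform at the core of the drawdown estimation can be sketched as below. The series for W(u) is the standard expansion of the exponential integral, not SeriesSEE's implementation, and the parameter values are illustrative (units assumed consistent, e.g., metres and days).

```python
import math

def theis_W(u, terms=60):
    """Theis well function W(u) = -gamma - ln(u) + sum_{k>=1} (-1)^(k+1) u^k/(k*k!)."""
    gamma = 0.5772156649015329  # Euler-Mascheroni constant
    s, term = 0.0, 1.0
    for k in range(1, terms + 1):
        term *= u / k            # term = u^k / k!
        s += (-1) ** (k + 1) * term / k
    return -gamma - math.log(u) + s

def theis_drawdown(Q, T, S, r, t):
    """Drawdown s = Q/(4*pi*T) * W(u), with u = r^2 * S / (4*T*t)."""
    u = r * r * S / (4.0 * T * t)
    return Q / (4.0 * math.pi * T) * theis_W(u)

# Drawdown grows with time at a fixed radius from the pumping well
assert theis_drawdown(Q=500.0, T=100.0, S=1e-4, r=50.0, t=1.0) < \
       theis_drawdown(Q=500.0, T=100.0, S=1e-4, r=50.0, t=10.0)
```

A step-wise pumping record is handled by superposition: one Theis term per pumping-rate change, summed.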
Geant4 Modifications for Accurate Fission Simulations
NASA Astrophysics Data System (ADS)
Tan, Jiawei; Bendahan, Joseph
Monte Carlo is one of the standard methods for simulating the generation and transport of radiation through matter. The most widely used radiation simulation codes are MCNP and Geant4. The simulation of fission production and transport by MCNP has been thoroughly benchmarked. An increasing number of users prefer Geant4 because of the flexibility of adding features. However, it has been found that Geant4 does not have the proper fission-production cross sections and does not produce the correct fission products. To achieve accurate results for studies of fissionable-material applications, Geant4 was modified to correct these inaccuracies and to add new capabilities. The fission model developed by Lawrence Livermore National Laboratory was integrated into the neutron-fission modeling package. The photofission simulation capability was enabled using the same neutron-fission library under the assumption that nuclei fission in the same way, independent of the excitation source. The modified fission code provides the correct multiplicity of prompt neutrons and gamma rays, and produces delayed gamma rays and neutrons with time and energy dependencies that are consistent with ENDF/B-VII. The delayed neutrons are now directly produced by a custom package that bypasses the fragment cascade model. The modifications were made for the U-235, U-238 and Pu-239 isotopes; however, the new framework allows new isotopes to be added easily. The SLAC nuclear data library is used for simulation of isotopes with an atomic number above 92 because such data are not available in Geant4. Results of the modified Geant4.10.1 package for neutron-fission and photofission prompt and delayed radiation are compared with ENDF/B-VII and with results produced with the original package.
A platform for dynamic simulation and control of movement based on OpenSim and MATLAB.
Mansouri, Misagh; Reinbolt, Jeffrey A
2012-05-11
Numerical simulations play an important role in solving complex engineering problems and have the potential to revolutionize medical decision making and treatment strategies. In this paper, we combine the rapid model-based design, control systems and powerful numerical method strengths of MATLAB/Simulink with the simulation and human movement dynamics strengths of OpenSim by developing a new interface between the two software tools. OpenSim is integrated with Simulink using the MATLAB S-function mechanism, and the interface is demonstrated using both open-loop and closed-loop control systems. While the open-loop system uses MATLAB/Simulink to separately reproduce the OpenSim Forward Dynamics Tool, the closed-loop system adds the unique feature of feedback control to OpenSim, which is necessary for most human movement simulations. An arm model example was successfully used in both open-loop and closed-loop cases. For the open-loop case, the simulation reproduced results from the OpenSim Forward Dynamics Tool with root mean square (RMS) differences of 0.03° for the shoulder elevation angle and 0.06° for the elbow flexion angle. MATLAB's variable step-size integrator reduced the time required to generate the forward dynamic simulation from 7.1 s (OpenSim) to 2.9 s (MATLAB). For the closed-loop case, a proportional-integral-derivative controller was used to successfully balance a pole on the model's hand despite random force disturbances on the pole. The new interface presented here not only integrates the OpenSim and MATLAB/Simulink software tools, but also will allow neuroscientists, physiologists, biomechanists, and physical therapists to adapt and generate new solutions as treatments for musculoskeletal conditions. Copyright © 2012 Elsevier Ltd. All rights reserved.
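A minimal discrete PID controller of the kind used in the closed-loop pole-balancing demonstration can be sketched as follows. The plant (a first-order integrator) and the gains are stand-ins, not the OpenSim arm model or the paper's tuning.

```python
class PID:
    """Minimal discrete PID controller; gains and plant are illustrative."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Regulate a toy first-order plant x' = u toward setpoint 1.0
pid, x, dt = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01), 0.0, 0.01
for _ in range(4000):
    x += pid.step(1.0, x) * dt
assert abs(x - 1.0) < 0.05
```

In the paper's setup, the equivalent controller runs in Simulink while OpenSim supplies the multibody dynamics at each time step.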
A platform for dynamic simulation and control of movement based on OpenSim and MATLAB
Mansouri, Misagh; Reinbolt, Jeffrey A.
2013-01-01
Numerical simulations play an important role in solving complex engineering problems and have the potential to revolutionize medical decision making and treatment strategies. In this paper, we combine the rapid model-based design, control systems and powerful numerical method strengths of MATLAB/Simulink with the simulation and human movement dynamics strengths of OpenSim by developing a new interface between the two software tools. OpenSim is integrated with Simulink using the MATLAB S-function mechanism, and the interface is demonstrated using both open-loop and closed-loop control systems. While the open-loop system uses MATLAB/Simulink to separately reproduce the OpenSim Forward Dynamics Tool, the closed-loop system adds the unique feature of feedback control to OpenSim, which is necessary for most human movement simulations. An arm model example was successfully used in both open-loop and closed-loop cases. For the open-loop case, the simulation reproduced results from the OpenSim Forward Dynamics Tool with root mean square (RMS) differences of 0.03° for the shoulder elevation angle and 0.06° for the elbow flexion angle. MATLAB’s variable step-size integrator reduced the time required to generate the forward dynamic simulation from 7.1 s (OpenSim) to 2.9 s (MATLAB). For the closed-loop case, a proportional–integral–derivative controller was used to successfully balance a pole on the model’s hand despite random force disturbances on the pole. The new interface presented here not only integrates the OpenSim and MATLAB/Simulink software tools, but also will allow neuroscientists, physiologists, biomechanists, and physical therapists to adapt and generate new solutions as treatments for musculoskeletal conditions. PMID:22464351
Milliren, Carly E; Evans, Clare R; Richmond, Tracy K; Dunn, Erin C
2018-06-06
Recent advances in multilevel modeling allow for modeling non-hierarchical levels (e.g., youth in non-nested schools and neighborhoods) using cross-classified multilevel models (CCMM). Current practice is to cluster samples from one context (e.g., schools) and utilize the observations however they are distributed from the second context (e.g., neighborhoods). However, it is unknown whether an uneven distribution of sample size across these contexts leads to incorrect estimates of random effects in CCMMs. Using the school and neighborhood data structure in Add Health, we examined the effect of neighborhood sample size imbalance on the estimation of variance parameters in models predicting BMI. We differentially assigned students from a given school to neighborhoods within that school's catchment area using three scenarios of (im)balance. 1000 random datasets were simulated for each of five combinations of school- and neighborhood-level variance and imbalance scenarios, for a total of 15,000 simulated data sets. For each simulation, we calculated 95% CIs for the variance parameters to determine whether the true simulated variance fell within the interval. Across all simulations, the "true" school and neighborhood variance parameters were estimated 93-96% of the time. Only 5% of models failed to capture neighborhood variance; 6% failed to capture school variance. These results suggest that there is no systematic bias in the ability of CCMM to capture the true variance parameters regardless of the distribution of students across neighborhoods. Ongoing efforts to use CCMM are warranted and can proceed without concern for the sample imbalance across contexts. Copyright © 2018 Elsevier Ltd. All rights reserved.
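The coverage check used in these simulations can be illustrated for a much simpler estimand: a plain mean rather than a CCMM variance component. A nominal 95% confidence interval should capture the true value in roughly 95% of simulated data sets.

```python
import random, math

def ci_coverage(true_mean=0.0, sigma=1.0, n=30, trials=1000, seed=0):
    """Fraction of simulated data sets whose nominal 95% CI for the mean
    captures the true mean (z-interval; illustrative, not the paper's CCMM)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        xs = [rng.gauss(true_mean, sigma) for _ in range(n)]
        m = sum(xs) / n
        sd = math.sqrt(sum((x - m) ** 2 for x in xs) / (n - 1))
        half = 1.96 * sd / math.sqrt(n)
        hits += (m - half <= true_mean <= m + half)
    return hits / trials

cov = ci_coverage()
assert 0.90 <= cov <= 0.98   # close to the nominal 95%
```

The paper applies the same logic to the school and neighborhood variance parameters, finding 93-96% capture rates across imbalance scenarios.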
NASA Technical Reports Server (NTRS)
Yang, Fan Y.; Nelson, Bron; Carlino, Roberto; Perez, Andres D.; Faber, Nicolas; Henze, Chris; Karacahoglu, Arif G.; O'Toole, Conor; Swenson, Jason; Stupl, Jan
2015-01-01
This work provides an efficiency analysis of the LightForce space debris collision avoidance scheme in the current debris environment and describes a simulation approach to assess its impact on the long-term evolution of the space debris environment. LightForce aims to provide just-in-time collision avoidance by utilizing photon pressure from ground-based industrial lasers. These ground stations impart minimal accelerations to increase the miss distance for a predicted conjunction between two objects. In the first part of this paper we present research that investigates the short-term effect of a few systems consisting of 10 kW class lasers directed by 1.5 m diameter telescopes using adaptive optics. The results show that such a network of ground stations could mitigate more than 85 percent of conjunctions and lower the expected number of collisions in Low Earth Orbit (LEO) by an order of magnitude. While these are impressive numbers that indicate LightForce's utility in the short term, the remaining 15 percent of possible collisions include (among others) conjunctions between two massive objects that would add a large amount of debris if they collide. Still, conjunctions between massive objects and smaller objects can be mitigated. Hence we chose to expand the capabilities of the simulation software to investigate the overall effect of a network of LightForce stations on the long-term debris evolution. In the second part of this paper, we present the planned simulation approach for that effort.
Rapinesi, Chiara; Del Casale, Antonio; Scatena, Paola; Kotzalidis, Georgios D; Di Pietro, Simone; Ferri, Vittoria Rachele; Bersani, Francesco Saverio; Brugnoli, Roberto; Raccah, Ruggero Nessim; Zangen, Abraham; Ferracuti, Stefano; Orzi, Francesco; Girardi, Paolo; Sette, Giuliano
2016-06-03
Deep transcranial magnetic stimulation (dTMS) can be an alternative treatment to relieve pain in chronic migraine (CM). The aim of this study was to evaluate the effect of high-frequency dTMS in add-on to standard treatment for CM in patients not responding to effective abortive or preventive drug treatment. We randomized 14 patients with International Classification of Headache Disorders, 3rd Edition (ICHD-3) treatment-resistant CM to add-on dTMS (n=7) or standard abortive or preventive antimigraine treatment (n=7). Alternate-day 10 Hz dTMS, each session consisting of 600 pulses in 10 trains, was delivered to the dorsolateral prefrontal cortex (DLPFC) bilaterally, but with left-hemisphere prevalence, for 12 sessions spread over one month. The add-on dTMS treatment was well tolerated. Patients treated with dTMS showed significant reductions in pain intensity, frequency of attacks, analgesic overuse, and depressive symptoms during treatment and one month later, compared with the month preceding treatment, and at the same time-points compared with the control group. Compared with standard pharmacological treatment alone, add-on high-frequency dTMS of the bilateral DLPFC reduced the frequency and intensity of migraine attacks, drug overuse, and depressive symptoms. This study supports add-on dTMS treatment in treatment-resistant CM. Copyright © 2016. Published by Elsevier Ireland Ltd.
Development of FQ-PCR method to determine the level of ADD1 expression in fatty and lean pigs.
Cui, J X; Chen, W; Zeng, Y Q
2015-10-30
To determine how adipocyte determination and differentiation factor 1 (ADD1), a gene involved in determining pork quality, is regulated in Laiwu and Large White pigs, we used TaqMan fluorescence quantitative real-time polymerase chain reaction (FQ-PCR) to detect differential expression in the longissimus muscle of Laiwu (fatty) and Large White (lean) pigs. In this study, the ADD1 and GAPDH cDNA sequences were cloned using a T-A cloning assay, and the cloned sequences were consistent with those deposited in GenBank. Thus, the target fragment was successfully recombined into the vector, and its integrity was maintained. The standard curve and regression equation were established through the optimized FQ-PCR protocol. The standard curves for porcine ADD1 and GAPDH cDNA were determined, and their linear range extended over seven orders of magnitude. This method was used to quantify ADD1 expression in the longissimus muscle of the two pig breeds and was found to be accurate, sensitive, and convenient. These results provide information on porcine ADD1 mRNA expression and the mechanism of adipocyte differentiation, and could help meet the demands of consumers interested in maintaining health and preventing obesity. Furthermore, they could lead to new approaches in the prevention and clinical treatment of obesity.
Mori, Yutaka; Taniguchi, Yukiko; Miyazaki, Shigeru; Yokoyama, Junichi; Utsunomiya, Kazunori
2013-03-01
In an earlier continuous glucose monitoring (CGM)-based study, we reported that sitagliptin not only reduced 24-h mean glucose levels but also suppressed postprandial glucose increases, thus reducing the range of glycemic fluctuations in type 2 diabetes patients. In this study, we investigated whether sitagliptin might provide similar benefits in type 2 diabetes patients receiving insulin therapy by using CGM. The study included a total of 13 type 2 diabetes patients in whom stable glycemic control had been achieved after admission for glycemic control. Insulin regimens used included long-acting insulin preparations once daily in four patients and biphasic insulin preparations twice daily in nine, with the daily insulin dose being 19.0±12.7 U. During the CGM-based study, the patients were given insulin therapy alone on Days 1 and 2 and were given sitagliptin 50 mg/day as add-on treatment on Days 3-6, with their daily insulin doses maintained. The add-on treatment with sitagliptin led to significant decreases in 24-h mean glucose levels and SDs of 288 glucose levels measured by CGM for 24 h, as well as in the indices for magnitude of glucose variability and proportion of time in hyperglycemia, compared with insulin therapy alone (P<0.01), whereas there was no significant change seen in regard to the proportion of time in hypoglycemia with or without add-on treatment with sitagliptin. This CGM-based study clearly demonstrated that insulin therapy alone, whether with long-acting or biphasic insulin preparations, does not provide adequate glycemic control in type 2 diabetes patients. In contrast, add-on sitagliptin was shown to narrow the range of 24-h glucose fluctuations in these patients, suggesting that add-on treatment with sitagliptin is effective for postprandial glucose control in type 2 diabetes patients receiving insulin therapy.
Li, Feng-Fei; Shen, Yun; Sun, Rui; Zhang, Dan-Feng; Jin, Xing; Zhai, Xiao-Fang; Chen, Mao-Yuan; Su, Xiao-Fei; Wu, Jin-Dan; Ye, Lei; Ma, Jian-Hua
2017-10-01
To investigate whether vildagliptin add-on insulin therapy improves glycemic variations in patients with uncontrolled type 2 diabetes (T2D) compared to patients with placebo therapy. This was a 24-week, single-center, double-blind, placebo-controlled trial. Inadequately controlled T2D patients treated with insulin therapy were recruited between June 2012 and April 2013. The trial included a 2-week screening period and a 24-week randomized period. Subjects were randomly assigned to a vildagliptin add-on insulin therapy group (n = 17) or a matched placebo group (n = 16). Scheduled visits occurred at weeks 4, 8, 12, 16, 20, and 24. Continuous glucose monitoring (CGM) was performed before and at the endpoint of the study. A total of 33 subjects were admitted, with 1 patient withdrawing from the placebo group. After 24 weeks of therapy, HbA1c values were significantly reduced at the endpoint in the vildagliptin add-on group. CGM data showed that patients with vildagliptin add-on therapy had a significantly lower 24-h mean glucose concentration and mean amplitude of glycemic excursion (MAGE). At the endpoint of the study, patients in the vildagliptin add-on group had a significantly lower MAGE and standard deviation compared to the control patients during the nocturnal period (0000-0600). A severe hypoglycemic episode was not observed in either group. Vildagliptin add-on therapy to insulin has the ability to improve glycemic variations, especially during the nocturnal time period, in patients with uncontrolled T2D.
2009-03-01
...homeport, geographic stability for two tours, and compressed work week; homeport, lump-sum SRB, and telecommuting). The Monte Carlo simulation... (geographic stability for 2 tours, and compressed work week). The Add 2 combination includes home port choice, lump-sum SRB, and telecommuting... VALUATION OF NON-MONETARY INCENTIVES: MOTIVATING AND IMPLEMENTING THE COMBINATORIAL RETENTION AUCTION MECHANISM, by Jason Blake Ellis, March 2009
Lectin Enzyme Assay Detection of Viruses, Tissue Culture, and a Mycotoxin Simulant
1988-09-01
...micromix vibrator at 37 °C for 10-30 min. 9. Read color development at 10, 20, and 30 min. Table 5. LEAD Test (Procedure II). 1. Add 0.1 mL of virus... or TC concentrations to 0.1 mL of WGA-peroxidase in a microtiter tray. 2. Mix on a Yankee rotator or micromix vibrator at room temperature for 10 min. 3...
SouthPro : a computer program for managing uneven-aged loblolly pine stands
Benedict Schulte; Joseph Buongiorno; Ching-Rong Lin; Kenneth E. Skog
1998-01-01
SouthPro is a Microsoft Excel add-in program that simulates the management, growth, and yield of uneven-aged loblolly pine stands in the Southern United States. The built-in growth model of this program was calibrated from 991 uneven-aged plots in seven states, covering most growing conditions and sites. Stands are described by the number of trees in 13 size classes...
Memory Subsystem Performance of Programs with Intensive Heap Allocation
1993-12-13
...improves all organizations. However, the improvement in going from one-way to two-way set associativity is much smaller than the... simulating multi-cycle instructions, we cannot determine their exact penalty. [table fragment: Program, Total, Div, Mul, Add, Sub, Div, Min; CW 0.00]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Serne, R. Jeffrey; Lanigan, David C.; Westsik, Joseph H.
This revision of the original report adds two sets of longer-term leach data and provides more discussion and graphics on how to interpret the results from long-term laboratory leach tests. The leach tests were performed at Pacific Northwest National Laboratory (PNNL) for Washington River Protection Solutions (WRPS) to evaluate the release of key constituents from monoliths of Cast Stone prepared with four simulated low-activity waste (LAW) liquid waste streams.
Dai, Jianrong; Que, William
2004-12-07
This paper introduces a method to simultaneously minimize the leaf travel distance and the tongue-and-groove effect for IMRT leaf sequences delivered in segmental mode. The basic idea is to add a sufficiently large number of openings, by cutting or splitting existing openings, for those leaf pairs with fewer openings than the number of segments, so that all leaf pairs have the same number of openings. The cutting positions are optimally determined with a simulated annealing technique called adaptive simulated annealing. The optimization goal is to minimize the weighted sum of the leaf travel distance and the tongue-and-groove effect. Performance was evaluated with 19 beams from three clinical cases: one brain, one head-and-neck and one prostate case. The results show that the method can reduce the leaf travel distance and/or the tongue-and-groove effect; the reduction in leaf travel distance reaches a maximum of about 50% when minimized alone, and the reduction in the tongue-and-groove effect reaches a maximum of about 70% when minimized alone. The maximum reduction in leaf travel distance translates to a 1 to 2 min reduction in treatment delivery time per fraction, depending on leaf speed. If the method is implemented clinically, it could yield significant savings in treatment delivery time and a significant reduction in the wear and tear on MLC mechanics.
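A plain simulated-annealing sketch of the optimization step is shown below. The paper uses the adaptive variant, which additionally tunes its own temperature schedule; here a fixed geometric schedule is used, and the toy quadratic cost merely stands in for the weighted leaf-travel plus tongue-and-groove objective.

```python
import math, random

def simulated_annealing(cost, neighbor, x0, t0=1.0, cooling=0.995,
                        steps=4000, rng=None):
    """Plain simulated annealing (not the adaptive variant used in the paper)."""
    rng = rng or random.Random(42)
    x, c, t = x0, cost(x0), t0
    best_x, best_c = x, c
    for _ in range(steps):
        y = neighbor(x, rng)
        cy = cost(y)
        # Always accept improvements; accept uphill moves with Boltzmann probability
        if cy <= c or rng.random() < math.exp(-(cy - c) / t):
            x, c = y, cy
            if c < best_c:
                best_x, best_c = x, c
        t *= cooling
    return best_x, best_c

# Toy objective standing in for "leaf travel + tongue-and-groove" cost
cost = lambda v: (v - 3.0) ** 2 + 1.0
neighbor = lambda v, rng: v + rng.uniform(-0.5, 0.5)
x, c = simulated_annealing(cost, neighbor, x0=0.0)
assert abs(x - 3.0) < 0.3 and c < 1.1
```

In the actual problem the state is a set of candidate cutting positions, and the cost is the weighted sum of the two delivery penalties.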
Busse, Harald; Riedel, Tim; Garnov, Nikita; Thörmer, Gregor; Kahn, Thomas; Moche, Michael
2015-01-01
Objectives: MRI is of great clinical utility for the guidance of special diagnostic and therapeutic interventions. The majority of such procedures are performed iteratively ("in-and-out") in standard, closed-bore MRI systems, with control imaging inside the bore and needle adjustments outside the bore. The fundamental limitations of such an approach have led to the development of various assistance techniques, from simple guidance tools to advanced navigation systems. The purpose of this work was to thoroughly assess the targeting accuracy, workflow and usability of a clinical add-on navigation solution on 240 simulated biopsies by different medical operators. Methods: Navigation relied on a virtual 3D MRI scene with real-time overlay of the optically tracked biopsy needle. Smart reference markers on a freely adjustable arm ensured proper registration. Twenty-four operators – attending (AR) and resident radiologists (RR) as well as medical students (MS) – performed well-controlled biopsies of 10 embedded model targets (mean diameter: 8.5 mm, insertion depths: 17-76 mm). Targeting accuracy, procedure times and 13 Likert scores on system performance were determined (strong agreement: 5.0). Results: Differences in diagnostic success rates (AR: 93%, RR: 88%, MS: 81%) were not significant. In contrast, between-group differences in biopsy times (AR: 4:15, RR: 4:40, MS: 5:06 min:sec) differed significantly (p<0.01). Mean overall rating was 4.2. The average operator would use the system again (4.8) and stated that the outcome justifies the extra effort (4.4). Lowest agreement was reported for robustness against external perturbations (2.8). Conclusions: The described combination of optical tracking technology with automatic MRI registration appears to be sufficiently accurate for instrument guidance in a standard (closed-bore) MRI environment. High targeting accuracy and usability were demonstrated on a relatively large number of procedures and operators. Between groups with different expertise there were significant differences in procedure times but not in the number of successful biopsies. PMID:26222443
Scripting Module for the Satellite Orbit Analysis Program (SOAP)
NASA Technical Reports Server (NTRS)
Carnright, Robert; Paget, Jim; Coggi, John; Stodden, David
2008-01-01
This add-on module to the SOAP software can perform changes to simulation objects based on the occurrence of specific conditions. This allows the software to encompass simulation response to scheduled or physical events. Users can manipulate objects in the simulation environment under programmatic control. Inputs to the scripting module are Actions, Conditions, and the Script. Actions are arbitrary modifications to constructs such as Platform Objects (i.e., satellites), Sensor Objects (representing instruments or communication links), or Analysis Objects (user-defined logical or numeric variables). Examples of actions include changes to a satellite orbit (Δv), changing a sensor-pointing direction, and the manipulation of a numerical expression. Conditions represent the circumstances under which Actions are performed and can be couched in If-Then-Else logic, such as performing a Δv at specific times or adding to the spacecraft power only when it is illuminated by the Sun. The SOAP script represents the entire set of conditions being considered over a specific time interval. The output of the scripting module is a series of events, which are changes to objects at specific times. As the SOAP simulation clock runs forward, the scheduled events are performed. If the user sets the clock back in time, the events within that interval are automatically undone. The scripting module offers an interface for defining scripts in which the user does not have to remember the vocabulary of various keywords. Actions can be captured by employing the same user interface that is used to define the objects themselves. Conditions can be set to invoke Actions by selecting them from pull-down lists. Users define the script by selecting from the pool of defined conditions. Many space systems have to react to arbitrary events that can occur from scheduling or from the environment. For example, an instrument may cease to draw power when the area that it is tasked to observe is not in view. The contingency of the planetary body blocking the line of sight is a condition upon which the power being drawn is set to zero. It remains at zero until the observation objective is again in view. Computing the total power drawn by the instrument over a period of days or weeks can now take such factors into consideration. What makes the architecture especially powerful is that the scripting module can look ahead and behind in simulation time, and this temporal versatility can be leveraged in displays such as x-y plots. For example, a plot of a satellite's altitude as a function of time can take changes to the orbit into account.
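The Actions/Conditions/Script pattern can be sketched as a tiny rule engine driven by the simulation clock. The class and method names below are illustrative, not SOAP's actual interface; the occultation example mirrors the instrument-power scenario described above.

```python
class Script:
    """Tiny condition->action event engine echoing the Actions/Conditions/
    Script model (names and behavior are illustrative, not SOAP's API)."""
    def __init__(self):
        self.rules = []   # (condition(state) -> bool, action(state), label)
        self.events = []  # (time, label) log of fired events

    def when(self, condition, action, label):
        self.rules.append((condition, action, label))

    def run(self, state, t_start, t_end, dt=1):
        for t in range(t_start, t_end, dt):
            state["t"] = t
            for cond, act, label in self.rules:
                if cond(state):
                    act(state)
                    self.events.append((t, label))
        return state

# Instrument draws power only while its target is in view
script = Script()
script.when(lambda s: not s["in_view"](s["t"]),
            lambda s: s.update(power=0.0),
            "power off: target occulted")
script.when(lambda s: s["in_view"](s["t"]),
            lambda s: s.update(power=25.0),
            "power on: target visible")
state = {"power": 0.0, "in_view": lambda t: t % 10 < 6}  # visible 6 s of every 10
script.run(state, 0, 30)
```

SOAP's module goes further: the event log is time-reversible, so rolling the clock back automatically undoes events in the interval.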
Refinement of Objective Motion Cueing Criteria Investigation Based on Three Flight Tasks
NASA Technical Reports Server (NTRS)
Zaal, Petrus M. T.; Schroeder, Jeffery A.; Chung, William W.
2017-01-01
The objective of this paper is to refine objective motion cueing criteria for commercial transport simulators based on pilots' performance in three flying tasks. Actuator hardware and software algorithms determine motion cues. Today, during a simulator qualification, engineers objectively evaluate only the hardware. Pilot inspectors subjectively assess the overall motion cueing system (i.e., hardware plus software); however, it is acknowledged that pinpointing any deficiencies that might arise to either hardware or software is challenging. ICAO 9625 has an Objective Motion Cueing Test (OMCT), which is now a required test in the FAA's part 60 regulations for new devices, evaluating the software and hardware together; however, it lacks accompanying fidelity criteria. Hosman has documented OMCT results for a statistical sample of eight simulators which is useful, but having validated criteria would be an improvement. In a previous experiment, we developed initial objective motion cueing criteria that this paper is trying to refine. Sinacori suggested simple criteria which are in reasonable agreement with much of the literature. These criteria often necessitate motion displacements greater than most training simulators can provide. While some of the previous work has used transport aircraft in their studies, the majority used fighter aircraft or helicopters. Those that used transport aircraft considered degraded flight characteristics. As a result, earlier criteria lean more towards being sufficient, rather than necessary, criteria for typical transport aircraft training applications. Considering the prevalence of 60-inch, six-legged hexapod training simulators, a relevant question is "what are the necessary criteria that can be used with the ICAO 9625 diagnostic?" This study adds to the literature as follows. First, it examines well-behaved transport aircraft characteristics, but in three challenging tasks. 
The tasks are equivalent to the ones used in our previous experiment, allowing us to directly compare the results and add to the previous data. Second, it uses the Vertical Motion Simulator (VMS), the world's largest vertical displacement simulator. This allows inclusion of relatively large motion conditions, much larger than a typical training simulator can provide. Six new motion configurations were used that explore the motion responses between the initial objective motion cueing boundaries found in a previous experiment and what current hexapod simulators typically provide. Finally, a sufficiently large pilot pool added statistical reliability to the results.
Zanini, Filippo; Carmignato, Simone
2017-01-01
More than 60,000 hip arthroplasties are performed every year in Italy. Although ultra-high-molecular-weight polyethylene remains the most used material for acetabular cups, wear of this material in vivo induces a foreign-body response over time and consequently osteolysis, pain, and the need for implant revision. Furthermore, oxidative wear of the polyethylene provokes several severe failures. To solve these problems, highly cross-linked polyethylene and vitamin-E-stabilized polyethylene were introduced in recent years. In in vitro experiments, various efforts have been made to compare the wear behavior of standard PE and vitamin-E-infused liners. In this study we compared the in vitro wear behavior of two configurations of cross-linked polyethylene (with and without the addition of vitamin E) vs. standard polyethylene acetabular cups. The aim of the present study was to validate a micro X-ray computed tomography technique to assess the wear of different commercially available polyethylene acetabular cups after wear simulation; in particular, the gravimetric method was used to provide reference wear values. The agreement between the two methods is documented in this paper. PMID:28107468
Radiation pressure driving of a dusty atmosphere
NASA Astrophysics Data System (ADS)
Tsang, Benny T.-H.; Milosavljević, Miloš
2015-10-01
Radiation pressure can be dynamically important in star-forming environments such as ultra-luminous infrared and submillimetre galaxies. Whether and how radiation drives turbulence and bulk outflows in star formation sites is still unclear. The uncertainty in part reflects the limitations of direct numerical schemes that are currently used to simulate radiation transfer and radiation-gas coupling. An idealized setup in which radiation is introduced at the base of a dusty atmosphere in a gravitational field has recently become the standard test for radiation-hydrodynamics methods in the context of star formation. To a series of treatments featuring the flux-limited diffusion approximation as well as a short-characteristics tracing and M1 closure for the variable Eddington tensor approximation, we here add another treatment that is based on the implicit Monte Carlo radiation transfer scheme. Consistent with all previous treatments, the atmosphere undergoes Rayleigh-Taylor instability and readjusts to a near-Eddington-limited state. We detect late-time net acceleration in which the turbulent velocity dispersion matches that reported previously with the short-characteristics-based radiation transport closure, the most accurate of the three preceding treatments. Our technical result demonstrates the importance of accurate radiation transfer in simulations of radiative feedback.
Changes in diffusion path length with old age in diffuse optical tomography
NASA Astrophysics Data System (ADS)
Bonnéry, Clément; Leclerc, Paul-Olivier; Desjardins, Michèle; Hoge, Rick; Bherer, Louis; Pouliot, Philippe; Lesage, Frédéric
2012-05-01
Diffuse optical near-infrared imaging is increasingly being used in various neurocognitive contexts where changes in optical signals are interpreted through activation maps. Statistical comparison of different age or clinical groups relies on a relatively homogeneous distribution of measurements across subjects in order to infer changes in brain function. In the context of the increasing use of diffuse optical imaging with older adult populations, changes in tissue properties and anatomy with age add further confounds. Few studies have investigated these changes with age. Duncan et al. measured the so-called diffusion path length factor (DPF) in a large population but did not explore beyond the age of 51, after which physiological and anatomical changes are expected to occur [Pediatr. Res. 39(5), 889-894 (1996)]. With increasing interest in studying the geriatric population with optical imaging, we studied changes in tissue properties in young and old subjects using both magnetic resonance imaging (MRI)-guided Monte Carlo simulations and time-domain diffuse optical imaging. Our results, measured in the frontal cortex, show changes in DPF that are smaller than previously measured by Duncan et al. in a younger population. The origin of these changes is studied using simulations and experimental measures.
Sisniega, A.; Zbijewski, W.; Badal, A.; Kyprianou, I. S.; Stayman, J. W.; Vaquero, J. J.; Siewerdsen, J. H.
2013-01-01
Purpose: The proliferation of cone-beam CT (CBCT) has created interest in performance optimization, with x-ray scatter identified among the main limitations to image quality. CBCT often contends with elevated scatter, but the wide variety of imaging geometry in different CBCT configurations suggests that not all configurations are affected to the same extent. Graphics processing unit (GPU) accelerated Monte Carlo (MC) simulations are employed over a range of imaging geometries to elucidate the factors governing scatter characteristics and the efficacy of antiscatter grids, to guide system design, and to augment development of scatter correction. Methods: A MC x-ray simulator implemented on GPU was accelerated by inclusion of variance reduction techniques (interaction splitting, forced scattering, and forced detection) and extended to include x-ray spectra and analytical models of antiscatter grids and flat-panel detectors. The simulator was applied to small animal (SA), musculoskeletal (MSK) extremity, otolaryngology (Head), breast, interventional C-arm, and on-board (kilovoltage) linear accelerator (Linac) imaging, with an axis-to-detector distance (ADD) of 5, 12, 22, 32, 60, and 50 cm, respectively. Each configuration was modeled with and without an antiscatter grid and with (i) an elliptical cylinder varying 70–280 mm in major axis; and (ii) digital murine and anthropomorphic models. The effects of scatter were evaluated in terms of the angular distribution of scatter incident upon the detector, scatter-to-primary ratio (SPR), artifact magnitude, contrast, contrast-to-noise ratio (CNR), and visual assessment. Results: Variance reduction yielded improvements in MC simulation efficiency ranging from ∼17-fold (for SA CBCT) to ∼35-fold (for Head and C-arm), with the most significant acceleration due to interaction splitting (∼6 to ∼10-fold increase in efficiency).
The benefit of a more extended geometry was evident by virtue of a larger air gap—e.g., for a 16 cm diameter object, the SPR reduced from 1.5 for ADD = 12 cm (MSK geometry) to 1.1 for ADD = 22 cm (Head) and to 0.5 for ADD = 60 cm (C-arm). Grid efficiency was higher for configurations with shorter air gap due to a broader angular distribution of scattered photons—e.g., scatter rejection factor ∼0.8 for MSK geometry versus ∼0.65 for C-arm. Grids reduced cupping for all configurations but had limited improvement on scatter-induced streaks and resulted in a loss of CNR for the SA, Breast, and C-arm. Relative contribution of forward-directed scatter increased with a grid (e.g., Rayleigh scatter fraction increasing from ∼0.15 without a grid to ∼0.25 with a grid for the MSK configuration), resulting in scatter distributions with greater spatial variation (the form of which depended on grid orientation). Conclusions: A fast MC simulator combining GPU acceleration with variance reduction provided a systematic examination of a range of CBCT configurations in relation to scatter, highlighting the magnitude and spatial uniformity of individual scatter components, illustrating tradeoffs in CNR and artifacts and identifying the system geometries for which grids are more beneficial (e.g., MSK) from those in which an extended geometry is the better defense (e.g., C-arm head imaging). Compact geometries with an antiscatter grid challenge assumptions of slowly varying scatter distributions due to increased contribution of Rayleigh scatter. PMID:23635285
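Of the variance-reduction techniques named above, interaction splitting is the easiest to illustrate in isolation. The sketch below is a generic two-step toy (a scatter event followed by detection) with made-up probabilities, not the GPU simulator described in the abstract: at the scatter site a photon is split into K copies of weight 1/K, which leaves the tally unbiased while reducing its variance.

```python
import numpy as np

rng = np.random.default_rng(0)

P_SCATTER = 0.3   # probability a photon scatters in the object (toy value)
P_DETECT = 0.2    # probability a scattered photon reaches the detector (toy value)
N = 20000         # histories per estimator
K = 10            # splitting factor

def analog_history():
    """One analog history: tally 1 only if the photon scatters and is detected."""
    if rng.random() < P_SCATTER and rng.random() < P_DETECT:
        return 1.0
    return 0.0

def split_history():
    """Interaction splitting: at the scatter site the photon is split into
    K copies of weight 1/K, each transported to the detector independently."""
    if rng.random() >= P_SCATTER:
        return 0.0
    hits = np.sum(rng.random(K) < P_DETECT)
    return hits / K

analog = np.array([analog_history() for _ in range(N)])
split = np.array([split_history() for _ in range(N)])

exact = P_SCATTER * P_DETECT
print(f"exact {exact:.4f}  analog {analog.mean():.4f}  split {split.mean():.4f}")
print(f"variance: analog {analog.var():.5f}  split {split.var():.5f}")
```

Both estimators converge to P_SCATTER × P_DETECT, but the per-history variance of the splitting estimator is substantially lower, which is the source of the efficiency gains reported in the abstract.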
A Monte Carlo risk assessment model for acrylamide formation in French fries.
Cummins, Enda; Butler, Francis; Gormley, Ronan; Brunton, Nigel
2009-10-01
The objective of this study is to estimate the likely human exposure to the group 2a carcinogen, acrylamide, from French fries by Irish consumers by developing a quantitative risk assessment model using Monte Carlo simulation techniques. Various stages in the French-fry-making process were modeled from initial potato harvest, storage, and processing procedures. The model was developed in Microsoft Excel with the @Risk add-on package. The model was run for 10,000 iterations using Latin hypercube sampling. The simulated mean acrylamide level in French fries was calculated to be 317 microg/kg. It was found that females are exposed to smaller levels of acrylamide than males (mean exposure of 0.20 microg/kg bw/day and 0.27 microg/kg bw/day, respectively). Although the carcinogenic potency of acrylamide is not well known, the simulated probability of exceeding the average chronic human dietary intake of 1 microg/kg bw/day (as suggested by WHO) was 0.054 and 0.029 for males and females, respectively. A sensitivity analysis highlighted the importance of the selection of appropriate cultivars with known low reducing sugar levels for French fry production. Strict control of cooking conditions (correlation coefficient of 0.42 and 0.35 for frying time and temperature, respectively) and blanching procedures (correlation coefficient -0.25) were also found to be important in ensuring minimal acrylamide formation.
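The sampling machinery described in the abstract (Monte Carlo with Latin hypercube sampling over distributions of process inputs) can be sketched without Excel/@Risk. The distributions, parameters, and exposure formula below are illustrative assumptions, not the study's fitted model; only the structure (stratified uniforms mapped through inverse CDFs, then an exceedance probability against the WHO reference intake of 1 µg/kg bw/day) follows the abstract.

```python
import numpy as np
from scipy.stats import qmc, lognorm, norm

# Toy stand-in for the paper's @Risk model: all parameters are assumptions.
N = 10_000
sampler = qmc.LatinHypercube(d=3, seed=1)
u = sampler.random(N)  # stratified uniforms, one column per model input

acrylamide = lognorm.ppf(u[:, 0], s=0.8, scale=250)            # µg/kg in fries
portion = norm.ppf(u[:, 1], loc=0.1, scale=0.02).clip(min=0)   # kg fries/day
bodyweight = norm.ppf(u[:, 2], loc=70, scale=12).clip(min=30)  # kg

exposure = acrylamide * portion / bodyweight  # µg/kg bw/day
p_exceed = (exposure > 1.0).mean()            # WHO reference intake
print(f"mean exposure {exposure.mean():.3f} µg/kg bw/day, P(>1) = {p_exceed:.3f}")
```

Latin hypercube sampling stratifies each input dimension, so tail regions of the input distributions are covered more evenly than with plain random sampling at the same iteration count.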
A novel role for visual perspective cues in the neural computation of depth.
Kim, HyungGoo R; Angelaki, Dora E; DeAngelis, Gregory C
2015-01-01
As we explore a scene, our eye movements add global patterns of motion to the retinal image, complicating visual motion produced by self-motion or moving objects. Conventionally, it has been assumed that extraretinal signals, such as efference copy of smooth pursuit commands, are required to compensate for the visual consequences of eye rotations. We consider an alternative possibility: namely, that the visual system can infer eye rotations from global patterns of image motion. We visually simulated combinations of eye translation and rotation, including perspective distortions that change dynamically over time. We found that incorporating these 'dynamic perspective' cues allowed the visual system to generate selectivity for depth sign from motion parallax in macaque cortical area MT, a computation that was previously thought to require extraretinal signals regarding eye velocity. Our findings suggest neural mechanisms that analyze global patterns of visual motion to perform computations that require knowledge of eye rotations.
Modelling the growth of triglycine sulphate crystals in Spacelab 3
NASA Technical Reports Server (NTRS)
Yoo, Hak-Do; Wilcox, William R.; Lal, Ravindra; Trolinger, James D.
1988-01-01
Two triglycine sulphate crystals were grown from an aqueous solution in Spacelab 3 aboard a Space Shuttle. Using a diffusion coefficient of 0.00002 sq cm/s, a computerized simulation gave reasonable agreement between experimental and theoretical crystal sizes and interferometric lines in the solution near the growing crystal. This diffusion coefficient is larger than most measured values, possibly due to fluctuating accelerations on the order of 0.001 g (Earth's gravity). The average acceleration was estimated to be less than 0.000001 g. At this level, buoyancy-driven convection is predicted to add approximately 20 percent to the steady-state growth rate. Only very slight distortion of the interferometric lines was observed at the end of a 33 hr run. It is suggested that the time to reach steady-state convective transport may be inversely proportional to g at low g, so that the full effect of convection was not realized in these experiments.
Fringe-projection profilometry based on two-dimensional empirical mode decomposition.
Zheng, Suzhen; Cao, Yiping
2013-11-01
In 3D shape measurement, deformed fringes often contain low-frequency background intensity information and are degraded by random noise, so a new fringe-projection profilometry is proposed based on 2D empirical mode decomposition (2D-EMD). The fringe pattern is first decomposed into a number of intrinsic mode functions by 2D-EMD. Because the method provides partial noise reduction, the background components can be removed to obtain the fundamental components needed to perform the Hilbert transformation that retrieves the phase information. The 2D-EMD can effectively extract the modulation phase of a single-direction fringe and an inclined fringe pattern because it is a fully 2D analysis method and considers the relationship between adjacent lines of a fringe pattern. In addition, as the method does not add noise repeatedly, as ensemble EMD does, the data processing time is shortened. Computer simulations and experiments prove the feasibility of this method.
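The phase-retrieval step named in the abstract (Hilbert transformation of the background-free fundamental component) can be sketched on a single synthetic fringe line. The fringe parameters and the test-object phase below are assumptions for illustration; the 2D-EMD background removal itself is not reproduced, so the known background is simply subtracted.

```python
import numpy as np
from scipy.signal import hilbert

# One line of a fringe pattern: background + phase-modulated cosine.
x = np.linspace(0, 1, 512)
carrier = 2 * np.pi * 40 * x            # 40 fringes across the line
phi = 2.0 * np.sin(2 * np.pi * x)       # assumed test-object phase
fringe = 100 + 50 * np.cos(carrier + phi)

fundamental = fringe - 100              # background-free component
analytic = hilbert(fundamental)         # analytic signal via FFT
wrapped = np.angle(analytic)            # wrapped total phase
recovered = np.unwrap(wrapped) - carrier  # remove the carrier to leave phi

err = np.abs(recovered[32:-32] - phi[32:-32]).max()  # ignore FFT edge effects
print(f"max phase error away from edges: {err:.3f} rad")
```

The Hilbert transform supplies the quadrature component of the fringe, so the wrapped phase falls out of a single `np.angle` call; in the paper this step is applied after 2D-EMD has isolated the fundamental component.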
Influences of rolling method on deformation force in cold roll-beating forming process
NASA Astrophysics Data System (ADS)
Su, Yongxiang; Cui, Fengkui; Liang, Xiaoming; Li, Yan
2018-03-01
In this work, the gear rack was selected as the research object to study the influence of the rolling method on the deformation force. By means of finite element simulation of cold roll-beating forming, the variation of the radial and tangential deformation force was analysed under different rolling methods, both for the complete forming of the rack and for a single roller during the steady state. The results show that for up-beating and down-beating, the radial single-point average force is similar, while the gap in the tangential single-point average force is relatively large. Additionally, the tangential force during up-beating is large, and its direction is opposite to that during down-beating. With up-beating, the deformation force loads fast and unloads slowly; correspondingly, with down-beating, the deformation force loads slowly and unloads fast.
Construction and Utilization of a Beowulf Computing Cluster: A User's Perspective
NASA Technical Reports Server (NTRS)
Woods, Judy L.; West, Jeff S.; Sulyma, Peter R.
2000-01-01
Lockheed Martin Space Operations - Stennis Programs (LMSO) at the John C Stennis Space Center (NASA/SSC) has designed and built a Beowulf computer cluster which is owned by NASA/SSC and operated by LMSO. The design and construction of the cluster are detailed in this paper. The cluster is currently used for Computational Fluid Dynamics (CFD) simulations. The CFD codes in use and their applications are discussed. Examples of some of the work are also presented. Performance benchmark studies have been conducted for the CFD codes being run on the cluster. The results of two of the studies are presented and discussed. The cluster is not currently being utilized to its full potential; therefore, plans are underway to add more capabilities. These include the addition of structural, thermal, fluid, and acoustic Finite Element Analysis codes as well as real-time data acquisition and processing during test operations at NASA/SSC. These plans are discussed as well.
Home care for the disabled elderly: predictors and expected costs.
Coughlin, T A; McBride, T D; Perozek, M; Liu, K
1992-01-01
While interest in publicly funded home care for the disabled elderly is keen, basic policy issues need to be addressed before an appropriate program can be adopted and financed. This article presents findings from a study in which the cost implications of anticipated behavioral responses (for example, caregiver substitution) are estimated. Using simulation techniques, the results demonstrate that anticipated behavioral responses would likely add between $1.8 and $2.7 billion (1990 dollars) to the costs of a public home care program. Results from a variety of cost simulations are presented. The data base for the study was the 1982 National Long-Term Care Survey. PMID:1399652
Study on the characteristics of multi-infeed HVDC
NASA Astrophysics Data System (ADS)
Li, Ming; Song, Xinli; Liu, Wenzhuo; Xiang, Yinxing; Zhao, Shutao; Su, Zhida; Meng, Hang
2017-09-01
China has built more than ten HVDC transmission projects in recent years [1]. East China has now formed a multi-HVDC-infeed power grid, making it urgent to study the interactions among multiple HVDC links and their characteristics. In this paper, an electromechanical-electromagnetic hybrid model is built with electromechanical data of a certain power network. We use electromagnetic models to simulate the HVDC section and electromechanical models to simulate the AC power network [2]. To study the characteristics of the grid, faults are applied to the lines and the resulting fault characteristics are analysed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, Allan Ray
1987-05-01
Increases in high speed hardware have mandated studies in software techniques to exploit the parallel capabilities. This thesis examines the effects a run-time scheduler has on a multiprocessor. The model consists of directed, acyclic graphs, generated from serial FORTRAN benchmark programs by the parallel compiler Parafrase. A multitasked, multiprogrammed environment is created. Dependencies are generated by the compiler. Tasks are bidimensional, i.e., they may specify both time and processor requests. Processor requests may be folded into execution time by the scheduler. The graphs may arrive at arbitrary time intervals. The general case is NP-hard; thus, a variety of heuristics are examined by a simulator. Multiprogramming demonstrates a greater need for a run-time scheduler than does monoprogramming for a variety of reasons, e.g., greater stress on the processors, a larger number of independent control paths, more variety in the task parameters, etc. The dynamic critical path series of algorithms perform well. Dynamic critical volume did not add much. Unfortunately, dynamic critical path maximizes turnaround time as well as throughput. Two schedulers are presented which balance throughput and turnaround time. The first requires classification of jobs by type; the second requires selection of a ratio value which is dependent upon system parameters. 45 refs., 19 figs., 20 tabs.
Phosphonitrilic Fluoroelastomer Coated Fabrics for Collapsible Fuel Storage Tanks
1979-07-01
[Garbled front-matter and table fragments; recoverable content follows.] Stabilizer masterbatch formulations R21960 and -601; Banbury "BR" mixes. Recoverable mixing schedule at slow speed (77 rpm): 0 minutes, load polymer; 2 minutes, add fillers; 7 minutes, add stabilizer masterbatch; 15 minutes, dump mix to obtain maximum flow. The mix is dumped after the stated total mix time; curing agent is then added to the masterbatch banded on a mill, and ambient-temperature mills were generally used.
Cost-benefit analysis of Xpert MTB/RIF for tuberculosis suspects in German hospitals.
Diel, Roland; Nienhaus, Albert; Hillemann, Doris; Richter, Elvira
2016-02-01
Our objective was to assess the cost-benefit of enhancing or replacing the conventional sputum smear with the real-time PCR Xpert MTB/RIF method in the inpatient diagnostic schema for tuberculosis (TB). Recent data from published per-case cost studies for TB/multidrug-resistant (MDR)-TB and from comparative analyses of sputum microscopy, mycobacterial culture, Xpert MTB/RIF and drug susceptibility testing, performed at the German National Reference Center for Mycobacteria, were used. Potential cost savings of Xpert MTB/RIF, based on test accuracy and multiple cost drivers, were calculated for diagnosing TB/MDR-TB suspects from the hospital perspective. Implementing Xpert MTB/RIF as an add-on in smear-positive and smear-negative TB suspects saves on average €48.72 and €503, respectively, per admitted patient as compared with the conventional approach. In smear-positive and smear-negative MDR-TB suspects, cost savings amount to €189.56 and €515.25 per person, respectively. Full replacement of microscopy by Xpert MTB/RIF saves €449.98. In probabilistic Monte-Carlo simulation, adding Xpert MTB/RIF is less costly in 46.4% and 76.2% of smear-positive TB and MDR-TB suspects, respectively, and less costly in 100% of smear-negative suspects. Full replacement by Xpert MTB/RIF is also consistently cost-saving. Using Xpert MTB/RIF as an add-on to and even as a replacement for sputum smear examination may significantly reduce expenditures in TB suspects. Copyright ©ERS 2016.
Ticknor, Christopher; Collins, Lee A.; Kress, Joel D.
2015-08-04
We present simulations of a four component mixture of HCNO with orbital free molecular dynamics (OFMD). These simulations were conducted for 5–200 eV with densities ranging between 0.184 and 36.8 g/cm³. We extract the equation of state from the simulations and compare to average atom models. We found that we only need to add a cold curve model to find excellent agreement. In addition, we studied mass transport properties. We present fits to the self-diffusion and shear viscosity that are able to reproduce the transport properties over the parameter range studied. We compare these OFMD results to models based on the Coulomb coupling parameter and one-component plasmas.
Influence of Meibomian Gland Dysfunction and Friction-Related Disease on the Severity of Dry Eye.
Vu, Chi Hoang Viet; Kawashima, Motoko; Yamada, Masakazu; Suwaki, Kazuhisa; Uchino, Miki; Shigeyasu, Chika; Hiratsuka, Yoshimune; Yokoi, Norihiko; Tsubota, Kazuo
2018-02-16
To evaluate the effect of meibomian gland dysfunction (MGD) and friction-related disease (FRD) on the severity of dry eye disease (DED). Cross-sectional observational study. This study enrolled 449 patients with DED (63 men and 386 women; mean age, 62.6±15.7 years [range, 21-90 years]) for analysis. Subjective symptoms, the ocular surface, tear function, and the presence of MGD and FRD (superior limbic keratoconjunctivitis, conjunctivochalasis, and lid wiper epitheliopathy) were investigated. Schirmer value, tear film breakup time (TBUT), and keratoconjunctival score. We classified the participants into aqueous-deficient dry eye (ADDE; n = 231 [51.4%]) and short TBUT dry eye subtype (TBUT-DE; n = 109 [24.3%]) subgroups. The TBUT was shorter in patients with MGD than in those without MGD, whereas other ocular signs showed no difference (TBUT: MGD present, 1.97±1.02 seconds; MGD absent, 2.94±1.63 seconds [P < 0.001]; ADDE/MGD present, 1.94±1.08 seconds; ADDE/MGD absent, 2.77±1.61 seconds [P < 0.001]; short TBUT-DE/MGD present, 2.07±0.97 seconds; short TBUT-DE/MGD absent, 2.94±1.23 seconds [P = 0.01]). The ADDE patients with FRD showed a worse TBUT than ADDE patients without FRD (TBUT: ADDE/FRD present, 2.08±1.39 seconds; ADDE/FRD absent, 2.92±1.54 seconds; P < 0.001). This study showed associations between MGD, FRD, or both and ocular signs in DED. In the presence of MGD, FRD, or both, TBUT was significantly shortened regardless of the dry eye status or subtype. Copyright © 2018 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
Abreau, Kerstin; Callan, Christine; Kottaiyan, Ranjini; Zhang, Aizhong; Yoon, Geunyoung; Aquavella, James V; Zavislan, James; Hindman, Holly B
2016-01-01
To compare the temperatures of the ocular surface, eyelid, and periorbital skin in normal eyes with Sjögren's syndrome (SS) eyes, evaporative dry eyes (EDE), and aqueous deficient dry eyes (ADDE). 10 eyes were analyzed in each age-matched group (normal, SS, EDE, and ADDE). A noninvasive infrared thermal camera captured two-dimensional images in three regions of interest (ROI) in each of three areas: the ocular surface, the upper eyelid, and the periorbital skin within a controlled environmental chamber. Mean temperatures in each ROI were calculated from the videos. Ocular surface time-segmented cooling rates were calculated over a 5-s blink interval. Relative to normal eyes, dry eyes had lower initial central OSTs (SS -0.71°C, EDE -0.55°C, ADDE -0.95°C, KW P<.0001) and lower central upper lid temperatures (SS -0.24°C, ADDE -0.51°C, and EDE -0.54°C, KW P<.0001). ADDE eyes had the lowest initial central OST (P<.0001), while EDE eyes had the lowest central lid temperature and lower periorbital temperatures (P<.0001). Over the 5-s interblink interval, the greatest rate of temperature loss occurred following eyelid opening, but varied by group (normals -0.52, SS -0.73, EDE -0.63, and ADDE -0.75°C/s). The ADDE group also had the most substantial heat loss over the 5-s interblink interval (-0.97°C). Differences in OST may be related to thermal differences in lids and periorbita along with an altered tear film. Thermography of the ocular surface, lids, and surrounding tissues may help to differentiate between different etiologies of dry eye. Copyright © 2016 Elsevier Inc. All rights reserved.
A Layered Solution for Supercomputing Storage
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grider, Gary
To solve the supercomputing challenge of memory keeping up with processing speed, a team at Los Alamos National Laboratory developed two innovative memory management and storage technologies. Burst buffers peel off data onto flash memory to support the checkpoint/restart paradigm of large simulations. MarFS adds a thin software layer enabling a new tier for campaign storage—based on inexpensive, failure-prone disk drives—between disk drives and tape archives.
CalPro: a spreadsheet program for the management of California mixed-conifer stands.
Jingjing Liang; Joseph Buongiorno; Robert A. Monserud
2004-01-01
CalPro is an add-in program developed to work with Microsoft Excel to simulate the growth and management of uneven-aged mixed-conifer stands in California. Its built-in growth model was calibrated from 177 uneven-aged plots on industry and other private lands. Stands are described by the number of trees per acre in each of nineteen 2-inch diameter classes in...
Enhancing robustness of interdependent network by adding connectivity and dependence links
NASA Astrophysics Data System (ADS)
Cui, Pengshuai; Zhu, Peidong; Wang, Ke; Xun, Peng; Xia, Zhuoqun
2018-05-01
Enhancing the robustness of interdependent networks by adding connectivity links has been researched extensively; however, few studies focus on adding both connectivity and dependence links to enhance robustness. In this paper, we study how to allocate a limited cost budget reasonably to add both connectivity and dependence links. First, we divide attackers into stubborn attackers and smart attackers according to whether they change their attack modes as the network structure changes. Then, through simulations, link-addition strategies are derived separately for each attacker type, with which the limited costs can be allocated to add connectivity and dependence links reasonably, achieving more robustness than adding connectivity links or dependence links alone. The results show that, compared to adding only connectivity links or only dependence links, allocating the limited resources reasonably across both link types brings more robustness to interdependent networks.
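A minimal sketch of the robustness bookkeeping behind such experiments, under strong simplifying assumptions: a single random network layer (not the paper's two-layer interdependent model with dependence links and cascading failures), robustness measured as the average largest-component fraction under random node removal, and a naive low-degree link-addition heuristic standing in for the paper's cost-allocation strategies.

```python
import random

random.seed(42)

def largest_cc(adj, alive):
    """Size of the largest connected component among alive nodes (DFS)."""
    seen, best = set(), 0
    for s in alive:
        if s in seen:
            continue
        stack, comp = [s], 0
        seen.add(s)
        while stack:
            v = stack.pop()
            comp += 1
            for w in adj[v]:
                if w in alive and w not in seen:
                    seen.add(w)
                    stack.append(w)
        best = max(best, comp)
    return best

def robustness(adj, trials=20):
    """Average largest-component fraction over a random node-removal sequence."""
    n = len(adj)
    total = 0.0
    for _ in range(trials):
        alive = set(adj)
        order = list(adj)
        random.shuffle(order)
        for v in order[:-1]:
            alive.discard(v)
            total += largest_cc(adj, alive) / n
    return total / (trials * (n - 1))

# Erdos-Renyi-style random graph as a toy stand-in for one network layer.
n, p = 100, 0.03
adj = {v: set() for v in range(n)}
for i in range(n):
    for j in range(i + 1, n):
        if random.random() < p:
            adj[i].add(j)
            adj[j].add(i)

base = robustness(adj)

# Spend a small link budget connecting the lowest-degree nodes.
adj2 = {v: set(nb) for v, nb in adj.items()}
low = sorted(adj2, key=lambda v: len(adj2[v]))
for i in range(0, 20, 2):
    adj2[low[i]].add(low[i + 1])
    adj2[low[i + 1]].add(low[i])

improved = robustness(adj2)
print(f"robustness before {base:.3f}  after {improved:.3f}")
```

Comparing strategies then reduces to comparing such robustness scores for different allocations of the same link budget, per attacker model.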
Broadband Transmission Loss Due to Reverberant Excitation
NASA Technical Reports Server (NTRS)
Barisciano, Lawrence P. Jr.
1999-01-01
The noise transmission characteristics of candidate curved aircraft sidewall panel constructions are examined analytically using finite element models of the selected panel geometries. The models are validated by experimental modal analyses and transmission loss testing. The structural and acoustic response of the models is then examined when subjected to random or reverberant excitation, the simulation of which is also discussed. For a candidate curved honeycomb panel, the effect of add-on trim panel treatments is examined. Specifically, two different mounting configurations are discussed and their effect on the transmission loss of the panel is presented. This study finds that the add-on acoustical treatments do improve the primary structure's transmission loss characteristics; however, much more research is necessary to draw any valid conclusions about the optimal configuration for maximum noise transmission loss. This paper describes several directions for the extension of this work.
Interference Canceller Based on Cycle-and-Add Property for Single User Detection in DS-CDMA
NASA Astrophysics Data System (ADS)
Hettiarachchi, Ranga; Yokoyama, Mitsuo; Uehara, Hideyuki; Ohira, Takashi
In this paper, the performance of a novel interference cancellation technique for single-user detection in a direct-sequence code-division multiple access (DS-CDMA) system is investigated. The new algorithm is based on the cycle-and-add property of PN (pseudorandom noise) sequences and can be applied to both synchronous and asynchronous systems. The proposed strategy provides a simple method that can delete interference signals one by one regardless of the power levels of the interferers. It is therefore possible to overcome the near-far problem (NFP) in a successive manner without using transmit power control (TPC) techniques. The validity of the proposed procedure is corroborated by computer simulations in additive white Gaussian noise (AWGN) and frequency-nonselective fading channels. Performance results indicate that the proposed receiver outperforms the conventional receiver and, in many cases, does so with a considerable gain.
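The cycle-and-add property the algorithm relies on is easy to demonstrate directly: for a maximal-length (m-)sequence, the XOR of the sequence with any nonzero cyclic shift of itself is another cyclic shift of the same sequence. Below is a minimal check with a 4-stage LFSR (primitive polynomial x⁴ + x + 1, period 15); the register layout and tap convention are one common choice, not taken from the paper.

```python
import numpy as np

def lfsr_mseq(taps, state, length):
    """Generate a binary sequence from a Fibonacci LFSR.
    taps: 0-based register positions XORed to form the feedback bit."""
    state = list(state)
    out = []
    for _ in range(length):
        out.append(state[-1])          # output the oldest bit
        fb = 0
        for t in taps:
            fb ^= state[t]
        state = [fb] + state[:-1]      # shift feedback bit in at the front
    return np.array(out, dtype=np.uint8)

# x^4 + x + 1 gives a maximal-length sequence of period 2^4 - 1 = 15.
seq = lfsr_mseq(taps=[0, 3], state=[1, 0, 0, 0], length=15)

# Cycle-and-add: seq XOR (seq cyclically shifted) is another cyclic shift.
shifted = np.roll(seq, 3)
summed = seq ^ shifted
is_shift = any(np.array_equal(summed, np.roll(seq, d)) for d in range(15))
print(f"cycle-and-add holds: {is_shift}")
```

This closure under shift-and-XOR is what lets the receiver combine delayed copies of the spreading code to isolate and subtract interferers one by one.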
3-D Image Encryption Based on Rubik's Cube and RC6 Algorithm
NASA Astrophysics Data System (ADS)
Helmy, Mai; El-Rabaie, El-Sayed M.; Eldokany, Ibrahim M.; El-Samie, Fathi E. Abd
2017-12-01
A novel encryption algorithm based on the 3-D Rubik's cube is proposed in this paper to achieve 3D encryption of a group of images. This proposed encryption algorithm begins with RC6 as a first step for encrypting multiple images, separately. After that, the obtained encrypted images are further encrypted with the 3-D Rubik's cube. The RC6 encrypted images are used as the faces of the Rubik's cube. From the concepts of image encryption, the RC6 algorithm adds a degree of diffusion, while the Rubik's cube algorithm adds a degree of permutation. The simulation results demonstrate that the proposed encryption algorithm is efficient, and it exhibits strong robustness and security. The encrypted images are further transmitted over wireless Orthogonal Frequency Division Multiplexing (OFDM) system and decrypted at the receiver side. Evaluation of the quality of the decrypted images at the receiver side reveals good results.
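The permutation stage can be sketched as key-dependent cyclic shifts of image rows and columns. This is only an illustration of the Rubik's-cube-style scrambling idea on a single small image; the RC6 diffusion step and the OFDM transmission are omitted, and the key layout is an assumption.

```python
import numpy as np

def rubik_scramble(img, row_key, col_key, inverse=False):
    """Rubik's-cube-style permutation: cyclically shift each row, then each
    column, by key-dependent amounts. Purely a permutation stage."""
    out = img.copy()
    sign = -1 if inverse else 1
    if not inverse:
        for i, k in enumerate(row_key):
            out[i] = np.roll(out[i], sign * k)
        for j, k in enumerate(col_key):
            out[:, j] = np.roll(out[:, j], sign * k)
    else:  # undo the forward steps in reverse order
        for j, k in enumerate(col_key):
            out[:, j] = np.roll(out[:, j], sign * k)
        for i, k in enumerate(row_key):
            out[i] = np.roll(out[i], sign * k)
    return out

rng = np.random.default_rng(7)
image = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
row_key = rng.integers(0, 8, size=8)   # hypothetical per-row shift key
col_key = rng.integers(0, 8, size=8)   # hypothetical per-column shift key

cipher = rubik_scramble(image, row_key, col_key)
plain = rubik_scramble(cipher, row_key, col_key, inverse=True)
print("round-trip exact:", np.array_equal(plain, image))
```

In the paper this permutation is applied on top of RC6-encrypted faces, so the scheme combines diffusion (RC6) with permutation (the cube moves); the sketch shows only that the permutation is exactly invertible given the key.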
A Generalized Method for Automatic Downhand and Wirefeed Control of a Welding Robot and Positioner
NASA Technical Reports Server (NTRS)
Fernandez, Ken; Cook, George E.
1988-01-01
A generalized method for controlling a six degree-of-freedom (DOF) robot and a two DOF positioner used for arc welding operations is described. The welding path is defined in the part reference frame, and robot/positioner joint angles of the equivalent eight DOF serial linkage are determined via an iterative solution. Three algorithms are presented: the first controls motion of the eight DOF mechanism such that proper torch motion is achieved while minimizing the sum-of-squares of joint displacements; the second adds two constraint equations to achieve torch control while maintaining part orientation so that welding occurs in the downhand position; and the third adds the ability to control the proper orientation of a wire feed mechanism used in gas tungsten arc (GTA) welding operations. These algorithms are verified using ROBOSIM, a NASA-developed computer graphics simulation software package designed for robot systems development.
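The first algorithm's criterion (achieve the commanded torch motion while minimizing the sum of squares of joint displacements) corresponds to the minimum-norm property of the Jacobian pseudoinverse. The sketch below applies that idea to a redundant planar 3-link arm rather than the paper's 8-DOF robot/positioner chain; the link lengths, pose, and target offset are illustrative assumptions.

```python
import numpy as np

def fk(q, lengths):
    """Planar 3-link forward kinematics: tip position."""
    angles = np.cumsum(q)
    return np.array([np.sum(lengths * np.cos(angles)),
                     np.sum(lengths * np.sin(angles))])

def jacobian(q, lengths):
    """2x3 Jacobian of the tip position w.r.t. the joint angles."""
    angles = np.cumsum(q)
    J = np.zeros((2, 3))
    for i in range(3):
        J[0, i] = -np.sum(lengths[i:] * np.sin(angles[i:]))
        J[1, i] = np.sum(lengths[i:] * np.cos(angles[i:]))
    return J

lengths = np.array([1.0, 0.8, 0.5])
q = np.array([0.3, 0.4, 0.2])
target = fk(q, lengths) + np.array([0.05, -0.03])

# Iterate: the pseudoinverse step is the minimum-norm joint update, i.e. it
# minimizes the sum of squared joint displacements at each iteration.
for _ in range(50):
    err = target - fk(q, lengths)
    q = q + np.linalg.pinv(jacobian(q, lengths)) @ err

res = np.linalg.norm(target - fk(q, lengths))
print("residual:", res)
```

Because the arm has more joints than task coordinates, infinitely many joint updates achieve the same tip motion; the pseudoinverse picks the one of least squared norm, which is the same least-squares criterion the abstract describes for the redundant 8-DOF chain.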
Theta and Alpha Oscillations in Attentional Interaction during Distracted Driving
Wang, Yu-Kai; Jung, Tzyy-Ping; Lin, Chin-Teng
2018-01-01
Performing multiple tasks simultaneously usually degrades behavioral performance compared with executing a single task, and processing multiple tasks simultaneously often imposes greater cognitive demands. Two visual tasks, a lane-keeping task and mental calculation, were used to assess brain dynamics through 32-channel electroencephalogram (EEG) recordings from 14 participants. A 400-ms stimulus onset asynchrony (SOA) factor was used to induce distinct levels of attentional requirement. In the dual-task conditions, the deteriorated behavior reflected the divided attention and the overlapping brain resources used. Frontal, parietal and occipital components were decomposed by an independent component analysis (ICA) algorithm. The event- and response-related theta and alpha oscillations in selected brain regions were investigated first. The increased theta oscillation in the frontal component and decreased alpha oscillations in the parietal and occipital components reflect the cognitive demands and attentional requirements of executing the designed tasks. Furthermore, time-varying interactive over-additive (O-Add), additive (Add) and under-additive (U-Add) activations were explored and summarized through comparison between the sum of the spectral perturbations elicited in the two single-task conditions and the spectral perturbations in the dual task. Add and U-Add activations were observed while executing the dual tasks; U-Add theta and alpha activations dominated the posterior region in dual-task situations. Our results show that both deteriorated behavior and interactive brain activations should be considered comprehensively to evaluate workload or attentional interaction precisely. PMID:29479310
Cyber Security Research Frameworks For Coevolutionary Network Defense
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rush, George D.; Tauritz, Daniel Remy
Several architectures have been created for developing and testing systems used in network security, but most are meant to provide a platform for running cyber security experiments as opposed to automating experiment processes. In the first paper, we propose a framework termed Distributed Cyber Security Automation Framework for Experiments (DCAFE) that enables experiment automation and control in a distributed environment. Predictive analysis of adversaries is another thorny issue in cyber security. Game theory can be used to mathematically analyze adversary models, but its scalability limitations restrict its use. Computational game theory allows us to scale classical game theory to larger, more complex systems. In the second paper, we propose a framework termed Coevolutionary Agent-based Network Defense Lightweight Event System (CANDLES) that can coevolve attacker and defender agent strategies and capabilities and evaluate potential solutions with a custom network defense simulation. The third paper is a continuation of the CANDLES project in which we rewrote key parts of the framework. Attackers and defenders have been redesigned to evolve pure strategy, and a new network security simulation is devised which specifies network architecture and adds a temporal aspect. We also add a hill climber algorithm to evaluate the search space and justify the use of a coevolutionary algorithm.
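A competitive coevolution loop of the kind CANDLES automates can be sketched as two populations scored against each other through a simulated engagement; this is a generic toy, not the CANDLES implementation, and the zero-sum payoff, real-valued "strategies", and mutation scheme are all assumptions:

```python
import random

def coevolve(simulate, pop_size=10, generations=20, seed=1):
    """Minimal competitive coevolution: attacker and defender strategy
    populations are evaluated against each other via an engagement
    simulation, then the best half of each population is kept and
    mutated copies refill the rest."""
    rng = random.Random(seed)
    attackers = [rng.random() for _ in range(pop_size)]
    defenders = [rng.random() for _ in range(pop_size)]
    for _ in range(generations):
        # Fitness of each strategy = total payoff over all opponents.
        a_fit = [sum(simulate(a, d) for d in defenders) for a in attackers]
        d_fit = [sum(-simulate(a, d) for a in attackers) for d in defenders]
        attackers = breed(attackers, a_fit, rng)
        defenders = breed(defenders, d_fit, rng)
    return attackers, defenders

def breed(pop, fit, rng, sigma=0.05):
    """Keep the best half, refill with clipped Gaussian mutants."""
    ranked = [p for _, p in sorted(zip(fit, pop), reverse=True)]
    half = ranked[:len(pop) // 2]
    return half + [min(1.0, max(0.0, p + rng.gauss(0, sigma))) for p in half]

# Toy zero-sum engagement: attacker payoff rises with attack effort a
# and falls with defense effort d.
attackers, defenders = coevolve(lambda a, d: a - d)
```

Under this toy payoff both populations escalate toward maximal effort, illustrating the arms-race dynamic a coevolutionary framework is meant to explore.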
3D Laser Imprint Using a Smoother Ray-Traced Power Deposition Method
NASA Astrophysics Data System (ADS)
Schmitt, Andrew J.
2017-10-01
Imprinting of laser nonuniformities in directly-driven ICF targets is a challenging problem to accurately simulate with large radiation-hydro codes. One of the most challenging aspects is the proper construction of the complex and rapidly changing laser interference structure driving the imprint using the reduced laser propagation models (usually ray-tracing) found in these codes. We have upgraded the modelling capability in our massively-parallel
MAISIE: a multipurpose astronomical instrument simulator environment
NASA Astrophysics Data System (ADS)
O'Brien, Alan; Beard, Steven; Geers, Vincent; Klaassen, Pamela
2016-07-01
Astronomical instruments often need simulators to preview their data products and test their data reduction pipelines. Instrument simulators have tended to be purpose-built with a single instrument in mind, and attempting to reuse one of these simulators for a different purpose is often a slow and difficult task. MAISIE is a simulator framework designed for reuse on different instruments. An object-oriented design encourages reuse of functionality and structure, while offering the flexibility to create new classes with new functionality. MAISIE is a set of Python classes, interfaces and tools to help build instrument simulators. MAISIE can just as easily build simulators for single- and multi-channel instruments, imagers and spectrometers, and ground- and space-based instruments. To remain easy to use and to facilitate the sharing of simulators across teams, MAISIE is written in Python, a freely available and open-source language. New functionality can be created for MAISIE by creating new classes that represent optical elements. This approach allows new and novel instruments to add functionality and take advantage of the existing MAISIE classes. MAISIE has recently been used successfully to develop the simulator for the JWST/MIRI Medium Resolution Spectrometer.
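The optical-element class approach might look like the following sketch; the class names (Telescope, Filter, Detector) and the dict-based signal API are hypothetical stand-ins, not actual MAISIE classes:

```python
class OpticalElement:
    """Base class for simulator components: each element transforms a
    'signal' dict in turn as it passes down the instrument chain."""
    def apply(self, signal):
        return signal

class Telescope(OpticalElement):
    def __init__(self, area_m2):
        self.area_m2 = area_m2
    def apply(self, signal):
        signal["flux"] *= self.area_m2       # collecting area scales flux
        return signal

class Filter(OpticalElement):
    def __init__(self, transmission):
        self.transmission = transmission
    def apply(self, signal):
        signal["flux"] *= self.transmission  # bandpass throughput
        return signal

class Detector(OpticalElement):
    def __init__(self, gain):
        self.gain = gain
    def apply(self, signal):
        signal["counts"] = signal["flux"] * self.gain
        return signal

def simulate(elements, source):
    """Push a source through the instrument chain element by element."""
    for el in elements:
        source = el.apply(source)
    return source

out = simulate([Telescope(25.0), Filter(0.8), Detector(2.0)], {"flux": 1.0})
```

New instrument behavior is added by subclassing the base element, which is the reuse pattern the abstract describes.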
Benchmark tests for a Formula SAE Student car prototyping
NASA Astrophysics Data System (ADS)
Mariasiu, Florin
2011-12-01
Aerodynamic characteristics of a vehicle are important elements in its design and construction. A low drag coefficient brings significant fuel savings and increased engine power efficiency. In designing and developing vehicles through computer simulation, dedicated CFD (Computational Fluid Dynamics) software packages are used to determine a vehicle's aerodynamic characteristics. The results obtained by this faster and cheaper method, however, are validated by experiments in wind tunnels, which are expensive and require complex testing equipment operated at relatively high cost. Therefore, the emergence and development of new low-cost testing methods to validate CFD simulation results would bring great economic benefits to the vehicle prototyping process. This paper presents the initial development of a Formula SAE Student race-car prototype using CFD simulation and also presents a measurement system based on low-cost sensors through which the CFD simulation results were experimentally validated. The CFD software package used for simulation was SolidWorks with the FloXpress add-on, and the experimental measurement system was built using four FlexiForce piezoresistive force sensors.
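Recovering a drag coefficient from a force-sensor reading follows directly from the drag equation F_d = 0.5 * rho * v^2 * C_d * A; a sketch with invented measurement values (the paper's actual readings are not reproduced here):

```python
def drag_coefficient(force_n, speed_ms, frontal_area_m2, air_density=1.225):
    """Recover the drag coefficient C_d from a measured drag force using
    F_d = 0.5 * rho * v^2 * C_d * A (rho defaults to sea-level air)."""
    dynamic_pressure = 0.5 * air_density * speed_ms ** 2
    return force_n / (dynamic_pressure * frontal_area_m2)

# Hypothetical reading: 180 N of drag at 25 m/s on a 1.2 m^2 frontal area.
cd = drag_coefficient(force_n=180.0, speed_ms=25.0, frontal_area_m2=1.2)
```

A value in the 0.3-0.5 range would be plausible for an open-wheel student race car, which is the kind of sanity check such a low-cost rig enables against CFD output.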
Improvements in flight table dynamic transparency for hardware-in-the-loop facilities
NASA Astrophysics Data System (ADS)
DeMore, Louis A.; Mackin, Rob; Swamp, Michael; Rusterholtz, Roger
2000-07-01
Flight tables are a 'necessary evil' in Hardware-In-The-Loop (HWIL) simulation. Adding the actual or prototypic flight hardware to the loop, in order to increase the realism of the simulation, forces us to add motion simulation to the process. Flight table motion bases bring unwanted dynamics, non-linearities, transport delays, etc. to an already difficult problem, sometimes requiring the simulation engineer to compromise the results. We desire that the flight tables be 'dynamically transparent' to the simulation scenario. This paper presents a State Variable Feedback (SVF) control system architecture with feed-forward techniques that improves the flight table's dynamic transparency by significantly reducing the table's low-frequency phase lag. We offer some actual results with existing flight tables that demonstrate the improved transparency. These results come from a demonstration conducted on a flight table in the KHILS laboratory at Eglin AFB and during a refurbishment of a flight table for the Boeing Company of St. Charles, Missouri.
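The benefit of feed-forward for reducing low-frequency phase lag can be illustrated on a first-order actuator model y' = (u - y)/tau, where injecting the command derivative (u = r + tau * dr/dt) cancels most of the lag; this toy is not the SVF architecture itself, and the time constant, frequency, and step size are all assumptions:

```python
import math

def track(freq_hz, tau, dt=1e-3, t_end=2.0, feedforward=True):
    """Simulate a first-order axis y' = (u - y)/tau following a sinusoidal
    command r(t). With feedforward, the analytic command derivative is
    injected (u = r + tau*dr/dt); without it, u = r. Returns the peak
    tracking error after initial transients decay."""
    y, err = 0.0, 0.0
    for i in range(int(t_end / dt)):
        t = i * dt
        w = 2 * math.pi * freq_hz
        r = math.sin(w * t)
        dr = w * math.cos(w * t)
        u = r + tau * dr if feedforward else r
        y += dt * (u - y) / tau
        if t > 1.0:                     # measure after transients decay
            err = max(err, abs(r - y))
    return err

lag_err = track(2.0, tau=0.05, feedforward=False)  # large lag error
ff_err = track(2.0, tau=0.05, feedforward=True)    # nearly transparent
```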
NASA Technical Reports Server (NTRS)
Lin, Jiguan Gene
1987-01-01
The quick suppression of the structural vibrations excited by bang-bang (BB) type time-optimal slew maneuvers via modal-dashpot design of velocity output feedback control was investigated. Simulation studies were conducted, and modal dashpots were designed for the SCOLE flexible body dynamics. A two-stage approach was proposed for rapid slewing and precision pointing/retargeting of large, flexible space systems: (1) slew the whole system like a rigid body in a minimum time under specified limits on the control moments and forces, and (2) damp out the excited structural vibrations afterwards. This approach was found promising. High-power modal dashpots can suppress very large vibrations, and can add a desirable amount of active damping to modeled modes. Unmodeled modes can also receive some concomitant active damping, as a benefit of spillover. Results also show that not all BB type rapid pointing maneuvers will excite large structural vibrations. When properly selected small forces (e.g., vernier thrusters) are used to complete the specified slew maneuver in the shortest time, even BB-type maneuvers will excite only small vibrations (e.g., 0.3 ft peak deflection for a 130 ft beam).
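A modal dashpot is, in essence, velocity output feedback applied to a structural mode; a sketch of a single mode x'' = -omega^2 * x - g * x' showing how the feedback gain sets the vibration settling time (the gains, tolerances, and integrator are invented for illustration, not taken from the SCOLE study):

```python
import math

def settle_time(omega, gain, x0=1.0, dt=1e-3, t_max=60.0, tol=0.01):
    """Time for a single structural mode x'' = -omega^2*x - gain*x'
    (velocity output feedback, i.e. a 'modal dashpot') to decay below
    tol of its initial amplitude. Returns t_max if it never settles."""
    x, v, t = x0, 0.0, 0.0
    while t < t_max:
        # semi-implicit Euler keeps the undamped oscillator stable
        v += dt * (-omega ** 2 * x - gain * v)
        x += dt * v
        if math.hypot(x, v / omega) < tol * x0:  # modal amplitude envelope
            return t
        t += dt
    return t_max

undamped = settle_time(omega=2.0, gain=0.0)   # no dashpot: never settles
damped = settle_time(omega=2.0, gain=0.5)     # dashpot adds active damping
```

The effective damping ratio is gain/(2*omega), so the envelope decays like exp(-gain*t/2), which is why a higher-power dashpot suppresses vibrations faster.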
Estimating Controller Intervention Probabilities for Optimized Profile Descent Arrivals
NASA Technical Reports Server (NTRS)
Meyn, Larry A.; Erzberger, Heinz; Huynh, Phu V.
2011-01-01
Simulations of arrival traffic at Dallas/Fort-Worth and Denver airports were conducted to evaluate incorporating scheduling and separation constraints into advisories that define continuous descent approaches. The goal was to reduce the number of controller interventions required to ensure flights maintain minimum separation distances of 5 nmi horizontally and 1000 ft vertically. It was shown that simply incorporating arrival meter fix crossing-time constraints into the advisory generation could eliminate over half of all predicted separation violations and more than 80% of the predicted violations between two arrival flights. Predicted separation violations between arrivals and non-arrivals were 32% of all predicted separation violations at Denver and 41% at Dallas/Fort-Worth. A probabilistic analysis of meter fix crossing-time errors is included which shows that some controller interventions will still be required even when the predicted crossing-times of the advisories are set to add a 1 or 2 nmi buffer above the minimum in-trail separation of 5 nmi. The 2 nmi buffer was shown to increase average flight delays by up to 30 sec when compared to the 1 nmi buffer, but it only resulted in a maximum decrease in average arrival throughput of one flight per hour.
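The effect of a spacing buffer on intervention probability can be sketched with a Monte Carlo over crossing-time errors; the Gaussian error model and its standard deviation are assumptions for illustration, not the paper's measured distribution:

```python
import random

def intervention_rate(buffer_nmi, sigma_nmi=1.5, n=100_000, seed=7):
    """Monte Carlo sketch: successive arrivals are scheduled to cross the
    meter fix (5 nmi + buffer) apart, but each flight's crossing error
    perturbs the spacing. A controller intervenes whenever the realized
    spacing drops below the 5 nmi in-trail minimum."""
    rng = random.Random(seed)
    violations = 0
    for _ in range(n):
        # realized spacing = scheduled spacing + difference of two errors
        spacing = (5.0 + buffer_nmi
                   + rng.gauss(0, sigma_nmi) - rng.gauss(0, sigma_nmi))
        if spacing < 5.0:
            violations += 1
    return violations / n

no_buffer = intervention_rate(0.0)   # ~half of pairs need intervention
one_nmi = intervention_rate(1.0)
two_nmi = intervention_rate(2.0)
```

With zero buffer the spacing error is symmetric about the minimum, so roughly half of all pairs violate; each added nmi of buffer cuts the rate but, as the abstract notes, never drives it to zero.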
Tracking Expected Improvements of Decadal Prediction in Climate Services
NASA Astrophysics Data System (ADS)
Suckling, E.; Thompson, E.; Smith, L. A.
2013-12-01
Physics-based simulation models are ultimately expected to provide the best available (decision-relevant) probabilistic climate predictions, as they can capture the dynamics of the Earth System across a range of situations, including situations for which observations for the construction of empirical models are scant if not nonexistent. This fact in itself provides neither evidence that predictions from today's Earth System Models will outperform today's empirical models, nor a guide to the space and time scales on which today's model predictions are adequate for a given purpose. Empirical (data-based) models are employed to make probability forecasts on decadal timescales. The skill of these forecasts is contrasted with that of state-of-the-art climate models, and the challenges faced by each approach are discussed. The focus is on providing decision-relevant probability forecasts for decision support. An empirical model, known as Dynamic Climatology, is shown to be competitive with CMIP5 climate models on decadal-scale probability forecasts. Contrasting the skill of simulation models not only with each other but also with empirical models can reveal the space and time scales on which a generation of simulation models exploits its physical basis effectively. It can also quantify their ability to add information in the formation of operational forecasts. Difficulties (i) of information contamination, (ii) of the interpretation of probabilistic skill, and (iii) of artificial skill complicate each modelling approach, and are discussed. "Physics free" empirical models provide fixed, quantitative benchmarks for the evaluation of ever more complex climate models that are not available from (inter)comparisons restricted to only complex models. At present, empirical models can also provide a background term for blending in the formation of probability forecasts from ensembles of simulation models.
In weather forecasting this role is filled by the climatological distribution, which can significantly enhance the value of longer lead-time weather forecasts to those who use them. It is suggested that the direct comparison of simulation models with empirical models become a regular component of large model forecast intercomparison and evaluation. This would clarify the extent to which a given generation of state-of-the-art simulation models provides information beyond that available from simpler empirical models. It would also clarify current limitations in using simulation forecasting for decision support. No model-based probability forecast is complete without a quantitative estimate of its own irrelevance; this estimate is likely to increase as a function of lead time. A lack of decision-relevant quantitative skill would not bring the science-based foundation of anthropogenic warming into doubt. Similar levels of skill with empirical models do, however, suggest a clear quantification of limits, as a function of lead time, on the spatial and temporal scales at which decisions based on such model output are expected to prove maladaptive. Failing to clearly state such weaknesses of a given generation of simulation models, while clearly stating their strengths and their foundation, risks the credibility of science in support of policy in the long term.
Nondestructive Intervention to Multi-Agent Systems through an Intelligent Agent
Han, Jing; Wang, Lin
2013-01-01
For a given multi-agent system where the local interaction rule of the existing agents cannot be re-designed, one way to intervene in the collective behavior of the system is to add one or a few special agents into the group, which are still treated as normal agents by the existing ones. We study how to lead a Vicsek-like flocking model to reach synchronization by adding special agents. A popular method is to add some simple leaders (fixed-heading agents). However, we add one intelligent agent, called a ‘shill’, which uses online feedback information of the group to decide the shill's moving direction at each step. A novel strategy for the shill to coordinate the group is proposed. It is strictly proved that a shill with this strategy and a limited speed can synchronize every agent in the group. Computer simulations show the effectiveness of this strategy in different scenarios, including different group sizes, shill speeds, and with or without noise. Compared to the method of adding some fixed-heading leaders, our method can guarantee synchronization for any initial configuration in the deterministic scenario and improve the synchronization level significantly in low-density groups or models with noise. This suggests the advantage and power of feedback information in the intervention of collective behavior. PMID:23658695
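The feedback idea can be sketched in a stripped-down, all-to-all, noise-free version of the alignment dynamics; positions, interaction radii, and the paper's actual shill strategy are omitted, and the bounded-step feedback rule below is an illustrative stand-in:

```python
import math, random

def wrap(a):
    """Map an angle difference into (-pi, pi]."""
    return (a + math.pi) % (2 * math.pi) - math.pi

def mean_heading(hs):
    """Circular mean of a list of headings."""
    return math.atan2(sum(math.sin(h) for h in hs),
                      sum(math.cos(h) for h in hs))

def run(n=20, steps=300, target=2.0, use_shill=True, seed=3):
    """Vicsek-like alignment with all-to-all neighborhoods and no noise:
    each normal agent adopts the group's mean heading. The shill is
    treated as just another agent by the others, but it uses feedback:
    it reads the current mean and points a bounded step toward the
    target heading."""
    rng = random.Random(seed)
    headings = [rng.uniform(-math.pi, math.pi) for _ in range(n)]
    shill = 0.0
    for _ in range(steps):
        group = headings + ([shill] if use_shill else [])
        m = mean_heading(group)
        headings = [m] * n                 # normal agents align to the mean
        if use_shill:
            # feedback rule: steer from the observed mean toward the target
            step = max(-1.0, min(1.0, wrap(target - m)))
            shill = m + step
    return mean_heading(headings)

final = run()
```

Because the shill reacts to the observed mean rather than holding a fixed heading, it keeps pulling the consensus toward the target from any initial configuration, which is the advantage over fixed-heading leaders the abstract emphasizes.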
DOE Office of Scientific and Technical Information (OSTI.GOV)
Greenquist, Ian; Tonks, Michael
2016-10-01
Light water reactor fuel pellets are fabricated using sintering to final densities of 95% or greater. During reactor operation, the porosity remaining in the fuel after fabrication decreases further due to irradiation-assisted densification. While empirical models have been developed to describe this densification process, a mechanistic model is needed as part of the ongoing work by the NEAMS program to develop a more predictive fuel performance code. In this work we will develop a phase field model of sintering of UO2 in the MARMOT code, and validate it by comparing to published sintering data. We will then add the capability to capture irradiation effects into the model, and use it to develop a mechanistic model of densification that will go into the BISON code and add another essential piece to the microstructure-based materials models. The final step will be to add the effects of applied fields, to model field-assisted sintering of UO2. The results of the phase field model will be validated by comparing to data from field-assisted sintering. Tasks over three years: 1. Develop a sintering model for UO2 in MARMOT; 2. Expand the model to account for irradiation effects; 3. Develop a mechanistic macroscale model of densification for BISON.
Random vs. Combinatorial Methods for Discrete Event Simulation of a Grid Computer Network
NASA Technical Reports Server (NTRS)
Kuhn, D. Richard; Kacker, Raghu; Lei, Yu
2010-01-01
This study compared random and t-way combinatorial inputs of a network simulator, to determine if these two approaches produce significantly different deadlock detection for varying network configurations. Modeling deadlock detection is important for analyzing configuration changes that could inadvertently degrade network operations, or to determine modifications that could be made by attackers to deliberately induce deadlock. Discrete event simulation of a network may be conducted using random generation of inputs. In this study, we compare random with combinatorial generation of inputs. Combinatorial (or t-way) testing requires every combination of any t parameter values to be covered by at least one test. Combinatorial methods can be highly effective because empirical data suggest that nearly all failures involve the interaction of a small number of parameters (1 to 6). Thus, for example, if all deadlocks involve at most 5-way interactions between n parameters, then exhaustive testing of all n-way interactions adds no additional information that would not be obtained by testing all 5-way interactions. While the maximum degree of interaction between parameters involved in the deadlocks clearly cannot be known in advance, covering all t-way interactions may be more efficient than using random generation of inputs. In this study we tested this hypothesis for t = 2, 3, and 4 for deadlock detection in a network simulation. Achieving the same degree of coverage provided by 4-way tests would have required approximately 3.2 times as many random tests; thus combinatorial methods were more efficient for detecting deadlocks involving a higher degree of interactions. The paper reviews explanations for these results and implications for modeling and simulation.
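The contrast between combinatorial and random input generation can be sketched for 2-way (pairwise) coverage of boolean parameters; the greedy construction below is a generic covering-array heuristic, not the tool used in the study:

```python
import itertools, random

def pairs(test):
    """All (position, value) pairs covered by one test vector."""
    idx = range(len(test))
    return {((i, test[i]), (j, test[j]))
            for i, j in itertools.combinations(idx, 2)}

def greedy_pairwise(n_params, values=(0, 1)):
    """Greedy covering-array construction: repeatedly add the candidate
    test that covers the most still-uncovered 2-way combinations."""
    all_tests = list(itertools.product(values, repeat=n_params))
    uncovered = set().union(*(pairs(t) for t in all_tests))
    suite = []
    while uncovered:
        best = max(all_tests, key=lambda t: len(pairs(t) & uncovered))
        suite.append(best)
        uncovered -= pairs(best)
    return suite

def random_until_covered(n_params, seed=0, values=(0, 1)):
    """How many random tests until every 2-way combination is covered?"""
    rng = random.Random(seed)
    target = set().union(*(pairs(t) for t in
                           itertools.product(values, repeat=n_params)))
    covered, count = set(), 0
    while covered != target:
        covered |= pairs(tuple(rng.choice(values) for _ in range(n_params)))
        count += 1
    return count

suite = greedy_pairwise(10)        # far fewer than the 2**10 exhaustive tests
n_random = random_until_covered(10)
```

For 10 boolean parameters the greedy suite covers all pairs with a handful of tests versus 1024 exhaustive ones, illustrating why t-way coverage scales where exhaustive testing cannot.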
A molecular simulation protocol to avoid sampling redundancy and discover new states.
Bacci, Marco; Vitalis, Andreas; Caflisch, Amedeo
2015-05-01
For biomacromolecules or their assemblies, experimental knowledge is often restricted to specific states. Ambiguity pervades simulations of these complex systems because there is no prior knowledge of relevant phase space domains, and sampling recurrence is difficult to achieve. In molecular dynamics methods, ruggedness of the free energy surface exacerbates this problem by slowing down the unbiased exploration of phase space. Sampling is inefficient if dwell times in metastable states are large. We suggest a heuristic algorithm to terminate and reseed trajectories run in multiple copies in parallel. It uses a recent method to order snapshots, which provides notions of "interesting" and "unique" for individual simulations. We define criteria to guide the reseeding of runs from more "interesting" points if they sample overlapping regions of phase space. Using a pedagogical example and an α-helical peptide, the approach is demonstrated to amplify the rate of exploration of phase space and to discover metastable states not found by conventional sampling schemes. Evidence is provided that accurate kinetics and pathways can be extracted from the simulations. The method, termed PIGS for Progress Index Guided Sampling, proceeds in unsupervised fashion, is scalable, and benefits synergistically from larger numbers of replicas. Results confirm that the underlying ideas are appropriate and sufficient to enhance sampling. In molecular simulations, errors caused by not exploring relevant domains in phase space are always unquantifiable and can be arbitrarily large. Our protocol adds to the toolkit available to researchers in reducing these types of errors. This article is part of a Special Issue entitled "Recent developments of molecular dynamics". Copyright © 2014 Elsevier B.V. All rights reserved.
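The terminate-and-reseed heuristic can be caricatured with 1-D random walkers standing in for replicas; the nearest-neighbor "uniqueness" measure below is a crude stand-in for the progress-index ordering, and all parameters are invented:

```python
import random

def pigs_sketch(n_replicas=8, steps=400, interval=20, seed=11):
    """Terminate-and-reseed sketch in the spirit of PIGS: replicas
    perform 1-D random walks; every `interval` steps the replica most
    redundant with the others (smallest nearest-neighbor distance) is
    terminated and reseeded from the most 'unique' replica (largest
    nearest-neighbor distance), amplifying exploration."""
    rng = random.Random(seed)
    pos = [0.0] * n_replicas
    reseeds = 0
    for step in range(1, steps + 1):
        pos = [x + rng.gauss(0, 1) for x in pos]   # all replicas walk
        if step % interval == 0:
            def nn_dist(i):
                return min(abs(pos[i] - pos[j])
                           for j in range(n_replicas) if j != i)
            redundant = min(range(n_replicas), key=nn_dist)
            unique = max(range(n_replicas), key=nn_dist)
            if redundant != unique:
                pos[redundant] = pos[unique]       # reseed from the frontier
                reseeds += 1
    return pos, reseeds

positions, n_reseeds = pigs_sketch()
```

Reseeding from the frontier replica concentrates the fixed replica budget on unexplored regions instead of letting copies dwell redundantly in the same basin.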
Glnemo2: Interactive Visualization 3D Program
NASA Astrophysics Data System (ADS)
Lambert, Jean-Charles
2011-10-01
Glnemo2 is an interactive 3D visualization program developed in C++ using the OpenGL library and the Nokia Qt 4.x API. It displays in 3D the particle positions of the different components of an N-body snapshot. It quickly gives a lot of information about the data (shape, density area, formation of structures such as spirals, bars, or peanuts). It allows for in/out zooms, rotations, changes of scale, translations, selection of different groups of particles and plots in different blending colors. It can color particles according to their density or temperature, play with the density threshold, trace orbits, display different time steps, take automatic screenshots to make movies, select particles using the mouse, and fly over a simulation using a given camera path. All these features are accessible from a very intuitive graphical user interface. Glnemo2 supports a wide range of input file formats (Nemo, Gadget 1 and 2, phiGrape, Ramses, list of files, realtime gyrfalcON simulation) which are automatically detected at loading time without user intervention. Glnemo2 uses a plugin mechanism to load the data, so that it is easy to add a new file reader. It is powered by a 3D engine which uses the latest OpenGL technology, such as shaders (GLSL), vertex buffer objects, and frame buffer objects, and takes into account the power of the graphics card used in order to accelerate the rendering. With a fast GPU, millions of particles can be rendered in real time. Glnemo2 runs on Linux, Windows (using the MinGW compiler), and Mac OS X, thanks to the Qt 4 API.
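A plugin mechanism with format auto-detection can be sketched as a registry of detect/load pairs tried in order; this Python sketch is a stand-in for the C++ original, and the magic numbers are invented for illustration:

```python
class SnapshotReaderRegistry:
    """Sketch of a plugin loader: each reader plugin declares a `detect`
    predicate and a `load` function. The registry tries plugins in order
    until one recognizes the file, so new formats are added by
    registering a plugin, without touching the loading code."""
    def __init__(self):
        self._plugins = []

    def register(self, name, detect, load):
        self._plugins.append((name, detect, load))

    def open(self, header_bytes):
        for name, detect, load in self._plugins:
            if detect(header_bytes):
                return name, load(header_bytes)
        raise ValueError("no plugin recognizes this snapshot format")

registry = SnapshotReaderRegistry()
# Hypothetical magic numbers, for illustration only.
registry.register("gadget", lambda h: h[:4] == b"GAD2",
                  lambda h: {"format": "gadget"})
registry.register("nemo", lambda h: h[:4] == b"NEMO",
                  lambda h: {"format": "nemo"})

name, data = registry.open(b"NEMO....")
```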
NASA Astrophysics Data System (ADS)
Abbaspour, K. C.; Rouholahnejad, E.; Vaghefi, S.; Srinivasan, R.; Yang, H.; Kløve, B.
2015-05-01
A combination of driving forces is increasing pressure on local, national, and regional water supplies needed for irrigation, energy production, industrial uses, domestic purposes, and the environment. In many parts of Europe groundwater quantity, and in particular quality, have come under severe degradation, and water levels have decreased, resulting in negative environmental impacts. Rapid improvements in the economy of the eastern European bloc of countries and uncertainties with regard to freshwater availability create challenges for water managers. At the same time, climate change adds a new level of uncertainty with regard to freshwater supplies. In this research we build and calibrate an integrated hydrological model of Europe using the Soil and Water Assessment Tool (SWAT) program. Different components of water resources are simulated, and crop yield and water quality are considered at the Hydrological Response Unit (HRU) level. The water resources are quantified at subbasin level with monthly time intervals. Leaching of nitrate into groundwater is also simulated at a finer spatial level (HRU). The use of large-scale, high-resolution water resources models enables consistent and comprehensive examination of integrated system behavior through physically-based, data-driven simulation. In this article we discuss issues with data availability, calibration of large-scale distributed models, and outline procedures for model calibration and uncertainty analysis. The calibrated model and results provide information support to the European Water Framework Directive and lay the basis for further assessment of the impact of climate change on water availability and quality. The approach and methods developed are general and can be applied to any large region around the world.
Narad, Megan; Garner, Annie A.; Brassell, Anne A.; Saxby, Dyani; Antonini, Tanya N.; O'Brien, Kathleen M.; Tamm, Leanne; Matthews, Gerald; Epstein, Jeffery N.
2013-01-01
Importance: This study extends the literature regarding Attention-Deficit/Hyperactivity Disorder (ADHD) related driving impairments to a newly-licensed, adolescent population. Objective: To investigate the combined risks of adolescence, ADHD, and distracted driving (cell phone conversation and text messaging) on driving performance. Design: Adolescents with and without ADHD engaged in a simulated drive under three conditions (no distraction, cell phone conversation, texting). During each condition, one unexpected event (e.g., car suddenly merging into driver's lane) was introduced. Setting: Driving simulator. Participants: Adolescents aged 16–17 with ADHD (n=28) and controls (n=33). Interventions/Main Exposures: Cell phone conversation, texting, and no distraction while driving. Outcome Measures: Self-report of driving history; average speed, standard deviation of speed, standard deviation of lateral position, and braking reaction time during driving simulation. Results: Adolescents with ADHD reported fewer months of driving experience and a higher proportion of driving violations than controls. After controlling for months of driving history, adolescents with ADHD demonstrated more variability in speed and lane position than controls. There were no group differences for braking reaction time. Further, texting negatively impacted the driving performance of all participants as evidenced by increased variability in speed and lane position. Conclusions: This study, one of the first to investigate distracted driving in adolescents with ADHD, adds to a growing body of literature documenting that individuals with ADHD are at increased risk for negative driving outcomes. Furthermore, texting significantly impairs the driving performance of all adolescents and increases existing driving-related impairment in adolescents with ADHD, highlighting the need for education and enforcement of regulations against texting for this age group. PMID:23939758
NASA Astrophysics Data System (ADS)
Brown-Steiner, B.; Selin, N. E.; Prinn, R. G.; Monier, E.; Garcia-Menendez, F.; Tilmes, S.; Emmons, L. K.; Lamarque, J. F.; Cameron-Smith, P. J.
2017-12-01
We summarize two methods to aid in the identification of ozone signals from underlying spatially and temporally heterogeneous data in order to help research communities avoid the sometimes burdensome computational costs of high-resolution high-complexity models. The first method utilizes simplified chemical mechanisms (a Reduced Hydrocarbon Mechanism and a Superfast Mechanism) alongside a more complex mechanism (MOZART-4) within CESM CAM-Chem to extend the number of simulated meteorological years (or add additional members to an ensemble) for a given modeling problem. The Reduced Hydrocarbon mechanism is twice as fast, and the Superfast mechanism is three times faster than the MOZART-4 mechanism. We show that simplified chemical mechanisms are largely capable of simulating surface ozone across the globe as well as the more complex chemical mechanisms, and where they are not capable, a simple standardized anomaly emulation approach can correct for their inadequacies. The second method uses strategic averaging over both temporal and spatial scales to filter out the highly heterogeneous noise that underlies ozone observations and simulations. This method allows for a selection of temporal and spatial averaging scales that match a particular signal strength (between 0.5 and 5 ppbv), and enables the identification of regions where an ozone signal can rise above the ozone noise over a given region and a given period of time. In conjunction, these two methods can be used to "scale down" chemical mechanism complexity and quantitatively determine spatial and temporal scales that could enable research communities to utilize simplified representations of atmospheric chemistry and thereby maximize their productivity and efficiency given computational constraints. While this framework is here applied to ozone data, it could also be applied to a broad range of geospatial data sets (observed or modeled) that have spatial and temporal coverage.
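The averaging-scale selection reduces to asking when the standard error of a mean falls below a target signal size; assuming uncorrelated day-to-day noise with an invented 10 ppbv standard deviation (the real analysis averages over space as well and the noise is not white):

```python
def detectable_window(signal_ppbv, daily_sigma_ppbv=10.0):
    """Smallest averaging window (in days) at which the standard error
    of the mean, sigma/sqrt(n), drops below a target signal size, i.e.
    the temporal scale where a signal of that size can rise above the
    noise. Assumes independent, identically distributed daily noise."""
    n = 1
    while daily_sigma_ppbv / n ** 0.5 >= signal_ppbv:
        n += 1
    return n

# Windows needed across the paper's 0.5-5 ppbv signal-strength range,
# under the invented 10 ppbv day-to-day noise level.
windows = {s: detectable_window(s) for s in (0.5, 1.0, 2.0, 5.0)}
```

The quadratic cost of halving the detectable signal (four times the averaging) is why matching the averaging scale to the expected signal strength matters.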
Fernandez Castelao, Ezequiel; Boos, Margarete; Ringer, Christiane; Eich, Christoph; Russo, Sebastian G
2015-07-24
Effective team leadership in cardiopulmonary resuscitation (CPR) is well recognized as a crucial factor influencing performance. Generally, leadership training focuses on task requirements for leading as well as non-leading team members. We provided crisis resource management (CRM) training only for designated team leaders of advanced life support (ALS) trained teams. This study assessed the impact of the CRM team leader training on CPR performance and team leader verbalization. Forty-five teams of four members each were randomly assigned to one of two study groups: CRM team leader training (CRM-TL) and additional ALS-training (ALS add-on). After an initial lecture and three ALS skill training tutorials (basic life support, airway management and rhythm recognition/defibrillation) of 90-min each, one member of each team was randomly assigned to act as the team leader in the upcoming CPR simulation. Team leaders of the CRM-TL groups attended a 90-min CRM-TL training. All other participants received an additional 90-min ALS skill training. A simulated CPR scenario was videotaped and analyzed regarding no-flow time (NFT) percentage, adherence to the European Resuscitation Council 2010 ALS algorithm (ADH), and type and rate of team leader verbalizations (TLV). CRM-TL teams showed shorter, albeit statistically insignificant, NFT rates compared to ALS-Add teams (mean difference 1.34 (95% CI -2.5, 5.2), p = 0.48). ADH scores in the CRM-TL group were significantly higher (difference -6.4 (95% CI -10.3, -2.4), p = 0.002). Significantly higher TLV proportions were found for the CRM-TL group: direct orders (difference -1.82 (95% CI -2.4, -1.2), p < 0.001); undirected orders (difference -1.82 (95% CI -2.8, -0.9), p < 0.001); planning (difference -0.27 (95% CI -0.5, -0.05) p = 0.018) and task assignments (difference -0.09 (95% CI -0.2, -0.01), p = 0.023). 
Training only the designated team leaders in CRM improves performance of the entire team, in particular guideline adherence and team leader behavior. Emphasis on training of team leader behavior appears to be beneficial in resuscitation and emergency medical course performance.
NASA Astrophysics Data System (ADS)
Armitage, D. Bruce
1999-02-01
This simulator was developed to help students beginning the study of gas chromatographic instruments to understand their operation. It is not meant to teach chromatographic theory. The instrument simulator is divided into 5 sections. One is for sample preparation. Another is used to manage carrier gases and choose a detector and column. The third sets the conditions for either isothermal or programmed temperature operation. A fourth section models manual injections, and the fifth is the autosampler. The operator has a choice among 6 columns of differing diameters and packing polarities and a choice of either isothermal or simple one-stage temperature programming. The simulator can be operated in either single-sample mode or as a 10-sample autosampler. The integrator has two modes of operation, a "dumb" mode in which only the retention time, area of the peak, and percentage area are listed and a "smart" mode that also lists the components' identities. The identities are obtained from a list of names and retention times created by the operator. Without this list only the percentages and areas are listed. The percentages are based on the areas obtained from the chromatogram and not on the actual percentages assigned during sample preparation. The data files for the compounds used in the simulator are ASCII files and can be edited easily to add more compounds than the 11 included with the simulator. A maximum of 10 components can be used in any one sample. Sample mixtures can be made on a percent-by-volume basis, but not by mass of sample per volume of solvent. A maximum of 30 compounds can be present in any one file, but the number of files is limited only by the operating system. (I suggest that not more than 20 compounds be used in any one file, as scrolling through large numbers of compounds is annoying to say the least.) File construction and layout are discussed in detail in the User's Manual. 
Chromatograms are generated by calculating a retention time based on the difference between the boiling point of the component and the temperature of the column. The polarity difference between the column packing and the component is also used to modify the retention time. The retention time decreases as the difference between the boiling point of the component and the temperature of the column increases, and retention time increases as the polarity of the component approaches the polarity of the column. If the temperature of the column is too low, a warning message is given and the chromatogram does not show that component. There is no "carry-over" to the next chromatogram, as might be the case for an actual instrument. Carrier-gas flow rate is fixed and is not part of the retention-time calculation. Because of this latter condition and the method used to determine retention time, this simulator is not useful for gas chromatography method development and is not intended for such use. The purpose of the simulator is to give a beginning student experience in what happens as column temperature is varied, why one might need temperature programming, why an autosampler might be useful, and the pitfalls of "smart" integrators. When students make mistakes in instrument setup with the simulator the consequences are not damaging to the simulator but might cause serious problems with a real instrument. Hardware and Software Requirements Hardware and software requirements for A GC Instrument Simulator are shown in Table 1.

Shown (right to left) are the main instrument control window and the manual injection window from A GC Instrument Simulator.
Petrides, Athena K; Bixho, Ida; Goonan, Ellen M; Bates, David W; Shaykevich, Shimon; Lipsitz, Stuart R; Landman, Adam B; Tanasijevic, Milenko J; Melanson, Stacy E F
2017-03-01
- A recent government regulation incentivizes implementation of an electronic health record (EHR) with computerized order entry and structured results display. Many institutions have also chosen to interface their EHR with their laboratory information system (LIS). - To determine the impact of an interfaced EHR-LIS on laboratory processes. - We analyzed several different processes before and after implementation of an interfaced EHR-LIS: the turnaround time, the number of stat specimens received, venipunctures per patient per day, preanalytic errors in phlebotomy, the number of add-on tests using a new electronic process, and the number of wrong test codes ordered. Data were gathered through the LIS and/or EHR. - The turnaround time for potassium and hematocrit decreased significantly (P = .047 and P = .004, respectively). The number of stat orders for potassium and hematocrit also decreased significantly, from 40% to 7% (P < .001 for both). Even though the average number of inpatient venipunctures per day increased from 1.38 to 1.62 (P < .001), the average number of preanalytic errors per month decreased from 2.24 to 0.16 per 1000 specimens (P < .001). Overall there was a 16% increase in add-on tests. The number of wrong test codes ordered was high, and it was challenging for providers to correctly order some common tests. - An interfaced EHR-LIS significantly improved within-laboratory turnaround time and decreased stat requests and preanalytic phlebotomy errors. Despite increasing the number of add-on requests, an electronic add-on process increased efficiency and improved provider satisfaction. Laboratories implementing an interfaced EHR-LIS should be cautious of its effects on test ordering and patient venipunctures per day.
Modeling interface exchange coupling: Effect on switching of granular FePt films
NASA Astrophysics Data System (ADS)
Abugri, Joseph B.; Visscher, P. B.; Su, Hao; Gupta, Subhadra
2015-07-01
To raise the areal density of magnetic recording to ~1 Tbit/in², there has been much recent work on the use of FePt granular films, because their high perpendicular anisotropy allows small grains to be stable. However, their coercivity may be higher than available write-head fields. One approach to reduce the coercivity is to heat the grain (heat-assisted magnetic recording). Another strategy is to add a soft capping layer to help nucleate switching via exchange coupling with the hard FePt grains. We have simulated a model of such a capped medium and have studied the effect of the strength of the interface exchange and thickness of hard layer and soft layer on the overall coercivity. Although the magnetization variation within such boundary layers may be complex, the net effect of the boundary can often be modeled as an infinitely thin interface characterized by an interface exchange energy density; we show how to do this consistently in a micromagnetic simulation. Although the switching behavior in the presence of exchange, magnetostatic, and external fields is quite complex, we show that by adding these fields one at a time, the main features of the M-H loop can be understood. In particular, we find that even without hard-soft interface exchange, magnetostatic coupling eliminates the zero-field kink in the loop, so that the absence of the kink does not (as has sometimes been assumed) imply exchange coupling. The computations have been done with a public-domain micromagnetics simulator that has been adapted to easily simulate arrays of grains.
NASA Adds Leap Second to Master Clock
2017-12-08
On Dec. 31, 2016, official clocks around the world will add a leap second just before midnight Coordinated Universal Time — which corresponds to 6:59:59 p.m. EST. NASA missions will also have to make the switch, including the Solar Dynamics Observatory, or SDO, which watches the sun 24/7. Clocks do this to keep in sync with Earth's rotation, which gradually slows down over time. When the dinosaurs roamed Earth, for example, our globe took only 23 hours to make a complete rotation. In space, millisecond accuracy is crucial to understanding how satellites orbit. "SDO moves about 1.9 miles every second," said Dean Pesnell, the project scientist for SDO at NASA's Goddard Space Flight Center in Greenbelt, Maryland. "So does every other object in orbit near SDO. We all have to use the same time to make sure our collision avoidance programs are accurate. So we all add a leap second to the end of 2016, delaying 2017 by one second." The leap second is also key to making sure that SDO is in sync with the Coordinated Universal Time, or UTC, used to label each of its images. SDO has a clock that counts the number of seconds since the beginning of the mission. To convert that count to UTC requires knowing just how many leap seconds have been added to Earth-bound clocks since the mission started. When the spacecraft wants to provide a time in UTC, it calls a software module that takes into consideration both the mission's second count and the number of leap seconds — and then returns a time in UTC.
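The conversion described for SDO, from an onboard second count plus accumulated leap seconds to UTC, can be sketched as follows. The mission epoch and leap-second dates below are illustrative assumptions, not SDO's actual values, and a real conversion must also handle counts that land inside a leap second:

```python
from datetime import datetime, timedelta

# Hypothetical mission epoch and leap-second insertion dates (illustrative only).
MISSION_EPOCH = datetime(2010, 2, 11, 0, 0, 0)
LEAP_DATES = [datetime(2012, 7, 1), datetime(2015, 7, 1), datetime(2017, 1, 1)]

def mission_seconds_to_utc(mission_seconds):
    """Convert an onboard second count to UTC by subtracting the leap
    seconds inserted since the mission epoch (naive sketch: the
    correction is applied once, based on the uncorrected timestamp)."""
    raw = MISSION_EPOCH + timedelta(seconds=mission_seconds)
    leaps = sum(1 for d in LEAP_DATES if d <= raw)
    return raw - timedelta(seconds=leaps)
```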
Data-Driven Anomaly Detection Performance for the Ares I-X Ground Diagnostic Prototype
NASA Technical Reports Server (NTRS)
Martin, Rodney A.; Schwabacher, Mark A.; Matthews, Bryan L.
2010-01-01
In this paper, we will assess the performance of a data-driven anomaly detection algorithm, the Inductive Monitoring System (IMS), which can be used to detect simulated Thrust Vector Control (TVC) system failures. However, the ability of IMS to detect these failures in a true operational setting may be related to the realistic nature of how they are simulated. As such, we will investigate both a low fidelity and high fidelity approach to simulating such failures, with the latter based upon the underlying physics. Furthermore, the ability of IMS to detect anomalies that were previously unknown and not previously simulated will be studied in earnest, as well as apparent deficiencies or misapplications that result from using the data-driven paradigm. Our conclusions indicate that robust detection performance of simulated failures using IMS is not appreciably affected by the use of a high fidelity simulation. However, we have found that the inclusion of a data-driven algorithm such as IMS into a suite of deployable health management technologies does add significant value.
Phase imaging using shifted wavefront sensor images.
Zhang, Zhengyun; Chen, Zhi; Rehman, Shakil; Barbastathis, George
2014-11-01
We propose a new approach to the complete retrieval of a coherent field (amplitude and phase) using the same hardware configuration as a Shack-Hartmann sensor but with two modifications: first, we add a transversally shifted measurement to resolve ambiguities in the measured phase; and second, we employ factored form descent (FFD), an inverse algorithm for coherence retrieval, with a hard rank constraint. We verified the proposed approach using both numerical simulations and experiments.
A Layered Solution for Supercomputing Storage
Grider, Gary
2018-06-13
To solve the supercomputing challenge of memory keeping up with processing speed, a team at Los Alamos National Laboratory developed two innovative memory management and storage technologies. Burst buffers peel off data onto flash memory to support the checkpoint/restart paradigm of large simulations. MarFS adds a thin software layer enabling a new tier for campaign storage (based on inexpensive, failure-prone disk drives) between disk drives and tape archives.
SimWorx: An ADA Distributed Simulation Application Framework Supporting HLA and DIS
1996-12-01
The authors emphasize that most real systems have elements of several architectural styles; these are called heterogeneous architectures. Typically... In order for frameworks to be used, understood, and maintained, Adair emphasizes they must be clearly documented. ... [Class-diagram residue: Support Category Classes; Component-Type, Max Size; Item-Type, Max-Size; Bounded Buffer; ProtectedContainer with +Get(), +Add(), +Put() operations.]
Jingjing Liang; Joseph Buongiorno; Robert A. Monserud
2006-01-01
WestProPlus is an add-in program developed to work with Microsoft Excel to simulate the growth and management of all-aged Douglas-fir–western hemlock (Pseudotsuga menziesii (Mirb.) Franco–Tsuga heterophylla (Raf.) Sarg.) stands in Oregon and Washington. Its built-in growth model was calibrated from 2,706 permanent plots in the...
NASA Astrophysics Data System (ADS)
Levene, Mark; Roussos, George
We present a new extension of Conway's game of life for two players, which we call "p2life". P2life allows one of two types of token, black or white, to inhabit a cell and adds competitive elements into the birth and survival rules of the original game. We solve the mean-field equation for p2life and determine, using simulation, that the asymptotic density of p2life approaches 0.0362.
Direct phase projection and transcranial focusing of ultrasound for brain therapy.
Pinton, Gianmarco F; Aubry, Jean-Francois; Tanter, Mickaël
2012-06-01
Ultrasound can be used to noninvasively treat the human brain with hyperthermia by focusing through the skull. To obtain an accurate focus, especially at high frequencies (>500 kHz), the phase of the transmitted wave must be modified to correct the aberrations introduced by the patient's individual skull morphology. Currently, three-dimensional finite-difference time-domain simulations are used to model a point source at the target. The outward-propagating wave crosses the measured representation of the human skull and is recorded at the therapy array transducer locations. The signal is then time reversed and experimentally transmitted back to its origin. These simulations are resource intensive and add a significant delay to treatment planning. Ray propagation is computationally efficient because it neglects diffraction and only describes two propagation parameters: the wave's direction and the phase. We propose a minimal method that is based only on the phase. The phase information is projected from the external skull surface to the array locations. This replaces computationally expensive finite-difference computations with an almost instantaneous direct phase projection calculation. For the five human skull samples considered, the phase distribution outside of the skull is shown to vary by less than λ/20 as it propagates over a 5 cm distance and the validity of phase projection is established over these propagation distances. The phase aberration introduced by the skull is characterized and is shown to have a good correspondence with skull morphology. The shape of this aberration is shown to have little variation with propagation distance. The focusing quality with the proposed phase-projection algorithm is shown to be indistinguishable from the gold-standard full finite-difference simulation. In conclusion, a spherical wave that is aberrated by the skull has a phase propagation that can be accurately described as radial, even after it has been distorted. 
By combining finite-difference simulations with a phase-projection algorithm, the time required for treatment planning is significantly reduced. The correlation length of the phase is used to validate the algorithm and it can also be used to provide guiding parameters for clinical array transducer design in terms of transducer spacing and phase error.
Simulating low-flow conditions in an arctic watershed using WaSiM
NASA Astrophysics Data System (ADS)
Daanen, R. P.; Gaedeke, A.; Liljedahl, A. K.; Arp, C. D.; Whitman, M. S.; Jones, B. M.; Cai, L.; Alexeev, V. A.
2017-12-01
The goal of this study is to identify the magnitude, timing, and duration of low-flow conditions under scenarios of summer drought throughout the 4500-km² Fish Creek watershed, which is set entirely on the Arctic Coastal Plain of northern Alaska. The hydrologic response of streams in this region to drought conditions is not well understood, but likely varies by stream size, upstream lake extent, and geologic setting. We used a physically based model, the Water Balance Simulation Model (WaSiM), to simulate river discharge, surface runoff, active layer depth, soil temperatures, water levels, groundwater levels, groundwater flow, and snow distribution. We found that 7-day low flows were strongly affected by scenarios of drought or wet conditions. The 10-year-period scenarios were generated by selecting dry or wet years from a reanalysis dataset. Starting conditions for the simulations were based on a control run with average atmospheric conditions. Connectivity of lakes with better feeding conditions for fish significantly decreased in the scenarios of both summer and winter drought. The overall memory of the hydrologic network seems to be on the order of two to three years, based on the time to reach equilibrium hydrological conditions. This suggests that lake level fluctuation and water harvest could have a long-term effect on the connectivity of lakes. Climate change could strongly affect this system, and increased future water use could add more pressure on fish populations. Snowmelt is a major component of the water balance in a typical Arctic watershed, and fish tend to migrate to their summer feeding lakes during the spring. Mid-summer periods without significant rainfall prove most limiting on fish movement, and during this time headwater lakes supply the majority of streamflow and are often the habitat destination for foraging fish.
Models that predict connectivity of these lakes to downstream networks during low-flow conditions will help identify where lake water extraction for winter water supply should be managed more conservatively. A better understanding of how these responses vary in this watershed will help guide management of fish habitat and lake water extraction in the National Petroleum Reserve - Alaska (NPR-A), where the Fish Creek watershed is located.
Trick Simulation Environment 07
NASA Technical Reports Server (NTRS)
Lin, Alexander S.; Penn, John M.
2012-01-01
The Trick Simulation Environment is a generic simulation toolkit used for constructing and running simulations. This release includes a Monte Carlo analysis simulation framework and a data analysis package. It produces all auto documentation in XML. Also, the software is capable of inserting a malfunction at any point during the simulation. Trick 07 adds variable server output options and error messaging and is capable of using and manipulating wide characters for international support. Wide character strings are available as a fundamental type for variables processed by Trick. A Trick Monte Carlo simulation uses a statistically generated, or predetermined, set of inputs to iteratively drive the simulation. Also, there is a framework in place for optimization and solution finding where developers may iteratively modify the inputs per run based on some analysis of the outputs. The data analysis package is capable of reading data from external simulation packages such as MATLAB and Octave, as well as the common comma-separated values (CSV) format used by Excel, without the use of external converters. The file formats for MATLAB and Octave were obtained from their documentation sets, and Trick maintains generic file readers for each format. XML tags store the fields in the Trick header comments. For header files, XML tags for structures and enumerations, and the members within are stored in the auto documentation. For source code files, XML tags for each function and the calling arguments are stored in the auto documentation. When a simulation is built, a top level XML file, which includes all of the header and source code XML auto documentation files, is created in the simulation directory. Trick 07 provides an XML to TeX converter. The converter reads in header and source code XML documentation files and converts the data to TeX labels and tables suitable for inclusion in TeX documents. 
A malfunction insertion capability allows users to override the value of any simulation variable, or call a malfunction job, at any time during the simulation. Users may specify conditions, use the return value of a malfunction trigger job, or manually activate a malfunction. The malfunction action may consist of executing a block of input file statements in an action block, setting simulation variable values, calling a malfunction job, or turning simulation jobs on or off.
Consensus on Learning Time Builds
ERIC Educational Resources Information Center
Gewertz, Catherine
2008-01-01
Under enormous pressure to prepare students for a successful future--and fearful that standard school hours do not offer enough time to do so--educators, policymakers, and community activists are adding more learning time to children's lives. Twenty-five years ago, the still-resonant report "A Nation at Risk" urged schools to add more time--an…
High-resolution regional climate model evaluation using variable-resolution CESM over California
NASA Astrophysics Data System (ADS)
Huang, X.; Rhoades, A.; Ullrich, P. A.; Zarzycki, C. M.
2015-12-01
Understanding the effect of climate change at regional scales remains a topic of intensive research. Though computational constraints remain a problem, high horizontal resolution is needed to represent topographic forcing, which is a significant driver of local climate variability. Although regional climate models (RCMs) have traditionally been used at these scales, variable-resolution global climate models (VRGCMs) have recently arisen as an alternative for studying regional weather and climate, allowing two-way interaction between these domains without the need for nudging. In this study, the recently developed variable-resolution option within the Community Earth System Model (CESM) is assessed for long-term regional climate modeling over California. Our variable-resolution simulations focus on relatively high resolutions for climate assessment, namely 28km and 14km regional resolution, which are much more typical for dynamically downscaled studies. For comparison with the more widely used RCM method, the Weather Research and Forecasting (WRF) model is used for simulations at 27km and 9km. All simulations use the AMIP (Atmospheric Model Intercomparison Project) protocols. The time period is from 1979-01-01 to 2005-12-31 (UTC), and year 1979 was discarded as spin-up time. The mean climatology across California's diverse climate zones, including temperature and precipitation, is analyzed and contrasted with the WRF model (as a traditional RCM), regional reanalysis, gridded observational datasets, and uniform high-resolution CESM at 0.25 degree with the finite volume (FV) dynamical core. The results show that variable-resolution CESM is competitive in representing regional climatology on both annual and seasonal time scales.
This assessment adds value to the use of VRGCMs for projecting climate change over the coming century and improves our understanding of both past and future regional climate related to fine-scale processes. It is also relevant for addressing the scale limitations of current RCMs or VRGCMs as next-generation model resolution increases to ~10km and beyond.
Sun, Tie Gang; Xiao, Rong Bo; Cai, Yun Nan; Wang, Yao Wu; Wu, Chang Guang
2016-08-01
Quantitative assessment of the urban thermal environment has become a focus for urban climate and environmental science since the concept of the urban heat island was proposed. With the continual development of spatial information and computer simulation technology, substantial progress has been made on quantitative assessment techniques and methods for the urban thermal environment. These techniques have advanced from statistical analysis of the urban-scale thermal environment based on historical weather-station data toward dynamic simulation and forecasting of the thermal environment at various scales. This study reviewed the development of ground meteorological observation, thermal infrared remote sensing, and numerical simulation. Moreover, the potential advantages and disadvantages, applicability, and development trends of these techniques were summarized, aiming to add to the fundamental knowledge needed for understanding urban thermal environment assessment and optimization.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lundstrom, B.; Shirazi, M.; Coddington, M.
2013-02-01
This poster describes a Grid Interconnection System Evaluator (GISE) that leverages hardware-in-the-loop (HIL) simulation techniques to rapidly evaluate the grid interconnection standard conformance of an ICS according to the procedures in IEEE Std 1547.1. The architecture and test sequencing of this evaluation tool, along with a set of representative ICS test results from three different photovoltaic (PV) inverters, are presented. The GISE adds to the National Renewable Energy Laboratory's (NREL) evaluation platform that now allows for rapid development of ICS control algorithms using controller HIL (CHIL) techniques, the ability to test the dc input characteristics of PV-based ICSs through the use of a PV simulator capable of simulating real-world dynamics using power HIL (PHIL), and evaluation of ICS grid interconnection conformance.
Simulating Society Transitions: Standstill, Collapse and Growth in an Evolving Network Model
Xu, Guanghua; Yang, Junjie; Li, Guoqing
2013-01-01
We developed a model society composed of various occupations that interact with each other and the environment, capable of simulating three widely recognized societal transition patterns: standstill, collapse, and growth, which are important components of societal evolutionary dynamics. Each occupation is equipped with a number of inhabitants that may randomly flow to other occupations, during which process new occupations may be created and then interact with existing ones. The total population of the society is associated with productivity, which is determined by the structure and volume of the society. We ran the model under scenarios such as parasitism, environment fluctuation, and invasion, which correspond to different driving forces of societal transition, and obtained reasonable simulation results. This work adds to our understanding of societal evolutionary dynamics and provides theoretical clues to sustainable development. PMID:24086530
Microscopic modeling of multi-lane highway traffic flow
NASA Astrophysics Data System (ADS)
Hodas, Nathan O.; Jagota, Anand
2003-12-01
We discuss a microscopic model for the study of multi-lane highway traffic flow dynamics. Each car experiences a force resulting from a combination of the desire of the driver to attain a certain velocity, aerodynamic drag, and change of the force due to car-car interactions. The model also includes multi-lane simulation capability and the ability to add and remove obstructions. We implement the model via a Java applet, which is used to simulate traffic jam formation, the effect of bottlenecks on traffic flow, and the existence of light, medium, and heavy traffic flow. The simulations also provide insight into how the properties of individual cars result in macroscopic behavior. Because the investigation of emergent characteristics is so common in physics, the study of traffic in this manner sheds new light on how the micro-to-macro transition works in general.
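The force balance described above (relaxation toward a desired velocity, aerodynamic drag, and a car-car interaction) can be sketched for a single car. The functional forms and all coefficients are illustrative assumptions, not the authors' model:

```python
def acceleration(v, v_desired, gap, v_lead, tau=2.0, c_drag=0.002, k_int=5.0):
    """Toy longitudinal model: the driver relaxes toward a desired speed,
    drag grows quadratically with speed, and a repulsive car-car term
    brakes the car when it closes in on its leader. All coefficients
    are illustrative."""
    relax = (v_desired - v) / tau                              # seek desired speed
    drag = -c_drag * v * v                                     # aerodynamic drag
    interact = -k_int * max(v - v_lead, 0.0) / max(gap, 1.0)   # brake when closing in
    return relax + drag + interact

def step(x, v, a, dt=0.1):
    """One explicit Euler step for a single car's position and velocity."""
    return x + v * dt, v + a * dt
```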
Research on a Queue Scheduling Algorithm in Wireless Communications Network
NASA Astrophysics Data System (ADS)
Yang, Wenchuan; Hu, Yuanmei; Zhou, Qiancai
This paper proposes QS-CT, a queue scheduling mechanism based on multiple access in ad hoc networks, which adds queue scheduling to the RTS-CTS-DATA multiple access protocol. By assigning different queues different scheduling mechanisms, it makes network access to the channel fairer and more effective, and greatly enhances performance. To observe the final performance of a network running QS-CT, we simulated it and compared it with MACA/C-T without QS-CT. Compared to MACA/C-T, the simulation results show that QS-CT greatly improves throughput, delay, packet loss rate, and other key indicators.
Conditioning with compound stimuli in Drosophila melanogaster in the flight simulator.
Brembs, B; Heisenberg, M
2001-08-01
Short-term memory in Drosophila melanogaster operant visual learning in the flight simulator is explored using patterns and colours as a compound stimulus. Presented together during training, the two stimuli accrue the same associative strength whether or not a prior training phase rendered one of the two stimuli a stronger predictor for the reinforcer than the other (no blocking). This result adds Drosophila to the list of other invertebrates that do not exhibit the robust vertebrate blocking phenomenon. Other forms of higher-order learning, however, were detected: a solid sensory preconditioning and a small second-order conditioning effect imply that associations between the two stimuli can be formed, even if the compound is not reinforced.
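For reference, the blocking effect these flies fail to show is the one predicted by the Rescorla-Wagner learning rule (a standard associative model, not one the paper implements): pretraining one stimulus leaves little prediction error for its compound partner to absorb. A minimal sketch:

```python
def rescorla_wagner(trials, alpha=0.3, lam=1.0):
    """Update associative strengths V per trial, where each trial is a
    tuple of stimuli presented together with the reinforcer:
    dV = alpha * (lam - sum of V over the stimuli present)."""
    V = {}
    for stimuli in trials:
        error = lam - sum(V.get(s, 0.0) for s in stimuli)
        for s in stimuli:
            V[s] = V.get(s, 0.0) + alpha * error
    return V

# Pretrain A alone, then reinforce the compound A+B: B stays weak (blocking).
blocked = rescorla_wagner([("A",)] * 20 + [("A", "B")] * 20)
# Reinforce the compound only: A and B share strength equally (no blocking).
control = rescorla_wagner([("A", "B")] * 20)
```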
Simulation of complex pharmacokinetic models in Microsoft Excel.
Meineke, Ingolf; Brockmöller, Jürgen
2007-12-01
With the arrival of powerful personal computers in the office, numerical methods are accessible to everybody. Simulation of complex processes has therefore become an indispensable tool in research and education. In this paper Microsoft EXCEL is used as a platform for a universal differential equation solver. The software is designed as an add-in aiming at a minimum of required user input to perform a given task. Four examples are included to demonstrate both the simplicity of use and the versatility of possible applications. While the layout of the program is admittedly geared to the needs of pharmacokineticists, it can be used in any field where sets of differential equations are involved. The software package is available upon request.
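A generic fixed-step solver of the kind such an add-in wraps can be sketched in a few lines. The one-compartment oral-absorption model and its rate constants below are illustrative, not one of the paper's four examples:

```python
def euler(f, y0, t0, t1, n):
    """Generic explicit Euler solver for dy/dt = f(t, y), y a list."""
    h = (t1 - t0) / n
    t, y = t0, list(y0)
    for _ in range(n):
        dy = f(t, y)
        y = [yi + h * di for yi, di in zip(y, dy)]
        t += h
    return y

# One-compartment oral-absorption PK model (illustrative rate constants):
#   dG/dt = -ka * G         (gut depot)
#   dC/dt =  ka * G - ke*C  (central compartment)
ka, ke = 1.0, 0.1
def pk(t, y):
    G, C = y
    return [-ka * G, ka * G - ke * C]

G24, C24 = euler(pk, [100.0, 0.0], 0.0, 24.0, 2400)
```

A production solver would use an adaptive-step method, but the separation of a generic stepper from a user-supplied derivative function mirrors the add-in design described above.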
USNO Scientific Colloquia - Naval Oceanography Portal
Time and Place: Add additional time prior to arriving at the colloquium for issuance of a visitor's badge.
NASA Technical Reports Server (NTRS)
Ratnayake, Nalin A.; Waggoner, Erin R.; Taylor, Brian R.
2011-01-01
The problem of parameter estimation on hybrid-wing-body aircraft is complicated by the fact that many design candidates for such aircraft involve a large number of aerodynamic control effectors that act in coplanar motion. This adds to the complexity already present in the parameter estimation problem for any aircraft with a closed-loop control system. Decorrelation of flight and simulation data must be performed in order to ascertain individual surface derivatives with any sort of mathematical confidence. Non-standard control surface configurations, such as clamshell surfaces and drag-rudder modes, further complicate the modeling task. In this paper, time-decorrelation techniques are applied to a model structure selected through stepwise regression for simulated and flight-generated lateral-directional parameter estimation data. A virtual effector model that uses mathematical abstractions to describe the multi-axis effects of clamshell surfaces is developed and applied. Comparisons are made between time history reconstructions and observed data in order to assess the accuracy of the regression model. The Cramér-Rao lower bounds of the estimated parameters are used to assess the uncertainty of the regression model relative to alternative models. Stepwise regression was found to be a useful technique for lateral-directional model design for hybrid-wing-body aircraft, as suggested by available flight data. Based on the results of this study, linear regression parameter estimation methods using abstracted effectors are expected to perform well for hybrid-wing-body aircraft properly equipped for the task.
Driving reconnection in sheared magnetic configurations with forced fluctuations
NASA Astrophysics Data System (ADS)
Pongkitiwanichakul, Peera; Makwana, Kirit D.; Ruffolo, David
2018-02-01
We investigate reconnection of magnetic field lines in sheared magnetic field configurations due to fluctuations driven by random forcing by means of numerical simulations. The simulations are performed with an incompressible, pseudo-spectral magnetohydrodynamics code in 2D where we take thick, resistively decaying, current-sheet like sheared magnetic configurations which do not reconnect spontaneously. We describe and test the forcing that is introduced in the momentum equation to drive fluctuations. It is found that the forcing does not change the rate of decay; however, it adds and removes energy faster in the presence of the magnetic shear structure compared to when it has decayed away. We observe that such a forcing can induce magnetic reconnection due to field line wandering leading to the formation of magnetic islands and O-points. These reconnecting field lines spread out as the current sheet decays with time. A semi-empirical formula is derived which reasonably explains the formation and spread of O-points. We find that reconnection spreads faster with stronger forcing and longer correlation time of forcing, while the wavenumber of forcing does not have a significant effect. When the field line wandering becomes large enough, the neighboring current sheets with opposite polarity start interacting, and then the magnetic field is rapidly annihilated. This work is useful to understand how forced fluctuations can drive reconnection in large scale current structures in space and astrophysical plasmas that are not susceptible to reconnection.
Starshade Observation Scheduling for WFIRST
NASA Astrophysics Data System (ADS)
Soto, Gabriel; Garrett, Daniel; Delacroix, Christian; Savransky, Dmitry
2018-01-01
An exoplanet direct imaging mission can employ an external starshade for starlight suppression to achieve higher contrasts and potentially higher throughput than with an internal coronagraph. This separately-launched starshade spacecraft is assumed to maintain a single, constant separation distance from the space telescope—for this study, the Wide Field Infrared Survey Telescope (WFIRST)—based on a designated inner working angle during integration times. The science yield of such a mission can be quantified using the Exoplanet Open-Source Imaging Simulator (EXOSIMS): this simulator determines the distributions of mission outcomes, such as the types and amount of exoplanet detections, based on ensembles of end-to-end simulations of the mission. This study adds a starshade class to the survey simulation module of EXOSIMS and outlines a method for efficiently determining observation schedules. The new starshade class solves boundary value problems using circular restricted three-body dynamics to find fast, high-accuracy estimates of the starshade motion while repositioning between WFIRST observations. Fuel usage dictates the mission lifetime of the starshade given its limited fuel supply and is dominated by the Δv used to reposition the starshade between the LOS of different targets; the repositioning time-of-flight is kept constant in this study. A starshade burns less fuel to reach certain target stars based on their relative projected positions on a skymap; other targets with costly transfers can be filtered out to increase the starshade mission duration. Because the initial target list can consist of nearly 2000 stars, calculating the Δv required to move the starshade to every other star on the target list would be too computationally expensive and renders running ensembles of survey simulations infeasible. 
Assuming the starshade begins its transfer at the LOS of a certain star, a Δv curve is approximated for the remaining target stars based on their right ascension or declination angle, depending on the starting and ending position of WFIRST on its halo orbit. The required Δv for a given star can be quickly interpolated and used to filter out stars in the target list.
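The interpolate-and-filter step described above can be sketched as follows. This is a toy illustration only: the Δv curve shape, the angular grid, and the fuel budget are invented values, not EXOSIMS quantities.

```python
import numpy as np

# Toy sketch: approximate the slew Delta-v to each candidate star by
# interpolating a precomputed Delta-v curve sampled over sky angle
# (right ascension, degrees), then filter out costly transfers.

def filter_targets(ra_samples, dv_samples, target_ra, dv_budget):
    """Interpolate Delta-v (m/s) at each target's right ascension (deg)
    and keep only targets whose estimated cost fits the budget."""
    dv_est = np.interp(target_ra, ra_samples, dv_samples, period=360.0)
    return dv_est, dv_est <= dv_budget

# Hypothetical Delta-v curve: transfers are cheap near the starting
# star's RA (0 deg) and grow more expensive farther away on the skymap.
ra_grid = np.linspace(0.0, 360.0, 37)
dv_grid = 50.0 + 200.0 * np.sin(np.radians(ra_grid / 2.0)) ** 2

targets = np.array([5.0, 90.0, 180.0, 355.0])
dv, keep = filter_targets(ra_grid, dv_grid, targets, dv_budget=200.0)
print(keep.tolist())  # the costly 180-deg transfer is filtered out
```

Because the curve is evaluated by interpolation rather than by solving a boundary value problem per target, the filter costs microseconds per star, which is what makes ensembles of survey simulations tractable.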
Quantum simulation from the bottom up: the case of rebits
NASA Astrophysics Data System (ADS)
Enshan Koh, Dax; Yuezhen Niu, Murphy; Yoder, Theodore J.
2018-05-01
Typically, quantum mechanics is thought of as a linear theory with unitary evolution governed by the Schrödinger equation. While this is technically true and useful for a physicist, with regard to computation it is an unfortunately narrow point of view. Just as a classical computer can simulate highly nonlinear functions of classical states, so too can the more general quantum computer simulate nonlinear evolutions of quantum states. We detail one particular simulation of nonlinearity on a quantum computer, showing how the entire class of ℝ-unitary evolutions (on n qubits) can be simulated using a unitary, real-amplitude quantum computer (consisting of n + 1 qubits in total). These operators can be represented as the sum of a linear and an antilinear operator, and add an intriguing new set of nonlinear quantum gates to the toolbox of the quantum algorithm designer. Furthermore, a subgroup of these nonlinear evolutions, called the ℝ-Cliffords, can be efficiently classically simulated, by making use of the fact that Clifford operators can simulate non-Clifford (in fact, nonlinear) operators. This perspective of using the physical operators that we have to simulate non-physical ones that we do not is what we call bottom-up simulation, and we give some examples of its broader implications.
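A small numerical illustration of the underlying idea, under simplifying assumptions (this is not the paper's full construction): a complex state can be encoded in real amplitudes by stacking its real and imaginary parts, and an antilinear map such as complex conjugation then becomes an ordinary real matrix acting on the encoded state.

```python
import numpy as np

# Encode a complex state as a real vector of twice the length.
def to_real(psi):
    """Stack real and imaginary parts into one real vector."""
    return np.concatenate([psi.real, psi.imag])

def from_real(v):
    d = v.size // 2
    return v[:d] + 1j * v[d:]

dim = 2                                  # one qubit: two complex amplitudes
# Complex conjugation K is antilinear on the complex state, but in the
# real encoding it is a plain orthogonal matrix: +I on the real block,
# -I on the imaginary block.
K = np.block([[np.eye(dim), np.zeros((dim, dim))],
              [np.zeros((dim, dim)), -np.eye(dim)]])

psi = np.array([0.6 + 0.0j, 0.0 + 0.8j])
print(from_real(K @ to_real(psi)))       # the complex conjugate of psi
```

The doubling of the real dimension is what the extra qubit provides in the real-amplitude (rebit) setting.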
Bethge, Anja; Schumacher, Udo; Wedemann, Gero
2015-10-01
Despite considerable research efforts, the process of metastasis formation is still a subject of intense discussion, and even established models differ considerably in basic details and in the conclusions drawn from them. Mathematical and computational models add a new perspective to the research as they can quantitatively investigate the processes of metastasis and the effects of treatment. However, existing models look at only one treatment option at a time. We enhanced a previously developed computer model (called CaTSiT) that enables quantitative comparison of different metastasis formation models with clinical and experimental data, to include the effects of chemotherapy, external beam radiation, radioimmunotherapy and radioembolization. CaTSiT is based on a discrete event simulation procedure. The growth of the primary tumor and its metastases is modeled by a piecewise-defined growth function that describes the growth behavior of the primary tumor and metastases during various time intervals. The piecewise-defined growth function is composed of analytical functions describing the growth behavior of the tumor based on characteristics of the tumor, such as dormancy, or the effects of various therapies. The spreading of malignant cells into the blood is modeled by intravasation events, which are generated according to a rate function. Further events in the model describe the behavior of the released malignant cells until the formation of a new metastasis. The model is published under the GNU General Public License version 3. To demonstrate the application of the computer model, a case of a patient with a hepatocellular carcinoma and multiple metastases in the liver was simulated. Besides the untreated case, different treatments were simulated at two time points: one directly after diagnosis of the primary tumor and the other several months later. Except for early applied radioimmunotherapy, no treatment strategy was able to eliminate all metastases. 
These results emphasize the importance of early diagnosis and of proceeding with treatment even if no clinically detectable metastases are present at the time of diagnosis of the primary tumor. CaTSiT could be a valuable tool for quantitative investigation of the process of tumor growth and metastasis formation, including the effects of various treatment options. Copyright © 2015 Elsevier Inc. All rights reserved.
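A minimal sketch of a piecewise-defined growth function of the kind described above. The Gompertz form, the single treatment window, and every parameter value here are illustrative assumptions, not CaTSiT's actual functions or clinical values.

```python
import math

def tumor_size(t, x0=1e6, a=0.1, b=1e11, t_rx=(30.0, 60.0), kill=0.15):
    """Tumor cell count at time t (days): Gompertz growth outside the
    treatment interval t_rx, exponential cell kill during it."""
    def gompertz(x, dt):
        # closed-form Gompertz step: x(t) = b * (x/b) ** exp(-a*dt)
        return b * (x / b) ** math.exp(-a * dt)

    t1, t2 = t_rx
    if t <= t1:                          # pre-treatment growth
        return gompertz(x0, t)
    x = gompertz(x0, t1)
    if t <= t2:                          # exponential kill during therapy
        return x * math.exp(-kill * (t - t1))
    x = x * math.exp(-kill * (t2 - t1))  # regrowth after therapy ends
    return gompertz(x, t - t2)

print(tumor_size(25.0) > tumor_size(0.0))   # growing before therapy
print(tumor_size(60.0) < tumor_size(30.0))  # shrinking during therapy
```

Stitching such analytical segments together at the treatment boundaries is what makes the growth function "piecewise-defined" while each piece stays in closed form.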
Resampling procedures to identify important SNPs using a consensus approach.
Pardy, Christopher; Motyer, Allan; Wilson, Susan
2011-11-29
Our goal is to identify common single-nucleotide polymorphisms (SNPs) (minor allele frequency > 1%) that add predictive accuracy above that gained by knowledge of easily measured clinical variables. We take an algorithmic approach to predict each phenotypic variable using a combination of phenotypic and genotypic predictors. We perform our procedure on the first simulated replicate and then validate against the others. Our procedure performs well when predicting Q1 but is less successful for the other outcomes. We use resampling procedures where possible to guard against false positives and to improve generalizability. The approach is based on finding a consensus regarding important SNPs by applying random forests and the least absolute shrinkage and selection operator (LASSO) on multiple subsamples. Random forests are used first to discard unimportant predictors, narrowing our focus to roughly 100 important SNPs. A cross-validation LASSO is then used to further select variables. We combine these procedures to guarantee that cross-validation can be used to choose a shrinkage parameter for the LASSO. If the clinical variables were unavailable, this prefiltering step would be essential. We perform the SNP-based analyses simultaneously rather than one at a time to estimate SNP effects in the presence of other causal variants. We analyzed the first simulated replicate of Genetic Analysis Workshop 17 without knowledge of the true model. Post-conference knowledge of the simulation parameters allowed us to investigate the limitations of our approach. We found that many of the false positives we identified were substantially correlated with genuine causal SNPs.
A mathematical framework for modelling cambial surface evolution using a level set method
Sellier, Damien; Plank, Michael J.; Harrington, Jonathan J.
2011-01-01
Background and Aims During their lifetime, tree stems take a series of successive nested shapes. Individual tree growth models traditionally focus on apical growth and architecture. However, cambial growth, which is distributed over a surface layer wrapping the whole organism, equally contributes to plant form and function. This study aims at providing a framework to simulate how organism shape evolves as a result of a secondary growth process that occurs at the cellular scale. Methods The development of the vascular cambium is modelled as an expanding surface using the level set method. The surface consists of multiple compartments following distinct expansion rules. Growth behaviour can be formulated as a mathematical function of surface state variables and independent variables to describe biological processes. Key Results The model was coupled to an architectural model and to a forest stand model to simulate cambium dynamics and wood formation at the scale of the organism. The model is able to simulate competition between cambia, surface irregularities and local features. Predicting the shapes associated with arbitrarily complex growth functions does not add complexity to the numerical method itself. Conclusions Despite their slenderness, it is sometimes useful to conceive of trees as expanding surfaces. The proposed mathematical framework provides a way to integrate through time and space the biological and physical mechanisms underlying cambium activity. It can be used either to test growth hypotheses or to generate detailed maps of wood internal structure. PMID:21470972
A penalty-based nodal discontinuous Galerkin method for spontaneous rupture dynamics
NASA Astrophysics Data System (ADS)
Ye, R.; De Hoop, M. V.; Kumar, K.
2017-12-01
Numerical simulation of dynamic rupture processes with slip is critical to understanding the earthquake source process and the generation of ground motions. However, it can be challenging due to the nonlinear friction laws interacting with seismicity, coupled with the discontinuous boundary conditions across the rupture plane. In practice, inhomogeneities in topography, fault geometry, elastic parameters and permeability add extra complexity. We develop a nodal discontinuous Galerkin method to simulate seismic wave phenomena with slipping boundary conditions, including fluid-solid boundaries and ruptures. By introducing a novel penalty flux, we avoid solving Riemann problems on interfaces, which makes our method applicable to general anisotropic and poro-elastic materials. Based on unstructured tetrahedral meshes in 3D, the code can capture various geometries in geological models, and uses polynomial expansion to achieve high-order accuracy. We consider the rate-and-state friction law, in the spontaneous rupture dynamics, as part of a nonlinear transmitting boundary condition, which is weakly enforced across the fault surface as a numerical flux. An iterative coupling scheme is developed based on implicit time stepping, containing a constrained optimization process that accounts for the nonlinear part. To validate the method, we prove the convergence of the coupled system with error estimates. We test our algorithm on a well-established numerical example (TPV102) of the SCEC/USGS Spontaneous Rupture Code Verification Project, and benchmark against simulations from PyLith and SPECFEM3D, with good agreement.
Synthesis of High-Speed Digital Systems.
1985-11-08
Membrane, action, and oscillatory potentials in simulated protocells
NASA Technical Reports Server (NTRS)
Syren, R. M.; Fox, S. W.; Przybylski, A. T.; Stratten, W. P.
1982-01-01
Electrical membrane potentials, oscillations, and action potentials are observed in proteinoid microspheres impaled with (3 M KCl) microelectrodes. Although effects are of greater magnitude when the vesicles contain glycerol and natural or synthetic lecithin, the results in the purely synthetic thermal protein structures are substantial, attaining 20 mV amplitude in some cases. The results add the property of electrical potential to the other known properties of proteinoid microspheres, in their role as models for protocells.
High Resolution WENO Simulation of 3D Detonation Waves
2012-02-27
pocket behind the detonation front was not observed in their results because the rotating transverse detonation completely consumed the unburned gas. Dou...three-dimensional detonations We add source terms (functions of x, y, z and t) to the PDE system so that the following functions are exact solutions to... detonation rotates counter-clockwise, opposite to that in [48]. It can be seen that, the triple lines and transverse waves collide with the walls, and strong
Höckel, David; Koch, Lars; Martin, Eugen; Benson, Oliver
2009-10-15
We describe a Fabry-Perot-based spectral filter for free-space quantum key distribution (QKD). A multipass etalon filter was built, and its performance was studied. The whole filter setup was carefully optimized to add less than 2 dB attenuation to a signal beam but block stray light by 21 dB. Simulations show that such a filter might be sufficient to allow QKD satellite downlinks during daytime with the current technology.
The Physlet Approach to Simulation Design
NASA Astrophysics Data System (ADS)
Christian, Wolfgang; Belloni, Mario; Esquembre, Francisco; Mason, Bruce A.; Barbato, Lyle; Riggsbee, Matt
2015-10-01
Over the past two years, the AAPT/ComPADRE staff and the Open Source Physics group have published the second edition of Physlet Physics and Physlet Quantum Physics, delivered as interactive web pages on AAPT/ComPADRE and as free eBooks available through iTunes and Google Play. These two websites, and their associated books, add over 1000 interactive exercises for the teaching of introductory physics, introductory and intermediate modern physics, and quantum mechanics to AAPT/ComPADRE.
NEAMS-IPL MOOSE Midyear Framework Activities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Permann, Cody; Alger, Brian; Peterson, John
The MOOSE Framework is a modular pluggable framework for building complex simulations. The ability to add new objects with custom syntax is a core capability that makes MOOSE a powerful platform for coupling multiple applications together within a single environment. The creation of a new, more standardized JSON syntax output improves the external interfaces for generating graphical components or for validating input file syntax. The design of this interface and the requirements it satisfies are covered in this short report.
Challenges and Opportunities in Propulsion Simulations
2015-09-24
• Leverage NVIDIA GPU accelerators
• Release common computational infrastructure as Distro A for collaboration
• Add physics modules as either...
Titan vs. Summit:
Interconnect: Gemini (6.4 GB/s) vs. Dual Rail EDR-IB (23 GB/s)
Interconnect topology: 3D Torus vs. Non-blocking Fat Tree
Processors: AMD Opteron™ / NVIDIA Kepler™ vs. IBM POWER9 / NVIDIA Volta™
File system: 32 PB, 1 TB/s, Lustre® vs. 120 PB, 1 TB/s, GPFS™
Peak power consumption: 9 MW vs. 10 MW
Source: R
Vadose Zone Fate and Transport Simulation of Chemicals Associated with Coal Seam Gas Extraction
NASA Astrophysics Data System (ADS)
Simunek, J.; Mallants, D.; Jacques, D.; Van Genuchten, M.
2017-12-01
The HYDRUS-1D and HYDRUS (2D/3D) computer software packages are widely used finite element models for simulating the one-dimensional and two- or three-dimensional movement of water, heat, and multiple solutes in variably saturated media, respectively. While the standard HYDRUS models consider only the fate and transport of individual solutes or solutes subject to first-order degradation reactions, several specialized HYDRUS add-on modules can simulate far more complex biogeochemical processes. The objective of this presentation is to provide an overview of the HYDRUS models and their add-on modules, and to demonstrate applications of the software to the subsurface fate and transport of chemicals involved in coal seam gas extraction and water management operations. One application uses the standard HYDRUS model to evaluate the natural soil attenuation potential of hydraulic fracturing chemicals and their transformation products in the case of an accidental release. By coupling the processes of retardation, first-order degradation and convective-dispersive transport of the biocide bronopol and its degradation products, we demonstrated how natural attenuation reduces initial concentrations by more than a factor of one hundred in the top 5 cm of the vadose zone. A second application uses the UnsatChem module to explore the possible use of coal seam gas produced water for sustainable irrigation. Simulations with different irrigation waters (untreated, amended with surface water, and reverse osmosis treated) provided detailed results regarding chemical indicators of soil and plant health, notably SAR, EC and sodium concentrations. A third application uses the coupled HYDRUS-PHREEQC module to analyze trace metal transport involving cation exchange and surface complexation sorption reactions in the vadose zone leached with coal seam gas produced water following an accidental water release scenario.
Results show that the main process responsible for trace metal migration is complexation of naturally present trace metals with inorganic ligands such as (bi)carbonate that enter the soil upon infiltration with alkaline produced water.
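The natural-attenuation reasoning above can be sketched as a back-of-the-envelope calculation: with retardation R slowing a pulse that moves at pore-water velocity v, first-order decay at rate lam acts over the travel time R·z/v. The parameter values below are invented for illustration, not the bronopol values used in the study.

```python
import math

def attenuation(z, v, R, lam):
    """Fraction of the initial concentration remaining at depth z (m),
    given pore-water velocity v (m/d), retardation factor R, and
    first-order decay rate lam (1/d)."""
    travel_time = R * z / v          # days spent reaching depth z
    return math.exp(-lam * travel_time)

# e.g. v = 0.01 m/d, R = 5, half-life 2 days -> lam = ln(2)/2
lam = math.log(2) / 2.0
frac = attenuation(z=0.05, v=0.01, R=5.0, lam=lam)
print(f"{frac:.4f}")   # fraction remaining over the top 5 cm
```

Even modest retardation and decay compound into orders-of-magnitude attenuation over a few centimetres, which is the qualitative result the HYDRUS simulations quantify with full convective-dispersive transport.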
Lehrer, Roni; Schumacher, Gijs
2018-01-01
The policy positions parties choose are central to both attracting voters and forming coalition governments. How then should parties choose positions to best represent voters? Laver and Sergenti show that in an agent-based model with boundedly rational actors a decision rule (Aggregator) that takes the mean policy position of its supporters is the best rule to achieve high congruence between voter preferences and party positions. But this result only pertains to representation by the legislature, not representation by the government. To evaluate this we add a coalition formation procedure with boundedly rational parties to the Laver and Sergenti model of party competition. We also add two new decision rules that are sensitive to government formation outcomes rather than voter positions. We develop two simulations: a single-rule one in which parties with the same rule compete and an evolutionary simulation in which parties with different rules compete. In these simulations we analyze party behavior under a large number of different parameters that describe real-world variance in political parties' motives and party system characteristics. Our most important conclusion is that Aggregators also produce the best match between government policy and voter preferences. Moreover, even though citizens often frown upon politicians' interest in the prestige and rents that come with winning political office (office pay-offs), we find that citizens actually receive better representation by the government if politicians are motivated by these office pay-offs in contrast to politicians with ideological motivations (policy pay-offs). Finally, we show that while more parties are linked to better political representation, how parties choose policy positions affects political representation as well. 
Overall, we conclude that to understand variation in the quality of political representation scholars should look beyond electoral systems and take into account variation in party behavior as well.
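A hedged one-dimensional sketch of the Aggregator decision rule from the Laver-Sergenti framework described above: each voter supports the nearest party, and an Aggregator party relocates to the mean position of its current supporters. Voter distribution, starting positions, and iteration count are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
voters = rng.normal(0.0, 1.0, size=1000)     # voter ideal points
parties = np.array([-1.5, 0.2, 1.0])         # initial party positions

for _ in range(50):
    # each voter supports the closest party
    support = np.argmin(np.abs(voters[:, None] - parties[None, :]), axis=1)
    # every party here plays Aggregator: move to the supporters' mean
    parties = np.array([voters[support == k].mean()
                        for k in range(parties.size)])

print(np.round(np.sort(parties), 2))
```

With all parties using the Aggregator rule the dynamics reduce to Lloyd-style iterations, so positions settle where each party sits at the centroid of its own electorate, which is why the rule scores well on congruence between voter preferences and party positions.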
Simulating the effects of ground-water withdrawals on streamflow in a precipitation-runoff model
Zarriello, Philip J.; Barlow, P.M.; Duda, P.B.
2004-01-01
Precipitation-runoff models are used to assess the effects of water use and management alternatives on streamflow. Often, ground-water withdrawals are a major water-use component that affect streamflow, but the ability of surface-water models to simulate ground-water withdrawals is limited. As part of a Hydrologic Simulation Program-FORTRAN (HSPF) precipitation-runoff model developed to analyze the effect of ground-water and surface-water withdrawals on streamflow in the Ipswich River in northeastern Massachusetts, an analytical technique (STRMDEPL) was developed for calculating the effects of pumped wells on streamflow. STRMDEPL is a FORTRAN program based on two analytical solutions that solve equations for ground-water flow to a well completed in a semi-infinite, homogeneous, and isotropic aquifer in direct hydraulic connection to a fully penetrating stream. One analytical method calculates unimpeded flow at the stream-aquifer boundary and the other method calculates the resistance to flow caused by semipervious streambed and streambank material. The principle of superposition is used with these analytical equations to calculate time-varying streamflow depletions due to daily pumping. The HSPF model can readily incorporate streamflow depletions caused by a well or surface-water withdrawal, or by multiple wells or surface-water withdrawals, or both, as a combined time-varying outflow demand from affected channel reaches. These demands are stored as a time series in the Watershed Data Management (WDM) file. This time-series data is read into the model as an external source used to specify flow from the first outflow gate in the reach where these withdrawals are located. Although the STRMDEPL program can be run independently of the HSPF model, an extension was developed to run this program within GenScn, a scenario generator and graphical user interface developed for use with the HSPF model. 
This extension requires that actual pumping rates for each well be stored in a unique WDM dataset identified by an attribute that associates each well with the model reach from which water is withdrawn. Other attributes identify the type and characteristics of the data. The interface allows users to easily add new pumping wells, delete existing pumping wells, or change properties of the simulated aquifer or well. Development of this application enhanced the ability of the HSPF model to simulate complex water-use conditions in the Ipswich River Basin. The STRMDEPL program and the GenScn extension provide a valuable tool for water managers to evaluate the effects of pumped wells on streamflow and to test alternative water-use scenarios. Copyright ASCE 2004.
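The analytical idea behind a STRMDEPL-style calculation can be sketched in the simplified case without streambed resistance: Glover's solution gives the fraction of the pumping rate supplied by stream depletion for a well at distance d from a fully penetrating stream, and superposition of step changes in the pumping schedule yields time-varying depletion. Aquifer values below are illustrative, not Ipswich River parameters.

```python
import math

def depletion_fraction(t, d, S, T):
    """Glover solution: fraction of the pumping rate drawn from the stream
    at time t (days), for a well at distance d (m) from the stream, with
    storativity S and transmissivity T (m^2/day)."""
    if t <= 0:
        return 0.0
    return math.erfc(d / math.sqrt(4.0 * T * t / S))

def streamflow_depletion(pump_rates, d, S, T):
    """Daily depletion (units of pump_rates) by superposing the response
    to each step change in the daily pumping schedule."""
    steps = [pump_rates[0]] + [q1 - q0
                               for q0, q1 in zip(pump_rates, pump_rates[1:])]
    return [sum(s * depletion_fraction(day - i, d, S, T)
                for i, s in enumerate(steps))
            for day in range(1, len(pump_rates) + 1)]

# well 100 m from the stream, S = 0.2, T = 500 m^2/day, 1000 m^3/day pumping
dep = streamflow_depletion([1000.0] * 10, d=100.0, S=0.2, T=500.0)
print([round(x) for x in dep])   # depletion ramps up toward the pumping rate
```

The superposition step is what lets a single analytical kernel handle arbitrary daily pumping records, which is the form the HSPF model consumes as a time-series outflow demand.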
Giner-Casares, J J; Camacho, L; Martín-Romero, M T; Cascales, J J López
2008-03-04
In this work, a DMPA Langmuir monolayer at the air/water interface was studied by molecular dynamics simulations. An atomistic picture of a Langmuir monolayer was thus drawn from its expanded gas phase to its final solid condensed one. Some properties of monolayers that were traditionally reproduced poorly, or not at all, in computer simulations, such as lipid domain formation or the pressure-area per lipid isotherm, were properly reproduced in this work. The physical laws that control lipid domain formation in the gas phase and the structure of lipid monolayers from the gas to the solid condensed phase were thus studied. Thanks to the atomistic information provided by the molecular dynamics simulations, we were able to add valuable information to the experimental description of these processes and to access data related to lipid monolayers in their expanded phase, which is difficult or impossible to study by experimental techniques. In this sense, properties such as lipid head-group hydration and lipid structure were studied.
P-8A Poseidon strategy for modeling & simulation verification validation & accreditation (VV&A)
NASA Astrophysics Data System (ADS)
Kropp, Derek L.
2009-05-01
One of the first challenges in addressing the need for Modeling & Simulation (M&S) Verification, Validation, & Accreditation (VV&A) is to develop an approach for applying structured and formalized VV&A processes. The P-8A Poseidon Multi-Mission Maritime Aircraft (MMA) Program Modeling and Simulation Accreditation Strategy documents the P-8A program's approach to VV&A. The P-8A strategy tailors a risk-based approach and leverages existing bodies of knowledge, such as the Defense Modeling and Simulation Office Recommended Practice Guide (DMSO RPG), to make the process practical and efficient. As the program progresses, the M&S team must continue to look for ways to streamline the process, add supplemental steps to enhance the process, and identify and overcome procedural, organizational, and cultural challenges. This paper includes some of the basics of the overall strategy, examples of specific approaches that have worked well, and examples of challenges that the M&S team has faced.
Modeling absolute plate and plume motions
NASA Astrophysics Data System (ADS)
Bodinier, G. P.; Wessel, P.; Conrad, C. P.
2016-12-01
Paleomagnetic evidence for plume drift has made modeling of absolute plate motions challenging, especially since direct observations of plume drift are lacking. Predictions of plume drift arising from mantle convection models and broadly satisfying observed paleolatitudes have so far provided the only framework for deriving absolute plate motions over moving hotspots. However, uncertainties in mantle rheology, temperature, and initial conditions make such models nonunique. Using simulated and real data, we will show that age progressions along Pacific hotspot trails provide strong constraints on plume motions for all major trails, and furthermore that it is possible to derive models for relative plume drift from these data alone. Relative plume drift depends on the inter-hotspot distances derived from age progressions but lacks a fixed reference point and orientation. By incorporating paleolatitude histories for the Hawaii and Louisville chains we add further constraints on allowable plume motions, yet one unknown parameter remains: a longitude shift that applies equally to all plumes. To obtain a solution we could restrict either the Hawaii or Louisville plume to have latitudinal motion only, thus satisfying paleolatitude constraints. Yet, restricting one plume to latitudinal motion while all others move freely is not realistic. Consequently, it is only possible to resolve the motion of hotspots relative to an overall and unknown longitudinal shift as a function of time. Our plate motions are therefore dependent on the same shift via an unknown rotation about the north pole. Yet, as plume drifts are consequences of mantle convection, our results place strong constraints on the pattern of convection. 
Other considerations, such as imposed limits on plate speed, plume speed, proximity to LLSVP edges, model smoothness, or relative plate motions via ridge-spotting may add further constraints that allow a unique model of Pacific absolute plate and plume motions to be inferred. Our modeling suggests that the acquisition of new age and paleomagnetic data from hotspot trails where data are lacking would add valuable constraints on both plume and plate motions. At present, the limiting factor is inconsistencies between paleomagnetic, geometric, and chronologic data, leading to large uncertainties in the results.
Remote visualization and scale analysis of large turbulence datasets
NASA Astrophysics Data System (ADS)
Livescu, D.; Pulido, J.; Burns, R.; Canada, C.; Ahrens, J.; Hamann, B.
2015-12-01
Accurate simulations of turbulent flows require solving all the dynamically relevant scales of motions. This technique, called Direct Numerical Simulation, has been successfully applied to a variety of simple flows; however, the large-scale flows encountered in Geophysical Fluid Dynamics (GFD) would require meshes outside the range of the most powerful supercomputers for the foreseeable future. Nevertheless, the current generation of petascale computers has enabled unprecedented simulations of many types of turbulent flows which focus on various GFD aspects, from the idealized configurations extensively studied in the past to more complex flows closer to the practical applications. The pace at which such simulations are performed only continues to increase; however, the simulations themselves are restricted to a small number of groups with access to large computational platforms. Yet the petabytes of turbulence data offer almost limitless information on many different aspects of the flow, from the hierarchy of turbulence moments, spectra and correlations, to structure-functions, geometrical properties, etc. The ability to share such datasets with other groups can significantly reduce the time to analyze the data, help the creative process and increase the pace of discovery. Using the largest DOE supercomputing platforms, we have performed some of the biggest turbulence simulations to date, in various configurations, addressing specific aspects of turbulence production and mixing mechanisms. Until recently, the visualization and analysis of such datasets was restricted by access to large supercomputers. The public Johns Hopkins Turbulence database simplifies the access to multi-Terabyte turbulence datasets and facilitates turbulence analysis through the use of commodity hardware. 
First, one of our datasets, which is part of the database, will be described and then a framework that adds high-speed visualization and wavelet support for multi-resolution analysis of turbulence will be highlighted. The addition of wavelet support reduces the latency and bandwidth requirements for visualization, allowing for many concurrent users, and enables new types of analyses, including scale decomposition and coherent feature extraction.
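A minimal single-level Haar transform gives the flavor of the wavelet-based scale decomposition mentioned above; this is a generic sketch, not the database's actual implementation. The signal splits into coarse averages and detail coefficients, so a coarse view needs only half the data, which is the mechanism behind the reduced latency and bandwidth.

```python
import numpy as np

def haar_step(x):
    """One level of the orthonormal Haar transform on an even-length signal."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # large-scale content
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # small-scale content
    return approx, detail

def haar_inverse(approx, detail):
    """Exactly invert haar_step."""
    x = np.empty(approx.size * 2)
    x[0::2] = (approx + detail) / np.sqrt(2.0)
    x[1::2] = (approx - detail) / np.sqrt(2.0)
    return x

sig = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
a, d = haar_step(sig)
print(np.allclose(haar_inverse(a, d), sig))   # perfect reconstruction
```

Applying the step recursively to the approximation coefficients yields the multi-resolution hierarchy used for scale decomposition and progressive visualization.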
NASA Technical Reports Server (NTRS)
Batten, Adam; Dunlop, John; Edwards, Graeme; Farmer, Tony; Gaffney, Bruce; Hedley, Mark; Hoschke, Nigel; Isaacs, Peter; Johnson, Mark; Lewis, Chris;
2009-01-01
This report describes the second phase of the implementation of the Concept Demonstrator experimental test-bed system containing sensors and processing hardware distributed throughout the structure, which uses multi-agent algorithms to characterize impacts and determine a suitable response to these impacts. This report expands and adds to the report of the first phase implementation. The current status of the system hardware is that all 192 physical cells (32 on each of the 6 hexagonal prism faces) have been constructed, although only four of these presently contain data-acquisition sub-modules to allow them to acquire sensor data. Impact detection, location, and severity have been successfully demonstrated. The software modules for simulating cells and controlling the test-bed are fully operational, although additional functionality will be added over time. The visualization workstation displays additional diagnostic information about the array of cells (both real and simulated) and additional damage information. Local agent algorithms have been developed that demonstrate emergent behavior of the complex multi-agent system, through the formation of impact damage boundaries and impact networks. The system has been shown to operate well for multiple impacts and to demonstrate robust reconfiguration in the presence of damage to numbers of cells.
Feasibility of mercury removal from simulated flue gas by activated chars made from poultry manures.
Klasson, K Thomas; Lima, Isabel M; Boihem, Larry L; Wartelle, Lynda H
2010-12-01
Increased emphasis on reduction of mercury emissions from coal fired electric power plants has resulted in environmental regulations that may in the future require application of activated carbons as mercury sorbents for mercury removal. At the same time, the quantity of poultry manure generated each year is large and technologies that take advantage of the material should be explored. The purpose of the work was to obtain preliminary data to investigate if activated chars made from different poultry manures could adsorb mercury from simulated flue gas. In laboratory experiments, activated chars made from chicken cake and litter removed mercury from the gas as well as a commercial alternative. It was also found that acid-washing these chars after activation may improve pore structure but does not influence the mercury removal efficiency. Activated chars were also made from turkey cake and litter. These raw materials produced activated chars with similar pore structure as those made from chicken manure, but they did not adsorb mercury as well. Acid-washing the turkey manure-based chars improved their performance, but this step would add to the cost of production. Preliminary evaluations suggest that unwashed activated chars may cost as little as $0.95/kg to produce. Published by Elsevier Ltd.
Modeling the effect of exogenous melatonin on the sleep-wake switch.
Johnson, Nicholas; Jain, Gauray; Sandberg, Lianne; Sheets, Kevin
2012-01-01
According to the Centers for Disease Control and Prevention and the Institute of Medicine of the National Academies, insufficient sleep has become a public health epidemic. Approximately 50-70 million adults (20 years or older) suffer from some disorder of sleep and wakefulness, hindering daily functioning and adversely affecting health and longevity. Melatonin, a naturally produced hormone which plays a role in sleep-wake regulation, is currently offered as an over-the-counter sleep aid. However, the effects of melatonin on the sleep-wake cycle are incompletely understood. The goal of this modeling study was to incorporate the effects of exogenous melatonin administration into a mathematical model of the human sleep-wake switch. The model developed herein adds a simple kinetic model of the MT1 melatonin receptor to an existing model which simulates the interactions of different neuronal groups thought to be involved in sleep-wake regulation. Preliminary results were obtained by simulating the effects of an exogenous melatonin dose typical of over-the-counter sleep aids. The model predicted an increase in homeostatic sleep drive and a resulting alteration in circadian rhythm consistent with experimental results. The time of melatonin administration was also observed to have a strong influence on the sleep-wake effects elicited, which is also consistent with prior experimental findings.
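The receptor-kinetics idea described above can be sketched with a toy model: exogenous melatonin is cleared exponentially while binding to a fixed MT1 receptor pool. All rate constants below are hypothetical placeholders for illustration, not the parameters of the study's model:

```python
import math

# Hypothetical rate constants for illustration only; the study's actual
# MT1 parameters are not reproduced here.
k_on, k_off = 0.5, 0.1   # binding / unbinding rates (1/h, arbitrary units)
k_el = 0.7               # first-order elimination of exogenous melatonin (1/h)
M0, R_total = 1.0, 1.0   # initial dose level and total receptor pool (a.u.)

def simulate(hours, dt=0.01):
    """Euler-integrate fractional MT1 receptor occupancy while an
    exogenous melatonin dose is cleared exponentially."""
    bound, t = 0.0, 0.0
    occupancy = []
    while t < hours:
        melatonin = M0 * math.exp(-k_el * t)
        # Mass-action binding to the unoccupied receptor fraction,
        # minus first-order unbinding.
        dB = k_on * melatonin * (R_total - bound) - k_off * bound
        bound += dB * dt
        t += dt
        occupancy.append(bound)
    return occupancy

occ = simulate(12.0)
peak = max(occ)
```

Shifting the dose time in such a model simply shifts when occupancy peaks, which is one simple way to see why administration time matters for the downstream sleep-drive effects.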
Dórea, Fernanda C.; McEwen, Beverly J.; McNab, W. Bruce; Revie, Crawford W.; Sanchez, Javier
2013-01-01
Diagnostic test orders to an animal laboratory were explored as a data source for monitoring trends in the incidence of clinical syndromes in cattle. Four years of real data and over 200 simulated outbreak signals were used to compare pre-processing methods that could remove temporal effects in the data, as well as temporal aberration detection algorithms that provided high sensitivity and specificity. Weekly differencing demonstrated solid performance in removing day-of-week effects, even in series with low daily counts. For aberration detection, the results indicated that no single algorithm showed performance superior to all others across the range of outbreak scenarios simulated. Exponentially weighted moving average charts and Holt–Winters exponential smoothing demonstrated complementary performance, with the latter offering an automated method to adjust to changes in the time series that will likely occur in the future. Shewhart charts provided lower sensitivity but earlier detection in some scenarios. Cumulative sum charts did not appear to add value to the system; however, the poor performance of this algorithm was attributed to characteristics of the data monitored. These findings indicate that automated monitoring aimed at early detection of temporal aberrations will likely be most effective when a range of algorithms are implemented in parallel. PMID:23576782
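One of the aberration-detection algorithms compared above, the exponentially weighted moving average (EWMA) chart, can be sketched as follows on a deterministic toy series with an injected outbreak (this is a generic textbook EWMA, not the study's tuned implementation):

```python
import numpy as np

def ewma_alarms(counts, lam=0.3, k=3.0, baseline=30):
    """Flag days where an exponentially weighted moving average of
    daily counts exceeds mean + k sigma of a clean baseline window."""
    counts = np.asarray(counts, dtype=float)
    mu, sigma = counts[:baseline].mean(), counts[:baseline].std(ddof=1)
    # The EWMA's asymptotic variance shrinks by lam / (2 - lam).
    limit = mu + k * sigma * np.sqrt(lam / (2.0 - lam))
    z, alarms = mu, []
    for i, c in enumerate(counts):
        z = lam * c + (1.0 - lam) * z
        if i >= baseline and z > limit:
            alarms.append(i)
    return alarms

# Deterministic toy series: a stable baseline, then a simulated outbreak.
series = [20, 21, 19, 20, 22, 18, 20, 21, 19, 20] * 3  # 30 baseline days
series += [20, 21, 35, 40, 42, 38, 21, 20]             # spike on days 32-35
alarms = ewma_alarms(series)
```

Running several such detectors in parallel, as the abstract recommends, amounts to unioning (or voting over) their alarm sets.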
Uzayisenga, Viviane; Lin, Xiao-Dong; Li, Li-Mei; Anema, Jason R; Yang, Zhi-Lin; Huang, Yi-Fan; Lin, Hai-Xin; Li, Song-Bo; Li, Jian-Feng; Tian, Zhong-Qun
2012-06-19
Au-seed Ag-growth nanoparticles of controllable diameter (50-100 nm), and having an ultrathin SiO(2) shell of controllable thickness (2-3 nm), were prepared for shell-isolated nanoparticle-enhanced Raman spectroscopy (SHINERS). Their morphological, optical, and material properties were characterized; and their potential for use as a versatile Raman signal amplifier was investigated experimentally using pyridine as a probe molecule and theoretically by the three-dimensional finite-difference time-domain (3D-FDTD) method. We show that a SiO(2) shell as thin as 2 nm can be synthesized pinhole-free on the Ag surface of a nanoparticle, which then becomes the core. The dielectric SiO(2) shell serves to isolate the Raman-signal enhancing core and prevent it from interfering with the system under study. The SiO(2) shell also hinders oxidation of the Ag surface and nanoparticle aggregation. It significantly improves the stability and reproducibility of surface-enhanced Raman scattering (SERS) signal intensity, which is essential for SERS applications. Our 3D-FDTD simulations show that Ag-core SHINERS nanoparticles yield at least 2 orders of magnitude greater enhancement than Au-core ones when excited with green light on a smooth Ag surface, and thus add to the versatility of our SHINERS method.
Particulate matter exposure of bicycle path users in a high-altitude city
NASA Astrophysics Data System (ADS)
Fajardo, Oscar A.; Rojas, Nestor Y.
2012-01-01
It is necessary to evaluate cyclists' exposure to particulate matter and to determine whether they are at higher risk due to their increased breathing rate and their exposure to freshly emitted pollutants. The aim of this pilot study was to determine cyclists' exposure to PM10 in a highly polluted, high-altitude city such as Bogotá, and to comment on the appropriateness of building bicycle paths alongside roads with heavy traffic in third world cities. A total of 29 particulate matter (PM10) measurements, taken at two sampling sites using Harvard impactors, were used for estimating the exposure of users of the 80th Street bicycle path to this pollutant. The PM10 dose could be considered high, especially due to high concentrations and cyclists' increased inhalation rates. A random survey was conducted of 73 bicycle path users to determine cyclists' time, distance and speed on the bicycle path on a daily and weekly basis, their level of effort when cycling, and general characteristics such as this population's gender and age. Based on this information, the PM10 average daily dose (ADDc) for different bicycle path users and the ratio between ADDc and a reference dose (ADDr) for people at rest exposed to an indoor concentration of 25 μg m⁻³ were estimated. The average increase in ADD was 6%-9% when riding with light effort and 12%-18% when riding with moderate effort. The most enthusiastic bicycle path users showed ADDc/ADDr ratios as high as 1.30 when riding with light effort and 1.64 when riding with moderate effort, thereby significantly increasing their PM10 exposure-associated health risks.
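The dose-ratio calculation described above can be sketched as simple arithmetic. All concentrations, inhalation rates, and durations below are hypothetical illustration values, not the study's measurements (only the 25 μg/m³ resting reference comes from the text):

```python
def average_daily_dose(conc_ug_m3, inhalation_m3_h, hours_per_day,
                       body_weight_kg=70.0):
    """Average daily dose (ug per kg body weight per day) from breathing
    air at a given PM10 concentration."""
    return conc_ug_m3 * inhalation_m3_h * hours_per_day / body_weight_kg

# Reference: a person at rest all day indoors at 25 ug/m3.
add_r = average_daily_dose(25.0, 0.5, 24.0)

# Cyclist (hypothetical numbers): 1 hour on a polluted path at elevated
# ventilation, plus 23 hours at the resting reference exposure.
add_c = (average_daily_dose(120.0, 2.0, 1.0)
         + average_daily_dose(25.0, 0.5, 23.0))
ratio = add_c / add_r
```

The ratio exceeds 1 because both the concentration and the inhalation rate are higher during the ride, which is exactly the mechanism the abstract quantifies.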
Psychometric properties and norms of the German ABC-Community and PAS-ADD Checklist.
Zeilinger, Elisabeth L; Weber, Germain; Haveman, Meindert J
2011-01-01
The aim of the present study was to standardize and generate psychometric evidence for the German-language versions of two well-established English-language mental health instruments: the Aberrant Behavior Checklist-Community (ABC-C) and the Psychiatric Assessment Schedule for Adults with Developmental Disabilities (PAS-ADD) Checklist. New methods in this field were introduced: a simulation method for testing the factor structure and an exploration of long-term stability over two years. The checklists were both administered to a representative sample of 270 individuals with intellectual disability (ID) and, two years later in a second data collection, to 128 participants of the original sample. Principal component analysis and parallel analysis were performed. Reliability measures, long-term stability, subscale intercorrelations, as well as standardized norms were generated. Prevalence of mental health problems was examined. Psychometric properties were mostly excellent, with long-term stability showing moderate to strong effects. The original factor structure of the ABC-C was replicated. The PAS-ADD Checklist produced a similar, though not identical, structure compared with findings from English-language samples. The overall prevalence rate of mental health problems in the sample was about 20%. Considering the good results on the measured psychometric properties, the two checklists are recommended for the early detection of mental health problems in persons with ID. Copyright © 2011 Elsevier Ltd. All rights reserved.
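The parallel analysis mentioned above can be sketched in its generic Horn form: components are retained only while their observed eigenvalues exceed the mean eigenvalues of random data of the same shape. This is a minimal illustration on synthetic data, not the study's actual analysis:

```python
import numpy as np

def parallel_analysis(data, n_sims=50, seed=0):
    """Horn's parallel analysis: count the components whose observed
    correlation-matrix eigenvalues exceed the mean eigenvalues obtained
    from random normal data of the same shape."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    rand = np.zeros(p)
    for _ in range(n_sims):
        r = rng.standard_normal((n, p))
        rand += np.sort(np.linalg.eigvalsh(np.corrcoef(r, rowvar=False)))[::-1]
    rand /= n_sims
    return int(np.sum(obs > rand))

# Toy data: 200 respondents, 6 items, where the first three items share
# one strong latent factor.
rng = np.random.default_rng(1)
factor = rng.standard_normal((200, 1))
data = rng.standard_normal((200, 6)) * 0.5
data[:, :3] += factor
n_keep = parallel_analysis(data)
```

Compared with the naive "eigenvalue greater than 1" rule, this random-data comparison is less prone to over-extracting components from noisy checklist data.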
NASA Technical Reports Server (NTRS)
Stephens, Chad; Kennedy, Kellie; Napoli, Nicholas; Demas, Matthew; Barnes, Laura; Crook, Brenda; Williams, Ralph; Last, Mary Carolyn; Schutte, Paul
2017-01-01
Human-autonomous systems have the potential to mitigate pilot cognitive impairment and improve aviation safety. A research team at NASA Langley conducted an experiment to study the impact of mild normobaric hypoxia induction on aircraft pilot performance and psychophysiological state. A within-subjects design involved non-hypoxic and hypoxic exposures while performing three 10-minute tasks. Results indicated that the effect of 15,000 feet simulated altitude did not induce a significant performance decrement but did produce an increase in perceived workload. Analyses of psychophysiological responses evince the potential of biomarkers for hypoxia onset. This study represents ongoing work at NASA intending to add to the current knowledge of psychophysiologically-based input to automation to increase aviation safety. Analyses involving coupling across physiological systems and wavelet transforms of cortical activity revealed patterns that can discern between the simulated altitude conditions. Specifically, multivariate entropy of ECG/respiration components was found to be a significant predictor (p < 0.02) of hypoxia. Furthermore, in EEG, there was a significant decrease in mid-level beta (15.19-18.37 Hz) during the hypoxic condition in thirteen of sixteen sites across the scalp. The potential for identifying shifts in underlying cortical and physiological systems could serve as a means to identify the onset of a deteriorated cognitive state. Enabling such assessment in future flight decks could permit increasingly autonomous systems-supported operations. Augmenting the human operator through assessment of cognitive impairment has the potential to further improve operator performance and mitigate human error in safety-critical contexts.
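The kind of band-limited EEG measure reported above (power in the 15.19-18.37 Hz beta band) can be sketched with a plain periodogram on a synthetic trace; the study's actual preprocessing and statistics are not reproduced here:

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Power in a frequency band, estimated from the one-sided FFT
    periodogram of a signal sampled at fs Hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return psd[mask].sum()

# Synthetic 4-second "EEG" trace at 256 Hz: a 17 Hz beta component
# (amplitude 1) plus a 10 Hz alpha component (amplitude 2).
fs = 256
t = np.arange(0, 4, 1.0 / fs)
eeg = 1.0 * np.sin(2 * np.pi * 17 * t) + 2.0 * np.sin(2 * np.pi * 10 * t)
beta = band_power(eeg, fs, 15.19, 18.37)
alpha = band_power(eeg, fs, 8.0, 12.0)
```

Tracking such band powers per electrode over time is one simple way a flight-deck system could watch for the beta-band decrease the study associates with hypoxia.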
NASA Astrophysics Data System (ADS)
Vrac, Mathieu
2018-06-01
Climate simulations often suffer from statistical biases with respect to observations or reanalyses. It is therefore common to correct (or adjust) those simulations before using them as inputs into impact models. However, most bias correction (BC) methods are univariate and so do not account for the statistical dependences linking the different locations and/or physical variables of interest. In addition, they are often deterministic, and stochasticity is frequently needed to investigate climate uncertainty and to add constrained randomness to climate simulations that do not possess a realistic variability. This study presents a multivariate method of rank resampling for distributions and dependences (R2D2) bias correction allowing one to adjust not only the univariate distributions but also their inter-variable and inter-site dependence structures. Moreover, the proposed R2D2 method provides some stochasticity since it can generate as many multivariate corrected outputs as the number of statistical dimensions (i.e., number of grid cells × number of climate variables) of the simulations to be corrected. It is based on an assumption of stability in time of the dependence structure - making it possible to deal with a high number of statistical dimensions - that lets the climate model drive the temporal properties and their changes in time. R2D2 is applied on temperature and precipitation reanalysis time series with respect to high-resolution reference data over the southeast of France (1506 grid cells). Bivariate, 1506-dimensional and 3012-dimensional versions of R2D2 are tested over a historical period and compared to a univariate BC. How the different BC methods behave in a climate change context is also illustrated with an application to regional climate simulations over the 2071-2100 period.
The results indicate that the 1d-BC basically reproduces the climate model multivariate properties, 2d-R2D2 is only satisfying in the inter-variable context, 1506d-R2D2 strongly improves inter-site properties and 3012d-R2D2 is able to account for both. Applications of the proposed R2D2 method to various climate datasets are relevant for many impact studies. The perspectives of improvements are numerous, such as introducing stochasticity in the dependence itself, questioning its stability assumption, and accounting for temporal properties adjustment while including more physics in the adjustment procedures.
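The rank-resampling core of such methods can be sketched as a Schaake-shuffle-like step: univariately corrected values are reordered so their ranks follow a dependence template. This is only the basic reordering idea, not the full R2D2 procedure:

```python
import numpy as np

def rank_resample(corrected_marginals, dependence_template):
    """Reorder univariately bias-corrected values (per column) so that
    their ranks follow a dependence template; a Schaake-shuffle-like
    step, far simpler than the complete R2D2 method."""
    corrected = np.sort(np.asarray(corrected_marginals, dtype=float), axis=0)
    # argsort of argsort gives the rank of each template entry.
    ranks = np.argsort(np.argsort(dependence_template, axis=0), axis=0)
    return np.take_along_axis(corrected, ranks, axis=0)

# Two variables, five time steps: the template's rank structure is
# imposed on independently corrected marginal values.
template = np.array([[3.0, 0.2], [1.0, 0.9], [2.0, 0.1],
                     [5.0, 0.5], [4.0, 0.7]])
marginals = np.array([[10.0, 0.0], [30.0, 4.0], [20.0, 2.0],
                      [50.0, 1.0], [40.0, 3.0]])
out = rank_resample(marginals, template)
```

The output keeps exactly the corrected marginal values (the adjusted distributions) while its rank ordering, and hence the inter-variable dependence, matches the template.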
Learning outcomes in a simulation game for associate degree nursing students.
Clark-C
1977-01-01
Learning outcomes of a simulation game designed to have one-to-one correspondence between behavioral objectives and game plays are reported. The behavioral objectives were core concepts in psychiatric mental health nursing taught to associate degree nursing students. The decision to use the simulation game method grew out of difficulties inherent in the community college nursing program, as well as the need for self-paced, efficient, learner-centered learning and evaluative tools. After the trial and revision of the game, a number of research hypotheses were tested. Simulation gaming was found to be an effective mode of learning, and students who acted as teachers for other students learned significantly more than those who were taught. Some of the recommendations for further research were to study varied nursing populations, to add a control group, to test the long-range learning effects of playing the game, to decrease experimenter bias, to study transfer of learning to actual nurse-patient situations and changes in attitudes toward psychiatric patients, and to develop more simulation games for nursing education.
Voss, Clifford I.; Provost, A.M.
2002-01-01
SUTRA (Saturated-Unsaturated Transport) is a computer program that simulates fluid movement and the transport of either energy or dissolved substances in a subsurface environment. This upgraded version of SUTRA adds the capability for three-dimensional simulation to the former code (Voss, 1984), which allowed only two-dimensional simulation. The code employs a two- or three-dimensional finite-element and finite-difference method to approximate the governing equations that describe the two interdependent processes that are simulated: 1) fluid density-dependent saturated or unsaturated ground-water flow; and 2) either (a) transport of a solute in the ground water, in which the solute may be subject to: equilibrium adsorption on the porous matrix, and both first-order and zero-order production or decay; or (b) transport of thermal energy in the ground water and solid matrix of the aquifer. SUTRA may also be used to simulate simpler subsets of the above processes. A flow-direction-dependent dispersion process for anisotropic media is also provided by the code and is introduced in this report. As the primary calculated result, SUTRA provides fluid pressures and either solute concentrations or temperatures, as they vary with time, everywhere in the simulated subsurface system. SUTRA flow simulation may be employed for two-dimensional (2D) areal, cross sectional and three-dimensional (3D) modeling of saturated ground-water flow systems, and for cross sectional and 3D modeling of unsaturated zone flow. Solute-transport simulation using SUTRA may be employed to model natural or man-induced chemical-species transport including processes of solute sorption, production, and decay. For example, it may be applied to analyze ground-water contaminant transport problems and aquifer restoration designs. 
In addition, solute-transport simulation with SUTRA may be used for modeling of variable-density leachate movement, and for cross sectional modeling of saltwater intrusion in aquifers at near-well or regional scales, with either dispersed or relatively sharp transition zones between freshwater and saltwater. SUTRA energy-transport simulation may be employed to model thermal regimes in aquifers, subsurface heat conduction, aquifer thermal-energy storage systems, geothermal reservoirs, thermal pollution of aquifers, and natural hydrogeologic convection systems. Mesh construction, which is quite flexible for arbitrary geometries, employs quadrilateral finite elements in 2D Cartesian or radial-cylindrical coordinate systems, and hexahedral finite elements in 3D systems. 3D meshes are currently restricted to be logically rectangular; in other words, they are similar to deformable finite-difference-style grids. Permeabilities may be anisotropic and may vary in both direction and magnitude throughout the system, as may most other aquifer and fluid properties. Boundary conditions, sources and sinks may be time dependent. A number of input data checks are made to verify the input data set. An option is available for storing intermediate results and restarting a simulation at the intermediate time. Output options include fluid velocities, fluid mass and solute mass or energy budgets, and time-varying observations at points in the system. Both the mathematical basis for SUTRA and the program structure are highly general, and are modularized to allow for straightforward addition of new methods or processes to the simulation. The FORTRAN-90 coding stresses clarity and modularity rather than efficiency, providing easy access for later modifications.
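The flow equations that codes like SUTRA approximate can be illustrated, in drastically reduced form, by a one-dimensional explicit finite-difference solver for transient head diffusion. This toy sketch bears no relation to SUTRA's actual finite-element machinery and exists only to show the discretization idea:

```python
import numpy as np

def diffuse_head(h, D, dt, dx, steps):
    """Explicit finite-difference stepping of 1-D transient ground-water
    flow (a diffusion equation in hydraulic head) with fixed-head
    boundaries at both ends."""
    h = np.asarray(h, dtype=float).copy()
    r = D * dt / dx**2
    assert r <= 0.5, "explicit scheme stability limit"
    for _ in range(steps):
        # Interior update; h[0] and h[-1] act as fixed-head boundaries.
        h[1:-1] += r * (h[2:] - 2.0 * h[1:-1] + h[:-2])
    return h

# Head fixed at 10 m on the left boundary and 0 m on the right; the
# interior relaxes toward the linear steady-state profile.
h0 = np.zeros(11)
h0[0] = 10.0
h = diffuse_head(h0, D=1.0, dt=0.4, dx=1.0, steps=2000)
```

Real simulators add variable density, unsaturated storage, anisotropic permeability, and solute or energy transport on unstructured meshes, but the time-stepped approximation of a governing PDE is the common core.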
NASA Astrophysics Data System (ADS)
Frigenti, G.; Arjmand, M.; Barucci, A.; Baldini, F.; Berneschi, S.; Farnesi, D.; Gianfreda, M.; Pelli, S.; Soria, S.; Aray, A.; Dumeige, Y.; Féron, P.; Nunzi Conti, G.
2018-06-01
An original method able to fully characterize high-Q resonators in an add-drop configuration has been implemented. The method is based on the study of two cavity ringdown (CRD) signals, which are produced at the transmission and drop ports by wavelength sweeping a resonance in a time interval comparable with the photon cavity lifetime. All the resonator parameters can be assessed with a single set of simultaneous measurements. We first developed a model describing the two CRD output signals and a fitting program able to deduce the key parameters from the measured profiles. We successfully validated the model with an experiment based on a fiber ring resonator of known characteristics. Finally, we characterized a high-Q, home-made, MgF2 whispering gallery mode disk resonator in the add-drop configuration, assessing its intrinsic and coupling parameters.
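The basic quantity behind any cavity-ringdown measurement is the photon lifetime extracted from an exponentially decaying signal. The sketch below shows only that single-exponential idea on synthetic data; the paper's model, which jointly fits the transmission- and drop-port CRD profiles under wavelength sweeping, is considerably richer:

```python
import numpy as np

def ringdown_lifetime(t, signal):
    """Estimate the photon cavity lifetime tau from a decaying ringdown
    trace via a linear least-squares fit to log(intensity)."""
    slope, _ = np.polyfit(t, np.log(signal), 1)   # slope = -1/tau
    return -1.0 / slope

# Synthetic ringdown trace with a 2-microsecond lifetime.
tau_true = 2e-6
t = np.linspace(0.0, 10e-6, 200)
trace = np.exp(-t / tau_true)
tau_est = ringdown_lifetime(t, trace)
```

From the loaded lifetime and the known resonance frequency one can then derive the loaded Q, and with a second measurement (as in the add-drop scheme above) separate intrinsic from coupling losses.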
Thermal Characterization of a Simulated Fission Engine via Distributed Fiber Bragg Gratings
NASA Astrophysics Data System (ADS)
Duncan, Roger G.; Fielder, Robert S.; Seeley, Ryan J.; Kozikowski, Carrie L.; Raum, Matthew T.
2005-02-01
We report the use of distributed fiber Bragg gratings to monitor thermal conditions within a simulated nuclear reactor core located at the Early Flight Fission Test Facility of the NASA Marshall Space Flight Center. Distributed fiber-optic temperature measurements promise to add significant capability and advance the state of the art in high-temperature sensing. For the work reported herein, seven probes were constructed with ten sensors each, for a total of 70 sensor locations throughout the core. These discrete temperature sensors were monitored over a nine-hour period while the test article was heated to over 700 °C and cooled to ambient through two operational cycles. The sensor density available permits a significantly elevated understanding of thermal effects within the simulated reactor. Fiber-optic sensor performance is shown to compare very favorably with co-located thermocouples where such co-location was feasible.
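A fiber Bragg grating senses temperature through the shift of its Bragg wavelength. The sketch below uses the common linear small-signal relation with assumed textbook-order constants (both `K_T` and the nominal wavelength are illustrative values, not calibration data from this experiment, and real high-temperature work requires a nonlinear calibration):

```python
# Assumed values for illustration; a real interrogator is calibrated
# against reference temperatures.
K_T = 6.7e-6          # fractional Bragg-wavelength shift per deg C (assumed)
LAMBDA_0 = 1550.0e-9  # nominal Bragg wavelength in metres (assumed)

def delta_t_from_shift(delta_lambda_m):
    """Temperature change inferred from a Bragg wavelength shift using
    the linear small-signal relation d(lambda)/lambda = K_T * dT."""
    return delta_lambda_m / (LAMBDA_0 * K_T)

# With these constants, a 7.27 pm shift maps to roughly 0.7 deg C.
dt = delta_t_from_shift(7.27e-12)
```

Writing many gratings at different wavelengths along one fiber is what allows the tens of sensor locations per probe reported above to be interrogated through a single lead.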
DOE Office of Scientific and Technical Information (OSTI.GOV)
Orea, Adrian; Betancourt, Minerba
The objective of this project was to use MINERvA data to tune simulation models in order to obtain the precision needed for current and future neutrino experiments. To do this, the current models need to be validated and then improved. Validation was done by recreating figures that have been used in previous publications, comparing data from the detector with the simulation model (GENIE). Additionally, a newer version of GENIE was compared with the version used for the publications, both to validate the new version and to note any improvements. Another objective was to add new samples into the NUISANCE framework, which was used to compare data from the detector with simulation models. Specifically, the added sample was the two-dimensional histogram of the double-differential cross section as a function of the transverse and z-direction momentum for numu and numubar; this sample was also used for validation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lundstrom, B.; Shirazi, M.; Coddington, M.
2013-01-01
This paper, presented at the IEEE Green Technologies Conference 2013, describes a Grid Interconnection System Evaluator (GISE) that leverages hardware-in-the-loop (HIL) simulation techniques to rapidly evaluate the grid interconnection standard conformance of an ICS according to the procedures in IEEE Std 1547.1 (TM). The architecture and test sequencing of this evaluation tool, along with a set of representative ICS test results from three different photovoltaic (PV) inverters, are presented. The GISE adds to the National Renewable Energy Laboratory's (NREL) evaluation platform that now allows for rapid development of ICS control algorithms using controller HIL (CHIL) techniques, the ability to test the dc input characteristics of PV-based ICSs through the use of a PV simulator capable of simulating real-world dynamics using power HIL (PHIL), and evaluation of ICS grid interconnection conformance.
Interprofessional communication in healthcare: An integrative review.
Foronda, Cynthia; MacWilliams, Brent; McArthur, Erin
2016-07-01
The link between miscommunication and poor patient outcomes has been well documented. To understand the current state of knowledge regarding interprofessional communication, an integrative review was performed. The review suggested that nurses and physicians are trained differently and they exhibit differences in communication styles. The distinct frustrations that nurses and physicians expressed with each other were discussed. Egos, lack of confidence, lack of organization and structural hierarchies hindered relationships and communications. Research suggested that training programs with the use of standardized tools and simulation are effective in improving interprofessional communication skills. Recommendations include education beyond communication techniques to address the broader related constructs of patient safety, valuing diversity, team science, and cultural humility. Future directions in education are to add courses in patient safety to the curriculum, use handover tools that are interprofessional in nature, practice in simulation hospitals for training, and use virtual simulation to unite the professions. Copyright © 2016 Elsevier Ltd. All rights reserved.
PhET: The Best Education Software You Can't Buy
NASA Astrophysics Data System (ADS)
Dubson, M.; Duncan, D. K.
2009-12-01
Project PhET provides free educational software in the form of stand-alone Java and Flash simulations and associated classroom materials. Our motto is "It's the best educational software that money can buy, except you can't buy it, because it's free." You can start playing with PhET sims right now at http://phet.colorado.edu and add to our 1 million hits per month. PhET originally stood for Physics Education Technology, but we now include other science fields, so PhET is now a brand name. Our site has about 80 simulations, mostly in physics and math, but also in chemistry, geology, and biology. Based on careful research and student interviews, our sims have no instructions, because no one reads instructions. These simulations can be used in lecture demonstrations, classroom activities, and homework assignments. The PhET site includes a long list of user-tested classroom activities and teacher tips.
A COTS-Based Replacement Strategy for Aging Avionics Computers
2001-12-01
[Recovered figure text: a COTS microprocessor running a real-time operating system hosts new native code objects and threads alongside legacy functions within a virtual component environment, using context-switch thunks and add-in or replacement components; a Communication Control Unit is also shown.]
Forces required for a knife to penetrate a variety of clothing types.
Nolan, Gary; Hainsworth, Sarah V; Rutty, Guy N
2013-03-01
In stabbing incidents, it is usual for the victim to be clothed, and therefore a knife penetrates both clothes and skin. Clothes (other than leather) have been thought to make little difference to the penetration force; however, there is little quantitative data in the literature. In this study, a range of clothes have been tested, either singly or in layers (for example, a T-shirt and a shirt), to quantify the additional force required when clothes are present. A materials testing system has been used to measure the penetration force required to stab through clothes into a foam-silicone rubber skin simulant. The results show that the force required can be significantly different, particularly when layers of clothing are penetrated. A cotton T-shirt adds c. 8 N to the penetration force, while a T-shirt and jacket can add an additional 21 N. The results allow a more quantitative assessment of the forces required in stabbing. © 2012 American Academy of Forensic Sciences.
NASA Astrophysics Data System (ADS)
Nussbaumer, Raphaël; Gloaguen, Erwan; Mariéthoz, Grégoire; Holliger, Klaus
2016-04-01
Bayesian sequential simulation (BSS) is a powerful geostatistical technique, which notably has shown significant potential for the assimilation of datasets that are diverse with regard to the spatial resolution and their relationship. However, these types of applications of BSS require a large number of realizations to adequately explore the solution space and to assess the corresponding uncertainties. Moreover, such simulations generally need to be performed on very fine grids in order to adequately exploit the technique's potential for characterizing heterogeneous environments. Correspondingly, the computational cost of BSS algorithms in their classical form is very high, which so far has limited an effective application of this method to large models and/or vast datasets. In this context, it is also important to note that the inherent assumption regarding the independence of the considered datasets is generally regarded as being too strong in the context of sequential simulation. To alleviate these problems, we have revisited the classical implementation of BSS and incorporated two key features to increase the computational efficiency. The first feature is a combined quadrant spiral - superblock search, which targets run-time savings on large grids and adds flexibility with regard to the selection of neighboring points using equal directional sampling and treating hard data and previously simulated points separately. The second feature is a constant path of simulation, which enhances the efficiency for multiple realizations. We have also modified the aggregation operator to be more flexible with regard to the assumption of independence of the considered datasets. This is achieved through log-linear pooling, which essentially allows for attributing weights to the various data components. 
Finally, a multi-grid simulation path was created to enforce large-scale variance and to allow for adapting parameters, such as the log-linear weights or the type of simulation path, at various scales. The newly implemented search method for kriging reduces the computational cost from an exponential dependence on the grid size in the original algorithm to a linear relationship, as each neighboring search becomes independent of the grid size. For the considered examples, our results show a sevenfold reduction in run time for each additional realization when a constant simulation path is used. The traditional criticism that constant-path techniques introduce a bias to the simulations was explored, and our findings do indeed reveal a minor reduction in the diversity of the simulations. This bias can, however, be largely eliminated by changing the path type at different scales through the use of the multi-grid approach. Finally, we show that adapting the aggregation weight at each scale considered in our multi-grid approach allows for reproducing both the variogram and histogram, and the spatial trend of the underlying data.
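The log-linear pooling used as the aggregation operator above can be sketched directly: each source distribution is raised to its weight, multiplied together, and renormalized. The toy distributions and weights below are illustrative, not values from the study:

```python
import numpy as np

def log_linear_pool(distributions, weights):
    """Log-linear (weighted geometric) pooling of discrete probability
    distributions: p(x) proportional to prod_i p_i(x) ** w_i."""
    distributions = np.asarray(distributions, dtype=float)
    weights = np.asarray(weights, dtype=float)
    pooled = np.prod(distributions ** weights[:, None], axis=0)
    return pooled / pooled.sum()

# Two data sources over three categories; the second source is given
# twice the weight of the first.
p1 = np.array([0.6, 0.3, 0.1])
p2 = np.array([0.2, 0.5, 0.3])
pooled = log_linear_pool([p1, p2], weights=[1.0, 2.0])
```

Setting all weights to 1 recovers the independence-style product of the classical BSS aggregation; unequal weights let one data component dominate, which is exactly the flexibility the modified operator provides.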
NASA Astrophysics Data System (ADS)
Wang, Enli; Xu, J.; Jiang, Q.; Austin, J.
2009-03-01
Quantification of the spatial impact of climate on crop productivity and of the potential value of seasonal climate forecasts can effectively assist the strategic planning of crop layout and help to understand to what extent climate risk can be managed through responsive management strategies at a regional level. A simulation study was carried out to assess the climate impact on the performance of a dryland wheat-fallow system and the potential value of seasonal climate forecasts for nitrogen management in the Murray-Darling Basin (MDB) of Australia. Daily climate data (1889-2002) from 57 stations were used with the agricultural systems simulator (APSIM) to simulate wheat productivity and nitrogen requirements as affected by climate. On a good soil, simulated grain yield ranged from <2 t/ha in the western inland to >7 t/ha in the eastern border regions. Optimal nitrogen rates ranged from <60 kgN/ha/yr to >200 kgN/ha/yr. Simulated gross margin was in the range of -20/ha to 700/ha, increasing eastwards. Wheat yield was closely related to rainfall in the growing season and to the soil moisture stored at sowing time. The impact of stored soil moisture increased from southwest to northeast. Simulated annual deep drainage ranged from zero in the western inland to >200 mm in the east. Nitrogen management optimised on the basis of 'perfect' knowledge of daily weather in the coming season could add value of 26-79/ha compared to management optimised on the basis of historical climate, with the maximum occurring in the central to western part of the MDB. It would also reduce nitrogen application by 5-25 kgN/ha in the main cropping areas. Comparison of the simulation results with the current land use mapping in the MDB revealed that the western boundary of the current cropping zone approximated the isolines of 160 mm of growing-season rainfall, 2.5 t/ha of wheat grain yield, and 150/ha of gross margin in QLD and NSW. In VIC and SA, the 160-mm isohyets corresponded to relatively lower simulated yields due to less stored soil water. Impacts of other factors, such as soil type, were also discussed.
Fault Analysis in Solar Photovoltaic Arrays
NASA Astrophysics Data System (ADS)
Zhao, Ye
Fault analysis in solar photovoltaic (PV) arrays is a fundamental task for increasing reliability, efficiency, and safety in PV systems. Conventional fault protection methods usually add fuses or circuit breakers in series with PV components, but these protection devices are only able to clear faults and isolate faulty circuits if they carry a large fault current. However, this research shows that, due to the current-limiting nature and non-linear output characteristics of PV arrays, faults in PV arrays may not be cleared by fuses under some fault scenarios. First, this thesis introduces new simulation and analytic models that are suitable for fault analysis in PV arrays. Based on the simulation environment, this thesis studies a variety of typical faults in PV arrays, such as ground faults, line-line faults, and mismatch faults. The effect of a maximum power point tracker on fault current is discussed and shown, at times, to prevent the fault-current protection devices from tripping. A small-scale experimental PV benchmark system has been developed at Northeastern University to further validate the simulation conclusions. Additionally, this thesis examines two types of unique faults found in a PV array that have not been studied in the literature. One is a fault that occurs under low-irradiance conditions; the other is fault evolution in a PV array during the night-to-day transition. Our simulation and experimental results show that overcurrent protection devices are unable to clear faults under "low irradiance" and "night-to-day transition" conditions, even though the same devices may work properly when the same PV fault occurs in daylight. As a result, a fault under "low irradiance" or "night-to-day transition" might remain hidden in the PV array and become a potential hazard to system efficiency and reliability.
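The current-limiting behaviour described above can be illustrated with a minimal single-diode PV model. All parameter values below are generic textbook values, not taken from the thesis; the point is that the short-circuit (worst-case fault) current exceeds the normal operating current by only a few percent, so a series fuse rated with conventional margin may never carry enough current to melt:

```python
import numpy as np

def pv_current(v, i_ph=8.0, i_0=1e-9, n=1.3, ns=60, v_t=0.02585):
    """Terminal current of a PV string from the single-diode model
    (series and shunt resistances neglected for clarity).
    i_ph: photo-generated current [A]; i_0: diode saturation current [A];
    n: ideality factor; ns: cells in series; v_t: thermal voltage [V]."""
    return i_ph - i_0 * np.expm1(v / (n * ns * v_t))

volts = np.linspace(0.0, 45.0, 901)
amps = pv_current(volts)
power = volts * amps

i_sc = pv_current(0.0)            # short-circuit (fault) current ~ i_ph
i_mpp = amps[np.argmax(power)]    # current at the maximum power point
# i_sc / i_mpp is only ~1.05: a fuse sized above i_sc with margin
# would never see the ~2x overcurrent it needs to clear quickly.
```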
Amanzi: An Open-Source Multi-process Simulator for Environmental Applications
NASA Astrophysics Data System (ADS)
Moulton, J. D.; Molins, S.; Johnson, J. N.; Coon, E.; Lipnikov, K.; Day, M.; Barker, E.
2014-12-01
The Advanced Simulation Capability for Environmental Management (ASCEM) program is developing an approach and an open-source tool suite for standardized risk and performance assessments at legacy nuclear waste sites. These assessments begin with simplified models and add geometric and geologic complexity as understanding is gained. The platform toolset (Akuna) generates these conceptual models, and Amanzi provides the computational engine to perform the simulations, returning the results for analysis and visualization. In this presentation we highlight key elements of the design, algorithms, and implementations used in Amanzi. In particular, the hierarchical and modular design is aligned with the coupled processes being simulated and naturally supports a wide range of model complexity. This design leverages a dynamic data manager and the synergy of two graphs (one from the high-level perspective of the models, the other from the dependencies of the variables in the model) to enable flexible model configuration at run time. Moreover, to model sites with complex hydrostratigraphy, as well as engineered systems, we are developing a dual unstructured/structured capability. Recently, these capabilities have been collected in a framework named Arcos, and efforts have begun to improve interoperability between the unstructured and structured AMR approaches in Amanzi. To leverage a range of biogeochemistry capabilities from the community (e.g., CrunchFlow, PFLOTRAN, etc.), a biogeochemistry interface library called Alquimia was developed. To ensure that Amanzi is truly an open-source community code, we require a completely open-source tool chain for our development. We will comment on elements of this tool chain, including testing and documentation development tools such as docutils and Sphinx. Finally, we will show simulation results from our phased demonstrations, including the geochemically complex Savannah River F-Area seepage basins.
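The idea of a dependency-graph-driven data manager can be sketched as follows. The `evaluate` helper and the variable names are hypothetical, not Amanzi's actual API; the sketch only shows how declaring each variable's dependencies lets a model configuration be resolved at run time, with each quantity computed once and cached:

```python
def evaluate(var, rules, cache=None):
    """Resolve a model variable by recursively evaluating its dependencies.
    `rules` maps each variable name to (list_of_dependencies, function)."""
    if cache is None:
        cache = {}
    if var in cache:
        return cache[var]
    deps, fn = rules[var]
    cache[var] = fn(*(evaluate(d, rules, cache) for d in deps))
    return cache[var]

# Toy process model: Darcy flux depends on permeability, viscosity
# and the pressure gradient (all values invented for illustration).
rules = {
    "permeability": ([], lambda: 1e-12),   # [m^2]
    "viscosity":    ([], lambda: 1e-3),    # [Pa s]
    "grad_p":       ([], lambda: 1e4),     # [Pa/m]
    "darcy_flux":   (["permeability", "viscosity", "grad_p"],
                     lambda k, mu, gp: -k / mu * gp),
}

flux = evaluate("darcy_flux", rules)
```

Swapping a rule (say, making permeability depend on a simulated facies field) changes the graph but not the evaluation machinery, which is the flexibility the abstract describes.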
Simulation of the Universal-Time Diurnal Variation of the Global Electric Circuit Charging Rate
NASA Technical Reports Server (NTRS)
Mackerras, David; Darveniza, Mat; Orville, Richard E.; Williams, Earle R.; Goodman, Steven J.
1999-01-01
A global lightning model that includes diurnal and annual lightning variation, and total flash density versus latitude for each major land and ocean, has been used as the basis for simulating the global electric circuit charging rate. A particular objective has been to reconcile the difference in amplitude ratios [AR=(max-min)/mean] between the global lightning diurnal variation (AR approximately equals 0.8) and the diurnal variation of typical atmospheric potential gradient curves (AR approximately equals 0.35). A constraint on the simulation is that the annual mean charging current should be about 1000 A. The global lightning model shows that negative ground flashes can contribute, at most, about 10-15% of the required current. For the purpose of the charging rate simulation, it was assumed that each ground flash contributes 5 C to the charging process. It was necessary to assume that all electrified clouds contribute to charging by means other than lightning, that the total flash rate can serve as an indirect indicator of the rate of charge transfer, and that oceanic electrified clouds contribute to charging even though they are relatively inefficient in producing lightning. It was also found necessary to add a diurnally invariant charging current component. By trial and error it was found that charging rate diurnal variation curves could be produced with amplitude ratios and general shapes similar to those of the potential gradient diurnal variation curves measured over ocean and arctic regions during voyages of the Carnegie Institution research vessels. The comparisons were made for the northern winter (Nov.-Feb.), the equinoxes (Mar., Apr., Sept., Oct.), the northern summer (May-Aug.), and the whole year.
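The amplitude ratio defined in the abstract is straightforward to compute. The synthetic diurnal curves below are purely illustrative and are constructed only to reproduce the two AR values quoted in the text:

```python
import numpy as np

def amplitude_ratio(curve):
    """AR = (max - min) / mean, as used to compare diurnal variations."""
    curve = np.asarray(curve, dtype=float)
    return (curve.max() - curve.min()) / curve.mean()

# Synthetic diurnal curves (arbitrary units, 24 hourly samples).
hours = np.arange(24)
lightning = 1.0 + 0.4 * np.sin(2 * np.pi * (hours - 9) / 24)            # AR = 0.8
potential_gradient = 1.0 + 0.175 * np.sin(2 * np.pi * (hours - 9) / 24)  # AR = 0.35
```

For a sinusoid of mean 1 and amplitude a, AR = 2a, so matching the Carnegie-type curve (AR about 0.35) requires damping the lightning-driven variation (AR about 0.8), which is what the diurnally invariant charging component accomplishes in the simulation.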
Night vision goggle stimulation using LCoS and DLP projection technology, which is better?
NASA Astrophysics Data System (ADS)
Ali, Masoud H.; Lyon, Paul; De Meerleer, Peter
2014-06-01
High-fidelity night-vision training has become important for many of the simulation systems being procured today. The end-users of these simulation-training systems prefer using their actual night-vision goggle (NVG) headsets, which requires that the visual display system stimulate the NVGs in a realistic way. Historically, NVG stimulation was done with cathode-ray tube (CRT) projectors. However, this technology became obsolete, and in recent years training simulators have performed NVG stimulation with laser, LCoS, and DLP projectors; the LCoS and DLP projection technologies have emerged as the preferred approaches. Both LCoS and DLP technologies have advantages and disadvantages for stimulating NVGs. LCoS projectors can have 5-10 times or more the contrast capability of DLP projectors. The larger the difference between the projected black level and the brightest object in a scene, the better the NVG stimulation effects can be; this is an advantage of LCoS technology, especially when the proper NVG wavelengths are used. Single-chip DLP projectors, even though they have much reduced contrast compared to LCoS projectors, can use LED illuminators in a sequential red-green-blue fashion to create a projected image, and it is straightforward to add an extra infrared (NVG-wavelength) LED into this sequential chain of LED illumination. The content of this NVG channel can be independent of the visible scene, which allows effects to be added that compensate for the lack of contrast inherent in a DLP device. This paper will expand on the differences between LCoS and DLP projectors for stimulating NVGs and summarize the benefits of both in night-vision simulation training systems.
Scorpion Hybrid Optical-based Inertial Tracker (HObIT) test results
NASA Astrophysics Data System (ADS)
Atac, Robert; Spink, Scott; Calloway, Tom; Foxlin, Eric
2014-06-01
Cooper, Simon J; Kinsman, Leigh; Chung, Catherine; Cant, Robyn; Boyle, Jayne; Bull, Loretta; Cameron, Amanda; Connell, Cliff; Kim, Jeong-Ah; McInnes, Denise; McKay, Angela; Nankervis, Katrina; Penz, Erika; Rotter, Thomas
2016-09-07
There are international concerns in relation to the management of patient deterioration, which has led to a body of evidence known as the 'failure to rescue' literature. Nursing staff are known to miss cues of deterioration and often fail to call for assistance. Medical Emergency Teams (Rapid Response Teams) do improve the management of acutely deteriorating patients, but first responders need the requisite skills to have an impact on patient safety. In this study we aim to address these issues in a mixed-methods interventional trial with the objective of measuring and comparing the cost and clinical impact of face-to-face and web-based simulation programs on the management of patient deterioration and related patient outcomes. The education programs, known as 'FIRST(2)ACT', have been found to have an educational impact and will be tested in four hospitals in the State of Victoria, Australia. Nursing staff will be trained in primary (the first 8 min) responses to emergencies in two medical wards using a face-to-face approach and in two medical wards using a web-based version, FIRST(2)ACTWeb. The impact of these interventions will be determined through quantitative and qualitative approaches, cost analyses, and patient notes review (time series analyses) to measure quality of care and patient outcomes. In this 18-month study it is hypothesised that both simulation programs will improve the detection and management of deteriorating patients, but that the web-based program will have lower total costs. The study will also add to our overall understanding of the utility of simulation approaches in the preparation of nurses working in hospital wards. (ACTRN12616000468426, retrospectively registered 8.4.2016).
NASA Astrophysics Data System (ADS)
Alves da Silva Junior, J.; Frank, W.; Campillo, M.; Juanes, R.
2017-12-01
Current models for slow slip earthquakes (SSE) assume a simplified fault embedded in a homogeneous half-space. In these models, SSE events nucleate at the transition from velocity strengthening (VS) to velocity weakening (VW) down-dip from the trench and propagate towards the base of the seismogenic zone, where high normal effective stress is assumed to arrest slip. Here, we investigate SSE nucleation and arrest using quasi-static finite element simulations, with rate-and-state friction, on a domain with heterogeneous properties and realistic fault geometry. We use the fault geometry of the Guerrero Gap in the Cocos subduction zone, where SSE events occur every 4 years, as a proxy for subduction zones. Our model is calibrated using surface displacements from GPS observations. We apply boundary conditions according to the plate convergence rate and impose a depth-dependent pore pressure on the fault. Our simulations indicate that the fault geometry and the elastic properties of the medium play a key role in the arrest of SSE events at the base of the seismogenic zone. SSE arrest occurs due to aseismic deformation of the domain that results in areas of elevated effective stress. SSE nucleation occurs at the transition from VS to VW and propagates as a crack-like expansion, with increasing nucleation length prior to dynamic instability. Our simulations encompassing multiple seismic cycles indicate SSE interval times between 1 and 10 years and, importantly, a systematic increase of the rupture area prior to dynamic instability, followed by a hiatus in SSE occurrence. We hypothesize that these SSE characteristics, if confirmed by GPS observations in different subduction zones, can add to the understanding of the nucleation of large earthquakes in the seismogenic zone.
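The velocity-strengthening/velocity-weakening distinction used above follows from the steady-state form of the rate-and-state friction law, mu_ss = mu0 + (a - b) ln(V/V0). The sketch below uses generic laboratory-scale parameter values, not those of the Guerrero Gap model:

```python
import math

def steady_state_friction(v, mu0=0.6, a=0.01, b=0.008, v0=1e-6):
    """Steady-state rate-and-state friction coefficient:
    mu_ss = mu0 + (a - b) * ln(v / v0).
    a > b: velocity strengthening (stable, creeping fault patch);
    a < b: velocity weakening (can nucleate slip instabilities)."""
    return mu0 + (a - b) * math.log(v / v0)

# A velocity-strengthening patch resists acceleration (friction rises) ...
mu_slow = steady_state_friction(1e-6, a=0.01, b=0.008)
mu_fast = steady_state_friction(1e-3, a=0.01, b=0.008)
# ... while a velocity-weakening patch gets weaker as it speeds up.
mu_slow_vw = steady_state_friction(1e-6, a=0.008, b=0.01)
mu_fast_vw = steady_state_friction(1e-3, a=0.008, b=0.01)
```

The sign of (a - b) thus controls whether a fault patch creeps stably or can host the crack-like SSE expansion described in the abstract.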
Joint Sentinel-1 and SMAP data assimilation to improve soil moisture estimates
NASA Astrophysics Data System (ADS)
Lievens, H.; Reichle, R. H.; Liu, Q.; De Lannoy, G.; Dunbar, R. S.; Kim, S.; Das, N. N.; Cosh, M. H.; Walker, J. P.; Wagner, W.
2017-12-01
SMAP (Soil Moisture Active and Passive) radiometer observations at 40 km resolution are routinely assimilated into the NASA Catchment Land Surface Model (CLSM) to generate the SMAP Level 4 Soil Moisture product. The use of C-band radar backscatter observations from Sentinel-1 has the potential to add value to the radiance assimilation by increasing the level of spatial detail. The specifications of Sentinel-1 are appealing, particularly its high spatial resolution (5 by 20 m in interferometric wide swath mode) and frequent revisit time (6 day repeat cycle for the Sentinel-1A and Sentinel-1B constellation). However, the shorter wavelength of Sentinel-1 observations implies less sensitivity to soil moisture. This study investigates the value of Sentinel-1 data for hydrologic simulations by assimilating the radar observations into CLSM, either separately from or simultaneously with SMAP radiometer observations. To facilitate the assimilation of the radar observations, CLSM is coupled to the water cloud model, simulating the radar backscatter as observed by Sentinel-1. The innovations, i.e. differences between observations and simulations, are converted into increments to the model soil moisture state through an Ensemble Kalman Filter. The assimilation impact is assessed by comparing 3-hourly, 9 km surface and root-zone soil moisture simulations with in situ measurements from 9 km SMAP core validation sites and sparse networks, from May 2015 to 2017. The Sentinel-1 assimilation consistently improves surface soil moisture, whereas root-zone impacts are mostly neutral. Relatively larger improvements are obtained from SMAP assimilation. The joint assimilation of SMAP and Sentinel-1 observations performs best, demonstrating the complementary value of radar and radiometer observations.
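The Ensemble Kalman Filter step that converts innovations into soil moisture increments can be sketched for a single scalar observation. This is a generic stochastic EnKF; the linear "backscatter" operator below stands in for the water cloud model, and all numerical values are invented for illustration:

```python
import numpy as np

def enkf_update(ensemble, obs, obs_operator, obs_err_var, rng):
    """Stochastic Ensemble Kalman Filter update for one scalar observation.
    ensemble: (n_members, n_state) array of model states;
    obs_operator: maps a state vector to the predicted observation."""
    n = ensemble.shape[0]
    predicted = np.array([obs_operator(m) for m in ensemble])
    x_mean = ensemble.mean(axis=0)
    y_mean = predicted.mean()
    # Sample covariance between state and predicted observation.
    cov_xy = (ensemble - x_mean).T @ (predicted - y_mean) / (n - 1)
    var_y = predicted.var(ddof=1)
    gain = cov_xy / (var_y + obs_err_var)                 # Kalman gain
    perturbed = obs + rng.normal(0.0, np.sqrt(obs_err_var), size=n)
    return ensemble + np.outer(perturbed - predicted, gain)

rng = np.random.default_rng(0)
# State: [surface_sm, rootzone_sm]; observe backscatter (dB), linear in surface_sm.
ens = rng.normal([0.20, 0.30], [0.05, 0.03], size=(100, 2))
h = lambda x: -15.0 + 40.0 * x[0]     # toy stand-in for the water cloud model
analysis = enkf_update(ens, obs=-5.0, obs_operator=h, obs_err_var=0.5, rng=rng)
```

The observation of -5 dB corresponds to a surface soil moisture of 0.25 under the toy operator, so the analysis mean moves from the prior (about 0.20) toward 0.25 and the ensemble spread contracts.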
Zhan, Yijian; Meschke, Günther
2017-07-08
The effective analysis of the nonlinear behavior of cement-based engineering structures demands not only physically reliable models but also computationally efficient algorithms. Based on a continuum interface element formulation that is suitable for capturing complex cracking phenomena in concrete materials and structures, an adaptive mesh processing technique is proposed for computational simulations of plain and fiber-reinforced concrete structures, which progressively disintegrates the initial finite element mesh and adds degenerated solid elements into the interfacial gaps. In comparison with an implementation in which the entire mesh is processed prior to the computation, the proposed adaptive cracking model allows the failure behavior of plain and fiber-reinforced concrete structures to be simulated with remarkably reduced computational expense.
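The adaptive strategy can be sketched as follows. The data structures are hypothetical stand-ins for a real finite element mesh, not the authors' implementation; the point is that interface elements are inserted only at facets where a cracking criterion is met, instead of pre-processing every facet before the analysis starts:

```python
def adapt_mesh(shared_facets, facet_stress, tensile_strength):
    """Adaptively insert zero-thickness interface elements.
    shared_facets: dict facet_id -> (elem_a, elem_b) sharing that facet;
    facet_stress:  dict facet_id -> current normal stress at the facet.
    Only facets whose stress reaches the tensile strength are
    'disintegrated' and bridged by a degenerated solid element."""
    interfaces = []
    for facet, (elem_a, elem_b) in shared_facets.items():
        if facet_stress[facet] >= tensile_strength:
            # In a real code: duplicate the shared nodes, then tie the two
            # bulk elements together with a zero-thickness interface element.
            interfaces.append(("interface", elem_a, elem_b, facet))
    return interfaces

mesh = {"f1": (1, 2), "f2": (2, 3), "f3": (3, 4)}
stress = {"f1": 1.2, "f2": 3.5, "f3": 0.4}          # MPa, invented values
new_ifaces = adapt_mesh(mesh, stress, tensile_strength=3.0)
```

Calling this check each load step keeps the element count close to that of the original mesh until cracking actually localizes, which is where the reported run-time saving comes from.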
Zhan, Yijian
2017-01-01
PMID:28773130
1983-06-01
Xyce Parallel Electronic Simulator : users' guide, version 2.0.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoekstra, Robert John; Waters, Lon J.; Rankin, Eric Lamont
2004-06-01
This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator capable of simulating electrical circuits at a variety of abstraction levels. Primarily, Xyce has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving upon the current state of the art in the following areas: the capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors), including support for most popular parallel and serial computers; improved performance for all numerical kernels (e.g., time integrator, nonlinear and linear solvers) through state-of-the-art algorithms and novel techniques; device models specifically tailored to meet Sandia's needs, including many radiation-aware devices; a client-server or multi-tiered operating model wherein the numerical kernel can operate independently of the graphical user interface (GUI); and object-oriented code design and implementation using modern coding practices that ensure that the Xyce Parallel Electronic Simulator will be maintainable and extensible far into the future. Xyce is a parallel code in the most general sense of the phrase: a message-passing parallel implementation that allows it to run efficiently on the widest possible range of computing platforms, including serial, shared-memory, and distributed-memory parallel as well as heterogeneous platforms. Careful attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows. One feature required by designers is the ability to add device models, many specific to the needs of Sandia, to the code.
To this end, the device package in the Xyce Parallel Electronic Simulator is designed to support a variety of device model inputs. These input formats include standard analytical models, behavioral models, look-up tables, and mesh-level PDE device models. Combined with this flexible interface is an architectural design that greatly simplifies the addition of circuit models. One of the most important features of Xyce is in providing a platform for computational research and development aimed specifically at the needs of the Laboratory. With Xyce, Sandia now has an in-house capability with which both electrical (e.g., device model development) and algorithmic (e.g., faster time-integration methods) research and development can be performed. Ultimately, these capabilities are migrated to end users.
NASA Astrophysics Data System (ADS)
Yu, Y.; Tan, X.; Liu, Q.; Xue, G.; Yu, H.; Zhao, Y.; Wang, Z.
Topological band theory has attracted much attention since several types of topological metals and semimetals have been explored. The robustness of these nodal band structures is symmetry-protected, and their topological features have deepened and widened our understanding of condensed-matter physics. Meanwhile, as artificial quantum systems, superconducting circuits possess high controllability, supplying a powerful approach to investigate the topological properties of condensed-matter systems. We realize a Hamiltonian with combined space-time inversion (PT) symmetry by mapping the momentum space of a nodal band structure to the parameter space of a superconducting quantum circuit. By measuring the energy spectrum of the system, we observe the gapless band structure of a topological semimetal, manifested as Dirac points in momentum space. The phase transition from topological semimetal to topological insulator can be realized by continuously tuning a parameter in the Hamiltonian. We add a perturbation to break time-reversal symmetry; as long as the combined PT symmetry is preserved, the Dirac points of the topological semimetal remain observable, demonstrating the robustness of the topological protection of the gapless energy band. Our work opens a platform to simulate the relation between symmetry and topological stability in condensed-matter systems. Supported by the NKRDP of China (2016YFA0301802) and the GRF of Hong Kong (HKU173051/14P&HKU173055/15P).
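A minimal two-band model illustrates the reported behaviour. This is a generic Dirac Hamiltonian, not the actual circuit Hamiltonian of the experiment; it only shows that a symmetry-allowed in-plane perturbation merely shifts the band crossing in momentum, while a symmetry-breaking mass term gaps it out:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def gap(kx, ky, m=0.0, dx=0.0):
    """Energy gap of the two-band Hamiltonian
    H = (kx + dx)*sx + ky*sy + m*sz.
    An in-plane perturbation dx shifts the Dirac point in momentum,
    while a mass term m opens a gap (semimetal -> insulator)."""
    h = (kx + dx) * sx + ky * sy + m * sz
    e = np.linalg.eigvalsh(h)
    return e[1] - e[0]

g0 = gap(0.0, 0.0)                 # gapless Dirac point at k = 0 ...
g_shifted = gap(-0.2, 0.0, dx=0.2) # ... survives, shifted to kx = -0.2 ...
g_massive = gap(0.0, 0.0, m=0.3)   # ... but is gapped by the mass term.
```

The protected crossing and its destruction by the mass term mirror the semimetal-to-insulator transition measured in the circuit's parameter space.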
NASA Astrophysics Data System (ADS)
Yang Yang, Fan; Nelson, Bron; Aziz, Jonathan; Carlino, Roberto; Dono Perez, Andres; Faber, Nicolas; Foster, Cyrus; Frost, Chad; Henze, Chris; Karacalıoğlu, Arif Göktuğ; Levit, Creon; Marshall, William; Mason, James; O'Toole, Conor; Swenson, Jason; Worden, Simon P.; Stupl, Jan
2016-09-01
This work provides an efficiency analysis of the LightForce space debris collision avoidance scheme in the current debris environment and describes a simulation approach to assess its impact on the long-term evolution of the space debris environment. LightForce aims to provide just-in-time collision avoidance by utilizing photon pressure from ground-based industrial lasers. These ground stations impart minimal accelerations to increase the miss distance for a predicted conjunction between two objects. In the first part of this paper we present research that investigates the short-term effect of a few systems consisting of 20 kW class lasers directed by 1.5 m diameter telescopes using adaptive optics. The results show that such a network of ground stations could mitigate more than 85 percent of conjunctions and lower the expected number of collisions in Low Earth Orbit (LEO) by an order of magnitude. While these are impressive numbers that indicate LightForce's utility in the short term, the remaining 15% of possible collisions include (among others) conjunctions between two massive objects that would add a large amount of debris if they collided. Still, conjunctions between massive objects and smaller objects can be mitigated. Hence, we choose to expand the capabilities of the simulation software to investigate the overall effect of a network of LightForce stations on the long-term debris evolution. In the second part of this paper, we present the planned simulation approach for that effort. For the efficiency analysis of collision avoidance in the current debris environment, we utilize a simulation approach that uses the entire Two Line Element (TLE) catalog in LEO for a given day as initial input. These objects are propagated for one year and an all-on-all conjunction analysis is performed. For conjunctions that fall below a range threshold, we calculate the probability of collision and record those values.
To assess efficiency, we compare a baseline conjunction analysis (without collision avoidance) with an analysis in which LightForce is active. Using that approach, we take into account that collision avoidance maneuvers can have effects on third objects. Performing all-on-all conjunction analyses for extended periods of time requires significant computer resources; hence, we implemented this simulation using a highly parallel approach on the NASA Pleiades supercomputer.
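The range-screening step of the all-on-all conjunction analysis can be sketched as follows. The positions and threshold below are invented for illustration; a real analysis would propagate the full TLE catalog and repeat this test at every time step before computing collision probabilities for the surviving pairs:

```python
import numpy as np

def screen_conjunctions(positions, threshold_km):
    """All-on-all range screening for one time step: return index pairs of
    objects whose separation is below threshold_km."""
    n = len(positions)
    pairs = []
    for i in range(n):
        # Distances from object i to every later object (upper triangle only).
        d = np.linalg.norm(positions[i + 1:] - positions[i], axis=1)
        for j in np.nonzero(d < threshold_km)[0]:
            pairs.append((i, i + 1 + int(j)))
    return pairs

# Three objects (km, inertial frame): the first two are 5 km apart.
pos = np.array([[7000.0, 0.0, 0.0],
                [7000.0, 5.0, 0.0],
                [-7000.0, 0.0, 0.0]])
hits = screen_conjunctions(pos, threshold_km=10.0)
```

The upper-triangle loop keeps the pair count at n(n-1)/2; it is this quadratic growth over a year of time steps that motivates the highly parallel implementation mentioned above.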
Yang, Fan Yang; Nelson, Bron; Aziz, Jonathan; Carlino, Roberto; Perez, Andres Dono; Faber, Nicolas; Foster, Cyrus; Frost, Chad; Henze, Chris; Karacalıoğlu, Arif Göktuğ; Levit, Creon; Marshall, William; Mason, James; O’Toole, Conor; Swenson, Jason; Worden, Simon P.; Stupl, Jan
2017-01-01
PMID:29302129
Yang, Fan Yang; Nelson, Bron; Aziz, Jonathan; Carlino, Roberto; Perez, Andres Dono; Faber, Nicolas; Foster, Cyrus; Frost, Chad; Henze, Chris; Karacalıoğlu, Arif Göktuğ; Levit, Creon; Marshall, William; Mason, James; O'Toole, Conor; Swenson, Jason; Worden, Simon P; Stupl, Jan
2016-09-01
This work provides an efficiency analysis of the LightForce space debris collision avoidance scheme in the current debris environment and describes a simulation approach to assess its impact on the long-term evolution of the space debris environment. LightForce aims to provide just-in-time collision avoidance by utilizing photon pressure from ground-based industrial lasers. These ground stations impart minimal accelerations to increase the miss distance for a predicted conjunction between two objects. In the first part of this paper we will present research that investigates the short-term effect of a few systems consisting of 20 kW class lasers directed by 1.5 m diameter telescopes using adaptive optics. The results found such a network of ground stations to mitigate more than 85 percent of conjunctions and could lower the expected number of collisions in Low Earth Orbit (LEO) by an order of magnitude. While these are impressive numbers that indicate LightForce's utility in the short-term, the remaining 15 % of possible collisions contain (among others) conjunctions between two massive objects that would add large amount of debris if they collide. Still, conjunctions between massive objects and smaller objects can be mitigated. Hence, we choose to expand the capabilities of the simulation software to investigate the overall effect of a network of LightForce stations on the long-term debris evolution. In the second part of this paper, we will present the planned simulation approach for that effort. For the efficiency analysis of collision avoidance in the current debris environment, we utilize a simulation approach that uses the entire Two Line Element (TLE) catalog in LEO for a given day as initial input. These objects are propagated for one year and an all-on-all conjunction analysis is performed. For conjunctions that fall below a range threshold, we calculate the probability of collision and record those values. 
To assess efficiency, we compare a baseline (without collision avoidance) conjunction analysis with an analysis where LightForce is active. Using that approach, we take into account that collision avoidance maneuvers could have effects on third objects. Performing all-on-all conjunction analyses for extended periods of time requires significant computer resources; hence we implemented this simulation using a highly parallel approach on the NASA Pleiades supercomputer.
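The all-on-all screening step described here can be sketched in miniature (toy linear propagation and invented state vectors; the actual system propagates the full TLE catalog, e.g. with SGP4, and then computes collision probabilities for the flagged pairs):

```python
import itertools, math

def propagate(obj, t):
    # Toy linear propagation; real screening would propagate TLEs with SGP4.
    return tuple(p + v * t for p, v in zip(obj["p0"], obj["v"]))

def screen_conjunctions(catalog, times, threshold_km=5.0):
    """All-on-all screening: record (time, i, j, range) for pairs below threshold."""
    events = []
    for t in times:
        pos = [propagate(o, t) for o in catalog]
        for i, j in itertools.combinations(range(len(catalog)), 2):
            rng = math.dist(pos[i], pos[j])
            if rng < threshold_km:
                events.append((t, i, j, rng))
    return events

# Two objects on crossing tracks (positions in km, velocities in km/s, invented)
catalog = [
    {"p0": (0.0, 0.0, 0.0), "v": (7.5, 0.0, 0.0)},
    {"p0": (30.0, 1.0, 0.0), "v": (-7.5, 0.0, 0.0)},
]
events = screen_conjunctions(catalog, times=[i * 0.5 for i in range(10)])
```

Only pairs that dip below the range threshold are kept for the probability-of-collision step, so the cheap O(n²) distance loop filters the catalog before any expensive computation.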
A multilingual audiometer simulator software for training purposes.
Kompis, Martin; Steffen, Pascal; Caversaccio, Marco; Brugger, Urs; Oesch, Ivo
2012-04-01
A set of algorithms, which allows a computer to determine the answers of simulated patients during pure tone and speech audiometry, is presented. Based on these algorithms, a computer program for training in audiometry was written and found to be useful for teaching purposes. To develop a flexible audiometer simulator software as a teaching and training tool for pure tone and speech audiometry, both with and without masking. First a set of algorithms, which allows a computer to determine the answers of a simulated, hearing-impaired patient, was developed. Then, the software was implemented. Extensive use was made of simple, editable text files to define all texts in the user interface and all patient definitions. The software 'audiometer simulator' is available for free download. It can be used to train pure tone audiometry (both with and without masking), speech audiometry, measurement of the uncomfortable level, and simple simulation tests. Due to the use of text files, the user can alter or add patient definitions and all texts and labels shown on the screen. So far, English, French, German, and Portuguese user interfaces are available and the user can choose between German or French speech audiometry.
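The core of such a simulated patient can be sketched as a threshold comparison plus a simple level search (hypothetical thresholds; the paper's algorithms additionally handle masking, speech audiometry, and uncomfortable levels):

```python
# Hypothetical audiogram for the simulated patient (dB HL per frequency, one ear).
THRESHOLDS_DB_HL = {250: 20, 500: 25, 1000: 40, 2000: 55, 4000: 70}

def patient_responds(freq_hz, level_db_hl, thresholds=THRESHOLDS_DB_HL):
    # The simulated patient signals only when the tone reaches threshold.
    return level_db_hl >= thresholds[freq_hz]

def find_threshold(freq_hz, start=0, step=5, limit=120):
    # Simplified ascending search; a real trainer would follow e.g. Hughson-Westlake.
    level = start
    while level <= limit and not patient_responds(freq_hz, level):
        level += step
    return level
```

Because the patient definition is just a table, it can live in an editable text file, which is exactly what makes the trainer's patients and interface languages easy to extend.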
High fidelity simulations of infrared imagery with animated characters
NASA Astrophysics Data System (ADS)
Näsström, F.; Persson, A.; Bergström, D.; Berggren, J.; Hedström, J.; Allvar, J.; Karlsson, M.
2012-06-01
High fidelity simulations of IR signatures and imagery tend to be slow and do not have effective support for animation of characters. Simplified rendering methods based on computer graphics methods can be used to overcome these limitations. This paper presents a method to combine these tools and produce simulated high fidelity thermal IR data of animated people in terrain. Infrared signatures for human characters have been calculated using RadThermIR. To handle multiple character models, these calculations use a simplified material model for the anatomy and clothing. Weather and temperature conditions match the IR-texture used in the terrain model. The calculated signatures are applied to the animated 3D characters that, together with the terrain model, are used to produce high fidelity IR imagery of people or crowds. For high level animation control and crowd simulations, HLAS (High Level Animation System) has been developed. There are tools available to create and visualize skeleton based animations, but tools that allow control of the animated characters on a higher level, e.g. for crowd simulation, are usually expensive and closed source. We need the flexibility of HLAS to add animation into an HLA enabled sensor system simulation framework.
Neural Biomarkers for Dyslexia, ADHD, and ADD in the Auditory Cortex of Children.
Serrallach, Bettina; Groß, Christine; Bernhofs, Valdis; Engelmann, Dorte; Benner, Jan; Gündert, Nadine; Blatow, Maria; Wengenroth, Martina; Seitz, Angelika; Brunner, Monika; Seither, Stefan; Parncutt, Richard; Schneider, Peter; Seither-Preisler, Annemarie
2016-01-01
Dyslexia, attention deficit hyperactivity disorder (ADHD), and attention deficit disorder (ADD) show distinct clinical profiles that may include auditory and language-related impairments. Currently, an objective brain-based diagnosis of these developmental disorders is still unavailable. We investigated the neuro-auditory systems of dyslexic, ADHD, ADD, and age-matched control children (N = 147) using neuroimaging, magnetoencephalography and psychoacoustics. All disorder subgroups exhibited an oversized left planum temporale and an abnormal interhemispheric asynchrony (10-40 ms) of the primary auditory evoked P1-response. Considering right auditory cortex morphology, bilateral P1 source waveform shapes, and auditory performance, the three disorder subgroups could be reliably differentiated with outstanding accuracies of 89-98%. We therefore for the first time provide differential biomarkers for a brain-based diagnosis of dyslexia, ADHD, and ADD. The method not only allowed for clear discrimination between two subtypes of attentional disorders (ADHD and ADD), a topic controversially discussed for decades in the scientific community, but also revealed the potential for objectively identifying comorbid cases. Notably, in children playing a musical instrument, after three and a half years of training the observed interhemispheric asynchronies were reduced by about two thirds, suggesting a strong beneficial influence of musical experience on brain development. These findings might have far-reaching implications for both research and practice and enable a profound understanding of the brain-related etiology, diagnosis, and musically based therapy of common auditory-related developmental disorders and learning disabilities.
Formalism of photons in a nonlinear microring resonator
NASA Astrophysics Data System (ADS)
Tran, Quang Loc; Yupapin, Preecha
2018-03-01
In this paper, using short Gaussian pulses input from a monochromatic light source, we simulate the photon distribution and analyse the output port signals of a PANDA nonlinear ring resonator. The present analysis is restricted to directional couplers characterized by two parameters, the power coupling coefficient κ and the power coupling loss γ. Add/drop filters are also employed and investigated for their suitability for implementation in practical communication systems. The simulations were conducted using a combination of Lumerical FDTD Solutions and Lumerical MODE Solutions software.
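A hedged sketch of an add/drop filter response in the same two-parameter coupler description (power coupling κ, coupling loss γ), using common textbook ring-resonator transfer functions rather than the paper's full PANDA model; all numeric values are illustrative:

```python
import cmath, math

def add_drop_response(phi, kappa1=0.1, kappa2=0.1, gamma=0.01, a=0.99):
    # Through/drop intensities of an add/drop ring; kappa = power coupling,
    # gamma = coupler power loss, a = single-pass amplitude, phi = round-trip phase.
    t1 = math.sqrt((1 - gamma) * (1 - kappa1))
    t2 = math.sqrt((1 - gamma) * (1 - kappa2))
    k1 = math.sqrt((1 - gamma) * kappa1)
    k2 = math.sqrt((1 - gamma) * kappa2)
    rt = a * cmath.exp(1j * phi)                      # one round trip
    through = (t1 - t2 * rt) / (1 - t1 * t2 * rt)
    drop = -k1 * k2 * cmath.sqrt(rt) / (1 - t1 * t2 * rt)
    return abs(through) ** 2, abs(drop) ** 2

T_on,  D_on  = add_drop_response(0.0)       # on resonance: power exits the drop port
T_off, D_off = add_drop_response(math.pi)   # off resonance: power passes through
```

The through/drop contrast between the two phases is what makes such filters usable as wavelength-selective elements in a communication system.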
Electromelting of confined monolayer ice.
Qiu, Hu; Guo, Wanlin
2013-05-10
In sharp contrast to the prevailing view that electric fields promote water freezing, here we show by molecular dynamics simulations that monolayer ice confined between two parallel plates can melt into liquid water under a perpendicularly applied electric field. The melting temperature of the monolayer ice decreases with increasing strength of the external field, due to field-induced disruption of the well-ordered hydrogen-bond network created by the water-wall interaction. This electromelting process should add an important new ingredient to the physics of water.
Domestic Ice Breaking (DOMICE) Simulation Model User Guide
2013-02-01
Second, add new ice data to the variable “D9 Historical Ice Data (SIGRID Coded) NBL Waterways” (D9_historical_ice_d3), which contains the...within that “ NBL ” scheme. The interpretation of the SIGRID ice codes into ice thickness estimates is also contained within the sub- module “District 9...User Guide) “D9 Historical Ice Data (SIGRID Coded) NBL Waterways” (see Section 5.1.1.3.2 of this User Guide) “Historical District 1 Weekly Air
2012-01-01
performance. Obstacle climbing using the tail is compared to results from a previous robot with a posterior body segment and body flexion joint. Actual...3. Mechanisms of Locomotion for Multi-Modal Mobility 3.1. Gait and Tail Design Demands of multi-modal locomotion motivated a quadruped design for...tail instead of a rear body segment simplifies waterproofing design requirements and adds stability both on land and in water. This new morphology is
Noise-enhanced CVQKD with untrusted source
NASA Astrophysics Data System (ADS)
Wang, Xiaoqun; Huang, Chunhui
2017-06-01
The performance of one-way and two-way continuous variable quantum key distribution (CVQKD) protocols can be increased by adding some noise on the reconciliation side. In this paper, we propose to add noise on the reconciliation side to improve the performance of CVQKD with an untrusted source. We derive the key rate for this case and analyze the impact of the additive noise. The simulation results show that the optimal additive noise can improve the performance of the system in terms of maximum transmission distance and tolerable excess noise.
50 CFR 648.165 - Framework specifications.
Code of Federal Regulations, 2010 CFR
2010-10-01
... Council may, at any time, initiate action to add or adjust management measures if it finds that action is... management measures are based allows for adequate time to publish a proposed rule, and whether regulations... 648.165 Wildlife and Fisheries FISHERY CONSERVATION AND MANAGEMENT, NATIONAL OCEANIC AND ATMOSPHERIC...
Synthesis of spatially variant lattices.
Rumpf, Raymond C; Pazos, Javier
2012-07-02
It is often desired to functionally grade and/or spatially vary a periodic structure like a photonic crystal or metamaterial, yet no general method for doing this has been offered in the literature. A straightforward procedure is described here that allows many properties of the lattice to be spatially varied at the same time while producing a final lattice that is still smooth and continuous. Properties include unit cell orientation, lattice spacing, fill fraction, and more. This adds many degrees of freedom to a design such as spatially varying the orientation to exploit directional phenomena. The method is not a coordinate transformation technique so it can more easily produce complicated and arbitrary spatial variance. To demonstrate, the algorithm is used to synthesize a spatially variant self-collimating photonic crystal to flow a Gaussian beam around a 90° bend. The performance of the structure was confirmed through simulation and it showed virtually no scattering around the bend that would have arisen if the lattice had defects or discontinuities.
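The key idea, spatially varying a lattice property while keeping the result smooth and continuous, can be illustrated in one dimension by integrating a spatially varying grating vector into a phase function (illustrative values; the paper's algorithm solves for the phase over a full grid):

```python
import math

# 1-D sketch: grade the lattice period from 1.0 to 1.5 while keeping the
# lattice smooth by integrating the local grating vector K(x) = 2*pi/period(x).
N, dx = 400, 0.05
period = [1.0 + 0.5 * i / N for i in range(N)]
phase, lattice = [], []
phi = 0.0
for i in range(N):
    phi += 2 * math.pi / period[i] * dx     # phi(x) = integral of K(x) dx
    phase.append(phi)
    lattice.append(math.cos(phi))           # smooth, tear-free graded lattice
```

Naively writing cos(K(x)·x) instead would tear the lattice wherever K changes; integrating K into a single phase function is what keeps the structure free of defects and discontinuities.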
Slow and Steady: Ocean Circulation. The Influence of Sea Surface Height on Ocean Currents
NASA Technical Reports Server (NTRS)
Haekkinen, Sirpa
2000-01-01
The study of ocean circulation is vital to understanding how our climate works. The movement of the ocean is closely linked to the progression of atmospheric motion. Winds close to sea level add momentum to ocean surface currents. At the same time, heat that is stored and transported by the ocean warms the atmosphere above and alters air pressure distribution. Therefore, any attempt to model climate variation accurately must include reliable calculations of ocean circulation. Unlike movement of the atmosphere, movement of the ocean's waters takes place mostly near the surface. The major patterns of surface circulation form gigantic circular cells known as gyres. They are categorized according to their general location (equatorial, subtropical, subpolar, and polar) and may run across an entire ocean. The smaller-scale cell of ocean circulation is known as an eddy. Eddies are much more common than gyres and much more difficult to track in computer simulations of ocean currents.
NASA Astrophysics Data System (ADS)
Miriello, D.; Bloise, A.; De Luca, R.; Apollaro, C.; Crisci, G. M.; Medaglia, S.; Taliano Grasso, A.
2015-06-01
Dressel 2-4 amphorae are a type of pottery that was used to transport wine, produced in the Mediterranean area between the first century BC and the second century AD. This study shows, for the first time, that their production also occurred in Ionian Calabria. These results were achieved by studying 11 samples of archaeological pottery (five samples of Dressel 2-4 and six samples of other ceramic types) taken from Cariati (Calabria, southern Italy). The composition of the pottery was compared with that of the local raw materials (clays and sands) potentially usable for their production. Samples were studied by polarized optical microscopy and analysed by XRF, XRPD and Raman spectroscopy. An innovative approach, based on applying the Microsoft Excel "Solver" add-in to geochemical data, was used to define the provenance of the archaeological pottery and to calculate the mixtures of local clay and sand needed for its production.
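The mixture calculation can be sketched as a one-parameter least-squares problem (illustrative oxide values, not the paper's data): find the clay fraction f that best reproduces the pottery composition from the two raw materials.

```python
# Illustrative oxide compositions (SiO2, Al2O3, Fe2O3 wt%), not the paper's data.
clay    = [60.0, 18.0, 7.0]
sand    = [80.0, 10.0, 2.0]
pottery = [66.0, 15.6, 5.5]

# Closed-form least squares for the single mixing fraction f in
# f*clay + (1-f)*sand ~ pottery, clamped to a physical range.
num = sum((c - s) * (p - s) for c, s, p in zip(clay, sand, pottery))
den = sum((c - s) ** 2 for c, s in zip(clay, sand))
f = max(0.0, min(1.0, num / den))   # -> 0.7, i.e. a 70:30 clay:sand mix
```

Here f comes out to 0.7 by construction; Excel's "Solver" performs the equivalent minimization over the measured geochemical data, possibly with more endmembers and constraints.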
A novel role for visual perspective cues in the neural computation of depth
Kim, HyungGoo R.; Angelaki, Dora E.; DeAngelis, Gregory C.
2014-01-01
As we explore a scene, our eye movements add global patterns of motion to the retinal image, complicating visual motion produced by self-motion or moving objects. Conventionally, it has been assumed that extra-retinal signals, such as efference copy of smooth pursuit commands, are required to compensate for the visual consequences of eye rotations. We consider an alternative possibility: namely, that the visual system can infer eye rotations from global patterns of image motion. We visually simulated combinations of eye translation and rotation, including perspective distortions that change dynamically over time. We demonstrate that incorporating these “dynamic perspective” cues allows the visual system to generate selectivity for depth sign from motion parallax in macaque area MT, a computation that was previously thought to require extra-retinal signals regarding eye velocity. Our findings suggest novel neural mechanisms that analyze global patterns of visual motion to perform computations that require knowledge of eye rotations. PMID:25436667
NASA Astrophysics Data System (ADS)
Heo, Youn Jeong; Cho, Jeongho; Heo, Moon Beom
2010-07-01
The broadcast ephemeris and IGS ultra-rapid predicted (IGU-P) products are primarily available for use in real-time GPS applications. The IGU orbit precision has been remarkably improved since late 2007, but its clock products have not shown acceptably high-quality prediction performance. One reason for this fact is that satellite atomic clocks in space can be easily influenced by various factors such as temperature and environment and this leads to complicated aspects like periodic variations, which are not sufficiently described by conventional models. A more reliable prediction model is thus proposed in this paper in order to be utilized particularly in describing the periodic variation behaviour satisfactorily. The proposed prediction model for satellite clocks adds cyclic terms to overcome the periodic effects and adopts delay coordinate embedding, which offers the possibility of accessing linear or nonlinear coupling characteristics like satellite behaviour. The simulation results have shown that the proposed prediction model outperforms the IGU-P solutions at least on a daily basis.
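The modelling idea, a polynomial part plus cyclic terms for the periodic variations, can be sketched on synthetic data (epoch spacing, period, and amplitudes are invented; the proposed model additionally uses delay-coordinate embedding):

```python
import math

# Synthetic clock series: bias + drift + one 12-hour cyclic term
# (5-minute epochs over 10 days; values are invented, not IGU products).
N, P = 2880, 144
t = list(range(N))
truth = [5.0 + 0.01 * ti + 0.3 * math.sin(2 * math.pi * ti / P) for ti in t]

# 1) Closed-form linear fit for the polynomial part.
tm, xm = sum(t) / N, sum(truth) / N
a1 = (sum((ti - tm) * (xi - xm) for ti, xi in zip(t, truth))
      / sum((ti - tm) ** 2 for ti in t))
a0 = xm - a1 * tm
resid = [xi - (a0 + a1 * ti) for ti, xi in zip(t, truth)]

# 2) Fourier projection of the residual onto the known period recovers
#    the amplitude of the cyclic term (A ~ 0.3, B ~ 0 here).
A = 2 / N * sum(r * math.sin(2 * math.pi * ti / P) for ti, r in zip(t, resid))
B = 2 / N * sum(r * math.cos(2 * math.pi * ti / P) for ti, r in zip(t, resid))
```

A purely polynomial model leaves the sinusoid entirely in the residual; adding the cyclic terms removes it, which is the motivation for the proposed prediction model.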
Traffic Aware Planner for Cockpit-Based Trajectory Optimization
NASA Technical Reports Server (NTRS)
Woods, Sharon E.; Vivona, Robert A.; Henderson, Jeffrey; Wing, David J.; Burke, Kelly A.
2016-01-01
The Traffic Aware Planner (TAP) software application is a cockpit-based advisory tool designed to be hosted on an Electronic Flight Bag and to enable and test the NASA concept of Traffic Aware Strategic Aircrew Requests (TASAR). The TASAR concept provides pilots with optimized route changes (including altitude) that reduce fuel burn and/or flight time, avoid interactions with known traffic, weather and restricted airspace, and may be used by the pilots to request a route and/or altitude change from Air Traffic Control. Developed using an iterative process, TAP's latest improvements include human-machine interface design upgrades and added functionality based on the results of human-in-the-loop simulation experiments and flight trials. Architectural improvements have been implemented to prepare the system for operational-use trials with partner commercial airlines. Future iterations will enhance coordination with airline dispatch and add functionality to improve the acceptability of TAP-generated route-change requests to pilots, dispatchers, and air traffic controllers.
Way-Scaling to Reduce Power of Cache with Delay Variation
NASA Astrophysics Data System (ADS)
Goudarzi, Maziar; Matsumura, Tadayuki; Ishihara, Tohru
The share of leakage in cache power consumption increases with technology scaling. Choosing a higher threshold voltage (Vth) and/or gate-oxide thickness (Tox) for cache transistors reduces leakage, but impacts cell delay. We show that due to uncorrelated random within-die delay variation, only some (not all) of the cells actually violate the cache delay after the above change. We propose to add a spare cache way to replace delay-violating cache lines separately in each cache set. Using SPICE and gate-level simulations in a commercial 90 nm process, we show that choosing higher Vth and Tox and adding one spare way to a 4-way 16 KB cache reduces leakage power by 42%, which, depending on the share of leakage in total cache power, gives up to 22.59% and 41.37% reduction of total energy in L1 instruction- and L2 unified-cache respectively, with a negligible delay penalty and without sacrificing cache capacity or timing-yield.
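The yield argument behind the spare way can be illustrated with a toy Monte Carlo (the violation probability and cache geometry are invented, not the paper's 90 nm data):

```python
import random

random.seed(1)
WAYS, SETS = 4, 128
P_SLOW = 0.02   # assumed probability a line violates delay after raising Vth/Tox

def cache_yield(spare_ways, trials=500):
    # A set survives if its spare ways can replace every delay-violating line;
    # the cache survives only if every set does.
    ok = 0
    for _ in range(trials):
        ok += all(
            sum(random.random() < P_SLOW for _ in range(WAYS)) <= spare_ways
            for _ in range(SETS)
        )
    return ok / trials

y0, y1 = cache_yield(0), cache_yield(1)   # without vs with one spare way
```

With even a small per-line violation probability, requiring all 128 sets to be violation-free makes the zero-spare yield collapse, while a single spare way per set restores most of it; that is the statistical leverage the proposal exploits.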
Leisman, Gerry; Mualem, Raed; Machado, Calixto
2013-01-01
ADD/ADHD is the most common and most studied neurodevelopmental problem. Recent statistics from the U.S. Center for Disease Control state that 11% or approximately one out of every nine children in the US and one in five high school boys are diagnosed with ADD/ADHD. This number is thought to be increasing at around 15-20% per year. The US National Institute of Mental Health's Multi-modal Treatment Study has shown that medication has no long-term benefit for those with ADHD. To effectively address ADD/ADHD from within the framework of child public health, an interdisciplinary strategy is necessary that is based on a neuroeducational model that can be readily implemented on a large-scale within the educational system. This study is based on previous findings that ADD/ADHD children possess underactivity between sub-cortical and cortical regions. An imbalance of activity or arousal in one area can result in functional disconnections similar to that seen in split-brain patients. Since ADD/ADHD children exhibit deficient performance on tests developed to measure perceptual laterality, evidence of weak laterality or failure to develop laterality has been found across various modalities (auditory, visual, tactile). This has reportedly resulted in abnormal cerebral organization and ineffective cortical specialization necessary for the development of language and non-language function. This pilot study examines groups of ADD/ADHD and control elementary school children all of whom were administered all of the subtests of the Wechsler Individual Achievement Tests, the Brown Parent Questionnaire, and given objective performance measures on tests of motor and sensory coordinative abilities. Results measured after a 12-week remediation program aimed at increasing the activity of the hypothesized underactive right hemisphere function, yielded significant improvement of greater than 2 years in grade level in all domains except in mathematical reasoning. 
The treated group also displayed a significant improvement in behavior, with a reduction in Brown scale behavioral scores. Non-treated control participants did not exhibit significant differences in academic measurements during the same 12-week period. Controls differed significantly from treatment participants in all domains after the 12-week period. The non-treatment group also demonstrated an increase in behavioral scores and increased symptoms of ADD/ADHD over the same time period compared with the treated group. Results are discussed in the context of the concept of functional disconnectivity in ADD/ADHD children.
NASA Astrophysics Data System (ADS)
Wright, Ashley J.; Walker, Jeffrey P.; Pauwels, Valentijn R. N.
2017-08-01
Floods are devastating natural hazards. To provide accurate, precise, and timely flood forecasts, there is a need to understand the uncertainties associated within an entire rainfall time series, even when rainfall was not observed. The estimation of an entire rainfall time series and model parameter distributions from streamflow observations in complex dynamic catchments adds skill to current areal rainfall estimation methods, allows for the uncertainty of entire rainfall input time series to be considered when estimating model parameters, and provides the ability to improve rainfall estimates from poorly gauged catchments. Current methods to estimate entire rainfall time series from streamflow records are unable to adequately invert complex nonlinear hydrologic systems. This study aims to explore the use of wavelets in the estimation of rainfall time series from streamflow records. Using the Discrete Wavelet Transform (DWT) to reduce rainfall dimensionality for the catchment of Warwick, Queensland, Australia, it is shown that model parameter distributions and an entire rainfall time series can be estimated. Including rainfall in the estimation process improves streamflow simulations by a factor of up to 1.78. This is achieved while estimating an entire rainfall time series, inclusive of days when none was observed. It is shown that the choice of wavelet can have a considerable impact on the robustness of the inversion. Combining the use of a likelihood function that considers rainfall and streamflow errors with the use of the DWT as a model data reduction technique allows the joint inference of hydrologic model parameters along with rainfall.
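The role of the DWT as a data-reduction step can be illustrated with a plain Haar transform (illustrative series; the study embeds the DWT inside a joint rainfall-parameter inversion):

```python
# Plain Haar DWT (orthogonal up to scaling) and its inverse.
def haar_fwd(x):
    coeffs = []
    while len(x) > 1:
        approx = [(a + b) / 2 for a, b in zip(x[::2], x[1::2])]
        detail = [(a - b) / 2 for a, b in zip(x[::2], x[1::2])]
        coeffs = detail + coeffs
        x = approx
    return x + coeffs              # [overall mean, details coarse -> fine]

def haar_inv(c):
    x, pos = c[:1], 1
    while pos < len(c):
        d = c[pos:pos + len(x)]
        pos += len(x)
        x = [v for a, dd in zip(x, d) for v in (a + dd, a - dd)]
    return x

rain = [0, 0, 0, 12, 8, 0, 0, 0]            # toy daily rainfall (mm), length 2^n
coeffs = haar_fwd(rain)
kept = [v if abs(v) >= 1.0 else 0.0 for v in coeffs]   # drop small coefficients
approx_rain = haar_inv(kept)                # low-dimensional approximation
```

Because rainfall is intermittent, most wavelet coefficients are near zero, so a handful of retained coefficients reconstructs the series closely; estimating those few coefficients instead of every daily value is what makes the inversion tractable.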
Go Figure: Computer Database Adds the Personal Touch.
ERIC Educational Resources Information Center
Gaffney, Jean; Crawford, Pat
1992-01-01
A database for recordkeeping for a summer reading club was developed for a public library system using an IBM PC and Microsoft Works. Use of the database resulted in more efficient program management, giving librarians more time to spend with patrons and enabling timely awarding of incentives. (LAE)
78 FR 65180 - Airworthiness Directives; MD Helicopters, Inc., Helicopters
Federal Register 2010, 2011, 2012, 2013, 2014
2013-10-31
... reducing the retirement life of each tail rotor blade (blade), performing a one-time visual inspection of... required reporting information to the FAA within 24 hours following the one-time inspection. Since we... pitting and the shot peen surface's condition in addition to cracks and corrosion, and adds certain part...
Married Thai Working Mothers: Coping with Initial Part-Time Doctoral Study
ERIC Educational Resources Information Center
Thinnam, Thanit
2011-01-01
Advanced educational attainment can "grow" a career. But acquiring a doctoral qualification adds study to existing work and family responsibilities, especially for women. This phenomenological research explores the experiences of eight Thai working mothers enrolled in the initial stage of part-time doctoral programs in Thailand. A…
The VIIRS Ocean Data Simulator Enhancements and Results
NASA Technical Reports Server (NTRS)
Robinson, Wayne D.; Patt, Fredrick S.; Franz, Bryan A.; Turpie, Kevin R.; McClain, Charles R.
2011-01-01
The VIIRS Ocean Science Team (VOST) has been developing an Ocean Data Simulator to create realistic VIIRS SDR datasets based on MODIS water-leaving radiances. The simulator is helping to assess instrument performance and scientific processing algorithms. Several changes were made in the last two years to complete the simulator and broaden its usefulness. The simulator is now fully functional and includes all sensor characteristics measured during prelaunch testing, including electronic and optical crosstalk influences, polarization sensitivity, and relative spectral response. Also included is the simulation of cloud and land radiances to make more realistic data sets and to understand their important influence on nearby ocean color data. The atmospheric tables used in the processing, including aerosol and Rayleigh reflectance coefficients, have been modeled using VIIRS relative spectral responses. The capabilities of the simulator were expanded to work in an unaggregated sample mode and to produce scans with additional samples beyond the standard scan. These features improve the capability to realistically add artifacts which act upon individual instrument samples prior to aggregation and which may originate from beyond the actual scan boundaries. The simulator was expanded to simulate all 16 M-bands and the EDR processing was improved to use these bands to make an SST product. The simulator is being used to generate global VIIRS data from and in parallel with the MODIS Aqua data stream. Studies have been conducted using the simulator to investigate the impact of instrument artifacts. This paper discusses the simulator improvements and results from the artifact impact studies.
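The motivation for the unaggregated sample mode can be shown in a few lines: an artifact that acts on an individual detector sample must be injected before the n:1 sample aggregation (3:1 near nadir for VIIRS) averages neighboring samples together (toy numbers):

```python
def aggregate(samples, n=3):
    # On-board style n:1 aggregation: average each group of n samples.
    return [sum(samples[i:i + n]) / n for i in range(0, len(samples), n)]

raw = [100.0] * 9       # unaggregated detector samples, arbitrary radiance units
raw[4] += 3.0           # per-sample artifact, e.g. a crosstalk spike

aggregated = aggregate(raw)   # the spike is diluted to +1 by the 3:1 average
```

Adding the same artifact after aggregation would overstate its amplitude threefold, which is why the simulator applies sample-level artifacts before aggregating.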
The VIIRS ocean data simulator enhancements and results
NASA Astrophysics Data System (ADS)
Robinson, Wayne D.; Patt, Frederick S.; Franz, Bryan A.; Turpie, Kevin R.; McClain, Charles R.
2011-10-01
The VIIRS Ocean Science Team (VOST) has been developing an Ocean Data Simulator to create realistic VIIRS SDR datasets based on MODIS water-leaving radiances. The simulator is helping to assess instrument performance and scientific processing algorithms. Several changes were made in the last two years to complete the simulator and broaden its usefulness. The simulator is now fully functional and includes all sensor characteristics measured during prelaunch testing, including electronic and optical crosstalk influences, polarization sensitivity, and relative spectral response. Also included is the simulation of cloud and land radiances to make more realistic data sets and to understand their important influence on nearby ocean color data. The atmospheric tables used in the processing, including aerosol and Rayleigh reflectance coefficients, have been modeled using VIIRS relative spectral responses. The capabilities of the simulator were expanded to work in an unaggregated sample mode and to produce scans with additional samples beyond the standard scan. These features improve the capability to realistically add artifacts which act upon individual instrument samples prior to aggregation and which may originate from beyond the actual scan boundaries. The simulator was expanded to simulate all 16 M-bands and the EDR processing was improved to use these bands to make an SST product. The simulator is being used to generate global VIIRS data from and in parallel with the MODIS Aqua data stream. Studies have been conducted using the simulator to investigate the impact of instrument artifacts. This paper discusses the simulator improvements and results from the artifact impact studies.
ERIC Educational Resources Information Center
Checkoway, Amy; Gamse, Beth; Velez, Melissa; Caven, Meghan; de la Cruz, Rodolfo; Donoghue, Nathaniel; Kliorys, Kristina; Linkow, Tamara; Luck, Rachel; Sahni, Sarah; Woodford, Michelle
2012-01-01
The Massachusetts Expanded Learning Time (ELT) initiative was established in 2005 with planning grants that allowed a limited number of schools to explore a redesign of their respective schedules and add time to their day or year. Participating schools are required to expand learning time by at least 300 hours per academic year to improve student…
Implementation of interconnect simulation tools in spice
NASA Technical Reports Server (NTRS)
Satsangi, H.; Schutt-Aine, J. E.
1993-01-01
Accurate computer simulation of high-speed digital computer circuits and communication circuits requires a multimode approach to simulate both the devices and the interconnects between devices. Classical (lumped-parameter) circuit analysis algorithms are needed for circuit devices and the network formed by the interconnected devices. The interconnects, however, have to be modeled as transmission lines, which incorporate electromagnetic field analysis. One approach to writing a multimode simulator is to take an existing software package which performs either lumped-parameter analysis or field analysis and add the missing type of analysis routines to the package. In this work a traditionally lumped-parameter simulator, SPICE, is modified so that it will perform lossy transmission line analysis using a different model approach. Modifying SPICE3E2 or any other large software package is not a trivial task. An understanding of the programming conventions used, the simulation software, and the simulation algorithms is required. This thesis was written to clarify the procedure for installing a device into SPICE3E2. The installation of three devices is documented; the installations of the first two provide a foundation for installation of the lossy line, which is the third device. The details of the discussion are specific to SPICE, but the concepts will be helpful when performing installations into other circuit analysis packages.
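The quantities a lossy-line device must reproduce follow from standard transmission-line theory; a sketch with illustrative per-unit-length RLGC values (not taken from the thesis):

```python
import cmath, math

def line_params(R, L, G, C, f):
    # Per-unit-length RLGC -> propagation constant and characteristic impedance.
    jw = 2j * math.pi * f
    gamma = cmath.sqrt((R + jw * L) * (G + jw * C))   # alpha + j*beta, per metre
    z0 = cmath.sqrt((R + jw * L) / (G + jw * C))
    return gamma, z0

# Illustrative 50-ohm microstrip-like values, 0.1 m line at 1 GHz.
gamma, z0 = line_params(R=5.0, L=2.5e-7, G=1e-4, C=1e-10, f=1e9)
H = cmath.exp(-gamma * 0.1)   # V_out/V_in for a matched termination
```

A lumped-parameter simulator has no native notion of gamma or Z0; giving SPICE a device that embodies them is precisely the multimode extension the work describes.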
The effect of topography of upper mantle discontinuities on SS precursors
NASA Astrophysics Data System (ADS)
Koroni, M.; Trampert, J.
2015-12-01
We assessed the reliability of methods used to infer the topography of the mantle transition zone discontinuities. In particular, using the spectral-element method, we explored the effect of topography of the '410' and '660' mantle discontinuities on the travel times of SS precursors recorded on transverse component seismograms. The latter are routinely used to infer the topography of mantle transition zone discontinuities. The step from precursory travel times to topographic changes is mainly done using linearised ray theory, or sometimes using finite frequency kernels. We simulated exact seismograms in 1-D and 3-D elastic models of the mantle. In a second simulation, we added topography to the discontinuities. We compared the waveforms obtained with and without topography by cross-correlation of the SS precursors. Since we did not add noise, the precursors are visible in individual seismograms without the need for stacking. The resulting time anomalies were then converted into topographic variations and compared to the original models of topography. We found that linearised ray theory gives a relatively good idea of the location of the uplifts and depressions of the discontinuities, provided that the ray coverage is good, although it seriously underestimates the amplitude of the topography. The amplitude of the topographic variation is underestimated on average by a factor of 2.8 for the '660' and of 4.5 for the '410'. Additionally, we found a strong non-linearity in the measured data which cannot be modelled without a fully non-linear inversion for elastic structure and discontinuities simultaneously.
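The measurement chain, cross-correlating precursors with and without topography and converting the lag to a depth perturbation, can be sketched as follows (Gaussian wavelet, invented wave speed, and a simple vertical-incidence conversion dh ≈ v·dt/2; the sign convention depends on the phase geometry):

```python
import math

DT = 0.1                                          # s per sample
pulse = lambda t: math.exp(-(t - 5.0) ** 2)       # toy precursor wavelet
ref     = [pulse(i * DT) for i in range(200)]         # flat discontinuity
shifted = [pulse(i * DT - 0.4) for i in range(200)]   # topography: 0.4 s later

def xcorr_lag(a, b, max_lag=20):
    # Lag (in s) maximizing the cross-correlation of a against b.
    best = max(range(-max_lag, max_lag + 1),
               key=lambda l: sum(a[i] * b[i - l]
                                 for i in range(len(a)) if 0 <= i - l < len(b)))
    return best * DT

dt = xcorr_lag(shifted, ref)   # recovered delay, 0.4 s
dh = 10.0 * dt / 2             # ~2 km with an assumed 10 km/s wave speed
```

The study's finding is that while this kind of conversion locates uplifts and depressions reasonably well, the recovered dh systematically underestimates the true amplitude by factors of roughly 3 to 5.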
Quantitative use of Palaeo-Proxy Data in Global Circulation Models
NASA Astrophysics Data System (ADS)
Collins, M.
2003-04-01
It is arguably one of the ultimate aims of palaeo-modelling science to somehow "get the palaeo-proxy data into the model", i.e. to constrain the climate of the model with the trajectory of the real climate recorded in the palaeo data. The traditional way of interfacing data with models is to use data assimilation. This presents a number of problems in the palaeo context, as the data are more often representative of seasonal, annual, or decadal climate while models have time steps of order minutes; hence the model increments are likely to be vanishingly small. Also, variational data assimilation schemes would require the adjoint of the coupled ocean-atmosphere model and the adjoint of the functions which translate model variables such as temperature and precipitation into the palaeo-proxies, both of which are hard to determine because of the high degree of non-linearity in the system and the wide range of space and time scales. An alternative is to add forward models of proxies to the model and run "many years" of simulation until an analog state is found which matches the palaeo data for each season, year, decade, etc. Clearly "many years" might range from a few thousand years to almost infinity, depending on the number of degrees of freedom in the climate system and on the error characteristics of the palaeo data. The length of simulation required is probably beyond the supercomputer capacity of a single institution, hence an alternative is to use the idle capacity of home and business personal computers - the climateprediction.net project.
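The "analog state" search described above amounts to scanning many simulated years for the one whose forward-modelled proxies best match the observed record. A minimal sketch, where the function name, the chi-square misfit, and all numbers are illustrative assumptions rather than the climateprediction.net machinery:

```python
import numpy as np

def best_analog(simulated_proxies, observed, sigma):
    """Index of the simulated year whose forward-modelled proxies best
    match the observed palaeo record under a chi-square misfit.
    simulated_proxies: (n_years, n_proxies); observed, sigma: (n_proxies,)."""
    misfit = np.sum(((simulated_proxies - observed) / sigma) ** 2, axis=1)
    return int(np.argmin(misfit))

rng = np.random.default_rng(0)
sims = rng.normal(size=(1000, 4))   # stand-in for proxies from 1000 model years
obs = sims[123].copy()              # pretend year 123 is a perfect analog
print(best_analog(sims, obs, sigma=np.full(4, 0.1)))  # -> 123
```

The abstract's point is that with few proxies and large errors the required `n_years` can explode, which is what motivates distributed computing.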
Narad, Megan; Garner, Annie A; Brassell, Anne A; Saxby, Dyani; Antonini, Tanya N; O'Brien, Kathleen M; Tamm, Leanne; Matthews, Gerald; Epstein, Jeffery N
2013-10-01
This study extends the literature regarding attention-deficit/hyperactivity disorder (ADHD)-related driving impairments to a newly licensed, adolescent population. To investigate the combined risks of adolescence, ADHD, and distracted driving (cell phone conversation and text messaging) on driving performance. Adolescents aged 16 to 17 years with (n = 28) and without (n = 33) ADHD engaged in a simulated drive under 3 conditions (no distraction, cell phone conversation, and texting). During each condition, one unexpected event (eg, another car suddenly merging into driver's lane) was introduced. Cell phone conversation, texting, and no distraction while driving. Self-report of driving history, average speed, standard deviation of speed, standard deviation of lateral position, and braking reaction time during driving simulation. Adolescents with ADHD reported fewer months of driving experience and a higher proportion of driving violations than control subjects. After controlling for months of driving history, adolescents with ADHD demonstrated more variability in speed and lane position than control subjects. There were no group differences for braking reaction time. Furthermore, texting negatively impacted the driving performance of all participants as evidenced by increased variability in speed and lane position. To our knowledge, this study is one of the first to investigate distracted driving in adolescents with ADHD and adds to a growing body of literature documenting that individuals with ADHD are at increased risk for negative driving outcomes. Furthermore, texting significantly impairs the driving performance of all adolescents and increases existing driving-related impairment in adolescents with ADHD, highlighting the need for education and enforcement of regulations against texting for this age group.
A controlled study of Tourette syndrome. IV. Obsessions, compulsions, and schizoid behaviors.
Comings, D E; Comings, B G
1987-01-01
To determine the frequency of obsessive, compulsive, and schizoid behaviors in Tourette syndrome (TS), we prospectively questioned 246 patients with TS, 17 with attention-deficit disorder (ADD), 15 with ADD due to a TS gene, and 47 random controls. The comparative frequency of obsessive, compulsive, and repetitive behaviors--such as obsessive unpleasant thoughts, obsessive silly thoughts, echolalia, palilalia, touching things excessively, touching things a specific number of times, touching others excessively, sexual touching, biting or hurting oneself, head banging, rocking, mimicking others, counting things, and occasional or frequent public exhibitionism--were significantly more common in TS patients than in controls. The frequency of each of these was much higher for grade 3 (severe) TS. Most of these behaviors also occurred significantly more often in individuals with ADD or in individuals with ADD secondary to TS (ADD 2° TS). When these features were combined into an obsessive-compulsive score, 45.4% of TS patients had a score of 4-15, whereas 8.5% of controls had a score of 4 or 5. These results indicate that obsessive-compulsive behaviors are an integral part of the expression of the TS gene and can be inherited as an autosomal dominant trait. Schizoid symptoms, such as thinking that people were watching them or plotting against them, were significantly more common in TS patients than in controls. Auditory hallucinations of hearing voices were present in 14.6% of TS patients, compared with 2.1% of controls (P = .02). These symptoms were absent in ADD patients but present in ADD 2° TS patients. These voices were often blamed for telling them to do bad things and were frequently identified with the devil. None of the controls had a total schizoid behavior score greater than 3, whereas 10.9% of the TS patients had scores of 4-10 (P = .02). This frequency increased to 20.6% in the grade 3 TS patients.
These quantitative results confirm our clinical impression that some TS patients have paranoid ideations, often feel that people are out to get them, and hear voices. PMID:3479015
RCN adds crosses in remembrance.
2014-11-18
Outgoing RCN president Andrea Spyropoulos (left) and RCN student council member Claire Jeeves planted crosses at the Field of Remembrance at Westminster last week to honour all nurses who have died in times of conflict.
Dichromated polyvinyl alcohol (DC-PVA) wet processed for high index modulation
NASA Astrophysics Data System (ADS)
Rallison, Richard D.
1997-04-01
PVA films have been used as mold releases, strippable coatings, and binders for photopolymers; when sensitized with metals and/or dyes they have been used as photoresists, volume HOEs, multiplexed holographic optical memory, and real-time non-destructive holographic testing. The list goes on and includes Slime and birth control. In holography, DC-PVA is a real-time photoanisotropic recording material useful for phase conjugation experiments and also a stable long-term storage medium needing no processing other than heat. We now greatly increase the versatility of PVA by boosting the index modulation by almost two orders of magnitude, adding broadband display and HOE applications that were not possible before. Simple two- or three-step liquid processing is all that is required to make the index modulation grow.
Using complex networks towards information retrieval and diagnostics in multidimensional imaging
NASA Astrophysics Data System (ADS)
Banerjee, Soumya Jyoti; Azharuddin, Mohammad; Sen, Debanjan; Savale, Smruti; Datta, Himadri; Dasgupta, Anjan Kr; Roy, Soumen
2015-12-01
We present a fresh and broad yet simple approach towards information retrieval in general and diagnostics in particular by applying the theory of complex networks on multidimensional, dynamic images. We demonstrate a successful use of our method with the time series generated from high content thermal imaging videos of patients suffering from the aqueous deficient dry eye (ADDE) disease. Remarkably, network analyses of thermal imaging time series of contact lens users and patients upon whom Laser-Assisted in situ Keratomileusis (Lasik) surgery has been conducted, exhibit pronounced similarity with results obtained from ADDE patients. We also propose a general framework for the transformation of multidimensional images to networks for futuristic biometry. Our approach is general and scalable to other fluctuation-based devices where network parameters derived from fluctuations, act as effective discriminators and diagnostic markers.
Pipeline active filter utilizing a booth type multiplier
NASA Technical Reports Server (NTRS)
Nathan, Robert (Inventor)
1987-01-01
Multiplier units of the modified Booth decoder and carry-save adder/full adder combination are used to implement a pipeline active filter wherein pixel data is processed sequentially, and each pixel need only be accessed once and multiplied by a predetermined number of weights simultaneously, one multiplier unit for each weight. Each multiplier unit uses only one row of carry-save adders, whose results are shifted to less significant multiplier positions, and one row of full adders that adds the carry to the sum in order to provide the correct binary number for the product Wp. The full adder is also used to add this product Wp to the sum of products ΣWp from preceding multiply units. If m×m multiplier units are pipelined, the system would be capable of processing a kernel array of m×m weighting factors.
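The modified Booth decoder at the heart of each multiplier unit recodes the multiplier, radix-4, into digits drawn from {-2, -1, 0, 1, 2}, halving the number of partial products the adder rows must sum. A hedged software sketch of that recoding (illustrative Python, not the patented hardware; function names are invented):

```python
def booth_digits(m, n_bits):
    """Radix-4 modified Booth recoding of an n_bits two's-complement
    multiplier m into digits in {-2, -1, 0, 1, 2} (least significant first)."""
    m &= (1 << n_bits) - 1
    bit = lambda i: 0 if i < 0 else (m >> min(i, n_bits - 1)) & 1  # sign-extend
    return [-2 * bit(2 * i + 1) + bit(2 * i) + bit(2 * i - 1)
            for i in range((n_bits + 1) // 2)]

def booth_multiply(a, m, n_bits=8):
    """Multiply a by m as the hardware would: sum partial products d_i * a * 4**i."""
    return sum(d * a * 4 ** i for i, d in enumerate(booth_digits(m, n_bits)))

print(booth_multiply(13, 11))   # -> 143
print(booth_multiply(13, -11))  # -> -143
```

In the patent, the `4**i` scaling is the shift to less significant positions and the summation is carried out by the carry-save/full adder rows.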
Simulating 3D Spacecraft Constellations for Low Frequency Radio Imaging
NASA Astrophysics Data System (ADS)
Hegedus, A. M.; Amiri, N.; Lazio, J.; Belov, K.; Kasper, J. C.
2016-12-01
Constellations of small spacecraft could be used to realize a low-frequency phased array for either heliophysics or astrophysics observations. However, there are issues that arise with an orbiting array that do not occur on the ground, thus rendering much of the existing radio astronomy software inadequate for data analysis and simulation. In this work we address these issues and consider the performance of two constellation concepts. The first is a 32-spacecraft constellation for astrophysical observations, and the second is a 5-element concept for pointing to the location of radio emission from coronal mass ejections (CMEs). For the first, we fill the software gap by extending the APSYNSIM software to simulate the aperture synthesis for a radio interferometer in orbit. This involved incorporating the dynamic baselines from the relative motion of the individual spacecraft as well as the capability to add galactic noise. The ability to simulate phase errors corresponding to positional uncertainty of the antennas was also added. The upgraded software was then used to model the imaging of a 32-spacecraft constellation that would orbit the Moon to image radio galaxies like Cygnus A at 0.3-30 MHz. Animated images showing the improvement of the dirty image as the orbits progressed were made, along with RMSE plots that show how well the dirty image matches the input image as a function of integration time. For the second concept we performed radio interferometric simulations of the Sun Radio Interferometer Space Experiment (SunRISE) using the Common Astronomy Software Applications (CASA) package. SunRISE is a five-spacecraft phased array that would orbit Earth to localize the low-frequency radio emission from CMEs. This involved simulating the array in CASA, creating truth images for the CMEs over the entire frequency band of SunRISE, and observing them with the simulated array to see how well it could localize the true position of the CME.
The results of our analysis show that we can localize the radio emission originating from the head or flanks of the CMEs in spite of the phase errors introduced by uncertainties in orbit and clock estimation.
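The RMSE-versus-integration-time plots mentioned above compare the accumulating dirty image against the input (truth) image. A minimal sketch of such a metric, with invented array shapes and normalisation (the actual APSYNSIM/CASA pipelines work on visibilities, not ready-made image frames):

```python
import numpy as np

def rmse_curve(truth, dirty_frames):
    """RMSE between the (normalised) truth sky image and the accumulated
    dirty image after each integration interval: a way to track how imaging
    improves as the constellation's moving baselines fill the (u,v) plane."""
    truth = truth / truth.max()
    acc = np.zeros_like(truth)
    curve = []
    for k, frame in enumerate(dirty_frames, start=1):
        acc = acc + frame
        dirty = acc / k
        dirty = dirty / dirty.max()      # compare on a common normalisation
        curve.append(float(np.sqrt(np.mean((dirty - truth) ** 2))))
    return curve

# Ideal check: frames that already equal the truth give zero error throughout.
truth = np.array([[0.0, 1.0], [0.5, 0.2]])
frames = [truth.copy() for _ in range(5)]
print(max(rmse_curve(truth, frames)) < 1e-12)  # -> True
```

With real synthesized frames the curve would start high and fall as the orbits progress, which is the behavior the paper's plots document.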
USING TIME VARIANT VOLTAGE TO CALCULATE ENERGY CONSUMPTION AND POWER USE OF BUILDING SYSTEMS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Makhmalbaf, Atefe; Augenbroe, Godfried
2015-12-09
Buildings are the main consumers of electricity across the world. However, in the research and studies related to building performance assessment, the focus has been on evaluating the energy efficiency of buildings whereas the instantaneous power efficiency has been overlooked as an important aspect of total energy consumption. As a result, we never developed adequate models that capture both thermal and electrical characteristics (e.g., voltage) of building systems to assess the impact of variations in the power system and emerging technologies of the smart grid on building energy and power performance and vice versa. This paper argues that the power performance of buildings as a function of electrical parameters should be evaluated in addition to systems’ mechanical and thermal behavior. The main advantage of capturing electrical behavior of building load is to better understand instantaneous power consumption and more importantly to control it. Voltage is one of the electrical parameters that can be used to describe load. Hence, voltage dependent power models are constructed in this work and they are coupled with existing thermal energy models. Lack of models that describe electrical behavior of systems also adds to the uncertainty of energy consumption calculations carried out in building energy simulation tools such as EnergyPlus, a common building energy modeling and simulation tool. To integrate voltage-dependent power models with thermal models, the thermal cycle (operation mode) of each system was fed into the voltage-based electrical model. Energy consumption of systems used in this study was simulated using EnergyPlus. Simulated results were then compared with estimated and measured power data. The mean square error (MSE) between simulated, estimated, and measured values was calculated. Results indicate that estimated power has lower MSE when compared with measured data than simulated results.
Results discussed in this paper will illustrate the significance of enhancing building energy models with electrical characteristics. This would support different studies such as those related to modernization of the power system that require micro scale building-grid interaction, evaluating building energy efficiency with power efficiency considerations, and also design and control decisions that rely on accuracy of building energy simulation results.
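The paper's headline comparison is the MSE of each model's power against measurements. That metric can be expressed directly; the readings below are invented placeholders, not the paper's data:

```python
import numpy as np

def mse(a, b):
    """Mean square error between two power time series (same units, e.g. W)."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.mean((a - b) ** 2))

# Invented placeholder readings, not the study's measurements:
measured  = [1020.0,  995.0, 1010.0, 1005.0]
estimated = [1015.0, 1000.0, 1008.0, 1003.0]   # voltage-based power model
simulated = [ 980.0, 1040.0,  990.0, 1030.0]   # thermal-only simulation
print(mse(estimated, measured) < mse(simulated, measured))  # -> True
```

The study's conclusion is exactly this ordering: the voltage-aware estimate tracks the meter more closely than the thermal-only simulation.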
Development and deployment of a water-crop-nutrient simulation model embedded in a web application
NASA Astrophysics Data System (ADS)
Langella, Giuliano; Basile, Angelo; Coppola, Antonio; Manna, Piero; Orefice, Nadia; Terribile, Fabio
2016-04-01
Scientific research on environmental and agricultural issues has long invested substantial effort in the development and application of models for prediction and simulation in spatial and temporal domains. This is fulfilled by studying and observing natural processes (e.g. rainfall, water and chemical transport in soils, crop growth) whose spatiotemporal behavior can be reproduced, for instance, to predict irrigation and fertilizer requirements and yield quantities/qualities. In this work a mechanistic model to simulate water flow and solute transport in the soil-plant-atmosphere continuum is presented. This desktop computer program was written according to the specific requirements of developing web applications. The model is capable of addressing the following issues all together: (a) water balance and (b) solute transport; (c) crop modelling; (d) GIS interoperability; (e) embeddability in web-based geospatial Decision Support Systems (DSS); (f) adaptability at different scales of application; and (g) ease of code modification. We maintained the desktop character in order to further develop (e.g. integrate novel features) and run the key program modules for testing and validation purposes, but we also developed a middleware component to allow the model to run simulations directly over the web, without software to be installed. The GIS capabilities allow the web application to make simulations in a user-defined region of interest (delimited over a geographical map) without the need to specify the proper combination of model parameters. This is possible since the geospatial database collects information on pedology, climate, crop parameters, and soil hydraulic characteristics. Pedological attributes include the spatial distribution of key soil data such as soil profile horizons and texture. Further, hydrological parameters are selected according to the knowledge about the spatial distribution of soils.
The availability and definition of these attributes in the geospatial domain allow simulation outputs at different spatial scales. Two different applications were implemented using the same framework but with different configurations of the software pieces making up the physically based modelling chain: an irrigation tool simulating water requirements and their dates, and a fertilization tool for optimizing, in particular, mineral nitrogen additions.
QuickStrike ASOC Battlefield Simulation: Preparing the War Fighter to Win
NASA Technical Reports Server (NTRS)
Jones, Richard L.
2010-01-01
The QuickStrike ASOC (Air Support Operations Center) Battlefield Simulation fills a crucial gap in USAF and United Kingdom Close Air Support (CAS) and airspace manager training. The system now provides six squadrons with the capability to conduct total-mission training events whenever the personnel and time are available. When the 111th ASOC returned from their first deployment to Afghanistan they realized the training available prior to deployment was inadequate. They sought an organic training capability focused on the ASOC mission that was low cost, simple to use, adaptable, and available now. Using a commercial off-the-shelf simulation, they developed a complete training system by adapting the simulation to their training needs. Through more than two years of spiral development, incorporating lessons learned, the system has matured, and can now realistically replicate the Tactical Operations Center (TOC) in Kabul, Afghanistan, the TOC supporting the mission in Iraq, or can expand to support a major conflict scenario. The training system provides a collaborative workspace for the training audience and exercise control group via integrated software and workstations that can easily adapt to new mission requirements and TOC configurations. The system continues to mature. Based on inputs from the war fighter, new capabilities have been incorporated to add realism and simplify the scenario development process. The QuickStrike simulation can now import TBMCS Air Tasking Order air mission data and can provide air and ground tracks to a common operating picture, presented through either C2PC or JADOCS. This organic capability to practice team processes and tasks and to conduct mission rehearsals proved its value in the 111th ASOS's next deployment. The ease of scenario development and the simple-to-learn, intuitive game-like interface enable the squadrons to develop and share scenarios incorporating lessons learned from every deployment.
These war fighters have now filled the training gap and have the capability they need to train to win.
Fast time- and frequency-domain finite-element methods for electromagnetic analysis
NASA Astrophysics Data System (ADS)
Lee, Woochan
Fast electromagnetic analysis in time and frequency domain is of critical importance to the design of integrated circuits (IC) and other advanced engineering products and systems. Many IC structures constitute a very large scale problem in modeling and simulation, the size of which also continuously grows with the advancement of the processing technology. This results in numerical problems beyond the reach of even the most powerful existing computational resources. Different from many other engineering problems, the structure of most ICs is special in the sense that its geometry is of Manhattan type and its dielectrics are layered. Hence, it is important to develop structure-aware algorithms that take advantage of the structure specialties to speed up the computation. In addition, among existing time-domain methods, explicit methods can avoid solving a matrix equation. However, their time step is traditionally restricted by the space step for ensuring the stability of a time-domain simulation. Therefore, making explicit time-domain methods unconditionally stable is important to accelerate the computation. Beyond time-domain methods, frequency-domain methods have suffered from an indefinite system that makes it difficult for an iterative solution to converge quickly. The first contribution of this work is a fast time-domain finite-element algorithm for the analysis and design of very large-scale on-chip circuits. The structure specialty of on-chip circuits such as Manhattan geometry and layered permittivity is preserved in the proposed algorithm. As a result, the large-scale matrix solution encountered in the 3-D circuit analysis is turned into a simple scaling of the solution of a small 1-D matrix, which can be obtained in linear (optimal) complexity with negligible cost. Furthermore, the time step size is not sacrificed, and the total number of time steps to be simulated is also significantly reduced, thus achieving a total cost reduction in CPU time.
The second contribution is a new method for making an explicit time-domain finite-element method (TDFEM) unconditionally stable for general electromagnetic analysis. In this method, for a given time step, we find the unstable modes that are the root cause of instability, and deduct them directly from the system matrix resulting from a TDFEM based analysis. As a result, an explicit TDFEM simulation is made stable for an arbitrarily large time step irrespective of the space step. The third contribution is a new method for full-wave applications from low to very high frequencies in a TDFEM based on matrix exponential. In this method, we directly deduct the eigenmodes having large eigenvalues from the system matrix, thus achieving a significantly increased time step in the matrix exponential based TDFEM. The fourth contribution is a new method for transforming the indefinite system matrix of a frequency-domain FEM to a symmetric positive definite one. We deduct non-positive definite component directly from the system matrix resulting from a frequency-domain FEM-based analysis. The resulting new representation of the finite-element operator ensures an iterative solution to converge in a small number of iterations. We then add back the non-positive definite component to synthesize the original solution with negligible cost.
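The "find the unstable modes and deduct them from the system matrix" idea in the second contribution can be sketched with a symmetric eigendecomposition: for an explicit leapfrog update, modes whose eigenvalues exceed 4/Δt² violate the stability bound, so they are removed from the operator. This is a schematic illustration under that standard bound, not the thesis' exact TDFEM formulation:

```python
import numpy as np

def deflate_unstable(A, dt):
    """Deduct from the symmetric system matrix A the eigenmodes whose
    eigenvalues violate the explicit (leapfrog) stability bound
    lam <= 4/dt**2, so the update stays stable at the chosen time step."""
    lam, V = np.linalg.eigh(A)
    bad = lam > 4.0 / dt ** 2
    # A = V diag(lam) V^T; zero out the unstable part of the spectrum
    return A - (V[:, bad] * lam[bad]) @ V[:, bad].T

# One mode (1e6) far above the bound 4/dt^2 = 4e4 for dt = 0.01:
A = np.diag([1.0, 10.0, 1e6])
Ad = deflate_unstable(A, 0.01)
print(np.allclose(np.diag(Ad), [1.0, 10.0, 0.0]))  # -> True
```

The deflated operator leaves all stable modes untouched, which is why accuracy is preserved while the time step becomes arbitrarily large.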
The dubious assessment of gay, lesbian, and bisexual adolescents of Add Health.
Savin-Williams, Ritch C; Joyner, Kara
2014-04-01
In this essay, we argue that researchers who base their investigations of nonheterosexuality on reports of romantic attractions of adolescent participants from Wave 1 of Add Health must account for their disappearance in future waves of data collection. The high prevalence of Wave 1 youth with either both-sex or same-sex romantic attractions was initially striking and unexpected. Subsequent data from Add Health indicated that this prevalence sharply declined over time, such that over 70% of these Wave 1 adolescents identified as exclusively heterosexual as Wave 4 young adults. Three explanations are proposed to account for the high prevalence rate and the temporal inconsistency: (1) gay adolescents going into the closet during their young adult years; (2) confusion regarding the use and meaning of romantic attraction as a proxy for sexual orientation; and (3) the existence of mischievous adolescents who played a "jokester" role by reporting same-sex attraction when none was present. Relying on Add Health data, we dismissed the first explanation as highly unlikely and found support for the other two. Importantly, these "dubious" gay, lesbian, and bisexual adolescents may have led researchers to erroneously conclude from the data that sexual-minority youth are more problematic than heterosexual youth in terms of physical, mental, and social health.
Attention Deficit Disorder. NICHCY Briefing Paper.
ERIC Educational Resources Information Center
Fowler, Mary
This briefing paper uses a question-and-answer format to provide basic information about children with attention deficit disorder (ADD). Questions address the following concerns: nature and incidence of ADD; causes of ADD; signs of ADD (impulsivity, hyperactivity, disorganization, social skill deficits); the diagnostic ADD assessment; how to get…
Verkuil, Bart; Brosschot, Jos F; Tollenaar, Marieke S; Lane, Richard D; Thayer, Julian F
2016-10-01
Prolonged cardiac activity that exceeds metabolic needs can be detrimental for somatic health. Psychological stress could result in such "additional cardiac activity." In this study, we examined whether prolonged additional reductions in heart rate variability (AddHRVr) can be measured in daily life with an algorithm that filters out changes in HRV that are purely due to metabolic demand, as indexed by movement, using a brief calibration procedure. We tested whether these AddHRVr periods were related to worry, stress, and negative emotions. Movement and the root of the mean square of successive differences (RMSSD) in heart rate were measured during a calibration phase and the subsequent 24 h in 32 participants. Worry, stress, explicit and implicit emotions were assessed hourly using smartphones. The Levels of Emotional Awareness Scale and resting HRV were used to account for individual differences. During calibration, person-specific relations between movement and RMSSD were determined. The 24-h data were used to detect prolonged periods (i.e., 7.5 min) of AddHRVr. AddHRVr periods were associated with worrying, with decreased explicit positive affect, and with increased tension, but not with the frequency of stressful events or implicit emotions. Only in people high in emotional awareness and high in resting HRV did changes in AddHRVr covary with changes in explicit emotions. The algorithm can be used to capture prolonged reductions in HRV that are not due to metabolic needs. This enables the real-time assessment of episodes of potentially detrimental cardiac activity and its psychological determinants in daily life.
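A hedged sketch of the study's two ingredients: RMSSD from inter-beat intervals, and flagging epochs whose HRV falls below what the person-specific movement calibration predicts. The function names, the linear calibration form, and the margin are illustrative assumptions, not the authors' published algorithm:

```python
import numpy as np

def rmssd(ibi_ms):
    """Root mean square of successive differences of inter-beat intervals (ms)."""
    d = np.diff(np.asarray(ibi_ms, dtype=float))
    return float(np.sqrt(np.mean(d ** 2)))

def additional_reduction(rmssd_obs, movement, slope, intercept, margin):
    """Flag epochs whose observed RMSSD falls below the value the
    person-specific calibration predicts from movement, minus a margin.
    The linear calibration and the margin are illustrative assumptions."""
    predicted = intercept + slope * np.asarray(movement, dtype=float)
    return np.asarray(rmssd_obs, dtype=float) < predicted - margin

ibi = [800, 810, 790, 805, 795]          # toy inter-beat intervals in ms
print(round(rmssd(ibi), 2))              # -> 14.36
flags = additional_reduction([20, 45], movement=[0.2, 0.2],
                             slope=-30.0, intercept=50.0, margin=5.0)
print(flags.tolist())                    # -> [True, False]
```

In the study, consecutive flagged epochs lasting at least 7.5 minutes would constitute an AddHRVr period.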
Brkicic, Ljiljana Sovic; Godman, Brian; Voncina, Luka; Sovic, Slavica; Relja, Maja
2012-06-01
Parkinson's disease (PD) is the second most common neurological disease affecting older adults. Consequently, this disease should be a focus among payers, with increasing utilization of newer premium-priced patent-protected add-on therapies to stabilize or even improve motor function over time. However, expenditure can be moderated by reforms. Consequently, there is a need to assess the influence of these reforms on the prescribing efficiency for drugs to treat PD in Croatia before proposing additional measures. Prescribing efficiency is defined as increasing the use of add-on therapies for similar expenditure. An observational retrospective study of the Croatian Institute for Health Insurance database of drugs to treat patients with PD in Croatia from 2000 to 2010 was carried out, with utilization measured in defined daily doses (defined as the average maintenance dose of a drug when used in its major indication in adults). The study years were chosen to reflect recent reforms. Only reimbursed expenditure is measured from a health insurance perspective. Utilization of drugs to treat PD increased by 218% between 2000 and 2010. Reimbursed expenditure increased by 360%, principally driven by increasing utilization of premium-priced patent-protected add-on therapies, including ropinirole and pramipexole. However, following recent reforms, reducing expenditure/defined daily dose for the different drugs, as well as overall expenditure, stabilized reimbursed expenditure between 2005 and 2010. Treatment of PD is complex, and add-on therapies are needed to improve care. Reimbursed expenditure should now fall following stabilization, despite increasing volumes, as successive add-on therapies lose their patents, further increasing prescribing efficiency.
NASA Astrophysics Data System (ADS)
Cao, Shuo; Zhu, Zong-Hong
2014-10-01
Using relatively complete observational data concerning four angular diameter distance (ADD) measurements and combined SN+GRB observations representing current luminosity distance (LD) data, this paper investigates the compatibility of these two cosmological distances considering three classes of dark energy equation of state (EoS) reconstruction. In particular, we use strongly gravitationally lensed systems from various large systematic gravitational lens surveys and galaxy clusters, which yield the Hubble constant independent ratio between two angular diameter distances, Dls/Ds. Our results demonstrate that, with more general categories of standard ruler data, ADD and LD data are compatible at the 1σ level. Second, we note that consistency between ADD and LD data is maintained irrespective of the EoS parametrizations: there is a good match between the universally explored Chevallier-Polarski-Linder model and other formulations of the cosmic equation of state. Especially for the truncated generalized equation of state (GEoS) model with β = -2, the conclusions obtained with ADD and LD are almost the same. Finally, statistical analysis of the generalized dark energy equation of state performed on four classes of ADD data provides stringent constraints on the EoS parameters w0, wβ, and β, which suggest that dark energy was a subdominant component at early times. Moreover, the GEoS parametrization with β ≃ 1 seems to be a more favorable two-parameter model to characterize the cosmic equation of state, because the combined angular diameter distance data (SGL+CBF+BAO+WMAP9) provide the best-fit value β = 0.751 (+0.465/-0.480).
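For reference, the Chevallier-Polarski-Linder parametrization used as the benchmark above is simply w(z) = w0 + wa·z/(1+z), which the abstract's GEoS models generalize. A one-liner with illustrative parameter values:

```python
def w_cpl(z, w0=-1.0, wa=0.0):
    """Chevallier-Polarski-Linder dark-energy equation of state:
    w(z) = w0 + wa * z / (1 + z)."""
    return w0 + wa * z / (1.0 + z)

print(w_cpl(0.0, w0=-1.0, wa=0.5))  # -> -1.0  (today)
print(w_cpl(1.0, w0=-1.0, wa=0.5))  # -> -0.75 (at redshift 1)
```

The limit z → ∞ gives w0 + wa, which is what lets the parametrization probe whether dark energy was subdominant at early times.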
NASA Astrophysics Data System (ADS)
Cukier, Robert I.
2011-01-01
Leucine zippers consist of alpha-helical monomers dimerized (or oligomerized) into alpha-superhelical structures known as coiled coils. Forming the correct interface of a dimer from its monomers requires an exploration of configuration space focused on the side chains of one monomer that must interdigitate with sites on the other monomer. The aim of this work is to generate good interfaces in short simulations starting from separated monomers. Methods are developed to accomplish this goal based on an extension of a previously introduced [Su and Cukier, J. Phys. Chem. B 113, 9595 (2009)] Hamiltonian temperature replica exchange method (HTREM), which scales the Hamiltonian in both potential and kinetic energies and was used for the simulation of dimer melting curves. The new method, HTREM_MS (MS designates mean square), focused on interface formation, adds restraints to the Hamiltonians for all but the physical system, which is characterized by the normal molecular dynamics force field at the desired temperature. The restraints in the nonphysical systems serve to prevent the monomers from separating too far, and have the dual aims of enhancing the sampling of close-in configurations and breaking unwanted correlations in the restrained systems. The method is applied to a 31-residue truncation of the 33-residue leucine zipper (GCN4-p1) of the yeast transcriptional activator GCN4. The monomers are initially separated by a distance that is beyond their capture length. HTREM simulations show that the monomers oscillate between dimerlike and monomerlike configurations, but do not form a stable interface. HTREM_MS simulations result in the dimer interface being faithfully reconstructed on a 2 ns time scale. A small number of systems (one physical and two restrained with modified potentials and higher effective temperatures) are sufficient.
An in silico mutant that should not dimerize, because it lacks the charged residues that provide electrostatic stabilization of the dimer, does not dimerize in HTREM_MS simulations, giving confidence in the method. The interface formation time scale is sufficiently short that using HTREM_MS as a screening tool to validate leucine zipper design methods may be feasible.
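The inter-monomer restraint described above, which prevents the monomers from separating too far in the nonphysical replicas, can be sketched as a flat-bottom harmonic term; the functional form, cutoff, and force constant below are illustrative assumptions, not the paper's exact restraint:

```python
# Flat-bottom distance restraint of the kind described for HTREM_MS:
# zero energy inside a cutoff d0 (close-in configurations sampled
# freely), harmonic penalty beyond it (large separations suppressed).
# d0 and k are hypothetical values, in arbitrary units.

def restraint_energy(d, d0=15.0, k=2.0):
    """Restraint energy for inter-monomer distance d; zero if d <= d0."""
    if d <= d0:
        return 0.0
    return 0.5 * k * (d - d0) ** 2

print(restraint_energy(10.0))  # 0.0  (inside the flat bottom)
print(restraint_energy(20.0))  # 25.0 (0.5 * 2.0 * 5.0**2)
```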
Recovering the slip history of a scenario earthquake in the Mexican subduction zone
NASA Astrophysics Data System (ADS)
Hjorleifsdottir, V.; Perez-Campos, X.; Iglesias, A.; Cruz-Atienza, V.; Ji, C.; Legrand, D.; Husker, A. L.; Kostoglodov, V.; Valdes Gonzalez, C.
2011-12-01
The Guerrero segment of the Mexican subduction zone has not experienced a large earthquake for almost 100 years (Singh et al., 1981). Due to its proximity to Mexico City, which was devastated by an earthquake in the more distant Michoacan segment in 1985, it has been studied extensively in recent years. Silent slip events have been observed by a local GPS network (Kostoglodov et al. 2003), and seismic observations from a dense linear array of broadband seismometers (MASE) have provided detailed images of the crustal structure of this part of the subduction zone (see for example Pérez-Campos et al., 2008, Iglesias et al., 2010). Interestingly, the part of the fault zone that is locked during the inter-seismic period is thought to reach up to, or inland from, the coastline. In the event of a large megathrust earthquake, this geometry could allow recordings from above the fault interface. These types of recordings can be critical to resolve the history of slip as a function of time on the fault plane during the earthquake. A well-constrained model of slip-time history, together with other observations as mentioned above, could provide very valuable insights into earthquake physics and the earthquake cycle. In order to prepare the scientific response for such an event, we generate a scenario earthquake in the Guerrero segment of the subduction zone. We calculate synthetic strong motion records, seismograms for global stations, and static offsets on the Earth's surface. To simulate the real data available, we add real noise, recorded during times of no earthquake, to the synthetic data. We use a simulated annealing inversion algorithm (Ji et al., 1999) to invert the different datasets and combinations thereof for the time-history of slip on the fault plane. We present the recovery of the slip model using the different datasets, as well as idealized datasets, investigating the expected and best possible levels of recovery.
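The simulated annealing inversion cited above (Ji et al., 1999) is not reproduced here, but the generic annealing loop such inversions build on can be sketched: perturb the model, always accept improvements, accept degradations with a temperature-dependent probability, and cool gradually. Everything below (the quadratic misfit standing in for a waveform misfit, the step size, the cooling schedule) is a schematic assumption:

```python
import math
import random

# Generic simulated-annealing loop of the kind used in slip inversions:
# a quadratic misfit against a known "true" model stands in for a real
# data misfit over slip parameters. All settings are illustrative.

random.seed(0)

def misfit(model, target=(1.0, -2.0, 0.5)):
    """Sum-of-squares distance to the target model."""
    return sum((m - t) ** 2 for m, t in zip(model, target))

def anneal(n_steps=20000, t0=1.0, cooling=0.9995, step=0.1):
    model = [0.0, 0.0, 0.0]
    best, best_cost = list(model), misfit(model)
    t = t0
    for _ in range(n_steps):
        trial = [m + random.uniform(-step, step) for m in model]
        delta = misfit(trial) - misfit(model)
        # Metropolis rule: accept downhill always, uphill with prob exp(-delta/T)
        if delta < 0 or random.random() < math.exp(-delta / t):
            model = trial
            if misfit(model) < best_cost:
                best, best_cost = list(model), misfit(model)
        t *= cooling
    return best, best_cost

best, cost = anneal()
print("misfit reduced:", cost < 1.0)
```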
The role of simulation in mixed-methods research: a framework & application to patient safety.
Guise, Jeanne-Marie; Hansen, Matthew; Lambert, William; O'Brien, Kerth
2017-05-04
Research in patient safety is an important area of health services research and a national priority. It is challenging to investigate rare occurrences, explore potential causes, and account for the complex, dynamic context of healthcare, yet all are required in patient safety research. Simulation technologies have become widely accepted as education and clinical tools but have yet to become a standard tool for research. We developed a framework for research that integrates accepted patient safety models with mixed-methods research approaches, and we describe the performance of the framework in a working example of a large National Institutes of Health (NIH)-funded R01 investigation. This worked example of the framework in action identifies the strengths and limitations of qualitative and quantitative research approaches commonly used in health services research. Each approach builds essential layers of knowledge. We describe how the use of simulation ties these layers of knowledge together and adds new and unique dimensions of knowledge. A mixed-methods research approach that includes simulation provides a broad, multi-dimensional approach to health services and patient safety research.
Piloted simulation of one-on-one helicopter air combat at NOE flight levels
NASA Technical Reports Server (NTRS)
Lewis, M. S.; Aiken, E. W.
1985-01-01
A piloted simulation designed to examine the effects of terrain proximity and control system design on helicopter performance during one-on-one air combat maneuvering (ACM) is discussed. The NASA Ames Vertical Motion Simulator (VMS) and the computer-generated imagery (CGI) systems were modified to allow two aircraft to be independently piloted on a single CGI database. Engagements were begun with the blue aircraft already in a tail-chase position behind the red, and also with the two aircraft originating from positions unknown to each other. Maneuvering was very aggressive, and the safety requirements for minimum altitude, separation, and maximum bank angles typical of flight test were not used. Results indicate that the presence of terrain features adds an order of complexity to the task compared with clear-air ACM, and that a mix of attitude- and rate-command-type stability and control augmentation system (SCAS) designs may be desirable. The evaluation pilots judged the simulation system design, the flight paths flown, and the tactics used to compare favorably with actual flight test experiments.
Fusion Simulation Project Workshop Report
NASA Astrophysics Data System (ADS)
Kritz, Arnold; Keyes, David
2009-03-01
The mission of the Fusion Simulation Project is to develop a predictive capability for the integrated modeling of magnetically confined plasmas. This FSP report adds to the previous activities that defined an approach to integrated modeling in magnetic fusion. These previous activities included a Fusion Energy Sciences Advisory Committee panel that was charged to study integrated simulation in 2002. The report of that panel [Journal of Fusion Energy 20, 135 (2001)] recommended the prompt initiation of a Fusion Simulation Project. In 2003, the Office of Fusion Energy Sciences formed a steering committee that developed a project vision, roadmap, and governance concepts [Journal of Fusion Energy 23, 1 (2004)]. The current FSP planning effort involved 46 physicists, applied mathematicians, and computer scientists from 21 institutions, organized into four panels and a coordinating committee. These panels were constituted to consider: Status of Physics Components, Required Computational and Applied Mathematics Tools, Integration and Management of Code Components, and Project Structure and Management. The ideas reported here are the products of these panels, working together over several months and culminating in a 3-day workshop in May 2007.
Enhanced conformational sampling of carbohydrates by Hamiltonian replica-exchange simulation.
Mishra, Sushil Kumar; Kara, Mahmut; Zacharias, Martin; Koca, Jaroslav
2014-01-01
Knowledge of the structure and conformational flexibility of carbohydrates in an aqueous solvent is important to improving our understanding of how carbohydrates function in biological systems. In this study, we extend a variant of the Hamiltonian replica-exchange molecular dynamics (MD) simulation to improve the conformational sampling of saccharides in an explicit solvent. During the simulations, a biasing potential along the glycosidic-dihedral linkage between the saccharide monomer units in an oligomer is applied at various levels along the replica runs to enable effective transitions between various conformations. One reference replica runs under the control of the original force field. The method was tested on disaccharide structures and further validated on biologically relevant blood group B, Lewis X and Lewis A trisaccharides. The biasing potential-based replica-exchange molecular dynamics (BP-REMD) method provided a significantly improved sampling of relevant conformational states compared with standard continuous MD simulations, with modest computational costs. Thus, the proposed BP-REMD approach adds a new dimension to existing carbohydrate conformational sampling approaches by enhancing conformational sampling in the presence of solvent molecules explicitly at relatively low computational cost.
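The replica-exchange step underlying BP-REMD (and Hamiltonian replica exchange generally) swaps configurations between neighboring replicas using a Metropolis criterion on the cross-evaluated biased energies; a minimal sketch at a common temperature, with toy potentials standing in for the glycosidic-dihedral bias:

```python
import math
import random

# Metropolis acceptance for a Hamiltonian replica-exchange swap at a
# common temperature: replicas i and j (with biased potentials e_i, e_j)
# exchange configurations x_i, x_j with probability
# min(1, exp(-beta * delta)), where delta is the change in total energy.
# The quadratic potentials are toy stand-ins, not a carbohydrate bias.

def swap_accepted(e_i, e_j, x_i, x_j, beta=1.0, rng=random.random):
    delta = (e_i(x_j) + e_j(x_i)) - (e_i(x_i) + e_j(x_j))
    return delta <= 0.0 or rng() < math.exp(-beta * delta)

e_i = lambda x: x * x          # reference (stronger) potential
e_j = lambda x: 0.5 * x * x    # replica with a weaker bias

# A swap that lowers the total energy is always accepted:
print(swap_accepted(e_i, e_j, x_i=2.0, x_j=1.0))  # True
```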
ERIC Educational Resources Information Center
Adams, Lisa G.
2011-01-01
Take advantage of teen internet savvy and redirect students' online travels toward exploration of our environment through streaming real-time data (RTD). Studies have shown that using RTD adds relevancy to students' learning experiences and engages them in scientific investigations. (Contains 14 online resources and 5 figures.)
Minnesota Measures: 2014 Report on Higher Education Performance
ERIC Educational Resources Information Center
Armstrong, John; Bak, Leonid; Djurovich, Alexandra; Edlund, Melissa; Fergus, Meredith; Grimes, Tricia; Trost, Jennifer; Williams-Wyche, Shaun
2014-01-01
This report provides a resource of accurate, timely and comprehensive facts about higher education in Minnesota. It includes comparisons over time as well as national and peer institution comparisons to add context for the interpretation of the data. It is expected to be used by a number of stakeholder groups such as legislators, educators and…
78 FR 27867 - Airworthiness Directives; MD Helicopters Inc. Helicopters
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-13
... the retirement life of each tail rotor blade (blade), performing a one-time visual inspection of each... information to the FAA within 24 hours following the one-time inspection. Since we issued that AD, an accident... shot peen surface's condition in addition to cracks and corrosion, and would add certain part-numbered...
Push Is on to Add Time to School Day, Year
ERIC Educational Resources Information Center
Fleming, Nora
2011-01-01
Policymakers are promoting expanded learning time to help low-performing students, but the know-how and resources for implementation are lacking. Providence's expanded-school-day pilot is a partnership between the school district and the Providence After School Alliance, a nonprofit that manages after-school programs for low-income students in…
Gu, Shuyan; Wang, Xiaoyong; Qiao, Qing; Gao, Weiguo; Wang, Jian; Dong, Hengjin
2017-12-01
To estimate the long-term cost-effectiveness of exenatide twice daily vs insulin glargine once daily as add-on therapy to oral antidiabetic agents (OADs) for Chinese patients with type 2 diabetes (T2DM). The Cardiff Diabetes Model was used to simulate disease progression and estimate the long-term effects of exenatide twice daily vs insulin glargine once daily. Patient profiles and treatment effects required for the model were obtained from literature reviews (English and Chinese databases) and from a meta-analysis of 8 randomized controlled trials comparing exenatide twice daily with insulin glargine once daily add-on to OADs for T2DM in China. Medical expenditure data were collected from 639 patients with T2DM (aged ≥18 years) with and without complications incurred between January 1, 2014 and December 31, 2015 from claims databases in Shandong, China. Costs (2014 Chinese Yuan [¥]) and benefits were estimated, from the payers' perspective, over 40 years at a discount rate of 3%. A series of sensitivity analyses were performed. Patients on exenatide twice daily + OAD had a lower predicted incidence of most cardiovascular and hypoglycaemic events and lower total costs compared with those on insulin glargine once daily + OAD. Exenatide twice daily was associated with a gain of 1.94 quality-adjusted life years (QALYs) per patient vs insulin glargine once daily, at a cost saving of ¥117 706 (i.e. a saving of ¥60 764 per QALY gained). In Chinese patients with T2DM inadequately controlled by OADs, exenatide twice daily is a cost-effective add-on therapy alternative to insulin glargine once daily, and may address the excess medical need resulting from weight gain and hypoglycaemia in T2DM treatment. © 2017 John Wiley & Sons Ltd.
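The 3% discounting over a 40-year horizon used in the analysis above follows the standard present-value formula; a minimal sketch with a hypothetical constant annual cost (the abstract's totals are model outputs and cannot be reproduced from these inputs):

```python
# Standard discounting of costs or QALYs in long-horizon
# cost-effectiveness models: present value = sum over years t of
# annual_value / (1 + rate)**t. The annual cost below is hypothetical.

def present_value(annual_values, rate=0.03):
    """Present value of a stream of annual values, discounted yearly."""
    return sum(v / (1.0 + rate) ** t
               for t, v in enumerate(annual_values, start=1))

# 40 years of a constant annual cost of 1000 at a 3% discount rate
# (a 40-year annuity factor of about 23.11):
pv = present_value([1000.0] * 40, rate=0.03)
print(round(pv, 1))
```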
Layout-aware simulation of soft errors in sub-100 nm integrated circuits
NASA Astrophysics Data System (ADS)
Balbekov, A.; Gorbunov, M.; Bobkov, S.
2016-12-01
A Single Event Transient (SET) caused by a charged particle traveling through the sensitive volume of an integrated circuit (IC) may in some cases lead to errors in digital circuits. In technologies below 180 nm, a single particle can affect multiple devices, causing multiple SETs. This adds complexity to the design of fault-tolerant devices, because schematic-level design techniques become ineffective without layout consideration. The most common layout mitigation technique is spatial separation of the sensitive nodes of hardened circuits. Spatial separation decreases circuit performance and increases power consumption. Spacing should therefore be kept reasonable, and its scaling follows the trend of device-dimension scaling. This paper presents the development of an SET simulation approach comprising SPICE simulation with a "double exponent" current source as the SET model. The technique uses the layout in GDSII format to locate nearby devices that can be affected by a single particle and can share the generated charge. The developed software tool automates multiple simulations and gathers the produced data to present it as a sensitivity map. Examples of simulations of fault-tolerant cells and their sensitivity maps are presented in this paper.
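The "double exponent" current source named above is conventionally the double-exponential pulse normalized to the collected charge, I(t) = Q/(τ_f − τ_r) · (e^(−t/τ_f) − e^(−t/τ_r)); a sketch with assumed parameter values, not the paper's calibrated ones:

```python
import math

# Double-exponential current pulse commonly used as a SPICE SET model.
# The prefactor Q / (tau_f - tau_r) normalizes the pulse so that its
# integral over [0, inf) equals the collected charge Q, since
# integral(exp(-t/tau_f) - exp(-t/tau_r)) dt = tau_f - tau_r.
# Q, tau_r (rise), and tau_f (fall) below are illustrative values.

def set_current(t, q=0.1e-12, tau_r=10e-12, tau_f=200e-12):
    """SET current (A) at time t (s) for collected charge q (C)."""
    return q / (tau_f - tau_r) * (math.exp(-t / tau_f) - math.exp(-t / tau_r))

print(set_current(0.0))  # 0.0: the pulse starts at zero

# Numerically integrate to confirm the pulse delivers charge ~q:
dt = 0.5e-12
total = sum(set_current(i * dt) * dt for i in range(10000))
print(abs(total - 0.1e-12) < 0.01e-12)  # True
```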
Incorporation of EGPWS in the NASA Ames Research Center 747-400 Flight Simulator
NASA Technical Reports Server (NTRS)
Sallant, Ghislain; DeGennaro, Robert A.
2001-01-01
The NASA Ames Research Center CAE Boeing 747-400 flight simulator is used primarily for the study of human factors in aviation safety. The simulator is constantly upgraded to maintain a configuration match to a specific United Airlines aircraft and maintains the highest level of FAA certification to ensure the credibility of research results. United's 747-400 fleet, and hence the simulator, are transitioning from the older Ground Proximity Warning System (GPWS) to the state-of-the-art Enhanced Ground Proximity Warning System (EGPWS). GPWS was an early attempt to reduce or eliminate Controlled Flight Into Terrain (CFIT). Basic GPWS alerting modes include excessive descent rate, excessive terrain closure rate, altitude loss after takeoff, unsafe terrain clearance, excessive deviation below glideslope, advisory callouts, and windshear alerting. However, since GPWS uses the radar altimeter, which looks straight down, ample warning is not always provided. EGPWS retains all of the basic functions of GPWS but adds the ability to look ahead by comparing the aircraft position to an internal terrain database, providing additional alerting and display capabilities. This paper evaluates three methods of incorporating EGPWS in the simulator and describes the implementation and architecture of the preferred option.
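The look-ahead capability that distinguishes EGPWS from GPWS can be illustrated as projecting the flight path against a terrain elevation database; this simplified check is a conceptual sketch only, not the actual EGPWS algorithm, and all limits and terrain values are hypothetical:

```python
# Simplified terrain look-ahead of the kind EGPWS adds over GPWS:
# project the aircraft altitude forward along its track and flag an
# alert if predicted clearance over any terrain-database point ahead
# drops below a floor. Terrain profile and limits are hypothetical.

def terrain_alert(altitude_ft, vspeed_fpm, terrain_ahead_ft,
                  look_ahead_s=60.0, min_clearance_ft=500.0):
    """True if projected clearance over any point ahead falls below the floor."""
    n = len(terrain_ahead_ft)
    for i, elev in enumerate(terrain_ahead_ft):
        t_min = (i + 1) * look_ahead_s / n / 60.0  # time to reach point, min
        predicted_alt = altitude_ft + vspeed_fpm * t_min
        if predicted_alt - elev < min_clearance_ft:
            return True
    return False

# Level at 3000 ft toward terrain rising to 2800 ft: alert.
print(terrain_alert(3000.0, 0.0, [1000.0, 2000.0, 2800.0]))  # True
# Level at 3000 ft over low terrain: no alert.
print(terrain_alert(3000.0, 0.0, [500.0, 600.0]))  # False
```

A downward-looking radar altimeter sees only the first case too late; the database look-ahead flags it while the terrain is still ahead of the aircraft.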
Data Management Life Cycle, Final report
DOT National Transportation Integrated Search
2018-03-01
Transportation inefficiencies cost money, reduce safety, increase pollution-causing emissions, and take time away from people's lives. The solution is not always to build more roads, create parking spaces, or add more bus routes. Sometimes, the bet...
78 FR 73477 - List of Fisheries for 2014
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-06
... between 8 a.m. and 4 p.m. Eastern time, Monday through Friday, excluding Federal holidays. SUPPLEMENTARY... 2011 (Carretta and Enriquez, 2012). NMFS proposes to add grey whale (Eastern North Pacific) to the list...
Nobrega, Diego B; De Buck, Jeroen; Naqvi, S Ali; Liu, Gang; Naushad, Sohail; Saini, Vineet; Barkema, Herman W
2017-12-01
Assessment of antimicrobial use (AMU) is vital for interpreting the origin of changes in antimicrobial resistance (AMR). The objective of the present study was to estimate the association between AMU determined using on-farm treatment records (TR) and AMU determined using an inventory of empty drug containers (INV). Herds were selected to represent Canadian dairy farms. Producers were asked to record animal health events and treatments on a standard General Health Event form. For inventory data, 40-L receptacles were placed at various locations considered convenient for depositing all empty drug containers. Antimicrobial defined-daily dosages (ADD) were calculated for 51 Canadian herds using the 2 methods. Estimated AMU was 31,840 ADD using the INV and 14,487 ADD using the TR, indicating that for every TR entry, 2.20 times more treatments were observed using the INV. Mastitis, reproductive conditions, and dry cow therapy were the most frequent reasons for antimicrobial therapy when assessing TR. For all antimicrobials evaluated, mean ADD was higher using the INV than the TR. Regardless, a strong positive correlation (0.80) was observed between the 2 methods, indicating that herds with an increased number of ADD recorded using the INV also had an increased number of ADD recorded using the TR. Furthermore, a positive association was observed for the 6 most commonly used antimicrobials. In comparison with methods used in surveillance programs on AMU in livestock that assume constant use in all herds (i.e., sales data), the INV provided a herd-level-specific quantity of AMU positively correlated with AMU recorded at the animal level. The INV was easy to implement and provided a measure of total AMU in the herd. Availability of such information would be valuable for interpreting changes in AMR at the herd level and enabling evaluation of interventions for decreasing AMR. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
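Both AMU measures above reduce to the same defined-daily-dose arithmetic, and the herd-level comparison is a Pearson correlation; a minimal sketch with invented per-herd totals (not the study's data):

```python
import math

# Antimicrobial defined daily dosages (ADD): total active ingredient
# administered divided by a standard daily dose, as computed for both
# the treatment-record (TR) and container-inventory (INV) measures.
# Herd totals and the 300 mg standard dose below are invented.

def animal_daily_doses(total_mg, standard_daily_dose_mg):
    return total_mg / standard_daily_dose_mg

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

inv = [animal_daily_doses(m, 300.0) for m in (90e3, 150e3, 240e3, 330e3)]
tr = [animal_daily_doses(m, 300.0) for m in (45e3, 60e3, 120e3, 140e3)]

# INV totals exceed TR totals herd by herd, yet the two rank herds
# similarly, mirroring the study's strong positive correlation:
print(round(pearson(inv, tr), 2))
```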
Conservative GRMHD simulations of moderately thin, tilted accretion disks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Teixeira, Danilo Morales; Fragile, P. Chris; Zhuravlev, Viacheslav V.
2014-12-01
This paper presents our latest numerical simulations of accretion disks that are misaligned with respect to the rotation axis of a Kerr black hole. In this work, we use a new, fully conservative version of the Cosmos++ general relativistic magnetohydrodynamics (GRMHD) code, coupled with an ad hoc cooling function designed to control the thickness of the disk. Together these allow us to simulate the thinnest tilted accretion disks ever using a GRMHD code. In this way, we are able to probe the regime where the dimensionless stress and scale height of the disk become comparable. We present results for both prograde and retrograde cases. The simulated prograde tilted disk shows no sign of Bardeen-Petterson alignment even in the innermost parts of the disk. The simulated retrograde tilted disk, however, does show modest alignment. The implication of these results is that the parameter space associated with Bardeen-Petterson alignment for prograde disks may be rather small, only including very thin disks. Unlike our previous work, we find no evidence for standing shocks in our simulated tilted disks. We ascribe this to the black hole spin, tilt angle, and disk scale height all being small in these simulations. We also add to the growing body of literature pointing out that the turbulence driven by the magnetorotational instability in global simulations of accretion disks is not isotropic. Finally, we provide a comparison between our moderately thin, untilted reference simulation and other numerical simulations of thin disks in the literature.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-04
...The Coast Guard proposes to add, delete, and modify safety zones and special local regulations and add language to clarify time frames and notification requirements for annual marine events in the Sector Long Island Sound Captain of the Port (COTP) Zone. When these regulated areas are activated and subject to enforcement, this rule would restrict vessels from portions of water areas during these recurring events. The safety zones and special local regulations will facilitate public notification of events and provide protective measures for the maritime public and event participants from the hazards associated with these recurring events.
Modeling and simulating vortex pinning and transport currents for high temperature superconductors
NASA Astrophysics Data System (ADS)
Sockwell, K. Chad
Superconductivity is a phenomenon characterized by two hallmark properties: zero electrical resistance and the Meissner effect. These properties hold great promise for a new generation of resistance-free electronics and powerful superconducting magnets. However, this possibility is limited by the extremely low critical temperatures at which superconductors must operate, typically close to 0 K. The recent discovery of high-temperature superconductors has brought the critical temperature closer to room temperature than ever before, making the realization of room-temperature superconductivity a possibility. Simulations of superconducting technology and materials will be necessary to usher in the new wave of superconducting electronics. Unfortunately, these new materials come with new properties, such as effects from multiple electron bands, as is the case for magnesium diboride. Moreover, we must consider that all high-temperature superconductors are of the Type II variety, which possesses magnetic tubes of flux known as vortices. These vortices interact with transport currents, creating an electrical resistance through a process known as flux flow. Fortunately, this process can be prevented by placing impurities in the superconductor to pin the vortices, making vortex pinning a necessary aspect of our model. At this time there are no other models or simulations aimed at modeling vortex pinning with impurities in two-band materials. In this work we modify an existing Ginzburg-Landau model for two-band superconductors and add the ability to model normal inclusions (impurities) with a new approach that is unique to the two-band model. Simulations that attempt to model the material magnesium diboride are also presented. In particular, simulations of vortex pinning and transport currents are shown using the modified model. The qualitative properties of magnesium diboride are used to validate the model and its simulations.
One main goal on the computational side is to enlarge the domain size to produce more realistic simulations that avoid boundary pinning effects. In this work we also use the numerical software library Trilinos to parallelize the simulation and enlarge the domain size. Decoupling methods are investigated with the same goal of enlarging the domain size. The one-band Ginzburg-Landau model serves as a prototypical problem in this endeavor, and the methods shown to enlarge the domain size can be easily implemented in the two-band model.
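The two-band Ginzburg-Landau framework referred to above can be summarized by its free-energy density; the form below is the standard two-band GL functional with a Josephson-type interband coupling, where making the coefficient α_j(r) locally positive inside an impurity is one common way to model a normal (pinning) inclusion. This is a textbook-style sketch, not the specific modification developed in this work:

```latex
% Standard two-band Ginzburg-Landau free-energy density with Josephson
% interband coupling gamma; spatial variation of alpha_j(r) models
% normal (pinning) inclusions.
\begin{equation}
f = \sum_{j=1,2} \left[ \alpha_j(\mathbf{r})\,|\psi_j|^2
      + \frac{\beta_j}{2}\,|\psi_j|^4
      + \frac{1}{2m_j}\left|\left(-i\hbar\nabla
          - \frac{2e}{c}\mathbf{A}\right)\psi_j\right|^2 \right]
  + \gamma\left(\psi_1^{*}\psi_2 + \psi_1\psi_2^{*}\right)
  + \frac{|\nabla\times\mathbf{A}|^{2}}{8\pi}
\end{equation}
```

Here ψ_1, ψ_2 are the order parameters of the two bands and A is the vector potential; where α_j > 0 superconductivity is locally suppressed, so vortices lower their energy by sitting on the inclusion, which is the pinning mechanism the simulations exploit.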
Randomized trial of safinamide add-on to levodopa in Parkinson's disease with motor fluctuations
Borgohain, Rupam; Szasz, J; Stanzione, P; Meshram, C; Bhatt, M; Chirilineau, D; Stocchi, F; Lucini, V; Giuliani, R; Forrest, E; Rice, P; Anand, R
2014-01-01
Levodopa is effective for the motor symptoms of Parkinson's disease (PD), but is associated with motor fluctuations and dyskinesia. Many patients require add-on therapy to improve motor fluctuations without exacerbating dyskinesia. The objective of this Phase III, multicenter, double-blind, placebo-controlled, parallel-group study was to evaluate the efficacy and safety of safinamide, an α-aminoamide with dopaminergic and nondopaminergic mechanisms, as add-on to l-dopa in the treatment of patients with PD and motor fluctuations. Patients were randomized to oral safinamide 100 mg/day (n = 224), 50 mg/day (n = 223), or placebo (n = 222) for 24 weeks. The primary endpoint was total on time with no or nontroublesome dyskinesia (assessed using the Hauser patient diaries). Secondary endpoints included off time, Unified Parkinson's Disease Rating Scale (UPDRS) Part III (motor) scores, and Clinical Global Impression-Change (CGI-C). At week 24, mean ± SD increases in total on time with no or nontroublesome dyskinesia were 1.36 ± 2.625 hours for safinamide 100 mg/day, 1.37 ± 2.745 hours for safinamide 50 mg/day, and 0.97 ± 2.375 hours for placebo. Least squares means differences in both safinamide groups were significantly higher versus placebo. Improvements in off time, UPDRS Part III, and CGI-C were significantly greater in both safinamide groups versus placebo. There were no significant between-group differences for incidences of treatment-emergent adverse events (TEAEs) or TEAEs leading to discontinuation. The addition of safinamide 50 mg/day or 100 mg/day to l-dopa in patients with PD and motor fluctuations significantly increased total on time with no or nontroublesome dyskinesia, decreased off time, and improved parkinsonism, indicating that safinamide improves motor symptoms and parkinsonism without worsening dyskinesia. PMID:24323641