Effects of linking a soil-water-balance model with a groundwater-flow model
Stanton, Jennifer S.; Ryter, Derek W.; Peterson, Steven M.
2013-01-01
A previously published regional groundwater-flow model in north-central Nebraska was sequentially linked with the recently developed soil-water-balance (SWB) model to analyze effects on groundwater-flow model parameters and calibration results. The linked models provided a more detailed spatial and temporal distribution of simulated recharge based on hydrologic processes, improvement of simulated groundwater-level changes and base flows at specific sites in agricultural areas, and a physically based assessment of the relative magnitude of recharge for grassland, nonirrigated cropland, and irrigated cropland areas. Root-mean-squared (RMS) differences between the simulated and estimated or measured target values for the previously published model and linked models were relatively similar and did not improve for all types of calibration targets. However, without any adjustment to the SWB-generated recharge, the RMS difference between simulated and estimated base-flow target values for the groundwater-flow model was slightly smaller than for the previously published model, possibly indicating that the volume of recharge simulated by the SWB code was closer to actual hydrogeologic conditions than the previously published model provided. Groundwater-level and base-flow hydrographs showed that temporal patterns of simulated groundwater levels and base flows were more accurate for the linked models than for the previously published model at several sites, particularly in agricultural areas.
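For readers unfamiliar with the soil-water-balance approach, the sketch below illustrates the kind of daily water bookkeeping an SWB-style recharge estimate performs. It is a minimal, generic single-bucket sketch; the variable names and the simple bucket formulation are assumptions of this summary, not the actual SWB code linked in the study.

```python
def daily_recharge(precip, irrigation, runoff, pet, soil_moisture,
                   soil_capacity):
    """Generic single-bucket soil-water balance for one day (all values in mm).

    Returns (recharge, updated_soil_moisture). Illustrative sketch only,
    not the SWB formulation used in the linked models.
    """
    # Water entering the soil column after surface losses
    infiltration = max(precip + irrigation - runoff, 0.0)

    # Actual ET is limited by the water available in the soil store
    available = soil_moisture + infiltration
    actual_et = min(pet, available)

    # Update the soil store; anything above capacity drains as recharge
    soil_moisture = available - actual_et
    recharge = max(soil_moisture - soil_capacity, 0.0)
    soil_moisture = min(soil_moisture, soil_capacity)
    return recharge, soil_moisture


# Example: a wet day on irrigated cropland (hypothetical numbers)
r, sm = daily_recharge(precip=25.0, irrigation=10.0, runoff=5.0,
                       pet=6.0, soil_moisture=140.0, soil_capacity=150.0)
print(f"recharge = {r:.1f} mm, soil moisture = {sm:.1f} mm")
```

Running such a balance daily and summing the recharge term over a stress period yields the kind of spatially and temporally varying recharge arrays that a groundwater-flow model can ingest, which is the linkage described above.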
NASA Astrophysics Data System (ADS)
Apel, W. D.; Arteaga-Velázquez, J. C.; Bähren, L.; Bekk, K.; Bertaina, M.; Biermann, P. L.; Blümer, J.; Bozdog, H.; Brancus, I. M.; Cantoni, E.; Chiavassa, A.; Daumiller, K.; de Souza, V.; Di Pierro, F.; Doll, P.; Engel, R.; Falcke, H.; Fuchs, B.; Gemmeke, H.; Grupen, C.; Haungs, A.; Heck, D.; Hiller, R.; Hörandel, J. R.; Horneffer, A.; Huber, D.; Huege, T.; Isar, P. G.; Kampert, K.-H.; Kang, D.; Krömer, O.; Kuijpers, J.; Link, K.; Łuczak, P.; Ludwig, M.; Mathes, H. J.; Melissas, M.; Morello, C.; Nehls, S.; Oehlschläger, J.; Palmieri, N.; Pierog, T.; Rautenberg, J.; Rebel, H.; Roth, M.; Rühle, C.; Saftoiu, A.; Schieler, H.; Schmidt, A.; Schoo, S.; Schröder, F. G.; Sima, O.; Toma, G.; Trinchero, G. C.; Weindl, A.; Wochele, J.; Zabierowski, J.; Zensus, J. A.
2016-02-01
LOPES was a digital antenna array detecting the radio emission of cosmic-ray air showers. The calibration of the absolute amplitude scale of the measurements was done using an external, commercial reference source, which emits a frequency comb with defined amplitudes. Recently, we obtained improved reference values from the manufacturer of the reference source, which significantly changed the absolute calibration of LOPES. We reanalyzed previously published LOPES measurements, studying the impact of the changed calibration. The main effect is an overall decrease of the LOPES amplitude scale by a factor of 2.6 ± 0.2, affecting all previously published values for measurements of the electric-field strength. This results in a major change in the conclusion of the paper 'Comparing LOPES measurements of air-shower radio emission with REAS 3.11 and CoREAS simulations' published by Apel et al. (2013): With the revised calibration, LOPES measurements are now compatible with CoREAS simulations, but in tension with REAS 3.11 simulations. Since CoREAS is the latest version of the simulation code incorporating the current state of knowledge on the radio emission of air showers, this new result indicates that the absolute amplitude prediction of current simulations is now in agreement with experimental data.
Combined PEST and Trial-Error approach to improve APEX calibration
USDA-ARS's Scientific Manuscript database
The Agricultural Policy Environmental eXtender (APEX), a physically-based hydrologic model that simulates management impacts on the environment for small watersheds, requires an improved understanding of its input parameters for improved simulations. However, most previously published studies used the ...
ERIC Educational Resources Information Center
Isaranuwatchai, Wanrudee; Brydges, Ryan; Carnahan, Heather; Backstein, David; Dubrowski, Adam
2014-01-01
While the ultimate goal of simulation training is to enhance learning, cost-effectiveness is a critical factor. Research that compares simulation training in terms of educational- and cost-effectiveness will lead to better-informed curricular decisions. Using previously published data we conducted a cost-effectiveness analysis of three…
Simulation optimization of PSA-threshold based prostate cancer screening policies
Zhang, Jingyu; Denton, Brian T.; Shah, Nilay D.; Inman, Brant A.
2013-01-01
We describe a simulation optimization method to design PSA screening policies based on expected quality-adjusted life years (QALYs). Our method integrates a simulation model into a genetic algorithm that uses a probabilistic method to select the best policy. We present computational results on the efficiency of our algorithm. The best policy generated by our algorithm is compared to previously recommended screening policies. Using the policies determined by our model, we present evidence that patients should be screened more aggressively but for a shorter length of time than previously published guidelines recommend. PMID:22302420
Atmospheric Dispersion Modeling of the February 2014 Waste Isolation Pilot Plant Release
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nasstrom, John; Piggott, Tom; Simpson, Matthew
2015-07-22
This report presents the results of a simulation of the atmospheric dispersion and deposition of radioactivity released from the Waste Isolation Pilot Plant (WIPP) site in New Mexico in February 2014. These simulations were made by the National Atmospheric Release Advisory Center (NARAC) at Lawrence Livermore National Laboratory (LLNL), and supersede NARAC simulation results published in a previous WIPP report (WIPP, 2014). The results presented in this report use additional, more detailed data from WIPP on the specific radionuclides released, radioactivity release amounts and release times. Compared to the previous NARAC simulations, the new simulation results in this report are based on more detailed modeling of the winds, turbulence, and particle dry deposition. In addition, the initial plume rise from the exhaust vent was considered in the new simulations, but not in the previous NARAC simulations. The new model results show some small differences compared to previous results, but do not change the conclusions in the WIPP (2014) report. Presented are the data and assumptions used in these model simulations, as well as the model-predicted dose and deposition on and near the WIPP site. A comparison of predicted and measured radionuclide-specific air concentrations is also presented.
Building an Open-source Simulation Platform of Acoustic Radiation Force-based Breast Elastography
Wang, Yu; Peng, Bo; Jiang, Jingfeng
2017-01-01
Ultrasound-based elastography methods, including strain elastography (SE), acoustic radiation force impulse (ARFI) imaging, point shear wave elastography (pSWE), and supersonic shear imaging (SSI), have been used to differentiate breast tumors among other clinical applications. The objective of this study is to extend a previously published virtual simulation platform built for ultrasound quasi-static breast elastography toward acoustic radiation force-based breast elastography. Consequently, the extended virtual breast elastography simulation platform can be used to validate image pixels with known underlying soft tissue properties (i.e. “ground truth”) in complex, heterogeneous media, enhancing confidence in elastographic image interpretations. The proposed virtual breast elastography system inherited four key components from the previously published virtual simulation platform: an ultrasound simulator (Field II), a mesh generator (Tetgen), a finite element solver (FEBio) and a visualization and data processing package (VTK). Using a simple message passing mechanism, functionalities have now been extended to acoustic radiation force-based elastography simulations. Examples involving three different numerical breast models with increasing complexity – one uniform model, one simple inclusion model and one virtual complex breast model derived from magnetic resonance imaging data – were used to demonstrate capabilities of this extended virtual platform. Overall, simulation results were compared with the published results. In the uniform model, the estimated shear wave speed (SWS) values were within 4% compared to the predetermined SWS values. In the simple inclusion and the complex breast models, SWS values of all hard inclusions in soft backgrounds were slightly underestimated, similar to what has been reported. The elastic contrast values and visual observation show that ARFI images have higher spatial resolution, while SSI images can provide higher inclusion-to-background contrast. In summary, our initial results were consistent with our expectations and what has been reported in the literature. The proposed (open-source) simulation platform can serve as a single gateway to perform many elastographic simulations in a transparent manner, thereby promoting collaborative developments. PMID:28075330
Building an open-source simulation platform of acoustic radiation force-based breast elastography
NASA Astrophysics Data System (ADS)
Wang, Yu; Peng, Bo; Jiang, Jingfeng
2017-03-01
Ultrasound-based elastography methods, including strain elastography, acoustic radiation force impulse (ARFI) imaging, point shear wave elastography, and supersonic shear imaging (SSI), have been used to differentiate breast tumors among other clinical applications. The objective of this study is to extend a previously published virtual simulation platform built for ultrasound quasi-static breast elastography toward acoustic radiation force-based breast elastography. Consequently, the extended virtual breast elastography simulation platform can be used to validate image pixels with known underlying soft tissue properties (i.e. ‘ground truth’) in complex, heterogeneous media, enhancing confidence in elastographic image interpretations. The proposed virtual breast elastography system inherited four key components from the previously published virtual simulation platform: an ultrasound simulator (Field II), a mesh generator (Tetgen), a finite element solver (FEBio) and a visualization and data processing package (VTK). Using a simple message passing mechanism, functionalities have now been extended to acoustic radiation force-based elastography simulations. Examples involving three different numerical breast models with increasing complexity—one uniform model, one simple inclusion model and one virtual complex breast model derived from magnetic resonance imaging data—were used to demonstrate capabilities of this extended virtual platform. Overall, simulation results were compared with the published results. In the uniform model, the estimated shear wave speed (SWS) values were within 4% compared to the predetermined SWS values. In the simple inclusion and the complex breast models, SWS values of all hard inclusions in soft backgrounds were slightly underestimated, similar to what has been reported. The elastic contrast values and visual observation show that ARFI images have higher spatial resolution, while SSI images can provide higher inclusion-to-background contrast. In summary, our initial results were consistent with our expectations and what has been reported in the literature. The proposed (open-source) simulation platform can serve as a single gateway to perform many elastographic simulations in a transparent manner, thereby promoting collaborative developments.
Population models and simulation methods: The case of the Spearman rank correlation.
Astivia, Oscar L Olvera; Zumbo, Bruno D
2017-11-01
The purpose of this paper is to highlight the importance of a population model in guiding the design and interpretation of simulation studies used to investigate the Spearman rank correlation. The Spearman rank correlation has been known for over a hundred years to applied researchers and methodologists alike and is one of the most widely used non-parametric statistics. Still, certain misconceptions can be found, either explicitly or implicitly, in the published literature because a population definition for this statistic is rarely discussed within the social and behavioural sciences. By relying on copula distribution theory, a population model is presented for the Spearman rank correlation, and its properties are explored both theoretically and in a simulation study. Through the use of the Iman-Conover algorithm (which allows the user to specify the rank correlation as a population parameter), simulation studies from previously published articles are explored, and it is found that many of the conclusions purported in them regarding the nature of the Spearman correlation would change if the data-generation mechanism better matched the simulation design. More specifically, issues such as small sample bias and lack of power of the t-test and r-to-z Fisher transformation disappear when the rank correlation is calculated from data sampled where the rank correlation is the population parameter. A proof for the consistency of the sample estimate of the rank correlation is shown as well as the flexibility of the copula model to encompass results previously published in the mathematical literature. © 2017 The British Psychological Society.
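As a small, self-contained illustration of treating the Spearman correlation as a population parameter, the sketch below samples from a bivariate normal (Gaussian copula) whose Pearson parameter is chosen via the standard relation rho_S = (6/pi)·arcsin(rho/2) so that the population Spearman correlation equals a target value. This is a generic Python example, not the Iman-Conover implementation used in the paper.

```python
import numpy as np
from scipy import stats

def sample_with_spearman(rho_s, n, rng):
    """Draw n pairs from a Gaussian copula whose *population* Spearman
    correlation is rho_s (margins here are standard normal)."""
    # Invert the bivariate-normal relation rho_S = (6/pi) * arcsin(rho/2)
    rho = 2.0 * np.sin(np.pi * rho_s / 6.0)
    cov = np.array([[1.0, rho], [rho, 1.0]])
    return rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=n)

rng = np.random.default_rng(42)
xy = sample_with_spearman(rho_s=0.5, n=100_000, rng=rng)
r_hat, _ = stats.spearmanr(xy[:, 0], xy[:, 1])
print(f"sample Spearman correlation: {r_hat:.3f} (target 0.500)")
```

With data generated this way, the sample Spearman coefficient converges to the intended population value, which is the property the paper argues simulation designs should respect.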
2015-01-01
High-density lipoprotein (HDL) retards atherosclerosis by accepting cholesterol from the artery wall. However, the structure of the proposed acceptor, monomeric apolipoprotein A-I (apoA-I), the major protein of HDL, is poorly understood. Two published models for monomeric apoA-I used cross-linking distance constraints to derive best fit conformations. This approach has limitations. (i) Cross-linked peptides provide no information about secondary structure. (ii) A protein chain can be folded in multiple ways to create a best fit. (iii) Ad hoc folding of a secondary structure is unlikely to produce a stable orientation of hydrophobic and hydrophilic residues. To address these limitations, we used a different approach. We first noted that the dimeric apoA-I crystal structure, (Δ185–243)apoA-I, is topologically identical to a monomer in which helix 5 forms a helical hairpin, a monomer with a hydrophobic cleft running the length of the molecule. We then realized that a second crystal structure, (Δ1–43)apoA-I, contains a C-terminal structure that fits snugly via aromatic and hydrophobic interactions into the hydrophobic cleft. Consequently, we combined these crystal structures into an initial model that was subjected to molecular dynamics simulations. We tested the initial and simulated models and the two previously published models in three ways: against two published data sets (domains predicted to be helical by H/D exchange and six spin-coupled residues) and against our own experimentally determined cross-linking distance constraints. We note that the best fit simulation model, superior by all tests to previously published models, has dynamic features of a molten globule with interesting implications for the functions of apoA-I. PMID:25423138
Caswell, Joseph M; Singh, Manraj; Persinger, Michael A
2016-08-01
Previous research investigating the potential influence of geomagnetic factors on human cardiovascular state has tended to converge upon similar inferences, although the results remain relatively controversial. Furthermore, previous findings have remained essentially correlational without accompanying experimental verification. An exception to this was noted for human brain activity in a previous study employing experimental simulation of sudden geomagnetic impulses in order to assess correlational results that had demonstrated a relationship between geomagnetic perturbations and neuroelectrical parameters. The present study employed the same equipment in a similar procedure in order to validate previous findings of a geomagnetic-cardiovascular dynamic with electrocardiography and heart rate variability measures. Results indicated that potential magnetic field effects on frequency components of heart rate variability tended to overlap with those found in previous correlational studies, in which low-frequency power and the ratio between low- and high-frequency components of heart rate variability appeared to be affected. In the present study, a significant increase in these particular parameters was noted during geomagnetic simulation compared to baseline recordings. Copyright © 2016 The Committee on Space Research (COSPAR). Published by Elsevier Ltd. All rights reserved.
Xia, Zeyang; Chen, Jie
2014-01-01
Objectives: To develop an artificial tooth–periodontal ligament (PDL)–bone complex (ATPBC) that simulates clinical crown displacement. Material and Methods: An ATPBC was created. It had a socket hosting a tooth with a thin layer of silicone mixture in between for simulating the PDL. The complex was attached to a device that allows a controlled force to be applied to the crown and the resulting crown displacement to be measured. Crown displacements were compared to previously published data for validation. Results: The ATPBC that had a PDL made of two types of silicones, 50% gasket sealant No. 2 and 50% RTV 587 silicone, with a thickness of 0.3 mm, simulated the PDL well. The mechanical behaviors, (1) force-displacement relationship, (2) stress relaxation, (3) creep, and (4) hysteresis, were validated against the published results. Conclusion: The ATPBC simulated the crown displacement behavior reported from biological studies well. PMID:22970752
NASA Astrophysics Data System (ADS)
Hieu, Nguyen Huu
2017-09-01
Pervaporation is a potential process for the final step of ethanol biofuel production. In this study, a mathematical model was developed based on the resistance-in-series model and a simulation was carried out using the specialized simulation software COMSOL Multiphysics to describe a tubular-type pervaporation module with membranes for the dehydration of ethanol solution. The membrane permeance, operating conditions, and feed conditions used in the simulation were taken from experimental data previously reported in the literature. Accordingly, the simulated temperature and density profiles of pure water and an ethanol-water mixture were validated against existing published data.
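For context, a resistance-in-series description of pervaporation typically writes the partial flux of each component as a partial-pressure (fugacity) driving force divided by the sum of a boundary-layer resistance and a membrane resistance. The form below is a generic textbook sketch with illustrative symbols; it is not necessarily the exact formulation implemented in the COMSOL model.

```latex
J_i = \frac{p_{i,\mathrm{feed}} - p_{i,\mathrm{perm}}}{R_{i,\mathrm{bl}} + R_{i,\mathrm{mem}}}
    = \left(\frac{1}{k_{i,\mathrm{bl}}} + \frac{1}{Q_i}\right)^{-1}
      \left(x_i \gamma_i p_i^{\mathrm{sat}} - y_i p_{\mathrm{perm}}\right)
```

Here J_i is the partial flux, k_{i,bl} a liquid boundary-layer mass-transfer coefficient, Q_i the membrane permeance, x_i and gamma_i the feed mole fraction and activity coefficient, p_i^sat the saturation pressure, and y_i p_perm the permeate-side partial pressure.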
An investigation of the effects of reading and writing text-based messages while driving.
DOT National Transportation Integrated Search
2012-08-01
Previous research, using driving simulation, crash data, and naturalistic methods, has begun to shed light on the dangers of texting while driving. Perhaps because of the dangers, no published work has experimentally investigated the dangers of texti...
NASA Astrophysics Data System (ADS)
Shin, Soon-Gi
2018-03-01
This article [1] has been retracted at the request of the Editor-in-Chief. Concerns were raised regarding substantial duplications with previous articles published in other journals. After a thorough analysis, we conclude that the concerns are valid. The article contains sections that substantially overlap with the following published article [2] (amongst others). S.-G. Shin has not responded to correspondence from the Editor about this retraction.
The Elastic Behaviour of Sintered Metallic Fibre Networks: A Finite Element Study by Beam Theory
Bosbach, Wolfram A.
2015-01-01
Background: The finite element method has complemented research in the field of network mechanics in recent years in numerous studies of various materials. Numerical predictions and the planning efficiency of experimental procedures are two of the motivational aspects for these numerical studies. The widespread availability of high performance computing facilities has been the enabler for the simulation of sufficiently large systems. Objectives and Motivation: In the present study, finite element models were built for sintered, metallic fibre networks and validated by previously published experimental stiffness measurements. The validated models were the basis for predictions about so far unknown properties. Materials and Methods: The finite element models were built by transferring previously published skeletons of fibre networks into finite element models. Beam theory was applied as a simplification method. Results and Conclusions: The obtained material stiffness is not a constant but rather a function of variables such as sample size and boundary conditions. Beam theory offers an efficient finite element method for the simulated fibre networks. The experimental results can be approximated by the simulated systems. Two worthwhile aspects for future work will be the influence of size and shape and the mechanical interaction with matrix materials. PMID:26569603
Computer simulated modeling of healthy and diseased right ventricular and pulmonary circulation.
Chou, Jody; Rinehart, Joseph B
2018-01-12
We have previously developed a simulated cardiovascular physiology model for in-silico testing and validation of novel closed-loop controllers. To date, a detailed model of the right heart and pulmonary circulation was not needed, as previous controllers were not intended for use in patients with cardiac or pulmonary pathology. With new development of controllers for vasopressors, and looking forward, for combined vasopressor-fluid controllers, modeling of right-sided and pulmonary pathology is now relevant to further in-silico validation, so we aimed to expand our existing simulation platform to include these elements. Our hypothesis was that the completed platform could be tuned and stabilized such that the distributions of a randomized sample of simulated patients' baseline characteristics would be similar to reported population values. Our secondary outcomes were to further test the system in representing acute right heart failure and pulmonary artery hypertension. After development and tuning of the right-sided circulation, the model was validated against clinical data from multiple previously published articles. The model was considered 'tuned' when 100% of generated randomized patients converged to stability (steady, physiologically-plausible compartmental volumes, flows, and pressures) and 'valid' when the means for the model data in each health condition were contained within the standard deviations for the published data for the condition. A fully described right heart and pulmonary circulation model including non-linear pressure/volume relationships and pressure-dependent flows was created over a 6-month span. The model was successfully tuned such that 100% of simulated patients converged into a steady state within 30 s. Simulation results in the healthy state for central venous volume (3350 ± 132 ml), pulmonary blood volume (405 ± 39 ml), pulmonary artery pressures (systolic 20.8 ± 4.1 mmHg and diastolic 9.4 ± 1.8 mmHg), left atrial pressure (4.6 ± 0.8 mmHg), PVR (1.0 ± 0.2 Wood units), and CI (3.8 ± 0.5 l/min/m²) all met criteria for acceptance of the model, though the standard deviations of LAP and CI were somewhat narrower than published comparators. The simulation results for right ventricular infarction also fell within the published ranges: pulmonary blood volume (727 ± 102 ml), pulmonary arterial pressures (30 ± 4 mmHg systolic, 12 ± 2 mmHg diastolic), left atrial pressure (13 ± 2 mmHg), PVR (1.6 ± 0.3 Wood units), and CI (2.0 ± 0.4 l/min/m²) all fell within one standard deviation of the reported population values and vice versa. In the pulmonary hypertension model, pulmonary blood volume of 615 ± 90 ml, pulmonary arterial pressures of 80 ± 14 mmHg systolic, 36 ± 7 mmHg diastolic, and the left atrial pressure of 11 ± 2 mmHg all met criteria for acceptance. For CI, the simulated value of 2.8 ± 0.4 l/min/m² once again had a narrower spread than most of the published data, but fell inside the SD of all published data, and the PVR value of 7.5 ± 1.6 Wood units fell in the middle of the four published studies. The right-ventricular and pulmonary circulation simulation appears to be a reasonable approximation of the right-sided circulation for healthy physiology as well as the pathologic conditions tested.
Numerical Simulation of Liquids Draining From a Tank Using OpenFOAM
NASA Astrophysics Data System (ADS)
Sakri, Fadhilah Mohd; Sukri Mat Ali, Mohamed; Zaki Shaikh Salim, Sheikh Ahmad; Muhamad, Sallehuddin
2017-08-01
Accurate simulation of liquid draining is a challenging task because it involves two-phase flow, i.e. liquid and air. In this study, draining of a liquid from a cylindrical tank is numerically simulated using OpenFOAM. OpenFOAM is an open-source CFD package that is becoming increasingly popular in both academia and industry. Comparisons with theoretical results and previously published data confirmed that OpenFOAM is able to simulate liquid draining very well. This is done using the gas-liquid interface solver available in the standard library of OpenFOAM. Additionally, this study was also able to explain the flow physics of the draining tank.
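The theoretical comparison for a draining tank is usually a quasi-steady Torricelli estimate; a minimal version for a cylindrical tank of cross-sectional area A draining through an orifice of area a from an initial liquid height h_0 is sketched below (a discharge coefficient of 1 is assumed for simplicity).

```latex
\frac{dh}{dt} = -\frac{a}{A}\sqrt{2 g h}
\quad\Rightarrow\quad
h(t) = \left(\sqrt{h_0} - \frac{a}{2A}\sqrt{2g}\,t\right)^{2},
\qquad
t_{\mathrm{drain}} = \frac{A}{a}\sqrt{\frac{2 h_0}{g}}
```

Comparing the simulated free-surface height against this curve is a common sanity check for interface-capturing solvers such as OpenFOAM's interFoam (the standard gas-liquid interface solver, presumably the one used here).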
Neuronvisio: A Graphical User Interface with 3D Capabilities for NEURON.
Mattioni, Michele; Cohen, Uri; Le Novère, Nicolas
2012-01-01
The NEURON simulation environment is a commonly used tool to perform electrical simulation of neurons and neuronal networks. The NEURON User Interface, based on the now discontinued InterViews library, provides some limited facilities to explore models and to plot their simulation results. Other limitations include the inability to generate a three-dimensional visualization, the lack of a standard means to save the results of simulations, and the inability to store the model geometry within the results. Neuronvisio (http://neuronvisio.org) aims to address these deficiencies through a set of well-designed Python APIs and provides an improved UI, allowing users to explore and interact with the model. Neuronvisio also facilitates access to previously published models, allowing users to browse, download, and locally run NEURON models stored in ModelDB. Neuronvisio uses the matplotlib library to plot simulation results and uses the HDF standard format to store simulation results. Neuronvisio can be viewed as an extension of NEURON, facilitating typical user workflows such as model browsing, selection, download, compilation, and simulation. The 3D viewer simplifies the exploration of complex model structure, while matplotlib permits the plotting of high-quality graphs. The newly introduced ability to save numerical results allows users to perform additional analysis on their previous simulations.
Decision-Making Accuracy of CBM Progress-Monitoring Data
ERIC Educational Resources Information Center
Hintze, John M.; Wells, Craig S.; Marcotte, Amanda M.; Solomon, Benjamin G.
2018-01-01
This study examined the diagnostic accuracy associated with decision making as is typically conducted with curriculum-based measurement (CBM) approaches to progress monitoring. Using previously published estimates of the standard errors of estimate associated with CBM, 20,000 progress-monitoring data sets were simulated to model student reading…
Gryphon: A Hybrid Agent-Based Modeling and Simulation Platform for Infectious Diseases
NASA Astrophysics Data System (ADS)
Yu, Bin; Wang, Jijun; McGowan, Michael; Vaidyanathan, Ganesh; Younger, Kristofer
In this paper we present Gryphon, a hybrid agent-based stochastic modeling and simulation platform developed for characterizing the geographic spread of infectious diseases and the effects of interventions. We study both local and non-local transmission dynamics of stochastic simulations based on the published parameters and data for SARS. The results suggest that the expected numbers of infections and the timeline of control strategies predicted by our stochastic model are in reasonably good agreement with previous studies. These preliminary results indicate that Gryphon is able to characterize other future infectious diseases and identify endangered regions in advance.
The benefits of being a video gamer in laparoscopic surgery.
Sammut, Matthew; Sammut, Mark; Andrejevic, Predrag
2017-09-01
Video games are mainly considered to be of entertainment value in our society. Laparoscopic surgery and video games are activities similarly requiring eye-hand and visual-spatial skills. Previous studies have not conclusively shown a positive correlation between video game experience and improved ability to accomplish visual-spatial tasks in laparoscopic surgery. This study was an attempt to investigate this relationship. The aim of the study was to investigate whether previous video gaming experience affects the baseline performance on a laparoscopic simulator trainer. Newly qualified medical officers with minimal experience in laparoscopic surgery were invited to participate in the study and assigned to the following groups: gamers (n = 20) and non-gamers (n = 20). Analysis included participants' demographic data and baseline video gaming experience. Laparoscopic skills were assessed using a laparoscopic simulator trainer. There were no significant demographic differences between the two groups. Each participant performed three laparoscopic tasks and mean scores between the two groups were compared. The gamer group had statistically significant better results in maintaining the laparoscopic camera horizon ± 15° (p value = 0.009), in the complex ball manipulation accuracy rates (p value = 0.024) and completed the complex laparoscopic simulator task in a significantly shorter time period (p value = 0.001). Although prior video gaming experience correlated with better results, there were no significant differences for camera accuracy rates (p value = 0.074) and in a two-handed laparoscopic exercise task accuracy rates (p value = 0.092). The results show that previous video-gaming experience improved the baseline performance in laparoscopic simulator skills. Copyright © 2017 IJS Publishing Group Ltd. Published by Elsevier Ltd. All rights reserved.
Frequency domain phase noise analysis of dual injection-locked optoelectronic oscillators.
Jahanbakht, Sajad
2016-10-01
Dual injection-locked optoelectronic oscillators (DIL-OEOs) have been introduced as a means to achieve very low-noise microwave oscillations while avoiding the large spurious peaks that occur in the phase noise of the conventional single-loop OEOs. In these systems, two OEOs are inter-injection locked to each other. The OEO with the longer optical fiber delay line is called the master OEO, and the other is called the slave OEO. Here, a frequency-domain approach, based on the conversion matrix method, for simulating the phase noise spectrum of each of the OEOs in a DIL-OEO system is presented. The validity of the new approach is verified by comparing its results with previously published data in the literature. In the new approach, first, in each of the master or slave OEOs, the power spectral densities (PSDs) of two noise sources (one white, one 1/f) are optimized such that the resulting simulated phase noise of any of the master or slave OEOs in the free-running state matches the measured phase noise of that OEO. After that, the proposed approach is able to simulate the phase noise PSD of both OEOs in the injection-locked state. Because of the short run-time requirements, especially compared to previously proposed time domain approaches, the new approach is suitable for optimizing the power injection ratios (PIRs), and potentially other circuit parameters, in order to achieve good performance regarding the phase noise in each of the OEOs. Through various numerical simulations, the optimum PIRs for achieving good phase noise performance are presented and discussed; they are in agreement with the previously published results. This further verifies the applicability of the new approach. Moreover, some other interesting results regarding the spur levels are also presented.
A permeation theory for single-file ion channels: one- and two-step models.
Nelson, Peter Hugo
2011-04-28
How many steps are required to model permeation through ion channels? This question is investigated by comparing one- and two-step models of permeation with experiment and MD simulation for the first time. In recent MD simulations, the observed permeation mechanism was identified as resembling a Hodgkin and Keynes knock-on mechanism with one voltage-dependent rate-determining step [Jensen et al., PNAS 107, 5833 (2010)]. These previously published simulation data are fitted to a one-step knock-on model that successfully explains the highly non-Ohmic current-voltage curve observed in the simulation. However, these predictions (and the simulations upon which they are based) are not representative of real channel behavior, which is typically Ohmic at low voltages. A two-step association/dissociation (A/D) model is then compared with experiment for the first time. This two-parameter model is shown to be remarkably consistent with previously published permeation experiments through the MaxiK potassium channel over a wide range of concentrations and positive voltages. The A/D model also provides a first-order explanation of permeation through the Shaker potassium channel, but it does not explain the asymmetry observed experimentally. To address this, a new asymmetric variant of the A/D model is developed using the present theoretical framework. It includes a third parameter that represents the value of the "permeation coordinate" (fractional electric potential energy) corresponding to the triply occupied state n of the channel. This asymmetric A/D model is fitted to published permeation data through the Shaker potassium channel at physiological concentrations, and it successfully predicts qualitative changes in the negative current-voltage data (including a transition to super-Ohmic behavior) based solely on a fit to positive-voltage data (that appear linear). The A/D model appears to be qualitatively consistent with a large group of published MD simulations, but no quantitative comparison has yet been made. The A/D model makes a network of predictions for how the elementary steps and the channel occupancy vary with both concentration and voltage. In addition, the proposed theoretical framework suggests a new way of plotting the energetics of the simulated system using a one-dimensional permeation coordinate that uses electric potential energy as a metric for the net fractional progress through the permeation mechanism. This approach has the potential to provide a quantitative connection between atomistic simulations and permeation experiments for the first time.
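For orientation only, a generic two-step cycle (voltage-dependent association at rate k_a·c followed by dissociation/translocation at rate k_d) gives a saturating steady-state flux of the Michaelis-Menten form sketched below. The symbols and the Eyring-style voltage factors are illustrative assumptions of this summary and are not claimed to reproduce the exact parameterization of the A/D model in the paper.

```latex
J(c,V) = \frac{k_a(V)\,c\,k_d(V)}{k_a(V)\,c + k_d(V)},
\qquad
k_a(V) = k_a^{0} e^{\delta z F V / RT},
\quad
k_d(V) = k_d^{0} e^{(1-\delta) z F V / RT},
\qquad
I = z e\,J
```

In this generic form, k_a^0 and k_d^0 play the role of the two fitted parameters, and a splitting parameter such as delta is the kind of third "permeation coordinate" quantity added in the asymmetric variant described above.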
This paper presents a modeling analysis of airborne mercury fate in rural catchments by coupling components of simulation models developed and published previously by the authors. Results for individual rural catchments are presented and discussed, with a focus on the major mercu...
NASA Astrophysics Data System (ADS)
Shin, Soon-Gi
2018-03-01
This article [1] has been retracted at the request of the Editor-in-Chief. Concerns were raised regarding substantial duplications with previous articles published in other journals in which S.-G. Shin is one of the co-authors.
ERIC Educational Resources Information Center
Lane, Justin D.; Ledford, Jennifer R.
2014-01-01
The purpose of this article is to summarize the current literature on the accuracy and reliability of interval systems using data from previously published experimental studies that used either human observations of behavior or computer simulations. Although multiple comparison studies provided mathematical adjustments or modifications to interval…
The role of moisture content in above-ground leaching
Stan Lebow; Patricia Lebow
2007-01-01
This paper reviews previous reports on the moisture content of wood exposed above ground and compares those values to moisture contents obtained using simulated rainfall and immersion methods. Laboratory leaching trials with CCA-treated specimens were also conducted and the results compared to published values for leaching of CCA-treated specimens exposed above ground...
egs_brachy: a versatile and fast Monte Carlo code for brachytherapy
NASA Astrophysics Data System (ADS)
Chamberland, Marc J. P.; Taylor, Randle E. P.; Rogers, D. W. O.; Thomson, Rowan M.
2016-12-01
egs_brachy is a versatile and fast Monte Carlo (MC) code for brachytherapy applications. It is based on the EGSnrc code system, enabling simulation of photons and electrons. Complex geometries are modelled using the EGSnrc C++ class library and egs_brachy includes a library of geometry models for many brachytherapy sources, in addition to eye plaques and applicators. Several simulation efficiency enhancing features are implemented in the code. egs_brachy is benchmarked by comparing TG-43 source parameters of three source models to previously published values. 3D dose distributions calculated with egs_brachy are also compared to ones obtained with the BrachyDose code. Well-defined simulations are used to characterize the effectiveness of many efficiency improving techniques, both as an indication of the usefulness of each technique and to find optimal strategies. Efficiencies and calculation times are characterized through single source simulations and simulations of idealized and typical treatments using various efficiency improving techniques. In general, egs_brachy shows agreement within uncertainties with previously published TG-43 source parameter values. 3D dose distributions from egs_brachy and BrachyDose agree at the sub-percent level. Efficiencies vary with radionuclide and source type, number of sources, phantom media, and voxel size. The combined effects of efficiency-improving techniques in egs_brachy lead to short calculation times: simulations approximating prostate and breast permanent implant (both with (2 mm)³ voxels) and eye plaque (with (1 mm)³ voxels) treatments take between 13 and 39 s, on a single 2.5 GHz Intel Xeon E5-2680 v3 processor core, to achieve 2% average statistical uncertainty on doses within the PTV. egs_brachy will be released as free and open source software to the research community.
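As a side note on the quoted 2% statistical uncertainties, Monte Carlo dose uncertainty falls roughly as 1/sqrt(N) with the number of histories, so the run length needed for a target uncertainty can be projected from a short pilot run. The helper below is a generic back-of-the-envelope sketch, not part of egs_brachy.

```python
def histories_for_target(n_pilot, u_pilot, u_target):
    """Estimate the number of MC histories needed to reach u_target,
    assuming the usual 1/sqrt(N) scaling of statistical uncertainty."""
    return int(n_pilot * (u_pilot / u_target) ** 2)

# Example: a pilot run of 1e6 histories gave 8% average uncertainty;
# reaching 2% requires roughly 16x more histories.
print(histories_for_target(1_000_000, 0.08, 0.02))  # -> 16000000
```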
egs_brachy: a versatile and fast Monte Carlo code for brachytherapy.
Chamberland, Marc J P; Taylor, Randle E P; Rogers, D W O; Thomson, Rowan M
2016-12-07
egs_brachy is a versatile and fast Monte Carlo (MC) code for brachytherapy applications. It is based on the EGSnrc code system, enabling simulation of photons and electrons. Complex geometries are modelled using the EGSnrc C++ class library and egs_brachy includes a library of geometry models for many brachytherapy sources, in addition to eye plaques and applicators. Several simulation efficiency enhancing features are implemented in the code. egs_brachy is benchmarked by comparing TG-43 source parameters of three source models to previously published values. 3D dose distributions calculated with egs_brachy are also compared to ones obtained with the BrachyDose code. Well-defined simulations are used to characterize the effectiveness of many efficiency improving techniques, both as an indication of the usefulness of each technique and to find optimal strategies. Efficiencies and calculation times are characterized through single source simulations and simulations of idealized and typical treatments using various efficiency improving techniques. In general, egs_brachy shows agreement within uncertainties with previously published TG-43 source parameter values. 3D dose distributions from egs_brachy and BrachyDose agree at the sub-percent level. Efficiencies vary with radionuclide and source type, number of sources, phantom media, and voxel size. The combined effects of efficiency-improving techniques in egs_brachy lead to short calculation times: simulations approximating prostate and breast permanent implant (both with (2 mm)³ voxels) and eye plaque (with (1 mm)³ voxels) treatments take between 13 and 39 s, on a single 2.5 GHz Intel Xeon E5-2680 v3 processor core, to achieve 2% average statistical uncertainty on doses within the PTV. egs_brachy will be released as free and open source software to the research community.
Trecker, Molly A; Hogan, Daniel J; Waldner, Cheryl L; Dillon, Jo-Anne R; Osgood, Nathaniel D
2015-06-01
To determine the effects of using discrete versus continuous quantities of people in a compartmental model examining the contribution of antimicrobial resistance (AMR) to rebound in the prevalence of gonorrhoea. A previously published transmission model was reconfigured to represent the occurrence of gonorrhoea in discrete persons, rather than allowing fractions of infected individuals during simulations. In the revised model, prevalence only rebounded under scenarios reproduced from the original paper when AMR occurrence was increased by 10⁵ times. In such situations, treatment of high-risk individuals yielded outcomes very similar to those resulting from treatment of low-risk and intermediate-risk individuals. Otherwise, in contrast with the original model, prevalence was the lowest when the high-risk group was treated, supporting the current policy of targeting treatment to high-risk groups. Simulation models can be highly sensitive to structural features. Small differences in structure and parameters can substantially influence predicted outcomes and policy prescriptions, and must be carefully considered. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
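The structural issue described above (whole persons versus fractions of infected individuals) can be illustrated with a toy compartmental update. The sketch below contrasts a deterministic step, which happily tracks fractional people, with a stochastic step that moves whole individuals via binomial draws; it is a generic SIS-style illustration, not the published gonorrhoea model.

```python
import numpy as np

def continuous_step(S, I, beta, gamma, N, dt):
    """Deterministic SIS update: compartments may hold fractions of people."""
    new_inf = beta * S * I / N * dt
    new_rec = gamma * I * dt
    return S - new_inf + new_rec, I + new_inf - new_rec

def discrete_step(S, I, beta, gamma, N, dt, rng):
    """Stochastic SIS update: whole individuals move via binomial draws."""
    p_inf = 1.0 - np.exp(-beta * I / N * dt)
    p_rec = 1.0 - np.exp(-gamma * dt)
    new_inf = rng.binomial(S, p_inf)
    new_rec = rng.binomial(I, p_rec)
    return S - new_inf + new_rec, I + new_inf - new_rec

rng = np.random.default_rng(1)
Sc, Ic = 995.0, 5.0     # continuous (fractional) compartments
Sd, Id = 995, 5         # discrete (whole-person) compartments
for _ in range(200):
    Sc, Ic = continuous_step(Sc, Ic, beta=0.25, gamma=0.2, N=1000, dt=1.0)
    Sd, Id = discrete_step(Sd, Id, beta=0.25, gamma=0.2, N=1000, dt=1.0, rng=rng)
print(f"continuous model: I = {Ic:.2f}   discrete model: I = {Id}")
```

In the discrete version the infection can go extinct once the infected count reaches zero, a behaviour that fractional compartments mask and that underlies the structural sensitivity reported above.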
Zhao, Huawei
2009-01-01
A ZEMAX model was constructed to simulate a clinical trial of intraocular lenses (IOLs) based on a clinically oriented Monte Carlo ensemble analysis using postoperative ocular parameters. The purpose of this model is to test the feasibility of streamlining and optimizing both the design process and the clinical testing of IOLs. This optical ensemble analysis (OEA) is also validated. Simulated pseudophakic eyes were generated by using the tolerancing and programming features of ZEMAX optical design software. OEA methodology was verified by demonstrating that the results of clinical performance simulations were consistent with previously published clinical performance data using the same types of IOLs. From these results we conclude that the OEA method can objectively simulate the potential clinical trial performance of IOLs.
Yao, Po-Ju; Chung, Ren-Hua
2016-02-15
It is difficult for current simulation tools to simulate sequence data in a pre-specified pedigree structure and pre-specified affection status. Previously, we developed a flexible tool, SeqSIMLA2, for simulating sequence data in either unrelated case-control or family samples with different disease and quantitative trait models. Here we extended the tool to efficiently simulate sequences with multiple disease sites in large pedigrees with a given disease status for each pedigree member, assuming that the disease prevalence is low. SeqSIMLA2_exact is implemented with C++ and is available at http://seqsimla.sourceforge.net. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Frembgen-Kesner, Tamara; Elcock, Adrian H
2010-11-03
Theory and computation have long been used to rationalize the experimental association rate constants of protein-protein complexes, and Brownian dynamics (BD) simulations, in particular, have been successful in reproducing the relative rate constants of wild-type and mutant protein pairs. Missing from previous BD studies of association kinetics, however, has been the description of hydrodynamic interactions (HIs) between, and within, the diffusing proteins. Here we address this issue by rigorously including HIs in BD simulations of the barnase-barstar association reaction. We first show that even very simplified representations of the proteins--involving approximately one pseudoatom for every three residues in the protein--can provide excellent reproduction of the absolute association rate constants of wild-type and mutant protein pairs. We then show that simulations that include intermolecular HIs also produce excellent estimates of association rate constants, but, for a given reaction criterion, yield values that are decreased by ∼35-80% relative to those obtained in the absence of intermolecular HIs. The neglect of intermolecular HIs in previous BD simulation studies, therefore, is likely to have contributed to the somewhat overestimated absolute rate constants previously obtained. Consequently, intermolecular HIs could be an important component to include in accurate modeling of the kinetics of macromolecular association events. Copyright © 2010 Biophysical Society. Published by Elsevier Inc. All rights reserved.
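For context, Brownian dynamics with hydrodynamic interactions is usually propagated with an Ermak-McCammon-type step, in which a configuration-dependent diffusion tensor D_ij couples the displacements of all subunits; the generic form is sketched below (the divergence term vanishes for the commonly used Rotne-Prager-Yamakawa tensor). This is the standard textbook scheme, not necessarily the precise algorithm of the study.

```latex
\mathbf{r}_i(t+\Delta t) = \mathbf{r}_i(t)
+ \sum_j \frac{\mathbf{D}_{ij}\,\mathbf{F}_j}{k_B T}\,\Delta t
+ \sum_j \nabla_j \cdot \mathbf{D}_{ij}\,\Delta t
+ \mathbf{R}_i,
\qquad
\langle \mathbf{R}_i \mathbf{R}_j^{\top} \rangle = 2\,\mathbf{D}_{ij}\,\Delta t
```

Setting D_ij = 0 for i ≠ j recovers free-draining Brownian dynamics, i.e., the no-HI case the study compares against.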
1-D blood flow modelling in a running human body.
Szabó, Viktor; Halász, Gábor
2017-07-01
In this paper an attempt was made to simulate blood flow in a mobile human arterial network, specifically, in a running human subject. In order to simulate the effect of motion, a previously published immobile 1-D model was modified by including an inertial force term in the momentum equation. To calculate the inertial force, gait analysis was performed at different levels of speed. Our results show that motion has a significant effect on the amplitudes of the blood pressure and flow rate but the average values are not affected significantly.
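To make the modification concrete, the standard one-dimensional area-averaged blood-flow equations are shown below with an extra body-force term representing the axial acceleration of the moving vessel segment (obtained from gait analysis). The notation and the specific form of the added term are generic assumptions of this summary, not the authors' published equations.

```latex
\frac{\partial A}{\partial t} + \frac{\partial Q}{\partial x} = 0,
\qquad
\frac{\partial Q}{\partial t}
+ \frac{\partial}{\partial x}\!\left(\alpha \frac{Q^{2}}{A}\right)
+ \frac{A}{\rho}\frac{\partial p}{\partial x}
= -K_R \frac{Q}{A} - A\,a_{\mathrm{frame}}(x,t)
```

Here A is the cross-sectional area, Q the volumetric flow rate, p the pressure, K_R a viscous friction coefficient, and a_frame the prescribed acceleration of the limb segment acting along the vessel axis.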
A method to investigate the diffusion properties of nuclear calcium.
Queisser, Gillian; Wittum, Gabriel
2011-10-01
Modeling biophysical processes in general requires knowledge about underlying biological parameters. The quality of simulation results is strongly influenced by the accuracy of these parameters, hence identifying the values of the parameters included in the model is a major part of simulating biophysical processes. In many cases, secondary data can be gathered by experimental setups, which are exploitable by mathematical inverse modeling techniques. Here we describe a method for identifying the diffusion properties of calcium in the nuclei of rat hippocampal neurons. The method is based on a Gauss-Newton method for solving a least-squares minimization problem and was formulated in such a way that it is ideally implementable in the simulation platform uG. Making use of independently published space- and time-dependent calcium imaging data, generated from laser-assisted calcium uncaging experiments, we were able to identify the diffusion properties of nuclear calcium and to validate a previously published model that describes nuclear calcium dynamics as a diffusion process.
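As a compact illustration of the Gauss-Newton least-squares idea used for parameter identification, the sketch below fits a single diffusion coefficient to synthetic concentration data from the 1-D point-source solution; everything here (model, data, names) is hypothetical and unrelated to the uG implementation or the calcium-imaging data.

```python
import numpy as np

def model(D, x, t):
    """1-D point-source diffusion profile (unit mass released at x=0, t=0)."""
    return np.exp(-x**2 / (4.0 * D * t)) / np.sqrt(4.0 * np.pi * D * t)

def gauss_newton_fit(x, t, data, D0, n_iter=20, h=1e-6):
    """Fit the scalar diffusion coefficient D by Gauss-Newton iterations."""
    D = D0
    for _ in range(n_iter):
        r = model(D, x, t) - data                                # residuals
        J = (model(D + h, x, t) - model(D - h, x, t)) / (2 * h)  # d(model)/dD
        D -= (J @ r) / (J @ J)                                   # Gauss-Newton update
    return D

# Synthetic "measurements" with noise, generated with D_true = 0.5 (arbitrary units)
rng = np.random.default_rng(0)
x = np.linspace(-5.0, 5.0, 101)
t = 2.0
data = model(0.5, x, t) + 0.002 * rng.standard_normal(x.size)

print(f"recovered D = {gauss_newton_fit(x, t, data, D0=0.2):.3f}")
```

In the study the forward model is a full diffusion simulation in uG rather than this closed-form profile, but the Gauss-Newton update has the same structure.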
The development of an industrial-scale fed-batch fermentation simulation.
Goldrick, Stephen; Ştefan, Andrei; Lovett, David; Montague, Gary; Lennox, Barry
2015-01-10
This paper describes a simulation of an industrial-scale fed-batch fermentation that can be used as a benchmark in process systems analysis and control studies. The simulation was developed using a mechanistic model and validated using historical data collected from an industrial-scale penicillin fermentation process. Each batch was carried out in a 100,000 L bioreactor that used an industrial strain of Penicillium chrysogenum. The manipulated variables recorded during each batch were used as inputs to the simulator and the predicted outputs were then compared with the on-line and off-line measurements recorded in the real process. The simulator adapted a previously published structured model to describe the penicillin fermentation and extended it to include the main environmental effects of dissolved oxygen, viscosity, temperature, pH and dissolved carbon dioxide. In addition the effects of nitrogen and phenylacetic acid concentrations on the biomass and penicillin production rates were also included. The simulated model predictions of all the on-line and off-line process measurements, including the off-gas analysis, were in good agreement with the batch records. The simulator and industrial process data are available to download at www.industrialpenicillinsimulation.com and can be used to evaluate, study and improve on the current control strategy implemented on this facility. Crown Copyright © 2014. Published by Elsevier B.V. All rights reserved.
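As a schematic of the kind of mechanistic fed-batch model such a simulator builds on, the sketch below integrates a minimal Monod-type biomass/substrate/product system with a constant feed. The states, parameters, and kinetics are generic illustrative assumptions, not the published structured penicillin model (which additionally tracks dissolved oxygen, viscosity, temperature, pH, dissolved carbon dioxide, nitrogen, and phenylacetic acid).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (not taken from the published model)
MU_MAX, K_S, Y_XS, Q_P, S_FEED, F = 0.1, 0.05, 0.45, 0.004, 400.0, 0.05

def fed_batch(t, y):
    X, S, P, V = y                      # biomass, substrate, product [g/L], volume [L]
    mu = MU_MAX * S / (K_S + S)         # Monod-type specific growth rate
    dilution = F / V                    # dilution by the feed stream
    dX = mu * X - dilution * X
    dS = -mu * X / Y_XS + dilution * (S_FEED - S)
    dP = Q_P * X - dilution * P         # non-growth-associated product formation
    dV = F                              # constant volumetric feed rate
    return [dX, dS, dP, dV]

sol = solve_ivp(fed_batch, (0.0, 200.0), [0.1, 10.0, 0.0, 50.0],
                t_eval=np.linspace(0.0, 200.0, 201))
print(f"final biomass {sol.y[0, -1]:.1f} g/L, product {sol.y[2, -1]:.2f} g/L")
```

A benchmark simulator like the one described above wraps this kind of core kinetics in additional environmental and control-relevant states so that manipulated variables from batch records can drive the model.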
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clark, D. S.; Marinak, M. M.; Weber, C. R.
2015-02-15
The recently completed National Ignition Campaign (NIC) on the National Ignition Facility (NIF) showed significant discrepancies between post-shot simulations of implosion performance and experimentally measured performance, particularly in thermonuclear yield. This discrepancy between simulation and observation persisted despite concerted efforts to include all of the known sources of performance degradation within a reasonable two-dimensional (2-D), and even three-dimensional (3-D), simulation model, e.g., using measured surface imperfections and radiation drives adjusted to reproduce observed implosion trajectories [Clark et al., Phys. Plasmas 20, 056318 (2013)]. Since the completion of the NIC, several effects have been identified that could explain these discrepancies and that were omitted in previous simulations. In particular, there is now clear evidence for larger than anticipated long-wavelength radiation drive asymmetries and a larger than expected perturbation seeded by the capsule support tent. This paper describes an updated suite of one-dimensional (1-D), 2-D, and 3-D simulations that include the current best understanding of these effects identified since the NIC, as applied to a specific NIC shot. The relative importance of each effect on the experimental observables is compared. In combination, these effects reduce the simulated-to-measured yield ratio from 125:1 in 1-D to 1.5:1 in 3-D, as compared to 15:1 in the best 2-D simulations published previously. While the agreement with the experimental data remains imperfect, the comparison to the data is significantly improved and suggests that the largest sources for the previous discrepancies between simulation and experiment are now being included.
Characterizing the role of the hippocampus during episodic simulation and encoding.
Thakral, Preston P; Benoit, Roland G; Schacter, Daniel L
2017-12-01
The hippocampus has been consistently associated with episodic simulation (i.e., the mental construction of a possible future episode). In a recent study, we identified an anterior-posterior temporal dissociation within the hippocampus during simulation. Specifically, transient simulation-related activity occurred in relatively posterior portions of the hippocampus and sustained activity occurred in anterior portions. In line with previous theoretical proposals of hippocampal function during simulation, the posterior hippocampal activity was interpreted as reflecting a transient retrieval process for the episodic details necessary to construct an episode. In contrast, the sustained anterior hippocampal activity was interpreted as reflecting the continual recruitment of encoding and/or relational processing associated with a simulation. In the present study, we provide a direct test of these interpretations by conducting a subsequent memory analysis of our previously published data to assess whether successful encoding during episodic simulation is associated with the anterior hippocampus. Analyses revealed a subsequent memory effect (i.e., later remembered > later forgotten simulations) in the anterior hippocampus. The subsequent memory effect was transient and not sustained. Taken together, the current findings provide further support for a component process model of hippocampal function during simulation. That is, unique regions of the hippocampus support dissociable processes during simulation, which include the transient retrieval of episodic information, the sustained binding of such information into a coherent episode, and the transient encoding of that episode for later retrieval. © 2017 Wiley Periodicals, Inc.
Nataraja, R M; Webb, N; Lopez, P J
2018-04-01
Surgical training has changed radically in the last few decades. The traditional Halstedian model of time-bound apprenticeship has been replaced with competency-based training. In our previous article, we presented an overview of learning theory relevant to clinical teaching: a summary for the busy paediatric surgeon and urologist. We introduced the concepts underpinning current changes in surgical education and training. In this next article, we give an overview of the various modalities of surgical simulation, the educational principles that underlie them, and potential applications in clinical practice. These modalities include: open surgical models and trainers, laparoscopic bench trainers, virtual reality trainers, simulated patients and role-play, hybrid simulation, scenario-based simulation, distributed simulation, virtual reality, and online simulation. Specific examples of technology that may be used for these modalities are included, but this is not a comprehensive review of all available products. Copyright © 2018 Journal of Pediatric Urology Company. Published by Elsevier Ltd. All rights reserved.
Reilly, T.E.; Frimpter, M.H.; LeBlanc, D.R.; Goodman, A.S.
1987-01-01
Sharp interface methods have been used successfully to describe the physics of upconing. A finite-element model is developed to simulate a sharp interface for determination of the steady-state position of the interface and maximum permissible well discharges. The model developed is compared to previously published electric-analog model results of Bennett and others (1968). -from Authors
Global Ray Tracing Simulations of the SABER Gravity Wave Climatology
2009-01-01
(Accepted 24 February 2009; published 30 April 2009.) Since February 2002, the SABER (Sounding of the Atmosphere using Broadband Emission Radiometry) satellite instrument has measured temperatures throughout the entire middle atmosphere. Employing the same techniques as previously used for CRISTA ... the residual temperature profiles are analyzed by a combination of the maximum entropy method (MEM) and harmonic analysis, thus providing the ...
Trotter, R Talbot; Keena, Melody A
2016-12-01
Efforts to manage and eradicate invasive species can benefit from an improved understanding of the physiology, biology, and behavior of the target species, and ongoing efforts to eradicate the Asian longhorned beetle (Anoplophora glabripennis Motschulsky) highlight the roles this information may play. Here, we present a climate-driven phenology model for A. glabripennis that provides simulated life-tables for populations of individual beetles under variable climatic conditions, taking into account the variable number of instars beetles may undergo as larvae. Phenology parameters in the model are based on a synthesis of published data and studies of A. glabripennis, and the model output was evaluated using a laboratory-reared population maintained under varying temperatures mimicking those typical of Central Park in New York City. The model was stable under variations in population size, simulation length, and the Julian dates used to initiate individual beetles within the population. Comparison of model results with previously published field-based phenology studies in native and invasive populations indicates that both this new phenology model and the previously published heating-degree-day model show good agreement in the prediction of the beginning of the flight season for adults. However, the phenology model described here avoids underpredicting the cumulative emergence of adults through the season, in addition to providing tables of life stages and estimations of voltinism for local populations. This information can play a key role in evaluating risk by predicting the potential for population growth, and may facilitate the optimization of management and eradication efforts. Published by Oxford University Press on behalf of Entomological Society of America 2016. This work is written by US Government employees and is in the public domain in the US.
Vafaeian, Behzad; Al-Daghreer, Saleh; El-Rich, Marwan; Adeeb, Samer; El-Bialy, Tarek
2015-08-01
The therapeutic effect of low-intensity pulsed ultrasound on orthodontically induced inflammatory root resorption is believed to be brought about through mechanical signals induced by the low-intensity pulsed ultrasound. However, the stimulatory mechanism triggering dental cell response has not been clearly identified yet. The aim of this study was to evaluate possible relations between the amounts of new cementum regeneration and ultrasonic parameters such as pressure amplitude and time-averaged energy density. We used the finite-element method to simulate the previously published experiment on ultrasonic wave propagation in the dentoalveolar structure of beagle dogs. Qualitative relations between the thickness of the regenerated cementum in the experiment and the ultrasonic parameters were observed. Our results indicated that the areas of the root surface with greater ultrasonic pressure were associated with larger amounts of cementum regeneration. However, the establishment of reliable quantitative correlations between ultrasound parameters and cementum regeneration requires more experimental data and simulations. Copyright © 2015 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
JAMSS: proteomics mass spectrometry simulation in Java.
Smith, Rob; Prince, John T
2015-03-01
Countless proteomics data processing algorithms have been proposed, yet few have been critically evaluated due to a lack of labeled data (data with known identities and quantities). Although labeling techniques exist, they are limited in terms of confidence and accuracy. In silico simulators have recently been used to create complex data with known identities and quantities. We propose the Java Mass Spectrometry Simulator (JAMSS): a fast, self-contained in silico simulator capable of generating simulated MS and LC-MS runs while providing meta information on the provenance of each generated signal. JAMSS improves upon previous in silico simulators in terms of its ease of installation, minimal parameters, graphical user interface, multithreading capability, retention time shift model and reproducibility. The simulator outputs data in the mzML 1.1.0 format. It is open source software licensed under the GPLv3. The software and source are available at https://github.com/optimusmoose/JAMSS. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Effects of simulated microgravity on Streptococcus mutans physiology and biofilm structure.
Cheng, Xingqun; Xu, Xin; Chen, Jing; Zhou, Xuedong; Cheng, Lei; Li, Mingyun; Li, Jiyao; Wang, Renke; Jia, Wenxiang; Li, Yu-Qing
2014-10-01
Long-term spaceflights will eventually become an inevitable occurrence. Previous studies have indicated that oral infectious diseases, including dental caries, were more prevalent in astronauts due to the effect of microgravity. However, the impact of the space environment, especially the microgravity environment, on the virulence factors of Streptococcus mutans, a major caries-associated bacterium, is yet to be explored. In the present study, we investigated the impact of simulated microgravity on the physiology and biofilm structure of S. mutans. We also explored the dual-species interaction between S. mutans and Streptococcus sanguinis under a simulated microgravity condition. Results indicated that the simulated microgravity condition can enhance the acid tolerance ability, modify the biofilm architecture and extracellular polysaccharide distribution of S. mutans, and increase the proportion of S. mutans within a dual-species biofilm, probably through the regulation of various gene expressions. We hypothesize that the enhanced competitiveness of S. mutans under simulated microgravity may cause a multispecies micro-ecological imbalance, which would result in the initiation of dental caries. Our current findings are consistent with previous studies, which revealed a higher astronaut-associated incidence of caries. Further research is required to explore the detailed mechanisms. © 2014 Federation of European Microbiological Societies. Published by John Wiley & Sons Ltd. All rights reserved.
The preferred walk to run transition speed in actual lunar gravity.
De Witt, John K; Edwards, W Brent; Scott-Pandorf, Melissa M; Norcross, Jason R; Gernhardt, Michael L
2014-09-15
Quantifying the preferred transition speed (PTS) from walking to running has provided insight into the underlying mechanics of locomotion. The dynamic similarity hypothesis suggests that the PTS should occur at the same Froude number across gravitational environments. In normal Earth gravity, the PTS occurs at a Froude number of 0.5 in adult humans, but previous reports found the PTS occurred at Froude numbers greater than 0.5 in simulated lunar gravity. Our purpose was to (1) determine the Froude number at the PTS in actual lunar gravity during parabolic flight and (2) compare it with the Froude number at the PTS in simulated lunar gravity during overhead suspension. We observed that Froude numbers at the PTS in actual lunar gravity (1.39±0.45) and simulated lunar gravity (1.11±0.26) were much greater than 0.5. Froude numbers at the PTS above 1.0 suggest that the use of the inverted pendulum model may not necessarily be valid in actual lunar gravity and that earlier findings in simulated reduced gravity are more accurate than previously thought. © 2014. Published by The Company of Biologists Ltd.
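The Froude number underlying this comparison is Fr = v^2/(g*L), where v is walking speed, g is gravitational acceleration, and L is leg length. A minimal sketch of the arithmetic, with an illustrative leg length rather than the study's subject-specific values:

```python
import math

def froude_number(speed, leg_length, gravity):
    """Dimensionless Froude number Fr = v^2 / (g * L)."""
    return speed**2 / (gravity * leg_length)

def pts_speed(froude, leg_length, gravity):
    """Walking speed corresponding to a given Froude number."""
    return math.sqrt(froude * gravity * leg_length)

g_earth, g_moon = 9.81, 1.62   # m/s^2
leg = 0.9                      # m, illustrative leg length (assumption)

# Classic Earth result: PTS near Fr = 0.5
print(f"Earth PTS at Fr=0.5:  {pts_speed(0.5, leg, g_earth):.2f} m/s")
# Lunar PTS reported near Fr = 1.39 in the study
print(f"Lunar PTS at Fr=1.39: {pts_speed(1.39, leg, g_moon):.2f} m/s")
```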
ANCA: Anharmonic Conformational Analysis of Biomolecular Simulations.
Parvatikar, Akash; Vacaliuc, Gabriel S; Ramanathan, Arvind; Chennubhotla, S Chakra
2018-05-08
Anharmonicity in time-dependent conformational fluctuations is noted to be a key feature of functional dynamics of biomolecules. Although anharmonic events are rare, long-timescale (μs-ms and beyond) simulations facilitate probing of such events. We have previously developed quasi-anharmonic analysis to resolve higher-order spatial correlations and characterize anharmonicity in biomolecular simulations. In this article, we have extended this toolbox to resolve higher-order temporal correlations and built a scalable Python package called anharmonic conformational analysis (ANCA). ANCA has modules to: 1) measure anharmonicity in the form of higher-order statistics and its variation as a function of time, 2) output a storyboard representation of the simulations to identify key anharmonic conformational events, and 3) identify putative anharmonic conformational substates and visualize transitions between these substates. Copyright © 2018 Biophysical Society. Published by Elsevier Inc. All rights reserved.
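The "higher-order statistics" idea can be illustrated with a toy calculation: for a purely harmonic (Gaussian) coordinate the third and fourth standardized moments vanish, so nonzero skewness or excess kurtosis flags anharmonicity. The sketch below is not ANCA's API; it is a minimal numpy/scipy illustration of that idea on synthetic trajectories.

```python
import numpy as np
from scipy.stats import kurtosis, skew

rng = np.random.default_rng(0)

# Toy "trajectories" of a single coordinate: harmonic (Gaussian) vs. anharmonic (bimodal)
harmonic = rng.normal(0.0, 1.0, 50_000)
anharmonic = np.concatenate([rng.normal(-1.5, 0.5, 25_000),
                             rng.normal(+1.5, 0.5, 25_000)])

def anharmonicity_report(x, label):
    # Third and fourth standardized moments; both vanish for a purely harmonic mode
    print(f"{label:>10s}: skewness={skew(x):+.3f}, excess kurtosis={kurtosis(x):+.3f}")

anharmonicity_report(harmonic, "harmonic")
anharmonicity_report(anharmonic, "anharmonic")
```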
Folding cooperativity in a three-stranded beta-sheet model.
Roe, Daniel R; Hornak, Viktor; Simmerling, Carlos
2005-09-16
The thermodynamic behavior of a previously designed three-stranded beta-sheet was studied via several microseconds of standard and replica exchange molecular dynamics simulations. The system is shown to populate at least four thermodynamic minima, including two partially folded states in which only a single hairpin is formed. Simulated melting curves show different profiles for the C and N-terminal hairpins, consistent with differences in secondary structure content in published NMR and CD/FTIR measurements, which probed different regions of the chain. Individual beta-hairpins that comprise the three-stranded beta-sheet are observed to form cooperatively. Partial folding cooperativity between the component hairpins is observed, and good agreement between calculated and experimental values quantifying this cooperativity is obtained when similar analysis techniques are used. However, the structural detail in the ensemble of conformations sampled in the simulations permits a more direct analysis of this cooperativity than has been performed on the basis of experimental data. The results indicate the actual folding cooperativity perpendicular to strand direction is significantly larger than the lower bound obtained previously.
Folding cooperativity in a 3-stranded β-sheet model
Roe, Daniel R.; Hornak, Viktor
2015-01-01
Summary The thermodynamic behavior of a previously designed three-stranded β-sheet was studied via several µs of standard and replica exchange molecular dynamics simulations. The system is shown to populate at least four thermodynamic minima, including 2 partially folded states in which only a single hairpin is formed. Simulated melting curves show different profiles for the C and N-terminal hairpins, consistent with differences in secondary structure content in published NMR and CD/FTIR measurements, which probed different regions of the chain. Individual β-hairpins that comprise the 3-stranded β-sheet are observed to form cooperatively. Partial folding cooperativity between the component hairpins is observed, and good agreement between calculated and experimental values quantifying this cooperativity is obtained when similar analysis techniques are used. However, the structural detail in the ensemble of conformations sampled in the simulations permits a more direct analysis of this cooperativity than has been performed based on experimental data. The results indicate the actual folding cooperativity perpendicular to strand direction is significantly larger than the lower bound obtained previously. PMID:16095612
Concentrating small particles in protoplanetary disks through the streaming instability
NASA Astrophysics Data System (ADS)
Yang, C.-C.; Johansen, A.; Carrera, D.
2017-10-01
Laboratory experiments indicate that direct growth of silicate grains via mutual collisions can only produce particles up to roughly millimeters in size. On the other hand, recent simulations of the streaming instability have shown that mm/cm-sized particles require an excessively high metallicity for dense filaments to emerge. Using a numerical algorithm for stiff mutual drag force, we perform simulations of small particles with significantly higher resolutions and longer simulation times than in previous investigations. We find that particles of dimensionless stopping time τs = 10-2 and 10-3 - representing cm- and mm-sized particles interior of the water ice line - concentrate themselves via the streaming instability at a solid abundance of a few percent. We thus revise a previously published critical solid abundance curve for the regime of τs ≪ 1. The solid density in the concentrated regions reaches values higher than the Roche density, indicating that direct collapse of particles down to mm sizes into planetesimals is possible. Our results hence bridge the gap in particle size between direct dust growth limited by bouncing and the streaming instability.
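The Roche density referred to above is commonly written as rho_R ≈ 9*Omega^2/(4*pi*G) for a Keplerian orbital frequency Omega. A small sketch with an illustrative orbital radius and a solar-mass star, not the paper's disk model:

```python
import math

G = 6.674e-11          # m^3 kg^-1 s^-2
M_sun = 1.989e30       # kg
au = 1.496e11          # m

def roche_density(r_au, m_star=M_sun):
    """Roche density ~ 9 * Omega^2 / (4 * pi * G) for a Keplerian orbit."""
    omega = math.sqrt(G * m_star / (r_au * au) ** 3)   # Keplerian angular frequency
    return 9.0 * omega ** 2 / (4.0 * math.pi * G)

# Illustrative value interior to the water ice line (radius chosen for illustration only)
print(f"Roche density at 2 au: {roche_density(2.0):.2e} kg/m^3")
```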
A Component-Based Extension Framework for Large-Scale Parallel Simulations in NEURON
King, James G.; Hines, Michael; Hill, Sean; Goodman, Philip H.; Markram, Henry; Schürmann, Felix
2008-01-01
As neuronal simulations approach larger scales with increasing levels of detail, the neurosimulator software represents only a part of a chain of tools ranging from setup, simulation, interaction with virtual environments to analysis and visualizations. Previously published approaches to abstracting simulator engines have not received widespread acceptance, which in part may be due to the fact that they tried to address the challenge of solving the model specification problem. Here, we present an approach that uses a neurosimulator, in this case NEURON, to describe and instantiate the network model in the simulator's native model language but then replaces the main integration loop with its own. Existing parallel network models are easily adapted to run in the presented framework. The presented approach is thus an extension to NEURON but uses a component-based architecture to allow for replaceable spike exchange components and pluggable components for monitoring, analysis, or control that can run in this framework alongside the simulation. PMID:19430597
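A conceptual sketch of the architecture described, not the published framework's API: the network is still specified in the simulator's native model language, but a custom time loop advances the model and calls replaceable components (spike exchange, monitoring, analysis) at each step. The Component protocol, the class names, and the placeholder advance step below are hypothetical.

```python
from typing import List, Protocol

class Component(Protocol):
    def step(self, t: float, dt: float) -> None: ...

class SpikeExchange:
    """Placeholder for a replaceable spike-exchange strategy."""
    def step(self, t: float, dt: float) -> None:
        pass  # e.g., an all-to-all or point-to-point MPI exchange would go here

class Monitor:
    """Pluggable monitoring/analysis component running alongside the simulation."""
    def step(self, t: float, dt: float) -> None:
        if int(t / dt) % 1000 == 0:
            print(f"t = {t:.1f} ms")

def run(components: List[Component], t_stop: float, dt: float = 0.025) -> None:
    """Custom integration loop replacing the simulator's built-in run control."""
    t = 0.0
    while t < t_stop:
        # a call into the neurosimulator's single-step API would advance the model here
        for c in components:
            c.step(t, dt)
        t += dt

run([SpikeExchange(), Monitor()], t_stop=100.0)
```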
Global Simulation of Proton Precipitation Due to Field Line Curvature During Substorms
NASA Technical Reports Server (NTRS)
Gilson, M. L.; Raeder, J.; Donovan, E.; Ge, Y. S.; Kepko, L.
2012-01-01
The low latitude boundary of the proton aurora (known as the Isotropy Boundary or IB) marks an important boundary between empty and full downgoing loss cones. There is significant evidence that the IB maps to a region in the magnetosphere where the ion gyroradius becomes comparable to the local field line curvature. However, the location of the IB in the magnetosphere remains in question. In this paper, we show simulated proton precipitation derived from the Field Line Curvature (FLC) model of proton scattering and a global magnetohydrodynamic simulation during two substorms. The simulated proton precipitation drifts equatorward during the growth phase, intensifies at onset and reproduces the azimuthal splitting published in previous studies. In the simulation, the pre-onset IB maps to 7-8 RE for the substorms presented and the azimuthal splitting is caused by the development of the substorm current wedge. The simulation also demonstrates that the central plasma sheet temperature can significantly influence when and where the azimuthal splitting takes place.
NASA Technical Reports Server (NTRS)
Tuey, Richard C.; Lane, Robert; Hart, Susan V.
1995-01-01
The NASA Scientific and Technical Information Office was assigned the responsibility to continue with the expansion of the NASAwide networked electronic duplicating effort by including the Goddard Space Flight Center (GSFC) as an additional node to the existing configuration of networked electronic duplicating systems within NASA. The subject of this report is the evaluation of a networked electronic duplicating system which meets the duplicating requirements and expands electronic publishing capabilities without increasing current operating costs. This report continues the evaluation reported in 'NASA Electronic Publishing System - Electronic Printing and Duplicating Evaluation Report' (NASA TM-106242) and 'NASA Electronic Publishing System - Stage 1 Evaluation Report' (NASA TM-106510). This report differs from the previous reports through the inclusion of an external networked desktop editing, archival, and publishing functionality which did not exist with the previous networked electronic duplicating system. Additionally, a two-phase approach to the evaluation was undertaken; the first was a paper study justifying a 90-day, on-site evaluation, and the second phase was to validate, during the 90-day evaluation, the cost benefits and productivity increases that could be achieved in an operational mode. A benchmark of the functionality of the networked electronic publishing system and external networked desktop editing, archival, and publishing system was performed under a simulated daily production environment. This report can be used to guide others in determining the most cost effective duplicating/publishing alternative through the use of cost/benefit analysis and return on investment techniques. A treatise on the use of these techniques can be found by referring to 'NASA Electronic Publishing System -Cost/Benefit Methodology' (NASA TM-106662).
Infantino, Angelo; Cicoria, Gianfranco; Lucconi, Giulia; Pancaldi, Davide; Vichi, Sara; Zagni, Federico; Mostacci, Domiziano; Marengo, Mario
2016-12-01
In the planning of a new cyclotron facility, accurate knowledge of the radiation field around the accelerator is fundamental for the design of shielding and the protection of workers, the general public and the environment. Monte Carlo simulations can be very useful in this process, and their use is constantly increasing. However, few data have been published so far regarding the proper validation of Monte Carlo simulations against experimental measurements, particularly in the energy range of biomedical cyclotrons. In this work a detailed model of an existing installation of a GE PETtrace 16.5 MeV cyclotron was developed using FLUKA. An extensive measurement campaign of the neutron ambient dose equivalent H*(10) in marked positions around the cyclotron was conducted using a neutron rem-counter probe and CR39 neutron detectors. Data from a previous measurement campaign performed by our group using TLDs were also re-evaluated. The FLUKA model was then validated by comparing the results of high-statistics simulations with experimental data. In 10 out of 12 measurement locations, FLUKA simulations agreed within uncertainties with all three sets of experimental data; in the remaining 2 positions, the simulations agreed with two of the three measurement sets. Our work provides a quantitative validation of our FLUKA simulation setup and confirms that the Monte Carlo technique can produce accurate results in the energy range of biomedical cyclotrons. Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
The space-dependent model and output characteristics of intra-cavity pumped dual-wavelength lasers
NASA Astrophysics Data System (ADS)
He, Jin-Qi; Dong, Yuan; Zhang, Feng-Dong; Yu, Yong-Ji; Jin, Guang-Yong; Liu, Li-Da
2016-01-01
The intra-cavity pumping scheme used to simultaneously generate dual-wavelength lasers was previously proposed and published by us, and a space-independent model of quasi-three-level and four-level intra-cavity pumped dual-wavelength lasers was constructed based on this scheme. In this paper, to make the previous study more rigorous, a space-dependent model is adopted. As an example, the output characteristics of 946 nm and 1064 nm dual-wavelength lasers under different output mirror transmittances are numerically simulated using the derived formulas, and the results are nearly identical to those previously reported.
Torsional anharmonicity in the conformational thermodynamics of flexible molecules
NASA Astrophysics Data System (ADS)
Miller, Thomas F., III; Clary, David C.
We present an algorithm for calculating the conformational thermodynamics of large, flexible molecules that combines ab initio electronic structure theory calculations with a torsional path integral Monte Carlo (TPIMC) simulation. The new algorithm overcomes the previous limitations of the TPIMC method by including the thermodynamic contributions of non-torsional vibrational modes and by affordably incorporating the ab initio calculation of conformer electronic energies, and it improves the conventional ab initio treatment of conformational thermodynamics by accounting for the anharmonicity of the torsional modes. Using previously published ab initio results and new TPIMC calculations, we apply the algorithm to the conformers of the adrenaline molecule.
Battery Calendar Life Estimator Manual Modeling and Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jon P. Christophersen; Ira Bloom; Ed Thomas
2012-10-01
The Battery Life Estimator (BLE) Manual has been prepared to assist developers in their efforts to estimate the calendar life of advanced batteries for automotive applications. Testing requirements and procedures are defined by the various manuals previously published under the United States Advanced Battery Consortium (USABC). The purpose of this manual is to describe and standardize a method for estimating calendar life based on statistical models and degradation data acquired from typical USABC battery testing.
Battery Life Estimator Manual Linear Modeling and Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jon P. Christophersen; Ira Bloom; Ed Thomas
2009-08-01
The Battery Life Estimator (BLE) Manual has been prepared to assist developers in their efforts to estimate the calendar life of advanced batteries for automotive applications. Testing requirements and procedures are defined by the various manuals previously published under the United States Advanced Battery Consortium (USABC). The purpose of this manual is to describe and standardize a method for estimating calendar life based on statistical models and degradation data acquired from typical USABC battery testing.
Physics-based agent to simulant correlations for vapor phase mass transport.
Willis, Matthew P; Varady, Mark J; Pearl, Thomas P; Fouse, Janet C; Riley, Patrick C; Mantooth, Brent A; Lalain, Teri A
2013-12-15
Chemical warfare agent simulants are often used as an agent surrogate to perform environmental testing, mitigating exposure hazards. This work specifically addresses the assessment of downwind agent vapor concentration resulting from an evaporating simulant droplet. A previously developed methodology was used to estimate the mass diffusivities of the chemical warfare agent simulants methyl salicylate, 2-chloroethyl ethyl sulfide, di-ethyl malonate, and chloroethyl phenyl sulfide. Along with the diffusivity of the chemical warfare agent bis(2-chloroethyl) sulfide, the simulant diffusivities were used in an advection-diffusion model to predict the vapor concentrations downwind from an evaporating droplet of each chemical at various wind velocities and temperatures. The results demonstrate that the simulant-to-agent concentration ratio and the corresponding vapor pressure ratio are equivalent under certain conditions. Specifically, the relationship is valid within ranges of measurement locations relative to the evaporating droplet and observation times. The valid ranges depend on the relative transport properties of the agent and simulant, and whether vapor transport is diffusion or advection dominant. Published by Elsevier B.V.
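The stated equivalence can be illustrated with a back-of-the-envelope check: when evaporation of the droplet is the rate-limiting step and transport conditions are matched, the downwind concentration of each chemical scales roughly with its vapor pressure, so the simulant-to-agent concentration ratio approaches the vapor pressure ratio. The numbers below are illustrative placeholders, not measured properties of the chemicals named in the abstract.

```python
def concentration_ratio(p_vap_simulant, p_vap_agent,
                        c_downwind_simulant, c_downwind_agent):
    """Compare the downwind concentration ratio with the vapor pressure ratio."""
    vp_ratio = p_vap_simulant / p_vap_agent
    conc_ratio = c_downwind_simulant / c_downwind_agent
    return vp_ratio, conc_ratio

# Hypothetical numbers for illustration only (vapor pressures in Pa, concentrations in mg/m^3)
vp_ratio, conc_ratio = concentration_ratio(14.0, 0.1, 2.8e-2, 2.0e-4)
print(f"vapor pressure ratio: {vp_ratio:.0f}, concentration ratio: {conc_ratio:.0f}")
# If the two ratios agree, simulant measurements can be scaled to estimate the agent hazard.
```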
Rossow, Heidi A; Calvert, C Chris
2014-10-01
The goal of this research was to use a computational model of human metabolism to predict energy metabolism for lean and obese men. The model is composed of 6 state variables representing amino acids, muscle protein, visceral protein, glucose, triglycerides, and fatty acids (FAs). Differential equations represent carbohydrate, amino acid, and FA uptake and output by tissues based on ATP creation and use for both lean and obese men. Model parameterization is based on data from previous studies. Results from sensitivity analyses indicate that model predictions of resting energy expenditure (REE) and respiratory quotient (RQ) are dependent on FA and glucose oxidation rates with the highest sensitivity coefficients (0.6, 0.8 and 0.43, 0.15, respectively, for lean and obese models). Metabolizable energy (ME) is influenced by ingested energy intake with a sensitivity coefficient of 0.98, and a phosphate-to-oxygen ratio by FA oxidation rate and amino acid oxidation rate (0.32, 0.24 and 0.55, 0.65 for lean and obese models, respectively). Simulations of previously published studies showed that the model is able to predict ME ranging from 6.6 to 9.3 with 0% differences between published and model values, and RQ ranging from 0.79 to 0.86 with 1% differences between published and model values. REEs >7 MJ/d are predicted with 6% differences between published and model values. Glucose oxidation increases by ∼0.59 mol/d, RQ increases by 0.03, REE increases by 2 MJ/d, and heat production increases by 1.8 MJ/d in the obese model compared with lean model simulations. Increased FA oxidation results in higher changes in RQ and lower relative changes in REE. These results suggest that because fat mass is directly related to REE and rate of FA oxidation, body fat content could be used as a predictor of RQ. © 2014 American Society for Nutrition.
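The link between substrate oxidation rates and RQ in such a model follows from textbook stoichiometry: glucose oxidation gives RQ of about 1.0 and fatty acid (palmitate) oxidation about 0.7, so the whole-body RQ is the CO2/O2 ratio of the oxidation mix. A minimal sketch with illustrative oxidation rates, not the model's parameter values:

```python
def mixed_rq(mol_glucose_per_day, mol_palmitate_per_day):
    """Respiratory quotient from glucose and palmitate oxidation stoichiometry."""
    # glucose:   C6H12O6  +  6 O2 ->  6 CO2 +  6 H2O
    # palmitate: C16H32O2 + 23 O2 -> 16 CO2 + 16 H2O
    vo2 = 6 * mol_glucose_per_day + 23 * mol_palmitate_per_day
    vco2 = 6 * mol_glucose_per_day + 16 * mol_palmitate_per_day
    return vco2 / vo2

# Illustrative oxidation rates (mol/d), not taken from the published model
print(f"RQ = {mixed_rq(1.2, 0.35):.2f}")
```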
Fine-resolution imaging of solar features using Phase-Diverse Speckle
NASA Technical Reports Server (NTRS)
Paxman, Richard G.
1995-01-01
Phase-diverse speckle (PDS) is a novel imaging technique intended to overcome the degrading effects of atmospheric turbulence on fine-resolution imaging. As its name suggests, PDS is a blend of phase-diversity and speckle-imaging concepts. PDS reconstructions on solar data were validated by simulation, by demonstrating internal consistency of PDS estimates, and by comparing PDS reconstructions with those produced from well accepted speckle-imaging processing. Several sources of error in data collected with the Swedish Vacuum Solar Telescope (SVST) were simulated: CCD noise, quantization error, image misalignment, and defocus error, as well as atmospheric turbulence model error. The simulations demonstrate that fine-resolution information can be reliably recovered out to at least 70% of the diffraction limit without significant introduction of image artifacts. Additional confidence in the SVST restoration is obtained by comparing its spatial power spectrum with previously-published power spectra derived from both space-based images and earth-based images corrected with traditional speckle-imaging techniques; the shape of the spectrum is found to match well the previous measurements. In addition, the imagery is found to be consistent with, but slightly sharper than, imagery reconstructed with accepted speckle-imaging techniques.
Fulton, Lawrence; Kerr, Bernie; Inglis, James M; Brooks, Matthew; Bastian, Nathaniel D
2015-07-01
In this study, we re-evaluate air ambulance requirements (rules of allocation) and planning considerations based on an Army-approved, Theater Army Analysis scenario. A previous study using workload only estimated a requirement of 0.4 to 0.6 aircraft per admission, a significant bolus over existence-based rules. In this updated study, we estimate requirements for Phase III (major combat operations) using a simulation grounded in previously published work and Phase IV (stability operations) based on four rules of allocation: unit existence rules, workload factors, theater structure (geography), and manual input. This study improves upon previous work by including the new air ambulance mission requirements of Department of Defense 51001.1, Roles and Functions of the Services, by expanding the analysis over two phases, and by considering unit rotation requirements known as Army Force Generation based on Department of Defense policy. The recommendations of this study are intended to inform future planning factors and already provided decision support to the Army Aviation Branch in determining force structure requirements. Reprint & Copyright © 2015 Association of Military Surgeons of the U.S.
Rühle, K H; Karweina, D; Domanski, U; Nilius, G
2009-07-01
The function of automatic CPAP (APAP) devices is difficult to investigate in clinical examinations because of the high variability of breathing disorders. With a flow generator, however, identical breathing patterns can be reproduced, so comparative studies of the pressure behaviour of APAP devices are possible. Because the algorithms of APAP devices can be modified without much effort in response to user experience, previously investigated devices should be reviewed regularly for programme changes. Had changes occurred in the algorithms of 3 selected devices compared with the previously published benchmark studies? Do the current versions of these devices differentiate between open and closed apnoeas? With a self-developed respiratory pump, sleep-related breathing patterns were simulated, and resistances of the upper respiratory tract were simulated with the help of a computerised valve. Three different auto-CPAP devices were subjected to a bench test with and without feedback (open/closed loop). Open loop: the 3 devices showed marked differences in the rate of pressure rise but did not differ from the earlier published results. From an initial pressure of 4 mbar, the pressure increased to 10 mbar after a different number of apnoeas (1-6 repetitive apnoeas). Only one device differentiated between closed and open apnoeas. Closed loop: owing to the pressure increase, the flow generator simulated reduced obstruction of the upper airways (apnoeas changed to hypopnoeas, hypopnoeas changed to flattening), but different patterns of pressure regulation could still be observed. By applying bench testing, the algorithms of auto-CPAP devices can be reviewed regularly to detect changes in the software. The differentiation between open and closed apnoeas should be improved in several APAP devices.
NASA Astrophysics Data System (ADS)
Henrot, Alexandra-Jane; Stanelle, Tanja; Schröder, Sabine; Siegenthaler, Colombe; Taraborrelli, Domenico; Schultz, Martin G.
2017-02-01
A biogenic emission scheme based on the Model of Emissions of Gases and Aerosols from Nature (MEGAN) version 2.1 (Guenther et al., 2012) has been integrated into the ECHAM6-HAMMOZ chemistry climate model in order to calculate the emissions from terrestrial vegetation of 32 compounds. The estimated annual global total for the reference simulation is 634 Tg C yr-1 (simulation period 2000-2012). Isoprene is the main contributor to the average emission total, accounting for 66 % (417 Tg C yr-1), followed by several monoterpenes (12 %), methanol (7 %), acetone (3.6 %), and ethene (3.6 %). Regionally, most of the high annual emissions are found to be associated with tropical regions and tropical vegetation types. In order to evaluate the implementation of the biogenic model in ECHAM-HAMMOZ, global and regional biogenic volatile organic compound (BVOC) emissions of the reference simulation were compared to previously published experiment results with MEGAN. Several sensitivity simulations were performed to study the impact of different model input and parameters related to the vegetation cover and the ECHAM6 climate. BVOC emissions obtained here are within the range of previously published estimates. The large range of emission estimates can be attributed to the use of different input data and empirical coefficients within different setups of MEGAN. The biogenic model shows a high sensitivity to changes in plant functional type (PFT) distributions and associated emission factors for most of the compounds. The global emission impact for isoprene is about -9 %, but reaches +75 % for α-pinene when switching from global emission factor maps to PFT-specific emission factor distributions. The highest sensitivity of isoprene emissions is calculated when considering the soil moisture impact, with a global decrease of 12.5 % when the soil moisture activity factor is included in the model parameterization. Nudging the ECHAM6 climate towards ERA-Interim reanalysis has an impact on the biogenic emissions, slightly lowering the global total emissions and their interannual variability.
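The structure of a MEGAN-style calculation is an emission factor scaled by a product of dimensionless activity factors (temperature, light, leaf age, leaf area, soil moisture). The sketch below keeps only a simplified exponential temperature response plus user-supplied soil-moisture and LAI factors, with illustrative coefficients; it is not the full MEGAN 2.1 parameterization used in the paper.

```python
import math

def emission_rate(ef_standard, temp_K, soil_moisture_factor=1.0, lai_factor=1.0,
                  beta=0.1, temp_standard=303.0):
    """
    MEGAN-style emission estimate: a standard emission factor times a product of
    dimensionless activity factors. Only a simplified exponential temperature response
    (gamma_T = exp(beta * (T - T_s))) plus soil-moisture and LAI factors are included;
    the real scheme has additional terms (light, leaf age, CO2 inhibition).
    """
    gamma_T = math.exp(beta * (temp_K - temp_standard))
    return ef_standard * gamma_T * soil_moisture_factor * lai_factor

# Illustrative numbers only: emission factor in ug m-2 h-1, a warm day, mild soil-moisture stress
print(f"{emission_rate(600.0, 298.0, soil_moisture_factor=0.8, lai_factor=1.2):.1f} ug m-2 h-1")
```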
Modelling of thermal stresses in bearing steel structure generated by electrical current impulses
NASA Astrophysics Data System (ADS)
Birjukovs, M.; Jakovics, A.; Holweger, W.
2018-05-01
This work is a study of one particular candidate mechanism for white etching crack (WEC) initiation in wind turbine gearbox bearings: discharge current impulses flowing through bearing steel, with the associated thermal stresses and material fatigue. Using data and results from previously published works, the authors develop a series of models that are used to simulate these processes under various conditions and local microstructure configurations, as well as to verify the results of previous numerical studies. The presented models show that the resulting stresses are several orders of magnitude below the fatigue limit and yield strength for the parameters used herein. Results and analysis of the models provided by Scepanskis et al. also indicate that certain effects predicted in their previous work resulted from a physically unfounded assumption about material thermodynamic properties and from numerical model implementation issues.
NASA Technical Reports Server (NTRS)
Likins, P. W.
1974-01-01
Equations of motion are derived for use in simulating a spacecraft or other complex electromechanical system amenable to idealization as a set of hinge-connected rigid bodies of tree topology, with rigid axisymmetric rotors and nonrigid appendages attached to each rigid body in the set. In conjunction with a previously published report on finite-element appendage vibration equations, this report provides a complete minimum-dimension formulation suitable for generic programming for digital computer numerical integration.
Methods for Reachability-based Hybrid Controller Design
2012-05-10
approaches for airport runways (Teo and Tomlin, 2003). The results of the reachability calculations were validated in extensive simulations as well as...UAV flight experiments (Jang and Tomlin, 2005; Teo, 2005). While the focus of these previous applications lies largely in safety verification, the work...(B([15, 0], a0) × [−π, π]) \ V, ∀ qi ∈ Q, where a0 = 30 m is the protected radius (chosen based upon published data on the wingspan of a Boeing KC-135
Precipitation Dynamical Downscaling Over the Great Plains
NASA Astrophysics Data System (ADS)
Hu, Xiao-Ming; Xue, Ming; McPherson, Renee A.; Martin, Elinor; Rosendahl, Derek H.; Qiao, Lei
2018-02-01
Detailed, regional climate projections, particularly for precipitation, are critical for many applications. Accurate precipitation downscaling in the United States Great Plains remains a great challenge for most Regional Climate Models, particularly for warm months. Most previous dynamic downscaling simulations significantly underestimate warm-season precipitation in the region. This study aims to achieve a better precipitation downscaling in the Great Plains with the Weather Research and Forecasting (WRF) model. To this end, WRF simulations with different physics schemes and nudging strategies are first conducted for a representative warm season. Results show that different cumulus schemes lead to more pronounced differences in simulated precipitation than the other tested physics schemes. Simply choosing different physics schemes is not enough to alleviate the dry bias over the southern Great Plains, which is related to an anticyclonic circulation anomaly over the central and western parts of the continental U.S. in the simulations. Spectral nudging emerges as an effective solution for alleviating the precipitation bias. Spectral nudging ensures that large and synoptic-scale circulations are faithfully reproduced while still allowing WRF to develop small-scale dynamics, thus effectively suppressing the large-scale circulation anomaly in the downscaling. As a result, a better precipitation downscaling is achieved. With the carefully validated configurations, WRF downscaling is conducted for 1980-2015. The downscaling captures well the spatial distribution of the monthly precipitation climatology and the monthly/yearly variability, showing improvement over at least two previously published precipitation downscaling studies. With the improved precipitation downscaling, a better hydrological simulation over the trans-state Oologah watershed is also achieved.
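The idea behind spectral nudging can be shown in one dimension: relax only the low-wavenumber part of a model field toward the driving (reanalysis) field, leaving smaller scales free to develop. A minimal numpy sketch of that idea, not the WRF implementation:

```python
import numpy as np

def spectral_nudge(model_field, driving_field, cutoff_wavenumber, strength, dt):
    """Relax only wavenumbers <= cutoff toward the driving field (1-D illustration)."""
    f_model = np.fft.rfft(model_field)
    f_drive = np.fft.rfft(driving_field)
    k = np.arange(f_model.size)
    large_scale = k <= cutoff_wavenumber
    # Newtonian relaxation applied in spectral space, large scales only
    f_model[large_scale] += strength * dt * (f_drive[large_scale] - f_model[large_scale])
    return np.fft.irfft(f_model, n=model_field.size)

x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
driving = np.sin(x)                                # "reanalysis" large-scale state
model = 0.6 * np.sin(x) + 0.3 * np.sin(12 * x)     # drifted large scale + small-scale detail
nudged = spectral_nudge(model, driving, cutoff_wavenumber=3, strength=3e-4, dt=600.0)

k1_model = abs(np.fft.rfft(model)[1]) / 128        # amplitude of the k=1 mode
k1_nudged = abs(np.fft.rfft(nudged)[1]) / 128
print(f"k=1 amplitude: model {k1_model:.2f} -> nudged {k1_nudged:.2f} (driving field: 1.00)")
```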
NASA Technical Reports Server (NTRS)
Schrader, Christian M.; Rickman, Doug; Stoeser, Doug; Wentworth, Susan J.; Botha, Pieter WSK; Butcher, Alan R.; McKay, David; Horsch, Hanna; Benedictus, Aukje; Gottlieb, Paul
2008-01-01
We present modal data from QEMSCAN® beam analysis of Apollo 16 samples from drive core 64001/2. The analyzed lunar samples are thin sections 64002,6019 (5.0-8.0 cm depth) and 64001,6031 (50.0-53.1 cm depth) and sieved grain mounts 64002,262 and 64001,374 from depths corresponding to the thin sections, respectively. We also analyzed lunar highland regolith simulants NU-LHT-1M, -2M, and OB-1, low-Ti mare simulants JSC-1, -1A, -1AF, and FJS-1, and high-Ti mare simulant MLS-1. The preliminary results comprise the beginning of an internally consistent database of lunar regolith and regolith simulant mineral and glass information. This database, combined with previous and concurrent studies on phase chemistry, bulk chemistry, and with data on particle shape and size distribution, will serve to guide lunar scientists and engineers in choosing simulants for their applications. These results are modal% by phase rather than by particle type, so they are not directly comparable to most previously published lunar data that report lithic fragments, monomineralic particles, agglutinates, etc. Of the highland simulants, OB-1 has an integrated modal composition closer than NU-LHT-1M to that of the 64001/2 samples. However, this and other studies show that NU-LHT-1M and -2M have minor and trace mineral (e.g., Fe-Ti oxides and phosphates) populations and mineral and glass chemistry closer to these lunar samples. The finest fractions (0-20 microns) in the sieved lunar samples are enriched in glass relative to the integrated compositions by approx. 30% for 64002,262 and approx. 15% for 64001,374. Plagioclase, pyroxene, and olivine are depleted in these finest fractions. This could be important to lunar dust mitigation efforts and astronaut health - none of the analyzed simulants show this trend. Contrary to previously reported modal analyses of monomineralic grains in lunar regolith, these area% modal analyses do not show a systematic increase in plagioclase/pyroxene as size fraction decreases.
Unified-theory-of-reinforcement neural networks do not simulate the blocking effect.
Calvin, Nicholas T; J McDowell, J
2015-11-01
For the last 20 years the unified theory of reinforcement (Donahoe et al., 1993) has been used to develop computer simulations to evaluate its plausibility as an account for behavior. The unified theory of reinforcement states that operant and respondent learning occurs via the same neural mechanisms. As part of a larger project to evaluate the operant behavior predicted by the theory, this project was the first replication of neural network models based on the unified theory of reinforcement. In the process of replicating these neural network models it became apparent that a previously published finding, namely, that the networks simulate the blocking phenomenon (Donahoe et al., 1993), was a misinterpretation of the data. We show that the apparent blocking produced by these networks is an artifact of the inability of these networks to generate the same conditioned response to multiple stimuli. The piecemeal approach to evaluate the unified theory of reinforcement via simulation is critiqued and alternatives are discussed. Copyright © 2015 Elsevier B.V. All rights reserved.
Geant4 simulation of the CERN-EU high-energy reference field (CERF) facility.
Prokopovich, D A; Reinhard, M I; Cornelius, I M; Rosenfeld, A B
2010-09-01
The CERN-EU high-energy reference field facility is used for testing and calibrating both active and passive radiation dosemeters for radiation protection applications in space and aviation. Through a combination of a primary particle beam, a target and a suitably designed shielding configuration, the facility is able to reproduce the neutron component of the high-altitude radiation field relevant to the jet aviation industry. Simulations of the facility using the GEANT4 (GEometry ANd Tracking) toolkit provide an improved understanding of the neutron particle fluence as well as the particle fluence of the other radiation components present. The secondary particle fluence as a function of the primary particle fluence incident on the target and the associated dose equivalent rates were determined at the 20 designated irradiation positions available at the facility. The simulated results are compared with previously published simulations obtained using the FLUKA Monte Carlo code, as well as with experimental measurements of the neutron fluence obtained with a Bonner sphere spectrometer.
Virtual reality simulators: valuable surgical skills trainers or video games?
Willis, Ross E; Gomez, Pedro Pablo; Ivatury, Srinivas J; Mitra, Hari S; Van Sickle, Kent R
2014-01-01
Virtual reality (VR) and physical model (PM) simulators differ in terms of whether the trainee is manipulating actual 3-dimensional objects (PM) or computer-generated 3-dimensional objects (VR). Much like video games (VG), VR simulators utilize computer-generated graphics. These differences may have profound effects on the utility of VR and PM training platforms. In this study, we aimed to determine whether a relationship exists between VR, PM, and VG platforms. VR and PM simulators for laparoscopic camera navigation ([LCN], experiment 1) and flexible endoscopy ([FE] experiment 2) were used in this study. In experiment 1, 20 laparoscopic novices played VG and performed 0° and 30° LCN exercises on VR and PM simulators. In experiment 2, 20 FE novices played VG and performed colonoscopy exercises on VR and PM simulators. In both experiments, VG performance was correlated with VR performance but not with PM performance. Performance on VR simulators did not correlate with performance on respective PM models. VR environments may be more like VG than previously thought. © 2013 Published by Association of Program Directors in Surgery on behalf of Association of Program Directors in Surgery.
Optimization of the design of Gas Cherenkov Detectors for ICF diagnosis
NASA Astrophysics Data System (ADS)
Liu, Bin; Hu, Huasi; Han, Hetong; Lv, Huanwen; Li, Lan
2018-07-01
A design method that combines a genetic algorithm (GA) with Monte-Carlo simulation is established and applied to two different types of Cherenkov detectors, namely the Gas Cherenkov Detector (GCD) and the Gamma Reaction History (GRH) detector. To accelerate the optimization program, Open Message Passing Interface (MPI) is used in the Geant4 simulation. Compared with the traditional optical ray-tracing method, the performances of these detectors have been improved with the optimization method. The efficiency of the GCD system, with a threshold of 6.3 MeV, is enhanced by ∼20% and the time response is improved by ∼7.2%. For the GRH system, with a threshold of 10 MeV, the efficiency is enhanced by ∼76% in comparison with previously published results.
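A skeletal version of the coupling described above: a genetic algorithm proposes detector geometries and an expensive simulation supplies the fitness. Here the Geant4 run is replaced by a stand-in analytic function, and the parameter names and bounds are hypothetical.

```python
import random

random.seed(1)

POP, GENS = 20, 30
BOUNDS = [(5.0, 50.0), (0.5, 5.0), (1.0, 10.0)]   # hypothetical geometry parameters (cm)

def fitness(genes):
    """Stand-in for an expensive Monte-Carlo detector simulation returning a figure
    of merit; in the real workflow this would launch a Geant4 run per candidate."""
    mirror_r, gas_len, window_t = genes
    return -(mirror_r - 30.0) ** 2 - 4 * (gas_len - 2.5) ** 2 - (window_t - 4.0) ** 2

def random_individual():
    return [random.uniform(lo, hi) for lo, hi in BOUNDS]

def crossover(a, b):
    return [random.choice(pair) for pair in zip(a, b)]

def mutate(ind, rate=0.2):
    return [random.uniform(lo, hi) if random.random() < rate else g
            for g, (lo, hi) in zip(ind, BOUNDS)]

population = [random_individual() for _ in range(POP)]
for _ in range(GENS):
    population.sort(key=fitness, reverse=True)
    parents = population[: POP // 2]                  # truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print("best candidate geometry:", [round(g, 2) for g in best])
```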
Prediction of drug-packaging interactions via molecular dynamics (MD) simulations.
Feenstra, Peter; Brunsteiner, Michael; Khinast, Johannes
2012-07-15
The interaction between packaging materials and drug products is an important issue for the pharmaceutical industry, since during manufacturing, processing and storage a drug product is continuously exposed to various packaging materials. The experimental investigation of a great variety of different packaging material-drug product combinations in terms of efficacy and safety can be a costly and time-consuming task. In our work we used molecular dynamics (MD) simulations in order to evaluate the applicability of such methods to pre-screening of the packaging material-solute compatibility. The solvation free energy and the free energy of adsorption of diverse solute/solvent/solid systems were estimated. The results of our simulations agree with experimental values previously published in the literature, which indicates that the methods in question can be used to semi-quantitatively reproduce the solid-liquid interactions of the investigated systems. Copyright © 2012 Elsevier B.V. All rights reserved.
Bistable behavior of the lac operon in E. coli when induced with a mixture of lactose and TMG.
Díaz-Hernández, Orlando; Santillán, Moisés
2010-01-01
In this work we investigate multistability in the lac operon of Escherichia coli when it is induced by a mixture of lactose and the non-metabolizable thiomethyl galactoside (TMG). In accordance with previously published experimental results and computer simulations, our simulations predict that: (1) when the system is induced by TMG, it shows a discernible bistable behavior, while (2) when the system is induced by lactose, bistability does not disappear but excessively high concentrations of lactose would be required to observe it. Finally, our simulation results predict that when a mixture of lactose and TMG is used, the bistability region in the extracellular glucose concentration vs. extracellular lactose concentration parameter space changes in such a way that the model predictions regarding bistability could be tested experimentally. These experiments could help to solve a recent controversy regarding the existence of bistability in the lac operon under natural conditions.
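Bistability of this kind can be reproduced with a minimal positive-feedback toy model: basal expression plus Hill-type induction that depends on the product of inducer level and enzyme/permease level. The sketch below is not the authors' model; the parameter values are illustrative, chosen only so that two initial conditions settle into distinct low and high states at the same inducer level.

```python
def simulate(y0, inducer, t_end=200.0, dt=0.01):
    """Euler integration of dy/dt = basal + vmax*(inducer*y)^n/(K^n + (inducer*y)^n) - gamma*y."""
    basal, vmax, K, n, gamma = 0.05, 1.0, 1.0, 4, 0.5   # illustrative parameters
    y = y0
    for _ in range(int(t_end / dt)):
        drive = inducer * y                              # crude proxy for intracellular inducer
        dydt = basal + vmax * drive**n / (K**n + drive**n) - gamma * y
        y += dt * dydt
    return y

for y0 in (0.05, 3.0):   # low vs. high initial permease/enzyme level
    print(f"inducer=1.2, y0={y0}: steady state ~ {simulate(y0, 1.2):.2f}")
```

With these numbers the low start settles near 0.1 and the high start near 2.1, i.e. two coexisting steady states at the same inducer concentration.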
A unified relation for the solid-liquid interface free energy of pure FCC, BCC, and HCP metals.
Wilson, S R; Mendelev, M I
2016-04-14
We study correlations between the solid-liquid interface (SLI) free energy and bulk material properties (melting temperature, latent heat, and liquid structure) through the determination of SLI free energies for bcc and hcp metals from molecular dynamics (MD) simulation. Values obtained for the bcc metals in this study were compared to values predicted by the Turnbull, Laird, and Ewing relations on the basis of previously published MD simulation data. We found that of these three empirical relations, the Ewing relation better describes the MD simulation data. Moreover, whereas the original Ewing relation contains two constants for a particular crystal structure, we found that the first coefficient in the Ewing relation does not depend on crystal structure, taking a common value for all three phases, at least for the class of the systems described by embedded-atom method potentials (which are considered to provide a reasonable approximation for metals).
A unified relation for the solid-liquid interface free energy of pure FCC, BCC, and HCP metals
NASA Astrophysics Data System (ADS)
Wilson, S. R.; Mendelev, M. I.
2016-04-01
We study correlations between the solid-liquid interface (SLI) free energy and bulk material properties (melting temperature, latent heat, and liquid structure) through the determination of SLI free energies for bcc and hcp metals from molecular dynamics (MD) simulation. Values obtained for the bcc metals in this study were compared to values predicted by the Turnbull, Laird, and Ewing relations on the basis of previously published MD simulation data. We found that of these three empirical relations, the Ewing relation better describes the MD simulation data. Moreover, whereas the original Ewing relation contains two constants for a particular crystal structure, we found that the first coefficient in the Ewing relation does not depend on crystal structure, taking a common value for all three phases, at least for the class of the systems described by embedded-atom method potentials (which are considered to provide a reasonable approximation for metals).
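Of the three empirical relations tested, Turnbull's is the simplest to state: gamma_SL ≈ alpha * ΔH_f / (N_A^(1/3) * V_m^(2/3)), with alpha ≈ 0.45 often quoted for metals. A quick sketch with handbook-style values for aluminum, illustrative only and not the MD results of the paper:

```python
N_A = 6.022e23  # 1/mol

def turnbull_gamma(latent_heat_J_per_mol, molar_volume_m3, alpha=0.45):
    """Turnbull estimate of the solid-liquid interface free energy (J/m^2):
    gamma ~ alpha * dH_f / (N_A**(1/3) * V_m**(2/3))."""
    return alpha * latent_heat_J_per_mol / (N_A ** (1 / 3) * molar_volume_m3 ** (2 / 3))

# Illustrative handbook-style values for aluminum (latent heat ~10.7 kJ/mol, V_m ~10.6 cm^3/mol)
print(f"gamma_SL(Al) ~ {turnbull_gamma(10.7e3, 10.6e-6):.3f} J/m^2")
```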
Thermo-mechanical simulation of liquid-supported stretch blow molding
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zimmer, J.; Stommel, M.
2015-05-22
Stretch blow molding is the well-established plastics forming method used to produce polyethylene terephthalate (PET) bottles. An injection molded preform is heated above the PET glass transition temperature (Tg ∼ 85 °C) and subsequently inflated by pressurized air into a closed cavity. In the follow-up filling process, the resulting bottle is filled with the final product. A recently developed modification of the process combines the blowing and filling stages by directly using the final liquid product to inflate the preform. In a previously published paper, a mechanical simulation and successful evaluation of this liquid-driven stretch blow molding process was presented. In this way, a realistic process-parameter-dependent simulation of the preform deformation throughout the forming process was enabled, whereas the preform temperature evolution during forming was neglected. However, the formability of the preform is highly reduced when the temperature sinks below Tg during forming. Experimental investigations show temperature-induced failure cases due to the fast heat transfer between the hot preform and the cold liquid. Therefore, in this paper, a process-dependent simulation of the temperature evolution during processing to avoid preform failure is presented. For this purpose, the previously developed mechanical model is used to extract the time-dependent thickness evolution. This information serves as input for the heat transfer simulation. The required material parameters are calibrated from preform cooling experiments recorded with an infrared camera. Furthermore, the high deformation ratios during processing lead to strain-induced crystallization. This exothermal reaction is included in the simulation by extracting data from preform measurements at different stages of deformation via Differential Scanning Calorimetry (DSC). Finally, the thermal simulation model is evaluated by free forming experiments recorded by a high-speed infrared camera.
Bias and inference from misspecified mixed-effect models in stepped wedge trial analysis.
Thompson, Jennifer A; Fielding, Katherine L; Davey, Calum; Aiken, Alexander M; Hargreaves, James R; Hayes, Richard J
2017-10-15
Many stepped wedge trials (SWTs) are analysed by using a mixed-effect model with a random intercept and fixed effects for the intervention and time periods (referred to here as the standard model). However, it is not known whether this model is robust to misspecification. We simulated SWTs with three groups of clusters and two time periods; one group received the intervention during the first period and two groups in the second period. We simulated period and intervention effects that were either common-to-all or varied-between clusters. Data were analysed with the standard model or with additional random effects for period effect or intervention effect. In a second simulation study, we explored the weight given to within-cluster comparisons by simulating a larger intervention effect in the group of the trial that experienced both the control and intervention conditions and applying the three analysis models described previously. Across 500 simulations, we computed bias and confidence interval coverage of the estimated intervention effect. We found up to 50% bias in intervention effect estimates when period or intervention effects varied between clusters and were treated as fixed effects in the analysis. All misspecified models showed undercoverage of 95% confidence intervals, particularly the standard model. A large weight was given to within-cluster comparisons in the standard model. In the SWTs simulated here, mixed-effect models were highly sensitive to departures from the model assumptions, which can be explained by the high dependence on within-cluster comparisons. Trialists should consider including a random effect for time period in their SWT analysis model. © 2017 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
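The comparison described can be sketched by simulating a small SWT in which the period effect varies between clusters and then fitting the "standard" random-intercept model. The code below is a minimal illustration using statsmodels, not the authors' simulation code; the cluster counts, effect sizes, and variances are arbitrary.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
rows = []
for cluster in range(12):
    group = cluster % 3                      # three groups of clusters
    u = rng.normal(0, 0.5)                   # cluster random intercept
    period_effect = rng.normal(1.0, 0.8)     # period effect varying BETWEEN clusters
    for period in (0, 1):
        treated = int(group == 0 or period == 1)   # one group switches a period earlier
        for _ in range(20):                  # individuals per cluster-period
            y = 0.5 * treated + period_effect * period + u + rng.normal(0, 1)
            rows.append(dict(y=y, cluster=cluster, period=period, treated=treated))

df = pd.DataFrame(rows)

# "Standard" SWT analysis: random cluster intercept, fixed period and intervention effects
standard = smf.mixedlm("y ~ treated + C(period)", df, groups=df["cluster"]).fit()
print(standard.params[["treated"]])          # compare against the true effect of 0.5
```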
Cuesta-Gragera, Ana; Navarro-Fontestad, Carmen; Mangas-Sanjuan, Victor; González-Álvarez, Isabel; García-Arieta, Alfredo; Trocóniz, Iñaki F; Casabó, Vicente G; Bermejo, Marival
2015-07-10
The objective of this paper is to apply a previously developed semi-physiologic pharmacokinetic model implemented in NONMEM to simulate bioequivalence (BE) trials of acetyl salicylic acid (ASA) in order to validate the model performance against ASA human experimental data. ASA is a drug with first-pass hepatic and intestinal metabolism following Michaelis-Menten kinetics that leads to the formation of two main metabolites in two generations (first and second generation metabolites). The first aim was to adapt the semi-physiologic model for ASA in NONMEM using ASA pharmacokinetic parameters from the literature, showing its sequential metabolism. The second aim was to validate this model by comparing the results obtained in NONMEM simulations with published experimental data at a dose of 1000 mg. The validated model was used to simulate bioequivalence trials at 3 dose schemes (100, 1000 and 3000 mg) and with 6 test formulations with decreasing in vivo dissolution rate constants versus the reference formulation (kD 8-0.25 h-1). Finally, the third aim was to determine which analyte (parent drug, first generation or second generation metabolite) was most sensitive to changes in formulation performance. The validation results showed that the concentration-time curves obtained with the simulations closely reproduced the published experimental data, confirming model performance. The parent drug (ASA) was the analyte shown to be most sensitive to the decrease in pharmaceutical quality, with the largest decrease in the Cmax and AUC ratios between test and reference formulations. Copyright © 2015 Elsevier B.V. All rights reserved.
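The sensitivity of the parent drug to dissolution rate follows from the saturable first-pass kinetics: a minimal sketch with first-order input (a proxy for the in vivo dissolution constant kD) into a single compartment with Michaelis-Menten elimination shows Cmax and AUC falling as the input slows. All parameter values below are illustrative, not those of the published ASA model.

```python
def simulate_exposure(ka, dose=1000.0, vmax=400.0, km=30.0, vd=10.0, t_end=24.0, dt=0.001):
    """First-order input (rate constant ka, a stand-in for in vivo dissolution) into a
    single compartment with Michaelis-Menten elimination; returns (Cmax, AUC)."""
    a_gut, c = dose, 0.0                       # amount in gut (mg), plasma concentration (mg/L)
    cmax = auc = 0.0
    for _ in range(int(t_end / dt)):
        absorbed = ka * a_gut * dt             # mg absorbed during this step
        a_gut -= absorbed
        c += absorbed / vd - (vmax * c / (km + c) / vd) * dt
        cmax = max(cmax, c)
        auc += c * dt
    return cmax, auc

# Fast vs. slow input, mirroring the direction of the tested kD range (values illustrative)
for ka in (8.0, 0.25):
    cmax, auc = simulate_exposure(ka)
    print(f"ka = {ka:4.2f} 1/h: Cmax = {cmax:5.1f} mg/L, AUC = {auc:6.1f} mg*h/L")
```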
James, Thomas; Wyke, Stacey; Marczylo, Tim; Collins, Samuel; Gaulton, Tom; Foxall, Kerry; Amlôt, Richard; Duarte-Davidson, Raquel
2018-01-01
Incidents involving the release of chemical agents can pose significant risks to public health. In such an event, emergency decontamination of affected casualties may need to be undertaken to reduce injury and possible loss of life. To ensure these methods are effective, human volunteer trials (HVTs) of decontamination protocols, using simulant contaminants, have been conducted. Simulants must be used to mimic the physicochemical properties of more harmful chemicals, while remaining non-toxic at the dose applied. This review focuses on studies that employed chemical warfare agent simulants in decontamination contexts, to identify those simulants most suitable for use in HVTs of emergency decontamination. Twenty-two simulants were identified, of which 17 were determined to be unsuitable for use in HVTs. The remaining simulants (n = 5) were further scrutinized for potential suitability according to toxicity, physicochemical properties and similarities to their equivalent toxic counterparts. Three simulants suitable for use in HVTs were identified: methyl salicylate (a simulant for sulphur mustard), diethyl malonate (a simulant for soman) and malathion (a simulant for VX or toxic industrial chemicals). All have been safely used in previous HVTs, and all have a range of physicochemical properties that would allow useful inference to more toxic chemicals when employed in future studies of emergency decontamination systems. © 2017 Crown Copyright. Journal of Applied Toxicology published by John Wiley & Sons, Ltd.
Pan, Albert C; Weinreich, Thomas M; Piana, Stefano; Shaw, David E
2016-03-08
Molecular dynamics (MD) simulations can describe protein motions in atomic detail, but transitions between protein conformational states sometimes take place on time scales that are infeasible or very expensive to reach by direct simulation. Enhanced sampling methods, the aim of which is to increase the sampling efficiency of MD simulations, have thus been extensively employed. The effectiveness of such methods when applied to complex biological systems like proteins, however, has been difficult to establish because even enhanced sampling simulations of such systems do not typically reach time scales at which convergence is extensive enough to reliably quantify sampling efficiency. Here, we obtain sufficiently converged simulations of three proteins to evaluate the performance of simulated tempering, a member of a widely used class of enhanced sampling methods that use elevated temperature to accelerate sampling. Simulated tempering simulations with individual lengths of up to 100 μs were compared to (previously published) conventional MD simulations with individual lengths of up to 1 ms. With two proteins, BPTI and ubiquitin, we evaluated the efficiency of sampling of conformational states near the native state, and for the third, the villin headpiece, we examined the rate of folding and unfolding. Our comparisons demonstrate that simulated tempering can consistently achieve a substantial sampling speedup of an order of magnitude or more relative to conventional MD.
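The core of simulated tempering is that the temperature index is itself a dynamical variable: after ordinary Metropolis moves in configuration space, a move from inverse temperature beta_i to a neighbouring beta_j is accepted with probability min(1, exp(-(beta_j - beta_i)*E + (w_j - w_i))), where the w are preset weights. A toy double-well sketch of that rule follows; the weights are left flat for brevity, whereas production runs would tune them toward the per-temperature free energies.

```python
import math, random

random.seed(0)

def potential(x):
    """Toy double-well energy surface with minima at x = -1 and x = +1."""
    return (x**2 - 1.0) ** 2

betas = [4.0, 2.0, 1.0, 0.5]      # inverse-temperature ladder (hottest last)
weights = [0.0, 0.0, 0.0, 0.0]    # ideally ~ per-temperature free energies; flat here

x, k = -1.0, 0
crossings, prev_side = 0, -1
for step in range(100_000):
    # ordinary Metropolis move in x at the current inverse temperature
    x_new = x + random.uniform(-0.3, 0.3)
    d_e = potential(x_new) - potential(x)
    if d_e <= 0 or random.random() < math.exp(-betas[k] * d_e):
        x = x_new
    # simulated-tempering move: propose a neighbouring temperature index
    j = k + random.choice((-1, 1))
    if 0 <= j < len(betas):
        log_acc = -(betas[j] - betas[k]) * potential(x) + (weights[j] - weights[k])
        if log_acc >= 0 or random.random() < math.exp(log_acc):
            k = j
    side = 1 if x > 0 else -1
    crossings += side != prev_side
    prev_side = side

print(f"barrier crossings in 1e5 steps (both wells visited indicates enhanced sampling): {crossings}")
```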
Shipitalo, Martin J; Malone, Robert W; Ma, Liwang; Nolan, Bernard T; Kanwar, Rameshwar S; Shaner, Dale L; Pederson, Carl H
2016-06-01
Crop residue removal for bioenergy production can alter soil hydrologic properties and the movement of agrochemicals to subsurface drains. The Root Zone Water Quality Model (RZWQM), previously calibrated using measured flow and atrazine concentrations in drainage from a 0.4 ha chisel-tilled plot, was used to investigate effects of 50 and 100% corn (Zea mays L.) stover harvest and the accompanying reductions in soil crust hydraulic conductivity and total macroporosity on transport of atrazine, metolachlor and metolachlor oxanilic acid (OXA). The model accurately simulated field-measured metolachlor transport in drainage. A 3 year simulation indicated that 50% residue removal reduced subsurface drainage by 31% and increased atrazine and metolachlor transport in drainage 4-5-fold when surface crust conductivity and macroporosity were reduced by 25%. Based on its measured sorption coefficient, approximately twofold reductions in OXA losses were simulated with residue removal. The RZWQM indicated that, if corn stover harvest reduces crust conductivity and soil macroporosity, losses of atrazine and metolachlor in subsurface drainage will increase owing to reduced sorption related to more water moving through fewer macropores. Losses of the metolachlor degradation product OXA will decrease as a result of the more rapid movement of the parent compound into the soil. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.
Virtual milk for modelling and simulation of dairy processes.
Munir, M T; Zhang, Y; Yu, W; Wilson, D I; Young, B R
2016-05-01
The modeling of dairy processing using a generic process simulator suffers from shortcomings, given that many simulators do not contain milk components in their component libraries. Recently, pseudo-milk components for a commercial process simulator were proposed for simulation and the current work extends this pseudo-milk concept by studying the effect of both total milk solids and temperature on key physical properties such as thermal conductivity, density, viscosity, and heat capacity. This paper also uses expanded fluid and power law models to predict milk viscosity over the temperature range from 4 to 75°C and develops a succinct regressed model for heat capacity as a function of temperature and fat composition. The pseudo-milk was validated by comparing the simulated and actual values of the physical properties of milk. The milk thermal conductivity, density, viscosity, and heat capacity showed differences of less than 2, 4, 3, and 1.5%, respectively, between the simulated results and actual values. This work extends the capabilities of the previously proposed pseudo-milk and of a process simulator to model dairy processes, processing different types of milk (e.g., whole milk, skim milk, and concentrated milk) with different intrinsic compositions, and to predict correct material and energy balances for dairy processes. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
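The kind of property model the pseudo-milk approach relies on can be sketched as a simple regression of viscosity against temperature. The code below fits an Arrhenius-type form as a stand-in (the paper's expanded-fluid and power-law formulations differ) to made-up data points; the numbers are illustrative, not the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative (made-up) whole-milk viscosity data: Pa.s vs. temperature in K
T = np.array([277.0, 293.0, 313.0, 333.0, 348.0])
mu = np.array([3.2e-3, 2.0e-3, 1.2e-3, 0.8e-3, 0.6e-3])

def arrhenius(T, A, Ea_over_R):
    """Stand-in temperature model mu = A * exp(Ea/(R*T)); the paper's expanded-fluid
    and power-law viscosity models use different functional forms."""
    return A * np.exp(Ea_over_R / T)

(A, Ea_over_R), _ = curve_fit(arrhenius, T, mu, p0=(1e-6, 2000.0))
print(f"A = {A:.2e} Pa.s, Ea/R = {Ea_over_R:.0f} K")
print(f"predicted viscosity at 50 C: {arrhenius(323.0, A, Ea_over_R) * 1e3:.2f} mPa.s")
```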
NASA Technical Reports Server (NTRS)
Yanosy, James L.
1988-01-01
A user's Manual for the Emulation Simulation Computer Model was published previously. The model consisted of a detailed model (emulation) of a SAWD CO2 removal subsystem which operated with much less detailed (simulation) models of a cabin, crew, and condensing and sensible heat exchangers. The purpose was to explore the utility of such an emulation/simulation combination in the design, development, and test of a piece of ARS hardware - SAWD. Extensions to this original effort are presented. The first extension is an update of the model to reflect changes in the SAWD control logic which resulted from the test. In addition, slight changes were also made to the SAWD model to permit restarting and to improve the iteration technique. The second extension is the development of simulation models for more pieces of air and water processing equipment. Models are presented for: EDC, Molecular Sieve, Bosch, Sabatier, a new condensing heat exchanger, SPE, SFWES, Catalytic Oxidizer, and multifiltration. The third extension is to create two system simulations using these models. The first system presented consists of one air and one water processing system, the second a potential Space Station air revitalization system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Starrett, C. E.; Saumon, D.
Here, we present an approximation for calculating the equation of state (EOS) of warm and hot dense matter that is built on the previously published pseudoatom molecular dynamics (PAMD) model of dense plasmas [Starrett et al., Phys. Rev. E 91, 013104 (2015)]. While the EOS calculation with PAMD was previously limited to orbital-free density functional theory (DFT), the new approximation presented here allows a Kohn-Sham DFT treatment of the electrons. The resulting EOS thus includes a quantum mechanical treatment of the electrons with a self-consistent model of the ionic structure, while remaining tractable at high temperatures. The method is validated by comparisons with pressures from ab initio simulations of Be, Al, Si, and Fe. The EOS in the Thomas-Fermi approximation shows remarkable thermodynamic consistency over a wide range of temperatures for aluminum. We also calculate the principal Hugoniots of aluminum and silicon up to 500 eV. We find that the ionic structure of the plasma has a modest effect that peaks at temperatures of a few eV and that the features arising from the electronic structure agree well with ab initio simulations.
Role of facet curvature for accurate vertebral facet load analysis.
Holzapfel, Gerhard A; Stadler, Michael
2006-06-01
The curvature of vertebral facet joints may play an important role in the study of load-bearing characteristics and clinical interventions such as graded facetectomy. In previously published finite element simulations of this procedure, the curvature was either neglected or approximated with a varying degree of accuracy. Here we study the effect of the curvature in three different load situations by using a numerical model which is able to represent the actual curvature without any loss of accuracy. The results show that previously used approximations of the curvature lead to good results in the analysis of sagittal moment/rotation. However, for sagittal shear-force/displacement and for the contact stress distribution, previous results deviate significantly from our results. These findings are supported through related convergence studies. Hence we can conclude that in order to obtain reliable results for the analysis of sagittal shear-force/displacement and the contact stress distribution in the facet joint, the curvature must not be neglected. This is of particular importance for the numerical simulation of the spine, which may lead to improved diagnostics, effective surgical planning and intervention. The proposed method may represent a more reliable basis for optimizing the biomedical engineering design for tissue engineering or, for example, for spinal implants.
Direct numerical simulation of turbulent pipe flow using the lattice Boltzmann method
NASA Astrophysics Data System (ADS)
Peng, Cheng; Geneva, Nicholas; Guo, Zhaoli; Wang, Lian-Ping
2018-03-01
In this paper, we present a first direct numerical simulation (DNS) of a turbulent pipe flow using the mesoscopic lattice Boltzmann method (LBM) on both a D3Q19 lattice grid and a D3Q27 lattice grid. DNS of turbulent pipe flows using LBM has never been reported previously, perhaps due to inaccuracy and numerical instability associated with previous implementations of LBM in the presence of a curved solid surface. In fact, it was even speculated that the D3Q19 lattice might be inappropriate as a DNS tool for turbulent pipe flows. In this paper, we show that, through careful implementation, accurate turbulent statistics can be obtained using both D3Q19 and D3Q27 lattice grids. In the simulation with the D3Q19 lattice, a few problems related to the numerical stability of the simulation are exposed. Discussions and solutions for those problems are provided. The simulation with the D3Q27 lattice, on the other hand, is found to be more stable than its D3Q19 counterpart. The resulting turbulent flow statistics at a friction Reynolds number of Reτ = 180 are compared systematically with both published experimental and other DNS results based on solving the Navier-Stokes equations. The comparisons cover the mean-flow profile, the r.m.s. velocity and vorticity profiles, the mean and r.m.s. pressure profiles, the velocity skewness and flatness, and spatial correlations and energy spectra of velocity and vorticity. Overall, we conclude that both D3Q19 and D3Q27 simulations yield accurate turbulent flow statistics. The use of the D3Q27 lattice is shown to suppress the weak secondary flow pattern in the mean flow due to numerical artifacts.
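For readers unfamiliar with the lattices mentioned above, the sketch below lists the standard D3Q19 velocity set and weights (rest particle 1/3, six face-neighbour links 1/18, twelve edge-neighbour links 1/36) together with the usual BGK equilibrium. This is textbook LBM material, not code from the study.

```python
import itertools

# Standard D3Q19 lattice: rest vector plus the 18 vectors with |c|^2 in {1, 2}.
velocities = [(0, 0, 0)] + [c for c in itertools.product((-1, 0, 1), repeat=3)
                            if 0 < sum(x * x for x in c) <= 2]
weights = [1.0 / 3.0 if sum(x * x for x in c) == 0 else
           1.0 / 18.0 if sum(x * x for x in c) == 1 else
           1.0 / 36.0 for c in velocities]

assert len(velocities) == 19 and abs(sum(weights) - 1.0) < 1e-12

def equilibrium(rho, u):
    """BGK equilibrium distribution f_i^eq for density rho and velocity u (c_s^2 = 1/3)."""
    feq = []
    for c, w in zip(velocities, weights):
        cu = sum(ci * ui for ci, ui in zip(c, u))
        usq = sum(ui * ui for ui in u)
        feq.append(w * rho * (1.0 + 3.0 * cu + 4.5 * cu * cu - 1.5 * usq))
    return feq
```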
Distributed synchronization of networked drive-response systems: A nonlinear fixed-time protocol.
Zhao, Wen; Liu, Gang; Ma, Xi; He, Bing; Dong, Yunfeng
2017-11-01
The distributed synchronization of networked drive-response systems is investigated in this paper. A novel nonlinear protocol is proposed to ensure that the tracking errors converge to zero in a fixed time. By comparison with previous synchronization methods, the present method considers more practical conditions, and the synchronization time does not depend on the initial conditions but can be pre-assigned offline according to the task assignment. Finally, the feasibility and validity of the presented protocol are illustrated by a numerical simulation. Copyright © 2017. Published by Elsevier Ltd.
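The paper's specific protocol is not reproduced in the abstract; the sketch below only illustrates the generic structure of a fixed-time control term, in which power exponents below and above one drive the error to zero within a time bound that does not grow with the initial condition. The gains and exponents are arbitrary placeholders.

```python
import numpy as np

def fixed_time_term(e, k1=2.0, k2=2.0, a=0.5, b=1.5):
    """Generic fixed-time stabilizing feedback  u = -k1*sig(e)^a - k2*sig(e)^b,
    where sig(e)^p = |e|**p * sign(e) and 0 < a < 1 < b.

    The a-term dominates near the origin and the b-term far from it, which is what
    yields a settling-time bound independent of the initial error."""
    sig = lambda x, p: np.abs(x) ** p * np.sign(x)
    return -k1 * sig(e, a) - k2 * sig(e, b)

# Toy scalar example: integrate de/dt = fixed_time_term(e) from a large initial error
e, dt, t = 100.0, 1e-3, 0.0
while abs(e) > 1e-3:
    e += dt * fixed_time_term(e)
    t += dt
print("error below 1e-3 after ~%.2f s" % t)
```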
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dragnea, Bogdan G.
Achievements which resulted from previous DOE funding include: templated virus-like particle assembly thermodynamics, development of single particle photothermal absorption spectroscopy and dark-field spectroscopy instrumentation for the measurement of optical properties of virus-like nanoparticles, electromagnetic simulations of coupled nanoparticle cluster systems, virus contact mechanics, energy transfer and fluorescence quenching in multichromophore systems supported on biomolecular templates, and photophysical work on virus-aptamer systems. A current total of eight published research articles and a book chapter acknowledge DOE support for the period 2013-2016.
Measurement techniques for determining the static stiffness of foundations for machine tools
NASA Astrophysics Data System (ADS)
Myers, A.; Barrans, S. M.; Ford, D. G.
2005-01-01
The paper presents a novel technique for accurately measuring the static stiffness of a machine tool concrete foundation using various items of metrology equipment. The foundation was loaded in a number of different ways which simulated the erection of the machine, traversing of the axes and loading of the heaviest component. The results were compared with the stiffness tolerances specified for the foundation which were deemed necessary in order that the machine alignments could be achieved. This paper is a continuation of previously published research on a finite element analysis (FEA) of the foundation.
Molecular dynamics equation of state for nonpolar geochemical fluids
NASA Astrophysics Data System (ADS)
Duan, Zhenhao; Møller, Nancy; Weare, John H.
1995-04-01
Remarkable agreement between molecular dynamics simulations and experimental measurements has been obtained for methane for a large range of intensive variables, including those corresponding to liquid/vapor coexistence. Using a simple Lennard-Jones potential, the simulations not only predict the PVT properties up to 2000°C and 20,000 bar with errors less than 1.5%, but also reproduce phase equilibria well below 0°C with accuracy close to experiment. This two-parameter molecular dynamics equation of state (EOS) is accurate for a much larger range of temperatures and pressures than our previously published EOS with a total of fifteen parameters or that of Angus et al. (1978) with thirty-three parameters. By simple scaling, it is possible to predict PVT and phase equilibria of other nonpolar and weakly polar species.
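A minimal sketch of the two-parameter Lennard-Jones pair potential underlying such simulations follows; the epsilon and sigma defaults below are generic reduced units, not the methane parameters fitted in the paper.

```python
def lennard_jones(r, epsilon=1.0, sigma=1.0):
    """Lennard-Jones 12-6 pair potential U(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6).

    epsilon sets the well depth and sigma the distance at which U crosses zero;
    these two parameters are the only inputs to the MD equation of state discussed above."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

# The minimum of the well sits at r = 2**(1/6) * sigma, where U = -epsilon.
print(lennard_jones(2 ** (1 / 6)))  # -> -1.0
```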
Simulations of the Fuel Economy and Emissions of Hybrid Transit Buses over Planned Local Routes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gao, Zhiming; LaClair, Tim J; Daw, C Stuart
2014-01-01
We present simulated fuel economy and emissions of city transit buses powered by conventional diesel engines and diesel-hybrid electric powertrains of varying size. Six representative city drive cycles were included in the study. In addition, we included previously published aftertreatment device models for control of CO, HC, NOx, and particulate matter (PM) emissions. Our results reveal that bus hybridization can significantly enhance fuel economy by reducing engine idling time, reducing demands for accessory loads, exploiting regenerative braking, and shifting engine operation to speeds and loads with higher fuel efficiency. Increased hybridization also tends to monotonically reduce engine-out emissions, but trends in the tailpipe (post-aftertreatment) emissions involve more complex interactions that significantly depend on motor size and drive cycle details.
Shape optimization of road tunnel cross-section by simulated annealing
NASA Astrophysics Data System (ADS)
Sobótka, Maciej; Pachnicz, Michał
2016-06-01
The paper concerns shape optimization of a tunnel excavation cross-section. The study incorporates the simulated annealing (SA) optimization procedure. The form of the cost function derives from the energetic optimality condition formulated in the authors' previous papers. The utilized algorithm takes advantage of the optimization procedure already published by the authors. Unlike other approaches presented in the literature, the one introduced in this paper takes into consideration the practical requirement of preserving a fixed clearance gauge. Itasca FLAC software is utilized in the numerical examples. The optimal excavation shapes are determined for five different in situ stress ratios. This factor significantly affects the optimal topology of the excavation. The resulting shapes are elongated in the direction of the greater principal stress. Moreover, the obtained optimal shapes have smooth contours circumscribing the gauge.
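A schematic simulated annealing loop of the kind used for such shape optimization is sketched below; the cost function, the shape perturbation, and the clearance-gauge test are stand-ins for the paper's energetic cost and FLAC-based evaluation.

```python
import math
import random

def simulated_annealing(shape, cost, perturb, satisfies_gauge,
                        T0=1.0, cooling=0.999, steps=5000):
    """Generic SA driver: accept worse shapes with probability exp(-dC/T) and
    reject outright any candidate that violates the fixed clearance gauge."""
    current, c_current = shape, cost(shape)
    best, c_best = current, c_current
    for step in range(steps):
        T = T0 * cooling ** step                 # geometric cooling schedule
        candidate = perturb(current)
        if not satisfies_gauge(candidate):       # hard constraint: keep the gauge clear
            continue
        c_cand = cost(candidate)
        if c_cand < c_current or random.random() < math.exp((c_current - c_cand) / T):
            current, c_current = candidate, c_cand
            if c_current < c_best:
                best, c_best = current, c_current
    return best, c_best
```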
ERIC Educational Resources Information Center
Gohring, Ralph J.
1979-01-01
A case study describing the process involved in publishing a personally developed simulation game including finding a publisher, obtaining a copyright, negotiating the contract, controlling front-end costs, marketing the product, and receiving feedback from users. (CMV)
Barnett, Adrian G; Zardo, Pauline; Graves, Nicholas
2018-01-01
The "publish or perish" incentive drives many researchers to increase the quantity of their papers at the cost of quality. Lowering quality increases the number of false positive errors which is a key cause of the reproducibility crisis. We adapted a previously published simulation of the research world where labs that produce many papers are more likely to have "child" labs that inherit their characteristics. This selection creates a competitive spiral that favours quantity over quality. To try to halt the competitive spiral we added random audits that could detect and remove labs with a high proportion of false positives, and also improved the behaviour of "child" and "parent" labs who increased their effort and so lowered their probability of making a false positive error. Without auditing, only 0.2% of simulations did not experience the competitive spiral, defined by a convergence to the highest possible false positive probability. Auditing 1.35% of papers avoided the competitive spiral in 71% of simulations, and auditing 1.94% of papers in 95% of simulations. Audits worked best when they were only applied to established labs with 50 or more papers compared with labs with 25 or more papers. Adding a ±20% random error to the number of false positives to simulate peer reviewer error did not reduce the audits' efficacy. The main benefit of the audits was via the increase in effort in "child" and "parent" labs. Audits improved the literature by reducing the number of false positives from 30.2 per 100 papers to 12.3 per 100 papers. Auditing 1.94% of papers would cost an estimated $15.9 million per year if applied to papers produced by National Institutes of Health funding. Our simulation greatly simplifies the research world and there are many unanswered questions about if and how audits would work that can only be addressed by a trial of an audit.
New Radiation Dosimetry Estimates for [18F]FLT based on Voxelized Phantoms.
Mendes, B M; Ferreira, A V; Nascimento, L T C; Ferreira, S M Z M D; Silveira, M B; Silva, J B
2018-04-25
3'-Deoxy-3'-[18F]fluorothymidine, or [18F]FLT, is a positron emission tomography (PET) tracer used in clinical studies for noninvasive assessment of proliferation activity in several types of cancer. Although the use of this PET tracer is expanding, to date, few studies concerning its dosimetry have been published. In this work, new [18F]FLT dosimetry estimates are determined for humans and mice using Monte Carlo simulations. Modern voxelized male and female phantoms and [18F]FLT biokinetic data, both published by the ICRP, were used for simulations of human cases. For most human organs/tissues the absorbed doses were higher than those reported in ICRP Publication 128. An effective dose of 1.70E-02 mSv/MBq to the whole body was determined, which is 13.5% higher than the ICRP reference value. These new human dosimetry estimates obtained using more realistic human phantoms represent an advance in the knowledge of [18F]FLT dosimetry. In addition, mouse biokinetic data were obtained experimentally. These data and a previously developed voxelized mouse phantom were used for simulations of animal cases. Concerning animal dosimetry, absorbed doses for organs/tissues ranged from 4.47 ± 0.75 to 155.74 ± 59.36 mGy/MBq. The obtained set of organ/tissue radiation doses for healthy Swiss mice is a useful tool for application in animal experiment design.
Buffi, James H.; Werner, Katie; Kepple, Tom; Murray, Wendy M.
2014-01-01
Baseball pitching imposes a dangerous valgus load on the elbow that puts the joint at severe risk for injury. The goal of this study was to develop a musculoskeletal modeling approach to enable evaluation of muscle-tendon contributions to mitigating elbow injury risk in pitching. We implemented a forward dynamic simulation framework that used a scaled biomechanical model to reproduce a pitching motion recorded from a high school pitcher. The medial elbow muscles generated substantial, protective, varus elbow moments in our simulations. For our subject, the triceps generated large varus moments at the time of peak valgus loading; varus moments generated by the flexor digitorum superficialis were larger, but occurred later in the motion. Increasing muscle-tendon force output, either by augmenting parameters associated with strength and power or by increasing activation levels, decreased the load on the ulnar collateral ligament. Published methods have not previously quantified the biomechanics of elbow muscles during pitching. This simulation study represents a critical advancement in the study of baseball pitching and highlights the utility of simulation techniques in the study of this difficult problem. PMID:25281409
Molecular dynamics studies of a hexameric purine nucleoside phosphorylase.
Zanchi, Fernando Berton; Caceres, Rafael Andrade; Stabeli, Rodrigo Guerino; de Azevedo, Walter Filgueira
2010-03-01
Purine nucleoside phosphorylase (PNP) (EC.2.4.2.1) is an enzyme that catalyzes the cleavage of N-ribosidic bonds of the purine ribonucleosides and 2-deoxyribonucleosides in the presence of inorganic orthophosphate as a second substrate. This enzyme is involved in the purine-salvage pathway and has been proposed as a promising target for the design and development of antimalarial and antibacterial drugs. Recent elucidation of the three-dimensional structure of PNP by X-ray protein crystallography left open the possibility of structure-based virtual screening initiatives in combination with molecular dynamics simulations focused on identification of potential new antimalarial drugs. Most of the previously published molecular dynamics simulations of PNP were carried out on human PNP, a trimeric PNP. The present article describes for the first time molecular dynamics simulations of hexameric PNP from Plasmodium falciparum (PfPNP). Two systems were simulated in the present work, PfPNP in ligand-free form, and in complex with immucillin and sulfate. Based on the dynamical behavior of both systems, the main results related to structural stability and protein-drug interactions are discussed.
Mittal, Jeetain; Best, Robert B
2010-08-04
The ability to fold proteins on a computer has highlighted the fact that existing force fields tend to be biased toward a particular type of secondary structure. Consequently, force fields for folding simulations are often chosen according to the native structure, implying that they are not truly "transferable." Here we show that, while the AMBER ff03 potential is known to favor helical structures, a simple correction to the backbone potential (ff03*) results in an unbiased energy function. We take as examples the 35-residue alpha-helical Villin HP35 and 37-residue beta-sheet Pin WW domains, which had not previously been folded with the same force field. Starting from unfolded configurations, simulations of both proteins in Amber ff03* in explicit solvent fold to within 2.0 Å RMSD of the experimental structures. This demonstrates that a simple backbone correction results in a more transferable force field, an important requirement if simulations are to be used to interpret folding mechanism. © 2010 Biophysical Society. Published by Elsevier Inc. All rights reserved.
Mariotti, Erika; Veronese, Mattia; Dunn, Joel T; Southworth, Richard; Eykyn, Thomas R
2015-06-01
To assess the feasibility of using a hybrid Maximum-Entropy/Nonlinear Least Squares (MEM/NLS) method for analyzing the kinetics of hyperpolarized dynamic data with minimum a priori knowledge. A continuous distribution of rates obtained through the Laplace inversion of the data is used as a constraint on the NLS fitting to derive a discrete spectrum of rates. Performance of the MEM/NLS algorithm was assessed through Monte Carlo simulations and validated by fitting the longitudinal relaxation time curves of hyperpolarized [1-13C]pyruvate acquired at 9.4 Tesla and at three different flip angles. The method was further used to assess the kinetics of hyperpolarized pyruvate-lactate exchange acquired in vitro in whole blood and to re-analyze the previously published in vitro reaction of hyperpolarized 15N-choline with choline kinase. The MEM/NLS method was found to be adequate for the kinetic characterization of hyperpolarized in vitro time-series. Additional insights were obtained from experimental data in blood as well as from previously published 15N-choline experimental data. The proposed method informs on the compartmental model that best approximates the biological system observed using hyperpolarized 13C MR, especially when the metabolic pathway assessed is complex or a new hyperpolarized probe is used. © 2014 The Authors. Magnetic Resonance in Medicine published by Wiley Periodicals, Inc.
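As a simplified illustration of the NLS half of such an analysis, the sketch below fits a two-rate exponential model to a synthetic decay curve with scipy.optimize.curve_fit; the MEM step that constrains the rate spectrum, and any real pyruvate-lactate kinetics, are not reproduced here, and all numbers are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_rate_model(t, a1, k1, a2, k2):
    """Discrete two-component rate spectrum: y(t) = a1*exp(-k1*t) + a2*exp(-k2*t)."""
    return a1 * np.exp(-k1 * t) + a2 * np.exp(-k2 * t)

# Synthetic "hyperpolarized" signal with two decay rates plus noise (illustrative only)
rng = np.random.default_rng(0)
t = np.linspace(0, 60, 120)             # seconds
y = two_rate_model(t, 1.0, 1 / 30.0, 0.3, 1 / 5.0) + rng.normal(0, 0.01, t.size)

p0 = [1.0, 0.05, 0.5, 0.5]              # initial guesses (in practice taken from the MEM spectrum)
popt, pcov = curve_fit(two_rate_model, t, y, p0=p0, bounds=(0, np.inf))
print("fitted amplitudes and rates:", popt)
```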
NASA Astrophysics Data System (ADS)
Pereira, A. S. N.; de Streel, G.; Planes, N.; Haond, M.; Giacomini, R.; Flandre, D.; Kilchytska, V.
2017-02-01
The Drain Induced Barrier Lowering (DIBL) behavior in Ultra-Thin Body and Buried oxide (UTBB) transistors is investigated in detail in the temperature range up to 150 °C, for the first time to the best of our knowledge. The analysis is based on experimental data, physical device simulation, compact model (SPICE) simulation and previously published models. Contrary to the MASTAR prediction, experiments reveal a DIBL increase with temperature. Physical device simulations of different thin-film fully-depleted (FD) devices outline the generality of such behavior. SPICE simulations, with the UTSOI DK2.4 model, only partially adhere to experimental trends. Several analytic models available in the literature are assessed for DIBL vs. temperature prediction. Although being the closest to experiments, Fasarakis' model overestimates the DIBL(T) dependence for the shortest devices and underestimates it for upsized gate lengths frequently used in ultra-low-voltage (ULV) applications. This model is improved in our work by introducing a temperature-dependent inversion charge at threshold. The improved model shows very good agreement with experimental data, with high gain in precision for the gate lengths under test.
Menegakis, Apostolos; De Colle, Chiara; Yaromina, Ala; Hennenlotter, Joerg; Stenzl, Arnulf; Scharpf, Marcus; Fend, Falko; Noell, Susan; Tatagiba, Marcos; Brucker, Sara; Wallwiener, Diethelm; Boeke, Simon; Ricardi, Umberto; Baumann, Michael; Zips, Daniel
2015-09-01
To apply our previously published residual ex vivo γH2AX foci method to patient-derived tumour specimens covering a spectrum of tumour-types with known differences in radiation response. In addition, the data were used to simulate different experimental scenarios to simplify the method. Evaluation of residual γH2AX foci in well-oxygenated tumour areas of ex vivo irradiated patient-derived tumour specimens with graded single doses was performed. Immediately after surgical resection, the samples were cultivated for 24h in culture medium prior to irradiation and fixed 24h post-irradiation for γH2AX foci evaluation. Specimens from a total of 25 patients (including 7 previously published) with 10 different tumour types were included. Linear dose response of residual γH2AX foci was observed in all specimens with highly variable slopes among different tumour types ranging from 0.69 (95% CI: 1.14-0.24) to 3.26 (95% CI: 4.13-2.62) for chondrosarcomas (radioresistant) and classical seminomas (radiosensitive) respectively. Simulations suggest that omitting dose levels might simplify the assay without compromising robustness. Here we confirm clinical feasibility of the assay. The slopes of the residual foci number are well in line with the expected differences in radio-responsiveness of different tumour types implying that intrinsic radiation sensitivity contributes to tumour radiation response. Thus, this assay has a promising potential for individualized radiation therapy and prospective validation is warranted. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
QFASAR: Quantitative fatty acid signature analysis with R
Bromaghin, Jeffrey F.
2017-01-01
Knowledge of predator diets provides essential insights into their ecology, yet diet estimation is challenging and remains an active area of research. Quantitative fatty acid signature analysis (QFASA) is a popular method of estimating diet composition that continues to be investigated and extended. However, software to implement QFASA has only recently become publicly available. I summarize a new R package, qfasar, for diet estimation using QFASA methods. The package also provides functionality to evaluate and potentially improve the performance of a library of prey signature data, compute goodness-of-fit diagnostics, and support simulation-based research. Several procedures in the package have not previously been published. qfasar makes traditional and recently published QFASA diet estimation methods accessible to ecologists for the first time. Use of the package is illustrated with signature data from Chukchi Sea polar bears and potential prey species.
NASA Astrophysics Data System (ADS)
O'Connell, D.; Ruan, D.; Thomas, D. H.; Dou, T. H.; Lewis, J. H.; Santhanam, A.; Lee, P.; Low, D. A.
2018-02-01
Breathing motion modeling requires observation of tissues at sufficiently distinct respiratory states for proper 4D characterization. This work proposes a method to improve sampling of the breathing cycle with limited imaging dose. We designed and tested a prospective free-breathing acquisition protocol with a simulation using datasets from five patients imaged with a model-based 4DCT technique. Each dataset contained 25 free-breathing fast helical CT scans with simultaneous breathing surrogate measurements. Tissue displacements were measured using deformable image registration. A correspondence model related tissue displacement to the surrogate. Model residual was computed by comparing predicted displacements to image registration results. To determine a stopping criteria for the prospective protocol, i.e. when the breathing cycle had been sufficiently sampled, subsets of N scans where 5 ⩽ N ⩽ 9 were used to fit reduced models for each patient. A previously published metric was employed to describe the phase coverage, or ‘spread’, of the respiratory trajectories of each subset. Minimum phase coverage necessary to achieve mean model residual within 0.5 mm of the full 25-scan model was determined and used as the stopping criteria. Using the patient breathing traces, a prospective acquisition protocol was simulated. In all patients, phase coverage greater than the threshold necessary for model accuracy within 0.5 mm of the 25 scan model was achieved in six or fewer scans. The prospectively selected respiratory trajectories ranked in the (97.5 ± 4.2)th percentile among subsets of the originally sampled scans on average. Simulation results suggest that the proposed prospective method provides an effective means to sample the breathing cycle with limited free-breathing scans. One application of the method is to reduce the imaging dose of a previously published model-based 4DCT protocol to 25% of its original value while achieving mean model residual within 0.5 mm.
NASA Astrophysics Data System (ADS)
Wu, Bin; Kerkeni, Boutheïna; Egami, Takeshi; Do, Changwoo; Liu, Yun; Wang, Yongmei; Porcar, Lionel; Hong, Kunlun; Smith, Sean C.; Liu, Emily L.; Smith, Gregory S.; Chen, Wei-Ren
2012-04-01
Based on atomistic molecular dynamics (MD) simulations, the small angle neutron scattering (SANS) intensity behavior of a single generation-4 polyelectrolyte polyamidoamine starburst dendrimer is investigated at different levels of molecular protonation. The SANS form factor, P(Q), and Debye autocorrelation function, γ(r), are calculated from the equilibrium MD trajectory based on a mathematical approach proposed in this work. The consistency found in comparison against previously published experimental findings (W.-R. Chen, L. Porcar, Y. Liu, P. D. Butler, and L. J. Magid, Macromolecules 40, 5887 (2007)) leads to a link between the neutron scattering experiment and MD computation, and fresh perspectives. The simulations enable scattering calculations of not only the hydrocarbons but also the contribution from the scattering length density fluctuations caused by structured, confined water within the dendrimer. Based on our computational results, we explore the validity of using radius of gyration RG for microstructure characterization of a polyelectrolyte dendrimer from the scattering perspective.
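A minimal sketch of computing a form factor from particle coordinates via the Debye equation, the standard route from an MD snapshot to P(Q), follows. Scattering lengths are set to unity and only a single frame is used, unlike the trajectory-averaged, contrast-aware calculation in the paper.

```python
import numpy as np

def debye_form_factor(coords, q_values, b=None):
    """P(Q) = sum_ij b_i b_j sin(Q r_ij)/(Q r_ij) / (sum_i b_i)^2  (Debye equation).

    coords  : (N, 3) array of scattering-site positions from one MD frame
    q_values: 1D array of momentum-transfer magnitudes
    b       : optional scattering lengths (defaults to 1 for every site)."""
    coords = np.asarray(coords, dtype=float)
    n = len(coords)
    b = np.ones(n) if b is None else np.asarray(b, dtype=float)
    rij = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    pq = []
    for q in q_values:
        x = q * rij
        # sin(x)/x with the x -> 0 limit handled explicitly (self terms on the diagonal)
        kernel = np.where(x > 0, np.sin(x) / np.where(x > 0, x, 1.0), 1.0)
        pq.append((b[:, None] * b[None, :] * kernel).sum())
    return np.array(pq) / b.sum() ** 2

# Example: P(Q) of a random blob of 200 point scatterers
rng = np.random.default_rng(1)
print(debye_form_factor(rng.normal(size=(200, 3)), np.linspace(0.1, 2.0, 5)))
```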
Bistable Behavior of the Lac Operon in E. Coli When Induced with a Mixture of Lactose and TMG
Díaz-Hernández, Orlando; Santillán, Moisés
2010-01-01
In this work we investigate multistability in the lac operon of Escherichia coli when it is induced by a mixture of lactose and the non-metabolizable thiomethyl galactoside (TMG). In accordance with previously published experimental results and computer simulations, our simulations predict that: (1) when the system is induced by TMG, the system shows a discernible bistable behavior while, (2) when the system is induced by lactose, bistability does not disappear but excessively high concentrations of lactose would be required to observe it. Finally, our simulation results predict that when a mixture of lactose and TMG is used, the bistability region in the extracellular glucose concentration vs. extracellular lactose concentration parameter space changes in such a way that the model predictions regarding bistability could be tested experimentally. These experiments could help to solve a recent controversy regarding the existence of bistability in the lac operon under natural conditions. PMID:21423364
Simulation of Ametropic Human Eyes
NASA Astrophysics Data System (ADS)
Tan, Bo; Chen, Ying-Ling; Lewis, James W. L.
2004-11-01
The computational simulation of the performance of human eyes is complex because the optical parameters of the eye depend on many factors, including age, gender, race, and refractive status (accommodation and near- or far-sightedness). This task is made more difficult by the inadequacy of available population statistics for these parameters. Previously, we simulated ametropic (near- or far-sighted) eyes using three independent variables: the axial length of the eye, the corneal surface curvature, and the intraocular refractive index gradient. The prescription for the correction of an ametropic eye is determined by its second-order coefficients of the wavefront aberrations. These corrections are typically achieved using contact lenses, spectacle lenses, or laser surgery (LASIK). However, the higher order aberrations, which are not corrected and are likely complicated or enhanced by the lower-order correction, could be important for visual performance in a darkened environment. In this paper, we investigate the higher order wavefront aberrations of synthetic ametropic eyes and compare results with measured data published in the past decade. The behavior of three types of ametropes is discussed.
A Modal Model to Simulate Typical Structural Dynamic Nonlinearity [PowerPoint]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mayes, Randall L.; Pacini, Benjamin Robert; Roettgen, Dan
2016-01-01
Some initial investigations have been published which simulate nonlinear response with almost traditional modal models: instead of connecting the modal mass to ground through the traditional spring and damper, a nonlinear Iwan element was added. This assumes that the mode shapes do not change with amplitude and there are no interactions between modal degrees of freedom. This work expands on these previous studies. An impact experiment is performed on a structure which exhibits typical structural dynamic nonlinear response, i.e. weak frequency dependence and strong damping dependence on the amplitude of vibration. Low level modal test results, used in combination with high level impacts, are processed using various combinations of modal filtering, the Hilbert Transform and band-pass filtering to develop response data that are then fit with various nonlinear elements to create a nonlinear pseudo-modal model. Simulations of forced response are compared with high level experimental data for various nonlinear element assumptions.
A Modal Model to Simulate Typical Structural Dynamic Nonlinearity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pacini, Benjamin Robert; Mayes, Randall L.; Roettgen, Daniel R
2015-10-01
Some initial investigations have been published which simulate nonlinear response with almost traditional modal models: instead of connecting the modal mass to ground through the traditional spring and damper, a nonlinear Iwan element was added. This assumes that the mode shapes do not change with amplitude and there are no interactions between modal degrees of freedom. This work expands on these previous studies. An impact experiment is performed on a structure which exhibits typical structural dynamic nonlinear response, i.e. weak frequency dependence and strong damping dependence on the amplitude of vibration. Low level modal test results, used in combination with high level impacts, are processed using various combinations of modal filtering, the Hilbert Transform and band-pass filtering to develop response data that are then fit with various nonlinear elements to create a nonlinear pseudo-modal model. Simulations of forced response are compared with high level experimental data for various nonlinear element assumptions.
A unified relation for the solid-liquid interface free energy of pure FCC, BCC, and HCP metals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilson, S. R.; Mendelev, M. I., E-mail: mendelev@ameslab.gov
2016-04-14
We study correlations between the solid-liquid interface (SLI) free energy and bulk material properties (melting temperature, latent heat, and liquid structure) through the determination of SLI free energies for bcc and hcp metals from molecular dynamics (MD) simulation. Values obtained for the bcc metals in this study were compared to values predicted by the Turnbull, Laird, and Ewing relations on the basis of previously published MD simulation data. We found that of these three empirical relations, the Ewing relation better describes the MD simulation data. Moreover, whereas the original Ewing relation contains two constants for a particular crystal structure, we found that the first coefficient in the Ewing relation does not depend on crystal structure, taking a common value for all three phases, at least for the class of the systems described by embedded-atom method potentials (which are considered to provide a reasonable approximation for metals).
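For orientation, the classic Turnbull estimate mentioned above relates the interface free energy to the latent heat and molar volume, gamma_SL ≈ C_T * ΔH_m / (N_A^(1/3) * V_m^(2/3)). The sketch below uses an approximate Turnbull coefficient and rough handbook numbers for aluminum as illustrative inputs, not values taken from the paper.

```python
def turnbull_gamma(delta_H_m, V_m, C_T=0.45):
    """Turnbull estimate of the solid-liquid interface free energy (J/m^2).

    delta_H_m : latent heat of fusion per mole (J/mol)
    V_m       : molar volume of the solid (m^3/mol)
    C_T       : Turnbull coefficient, roughly 0.45 for metals (illustrative value)."""
    N_A = 6.02214076e23
    return C_T * delta_H_m / (N_A ** (1.0 / 3.0) * V_m ** (2.0 / 3.0))

# Rough aluminum example (approximate handbook numbers, for illustration only):
# latent heat ~10.7 kJ/mol, molar volume ~1.0e-5 m^3/mol
print(turnbull_gamma(10.7e3, 1.0e-5))   # on the order of 0.1 J/m^2
```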
Tunneling ionization and Wigner transform diagnostics in OSIRIS
NASA Astrophysics Data System (ADS)
Martins, S.; Fonseca, R. A.; Silva, L. O.; Deng, S.; Katsouleas, T.; Tsung, F.; Mori, W. B.
2004-11-01
We describe the ionization module implemented in the PIC code OSIRIS [1]. Benchmarks against previously published tunnel ionization results were made. Our ionization module works in 1D, 2D and 3D simulations with barrier suppression ionization or the ADK ionization model, and allows for moving ions. Several illustrative 3D numerical simulations were performed, namely of the propagation of a SLAC beam in a Li gas cell, for the parameters of [2]. We compare the performance of OSIRIS with/without the ionization module, concluding that much less simulation time is usually required when using the ionization module. A novel diagnostic of the electric field, the Wigner transform, is implemented; it provides information on the local spectral content of the field. This diagnostic is applied to the analysis of the chirp induced in an ionizing laser pulse. [1] R. A. Fonseca et al., LNCS 2331, 342-351, (Springer, Heidelberg, 2002). [2] S. Deng et al., Phys. Rev. E 68, 047401 (2003).
Ghiglietti, Andrea; Scarale, Maria Giovanna; Miceli, Rosalba; Ieva, Francesca; Mariani, Luigi; Gavazzi, Cecilia; Paganoni, Anna Maria; Edefonti, Valeria
2018-03-22
Recently, response-adaptive designs have been proposed in randomized clinical trials to achieve ethical and/or cost advantages by using sequential accrual information collected during the trial to dynamically update the probabilities of treatment assignments. In this context, urn models, where the probability of assigning patients to treatments is interpreted as the proportion of balls of different colors available in a virtual urn, have been used as response-adaptive randomization rules. We propose the use of Randomly Reinforced Urn (RRU) models in a simulation study based on a published randomized clinical trial on the efficacy of home enteral nutrition in cancer patients after major gastrointestinal surgery. We compare results obtained with the RRU design to those previously published for the non-adaptive approach. We also provide code written in the R software to implement the RRU design in practice. In detail, we simulate 10,000 trials based on the RRU model in three set-ups of different total sample sizes. We report information on the number of patients allocated to the inferior treatment and on the empirical power of the t-test for the treatment coefficient in the ANOVA model. We carry out a sensitivity analysis to assess the effect of different urn compositions. For each sample size, in approximately 75% of the simulation runs, the number of patients allocated to the inferior treatment by the RRU design is lower, as compared to the non-adaptive design. The empirical power of the t-test for the treatment effect is similar in the two designs.
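A schematic sketch of a randomly reinforced urn allocation of the type described above: draw a ball to assign a treatment, observe a response, and reinforce the urn with mass of the drawn colour in proportion to the response. The response distributions and reinforcement scale are invented for illustration and do not reproduce the trial simulated in the paper.

```python
import random

def rru_trial(n_patients=100, urn=(1.0, 1.0), response=None, scale=1.0, seed=2):
    """Randomly Reinforced Urn allocation for a two-arm trial (illustrative sketch).

    urn      : initial 'ball mass' for arms 0 and 1
    response : response(arm) -> nonnegative outcome; the better arm gets reinforced more."""
    random.seed(seed)
    if response is None:
        response = lambda arm: random.gauss(1.0 if arm == 0 else 1.3, 0.3)  # arm 1 superior
    urn = list(urn)
    assignments, outcomes = [], []
    for _ in range(n_patients):
        p1 = urn[1] / (urn[0] + urn[1])          # probability of assigning arm 1
        arm = 1 if random.random() < p1 else 0
        y = max(response(arm), 0.0)
        urn[arm] += scale * y                    # reinforce the drawn colour by the response
        assignments.append(arm)
        outcomes.append(y)
    return assignments, outcomes, urn

arms, ys, final_urn = rru_trial()
print("patients on superior arm:", sum(arms), "final urn composition:", final_urn)
```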
Why Does a Method That Fails Continue To Be Used: The Answer
Templeton, Alan R.
2009-01-01
It has been claimed that hundreds of researchers use nested clade phylogeographic analysis (NCPA) based on what the method promises rather than requiring objective validation of the method. The supposed failure of NCPA is based upon the argument that validating it by using positive controls ignored type I error, and that computer simulations have shown a high type I error. The first argument is factually incorrect: the previously published validation analysis fully accounted for both type I and type II errors. The simulations that indicate a 75% type I error rate have serious flaws and only evaluate outdated versions of NCPA. These outdated type I error rates fall precipitously when the 2003 version of single locus NCPA is used or when the 2002 multi-locus version of NCPA is used. It is shown that the treewise type I errors in single-locus NCPA can be corrected to the desired nominal level by a simple statistical procedure, and that multilocus NCPA reconstructs a simulated scenario used to discredit NCPA with 100% accuracy. Hence, NCPA is not a failed method at all, but rather has been validated both by actual data and by simulated data in a manner that satisfies the published criteria given by its critics. The critics have come to different conclusions because they have focused on the pre-2002 versions of NCPA and have failed to take into account the extensive developments in NCPA since 2002. Hence, researchers can choose to use NCPA based upon objective critical validation that shows that NCPA delivers what it promises. PMID:19335340
Development of a Dependency Theory Toolbox for Database Design.
1987-12-01
Much of the theory used to design and study relational databases exists in the form of published algorithms and theorems. However, hand simulating these algorithms can be a tedious and error prone chore. Therefore, a toolbox of algorithms and theorems ...
Historian: accurate reconstruction of ancestral sequences and evolutionary rates.
Holmes, Ian H
2017-04-15
Reconstruction of ancestral sequence histories, and estimation of parameters like indel rates, are improved by using explicit evolutionary models and summing over uncertain alignments. The previous best tool for this purpose (according to simulation benchmarks) was ProtPal, but this tool was too slow for practical use. Historian combines an efficient reimplementation of the ProtPal algorithm with performance-improving heuristics from other alignment tools. Simulation results on fidelity of rate estimation via ancestral reconstruction, along with evaluations on the structurally informed alignment dataset BAliBase 3.0, recommend Historian over other alignment tools for evolutionary applications. Historian is available at https://github.com/evoldoers/historian under the Creative Commons Attribution 3.0 US license. ihholmes+historian@gmail.com. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
An updated model of induced airflow in the unsaturated zone
Baehr, Arthur L.; Joss, Craig J.
1995-01-01
Simulation of induced movement of air in the unsaturated zone provides a method to determine permeability and to design vapor extraction remediation systems. A previously published solution to the airflow equation for the case in which the unsaturated zone is separated from the atmosphere by a layer of lower permeability (such as a clay layer) has been superseded. The new solution simulates airflow through the layer of lower permeability more rigorously by defining the leakage in terms of the upper boundary condition rather than by adding a leakage term to the governing airflow equation. This note presents the derivation of the new solution. Formulas for steady state pressure, specific discharge, and mass flow in the domain are obtained for the new model and for the case in which the unsaturated zone is in direct contact with the atmosphere.
Monte Carlo simulations of nematic and chiral nematic shells
NASA Astrophysics Data System (ADS)
Wand, Charlie R.; Bates, Martin A.
2015-01-01
We present a systematic Monte Carlo simulation study of thin nematic and cholesteric shells with planar anchoring using an off-lattice model. The results obtained using the simple model correspond with previously published results for lattice-based systems, with the number, type, and position of defects observed dependent on the shell thickness with four half-strength defects in a tetrahedral arrangement found in very thin shells and a pair of defects in a bipolar (boojum) configuration observed in thicker shells. A third intermediate defect configuration is occasionally observed for intermediate thickness shells, which is stabilized in noncentrosymmetric shells of nonuniform thickness. Chiral nematic (cholesteric) shells are investigated by including a chiral term in the potential. Decreasing the pitch of the chiral nematic leads to a twisted bipolar (chiral boojum) configuration with the director twist increasing from the inner to the outer surface.
Temperature dependence of carrier capture by defects in gallium arsenide
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wampler, William R.; Modine, Normand A.
2015-08-01
This report examines the temperature dependence of the capture rate of carriers by defects in gallium arsenide and compares two previously published theoretical treatments of this based on multiphonon emission (MPE). The objective is to reduce uncertainty in atomistic simulations of gain degradation in III-V HBTs from neutron irradiation. A major source of uncertainty in those simulations is poor knowledge of carrier capture rates, whose values can differ by several orders of magnitude between various defect types. Most of this variation is due to different dependence on temperature, which is closely related to the relaxation of the defect structure that occurs as a result of the change in charge state of the defect. The uncertainty in capture rate can therefore be greatly reduced by better knowledge of the defect relaxation.
NASA Astrophysics Data System (ADS)
Linseis, V.; Völklein, F.; Reith, H.; Woias, P.; Nielsch, K.
2018-06-01
An analytical study has been performed on the measurement capabilities of a 100-nm thin suspended membrane setup for in-plane thermal conductivity measurements of thin film samples using the 3ω measurement technique, utilizing a COMSOL Multiphysics simulation. The maximum measurement range under the given boundary conditions has been studied. Three different exemplary sample materials, with thicknesses from the nanometer to the micrometer range and thermal conductivities from 0.4 W/mK up to 100 W/mK, have been investigated as showcase studies. The results of the simulations have been compared to a previously published evaluation model in order to determine the deviation between the two and thereby the measurement limit. As thermal transport properties are temperature dependent, all calculations refer to constant room temperature conditions.
Alignment of cell division axes in directed epithelial cell migration
NASA Astrophysics Data System (ADS)
Marel, Anna-Kristina; Podewitz, Nils; Zorn, Matthias; Oskar Rädler, Joachim; Elgeti, Jens
2014-11-01
Cell division is an essential dynamic event in tissue remodeling during wound healing, cancer and embryogenesis. In collective migration, tensile stresses affect cell shape and polarity, hence, the orientation of the cell division axis is expected to depend on cellular flow patterns. Here, we study the degree of orientation of cell division axes in migrating and resting epithelial cell sheets. We use microstructured channels to create a defined scenario of directed cell invasion and compare this situation to resting but proliferating cell monolayers. In experiments, we find a strong alignment of the axis due to directed flow while resting sheets show very weak global order, but local flow gradients still correlate strongly with the cell division axis. We compare experimental results with a previously published mesoscopic particle based simulation model. Most of the observed effects are reproduced by the simulations.
Solution of the one-dimensional consolidation theory equation with a pseudospectral method
Sepulveda, N.
1991-01-01
The one-dimensional consolidation theory equation is solved for an aquifer system using a pseudospectral method. The spatial derivatives are computed using Fast Fourier Transforms and the time derivative is solved using a fourth-order Runge-Kutta scheme. The computer model calculates compaction based on the void ratio changes accumulated during the simulated periods of time. Compactions and expansions resulting from groundwater withdrawals and recharges are simulated for two observation wells in Santa Clara Valley and two in San Joaquin Valley, California. Field data previously published are used to obtain mean values for the soil grain density and the compression index and to generate depth-dependent profiles for hydraulic conductivity and initial void ratio. The water-level plots for the wells studied were digitized and used to obtain the time dependent profiles of effective stress.
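A minimal sketch of the numerical machinery described above, applied to a generic 1D diffusion-type equation u_t = c*u_xx on a periodic domain: spatial derivatives via FFT and a classical fourth-order Runge-Kutta step in time. The consolidation-specific void-ratio bookkeeping, boundary conditions, and field-derived profiles from the paper are not reproduced, and the periodic setup is an assumption for illustration.

```python
import numpy as np

def rhs(u, c, k):
    """Right-hand side of u_t = c * u_xx, with u_xx computed spectrally."""
    return c * np.real(np.fft.ifft(-(k ** 2) * np.fft.fft(u)))

def rk4_step(u, dt, c, k):
    """Classical fourth-order Runge-Kutta time step."""
    k1 = rhs(u, c, k)
    k2 = rhs(u + 0.5 * dt * k1, c, k)
    k3 = rhs(u + 0.5 * dt * k2, c, k)
    k4 = rhs(u + dt * k3, c, k)
    return u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Periodic toy problem: diffusion of a single Fourier mode (analytic decay exp(-c*t))
n, L, c = 128, 2 * np.pi, 0.1
x = np.linspace(0, L, n, endpoint=False)
k = np.fft.fftfreq(n, d=L / n) * 2 * np.pi     # spectral wavenumbers
u = np.sin(x)
dt, steps = 1e-3, 1000
for _ in range(steps):
    u = rk4_step(u, dt, c, k)
print("max error vs analytic:", np.abs(u - np.exp(-c * dt * steps) * np.sin(x)).max())
```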
NASA Astrophysics Data System (ADS)
Li, Xiaoyi; Gao, Hui; Soteriou, Marios C.
2017-08-01
Atomization of extremely high-viscosity liquids is of interest for many applications in the aerospace, automotive, pharmaceutical, and food industries. While detailed atomization measurements usually face grand challenges, high-fidelity numerical simulations offer the advantage of comprehensively exploring the atomization details. In this work, the previously validated high-fidelity first-principles simulation code HiMIST is utilized to simulate high-viscosity liquid jet atomization in crossflow. The code is used to perform a parametric study of the atomization process in a wide range of Ohnesorge numbers (Oh = 0.004-2) and Weber numbers (We = 10-160). Direct comparisons between the present study and previously published low-viscosity jet in crossflow results are performed. The effects of viscous damping and slowing on jet penetration, liquid surface instabilities, ligament formation/breakup, and subsequent droplet formation are investigated. Complex variations in near-field and far-field jet penetrations with increasing Oh at different We are observed and linked with the underlying jet deformation and breakup physics. Transition in breakup regimes and increase in droplet size with increasing Oh are observed, mostly consistent with the literature reports. The detailed simulations elucidate a distinctive edge-ligament-breakup dominated process with long surviving ligaments for the higher Oh cases, as opposed to a two-stage edge-stripping/column-fracture process for the lower Oh counterparts. The trend of decreasing column deflection with increasing We is reversed as Oh increases. A predominantly unimodal droplet size distribution is predicted at higher Oh, in contrast to the bimodal distribution at lower Oh. It has been found that both Rayleigh-Taylor and Kelvin-Helmholtz linear stability theories cannot be easily applied to interpret the distinct edge breakup process and further study of the underlying physics is needed.
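For reference, the two dimensionless groups that organize the parametric study above can be computed as follows; the fluid properties in the example are generic placeholders, not the conditions simulated in the paper.

```python
import math

def ohnesorge(mu, rho, sigma, d):
    """Oh = mu / sqrt(rho * sigma * d): liquid viscosity vs. inertia and surface tension."""
    return mu / math.sqrt(rho * sigma * d)

def weber(rho, U, d, sigma):
    """We = rho * U**2 * d / sigma: aerodynamic inertia vs. surface tension."""
    return rho * U ** 2 * d / sigma

# Placeholder example: a 1 mm jet of a viscous liquid in a 50 m/s gas crossflow (SI units)
rho_liq, mu_liq, sigma_lv, d_jet = 1000.0, 0.3, 0.03, 1e-3
rho_gas, U_gas = 1.2, 50.0
print("Oh =", ohnesorge(mu_liq, rho_liq, sigma_lv, d_jet))
print("We =", weber(rho_gas, U_gas, d_jet, sigma_lv))
```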
NASA Astrophysics Data System (ADS)
Campos, Joana; Van der Veer, Henk W.; Freitas, Vânia; Kooijman, Sebastiaan A. L. M.
2009-08-01
In this paper a contribution is made to the ongoing debate on which brown shrimp generation mostly sustains the autumn peak in coastal North Sea commercial fisheries: the generation born in summer, or the winter one. Since the two perspectives are based on different considerations on the growth timeframe from settlement till commercial size, the Dynamic Energy Budget (DEB) theory was applied to predict maximum possible growth under natural conditions. First, the parameters of the standard DEB model for Crangon crangon L. were estimated using available data sets. These were insufficient to allow a direct estimation, requiring a special protocol to achieve consistency between parameters. Next, the DEB model was validated by comparing simulations with published experimental data on shrimp growth in relation to water temperatures. Finally, the DEB model was applied to simulate growth under optimal food conditions using the prevailing water temperature conditions in the Wadden Sea. Results show clear differences between males and females whereby the fastest growth rates were observed in females. DEB model simulations of maximum growth in the Wadden Sea suggest that it is not the summer brood from the current year as Boddeke claimed, nor the previous winter generation as Kuipers and Dapper suggested, but more likely the summer generation from the previous year which contributes to the bulk of the fisheries recruits in autumn.
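One standard ingredient of such DEB simulations is the Arrhenius correction that scales physiological rates with water temperature. The sketch below shows that correction with a placeholder Arrhenius temperature of typical magnitude, not the value estimated for Crangon crangon in the paper.

```python
import math

def arrhenius_correction(T_celsius, T_ref_celsius=20.0, T_A=8000.0):
    """DEB-style temperature correction factor  exp(T_A/T_ref - T_A/T), with T in kelvin.

    T_A is the Arrhenius temperature; 8000 K is a placeholder of typical magnitude."""
    T = T_celsius + 273.15
    T_ref = T_ref_celsius + 273.15
    return math.exp(T_A / T_ref - T_A / T)

# With this T_A, rates increase roughly 2.5-fold for each 10 degree warming:
print(arrhenius_correction(10.0), arrhenius_correction(20.0), arrhenius_correction(30.0))
```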
Sechopoulos, Ioannis; Ali, Elsayed S M; Badal, Andreu; Badano, Aldo; Boone, John M; Kyprianou, Iacovos S; Mainegra-Hing, Ernesto; McMillan, Kyle L; McNitt-Gray, Michael F; Rogers, D W O; Samei, Ehsan; Turner, Adam C
2015-10-01
The use of Monte Carlo simulations in diagnostic medical imaging research is widespread due to its flexibility and ability to estimate quantities that are challenging to measure empirically. However, any new Monte Carlo simulation code needs to be validated before it can be used reliably. The type and degree of validation required depends on the goals of the research project, but, typically, such validation involves either comparison of simulation results to physical measurements or to previously published results obtained with established Monte Carlo codes. The former is complicated due to nuances of experimental conditions and uncertainty, while the latter is challenging due to typical graphical presentation and lack of simulation details in previous publications. In addition, entering the field of Monte Carlo simulations in general involves a steep learning curve. It is not a simple task to learn how to program and interpret a Monte Carlo simulation, even when using one of the publicly available code packages. This Task Group report provides a common reference for benchmarking Monte Carlo simulations across a range of Monte Carlo codes and simulation scenarios. In the report, all simulation conditions are provided for six different Monte Carlo simulation cases that involve common x-ray based imaging research areas. The results obtained for the six cases using four publicly available Monte Carlo software packages are included in tabular form. In addition to a full description of all simulation conditions and results, a discussion and comparison of results among the Monte Carlo packages and the lessons learned during the compilation of these results are included. This abridged version of the report includes only an introductory description of the six cases and a brief example of the results of one of the cases. This work provides an investigator the necessary information to benchmark his/her Monte Carlo simulation software against the reference cases included here before performing his/her own novel research. In addition, an investigator entering the field of Monte Carlo simulations can use these descriptions and results as a self-teaching tool to ensure that he/she is able to perform a specific simulation correctly. Finally, educators can assign these cases as learning projects as part of course objectives or training programs.
On the predictability of land surface fluxes from meteorological variables
NASA Astrophysics Data System (ADS)
Haughton, Ned; Abramowitz, Gab; Pitman, Andy J.
2018-01-01
Previous research has shown that land surface models (LSMs) are performing poorly when compared with relatively simple empirical models over a wide range of metrics and environments. Atmospheric driving data appear to provide information about land surface fluxes that LSMs are not fully utilising. Here, we further quantify the information available in the meteorological forcing data that are used by LSMs for predicting land surface fluxes, by interrogating FLUXNET data, and extending the benchmarking methodology used in previous experiments. We show that substantial performance improvement is possible for empirical models using meteorological data alone, with no explicit vegetation or soil properties, thus setting lower bounds on a priori expectations on LSM performance. The process also identifies key meteorological variables that provide predictive power. We provide an ensemble of empirical benchmarks that are simple to reproduce and provide a range of behaviours and predictive performance, acting as a baseline benchmark set for future studies. We reanalyse previously published LSM simulations and show that there is more diversity between LSMs than previously indicated, although it remains unclear why LSMs are broadly performing so much worse than simple empirical models.
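A minimal sketch of the kind of empirical benchmark used in such studies follows: an ordinary linear regression predicting a surface flux from meteorological drivers alone, fit on one part of the record and evaluated out of sample. Variable names and the synthetic data are generic placeholders; the paper's actual benchmark ensemble includes nonlinear and clustered variants.

```python
import numpy as np

def linear_flux_benchmark(met_train, flux_train, met_test):
    """Least-squares linear model: flux ~ intercept + met variables (e.g. SWdown, Tair, RH)."""
    X_train = np.column_stack([np.ones(len(met_train)), met_train])
    coeffs, *_ = np.linalg.lstsq(X_train, flux_train, rcond=None)
    X_test = np.column_stack([np.ones(len(met_test)), met_test])
    return X_test @ coeffs

# Synthetic illustration: a flux driven mostly by shortwave radiation and temperature plus noise
rng = np.random.default_rng(0)
met = rng.uniform([0.0, 270.0, 0.2], [1000.0, 310.0, 1.0], size=(2000, 3))  # SWdown, Tair, RH
flux = 0.3 * met[:, 0] + 2.0 * (met[:, 1] - 290.0) + rng.normal(0, 10, 2000)
pred = linear_flux_benchmark(met[:1500], flux[:1500], met[1500:])
rmse = np.sqrt(np.mean((pred - flux[1500:]) ** 2))
print("out-of-sample RMSE:", round(float(rmse), 2))
```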
A multi-model framework for simulating wildlife population response to land-use and climate change
McRae, B.H.; Schumaker, N.H.; McKane, R.B.; Busing, R.T.; Solomon, A.M.; Burdick, C.A.
2008-01-01
Reliable assessments of how human activities will affect wildlife populations are essential for making scientifically defensible resource management decisions. A principal challenge of predicting effects of proposed management, development, or conservation actions is the need to incorporate multiple biotic and abiotic factors, including land-use and climate change, that interact to affect wildlife habitat and populations through time. Here we demonstrate how models of land-use, climate change, and other dynamic factors can be integrated into a coherent framework for predicting wildlife population trends. Our framework starts with land-use and climate change models developed for a region of interest. Vegetation changes through time under alternative future scenarios are predicted using an individual-based plant community model. These predictions are combined with spatially explicit animal habitat models to map changes in the distribution and quality of wildlife habitat expected under the various scenarios. Animal population responses to habitat changes and other factors are then projected using a flexible, individual-based animal population model. As an example application, we simulated animal population trends under three future land-use scenarios and four climate change scenarios in the Cascade Range of western Oregon. We chose two birds with contrasting habitat preferences for our simulations: winter wrens (Troglodytes troglodytes), which are most abundant in mature conifer forests, and song sparrows (Melospiza melodia), which prefer more open, shrubby habitats. We used climate and land-use predictions from previously published studies, as well as previously published predictions of vegetation responses using FORCLIM, an individual-based forest dynamics simulator. Vegetation predictions were integrated with other factors in PATCH, a spatially explicit, individual-based animal population simulator. Through incorporating effects of landscape history and limited dispersal, our framework predicted population changes that typically exceeded those expected based on changes in mean habitat suitability alone. Although land-use had greater impacts on habitat quality than did climate change in our simulations, we found that small changes in vital rates resulting from climate change or other stressors can have large consequences for population trajectories. The ability to integrate bottom-up demographic processes like these with top-down constraints imposed by climate and land-use in a dynamic modeling environment is a key advantage of our approach. The resulting framework should allow researchers to synthesize existing empirical evidence, and to explore complex interactions that are difficult or impossible to capture through piecemeal modeling approaches. © 2008 Elsevier B.V.
Muñoz, P; Pastor, D; Capmany, J; Martínez, A
2003-09-22
In this paper, the procedure to optimize flat-top Arrayed Waveguide Grating (AWG) devices in terms of transmission and dispersion properties is presented. The systematic procedure consists of the stigmatization and minimization of the Light Path Function (LPF) used in classic planar spectrograph theory. The resulting geometry arrangement for the Arrayed Waveguides (AW) and the Output Waveguides (OW) is not the classical Rowland mounting, but an arbitrary geometry arrangement. Simulations using previously published enhanced modeling show how this geometry reduces the passband ripple, asymmetry, and dispersion in a design example.
Experimental investigation of the Multipoint Ultrasonic Flowmeter
NASA Astrophysics Data System (ADS)
Jakub, Filipský
2018-06-01
The Multipoint Ultrasonic Flowmeter is a vector tomographic device capable of reconstructing all three components of a velocity field based solely on boundary ultrasonic measurements. Computer simulations have shown the feasibility of such a device and have been published previously. This paper describes an experimental investigation of the achievable accuracy of such a method. Doubled acoustic tripoles used to obtain information on the solenoidal part of the vector field exhibit extremely small differences between the Times Of Flight (TOFs) of individual sensors and are therefore sensitive to parasitic effects in TOF measurement. Sampling at 40 MHz combined with a correlation method is used to measure the TOF.
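The abstract states that times of flight are measured from 40 MHz samples with a correlation method. The following is a minimal sketch of such a cross-correlation TOF estimate; the sub-sample parabolic peak interpolation and the test chirp are illustrative assumptions rather than details taken from the paper.

```python
# Minimal sketch of time-of-flight (TOF) estimation by cross-correlation.
import numpy as np

FS = 40e6  # sampling frequency, Hz

def tof_by_correlation(reference, received, fs=FS):
    corr = np.correlate(received, reference, mode="full")
    k = int(np.argmax(corr))
    # Three-point parabolic interpolation for a sub-sample peak estimate
    if 0 < k < len(corr) - 1:
        y0, y1, y2 = corr[k - 1], corr[k], corr[k + 1]
        denom = y0 - 2 * y1 + y2
        delta = 0.5 * (y0 - y2) / denom if denom != 0 else 0.0
    else:
        delta = 0.0
    lag = (k + delta) - (len(reference) - 1)
    return lag / fs

# Example: a chirp delayed by 95 samples (2.375 microseconds at 40 MHz)
t = np.arange(0, 50e-6, 1 / FS)
ref = np.sin(2 * np.pi * (1e6 + 2e10 * t) * t)
rx = np.concatenate([np.zeros(95), ref])[: len(ref)]
print(tof_by_correlation(ref, rx) * 1e6, "us")  # ~2.375 us
```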
A Diversified Investment Strategy Using Autonomous Agents
NASA Astrophysics Data System (ADS)
Barbosa, Rui Pedro; Belo, Orlando
In a previously published article, we presented an architecture for implementing agents with the ability to trade autonomously in the Forex market. At the core of this architecture is an ensemble of classification and regression models that is used to predict the direction of the price of a currency pair. In this paper, we will describe a diversified investment strategy consisting of five agents which were implemented using that architecture. By simulating trades with 18 months of out-of-sample data, we will demonstrate that data mining models can produce profitable predictions, and that the trading risk can be diminished through investment diversification.
Armstrong, Don L.; Lancet, Doron
2018-01-01
Abstract We studied the simulated replication and growth of prebiotic vesicles composed of 140 phospholipids and cholesterol using our R-GARD (Real Graded Autocatalysis Replication Domain) formalism that utilizes currently extant lipids that have known rate constants of lipid-vesicle interactions from published experimental data. R-GARD normally modifies kinetic parameters of lipid-vesicle interactions based on vesicle composition and properties. Our original R-GARD model tracked the growth and division of one vesicle at a time in an environment with unlimited lipids at a constant concentration. We explore here a modified model where vesicles compete for a finite supply of lipids. We observed that vesicles exhibit complex behavior including initial fast unrestricted growth, followed by intervesicle competition for diminishing resources, then a second growth burst driven by better-adapted vesicles, and ending with a final steady state. Furthermore, in simulations without kinetic parameter modifications (“invariant kinetics”), the initial replication was an order of magnitude slower, and vesicles' composition variability at the final steady state was much lower. The complex kinetic behavior was not observed either in the previously published R-GARD simulations or in additional simulations presented here with only one lipid component. This demonstrates that both a finite environment (inducing selection) and multiple components (providing variation for selection to act upon) are crucial for portraying evolution-like behavior. Such properties can improve survival in a changing environment by increasing the ability of early protocellular entities to respond to rapid environmental fluctuations likely present during abiogenesis both on Earth and possibly on other planets. This in silico simulation predicts that a relatively simple in vitro chemical system containing only lipid molecules might exhibit properties that are relevant to prebiotic processes. Key Words: Phospholipid vesicles—Prebiotic compartments—Prebiotic vesicle competition—Prebiotic vesicle variability. Astrobiology 18, 419–430. PMID:29634319
Percolation of binary disk systems: Modeling and theory
Meeks, Kelsey; Tencer, John; Pantoya, Michelle L.
2017-01-12
The dispersion and connectivity of particles with a high degree of polydispersity is relevant to problems involving composite material properties and reaction decomposition prediction and has been the subject of much study in the literature. This paper utilizes Monte Carlo models to predict percolation thresholds for two-dimensional systems containing disks of two different radii. Monte Carlo simulations and spanning probability are used to extend prior models into regions of higher polydispersity than those previously considered. A correlation to predict the percolation threshold for binary disk systems is proposed based on the extended dataset presented in this work and compared to previously published correlations. Finally, a set of boundary conditions necessary for a good fit is presented, and a condition for maximizing percolation threshold for binary disk systems is suggested.
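A minimal sketch of the Monte Carlo spanning-probability approach described above is given below: disks of two radii are placed at random, connected when they overlap, and a trial counts as percolating when one cluster touches both vertical edges of the box. The box size, disk radii, number density, and left-right spanning criterion are illustrative assumptions, not the study's exact setup.

```python
# Monte Carlo spanning probability for a binary disk system (illustrative).
import numpy as np

def spans(centers, radii, box=1.0):
    """True if overlapping disks form a cluster touching both vertical edges."""
    n = len(radii)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    # Connect every overlapping pair (center distance < sum of radii)
    diff = centers[:, None, :] - centers[None, :, :]
    d2 = np.sum(diff ** 2, axis=-1)
    rsum = radii[:, None] + radii[None, :]
    for i, j in zip(*np.where(np.triu(d2 < rsum ** 2, k=1))):
        parent[find(i)] = find(j)

    left = {find(i) for i in range(n) if centers[i, 0] - radii[i] < 0.0}
    right = {find(i) for i in range(n) if centers[i, 0] + radii[i] > box}
    return bool(left & right)

def spanning_probability(n_disks, r_small, r_large, frac_large, trials=50, seed=1):
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(trials):
        centers = rng.random((n_disks, 2))
        radii = np.where(rng.random(n_disks) < frac_large, r_large, r_small)
        hits += spans(centers, radii)
    return hits / trials

print(spanning_probability(200, r_small=0.03, r_large=0.06, frac_large=0.5))
```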
Simulating the role of visual selective attention during the development of perceptual completion.
Schlesinger, Matthew; Amso, Dima; Johnson, Scott P
2012-11-01
We recently proposed a multi-channel, image-filtering model for simulating the development of visual selective attention in young infants (Schlesinger, Amso & Johnson, 2007). The model not only captures the performance of 3-month-olds on a visual search task, but also implicates two cortical regions that may play a role in the development of visual selective attention. In the current simulation study, we used the same model to simulate 3-month-olds' performance on a second measure, the perceptual unity task. Two parameters in the model - corresponding to areas in the occipital and parietal cortices - were systematically varied while the gaze patterns produced by the model were recorded and subsequently analyzed. Three key findings emerged from the simulation study. First, the model successfully replicated the performance of 3-month-olds on the unity perception task. Second, the model also helps to explain the improved performance of 2-month-olds when the size of the occluder in the unity perception task is reduced. Third, in contrast to our previous simulation results, variation in only one of the two cortical regions simulated (i.e. recurrent activity in posterior parietal cortex) resulted in a performance pattern that matched 3-month-olds. These findings provide additional support for our hypothesis that the development of perceptual completion in early infancy is promoted by progressive improvements in visual selective attention and oculomotor skill. © 2012 Blackwell Publishing Ltd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thoma, C.; Welch, D. R.; Hsu, S. C.
2013-08-15
We describe numerical simulations, using the particle-in-cell (PIC) and hybrid-PIC code lsp [T. P. Hughes et al., Phys. Rev. ST Accel. Beams 2, 110401 (1999)], of the head-on merging of two laboratory supersonic plasma jets. The goals of these experiments are to form and study astrophysically relevant collisionless shocks in the laboratory. Using the plasma jet initial conditions (density ∼10^14–10^16 cm^−3, temperature ∼ few eV, and propagation speed ∼20–150 km/s), large-scale simulations of jet propagation demonstrate that interactions between the two jets are essentially collisionless at the merge region. In highly resolved one- and two-dimensional simulations, we show that collisionless shocks are generated by the merging jets when immersed in applied magnetic fields (B∼0.1–1 T). At expected plasma jet speeds of up to 150 km/s, our simulations do not give rise to unmagnetized collisionless shocks, which require much higher velocities. The orientation of the magnetic field and the axial and transverse density gradients of the jets have a strong effect on the nature of the interaction. We compare some of our simulation results with those of previously published PIC simulation studies of collisionless shock formation.
Equation of state of dense plasmas with pseudoatom molecular dynamics
Starrett, C. E.; Saumon, D.
2016-06-14
Here, we present an approximation for calculating the equation of state (EOS) of warm and hot dense matter that is built on the previously published pseudoatom molecular dynamics (PAMD) model of dense plasmas [Starrett et al., Phys. Rev. E 91, 013104 (2015)]. While the EOS calculation with PAMD was previously limited to orbital-free density functional theory (DFT), the new approximation presented here allows a Kohn-Sham DFT treatment of the electrons. The resulting EOS thus includes a quantum mechanical treatment of the electrons with a self-consistent model of the ionic structure, while remaining tractable at high temperatures. The method is validated by comparisons with pressures from ab initio simulations of Be, Al, Si, and Fe. The EOS in the Thomas-Fermi approximation shows remarkable thermodynamic consistency over a wide range of temperatures for aluminum. We also calculate the principal Hugoniots of aluminum and silicon up to 500 eV. We find that the ionic structure of the plasma has a modest effect that peaks at temperatures of a few eV and that the features arising from the electronic structure agree well with ab initio simulations.
Sun, Xiangfei; Ng, Carla A; Small, Mitchell J
2018-06-12
Organisms have long been treated as receptors in exposure studies of polychlorinated biphenyls (PCBs) and other persistent organic pollutants (POPs). The influences of environmental pollution on organisms are well recognized. However, the impact of biota on PCB transport in an environmental system has not been considered in sufficient detail. In this study, a population-based multi-compartment fugacity model is developed by reconfiguring the organisms as populated compartments and reconstructing all the exchange processes between the organism compartments and environmental compartments, especially the previously ignored feedback routes from biota to the environment. We evaluate the model performance by simulating the PCB concentration distribution in Lake Ontario using published loading records. The lake system is divided into three environment compartments (air, water, and sediment) and several organism groups according to the dominant local biotic species. The comparison indicates that the simulated results agree well with a set of published field measurements from different years. We identify a new process, called Facilitated Biotic Intermedia Transport (FBIT), to describe the enhanced pollution transport that occurs between environmental media and organisms. As the hydrophobicity of PCB congener increases, the organism population exerts greater influence on PCB mass flows. In a high biomass scenario, the model simulation indicates significant FBIT effects and biotic storage effects with hydrophobic PCB congeners, which also lead to significant shifts in systemic contaminant exchange rates between organisms and the environment. Copyright © 2018 Elsevier Ltd. All rights reserved.
Biosphere model simulations of interannual variability in terrestrial 13C/12C exchange
NASA Astrophysics Data System (ADS)
van der Velde, I. R.; Miller, J. B.; Schaefer, K.; Masarie, K. A.; Denning, S.; White, J. W. C.; Tans, P. P.; Krol, M. C.; Peters, W.
2013-09-01
Previous studies suggest that a large part of the variability in the atmospheric ratio of 13CO2/12CO2 originates from carbon exchange with the terrestrial biosphere rather than with the oceans. Since this variability is used to quantitatively partition the total carbon sink, we here investigate the contribution of interannual variability (IAV) in biospheric exchange to the observed atmospheric 13C variations. We use the Simple Biosphere - Carnegie-Ames-Stanford Approach biogeochemical model, including a detailed isotopic fractionation scheme, separate 12C and 13C biogeochemical pools, and satellite-observed fire disturbances. This model of 12CO2 and 13CO2 thus also produces return fluxes of 13CO2 from its differently aged pools, contributing to the so-called disequilibrium flux. Our simulated terrestrial 13C budget closely resembles previously published model results for plant discrimination and disequilibrium fluxes and similarly suggests that variations in C3 discrimination and year-to-year variations in C3 and C4 productivity are the main drivers of their IAV. But the year-to-year variability in the isotopic disequilibrium flux is much lower (1σ=±1.5 PgC ‰ yr-1) than required (±12.5 PgC ‰ yr-1) to match atmospheric observations, under the common assumption of low variability in net ocean CO2 fluxes. This contrasts with earlier published results. It is currently unclear how to increase IAV in these drivers, suggesting that SiBCASA still misses processes that enhance variability in plant discrimination and relative C3/C4 productivity. Alternatively, 13C budget terms other than terrestrial disequilibrium fluxes, including possibly the atmospheric growth rate, must have significantly different IAV in order to close the atmospheric 13C budget on a year-to-year basis.
Carolan-Olah, Mary; Kruger, Gina; Brown, Vera; Lawton, Felicity; Mazzarino, Melissa
2016-01-01
Simulation provides opportunities for midwifery students to enhance their performance in emergency situations. Neonatal resuscitation is one such emergency and its management is a major concern for midwifery students. This project aimed to develop and evaluate a simulation exercise, for neonatal resuscitation, for 3rd year midwifery students. A quantitative survey design was employed using questions from two previously validated questionnaires: (1.) Student Satisfaction and Self-Confidence in Learning and (2.) the Clinical Teamwork Scale (CTS). Australian university. 40 final year midwifery students were invited to participate and 36 agreed to take part in the project. In pre-simulation questionnaires, students reported low levels of confidence in initiating care of an infant requiring resuscitation. Most anticipated that the simulation exercise would be useful to better prepare them to respond to a neonatal emergency. Post-simulation questionnaires reported an increase in student confidence, with 30 of 36 students agreeing/strongly agreeing that their confidence levels had improved. Nonetheless, an unexpected number of students reported a lack of familiarity with the equipment. The single simulation exercise evaluated in this project resulted in improved student confidence and greater knowledge and skills in neonatal resuscitation. However, deficits in handling emergency equipment, and in understanding the role of the student midwife/midwife in neonatal resuscitation, were also noted. For the future, the development and evaluation of a programme of simulation exercises, over a longer period, is warranted. This approach may reduce stress and better address student learning needs. Copyright © 2015. Published by Elsevier Ltd.
Day, Judy D.; Metes, Diana M.; Vodovotz, Yoram
2015-01-01
A mathematical model of the early inflammatory response in transplantation is formulated with ordinary differential equations. We first consider the inflammatory events associated only with the initial surgical procedure and the subsequent ischemia/reperfusion (I/R) events that cause tissue damage to the host as well as the donor graft. These events release damage-associated molecular pattern molecules (DAMPs), thereby initiating an acute inflammatory response. In simulations of this model, resolution of inflammation depends on the severity of the tissue damage caused by these events and the patient’s (co)-morbidities. We augment a portion of a previously published mathematical model of acute inflammation with the inflammatory effects of T cells in the absence of antigenic allograft mismatch (but with DAMP release proportional to the degree of graft damage prior to transplant). Finally, we include the antigenic mismatch of the graft, which leads to the stimulation of potent memory T cell responses, leading to further DAMP release from the graft and concomitant increase in allograft damage. Regulatory mechanisms are also included at the final stage. Our simulations suggest that surgical injury and I/R-induced graft damage can be well-tolerated by the recipient when each is present alone, but that their combination (along with antigenic mismatch) may lead to acute rejection, as seen clinically in a subset of patients. An emergent phenomenon from our simulations is that low-level DAMP release can tolerize the recipient to a mismatched allograft, whereas different restimulation regimens resulted in an exaggerated rejection response, in agreement with published studies. We suggest that mechanistic mathematical models might serve as an adjunct for patient- or sub-group-specific predictions, simulated clinical studies, and rational design of immunosuppression. PMID:26441988
Monte Carlo kinetics simulations of ice-mantle formation on interstellar grains
NASA Astrophysics Data System (ADS)
Garrod, Robin
2015-08-01
The majority of interstellar dust-grain chemical kinetics models use rate equations, or alternative population-based simulation methods, to trace the time-dependent formation of grain-surface molecules and ice mantles. Such methods are efficient, but are incapable of considering explicitly the morphologies of the dust grains, the structure of the ices formed thereon, or the influence of local surface composition on the chemistry. A new Monte Carlo chemical kinetics model, MIMICK, is presented here, whose prototype results were published recently (Garrod 2013, ApJ, 778, 158). The model calculates the strengths and positions of the potential minima on the surface, on the fly, according to the individual pair-wise (van der Waals) bonds between surface species, allowing the structure of the ice to build up naturally as surface diffusion and chemistry occur. The prototype model considered contributions to a surface particle's potential only from contiguous (or "bonded") neighbors; the full model considers contributions from surface constituents from short to long range. Simulations are conducted on a fully 3-D user-generated dust grain with amorphous surface characteristics. The chemical network has also been extended from the simple water system previously published, and now includes 33 chemical species and 55 reactions. This allows the major interstellar ice components to be simulated, such as water, methane, ammonia and methanol, as well as a small selection of more complex molecules, including methyl formate (HCOOCH3). The new model results indicate that the porosity of interstellar ices is dependent on multiple variables, including gas density, the dust temperature, and the relative accretion rates of key gas-phase species. The results presented also have implications for the formation of complex organic molecules on dust-grain surfaces at very low temperatures.
Detering, Karen; Silvester, William; Corke, Charlie; Milnes, Sharyn; Fullam, Rachael; Lewis, Virginia; Renton, Jodie
2014-09-01
To develop and evaluate an interactive advance care planning (ACP) educational programme for general practitioners and doctors-in-training. Development of training materials was overseen by a committee; informed by literature and previous teaching experience. The evaluation assessed participant confidence, knowledge and attitude toward ACP before and after training. Training provided to metropolitan and rural settings in Victoria, Australia. 148 doctors participated in training. The majority were aged at least 40 years with more than 10 years work experience; 63% had not trained in Australia. The programme included prereading, a DVD, interactive patient e-simulation workshop and a training manual. All educational materials followed an evidence-based stepwise approach to ACP: Introducing the topic, exploring concepts, introducing solutions and summarising the conversation. The primary outcome was the change in doctors' self-reported confidence to undertake ACP conversations. Secondary measures included pretest/post-test scores in patient ACP e-simulation, change in ACP knowledge and attitude, and satisfaction with programme materials. 69 participants completed the preworkshop and postworkshop evaluation. Following education, there was a significant change in self-reported confidence in six of eight items (p=0.008 -0.08). There was a significant improvement (p<0.001) in median scores on the e-simulation (pre 7/80, post 60/80). There were no significant differences observed in ACP knowledge following training, and most participants were supportive of patient autonomy and ACP pretraining. Educational materials were rated highly. A short multimodal interactive education programme improves doctors' confidence with ACP and performance on an ACP patient e-simulation. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
Strong control of Southern Ocean cloud reflectivity by ice-nucleating particles.
Vergara-Temprado, Jesús; Miltenberger, Annette K; Furtado, Kalli; Grosvenor, Daniel P; Shipway, Ben J; Hill, Adrian A; Wilkinson, Jonathan M; Field, Paul R; Murray, Benjamin J; Carslaw, Ken S
2018-03-13
Large biases in climate model simulations of cloud radiative properties over the Southern Ocean cause large errors in modeled sea surface temperatures, atmospheric circulation, and climate sensitivity. Here, we combine cloud-resolving model simulations with estimates of the concentration of ice-nucleating particles in this region to show that our simulated Southern Ocean clouds reflect far more radiation than predicted by global models, in agreement with satellite observations. Specifically, we show that the clouds that are most sensitive to the concentration of ice-nucleating particles are low-level mixed-phase clouds in the cold sectors of extratropical cyclones, which have previously been identified as a main contributor to the Southern Ocean radiation bias. The very low ice-nucleating particle concentrations that prevail over the Southern Ocean strongly suppress cloud droplet freezing, reduce precipitation, and enhance cloud reflectivity. The results help explain why a strong radiation bias occurs mainly in this remote region away from major sources of ice-nucleating particles. The results present a substantial challenge to climate models to be able to simulate realistic ice-nucleating particle concentrations and their effects under specific meteorological conditions. Copyright © 2018 the Author(s). Published by PNAS.
NASA Astrophysics Data System (ADS)
Cohen, Bruce; Umansky, Maxim; Joseph, Ilon
2015-11-01
Progress is reported on including self-consistent zonal flows in simulations of drift-resistive ballooning turbulence using the BOUT++ framework. Previously published work addressed the simulation of L-mode edge turbulence in realistic single-null tokamak geometry using the BOUT three-dimensional fluid code that solves Braginskii-based fluid equations. The effects of imposed sheared E×B poloidal rotation were included, with a static radial electric field fitted to experimental data. In new work our goal is to include the self-consistent effects on the radial electric field driven by the microturbulence, which contributes to the sheared E×B poloidal rotation (zonal flow generation). We describe a model for including self-consistent zonal flows and an algorithm for maintaining underlying plasma profiles to enable the simulation of steady-state turbulence. We examine the role of Braginskii viscous forces in providing necessary dissipation when including axisymmetric perturbations. We also report on some of the numerical difficulties associated with including the axisymmetric component of the fluctuating fields. This work was performed under the auspices of the U.S. Department of Energy under contract DE-AC52-07NA27344 at the Lawrence Livermore National Laboratory (LLNL-ABS-674950).
Simulations of Membrane-Disrupting Peptides I: Alamethicin Pore Stability and Spontaneous Insertion.
Perrin, B Scott; Pastor, Richard W
2016-09-20
An all-atom molecular dynamics simulation of the archetype barrel-stave alamethicin (alm) pore in a 1,2-dioleoyl-sn-glycero-3-phosphocholine bilayer at 313 K indicates that ∼7 μs is required for equilibration of a preformed 6-peptide pore; the pore remains stable for the duration of the remaining 7 μs of the trajectory, and the structure factors agree well with experiment. A 5 μs simulation of 10 surface-bound alm peptides shows significant peptide unfolding and some unbinding, but no insertion. Simulations at 363 and 413 K with a -0.2 V electric field yield peptide insertion in 1 μs. Insertion is initiated by the folding of residues 3-11 into an α-helix, and mediated by membrane water or by previously inserted peptides. The stability of five alm pore peptides at 413 K with a -0.2 V electric field demonstrates a significant preference for a transmembrane orientation. Hence, and in contrast to the cationic antimicrobial peptide described in the following article, alm shows a strong preference for the inserted over the surface-bound state. Published by Elsevier Inc.
Li, Xuejin; Popel, Aleksander S.; Karniadakis, George Em
2012-01-01
The motion of a suspension of red blood cells (RBCs) flowing in a Y-shaped bifurcating microfluidic channel is investigated using a validated low-dimensional RBC (LD-RBC) model based on dissipative particle dynamics (DPD). Specifically, the RBC is represented as a closed torus-like ring of ten colloidal particles, which leads to efficient simulations of blood flow in microcirculation over a wide range of hematocrits. Adaptive no-slip wall boundary conditions were implemented to model hydrodynamic flow within a specific wall structure of diverging 3D microfluidic channels, paying attention to controlling density fluctuations. Plasma skimming and the all-or-nothing phenomenon of RBCs in a bifurcating microfluidic channel have been investigated in our simulations for healthy and diseased blood, including the size of cell-free layer on the daughter branches. The feed hematocrit level in the parent channel has considerable influence on blood-plasma separation. Compared to the blood-plasma separation efficiencies of healthy RBCs, malaria-infected stiff RBCs (iRBCs) have a tendency to travel into the low flowrate daughter branch because of their different initial distribution in the parent channel. Our simulation results are consistent with previously published experimental results and theoretical predictions. PMID:22476709
Organ radiation exposure with EOS: GATE simulations versus TLD measurements
NASA Astrophysics Data System (ADS)
Clavel, A. H.; Thevenard-Berger, P.; Verdun, F. R.; Létang, J. M.; Darbon, A.
2016-03-01
EOS® is an innovative X-ray imaging system allowing the acquisition of two simultaneous images of a patient in the standing position, during the vertical scan of two orthogonal fan beams. This study aimed to compute organ radiation exposure for a patient in the particular geometry of this system. Two different positions of the patient in the machine were studied, corresponding to postero-anterior plus left lateral projections (PA-LLAT) and antero-posterior plus right lateral projections (AP-RLAT). To achieve this goal, a Monte-Carlo simulation was developed based on a GATE environment. To model the physical properties of the patient, a computational phantom was produced based on computed tomography scan data of an anthropomorphic phantom. The simulations provided doses for several organs, which were compared to previously published dose results measured with Thermo Luminescent Detectors (TLD) in the same conditions and with the same phantom. The simulation results showed a good agreement with measured doses at the TLD locations, for both AP-RLAT and PA-LLAT projections. This study also showed that the organ dose assessed only from a sample of locations, rather than considering the whole organ, introduced significant bias, depending on organs and projections.
Duan, Chang-Kui; Tanner, Peter A
2011-03-17
Published two photon excitation (TPE) intensities for the cubic elpasolite systems Cs(2)NaTbX(6) (X = Cl, F) have been simulated by a calculation of two photon absorption (TPA) intensities which takes into account electric dipole transitions involving the detailed crystal-field structure of 4f(7)5d intermediate states, as well as the interactions of the 4f(7) core with the d-electron. The intensity calculation employed parameters from an energy level calculation which not only presented an accurate fit, but also yielded parameters consistent with those from other lanthanide ions. The calculated intensities were used to confirm or adjust the previous assignments of energy levels, resulting in some minor revisions. Generally, the TPA intensity simulations were in better agreement with experimental data for the fluoride, rather than the chloride, system and possible reasons for this are given.
GPU-based simulations of fracture in idealized brick and mortar composites
NASA Astrophysics Data System (ADS)
William Pro, J.; Kwei Lim, Rone; Petzold, Linda R.; Utz, Marcel; Begley, Matthew R.
2015-07-01
Stiff ceramic platelets (or bricks) that are aligned and bonded to a second ductile phase with low volume fraction (mortar) are a promising pathway to produce stiff, high-toughness composites. For certain ranges of constituent properties, including those of some synthetic analogs to nacre, one can demonstrate that the deformation is dominated by relative brick motions. This paper describes simulations of fracture that explicitly track the motions of individual rigid bricks in an idealized microstructure; cohesive tractions acting between the bricks introduce elastic, plastic and rupture behaviors. Results are presented for the stresses and damage near macroscopic cracks with different brick orientations relative to the loading orientation. The anisotropic macroscopic initiation toughness is computed for small-scale yielding conditions and is shown to be independent of specimen geometry and loading configuration. The results are shown to be in agreement with previously published experiments on synthetic nacre.
Wind Resource Assessment of Gujarat (India)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Draxl, C.; Purkayastha, A.; Parker, Z.
India is one of the largest wind energy markets in the world. In 1986 Gujarat was the first Indian state to install a wind power project. In February 2013, the installed wind capacity in Gujarat was 3,093 MW. Due to the uncertainty around existing wind energy assessments in India, this analysis uses the Weather Research and Forecasting (WRF) model to simulate the wind at current hub heights for one year to provide more precise estimates of wind resources in Gujarat. The WRF model allows for accurate simulations of winds near the surface and at heights important for wind energy purposes. While previous resource assessments published wind power density, we focus on average wind speeds, which can be converted to wind power densities by the user with methods of their choice. The wind resource estimates in this study show regions with average annual wind speeds of more than 8 m/s.
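Since the abstract leaves the conversion from wind speed to wind power density to the reader, the sketch below shows two common textbook conversions: from a wind-speed time series and from Weibull parameters. The air-density value and the Weibull variant are standard assumptions, not values taken from this assessment.

```python
# Wind power density (W/m^2) from wind speeds: WPD = 0.5 * rho * <v^3>.
import numpy as np
from math import gamma

RHO = 1.225  # air density at sea level, kg/m^3 (site-specific in practice)

def wpd_from_timeseries(speeds, rho=RHO):
    """Wind power density from a wind-speed time series."""
    v = np.asarray(speeds, dtype=float)
    return 0.5 * rho * np.mean(v ** 3)

def wpd_from_weibull(c, k, rho=RHO):
    """Wind power density from Weibull scale c (m/s) and shape k."""
    return 0.5 * rho * c ** 3 * gamma(1.0 + 3.0 / k)

print(wpd_from_timeseries([6.0, 8.0, 10.0]))  # ~353 W/m^2
print(wpd_from_weibull(c=9.0, k=2.0))         # ~594 W/m^2
```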
Highly Automated Arrival Management and Control System Suitable for Early NextGen
NASA Technical Reports Server (NTRS)
Swenson, Harry N.; Jung, Jaewoo
2013-01-01
This is a presentation of previously published work conducted in the development of the Terminal Area Precision Scheduling and Spacing (TAPSS) system. Included are concept and technical descriptions of the TAPSS system and results from human-in-the-loop simulations conducted at Ames Research Center. The Terminal Area Precision Scheduling and Spacing system has been demonstrated, through research and extensive high-fidelity simulation studies, to have benefits in airport arrival throughput, supporting efficient arrival descents, and enabling mixed aircraft navigation capability operations during periods of high congestion. NASA is currently porting the TAPSS system into the FAA TBFM and STARS system prototypes to ensure its ability to operate in the FAA automation infrastructure. The NASA ATM Demonstration Project is using the TAPSS technologies to provide the ground-based automation tools to enable airborne Interval Management (IM) capabilities. NASA and the FAA have initiated a Research Transition Team to enable potential TAPSS and IM technology transfer.
Simulating Stable Isotope Ratios in Plumes of Groundwater Pollutants with BIOSCREEN-AT-ISO.
Höhener, Patrick; Li, Zhi M; Julien, Maxime; Nun, Pierrick; Robins, Richard J; Remaud, Gérald S
2017-03-01
BIOSCREEN is a well-known simple tool for evaluating the transport of dissolved contaminants in groundwater, ideal for rapid screening and teaching. This work extends the BIOSCREEN model for the calculation of stable isotope ratios in contaminants. A three-dimensional exact solution of the reactive transport from a patch source, accounting for fractionation by first-order decay and/or sorption, is used. The results match those from a previously published isotope model but are much simpler to obtain. Two different isotopes may be computed, and dual isotope plots can be viewed. The dual isotope assessment is a rapidly emerging new approach for identifying process mechanisms in aquifers. Furthermore, deviations of isotope ratios at specific reactive positions with respect to "bulk" ratios in the whole compound can be simulated. This model is named BIOSCREEN-AT-ISO and will be downloadable from the journal homepage. © 2016, National Ground Water Association.
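For readers unfamiliar with how first-order degradation shifts isotope ratios, the sketch below evaluates the classical Rayleigh relation in its common approximate form. This zero-dimensional illustration is not the three-dimensional analytical transport solution implemented in BIOSCREEN-AT-ISO.

```python
# Classical (approximate) Rayleigh relation: delta = delta0 + epsilon * ln(f).
import numpy as np

def delta_rayleigh(delta0_permil, epsilon_permil, remaining_fraction):
    """Isotope signature after degradation, with f = C/C0 remaining."""
    f = np.asarray(remaining_fraction, dtype=float)
    return delta0_permil + epsilon_permil * np.log(f)

# Example: source at -27 permil, enrichment factor -5 permil, 90% degraded
print(delta_rayleigh(-27.0, -5.0, 0.1))  # about -15.5 permil
```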
Impact of Neutrino Opacities on Core-collapse Supernova Simulations
NASA Astrophysics Data System (ADS)
Kotake, Kei; Takiwaki, Tomoya; Fischer, Tobias; Nakamura, Ko; Martínez-Pinedo, Gabriel
2018-02-01
The accurate description of neutrino opacities is central to both the core-collapse supernova (CCSN) phenomenon and the validity of the explosion mechanism itself. In this work, we study in a systematic fashion the role of a variety of well-selected neutrino opacities in CCSN simulations where the multi-energy, three-flavor neutrino transport is solved using the isotropic diffusion source approximation (IDSA) scheme. To verify our code, we first present results from one-dimensional (1D) simulations following the core collapse, bounce, and ∼250 ms postbounce of a 15 M⊙ star using a standard set of neutrino opacities by Bruenn. A detailed comparison with published results supports the reliability of our three-flavor IDSA scheme using the standard opacity set. We then investigate in 1D simulations how individual opacity updates lead to differences with the baseline run with the standard opacity set. Through detailed comparisons with previous work, we check the validity of our implementation of each update in a step-by-step manner. Individual neutrino opacities with the largest impact on the overall evolution in 1D simulations are selected for systematic comparisons in our two-dimensional (2D) simulations. Special attention is given to the criterion of explodability in the 2D models. We discuss the implications of these results as well as their limitations and the requirements for future, more elaborate CCSN modeling.
SimBA: simulation algorithm to fit extant-population distributions.
Parida, Laxmi; Haiminen, Niina
2015-03-14
Simulation of populations with specified characteristics such as allele frequencies, linkage disequilibrium etc., is an integral component of many studies, including in-silico breeding optimization. Since the accuracy and sensitivity of population simulation are critical to the quality of the output of the applications that use them, accurate algorithms are required to provide a strong foundation to the methods in these studies. In this paper we present SimBA (Simulation using Best-fit Algorithm), a non-generative approach based on a combination of stochastic techniques and discrete methods. We optimize a hill climbing algorithm and extend the framework to include multiple subpopulation structures. Additionally, we show that SimBA is very sensitive to the input specifications, i.e., very similar but distinct input characteristics result in distinct outputs with high fidelity to the specified distributions. This property of the simulation is not explicitly modeled or studied by previous methods. We show that SimBA outperforms the existing population simulation methods, both in terms of accuracy as well as time-efficiency. Not only does it construct populations that meet the input specifications more stringently than other published methods, SimBA is also easy to use. It does not require explicit parameter adaptations or calibrations. Also, it can work with input specified as distributions, without an exemplar matrix or population as required by some methods. SimBA is available at http://researcher.ibm.com/project/5669.
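The following toy sketch illustrates the flavor of a best-fit, hill-climbing adjustment of a binary genotype matrix toward target allele frequencies. The objective function and move set are simplified assumptions; SimBA itself additionally fits linkage disequilibrium and subpopulation structure.

```python
# Toy hill climbing toward target allele frequencies (illustrative only).
import numpy as np

def fit_allele_freqs(n_ind, target_freqs, iters=20000, seed=0):
    rng = np.random.default_rng(seed)
    target = np.asarray(target_freqs, dtype=float)
    pop = (rng.random((n_ind, len(target))) < target).astype(int)

    def error(p):
        return np.sum((p.mean(axis=0) - target) ** 2)

    best = error(pop)
    for _ in range(iters):
        i = rng.integers(n_ind)
        j = rng.integers(len(target))
        pop[i, j] ^= 1           # propose flipping one allele
        e = error(pop)
        if e <= best:
            best = e             # keep improving (or neutral) moves
        else:
            pop[i, j] ^= 1       # revert worsening moves
    return pop, best

pop, err = fit_allele_freqs(100, [0.1, 0.25, 0.5, 0.9])
print(pop.mean(axis=0), err)
```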
NASA Astrophysics Data System (ADS)
Kim, Jihwan; Løvholt, Finn
2016-04-01
Enormous submarine landslides having volumes up to thousands of km3 and long run-out may cause tsunamis with widespread effects. Clay-rich landslides, such as Trænadjupet and Storegga offshore Norway commonly involve retrogressive mass and momentum release mechanisms that affect the tsunami generation. As a consequence, the failure mechanisms, soil parameters, and release rate of the retrogression are of importance for the tsunami generation. Previous attempts to model the tsunami generation due to retrogressive landslides are few, and limited to idealized conditions. Here, a visco-plastic model including additional effects such as remolding, time dependent mass release, and hydrodynamic resistance, is employed for simulating the Storegga Slide. As landslide strength parameters and their evolution in time are uncertain, it is necessary to conduct a sensitivity study to shed light on the tsunamigenic processes. The induced tsunami is simulated using Geoclaw. We also compare our tsunami simulations with recent analysis conducted using a pure retrogressive model for the landslide, as well as previously published results using a block model. The availability of paleotsunami run-up data and detailed slide deposits provides a suitable background for improved understanding of the slide mechanics and tsunami generation. The research leading to these results has received funding from the Research Council of Norway under grant number 231252 (Project TsunamiLand) and the European Union's Seventh Framework Programme (FP7/2007-2013) under grant agreement 603839 (Project ASTARTE).
NASA Astrophysics Data System (ADS)
Henke, Paul S.; Mak, Chi H.
2014-08-01
The thermodynamic stability of a folded RNA is intricately tied to the counterions and the free energy of this interaction must be accounted for in any realistic RNA simulations. Extending a tight-binding model published previously, in this paper we investigate the fundamental structure of charges arising from the interaction between small functional RNA molecules and divalent ions such as Mg2+ that are especially conducive to stabilizing folded conformations. The characteristic nature of these charges is utilized to construct a discretely connected energy landscape that is then traversed via a novel application of a deterministic graph search technique. This search method can be incorporated into larger simulations of small RNA molecules and provides a fast and accurate way to calculate the free energy arising from the interactions between an RNA and divalent counterions. The utility of this algorithm is demonstrated within a fully atomistic Monte Carlo simulation of the P4-P6 domain of the Tetrahymena group I intron, in which it is shown that the counterion-mediated free energy conclusively directs folding into a compact structure.
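The abstract describes traversing a discretely connected energy landscape with a deterministic graph search. The sketch below shows a generic lowest-cost traversal (Dijkstra / uniform-cost search) over such a graph; the toy nodes and edge weights are purely illustrative and are not the RNA-counterion energies of the paper.

```python
# Generic deterministic graph search over a discretely connected landscape.
import heapq

def lowest_cost_path(graph, start, goal):
    """graph: dict node -> list of (neighbor, nonnegative edge cost)."""
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            path = [node]
            while node in prev:
                node = prev[node]
                path.append(node)
            return d, path[::-1]
        if d > dist.get(node, float("inf")):
            continue
        for nbr, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(heap, (nd, nbr))
    return float("inf"), []

# Toy landscape: nodes stand in for charge configurations, weights for barriers
toy = {"A": [("B", 1.0), ("C", 4.0)], "B": [("C", 1.5)], "C": []}
print(lowest_cost_path(toy, "A", "C"))  # (2.5, ['A', 'B', 'C'])
```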
Sharma, Diksha; Badano, Aldo
2013-03-01
hybridMANTIS is a Monte Carlo package for modeling indirect x-ray imagers using columnar geometry based on a hybrid concept that maximizes the utilization of available CPU and graphics processing unit processors in a workstation. The authors compare hybridMANTIS x-ray response simulations to previously published MANTIS and experimental data for four cesium iodide scintillator screens. These screens have a variety of reflective and absorptive surfaces with different thicknesses. The authors analyze hybridMANTIS results in terms of modulation transfer function and calculate the root mean square difference and Swank factors from simulated and experimental results. The comparison suggests that hybridMANTIS matches the experimental data better than MANTIS, especially at high spatial frequencies and for the thicker screens. hybridMANTIS simulations are much faster than MANTIS, with speed-ups of up to 5260. hybridMANTIS is a useful tool for improved description and optimization of image acquisition stages in medical imaging systems and for modeling the forward problem in iterative reconstruction algorithms.
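The Swank factor quoted above is conventionally computed from moments of the simulated pulse-height distribution, I = M1^2/(M0·M2). The sketch below shows that calculation on synthetic data; the distribution used is an assumption for illustration only.

```python
# Swank factor from a pulse-height distribution, I = M1^2 / (M0 * M2).
import numpy as np

def swank_factor(pulse_heights):
    x = np.asarray(pulse_heights, dtype=float)
    m1 = np.mean(x)
    m2 = np.mean(x ** 2)
    return m1 ** 2 / m2   # M0 cancels when using per-event sample moments

rng = np.random.default_rng(0)
heights = rng.gamma(shape=20.0, scale=50.0, size=100_000)  # synthetic spread
print(swank_factor(heights))  # ~0.95 for this distribution
```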
Guo, Ruiying; Nendel, Claas; Rahn, Clive; Jiang, Chunguang; Chen, Qing
2010-06-01
Vegetable production in China is associated with high inputs of nitrogen, posing a risk of losses to the environment. Organic matter mineralisation is a considerable source of nitrogen (N) which is hard to quantify. In a two-year greenhouse cucumber experiment with different N treatments in North China, non-observed pathways of the N cycle were estimated using the EU-Rotate_N simulation model. EU-Rotate_N was calibrated against crop dry matter and soil moisture data to predict crop N uptake, soil mineral N contents, N mineralisation and N loss. Crop N uptake (Modelling Efficiencies (ME) between 0.80 and 0.92) and soil mineral N contents in different soil layers (ME between 0.24 and 0.74) were satisfactorily simulated by the model for all N treatments except for the traditional N management. The model predicted high N mineralisation rates and N leaching losses, suggesting that previously published estimates of N leaching for these production systems strongly underestimated the mineralisation of N from organic matter. Copyright 2010 Elsevier Ltd. All rights reserved.
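The modelling efficiency (ME) statistic cited above is commonly taken in the Nash-Sutcliffe form; assuming that convention, the sketch below shows how it is computed. The sample observation and simulation values are synthetic.

```python
# Modelling efficiency (Nash-Sutcliffe form):
# ME = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2).
# 1 is a perfect match; values below 0 mean the model is worse than the mean.
import numpy as np

def modelling_efficiency(observed, simulated):
    obs = np.asarray(observed, dtype=float)
    sim = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

obs = np.array([120.0, 150.0, 90.0, 200.0])   # e.g., crop N uptake, kg/ha
sim = np.array([110.0, 160.0, 100.0, 190.0])
print(modelling_efficiency(obs, sim))  # ~0.94 for these numbers
```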
The effects of tapering and artery wall stiffness on treatments for Coarctation of the Aorta.
Pathirana, Dilan; Johnston, Barbara; Johnston, Peter
2017-11-01
Coarctation of the Aorta is a congenital narrowing of the aorta. Two commonly used treatments are resection and end-to-end anastomosis, and stent placement. We simulate blood flow through one-dimensional models of aortas. Different artery stiffnesses, due to the treatments, are included in our model and used to compare blood flow properties in the treated aortas. We expand our previously published model to include the natural tapering of aortas. We examine changes in aorta wall radius, blood pressure and blood flow velocity, and find that, of the two treatments, resection and end-to-end anastomosis more closely matches healthy aortas.
A fast, parallel algorithm for distant-dependent calculation of crystal properties
NASA Astrophysics Data System (ADS)
Stein, Matthew
2017-12-01
A fast, parallel algorithm for distant-dependent calculation and simulation of crystal properties is presented along with speedup results and methods of application. An illustrative example is used to compute the Lennard-Jones lattice constants up to 32 significant figures for 4 ≤ p ≤ 30 in the simple cubic, face-centered cubic, body-centered cubic, hexagonal-close-pack, and diamond lattices. In most cases, the known precision of these constants is more than doubled, and in some cases, corrected from previously published figures. The tools and strategies to make this computation possible are detailed along with application to other potentials, including those that model defects.
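To make concrete what quantity a Lennard-Jones lattice constant is, the sketch below evaluates the simple-cubic lattice sum by brute-force truncation. This only illustrates the definition; it is nowhere near the speed or 32-digit precision of the parallel algorithm described above.

```python
# Brute-force simple-cubic lattice sum:
# L_p = sum over nonzero integer (i, j, k) of (i^2 + j^2 + k^2)^(-p/2),
# truncated at a finite shell rmax.
import itertools

def lattice_sum_sc(p, rmax=25):
    total = 0.0
    for i, j, k in itertools.product(range(-rmax, rmax + 1), repeat=3):
        r2 = i * i + j * j + k * k
        if r2:
            total += r2 ** (-p / 2.0)
    return total

print(lattice_sum_sc(12))  # ~6.202; converges quickly for large p
print(lattice_sum_sc(6))   # ~8.40; truncation leaves it slightly low
```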
Solar Energetic Particle Spectrum on 2006 December 13 Determined by IceTop
NASA Astrophysics Data System (ADS)
Abbasi, R.; Ackermann, M.; Adams, J.; Ahlers, M.; Ahrens, J.; Andeen, K.; Auffenberg, J.; Bai, X.; Baker, M.; Baret, B.; Barwick, S. W.; Bay, R.; Bazo Alba, J. L.; Beattie, K.; Becka, T.; Becker, J. K.; Becker, K. H.; Berghaus, P.; Berley, D.; Bernardini, E.; Bertrand, D.; Besson, D. Z.; Bieber, J. W.; Blaufuss, E.; Boersma, D. J.; Bohm, C.; Bolmont, J.; Böser, S.; Botner, O.; Braun, J.; Breder, D.; Burgess, T.; Castermans, T.; Chirkin, D.; Christy, B.; Clem, J.; Cowen, D. F.; D'Agostino, M. V.; Danninger, M.; Davour, A.; Day, C. T.; De Clercq, C.; Demirörs, L.; Depaepe, O.; Descamps, F.; Desiati, P.; de Vries-Uiterweerd, G.; DeYoung, T.; Diaz-Velez, J. C.; Dreyer, J.; Dumm, J. P.; Duvoort, M. R.; Edwards, W. R.; Ehrlich, R.; Eisch, J.; Ellsworth, R. W.; Engdegaard, O.; Euler, S.; Evenson, P. A.; Fadiran, O.; Fazely, A. R.; Filimonov, K.; Finley, C.; Foerster, M. M.; Fox, B. D.; Franckowiak, A.; Franke, R.; Gaisser, T. K.; Gallagher, J.; Ganugapati, R.; Gerhardt, L.; Gladstone, L.; Goldschmidt, A.; Goodman, J. A.; Gozzini, R.; Grant, D.; Griesel, T.; Gross, A.; Grullon, S.; Gunasingha, R. M.; Gurtner, M.; Ha, C.; Hallgren, A.; Halzen, F.; Han, K.; Hanson, K.; Hardtke, D.; Hardtke, R.; Hasegawa, Y.; Heise, J.; Helbing, K.; Hellwig, M.; Herquet, P.; Hickford, S.; Hill, G. C.; Hoffman, K. D.; Hoshina, K.; Hubert, D.; Hülss, J. P.; Hulth, P. O.; Hultqvist, K.; Hundertmark, S.; Imlay, R. L.; Inaba, M.; Ishihara, A.; Jacobsen, J.; Japaridze, G. S.; Johansson, H.; Joseph, J. M.; Kampert, K. H.; Kappes, A.; Karg, T.; Karle, A.; Kawai, H.; Kelley, J. L.; Kiryluk, J.; Kislat, F.; Klein, S. R.; Klepser, S.; Kohnen, G.; Kolanoski, H.; Köpke, L.; Kowalski, M.; Kowarik, T.; Krasberg, M.; Kuehn, K.; Kuwabara, T.; Labare, M.; Laihem, K.; Landsman, H.; Lauer, R.; Leich, H.; Leier, D.; Lucke, A.; Lundberg, J.; Lünemann, J.; Madsen, J.; Maruyama, R.; Mase, K.; Matis, H. S.; McParland, C. P.; Meagher, K.; Meli, A.; Merck, M.; Messarius, T.; Mészáros, P.; Miyamoto, H.; Mohr, A.; Montaruli, T.; Morse, R.; Movit, S. M.; Münich, K.; Nahnhauer, R.; Nam, J. W.; Niessen, P.; Nygren, D. R.; Odrowski, S.; Olivas, A.; Olivo, M.; Ono, M.; Panknin, S.; Patton, S.; Pérez de los Heros, C.; Petrovic, J.; Piegsa, A.; Pieloth, D.; Pohl, A. C.; Porrata, R.; Potthoff, N.; Pretz, J.; Price, P. B.; Przybylski, G. T.; Pyle, R.; Rawlins, K.; Razzaque, S.; Redl, P.; Resconi, E.; Rhode, W.; Ribordy, M.; Rizzo, A.; Robbins, W. J.; Rodrigues, J.; Roth, P.; Rothmaier, F.; Rott, C.; Roucelle, C.; Rutledge, D.; Ryckbosch, D.; Sander, H. G.; Sarkar, S.; Satalecka, K.; Schlenstedt, S.; Schmidt, T.; Schneider, D.; Schultz, O.; Seckel, D.; Semburg, B.; Seo, S. H.; Sestayo, Y.; Seunarine, S.; Silvestri, A.; Smith, A. J.; Song, C.; Spiczak, G. M.; Spiering, C.; Stanev, T.; Stezelberger, T.; Stokstad, R. G.; Stoufer, M. C.; Stoyanov, S.; Strahler, E. A.; Straszheim, T.; Sulanke, K. H.; Sullivan, G. W.; Swillens, Q.; Taboada, I.; Tarasova, O.; Tepe, A.; Ter-Antonyan, S.; Tilav, S.; Tluczykont, M.; Toale, P. A.; Tosi, D.; Turcan, D.; van Eijndhoven, N.; Vandenbroucke, J.; Van Overloop, A.; Viscomi, V.; Vogt, C.; Voigt, B.; Walck, C.; Waldenmaier, T.; Waldmann, H.; Walter, M.; Wendt, C.; Westerhoff, S.; Whitehorn, N.; Wiebusch, C. H.; Wiedemann, C.; Wikström, G.; Williams, D. R.; Wischnewski, R.; Wissing, H.; Woschnagg, K.; Xu, X. W.; Yodh, G.; Yoshida, S.
2008-12-01
On 2006 December 13 the IceTop air shower array at the South Pole detected a major solar particle event. By numerically simulating the response of the IceTop tanks, which are thick Cerenkov detectors with multiple thresholds deployed at high altitude with no geomagnetic cutoff, we determined the particle energy spectrum in the energy range 0.6-7.6 GeV. This is the first such spectral measurement using a single instrument with a well-defined viewing direction. We compare the IceTop spectrum and its time evolution with previously published results and outline plans for improved resolution of future solar particle spectra.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Glenzer, Siegfried
We have developed an experimental platform for the National Ignition Facility (NIF) that uses spherically converging shock waves for absolute equation of state (EOS) measurements along the principal Hugoniot. In this Letter we present radiographic compression measurements for polystyrene that were taken at shock pressures reaching 60 Mbar (6 TPa). This significantly exceeds previously published results obtained on the Nova laser [Cauble et al., Phys. Rev. Lett. 80, 1248 (1998)] with substantially improved precision, allowing us to discriminate between different EOS models. We find excellent agreement with Kohn-Sham Density Functional Theory based molecular dynamics simulations.
Smans, Kristien; Zoetelief, Johannes; Verbrugge, Beatrijs; Haeck, Wim; Struelens, Lara; Vanhavere, Filip; Bosmans, Hilde
2010-05-01
The purpose of this study was to compare and validate three methods to simulate radiographic image detectors with the Monte Carlo software MCNP/MCNPX in a time efficient way. The first detector model was the standard semideterministic radiography tally, which has been used in previous image simulation studies. Next to the radiography tally two alternative stochastic detector models were developed: A perfect energy integrating detector and a detector based on the energy absorbed in the detector material. Validation of three image detector models was performed by comparing calculated scatter-to-primary ratios (SPRs) with the published and experimentally acquired SPR values. For mammographic applications, SPRs computed with the radiography tally were up to 44% larger than the published results, while the SPRs computed with the perfect energy integrating detectors and the blur-free absorbed energy detector model were, on the average, 0.3% (ranging from -3% to 3%) and 0.4% (ranging from -5% to 5%) lower, respectively. For general radiography applications, the radiography tally overestimated the measured SPR by as much as 46%. The SPRs calculated with the perfect energy integrating detectors were, on the average, 4.7% (ranging from -5.3% to -4%) lower than the measured SPRs, whereas for the blur-free absorbed energy detector model, the calculated SPRs were, on the average, 1.3% (ranging from -0.1% to 2.4%) larger than the measured SPRs. For mammographic applications, both the perfect energy integrating detector model and the blur-free energy absorbing detector model can be used to simulate image detectors, whereas for conventional x-ray imaging using higher energies, the blur-free energy absorbing detector model is the most appropriate image detector model. The radiography tally overestimates the scattered part and should therefore not be used to simulate radiographic image detectors.
Retrieval of volcanic ash height from satellite-based infrared measurements
NASA Astrophysics Data System (ADS)
Zhu, Lin; Li, Jun; Zhao, Yingying; Gong, He; Li, Wenjie
2017-05-01
A new algorithm for retrieving volcanic ash cloud height from satellite-based measurements is presented. This algorithm, which was developed in preparation for China's next-generation meteorological satellite (FY-4), is based on volcanic ash microphysical property simulation and statistical optimal estimation theory. The MSG satellite's main payload, a 12-channel Spinning Enhanced Visible and Infrared Imager, was used as proxy data to test this new algorithm. A series of eruptions of Iceland's Eyjafjallajökull volcano during April to May 2010 and the Puyehue-Cordón Caulle volcanic complex eruption in the Chilean Andes on 16 June 2011 were selected as two typical cases for evaluating the algorithm under various meteorological backgrounds. Independent volcanic ash simulation training samples and satellite-based Cloud-Aerosol Lidar with Orthogonal Polarization data were used as validation data. It is demonstrated that the statistically based volcanic ash height algorithm is able to rapidly retrieve volcanic ash heights, globally. The retrieved ash heights show comparable accuracy with both the independent training data and the lidar measurements, which is consistent with previous studies. However, under complicated backgrounds with multiple vertical layers, underlying stratus clouds tend to have detrimental effects on the final retrieval accuracy. This remains an unresolved problem, as it does for many other previously published methods using passive satellite sensors. Compared with previous studies, the FY-4 ash height algorithm is independent of simultaneous atmospheric profiles, providing a flexible way to estimate volcanic ash height using passive satellite infrared measurements.
Dynamic Biological Functioning Important for Simulating and Stabilizing Ocean Biogeochemistry
NASA Astrophysics Data System (ADS)
Buchanan, P. J.; Matear, R. J.; Chase, Z.; Phipps, S. J.; Bindoff, N. L.
2018-04-01
The biogeochemistry of the ocean exerts a strong influence on the climate by modulating atmospheric greenhouse gases. In turn, ocean biogeochemistry depends on numerous physical and biological processes that change over space and time. Accurately simulating these processes is fundamental for accurately simulating the ocean's role within the climate. However, our simulation of these processes is often simplistic, despite a growing understanding of underlying biological dynamics. Here we explore how new parameterizations of biological processes affect simulated biogeochemical properties in a global ocean model. We combine 6 different physical realizations with 6 different biogeochemical parameterizations (36 unique ocean states). The biogeochemical parameterizations, all previously published, aim to more accurately represent the response of ocean biology to changing physical conditions. We make three major findings. First, oxygen, carbon, alkalinity, and phosphate fields are more sensitive to changes in the ocean's physical state. Only nitrate is more sensitive to changes in biological processes, and we suggest that assessment protocols for ocean biogeochemical models formally include the marine nitrogen cycle to assess their performance. Second, we show that dynamic variations in the production, remineralization, and stoichiometry of organic matter in response to changing environmental conditions benefit the simulation of ocean biogeochemistry. Third, dynamic biological functioning reduces the sensitivity of biogeochemical properties to physical change. Carbon and nitrogen inventories were 50% and 20% less sensitive to physical changes, respectively, in simulations that incorporated dynamic biological functioning. These results highlight the importance of a dynamic biology for ocean properties and climate.
Laparoscopic skills maintenance: a randomized trial of virtual reality and box trainer simulators.
Khan, Montaha W; Lin, Diwei; Marlow, Nicholas; Altree, Meryl; Babidge, Wendy; Field, John; Hewett, Peter; Maddern, Guy
2014-01-01
A number of simulators have been developed to teach surgical trainees the basic skills required to effectively perform laparoscopic surgery; however, consideration needs to be given to how well the skills taught by these simulators are maintained over time. This study compared the maintenance of laparoscopic skills learned using box trainer and virtual reality simulators. Participants were randomly allocated to be trained and assessed using either the Society of American Gastrointestinal Endoscopic Surgeons Fundamentals of Laparoscopic Surgery (FLS) simulator or the Surgical Science virtual reality simulator. Once participants achieved a predetermined level of proficiency, they were assessed 1, 3, and 6 months later. At each assessment, participants were given 2 practice attempts and assessed on their third attempt. The study was conducted through the Simulated Surgical Skills Program that was held at the Royal Australasian College of Surgeons, Adelaide, Australia. Overall, 26 participants (13 per group) completed the training and all follow-up assessments. There were no significant differences between simulation-trained cohorts for age, gender, training level, and the number of surgeries previously performed, observed, or assisted. Scores for the FLS-trained participants did not significantly change over the follow-up period. Scores for LapSim-trained participants significantly deteriorated at the first 2 follow-up points (1 and 3 months) (p < 0.050), but returned to be near initial levels by the final follow-up (6 months). This research showed that basic laparoscopic skills learned using the FLS simulator were maintained more consistently than those learned on the LapSim simulator. However, by the final follow-up, both simulator-trained cohorts had skill levels that were not significantly different to those at proficiency after the initial training period. Copyright © 2014 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
Pater, P; Bernal, M; Naqa, I El; Seuntjens, J
2012-06-01
To validate and scrutinize published DNA strand break data with Geant4-DNA and a probabilistic model. To study the impact of source size, electronic equilibrium and secondary electron tracking cutoff on direct relative biological effectiveness (DRBE). Geant4 (v4.9.5) was used to simulate a cylindrical region of interest (ROI) with r = 15 nm and length = 1.05 mm, in a slab of liquid water of 1.06 g/cm 3 density. The ROI was irradiated with mono-energetic photons, with a uniformly distributed volumetric isotropic source (0.28, 1.5 keV) or a plane beam (0.662, 1.25 MeV), of variable size. Electrons were tracked down to 50 or 10 eV, with G4-DNA processes and energy transfer greater than 10.79 eV was scored. Based on volume ratios, each scored event had a 0.0388 probability of happening on either DNA helix (break). Clusters of at least one break on each DNA helix within 3.4 nm were found using a DBSCAN algorithm and categorized as double strand breaks (DSB). All other events were categorized as single strand breaks (SSB). Geant4-DNA is able to reproduce strand break yields previously published. Homogeneous irradiation conditions should be present throughout the ROI for DRBE comparisons. SSB yields seem slightly dependent on the primary photon energy. DRBEs show a significant increasing trend for lower energy incident photons. A lower electron cutoff produces higher SSB yields, but decreases the SSB/DSB yields ratio. The probabilistic and geometrical DNA models can predict equivalent results. Using Geant4, we were able to reproduce previously published results on the direct strand break yields of photon and study the importance of irradiation conditions. We also show an ascending trend for DRBE with lower incident photon energies. A probabilistic model coupled with track structure analysis can be used to simulate strand break yields. NSERC, CIHR. © 2012 American Association of Physicists in Medicine.
Reis, C Q M; Nicolucci, P
2016-02-01
The purpose of this study was to investigate Monte Carlo-based perturbation and beam quality correction factors for ionization chambers in photon beams using a time-saving strategy with the PENELOPE code. Simulations calculating absorbed doses to water using the full spectra of photon beams impinging on the whole water phantom were compared with simulations using a phase-space file previously stored around the point of interest. The widely used NE2571 ionization chamber was modeled with PENELOPE using data from the literature in order to calculate absorbed doses to the air cavity of the chamber. Absorbed doses to water at the reference depth were also calculated to provide the perturbation and beam quality correction factors for that chamber in high-energy photon beams. Results obtained in this study show that simulations using appropriately stored phase-space files can be up to ten times shorter than those using a full photon spectrum in the input file. Values of kQ and its components for the NE2571 ionization chamber showed good agreement with published values in the literature and are provided with typical statistical uncertainties of 0.2%. Comparisons to kQ values published in current dosimetry protocols such as AAPM TG-51 and IAEA TRS-398 showed maximum percentage differences of 0.1% and 0.6%, respectively. The proposed strategy presented a significant efficiency gain and can be applied to a variety of ionization chambers and clinical photon beams. Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
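For readers unfamiliar with how Monte Carlo codes arrive at kQ, the factor is conventionally formed from the ratio of water-to-chamber dose ratios at the user beam quality Q and the reference quality Q0 (typically 60Co). The sketch below states only that bookkeeping; the dose values are placeholders, not results from this study.

```python
def beam_quality_factor(dw_q, dch_q, dw_q0, dch_q0):
    """kQ = [D_water / D_chamber]_Q / [D_water / D_chamber]_Q0,
    the conventional Monte Carlo route to the beam quality correction factor."""
    return (dw_q / dch_q) / (dw_q0 / dch_q0)

# hypothetical doses per history (Gy) at a megavoltage quality Q and at 60Co (Q0)
print(beam_quality_factor(1.02e-11, 1.05e-11, 1.10e-11, 1.12e-11))
```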
Visible-to-SWIR wavelength variation of skylight polarization
NASA Astrophysics Data System (ADS)
Dahl, Laura M.; Shaw, Joseph A.
2015-09-01
Knowledge of the polarization state of natural skylight is important to growing applications using polarimetric sensing. We previously published measurements and simulations illustrating the complex interaction between atmospheric and surface properties in determining the spectrum of skylight polarization from the visible to the near-infrared (1 μm). Those results showed that skylight polarization can trend upward or downward, or even have unusual spectral discontinuities that arise because of sharp features in the underlying surface reflectance. The specific spectrum observed in a given case depended strongly on atmospheric and surface properties that varied with wavelength. In the previous study, the model was fed with actual measurements of highly variable aerosol and surface properties from locations around the world. Results, however, were limited to wavelengths below 1 μm because of a lack of available satellite surface reflectance data at longer wavelengths. We now report measurement-driven simulations of skylight polarization from 350 nm to 2500 nm in the short-wave infrared (SWIR) using hand-held spectrometer measurements of spectral surface reflectance. The SWIR degree of linear polarization (DoLP) was found to be highly dependent on the aerosol size distribution and on the resulting relationship between the aerosol and Rayleigh optical depths. Unique polarization features in the modeled results were attributed to the surface reflectance, and the skylight DoLP generally decreased as surface reflectance increased.
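As a reminder of the quantity being modeled, the degree of linear polarization follows directly from the Stokes parameters. The sketch below is generic and not tied to the authors' radiative-transfer model; the Stokes values are hypothetical.

```python
import numpy as np

def degree_of_linear_polarization(I, Q, U):
    """DoLP = sqrt(Q^2 + U^2) / I for a Stokes vector (I, Q, U, V)."""
    return np.sqrt(Q**2 + U**2) / I

# hypothetical Stokes parameters for a clear-sky pixel
print(degree_of_linear_polarization(1.0, 0.35, 0.12))
```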
Fourier Transform Spectroscopy of the A ³Π–X ³Σ⁻ Transition of OH⁺
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hodges, James N.; Bernath, Peter F.
The OH⁺ ion is of critical importance to the chemistry in the interstellar medium and is a prerequisite for the generation of more complex chemical species. Submillimeter and ultraviolet observations rely on high-quality laboratory spectra. Recent measurements of the fundamental vibrational band and previously unanalyzed Fourier transform spectra of the near-ultraviolet A ³Π–X ³Σ⁻ electronic spectrum, acquired at the National Solar Observatory at Kitt Peak in 1989, provide an excellent opportunity to perform a global fit of the available data. These new optical data are approximately four times more precise than the previous values. The fit to the new data provides updated molecular constants, which are necessary to predict the OH⁺ transition frequencies accurately to support future observations. These new constants are the first published using the modern effective Hamiltonian for a linear molecule. They allow for easy simulation of transition frequencies and spectra using the PGOPHER program. The new constants improve simulations of higher-J infrared transitions and represent an improvement of an order of magnitude for some constants pertaining to the optical transitions.
McFadden, Emily; Stevens, Richard; Glasziou, Paul; Perera, Rafael
2015-01-01
To estimate numbers affected by a recent change in UK guidelines for statin use in primary prevention of cardiovascular disease. We modelled cholesterol ratio over time using a sample of 45,151 men (≥40 years) and 36,168 women (≥55 years) in 2006, without statin treatment or previous cardiovascular disease, from the Clinical Practice Research Datalink. Using simulation methods, we estimated numbers indicated for new statin treatment, if cholesterol was measured annually and used in the QRISK2 CVD risk calculator, using the previous 20% and newly recommended 10% thresholds. We estimate that 58% of men and 55% of women would be indicated for treatment by five years and 71% of men and 73% of women by ten years using the 20% threshold. Using the proposed threshold of 10%, 84% of men and 90% of women would be indicated for treatment by five years and 92% of men and 98% of women by ten years. The proposed change of risk threshold from 20% to 10% would result in the substantial majority of those recommended for cholesterol testing being indicated for statin treatment. Implications depend on the value of statins in those at low to medium risk, and on whether there are harms. Copyright © 2014. Published by Elsevier Inc.
Olives, Casey; Valadez, Joseph J; Pagano, Marcello
2014-03-01
To assess the bias incurred when curtailment of Lot Quality Assurance Sampling (LQAS) is ignored, to present unbiased estimators, to consider the impact of cluster sampling by simulation and to apply our method to published polio immunization data from Nigeria. We present estimators of coverage when using two kinds of curtailed LQAS strategies: semicurtailed and curtailed. We study the proposed estimators with independent and clustered data using three field-tested LQAS designs for assessing polio vaccination coverage, with samples of size 60 and decision rules of 9, 21 and 33, and compare them to biased maximum likelihood estimators. Lastly, we present estimates of polio vaccination coverage from previously published data in 20 local government authorities (LGAs) from five Nigerian states. Simulations illustrate substantial bias if one ignores the curtailed sampling design. The proposed estimators show no bias. Clustering does not affect the bias of these estimators. Across simulations, standard errors show signs of inflation as clustering increases. Neither sampling strategy nor LQAS design influences estimates of polio vaccination coverage in the 20 Nigerian LGAs. When coverage is low, semicurtailed LQAS strategies considerably reduce the sample size required to make a decision. Curtailed LQAS designs further reduce the sample size when coverage is high. The results presented dispel the misconception that curtailed LQAS data are unsuitable for estimation. These findings augment the utility of LQAS as a tool for monitoring vaccination efforts by demonstrating that unbiased estimation using curtailed designs is not only possible but that these designs also reduce the required sample size. © 2014 John Wiley & Sons Ltd.
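To make the curtailment bias concrete, the toy simulation below draws curtailed LQAS samples (sampling stops as soon as the pass/fail classification is already determined) and compares the naive proportion estimator with the true coverage. The sample size of 60 and decision rule of 33 mirror one of the designs mentioned above; the stopping rule and everything else are generic illustrations, not the paper's exact estimators.

```python
import numpy as np

rng = np.random.default_rng(1)

def curtailed_lqas_sample(p, n=60, d=33):
    """Draw one curtailed LQAS sample: stop once the classification is fixed,
    i.e. successes reach d (pass guaranteed) or failures exceed n - d (pass
    impossible). Returns (successes, number observed)."""
    successes = failures = 0
    while successes < d and failures < n - d + 1:
        if rng.random() < p:
            successes += 1
        else:
            failures += 1
    return successes, successes + failures

p_true = 0.75
sims = [curtailed_lqas_sample(p_true) for _ in range(20000)]
naive = np.array([s / m for s, m in sims])   # proportion among observed draws
print(f"naive mean estimate = {naive.mean():.3f} vs true coverage = {p_true}")
```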
Bouazza, Naïm; Cressey, Tim R; Foissac, Frantz; Bienczak, Andrzej; Denti, Paolo; McIlleron, Helen; Burger, David; Penazzato, Martina; Lallemant, Marc; Capparelli, Edmund V; Treluyer, Jean-Marc; Urien, Saïk
2017-02-01
Child-friendly, low-cost, solid, oral fixed-dose combinations (FDCs) of efavirenz with lamivudine and abacavir are urgently needed to improve clinical management and drug adherence for children. Data were pooled from several clinical trials and therapeutic drug monitoring datasets from different countries. The number of children/observations was 505/3667 for efavirenz. Population pharmacokinetic analyses were performed using a non-linear mixed-effects approach. For abacavir and lamivudine, data from 187 and 920 subjects were available (population pharmacokinetic models previously published). Efavirenz/lamivudine/abacavir FDC strength options assessed were (I) 150/75/150, (II) 120/60/120 and (III) 200/100/200 mg. Monte Carlo simulations of the different FDC strengths were performed to determine the optimal dose within each of the WHO weight bands based on drug efficacy/safety targets. The probability of being within the target efavirenz concentration range 12 h post-dose (1-4 mg/L) varied between 56% and 60%, regardless of FDC option. Option I provided a best possible balance between efavirenz treatment failure and toxicity risks. For abacavir and lamivudine, simulations showed that for option I >75% of subjects were above the efficacy target. According to simulations, a paediatric efavirenz/lamivudine/abacavir fixed-dose formulation of 150 mg efavirenz, 75 mg lamivudine and 150 mg abacavir provided the most effective and safe concentrations across WHO weight bands, with the flexibility of dosage required across the paediatric population. © The Author 2016. Published by Oxford University Press on behalf of the British Society for Antimicrobial Chemotherapy. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
A Monte Carlo analysis of breast screening randomized trials.
Zamora, Luis I; Forastero, Cristina; Guirado, Damián; Lallena, Antonio M
2016-12-01
To analyze breast screening randomized trials with a Monte Carlo simulation tool. A simulation tool previously developed to model breast screening programmes was adapted for this purpose. The history of women participating in the trials was simulated, including a model for survival after local treatment of invasive cancers. Distributions of the time gained by screening detection relative to symptomatic detection and the overall screening sensitivity were used as inputs. Several randomized controlled trials were simulated. Except for the age range of the women involved, all simulations used the same population characteristics, which permitted analysis of the trials' external validity. The relative risks obtained were compared to those quoted for the trials, whose internal validity was addressed by further investigating the reasons for the disagreements observed. The Monte Carlo simulations produce results that are in good agreement with most of the randomized trials analyzed, thus indicating their methodological quality and external validity. A reduction of breast cancer mortality around 20% appears to be a reasonable value according to the results of the trials that are methodologically correct. Discrepancies observed with the Canada I and II trials may be attributed to low mammography quality and some methodological problems. The Kopparberg trial appears to show low methodological quality. Monte Carlo simulations are a powerful tool to investigate breast screening controlled randomized trials, helping to establish those whose results are reliable enough to be extrapolated to other populations, to design trial strategies and, eventually, to adapt them during their development. Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
Evans, Cecile B; Mixon, Diana K
2015-12-01
The purpose of this paper was to assess undergraduate nursing students' pain knowledge after participation in a simulation scenario. The Knowledge and Attitudes Survey Regarding Pain (KASRP) was used to assess pain knowledge. In addition, reflective questions related to the simulation were examined. Student preferences for education method and reactions to the simulation (SIM) were described. Undergraduate nursing students' knowledge of pain management is reported as inadequate. An emerging pedagogy used to educate undergraduate nurses in a safe, controlled environment is simulation. Literature reports of simulation used to educate students about pain management are limited. As part of the undergraduate nursing students' clinical coursework, a post-operative pain management simulation (the SIM) was developed. Students were required to assess pain levels and then manage the pain for a late adolescent male whose mother's fear of addiction was a barrier to pain management. The students completed an anonymous written survey that included selected questions from the KASRP and an evaluation of the SIM experience. The students' mean KASRP percent correct was 70.4% ± 8.6%. Students scored best on items specific to pain assessment and worst on items specific to opiate equivalents and decisions on PRN orders. The students' overall KASRP score post simulation was slightly better than in previous studies of nursing students. These results suggest that educators should consider simulations to educate about pain assessment and patient/family education. Future pain simulations should include more opportunities for students to choose appropriate pain medications when provided with PRN orders. Copyright © 2015 American Society for Pain Management Nursing. Published by Elsevier Inc. All rights reserved.
Malmström, B; Nohlert, E; Ewald, U; Widarsson, M
2017-08-01
The use of simulation-based team training in neonatal resuscitation has increased in Sweden during the last decade, but no formal evaluation of this training method has been performed. This study evaluated the effect of simulation-based team training on the self-assessed ability of personnel to perform neonatal resuscitation. We evaluated a full-day simulation-based team training course in neonatal resuscitation, by administering a questionnaire to 110 physicians, nurses and midwives before and after the training period. The questionnaire focused on four important domains: communication, leadership, confidence and technical skills. The study was carried out in Sweden from 2005 to 2007. The response rate was 84%. Improvements in the participants' self-assessed ability to perform neonatal resuscitation were seen in all four domains after training (p < 0.001). Professionally inexperienced personnel showed a significant improvement in the technical skills domain compared to experienced personnel (p = 0.001). No differences were seen between professions or time since training in any of the four domains. Personnel with less previous experience with neonatal resuscitation showed improved confidence (p = 0.007) and technical skills (p = 0.003). A full-day course on simulation-based team training with video-supported debriefing improved the participants' self-assessed ability to perform neonatal resuscitation. ©2017 Foundation Acta Paediatrica. Published by John Wiley & Sons Ltd.
NASA Technical Reports Server (NTRS)
Lewellen, D. C.; Lewellen, W. S.
2001-01-01
High-resolution numerical large-eddy simulations of the near wake of a B757 including simplified NOx and HOx chemistry were performed to explore the effects of dynamics on chemistry in wakes of ages from a few seconds to several minutes. Dilution plays an important basic role in the NOx-O3 chemistry in the wake, while a more interesting interaction between the chemistry and dynamics occurs for the HOx species. These simulation results are compared with published measurements of OH and HO2 within a B757 wake under cruise conditions in the upper troposphere taken during the Subsonic Aircraft Contrail and Cloud Effects Special Study (SUCCESS) mission in May 1996. The simulation provides a much finer grained representation of the chemistry and dynamics of the early wake than is possible from the 1 s data samples taken in situ. The comparison suggests that the previously reported discrepancy of up to a factor of 20 - 50 between the SUCCESS measurements of the [HO2]/[OH] ratio and that predicted by simplified theoretical computations is due to the combined effects of large mixing rates around the wake plume edges and averaging over volumes containing large species fluctuations. The results demonstrate the feasibility of using three-dimensional unsteady large-eddy simulations with coupled chemistry to study such phenomena.
Deviation from equilibrium conditions in molecular dynamic simulations of homogeneous nucleation.
Halonen, Roope; Zapadinsky, Evgeni; Vehkamäki, Hanna
2018-04-28
We present a comparison between Monte Carlo (MC) results for homogeneous vapour-liquid nucleation of Lennard-Jones clusters and previously published values from molecular dynamics (MD) simulations. Both the MC and MD methods sample real cluster configuration distributions. In the MD simulations, the extent of the temperature fluctuation is usually controlled with an artificial thermostat rather than with a more realistic carrier gas. In this study, we consider not only a primary velocity-scaling thermostat but also the Nosé-Hoover, Berendsen, and stochastic Langevin thermostat methods. The nucleation rates based on a kinetic scheme and the canonical MC calculation serve as a point of reference since they by definition describe an equilibrated system. The studied temperature range is from T = 0.3 to 0.65 ϵ/k. The kinetic scheme reproduces well the isothermal nucleation rates obtained by Wedekind et al. [J. Chem. Phys. 127, 064501 (2007)] using MD simulations with carrier gas. The nucleation rates obtained by artificially thermostatted MD simulations are consistently lower than the reference nucleation rates based on MC calculations. The discrepancy increases up to several orders of magnitude when the density of the nucleating vapour decreases. At low temperatures, the difference from the MC-based reference nucleation rates in some cases exceeds the maximal nonisothermal effect predicted by the classical theory of Feder et al. [Adv. Phys. 15, 111 (1966)].
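Of the thermostats listed above, the Berendsen scheme is the simplest to state: velocities are rescaled each step by a factor that relaxes the instantaneous temperature toward the bath temperature. The sketch below shows only that rescaling factor, as a generic illustration; it is not the thermostat implementation used in the MD studies being compared, and the reduced-unit numbers are invented.

```python
import numpy as np

def berendsen_scale_factor(T_inst, T_bath, dt, tau):
    """Velocity rescaling factor of the Berendsen weak-coupling thermostat:
    lambda = sqrt(1 + (dt/tau) * (T_bath/T_inst - 1))."""
    return np.sqrt(1.0 + (dt / tau) * (T_bath / T_inst - 1.0))

# hypothetical reduced-unit values: instantaneous T slightly above the bath
v = np.array([1.0, -0.8, 0.5])                    # toy velocity components
lam = berendsen_scale_factor(T_inst=0.70, T_bath=0.65, dt=0.005, tau=0.5)
print(lam, v * lam)                               # velocities nudged toward T_bath
```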
In silico concurrent multisite pH titration in proteins.
Hu, Hao; Shen, Lin
2014-07-30
The concurrent proton binding at multiple sites in macromolecules such as proteins and nucleic acids is an important yet challenging problem in biochemistry. We develop an efficient generalized Hamiltonian approach to attack this issue. Based on the previously developed generalized-ensemble methods, an effective potential energy is constructed which combines the contributions of all (relevant) protonation states of the molecule. The effective potential preserves important phase regions of all states and, thus, allows efficient sampling of these regions in one simulation. The need for intermediate states in alchemical free energy simulations is greatly reduced. Free energy differences between different protonation states can be determined accurately and enable one to construct the grand canonical partition function. Therefore, the complicated concurrent multisite proton titration process of protein molecules can be satisfactorily simulated. Application of this method to the simulation of the pKa of Glu49, Asp50, and C-terminus of bovine pancreatic trypsin inhibitor shows reasonably good agreement with published experimental work. This method provides an unprecedented vivid picture of how different protonation states change their relative population upon pH titration. We believe that the method will be very useful in deciphering the molecular mechanism of pH-dependent biomolecular processes in terms of a detailed atomistic description. Copyright © 2014 Wiley Periodicals, Inc.
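The population shift with pH that the abstract describes can be illustrated with the standard grand-canonical weighting of protonation microstates: a microstate with protonation pattern x gets weight 10^(Σ x_i (pKa_i − pH)) times a Boltzmann factor for site–site interactions. The sketch below shows only this textbook multisite titration bookkeeping; the pKa values and the toy coupling are invented and are not the authors' generalized-Hamiltonian results for BPTI.

```python
import numpy as np

KT_KCAL = 0.593                       # kT in kcal/mol near 298 K
LN10 = np.log(10.0)

def microstate_populations(pKas, interaction_kcal, pH):
    """Populations of all 2^N protonation microstates of N titratable sites.
    Weight of a microstate x (x_i = 1 if site i is protonated):
        w(x) ∝ 10^{sum_i x_i (pKa_i - pH)} * exp(-G_int(x)/kT)."""
    pKas = np.asarray(pKas, dtype=float)
    N = len(pKas)
    states = [tuple((k >> i) & 1 for i in range(N)) for k in range(2 ** N)]
    logw = []
    for x in states:
        x_arr = np.array(x)
        logw.append(LN10 * np.sum(x_arr * (pKas - pH)) - interaction_kcal(x) / KT_KCAL)
    logw = np.array(logw)
    w = np.exp(logw - logw.max())     # subtract max for numerical stability
    return states, w / w.sum()

def pair_coupling(x, penalty_kcal=0.8):
    """Toy site-site interaction: a small penalty when both acidic sites are
    simultaneously deprotonated (both negatively charged)."""
    return penalty_kcal if (x[0] == 0 and x[1] == 0) else 0.0

# hypothetical two-site acid with pKa values 4.0 and 3.5
for pH in (2.0, 4.0, 6.0):
    states, pops = microstate_populations([4.0, 3.5], pair_coupling, pH)
    print(pH, [round(float(p), 3) for p in pops])
```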
The effects of clutter-rejection filtering on estimating weather spectrum parameters
NASA Technical Reports Server (NTRS)
Davis, W. T.
1989-01-01
The effects of clutter-rejection filtering on estimating the weather parameters from pulse Doppler radar measurement data are investigated. The pulse-pair method of estimating the spectrum mean and spectrum width of the weather is emphasized. The loss of sensitivity, a measure of the signal power lost due to filtering, is also considered. A flexible software tool developed to investigate these effects is described. It allows for simulated weather radar data, in which the user specifies an underlying truncated Gaussian spectrum, as well as for externally generated data, which may be real or simulated. The filter may be implemented in either the time or the frequency domain. The software tool is validated by comparing unfiltered spectrum mean and width estimates to their true values, and by reproducing previously published results. The effects on the weather parameter estimates using simulated weather-only data are evaluated for five filters: an ideal filter, two infinite impulse response filters, and two finite impulse response filters. Results for external data, consisting of weather and clutter data, are evaluated on a range-cell-by-range-cell basis. Finally, it is shown theoretically and by computer simulation that a linear phase response is not required for a clutter-rejection filter preceding pulse-pair parameter estimation.
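For reference, the pulse-pair method mentioned above estimates the Doppler spectrum mean from the phase of the lag-one autocorrelation of the complex (I/Q) samples, and the spectrum width from the ratio of lag-zero to lag-one magnitudes. The sketch below uses one common form of these estimators; sign conventions and the width formula vary between radars, so treat it as illustrative rather than as the report's exact implementation. The wavelength, pulse interval, and synthetic data are placeholders.

```python
import numpy as np

def pulse_pair_estimates(iq, wavelength, Ts):
    """Pulse-pair spectrum-mean and spectrum-width estimates from complex I/Q
    samples of one range cell (one common sign convention)."""
    z = np.asarray(iq)
    R0 = np.mean(np.abs(z) ** 2)                       # lag-0 autocorrelation (power)
    R1 = np.mean(np.conj(z[:-1]) * z[1:])              # lag-1 autocorrelation
    v_mean = -wavelength / (4.0 * np.pi * Ts) * np.angle(R1)
    ratio = np.clip(R0 / np.abs(R1), 1.0, None)        # guard against noisy |R1| > R0
    sigma_v = wavelength / (2.0 * np.pi * Ts * np.sqrt(2.0)) * np.sqrt(np.log(ratio))
    return v_mean, sigma_v

# toy check: synthesize samples with a known Doppler shift plus noise
rng = np.random.default_rng(2)
lam, Ts, v_true = 0.10, 1e-3, 8.0                      # 10 cm wavelength, 1 ms PRT
f_d = -2.0 * v_true / lam                              # Doppler frequency for this convention
m = np.arange(256)
iq = np.exp(2j * np.pi * f_d * Ts * m) + 0.1 * (rng.standard_normal(256) + 1j * rng.standard_normal(256))
print(pulse_pair_estimates(iq, lam, Ts))               # mean velocity should be near 8 m/s
```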
A model of milk production in lactating dairy cows in relation to energy and nitrogen dynamics.
Johnson, I R; France, J; Cullen, B R
2016-02-01
A generic daily time-step model of a dairy cow, designed to be included in whole-system pasture simulation models, is described that includes growth, milk production, and lactation in relation to energy and nitrogen dynamics. It is a development of a previously described animal growth and metabolism model that describes animal body composition in terms of protein, water, and fat, and energy dynamics in relation to growth requirements, resynthesis of degraded protein, and animal activity. This is further developed to include lactation and fetal growth. Intake is calculated in relation to stage of lactation, pasture availability, supplementary feed, and feed quality. Energy costs associated with urine N excretion and methane fermentation are accounted for. Milk production and fetal growth are then calculated in relation to the overall energy and nitrogen dynamics. The general behavior of the model is consistent with expected characteristics. Simulations using the model as part of a whole-system pasture simulation model (DairyMod) are compared with experimental data; good agreement is observed for pasture, concentrate, and forage intake, as well as for milk production, over 3 consecutive lactation cycles. The model is shown to be well suited for inclusion in large-scale system simulation models. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Rosebraugh, Matthew R; Widness, John A; Nalbant, Demet; Cress, Gretchen; Veng-Pedersen, Peter
2014-02-01
Preterm very-low-birth-weight (VLBW) infants weighing <1.5 kg at birth develop anemia, often requiring multiple red blood cell transfusions (RBCTx). Because laboratory blood loss is a primary cause of anemia leading to RBCTx in VLBW infants, our purpose was to simulate the extent to which RBCTx can be reduced or eliminated by reducing laboratory blood loss in combination with pharmacodynamically optimized erythropoietin (Epo) treatment. Twenty-six VLBW ventilated infants receiving RBCTx were studied during the first month of life. RBCTx simulations were based on previously published RBCTx criteria and data-driven Epo pharmacodynamic optimization of literature-derived RBC life span and blood volume data corrected for phlebotomy loss. Simulated pharmacodynamic optimization of Epo administration and reduction in phlebotomy by ≥ 55% predicted a complete elimination of RBCTx in 1.0-1.5 kg infants. In infants <1.0 kg with 100% reduction in simulated phlebotomy and optimized Epo administration, a 45% reduction in RBCTx was predicted. The mean blood volume drawn from all infants was 63 ml/kg: 33% required for analysis and 67% discarded. When reduced laboratory blood loss and optimized Epo treatment are combined, marked reductions in RBCTx in ventilated VLBW infants were predicted, particularly among those with birth weights >1.0 kg.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reedlunn, Benjamin
Room D was an in-situ, isothermal, underground experiment conducted at the Waste Isolation Pilot Plant between 1984 and 1991. The room was carefully instrumented to measure the horizontal and vertical closure immediately upon excavation and for several years thereafter. Early finite element simulations of salt creep around Room D under-predicted the vertical closure by 4.5×, causing investigators to explore a series of changes to the way Room D was modeled. Discrepancies between simulations and measurements were resolved through a series of adjustments to model parameters, which were openly acknowledged in published reports. Interest in Room D has been rekindled recently by the U.S./German Joint Project III and Project WEIMOS, which seek to improve the predictions of rock salt constitutive models. Joint Project participants calibrate their models solely against laboratory tests, and benchmark the models against underground experiments, such as Room D. This report describes updating legacy Room D simulations to today's computational standards by rectifying several numerical issues. Subsequently, the constitutive model used in previous modeling is recalibrated two different ways against a suite of new laboratory creep experiments on salt extracted from the repository horizon of the Waste Isolation Pilot Plant. Simulations with the new, laboratory-based calibrations under-predict Room D vertical closure by 3.1×. A list of potential improvements is discussed.
Hu, Jingwen; Klinich, Kathleen D; Reed, Matthew P; Kokkolaras, Michael; Rupp, Jonathan D
2012-06-01
In motor-vehicle crashes, young school-aged children restrained by vehicle seat belt systems often suffer from abdominal injuries due to submarining. However, the current anthropomorphic test device, so-called "crash dummy", is not adequate for proper simulation of submarining. In this study, a modified Hybrid-III six-year-old dummy model capable of simulating and predicting submarining was developed using MADYMO (TNO Automotive Safety Solutions). The model incorporated improved pelvis and abdomen geometry and properties previously tested in a modified physical dummy. The model was calibrated and validated against four sled tests under two test conditions with and without submarining using a multi-objective optimization method. A sensitivity analysis using this validated child dummy model showed that dummy knee excursion, torso rotation angle, and the difference between head and knee excursions were good predictors for submarining status. It was also shown that restraint system design variables, such as lap belt angle, D-ring height, and seat coefficient of friction (COF), may have opposite effects on head and abdomen injury risks; therefore child dummies and dummy models capable of simulating submarining are crucial for future restraint system design optimization for young school-aged children. Copyright © 2011 IPEM. Published by Elsevier Ltd. All rights reserved.
Exploring the energy landscapes of protein folding simulations with Bayesian computation.
Burkoff, Nikolas S; Várnai, Csilla; Wells, Stephen A; Wild, David L
2012-02-22
Nested sampling is a Bayesian sampling technique developed to explore probability distributions localized in an exponentially small area of the parameter space. The algorithm provides both posterior samples and an estimate of the evidence (marginal likelihood) of the model. The nested sampling algorithm also provides an efficient way to calculate free energies and the expectation value of thermodynamic observables at any temperature, through a simple post processing of the output. Previous applications of the algorithm have yielded large efficiency gains over other sampling techniques, including parallel tempering. In this article, we describe a parallel implementation of the nested sampling algorithm and its application to the problem of protein folding in a Gō-like force field of empirical potentials that were designed to stabilize secondary structure elements in room-temperature simulations. We demonstrate the method by conducting folding simulations on a number of small proteins that are commonly used for testing protein-folding procedures. A topological analysis of the posterior samples is performed to produce energy landscape charts, which give a high-level description of the potential energy surface for the protein folding simulations. These charts provide qualitative insights into both the folding process and the nature of the model and force field used. Copyright © 2012 Biophysical Society. Published by Elsevier Inc. All rights reserved.
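As a reminder of the core of the algorithm used here, nested sampling keeps a set of "live" points drawn from the prior, repeatedly replaces the lowest-likelihood point with a new prior draw constrained to exceed that likelihood, and accumulates the evidence from the shrinking prior volume. The sketch below shows this loop on a trivial one-dimensional problem, with constrained points obtained by naive rejection sampling; it is a didactic sketch, not the parallel protein-folding implementation described in the article.

```python
import numpy as np

rng = np.random.default_rng(3)

def log_likelihood(theta):
    """Toy Gaussian likelihood on a uniform prior over [0, 1]."""
    return -0.5 * ((theta - 0.3) / 0.05) ** 2

def nested_sampling(n_live=100, n_iter=600):
    live = rng.random(n_live)                      # live points drawn from the prior
    logL = np.array([log_likelihood(t) for t in live])
    Z, X_prev = 0.0, 1.0                           # evidence and enclosed prior volume
    for i in range(1, n_iter + 1):
        worst = np.argmin(logL)
        X = np.exp(-i / n_live)                    # expected prior-volume shrinkage
        Z += np.exp(logL[worst]) * (X_prev - X)    # simple rectangle-rule weight
        X_prev = X
        # replace the worst point by a prior draw with higher likelihood (rejection)
        while True:
            candidate = rng.random()
            if log_likelihood(candidate) > logL[worst]:
                break
        live[worst], logL[worst] = candidate, log_likelihood(candidate)
    Z += X_prev * np.mean(np.exp(logL))            # contribution of the remaining live points
    return Z

# analytic evidence for this toy problem is about 0.05 * sqrt(2*pi) ≈ 0.125
print(nested_sampling())
```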
Hellander, Andreas; Lawson, Michael J; Drawert, Brian; Petzold, Linda
2014-06-01
The efficiency of exact simulation methods for the reaction-diffusion master equation (RDME) is severely limited by the large number of diffusion events if the mesh is fine or if diffusion constants are large. Furthermore, inherent properties of exact kinetic-Monte Carlo simulation methods limit the efficiency of parallel implementations. Several approximate and hybrid methods have appeared that enable more efficient simulation of the RDME. A common feature to most of them is that they rely on splitting the system into its reaction and diffusion parts and updating them sequentially over a discrete timestep. This use of operator splitting enables more efficient simulation but it comes at the price of a temporal discretization error that depends on the size of the timestep. So far, existing methods have not attempted to estimate or control this error in a systematic manner. This makes the solvers hard to use for practitioners since they must guess an appropriate timestep. It also makes the solvers potentially less efficient than if the timesteps are adapted to control the error. Here, we derive estimates of the local error and propose a strategy to adaptively select the timestep when the RDME is simulated via a first order operator splitting. While the strategy is general and applicable to a wide range of approximate and hybrid methods, we exemplify it here by extending a previously published approximate method, the Diffusive Finite-State Projection (DFSP) method, to incorporate temporal adaptivity.
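Although the DFSP-specific error estimate is derived in the paper, the general idea of adapting a splitting timestep can be illustrated with ordinary step doubling: take one full first-order (Lie) splitting step and two half steps, use their difference as a local error estimate, and shrink or grow dt accordingly. The sketch below applies this to a deterministic toy reaction–diffusion system rather than to the stochastic RDME solver itself; all parameters are invented.

```python
import numpy as np

def diffuse(u, dt, D=1.0, h=0.1):
    """Explicit Euler sub-step of discrete diffusion on a 1D grid (ghost-node no-flux ends)."""
    lap = np.empty_like(u)
    lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / h**2
    lap[0] = 2.0 * (u[1] - u[0]) / h**2
    lap[-1] = 2.0 * (u[-2] - u[-1]) / h**2
    return u + dt * D * lap

def react(u, dt, k=5.0):
    """Explicit Euler sub-step of a toy first-order decay reaction."""
    return u - dt * k * u

def lie_step(u, dt):
    """First-order (Lie) operator splitting: diffusion sub-step, then reaction sub-step."""
    return react(diffuse(u, dt), dt)

def adaptive_split(u, t_end, dt=1e-3, tol=1e-4):
    """March to t_end, choosing dt from a step-doubling local-error estimate.
    The local error of first-order splitting scales as dt^2, hence the
    square-root factor in the step-size update."""
    t = 0.0
    while t < t_end - 1e-12:
        dt = min(dt, t_end - t)
        coarse = lie_step(u, dt)
        fine = lie_step(lie_step(u, 0.5 * dt), 0.5 * dt)
        err = np.max(np.abs(coarse - fine))
        if err <= tol:
            u, t = fine, t + dt                        # accept the more accurate result
        dt *= min(2.0, 0.9 * np.sqrt(tol / max(err, 1e-16)))
    return u

x = np.linspace(0.0, 1.0, 11)
u0 = np.exp(-((x - 0.5) / 0.1) ** 2)                   # toy initial concentration profile
print(adaptive_split(u0, t_end=0.05).round(4))
```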
Isaranuwatchai, Wanrudee; Brydges, Ryan; Carnahan, Heather; Backstein, David; Dubrowski, Adam
2014-05-01
While the ultimate goal of simulation training is to enhance learning, cost-effectiveness is a critical factor. Research that compares simulation training in terms of educational and cost-effectiveness will lead to better-informed curricular decisions. Using previously published data, we conducted a cost-effectiveness analysis of three simulation-based programs. Medical students (n = 15 per group) practiced in one of three 2-h intravenous catheterization skills training programs: low-fidelity (virtual reality), high-fidelity (mannequin), or progressive (consisting of virtual reality, task trainer, and mannequin simulator). One week later, all performed a transfer test on a hybrid simulation (standardized patient with a task trainer). We used a net benefit regression model to identify the most cost-effective training program via paired comparisons. We also created a cost-effectiveness acceptability curve to visually represent the probability that one program is more cost-effective than its comparator at various 'willingness-to-pay' values. We conducted separate analyses for implementation and total costs. The results showed that the progressive program had the highest total cost (p < 0.001), whereas the high-fidelity program had the highest implementation cost (p < 0.001). While the most cost-effective program depended on the decision makers' willingness-to-pay value, the progressive training program was generally the most educationally and cost-effective. Our analyses suggest that a progressive program that strategically combines simulation modalities provides a cost-effective solution. More generally, we have introduced how a cost-effectiveness analysis may be applied to simulation training; a method that medical educators may use to inform investment decisions (e.g., purchasing cost-effective and educationally sound simulators).
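The net benefit framework referred to above converts each trainee's outcome into a single monetary-scale quantity, NB = λ·E − C (λ is the willingness-to-pay per unit of effectiveness), and with no covariates the regression coefficient on the group indicator is simply the group difference in mean net benefit; repeating this over a grid of λ values and recording the probability that the incremental net benefit is positive traces out an acceptability curve. The sketch below uses simulated effectiveness scores and costs, not the study's data, and a simple bootstrap instead of the authors' exact model.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 15                                   # trainees per arm, matching the group size above

# simulated effectiveness scores and costs per trainee (hypothetical numbers)
effect = {"progressive": rng.normal(75, 8, n), "high_fidelity": rng.normal(70, 8, n)}
cost = {"progressive": rng.normal(900, 120, n), "high_fidelity": rng.normal(800, 120, n)}

def incremental_net_benefit(lmbda, resample=False):
    """Difference in mean net benefit NB = lambda*E - C (progressive minus high-fidelity)."""
    def group_mean_nb(group):
        e, c = effect[group], cost[group]
        if resample:                      # bootstrap resample within the group
            idx = rng.integers(0, n, n)
            e, c = e[idx], c[idx]
        return (lmbda * e - c).mean()
    return group_mean_nb("progressive") - group_mean_nb("high_fidelity")

# acceptability curve: P(incremental net benefit > 0) across willingness-to-pay values
for lmbda in (5.0, 20.0, 50.0):
    boots = np.array([incremental_net_benefit(lmbda, resample=True) for _ in range(2000)])
    print(lmbda, round(float(np.mean(boots > 0)), 3))
```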
Marshall, Deborah A; Burgos-Liz, Lina; IJzerman, Maarten J; Crown, William; Padula, William V; Wong, Peter K; Pasupathy, Kalyan S; Higashi, Mitchell K; Osgood, Nathaniel D
2015-03-01
In a previous report, the ISPOR Task Force on Dynamic Simulation Modeling Applications in Health Care Delivery Research Emerging Good Practices introduced the fundamentals of dynamic simulation modeling and identified the types of health care delivery problems for which dynamic simulation modeling can be used more effectively than other modeling methods. The hierarchical relationship between the health care delivery system, providers, patients, and other stakeholders exhibits a level of complexity that ought to be captured using dynamic simulation modeling methods. As a tool to help researchers decide whether dynamic simulation modeling is an appropriate method for modeling the effects of an intervention on a health care system, we presented the System, Interactions, Multilevel, Understanding, Loops, Agents, Time, Emergence (SIMULATE) checklist consisting of eight elements. This report builds on the previous work, systematically comparing each of the three most commonly used dynamic simulation modeling methods-system dynamics, discrete-event simulation, and agent-based modeling. We review criteria for selecting the most suitable method depending on 1) the purpose-type of problem and research questions being investigated, 2) the object-scope of the model, and 3) the method to model the object to achieve the purpose. Finally, we provide guidance for emerging good practices for dynamic simulation modeling in the health sector, covering all aspects, from the engagement of decision makers in the model design through model maintenance and upkeep. We conclude by providing some recommendations about the application of these methods to add value to informed decision making, with an emphasis on stakeholder engagement, starting with the problem definition. Finally, we identify areas in which further methodological development will likely occur given the growing "volume, velocity and variety" and availability of "big data" to provide empirical evidence and techniques such as machine learning for parameter estimation in dynamic simulation models. Upon reviewing this report in addition to using the SIMULATE checklist, the readers should be able to identify whether dynamic simulation modeling methods are appropriate to address the problem at hand and to recognize the differences of these methods from those of other, more traditional modeling approaches such as Markov models and decision trees. This report provides an overview of these modeling methods and examples of health care system problems in which such methods have been useful. The primary aim of the report was to aid decisions as to whether these simulation methods are appropriate to address specific health systems problems. The report directs readers to other resources for further education on these individual modeling methods for system interventions in the emerging field of health care delivery science and implementation. Copyright © 2015. Published by Elsevier Inc.
Simulation of granular and gas-solid flows using discrete element method
NASA Astrophysics Data System (ADS)
Boyalakuntla, Dhanunjay S.
2003-10-01
In recent years there has been increased research activity in the experimental and numerical study of gas-solid flows. Flows of this type have numerous applications in the energy, pharmaceuticals, and chemicals process industries. Typical applications include pulverized coal combustion, flow and heat transfer in bubbling and circulating fluidized beds, hopper and chute flows, pneumatic transport of pharmaceutical powders and pellets, and many more. The present work addresses the study of gas-solid flows using computational fluid dynamics (CFD) techniques and discrete element simulation methods (DES) combined. Many previous studies of coupled gas-solid flows have been performed assuming the solid phase as a continuum with averaged properties and treating the gas-solid flow as constituting of interpenetrating continua. Instead, in the present work, the gas phase flow is simulated using continuum theory and the solid phase flow is simulated using DES. DES treats each solid particle individually, thus accounting for its dynamics due to particle-particle interactions, particle-wall interactions as well as fluid drag and buoyancy. The present work involves developing efficient DES methods for dense granular flow and coupling this simulation to continuum simulations of the gas phase flow. Simulations have been performed to observe pure granular behavior in vibrating beds. Benchmark cases have been simulated and the results obtained match the published literature. The dimensionless acceleration amplitude and the bed height are the parameters governing bed behavior. Various interesting behaviors such as heaping, round and cusp surface standing waves, as well as kinks, have been observed for different values of the acceleration amplitude for a given bed height. Furthermore, binary granular mixtures (granular mixtures with two particle sizes) in a vibrated bed have also been studied. Gas-solid flow simulations have been performed to study fluidized beds. Benchmark 2D fluidized bed simulations have been performed and the results have been shown to satisfactorily compare with those published in the literature. A comprehensive study of the effect of drag correlations on the simulation of fluidized beds has been performed. It has been found that nearly all the drag correlations studied make similar predictions of global quantities such as the time-dependent pressure drop, bubbling frequency and growth. In conclusion, discrete element simulation has been successfully coupled to continuum gas-phase. Though all the results presented in the thesis are two-dimensional, the present implementation is completely three dimensional and can be used to study 3D fluidized beds to aid in better design and understanding. Other industrially important phenomena like particle coating, coal gasification etc., and applications in emerging areas such as nano-particle/fluid mixtures can also be studied through this type of simulation. (Abstract shortened by UMI.)
Role of collateral paths in long-range diffusion in lungs
Bartel, Seth-Emil T.; Haywood, Susan E.; Woods, Jason C.; Chang, Yulin V.; Menard, Christopher; Yablonskiy, Dmitriy A.; Gierada, David S.; Conradi, Mark S.
2010-01-01
The long-range apparent diffusion coefficient (LRADC) of 3He gas in lungs, measured over times of several seconds and distances of 1–3 cm, probes the connections between the airways. Previous work has shown the LRADC to be small in health and substantially elevated in emphysema, reflecting tissue destruction, which is known to create collateral pathways. To better understand what controls LRADC, we report computer simulations and measurements of 3He gas diffusion in healthy lungs. The lung is generated with a random algorithm using well-defined rules, yielding a three-dimensional set of nodes or junctions, each connected by airways to one parent node and two daughters; airway dimensions are taken from published values. Spin magnetization in the simulated lung is modulated sinusoidally, and the diffusion equation is solved to 1,000 s. The modulated magnetization decays with a time constant corresponding to an LRADC of ~0.001 cm2/s, which is smaller by a factor of ~20 than the values in healthy lungs measured here and previously in vivo and in explanted lungs. It appears that collateral gas pathways, not present in the simulations, are functional in healthy lungs; they provide additional and more direct routes for long-range motion than the canonical airway tree. This is surprising, inasmuch as collateral ventilation is believed to be physiologically insignificant in healthy lungs. We discuss the effect on LRADC of small collateral connections through airway walls and rule out other possible mechanisms. The role of collateral paths is supported by measurements of smaller LRADC in pigs, where collateral ventilation is known to be smaller. PMID:18292298
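The link between the decaying sinusoidal magnetization modulation and the reported apparent diffusion coefficient is the diffusion equation itself: a spatial modulation with wavenumber k decays as exp(−D k² t), so D can be read off from the decay time constant. The helper below states only that relation; the wavelength and time constant are placeholders, not values from the study.

```python
import numpy as np

def adc_from_modulation_decay(wavelength_cm, tau_s):
    """Apparent diffusion coefficient implied by exponential decay of a sinusoidal
    magnetization grating: M(t) ∝ exp(-D k^2 t) with k = 2*pi/wavelength,
    so D = 1 / (k^2 * tau)."""
    k = 2.0 * np.pi / wavelength_cm
    return 1.0 / (k**2 * tau_s)

# hypothetical 2 cm modulation wavelength decaying with a 100 s time constant
print(adc_from_modulation_decay(2.0, 100.0))   # ~1e-3 cm^2/s, the order quoted above
```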
Investigating the Transonic Flutter Boundary of the Benchmark Supercritical Wing
NASA Technical Reports Server (NTRS)
Heeg, Jennifer; Chwalowski, Pawel
2017-01-01
This paper builds on the computational aeroelastic results published previously and generated in support of the second Aeroelastic Prediction Workshop for the NASA Benchmark Supercritical Wing configuration. The computational results are obtained using FUN3D, an unstructured-grid Reynolds-Averaged Navier-Stokes solver developed at the NASA Langley Research Center. The analysis results focus on understanding the dip in the transonic flutter boundary at a single Mach number (0.74), exploring an angle-of-attack range of −1° to 8° and dynamic pressures from wind-off to beyond flutter onset. The rigid analysis results are examined for insights into the behavior of the aeroelastic system. Both static and dynamic aeroelastic simulation results are also examined.
NASA Astrophysics Data System (ADS)
Eggenberger, Rolf; Gerber, Stefan; Huber, Hanspeter; Searles, Debra; Welker, Marc
1992-08-01
The shear viscosity is calculated ab initio for the liquid and hypercritical states: a previously published potential for Ne2, obtained from ab initio calculations including electron correlation, is used in classical equilibrium molecular dynamics simulations to obtain the shear viscosity from a Green-Kubo integral. The quality of the results is quite uniform over a large pressure range up to 1000 MPa and a wide temperature range from 26 to 600 K. In most cases the calculated shear viscosity deviates by less than 10% from the experimental value, with the error in general being only a few percent.
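The Green-Kubo integral used here relates the shear viscosity to the time integral of the equilibrium autocorrelation of an off-diagonal pressure-tensor component, η = (V / kB T) ∫₀^∞ ⟨P_xy(0) P_xy(t)⟩ dt. A minimal numerical sketch of that post-processing step is given below; the pressure-tensor time series is random placeholder data, so the printed number only demonstrates the bookkeeping, not a physical viscosity.

```python
import numpy as np

def green_kubo_viscosity(pxy, dt, volume, kB_T):
    """Shear viscosity from the Green-Kubo relation
        eta = (V / kB*T) * integral_0^inf <Pxy(0) Pxy(t)> dt,
    with the autocorrelation estimated from one time series and the integral
    truncated at half the series length (simple rectangle rule)."""
    pxy = np.asarray(pxy) - np.mean(pxy)
    n = len(pxy)
    max_lag = n // 2
    acf = np.array([np.mean(pxy[: n - lag] * pxy[lag:]) for lag in range(max_lag)])
    return volume / kB_T * np.sum(acf) * dt

# placeholder time series standing in for the MD pressure-tensor output
rng = np.random.default_rng(5)
pxy_series = rng.normal(0.0, 1.0, 20000)
print(green_kubo_viscosity(pxy_series, dt=1e-3, volume=1.0, kB_T=1.0))
```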
NASA Technical Reports Server (NTRS)
Connelly, Joseph; Blake, Peter; Jones, Joycelyn
2008-01-01
The authors report operational upgrades and streamlined data analysis of a commissioned electronic speckle interferometer (ESPI) in a permanent in-house facility at NASA's Goddard Space Flight Center. Our ESPI was commercially purchased for use by the James Webb Space Telescope (JWST) development team. We have quantified and reduced systematic error sources, improved the software operability with a user-friendly graphic interface, developed an instrument simulator, streamlined data analysis for long-duration testing, and implemented a turn-key approach to speckle interferometry. We also summarize results from a test of the JWST support structure (previously published), and present new results from several pieces of test hardware at various environmental conditions.
The Structure of the Protonated Serine Octamer.
Scutelnic, Valeriu; Perez, Marta A S; Marianski, Mateusz; Warnke, Stephan; Gregor, Aurelien; Rothlisberger, Ursula; Bowers, Michael T; Baldauf, Carsten; von Helden, Gert; Rizzo, Thomas R; Seo, Jongcheol
2018-06-20
The amino acid serine has long been known to form a protonated "magic-number" cluster containing eight monomer units that shows an unusually high abundance in mass spectra and has a remarkable homochiral preference. Despite many experimental and theoretical studies, there is no consensus on a Ser8H+ structure that is in agreement with all experimental observations. Here, we present the structure of Ser8H+ determined by a combination of infrared spectroscopy and ab initio molecular dynamics simulations. The three-dimensional structure that we determine is ∼25 kcal mol−1 more stable than the most stable previously published structure and explains both the homochiral preference and the experimentally observed facile replacement of two serine units.
NASA Astrophysics Data System (ADS)
Archirel, Pierre
1997-09-01
We generalise the preoptimisation of orbitals within VB (Part I of this series) by letting the orbitals delocalise onto the neighbouring fragments. The method is more accurate than the local preoptimisation. The method is tested on the rare-gas clusters He2+, Ar2+, He3+ and Ar3+. The results are in good agreement with previously published data on these systems. We complete these data with higher excited states. The binding energies of (ArCO)+, (ArN2)+ and N4+ are revisited. The simulation of the SCF method is extended to Cu+H2O.
Spatially variant apodization for squinted synthetic aperture radar images.
Castillo-Rubio, Carlos F; Llorente-Romano, Sergio; Burgos-García, Mateo
2007-08-01
Spatially variant apodization (SVA) is a nonlinear sidelobe reduction technique that improves the sidelobe level while preserving resolution. The method implements a two-dimensional finite impulse response filter whose taps adapt to the image content. Previously published papers analyze SVA at the Nyquist rate or at higher rates, focusing on strip-map synthetic aperture radar (SAR). This paper shows that traditional SVA techniques are useless when the sensor operates with a squint angle. The reasons for this behaviour are analyzed, and a new implementation that largely improves the results is presented. The algorithm is applied to simulated SAR images in order to demonstrate the good quality achieved along with efficient computation.
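For context on what the adaptive taps do in the non-squinted case, the classical Nyquist-rate SVA rule applies, independently to the real and imaginary parts of each sample, a three-tap filter y[n] = x[n] + w(x[n−1] + x[n+1]) with the weight clamped to [0, 0.5] and chosen to null the output where possible. The 1-D sketch below follows that textbook formulation; it is exactly the kind of implementation the paper argues breaks down under squint, so it is illustrative only, and the test signal is synthetic.

```python
import numpy as np

def sva_1d(x):
    """Classical 1-D spatially variant apodization for Nyquist-sampled data,
    applied separately to the real (I) and imaginary (Q) channels."""
    def apodize(c):
        y = c.copy()
        for m in range(1, len(c) - 1):
            s = c[m - 1] + c[m + 1]
            if s == 0.0:
                continue
            w = -c[m] / s                     # tap weight that would null this sample
            w = min(max(w, 0.0), 0.5)         # restrict to the cosine-on-pedestal family
            y[m] = c[m] + w * s
        return y
    x = np.asarray(x, dtype=complex)
    return apodize(np.array(x.real)) + 1j * apodize(np.array(x.imag))

# toy check: a point target located between samples, so sidelobes are visible
n = np.arange(-32, 33)
psf = np.sinc(n - 0.37).astype(complex)       # Nyquist-sampled uniform-weighting response
out = sva_1d(psf)
sidelobes = np.abs(n) >= 2
print(np.max(np.abs(psf[sidelobes])), np.max(np.abs(out[sidelobes])))
```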
Allergic reaction to polyethylene glycol in a painter.
Antolin-Amerigo, D; Sánchez-González, M J; Barbarroja-Escudero, J; Rodríguez-Rodríguez, M; Álvarez-Perea, A; Alvarez-Mon, M
2015-08-01
We report a case of a male painter who visited our outpatient clinic after developing a distinct skin reaction 15 min after the ingestion of a laxative solution containing polyethylene glycol (PEG) prior to colonoscopy. He described suffering from the same skin reaction when he was previously exposed to paints that contained PEG-4000. An exposure challenge test with pure PEG-4000, simulating his workplace conditions, elicited a generalized urticarial reaction. Allergy to PEG should be considered in painters who develop urticarial or other systemic symptoms after handling PEG-containing products. © The Author 2015. Published by Oxford University Press on behalf of the Society of Occupational Medicine. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
INACSL Standards of Best Practice for Simulation: Past, Present, and Future.
Sittner, Barbara J; Aebersold, Michelle L; Paige, Jane B; Graham, Leslie L M; Schram, Andrea Parsons; Decker, Sharon I; Lioce, Lori
2015-01-01
To describe the historical evolution of the International Nursing Association for Clinical Simulation and Learning's (INACSL) Standards of Best Practice: Simulation. The establishment of simulation standards began as a concerted effort by the INACSL Board of Directors in 2010 to provide best practices to design, conduct, and evaluate simulation activities in order to advance the science of simulation as a teaching methodology. A comprehensive review of the evolution of INACSL Standards of Best Practice: Simulation was conducted using journal publications, the INACSL website, INACSL member survey, and reports from members of the INACSL Standards Committee. The initial seven standards, published in 2011, were reviewed and revised in 2013. Two new standards were published in 2015. The standards will continue to evolve as the science of simulation advances. As the use of simulation-based experiences increases, the INACSL Standards of Best Practice: Simulation are foundational to standardizing language, behaviors, and curricular design for facilitators and learners.
Murillo, Gabriel H; You, Na; Su, Xiaoquan; Cui, Wei; Reilly, Muredach P; Li, Mingyao; Ning, Kang; Cui, Xinping
2016-05-15
Single nucleotide variant (SNV) detection procedures are being utilized as never before to analyze the recent abundance of high-throughput DNA sequencing data, both on single and multiple sample datasets. Building on previously published work with the single-sample SNV caller genotype model selection (GeMS), a multiple-sample version of GeMS (MultiGeMS) is introduced. Unlike other popular multiple-sample SNV callers, the MultiGeMS statistical model accounts for enzymatic substitution sequencing errors. It also addresses the multiple testing problem endemic to multiple-sample SNV calling and utilizes high performance computing (HPC) techniques. A simulation study demonstrates that MultiGeMS ranks highest in precision among a selection of popular multiple-sample SNV callers, while showing exceptional recall in calling common SNVs. Further, both simulation studies and real data analyses indicate that MultiGeMS is robust to low-quality data. We also demonstrate that accounting for enzymatic substitution sequencing errors not only improves SNV call precision at low mapping quality regions, but also improves recall at reference allele-dominated sites with high mapping quality. The MultiGeMS package can be downloaded from https://github.com/cui-lab/multigems. Contact: xinping.cui@ucr.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
du Plessis, Anton; Broeckhoven, Chris; le Roux, Stephan G
2018-01-01
This Data Note provides data from an experimental campaign to analyse the detailed internal and external morphology and mechanical properties of venomous snake fangs. The aim of the experimental campaign was to investigate the evolutionary development of 3 fang phenotypes and investigate their mechanical behaviour. The study involved the use of load simulations to compare maximum Von Mises stress values when a load is applied to the tip of the fang. The conclusions of this study have been published elsewhere, but in this data note we extend the analysis, providing morphological comparisons including details such as curvature comparisons, thickness, etc. Physical compression results of individual fangs, though reported in the original paper, were also extended here by calculating the effective elastic modulus of the entire snake fang structure including internal cavities for the first time. This elastic modulus of the entire fang is significantly lower than the locally measured values previously reported from indentation experiments, highlighting the possibility that the elastic modulus is higher on the surface than in the rest of the material. The micro-computed tomography (microCT) data are presented both in image stacks and in the form of STL files, which simplifies the handling of the data and allows its re-use for future morphological studies. These fangs might also serve as bio-inspiration for future hypodermic needles. © The Author 2017. Published by Oxford University Press.
Nicolsky, D. J.; Freymueller, J.T.; Witter, R.C.; Suleimani, E. N.; Koehler, R.D.
2016-01-01
We reassess the slip distribution of the 1957 Andreanof Islands earthquake in the eastern part of the aftershock zone where published slip models infer little or no slip. Eyewitness reports, tide gauge data, and geological evidence for 9–23 m tsunami runups imply seafloor deformation offshore Unalaska Island in 1957, in contrast with previous studies that labeled the area a seismic gap. Here, we simulate tsunami dynamics for a suite of deformation models that vary in depth and amount of megathrust slip. Tsunami simulations show that a shallow (5–15 km deep) rupture with ~20 m of slip most closely reproduces the 1957 Dutch Harbor marigram and nearby >18 m runup at Sedanka Island marked by stranded drift logs. Models that place slip deeper than 20 km predict waves that arrive too soon. Our results imply that shallow slip on the megathrust in 1957 extended east into an area that presently creeps.
Weighted analysis of paired microarray experiments.
Kristiansson, Erik; Sjögren, Anders; Rudemo, Mats; Nerman, Olle
2005-01-01
In microarray experiments quality often varies, for example between samples and between arrays. The need for quality control is therefore strong. A statistical model and a corresponding analysis method are suggested for experiments with pairing, including designs with individuals observed before and after treatment and many experiments with two-colour spotted arrays. The model is of mixed type, with some parameters estimated by an empirical Bayes method. Differences in quality are modelled by individual variances and correlations between repetitions. The method is applied to three real and several simulated datasets. Two of the real datasets are of Affymetrix type with patients profiled before and after treatment, and the third dataset is of two-colour spotted cDNA type. In all cases, the patients or arrays had different estimated variances, leading to distinctly unequal weights in the analysis. We also suggest plots that illustrate the variances and correlations that affect the weights computed by our analysis method. For simulated data, the improvement relative to previously published methods without weighting is shown to be substantial.
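As a rough illustration of the weighting idea described in the preceding abstract (not the authors' full mixed/empirical-Bayes model), the sketch below simulates paired before/after log-ratios with patient-specific noise levels and compares an inverse-variance-weighted gene-wise mean against the unweighted mean; the numbers and the simplified estimator are assumptions for illustration only.

```python
# Minimal sketch of inverse-variance weighting of paired (before/after) samples.
# This is NOT the authors' full mixed/empirical-Bayes model; it only illustrates
# how unequal per-patient variances translate into unequal weights.
import numpy as np

rng = np.random.default_rng(0)
n_genes, n_patients = 1000, 6
true_effect = 0.5
# Simulate paired log-ratios with patient-specific noise levels (quality differences).
patient_sd = np.array([0.3, 0.4, 0.5, 0.8, 1.2, 2.0])
diffs = true_effect + rng.normal(0.0, patient_sd, size=(n_genes, n_patients))

var_hat = diffs.var(axis=0, ddof=1)          # per-patient variance estimate
weights = 1.0 / var_hat                      # inverse-variance weights
weights /= weights.sum()

weighted_mean = diffs @ weights              # gene-wise weighted mean difference
unweighted_mean = diffs.mean(axis=1)
print("weights:", np.round(weights, 3))
print("RMSE weighted  :", np.sqrt(((weighted_mean - true_effect) ** 2).mean()))
print("RMSE unweighted:", np.sqrt(((unweighted_mean - true_effect) ** 2).mean()))
```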
Li, R; Barton, HA; Maurer, TS
2015-01-01
Liver cirrhosis is a disease characterized by the loss of functional liver mass. Physiologically based pharmacokinetic (PBPK) modeling was applied to interpret and predict how the interplay among physiological changes in cirrhosis affects pharmacokinetics. However, previous PBPK models under cirrhotic conditions were developed for permeable cytochrome P450 substrates and do not directly apply to substrates of liver transporters. This study characterizes a PBPK model for liver transporter substrates in relation to the severity of liver cirrhosis. A published PBPK model structure for liver transporter substrates under healthy conditions and the physiological changes for cirrhosis are combined to simulate pharmacokinetics of liver transporter substrates in patients with mild and moderate cirrhosis. The simulated pharmacokinetics under liver cirrhosis reasonably approximate observations. This analysis includes a meta-analysis to obtain system-dependent parameters in cirrhosis patients and a top-down approach to improve understanding of the effect of cirrhosis on transporter-mediated drug disposition. PMID:26225262
Rahimi Azghadi, Mostafa; Iannella, Nicolangelo; Al-Sarawi, Said; Abbott, Derek
2014-01-01
Cortical circuits in the brain have long been recognised for their information processing capabilities and have been studied both experimentally and theoretically via spiking neural networks. Neuromorphic engineers are primarily concerned with translating the computational capabilities of biological cortical circuits, using the Spiking Neural Network (SNN) paradigm, into in silico applications that can mimic the behaviour and capabilities of real biological circuits/systems. These capabilities include low power consumption, compactness, and relevant dynamics. In this paper, we propose a new accelerated-time circuit that has several advantages over its previous neuromorphic counterparts in terms of compactness, power consumption, and capability to mimic the outcomes of biological experiments. The presented circuit simulation results demonstrate that, compared with previously published synaptic plasticity circuits, the new circuit achieves reduced silicon area and lower energy consumption for processing each spike. In addition, it can be tuned in order to closely mimic the outcomes of various spike timing- and rate-based synaptic plasticity experiments. The proposed circuit is also investigated and compared to other designs in terms of tolerance to mismatch and process variation. Monte Carlo simulation results show that the proposed design is much more stable than its previous counterparts in terms of vulnerability to transistor mismatch, which is a significant challenge in analog neuromorphic design. All these features make the proposed design an ideal circuit for use in large scale SNNs, which aim at implementing neuromorphic systems with an inherent capability that can adapt to a continuously changing environment, thus leading to systems with significant learning and computational abilities. PMID:24551089
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ciricosta, O.; Scott, H.; Durey, P.
In a National Ignition Facility implosion, hydrodynamic instabilities may cause the cold material from the imploding shell to be injected into the hot-spot (hot-spot mix), enhancing the radiative and conductive losses, which in turn may lead to a quenching of the ignition process. The bound-bound features of the spectrum emitted by high-Z ablator dopants that get mixed into the hot-spot have been previously used to infer the total amount of mixed mass; however, the typical error bars are larger than the maximum tolerable mix. We present in this paper an improved 2D model for mix spectroscopy which can be used to retrieve information on both the amount of mixed mass and the full imploded plasma profile. By performing radiation transfer and simultaneously fitting all of the features exhibited by the spectra, we are able to constrain self-consistently the effect of the opacity of the external layers of the target on the emission, thus improving the accuracy of the inferred mixed mass. The model's predictive capabilities are first validated by fitting simulated spectra arising from fully characterized hydrodynamic simulations, and then, the model is applied to previously published experimental results, providing values of mix mass in agreement with previous estimates. Finally, we show that the new self-consistent procedure leads to better constrained estimates of mix and also provides insight into the sensitivity of the hot-spot spectroscopy to the spatial properties of the imploded capsule, such as the in-flight aspect ratio of the cold fuel surrounding the hot-spot.
NASA Astrophysics Data System (ADS)
Ciricosta, O.; Scott, H.; Durey, P.; Hammel, B. A.; Epstein, R.; Preston, T. R.; Regan, S. P.; Vinko, S. M.; Woolsey, N. C.; Wark, J. S.
2017-11-01
In a National Ignition Facility implosion, hydrodynamic instabilities may cause the cold material from the imploding shell to be injected into the hot-spot (hot-spot mix), enhancing the radiative and conductive losses, which in turn may lead to a quenching of the ignition process. The bound-bound features of the spectrum emitted by high-Z ablator dopants that get mixed into the hot-spot have been previously used to infer the total amount of mixed mass; however, the typical error bars are larger than the maximum tolerable mix. We present here an improved 2D model for mix spectroscopy which can be used to retrieve information on both the amount of mixed mass and the full imploded plasma profile. By performing radiation transfer and simultaneously fitting all of the features exhibited by the spectra, we are able to constrain self-consistently the effect of the opacity of the external layers of the target on the emission, thus improving the accuracy of the inferred mixed mass. The model's predictive capabilities are first validated by fitting simulated spectra arising from fully characterized hydrodynamic simulations, and then, the model is applied to previously published experimental results, providing values of mix mass in agreement with previous estimates. We show that the new self-consistent procedure leads to better constrained estimates of mix and also provides insight into the sensitivity of the hot-spot spectroscopy to the spatial properties of the imploded capsule, such as the in-flight aspect ratio of the cold fuel surrounding the hot-spot.
Ko, Youn Jo; Jo, Won Ho
2010-05-19
Several prokaryotic ClC proteins have been demonstrated to function as exchangers that transport both chloride ions and protons simultaneously in opposite directions. However, the path of the proton through the ClC exchanger, and how the protein brings about the coupled movement of both ions, are still unknown. In this work, we use an atomistic molecular dynamics (MD) simulation to demonstrate that a previously unknown secondary water pore is formed inside an Escherichia coli ClC exchanger. The secondary water pore is bifurcated from the chloride ion pathway at E148. From the systematic simulations, we determined that the glutamate residue exposed to the intracellular solution, E203, plays an important role as a trigger for the formation of the secondary water pore, and that the highly conserved tyrosine residue Y445 functions as a barrier that separates the proton from the chloride ion pathways. Based on our simulation results, we conclude that protons in the ClC exchanger are conducted via a water network through the secondary water pore, and we propose a new mechanism for the coupled transport of chloride ions and protons. It has been reported that several members of the ClC protein family are not just channels that simply transport chloride ions across lipid bilayers; rather, they are exchangers that transport both the chloride ion and proton in opposite directions. However, the ion transit pathways and the mechanism of the coupled movement of these two ions have not yet been unveiled. In this article, we report a new finding (to our knowledge) of a water pore inside a prokaryotic ClC protein as revealed by computer simulation. This water pore is bifurcated from the putative chloride ion pathway, and water molecules inside the new pore connect two glutamate residues that are known to be key residues for proton transport. On the basis of our simulation results, we conclude that the water wire that is formed inside the newly found pore acts as a proton pathway, which enables us to resolve many problems that could not be addressed by previous experimental studies. Copyright 2010 Biophysical Society. Published by Elsevier Inc. All rights reserved.
Reference Accuracy among Research Articles Published in "Research on Social Work Practice"
ERIC Educational Resources Information Center
Wilks, Scott E.; Geiger, Jennifer R.; Bates, Samantha M.; Wright, Amy L.
2017-01-01
Objective: The objective was to examine reference errors in research articles published in Research on Social Work Practice. High rates of reference errors in other top social work journals have been noted in previous studies. Methods: Via a sampling frame of 22,177 total references among 464 research articles published in the previous decade, a…
Florian, J; Garnett, C E; Nallani, S C; Rappaport, B A; Throckmorton, D C
2012-04-01
Pharmacokinetic (PK)-pharmacodynamic modeling and simulation were used to establish a link between methadone dose, concentrations, and Fridericia rate-corrected QT (QTcF) interval prolongation, and to identify a dose that was associated with increased risk of developing torsade de pointes. A linear relationship between concentration and QTcF described the data from five clinical trials in patients on methadone maintenance treatment (MMT). A previously published population PK model adequately described the concentration-time data, and this model was used for simulation. QTcF was increased by a mean (90% confidence interval (CI)) of 17 (12, 22) ms per 1,000 ng/ml of methadone. Based on this model, doses >120 mg/day would increase the QTcF interval by >20 ms. The model predicts that 1-3% of patients would have ΔQTcF >60 ms, and 0.3-2.0% of patients would have QTcF >500 ms at doses of 160-200 mg/day. Our predictions are consistent with available observational data and support the need for electrocardiogram (ECG) monitoring and arrhythmia risk factor assessment in patients receiving methadone doses >120 mg/day.
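A minimal sketch of the linear exposure-response relationship quoted above (about 17 ms of QTcF prolongation per 1,000 ng/ml of methadone); the example concentrations are hypothetical and are not taken from the paper's population PK model.

```python
# Sketch of the linear exposure-response relationship reported above:
# ~17 ms QTcF prolongation per 1,000 ng/mL of methadone (90% CI 12-22 ms).
def delta_qtcf_ms(concentration_ng_per_ml: float, slope_ms_per_1000: float = 17.0) -> float:
    """Predicted mean QTcF prolongation for a given plasma concentration."""
    return slope_ms_per_1000 * concentration_ng_per_ml / 1000.0

# Illustrative (hypothetical) steady-state concentrations, NOT the paper's PK output.
for conc in (250.0, 500.0, 1000.0):
    print(f"{conc:6.0f} ng/mL -> dQTcF ~ {delta_qtcf_ms(conc):4.1f} ms")
```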
Computing nonhydrostatic shallow-water flow over steep terrain
Denlinger, R.P.; O'Connell, D. R. H.
2008-01-01
Flood and dambreak hazards are not limited to moderate terrain, yet most shallow-water models assume that flow occurs over gentle slopes. Shallow-water flow over rugged or steep terrain often generates significant nonhydrostatic pressures, violating the assumption of hydrostatic pressure made in most shallow-water codes. In this paper, we adapt a previously published nonhydrostatic granular flow model to simulate shallow-water flow, and we solve the conservation equations using a finite volume approach and a Harten, Lax, van Leer, and Einfeldt approximate Riemann solver that is modified for a sloping bed and transient wetting and drying conditions. To simulate bed friction, we use the law of the wall. We test the model by comparison with an analytical solution and with results of experiments in flumes that have steep (31°) or shallow (0.3°) slopes. The law of the wall provides an accurate prediction of the effect of bed roughness on mean flow velocity over two orders of magnitude of bed roughness. Our nonhydrostatic, law-of-the-wall flow simulation accurately reproduces flume measurements of front propagation speed, flow depth, and bed-shear stress for conditions of large bed roughness. © 2008 ASCE.
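The following is a minimal sketch of the standard logarithmic law of the wall used above for bed friction, u(z) = (u*/κ) ln(z/z0); the von Kármán constant, roughness height, and example values are generic assumptions rather than values from the paper.

```python
# Sketch of the standard law of the wall, u(z) = (u*/kappa) * ln(z / z0),
# which relates mean velocity to shear (friction) velocity and bed roughness.
# kappa and z0 values here are generic assumptions, not taken from the paper.
import math

KAPPA = 0.41          # von Karman constant

def mean_velocity(z: float, u_star: float, z0: float) -> float:
    """Mean streamwise velocity at height z above the bed (z > z0)."""
    return (u_star / KAPPA) * math.log(z / z0)

def friction_velocity(u: float, z: float, z0: float) -> float:
    """Invert the log law: shear velocity implied by a measured velocity u at height z."""
    return KAPPA * u / math.log(z / z0)

u_star = friction_velocity(u=1.5, z=0.05, z0=0.001)   # 1.5 m/s measured 5 cm above the bed
tau_bed = 1000.0 * u_star ** 2                         # bed shear stress, rho_water ~ 1000 kg/m^3
print(f"u* = {u_star:.3f} m/s, bed shear stress ~ {tau_bed:.1f} Pa")
```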
Simulation of an Impact Test of the All-Composite Lear Fan Aircraft
NASA Technical Reports Server (NTRS)
Stockwell, Alan E.; Jones, Lisa E. (Technical Monitor)
2002-01-01
An MSC.Dytran model of an all-composite Lear Fan aircraft fuselage was developed to simulate an impact test conducted at the NASA Langley Research Center Impact Dynamics Research Facility (IDRF). The test was the second of two Lear Fan impact tests. The purpose of the second test was to evaluate the performance of retrofitted composite energy-absorbing floor beams. A computerized photogrammetric survey was performed to provide airframe geometric coordinates, and over 5000 points were processed and imported into MSC.Patran via an IGES file. MSC.Patran was then used to develop the curves and surfaces and to mesh the finite element model. A model of the energy-absorbing floor beams was developed separately and then integrated into the Lear Fan model. Structural responses of components such as the wings were compared with experimental data or previously published analytical data wherever possible. Comparisons with experimental results were used to guide structural model modifications to improve the simulation performance. This process was based largely on qualitative (video and still camera images and post-test inspections) rather than quantitative results due to the relatively few accelerometers attached to the structure.
Estimating freshwater turtle mortality rates and population declines following hook ingestion.
Steen, David A; Robinson, Orin J
2017-12-01
Freshwater turtle populations are susceptible to declines following small increases in the mortality of adults, making it essential to identify and understand potential threats. Freshwater turtles ingest fish hooks associated with recreational angling, and this is likely a problem because hook ingestion is a source of additive mortality for sea turtles. We used a Bayesian-modeling framework, observed rates of hook ingestion by freshwater turtles, and mortality of sea turtles from hook ingestion to examine the probability that a freshwater turtle in a given population ingests a hook and subsequently dies from it. We used the results of these analyses and previously published life-history data to simulate the effects of hook ingestion on population growth for 3 species of freshwater turtle. In our simulation, the probability that an individual turtle ingests a hook and dies as a result was 1.2-11%. Our simulation results suggest that this rate of mortality from hook ingestion is sufficient to cause population declines. We believe we have identified fish-hook ingestion as a serious yet generally overlooked threat to the viability of freshwater turtle populations. © 2017 Society for Conservation Biology.
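As a loose illustration of how a small additive annual mortality translates into population decline, the sketch below applies the 1.2-11% hook-ingestion mortality range quoted above to a simple exponential projection; the baseline growth rate, starting abundance, and time horizon are hypothetical and far simpler than the paper's life-history-based simulation.

```python
# Minimal sketch: effect of a small additive annual mortality (hook ingestion)
# on simple exponential population growth. Baseline growth rate and the 20-year
# horizon are hypothetical illustrations, not the paper's stage-structured model.
def project(n0: float, lam: float, extra_mortality: float, years: int) -> float:
    """Project abundance with per-year growth rate lam reduced by extra mortality."""
    lam_eff = lam * (1.0 - extra_mortality)
    n = n0
    for _ in range(years):
        n *= lam_eff
    return n

for p_hook_death in (0.0, 0.012, 0.05, 0.11):   # 1.2-11% spans the range reported above
    n_final = project(n0=1000.0, lam=1.0, extra_mortality=p_hook_death, years=20)
    print(f"hook mortality {p_hook_death:5.3f} -> population after 20 yr: {n_final:7.1f}")
```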
Schäuble, Sascha; Stavrum, Anne-Kristin; Bockwoldt, Mathias; Puntervoll, Pål; Heiland, Ines
2017-06-24
Systems Biology Markup Language (SBML) is the standard model representation and description language in systems biology. Enriching and analysing systems biology models by integrating the multitude of available data increases the predictive power of these models. This may be a daunting task, which commonly requires bioinformatic competence and scripting. We present SBMLmod, a Python-based web application and service that automates integration of high throughput data into SBML models. Subsequent steady state analysis is readily accessible via the web service COPASIWS. We illustrate the utility of SBMLmod by integrating gene expression data from different healthy tissues as well as from a cancer dataset into a previously published model of mammalian tryptophan metabolism. SBMLmod is a user-friendly platform for model modification and simulation. The web application is available at http://sbmlmod.uit.no , whereas the WSDL definition file for the web service is accessible via http://sbmlmod.uit.no/SBMLmod.wsdl . Furthermore, the entire package can be downloaded from https://github.com/MolecularBioinformatics/sbml-mod-ws . We envision that SBMLmod will make automated model modification and simulation available to a broader research community.
NASA Astrophysics Data System (ADS)
Ikuta, Nobuaki; Takeda, Akihide
2017-12-01
Research on the flight behavior of electrons and ions in a gas under an electric field has recently moved in a direction of clarifying the mechanism of the spatiotemporal development of a swarm, but the symbolic unknown state function f(r,c,t) of the Boltzmann equation has not been obtained in an explicit form. However, a few papers on the spatiotemporal development of an electron swarm using the Monte Carlo simulation have been published. On the other hand, a new simulation procedure for obtaining the lifelong state function FfT(t,x,ɛ) and local transport quantities J(t,x,ɛ) of electrons in the three domains of time t, one-dimensional position x, and energy ɛ under arbitrary initial and boundary conditions has been developed by extending the flight-time-integral (FTI) methods previously reported and is named the 3D-FTI method. A preliminary calculation has shown that this method can extensively provide the flight behavior of individual electrons in a swarm and local transport quantities consistent in the three domains with reasonable accuracy and career dependences.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sharma, Diksha; Badano, Aldo
2013-03-15
Purpose: hybridMANTIS is a Monte Carlo package for modeling indirect x-ray imagers using columnar geometry based on a hybrid concept that maximizes the utilization of available CPU and graphics processing unit processors in a workstation. Methods: The authors compare hybridMANTIS x-ray response simulations to previously published MANTIS and experimental data for four cesium iodide scintillator screens. These screens have a variety of reflective and absorptive surfaces with different thicknesses. The authors analyze hybridMANTIS results in terms of modulation transfer function and calculate the root mean square difference and Swank factors from simulated and experimental results. Results: The comparison suggests that hybridMANTIS better matches the experimental data as compared to MANTIS, especially at high spatial frequencies and for the thicker screens. hybridMANTIS simulations are much faster than MANTIS, with speed-ups up to 5260. Conclusions: hybridMANTIS is a useful tool for improved description and optimization of image acquisition stages in medical imaging systems and for modeling the forward problem in iterative reconstruction algorithms.
Safner, T.; Miller, M.P.; McRae, B.H.; Fortin, M.-J.; Manel, S.
2011-01-01
Recently, techniques available for identifying clusters of individuals or boundaries between clusters using genetic data from natural populations have expanded rapidly. Consequently, there is a need to evaluate these different techniques. We used spatially-explicit simulation models to compare three spatial Bayesian clustering programs and two edge detection methods. Spatially-structured populations were simulated where a continuous population was subdivided by barriers. We evaluated the ability of each method to correctly identify boundary locations while varying: (i) time after divergence, (ii) strength of isolation by distance, (iii) level of genetic diversity, and (iv) amount of gene flow across barriers. To further evaluate the methods' effectiveness to detect genetic clusters in natural populations, we used previously published data on North American pumas and a European shrub. Our results show that with simulated and empirical data, the Bayesian spatial clustering algorithms outperformed direct edge detection methods. All methods incorrectly detected boundaries in the presence of strong patterns of isolation by distance. Based on this finding, we support the application of Bayesian spatial clustering algorithms for boundary detection in empirical datasets, with necessary tests for the influence of isolation by distance. © 2011 by the authors; licensee MDPI, Basel, Switzerland.
Cost-effectiveness of supervised exercise therapy in heart failure patients.
Kühr, Eduardo M; Ribeiro, Rodrigo A; Rohde, Luis Eduardo P; Polanczyk, Carisi A
2011-01-01
Exercise therapy in heart failure (HF) patients is considered safe and has demonstrated modest reductions in hospitalization rates and death in recent trials. Previous cost-effectiveness analyses described favorable results considering long-term supervised exercise intervention and significant effectiveness of exercise therapy; however, this evidence is no longer supported. To evaluate the cost-effectiveness of supervised exercise therapy in HF patients under the perspective of the Brazilian Public Healthcare System. We developed a Markov model to evaluate the incremental cost-effectiveness ratio of supervised exercise therapy compared to standard treatment in patients with New York Heart Association HF class II and III. Effectiveness was evaluated in quality-adjusted life years over a 10-year time horizon. We searched PUBMED for published clinical trials to estimate effectiveness, mortality, hospitalization, and utilities data. Treatment costs were obtained from a published cohort updated to 2008 values. Exercise therapy intervention costs were obtained from a rehabilitation center. Model robustness was assessed through Monte Carlo simulation and sensitivity analysis. Costs were expressed as international dollars, applying the purchasing-power-parity conversion rate. Exercise therapy showed a small reduction in hospitalization and mortality at a low cost, with an incremental cost-effectiveness ratio of Int$26,462/quality-adjusted life year. Results were most sensitive to exercise therapy costs, standard treatment total costs, exercise therapy effectiveness, and medication costs. Considering a willingness-to-pay of Int$27,500, 55% of the trials fell below this value in the Monte Carlo simulation. In a Brazilian scenario, exercise therapy shows a reasonable cost-effectiveness ratio, despite current evidence of limited benefit of this intervention. Copyright © 2011 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
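For readers unfamiliar with the headline metric above, the sketch below shows the incremental cost-effectiveness ratio (ICER) calculation that a Markov model of this kind produces; the cost and QALY inputs are hypothetical placeholders, while the willingness-to-pay threshold matches the Int$27,500 value quoted in the abstract.

```python
# Sketch of an incremental cost-effectiveness ratio (ICER) calculation of the kind
# produced by the Markov model above. Costs and QALYs below are hypothetical placeholders.
def icer(cost_new: float, cost_std: float, qaly_new: float, qaly_std: float) -> float:
    """Incremental cost per QALY gained for the new strategy vs. standard care."""
    return (cost_new - cost_std) / (qaly_new - qaly_std)

ratio = icer(cost_new=12_500.0, cost_std=11_000.0, qaly_new=5.20, qaly_std=5.14)
willingness_to_pay = 27_500.0   # Int$ per QALY, the threshold used above
verdict = "cost-effective" if ratio < willingness_to_pay else "not cost-effective"
print(f"ICER = Int${ratio:,.0f}/QALY -> {verdict}")
```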
Holmes, Thomas H; McCormick, Mark I
2010-03-01
The speed with which individuals can learn to identify and react appropriately to predation threats when transitioning to new life history stages and habitats will influence their survival. This study investigated the role of chemical alarm cues in both anti-predator responses and predator identification during a transitional period in a newly settled coral reef damselfish, Pomacentrus amboinensis. Individuals were tested for changes in seven behavioural traits in response to conspecific and heterospecific skin extracts. Additionally, we tested whether fish could learn to associate a previously novel chemical cue (i.e. simulated predator scent) with danger, after previously being exposed to a paired cue combining the conspecific skin extract with the novel scent. Fish exposed to conspecific skin extracts were found to significantly decrease their feeding rate, whilst those exposed to heterospecific and control cues showed no change. Individuals were also able to associate a previously novel scent with danger after only a single previous exposure to the paired conspecific skin extract/novel scent cue. Our results indicate that chemical alarm cues play a large role in both threat detection and learned predator recognition during the early post-settlement period in coral reef fishes. Copyright (c) 2010. Published by Elsevier B.V.
Simulating ungulate herbivory across forest landscapes: A browsing extension for LANDIS-II
DeJager, Nathan R.; Drohan, Patrick J.; Miranda, Brian M.; Sturtevant, Brian R.; Stout, Susan L.; Royo, Alejandro; Gustafson, Eric J.; Romanski, Mark C.
2017-01-01
Browsing ungulates alter forest productivity and vegetation succession through selective foraging on species that often dominate early succession. However, the long-term and large-scale effects of browsing on forest succession are not possible to project without the use of simulation models. To explore the effects of ungulates on succession in a spatially explicit manner, we developed a Browse Extension that simulates the effects of browsing ungulates on the growth and survival of plant species cohorts within the LANDIS-II spatially dynamic forest landscape simulation model framework. We demonstrate the capabilities of the new extension and explore the spatial effects of ungulates on forest composition and dynamics using two case studies. The first case study examined the long-term effects of persistently high white-tailed deer browsing rates in the northern hardwood forests of the Allegheny National Forest, USA. In the second case study, we incorporated a dynamic ungulate population model to simulate interactions between the moose population and boreal forest landscape of Isle Royale National Park, USA. In both model applications, browsing reduced total aboveground live biomass and caused shifts in forest composition. Simulations that included effects of browsing resulted in successional patterns that were more similar to those observed in the study regions compared to simulations that did not incorporate browsing effects. Further, model estimates of moose population density and available forage biomass were similar to previously published field estimates at Isle Royale and in other moose-boreal forest systems. Our simulations suggest that neglecting effects of browsing when modeling forest succession in ecosystems known to be influenced by ungulates may result in flawed predictions of aboveground biomass and tree species composition.
ASP-G: an ASP-based method for finding attractors in genetic regulatory networks
Mushthofa, Mushthofa; Torres, Gustavo; Van de Peer, Yves; Marchal, Kathleen; De Cock, Martine
2014-01-01
Motivation: Boolean network models are suitable to simulate GRNs in the absence of detailed kinetic information. However, reducing the biological reality implies making assumptions on how genes interact (interaction rules) and how their state is updated during the simulation (update scheme). The exact choice of the assumptions largely determines the outcome of the simulations. In most cases, however, the biologically correct assumptions are unknown. An ideal simulation thus implies testing different rules and schemes to determine those that best capture an observed biological phenomenon. This is not trivial because most current methods to simulate Boolean network models of GRNs and to compute their attractors impose specific assumptions that cannot be easily altered, as they are built into the system. Results: To allow for a more flexible simulation framework, we developed ASP-G. We show the correctness of ASP-G in simulating Boolean network models and obtaining attractors under different assumptions by successfully recapitulating the detection of attractors of previously published studies. We also provide an example of how performing simulation of network models under different settings helps determine the assumptions under which a certain conclusion holds. The main added value of ASP-G is in its modularity and declarativity, making it more flexible and less error-prone than traditional approaches. The declarative nature of ASP-G comes at the expense of being slower than the more dedicated systems but still achieves a good efficiency with respect to computational time. Availability and implementation: The source code of ASP-G is available at http://bioinformatics.intec.ugent.be/kmarchal/Supplementary_Information_Musthofa_2014/asp-g.zip. Contact: Kathleen.Marchal@UGent.be or Martine.DeCock@UGent.be Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25028722
State of the evidence on simulation-based training for laparoscopic surgery: a systematic review.
Zendejas, Benjamin; Brydges, Ryan; Hamstra, Stanley J; Cook, David A
2013-04-01
Summarize the outcomes and best practices of simulation training for laparoscopic surgery. Simulation-based training for laparoscopic surgery has become a mainstay of surgical training. Much new evidence has accrued since previous reviews were published. We systematically searched the literature through May 2011 for studies evaluating simulation, in comparison with no intervention or an alternate training activity, for training health professionals in laparoscopic surgery. Outcomes were classified as satisfaction, skills (in a test setting) of time (to perform the task), process (eg, performance rating), product (eg, knot strength), and behaviors when caring for patients. We used random effects to pool effect sizes. From 10,903 articles screened, we identified 219 eligible studies enrolling 7138 trainees, including 91 (42%) randomized trials. For comparisons with no intervention (n = 151 studies), pooled effect size (ES) favored simulation for outcomes of knowledge (1.18; N = 9 studies), skills time (1.13; N = 89), skills process (1.23; N = 114), skills product (1.09; N = 7), behavior time (1.15; N = 7), behavior process (1.22; N = 15), and patient effects (1.28; N = 1), all P < 0.05. When compared with nonsimulation instruction (n = 3 studies), results significantly favored simulation for outcomes of skills time (ES, 0.75) and skills process (ES, 0.54). Comparisons between different simulation interventions (n = 79 studies) clarified best practices. For example, in comparison with virtual reality, box trainers have similar effects for process skills outcomes and seem to be superior for outcomes of satisfaction and skills time. Simulation-based laparoscopic surgery training of health professionals has large benefits when compared with no intervention and is moderately more effective than nonsimulation instruction.
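The review above pools effect sizes with a random-effects model; the sketch below shows a generic DerSimonian-Laird random-effects pooling of standardized effect sizes, with made-up per-study effects and variances, and is not the authors' actual analysis code.

```python
# Sketch of DerSimonian-Laird random-effects pooling of standardized effect sizes,
# the general technique referred to above. Effect sizes and variances are invented.
import numpy as np

es = np.array([1.3, 0.9, 1.5, 1.1, 0.7])      # per-study effect sizes (hypothetical)
v = np.array([0.10, 0.08, 0.20, 0.05, 0.15])  # per-study sampling variances (hypothetical)

w_fixed = 1.0 / v
fixed_mean = np.sum(w_fixed * es) / w_fixed.sum()
q = np.sum(w_fixed * (es - fixed_mean) ** 2)              # Cochran's Q
df = len(es) - 1
c = w_fixed.sum() - np.sum(w_fixed ** 2) / w_fixed.sum()
tau2 = max(0.0, (q - df) / c)                             # between-study variance

w_rand = 1.0 / (v + tau2)                                 # random-effects weights
pooled = np.sum(w_rand * es) / w_rand.sum()
se = np.sqrt(1.0 / w_rand.sum())
print(f"pooled ES = {pooled:.2f} "
      f"(95% CI {pooled - 1.96 * se:.2f} to {pooled + 1.96 * se:.2f}), tau^2 = {tau2:.3f}")
```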
Dunn, John C; Belmont, Philip J; Lanzi, Joseph; Martin, Kevin; Bader, Julia; Owens, Brett; Waterman, Brian R
2015-01-01
Surgical education is evolving as work hour constraints limit the exposure of residents to the operating room. Potential consequences may include erosion of resident education and decreased quality of patient care. Surgical simulation training has become a focus of study in an effort to counter these challenges. Previous studies have validated the use of arthroscopic surgical simulation programs both in vitro and in vivo. However, no study has examined if the gains made by residents after a simulation program are retained after a period away from training. In all, 17 orthopedic surgery residents were randomized into simulation or standard practice groups. All subjects were oriented to the arthroscopic simulator, a 14-point anatomic checklist, and Arthroscopic Surgery Skill Evaluation Tool (ASSET). The experimental group received 1 hour of simulation training whereas the control group had no additional training. All subjects performed a recorded, diagnostic arthroscopy intraoperatively. These videos were scored by 2 blinded, fellowship-trained orthopedic surgeons and outcome measures were compared within and between the groups. After 1 year in which neither group had exposure to surgical simulation training, all residents were retested intraoperatively and scored in the exact same fashion. Individual surgical case logs were reviewed and surgical case volume was documented. There was no difference between the 2 groups after initial simulation testing and there was no correlation between case volume and initial scores. After training, the simulation group improved as compared with baseline in mean ASSET (p = 0.023) and mean time to completion (p = 0.01). After 1 year, there was no difference between the groups in any outcome measurements. Although individual technical skills can be cultivated with surgical simulation training, these advancements can be lost without continued education. It is imperative that residency programs implement a simulation curriculum and continue to train throughout the academic year. Published by Elsevier Inc.
Lo, Nathan C; Gurarie, David; Yoon, Nara; Coulibaly, Jean T; Bendavid, Eran; Andrews, Jason R; King, Charles H
2018-01-23
Schistosomiasis is a parasitic disease that affects over 240 million people globally. To improve population-level disease control, there is growing interest in adding chemical-based snail control interventions to interrupt the lifecycle of Schistosoma in its snail host to reduce parasite transmission. However, this approach is not widely implemented, and given environmental concerns, the optimal conditions for when snail control is appropriate are unclear. We assessed the potential impact and cost-effectiveness of various snail control strategies. We extended previously published dynamic, age-structured transmission and cost-effectiveness models to simulate mass drug administration (MDA) and focal snail control interventions against Schistosoma haematobium across a range of low-prevalence (5-20%) and high-prevalence (25-50%) rural Kenyan communities. We simulated strategies over a 10-year period of MDA targeting school children or entire communities, snail control, and combined strategies. We measured incremental cost-effectiveness in 2016 US dollars per disability-adjusted life year and defined a strategy as optimally cost-effective when maximizing health gains (averted disability-adjusted life years) with an incremental cost-effectiveness below a Kenya-specific economic threshold. In both low- and high-prevalence settings, community-wide MDA with additional snail control reduced total disability by an additional 40% compared with school-based MDA alone. The optimally cost-effective scenario included the addition of snail control to MDA in over 95% of simulations. These results support inclusion of snail control in global guidelines and national schistosomiasis control strategies for optimal disease control, especially in settings with high prevalence, "hot spots" of transmission, and noncompliance to MDA. Copyright © 2018 the Author(s). Published by PNAS.
Scott, Frank I; Shah, Yash; Lasch, Karen; Luo, Michelle; Lewis, James D
2018-01-18
Vedolizumab, an α4β7 integrin monoclonal antibody inhibiting gut lymphocyte trafficking, is an effective treatment for ulcerative colitis (UC). We evaluated the optimal position of vedolizumab in the UC treatment paradigm. Using Markov modeling, we assessed multiple algorithms for the treatment of UC. The base case was a 35-year-old male with steroid-dependent moderately to severely active UC without previous immunomodulator or biologic use. The model included 4 different algorithms over 1 year, with vedolizumab use prior to: initiating azathioprine (Algorithm 1), combination therapy with infliximab and azathioprine (Algorithm 2), combination therapy with an alternative anti-tumor necrosis factor (anti-TNF) and azathioprine (Algorithm 3), and colectomy (Algorithm 4). Transition probabilities and quality-adjusted life-year (QALY) estimates were derived from the published literature. Primary analyses included simulating 100 trials of 100,000 individuals, assessing clinical outcomes, and QALYs. Sensitivity analyses employed longer time horizons and ranges for all variables. Algorithm 1 (vedolizumab use prior to all other therapies) was the preferred strategy, resulting in 8981 additional individuals in remission, 18 fewer cases of lymphoma, and 1087 fewer serious infections per 100,000 patients compared with last-line use (A4). Algorithm 1 also resulted in 0.0197 to 0.0205 more QALYs compared with other algorithms. This benefit increased with longer time horizons. Algorithm 1 was preferred in all sensitivity analyses. The model suggests that treatment algorithms positioning vedolizumab prior to other therapies should be considered for individuals with moderately to severely active steroid-dependent UC. Further prospective research is needed to confirm these simulated results. © 2018 Crohn’s & Colitis Foundation of America. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com
Epitaxial Relationships between Calcium Carbonate and Inorganic Substrates
Yang, Taewook; Jho, Jae Young; Kim, Il Won
2014-01-01
The polymorph-selective crystallization of calcium carbonate has been studied in terms of the epitaxial relationship between inorganic substrates and the aragonite/calcite polymorphs, with implications for bioinspired mineralization. EpiCalc software was employed to assess previously published experimental results on two different groups of inorganic substrates: aragonitic carbonate crystals (SrCO3, PbCO3, and BaCO3) and a hexagonal crystal family (α-Al2O3, α-SiO2, and LiNbO3). The maximum size of the overlayer (aragonite or calcite) was calculated for each substrate based on a threshold value of the dimensionless potential to estimate the relative nucleation preference of the polymorphs of calcium carbonate. The results were in good agreement with previous experimental observations, although stereochemical effects between the overlayer and substrate should be considered separately where present. In assessing polymorph-selective nucleation, the current method appeared to provide a better tool than oversimplified mismatch parameters, without invoking time-consuming molecular simulation. PMID:25226539
Invariant aspects of human locomotion in different gravitational environments.
Minetti, A E
2001-01-01
Previous literature showed that walking gait follows the same mechanical paradigm, i.e. the straight/inverted pendulum, regardless of body size, the number of legs, and the amount of gravity acceleration. The Froude number, a dimensionless parameter originally designed to normalize the same (pendulum-like) motion in differently sized subjects, proved to be useful also in the comparison, within the same subject, of walking in heterogravity. In this paper the theory of dynamic similarity is tested by comparing the predictive power of the Froude number in terms of walking speed to previously published data on walking in hypogravity simulators. It is concluded that the Froude number is a good first predictor of the optimal walking speed and of the transition speed between walking and running in different gravitational conditions. According to the Froude number, a dynamically similar walking speed on another planet can be calculated as V(planet) = V(Earth) · sqrt(g(planet)/g(Earth)), where V(Earth) is the reference speed on Earth. © 2001 Elsevier Science Ltd. All rights reserved.
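A worked sketch of the dynamic-similarity scaling stated above: at equal Froude number Fr = v²/(gL) and fixed leg length, the similar speed scales with the square root of gravity; the Earth reference speed and the planetary g values are standard illustrative numbers, not data from the paper.

```python
# Sketch of the dynamic-similarity (equal Froude number) scaling described above:
# Fr = v^2 / (g * L), so at fixed leg length L the similar speed scales with sqrt(g).
import math

G_EARTH = 9.81            # m/s^2
OPTIMAL_WALK_EARTH = 1.4  # m/s, a typical optimal walking speed (illustrative)

def similar_speed(v_earth: float, g_other: float, g_earth: float = G_EARTH) -> float:
    """Dynamically similar walking speed under a different gravity."""
    return v_earth * math.sqrt(g_other / g_earth)

for body, g in (("Moon", 1.62), ("Mars", 3.71), ("Earth", G_EARTH)):
    v = similar_speed(OPTIMAL_WALK_EARTH, g)
    print(f"{body:5s}: g = {g:4.2f} m/s^2 -> similar walking speed ~ {v:.2f} m/s")
```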
The Brain/MINDS 3D digital marmoset brain atlas
Woodward, Alexander; Hashikawa, Tsutomu; Maeda, Masahide; Kaneko, Takaaki; Hikishima, Keigo; Iriki, Atsushi; Okano, Hideyuki; Yamaguchi, Yoko
2018-01-01
We present a new 3D digital brain atlas of the non-human primate, common marmoset monkey (Callithrix jacchus), with MRI and coregistered Nissl histology data. To the best of our knowledge this is the first comprehensive digital 3D brain atlas of the common marmoset having normalized multi-modal data, cortical and sub-cortical segmentation, and in a common file format (NIfTI). The atlas can be registered to new data, is useful for connectomics, functional studies, simulation and as a reference. The atlas was based on previously published work but we provide several critical improvements to make this release valuable for researchers. Nissl histology images were processed to remove illumination and shape artifacts and then normalized to the MRI data. Brain region segmentation is provided for both hemispheres. The data is in the NIfTI format making it easy to integrate into neuroscience pipelines, whereas the previous atlas was in an inaccessible file format. We also provide cortical, mid-cortical and white matter boundary segmentations useful for visualization and analysis. PMID:29437168
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mead, H; St. Jude Children’s Research Hospital, Memphis, TN; Brady, S
Purpose: To discover if a previously published methodology for estimating patient-specific organ dose in a pediatric population (5–55 kg) is translatable to the adult-sized patient population (>55 kg). Methods: An adult male anthropomorphic phantom was scanned with metal oxide semiconductor field effect transistor (MOSFET) dosimeters placed at 23 organ locations in the chest and abdominopelvic regions to determine absolute organ dose. Organ-dose-to-SSDE correlation factors were developed by dividing individual phantom organ doses by the SSDE of the phantom, where SSDE was calculated at the center of the scan volume of the chest and abdomen/pelvis separately. Organ dose correlation factors developed in phantom were multiplied by 28 chest and 22 abdominopelvic patient SSDE values to estimate organ dose. The median patient weight from the CT examinations was 68.9 kg (range 57–87 kg) and median age was 17 years (range 13–28 years). Calculated organ dose estimates were compared to published Monte Carlo simulated patient and phantom results. Results: Organ-dose-to-SSDE correlation was determined for a total of 23 organs in the chest and abdominopelvic regions. For organs fully covered by the scan volume, correlation in the chest (median 1.3; range 1.1–1.5) and abdominopelvic (median 0.9; range 0.7–1.0) regions was 1.0 ± 10%. For organs that extended beyond the scan volume (i.e., skin, bone marrow, and bone surface), correlation was determined to be a median of 0.3 (range 0.1–0.4). Calculated patient organ dose using patient SSDE agreed to better than 6% (chest) and 15% (abdominopelvic) with published values. Conclusion: This study demonstrated that our previously published methodology for calculating organ dose using patient-specific SSDE for the chest and abdominopelvic regions is translatable to adult-sized patients for organs fully covered by the scan volume.
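A minimal sketch of the dose-estimation step described above, in which an estimated organ dose is the product of a phantom-derived organ-dose-to-SSDE correlation factor and the patient's SSDE; the factors below are the medians quoted in the abstract, and the SSDE value is a hypothetical example.

```python
# Sketch of the dose-estimation step described above:
# estimated organ dose = (phantom-derived organ-dose-to-SSDE factor) x (patient SSDE).
# Factor values are the medians quoted in the abstract; the SSDE is a made-up example.
correlation_factor = {
    "chest organ (fully covered)": 1.3,            # median, range 1.1-1.5
    "abdominopelvic organ (fully covered)": 0.9,   # median, range 0.7-1.0
    "partially covered organ (e.g. marrow)": 0.3,  # median, range 0.1-0.4
}

patient_ssde_mgy = 12.0   # hypothetical size-specific dose estimate for one exam

for organ, factor in correlation_factor.items():
    print(f"{organ:40s}: ~{factor * patient_ssde_mgy:4.1f} mGy")
```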
Systematic review of skills transfer after surgical simulation-based training.
Dawe, S R; Pena, G N; Windsor, J A; Broeders, J A J L; Cregan, P C; Hewett, P J; Maddern, G J
2014-08-01
Simulation-based training assumes that skills are directly transferable to the patient-based setting, but few studies have correlated simulated performance with surgical performance. A systematic search strategy was undertaken to find studies published since the last systematic review, published in 2007. Inclusion of articles was determined using a predetermined protocol, independent assessment by two reviewers and a final consensus decision. Studies that reported on the use of surgical simulation-based training and assessed the transferability of the acquired skills to a patient-based setting were included. Twenty-seven randomized clinical trials and seven non-randomized comparative studies were included. Fourteen studies investigated laparoscopic procedures, 13 endoscopic procedures and seven other procedures. These studies provided strong evidence that participants who reached proficiency in simulation-based training performed better in the patient-based setting than their counterparts who did not have simulation-based training. Simulation-based training was equally as effective as patient-based training for colonoscopy, laparoscopic camera navigation and endoscopic sinus surgery in the patient-based setting. These studies strengthen the evidence that simulation-based training, as part of a structured programme and incorporating predetermined proficiency levels, results in skills transfer to the operative setting. © 2014 BJS Society Ltd. Published by John Wiley & Sons Ltd.
FUN3D Airload Predictions for the Full-Scale UH-60A Airloads Rotor in a Wind Tunnel
NASA Technical Reports Server (NTRS)
Lee-Rausch, Elizabeth M.; Biedron, Robert T.
2013-01-01
An unsteady Reynolds-Averaged Navier-Stokes solver for unstructured grids, FUN3D, is used to compute the rotor performance and airloads of the UH-60A Airloads Rotor in the National Full-Scale Aerodynamic Complex (NFAC) 40- by 80-foot Wind Tunnel. The flow solver is loosely coupled to a rotorcraft comprehensive code, CAMRAD-II, to account for trim and aeroelastic deflections. Computations are made for the 1-g level flight speed-sweep test conditions with the airloads rotor installed on the NFAC Large Rotor Test Apparatus (LRTA) and in the 40- by 80-ft wind tunnel to determine the influence of the test stand and wind-tunnel walls on the rotor performance and airloads. Detailed comparisons are made between the results of the CFD/CSD simulations and the wind tunnel measurements. The computed trends in solidity-weighted propulsive force and power coefficient match the experimental trends over the range of advance ratios and are comparable to previously published results. Rotor performance and sectional airloads show little sensitivity to the modeling of the wind-tunnel walls, which indicates that the rotor shaft-angle correction adequately compensates for the wall influence up to an advance ratio of 0.37. Sensitivity of the rotor performance and sectional airloads to the modeling of the rotor with the LRTA body/hub increases with advance ratio. The inclusion of the LRTA in the simulation slightly improves the comparison of rotor propulsive force between the computation and wind tunnel data but does not resolve the difference in the rotor power predictions at mu = 0.37. Despite a more precise knowledge of the rotor trim loads and flight condition, the level of comparison between the computed and measured sectional airloads/pressures at an advance ratio of 0.37 is comparable to the results previously published for the high-speed flight test condition.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kelly-Gorham, Molly Rose K.; DeVetter, Brent M.; Brauer, Carolyn S.
We have re-investigated the optical constants n and k for the homologous series of inorganic salts barium fluoride (BaF2) and calcium fluoride (CaF2) using a single-angle near-normal incidence reflectance device in combination with a calibrated Fourier transform infrared (FTIR) spectrometer. Our results are in good qualitative agreement with most previous works. However, certain features of the previously published data near the reststrahlen band exhibit distinct differences in spectral characteristics. Notably, our measurements of BaF2 do not include a spectral feature in the ~250 cm-1 reststrahlen band that was previously published. Additionally, CaF2 exhibits a distinct wavelength shift relative to the model derived from previously published data. We confirmed our results with recently published works that use significantly more modern instrumentation and data reduction techniques.
Sexton, Nicholas J; Cooper, Richard P
2017-05-01
Task inhibition (also known as backward inhibition) is an hypothesised form of cognitive inhibition evident in multi-task situations, with the role of facilitating switching between multiple, competing tasks. This article presents a novel cognitive computational model of a backward inhibition mechanism. By combining aspects of previous cognitive models in task switching and conflict monitoring, the model instantiates the theoretical proposal that backward inhibition is the direct result of conflict between multiple task representations. In a first simulation, we demonstrate that the model produces two effects widely observed in the empirical literature, specifically, reaction time costs for both (n-1) task switches and n-2 task repeats. Through a systematic search of parameter space, we demonstrate that these effects are a general property of the model's theoretical content, and not specific parameter settings. We further demonstrate that the model captures previously reported empirical effects of inter-trial interval on n-2 switch costs. A final simulation extends the paradigm of switching between tasks of asymmetric difficulty to three tasks, and generates novel predictions for n-2 repetition costs. Specifically, the model predicts that n-2 repetition costs associated with hard-easy-hard alternations are greater than for easy-hard-easy alternations. Finally, we report two behavioural experiments testing this hypothesis, with results consistent with the model predictions. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
Cosmic-ray neutron simulations and measurements in Taiwan.
Chen, Wei-Lin; Jiang, Shiang-Huei; Sheu, Rong-Jiun
2014-10-01
This study used simulations of galactic cosmic rays in the atmosphere to investigate the neutron background environment in Taiwan, emphasising its altitude dependence and spectrum variation near interfaces. The calculated results were analysed and compared with two measurements. The first measurement was a mobile neutron survey from sea level up to 3275 m in altitude conducted using a car-mounted high-sensitivity neutron detector. The second was a previously measured result focusing on the changes in neutron spectra near air/ground and air/water interfaces. The attenuation length of cosmic-ray neutrons in the lower atmosphere was estimated to be 163 g cm(-2) in Taiwan. Cosmic-ray neutron spectra vary with altitude and especially near interfaces. The determined spectra near the air/ground and air/water interfaces agree well with measurements for neutrons below 10 MeV. However, the high-energy portion of the spectra was observed to be much higher than our previous estimation. Because high-energy neutrons contribute substantially to dose evaluation, we suggest revising the annual sea-level effective dose from cosmic-ray neutrons at ground level in Taiwan to 35 μSv, which corresponds to a neutron flux of 5.30 × 10(-3) n cm(-2) s(-1). © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
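As a rough illustration of the altitude dependence implied by the 163 g cm(-2) attenuation length, the sketch below scales the sea-level neutron flux and annual dose quoted above with atmospheric depth; the isothermal-atmosphere depth model and its scale height are assumptions for illustration, not part of the study.

```python
# Sketch of the exponential altitude scaling implied by the quoted attenuation
# length (163 g/cm^2). Sea-level flux and dose are from the abstract; the
# atmospheric-depth model (~1033 g/cm^2 at sea level, 8.4 km scale height) is an assumption.
import math

LAMBDA = 163.0          # g/cm^2, attenuation length reported above
FLUX_SL = 5.30e-3       # n/cm^2/s at sea level (from the abstract)
DOSE_SL = 35.0          # uSv/year at sea level (from the abstract)
X_SL = 1033.0           # g/cm^2, approximate sea-level atmospheric depth

def atmospheric_depth(alt_m: float) -> float:
    """Very rough isothermal-atmosphere depth at a given altitude."""
    return X_SL * math.exp(-alt_m / 8400.0)

for alt in (0.0, 1000.0, 3275.0):
    scale = math.exp((X_SL - atmospheric_depth(alt)) / LAMBDA)
    print(f"{alt:6.0f} m: flux ~ {FLUX_SL * scale:.2e} n/cm^2/s, dose ~ {DOSE_SL * scale:.0f} uSv/y")
```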
Johnson, Lucas B; Gintner, Lucas P; Park, Sehoo; Snow, Christopher D
2015-08-01
Accuracy of current computational protein design (CPD) methods is limited by inherent approximations in energy potentials and sampling. These limitations are often used to qualitatively explain design failures; however, relatively few studies provide specific examples or quantitative details that can be used to improve future CPD methods. Expanding the design method to include a library of sequences provides data that is well suited for discriminating between stabilizing and destabilizing design elements. Using thermophilic endoglucanase E1 from Acidothermus cellulolyticus as a model enzyme, we computationally designed a sequence with 60 mutations. The design sequence was rationally divided into structural blocks and recombined with the wild-type sequence. Resulting chimeras were assessed for activity and thermostability. Surprisingly, unlike previous chimera libraries, regression analysis based on one- and two-body effects was not sufficient for predicting chimera stability. Analysis of molecular dynamics simulations proved helpful in distinguishing stabilizing and destabilizing mutations. Reverting to the wild-type amino acid at destabilized sites partially regained design stability, and introducing predicted stabilizing mutations in wild-type E1 significantly enhanced thermostability. The ability to isolate stabilizing and destabilizing elements in computational design offers an opportunity to interpret previous design failures and improve future CPD methods. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Blackburn, Patrick R; Barnett, Sarah S; Zimmermann, Michael T; Cousin, Margot A; Kaiwar, Charu; Pinto E Vairo, Filippo; Niu, Zhiyv; Ferber, Matthew J; Urrutia, Raul A; Selcen, Duygu; Klee, Eric W; Pichurin, Pavel N
2017-05-01
Pathogenic variants in EBF3 were recently described in three back-to-back publications in association with a novel neurodevelopmental disorder characterized by intellectual disability, speech delay, ataxia, and facial dysmorphisms. In this report, we describe an additional patient carrying a de novo missense variant in EBF3 (c.487C>T, p.(Arg163Trp)) that affects a conserved residue in the zinc knuckle motif of the DNA binding domain. Because no solved structure of the DNA binding domain is available, we generated a homology-based atomic model and performed molecular dynamics simulations for EBF3, which predicted decreased DNA affinity for p.(Arg163Trp) compared with wild-type protein and control variants. These data are in agreement with previous experimental studies of EBF1 showing that the paralogous residue is essential for DNA binding. The conservation and experimental evidence for EBF1, together with in silico modeling and dynamics simulations showing comparable behavior of multiple variants in EBF3, provide strong support for the pathogenicity of p.(Arg163Trp). We show that our patient presents with phenotypes consistent with previously reported patients harboring EBF3 variants and that the additional feature of a bicornuate uterus expands the phenotypic spectrum of this newly identified disorder.
Madl, Amy K; Devlin, Kathryn D; Perez, Angela L; Hollins, Dana M; Cowan, Dallas M; Scott, Paul K; White, Katherine; Cheng, Thales J; Henshaw, John L
2015-02-01
A simulation study was conducted to evaluate worker and area exposure to airborne asbestos associated with the replacement of asbestos-containing gaskets and packing materials from flanges and valves and to assess the influence of several variables not previously investigated. Additionally, the potential for take-home exposures from clothing worn during the study was characterized. Our data showed that product type, ventilation type, gasket location, flange or bonnet size, number of flanges involved, surface characteristics, gasket surface adherence, and even activity type did not have a significant effect on worker exposures. Average worker asbestos exposures during flange gasket work (PCME=0.166 f/cc, 12-59 min) were similar to average worker asbestos exposures during valve overhaul work (PCME=0.165 f/cc, 7-76 min). Average 8-h TWA asbestos exposures were estimated to range from 0.010 to 0.062 f/cc. Handling clothes worn during gasket and packing replacement activities produced exposures that were 0.71% (0.0009 f/cc 40-h TWA) of the airborne asbestos concentration experienced during the 5 days of the study. Despite the many variables considered in this study, exposures during gasket and packing replacement occur within a relatively narrow range, are below current and historical occupational exposure limits for asbestos, and are consistent with previously published data. Copyright © 2014 Elsevier Inc. All rights reserved.
Monte Carlo simulations of electrical percolation in multicomponent thin films with nanofillers
NASA Astrophysics Data System (ADS)
Ni, Xiaojuan; Hui, Chao; Su, Ninghai; Jiang, Wei; Liu, Feng
2018-02-01
We developed a 2D disk-stick percolation model to investigate the electrical percolation behavior of an insulating thin film reinforced with 1D and 2D conductive nanofillers via Monte Carlo simulation. Numerical predictions of the percolation threshold in single-component thin films showed good agreement with previously published work, validating our model for investigating the characteristics of the percolation phenomena. Parametric studies of size effects, i.e., the length of the 1D nanofiller and the diameter of the 2D nanofiller, were carried out to predict the electrical percolation threshold for hybrid systems. The relationships between the nanofillers in the two hybrid systems were established and showed differences from the previously assumed linear relationship. The effective electrical conductance was evaluated through Kirchhoff’s current law by transforming the percolating network into a resistor network. The equivalent resistance was obtained from the distribution of nodal voltages by solving a system of linear equations with a Gaussian elimination method. We examined the effects of stick length, relative concentration, and contact patterns of 1D/2D inclusions on electrical performance. One novel aspect of our study is its ability to investigate the effective conductance of nanocomposites as a function of relative concentrations, which shows that there is a synergistic effect when nanofillers with different dimensionalities are combined properly. Our work provides an important theoretical basis for designing conductive networks and predicting the percolation properties of multicomponent nanocomposites.
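To make the stick-percolation step concrete, the following Python sketch drops random 1D sticks into a unit cell, joins intersecting sticks with a union-find structure, and tests for a left-to-right spanning cluster. It is a minimal illustration with assumed parameter values (stick length, stick count, seed count); it is not the authors' disk-stick code and it omits the 2D disk fillers and the resistor-network conductance step.

    import random, math

    def segments_intersect(p1, p2, p3, p4):
        # Orientation test for proper intersection of segments p1-p2 and p3-p4
        # (degenerate collinear touching is ignored for simplicity).
        def cross(o, a, b):
            return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
        d1, d2 = cross(p3, p4, p1), cross(p3, p4, p2)
        d3, d4 = cross(p1, p2, p3), cross(p1, p2, p4)
        return ((d1 > 0) != (d2 > 0)) and ((d3 > 0) != (d4 > 0))

    def percolates(n_sticks, stick_len, box=1.0, seed=0):
        """Drop n_sticks random sticks into a box x box cell; test left-right spanning."""
        rng = random.Random(seed)
        sticks = []
        for _ in range(n_sticks):
            cx, cy = rng.uniform(0, box), rng.uniform(0, box)
            theta = rng.uniform(0, math.pi)
            dx, dy = 0.5 * stick_len * math.cos(theta), 0.5 * stick_len * math.sin(theta)
            sticks.append(((cx - dx, cy - dy), (cx + dx, cy + dy)))
        # Union-find over sticks plus two virtual electrodes (left and right walls).
        parent = list(range(n_sticks + 2))
        LEFT, RIGHT = n_sticks, n_sticks + 1
        def find(i):
            while parent[i] != i:
                parent[i] = parent[parent[i]]
                i = parent[i]
            return i
        def union(i, j):
            parent[find(i)] = find(j)
        for i, (a, b) in enumerate(sticks):
            if min(a[0], b[0]) <= 0.0:
                union(i, LEFT)
            if max(a[0], b[0]) >= box:
                union(i, RIGHT)
            for j in range(i):
                if segments_intersect(a, b, *sticks[j]):
                    union(i, j)
        return find(LEFT) == find(RIGHT)

    # Spanning probability at one number density; sweeping n_sticks and averaging
    # over seeds traces the percolation curve from which a threshold can be read off.
    hits = sum(percolates(500, 0.1, seed=s) for s in range(50))
    print("spanning fraction:", hits / 50)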
Numerical Simulation of Screech Tones from Supersonic Jets: Physics and Prediction
NASA Technical Reports Server (NTRS)
Tam, Christopher K. W.; Zaman, Khairul Q. (Technical Monitor)
2002-01-01
The objectives of this project are to: (1) perform a numerical simulation of the jet screech phenomenon; and (2) use the data from the simulations to obtain a better understanding of the physics of jet screech. The original grant period was for three years. This was extended at no cost for an extra year to allow the principal investigator time to publish the results. We would like to report that our research work and results (supported by this grant) have fulfilled both objectives of the grant. The following is a summary of the important accomplishments: (1) We have now demonstrated that it is possible to perform accurate numerical simulations of the jet screech phenomenon. Both the axisymmetric case and the fully three-dimensional case were carried out successfully. It is worthwhile to note that this is the first time the screech tone phenomenon has been successfully simulated numerically; (2) All four screech modes were reproduced in the simulation. The computed screech frequencies and intensities were in good agreement with the NASA Langley Research Center data; (3) The staging phenomenon was reproduced in the simulation; (4) The effects of nozzle lip thickness and jet temperature were studied. Simulated tone frequencies at various nozzle lip thicknesses and jet temperatures were found to agree well with experiments; (5) The simulated data were used to explain, for the first time, why there are two axisymmetric screech modes and two helical/flapping screech modes; (6) The simulated data were used to show that when two tones are observed, they co-exist rather than switching from one mode to the other, back and forth, as some previous investigators have suggested; and (7) Some resources of the grant were used to support the development of new computational aeroacoustics (CAA) methodology. (Our screech tone simulations have benefited from the availability of these improved methods.)
A Journey in Standard Development: The Core Manufacturing Simulation Data (CMSD) Information Model.
Lee, Yung-Tsun Tina
2015-01-01
This report documents a journey "from research to an approved standard" of a NIST-led standard development activity. That standard, Core Manufacturing Simulation Data (CMSD) information model, provides neutral structures for the efficient exchange of manufacturing data in a simulation environment. The model was standardized under the auspices of the international Simulation Interoperability Standards Organization (SISO). NIST started the research in 2001 and initiated the standardization effort in 2004. The CMSD standard was published in two SISO Products. In the first Product, the information model was defined in the Unified Modeling Language (UML) and published in 2010 as SISO-STD-008-2010. In the second Product, the information model was defined in Extensible Markup Language (XML) and published in 2013 as SISO-STD-008-01-2012. Both SISO-STD-008-2010 and SISO-STD-008-01-2012 are intended to be used together.
Comparisons of Spectra from 3D Kinetic Meteor PIC Simulations with Theory and Observations
NASA Astrophysics Data System (ADS)
Oppenheim, M. M.; Tarnecki, L. K.
2017-12-01
Meteoroids smaller than a grain of sand have significant impacts on the composition, chemistry, and dynamics of the atmosphere. The processes by which they turbulently diffuse can be studied using collisional kinetic particle-in-cell (PIC) simulations. Spectral analysis is a valuable tool for comparing such simulations of turbulent, non-specular meteor trails with observations. We present three types of spectral information: full spectra along the trail in k-ω space, spectral widths at common radar frequencies, and power as a function of angle with respect to B. These properties can be compared to previously published data. Zhou et al. (2004) use radar theory to predict the power observed by a radar as a function of the angle between the meteor trail and the radar beam and the size of field-aligned irregularities (FAI) within the trail. Close et al. (2008) present observations of meteor trails from the ALTAIR radar, including power returned as a function of angle off B for a small sample of meteors. Close et al. (2008) and Zhou et al. (2004) both suggest a power drop off of 2-3 dB per degree off perpendicular to B. We compare results from our simulations with both theory and observations for a range of conditions, including trail altitude and incident neutral wind speed. For 1m waves, power fell off by 1-3 dB per degree off perpendicular to B. These comparisons help determine if small-scale simulations accurately capture the behavior of real meteors.
Perera, Undugodage Don Nuwan; Nishikida, Koichi; Lavine, Barry K
2018-06-01
A previously published study featuring an attenuated total reflection (ATR) simulation algorithm that mitigated distortions in ATR spectra was further investigated to evaluate its efficacy in enhancing searches of infrared (IR) transmission libraries. In the present study, search prefilters were developed from transformed ATR spectra to identify the assembly plant of a vehicle from ATR spectra of the clear coat layer. A total of 456 IR transmission spectra from the Paint Data Query (PDQ) database, spanning 22 General Motors assembly plants and serving as a training set cohort, were transformed into ATR spectra by the simulation algorithm. The search prefilters were formulated using the fingerprint region (1500 cm⁻¹ to 500 cm⁻¹). Both the transformed ATR spectra (training set) and the experimental ATR spectra (validation set) were preprocessed for pattern recognition analysis using the discrete wavelet transform, which increased the signal-to-noise ratio of the ATR spectra by concentrating the signal in specific wavelet coefficients. Attenuated total reflection spectra of 14 clear coat samples (validation set) measured with a Nicolet iS50 Fourier transform IR spectrometer were correctly classified as to the assembly plant(s) of the automotive vehicle from which the paint sample originated using search prefilters developed from the 456 simulated ATR spectra. The ATR simulation (transformation) algorithm successfully facilitated spectral library matching of ATR spectra against IR transmission spectra of automotive clear coats in the PDQ database.
ff14ipq: A Self-Consistent Force Field for Condensed-Phase Simulations of Proteins
2015-01-01
We present the ff14ipq force field, implementing the previously published IPolQ charge set for simulations of complete proteins. Minor modifications to the charge derivation scheme and van der Waals interactions between polar atoms are introduced. Torsion parameters are developed through a generational learning approach, based on gas-phase MP2/cc-pVTZ single-point energies computed for structures optimized by the force field itself rather than the quantum benchmark. In this manner, we sacrifice information about the true quantum minima in order to ensure that the force field maintains optimal agreement with the MP2/cc-pVTZ benchmark for the ensembles it will actually produce in simulations. A means of making the gas-phase torsion parameters compatible with solution-phase IPolQ charges is presented. The ff14ipq model is an alternative to ff99SB and other Amber force fields for protein simulations in programs that accommodate pair-specific Lennard–Jones combining rules. The force field gives strong performance on α-helical and β-sheet oligopeptides as well as globular proteins over microsecond time scale simulations, although it has not yet been tested in conjunction with lipid and nucleic acid models. We show how our choices in parameter development influence the resulting force field and how other choices that may have appeared reasonable would actually have led to poorer results. The tools we developed may also aid in the development of future fixed-charge and even polarizable biomolecular force fields. PMID:25328495
Properties of Syntactic Foam for Simulation of Mechanical Insults.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hubbard, Neal Benson; Haulenbeek, Kimberly K.; Spletzer, Matthew A.
Syntactic foam encapsulation protects sensitive components. The energy mitigated by the foam is calculated with numerical simulations. The properties of a syntactic foam consisting of a mixture of an epoxy-rubber adduct and glass microballoons are obtained from published literature and test results. The conditions and outcomes of the tests are discussed. The method for converting published properties and test results to input for finite element models is described. Simulations of the test conditions are performed to validate the inputs.
Ciezak, Jennifer A; Trevino, S F
2006-04-20
Solid-state geometry optimizations and corresponding normal-mode analysis of the widely used energetic material cyclotrimethylenetrinitramine (RDX) were performed using density functional theory with both the generalized gradient approximation (BLYP and BP functionals) and the local density approximation (PWC and VWN functionals). The structural results were found to be in good agreement with experimental neutron diffraction data and previously reported calculations based on the isolated-molecule approximation. The vibrational inelastic neutron scattering (INS) spectrum of polycrystalline RDX was measured and compared with simulated INS constructed from the solid-state calculations. The vibrational frequencies calculated from the solid-state methods had average deviations of 10 cm⁻¹ or less, whereas previously published frequencies based on an isolated-molecule approximation had deviations of 65 cm⁻¹ or less, illustrating the importance of including crystalline forces. On the basis of the calculations and analysis, it was possible to assign the normal modes and symmetries, which agree well with previous assignments. Four possible "doorway modes" were found in the energy range defined by the lattice modes, which were all found to contain fundamental contributions from rotation of the nitro groups.
Book review: The Wilderness Debate Rages On: Continuing the Great New Wilderness Debate
Peter Landres
2009-01-01
The Wilderness Debate Rages On is a collection of mostly previously published papers about the meaning, value, and role of wilderness and continues the discussion that was propelled by the editors' previous book The Great New Wilderness Debate (also a collection of papers) published in 1998. The editors state that this sequel to their previous book is mandated...
Ozaki, Y; Watanabe, H; Kaida, A; Miura, M; Nakagawa, K; Toda, K; Yoshimura, R; Sumi, Y; Kurabayashi, T
2017-07-01
Early-stage oral cancer can be cured with oral brachytherapy, but whole-body radiation exposure status has not been previously studied. Recently, the International Commission on Radiological Protection (ICRP) recommended the use of ICRP phantoms to estimate radiation exposure from external and internal radiation sources. In this study, we used a Monte Carlo simulation with ICRP phantoms to estimate whole-body exposure from oral brachytherapy. We used the Particle and Heavy Ion Transport code System (PHITS) to model oral brachytherapy with ¹⁹²Ir hairpins and ¹⁹⁸Au grains and to perform a Monte Carlo simulation on the ICRP adult reference computational phantoms. To confirm the simulations, we also computed local dose distributions from these small sources, and compared them with the results from Oncentra manual Low Dose Rate Treatment Planning (mLDR) software, which is used in day-to-day clinical practice. We successfully obtained data on absorbed dose for each organ in males and females. Sex-averaged equivalent doses were 0.547 and 0.710 Sv with ¹⁹²Ir hairpins and ¹⁹⁸Au grains, respectively. Simulation with PHITS was reliable when compared with an alternative computational technique using mLDR software. We concluded that the absorbed dose for each organ and whole-body exposure from oral brachytherapy can be estimated with Monte Carlo simulation using PHITS on ICRP reference phantoms. Effective doses for patients with oral cancer were obtained. © The Author 2017. Published by Oxford University Press on behalf of The Japan Radiation Research Society and Japanese Society for Radiation Oncology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leon, Stephanie M., E-mail: Stephanie.Leon@uth.tmc.edu; Wagner, Louis K.; Brateman, Libby F.
2014-11-01
Purpose: Monte Carlo simulations were performed with the goal of verifying previously published physical measurements characterizing scatter as a function of apparent thickness. A secondary goal was to provide a way of determining what effect tissue glandularity might have on the scatter characteristics of breast tissue. The overall reason for characterizing mammography scatter in this research is the application of these data to an image processing-based scatter-correction program. Methods: MCNPX was used to simulate scatter from an infinitesimal pencil beam using typical mammography geometries and techniques. The spreading of the pencil beam was characterized by two parameters: mean radial extent (MRE) and scatter fraction (SF). The SF and MRE were found as functions of target, filter, tube potential, phantom thickness, and the presence or absence of a grid. The SF was determined by separating scatter and primary by the angle of incidence on the detector, then finding the ratio of the measured scatter to the total number of detected events. The accuracy of the MRE was determined by placing ring-shaped tallies around the impulse and fitting those data to the point-spread function (PSF) equation using the value for MRE derived from the physical measurements. The goodness-of-fit was determined for each data set as a means of assessing the accuracy of the physical MRE data. The effect of breast glandularity on the SF, MRE, and apparent tissue thickness was also considered for a limited number of techniques. Results: The agreement between the physical measurements and the results of the Monte Carlo simulations was assessed. With a grid, the SFs ranged from 0.065 to 0.089, with absolute differences between the measured and simulated SFs averaging 0.02. Without a grid, the range was 0.28–0.51, with absolute differences averaging −0.01. The goodness-of-fit values comparing the Monte Carlo data to the PSF from the physical measurements ranged from 0.96 to 1.00 with a grid and 0.65 to 0.86 without a grid. Analysis of the data suggested that the nongrid data could be better described by a biexponential function than the single exponential used here. The simulations assessing the effect of breast composition on SF and MRE showed only a slight impact on these quantities. When compared to a mix of 50% glandular/50% adipose tissue, the impact of substituting adipose or glandular breast compositions on the apparent thickness of the tissue was about 5%. Conclusions: The findings show agreement between the physical measurements published previously and the Monte Carlo simulations presented here; the resulting data can therefore be used more confidently for an application such as image processing-based scatter correction. The findings also suggest that breast composition does not have a major impact on the scatter characteristics of breast tissue. Application of the scatter data to the development of a scatter-correction software program can be simplified by ignoring the variations in density among breast tissues.
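Two quantities drive the parameterization above. The scatter fraction is the ratio of detected scatter to all detected events, and the radial spread is summarized by fitting the ring tallies to a single-exponential kernel whose decay length is tied to the mean radial extent:

    \mathrm{SF} = \frac{S}{S + P}, \qquad \mathrm{PSF}(r) \propto e^{-r/k},

where S and P are the numbers of detected scatter and primary events and k is the exponential decay length. The abstract does not give the kernel's normalization or the exact relation between k and the MRE, so the expressions here are only the generic single-exponential form the text refers to.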
NASA Technical Reports Server (NTRS)
Kleb, William L.; Wood, William A.
2004-01-01
The computational simulation community is not routinely publishing independently verifiable tests to accompany new models or algorithms. A survey reveals that only 22% of new models published are accompanied by tests suitable for independently verifying the new model. As the community develops larger codes with increased functionality, and hence increased complexity in terms of the number of building block components and their interactions, it becomes prohibitively expensive for each development group to derive the appropriate tests for each component. Therefore, the computational simulation community is building its collective castle on a very shaky foundation of components with unpublished and unrepeatable verification tests. The computational simulation community needs to begin publishing component level verification tests before the tide of complexity undermines its foundation.
Anatomically realistic multiscale models of normal and abnormal gastrointestinal electrical activity
Cheng, Leo K; Komuro, Rie; Austin, Travis M; Buist, Martin L; Pullan, Andrew J
2007-01-01
One of the major aims of the International Union of Physiological Sciences (IUPS) Physiome Project is to develop multiscale mathematical and computer models that can be used to help understand human health. We present here a small facet of this broad plan that applies to the gastrointestinal system. Specifically, we present an anatomically and physiologically based modelling framework that is capable of simulating normal and pathological electrical activity within the stomach and small intestine. The continuum models used within this framework have been created using anatomical information derived from common medical imaging modalities and data from the Visible Human Project. These models explicitly incorporate the various smooth muscle layers and networks of interstitial cells of Cajal (ICC) that are known to exist within the walls of the stomach and small bowel. Electrical activity within individual ICCs and smooth muscle cells is simulated using a previously published simplified representation of the cell level electrical activity. This simulated cell level activity is incorporated into a bidomain representation of the tissue, allowing electrical activity of the entire stomach or intestine to be simulated in the anatomically derived models. This electrical modelling framework successfully replicates many of the qualitative features of the slow wave activity within the stomach and intestine and has also been used to investigate activity associated with functional uncoupling of the stomach. PMID:17457969
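For context, the bidomain representation mentioned above couples the transmembrane potential V_m and the extracellular potential φ_e through the standard pair of equations below (generic textbook form, not notation quoted from the paper):

    \nabla \cdot \big( \boldsymbol{\sigma}_i \nabla (V_m + \phi_e) \big) = \beta \Big( C_m \frac{\partial V_m}{\partial t} + I_{\mathrm{ion}} \Big),
    \qquad
    \nabla \cdot \big( (\boldsymbol{\sigma}_i + \boldsymbol{\sigma}_e) \nabla \phi_e \big) = - \nabla \cdot \big( \boldsymbol{\sigma}_i \nabla V_m \big),

where σ_i and σ_e are the intracellular and extracellular conductivity tensors, β the membrane surface-to-volume ratio, C_m the membrane capacitance, and I_ion the current supplied by the simplified ICC or smooth muscle cell model.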
Model estimation of land-use effects on water levels of northern Prairie wetlands
Voldseth, R.A.; Johnson, W.C.; Gilmanov, T.; Guntenspergen, G.R.; Millett, B.V.
2007-01-01
Wetlands of the Prairie Pothole Region exist in a matrix of grassland dominated by intensive pastoral and cultivation agriculture. Recent conservation management has emphasized the conversion of cultivated farmland and degraded pastures to intact grassland to improve upland nesting habitat. The consequences of changes in land-use cover that alter watershed processes have not been evaluated relative to their effect on the water budgets and vegetation dynamics of associated wetlands. We simulated the effect of upland agricultural practices on the water budget and vegetation of a semipermanent prairie wetland by modifying a previously published mathematical model (WETSIM). Watershed cover/land-use practices were categorized as unmanaged grassland (native grass, smooth brome), managed grassland (moderately heavily grazed, prescribed burned), cultivated crops (row crop, small grain), and alfalfa hayland. Model simulations showed that differing rates of evapotranspiration and runoff associated with different upland plant-cover categories in the surrounding catchment produced differences in wetland water budgets and linked ecological dynamics. Wetland water levels were highest and vegetation the most dynamic under the managed-grassland simulations, while water levels were the lowest and vegetation the least dynamic under the unmanaged-grassland simulations. The modeling results suggest that unmanaged grassland, often planted for waterfowl nesting, may produce the least favorable wetland conditions for birds, especially in drier regions of the Prairie Pothole Region. These results stand as hypotheses that urgently need to be verified with empirical data.
Riem, N; Boet, S; Bould, M D; Tavares, W; Naik, V N
2012-11-01
Both technical skills (TS) and non-technical skills (NTS) are key to ensuring patient safety in acute care practice and effective crisis management. These skills are often taught and assessed separately. We hypothesized that TS and NTS are not independent of each other, and we aimed to evaluate the relationship between TS and NTS during a simulated intraoperative crisis scenario. This study was a retrospective analysis of performances from a previously published work. After institutional ethics approval, 50 anaesthesiology residents managed a simulated crisis scenario of an intraoperative cardiac arrest secondary to a malignant arrhythmia. We used a modified Delphi approach to design a TS checklist, specific for the management of a malignant arrhythmia requiring defibrillation. All scenarios were recorded. Each performance was analysed by four independent experts. For each performance, two experts independently rated the technical performance using the TS checklist, and two other experts independently rated NTS using the Anaesthetists' Non-Technical Skills score. TS and NTS were significantly correlated to each other (r=0.45, P<0.05). During a simulated 5 min resuscitation requiring crisis resource management, our results indicate that TS and NTS are related to one another. This research provides the basis for future studies evaluating the nature of this relationship, the influence of NTS training on the performance of TS, and to determine whether NTS are generic and transferrable between crises that require different TS.
Numerical Simulations of Subscale Wind Turbine Rotor Inboard Airfoils at Low Reynolds Number
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blaylock, Myra L.; Maniaci, David Charles; Resor, Brian R.
2015-04-01
New blade designs are planned to support future research campaigns at the SWiFT facility in Lubbock, Texas. The sub-scale blades will reproduce specific aerodynamic characteristics of utility-scale rotors. Reynolds numbers for megawatt-, utility-scale rotors are generally above 2-8 million. The thickness of inboard airfoils for these large rotors is typically as high as 35-40%. The thickness and the proximity to three-dimensional flow of these airfoils present design and analysis challenges, even at the full scale. However, more than a decade of experience with the airfoils in numerical simulation, in the wind tunnel, and in the field has generated confidence in their performance. Reynolds number regimes for the sub-scale rotor are significantly lower for the inboard blade, ranging from 0.7 to 1 million. Performance of the thick airfoils in this regime is uncertain because of the lack of wind tunnel data and the inherent challenge associated with numerical simulations. This report documents efforts to determine the most capable analysis tools to support these simulations in an effort to improve understanding of the aerodynamic properties of thick airfoils in this Reynolds number regime. Numerical results from various codes of four airfoils are verified against previously published wind tunnel results where data at those Reynolds numbers are available. Results are then computed for other Reynolds numbers of interest.
NASA Astrophysics Data System (ADS)
Tang, H.; Weiss, R.
2016-12-01
GeoClaw-STRICHE is designed for simulating the physical impacts of tsunamis as they relate to erosion, transport and deposition. GeoClaw-STRICHE comprises GeoClaw for the hydrodynamics and the sediment transport model we refer to as STRICHE, which includes an advection-diffusion equation as well as bed updating. Multiple grain sizes and sediment layers are included in GeoClaw-STRICHE to simulate the grain-size distribution and to add the capability to develop grain-size trends from the bottom to the top of a simulated deposit as well as along the inundation. Unlike previous models based on empirical equations or the sediment concentration gradient, GeoClaw-STRICHE applies the standard Van Leer method to calculate sediment flux. We tested and verified GeoClaw-STRICHE against the flume experiment by Johnson et al. (2016) and data from the 2004 Indian Ocean tsunami at Kuala Meurisi as published in Apotsos et al. (2011). The comparison with experimental data shows GeoClaw-STRICHE's capability to simulate sediment thickness and grain-size distribution under experimental conditions, which builds confidence that sediment transport is correctly predicted by this model. The comparison with the data from the 2004 Indian Ocean tsunami reveals that the pattern of sediment thickness is well predicted and is of similar quality to, if not better than, established computational models such as Delft3D.
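As a concrete illustration of the flux treatment named above, the sketch below applies a Van Leer limited, second-order upwind update to a 1D sediment concentration field. It is a minimal, assumed setup (uniform grid, constant positive velocity, periodic boundaries, no bed exchange) and is not the GeoClaw-STRICHE implementation.

    import numpy as np

    def van_leer(r):
        # Van Leer flux limiter: phi(r) = (r + |r|) / (1 + |r|).
        return (r + np.abs(r)) / (1.0 + np.abs(r))

    def advect_step(c, u, dx, dt):
        """One explicit step of limited second-order upwind advection (u > 0)."""
        cm1 = np.roll(c, 1)    # c[i-1]
        cm2 = np.roll(c, 2)    # c[i-2]
        den = c - cm1
        safe = np.where(np.abs(den) < 1e-12, 1e-12, den)
        r = np.where(np.abs(den) < 1e-12, 0.0, (cm1 - cm2) / safe)
        nu = u * dt / dx       # Courant number, keep below 1
        c_face = cm1 + 0.5 * van_leer(r) * (1.0 - nu) * den   # value at face i-1/2
        flux = u * c_face
        return c - dt / dx * (np.roll(flux, -1) - flux)

    # Advect an initial sediment pulse across a periodic domain.
    x = np.linspace(0.0, 1.0, 200, endpoint=False)
    c = np.exp(-200.0 * (x - 0.3) ** 2)
    dx = x[1] - x[0]
    for _ in range(100):
        c = advect_step(c, u=1.0, dx=dx, dt=0.4 * dx)

The limiter keeps the scheme total-variation diminishing, so sharp deposit edges are transported without the spurious oscillations a plain second-order scheme would produce.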
Analysis of Measured and Simulated Supraglottal Acoustic Waves.
Fraile, Rubén; Evdokimova, Vera V; Evgrafova, Karina V; Godino-Llorente, Juan I; Skrelin, Pavel A
2016-09-01
To date, although much attention has been paid to the estimation and modeling of the voice source (ie, the glottal airflow volume velocity), the measurement and characterization of the supraglottal pressure wave have been much less studied. Some previous results have unveiled that the supraglottal pressure wave has some spectral resonances similar to those of the voice pressure wave. This makes the supraglottal wave partially intelligible. Although the explanation for such effect seems to be clearly related to the reflected pressure wave traveling upstream along the vocal tract, the influence that nonlinear source-filter interaction has on it is not as clear. This article provides an insight into this issue by comparing the acoustic analyses of measured and simulated supraglottal and voice waves. Simulations have been performed using a high-dimensional discrete vocal fold model. Results of such comparative analysis indicate that spectral resonances in the supraglottal wave are mainly caused by the regressive pressure wave that travels upstream along the vocal tract and not by source-tract interaction. On the contrary and according to simulation results, source-tract interaction has a role in the loss of intelligibility that happens in the supraglottal wave with respect to the voice wave. This loss of intelligibility mainly corresponds to spectral differences for frequencies above 1500 Hz. Copyright © 2016 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
Lin, Risa J; Jaeger, Dieter
2011-05-01
In previous studies we used the technique of dynamic clamp to study how temporal modulation of inhibitory and excitatory inputs control the frequency and precise timing of spikes in neurons of the deep cerebellar nuclei (DCN). Although this technique is now widely used, it is limited to interpreting conductance inputs as being location independent; i.e., all inputs that are biologically distributed across the dendritic tree are applied to the soma. We used computer simulations of a morphologically realistic model of DCN neurons to compare the effects of purely somatic vs. distributed dendritic inputs in this cell type. We applied the same conductance stimuli used in our published experiments to the model. To simulate variability in neuronal responses to repeated stimuli, we added a somatic white current noise to reproduce subthreshold fluctuations in the membrane potential. We were able to replicate our dynamic clamp results with respect to spike rates and spike precision for different patterns of background synaptic activity. We found only minor differences in the spike pattern generation between focal or distributed input in this cell type even when strong inhibitory or excitatory bursts were applied. However, the location dependence of dynamic clamp stimuli is likely to be different for each cell type examined, and the simulation approach developed in the present study will allow a careful assessment of location dependence in all cell types.
TopoGromacs: Automated Topology Conversion from CHARMM to GROMACS within VMD.
Vermaas, Josh V; Hardy, David J; Stone, John E; Tajkhorshid, Emad; Kohlmeyer, Axel
2016-06-27
Molecular dynamics (MD) simulation engines use a variety of different approaches for modeling molecular systems with force fields that govern their dynamics and describe their topology. These different approaches introduce incompatibilities between engines, and previously published software bridges the gaps between many popular MD packages, such as between CHARMM and AMBER or GROMACS and LAMMPS. While there are many structure building tools available that generate topologies and structures in CHARMM format, only recently have mechanisms been developed to convert their results into GROMACS input. We present an approach to convert CHARMM-formatted topology and parameters into a format suitable for simulation with GROMACS by expanding the functionality of TopoTools, a plugin integrated within the widely used molecular visualization and analysis software VMD. The conversion process was diligently tested on a comprehensive set of biological molecules in vacuo. The resulting comparison between energy terms shows that the translation performed was lossless as the energies were unchanged for identical starting configurations. By applying the conversion process to conventional benchmark systems that mimic typical modestly sized MD systems, we explore the effect of the implementation choices made in CHARMM, NAMD, and GROMACS. The newly available automatic conversion capability breaks down barriers between simulation tools and user communities and allows users to easily compare simulation programs and leverage their unique features without the tedium of constructing a topology twice.
Contributions to muscle force and EMG by combined neural excitation and electrical stimulation
Crago, Patrick E; Makowski, Nathaniel S; Cole, Natalie M
2014-01-01
Objective: Stimulation of muscle for research or clinical interventions is often superimposed on ongoing physiological activity, without a quantitative understanding of the impact of the stimulation on the net muscle activity and the physiological response. Experimental studies show that total force during stimulation is less than the sum of the isolated voluntary and stimulated forces, but the occlusion mechanism is not understood. Approach: We develop a model of efferent motor activity elicited by superimposing stimulation during a physiologically activated contraction. The model combines action potential interactions due to collision block, source resetting, and refractory periods with previously published models of physiological motor unit recruitment, rate modulation, force production, and EMG generation in human first dorsal interosseous muscle to investigate the mechanisms and effectiveness of stimulation on the net muscle force and EMG. Main Results: Stimulation during a physiological contraction demonstrates partial occlusion of force and the neural component of the EMG, due to action potential interactions in motor units activated by both sources. Depending on neural and stimulation firing rates as well as on force-frequency properties, individual motor unit forces can be greater, smaller, or unchanged by the stimulation. In contrast, voluntary motor unit EMG potentials in simultaneously stimulated motor units show progressive occlusion with increasing stimulus rate. The simulations predict that occlusion would be decreased by a reverse stimulation recruitment order. Significance: The results are consistent with and provide a mechanistic interpretation of previously published experimental evidence of force occlusion. The models also predict two effects that have not been reported previously: voluntary EMG occlusion and the advantages of a proximal stimulation site. This study provides a basis for the rational design of both future experiments and clinical neuroprosthetic interventions involving either motor or sensory stimulation. PMID:25242203
Assessment of virtual reality robotic simulation performance by urology resident trainees.
Ruparel, Raaj K; Taylor, Abby S; Patel, Janil; Patel, Vipul R; Heckman, Michael G; Rawal, Bhupendra; Leveillee, Raymond J; Thiel, David D
2014-01-01
To examine resident performance on the Mimic dV-Trainer (MdVT; Mimic Technologies, Inc., Seattle, WA) for correlation with resident trainee level (postgraduate year [PGY]), console experience (CE), and simulator exposure in their training program to assess for internal bias with the simulator. Residents from programs of the Southeastern Section of the American Urologic Association participated. Each resident was scored on 4 simulator tasks (peg board, camera targeting, energy dissection [ED], and needle targeting) with 3 different outcomes (final score, economy of motion score, and time to complete exercise) measured for each task. These scores were evaluated for association with PGY, CE, and simulator exposure. Robotic skills training laboratory. A total of 27 residents from 14 programs of the Southeastern Section of the American Urologic Association participated. Time to complete the ED exercise was significantly shorter for residents who had logged live robotic console compared with those who had not (p = 0.003). There were no other associations with live robotic console time that approached significance (all p ≥ 0.21). The only measure that was significantly associated with PGY was time to complete ED exercise (p = 0.009). No associations with previous utilization of a robotic simulator in the resident's home training program were statistically significant. The ED exercise on the MdVT is most associated with CE and PGY compared with other exercises. Exposure of trainees to the MdVT in training programs does not appear to alter performance scores compared with trainees who do not have the simulator. © 2013 Published by Association of Program Directors in Surgery on behalf of Association of Program Directors in Surgery.
Validation of the Dynamic Wake Meander model with focus on tower loads
NASA Astrophysics Data System (ADS)
Larsen, T. J.; Larsen, G. C.; Pedersen, M. M.; Enevoldsen, K.; Madsen, H. A.
2017-05-01
This paper presents a comparison between measured and simulated tower loads for the Danish offshore wind farm Nysted 2. Previously, only limited full-scale experimental data containing tower load measurements have been published, and in many cases the measurements include only a limited range of wind speeds. In general, tower loads in wake conditions are very challenging to predict correctly in simulations. The Nysted project offers improved insight into this field, as six wind turbines located in the Nysted II wind farm have been instrumented to measure tower top and tower bottom moments. All recorded structural data have been organized in a database, which in addition contains relevant wind turbine SCADA data as well as relevant meteorological data - e.g. wind speed and wind direction - from an offshore mast located in the immediate vicinity of the wind farm. The database contains data from a period extending over a time span of more than 3 years. Based on the recorded data, the basic mechanisms driving the increased loading experienced by wind turbines operating in offshore wind farm conditions have been identified, characterized and modeled. The modeling is based on the Dynamic Wake Meandering (DWM) approach in combination with the state-of-the-art aeroelastic model HAWC2, and has previously, as well as in this study, shown good agreement with the measurements. The conclusions from the study have several parts. In general, the tower bending and yaw loads show a good agreement between measurements and simulations. However, there are situations that are still difficult to match. One is tower loads in single-wake operation near rated ambient wind speed for spacings around 7-8D. A specific target of the study was to investigate whether the largest tower fatigue loads are associated with a certain downstream distance. This has been identified in both simulations and measurements, though a rather flat optimum is seen in the measurements.
Scene-based nonuniformity correction with reduced ghosting using a gated LMS algorithm.
Hardie, Russell C; Baxley, Frank; Brys, Brandon; Hytla, Patrick
2009-08-17
In this paper, we present a scene-based nonuniformity correction (NUC) method using a modified adaptive least mean square (LMS) algorithm with a novel gating operation on the updates. The gating is designed to significantly reduce the ghosting artifacts produced by many scene-based NUC algorithms by halting updates when temporal variation is lacking. We define the algorithm and present a number of experimental results to demonstrate the efficacy of the proposed method in comparison to several previously published methods, including other LMS and constant-statistics based methods. The experimental results include simulated imagery and a real infrared image sequence. We show that the proposed method significantly reduces ghosting artifacts but has a slightly longer convergence time. (c) 2009 Optical Society of America
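For readers who want to see the structure of such an update rule, the Python sketch below adapts per-pixel gain and offset with an LMS step toward a spatially smoothed target and freezes the update wherever the frame-to-frame change is small. It is a minimal illustration: the simple threshold gate, the box-filter target, and the normalization are assumptions for the example, not the gating criterion or parameters of the paper.

    import numpy as np

    def box_blur(img, radius=3):
        # Box-filter spatial average used as the "desired" LMS target.
        k = 2 * radius + 1
        pad = np.pad(img, radius, mode='edge')
        out = np.zeros_like(img, dtype=float)
        for dy in range(k):
            for dx in range(k):
                out += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
        return out / (k * k)

    def gated_lms_nuc(frames, mu=0.05, gate_thresh=2.0):
        """Scene-based NUC: y = g*x + o per pixel, LMS-adapted, with updates
        gated off where temporal variation is lacking to limit ghosting."""
        g = np.ones_like(frames[0], dtype=float)
        o = np.zeros_like(frames[0], dtype=float)
        prev = frames[0].astype(float)
        corrected = []
        for raw in frames:
            x = raw.astype(float)
            y = g * x + o                             # corrected frame
            e = y - box_blur(y)                       # error w.r.t. spatial estimate
            gate = np.abs(x - prev) > gate_thresh     # update only where scene changed
            g -= mu * gate * e * x / (x.max() ** 2 + 1e-9)
            o -= mu * gate * e
            prev = x
            corrected.append(y)
        return corrected

Feeding a list of 2D detector frames (NumPy arrays) returns the corrected sequence; raising gate_thresh suppresses ghosting more aggressively at the cost of slower convergence, mirroring the trade-off reported above.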
Singular observation of the polarization-conversion effect for a gammadion-shaped metasurface
Lin, Chu-En; Yen, Ta-Jen; Yu, Chih-Jen; Hsieh, Cheng-Min; Lee, Min-Han; Chen, Chii-Chang; Chang, Cheng-Wei
2016-01-01
In this article, the polarization-conversion effects of a gammadion-shaped metasurface in transmission and reflection modes are discussed. In our experiment, the polarization-conversion effect of a gammadion-shaped metasurface is investigated because of the contribution of the phase and amplitude anisotropies. According to our experimental and simulated results, the polarization property of the first-order transmitted diffraction is dominated by linear anisotropy and has weak depolarization; the first-order reflected diffraction exhibits both linear and circular anisotropies and has stronger depolarization than the transmission mode. These results are different from previously published research. The Mueller matrix ellipsometer and polar decomposition method will aid in the investigation of the polarization properties of other nanostructures. PMID:26915332
Structure formation in Ag-X (X = Au, Cu) alloys synthesized far-from-equilibrium
NASA Astrophysics Data System (ADS)
Elofsson, V.; Almyras, G. A.; Lü, B.; Garbrecht, M.; Boyd, R. D.; Sarakinos, K.
2018-04-01
We employ sub-monolayer, pulsed Ag and Au vapor fluxes, along with deterministic growth simulations, and nanoscale probes to study structure formation in miscible Ag-Au films synthesized under far-from-equilibrium conditions. Our results show that nanoscale atomic arrangement is primarily determined by roughness build up at the film growth front, whereby larger roughness leads to increased intermixing between Ag and Au. These findings suggest a different structure formation pathway as compared to the immiscible Ag-Cu system for which the present study, in combination with previously published data, reveals that no significant roughness is developed, and the local atomic structure is predominantly determined by the tendency of Ag and Cu to phase-separate.
On the nullspace of TLS multi-station adjustment
NASA Astrophysics Data System (ADS)
Sterle, Oskar; Kogoj, Dušan; Stopar, Bojan; Kregar, Klemen
2018-07-01
In this article we present an analytic treatment of TLS multi-station least-squares adjustment with the main focus on the datum problem. In contrast to previously published research, the datum problem is theoretically analyzed and solved, with the solution based on deriving the nullspace of the mathematical model. The importance of solving the datum problem lies in a complete description of TLS multi-station adjustment solutions as a set of all minimally constrained least-squares solutions. On the basis of the known nullspace, the estimable parameters are described and the geometric interpretation of all minimally constrained least-squares solutions is presented. Finally, a simulated example is used to analyze the results of TLS multi-station minimally constrained and inner-constrained least-squares adjustment solutions.
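To make the datum discussion concrete, the free-network situation can be stated compactly (generic notation, not the article's): for observations l, a rank-deficient design matrix A, weight matrix P, and any matrix H whose columns span the nullspace (AH = 0), all minimally constrained solutions differ only by terms Hλ. The inner-constrained (minimum-norm) member additionally satisfies Hᵀx̂ = 0 and can be written as

    \hat{\mathbf{x}} = \left( \mathbf{A}^{\mathsf{T}} \mathbf{P} \mathbf{A} + \mathbf{H} \mathbf{H}^{\mathsf{T}} \right)^{-1} \mathbf{A}^{\mathsf{T}} \mathbf{P}\, \mathbf{l},

which is well defined because AᵀPl is orthogonal to the nullspace spanned by H. Estimable parameters are exactly those functions of the unknowns that are invariant under the addition of Hλ, i.e. invariant to the datum choice.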
2016-01-01
Although corporate sponsorship of research does not necessarily lead to biased results, in some industries it has resulted in the publication of inaccurate and misleading information. Some companies have hired scientific consulting firms to retrospectively calculate exposures to products that are no longer manufactured or sold. As an example, this paper reviews one such study – a litigation-engendered study of Union Carbide Corporation’s asbestos-containing product, Bakelite™. This analysis is based on previously secret documents produced as a result of litigation. The study published asbestos fiber exposure measurements that underestimated actual exposures to create doubt about the hazards associated with the manufacture and manipulation of Bakelite™. PMID:27128626
Improvements, testing and development of the ADM-τ sub-grid surface tension model for two-phase LES
NASA Astrophysics Data System (ADS)
Aniszewski, Wojciech
2016-12-01
In this paper, a specific subgrid term occurring in Large Eddy Simulation (LES) of two-phase flows is investigated. This and other subgrid terms are presented; we subsequently elaborate on the existing models for them and re-formulate the ADM-τ model for sub-grid surface tension previously published by these authors. This paper presents a substantial conceptual simplification over the original model version, accompanied by a decrease in its computational cost. At the same time, it addresses the issues the original model version faced, e.g. it introduces non-isotropic applicability criteria based on the resolved interface's principal curvature radii. Additionally, this paper introduces more thorough testing of the ADM-τ model, in both simple and complex flows.
A mechanical jig for measuring ankle supination and pronation torque in vitro and in vivo.
Fong, Daniel Tik-Pui; Chung, Mandy Man-Ling; Chan, Yue-Yan; Chan, Kai-Ming
2012-07-01
This study presents the design of a mechanical jig for evaluating ankle joint torque on both cadaveric and living ankles. A previous study showed that ankle sprain motion was a combination of plantarflexion and inversion. The device allows measurement of ankle supination and pronation torque with one simple axis in a single step motion. More importantly, the ankle orientation allows rotation starting from an anatomical position. Six cadaveric specimens and six human subjects were tested with simulated and voluntary rotation, respectively. The presented mechanical jig makes possible the determination of supination torque for studying ankle sprain injury and the estimation of pronation torque for examining peroneal muscle response. Copyright © 2012 IPEM. Published by Elsevier Ltd. All rights reserved.
Boutsioukis, C; Verhaagen, B; Walmsley, A D; Versluis, M; van der Sluis, L W M
2013-11-01
(i) To quantify in a simulated root canal model the file-to-wall contact during ultrasonic activation of an irrigant and to evaluate the effect of root canal size, file insertion depth, ultrasonic power, root canal level and previous training; (ii) To investigate the effect of file-to-wall contact on file oscillation. File-to-wall contact was measured during ultrasonic activation of the irrigant performed by 15 trained and 15 untrained participants in two metal root canal models. Results were analyzed by two 5-way mixed-design ANOVAs. The level of significance was set at P < 0.05. Additionally, high-speed visualizations, laser-vibrometer measurements and numerical simulations of the file oscillation were conducted. File-to-wall contact occurred in all cases during 20% of the activation time. Contact time was significantly shorter at high power (P < 0.001), when the file was positioned away from working length (P < 0.001), in the larger root canal (P < 0.001) and from the coronal towards the apical third of the root canal (P < 0.002), in most of the cases studied. Previous training did not show a consistent significant effect. File oscillation was affected by contact during 94% of the activation time. During wall contact, the file bounced back and forth against the wall at audible frequencies (ca. 5 kHz), but still performed the original 30 kHz oscillations. Travelling waves were identified on the file. The file oscillation was not dampened completely due to the contact, and hydrodynamic cavitation was detected. Considerable file-to-wall contact occurred during irrigant activation. Therefore, the term 'Passive Ultrasonic Irrigation' should be amended to 'Ultrasonically Activated Irrigation'. © 2013 International Endodontic Journal. Published by John Wiley & Sons Ltd.
Wijeakumar, Sobanawartiny; Ambrose, Joseph P.; Spencer, John P.; Curtu, Rodica
2017-01-01
A fundamental challenge in cognitive neuroscience is to develop theoretical frameworks that effectively span the gap between brain and behavior, between neuroscience and psychology. Here, we attempt to bridge this divide by formalizing an integrative cognitive neuroscience approach using dynamic field theory (DFT). We begin by providing an overview of how DFT seeks to understand the neural population dynamics that underlie cognitive processes through previous applications and comparisons to other modeling approaches. We then use previously published behavioral and neural data from a response selection Go/Nogo task as a case study for model simulations. Results from this study served as the ‘standard’ for comparisons with a model-based fMRI approach using dynamic neural fields (DNF). The tutorial explains the rationale and hypotheses involved in the process of creating the DNF architecture and fitting model parameters. Two DNF models, with similar structure and parameter sets, are then compared. Both models effectively simulated reaction times from the task as we varied the number of stimulus-response mappings and the proportion of Go trials. Next, we directly simulated hemodynamic predictions from the neural activation patterns from each model. These predictions were tested using general linear models (GLMs). Results showed that the DNF model that was created by tuning parameters to capture simultaneously trends in neural activation and behavioral data quantitatively outperformed a Standard GLM analysis of the same dataset. Further, by using the GLM results to assign functional roles to particular clusters in the brain, we illustrate how DNF models shed new light on the neural populations’ dynamics within particular brain regions. Thus, the present study illustrates how an interactive cognitive neuroscience model can be used in practice to bridge the gap between brain and behavior. PMID:29118459
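For orientation, the field dynamics at the core of such DNF models generically follow an Amari-type equation (textbook form; the specific kernels, resting levels, and inter-field couplings of the two models in the study are not reproduced here):

    \tau\, \dot{u}(x,t) = -u(x,t) + h + s(x,t) + \int w(x - x')\, g\big(u(x',t)\big)\, dx' + \xi(x,t),

where u is the activation over the metric dimension x, h < 0 the resting level, s the stimulus input, w a local-excitation/lateral-inhibition kernel, g a sigmoidal output function, and ξ noise. A common route to the hemodynamic predictions mentioned above, though the abstract does not spell out the exact step, is to convolve a summary of the suprathreshold field activity with a hemodynamic response function before entering it into the GLM.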
Stefanidis, Dimitrios; Hope, William W; Korndorffer, James R; Markley, Sarah; Scott, Daniel J
2010-04-01
Laparoscopic suturing is an advanced skill that is difficult to acquire. Simulator-based skills curricula have been developed that have been shown to transfer to the operating room. Currently available skills curricula need to be optimized. We hypothesized that mastering basic laparoscopic skills first would shorten the learning curve of a more complex laparoscopic task and reduce resource requirements for the Fundamentals of Laparoscopic Surgery suturing curriculum. Medical students (n = 20) with no previous simulator experience were enrolled in an IRB-approved protocol, pretested on the Fundamentals of Laparoscopic Surgery suturing model, and randomized into 2 groups. Group I (n = 10) trained (unsupervised) until proficiency levels were achieved on 5 basic tasks; Group II (n = 10) received no basic training. Both groups then trained (supervised) on the Fundamentals of Laparoscopic Surgery suturing model until previously reported proficiency levels were achieved. Two weeks later, they were retested; retention scores, training parameters, instruction requirements, and cost were compared between groups using t-tests. Baseline characteristics and performance were similar for both groups, and 9 of 10 subjects in each group achieved the proficiency levels. The initial performance on the simulator was better for Group I after basic skills training, and their suturing learning curve was shorter compared with Group II. In addition, Group I required less active instruction. Overall time required to finish the curriculum was similar for both groups, but the Group I training strategy cost less, with a savings of $148 per trainee. Teaching novices basic laparoscopic skills before a more complex laparoscopic task produces substantial cost savings. Additional studies are needed to assess the impact of such integrated curricula on ultimate educational benefit. Copyright (c) 2010 American College of Surgeons. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Inman, Matthew Clay
A novel, open-cathode direct methanol fuel cell (DMFC) has been designed and built by researchers at the University of North Florida and the University of Florida. Foremost among the advances of this system over previous DMFC architectures is a passive water recovery system which allows product water to replenish that consumed at the anode. This is enabled by a specially designed water pathway combined with a liquid barrier layer (LBL). The LBL membrane is positioned between the cathode catalyst layer and the cathode gas diffusion layer, and must exhibit high permeability and low diffusive resistance to both oxygen and water vapor, bulk hydrophobicity to hold back the product liquid water, and must remain electrically conductive. Maintaining water balance at optimum operating temperatures is problematic with the current LBL design, forcing the system to run at lower temperatures, decreasing the overall system efficiency. This research presents a novel approach to nanoporous membrane design whereby the flux of a given species is determined based upon the molecular properties of that species and those of the diffusing medium, the pore geometry, and the membrane thickness. A molecular dynamics (MD) model is developed for tracking Knudsen regime flows of a Lennard-Jones (LJ) fluid through an atomistic pore structure, hundreds of thousands of wall collision simulations are performed on the University of Florida HiPerGator supercomputer, and the generated trajectory information is used to develop number density and axial velocity profiles for use in a rigorous approach to total flux calculation that is absent in previously attempted MD models. Results are compared to other published approaches and diffusion data available in the literature. The impact of this study on various applications of membrane design is discussed and additional simulations and model improvements are outlined for future consideration.
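For orientation, the continuum-level quantities that motivate a Knudsen-regime treatment are the following standard kinetic-theory expressions; they are not taken from the dissertation, whose MD model instead builds the flux from simulated wall-collision statistics:

    \mathrm{Kn} = \frac{\lambda}{d}, \qquad D_K = \frac{d}{3} \sqrt{\frac{8RT}{\pi M}}, \qquad J \approx D_K \frac{\Delta C}{L},

where λ is the molecular mean free path, d the pore diameter, M the molar mass of the diffusing species (water vapor or oxygen here), ΔC the concentration difference across the membrane, and L its thickness; Kn ≫ 1 marks the regime in which wall collisions dominate over intermolecular collisions.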
van Empel, Pieter J; Verdam, Mathilde G E; Strypet, Magnus; van Rijssen, Lennart B; Huirne, Judith A; Scheele, Fedde; Bonjer, H Jaap; Meijerink, W Jeroen
2012-01-01
Knot tying and suturing skills in minimally invasive surgery (MIS) differ markedly from those in open surgery. Appropriate MIS training is mandatory before implementation into practice. The Advanced Suturing Course (ASC) is a structured, simulator-based training course that includes a 6-week autonomous training period at home on a traditional laparoscopic box trainer. Previous research did not demonstrate significant progress in laparoscopic skills after this training period. This study aims to identify factors determining autonomous training on a laparoscopic box trainer at home. Residents (n = 97) attending 1 of 7 ASC courses between January 2009 and June 2011 were consecutively included. After 6 weeks of autonomous training, a questionnaire was completed. A random subgroup of 30 residents was requested to keep a time log. All residents received an online survey after attending the ASC. We performed outcome comparison to examine the accuracy of individual responses. Out of 97 residents, the main motives for noncompliant autonomous training included a lack of (training) time after working hours (n = 80, 83.3%), preferred practice time during working hours (n = 76, 31.6%), or a surgical interest other than MIS (n = 79, 15.2%). Previously set training goals would encourage autonomous training according to 27.8% (n = 18) of residents. Thirty participants submitted a time log and reported an average 76.5-minute weekly training time. All residents confirmed that autonomous home practice on a laparoscopic box trainer is valuable. Autonomous practice should be structured and inclusive of adequate and sufficient feedback points. A minimally required practice time should be set. An obligatory assessment, including a corresponding consequence, should be conducted. Compliance herewith may result in increased voluntary (autonomous) simulator-based (laparoscopic) training by residents. Copyright © 2012 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
Chapa, Joaquin; An, Gary; Kulkarni, Swati A
2016-01-01
Breast cancer, the product of numerous rare mutational events that occur over an extended time period, presents substantial challenges to investigators interested in studying the transformation from normal breast epithelium to malignancy using traditional laboratory methods, particularly with respect to characterizing transitional and pre-malignant states. Dynamic computational modeling can provide insight into these pathophysiological dynamics, and as such we use a previously validated agent-based computational model of the mammary epithelium (the DEABM) to investigate the probabilistic mechanisms by which normal populations of ductal cells could transform into states replicating features of both pre-malignant breast lesions and a diverse set of breast cancer subtypes. The DEABM consists of simulated cellular populations governed by algorithms based on accepted and previously published cellular mechanisms. Cells respond to hormones and undergo mitosis, apoptosis, and cellular differentiation. Heritable mutations to 12 genes prominently implicated in breast cancer are acquired via a probabilistic mechanism. Three thousand simulations of the 40-year period of menstrual cycling were run in wild-type (WT) and BRCA1-mutated groups. Simulations were analyzed with respect to development of hyperplastic states, incidence of malignancy, hormone receptor and HER-2 status, frequency of mutation of particular genes, and whether mutations were early events in carcinogenesis. Cancer incidence in WT (2.6%) and BRCA1-mutated (45.9%) populations closely matched published epidemiologic rates. Hormone receptor expression profiles in both WT and BRCA groups also closely matched epidemiologic data. Hyperplastic populations carried more mutations than normal populations, and the mutations were similar to early mutations found in ER+ tumors (telomerase, E-cadherin, TGFB, RUNX3, p < .01). ER- tumors carried significantly more mutations and carried more early mutations in BRCA1, c-MYC and genes associated with epithelial-mesenchymal transition. The DEABM generates diverse tumors that express tumor markers consistent with epidemiologic data. The DEABM also generates non-invasive, hyperplastic populations, analogous to atypia or ductal carcinoma in situ (DCIS), via mutations to genes known to be present in hyperplastic lesions and as early mutations in breast cancers. The results demonstrate that agent-based models are well-suited to studying tumor evolution through stages of carcinogenesis and have the potential to be used to develop prevention and treatment strategies.
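The probabilistic mutation-accumulation structure of such a model can be illustrated with a much-reduced Monte Carlo sketch. This is not the DEABM: the gene list, per-cycle mutation probability, transformation criterion, and absence of cell division or inheritance are all simplifying assumptions chosen only to show the general mechanism.

```python
# Minimal sketch (NOT the DEABM): independent cells accumulate heritable
# mutations in a small gene set over ~40 years of monthly cycles. All
# parameters and the "transformed" criterion are invented for illustration.
import random

GENES = ["BRCA1", "TP53", "MYC", "E-cadherin", "telomerase", "RUNX3"]
P_MUT_PER_GENE_PER_CYCLE = 1e-4   # assumed rate
CYCLES = 40 * 12                  # ~40 years of menstrual cycling
N_CELLS = 2_000
TRANSFORM_THRESHOLD = 3           # assumed number of hits for a malignant-like state

def run_once(brca1_germline=False):
    transformed = 0
    for _ in range(N_CELLS):
        hits = {"BRCA1"} if brca1_germline else set()
        for _ in range(CYCLES):
            for g in GENES:
                if g not in hits and random.random() < P_MUT_PER_GENE_PER_CYCLE:
                    hits.add(g)
        if len(hits) >= TRANSFORM_THRESHOLD:
            transformed += 1
    return transformed / N_CELLS

print("WT fraction transformed:   ", run_once(False))
print("BRCA1 fraction transformed:", run_once(True))
```

Even this toy version reproduces the qualitative pattern that a germline hit raises lifetime transformation risk by roughly an order of magnitude.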
Punzalan, Florencio Rusty; Kunieda, Yoshitoshi; Amano, Akira
2015-01-01
Clinical and experimental studies involving human hearts can have certain limitations. Methods such as computer simulations can be an important alternative or supplemental tool. Physiological simulation at the tissue or organ level typically involves the handling of partial differential equations (PDEs). Boundary conditions and distributed parameters, such as those used in pharmacokinetics simulation, add to the complexity of the PDE solution. These factors can tailor PDE solutions and their corresponding program code to specific problems. Boundary condition and parameter changes in the customized code are usually error-prone and time-consuming. We propose a general approach for handling PDEs and boundary conditions in computational models using a replacement scheme for discretization. This study is an extension of a program generator that we introduced in a previous publication. The program generator can generate code for multi-cell simulations of cardiac electrophysiology. Improvements to the system allow it to handle simultaneous equations in the biological function model as well as implicit PDE numerical schemes. The replacement scheme involves substituting all partial differential terms with numerical solution equations. Once the model and boundary equations are discretized with the numerical solution scheme, instances of the equations are generated to undergo dependency analysis. The result of the dependency analysis is then used to generate the program code. The resulting program code is in the Java or C programming language. To validate the automatic handling of boundary conditions in the program code generator, we generated simulation code using the FHN, Luo-Rudy 1, and Hund-Rudy cell models and ran cell-to-cell coupling and action potential propagation simulations. One of the simulations is based on a published experiment, and the simulation results are compared with the experimental data. We conclude that the proposed program code generator can be used to generate code for physiological simulations and provides a tool for studying cardiac electrophysiology. PMID:26356082
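The core idea of the replacement scheme, substituting each partial-derivative term with its discretized expression, can be written out by hand for a simple case. The sketch below shows the kind of code such a generator might emit for a 1D FitzHugh-Nagumo cable model with no-flux boundaries; the parameter values and boundary treatment are generic illustrative choices, not those of the cited cell models.

```python
# Hand-written sketch of the replacement scheme: the PDE term D * d2V/dx2 is
# replaced by its explicit finite-difference expression. Generic FHN
# parameters; not generated by, nor equivalent to, the paper's system.
import numpy as np

nx, dx, dt, steps = 200, 0.1, 0.01, 5000
D, a, b, eps = 0.1, 0.7, 0.8, 0.08

V = np.zeros(nx)
W = np.zeros(nx)
V[:10] = 1.0        # stimulate one end to launch a propagating wave

for _ in range(steps):
    # replacement: d2V/dx2 -> (V[i-1] - 2 V[i] + V[i+1]) / dx^2
    Vp = np.pad(V, 1, mode="edge")            # no-flux (Neumann) boundaries
    lap = (Vp[:-2] - 2.0 * V + Vp[2:]) / dx**2
    dV = V - V**3 / 3.0 - W + D * lap
    dW = eps * (V + a - b * W)
    V += dt * dV
    W += dt * dW

print("wavefront location (index of max V):", int(np.argmax(V)))
```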
Can small field diode correction factors be applied universally?
Liu, Paul Z Y; Suchowerska, Natalka; McKenzie, David R
2014-09-01
Diode detectors are commonly used in dosimetry, but have been reported to over-respond in small fields. Diode correction factors have been reported in the literature. The purpose of this study is to determine whether correction factors for a given diode type can be universally applied over a range of irradiation conditions including beams of different qualities. A mathematical relation of diode over-response as a function of field size was developed using previously published experimental data in which diodes were compared to an air-core scintillation dosimeter. Correction factors calculated from the mathematical relation were then compared with those available in the literature. The mathematical relation established between diode over-response and field size was found to predict the measured diode correction factors for fields between 5 and 30 mm in width. The average deviation between measured and predicted over-response was 0.32% for IBA SFD and PTW Type E diodes. Diode over-response was found not to depend strongly on the type of linac, the method of collimation, or the measurement depth. The mathematical relation was found to agree with published diode correction factors derived from Monte Carlo simulations and measurements, indicating that correction factors are robust in their transportability between different radiation beams. Copyright © 2014. Published by Elsevier Ireland Ltd.
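The workflow of turning an over-response relation into correction factors can be illustrated with a short fit. The functional form and the synthetic data below are hypothetical stand-ins (the paper's actual relation and measurements are not reproduced here); the point is only that, once OR(w) is fitted, the correction factor follows as k(w) = 1 / OR(w).

```python
# Illustrative sketch only: fit a hypothetical saturating over-response curve
# to synthetic data and convert it into field-size-dependent correction
# factors. Neither the data nor the functional form is from the study.
import numpy as np
from scipy.optimize import curve_fit

# synthetic "measurements": field width (mm) vs diode/scintillator dose ratio
width = np.array([5.0, 7.5, 10.0, 15.0, 20.0, 30.0])
over_response = np.array([1.060, 1.042, 1.030, 1.016, 1.008, 1.002])

def or_model(w, a, b):
    """Hypothetical form: over-response decays toward 1 as the field widens."""
    return 1.0 + a * np.exp(-w / b)

popt, _ = curve_fit(or_model, width, over_response, p0=(0.1, 10.0))

for w in (5.0, 10.0, 20.0, 30.0):
    k = 1.0 / or_model(w, *popt)
    print(f"field width {w:4.1f} mm -> correction factor {k:.3f}")
```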
NASA Astrophysics Data System (ADS)
Abu-Zayyad, T.; Aida, R.; Allen, M.; Anderson, R.; Azuma, R.; Barcikowski, E.; Belz, J. W.; Bergman, D. R.; Blake, S. A.; Cady, R.; Cheon, B. G.; Chiba, J.; Chikawa, M.; Cho, E. J.; Cho, W. R.; Fujii, H.; Fujii, T.; Fukuda, T.; Fukushima, M.; Hanlon, W.; Hayashi, K.; Hayashi, Y.; Hayashida, N.; Hibino, K.; Hiyama, K.; Honda, K.; Iguchi, T.; Ikeda, D.; Ikuta, K.; Inoue, N.; Ishii, T.; Ishimori, R.; Ito, H.; Ivanov, D.; Iwamoto, S.; Jui, C. C. H.; Kadota, K.; Kakimoto, F.; Kalashev, O.; Kanbe, T.; Kasahara, K.; Kawai, H.; Kawakami, S.; Kawana, S.; Kido, E.; Kim, H. B.; Kim, H. K.; Kim, J. H.; Kim, J. H.; Kitamoto, K.; Kitamura, S.; Kitamura, Y.; Kobayashi, K.; Kobayashi, Y.; Kondo, Y.; Kuramoto, K.; Kuzmin, V.; Kwon, Y. J.; Lan, J.; Lim, S. I.; Lundquist, J. P.; Machida, S.; Martens, K.; Matsuda, T.; Matsuura, T.; Matsuyama, T.; Matthews, J. N.; Myers, I.; Minamino, M.; Miyata, K.; Murano, Y.; Nagataki, S.; Nakamura, T.; Nam, S. W.; Nonaka, T.; Ogio, S.; Ogura, J.; Ohnishi, M.; Ohoka, H.; Oki, K.; Oku, D.; Okuda, T.; Ono, M.; Oshima, A.; Ozawa, S.; Park, I. H.; Pshirkov, M. S.; Rodriguez, D. C.; Roh, S. Y.; Rubtsov, G.; Ryu, D.; Sagawa, H.; Sakurai, N.; Sampson, A. L.; Scott, L. M.; Shah, P. D.; Shibata, F.; Shibata, T.; Shimodaira, H.; Shin, B. K.; Shin, J. I.; Shirahama, T.; Smith, J. D.; Sokolsky, P.; Sonley, T. J.; Springer, R. W.; Stokes, B. T.; Stratton, S. R.; Stroman, T. A.; Suzuki, S.; Takahashi, Y.; Takeda, M.; Taketa, A.; Takita, M.; Tameda, Y.; Tanaka, H.; Tanaka, K.; Tanaka, M.; Thomas, S. B.; Thomson, G. B.; Tinyakov, P.; Tkachev, I.; Tokuno, H.; Tomida, T.; Troitsky, S.; Tsunesada, Y.; Tsutsumi, K.; Tsuyuguchi, Y.; Uchihori, Y.; Udo, S.; Ukai, H.; Vasiloff, G.; Wada, Y.; Wong, T.; Yamakawa, Y.; Yamane, R.; Yamaoka, H.; Yamazaki, K.; Yang, J.; Yoneda, Y.; Yoshida, S.; Yoshii, H.; Zollinger, R.; Zundel, Z.
2013-08-01
We present a measurement of the energy spectrum of ultra-high-energy cosmic rays performed by the Telescope Array experiment using monocular observations from its two new FADC-based fluorescence detectors. After a short description of the experiment, we describe the data analysis and event reconstruction procedures. Since the aperture of the experiment must be calculated by Monte Carlo simulation, we describe this calculation and the comparisons of simulated and real data used to verify the validity of the aperture calculation. Finally, we present the energy spectrum calculated from the merged monocular data sets of the two FADC-based detectors, and also the combination of this merged spectrum with an independent, previously published monocular spectrum measurement performed by Telescope Array's third fluorescence detector [T. Abu-Zayyad et al., The energy spectrum of Telescope Array's middle drum detector and the direct comparison to the high resolution fly's eye experiment, Astroparticle Physics 39 (2012) 109-119, http://dx.doi.org/10.1016/j.astropartphys.2012.05.012, Available from:
Homogeneous Freezing of Water Droplets and its Dependence on Droplet Size
NASA Astrophysics Data System (ADS)
Schmitt, Thea; Möhler, Ottmar; Höhler, Kristina; Leisner, Thomas
2014-05-01
The formulation and parameterisation of microphysical processes in tropospheric clouds, such as phase transitions, is still a challenge for weather and climate models. This includes the homogeneous freezing of supercooled water droplets, since this is an important process in deep convective systems, where almost pure water droplets may stay liquid until homogeneous freezing occurs at temperatures around 238 K. Though homogeneous ice nucleation in supercooled water is considered to be well understood, recent laboratory experiments with typical cloud droplet sizes showed nucleation rate coefficients one to two orders of magnitude smaller than previous literature results, including earlier results from experiments with single levitated water droplets and from cloud simulation experiments at the AIDA (Aerosol Interaction and Dynamics in the Atmosphere) facility. This motivated us to re-analyse homogeneous droplet freezing experiments conducted during previous years at the AIDA cloud chamber. This cloud chamber has a volume of 84 m³ and operates under atmospherically relevant conditions within wide ranges of temperature, pressure and humidity, whereby investigations of both tropospheric mixed-phase clouds and cirrus clouds can be realised. By controlled adiabatic expansions, the ascent of an air parcel in the troposphere can be simulated. According to our new results and their comparison to the results from single levitated droplet experiments, the homogeneous freezing of water droplets seems to be a volume-dependent process, at least for droplets as small as a few micrometers in diameter. A contribution of surface-induced freezing can be ruled out, in agreement with previous conclusions from the single droplet experiments. The obtained volume nucleation rate coefficients are in good agreement, within error bars, with some previous literature data, including our own results from earlier AIDA experiments, but they do not agree with the recently published lower volume nucleation rate coefficients. This contribution will show the results from the re-analysis of AIDA homogeneous freezing experiments with pure water droplets and will discuss the comparison to the literature data.
Wain, Louise V.; Pedroso, Inti; Landers, John E.; Breen, Gerome; Shaw, Christopher E.; Leigh, P. Nigel; Brown, Robert H.
2009-01-01
Background The genetic contribution to sporadic amyotrophic lateral sclerosis (ALS) has not been fully elucidated. There are increasing efforts to characterise the role of copy number variants (CNVs) in human diseases; two previous studies concluded that CNVs may influence risk of sporadic ALS, with multiple rare CNVs more important than common CNVs. A little-explored issue surrounding genome-wide CNV association studies is that of post-calling filtering and merging of raw CNV calls. We undertook simulations to define filter thresholds and considered optimal ways of merging overlapping CNV calls for association testing, taking into consideration possibly overlapping or nested, but distinct, CNVs and boundary estimation uncertainty. Methodology and Principal Findings In this study we screened Illumina 300K SNP genotyping data from 730 ALS cases and 789 controls for copy number variation. Following quality control filters using thresholds defined by simulation, a total of 11321 CNV calls were made across 575 cases and 621 controls. Using region-based and gene-based association analyses, we identified several loci showing nominally significant association. However, the choice of criteria for combining calls for association testing has an impact on the ranking of the results by their significance. Several loci which were previously reported as being associated with ALS were identified here. However, of another 15 genes previously reported as exhibiting ALS-specific copy number variation, only four exhibited copy number variation in this study. Potentially interesting novel loci, including EEF1D, a translation elongation factor involved in the delivery of aminoacyl tRNAs to the ribosome (a process which has previously been implicated in genetic studies of spinal muscular atrophy) were identified but must be treated with caution due to concerns surrounding genomic location and platform suitability. Conclusions and Significance Interpretation of CNV association findings must take into account the effects of filtering and combining CNV calls when based on early genome-wide genotyping platforms and modest study sizes. PMID:19997636
15 CFR 10.10 - Review of published standards.
Code of Federal Regulations, 2010 CFR
2010-01-01
Title 15, Commerce and Foreign Trade, DEVELOPMENT OF VOLUNTARY PRODUCT STANDARDS, § 10.10 Review of published standards. (a) Each standard published... considered until a replacement standard is published. (b) Each standard published under these or previous...
Application of SEAWAT to select variable-density and viscosity problems
Dausman, Alyssa M.; Langevin, Christian D.; Thorne, Danny T.; Sukop, Michael C.
2010-01-01
SEAWAT is a combined version of MODFLOW and MT3DMS, designed to simulate three-dimensional, variable-density, saturated groundwater flow. The most recent version of the SEAWAT program, SEAWAT Version 4 (or SEAWAT_V4), supports equations of state for fluid density and viscosity. In SEAWAT_V4, fluid density can be calculated as a function of one or more MT3DMS species and, optionally, fluid pressure. Fluid viscosity is calculated as a function of one or more MT3DMS species, and the program also includes additional functions for representing the dependence of fluid viscosity on temperature. This report documents testing of and experimentation with SEAWAT_V4 on six previously published problems that include various combinations of density-dependent flow due to temperature variations and/or concentration variations of one or more species. Some of the problems also include variations in viscosity that result from temperature differences in water and oil. The results of SEAWAT_V4 are generally consistent with other published results, with minor differences considered acceptable.
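The kind of equation of state described above can be sketched in a few lines: density varies linearly with solute concentration and temperature, and viscosity varies with temperature through an empirical expression. The coefficient values below are common illustrative defaults for seawater-like problems, not values taken from this report.

```python
# Sketch of a linear density equation of state and a temperature-dependent
# viscosity relation of the type SEAWAT_V4 supports. Coefficients are
# illustrative assumptions, not the report's inputs.
def fluid_density(conc, temp, rho_ref=1000.0, c_ref=0.0, t_ref=25.0,
                  drho_dc=0.7, drho_dt=-0.375):
    """Density (kg/m^3) from solute concentration (kg/m^3) and temperature (C)."""
    return rho_ref + drho_dc * (conc - c_ref) + drho_dt * (temp - t_ref)

def fluid_viscosity(temp):
    """Dynamic viscosity (kg/(m s)) of water vs temperature (C); the empirical
    form used here is a common choice and should be treated as an assumption."""
    return 2.394e-5 * 10.0 ** (248.37 / (temp + 133.15))

print(fluid_density(conc=35.0, temp=25.0))    # ~1024.5 kg/m^3, seawater-like
print(fluid_viscosity(25.0))                  # ~8.9e-4 kg/(m s)
```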
Full core analysis of IRIS reactor by using MCNPX.
Amin, E A; Bashter, I I; Hassan, Nabil M; Mustafa, S S
2016-07-01
This paper describes a neutronic analysis of the fresh-fuelled IRIS (International Reactor Innovative and Secure) reactor using the MCNPX code. The analysis included criticality calculations, radial and axial power distributions, the nuclear peaking factor, and the axial offset percentage at the beginning of the fuel cycle. The effective multiplication factor obtained with the MCNPX code is compared with previous calculations by the HELIOS/NESTLE, CASMO/SIMULATE, modified CORD-2 nodal, and SAS2H/KENO-V code systems. The k-eff value obtained by MCNPX is found to be closest to the CORD-2 value. The radial and axial powers are compared with other published results obtained using the SAS2H/KENO-V code. Moreover, the WIMS-D5 code is used to study the effect of enriched boron in the form of ZrB2 on the effective multiplication factor (k-eff) of the fuel pin. In this part of the calculation, k-eff is calculated at different concentrations of boron-10 (in mg/cm) at different stages of burnup of the unit cell. The results of this part are compared with published results obtained with the HELIOS code. Copyright © 2016 Elsevier Ltd. All rights reserved.
Information Management for Unmanned Systems: Combining DL-Reasoning with Publish/Subscribe
NASA Astrophysics Data System (ADS)
Moser, Herwig; Reichelt, Toni; Oswald, Norbert; Förster, Stefan
Sharing capabilities and information between collaborating entities by using modern information and communication technology is a core principle in complex distributed civil or military mission scenarios. Previous work proved the suitability of Service-oriented Architectures for modelling and sharing the participating entities' capabilities. Although it provides a satisfactory model for capability sharing, pure service orientation curtails expressiveness for information exchange compared with dedicated data-centric communication principles. In this paper we introduce an Information Management System which combines OWL ontologies and automated reasoning with Publish/Subscribe systems, providing for a shared but decoupled data model. While confirming existing related research results, we emphasise the novelty of applying Semantic Web technologies, and the lack of practical experience with them, in areas other than those originally intended, namely aiding decision support and software design in the context of a mission scenario for an unmanned system. Experiments within a complex simulation environment show the immediate benefits of a semantic information-management and -dissemination platform: clear separation of concerns in code and data model, increased service re-usability and extensibility, as well as regulation of data flow and of the respective system behaviour through declarative rules.
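The decoupled, content-based data model can be illustrated with a toy in-memory publish/subscribe broker whose subscriptions filter on a shared vocabulary of typed facts. This greatly simplifies the OWL/reasoner-backed system described above; all class names and fact fields below are invented for illustration.

```python
# Toy sketch of content-based publish/subscribe with a shared vocabulary.
# Stands in for (and simplifies) the ontology-backed system in the paper.
from collections import defaultdict

class Broker:
    def __init__(self):
        self._subs = defaultdict(list)       # topic -> list of (predicate, callback)

    def subscribe(self, topic, predicate, callback):
        self._subs[topic].append((predicate, callback))

    def publish(self, topic, fact):
        for predicate, callback in self._subs[topic]:
            if predicate(fact):              # content-based filtering
                callback(fact)

broker = Broker()
broker.subscribe(
    "track",
    predicate=lambda f: f.get("type") == "GroundVehicle" and f.get("speed_mps", 0) > 10,
    callback=lambda f: print("fast ground vehicle detected:", f["id"]),
)

broker.publish("track", {"id": "T-17", "type": "GroundVehicle", "speed_mps": 14.2})
broker.publish("track", {"id": "T-18", "type": "Aircraft", "speed_mps": 90.0})
```

In the full system, the predicate role is played by ontology classification and declarative rules rather than hard-coded lambdas, which is what gives the decoupling between producers and consumers.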
NASA Astrophysics Data System (ADS)
Reisner, Jon; D'Angelo, Gennaro; Koo, Eunmo; Even, Wesley; Hecht, Matthew; Hunke, Elizabeth; Comeau, Darin; Bos, Randall; Cooley, James
2018-03-01
We present a multiscale study examining the impact of a regional exchange of nuclear weapons on global climate. Our models investigate multiple phases of the effects of nuclear weapons usage, including growth and rise of the nuclear fireball, ignition and spread of the induced firestorm, and comprehensive Earth system modeling of the oceans, land, ice, and atmosphere. This study follows from the scenario originally envisioned by Robock, Oman, Stenchikov, et al. (2007, https://doi.org/10.5194/acp-7-2003-2007), based on the analysis of Toon et al. (2007, https://doi.org/10.5194/acp-7-1973-2007), which assumes a regional exchange between India and Pakistan of fifty 15 kt weapons detonated by each side. We expand this scenario by modeling the processes that lead to production of black carbon, in order to refine the black carbon forcing estimates of these previous studies. When the Earth system model is initiated with 5 × 10⁹ kg of black carbon in the upper troposphere (approximately from 9 to 13 km), the impact on climate variables such as global temperature and precipitation in our simulations is similar to that predicted by previously published work. However, while our thorough simulations of the firestorm produce about 3.7 × 10⁹ kg of black carbon, we find that the vast majority of the black carbon never reaches an altitude above weather systems (approximately 12 km). Therefore, our Earth system model simulations conducted with model-informed atmospheric distributions of black carbon produce significantly lower global climatic impacts than assessed in prior studies, as the carbon at lower altitudes is more quickly removed from the atmosphere. In addition, our model ensembles indicate that statistically significant effects on global surface temperatures are limited to the first 5 years and are much smaller in magnitude than those shown in earlier works. None of the simulations produced a nuclear winter effect. We find that the effects on global surface temperatures are not uniform and are concentrated primarily around the highest arctic latitudes, dramatically reducing the global impact on human health and agriculture compared with that reported by earlier studies. Our analysis demonstrates that significant global cooling from a limited exchange scenario as envisioned in previous studies is highly unlikely, a conclusion supported by examination of natural analogs, such as large forest fires and volcanic eruptions.
Reisner, Jon Michael; D'Angelo, Gennaro; Koo, Eunmo; ...
2018-02-13
In this paper, we present a multi-scale study examining the impact of a regional exchange of nuclear weapons on global climate. Our models investigate multiple phases of the effects of nuclear weapons usage, including growth and rise of the nuclear fireball, ignition and spread of the induced firestorm, and comprehensive Earth system modeling of the oceans, land, ice, and atmosphere. This study follows from the scenario originally envisioned by Robock et al. (2007a), based on the analysis of Toon et al. (2007), which assumes a regional exchange between India and Pakistan of fifty 15-kiloton weapons detonated by each side. We expand this scenario by modeling the processes that lead to production of black carbon, in order to refine the black carbon forcing estimates of these previous studies. When the Earth system model is initiated with 5 × 10⁹ kg of black carbon in the upper troposphere (approximately 9 to 13 km), the impact on climate variables such as global temperature and precipitation in our simulations is similar to that predicted by previously published work. However, while our thorough simulations of the firestorm produce about 3.7 × 10⁹ kg of black carbon, we find that the vast majority of the black carbon never reaches an altitude above weather systems (approximately 12 km). Therefore, our Earth system model simulations conducted with model-informed atmospheric distributions of black carbon produce significantly lower global climatic impacts than assessed in prior studies, as the carbon at lower altitudes is more quickly removed from the atmosphere. In addition, our model ensembles indicate that statistically significant effects on global surface temperatures are limited to the first 5 years and are much smaller in magnitude than those shown in earlier works. None of the simulations produced a nuclear winter effect. We find that the effects on global surface temperatures are not uniform and are concentrated primarily around the highest arctic latitudes, dramatically reducing the global impact on human health and agriculture compared with that reported by earlier studies. Lastly, our analysis demonstrates that significant global cooling from a limited exchange scenario as envisioned in the previous studies is highly unlikely, a conclusion supported by examination of natural analogs, such as large forest fires and volcanic eruptions.
Understanding bulk behavior of particulate materials from particle scale simulations
NASA Astrophysics Data System (ADS)
Deng, Xiaoliang
Particulate materials play an increasingly significant role in various industries, such as pharmaceutical manufacturing, food, mining, and civil engineering. The objective of this research is to better understand bulk behaviors of particulate materials from particle-scale simulations. Packing properties of assemblies of particles are investigated first, focusing on the effects of particle size, surface energy, and aspect ratio on the coordination number, porosity, and packing structures. The simulation results show that particle size, surface energy, and aspect ratio all influence the porosity of the packing to various degrees. The heterogeneous force networks within a particle assembly under external compressive loading are investigated as well. The results show that coarse-coarse contacts dominate the strong network and coarse-fine contacts dominate the total network. Next, DEM models are developed to simulate the particle dynamics inside a conical screen mill (comil) and a magnetically assisted impaction mixer (MAIM), both of which are important particle-processing devices. For the comil, the mean residence time (MRT), the spatial distribution of particles, and the collision dynamics between particles, as well as between particles and the vessel geometry, are examined as a function of operating parameters such as impeller speed, screen hole size, open area, and feed rate. The simulation results can help to better understand dry-coating experimental results obtained using the comil. For the MAIM system, the magnetic force is incorporated into the contact model, allowing it to describe the interactions between magnets. The simulation results reveal the connections between the homogeneity of the mixture and particle-scale variables such as the size of the magnets and the surface energy of the non-magnets. In particular, at a fixed mass ratio of magnets to non-magnets and fixed surface energy, smaller magnets lead to better homogeneity of mixing, which is in good agreement with previously published experimental results. Last but not least, numerical simulations, along with theoretical analysis, are performed to investigate the interparticle force of dry-coated particles. A model is derived that can be used to predict the probabilities of host-host (HH), host-guest (HG), and guest-guest (GG) contacts. The results indicate that there are three different regions dominated by HH, HG, and GG contacts, respectively. Moreover, the critical SAC for the transition from HG to GG contacts is lower than the previously estimated value. In summary, particle packing, particle dynamics associated with various particle-processing devices, and the interparticle force of dry-coated particles are investigated in this thesis. The results show that particle-scale information such as the coordination number, collision dynamics, and contact forces between particles can help to better understand the bulk properties of assemblies of individual particles.
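The force law at the heart of such DEM models can be sketched in a few lines. The linear spring-dashpot normal contact below is the standard building block; the stiffness, damping, and particle properties are illustrative values, not those used in the thesis, and tangential friction and the magnetic force extension are omitted.

```python
# Minimal sketch of a linear spring-dashpot normal contact force between two
# spheres, the standard DEM building block. Parameter values are illustrative.
import numpy as np

def normal_contact_force(x1, x2, v1, v2, r1, r2, k=1.0e4, c=5.0):
    """Return the contact force acting on particle 1 from particle 2."""
    d = x2 - x1
    dist = np.linalg.norm(d)
    overlap = (r1 + r2) - dist
    if overlap <= 0.0:
        return np.zeros(3)                   # no contact
    n = d / dist                             # unit normal from 1 toward 2
    rel_vn = np.dot(v1 - v2, n)              # normal approach speed
    fn = k * overlap + c * rel_vn            # spring + dashpot magnitude
    return -fn * n                           # pushes particle 1 away from 2

f = normal_contact_force(
    x1=np.array([0.0, 0.0, 0.0]), x2=np.array([0.0018, 0.0, 0.0]),
    v1=np.array([0.01, 0.0, 0.0]), v2=np.zeros(3),
    r1=0.001, r2=0.001,
)
print("contact force on particle 1:", f)
```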
Jaffer, Usman; Normahani, Pasha; Singh, Prashant; Aslam, Mohammed; Standfield, Nigel J
2015-01-01
In vascular surgery, duplex ultrasonography is a valuable diagnostic tool in patients with peripheral vascular disease, and there is increasing demand for vascular surgeons to be able to perform duplex scanning. This study evaluates the role of a novel simulation training package on vascular ultrasound (US) skill acquisition. A total of 19 novices measured a predefined stenosis in a simulated pulsatile vessel using both peak systolic velocity ratio (PSVR) and diameter reduction (DR) methods before and after a short period of training using a simulated training package. The training package consisted of a simulated pulsatile vessel phantom, a set of instructional videos, the duplex ultrasound objective structured assessment of technical skills (DUOSATS) tool, and a portable US scanner. Quantitative metrics (procedure time, percentage error using PSVR and DR methods, DUOSAT scores, and global rating scores) before and after training were compared. Subjects spent a median time of 144 mins (IQR: 60-195) training using the simulation package. Subjects exhibited statistically significant improvements when comparing pretraining and posttraining DUOSAT scores (pretraining = 17 [16-19.3] vs posttraining = 30 [27.8-31.8]; p < 0.01), global rating score (pretraining = 1 [1-2] vs posttraining = 4 [3.8-4]; p < 0.01), and percentage error using both the DR (pretraining = 12.6% [9-29.6] vs posttraining = 10.3% [8.9-11.1]; p = 0.03) and PSVR (pretraining = 60% [40-60] vs posttraining = 20% [6.7-20]; p < 0.01) methods. In this study, subjects with no previous practical US experience developed the ability to both acquire and interpret arterial duplex images in a pulsatile simulated phantom following a short period of goal-directed training using a simulation training package. A simulation training package may be a valuable tool for integration into a vascular training program. However, further work is needed to explore whether these newly attained skills are translated into clinical assessment. Crown Copyright © 2014. Published by Elsevier Inc. All rights reserved.
Yang, Guowei; You, Shengzui; Bi, Meihua; Fan, Bing; Lu, Yang; Zhou, Xuefang; Li, Jing; Geng, Hujun; Wang, Tianshu
2017-09-10
Free-space optical (FSO) communication utilizing a modulating retro-reflector (MRR) is an innovative way to convey information between the traditional optical transceiver and the semi-passive MRR unit that reflects optical signals. The reflected signals experience turbulence-induced fading in the double-pass channel, which is very different from that in the traditional single-pass FSO channel. In this paper, we consider the corner cube reflector (CCR) as the retro-reflective device in the MRR. A general geometrical model of the CCR is established based on the ray tracing method to describe the ray trajectory inside the CCR. This ray tracing model could treat the general case that the optical beam is obliquely incident on the hypotenuse surface of the CCR with the dihedral angle error and surface nonflatness. Then, we integrate this general CCR model into the wave-optics (WO) simulation to construct the double-pass beam propagation simulation. This double-pass simulation contains the forward propagation from the transceiver to the MRR through the atmosphere, the retro-reflection of the CCR, and the backward propagation from the MRR to the transceiver, which can be realized by a single-pass WO simulation, the ray tracing CCR model, and another single-pass WO simulation, respectively. To verify the proposed CCR model and double-pass WO simulation, the effective reflection area, the incremental phase, and the reflected beam spot on the transceiver plane of the CCR are analyzed, and the numerical results are in agreement with the previously published results. Finally, we use the double-pass WO simulation to investigate the double-pass channel in the MRR FSO systems. The histograms of the turbulence-induced fading in the forward and backward channels are obtained from the simulation data and are fitted by gamma-gamma (ΓΓ) distributions. As the two opposite channels are highly correlated, we model the double-pass channel fading by the product of two correlated ΓΓ random variables (RVs).
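The gamma-gamma (ΓΓ) irradiance fading used above can be sampled simply as the product of two independent, unit-mean gamma variates. The sketch below omits the correlation between the forward and backward passes, which is central to the paper's double-pass channel model, and uses illustrative α and β values.

```python
# Sketch: sampling gamma-gamma turbulence fading as the product of two
# independent unit-mean gamma variates. The paper additionally models the
# correlation between the two passes, which this minimal sketch omits.
import numpy as np

rng = np.random.default_rng(0)

def gamma_gamma_samples(alpha, beta, n):
    """Unit-mean ΓΓ irradiance samples I = X * Y, X~Gamma(alpha), Y~Gamma(beta)."""
    x = rng.gamma(shape=alpha, scale=1.0 / alpha, size=n)   # large-scale fading
    y = rng.gamma(shape=beta, scale=1.0 / beta, size=n)     # small-scale fading
    return x * y

I = gamma_gamma_samples(alpha=4.0, beta=2.0, n=100_000)
si2 = I.var() / I.mean() ** 2                # scintillation index
print(f"mean irradiance {I.mean():.3f}, scintillation index {si2:.3f}")
# theory: SI^2 = 1/alpha + 1/beta + 1/(alpha*beta) = 0.875 for these values
```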
Secomb, Jacinta; McKenna, Lisa; Smith, Colleen
2012-12-01
To provide evidence on the effectiveness of simulation activities on the clinical decision-making abilities of undergraduate nursing students. Based on previous research, it was hypothesised that the higher the cognitive score, the greater the ability a nursing student would have to make informed valid decisions in their clinical practice. Globally, simulation is being espoused as an education method that increases the competence of health professionals. At present, there is very little evidence to support current investment in time and resources. Following ethical approval, fifty-eight third-year undergraduate nursing students were randomised in a pretest-post-test group-parallel controlled trial. The learning environment preferences (LEP) inventory was used to test cognitive abilities in order to refute the null hypothesis that activities in computer-based simulated learning environments have a negative effect on cognitive abilities when compared with activities in skills laboratory simulated learning environments. There was no significant difference in cognitive development following two cycles of simulation activities. Therefore, it is reasonable to assume that two simulation tasks, either computer-based or laboratory-based, have no effect on an undergraduate student's ability to make clinical decisions in practice. However, there was a significant finding for non-English first-language students, which requires further investigation. More longitudinal studies that quantify the education effects of simulation on the cognitive, affective and psychomotor attributes of health science students and professionals from both English-speaking and non-English-speaking backgrounds are urgently required. It is also recommended that to achieve increased participant numbers and prevent non-participation owing to absenteeism, further studies need to be imbedded directly into curricula. This investigation confirms the effect of simulation activities on real-life clinical practice, and the comparative learning benefits with traditional clinical practice and university education remain unknown. © 2012 Blackwell Publishing Ltd.
Routine Discovery of Complex Genetic Models using Genetic Algorithms
Moore, Jason H.; Hahn, Lance W.; Ritchie, Marylyn D.; Thornton, Tricia A.; White, Bill C.
2010-01-01
Simulation studies are useful in various disciplines for a number of reasons including the development and evaluation of new computational and statistical methods. This is particularly true in human genetics and genetic epidemiology where new analytical methods are needed for the detection and characterization of disease susceptibility genes whose effects are complex, nonlinear, and partially or solely dependent on the effects of other genes (i.e. epistasis or gene-gene interaction). Despite this need, the development of complex genetic models that can be used to simulate data is not always intuitive. In fact, only a few such models have been published. We have previously developed a genetic algorithm approach to discovering complex genetic models in which two single nucleotide polymorphisms (SNPs) influence disease risk solely through nonlinear interactions. In this paper, we extend this approach for the discovery of high-order epistasis models involving three to five SNPs. We demonstrate that the genetic algorithm is capable of routinely discovering interesting high-order epistasis models in which each SNP influences risk of disease only through interactions with the other SNPs in the model. This study opens the door for routine simulation of complex gene-gene interactions among SNPs for the development and evaluation of new statistical and computational approaches for identifying common, complex multifactorial disease susceptibility genes. PMID:20948983
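A generic genetic-algorithm loop (selection, crossover, mutation) over candidate two-SNP penetrance tables is sketched below. The fitness used here, favoring tables whose genotype penetrances vary while the single-SNP marginal penetrances stay flat, is only a stand-in for the authors' criteria, and every GA setting is an illustrative assumption.

```python
# Generic GA sketch for searching 3x3 penetrance tables of two biallelic SNPs.
# The fitness is a stand-in for "purely epistatic" (interaction without main
# effects); all settings are illustrative, not the paper's.
import numpy as np

rng = np.random.default_rng(1)
gf = np.array([0.25, 0.5, 0.25])       # Hardy-Weinberg frequencies, p = 0.5

def fitness(table):
    marg1 = table @ gf                 # penetrance marginalized over SNP2
    marg2 = gf @ table                 # penetrance marginalized over SNP1
    return table.std() - 10.0 * (marg1.std() + marg2.std())

def evolve(pop_size=200, generations=300, p_mut=0.1):
    pop = rng.uniform(0.0, 1.0, size=(pop_size, 3, 3))
    for _ in range(generations):
        scores = np.array([fitness(t) for t in pop])
        parents = pop[np.argsort(scores)][-pop_size // 2:]    # truncation selection
        kids = []
        for _ in range(pop_size - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            mask = rng.random((3, 3)) < 0.5                    # uniform crossover
            child = np.where(mask, a, b)
            child += (rng.random((3, 3)) < p_mut) * rng.normal(0, 0.1, (3, 3))
            kids.append(np.clip(child, 0.0, 1.0))
        pop = np.concatenate([parents, np.array(kids)])
    return max(pop, key=fitness)

best = evolve()
print(np.round(best, 2))
print("marginal penetrances:", np.round(best @ gf, 3), np.round(gf @ best, 3))
```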
Klon, Anthony E; Segrest, Jere P; Harvey, Stephen C
2002-12-06
Apolipoprotein A-I (apo A-I) is the major protein component of high-density lipoprotein (HDL) particles. Elevated levels of HDL in the bloodstream have been shown to correlate strongly with a reduced risk of atherosclerosis. Molecular dynamics simulations have been carried out, to a time-scale of 1 ns, on three separate model discoidal HDL particles containing two monomers of apo A-I and 160 molecules of palmitoyloleoylphosphatidylcholine (POPC). The starting structures were based on previously published molecular belt models of HDL consisting of the lipid-binding C-terminal domain (residues 44-243) wrapped around the circumference of a discoidal HDL particle. Subtle changes between two of the starting structures resulted in significantly different behavior during the course of the simulation. The results provide support for the hypothesis of Segrest et al. that helical registration in the molecular belt model of apo A-I is modulated by intermolecular salt bridges. In addition, we propose an explanation for the presence of proline punctuation in the molecular belt model, and for the presence of two 11-mer helical repeats interrupting the otherwise regular pattern of 22-mer helical repeats in the lipid-binding domain of apo A-I.
Simulating potential water grabbing from large-scale land acquisitions in Africa
NASA Astrophysics Data System (ADS)
Li Johansson, Emma; Fader, Marianela; Seaquist, Jonathan W.; Nicholas, Kimberly A.
2017-04-01
The potentially high level of water appropriation in Africa by foreign companies poses serious socio-environmental challenges, including overconsumption of water and conflicts and tensions over the allocation of water resources. We will present a study published recently in the Proceedings of the National Academy of Sciences of the USA, in which we simulated green and blue water demand and crop yields of large-scale land acquisitions in several African countries. Green water refers to precipitation stored in soils and consumed by plants through evapotranspiration, while blue water is extracted from rivers, lakes, aquifers, and dams. We simulated seven irrigation scenarios and compared these data with two baseline scenarios of staple crops representing previous water demand. The results indicate that green and blue water use is 39% and 76-86% greater, respectively, for crops grown on acquired land compared with the baseline of common staple crops, showing that land acquisitions substantially increase water demands. We also found that most land acquisitions are planted with crops such as sugarcane, jatropha, and eucalyptus that demand volumes of water >9,000 m³·ha⁻¹. Even if the most efficient irrigation systems were implemented, 18% of the land acquisitions, totaling 91,000 ha, would still require more than 50% of their water from blue water sources.
A new computer-aided simulation model for polycrystalline silicon film resistors
NASA Astrophysics Data System (ADS)
Ching-Yuan Wu; Weng-Dah Ken
1983-07-01
A general transport theory for the I-V characteristics of a polycrystalline film resistor has been derived by including the effects of carrier degeneracy, majority-carrier thermionic diffusion across the space-charge regions produced by carrier trapping in the grain boundaries, and quantum mechanical tunneling through the grain boundaries. Based on the derived transport theory, a new conduction model for the electrical resistivity of polycrystalline film resistors has been developed by incorporating the effects of carrier trapping and dopant segregation in the grain boundaries. Moreover, an empirical formula for the coefficient of the dopant-segregation effects has been proposed, which enables us to predict the dependence of the electrical resistivity of phosphorus- and arsenic-doped polycrystalline silicon films on thermal annealing temperature. Phosphorus-doped polycrystalline silicon resistors have been fabricated by ion implantation with doses ranging from 1.6 × 10¹¹ to 5 × 10¹⁵ cm⁻². The dependence of the electrical resistivity on doping concentration and temperature has been measured and shown to be in good agreement with the results of computer simulations. In addition, computer simulations for boron- and arsenic-doped polycrystalline silicon resistors have also been performed and shown to be consistent with the experimental results published by previous authors.
Simulating multi-scale oceanic processes around Taiwan on unstructured grids
NASA Astrophysics Data System (ADS)
Yu, Hao-Cheng; Zhang, Yinglong J.; Yu, Jason C. S.; Terng, C.; Sun, Weiling; Ye, Fei; Wang, Harry V.; Wang, Zhengui; Huang, Hai
2017-11-01
We validate a 3D unstructured-grid (UG) model for simulating multi-scale processes in the Northwestern Pacific around Taiwan using recently developed techniques (Zhang et al., Ocean Modelling, 102, 64-81, 2016) that require no bathymetry smoothing, even for this region with prevalent steep bottom slopes and many islands. The focus is on short-term forecasts over several months rather than long-term variability. Compared with satellite products, the errors for the simulated Sea-surface Height (SSH) and Sea-surface Temperature (SST) are similar to those of a reference data-assimilated global model. In the nearshore region, comparison with 34 tide gauges located around Taiwan indicates an average RMSE of 13 cm for the tidal elevation. The average RMSE for SST at 6 coastal buoys is 1.2 °C. The mean transport and eddy kinetic energy compare reasonably with previously published values and with the reference model used to provide boundary and initial conditions. The model suggests a ∼2-day interruption of the Kuroshio east of Taiwan during a typhoon period. The effect of tidal mixing is shown to be significant nearshore. The multi-scale model is easily extendable to target regions of interest due to its UG framework and a flexible vertical gridding system, which is shown to be superior to terrain-following coordinates.
Statistical testing of association between menstruation and migraine.
Barra, Mathias; Dahl, Fredrik A; Vetvik, Kjersti G
2015-02-01
To repair and refine a previously proposed method for statistical analysis of the association between migraine and menstruation. Menstrually related migraine (MRM) affects about 20% of female migraineurs in the general population. The exact pathophysiological link from menstruation to migraine is hypothesized to be through fluctuations in female reproductive hormones, but the exact mechanisms remain unknown. Therefore, the main diagnostic criterion today is concurrency of migraine attacks with menstruation. Methods aiming to exclude spurious associations are needed, so that further research into these mechanisms can be performed on a population with a true association. The statistical method is based on a simple two-parameter null model of MRM (which allows for simulation modeling), and Fisher's exact test (with mid-p correction) applied to standard 2 × 2 contingency tables derived from the patients' headache diaries. Our method is a corrected version of a previously published flawed framework. To the best of our knowledge, no other published methods for establishing a menstruation-migraine association by statistical means exist today. The probabilistic methodology shows good performance when subjected to receiver operating characteristic (ROC) curve analysis. Quick-reference cutoff values for the clinical setting were tabulated for assessing association given a patient's headache history. In this paper, we correct a proposed method for establishing association between menstruation and migraine by statistical methods. We conclude that the proposed standard of 3-cycle observations prior to setting an MRM diagnosis should be extended with at least one perimenstrual window to obtain sufficient information for statistical processing.
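The core calculation, a mid-p corrected one-sided Fisher test on a 2 × 2 diary table, can be written compactly with the hypergeometric distribution. The example counts below are invented for illustration; the table layout is the standard cross-classification of migraine days by perimenstrual versus other days.

```python
# Sketch of the mid-p corrected one-sided Fisher exact test on a 2x2 headache
# diary table. Counts are invented for illustration.
from scipy.stats import hypergeom

def fisher_midp_greater(a, b, c, d):
    """Mid-p one-sided evidence for an excess of attacks on perimenstrual days.

    Table layout:            migraine   no migraine
        perimenstrual days       a           b
        other days               c           d
    """
    M = a + b + c + d            # all diary days
    n = a + c                    # all migraine days
    N = a + b                    # all perimenstrual days
    # a ~ Hypergeometric(M, n, N) under the null of no association
    return hypergeom.sf(a, M, n, N) + 0.5 * hypergeom.pmf(a, M, n, N)

print(fisher_midp_greater(a=6, b=4, c=10, d=70))
```

The mid-p correction counts only half of the probability of the observed table, which reduces the conservatism of the ordinary exact test for the small tables produced by short diary periods.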
MHC variability supports dog domestication from a large number of wolves: high diversity in Asia.
Niskanen, A K; Hagström, E; Lohi, H; Ruokonen, M; Esparza-Salas, R; Aspi, J; Savolainen, P
2013-01-01
The process of dog domestication is still somewhat unresolved. Earlier studies indicate that domestic dogs from all over the world have a common origin in Asia. So far, major histocompatibility complex (MHC) diversity has not been studied in detail in Asian dogs, although high levels of genetic diversity are expected at the domestication locality. We sequenced the second exon of the canine MHC gene DLA-DRB1 from 128 Asian dogs and compared our data with a previously published large data set of MHC alleles, mostly from European dogs. Our results show that Asian dogs have a higher MHC diversity than European dogs. We also estimated that there is only a small probability that new alleles have arisen by mutation since domestication. Based on the assumption that all of the currently known 102 DLA-DRB1 alleles come from the founding wolf population, we simulated the number of founding wolf individuals. Our simulations indicate an effective population size of at least 500 founding wolves, suggesting that the founding wolf population was large or that backcrossing has taken place.
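A toy version of the founder-number question is sketched below; it is not the authors' simulation. It assumes the source wolf population carried K = 150 DLA-DRB1 alleles with a skewed (1/rank) frequency spectrum, draws the 2N chromosomes of N diploid founders, and counts how many distinct alleles those founders capture. Both the allele count and the frequency spectrum are invented, whereas the study's simulations are anchored to observed allele data.

```python
# Toy illustration only: how many distinct alleles do N diploid founders
# capture from an assumed source-population allele-frequency spectrum?
import numpy as np

rng = np.random.default_rng(42)
K = 150                                   # assumed alleles in the source population
freqs = 1.0 / np.arange(1, K + 1)         # assumed skewed (1/rank) spectrum
freqs /= freqs.sum()

def mean_alleles_captured(n_founders, reps=1000):
    captured = [
        np.unique(rng.choice(K, size=2 * n_founders, p=freqs)).size
        for _ in range(reps)
    ]
    return float(np.mean(captured))

for n in (100, 250, 500, 1000):
    print(f"{n:5d} founders -> ~{mean_alleles_captured(n):.0f} of {K} alleles captured")
```

The answer is driven almost entirely by the assumed frequency spectrum, which is why the study's estimate rests on the observed distribution of alleles rather than a stylized one like this.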
Heinz, Leonard P; Kopec, Wojciech; de Groot, Bert L; Fink, Rainer H A
2018-05-02
The ryanodine receptor 1 is a large calcium ion channel found in mammalian skeletal muscle. The ion channel has recently gained considerable attention after multiple independent authors published near-atomic cryo-electron microscopy data. Taking advantage of the unprecedented quality of structural data, we performed molecular dynamics simulations on the entire ion channel as well as on a reduced model. We calculated potentials of mean force for Ba²⁺, Ca²⁺, Mg²⁺, K⁺, Na⁺ and Cl⁻ ions using umbrella sampling to identify the key residues involved in ion permeation. We found two main binding sites for the cations, whereas the channel is strongly repulsive for chloride ions. Furthermore, the data are consistent with the model that the receptor achieves its ion selectivity through an over-affinity for divalent cations in a calcium-block-like fashion. We reproduced the experimental conductance for potassium ions in permeation simulations with applied voltage. The analysis of the permeation paths shows that ions exit the pore via multiple pathways, which we suggest to be related to the experimental observation of different subconducting states.
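How a potential of mean force is recovered from umbrella-sampling windows can be illustrated with the standard WHAM iteration on synthetic data. The one-dimensional coordinate, the hidden double-well potential, and all numerical settings below are stand-ins; the study applied this type of analysis to ion positions along the channel pore, not to this toy system.

```python
# Sketch of 1D WHAM on synthetic umbrella-sampling data (not the study's data).
import numpy as np

rng = np.random.default_rng(3)
kT = 2.5                                  # kJ/mol, ~300 K
x = np.linspace(-3.0, 3.0, 241)           # reaction-coordinate grid
U0 = 2.0 * (x**2 - 1.0) ** 2              # hidden double-well PMF to recover

centres = np.linspace(-2.5, 2.5, 21)      # harmonic umbrella windows
k_umb = 20.0
biases = 0.5 * k_umb * (x[None, :] - centres[:, None]) ** 2

# synthetic sampling: draw grid points from each biased Boltzmann distribution
n_samples = 5000
hists = np.empty((len(centres), x.size))
for i in range(len(centres)):
    p = np.exp(-(U0 + biases[i]) / kT)
    p /= p.sum()
    idx = rng.choice(x.size, size=n_samples, p=p)
    hists[i] = np.bincount(idx, minlength=x.size)

# WHAM self-consistent equations
N = np.full(len(centres), n_samples)
f = np.zeros(len(centres))                # per-window free energies
for _ in range(500):
    denom = np.sum(N[:, None] * np.exp((f[:, None] - biases) / kT), axis=0)
    P = hists.sum(axis=0) / denom
    f_new = -kT * np.log(np.sum(P[None, :] * np.exp(-biases / kT), axis=1))
    if np.max(np.abs(f_new - f)) < 1e-7:
        f = f_new
        break
    f = f_new

mask = P > 0
pmf = -kT * np.log(P[mask])
pmf -= pmf.min()
x_m = x[mask]
print("recovered barrier height (kJ/mol):", round(float(pmf[np.argmin(np.abs(x_m))]), 2))
```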
Liu, Ting-Wu; Niu, Li; Fu, Bin; Chen, Juan; Wu, Fei-Hua; Chen, Juan; Wang, Wen-Hua; Hu, Wen-Jun; He, Jun-Xian; Zheng, Hai-Lei
2013-01-01
Acid rain, as a worldwide environmental issue, can cause serious damage to plants. In this study, we provide the first case study of the systematic responses of arabidopsis (Arabidopsis thaliana (L.) Heynh.) to simulated acid rain (SiAR) using a transcriptome approach. Transcriptomic analysis revealed that the expression of a set of genes related to primary metabolism, including nitrogen, sulfur, amino acid, photosynthesis, and reactive oxygen species metabolism, was altered under SiAR. In addition, transport- and signal transduction-related pathways, especially calcium-related signaling pathways, were found to play important roles in the response of arabidopsis to SiAR stress. Further, we compared our data set with previously published data sets on the arabidopsis transcriptome subjected to various stresses, including wound, salt, light, heavy metal, karrikin, temperature, osmosis, etc. The results showed that many genes overlapped across several stresses, suggesting that plant response to SiAR is a complex process, which may require the participation of multiple defense-signaling pathways. The results of this study will help us gain further insights into the response mechanisms of plants to acid rain stress.
A preliminary investigation of inlet unstart effects on a high-speed civil transport concept
NASA Technical Reports Server (NTRS)
Domack, Christopher S.
1991-01-01
Vehicle motions resulting from a supersonic mixed-compression inlet unstart were examined to determine if the unstart constituted a hazard severe enough to warrant rejection of mixed-compression inlets on high-speed civil transport (HSCT) concepts. A simple kinematic analysis of an inlet unstart during cruise was performed for a Mach 2.4, 250-passenger HSCT concept using data from a wind-tunnel test of a representative configuration with unstarted inlets simulated. A survey of previously published research on inlet unstart effects, including simulation and flight test data for the YF-12, XB-70, and Concorde aircraft, was conducted to validate the calculated results. It was concluded that, when countered by suitable automatic propulsion and flight control systems, the vehicle dynamics induced by an inlet unstart are not severe enough to preclude the use of mixed-compression inlets on an HSCT from a passenger safety standpoint. The ability to provide suitable automatic controls appears to be within the current state of the art. However, the passenger startle and discomfort caused by the noise, vibration, and cabin motions associated with an inlet unstart remain a concern.
A presentation system for just-in-time learning in radiology.
Kahn, Charles E; Santos, Amadeu; Thao, Cheng; Rock, Jayson J; Nagy, Paul G; Ehlers, Kevin C
2007-03-01
There is growing interest in bringing medical educational materials to the point of care. We sought to develop a system for just-in-time learning in radiology. A database of 34 learning modules was derived from previously published journal articles. Learning objectives were specified for each module, and multiple-choice test items were created. A web-based system, called TEMPO, was developed to allow radiologists to select and view the learning modules. Web services were used to exchange clinical context information between TEMPO and the simulated radiology work station. Preliminary evaluation was conducted using the System Usability Scale (SUS) questionnaire. TEMPO identified learning modules that were relevant to the age, sex, imaging modality, and body part or organ system of the patient being viewed by the radiologist on the simulated clinical work station. Users expressed a high degree of satisfaction with the system's design and user interface. TEMPO enables just-in-time learning in radiology, and can be extended to create a fully functional learning management system for point-of-care learning in radiology.
A bio-optical model for integration into ecosystem models for the Ligurian Sea
NASA Astrophysics Data System (ADS)
Bengil, Fethi; McKee, David; Beşiktepe, Sükrü T.; Sanjuan Calzado, Violeta; Trees, Charles
2016-12-01
A bio-optical model has been developed for the Ligurian Sea which encompasses both deep, oceanic Case 1 waters and shallow, coastal Case 2 waters. The model builds on earlier Case 1 models for the region and uses field data collected on the BP09 research cruise to establish new relationships for non-biogenic particles and CDOM. The bio-optical model reproduces in situ IOPs accurately and is used to parameterize radiative transfer simulations which demonstrate its utility for modeling underwater light levels and above surface remote sensing reflectance. Prediction of euphotic depth is found to be accurate to within ∼3.2 m (RMSE). Previously published light field models work well for deep oceanic parts of the Ligurian Sea that fit the Case 1 classification. However, they are found to significantly over-estimate euphotic depth in optically complex coastal waters where the influence of non-biogenic materials is strongest. For these coastal waters, the combination of the bio-optical model proposed here and full radiative transfer simulations provides significantly more accurate predictions of euphotic depth.
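The quantity being validated here, euphotic depth, has a simple back-of-envelope form when downwelling PAR is assumed to decay as a single exponential: z_eu = ln(100)/Kd at the 1% light level. The Kd values below are illustrative, and the study derives its estimates from the bio-optical model plus full radiative transfer rather than from this shortcut.

```python
# Back-of-envelope sketch: euphotic depth (1% PAR level) from an assumed
# depth-averaged diffuse attenuation coefficient. Kd values are illustrative.
import math

for kd_par, label in [(0.04, "clear Case 1 water"), (0.35, "turbid coastal water")]:
    z_eu = math.log(100.0) / kd_par
    print(f"{label:22s} Kd(PAR) = {kd_par:.2f} 1/m -> euphotic depth ~ {z_eu:.1f} m")
```

The large spread between the two cases is exactly why a Case 1 light-field model that ignores non-biogenic particles and CDOM over-estimates euphotic depth in the coastal part of the basin.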
Topographic effects on infrasound propagation.
McKenna, Mihan H; Gibson, Robert G; Walker, Bob E; McKenna, Jason; Winslow, Nathan W; Kofford, Aaron S
2012-01-01
Infrasound data were collected using portable arrays in a region of variable terrain elevation to quantify the effects of topography on observed signal amplitude and waveform features at distances less than 25 km from partially contained explosive sources during the Frozen Rock Experiment (FRE) in 2006. Observed infrasound signals varied in amplitude and waveform complexity, indicating propagation effects that are due in part to repeated local maxima and minima in the topography on the scale of the dominant wavelengths of the observed data. Numerical simulations, using an empirically derived pressure source function that combines published FRE accelerometer data and historical data from Project ESSEX, a time-domain parabolic equation model that accounted for local terrain elevation through terrain masking, and local meteorological atmospheric profiles, were able to explain some but not all of the observed signal features. Specifically, the simulations matched the timing of the observed infrasound signals but underestimated the waveform amplitude observed behind terrain features, suggesting that complex scattering and absorption of energy associated with variable topography influence infrasonic energy more than previously observed. © 2012 Acoustical Society of America.
Deng, Chenhui; Plan, Elodie L; Karlsson, Mats O
2016-06-01
Parameter variation in pharmacometric analysis studies can be characterized as within-subject parameter variability (WSV) in pharmacometric models. WSV has previously been modeled successfully using inter-occasion variability (IOV), as well as stochastic differential equations (SDEs). In this study, two approaches, dynamic inter-occasion variability (dIOV) and adapted stochastic differential equations, were proposed to investigate WSV in pharmacometric count-data analysis. These approaches were applied to published count models for seizure counts and Likert pain scores. Both approaches improved the model fits significantly. In addition, stochastic simulation and estimation were used to explore further the capability of the two approaches to diagnose and improve models in which existing WSV is not recognized. The results of the simulations confirmed the gain from introducing WSV as dIOV or SDEs when parameters vary randomly over time. Further, the approaches were also informative as diagnostics of model misspecification when parameters changed systematically over time but this was not recognized in the structural model. The proposed approaches in this study offer strategies to characterize WSV and are not restricted to count data.
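The basic idea of IOV-style within-subject variability in a count model can be illustrated with a short simulation: the individual's Poisson count rate receives a fresh log-normal perturbation on every occasion, which inflates the variance-to-mean ratio of the observed counts. Parameter values below are illustrative, not estimates from the published models.

```python
# Minimal sketch of within-subject parameter variability expressed as IOV in a
# Poisson count model. Parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(7)

lambda_typical = 2.0        # typical daily seizure count
omega_iov = 0.5             # SD of the occasion-level log-normal perturbation
n_occasions = 6
days_per_occasion = 28

counts = []
for _ in range(n_occasions):
    eta_occ = rng.normal(0.0, omega_iov)            # new realisation each occasion
    lam = lambda_typical * np.exp(eta_occ)
    counts.append(rng.poisson(lam, size=days_per_occasion))

counts = np.concatenate(counts)
print("overall mean count:", counts.mean())
print("variance/mean ratio (>1 indicates IOV-driven overdispersion):",
      round(counts.var() / counts.mean(), 2))
```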
An RCM-E simulation of a steady magnetospheric convection event
NASA Astrophysics Data System (ADS)
Yang, J.; Toffoletto, F.; Wolf, R.; Song, Y.
2009-12-01
We present simulation results for an idealized steady magnetospheric convection (SMC) event using the Rice Convection Model coupled with an equilibrium magnetic field solver (RCM-E). The event is modeled by placing a plasma distribution with a substantially depleted entropy parameter PV^(5/3) on the RCM's high-latitude boundary. The calculated magnetic field shows a highly depressed configuration due to the enhanced westward current around geosynchronous orbit, where the resulting partial ring current is stronger and more symmetric than in a typical substorm growth phase. The magnitude of the BZ component in the mid plasma sheet is large compared with that in empirical magnetic field models. Contrary to some previous results, there is no deep BZ minimum in the near-Earth plasma sheet. This suggests that the magnetosphere could transition into a strong adiabatic earthward convection mode without significant stretching of the plasma-sheet magnetic field when flux tubes with depleted plasma content continuously enter the inner magnetosphere from the mid-tail. Virtual AU/AL and Dst indices are also calculated using a synthetic magnetogram code and are compared with typical features in published observations.
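For orientation, the entropy parameter referred to above is the standard flux-tube entropy used in RCM studies (shown here as background, not a new result from the paper), with P the plasma pressure, B the magnetic field strength, and the integral of the inverse field strength taken along the field line:

    S = P V^{5/3}, \qquad V = \int \frac{\mathrm{d}s}{B}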
Emergency Airway Response Team Simulation Training: A Nursing Perspective.
Crimlisk, Janet T; Krisciunas, Gintas P; Grillone, Gregory A; Gonzalez, R Mauricio; Winter, Michael R; Griever, Susan C; Fernandes, Eduarda; Medzon, Ron; Blansfield, Joseph S; Blumenthal, Adam
Simulation-based education is an important tool in the training of professionals in the medical field, especially for low-frequency, high-risk events. An interprofessional simulation-based training program was developed to enhance Emergency Airway Response Team (EART) knowledge, team dynamics, and personnel confidence. This quality improvement study evaluated the EART simulation training results of nurse participants. Twenty-four simulation-based classes of 4-hour sessions were conducted during a 12-week period. Sixty-three nurses from the emergency department (ED) and the intensive care units (ICUs) completed the simulation. Participants were evaluated before and after the simulation program with a knowledge-based test and a team dynamics and confidence questionnaire. Additional comparisons were made between ED and ICU nurses and between nurses with previous EART experience and those without previous EART experience. Comparison of presimulation (presim) and postsimulation (postsim) results indicated a statistically significant gain in both team dynamics and confidence and Knowledge Test scores (P < .01). There were no differences in scores between ED and ICU groups in presim or postsim scores; nurses with previous EART experience demonstrated significantly higher presim scores than nurses without EART experience, but there were no differences between these nurse groups at postsim. This project supports the use of simulation training to increase nurses' knowledge, confidence, and team dynamics in an EART response. Importantly, nurses with no previous experience achieved outcome scores similar to nurses who had experience, suggesting that emergency airway simulation is an effective way to train both new and experienced nurses.
Bias and inference from misspecified mixed‐effect models in stepped wedge trial analysis
Fielding, Katherine L.; Davey, Calum; Aiken, Alexander M.; Hargreaves, James R.; Hayes, Richard J.
2017-01-01
Many stepped wedge trials (SWTs) are analysed by using a mixed-effect model with a random intercept and fixed effects for the intervention and time periods (referred to here as the standard model). However, it is not known whether this model is robust to misspecification. We simulated SWTs with three groups of clusters and two time periods; one group received the intervention during the first period and two groups in the second period. We simulated period and intervention effects that were either common to all clusters or varied between clusters. Data were analysed with the standard model or with additional random effects for the period effect or the intervention effect. In a second simulation study, we explored the weight given to within-cluster comparisons by simulating a larger intervention effect in the group of the trial that experienced both the control and intervention conditions and applying the three analysis models described previously. Across 500 simulations, we computed bias and confidence interval coverage of the estimated intervention effect. We found up to 50% bias in intervention effect estimates when period or intervention effects varied between clusters and were treated as fixed effects in the analysis. All misspecified models showed undercoverage of 95% confidence intervals, particularly the standard model. A large weight was given to within-cluster comparisons in the standard model. In the SWTs simulated here, mixed-effect models were highly sensitive to departures from the model assumptions, which can be explained by the high dependence on within-cluster comparisons. Trialists should consider including a random effect for time period in their SWT analysis model. © 2017 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd. PMID:28556355
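A minimal sketch of this kind of simulation is shown below, assuming an allocation in which the first group is under the intervention in both periods and the other two switch in the second period; the cluster counts, effect sizes, and variance values are illustrative assumptions, not the paper's simulation settings.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Toy stepped wedge: 3 groups of clusters, 2 periods, intervention effect
    # varies between clusters while the fitted "standard model" treats it as fixed.
    rng = np.random.default_rng(1)
    rows = []
    for g in range(3):                               # group 0 switches first
        for c in range(8):
            cid, u = 3 * c + g, rng.normal(0, 0.5)   # cluster id, random intercept
            delta_c = 0.5 + rng.normal(0, 0.5)       # cluster-specific intervention effect
            for period in (0, 1):
                treat = 1 if (g == 0 or period == 1) else 0
                for _ in range(30):
                    y = u + 0.3 * period + delta_c * treat + rng.normal(0, 1.0)
                    rows.append((cid, period, treat, y))
    df = pd.DataFrame(rows, columns=["cluster", "period", "treat", "y"])

    # Standard model: random cluster intercept, fixed period and intervention effects.
    fit = smf.mixedlm("y ~ treat + period", df, groups=df["cluster"]).fit()
    print(fit.params["treat"])   # compare with the true mean effect of 0.5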
Strong scaling and speedup to 16,384 processors in cardiac electro-mechanical simulations.
Reumann, Matthias; Fitch, Blake G; Rayshubskiy, Aleksandr; Keller, David U J; Seemann, Gunnar; Dossel, Olaf; Pitman, Michael C; Rice, John J
2009-01-01
High-performance computing is required to make simulations of whole-organ models of the heart with biophysically detailed cellular models feasible in a clinical setting. Increasing model detail by simulating both electrophysiology and mechanics increases computational demands. We present scaling results for an electro-mechanical cardiac model of two ventricles and compare them to our previously published results using an electrophysiological model only. The anatomical data-set was given by both ventricles of the Visible Female data-set at 0.2 mm resolution. Fiber orientation was included. Data decomposition for the distribution onto the distributed-memory system was carried out by orthogonal recursive bisection. Load weight ratios for non-tissue vs. tissue elements used in the data decomposition were 1:1, 1:2, 1:5, 1:10, 1:25, 1:38.85, 1:50 and 1:100. The ten Tusscher et al. (2004) electrophysiological cell model was used, and the Rice et al. (1999) model for the computation of the calcium-transient-dependent force. Scaling results for 512, 1024, 2048, 4096, 8192 and 16,384 processors were obtained for 1 ms simulation time. The simulations were carried out on an IBM Blue Gene/L supercomputer. The results show linear scaling from 512 to 16,384 processors, with speedup factors between 1.82 and 2.14 between successive partitions. The optimal load ratio was 1:25 on all partitions. However, a shift towards load ratios with higher weight for the tissue elements can be recognized, as expected when adding computational complexity to the model while keeping the same communication setup. This work demonstrates that it is potentially possible to run simulations of 0.5 s using the presented electro-mechanical cardiac model within 1.5 hours.
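As a reminder of how such strong-scaling figures are derived, the sketch below computes partition-to-partition speedup and parallel efficiency from wall-clock times; the timing values are made up for illustration and are not the measurements reported above.

    # Strong-scaling bookkeeping: speedup and efficiency between successive partitions.
    procs = [512, 1024, 2048, 4096, 8192, 16384]
    walltime = [4096.0, 2100.0, 1050.0, 540.0, 280.0, 150.0]   # hypothetical seconds

    for p, t, (p0, t0) in zip(procs[1:], walltime[1:], zip(procs, walltime)):
        speedup = t0 / t                      # speedup between successive partitions
        efficiency = speedup / (p / p0)       # 1.0 would be ideal doubling
        print(f"{p0:6d} -> {p:6d}: speedup {speedup:.2f}, efficiency {efficiency:.2f}")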
Kissing loop interaction in adenine riboswitch: insights from umbrella sampling simulations.
Di Palma, Francesco; Bottaro, Sandro; Bussi, Giovanni
2015-01-01
Riboswitches are cis-acting regulatory RNA elements prevalently located in the leader sequences of bacterial mRNA. An adenine-sensing riboswitch cis-regulates the adenosine deaminase gene (add) in Vibrio vulnificus. The structural mechanism underlying its conformational changes upon ligand binding largely remains to be elucidated. In this context, it has been suggested that the ligand stabilizes the interaction of the distal "kissing loop" complex. Accurate all-atom molecular dynamics with explicit solvent, combined with enhanced sampling techniques and advanced analysis methods, can provide a more detailed perspective on the formation of these tertiary contacts. In this work, we used umbrella sampling simulations to study the thermodynamics of the kissing loop complex in the presence and absence of the cognate ligand. We enforced the breaking/formation of the loop-loop interaction by restraining the distance between the two loops. We also assessed the convergence of the results by using two alternative initialization protocols. A structural analysis was performed using a novel approach to analyze base contacts. Contacts between the two loops were progressively lost as larger inter-loop distances were enforced. Inter-loop Watson-Crick contacts survived at larger separations than non-canonical pairing and stacking interactions. Intra-loop stacking contacts remained formed upon loop undocking. Our simulations qualitatively indicated that the ligand could stabilize the kissing loop complex, and we also compared our results with previously published simulation studies. The kissing-complex stabilization provided by the ligand was compatible with available experimental data. However, the dependence of its value on the initialization protocol of the umbrella sampling simulations raises questions about the quantitative interpretation of the results and calls for better-converged enhanced sampling simulations.
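The core idea of umbrella sampling used above can be illustrated with a one-dimensional toy: a harmonic restraint biases each window toward a target separation, and the biased histograms are later recombined (for example with WHAM) into a free-energy profile. The spring constant, window centres, and the toy energy landscape below are illustrative assumptions, not the simulation setup of the study.

    import numpy as np

    def bias(d, d0, k=10.0):
        return 0.5 * k * (d - d0) ** 2                    # harmonic umbrella potential

    def unbiased_energy(d):
        return 5.0 * (d - 1.5) ** 2 * (d - 3.0) ** 2      # toy double-well in the distance

    windows = np.linspace(1.0, 3.5, 11)                   # restraint centres along the distance
    rng = np.random.default_rng(2)
    for d0 in windows:
        # crude Metropolis sampling of the biased distribution in each window
        d, samples = d0, []
        for _ in range(5000):
            trial = d + rng.normal(0, 0.05)
            dU = (unbiased_energy(trial) + bias(trial, d0)) - (unbiased_energy(d) + bias(d, d0))
            if dU < 0 or rng.random() < np.exp(-dU):
                d = trial
            samples.append(d)
        print(f"window at {d0:.2f}: mean sampled distance {np.mean(samples):.2f}")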
Exploring the use of high-fidelity simulation training to enhance clinical skills.
Ann Kirkham, Lucy
2018-02-07
The use of interprofessional simulation training to enhance nursing students' performance of technical and non-technical clinical skills is becoming increasingly common. Simulation training can involve the use of role play, virtual reality or patient simulator manikins to replicate clinical scenarios and assess the nursing student's ability to, for example, undertake clinical observations or work as part of a team. Simulation training enables nursing students to practise clinical skills in a safe environment. Effective simulation training requires extensive preparation, and debriefing is necessary following a simulated training session to review any positive or negative aspects of the learning experience. This article discusses a high-fidelity simulated training session that was used to assess a group of third-year nursing students and foundation level 1 medical students. This involved the use of a patient simulator manikin in a scenario that required the collaborative management of a deteriorating patient. ©2018 RCN Publishing Company Ltd. All rights reserved. Not to be copied, transmitted or recorded in any way, in whole or part, without prior permission of the publishers.
SimGen: A General Simulation Method for Large Systems.
Taylor, William R
2017-02-03
SimGen is a stand-alone computer program that reads a script of commands to represent complex macromolecules, including proteins and nucleic acids, in a structural hierarchy that can then be viewed using an integral graphical viewer or animated through a high-level application programming interface in C++. Structural levels in the hierarchy range from α-carbon or phosphate backbones through secondary structure to domains, molecules, and multimers with each level represented in an identical data structure that can be manipulated using the application programming interface. Unlike most coarse-grained simulation approaches, the higher-level objects represented in SimGen can be soft, allowing the lower-level objects that they contain to interact directly. The default motion simulated by SimGen is a Brownian-like diffusion that can be set to occur across all levels of representation in the hierarchy. Links can also be defined between objects, which, when combined with large high-level random movements, result in an effective search strategy for constraint satisfaction, including structure prediction from predicted pairwise distances. The implementation of SimGen makes use of the hierarchic data structure to avoid unnecessary calculation, especially for collision detection, allowing it to be simultaneously run and viewed on a laptop computer while simulating large systems of over 20,000 objects. It has been used previously to model complex molecular interactions including the motion of a myosin-V dimer "walking" on an actin fibre, RNA stem-loop packing, and the simulation of cell motion and aggregation. Several extensions to this original functionality are described. Copyright © 2016 The Francis Crick Institute. Published by Elsevier Ltd.. All rights reserved.
NASA Astrophysics Data System (ADS)
Soba, A.; Denis, A.
2007-03-01
The codes PLACA and DPLACA, developed by this working group, simulate the behavior of a plate-type fuel containing in its core a foil of monolithic or dispersed fissile material, respectively, under normal operating conditions of a research reactor. Dispersion fuels usually consist of ceramic particles of a uranium compound in a high-thermal-conductivity matrix. The use of particles of a U-Mo alloy in an Al matrix requires dedicated subroutines able to simulate the growth of the interaction layer that develops between the particles and the matrix. A model is presented in this work that accounts for these particular phenomena. It is based on the assumption that diffusion of U and Al through the layer is the rate-determining step. Two moving interfaces separate the growing reaction layer from the original phases. The kinetics of these boundaries are solved as Stefan problems. In order to test the model and the associated code, some simpler problems corresponding to similar systems, for which analytical solutions or experimental data are known, were simulated first. Experiments performed with planar U-Mo/Al diffusion couples, whose purpose is to obtain information on the system parameters, are reported in the literature; these experiments were simulated with PLACA. Results of experiments performed with U-Mo particles dispersed in Al, with or without irradiation, published in the open literature, were simulated with DPLACA. A satisfactory prediction of the whole reaction-layer thickness and of the individual fractions corresponding to alloy and matrix consumption was obtained.
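For context only: when layer growth is limited by diffusion through the layer, the moving-boundary (Stefan) problem at constant temperature reduces to a parabolic growth law, y^2 = k t, with an Arrhenius rate constant. The sketch below uses placeholder values for k0 and Q, not the parameters used in PLACA/DPLACA.

    import numpy as np

    R = 8.314                 # J/(mol K)
    k0, Q = 1.0e-6, 1.2e5     # pre-exponential (m^2/s) and activation energy (J/mol), assumed

    def layer_thickness(t_seconds, T_kelvin):
        k = k0 * np.exp(-Q / (R * T_kelvin))   # Arrhenius rate constant of the parabolic law
        return np.sqrt(k * t_seconds)

    for hours in (1, 10, 100):
        y = layer_thickness(hours * 3600.0, 823.0)   # 550 degC anneal, illustrative
        print(f"{hours:4d} h -> interaction layer ~ {y * 1e6:.1f} micrometres")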
Improving the representation of mixed-phase cloud microphysics in the ICON-LEM
NASA Astrophysics Data System (ADS)
Tonttila, Juha; Hoose, Corinna; Milbrandt, Jason; Morrison, Hugh
2017-04-01
The representation of ice-phase cloud microphysics in ICON-LEM (the Large-Eddy Model configuration of the ICOsahedral Nonhydrostatic model) is improved by implementing the recently published Predicted Particle Properties (P3) scheme into the model. In the typical two-moment microphysical schemes, such as that previously used in ICON-LEM, ice-phase particles must be partitioned into several prescribed categories. It is inherently difficult to distinguish between categories such as graupel and hail based on just the particle size, yet this partitioning may significantly affect the simulation of convective clouds. The P3 scheme avoids the problems associated with predefined ice-phase categories that are inherent in traditional microphysics schemes by introducing the concept of "free" ice-phase categories, whereby the prognostic variables enable the prediction of a wide range of smoothly varying physical properties and hence particle types. To our knowledge, this is the first application of the P3 scheme in a large-eddy model with horizontal grid spacings on the order of 100 m. We will present results from ICON-LEM simulations with the new P3 scheme comprising idealized stratiform and convective cloud cases. We will also present real-case limited-area simulations focusing on the HOPE (HD(CP)2 Observational Prototype Experiment) intensive observation campaign. The results are compared with a matching set of simulations employing the two-moment scheme and the performance of the model is also evaluated against observations in the context of the HOPE simulations, comprising data from ground based remote sensing instruments.
The Ongoing and Open-Ended Simulation
ERIC Educational Resources Information Center
Cohen, Alexander
2016-01-01
This case study explores a novel form of classroom simulation that differs from published examples in two important respects. First, it is ongoing. While most simulations represent a single learning episode embedded within a course, the ongoing simulation is a continuous set of interrelated events and decisions that accompany learning throughout…
A low mass for Mars from Jupiter's early gas-driven migration.
Walsh, Kevin J; Morbidelli, Alessandro; Raymond, Sean N; O'Brien, David P; Mandell, Avi M
2011-06-05
Jupiter and Saturn formed in a few million years (ref. 1) from a gas-dominated protoplanetary disk, and were susceptible to gas-driven migration of their orbits on timescales of only ∼100,000 years (ref. 2). Hydrodynamic simulations show that these giant planets can undergo a two-stage, inward-then-outward, migration. The terrestrial planets finished accreting much later, and their characteristics, including Mars' small mass, are best reproduced by starting from a planetesimal disk with an outer edge at about one astronomical unit from the Sun (1 au is the Earth-Sun distance). Here we report simulations of the early Solar System that show how the inward migration of Jupiter to 1.5 au, and its subsequent outward migration, lead to a planetesimal disk truncated at 1 au; the terrestrial planets then form from this disk over the next 30-50 million years, with an Earth/Mars mass ratio consistent with observations. Scattering by Jupiter initially empties but then repopulates the asteroid belt, with inner-belt bodies originating between 1 and 3 au and outer-belt bodies originating between and beyond the giant planets. This explains the significant compositional differences across the asteroid belt. The key aspect missing from previous models of terrestrial planet formation is the substantial radial migration of the giant planets, which suggests that their behaviour is more similar to that inferred for extrasolar planets than previously thought. ©2011 Macmillan Publishers Limited. All rights reserved
Chromium(III) and chromium(VI) release from leather during 8 months of simulated use.
Hedberg, Yolanda S; Lidén, Carola
2016-08-01
Chromium (Cr) release from Cr-tanned leather articles is a major cause of Cr contact dermatitis. It has been suggested that Cr(VI) release from leather is not necessarily an intrinsic property of the leather, but is strongly dependent on environmental conditions. To test this hypothesis for long-term (8 months) simulated use. The release of total Cr and Cr(VI) from Cr-tanned, unfinished leather was analysed in subsequent phosphate buffer (pH 8.0) immersions for a period of 7.5 months. The effect of combined ultraviolet treatment and alkaline solution (pH 12.1) was tested. Dry storage [20% relative humidity (RH)] was maintained between immersions. Atomic absorption spectroscopy, X-ray fluorescence and diphenylcarbazide tests were used. Cr(VI) release was dependent on previous dry storage or alkaline treatment, but not on duration or number of previous immersions. Cr(III) release decreased with time. Fifty-two percent of the total Cr released during the last immersion period was Cr(VI). Cr(VI) release exceeded 9 mg/kg in all immersion periods except in the first 10-day immersion (2.6 mg/kg). Cr(VI) release is primarily determined by environmental factors (RH prior to immersion, solution pH, and antioxidant content). The RH should be kept low prior to testing Cr(VI) release from leather. © 2016 The Authors. Contact Dermatitis published by John Wiley & Sons Ltd.
Lapidus, Nathanael; Chevret, Sylvie; Resche-Rigon, Matthieu
2014-12-30
Agreement between two assays is usually based on the concordance correlation coefficient (CCC), estimated from the means, standard deviations, and correlation coefficient of these assays. However, such data will often suffer from left-censoring because of the lower limits of detection of these assays. To handle such data, we propose to extend a multiple imputation by chained equations (MICE) approach developed in the closely related setting of a single left-censored assay. The performance of this two-step approach is compared with that of a previously published maximum likelihood estimation through a simulation study. Results show close estimates of the CCC by both methods, although coverage is improved by our MICE proposal. An application to cytomegalovirus quantification data is provided. Copyright © 2014 John Wiley & Sons, Ltd.
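A minimal sketch of the CCC computation on complete data is shown below; in the censored setting described above the missing values would first be multiply imputed and the CCC pooled across imputations. The simulated assay values are illustrative.

    import numpy as np

    def ccc(x, y):
        # Lin's concordance correlation coefficient from means, variances and covariance.
        mx, my = x.mean(), y.mean()
        vx, vy = x.var(), y.var()
        sxy = np.mean((x - mx) * (y - my))
        return 2.0 * sxy / (vx + vy + (mx - my) ** 2)

    rng = np.random.default_rng(3)
    truth = rng.normal(5.0, 1.0, 500)
    assay1 = truth + rng.normal(0.0, 0.3, 500)
    assay2 = 0.9 * truth + rng.normal(0.4, 0.3, 500)   # slight bias and rescaling
    print(f"CCC = {ccc(assay1, assay2):.3f}")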
Fast and accurate estimation of the covariance between pairwise maximum likelihood distances.
Gil, Manuel
2014-01-01
Pairwise evolutionary distances are a model-based summary statistic for a set of molecular sequences. They represent the leaf-to-leaf path lengths of the underlying phylogenetic tree. Estimates of pairwise distances with overlapping paths covary because of shared mutation events. It is desirable to take this covariance structure into account to increase precision in any process that compares or combines distances. This paper introduces a fast estimator for the covariance of two pairwise maximum likelihood distances, estimated under general Markov models. The estimator is based on a conjecture (going back to Nei & Jin, 1989) which links the covariance to path lengths. It is proven here under a simple symmetric substitution model. A simulation shows that the estimator outperforms previously published ones in terms of the mean squared error.
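The effect being estimated can be seen empirically with a toy Monte Carlo check (not the paper's estimator): sequences evolve under Jukes-Cantor on the 4-taxon tree ((A,B),(C,D)), and distance pairs that share a branch covary while disjoint pairs do not. Branch lengths, sequence length, and replicate count are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(4)
    L = 2000                                  # sites per simulated alignment
    b = {"XA": 0.10, "XB": 0.10, "XY": 0.05, "YC": 0.10, "YD": 0.10}  # branch lengths

    def evolve(parent, t):
        # Jukes-Cantor transition kernel for branch length t (expected substitutions/site).
        p_change = 0.75 * (1.0 - np.exp(-4.0 * t / 3.0))
        change = rng.random(parent.size) < p_change
        child = parent.copy()
        child[change] = (parent[change] + rng.integers(1, 4, change.sum())) % 4
        return child

    def jc_distance(s1, s2):
        p = np.mean(s1 != s2)
        return -0.75 * np.log(1.0 - 4.0 * p / 3.0)

    d_AB, d_CD, d_AC, d_BD = [], [], [], []
    for _ in range(400):
        X = rng.integers(0, 4, L)
        A, B, Y = evolve(X, b["XA"]), evolve(X, b["XB"]), evolve(X, b["XY"])
        C, D = evolve(Y, b["YC"]), evolve(Y, b["YD"])
        d_AB.append(jc_distance(A, B)); d_CD.append(jc_distance(C, D))
        d_AC.append(jc_distance(A, C)); d_BD.append(jc_distance(B, D))

    print("cov(d_AB, d_CD), no shared branch      :", np.cov(d_AB, d_CD)[0, 1])
    print("cov(d_AC, d_BD), shared internal branch:", np.cov(d_AC, d_BD)[0, 1])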
NASA Technical Reports Server (NTRS)
Asenov, Asen; Slavcheva, G.; Brown, A. R.; Davies, J. H.; Saini, Subhash
1999-01-01
A detailed study of the influence of quantum effects in the inversion layer on the random dopant induced threshold voltage fluctuations and lowering in sub 0.1 micron MOSFETs has been performed. This has been achieved using a full 3D implementation of the density gradient (DG) formalism incorporated in our previously published 3D 'atomistic' simulation approach. This results in a consistent, fully 3D, quantum mechanical picture which implies not only the vertical inversion layer quantisation but also the lateral confinement effects manifested by current filamentation in the 'valleys' of the random potential fluctuations. We have shown that the net result of including quantum mechanical effects, while considering statistical fluctuations, is an increase in both threshold voltage fluctuations and lowering.
Mofid, Omid; Mobayen, Saleh
2018-01-01
Adaptive control methods are developed for stability and tracking control of flight systems in the presence of parametric uncertainties. This paper presents a design technique for adaptive sliding mode control (ASMC) for finite-time stabilization of unmanned aerial vehicle (UAV) systems with parametric uncertainties. Applying the Lyapunov stability concept and the idea of finite-time convergence, the proposed control method guarantees that the states of the quad-rotor UAV converge to the origin in finite time. Furthermore, an adaptive tuning scheme is proposed to estimate the unknown parameters of the quad-rotor UAV at any moment. Finally, simulation results are presented to demonstrate the effectiveness of the proposed technique compared with previous methods. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
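A generic adaptive sliding-mode sketch for a double-integrator stand-in of one quad-rotor axis is shown below; it is not the paper's finite-time controller, and all gains and the unknown disturbance are illustrative assumptions.

    import numpy as np

    # Sliding surface s = de + c*e, control u = -c*de - k*sign(s),
    # adaptation law dk/dt = gamma*|s| (gain grows until the surface is reached).
    dt, c, gamma = 1e-3, 2.0, 5.0
    x, v, k = 1.0, 0.0, 0.1                  # position error, velocity error, adaptive gain
    for step in range(int(5.0 / dt)):
        s = v + c * x                        # sliding variable
        u = -c * v - k * np.sign(s)          # equivalent + switching control
        d = 0.5 * np.sin(np.pi * step * dt)  # bounded unknown disturbance
        # double-integrator error dynamics: x' = v, v' = u + d
        x, v = x + dt * v, v + dt * (u + d)
        k += dt * gamma * abs(s)
    print(f"final |error| = {abs(x):.4f}, adapted gain = {k:.2f}")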
Application of a Model for Simulating the Vacuum Arc Remelting Process in Titanium Alloys
NASA Astrophysics Data System (ADS)
Patel, Ashish; Tripp, David W.; Fiore, Daniel
Mathematical modeling is routinely used in the process development and production of advanced aerospace alloys to gain greater insight into system dynamics and to predict the effect of process modifications or upsets on final properties. This article describes the application of a 2-D mathematical VAR model presented in previous LMPC meetings. The impact of process parameters on melt pool geometry, solidification behavior, fluid-flow and chemistry in Ti-6Al-4V ingots will be discussed. Model predictions were first validated against the measured characteristics of industrially produced ingots, and process inputs and model formulation were adjusted to match macro-etched pool shapes. The results are compared to published data in the literature. Finally, the model is used to examine ingot chemistry during successive VAR melts.
Statistical behavior of ten million experimental detection limits
NASA Astrophysics Data System (ADS)
Voigtman, Edward; Abraham, Kevin T.
2011-02-01
Using a lab-constructed laser-excited fluorimeter, together with bootstrapping methodology, the authors have generated many millions of experimental linear calibration curves for the detection of rhodamine 6G tetrafluoroborate in ethanol solutions. The detection limits computed from them are in excellent agreement with both previously published theory and with comprehensive Monte Carlo computer simulations. Currie decision levels and Currie detection limits, each in the theoretical, chemical content domain, were found to be simply scaled reciprocals of the non-centrality parameter of the non-central t distribution that characterizes univariate linear calibration curves that have homoscedastic, additive Gaussian white noise. Accurate and precise estimates of the theoretical, content domain Currie detection limit for the experimental system, with 5% (each) probabilities of false positives and false negatives, are presented.
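As background for the Currie quantities mentioned above (and not a restatement of the paper's non-central-t result), the classical response-domain expressions for homoscedastic Gaussian noise with known blank standard deviation sigma_0 are given below; content-domain analogues follow on dividing by the calibration slope:

    L_C = z_{1-\alpha}\,\sigma_0, \qquad
    L_D = (z_{1-\alpha} + z_{1-\beta})\,\sigma_0 \approx 3.29\,\sigma_0 \quad (\alpha = \beta = 0.05)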
Analytical Study of 90Sr Betavoltaic Nuclear Battery Performance Based on p-n Junction Silicon
NASA Astrophysics Data System (ADS)
Rahastama, Swastya; Waris, Abdul
2016-08-01
Previously, an analytical calculation of a 63Ni p-n junction betavoltaic battery was published. As a baseline, we reproduced the analytical simulation of the 63Ni betavoltaic battery and compared it to the previous results using the same battery design. Furthermore, we calculated its maximum power output and radiation-electricity conversion efficiency using a semiconductor analysis method. The same method was then applied to calculate and analyse the performance of a 90Sr betavoltaic battery. The aim of this project is to compare the analytical performance of the 90Sr betavoltaic battery with that of the 63Ni betavoltaic battery, and to examine the influence of source activity on performance. Since it has a higher power density, the 90Sr betavoltaic battery yields more power than the 63Ni betavoltaic battery but a lower radiation-electricity conversion efficiency. However, beta particles emitted from the 90Sr source can travel further inside the silicon, in accordance with their stopping range, so the 90Sr betavoltaic battery can be designed thicker than the 63Ni battery to achieve a higher conversion efficiency.
Raz, O; Herrera, J; Dorren, H J S
2009-02-02
By using a filter with tunable bandwidth and wavelength and a very sharp roll-off, we show considerable improvement of all-optical wavelength conversion based on cross-gain and cross-phase modulation effects in a semiconductor optical amplifier and spectral slicing. At 40 Gb/s, slicing of blue spectral components is shown to result in a small penalty of 0.7 dB with minimal eye broadening, and at 80 Gb/s the demonstrated 0.5 dB penalty is a dramatic improvement over previously reported wavelength converters using the same principle. Additionally, we give for the first time quantitative results for the case of red spectral slicing at 40 Gb/s, which we found to have only a 0.5 dB penalty and a narrower time response, as anticipated by previously published theoretical papers. Numerical simulations of the dependence of the eye opening on the filter characteristics highlight the importance of combining a sharp filter roll-off with a broad passband.
Regulation of Cortical Dynamic Range by Background Synaptic Noise and Feedforward Inhibition.
Khubieh, Ayah; Ratté, Stéphanie; Lankarany, Milad; Prescott, Steven A
2016-08-01
The cortex encodes a broad range of inputs. This breadth of operation requires sensitivity to weak inputs yet non-saturating responses to strong inputs. If individual pyramidal neurons were to have a narrow dynamic range, as previously claimed, then staggered all-or-none recruitment of those neurons would be necessary for the population to achieve a broad dynamic range. Contrary to this explanation, we show here through dynamic clamp experiments in vitro and computer simulations that pyramidal neurons have a broad dynamic range under the noisy conditions that exist in the intact brain due to background synaptic input. Feedforward inhibition capitalizes on those noise effects to control neuronal gain and thereby regulates the population dynamic range. Importantly, noise allows neurons to be recruited gradually and occludes the staggered recruitment previously attributed to heterogeneous excitation. Feedforward inhibition protects spike timing against the disruptive effects of noise, meaning noise can enable the gain control required for rate coding without compromising the precise spike timing required for temporal coding. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Anderson, Kimberly R.; Anthony, T. Renée
2014-01-01
An understanding of how particles are inhaled into the human nose is important for developing samplers that measure biologically relevant estimates of exposure in the workplace. While previous computational mouth-breathing investigations of particle aspiration have been conducted in slow-moving air, nose breathing still required exploration. Computational fluid dynamics was used to estimate nasal aspiration efficiency for an inhaling humanoid form at low wind speeds (0.1–0.4 m s^-1). Breathing was simplified as continuous inhalation through the nose. Fluid flow and particle trajectories were simulated over seven discrete orientations relative to the oncoming wind (0, 15, 30, 60, 90, 135, 180°). Sensitivities to the model simplifications and methods were assessed, particularly the placement of the recessed nostril surface and the size of the nose. Simulations identified higher aspiration (13% on average) when compared to published experimental wind tunnel data. Significant differences in aspiration were identified between nose geometries, with the smaller nose aspirating an average of 8.6% more than the larger nose. Differences in fluid flow solution methods accounted for 2% average differences, on the order of methodological uncertainty. Trends similar to mouth-breathing simulations were observed, including increasing aspiration efficiency with decreasing freestream velocity and decreasing aspiration with increasing rotation away from the oncoming wind. These models indicate that nasal aspiration in slow-moving air occurs only for particles <100 µm. PMID:24665111
Numerical Homogenization of Jointed Rock Masses Using Wave Propagation Simulation
NASA Astrophysics Data System (ADS)
Gasmi, Hatem; Hamdi, Essaïeb; Bouden Romdhane, Nejla
2014-07-01
Homogenization in fractured rock analyses is essentially based on the calculation of equivalent elastic parameters. In this paper, a new numerical homogenization method that was programmed by means of a MATLAB code, called HLA-Dissim, is presented. The developed approach simulates a discontinuity network of real rock masses based on the International Society of Rock Mechanics (ISRM) scanline field mapping methodology. Then, it evaluates a series of classic joint parameters to characterize density (RQD, specific length of discontinuities). A pulse wave, characterized by its amplitude, central frequency, and duration, is propagated from a source point to a receiver point of the simulated jointed rock mass using a complex recursive method for evaluating the transmission and reflection coefficient for each simulated discontinuity. The seismic parameters, such as delay, velocity, and attenuation, are then calculated. Finally, the equivalent medium model parameters of the rock mass are computed numerically while taking into account the natural discontinuity distribution. This methodology was applied to 17 bench fronts from six aggregate quarries located in Tunisia, Spain, Austria, and Sweden. It allowed characterizing the rock mass discontinuity network, the resulting seismic performance, and the equivalent medium stiffness. The relationship between the equivalent Young's modulus and rock discontinuity parameters was also analyzed. For these different bench fronts, the proposed numerical approach was also compared to several empirical formulas, based on RQD and fracture density values, published in previous research studies, showing its usefulness and efficiency in estimating rapidly the Young's modulus of equivalent medium for wave propagation analysis.
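For orientation only, and as an assumption rather than the HLA-Dissim formulation, the classical displacement-discontinuity result for normal-incidence transmission across a single dry joint illustrates how joint stiffness filters higher frequencies.

    import numpy as np

    # |T(w)| = 1 / sqrt(1 + (w*Z / (2*kappa))^2), with seismic impedance Z = rho*c
    # and joint specific stiffness kappa. Material values are assumed for illustration.
    rho, c = 2650.0, 4500.0            # rock density (kg/m^3) and P-wave speed (m/s)
    kappa = 5.0e10                     # joint specific stiffness (Pa/m)
    Z = rho * c

    for f_hz in (50.0, 200.0, 1000.0):
        w = 2.0 * np.pi * f_hz
        T = 1.0 / np.sqrt(1.0 + (w * Z / (2.0 * kappa)) ** 2)
        print(f"{f_hz:7.0f} Hz: |T| = {T:.3f}")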
Analysis of electron beam induced deposition (EBID) of residual hydrocarbons in electron microscopy
NASA Astrophysics Data System (ADS)
Rykaczewski, Konrad; White, William B.; Fedorov, Andrei G.
2007-03-01
In this work we have developed a comprehensive dynamic model of electron beam induced deposition (EBID) of residual hydrocarbons, coupling mass transport, electron transport and scattering, and species decomposition to predict the deposition of carbon nanopillars. The simulations predict the local species and electron density distributions, as well as the three-dimensional morphology and the growth rate of the deposit. Since the process occurs in a high-vacuum environment, surface diffusion is considered as the primary transport mode of the surface-adsorbed hydrocarbon precursor. The governing surface transport equation (STE) of the adsorbed species is derived and solved numerically. The transport, scattering, and absorption of primary electrons, as well as secondary electron generation, are treated using the Monte Carlo method. Low-energy secondary electrons are the major contributors to hydrocarbon decomposition because their energy range matches the peak dissociation reaction cross-section energies for precursor molecules. The deposit and substrate are treated as a continuous entity, allowing the simulation of the growth of a realistically sized deposit rather than a large number of cells representing each individual atom as in previously published simulations [Mitsuishi et al., Ultramicroscopy 103, 17 (2005); Silvis-Cividjian, Ph.D. thesis, University of Delft, 2002]. Such a formulation allows for simple coupling of the STE with the dynamic growth of the nanopillar. Three different growth regimes occurring in EBID are identified using scaling analysis, and simulations are used to describe the deposit morphology and precursor surface concentration specific to each growth regime.
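For context, the continuum adsorbate rate equation commonly used in EBID modeling takes the form below (a standard textbook form shown for orientation; the STE derived in the paper may differ in detail), where n is the adsorbate surface density, s the sticking coefficient, Phi the precursor impingement flux, n_0 the monolayer density, tau the residence time, sigma the dissociation cross section, f the local electron flux, and D the surface diffusion coefficient:

    \frac{\partial n}{\partial t} = s\,\Phi\left(1 - \frac{n}{n_0}\right) - \frac{n}{\tau} - \sigma f\, n + D\,\nabla_s^{2} n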
Egelund, E F; Isaza, R; Brock, A P; Alsultan, A; An, G; Peloquin, C A
2015-04-01
The objective of this study was to develop a population pharmacokinetic model for rifampin in elephants. Rifampin concentration data from three sources were pooled to provide a total of 233 oral concentrations from 37 Asian elephants. The population pharmacokinetic models were created using Monolix (version 4.2). Simulations were conducted using ModelRisk. We examined the influence of age, food, sex, and weight as model covariates. We further optimized the dosing of rifampin based upon simulations using the population pharmacokinetic model. Rifampin pharmacokinetics were best described by a one-compartment open model including first-order absorption with a lag time and first-order elimination. Body weight was a significant covariate for volume of distribution, and food intake was a significant covariate for lag time. The median Cmax of 6.07 μg/mL was below the target range of 8-24 μg/mL. Monte Carlo simulations predicted the highest treatable MIC of 0.25 μg/mL with the current initial dosing recommendation of 10 mg/kg, based upon a previously published target AUC0-24/MIC > 271 (fAUC > 41). Simulations from the population model indicate that the current dose of 10 mg/kg may be adequate for MICs up to 0.25 μg/mL. While the targeted AUC/MIC may be adequate for most MICs, the median Cmax for all elephants is below the human and elephant targeted ranges. © 2014 John Wiley & Sons Ltd.
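A sketch of the structural form identified above, a one-compartment oral model with first-order absorption, a lag time, and first-order elimination, is given below; the parameter values are placeholders for illustration, not the elephant population estimates.

    import numpy as np

    def conc(t, dose_mg, ka, ke, V_L, tlag, F=1.0):
        # Concentration-time profile after a single oral dose with absorption lag.
        t_eff = np.maximum(t - tlag, 0.0)
        return (F * dose_mg * ka) / (V_L * (ka - ke)) * (np.exp(-ke * t_eff) - np.exp(-ka * t_eff))

    t = np.linspace(0.0, 24.0, 97)                       # hours
    c = conc(t, dose_mg=10.0 * 3000.0,                   # 10 mg/kg for a 3000 kg elephant, assumed
             ka=0.8, ke=0.2, V_L=2000.0, tlag=1.0)       # assumed values
    print(f"Cmax ~ {c.max():.2f} ug/mL at t = {t[c.argmax()]:.1f} h, "
          f"AUC0-24 ~ {np.trapz(c, t):.0f} ug*h/mL")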
Jackson, Rachel W; Dembia, Christopher L; Delp, Scott L; Collins, Steven H
2017-06-01
The goal of this study was to gain insight into how ankle exoskeletons affect the behavior of the plantarflexor muscles during walking. Using data from previous experiments, we performed electromyography-driven simulations of musculoskeletal dynamics to explore how changes in exoskeleton assistance affected plantarflexor muscle-tendon mechanics, particularly for the soleus. We used a model of muscle energy consumption to estimate individual muscle metabolic rate. As average exoskeleton torque was increased, while no net exoskeleton work was provided, a reduction in tendon recoil led to an increase in positive mechanical work performed by the soleus muscle fibers. As net exoskeleton work was increased, both soleus muscle fiber force and positive mechanical work decreased. Trends in the sum of the metabolic rates of the simulated muscles correlated well with trends in experimentally observed whole-body metabolic rate ( R 2 =0.9), providing confidence in our model estimates. Our simulation results suggest that different exoskeleton behaviors can alter the functioning of the muscles and tendons acting at the assisted joint. Furthermore, our results support the idea that the series tendon helps reduce positive work done by the muscle fibers by storing and returning energy elastically. We expect the results from this study to promote the use of electromyography-driven simulations to gain insight into the operation of muscle-tendon units and to guide the design and control of assistive devices. © 2017. Published by The Company of Biologists Ltd.
Grouleff, Julie; Schiøtt, Birgit
2013-01-01
The competitive inhibitor cocaine and the non-competitive inhibitor ibogaine induce different conformational states of the human serotonin transporter. Accessibility experiments have shown that cocaine mainly induces an outward-facing conformation, while the non-competitive inhibitor ibogaine, and its active metabolite noribogaine, have been proposed to induce an inward-facing conformation of the human serotonin transporter similar to what has been observed for the endogenous substrate, serotonin. The ligand-induced conformational changes within the human serotonin transporter caused by these three types of ligands (substrate, non-competitive inhibitor, and competitive inhibitor) are studied in multiple atomistic molecular dynamics simulations initiated from a homology model of the human serotonin transporter. The results reveal that diverse conformations of the human serotonin transporter are captured in the molecular dynamics simulations depending on the type of ligand bound. The inward-facing conformation of the human serotonin transporter is reached with noribogaine bound, and this state resembles a previously identified inward-facing conformation of the human serotonin transporter obtained from molecular dynamics simulation with bound substrate, as well as a recently published inward-facing conformation of a bacterial homolog, the leucine transporter from Aquifex aeolicus. The differences observed in ligand-induced behavior are found to originate from different interaction patterns between the ligands and the protein. Such atomic-level understanding of how an inhibitor can dictate the conformational response of a transporter upon ligand binding may be of great importance for future drug design. PMID:23776432
Planetary Boundary Layer Simulation Using TASS
NASA Technical Reports Server (NTRS)
Schowalter, David G.; DeCroix, David S.; Lin, Yuh-Lang; Arya, S. Pal; Kaplan, Michael
1996-01-01
Boundary conditions of an existing large-eddy simulation model have been changed in order to simulate turbulence in the atmospheric boundary layer. Several options are now available, including the use of a surface energy balance. In addition, we compare convective boundary layer simulations with the Wangara and Minnesota field experiments as well as with other model results. We find excellent agreement of modelled mean profiles of wind and temperature with observations and good agreement for velocity variances. Neutral boundary layer simulation results are compared with theory and with previously used models. Agreement with theory is reasonable, while agreement with previous models is excellent.
Emotion, cognitive load and learning outcomes during simulation training.
Fraser, Kristin; Ma, Irene; Teteris, Elise; Baxter, Heather; Wright, Bruce; McLaughlin, Kevin
2012-11-01
Simulation training has emerged as an effective way to complement clinical training of medical students. Yet outcomes from simulation training must be considered suboptimal when 25-30% of students fail to recognise a cardiac murmur on which they were trained 1 hour previously. There are several possible explanations for failure to improve following simulation training, which include the impact of heightened emotions on learning and cognitive overload caused by interactivity with high-fidelity simulators. This study was conducted to assess emotion during simulation training and to explore the relationships between emotion and cognitive load, and diagnostic performance. We trained 84 Year 1 medical students on a scenario of chest pain caused by symptomatic aortic stenosis. After training, students were asked to rate their emotional state and cognitive load. We then provided training on a dyspnoea scenario before asking participants to diagnose the murmur in which they had been trained (aortic stenosis) and a novel murmur (mitral regurgitation). We used factor analysis to identify the principal components of emotion, and then studied the associations between these components of emotion and cognitive load and diagnostic performance. We identified two principal components of emotion, which we felt represented invigoration and tranquillity. Both of these were associated with cognitive load with adjusted regression coefficients of 0.63 (95% confidence interval [CI] 0.28-0.99; p = 0.001) and - 0.44 (95% CI - 0.77 to - 0.10; p = 0.009), respectively. We found a significant negative association between cognitive load and the odds of subsequently identifying the trained murmur (odds ratio 0.27, 95% CI 0.11-0.67; p = 0.004). We found that increased invigoration and reduced tranquillity during simulation training were associated with increased cognitive load, and that the likelihood of correctly identifying a trained murmur declined with increasing cognitive load. Further studies are needed to evaluate the impact on performance of strategies to alter emotion and cognitive load during simulation training. © Blackwell Publishing Ltd 2012.
Modeling the Cost Effectiveness of Malaria Control Interventions in the Highlands of Western Kenya
Stuckey, Erin M.; Stevenson, Jennifer; Galactionova, Katya; Baidjoe, Amrish Y.; Bousema, Teun; Odongo, Wycliffe; Kariuki, Simon; Drakeley, Chris; Smith, Thomas A.; Cox, Jonathan; Chitnis, Nakul
2014-01-01
Introduction: Tools that allow for in silico optimization of available malaria control strategies can assist the decision-making process for prioritizing interventions. The OpenMalaria stochastic simulation modeling platform can be applied to simulate the impact of interventions singly and in combination as implemented in Rachuonyo South District, western Kenya, to support this goal. Methods: Combinations of malaria interventions were simulated using a previously-published, validated model of malaria epidemiology and control in the study area. An economic model of the costs of case management and malaria control interventions in Kenya was applied to simulation results and cost-effectiveness of each intervention combination compared to the corresponding simulated outputs of a scenario without interventions. Uncertainty was evaluated by varying health system and intervention delivery parameters. Results: The intervention strategy with the greatest simulated health impact employed long lasting insecticide treated net (LLIN) use by 80% of the population, 90% of households covered by indoor residual spraying (IRS) with deployment starting in April, and intermittent screen and treat (IST) of school children using Artemether lumefantrine (AL) with 80% coverage twice per term. However, the current malaria control strategy in the study area including LLIN use of 56% and IRS coverage of 70% was the most cost effective at reducing disability-adjusted life years (DALYs) over a five year period. Conclusions: All the simulated intervention combinations can be considered cost effective in the context of available resources for health in Kenya. Increasing coverage of vector control interventions has a larger simulated impact compared to adding IST to the current implementation strategy, suggesting that transmission in the study area is not at a level to warrant replacing vector control with a school-based screen and treat program. These results have the potential to assist malaria control program managers in the study area in adding new or changing implementation of current interventions. PMID:25290939
Distributed Observer Network (DON), Version 3.0, User's Guide
NASA Technical Reports Server (NTRS)
Mazzone, Rebecca A.; Conroy, Michael P.
2015-01-01
The Distributed Observer Network (DON) is a data presentation tool developed by the National Aeronautics and Space Administration (NASA) to distribute and publish simulation results. Leveraging the display capabilities inherent in modern gaming technology, DON places users in a fully navigable 3-D environment containing graphical models and allows the users to observe how those models evolve and interact over time in a given scenario. Each scenario is driven with data that has been generated by authoritative NASA simulation tools and exported in accordance with a published data interface specification. This decoupling of the data from the source tool enables DON to faithfully display a simulator's results and ensure that every simulation stakeholder will view the exact same information every time.
NASA Astrophysics Data System (ADS)
Clark, Stephen; Winske, Dan; Schaeffer, Derek; Everson, Erik; Bondarenko, Anton; Constantin, Carmen; Niemann, Christoph
2014-10-01
We present 3D hybrid simulations of laser-produced expanding debris clouds propagating through a magnetized ambient plasma in the context of magnetized collisionless shocks. New results from the 3D code are compared to previously obtained simulation results using a 2D hybrid code. The 3D code is an extension of a 2D code previously developed at Los Alamos National Laboratory. It has been parallelized and ported to run in a cluster environment. The new simulations are used to verify scaling relationships, such as shock onset time and coupling parameter (Rm/ρd), developed via 2D simulations. Previous 2D results focus primarily on laboratory shock formation relevant to experiments being performed on the Large Plasma Device, where the shock propagates across the magnetic field. The new 3D simulations show wave structure and dynamics oblique to the magnetic field that introduce new physics to be considered in future experiments.
NASA Astrophysics Data System (ADS)
Gherghel-Lascu, A.; Apel, W. D.; Arteaga-Velázquez, J. C.; Bekk, K.; Bertaina, M.; Blümer, J.; Bozdog, H.; Brancus, I. M.; Cantoni, E.; Chiavassa, A.; Cossavella, F.; Daumiller, K.; de Souza, V.; Di Pierro, F.; Doll, P.; Engel, R.; Engler, J.; Fuchs, B.; Fuhrmann, D.; Gils, H. J.; Glasstetter, R.; Grupen, C.; Haungs, A.; Heck, D.; Hörandel, J. R.; Huber, D.; Huege, T.; Kampert, K.-H.; Kang, D.; Klages, H. O.; Link, K.; Łuczak, P.; Mathes, H. J.; Mayer, H. J.; Milke, J.; Mitrica, B.; Morello, C.; Oehlschläger, J.; Ostapchenko, S.; Palmieri, N.; Petcu, M.; Pierog, T.; Rebel, H.; Roth, M.; Schieler, H.; Schoo, S.; Schröder, F. G.; Sima, O.; Toma, G.; Trinchero, G. C.; Ulrich, H.; Weindl, A.; Wochele, J.; Zabierowski, J.
2015-02-01
In previous studies of KASCADE-Grande data, a Monte Carlo simulation code based on the GEANT3 program has been developed to describe the energy deposited by EAS particles in the detector stations. In an attempt to decrease the simulation time and ensure compatibility with the geometry description in standard KASCADE-Grande analysis software, several structural elements have been neglected in the implementation of the Grande station geometry. To improve the agreement between experimental and simulated data, a more accurate simulation of the response of the KASCADE-Grande detector is necessary. A new simulation code has been developed based on the GEANT4 program, including a realistic geometry of the detector station with structural elements that have not been considered in previous studies. The new code is used to study the influence of a realistic detector geometry on the energy deposited in the Grande detector stations by particles from EAS events simulated by CORSIKA. Lateral Energy Correction Functions are determined and compared with previous results based on GEANT3.
ERIC Educational Resources Information Center
Joyce, William W., Ed.
1974-01-01
An argument for simulation games in elementary education, instructional problems related to homemade, adapted, and prepackaged games, research results on gaming with 8-year-olds, simulation games for middle schools, and an annotated bibliography of published simulation games offer answers to most questions on simulation games. (KM)
14 CFR Appendix H to Part 121 - Advanced Simulation
Code of Federal Regulations, 2014 CFR
2014-01-01
... 14 Aeronautics and Space 3 2014-01-01 2014-01-01 false Advanced Simulation H Appendix H to Part... Simulation Link to an amendment published at 78 FR 67846, Nov. 12, 2013. This appendix provides guidelines... Simulation Training Program For an operator to conduct Level C or D training under this appendix all required...
ERIC Educational Resources Information Center
Hart, Jeffrey A.
1985-01-01
Presents a discussion of how computer simulations are used in two undergraduate social science courses and a faculty computer literacy course on simulations and artificial intelligence. Includes a list of 60 simulations for use on mainframes and microcomputers. Entries include type of hardware required, publisher's address, and cost. Sample…
14 CFR 121.409 - Training courses using airplane simulators and other training devices.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 14 Aeronautics and Space 3 2014-01-01 2014-01-01 false Training courses using airplane simulators... Program § 121.409 Training courses using airplane simulators and other training devices. Link to an amendment published at 78 FR 67837, Nov. 12, 2013. (a) Training courses utilizing airplane simulators and...
Using the Everest Team Simulation to Teach Threshold Concepts
ERIC Educational Resources Information Center
Nichols, Elizabeth; Wright, April L.
2015-01-01
This resource review focuses on "Leadership and Team Simulation: Everest V2" released by Harvard Business Publishing. The review describes the simulation's story line of a commercial team expedition climbing to the summit of Mount Everest along with the simulation's architecture and key features. Building on Wright and Gilmore's (2012)…
Practical Unitary Simulator for Non-Markovian Complex Processes
NASA Astrophysics Data System (ADS)
Binder, Felix C.; Thompson, Jayne; Gu, Mile
2018-06-01
Stochastic processes are as ubiquitous throughout the quantitative sciences as they are notorious for being difficult to simulate and predict. In this Letter, we propose a unitary quantum simulator for discrete-time stochastic processes which requires less internal memory than any classical analogue throughout the simulation. The simulator's internal memory requirements equal those of the best previous quantum models. However, in contrast to previous models, it only requires a (small) finite-dimensional Hilbert space. Moreover, since the simulator operates unitarily throughout, it avoids any unnecessary information loss. We provide a stepwise construction of simulators for a large class of stochastic processes, hence directly opening the possibility of experimental implementations with current platforms for quantum computation. The results are illustrated for an example process.
Integration and Validation of Hysteroscopy Simulation in the Surgical Training Curriculum.
Elessawy, Mohamed; Skrzipczyk, Moritz; Eckmann-Scholz, Christel; Maass, Nicolai; Mettler, Liselotte; Guenther, Veronika; van Mackelenbergh, Marion; Bauerschlag, Dirk O; Alkatout, Ibrahim
The primary objective of our study was to test the construct validity of the HystSim hysteroscopic simulator to determine whether simulation training can improve the acquisition of hysteroscopic skills regardless of the previous levels of experience of the participants. The secondary objective was to analyze the performance of a selected task, using specially designed scoring charts to help reduce the learning curve for both novices and experienced surgeons. The teaching of hysteroscopic intervention has received only scant attention, focusing mainly on the development of physical models and box simulators. This encouraged our working group to search for a suitable hysteroscopic simulator module and to test its validation. We decided to use the HystSim hysteroscopic simulator, which is one of the few such simulators that has already completed a validation process, with high ratings for both realism and training capacity. As a testing tool for our study, we selected the myoma resection task. We analyzed the results using the multimetric score system suggested by HystSim, allowing a more precise interpretation of the results. Between June 2014 and May 2015, our group collected data on 57 participants of minimally invasive surgical training courses at the Kiel School of Gynecological Endoscopy, Department of Gynecology and Obstetrics, University Hospitals Schleswig-Holstein, Campus Kiel. The novice group consisted of 42 medical students and residents with no prior experience in hysteroscopy, whereas the expert group consisted of 15 participants with more than 2 years of experience of advanced hysteroscopy operations. The overall results demonstrated that all participants attained significant improvements between their pretest and posttests, independent of their previous levels of experience (p < 0.002). Those in the expert group demonstrated statistically significant, superior scores in the pretest and posttests (p = 0.001, p = 0.006). Regarding visualization and ergonomics, the novices showed a better pretest value than the experts; however, the experts were able to improve significantly during the posttest. These precise findings demonstrated that the multimetric scoring system achieved several important objectives, including clinical relevance, critical relevance, and training motivation. All participants demonstrated improvements in their hysteroscopic skills, proving an adequate construct validation of the HystSim. Using the multimetric scoring system enabled a more accurate analysis of the performance of the participants independent of their levels of experience which could be an important key for streamlining the learning curve. Future studies testing the predictive validation of the simulator and frequency of the training intervals are necessary before the introduction of the simulator into the standard surgical training curriculum. Copyright © 2016 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
Ruano-Ravina, Alberto; Álvarez-Dardet, Carlos; Domínguez-Berjón, M Felicitas; Fernández, Esteve; García, Ana M; Borrell, Carme
2016-01-01
The purpose of the study was to analyze the determinants of citations, such as publication year, article type, article topic, selection for a press release, number of articles previously published by the corresponding author, and publication language, in a Spanish journal of public health. Observational study including all articles published in Gaceta Sanitaria during 2007-2011. We retrieved the number of citations from the ISI Web of Knowledge database in June 2013, along with information on other variables such as the number of articles published by the corresponding author in the previous 5 years (searched through PubMed), selection for a press release, publication language, article type and topic, and others. We included 542 articles. Of these, 62.5% were cited in the period considered. We observed increased odds of citation for articles selected for a press release and for articles whose corresponding author had published more articles previously. Publication in English did not appear to increase citations. Certain externalities, such as the number of articles previously published by the corresponding author and selection for a press release, seem to influence the number of citations in national journals. Copyright © 2016 Elsevier Inc. All rights reserved.
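The analysis described above lends itself to a logistic model of citation status. Below is a minimal, hedged sketch of that kind of model, fitted to synthetic data (the variable names, effect sizes, and sample values are illustrative assumptions, not the study's data); it reports odds ratios comparable in spirit to those discussed in the abstract.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 542
press_release = rng.binomial(1, 0.15, n)     # selected for a press release?
prior_articles = rng.poisson(3.0, n)         # corresponding author's prior output
english = rng.binomial(1, 0.3, n)            # published in English?

# Synthetic outcome with effects roughly in the direction reported (illustrative)
logit = -0.2 + 0.9 * press_release + 0.15 * prior_articles + 0.0 * english
cited = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([press_release, prior_articles, english]))
fit = sm.Logit(cited, X).fit(disp=0)
print(np.exp(fit.params))   # odds ratios: intercept, press release, prior output, English
```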
NASA Astrophysics Data System (ADS)
DeBeer, Chris M.; Pomeroy, John W.
2017-10-01
The spatial heterogeneity of mountain snow cover and ablation is important in controlling patterns of snow cover depletion (SCD), meltwater production, and runoff, yet is not well represented in most large-scale hydrological models and land surface schemes. Analyses were conducted in this study to examine the influence of various representations of snow cover and melt energy heterogeneity on both simulated SCD and stream discharge from a small alpine basin in the Canadian Rocky Mountains. Simulations were performed using the Cold Regions Hydrological Model (CRHM), where point-scale snowmelt computations were made using a snowpack energy balance formulation and applied to spatial frequency distributions of snow water equivalent (SWE) on individual slope-, aspect-, and landcover-based hydrological response units (HRUs) in the basin. Hydrological routines were added to represent the vertical and lateral transfers of water through the basin and channel system. From previous studies it is understood that the heterogeneity of late winter SWE is a primary control on patterns of SCD. The analyses here showed that spatial variation in applied melt energy, mainly due to differences in net radiation, has an important influence on SCD at multiple scales and on basin discharge, and cannot be neglected without serious error in the prediction of these variables. A single basin SWE distribution using the basin-wide mean SWE and coefficient of variation (CV; standard deviation/mean) was found to represent the fine-scale spatial heterogeneity of SWE sufficiently well. Simulations that accounted for differences in mean SWE among HRUs but neglected the sub-HRU heterogeneity of SWE were found to yield discharge results similar to those of simulations that included this heterogeneity, while SCD was poorly represented, even at the basin level. Finally, applying point-scale snowmelt computations based on a single SWE depth for each HRU (thereby neglecting spatial differences in internal snowpack energetics over the distributions) was found to yield SCD and discharge results similar to those of simulations that resolved internal energy differences. Spatial/internal snowpack melt energy effects are more pronounced earlier in spring, before the main period of snowmelt and SCD, as shown in previously published work. The paper discusses the importance of these findings as they apply to the warranted complexity of snowmelt process simulation in cold mountain environments, and shows how the end-of-winter SWE distribution represents an effective means of resolving snow cover heterogeneity at multiple scales for modelling, even in steep and complex terrain.
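As a rough illustration of the "mean SWE plus CV" representation discussed above, the sketch below depletes a lognormal SWE frequency distribution (a commonly assumed form; the parameter values are placeholders, not the basin's measured statistics) with spatially uniform cumulative melt to produce a snow-cover depletion curve.

```python
import numpy as np
from scipy import stats

def scd_curve(mean_swe, cv, melt_depths):
    """Snow-covered fraction after applying spatially uniform melt to a
    lognormal SWE distribution described by its mean and CV (illustrative)."""
    sigma2 = np.log(1.0 + cv**2)              # lognormal shape from CV
    mu = np.log(mean_swe) - 0.5 * sigma2      # lognormal scale from the mean
    dist = stats.lognorm(s=np.sqrt(sigma2), scale=np.exp(mu))
    # A point stays snow-covered while its initial SWE exceeds cumulative melt.
    return np.array([dist.sf(m) for m in melt_depths])

melt = np.linspace(0.0, 400.0, 9)             # cumulative melt (mm)
print(scd_curve(mean_swe=150.0, cv=0.5, melt_depths=melt))
```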
Evaluation of Accelerometer-Based Fall Detection Algorithms on Real-World Falls
Bagalà, Fabio; Becker, Clemens; Cappello, Angelo; Chiari, Lorenzo; Aminian, Kamiar; Hausdorff, Jeffrey M.; Zijlstra, Wiebren; Klenk, Jochen
2012-01-01
Despite extensive preventive efforts, falls continue to be a major source of morbidity and mortality among the elderly. Real-time detection of falls and their urgent communication to a telecare center may enable rapid medical assistance, thus increasing the sense of security of the elderly and reducing some of the negative consequences of falls. Many different approaches have been explored to automatically detect a fall using inertial sensors. Although previously published algorithms report high sensitivity (SE) and high specificity (SP), they have usually been tested on simulated falls performed by healthy volunteers. We recently collected acceleration data during a number of real-world falls among a patient population with a high fall risk as part of the SensAction-AAL European project. The aim of the present study is to benchmark the performance of thirteen published fall-detection algorithms when they are applied to the database of 29 real-world falls. To the best of our knowledge, this is the first systematic comparison of fall-detection algorithms tested on real-world falls. We found that the average SP of the thirteen algorithms was (mean±std) 83.0%±30.3% (maximum value = 98%). The SE was considerably lower (SE = 57.0%±27.3%, maximum value = 82.8%), much lower than the values obtained on simulated falls. The number of false alarms generated by the algorithms during 1-day monitoring of three representative fallers ranged from 3 to 85. The factors that affect the performance of the published algorithms when they are applied to real-world falls are also discussed. These findings indicate the importance of testing fall-detection algorithms in real-life conditions in order to produce more effective automated alarm systems with higher acceptance. Further, the present results support the idea that a large, shared real-world fall database could, potentially, provide an enhanced understanding of the fall process and the information needed to design and evaluate a high-performance fall detector. PMID:22615890
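For orientation, here is a minimal, hedged sketch of the kind of accelerometer-based detector being benchmarked (a simple impact-plus-inactivity rule; the thresholds and function names are illustrative assumptions, not any of the thirteen published algorithms), together with the sensitivity/specificity computation used to score it.

```python
import numpy as np

def detect_fall(acc, fs, impact_g=3.0, still_g=0.15, still_window_s=2.0):
    """Very simple impact-plus-inactivity detector on a tri-axial
    accelerometer trace (n_samples x 3, in g). Thresholds are illustrative."""
    mag = np.linalg.norm(acc, axis=1)
    impacts = np.where(mag > impact_g)[0]
    win = int(still_window_s * fs)
    for i in impacts:
        after = mag[i + win : i + 2 * win]
        if after.size and np.all(np.abs(after - 1.0) < still_g):
            return True                      # impact followed by lying still
    return False

def sensitivity_specificity(pred, truth):
    """SE and SP from per-recording detector outputs vs ground truth."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    se = np.sum(pred & truth) / max(np.sum(truth), 1)
    sp = np.sum(~pred & ~truth) / max(np.sum(~truth), 1)
    return se, sp
```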
Future Climate Change in the Baltic Sea Area
NASA Astrophysics Data System (ADS)
Bøssing Christensen, Ole; Kjellström, Erik; Zorita, Eduardo; Sonnenborg, Torben; Meier, Markus; Grinsted, Aslak
2015-04-01
Regional climate models have been used extensively since the first assessment of climate change in the Baltic Sea region published in 2008, not least for studies of Europe (including the Baltic Sea catchment area). Therefore, conclusions regarding climate model results have a better foundation than was the case for the first BACC report of 2008. This presentation will report model results regarding future climate. What is the state of understanding about future human-driven climate change? We will cover regional models, statistical downscaling, hydrological modelling, ocean modelling and sea-level change as it is projected for the Baltic Sea region. Collections of regional model simulations, for example from the ENSEMBLES project financed through the European Framework Programme and from the World Climate Research Programme Coordinated Regional Climate Downscaling Experiment, have made it possible to obtain an increasingly robust estimation of model uncertainty. While the first Baltic Sea assessment mainly used four simulations from the European 5th Framework Programme PRUDENCE project, an ensemble of 13 transient regional simulations with twice the horizontal resolution reaching the end of the 21st century has been available from the ENSEMBLES project; therefore it has been possible to obtain more quantitative assessments of model uncertainty. The literature about future climate change in the Baltic Sea region is largely built upon the ENSEMBLES project. Also within statistical downscaling, a considerable number of papers have been published, now encompassing the application of non-linear statistical models, projected changes in extremes and correction of climate model biases. The uncertainty of hydrological change has received increasing attention since the previous Baltic Sea assessment. Several studies on the propagation of uncertainties originating in GCMs, RCMs, and emission scenarios are presented. The number of studies on uncertainties related to downscaling and impact models is relatively small, but more are emerging. A large number of coupled climate-environmental scenario simulations for the Baltic Sea have been performed within the BONUS+ projects (ECOSUPPORT, INFLOW, AMBER and Baltic-C (2009-2011)), using various combinations of output from GCMs, RCMs, hydrological models and scenarios for load and emission of nutrients as forcing for Baltic Sea models. Such a large ensemble of scenario simulations for the Baltic Sea has never before been produced and enables for the first time an estimation of uncertainties.
The Impact of Guided Notes on Post-Secondary Student Achievement: A Meta-Analysis
ERIC Educational Resources Information Center
Larwin, Karen H.; Larwin, David A.
2013-01-01
The common practice of using guided notes in the post-secondary classroom is not fully appreciated or understood. In an effort to add to the existing research about this phenomenon, the current investigation expands on previously published research and one previously published meta-analysis that examined the impact of guided notes on…
[Liver injury in visceral leishmaniasis in children: systematic review].
Medeiros, Francisco Salomao de; Tavares-Neto, Jose; D'Oliveira, Argemiro; Paraná, Raymundo
2007-09-01
Visceral leishmaniasis, or kala-azar, is a parasitic infection caused by Leishmania donovani subspecies. It is transmitted by phlebotomine sandflies and may lead to liver and spleen enlargement as well as immunological impairment. Liver injury simulating acute or chronic viral hepatitis, and even portal hypertension, has sometimes been described. This liver injury makes the differential diagnosis between kala-azar and other liver diseases difficult in endemic regions. To define and clarify the spectrum of liver injury described in published case reports. Systematic review of published data on kala-azar and liver injury using the following databases: LILACS, MEDLINE and EMBASE. Only papers published in French, English, Portuguese and Spanish were taken into consideration. The procedures for systematic review recommended by the NHS Centre for Reviews and Dissemination, University of Cork, were adopted. The paper quality classification was based on the number of reported variables previously defined in our study. Only 11/28 (55%) publications were included in our analysis because they contained the minimal required data. Acute and chronic liver disease were well documented in these articles. Serum albumin and prothrombin time were associated with severity of liver disease (P < .05). Liver involvement, even when severe, may occur at the beginning of the disease. Kala-azar should be considered in the differential diagnosis of cholestasis, acute and chronic liver injury, and portal hypertension in children.
NASA Astrophysics Data System (ADS)
Kim, H.; McIntyre, P. C.
2002-11-01
Among several metal silicate candidates for high permittivity gate dielectric applications, the mixing thermodynamics of the ZrO2-SiO2 system were analyzed, based on previously published experimental phase diagrams. The driving force for spinodal decomposition was investigated in an amorphous silicate that was treated as a supercooled liquid solution. A subregular model was used for the excess free energy of mixing of the liquid, and measured invariant points were adopted for the calculations. The resulting simulated ZrO2-SiO2 phase diagram matched the experimental results reasonably well and indicated that a driving force exists for amorphous Zr-silicate compositions between approximately 40 mol % and approximately 90 mol % SiO2 to decompose into a ZrO2-rich phase (approximately 20 mol % SiO2) and a SiO2-rich phase (>98 mol % SiO2) through diffusional phase separation at a temperature of 900 °C. These predictions are consistent with recent experimental reports of phase separation in amorphous Zr-silicate thin films. Other metal-silicate systems were also investigated and composition ranges for phase separation in amorphous Hf, La, and Y silicates were identified from the published bulk phase diagrams. The kinetics of one-dimensional spinodal decomposition normal to the plane of the film were simulated for an initially homogeneous Zr-silicate dielectric layer. We examined the effects that local stresses and the capillary driving force for component segregation to the interface have on the rate of spinodal decomposition in amorphous metal-silicate thin films.
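To make the thermodynamic reasoning concrete, here is a minimal sketch of a subregular (Margules-type) excess free energy of mixing and a numerical search for the spinodal region where d2G/dx2 < 0. The interaction parameters A and B are placeholders chosen only to produce a miscibility gap, not the calibrated ZrO2-SiO2 values.

```python
import numpy as np

R = 8.314  # J/(mol K)

def mixing_free_energy(x, T, A, B):
    """Molar Gibbs free energy of mixing for a binary liquid with a
    subregular excess term G_ex = x(1-x)[A + B(1-2x)]; A, B in J/mol."""
    x = np.clip(x, 1e-9, 1 - 1e-9)
    g_ideal = R * T * (x * np.log(x) + (1 - x) * np.log(1 - x))
    g_excess = x * (1 - x) * (A + B * (1 - 2 * x))
    return g_ideal + g_excess

def spinodal_range(T, A, B, n=2001):
    """Composition interval where d2G/dx2 < 0 (unstable to spinodal
    decomposition), located numerically."""
    x = np.linspace(0.001, 0.999, n)
    g = mixing_free_energy(x, T, A, B)
    d2g = np.gradient(np.gradient(g, x), x)
    unstable = x[d2g < 0]
    return (unstable.min(), unstable.max()) if unstable.size else None

# Placeholder interaction parameters; 1173 K corresponds to the 900 °C quoted above.
print(spinodal_range(T=1173.0, A=60e3, B=10e3))
```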
Zheng, Manxu; Zou, Zhenmin; Bartolo, Paulo Jorge Da Silva; Peach, Chris; Ren, Lei
2017-02-01
The human shoulder is a complicated musculoskeletal structure and is a perfect compromise between mobility and stability. The objective of this paper is to provide a thorough review of previous finite element (FE) studies in biomechanics of the human shoulder complex. The FE studies investigating shoulder biomechanics are reviewed according to the physiological and clinical problems addressed: glenohumeral joint stability, rotator cuff tears, joint capsular and labral defects and shoulder arthroplasty. The major findings, limitations, potential clinical applications and modelling techniques of those FE studies are critically discussed. The main challenges faced in order to accurately represent the realistic physiological functions of the shoulder mechanism in FE simulations involve (1) subject-specific representation of the anisotropic nonhomogeneous material properties of the shoulder tissues in both healthy and pathological conditions; (2) definition of boundary and loading conditions based on individualised physiological data; (3) more comprehensive modelling describing the whole shoulder complex including appropriate three-dimensional (3D) representation of all major shoulder hard tissues and soft tissues and their delicate interactions; (4) rigorous in vivo experimental validation of FE simulation results. Fully validated shoulder FE models would greatly enhance our understanding of the aetiology of shoulder disorders, and hence facilitate the development of more efficient clinical diagnoses, non-surgical and surgical treatments, as well as shoulder orthotics and prosthetics. © 2016 The Authors. International Journal for Numerical Methods in Biomedical Engineering published by John Wiley & Sons Ltd.
Development and validation of age-dependent FE human models of a mid-sized male thorax.
El-Jawahri, Raed E; Laituri, Tony R; Ruan, Jesse S; Rouhana, Stephen W; Barbat, Saeed D
2010-11-01
The increasing number of people over 65 years old (YO) is an important research topic in the area of impact biomechanics, and finite element (FE) modeling can provide valuable support for related research. There were three objectives of this study: (1) Estimation of the representative age of the previously-documented Ford Human Body Model (FHBM) -- an FE model which approximates the geometry and mass of a mid-sized male, (2) Development of FE models representing two additional ages, and (3) Validation of the resulting three models to the extent possible with respect to available physical tests. Specifically, the geometry of the model was compared to published data relating rib angles to age, and the mechanical properties of different simulated tissues were compared to a number of published aging functions. The FHBM was determined to represent a 53-59 YO mid-sized male. The aforementioned aging functions were used to develop FE models representing two additional ages: 35 and 75 YO. The rib model was validated against human rib specimens and whole rib tests, under different loading conditions, with and without modeled fracture. In addition, the resulting three age-dependent models were validated by simulating cadaveric tests of blunt and sled impacts. The responses of the models, in general, were within the cadaveric response corridors. When compared to peak responses from individual cadavers similar in size and age to the age-dependent models, some responses were within one standard deviation of the test data. All the other responses, but one, were within two standard deviations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fisher, Jeffrey W., E-mail: jeffrey.fisher@fda.hhs.gov; Twaddle, Nathan C.; Vanlandingham, Michelle
A physiologically based pharmacokinetic (PBPK) model was developed for bisphenol A (BPA) in adult rhesus monkeys using intravenous (iv) and oral bolus doses of 100 μg d6-BPA/kg. This calibrated PBPK adult monkey model for BPA was then evaluated against published monkey kinetic studies with BPA. Using two versions of the adult monkey model, based on monkey BPA kinetic data from two previously published studies, the aglycone BPA pharmacokinetics were simulated for human oral ingestion of 5 mg d16-BPA per person (Voelkel et al., 2002). Voelkel et al. were unable to detect the aglycone BPA in plasma but were able to detect BPA metabolites. These human model predictions of the aglycone BPA in plasma were then compared to previously published PBPK model predictions obtained by simulating the Voelkel et al. kinetic study. Our human BPA model, using two parameter sets reflecting the two adult monkey studies, predicted lower aglycone levels in human serum than the previous human BPA PBPK model. BPA was metabolized at all ages of monkey (PND 5 to adult) by the gut wall and liver. However, the hepatic metabolism of BPA and the systemic clearance of its phase II metabolites appear to be slower in younger monkeys than in adults. The use of the current non-human primate BPA model parameters provides more confidence in predicting aglycone BPA serum levels in humans after oral ingestion of BPA. Highlights: A bisphenol A (BPA) PBPK model for the infant and adult monkey was constructed. The hepatic metabolic rate of BPA increased with age of the monkey. The systemic clearance rate of metabolites increased with age of the monkey. Gut wall metabolism of orally administered BPA was substantial across all ages of monkeys. Aglycone BPA plasma concentrations were predicted in humans given oral doses of deuterated BPA.
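As a rough illustration of the compartmental logic described above (oral dose, gut-wall and hepatic first-pass conjugation, and a central compartment for aglycone BPA), here is a minimal ODE sketch. All rate constants, fractions, the dose scaling, and the volume of distribution are placeholder assumptions for illustration, not the published model's parameters.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (not the published model's values)
ka   = 1.5    # 1/h, absorption from the gut lumen
f_gw = 0.9    # fraction conjugated in the gut wall (first pass)
f_h  = 0.8    # fraction of the remainder conjugated in the liver (first pass)
ke   = 2.0    # 1/h, systemic conjugation/elimination of the aglycone
kc   = 0.3    # 1/h, clearance of the conjugate
Vd   = 20.0   # L, apparent volume of the aglycone compartment

def rhs(t, y):
    gut, aglycone, conjugate = y
    absorbed = ka * gut
    to_blood = absorbed * (1 - f_gw) * (1 - f_h)     # survives both first passes
    to_conj  = absorbed - to_blood + ke * aglycone   # conjugated now or later
    return [-absorbed,
            to_blood - ke * aglycone,
            to_conj - kc * conjugate]

dose_ug = 100.0 * 70.0          # 100 ug/kg for a 70 kg subject (illustrative)
sol = solve_ivp(rhs, (0, 24), [dose_ug, 0.0, 0.0], dense_output=True)
t = np.linspace(0, 24, 97)
aglycone_conc = sol.sol(t)[1] / Vd   # ug/L in the central compartment
print(f"peak aglycone concentration ~ {aglycone_conc.max():.3f} ug/L")
```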
Publishing and sharing of hydrologic models through WaterHUB
NASA Astrophysics Data System (ADS)
Merwade, V.; Ruddell, B. L.; Song, C.; Zhao, L.; Kim, J.; Assi, A.
2011-12-01
Most hydrologists use hydrologic models to simulate hydrologic processes and to understand hydrologic pathways and fluxes for research, decision making, and engineering design. Once these tasks are complete, including publication of results, the models generally are not published or made available to the public for further use and improvement. Although publication or sharing of models is not required for journal publications, sharing of models may open doors for new collaborations and avoids duplication of effort if other researchers are interested in simulating a particular watershed for which a model already exists. For researchers who are interested in sharing models, there are limited avenues for publishing their models to the wider community. Towards filling this gap, a prototype cyberinfrastructure (CI), called WaterHUB, has been developed for sharing hydrologic data and modeling tools in an interactive environment. To test the utility of WaterHUB for sharing hydrologic models, a system to publish and share SWAT (Soil and Water Assessment Tool) models has been developed. Users can use WaterHUB to search for and download existing SWAT models, and also to upload new SWAT models. Metadata such as the name of the watershed, the person or agency who developed the model, the simulation period, time step, and list of calibrated parameters are also published with each model.
Sequenza: allele-specific copy number and mutation profiles from tumor sequencing data.
Favero, F; Joshi, T; Marquard, A M; Birkbak, N J; Krzystanek, M; Li, Q; Szallasi, Z; Eklund, A C
2015-01-01
Exome or whole-genome deep sequencing of tumor DNA along with paired normal DNA can potentially provide a detailed picture of the somatic mutations that characterize the tumor. However, analysis of such sequence data can be complicated by the presence of normal cells in the tumor specimen, by intratumor heterogeneity, and by the sheer size of the raw data. In particular, determination of copy number variations from exome sequencing data alone has proven difficult; thus, single nucleotide polymorphism (SNP) arrays have often been used for this task. Recently, algorithms to estimate absolute, but not allele-specific, copy number profiles from tumor sequencing data have been described. We developed Sequenza, a software package that uses paired tumor-normal DNA sequencing data to estimate tumor cellularity and ploidy, and to calculate allele-specific copy number profiles and mutation profiles. We applied Sequenza, as well as two previously published algorithms, to exome sequence data from 30 tumors from The Cancer Genome Atlas. We assessed the performance of these algorithms by comparing their results with those generated using matched SNP arrays and processed by the allele-specific copy number analysis of tumors (ASCAT) algorithm. Comparison between Sequenza/exome and SNP/ASCAT revealed strong correlation in cellularity (Pearson's r = 0.90) and ploidy estimates (r = 0.42, or r = 0.94 after manual inspection of alternative solutions). This performance was noticeably superior to that of the previously published algorithms. In addition, in artificial data simulating normal-tumor admixtures, Sequenza detected the correct ploidy in samples with tumor content as low as 30%. The agreement between Sequenza and SNP array-based copy number profiles suggests that exome sequencing alone is sufficient not only for identifying small-scale mutations but also for estimating cellularity and inferring DNA copy number aberrations. © The Author 2014. Published by Oxford University Press on behalf of the European Society for Medical Oncology.
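The core quantities such an approach works with can be sketched in a few lines: the tumor/normal depth ratio and the B-allele frequency (BAF) observed at heterozygous SNPs, and their expected values for a candidate cellularity, ploidy, and allele-specific copy-number state (standard mixture formulas, assuming a diploid normal). This is a hedged illustration of the underlying quantities, not the Sequenza implementation.

```python
import numpy as np

def observed_signals(tumor_depth, normal_depth, tumor_b_reads):
    """Per-position depth ratio and B-allele frequency from paired counts."""
    depth_ratio = tumor_depth / normal_depth
    depth_ratio /= np.median(depth_ratio)          # crude library-size normalisation
    baf = tumor_b_reads / tumor_depth
    return depth_ratio, baf

def expected_signals(cellularity, ploidy, cn_total, cn_b):
    """Expected depth ratio and BAF for a segment with total copy number
    cn_total and B-allele copy number cn_b, given tumor cellularity and
    average tumor ploidy (normal assumed diploid)."""
    avg_cn = cellularity * ploidy + (1 - cellularity) * 2.0
    ratio = (cellularity * cn_total + (1 - cellularity) * 2.0) / avg_cn
    baf = (cellularity * cn_b + (1 - cellularity) * 1.0) / (
          cellularity * cn_total + (1 - cellularity) * 2.0)
    return ratio, baf

# Example: 60% cellular tumor, ploidy 2, one-copy loss retaining a single allele
print(expected_signals(cellularity=0.6, ploidy=2.0, cn_total=1, cn_b=0))
```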
Business Simulations in Language Teaching.
ERIC Educational Resources Information Center
Westerfield, Kay J.; And Others
This paper describes a pilot project, conducted within the American English Institute at the University of Oregon, on the use of a published business-oriented management simulation in English language training for university-bound international students. The management game simulated competition among a group of manufacturing companies to acquire…
Oxidation Mechanisms of Toluene and Benzene
NASA Technical Reports Server (NTRS)
Bittker, David A.
1995-01-01
An expanded and improved version of a previously published benzene oxidation mechanism is presented and shown to model published experimental data fairly successfully. This benzene submodel is coupled to a modified version of a toluene oxidation submodel from the recent literature. This complete mechanism is shown to successfully model published experimental toluene oxidation data for a highly mixed flow reactor and for higher temperature ignition delay times in a shock tube. A comprehensive sensitivity analysis showing the most important reactions is presented for both the benzene and toluene reacting systems. The NASA Lewis toluene mechanism's modeling capability is found to be equivalent to that of the previously published mechanism which contains a somewhat different benzene submodel.
Payload crew training complex simulation engineer's handbook
NASA Technical Reports Server (NTRS)
Shipman, D. L.
1984-01-01
The Simulation Engineer's Handbook is a guide for new engineers assigned to Experiment Simulation and a reference for engineers previously assigned. The experiment simulation process, development of experiment simulator requirements, development of experiment simulator hardware and software, and the verification of experiment simulators are discussed. The training required for experiment simulation is extensive and is only referenced in the handbook.
Meyerson, Paul; Tryon, Warren W
2003-11-01
This study evaluated the psychometric equivalency of Web-based research. The Sexual Boredom Scale was presented via the World-Wide Web along with five additional scales used to validate it. A subset of 533 participants that matched a previously published sample (Watt & Ewing, 1996) on age, gender, and race was identified. An 8 x 8 correlation matrix from the matched Internet sample was compared via structural equation modeling with a similar 8 x 8 correlation matrix from the previously published study. The Internet and previously published samples were psychometrically equivalent. Coefficient alpha values calculated on the matched Internet sample yielded reliability coefficients almost identical to those for the previously published sample. Factors such as computer administration and uncontrollable administration settings did not appear to affect the results. Demographic data indicated an overrepresentation of males by about 6% and Caucasians by about 13% relative to the U.S. Census (2000). A total of 2,230 participants were obtained in about 8 months without remuneration. These results suggest that data collection on the Web is (1) reliable, (2) valid, (3) reasonably representative, (4) cost effective, and (5) efficient.
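The reliability comparison mentioned above rests on coefficient (Cronbach's) alpha; here is a minimal sketch of that computation on a respondents-by-items score matrix, demonstrated on synthetic item scores (the data and the noise level are illustrative assumptions).

```python
import numpy as np

def cronbach_alpha(scores):
    """Coefficient alpha for a (respondents x items) array of item scores."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1.0)) * (1.0 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))
items = latent + 0.8 * rng.normal(size=(200, 10))   # 10 noisy indicators of one trait
print(round(cronbach_alpha(items), 3))
```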
Toltz, Allison; Hoesl, Michaela; Schuemann, Jan; Seuntjens, Jan; Lu, Hsiao-Ming; Paganetti, Harald
2017-11-01
Our group previously introduced an in vivo proton range verification methodology in which a silicon diode array system is used to correlate the dose rate profile per range modulation wheel cycle of the detector signal to the water-equivalent path length (WEPL) for passively scattered proton beam delivery. The implementation of this system requires a set of calibration data to establish a beam-specific response-to-WEPL fit for the selected 'scout' beam (a 1 cm overshoot of the predicted detector depth with a dose of 4 cGy) in water-equivalent plastic. This necessitates a separate set of measurements for every 'scout' beam that may be appropriate to the clinical case. The current study demonstrates the use of Monte Carlo simulations for calibration of the time-resolved diode dosimetry technique. Measurements for three 'scout' beams were compared against the detector response simulated with Monte Carlo methods using the Tool for Particle Simulation (TOPAS). The 'scout' beams were then applied in the simulation environment to simulated water-equivalent plastic, a CT of water-equivalent plastic, and a patient CT data set to assess uncertainty. The simulated detector response in water-equivalent plastic was validated against measurements for 'scout' spread-out Bragg peaks of range 10 cm, 15 cm, and 21 cm (168 MeV, 177 MeV, and 210 MeV) to within 3.4 mm for all beams, and to within 1 mm in the region where the detector is expected to lie. Feasibility has been shown for performing the calibration of the detector response for three 'scout' beams through simulation for the time-resolved diode dosimetry technique in passive scattered proton delivery. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
Improved Estimation of Orbits and Physical Properties of Objects in GEO
NASA Astrophysics Data System (ADS)
Bradley, B.; Axelrad, P.
2013-09-01
Orbital debris is a major concern for satellite operators, both commercial and military. Debris in the geosynchronous (GEO) belt is of particular concern because this unique region is such a valuable, limited resource, and, from the ground, we cannot reliably track and characterize GEO objects smaller than 1 meter in diameter. Space-based space surveillance (SBSS) is required to observe GEO objects without weather restriction and with improved viewing geometry. SBSS satellites have thus far been placed in Sun-synchronous orbits. This paper investigates the benefits to GEO orbit determination (including the estimation of mass, area, and shape) that arise from placing observing satellites in geosynchronous transfer orbit (GTO) and a sub-GEO orbit. Recently, several papers have reported on simulation studies to estimate orbits and physical properties; however, these studies use simulated objects and ground-based measurements, often with dense and long data arcs. While this type of simulation provides valuable insight into what is possible as far as state estimation goes, it is not a very realistic observing scenario and thus may not yield meaningful accuracies. Our research improves upon simulations published to date by utilizing publicly available ephemerides for the WAAS satellites (Anik F1R and Galaxy 15), accurate at the meter level. By simulating and deliberately degrading right ascension and declination observations, consistent with these ephemerides, a realistic assessment of the achievable orbit determination accuracy using GTO and sub-GEO SBSS platforms is performed. Our results show that orbit accuracy is significantly improved as compared to a Sun-synchronous platform. Physical property estimation is also performed using simulated astrometric and photometric data taken from GTO and sub-GEO sensors. Simulations of SBSS-only as well as combined SBSS and ground-based observation tracks are used to study the improvement in area, mass, and shape estimation gained by the proposed systems. Again, our work improves upon previous research by investigating realistic observation scheduling scenarios to gain insight into achievable accuracies.
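The "simulate then deliberately degrade" step described above can be sketched as adding zero-mean Gaussian noise (and, optionally, a constant bias) to right ascension and declination angles derived from a reference ephemeris. The noise level, bias, and function names below are illustrative assumptions.

```python
import numpy as np

def degrade_angles(ra_rad, dec_rad, sigma_arcsec=1.0, bias_arcsec=0.0, seed=0):
    """Add Gaussian noise (and an optional constant bias) to simulated
    right-ascension/declination observations. Noise levels are illustrative."""
    rng = np.random.default_rng(seed)
    arcsec = np.pi / (180.0 * 3600.0)
    noise = rng.normal(bias_arcsec, sigma_arcsec, size=(2, len(ra_rad))) * arcsec
    ra_obs = ra_rad + noise[0] / np.cos(dec_rad)   # RA noise scaled by cos(dec)
    dec_obs = dec_rad + noise[1]
    return ra_obs, dec_obs

# Example: 1-arcsecond noise on a handful of simulated angles (radians)
ra, dec = np.radians([10.0, 10.1, 10.2]), np.radians([0.5, 0.5, 0.5])
print(degrade_angles(ra, dec, sigma_arcsec=1.0))
```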
Winkler-Schwartz, Alexander; Bajunaid, Khalid; Mullah, Muhammad A S; Marwa, Ibrahim; Alotaibi, Fahad E; Fares, Jawad; Baggiani, Marta; Azarnoush, Hamed; Zharni, Gmaan Al; Christie, Sommer; Sabbagh, Abdulrahman J; Werthner, Penny; Del Maestro, Rolando F
Current selection methods for neurosurgical residents fail to include objective measurements of bimanual psychomotor performance. Advancements in computer-based simulation provide opportunities to assess cognitive and psychomotor skills in surgically naive populations during complex simulated neurosurgical tasks in risk-free environments. This pilot study was designed to answer 3 questions: (1) What are the differences in bimanual psychomotor performance among neurosurgical residency applicants using NeuroTouch? (2) Are there exceptionally skilled medical students in the applicant cohort? and (3) Is there an influence of previous surgical exposure on surgical performance? Participants were instructed to remove 3 simulated brain tumors with identical visual appearance, stiffness, and random bleeding points. Validated tier 1, tier 2, and advanced tier 2 metrics were used to assess bimanual psychomotor performance. Demographic data included weeks of neurosurgical elective and prior operative exposure. This pilot study was carried out at the McGill Neurosurgical Simulation Research and Training Center immediately following neurosurgical residency interviews at McGill University, Montreal, Canada. All 17 medical students interviewed were asked to participate; 16 agreed. Performances clustered into definable top, middle, and bottom groups, with significant differences for all metrics. Increased time spent playing music, higher applicant self-evaluated technical skills, high self-ratings of confidence, and an increased number of skin closures statistically influenced performance on univariate analysis. In multivariate analysis, a trend was seen for both increased self-rated operating room confidence and increased weeks of neurosurgical exposure to be associated with increased blood loss. Simulation technology identifies neurosurgical residency applicants with differing levels of technical ability. These results provide information for longitudinal studies being developed on the acquisition, development, and maintenance of psychomotor skills. Customized training programs that maximize individual residents' bimanual psychomotor training, based on continuously updated and validated metrics from virtual reality simulation studies, should be explored. Copyright © 2016 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
A model to estimate cost-savings in diabetic foot ulcer prevention efforts.
Barshes, Neal R; Saedi, Samira; Wrobel, James; Kougias, Panos; Kundakcioglu, O Erhun; Armstrong, David G
2017-04-01
Sustained efforts at preventing diabetic foot ulcers (DFUs) and subsequent leg amputations are sporadic in most health care systems despite the high costs associated with such complications. We sought to estimate effectiveness targets at which cost-savings (i.e., improved health outcomes at decreased total costs) might occur. A Markov model with probabilistic sensitivity analyses was used to simulate the five-year survival, incidence of foot complications, and total health care costs in a hypothetical population of 100,000 people with diabetes. Clinical event and cost estimates were obtained from previously published trials and studies. A population without previous DFU but with 17% neuropathy and 11% peripheral artery disease (PAD) prevalence was assumed. Primary prevention (PP) was defined as reducing initial DFU incidence. PP was more than 90% likely to provide cost-savings when annual prevention costs are less than $50/person and/or annual DFU incidence is reduced by at least 25%. Efforts directed at patients with diabetes who were at moderate or high risk for DFUs were very likely to provide cost-savings if DFU incidence was decreased by at least 10% and/or the cost was less than $150 per person per year. Low-cost DFU primary prevention efforts producing even small decreases in DFU incidence may provide the best opportunity for cost-savings, especially if focused on patients with neuropathy and/or PAD. Mobile phone-based reminders, self-identification of risk factors (e.g., the Ipswich touch test), and written brochures may be among such low-cost interventions that should be investigated for cost-savings potential. Published by Elsevier Inc.
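The following is a minimal sketch of the kind of Markov cohort calculation described above: a five-state annual cycle comparing total five-year costs with and without a primary-prevention effect on DFU incidence. All transition probabilities, state costs, incidence values, and the prevention cost are placeholder assumptions, not the published estimates.

```python
import numpy as np

# States: 0 no ulcer, 1 active DFU, 2 healed/history of DFU, 3 amputation, 4 dead
def transition_matrix(annual_dfu_incidence):
    """Annual transition probabilities (placeholder values for illustration)."""
    p = annual_dfu_incidence
    return np.array([
        [1 - p - 0.02, p,    0.00, 0.00, 0.02],
        [0.00,         0.25, 0.55, 0.10, 0.10],
        [0.00,         0.15, 0.80, 0.00, 0.05],
        [0.00,         0.00, 0.00, 0.90, 0.10],
        [0.00,         0.00, 0.00, 0.00, 1.00],
    ])

annual_cost = np.array([0.0, 12000.0, 2000.0, 18000.0, 0.0])   # per state, USD

def five_year_cost(incidence, prevention_cost_per_person=0.0, n=100_000):
    cohort = np.zeros(5)
    cohort[0] = n
    total = 0.0
    T = transition_matrix(incidence)
    for _ in range(5):
        # state costs plus the prevention cost for everyone still alive
        total += cohort @ annual_cost + cohort[:4].sum() * prevention_cost_per_person
        cohort = cohort @ T
    return total

base = five_year_cost(0.02)
prevented = five_year_cost(0.02 * 0.75, prevention_cost_per_person=50.0)
print(f"5-year saving per 100k people: ${base - prevented:,.0f}")
```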
GPI Spectra of HR 8799 c, d, and e from 1.5 to 2.4 μm with KLIP Forward Modeling
NASA Astrophysics Data System (ADS)
Greenbaum, Alexandra Z.; Pueyo, Laurent; Ruffio, Jean-Baptiste; Wang, Jason J.; De Rosa, Robert J.; Aguilar, Jonathan; Rameau, Julien; Barman, Travis; Marois, Christian; Marley, Mark S.; Konopacky, Quinn; Rajan, Abhijith; Macintosh, Bruce; Ansdell, Megan; Arriaga, Pauline; Bailey, Vanessa P.; Bulger, Joanna; Burrows, Adam S.; Chilcote, Jeffrey; Cotten, Tara; Doyon, Rene; Duchêne, Gaspard; Fitzgerald, Michael P.; Follette, Katherine B.; Gerard, Benjamin; Goodsell, Stephen J.; Graham, James R.; Hibon, Pascale; Hung, Li-Wei; Ingraham, Patrick; Kalas, Paul; Larkin, James E.; Maire, Jérôme; Marchis, Franck; Metchev, Stanimir; Millar-Blanchaer, Maxwell A.; Nielsen, Eric L.; Norton, Andrew; Oppenheimer, Rebecca; Palmer, David; Patience, Jennifer; Perrin, Marshall D.; Poyneer, Lisa; Rantakyrö, Fredrik T.; Savransky, Dmitry; Schneider, Adam C.; Sivaramakrishnan, Anand; Song, Inseok; Soummer, Rémi; Thomas, Sandrine; Wallace, J. Kent; Ward-Duong, Kimberly; Wiktorowicz, Sloane; Wolff, Schuyler
2018-06-01
We explore KLIP forward modeling spectral extraction on Gemini Planet Imager coronagraphic data of HR 8799, using PyKLIP, and show algorithm stability with varying KLIP parameters. We report new and re-reduced spectrophotometry of HR 8799 c, d, and e in the H and K bands. We discuss a strategy for choosing optimal KLIP PSF subtraction parameters by injecting simulated sources and recovering them over a range of parameters. The K1/K2 spectra for HR 8799 c and d are similar to previously published results from the same data set. We also present a K-band spectrum of HR 8799 e for the first time and show that our H-band spectra agree well with previously published spectra from the VLT/SPHERE instrument. We show that HR 8799 c and d show significant differences in their H and K spectra, but do not find any conclusive differences between d and e, nor between c and e, likely due to large error bars in the recovered spectrum of e. Compared to M-, L-, and T-type field brown dwarfs, all three planets are most consistent with mid- and late-L spectral types. All objects are consistent with low gravity, but a lack of standard spectra for low gravity limits the ability to fit the best spectral type. We discuss how dedicated modeling efforts can better fit the HR 8799 planets' near-IR flux, as well as how differences between the properties of these planets can be further explored.
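The analysis above relies on PyKLIP; for readers unfamiliar with the method, here is a minimal, self-contained sketch of the core KLIP step, projecting a flattened science frame onto the first K Karhunen-Loeve modes of a reference PSF library and subtracting the projection. The array shapes, random test data, and the choice of 5 modes are illustrative assumptions, not the PyKLIP API or the paper's reduction parameters.

```python
import numpy as np

def klip_subtract(science, references, num_modes):
    """Project a flattened science frame onto the first `num_modes`
    Karhunen-Loeve modes of a reference library and subtract the projection.
    science: (npix,), references: (n_refs, npix)."""
    ref = references - references.mean(axis=1, keepdims=True)
    sci = science - science.mean()
    # KL modes via SVD of the mean-subtracted reference library
    _, _, vt = np.linalg.svd(ref, full_matrices=False)
    kl = vt[:num_modes]                       # (num_modes, npix), orthonormal
    model = kl.T @ (kl @ sci)                 # projection of the science frame
    return sci - model

rng = np.random.default_rng(1)
refs = rng.normal(size=(30, 64 * 64))                     # toy reference library
frame = refs[0] * 0.9 + rng.normal(size=64 * 64) * 0.1    # toy science frame
residual = klip_subtract(frame, refs, num_modes=5)
print(residual.std())
```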
Bae, Jinkun; Chung, Tae Nyoung; Je, Sang Mo
2016-02-12
To assess how the quality of metronome-guided cardiopulmonary resuscitation (CPR) was affected by the chest compression rate familiarised through training before the performance, and to determine a possible mechanism for any effect shown. Prospective crossover trial of simulated, one-person, chest-compression-only CPR. Participants were recruited from a medical school and two paramedic schools of South Korea. 42 senior students of a medical school and two paramedic schools were enrolled but five dropped out due to physical restraints. Senior medical and paramedic students performed 1 min of metronome-guided CPR with chest compressions only at a speed of 120 compressions/min after training for chest compression with three different rates (100, 120 and 140 compressions/min). Friedman's test was used to compare average compression depths based on the different rates used during training. Average compression depths were significantly different according to the rate used in training (p<0.001). A post hoc analysis showed that average compression depths were significantly different between trials after training at a speed of 100 compressions/min and those at speeds of 120 and 140 compressions/min (both p<0.001). The depth of chest compression during metronome-guided CPR is affected by the relative difference between the rate of metronome guidance and the chest compression rate practised in previous training. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
NASA Astrophysics Data System (ADS)
Bernabé, Y.; Wang, Y.; Qi, T.; Li, M.
2016-02-01
The main purpose of this work is to investigate the relationship between passive advection-dispersion and permeability in porous materials presumed to be statistically homogeneous at scales larger than the pore scale but smaller than the reservoir scale. We simulated fluid flow through pipe network realizations with different pipe radius distributions and different levels of connectivity. The flow simulations used periodic boundary conditions, allowing monitoring of the advective motion of solute particles in a large periodic array of identical network realizations. In order to simulate dispersion, we assumed that the solute particles obeyed Taylor dispersion in individual pipes. When a particle entered a pipe, a residence time consistent with local Taylor dispersion was randomly assigned to it. When exiting the pipe, the particle randomly proceeded into one of the pipes connected to the original one according to probabilities proportional to the outgoing volumetric flow in each pipe. For each simulation we tracked the motion of at least 6000 solute particles. The mean fluid velocity was 10^-3 m s^-1, and the distance traveled was on the order of 10 m. Macroscopic dispersion was quantified using the method of moments. Despite differences arising from using different types of lattices (simple cubic, body-centered cubic, and face-centered cubic), a number of general observations were made. Longitudinal dispersion was at least 1 order of magnitude greater than transverse dispersion, and both strongly increased with decreasing pore connectivity and/or pore size variability. In conditions of variable hydraulic radius and fixed pore connectivity and pore size variability, the simulated dispersivities increased as power laws of the hydraulic radius and, consequently, of permeability, in agreement with previously published experimental results. Based on these observations, we were able to resolve some of the complexity of the relationship between dispersivity and permeability.
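The method of moments mentioned above reduces to tracking how the spatial variance of the particle cloud grows in time: D = (1/2) dσ²/dt along each coordinate, with dispersivity α = D/ū. Below is a hedged sketch of that bookkeeping on synthetic particle positions; the array layout and names are assumptions, not the authors' code.

```python
import numpy as np

def dispersion_from_moments(times, positions, mean_velocity):
    """positions: (n_times, n_particles, 3) particle coordinates with the
    mean flow along x. Returns (D_L, D_T, alpha_L, alpha_T)."""
    var = positions.var(axis=1)                      # spatial variance per time step
    # D = 0.5 * d(variance)/dt, via a least-squares slope over the whole record
    slopes = np.polyfit(times, var, 1)[0]            # one slope per coordinate
    d_long = 0.5 * slopes[0]
    d_trans = 0.5 * slopes[1:].mean()
    return d_long, d_trans, d_long / mean_velocity, d_trans / mean_velocity

# Example with synthetic positions: drift along x plus random spreading
rng = np.random.default_rng(3)
t = np.linspace(0.0, 1e4, 50)                               # s
pos = np.cumsum(rng.normal(0, 1e-2, (50, 6000, 3)), axis=0) # random-walk component
pos[..., 0] += 1e-3 * t[:, None]                            # mean flow along x
print(dispersion_from_moments(t, pos, mean_velocity=1e-3))
```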
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lamarque, J. F.; Bond, Tami C.; Eyring, Veronika
2010-08-11
We present and discuss a new dataset of gridded emissions covering the historical period (1850-2000) in decadal increments at a horizontal resolution of 0.5° in latitude and longitude. The primary purpose of this inventory is to provide consistent gridded emissions of reactive gases and aerosols for use in chemistry model simulations needed by climate models for the Climate Model Intercomparison Program #5 (CMIP5) in support of the Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment report. Our best estimate for the year 2000 inventory represents a combination of existing regional and global inventories to capture the best information available at this point; 40 regions and 12 sectors were used to combine the various sources. The historical reconstruction of each emitted compound, for each region and sector, was then forced to agree with our 2000 estimate, ensuring continuity between past and 2000 emissions. Application of these emissions in two chemistry-climate models is used to test their ability to capture long-term changes in atmospheric ozone, carbon monoxide and aerosol distributions. The simulated long-term change in Northern mid-latitude surface and mid-tropospheric ozone is not quite as rapid as observed. However, stations outside this latitude band show much better agreement in both present-day values and long-term trends. The model simulations consistently underestimate the carbon monoxide trend, while capturing the long-term trend at the Mace Head station. The simulated sulfate and black carbon deposition over Greenland is in very good agreement with the ice-core observations spanning the simulation period. Finally, aerosol optical depth and additional aerosol diagnostics are shown to be in good agreement with previously published estimates.
Hogg, Melissa E; Tam, Vernissia; Zenati, Mazen; Novak, Stephanie; Miller, Jennifer; Zureikat, Amer H; Zeh, Herbert J
Hepatobiliary surgery is a highly complex, low-volume specialty with long learning curves necessary to achieve optimal outcomes. This creates significant challenges in both training and measuring surgical proficiency. We hypothesize that a virtual reality curriculum with mastery-based simulation is a valid tool to train fellows toward operative proficiency. This study evaluates the content and predictive validity of a robotic simulation curriculum as a first step toward developing a comprehensive, proficiency-based pathway. A mastery-based simulation curriculum was performed in a virtual reality environment. A pretest/posttest experimental design used both virtual reality and inanimate environments to evaluate improvement. Participants self-reported previous robotic experience and assessed the curriculum by rating modules based on difficulty and utility. This study was conducted at the University of Pittsburgh Medical Center (Pittsburgh, PA), a tertiary care academic teaching hospital. A total of 17 surgical oncology fellows enrolled in the curriculum, and 16 (94%) completed it. Of the 16 fellows who completed the curriculum, 4 (25%) achieved mastery on all 24 modules; on average, fellows mastered 86% of the modules. Following curriculum completion, individual test scores improved (p < 0.0001). An average of 2.4 attempts was necessary to master each module (range: 1-17). Median time spent completing the curriculum was 4.2 hours (range: 1.1-6.6). A total of 8 fellows (50%) continued practicing modules beyond mastery. Survey results show that "needle driving" and "endowrist 2" modules were perceived as most difficult, although "needle driving" modules were most useful. Overall, 15 (94%) fellows perceived improvement in robotic skills after completing the curriculum. In a cohort of board-certified general surgeons who were novices in robotic surgery, a mastery-based simulation curriculum demonstrated internal validity with overall score improvement. Time to complete the curriculum was manageable. Published by Elsevier Inc.
Role of a plausible nuisance contributor in the declining obesity-mortality risks over time.
Mehta, Tapan; Pajewski, Nicholas M; Keith, Scott W; Fontaine, Kevin; Allison, David B
2016-12-15
Recent analyses of epidemiological data including the National Health and Nutrition Examination Survey (NHANES) have suggested that the harmful effects of obesity may have decreased over calendar time. The shifting BMI distribution over time coupled with the application of fixed broad BMI categories in these analyses could be a plausible "nuisance contributor" to this observed change in the obesity-associated mortality over calendar time. To evaluate the extent to which observed temporal changes in the obesity-mortality association may be due to a shifting population distribution for body mass index (BMI), coupled with analyses based on static, broad BMI categories. Simulations were conducted using data from NHANES I and III linked with mortality data. Data from NHANES I were used to fit a "true" model treating BMI as a continuous variable. Coefficients estimated from this model were used to simulate mortality for participants in NHANES III. Hence, the population-level association between BMI and mortality in NHANES III was fixed to be identical to the association estimated in NHANES I. Hazard ratios (HRs) for obesity categories based on BMI for NHANES III with simulated mortality data were compared to the corresponding estimated HRs from NHANES I. Change in hazard ratios for simulated data in NHANES III compared to observed estimates from NHANES I. On average, hazard ratios for NHANES III based on simulated mortality data were 29.3% lower than the estimates from NHANES I using observed mortality follow-up. This reduction accounted for roughly three-fourths of the apparent decrease in the obesity-mortality association observed in a previous analysis of these data. Some of the apparent diminution of the association between obesity and mortality may be an artifact of treating BMI as a categorical variable. Copyright © 2016. Published by Elsevier Inc.
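A minimal sketch of the design logic described above: simulate survival with the same continuous-BMI hazard in two cohorts whose BMI distributions differ, then compare the apparent effect of a fixed "obese" category. Everything below (rates, effect size beta, BMI means, and the use of crude rate ratios instead of Cox hazard ratios) is an illustrative assumption, not the NHANES analysis.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_cohort(n, bmi_mean, beta=0.05, base_rate=0.01, follow_up=15.0):
    """Exponential survival with a log-hazard linear in continuous BMI."""
    bmi = rng.normal(bmi_mean, 5.0, n)
    rate = base_rate * np.exp(beta * (bmi - 25.0))
    time = rng.exponential(1.0 / rate)
    died = time < follow_up
    time = np.minimum(time, follow_up)
    return bmi, time, died

def crude_rate_ratio(bmi, time, died, cut=30.0):
    """Deaths per person-year in the fixed 'obese' category vs the rest."""
    obese = bmi >= cut
    r_obese = died[obese].sum() / time[obese].sum()
    r_other = died[~obese].sum() / time[~obese].sum()
    return r_obese / r_other

# Same continuous-BMI effect, but the BMI distribution shifts upward over time,
# so the fixed-category contrast (and apparent "obesity effect") shrinks.
for label, mean_bmi in [("earlier survey", 25.0), ("later survey", 28.0)]:
    print(label, round(crude_rate_ratio(*simulate_cohort(200_000, mean_bmi)), 2))
```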
Brown, Ross; Rasmussen, Rune; Baldwin, Ian; Wyeth, Peta
2012-08-01
Nursing training for an Intensive Care Unit (ICU) is a resource-intensive process. High demands are made on staff, students and physical resources. Interactive, 3D computer simulations, known as virtual worlds, are increasingly being used to supplement training regimes in the health sciences, especially in areas such as complex hospital ward processes. Such worlds have been found to be very useful in maximising the utilisation of training resources. Our aim is to design and develop a novel virtual world application for teaching and training Intensive Care nurses in the approach and method for shift handover, providing an independent but rigorous approach to teaching these important skills. In this paper we present a virtual world simulator for students to practice key steps in handing over the 24/7 care requirements of intensive care patients during the first hour of a commencing shift. We describe the modelling process to provide a convincing interactive simulation of the handover steps involved. The virtual world provides a practice tool for students to test their analytical skills with scenarios previously provided by simple physical simulations and live, on-the-job training. Additional educational benefits include facilitation of remote learning, high flexibility in study hours and the automatic recording of a reviewable log from the session. To the best of our knowledge, we believe this is a novel and original application of virtual worlds to an ICU handover process. The major outcome of the work was a virtual world environment for training nurses in the shift handover process, designed and developed for use by postgraduate nurses in training. Copyright © 2012 Australian College of Critical Care Nurses Ltd. Published by Elsevier Ltd. All rights reserved.
Global land-atmosphere coupling associated with cold climate processes
NASA Astrophysics Data System (ADS)
Dutra, Emanuel
This dissertation constitutes an assessment of the role of cold processes, associated with snow cover, in controlling land-atmosphere coupling. The work was based on model simulations, including offline simulations with the land surface model HTESSEL and coupled atmosphere simulations with the EC-EARTH climate model. A revised snow scheme was developed and tested in HTESSEL and EC-EARTH. The snow scheme is currently operational in the European Centre for Medium-Range Weather Forecasts Integrated Forecast System and in the default configuration of EC-EARTH. The improved representation of snowpack dynamics in HTESSEL resulted in improvements in the near-surface temperature simulations of EC-EARTH. The new snow scheme was complemented with an optional multi-layer version that showed its potential in modelling thick snowpacks. A key process was snow thermal insulation, which led to significant improvements in the surface water and energy balance components. Similar findings were observed when coupling the snow scheme to lake ice, where the simulated lake ice duration was significantly improved. An assessment of snow cover sensitivity to horizontal resolution, parameterizations and atmospheric forcing within HTESSEL highlighted the greater importance of atmospheric forcing accuracy and snowpack parameterizations relative to horizontal resolution over flat regions. A set of experiments with and without free snow evolution was carried out with EC-EARTH to assess the impact of the interannual variability of snow cover on near-surface and soil temperatures. It was found that snow cover interannual variability explained up to 60% of the total interannual variability of near-surface temperature over snow-covered regions. Although these findings are model dependent, the results showed consistency with previously published work. Furthermore, the detailed validation of the snow dynamics simulations in HTESSEL and EC-EARTH guarantees consistency of the results.
A Computational Model of Liver Iron Metabolism
Mitchell, Simon; Mendes, Pedro
2013-01-01
Iron is essential for all known life due to its redox properties; however, these same properties can also lead to its toxicity in overload through the production of reactive oxygen species. Robust systemic and cellular control are required to maintain safe levels of iron, and the liver seems to be where this regulation is mainly located. Iron misregulation is implicated in many diseases, and as our understanding of iron metabolism improves, the list of iron-related disorders grows. Recent developments have resulted in greater knowledge of the fate of iron in the body and have led to a detailed map of its metabolism; however, a quantitative understanding at the systems level of how its components interact to produce tight regulation remains elusive. A mechanistic computational model of human liver iron metabolism, which includes the core regulatory components, is presented here. It was constructed based on known mechanisms of regulation and on their kinetic properties, obtained from several publications. The model was then quantitatively validated by comparing its results with previously published physiological data, and it is able to reproduce multiple experimental findings. A time course simulation following an oral dose of iron was compared to a clinical time course study and the simulation was found to recreate the dynamics and time scale of the systems response to iron challenge. A disease state simulation of haemochromatosis was created by altering a single reaction parameter that mimics a human haemochromatosis gene (HFE) mutation. The simulation provides a quantitative understanding of the liver iron overload that arises in this disease. This model supports and supplements understanding of the role of the liver as an iron sensor and provides a framework for further modelling, including simulations to identify valuable drug targets and design of experiments to improve further our knowledge of this system. PMID:24244122
How tall buildings affect turbulent air flows and dispersion of pollution within a neighbourhood.
Aristodemou, Elsa; Boganegra, Luz Maria; Mottet, Laetitia; Pavlidis, Dimitrios; Constantinou, Achilleas; Pain, Christopher; Robins, Alan; ApSimon, Helen
2018-02-01
The city of London, UK, has seen in recent years an increase in the number of high-rise/multi-storey buildings ("skyscrapers") with roof heights reaching 150 m and more, with the Shard being a prime example with a height of ∼310 m. This changing cityscape, together with recent plans of local authorities to introduce Combined Heat and Power Plants (CHP), led to a detailed study in which CFD simulations and wind tunnel experiments were carried out to assess the effect of such high-rise buildings on the dispersion of air pollution in their vicinity. A new, open-source simulator, FLUIDITY, which incorporates the Large Eddy Simulation (LES) method, was implemented; the simulated results were subsequently validated against experimental measurements from the EnFlo wind tunnel. The novelty of the LES methodology within FLUIDITY is based on the combination of an adaptive, unstructured mesh with an anisotropic eddy-viscosity tensor for the sub-grid scales. The simulated normalised mean concentration results were compared to the corresponding wind tunnel measurements, showing good correlations for most detector locations, with differences ranging from 3% to 37%. The validation procedure was followed by the simulation of two further hypothetical scenarios, in which the heights of buildings surrounding the source building were increased. The results showed clearly how the high-rise buildings affected the surrounding air flows and dispersion patterns, with the generation of "dead zones" and high-concentration "hotspots" in areas where these did not previously exist. The work clearly showed that complex CFD modelling can provide useful information to urban planners when changes to cityscapes are considered, so that design options can be tested against environmental quality criteria. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.
Thermalized Drude Oscillators with the LAMMPS Molecular Dynamics Simulator.
Dequidt, Alain; Devémy, Julien; Pádua, Agílio A H
2016-01-25
LAMMPS is a very customizable molecular dynamics simulation software, which can be used to simulate a large diversity of systems. We introduce a new package for simulation of polarizable systems with LAMMPS using thermalized Drude oscillators. The implemented functionalities are described and are illustrated by examples. The implementation was validated by comparing simulation results with published data and using a reference software. Computational performance is also analyzed.
Responses of plant available water and forest productivity to variably layered coarse textured soils
NASA Astrophysics Data System (ADS)
Huang, Mingbin; Barbour, Lee; Elshorbagy, Amin; Si, Bing; Zettl, Julie
2010-05-01
Reforestation is a primary end use for reconstructed soils following oil sands mining in northern Alberta, Canada. Limited soil water conditions strongly restrict plant growth. Previous research has shown that layering of sandy soils can produce enhanced water availability for plant growth; however, the effect of gradation on these enhancements is not well defined. The objective of this study was to evaluate the effect of soil texture (gradation and layering) on plant available water and consequently on forest productivity for reclaimed coarse-textured soils. A previously validated system dynamics (SD) model of soil moisture dynamics was coupled with the ecophysiological and biogeochemical process model Biome-BGC-SD to simulate forest dynamics for different soil profiles. These profiles included contrasting 50 cm textural layers of finer sand overlying coarser sand, in which the sand layers had either a well-graded or uniform soil texture. These profiles were compared to uniform profiles of the same sands. Three tree species, jack pine (Pinus banksiana Lamb.), white spruce (Picea glauca Voss.), and trembling aspen (Populus tremuloides Michx.), were simulated using a 50-year climate database from northern Alberta. Available water holding capacity (AWHC) was used to characterize the soil moisture regime, and leaf area index (LAI) and net primary production (NPP) were used as indices of forest productivity. Published physiological parameters were used in the Biome-BGC-SD model. Relative productivity was assessed by comparing model predictions to the measured above-ground biomass dynamics for the three tree species, and was then used to study the responses of forest leaf area index and potential productivity to AWHC on different soil profiles. Simulated results indicated that soil layering could significantly increase AWHC in the 1-m profile for coarse-textured soils. This enhanced AWHC could result in an increase in forest LAI and NPP. The extent of the increase varied with soil texture and vegetation type. The simulated results showed that the presence of 50 cm of coarser graded sand overlying 50 cm of finer graded sand is the most effective reclamation prescription for increasing AWHC and forest productivity among the studied soil profiles.
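For context, the AWHC of a layered 1-m profile is commonly computed as the sum over layers of (field capacity minus wilting point) times layer thickness. The sketch below does exactly that; the retention values are placeholder assumptions, not the measured properties of the studied sands.

```python
# Available water holding capacity of a layered 1-m coarse-textured profile,
# AWHC = sum over layers of (field capacity - wilting point) * thickness.
# The retention values below are placeholders, not the measured sands.
layers = [
    # (thickness_m, theta_fc, theta_wp)
    (0.5, 0.16, 0.05),   # finer / well-graded sand
    (0.5, 0.09, 0.03),   # coarser / uniform sand
]

awhc_mm = sum(t * (fc - wp) for t, fc, wp in layers) * 1000.0
print(f"profile AWHC ~ {awhc_mm:.0f} mm of plant-available water")
```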
Funkenbusch, Paul D; Rotella, Mario; Chochlidakis, Konstantinos; Ercoli, Carlo
2016-10-01
Laboratory studies of tooth preparation often involve single values for all variables other than the one being tested. In contrast, in clinical settings, not all variables can be adequately controlled. For example, a new dental rotary cutting instrument may be tested in the laboratory by making a specific cut with a fixed force, but, in clinical practice, the instrument must make different cuts with individual dentists applying different forces. Therefore, the broad applicability of laboratory results to diverse clinical conditions is uncertain, and comparison of effects across studies is difficult. The purpose of this in vitro study was to examine the effects of 9 process variables on the dental cutting of rotary cutting instruments used with an electric handpiece and compare them with those of a previous study that used an air-turbine handpiece. The effects of 9 key process variables on the efficiency of a simulated dental cutting operation were measured. A fractional factorial experiment was conducted by using an electric handpiece in a computer-controlled, dedicated testing apparatus to simulate dental cutting procedures with Macor blocks as the cutting substrate. Analysis of variance (ANOVA) was used to assess statistical significance (α=.05). Four variables (targeted applied load, cut length, diamond grit size, and cut type) consistently produced large, statistically significant effects, whereas 5 variables (rotational speed (rpm), number of cooling ports, rotary cutting instrument diameter, disposability, and water flow rate) produced relatively small, statistically nonsignificant effects. These results are generally similar to those previously found for an air-turbine handpiece. Regardless of whether an electric or air-turbine handpiece was used, the control exerted by the dentist, simulated in this study by targeting a specific level of applied force, was the single most important factor affecting cutting efficiency. Cutting efficiency was also significantly affected by factors simulating patient/clinical circumstances and hardware choices. These results highlight the greater importance of local clinical conditions (procedure, dentist) in understanding dental cutting as opposed to other hardware-related factors. Copyright © 2016 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Jiménez-Forteza, Xisco; Keitel, David; Husa, Sascha; Hannam, Mark; Khan, Sebastian; Pürrer, Michael
2017-03-01
Numerical relativity is an essential tool in studying the coalescence of binary black holes (BBHs). It is still computationally prohibitive to cover the BBH parameter space exhaustively, making phenomenological fitting formulas for BBH waveforms and final-state properties important for practical applications. We describe a general hierarchical bottom-up fitting methodology to design and calibrate fits to numerical relativity simulations for the three-dimensional parameter space of quasicircular nonprecessing merging BBHs, spanned by mass ratio and by the individual spin components orthogonal to the orbital plane. Particular attention is paid to incorporating the extreme-mass-ratio limit and to the subdominant unequal-spin effects. As an illustration of the method, we provide two applications, to the final spin and final mass (or equivalently: radiated energy) of the remnant black hole. Fitting to 427 numerical relativity simulations, we obtain results broadly consistent with previously published fits, but improving in overall accuracy and particularly in the approach to extremal limits and for unequal-spin configurations. We also discuss the importance of data quality studies when combining simulations from diverse sources, how detailed error budgets will be necessary for further improvements of these already highly accurate fits, and how this first detailed study of unequal-spin effects helps in choosing the most informative parameters for future numerical relativity runs.
Thomsen, Ann Sofia Skou; Bach-Holm, Daniella; Kjærbo, Hadi; Højgaard-Olsen, Klavs; Subhi, Yousif; Saleh, George M; Park, Yoon Soo; la Cour, Morten; Konge, Lars
2017-04-01
To investigate the effect of virtual reality proficiency-based training on actual cataract surgery performance. The secondary purpose of the study was to define which surgeons benefit from virtual reality training. Multicenter masked clinical trial. Eighteen cataract surgeons with different levels of experience. Cataract surgical training on a virtual reality simulator (EyeSi) until a proficiency-based test was passed. Technical performance in the operating room (OR) assessed by 3 independent, masked raters using a previously validated task-specific assessment tool for cataract surgery (Objective Structured Assessment of Cataract Surgical Skill). Three surgeries before and 3 surgeries after the virtual reality training were video-recorded, anonymized, and presented to the raters in random order. Novices (non-independently operating surgeons) and surgeons having performed fewer than 75 independent cataract surgeries showed significant improvements in the OR (32% and 38%, respectively) after virtual reality training (P = 0.008 and P = 0.018). More experienced cataract surgeons did not benefit from simulator training. The reliability of the assessments was high with a generalizability coefficient of 0.92 and 0.86 before and after the virtual reality training, respectively. Clinically relevant cataract surgical skills can be improved by proficiency-based training on a virtual reality simulator. Novices as well as surgeons with an intermediate level of experience showed improvement in OR performance score. Copyright © 2017 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
A Python tool to set up relative free energy calculations in GROMACS
Klimovich, Pavel V.; Mobley, David L.
2015-01-01
Free energy calculations based on molecular dynamics (MD) simulations have seen a tremendous growth in the last decade. However, it is still difficult and tedious to set them up in an automated manner, as the majority of the present-day MD simulation packages lack that functionality. Relative free energy calculations are a particular challenge for several reasons, including the problem of finding a common substructure and mapping the transformation to be applied. Here we present a tool, alchemical-setup.py, that automatically generates all the input files needed to perform relative solvation and binding free energy calculations with the MD package GROMACS. When combined with the Lead Optimization Mapper (LOMAP) [14], recently developed in our group, alchemical-setup.py allows fully automated setup of relative free energy calculations in GROMACS. Taking a graph of the planned calculations and a mapping, both computed by LOMAP, our tool generates the topology and coordinate files needed to perform relative free energy calculations for a given set of molecules, and provides a set of simulation input parameters. The tool was validated by performing relative hydration free energy calculations for a handful of molecules from the SAMPL4 challenge [16]. Good agreement with previously published results and the straightforward way in which free energy calculations can be conducted make alchemical-setup.py a promising tool for automated setup of relative solvation and binding free energy calculations. PMID:26487189
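For context, the relative quantities that such a setup targets follow from a standard thermodynamic cycle (generic bookkeeping, not a feature specific to the tool): the relative binding free energy of ligands A and B is obtained from two alchemical legs, one in the bound complex and one in solvent,

$$\Delta\Delta G_{\mathrm{bind}}(A \to B) \;=\; \Delta G_{\mathrm{complex}}(A \to B) \;-\; \Delta G_{\mathrm{solvent}}(A \to B),$$

so that only the perturbed region identified by the common-substructure mapping needs to be transformed in each leg.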
George, D.L.
2011-01-01
The simulation of advancing flood waves over rugged topography, by solving the shallow-water equations with well-balanced high-resolution finite volume methods and block-structured dynamic adaptive mesh refinement (AMR), is described and validated in this paper. The efficiency of block-structured AMR makes large-scale problems tractable, and allows the use of accurate and stable methods developed for solving general hyperbolic problems on quadrilateral grids. Features indicative of flooding in rugged terrain, such as advancing wet-dry fronts and non-stationary steady states due to balanced source terms from variable topography, present unique challenges and require modifications such as special Riemann solvers. A well-balanced Riemann solver for inundation and general (non-stationary) flow over topography is tested in this context. The difficulties of modeling floods in rugged terrain, and the rationale for and efficacy of using AMR and well-balanced methods, are presented. The algorithms are validated by simulating the Malpasset dam-break flood (France, 1959), which has served as a benchmark problem previously. Historical field data, laboratory model data and other numerical simulation results (computed on static fitted meshes) are shown for comparison. The methods are implemented in GEOCLAW, a subset of the open-source CLAWPACK software. All the software is freely available at. Published in 2010 by John Wiley & Sons, Ltd.
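For reference, the governing system here is the shallow-water equations with a topographic source term, written in one spatial dimension for brevity:

$$\partial_t h + \partial_x (hu) = 0, \qquad \partial_t (hu) + \partial_x\!\left(hu^2 + \tfrac{1}{2} g h^2\right) = -\,g\,h\,\partial_x b,$$

where $h$ is the flow depth, $u$ the depth-averaged velocity, $g$ the gravitational acceleration, and $b(x)$ the bottom topography; a well-balanced solver preserves the motionless steady state $u = 0$, $h + b = \text{const}$ at the discrete level, which is what makes wet-dry fronts over rugged terrain tractable.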
Anomalous surface diffusion of protons on lipid membranes.
Wolf, Maarten G; Grubmüller, Helmut; Groenhof, Gerrit
2014-07-01
The cellular energy machinery depends on the presence and properties of protons at or in the vicinity of lipid membranes. To assess the energetics and mobility of a proton near a membrane, we simulated an excess proton near a solvated DMPC bilayer at 323 K, using a recently developed method to include the Grotthuss proton shuttling mechanism in classical molecular dynamics simulations. We obtained a proton surface affinity of -13.0 ± 0.5 kJ mol(-1). The proton interacted strongly with both lipid headgroup and linker carbonyl oxygens. Furthermore, the surface diffusion of the proton was anomalous, with a subdiffusive regime over the first few nanoseconds, followed by a superdiffusive regime. The time and distance dependence of the proton surface diffusion coefficient within these regimes may also resolve discrepancies between previously reported diffusion coefficients. Our simulations show that the proton anomalous surface diffusion originates from restricted diffusion in two different surface-bound states, interrupted by occasional bulk-mediated long-range surface diffusion. Although only a DMPC membrane was considered in this work, we speculate that the restrictive character of the on-surface diffusion is highly sensitive to the specific membrane conditions, which can alter the relative contributions of the surface and bulk pathways to the overall diffusion process. Finally, we discuss the implications of our findings for the cellular energy machinery. Copyright © 2014 Biophysical Society. Published by Elsevier Inc. All rights reserved.
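A generic way to diagnose sub- or superdiffusive behaviour of this kind from a surface trajectory is to fit the scaling exponent α of the mean-squared displacement, MSD(t) ∝ t^α (α < 1 subdiffusive, α ≈ 1 normal, α > 1 superdiffusive). The sketch below uses made-up trajectory data purely to illustrate the calculation; it is not the analysis code of the study.

```python
import numpy as np

def time_averaged_msd(xy, max_lag):
    """Time-averaged mean-squared displacement of a 2D trajectory of shape (n_frames, 2)."""
    out = np.empty(max_lag)
    for lag in range(1, max_lag + 1):
        d = xy[lag:] - xy[:-lag]
        out[lag - 1] = np.mean(np.sum(d * d, axis=1))
    return out

# Made-up lateral (x, y) positions of a surface-bound proton, sampled every dt ns.
dt = 0.01
xy = np.cumsum(np.random.default_rng(1).normal(scale=0.05, size=(20_000, 2)), axis=0)

max_lag = 2_000
lag_times = dt * np.arange(1, max_lag + 1)
msd = time_averaged_msd(xy, max_lag)

# Fit MSD ~ t^alpha on log-log axes; alpha ~ 1 for this Brownian test trajectory.
alpha, _ = np.polyfit(np.log(lag_times), np.log(msd), 1)
print(f"anomalous diffusion exponent alpha = {alpha:.2f}")
```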
NASA Technical Reports Server (NTRS)
Pazdera, J. S.
1974-01-01
Published report describes analytical development and simulation of braking system. System prevents wheels from skidding when brakes are applied, significantly reducing stopping distance. Report also presents computer simulation study on system as applied to aircraft.
NASA Technical Reports Server (NTRS)
Lindner, Bernhard Lee
1992-01-01
Research activities to date are discussed. Selected Mariner 9 UV spectra were obtained. Radiative transfer models were updated and then exercised to simulate spectra. Simulated and observed spectra compare favorably. It is noted that large amounts of ozone are currently not retrieved with reflectance spectroscopy, casting serious doubt on earlier published ozone abundances. As these published abundances have been used as a benchmark for all theoretical photochemical models of Mars, this deserves further exploration. Three manuscripts were published, and one is in review. Papers were presented and published at three conferences, and presentations are planned for five more conferences in the next six months. The research plan for the next reporting period is discussed and involves continuing studies of reflectance spectroscopy, further examination of Mariner 9 data, and climate change studies of ozone.
Third-order elastic constants of diamond determined from experimental data
Winey, J. M.; Hmiel, A.; Gupta, Y. M.
2016-06-01
The pressure derivatives of the second-order elastic constants (SOECs) of diamond were determined by analyzing previous sound velocity measurements under hydrostatic stress [McSkimin and Andreatch, J. Appl. Phys. 43, 294 (1972)]; our analysis corrects an error in the previously reported results. We present a complete and corrected set of third-order elastic constants (TOECs), obtained using the corrected pressure derivatives together with published data for the nonlinear elastic response of shock-compressed diamond [Lang and Gupta, Phys. Rev. Lett. 106, 125502 (2011)]; this set differs significantly from previously published TOECs.
STELLAR ENCOUNTER RATE IN GALACTIC GLOBULAR CLUSTERS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bahramian, Arash; Heinke, Craig O.; Sivakoff, Gregory R.
2013-04-01
The high stellar densities in the cores of globular clusters cause significant stellar interactions. These stellar interactions can produce close binary mass-transferring systems involving compact objects and their progeny, such as X-ray binaries and radio millisecond pulsars. Comparing the numbers of these systems and interaction rates in different clusters drives our understanding of how cluster parameters affect the production of close binaries. In this paper we estimate stellar encounter rates (Γ) for 124 Galactic globular clusters based on observational data, as opposed to previously employed methods that assumed King-model profiles for all clusters. By deprojecting cluster surface brightness profiles to estimate luminosity density profiles, we treat King-model and core-collapsed clusters in the same way. In addition, we use Monte Carlo simulations to investigate the effects of uncertainties in various observational parameters (distance, reddening, surface brightness) on Γ, producing the first catalog of globular cluster stellar encounter rates with estimated errors. Comparing our results with published observations of likely products of stellar interactions (numbers of X-ray binaries, numbers of radio millisecond pulsars, and γ-ray luminosity), we find both clear correlations and some differences.
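The quantity estimated here is the standard two-body stellar encounter rate, which for a cluster with density profile ρ(r) and velocity dispersion σ(r) scales as

$$\Gamma \;\propto\; \int \frac{\rho^{2}(r)}{\sigma(r)}\, 4\pi r^{2}\, \mathrm{d}r ,$$

so replacing an assumed King-model ρ(r) with one deprojected from the observed surface brightness changes Γ most strongly for core-collapsed clusters, and Monte Carlo draws of distance, reddening, and surface brightness propagate the observational uncertainties directly into Γ.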
Process Simulation of Cold Pressing and Sintering of Armstrong CP-Ti Powders
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gorti, Sarma B; Sabau, Adrian S; Peter, William H
A computational methodology is presented for the process simulation of cold pressing and sintering of Armstrong CP-Ti powders. Since the powder consolidation is governed by specific pressure-dependent constitutive equations, solution algorithms were developed for the ABAQUS user material subroutine, UMAT, for computing the plastic strain increments based on an implicit integration of the nonlinear yield function, flow rule, and hardening equations. Sintering was simulated using a model based on diffusional creep using the user subroutine CREEP. The initial mesh, stress, and density for the simulation of sintering were obtained from the results of the cold pressing simulation, minimizing the errors from decoupling the cold pressing and sintering simulations. Numerical simulation results are presented for the cold compaction followed by a sintering step of the Ti powders. The numerical simulation results for the relative density were compared to those measured from experiments before and after sintering, showing that the relative density can be accurately predicted. This research was sponsored by the U.S. DOE, EERE Industrial Technology Program Office, and carried out at ORNL under Contract DE-AC05-00OR22725 with UT-Battelle, LLC.
Current Status of Simulation-based Training Tools in Orthopedic Surgery: A Systematic Review.
Morgan, Michael; Aydin, Abdullatif; Salih, Alan; Robati, Shibby; Ahmed, Kamran
To conduct a systematic review of orthopedic training and assessment simulators with reference to their level of evidence (LoE) and level of recommendation. Medline and EMBASE library databases were searched for English language articles published between 1980 and 2016, describing orthopedic simulators or validation studies of these models. All studies were assessed for LoE, and each model was subsequently awarded a level of recommendation using a modified Oxford Centre for Evidence-Based Medicine classification, adapted for education. A total of 76 articles describing orthopedic simulators met the inclusion criteria, 47 of which described at least 1 validation study. The most commonly identified models (n = 34) and validation studies (n = 26) were for knee arthroscopy. Construct validation was the most frequent validation study attempted by authors. In all, 62% (47 of 76) of the simulator studies described arthroscopy simulators, which also contained validation studies with the highest LoE. Orthopedic simulators are increasingly being subjected to validation studies, although the LoE of such studies generally remain low. There remains a lack of focus on nontechnical skills and on cost analyses of orthopedic simulators. Copyright © 2017 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
Simulation Games: Practical References, Potential Use, Selected Bibliography.
ERIC Educational Resources Information Center
Kidder, Steven J.
Several recently published books on simulation and games are briefly discussed. Selected research studies and demonstration projects are examined to show the potential of simulation and gaming for teaching and training and for the study of social and psychological processes. The bibliography lists 113 publications which should lead the reader to…
Prediction of flunixin tissue residue concentrations in livers from diseased cattle.
Wu, H; Baynes, R E; Tell, L A; Riviere, J E
2013-12-01
Flunixin, a widely used non-steroidal anti-inflammatory drug, has been a leading cause of violative residues in cattle. The objective of this analysis was to explore how changes in pharmacokinetic (PK) parameters that may be associated with diseased animals affect the predicted liver residue of flunixin in cattle. Monte Carlo simulations for liver residues of flunixin were performed using the PK model structure and relevant PK parameter estimates from a previously published population PK model for flunixin in cattle. The magnitude of change in each PK parameter value that resulted in violative residues in more than one percent of the cattle population was determined and compared across parameters. In this regard, elimination clearance and volume of distribution affected withdrawal times. Pathophysiological factors that can change these parameters may contribute to the occurrence of violative residues of flunixin.
Evaluation of Multiclass Model Observers in PET LROC Studies
NASA Astrophysics Data System (ADS)
Gifford, H. C.; Kinahan, P. E.; Lartizien, C.; King, M. A.
2007-02-01
A localization ROC (LROC) study was conducted to evaluate nonprewhitening matched-filter (NPW) and channelized NPW (CNPW) versions of a multiclass model observer as predictors of human tumor-detection performance with PET images. Target localization is explicitly performed by these model observers. Tumors were placed in the liver, lungs, and background soft tissue of a mathematical phantom, and the data simulation modeled a full-3D acquisition mode. Reconstructions were performed with the FORE+AWOSEM algorithm. The LROC study measured observer performance with 2D images consisting of either coronal, sagittal, or transverse views of the same set of cases. Versions of the CNPW observer based on two previously published difference-of-Gaussian channel models demonstrated good quantitative agreement with human observers. One interpretation of these results treats the CNPW observer as a channelized Hotelling observer with implicit internal noise.
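As a generic illustration of a scanning NPW observer of this kind (not the authors' implementation; the image and signal here are synthetic), the test statistic at each candidate location is the cross-correlation of the image with the expected tumor profile, and the reported tumor location is the maximizer of that statistic map; a channelized (CNPW) variant would first project both image and signal onto a small set of channel responses.

```python
import numpy as np
from scipy.signal import fftconvolve

def npw_scan(image, template):
    """Scanning nonprewhitening matched filter.

    Flipping the template turns fftconvolve into a cross-correlation, so the
    statistic map holds the NPW test statistic at every candidate location.
    """
    stat_map = fftconvolve(image, template[::-1, ::-1], mode="same")
    loc = np.unravel_index(np.argmax(stat_map), stat_map.shape)
    return stat_map, loc

# Synthetic example: a Gaussian tumor profile buried in white noise.
rng = np.random.default_rng(2)
y, x = np.mgrid[-6:7, -6:7]
template = np.exp(-(x**2 + y**2) / (2.0 * 2.0**2))

image = rng.normal(size=(128, 128))
image[40:53, 70:83] += 0.8 * template      # insert a weak signal at a known spot

stat_map, reported_location = npw_scan(image, template)
print("reported tumor location (row, col):", reported_location)
```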
Using screen-based simulation of inhaled anaesthetic delivery to improve patient care.
Philip, J H
2015-12-01
Screen-based simulation can improve patient care by giving novices and experienced clinicians insight into drug behaviour. Gas Man(®) is a screen-based simulation program that depicts pictorially and graphically the anaesthetic gas and vapour tension from the vaporizer to the site of action, namely the brain and spinal cord. The gases and vapours depicted are desflurane, enflurane, ether, halothane, isoflurane, nitrogen, nitrous oxide, sevoflurane, and xenon. Multiple agents can be administered simultaneously or individually and the results shown on an overlay graph. Practice exercises provide in-depth knowledge of the subject matter. Experienced clinicians can simulate anaesthesia occurrences and practices for application to their clinical practice, and publish the results to benefit others to improve patient care. Published studies using this screen-based simulation have led to a number of findings, as follows: changing from isoflurane to desflurane toward the end of anaesthesia does not accelerate recovery in humans; vital capacity induction can produce loss of consciousness in 45 s; simulated context-sensitive decrement times explain recovery profiles; hyperventilation does not dramatically speed emergence; high fresh gas flow is wasteful; fresh gas flow and not the vaporizer setting should be reduced during intubation; re-anaesthetization can occur with severe hypoventilation after extubation; and in re-anaesthetization, the anaesthetic redistributes from skeletal muscle. Researchers using screen-based simulations can study fewer subjects to reach valid conclusions that impact clinical care. © The Author 2015. Published by Oxford University Press on behalf of the British Journal of Anaesthesia. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Gemini Capsule and Rendezvous Docking Simulator
1962-12-19
Practicing with a full-scale model of the Gemini Capsule in Langley's Rendezvous Docking Simulator. -- Caption and photograph published in Winds of Change, 75th Anniversary NASA publication, (page 89), by James Schultz.
NASA Astrophysics Data System (ADS)
Døssing, A.; Muxworthy, A. R.; Mac Niocaill, C.; Riishuus, M. S.
2013-12-01
Statistical analyses of paleomagnetic data from sequential lava flows allow us to study the geomagnetic field behavior on kyr to Myr timescales. Previous paleomagnetic studies have lacked the high-latitude, high-quality measurements and resolution necessary to investigate the persistence of high-latitude geomagnetic field anomalies observed in the recent and historical field records, and replicated in some numerical geodynamo simulations. As part of the Time-Averaged Field Initiative (TAFI) project, the lava sequences found in Nordurdalur (by Fljótsdalur) and Jökuldalur in eastern Iceland provide an excellent opportunity to improve high-latitude data suitable for investigating the 0-5 Ma TAF and paleosecular variation. These adjacent valleys, separated by 40 km, are known to comprise a fairly continuous record of lava flows erupted from the Northern Rift Zone between 0.5 and 5-7 Ma. During a five-week field campaign in summer 2013, we collected a total of ~1900 cores (10-16 cores/site; mean = ~13 cores/site) from ~140 separate lava flows (165 in total) along eight stratigraphic profiles in Nordurdalur and Jökuldalur. In addition, hand samples were collected from ~70 sites to deliver ~40 new 40Ar/39Ar radiometric age measurements. We present a preliminary composite magnetostratigraphic interpretation of the exposed volcanic pile in Nordurdalur and Jökuldalur. The new data will be compared and contrasted with previously published paleomagnetic and geochronological results. In addition, determinations of the anisotropy of the magnetic susceptibility of individual lava flows are sought to deliver fossil lava flow directions. The aim of the study is ultimately to present a high-quality study of paleomagnetic directions and intensities from Iceland spanning the past 6-7 Myr. The new Fljótsdalur and Jökuldalur data will be combined with previously published paleomagnetic results.
Yuhara, Daisuke; Brumby, Paul E; Wu, David T; Sum, Amadeu K; Yasuoka, Kenji
2018-05-14
To develop prediction methods of three-phase equilibrium (coexistence) conditions of methane hydrate by molecular simulations, we examined the use of NVT (isometric-isothermal) molecular dynamics (MD) simulations. NVT MD simulations of coexisting solid hydrate, liquid water, and vapor methane phases were performed at four different temperatures, namely, 285, 290, 295, and 300 K. NVT simulations do not require complex pressure control schemes in multi-phase systems, and the growth or dissociation of the hydrate phase can lead to significant pressure changes in the approach toward equilibrium conditions. We found that the calculated equilibrium pressures tended to be higher than those reported by previous NPT (isobaric-isothermal) simulation studies using the same water model. The deviations of equilibrium conditions from previous simulation studies are mainly attributable to the employed calculation methods of pressure and Lennard-Jones interactions. We monitored the pressure in the methane phase, far from the interfaces with other phases, and confirmed that it was higher than the total pressure of the system calculated by previous studies. This fact clearly highlights the difficulties associated with the pressure calculation and control for multi-phase systems. The treatment of Lennard-Jones interactions without tail corrections in MD simulations also contributes to the overestimation of equilibrium pressure. Although improvements are still required to obtain accurate equilibrium conditions, NVT MD simulations exhibit potential for the prediction of equilibrium conditions of multi-phase systems.
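The tail-correction issue referred to above concerns the standard long-range corrections for a truncated Lennard-Jones potential; for a homogeneous phase of number density ρ and cutoff r_c the textbook expressions are

$$\frac{U_{\mathrm{tail}}}{N} = \frac{8}{3}\pi\rho\varepsilon\sigma^{3}\left[\frac{1}{3}\left(\frac{\sigma}{r_c}\right)^{9} - \left(\frac{\sigma}{r_c}\right)^{3}\right], \qquad P_{\mathrm{tail}} = \frac{16}{3}\pi\rho^{2}\varepsilon\sigma^{3}\left[\frac{2}{3}\left(\frac{\sigma}{r_c}\right)^{9} - \left(\frac{\sigma}{r_c}\right)^{3}\right],$$

both of which are negative for typical cutoffs, so omitting them biases the computed pressure upward; they are also strictly valid only for homogeneous fluids, which is part of the difficulty in an inhomogeneous three-phase box.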
Wilson, R; Abbott, J H
2018-04-01
To describe the construction and preliminary validation of a new population-based microsimulation model developed to analyse the health and economic burden and cost-effectiveness of treatments for knee osteoarthritis (OA) in New Zealand (NZ). We developed the New Zealand Management of Osteoarthritis (NZ-MOA) model, a discrete-time state-transition microsimulation model of the natural history of radiographic knee OA. In this article, we report on the model structure, derivation of input data, validation of baseline model parameters against external data sources, and validation of model outputs by comparison of the predicted population health loss with previous estimates. The NZ-MOA model simulates both the structural progression of radiographic knee OA and the stochastic development of multiple disease symptoms. Input parameters were sourced from NZ population-based data where possible, and from international sources where NZ-specific data were not available. The predicted distributions of structural OA severity and health utility detriments associated with OA were externally validated against other sources of evidence, and uncertainty resulting from key input parameters was quantified. The resulting lifetime and current population health-loss burden was consistent with estimates of previous studies. The new NZ-MOA model provides reliable estimates of the health loss associated with knee OA in the NZ population. The model structure is suitable for analysis of the effects of a range of potential treatments, and will be used in future work to evaluate the cost-effectiveness of recommended interventions within the NZ healthcare system. Copyright © 2018 Osteoarthritis Research Society International. Published by Elsevier Ltd. All rights reserved.
Lorne, Emmanuel; Diouf, Momar; de Wilde, Robert B P; Fischer, Marc-Olivier
2018-02-01
The Bland-Altman (BA) and percentage error (PE) methods have been previously described to assess the agreement between 2 methods of medical or laboratory measurement. This type of approach raises several problems: the BA methodology constitutes a subjective approach to interchangeability, whereas the PE approach does not take into account the distribution of values over a range. We describe a new methodology that defines an interchangeability rate between 2 methods of measurement and cutoff values that determine the range of interchangeable values. We used simulated data and a previously published data set to demonstrate the concept of the method. The interchangeability rate of 5 different cardiac output (CO) pulse contour techniques (Wesseling method, LiDCO, PiCCO, Hemac method, and Modelflow) was calculated, in comparison with the reference pulmonary artery thermodilution CO, using our new method. In our example, Modelflow, with a good interchangeability rate of 93% and a cutoff value of 4.8 L/min, was found to be interchangeable with the thermodilution method for >95% of measurements. Modelflow had a higher interchangeability rate compared to Hemac (93% vs 86%; P = .022) or the other monitors (Wesseling cZ = 76%, LiDCO = 73%, and PiCCO = 62%; P < .0001). Simulated data and reanalysis of a data set comparing 5 CO monitors against thermodilution CO showed that, depending on the repeatability of the reference method, the interchangeability rate combined with a cutoff value could be used to define the range of values over which interchangeability remains acceptable.
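For orientation, the two conventional quantities against which the new interchangeability rate is positioned can be computed as in the generic sketch below (simulated paired readings, not the study data): the Bland–Altman bias with its 95% limits of agreement, and the percentage error defined as 1.96 times the SD of the differences divided by the mean reference cardiac output.

```python
import numpy as np

def bland_altman_pe(reference, test):
    """Bias, 95% limits of agreement, and percentage error for paired CO readings."""
    diff = test - reference
    bias = diff.mean()
    sd = diff.std(ddof=1)
    limits = (bias - 1.96 * sd, bias + 1.96 * sd)
    percentage_error = 1.96 * sd / reference.mean() * 100.0
    return bias, limits, percentage_error

# Simulated paired cardiac-output measurements in L/min (illustrative only).
rng = np.random.default_rng(3)
co_thermodilution = rng.normal(5.0, 1.0, size=200)
co_pulse_contour = co_thermodilution + rng.normal(0.2, 0.6, size=200)

bias, limits, pe = bland_altman_pe(co_thermodilution, co_pulse_contour)
print(f"bias = {bias:.2f} L/min, LoA = ({limits[0]:.2f}, {limits[1]:.2f}) L/min, PE = {pe:.0f}%")
```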
Ammann, Caspar M.; Joos, Fortunat; Schimel, David S.; Otto-Bliesner, Bette L.; Tomas, Robert A.
2007-01-01
The potential role of solar variations in modulating recent climate has been debated for many decades and recent papers suggest that solar forcing may be less than previously believed. Because solar variability before the satellite period must be scaled from proxy data, large uncertainty exists about phase and magnitude of the forcing. We used a coupled climate system model to determine whether proxy-based irradiance series are capable of inducing climatic variations that resemble variations found in climate reconstructions, and if part of the previously estimated large range of past solar irradiance changes could be excluded. Transient simulations, covering the published range of solar irradiance estimates, were integrated from 850 AD to the present. Solar forcing as well as volcanic and anthropogenic forcing are detectable in the model results despite internal variability. The resulting climates are generally consistent with temperature reconstructions. Smaller, rather than larger, long-term trends in solar irradiance appear more plausible and produced modeled climates in better agreement with the range of Northern Hemisphere temperature proxy records both with respect to phase and magnitude. Despite the direct response of the model to solar forcing, even large solar irradiance change combined with realistic volcanic forcing over past centuries could not explain the late 20th century warming without inclusion of greenhouse gas forcing. Although solar and volcanic effects appear to dominate most of the slow climate variations within the past thousand years, the impacts of greenhouse gases have dominated since the second half of the last century. PMID:17360418
Byers, John A; Maoz, Yonatan; Levi-Zada, Anat
2017-08-01
The Euwallacea sp. near fornicatus (Euwallacea sp. 1 hereafter) feeds on many woody shrubs and trees and is a pest of avocado, Persea americana Mill., in several countries including Israel and the United States. Quercivorol baits are commercially available for Euwallacea sp. 1 females (males do not fly), but their attractive strength compared to other pheromones and potential for mass trapping are unknown. We used sticky traps baited with quercivorol released at 0.126 mg/d (1×) and at 0.01×, 0.1×, and 10× relative rates to obtain a dose-response curve of Euwallacea sp. 1 attraction. The curve was well fitted by a first-order kinetic formation function. Naturally infested limbs of living avocado trees had attraction rates equivalent to 1× quercivorol. An effective attraction radius (EAR) was calculated according to previous equations for each of the various baits (1× EAR = 1.18 m; 10× EAR = 2.00 m). A pole with six sticky traps spaced from 0.25-5.75 m in height had captures of Euwallacea sp. 1 that yielded a mean flight height of 1.24 m with a vertical flight distribution SD of 0.88 m (0.82-0.96 m, 95% CI). The SD, together with the specific EAR, was used to calculate the two-dimensional EARc (1× EARc = 0.99 m; 10× EARc = 2.86 m) for comparison with other insect pheromone traps and for use in simulations. Simulations, following previously described methods, were performed with combinations of 1-16 traps and 1-50 aggregations per 9-ha plot. The simulations indicate mass trapping with quercivorol could be effective if begun in spring before Euwallacea sp. 1 establishes competing sources of attraction. © The Authors 2017. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
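The dose-response fit mentioned above can be reproduced generically as a saturating first-order kinetic formation curve, C(R) = C_max (1 − e^(−kR)), where R is the relative release rate; the sketch below fits that functional form to invented trap-catch numbers and is only meant to illustrate the fitting step.

```python
import numpy as np
from scipy.optimize import curve_fit

def kinetic_formation(release_rate, c_max, k):
    """First-order kinetic formation (saturating) dose-response curve."""
    return c_max * (1.0 - np.exp(-k * release_rate))

# Release rates relative to the 1x lure and invented mean catches per trap.
relative_rate = np.array([0.01, 0.1, 1.0, 10.0])
mean_catch = np.array([2.0, 9.0, 31.0, 54.0])

(c_max, k), _ = curve_fit(kinetic_formation, relative_rate, mean_catch, p0=(60.0, 1.0))
print(f"fitted C_max = {c_max:.1f} beetles/trap, k = {k:.2f} per unit relative rate")
```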
2017-01-01
The absorption of poorly water-soluble drugs is influenced by the luminal gastrointestinal fluid content and composition, which control solubility. Simulated intestinal fluids that include endogenous amphiphiles and digested lipids at physiological levels have been introduced into dissolution testing; however, individual variation exists in vivo in the concentrations of these components, which will alter drug absorption through an effect on solubility. The use of a factorial design of experiments varying media composition through different levels of bile, lecithin, and digested lipids has been previously reported, but here we investigate the solubility variation of poorly soluble drugs through more complex biorelevant amphiphile interactions. A four-component mixture design was conducted to understand the solubilization capacity and interactions of bile salt, lecithin, oleate, and monoglyceride with a constant total concentration (11.7 mM) but varying molar ratios. The equilibrium solubility of seven low-solubility acidic (zafirlukast), basic (aprepitant, carvedilol), and neutral (fenofibrate, felodipine, griseofulvin, and spironolactone) drugs was investigated. Solubility results are comparable with literature values and also with our own previously published design of experiment studies. Results indicate that solubilization is not a simple sum of the individual amphiphile contributions, but a drug-specific effect arising from interactions of mixed amphiphile compositions with the drug. This is probably due to the combined interaction of drug characteristics (for example, lipophilicity, molecular shape, and ionization) with the amphiphile components, which can generate specific drug–micelle affinities. The proportion of each component can have a remarkable influence on solubility with, in some cases, the highest and lowest points close to each other. A single-point solubility measurement in a fixed-composition simulated medium or human intestinal fluid sample will therefore provide a value without knowledge of the surrounding solubility topography, meaning that variability may be overlooked. This study has demonstrated how amphiphile ratios influence drug solubility and highlights the importance of the envelope of physiological variation when simulating in vivo drug behavior. PMID:28749696
Correlation, evaluation, and extension of linearized theories for tire motion and wheel shimmy
NASA Technical Reports Server (NTRS)
Smiley, Robert F
1957-01-01
An evaluation is made of the existing theories of linearized tire motion and wheel shimmy. It is demonstrated that most of the previously published theories represent varying degrees of approximation to a summary theory developed in this report, which is a minor modification of the basic theory of Von Schlippe and Dietrich. In most cases where strong differences exist between the previously published theories and the summary theory, the previously published theories are shown to possess certain deficiencies. A series of systematic approximations to the summary theory is developed for the treatment of problems too simple to merit the use of the complete summary theory, and procedures are discussed for applying the summary theory and its systematic approximations to the shimmy of more complex landing-gear structures than have previously been considered. Comparisons of the existing experimental data with the predictions of the summary theory and the systematic approximations provide fair substantiation of the more detailed approximate theories.
A web-based rapid assessment tool for production publishing solutions
NASA Astrophysics Data System (ADS)
Sun, Tong
2010-02-01
Solution assessment is a critical first step in understanding and measuring the business-process efficiency enabled by an integrated solution package. However, assessing the effectiveness of any solution is usually an expensive and time-consuming task which requires substantial domain knowledge, collection and understanding of the specific customer operational context, definition of validation scenarios, and estimation of the expected performance and operational cost. This paper presents an intelligent web-based tool that can rapidly assess any given solution package for production publishing workflows via a simulation engine and create a report of estimated performance metrics (e.g. throughput, turnaround time, resource utilization) and operational cost. By integrating a digital publishing workflow ontology and an activity-based costing model with a Petri-net-based workflow simulation engine, this web-based tool allows users to quickly evaluate potential digital publishing solutions side-by-side within their desired operational contexts, and provides a low-cost and rapid assessment for organizations before committing to any purchase. This tool also benefits solution providers by shortening sales cycles, establishing trustworthy customer relationships, and supplementing professional assessment services with a proven quantitative simulation and estimation technology.
Kirkman, Matthew A; Muirhead, William; Sevdalis, Nick; Nandi, Dipankar
2015-01-01
Simulation is gaining increasing interest as a method of delivering high-quality, time-effective, and safe training to neurosurgical residents. However, most current simulators are purpose-built for simulation, being relatively expensive and inaccessible to many residents. The purpose of this study was to provide the first comprehensive validity assessment of ventriculostomy performance metrics from the Medtronic StealthStation S7 Surgical Navigation System, a neuronavigational tool widely used in the clinical setting, as a training tool for simulated ventriculostomy while concomitantly reporting on stress measures. A prospective study where participants performed 6 simulated ventriculostomy attempts on a model head with StealthStation-coregistered imaging. The performance measures included distance of the ventricular catheter tip to the foramen of Monro and presence of the catheter tip in the ventricle. Data on objective and self-reported stress and workload measures were also collected. The operating rooms of the National Hospital for Neurology and Neurosurgery, Queen Square, London. A total of 31 individuals with varying levels of prior ventriculostomy experience, varying in seniority from medical student to senior resident. Performance at simulated ventriculostomy improved significantly over subsequent attempts, irrespective of previous ventriculostomy experience. Performance improved whether or not the StealthStation display monitor was used for real-time visual feedback, but performance was optimal when it was. Further, performance was inversely correlated with both objective and self-reported measures of stress (traditionally referred to as concurrent validity). Stress and workload measures were well-correlated with each other, and they also correlated with technical performance. These initial data support the use of the StealthStation as a training tool for simulated ventriculostomy, providing a safe environment for repeated practice with immediate feedback. Although the potential implications are profound for neurosurgical education and training, further research following this proof-of-concept study is required on a larger scale for full validation and proof that training translates into improved long-term simulated and patient outcomes. Copyright © 2015 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
Effect of image scaling and segmentation in digital rock characterisation
NASA Astrophysics Data System (ADS)
Jones, B. D.; Feng, Y. T.
2016-04-01
Digital material characterisation from microstructural geometry is an emerging field in computer simulation. For permeability characterisation, a variety of studies exist where the lattice Boltzmann method (LBM) has been used in conjunction with computed tomography (CT) imaging to simulate fluid flow through microscopic rock pores. While these previous works show that the technique is applicable, the use of binary image segmentation and the bounceback boundary condition results in a loss of grain surface definition when the modelled geometry is compared to the original CT image. We apply the immersed moving boundary (IMB) condition of Noble and Torczynski as a partial bounceback boundary condition which may be used to better represent the geometric definition provided by a CT image. The IMB condition is validated against published work on idealised porous geometries in both 2D and 3D. Following this, greyscale image segmentation is applied to a CT image of Diemelstadt sandstone. By varying the mapping of CT voxel densities to lattice sites, it is shown that binary image segmentation may underestimate the true permeability of the sample. A CUDA-C-based code, LBM-C, was developed specifically for this work and leverages GPU hardware in order to carry out computations.
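The partial-bounceback idea rests on the Noble–Torczynski weighting, commonly written as

$$B(\varepsilon,\tau) \;=\; \frac{\varepsilon\,(\tau - \tfrac{1}{2})}{(1-\varepsilon) + (\tau - \tfrac{1}{2})},$$

where ε ∈ [0, 1] is the local solid fraction of a lattice cell (here obtained from the normalised CT greyscale value rather than from a binary threshold) and τ is the BGK relaxation time; B = 0 recovers the ordinary fluid collision and B = 1 full bounceback, so intermediate greyscale values yield a partially resistive cell that preserves grain surface definition.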
Narayan, Lakshmi; Dodd, Richard S.; O’Hara, Kevin L.
2015-01-01
Premise of the study: Identifying clonal lineages in asexually reproducing plants using microsatellite markers is complicated by the possibility of nonidentical genotypes from the same clonal lineage due to somatic mutations, null alleles, and scoring errors. We developed and tested a clonal identification protocol that is robust to these issues for the asexually reproducing hexaploid tree species coast redwood (Sequoia sempervirens). Methods: Microsatellite data from four previously published and two newly developed primers were scored using a modified protocol, and clones were identified using Bruvo genetic distances. The effectiveness of this clonal identification protocol was assessed using simulations and by genotyping a test set of paired samples of different tissue types from the same trees. Results: Data from simulations showed that our protocol allowed us to accurately identify clonal lineages. Multiple test samples from the same trees were identified correctly, although certain tissue type pairs had larger genetic distances on average. Discussion: The methods described in this paper will allow for the accurate identification of coast redwood clones, facilitating future studies of the reproductive ecology of this species. The techniques used in this paper can be applied to studies of other clonal organisms as well. PMID:25798341
Narayan, Lakshmi; Dodd, Richard S; O'Hara, Kevin L
2015-03-01
Identifying clonal lineages in asexually reproducing plants using microsatellite markers is complicated by the possibility of nonidentical genotypes from the same clonal lineage due to somatic mutations, null alleles, and scoring errors. We developed and tested a clonal identification protocol that is robust to these issues for the asexually reproducing hexaploid tree species coast redwood (Sequoia sempervirens). Microsatellite data from four previously published and two newly developed primers were scored using a modified protocol, and clones were identified using Bruvo genetic distances. The effectiveness of this clonal identification protocol was assessed using simulations and by genotyping a test set of paired samples of different tissue types from the same trees. Data from simulations showed that our protocol allowed us to accurately identify clonal lineages. Multiple test samples from the same trees were identified correctly, although certain tissue type pairs had larger genetic distances on average. The methods described in this paper will allow for the accurate identification of coast redwood clones, facilitating future studies of the reproductive ecology of this species. The techniques used in this paper can be applied to studies of other clonal organisms as well.
A simple model of chromospheric evaporation and condensation driven conductively in a solar flare
DOE Office of Scientific and Technical Information (OSTI.GOV)
Longcope, D. W.
2014-11-01
Magnetic energy released in the corona by solar flares reaches the chromosphere where it drives characteristic upflows and downflows known as evaporation and condensation. These flows are studied here for the case where energy is transported to the chromosphere by thermal conduction. An analytic model is used to develop relations by which the density and velocity of each flow can be predicted from coronal parameters including the flare's energy flux F. These relations are explored and refined using a series of numerical investigations in which the transition region (TR) is represented by a simplified density jump. The maximum evaporation velocity, for example, is well approximated by v_e ≅ 0.38(F/ρ_co,0)^(1/3), where ρ_co,0 is the mass density of the pre-flare corona. This and the other relations are found to fit simulations using more realistic models of the TR, both performed in this work and taken from a variety of previously published investigations. These relations offer a novel and efficient means of simulating coronal reconnection without neglecting entirely the effects of evaporation.
Model-based prediction of myelosuppression and recovery based on frequent neutrophil monitoring.
Netterberg, Ida; Nielsen, Elisabet I; Friberg, Lena E; Karlsson, Mats O
2017-08-01
To investigate whether a more frequent monitoring of the absolute neutrophil counts (ANC) during myelosuppressive chemotherapy, together with model-based predictions, can improve therapy management, compared to the limited clinical monitoring typically applied today. Daily ANC in chemotherapy-treated cancer patients were simulated from a previously published population model describing docetaxel-induced myelosuppression. The simulated values were used to generate predictions of the individual ANC time-courses, given the myelosuppression model. The accuracy of the predicted ANC was evaluated under a range of conditions with reduced amount of ANC measurements. The predictions were most accurate when more data were available for generating the predictions and when making short forecasts. The inaccuracy of ANC predictions was highest around nadir, although a high sensitivity (≥90%) was demonstrated to forecast Grade 4 neutropenia before it occurred. The time for a patient to recover to baseline could be well forecasted 6 days (±1 day) before the typical value occurred on day 17. Daily monitoring of the ANC, together with model-based predictions, could improve anticancer drug treatment by identifying patients at risk for severe neutropenia and predicting when the next cycle could be initiated.
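The published docetaxel myelosuppression model used to simulate the daily ANC is presumably of the semi-mechanistic transit-compartment form introduced by Friberg and colleagues, in which a proliferating pool P, three maturation compartments T_1 to T_3, and circulating neutrophils Circ evolve as

$$\frac{dP}{dt} = k_{\mathrm{prol}}\,P\,(1 - E_{\mathrm{drug}})\left(\frac{\mathrm{Circ}_0}{\mathrm{Circ}}\right)^{\gamma} - k_{\mathrm{tr}}P, \qquad \frac{dT_i}{dt} = k_{\mathrm{tr}}(T_{i-1} - T_i), \qquad \frac{d\,\mathrm{Circ}}{dt} = k_{\mathrm{tr}}T_3 - k_{\mathrm{circ}}\,\mathrm{Circ},$$

with T_0 ≡ P; individual forecasts are then obtained by conditioning the individual parameters of such a system on the observed neutrophil counts.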
Not just the norm: exemplar-based models also predict face aftereffects.
Ross, David A; Deroche, Mickael; Palmeri, Thomas J
2014-02-01
The face recognition literature has considered two competing accounts of how faces are represented within the visual system: Exemplar-based models assume that faces are represented via their similarity to exemplars of previously experienced faces, while norm-based models assume that faces are represented with respect to their deviation from an average face, or norm. Face identity aftereffects have been taken as compelling evidence in favor of a norm-based account over an exemplar-based account. After a relatively brief period of adaptation to an adaptor face, the perceived identity of a test face is shifted toward a face with attributes opposite to those of the adaptor, suggesting an explicit psychological representation of the norm. Surprisingly, despite near universal recognition that face identity aftereffects imply norm-based coding, there have been no published attempts to simulate the predictions of norm- and exemplar-based models in face adaptation paradigms. Here, we implemented and tested variations of norm and exemplar models. Contrary to common claims, our simulations revealed that both an exemplar-based model and a version of a two-pool norm-based model, but not a traditional norm-based model, predict face identity aftereffects following face adaptation.
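A minimal exemplar-based coding scheme of the kind contrasted here (a generic sketch, not the simulations of the study) represents each face as a point in a multidimensional face space and codes a probe by its summed similarity to stored exemplars, s(x, e) = exp(−c‖x − e‖); adaptation can then be approximated as a temporary down-weighting of exemplars close to the adaptor.

```python
import numpy as np

def exemplar_activation(probe, exemplars, weights, c=1.0):
    """Summed similarity of a probe face to stored exemplars (exponential-decay kernel)."""
    distances = np.linalg.norm(exemplars - probe, axis=1)
    return float(np.sum(weights * np.exp(-c * distances)))

rng = np.random.default_rng(4)
exemplars = rng.normal(size=(500, 10))        # stored faces in a 10-D face space
weights = np.ones(len(exemplars))

adaptor = rng.normal(size=10)
# Adaptation: exemplars similar to the adaptor are temporarily down-weighted.
similarity_to_adaptor = np.exp(-np.linalg.norm(exemplars - adaptor, axis=1))
weights_adapted = weights * (1.0 - 0.8 * similarity_to_adaptor)

test_face = 0.3 * adaptor                      # a face lying between the norm and the adaptor
print("pre-adaptation activation :", exemplar_activation(test_face, exemplars, weights))
print("post-adaptation activation:", exemplar_activation(test_face, exemplars, weights_adapted))
```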
Detection of stator winding faults in induction motors using three-phase current monitoring.
Sharifi, Rasool; Ebrahimi, Mohammad
2011-01-01
The objective of this paper is to propose a new method for the detection of inter-turn short circuits in the stator windings of induction motors. In previously reported methods, supply voltage unbalance was the major difficulty, and this was solved mostly based on the sequence-component impedance or current, which are difficult to implement. Other methods are essentially offline. The proposed method is based on motor current signature analysis and utilizes the three-phase current spectra to overcome the mentioned problem. Simulation results indicate that under healthy conditions the rotor slot harmonics have the same magnitude in the three phase currents, while under even a 1-turn (0.3%) short-circuit condition they differ from each other. Although the magnitude of these harmonics depends on the level of voltage unbalance, they have the same magnitude in all three phases under these conditions. Experiments performed under various load, fault, and supply voltage conditions validate the simulation results and demonstrate the effectiveness of the proposed technique. It is shown that the detection of slight resistive short circuits is possible without sensitivity to supply voltage unbalance. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.
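The detection principle lends itself to a simple generic illustration (synthetic signals, not the authors' code): compute the amplitude spectrum of each phase current and compare the magnitudes of a chosen rotor-slot harmonic across the three phases; a healthy machine gives nearly equal magnitudes, whereas an inter-turn short circuit makes them diverge.

```python
import numpy as np

def slot_harmonic_magnitudes(currents, fs, f_harmonic, bandwidth=2.0):
    """Peak spectral magnitude of each phase current near a target harmonic frequency."""
    n = currents.shape[1]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    band = (freqs > f_harmonic - bandwidth) & (freqs < f_harmonic + bandwidth)
    spectra = 2.0 / n * np.abs(np.fft.rfft(currents, axis=1))
    return spectra[:, band].max(axis=1)

# Synthetic three-phase currents: 50 Hz fundamental plus a rotor-slot harmonic
# whose amplitude is slightly unbalanced in one phase to mimic an inter-turn fault.
fs = 10_000.0
t = np.arange(0.0, 2.0, 1.0 / fs)
f_slot = 780.0                                      # illustrative slot-harmonic frequency
slot_amps = np.array([0.050, 0.050, 0.058])         # phase C anomalous
phase_shift = np.array([0.0, -2 * np.pi / 3, 2 * np.pi / 3])
currents = np.array([10.0 * np.sin(2 * np.pi * 50.0 * t + p) + a * np.sin(2 * np.pi * f_slot * t + p)
                     for p, a in zip(phase_shift, slot_amps)])

print("slot-harmonic magnitude per phase:", slot_harmonic_magnitudes(currents, fs, f_slot))
```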
NASA Astrophysics Data System (ADS)
Moszczyński, P.; Walczak, A.; Marciniak, P.
2016-12-01
In previously published articles in this series, we described and analysed self-organized light fibres inside a liquid crystalline (LC) cell containing a photosensitive polymer (PP) layer. We call such an asymmetric LC cell a hybrid LC cell. A light fibre arises along the path of a laser beam directed in the plane of the LC cell, i.e. parallel to the photosensitive layer. We observed an asymmetric response of the LC cell to the polarization of the external driving field. This was first observed for an AC field, which is why we decided to carry out a detailed study with a DC driving field in order to characterize the LC cell response step by step. The LC cell was properly prepared with an insulating layer and removal of residual ions. We show, by means of a physical model as well as numerical simulation, that the asymmetric LC response strongly depends on the junction barriers between the PP and LC layers. A new parametric model for the junction barrier at the PP/LC boundary is proposed. Such a model is particularly useful because proper data on the conductivity and band-structure charge carriers of the LC material are lacking.
Tavčar, Gregor; Katrašnik, Tomaž
2014-01-01
The parallel straight channel PEM fuel cell model presented in this paper extends the innovative hybrid 3D analytic-numerical (HAN) approach previously published by the authors with capabilities to address ternary diffusion systems and counter-flow configurations. The model's core principle is modelling species transport by obtaining a 2D analytic solution for the species concentration distribution in the plane perpendicular to the channel gas flow and coupling consecutive 2D solutions by means of a 1D numerical pipe-flow model. Electrochemical and other nonlinear phenomena are coupled to the species transport by a routine that uses derivative approximation with prediction-iteration, which is also the core of the counter-flow computation algorithm. A HAN model of a laboratory test fuel cell is presented and evaluated against a professional 3D CFD simulation tool, showing very good agreement between the results of the presented model and those of the CFD simulation. Furthermore, high-accuracy results are achieved at moderate computational times, owing to the semi-analytic nature of the approach and to the efficient computational coupling of electrochemical kinetics and species transport.
Not Just the Norm: Exemplar-Based Models also Predict Face Aftereffects
Ross, David A.; Deroche, Mickael; Palmeri, Thomas J.
2014-01-01
The face recognition literature has considered two competing accounts of how faces are represented within the visual system: Exemplar-based models assume that faces are represented via their similarity to exemplars of previously experienced faces, while norm-based models assume that faces are represented with respect to their deviation from an average face, or norm. Face identity aftereffects have been taken as compelling evidence in favor of a norm-based account over an exemplar-based account. After a relatively brief period of adaptation to an adaptor face, the perceived identity of a test face is shifted towards a face with opposite attributes to the adaptor, suggesting an explicit psychological representation of the norm. Surprisingly, despite near universal recognition that face identity aftereffects imply norm-based coding, there have been no published attempts to simulate the predictions of norm- and exemplar-based models in face adaptation paradigms. Here we implemented and tested variations of norm and exemplar models. Contrary to common claims, our simulations revealed that both an exemplar-based model and a version of a two-pool norm-based model, but not a traditional norm-based model, predict face identity aftereffects following face adaptation. PMID:23690282
First Human Testing of the Orion Atmosphere Revitalization Technology
NASA Technical Reports Server (NTRS)
Lin, Amy; Sweterlitsch, Jeffrey
2009-01-01
An amine-based carbon dioxide (CO2) and water vapor sorbent in pressure-swing regenerable beds has been developed by Hamilton Sundstrand and baselined for the Orion Atmosphere Revitalization System (ARS). In two previous years at this conference, reports were presented on extensive Johnson Space Center (JSC) testing of the technology in a representative environment with simulated human metabolic loads. The next step in developmental testing at JSC was to replace the simulated humans with real humans; this testing was conducted in the spring of 2008. This first instance of human testing of a new Orion ARS technology included several cases in a sealed Orion-equivalent free volume and three cases using emergency breathing masks connected directly to the ARS loop. Significant test results presented in this paper include comparisons between the standard metabolic rates for CO2 and water vapor production published in Orion requirements documents and real-world rate ranges observed with human test subjects. Also included are qualitative assessments of process flow rate and closed-loop pressure-cycling tolerability while using the emergency masks. Recommendations for modifications to the Orion ARS design and operation, based on the test results, conclude the paper.
First Human Testing of the Orion Atmosphere Revitalization Technology
NASA Technical Reports Server (NTRS)
Lin, Amy; Sweterlitsch, Jeffrey
2008-01-01
An amine-based carbon dioxide (CO2) and water vapor sorbent in pressure-swing regenerable beds has been developed by Hamilton Sundstrand and baselined for the Orion Atmosphere Revitalization System (ARS). In two previous years at this conference, reports were presented on extensive Johnson Space Center (JSC) testing of the technology in a representative environment with simulated human metabolic loads. The next step in developmental testing at JSC was to replace the simulated humans with real humans; this testing was conducted in the spring of 2008. This first instance of human testing of a new Orion ARS technology included several cases in a sealed Orion-equivalent free volume and three cases using emergency breathing masks connected directly to the ARS loop. Significant test results presented in this paper include comparisons between the standard metabolic rates for CO2 and water vapor production published in Orion requirements documents and real-world rate ranges observed with human test subjects. Also included are qualitative assessments of process flow rate and closed-loop pressure-cycling tolerability while using the emergency masks. Recommendations for modifications to the Orion ARS design and operation, based on the test results, conclude the paper.
Ligand binding and dynamics of the monomeric epidermal growth factor receptor ectodomain
Loeffler, Hannes H; Winn, Martyn D
2013-01-01
The ectodomain of the human epidermal growth factor receptor (hEGFR) controls input to several cell signalling networks via binding with extracellular growth factors. To gain insight into the dynamics and ligand binding of the ectodomain, the hEGFR monomer was subjected to molecular dynamics simulation. The monomer was found to be substantially more flexible than the ectodomain dimer studied previously. Simulations where the endogenous ligand EGF binds to either Subdomain I or Subdomain III, or where hEGFR is unbound, show significant differences in dynamics. The molecular mechanics Poisson–Boltzmann surface area method has been used to derive relative free energies of ligand binding, and we find that the ligand is capable of binding either subdomain with a slight preference for III. Alanine-scanning calculations for the effect of selected ligand mutants on binding reproduce the trends of affinity measurements. Taken together, these results emphasize the possible role of the ectodomain monomer in the initial step of ligand binding, and add details to the static picture obtained from crystal structures. Proteins 2013; 81:1931–1943. © 2013 The Authors. Proteins published by Wiley Periodicals, Inc. PMID:23760854
Antibiotic Resistome: Improving Detection and Quantification Accuracy for Comparative Metagenomics.
Elbehery, Ali H A; Aziz, Ramy K; Siam, Rania
2016-04-01
The unprecedented rise of life-threatening antibiotic resistance (AR), combined with the unparalleled advances in DNA sequencing of genomes and metagenomes, has pushed the need for in silico detection of the resistance potential of clinical and environmental metagenomic samples through the quantification of AR genes (i.e., genes conferring antibiotic resistance). Therefore, determining an optimal methodology to quantitatively and accurately assess AR genes in a given environment is pivotal. Here, we optimized and improved existing AR detection methodologies from metagenomic datasets to properly consider AR-generating mutations in antibiotic target genes. Through comparative metagenomic analysis of previously published AR gene abundance in three publicly available metagenomes, we illustrate how mutation-generated resistance genes are either falsely assigned or neglected, which alters the detection and quantitation of the antibiotic resistome. In addition, we inspected factors influencing the outcome of AR gene quantification using metagenome simulation experiments, and identified that genome size, AR gene length, the total number of metagenomic reads, and the sequencing platform had pronounced effects on the level of detected AR. In conclusion, our proposed improvements in the current methodologies for accurate AR detection and resistome assessment show reliable results when tested on real and simulated metagenomic datasets.
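A minimal sketch of the kind of length- and depth-normalized AR gene quantification used in such comparative resistome analyses is shown below; the gene names, lengths, and read counts are invented for illustration, and mutation-conferred resistance would additionally require variant-aware mapping, as the abstract notes.

```python
# Minimal sketch of length- and sequencing-depth-normalized AR gene abundance,
# in the spirit of the comparative quantification the abstract describes.
# Gene names, lengths and read counts below are made-up illustrations.

total_reads = 25_000_000            # total metagenomic reads in the sample (assumed)

ar_hits = {                         # gene -> (reads mapped, gene length in bp)
    "blaTEM": (420, 861),
    "tetM": (150, 1920),
    "gyrA_mut": (35, 2628),         # mutation-conferred resistance needs variant-aware mapping
}

def normalized_abundance(mapped_reads, gene_len_bp):
    """Reads per kb of gene per million total reads (RPKM-style normalization)."""
    return mapped_reads / (gene_len_bp / 1_000) / (total_reads / 1_000_000)

for gene, (n, length) in ar_hits.items():
    print(f"{gene}: {normalized_abundance(n, length):.2f} reads/kb/Mreads")
```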
Dose effect of the 33S(n,α)30Si reaction in BNCT using the new n_TOF-CERN data.
Sabaté-Gilarte, M; Praena, J; Porras, I; Quesada, J M
2017-09-23
33S is a stable isotope of sulphur which is being studied as a potential cooperative target for Boron Neutron Capture Therapy (BNCT) in accelerator-based neutron sources because of its large (n,α) cross section in the epithermal neutron energy range. Previous measurements resolved the resonances but gave a discrepant description of the lowest-lying and strongest one (at 13.5 keV). However, the evaluations of the major databases do not include resonances, except EAF-2010, which shows smaller values in this range than the experimental data. Furthermore, the lack of data below 10 keV down to thermal energy (25.3 meV) has motivated a new measurement at n_TOF at CERN in order to cover the whole energy range. The inclusion of this new 33S(n,α) cross section in Monte Carlo simulations provides a more accurate estimation of the deposited kerma rate in tissue due to the presence of 33S. Presenting the results of those simulations is the goal of this work. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
NASA Astrophysics Data System (ADS)
Samsonov, Andrey; Gordeev, Evgeny; Sergeev, Victor
2017-04-01
As recently suggested (e.g., Gordeev et al., 2015), the global magnetospheric configuration can be characterized by a set of key parameters, such as the magnetopause distance at the subsolar point and on the terminator plane, the magnetic field in the magnetotail lobe, the plasma sheet thermal pressure, the cross-polar-cap electric potential drop, and the total field-aligned current. For given solar wind conditions, the values of these parameters can be obtained from both empirical models and global MHD simulations. We validate the recently developed global MHD code SPSU-16 using the key magnetospheric parameters mentioned above. The code SPSU-16 can solve both the isotropic and the anisotropic MHD equations. In the anisotropic version, we use modified double-adiabatic equations in which T⊥/T∥ (the ratio of perpendicular to parallel thermal pressures) is bounded from above by the mirror and ion-cyclotron thresholds and from below by the firehose threshold. The validation results for the SPSU-16 code agree well with previously published results from other global codes. Some key parameters coincide in the isotropic and anisotropic MHD simulations, while others differ.
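The bounding of the anisotropy described above can be illustrated with a small sketch that clamps T⊥/T∥ between the standard firehose and mirror marginal-stability expressions; these textbook threshold forms are an assumption here and may differ from the fits actually used in SPSU-16.

```python
def bound_anisotropy(Tperp, Tpar, beta_par, beta_perp):
    """
    Clamp the temperature anisotropy T_perp/T_par between the standard
    firehose (lower) and mirror-mode (upper) marginal-stability bounds.
    The threshold forms are textbook expressions used here as an
    illustration; the SPSU-16 code may use modified fits.
    """
    ratio = Tperp / Tpar
    upper = 1.0 + 1.0 / beta_perp           # mirror-mode marginal stability
    lower = max(1.0 - 2.0 / beta_par, 0.0)  # firehose marginal stability
    return min(max(ratio, lower), upper)

# Example: a strongly anisotropic plasma gets pulled back to the mirror threshold
print(bound_anisotropy(Tperp=6.0, Tpar=1.0, beta_par=2.0, beta_perp=2.0))  # -> 1.5
```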
Detached Eddy Simulation of Flap Side-Edge Flow
NASA Technical Reports Server (NTRS)
Balakrishnan, Shankar K.; Shariff, Karim R.
2016-01-01
Detached Eddy Simulation (DES) of flap side-edge flow was performed with a wing and half-span flap configuration used in previous experimental and numerical studies. The focus of the study is the unsteady flow features responsible for the production of far-field noise. The simulation was performed at a Reynolds number (based on the main wing chord) of 3.7 million. Reynolds-Averaged Navier-Stokes (RANS) simulations were performed as a precursor to the DES. The results of these precursor simulations match previous experimental and RANS results closely. Although the present DES has not yet reached statistical stationarity, some unsteady features of the developing flap side-edge flowfield are presented. In the final paper, statistically stationary results are expected to be presented, including comparisons of surface pressure spectra with experimental data.
Siallagan, Dominik; Loke, Yue-Hin; Olivieri, Laura; Opfermann, Justin; Ong, Chin Siang; de Zélicourt, Diane; Petrou, Anastasios; Daners, Marianne Schmid; Kurtcuoglu, Vartan; Meboldt, Mirko; Nelson, Kevin; Vricella, Luca; Johnson, Jed; Hibino, Narutoshi; Krieger, Axel
2018-04-01
Despite advances in the Fontan procedure, there is an unmet clinical need for patient-specific graft designs that are optimized for variations in patient anatomy. The objective of this study is to design and produce patient-specific Fontan geometries, with the goal of improving hepatic flow distribution (HFD) and reducing power loss (Ploss), and manufacturing these designs by electrospinning. Cardiac magnetic resonance imaging data from patients who previously underwent a Fontan procedure (n = 2) was used to create 3-dimensional models of their native Fontan geometry using standard image segmentation and geometry reconstruction software. For each patient, alternative designs were explored in silico, including tube-shaped and bifurcated conduits, and their performance in terms of Ploss and HFD probed by computational fluid dynamic (CFD) simulations. The best-performing options were then fabricated using electrospinning. CFD simulations showed that the bifurcated conduit improved HFD between the left and right pulmonary arteries, whereas both types of conduits reduced Ploss. In vitro testing with a flow-loop chamber supported the CFD results. The proposed designs were then successfully electrospun into tissue-engineered vascular grafts. Our unique virtual cardiac surgery approach has the potential to improve the quality of surgery by manufacturing, before surgery, patient-specific designs that are optimized for balanced HFD and minimal Ploss, based on refinement of commercially available options for image segmentation, computer-aided design, and flow simulations. Copyright © 2017 The American Association for Thoracic Surgery. Published by Elsevier Inc. All rights reserved.
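The power-loss metric in such studies is commonly evaluated as the net flux of total pressure through the conduit boundaries; the sketch below uses that common definition with invented boundary values and is not the authors' post-processing pipeline.

```python
# Hedged sketch of the total-pressure power-loss metric commonly reported in
# Fontan CFD studies: P_loss = sum_inlets (p + 0.5*rho*v^2)*Q - sum_outlets (...).
# All values below are illustrative, not patient data.

rho = 1060.0  # blood density, kg/m^3

def energy_flux(p_pa, v_ms, q_m3s):
    """Total-pressure energy flux (W) through a boundary with static pressure p,
    mean velocity v, and volumetric flow rate Q."""
    return (p_pa + 0.5 * rho * v_ms**2) * q_m3s

inlets = [energy_flux(1500.0, 0.15, 2.0e-5),   # IVC (illustrative values)
          energy_flux(1450.0, 0.20, 1.2e-5)]   # SVC (illustrative values)
outlets = [energy_flux(1300.0, 0.18, 1.7e-5),  # LPA
           energy_flux(1280.0, 0.17, 1.5e-5)]  # RPA

p_loss = sum(inlets) - sum(outlets)
print(f"P_loss = {p_loss * 1e3:.3f} mW")
```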
Integrating animal movement with habitat suitability for estimating dynamic landscape connectivity
van Toor, Mariëlle L.; Kranstauber, Bart; Newman, Scott H.; Prosser, Diann J.; Takekawa, John Y.; Technitis, Georgios; Weibel, Robert; Wikelski, Martin; Safi, Kamran
2018-01-01
Context: High-resolution animal movement data are becoming increasingly available, yet having a multitude of empirical trajectories alone does not allow us to easily predict animal movement. To answer ecological and evolutionary questions at a population level, quantitative estimates of a species' potential to link patches or populations are of importance. Objectives: We introduce an approach that combines movement-informed simulated trajectories with an environment-informed estimate of the trajectories' plausibility to derive connectivity. Using the example of bar-headed geese we estimated migratory connectivity at a landscape level throughout the annual cycle in their native range. Methods: We used tracking data of bar-headed geese to develop a multi-state movement model and to estimate temporally explicit habitat suitability within the species' range. We simulated migratory movements between range fragments, and calculated a measure we called route viability. The results are compared to expectations derived from published literature. Results: Simulated migrations matched empirical trajectories in key characteristics such as stopover duration. The viability of the simulated trajectories was similar to that of the empirical trajectories. We found that, overall, the migratory connectivity was higher within the breeding than in wintering areas, corroborating previous findings for this species. Conclusions: We show how empirical tracking data and environmental information can be fused for meaningful predictions of animal movements throughout the year and even outside the spatial range of the available data. Beyond predicting migratory connectivity, our framework will prove useful for modelling ecological processes facilitated by animal movement, such as seed dispersal or disease ecology.
Inspiratory and expiratory aerosol deposition in the upper airway.
Verbanck, S; Kalsi, H S; Biddiscombe, M F; Agnihotri, V; Belkassem, B; Lacor, C; Usmani, O S
2011-02-01
Aerosol deposition efficiency (DE) in the extrathoracic airways during mouth breathing is currently documented only for the inspiratory phase of respiration, and there is a need for quantification of expiratory DE. Our aim was to study both inspiratory and expiratory DE in a realistic upper airway geometry. This was done experimentally on a physical upper airway cast by scintigraphy, and numerically by computational fluid dynamic simulations using a Reynolds-Averaged Navier-Stokes (RANS) method with a k-ω SST turbulence model coupled with a stochastic Lagrangian approach. Experiments and simulations were carried out for particle sizes (3 and 6 μm) and flow rates (30 and 60 L/min) spanning the ranges of Stokes (Stk) and Reynolds (Re) number pertinent to therapeutic and environmental aerosols. We showed that inspiratory total deposition data obtained by scintigraphy fell onto a previously published deposition curve representative of a range of upper airway geometries. We also found that expiratory and inspiratory DE curves were almost identical. Finally, DE in different compartments of the upper airway model showed a very different distribution pattern of aerosol deposition during inspiration and expiration, with preferential deposition in the oral and pharyngeal compartments, respectively. These compartmental deposition patterns were very consistent and only slightly dependent on particle size or flow rate. Total deposition for inspiration and expiration was reasonably well mimicked by the RANS simulation method we employed, and more convincingly so in the upper range of the Stk and Re numbers. However, compartmental deposition patterns showed discrepancies between experiments and RANS simulations, particularly during expiration.
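For orientation, the Stokes and Reynolds numbers quoted above can be estimated from the stated particle sizes and flow rates; in the hedged sketch below the characteristic airway diameter, particle density, and air properties are assumptions chosen only for illustration.

```python
import math

# Hedged sketch: Stokes and Reynolds numbers for the particle sizes and flow
# rates quoted in the abstract. The characteristic upper-airway diameter and
# fluid/particle properties are assumptions for illustration only.

rho_air, mu_air = 1.2, 1.8e-5           # kg/m^3, Pa*s
rho_p = 1000.0                          # unit-density particle assumption, kg/m^3
D = 0.02                                # characteristic airway diameter, m (assumed)

def stk_re(d_particle_um, flow_lpm):
    q = flow_lpm / 1000.0 / 60.0        # volumetric flow, m^3/s
    u = q / (math.pi * D**2 / 4.0)      # mean velocity, m/s
    d_p = d_particle_um * 1e-6
    stk = rho_p * d_p**2 * u / (18.0 * mu_air * D)   # Stokes number
    re = rho_air * u * D / mu_air                    # Reynolds number
    return stk, re

for d_um in (3, 6):
    for q_lpm in (30, 60):
        stk, re = stk_re(d_um, q_lpm)
        print(f"d={d_um} um, Q={q_lpm} L/min -> Stk={stk:.4f}, Re={re:.0f}")
```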
Comparison of numerical and experimental results of the flow in the U9 Kaplan turbine model
NASA Astrophysics Data System (ADS)
Petit, O.; Mulu, B.; Nilsson, H.; Cervantes, M.
2010-08-01
The present work compares simulations made using the OpenFOAM CFD code with experimental measurements of the flow in the U9 Kaplan turbine model. Comparisons of the velocity profiles in the spiral casing and in the draft tube are presented. The U9 Kaplan turbine prototype located in Porjus and its model, located in Älvkarleby, Sweden, have curved inlet pipes that lead the flow to the spiral casing. At present, this curved pipe and its effect on the flow in the turbine are not taken into account when numerical simulations are performed at the design stage. To study the impact of the inlet pipe curvature on the flow in the turbine, and to get a better overview of the flow of the whole system, measurements were made on the 1:3.1 model of the U9 turbine. Previously published measurements were taken at the inlet of the spiral casing and just before the guide vanes, using the laser Doppler anemometry (LDA) technique. In the draft tube, a number of velocity profiles were measured using the LDA technique. The present work extends the experimental investigation with a horizontal section at the inlet of the draft tube. The experimental results are used to specify the inlet boundary condition for the numerical simulations in the draft tube, and to validate the computational results in both the spiral casing and the draft tube. The numerical simulations were performed using the standard k-ε model and a block-structured hexahedral wall-function mesh.
NASA Astrophysics Data System (ADS)
Levashov, V. A.
2014-11-01
In order to gain insight into the connection between the vibrational dynamics and the atomic-level Green-Kubo stress correlation function in liquids, we consider this connection in a model crystal instead. Of course, vibrational dynamics in liquids and crystals are quite different, and it is not expected that the results obtained on a model crystal should be valid for liquids. However, these considerations provide a benchmark to which the results of the previous molecular dynamics simulations can be compared. Thus, assuming that vibrations are plane waves, we derive analytical expressions for the atomic-level stress correlation functions in the classical limit and analyze them. These results provide, in particular, a recipe for analysis of the atomic-level stress correlation functions in Fourier space and extraction of the wave-vector- and frequency-dependent information. We also evaluate the energies of the atomic-level stresses. The energies obtained are significantly smaller than the energies previously determined in molecular dynamics simulations of several model liquids. This result suggests that the average energies of the atomic-level stresses in liquids and glasses are largely determined by the structural disorder. We discuss this result in the context of equipartition of the atomic-level stress energies. Analysis of the previously published data suggests that it is possible to speak about configurational and vibrational contributions to the average energies of the atomic-level stresses in a glass state. However, this separation in a liquid state is problematic. We also introduce and briefly consider the atomic-level transverse current correlation function. Finally, we address the broadening of the peaks in the pair distribution function with increasing distance. We find that the peaks' broadening (by ≈40%) occurs due to the transverse vibrational modes, while the contribution from the longitudinal modes does not change with distance.
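The atomic-level Green-Kubo analysis rests on stress autocorrelation functions; a minimal sketch of computing such a time-averaged autocorrelation from a stress time series is given below, with a synthetic AR(1) signal standing in for MD output.

```python
import numpy as np

# Minimal sketch: the time-averaged autocorrelation of a shear-stress time
# series, the basic ingredient of Green-Kubo-type analyses of atomic-level
# stresses. A synthetic AR(1) series stands in for MD output.

rng = np.random.default_rng(0)
dt, n, tau = 0.01, 4096, 1.0
a = np.exp(-dt / tau)
noise = rng.standard_normal(n)
sigma = np.empty(n)
sigma[0] = noise[0]
for i in range(1, n):                       # stationary AR(1) "stress" signal
    sigma[i] = a * sigma[i - 1] + np.sqrt(1 - a**2) * noise[i]

def autocorr(x, max_lag):
    """Time-averaged <x(0) x(t)> for lags 0 .. max_lag-1."""
    x = x - x.mean()
    return np.array([np.mean(x[:len(x) - lag] * x[lag:]) for lag in range(max_lag)])

acf = autocorr(sigma, max_lag=400)
acf /= acf[0]                               # normalized C(t)/C(0); decays roughly as exp(-t/tau)
print(acf[[0, 100, 200]])                   # values at lags 0, 1.0 and 2.0 time units
```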
Modeling Wettability Alteration using Chemical EOR Processes in Naturally Fractured Reservoirs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mojdeh Delshad; Gary A. Pope; Kamy Sepehrnoori
2007-09-30
The objective of our research is to develop a mechanistic simulation tool by adapting UTCHEM to model the wettability alteration in both conventional and naturally fractured reservoirs. This will be a unique simulator that can model surfactant floods in naturally fractured reservoirs with coupling of wettability effects on relative permeabilities, capillary pressure, and capillary desaturation curves. The capability of wettability alteration will help us and others to better understand and predict the oil recovery mechanisms as a function of wettability in naturally fractured reservoirs. The lack of a reliable simulator for wettability alteration means that either the concept that has already been proven to be effective at the laboratory scale may never be applied commercially to increase oil production, or the process must be tested in the field by trial and error and at large expense in time and money. The objective of Task 1 is to perform a literature survey to compile published data on relative permeability, capillary pressure, dispersion, interfacial tension, and capillary desaturation curves as a function of wettability to aid in the development of petrophysical property models as a function of wettability. The new models and correlations will be tested against published data. The models will then be implemented in the compositional chemical flooding reservoir simulator, UTCHEM. The objective of Task 2 is to understand the mechanisms and develop a correlation for the degree of wettability alteration based on published data. The objective of Task 3 is to validate the models and implementation against published data and to perform 3-D field-scale simulations to evaluate the impact of uncertainties in the fracture and matrix properties on surfactant, alkaline, and hot-water floods.
The use of genetic programming to develop a predictor of swash excursion on sandy beaches
NASA Astrophysics Data System (ADS)
Passarella, Marinella; Goldstein, Evan B.; De Muro, Sandro; Coco, Giovanni
2018-02-01
We use genetic programming (GP), a type of machine learning (ML) approach, to predict the total and infragravity swash excursion using previously published data sets that have been used extensively in swash prediction studies. Data from three further previously published studies, covering a range of new conditions, are added to extend the range of measured swash conditions. Using this newly compiled data set we demonstrate that a ML approach can reduce the prediction errors compared to well-established parameterizations and therefore may improve coastal hazard assessment (e.g. coastal inundation). Predictors obtained using GP can also be physically sound and replicate the functionality and dependencies of previously published formulas. Overall, we show that ML techniques are capable of both improving predictability (compared to classical regression approaches) and providing physical insight into coastal processes.
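A hedged sketch of GP-based symbolic regression for swash prediction is given below, assuming the open-source gplearn package; the training data are synthetic (a Stockdon-like dependence plus noise) rather than the compiled field data sets, and the hyperparameters are illustrative.

```python
import numpy as np
from gplearn.genetic import SymbolicRegressor  # assumes the gplearn package is installed

# Hedged sketch of genetic-programming symbolic regression for swash excursion.
# The data are synthetic (a Stockdon-like dependence plus noise), not the
# field data sets compiled in the paper.

rng = np.random.default_rng(42)
n = 500
H = rng.uniform(0.5, 4.0, n)                 # offshore wave height (m)
T = rng.uniform(5.0, 15.0, n)                # peak period (s)
beta = rng.uniform(0.01, 0.1, n)             # beach slope
L0 = 9.81 * T**2 / (2 * np.pi)               # deep-water wavelength
S = 0.046 * np.sqrt(H * L0) + 0.3 * beta * np.sqrt(H * L0) + 0.05 * rng.standard_normal(n)

X = np.column_stack([H, T, beta])

est = SymbolicRegressor(population_size=1000, generations=20,
                        function_set=('add', 'sub', 'mul', 'div', 'sqrt'),
                        parsimony_coefficient=0.001, random_state=0)
est.fit(X, S)
print(est._program)                          # evolved symbolic expression
print("RMSE:", np.sqrt(np.mean((est.predict(X) - S) ** 2)))
```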
Combining Simulation and Optimization Models for Hardwood Lumber Production
G.A. Mendoza; R.J. Meimban; W.G. Luppold; Philip A. Araman
1991-01-01
Published literature contains a number of optimization and simulation models dealing with the primary processing of hardwood and softwood logs. Simulation models have been developed primarily as descriptive models for characterizing the general operations and performance of a sawmill. Optimization models, on the other hand, were developed mainly as analytical tools for...
A Scoping Review: Conceptualizations and Pedagogical Models of Learning in Nursing Simulation
ERIC Educational Resources Information Center
Poikela, Paula; Teräs, Marianne
2015-01-01
Simulations have been implemented globally in nursing education for years with diverse conceptual foundations. The aim of this scoping review is to examine the literature regarding the conceptualizations of learning and pedagogical models in nursing simulations. A scoping review of peer-reviewed articles published between 2000 and 2013 was…
TH-E-18A-01: Developments in Monte Carlo Methods for Medical Imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Badal, A; Zbijewski, W; Bolch, W
Monte Carlo simulation methods are widely used in medical physics research and are starting to be implemented in clinical applications such as radiation therapy planning systems. Monte Carlo simulations offer the capability to accurately estimate quantities of interest that are challenging to measure experimentally while taking into account the realistic anatomy of an individual patient. Traditionally, practical application of Monte Carlo simulation codes in diagnostic imaging was limited by the need for large computational resources or long execution times. However, recent advancements in high-performance computing hardware, combined with a new generation of Monte Carlo simulation algorithms and novel postprocessing methods, are allowing for the computation of relevant imaging parameters of interest such as patient organ doses and scatter-to-primary ratios in radiographic projections in just a few seconds using affordable computational resources. Programmable Graphics Processing Units (GPUs), for example, provide a convenient, affordable platform for parallelized Monte Carlo executions that yield simulation speeds on the order of 10⁷ x-rays/s. Even with GPU acceleration, however, Monte Carlo simulation times can be prohibitive for routine clinical practice. To reduce simulation times further, variance reduction techniques can be used to alter the probabilistic models underlying the x-ray tracking process, resulting in lower variance in the results without biasing the estimates. Other complementary strategies for further reductions in computation time are denoising of the Monte Carlo estimates and estimating (scoring) the quantity of interest at a sparse set of sampling locations (e.g. at a small number of detector pixels in a scatter simulation) followed by interpolation. Beyond reduction of the computational resources required for performing Monte Carlo simulations in medical imaging, the use of accurate representations of patient anatomy is crucial to the virtual generation of medical images and accurate estimation of radiation dose and other imaging parameters. For this, detailed computational phantoms of the patient anatomy must be utilized and implemented within the radiation transport code. Computational phantoms presently come in one of three format types, and in one of four morphometric categories. Format types include stylized (mathematical equation-based), voxel (segmented CT/MR images), and hybrid (NURBS and polygon mesh surfaces). Morphometric categories include reference (small library of phantoms by age at 50th height/weight percentile), patient-dependent (larger library of phantoms at various combinations of height/weight percentiles), patient-sculpted (phantoms altered to match the patient's unique outer body contour), and finally, patient-specific (an exact representation of the patient with respect to both body contour and internal anatomy). The existence and availability of these phantoms represents a very important advance for the simulation of realistic medical imaging applications using Monte Carlo methods. New Monte Carlo simulation codes need to be thoroughly validated before they can be used to perform novel research. Ideally, the validation process would involve comparison of results with those of an experimental measurement, but accurate replication of experimental conditions can be very challenging. It is very common to validate new Monte Carlo simulations by replicating previously published simulation results of similar experiments.
This process, however, is commonly problematic due to the lack of sufficient information in the published reports of previous work so as to be able to replicate the simulation in detail. To aid in this process, the AAPM Task Group 195 prepared a report in which six different imaging research experiments commonly performed using Monte Carlo simulations are described and their results provided. The simulation conditions of all six cases are provided in full detail, with all necessary data on material composition, source, geometry, scoring and other parameters provided. The results of these simulations when performed with the four most common publicly available Monte Carlo packages are also provided in tabular form. The Task Group 195 Report will be useful for researchers needing to validate their Monte Carlo work, and for trainees needing to learn Monte Carlo simulation methods. In this symposium we will review the recent advancements in high-performance computing hardware enabling the reduction in computational resources needed for Monte Carlo simulations in medical imaging. We will review variance reduction techniques commonly applied in Monte Carlo simulations of medical imaging systems and present implementation strategies for efficient combination of these techniques with GPU acceleration. Trade-offs involved in Monte Carlo acceleration by means of denoising and "sparse sampling" will be discussed. A method for rapid scatter correction in cone-beam CT (<5 min/scan) will be presented as an illustration of the simulation speeds achievable with optimized Monte Carlo simulations. We will also discuss the development, availability, and capability of the various combinations of computational phantoms for Monte Carlo simulation of medical imaging systems. Finally, we will review some examples of experimental validation of Monte Carlo simulations and will present the AAPM Task Group 195 Report. Learning Objectives: Describe the advances in hardware available for performing Monte Carlo simulations in high-performance computing environments. Explain variance reduction, denoising and sparse sampling techniques available for reduction of computational time needed for Monte Carlo simulations of medical imaging. List and compare the computational anthropomorphic phantoms currently available for more accurate assessment of medical imaging parameters in Monte Carlo simulations. Describe experimental methods used for validation of Monte Carlo simulations in medical imaging. Describe the AAPM Task Group 195 Report and its use for validation and teaching of Monte Carlo simulations in medical imaging.
Computing the total atmospheric refraction for real-time optical imaging sensor simulation
NASA Astrophysics Data System (ADS)
Olson, Richard F.
2015-05-01
Fast and accurate computation of light path deviation due to atmospheric refraction is an important requirement for real-time simulation of optical imaging sensor systems. A large body of existing literature covers various methods for application of Snell's Law to the light path ray tracing problem. This paper provides a discussion of the adaptation to real-time simulation of atmospheric refraction ray tracing techniques used in mid-1980s LOWTRAN releases. The refraction ray trace algorithm published in a LOWTRAN-6 technical report by Kneizys et al. has been coded in MATLAB for development, and in C-language for simulation use. To this published algorithm we have added tuning parameters for variable path segment lengths, and extensions for Earth-grazing and exoatmospheric "near Earth" ray paths. Model atmosphere properties used to exercise the refraction algorithm were obtained from tables published in another LOWTRAN-6 related report. The LOWTRAN-6-based refraction model is applicable to atmospheric propagation at wavelengths in the IR and visible bands of the electromagnetic spectrum. It has been used during the past two years by engineers at the U.S. Army Aviation and Missile Research, Development and Engineering Center (AMRDEC) in support of several advanced imaging sensor simulations. Recently, a faster (but sufficiently accurate) method using Gauss-Chebyshev quadrature integration for evaluating the refraction integral was adopted.
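A minimal sketch of Gauss-Chebyshev quadrature of the kind mentioned above is shown below; the integrand is a stand-in rather than the LOWTRAN refraction integrand.

```python
import math
import numpy as np

# Minimal sketch of Gauss-Chebyshev quadrature on [-1, 1], the type of rule
# that can speed up evaluation of a refraction integral. The integrand here is
# a stand-in, not the LOWTRAN refraction integrand.

def gauss_chebyshev(g, n):
    """Approximate the integral of g(x) over [-1, 1] with Gauss-Chebyshev nodes.
    The weight 1/sqrt(1 - x^2) built into the rule is cancelled explicitly."""
    k = np.arange(1, n + 1)
    x = np.cos((2 * k - 1) * np.pi / (2 * n))          # Chebyshev nodes
    return (np.pi / n) * np.sum(g(x) * np.sqrt(1.0 - x**2))

g = lambda x: np.exp(-x**2)                            # stand-in integrand
print(gauss_chebyshev(g, 32))                          # quadrature estimate
print(math.sqrt(math.pi) * math.erf(1.0))              # exact reference value
```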
NASA Astrophysics Data System (ADS)
Herrick, Gregory Paul
The quest to accurately capture flow phenomena with length-scales both short and long and to accurately represent complex flow phenomena within disparately sized geometry inspires a need for an efficient, high-fidelity, multi-block structured computational fluid dynamics (CFD) parallel computational scheme. This research presents and demonstrates a more efficient computational method by which to perform multi-block structured CFD parallel simulations, thus facilitating higher-fidelity solutions of complicated geometries (due to the inclusion of grids for "small" flow areas which are often merely modeled) and their associated flows. This computational framework offers greater flexibility and user control in allocating the resource balance between process count and wall-clock computation time. The principal modifications implemented in this revision consist of a "multiple grid blocks per processing core" software infrastructure and an analytic computation of viscous flux Jacobians. The development of this scheme is largely motivated by the desire to simulate axial compressor stall inception with more complete gridding of the flow passages (including rotor tip clearance regions) than has been previously done while maintaining high computational efficiency (i.e., minimal consumption of computational resources), and thus this paradigm shall be demonstrated with an examination of instability in a transonic axial compressor. However, the paradigm presented herein facilitates CFD simulation of myriad previously impractical geometries and flows and is not limited to detailed analyses of axial compressor flows. While the simulations presented herein were technically possible under the previous structure of the subject software, they were much less computationally efficient and thus not pragmatically feasible; the previous research using this software to perform three-dimensional, full-annulus, time-accurate, unsteady, full-stage (with sliding-interface) simulations of rotating stall inception in axial compressors utilized tip clearance periodic models, while the scheme here is demonstrated by a simulation of axial compressor stall inception utilizing gridded rotor tip clearance regions. As will be discussed, much previous research (experimental, theoretical, and computational) has suggested that understanding clearance flow behavior is critical to understanding stall inception, and previous computational research efforts which have used tip clearance models have raised the question, "What about the clearance flows?". This research begins to address that question.
Khan, Md Nabiul Islam; Hijbeek, Renske; Berger, Uta; Koedam, Nico; Grueters, Uwe; Islam, S M Zahirul; Hasan, Md Asadul; Dahdouh-Guebas, Farid
2016-01-01
In the Point-Centred Quarter Method (PCQM), the mean distance of the first nearest plants in each quadrant of a number of random sample points is converted to plant density. It is a quick method for plant density estimation. In recent publications the estimator equations of simple PCQM (PCQM1) and higher order ones (PCQM2 and PCQM3, which use the distance of the second and third nearest plants, respectively) show discrepancies. This study attempts to review PCQM estimators in order to find the most accurate equation form. We tested the accuracy of different PCQM equations using Monte Carlo simulations in simulated (having 'random', 'aggregated' and 'regular' spatial patterns) plant populations and empirical ones. PCQM requires at least 50 sample points to ensure a desired level of accuracy. PCQM with a corrected estimator is more accurate than with a previously published estimator. The published PCQM versions (PCQM1, PCQM2 and PCQM3) show significant differences in accuracy of density estimation, i.e. the higher order PCQM provides higher accuracy. However, the corrected PCQM versions show no significant differences among them as tested in various spatial patterns except in plant assemblages with a strong repulsion (plant competition). If N is the number of sample points and R is the distance, the corrected estimator of PCQM1 is 4(4N - 1)/(π ∑ R²) but not 12N/(π ∑ R²), of PCQM2 is 4(8N - 1)/(π ∑ R²) but not 28N/(π ∑ R²), and of PCQM3 is 4(12N - 1)/(π ∑ R²) but not 44N/(π ∑ R²) as published. If the spatial pattern of a plant association is random, PCQM1 with a corrected equation estimator and over 50 sample points would be sufficient to provide accurate density estimation. PCQM using just the nearest tree in each quadrant is therefore sufficient, which facilitates sampling of trees, particularly in areas with just a few hundred trees per hectare. PCQM3 provides the best density estimations for all types of plant assemblages including the repulsion process. Since in practice the spatial pattern of a plant association remains unknown before starting a vegetation survey, for field applications the use of PCQM3 along with the corrected estimator is recommended. However, for sparse plant populations, where the use of PCQM3 may pose practical limitations, PCQM2 or PCQM1 would be applied. During application of PCQM in the field, care should be taken to summarize the distance data based on 'the inverse summation of squared distances' and not 'the summation of inverse squared distances', as erroneously published.
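The corrected estimators quoted above translate directly into code; the sketch below implements density = 4(4kN - 1)/(π ∑R²) for PCQM order k = 1, 2, 3, with made-up distance data.

```python
import math
import random

# Sketch of the corrected PCQM density estimators quoted in the abstract:
# density = 4*(4*k*N - 1) / (pi * sum(R^2)), with k = 1, 2, 3 for PCQM1-3,
# where N is the number of sample points and R the distances to the k-th
# nearest plant in each of the four quadrants. Distances below are made up.

def pcqm_density(distances, n_points, order):
    """Corrected PCQM estimator (plants per unit area)."""
    coef = 4 * order * n_points - 1        # 4N-1, 8N-1 or 12N-1
    return 4.0 * coef / (math.pi * sum(r * r for r in distances))

# Example: 50 sample points, PCQM1 -> 4 distances per point = 200 distances
random.seed(1)
N = 50
dists = [random.uniform(0.5, 3.0) for _ in range(4 * N)]  # metres, illustrative
print(f"PCQM1 density ~ {pcqm_density(dists, N, order=1):.3f} plants per m^2")
```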
Preliminary Multivariable Cost Model for Space Telescopes
NASA Technical Reports Server (NTRS)
Stahl, H. Philip
2010-01-01
Parametric cost models are routinely used to plan missions, compare concepts, and justify technology investments. Previously, the authors published two single-variable cost models based on 19 flight missions. The current paper presents the development of a multi-variable space telescope cost model. The validity of the previously published models is tested, cost estimating relationships that are and are not significant cost drivers are identified, and interrelationships between variables are explored.
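As an illustration of the general form such parametric models take (not the authors' actual cost estimating relationships), the sketch below fits a power-law cost model by least squares in log space to synthetic placeholder data.

```python
import numpy as np

# Hedged sketch: fitting a multi-variable parametric cost model of power-law
# form  cost = a * aperture^b1 * mass^b2  by ordinary least squares in log
# space. The data below are synthetic placeholders, not the flight-mission
# data used in the paper.

rng = np.random.default_rng(7)
n = 19
aperture = rng.uniform(0.3, 3.0, n)            # m
mass = rng.uniform(200, 4000, n)               # kg
cost = 120 * aperture**1.6 * mass**0.4 * rng.lognormal(0, 0.2, n)  # $M, synthetic

X = np.column_stack([np.ones(n), np.log(aperture), np.log(mass)])
coef, *_ = np.linalg.lstsq(X, np.log(cost), rcond=None)
a, b1, b2 = np.exp(coef[0]), coef[1], coef[2]
print(f"cost ~ {a:.1f} * aperture^{b1:.2f} * mass^{b2:.2f}")
```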
NASA Astrophysics Data System (ADS)
Coe, M. T.; Costa, M. H.; Howard, E. A.
2006-12-01
In this paper we analyze the hydrology of the Amazon River system for the latter half of the 20th century with our recently completed model of terrestrial hydrology (Terrestrial Hydrology Model with Biogeochemistry, THMB). We evaluate the simulated hydrology of the Central Amazon basin against limited observations of river discharge, floodplain inundation, and water height, and analyze the spatial and temporal variability of the hydrology for the period 1939-1998. We compare the simulated discharge and floodplain inundated area to the simulations by Coe et al. (2002), which used a previous version of this model. The new model simulates the discharge and flooded area in better agreement with the observations than the previous model. The coefficient of correlation between the simulated and observed discharge for the more than 27,000 monthly observations of discharge at 120 sites throughout the Brazilian Amazon is 0.9874, compared to 0.9744 for the previous model. The coefficient of correlation between the simulated monthly flooded area and the satellite-based estimates by Sippel et al. (1998) exceeds 0.7 for 8 of the 12 mainstem reaches. The seasonal and inter-annual variability of the water height and the river slope compare favorably to the satellite altimetric measurements of height reported by Birkett et al. (2002).
Biologically-inspired hexapod robot design and simulation
NASA Technical Reports Server (NTRS)
Espenschied, Kenneth S.; Quinn, Roger D.
1994-01-01
The design and construction of a biologically-inspired hexapod robot is presented. A previously developed simulation is modified to include models of the DC drive motors, the motor driver circuits, and their transmissions. The application of this simulation to the design and development of the robot is discussed. The mechanisms thought to be responsible for the leg coordination of the walking stick insect were previously applied to control the straight-line locomotion of a robot. We generalized these rules for a robot walking on a plane. This biologically-inspired control strategy is used to control the robot in simulation. Numerical results show that the general body motion and performance of the simulated robot are similar to those of the physical robot, based on our preliminary experimental results.
Kinetic modelling of a diesel-polluted clayey soil bioremediation process.
Fernández, Engracia Lacasa; Merlo, Elena Moliterni; Mayor, Lourdes Rodríguez; Camacho, José Villaseñor
2016-07-01
A mathematical model is proposed to describe a diesel-polluted clayey soil bioremediation process. The reaction system under study was considered a completely mixed closed batch reactor, which initially brought into contact a soil matrix polluted with diesel hydrocarbons, an aqueous specific culture medium, and a microbial inoculum. The model coupled the mass transfer phenomena and the distribution of hydrocarbons among four phases (solid, S; water, A; non-aqueous liquid, NAPL; and air, V) with Monod kinetics. In the first step, the model simulating abiotic conditions was used to estimate only the mass transfer coefficients. In the second step, the model including both mass transfer and biodegradation phenomena was used to estimate the biological kinetic and stoichiometric parameters. In both cases, the model predictions were validated with experimental data from previous research by the same authors. A good fit between the model predictions and the experimental data was observed, as the modelled curves captured the major trends of the diesel distribution in each phase. The model parameters were compared with values previously reported in the literature. Pearson correlation coefficients were used to show the reproducibility level of the model. Copyright © 2016. Published by Elsevier B.V.
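The Monod component of such a model can be sketched as a small ODE system; the example below covers only the biodegradation kinetics in a single well-mixed phase, with illustrative parameter values, and omits the four-phase mass-transfer coupling.

```python
from scipy.integrate import solve_ivp

# Minimal sketch of the Monod-kinetics component of such a model: substrate
# (hydrocarbon) consumption coupled to biomass growth in a well-mixed batch.
# Parameter values are illustrative assumptions, not the fitted values.

mu_max, Ks, Y, kd = 0.15, 200.0, 0.4, 0.01   # 1/d, mg/L, gX/gS, 1/d

def monod_batch(t, y):
    S, X = y                                  # substrate (mg/L), biomass (mg/L)
    mu = mu_max * S / (Ks + S)                # Monod specific growth rate
    dS = -mu * X / Y                          # substrate consumed for growth
    dX = (mu - kd) * X                        # growth minus decay
    return [dS, dX]

sol = solve_ivp(monod_batch, (0.0, 60.0), y0=[1500.0, 50.0])
print("final substrate:", sol.y[0, -1], "final biomass:", sol.y[1, -1])
```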
Shipborne LF-VLF oceanic lightning observations and modeling
NASA Astrophysics Data System (ADS)
Zoghzoghy, F. G.; Cohen, M. B.; Said, R. K.; Lehtinen, N. G.; Inan, U. S.
2015-10-01
Approximately 90% of natural lightning occurs over land, but recent observations, using Global Lightning Detection (GLD360) geolocation peak current estimates and satellite optical data, suggested that cloud-to-ground flashes are on average stronger over the ocean. We present initial statistics from a novel experiment using a Low Frequency (LF) magnetic field receiver system installed aboard the National Oceanic and Atmospheric Administration (NOAA) Ronald H. Brown research vessel that allowed the detection of impulsive radio emissions from deep-oceanic discharges at short distances. Thousands of LF waveforms were recorded, facilitating the comparison of oceanic waveforms to their land counterparts. A computationally efficient electromagnetic radiation model that accounts for propagation over lossy and curved ground is constructed and compared with previously published models. We include the effects of Earth curvature on LF ground wave propagation and quantify the effects of channel-base current risetime, channel-base current falltime, and return stroke speed on the radiated LF waveforms observed at a given distance. We compare simulation results to data and conclude that previously reported larger GLD360 peak current estimates over the ocean are unlikely to fully result from differences in channel-base current risetime, falltime, or return stroke speed between ocean and land flashes.
Sela, Itamar; Ashkenazy, Haim; Katoh, Kazutaka; Pupko, Tal
2015-07-01
Inference of multiple sequence alignments (MSAs) is a critical part of phylogenetic and comparative genomics studies. However, from the same set of sequences different MSAs are often inferred, depending on the methodologies used and the assumed parameters. Much effort has recently been devoted to improving the ability to identify unreliable alignment regions. Detecting such unreliable regions was previously shown to be important for downstream analyses relying on MSAs, such as the detection of positive selection. Here we developed GUIDANCE2, a new integrative methodology that accounts for: (i) uncertainty in the process of indel formation, (ii) uncertainty in the assumed guide tree and (iii) co-optimal solutions in the pairwise alignments, used as building blocks in progressive alignment algorithms. We compared GUIDANCE2 with seven methodologies to detect unreliable MSA regions using extensive simulations and empirical benchmarks. We show that GUIDANCE2 outperforms all previously developed methodologies. Furthermore, GUIDANCE2 also provides a set of alternative MSAs which can be useful for downstream analyses. The novel algorithm is implemented as a web-server, available at: http://guidance.tau.ac.il. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
Impact of doping on the density of states and the mobility in organic semiconductors
NASA Astrophysics Data System (ADS)
Zuo, Guangzheng; Abdalla, Hassan; Kemerink, Martijn
2016-06-01
We experimentally investigated the conductivity and mobility of poly(3-hexylthiophene) (P3HT) doped with tetrafluorotetracyanoquinodimethane (F4TCNQ) for relative doping concentrations ranging from ultralow (10⁻⁵) to high (10⁻¹) and various active layer thicknesses. Although the measured conductivity increases monotonically with increasing doping concentration, the mobilities decrease, in agreement with previously published work. Additionally, we developed a simple yet quantitative model to rationalize the results on the basis of a modification of the density of states (DOS) by the Coulomb potentials of ionized dopants. The DOS was integrated in a three-dimensional (3D) hopping formalism in which parameters such as energetic disorder, intersite distance, energy level difference, and temperature were varied. We compared predictions of our model as well as those of a previously developed model to kinetic Monte Carlo (MC) modeling and found that only the former model accurately reproduces the mobility of MC modeling in a large part of the parameter space. Importantly, both our model and MC simulations are in good agreement with experiments; the crucial ingredient to both is the formation of a deep trap tail in the Gaussian DOS with increasing doping concentration.
Kolko, Rachel P; Kass, Andrea E; Hayes, Jacqueline F; Levine, Michele D; Garbutt, Jane M; Proctor, Enola K; Wilfley, Denise E
This randomized pilot trial evaluated two training modalities for first-line, evidence-based pediatric obesity services (screening and goal setting) among nursing students. Participants (N = 63) were randomized to live interactive training or Web-facilitated self-study training. Pretraining, post-training, and 1-month follow-up assessments evaluated training feasibility, acceptability, and impact (knowledge and skill via simulation). Moderator (previous experience) and predictor (content engagement) analyses were conducted. Nearly all participants (98%) completed assessments. Both types of training were acceptable, with higher ratings for live training and participants with previous experience (ps < .05). Knowledge and skill improved from pretraining to post-training and follow-up in both conditions (ps < .001). Live training demonstrated greater content engagement (p < .01). The training package was feasible, acceptable, and efficacious among nursing students. Given that live training had higher acceptability and engagement and online training offers greater scalability, integrating interactive live training components within Web-based training may optimize outcomes, which may enhance practitioners' delivery of pediatric obesity services. Copyright © 2016 National Association of Pediatric Nurse Practitioners. Published by Elsevier Inc. All rights reserved.
Continuous high-solids corn liquefaction and fermentation with stripping of ethanol.
Taylor, Frank; Marquez, Marco A; Johnston, David B; Goldberg, Neil M; Hicks, Kevin B
2010-06-01
Removal of ethanol from the fermentor during fermentation can increase productivity and reduce the costs for dewatering the product and coproduct. One approach is to recycle the fermentor contents through a stripping column, where a non-condensable gas removes ethanol to a condenser. Previous research showed that this approach is feasible. Savings of $0.03 per gallon were predicted at 34% corn dry solids, with greater savings predicted at higher concentrations. The feasibility has now been demonstrated at over 40% corn dry solids, using a continuous corn liquefaction system. A pilot plant that continuously fed corn meal at more than one bushel (25 kg) per day was operated for 60 consecutive days, continuously converting 95% of starch and producing 88% of the maximum theoretical yield of ethanol. A computer simulation was used to analyze the results. The fermentation and stripping systems were not significantly affected when the CO2 stripping gas was partially replaced by nitrogen or air, potentially lowering costs associated with the gas recycle loop. It was concluded that previous estimates of potential cost savings are still valid. (c) 2010. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Hudson, Brian D.; George, Ashley R.; Ford, Martyn G.; Livingstone, David J.
1992-04-01
Molecular dynamics simulations have been performed on a number of conformationally flexible pyrethroid insecticides. The results indicate that molecular dynamics is a suitable tool for conformational searching of small molecules given suitable simulation parameters. The structures derived from the simulations are compared with the static conformation used in a previous study. Various physicochemical parameters have been calculated for a set of conformations selected from the simulations using multivariate analysis. The averaged values of the parameters over the selected set (and the factors derived from them) are compared with the single conformation values used in the previous study.
NASA Astrophysics Data System (ADS)
Tomita, Motohiro; Ogasawara, Masataka; Terada, Takuya; Watanabe, Takanobu
2018-04-01
We provide the parameters of Stillinger-Weber potentials for GeSiSn ternary mixed systems. These parameters can be used in molecular dynamics (MD) simulations to reproduce phonon properties and thermal conductivities. The phonon dispersion relation is derived from the dynamical structure factor, which is calculated by the space-time Fourier transform of atomic trajectories in an MD simulation. The phonon properties and thermal conductivities of GeSiSn ternary crystals calculated using these parameters mostly reproduce both the findings of previous experiments and earlier calculations made using MD simulations. The atomic-composition dependence of these properties in GeSiSn ternary crystals reported in previous experimental and theoretical studies is almost exactly reproduced by our proposed parameters. Moreover, the results of the MD simulation agree with previous calculations made using a time-independent phonon Boltzmann transport equation with complicated scattering mechanisms. These scattering mechanisms are very important in complicated nanostructures, and capturing them allows the heat-transfer properties to be calculated more accurately by MD simulations. This work enables us to predict the phonon- and heat-related properties of bulk group IV alloys, especially ternary alloys.
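The space-time Fourier transform route to the dynamical structure factor can be sketched compactly; in the example below a synthetic one-dimensional chain carrying a single plane-wave phonon stands in for real MD trajectories, and the recovered spectral peak sits near the imposed phonon frequency.

```python
import numpy as np

# Minimal sketch of extracting a dynamical structure factor S(k, omega) from
# atomic trajectories via a space-time Fourier transform. The trajectory below
# is a synthetic 1D chain carrying a single plane-wave phonon, standing in for
# real MD output.

n_atoms, n_steps, dt, a = 64, 2048, 0.01, 1.0     # atoms, steps, time step, lattice constant
x0 = np.arange(n_atoms) * a                       # equilibrium positions
k = 2 * np.pi / (n_atoms * a) * 8                 # an allowed wave vector
omega0 = 3.0                                      # phonon frequency of the synthetic mode

t = np.arange(n_steps) * dt
u = 0.05 * a * np.cos(k * x0[None, :] - omega0 * t[:, None])   # phonon displacements
r = x0[None, :] + u                               # trajectories, shape (n_steps, n_atoms)

rho_k = np.exp(-1j * k * r).sum(axis=1)           # density fluctuation at wave vector k
spec = np.abs(np.fft.fft(rho_k - rho_k.mean()))**2 / (n_atoms * n_steps)
omega = 2 * np.pi * np.fft.fftfreq(n_steps, d=dt)

print("peak at |omega| ~", abs(omega[np.argmax(spec)]))   # should sit near omega0
```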
Austin, Peter C
2016-12-30
Propensity score methods are used to reduce the effects of observed confounding when using observational data to estimate the effects of treatments or exposures. A popular method of using the propensity score is inverse probability of treatment weighting (IPTW). When using this method, a weight is calculated for each subject that is equal to the inverse of the probability of receiving the treatment that was actually received. These weights are then incorporated into the analyses to minimize the effects of observed confounding. Previous research has found that these methods result in unbiased estimation when estimating the effect of treatment on survival outcomes. However, conventional methods of variance estimation were shown to result in biased estimates of standard error. In this study, we conducted an extensive set of Monte Carlo simulations to examine different methods of variance estimation when using a weighted Cox proportional hazards model to estimate the effect of treatment. We considered three variance estimation methods: (i) a naïve model-based variance estimator; (ii) a robust sandwich-type variance estimator; and (iii) a bootstrap variance estimator. We considered estimation of both the average treatment effect and the average treatment effect in the treated. We found that the use of a bootstrap estimator resulted in approximately correct estimates of standard errors and confidence intervals with the correct coverage rates. The other estimators resulted in biased estimates of standard errors and confidence intervals with incorrect coverage rates. Our simulations were informed by a case study examining the effect of statin prescribing on mortality. © 2016 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd. © 2016 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
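A hedged sketch of the IPTW-plus-bootstrap workflow on simulated data is shown below; it assumes scikit-learn for the propensity model and that the lifelines CoxPHFitter accepts weights_col and robust arguments (true for recent versions), and it is not the authors' simulation code.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from lifelines import CoxPHFitter   # assumed available; recent versions accept weights_col

# Hedged sketch of IPTW with a weighted Cox model and a bootstrap standard
# error for the average treatment effect. The data are simulated here.

rng = np.random.default_rng(0)
n = 2000
x = rng.standard_normal((n, 2))                          # confounders
p_treat = 1 / (1 + np.exp(-(0.5 * x[:, 0] - 0.3 * x[:, 1])))
z = rng.binomial(1, p_treat)                             # treatment assignment
lam = 0.1 * np.exp(0.5 * x[:, 0] + 0.3 * x[:, 1] - 0.4 * z)
t_event = rng.exponential(1 / lam)
c = rng.exponential(10, n)                               # censoring times
df = pd.DataFrame({"time": np.minimum(t_event, c),
                   "event": (t_event <= c).astype(int),
                   "treat": z, "x1": x[:, 0], "x2": x[:, 1]})

def iptw_log_hr(d):
    ps = LogisticRegression().fit(d[["x1", "x2"]], d["treat"]).predict_proba(d[["x1", "x2"]])[:, 1]
    w = np.where(d["treat"] == 1, 1 / ps, 1 / (1 - ps))  # ATE weights
    dd = d.assign(w=w)[["time", "event", "treat", "w"]]
    cph = CoxPHFitter().fit(dd, duration_col="time", event_col="event",
                            weights_col="w", robust=True)
    return cph.params_["treat"]                          # log hazard ratio

est = iptw_log_hr(df)
boot = [iptw_log_hr(df.sample(n, replace=True).reset_index(drop=True)) for _ in range(200)]
print(f"log HR = {est:.3f}, bootstrap SE = {np.std(boot, ddof=1):.3f}")
```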
Ensor, Joie; Burke, Danielle L; Snell, Kym I E; Hemming, Karla; Riley, Richard D
2018-05-18
Researchers and funders should consider the statistical power of planned Individual Participant Data (IPD) meta-analysis projects, as they are often time-consuming and costly. We propose simulation-based power calculations utilising a two-stage framework, and illustrate the approach for a planned IPD meta-analysis of randomised trials with continuous outcomes where the aim is to identify treatment-covariate interactions. The simulation approach has four steps: (i) specify an underlying (data generating) statistical model for trials in the IPD meta-analysis; (ii) use readily available information (e.g. from publications) and prior knowledge (e.g. number of studies promising IPD) to specify model parameter values (e.g. control group mean, intervention effect, treatment-covariate interaction); (iii) simulate an IPD meta-analysis dataset of a particular size from the model, and apply a two-stage IPD meta-analysis to obtain the summary estimate of interest (e.g. interaction effect) and its associated p-value; (iv) repeat the previous step (e.g. thousands of times), then estimate the power to detect a genuine effect by the proportion of summary estimates with a significant p-value. In a planned IPD meta-analysis of lifestyle interventions to reduce weight gain in pregnancy, 14 trials (1183 patients) promised their IPD to examine a treatment-BMI interaction (i.e. whether baseline BMI modifies intervention effect on weight gain). Using our simulation-based approach, a two-stage IPD meta-analysis has < 60% power to detect a reduction of 1 kg weight gain for a 10-unit increase in BMI. Additional IPD from ten other published trials (containing 1761 patients) would improve power to over 80%, but only if a fixed-effect meta-analysis was appropriate. Pre-specified adjustment for prognostic factors would increase power further. Incorrect dichotomisation of BMI would reduce power by over 20%, similar to immediately throwing away IPD from ten trials. Simulation-based power calculations could inform the planning and funding of IPD projects, and should be used routinely.
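The four-step simulation approach can be sketched directly; the example below simulates IPD for a set of trials, estimates a treatment-by-BMI interaction per trial, pools the estimates with fixed-effect inverse-variance weighting, and reports the proportion of significant pooled estimates. All parameter values are illustrative and will not reproduce the paper's exact power figures.

```python
import numpy as np

# Hedged sketch of a simulation-based power calculation for a two-stage IPD
# meta-analysis of a treatment-covariate (BMI) interaction on a continuous
# outcome. Parameter values are illustrative assumptions.

rng = np.random.default_rng(1)
n_trials, n_per_trial = 14, 85           # roughly 1183 patients in total
interaction = -0.1                        # kg weight gain per unit BMI (assumed true effect)
sd_outcome = 4.0                          # residual SD of weight gain (kg), assumed

def one_trial_estimate():
    n = n_per_trial
    treat = rng.binomial(1, 0.5, n)
    bmi = rng.normal(0, 4, n)                        # BMI centred at the trial mean
    y = 10 - 1.0 * treat + 0.05 * bmi + interaction * treat * bmi + rng.normal(0, sd_outcome, n)
    X = np.column_stack([np.ones(n), treat, bmi, treat * bmi])
    beta = np.linalg.solve(X.T @ X, X.T @ y)          # per-trial OLS fit
    resid = y - X @ beta
    sigma2 = resid @ resid / (n - X.shape[1])
    var_beta = sigma2 * np.linalg.inv(X.T @ X)
    return beta[3], var_beta[3, 3]                    # interaction estimate and its variance

def simulated_power(n_sims=1000):
    hits = 0
    for _ in range(n_sims):
        est, var = zip(*(one_trial_estimate() for _ in range(n_trials)))
        w = 1 / np.asarray(var)                       # fixed-effect inverse-variance weights
        pooled = np.sum(w * np.asarray(est)) / np.sum(w)
        se = np.sqrt(1 / np.sum(w))
        hits += abs(pooled / se) > 1.959964           # two-sided 5% critical value
    return hits / n_sims

print("estimated power:", simulated_power())
```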
Capasso, Renato; De Martino, Antonio
2010-10-13
Polymerin is a humic acid-like polymer, which we previously recovered for the first time from olive oil mill waste waters (OMWW) only, and chemically and physicochemically characterized. We also previously investigated its versatile sorption capacity for toxic inorganic and organic compounds. Therefore, a review is presented on the removal, from simulated polluted waters, of cationic heavy metals [Cu(II), Zn, Cr(III)] and anionic ones [Cr(VI) and As(V)] by sorption on this natural organic sorbent in comparison with its synthetic derivatives (K-polymerin and a ferrihydrite-polymerin complex) and with ferrihydrite. An overview is also given of the removal of ionic herbicides (2,4-D, paraquat, MCPA, simazine, and cyhalofop) by sorption on polymerin, ferrihydrite, and their complex, and of the removal of phenanthrene, as a representative of polycyclic aromatic hydrocarbons, by sorption on this sorbent and its complexes with micro- or nanoparticles of aluminum oxide, pointing out the employment of all these sorbents in biobed systems, which might allow the remediation of water and protection of surface and groundwater. In addition, a short review is also given on the removal of Cu(II) and Zn from simulated contaminated waters, by sorption on the humic acid-like organic fraction, named lignimerin, which we previously isolated for the first time, in collaboration with a Chilean group, from cellulose mill Kraft waste waters (KCMWW) only. More specifically, the production methods and the characterization of the two natural sorbents (polymerin and lignimerin) and their derivatives (K-polymerin, ferrihydrite-polymerin, polymerin-microAl2O3 and -nanoAl2O3, and H-lignimerin, respectively) as well as their sorption data and mechanism are reviewed. Published and original results obtained by the cyclic sorption on all of the considered sorbents for the removal of the above-mentioned toxic compounds from simulated waste waters are also reported. Moreover, the sorption capacity and mechanism of the considered compounds on polymerins and lignimerins are evaluated in comparison with other known natural sorbents, especially of humic acid nature and other organic matter. Some of their technical aspects and relative costs are also considered. Finally, the possible large-scale application of the considered sorption systems for water remediation is briefly discussed.
Writing and Publishing Handbook.
ERIC Educational Resources Information Center
Hansen, William F., Ed.
Intended to provide guidance in academic publishing to faculty members, especially younger faculty members, this handbook is a compilation of four previously published essays by different authors. Following a preface and an introduction, the four essays and their authors are as follows: (1) "One Writer's Secrets" (Donald M. Murray); (2)…
Synapse-Centric Mapping of Cortical Models to the SpiNNaker Neuromorphic Architecture
Knight, James C.; Furber, Steve B.
2016-01-01
While the adult human brain has approximately 8.8 × 10¹⁰ neurons, this number is dwarfed by its 1 × 10¹⁵ synapses. From the point of view of neuromorphic engineering and neural simulation in general this makes the simulation of these synapses a particularly complex problem. SpiNNaker is a digital, neuromorphic architecture designed for simulating large-scale spiking neural networks at speeds close to biological real-time. Current solutions for simulating spiking neural networks on SpiNNaker are heavily inspired by work on distributed high-performance computing. However, while SpiNNaker shares many characteristics with such distributed systems, its component nodes have much more limited resources and, as the system lacks global synchronization, the computation performed on each node must complete within a fixed time step. We first analyze the performance of the current SpiNNaker neural simulation software and identify several problems that occur when it is used to simulate networks of the type often used to model the cortex which contain large numbers of sparsely connected synapses. We then present a new, more flexible approach for mapping the simulation of such networks to SpiNNaker which solves many of these problems. Finally we analyze the performance of our new approach using both benchmarks, designed to represent cortical connectivity, and larger, functional cortical models. In a benchmark network where neurons receive input from 8000 STDP synapses, our new approach allows 4× more neurons to be simulated on each SpiNNaker core than has been previously possible. We also demonstrate that the largest plastic neural network previously simulated on neuromorphic hardware can be run in real time using our new approach: double the speed that was previously achieved. Additionally this network contains two types of plastic synapse which previously had to be trained separately but, using our new approach, can be trained simultaneously. PMID:27683540
A Model-Model and Data-Model Comparison for the Early Eocene Hydrological Cycle
NASA Technical Reports Server (NTRS)
Carmichael, Matthew J.; Lunt, Daniel J.; Huber, Matthew; Heinemann, Malte; Kiehl, Jeffrey; LeGrande, Allegra; Loptson, Claire A.; Roberts, Chris D.; Sagoo, Navjit; Shields, Christine
2016-01-01
A range of proxy observations have recently provided constraints on how Earth's hydrological cycle responded to early Eocene climatic changes. However, comparisons of proxy data to general circulation model (GCM) simulated hydrology are limited and inter-model variability remains poorly characterised. In this work, we undertake an intercomparison of GCM-derived precipitation and P - E distributions within the extended EoMIP ensemble (Eocene Modelling Intercomparison Project; Lunt et al., 2012), which includes previously published early Eocene simulations performed using five GCMs differing in boundary conditions, model structure, and precipitation-relevant parameterisation schemes. We show that an intensified hydrological cycle, manifested in enhanced global precipitation and evaporation rates, is simulated for all Eocene simulations relative to the preindustrial conditions. This is primarily due to elevated atmospheric paleo-CO2, resulting in elevated temperatures, although the effects of differences in paleogeography and ice sheets are also important in some models. For a given CO2 level, globally averaged precipitation rates vary widely between models, largely arising from different simulated surface air temperatures. Models with a similar global sensitivity of precipitation rate to temperature (dP/dT) display different regional precipitation responses for a given temperature change. Regions that are particularly sensitive to model choice include the South Pacific, tropical Africa, and the Peri-Tethys, which may represent targets for future proxy acquisition. A comparison of early and middle Eocene leaf-fossil-derived precipitation estimates with the GCM output illustrates that GCMs generally underestimate precipitation rates at high latitudes, although a possible seasonal bias of the proxies cannot be excluded. Models which warm these regions, either via elevated CO2 or by varying poorly constrained model parameter values, are most successful in simulating a match with geologic data. Further data from low-latitude regions and better constraints on early Eocene CO2 are now required to discriminate between these model simulations given the large error bars on paleoprecipitation estimates. Given the clear differences between simulated precipitation distributions within the ensemble, our results suggest that paleohydrological data offer an independent means by which to evaluate model skill for warm climates.
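As an illustration of the dP/dT diagnostic mentioned above, a minimal Python sketch of how a global precipitation sensitivity could be computed from a pair of simulations; the global-mean values used here are placeholders, not EoMIP output.

```python
# Minimal sketch of a global precipitation sensitivity (dP/dT) diagnostic.
# The numbers are placeholders, not values from the EoMIP ensemble.
def dp_dt(p_eocene, p_control, t_eocene, t_control):
    """Percent change in global-mean precipitation per kelvin of warming."""
    dp_percent = 100.0 * (p_eocene - p_control) / p_control
    dt = t_eocene - t_control
    return dp_percent / dt

# Placeholder global means: precipitation in mm/day, surface air temperature in K.
print(f"{dp_dt(p_eocene=3.4, p_control=3.0, t_eocene=298.0, t_control=288.0):.2f} %/K")
```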
14 CFR 121.407 - Training program: Approval of airplane simulators and other training devices.
Code of Federal Regulations, 2014 CFR
2014-01-01
Title 14, Aeronautics and Space; Training Program; § 121.407 Training program: Approval of airplane simulators and other training devices. Link to an amendment published at 78 FR 67836, Nov. 12, 2013. (a) Each airplane simulator and other training device...
O'Leary, F
2003-07-01
To determine whether it is possible to contact authors of previously published papers via email, a cross-sectional study of the Emergency Medicine Journal for 2001 was carried out. 118 articles were included in the study. The response rate from those with valid email addresses was 73%. There was no statistically significant association between the type of email address used and the address being invalid (p=0.392), or between the type of article and the likelihood of a reply (p=0.197). More responses were obtained from work addresses than from Hotmail addresses (86% v 57%, p=0.02). Email is a valid means of contacting authors of previously published articles, particularly within the emergency medicine specialty. A work-based email address may be a more valid means of contact than a Hotmail address.
Observation of a 27-day solar signature in noctilucent cloud altitude
NASA Astrophysics Data System (ADS)
Köhnke, Merlin C.; von Savigny, Christian; Robert, Charles E.
2018-05-01
Previous studies have identified solar 27-day signatures in several parameters in the mesosphere/lower thermosphere region, including temperature and noctilucent cloud (NLC) occurrence frequency. In this study we report on a solar 27-day signature in NLC altitude with peak-to-peak variations of about 400 m. We use SCIAMACHY limb-scatter observations from 2002 to 2012 to detect NLCs. The superposed epoch analysis method is applied to extract solar 27-day signatures. A 27-day signature in NLC altitude can be identified in both hemispheres in the SCIAMACHY dataset, but the signature is more pronounced in the northern hemisphere. The solar signature in NLC altitude is found to be in phase with solar activity and temperature for latitudes ≳ 70°N. We provide a qualitative explanation for the positive correlation between solar activity and NLC altitude based on published model simulations.
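A minimal Python sketch of the superposed epoch (compositing) idea used in the study; the synthetic series, the 27-day key dates, and the noise level are illustrative assumptions, not SCIAMACHY data.

```python
import numpy as np

# Sketch of a superposed epoch analysis for a 27-day solar signature. Real
# analyses key the epochs to observed solar proxies; here the key dates are
# simply assumed to recur every 27 days, and the series is synthetic.
def superposed_epoch(series, key_indices, window):
    """Average the series over windows centred on each key index."""
    half = window // 2
    segments = [series[k - half:k + half + 1]
                for k in key_indices
                if k - half >= 0 and k + half < len(series)]
    return np.mean(segments, axis=0)

rng = np.random.default_rng(0)
days = np.arange(3650)                                   # ~10 years of daily data
signal = 0.2 * np.sin(2 * np.pi * days / 27.0)           # weak 27-day signature (km)
noise = rng.normal(0.0, 1.0, days.size)                  # dominant day-to-day noise
nlc_altitude_anomaly = signal + noise                    # stand-in for NLC altitude

keys = np.arange(27, days.size, 27)                      # assumed epoch key days
composite = superposed_epoch(nlc_altitude_anomaly, keys, window=27)
print(np.round(composite, 2))                            # mean 27-day signature emerges
```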
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sacks, H.K.; Novak, T.
2008-03-15
During the past decade, several methane/air explosions in abandoned or sealed areas of underground coal mines have been attributed to lightning. Previously published work by the authors showed, through computer simulations, that currents from lightning could propagate down steel-cased boreholes and ignite explosive methane/air mixtures. The presented work expands on the model and describes a methodology based on IEEE Standard 1410-2004 to estimate the probability of an ignition. The methodology provides a means to better estimate the likelihood that an ignition could occur underground and, more importantly, allows the calculation of what-if scenarios to investigate the effectiveness of engineering controls to reduce the hazard. The computer software used for calculating fields and potentials is also verified by comparing computed results with an independently developed theoretical model of electromagnetic field propagation through a conductive medium.
Gradual onset and recovery of the Younger Dryas abrupt climate event in the tropics.
Partin, J W; Quinn, T M; Shen, C-C; Okumura, Y; Cardenas, M B; Siringan, F P; Banner, J L; Lin, K; Hu, H-M; Taylor, F W
2015-09-02
Proxy records of temperature from the Atlantic clearly show that the Younger Dryas was an abrupt climate change event during the last deglaciation, but records of hydroclimate are underutilized in defining the event. Here we combine a new hydroclimate record from Palawan, Philippines, in the tropical Pacific, with previously published records to highlight a difference between hydroclimate and temperature responses to the Younger Dryas. Although the onset and termination are synchronous across the records, tropical hydroclimate changes are more gradual (>100 years) than the abrupt (10-100 years) temperature changes in the northern Atlantic Ocean. The abrupt recovery of Greenland temperatures likely reflects changes in regional sea ice extent. Proxy data and transient climate model simulations support the hypothesis that freshwater forced a reduction in the Atlantic meridional overturning circulation, thereby causing the Younger Dryas. However, changes in ocean overturning may not produce the same effects globally as in Greenland.
Thermodynamic prediction of protein neutrality.
Bloom, Jesse D; Silberg, Jonathan J; Wilke, Claus O; Drummond, D Allan; Adami, Christoph; Arnold, Frances H
2005-01-18
We present a simple theory that uses thermodynamic parameters to predict the probability that a protein retains the wild-type structure after one or more random amino acid substitutions. Our theory predicts that for large numbers of substitutions the probability that a protein retains its structure will decline exponentially with the number of substitutions, with the severity of this decline determined by properties of the structure. Our theory also predicts that a protein can gain extra robustness to the first few substitutions by increasing its thermodynamic stability. We validate our theory with simulations on lattice protein models and by showing that it quantitatively predicts previously published experimental measurements on subtilisin and our own measurements on variants of TEM1 beta-lactamase. Our work unifies observations about the clustering of functional proteins in sequence space, and provides a basis for interpreting the response of proteins to substitutions in protein engineering applications.
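The predicted exponential decline can be illustrated with a toy stability-threshold simulation in Python; the stability margins and the assumed ΔΔG distribution below are arbitrary choices for illustration, not parameters fitted by the authors.

```python
import random

# Toy illustration of the predicted exponential decline: a protein "retains its
# structure" while its stability stays above a threshold, and each random
# substitution perturbs stability by a destabilising-on-average ddG.
# All numeric values are arbitrary assumptions for illustration only.
rng = random.Random(1)

def fraction_folded(n_subs, extra_stability, trials=20000):
    """Fraction of trials in which stability never drops below the folding threshold."""
    folded = 0
    for _ in range(trials):
        stability = extra_stability              # stability margin above threshold
        ok = True
        for _ in range(n_subs):
            stability -= rng.gauss(1.0, 1.7)     # assumed ddG per substitution (kcal/mol)
            if stability < 0.0:
                ok = False
                break
        folded += ok
    return folded / trials

for margin in (1.0, 3.0):                        # a marginally vs. extra-stable protein
    probs = [fraction_folded(n, margin) for n in range(1, 6)]
    print(f"extra stability {margin} kcal/mol:", [round(p, 2) for p in probs])
```

Under these toy assumptions the retention probability falls roughly geometrically with the number of substitutions, and a larger stability margin buys robustness mainly against the first few substitutions, mirroring the behaviour described in the abstract.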
Hu, Yang; Li, Decai; Shu, Shi; Niu, Xiaodong
2016-02-01
Based on the Darcy-Brinkman-Forchheimer equation, a finite-volume computational model with a lattice Boltzmann flux scheme is proposed for incompressible porous media flow in this paper. The fluxes across the cell interface are calculated by reconstructing the local solution of the generalized lattice Boltzmann equation for porous media flow. The time-scaled midpoint integration rule is adopted to discretize the governing equation, which means the time step is limited by the Courant-Friedrichs-Lewy condition. The force term, which accounts for the effect of the porous medium, is added directly to the discretized governing equation. Numerical simulations of the steady Poiseuille flow, the unsteady Womersley flow, the circular Couette flow, and the lid-driven flow are carried out to verify the present computational model. The obtained results show good agreement with the analytical, finite-difference, and/or previously published solutions.
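For reference, a common form of the generalized Darcy-Brinkman-Forchheimer model that such schemes discretize is sketched below; the notation (porosity ε, permeability K, Forchheimer coefficient F_ε, effective viscosity ν_e) may differ from that used in the paper.

```latex
% Generalized Darcy-Brinkman-Forchheimer momentum equation (one common form);
% \epsilon is porosity, K permeability, F_\epsilon the Forchheimer coefficient,
% \nu_e an effective viscosity and \mathbf{G} a body force.
\frac{\partial \mathbf{u}}{\partial t}
  + (\mathbf{u}\cdot\nabla)\!\left(\frac{\mathbf{u}}{\epsilon}\right)
  = -\frac{1}{\rho}\nabla(\epsilon p)
  + \nu_e \nabla^2 \mathbf{u}
  + \mathbf{F},
\qquad
\mathbf{F} = -\frac{\epsilon\,\nu}{K}\,\mathbf{u}
  - \frac{\epsilon\,F_\epsilon}{\sqrt{K}}\,\lvert\mathbf{u}\rvert\,\mathbf{u}
  + \epsilon\,\mathbf{G},
\qquad
\nabla\cdot\mathbf{u} = 0 .
```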
NASA Astrophysics Data System (ADS)
Pang, Guofei; Perdikaris, Paris; Cai, Wei; Karniadakis, George Em
2017-11-01
The fractional advection-dispersion equation (FADE) can accurately describe solute transport in groundwater, but its fractional order has to be determined a priori. Here, we employ multi-fidelity Bayesian optimization to obtain the fractional order under various conditions, and we obtain more accurate results compared to previously published data. Moreover, the present method is very efficient, as we use different levels of resolution to construct a stochastic surrogate model and quantify its uncertainty. We consider two different problem setups. In the first setup, we obtain variable fractional orders of the one-dimensional FADE, considering both synthetic and field data. In the second setup, we identify constant fractional orders of the two-dimensional FADE using synthetic data. We employ multi-resolution simulations using two-level and three-level Gaussian process regression models to construct the surrogates.
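For orientation, one commonly used one-dimensional form of the space-fractional advection-dispersion equation is sketched below; the exact formulation (for example, one-sided versus two-sided fractional derivatives, or variable order) may differ from the one adopted in the study.

```latex
% One common one-dimensional space-fractional ADE (notation may differ from the paper):
% C is solute concentration, v the mean pore velocity, D a dispersion coefficient,
% and 1 < \alpha \le 2 the fractional order that the study infers.
\frac{\partial C}{\partial t}
  = -v\,\frac{\partial C}{\partial x}
  + D\,\frac{\partial^{\alpha} C}{\partial x^{\alpha}},
\qquad 1 < \alpha \le 2 .
```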
Detailed Modeling of the DART Spacecraft Impact into Didymoon
NASA Astrophysics Data System (ADS)
Weaver, R.; Gisler, G.
2017-12-01
In this presentation we will model the impact of the DART spacecraft into the target, Didymoon. Most previous modeling of this impact has used full-density aluminum spheres with a mass of 300 kg or, more recently, 500 kg. Many of the published scaling laws for crater size and diameter, as well as ejecta modeling, assume this type of impactor. The actual spacecraft for the DART impact is not solid and does not contain a dedicated solid kinetic impactor; the spacecraft itself is considered the impactor. Since the spacecraft is significantly larger in size (100 x 100 x 200 cm) than a full-density aluminum sphere (radius 35 cm), the resulting impact dynamics will be quite different. Here we model both types of impact and compare the results of the simulations for crater size, crater depth, and ejecta. This allows for a comparison of the momentum enhancement factor, beta. Suggestions for improvement of the spacecraft design will be given.
Body ownership: When feeling and knowing diverge.
Romano, Daniele; Sedda, Anna; Brugger, Peter; Bottini, Gabriella
2015-07-01
Individuals with the peculiar disturbance of 'overcompleteness' experience an intense desire to amputate one of their healthy limbs, describing a sense of disownership for it (Body Integrity Identity Disorder, BIID). This condition is similar to somatoparaphrenia, the acquired delusion that one's own limb belongs to someone else. In ten individuals with BIID, we measured the skin conductance response (SCR) to noxious stimuli delivered to the accepted and non-accepted limb, either touching the body part or simulating the contact (stimuli approaching the body without contacting it), hypothesizing that these individuals have responses like somatoparaphrenic patients, who previously showed reduced pain anticipation when the threat was directed to the disowned limb. We found a reduced anticipatory response to stimuli approaching, but not contacting, the unwanted limb. Conversely, stimuli contacting the non-accepted body part induced a stronger SCR than those contacting the healthy parts, suggesting that the feeling of ownership is critically related to proper processing of incoming threats. Copyright © 2015. Published by Elsevier Inc.
Ignition and Growth Reactive Flow Modeling of Shock Initiation of PBX 9502 at -55°C and -196°C
NASA Astrophysics Data System (ADS)
Chidester, Steven; Tarver, Craig
2015-06-01
Recently, Gustavsen et al. and Hollowell et al. published two-stage gas gun embedded particle velocity gauge experiments on PBX 9502 (95% TATB, 5% Kel-F800) cooled to -55°C and -196°C, respectively. At -196°C, PBX 9502 was shown to be much less shock sensitive than at -55°C, but it did transition to detonation. Previous Ignition and Growth model parameters for shock initiation of PBX 9502 at -55°C are modified based on the new data, and new parameters for -196°C PBX 9502 are created to accurately simulate the measured particle velocity histories and run distances to detonation versus shock pressures. This work was performed under the auspices of the U.S. Department of Energy by the Lawrence Livermore National Laboratory under Contract No. DE-AC52-07NA27344.
Maulidiani; Rudiyanto; Abas, Faridah; Ismail, Intan Safinar; Lajis, Nordin H
2018-06-01
The optimization process is an important aspect of natural product extraction. Herein, an alternative approach is proposed for optimizing extraction, namely the Generalized Likelihood Uncertainty Estimation (GLUE). The approach combines Latin hypercube sampling, the feasible ranges of the independent variables, Monte Carlo simulation, and threshold criteria for the response variables. The GLUE method is tested on three different techniques, including ultrasound-, microwave-, and supercritical CO2-assisted extractions, utilizing data from previously published reports. The study found that this method can: provide more information on the combined effects of the independent variables on the response variables in the dotty plots; deal with an unlimited number of independent and response variables; consider combined multiple threshold criteria, which are subjective and depend on the target of the investigation for the response variables; and provide a range of values, with their distribution, for the optimization. Copyright © 2018 Elsevier Ltd. All rights reserved.
Sensitivity of precipitation statistics to urban growth in a subtropical coastal megacity cluster.
Holst, Christopher Claus; Chan, Johnny C L; Tam, Chi-Yung
2017-09-01
This short paper presents an investigation of how human activities may or may not affect precipitation, based on numerical simulations of precipitation in a benchmark case with modified lower boundary conditions representing different stages of urban development in the model. The results indicate that certain degrees of urbanization affect the likelihood of heavy precipitation significantly, while less urbanized or smaller cities are much less prone to these effects. Such a result can be explained by our previous work, in which the sensitivity of precipitation statistics to surface anthropogenic heat sources lies in the generation of buoyancy and turbulence in the planetary boundary layer and its dissipation through the triggering of convection. Thus only megacities of sufficient size, and hence human-activity-related anthropogenic heat emission, can be expected to experience such effects. In other words, as cities grow, their effects upon precipitation appear to grow as well. Copyright © 2017. Published by Elsevier B.V.
Kirman, C R; Suh, M; Proctor, D M; Hays, S M
2017-06-15
A physiologically based pharmacokinetic (PBPK) model for hexavalent chromium [Cr(VI)] in mice, rats, and humans developed previously (Kirman et al., 2012, 2013) was updated to reflect an improved understanding of the toxicokinetics of the gastrointestinal tract following oral exposures. Improvements were made to: (1) the reduction model, which describes the pH-dependent reduction of Cr(VI) to Cr(III) in the gastrointestinal tract under both fasted and fed states; (2) the drinking water pattern simulations, to better describe dosimetry in rodents under the conditions of the NTP cancer bioassay; and (3) the parameterization of the model, to characterize potentially sensitive human populations. Important species differences, sources of non-linear toxicokinetics, and human variation are identified and discussed within the context of human health risk assessment. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
Fantínová, K; Fojtík, P; Malátová, I
2016-09-01
Rapid measurement techniques are required for large-scale emergency monitoring of people. In vivo measurement of the bremsstrahlung radiation produced by incorporated pure-beta emitters can offer a rapid technique for the determination of such radionuclides in the human body. This work presents a method for the calibration of spectrometers based on the use of the UPh-02T (so-called IGOR) phantom and specific ⁹⁰Sr/⁹⁰Y sources, which can account for recent as well as previous contaminations. The process of whole- and partial-body counter calibration, in combination with the application of a Monte Carlo code, readily extends to other pure-beta emitters and various exposure scenarios. © The Author 2015. Published by Oxford University Press. All rights reserved.
Zahiripour, Seyed Ali; Jalali, Ali Akbar
2014-09-01
A novel switching function based on an optimization strategy for the sliding mode control (SMC) method has been provided for uncertain stochastic systems subject to actuator degradation, such that the closed-loop system is globally asymptotically stable with probability one. In previous research, the sliding surface has been a proportional or proportional-integral function of the states. In this research, a degree of freedom that depends on the designer's choice is used to meet certain objectives: in the design of the switching function, there is a parameter which the designer can regulate for specified objectives. A sliding-mode controller is synthesized to ensure the reachability of the specified switching surface despite actuator degradation and uncertainties. Finally, the simulation results demonstrate the effectiveness of the proposed method. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Lenderink, Geert; Attema, Jisk
2015-08-01
Scenarios of future changes in small-scale precipitation extremes for the Netherlands are presented. These scenarios are based on a new approach whereby changes in precipitation extremes are set proportional to the change in water vapor amount near the surface as measured by the 2 m dew point temperature. This simple scaling framework allows the integration of information derived from: (i) observations, (ii) a new, unprecedentedly large 16-member ensemble of simulations with the regional climate model RACMO2 driven by EC-Earth, and (iii) short-term integrations with the non-hydrostatic model Harmonie. Scaling constants are based on subjective weighting (expert judgement) of the three different information sources, also taking into account previously published work. In all scenarios local precipitation extremes increase with warming, yet with broad uncertainty ranges expressing incomplete knowledge of how convective clouds and the atmospheric mesoscale circulation will react to climate change.
The contribution of glacier melt to streamflow
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schaner, Neil; Voisin, Nathalie; Nijssen, Bart
2012-09-13
Ongoing and projected future changes in glacier extent and water storage globally have led to concerns about the implications for water supplies. However, the current magnitude of glacier contributions to river runoff is not well known, nor is the population at risk from future glacier changes. We estimate an upper bound on the glacier melt contribution to seasonal streamflow by computing the energy balance of glaciers globally. Melt water quantities are computed as a fraction of total streamflow simulated using a hydrology model, and the melt fraction is tracked down the stream network. In general, our estimates of the glacier melt contribution to streamflow are lower than previously published values. Nonetheless, we find that globally an estimated 225 (36) million people live in river basins where maximum seasonal glacier melt contributes at least 10% (25%) of streamflow, mostly in the High Asia region.
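A minimal Python sketch of the bookkeeping implied by "the melt fraction is tracked down the stream network"; the toy network topology and the runoff and melt volumes are invented placeholders, not output of the hydrology model.

```python
# Sketch of tracking a glacier-melt fraction down a stream network: the melt
# fraction at any reach is accumulated upstream melt over accumulated total flow.
# The toy network and the runoff/melt volumes are placeholders, not model output.
network = {"A": "C", "B": "C", "C": "D", "D": None}          # reach -> downstream reach
local_runoff = {"A": 5.0, "B": 20.0, "C": 10.0, "D": 40.0}   # non-glacier runoff
local_melt   = {"A": 4.0, "B": 0.0,  "C": 1.0,  "D": 0.0}    # glacier melt

cache = {}

def accumulate(reach):
    """Return (total_flow, total_melt) leaving `reach`, including all upstream reaches."""
    if reach not in cache:
        flow = local_runoff[reach] + local_melt[reach]
        melt = local_melt[reach]
        for upstream, downstream in network.items():
            if downstream == reach:                          # fold in every tributary
                up_flow, up_melt = accumulate(upstream)
                flow, melt = flow + up_flow, melt + up_melt
        cache[reach] = (flow, melt)
    return cache[reach]

for reach in network:
    flow, melt = accumulate(reach)
    print(f"reach {reach}: glacier-melt fraction of streamflow = {melt / flow:.2f}")
```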
Pavani, R S; Fernandes, C; Perez, A M; Vasconcelos, E J R; Siqueira-Neto, J L; Fontes, M R; Cano, M I N
2014-12-20
Replication protein A-1 (RPA-1) is a single-stranded DNA-binding protein involved in DNA metabolism. We previously demonstrated the interaction between LaRPA-1 and telomeric DNA. Here, we expressed and purified truncated mutants of LaRPA-1 and used circular dichroism measurements and molecular dynamics simulations to demonstrate that the tertiary structure of LaRPA-1 differs from human and yeast RPA-1. LaRPA-1 interacts with telomeric ssDNA via its N-terminal OB-fold domain, whereas RPA from higher eukaryotes shows different binding modes to ssDNA. Our results show that LaRPA-1 is evolutionarily distinct from other RPA-1 proteins and can potentially be used for targeting trypanosomatid telomeres. Copyright © 2014 Federation of European Biochemical Societies. Published by Elsevier B.V. All rights reserved.
Scipioni Bertoli, Umberto; Guss, Gabe; Wu, Sheldon; ...
2017-09-21
A detailed understanding of the complex melt pool physics plays a vital role in predicting optimal processing regimes in laser powder bed fusion additive manufacturing. In this work, we use high framerate video recording of Selective Laser Melting (SLM) to provide useful insight into the laser-powder interaction and melt pool evolution of 316L powder layers, while also serving as a novel instrument to quantify cooling rates of the melt pool. The experiment was performed using two powder types – one gas- and one water-atomized – to further clarify how morphological and chemical differences between these two feedstock materials influence the laser melting process. Finally, experimentally determined cooling rates are compared with values obtained through computer simulation, and the relationship between cooling rate and grain cell size is compared with data previously published in the literature.
NASA Astrophysics Data System (ADS)
Mitchell-Thomas, R. C.; McManus, T. M.; Quevedo-Teruel, O.; Horsley, S. A. R.; Hao, Y.
2013-11-01
This Letter presents a method for making an uneven surface behave as a flat surface. This allows an object to be concealed (cloaked) under an uneven portion of the surface, without disturbing the wave propagation on the surface. The cloaks proposed in this Letter achieve perfect cloaking that only relies upon isotropic radially dependent refractive index profiles, contrary to those previously published. In addition, these cloaks are very thin, just a fraction of a wavelength in thickness, yet can conceal electrically large objects. While this paper focuses on cloaking electromagnetic surface waves, the theory is also valid for other types of surface waves. The performance of these cloaks is simulated using dielectric filled waveguide geometries, and the curvature of the surface is shown to be rendered invisible, hiding any object positioned underneath. Finally, a transformation of the required dielectric slab permittivity was performed for surface wave propagation, demonstrating the practical applicability of this technique.
An efficient technique for the numerical solution of the bidomain equations.
Whiteley, Jonathan P
2008-08-01
Computing the numerical solution of the bidomain equations is widely accepted to be a significant computational challenge. In this study we extend a previously published semi-implicit numerical scheme with good stability properties that has been used to solve the bidomain equations (Whiteley, J.P. IEEE Trans. Biomed. Eng. 53:2139-2147, 2006). A new, efficient numerical scheme is developed which utilizes the observation that the only component of the ionic current that must be calculated on a fine spatial mesh and updated frequently is the fast sodium current. Other components of the ionic current may be calculated on a coarser mesh and updated less frequently, and then interpolated onto the finer mesh. Use of this technique to calculate the transmembrane potential and extracellular potential induces very little error in the solution. For the simulations presented in this study an increase in computational efficiency of over two orders of magnitude over standard numerical techniques is obtained.
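For context, the bidomain system referred to above can be written, in one common formulation (notation may differ slightly from the cited scheme), as:

```latex
% Bidomain equations: V_m is the transmembrane potential, u_e the extracellular
% potential, \sigma_i / \sigma_e the intra-/extracellular conductivity tensors,
% \chi the surface-to-volume ratio, C_m the membrane capacitance, and
% I_ion(V_m, w) the ionic current driven by gating variables w.
\chi\!\left( C_m \frac{\partial V_m}{\partial t} + I_{\mathrm{ion}}(V_m,\mathbf{w}) \right)
  = \nabla\cdot\!\big( \sigma_i \nabla (V_m + u_e) \big),
\qquad
\nabla\cdot\!\big( \sigma_i \nabla V_m + (\sigma_i + \sigma_e)\nabla u_e \big) = 0 .
```

In the scheme described in the abstract, only the fast sodium component of I_ion is evaluated on the fine mesh and updated every time step, while the remaining components are computed on a coarser mesh, updated less frequently, and interpolated onto the fine mesh.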
Forcino, Frank L; Leighton, Lindsey R; Twerdy, Pamela; Cahill, James F
2015-01-01
Community ecologists commonly perform multivariate techniques (e.g., ordination, cluster analysis) to assess patterns and gradients of taxonomic variation. A critical requirement for a meaningful statistical analysis is accurate information on the taxa found within an ecological sample. However, oversampling (too many individuals counted per sample) also comes at a cost, particularly for ecological systems in which identification and quantification is substantially more resource consuming than the field expedition itself. In such systems, an increasingly larger sample size will eventually result in diminishing returns in improving any pattern or gradient revealed by the data, but will also lead to continually increasing costs. Here, we examine 396 datasets: 44 previously published and 352 created datasets. Using meta-analytic and simulation-based approaches, the research within the present paper seeks (1) to determine minimal sample sizes required to produce robust multivariate statistical results when conducting abundance-based, community ecology research. Furthermore, we seek (2) to determine the dataset parameters (i.e., evenness, number of taxa, number of samples) that require larger sample sizes, regardless of resource availability. We found that in the 44 previously published and the 220 created datasets with randomly chosen abundances, a conservative estimate of a sample size of 58 produced the same multivariate results as all larger sample sizes. However, this minimal number varies as a function of evenness, where increased evenness resulted in increased minimal sample sizes. Sample sizes as small as 58 individuals are sufficient for a broad range of multivariate abundance-based research. In cases when resource availability is the limiting factor for conducting a project (e.g., small university, time to conduct the research project), statistically viable results can still be obtained with less of an investment.
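A minimal Python sketch of the kind of subsampling experiment described above: individuals are repeatedly drawn from each sample and the Bray-Curtis distance structure of the subsampled matrix is compared with that of the full data. The community matrix, sample counts, and correlation criterion are illustrative assumptions, not the datasets or tests used in the study.

```python
import numpy as np
from scipy.spatial.distance import pdist

# Sketch of a sample-size experiment: subsample n individuals per ecological
# sample, recompute the Bray-Curtis distance structure, and correlate it with
# the full-data structure. Community sizes and the taxon pool are arbitrary.
rng = np.random.default_rng(42)
n_samples, n_taxa = 30, 25
full = rng.multinomial(500, rng.dirichlet(np.ones(n_taxa)), size=n_samples)
full_dist = pdist(full, metric="braycurtis")

def subsample(counts, n, rng):
    """Draw n individuals without replacement from one sample's counts."""
    pool = np.repeat(np.arange(counts.size), counts)
    picked = rng.choice(pool, size=n, replace=False)
    return np.bincount(picked, minlength=counts.size)

for n in (20, 58, 200):
    sub = np.array([subsample(row, n, rng) for row in full])
    sub_dist = pdist(sub, metric="braycurtis")
    r = np.corrcoef(full_dist, sub_dist)[0, 1]
    print(f"n = {n:3d} individuals/sample: correlation with full-data distances = {r:.2f}")
```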
Mixed Model Association with Family-Biased Case-Control Ascertainment.
Hayeck, Tristan J; Loh, Po-Ru; Pollack, Samuela; Gusev, Alexander; Patterson, Nick; Zaitlen, Noah A; Price, Alkes L
2017-01-05
Mixed models have become the tool of choice for genetic association studies; however, standard mixed model methods may be poorly calibrated or underpowered under family sampling bias and/or case-control ascertainment. Previously, we introduced a liability threshold-based mixed model association statistic (LTMLM) to address case-control ascertainment in unrelated samples. Here, we consider family-biased case-control ascertainment, where case and control subjects are ascertained non-randomly with respect to family relatedness. Previous work has shown that this type of ascertainment can severely bias heritability estimates; we show here that it also impacts mixed model association statistics. We introduce a family-based association statistic (LT-Fam) that is robust to this problem. Similar to LTMLM, LT-Fam is computed from posterior mean liabilities (PML) under a liability threshold model; however, LT-Fam uses published narrow-sense heritability estimates to avoid the problem of biased heritability estimation, enabling correct calibration. In simulations with family-biased case-control ascertainment, LT-Fam was correctly calibrated (average χ² = 1.00-1.02 for null SNPs), whereas the Armitage trend test (ATT), standard mixed model association (MLM), and case-control retrospective association test (CARAT) were mis-calibrated (e.g., average χ² = 0.50-1.22 for MLM, 0.89-2.65 for CARAT). LT-Fam also attained higher power than other methods in some settings. In 1,259 type 2 diabetes-affected case subjects and 5,765 control subjects from the CARe cohort, downsampled to induce family-biased ascertainment, LT-Fam was correctly calibrated whereas ATT, MLM, and CARAT were again mis-calibrated. Our results highlight the importance of modeling family sampling bias in case-control datasets with related samples. Copyright © 2017 American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.
van der Hoop, Julie M; Vanderlaan, Angelia S M; Taggart, Christopher T
2012-10-01
Vessel strikes are the primary source of known mortality for the endangered North Atlantic right whale (Eubalaena glacialis). Multi-institutional efforts to reduce mortality associated with vessel strikes include vessel-routing amendments such as the International Maritime Organization voluntary "area to be avoided" (ATBA) in the Roseway Basin right whale feeding habitat on the southwestern Scotian Shelf. Though relative probabilities of lethal vessel strikes have been estimated and published, absolute probabilities remain unknown. We used a modeling approach to determine the regional effect of the ATBA, by estimating reductions in the expected number of lethal vessel strikes. This analysis differs from others in that it explicitly includes a spatiotemporal analysis of real-time transits of vessels through a population of simulated, swimming right whales. Combining automatic identification system (AIS) vessel navigation data and an observationally based whale movement model allowed us to determine the spatial and temporal intersection of vessels and whales, from which various probability estimates of lethal vessel strikes are derived. We estimate one lethal vessel strike every 0.775-2.07 years prior to ATBA implementation, consistent with and more constrained than previous estimates of every 2-16 years. Following implementation, a lethal vessel strike is expected every 41 years. When whale abundance is held constant across years, we estimate that voluntary vessel compliance with the ATBA results in an 82% reduction in the per capita rate of lethal strikes; very similar to a previously published estimate of 82% reduction in the relative risk of a lethal vessel strike. The models we developed can inform decision-making and policy design, based on their ability to provide absolute, population-corrected, time-varying estimates of lethal vessel strikes, and they are easily transported to other regions and situations.
Relative contributions of microbial and infrastructure heat at a crude oil-contaminated site.
Warren, Ean; Bekins, Barbara A
2018-04-01
Biodegradation of contaminants can increase the temperature in the subsurface due to heat generated from exothermic reactions, making temperature observations a potentially low-cost approach for determining microbial activity. For this technique to gain more widespread acceptance, it is necessary to better understand all the factors affecting the measured temperatures. Biodegradation has been occurring at a crude oil-contaminated site near Bemidji, Minnesota, for 39 years, creating a quasi-steady-state plume of contaminants and degradation products. A model of subsurface heat generation and transport helps elucidate the contribution of microbial and infrastructure heating to observed temperature increases at this site. We created a steady-state, two-dimensional heat transport model using previously published parameter values for physical, chemical, and biodegradation properties. Simulated temperature distributions closely match the observed average annual temperatures measured in the contaminated area at the site, to within 0.2 °C in the unsaturated zone and 0.4 °C in the saturated zone. The model results confirm that the observed subsurface heat from microbial activity is due primarily to methane oxidation in the unsaturated zone, resulting in a 3.6 °C increase in average annual temperature. Another important source of subsurface heat is the active crude-oil pipelines crossing the site. The pipelines impact temperatures for a distance of 200 m and contribute half the heat. Model results show that not accounting for the heat from the pipelines leads to overestimating the degradation rates by a factor of 1.7, demonstrating the importance of identifying and quantifying all heat sources. The model results also highlighted a zone where previously unknown microbial activity is occurring at the site. Published by Elsevier B.V.
O'Lenick, Cassandra R; Winquist, Andrea; Mulholland, James A; Friberg, Mariel D; Chang, Howard H; Kramer, Michael R; Darrow, Lyndsey A; Sarnat, Stefanie Ebelt
2017-02-01
A broad literature base provides evidence of an association between air pollution and paediatric asthma. Socioeconomic status (SES) may modify these associations; however, previous studies have found inconsistent evidence regarding the role of SES. Effect modification of air pollution-paediatric asthma morbidity by multiple indicators of neighbourhood SES was examined in Atlanta, Georgia. Emergency department (ED) visit data were obtained for 5-18 year olds with a diagnosis of asthma in 20-county Atlanta during 2002-2008. Daily ZIP Code Tabulation Area (ZCTA)-level concentrations of ozone, nitrogen dioxide, fine particulate matter and elemental carbon were estimated using ambient monitoring data and emissions-based chemical transport model simulations. Pollutant-asthma associations were estimated using a case-crossover approach, controlling for temporal trends and meteorology. Effect modification by ZCTA-level (neighbourhood) SES was examined via stratification. We observed stronger air pollution-paediatric asthma associations in 'deprivation areas' (eg, ≥20% of the ZCTA population living in poverty) compared with 'non-deprivation areas'. When stratifying analyses by quartiles of neighbourhood SES, ORs indicated stronger associations in the highest and lowest SES quartiles and weaker associations among the middle quartiles. Our results suggest that neighbourhood-level SES is a factor contributing to vulnerability to air pollution-related paediatric asthma morbidity in Atlanta. Children living in low SES environments appear to be especially vulnerable given positive ORs and high underlying asthma ED rates. Inconsistent findings of effect modification among previous studies may be partially explained by the choice of SES stratification criteria and the use of multiplicative models combined with differing baseline risk across SES populations. Published by the BMJ Publishing Group Limited.
Contributions to muscle force and EMG by combined neural excitation and electrical stimulation
NASA Astrophysics Data System (ADS)
Crago, Patrick E.; Makowski, Nathaniel S.; Cole, Natalie M.
2014-10-01
Objective. Stimulation of muscle for research or clinical interventions is often superimposed on ongoing physiological activity without a quantitative understanding of the impact of the stimulation on the net muscle activity and the physiological response. Experimental studies show that total force during stimulation is less than the sum of the isolated voluntary and stimulated forces, but the occlusion mechanism is not understood. Approach. We develop a model of efferent motor activity elicited by superimposing stimulation during a physiologically activated contraction. The model combines action potential interactions due to collision block, source resetting, and refractory periods with previously published models of physiological motor unit recruitment, rate modulation, force production, and EMG generation in human first dorsal interosseous muscle to investigate the mechanisms and effectiveness of stimulation on the net muscle force and EMG. Main results. Stimulation during a physiological contraction demonstrates partial occlusion of force and the neural component of the EMG, due to action potential interactions in motor units activated by both sources. Depending on neural and stimulation firing rates as well as on force-frequency properties, individual motor unit forces can be greater, smaller, or unchanged by the stimulation. In contrast, voluntary motor unit EMG potentials in simultaneously stimulated motor units show progressive occlusion with increasing stimulus rate. The simulations predict that occlusion would be decreased by a reverse stimulation recruitment order. Significance. The results are consistent with and provide a mechanistic interpretation of previously published experimental evidence of force occlusion. The models also predict two effects that have not been reported previously—voluntary EMG occlusion and the advantages of a proximal stimulation site. This study provides a basis for the rational design of both future experiments and clinical neuroprosthetic interventions involving either motor or sensory stimulation.
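A minimal Python sketch of one of the action-potential interactions the model combines: an absolute refractory period blocking near-coincident voluntary and stimulated spikes in the same motor unit. The spike times, rates, and refractory value are illustrative only, not parameters of the published motor-unit models.

```python
# Sketch of combining voluntary and stimulated firing in one motor unit: spikes
# from either source that arrive within the refractory period of the previous
# spike are blocked, so the combined rate is less than the sum of the two rates.
# Spike times (s) and the refractory period are illustrative values only.
def combine_trains(voluntary, stimulated, refractory=0.005):
    merged = sorted(voluntary + stimulated)
    accepted, last = [], None
    for t in merged:
        if last is None or t - last >= refractory:
            accepted.append(t)
            last = t
    return accepted

voluntary  = [0.010, 0.090, 0.170, 0.250, 0.330]                  # ~12.5 Hz voluntary drive
stimulated = [0.000, 0.050, 0.100, 0.150, 0.200, 0.250, 0.300]    # 20 Hz stimulation

combined = combine_trains(voluntary, stimulated)
print(len(voluntary) + len(stimulated), "spikes if simply summed")
print(len(combined), "spikes after refractory interaction (partial occlusion)")
```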
A systematic review of financial incentives for dietary behavior change.
Purnell, Jason Q; Gernes, Rebecca; Stein, Rick; Sherraden, Margaret S; Knoblock-Hahn, Amy
2014-07-01
In light of the obesity epidemic, there is growing interest in the use of financial incentives for dietary behavior change. Previous reviews of the literature have focused on randomized controlled trials and found mixed results. The purpose of this systematic review is to update and expand on previous reviews by considering a broader range of study designs, including randomized controlled trials, quasi-experimental, observational, and simulation studies testing the use of financial incentives to change dietary behavior and to inform both dietetic practice and research. The review was guided by theoretical consideration of the type of incentive used based on the principles of operant conditioning. There was further examination of whether studies were carried out with an institutional focus. Studies published between 2006 and 2012 were selected for review, and data were extracted regarding study population, intervention design, outcome measures, study duration and follow-up, and key findings. Twelve studies meeting selection criteria were reviewed, with 11 finding a positive association between incentives and dietary behavior change in the short term. All studies pointed to more specific information on the type, timing, and magnitude of incentives needed to motivate individuals to change behavior, the types of incentives and disincentives most likely to affect the behavior of various socioeconomic groups, and promising approaches for potential policy and practice innovations. Limitations of the studies are noted, including the lack of theoretical guidance in the selection of incentive structures and the absence of basic experimental data. Future research should consider these factors, even as policy makers and practitioners continue to experiment with this potentially useful approach to addressing obesity. Copyright © 2014 Academy of Nutrition and Dietetics. Published by Elsevier Inc. All rights reserved.
Refinement of Objective Motion Cueing Criteria Investigation Based on Three Flight Tasks
NASA Technical Reports Server (NTRS)
Zaal, Petrus M. T.; Schroeder, Jeffery A.; Chung, William W.
2017-01-01
The objective of this paper is to refine objective motion cueing criteria for commercial transport simulators based on pilots' performance in three flying tasks. Actuator hardware and software algorithms determine motion cues. Today, during a simulator qualification, engineers objectively evaluate only the hardware. Pilot inspectors subjectively assess the overall motion cueing system (i.e., hardware plus software); however, it is acknowledged that pinpointing any deficiencies that might arise to either hardware or software is challenging. ICAO 9625 has an Objective Motion Cueing Test (OMCT), which is now a required test in the FAA's part 60 regulations for new devices, evaluating the software and hardware together; however, it lacks accompanying fidelity criteria. Hosman has documented OMCT results for a statistical sample of eight simulators which is useful, but having validated criteria would be an improvement. In a previous experiment, we developed initial objective motion cueing criteria that this paper is trying to refine. Sinacori suggested simple criteria which are in reasonable agreement with much of the literature. These criteria often necessitate motion displacements greater than most training simulators can provide. While some of the previous work has used transport aircraft in their studies, the majority used fighter aircraft or helicopters. Those that used transport aircraft considered degraded flight characteristics. As a result, earlier criteria lean more towards being sufficient, rather than necessary, criteria for typical transport aircraft training applications. Considering the prevalence of 60-inch, six-legged hexapod training simulators, a relevant question is "what are the necessary criteria that can be used with the ICAO 9625 diagnostic?" This study adds to the literature as follows. First, it examines well-behaved transport aircraft characteristics, but in three challenging tasks. The tasks are equivalent to the ones used in our previous experiment, allowing us to directly compare the results and add to the previous data. Second, it uses the Vertical Motion Simulator (VMS), the world's largest vertical displacement simulator. This allows inclusion of relatively large motion conditions, much larger than a typical training simulator can provide. Six new motion configurations were used that explore the motion responses between the initial objective motion cueing boundaries found in a previous experiment and what current hexapod simulators typically provide. Finally, a sufficiently large pilot pool added statistical reliability to the results.
Appelqvist, Christin; Al-Hamdani, Zyad K.; Jonsson, Per R.; Havenhand, Jon N.
2015-01-01
The shipworm, Teredo navalis, is absent from most of the Baltic Sea. In the last 20 years, increased frequency of T. navalis has been reported along the southern Baltic Sea coasts of Denmark, Germany, and Sweden, indicating possible range-extensions into previously unoccupied areas. We evaluated the effects of historical and projected near-future changes in salinity, temperature, and oxygen on the risk of spread of T. navalis in the Baltic. Specifically, we developed a simple, GIS-based, mechanistic climate envelope model to predict the spatial distribution of favourable conditions for adult reproduction and larval metamorphosis of T. navalis, based on published environmental tolerances to these factors. In addition, we used a high-resolution three-dimensional hydrographic model to simulate the probability of spread of T. navalis larvae within the study area. Climate envelope modeling showed that projected near-future climate change is not likely to change the overall distribution of T. navalis in the region, but will prolong the breeding season and increase the risk of shipworm establishment at the margins of the current range. Dispersal simulations indicated that the majority of larvae were philopatric, but those that spread over a wider area typically spread to areas unfavourable for their survival. Overall, therefore, we found no substantive evidence for climate-change related shifts in the distribution of T. navalis in the Baltic Sea, and no evidence for increased risk of spread in the near-future. PMID:25768305
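A minimal Python sketch of the climate envelope idea used above: a grid cell is favourable only where all environmental variables fall within tolerance windows. The threshold values below are placeholders, not the published T. navalis tolerances used in the study.

```python
import numpy as np

# Sketch of a GIS-style climate envelope: a grid cell is "favourable" when
# salinity, temperature, and oxygen all lie within tolerance windows.
# The thresholds are placeholders, NOT the published T. navalis tolerances.
rng = np.random.default_rng(7)
salinity    = rng.uniform(2, 30, size=(4, 6))    # psu, toy Baltic-like gradient
temperature = rng.uniform(4, 20, size=(4, 6))    # deg C, summer means
oxygen      = rng.uniform(1, 9, size=(4, 6))     # mL/L

favourable = (
    (salinity >= 8.0)          # placeholder lower salinity limit
    & (temperature >= 11.0)    # placeholder lower temperature limit
    & (oxygen >= 2.0)          # placeholder hypoxia limit
)
print(favourable.astype(int))
print(f"{favourable.mean():.0%} of cells favourable for reproduction (toy grid)")
```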
Understanding Ice Supersaturation, Particle Growth, and Number Concentration in Cirrus Clouds
NASA Technical Reports Server (NTRS)
Comstock, Jennifer M.; Lin, Ruei-Fong; Starr, David O'C.; Yang, Ping
2008-01-01
Many factors control the ice supersaturation and microphysical properties in cirrus clouds. We explore the effects of dynamic forcing, ice nucleation mechanisms, and ice crystal growth rate on the evolution and distribution of water vapor and cloud properties in nighttime cirrus clouds using a one-dimensional cloud model with bin microphysics and remote sensing measurements obtained at the Department of Energy's Atmospheric Radiation Measurement (ARM) Climate Research Facility located near Lamont, OK. We forced the model using both large-scale vertical ascent and, for the first time, mean mesoscale velocity derived from radar Doppler velocity measurements. Both heterogeneous and homogeneous nucleation processes are explored, where a classical theory heterogeneous scheme is compared with empirical representations. We evaluated model simulations by examining both bulk cloud properties and distributions of measured radar reflectivity, lidar extinction, and water vapor profiles, as well as retrieved cloud microphysical properties. Our results suggest that mesoscale variability is the primary mechanism needed to reproduce observed quantities. Model sensitivity to the ice growth rate is also investigated. The most realistic simulations as compared with observations are forced using mesoscale waves, include fast ice crystal growth, and initiate ice by either homogeneous or heterogeneous nucleation. Simulated ice crystal number concentrations (tens to hundreds of particles per liter) are typically two orders of magnitude smaller than previously published results based on aircraft measurements in cirrus clouds, although higher concentrations are possible in isolated pockets within the nucleation zone.
Corley, R A; Minard, K R; Kabilan, S; Einstein, D R; Kuprat, A P; Harkema, J R; Kimbell, J S; Gargas, M L; Kinzell, John H
2009-05-01
The percentages of total airflows over the nasal respiratory and olfactory epithelium of female rabbits were calculated from computational fluid dynamics (CFD) simulations of steady-state inhalation. These airflow calculations, along with nasal airway geometry determinations, are critical parameters for hybrid CFD/physiologically based pharmacokinetic models that describe the nasal dosimetry of water-soluble or reactive gases and vapors in rabbits. CFD simulations were based upon three-dimensional computational meshes derived from magnetic resonance images of three adult female New Zealand White (NZW) rabbits. In the anterior portion of the nose, the maxillary turbinates of rabbits are considerably more complex than comparable regions in rats, mice, monkeys, or humans. This leads to a greater surface area to volume ratio in this region and thus the potential for increased extraction of water soluble or reactive gases and vapors in the anterior portion of the nose compared to many other species. Although there was considerable interanimal variability in the fine structures of the nasal turbinates and airflows in the anterior portions of the nose, there was remarkable consistency between rabbits in the percentage of total inspired airflows that reached the ethmoid turbinate region (approximately 50%) that is presumably lined with olfactory epithelium. These latter results (airflows reaching the ethmoid turbinate region) were higher than previously published estimates for the male F344 rat (19%) and human (7%). These differences in regional airflows can have significant implications in interspecies extrapolations of nasal dosimetry.
Shuai, Lan; Malins, Jeffrey G
2017-02-01
Despite its prevalence as one of the most highly influential models of spoken word recognition, the TRACE model has yet to be extended to consider tonal languages such as Mandarin Chinese. A key reason for this is that the model in its current state does not encode lexical tone. In this report, we present a modified version of the jTRACE model in which we drew on its existing architecture to code for Mandarin phonemes and tones. Units are coded in a way that is meant to capture the similarity in timing of access to vowel and tone information that has been observed in previous studies of Mandarin spoken word recognition. We validated the model by first simulating a recent experiment that had used the visual world paradigm to investigate how native Mandarin speakers process monosyllabic Mandarin words (Malins & Joanisse, 2010). We then simulated two psycholinguistic phenomena: (1) differences in the timing of resolution of tonal contrast pairs, and (2) the interaction between syllable frequency and tonal probability. In all cases, the model gave rise to results comparable to those of published data with human subjects, suggesting that it is a viable working model of spoken word recognition in Mandarin. It is our hope that this tool will be of use to practitioners studying the psycholinguistics of Mandarin Chinese and will help inspire similar models for other tonal languages, such as Cantonese and Thai.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Altinok, Ozgur
A sample of charged-current single pion production events for the semi-exclusive channel ν_μ + CH → μ⁻ + π⁰ + nucleon(s) has been obtained using neutrino exposures of the MINERvA detector. Differential cross sections for muon momentum, muon production angle, pion momentum, pion production angle, and four-momentum transfer squared Q² are reported and are compared to a GENIE-based simulation. The cross section versus neutrino energy is also reported. The effects of pion final-state interactions on these cross sections are investigated. The effect of baryon resonance suppression at low Q² is examined, and an event re-weight used by two previous experiments is shown to improve the data versus simulation agreement. The differential cross sections for Q² for E_ν < 4.0 GeV and E_ν ≥ 4.0 GeV are examined, and the shapes of these distributions are compared to those from the experiment's ν̄_μ-CC(π⁰) measurement. The polarization of the pπ⁰ system is measured and compared to the simulation predictions. The hadronic invariant mass W distribution is examined for evidence of resonance content, and a search is reported for evidence of a two-particle two-hole (2p2h) contribution. All of the differential cross-section measurements of this thesis are compared with published MINERvA measurements for ν_μ-CC(π⁺) and ν̄_μ-CC(π⁰) processes.
Footitt, Steven; Ölçer-Footitt, Hülya; Hambidge, Angela J; Finch-Savage, William E
2017-08-01
Environmental signals drive seed dormancy cycling in the soil to synchronize germination with the optimal time of year, a process essential for species' fitness and survival. Previous correlation of transcription profiles in exhumed seeds with annual environmental signals revealed the coordination of dormancy-regulating mechanisms with the soil environment. Here, we developed a rapid and robust laboratory dormancy cycling simulation. The utility of this simulation was tested in two ways: firstly, using mutants in known dormancy-related genes [DELAY OF GERMINATION 1 (DOG1), MOTHER OF FLOWERING TIME (MFT), CBL-INTERACTING PROTEIN KINASE 23 (CIPK23) and PHYTOCHROME A (PHYA)]; and secondly, using further mutants, we tested the hypothesis that components of the circadian clock are involved in coordination of the annual seed dormancy cycle. The rate of dormancy induction and relief differed in all lines tested. In the dog1-2 and mft2 mutants, dormancy induction was reduced but not absent; DOG1 is not absolutely required for dormancy. In cipk23 and phyA, dormancy induction was accelerated. Involvement of the clock in dormancy cycling was clear when mutants in the morning and evening loops of the clock were compared: dormancy induction was faster when the morning loop was compromised and delayed when the evening loop was compromised. © 2017 The Authors. Plant, Cell & Environment Published by John Wiley & Sons Ltd.
Virtual Reality-Based Simulators for Cranial Tumor Surgery: A Systematic Review.
Mazur, Travis; Mansour, Tarek R; Mugge, Luke; Medhkour, Azedine
2018-02-01
Virtual reality (VR) simulators have become useful tools in various fields of medicine. Prominent uses of VR technologies include assessment of physician skills and presurgical planning. VR has shown effectiveness in multiple surgical specialties, yet its use in neurosurgery remains limited. To examine all current literature on VR-based simulation for presurgical planning and training in cranial tumor surgeries and to assess the quality of these studies. PubMed and Embase were systematically searched to identify studies that used VR for presurgical planning and/or studies that investigated the use of VR as a training tool from inception to May 25, 2017. The initial search identified 1662 articles. Thirty-seven full-text articles were assessed for inclusion. Nine studies were included. These studies were subdivided into presurgical planning and training using VR. Prospects for VR are bright when surgical planning and skills training are considered. In terms of surgical planning, VR has noted and documented usefulness in the planning of cranial surgeries. Further, VR has been central to establishing reproducible benchmarks of performance in relation to cranial tumor resection, which are helpful not only in showing face and construct validity but also in enhancing neurosurgical training in a way not previously examined. Although additional studies are needed to better delineate the precise role of VR in each of these capacities, these studies stand to show the usefulness of VR in neurosurgery and highlight the need for further investigation. Published by Elsevier Inc.
Hospital influenza pandemic stockpiling needs: A computer simulation.
Abramovich, Mark N; Hershey, John C; Callies, Byron; Adalja, Amesh A; Tosh, Pritish K; Toner, Eric S
2017-03-01
A severe influenza pandemic could overwhelm hospitals, but planning guidance that accounts for the dynamic interrelationships between planning elements is lacking. We developed a methodology to calculate pandemic supply needs based on operational considerations in hospitals and then tested the methodology at Mayo Clinic in Rochester, MN. We upgraded a previously designed computer modeling tool and input carefully researched resource data from the hospital to run 10,000 Monte Carlo simulations using various combinations of variables to determine resource needs across a spectrum of scenarios. Of 10,000 iterations, 1,315 fell within the parameters defined by our simulation design and logical constraints. From these valid iterations, we projected requirements by percentile for key supplies, pharmaceuticals, and personal protective equipment needed in a severe pandemic. We projected supply needs for a range of scenarios that use up to 100% of Mayo Clinic-Rochester's surge capacity of beds and ventilators. The results indicate that there are diminishing patient care benefits for stockpiling on the high side of the range, but that having some stockpile of critical resources, even if it is relatively modest, is most important. We were able to display the probabilities of needing various supply levels across a spectrum of scenarios. The tool could be used to model many other hospital preparedness issues, but validation in other settings is needed. Copyright © 2017 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.
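A minimal Python sketch of the Monte Carlo logic described above: uncertain scenario parameters are sampled, converted into supply requirements, and summarized by percentile. All distributions and per-patient usage factors are illustrative placeholders, not Mayo Clinic planning figures.

```python
import numpy as np

# Sketch of the Monte Carlo idea: sample uncertain pandemic parameters, convert
# each scenario into a supply requirement, and report stockpile sizes by
# percentile. Every distribution and usage factor is a placeholder.
rng = np.random.default_rng(2017)
n_iter = 10_000

admissions       = rng.triangular(200, 600, 1500, n_iter)   # pandemic inpatients
length_of_stay   = rng.triangular(4, 7, 14, n_iter)         # days per patient
masks_per_day    = rng.uniform(15, 40, n_iter)              # N95s per patient-day
antiviral_per_pt = 10                                       # doses per treated patient

patient_days   = admissions * length_of_stay
mask_need      = patient_days * masks_per_day
antiviral_need = admissions * antiviral_per_pt

for name, need in (("N95 respirators", mask_need), ("antiviral doses", antiviral_need)):
    p50, p90, p99 = np.percentile(need, [50, 90, 99])
    print(f"{name}: median {p50:,.0f}, 90th pct {p90:,.0f}, 99th pct {p99:,.0f}")
```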
Simulation of modern climate with the new version of the INM RAS climate model
NASA Astrophysics Data System (ADS)
Volodin, E. M.; Mortikov, E. V.; Kostrykin, S. V.; Galin, V. Ya.; Lykosov, V. N.; Gritsun, A. S.; Diansky, N. A.; Gusev, A. V.; Yakovlev, N. G.
2017-03-01
The INMCM5.0 numerical model of the Earth's climate system is presented, which is an evolution from the previous version, INMCM4.0. A higher vertical resolution for the stratosphere is applied in the atmospheric block. We also raised the upper boundary of the computational domain, added an aerosol block, modified the parameterization of clouds and condensation, and increased the horizontal resolution in the ocean block. The program implementation of the model was also updated. We consider the simulation of the current climate using the new version of the model. Attention is focused on reducing systematic errors as compared to the previous version, on reproducing phenomena that could not be simulated correctly in the previous version, and on problems that remain unresolved.