Sample records for simulation analysis shows

  1. In situ visualization and data analysis for turbidity currents simulation

    NASA Astrophysics Data System (ADS)

    Camata, Jose J.; Silva, Vítor; Valduriez, Patrick; Mattoso, Marta; Coutinho, Alvaro L. G. A.

    2018-01-01

    Turbidity currents are underflows responsible for sediment deposits that generate geological formations of interest for the oil and gas industry. LibMesh-sedimentation is an application built upon the libMesh library to simulate turbidity currents. In this work, we present the integration of libMesh-sedimentation with in situ visualization and in transit data analysis tools. DfAnalyzer is a provenance-based solution that extracts and relates strategic simulation data in transit from multiple sources for online queries. We integrate libMesh-sedimentation and ParaView Catalyst to perform in situ data analysis and visualization. We present a parallel performance analysis for two turbidity currents simulations showing that the overhead for both in situ visualization and in transit data analysis is negligible. We show that our tools enable monitoring the appearance of sediments at runtime and steering the simulation based on solver convergence and visual information on the sediment deposits, thus enhancing the analytical power of turbidity currents simulations.

  2. Simulation-based training for nurses: Systematic review and meta-analysis.

    PubMed

    Hegland, Pål A; Aarlie, Hege; Strømme, Hilde; Jamtvedt, Gro

    2017-07-01

    Simulation-based training is a widespread strategy to improve health-care quality. However, its effect on registered nurses has previously not been established in systematic reviews. The aim of this systematic review is to evaluate the effect of simulation-based training on nurses' skills and knowledge. We searched CDSR, DARE, HTA, CENTRAL, CINAHL, MEDLINE, Embase, ERIC, and SveMed+ for randomised controlled trials (RCT) evaluating the effect of simulation-based training among nurses. Searches were completed in December 2016. Two reviewers independently screened abstracts and full-text, extracted data, and assessed risk of bias. We compared simulation-based training to other learning strategies, high-fidelity simulation to other simulation strategies, and different organisation of simulation training. Data were analysed through meta-analysis and narrative syntheses. GRADE was used to assess the quality of evidence. Fifteen RCTs met the inclusion criteria. For the comparison of simulation-based training to other learning strategies on nurses' skills, six studies in the meta-analysis showed a significant, but small effect in favour of simulation (SMD -1.09, CI -1.72 to -0.47). There was large heterogeneity (I² = 85%). For the other comparisons, there was large between-study variation in results. The quality of evidence for all comparisons was graded as low. The effect of simulation-based training varies substantially between studies. Our meta-analysis showed a significant effect of simulation training compared to other learning strategies, but the quality of evidence was low, indicating uncertainty. Other comparisons showed inconsistency in results. Based on our findings, simulation training appears to be an effective strategy to improve nurses' skills, but further good-quality RCTs with adequate sample sizes are needed. Copyright © 2017 Elsevier Ltd. All rights reserved.
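
The pooled-effect arithmetic behind such a meta-analysis can be sketched in a few lines. The per-study SMDs and variances below are hypothetical, not the review's data; the sketch shows inverse-variance pooling together with Cochran's Q and the I² heterogeneity statistic the abstract reports.

```python
import math

def pool_smd(effects, variances):
    """Inverse-variance pooling of standardized mean differences (SMDs),
    with Cochran's Q and the I^2 heterogeneity statistic."""
    w = [1.0 / v for v in variances]
    pooled = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    se = math.sqrt(1.0 / sum(w))
    q = sum(wi * (e - pooled) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    ci = (pooled - 1.96 * se, pooled + 1.96 * se)
    return pooled, ci, i2

# Hypothetical per-study SMDs and variances (not the review's data):
effects = [-0.4, -1.2, -2.0, -0.9, -1.5, -0.6]
variances = [0.05, 0.08, 0.10, 0.06, 0.09, 0.07]
smd, ci, i2 = pool_smd(effects, variances)
```

A large I² (here roughly 80%) signals that between-study variation, not chance, drives most of the spread, which is why the review grades the evidence as low despite a significant pooled effect.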

  3. Development and validation of a virtual reality simulator: human factors input to interventional radiology training.

    PubMed

    Johnson, Sheena Joanne; Guediri, Sara M; Kilkenny, Caroline; Clough, Peter J

    2011-12-01

    This study developed and validated a virtual reality (VR) simulator for use by interventional radiologists. Research in the area of skill acquisition reports practice as essential to become a task expert. Studies on simulation show skills learned in VR can be successfully transferred to a real-world task. Recently, with improvements in technology, VR simulators have been developed to allow complex medical procedures to be practiced without risking the patient. Three studies are reported. In Study 1, 35 consultant interventional radiologists took part in a cognitive task analysis to empirically establish the key competencies of the Seldinger procedure. In Study 2, 62 participants performed one simulated procedure, and their performance was compared by expertise. In Study 3, the transferability of simulator training to a real-world procedure was assessed with 14 trainees. Study 1 produced 23 key competencies that were implemented as performance measures in the simulator. Study 2 showed the simulator had both face and construct validity, although some issues were identified. Study 3 showed the group that had undergone simulator training received significantly higher mean performance ratings on a subsequent patient procedure. The findings of this study support the centrality of validation in the successful design of simulators and show the utility of simulators as a training device. The studies show the key elements of a validation program for a simulator. In addition to task analysis and face and construct validities, the authors highlight the importance of transfer of training in validation studies.

  4. Improving Students' Self-Efficacy in Strategic Management: The Relative Impact of Cases and Simulations.

    ERIC Educational Resources Information Center

    Tompson, George H.; Dass, Parshotam

    2000-01-01

    Investigates the relative contribution of computer simulations and case studies for improving undergraduate students' self-efficacy in strategic management courses. Results of pre- and post-test data, regression analysis, and analysis of variance show that simulations result in significantly higher improvement in self-efficacy than case studies.…

  5. INNOVATIVE INSTRUMENTATION AND ANALYSIS OF THE TEMPERATURE MEASUREMENT FOR HIGH TEMPERATURE GASIFICATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seong W. Lee

    During this reporting period, the literature survey, including the gasifier temperature measurement literature, the ultrasonic application and its background in cleaning applications, and the spray coating process, was completed. The gasifier simulator (cold model) testing has been successfully conducted. Four factors (blower voltage, ultrasonic application, injection time intervals, particle weight) were considered as significant factors that affect the temperature measurement. Analysis of Variance (ANOVA) was applied to analyze the test data. The analysis shows that all four factors are significant to the temperature measurements in the gasifier simulator (cold model). The regression analysis for the case with the normalized room temperature shows that a linear model fits the temperature data with 82% accuracy (18% error). The regression analysis for the case without the normalized room temperature shows 72.5% accuracy (27.5% error). The nonlinear regression analysis indicates a better fit than the linear regression: the nonlinear model's accuracy is 88.7% (11.3% error) for the normalized room temperature case. The hot model thermocouple sleeve design and fabrication are completed. The gasifier simulator (hot model) design and fabrication are completed. The system tests of the gasifier simulator (hot model) have been conducted and some modifications have been made. Based on the system tests and results analysis, the gasifier simulator (hot model) has met the proposed design requirements and is ready for system testing. The ultrasonic cleaning method is under evaluation and will be further studied for the gasifier simulator (hot model) application. The progress of this project has been on schedule.
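
The linear-versus-nonlinear comparison the report describes can be illustrated with a small least-squares sketch. The temperature readings are invented, not the project's data; the point is that a quadratic model recovers a curved trend that a straight line underfits, mirroring the reported accuracy gap.

```python
def fit_poly(xs, ys, degree):
    """Least-squares polynomial fit via normal equations + Gaussian elimination."""
    n = degree + 1
    A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    for col in range(n):                         # elimination with partial pivoting
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * n
    for r in range(n - 1, -1, -1):               # back substitution
        coef[r] = (b[r] - sum(A[r][c] * coef[c] for c in range(r + 1, n))) / A[r][r]
    return coef

def r_squared(xs, ys, coef):
    pred = [sum(c * x ** i for i, c in enumerate(coef)) for x in xs]
    mean = sum(ys) / len(ys)
    ss_res = sum((y - p) ** 2 for y, p in zip(ys, pred))
    ss_tot = sum((y - mean) ** 2 for y in ys)
    return 1.0 - ss_res / ss_tot

# Hypothetical readings with a curved temperature trend:
xs = [float(i) for i in range(10)]
ys = [2.0 + 1.5 * x + 0.3 * x * x for x in xs]
r2_lin = r_squared(xs, ys, fit_poly(xs, ys, 1))
r2_quad = r_squared(xs, ys, fit_poly(xs, ys, 2))
```

The quadratic fit explains essentially all of the variance while the linear fit leaves a systematic residual, the same qualitative pattern as the 88.7% versus 82% accuracies reported above.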

  6. Simulating the IPOD, East Asian summer monsoon, and their relationships in CMIP5

    NASA Astrophysics Data System (ADS)

    Yu, Miao; Li, Jianping; Zheng, Fei; Wang, Xiaofan; Zheng, Jiayu

    2018-03-01

    This paper evaluates the simulation performance of the 37 coupled models from the Coupled Model Intercomparison Project Phase 5 (CMIP5) with respect to the East Asian summer monsoon (EASM) and the Indo-Pacific warm pool and North Pacific Ocean dipole (IPOD), as well as the interrelationships between them. The results show that the majority of the models are unable to accurately simulate the interannual variability and long-term trends of the EASM, and their simulations of the temporal and spatial variations of the IPOD are also limited. Further analysis showed that the correlation coefficient between the simulated and observed EASM index (EASMI) is proportional to that between the simulated and observed IPOD index (IPODI); that is, if a model has the skill to simulate one of them, it will likely generate a good simulation of the other. Based on this relationship, this paper proposes a conditional multi-model ensemble (CMME) method that excludes models lacking the capability to simulate the IPOD and EASM when calculating the multi-model ensemble (MME). The analysis shows that, compared with the MME, the CMME method can significantly improve the simulations of the spatial and temporal variations of both the IPOD and EASM as well as their interrelationship, suggesting the potential for the CMME approach to be used in place of the MME method.
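
A minimal sketch of the conditional-ensemble idea, with made-up model runs rather than CMIP5 output: models whose correlation with the observed index falls below a threshold are excluded before averaging, and the conditional mean tracks the observations better than the plain multi-model mean.

```python
def pearson(a, b):
    """Pearson correlation of two equal-length series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

def ensemble(runs):
    """Point-wise mean across model runs."""
    return [sum(vals) / len(runs) for vals in zip(*runs)]

# Hypothetical observed index and three model runs (two skilful, one not):
observed = [float(t) for t in range(10)]
runs = [[t + 0.1 for t in observed],
        [1.2 * t for t in observed],
        [5.0, 0.0, 7.0, 2.0, 9.0, 1.0, 8.0, 3.0, 6.0, 4.0]]

mme = ensemble(runs)                                       # plain multi-model mean
kept = [r for r in runs if pearson(r, observed) >= 0.5]    # the conditional step
cmme = ensemble(kept)
```

The 0.5 threshold is an arbitrary illustration; the paper conditions on skill for both the IPOD and the EASM.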

  7. Numerical simulation of deformation and figure quality of precise mirror

    NASA Astrophysics Data System (ADS)

    Vit, Tomáš; Melich, Radek; Sandri, Paolo

    2015-01-01

    This paper presents results and a comparison of FEM numerical simulations and optical tests of the assembly of a precise Zerodur mirror with a mounting structure for space applications. It also shows how the curing of the adhesive film can impact the optical surface, especially as regards deformations. Finally, the paper presents the results of the figure quality analysis, which are based on data from FEM simulation of optical surface deformations.

  8. Tropospheric ozone in the western Pacific Rim: Analysis of satellite and surface-based observations along with comprehensive 3-D model simulations

    NASA Technical Reports Server (NTRS)

    Young, Sun-Woo; Carmichael, Gregory R.

    1994-01-01

    Tropospheric ozone production and transport in mid-latitude eastern Asia is studied. Data analysis of surface-based ozone measurements in Japan and satellite-based tropospheric column measurements of the entire western Pacific Rim are combined with results from three-dimensional model simulations to investigate the diurnal, seasonal and long-term variations of ozone in this region. Surface ozone measurements from Japan show distinct seasonal variation with a spring peak and summer minimum. Satellite studies of the entire tropospheric column of ozone show high concentrations in both the spring and summer seasons. Finally, preliminary model simulation studies show good agreement with observed values.

  9. A meta-model analysis of a finite element simulation for defining poroelastic properties of intervertebral discs.

    PubMed

    Nikkhoo, Mohammad; Hsu, Yu-Chun; Haghpanahi, Mohammad; Parnianpour, Mohamad; Wang, Jaw-Lin

    2013-06-01

    Finite element analysis is an effective tool to evaluate the material properties of living tissue. For an interactive optimization procedure, the finite element analysis usually needs many simulations to reach a reasonable solution. The meta-model analysis of finite element simulation can be used to reduce the computation of a structure with complex geometry or a material with composite constitutive equations. The intervertebral disc is a complex, heterogeneous, and hydrated porous structure. A poroelastic finite element model can be used to observe the fluid transfer, pressure deviation, and other properties within the disc. Defining reasonable poroelastic material properties of the anulus fibrosus and nucleus pulposus is critical for the quality of the simulation. We developed a material property updating protocol, which is basically a fitting algorithm consisting of finite element simulations and a quadratic response surface regression. This protocol was used to find the material properties, such as the hydraulic permeability, elastic modulus, and Poisson's ratio, of intact and degenerated porcine discs. The results showed that the in vitro disc experimental deformations were well fitted with limited finite element simulations and a quadratic response surface regression. The comparison of material properties of intact and degenerated discs showed that the hydraulic permeability significantly decreased but Poisson's ratio significantly increased for the degenerated discs. This study shows that the developed protocol is efficient and effective in defining material properties of a complex structure such as the intervertebral disc.
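
The response-surface step can be sketched as follows. The `simulate` function is a hypothetical one-parameter stand-in for an expensive poroelastic finite element run, not the authors' model: three trial simulations are fitted with a quadratic response surface (Newton's divided differences), and the vertex of that quadratic gives the next parameter estimate without further FE runs.

```python
def simulate(perm):
    """Hypothetical stand-in for an expensive poroelastic FE run:
    returns a scalar disc deformation for a given hydraulic permeability."""
    return 1.0 / (perm + 0.5)

target = simulate(1.3)   # pretend this is the measured in vitro deformation

# Sample the misfit at three trial permeabilities.
p0, p1, p2 = 0.5, 1.0, 2.0
m0, m1, m2 = ((simulate(p) - target) ** 2 for p in (p0, p1, p2))

# Fit a quadratic response surface through the three (p, misfit) points.
d01 = (m1 - m0) / (p1 - p0)
d12 = (m2 - m1) / (p2 - p1)
d012 = (d12 - d01) / (p2 - p0)
# Vertex (minimum) of the fitted quadratic:
p_star = (p0 + p1) / 2.0 - d01 / (2.0 * d012)
```

One response-surface jump already lands near the true permeability (1.3 in this toy); iterating the fit-and-jump cycle is the essence of the meta-model protocol.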

  10. Analysis of Naval Ammunition Stock Positioning

    DTIC Science & Technology

    2015-12-01

    model takes once the Monte Carlo simulation determines the assigned probabilities for site-to-site locations. Column two shows how the simulation...stockpiles and positioning them at coastal Navy facilities. A Monte Carlo simulation model was developed to simulate expected cost and delivery...Subject terms: supply chain management, Monte Carlo simulation, risk, delivery performance, stock positioning
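
A Monte Carlo model of this general kind can be sketched as follows; the routes, costs, and probabilities are invented for illustration, not taken from the thesis. Each trial draws a site-to-site route according to assigned probabilities, and averaging over many trials estimates expected cost and delivery time.

```python
import random

random.seed(7)

# Hypothetical stocking options: (name, selection probability, cost, delivery days)
routes = [("coastal depot", 0.6, 120.0, 2.0),
          ("inland depot", 0.4, 90.0, 5.0)]

def one_trial():
    """Draw one route according to the assigned probabilities."""
    r, acc = random.random(), 0.0
    for _, p, cost, days in routes:
        acc += p
        if r <= acc:
            return cost, days
    return routes[-1][2], routes[-1][3]

trials = [one_trial() for _ in range(10_000)]
# Closed-form expectations for these inputs: 0.6*120 + 0.4*90 = 108 cost,
# 0.6*2 + 0.4*5 = 3.2 days; the simulation should land close to both.
exp_cost = sum(c for c, _ in trials) / len(trials)
exp_days = sum(d for _, d in trials) / len(trials)
```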

  11. Relaxation mode analysis of a peptide system: comparison with principal component analysis.

    PubMed

    Mitsutake, Ayori; Iijima, Hiromitsu; Takano, Hiroshi

    2011-10-28

    This article reports the first attempt to apply the relaxation mode analysis method to a simulation of a biomolecular system. In biomolecular systems, principal component analysis is a well-known method for analyzing the static properties of fluctuations of structures obtained by a simulation and classifying the structures into groups. On the other hand, relaxation mode analysis has been used to analyze the dynamic properties of homopolymer systems. In this article, a long Monte Carlo simulation of Met-enkephalin in the gas phase has been performed. The results are analyzed by the principal component analysis and relaxation mode analysis methods. We compare the results of both methods and show the effectiveness of the relaxation mode analysis.
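
For contrast with relaxation mode analysis, the PCA side can be sketched on toy 2-D "coordinates" (hypothetical data, not the Met-enkephalin trajectory): diagonalize the covariance matrix and read off how much of the fluctuation the leading principal mode explains.

```python
import math

# Hypothetical 2-D projections of structures fluctuating mainly along one axis.
xs = [-2.0, -1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5, 2.0, 2.5]
data = [(x, 0.5 * x + 0.1 * (-1) ** i) for i, x in enumerate(xs)]

n = len(data)
mx = sum(p[0] for p in data) / n
my = sum(p[1] for p in data) / n
cxx = sum((p[0] - mx) ** 2 for p in data) / n
cyy = sum((p[1] - my) ** 2 for p in data) / n
cxy = sum((p[0] - mx) * (p[1] - my) for p in data) / n

# Closed-form eigenvalues of the 2x2 covariance matrix = PCA variances.
tr, det = cxx + cyy, cxx * cyy - cxy * cxy
l1 = tr / 2 + math.sqrt(tr * tr / 4 - det)
l2 = tr / 2 - math.sqrt(tr * tr / 4 - det)
explained = l1 / (l1 + l2)   # fraction of fluctuation in the first principal mode
```

PCA ranks modes by static variance like this; relaxation mode analysis instead ranks them by relaxation time, which is why the two can disagree about which motions matter.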

  12. Grand canonical ensemble Monte Carlo simulation of the dCpG/proflavine crystal hydrate.

    PubMed

    Resat, H; Mezei, M

    1996-09-01

    The grand canonical ensemble Monte Carlo molecular simulation method is used to investigate hydration patterns in the crystal hydrate structure of the dCpG/proflavine intercalated complex. The objective of this study is to show by example that the recently advocated grand canonical ensemble simulation is a computationally efficient method for determining the positions of the hydrating water molecules in protein and nucleic acid structures. A detailed molecular simulation convergence analysis and an analogous comparison of the theoretical results with experiments clearly show that the grand ensemble simulations can be far more advantageous than the comparable canonical ensemble simulations.
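
The grand canonical move set can be sketched for the simplest possible case, an ideal (non-interacting) system, where the insertion and deletion acceptance rules reduce to ratios involving the activity-volume product zV (z absorbs the chemical potential and thermal wavelength terms). The parameters are illustrative, not the paper's; for this toy the stationary particle-number distribution is Poisson with mean zV.

```python
import random

random.seed(1)

z, V = 0.5, 20.0       # activity and volume; z*V = 10 is the target mean occupancy
N = 0
total, count = 0, 0
for step in range(200_000):
    if random.random() < 0.5:                        # attempt an insertion
        if random.random() < min(1.0, z * V / (N + 1)):
            N += 1
    elif N > 0 and random.random() < min(1.0, N / (z * V)):
        N -= 1                                       # attempt a deletion
    if step >= 50_000:                               # discard equilibration
        total += N
        count += 1
avg_N = total / count
```

In the real GCMC of the paper, each acceptance ratio also carries a Boltzmann factor for the interaction-energy change of inserting or deleting a water molecule; the fluctuating N is what lets the method place the right number of hydrating waters automatically.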

  13. Simulation-based optimization framework for reuse of agricultural drainage water in irrigation.

    PubMed

    Allam, A; Tawfik, A; Yoshimura, C; Fleifle, A

    2016-05-01

    A simulation-based optimization framework for agricultural drainage water (ADW) reuse has been developed through the integration of a water quality model (QUAL2Kw) and a genetic algorithm. This framework was applied to the Gharbia drain in the Nile Delta, Egypt, in summer and winter 2012. First, the water quantity and quality of the drain were simulated using the QUAL2Kw model. Second, uncertainty analysis and sensitivity analysis based on Monte Carlo simulation were performed to assess QUAL2Kw's performance and to identify the most critical variables for determination of water quality, respectively. Finally, a genetic algorithm was applied to maximize the total reuse quantity from seven reuse locations under the condition of not violating the standards for using mixed water in irrigation. The water quality simulations showed that organic matter concentrations are critical management variables in the Gharbia drain. The uncertainty analysis showed the reliability of QUAL2Kw to simulate water quality and quantity along the drain. Furthermore, the sensitivity analysis showed that the 5-day biochemical oxygen demand, chemical oxygen demand, total dissolved solids, total nitrogen and total phosphorus are highly sensitive to point source flow and quality. Additionally, the optimization results revealed that the reuse quantities of ADW can reach 36.3% and 40.4% of the available ADW in the drain during summer and winter, respectively. These quantities meet 30.8% and 29.1% of the drainage basin requirements for fresh irrigation water in the respective seasons. Copyright © 2016 Elsevier Ltd. All rights reserved.
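
A toy version of the GA step: maximize total reuse across a few sites subject to a mixed-water quality limit, with a penalty for violating the standard. Sites, concentrations, and the limit are invented; this is not the paper's QUAL2Kw setup, just the constrained-maximization pattern it describes.

```python
import random

random.seed(3)

caps = [10.0, 10.0, 10.0]          # max reuse at each site (hypothetical)
conc = [1.0, 2.0, 4.0]             # pollutant concentration of each reach
fresh_q, fresh_c, limit = 50.0, 0.5, 1.0   # fresh supply and irrigation standard

def fitness(q):
    """Total reuse, heavily penalized if the blended water exceeds the standard."""
    mix = (fresh_q * fresh_c + sum(qi * ci for qi, ci in zip(q, conc))) \
          / (fresh_q + sum(q))
    total = sum(q)
    return total if mix <= limit else total - 1000.0 * (mix - limit)

def mutate(q):
    return [min(cap, max(0.0, qi + random.gauss(0.0, 1.0)))
            for qi, cap in zip(q, caps)]

pop = [[random.uniform(0.0, cap) for cap in caps] for _ in range(40)]
for _ in range(300):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                               # elitist selection
    children = []
    for _ in range(30):
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, len(caps))         # one-point crossover
        children.append(mutate(a[:cut] + b[cut:]))
    pop = parents + children
best = max(pop, key=fitness)
best_total = sum(best)
```

For these numbers the constrained optimum is a total reuse of 25 (full use of the two cleaner sites, partial use of the dirtiest); the GA should approach it while keeping the blend within the standard.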

  14. Multi-model analysis of terrestrial carbon cycles in Japan: limitations and implications of model calibration using eddy flux observations

    NASA Astrophysics Data System (ADS)

    Ichii, K.; Suzuki, T.; Kato, T.; Ito, A.; Hajima, T.; Ueyama, M.; Sasai, T.; Hirata, R.; Saigusa, N.; Ohtani, Y.; Takagi, K.

    2010-07-01

    Terrestrial biosphere models show large differences when simulating carbon and water cycles, and reducing these differences is a priority for developing more accurate estimates of the condition of terrestrial ecosystems and future climate change. To reduce uncertainties and improve the understanding of their carbon budgets, we investigated the utility of the eddy flux datasets to improve model simulations and reduce variabilities among multi-model outputs of terrestrial biosphere models in Japan. Using 9 terrestrial biosphere models (Support Vector Machine - based regressions, TOPS, CASA, VISIT, Biome-BGC, DAYCENT, SEIB, LPJ, and TRIFFID), we conducted two simulations: (1) point simulations at four eddy flux sites in Japan and (2) spatial simulations for Japan with a default model (based on original settings) and a modified model (based on model parameter tuning using eddy flux data). Generally, models using default model settings showed large deviations in model outputs from observation with large model-by-model variability. However, after we calibrated the model parameters using eddy flux data (GPP, RE and NEP), most models successfully simulated seasonal variations in the carbon cycle, with less variability among models. We also found that interannual variations in the carbon cycle are mostly consistent among models and observations. Spatial analysis also showed a large reduction in the variability among model outputs. This study demonstrated that careful validation and calibration of models with available eddy flux data reduced model-by-model differences. Yet, site history, analysis of model structure changes, and more objective procedure of model calibration should be included in the further analysis.

  15. Using Wavelet Analysis To Assist in Identification of Significant Events in Molecular Dynamics Simulations.

    PubMed

    Heidari, Zahra; Roe, Daniel R; Galindo-Murillo, Rodrigo; Ghasemi, Jahan B; Cheatham, Thomas E

    2016-07-25

    Long time scale molecular dynamics (MD) simulations of biological systems are becoming increasingly commonplace due to the availability of both large-scale computational resources and significant advances in the underlying simulation methodologies. Therefore, it is useful to investigate and develop data mining and analysis techniques to quickly and efficiently extract the biologically relevant information from the incredible amount of generated data. Wavelet analysis (WA) is a technique that can quickly reveal significant motions during an MD simulation. Here, the application of WA on well-converged long time scale (tens of μs) simulations of a DNA helix is described. We show how WA combined with a simple clustering method can be used to identify both the physical and temporal locations of events with significant motion in MD trajectories. We also show that WA can not only distinguish and quantify the locations and time scales of significant motions, but by changing the maximum time scale of WA a more complete characterization of these motions can be obtained. This allows motions of different time scales to be identified or ignored as desired.
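
The core of the idea, detail coefficients localizing abrupt motion in time, can be sketched with a single-level Haar transform on a made-up trajectory trace; the paper's actual analysis applies full multi-scale wavelet analysis to MD data.

```python
def haar_step(signal):
    """One level of the Haar wavelet transform: pairwise averages + details."""
    avg = [(signal[i] + signal[i + 1]) / 2.0 for i in range(0, len(signal), 2)]
    det = [(signal[i] - signal[i + 1]) / 2.0 for i in range(0, len(signal), 2)]
    return avg, det

# Hypothetical per-frame distance trace: flat, then an abrupt conformational jump.
trace = [1.0] * 31 + [3.0] * 33
avg, det = haar_step(trace)

# Detail coefficients vanish where the signal is locally flat and spike only
# at the transition, localizing the event in time.
peak = max(range(len(det)), key=lambda i: abs(det[i]))
```

Recursing on the averages gives coarser and coarser detail bands, which is how changing the maximum time scale of the analysis separates motions by scale, as the abstract describes.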

  16. Robust Satellite Communications Under Hostile Interference

    DTIC Science & Technology

    2015-01-08

    this analysis, was included in the simulations. Figure 17. STK Simulation Showing SNR Measured During the Uplink Attack (at the Jammer) 4.3...in Decision and Game Theory for Security, pp. 34-43, 2011. 8. Wertz, J. R., and Larson, W. J., Space Mission Analysis and Design, Microcosm, 1999...3.2 Analysis of Reactive Jamming Against Satellite Communications...3.2.1

  17. Hygrothermal Simulation: A Tool for Building Envelope Design Analysis

    Treesearch

    Samuel V. Glass; Anton TenWolde; Samuel L. Zelinka

    2013-01-01

    Is it possible to gauge the risk of moisture problems while designing the building envelope? This article provides a brief introduction to computer-based hygrothermal (heat and moisture) simulation, shows how simulation can be useful as a design tool, and points out a number of important considerations regarding model inputs and limitations. Hygrothermal simulation...

  18. Impact of atmospheric forcing on heat content variability in the sub-surface layer in the Japan/East Sea, 1948-2009

    NASA Astrophysics Data System (ADS)

    Stepanov, Dmitry; Gusev, Anatoly; Diansky, Nikolay

    2016-04-01

    Based on numerical simulations, the study investigates the impact of atmospheric forcing on heat content variability of the sub-surface layer in the Japan/East Sea (JES) over 1948-2009. We developed a model configuration based on the INMOM model and atmospheric forcing extracted from the CORE phase II experiment dataset for 1948-2009, which enables us to assess the impact of atmospheric forcing alone on heat content variability of the sub-surface layer of the JES. An analysis of kinetic energy (KE) and total heat content (THC) in the JES obtained from our numerical simulations showed that the simulated circulation of the JES is in a quasi-steady state. It was found that the year-mean KE variations obtained from our numerical simulations are similar to those extracted from the SODA reanalysis. Comparison of the simulated THC with that extracted from the SODA reanalysis showed significant consistency between them. An analysis of the numerical simulations showed that the simulated circulation structure is very similar to that obtained from the PALACE floats in the intermediate and abyssal layers of the JES. Using empirical orthogonal function analysis, we studied the spatial-temporal variability of the heat content of the sub-surface layer in the JES. Based on comparison of the simulated heat content variations with those obtained from natural observations, an assessment of the atmospheric forcing impact on the heat content variability was obtained. Using singular value decomposition analysis, we considered relationships between the heat content variability and the wind stress curl as well as the sensible heat flux in winter. The major role of the sensible heat flux in decadal variability of the heat content of the sub-surface layer in the JES was established. The research was supported by the Russian Foundation for Basic Research (grant N 14-05-00255) and the Council on the Russian Federation President Grants (grant N MK-3241.2015.5).
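
The EOF step can be sketched with a power iteration on a toy space-time field (a fixed dipole pattern modulated in time; not the study's data): the leading eigenvector of the spatial covariance matrix recovers the dominant anomaly pattern.

```python
import math

def leading_eof(field):
    """field[t][x] holds values at time t, location x; returns the leading
    spatial EOF via power iteration on the (space x space) covariance matrix."""
    nt, nx = len(field), len(field[0])
    mean = [sum(field[t][x] for t in range(nt)) / nt for x in range(nx)]
    anom = [[field[t][x] - mean[x] for x in range(nx)] for t in range(nt)]
    cov = [[sum(anom[t][i] * anom[t][j] for t in range(nt)) / nt
            for j in range(nx)] for i in range(nx)]
    v = [1.0] * nx
    for _ in range(100):                       # power iteration
        w = [sum(cov[i][j] * v[j] for j in range(nx)) for i in range(nx)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v

# Hypothetical heat-content anomalies: a fixed dipole pattern whose amplitude
# oscillates in time (rank-one field, so the EOF is exact up to sign).
pattern = [1.0, -1.0, 0.5]
amps = [math.sin(0.3 * t) for t in range(100)]
field = [[a * p for p in pattern] for a in amps]
eof = leading_eof(field)
```

Projecting the anomalies onto this EOF yields the principal-component time series whose relationship to the forcing fields the SVD analysis then examines.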

  19. Grand canonical ensemble Monte Carlo simulation of the dCpG/proflavine crystal hydrate.

    PubMed Central

    Resat, H; Mezei, M

    1996-01-01

    The grand canonical ensemble Monte Carlo molecular simulation method is used to investigate hydration patterns in the crystal hydrate structure of the dCpG/proflavine intercalated complex. The objective of this study is to show by example that the recently advocated grand canonical ensemble simulation is a computationally efficient method for determining the positions of the hydrating water molecules in protein and nucleic acid structures. A detailed molecular simulation convergence analysis and an analogous comparison of the theoretical results with experiments clearly show that the grand ensemble simulations can be far more advantageous than the comparable canonical ensemble simulations. PMID:8873992

  20. Using deep neural networks to augment NIF post-shot analysis

    NASA Astrophysics Data System (ADS)

    Humbird, Kelli; Peterson, Luc; McClarren, Ryan; Field, John; Gaffney, Jim; Kruse, Michael; Nora, Ryan; Spears, Brian

    2017-10-01

    Post-shot analysis of National Ignition Facility (NIF) experiments is the process of determining which simulation inputs yield results consistent with experimental observations. This analysis is typically accomplished by running suites of manually adjusted simulations, or Monte Carlo sampling surrogate models that approximate the response surfaces of the physics code. These approaches are expensive and often find simulations that match only a small subset of observables simultaneously. We demonstrate an alternative method for performing post-shot analysis using inverse models, which map directly from experimental observables to simulation inputs with quantified uncertainties. The models are created using a novel machine learning algorithm which automates the construction and initialization of deep neural networks to optimize predictive accuracy. We show how these neural networks, trained on large databases of post-shot simulations, can rigorously quantify the agreement between simulation and experiment. This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
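
The inverse-model idea, learning the map from observable back to simulation input on a database of forward runs, can be sketched with a linear stand-in for the deep networks; the forward model and all numbers here are hypothetical, not NIF physics.

```python
import random

random.seed(2)

def forward(x):
    """Hypothetical 'physics code': one observable as a noisy function of one input."""
    return 2.0 * x + 1.0 + random.gauss(0.0, 0.05)

# Database of forward simulations: (input, observable) pairs.
xs = [random.uniform(0.0, 5.0) for _ in range(500)]
ys = [forward(x) for x in xs]

# Fit the *inverse* map observable -> input by least squares.
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
slope = (sum((y - my) * (x - mx) for x, y in zip(xs, ys))
         / sum((y - my) ** 2 for y in ys))
intercept = mx - slope * my

def infer_input(observable):
    return slope * observable + intercept

x_hat = infer_input(7.0)   # the true inverse of y = 2x + 1 gives x = 3
```

The paper's networks do this for many observables and inputs at once, with quantified uncertainties; the regression above is only the one-dimensional skeleton of that mapping.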

  1. Kinematics Simulation Analysis of Packaging Robot with Joint Clearance

    NASA Astrophysics Data System (ADS)

    Zhang, Y. W.; Meng, W. J.; Wang, L. Q.; Cui, G. H.

    2018-03-01

    Considering the influence of joint clearance on motion error, repeated positioning accuracy, and the overall position of the machine, this paper presents a simulation analysis of a packaging robot, a 2-degree-of-freedom (DOF) planar parallel robot, based on the high-precision and high-speed requirements of packaging equipment. The motion constraint equation of the mechanism is established, and the analysis and simulation of the motion error are carried out for the case of clearance at the revolute joint. The simulation results show that the size of the joint clearance affects the movement accuracy and packaging efficiency of the packaging robot. The analysis provides a reference for packaging equipment design and selection criteria and is of significance for automation in the packaging industry.
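
How a radial joint clearance bounds the end-effector error can be sketched with planar two-link forward kinematics; the link lengths and clearance are illustrative, not the paper's robot. Displacing each revolute pivot anywhere on its clearance circle shifts the tip, and the worst case is the sum of the clearance radii.

```python
import math

L1, L2 = 0.30, 0.25                  # link lengths in metres (hypothetical)
r = 0.0005                           # 0.5 mm radial clearance at each revolute joint
th1, th2 = math.radians(40), math.radians(65)

def tip(d1x, d1y, d2x, d2y):
    """Forward kinematics of a planar 2-link chain; (d1x, d1y) and (d2x, d2y)
    displace the two revolute-joint pivots within their clearance circles."""
    jx = d1x + L1 * math.cos(th1) + d2x
    jy = d1y + L1 * math.sin(th1) + d2y
    return (jx + L2 * math.cos(th1 + th2), jy + L2 * math.sin(th1 + th2))

x0, y0 = tip(0.0, 0.0, 0.0, 0.0)
max_err = 0.0
for k1 in range(36):                 # sweep both clearance circles
    a1 = math.radians(10 * k1)
    for k2 in range(36):
        a2 = math.radians(10 * k2)
        x, y = tip(r * math.cos(a1), r * math.sin(a1),
                   r * math.cos(a2), r * math.sin(a2))
        max_err = max(max_err, math.hypot(x - x0, y - y0))
```

In this purely translational model the worst-case tip error is exactly 2r (both pivot displacements aligned), which is the intuition behind clearance size driving positioning accuracy.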

  2. A Simulation-as-a-Service Framework Facilitating WebGIS-Based Installation Planning

    NASA Astrophysics Data System (ADS)

    Zheng, Z.; Chang, Z. Y.; Fei, Y. F.

    2017-09-01

    Installation planning is constrained by both natural and social conditions, especially for spatially sparse but functionally connected facilities. Simulation is important for properly deploying facilities in space and configuring their functions so that they form a cohesive, mutually supportive system that meets users' operational needs. Based on a requirement analysis, we propose a framework that combines GIS and agent-based simulation to overcome the shortcomings of traditional GIS in temporal analysis and task simulation. In this framework, the agent-based simulation runs as a service on the server and exposes basic simulation functions, such as scenario configuration, simulation control, and simulation data retrieval, to installation planners. At the same time, the simulation service is able to utilize various kinds of geoprocessing services in the agents' process logic to make sophisticated spatial inferences and analyses. This simulation-as-a-service framework has many potential benefits, such as ease of use, on-demand availability, shared understanding, and improved performance. Finally, we present a preliminary implementation of this concept using the ArcGIS JavaScript API 4.0 and ArcGIS for Server, showing how trip planning and driving can be carried out by agents.

  3. Effectiveness of Simulation in a Hybrid and Online Networking Course.

    ERIC Educational Resources Information Center

    Cameron, Brian H.

    2003-01-01

    Reports on a study that compares the performance of students enrolled in two sections of a Web-based computer networking course: one utilizing a simulation package and the second utilizing a static, graphical software package. Analysis shows statistically significant improvements in performance in the simulation group compared to the…

  4. Using multi-criteria analysis of simulation models to understand complex biological systems

    Treesearch

    Maureen C. Kennedy; E. David Ford

    2011-01-01

    Scientists frequently use computer-simulation models to help solve complex biological problems. Typically, such models are highly integrated, they produce multiple outputs, and standard methods of model analysis are ill suited for evaluating them. We show how multi-criteria optimization with Pareto optimality allows for model outputs to be compared to multiple system...
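
The Pareto-optimality filter at the heart of such multi-criteria analysis can be sketched directly, using hypothetical model-error pairs: keep every output vector that no other vector dominates.

```python
def dominates(q, p):
    """q dominates p when q is no worse in every objective and strictly
    better in at least one (minimization)."""
    return all(a <= b for a, b in zip(q, p)) and any(a < b for a, b in zip(q, p))

def pareto_front(points):
    """Non-dominated subset: the Pareto-optimal trade-offs."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical per-parameterization error pairs on two model outputs:
errors = [(0.1, 0.9), (0.2, 0.5), (0.4, 0.4), (0.6, 0.2),
          (0.9, 0.1), (0.5, 0.6), (0.7, 0.7)]
front = pareto_front(errors)
```

The surviving points are exactly the trade-offs worth inspecting: improving one output's fit at any of them necessarily worsens another's, which is what makes the front informative about model structure.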

  5. Dependability analysis of parallel systems using a simulation-based approach. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Sawyer, Darren Charles

    1994-01-01

    The analysis of dependability in large, complex, parallel systems executing real applications or workloads is examined in this thesis. To effectively demonstrate the wide range of dependability problems that can be analyzed through simulation, the analysis of three case studies is presented. For each case, the organization of the simulation model used is outlined, and the results from simulated fault injection experiments are explained, showing the usefulness of this method in dependability modeling of large parallel systems. The simulation models are constructed using DEPEND and C++. Where possible, methods to increase dependability are derived from the experimental results. Another interesting facet of all three cases is the presence of some kind of workload or application executing in the simulation while faults are injected. This provides a completely new dimension to this type of study, not possible to model accurately with analytical approaches.
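
The fault-injection-with-workload idea can be sketched with a toy simulation; the rates and the retry policy are invented, not DEPEND's machinery. Faults are injected at a fixed rate into a stream of tasks, and the experiment measures how a simple recovery mechanism changes the completion count.

```python
import random

random.seed(11)

def run_workload(n_tasks, fault_rate, with_retry):
    """Execute a simulated workload; each task may be hit by an injected fault."""
    done = 0
    for _ in range(n_tasks):
        ok = random.random() >= fault_rate
        if not ok and with_retry:            # simple recovery mechanism: one retry
            ok = random.random() >= fault_rate
        done += ok
    return done

n = 10_000
plain = run_workload(n, 0.05, with_retry=False)
hardened = run_workload(n, 0.05, with_retry=True)
```

Comparing the two runs quantifies the dependability gain of the mechanism under load, which is the kind of conclusion the thesis draws from its (far richer) simulated fault-injection experiments.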

  6. Topographica: Building and Analyzing Map-Level Simulations from Python, C/C++, MATLAB, NEST, or NEURON Components

    PubMed Central

    Bednar, James A.

    2008-01-01

    Many neural regions are arranged into two-dimensional topographic maps, such as the retinotopic maps in mammalian visual cortex. Computational simulations have led to valuable insights about how cortical topography develops and functions, but further progress has been hindered by the lack of appropriate tools. It has been particularly difficult to bridge across levels of detail, because simulators are typically geared to a specific level, while interfacing between simulators has been a major technical challenge. In this paper, we show that the Python-based Topographica simulator makes it straightforward to build systems that cross levels of analysis, as well as providing a common framework for evaluating and comparing models implemented in other simulators. These results rely on the general-purpose abstractions around which Topographica is designed, along with the Python interfaces becoming available for many simulators. In particular, we present a detailed, general-purpose example of how to wrap an external spiking PyNN/NEST simulation as a Topographica component using only a dozen lines of Python code, making it possible to use any of the extensive input presentation, analysis, and plotting tools of Topographica. Additional examples show how to interface easily with models in other types of simulators. Researchers simulating topographic maps externally should consider using Topographica's analysis tools (such as preference map, receptive field, or tuning curve measurement) to compare results consistently, and for connecting models at different levels. This seamless interoperability will help neuroscientists and computational scientists to work together to understand how neurons in topographic maps organize and operate. PMID:19352443

  7. Will molecular dynamics simulations of proteins ever reach equilibrium?

    PubMed

    Genheden, Samuel; Ryde, Ulf

    2012-06-28

    We show that conformational entropies calculated for five proteins and protein-ligand complexes with dihedral-distribution histogramming, the von Mises approach, or quasi-harmonic analysis do not converge to any useful precision even if molecular dynamics (MD) simulations of 380-500 ns length are employed (the uncertainty is 12-89 kJ mol⁻¹). To explain this, we suggest a simple protein model involving dihedrals with effective barriers forming a uniform distribution and show that for such a model, the entropy increases logarithmically with time until all significantly populated dihedral states have been sampled, in agreement with the simulations (during the simulations, 52-70% of the available dihedral phase space has been visited). This is also confirmed by the analysis of the trajectories of a 1 ms simulation of bovine pancreatic trypsin inhibitor (31 kJ mol⁻¹ difference in the entropy between the first and second part of the simulation). Strictly speaking, this means that it is practically impossible to equilibrate MD simulations of proteins. We discuss the implications of such a lack of strict equilibration of protein MD simulations and show that ligand-binding free energies estimated with the MM/GBSA method (molecular mechanics with generalised Born and surface-area solvation) vary by 3-15 kJ mol⁻¹ during a 500 ns simulation (the higher estimate is caused by rare conformational changes), although they involve a questionable but well-converged normal-mode entropy estimate, whereas free energies estimated by free-energy perturbation vary by less than 0.6 kJ mol⁻¹ for the same simulation.

  8. Heave-pitch-roll analysis and testing of air cushion landing systems

    NASA Technical Reports Server (NTRS)

    Boghani, A. B.; Captain, K. M.; Wormley, D. N.

    1978-01-01

    The analytical tools (analysis and computer simulation) needed to explain and predict the dynamic operation of air cushion landing systems (ACLS) are described. The following tasks were performed: development of improved analytical models for the fan and the trunk; formulation of a heave-pitch-roll analysis for the complete ACLS; development of a general-purpose computer simulation to evaluate the landing and taxi performance of an ACLS-equipped aircraft; and verification and refinement of the analysis by comparison with test data obtained through laboratory testing of a prototype cushion. Demonstrations of the simulation capabilities through typical landing and taxi simulations of an ACLS aircraft are given. Initial results show that fan dynamics have a major effect on system performance. Comparison with laboratory test data (zero forward speed) indicates that the analysis can predict most of the key static and dynamic parameters (pressure, deflection, acceleration, etc.) within a margin of 10 to 25 percent.

  9. Simulation of Transcritical CO2 Refrigeration System with Booster Hot Gas Bypass in Tropical Climate

    NASA Astrophysics Data System (ADS)

    Santosa, I. D. M. C.; Sudirman; Waisnawa, IGNS; Sunu, PW; Temaja, IW

    2018-01-01

    Computer simulation is important for performance analysis because building an experimental rig entails high cost and long lead times, especially for CO2 refrigeration systems; modifying a rig likewise requires additional cost and time. One simulation program well suited to refrigeration systems is the Engineering Equation Solver (EES). Environmental concerns have also made CO2 a priority in refrigeration system development, since carbon dioxide (CO2) is a natural, clean refrigerant. This study aims to assess the effectiveness of an EES simulation in modelling a transcritical CO2 refrigeration system with booster hot gas bypass at high outdoor temperatures. The research was carried out through theoretical study and numerical analysis of the refrigeration system using the EES program. Data input and simulation validation were obtained from experimental and secondary data. The results showed that the coefficient of performance (COP) decreased gradually as the outdoor temperature increased, and that the program can calculate the performance of the refrigeration system accurately with a short running time. The simulation therefore provides a useful preliminary reference for improving CO2 refrigeration system design for hot climates.
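The COP trend reported above follows directly from second-law limits. As a minimal sketch (not the EES model from the record: the second-law efficiency, approach temperature, and all temperatures below are invented for illustration), a Carnot COP scaled by an assumed efficiency already reproduces the monotonic decline with outdoor temperature:

```python
# Toy estimate of how refrigeration COP falls as the outdoor (heat-rejection)
# temperature rises. NOT the paper's EES model -- a Carnot-based sketch with
# an assumed 40% second-law efficiency and a 5 K gas-cooler approach.

def cop_estimate(t_evap_c, t_outdoor_c, approach_k=5.0, eta_2nd=0.40):
    """Carnot COP scaled by an assumed second-law efficiency.

    t_evap_c    : evaporating temperature [degC]
    t_outdoor_c : outdoor air temperature [degC]
    approach_k  : assumed heat-rejection approach temperature [K]
    eta_2nd     : assumed fraction of the Carnot COP achieved
    """
    t_low = t_evap_c + 273.15
    t_high = t_outdoor_c + approach_k + 273.15
    return eta_2nd * t_low / (t_high - t_low)

# COP at -5 degC evaporation for a sweep of tropical outdoor temperatures
cops = [cop_estimate(-5.0, t) for t in (28, 32, 36, 40)]
```

Any refinement of the cycle model (gas-cooler pressure optimisation, booster stages) shifts the absolute values but not the downward trend.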

  10. High-Alpha Research Vehicle Lateral-Directional Control Law Description, Analyses, and Simulation Results

    NASA Technical Reports Server (NTRS)

    Davidson, John B.; Murphy, Patrick C.; Lallman, Frederick J.; Hoffler, Keith D.; Bacon, Barton J.

    1998-01-01

    This report contains a description of a lateral-directional control law designed for the NASA High-Alpha Research Vehicle (HARV). The HARV is an F/A-18 aircraft modified to include a research flight computer, spin chute, and thrust-vectoring in the pitch and yaw axes. Two separate design tools, CRAFT and Pseudo Controls, were integrated to synthesize the lateral-directional control law. This report contains a description of the lateral-directional control law, analyses, and nonlinear simulation (batch and piloted) results. Linear analysis results include closed-loop eigenvalues, stability margins, robustness to changes in various plant parameters, and servo-elastic frequency responses. Step time responses from nonlinear batch simulation are presented and compared to design guidelines. Piloted simulation task scenarios, task guidelines, and pilot subjective ratings for the various maneuvers are discussed. Linear analysis shows that the control law meets the stability margin guidelines and is robust to stability and control parameter changes. Nonlinear batch simulation analysis shows the control law exhibits good performance and meets most of the design guidelines over the entire range of angle-of-attack. This control law (designated NASA-1A) was flight tested during the Summer of 1994 at NASA Dryden Flight Research Center.

  11. Analysis and design of high-power and efficient, millimeter-wave power amplifier systems using zero degree combiners

    NASA Astrophysics Data System (ADS)

    Tai, Wei; Abbasi, Mortez; Ricketts, David S.

    2018-01-01

    We present the analysis and design of high-power millimetre-wave power amplifier (PA) systems using zero-degree combiners (ZDCs). The methodology presented optimises the PA device sizing and the number of combined unit PAs based on device load-pull simulations, driver power consumption analysis and loss analysis of the ZDC. Our analysis shows that an optimal number of N-way combined unit PAs leads to the highest power-added efficiency (PAE) for a given output power. To illustrate our design methodology, we designed a 1-W PA system at 45 GHz in a 45 nm silicon-on-insulator process and showed that an 8-way combined PA achieves the highest PAE, with a simulated output power of 30.6 dBm and a peak PAE of 31%.

  12. Exploring Simulator Use in the Preparation of Chemical Engineers

    ERIC Educational Resources Information Center

    Yerrick, Randy; Lund, Carl; Lee, Yonghee

    2013-01-01

    In this manuscript, we report the impact of students' usage of a simulator in the preparation of chemical engineers. This case study was conducted using content pretest and posttests, survey questionnaires, interviews, classroom observations, and an analysis of students' written response to design problems. Results showed the use of simulator was…

  13. Linear and nonlinear ARMA model parameter estimation using an artificial neural network

    NASA Technical Reports Server (NTRS)

    Chon, K. H.; Cohen, R. J.

    1997-01-01

    This paper addresses parametric system identification of linear and nonlinear dynamic systems by analysis of the input and output signals. Specifically, we investigate the relationship between estimation of the system using a feedforward neural network model and estimation of the system by use of linear and nonlinear autoregressive moving-average (ARMA) models. By utilizing a neural network model incorporating a polynomial activation function, we show the equivalence of the artificial neural network to the linear and nonlinear ARMA models. We compare the parameterization of the estimated system using the neural network and ARMA approaches by utilizing data generated by means of computer simulations. Specifically, we show that the parameters of a simulated ARMA system can be obtained from the neural network analysis of the simulated data or by conventional least squares ARMA analysis. The feasibility of applying neural networks with polynomial activation functions to the analysis of experimental data is explored by application to measurements of heart rate (HR) and instantaneous lung volume (ILV) fluctuations.
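The "conventional least squares ARMA analysis" the abstract compares against can be sketched for the pure-AR case, where it reduces to solving the normal equations. The coefficients, sample size, and noise level below are invented for illustration; a genuine ARMA fit would also estimate the moving-average terms:

```python
import random

# Sketch: recover AR(2) coefficients from simulated data by least squares.
# Model: y_t = a1*y_{t-1} + a2*y_{t-2} + e_t, with invented a1, a2.
random.seed(0)
a1_true, a2_true = 0.5, -0.3
n = 5000
y = [0.0, 0.0]
for _ in range(n):
    y.append(a1_true * y[-1] + a2_true * y[-2] + random.gauss(0.0, 1.0))

# Normal equations (2x2) for the least-squares estimate
s11 = sum(v * v for v in y[1:-1])                    # sum y_{t-1}^2
s22 = sum(v * v for v in y[:-2])                     # sum y_{t-2}^2
s12 = sum(a * b for a, b in zip(y[1:-1], y[:-2]))    # sum y_{t-1} y_{t-2}
b1 = sum(a * b for a, b in zip(y[2:], y[1:-1]))      # sum y_t y_{t-1}
b2 = sum(a * b for a, b in zip(y[2:], y[:-2]))       # sum y_t y_{t-2}

det = s11 * s22 - s12 * s12
a1_hat = (b1 * s22 - b2 * s12) / det
a2_hat = (s11 * b2 - s12 * b1) / det
```

The paper's point is that a neural network with polynomial activations converges to the same parameterization as this estimate.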

  14. Numerical Prediction of the Influence of Thrust Reverser on Aeroengine's Aerodynamic Stability

    NASA Astrophysics Data System (ADS)

    Zhiqiang, Wang; Xigang, Shen; Jun, Hu; Xiang, Gao; Liping, Liu

    2017-11-01

    A numerical method was developed to predict the aerodynamic stability of a high bypass ratio turbofan engine at the landing stage of a large transport aircraft, when the thrust reverser is deployed. A 3D CFD simulation and a 2D aeroengine aerodynamic stability analysis code were combined in this work; the former supplies the distortion coefficients needed for the stability analysis. The 3D CFD simulation was divided into two steps, the single-engine calculation and the integrated aircraft-and-engine calculation. Results of the CFD simulation show that, as the relative wind Mach number decreases, the engine inlet suffers more severe flow distortion. The total pressure and total temperature distortion coefficients at the inlet of the engines were obtained from the results of the numerical simulation. Then an aeroengine aerodynamic stability analysis program was used to quantitatively analyze the aerodynamic stability of the high bypass ratio turbofan engine. The results of the stability analysis show that the engine can work stably when the reverser flow is re-ingested, but the distortion tolerance of the booster is weaker than that of the fan and high-pressure compressor, making it the weak link in engine stability.

  15. Open-source framework for power system transmission and distribution dynamics co-simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Renke; Fan, Rui; Daily, Jeff

    The promise of the smart grid entails more interactions between the transmission and distribution networks, and there is an immediate need for tools to provide the comprehensive modelling and simulation required to integrate operations at both transmission and distribution levels. Existing electromagnetic transient simulators can perform simulations with integration of transmission and distribution systems, but the computational burden is high for large-scale system analysis. For transient stability analysis, currently there are only separate tools for simulating transient dynamics of the transmission and distribution systems. In this paper, we introduce an open source co-simulation framework, “Framework for Network Co-Simulation” (FNCS), together with the decoupled simulation approach that links existing transmission and distribution dynamic simulators through FNCS. FNCS is a middleware interface and framework that manages the interaction and synchronization of the transmission and distribution simulators. Preliminary testing results show the validity and capability of the proposed open-source co-simulation framework and the decoupled co-simulation methodology.
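The decoupled exchange a middleware like FNCS coordinates can be caricatured in a few lines: two solvers advance separately and swap boundary values (voltage down to the feeder, aggregate load power back up) at each synchronization step. Both "solvers" and all numbers below are invented toy models, not FNCS or any real simulator API:

```python
# Toy sketch of time-synchronized co-simulation: a "transmission" solver and
# a "distribution" solver exchange boundary values each step until the pair
# settles on a consistent operating point. Models/numbers are invented.

def transmission_step(p_load_mw):
    """Boundary voltage sags linearly with reported load (toy model)."""
    return 1.05 - 0.02 * p_load_mw

def distribution_step(v_boundary):
    """Constant-impedance feeder: drawn power scales with V^2 (toy model)."""
    return 2.0 * v_boundary ** 2

v, p = 1.0, 2.0                 # initial guesses at the boundary
for _ in range(20):             # synchronization steps (middleware's job)
    v = transmission_step(p)    # transmission uses the last reported load
    p = distribution_step(v)    # distribution uses the last boundary voltage

v_final, p_final = v, p         # the exchange converges to a fixed point
```

In a real co-simulation the middleware additionally manages time stepping and event ordering; the fixed-point behaviour shown here is why a loosely coupled exchange can still reach a self-consistent solution.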

  16. Immortal time bias in observational studies of time-to-event outcomes.

    PubMed

    Jones, Mark; Fowler, Robert

    2016-12-01

    The purpose of the study is to show, through simulation and example, the magnitude and direction of immortal time bias when an inappropriate analysis is used. We compare 4 methods of analysis for observational studies of time-to-event outcomes: logistic regression, standard Cox model, landmark analysis, and time-dependent Cox model using an example data set of patients critically ill with influenza and a simulation study. For the example data set, logistic regression, standard Cox model, and landmark analysis all showed some evidence that treatment with oseltamivir provides protection from mortality in patients critically ill with influenza. However, when the time-dependent nature of treatment exposure is taken account of using a time-dependent Cox model, there is no longer evidence of a protective effect of treatment. The simulation study showed that, under various scenarios, the time-dependent Cox model consistently provides unbiased treatment effect estimates, whereas standard Cox model leads to bias in favor of treatment. Logistic regression and landmark analysis may also lead to bias. To minimize the risk of immortal time bias in observational studies of survival outcomes, we strongly suggest time-dependent exposures be included as time-dependent variables in hazard-based analyses. Copyright © 2016 Elsevier Inc. All rights reserved.
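The mechanism behind immortal time bias is easy to reproduce: a patient can only become "ever treated" by surviving until treatment, so the naive ever-treated group is selected for longer survival even when treatment does nothing. A minimal simulation (all rates and the follow-up window are invented; this is not the authors' simulation design):

```python
import random

# Null treatment effect by construction: death time is independent of
# treatment. The naive ever-treated vs never-treated comparison still
# makes treatment look protective, because receiving treatment requires
# surviving until the (random) treatment time.
random.seed(42)
n, follow_up = 20000, 30.0
dead_treated = n_treated = dead_untreated = n_untreated = 0

for _ in range(n):
    death_time = random.expovariate(1 / 20.0)   # independent of treatment
    treat_time = random.expovariate(1 / 10.0)   # time until prescription
    treated = treat_time < min(death_time, follow_up)  # must be alive
    died = death_time <= follow_up
    if treated:
        n_treated += 1
        dead_treated += died
    else:
        n_untreated += 1
        dead_untreated += died

mort_treated = dead_treated / n_treated
mort_untreated = dead_untreated / n_untreated
# mort_treated comes out well below mort_untreated despite the null effect
```

A time-dependent Cox model avoids this by letting each patient contribute untreated person-time before their treatment time and treated person-time after it.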

  17. [Research on Time-frequency Characteristics of Magneto-acoustic Signal of Different Thickness Medium Based on Wave Summing Method].

    PubMed

    Zhang, Shunqi; Yin, Tao; Ma, Ren; Liu, Zhipeng

    2015-08-01

    Functional imaging of biological electrical characteristics based on the magneto-acoustic effect provides valuable tissue information for early tumor diagnosis; the analysis of the time and frequency characteristics of the magneto-acoustic signal is therefore important for image reconstruction. This paper proposes a wave-summing method based on the Green's function solution for the acoustic source of the magneto-acoustic effect. Simulations and analyses under a quasi-1D transmission condition were carried out on the time and frequency characteristics of the magneto-acoustic signal for models of different thickness, and the simulated signals were verified through experiments. The simulations showed that the time-frequency characteristics of the magneto-acoustic signal reflect the thickness of the sample: a thin sample (less than one wavelength of the pulse) and a thick sample (larger than one wavelength) show different summed waveforms and frequency characteristics, owing to the difference in summing thickness. Experimental results verified the theoretical analysis and the simulation results. This research lays a foundation for acoustic-source and conductivity reconstruction in media of different thickness in magneto-acoustic imaging.

  18. Multiscale analysis of structure development in expanded starch snacks

    NASA Astrophysics Data System (ADS)

    van der Sman, R. G. M.; Broeze, J.

    2014-11-01

    In this paper we perform a multiscale analysis of the food structuring process of the expansion of starchy snack foods like keropok, which attain a solid foam structure. In particular, we investigate the validity of the hypothesis of Kokini and coworkers that expansion is optimal at the moisture content where the glass transition and the boiling line intersect. In our analysis we make use of several tools: (1) time scale analysis from the field of physical transport phenomena, (2) the scale separation map (SSM) developed within a multiscale simulation framework of complex automata, (3) the supplemented state diagram (SSD), depicting phase transition and glass transition lines, and (4) a multiscale simulation model for the bubble expansion. Results of the time scale analysis are plotted in the SSD, and give insight into the dominant physical processes involved in expansion. Furthermore, the results of the time scale analysis are used to construct the SSM, which has aided us in the construction of the multiscale simulation model. Simulation results are plotted in the SSD. This clearly shows that the hypothesis of Kokini is qualitatively true, but has to be refined. Our results show that bubble expansion is optimal for the moisture content where the boiling line for a gas pressure of 4 bar intersects the isoviscosity line of the critical viscosity 10⁶ Pa·s, which runs parallel to the glass transition line.

  19. Instrumental resolution of the chopper spectrometer 4SEASONS evaluated by Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Kajimoto, Ryoichi; Sato, Kentaro; Inamura, Yasuhiro; Fujita, Masaki

    2018-05-01

    We performed simulations of the resolution function of the 4SEASONS spectrometer at J-PARC by using the Monte Carlo simulation package McStas. The simulations showed reasonably good agreement with analytical calculations of energy and momentum resolutions by using a simplified description. We implemented new functionalities in Utsusemi, the standard data analysis tool used in 4SEASONS, to enable visualization of the simulated resolution function and predict its shape for specific experimental configurations.

  20. Simulation of tunneling construction methods of the Cisumdawu toll road

    NASA Astrophysics Data System (ADS)

    Abduh, Muhamad; Sukardi, Sapto Nugroho; Ola, Muhammad Rusdian La; Ariesty, Anita; Wirahadikusumah, Reini D.

    2017-11-01

    Simulation can be used as a tool for planning and analysis of a construction method. Using simulation techniques, a contractor can optimally design the resources associated with a construction method and compare it to other methods against several criteria, such as productivity, waste, and cost. This paper discusses the use of simulation of the Norwegian Method of Tunneling (NMT) for a 472-meter tunneling work in the Cisumdawu Toll Road project. Primary and secondary data were collected to provide useful information for the simulation as well as problems that may be faced by the contractor. The method was modelled using CYCLONE and then simulated using WebCYCLONE. The simulation gives the duration of the project from duration models of each work task, based on the literature, machine productivity, and several assumptions. It also gives the total cost of the project, modelled from published construction and building unit costs and from online price lists of local and international suppliers. The advantages and disadvantages of the method were then analyzed based on its productivity, waste, and cost. The simulation put the total cost of this operation at about Rp. 900,437,004,599 and the total duration of the tunneling operation at 653 days. The results of the simulation will be used as a recommendation to the contractor before implementation of the already selected tunneling operation.

  1. Cognitive simulation as a tool for cognitive task analysis.

    PubMed

    Roth, E M; Woods, D D; Pople, H E

    1992-10-01

    Cognitive simulations are runnable computer programs that represent models of human cognitive activities. We show how one cognitive simulation built as a model of some of the cognitive processes involved in dynamic fault management can be used in conjunction with small-scale empirical data on human performance to uncover the cognitive demands of a task, to identify where intention errors are likely to occur, and to point to improvements in the person-machine system. The simulation, called Cognitive Environment Simulation or CES, has been exercised on several nuclear power plant accident scenarios. Here we report one case to illustrate how a cognitive simulation tool such as CES can be used to clarify the cognitive demands of a problem-solving situation as part of a cognitive task analysis.

  2. Analysis of simulated angiographic procedures. Part 2: extracting efficiency data from audio and video recordings.

    PubMed

    Duncan, James R; Kline, Benjamin; Glaiberman, Craig B

    2007-04-01

    To create and test methods of extracting efficiency data from recordings of simulated renal stent procedures. Task analysis was performed and used to design a standardized testing protocol. Five experienced angiographers then performed 16 renal stent simulations using the Simbionix AngioMentor angiographic simulator. Audio and video recordings of these simulations were captured from multiple vantage points. The recordings were synchronized and compiled. A series of efficiency metrics (procedure time, contrast volume, and tool use) were then extracted from the recordings. The intraobserver and interobserver variability of these individual metrics was also assessed. The metrics were converted to costs and aggregated to determine the fixed and variable costs of a procedure segment or the entire procedure. Task analysis and pilot testing led to a standardized testing protocol suitable for performance assessment. Task analysis also identified seven checkpoints that divided the renal stent simulations into six segments. Efficiency metrics for these different segments were extracted from the recordings and showed excellent intra- and interobserver correlations. Analysis of the individual and aggregated efficiency metrics demonstrated large differences between segments as well as between different angiographers. These differences persisted when efficiency was expressed as either total or variable costs. Task analysis facilitated both protocol development and data analysis. Efficiency metrics were readily extracted from recordings of simulated procedures. Aggregating the metrics and dividing the procedure into segments revealed potential insights that could be easily overlooked because the simulator currently does not attempt to aggregate the metrics and only provides data derived from the entire procedure. The data indicate that analysis of simulated angiographic procedures will be a powerful method of assessing performance in interventional radiology.

  3. Absence of single critical dose for the amorphization of quartz under ion irradiation

    NASA Astrophysics Data System (ADS)

    Zhang, S.; Pakarinen, O. H.; Backholm, M.; Djurabekova, F.; Nordlund, K.; Keinonen, J.; Wang, T. S.

    2018-01-01

    In this work, we first simulated the amorphization of crystalline quartz under 50 keV ²³Na ion irradiation with classical molecular dynamics (MD). We then used binary collision approximation algorithms to simulate the Rutherford backscattering spectrometry in channeling conditions (RBS-C) from these irradiated MD cells, and compared the RBS-C spectra with experiments. The simulated RBS-C results show an agreement with experiments in the evolution of amorphization as a function of dose, showing what appears to be (by this measure) full amorphization at about 2.2 eV·atom⁻¹. We also applied other analysis methods, such as angular structure factor, Wigner-Seitz, coordination analysis and topological analysis, to analyze the structural evolution of the irradiated MD cells. The results show that the atomic-level structure of the sample keeps evolving after the RBS signal has saturated, until the dose of about 5 eV·atom⁻¹. The continued evolution of the SiO2 structure makes the definition of what is, on the atomic level, an amorphized quartz ambiguous.
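Among the analysis methods listed, Wigner-Seitz defect counting has a particularly compact core: assign each atom to its nearest perfect-lattice site, then read vacancies off empty cells and interstitials off multiply occupied ones. A one-dimensional sketch with an invented lattice and atom positions (real analyses work on 3D cells with periodic boundaries):

```python
# Toy 1D Wigner-Seitz defect count. Each atom is binned to the nearest
# perfect-lattice site; empty cells are vacancies, multiply occupied cells
# contribute interstitials. Positions below are invented: a Frenkel-like
# pair with a vacancy near site 3 and a crowded cell at site 6.

lattice = [float(i) for i in range(10)]          # perfect sites 0..9
atoms = [0.1, 0.9, 2.2, 3.9, 5.1, 5.8, 6.2, 7.1, 7.9, 9.0]

occupancy = [0] * len(lattice)
for x in atoms:
    nearest = min(range(len(lattice)), key=lambda i: abs(lattice[i] - x))
    occupancy[nearest] += 1

vacancies = sum(1 for c in occupancy if c == 0)
interstitials = sum(c - 1 for c in occupancy if c > 1)
```

Because the count is taken against the reference lattice, it keeps registering structural change (recombination, cell-to-cell migration) even after a channeling signal such as RBS-C has saturated.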

  4. Absence of single critical dose for the amorphization of quartz under ion irradiation.

    PubMed

    Zhang, S; Pakarinen, O H; Backholm, M; Djurabekova, F; Nordlund, K; Keinonen, J; Wang, T S

    2018-01-10

    In this work, we first simulated the amorphization of crystalline quartz under 50 keV ²³Na ion irradiation with classical molecular dynamics (MD). We then used binary collision approximation algorithms to simulate the Rutherford backscattering spectrometry in channeling conditions (RBS-C) from these irradiated MD cells, and compared the RBS-C spectra with experiments. The simulated RBS-C results show an agreement with experiments in the evolution of amorphization as a function of dose, showing what appears to be (by this measure) full amorphization at about 2.2 eV⋅atom⁻¹. We also applied other analysis methods, such as angular structure factor, Wigner-Seitz, coordination analysis and topological analysis, to analyze the structural evolution of the irradiated MD cells. The results show that the atomic-level structure of the sample keeps evolving after the RBS signal has saturated, until the dose of about 5 eV⋅atom⁻¹. The continued evolution of the SiO2 structure makes the definition of what is, on the atomic level, an amorphized quartz ambiguous.

  5. Sobol' sensitivity analysis for stressor impacts on honeybee ...

    EPA Pesticide Factsheets

    We employ Monte Carlo simulation and nonlinear sensitivity analysis techniques to describe the dynamics of a bee exposure model, VarroaPop. Daily simulations are performed of hive population trajectories, taking into account queen strength, foraging success, mite impacts, weather, colony resources, population structure, and other important variables. This allows us to test the effects of defined pesticide exposure scenarios versus controlled simulations that lack pesticide exposure. The daily resolution of the model also allows us to conditionally identify sensitivity metrics. We use the variance-based global decomposition sensitivity analysis method of Sobol' to assess first- and second-order parameter sensitivities within VarroaPop, allowing us to determine how variance in the output is attributed to each of the input variables across different exposure scenarios. Simulations with VarroaPop indicate queen strength, forager life span and pesticide toxicity parameters are consistent, critical inputs for colony dynamics. Further analysis also reveals that the relative importance of these parameters fluctuates throughout the simulation period according to the status of other inputs. Our preliminary results show that model variability is conditional and can be attributed to different parameters depending on different timescales. By using sensitivity analysis to assess model output and variability, calibrations of simulation models can be better informed to yield more
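A first-order Sobol' index measures the fraction of output variance explained by one input alone. The standard Monte Carlo route is a pick-freeze estimate: evaluate the model twice with the input of interest held fixed and everything else resampled, then take the covariance of the two outputs over the output variance. A minimal sketch on a toy additive model with a known answer (the model and sample size are invented; applying this to something like VarroaPop only changes the model function and input sampling):

```python
import random

# Pick-freeze Monte Carlo estimate of the first-order Sobol' index S1.
# Toy model Y = 3*X1 + X2 with X1, X2 ~ U(0,1); analytically S1 = 0.9.
random.seed(1)

def model(x1, x2):
    return 3.0 * x1 + x2

n = 100_000
y, y_pick = [], []
for _ in range(n):
    x1 = random.random()
    x2 = random.random()
    x2_new = random.random()          # resample everything except X1
    y.append(model(x1, x2))
    y_pick.append(model(x1, x2_new))  # "freeze" X1, "pick" a fresh X2

mean_y = sum(y) / n
var_y = sum(v * v for v in y) / n - mean_y ** 2
cov = sum(a * b for a, b in zip(y, y_pick)) / n - mean_y ** 2
s1 = cov / var_y                      # first-order index for X1
```

Libraries such as SALib wrap the same idea with more efficient Saltelli sampling and also return second-order and total-order indices.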

  6. Survey of outcomes in a faculty development program on simulation pedagogy.

    PubMed

    Roh, Young Sook; Kim, Mi Kang; Tangkawanich, Thitiarpha

    2016-06-01

    Although many nursing programs use simulation as a teaching-learning modality, there are few systematic approaches to help nursing educators learn this pedagogy. This study evaluates the effects of a simulation pedagogy nursing faculty development program on participants' learning perceptions using a retrospective pre-course and post-course design. Sixteen Thai participants completed a two-day nursing faculty development program on simulation pedagogy. Thirteen questionnaires were used in the final analysis. The participants' self-perceived learning about simulation teaching showed significant post-course improvement. On a five-point Likert scale, the composite mean attitude, subjective norm, and perceived behavioral control scores, as well as intention to use a simulator, showed a significant post-course increase. A faculty development program on simulation pedagogy induced favorable learning and attitudes. Further studies must test how faculty performance affects the cognitive, emotional, and social dimensions of learning in a simulation-based learning domain. © 2015 Wiley Publishing Asia Pty Ltd.

  7. Improvements of the two-dimensional FDTD method for the simulation of normal- and superconducting planar waveguides using time series analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hofschen, S.; Wolff, I.

    1996-08-01

    Time-domain simulation results of two-dimensional (2-D) planar waveguide finite-difference time-domain (FDTD) analysis are normally analyzed using the Fourier transform. The introduced method of time series analysis for extracting propagation and attenuation constants reduces the required computation time drastically. Additionally, a nonequidistant discretization together with an adequate excitation technique is used to reduce the number of spatial grid points. Therefore, it is possible to simulate normal- and superconducting planar waveguide structures with very thin conductors and small dimensions, as they are used in MMIC technology. The simulation results are compared with measurements and show good agreement.
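The reason a time-series fit can beat a long Fourier transform is that a single decaying mode is fully described by two constants, which a short record already pins down. As an illustrative sketch (a noiseless single complex mode with invented constants, not the paper's actual estimator), the sample-to-sample ratio of the analytic signal gives the attenuation and phase constants directly:

```python
import cmath
import math

# A single decaying mode sampled uniformly: y_n = exp((-alpha + i*omega)*n).
# The ratio y_{n+1}/y_n equals exp(-alpha + i*omega), so a short record
# yields both constants without any FFT. Constants below are invented.

alpha_true, omega_true = 0.02, 0.9
y = [cmath.exp(complex(-alpha_true, omega_true) * n) for n in range(50)]

# Average the ratio over the record (with noise, a least-squares fit of the
# same one-tap prediction would be used instead of a plain average).
ratios = [y[n + 1] / y[n] for n in range(len(y) - 1)]
mean_ratio = sum(ratios) / len(ratios)

alpha_est = -math.log(abs(mean_ratio))   # attenuation constant per sample
omega_est = cmath.phase(mean_ratio)      # phase advance per sample
```

With noise or several modes, Prony-type or ARMA fits generalize this one-tap predictor, which is the family of methods the abstract's "time series analysis" refers to.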

  8. A Model for Simulating the Response of Aluminum Honeycomb Structure to Transverse Loading

    NASA Technical Reports Server (NTRS)

    Ratcliffe, James G.; Czabaj, Michael W.; Jackson, Wade C.

    2012-01-01

    A 1-dimensional material model was developed for simulating the transverse (thickness-direction) loading and unloading response of aluminum honeycomb structure. The model was implemented as a user-defined material subroutine (UMAT) in the commercial finite element analysis code, ABAQUS®/Standard. The UMAT has been applied to analyses for simulating quasi-static indentation tests on aluminum honeycomb-based sandwich plates. Comparison of analysis results with data from these experiments shows overall good agreement. Specifically, analyses of quasi-static indentation tests yielded accurate global specimen responses. Predicted residual indentation was also in reasonable agreement with measured values. Overall, this simple model does not involve a significant computational burden, which makes it more tractable to simulate other damage mechanisms in the same analysis.

  9. Brightness analysis of an electron beam with a complex profile

    NASA Astrophysics Data System (ADS)

    Maesaka, Hirokazu; Hara, Toru; Togawa, Kazuaki; Inagaki, Takahiro; Tanaka, Hitoshi

    2018-05-01

    We propose a novel analysis method to obtain the bright core part of an electron beam with a complex phase-space profile. This method is useful for evaluating simulation data of a linear accelerator (linac), such as an x-ray free electron laser (XFEL) machine, since the phase-space distribution of a linac electron beam is not simple compared to a Gaussian beam in a synchrotron. In this analysis, the brightness of undulator radiation is calculated and the core of the electron beam is determined by maximizing this brightness. We successfully extracted core electrons from a complex beam profile of XFEL simulation data that could not be expressed by a set of slice parameters. FEL simulations showed that the FEL intensity was largely preserved even after extracting the core part. Consequently, the FEL performance can be estimated by this analysis without time-consuming FEL simulations.
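The core-extraction idea rests on a standard quantity: the rms emittance of a particle ensemble, which shrinks when large-amplitude halo particles are discarded. A sketch with an invented core-plus-halo distribution (this is the generic emittance bookkeeping, not the paper's brightness-maximization criterion, which weights charge against emittance for undulator radiation):

```python
import math
import random

# rms emittance of a 2D (x, x') ensemble, before and after discarding the
# 10% of particles with the largest amplitude. Distribution is invented:
# a Gaussian core plus a broad Gaussian halo.
random.seed(7)

def rms_emittance(parts):
    n = len(parts)
    mx = sum(x for x, _ in parts) / n
    mxp = sum(xp for _, xp in parts) / n
    sxx = sum((x - mx) ** 2 for x, _ in parts) / n
    spp = sum((xp - mxp) ** 2 for _, xp in parts) / n
    sxp = sum((x - mx) * (xp - mxp) for x, xp in parts) / n
    return math.sqrt(sxx * spp - sxp * sxp)

core = [(random.gauss(0, 1.0), random.gauss(0, 1.0)) for _ in range(9000)]
halo = [(random.gauss(0, 4.0), random.gauss(0, 4.0)) for _ in range(1000)]
beam = core + halo

eps_full = rms_emittance(beam)
beam_sorted = sorted(beam, key=lambda p: p[0] ** 2 + p[1] ** 2)
eps_core = rms_emittance(beam_sorted[:9000])   # keep the smallest 90%
```

A brightness-based criterion automates the choice of how much to discard: it keeps trimming only while the emittance reduction outweighs the charge loss.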

  10. An application of sedimentation simulation in Tahe oilfield

    NASA Astrophysics Data System (ADS)

    Tingting, He; Lei, Zhao; Xin, Tan; Dongxu, He

    2017-12-01

    The braided river delta develops in the Triassic lower oil formation in block 9 of the Tahe oilfield, but its sedimentary evolution process is unclear. Using sedimentation simulation technology, the sedimentation process and distribution of the braided river delta are studied based on geological parameters including the sequence stratigraphic division, initial sedimentation environment, relative lake level change and accommodation change, source supply, and sedimentary transport pattern. The simulation shows that the error between the simulated and actual strata thickness is small, and the single-well analysis of the simulation is highly consistent with the actual analysis, indicating that the model is reliable. The study area records a retrogradational braided river delta evolution, which provides a favorable basis for fine reservoir description and prediction.

  11. Finite element analysis of 2-station hip simulator

    NASA Astrophysics Data System (ADS)

    Fazli, M. I. M.; Yahya, A.; Shahrom, A.; Nawawi, S. W.; Zainudin, M. R.; Nazarudin, M. S.

    2017-10-01

    This paper presents an analysis of the materials and design architecture of a 2-station hip simulator, a machine used to conduct joint wear tests of hip prostheses. In earlier work, the hip simulator was modified and improved using SolidWorks software. The simulator has three degrees of freedom, each controlled by a separate stepper motor, and a static load that is set up manually at each station. In this work, finite element analysis (FEA) of the hip simulator was carried out to analyse the structure of the design and the materials selected for the simulator components. The analysis covers two categories: safety factor and stress. Both the design drawing and the FEA were done in SolidWorks. The two categories were studied by applying a peak load of up to 4000 N on the main frame fitted with a metal-on-metal hip prosthesis. From the FEA, the safety factor and the degree of stress formation were obtained. All components exceed a safety factor of 2, while the stress analysis shows values higher than the yield strength of the material. These results identify the parts of the simulator that are susceptible to failure, and can be used to improve the design and verify the stability of the hip simulator in real applications.

  12. Global gyrokinetic simulations of intrinsic rotation in ASDEX Upgrade Ohmic L-mode plasmas

    NASA Astrophysics Data System (ADS)

    Hornsby, W. A.; Angioni, C.; Lu, Z. X.; Fable, E.; Erofeev, I.; McDermott, R.; Medvedeva, A.; Lebschy, A.; Peeters, A. G.; The ASDEX Upgrade Team

    2018-05-01

    Non-linear, radially global turbulence simulations of ASDEX Upgrade (AUG) plasmas are performed, and the nonlinearly generated intrinsic flow agrees with the intrinsic flow gradients measured in the core of Ohmic L-mode plasmas at nominal parameters. Simulations utilising the kinetic electron model show hollow intrinsic flow profiles, as seen in the majority of experiments performed at similar plasma parameters. In addition, significantly larger flow gradients are seen than in a previous flux-tube analysis (Hornsby et al 2017 Nucl. Fusion 57 046008). Adiabatic electron model simulations can show a flow profile with the opposite sign of gradient with respect to a kinetic electron simulation, implying a reversal in the sign of the residual stress due to kinetic electrons. The shaping of the intrinsic flow is strongly determined by the density gradient profile. The sensitivity of the residual stress to variations in density profile curvature is calculated and seen to be significantly stronger than its sensitivity to neoclassical flows (Hornsby et al 2017 Nucl. Fusion 57 046008). This variation is strong enough on its own to explain the large variations in the intrinsic flow gradients seen in some AUG experiments. Analysis of the symmetry breaking properties of the turbulence shows that profile shearing is the dominant mechanism in producing a finite parallel wave-number, with turbulence gradient effects contributing a smaller portion of the parallel wave-vector.

  13. No rationale for 1 variable per 10 events criterion for binary logistic regression analysis.

    PubMed

    van Smeden, Maarten; de Groot, Joris A H; Moons, Karel G M; Collins, Gary S; Altman, Douglas G; Eijkemans, Marinus J C; Reitsma, Johannes B

    2016-11-24

    Ten events per variable (EPV) is a widely advocated minimal criterion for sample size considerations in logistic regression analysis. Of three previous simulation studies that examined this minimal EPV criterion, only one supports the use of a minimum of 10 EPV. In this paper, we examine the reasons for the substantial differences between these extensive simulation studies. The current study uses Monte Carlo simulations to evaluate small sample bias, coverage of confidence intervals, and mean square error of logit coefficients. Logistic regression models fitted by maximum likelihood and by a modified estimation procedure, known as Firth's correction, are compared. The results show that the problems associated with low EPV depend on factors besides EPV itself, such as the total sample size. It is also demonstrated that simulation results can be dominated by even a few simulated data sets for which the prediction of the outcome by the covariates is perfect ('separation'). We reveal that different approaches for identifying and handling separation lead to substantially different simulation results. We further show that Firth's correction can be used to improve the accuracy of regression coefficients and alleviate the problems associated with separation. The current evidence supporting EPV rules for binary logistic regression is weak. Given our findings, there is an urgent need for new research to provide guidance for sample size considerations in binary logistic regression analysis.
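
    The kind of Monte Carlo experiment described can be sketched in a few lines for the special case of one binary covariate, where the maximum-likelihood log odds ratio has a closed form computable from a 2x2 table. This is a simplified illustration, not the authors' simulation design: the half-count (Haldane-Anscombe) cell correction below stands in for Firth's more general penalization, and the function name and parameter values are purely illustrative.

```python
import math
import random

def simulate_small_sample_bias(n=50, true_beta=1.0, intercept=-1.0,
                               n_sims=500, seed=1):
    """Monte Carlo sketch of small-sample logistic regression bias with one
    binary covariate x.  The MLE of the log odds ratio is
    log(n11*n00 / (n10*n01)); a zero cell means 'separation' (the MLE is
    infinite).  Adding 0.5 to every cell, in the spirit of Firth's
    correction, keeps the estimate finite."""
    rng = random.Random(seed)
    mle, corrected, separated = [], [], 0
    for _ in range(n_sims):
        counts = [[0, 0], [0, 0]]           # counts[x][y]
        for _ in range(n):
            x = rng.random() < 0.5
            p = 1.0 / (1.0 + math.exp(-(intercept + true_beta * x)))
            counts[int(x)][int(rng.random() < p)] += 1
        if 0 in (counts[0][0], counts[0][1], counts[1][0], counts[1][1]):
            separated += 1                  # flag separated data sets
        else:
            mle.append(math.log(counts[1][1] * counts[0][0]
                                / (counts[1][0] * counts[0][1])))
        c = [[v + 0.5 for v in row] for row in counts]
        corrected.append(math.log(c[1][1] * c[0][0] / (c[1][0] * c[0][1])))
    mean = lambda v: sum(v) / len(v)
    return mean(mle), mean(corrected), separated

beta_mle, beta_corr, n_sep = simulate_small_sample_bias()
```

    Shrinking `n` or `intercept` lowers the event count (and hence EPV), which inflates both the bias of the uncorrected estimate and the frequency of separated data sets, mirroring the paper's point that total sample size matters beyond the EPV ratio alone.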

  14. Simulation and analysis of conjunctive use with MODFLOW's farm process

    USGS Publications Warehouse

    Hanson, R.T.; Schmid, W.; Faunt, C.C.; Lockwood, B.

    2010-01-01

    The extension of MODFLOW onto the landscape with the Farm Process (MF-FMP) facilitates fully coupled simulation of the use and movement of water from precipitation, streamflow and runoff, groundwater flow, and consumption by natural and agricultural vegetation throughout the hydrologic system at all times. This allows for more complete analysis of conjunctive-use water-resource systems than previously possible with MODFLOW by combining relevant aspects of the landscape with the groundwater and surface water components. This analysis is accomplished using distributed cell-by-cell supply-constrained and demand-driven components across the landscape within "water-balance subregions" comprised of one or more model cells that can represent a single farm, a group of farms, or other hydrologic or geopolitical entities. Simulations of micro-agriculture in the Pajaro Valley and macro-agriculture in the Central Valley are used to demonstrate the utility of MF-FMP. For Pajaro Valley, the simulation of an aquifer storage and recovery system and a related coastal water distribution system to supplant coastal pumpage was analyzed subject to climate variations and additional supplemental sources such as local runoff. For the Central Valley, analysis of conjunctive use from the different hydrologic settings of the northern and southern subregions shows how and when precipitation, surface water, and groundwater are important to conjunctive use. The examples show that through MF-FMP's ability to simulate natural and anthropogenic components of the hydrologic cycle, the distribution and dynamics of supply and demand can be analyzed, understood, and managed. This analysis of conjunctive use would be difficult without embedding these components in the simulation, as they are difficult to estimate a priori. Journal compilation © 2010 National Ground Water Association. No claim to original US government works.

  15. High-Alpha Research Vehicle (HARV) longitudinal controller: Design, analyses, and simulation results

    NASA Technical Reports Server (NTRS)

    Ostroff, Aaron J.; Hoffler, Keith D.; Proffitt, Melissa S.; Brown, Philip W.; Phillips, Michael R.; Rivers, Robert A.; Messina, Michael D.; Carzoo, Susan W.; Bacon, Barton J.; Foster, John F.

    1994-01-01

    This paper describes the design, analysis, and nonlinear simulation results (batch and piloted) for a longitudinal controller that is scheduled to be flight-tested on the High-Alpha Research Vehicle (HARV). The HARV is an F-18 airplane modified for and equipped with multi-axis thrust vectoring. The paper includes a description of the facilities, a detailed review of the feedback controller design, linear analysis results of the feedback controller, a description of the feed-forward controller design, nonlinear batch simulation results, and piloted simulation results. Batch simulation results include maximum pitch stick agility responses, angle-of-attack (alpha) captures, and alpha regulation for full lateral stick rolls at several alphas. Piloted simulation results include task descriptions for several types of maneuvers, task guidelines, the corresponding Cooper-Harper ratings from three test pilots, and some pilot comments. The ratings show that desirable criteria are achieved for almost all of the piloted simulation tasks.

  16. Improvement of shallow landslide prediction accuracy using soil parameterisation for a granite area in South Korea

    NASA Astrophysics Data System (ADS)

    Kim, M. S.; Onda, Y.; Kim, J. K.

    2015-01-01

    The SHALSTAB model was applied to rainfall-induced shallow landslides to evaluate soil properties related to the effect of soil depth in a granite area of the Jinbu region, Republic of Korea. Soil depth measured by a knocking pole test, two soil parameters from a direct shear test (a and b), and one soil parameter from a triaxial compression test (c) were collected to determine the input parameters for the model. Experimental soil data were used for the first simulation (Case I), and soil data representing the effect of measured soil depth and of average soil depth, derived from the Case I soil data, were used in the second (Case II) and third (Case III) simulations, respectively. All simulations were analysed using receiver operating characteristic (ROC) analysis to determine the prediction accuracy. The ROC results for the first simulation showed low values, under 0.75, which may be due to the internal friction angle and particularly the cohesion value. Soil parameters calculated from a stochastic hydro-geomorphological model were then applied to the SHALSTAB model. Cases II and III showed higher ROC accuracy values than the first simulation. Our results clearly demonstrate that the accuracy of shallow landslide prediction can be improved when soil parameters represent the effect of soil thickness.
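
    The ROC accuracy values compared above can be computed as the area under the ROC curve, which for a binary ground truth reduces to the rank-based Mann-Whitney statistic. The sketch below is a generic illustration of that computation, not the specific ROC software used in the study; the function name and toy data are illustrative.

```python
def roc_auc(scores, labels):
    """Rank-based AUC (Mann-Whitney statistic): the probability that a
    randomly chosen positive cell (e.g. an observed landslide) receives a
    higher instability score than a randomly chosen negative (stable) cell.
    Ties count as half a win."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

# perfect ranking of two landslide cells above two stable cells -> AUC 1.0
auc = roc_auc([0.9, 0.8, 0.3, 0.1], [1, 1, 0, 0])
```

    An AUC of 0.5 corresponds to chance-level prediction, which is why values below 0.75 in Case I indicate weak discrimination and the improvement in Cases II and III is meaningful.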

  17. A Geant4 simulation of the depth dose percentage in brain tumors treatments using protons and carbon ions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Diaz, José A. M., E-mail: joadiazme@unal.edu.co; Torres, D. A., E-mail: datorresg@unal.edu.co

    2016-07-07

    The deposited energy and dose distributions of proton and carbon beams in a head are simulated using the free tool package Geant4 and the data analysis package ROOT-C++. The present work shows a methodology for understanding the microscopic processes occurring in a hadron-therapy session using advanced simulation tools.

  18. Molecular Dynamics Analysis of Lysozyme Protein in Ethanol- Water Mixed Solvent

    DTIC Science & Technology

    2012-01-01

    This work focuses on detailed molecular dynamics simulations of the solvent effect on lysozyme protein, using water, ethanol, and different concentrations of water-ethanol mixtures as solvents, performed with the GROMACS molecular dynamics (MD) code. Compared to the water environment, the lysozyme structure showed remarkable changes in water-ethanol mixtures.

  19. An Electroacoustic Hearing Protector Simulator That Accurately Predicts Pressure Levels in the Ear Based on Standard Performance Metrics

    DTIC Science & Technology

    2013-08-01

    Models of the unprotected or protected ear traditionally start with analysis of energy flow through schematic diagrams based on electroacoustic (EA) analogies (Schröter, 1983; Schröter and Pösselt, 1986; Shaw and Thiessen, 1958, 1962; Zwislocki, 1957). The hearing protection device (HPD) simulator represents the energy flow paths of both earplugs and earmuffs. The analysis method tracks energy flow through fluid and …

  20. Differential maneuvering simulator data reduction and analysis software

    NASA Technical Reports Server (NTRS)

    Beasley, G. P.; Sigman, R. S.

    1972-01-01

    A multielement data reduction and analysis software package has been developed for use with the Langley differential maneuvering simulator (DMS). This package, which has several independent elements, was developed to support all phases of DMS aircraft simulation studies with a variety of both graphical and tabular information. The overall software package is considered unique because of the number, diversity, and sophistication of the element programs available for use in a single study. The purpose of this paper is to discuss the overall DMS data reduction and analysis package by reviewing the development of the various elements of the software, showing typical results that can be obtained, and discussing how each element can be used.

  1. Time-resolved versus time-integrated portal dosimetry: the role of an object’s position with respect to the isocenter in volumetric modulated arc therapy

    NASA Astrophysics Data System (ADS)

    Schyns, Lotte E. J. R.; Persoon, Lucas C. G. G.; Podesta, Mark; van Elmpt, Wouter J. C.; Verhaegen, Frank

    2016-05-01

    The aim of this work is to compare time-resolved (TR) and time-integrated (TI) portal dosimetry, focussing on the role of an object’s position with respect to the isocenter in volumetric modulated arc therapy (VMAT). Portal dose images (PDIs) are simulated and measured for different cases: a sphere (1), a bovine bone (2) and a patient geometry (3). For the simulated case (1) and the experimental case (2), several transformations are applied at different off-axis positions. In the patient case (3), three simple plans with different isocenters are created and pleural effusion is simulated in the patient. The PDIs before and after the sphere transformations, as well as the PDIs with and without simulated pleural effusion, are compared using a TI and TR gamma analysis. In addition, the performance of the TI and TR gamma analyses for the detection of real geometric changes in patients treated with clinical plans is investigated and a correlation analysis is performed between gamma fail rates and differences in dose volume histogram (DVH) metrics. The TI gamma analysis can show large differences in gamma fail rates for the same transformation at different off-axis positions (or for different plan isocenters). The TR gamma analysis, however, shows consistent gamma fail rates. For the detection of real geometric changes in patients treated with clinical plans, the TR gamma analysis has a higher sensitivity than the TI gamma analysis. However, the specificity for the TR gamma analysis is lower than for the TI gamma analysis. Both the TI and TR gamma fail rates show no correlation with changes in DVH metrics. This work shows that TR portal dosimetry is fundamentally superior to TI portal dosimetry, because it removes the strong dependence of the gamma fail rate on the off-axis position/plan isocenter. However, for 2D TR portal dosimetry, it is still difficult to interpret gamma fail rates in terms of changes in DVH metrics for patients treated with VMAT.
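
    The gamma analysis referred to above combines a dose-difference criterion with a distance-to-agreement criterion into a single pass/fail metric per point. The following is a minimal 1D sketch of that idea on toy profiles, not the 2D time-resolved implementation of the paper; the profile values and criteria are illustrative.

```python
import math

def gamma_index(ref, meas, spacing, dose_crit, dist_crit):
    """Minimal 1D gamma-analysis sketch: for each reference point, gamma is
    the minimum combined dose-difference / distance-to-agreement metric over
    all evaluated points; gamma <= 1 counts as a pass."""
    gammas = []
    for i, dose_ref in enumerate(ref):
        best = float("inf")
        for j, dose_meas in enumerate(meas):
            dd = (dose_meas - dose_ref) / dose_crit       # dose difference
            dx = (j - i) * spacing / dist_crit            # spatial distance
            best = min(best, math.hypot(dd, dx))
        gammas.append(best)
    return gammas

ref  = [0.0, 1.0, 2.0, 3.0, 2.0, 1.0, 0.0]      # reference profile
meas = [0.0, 1.0, 2.1, 3.0, 2.0, 1.0, 0.0]      # measured profile
# criteria: 3% of the 3.0 maximum dose, 1 pixel distance-to-agreement
g = gamma_index(ref, meas, spacing=1.0, dose_crit=0.09, dist_crit=1.0)
fail_rate = sum(v > 1.0 for v in g) / len(g)
```

    In a time-resolved analysis the same comparison is repeated per control point rather than once on the integrated image, which is what removes the off-axis dependence discussed in the abstract.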

  2. Numerical and experimental validation for the thermal transmittance of windows with cellular shades

    DOE PAGES

    Hart, Robert

    2018-02-21

    Some highly energy efficient window attachment products are available today, but more rapid market adoption would be facilitated by fair performance metrics. It is important to have validated simulation tools to provide a basis for this analysis. This paper outlines a review and validation of the ISO 15099 center-of-glass zero-solar-load heat transfer correlations for windows with cellular shades. Thermal transmittance was measured experimentally, simulated using computational fluid dynamics (CFD) analysis, and simulated utilizing correlations from ISO 15099 as implemented in Berkeley Lab WINDOW and THERM software. CFD analysis showed ISO 15099 underestimates heat flux of rectangular cavities by up to 60% when aspect ratio (AR) = 1 and overestimates heat flux up to 20% when AR = 0.5. CFD analysis also showed that wave-type surfaces of cellular shades have less than 2% impact on heat flux through the cavities and less than 5% for natural convection of the room-side surface. WINDOW was shown to accurately represent heat flux of the measured configurations to a mean relative error of 0.5% and standard deviation of 3.8%. Finally, several shade parameters showed significant influence on correlation accuracy, including distance between shade and glass, inconsistency in cell stretch, size of perimeter gaps, and the mounting hardware.

  3. Numerical and experimental validation for the thermal transmittance of windows with cellular shades

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hart, Robert

    Some highly energy efficient window attachment products are available today, but more rapid market adoption would be facilitated by fair performance metrics. It is important to have validated simulation tools to provide a basis for this analysis. This paper outlines a review and validation of the ISO 15099 center-of-glass zero-solar-load heat transfer correlations for windows with cellular shades. Thermal transmittance was measured experimentally, simulated using computational fluid dynamics (CFD) analysis, and simulated utilizing correlations from ISO 15099 as implemented in Berkeley Lab WINDOW and THERM software. CFD analysis showed ISO 15099 underestimates heat flux of rectangular cavities by up to 60% when aspect ratio (AR) = 1 and overestimates heat flux up to 20% when AR = 0.5. CFD analysis also showed that wave-type surfaces of cellular shades have less than 2% impact on heat flux through the cavities and less than 5% for natural convection of the room-side surface. WINDOW was shown to accurately represent heat flux of the measured configurations to a mean relative error of 0.5% and standard deviation of 3.8%. Finally, several shade parameters showed significant influence on correlation accuracy, including distance between shade and glass, inconsistency in cell stretch, size of perimeter gaps, and the mounting hardware.

  4. [Simulation and data analysis of stereological modeling based on virtual slices].

    PubMed

    Wang, Hao; Shen, Hong; Bai, Xiao-yan

    2008-05-01

    To establish a computer-assisted stereological model simulating the slice-sectioning process and to evaluate the relationship between the section surface and the estimated three-dimensional structure. The model was designed mathematically as Win32 software based on MFC, using Microsoft Visual Studio as the IDE, to simulate the infinite process of sectioning and to analyse the data derived from the model. The linearity of the model fit was evaluated by comparison with the traditional formula. The Win32 software based on this algorithm allowed random sectioning of particles distributed randomly in an ideal virtual cube. The stereological parameters showed very high values (>94.5% and 92%) in homogeneity and independence tests. The density, shape, and size data of the sections were tested and conformed to a normal distribution. The output of the model and that from the image analysis system showed statistical correlation and consistency. The algorithm described can be used to evaluate the stereological parameters of the structure of tissue slices.
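
    The core of such a virtual-slice model can be sketched very compactly for the simplest case of equal-sized spheres cut by a single plane. This is a generic illustration of the principle, not the authors' software: the function name and parameter values are illustrative. The classical stereological result it demonstrates is that section profiles are systematically smaller than the true particle radius (the mean profile radius of randomly sectioned spheres is π/4 of the sphere radius).

```python
import math
import random

def virtual_slice(n_particles=2000, radius=0.05, seed=2):
    """Sketch of the virtual-slice idea: spheres of a fixed radius are
    scattered uniformly in a unit cube and cut by the plane z = 0.5.
    A sphere whose centre lies within `radius` of the plane leaves a
    circular profile of radius sqrt(radius^2 - d^2), where d is the
    centre-to-plane distance."""
    rng = random.Random(seed)
    profiles = []
    for _ in range(n_particles):
        d = abs(rng.random() - 0.5)         # centre-to-plane distance
        if d < radius:
            profiles.append(math.sqrt(radius ** 2 - d ** 2))
    return profiles

profs = virtual_slice()
mean_profile = sum(profs) / len(profs)      # expected to be near (pi/4)*radius
```

    Repeating the sectioning many times and comparing the profile-size distribution against the known 3D input is exactly the kind of homogeneity and independence testing the abstract reports.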

  5. Spatial Analysis of Biomass Supply: Economic and Environmental Impacts

    USDA-ARS?s Scientific Manuscript database

    The EPIC simulation model is used with SSURGO soils, field location information, and a transportation cost model to analyze potential biomass supply for a West Central MN bioenergy plant. The simulation shows the relationship between biomass price, locations of where biomass production is profitable...

  6. Computational simulation of matrix micro-slip bands in SiC/Ti-15 composite

    NASA Technical Reports Server (NTRS)

    Mital, S. K.; Lee, H.-J.; Murthy, P. L. N.; Chamis, C. C.

    1992-01-01

    Computational simulation procedures are used to identify the key deformation mechanisms for (0)₈ and (90)₈ SiC/Ti-15 metal matrix composites. The computational simulation procedures employed consist of a three-dimensional finite-element analysis and a micromechanics-based computer code, METCAN. The interphase properties used in the analysis have been calibrated using the METCAN computer code with the (90)₈ experimental stress-strain curve. Results of the simulation show that although shear stresses are sufficiently high to cause the formation of some slip bands in the matrix, concentrated mostly near the fibers, the nonlinearity in the composite stress-strain curve in the case of the (90)₈ composite is dominated by interfacial damage, such as microcracks and debonding, rather than microplasticity. The stress-strain curve for the (0)₈ composite is largely controlled by the fibers and shows only slight nonlinearity at higher strain levels that could be the result of matrix microplasticity.

  7. Network Analysis on Attitudes: A Brief Tutorial.

    PubMed

    Dalege, Jonas; Borsboom, Denny; van Harreveld, Frenk; van der Maas, Han L J

    2017-07-01

    In this article, we provide a brief tutorial on the estimation, analysis, and simulation on attitude networks using the programming language R. We first discuss what a network is and subsequently show how one can estimate a regularized network on typical attitude data. For this, we use open-access data on the attitudes toward Barack Obama during the 2012 American presidential election. Second, we show how one can calculate standard network measures such as community structure, centrality, and connectivity on this estimated attitude network. Third, we show how one can simulate from an estimated attitude network to derive predictions from attitude networks. By this, we highlight that network theory provides a framework for both testing and developing formalized hypotheses on attitudes and related core social psychological constructs.

  8. Network Analysis on Attitudes

    PubMed Central

    Borsboom, Denny; van Harreveld, Frenk; van der Maas, Han L. J.

    2017-01-01

    In this article, we provide a brief tutorial on the estimation, analysis, and simulation on attitude networks using the programming language R. We first discuss what a network is and subsequently show how one can estimate a regularized network on typical attitude data. For this, we use open-access data on the attitudes toward Barack Obama during the 2012 American presidential election. Second, we show how one can calculate standard network measures such as community structure, centrality, and connectivity on this estimated attitude network. Third, we show how one can simulate from an estimated attitude network to derive predictions from attitude networks. By this, we highlight that network theory provides a framework for both testing and developing formalized hypotheses on attitudes and related core social psychological constructs. PMID:28919944

  9. Direct simulations of chemically reacting turbulent mixing layers, part 2

    NASA Technical Reports Server (NTRS)

    Metcalfe, Ralph W.; Mcmurtry, Patrick A.; Jou, Wen-Huei; Riley, James J.; Givi, Peyman

    1988-01-01

    The results of direct numerical simulations of chemically reacting turbulent mixing layers are presented. This is an extension of earlier work to a more detailed study of previous three dimensional simulations of cold reacting flows plus the development, validation, and use of codes to simulate chemically reacting shear layers with heat release. Additional analysis of earlier simulations showed good agreement with self similarity theory and laboratory data. Simulations with a two dimensional code including the effects of heat release showed that the rate of chemical product formation, the thickness of the mixing layer, and the amount of mass entrained into the layer all decrease with increasing rates of heat release. Subsequent three dimensional simulations showed similar behavior, in agreement with laboratory observations. Baroclinic torques and thermal expansion in the mixing layer were found to produce changes in the flame vortex structure that act to diffuse the pairing vortices, resulting in a net reduction in vorticity. Previously unexplained anomalies observed in the mean velocity profiles of reacting jets and mixing layers were shown to result from vorticity generation by baroclinic torques.

  10. Coupled attenuation and multiscale damage model for composite structures

    NASA Astrophysics Data System (ADS)

    Moncada, Albert M.; Chattopadhyay, Aditi; Bednarcyk, Brett; Arnold, Steven M.

    2011-04-01

    Composite materials are widely used in many applications for their high strength, low weight, and tailorability for specific applications. However, the development of robust and reliable methodologies to detect micro level damage in composite structures has been challenging. For composite materials, attenuation of ultrasonic waves propagating through the media can be used to determine damage within the material. Currently available numerical solutions for attenuation induce arbitrary damage, such as fiber-matrix debonding or inclusions, to show variations between healthy and damaged states. This paper addresses this issue by integrating a micromechanics analysis to simulate damage in the form of a fiber-matrix crack and an analytical model for calculating the attenuation of the waves when they pass through the damaged region. The hybrid analysis is validated by comparison with experimental stress-strain curves and piezoelectric sensing results for attenuation measurement. The results showed good agreement between the experimental stress-strain curves and the results from the micromechanics analysis. Wave propagation analysis also showed good correlation between simulation and experiment for the tested frequency range.

  11. Simulation Analysis of Helicopter Ground Resonance Nonlinear Dynamics

    NASA Astrophysics Data System (ADS)

    Zhu, Yan; Lu, Yu-hui; Ling, Ai-min

    2017-07-01

    To accurately predict the dynamic instability of helicopter ground resonance, a modeling and simulation method that considers the nonlinear dynamic characteristics of components (rotor lead-lag damper, landing gear wheel, and absorber) is presented. A numerical integration method is used to calculate the transient responses of the body and rotor to a simulated disturbance. To quantify instabilities, the Fast Fourier Transform (FFT) is used to estimate the modal frequencies, and a moving rectangular window method is employed to predict the modal damping from the response time history. Simulation results show that the ground resonance simulation accurately captures the blade lead-lag regressive mode frequency, and the modal damping obtained from the attenuation curves is close to the test results. The simulation results are consistent with the actual accident situation and confirm the correctness of the simulation method. This analysis method for ground resonance simulation yields results in accordance with real helicopter engineering tests.
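
    The frequency-plus-damping identification step can be illustrated on a synthetic single-mode response: the modal frequency is read off the spectral peak, and the damping ratio is recovered from the logarithmic decrement of the decaying peak envelope. This is a minimal sketch of the general technique, not the paper's implementation; a naive DFT stands in for the FFT, and all names and values are illustrative.

```python
import cmath
import math

def estimate_modal(signal, dt):
    """Estimate the dominant modal frequency from the DFT peak and the
    damping ratio from the logarithmic decrement of the response peaks."""
    n = len(signal)
    # naive DFT magnitude spectrum (O(n^2); a real FFT would be used in practice)
    mags = []
    for k in range(1, n // 2):
        s = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
        mags.append((abs(s), k))
    freq = max(mags)[1] / (n * dt)          # peak bin index -> Hz
    # successive positive local maxima trace the attenuation curve
    peaks = [signal[i] for i in range(1, n - 1)
             if signal[i - 1] < signal[i] > signal[i + 1] and signal[i] > 0]
    delta = math.log(peaks[0] / peaks[-1]) / (len(peaks) - 1)   # log decrement
    zeta = delta / math.sqrt(4 * math.pi ** 2 + delta ** 2)
    return freq, zeta

# synthetic single-mode response: 5 Hz natural frequency, 3% damping
dt, n = 0.01, 512
wn = 2 * math.pi * 5.0
wd = wn * math.sqrt(1 - 0.03 ** 2)
x = [math.exp(-0.03 * wn * i * dt) * math.sin(wd * i * dt) for i in range(n)]
freq, zeta = estimate_modal(x, dt)
```

    Sliding this estimate over a moving window of the time history, as the abstract describes, tracks how the modal damping evolves and flags the onset of instability when it approaches zero.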

  12. Damping ratio analysis of tooth stability under various simulated degrees of vertical alveolar bone loss and different root types.

    PubMed

    Ho, Kuo-Ning; Lee, Sheng-Yang; Huang, Haw-Ming

    2017-08-03

    The purpose of this study was to evaluate the feasibility of using damping ratio (DR) analysis combined with resonance frequency (RF) and periotest value (PTV) analyses to provide additional information about natural tooth stability under various simulated degrees of alveolar vertical bone loss and various root types. Three experimental tooth models, an upper central incisor, an upper first premolar, and an upper first molar, were fabricated using Ti6Al4V alloy. In the tooth models, the periodontal ligament and alveolar bone were simulated using a soft lining material and gypsum, respectively. Various degrees of vertical bone loss were simulated by decreasing the surrounding bone level apically from the cementoenamel junction in 2-mm steps, incrementally downward for 10 mm. A commercially available RF analyzer was used to measure the RF and DR of impulse-forced vibrations on the tooth models. The results showed that DRs increased as alveolar vertical bone height decreased, with high coefficients of determination in the linear regression analysis. The damping ratios of the central incisor model without a simulated periodontal ligament were 11.95 ± 1.92 and 27.50 ± 0.67%, respectively, when the bone level was set at 2 and 10 mm apically from the cementoenamel junction. These values changed significantly to 28.85 ± 2.54% (p = 0.000) and 51.25 ± 4.78% (p = 0.003) when the tooth model was covered with a simulated periodontal ligament. Moreover, teeth with different root types showed different DR and RF patterns; teeth with multiple roots had lower DRs than teeth with single roots. Damping ratio analysis combined with PTV and RF analysis provides more useful information for assessing changes in vertical alveolar bone loss than PTV or RF analysis alone.

  13. Simulation of realistic abnormal SPECT brain perfusion images: application in semi-quantitative analysis

    NASA Astrophysics Data System (ADS)

    Ward, T.; Fleming, J. S.; Hoffmann, S. M. A.; Kemp, P. M.

    2005-11-01

    Simulation is useful in the validation of functional image analysis methods, particularly when considering the number of analysis techniques currently available lacking thorough validation. Problems exist with current simulation methods due to long run times or unrealistic results making it problematic to generate complete datasets. A method is presented for simulating known abnormalities within normal brain SPECT images using a measured point spread function (PSF), and incorporating a stereotactic atlas of the brain for anatomical positioning. This allows for the simulation of realistic images through the use of prior information regarding disease progression. SPECT images of cerebral perfusion have been generated consisting of a control database and a group of simulated abnormal subjects that are to be used in a UK audit of analysis methods. The abnormality is defined in the stereotactic space, then transformed to the individual subject space, convolved with a measured PSF and removed from the normal subject image. The dataset was analysed using SPM99 (Wellcome Department of Imaging Neuroscience, University College, London) and the MarsBaR volume of interest (VOI) analysis toolbox. The results were evaluated by comparison with the known ground truth. The analysis showed improvement when using a smoothing kernel equal to system resolution over the slightly larger kernel used routinely. Significant correlation was found between effective volume of a simulated abnormality and the detected size using SPM99. Improvements in VOI analysis sensitivity were found when using the region median over the region mean. The method and dataset provide an efficient methodology for use in the comparison and cross validation of semi-quantitative analysis methods in brain SPECT, and allow the optimization of analysis parameters.

  14. [Sensitivity analysis of AnnAGNPS model's hydrology and water quality parameters based on the perturbation analysis method].

    PubMed

    Xi, Qing; Li, Zhao-Fu; Luo, Chuan

    2014-05-01

    Sensitivity analysis of hydrology and water quality parameters is of great significance for the construction and application of integrated models. Based on the mechanisms of the AnnAGNPS model, 31 parameters in four major categories (terrain, hydrology and meteorology, field management, and soil) were selected for sensitivity analysis in the Zhongtian River watershed, a typical small watershed of the hilly region of the Taihu Lake basin, and the perturbation method was used to evaluate the sensitivity of the parameters to the model's simulation results. The results showed that among the 11 terrain parameters, LS was sensitive to all model outputs, while RMN, RS and RVC were moderately to slightly sensitive to the sediment output but insensitive to the remaining outputs. Among the hydrometeorological parameters, CN was highly sensitive to runoff and sediment and moderately sensitive to the remaining outputs. Among the field management, fertilizer, and vegetation parameters, CCC, CRM and RR were slightly sensitive to sediment and particulate pollutants, and the six fertilizer parameters (FR, FD, FID, FOD, FIP, FOP) were particularly sensitive for nitrogen and phosphorus nutrients. Among the soil parameters, K was quite sensitive to all outputs except runoff, and the four soil nitrogen and phosphorus ratio parameters (SONR, SINR, SOPR, SIPR) were slightly sensitive to the corresponding outputs. The simulation and verification of runoff in the Zhongtian watershed showed good accuracy, with deviations of less than 10% during 2005-2010. These results provide a direct reference for parameter selection and calibration of the AnnAGNPS model. The runoff simulation results also showed that the sensitivity analysis is practicable for parameter adjustment, demonstrated the model's adaptability to hydrological simulation in the hilly region of the Taihu Lake basin, and provide a reference for the model's wider application in China.
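
    The perturbation method used above amounts to a one-at-a-time relative sensitivity index. The sketch below illustrates the idea on a toy SCS curve-number runoff equation, standing in for the (much larger) AnnAGNPS internals; the function names, parameter values, and the ±10% step are illustrative assumptions, not the study's configuration.

```python
def sensitivity(model, params, name, rel_step=0.1):
    """One-at-a-time perturbation sensitivity: perturb a single parameter by
    +/- rel_step and return the normalised index
    S = (delta_Y / Y0) / (delta_theta / theta0)."""
    y0 = model(**params)
    up = dict(params, **{name: params[name] * (1 + rel_step)})
    dn = dict(params, **{name: params[name] * (1 - rel_step)})
    return (model(**up) - model(**dn)) / (2 * rel_step * y0)

# toy stand-in for the hydrologic response: SCS curve-number runoff,
# Q = (P - 0.2*S)^2 / (P + 0.8*S) with retention S = 25400/CN - 254 (mm)
def runoff(P, CN):
    S = 25400.0 / CN - 254.0
    return (P - 0.2 * S) ** 2 / (P + 0.8 * S) if P > 0.2 * S else 0.0

base = {"P": 80.0, "CN": 75.0}              # rainfall (mm), curve number
s_cn = sensitivity(runoff, base, "CN")
s_p = sensitivity(runoff, base, "P")
```

    Even in this toy setting a 1% change in CN moves simulated runoff by roughly 4%, illustrating why the abstract singles out CN as highly sensitive for runoff and sediment.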

  15. F-14 modeling study

    NASA Technical Reports Server (NTRS)

    Levison, W. H.; Baron, S.

    1984-01-01

    Preliminary results in the application of a closed loop pilot/simulator model to the analysis of some simulator fidelity issues are discussed in the context of an air to air target tracking task. The closed loop model is described briefly. Then, problem simplifications that are employed to reduce computational costs are discussed. Finally, model results showing sensitivity of performance to various assumptions concerning the simulator and/or the pilot are presented.

  16. Combat Simulation Using Breach Computer Language

    DTIC Science & Technology

    1979-09-01

    A simulation and weapon system analysis computer language. Two types of models were constructed: a stochastic duel and a dynamic engagement model. The duel model validates the BREACH approach by comparing results with mathematical solutions; the dynamic model shows the capability of the BREACH language. (The report covers: Background; The Language; Static Duel: Background and Methodology, Validation, Results; Tank Duel Simulation; Dynamic Assault Model.)

  17. Simulation and experimental analysis of nanoindentation and mechanical properties of amorphous NiAl alloys.

    PubMed

    Wang, Chih-Hao; Fang, Te-Hua; Cheng, Po-Chien; Chiang, Chia-Chin; Chao, Kuan-Chi

    2015-06-01

    This paper used numerical and experimental methods to investigate the mechanical properties of amorphous NiAl alloys during the nanoindentation process. A simulation was performed using the many-body tight-binding potential method, and temperature, plastic deformation, elastic recovery, and hardness were evaluated. The experimental method was based on nanoindentation measurements, allowing a precise determination of Young's modulus and hardness values for comparison with the simulation results. The indentation simulations showed a significant increase in the hardness and elastic recovery of NiAl with increasing Ni content; hardness and Young's modulus likewise increased with Ni content. The simulation results are in good agreement with the experimental results. An adhesion test of the amorphous NiAl alloys at room temperature is also described.

  18. Reducing EnergyPlus Run Time For Code Compliance Tools

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Athalye, Rahul A.; Gowri, Krishnan; Schultz, Robert W.

    2014-09-12

    Integration of the EnergyPlus™ simulation engine into performance-based code compliance software raises a concern about simulation run time, which impacts timely feedback of compliance results to the user. EnergyPlus annual simulations for proposed and code-baseline building models, together with mechanical equipment sizing, result in simulation run times beyond acceptable limits. This paper presents a study that compares the results of a shortened simulation period using 4 weeks of hourly weather data (one week per quarter) to an annual simulation using the full 52 weeks of hourly weather data. Three representative building types based on DOE Prototype Building Models and three climate zones were used for determining the validity of using a shortened simulation run period. Further sensitivity analysis and run time comparisons were made to evaluate the robustness and run time savings of this approach. The results of this analysis show that the shortened simulation run period provides compliance index calculations within 1% of those predicted using annual simulation results, and typically saves about 75% of simulation run time.
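    The shortened-run-period idea can be sketched as follows: simulate one representative week per quarter and scale each sampled week to its 13-week quarter. The weekly-consumption profile below is synthetic; EnergyPlus itself is not invoked.

```python
import math

def quarterly_sample_estimate(weekly_use):
    """Estimate annual energy use from one week per quarter (weeks 0, 13, 26, 39),
    scaling each sampled week up to its 13-week quarter."""
    assert len(weekly_use) == 52
    return sum(weekly_use[q * 13] * 13 for q in range(4))

# Synthetic seasonal load profile (arbitrary units), standing in for EnergyPlus output.
weekly = [100 + 30 * math.cos(2 * math.pi * w / 52) for w in range(52)]

annual = sum(weekly)
estimate = quarterly_sample_estimate(weekly)
print(abs(estimate - annual) / annual < 0.05)  # → True: within 5% of the full-year run
```

    For a compliance tool, the same scaled-sample estimate would be computed for both the proposed and baseline models, so systematic sampling error largely cancels in the compliance index ratio.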

  19. Large eddy simulation for atmospheric boundary layer flow over flat and complex terrains

    NASA Astrophysics Data System (ADS)

    Han, Yi; Stoellinger, Michael; Naughton, Jonathan

    2016-09-01

    In this work, we present Large Eddy Simulation (LES) results of atmospheric boundary layer (ABL) flow over complex terrain with neutral stratification using the OpenFOAM-based simulator for on/offshore wind farm applications (SOWFA). The complete work flow to investigate the LES for the ABL over real complex terrain is described including meteorological-tower data analysis, mesh generation and case set-up. New boundary conditions for the lateral and top boundaries are developed and validated to allow inflow and outflow as required in complex terrain simulations. The turbulent inflow data for the terrain simulation is generated using a precursor simulation of a flat and neutral ABL. Conditionally averaged met-tower data is used to specify the conditions for the flat precursor simulation and is also used for comparison with the simulation results of the terrain LES. A qualitative analysis of the simulation results reveals boundary layer separation and recirculation downstream of a prominent ridge that runs across the simulation domain. Comparisons of mean wind speed, standard deviation and direction between the computed results and the conditionally averaged tower data show a reasonable agreement.

  20. Model-Based Verification and Validation of Spacecraft Avionics

    NASA Technical Reports Server (NTRS)

    Khan, Mohammed Omair

    2012-01-01

    Our simulation was able to mimic the results of 30 tests on the actual hardware. This shows that simulations have the potential to enable early design validation, well before actual hardware exists. Although the simulations focused on data-processing procedures at the subsystem and device level, they can also be applied to system-level analysis to simulate mission scenarios and consumable tracking (e.g., power, propellant, etc.). Simulation engine plug-in developments are continually improving the product, but the handling of time-sensitive operations (like those of the remote engineering unit and bus controller) can be cumbersome.

  1. Accurate Monitoring and Fault Detection in Wind Measuring Devices through Wireless Sensor Networks

    PubMed Central

    Khan, Komal Saifullah; Tariq, Muhammad

    2014-01-01

    Many wind energy projects report poor performance, as low as 60% of the predicted performance. The reason for this is poor resource assessment and the use of new, untested technologies and systems in remote locations. Predictions about the potential of an area for wind energy projects (through simulated models) may vary from the actual potential of the area. Hence, introducing accurate site assessment techniques will lead to accurate predictions of energy production from a particular area. We address this problem by installing a Wireless Sensor Network (WSN) to periodically analyze the data from anemometers installed in the area. After comparative analysis of the acquired data, the anemometers transmit their readings through the WSN to the sink node for analysis. The sink node uses an iterative algorithm which sequentially detects any faulty anemometer and passes the details of the fault to the central system or main station. We apply the proposed technique both in simulation and in a practical implementation, and assess its accuracy by comparing the results of the simulated and implemented models. Simulation results show that the algorithm identifies faulty anemometers with high accuracy and a low false alarm rate even when as many as 25% of the anemometers become faulty. Experimental analysis shows that anemometers incorporating this solution are better assessed, and the performance level of implemented projects increases above 86% of the simulated models. PMID:25421739
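    One way to realise the sink node's iterative fault detection is to repeatedly flag the sensor that deviates most from the median of the currently trusted set until all remaining readings agree; a minimal sketch, with hypothetical anemometer IDs and wind-speed readings (m/s), not the paper's actual algorithm:

```python
def detect_faulty(readings, tol=0.15):
    """Iteratively flag anemometers whose reading deviates from the median of the
    remaining (trusted) sensors by more than `tol` (fractional deviation)."""
    trusted = dict(readings)
    faulty = []
    while len(trusted) > 2:
        vals = sorted(trusted.values())
        median = vals[len(vals) // 2]
        worst = max(trusted, key=lambda k: abs(trusted[k] - median))
        if abs(trusted[worst] - median) / median <= tol:
            break  # all remaining sensors agree within tolerance
        faulty.append(worst)
        del trusted[worst]
    return faulty

readings = {"A1": 7.9, "A2": 8.1, "A3": 8.0, "A4": 3.2, "A5": 12.5}
print(sorted(detect_faulty(readings)))  # → ['A4', 'A5']
```

    Removing the worst outlier before recomputing the median keeps a single badly faulty sensor from masking a second, smaller fault.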

  2. Pilot-model analysis and simulation study of effect of control task desired control response

    NASA Technical Reports Server (NTRS)

    Adams, J. J.; Gera, J.; Jaudon, J. B.

    1978-01-01

    A pilot model analysis was performed that relates pilot control compensation, pilot aircraft system response, and aircraft response characteristics for longitudinal control. The results show that a higher aircraft short period frequency is required to achieve superior pilot aircraft system response in an altitude control task than is required in an attitude control task. These results were confirmed by a simulation study of target tracking. It was concluded that the pilot model analysis provides a theoretical basis for determining the effect of control task on pilot opinions.

  3. Simulations of arctic mixed-phase clouds in forecasts with CAM3 and AM2 for M-PACE

    DOE PAGES

    Xie, Shaocheng; Boyle, James; Klein, Stephen A.; ...

    2008-02-27

    Simulations of mixed-phase clouds in forecasts with the NCAR Atmosphere Model version 3 (CAM3) and the GFDL Atmospheric Model version 2 (AM2) for the Mixed-Phase Arctic Cloud Experiment (M-PACE) are performed using analysis data from numerical weather prediction centers. CAM3 significantly underestimates the observed boundary layer mixed-phase cloud fraction and cannot realistically simulate the variations of liquid water fraction with temperature and cloud height due to its oversimplified cloud microphysical scheme. In contrast, AM2 reasonably reproduces the observed boundary layer cloud fraction while its clouds contain much less cloud condensate than CAM3 and the observations. The simulation of the boundary layer mixed-phase clouds and their microphysical properties is considerably improved in CAM3 when a new physically based cloud microphysical scheme is used (CAM3LIU). The new scheme also leads to an improved simulation of the surface and top of the atmosphere longwave radiative fluxes. Sensitivity tests show that these results are not sensitive to the analysis data used for model initialization. Increasing model horizontal resolution helps capture the subgrid-scale features in Arctic frontal clouds but does not help improve the simulation of the single-layer boundary layer clouds. AM2 simulated cloud fraction and LWP are sensitive to the change in cloud ice number concentrations used in the Wegener-Bergeron-Findeisen process while CAM3LIU only shows moderate sensitivity in its cloud fields to this change. Furthermore, this paper shows that the Wegener-Bergeron-Findeisen process is important for these models to correctly simulate the observed features of mixed-phase clouds.

  4. Simulations of Arctic mixed-phase clouds in forecasts with CAM3 and AM2 for M-PACE

    NASA Astrophysics Data System (ADS)

    Xie, Shaocheng; Boyle, James; Klein, Stephen A.; Liu, Xiaohong; Ghan, Steven

    2008-02-01

    Simulations of mixed-phase clouds in forecasts with the NCAR Atmosphere Model version 3 (CAM3) and the GFDL Atmospheric Model version 2 (AM2) for the Mixed-Phase Arctic Cloud Experiment (M-PACE) are performed using analysis data from numerical weather prediction centers. CAM3 significantly underestimates the observed boundary layer mixed-phase cloud fraction and cannot realistically simulate the variations of liquid water fraction with temperature and cloud height due to its oversimplified cloud microphysical scheme. In contrast, AM2 reasonably reproduces the observed boundary layer cloud fraction while its clouds contain much less cloud condensate than CAM3 and the observations. The simulation of the boundary layer mixed-phase clouds and their microphysical properties is considerably improved in CAM3 when a new physically based cloud microphysical scheme is used (CAM3LIU). The new scheme also leads to an improved simulation of the surface and top of the atmosphere longwave radiative fluxes. Sensitivity tests show that these results are not sensitive to the analysis data used for model initialization. Increasing model horizontal resolution helps capture the subgrid-scale features in Arctic frontal clouds but does not help improve the simulation of the single-layer boundary layer clouds. AM2 simulated cloud fraction and LWP are sensitive to the change in cloud ice number concentrations used in the Wegener-Bergeron-Findeisen process while CAM3LIU only shows moderate sensitivity in its cloud fields to this change. This paper shows that the Wegener-Bergeron-Findeisen process is important for these models to correctly simulate the observed features of mixed-phase clouds.

  5. Hierarchical Simulation to Assess Hardware and Software Dependability

    NASA Technical Reports Server (NTRS)

    Ries, Gregory Lawrence

    1997-01-01

    This thesis presents a method for conducting hierarchical simulations to assess system hardware and software dependability. The method is intended to model embedded microprocessor systems. A key contribution of the thesis is the idea of using fault dictionaries to propagate fault effects upward from the level of abstraction where a fault model is assumed to the system level where the ultimate impact of the fault is observed. A second important contribution is the analysis of the software behavior under faults as well as the hardware behavior. The simulation method is demonstrated and validated in four case studies analyzing Myrinet, a commercial, high-speed networking system. One key result from the case studies shows that the simulation method predicts the same fault impact 87.5% of the time as is obtained by similar fault injections into a real Myrinet system. Reasons for the remaining discrepancy are examined in the thesis. A second key result shows the reduction in the number of simulations needed due to the fault dictionary method. In one case study, 500 faults were injected at the chip level, but only 255 propagated to the system level. Of these 255 faults, 110 shared identical fault dictionary entries at the system level and so did not need to be resimulated. The necessary number of system-level simulations was therefore reduced from 500 to 145. Finally, the case studies show how the simulation method can be used to improve the dependability of the target system. The simulation analysis was used to add recovery to the target software for the most common fault propagation mechanisms that would cause the software to hang. After the modification, the number of hangs was reduced by 60% for fault injections into the real system.
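    The run-count reduction reported above (500 chip-level faults, of which 255 propagated and only 145 required distinct system-level simulations) comes from deduplicating fault-dictionary entries; a minimal sketch of that bookkeeping, with invented fault names and signatures:

```python
def system_simulations_needed(fault_dictionary):
    """Count the system-level simulations actually required: faults that do not
    propagate (entry is None) need none, and faults whose chip-level effects map
    to an identical system-level dictionary entry share a single simulation.
    Returns (propagated_faults, distinct_simulations)."""
    propagated = {f: sig for f, sig in fault_dictionary.items() if sig is not None}
    return len(propagated), len(set(propagated.values()))

# Hypothetical dictionary entries for a handful of injected chip-level faults.
faults = {
    "f1": ("regA", "stuck-0"),
    "f2": ("regA", "stuck-0"),   # same system-level signature as f1
    "f3": None,                  # fault did not propagate to system level
    "f4": ("bus", "parity-err"),
    "f5": ("regB", "stuck-1"),
}
print(system_simulations_needed(faults))  # → (4, 3)
```

    Scaled up, the same bookkeeping turns 500 injected faults into 255 propagated entries and, after merging identical signatures, the 145 simulations the thesis reports.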

  6. Beware the black box: investigating the sensitivity of FEA simulations to modelling factors in comparative biomechanics.

    PubMed

    Walmsley, Christopher W; McCurry, Matthew R; Clausen, Phillip D; McHenry, Colin R

    2013-01-01

    Finite element analysis (FEA) is a computational technique of growing popularity in the field of comparative biomechanics, and is an easily accessible platform for form-function analyses of biological structures. However, its rapid evolution in recent years from a novel approach to common practice demands some scrutiny in regards to the validity of results and the appropriateness of assumptions inherent in setting up simulations. Both validation and sensitivity analyses remain unexplored in many comparative analyses, and assumptions considered to be 'reasonable' are often assumed to have little influence on the results and their interpretation. Here we report an extensive sensitivity analysis where high resolution finite element (FE) models of mandibles from seven species of crocodile were analysed under loads typical for comparative analysis: biting, shaking, and twisting. Simulations explored the effect on both the absolute response and the interspecies pattern of results to variations in commonly used input parameters. Our sensitivity analysis focuses on assumptions relating to the selection of material properties (heterogeneous or homogeneous), scaling (standardising volume, surface area, or length), tooth position (front, mid, or back tooth engagement), and linear load case (type of loading for each feeding type). Our findings show that in a comparative context, FE models are far less sensitive to the selection of material property values and scaling to either volume or surface area than they are to those assumptions relating to the functional aspects of the simulation, such as tooth position and linear load case. Results show a complex interaction between simulation assumptions, depending on the combination of assumptions and the overall shape of each specimen. Keeping assumptions consistent between models in an analysis does not ensure that results can be generalised beyond the specific set of assumptions used. 
Logically, different comparative datasets would also be sensitive to identical simulation assumptions; hence, modelling assumptions should undergo rigorous selection. The accuracy of input data is paramount, and simulations should focus on taking biological context into account. Ideally, validation of simulations should be addressed; however, where validation is impossible or unfeasible, sensitivity analyses should be performed to identify which assumptions have the greatest influence upon the results.

  7. Beware the black box: investigating the sensitivity of FEA simulations to modelling factors in comparative biomechanics

    PubMed Central

    McCurry, Matthew R.; Clausen, Phillip D.; McHenry, Colin R.

    2013-01-01

    Finite element analysis (FEA) is a computational technique of growing popularity in the field of comparative biomechanics, and is an easily accessible platform for form-function analyses of biological structures. However, its rapid evolution in recent years from a novel approach to common practice demands some scrutiny in regards to the validity of results and the appropriateness of assumptions inherent in setting up simulations. Both validation and sensitivity analyses remain unexplored in many comparative analyses, and assumptions considered to be ‘reasonable’ are often assumed to have little influence on the results and their interpretation. Here we report an extensive sensitivity analysis where high resolution finite element (FE) models of mandibles from seven species of crocodile were analysed under loads typical for comparative analysis: biting, shaking, and twisting. Simulations explored the effect on both the absolute response and the interspecies pattern of results to variations in commonly used input parameters. Our sensitivity analysis focuses on assumptions relating to the selection of material properties (heterogeneous or homogeneous), scaling (standardising volume, surface area, or length), tooth position (front, mid, or back tooth engagement), and linear load case (type of loading for each feeding type). Our findings show that in a comparative context, FE models are far less sensitive to the selection of material property values and scaling to either volume or surface area than they are to those assumptions relating to the functional aspects of the simulation, such as tooth position and linear load case. Results show a complex interaction between simulation assumptions, depending on the combination of assumptions and the overall shape of each specimen. Keeping assumptions consistent between models in an analysis does not ensure that results can be generalised beyond the specific set of assumptions used. 
Logically, different comparative datasets would also be sensitive to identical simulation assumptions; hence, modelling assumptions should undergo rigorous selection. The accuracy of input data is paramount, and simulations should focus on taking biological context into account. Ideally, validation of simulations should be addressed; however, where validation is impossible or unfeasible, sensitivity analyses should be performed to identify which assumptions have the greatest influence upon the results. PMID:24255817

  8. Impacts of different characterizations of large-scale background on simulated regional-scale ozone over the continental United States

    NASA Astrophysics Data System (ADS)

    Hogrefe, Christian; Liu, Peng; Pouliot, George; Mathur, Rohit; Roselle, Shawn; Flemming, Johannes; Lin, Meiyun; Park, Rokjin J.

    2018-03-01

    This study analyzes simulated regional-scale ozone burdens both near the surface and aloft, estimates process contributions to these burdens, and calculates the sensitivity of the simulated regional-scale ozone burden to several key model inputs with a particular emphasis on boundary conditions derived from hemispheric or global-scale models. The Community Multiscale Air Quality (CMAQ) model simulations supporting this analysis were performed over the continental US for the year 2010 within the context of the Air Quality Model Evaluation International Initiative (AQMEII) and Task Force on Hemispheric Transport of Air Pollution (TF-HTAP) activities. CMAQ process analysis (PA) results highlight the dominant role of horizontal and vertical advection on the ozone burden in the mid-to-upper troposphere and lower stratosphere. Vertical mixing, including mixing by convective clouds, couples fluctuations in free-tropospheric ozone to ozone in lower layers. Hypothetical bounding scenarios were performed to quantify the effects of emissions, boundary conditions, and ozone dry deposition on the simulated ozone burden. Analysis of these simulations confirms that the characterization of ozone outside the regional-scale modeling domain can have a profound impact on simulated regional-scale ozone. This was further investigated by using data from four hemispheric or global modeling systems (Chemistry - Integrated Forecasting Model (C-IFS), CMAQ extended for hemispheric applications (H-CMAQ), the Goddard Earth Observing System model coupled to chemistry (GEOS-Chem), and AM3) to derive alternate boundary conditions for the regional-scale CMAQ simulations. 
    The regional-scale CMAQ simulations using these four different boundary conditions showed that the largest ozone abundance in the upper layers was simulated when using boundary conditions from GEOS-Chem, followed by the simulations using C-IFS, AM3, and H-CMAQ boundary conditions, consistent with the analysis of the ozone fields from the global models along the CMAQ boundaries. Using boundary conditions from AM3 yielded higher springtime ozone column burdens in the middle and lower troposphere compared to boundary conditions from the other models. For surface ozone, the differences between the AM3-driven CMAQ simulations and the CMAQ simulations driven by other large-scale models are especially pronounced during spring and winter, where they can reach more than 10 ppb for seasonal mean ozone mixing ratios and as much as 15 ppb for domain-averaged daily maximum 8 h average ozone on individual days. In contrast, the differences between the C-IFS-, GEOS-Chem-, and H-CMAQ-driven regional-scale CMAQ simulations are typically smaller. Comparing simulated surface ozone mixing ratios to observations and computing seasonal and regional model performance statistics revealed that boundary conditions can have a substantial impact on model performance. Further analysis showed that boundary conditions can affect model performance across the entire range of the observed distribution, although the impacts tend to be lower during summer and for the very highest observed percentiles. The results are discussed in the context of future model development and analysis opportunities.

  9. Using Reconstructed POD Modes as Turbulent Inflow for LES Wind Turbine Simulations

    NASA Astrophysics Data System (ADS)

    Nielson, Jordan; Bhaganagar, Kiran; Juttijudata, Vejapong; Sirisup, Sirod

    2016-11-01

    Currently, in order to capture realistic atmospheric turbulence effects, wind turbine LES simulations require computationally expensive precursor simulations; at times, the precursor simulation is more computationally expensive than the wind turbine simulation itself. The precursor simulations are important because they capture turbulence in the atmosphere, and this turbulence impacts the power production estimate. Proper orthogonal decomposition (POD) analysis, on the other hand, has been shown to be capable of capturing turbulent structures. The current study was performed to determine the plausibility of using lower-dimension models from POD analysis of LES simulations as turbulent inflow to wind turbine LES simulations. The study will aid the wind energy community by lowering the computational cost of full-scale wind turbine LES simulations while maintaining a high level of turbulent information, and by making it possible to quickly apply the turbulent inflow to multi-turbine wind farms. This is done by comparing a wind turbine simulation using a pure LES precursor with simulations that use reduced POD-mode inflow conditions. The study shows the feasibility of using lower-dimension models as turbulent inflow for LES wind turbine simulations. Overall, the power production estimate and the velocity field of the wind turbine wake are well captured, with small errors.

  10. Parallel processing methods for space based power systems

    NASA Technical Reports Server (NTRS)

    Berry, F. C.

    1993-01-01

    This report presents a method for doing load-flow analysis of a power system by using a decomposition approach. The power system for the Space Shuttle is used as a basis to build a model for the load-flow analysis. To test the decomposition method, simulations were performed on power systems of 16, 25, 34, 43, 52, 61, 70, and 79 nodes. Each of the power systems was divided into subsystems and simulated under steady-state conditions. The results from these tests have been found to be as accurate as tests performed using a standard serial simulator. The division of the power systems into subsystems was done by assigning a processor to each area. There were 13 transputers available; therefore, up to 13 different subsystems could be simulated at the same time. This report contains preliminary results for a load-flow analysis using a decomposition principle. The report shows that the decomposition algorithm for load-flow analysis is well suited for parallel processing and provides increases in the speed of execution.
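    The decomposition idea, assigning each subsystem to its own processor and solving them concurrently, can be sketched as below. The "solver" is a toy stand-in (a real load-flow solve iterates on the bus admittance equations), and worker threads stand in for the report's transputers.

```python
from concurrent.futures import ThreadPoolExecutor

def solve_subsystem(nodes):
    """Toy stand-in for a subsystem load-flow solve: returns total net injection.
    (A real solver would iterate on the bus admittance equations.)"""
    return sum(p for _, p in nodes)

# 16-node toy system split into 4 subsystems, one per (hypothetical) processor.
system = [(n, 10.0 - n) for n in range(16)]
subsystems = [system[i::4] for i in range(4)]

with ThreadPoolExecutor(max_workers=4) as pool:
    partials = list(pool.map(solve_subsystem, subsystems))

print(sum(partials) == solve_subsystem(system))  # → True: decomposition conserves totals
```

    In the actual method the subsystem solutions are not independent; boundary-bus values must be exchanged and iterated until the partitioned solution matches the serial one, which is what the accuracy comparison in the report verifies.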

  11. PyMOOSE: Interoperable Scripting in Python for MOOSE

    PubMed Central

    Ray, Subhasis; Bhalla, Upinder S.

    2008-01-01

    Python is emerging as a common scripting language for simulators. This opens up many possibilities for interoperability in the form of analysis, interfaces, and communications between simulators. We report the integration of Python scripting with the Multi-scale Object Oriented Simulation Environment (MOOSE). MOOSE is a general-purpose simulation system for compartmental neuronal models and for models of signaling pathways based on chemical kinetics. We show how the Python-scripting version of MOOSE, PyMOOSE, combines the power of a compiled simulator with the versatility and ease of use of Python. We illustrate this by using Python numerical libraries to analyze MOOSE output online, and by developing a GUI in Python/Qt for a MOOSE simulation. Finally, we build and run a composite neuronal/signaling model that uses both the NEURON and MOOSE numerical engines, and Python as a bridge between the two. Thus PyMOOSE has a high degree of interoperability with analysis routines, with graphical toolkits, and with other simulators. PMID:19129924

  12. Motion Law Analysis and Structural Optimization of the Ejection Device of Tray Seeder

    NASA Astrophysics Data System (ADS)

    Luo, Xin; Hu, Bin; Dong, Chunwang; Huang, Lili

    An ejection mechanism consisting of four reset springs, an electromagnet, and a seed disk was designed for a tray seeder. The motion conditions of seeds in the seed disk were analyzed theoretically, and the intensity and height of seed ejection were calculated. The motions of the seeds and the seed disk were simulated with multi-body dynamics using the Cosmos module of the SolidWorks software package. The simulation results showed consistency with the theoretical analysis.

  13. MD simulations of papillomavirus DNA-E2 protein complexes hints at a protein structural code for DNA deformation.

    PubMed

    Falconi, M; Oteri, F; Eliseo, T; Cicero, D O; Desideri, A

    2008-08-01

    The structural dynamics of the DNA binding domains of the human papillomavirus strain 16 and the bovine papillomavirus strain 1, complexed with their DNA targets, has been investigated by modeling, molecular dynamics simulations, and nuclear magnetic resonance analysis. The simulations underline different dynamical features of the protein scaffolds and a different mechanical interaction of the two proteins with DNA. The two protein structures, although very similar, show differences in the relative mobility of secondary structure elements. Protein structural analyses, principal component analysis, and geometrical and energetic DNA analyses indicate that the two transcription factors utilize different strategies in DNA recognition and deformation. Results show that the protein's indirect readout of DNA is not attributable solely to the flexibility of the DNA molecule but is finely tuned by the mechanical and dynamical properties of the protein scaffold involved in the interaction.

  14. Voxel Datacubes for 3D Visualization in Blender

    NASA Astrophysics Data System (ADS)

    Gárate, Matías

    2017-05-01

    The growth of computational astrophysics and the complexity of multi-dimensional data sets highlight the need for new, versatile visualization tools for both the analysis and the presentation of the data. In this work, we show how to use the open-source software Blender as a three-dimensional (3D) visualization tool to study and visualize numerical simulation results, focusing on astrophysical hydrodynamic experiments. With a datacube as input, the software can generate a volume rendering of the 3D data, show the evolution of a simulation in time, and do a fly-around camera animation to highlight the points of interest. We explain the process to import simulation outputs into Blender using the voxel data format, and how to set up a visualization scene in the software interface. This method allows scientists to perform a complementary visual analysis of their data and display their results in an appealing way, both for outreach and science presentations.
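    A sketch of the datacube export step, assuming the legacy Blender voxel-data layout (a header of four little-endian int32 values giving nx, ny, nz and the frame count, followed by normalised float32 voxel values); the exact layout should be verified against your Blender version:

```python
import struct

def write_bvox(path, cube):
    """Write a nested-list datacube (indexed cube[z][y][x]) to the assumed legacy
    Blender voxel-data format: four little-endian int32 (nx, ny, nz, nframes)
    followed by float32 values normalised to [0, 1]."""
    nz, ny, nx = len(cube), len(cube[0]), len(cube[0][0])
    flat = [v for plane in cube for row in plane for v in row]
    lo, hi = min(flat), max(flat)
    scaled = [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in flat]
    with open(path, "wb") as f:
        f.write(struct.pack("<4i", nx, ny, nz, 1))
        f.write(struct.pack("<%df" % len(scaled), *scaled))

# Toy 4x3x2 datacube standing in for a hydrodynamic simulation output.
cube = [[[float(x + y + z) for x in range(4)] for y in range(3)] for z in range(2)]
write_bvox("cube.bvox", cube)
```

    Inside Blender, the resulting file would then be loaded as a voxel-data texture on a volume material; normalising to [0, 1] is what lets the renderer map density to opacity.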

  15. How well do simulated last glacial maximum tropical temperatures constrain equilibrium climate sensitivity?

    NASA Astrophysics Data System (ADS)

    Hopcroft, Peter O.; Valdes, Paul J.

    2015-07-01

    Previous work demonstrated a significant correlation between tropical surface air temperature and equilibrium climate sensitivity (ECS) in PMIP (Paleoclimate Modelling Intercomparison Project) phase 2 model simulations of the last glacial maximum (LGM). This implies that reconstructed LGM cooling in this region could provide information about the climate system ECS value. We analyze results from new simulations of the LGM performed as part of Coupled Model Intercomparison Project (CMIP5) and PMIP phase 3. These results show no consistent relationship between the LGM tropical cooling and ECS. A radiative forcing and feedback analysis shows that a number of factors are responsible for this decoupling, some of which are related to vegetation and aerosol feedbacks. While several of the processes identified are LGM specific and do not impact on elevated CO2 simulations, this analysis demonstrates one area where the newer CMIP5 models behave in a qualitatively different manner compared with the older ensemble. The results imply that so-called Earth System components such as vegetation and aerosols can have a significant impact on the climate response in LGM simulations, and this should be taken into account in future analyses.
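    The ensemble relationship being tested above can be examined with a simple Pearson correlation across models; the cooling/ECS values below are synthetic, for illustration only, not actual PMIP results.

```python
import statistics

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Synthetic illustration only, NOT PMIP values: a tightly coupled ensemble
# (PMIP2-like) shows stronger LGM tropical cooling in higher-ECS models,
# i.e. a strong negative correlation.
coupled_cooling = [-2.0, -3.1, -2.5, -4.0]  # tropical SAT change (K)
coupled_ecs = [2.1, 3.4, 2.7, 4.3]          # equilibrium climate sensitivity (K)
print(pearson_r(coupled_cooling, coupled_ecs) < -0.9)  # → True
```

    The CMIP5/PMIP3 result described above corresponds to this correlation collapsing toward zero, which is why LGM tropical cooling no longer constrains ECS in the newer ensemble.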

  16. Next Generation Simulation Framework for Robotic and Human Space Missions

    NASA Technical Reports Server (NTRS)

    Cameron, Jonathan M.; Balaram, J.; Jain, Abhinandan; Kuo, Calvin; Lim, Christopher; Myint, Steven

    2012-01-01

    The Dartslab team at NASA's Jet Propulsion Laboratory (JPL) has a long history of developing physics-based simulations based on the Darts/Dshell simulation framework that have been used to simulate many planetary robotic missions, such as the Cassini spacecraft and the rovers that are currently driving on Mars. Recent collaboration efforts between the Dartslab team at JPL and the Mission Operations Directorate (MOD) at NASA Johnson Space Center (JSC) have led to significant enhancements to the Dartslab DSENDS (Dynamics Simulator for Entry, Descent and Surface landing) software framework. The new version of DSENDS is now being used for new planetary mission simulations at JPL. JSC is using DSENDS as the foundation for a suite of software known as COMPASS (Core Operations, Mission Planning, and Analysis Spacecraft Simulation) that is the basis for their new human space mission simulations and analysis. In this paper, we will describe the collaborative process with the JPL Dartslab and the JSC MOD team that resulted in the redesign and enhancement of the DSENDS software. We will outline the improvements in DSENDS that simplify creation of new high-fidelity robotic/spacecraft simulations. We will illustrate how DSENDS simulations are assembled and show results from several mission simulations.

  17. Kinetic simulations of the stability of a plasma confined by the magnetic field of a current rod

    NASA Astrophysics Data System (ADS)

    Tonge, J.; Leboeuf, J. N.; Huang, C.; Dawson, J. M.

    2003-09-01

    The kinetic stability of a plasma in the magnetic field of a current rod is investigated for various temperature and density profiles using three-dimensional particle-in-cell simulations. Such a plasma obeys physics similar to that of a plasma in a dipole magnetic field, but computer simulations and theoretical analysis are easier for a plasma in the field of a current rod. Simple energy-principle calculations and simulations with a variety of temperature and density profiles show that the plasma is stable to interchange for pressure profiles proportional to r^(-10/3). As predicted by theory, the simulations also show that the density profile will be stationary as long as density is proportional to r^(-2), even though the temperature profile may not be stable.

  18. A Collaborative Extensible User Environment for Simulation and Knowledge Management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Freedman, Vicky L.; Lansing, Carina S.; Porter, Ellen A.

    2015-06-01

    In scientific simulation, scientists use measured data to create numerical models, execute simulations and analyze results from advanced simulators executing on high performance computing platforms. This process usually requires a team of scientists collaborating on data collection, model creation and analysis, and on authorship of publications and data. This paper shows that scientific teams can benefit from a user environment called Akuna that permits subsurface scientists in disparate locations to collaborate on numerical modeling and analysis projects. The Akuna user environment is built on the Velo framework that provides both a rich client environment for conducting and analyzing simulations and a Web environment for data sharing and annotation. Akuna is an extensible toolset that integrates with Velo, and is designed to support any type of simulator. This is achieved through data-driven user interface generation, use of a customizable knowledge management platform, and an extensible framework for simulation execution, monitoring and analysis. This paper describes how the customized Velo content management system and the Akuna toolset are used to integrate and enhance an effective collaborative research and application environment. The extensible architecture of Akuna is also described and demonstrates its usage for creation and execution of a 3D subsurface simulation.

  19. Numerical Simulation of Monitoring Corrosion in Reinforced Concrete Based on Ultrasonic Guided Waves

    PubMed Central

    Zheng, Zhupeng; Lei, Ying; Xue, Xin

    2014-01-01

    Numerical simulation based on the finite element method is conducted to predict the location of pitting corrosion in reinforced concrete. Simulation results show that it is feasible to monitor corrosion in reinforced concrete with ultrasonic guided waves, and wavelet analysis can be used for the extremely weak guided-wave signal caused by energy leaking into the concrete. The time-frequency localization characteristic of the wavelet transform is adopted in the corrosion monitoring of reinforced concrete. Guided waves can be successfully used to identify corrosion defects in reinforced concrete with a suitable choice of wavelet basis function and scale. PMID:25013865

  20. Collapse Causes Analysis and Numerical Simulation for a Rigid Frame Multiple Arch Bridge

    NASA Astrophysics Data System (ADS)

    Zuo, XinDai

    2018-03-01

    Following the collapse accident of Baihe Bridge, the author first built a planar model of the whole bridge and analyzed the carrying capacity of the structure under a 170-ton lorry load. The author then built a spatial finite element model that can accurately simulate the course of the bridge collapse. The collapse course was simulated and the accident scene was reproduced. Spatial analysis showed that the rotational stiffness of the pier bottom had a large influence on the collapse form of the superstructure. The conclusion was that the 170-ton lorry load and the multiple arch bridge design were the important factors leading to the collapse.

  1. CFD Analysis of Evaporation-Condensation Phenomenon In an Evaporation Chamber of Natural Vacuum Solar Desalination

    NASA Astrophysics Data System (ADS)

    Ambarita, H.; Ronowikarto, A. D.; Siregar, R. E. T.; Setyawan, E. Y.

    2018-01-01

    Desalination is one solution to water scarcity, and using renewable energy such as solar, wind, and geothermal energy is expected to reduce its energy demand. This requires a study of the modeling and transport-parameter determination of natural vacuum solar desalination, using the computational fluid dynamics (CFD) method to simulate the model. A three-dimensional, two-phase model was developed for the evaporation-condensation phenomenon in natural vacuum solar desalination. The CFD simulation results were compared with the available experimental data. The simulation results show that there is an evaporation-condensation phenomenon in the evaporation chamber. From the simulation, the fresh water productivity is 2.21 litres, against 2.1 litres from the experiment; this comparison shows an error of magnitude 0.4%. The CFD results also show that vacuum pressure lowers the saturation temperature of the sea water.

  2. Using a virtual reality temporal bone simulator to assess otolaryngology trainees.

    PubMed

    Zirkle, Molly; Roberson, David W; Leuwer, Rudolf; Dubrowski, Adam

    2007-02-01

    The objective of this study is to determine the feasibility of computerized evaluation of resident performance using hand motion analysis on a virtual reality temporal bone (VR TB) simulator. We hypothesized that both computerized analysis and expert ratings would discriminate the performance of novices from experienced trainees. We also hypothesized that performance on the virtual reality temporal bone simulator (VR TB) would differentiate based on previous drilling experience. The authors conducted a randomized, blind assessment study. Nineteen volunteers from the Otolaryngology-Head and Neck Surgery training program at the University of Toronto drilled both a cadaveric TB and a simulated VR TB. Expert reviewers were asked to assess operative readiness of the trainee based on a blind video review of their performance. Computerized hand motion analysis of each participant's performance was conducted. Expert raters were able to discriminate novices from experienced trainees (P < .05) on cadaveric temporal bones, and there was a trend toward discrimination on VR TB performance. Hand motion analysis showed that experienced trainees had better movement economy than novices (P < .05) on the VR TB. Performance, as measured by hand motion analysis on the VR TB simulator, reflects trainees' previous drilling experience. This study suggests that otolaryngology trainees could accomplish initial temporal bone training on a VR TB simulator, which can provide feedback to the trainee, and may reduce the need for constant faculty supervision and evaluation.

  3. JASMINE data analysis

    NASA Astrophysics Data System (ADS)

    Yamada, Y.; Gouda, N.; Yano, T.; Kobayashi, Y.; Niwa, Y.

    2008-07-01

    Japan Astrometry Satellite Mission for Infrared Exploration (JASMINE) aims to construct a map of the Galactic bulge with a 10 μas accuracy. We use z-band CCD or K-band array detector to avoid dust absorption, and observe about 10 × 20 degrees area around the Galactic bulge region. In this poster, we show the observation strategy, reduction scheme, and error budget. We also show the basic design of the software for the end-to-end simulation of JASMINE, named JASMINE Simulator.

  4. Large-scale three-dimensional phase-field simulations for phase coarsening at ultrahigh volume fraction on high-performance architectures

    NASA Astrophysics Data System (ADS)

    Yan, Hui; Wang, K. G.; Jones, Jim E.

    2016-06-01

    A parallel algorithm for large-scale three-dimensional phase-field simulations of phase coarsening is developed and implemented on high-performance architectures. From the large-scale simulations, a new kinetics of phase coarsening in the region of ultrahigh volume fraction is found. The parallel implementation is capable of harnessing the greater computer power available from high-performance architectures. The parallelized code enables an increase in three-dimensional simulation system size up to a 512^3 grid cube. Through the parallelized code, practical runtime can be achieved for three-dimensional large-scale simulations, and the statistical significance of the results from these high resolution parallel simulations is greatly improved over that obtainable from serial simulations. A detailed performance analysis on speed-up and scalability is presented, showing good scalability which improves with increasing problem size. In addition, a model for prediction of runtime is developed, which shows good agreement with actual run times from numerical tests.

  5. Realism of Indian Summer Monsoon Simulation in a Quarter Degree Global Climate Model

    NASA Astrophysics Data System (ADS)

    Salunke, P.; Mishra, S. K.; Sahany, S.; Gupta, K.

    2017-12-01

    This study assesses the fidelity of Indian Summer Monsoon (ISM) simulations using a global model at an ultra-high horizontal resolution (UHR) of 0.25°. The model used was the atmospheric component of the Community Earth System Model version 1.2.0 (CESM 1.2.0) developed at the National Center for Atmospheric Research (NCAR). Precipitation and temperature over the Indian region were analyzed for a wide range of space and time scales to evaluate the fidelity of the model under UHR, with special emphasis on the ISM simulations during the period of June-through-September (JJAS). Comparing the UHR simulations with observed data from the India Meteorological Department (IMD) over the Indian land, it was found that 0.25° resolution significantly improved spatial rainfall patterns over many regions, including the Western Ghats and the South-Eastern peninsula as compared to the standard model resolution. Convective and large-scale rainfall components were analyzed using the European Centre for Medium Range Weather Forecast (ECMWF) Re-Analysis (ERA)-Interim (ERA-I) data and it was found that at 0.25° resolution, there was an overall increase in the large-scale component and an associated decrease in the convective component of rainfall as compared to the standard model resolution. Analysis of the diurnal cycle of rainfall suggests a significant improvement in the phase characteristics simulated by the UHR model as compared to the standard model resolution. Analysis of the annual cycle of rainfall, however, failed to show any significant improvement in the UHR model as compared to the standard version. Surface temperature analysis showed small improvements in the UHR model simulations as compared to the standard version. Thus, one may conclude that there are some significant improvements in the ISM simulations using a 0.25° global model, although there is still plenty of scope for further improvement in certain aspects of the annual cycle of rainfall.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sigeti, David E.; Pelak, Robert A.

    We present a Bayesian statistical methodology for identifying improvement in predictive simulations, including an analysis of the number of (presumably expensive) simulations that will need to be made in order to establish with a given level of confidence that an improvement has been observed. Our analysis assumes the ability to predict (or postdict) the same experiments with legacy and new simulation codes and uses a simple binomial model for the probability, θ, that, in an experiment chosen at random, the new code will provide a better prediction than the old. This model makes it possible to do statistical analysis with an absolute minimum of assumptions about the statistics of the quantities involved, at the price of discarding some potentially important information in the data. In particular, the analysis depends only on whether or not the new code predicts better than the old in any given experiment, and not on the magnitude of the improvement. We show how the posterior distribution for θ may be used, in a kind of Bayesian hypothesis testing, both to decide if an improvement has been observed and to quantify our confidence in that decision. We quantify the predictive probability that should be assigned, prior to taking any data, to the possibility of achieving a given level of confidence, as a function of sample size. We show how this predictive probability depends on the true value of θ and, in particular, how there will always be a region around θ = 1/2 where it is highly improbable that we will be able to identify an improvement in predictive capability, although the width of this region will shrink to zero as the sample size goes to infinity. We show how the posterior standard deviation may be used as a kind of 'plan B metric' in the case that the analysis shows that θ is close to 1/2, and argue that such a plan B should generally be part of hypothesis testing.
    All the analysis presented in the paper is done with a general beta-function prior for θ, enabling sequential analysis in which a small number of new simulations may be done and the resulting posterior for θ used as a prior to inform the next stage of power analysis.
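
    The binomial/beta machinery described above is standard; the following is a minimal sketch (function names and the numerical integration are ours, not the paper's) of computing the posterior probability that θ > 1/2 given s better-prediction outcomes in n paired experiments, under a Beta(a, b) prior:

```python
import math

def _beta_log_pdf_unnorm(x, a, b):
    # log of x^(a-1) * (1-x)^(b-1), the unnormalized Beta density
    return (a - 1) * math.log(x) + (b - 1) * math.log(1 - x)

def posterior_prob_better(successes, n, a_prior=1.0, b_prior=1.0, grid=20001):
    """P(theta > 1/2 | data) for a binomial likelihood and Beta(a, b) prior.

    The posterior is Beta(a + s, b + n - s); its tail mass above 1/2 is
    obtained here by simple trapezoidal integration of the density.
    """
    a = a_prior + successes
    b = b_prior + (n - successes)
    # log of the Beta normalization constant B(a, b)
    log_norm = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

    def pdf(x):
        if x <= 0.0 or x >= 1.0:
            return 0.0
        return math.exp(_beta_log_pdf_unnorm(x, a, b) - log_norm)

    lo, hi = 0.5, 1.0
    h = (hi - lo) / (grid - 1)
    total = 0.5 * (pdf(lo) + pdf(hi))
    for i in range(1, grid - 1):
        total += pdf(lo + i * h)
    return total * h
```

    With a uniform prior, 9 wins in 10 trials gives a posterior probability above 0.99 that the new code is better, while 5 wins in 10 leaves it at exactly 0.5, illustrating the hard-to-decide region around θ = 1/2 that the paper analyzes.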

  7. Massive data compression for parameter-dependent covariance matrices

    NASA Astrophysics Data System (ADS)

    Heavens, Alan F.; Sellentin, Elena; de Mijolla, Damien; Vianello, Alvise

    2017-12-01

    We show how the massive data compression algorithm MOPED can be used to reduce, by orders of magnitude, the number of simulated data sets which are required to estimate the covariance matrix needed for the analysis of Gaussian-distributed data. This is relevant when the covariance matrix cannot be calculated directly. The compression is especially valuable when the covariance matrix varies with the model parameters. In this case, it may be prohibitively expensive to run enough simulations to estimate the full covariance matrix throughout the parameter space. This compression may be particularly valuable for the next generation of weak lensing surveys, such as proposed for Euclid and the Large Synoptic Survey Telescope, for which the number of summary data (such as band power or shear correlation estimates) is very large, ∼10^4, due to the large number of tomographic redshift bins into which the data will be divided. In the pessimistic case where the covariance matrix is estimated separately for all points in a Markov chain Monte Carlo analysis, this may require an unfeasible 10^9 simulations. We show here that MOPED can reduce this number by a factor of 1000, or a factor of ∼10^6 if some regularity in the covariance matrix is assumed, reducing the number of simulations required to a manageable 10^3, making an otherwise intractable analysis feasible.
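
    As an illustration of the idea (a sketch only: unit noise covariance is assumed, under which MOPED's weighting vectors reduce to a Gram-Schmidt orthonormalization of the mean derivatives with respect to the parameters; function names are ours), each long data vector is compressed to one number per model parameter before any covariance estimation:

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(vectors):
    """Orthonormalize the mean-derivative vectors (unit covariance assumed).

    With identity noise covariance, the MOPED weighting vectors are the
    orthonormalized derivatives of the mean data vector with respect to
    each model parameter.
    """
    basis = []
    for v in vectors:
        w = list(v)
        for b in basis:
            c = dot(w, b)
            w = [wi - c * bi for wi, bi in zip(w, b)]
        norm = math.sqrt(dot(w, w))
        basis.append([wi / norm for wi in w])
    return basis

def moped_compress(data, b_vectors):
    """Compress an n-dimensional data vector to p numbers, one per parameter."""
    return [dot(b, data) for b in b_vectors]
```

    After compression, the covariance to estimate is only p x p (p = number of parameters) rather than n x n, so far fewer simulated data sets suffice, which is the source of the large savings quoted above.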

  8. Vegetation spatial variability and its effect on vegetation indices

    NASA Technical Reports Server (NTRS)

    Ormsby, J. P.; Choudhury, B. J.; Owe, M.

    1987-01-01

    Landsat MSS data were used to simulate low resolution satellite data, such as NOAA AVHRR, to quantify the fractional vegetation cover within a pixel and relate the fractional cover to the normalized difference vegetation index (NDVI) and the simple ratio (SR). The MSS data were converted to radiances from which the NDVI and SR values for the simulated pixels were determined. Each simulated pixel was divided into clusters using an unsupervised classification program. Spatial and spectral analysis provided a means of combining clusters representing similar surface characteristics into vegetated and non-vegetated areas. Analysis showed an average error of 12.7 per cent in determining these areas. NDVI values less than 0.3 represented fractional vegetated areas of 5 per cent or less, while a value of 0.7 or higher represented fractional vegetated areas greater than 80 per cent. Regression analysis showed a strong linear relation between fractional vegetation area and the NDVI and SR values; correlation values were 0.89 and 0.95 respectively. The range of NDVI values calculated from the MSS data agrees well with field studies.
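
    The two indices used here have standard definitions; the following is a minimal sketch in which the classification helper and its labels simply encode the thresholds quoted in this abstract:

```python
def ndvi(nir, red):
    """Normalized difference vegetation index from NIR and red radiances."""
    return (nir - red) / (nir + red)

def simple_ratio(nir, red):
    """Simple ratio (SR) of NIR to red radiance."""
    return nir / red

def fractional_cover_class(ndvi_value):
    """Coarse interpretation using the thresholds quoted in the abstract:
    NDVI < 0.3 -> fractional vegetated area of 5% or less,
    NDVI >= 0.7 -> fractional vegetated area greater than 80%."""
    if ndvi_value < 0.3:
        return "sparse (<= 5% vegetated)"
    if ndvi_value >= 0.7:
        return "dense (> 80% vegetated)"
    return "intermediate"
```

    For example, NIR and red radiances of 0.5 and 0.1 give NDVI of about 0.67 and SR of 5, which the abstract's regression results would associate with a high fractional vegetation cover.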

  9. Battery Lifetime Analysis and Simulation Tool Suite | Transportation

    Science.gov Websites

    The Battery Lifetime Analysis and Simulation Tool Suite supports comparisons of different battery-use strategies to predict long-term performance in electric vehicles (EVs), as well as the economic and greenhouse gas impacts of different EV scenarios. Users can enter their own battery duty cycles for direct simulation to evaluate the impacts of different battery-use strategies.

  10. Best estimate plus uncertainty analysis of departure from nucleate boiling limiting case with CASL core simulator VERA-CS in response to PWR main steam line break event

    DOE PAGES

    Brown, Cameron S.; Zhang, Hongbin; Kucukboyaci, Vefa; ...

    2016-09-07

    VERA-CS (Virtual Environment for Reactor Applications, Core Simulator) is a coupled neutron transport and thermal-hydraulics subchannel code under development by the Consortium for Advanced Simulation of Light Water Reactors (CASL). VERA-CS was used to simulate a typical pressurized water reactor (PWR) full core response with 17x17 fuel assemblies for a main steam line break (MSLB) accident scenario with the most reactive rod cluster control assembly stuck out of the core. The accident scenario was initiated at the hot zero power (HZP) at the end of the first fuel cycle with return to power state points that were determined by a system analysis code, and the most limiting state point was chosen for core analysis. The best estimate plus uncertainty (BEPU) analysis method was applied using Wilks' nonparametric statistical approach. In this way, 59 full core simulations were performed to provide the minimum departure from nucleate boiling ratio (MDNBR) at the 95/95 (95% probability with 95% confidence level) tolerance limit. The results show that this typical PWR core remains within MDNBR safety limits for the MSLB accident.
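
    The figure of 59 runs follows from the first-order, one-sided form of Wilks' formula: the smallest n for which the probability that at least one of n random runs exceeds the 95th percentile is at least 95%, i.e. 1 - 0.95^n >= 0.95. A short sketch (the function name is ours):

```python
def wilks_sample_size(probability=0.95, confidence=0.95):
    """Smallest n satisfying 1 - probability**n >= confidence.

    First-order, one-sided Wilks criterion: with n random code runs, the
    largest observed value bounds the given population percentile
    (`probability`) with at least the given `confidence`.
    """
    n = 1
    while 1.0 - probability ** n < confidence:
        n += 1
    return n
```

    For the 95/95 tolerance limit used above this gives exactly 59 simulations; tightening the confidence to 99% would require 90.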

  11. The Use of Online Modules and the Effect on Student Outcomes in a High School Chemistry Class

    NASA Astrophysics Data System (ADS)

    Lamb, Richard L.; Annetta, Len

    2013-10-01

    The purpose of the study was to review the efficacy of online chemistry simulations in a high school chemistry class and provide discussion of the factors that may affect student learning. The sample consisted of 351 high school students exposed to online simulations. Researchers administered a pretest, intermediate test and posttest to measure chemistry content knowledge acquired during the use of online chemistry laboratory simulations. The authors also analyzed student journal entries as an attitudinal measure of chemistry during the simulation experience. The four analyses conducted were Repeated Measures Analysis of Variance, a three-way Analysis of Variance, Logistic Regression and Multiple Analysis of Variance. Each of these analyses addresses a slightly different aspect of the factors regarding student attitudes and outcomes. Results indicate that there is a statistically significant main effect across grouping type (experimental versus control, p = 0.042, α = 0.05). Analysis of student journal entries suggests that attitudinal factors may affect student outcomes concerning the use of online supplemental instruction. The implications of this study are that the use of online simulations promotes increased understanding of chemistry content through open-ended and interactive questioning.

  12. Best estimate plus uncertainty analysis of departure from nucleate boiling limiting case with CASL core simulator VERA-CS in response to PWR main steam line break event

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Cameron S.; Zhang, Hongbin; Kucukboyaci, Vefa

    VERA-CS (Virtual Environment for Reactor Applications, Core Simulator) is a coupled neutron transport and thermal-hydraulics subchannel code under development by the Consortium for Advanced Simulation of Light Water Reactors (CASL). VERA-CS was used to simulate a typical pressurized water reactor (PWR) full core response with 17x17 fuel assemblies for a main steam line break (MSLB) accident scenario with the most reactive rod cluster control assembly stuck out of the core. The accident scenario was initiated at the hot zero power (HZP) at the end of the first fuel cycle with return to power state points that were determined by a system analysis code, and the most limiting state point was chosen for core analysis. The best estimate plus uncertainty (BEPU) analysis method was applied using Wilks' nonparametric statistical approach. In this way, 59 full core simulations were performed to provide the minimum departure from nucleate boiling ratio (MDNBR) at the 95/95 (95% probability with 95% confidence level) tolerance limit. The results show that this typical PWR core remains within MDNBR safety limits for the MSLB accident.

  13. Analysis of Covariance: Is It the Appropriate Model to Study Change?

    ERIC Educational Resources Information Center

    Marston, Paul T.; Borich, Gary D.

    The four main approaches to measuring treatment effects in schools (raw gain, residual gain, covariance, and true scores) were compared. A simulation study showed that true score analysis produced a large number of Type I errors. When corrected for this error, the method showed the least power of the four. This outcome was clearly the result of the…

  14. Virtual Observatories for Space Physics Observations and Simulations: New Routes to Efficient Access and Visualization

    NASA Technical Reports Server (NTRS)

    Roberts, Aaron

    2005-01-01

    New tools for data access and visualization promise to make the analysis of space plasma data both more efficient and more powerful, especially for answering questions about the global structure and dynamics of the Sun-Earth system. We will show how existing tools (particularly the Virtual Space Physics Observatory (VSPO) and the Visual System for Browsing, Analysis and Retrieval of Data (ViSBARD); look for the acronyms in Google) already provide rapid access to such information as spacecraft orbits, browse plots, and detailed data, as well as visualizations that can quickly unite our view of multispacecraft observations. We will show movies illustrating multispacecraft observations of the solar wind and magnetosphere during a magnetic storm, and of simulated 30-spacecraft observations derived from MHD simulations of the magnetosphere sampled along likely trajectories of the spacecraft for the MagCon mission. An important issue remaining to be solved is how best to integrate simulation data and services into the Virtual Observatory environment, and this talk will hopefully stimulate further discussion along these lines.

  15. Terascale Visualization: Multi-resolution Aspirin for Big-Data Headaches

    NASA Astrophysics Data System (ADS)

    Duchaineau, Mark

    2001-06-01

    Recent experience on the Accelerated Strategic Computing Initiative (ASCI) computers shows that computational physicists are successfully producing a prodigious collection of numbers on several thousand processors. But with this wealth of numbers comes an unprecedented difficulty in processing and moving them to provide useful insight and analysis. In this talk, a few simulations are highlighted where recent advancements in multiple-resolution mathematical representations and algorithms have provided some hope of seeing most of the physics of interest while keeping within the practical limits of the post-simulation storage and interactive data-exploration resources. A whole host of visualization research activities was spawned by the 1999 Gordon Bell Prize-winning computation of a shock-tube experiment showing Richtmyer-Meshkov turbulent instabilities. This includes efforts for the entire data pipeline from running simulation to interactive display: wavelet compression of field data, multi-resolution volume rendering and slice planes, out-of-core extraction and simplification of mixing-interface surfaces, shrink-wrapping to semi-regularize the surfaces, semi-structured surface wavelet compression, and view-dependent display-mesh optimization. More recently on the 12 TeraOps ASCI platform, initial results from a 5120-processor, billion-atom molecular dynamics simulation showed that 30-to-1 reductions in storage size can be achieved with no human-observable errors for the analysis required in simulations of supersonic crack propagation. This made it possible to store the 25 trillion bytes worth of simulation numbers in the available storage, which was under 1 trillion bytes. While multi-resolution methods and related systems are still in their infancy, for the largest-scale simulations there is often no other choice should the science require detailed exploration of the results.

  16. Comparing Real-time Versus Delayed Video Assessments for Evaluating ACGME Sub-competency Milestones in Simulated Patient Care Environments

    PubMed Central

    Stiegler, Marjorie; Hobbs, Gene; Martinelli, Susan M; Zvara, David; Arora, Harendra; Chen, Fei

    2018-01-01

    Background Simulation is an effective method for creating objective summative assessments of resident trainees. Real-time assessment (RTA) in simulated patient care environments is logistically challenging, especially when evaluating a large group of residents in multiple simulation scenarios. To date, there is very little data comparing RTA with delayed (hours, days, or weeks later) video-based assessment (DA) for simulation-based assessments of Accreditation Council for Graduate Medical Education (ACGME) sub-competency milestones. We hypothesized that sub-competency milestone evaluation scores obtained from DA, via audio-video recordings, are equivalent to the scores obtained from RTA. Methods Forty-one anesthesiology residents were evaluated in three separate simulated scenarios, representing different ACGME sub-competency milestones. All scenarios had one faculty member perform RTA and two additional faculty members perform DA. Subsequently, the scores generated by RTA were compared with the average scores generated by DA. Variance component analysis was conducted to assess the amount of variation in scores attributable to residents and raters. Results Paired t-tests showed no significant difference in scores between RTA and averaged DA for all cases. Cases 1, 2, and 3 showed an intraclass correlation coefficient (ICC) of 0.67, 0.85, and 0.50 for agreement between RTA scores and averaged DA scores, respectively. Analysis of variance of the scores assigned by the three raters showed a small proportion of variance attributable to raters (4% to 15%). Conclusions The results demonstrate that video-based delayed assessment is as reliable as real-time assessment, as both assessment methods yielded comparable scores. Based on a department’s needs or logistical constraints, our findings support the use of either real-time or delayed video evaluation for assessing milestones in a simulated patient care environment. PMID:29736352

  17. Study on magnetic force of electromagnetic levitation circular knitting machine

    NASA Astrophysics Data System (ADS)

    Wu, X. G.; Zhang, C.; Xu, X. S.; Zhang, J. G.; Yan, N.; Zhang, G. Z.

    2018-06-01

    The structure of the driving coil and the electromagnetic force of a test prototype of an electromagnetic-levitation (EL) circular knitting machine are studied. In this paper, the structure and working principle of the driving coil of the EL circular knitting machine are first introduced. A mathematical modelling analysis of the driving electromagnetic force is then carried out, and the coil's magnetic induction intensity and the needle's electromagnetic force are simulated with the Ansoft Maxwell finite element software. Finally, an experimental platform is built to measure the coil's magnetic induction intensity and the needle's electromagnetic force. The results show that the theoretical analysis, the simulation analysis, and the test results are very close, which confirms the correctness of the proposed model.

  18. A simulations approach for meta-analysis of genetic association studies based on additive genetic model.

    PubMed

    John, Majnu; Lencz, Todd; Malhotra, Anil K; Correll, Christoph U; Zhang, Jian-Ping

    2018-06-01

    Meta-analysis of genetic association studies is being increasingly used to assess phenotypic differences between genotype groups. When the underlying genetic model is assumed to be dominant or recessive, assessing the phenotype differences based on summary statistics, reported for individual studies in a meta-analysis, is a valid strategy. However, when the genetic model is additive, a similar strategy based on summary statistics will lead to biased results. This fact about the additive model is one of the things that we establish in this paper, using simulations. The main goal of this paper is to present an alternate strategy for the additive model based on simulating data for the individual studies. We show that the alternate strategy is far superior to the strategy based on summary statistics.
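
    The strategy of simulating individual-level data for each study can be sketched as follows (purely illustrative: the allele frequency, effect size, noise model, equal study sizes, and equal-weight pooling are our assumptions, not the paper's):

```python
import random

def simulate_study(n, allele_freq, beta, rng):
    """One study's individual-level data under the additive genetic model:
    genotypes g in {0, 1, 2} drawn under Hardy-Weinberg equilibrium,
    phenotype y = beta * g + standard normal noise."""
    g = [(rng.random() < allele_freq) + (rng.random() < allele_freq)
         for _ in range(n)]
    y = [beta * gi + rng.gauss(0.0, 1.0) for gi in g]
    return g, y

def ols_slope(g, y):
    """Per-study additive-model effect estimate: cov(g, y) / var(g)."""
    n = len(g)
    mg, my = sum(g) / n, sum(y) / n
    cov = sum((gi - mg) * (yi - my) for gi, yi in zip(g, y))
    var = sum((gi - mg) ** 2 for gi in g)
    return cov / var

def pooled_estimate(n_studies, n_per_study, allele_freq, beta, seed=1):
    """Equal-weight pooling of per-study slopes (equal study sizes assumed)."""
    rng = random.Random(seed)
    slopes = []
    for _ in range(n_studies):
        g, y = simulate_study(n_per_study, allele_freq, beta, rng)
        slopes.append(ols_slope(g, y))
    return sum(slopes) / len(slopes)
```

    With many studies of moderate size, the pooled estimate recovers the true additive effect, whereas collapsing genotype groups into summary statistics (as one would for dominant or recessive models) biases it, which is the point the paper establishes by simulation.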

  19. The research on flow pulsation characteristics of axial piston pump

    NASA Astrophysics Data System (ADS)

    Wang, Bingchao; Wang, Yulin

    2017-01-01

    Flow pulsation is an important factor influencing axial piston pump performance. In this paper we model and simulate the axial piston pump with AMESim software to explore its flow pulsation characteristics under various factors. Theoretical analysis shows that the loading pressure, angular speed, number of pistons, and the accumulator have an evident influence on the flow pulsation characteristics. This simulation and analysis can be used to reduce the flow pulsation rate by properly setting the related factors.

  20. Stability analysis and finite element simulations of superplastic forming in the presence of hydrostatic pressure

    NASA Astrophysics Data System (ADS)

    Nazzal, M. A.

    2018-04-01

    It is established that some superplastic materials undergo significant cavitation during deformation. In this work, a stability analysis of the superplastic copper-based alloy Coronze-638 at 550 °C, based on Hart's definition of stable plastic deformation, and finite element simulations of the balanced biaxial loading case are carried out to study the effects of hydrostatic pressure on cavitation evolution during superplastic forming. The finite element results show that imposing hydrostatic pressure yields a reduction in cavitation growth.

  1. Simulation System of Car Crash Test in C-NCAP Analysis Based on an Improved Apriori Algorithm*

    NASA Astrophysics Data System (ADS)

    Xiang, LI

    In order to analyze car crash tests in C-NCAP, an improved algorithm based on the Apriori algorithm is given in this paper. The new algorithm is implemented with a vertical data layout, breadth-first searching, and intersecting. It takes advantage of the efficiency of the vertical data layout and intersecting, and prunes candidate frequent item sets as Apriori does. Finally, the new algorithm is applied in a simulation system for car crash test analysis. The results show that the discovered relations affect the C-NCAP test results and can provide a reference for automotive design.
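
    A minimal sketch of the vertical-layout-with-intersection idea (an Eclat-style illustration written by us, not the paper's algorithm): each item maps to the set of transaction ids containing it, and the support of a larger item set is the size of the intersection of the tidsets of two of its subsets:

```python
from itertools import combinations

def vertical_layout(transactions):
    """Map each item to the set of transaction ids (tids) containing it."""
    tids = {}
    for tid, items in enumerate(transactions):
        for item in items:
            tids.setdefault(item, set()).add(tid)
    return tids

def frequent_itemsets(transactions, min_support):
    """Breadth-first mining of frequent item sets via tidset intersection."""
    tids = vertical_layout(transactions)
    # level 1: frequent single items, pruned by minimum support
    current = {frozenset([i]): t for i, t in tids.items()
               if len(t) >= min_support}
    result = dict(current)
    while current:
        next_level = {}
        for a, b in combinations(list(current), 2):
            union = a | b
            # join two k-sets sharing all but one item into a (k+1)-candidate
            if len(union) == len(a) + 1 and union not in next_level:
                t = current[a] & current[b]   # tidset intersection = support
                if len(t) >= min_support:
                    next_level[union] = t
        current = next_level
        result.update(current)
    return {itemset: len(t) for itemset, t in result.items()}
```

    The intersection step is what the vertical layout buys: support counting never rescans the transactions, since the tidset of a candidate is exactly the intersection of the tidsets of two of its k-subsets.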

  2. Cultures of simulations vs. cultures of calculations? The development of simulation practices in meteorology and astrophysics

    NASA Astrophysics Data System (ADS)

    Sundberg, Mikaela

    While the distinction between theory and experiment is often used to discuss the place of simulation from a philosophical viewpoint, other distinctions are possible from a sociological perspective. Turkle (1995) distinguishes between cultures of calculation and cultures of simulation and relates these cultures to modernity and postmodernity, respectively. What can we understand about contemporary simulation practices in science by looking at them from the point of view of these two computer cultures? What new questions does such an analysis raise for further studies? On the basis of two case studies, the present paper compares and discusses simulation activities in astrophysics and meteorology. It argues that simulation practices manifest aspects of both of these cultures simultaneously, but in different situations. By employing the dichotomies surface/depth, play/seriousness, and extreme/reasonable to characterize and operationalize cultures of calculation and cultures of simulation as sensitizing concepts, the analysis shows how work on simulation code shifts from development to use, the importance of, but also resistance towards, extensive visualization, and how simulation modelers play with extreme values yet also try to achieve results that are reasonable compared to observations.

  3. Initial Data Analysis Results for ATD-2 ISAS HITL Simulation

    NASA Technical Reports Server (NTRS)

    Lee, Hanbong

    2017-01-01

    To evaluate the operational procedures and information requirements for the core functional capabilities of the ATD-2 project, such as the tactical surface metering tool, the APREQ-CFR procedure, and data element exchanges between ramp and tower, human-in-the-loop (HITL) simulations were performed in March 2017. This presentation shows the initial data analysis results from the HITL simulations. With respect to the different runway configurations and metering values in the tactical surface scheduler, various airport performance metrics were analyzed and compared. These metrics include gate holding time, taxi-out time, runway throughput, queue size and wait time in queue, and TMI flight compliance. In addition to the metering value, other factors affecting airport performance in the HITL simulation, including run duration, runway changes, and TMI constraints, are also discussed.

  4. Efficient three-dimensional resist profile-driven source mask optimization optical proximity correction based on Abbe-principal component analysis and Sylvester equation

    NASA Astrophysics Data System (ADS)

    Lin, Pei-Chun; Yu, Chun-Chang; Chen, Charlie Chung-Ping

    2015-01-01

    As one of the critical stages of a very large scale integration fabrication process, postexposure bake (PEB) plays a crucial role in determining the final three-dimensional (3-D) profiles and lessening the standing wave effects. However, the full 3-D chemically amplified resist simulation is not widely adopted during the postlayout optimization due to the long run-time and huge memory usage. An efficient simulation method is proposed to simulate the PEB while considering standing wave effects and resolution enhancement techniques, such as source mask optimization and subresolution assist features based on the Sylvester equation and Abbe-principal component analysis method. Simulation results show that our algorithm is 20× faster than the conventional Gaussian convolution method.

  5. The joint return period analysis of natural disasters based on monitoring and statistical modeling of multidimensional hazard factors.

    PubMed

    Liu, Xueqin; Li, Ning; Yuan, Shuai; Xu, Ning; Shi, Wenqin; Chen, Weibin

    2015-12-15

    As a random event, a natural disaster has a complex occurrence mechanism, and the comprehensive analysis of multiple hazard factors is important in disaster risk assessment. In order to improve the accuracy of risk analysis and forecasting, the formation mechanism of a disaster should be considered in the analysis and calculation of multiple factors. Given the importance of, and deficiencies in, multivariate analysis of dust storm disasters, 91 severe dust storm disasters in Inner Mongolia from 1990 to 2013 were selected as study cases. Main hazard factors from the 500-hPa atmospheric circulation system, the near-surface meteorological system, and underlying surface conditions were selected to simulate and calculate the multidimensional joint return periods. Comparing the simulation results with actual dust storm events over 54 years, we found that the two-dimensional Frank copula function showed better fitting results at the lower tail of the hazard factors, while the three-dimensional Frank copula function showed better fitting results at the middle and upper tails. For dust storm disasters with short return periods, however, the three-dimensional joint return period simulation shows no obvious advantage; if the return period is longer than 10 years, it shows significant advantages in extreme value fitting. We therefore suggest that the multivariate analysis method be adopted in forecasting and risk analysis of serious disasters with longer return periods, such as earthquakes and tsunamis. The exploration of this method also lays a foundation for the prediction and warning of other natural disasters. Copyright © 2015 Elsevier B.V. All rights reserved.
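
    For readers unfamiliar with copula-based joint return periods, a minimal sketch using the bivariate Frank copula is given below. The dependence parameter theta and the marginal probabilities are illustrative values, not results from the study, and mu is the mean interarrival time of events; the "OR" and "AND" definitions are the standard ones from the copula hydrology literature:

```python
import math

def frank_copula(u, v, theta):
    """Bivariate Frank copula C(u, v; theta), theta != 0."""
    num = (math.exp(-theta * u) - 1.0) * (math.exp(-theta * v) - 1.0)
    den = math.exp(-theta) - 1.0
    return -math.log(1.0 + num / den) / theta

def joint_return_periods(u, v, theta, mu=1.0):
    """'OR' and 'AND' joint return periods for marginal non-exceedance
    probabilities u and v; mu is the mean interarrival time (years)."""
    c = frank_copula(u, v, theta)
    t_or = mu / (1.0 - c)               # at least one variable exceeds its threshold
    t_and = mu / (1.0 - u - v + c)      # both variables exceed their thresholds
    return t_or, t_and

# Illustrative 10-year marginal events (u = v = 0.9) with moderate dependence:
t_or, t_and = joint_return_periods(0.9, 0.9, theta=5.0)
print(f"OR: {t_or:.1f} yr, AND: {t_and:.1f} yr")
```

    As expected, the "OR" return period is shorter than the 10-year marginal return period and the "AND" return period is longer.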

  6. Cluster Correspondence Analysis.

    PubMed

    van de Velden, M; D'Enza, A Iodice; Palumbo, F

    2017-03-01

    A method is proposed that combines dimension reduction and cluster analysis for categorical data by simultaneously assigning individuals to clusters and optimal scaling values to categories in such a way that a single between-variance maximization objective is achieved. In a unified framework, a brief review of alternative methods is provided and we show that the proposed method is equivalent to GROUPALS applied to categorical data. Performance of the methods is appraised by means of a simulation study. The results of the joint dimension reduction and clustering methods are compared with the so-called tandem approach, a sequential analysis of dimension reduction followed by cluster analysis. The tandem approach is conjectured to perform worse when variables are added that are unrelated to the cluster structure. Our simulation study confirms this conjecture. Moreover, the results of the simulation study indicate that the proposed method also consistently outperforms alternative joint dimension reduction and clustering methods.
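
    The "tandem" baseline mentioned above (dimension reduction first, clustering second) can be sketched as PCA followed by k-means. This is a generic illustration on synthetic continuous data, not the authors' joint method or GROUPALS, and the data and parameters are invented for the example:

```python
import numpy as np

def tandem_pca_kmeans(X, n_components, k, iters=50, seed=0):
    """'Tandem' analysis: dimension reduction (PCA via SVD) followed by
    k-means on the reduced scores. The article argues this sequential
    approach can underperform joint reduction-and-clustering when
    cluster-irrelevant variables are present; this sketch is the baseline."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:n_components].T            # projected data
    rng = np.random.default_rng(seed)
    centers = scores[rng.choice(len(scores), k, replace=False)]
    for _ in range(iters):                       # plain Lloyd iterations
        d2 = ((scores[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = np.argmin(d2, axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = scores[labels == j].mean(axis=0)
    return labels

# Two well-separated synthetic groups in 5 dimensions:
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (20, 5)), rng.normal(3.0, 0.1, (20, 5))])
labels = tandem_pca_kmeans(X, n_components=2, k=2)
print(labels)
```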

  7. Comparative Evaluation of Stress Distribution in Experimentally Designed Nickel-titanium Rotary Files with Varying Cross Sections: A Finite Element Analysis.

    PubMed

    Basheer Ahamed, Shadir Bughari; Vanajassun, Purushothaman Pranav; Rajkumar, Kothandaraman; Mahalaxmi, Sekar

    2018-04-01

    Single cross-sectional nickel-titanium (NiTi) rotary instruments during continuous rotations are subjected to constant and variable stresses depending on the canal anatomy. This study was intended to create 2 new experimental, theoretic single-file designs with combinations of triple U (TU), triangle (TR), and convex triangle (CT) cross sections and to compare their bending stresses in simulated root canals with a single cross-sectional instrument using finite element analysis. A 3-dimensional model of the simulated root canal with 45° curvature and NiTi files with 5 cross-sectional designs were created using Pro/ENGINEER Wildfire 4.0 software (PTC Inc, Needham, MA) and ANSYS software (version 17; ANSYS, Inc, Canonsburg, PA) for finite element analysis. The NiTi files of 3 groups had single cross-sectional shapes of CT, TR, and TU designs, and 2 experimental groups had a CT, TR, and TU (CTU) design and a TU, TR, and CT (UTC) design. The file was rotated in simulated root canals to analyze the bending stress, and the von Mises stress value for every file was recorded in MPa. Statistical analysis was performed using the Kruskal-Wallis test and the Bonferroni-adjusted Mann-Whitney test for multiple pair-wise comparison with a P value < .05 (95%). The maximum bending stress of the rotary file was observed in the apical third of the CT design, whereas comparatively less stress was recorded in the CTU design. The TU and TR designs showed a similar stress pattern at the curvature, whereas the UTC design showed greater stress in the apical and middle thirds of the file in curved canals. All the file designs showed a statistically significant difference. The CTU designed instruments showed the least bending stress on a 45° angulated simulated root canal when compared with all the other tested designs. Copyright © 2017 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.

  8. An efficiency improvement in warehouse operation using simulation analysis

    NASA Astrophysics Data System (ADS)

    Samattapapong, N.

    2017-11-01

    In general, industry requires an efficient system for warehouse operation. Many important factors must be considered when designing an efficient warehouse system; the most important is an effective warehouse operation system that can help transfer raw material, reduce costs, and support transportation. Given these factors, researchers are interested in studying warehouse work and distribution systems. We started by collecting the important data for storage, such as information on products, on size and location, on data collection, and on production, and used all this information to build a simulation model in the Flexsim® simulation software. The simulation analysis found that the conveyor belt was a bottleneck in the warehouse operation. Therefore, several scenarios to relieve that bottleneck were generated and tested through simulation analysis. The results showed that average queuing time was reduced from 89.8% to 48.7% and the ability to transport product increased from 10.2% to 50.9%. Thus, this proved an effective method for increasing efficiency in the warehouse operation.

  9. Simulator for heterogeneous dataflow architectures

    NASA Technical Reports Server (NTRS)

    Malekpour, Mahyar R.

    1993-01-01

    A new simulator is developed to simulate the execution of an algorithm graph in accordance with the Algorithm to Architecture Mapping Model (ATAMM) rules. ATAMM is a Petri net model which describes the periodic execution of large-grained, data-independent dataflow graphs and which provides predictable, steady-state, time-optimized performance. This simulator extends the ATAMM simulation capability from a heterogeneous set of resources, or functional units, to a more general heterogeneous architecture. Simulation test cases show that the simulator accurately executes the ATAMM rules for both a heterogeneous architecture and a homogeneous architecture, which is the special case of only one processor type. The simulator forms one tool in an ATAMM Integrated Environment which contains other tools for graph entry, graph modification for performance optimization, and playback of simulations for analysis.

  10. Ion diffusion may introduce spurious current sources in current-source density (CSD) analysis.

    PubMed

    Halnes, Geir; Mäki-Marttunen, Tuomo; Pettersen, Klas H; Andreassen, Ole A; Einevoll, Gaute T

    2017-07-01

    Current-source density (CSD) analysis is a well-established method for analyzing recorded local field potentials (LFPs), that is, the low-frequency part of extracellular potentials. Standard CSD theory is based on the assumption that all extracellular currents are purely ohmic, and thus neglects the possible impact from ionic diffusion on recorded potentials. However, it has previously been shown that in physiological conditions with large ion-concentration gradients, diffusive currents can evoke slow shifts in extracellular potentials. Using computer simulations, we here show that diffusion-evoked potential shifts can introduce errors in standard CSD analysis, and can lead to prediction of spurious current sources. Further, we here show that the diffusion-evoked prediction errors can be removed by using an improved CSD estimator which accounts for concentration-dependent effects. NEW & NOTEWORTHY Standard CSD analysis does not account for ionic diffusion. Using biophysically realistic computer simulations, we show that unaccounted-for diffusive currents can lead to the prediction of spurious current sources. This finding may be of strong interest for in vivo electrophysiologists doing extracellular recordings in general, and CSD analysis in particular. Copyright © 2017 the American Physiological Society.
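
    The "standard" estimator referred to above approximates the CSD as the negative second spatial derivative of the potential, scaled by an assumed constant, purely ohmic conductivity. A minimal sketch on a toy laminar profile is shown below; the improved, diffusion-aware estimator from the paper is not reproduced here, and the contact spacing and conductivity are illustrative values:

```python
import numpy as np

def standard_csd(phi, h, sigma):
    """Standard (double spatial derivative) CSD estimate from potentials phi
    recorded at equally spaced depths (spacing h, conductivity sigma).
    Assumes purely ohmic extracellular currents: CSD = -sigma * d2(phi)/dz2."""
    return -sigma * (phi[2:] - 2.0 * phi[1:-1] + phi[:-2]) / h**2

z = np.linspace(0.0, 1e-3, 21)   # 21 contacts, 50 um spacing (meters)
h = z[1] - z[0]
sigma = 0.3                       # S/m, a typical value for cortical tissue
phi = z**2                        # toy potential profile with constant curvature
csd = standard_csd(phi, h, sigma)
print(csd[0])                     # -2 * sigma everywhere for a quadratic profile
```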

  11. Analysis Of Direct Numerical Simulation Results Of Adverse Pressure Gradient Boundary Layer Through Anisotropy Invariant Mapping And Comparison With The Rans Simulations

    NASA Astrophysics Data System (ADS)

    Gungor, Ayse Gul; Nural, Ozan Ekin; Ertunc, Ozgur

    2017-11-01

    The purpose of this study is to analyze the direct numerical simulation data of a turbulent boundary layer subjected to a strong adverse pressure gradient through anisotropy invariant mapping. A RANS simulation using the ``Elliptic Blending Model'' of Manceau and Hanjalic (2002) is also conducted for the same flow case with the commercial software Star-CCM+, and its results are compared with the DNS data. The RANS simulation captures the general trends in the velocity field, but significant deviations are found when skin friction coefficients are compared. The anisotropy invariant map of Lumley and Newman (1977) and the barycentric map of Banerjee et al. (2007) are used for the analysis. Invariant mapping of the DNS data shows that at locations away from the wall the flow is close to the one-component turbulence state. In the vicinity of the wall, turbulence is at the two-component limit, which is one border of the barycentric map, and as the flow evolves along the streamwise direction it approaches the two-component turbulence state; at locations away from the wall, turbulence likewise approaches the two-component limit. Furthermore, analysis of the invariants of the RANS simulation shows dissimilar results: in the RANS simulation the invariants do not approach any of the limit states, unlike the DNS.
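
    A sketch of the barycentric mapping of Banerjee et al. (2007) used in this analysis: the eigenvalues of the Reynolds-stress anisotropy tensor are converted into weights of the one-component, two-component, and isotropic limit states. The input tensors below are idealized textbook examples, not data from the study:

```python
import numpy as np

def barycentric_weights(reynolds_stress):
    """Barycentric-map coordinates (Banerjee et al. 2007) of a Reynolds-stress
    tensor. Returns (C1, C2, C3): the weights of the one-component,
    two-component, and isotropic limit states; they sum to 1."""
    R = np.asarray(reynolds_stress, dtype=float)
    k = 0.5 * np.trace(R)                        # turbulent kinetic energy
    b = R / (2.0 * k) - np.eye(3) / 3.0          # trace-free anisotropy tensor
    lam = np.sort(np.linalg.eigvalsh(b))[::-1]   # lambda1 >= lambda2 >= lambda3
    c1 = lam[0] - lam[1]
    c2 = 2.0 * (lam[1] - lam[2])
    c3 = 3.0 * lam[2] + 1.0
    return c1, c2, c3

# One-component turbulence: all energy in a single normal stress
print(barycentric_weights(np.diag([2.0, 0.0, 0.0])))
# Isotropic turbulence: equal normal stresses
print(barycentric_weights(np.eye(3)))
```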

  12. Training Knowledge Bots for Physics-Based Simulations Using Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Samareh, Jamshid A.; Wong, Jay Ming

    2014-01-01

    Millions of complex physics-based simulations are required for the design of an aerospace vehicle. These simulations are usually performed by highly trained and skilled analysts, who execute, monitor, and steer each simulation. Analysts rely heavily on broad experience that may have taken 20-30 years to accumulate. In addition, the simulation software is complex in nature, requiring significant computational resources. Simulations of systems of systems become even more complex, and their behavior is beyond human capacity to learn effectively. IBM has developed machines that can learn and compete successfully with a chess grandmaster and with the most successful Jeopardy! contestants. These machines are capable of learning some complex problems much faster than humans can. In this paper, we propose using artificial neural networks to train knowledge bots to identify the idiosyncrasies of simulation software and recognize patterns that can lead to successful simulations. We examine the use of knowledge bots for applications in computational fluid dynamics (CFD), trajectory analysis, commercial finite-element analysis software, and slosh propellant dynamics. We show that machine learning algorithms can learn the idiosyncrasies of computational simulations and identify regions of instability without any additional information about their mathematical form or applied discretization approaches.

  13. Percolation analyses of observed and simulated galaxy clustering

    NASA Astrophysics Data System (ADS)

    Bhavsar, S. P.; Barrow, J. D.

    1983-11-01

    A percolation cluster analysis is performed on equivalent regions of the CfA redshift survey of galaxies and the 4000-body simulations of gravitational clustering made by Aarseth, Gott and Turner (1979). The observed and simulated percolation properties are compared and, unlike correlation and multiplicity function analyses, favour high-density (Omega = 1) models with n = -1 initial data. The present results show that the three-dimensional data are consistent with the degree of filamentary structure present in isothermal models of galaxy formation at the level of percolation analysis. It is also found that the percolation structure of the CfA data is a function of depth. Percolation structure does not appear to be a sensitive probe of intrinsic filamentary structure.
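
    Percolation (friends-of-friends) cluster analysis of the kind applied above joins any two points closer than a linking length and tracks how the largest cluster grows as that length increases. A minimal union-find sketch on random points follows; the point set is synthetic, not the CfA or Aarseth-Gott-Turner data:

```python
import numpy as np

def percolation_clusters(points, linking_length):
    """Friends-of-friends percolation: any two points closer than the linking
    length are merged into one cluster (union-find). Returns a root label
    per point; points sharing a label belong to the same cluster."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    for i in range(n):                      # O(n^2) pair scan; fine for a sketch
        for j in range(i + 1, n):
            if np.linalg.norm(points[i] - points[j]) < linking_length:
                ri, rj = find(i), find(j)
                if ri != rj:
                    parent[ri] = rj
    return [find(i) for i in range(n)]

rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 1.0, size=(200, 3))
sizes = {}
for b in (0.05, 0.30):
    labels = percolation_clusters(pts, b)
    sizes[b] = int(np.bincount(labels).max())
print(sizes)  # the largest cluster grows as the linking length increases
```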

  14. Numerical Simulation of Dry Granular Flow Impacting a Rigid Wall Using the Discrete Element Method

    PubMed Central

    Wu, Fengyuan; Fan, Yunyun; Liang, Li; Wang, Chao

    2016-01-01

    This paper presents a clump model based on the Discrete Element Method. The clump model is closer to real particles than a spherical particle. Numerical simulations of several tests of dry granular flow in an inclined chute impacting a rigid wall have been performed. Five clump models with different sphericity have been used in the simulations. By comparing the simulated normal force on the rigid wall with the experimental results, a clump model with better sphericity was selected for the subsequent numerical simulation analysis and discussion. The calculated normal forces showed good agreement with the experimental results, verifying the effectiveness of the clump model. The total normal force and bending moment on the rigid wall and the motion of the granular flow were then further analyzed. Finally, a comparative analysis of numerical simulations using clump models with different grain compositions was carried out. By observing the normal force on the rigid wall and the distribution of particle size at the front of the rigid wall in the final state, the effect of grain composition on the force on the rigid wall is revealed: with increasing particle size, the peak force at the retaining wall also increases. The results can provide a basis for research on related disasters and the design of protective structures. PMID:27513661

  15. The Researches on Damage Detection Method for Truss Structures

    NASA Astrophysics Data System (ADS)

    Wang, Meng Hong; Cao, Xiao Nan

    2018-06-01

    This paper presents an effective method to detect damage in truss structures. Numerical simulation and experimental analysis were carried out on a damaged truss structure under instantaneous excitation. The ideal excitation point and appropriate hammering method were determined to extract time domain signals under two working conditions. The frequency response function and principal component analysis were used for data processing, and the angle between the frequency response function vectors was selected as a damage index to ascertain the location of a damaged bar in the truss structure. In the numerical simulation, the time domain signal of all nodes was extracted to determine the location of the damaged bar. In the experimental analysis, the time domain signal of a portion of the nodes was extracted on the basis of an optimal sensor placement method based on the node strain energy coefficient. The results of the numerical simulation and experimental analysis showed that the damage detection method based on the frequency response function and principal component analysis could locate the damaged bar accurately.
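
    One common way to realize the "angle between frequency response function vectors" damage index described above is through the normalized inner product of the two FRF vectors. The single-degree-of-freedom FRFs below are toy stand-ins with an assumed stiffness loss, not the truss data from the paper:

```python
import numpy as np

def frf_angle_index(h_ref, h_test):
    """Damage index: the angle between two frequency-response-function
    vectors (e.g., intact vs. possibly damaged state). Zero means an
    identical FRF shape; larger angles suggest a change consistent
    with damage."""
    h_ref = np.asarray(h_ref, dtype=float).ravel()
    h_test = np.asarray(h_test, dtype=float).ravel()
    cos = abs(h_ref @ h_test) / (np.linalg.norm(h_ref) * np.linalg.norm(h_test))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

f = np.linspace(1.0, 100.0, 400)
intact = 1.0 / np.abs(50.0**2 - f**2 + 1j * 2.0 * f)    # toy 1-DOF FRF magnitude
damaged = 1.0 / np.abs(45.0**2 - f**2 + 1j * 2.0 * f)   # stiffness loss shifts the peak
print(frf_angle_index(intact, intact), frf_angle_index(intact, damaged))
```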

  16. Analysis of biomolecular solvation sites by 3D-RISM theory.

    PubMed

    Sindhikara, Daniel J; Hirata, Fumio

    2013-06-06

    We derive, implement, and apply equilibrium solvation site analysis for biomolecules. Our method utilizes 3D-RISM calculations to quickly obtain equilibrium solvent distributions without the need for simulation or the limitations of solvent sampling. Our analysis of these distributions extracts highest-likelihood solvent poses as well as localized entropies, enthalpies, and solvation free energies. We demonstrate our method on a structure of HIV-1 protease for which excellent structural and thermodynamic data are available for comparison. Our results, obtained within minutes, show systematic agreement with available experimental data. Further, our results are in good agreement with established simulation-based solvent analysis methods. This method can be used not only for visual analysis of active-site solvation but also for virtual screening methods and experimental refinement.

  17. Study on Collision of Ship Side Structure by Simplified Plastic Analysis Method

    NASA Astrophysics Data System (ADS)

    Sun, C. J.; Zhou, J. H.; Wu, W.

    2017-10-01

    During its lifetime, a ship may encounter collision or grounding and sustain permanent damage after these types of accidents. Crashworthiness analysis has relied on two main kinds of methods: simplified plastic analysis and numerical simulation. A simplified plastic analysis method is presented in this paper, and numerical simulations using the non-linear finite-element software LS-DYNA are conducted to validate it. The results show that the simplified plastic analysis is in good agreement with the finite element simulation, which reveals that the method can quickly and accurately estimate the crashworthiness of the side structure during the collision process and can serve as a reliable risk assessment method.

  18. 7 CFR 400.705 - Contents required for a new submission or changes to a previously approved submission.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... information from market research, producers or producer groups, agents, lending institutions, and other... reliability of the data; (5) An analysis of the results of simulations or modeling showing the performance of proposed rates and commodity prices, as applicable, based on one or more of the following (Such simulations...

  19. 7 CFR 400.705 - Contents required for a new submission or changes to a previously approved submission.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... information from market research, producers or producer groups, agents, lending institutions, and other... reliability of the data; (5) An analysis of the results of simulations or modeling showing the performance of proposed rates and commodity prices, as applicable, based on one or more of the following (Such simulations...

  20. 7 CFR 400.705 - Contents required for a new submission or changes to a previously approved submission.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... information from market research, producers or producer groups, agents, lending institutions, and other... reliability of the data; (5) An analysis of the results of simulations or modeling showing the performance of proposed rates and commodity prices, as applicable, based on one or more of the following (Such simulations...

  1. 7 CFR 400.705 - Contents required for a new submission or changes to a previously approved submission.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... information from market research, producers or producer groups, agents, lending institutions, and other... reliability of the data; (5) An analysis of the results of simulations or modeling showing the performance of proposed rates and commodity prices, as applicable, based on one or more of the following (Such simulations...

  2. Understand the Air-Sea Coupling Processes in High Wind Conditions Using a Synthesized Data Analysis/modeling Approach

    DTIC Science & Technology

    2007-09-30

    secondary gap outflow that appeared in COAMPS simulations (Cherrett 2006). Figure 3d shows similar SST spatial variations as in Fig. 3c with slight... Cherrett, R. C. 2006: Observed and Simulated temporal and spatial variations of the gap outflow region, M.S. Thesis, Meteorology Department, Naval

  3. The Local Minima Problem in Hierarchical Classes Analysis: An Evaluation of a Simulated Annealing Algorithm and Various Multistart Procedures

    ERIC Educational Resources Information Center

    Ceulemans, Eva; Van Mechelen, Iven; Leenen, Iwin

    2007-01-01

    Hierarchical classes models are quasi-order retaining Boolean decomposition models for N-way N-mode binary data. To fit these models to data, rationally started alternating least squares (or, equivalently, alternating least absolute deviations) algorithms have been proposed. Extensive simulation studies showed that these algorithms succeed quite…

  4. Optimal spinneret layout in Von Koch curves of fractal theory based needleless electrospinning process

    NASA Astrophysics Data System (ADS)

    Yang, Wenxiu; Liu, Yanbo; Zhang, Ligai; Cao, Hong; Wang, Yang; Yao, Jinbo

    2016-06-01

    Needleless electrospinning technology is considered a better avenue to produce nanofibrous materials at large scale, and electric field intensity and its distribution play an important role in controlling nanofiber diameter and the quality of the nanofibrous web during electrospinning. In the current study, a novel needleless electrospinning method was proposed based on Von Koch curves from fractal theory. Simulation and analysis of the electric field intensity and distribution in the new electrospinning process were performed with the finite element analysis software Comsol Multiphysics 4.4, based on linear and nonlinear Von Koch fractal curves (hereafter called fractal models). The simulation and analysis indicated that the second-level fractal structure is the optimal linear electrospinning spinneret in terms of field intensity and uniformity. Further simulation and analysis showed that the circular type of fractal spinneret has better field intensity and distribution than the spiral type in the nonlinear fractal electrospinning technology. An electrospinning apparatus with the optimal Von Koch fractal spinneret was set up to verify the theoretical results from the Comsol simulation, achieving a more uniform electric field distribution and lower energy cost compared to current needle and needleless electrospinning technologies.
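
    For reference, the Von Koch construction underlying these spinneret layouts replaces each segment by four segments of one-third length at every level. A minimal sketch generating vertex coordinates of a second-level curve (the level the paper found optimal for the linear spinneret) is shown below; the electric-field simulation itself requires the Comsol model and is not reproduced:

```python
import math

def koch_curve(p0, p1, level):
    """Vertex coordinates of a Von Koch curve between p0 and p1 at the given
    fractal level. Level 0 is the straight segment; each level replaces
    every segment by four segments of one-third length."""
    if level == 0:
        return [p0, p1]
    (x0, y0), (x1, y1) = p0, p1
    dx, dy = (x1 - x0) / 3.0, (y1 - y0) / 3.0
    a = (x0 + dx, y0 + dy)              # first third point
    b = (x0 + 2 * dx, y0 + 2 * dy)      # second third point
    # apex of the equilateral bump on the middle segment
    ang = math.atan2(dy, dx) - math.pi / 3.0
    seg = math.hypot(dx, dy)
    c = (a[0] + seg * math.cos(ang), a[1] + seg * math.sin(ang))
    pts = []
    for q0, q1 in ((p0, a), (a, c), (c, b), (b, p1)):
        pts.extend(koch_curve(q0, q1, level - 1)[:-1])
    pts.append(p1)
    return pts

pts = koch_curve((0.0, 0.0), (1.0, 0.0), 2)
print(len(pts))  # 4^2 = 16 segments, hence 17 vertices
```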

  5. Raman Spectroscopic Detection for Simulants of Chemical Warfare Agents Using a Spatial Heterodyne Spectrometer.

    PubMed

    Hu, Guangxiao; Xiong, Wei; Luo, Haiyan; Shi, Hailiang; Li, Zhiwei; Shen, Jing; Fang, Xuejing; Xu, Biao; Zhang, Jicheng

    2018-01-01

    Raman spectroscopic detection is one of the suitable methods for the detection of chemical warfare agents (CWAs) and their simulants. Since the 1980s, many researchers have been dedicated to studying the chemical characteristics of CWAs and simulants and to improving instruments for their analysis and detection. The spatial heterodyne Raman spectrometer (SHRS), which appeared in 2011, is a newly developing instrument for Raman detection. It is already well known that the SHRS offers high spectral resolution, a large field of view, and high throughput, so it is inherently suitable for the analysis and detection of these toxic chemicals and simulants. In situ and standoff detection of some typical CWA simulants, such as dimethyl methylphosphonate (DMMP), diisopropyl methylphosphonate (DIMP), triethyl phosphate (TEP), diethyl malonate (DEM), methyl salicylate (MES), 2-chloroethyl ethyl sulfide (CEES), and malathion, was attempted. The results show that the SHRS is indeed capable of in situ analysis and standoff detection of CWA simulants. Even with the laser power set as low as 26 mW, the SHRS still achieved a signal-to-noise ratio higher than 5 in in situ detection, and standoff Raman detection of CWA simulants was realized at a distance of 11 m. The potential feasibility of standoff detection of CWA simulants with the SHRS has thus been demonstrated.

  6. Unsteady, Cooled Turbine Simulation Using a PC-Linux Analysis System

    NASA Technical Reports Server (NTRS)

    List, Michael G.; Turner, Mark G.; Chen, Jen-Ping; Remotigue, Michael G.; Veres, Joseph P.

    2004-01-01

    The first stage of the high-pressure turbine (HPT) of the GE90 engine was simulated with a three-dimensional unsteady Navier-Stokes solver, MSU Turbo, which uses source terms to simulate the cooling flows. In addition to the solver, its pre-processor, GUMBO, and a post-processing and visualization tool, Turbomachinery Visual3 (TV3), were run in a Linux environment to carry out the simulation and analysis. The solver was run both with and without cooling. The introduction of cooling flow on the blade surfaces, case, and hub, and its effects both on rotor-vane interaction and on the blades themselves, were the principal motivations for this study. The studies of the cooling flow show the large amount of unsteadiness in the turbine and the corresponding hot streak migration phenomenon. This research on the GE90 turbomachinery has also led to a procedure for running unsteady, cooled turbine analyses on commodity PCs running the Linux operating system.

  7. Generalized Appended Product Indicator Procedure for Nonlinear Structural Equation Analysis.

    ERIC Educational Resources Information Center

    Wall, Melanie M.; Amemiya, Yasuo

    2001-01-01

    Considers the estimation of polynomial structural models and shows a limitation of an existing method. Introduces a new procedure, the generalized appended product indicator procedure, for nonlinear structural equation analysis. Addresses statistical issues associated with the procedure through simulation. (SLD)

  8. Modeling and Simulation at NASA

    NASA Technical Reports Server (NTRS)

    Steele, Martin J.

    2009-01-01

    This slide presentation covers two topics. The first reviews the use of modeling and simulation (M&S), particularly as it relates to the Constellation program and discrete event simulation (DES). DES is defined as process and system analysis, through time-based and resource-constrained probabilistic simulation models, that provides insight into operational system performance. The DES shows that the cycle for a launch, from manufacturing and assembly to launch and recovery, is about 45 days and that approximately 4 launches per year are practicable. The second topic reviews a NASA standard for modeling and simulation. The Columbia Accident Investigation Board made some recommendations related to models and simulations. Ideas inherent in the new standard include the documentation of M&S activities, an assessment of credibility, and reporting to decision makers, which should cover the analysis of the results, a statement of the uncertainty in the results, and the credibility of the results. There is also discussion of verification and validation (V&V) of models and of the different types of models and simulations.

  9. The NEST Dry-Run Mode: Efficient Dynamic Analysis of Neuronal Network Simulation Code.

    PubMed

    Kunkel, Susanne; Schenck, Wolfram

    2017-01-01

    NEST is a simulator for spiking neuronal networks that commits to a general purpose approach: It allows for high flexibility in the design of network models, and its applications range from small-scale simulations on laptops to brain-scale simulations on supercomputers. Hence, developers need to test their code for various use cases and ensure that changes to code do not impair scalability. However, running a full set of benchmarks on a supercomputer takes up precious compute-time resources and can entail long queuing times. Here, we present the NEST dry-run mode, which enables comprehensive dynamic code analysis without requiring access to high-performance computing facilities. A dry-run simulation is carried out by a single process, which performs all simulation steps except communication as if it was part of a parallel environment with many processes. We show that measurements of memory usage and runtime of neuronal network simulations closely match the corresponding dry-run data. Furthermore, we demonstrate the successful application of the dry-run mode in the areas of profiling and performance modeling.

  10. The NEST Dry-Run Mode: Efficient Dynamic Analysis of Neuronal Network Simulation Code

    PubMed Central

    Kunkel, Susanne; Schenck, Wolfram

    2017-01-01

    NEST is a simulator for spiking neuronal networks that commits to a general-purpose approach: it allows for high flexibility in the design of network models, and its applications range from small-scale simulations on laptops to brain-scale simulations on supercomputers. Hence, developers need to test their code for various use cases and ensure that changes to code do not impair scalability. However, running a full set of benchmarks on a supercomputer takes up precious compute-time resources and can entail long queuing times. Here, we present the NEST dry-run mode, which enables comprehensive dynamic code analysis without requiring access to high-performance computing facilities. A dry-run simulation is carried out by a single process, which performs all simulation steps except communication as if it were part of a parallel environment with many processes. We show that measurements of memory usage and runtime of neuronal network simulations closely match the corresponding dry-run data. Furthermore, we demonstrate the successful application of the dry-run mode in the areas of profiling and performance modeling. PMID:28701946

  11. Spatial Dependence and Heterogeneity in Bayesian Factor Analysis: A Cross-National Investigation of Schwartz Values

    ERIC Educational Resources Information Center

    Stakhovych, Stanislav; Bijmolt, Tammo H. A.; Wedel, Michel

    2012-01-01

    In this article, we present a Bayesian spatial factor analysis model. We extend previous work on confirmatory factor analysis by including geographically distributed latent variables and accounting for heterogeneity and spatial autocorrelation. The simulation study shows excellent recovery of the model parameters and demonstrates the consequences…

  12. Fractional Transport in Strongly Turbulent Plasmas.

    PubMed

    Isliker, Heinz; Vlahos, Loukas; Constantinescu, Dana

    2017-07-28

    We analyze statistically the energization of particles in a large-scale environment of strong turbulence that is fragmented into a large number of distributed current filaments. The turbulent environment is generated through strongly perturbed, 3D, resistive magnetohydrodynamics simulations, and it emerges naturally from the nonlinear evolution, without a specific reconnection geometry being set up. Based on test-particle simulations, we estimate the transport coefficients in energy space for use in the classical Fokker-Planck (FP) equation, and we show that the latter fails to reproduce the simulation results. The reason is that transport in energy space is highly anomalous (strange): the particles perform Levy flights, and the energy distributions show extended power-law tails. We therefore motivate the use of, and derive the specific form of, a fractional transport equation (FTE); we determine its parameters and the order of the fractional derivatives from the simulation data, and we show that the FTE reproduces the high-energy part of the simulation data very well. The procedure for determining the FTE parameters also makes clear that it is the analysis of the simulation data that allows us to decide whether a classical FP equation or an FTE is appropriate.
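
    The qualitative difference the authors exploit — Levy flights with power-law step distributions versus Gaussian diffusion — is easy to reproduce with a toy random walk. The stable index alpha = 1.5 and the step distributions here are illustrative choices, not the parameters fitted in the paper:

```python
import random

def heavy_tail_walk(n_steps, alpha=1.5, seed=0):
    """Random walk with symmetric Pareto(alpha) steps: a crude Levy flight."""
    rng = random.Random(seed)
    return [rng.paretovariate(alpha) * rng.choice((-1.0, 1.0))
            for _ in range(n_steps)]

def gaussian_walk(n_steps, seed=0):
    """Reference walk with standard normal steps (classical diffusion)."""
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) for _ in range(n_steps)]
```

    For 10^5 steps the largest Pareto step dwarfs the largest Gaussian one by orders of magnitude — the power-law tail that makes a classical Fokker-Planck description fail and motivates fractional derivatives in energy space.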

  13. Fractional Transport in Strongly Turbulent Plasmas

    NASA Astrophysics Data System (ADS)

    Isliker, Heinz; Vlahos, Loukas; Constantinescu, Dana

    2017-07-01

    We analyze statistically the energization of particles in a large-scale environment of strong turbulence that is fragmented into a large number of distributed current filaments. The turbulent environment is generated through strongly perturbed, 3D, resistive magnetohydrodynamics simulations, and it emerges naturally from the nonlinear evolution, without a specific reconnection geometry being set up. Based on test-particle simulations, we estimate the transport coefficients in energy space for use in the classical Fokker-Planck (FP) equation, and we show that the latter fails to reproduce the simulation results. The reason is that transport in energy space is highly anomalous (strange): the particles perform Levy flights, and the energy distributions show extended power-law tails. We therefore motivate the use of, and derive the specific form of, a fractional transport equation (FTE); we determine its parameters and the order of the fractional derivatives from the simulation data, and we show that the FTE reproduces the high-energy part of the simulation data very well. The procedure for determining the FTE parameters also makes clear that it is the analysis of the simulation data that allows us to decide whether a classical FP equation or an FTE is appropriate.

  14. Comparative study on gene set and pathway topology-based enrichment methods.

    PubMed

    Bayerlová, Michaela; Jung, Klaus; Kramer, Frank; Klemm, Florian; Bleckmann, Annalen; Beißbarth, Tim

    2015-10-22

    Enrichment analysis is a popular approach to identify pathways or sets of genes which are significantly enriched in the context of differentially expressed genes. The traditional gene set enrichment approach considers a pathway as a simple gene list, disregarding any knowledge of gene or protein interactions. In contrast, the new group of so-called pathway topology-based methods integrates the topological structure of a pathway into the analysis. We comparatively investigated gene set and pathway topology-based enrichment approaches, considering three gene set and four topological methods. These methods were compared in two extensive simulation studies and on a benchmark of 36 real datasets, providing the same pathway input data for all methods. In the benchmark data analysis, both types of methods showed a comparable ability to detect enriched pathways. The first simulation study was conducted with KEGG pathways, which showed considerable gene overlaps between each other. In this study with original KEGG pathways, none of the topology-based methods outperformed the gene set approach. Therefore, a second simulation study was performed on non-overlapping pathways created from unique gene IDs. Here, methods accounting for pathway topology reached higher accuracy than the gene set methods; however, their sensitivity was lower. We conducted one of the first comprehensive comparative works evaluating gene set against pathway topology-based enrichment methods. The topological methods showed better performance in the simulation scenarios with non-overlapping pathways; however, they were not conclusively better in the other scenarios. This suggests that a simple gene set approach might be sufficient to detect an enriched pathway under realistic circumstances. Nevertheless, more extensive studies and further benchmark data are needed to systematically evaluate these methods and to assess what gain and cost pathway topology information introduces into enrichment analysis. Both types of methods for enrichment analysis require further improvements in order to deal with the problem of pathway overlaps.
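
    The "simple gene list" flavor of enrichment that the gene set methods embody is, at its core, an over-representation test. A minimal sketch using the hypergeometric upper tail — one common formulation, not necessarily the exact statistic of the three gene set methods compared in the paper:

```python
from math import comb

def ora_pvalue(n_universe, n_pathway, n_de, n_overlap):
    """P(X >= n_overlap): probability of drawing at least n_overlap pathway
    genes among n_de differentially expressed genes, sampled without
    replacement from a universe of n_universe genes (hypergeometric null)."""
    total = comb(n_universe, n_de)
    k_max = min(n_pathway, n_de)
    return sum(comb(n_pathway, k) * comb(n_universe - n_pathway, n_de - k)
               for k in range(n_overlap, k_max + 1)) / total
```

    Topology-based methods go beyond this by weighting each gene according to its position in the pathway graph instead of treating the pathway as an unordered list.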

  15. Theoretical analyses of localized surface plasmon resonance spectrum with nanoparticles imprinted polymers

    NASA Astrophysics Data System (ADS)

    Li, Hong; Peng, Wei; Wang, Yanjie; Hu, Lingling; Liang, Yuzhang; Zhang, Xinpu; Yao, Wenjuan; Yu, Qi; Zhou, Xinlei

    2011-12-01

    Optical sensors based on nanoparticle-induced localized surface plasmon resonance (LSPR) are highly sensitive for real-time chemical and biological sensing and have attracted intensive attention in many fields. In this paper, we establish a simulation model based on a nanoparticle-imprinted polymer to increase the sensitivity of the LSPR sensor by detecting changes in the surface plasmon resonance signals. Theoretical analysis and numerical simulation of the effects of parameters on the absorption peak and light-field distribution are highlighted. Two-dimensional simulated color maps show that LSPR leads to concentration of the light energy around the gold nanoparticles; the transverse magnetic wave and total reflection become the important factors enhancing the light field in our simulated structure. Fast Fourier transform analysis shows that the absorption peak of the surface plasmon resonance signal from gold nanoparticles is sharper, and its wavelength longer, than that from silver nanoparticles; a double-chain structure makes the amplitude of the signals smaller and the absorption wavelength longer; and the absorption peak of the enhancement from nanopore arrays has a shorter wavelength and weaker amplitude than that from nanoparticles. These LSPR simulation results can be used as an enhanced transduction mechanism to increase sensitivity in the recognition and sensing of target analytes in accordance with different requirements.

  16. Viscous and thermal modelling of thermoplastic composites forming process

    NASA Astrophysics Data System (ADS)

    Guzman, Eduardo; Liang, Biao; Hamila, Nahiene; Boisse, Philippe

    2016-10-01

    Thermoforming thermoplastic prepregs is a fast manufacturing process, well suited to automotive composite part production. The simulation of thermoplastic prepreg forming is achieved by alternating thermal and mechanical analyses. The thermal properties are obtained from a mesoscopic analysis and a homogenization procedure. The forming simulation is based on a viscous-hyperelastic approach. The thermal simulations define the temperature-dependent coefficients of the mechanical model. The forming simulations modify the boundary conditions and the internal geometry of the thermal analyses. A comparison of the simulation with an experimental thermoforming of a part representative of automotive applications shows the efficiency of the approach.

  17. Multi-disciplinary coupling for integrated design of propulsion systems

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.; Singhal, S. N.

    1993-01-01

    Effective computational simulation procedures are described for modeling the inherent multi-disciplinary interactions for determining the true response of propulsion systems. Results are presented for propulsion system responses including multi-discipline coupling effects via (1) coupled multi-discipline tailoring, (2) an integrated system of multidisciplinary simulators, (3) coupled material-behavior/fabrication-process tailoring, (4) sensitivities using a probabilistic simulator, and (5) coupled materials/structures/fracture/probabilistic behavior simulator. The results show that the best designs can be determined if the analysis/tailoring methods account for the multi-disciplinary coupling effects. The coupling across disciplines can be used to develop an integrated interactive multi-discipline numerical propulsion system simulator.

  18. Wake Encounter Analysis for a Closely Spaced Parallel Runway Paired Approach Simulation

    NASA Technical Reports Server (NTRS)

    Mckissick, Burnell T.; Rico-Cusi, Fernando J.; Murdoch, Jennifer; Oseguera-Lohr, Rosa M.; Stough, Harry P., III; O'Connor, Cornelius J.; Syed, Hazari I.

    2009-01-01

    A Monte Carlo simulation of simultaneous approaches performed by two transport category aircraft from the final approach fix to a pair of closely spaced parallel runways was conducted to explore the aft boundary of the safe zone in which separation assurance and wake avoidance are provided. The simulation included variations in runway centerline separation, initial longitudinal spacing of the aircraft, crosswind speed, and aircraft speed during the approach. The data from the simulation showed that the majority of the wake encounters occurred near or over the runway and the aft boundaries of the safe zones were identified for all simulation conditions.
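
    A drastically simplified Monte Carlo of the same flavor can be sketched as follows. All distributions and magnitudes below are invented for illustration, not the transport-category aircraft and wake models used in the NASA study:

```python
import random

def wake_encounter_rate(centerline_sep_m, trials=20_000, seed=1):
    """Toy Monte Carlo: the leader's wake drifts laterally at the crosswind
    speed and can threaten the trailing aircraft only while it persists.
    The uniform distributions here are illustrative placeholders."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        crosswind = rng.uniform(0.0, 5.0)     # m/s toward the adjacent runway
        wake_life = rng.uniform(60.0, 120.0)  # s until the wake decays
        if crosswind * wake_life >= centerline_sep_m:
            hits += 1
    return hits / trials
```

    Sweeping the centerline separation (and, in the real study, initial longitudinal spacing and approach speeds) maps out where the encounter rate drops to zero — the aft boundary of the safe zone.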

  19. Automatic analysis of stereoscopic satellite image pairs for determination of cloud-top height and structure

    NASA Technical Reports Server (NTRS)

    Hasler, A. F.; Strong, J.; Woodward, R. H.; Pierce, H.

    1991-01-01

    Results are presented on an automatic stereo analysis of cloud-top heights from nearly simultaneous satellite image pairs from the GOES and NOAA satellites, using a massively parallel processor computer. Comparisons of computer-derived height fields and manually analyzed fields show that the automatic analysis technique shows promise for performing routine stereo analysis in a real-time environment, providing a useful forecasting tool by augmenting observational data sets of severe thunderstorms and hurricanes. Simulations using synthetic stereo data show that it is possible to automatically resolve small-scale features such as 4000-m-diam clouds to about 1500 m in the vertical.

  20. Advancing cloud lifecycle representation in numerical models using innovative analysis methods that bridge arm observations over a breadth of scales

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tselioudis, George

    2016-03-04

    From its location on the subtropics-midlatitude boundary, the Azores is influenced by both the subtropical high pressure and the midlatitude baroclinic storm regimes, and therefore experiences a wide range of cloud structures, from fair-weather scenes to stratocumulus sheets to deep convective systems. This project combined three types of data sets to study cloud variability in the Azores: a satellite analysis of cloud regimes, a reanalysis characterization of storminess, and a 19-month field campaign that occurred on Graciosa Island. Combined analysis of the three data sets provides a detailed picture of cloud variability and the respective dynamic influences, with emphasis on low clouds that constitute a major uncertainty source in climate model simulations. The satellite cloud regime analysis shows that the Azores cloud distribution is similar to the mean global distribution and can therefore be used to evaluate cloud simulation in global models. Regime analysis of low clouds shows that stratocumulus decks occur under the influence of the Azores high-pressure system, while shallow cumulus clouds are sustained by cold-air outbreaks, as revealed by their preference for post-frontal environments and northwesterly flows. An evaluation of CMIP5 climate model cloud regimes over the Azores shows that all models severely underpredict shallow cumulus clouds, while most models also underpredict the occurrence of stratocumulus cloud decks. It is demonstrated that carefully selected case studies can be related through regime analysis to climatological cloud distributions, and a methodology is suggested utilizing process-resolving model simulations of individual cases to better understand cloud-dynamics interactions and attempt to explain and correct climate model cloud deficiencies.

  1. Results of a search for deuterium at 25-50 GV/c using a magnetic spectrometer

    NASA Technical Reports Server (NTRS)

    Golden, R. L.; Stephens, S. A.; Webber, W. R.

    1985-01-01

    A method is presented for separately identifying isotopes using a Cerenkov detector and a magnet spectrometer. Simulations of the method are given for separating deuterium from protons. The simulations are compared with data gathered from the 1979 flight of the New Mexico State University balloon-borne magnet spectrometer. The simulation and the data show the same general characteristics, lending credence to the technique. The data show an apparent deuteron signal which is (11 ± 3)% of the total sample in the rigidity region 38.5 to 50 GV/c. Until further background analysis and subtraction are performed, this should be regarded as an upper limit to the deuteron/(deuteron+proton) ratio.

  2. [Parameter sensitivity of simulating net primary productivity of Larix olgensis forest based on BIOME-BGC model].

    PubMed

    He, Li-hong; Wang, Hai-yan; Lei, Xiang-dong

    2016-02-01

    A model based on vegetation ecophysiological processes contains many parameters, and reasonable parameter values will greatly improve its simulation ability. Sensitivity analysis, as an important method to screen out the sensitive parameters, can comprehensively analyze how model parameters affect the simulation results. In this paper, we conducted a parameter sensitivity analysis of the BIOME-BGC model with a case study of simulating the net primary productivity (NPP) of a Larix olgensis forest in Wangqing, Jilin Province. First, through a contrastive analysis of field measurement data and the simulation results, we tested the BIOME-BGC model's capability of simulating the NPP of the L. olgensis forest. Then, the Morris and EFAST sensitivity methods were used to screen the sensitive parameters that had a strong influence on NPP. On this basis, we also quantitatively estimated the sensitivity of the screened parameters, and calculated the global, first-order and second-order sensitivity indices. The results showed that the BIOME-BGC model could well simulate the NPP of the L. olgensis forest in the sample plot. The Morris sensitivity method provided a reliable parameter sensitivity analysis result under the condition of a relatively small sample size. The EFAST sensitivity method could quantitatively measure the impact of a single parameter on the simulation result, as well as the interaction between parameters in the BIOME-BGC model. The influential sensitive parameters for L. olgensis forest NPP were the new stem carbon to new leaf carbon allocation and the leaf carbon to nitrogen ratio; the effect of their interaction was significantly greater than the other parameters' interaction effects.
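
    The elementary-effects idea behind Morris screening fits in a few lines. This is a simplified radial design on the unit hypercube with a hypothetical stand-in for the model, not the BIOME-BGC setup itself:

```python
import random

def morris_mu_star(model, n_params, n_base=50, delta=0.1, seed=7):
    """Mean absolute elementary effect (mu*) per parameter: perturb one
    parameter at a time by `delta` from random base points drawn in
    [0, 1 - delta]^n, and average |f(x + delta*e_i) - f(x)| / delta."""
    rng = random.Random(seed)
    mu = [0.0] * n_params
    for _ in range(n_base):
        x = [rng.random() * (1.0 - delta) for _ in range(n_params)]
        y0 = model(x)
        for i in range(n_params):
            xp = list(x)
            xp[i] += delta
            mu[i] += abs(model(xp) - y0) / delta
    return [m / n_base for m in mu]
```

    For an additive model, mu* recovers the coefficients exactly; in the study, each model call would be a full NPP simulation, and parameters with small mu* can be fixed at defaults before the more expensive EFAST analysis.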

  3. Modeling mechanisms of vegetation change due to fire in a semi-arid ecosystem

    USGS Publications Warehouse

    White, J.D.; Gutzwiller, K.J.; Barrow, W.C.; Randall, L.J.; Swint, P.

    2008-01-01

    Vegetation growth and community composition in semi-arid environments are determined by water availability and carbon assimilation mechanisms specific to different plant types. Disturbance also impacts vegetation productivity and composition, depending on the area affected, intensity, and frequency. In this study, a new spatially explicit ecosystem model is presented for the purpose of simulating vegetation cover type changes associated with fire disturbance in the northern Chihuahuan Desert region. The model is called the Landscape and Fire Simulator (LAFS) and represents the physiological activity of six functional plant types, incorporating site climate, fire, and seed dispersal routines for individual grid cells. We applied this model to Big Bend National Park, Texas, by assessing the impact of wildfire on the trajectory of vegetation communities over time. The model was initialized and calibrated based on landcover maps derived from Landsat-5 Thematic Mapper data acquired in 1986 and 1999, coupled with plant biomass measurements collected in the field during 2000. Initial vegetation cover change analysis from satellite data showed shrub encroachment during this time period that was captured in the simulated results. A synthetic 50-year climate record was derived from historical meteorological data to assess system response based on initial landcover conditions. This simulation showed that shrublands increased to the detriment of the grass and yucca-ocotillo vegetation cover types, indicating an ecosystem-level trajectory toward shrub encroachment. Our analysis of simulated fires also showed that fires significantly reduced site biomass components, including leaf area, stem, and seed biomass, in this semi-arid ecosystem. In contrast to other landscape simulation models, this new model incorporates detailed physiological responses of functional plant types that will allow us to simulate the impact of increased atmospheric CO2 occurring with climate change coupled with fire disturbance. Simulations generated from this model are expected to be the subject of subsequent studies on landscape dynamics, with specific regard to prediction of wildlife distributions associated with fire management and climate change.

  4. Simulation analysis of photometric data for attitude estimation of unresolved space objects

    NASA Astrophysics Data System (ADS)

    Du, Xiaoping; Gou, Ruixin; Liu, Hao; Hu, Heng; Wang, Yang

    2017-10-01

    The acquisition of attitude information for unresolved space objects, such as micro-nano satellites and GEO objects observed by ground-based optical telescopes, is a challenge for space surveillance. In this paper, a method is proposed to estimate the space object (SO) attitude state through simulation analysis of photometric data in different attitude states. The object shape model was established and the parameters of the BRDF model were determined; the space object photometric model was then established. Furthermore, the photometric data of space objects in different states are analyzed by simulation, and the regular characteristics of the photometric curves are summarized. The simulation results show that the photometric characteristics are distinctive enough to be useful for attitude inversion. Thus, a new idea is provided for space object identification.
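
    The forward problem at the heart of this approach — brightness as a function of attitude — can be illustrated with a toy light curve for a spinning flat plate. The geometry (sun and observer co-located, spin axis perpendicular to the line of sight) and the simple Lambertian-like response are illustrative assumptions, not the paper's BRDF model:

```python
import math

def plate_lightcurve(spin_rate_hz, times, albedo=0.3):
    """Relative brightness of a flat plate spinning in front of a co-located
    sun/observer: cos(incidence) * cos(viewing) collapses to cos^2(phase),
    and the back side of the plate is dark."""
    out = []
    for t in times:
        phase = 2.0 * math.pi * spin_rate_hz * t
        c = math.cos(phase)  # cosine of both incidence and viewing angles
        out.append(albedo * c * c if c > 0.0 else 0.0)
    return out
```

    Faster spin compresses the bright-dark pattern in time, and different shapes imprint different harmonic content — exactly the attitude signatures that photometric inversion exploits.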

  5. Kinetic theory-based numerical modeling and analysis of bi-disperse segregated mixture fluidized bed

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Konan, N. A.; Huckaby, E. D.

    We discuss a series of continuum Euler-Euler simulations of an initially mixed bi-disperse fluidized bed which segregates under certain operating conditions. The simulations use the multi-phase kinetic theory-based description of the momentum and energy exchanges between the phases by Simonin’s Group [see e.g. Gourdel, Simonin and Brunier (1999). Proceedings of 6th International Conference on Circulating Fluidized Beds, Germany, pp. 205-210]. The discussion and analysis of the results focus on the fluid-particle momentum exchange (i.e. drag). Simulations using mono- and poly-disperse fluid-particle drag correlations are analyzed for the Geldart D-type size bi-disperse gas-solid experiments performed by Goldschmidt et al. [Powder Tech., pp. 135-159 (2003)]. The poly-disperse gas-particle drag correlations account for the local particle size distribution by using an effective mixture diameter when calculating the Reynolds number and then correcting the resulting force coefficient. Simulation results show very good predictions of the segregation index for bi-disperse beds with the mono-disperse drag correlations, in contrast to the poly-disperse drag correlations, for which the segregation rate is systematically under-predicted. The statistical analysis of the results shows a clear separation in the distributions of the gas-particle mean relaxation times of the small and large particles in simulations using the mono-disperse drag. In contrast, the poly-disperse drag simulations have a significant overlap and also a smaller difference in the mean particle relaxation times. This causes the small and large particles in the bed to respond to the gas similarly, without enough relative time lag. The results suggest that the difference in particle response times induces flow dynamics favorable to a force imbalance, which results in the segregation.

  6. Kinetic theory-based numerical modeling and analysis of bi-disperse segregated mixture fluidized bed

    DOE PAGES

    Konan, N. A.; Huckaby, E. D.

    2017-06-21

    We discuss a series of continuum Euler-Euler simulations of an initially mixed bi-disperse fluidized bed which segregates under certain operating conditions. The simulations use the multi-phase kinetic theory-based description of the momentum and energy exchanges between the phases by Simonin’s Group [see e.g. Gourdel, Simonin and Brunier (1999). Proceedings of 6th International Conference on Circulating Fluidized Beds, Germany, pp. 205-210]. The discussion and analysis of the results focus on the fluid-particle momentum exchange (i.e. drag). Simulations using mono- and poly-disperse fluid-particle drag correlations are analyzed for the Geldart D-type size bi-disperse gas-solid experiments performed by Goldschmidt et al. [Powder Tech., pp. 135-159 (2003)]. The poly-disperse gas-particle drag correlations account for the local particle size distribution by using an effective mixture diameter when calculating the Reynolds number and then correcting the resulting force coefficient. Simulation results show very good predictions of the segregation index for bi-disperse beds with the mono-disperse drag correlations, in contrast to the poly-disperse drag correlations, for which the segregation rate is systematically under-predicted. The statistical analysis of the results shows a clear separation in the distributions of the gas-particle mean relaxation times of the small and large particles in simulations using the mono-disperse drag. In contrast, the poly-disperse drag simulations have a significant overlap and also a smaller difference in the mean particle relaxation times. This causes the small and large particles in the bed to respond to the gas similarly, without enough relative time lag. The results suggest that the difference in particle response times induces flow dynamics favorable to a force imbalance, which results in the segregation.

  7. Computational Analysis on Performance of Thermal Energy Storage (TES) Diffuser

    NASA Astrophysics Data System (ADS)

    Adib, M. A. H. M.; Adnan, F.; Ismail, A. R.; Kardigama, K.; Salaam, H. A.; Ahmad, Z.; Johari, N. H.; Anuar, Z.; Azmi, N. S. N.

    2012-09-01

    Application of a thermal energy storage (TES) system reduces cost and energy consumption, and the performance of the overall operation is affected by the diffuser design. In this study, computational analysis is used to determine the thermocline thickness. Three-dimensional simulations with different tank height-to-diameter ratios (HD), diffuser openings, and numbers of diffuser holes are investigated. Simulations of medium-HD tanks with a double-ring octagonal diffuser show good thermocline behavior and a clear distinction between warm and cold water. The results show that the best thermocline performance at 50% charging time occurs in a medium tank with a height-to-diameter ratio of 4.0 and a double-ring octagonal diffuser with 48 holes and a 9 mm (~60%) opening, compared with 6 mm (~40%) and 12 mm (~80%) openings. In conclusion, computational analysis methods are very useful in studying the performance of thermal energy storage (TES) systems.

  8. Analysis of simulated high burnup nuclear fuel by laser induced breakdown spectroscopy

    NASA Astrophysics Data System (ADS)

    Singh, Manjeet; Sarkar, Arnab; Banerjee, Joydipta; Bhagat, R. K.

    2017-06-01

    An Advanced Heavy Water Reactor (AHWR) grade (Th-U)O2 fuel sample and Simulated High Burn-Up Nuclear Fuel (SIMFUEL) samples mimicking 28 and 43 GWd/Te irradiated burn-up fuel were studied using a laser-induced breakdown spectroscopy (LIBS) setup in a simulated hot-cell environment from a distance of > 1.5 m. A resolution of < 38 pm was used to record the complex spectra of the SIMFUEL samples. By spectrum comparison and database matching, > 60 emission lines of fission products were identified, of which only a few were suitable for generating calibration curves. The study demonstrates the possibility of investigating impurities at concentrations around hundreds of ppm, rapidly, at atmospheric pressure, and without any sample preparation. The results for Ba and Mo showed the advantage of LIBS analysis over traditional methods involving sample dissolution, which introduces possible elemental loss. Limits of detection (LODs) under an Ar atmosphere show significant improvement, which is shown to be due to the formation of stable plasma.

  9. [Spectrum simulation based on data derived from red tide].

    PubMed

    Liu, Zhen-Yu; Cui, Ting-Wei; Yue, Jie; Jiang, Tao; Cao, Wen-Xi; Ma, Yi

    2011-11-01

    The present paper utilizes the absorption data of red tide water measured during the growth and decay of a bloom to retrieve the imaginary part of the index of refraction based on Mie theory, and carries out simulation and analysis of the average absorption efficiency factors, the average backscattering efficiency factors, and the scattering phase function. The analysis of the simulation shows that Mie theory can be used to reproduce the absorption properties of Chaetoceros socialis with an average error of 11%; the average backscattering efficiency factors depend on the absorption, whose maximum value lies in the 400 to 700 nm wavelength range; the average backscattering efficiency factors showed a maximum value on the 17th, a low value during the outbreak of the red tide, and a minimum on the 21st; and the total scattering, which depends only weakly on absorption, is proportional to the size parameter (the relative size of the cell diameter with respect to the wavelength), while the angular scattering intensity is inversely proportional to wavelength.

  10. Marcus canonical integral for non-Gaussian processes and its computation: pathwise simulation and tau-leaping algorithm.

    PubMed

    Li, Tiejun; Min, Bin; Wang, Zhiming

    2013-03-14

    The stochastic integral ensuring the Newton-Leibnitz chain rule is essential in stochastic energetics. The Marcus canonical integral has this property and can be understood as the Wong-Zakai type smoothing limit when the driving process is non-Gaussian. However, this important concept seems not well known among physicists. In this paper, we discuss the Marcus integral for non-Gaussian processes and its computation in the context of stochastic energetics. We give a comprehensive introduction to the Marcus integral and compare three equivalent definitions in the literature. We introduce the exact pathwise simulation algorithm and give its error analysis. We show how to compute thermodynamic quantities based on the pathwise simulation algorithm. We highlight the information hidden in the Marcus mapping, which plays the key role in determining thermodynamic quantities. We further propose the tau-leaping algorithm, which advances the process with deterministic time steps when the tau-leaping condition is satisfied. The numerical experiments and their efficiency analysis show that it is very promising.
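
    The tau-leaping idea — advance by a fixed deterministic step and bundle all jumps of the driving process that fall inside it — can be sketched for a compound Poisson driver. This is a generic illustration under stated assumptions (unit jump size, Knuth's Poisson sampler), not the authors' algorithm for the full Marcus SDE:

```python
import math
import random

def _poisson(lam, rng):
    """Knuth's algorithm: an exact Poisson(lam) sample, fine for small lam."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        k += 1
        p *= rng.random()
        if p <= limit:
            return k - 1

def tau_leap_compound_poisson(rate, jump, tau, t_end, seed=0):
    """Integrate a compound Poisson process with fixed steps of size tau:
    each step draws Poisson(rate * tau) jumps at once, instead of resolving
    every jump time individually as a pathwise scheme would."""
    rng = random.Random(seed)
    x, t = 0.0, 0.0
    while t < t_end - 1e-12:
        x += jump * _poisson(rate * tau, rng)
        t += tau
    return x
```

    The pathwise algorithm in the paper instead resolves each jump time exactly and applies the Marcus mapping across it; tau-leaping trades that exactness for speed when jumps are frequent.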

  11. Phantom Effects in Multilevel Compositional Analysis: Problems and Solutions

    ERIC Educational Resources Information Center

    Pokropek, Artur

    2015-01-01

    This article combines statistical and applied research perspectives, showing problems that might arise when measurement error in multilevel compositional effects analysis is ignored. The article focuses on data where the independent variables are constructed measures. Simulation studies are conducted evaluating methods that could overcome the…

  12. Analysis of the Influence of Construction Insulation Systems on Public Safety in China

    PubMed Central

    Zhang, Guowei; Zhu, Guoqing; Zhao, Guoxiang

    2016-01-01

    With the Government of China’s proposed Energy Efficiency Regulations (GB40411-2007), the implementation of external insulation systems will be mandatory in China. Frequent external insulation system fires cause huge numbers of casualties and extensive property damage and have rapidly become a new hot issue in construction evacuation safety in China. This study attempts to reconstruct an actual fire scene and propose a quantitative risk assessment method for upward insulation system fires using thermal analysis tests and large eddy simulations (using the Fire Dynamics Simulator (FDS) software). First, the pyrolysis and combustion characteristics of extruded polystyrene (XPS) board, such as ignition temperature, combustion heat, limiting oxygen index, thermogravimetric behavior and thermal radiation response, were studied experimentally. Based on these experimental data, large eddy simulation was then applied to reconstruct insulation system fires. The results show that upward insulation system fires can be accurately reconstructed by using thermal analysis tests and large eddy simulation. The spread of insulation system fires in the vertical direction is faster than that in the horizontal direction. Moreover, we also find that there is a possibility of flashover in enclosures caused by insulation system fires, as the smoke temperature exceeds 600 °C. The simulation methods and experimental results obtained in this paper could provide valuable references for fire evacuation, hazard assessment and fire-resistant construction design studies. PMID:27589774

  13. Analysis of the Influence of Construction Insulation Systems on Public Safety in China.

    PubMed

    Zhang, Guowei; Zhu, Guoqing; Zhao, Guoxiang

    2016-08-30

    With the Government of China's proposed Energy Efficiency Regulations (GB40411-2007), the implementation of external insulation systems will be mandatory in China. Frequent external insulation system fires cause large numbers of casualties and extensive property damage and have rapidly become a pressing issue in construction evacuation safety in China. This study attempts to reconstruct an actual fire scene and proposes a quantitative risk assessment method for upward insulation system fires using thermal analysis tests and large eddy simulation (with the Fire Dynamics Simulator (FDS) software). First, the pyrolysis and combustion characteristics of extruded polystyrene (XPS) board, such as ignition temperature, heat of combustion, and limiting oxygen index, were studied experimentally through thermogravimetric and thermal radiation analysis. Based on these experimental data, large eddy simulation was then applied to reconstruct insulation system fires. The results show that upward insulation system fires can be accurately reconstructed by combining thermal analysis tests and large eddy simulation. Insulation system fires spread faster in the vertical direction than in the horizontal direction. Moreover, there is a possibility of flashover in enclosures caused by insulation system fires once the smoke temperature exceeds 600 °C. The simulation methods and experimental results obtained in this paper provide valuable references for fire evacuation, hazard assessment, and fire-resistant construction design studies.

  14. Light extraction efficiency analysis of GaN-based light-emitting diodes with nanopatterned sapphire substrates.

    PubMed

    Pan, Jui-Wen; Tsai, Pei-Jung; Chang, Kao-Der; Chang, Yung-Yuan

    2013-03-01

    In this paper, we propose a method to analyze the light extraction efficiency (LEE) enhancement of a nanopatterned sapphire substrate (NPSS) light-emitting diode (LED) by comparing wave optics software with ray optics software. Finite-difference time-domain (FDTD) simulations represent the wave optics software and Light Tools (LTs) simulations represent the ray optics software. First, we find the trends of and an optimal solution for the LEE enhancement when 2D-FDTD simulations are used, to save on simulation time and computational memory. The rigorous coupled-wave analysis method is utilized to explain the trend we obtain from the 2D-FDTD algorithm. The optimal solution is then applied in 3D-FDTD and LTs simulations. The results are similar, and the difference in LEE enhancement between the two simulations does not exceed 8.5% for the small LED chip area. More than 10^4 times less computational memory is used by the LTs simulation than by the 3D-FDTD simulation. Moreover, LEE enhancement from the side of the LED can be obtained in the LTs simulation. An actual-size NPSS LED is simulated using the LTs. The results show a more than 307% improvement in the total LEE enhancement of the NPSS LED with the optimal solution compared to the conventional LED.

  15. Digital computer simulation of inductor-energy-storage dc-to-dc converters with closed-loop regulators

    NASA Technical Reports Server (NTRS)

    Ohri, A. K.; Owen, H. A.; Wilson, T. G.; Rodriguez, G. E.

    1974-01-01

    The simulation of converter-controller combinations by means of a flexible digital computer program which produces output to a graphic display is discussed. The procedure is an alternative to mathematical analysis of converter systems. The types of computer programming involved in the simulation are described. Schematic diagrams, state equations, and output equations are displayed for four basic forms of inductor-energy-storage dc-to-dc converters. Mathematical models are developed to show the relationships among the parameters.
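    The state-equation approach described above can be illustrated with a minimal sketch. This is not the authors' program (which targeted inductor-energy-storage converters with closed-loop regulators and graphic output); it is a forward-Euler integration of the averaged state equations of an ideal open-loop buck converter, with hypothetical component values.

```python
# Hedged sketch: forward-Euler simulation of the averaged state equations
# of an ideal buck converter (hypothetical values, not the paper's cases).
V_IN, DUTY = 12.0, 0.5          # input voltage [V], switch duty ratio
L, C, R = 100e-6, 100e-6, 10.0  # inductance [H], capacitance [F], load [ohm]
DT, STEPS = 1e-6, 200_000       # time step [s], number of steps (0.2 s total)

i_l, v_c = 0.0, 0.0             # states: inductor current, capacitor voltage
for _ in range(STEPS):
    di = (DUTY * V_IN - v_c) / L          # averaged inductor state equation
    dv = (i_l - v_c / R) / C              # capacitor / load node equation
    i_l += di * DT
    v_c += dv * DT

print(f"steady-state output voltage ≈ {v_c:.2f} V")  # ideal buck: D*Vin = 6 V
```

For the ideal buck converter the output settles at D·Vin; a closed-loop regulator of the kind studied in the paper would instead adjust DUTY each step from the output error.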

  16. Detection of Nanosilver Agents in Antibacterial Textiles

    NASA Astrophysics Data System (ADS)

    Xu, Chengtao; Zhao, Jie; Wu, Jianjian; Nie, Jinmei; Cui, Chengmin; Xie, Weibin; Zhang, Yan

    2018-01-01

    Analytical techniques are needed to detect nanosilver in textiles that come into direct contact with skin. In this paper, to investigate the extraction of nanosilver from textile surfaces by human skin, we demonstrate a constant-temperature oscillation extraction method followed by inductively coupled plasma (ICP) spectroscopy. Artificial sweat and deionized water were selected as extraction solvents to simulate the contact of human skin with textiles. SEM and TEM analysis confirms the presence of nanosilver in the fabric and in the aqueous extract. ICP analysis is accurate for silver amounts in the range 0.05∼1.2 mg/L, with an r2 value of 0.9997. The percent recoveries for all fabrics were below 44%. The results show that the developed method of simulated human sweat extraction was not very effective, so nanosilver might not be transferred effectively from the fabric to the human body.

  17. Loading Deformation Characteristic Simulation Study of Engineering Vehicle Refurbished Tire

    NASA Astrophysics Data System (ADS)

    Qiang, Wang; Xiaojie, Qi; Zhao, Yang; Yunlong, Wang; Guotian, Wang; Degang, Lv

    2018-05-01

    The paper constructs computer geometry, mechanics, contact, and finite element analysis models of an engineering-vehicle refurbished tire and carries out a simulation study of its load-deformation behavior, compared against a new tire of the same type, obtaining the load-deformation response of the refurbished tire under static, ground-contact working conditions. The analysis shows that the radial and lateral deformation of the refurbished tire changes in a manner close to that of the new tire, with deformation values slightly smaller than the new tire's. At a given inflation pressure, the radial deformation of the refurbished tire increases linearly with load; the lateral deformation also changes linearly when the inflation pressure is low, but increases nonlinearly when the inflation pressure is very high.

  18. Uncertainty Quantification Analysis of Both Experimental and CFD Simulation Data of a Bench-scale Fluidized Bed Gasifier

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shahnam, Mehrdad; Gel, Aytekin; Subramaniyan, Arun K.

    Adequate assessment of the uncertainties in modeling and simulation is becoming an integral part of simulation-based engineering design. The goal of this study is to demonstrate the application of a non-intrusive Bayesian uncertainty quantification (UQ) methodology to multiphase (gas-solid) flows with experimental and simulation data, as part of our research efforts to determine the approach best suited to UQ of a bench-scale fluidized bed gasifier. UQ analysis was first performed on the available experimental data. A global sensitivity analysis performed as part of the UQ analysis shows that among the three operating factors, the steam-to-oxygen ratio has the most influence on syngas composition in the bench-scale gasifier experiments. An analysis of the forward propagation of uncertainties shows that an increase in the steam-to-oxygen ratio leads to an increase in H2 mole fraction and a decrease in CO mole fraction. These findings are in agreement with the ANOVA performed in the reference experimental study. Another contribution, in addition to the UQ analysis, is an optimization-based approach to identify the next best set of additional experimental samples, should the possibility arise for additional experiments. The surrogate models constructed as part of the UQ analysis are thus employed to improve the information gain and make incremental recommendations for further experiments. In the second step, a series of simulations was carried out with the open-source computational fluid dynamics software MFiX to reproduce the experimental conditions, where three operating factors, i.e., coal flow rate, coal particle diameter, and steam-to-oxygen ratio, were systematically varied to understand their effect on the syngas composition. Bayesian UQ analysis was performed on the numerical results.
As part of the Bayesian UQ analysis, a global sensitivity analysis was performed on the simulation results, which shows that the predicted syngas composition is strongly affected not only by the steam-to-oxygen ratio (as observed in the experiments) but also by variation in the coal flow rate and particle diameter (which was not observed in the experiments). The carbon monoxide mole fraction is underpredicted at lower steam-to-oxygen ratios and overpredicted at higher steam-to-oxygen ratios; the opposite trend is observed for the carbon dioxide mole fraction. These discrepancies are attributed either to excessive segregation of the phases, which leads to fuel-rich or fuel-lean regions, or to the selection of reaction models, since different reaction models and kinetics can lead to different syngas compositions throughout the gasifier. To improve the quality of the numerical models, the effect that uncertainties in the reaction models for gasification, char oxidation, carbon monoxide oxidation, and water-gas shift have on the syngas composition was investigated at different grid resolutions, along with the bed temperature. The global sensitivity analysis showed that among the various reaction models employed, the choice of water-gas shift model has the greatest influence on syngas composition, with the gasification reaction model second. Syngas composition also shows a small sensitivity to the bed temperature. The hydrodynamic behavior of the bed did not change beyond a grid spacing of 18 particle diameters; however, the syngas concentration continued to be affected by grid resolutions as fine as 9 particle diameters. This is due to better resolution of the phasic interface between the gas and solids, which leads to stronger heterogeneous reactions.
This report is a compilation of three manuscripts published in peer-reviewed journals for the series of studies mentioned above.
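    The forward propagation of input uncertainty described above can be sketched generically. The linear response surface below is an invented stand-in, not the study's actual surrogate model; its coefficients are illustrative only, though the qualitative trend (H2 up, CO down with steam-to-oxygen ratio) follows the reported finding.

```python
import random

# Hypothetical surrogate: syngas mole fractions as functions of the
# steam-to-oxygen ratio r (coefficients invented for illustration).
def h2_fraction(r):
    return 0.10 + 0.05 * r      # H2 increases with steam/O2 ratio

def co_fraction(r):
    return 0.40 - 0.06 * r      # CO decreases with steam/O2 ratio

random.seed(0)
# Propagate an uncertain operating condition: r ~ Normal(mean=2.0, sd=0.2).
samples = [random.gauss(2.0, 0.2) for _ in range(50_000)]
h2 = [h2_fraction(r) for r in samples]
co = [co_fraction(r) for r in samples]

mean_h2 = sum(h2) / len(h2)
mean_co = sum(co) / len(co)
print(f"mean H2 fraction {mean_h2:.3f}, mean CO fraction {mean_co:.3f}")
```

In the study itself the surrogate was fitted to MFiX simulation output; the point of the sketch is only the sampling-and-propagation pattern.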

  19. Monitoring, analyzing and simulating of spatial-temporal changes of landscape pattern over mining area

    NASA Astrophysics Data System (ADS)

    Liu, Pei; Han, Ruimei; Wang, Shuangting

    2014-11-01

    Given the merits of remotely sensed data in depicting regional land cover and land-cover change, multi-objective information processing is applied to remote sensing images to analyze and simulate land cover in mining areas. In this paper, multi-temporal remotely sensed data were selected to monitor the pattern, distribution, and trend of land use / cover change (LUCC) and to predict its impacts on the ecological environment and human settlement in a mining area. The monitoring, analysis, and simulation of LUCC in this coal mining area comprise five steps: information integration of optical and SAR data, LULC type extraction with an SVM classifier, LULC trend simulation with a CA-Markov model, and monitoring and analysis of temporal landscape changes with confusion matrices and landscape indices. The results demonstrate that the improved data fusion algorithm makes full use of the information extracted from optical and SAR data; the SVM classifier obtains land cover maps efficiently and stably, providing a good basis for both land cover change analysis and trend simulation; and the CA-Markov model predicts LULC trends with good performance, offering an effective way to integrate remotely sensed data with a spatial-temporal model for analyzing land use / cover change and the corresponding environmental impacts in mining areas. Evaluation combining confusion matrices with landscape indices shows a sustained downward trend in agricultural land and bare land but a continuous growth trend in water bodies, forest, and other land, while the built-up area showed a wave-like change, first increasing and then decreasing. The mining landscape has undergone a fragmentation process from small to large and back to small; agricultural land is the most strongly affected landscape type in this area, and human activities are the primary cause, so the problem deserves more attention from government and other organizations.
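    The Markov half of the CA-Markov trend simulation reduces to a matrix projection of class shares; the transition probabilities below are invented for illustration and are not the study's calibrated values.

```python
import numpy as np

# Hypothetical annual transition matrix between three land-cover classes
# (rows: from; columns: to) -- illustrative values, not the study's.
classes = ["agricultural", "built-up", "water/forest/other"]
P = np.array([
    [0.90, 0.06, 0.04],   # agricultural land slowly converts away
    [0.01, 0.97, 0.02],
    [0.01, 0.02, 0.97],
])
assert np.allclose(P.sum(axis=1), 1.0)  # each row is a probability vector

x = np.array([0.50, 0.20, 0.30])        # current area shares
for _ in range(10):                     # project 10 annual steps ahead
    x = x @ P

for name, share in zip(classes, x):
    print(f"{name}: {share:.3f}")
```

The CA part of CA-Markov then allocates these projected shares spatially according to neighborhood suitability, which the sketch omits.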

  20. Characteristics of supercritical turbulence from Direct Numerical Simulations of C7H16/N2 and O2/H2

    NASA Technical Reports Server (NTRS)

    Okong'o, N. A.; Bellan, J.

    2003-01-01

    Analysis of transitional states from Direct Numerical Simulations (DNS) of temporal, supercritical mixing layers for C7H16/N2 and O2/H2 shows that the evolution of all layers is characterized by the formation of high-density-gradient magnitude (HDGM) regions.

  1. Incremental dynamical downscaling for probabilistic analysis based on multiple GCM projections

    NASA Astrophysics Data System (ADS)

    Wakazuki, Y.

    2015-12-01

    A dynamical downscaling method for probabilistic regional-scale climate change projections was developed to cover the uncertainty spanned by multiple general circulation model (GCM) climate simulations. The climatological increments (future minus present climate states) estimated from GCM simulation results were statistically analyzed using singular value decomposition. Both positive and negative perturbations from the ensemble mean, with magnitudes of one standard deviation, were extracted and added to the ensemble mean of the climatological increments. The resulting multiple modal increments were used to create multiple modal lateral boundary conditions for the future-climate regional climate model (RCM) simulations by adding them to objective analysis data. This data handling can be regarded as an extension of the pseudo-global-warming (PGW) method previously developed by Kimura and Kitoh (2007). The incremental handling of the GCM simulations yields approximate probabilistic climate change projections with a smaller number of RCM simulations. Three values of a climatological variable simulated by the RCM for each mode were used to estimate the response to that mode's perturbation. For the probabilistic analysis, climatological variables of the RCM were assumed to respond linearly to the multiple modal perturbations, although nonlinearity was seen for local-scale rainfall. The probability distribution of temperature could be estimated with perturbation simulations for two modes, requiring five future-climate RCM simulations, whereas local-scale rainfall needed four modes, requiring nine RCM simulations. The probabilistic method is expected to be used for regional-scale climate change impact assessment in the future.
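    The modal decomposition of the ensemble of GCM increments can be sketched with a singular value decomposition; the tiny synthetic ensemble below stands in for real climatological increment fields, and all sizes are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in: 8 GCMs x 50 grid points of climatological increments.
increments = rng.normal(loc=1.5, scale=0.5, size=(8, 50))

ens_mean = increments.mean(axis=0)
anom = increments - ens_mean                  # deviations from the ensemble mean
U, s, Vt = np.linalg.svd(anom, full_matrices=False)

# One-standard-deviation perturbation along the leading mode.
n = increments.shape[0]
pert = (s[0] / np.sqrt(n - 1)) * Vt[0]

# Positive and negative modal boundary-condition increments (PGW-style):
# ensemble-mean increment plus/minus the modal perturbation.
bc_plus = ens_mean + pert
bc_minus = ens_mean - pert
print("leading-mode standard deviation:", s[0] / np.sqrt(n - 1))
```

In the actual method these plus/minus modal fields are added to objective analysis data to build the RCM lateral boundary conditions.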

  2. Simulation-based bronchoscopy training: systematic review and meta-analysis.

    PubMed

    Kennedy, Cassie C; Maldonado, Fabien; Cook, David A

    2013-07-01

    Simulation-based bronchoscopy training is increasingly used, but effectiveness remains uncertain. We sought to perform a comprehensive synthesis of published work on simulation-based bronchoscopy training. We searched MEDLINE, EMBASE, CINAHL, PsycINFO, ERIC, Web of Science, and Scopus for eligible articles through May 11, 2011. We included all original studies involving health professionals that evaluated, in comparison with no intervention or an alternative instructional approach, simulation-based training for flexible or rigid bronchoscopy. Study selection and data abstraction were performed independently and in duplicate. We pooled results using random effects meta-analysis. From an initial pool of 10,903 articles, we identified 17 studies evaluating simulation-based bronchoscopy training. In comparison with no intervention, simulation training was associated with large benefits on skills and behaviors (pooled effect size, 1.21 [95% CI, 0.82-1.60]; n=8 studies) and moderate benefits on time (0.62 [95% CI, 0.12-1.13]; n=7). In comparison with clinical instruction, behaviors with real patients showed nonsignificant effects favoring simulation for time (0.61 [95% CI, -1.47 to 2.69]) and process (0.33 [95% CI, -1.46 to 2.11]) outcomes (n=2 studies each), although variation in training time might account for these differences. Four studies compared alternate simulation-based training approaches. Inductive analysis to inform instructional design suggested that longer or more structured training is more effective, authentic clinical context adds value, and animal models and plastic part-task models may be superior to more costly virtual-reality simulators. Simulation-based bronchoscopy training is effective in comparison with no intervention. Comparative effectiveness studies are few.
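    Random-effects pooling of the kind reported above can be sketched with the DerSimonian-Laird estimator; the per-study effect sizes and variances below are invented illustrative inputs, not the review's data.

```python
import math

# Illustrative per-study standardized mean differences and their variances
# (invented numbers, not data from the review).
effects = [1.4, 0.9, 1.1, 1.6, 0.7]
variances = [0.10, 0.08, 0.12, 0.15, 0.09]

w = [1.0 / v for v in variances]                      # fixed-effect weights
fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))  # Cochran's Q
k = len(effects)
# DerSimonian-Laird between-study variance estimate (floored at zero).
tau2 = max(0.0, (q - (k - 1)) / (sum(w) - sum(wi**2 for wi in w) / sum(w)))

w_re = [1.0 / (v + tau2) for v in variances]          # random-effects weights
pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
se = math.sqrt(1.0 / sum(w_re))
lo, hi = pooled - 1.96 * se, pooled + 1.96 * se
print(f"pooled SMD {pooled:.2f} [95% CI {lo:.2f}, {hi:.2f}]")
```

The pooled effect always lies between the smallest and largest study effects; heterogeneity (large Q relative to k-1) widens the interval through tau².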

  3. Chaos in plasma simulation and experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watts, C.; Newman, D.E.; Sprott, J.C.

    1993-09-01

    We investigate the possibility that chaos and simple determinism are governing the dynamics of reversed field pinch (RFP) plasmas using data from both numerical simulations and experiment. A large repertoire of nonlinear analysis techniques is used to identify low-dimensional chaos. These tools include phase portraits and Poincaré sections, correlation dimension, the spectrum of Lyapunov exponents, and short-term predictability. In addition, nonlinear noise reduction techniques are applied to the experimental data in an attempt to extract any underlying deterministic dynamics. Two model systems are used to simulate the plasma dynamics. These are the DEBS code, which models global RFP dynamics, and the dissipative trapped electron mode (DTEM) model, which models drift wave turbulence. Data from both simulations show strong indications of low-dimensional chaos and simple determinism. Experimental data were obtained from the Madison Symmetric Torus RFP and consist of a wide array of both global and local diagnostic signals. None of the signals shows any indication of low-dimensional chaos or other simple determinism. Moreover, most of the analysis tools indicate the experimental system is very high dimensional with properties similar to noise. Nonlinear noise reduction is unsuccessful at extracting an underlying deterministic system.
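    One of the nonlinear tools listed above, the Lyapunov exponent, can be illustrated on a system whose answer is known analytically (this is a generic demonstration, not the RFP analysis): the logistic map at r = 4 has largest Lyapunov exponent ln 2.

```python
import math

# Estimate the largest Lyapunov exponent of the logistic map x -> r*x*(1-x)
# by averaging log|f'(x)| along an orbit; at r = 4 the exact value is ln 2.
r, x = 4.0, 0.1
for _ in range(1_000):          # discard the transient
    x = r * x * (1.0 - x)

n, acc = 100_000, 0.0
for _ in range(n):
    acc += math.log(abs(r * (1.0 - 2.0 * x)))   # log of the map's derivative
    x = r * x * (1.0 - x)

lyap = acc / n
print(f"estimated Lyapunov exponent {lyap:.3f} (ln 2 = {math.log(2):.3f})")
```

A positive exponent signals exponential divergence of nearby orbits, which is the deterministic-chaos signature the study searched for in the simulation and diagnostic signals.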

  4. Multi-scale sensitivity analysis of pile installation using DEM

    NASA Astrophysics Data System (ADS)

    Esposito, Ricardo Gurevitz; Velloso, Raquel Quadros; , Eurípedes do Amaral Vargas, Jr.; Danziger, Bernadete Ragoni

    2017-12-01

    The disturbances experienced by the soil due to pile installation and dynamic soil-structure interaction still present major challenges to foundation engineers. These phenomena exhibit complex behaviors that are difficult to measure in physical tests and to reproduce in numerical models. Owing to the simplified approach the discrete element method (DEM) uses to simulate large deformations and the nonlinear stress-dilatancy behavior of granular soils, the DEM is an excellent tool for investigating these processes. This study presents a sensitivity analysis of the effects of introducing a single pile using the PFC2D software developed by Itasca Co. The different scales investigated in these simulations include point and shaft resistance, alterations in porosity and stress fields, and particle displacements. Several simulations were conducted to investigate the effects of different numerical approaches, indicating that the installation method and particle rotation can greatly influence the conditions around the numerical pile. Minor effects were also noted from changes in penetration velocity and pile-soil friction. The difference in behavior between a moving and a stationary pile shows good qualitative agreement with previous experimental results, indicating the necessity of reaching force equilibrium prior to simulating any load test.

  5. Heat Transfer Model for Hot Air Balloons

    NASA Astrophysics Data System (ADS)

    Llado-Gambin, Adriana

    A heat transfer model and analysis for hot air balloons is presented in this work, backed with a flow simulation using SolidWorks. The objective is to understand the major heat losses in the balloon and to identify the parameters that affect most its flight performance. Results show that more than 70% of the heat losses are due to the emitted radiation from the balloon envelope and that convection losses represent around 20% of the total. A simulated heating source is also included in the modeling based on typical thermal input from a balloon propane burner. The burner duty cycle to keep a constant altitude can vary from 10% to 28% depending on the atmospheric conditions, and the ambient temperature is the parameter that most affects the total thermal input needed. The simulation and analysis also predict that the gas temperature inside the balloon decreases at a rate of -0.25 K/s when there is no burner activity, and it increases at a rate of +1 K/s when the balloon pilot operates the burner. The results were compared to actual flight data and they show very good agreement indicating that the major physical processes responsible for balloon performance aloft are accurately captured in the simulation.
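    The duty-cycle arithmetic above can be checked against the rates quoted in the abstract (-0.25 K/s cooling, +1 K/s heating): the break-even duty cycle d satisfies d·(+1) + (1-d)·(-0.25) = 0, i.e. d = 20%, which sits inside the reported 10-28% range. The cycle period below is an invented illustration value.

```python
# Break-even burner duty cycle from the quoted rates: the gas cools at
# 0.25 K/s with the burner off and heats at 1 K/s with it on.
HEAT_RATE = 1.0        # K/s, burner on (from the abstract)
COOL_RATE = -0.25      # K/s, burner off (from the abstract)

# d*HEAT_RATE + (1-d)*COOL_RATE = 0  =>  d = -COOL_RATE / (HEAT_RATE - COOL_RATE)
duty = -COOL_RATE / (HEAT_RATE - COOL_RATE)
print(f"break-even duty cycle: {duty:.0%}")   # 20%

# Simulate one 10-second burner cycle at that duty: the gas temperature
# returns (essentially) to its starting value, i.e. constant altitude.
temp, dt, period = 100.0, 0.01, 10.0
t = 0.0
while t < period:
    rate = HEAT_RATE if t < duty * period else COOL_RATE
    temp += rate * dt
    t += dt
print(f"net temperature change over one cycle: {temp - 100.0:+.3f} K")
```

The 20% figure is the idealized mid-range value; in the study the actual duty cycle shifted within 10-28% with atmospheric conditions, ambient temperature being the dominant factor.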

  6. Predictability of the 1997 and 1998 South Asian Summer Monsoons on the Intraseasonal Time Scale Based on 10 AMIP2 Model Runs

    NASA Technical Reports Server (NTRS)

    Wu, Man Li C.; Schubert, Siegfried; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Predictability of the 1997 and 1998 South Asian summer monsoons is examined using National Centers for Environmental Prediction/National Center for Atmospheric Research (NCEP/NCAR) reanalyses, and 100 two-year simulations with ten different Atmospheric General Circulation Models (AGCMs) with prescribed sea surface temperature (SST). We focus on the intraseasonal variations of the South Asian summer monsoon associated with the Madden-Julian Oscillation (MJO). The NCEP/NCAR reanalysis shows a clear coupling between SST anomalies and upper-level velocity potential anomalies associated with the MJO. We analyze several MJO events that developed during 1997 and 1998, focusing on the coupling with SST. The same analysis is carried out for the model simulations. Remarkably, the ensemble mean of the two-year AGCM simulations shows a signature of the observed MJO events. The ensemble-mean simulated MJO events are approximately in phase with the observed events, although they are weaker, their period of oscillation is somewhat longer, and their onset is delayed by about ten days compared with the observations. Details of the analysis and comparisons among the ten AMIP2 (Atmospheric Model Intercomparison Project) models will be presented at the conference.

  7. Multi-scale sensitivity analysis of pile installation using DEM

    NASA Astrophysics Data System (ADS)

    Esposito, Ricardo Gurevitz; Velloso, Raquel Quadros; , Eurípedes do Amaral Vargas, Jr.; Danziger, Bernadete Ragoni

    2018-07-01

    The disturbances experienced by the soil due to pile installation and dynamic soil-structure interaction still present major challenges to foundation engineers. These phenomena exhibit complex behaviors that are difficult to measure in physical tests and to reproduce in numerical models. Owing to the simplified approach the discrete element method (DEM) uses to simulate large deformations and the nonlinear stress-dilatancy behavior of granular soils, the DEM is an excellent tool for investigating these processes. This study presents a sensitivity analysis of the effects of introducing a single pile using the PFC2D software developed by Itasca Co. The different scales investigated in these simulations include point and shaft resistance, alterations in porosity and stress fields, and particle displacements. Several simulations were conducted to investigate the effects of different numerical approaches, indicating that the installation method and particle rotation can greatly influence the conditions around the numerical pile. Minor effects were also noted from changes in penetration velocity and pile-soil friction. The difference in behavior between a moving and a stationary pile shows good qualitative agreement with previous experimental results, indicating the necessity of reaching force equilibrium prior to simulating any load test.

  8. Case-Deletion Diagnostics for Maximum Likelihood Multipoint Quantitative Trait Locus Linkage Analysis

    PubMed Central

    Mendoza, Maria C.B.; Burns, Trudy L.; Jones, Michael P.

    2009-01-01

    Objectives Case-deletion diagnostic methods are tools that allow identification of influential observations that may affect parameter estimates and model fitting conclusions. The goal of this paper was to develop two case-deletion diagnostics, the exact case deletion (ECD) and the empirical influence function (EIF), for detecting outliers that can affect results of sib-pair maximum likelihood quantitative trait locus (QTL) linkage analysis. Methods Subroutines to compute the ECD and EIF were incorporated into the maximum likelihood QTL variance estimation components of the linkage analysis program MAPMAKER/SIBS. Performance of the diagnostics was compared in simulation studies that evaluated the proportion of outliers correctly identified (sensitivity), and the proportion of non-outliers correctly identified (specificity). Results Simulations involving nuclear family data sets with one outlier showed EIF sensitivities approximated ECD sensitivities well for outlier-affected parameters. Sensitivities were high, indicating the outlier was identified a high proportion of the time. Simulations also showed the enormous computational time advantage of the EIF. Diagnostics applied to body mass index in nuclear families detected observations influential on the lod score and model parameter estimates. Conclusions The EIF is a practical diagnostic tool that has the advantages of high sensitivity and quick computation. PMID:19172086
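    The exact-case-deletion idea generalizes beyond linkage analysis; the sketch below applies it to ordinary least squares (an illustrative stand-in, not MAPMAKER/SIBS), refitting the model with each observation removed and ranking observations by the change in the slope estimate.

```python
# Exact case-deletion (ECD) diagnostic sketched on simple linear regression:
# refit with each observation removed and rank observations by influence.
xs = list(range(10))
ys = [2.0 * x for x in xs]
ys[9] = 38.0                      # planted high-leverage outlier (true y = 18)

def slope(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx

full = slope(xs, ys)
influence = []
for i in range(len(xs)):
    xd = xs[:i] + xs[i + 1:]
    yd = ys[:i] + ys[i + 1:]
    influence.append(abs(slope(xd, yd) - full))   # ECD: |slope change on deletion|

most = max(range(len(xs)), key=lambda i: influence[i])
print(f"most influential observation: index {most}")   # the planted outlier
```

The empirical influence function studied in the paper approximates exactly this per-observation deletion effect without the n refits, which is where its computational advantage comes from.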

  9. A two-step sensitivity analysis for hydrological signatures in Jinhua River Basin, East China

    NASA Astrophysics Data System (ADS)

    Pan, S.; Fu, G.; Chiang, Y. M.; Xu, Y. P.

    2016-12-01

    Owing to model complexity and the large number of parameters, calibration and sensitivity analysis are difficult for distributed hydrological models. In this study, a two-step sensitivity analysis approach is proposed for analyzing the hydrological signatures in Jinhua River Basin, East China, using the Distributed Hydrology-Soil-Vegetation Model (DHSVM). A rough sensitivity analysis is first conducted via analysis of variance to obtain preliminary influential parameters, greatly reducing the number of parameters from eighty-three to sixteen. The sixteen parameters are then further analyzed with a variance-based global sensitivity analysis, i.e., Sobol's sensitivity analysis method, to achieve robust sensitivity rankings and parameter contributions. Parallel computing is applied to reduce the computational burden of the variance-based sensitivity analysis. The results reveal that only a small number of model parameters are significantly sensitive, including the rain LAI multiplier, lateral conductivity, porosity, field capacity and wilting point of clay loam, understory monthly LAI, understory minimum resistance, and root zone depths of croplands. Finally, several hydrological signatures are used to investigate the performance of DHSVM. Results show that high values of the efficiency criteria did not guarantee excellent performance on the hydrological signatures. For most samples from the Sobol sensitivity analysis, water yield was simulated very well; however, the lowest and maximum annual daily runoffs were underestimated, and most seven-day minimum runoffs were overestimated. Nevertheless, a number of samples still performed well on these three signatures. Analysis of peak flow shows that small and medium floods are simulated well, while large floods are slightly underestimated. This work supports further multi-objective calibration of the DHSVM model and indicates where to improve the reliability and credibility of model simulation.
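    Sobol's variance-based first-order index, used in the second step above, can be sketched with a pick-freeze (Saltelli-style) estimator on a toy model with a known answer, in place of the hydrological model: for Y = X1 + 0.5·X2 with independent uniform inputs, S1 = 0.8 analytically.

```python
import numpy as np

rng = np.random.default_rng(42)

def model(x):                       # toy model standing in for DHSVM
    return x[:, 0] + 0.5 * x[:, 1]  # analytic S1 = 1 / (1 + 0.25) = 0.8

n, d = 100_000, 2
A = rng.uniform(size=(n, d))        # two independent sample matrices
B = rng.uniform(size=(n, d))
AB1 = A.copy()
AB1[:, 0] = B[:, 0]                 # A with column 1 "frozen" from B

fA, fB, fAB1 = model(A), model(B), model(AB1)
var = np.var(np.concatenate([fA, fB]))
S1 = np.mean(fB * (fAB1 - fA)) / var   # Saltelli (2010) pick-freeze estimator
print(f"estimated first-order Sobol index S1 = {S1:.3f} (analytic 0.8)")
```

With an expensive simulator the same estimator is run on far fewer samples, which is why the study needed parallel computing for this step.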

  10. Non-Newtonian Hele-Shaw Flow and the Saffman-Taylor Instability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kondic, L.; Shelley, M.J.; Palffy-Muhoray, P.

    We explore the Saffman-Taylor instability of a gas bubble expanding into a shear thinning liquid in a radial Hele-Shaw cell. Using Darcy's law generalized for non-Newtonian fluids, we perform simulations of the full dynamical problem. The simulations show that shear thinning significantly influences the developing interfacial patterns. Shear thinning can suppress tip splitting, and produce fingers which oscillate during growth and shed side branches. Emergent length scales show reasonable agreement with a general linear stability analysis. © 1998 The American Physical Society.

  11. Reverberation time influences musical enjoyment with cochlear implants.

    PubMed

    Certo, Michael V; Kohlberg, Gavriel D; Chari, Divya A; Mancuso, Dean M; Lalwani, Anil K

    2015-02-01

    To identify factors that enhance the enjoyment of music in cochlear implant (CI) recipients. Specifically, we assessed the hypothesis that variations in reverberation time (RT60) may be linked to variations in the level of musical enjoyment in CI users. Prospective analysis of music enjoyment in normal-hearing individuals. Single tertiary academic medical center. Normal-hearing adults (N = 20) were asked to rate a novel 20-second melody on three enjoyment modalities: musicality, pleasantness, and naturalness. Subjective rating of music excerpts. Participants listened to seven different instruments play the melody, each with five levels (0.2, 1.6, 3.0, 5.0, 10.0 s) of RT60, both with and without CI simulation processing. Linear regression analysis with analysis of variance was used to assess the impact of RT60 on music enjoyment. Without CI simulation, music samples with RT60 = 3.0 seconds were ranked most pleasant and most musical, whereas those with RT60 = 1.6 seconds and RT60 = 3.0 seconds were ranked equally most natural (all p < 0.05). With CI simulation, music samples with RT60 = 0.2 seconds were ranked most pleasant, most musical, and most natural (all p < 0.05). Samples without CI simulation show a preference for middle-range RT60, whereas samples with CI simulation show a negative linear relationship between RT60 and musical enjoyment, with preference for minimal reverberation. Minimization of RT60 may be a useful strategy for increasing musical enjoyment under CI conditions, both in altering existing music as well as in composition of new music.

  12. Safety and reliability analysis in a polyvinyl chloride batch process using dynamic simulator-case study: Loss of containment incident.

    PubMed

    Rizal, Datu; Tani, Shinichi; Nishiyama, Kimitoshi; Suzuki, Kazuhiko

    2006-10-11

    In this paper, a novel methodology for batch plant safety and reliability analysis is proposed using a dynamic simulator. A batch process involves several safety objects (e.g., sensors, controllers, valves) that are activated during the operational stage. The performance of the safety objects is evaluated by dynamic simulation, and a fault propagation model is generated. Using the fault propagation model, an improved fault tree analysis (FTA) method based on switching signal mode (SSM) is developed for estimating the probability of failures. Time-dependent failures can be treated as unavailability of safety objects, which can cause accidents in a plant. Finally, the ranking of safety objects is formulated as a performance index (PI) estimated using importance measures. The PI prioritizes the safety objects that should be investigated in a plant's safety improvement program. The output of this method can be used to set optimal policies for safety object improvement and maintenance. The dynamic simulator was constructed using Visual Modeler (VM, the plant simulator developed by Omega Simulation Corp., Japan). A case study focuses on a loss-of-containment (LOC) incident in a polyvinyl chloride (PVC) batch process that consumes the hazardous material vinyl chloride monomer (VCM).
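    The gate algebra behind an FTA of this kind, and the importance-measure ranking of safety objects, can be sketched generically; the failure probabilities below are invented for illustration and are not values derived from the paper's VM simulation.

```python
# Hedged sketch of fault-tree gate algebra with invented failure
# probabilities (not values from the paper's dynamic simulation).
def p_or(probs):   # top event occurs if ANY independent basic event occurs
    q = 1.0
    for p in probs:
        q *= 1.0 - p
    return 1.0 - q

def p_and(probs):  # top event occurs only if ALL basic events occur
    q = 1.0
    for p in probs:
        q *= p
    return q

objects = {"sensor": 0.01, "controller": 0.005, "valve": 0.02}
top = p_or(objects.values())        # single OR gate over the safety objects
print(f"top-event probability: {top:.5f}")

# Birnbaum importance for an OR gate: dP(top)/dp_i = prod_{j != i} (1 - p_j),
# i.e. P(top | i failed) - P(top | i working); used here to rank objects.
importance = {}
for name in objects:
    others = [p for k, p in objects.items() if k != name]
    importance[name] = 1.0 - p_or(others)
ranked = sorted(importance, key=importance.get, reverse=True)
print("priority ranking:", ranked)
```

In the paper the per-object unavailabilities come from the dynamic simulation (time-dependent), and the PI plays the role of this importance-based ranking.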

  13. Neuroanatomical Diversity of Corpus Callosum and Brain Volume in Autism: Meta-analysis, Analysis of the Autism Brain Imaging Data Exchange Project, and Simulation.

    PubMed

    Lefebvre, Aline; Beggiato, Anita; Bourgeron, Thomas; Toro, Roberto

    2015-07-15

Patients with autism have often been reported to have a smaller corpus callosum (CC) than control subjects. We conducted a meta-analysis of the literature, analyzed the CC in 694 subjects of the Autism Brain Imaging Data Exchange project, and performed computer simulations to study the effect of different analysis strategies. Our meta-analysis suggested a group difference in CC size; however, the studies were heavily underpowered (20% power to detect Cohen's d = .3). In contrast, we did not observe significant differences in the Autism Brain Imaging Data Exchange cohort, despite having achieved 99% power. However, we observed that CC scaled nonlinearly with brain volume (BV): large brains had a proportionally smaller CC. Our simulations showed that because of this nonlinearity, CC normalization could not control for possible BV differences, but using BV as a covariate in a linear model would. We also observed a weaker correlation of IQ and BV in cases compared with control subjects. Our simulations showed that matching populations by IQ could then induce artifactual BV differences. The lack of statistical power in the previous literature prevents us from establishing the reality of the claims of a smaller CC in autism, and our own analyses did not find any. However, the nonlinear relationship between CC and BV and the different correlation between BV and IQ in cases and control subjects may induce artifactual differences. Overall, our results highlight the necessity of open data sharing to provide more solid ground for the discovery of neuroimaging biomarkers within the context of the wide human neuroanatomical diversity. Copyright © 2015 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.
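The normalization artifact the authors simulate can be reproduced in a few lines: when CC scales sublinearly with BV, the ratio CC/BV differs between groups that differ only in BV, while a linear model with BV as a covariate largely removes the group effect. The allometric exponent, volumes, and noise level below are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def cc_from_bv(bv):
    """CC scaling nonlinearly with BV (exponent 0.7 and 2% noise assumed)."""
    return 0.5 * bv**0.7 * rng.normal(1.0, 0.02, len(bv))

n = 300
bv_a = rng.normal(1200.0, 100.0, n)   # group A brain volumes (arbitrary units)
bv_b = rng.normal(1300.0, 100.0, n)   # group B: larger brains, same scaling law
cc_a, cc_b = cc_from_bv(bv_a), cc_from_bv(bv_b)

# Ratio normalization (CC/BV) induces a spurious group difference
ratio_diff = (cc_a / bv_a).mean() - (cc_b / bv_b).mean()

# Linear model with BV as a covariate: the group effect shrinks toward zero
bv = np.concatenate([bv_a, bv_b])
y = np.concatenate([cc_a, cc_b])
group = np.concatenate([np.zeros(n), np.ones(n)])
X = np.column_stack([np.ones(2 * n), group, bv])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # beta[1]: adjusted group effect
```

Here `ratio_diff` is reliably positive (larger brains look "proportionally smaller") even though both groups follow the same CC–BV law, while the covariate-adjusted group coefficient stays near zero.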

  14. Robust spectral-domain optical coherence tomography speckle model and its cross-correlation coefficient analysis

    PubMed Central

    Liu, Xuan; Ramella-Roman, Jessica C.; Huang, Yong; Guo, Yuan; Kang, Jin U.

    2013-01-01

In this study, we propose a generic speckle simulation for the optical coherence tomography (OCT) signal, obtained by convolving the point spread function (PSF) of the OCT system with a numerically synthesized random sample field. We validated our model and used the simulation method to study the statistical properties of cross-correlation coefficients (XCC) between A-scans, which our group has recently applied to transverse motion analysis. The simulation results show that oversampling is essential for accurate motion tracking; that the exponential decay of the OCT signal leads to an underestimate of motion, which can be corrected; and that lateral heterogeneity of the sample leads to an overestimate of motion for the few pixels corresponding to a structural boundary. PMID:23456001
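The convolution model and the XCC-based shift estimate can be sketched in 1-D: a complex random scatterer field convolved (circularly, for simplicity) with a Gaussian PSF gives a speckled A-scan, and the shift maximizing the cross-correlation coefficient recovers a known displacement. This is a minimal sketch of the idea, not the authors' simulation code.

```python
import numpy as np

def synth_ascan(field, psf):
    """A-scan amplitude: circular convolution of a complex random sample
    field with the system PSF (1-D toy version of the model above)."""
    return np.abs(np.fft.ifft(np.fft.fft(field) * np.fft.fft(psf, len(field))))

def best_shift(a, b, max_shift):
    """Shift maximizing the normalized cross-correlation coefficient (XCC)."""
    shifts = range(-max_shift, max_shift + 1)
    xcc = [np.corrcoef(a, np.roll(b, -s))[0, 1] for s in shifts]
    return list(shifts)[int(np.argmax(xcc))]

rng = np.random.default_rng(1)
field = rng.normal(size=512) + 1j * rng.normal(size=512)  # random sample field
z = np.arange(-16, 17)
psf = np.exp(-z**2 / (2 * 4.0**2))       # Gaussian PSF, several pixels wide
a1 = synth_ascan(field, psf)
a2 = synth_ascan(np.roll(field, 5), psf)  # sample displaced by 5 pixels
```

With the PSF spanning several pixels the signal is oversampled, which is what makes the XCC peak well defined.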

  15. Zemax simulations describing collective effects in transition and diffraction radiation.

    PubMed

    Bisesto, F G; Castellano, M; Chiadroni, E; Cianchi, A

    2018-02-19

Transition and diffraction radiation from charged particles is commonly used for diagnostic purposes in accelerator facilities as well as for THz sources in spectroscopy applications. Therefore, an accurate analysis of the emission process and the transport optics is crucial to properly characterize the source and precisely retrieve beam parameters. In this regard, we have developed a new algorithm, based on Zemax, to simulate both transition and diffraction radiation as generated by relativistic electron bunches, thereby accounting for collective effects. In particular, unlike previous works, we take into account the electron beam's physical size and transverse momentum, reproducing effects visible in the produced radiation that are not observable in a single-electron analysis. The simulation results have been compared with two experiments, showing excellent agreement.

  16. Analyzing small data sets using Bayesian estimation: the case of posttraumatic stress symptoms following mechanical ventilation in burn survivors

    PubMed Central

    van de Schoot, Rens; Broere, Joris J.; Perryck, Koen H.; Zondervan-Zwijnenburg, Mariëlle; van Loey, Nancy E.

    2015-01-01

Background The analysis of small data sets in longitudinal studies can lead to power issues and often suffers from biased parameter values. These issues can be solved by using Bayesian estimation in conjunction with informative prior distributions. By means of a simulation study and an empirical example concerning posttraumatic stress symptoms (PTSS) following mechanical ventilation in burn survivors, we demonstrate the advantages and potential pitfalls of using Bayesian estimation. Methods First, we show how to specify prior distributions, and by means of a sensitivity analysis we demonstrate how to check the exact influence of the prior (mis)specification. Thereafter, we show by means of a simulation in which situations the Bayesian approach outperforms the default maximum likelihood approach. Finally, we re-analyze empirical data on burn survivors which provided preliminary evidence of an aversive influence of a period of mechanical ventilation on the course of PTSS following burns. Results Not surprisingly, maximum likelihood estimation showed insufficient coverage as well as power with very small samples. Only when Bayesian analysis was used in conjunction with informative priors did power increase to acceptable levels. As expected, we showed that the smaller the sample size, the more the results rely on the prior specification. Conclusion We show that two issues often encountered during analysis of small samples, power and biased parameters, can be solved by including prior information in Bayesian analysis. We argue that the use of informative priors should always be reported together with a sensitivity analysis. PMID:25765534
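The stabilizing effect of an informative prior on a small sample can be sketched with a conjugate normal-normal update: the posterior mean is pulled toward the prior, and the posterior interval is narrower than either the prior or the likelihood alone. This is a minimal stand-in for the idea; the paper's models are longitudinal and were fitted with full Bayesian machinery, and the numbers below are invented.

```python
import numpy as np

def posterior_normal_mean(y, prior_mu, prior_sd, sigma):
    """Conjugate normal-normal update for a mean with known data SD `sigma`.
    Returns posterior mean and SD."""
    y = np.asarray(y, dtype=float)
    precision = 1.0 / prior_sd**2 + len(y) / sigma**2   # posterior precision
    mean = (prior_mu / prior_sd**2 + y.sum() / sigma**2) / precision
    return mean, np.sqrt(1.0 / precision)

y = [1.8, 2.3, 1.9, 2.6, 2.1]                         # hypothetical tiny sample
ml_mean, ml_se = np.mean(y), 1.0 / np.sqrt(len(y))    # sigma = 1 assumed known
post_mean, post_sd = posterior_normal_mean(y, prior_mu=1.5, prior_sd=0.4, sigma=1.0)
```

A sensitivity analysis would repeat the update over a range of `prior_mu`/`prior_sd` values to show how strongly the small sample leans on the prior.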

  17. Analyzing small data sets using Bayesian estimation: the case of posttraumatic stress symptoms following mechanical ventilation in burn survivors.

    PubMed

    van de Schoot, Rens; Broere, Joris J; Perryck, Koen H; Zondervan-Zwijnenburg, Mariëlle; van Loey, Nancy E

    2015-01-01

Background: The analysis of small data sets in longitudinal studies can lead to power issues and often suffers from biased parameter values. These issues can be solved by using Bayesian estimation in conjunction with informative prior distributions. By means of a simulation study and an empirical example concerning posttraumatic stress symptoms (PTSS) following mechanical ventilation in burn survivors, we demonstrate the advantages and potential pitfalls of using Bayesian estimation. Methods: First, we show how to specify prior distributions, and by means of a sensitivity analysis we demonstrate how to check the exact influence of the prior (mis)specification. Thereafter, we show by means of a simulation in which situations the Bayesian approach outperforms the default maximum likelihood approach. Finally, we re-analyze empirical data on burn survivors which provided preliminary evidence of an aversive influence of a period of mechanical ventilation on the course of PTSS following burns. Results: Not surprisingly, maximum likelihood estimation showed insufficient coverage as well as power with very small samples. Only when Bayesian analysis was used in conjunction with informative priors did power increase to acceptable levels. As expected, we showed that the smaller the sample size, the more the results rely on the prior specification. Conclusion: We show that two issues often encountered during analysis of small samples, power and biased parameters, can be solved by including prior information in Bayesian analysis. We argue that the use of informative priors should always be reported together with a sensitivity analysis.

  18. Analysis of audiometric notch as a noise-induced hearing loss phenotype in US youth: data from the National Health And Nutrition Examination Survey, 2005-2010.

    PubMed

    Bhatt, Ishan S; Guthrie, O'neil

    2017-06-01

Bilateral audiometric notch (BN) at 4000-6000 Hz was identified as a noise-induced hearing loss (NIHL) phenotype for genetic association analysis in college-aged musicians. This study analysed BN in a sample of US youth. The prevalence of BN within the study sample was determined, and logistic-regression analyses were performed to identify audiologic and other demographic factors associated with BN. Computer-simulated "flat" audiograms were used to estimate the potential influence of false-positive rates on the estimated prevalence of BN. 2348 participants (12-19 years) meeting the inclusion criteria were selected from the National Health and Nutrition Examination Survey data (2005-2010). The prevalence of BN was 16.6%. About 55.6% of the participants showed a notch in at least one ear. Noise exposure, gender, ethnicity and age showed a significant relationship with BN. Computer simulation revealed that 5.5% of simulated participants with "flat" audiograms showed BN. The association of noise exposure with BN suggests that it is a useful NIHL phenotype for genetic association analyses. However, further research is necessary to reduce false-positive rates in notch identification.
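The flat-audiogram simulation can be sketched as follows: generate truly flat audiograms, add test-retest noise quantized to the audiometer's 5 dB steps, and count how often a notch criterion fires. Both the Niskar-style criterion and the noise model below are assumptions for illustration; the paper's exact definitions may differ, so the resulting rate is not expected to match the reported 5.5%.

```python
import numpy as np

rng = np.random.default_rng(42)

def has_notch(thr):
    """Simplified notch criterion on thresholds (dB HL) at
    [0.5, 1, 3, 4, 6, 8] kHz: worst threshold at 3-6 kHz exceeds the
    worst low-frequency threshold by >= 15 dB, with >= 10 dB recovery
    at 8 kHz (an assumption, not the paper's exact rule)."""
    low = max(thr[0], thr[1])
    notch = max(thr[2], thr[3], thr[4])
    return notch - low >= 15 and notch - thr[5] >= 10

# 5000 simulated ears with truly flat 5 dB HL audiograms plus test-retest
# noise (SD 5 dB) rounded to 5 dB audiometric steps
n = 5000
noise = 5 * np.round(rng.normal(0, 5, size=(n, 6)) / 5)
flat = 5 + noise
false_positive_rate = np.mean([has_notch(ear) for ear in flat])
```

The nonzero rate on perfectly flat hearing is exactly the false-positive contamination the authors quantify.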

  19. Miscible gravitational instability of initially stable horizontal interface in a porous medium: Non-monotonic density profiles

    NASA Astrophysics Data System (ADS)

    Kim, Min Chan

    2014-11-01

To simulate a CO2 sequestration process, some researchers have employed a water/propylene glycol (PPG) system, which shows a non-monotonic density profile. Motivated by this fact, the stability of the diffusion layer of two miscible fluids saturated in a porous medium is analyzed. For a non-monotonic density profile system, linear stability equations are derived in a global domain, and then transformed into a system of ordinary differential equations in an infinite domain. Initial growth rate analysis is conducted without the quasi-steady state approximation (QSSA) and shows that initially the system is unconditionally stable for the least stable disturbance. For the time-evolving case, the ordinary differential equations are solved by applying eigenanalysis and a numerical shooting scheme, with and without the QSSA. To support these theoretical results, direct numerical simulations are conducted using the Fourier spectral method. The results of the theoretical linear stability analyses and the numerical simulations validate one another. The present linear and nonlinear analyses show that the water/PPG system is more unstable than the CO2/brine one, and that the flow characteristics of the two systems are quite different from each other.

  20. Assessment of the effects of horizontal grid resolution on long ...

    EPA Pesticide Factsheets

The objective of this study is to determine the adequacy of using a relatively coarse horizontal resolution (i.e. 36 km) to simulate long-term trends of pollutant concentrations and radiation variables with the coupled WRF-CMAQ model. WRF-CMAQ simulations over the continental United States are performed over the 2001 to 2010 time period at two different horizontal resolutions of 12 and 36 km. Both simulations used the same emission inventory and model configurations. Model results are compared both in space and time to assess the potential weaknesses and strengths of using coarse resolution in long-term air quality applications. The results show that the 36 km and 12 km simulations are comparable in terms of trend analysis for both pollutant concentrations and radiation variables. The advantage of using the coarser 36 km resolution is a significant reduction in computational cost, time and storage requirements, which are key considerations when performing multiple years of simulations for trend analysis. However, if such simulations are to be used for local air quality analysis, finer horizontal resolution may be beneficial since it can provide information on local gradients. In particular, divergences between the two simulations are noticeable in urban, complex-terrain and coastal regions. The National Exposure Research Laboratory’s Atmospheric Modeling Division (AMAD) conducts research in support of EPA’s mission to protect human health and the environment.

  1. Transient analysis of an HTS DC power cable with an HVDC system

    NASA Astrophysics Data System (ADS)

    Dinh, Minh-Chau; Ju, Chang-Hyeon; Kim, Jin-Geun; Park, Minwon; Yu, In-Keun; Yang, Byeongmo

    2013-11-01

The operational characteristics of a superconducting DC power cable connected to a high-voltage direct current (HVDC) system are largely determined by the HVDC control and protection system. To confirm how the cable operates with the HVDC system, verification using simulation tools is needed. This paper presents a transient analysis of a high temperature superconducting (HTS) DC power cable in connection with an HVDC system. The study was conducted via simulation of the HVDC system and a developed model of the HTS DC power cable using a real-time digital simulator (RTDS). The simulation covered several short-circuit cases that could cause system damage. The simulation results show that during the faults the HTS DC power cable did not quench, because the HVDC controller limited part of the fault current. These results could provide useful data for the protection design of a practical HVDC and HTS DC power cable system.

  2. Design and analysis of planar spiral resonator bandstop filter for microwave frequency

    NASA Astrophysics Data System (ADS)

    Motakabber, S. M. A.; Shaifudin Suharsono, Muhammad

    2017-11-01

At microwave frequencies, a spiral resonator can act as either a frequency-rejecting or a frequency-accepting circuit. A planar logarithmic spiral resonator bandstop filter has been developed based on this property; this project focuses on the rejection property of the spiral resonator. The performance of the filter circuit was analysed using the scattering-parameter (S-parameter) technique over the ultra-wideband microwave range. The proposed filter was built and simulated, and the S-parameter analysis was carried out, using the electromagnetic simulation software CST Microwave Studio. The commercial microwave substrate Taconic TLX-8 was used to build the filter. Experimental results showed that the -10 dB rejection bandwidth of the filter is 2.32 GHz and the center frequency is 5.72 GHz, which is suitable for ultra-wideband applications. The simulated and experimental results are in good agreement.
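How a resonator produces a bandstop |S21| response, and how a -10 dB rejection bandwidth is read off, can be sketched with a lumped stand-in: a series-RLC branch shunted across a matched 50 Ω line. The element values below are illustrative assumptions, not extracted from the spiral-resonator design.

```python
import numpy as np

def s21_shunt_rlc(f_hz, r, l, c, z0=50.0):
    """|S21| of a series-RLC branch shunted across a matched Z0 line:
    S21 = 2Z / (2Z + Z0), with Z = R + j(wL - 1/(wC)). At resonance Z
    collapses to R and transmission dips, giving bandstop behaviour."""
    w = 2 * np.pi * f_hz
    z = r + 1j * (w * l - 1.0 / (w * c))
    return np.abs(2 * z / (2 * z + z0))

L = 1e-9                                    # 1 nH (assumed)
f0 = 5.72e9                                 # target center frequency (Hz)
C = 1.0 / ((2 * np.pi * f0) ** 2 * L)       # choose C so the branch resonates at f0
f = np.linspace(3e9, 9e9, 2001)
s21_db = 20 * np.log10(s21_shunt_rlc(f, 1.0, L, C))
bw_10db = f[s21_db < -10.0]                 # frequencies inside the -10 dB notch
```

`bw_10db.max() - bw_10db.min()` then plays the role of the -10 dB rejection bandwidth quoted in the abstract.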

  3. Simulation Research on Vehicle Active Suspension Controller Based on G1 Method

    NASA Astrophysics Data System (ADS)

    Li, Gen; Li, Hang; Zhang, Shuaiyang; Luo, Qiuhui

    2017-09-01

An optimal linear controller for a vehicle active suspension is designed based on the order-relation analysis method (G1 method). First, the active and passive suspension system of a single-wheel vehicle model is modeled and the system input (road) signal model is determined. Secondly, the state-space equation of the system motion is established from the governing dynamics, and the optimal linear controller design is completed using optimal control theory. The weighting coefficients of the performance index for the suspension are determined by the order-relation analysis method. Finally, the model is simulated in Simulink. The simulation results show that, with the optimal weights determined by the G1 method under the given road conditions, vehicle body acceleration, suspension stroke and tire displacement are all improved, enhancing the comprehensive performance of the vehicle while keeping the active control effort within its requirements.
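The G1 (order-relation) weighting step can be sketched generically: indicators are ranked by importance, an expert supplies the ratio of each adjacent pair of weights, and the weights follow in closed form. This is the standard method in outline, not the paper's specific coefficients.

```python
def g1_weights(ratios):
    """G1 (order-relation) weights for indicators sorted from most to
    least important; ratios[k] is the judged ratio w_k / w_{k+1} (>= 1).
    Returns weights summing to 1, in importance order."""
    terms = [1.0]
    for r in reversed(ratios):
        terms.append(terms[-1] * r)      # products r_n, r_n*r_{n-1}, ...
    weights = [1.0 / sum(terms)]         # least-important weight w_n
    for r in reversed(ratios):
        weights.append(weights[-1] * r)  # w_{k-1} = r_k * w_k
    return list(reversed(weights))
```

For example, `g1_weights([1.2, 1.4])` distributes unit weight over three performance-index terms with the stated importance ratios; those weights then enter the LQR cost function.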

  4. Simulation of anisoplanatic imaging through optical turbulence using numerical wave propagation with new validation analysis

    NASA Astrophysics Data System (ADS)

    Hardie, Russell C.; Power, Jonathan D.; LeMaster, Daniel A.; Droege, Douglas R.; Gladysz, Szymon; Bose-Pillai, Santasri

    2017-07-01

We present a numerical wave propagation method for simulating imaging of an extended scene under anisoplanatic conditions. While isoplanatic simulation is relatively common, few tools are specifically designed for simulating the imaging of extended scenes under anisoplanatic conditions. We provide a complete description of the proposed simulation tool, including the wave propagation method used. Our approach computes an array of point spread functions (PSFs) for a two-dimensional grid on the object plane. The PSFs are then used in a spatially varying weighted sum operation, with an ideal image, to produce a simulated image with realistic optical turbulence degradation. The degradation includes spatially varying warping and blurring. To produce the PSF array, we generate a series of extended phase screens. Simulated point sources are numerically propagated from an array of positions on the object plane, through the phase screens, and ultimately to the focal plane of the simulated camera. Note that the optical path for each PSF will be different and thus pass through a different portion of the extended phase screens. These different paths give rise to a spatially varying PSF that produces anisoplanatic effects. We use a method for defining the individual phase screen statistics that we have not seen used in previous anisoplanatic simulations. We also present a validation analysis. In particular, we compare simulated outputs with the theoretical anisoplanatic tilt correlation and a derived differential tilt variance statistic. This is in addition to comparing the long- and short-exposure PSFs and the isoplanatic angle. We believe this analysis represents the most thorough validation of an anisoplanatic simulation to date. The current work is also unique in that we simulate and validate both constant and varying Cn2(z) profiles. Furthermore, we simulate sequences with both temporally independent and temporally correlated turbulence effects.
Temporal correlation is introduced by generating even larger extended phase screens and translating this block of screens in front of the propagation area. Our validation analysis shows an excellent match between the simulation statistics and the theoretical predictions. Thus, we think this tool can be used effectively to study optical anisoplanatic turbulence and to aid in the development of image restoration methods.
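The spatially varying weighted-sum step can be shown in miniature with just two PSFs and a linear blend across the image; a real anisoplanatic simulator interpolates over a full grid of propagated PSFs, and all sizes and widths below are illustrative.

```python
import numpy as np

def gaussian_psf(n, sigma):
    """n-by-n normalized Gaussian PSF (stand-in for a propagated PSF)."""
    x = np.arange(n) - n // 2
    g = np.exp(-(x[:, None] ** 2 + x[None, :] ** 2) / (2 * sigma ** 2))
    return g / g.sum()

def conv2_circ(img, psf):
    """Circular 2-D convolution via FFT, PSF centred at the origin."""
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(np.fft.ifftshift(psf))))

rng = np.random.default_rng(7)
ideal = rng.random((64, 64))                  # stand-in "ideal image"
narrow = conv2_circ(ideal, gaussian_psf(64, 1.0))   # mild blur (good seeing path)
wide = conv2_circ(ideal, gaussian_psf(64, 4.0))     # strong blur (turbulent path)
w = np.linspace(1.0, 0.0, 64)[None, :]        # blend weight varies across columns
degraded = w * narrow + (1.0 - w) * wide      # spatially varying PSF output
```

The left of `degraded` keeps fine detail while the right is heavily smoothed, which is the anisoplanatic signature the full tool reproduces with warping as well as blur.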

  5. Analysis of the Flicker Level Produced by a Fixed-Speed Wind Turbine

    NASA Astrophysics Data System (ADS)

    Suppioni, Vinicius; P. Grilo, Ahda

    2013-10-01

In this article, an analysis of the flicker emission during continuous operation of a mid-scale fixed-speed wind turbine connected to a distribution system is presented. Flicker emission is investigated based on simulation results, and the dependence of flicker emission on short-circuit capacity, grid impedance angle, mean wind speed, and wind turbulence is analyzed. The simulations were conducted in different programs in order to provide a more realistic wind emulation and a detailed model of the mechanical and electrical components of the wind turbine. This aim is accomplished by using FAST (Fatigue, Aerodynamics, Structures, and Turbulence) to simulate the mechanical parts of the wind turbine, Simulink/MATLAB to simulate the electrical system, and TurbSim to obtain the wind model. The results show that, even for a small wind generator, the flicker level can limit the wind power capacity installed in a distribution system.

  6. Radiation and ionization energy loss simulation for the GDH sum rule experiment in Hall-A at Jefferson Lab

    DOE Office of Scientific and Technical Information (OSTI.GOV)

Yan, Xin-Hu; Ye, Yun-Xiu; Chen, Jian-Ping

    2015-07-17

The radiation and ionization energy losses are presented for a single-arm Monte Carlo simulation for the GDH sum rule experiment in Hall-A at Jefferson Lab. Radiation and ionization energy loss are discussed for the $^{12}$C elastic scattering simulation. The relative momentum ratio $\frac{\Delta p}{p}$ and the $^{12}$C elastic cross section are compared without and with radiation energy loss, and a reasonable shape is obtained by the simulation. The total energy loss distribution is obtained, showing a Landau shape for $^{12}$C elastic scattering. This simulation work will support the radiative-correction analysis of the GDH sum rule experiment.

  7. Numerical Simulation of Intense Precipitation Events South of the Alps: Sensitivity to Initial Conditions and Horizontal Resolution

    NASA Astrophysics Data System (ADS)

    Cacciamani, C.; Cesari, D.; Grazzini, F.; Paccagnella, T.; Pantone, M.

In this paper we describe the results of several numerical experiments performed with the limited-area model LAMBO, based on a 1989 version of the NCEP (National Centers for Environmental Prediction) Eta model and operational at ARPA-SMR since 1993. The experiments were designed to assess the impact of different horizontal resolutions and initial conditions on the quality and detail of the forecast, especially as regards the precipitation field in the case of severe flood events. For the initial conditions we developed a mesoscale data assimilation scheme based on the nudging technique. The scheme uses upper-air and surface meteorological observations to modify ECMWF (European Centre for Medium-Range Weather Forecasts) operational analyses, used as first-guess fields, in order to better describe smaller-scale features, mainly in the lower troposphere. Three flood cases in the Alpine and Mediterranean regions have been simulated with LAMBO, using horizontal grid spacings of 15 and 5 km and starting either from the ECMWF initialised analysis or from the result of our mesoscale analysis procedure. The results show that increasing the resolution generally improves the forecast, bringing the precipitation peaks in the flooded areas close to the observed values without producing many spurious precipitation patterns. The use of the mesoscale analysis produces a more realistic representation of precipitation patterns, further improving the precipitation forecast. Furthermore, when simulations are started from the mesoscale analysis, some model-simulated thermodynamic indices show greater vertical instability precisely in the regions where the strongest precipitation occurred.
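The nudging idea behind the assimilation scheme is a relaxation term added to the model tendency, dx/dt += G*(obs - x), which pulls the forecast toward observations during the assimilation window. The scalar toy below illustrates only that relaxation mechanism; the model, gain, and noise level are assumptions, not the LAMBO implementation.

```python
import numpy as np

def integrate(x0, steps, dt, obs=None, gain=0.0):
    """Toy forecast model dx/dt = -0.5*x, with optional Newtonian nudging
    dx/dt += gain*(obs - x) toward an observation series."""
    x, traj = x0, []
    for k in range(steps):
        nudge = gain * (obs[k] - x) if obs is not None else 0.0
        x = x + dt * (-0.5 * x + nudge)
        traj.append(x)
    return np.array(traj)

dt, steps = 0.01, 400
truth = integrate(1.0, steps, dt)                  # "true" atmosphere
rng = np.random.default_rng(3)
obs = truth + rng.normal(0, 0.01, steps)           # imperfect observations
free = integrate(2.0, steps, dt)                   # poor first guess, no nudging
nudged = integrate(2.0, steps, dt, obs=obs, gain=5.0)
```

The nudged run forgets its bad initial condition much faster than the free run, which is exactly what lets the mesoscale analysis improve the first-guess fields.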

  8. Nanostructure Formation by controlled dewetting on patterned substrates: A combined theoretical, modeling and experimental study.

    PubMed

    Lu, Liang-Xing; Wang, Ying-Min; Srinivasan, Bharathi Madurai; Asbahi, Mohamed; Yang, Joel K W; Zhang, Yong-Wei

    2016-09-01

We perform a systematic two-dimensional energetic analysis to study the stability of various nanostructures formed by dewetting of solid films deposited on patterned substrates. Our analytical results show that by controlling system parameters such as the substrate surface pattern, film thickness and wetting angle, a variety of equilibrium nanostructures can be obtained. Phase diagrams are presented to show the complex relations between these system parameters and the various nanostructure morphologies. We further carry out both phase-field simulations and dewetting experiments to validate the analytically derived phase diagrams. Good agreement of the energetic analyses with the phase-field simulations and experiments confirms our analysis. Hence, the phase diagrams presented here provide guidelines for using solid-state dewetting as a tool to achieve various nanostructures.

  9. Comparison of Moist Static Energy and Budget between the GCM-Simulated Madden–Julian Oscillation and Observations over the Indian Ocean and Western Pacific

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Xiaoqing; Deng, Liping

The moist static energy (MSE) anomalies and MSE budget associated with the Madden–Julian oscillation (MJO) simulated in the Iowa State University General Circulation Model (ISUGCM) over the Indian and Pacific Oceans are compared with observations. Different phase relationships between MJO 850-hPa zonal wind, precipitation, and surface latent heat flux are simulated over the Indian Ocean and western Pacific, which are greatly influenced by the convection closure, trigger conditions, and convective momentum transport (CMT). The moist static energy builds up from the lower troposphere 15–20 days before the peak of MJO precipitation, and reaches its maximum in the middle troposphere (500–600 hPa) near the peak of MJO precipitation. The gradual lower-tropospheric heating and moistening and the upward transport of moist static energy are important aspects of MJO events, which are documented in observational studies but poorly simulated in most GCMs. The trigger conditions for deep convection, obtained from year-long cloud-resolving model (CRM) simulations, contribute to the striking difference between ISUGCM simulations with the original and modified convection schemes and play the major role in the improved MJO simulation in ISUGCM. Additionally, the budget analysis of the ISUGCM simulations shows that the increase in MJO MSE is in phase with the horizontal advection of MSE over the western Pacific, while out of phase with the horizontal advection of MSE over the Indian Ocean. However, the NCEP analysis shows that the tendency of MJO MSE is in phase with the horizontal advection of MSE over both oceans.

  10. Route complexity and simulated physical ageing negatively influence wayfinding.

    PubMed

    Zijlstra, Emma; Hagedoorn, Mariët; Krijnen, Wim P; van der Schans, Cees P; Mobach, Mark P

    2016-09-01

The aim of this age-simulation field experiment was to assess the influence of route complexity and physical ageing on wayfinding. Seventy-five people (aged 18-28) performed a total of 108 wayfinding tasks (i.e., 42 participants performed two wayfinding tasks and 33 performed one), of which 59 tasks were performed wearing gerontologic ageing suits. Outcome variables were wayfinding performance (i.e., efficiency and walking speed) and physiological outcomes (i.e., heart and respiratory rates). Analysis of covariance showed that persons on more complex routes (i.e., with more floor and building changes) walked less efficiently than persons on less complex routes. In addition, simulated elderly participants performed worse in wayfinding than young participants in terms of speed (p < 0.001). Moreover, a linear mixed model showed that simulated elderly persons had higher heart and respiratory rates than young people during a wayfinding task, suggesting that the simulated elderly expended more energy during this task. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. Simulation of Rutherford backscattering spectrometry from arbitrary atom structures.

    PubMed

    Zhang, S; Nordlund, K; Djurabekova, F; Zhang, Y; Velisa, G; Wang, T S

    2016-10-01

Rutherford backscattering spectrometry in a channeling direction (RBS/C) is a powerful tool for analysis of the fraction of atoms displaced from their lattice positions. However, it is in many cases not straightforward to determine the actual defect structure underlying the RBS/C signal. To gain insight into RBS/C signals from arbitrarily complex defective atomic structures, we develop here a method for simulating the RBS/C spectrum from a set of arbitrary read-in atom coordinates (obtained, e.g., from molecular dynamics simulations). We apply the developed method to simulate the RBS/C signals from Ni crystal structures containing randomly displaced atoms, Frenkel point defects, and extended defects, respectively. The RBS/C simulations show that, even for the same number of atoms in defects, the RBS/C signal is much stronger for the extended defects. Comparison with experimental results shows that the disorder profile obtained from RBS/C signals in ion-irradiated Ni is due to a small fraction of extended defects rather than a large number of individual random atoms.

  12. Simulation of Rutherford backscattering spectrometry from arbitrary atom structures

    NASA Astrophysics Data System (ADS)

    Zhang, S.; Nordlund, K.; Djurabekova, F.; Zhang, Y.; Velisa, G.; Wang, T. S.

    2016-10-01

Rutherford backscattering spectrometry in a channeling direction (RBS/C) is a powerful tool for analysis of the fraction of atoms displaced from their lattice positions. However, it is in many cases not straightforward to determine the actual defect structure underlying the RBS/C signal. To gain insight into RBS/C signals from arbitrarily complex defective atomic structures, we develop here a method for simulating the RBS/C spectrum from a set of arbitrary read-in atom coordinates (obtained, e.g., from molecular dynamics simulations). We apply the developed method to simulate the RBS/C signals from Ni crystal structures containing randomly displaced atoms, Frenkel point defects, and extended defects, respectively. The RBS/C simulations show that, even for the same number of atoms in defects, the RBS/C signal is much stronger for the extended defects. Comparison with experimental results shows that the disorder profile obtained from RBS/C signals in ion-irradiated Ni is due to a small fraction of extended defects rather than a large number of individual random atoms.

  13. The cost of conservative synchronization in parallel discrete event simulations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.

    1990-01-01

The performance of a synchronous conservative parallel discrete-event simulation protocol is analyzed. The class of simulation models considered is oriented around a physical domain and possesses a limited ability to predict future behavior. A stochastic model is used to show that, as the volume of simulation activity in the model increases relative to a fixed architecture, the complexity of the average per-event overhead due to synchronization, event list manipulation, lookahead calculations, and processor idle time approaches the complexity of the average per-event overhead of a serial simulation. The method is therefore within a constant factor of optimal. The analysis demonstrates that on large problems--those for which parallel processing is ideally suited--there is often enough parallel workload so that processors are not usually idle. The viability of the method is also demonstrated empirically, showing how good performance is achieved on large problems using a thirty-two node Intel iPSC/2 distributed memory multiprocessor.
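A window-based synchronous conservative protocol can be sketched in a few lines: each round, a global reduction finds the minimum pending timestamp, the window bound is that minimum plus the lookahead, and every logical process safely executes its events below the bound. This is a generic illustration of the protocol family analyzed, not the paper's exact algorithm or cost model, and it omits newly scheduled events.

```python
def synchronous_conservative(event_lists, lookahead):
    """Process pre-scheduled event timestamps, one list per logical
    process (LP), in synchronized windows. Returns the processed
    (timestamp, lp) pairs and the number of synchronization rounds --
    the rounds are where the per-event synchronization overhead arises."""
    pending = [sorted(ev) for ev in event_lists]
    processed, rounds = [], 0
    while any(pending):
        rounds += 1
        bound = min(ev[0] for ev in pending if ev) + lookahead  # global reduction
        batch = []
        for lp, ev in enumerate(pending):
            while ev and ev[0] < bound:          # all events below bound are safe
                batch.append((ev.pop(0), lp))
        processed.extend(sorted(batch))          # window events are independent
    return processed, rounds

processed, rounds = synchronous_conservative(
    [[0.1, 0.9, 2.0], [0.4, 1.1], [0.2, 3.0]], lookahead=0.5)
```

As the event volume per round grows, the fixed per-round synchronization cost is amortized over more events, which is the intuition behind the near-serial per-event overhead result.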

  14. Pediatric Disaster Triage: Multiple Simulation Curriculum Improves Prehospital Care Providers' Assessment Skills.

    PubMed

    Cicero, Mark Xavier; Whitfill, Travis; Overly, Frank; Baird, Janette; Walsh, Barbara; Yarzebski, Jorge; Riera, Antonio; Adelgais, Kathleen; Meckler, Garth D; Baum, Carl; Cone, David Christopher; Auerbach, Marc

    2017-01-01

Paramedics and emergency medical technicians (EMTs) triage pediatric disaster victims infrequently. The objective of this study was to measure the effect of a multiple-patient, multiple-simulation curriculum on the accuracy of pediatric disaster triage (PDT). Paramedics, paramedic students, and EMTs from three sites were enrolled. Triage accuracy was measured three times (Time 0, Time 1 [two weeks later], and Time 2 [6 months later]) during a disaster simulation, in which high- and low-fidelity manikins and actors portrayed 10 victims. Accuracy was determined by participant triage decision concordance with the predetermined expected triage level (RED [Immediate], YELLOW [Delayed], GREEN [Ambulatory], BLACK [Deceased]) for each victim. Between Time 0 and Time 1, participants completed an interactive online module, and after each simulation there was an individual debriefing. Associations between participant level of training, years of experience, and enrollment site were determined, as were instances of the most dangerous mistriage, in which RED and YELLOW victims were triaged BLACK. The study enrolled 331 participants, and the analysis included 261 (78.9%) participants who completed the study: 123 from the Connecticut site, 83 from Rhode Island, and 55 from Massachusetts. Triage accuracy improved significantly from Time 0 to Time 1, after the educational interventions (first simulation with debriefing, and an interactive online module), with a median 10% overall improvement (p < 0.001). Subgroup analyses showed that, between Time 0 and Time 1, paramedics and paramedic students improved more than EMTs (p = 0.002). Analysis of triage accuracy showed the greatest improvement in overall accuracy for YELLOW triage patients (Time 0 50% accurate, Time 1 100%), followed by RED patients (Time 0 80%, Time 1 100%). There was no significant difference in accuracy between Time 1 and Time 2 (p = 0.073).
This study shows that the multiple-victim, multiple-simulation curriculum yields a durable 10% improvement in simulated triage accuracy. Future iterations of the curriculum can target greater improvements in EMT triage accuracy.
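    Triage accuracy in this design reduces to concordance with the predetermined tag, plus a separate count of the most dangerous mistriage (a RED or YELLOW victim tagged BLACK). A minimal sketch of that scoring, with made-up decisions rather than study data:

    ```python
    def triage_accuracy(decisions, expected):
        """Accuracy = fraction of triage decisions matching the predetermined
        expected tag; also count the most dangerous mistriage, in which a
        RED or YELLOW victim is tagged BLACK.  Illustrative only, not the
        study's actual scoring code."""
        correct = sum(d == e for d, e in zip(decisions, expected))
        dangerous = sum(d == "BLACK" and e in ("RED", "YELLOW")
                        for d, e in zip(decisions, expected))
        return correct / len(expected), dangerous

    acc, danger = triage_accuracy(
        ["RED", "YELLOW", "GREEN", "BLACK", "GREEN", "BLACK", "RED", "YELLOW", "GREEN", "RED"],
        ["RED", "YELLOW", "GREEN", "BLACK", "YELLOW", "RED", "RED", "YELLOW", "GREEN", "RED"],
    )
    print(acc, danger)  # 0.8 accuracy, 1 dangerous mistriage
    ```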

  15. Estimation of real-time runway surface contamination using flight data recorder parameters

    NASA Astrophysics Data System (ADS)

    Curry, Donovan

Within this research effort, the development of an analytic process for friction coefficient estimation is presented. Under static equilibrium, the sum of forces and moments acting on the aircraft, in the aircraft body coordinate system, while on the ground at any instant is equal to zero. Under this premise the longitudinal, lateral and normal forces due to landing are calculated, along with the individual deceleration components present as an aircraft comes to rest during ground roll. In order to validate this hypothesis, a six-degree-of-freedom aircraft model was created and landing tests were simulated on different surfaces. The simulated aircraft model includes a high-fidelity aerodynamic model, thrust model, landing gear model, friction model and antiskid model. Three main surfaces were defined in the friction model: dry, wet and snow/ice. Only the parameters recorded by an FDR are used directly from the aircraft model; all others are estimated or known a priori. The estimation of unknown parameters is also presented in this research effort. With all needed parameters, a comparison and validation with simulated and estimated data, under different runway conditions, is performed. Finally, this report presents results of a sensitivity analysis in order to provide a measure of reliability of the analytic estimation process. Linear and non-linear sensitivity analyses have been performed in order to quantify the level of uncertainty implicit in modeling estimated parameters and how they can affect the calculation of the instantaneous coefficient of friction. Using the approach of force and moment equilibrium about the CG at landing to reconstruct the instantaneous coefficient of friction appears to give a reasonably accurate estimate when compared to the simulated friction coefficient. This remains true when the FDR and estimated parameters are subjected to white noise and when crosswind is introduced to the simulation. 
After the linear analysis, the results show that the minimum frequency at which the algorithm still provides moderately accurate data is 2 Hz. In addition, the linear analysis shows that, with estimated parameters increased and decreased by up to 25% at random, high-priority parameters have to be accurate to within at least +/-5% to produce less than a 1% change in the average coefficient of friction. Non-linear analysis results show that the algorithm can be considered reasonably accurate for all simulated cases when inaccuracies in the estimated parameters vary randomly and simultaneously by up to +/-27%. In the worst case, the maximum percentage change in the average coefficient of friction is less than 10% for all surfaces.
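    The force-balance idea can be sketched as follows. The variable names and the simplified longitudinal balance (idle thrust neglected, no moment terms) are illustrative assumptions for this sketch, not the report's actual six-degree-of-freedom equations:

    ```python
    def instantaneous_mu(mass_kg, decel_ms2, drag_n, lift_n, g=9.81):
        """Instantaneous friction coefficient during ground roll under static
        equilibrium: braking friction is the part of the measured longitudinal
        deceleration not explained by aerodynamic drag, and the normal force
        is weight minus lift.  Simplified sketch only."""
        friction_force = mass_kg * decel_ms2 - drag_n   # longitudinal balance
        normal_force = mass_kg * g - lift_n             # load carried by the gear
        return friction_force / normal_force

    # illustrative mid-roll numbers for a transport-size aircraft
    print(round(instantaneous_mu(50000.0, 2.0, 5000.0, 100000.0), 3))
    ```

    A full implementation would add thrust, runway slope, and the lateral/moment balance the report describes, and evaluate this at every FDR sample to obtain the coefficient as a function of time.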

  16. Development and validation of the Simulation Learning Effectiveness Inventory.

    PubMed

    Chen, Shiah-Lian; Huang, Tsai-Wei; Liao, I-Chen; Liu, Chienchi

    2015-10-01

To develop and psychometrically test the Simulation Learning Effectiveness Inventory. High-fidelity simulation helps students develop clinical skills and competencies, yet reliable instruments measuring learning outcomes are scant. A descriptive cross-sectional survey was used to validate the psychometric properties of the instrument, which measures students' perception of simulation learning effectiveness. A purposive sample of 505 nursing students who had taken simulation courses was recruited from the department of nursing of a university in central Taiwan from January to June 2010. The study was conducted in two phases. In Phase I, question items were developed based on a literature review and the preliminary psychometric properties of the inventory were evaluated using exploratory factor analysis. Phase II was conducted to evaluate the reliability and validity of the finalized inventory using confirmatory factor analysis. The results of exploratory and confirmatory factor analyses revealed that the instrument was composed of seven factors, named course arrangement, equipment resource, debriefing, clinical ability, problem-solving, confidence and collaboration. A further second-order analysis showed comparable fit between a three-factor second-order model (preparation, process and outcome) and the seven-factor first-order model. Internal consistency was supported by adequate Cronbach's alphas and composite reliability. Convergent and discriminant validity were also supported by confirmatory factor analysis. The study provides evidence that the Simulation Learning Effectiveness Inventory is reliable and valid for measuring students' perception of learning effectiveness. The instrument is helpful in building evidence-based knowledge of the effect of simulation teaching on students' learning outcomes. © 2015 John Wiley & Sons Ltd.

  17. Computational analysis for selectivity of histone deacetylase inhibitor by replica-exchange umbrella sampling molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Tsukamoto, Shuichiro; Sakae, Yoshitake; Itoh, Yukihiro; Suzuki, Takayoshi; Okamoto, Yuko

    2018-03-01

We performed protein-ligand docking simulations with the ligand T247, which has been reported as a selective inhibitor of the histone deacetylase HDAC3, using the replica-exchange umbrella sampling method in order to estimate the free energy profiles along the ligand docking pathways of the HDAC3-T247 and HDAC2-T247 systems. The simulation results showed that the docked state of the HDAC3-T247 system is more stable than that of the HDAC2-T247 system, although the amino-acid sequences and structures of HDAC3 and HDAC2 are very similar. By comparing structures obtained from the simulations of both systems, we found a difference in the structures of the hydrophobic residues at the entrance of the catalytic site. Moreover, we performed conventional molecular dynamics simulations of the HDAC3 and HDAC2 systems without T247, and the results showed the same difference in the hydrophobic structures. Therefore, we consider that this hydrophobic structure contributes to the stabilization of the docked state of the HDAC3-T247 system. Furthermore, simulation results for a mutated HDAC2 system show that Tyr209, one of the hydrophobic residues in HDAC2, plays a key role in this instability.

  18. Robust Mediation Analysis Based on Median Regression

    PubMed Central

    Yuan, Ying; MacKinnon, David P.

    2014-01-01

    Mediation analysis has many applications in psychology and the social sciences. The most prevalent methods typically assume that the error distribution is normal and homoscedastic. However, this assumption may rarely be met in practice, which can affect the validity of the mediation analysis. To address this problem, we propose robust mediation analysis based on median regression. Our approach is robust to various departures from the assumption of homoscedasticity and normality, including heavy-tailed, skewed, contaminated, and heteroscedastic distributions. Simulation studies show that under these circumstances, the proposed method is more efficient and powerful than standard mediation analysis. We further extend the proposed robust method to multilevel mediation analysis, and demonstrate through simulation studies that the new approach outperforms the standard multilevel mediation analysis. We illustrate the proposed method using data from a program designed to increase reemployment and enhance mental health of job seekers. PMID:24079925
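    A minimal sketch of the idea: fit both mediation paths by minimizing absolute (rather than squared) residuals, which is 0.5-quantile (median) regression, and multiply the path coefficients. The synthetic data-generating model and the LAD-via-Nelder-Mead fit below are illustrative assumptions, not the authors' implementation:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def median_regression(y, X):
        """Median (0.5-quantile) regression: choose coefficients that
        minimize the sum of absolute residuals instead of squared ones."""
        A = np.column_stack([np.ones(len(y)), X])
        beta0 = np.linalg.lstsq(A, y, rcond=None)[0]   # OLS starting point
        loss = lambda b: np.abs(y - A @ b).sum()
        return minimize(loss, beta0, method="Nelder-Mead").x

    # Synthetic mediation data with heavy-tailed (Laplace) errors, the kind
    # of departure from normality the method is designed to tolerate.
    rng = np.random.default_rng(0)
    n = 500
    x = rng.normal(size=n)
    m = 0.5 * x + rng.laplace(size=n)            # path a: X -> M (true a = 0.5)
    y = 0.7 * m + 0.2 * x + rng.laplace(size=n)  # path b: M -> Y | X (true b = 0.7)

    a = median_regression(m, x)[1]
    b = median_regression(y, np.column_stack([m, x]))[1]
    print("estimated mediated effect a*b:", round(a * b, 2))  # true value 0.35
    ```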

  19. Simulation for Supporting Scale-Up of a Fluidized Bed Reactor for Advanced Water Oxidation

    PubMed Central

    Abdul Raman, Abdul Aziz; Daud, Wan Mohd Ashri Wan

    2014-01-01

Simulation of a fluidized bed reactor (FBR) was performed for treating wastewater using the Fenton reaction, an advanced oxidation process (AOP). The simulation was performed to determine the characteristics of FBR performance, the concentration profile of the contaminants, and various prominent hydrodynamic properties (e.g., Reynolds number, velocity, and pressure) in the reactor. The simulation was implemented for a 2.8 L working volume using hydrodynamic correlations, the continuity equation, and simplified kinetic information for phenol degradation as a model. The simulation shows that, by using Fe3+ and Fe2+ mixtures as catalyst, TOC degradation of up to 45% was achieved for a contaminant range of 40–90 mg/L within 60 min. The concentration profiles and hydrodynamic characteristics were also generated. A subsequent scale-up study was conducted using the similitude method; the analysis shows that the models developed are applicable up to a 10 L working volume. The study demonstrates that, using appropriate modeling and simulation, data can be predicted for designing and operating an FBR for wastewater treatment. PMID:25309949

  20. Effect of distributive mass of spring on power flow in engineering test

    NASA Astrophysics Data System (ADS)

    Sheng, Meiping; Wang, Ting; Wang, Minqing; Wang, Xiao; Zhao, Xuan

    2018-06-01

The mass of a spring is usually neglected in theoretical and simulation analyses, but it can be significant in practical engineering. This paper is concerned with the distributive mass of a steel spring used as an isolator to simulate the isolation performance of a water pipe in a heating system. A theoretical derivation of the effect of the steel spring's distributive mass on vibration is presented, and multiple eigenfrequencies are obtained, showing that distributive mass results in extra modes and complex impedance properties. Furthermore, numerical simulation visually shows several anti-resonances of the steel spring in the corresponding impedance and power flow curves. When anti-resonances emerge, the spring stores large amounts of energy, which may cause damage and unexpected consequences in practical engineering and needs to be avoided. Finally, experimental tests are conducted, and the results are consistent with the simulation of the spring with distributive mass.

  1. Application of the denitrification-decomposition model to predict carbon dioxide emissions under alternative straw retention methods.

    PubMed

    Chen, Can; Chen, Deli; Pan, Jianjun; Lam, Shu Kee

    2013-01-01

Straw retention has been shown to reduce carbon dioxide (CO2) emission from agricultural soils, but it remains a challenge for models to effectively predict CO2 emission fluxes under different straw retention methods. We used maize season data from the Griffith region, Australia, to test whether the denitrification-decomposition (DNDC) model could simulate annual CO2 emission. We also identified driving factors of CO2 emission by correlation analysis and path analysis. We show that the DNDC model was able to simulate CO2 emission under alternative straw retention scenarios. The correlation coefficients between simulated and observed daily values for the straw burn and straw incorporation treatments were 0.74 and 0.82, respectively, in the straw retention period and 0.72 and 0.83, respectively, in the crop growth period. The results also show that simulated annual CO2 emissions for straw burn and straw incorporation were 3.45 t C ha(-1) y(-1) and 2.13 t C ha(-1) y(-1), respectively. In addition, the DNDC model was found to be more suitable for simulating CO2 emission fluxes under straw incorporation. Finally, a standard multiple regression of CO2 emissions on the candidate factors found that soil mean temperature (SMT), daily mean temperature (T mean), and water-filled pore space (WFPS) were significant.
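    The agreement statistic reported above is a plain Pearson correlation between the simulated and observed daily series. A minimal sketch with hypothetical flux values (not the study's data):

    ```python
    import numpy as np

    def pearson_r(simulated, observed):
        """Pearson correlation between simulated and observed daily values,
        the model-agreement statistic the abstract reports (r = 0.72-0.83)."""
        s = np.asarray(simulated, dtype=float)
        o = np.asarray(observed, dtype=float)
        return np.corrcoef(s, o)[0, 1]

    # hypothetical daily CO2 flux series, for illustration only
    obs = np.array([1.2, 0.9, 1.5, 2.1, 1.8, 2.4, 2.0])
    sim = np.array([1.0, 1.1, 1.4, 1.9, 2.0, 2.2, 2.1])
    print(round(pearson_r(sim, obs), 2))  # prints 0.93
    ```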

  2. Application of the Denitrification-Decomposition Model to Predict Carbon Dioxide Emissions under Alternative Straw Retention Methods

    PubMed Central

    Chen, Deli; Pan, Jianjun; Lam, Shu Kee

    2013-01-01

Straw retention has been shown to reduce carbon dioxide (CO2) emission from agricultural soils, but it remains a challenge for models to effectively predict CO2 emission fluxes under different straw retention methods. We used maize season data from the Griffith region, Australia, to test whether the denitrification-decomposition (DNDC) model could simulate annual CO2 emission. We also identified driving factors of CO2 emission by correlation analysis and path analysis. We show that the DNDC model was able to simulate CO2 emission under alternative straw retention scenarios. The correlation coefficients between simulated and observed daily values for the straw burn and straw incorporation treatments were 0.74 and 0.82, respectively, in the straw retention period and 0.72 and 0.83, respectively, in the crop growth period. The results also show that simulated annual CO2 emissions for straw burn and straw incorporation were 3.45 t C ha−1 y−1 and 2.13 t C ha−1 y−1, respectively. In addition, the DNDC model was found to be more suitable for simulating CO2 emission fluxes under straw incorporation. Finally, a standard multiple regression of CO2 emissions on the candidate factors found that soil mean temperature (SMT), daily mean temperature (T mean), and water-filled pore space (WFPS) were significant. PMID:24453915

  3. The Impact of High-Resolution Sea Surface Temperatures on the Simulated Nocturnal Florida Marine Boundary Layer

    NASA Technical Reports Server (NTRS)

    LaCasse, Katherine M.; Splitt, Michael E.; Lazarus, Steven M.; Lapenta, William M.

    2008-01-01

High- and low-resolution sea surface temperature (SST) analysis products are used to initialize the Weather Research and Forecasting (WRF) Model for May 2004 for short-term forecasts over Florida and surrounding waters. Initial and boundary conditions for the simulations were provided by a combination of observations, large-scale model output, and analysis products. The impact of using a 1-km Moderate Resolution Imaging Spectroradiometer (MODIS) SST composite on the subsequent evolution of the marine atmospheric boundary layer (MABL) is assessed through simulation comparisons and limited validation. Model results are presented for individual simulations, as well as for aggregates of easterly- and westerly-dominated low-level flows. The simulation comparisons show that the use of MODIS SST composites results in enhanced convergence zones, earlier and more intense horizontal convective rolls, and an increase in precipitation as well as a change in precipitation location. Validation of 10-m winds with buoys shows a slight improvement in wind speed. The most significant results of this study are that 1) vertical wind stress divergence and pressure gradient accelerations across the Florida Current region vary in importance as a function of flow direction and stability and 2) the warmer Florida Current in the MODIS product transports heat vertically and downwind of this heat source, modifying the thermal structure and the MABL wind field primarily through pressure gradient adjustments.

  4. Simulation and Analysis of One-time Forming Process of Automobile Steering Ball Head

    NASA Astrophysics Data System (ADS)

    Shi, Peicheng; Zhang, Xujun; Xu, Zengwei; Zhang, Rongyun

    2018-03-01

To address problems such as large machining allowance, low production efficiency and material waste during die forging of the ball pin, the cold extrusion process of the ball head was studied and the forming process was simulated using the finite element analysis software DEFORM-3D. Through analysis of the equivalent stress and strain, the velocity vector field and the load-displacement curve, the flow behavior of the metal during the cold extrusion of the ball pin was clarified, and possible defects during forming were predicted. The results showed that this process can solve the forming problem of the ball pin and provide a theoretical basis for actual production.

  5. Global simulation of edge pedestal micro-instabilities

    NASA Astrophysics Data System (ADS)

    Wan, Weigang; Parker, Scott; Chen, Yang

    2011-10-01

We study micro turbulence of the tokamak edge pedestal with global gyrokinetic particle simulations. The simulation code GEM is an electromagnetic δf code. Two sets of DIII-D experimental profiles, shot #131997 and shot #136051, are used. The dominant instabilities appear to be two kinds of modes, both propagating in the electron diamagnetic direction, with comparable linear growth rates. The low-n mode is in the Alfven frequency range and driven by density and ion temperature gradients. The high-n mode is driven by the electron temperature gradient and has a low real frequency. A β scan shows that the low-n mode is electromagnetic. Frequency analysis shows that the high-n mode is sometimes mixed with an ion instability. The experimental radial electric field is applied and its effects studied. We will also show some preliminary nonlinear results. We thank R. Groebner, P. Snyder and Y. Zheng for providing experimental profiles and helpful discussions.

  6. BWR station blackout: A RISMC analysis using RAVEN and RELAP5-3D

    DOE PAGES

    Mandelli, D.; Smith, C.; Riley, T.; ...

    2016-01-01

The existing fleet of nuclear power plants is in the process of extending its lifetime and increasing the power generated from these plants via power uprates and improved operations. In order to evaluate the impact of these factors on the safety of the plant, the Risk-Informed Safety Margin Characterization (RISMC) project aims to provide insights to decision makers through a series of simulations of the plant dynamics for different initial conditions and accident scenarios. This paper presents a case study in order to show the capabilities of the RISMC methodology to assess the impact of a power uprate on a Boiling Water Reactor system during a Station Black-Out accident scenario. We employ a system simulator code, RELAP5-3D, coupled with RAVEN, which performs the stochastic analysis. Our analysis is performed by: 1) sampling values of a set of parameters from the uncertainty space of interest, 2) simulating the system behavior for that specific set of parameter values and 3) analyzing the outcomes from the set of simulation runs.
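    The sample-simulate-analyze loop can be sketched as a plain Monte Carlo driver. The parameter names, ranges, and toy failure criterion below are illustrative stand-ins for the actual RELAP5-3D/RAVEN coupling:

    ```python
    import random

    def sample_parameters(rng):
        """Step 1: draw one point from the uncertainty space.  Names and
        ranges are hypothetical, not the study's actual inputs."""
        return {"battery_life_h": rng.uniform(4, 8),
                "diesel_recovery_h": rng.uniform(2, 12)}

    def simulate(params):
        """Step 2: stand-in for a plant-dynamics run; the toy criterion is
        that core damage occurs if AC power is not recovered before the
        station batteries are exhausted."""
        return params["diesel_recovery_h"] > params["battery_life_h"]

    # Step 3: aggregate the outcomes of many runs into a risk estimate.
    rng = random.Random(42)
    runs = [simulate(sample_parameters(rng)) for _ in range(10000)]
    p_fail = sum(runs) / len(runs)
    print("estimated failure probability:", p_fail)
    ```

    With these toy uniform distributions the true failure probability is 0.6; a real RISMC study replaces `simulate` with a full system-code run, which is why efficient sampling matters.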

  7. Modelling nitrous oxide emissions from mown-grass and grain-cropping systems: Testing and sensitivity analysis of DailyDayCent using high frequency measurements.

    PubMed

    Senapati, Nimai; Chabbi, Abad; Giostri, André Faé; Yeluripati, Jagadeesh B; Smith, Pete

    2016-12-01

The DailyDayCent biogeochemical model was used to simulate nitrous oxide (N2O) emissions from two contrasting agro-ecosystems, viz. a mown-grassland and a grain-cropping system in France. Model performance was tested using high-frequency measurements over three years; additionally, a local sensitivity analysis was performed. Annual N2O emissions of 1.97 and 1.24 kg N ha(-1) year(-1) were simulated from mown-grassland and grain-cropland, respectively. Measured and simulated water-filled pore space (r = 0.86, ME = -2.5%) and soil temperature (r = 0.96, ME = -0.63 °C) at 10 cm soil depth matched well in mown-grassland. The model predicted cumulative hay and crop production effectively. The model simulated soil mineral nitrogen (N) concentrations, particularly ammonium (NH4+), reasonably, but significantly underestimated soil nitrate (NO3-) concentration under both systems. In general, the model effectively simulated the dynamics and magnitude of daily N2O flux over the whole experimental period in grain-cropland (r = 0.16, ME = -0.81 g N ha(-1) day(-1)), with reasonable agreement between measured and modelled N2O fluxes for the mown-grassland (r = 0.63, ME = -0.65 g N ha(-1) day(-1)). Our results indicate that DailyDayCent has potential for use as a tool for predicting overall N2O emissions in the study region. However, in-depth analysis shows some systematic discrepancies between measured and simulated N2O fluxes on a daily basis. The current exercise suggests that DailyDayCent may need improvement, particularly in the sub-module responsible for N transformations, to better simulate soil mineral N, especially soil NO3- concentration, and N2O flux on a daily basis. The sensitivity analysis shows that many factors, such as climate change, N-fertilizer use, input uncertainty and parameter values, could influence the simulation of N2O emissions. Sensitivity estimation also helped to identify critical parameters, which need careful estimation or site-specific calibration for successful modelling of N2O emissions in the study region. Copyright © 2016 Elsevier B.V. All rights reserved.
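    A one-at-a-time local sensitivity scan of the kind described can be sketched as follows. The toy emission model and parameter names are assumptions for illustration, not DailyDayCent internals:

    ```python
    def local_sensitivity(model, params, delta=0.1):
        """One-at-a-time local sensitivity: perturb each parameter by
        +/-delta (fractionally), hold the others at their base values,
        and report the normalized change in the model output (an
        elasticity-like measure)."""
        base = model(params)
        sens = {}
        for name, value in params.items():
            hi = dict(params, **{name: value * (1 + delta)})
            lo = dict(params, **{name: value * (1 - delta)})
            sens[name] = (model(hi) - model(lo)) / (2 * delta * base)
        return sens

    # Toy emission model: flux grows with fertilizer N rate and temperature.
    toy = lambda p: p["n_rate"] ** 0.8 * 1.5 ** ((p["temp"] - 10) / 10)
    print(local_sensitivity(toy, {"n_rate": 100.0, "temp": 15.0}))
    ```

    Parameters whose sensitivity index is large are the ones that, as the abstract notes, need careful estimation or site-specific calibration.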

  8. Application of Foldcore Sandwich Structures in Helicopter Subfloor Energy Absorption Structure

    NASA Astrophysics Data System (ADS)

    Zhou, H. Z.; Wang, Z. J.

    2017-10-01

The intersection element is an important part of the helicopter subfloor structure. A numerical simulation model of the intersection element is established and a crush simulation is conducted; the simulation results agree well with the experimental results. In order to improve the buffering and energy-absorbing capacity, the intersection element is redesigned: the skin and the floor in the intersection element are replaced with foldcore sandwich structures. The new intersection element is studied using the same simulation method as the typical intersection element. The analysis shows that foldcore can improve the buffering capacity and the energy-absorbing capacity, and reduce the structure mass.

  9. Direct folding simulation of a long helix in explicit water

    NASA Astrophysics Data System (ADS)

    Gao, Ya; Lu, Xiaoliang; Duan, Lili; Zhang, Dawei; Mei, Ye; Zhang, John Z. H.

    2013-05-01

A recently proposed Polarizable Hydrogen Bond (PHB) method has been employed to simulate the folding of a 53-amino-acid helix (PDB ID 2KHK) in explicit water. Under PHB simulation, starting from a fully extended structure, the peptide folds into the native state, as confirmed by the measured time evolutions of the radius of gyration, root mean square deviation (RMSD), and native hydrogen bonds. Free energy and cluster analyses show that the folded helix is thermally stable under the PHB model. Comparison of simulation results under PHB and a standard nonpolarizable force field demonstrates that polarization is critical for stable folding of this long α-helix.
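    One of the folding metrics tracked above, the radius of gyration, is straightforward to compute from atomic coordinates. A minimal sketch of the mass-weighted form:

    ```python
    import numpy as np

    def radius_of_gyration(coords, masses):
        """Mass-weighted radius of gyration of a set of atoms:
        sqrt of the mass-weighted mean squared distance from the
        center of mass.  coords has shape (n_atoms, 3)."""
        com = np.average(coords, axis=0, weights=masses)
        sq_dist = np.sum((coords - com) ** 2, axis=1)
        return np.sqrt(np.average(sq_dist, weights=masses))

    # two equal-mass atoms 2 units apart: Rg = 1 by symmetry
    coords = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
    masses = np.array([1.0, 1.0])
    print(radius_of_gyration(coords, masses))  # prints 1.0
    ```

    In a folding trajectory this quantity is evaluated frame by frame; a collapse from the extended-chain value toward the compact native value is the signature the abstract describes.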

  10. Team Communication Influence on Procedure Performance: Findings From Interprofessional Simulations with Nursing and Medical Students.

    PubMed

    Reising, Deanna L; Carr, Douglas E; Gindling, Sally; Barnes, Roxie; Garletts, Derrick; Ozdogan, Zulfukar

    Interprofessional team performance is believed to be dependent on the development of effective team communication skills. Yet, little evidence exists in undergraduate nursing programs on whether team communication skills affect team performance. A secondary analysis of a larger study on interprofessional student teams in simulations was conducted to determine if there is a relationship between team communication and team procedure performance. The results showed a positive, significant correlation between interprofessional team communication ratings and procedure accuracy in the simulation. Interprofessional team training in communication skills for nursing and medical students improves the procedure accuracy in a simulated setting.

  11. HVAC modifications and computerized energy analysis for the Operations Support Building at the Mars Deep Space Station at Goldstone

    NASA Technical Reports Server (NTRS)

    Halperin, A.; Stelzmuller, P.

    1986-01-01

The key heating, ventilation, and air-conditioning (HVAC) modifications implemented at the Mars Deep Space Station's Operations Support Building at Goldstone, operated by the Jet Propulsion Laboratory (JPL), in order to reduce energy consumption and decrease operating costs are described. An energy analysis comparing the computer-simulated model of the building with actual meter data is presented. The measured performance data showed cumulative energy savings of about 21% for the period 1979 to 1981. The deviation between simulated data and measured performance data was only about 3%.

  12. Consistent Principal Component Modes from Molecular Dynamics Simulations of Proteins.

    PubMed

    Cossio-Pérez, Rodrigo; Palma, Juliana; Pierdominici-Sottile, Gustavo

    2017-04-24

    Principal component analysis is a technique widely used for studying the movements of proteins using data collected from molecular dynamics simulations. In spite of its extensive use, the technique has a serious drawback: equivalent simulations do not afford the same PC-modes. In this article, we show that concatenating equivalent trajectories and calculating the PC-modes from the concatenated one significantly enhances the reproducibility of the results. Moreover, the consistency of the modes can be systematically improved by adding more individual trajectories to the concatenated one.
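    The recipe (concatenate the equivalent trajectories, then diagonalize the coordinate covariance once) can be sketched as below; the synthetic runs stand in for aligned Cartesian coordinates from equivalent MD simulations:

    ```python
    import numpy as np

    def pc_modes(trajectory):
        """PC modes of a trajectory with shape (n_frames, 3N coordinates):
        eigenvectors of the coordinate covariance matrix, sorted by
        decreasing variance."""
        centered = trajectory - trajectory.mean(axis=0)
        cov = centered.T @ centered / (len(trajectory) - 1)
        evals, evecs = np.linalg.eigh(cov)       # ascending order
        return evecs[:, ::-1], evals[::-1]       # largest-variance mode first

    # Four "equivalent" synthetic runs with the same underlying variance
    # structure; per the article, concatenate first, then diagonalize once.
    rng = np.random.default_rng(1)
    runs = [rng.normal(size=(200, 6)) @ np.diag([3.0, 2.0, 1.0, 0.5, 0.3, 0.1])
            for _ in range(4)]
    modes, variances = pc_modes(np.concatenate(runs))
    ```

    Computing modes per run and comparing them (e.g. via subspace overlap) against the concatenated-trajectory modes reproduces the reproducibility gain the article reports.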

  13. Simulation of Shock-Shock Interaction in Parsec-Scale Jets

    NASA Astrophysics Data System (ADS)

    Fromm, Christian M.; Perucho, Manel; Ros, Eduardo; Mimica, Petar; Savolainen, Tuomas; Lobanov, Andrei P.; Zensus, J. Anton

    The analysis of the radio light curves of the blazar CTA 102 during its 2006 flare revealed a possible interaction between a standing shock wave and a traveling one. In order to better understand this highly non-linear process, we used a relativistic hydrodynamic code to simulate the high energy interaction and its related emission. The calculated synchrotron emission from these simulations showed an increase in turnover flux density, Sm, and turnover frequency, νm, during the interaction and decrease to its initial values after the passage of the traveling shock wave.

  14. System Dynamics Modeling for Supply Chain Information Sharing

    NASA Astrophysics Data System (ADS)

    Feng, Yang

In this paper, we use the method of system dynamics to model supply chain information sharing. First, we determine the model boundaries, establish a system dynamics model of the supply chain before information sharing, analyze the model's simulation results under different parameter changes and suggest improvements. Then, we establish a system dynamics model of the supply chain with information sharing and compare and analyze the two models' simulation results, to show the importance of information sharing in supply chain management. We hope these simulations will provide scientific support for enterprise decision-making.

  15. RCWA and FDTD modeling of light emission from internally structured OLEDs.

    PubMed

    Callens, Michiel Koen; Marsman, Herman; Penninck, Lieven; Peeters, Patrick; de Groot, Harry; ter Meulen, Jan Matthijs; Neyts, Kristiaan

    2014-05-05

    We report on the fabrication and simulation of a green OLED with an Internal Light Extraction (ILE) layer. The optical behavior of these devices is simulated using both Rigorous Coupled Wave Analysis (RCWA) and Finite Difference Time-Domain (FDTD) methods. Results obtained using these two different techniques show excellent agreement and predict the experimental results with good precision. By verifying the validity of both simulation methods on the internal light extraction structure we pave the way to optimization of ILE layers using either of these methods.

  16. Flight testing and simulation of an F-15 airplane using throttles for flight control

    NASA Technical Reports Server (NTRS)

    Burcham, Frank W., Jr.; Maine, Trindel; Wolf, Thomas

    1992-01-01

    Flight tests and simulation studies using the throttles of an F-15 airplane for emergency flight control have been conducted at the NASA Dryden Flight Research Facility. The airplane and the simulation are capable of extended up-and-away flight, using only throttles for flight path control. Initial simulation results showed that runway landings using manual throttles-only control were difficult, but possible with practice. Manual approaches flown in the airplane were much more difficult, indicating a significant discrepancy between flight and simulation. Analysis of flight data and development of improved simulation models that resolve the discrepancy are discussed. An augmented throttle-only control system that controls bank angle and flight path with appropriate feedback parameters has also been developed, evaluated in simulations, and is planned for flight in the F-15.

  17. The 48-Pictures Test: a two-alternative forced-choice recognition test for the detection of malingering.

    PubMed

    Chouinard, M J; Rouleau, I

    1997-11-01

We tested the validity of the 48-Pictures Test, a two-alternative forced-choice recognition test, in detecting exaggerated memory impairments. This test maximizes subjective difficulty through a large number of stimuli while presenting minimal objective difficulty. We compared 17 suspected malingerers to 39 patients with memory impairments (6 amnesic, 15 with frontal lobe dysfunction, 18 other etiologies) and 17 normal adults instructed to simulate malingering on three memory tests: the 48-Pictures Test, the Rey Auditory Verbal Learning Test (RAVLT), and the Rey Complex Figure Test (RCFT). On the 48-Pictures Test, the clinical groups showed good recognition performance (amnesics: 85%; frontal dysfunction: 94%; other memory impairments: 97%), whereas the two simulator groups performed poorly (suspected malingerers: 62% correct; volunteer simulators: 68% correct). The two other tests did not discriminate well between the clinical groups and the simulator groups, except on 2 measures: the simulator groups tended to show a performance decrement from the last recall trial to immediate recognition of the RAVLT, and they performed better than the clinical groups on the immediate recall of the RCFT. A discriminant analysis with the latter 2 measures and the 48-Pictures Test correctly classified 96% of the participants. These results suggest that the 48-Pictures Test is a useful tool for the detection of possible simulated memory impairment, and that, when combined with the RAVLT recall-recognition difference score and the immediate recall score on the RCFT, it can provide strong evidence of exaggerated memory impairment.

  18. Simulation study on discrete charge effects of SiNW biosensors according to bound target position using a 3D TCAD simulator.

    PubMed

    Chung, In-Young; Jang, Hyeri; Lee, Jieun; Moon, Hyunggeun; Seo, Sung Min; Kim, Dae Hwan

    2012-02-17

We introduce a simulation method for the biosensor environment that treats the semiconductor and the electrolyte regions together, using a well-established semiconductor 3D TCAD simulator tool. Using this method, we conduct electrostatic simulations of SiNW biosensors with a more realistic target charge model, in which the target is described as a charged cube randomly located across the nanowire surface, and analyze the Coulomb effect on the SiNW FET according to the position and distribution of the target charges. The simulation results show considerable variation in the SiNW current according to the bound target positions, as well as a dependence of the conductance modulation on the polarity of the target charges. This simulation method and these results can be used to analyze the properties and behavior of biosensor devices, such as the sensing limit or the sensing resolution.

  19. Toward Interactive Scenario Analysis and Exploration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gayle, Thomas R.; Summers, Kenneth Lee; Jungels, John

    2015-01-01

    As Modeling and Simulation (M&S) tools have matured, their applicability and importance have increased across many national security challenges. In particular, they provide a way to test how something may behave without the need to do real world testing. However, current and future changes across several factors including capabilities, policy, and funding are driving a need for rapid response or evaluation in ways that many M&S tools cannot address. Issues around large data, computational requirements, delivery mechanisms, and analyst involvement already exist and pose significant challenges. Furthermore, rising expectations, rising input complexity, and increasing depth of analysis will only increase the difficulty of these challenges. In this study we examine whether innovations in M&S software coupled with advances in ''cloud'' computing and ''big-data'' methodologies can overcome many of these challenges. In particular, we propose a simple, horizontally-scalable distributed computing environment that could provide the foundation (i.e. ''cloud'') for next-generation M&S-based applications based on the notion of ''parallel multi-simulation''. In our context, the goal of parallel multi-simulation is to consider as many simultaneous paths of execution as possible. Therefore, with sufficient resources, the complexity is dominated by the cost of single scenario runs as opposed to the number of runs required. We show the feasibility of this architecture through a stable prototype implementation coupled with the Umbra Simulation Framework [6]. Finally, we highlight the utility through multiple novel analysis tools and by showing the performance improvement compared to existing tools.
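    The notion of parallel multi-simulation described above (many simultaneous scenario paths, with wall-clock cost dominated by a single run) can be sketched as a simple fan-out. The `run_scenario` function is a hypothetical stand-in for one simulation path, not part of the Umbra Simulation Framework; a CPU-bound workload would typically use `ProcessPoolExecutor` rather than threads.

```python
from concurrent.futures import ThreadPoolExecutor

def run_scenario(seed):
    # Stand-in for one simulation path: a deterministic pseudo-random walk
    # driven by a simple linear congruential generator.
    x, state = 0.0, seed
    for _ in range(1000):
        state = (1103515245 * state + 12345) % 2**31   # LCG step
        x += (state / 2**31) - 0.5
    return x

def multi_simulate(seeds, workers=4):
    # With enough workers, total wall-clock time approaches that of a
    # single scenario run rather than scaling with the number of runs.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(run_scenario, seeds))

results = multi_simulate(range(8))
```

    Because each path is independent, results are identical regardless of the worker count, which makes this pattern easy to scale horizontally.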

  20. Investigation of dental alginate and agar impression materials as a brain simulant for ballistic testing.

    PubMed

    Falland-Cheung, Lisa; Piccione, Neil; Zhao, Tianqi; Lazarjan, Milad Soltanipour; Hanlin, Suzanne; Jermy, Mark; Waddell, J Neil

    2016-06-01

    Routine forensic research into in vitro skin/skull/brain ballistic blood backspatter behavior has traditionally used gelatin at a 1:10 Water:Powder (W:P) ratio by volume as a brain simulant. A limitation of gelatin is its high elasticity compared to brain tissue. Therefore, this study investigated the use of dental alginate and agar impression materials as a brain simulant for ballistic testing. Fresh deer brain, alginate (W:P ratio 91.5:8.5) and agar (W:P ratio 81:19) specimens (n=10) (11×22×33 mm) were placed in transparent Perspex boxes of the same internal dimensions prior to shooting with a 0.22 inch caliber high velocity air gun. Quantitative analysis to establish kinetic energy loss and vertical displacement behavior, and qualitative analysis to establish elasticity behavior, was done via high-speed camera footage (SA5, Photron, Japan) using Photron Fastcam Viewer software (Version 3.5.1, Photron, Japan) and visual observation. Damage mechanisms and behavior were qualitatively established by observation of the materials during and after shooting. The qualitative analysis found that of the two simulant materials tested, agar behaved more like brain in terms of damage and showed a similar mechanical response to brain during the passage of the projectile, in terms of energy absorption and vertical velocity displacement. In conclusion, compared with alginate, agar showed a mechanical and subsequent damage response more similar to that of brain. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  1. Analysis of the laser ignition of methane/oxygen mixtures in a sub-scale rocket combustion chamber

    NASA Astrophysics Data System (ADS)

    Wohlhüter, Michael; Zhukov, Victor P.; Sender, Joachim; Schlechtriem, Stefan

    2017-06-01

    The laser ignition of methane/oxygen mixtures in a sub-scale rocket combustion chamber has been investigated numerically and experimentally. The ignition test case used in the present paper was generated during the In-Space Propulsion project (ISP-1), a project focused on the operation of propulsion systems in space, the handling of long idle periods between operations, and multiple reignitions under space conditions. To assess the definition of the numerical setup and a suitable computational domain for the current model, both 2D and 3D simulations have been performed. The analysis shows that a 2D geometry is not suitable for this type of simulation, as reducing the geometry to a 2D domain significantly changes the conditions at the time of ignition and, subsequently, the flame development. The comparison of the numerical and experimental results shows a strong discrepancy in the pressure evolution and the combustion chamber pressure peak following the laser spark. The detailed analysis of the optical Schlieren and OH data leads to the conclusion that the pressure measurement system was not able to capture the strong pressure increase and the peak value in the combustion chamber during ignition. Although the timing of the flame development following the laser spark is not captured appropriately, the 3D simulations reproduce the general ignition phenomena observed in the optical measurement systems, such as pressure evolution and injector flow characteristics.

  2. A parallel electrostatic Particle-in-Cell method on unstructured tetrahedral grids for large-scale bounded collisionless plasma simulations

    NASA Astrophysics Data System (ADS)

    Averkin, Sergey N.; Gatsonis, Nikolaos A.

    2018-06-01

    An unstructured electrostatic Particle-In-Cell (EUPIC) method is developed on arbitrary tetrahedral grids for simulation of plasmas bounded by arbitrary geometries. The electric potential in EUPIC is obtained on cell vertices from a finite volume Multi-Point Flux Approximation of Gauss' law using the indirect dual cell with Dirichlet, Neumann and external circuit boundary conditions. The resulting matrix equation for the nodal potential is solved with a restarted generalized minimal residual method (GMRES) and an ILU(0) preconditioner algorithm, parallelized using a combination of node coloring and level scheduling approaches. The electric field on vertices is obtained using the gradient theorem applied to the indirect dual cell. The algorithms for injection, particle loading, particle motion, and particle tracking are parallelized for unstructured tetrahedral grids. The potential solver, electric field evaluation, loading, and scatter-gather algorithms are verified using analytic solutions for test cases subject to Laplace and Poisson equations. Grid sensitivity analysis examines the L2 and L∞ norms of the relative error in potential, field, and charge density as a function of edge-averaged and volume-averaged cell size. The analysis shows second-order convergence for the potential and first-order convergence for the electric field and charge density. Temporal sensitivity analysis is performed and the momentum and energy conservation properties of the particle integrators in EUPIC are examined. The effects of cell size and timestep on heating, slowing-down and deflection times are quantified. The heating, slowing-down and deflection times are found to be almost linearly dependent on the number of particles per cell. EUPIC simulations of current collection by cylindrical Langmuir probes in collisionless plasmas show good comparison with previous experimentally validated numerical results.
These simulations were also used in a parallelization efficiency investigation. Results show that EUPIC achieves a parallel efficiency of more than 80% when the simulation is performed on a single CPU of a non-uniform memory access node, with efficiency decreasing as the number of threads increases further. EUPIC is applied to the simulation of the multi-species plasma flow over a geometrically complex CubeSat in Low Earth Orbit. The EUPIC potential and flowfield distribution around the CubeSat exhibit features that are consistent with previous simulations over simpler geometrical bodies.
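    The grid-sensitivity result quoted above (second order for the potential, first order for the field and charge density) rests on the standard observed-order-of-convergence estimate p = log(e_coarse/e_fine) / log(h_coarse/h_fine). A minimal sketch, with illustrative error and cell-size values (not data from the paper):

```python
import math

def observed_order(h_coarse, e_coarse, h_fine, e_fine):
    """Observed order of convergence p, assuming e ~ C * h**p on two grids."""
    return math.log(e_coarse / e_fine) / math.log(h_coarse / h_fine)

# Hypothetical errors behaving like h^2 (potential) and h^1 (field):
p_pot = observed_order(0.2, 4.0e-3, 0.1, 1.0e-3)    # -> 2.0
p_field = observed_order(0.2, 2.0e-2, 0.1, 1.0e-2)  # -> 1.0
```

    Halving the cell size and comparing error norms in this way is how the stated convergence orders are typically established.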

  3. Improving the analysis of near-infrared spectroscopy data with multivariate classification of hemodynamic patterns: a theoretical formulation and validation.

    PubMed

    Gemignani, Jessica; Middell, Eike; Barbour, Randall L; Graber, Harry L; Blankertz, Benjamin

    2018-04-04

    The statistical analysis of functional near infrared spectroscopy (fNIRS) data based on the general linear model (GLM) is often made difficult by serial correlations, high inter-subject variability of the hemodynamic response, and the presence of motion artifacts. In this work we propose to extract information on the pattern of hemodynamic activations without using any a priori model for the data, by classifying the channels as 'active' or 'not active' with a multivariate classifier based on linear discriminant analysis (LDA). This work is developed in two steps. First we compared the performance of the two analyses, using a synthetic approach in which simulated hemodynamic activations were combined with either simulated or real resting-state fNIRS data. This procedure allowed for exact quantification of the classification accuracies of GLM and LDA. In the case of real resting-state data, the correlations between classification accuracy and demographic characteristics were investigated by means of a Linear Mixed Model. In the second step, to further characterize the reliability of the newly proposed analysis method, we conducted an experiment in which participants had to perform a simple motor task and data were analyzed with the LDA-based classifier as well as with the standard GLM analysis. The results of the simulation study show that the LDA-based method achieves higher classification accuracies than the GLM analysis, and that the LDA results are more uniform across different subjects and, in contrast to the accuracies achieved by the GLM analysis, have no significant correlations with any of the demographic characteristics. Findings from the real-data experiment are consistent with the results of the real-plus-simulation study, in that the GLM-analysis results show greater inter-subject variability than do the corresponding LDA results. 
The results obtained suggest that the outcome of GLM analysis is highly vulnerable to violations of theoretical assumptions, and that therefore a data-driven approach such as that provided by the proposed LDA-based method is to be favored.
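    As a rough illustration of the channel-classification idea (not the authors' pipeline or data), a minimal two-class Fisher linear discriminant can be written directly with NumPy; the synthetic per-channel features below stand in for hemodynamic measures:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic features per channel (illustrative, not real fNIRS data):
# 'active' channels have a shifted mean response.
n, d = 200, 2
X0 = rng.normal(0.0, 1.0, (n, d))   # not active
X1 = rng.normal(2.0, 1.0, (n, d))   # active
X = np.vstack([X0, X1])
y = np.r_[np.zeros(n), np.ones(n)]

# Fisher discriminant: w = Sw^{-1} (m1 - m0), with the decision threshold
# midway between the projected class means.
m0, m1 = X0.mean(0), X1.mean(0)
Sw = np.cov(X0.T) + np.cov(X1.T)        # within-class scatter (up to scale)
w = np.linalg.solve(Sw, m1 - m0)
thresh = 0.5 * (X0 @ w).mean() + 0.5 * (X1 @ w).mean()
pred = (X @ w > thresh).astype(float)
accuracy = (pred == y).mean()
```

    Because LDA needs no hemodynamic response model, it sidesteps the GLM's assumptions about response shape and serial correlation, which is the advantage the abstract reports.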

  4. A sensitivity analysis of regional and small watershed hydrologic models

    NASA Technical Reports Server (NTRS)

    Ambaruch, R.; Salomonson, V. V.; Simmons, J. W.

    1975-01-01

    Continuous simulation models of the hydrologic behavior of watersheds are important tools in several practical applications such as hydroelectric power planning, navigation, and flood control. Several recent studies have addressed the feasibility of using remote earth observations as sources of input data for hydrologic models. The objective of the study reported here was to determine how accurate remotely sensed measurements must be to provide inputs to hydrologic models of watersheds, within the tolerances needed for acceptably accurate synthesis of streamflow by the models. The study objective was achieved by performing a series of sensitivity analyses using continuous simulation models of three watersheds. The sensitivity analysis showed quantitatively how variations in each of 46 model inputs and parameters affect simulation accuracy with respect to five different performance indices.
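    A one-at-a-time sensitivity analysis of the kind described can be sketched as follows. The toy streamflow model and its parameter names are illustrative, not those of the watershed models used in the study:

```python
def streamflow_model(params):
    # Toy stand-in for a watershed model: runoff is the catchment area
    # times the rainfall that exceeds infiltration. (Hypothetical model.)
    return params["area"] * max(params["rain"] - params["infiltration"], 0.0)

def sensitivities(model, base, delta=0.01):
    """One-at-a-time sensitivity: relative output change per +1% input change."""
    y0 = model(base)
    out = {}
    for k in base:
        p = dict(base)
        p[k] = base[k] * (1.0 + delta)      # perturb one input at a time
        out[k] = (model(p) - y0) / (y0 * delta)
    return out

s = sensitivities(streamflow_model, {"area": 10.0, "rain": 5.0, "infiltration": 2.0})
# Rainfall is the most sensitive input here: a 1% rain error changes
# streamflow by about 1.67%, versus 1% for area.
```

    Ranking inputs this way tells you which remotely sensed quantities must be measured most accurately, which is exactly the question the study addresses for its 46 inputs and parameters.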

  5. A Petri Net Approach Based Elementary Siphons Supervisor for Flexible Manufacturing Systems

    NASA Astrophysics Data System (ADS)

    Abdul-Hussin, Mowafak Hassan

    2015-05-01

    This paper presents an approach to constructing a class of S3PR nets for the modeling, simulation, and control of processes occurring in flexible manufacturing systems (FMS), based on the elementary siphons of a Petri net. Siphons are central to the analysis and control of deadlocks in FMS, which makes them a significant modeling objective. Petri net models support efficient structural analysis and utilization of FMSs, in which different control policies can be implemented that lead to deadlock prevention. We present an effective deadlock-free policy for a special class of Petri nets called S3PR. Structural analysis and reachability graph analysis are used for the analysis and control of the simulated Petri nets. Petri nets have been successfully applied as one of the most powerful tools for modelling FMS; using structural analysis, we show that liveness of such systems can be attributed to the absence of undermarked siphons.
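    The enabling/firing semantics and the siphon-deadlock connection can be sketched in a few lines. The two-transition net below is a hypothetical toy example, not an S3PR model from the paper; the key fact is that a siphon that loses all its tokens can never regain them, so an undermarked siphon eventually disables every transition that depends on it:

```python
# Minimal Petri net: PRE gives tokens consumed, POST tokens produced.
PRE = {"t1": {"p1": 1}, "t2": {"p2": 1}}
POST = {"t1": {"p2": 1}, "t2": {"p1": 1}}

def enabled(marking, t):
    # A transition is enabled if every input place holds enough tokens.
    return all(marking.get(p, 0) >= n for p, n in PRE[t].items())

def fire(marking, t):
    # Firing consumes PRE tokens and produces POST tokens.
    m = dict(marking)
    for p, n in PRE[t].items():
        m[p] -= n
    for p, n in POST[t].items():
        m[p] = m.get(p, 0) + n
    return m

def is_deadlocked(marking):
    # Deadlock: no transition is enabled.
    return not any(enabled(marking, t) for t in PRE)

m = fire({"p1": 1, "p2": 0}, "t1")   # token moves p1 -> p2
```

    Here {p1, p2} is a siphon (every transition that feeds it also consumes from it): the marking {"p1": 0, "p2": 0} empties the siphon and the net is permanently dead, which is the structural condition liveness analysis rules out.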

  6. Time-lapse analysis of methane quantity in Mary Lee group of coal seams using filter-based multiple-point geostatistical simulation

    USGS Publications Warehouse

    Karacan, C. Özgen; Olea, Ricardo A.

    2013-01-01

    The systematic approach presented in this paper is, to our knowledge, the first in the literature in which history matching, training images (TIs) of gas-in-place (GIP), and filter simulations are used for degasification performance evaluation and for assessing GIP for mining safety. Results from this study showed that using production history matching of coalbed methane wells to determine time-lapsed reservoir data could be used to compute spatial GIP and representative GIP TIs generated through Voronoi decomposition. Furthermore, performing filter simulations using point-wise data and TIs could be used to predict methane quantity in coal seams subjected to degasification. During the course of the study, it was shown that the material balance of gas produced by wellbores and the GIP reductions in coal seams predicted using filter simulations compared very well, showing the success of filter simulations for continuous variables in this case study. Quantitative results from filter simulations of GIP within the studied area showed that GIP was reduced from an initial ∼73 Bcf (median) to ∼46 Bcf (2011), representing a 37% decrease and varying spatially through degasification. It is forecasted that there will be an additional ∼2 Bcf reduction in methane quantity between 2011 and 2015. This study and the presented results showed that the applied methodology and utilized techniques can be used to map GIP and its change within coal seams after degasification, which can further be used for ventilation design for methane control in coal mines.

  7. Development and validation of the Simulation Learning Effectiveness Scale for nursing students.

    PubMed

    Pai, Hsiang-Chu

    2016-11-01

    To develop and validate the Simulation Learning Effectiveness Scale, which is based on Bandura's social cognitive theory. A simulation programme is a significant teaching strategy for nursing students. Nevertheless, there are few evidence-based instruments that validate the effectiveness of simulation learning in Taiwan. This is a quantitative descriptive design. In Study 1, a nonprobability convenience sample of 151 student nurses completed the Simulation Learning Effectiveness Scale. Exploratory factor analysis was used to examine the factor structure of the instrument. In Study 2, which involved 365 student nurses, confirmatory factor analysis and structural equation modelling were used to analyse the construct validity of the Simulation Learning Effectiveness Scale. In Study 1, exploratory factor analysis yielded three components: self-regulation, self-efficacy and self-motivation. The three factors explained 29·09, 27·74 and 19·32% of the variance, respectively. The final 12-item instrument with the three factors explained 76·15% of variance. Cronbach's alpha was 0·94. In Study 2, confirmatory factor analysis identified a second-order factor termed Simulation Learning Effectiveness Scale. Goodness-of-fit indices showed an acceptable fit overall with the full model (χ²/df (51) = 3·54, comparative fit index = 0·96, Tucker-Lewis index = 0·95 and standardised root-mean-square residual = 0·035). In addition, teacher's competence in encouraging learning, and self-reflection and insight, were significantly and positively associated with the Simulation Learning Effectiveness Scale. Teacher's competence in encouraging learning also was significantly and positively associated with self-reflection and insight. Overall, these variables explained 21·9% of the variance in the students' learning effectiveness. The Simulation Learning Effectiveness Scale is a reliable and valid means to assess simulation learning effectiveness for nursing students.
The Simulation Learning Effectiveness Scale can be used to examine nursing students' learning effectiveness and serve as a basis to improve students' learning efficiency through simulation programmes. Future implementation research that focuses on the relationship between learning effectiveness and nursing competence in nursing students is recommended. © 2016 John Wiley & Sons Ltd.
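    The internal-consistency statistic reported for Study 1, Cronbach's alpha, is computed from the item variances and the variance of the total score. A minimal sketch, with an illustrative score matrix (not the study's data):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Perfectly consistent items (each respondent gives the same score to
# every item) yield alpha of 1.
scores = np.array([[1, 1, 1], [2, 2, 2], [3, 3, 3], [4, 4, 4]])
alpha = cronbach_alpha(scores)
```

    Values near the reported 0·94 indicate that the 12 items measure a single coherent construct.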

  8. Finite element for rotor/stator interactive forces in general engine dynamic simulation. Part 1: Development of bearing damper element

    NASA Technical Reports Server (NTRS)

    Adams, M. L.; Padovan, J.; Fertis, D. G.

    1980-01-01

    A general purpose squeeze-film damper interactive force element was developed, coded into a software package (module) and debugged. This software package was applied to nonlinear dynamic analyses of some simple rotor systems. Results for pressure distributions show that the long bearing (end sealed) is a stronger bearing than the short bearing, as expected. Results of the nonlinear dynamic analysis, using a four degree of freedom simulation model, showed that the orbit of the rotating shaft grows nonlinearly to fill the bearing clearance as the unbalanced weight increases.

  9. Weighing the galactic disc using the Jeans equation: lessons from simulations

    NASA Astrophysics Data System (ADS)

    Candlish, G. N.; Smith, R.; Moni Bidin, C.; Gibson, B. K.

    2016-03-01

    Using three-dimensional stellar kinematic data from simulated galaxies, we examine the efficacy of a Jeans equation analysis in reconstructing the total disk surface density, including the dark matter, at the `Solar' radius. Our simulation data set includes galaxies formed in a cosmological context using state-of-the-art high-resolution cosmological zoom simulations, and other idealized models. The cosmologically formed galaxies have been demonstrated to lie on many of the observed scaling relations for late-type spirals, and thus offer an interesting surrogate for real galaxies with the obvious advantage that all the kinematical data are known perfectly. We show that the vertical velocity dispersion is typically the dominant kinematic quantity in the analysis, and that the traditional method of using only the vertical force is reasonably effective at low heights above the disk plane. At higher heights the inclusion of the radial force becomes increasingly important. We also show that the method is sensitive to uncertainties in the measured disk parameters, particularly the scalelengths of the assumed double exponential density distribution, and the scalelength of the radial velocity dispersion. In addition, we show that disk structure and low number statistics can lead to significant errors in the calculated surface densities. Finally, we examine the implications of our results for previous studies of this sort, suggesting that more accurate measurements of the scalelengths may help reconcile conflicting estimates of the local dark matter density in the literature.

  10. Principal component and normal mode analysis of proteins; a quantitative comparison using the GroEL subunit.

    PubMed

    Skjaerven, Lars; Martinez, Aurora; Reuter, Nathalie

    2011-01-01

    Principal component analysis (PCA) and normal mode analysis (NMA) have emerged as two invaluable tools for studying conformational changes in proteins. To compare these approaches for studying protein dynamics, we have used a subunit of the GroEL chaperone, whose dynamics is well characterized. We first show that both PCA on trajectories from molecular dynamics (MD) simulations and NMA reveal a general dynamical behavior in agreement with what has previously been described for GroEL. We thus compare the reproducibility of PCA on independent MD runs and subsequently investigate the influence of the length of the MD simulations. We show that there is a relatively poor one-to-one correspondence between eigenvectors obtained from two independent runs and conclude that caution should be taken when analyzing principal components individually. We also observe that increasing the simulation length does not improve the agreement with the experimental structural difference. In fact, relatively short MD simulations are sufficient for this purpose. We observe a rapid convergence of the eigenvectors (after ca. 6 ns). Although there is not always a clear one-to-one correspondence, there is a qualitatively good agreement between the movements described by the first five modes obtained with the three different approaches; PCA, all-atoms NMA, and coarse-grained NMA. It is particularly interesting to relate this to the computational cost of the three methods. The results we obtain on the GroEL subunit contribute to the generalization of robust and reproducible strategies for the study of protein dynamics, using either NMA or PCA of trajectories from MD simulations. © 2010 Wiley-Liss, Inc.
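    The PCA step applied to MD trajectories above amounts to an eigendecomposition of the covariance of the mean-centred frames. A minimal sketch, with a synthetic "trajectory" standing in for atomic coordinates:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy trajectory: 500 frames x 6 coordinates, with most variance along
# the first coordinate (illustrative, not GroEL data).
frames = rng.normal(size=(500, 6)) * np.array([3.0, 1.0, 0.5, 0.2, 0.1, 0.1])

def pca(X):
    """PCA via eigendecomposition of the covariance of mean-centred frames."""
    Xc = X - X.mean(axis=0)
    cov = (Xc.T @ Xc) / (len(X) - 1)
    vals, vecs = np.linalg.eigh(cov)
    order = np.argsort(vals)[::-1]        # sort by descending variance
    return vals[order], vecs[:, order]

eigvals, eigvecs = pca(frames)
explained = eigvals / eigvals.sum()       # fraction of variance per component
```

    Comparing eigenvectors from independent runs (e.g. by absolute dot products) is how the one-to-one correspondence discussed in the abstract is quantified.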

  11. MATLAB Simulation of Gradient-Based Neural Network for Online Matrix Inversion

    NASA Astrophysics Data System (ADS)

    Zhang, Yunong; Chen, Ke; Ma, Weimu; Li, Xiao-Dong

    This paper investigates the simulation of a gradient-based recurrent neural network for online solution of the matrix-inverse problem. Several important techniques are employed as follows to simulate such a neural system. 1) Kronecker product of matrices is introduced to transform a matrix-differential-equation (MDE) to a vector-differential-equation (VDE); i.e., finally, a standard ordinary-differential-equation (ODE) is obtained. 2) MATLAB routine "ode45" is introduced to solve the transformed initial-value ODE problem. 3) In addition to various implementation errors, different kinds of activation functions are simulated to show the characteristics of such a neural network. Simulation results substantiate the theoretical analysis and efficacy of the gradient-based neural network for online constant matrix inversion.
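    The dynamics described above can be sketched directly. The paper vectorizes the matrix differential equation via the Kronecker product and integrates the resulting ODE with MATLAB's "ode45"; this sketch instead keeps the matrix form and uses a plain forward-Euler loop, with an illustrative matrix A, gain gamma, and step size:

```python
import numpy as np

# Gradient-based neural network (GNN) for constant matrix inversion:
# state X(t) follows dX/dt = -gamma * A.T @ (A @ X - I), the negative
# gradient of the energy ||A X - I||_F^2 / 2, whose unique minimum is inv(A).
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
I = np.eye(2)
X = np.zeros_like(A)            # initial state
gamma, dt = 1.0, 0.05           # gain and Euler step (illustrative values)
for _ in range(2000):           # integrate to t = 100
    X += dt * (-gamma * A.T @ (A @ X - I))

# X now approximates np.linalg.inv(A)
```

    The linear activation used here is the simplest case; the paper's point is that other activation functions (and implementation errors) change the convergence behaviour of the same dynamics.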

  12. Implementation and simulation of a cone dielectric elastomer actuator

    NASA Astrophysics Data System (ADS)

    Wang, Huaming; Zhu, Jianying

    2008-11-01

    The purpose is to investigate the performance of a cone dielectric elastomer actuator (DEA) by experiment and FEM simulation. Two working equilibrium positions of the cone DEA, which correspond to its initial displacement and its displacement output with voltage off and on, respectively, are determined through an analysis of its working principle. Analytical results accord with experimental ones, and the work output in a work cycle is calculated accordingly. The actuator responds quickly when voltage is applied and returns to its original position rapidly when voltage is released. FEM simulation is also used to predict the movement of the cone DEA in advance. Simulation results agree well with experimental ones and prove the feasibility of the simulation; the causes of the small differences in displacement output are also analyzed.

  13. Structural and chemical orders in Ni64.5Zr35.5 metallic glass by molecular dynamics simulation

    DOE PAGES

    Tang, L.; Wen, T. Q.; Wang, N.; ...

    2018-03-06

    The atomic structure of Ni64.5Zr35.5 metallic glass has been investigated by molecular dynamics (MD) simulations. The calculated structure factors from the MD glassy sample at room temperature agree well with the X-ray diffraction (XRD) and neutron diffraction (ND) experimental data. Using the pairwise cluster alignment and clique analysis methods, we show that there are three types of dominant short-range order (SRO) motifs around Ni atoms in the glass sample of Ni64.5Zr35.5, i.e., Mixed-Icosahedron(ICO)-Cube, Twined-Cube and icosahedron-like clusters. Furthermore, chemical order and medium-range order (MRO) analysis show that the Mixed-ICO-Cube and Twined-Cube clusters exhibit the characteristics of the crystalline B2 phase. In conclusion, our simulation results suggest that the weak glass-forming ability (GFA) of Ni64.5Zr35.5 can be attributed to the competition between the glass forming ICO SRO and the crystalline Mixed-ICO-Cube and Twined-Cube motifs.

  15. Structural and chemical orders in Ni64.5Zr35.5 metallic glass by molecular dynamics simulation

    NASA Astrophysics Data System (ADS)

    Tang, L.; Wen, T. Q.; Wang, N.; Sun, Y.; Zhang, F.; Yang, Z. J.; Ho, K. M.; Wang, C. Z.

    2018-03-01

    The atomic structure of Ni64.5Zr35.5 metallic glass has been investigated by molecular dynamics (MD) simulations. The calculated structure factors from the MD glassy sample at room temperature agree well with the x-ray diffraction (XRD) and neutron diffraction (ND) experimental data. Using the pairwise cluster alignment and clique analysis methods, we show that there are three types of dominant short-range order (SRO) motifs around Ni atoms in the glass sample of Ni64.5Zr35.5, i.e., mixed-icosahedron(ICO)-cube, intertwined-cube, and icosahedronlike clusters. Furthermore, chemical order and medium-range order (MRO) analysis show that the mixed-ICO-cube and intertwined-cube clusters exhibit the characteristics of the crystalline B2 phase. Our simulation results suggest that the weak glass-forming ability (GFA) of Ni64.5Zr35.5 can be attributed to the competition between the glass forming ICO SRO and the crystalline mixed-ICO-cube and intertwined-cube motifs.

  16. Medication Waste Reduction in Pediatric Pharmacy Batch Processes

    PubMed Central

    Veltri, Michael A.; Hamrock, Eric; Mollenkopf, Nicole L.; Holt, Kristen; Levin, Scott

    2014-01-01

    OBJECTIVES: To inform pediatric cart-fill batch scheduling for reductions in pharmaceutical waste using a case study and simulation analysis. METHODS: A pre and post intervention and simulation analysis was conducted during 3 months at a 205-bed children's center. An algorithm was developed to detect wasted medication based on time-stamped computerized provider order entry information. The algorithm was used to quantify pharmaceutical waste and associated costs for both preintervention (1 batch per day) and postintervention (3 batches per day) schedules. Further, simulation was used to systematically test 108 batch schedules outlining general characteristics that have an impact on the likelihood for waste. RESULTS: Switching from a 1-batch-per-day to a 3-batch-per-day schedule resulted in a 31.3% decrease in pharmaceutical waste (28.7% to 19.7%) and annual cost savings of $183,380. Simulation results demonstrate how increasing batch frequency facilitates a more just-in-time process that reduces waste. The most substantial gains are realized by shifting from a schedule of 1 batch per day to at least 2 batches per day. The simulation exhibits how waste reduction is also achievable by avoiding batch preparation during daily time periods where medication administration or medication discontinuations are frequent. Last, the simulation was used to show how reducing batch preparation time per batch provides some, albeit minimal, opportunity to decrease waste. CONCLUSIONS: The case study and simulation analysis demonstrate characteristics of batch scheduling that may support pediatric pharmacy managers in redesign toward minimizing pharmaceutical waste. PMID:25024671

  17. Medication waste reduction in pediatric pharmacy batch processes.

    PubMed

    Toerper, Matthew F; Veltri, Michael A; Hamrock, Eric; Mollenkopf, Nicole L; Holt, Kristen; Levin, Scott

    2014-04-01

    To inform pediatric cart-fill batch scheduling for reductions in pharmaceutical waste using a case study and simulation analysis. A pre and post intervention and simulation analysis was conducted during 3 months at a 205-bed children's center. An algorithm was developed to detect wasted medication based on time-stamped computerized provider order entry information. The algorithm was used to quantify pharmaceutical waste and associated costs for both preintervention (1 batch per day) and postintervention (3 batches per day) schedules. Further, simulation was used to systematically test 108 batch schedules outlining general characteristics that have an impact on the likelihood for waste. Switching from a 1-batch-per-day to a 3-batch-per-day schedule resulted in a 31.3% decrease in pharmaceutical waste (28.7% to 19.7%) and annual cost savings of $183,380. Simulation results demonstrate how increasing batch frequency facilitates a more just-in-time process that reduces waste. The most substantial gains are realized by shifting from a schedule of 1 batch per day to at least 2 batches per day. The simulation exhibits how waste reduction is also achievable by avoiding batch preparation during daily time periods where medication administration or medication discontinuations are frequent. Last, the simulation was used to show how reducing batch preparation time per batch provides some, albeit minimal, opportunity to decrease waste. The case study and simulation analysis demonstrate characteristics of batch scheduling that may support pediatric pharmacy managers in redesign toward minimizing pharmaceutical waste.
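    The batch-scheduling trade-off can be illustrated with a toy model (not the authors' order-entry algorithm): doses are prepared at the most recent batch time before they are due, and a dose is wasted when its order is discontinued after preparation but before administration. More frequent batches shrink the prepare-to-administer window and therefore the waste:

```python
import random

def wasted_doses(batch_hours, n_orders=1000, seed=42):
    """Count doses prepared but never given because the order was
    discontinued between batch preparation and the due time.
    Toy model: hourly doses over one day, uniform discontinuation times."""
    rng = random.Random(seed)
    waste = 0
    for _ in range(n_orders):
        stop = rng.uniform(0, 24)          # discontinuation time (hours)
        for due in range(24):              # one dose due each hour
            # dose is prepared at the latest batch at or before its due time
            prep = max((b for b in batch_hours if b <= due), default=0)
            if prep < stop <= due:         # prepared while active, due after stop
                waste += 1
    return waste

one_batch = wasted_doses([0])              # 1 batch per day
three_batches = wasted_doses([0, 8, 16])   # 3 batches per day
```

    As the abstract's simulation found, moving from one to several batches per day gives the largest waste reduction, because each dose is prepared closer to its administration time.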

  18. Time and Frequency-Domain Cross-Verification of SLS 6DOF Trajectory Simulations

    NASA Technical Reports Server (NTRS)

    Johnson, Matthew; McCullough, John

    2017-01-01

    The Space Launch System (SLS) Guidance, Navigation, and Control (GNC) team and its partners have developed several time- and frequency-based simulations for development and analysis of the proposed SLS launch vehicle. The simulations differ in fidelity and some have unique functionality that allows them to perform specific analyses. Some examples of the purposes of the various models are: trajectory simulation, multi-body separation, Monte Carlo, hardware in the loop, loads, and frequency domain stability analyses. While no two simulations are identical, many of the models are essentially six degree-of-freedom (6DOF) representations of the SLS plant dynamics, hardware implementation, and flight software. Thus at a high level all of those models should be in agreement. Comparison of outputs from several SLS trajectory and stability analysis tools are ongoing as part of the program's current verification effort. The purpose of these comparisons is to highlight modeling and analysis differences, verify simulation data sources, identify inconsistencies and minor errors, and ultimately to verify output data as being a good representation of the vehicle and subsystem dynamics. This paper will show selected verification work in both the time and frequency domain from the current design analysis cycle of the SLS for several of the design and analysis simulations. In the time domain, the tools that will be compared are MAVERIC, CLVTOPS, SAVANT, STARS, ARTEMIS, and POST 2. For the frequency domain analysis, the tools to be compared are FRACTAL, SAVANT, and STARS. The paper will include discussion of these tools including their capabilities, configurations, and the uses to which they are put in the SLS program. Determination of the criteria by which the simulations are compared (matching criteria) requires thoughtful consideration, and there are several pitfalls that may occur that can severely punish a simulation if not considered carefully. 
The paper will discuss these considerations and will present a framework for responding to these issues when they arise. For example, small event timing differences can lead to large differences in mass properties if the criteria are to measure those properties at the same time, or large differences in altitude if the criteria are to measure those properties when the simulation experiences a staging event. Similarly, a tiny difference in phase can lead to large differences in gain margin for frequency-domain comparisons.

  19. Comparative study of signalling methods for high-speed backplane transceiver

    NASA Astrophysics Data System (ADS)

    Wu, Kejun

    2017-11-01

    A combined analysis of transient simulation and statistical method is proposed for a comparative study of signalling methods applied to high-speed backplane transceivers. This method enables fast and accurate signal-to-noise ratio and symbol error rate estimation of a serial link based on a four-dimensional design space, including channel characteristics, noise scenarios, equalisation schemes, and signalling methods. The proposed combined analysis method chooses an efficient sampling size for performance evaluation. A comparative study of non-return-to-zero (NRZ), PAM-4, and four-phase shifted sinusoid symbol (PSS-4) using parameterised behaviour-level simulation shows that PAM-4 and PSS-4 have substantial advantages over conventional NRZ in most cases. A comparison between PAM-4 and PSS-4 shows that PAM-4 suffers significant bit-error-rate degradation as the noise level increases.
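    The noise sensitivity described above can be illustrated with a toy Monte Carlo link model. This is only a sketch under simplifying assumptions: equally spaced PAM levels over an additive white Gaussian noise channel, with the paper's channel characteristics, equalisation, and PSS-4 signalling omitted; all parameter values are illustrative.

    ```python
    import numpy as np

    def simulate_ser(levels, snr_db, n_symbols=200_000, seed=0):
        """Monte Carlo symbol-error-rate estimate for equally spaced PAM
        levels under additive white Gaussian noise (a simplified link
        model; ISI and equalisation are not modeled here)."""
        rng = np.random.default_rng(seed)
        # Unit-average-power constellation: NRZ = 2-PAM, PAM-4 = 4-PAM.
        amps = np.arange(levels) * 2 - (levels - 1)      # ..., -3, -1, 1, 3, ...
        amps = amps / np.sqrt(np.mean(amps ** 2.0))
        tx = rng.integers(0, levels, n_symbols)
        noise_std = 10 ** (-snr_db / 20)                 # unit signal power
        rx = amps[tx] + rng.normal(0, noise_std, n_symbols)
        # Minimum-distance detection against the known constellation.
        detected = np.abs(rx[:, None] - amps[None, :]).argmin(axis=1)
        return np.mean(detected != tx)

    # At equal SNR, PAM-4's closer-spaced levels cost it noise margin,
    # consistent with the degradation reported in the abstract.
    ser_nrz = simulate_ser(2, snr_db=12)
    ser_pam4 = simulate_ser(4, snr_db=12)
    ```

    With four levels squeezed into the same amplitude range, the decision distance shrinks by a factor of three, which is why the PAM-4 error rate degrades much faster as noise grows.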

  20. Nanostructure Formation by controlled dewetting on patterned substrates: A combined theoretical, modeling and experimental study

    PubMed Central

    Lu, Liang-Xing; Wang, Ying-Min; Srinivasan, Bharathi Madurai; Asbahi, Mohamed; Yang, Joel K. W.; Zhang, Yong-Wei

    2016-01-01

    We perform a systematic two-dimensional energetic analysis to study the stability of various nanostructures formed by dewetting solid films deposited on patterned substrates. Our analytical results show that by controlling system parameters such as the substrate surface pattern, film thickness and wetting angle, a variety of equilibrium nanostructures can be obtained. Phase diagrams are presented to show the complex relations between these system parameters and various nanostructure morphologies. We further carry out both phase field simulations and dewetting experiments to validate the analytically derived phase diagrams. Good agreement between the results from our energetic analyses and those from our phase field simulations and experiments verifies our analysis. Hence, the phase diagrams presented here provide guidelines for using solid-state dewetting as a tool to achieve various nanostructures. PMID:27580943

  1. Optical Layout Analysis of Polarization Interference Imaging Spectrometer by Jones Calculus in View of both Optical Throughput and Interference Fringe Visibility

    NASA Astrophysics Data System (ADS)

    Zhang, Xuanni; Zhang, Chunmin

    2013-01-01

    A polarization interference imaging spectrometer based on a Savart polariscope was presented. Its optical throughput was analyzed by Jones calculus. The throughput expression was given, and clearly showed that the optical throughput mainly depends on the intensity of the incident light, the transmissivity, the refractive index, and the layout of the optical system. The simulation and analysis gave the optimum layout in view of both optical throughput and interference fringe visibility, and verified that the layout of our former design was optimum. The simulation also showed that a small deviation from the optimum layout has little influence on interference fringe visibility, whereas the same deviation affects other layouts severely; a small deviation is therefore admissible at the optimum, which can mitigate manufacturing difficulty. These results pave the way for further research and engineering design.
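    The Jones-calculus bookkeeping behind such a throughput analysis can be sketched minimally. This sketch assumes ideal linear polarizers only; the Savart plate's retardance and real component transmissivities are omitted, so the numbers are illustrative rather than the paper's expressions.

    ```python
    import numpy as np

    def polarizer(theta):
        """Jones matrix of an ideal linear polarizer with transmission
        axis at angle theta (radians) from the x axis."""
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c * c, c * s],
                         [c * s, s * s]])

    def throughput(jones_system, e_in):
        """Fraction of incident intensity transmitted by a cascaded
        Jones system acting on the input field e_in."""
        e_out = jones_system @ e_in
        return np.linalg.norm(e_out) ** 2 / np.linalg.norm(e_in) ** 2

    # Unpolarized input is handled by averaging two orthogonal polarizations.
    ex = np.array([1.0, 0.0])
    ey = np.array([0.0, 1.0])
    # Polarizer at 45 deg followed by an aligned analyzer (a stand-in
    # for the polariscope layout, with the Savart plate omitted).
    system = polarizer(np.pi / 4) @ polarizer(np.pi / 4)
    t = 0.5 * (throughput(system, ex) + throughput(system, ey))
    # Ideal result: half of the unpolarized light survives the first
    # polarizer and the aligned analyzer passes all of it, so t = 0.5.
    ```

    Matrices for each additional element (retarders, the Savart plate) multiply into `system` in the same way, which is how the layout dependence of the throughput emerges.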

  2. Building energy analysis of Electrical Engineering Building from DesignBuilder tool: calibration and simulations

    NASA Astrophysics Data System (ADS)

    Cárdenas, J.; Osma, G.; Caicedo, C.; Torres, A.; Sánchez, S.; Ordóñez, G.

    2016-07-01

    This research shows the energy analysis of the Electrical Engineering Building, located on the campus of the Industrial University of Santander in Bucaramanga, Colombia. This building is a green pilot for analysing energy saving strategies such as solar pipes, green roof, daylighting, and automation, among others. The energy analysis was performed by means of DesignBuilder software from a virtual model of the building. Several variables were analysed, such as air temperature, relative humidity, air velocity, daylighting, and energy consumption. According to two criteria, thermal load and energy consumption, critical areas were defined. The calibration and validation process of the virtual model was done obtaining errors below 5% in comparison with measured values. The simulations show that the average indoor temperature in the critical areas of the building was 27°C, whilst relative humidity reached values near 70% over the year. The most critical discomfort conditions were found in the area with the greatest concentration of people, which has an average annual temperature of 30°C. Solar pipes can increase daylight levels by 33% in the areas located on the upper floors of the building. In the case of the green roofs, the simulated results show that they reduce internal heat gains through the roof by nearly 31%, as well as decrease energy consumption related to air conditioning by 5% for some areas on the fourth and fifth floors. The estimated energy consumption of the building was 69 283 kWh per year.

  3. Analysis of Lunar Highland Regolith Samples from Apollo 16 Drive Core 64001/2 and Lunar Regolith Simulants - An Expanding Comparative Database

    NASA Technical Reports Server (NTRS)

    Schrader, Christian M.; Rickman, Doug; Stoeser, Doug; Wentworth, Susan J.; Botha, Pieter WSK; Butcher, Alan R.; McKay, David; Horsch, Hanna; Benedictus, Aukje; Gottlieb, Paul

    2008-01-01

    We present modal data from QEMSCAN(registered trademark) beam analysis of Apollo 16 samples from drive core 64001/2. The analyzed lunar samples are thin sections 64002,6019 (5.0-8.0 cm depth) and 64001,6031 (50.0-53.1 cm depth) and sieved grain mounts 64002,262 and 64001,374 from depths corresponding to the thin sections, respectively. We also analyzed lunar highland regolith simulants NU-LHT-1M, -2M, and OB-1, low-Ti mare simulants JSC-1, -1A, -1AF, and FJS-1, and high-Ti mare simulant MLS-1. The preliminary results comprise the beginning of an internally consistent database of lunar regolith and regolith simulant mineral and glass information. This database, combined with previous and concurrent studies on phase chemistry, bulk chemistry, and with data on particle shape and size distribution, will serve to guide lunar scientists and engineers in choosing simulants for their applications. These results are modal% by phase rather than by particle type, so they are not directly comparable to most previously published lunar data that report lithic fragments, monomineralic particles, agglutinates, etc. Of the highland simulants, OB-1 has an integrated modal composition closer than NU-LHT-1M to that of the 64001/2 samples. However, this and other studies show that NU-LHT-1M and -2M have minor and trace mineral (e.g., Fe-Ti oxides and phosphates) populations and mineral and glass chemistry closer to these lunar samples. The finest fractions (0-20 microns) in the sieved lunar samples are enriched in glass relative to the integrated compositions by approx. 30% for 64002,262 and approx. 15% for 64001,374. Plagioclase, pyroxene, and olivine are depleted in these finest fractions. This could be important to lunar dust mitigation efforts and astronaut health - none of the analyzed simulants show this trend. Contrary to previously reported modal analyses of monomineralic grains in lunar regolith, these area% modal analyses do not show a systematic increase in plagioclase/pyroxene as size fraction decreases.

  4. Simulations of material mixing in laser-driven reshock experiments

    NASA Astrophysics Data System (ADS)

    Haines, Brian M.; Grinstein, Fernando F.; Welser-Sherrill, Leslie; Fincke, James R.

    2013-02-01

    We perform simulations of a laser-driven reshock experiment [Welser-Sherrill et al., High Energy Density Phys. (unpublished)] in the strong-shock high energy-density regime to better understand material mixing driven by the Richtmyer-Meshkov instability. Validation of the simulations is based on direct comparison of simulation and radiographic data. Simulations are also compared with published direct numerical simulation and the theory of homogeneous isotropic turbulence. Despite the fact that the flow is neither homogeneous, isotropic, nor fully turbulent, there are local regions in which the flow demonstrates characteristics of homogeneous isotropic turbulence. We identify and isolate these regions by the presence of high levels of turbulent kinetic energy (TKE) and vorticity. After reshock, our analysis shows characteristics consistent with those of incompressible isotropic turbulence. Self-similarity and effective Reynolds number assessments suggest that the results are reasonably converged at the finest resolution. Our results show that in shock-driven transitional flows, turbulent features such as self-similarity and isotropy only fully develop once de-correlation, characteristic vorticity distributions, and integrated TKE have decayed significantly. Finally, we use three-dimensional simulation results to test the performance of two-dimensional Reynolds-averaged Navier-Stokes simulations. In this context, we also test a presumed probability density function turbulent mixing model extensively used in combustion applications.

  5. Modeling the Transfer Function for the Dark Energy Survey

    DOE PAGES

    Chang, C.

    2015-03-04

    We present a forward-modeling simulation framework designed to model the data products from the Dark Energy Survey (DES). This forward-model process can be thought of as a transfer function—a mapping from cosmological/astronomical signals to the final data products used by the scientists. Using output from the cosmological simulations (the Blind Cosmology Challenge), we generate simulated images (the Ultra Fast Image Simulator) and catalogs representative of the DES data. In this work we demonstrate the framework by simulating the 244 deg² coadd images and catalogs in five bands for the DES Science Verification data. The simulation output is compared with the corresponding data to show that major characteristics of the images and catalogs can be captured. We also point out several directions of future improvements. Two practical examples—star-galaxy classification and proximity effects on object detection—are then used to illustrate how one can use the simulations to address systematics issues in data analysis. With a clear understanding of the simplifications in our model, we show that one can use the simulations side-by-side with data products to interpret the measurements. This forward modeling approach is generally applicable for other upcoming and future surveys. It provides a powerful tool for systematics studies that is sufficiently realistic and highly controllable.

  6. Divertor heat flux simulations in ELMy H-mode discharges of EAST

    NASA Astrophysics Data System (ADS)

    Xia, T. Y.; Xu, X. Q.; Wu, Y. B.; Huang, Y. Q.; Wang, L.; Zheng, Z.; Liu, J. B.; Zang, Q.; Li, Y. Y.; Zhao, D.; EAST Team

    2017-11-01

    This paper presents heat flux simulations for the ELMy H-mode on the Experimental Advanced Superconducting Tokamak (EAST) using a six-field two-fluid model in BOUT++. Three EAST ELMy H-mode discharges with different plasma currents I p and geometries are studied. The trend of the scrape-off layer width λq with I p is reproduced by the simulation. The simulated width is only half of that derived from the EAST scaling law, but agrees well with the international multi-machine scaling law. Note that there is no radio-frequency (RF) heating scheme in the simulations, and RF heating can change the boundary topology and increase the flux expansion. Anomalous electron transport is found to contribute to the divertor heat fluxes. A coherent mode is found in the edge region in simulations. The frequency and poloidal wave number kθ are in the range of the edge coherent mode in EAST. The magnetic fluctuations of the mode are smaller than the electric field fluctuations. Statistical analysis of the type of turbulence shows that the turbulence transport type (blobby or turbulent) does not influence the heat flux width scaling. The two-point model differs from the simulation results but the drift-based model shows good agreement with simulations.

  7. Three-dimensional instabilities of natural convection between two differentially heated vertical plates: Linear and nonlinear complementary approaches

    NASA Astrophysics Data System (ADS)

    Gao, Zhenlan; Podvin, Berengere; Sergent, Anne; Xin, Shihe; Chergui, Jalel

    2018-05-01

    The transition to chaos of the air flow between two vertical plates maintained at different temperatures is studied in the Boussinesq approximation. After the first bifurcation at the critical Rayleigh number Rac, the flow consists of two-dimensional (2D) corotating rolls. The stability of the 2D rolls is examined, confronting linear predictions with nonlinear integration. In all cases the 2D rolls are destabilized in the spanwise direction. Efficient linear stability analysis based on an Arnoldi method shows competition between two eigenmodes, corresponding to different spanwise wavelengths and different types of roll distortion. Nonlinear integration shows that the lower-wave-number mode is always dominant. A partial route to chaos is established through the nonlinear simulations. The flow becomes temporally chaotic for Ra = 1.05 Rac, but remains characterized by the spatial patterns identified by linear stability analysis. This highlights the complementary role of linear stability analysis and nonlinear simulation.
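    The Arnoldi-based mode analysis mentioned above can be sketched generically: build a Krylov subspace for the linearized operator and read off the leading Ritz value, whose real part is the growth rate of the most unstable mode. The toy diagonal operator below stands in for the actual linearized Boussinesq operator; it is an illustration of the method, not the paper's solver.

    ```python
    import numpy as np

    def arnoldi_leading(A, m=30, seed=0):
        """Plain Arnoldi iteration: build an m-dimensional Krylov basis
        via modified Gram-Schmidt and return the Ritz value of largest
        real part (the dominant linear growth rate)."""
        rng = np.random.default_rng(seed)
        n = A.shape[0]
        Q = np.zeros((n, m + 1))
        H = np.zeros((m + 1, m))
        q = rng.normal(size=n)
        Q[:, 0] = q / np.linalg.norm(q)
        for j in range(m):
            v = A @ Q[:, j]
            for i in range(j + 1):               # orthogonalize against basis
                H[i, j] = Q[:, i] @ v
                v = v - H[i, j] * Q[:, i]
            H[j + 1, j] = np.linalg.norm(v)
            if H[j + 1, j] < 1e-12:              # invariant subspace found
                m = j + 1
                break
            Q[:, j + 1] = v / H[j + 1, j]
        ritz = np.linalg.eigvals(H[:m, :m])
        return ritz[np.argmax(ritz.real)]

    # Toy operator with a well-separated dominant eigenvalue of 2.0.
    A = np.diag(np.concatenate([np.linspace(0.0, 1.0, 99), [2.0]]))
    lam = arnoldi_leading(A)
    ```

    Only matrix-vector products with `A` are needed, which is what makes the approach efficient for large discretized stability operators.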

  8. A Numerical Investigation of Turbine Noise Source Hierarchy and Its Acoustic Transmission Characteristics: Proof-of-Concept Progress

    NASA Technical Reports Server (NTRS)

    VanZante, Dale; Envia, Edmane

    2008-01-01

    A CFD-based simulation of a single-stage turbine was performed using the TURBO code to assess its viability for determining acoustic transmission through blade rows. Temporal and spectral analysis of the unsteady pressure data from the numerical simulations showed allowable Tyler-Sofrin modes consistent with expectations. This indicates that high-fidelity acoustic transmission calculations are feasible with TURBO.
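    The Tyler-Sofrin selection rule that such spectral analysis is checked against is simple to enumerate: rotor-stator interaction at blade-passing harmonic h produces circumferential mode orders m = hB + kV for integer k. The blade and vane counts below are illustrative, not the geometry of the simulated stage.

    ```python
    def tyler_sofrin_modes(n_blades, n_vanes, harmonic, k_range=4):
        """Allowed circumferential acoustic mode orders m = h*B + k*V
        for rotor-stator interaction (Tyler-Sofrin rule); h is the
        blade-passing harmonic and k runs over the integers."""
        return sorted(harmonic * n_blades + k * n_vanes
                      for k in range(-k_range, k_range + 1))

    # Example: 16 rotor blades, 24 stator vanes, first harmonic.
    # Negative m corresponds to modes counter-rotating relative to the rotor.
    modes = tyler_sofrin_modes(16, 24, harmonic=1)
    ```

    Peaks in the simulated unsteady pressure spectra should appear only at such m, which is the consistency check the abstract refers to.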

  9. The MICE grand challenge lightcone simulation - I. Dark matter clustering

    NASA Astrophysics Data System (ADS)

    Fosalba, P.; Crocce, M.; Gaztañaga, E.; Castander, F. J.

    2015-04-01

    We present a new N-body simulation from the Marenostrum Institut de Ciències de l'Espai (MICE) collaboration, the MICE Grand Challenge (MICE-GC), containing about 70 billion dark matter particles in a (3 Gpc h-1)3 comoving volume. Given its large volume and fine spatial resolution, spanning over five orders of magnitude in dynamic range, it allows an accurate modelling of the growth of structure in the universe from the linear through the highly non-linear regime of gravitational clustering. We validate the dark matter simulation outputs using 3D and 2D clustering statistics, and discuss mass-resolution effects in the non-linear regime by comparing to previous simulations and the latest numerical fits. We show that the MICE-GC run allows for a measurement of the BAO feature with per cent level accuracy and compare it to state-of-the-art theoretical models. We also use sub-arcmin resolution pixelized 2D maps of the dark matter counts in the lightcone to make tomographic analyses in real and redshift space. Our analysis shows the simulation reproduces the Kaiser effect on large scales, whereas we find a significant suppression of power on non-linear scales relative to the real space clustering. We complete our validation by presenting an analysis of the three-point correlation function in this and previous MICE simulations, finding further evidence for mass-resolution effects. This is the first of a series of three papers in which we present the MICE-GC simulation, along with a wide and deep mock galaxy catalogue built from it. This mock is made publicly available through a dedicated web portal, http://cosmohub.pic.es.
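    The clustering validation described above rests on two-point correlation estimates from pair counts. A minimal version (the natural estimator DD/RR - 1 in a small periodic box) is sketched below; it is vastly simpler than the MICE-GC analysis pipeline, which works on lightcone catalogues with the Landy-Szalay estimator, and all sizes here are illustrative.

    ```python
    import numpy as np

    def xi_natural(data, box, edges, seed=0):
        """Natural two-point correlation estimator xi(r) = DD/RR - 1
        from pair counts in a periodic box, using a random catalogue of
        the same size as the data."""
        rng = np.random.default_rng(seed)
        rand = rng.uniform(0, box, size=data.shape)

        def pair_hist(p):
            d = p[:, None, :] - p[None, :, :]
            d = d - box * np.round(d / box)      # minimum-image wrap
            r = np.sqrt((d ** 2).sum(-1))
            iu = np.triu_indices(len(p), k=1)    # each pair counted once
            n_pairs = len(p) * (len(p) - 1) / 2
            return np.histogram(r[iu], bins=edges)[0] / n_pairs

        return pair_hist(data) / pair_hist(rand) - 1.0

    # Sanity check: an unclustered (Poisson) sample should give xi ~ 0.
    pts = np.random.default_rng(1).uniform(0, 100.0, size=(800, 3))
    xi = xi_natural(pts, box=100.0, edges=np.linspace(5.0, 25.0, 5))
    ```

    Real analyses replace the brute-force O(N^2) pair count with tree or grid methods, but the estimator itself is this ratio of pair-count histograms.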

  10. cuTauLeaping: A GPU-Powered Tau-Leaping Stochastic Simulator for Massive Parallel Analyses of Biological Systems

    PubMed Central

    Besozzi, Daniela; Pescini, Dario; Mauri, Giancarlo

    2014-01-01

    Tau-leaping is a stochastic simulation algorithm that efficiently reconstructs the temporal evolution of biological systems, modeled according to the stochastic formulation of chemical kinetics. The analysis of dynamical properties of these systems in physiological and perturbed conditions usually requires the execution of a large number of simulations, leading to high computational costs. Since each simulation can be executed independently from the others, a massive parallelization of tau-leaping can bring substantial reductions in overall running time. The emerging field of General Purpose Graphic Processing Units (GPGPU) provides power-efficient high-performance computing at a relatively low cost. In this work we introduce cuTauLeaping, a stochastic simulator of biological systems that makes use of GPGPU computing to execute multiple parallel tau-leaping simulations, by fully exploiting Nvidia's Fermi GPU architecture. We show how a considerable computational speedup is achieved on the GPU by partitioning the execution of tau-leaping into multiple separated phases, and we describe how to avoid some implementation pitfalls related to the scarcity of memory resources on the GPU streaming multiprocessors. Our results show that cuTauLeaping largely outperforms the CPU-based tau-leaping implementation when the number of parallel simulations increases, with a break-even point directly depending on the size of the biological system and on the complexity of its emergent dynamics. In particular, cuTauLeaping is exploited to investigate the probability distribution of bistable states in the Schlögl model, and to carry out a bidimensional parameter sweep analysis to study the oscillatory regimes in the Ras/cAMP/PKA pathway in S. cerevisiae. PMID:24663957
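    The core tau-leaping update that cuTauLeaping parallelizes can be sketched in a few lines: within each leap of length tau, every reaction fires a Poisson-distributed number of times. This single-trajectory CPU sketch uses a fixed step size, with no adaptive tau selection or critical-reaction handling, and the decay model is a minimal illustration rather than one of the paper's case studies.

    ```python
    import numpy as np

    def tau_leap(x0, stoich, propensity, tau, n_steps, seed=0):
        """Basic (non-adaptive) tau-leaping for a chemical system.
        stoich has one row per reaction and one column per species;
        propensity(x) returns one rate per reaction."""
        rng = np.random.default_rng(seed)
        x = np.array(x0, dtype=float)
        traj = [x.copy()]
        for _ in range(n_steps):
            a = propensity(x)                    # current reaction propensities
            k = rng.poisson(a * tau)             # firings of each reaction
            x = np.maximum(x + stoich.T @ k, 0)  # clamp to avoid negative counts
            traj.append(x.copy())
        return np.array(traj)

    # Irreversible decay X -> 0 with rate constant 0.5: one species,
    # one reaction consuming one X per firing.
    stoich = np.array([[-1]])
    traj = tau_leap([1000], stoich, lambda x: np.array([0.5 * x[0]]),
                    tau=0.1, n_steps=50)
    ```

    Because every trajectory is independent, a GPU implementation simply assigns one such loop per thread, which is the source of the speedup the abstract reports.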

  11. Adaptive screening for depression--recalibration of an item bank for the assessment of depression in persons with mental and somatic diseases and evaluation in a simulated computer-adaptive test environment.

    PubMed

    Forkmann, Thomas; Kroehne, Ulf; Wirtz, Markus; Norra, Christine; Baumeister, Harald; Gauggel, Siegfried; Elhan, Atilla Halil; Tennant, Alan; Boecker, Maren

    2013-11-01

    This study conducted a simulation of computer-adaptive testing based on the Aachen Depression Item Bank (ADIB), which was developed for the assessment of depression in persons with somatic diseases. Prior to the computer-adaptive test simulation, the ADIB was newly calibrated. Recalibration was performed in a sample of 161 patients treated for a depressive syndrome, 103 patients from cardiology, and 103 patients from otorhinolaryngology (mean age 44.1, SD=14.0; 44.7% female) and was cross-validated in a sample of 117 patients undergoing rehabilitation for cardiac diseases (mean age 58.4, SD=10.5; 24.8% female). Unidimensionality of the item bank was checked and a Rasch analysis was performed that evaluated local dependency (LD), differential item functioning (DIF), item fit and reliability. CAT simulation was conducted with the total sample and additional simulated data. Recalibration resulted in a strictly unidimensional item bank with 36 items, showing good Rasch model fit (item fit residuals<|2.5|) and no DIF or LD. CAT simulation revealed that 13 items on average were necessary to estimate depression in the range of -2 and +2 logits when terminating at SE≤0.32, and 4 items when using SE≤0.50. Receiver Operating Characteristic analysis showed that θ estimates based on the CAT algorithm have good criterion validity with regard to depression diagnoses (Area Under the Curve≥.78 for all cut-off criteria). The recalibration of the ADIB succeeded, and the simulation studies conducted suggest that it has good screening performance in the samples investigated and that it may reasonably add to the improvement of depression assessment. © 2013.
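    The loop such a CAT simulation runs can be sketched under the Rasch model: administer the most informative remaining item, update the ability estimate, and stop once the standard error falls below the chosen threshold. The item difficulties and the grid-based EAP-style estimator below are illustrative stand-ins, not the recalibrated ADIB parameters or the study's exact estimation method.

    ```python
    import numpy as np

    def simulate_cat(true_theta, difficulties, se_stop, seed=0):
        """One simulated computer-adaptive test under the Rasch model.
        Selection: the remaining item whose difficulty is closest to the
        current ability estimate (where Rasch information peaks).
        Estimation: expected a posteriori on a grid with a flat prior,
        using the posterior SD as the standard error."""
        rng = np.random.default_rng(seed)
        grid = np.linspace(-4.0, 4.0, 801)
        loglik = np.zeros_like(grid)
        remaining = list(range(len(difficulties)))
        theta_hat, n_used = 0.0, 0
        while remaining:
            j = min(remaining, key=lambda i: abs(difficulties[i] - theta_hat))
            remaining.remove(j)
            n_used += 1
            p_true = 1.0 / (1.0 + np.exp(-(true_theta - difficulties[j])))
            correct = rng.random() < p_true      # simulated item response
            p_grid = 1.0 / (1.0 + np.exp(-(grid - difficulties[j])))
            loglik += np.log(p_grid if correct else 1.0 - p_grid)
            post = np.exp(loglik - loglik.max())
            post /= post.sum()
            theta_hat = float((grid * post).sum())
            se = float(np.sqrt(((grid - theta_hat) ** 2 * post).sum()))
            if se <= se_stop and n_used >= 3:
                break
        return theta_hat, n_used

    bank = np.linspace(-3.0, 3.0, 36)            # 36 items, as after recalibration
    theta_hat, n_items = simulate_cat(true_theta=1.0, difficulties=bank,
                                      se_stop=0.50)
    ```

    Tightening `se_stop` (e.g. 0.32 instead of 0.50, the thresholds compared in the abstract) forces the loop to administer more items before terminating.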

  12. Data for Figures and Tables in Journal Article Assessment of the Effects of Horizontal Grid Resolution on Long-Term Air Quality Trends using Coupled WRF-CMAQ Simulations, doi:10.1016/j.atmosenv.2016.02.036

    EPA Pesticide Factsheets

    The dataset represents the data depicted in the Figures and Tables of a Journal Manuscript with the following abstract: The objective of this study is to determine the adequacy of using a relatively coarse horizontal resolution (i.e. 36 km) to simulate long-term trends of pollutant concentrations and radiation variables with the coupled WRF-CMAQ model. WRF-CMAQ simulations over the continental United State are performed over the 2001 to 2010 time period at two different horizontal resolutions of 12 and 36 km. Both simulations used the same emission inventory and model configurations. Model results are compared both in space and time to assess the potential weaknesses and strengths of using coarse resolution in long-term air quality applications. The results show that the 36 km and 12 km simulations are comparable in terms of trends analysis for both pollutant concentrations and radiation variables. The advantage of using the coarser 36 km resolution is a significant reduction of computational cost, time and storage requirement which are key considerations when performing multiple years of simulations for trend analysis. However, if such simulations are to be used for local air quality analysis, finer horizontal resolution may be beneficial since it can provide information on local gradients. In particular, divergences between the two simulations are noticeable in urban, complex terrain and coastal regions.This dataset is associated with the following publication

  13. A quantum algorithm for obtaining the lowest eigenstate of a Hamiltonian assisted with an ancillary qubit system

    NASA Astrophysics Data System (ADS)

    Bang, Jeongho; Lee, Seung-Woo; Lee, Chang-Woo; Jeong, Hyunseok

    2015-01-01

    We propose a quantum algorithm to obtain the lowest eigenstate of any Hamiltonian simulated by a quantum computer. The proposed algorithm begins with an arbitrary initial state of the simulated system. A finite series of transforms is iteratively applied to the initial state, assisted with an ancillary qubit. The fraction of the lowest eigenstate in the initial state is then amplified up to 1. We prove in the theoretical analysis that our algorithm can faithfully work for any arbitrary Hamiltonian. Numerical analyses are also carried out. We first provide a numerical proof-of-principle demonstration with a simple Hamiltonian in order to compare our scheme with the so-called "Demon-like algorithmic cooling (DLAC)", recently proposed in Xu (Nat Photonics 8:113, 2014). The result shows good agreement with our theoretical analysis, exhibiting behavior comparable to the best 'cooling' with the DLAC method. We then consider a random Hamiltonian model for further analysis of our algorithm. By numerical simulations, we show that the total number of iterations is governed by the difference between the two lowest eigenvalues and by an error defined as the probability that the finally obtained system state is in an unexpected (i.e., not the lowest) eigenstate.
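    The amplification idea has a simple classical analogue: repeatedly applying a spectrum-shifted operator suppresses every eigencomponent relative to the lowest one, so an arbitrary start vector converges to the ground state. The numpy sketch below is only this classical analogue (the actual algorithm applies its transforms coherently with an ancilla qubit), and the random 8x8 Hamiltonian is purely illustrative.

    ```python
    import numpy as np

    def amplify_lowest(H, n_iter=2000, seed=0):
        """Drive an arbitrary start vector toward the lowest eigenstate
        of H by iterating v <- (s*I - H) v with s above the spectrum:
        each step multiplies the eigencomponent at energy E by (s - E),
        so the lowest-E fraction is amplified toward 1."""
        rng = np.random.default_rng(seed)
        s = np.linalg.eigvalsh(H).max() + 1.0    # shift above the top eigenvalue
        v = rng.normal(size=H.shape[0])
        for _ in range(n_iter):
            v = s * v - H @ v
            v /= np.linalg.norm(v)
        return float(v @ H @ v), v               # Rayleigh quotient ~ E0

    rng = np.random.default_rng(2)
    M = rng.normal(size=(8, 8))
    H = (M + M.T) / 2                            # random symmetric "Hamiltonian"
    e0, ground = amplify_lowest(H)
    ```

    The convergence rate is set by the ratio (s - E1)/(s - E0), i.e. by the gap between the two lowest eigenvalues, mirroring the iteration-count behavior the abstract describes.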

  14. Effects of Solder Temperature on Pin Through-Hole during Wave Soldering: Thermal-Fluid Structure Interaction Analysis

    PubMed Central

    Abdul Aziz, M. S.; Abdullah, M. Z.; Khor, C. Y.

    2014-01-01

    An efficient simulation technique was proposed to examine the thermal-fluid structure interaction in the effects of solder temperature on pin through-hole during wave soldering. This study investigated the capillary flow behavior as well as the displacement, temperature distribution, and von Mises stress of a pin passed through a solder material. A single pin through-hole connector mounted on a printed circuit board (PCB) was simulated using a 3D model solved by FLUENT. The ABAQUS solver was employed to analyze the pin structure at solder temperatures of 456.15 K (183°C) < T < 643.15 K (370°C). Both solvers were coupled by the real time coupling software and mesh-based parallel code coupling interface during analysis. In addition, an experiment was conducted to measure the temperature difference (ΔT) between the top and the bottom of the pin. Analysis results showed that an increase in temperature increased the structural displacement and the von Mises stress. Filling time exhibited a quadratic relationship to the increment of temperature. The deformation of pin showed a linear correlation to the temperature. The ΔT obtained from the simulation and the experimental method were validated. This study elucidates and clearly illustrates wave soldering for engineers in the PCB assembly industry. PMID:25225638

  15. Chaos and simple determinism in reversed field pinch plasmas: Nonlinear analysis of numerical simulation and experimental data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watts, Christopher A.

    In this dissertation the possibility that chaos and simple determinism are governing the dynamics of reversed field pinch (RFP) plasmas is investigated. To properly assess this possibility, data from both numerical simulations and experiment are analyzed. A large repertoire of nonlinear analysis techniques is used to identify low dimensional chaos in the data. These tools include phase portraits and Poincare sections, correlation dimension, the spectrum of Lyapunov exponents and short term predictability. In addition, nonlinear noise reduction techniques are applied to the experimental data in an attempt to extract any underlying deterministic dynamics. Two model systems are used to simulate the plasma dynamics. These are the DEBS code, which models global RFP dynamics, and the dissipative trapped electron mode (DTEM) model, which models drift wave turbulence. Data from both simulations show strong indications of low dimensional chaos and simple determinism. Experimental data were obtained from the Madison Symmetric Torus RFP and consist of a wide array of both global and local diagnostic signals. None of the signals shows any indication of low dimensional chaos or simple determinism. Moreover, most of the analysis tools indicate the experimental system is very high dimensional with properties similar to noise. Nonlinear noise reduction is unsuccessful at extracting an underlying deterministic system.
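    Of the tools listed, the correlation dimension is the easiest to sketch: a Grassberger-Procaccia estimate on a time-delay embedding. The logistic map below stands in for RFP diagnostic signals, and the radii and series length are illustrative choices, not the dissertation's settings.

    ```python
    import numpy as np

    def correlation_dimension(x, m=3, lag=1):
        """Grassberger-Procaccia estimate: delay-embed the scalar series
        in m dimensions, form the correlation sum C(r) over pairwise
        distances, and fit the slope of log C(r) against log r.
        Low-dimensional determinism gives a small slope; noise tends
        toward the embedding dimension m."""
        n = len(x) - (m - 1) * lag
        emb = np.column_stack([x[i * lag:i * lag + n] for i in range(m)])
        diff = emb[:, None, :] - emb[None, :, :]
        dists = np.sqrt((diff ** 2).sum(-1))[np.triu_indices(n, k=1)]
        radii = np.quantile(dists, [0.02, 0.05, 0.1, 0.2])
        c = np.array([(dists < r).mean() for r in radii])
        return np.polyfit(np.log(radii), np.log(c), 1)[0]

    # Chaotic logistic map: its attractor is (at most) one-dimensional,
    # so the estimate should sit far below that of white noise embedded
    # the same way.
    x = np.empty(1200); x[0] = 0.3
    for i in range(1, len(x)):
        x[i] = 4.0 * x[i - 1] * (1.0 - x[i - 1])
    dim_chaos = correlation_dimension(x)
    dim_noise = correlation_dimension(np.random.default_rng(0).uniform(size=1200))
    ```

    The noise-like experimental signals described in the abstract behave like the second case: the estimated dimension keeps climbing with the embedding dimension instead of saturating at a low value.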

  16. Assessment of three-dimensional joint kinematics of the upper limb during simulated swimming using wearable inertial-magnetic measurement units.

    PubMed

    Fantozzi, Silvia; Giovanardi, Andrea; Magalhães, Fabrício Anício; Di Michele, Rocco; Cortesi, Matteo; Gatta, Giorgio

    2016-01-01

    The analysis of joint kinematics during swimming plays a fundamental role both in sports conditioning and in clinical contexts. Contrary to traditional video analysis, wearable inertial-magnetic measurement units (IMMUs) make it possible to analyse both the underwater and aerial phases of the swimming stroke over the whole length of the swimming pool. Furthermore, the rapid calibration and short data processing required by IMMUs provide coaches and athletes with immediate feedback on swimming kinematics during training. This study aimed to develop a protocol to assess the three-dimensional kinematics of the upper limbs during swimming using IMMUs. Kinematics were evaluated during simulated dry-land swimming trials performed in the laboratory by eight swimmers. A stereo-photogrammetric system was used as the gold standard. The results showed high coefficient of multiple correlation (CMC) values, with median (first-third quartile) of 0.97 (0.93-0.95) and 0.99 (0.97-0.99) for simulated front-crawl and breaststroke, respectively. Furthermore, the joint angles were estimated with an accuracy increasing from distal to proximal joints, with wrist indices showing median CMC values always higher than 0.90. The present findings represent an important step towards the practical use of technology based on IMMUs for the kinematic analysis of swimming in applied contexts.
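    The agreement metric reported above can be computed directly from paired waveforms. The CMC formulation and the synthetic joint-angle signals below are common choices assumed for illustration, not necessarily the exact variant used in the paper.

    ```python
    import numpy as np

    def cmc(waveforms):
        """Coefficient of multiple correlation between repeated
        measurements of one joint-angle waveform (rows = measurement
        systems, columns = time frames). Values near 1 mean the
        waveforms agree in both shape and magnitude."""
        y = np.asarray(waveforms, dtype=float)
        g, f = y.shape
        frame_mean = y.mean(axis=0)              # mean across systems, per frame
        grand_mean = y.mean()
        num = ((y - frame_mean) ** 2).sum() / (f * (g - 1))
        den = ((y - grand_mean) ** 2).sum() / (g * f - 1)
        return np.sqrt(1.0 - num / den)

    # Synthetic elbow-flexion-like cycle: an "IMMU" trace that closely
    # tracks the "optical" reference should give a CMC close to 1.
    t = np.linspace(0, 1, 101)
    optical = 60 * np.sin(np.pi * t) + 20        # degrees, one stroke cycle
    immu = optical + np.random.default_rng(0).normal(0, 1.0, t.size)
    r = cmc(np.vstack([optical, immu]))
    ```

    Because the numerator pools between-system disagreement frame by frame while the denominator pools total waveform variance, a constant offset between systems lowers the CMC even when the curve shapes match.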

  17. Effects of solder temperature on pin through-hole during wave soldering: thermal-fluid structure interaction analysis.

    PubMed

    Aziz, M S Abdul; Abdullah, M Z; Khor, C Y

    2014-01-01

    An efficient simulation technique was proposed to examine the thermal-fluid structure interaction in the effects of solder temperature on pin through-hole during wave soldering. This study investigated the capillary flow behavior as well as the displacement, temperature distribution, and von Mises stress of a pin passed through a solder material. A single pin through-hole connector mounted on a printed circuit board (PCB) was simulated using a 3D model solved by FLUENT. The ABAQUS solver was employed to analyze the pin structure at solder temperatures of 456.15 K (183°C) < T < 643.15 K (370°C). Both solvers were coupled by the real time coupling software and mesh-based parallel code coupling interface during analysis. In addition, an experiment was conducted to measure the temperature difference (ΔT) between the top and the bottom of the pin. Analysis results showed that an increase in temperature increased the structural displacement and the von Mises stress. Filling time exhibited a quadratic relationship to the increment of temperature. The deformation of pin showed a linear correlation to the temperature. The ΔT obtained from the simulation and the experimental method were validated. This study elucidates and clearly illustrates wave soldering for engineers in the PCB assembly industry.

  18. A full-spectrum analysis of high-speed train interior noise under multi-physical-field coupling excitations

    NASA Astrophysics Data System (ADS)

    Zheng, Xu; Hao, Zhiyong; Wang, Xu; Mao, Jie

    2016-06-01

    High-speed-train (HST) interior noise at low, medium, and high frequencies can be simulated with finite element analysis (FEA) or boundary element analysis (BEA), hybrid finite element-statistical energy analysis (FEA-SEA), and statistical energy analysis (SEA), respectively. First, a new method named statistical acoustic energy flow (SAEF) is proposed, which can be applied to full-spectrum HST interior noise simulation (covering low, medium, and high frequencies) with a single model. In an SAEF model, the multi-physical-field coupling excitations are first fully considered and coupled to excite the interior noise. The interior noise attenuated by the sound-insulation panels of the carriage is simulated by modeling the acoustic energy flowing from the exterior excitations into the interior acoustic cavities. Rigid multi-body dynamics, fast multipole BEA, and large-eddy simulation with indirect boundary element analysis are employed to extract the multi-physical-field excitations, namely the wheel-rail interaction/secondary suspension forces, the wheel-rail rolling noise, and the aerodynamic noise, respectively. All the peak values of the simulated acoustic excitations and their frequency bands are validated against those from a noise-source identification test. In addition, the measured equipment noise inside the equipment compartment is used as one of the excitation sources contributing to the interior noise. Second, a fully trimmed FE carriage model is constructed; the simulated mode shapes and frequencies agree well with the measured ones, which validates the global FE carriage model as well as the local FE models of the aluminum-alloy/trim composite panels, and thereby indirectly validates the sound-transmission-loss model of each composite panel. Finally, the SAEF model of the carriage is constructed from the validated FE model and driven by the multi-physical-field excitations.
The results show that the trend of the simulated 1/3-octave-band sound pressure spectrum agrees well with that of the on-site measurement. The deviation between the simulated and measured overall sound pressure level (SPL) is 2.6 dB(A), well within the engineering tolerance limit, which validates the SAEF model for full-spectrum analysis of high-speed-train interior noise.
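The overall-SPL comparison above rests on energy-summing the 1/3-octave band levels. A minimal sketch of that summation; the band levels below are hypothetical placeholders, not the measured spectrum:

```python
import math

def overall_spl(band_levels_db):
    """Energy-sum 1/3-octave band levels into an overall SPL (dB)."""
    return 10.0 * math.log10(sum(10.0 ** (level / 10.0) for level in band_levels_db))

# Hypothetical interior-noise 1/3-octave band levels in dB(A)
bands = [60.0, 63.0, 58.0]
print(f"overall SPL: {overall_spl(bands):.1f} dB(A)")  # ≈ 65.6 dB(A)
```

The loudest bands dominate the sum, which is why a 2.6 dB(A) overall deviation is a stringent full-spectrum check.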

  19. Exploring the Origin of Differential Binding Affinities of Human Tubulin Isotypes αβII, αβIII and αβIV for DAMA-Colchicine Using Homology Modelling, Molecular Docking and Molecular Dynamics Simulations

    PubMed Central

    Panda, Dulal; Kunwar, Ambarish

    2016-01-01

    Tubulin isotypes are found to play an important role in regulating microtubule dynamics. The isotype composition is also thought to contribute to the development of drug resistance, as tubulin isotypes show differential binding affinities for various anti-cancer agents. Tubulin isotypes αβII, αβIII and αβIV show differential binding affinity for colchicine. However, the origin of this differential binding affinity is not well understood at the molecular level. Here, we investigate the origin of the differential binding affinity of a colchicine analogue, N-deacetyl-N-(2-mercaptoacetyl)-colchicine (DAMA-colchicine), for human αβII, αβIII and αβIV isotypes, employing sequence analysis, homology modeling, molecular docking, molecular dynamics simulation and MM-GBSA binding free energy calculations. The sequence analysis shows that the residue compositions differ in the colchicine binding pocket of αβII and αβIII, whereas no such difference is present in the αβIV tubulin isotype. Further, the molecular docking and molecular dynamics simulation results show that residue differences present at the colchicine binding pocket weaken the bonding interactions and perturb the correct binding of DAMA-colchicine at the interface of the αβII and αβIII tubulin isotypes. Post molecular dynamics simulation analysis suggests that these residue variations affect the structure and dynamics of the αβII and αβIII tubulin isotypes, which in turn affect the binding of DAMA-colchicine. Further, the binding free-energy calculations show that the αβIV tubulin isotype has the highest binding free energy and αβIII the lowest binding free energy for DAMA-colchicine. The order of binding free energy for DAMA-colchicine is αβIV ≃ αβII >> αβIII. Thus, our computational approaches provide insight into the effect of residue variations on the differential binding of the αβII, αβIII and αβIV tubulin isotypes with DAMA-colchicine and may help in designing new analogues with higher binding affinities for tubulin isotypes. 
PMID:27227832

  20. Cost-effectiveness analysis of EGFR mutation testing in patients with non-small cell lung cancer (NSCLC) with gefitinib or carboplatin-paclitaxel.

    PubMed

    Arrieta, Oscar; Anaya, Pablo; Morales-Oyarvide, Vicente; Ramírez-Tirado, Laura Alejandra; Polanco, Ana C

    2016-09-01

    Assess the cost-effectiveness of an EGFR-mutation testing strategy for advanced NSCLC in first-line therapy with either gefitinib or carboplatin-paclitaxel in Mexican institutions. Cost-effectiveness analysis using a discrete event simulation (DES) model to simulate two therapeutic strategies in patients with advanced NSCLC. Strategy one included patients tested for EGFR mutations, with therapy given accordingly. Strategy two included chemotherapy for all patients without testing. All results are presented in 2014 US dollars. The analysis used data on the Mexican frequency of EGFR mutations. A univariate sensitivity analysis was conducted on EGFR-mutation prevalence. Progression-free survival (PFS) transition probabilities were estimated from IPASS data and simulated with a Weibull distribution, with parallel trials run to calculate a probabilistic sensitivity analysis. PFS of patients in the testing strategy was 6.76 months (95 % CI 6.10-7.44) vs 5.85 months (95 % CI 5.43-6.29) in the non-testing group. The one-way sensitivity analysis showed that PFS has a direct relationship with EGFR-mutation prevalence, while the ICER and testing cost have an inverse relationship with EGFR-mutation prevalence. The probabilistic sensitivity analysis showed that all iterations had incremental costs and incremental PFS for strategy one in comparison with strategy two. There is a direct relationship between the ICER and the cost of EGFR testing, and an inverse relationship with the prevalence of EGFR mutations; when prevalence is >10 %, the ICER remains constant. This study could inform Mexican and Latin American health policies regarding mutation detection testing and treatment for advanced NSCLC.
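The DES strategy comparison can be sketched by sampling Weibull PFS times for each arm and forming the incremental cost-effectiveness ratio (ICER). The Weibull parameters and per-patient costs below are hypothetical placeholders, not the study's inputs:

```python
import random

def mean_pfs_months(shape, scale, n=100_000, seed=1):
    """Monte Carlo mean progression-free survival from a Weibull model.

    Note: random.weibullvariate takes (alpha=scale, beta=shape).
    """
    rng = random.Random(seed)
    return sum(rng.weibullvariate(scale, shape) for _ in range(n)) / n

# Hypothetical Weibull parameters and per-patient costs (not from the study)
pfs_test  = mean_pfs_months(shape=1.4, scale=7.4)   # EGFR-testing strategy
pfs_chemo = mean_pfs_months(shape=1.4, scale=6.4)   # chemotherapy for all
cost_test, cost_chemo = 21_000.0, 18_000.0          # USD, hypothetical

icer = (cost_test - cost_chemo) / (pfs_test - pfs_chemo)  # USD per PFS-month gained
print(f"incremental PFS: {pfs_test - pfs_chemo:.2f} months, ICER: {icer:.0f} USD/month")
```

A probabilistic sensitivity analysis repeats this with parameters drawn from their uncertainty distributions rather than fixed values.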

  1. Future Integrated Architecture (FIA): A Proposed Space Internetworking Architecture for Future Operations

    DTIC Science & Technology

    2008-09-01

    WiMax “as a ‘last mile’ broadband wireless access (BWA) alternative to cable modem service, telephone company Digital Subscriber Line (DSL) or T1/E1...MUOS at 56k, shows results similar to another simulation with different circuit configurations. Analysis presented in this thesis shows adequate

  2. Systematic analysis of signaling pathways using an integrative environment.

    PubMed

    Visvanathan, Mahesh; Breit, Marc; Pfeifer, Bernhard; Baumgartner, Christian; Modre-Osprian, Robert; Tilg, Bernhard

    2007-01-01

    Understanding the biological processes of signaling pathways as a whole system requires an integrative software environment with comprehensive capabilities. The environment should combine tools for pathway design, visualization and simulation with a knowledge base concerning signaling pathways. In this paper we introduce a new integrative environment for the systematic analysis of signaling pathways. This system includes environments for pathway design, visualization and simulation, and a knowledge base that combines biological and modeling information concerning signaling pathways, providing a basic understanding of the biological system, its structure and its functioning. The system is designed with a client-server architecture. It contains a pathway-design environment and a simulation environment as upper layers, with a relational knowledge base as the underlying layer. The TNFα-mediated NF-κB signal transduction pathway model was designed and tested using our integrative framework; it was also useful for defining the structure of the knowledge base. Sensitivity analysis of this specific pathway was performed, providing simulation data, and the model was then extended, showing promising initial results. The proposed system offers a holistic view of pathways containing biological and modeling data. It will help us to perform biological interpretation of the simulation results and thus contribute to a better understanding of the biological system for drug identification.
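A pathway sensitivity analysis of this kind can be sketched with a toy ODE cascade; the two-species model and all rate constants below are hypothetical illustration values, not the TNFα/NF-κB model from the paper:

```python
def simulate(k_act=0.5, k_deact=0.2, k_on=0.3, k_off=0.1, t_end=50.0, dt=0.01):
    """Euler integration of a toy two-step signaling cascade.

    A stimulus activates kinase K; active K drives activation of a
    transcription factor F. All rate constants are hypothetical.
    """
    K, F = 0.0, 0.0
    for _ in range(int(t_end / dt)):
        dK = k_act * (1.0 - K) - k_deact * K      # kinase activation/deactivation
        dF = k_on * K * (1.0 - F) - k_off * F     # K-driven activation of F
        K += dK * dt
        F += dF * dt
    return F

# Local sensitivity of the steady-state output to k_on (finite difference)
base = simulate()
pert = simulate(k_on=0.3 * 1.01)
sensitivity = (pert - base) / (0.3 * 0.01)
print(f"F_ss = {base:.3f}, dF/dk_on = {sensitivity:.3f}")
```

Repeating the perturbation for every rate constant ranks the reactions by their influence on the pathway output, which is the essence of the sensitivity analysis described.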

  3. Principal Component Analysis of Lipid Molecule Conformational Changes in Molecular Dynamics Simulations.

    PubMed

    Buslaev, Pavel; Gordeliy, Valentin; Grudinin, Sergei; Gushchin, Ivan

    2016-03-08

    Molecular dynamics simulations of lipid bilayers are ubiquitous nowadays. Usually, either global properties of the bilayer or some particular characteristics of each lipid molecule are evaluated in such simulations, but the structural properties of the molecules as a whole are rarely studied. Here, we show how a comprehensive quantitative description of conformational space and dynamics of a single lipid molecule can be achieved via the principal component analysis (PCA). We illustrate the approach by analyzing and comparing simulations of DOPC bilayers obtained using eight different force fields: all-atom generalized AMBER, CHARMM27, CHARMM36, Lipid14, and Slipids and united-atom Berger, GROMOS43A1-S3, and GROMOS54A7. Similarly to proteins, most of the structural variance of a lipid molecule can be described by only a few principal components. These major components are similar in different simulations, although there are notable distinctions between the older and newer force fields and between the all-atom and united-atom force fields. The DOPC molecules in the simulations generally equilibrate on the time scales of tens to hundreds of nanoseconds. The equilibration is the slowest in the GAFF simulation and the fastest in the Slipids simulation. Somewhat unexpectedly, the equilibration in the united-atom force fields is generally slower than in the all-atom force fields. Overall, there is a clear separation between the more variable previous generation force fields and significantly more similar new generation force fields (CHARMM36, Lipid14, Slipids). We expect that the presented approaches will be useful for quantitative analysis of conformations and dynamics of individual lipid molecules in other simulations of lipid bilayers.
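The PCA workflow the authors describe (flatten aligned conformations, diagonalize the coordinate covariance, rank modes by variance) can be sketched as follows; the trajectory here is synthetic, not a DOPC simulation:

```python
import numpy as np

def pca_modes(coords):
    """PCA of molecular conformations.

    coords: (n_frames, n_atoms, 3) array of aligned coordinates.
    Returns eigenvalues (variance per mode) and eigenvectors (the modes),
    sorted with the largest-variance mode first.
    """
    n_frames = coords.shape[0]
    X = coords.reshape(n_frames, -1)       # flatten to (frames, 3N)
    X = X - X.mean(axis=0)                 # remove the mean structure
    cov = X.T @ X / (n_frames - 1)         # coordinate covariance matrix
    vals, vecs = np.linalg.eigh(cov)       # eigh returns ascending order
    order = np.argsort(vals)[::-1]
    return vals[order], vecs[:, order]

# Synthetic trajectory: 200 frames, 4 "atoms", dominant motion along one mode
rng = np.random.default_rng(0)
mode = rng.standard_normal(12)
mode /= np.linalg.norm(mode)
traj = rng.standard_normal((200, 1)) * mode + 0.05 * rng.standard_normal((200, 12))
vals, vecs = pca_modes(traj.reshape(200, 4, 3))
print(f"fraction of variance in first mode: {vals[0] / vals.sum():.2f}")
```

As in the paper, a few leading eigenvalues capturing most of the variance indicates a low-dimensional conformational space; convergence of the eigenvalue spectrum over time windows is one way to assess equilibration.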

  4. Detection of anatomical changes in lung cancer patients with 2D time-integrated, 2D time-resolved and 3D time-integrated portal dosimetry: a simulation study

    NASA Astrophysics Data System (ADS)

    Wolfs, Cecile J. A.; Brás, Mariana G.; Schyns, Lotte E. J. R.; Nijsten, Sebastiaan M. J. J. G.; van Elmpt, Wouter; Scheib, Stefan G.; Baltes, Christof; Podesta, Mark; Verhaegen, Frank

    2017-08-01

    The aim of this work is to assess the performance of 2D time-integrated (2D-TI), 2D time-resolved (2D-TR) and 3D time-integrated (3D-TI) portal dosimetry in detecting dose discrepancies between the planned and (simulated) delivered dose caused by simulated changes in the anatomy of lung cancer patients. For six lung cancer patients, tumor shift, tumor regression and pleural effusion are simulated by modifying their CT images. Based on the modified CT images, time-integrated (TI) and time-resolved (TR) portal dose images (PDIs) are simulated and 3D-TI doses are calculated. The modified and original PDIs and 3D doses are compared by a gamma analysis with various gamma criteria. Furthermore, the difference in the D95% (ΔD95%) of the GTV is calculated and used as a gold standard. The correlation between the gamma fail rate and the ΔD95% is investigated, as well as the sensitivity and specificity of all combinations of portal dosimetry method, gamma criteria and gamma fail rate threshold. On the individual patient level, there is a correlation between the gamma fail rate and the ΔD95%, which cannot be found at the group level. The sensitivity and specificity analysis showed that no single combination of portal dosimetry method, gamma criteria and gamma fail rate threshold can detect all simulated anatomical changes. This work shows that it is more beneficial to relate portal dosimetry and DVH analysis at the patient level than to try to quantify a relationship for a group of patients. With regard to optimizing sensitivity and specificity, different combinations of portal dosimetry method, gamma criteria and gamma fail rate threshold should be used to optimally detect certain types of anatomical changes.

  5. Detection of anatomical changes in lung cancer patients with 2D time-integrated, 2D time-resolved and 3D time-integrated portal dosimetry: a simulation study.

    PubMed

    Wolfs, Cecile J A; Brás, Mariana G; Schyns, Lotte E J R; Nijsten, Sebastiaan M J J G; van Elmpt, Wouter; Scheib, Stefan G; Baltes, Christof; Podesta, Mark; Verhaegen, Frank

    2017-07-12

    The aim of this work is to assess the performance of 2D time-integrated (2D-TI), 2D time-resolved (2D-TR) and 3D time-integrated (3D-TI) portal dosimetry in detecting dose discrepancies between the planned and (simulated) delivered dose caused by simulated changes in the anatomy of lung cancer patients. For six lung cancer patients, tumor shift, tumor regression and pleural effusion are simulated by modifying their CT images. Based on the modified CT images, time-integrated (TI) and time-resolved (TR) portal dose images (PDIs) are simulated and 3D-TI doses are calculated. The modified and original PDIs and 3D doses are compared by a gamma analysis with various gamma criteria. Furthermore, the difference in the D95% (ΔD95%) of the GTV is calculated and used as a gold standard. The correlation between the gamma fail rate and the ΔD95% is investigated, as well as the sensitivity and specificity of all combinations of portal dosimetry method, gamma criteria and gamma fail rate threshold. On the individual patient level, there is a correlation between the gamma fail rate and the ΔD95%, which cannot be found at the group level. The sensitivity and specificity analysis showed that no single combination of portal dosimetry method, gamma criteria and gamma fail rate threshold can detect all simulated anatomical changes. This work shows that it is more beneficial to relate portal dosimetry and DVH analysis at the patient level than to try to quantify a relationship for a group of patients. With regard to optimizing sensitivity and specificity, different combinations of portal dosimetry method, gamma criteria and gamma fail rate threshold should be used to optimally detect certain types of anatomical changes.
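The gamma analysis used in both versions of this study combines a dose-difference and a distance-to-agreement criterion. A minimal 1D global-gamma sketch; the profiles and the 3%/3 mm criteria are illustrative only:

```python
import math

def gamma_index(ref, evl, spacing_mm, dd=0.03, dta_mm=3.0):
    """1D global gamma analysis between reference and evaluated dose profiles.

    dd: dose-difference criterion as a fraction of the global maximum dose;
    dta_mm: distance-to-agreement criterion. Returns the gamma fail rate,
    i.e. the fraction of reference points with gamma > 1.
    """
    d_max = max(ref)
    fails = 0
    for i, d_ref in enumerate(ref):
        best = float("inf")
        for j, d_evl in enumerate(evl):
            dose_term = ((d_evl - d_ref) / (dd * d_max)) ** 2
            dist_term = (((j - i) * spacing_mm) / dta_mm) ** 2
            best = min(best, dose_term + dist_term)
        if math.sqrt(best) > 1.0:
            fails += 1
    return fails / len(ref)

# Hypothetical triangular dose profile; evaluated dose has a 5% global overdose
ref = [float(100 - abs(k - 25)) for k in range(51)]
evl = [d * 1.05 for d in ref]
print(f"gamma fail rate: {gamma_index(ref, evl, spacing_mm=1.0):.2f}")
```

Sweeping the criteria and the fail-rate threshold, as the study does, trades sensitivity against specificity for each type of anatomical change.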

  6. Study on Roadheader Cutting Load at Different Properties of Coal and Rock

    PubMed Central

    2013-01-01

    The mechanism of the cutting process of a roadheader cutting head was investigated, and the influence of coal and rock properties on the cutting load was analyzed in depth. To address the inability of the traditional cutting-load calculation method to fully express the complex cutting process of the cutting head, finite element simulation was proposed to model the dynamic cutting process. To capture the coal and rock characteristics that affect the cutting load, repeated simulations with different firmness coefficients were run, and the relationship between the three-axis force and the firmness coefficient was derived. A comparison of the cutting-pick load between the simulation results and the theoretical formula showed good consistency. On this basis, the cutting process with the full cutting head was then simulated. The results show that the simulation analysis not only provides a reliable basis for accurate calculation of the cutting-head load and improves the efficiency of cutting-head testing, but also offers a basis for selecting cutting heads for different geological conditions of coal or rock. PMID:24302866

  7. Simulations of the observation of clouds and aerosols with the Experimental Lidar in Space Equipment system.

    PubMed

    Liu, Z; Voelger, P; Sugimoto, N

    2000-06-20

    We carried out a simulation study for the observation of clouds and aerosols with the Japanese Experimental Lidar in Space Equipment (ELISE), which is a two-wavelength backscatter lidar with three detection channels. The National Space Development Agency of Japan plans to launch the ELISE on the Mission Demonstrate Satellite 2 (MDS-2). In the simulations, the lidar return signals for the ELISE are calculated for an artificial, two-dimensional atmospheric model including different types of clouds and aerosols. The signal detection processes are simulated realistically by inclusion of various sources of noise. The lidar signals that are generated are then used as input for simulations of data analysis with inversion algorithms to investigate retrieval of the optical properties of clouds and aerosols. The results demonstrate that the ELISE can provide global data on the structures and optical properties of clouds and aerosols. We also conducted an analysis of the effects of cloud inhomogeneity on retrievals from averaged lidar profiles. We show that the effects are significant for space lidar observations of optically thick broken clouds.
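A lidar return simulation of the kind described (single-scattering lidar equation plus detection noise) can be sketched as follows; the instrument constant and atmospheric profiles are hypothetical, not ELISE parameters:

```python
import math, random

def lidar_return(beta, alpha, dr_m=30.0, C=1.0e14, seed=0):
    """Simulate a backscatter lidar profile from the single-scattering
    lidar equation P(r) = C * beta(r) * T^2(r) / r^2, with Gaussian
    (Poisson-like) detection noise added in each range bin.
    """
    rng = random.Random(seed)
    tau = 0.0
    profile = []
    for i, (b, a) in enumerate(zip(beta, alpha)):
        r = (i + 1) * dr_m
        tau += a * dr_m                             # accumulated optical depth
        mean = C * b * math.exp(-2.0 * tau) / r ** 2
        profile.append(rng.gauss(mean, math.sqrt(mean)))
    return profile

# Hypothetical atmosphere: clear background with an aerosol layer in bins 20-29
beta = [1.0e-6] * 50
alpha = [5.0e-5] * 50
for k in range(20, 30):
    beta[k], alpha[k] = 1.0e-5, 5.0e-4
signal = lidar_return(beta, alpha)
rc = [p * ((i + 1) * 30.0) ** 2 for i, p in enumerate(signal)]  # range-corrected
print(f"layer-to-background ratio (range-corrected): {rc[25] / rc[10]:.1f}")
```

Profiles generated this way can then be fed to inversion algorithms (as in the paper) to test how well optical properties are retrieved in the presence of noise.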

  8. Analysis of wind-resistant and stability for cable tower in cable-stayed bridge with four towers

    NASA Astrophysics Data System (ADS)

    Meng, Yangjun; Li, Can

    2017-06-01

    Wind-speed time-history simulation methods are introduced first, with the harmonic synthesis method described in detail. Second, taking the Chishi bridge as an example and choosing representative sections, tri-component force coefficients are simulated for attack angles between -4° and 4° with the Fluent software, in combination with the design wind speed. The results show that the drag coefficient reaches its maximum at an attack angle of 1°. From measured wind-speed samples, time-history curves of wind speed at the bridge deck and tower top are obtained, and a wind-resistant time-history analysis of tower No. 5 is carried out. The results show that the dynamic coefficients differ between calculation standards, especially for the transverse bending moment; the fluctuating crosswind load does not show a dynamic amplification effect. Under fluctuating wind loads at the bridge deck or tower top, the maximum displacement at the top of the tower and the maximum stress at the bottom of the tower are within the allowable range. The transverse stiffness of the tower is greater than the longitudinal stiffness, so wind-resistance analysis should give priority to the longitudinal direction. Because dynamic coefficients differ between standards, the maximum dynamic coefficient should be used for pseudo-static analysis. Finally, the static stability of the tower is analyzed under different load combinations, and the galloping stability of the cable tower is verified.
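The harmonic synthesis method mentioned first can be sketched as a sum of cosines whose amplitudes come from the target spectrum and whose phases are random; the spectrum below is a hypothetical Davenport-like shape, not the Chishi bridge design spectrum:

```python
import math, random

def harmonic_synthesis(spectrum, freqs_hz, duration_s=600.0, dt_s=0.1, seed=0):
    """Generate a fluctuating wind-speed time history by harmonic synthesis:
    u(t) = sum_i sqrt(2 * S(f_i) * df) * cos(2*pi*f_i*t + phi_i),
    with phi_i drawn uniformly from [0, 2*pi).
    """
    rng = random.Random(seed)
    df = freqs_hz[1] - freqs_hz[0]
    amps = [math.sqrt(2.0 * s * df) for s in spectrum]
    phases = [rng.uniform(0.0, 2.0 * math.pi) for _ in freqs_hz]
    n = round(duration_s / dt_s)
    return [
        sum(a * math.cos(2.0 * math.pi * f * k * dt_s + p)
            for a, f, p in zip(amps, freqs_hz, phases))
        for k in range(n)
    ]

# Hypothetical Davenport-like spectrum sampled on 100 bands from 0.01 to 1.0 Hz
freqs = [0.01 * (i + 1) for i in range(100)]
spectrum = [1.2 / (1.0 + (f / 0.05) ** 2) ** (5.0 / 6.0) for f in freqs]
u_fluct = harmonic_synthesis(spectrum, freqs)

# Sanity check: the sample variance should match the spectral integral
var = sum(u * u for u in u_fluct) / len(u_fluct)
target = sum(spectrum) * 0.01
print(f"simulated variance {var:.3f} vs spectral integral {target:.3f}")
```

The generated fluctuation is added to the mean wind speed before being applied as the time-history load in the pseudo-static or time-history analysis.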

  9. [Remodeling simulation of human femur under bed rest and spaceflight circumstances based on three dimensional finite element analysis].

    PubMed

    Yang, Wenting; Wang, Dongmei; Lei, Zhoujixin; Wang, Chunhui; Chen, Shanguang

    2017-12-01

    Astronauts exposed to weightlessness during long-term spaceflight can suffer loss of bone density and mass because the mechanical stimulus is smaller than its normal value. This study built a three-dimensional finite element model of the human femur to simulate its remodeling during a bed-rest experiment, based on finite element analysis (FEA). The remodeling parameters of the model were validated by comparing experimental and numerical results. The remodeling of the femur in a weightless environment was then simulated, and the remodeling function of time was derived. The loading magnitude and the number of loading cycles on the femur were increased to simulate exercise countermeasures against bone loss. The simulations showed that increasing the loading magnitude is more effective in diminishing bone loss than increasing the number of loading cycles, demonstrating that exercise of sufficient intensity could help resist bone loss during long-term spaceflight. Finally, the bone recovery process after spaceflight was simulated, and the bone resorption rate was found to be larger than the bone formation rate. We therefore advise that astronauts exercise during spaceflight to resist bone loss.
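Remodeling simulations of this kind are often built on a strain-energy-driven density update rule evaluated at each finite element. The sketch below uses a Huiskes-type law with hypothetical parameter values, not the study's validated model:

```python
def remodel(stimulus, days, rho=1.0, k_ref=0.004, B=0.5,
            rho_min=0.05, rho_max=1.75):
    """Strain-energy-driven bone density adaptation (Huiskes-type rule):
    d(rho)/dt = B * (stimulus / rho - k_ref), integrated with daily
    Euler steps and clamped to physiological density bounds.
    stimulus: daily strain-energy density (hypothetical units).
    """
    for _ in range(days):
        rho += B * (stimulus / rho - k_ref)
        rho = min(max(rho, rho_min), rho_max)
    return rho

baseline  = remodel(stimulus=0.004, days=180)   # habitual loading: equilibrium
unloaded  = remodel(stimulus=0.001, days=180)   # weightlessness: bone loss
exercised = remodel(stimulus=0.003, days=180)   # in-flight exercise countermeasure
print(f"density: baseline {baseline:.2f}, unloaded {unloaded:.2f}, "
      f"exercised {exercised:.2f}")
```

In a full FEA workflow the stimulus is recomputed from the stress field after each density update, closing the loop between mechanics and remodeling.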

  10. Nonlinear model analysis of all-optical flip-flop and inverter operations of microring laser

    NASA Astrophysics Data System (ADS)

    Kobayashi, Naoki; Kawamura, Yusaku; Aoki, Ryosuke; Kokubun, Yasuo

    2018-03-01

    We explore a theoretical model of bistability at two adjacent lasing wavelengths from an InGaAs/InGaAsP multiple quantum well (MQW) microring laser. We show that nonlinear effects on the phase and amplitude play significant roles in the lasing operations of the microring laser. Numerical simulations indicate that all-optical flip-flop operations and inverter operations can be observed within the same device by controlling the injection current. The validity of our analysis is confirmed by a comparison of the results for numerical simulations with experimental results of the lasing spectrum. We believe that the analysis presented in this paper will be useful for the future design of all-optical signal processing devices.

  11. Dynamic Simulation and Analysis of Human Walking Mechanism

    NASA Astrophysics Data System (ADS)

    Azahari, Athirah; Siswanto, W. A.; Ngali, M. Z.; Salleh, S. Md.; Yusup, Eliza M.

    2017-01-01

    Behaviour such as gait or posture may reflect a person's physiological condition during daily activities. The characteristics of the human gait-cycle phases are among the important parameters used to describe human movement, whether the gait is normal or abnormal. This research investigates four types of crouch walking (upright, interpolated, crouched and severe) by a simulation approach. The assessment is conducted by examining the parameters of the hamstring muscle, knee joint and ankle joint. The analysis results show that, from a gait-analysis standpoint, crouch walking exhibits a weak walking pattern and posture. A short hamstring and the knee joint are the factors that most influence crouch walking, owing to the excessive hip flexion that typically accompanies knee flexion.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Medeiros, Brian; Williamson, David L.; Olson, Jerry G.

    In this study, fundamental characteristics of the aquaplanet climate simulated by the Community Atmosphere Model, Version 5.3 (CAM5.3) are presented. The assumptions and simplifications of the configuration are described. A 16-year, perpetual-equinox integration with prescribed SST using the model's standard 1° grid spacing is presented as a reference simulation. Statistical analysis shows that similar aquaplanet configurations can be run for about 2 years to obtain robust climatological structures, including global and zonal means, eddy statistics, and precipitation distributions. Such a simulation can be compared to the reference simulation to discern differences in the climate, including an assessment of confidence in the differences. To aid such comparisons, the reference simulation has been made available via earthsystemgrid.org. Examples are shown comparing the reference simulation with simulations from the CAM5 series that make different microphysical assumptions and use a different dynamical core.

  13. Reference aquaplanet climate in the Community Atmosphere Model, Version 5

    DOE PAGES

    Medeiros, Brian; Williamson, David L.; Olson, Jerry G.

    2016-03-18

    In this study, fundamental characteristics of the aquaplanet climate simulated by the Community Atmosphere Model, Version 5.3 (CAM5.3) are presented. The assumptions and simplifications of the configuration are described. A 16-year, perpetual-equinox integration with prescribed SST using the model's standard 1° grid spacing is presented as a reference simulation. Statistical analysis shows that similar aquaplanet configurations can be run for about 2 years to obtain robust climatological structures, including global and zonal means, eddy statistics, and precipitation distributions. Such a simulation can be compared to the reference simulation to discern differences in the climate, including an assessment of confidence in the differences. To aid such comparisons, the reference simulation has been made available via earthsystemgrid.org. Examples are shown comparing the reference simulation with simulations from the CAM5 series that make different microphysical assumptions and use a different dynamical core.
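The claim that about 2 years suffice for robust climatologies is, at heart, a standard-error argument; a sketch (the variability and tolerance numbers are hypothetical, not from the paper):

```python
import math

def years_needed(interannual_std, tolerance, conf_z=1.96):
    """Years of simulation required for the climatological mean to lie
    within `tolerance` of the true mean at ~95% confidence, assuming
    independent years: n >= (z * sigma / tolerance)^2.
    """
    return math.ceil((conf_z * interannual_std / tolerance) ** 2)

# Hypothetical interannual variability of a zonal-mean field (e.g. mm/day)
print(years_needed(interannual_std=0.05, tolerance=0.07))   # → 2
print(years_needed(interannual_std=0.05, tolerance=0.02))   # → 25
```

Tightening the tolerance raises the required length quadratically, which is why short runs suffice only for comparatively coarse climatological structures.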

  14. Asphalt pavement aging and temperature dependent properties using functionally graded viscoelastic model

    NASA Astrophysics Data System (ADS)

    Dave, Eshan V.

    Asphalt concrete pavements are inherently graded viscoelastic structures, with oxidative aging of the asphalt binder and temperature cycling due to climatic conditions being the major causes of the non-homogeneity. Current pavement analysis and simulation procedures rely on a layered approach to account for these non-homogeneities. The conventional finite-element method (FEM) discretizes the problem domain into smaller elements, each with a unique constitutive property; however, the assignment of a single material property description to each element makes the conventional FEM an unattractive choice for simulating problems with material non-homogeneities. Specialized "graded elements" allow non-homogeneous material property definitions within an element. This dissertation describes the development of a graded viscoelastic finite element analysis method and its application to the analysis of asphalt concrete pavements. Results show that the present research improves the efficiency and accuracy of simulations of asphalt pavement systems. Practical implications of this work include the new technique's capability for accurate analysis and design of asphalt pavements and overlay systems, and for determining pavement performance under varying climatic conditions and in-service age. Other application areas include the simulation of functionally graded fiber-reinforced concrete, geotechnical materials, metals and metal composites at high temperatures, polymers, and several other naturally occurring and engineered materials.

  15. Two-Dimensional Neutronic and Fuel Cycle Analysis of the Transatomic Power Molten Salt Reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Betzler, Benjamin R.; Powers, Jeffrey J.; Worrall, Andrew

    2017-01-15

    This status report presents the results from the first phase of the collaboration between Transatomic Power Corporation (TAP) and Oak Ridge National Laboratory (ORNL) to provide neutronic and fuel cycle analysis of the TAP core design through the Department of Energy Gateway for Accelerated Innovation in Nuclear, Nuclear Energy Voucher program. The TAP design is a molten salt reactor using movable moderator rods to shift the neutron spectrum in the core from mostly epithermal at beginning of life to thermal at end of life. Additional developments in the ChemTriton modeling and simulation tool provide the critical moderator-to-fuel ratio searches and time-dependent parameters necessary to simulate the continuously changing physics in this complex system. Results from simulations with these tools show agreement with TAP-calculated performance metrics for core lifetime, discharge burnup, and salt volume fraction, verifying the viability of reducing actinide waste production with this design. Additional analyses of time step sizes, mass feed rates and enrichments, and isotopic removals provide additional information to make informed design decisions. This work further demonstrates capabilities of ORNL modeling and simulation tools for analysis of molten salt reactor designs and strongly positions this effort for the upcoming three-dimensional core analysis.
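A critical moderator-to-fuel ratio search of the kind ChemTriton performs can be sketched as a root find on the multiplication factor; the quadratic k_eff model below is a stand-in for a real neutron-transport calculation:

```python
def critical_ratio_search(k_eff, lo=0.1, hi=2.0, tol=1e-4):
    """Bisection search for the moderator-to-fuel ratio giving k_eff = 1.

    k_eff: callable returning the multiplication factor for a given ratio
    (in practice a full transport calculation; here a simple stand-in).
    """
    assert (k_eff(lo) - 1.0) * (k_eff(hi) - 1.0) < 0.0, "bracket must straddle k_eff = 1"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if (k_eff(mid) - 1.0) * (k_eff(lo) - 1.0) > 0.0:
            lo = mid            # criticality crossing lies in the upper half
        else:
            hi = mid
    return 0.5 * (lo + hi)

def k_model(r):
    """Hypothetical k_eff vs. moderator-to-fuel ratio (under-moderated rise)."""
    return 0.80 + 0.35 * r - 0.10 * r * r

ratio = critical_ratio_search(k_model)
print(f"critical moderator-to-fuel ratio = {ratio:.3f}")
```

Repeating such a search at each depletion step is what yields the time-dependent moderator configuration the report describes.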

  16. Dissecting the evolution of dark matter subhaloes in the Bolshoi simulation

    NASA Astrophysics Data System (ADS)

    van den Bosch, Frank C.

    2017-06-01

    We present a comprehensive analysis of the evolution of dark matter subhaloes in the cosmological Bolshoi simulation. We identify a complete set of 12 unique evolution channels by which subhaloes evolve in between simulation outputs, and study their relative importance and demographics. We show that instantaneous masses and maximum circular velocities of individual subhaloes are extremely noisy, despite the use of a sophisticated, phase-space-based halo finder. We also show that subhaloes experience frequent penetrating encounters with other subhaloes (on average about one per dynamical time), and that subhaloes whose apocentre lies outside the virial radius of their host (the 'ejected' or 'backsplash' haloes) experience tidal forces that modify their orbits. This results in an average fractional subhalo exchange rate among host haloes of ~0.01 Gyr⁻¹ (at the present time). In addition, we show that there are three distinct disruption channels: one in which subhaloes drop below the mass resolution limit of the simulation, one in which subhaloes 'merge' with their host halo, largely driven by dynamical friction, and one in which subhaloes abruptly disintegrate. We estimate that roughly 80 per cent of all subhalo disruption in the Bolshoi simulation is numerical, rather than physical. This 'overmerging' is a serious road-block for the use of numerical simulations to interpret small-scale clustering, or for any other study that is sensitive to the detailed demographics of dark matter substructure.

  17. Chromaticity effects on head-tail instabilities for broadband impedance using two particle model, Vlasov analysis, and simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chin, Yong Ho; Chao, Alexander Wu; Blaskiewicz, Michael M.

    Effects of the chromaticity on head-tail instabilities for broadband impedances are comprehensively studied, using the two particle model, the Vlasov analysis and computer simulations. We show both in the two particle model and the Vlasov analysis with the trapezoidal (semiconstant) wake model that we can derive universal contour plots for the growth factor as a function of the two dimensionless parameters: the wakefield strength, Υ, and the difference of the betatron phase advances between the head and the tail, χ. They reveal how the chromaticity affects strong head-tail instabilities and excites head-tail instabilities. We also apply the LEP (Large Electron-Positron Collider) broadband resonator model to the Vlasov approach and find that the results are in very good agreement with those of the trapezoidal wake model. The theoretical findings are also reinforced by the simulation results. In conclusion, the trapezoidal wake model turns out to be a very useful tool since it significantly simplifies the time domain analysis and provides well-behaved impedance at the same time.

  18. Chromaticity effects on head-tail instabilities for broadband impedance using two particle model, Vlasov analysis, and simulations

    DOE PAGES

    Chin, Yong Ho; Chao, Alexander Wu; Blaskiewicz, Michael M.; ...

    2017-07-28

    Effects of the chromaticity on head-tail instabilities for broadband impedances are comprehensively studied, using the two particle model, the Vlasov analysis and computer simulations. We show both in the two particle model and the Vlasov analysis with the trapezoidal (semiconstant) wake model that we can derive universal contour plots for the growth factor as a function of the two dimensionless parameters: the wakefield strength, Υ, and the difference of the betatron phase advances between the head and the tail, χ. They reveal how the chromaticity affects strong head-tail instabilities and excites head-tail instabilities. We also apply the LEP (Large Electron-Positron Collider) broadband resonator model to the Vlasov approach and find that the results are in very good agreement with those of the trapezoidal wake model. The theoretical findings are also reinforced by the simulation results. In conclusion, the trapezoidal wake model turns out to be a very useful tool since it significantly simplifies the time domain analysis and provides well-behaved impedance at the same time.

  19. Lewis Research Center studies of multiple large wind turbine generators on a utility network

    NASA Technical Reports Server (NTRS)

    Gilbert, L. J.; Triezenberg, D. M.

    1979-01-01

    A NASA-Lewis program to study the anticipated performance of a wind turbine generator farm on an electric utility network is surveyed. The paper describes the approach of the Lewis Wind Energy Project Office to developing analysis capabilities in the area of wind turbine generator-utility network computer simulations. Attention is given to areas such as the Lewis-Purdue hybrid simulation, an independent stability study, the DOE multiunit plant study, and the WEST simulator. Also covered are the Lewis Mod-2 simulation, including an analog simulation of a two-wind-turbine system with comparison to Boeing simulation results, and the gust response of a two-machine model. Finally, future work is noted, and it is concluded that the study shows little interaction between the generators or between the generators and the bus.

  20. Regional projections of North Indian climate for adaptation studies.

    PubMed

    Mathison, Camilla; Wiltshire, Andrew; Dimri, A P; Falloon, Pete; Jacob, Daniela; Kumar, Pankaj; Moors, Eddy; Ridley, Jeff; Siderius, Christian; Stoffel, Markus; Yasunari, T

    2013-12-01

    Adaptation is increasingly important for regions around the world where large changes in climate could have an impact on populations and industry. The Brahmaputra-Ganges catchments have a large population, agriculture as the main industry, and a growing hydro-power industry, making the region susceptible to changes in the Indian Summer Monsoon, annually the main water source. The HighNoon project has completed four regional climate model simulations for India and the Himalaya at high resolution (25 km) from 1960 to 2100, providing an ensemble of simulations for the region. In this paper we assess the ensemble for these catchments, comparing the simulations with observations, to give confidence that the simulations provide a realistic representation of atmospheric processes and therefore of future climate. We illustrate how these simulations could be used to provide information on potential future climate impacts, and therefore aid decision-making, using climatology and threshold analysis. The ensemble analysis shows an increase in temperature between the baseline (1970-2000) and the 2050s (2040-2070) of between 2 and 4°C, and an increase in the number of days with maximum temperatures above 28°C and 35°C. There is less certainty for precipitation and runoff, whose projected changes show considerable variability, even in this relatively small ensemble, and span zero. The HighNoon ensemble is the most complete dataset for the region, providing useful information on a wide range of variables for the regional climate of the Brahmaputra-Ganges region; however, there are processes not yet included in the models that could have an impact on the simulations of future climate. We discuss these processes and show that the range of the HighNoon ensemble is similar in magnitude to potential changes in projections where these processes are included. Strategies for adaptation must therefore be robust and flexible, allowing for advances in the science and for natural environmental changes.
Copyright © 2012 Elsevier B.V. All rights reserved.
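    The threshold analysis described above (counting days with maximum temperature above 28°C and 35°C in a baseline versus a future period) is straightforward to reproduce. The sketch below uses synthetic daily data and a hypothetical uniform +3°C shift standing in for the ensemble's warming range; it is not the HighNoon data.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical daily Tmax series (deg C); the +3 deg C shift is an
# illustrative stand-in for the baseline-to-2050s warming.
baseline = rng.normal(27.0, 4.0, 365)
future = baseline + 3.0

def days_above(tmax, threshold):
    """Count days in a daily-maximum temperature series above a threshold."""
    return int((np.asarray(tmax) > threshold).sum())

for thr in (28.0, 35.0):
    print(f">{thr:.0f} deg C: {days_above(baseline, thr)} -> {days_above(future, thr)} days")
```
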

  1. Simulating the Cyclone Induced Turbulent Mixing in the Bay of Bengal using COAWST Model

    NASA Astrophysics Data System (ADS)

    Prakash, K. R.; Nigam, T.; Pant, V.

    2017-12-01

    Mixing in the upper oceanic layers (up to a few tens of meters from the surface) is an important process in the evolution of sea surface properties. Enhanced mixing due to strong wind forcing at the surface deepens the mixed layer, which affects the air-sea exchange of heat and momentum fluxes and modulates sea surface temperature (SST). In the present study, we used the Coupled-Ocean-Atmosphere-Wave-Sediment Transport (COAWST) model to demonstrate and quantify the enhanced cyclone-induced turbulent mixing in the case of a severe cyclonic storm. The COAWST model was configured over the Bay of Bengal (BoB) and used to simulate the atmospheric and oceanic conditions prevailing during tropical cyclone (TC) Phailin, which occurred over the BoB during 10-15 October 2013. The simulated cyclone track was validated against the IMD best track, and the simulated SST against daily AVHRR SST data. The validation shows that the simulated track, intensity, SST, and salinity were in good agreement with observations, and that the cyclone-induced cooling of the sea surface was well captured by the model. Model simulations show a considerable deepening (by 10-15 m) of the mixed layer and a shoaling of the thermocline during TC Phailin. A power spectrum analysis performed on the zonal and meridional baroclinic current components shows the strongest energy at 14 m depth. Model results were analyzed to investigate the non-uniform energy distribution in the water column from the surface to the thermocline depth. A rotary spectra analysis highlights the downward direction of turbulent mixing during the TC Phailin period. Model simulations were used to quantify and interpret the near-inertial mixing generated by the strong cyclone-induced wind stress and the associated near-inertial energy. These near-inertial oscillations are responsible for the enhancement of mixing in the strong post-monsoon (October-November) stratification of the BoB.
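    The rotary spectral analysis mentioned above treats the horizontal current as a complex series w = u + iv, so clockwise- and counter-clockwise-rotating motions separate into negative and positive frequencies. A minimal sketch on a synthetic, purely clockwise-rotating signal (the 16 h period is a made-up illustration, not the Bay of Bengal inertial period, and this is not the COAWST output):

```python
import numpy as np

def rotary_spectrum(u, v, dt):
    """Split the energy of the horizontal current (u, v) into clockwise
    (CW, negative-frequency) and counter-clockwise (CCW, positive-frequency)
    rotating parts -- the basis of a rotary spectral analysis."""
    w = np.asarray(u) + 1j * np.asarray(v)
    W = np.fft.fft(w)
    f = np.fft.fftfreq(len(w), dt)
    power = np.abs(W) ** 2 / len(w)
    return power[f < 0].sum(), power[f > 0].sum()   # (CW, CCW) energy

# Synthetic signal: pure clockwise rotation, as expected for wind-generated
# inertial oscillations in the Northern Hemisphere. 48 h at 10-min steps,
# with an exact integer number of cycles to avoid spectral leakage.
t = np.arange(0, 48 * 3600, 600.0)
omega = 2 * np.pi / (16 * 3600)                     # hypothetical 16 h period
u, v = np.cos(omega * t), -np.sin(omega * t)        # w = exp(-i omega t): CW
cw, ccw = rotary_spectrum(u, v, 600.0)
```

For unit-amplitude rotation the total rotary power equals the number of samples (Parseval), and essentially all of it lands on the clockwise side.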

  2. A simulation model for risk assessment of turbine wheels

    NASA Technical Reports Server (NTRS)

    Safie, Fayssal M.; Hage, Richard T.

    1991-01-01

    A simulation model has been successfully developed to evaluate the risk of the Space Shuttle auxiliary power unit (APU) turbine wheels for a specific inspection policy. Besides being an effective tool for risk/reliability evaluation, the simulation model also allows the analyst to study the trade-offs between wheel reliability, wheel life, inspection interval, and rejection crack size. For example, in the APU application, sensitivity analysis results showed that the wheel life limit has the least effect on wheel reliability when compared to the effect of the inspection interval and the rejection crack size. In summary, the simulation model developed represents a flexible tool to predict turbine wheel reliability and study the risk under different inspection policies.
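    The trade-off between inspection interval, rejection crack size, and wheel reliability can be illustrated with a toy Monte Carlo model: linear crack growth after an exponentially distributed initiation time, periodic inspections that replace any wheel whose crack has reached the rejection size. All parameter values below are invented for illustration and are not those of the APU study.

```python
import numpy as np

def failure_probability(n_wheels, life_h, insp_h, rej_mm, crit_mm,
                        growth_mm_per_h, mean_init_h, seed=0):
    """Monte Carlo estimate of in-service failure probability under a
    periodic inspection policy. A wheel fails when its crack reaches
    crit_mm; at an inspection, a crack >= rej_mm triggers replacement."""
    rng = np.random.default_rng(seed)
    failures = 0
    for _ in range(n_wheels):
        t, crack = 0.0, 0.0
        t_init = rng.exponential(mean_init_h)        # crack initiation time
        while t < life_h:
            t_next = min(t + insp_h, life_h)
            if t_next > t_init:                      # crack grows once initiated
                crack += growth_mm_per_h * (t_next - max(t, t_init))
            if crack >= crit_mm:                     # failed before being caught
                failures += 1
                break
            if crack >= rej_mm:                      # inspection rejects wheel
                crack = 0.0
                t_init = t_next + rng.exponential(mean_init_h)
            t = t_next
    return failures / n_wheels

p_short = failure_probability(2000, 10_000, 300, 1.0, 3.0, 0.005, 2000)
p_long = failure_probability(2000, 10_000, 1000, 1.0, 3.0, 0.005, 2000)
```

With a 300 h interval a crack can grow at most 1.5 mm between inspections, so it is always caught inside the 1.0-3.0 mm rejection window; with a 1000 h interval it can jump past the critical size. That interaction between inspection interval and rejection crack size is exactly the trade-off the abstract describes.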

  3. A simulation model for risk assessment of turbine wheels

    NASA Astrophysics Data System (ADS)

    Safie, Fayssal M.; Hage, Richard T.

    A simulation model has been successfully developed to evaluate the risk of the Space Shuttle auxiliary power unit (APU) turbine wheels for a specific inspection policy. Besides being an effective tool for risk/reliability evaluation, the simulation model also allows the analyst to study the trade-offs between wheel reliability, wheel life, inspection interval, and rejection crack size. For example, in the APU application, sensitivity analysis results showed that the wheel life limit has the least effect on wheel reliability when compared to the effect of the inspection interval and the rejection crack size. In summary, the simulation model developed represents a flexible tool to predict turbine wheel reliability and study the risk under different inspection policies.

  4. Influence of the track quality and of the properties of the wheel-rail rolling contact on vehicle dynamics

    NASA Astrophysics Data System (ADS)

    Suarez, Berta; Felez, Jesus; Lozano, José Antonio; Rodriguez, Pablo

    2013-02-01

    This work describes an analytical approach to determine what degree of accuracy is required in the definition of the rail vehicle models used for dynamic simulations. This way it would be possible to know in advance how the results of simulations may be altered due to the existence of errors in the creation of rolling stock models, whilst also identifying their critical parameters. This would make it possible to maximise the time available to enhance dynamic analysis and focus efforts on factors that are strictly necessary. In particular, the parameters related both to the track quality and to the rolling contact were considered in this study. With this aim, a sensitivity analysis was performed to assess their influence on the vehicle dynamic behaviour. To do this, 72 dynamic simulations were performed modifying, one at a time, the track quality, the wheel-rail friction coefficient and the equivalent conicity of both new and worn wheels. Three values were assigned to each parameter, and two wear states were considered for each type of wheel, one for new wheels and another one for reprofiled wheels. After processing the results of these simulations, it was concluded that all the parameters considered show very high influence, though the friction coefficient shows the highest influence. Therefore, it is recommended to undertake any future simulation job with measured track geometry and track irregularities, measured wheel profiles and normative values of the wheel-rail friction coefficient.

  5. Modeling and simulation of normal and hemiparetic gait

    NASA Astrophysics Data System (ADS)

    Luengas, Lely A.; Camargo, Esperanza; Sanchez, Giovanni

    2015-09-01

    Gait is the collective term for the two types of bipedal locomotion, walking and running; this paper focuses on walking. The analysis of human gait is of interest to many disciplines, including biomechanics, human-movement science, rehabilitation, and medicine in general. Here we present a new model capable of reproducing the properties of both normal and pathological walking. The aim of this paper is to establish the biomechanical principles underlying human walking by using the Lagrange method. Constraint forces from a Rayleigh dissipation function are included to account for the effect of the tissues on gait. Depending on the value of the factor in the Rayleigh dissipation function, both normal and pathological gait can be simulated: we first apply the model to normal gait and then to permanent hemiparetic gait. Anthropometric data of an adult are used in the simulation; data for children can also be used, provided existing anthropometric tables are consulted. Validation of the model includes simulations of passive dynamic gait on level ground. The dynamic walking approach provides a new perspective on gait analysis, focusing on the kinematics and kinetics of gait. Studies and simulations of normal human gait exist, but few have focused on abnormal, especially hemiparetic, gait. Quantitative comparisons of the model predictions with gait measurements show that the model reproduces the significant characteristics of normal gait.
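    The role of the Rayleigh dissipation factor can be seen in a one-segment toy version of such a model: a swing-leg pendulum whose Lagrangian equation of motion gains a velocity-proportional damping term. All parameter values below are illustrative, not the paper's multi-segment model.

```python
import numpy as np

def swing_leg(theta0, b, m=8.0, l=0.9, g=9.81, dt=1e-3, t_end=4.0):
    """Swing-leg pendulum with a Rayleigh dissipation term:
        m l^2 theta'' = -m g l sin(theta) - b theta'
    b = 0 gives conservative ('normal') swinging; a large b crudely mimics
    the extra tissue damping of a pathological gait. Integrated with
    semi-implicit Euler; returns the angle trajectory."""
    th, om = theta0, 0.0
    traj = np.empty(int(t_end / dt))
    for k in range(traj.size):
        alpha = -(g / l) * np.sin(th) - b * om / (m * l**2)
        om += alpha * dt                 # update velocity first (symplectic)
        th += om * dt
        traj[k] = th
    return traj

normal = swing_leg(0.5, b=0.0)           # conservative swing
damped = swing_leg(0.5, b=4.0)           # dissipative ('pathological') swing
```

The damped trajectory's late-time amplitude is visibly smaller than the conservative one, which is the qualitative signature the dissipation factor controls.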

  6. Verification of echo amplitude envelope analysis method in skin tissues for quantitative follow-up of healing ulcers

    NASA Astrophysics Data System (ADS)

    Omura, Masaaki; Yoshida, Kenji; Akita, Shinsuke; Yamaguchi, Tadashi

    2018-07-01

    We aim to develop an ultrasonic tissue characterization method for the follow-up of healing ulcers by diagnosing collagen fibers properties. In this paper, we demonstrated a computer simulation with simulation phantoms reflecting irregularly distributed collagen fibers to evaluate the relationship between physical properties, such as number density and periodicity, and the estimated characteristics of the echo amplitude envelope using the homodyned-K distribution. Moreover, the consistency between echo signal characteristics and the structures of ex vivo human tissues was verified from the measured data of normal skin and nonhealed ulcers. In the simulation study, speckle or coherent signal characteristics are identified as periodically or uniformly distributed collagen fibers with high number density and high periodicity. This result shows the effectiveness of the analysis using the homodyned-K distribution for tissues with complicated structures. Normal skin analysis results are characterized as including speckle or low-coherence signal components, and a nonhealed ulcer is different from normal skin with respect to the physical properties of collagen fibers.

  7. Cluster analysis of accelerated molecular dynamics simulations: A case study of the decahedron to icosahedron transition in Pt nanoparticles.

    PubMed

    Huang, Rao; Lo, Li-Ta; Wen, Yuhua; Voter, Arthur F; Perez, Danny

    2017-10-21

    Modern molecular-dynamics-based techniques are extremely powerful to investigate the dynamical evolution of materials. With the increase in sophistication of the simulation techniques and the ubiquity of massively parallel computing platforms, atomistic simulations now generate very large amounts of data, which have to be carefully analyzed in order to reveal key features of the underlying trajectories, including the nature and characteristics of the relevant reaction pathways. We show that clustering algorithms, such as the Perron Cluster Cluster Analysis, can provide reduced representations that greatly facilitate the interpretation of complex trajectories. To illustrate this point, clustering tools are used to identify the key kinetic steps in complex accelerated molecular dynamics trajectories exhibiting shape fluctuations in Pt nanoclusters. This analysis provides an easily interpretable coarse representation of the reaction pathways in terms of a handful of clusters, in contrast to the raw trajectory that contains thousands of unique states and tens of thousands of transitions.
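    The core idea of Perron Cluster Cluster Analysis, that the leading eigenvectors of the transition matrix are nearly constant on metastable clusters, fits in a few lines. Below is a minimal sign-based sketch for two clusters on a toy matrix; a real PCCA+ implementation (e.g. in deeptime or PyEMMA) handles more clusters and fuzzy memberships.

```python
import numpy as np

def metastable_split(T):
    """Split the states of a row-stochastic transition matrix into two
    metastable clusters using the sign structure of the eigenvector
    belonging to the second-largest eigenvalue (the slowest process)."""
    evals, evecs = np.linalg.eig(T)
    order = np.argsort(evals.real)[::-1]   # eigenvalues in descending order
    chi = evecs[:, order[1]].real          # 2nd eigenvector: slowest process
    return (chi > 0).astype(int)           # sign pattern -> 2 clusters

# Four states forming two strongly connected blocks {0, 1} and {2, 3}
# with rare inter-block transitions (toy metastable system).
T = np.array([[0.890, 0.100, 0.005, 0.005],
              [0.100, 0.890, 0.005, 0.005],
              [0.005, 0.005, 0.890, 0.100],
              [0.005, 0.005, 0.100, 0.890]])
labels = metastable_split(T)
```
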

  8. Cluster analysis of accelerated molecular dynamics simulations: A case study of the decahedron to icosahedron transition in Pt nanoparticles

    NASA Astrophysics Data System (ADS)

    Huang, Rao; Lo, Li-Ta; Wen, Yuhua; Voter, Arthur F.; Perez, Danny

    2017-10-01

    Modern molecular-dynamics-based techniques are extremely powerful to investigate the dynamical evolution of materials. With the increase in sophistication of the simulation techniques and the ubiquity of massively parallel computing platforms, atomistic simulations now generate very large amounts of data, which have to be carefully analyzed in order to reveal key features of the underlying trajectories, including the nature and characteristics of the relevant reaction pathways. We show that clustering algorithms, such as the Perron Cluster Cluster Analysis, can provide reduced representations that greatly facilitate the interpretation of complex trajectories. To illustrate this point, clustering tools are used to identify the key kinetic steps in complex accelerated molecular dynamics trajectories exhibiting shape fluctuations in Pt nanoclusters. This analysis provides an easily interpretable coarse representation of the reaction pathways in terms of a handful of clusters, in contrast to the raw trajectory that contains thousands of unique states and tens of thousands of transitions.

  9. Preliminary report of the Hepatic Encephalopathy Assessment Driving Simulator (HEADS) score.

    PubMed

    Baskin-Bey, Edwina S; Stewart, Charmaine A; Mitchell, Mary M; Bida, John P; Rosenthal, Theodore J; Nyberg, Scott L

    2008-01-01

    Audiovisual simulations of real-life driving (ie, driving simulators) have been used to assess neurologic dysfunction in a variety of medical applications. However, the use of simulated driving to assess neurologic impairment in the setting of liver disease (ie, hepatic encephalopathy) is limited. The aim of this analysis was to develop a scoring system based on simulated driving performance to assess mild cognitive impairment in cirrhotic patients with hepatic encephalopathy. This preliminary analysis was conducted as part of the Hepatic Encephalopathy Assessment Driving Simulator (HEADS) pilot study. Cirrhotic volunteers initially underwent a battery of neuropsychological tests to identify those cirrhotic patients with mild cognitive impairment. Performance during an audiovisually simulated course of on-road driving was then compared between mildly impaired cirrhotic patients and healthy volunteers. A scoring system was developed to quantify the likelihood of cognitive impairment on the basis of data from the simulated on-road driving. Mildly impaired cirrhotic patients performed below the level of healthy volunteers on the driving simulator. Univariate logistic regression and correlation models indicated that several driving simulator variables were significant predictors of cognitive impairment. Five variables (run time, total map performance, number of collisions, visual divided attention response, and average lane position) were incorporated into a quantitative model, the HEADS scoring system. The HEADS score (0-9 points) showed a strong correlation with cognitive impairment as measured by area under the receiver-operator curve (.89). The HEADS system appears to be a promising new tool for the assessment of mild hepatic encephalopathy.
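    The .89 figure is an area under the receiver-operator curve (AUC), which equals the probability that a randomly chosen impaired patient receives a higher score than a randomly chosen control. The Mann-Whitney form of that statistic is easy to compute; the scores below are hypothetical, not HEADS data.

```python
import numpy as np

def roc_auc(scores_impaired, scores_control):
    """AUC via the Mann-Whitney U statistic: the fraction of
    (impaired, control) pairs in which the impaired subject scores
    higher, counting ties as 1/2."""
    a = np.asarray(scores_impaired, float)[:, None]
    b = np.asarray(scores_control, float)[None, :]
    return float((a > b).mean() + 0.5 * (a == b).mean())

# Hypothetical 0-9 scores: impaired patients vs healthy controls
impaired = [6, 7, 5, 9, 4, 8]
controls = [1, 3, 2, 4, 0, 2]
auc = roc_auc(impaired, controls)
```
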

  10. Influence analysis for high-dimensional time series with an application to epileptic seizure onset zone detection

    PubMed Central

    Flamm, Christoph; Graef, Andreas; Pirker, Susanne; Baumgartner, Christoph; Deistler, Manfred

    2013-01-01

    Granger causality is a useful concept for studying causal relations in networks. However, numerical problems occur when applying the corresponding methodology to high-dimensional time series showing co-movement, e.g. EEG recordings or economic data. In order to deal with these shortcomings, we propose a novel method for the causal analysis of such multivariate time series based on Granger causality and factor models. We present the theoretical background, successfully assess our methodology with the help of simulated data and show a potential application in EEG analysis of epileptic seizures. PMID:23354014
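    The Granger idea underlying the method is simple to state: x Granger-causes y if past values of x improve the prediction of y beyond what past y already achieves. A minimal bivariate, lag-1 version is sketched below on simulated data; the paper's contribution is extending this to high-dimensional, co-moving series via factor models, which this sketch does not attempt.

```python
import numpy as np

def granger_stat(cause, effect):
    """Lag-1 Granger statistic: relative reduction in residual sum of
    squares when past `cause` is added to an AR(1) model of `effect`."""
    y, y1, x1 = effect[1:], effect[:-1], cause[:-1]
    def rss(X):
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return float(((y - X @ beta) ** 2).sum())
    ones = np.ones_like(y1)
    rss_r = rss(np.column_stack([ones, y1]))       # restricted: own past only
    rss_f = rss(np.column_stack([ones, y1, x1]))   # full: plus other's past
    return (rss_r - rss_f) / rss_f

# Simulated system in which x drives y but not vice versa.
rng = np.random.default_rng(3)
n = 2000
x, y = np.zeros(n), np.zeros(n)
for t in range(1, n):
    x[t] = 0.5 * x[t - 1] + rng.normal()
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + rng.normal()
g_xy = granger_stat(x, y)    # should be large: x helps predict y
g_yx = granger_stat(y, x)    # should be near zero
```
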

  11. [Numerical simulation and operation optimization of biological filter].

    PubMed

    Zou, Zong-Sen; Shi, Han-Chang; Chen, Xiang-Qiang; Xie, Xiao-Qing

    2014-12-01

    BioWin software and two sensitivity analysis methods were used to simulate the Denitrification Biological Filter (DNBF) + Biological Aerated Filter (BAF) process in the Yuandang Wastewater Treatment Plant. Based on the BioWin model of the DNBF + BAF process, operation data from September 2013 were used for sensitivity analysis and model calibration, and operation data from October 2013 were used for model validation. The results indicated that the calibrated model could accurately simulate the practical DNBF + BAF process, and that the most sensitive parameters were those related to biofilm, OHOs, and aeration. After calibration and validation, the model was used for process optimization by simulating operation results under different conditions. The results showed that the best operating condition for discharge standard B was: reflux ratio = 50%, no methanol addition, influent C/N = 4.43; while the best operating condition for discharge standard A was: reflux ratio = 50%, influent COD = 155 mg x L(-1) after methanol addition, influent C/N = 5.10.

  12. Theoretical analysis and simulations of strong terahertz radiation from the interaction of ultrashort laser pulses with gases

    NASA Astrophysics Data System (ADS)

    Chen, Min; Pukhov, Alexander; Peng, Xiao-Yu; Willi, Oswald

    2008-10-01

    Terahertz (THz) radiation from the interaction of ultrashort laser pulses with gases is studied both by theoretical analysis and particle-in-cell (PIC) simulations. A one-dimensional THz generation model based on the transient ionization electric current mechanism is given, which explains the results of one-dimensional PIC simulations. At the same time the relation between the final THz field and the initial transient ionization current is shown. One- and two-dimensional simulations show that for the THz generation the contribution of the electric current due to ionization is much larger than the one driven by the usual ponderomotive force. Ionization current generated by different laser pulses and gases is also studied numerically. Based on the numerical results we explain the scaling laws for THz emission observed in the recent experiments performed by Xie et al. [Phys. Rev. Lett. 96, 075005 (2006)]. We also study the effective parameter region for the carrier envelope phase measurement by the use of THz generation.

  13. Theoretical analysis and simulations of strong terahertz radiation from the interaction of ultrashort laser pulses with gases.

    PubMed

    Chen, Min; Pukhov, Alexander; Peng, Xiao-Yu; Willi, Oswald

    2008-10-01

    Terahertz (THz) radiation from the interaction of ultrashort laser pulses with gases is studied both by theoretical analysis and particle-in-cell (PIC) simulations. A one-dimensional THz generation model based on the transient ionization electric current mechanism is given, which explains the results of one-dimensional PIC simulations. At the same time the relation between the final THz field and the initial transient ionization current is shown. One- and two-dimensional simulations show that for the THz generation the contribution of the electric current due to ionization is much larger than the one driven by the usual ponderomotive force. Ionization current generated by different laser pulses and gases is also studied numerically. Based on the numerical results we explain the scaling laws for THz emission observed in the recent experiments performed by Xie et al. [Phys. Rev. Lett. 96, 075005 (2006)]. We also study the effective parameter region for the carrier envelope phase measurement by the use of THz generation.

  14. Analysis of Aerodynamic Load of LSU-03 (LAPAN Surveillance UAV-03) Propeller

    NASA Astrophysics Data System (ADS)

    Rahmadi Nuranto, Awang; Jamaludin Fitroh, Ahmad; Syamsudin, Hendri

    2018-04-01

    The existing propeller of the LSU-03 aircraft is made of wood. To improve structural strength and obtain better mechanical properties, the propeller will be redesigned using composite materials, so it is necessary to simulate and analyze the design load. This paper explains the simulation and analysis of the aerodynamic load prior to the structural design phase of the composite propeller. Aerodynamic load calculations are performed using both Blade Element Theory (BET) and Computational Fluid Dynamics (CFD) simulation. The results of the two methods show close agreement: the difference in thrust force is only 1.2% and 4.1% for the two mesh types. Thus, the distribution of aerodynamic loads along the propeller blade surface obtained from the 3-D CFD simulation is considered valid and ready for use in the design of the composite structure. The CFD results are imported directly into the structural model using the Direct Import CFD / One-Way Fluid-Structure Interaction (FSI) method. The design load of the propeller is chosen at the flight condition of 20 km/h at 7000 rpm.
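    For context, a bare-bones Blade Element Theory thrust estimate looks like the following. It ignores induced inflow and drag, uses the thin-airfoil lift slope, and every number in the example call (blade radius, chord, pitch) is a placeholder, not LSU-03 data.

```python
import numpy as np

def bet_thrust(radius, chord, pitch_deg, rpm, v_inf, n_blades=2,
               rho=1.225, a0=2 * np.pi, n_elems=50):
    """Very simplified blade-element thrust estimate: no induced-velocity
    iteration, constant chord and pitch, drag neglected. Illustrative only."""
    omega = rpm * 2 * np.pi / 60.0
    r = np.linspace(0.15 * radius, radius, n_elems)   # skip the hub region
    dr = r[1] - r[0]
    u_t = omega * r                                   # tangential speed
    phi = np.arctan2(v_inf, u_t)                      # inflow angle
    alpha = np.radians(pitch_deg) - phi               # local angle of attack
    W2 = v_inf**2 + u_t**2                            # resultant speed squared
    dL = 0.5 * rho * W2 * chord * (a0 * alpha) * dr   # element lift
    return float(n_blades * np.sum(dL * np.cos(phi))) # thrust component

# Placeholder geometry at the abstract's flight condition (20 km/h, 7000 rpm)
thrust = bet_thrust(radius=0.35, chord=0.03, pitch_deg=15, rpm=7000, v_inf=5.6)
```
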

  15. Motion analysis and trials of the deep sea hybrid underwater glider Petrel-II

    NASA Astrophysics Data System (ADS)

    Liu, Fang; Wang, Yan-hui; Wu, Zhi-liang; Wang, Shu-xin

    2017-03-01

    A hybrid underwater glider Petrel-II has been developed and field tested. It is equipped with an active buoyancy unit and a compact propeller unit. Its working modes have been expanded to buoyancy driven gliding and propeller driven level-flight, which can make the glider work in strong currents, as well as many other complicated ocean environments. Its maximal gliding speed reaches 1 knot and the propelling speed is up to 3 knots. In this paper, a 3D dynamic model of Petrel-II is derived using linear momentum and angular momentum equations. According to the dynamic model, the spiral motion in the underwater space is simulated for the gliding mode. Similarly the cycle motion on water surface and the depth-keeping motion underwater are simulated for the level-flight mode. These simulations are important to the performance analysis and parameter optimization for the Petrel-II underwater glider. The simulation results show a good agreement with field trials.

  16. Testability analysis on a hydraulic system in a certain equipment based on simulation model

    NASA Astrophysics Data System (ADS)

    Zhang, Rui; Cong, Hua; Liu, Yuanhong; Feng, Fuzhou

    2018-03-01

    To address the complicated structure of hydraulic systems and the shortage of fault statistics for them, a multi-valued testability analysis method based on a simulation model is proposed. Using an AMESim simulation model, the method injects simulated faults and records the variation of test parameters, such as pressure and flow rate, at each test point relative to normal conditions. A multi-valued fault-test dependency matrix is thereby established, from which the fault detection rate (FDR) and fault isolation rate (FIR) are calculated. The testability and fault diagnosis capability of the system are then analyzed and evaluated; they reach only 54% (FDR) and 23% (FIR). To improve the testability of the system, the number and positions of the test points are optimized. Results show that the proposed test placement scheme can address the difficulty, inefficiency, and high cost of system maintenance.
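    The detection and isolation rates follow directly from the fault-test dependency matrix. The sketch below uses a binary matrix and the conventional definitions (FDR: fraction of faults that fire at least one test; FIR: fraction of detected faults with a unique test signature); the paper's matrix is multi-valued, but the idea is the same.

```python
import numpy as np

def fdr_fir(D):
    """Fault detection rate and fault isolation rate from a binary
    fault-test dependency matrix D (rows: faults, columns: tests)."""
    D = np.asarray(D)
    detected = D.any(axis=1)                     # fault fires >= 1 test
    fdr = float(detected.mean())
    sigs = [tuple(row) for row in D[detected]]
    unique = [sigs.count(s) == 1 for s in sigs]  # signature shared by no other fault
    fir = float(np.mean(unique)) if sigs else 0.0
    return fdr, fir

# Toy matrix: fault 0 is isolable, faults 1 and 2 share a signature
# (detected but not isolable), fault 3 is undetected.
D = np.array([[1, 0, 0],
              [0, 1, 0],
              [0, 1, 0],
              [0, 0, 0]])
fdr, fir = fdr_fir(D)
```
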

  17. Measuring implosion velocities in experiments and simulations of laser-driven cylindrical implosions on the OMEGA laser

    NASA Astrophysics Data System (ADS)

    Hansen, E. C.; Barnak, D. H.; Betti, R.; Campbell, E. M.; Chang, P.-Y.; Davies, J. R.; Glebov, V. Yu; Knauer, J. P.; Peebles, J.; Regan, S. P.; Sefkow, A. B.

    2018-05-01

    Laser-driven magnetized liner inertial fusion (MagLIF) on OMEGA involves cylindrical implosions, a preheat beam, and an applied magnetic field. Initial experiments excluded the preheat beam and magnetic field to better characterize the implosion. X-ray self-emission as measured by framing cameras was used to determine the shell trajectory. The 1D code LILAC was used to model the central region of the implosion, and results were compared to 2D simulations from the HYDRA code. Post-processing of simulation output with SPECT3D and Yorick produced synthetic x-ray images that were used to compare the simulation results with the x-ray framing camera data. Quantitative analysis shows that higher measured neutron yields correlate with higher implosion velocities. The future goal is to further analyze the x-ray images to characterize the uniformity of the implosions and apply these analysis techniques to integrated laser-driven MagLIF shots to better understand the effects of preheat and the magnetic field.

  18. Tutorial: Determination of thermal boundary resistance by molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Liang, Zhi; Hu, Ming

    2018-05-01

    Due to the high surface-to-volume ratio of nanostructured components in microelectronics and other advanced devices, the thermal resistance at material interfaces can strongly affect the overall thermal behavior in these devices. Therefore, the thermal boundary resistance, R, must be taken into account in the thermal analysis of nanoscale structures and devices. This article is a tutorial on the determination of R and the analysis of interfacial thermal transport via molecular dynamics (MD) simulations. In addition to reviewing the commonly used equilibrium and non-equilibrium MD models for the determination of R, we also discuss several MD simulation methods which can be used to understand interfacial thermal transport behavior. To illustrate how these MD models work for various interfaces, we will show several examples of MD simulation results on thermal transport across solid-solid, solid-liquid, and solid-gas interfaces. The advantages and drawbacks of a few other MD models such as approach-to-equilibrium MD and first-principles MD are also discussed.
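    In the standard non-equilibrium MD recipe the tutorial reviews, R comes from the temperature discontinuity at the interface divided by the imposed heat flux, with the profile on each side fitted linearly and extrapolated to the interface plane. A sketch on synthetic profile data (all numbers invented for illustration):

```python
import numpy as np

def boundary_resistance(z1, T1, z2, T2, q):
    """Thermal boundary resistance R = dT / q, where dT is the jump between
    the two linear temperature profiles extrapolated to the interface (z=0)
    and q is the imposed heat flux (W/m^2)."""
    Ti_left = np.polyval(np.polyfit(z1, T1, 1), 0.0)
    Ti_right = np.polyval(np.polyfit(z2, T2, 1), 0.0)
    return (Ti_left - Ti_right) / q

# Synthetic NEMD-style profiles: linear in each material, meeting an
# interface at z = 0 with a built-in 10 K jump under q = 1e9 W/m^2,
# so the expected R is 1e-8 K m^2/W.
q = 1.0e9
z_left = np.linspace(-10e-9, -1e-9, 20)
z_right = np.linspace(1e-9, 10e-9, 20)
T_left = 330.0 - 1.0e9 * z_left        # hot side, extrapolates to 330 K
T_right = 320.0 - 1.0e9 * z_right      # cold side, extrapolates to 320 K
R = boundary_resistance(z_left, T_left, z_right, T_right, q)
```
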

  19. Automatic Selection of Order Parameters in the Analysis of Large Scale Molecular Dynamics Simulations.

    PubMed

    Sultan, Mohammad M; Kiss, Gert; Shukla, Diwakar; Pande, Vijay S

    2014-12-09

    Given the large number of crystal structures and NMR ensembles that have been solved to date, classical molecular dynamics (MD) simulations have become powerful tools in the atomistic study of the kinetics and thermodynamics of biomolecular systems on ever increasing time scales. By virtue of the high-dimensional conformational state space that is explored, the interpretation of large-scale simulations faces difficulties not unlike those in the big data community. We address this challenge by introducing a method called clustering based feature selection (CB-FS) that employs a posterior analysis approach. It combines supervised machine learning (SML) and feature selection with Markov state models to automatically identify the relevant degrees of freedom that separate conformational states. We highlight the utility of the method in the evaluation of large-scale simulations and show that it can be used for the rapid and automated identification of relevant order parameters involved in the functional transitions of two exemplary cell-signaling proteins central to human disease states.

  20. Simulation on the Performance of a Driven Fan Made by Polyester/Epoxy interpenetrate polymer network (IPN)

    NASA Astrophysics Data System (ADS)

    Fahrul Hassan, Mohd; Jamri, Azmil; Nawawi, Azli; Zaini Yunos, Muhamad; Fauzi Ahmad, Md; Adzila, Sharifah; Nasrull Abdol Rahman, Mohd

    2017-08-01

    The main purpose of this study is to investigate the performance of a driven fan made of Polyester/Epoxy interpenetrating polymer network (IPN) material, intended specifically for a turbocharger compressor. Polyester/Epoxy IPNs are polymer plastics used as replacements for traditional polymers and have been widely used in a variety of applications because of their limitless conformations. Simulations based on several parameters (air pressure, air velocity, and air temperature) were carried out for a driven fan design in two different materials, an aluminum alloy (the existing driven fan design) and Polyester/Epoxy IPN, using SolidWorks Flow Simulation software. Results from both simulations were analyzed and compared: the two materials show similar performance in terms of air pressure and air velocity, owing to the identical geometry and dimensions, but Polyester/Epoxy IPN produces a lower air temperature than the aluminum alloy. This study presents a preliminary result on the potential of Polyester/Epoxy IPN as a driven fan material. In the future, further studies will be conducted with detailed simulation and experimental analysis.

  1. Simulation of root forms using cellular automata model

    NASA Astrophysics Data System (ADS)

    Winarno, Nanang; Prima, Eka Cahya; Afifah, Ratih Mega Ayu

    2016-02-01

    This research aims to produce a simulation program for root forms using a cellular automata model. Stephen Wolfram, in his book "A New Kind of Science", discusses formation rules based on statistical analysis. Following Wolfram's investigation, this research develops the basic idea into a computer program written in the Delphi 7 programming language. To the best of our knowledge, no previous research has developed a simulation describing root forms with a cellular automata model and compared it with natural root forms in the presence of stones as a disturbance. The results show that (1) the simulation used four rules, each producing a different root form, and the program's output was compared against photographs of natural roots; (2) the stone disturbances prevent root growth, and the multiplication of root forms was successfully modeled. The simulation therefore adds stones occupying 120 cells, placed randomly in the soil; as in nature, stones cannot be penetrated by plant roots. The results indicate that the program can readily be developed further to simulate root forms with 50 variations.
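    A toy version of such a cellular automaton is sketched below. The growth rule is an invented illustration, not one of the paper's four rules: a root cell extends down or diagonally down into empty soil, and stone cells can never be occupied.

```python
import numpy as np

EMPTY, STONE, ROOT = 0, 1, 2

def grow_root(rows=30, cols=21, n_stones=40, steps=60, seed=1):
    """Toy cellular-automaton root growth: on each step, every root cell
    tries to extend into one of the three cells below it (down-left,
    down, down-right); stones and occupied cells block growth."""
    rng = np.random.default_rng(seed)
    soil = np.full((rows, cols), EMPTY)
    stones = rng.choice(rows * cols, size=n_stones, replace=False)
    soil.flat[stones] = STONE                  # scatter impenetrable stones
    soil[0, cols // 2] = ROOT                  # seed cell at the top center
    for _ in range(steps):
        r, c = np.nonzero(soil == ROOT)        # snapshot of current root cells
        for i, j in zip(r, c):
            dj = rng.integers(-1, 2)           # -1, 0, or +1: growth direction
            ni, nj = i + 1, j + dj
            if ni < rows and 0 <= nj < cols and soil[ni, nj] == EMPTY:
                soil[ni, nj] = ROOT            # grow only into empty soil
    return soil

soil = grow_root()
```

By construction, every root cell except the seed was grown from a root cell in the row above it, and stones are never overwritten during growth.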

  2. Simulation of Rutherford backscattering spectrometry from arbitrary atom structures

    DOE PAGES

    Zhang, S.; Univ. of Helsinki; Nordlund, Kai; ...

    2016-10-25

    Rutherford backscattering spectrometry in a channeling direction (RBS/C) is a powerful tool for analysis of the fraction of atoms displaced from their lattice positions. However, it is in many cases not straightforward to analyze what is the actual defect structure underlying the RBS/C signal. To reveal insights of RBS/C signals from arbitrarily complex defective atomic structures, we develop in this paper a method for simulating the RBS/C spectrum from a set of arbitrary read-in atom coordinates (obtained, e.g., from molecular dynamics simulations). We apply the developed method to simulate the RBS/C signals from Ni crystal structures containing randomly displaced atoms, Frenkel point defects, and extended defects, respectively. The RBS/C simulations show that, even for the same number of atoms in defects, the RBS/C signal is much stronger for the extended defects. Finally, comparison with experimental results shows that the disorder profile obtained from RBS/C signals in ion-irradiated Ni is due to a small fraction of extended defects rather than a large number of individual random atoms.

  3. Spontaneous quaternary and tertiary T-R transitions of human hemoglobin in molecular dynamics simulation.

    PubMed

    Hub, Jochen S; Kubitzki, Marcus B; de Groot, Bert L

    2010-05-06

    We present molecular dynamics simulations of unliganded human hemoglobin (Hb) A under physiological conditions, starting from the R, R2, and T state. The simulations were carried out with protonated and deprotonated HC3 histidines His(β)146, and they sum up to a total length of 5.6 μs. We observe spontaneous and reproducible T→R quaternary transitions of the Hb tetramer and tertiary transitions of the α and β subunits, as detected from principal component projections, from an RMSD measure, and from rigid body rotation analysis. The simulations reveal a marked asymmetry between the α and β subunits. Using the mutual information as a correlation measure, we find that the β subunits are substantially more strongly linked to the quaternary transition than the α subunits. In addition, the tertiary populations of the α and β subunits differ substantially, with the β subunits showing a tendency towards R, and the α subunits showing a tendency towards T. Based on the simulation results, we present a transition pathway for coupled quaternary and tertiary transitions between the R and T conformations of Hb.

  4. Spontaneous Quaternary and Tertiary T-R Transitions of Human Hemoglobin in Molecular Dynamics Simulation

    PubMed Central

    de Groot, Bert L.

    2010-01-01

    We present molecular dynamics simulations of unliganded human hemoglobin (Hb) A under physiological conditions, starting from the R, R2, and T state. The simulations were carried out with protonated and deprotonated HC3 histidines His(β)146, and they sum up to a total length of 5.6µs. We observe spontaneous and reproducible T→R quaternary transitions of the Hb tetramer and tertiary transitions of the α and β subunits, as detected from principal component projections, from an RMSD measure, and from rigid body rotation analysis. The simulations reveal a marked asymmetry between the α and β subunits. Using the mutual information as correlation measure, we find that the β subunits are substantially more strongly linked to the quaternary transition than the α subunits. In addition, the tertiary populations of the α and β subunits differ substantially, with the β subunits showing a tendency towards R, and the α subunits showing a tendency towards T. Based on the simulation results, we present a transition pathway for coupled quaternary and tertiary transitions between the R and T conformations of Hb. PMID:20463873

  5. Simulation and optimization of a pulsating heat pipe using artificial neural network and genetic algorithm

    NASA Astrophysics Data System (ADS)

    Jokar, Ali; Godarzi, Ali Abbasi; Saber, Mohammad; Shafii, Mohammad Behshad

    2016-11-01

    In this paper, a novel approach is presented to simulate and optimize pulsating heat pipes (PHPs). The pulsating heat pipe setup used was designed and constructed for this study. Because no general mathematical model exists for exact analysis of PHPs, a method based on natural algorithms has been applied for simulation and optimization. The simulator consists of a multilayer perceptron neural network trained on experimental results obtained from our PHP setup. The results show that the complex behavior of PHPs can be successfully described by the non-linear structure of this simulator. The input variables of the neural network are the input heat flux to the evaporator (q″), the filling ratio (FR), and the inclination angle (IA); its output is the thermal resistance of the PHP. Finally, based on the simulation results and the heat pipe's operating constraints, the optimum operating point of the system is obtained using a genetic algorithm (GA). The experimental results show that the optimum FR (38.25 %), input heat flux to the evaporator (39.93 W), and IA (55°) obtained from the GA are acceptable.
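
    The paper's simulator is a trained multilayer perceptron, which the abstract does not specify in detail. As a stand-in, the sketch below replaces it with a hypothetical smooth surrogate whose minimum is placed near the reported optimum, and runs a minimal real-coded genetic algorithm over (q, FR, IA). The bounds, population settings, and the surrogate itself are assumptions for illustration only.

```python
import random

random.seed(1)

# stand-in for the trained network: thermal resistance as a smooth
# function of heat flux q, filling ratio FR, and inclination angle IA
# (a placeholder bowl with its minimum near the reported optimum)
def thermal_resistance(q, fr, ia):
    return 0.5 + ((q - 40) / 40) ** 2 + ((fr - 38) / 38) ** 2 + ((ia - 55) / 55) ** 2

BOUNDS = [(10.0, 80.0), (20.0, 80.0), (0.0, 90.0)]   # q, FR, IA ranges (assumed)

def clip(x, lo, hi):
    return max(lo, min(hi, x))

# minimal real-coded GA: elitism, blend crossover, Gaussian mutation
pop = [[random.uniform(lo, hi) for lo, hi in BOUNDS] for _ in range(40)]
for _ in range(60):
    scored = sorted(pop, key=lambda ind: thermal_resistance(*ind))
    elite = scored[:8]                     # keep the 8 best unchanged
    children = []
    while len(children) < len(pop) - len(elite):
        a, b = random.sample(elite, 2)
        children.append([clip((x + y) / 2 + random.gauss(0, 2), lo, hi)
                         for x, y, (lo, hi) in zip(a, b, BOUNDS)])
    pop = elite + children

best = min(pop, key=lambda ind: thermal_resistance(*ind))
```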

  6. Investigation on coupling error characteristics in angular rate matching based ship deformation measurement approach

    NASA Astrophysics Data System (ADS)

    Yang, Shuai; Wu, Wei; Wang, Xingshu; Xu, Zhiguang

    2018-01-01

    The coupling error in the measurement of ship hull deformation can significantly influence the attitude accuracy of shipborne weapons and equipment. It is therefore important to study the characteristics of the coupling error. In this paper, a comprehensive investigation of the coupling error is reported, which has the potential to reduce the coupling error in the future. First, the causes and characteristics of the coupling error are analyzed theoretically, based on the basic theory of measuring ship deformation. Then, simulations are conducted to verify the correctness of the theoretical analysis. Simulation results show that the cross-correlation between dynamic flexure and ship angular motion leads to the coupling error in measuring ship deformation, and that the coupling error increases with the correlation between them. All simulation results coincide with the theoretical analysis.
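
    The role of the cross-correlation between dynamic flexure and ship angular motion can be illustrated with a toy signal model. The sine-wave signals and the mixing parameter rho below are invented for the sketch, not taken from the paper; rho controls how strongly the flexure is correlated with the angular motion.

```python
import math

N, dt = 2000, 0.01
t = [i * dt for i in range(N)]

# ship angular motion (illustrative low-frequency roll, in degrees)
angular = [0.5 * math.sin(2 * math.pi * 0.2 * ti) for ti in t]

def flexure(rho):
    # dynamic flexure: a part correlated with the angular motion plus an
    # independent oscillation at a different frequency
    return [rho * a + (1 - rho) * 0.5 * math.sin(2 * math.pi * 1.1 * ti)
            for a, ti in zip(angular, t)]

def corr(x, y):
    # Pearson correlation coefficient
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

c_low = corr(angular, flexure(0.1))    # weakly coupled flexure
c_high = corr(angular, flexure(0.6))   # strongly coupled flexure
```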

  7. A Computational Approach for Probabilistic Analysis of Water Impact Simulations

    NASA Technical Reports Server (NTRS)

    Horta, Lucas G.; Mason, Brian H.; Lyle, Karen H.

    2009-01-01

    NASA's development of new concepts for the Crew Exploration Vehicle Orion presents many challenges similar to those worked in the sixties during the Apollo program. However, with improved modeling capabilities, new challenges arise. For example, the commercial code LS-DYNA, although widely used and accepted in the technical community, often involves high-dimensional, time-consuming, and computationally intensive simulations. The challenge is to capture what is learned from a limited number of LS-DYNA simulations to develop models that allow users to interpolate solutions at a fraction of the computational time. This paper presents a description of the LS-DYNA model, a brief summary of the response surface techniques, the analysis-of-variance approach used in the sensitivity studies, equations used to estimate impact parameters, results showing conditions that might cause injuries, and concluding remarks.

  8. A comparison of solute-transport solution techniques based on inverse modelling results

    USGS Publications Warehouse

    Mehl, S.; Hill, M.C.

    2000-01-01

    Five common numerical techniques (finite difference, predictor-corrector, total-variation-diminishing, method-of-characteristics, and modified-method-of-characteristics) were tested using simulations of a controlled conservative tracer-test experiment through a heterogeneous, two-dimensional sand tank. The experimental facility was constructed using randomly distributed homogeneous blocks of five sand types. This experimental model provides an outstanding opportunity to compare the solution techniques because of the heterogeneous hydraulic conductivity distribution of known structure, and the availability of detailed measurements with which to compare simulated concentrations. The present work uses this opportunity to investigate how three common types of results - simulated breakthrough curves, sensitivity analysis, and calibrated parameter values - change in this heterogeneous situation, given the different methods of simulating solute transport. The results show that simulated peak concentrations, even at very fine grid spacings, varied because of different amounts of numerical dispersion. Sensitivity analysis results were robust in that they were independent of the solution technique. They revealed extreme correlation between hydraulic conductivity and porosity, and that the breakthrough curve data did not provide enough information about the dispersivities to estimate individual values for the five sands. However, estimated hydraulic conductivity values are significantly influenced by both the large possible variations in model dispersion and the amount of numerical dispersion present in the solution technique.
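
    The numerical dispersion the abstract refers to can be demonstrated with the simplest relative of the tested schemes, a first-order upwind finite difference for 1-D advection: the simulated peak concentration falls below the exact peak purely because of the scheme. The grid, Courant number, and pulse shape are illustrative.

```python
# advect a square concentration pulse with a first-order upwind scheme
nx, c = 200, 0.5                  # grid points, Courant number
u = [1.0 if 40 <= i < 60 else 0.0 for i in range(nx)]   # initial pulse

steps = 100
for _ in range(steps):
    # upwind update: new u[i] = u[i] - c * (u[i] - u[i-1])
    # (Python's u[-1] at i = 0 makes the boundary periodic)
    u = [u[i] - c * (u[i] - u[i - 1]) for i in range(nx)]

# the exact solution just translates the pulse by c*steps = 50 cells,
# so the exact peak remains 1.0; the upwind scheme smears it lower
exact_peak = 1.0
simulated_peak = max(u)
```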

  9. Preserving the Boltzmann ensemble in replica-exchange molecular dynamics.

    PubMed

    Cooke, Ben; Schmidler, Scott C

    2008-10-28

    We consider the convergence behavior of replica-exchange molecular dynamics (REMD) [Sugita and Okamoto, Chem. Phys. Lett. 314, 141 (1999)] based on properties of the numerical integrators in the underlying isothermal molecular dynamics (MD) simulations. We show that a variety of deterministic algorithms favored by molecular dynamics practitioners for constant-temperature simulation of biomolecules fail either to be measure invariant or irreducible, and are therefore not ergodic. We then show that REMD using these algorithms also fails to be ergodic. As a result, the entire configuration space may not be explored even in an infinitely long simulation, and the simulation may not converge to the desired equilibrium Boltzmann ensemble. Moreover, our analysis shows that for initial configurations with unfavorable energy, it may be impossible for the system to reach a region surrounding the minimum energy configuration. We demonstrate these failures of REMD algorithms for three small systems: a Gaussian distribution (simple harmonic oscillator dynamics), a bimodal mixture of Gaussians distribution, and the alanine dipeptide. Examination of the resulting phase plots and equilibrium configuration densities indicates significant errors in the ensemble generated by REMD simulation. We describe a simple modification to address these failures based on a stochastic hybrid Monte Carlo correction, and prove that this is ergodic.
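
    The replica-exchange step under discussion uses the standard Metropolis swap criterion: a swap between replicas at inverse temperatures beta_i and beta_j with energies E_i and E_j is accepted with probability min(1, exp((beta_i - beta_j)(E_i - E_j))). The sketch below checks its two regimes with illustrative numbers, not values from the paper.

```python
import math
import random

random.seed(2)

def swap_accepted(beta_i, beta_j, e_i, e_j):
    # Metropolis acceptance for exchanging configurations of two replicas
    delta = (beta_i - beta_j) * (e_i - e_j)
    return random.random() < min(1.0, math.exp(delta))

# hotter replica (smaller beta) holds the lower energy: always accepted
always = all(swap_accepted(1.0, 2.0, 5.0, 9.0) for _ in range(1000))

# hotter replica holds the higher energy: accepted with probability exp(-4)
rate = sum(swap_accepted(1.0, 2.0, 9.0, 5.0) for _ in range(10000)) / 10000
```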

  10. mocca code for star cluster simulations - VI. Bimodal spatial distribution of blue stragglers

    NASA Astrophysics Data System (ADS)

    Hypki, Arkadiusz; Giersz, Mirek

    2017-11-01

    The paper presents an analysis of the formation mechanism and properties of the spatial distributions of blue stragglers in evolving globular clusters, based on numerical simulations performed with the mocca code. First, we present N-body and mocca simulations that attempt to reproduce the simulations of Ferraro et al. (2012). Then, we show the agreement between N-body and the mocca code. Finally, we discuss the formation process of the bimodal distribution. We report that we could not reproduce the simulations from Ferraro et al. (2012). Moreover, we show that the so-called bimodal spatial distribution of blue stragglers is a very transient feature: it may form in one snapshot in time and can easily vanish in the next. We also show that the radius of avoidance proposed by Ferraro et al. (2012) goes out of sync with the apparent minimum of the bimodal distribution after about two half-mass relaxation times, for reasons that remain undetermined. This finding poses a real challenge for the dynamical clock, which uses this radius to determine the dynamical age of globular clusters. Additionally, the paper discusses a few important problems concerning the apparent visibility of the bimodal distributions, which have to be taken into account when studying the spatial distributions of blue stragglers.

  11. Speckle correlation method used to measure object's in-plane velocity.

    PubMed

    Smíd, Petr; Horváth, Pavel; Hrabovský, Miroslav

    2007-06-20

    We present a measurement of an object's in-plane velocity in one direction by use of the speckle correlation method. Numerical correlations of speckle patterns recorded periodically during motion of the object under investigation give information used to evaluate the object's in-plane velocity. The proposed optical setup uses a detection plane in the image field and enables one to detect object velocities within the interval 10-150 μm·s⁻¹. Simulation analysis shows a way of controlling the measuring range. The presented theory, simulation analysis, and setup are verified through an experiment measuring the velocity profile of a moving object.
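
    The underlying idea, locating the cross-correlation peak of two successively recorded speckle patterns and dividing the displacement by the frame interval, can be sketched in one dimension. The synthetic speckle signal, frame interval, and pixel size below are assumptions, not the paper's parameters.

```python
import random

random.seed(3)
N = 400
speckle = [random.random() for _ in range(N)]   # synthetic speckle trace

shift_true = 12                       # pixels the pattern moved between frames
frame1 = speckle[:N - shift_true]
frame2 = speckle[shift_true:]         # the same pattern, translated

def best_shift(a, b, max_shift=30):
    # lag at which the cross-correlation of the two frames peaks
    def score(s):
        n = min(len(a) - s, len(b))
        return sum(a[i + s] * b[i] for i in range(n)) / n
    return max(range(max_shift + 1), key=score)

dt = 0.1                              # frame interval in seconds (assumed)
pixel_size = 1.0                      # micrometres per pixel (assumed)
velocity = best_shift(frame1, frame2) * pixel_size / dt   # μm/s
```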

  12. Evaluating Measurement of Dynamic Constructs: Defining a Measurement Model of Derivatives

    PubMed Central

    Estabrook, Ryne

    2015-01-01

    While measurement evaluation has been embraced as an important step in psychological research, evaluating measurement structures with longitudinal data is fraught with limitations. This paper defines and tests a measurement model of derivatives (MMOD), which is designed to assess the measurement structure of latent constructs both for analyses of between-person differences and for the analysis of change. Simulation results indicate that MMOD outperforms existing models for multivariate analysis and provides equivalent fit to data generation models. Additional simulations show MMOD capable of detecting differences in between-person and within-person factor structures. Model features, applications and future directions are discussed. PMID:24364383

  13. Energy consumption analysis of the Venus Deep Space Station (DSS-13)

    NASA Technical Reports Server (NTRS)

    Hayes, N. V.

    1983-01-01

    This report continues the energy consumption analysis and verification study of the tracking stations of the Goldstone Deep Space Communications Complex, and presents an audit of the Venus Deep Space Station (DSS 13). Because radioastronomy research and development operations at the station are not continuous, estimates of energy usage were employed in the energy consumption simulation of both the 9-meter and 26-meter antenna buildings. Station energy consumption decreased by 17.9% over the 1979-1981 study period. A comparison of the ECP computer simulations with the station's main watt-hour meter readings showed good agreement.

  14. A performance evaluation postprocessor for computer-aided design and analysis of communication systems

    NASA Technical Reports Server (NTRS)

    Tranter, W. H.

    1979-01-01

    A technique for estimating the signal-to-noise ratio at a point in a digital simulation of a communication system is described; the technique is essentially a digital realization of a technique proposed by Shepertycki (1964) for the evaluation of analog communication systems. Signals having lowpass or bandpass spectra may be used. Simulation results show the technique to be accurate over a wide range of signal-to-noise ratios.
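
    A common way to estimate the signal-to-noise ratio at a simulation point from the waveform and a clean reference, in the spirit of (though not necessarily identical to) the Shepertycki technique, is to split the measured power into a part correlated with the reference and a residual noise part. The waveform and noise level below are synthetic.

```python
import math
import random

random.seed(7)
N = 5000
ref = [math.sin(2 * math.pi * 5 * i / N) for i in range(N)]   # clean reference
noise_power = 0.04                                            # true noise power
noisy = [r + random.gauss(0, math.sqrt(noise_power)) for r in ref]

signal_power = sum(r * r for r in ref) / N          # reference power (about 0.5)
cross = sum(r * m for r, m in zip(ref, noisy)) / N  # cross power with reference
# measured power minus the correlated part leaves an estimate of the noise
est_noise = sum(m * m for m in noisy) / N - cross ** 2 / signal_power
snr_db = 10 * math.log10((cross ** 2 / signal_power) / est_noise)
```

The true SNR here is 0.5/0.04, i.e. roughly 11 dB, so the estimate should land near that value.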

  15. Molecular dynamics simulation of propagating cracks

    NASA Technical Reports Server (NTRS)

    Mullins, M.

    1982-01-01

    Steady state crack propagation is investigated numerically using a model consisting of 236 free atoms in two (010) planes of bcc alpha iron. The continuum region is modeled using the finite element method with 175 nodes and 288 elements. The model shows clear (010) plane fracture to the edge of the discrete region at moderate loads. Analysis of the results obtained indicates that models of this type can provide realistic simulation of steady state crack propagation.

  16. Mathematic models for a ray tracing method and its applications in wireless optical communications.

    PubMed

    Zhang, Minglun; Zhang, Yangan; Yuan, Xueguang; Zhang, Jinnan

    2010-08-16

    This paper presents a new ray tracing method, which contains a whole set of mathematic models, and its validity is verified by simulations. In addition, both theoretical analysis and simulation results show that the computational complexity of the method is much lower than that of previous ones. Therefore, the method can be used to rapidly calculate the impulse response of wireless optical channels for complicated systems.

  17. Heteroscedastic Tests Statistics for One-Way Analysis of Variance: The Trimmed Means and Hall's Transformation Conjunction

    ERIC Educational Resources Information Center

    Luh, Wei-Ming; Guo, Jiin-Huarng

    2005-01-01

    To deal with nonnormal and heterogeneous data for the one-way fixed effect analysis of variance model, the authors adopted a trimmed means method in conjunction with Hall's invertible transformation into a heteroscedastic test statistic (Alexander-Govern test or Welch test). The results of simulation experiments showed that the proposed technique…

  18. Hyper-X Stage Separation: Simulation Development and Results

    NASA Technical Reports Server (NTRS)

    Reubush, David E.; Martin, John G.; Robinson, Jeffrey S.; Bose, David M.; Strovers, Brian K.

    2001-01-01

    This paper provides an overview of stage separation simulation development and results for NASA's Hyper-X program, a focused hypersonic technology effort designed to move hypersonic, airbreathing vehicle technology from the laboratory environment to the flight environment. The paper presents an account of the development of the current 14-degree-of-freedom stage separation simulation tool (SepSim) and results from use of the tool in a Monte Carlo analysis to evaluate the risk of failure for the separation event. Results from use of the tool show that there is only a very small risk of failure in the separation event.

  19. Nonlinear dynamic mechanism of vocal tremor from voice analysis and model simulations

    NASA Astrophysics Data System (ADS)

    Zhang, Yu; Jiang, Jack J.

    2008-09-01

    Nonlinear dynamic analysis and model simulations are used to study the nonlinear dynamic characteristics of vocal folds with vocal tremor, which can typically be characterized by low-frequency modulation and aperiodicity. Tremor voices from patients with disorders such as paresis, Parkinson's disease, hyperfunction, and adductor spasmodic dysphonia show low-dimensional characteristics, differing from random noise. Correlation dimension analysis statistically distinguishes tremor voices from normal voices. Furthermore, a nonlinear tremor model is proposed to study the vibrations of the vocal folds with vocal tremor. Fractal dimensions and positive Lyapunov exponents demonstrate the evidence of chaos in the tremor model, where amplitude and frequency play important roles in governing vocal fold dynamics. Nonlinear dynamic voice analysis and vocal fold modeling may provide a useful set of tools for understanding the dynamic mechanism of vocal tremor in patients with laryngeal diseases.

  20. Buckling analysis of planar compression micro-springs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Jing; Sui, Li; Shi, Gengchen

    2015-04-15

    Large compression deformation causes micro-spring buckling and loss of load capacity. We analyzed the impact of structural parameters and boundary conditions for planar micro-springs, and obtained the change rules for the two factors that affect buckling. A formula for the critical buckling deformation of micro-springs under compressive load was derived based on elastic thin-plate theory. Results from this formula were compared with finite element analysis results, but these did not always correlate; therefore, finite element analysis is necessary for micro-spring buckling analysis. We studied the variation of micro-spring critical buckling deformation caused by four structural parameters using ANSYS software under two constraint conditions. The simulation results show that when an x-direction constraint is added, the critical buckling deformation increases by 32.3-297.9%. The critical buckling deformation decreases with increasing micro-spring arc radius or section width, and increases with increasing micro-spring thickness or straight beam width. We conducted experiments to confirm the simulation results, and the experimental and simulation trends were found to agree. Buckling analysis of the micro-spring establishes a theoretical foundation for optimizing micro-spring structural parameters and constraint conditions to maximize the critical buckling load.

  1. Pattern Recognition for a Flight Dynamics Monte Carlo Simulation

    NASA Technical Reports Server (NTRS)

    Restrepo, Carolina; Hurtado, John E.

    2011-01-01

    The design, analysis, and verification and validation of a spacecraft rely heavily on Monte Carlo simulations. Modern computational techniques are able to generate large amounts of Monte Carlo data, but flight dynamics engineers lack the time and resources to analyze it all. The growing amount of data combined with the diminished available time of engineers motivates the need to automate the analysis process. Pattern recognition algorithms are an innovative way of analyzing flight dynamics data efficiently. They can search large data sets for specific patterns and highlight critical variables so analysts can focus their analysis efforts. This work combines a few tractable pattern recognition algorithms with basic flight dynamics concepts to build a practical analysis tool for Monte Carlo simulations. Current results show that this tool can quickly and automatically identify individual design parameters and, most importantly, specific combinations of parameters that should be avoided in order to prevent specific system failures. The current version uses a kernel density estimation algorithm and a sequential feature selection algorithm combined with a k-nearest neighbor classifier to find and rank important design parameters. This provides an increased level of confidence in the analysis and saves a significant amount of time.
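
    The sequential-feature-selection-plus-k-nearest-neighbor stage can be sketched on synthetic "Monte Carlo" data. Everything here (the data model, k, sample sizes, the two-feature failure boundary) is illustrative, and the kernel density estimation stage of the tool is omitted.

```python
import random

random.seed(4)

# synthetic Monte Carlo data: three design parameters, but only x0 and
# x1 drive the simulated "failure" label; x2 is pure noise
def sample():
    x = [random.uniform(0, 1) for _ in range(3)]
    return x, 1 if x[0] + x[1] > 1.0 else 0

train = [sample() for _ in range(400)]
test = [sample() for _ in range(200)]

def knn_accuracy(features, k=5):
    # accuracy of a k-NN classifier that only sees the given features
    def predict(q):
        nearest = sorted(train,
                         key=lambda p: sum((p[0][f] - q[f]) ** 2 for f in features))
        return 1 if sum(lbl for _, lbl in nearest[:k]) * 2 > k else 0
    return sum(predict(x) == y for x, y in test) / len(test)

# sequential forward selection: greedily add the most helpful feature
selected, remaining = [], [0, 1, 2]
for _ in range(2):
    best = max(remaining, key=lambda f: knn_accuracy(selected + [f]))
    selected.append(best)
    remaining.remove(best)
```

On this toy data the selection should prefer the informative parameters x0 and x1 over the noise parameter x2, mirroring how the tool ranks design parameters.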

  2. Guidelines for the analysis of free energy calculations

    PubMed Central

    Klimovich, Pavel V.; Shirts, Michael R.; Mobley, David L.

    2015-01-01

    Free energy calculations based on molecular dynamics (MD) simulations show considerable promise for applications ranging from drug discovery to prediction of physical properties and structure-function studies. But these calculations are still difficult and tedious to analyze, and best practices for analysis are not well defined or propagated. Essentially, each group analyzing these calculations needs to decide how to conduct the analysis and, usually, develop its own analysis tools. Here, we review and recommend best practices for analysis yielding reliable free energies from molecular simulations. Additionally, we provide a Python tool, alchemical-analysis.py, freely available on GitHub at https://github.com/choderalab/pymbar-examples, that implements the analysis practices reviewed here for several reference simulation packages, which can be adapted to handle data from other packages. Both this review and the tool cover analysis of alchemical calculations generally, including free energy estimates via both thermodynamic integration and free energy perturbation-based estimators. Our Python tool also handles output from multiple types of free energy calculations, including expanded ensemble and Hamiltonian replica exchange, as well as standard fixed ensemble calculations. We also survey a range of statistical and graphical ways of assessing the quality of the data and free energy estimates, and provide prototypes of these in our tool. We hope these tools and discussion will serve as a foundation for more standardization of and agreement on best practices for analysis of free energy calculations. PMID:25808134
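
    One of the estimators covered, thermodynamic integration, reduces to numerically integrating the per-window averages of dU/dλ over λ, e.g. with the trapezoidal rule. The λ schedule and per-window means below are hypothetical numbers, not output from any real simulation.

```python
# thermodynamic integration: ΔG = ∫₀¹ ⟨dU/dλ⟩_λ dλ
lambdas = [0.0, 0.25, 0.5, 0.75, 1.0]      # hypothetical λ windows
du_dl = [12.0, 7.5, 4.0, 1.5, 0.0]         # hypothetical ⟨dU/dλ⟩ per window

def trapezoid(xs, ys):
    # trapezoidal-rule quadrature over (possibly unevenly spaced) points
    return sum((xs[i + 1] - xs[i]) * (ys[i + 1] + ys[i]) / 2
               for i in range(len(xs) - 1))

delta_g = trapezoid(lambdas, du_dl)        # estimated free energy difference
```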

  3. A cost simulation for mammography examinations taking into account equipment failures and resource utilization characteristics.

    PubMed

    Coelli, Fernando C; Almeida, Renan M V R; Pereira, Wagner C A

    2010-12-01

    This work develops a cost estimate for a mammography clinic, taking into account resource utilization and equipment failure rates. Two standard clinic models were simulated: the first with one mammography unit, two technicians, and one doctor; the second (based on an actual operating clinic) with two units, three technicians, and one doctor. Cost data and model parameters were obtained from direct measurements, literature reviews, and other hospital data. A discrete-event simulation model was developed to estimate the unit cost (total costs/number of examinations in a defined period) of mammography examinations at these clinics. The cost analysis considered simulated changes in resource utilization rates and in examination failure probabilities (failures of the image acquisition system). In addition, a sensitivity analysis was performed, taking into account changes in the probabilities of equipment failure types. For the two clinic configurations, the estimated mammography unit costs were, respectively, US$ 41.31 and US$ 53.46 in the absence of examination failures. As examination failures increased up to 10% of total examinations, unit costs approached US$ 54.53 and US$ 53.95, respectively. The sensitivity analysis showed that increases in type 3 (the most serious) failures had a very large impact on patient attendance, up to the point of actually making attendance unfeasible. Discrete-event simulation allowed for the identification of the more efficient clinic configuration, contingent on the expected prevalence of resource utilization and equipment failures. © 2010 Blackwell Publishing Ltd.
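
    The unit-cost logic described (total cost divided by examinations completed, with failed acquisitions repeated) can be sketched as a toy single-room discrete-event model. All costs, times, and the failure mechanism below are invented for illustration and are not the paper's values.

```python
import random

random.seed(5)

def unit_cost(p_fail, n_patients=1000,
              exam_minutes=15.0, cost_per_minute=3.0, fixed_cost=5000.0):
    # advance a clock through back-to-back examinations; each acquisition
    # fails (and must be repeated) with probability p_fail
    clock = 0.0
    for _ in range(n_patients):
        clock += exam_minutes
        while random.random() < p_fail:
            clock += exam_minutes          # repeat the failed acquisition
    total_cost = fixed_cost + clock * cost_per_minute
    return total_cost / n_patients

c_no_failures = unit_cost(0.0)
c_ten_percent = unit_cost(0.10)            # failures raise the unit cost
```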

  4. Development of Viscoelastic Multi-Body Simulation and Impact Response Analysis of a Ballasted Railway Track under Cyclic Loading

    PubMed Central

    Nishiura, Daisuke; Sakaguchi, Hide; Aikawa, Akira

    2017-01-01

    Simulation of a large number of deformable bodies is often difficult because complex high-level modeling is required to address both multi-body contact and viscoelastic deformation. This necessitates the combined use of a discrete element method (DEM) and a finite element method (FEM). In this study, a quadruple discrete element method (QDEM) was developed for dynamic analysis of viscoelastic materials using a simpler algorithm compared to the standard FEM. QDEM easily incorporates the contact algorithm used in DEM. As the first step toward multi-body simulation, the fundamental performance of QDEM was investigated for viscoelastic analysis. The amplitude and frequency of cantilever elastic vibration were nearly equal to those obtained by the standard FEM. A comparison of creep recovery tests with an analytical solution showed good agreement between them. In addition, good correlation between the attenuation degree and the real physical viscosity was confirmed for viscoelastic vibration analysis. Therefore, the high accuracy of QDEM in the fundamental analysis of infinitesimal viscoelastic deformations was verified. Finally, the impact response of a ballast and sleeper under cyclic loading on a railway track was analyzed using QDEM as an application of deformable multi-body dynamics. The results showed that the vibration of the ballasted track was qualitatively in good agreement with the actual measurements. Moreover, the ballast layer with high friction reduced the ballasted track deterioration. This study suggests that QDEM, as an alternative to DEM and FEM, can provide deeper insights into the contact dynamics of a large number of deformable bodies. PMID:28772974

  5. Development of Viscoelastic Multi-Body Simulation and Impact Response Analysis of a Ballasted Railway Track under Cyclic Loading.

    PubMed

    Nishiura, Daisuke; Sakaguchi, Hide; Aikawa, Akira

    2017-06-03

    Simulation of a large number of deformable bodies is often difficult because complex high-level modeling is required to address both multi-body contact and viscoelastic deformation. This necessitates the combined use of a discrete element method (DEM) and a finite element method (FEM). In this study, a quadruple discrete element method (QDEM) was developed for dynamic analysis of viscoelastic materials using a simpler algorithm compared to the standard FEM. QDEM easily incorporates the contact algorithm used in DEM. As the first step toward multi-body simulation, the fundamental performance of QDEM was investigated for viscoelastic analysis. The amplitude and frequency of cantilever elastic vibration were nearly equal to those obtained by the standard FEM. A comparison of creep recovery tests with an analytical solution showed good agreement between them. In addition, good correlation between the attenuation degree and the real physical viscosity was confirmed for viscoelastic vibration analysis. Therefore, the high accuracy of QDEM in the fundamental analysis of infinitesimal viscoelastic deformations was verified. Finally, the impact response of a ballast and sleeper under cyclic loading on a railway track was analyzed using QDEM as an application of deformable multi-body dynamics. The results showed that the vibration of the ballasted track was qualitatively in good agreement with the actual measurements. Moreover, the ballast layer with high friction reduced the ballasted track deterioration. This study suggests that QDEM, as an alternative to DEM and FEM, can provide deeper insights into the contact dynamics of a large number of deformable bodies.

  6. A numerical simulation method and analysis of a complete thermoacoustic-Stirling engine.

    PubMed

    Ling, Hong; Luo, Ercang; Dai, Wei

    2006-12-22

    Thermoacoustic prime movers can generate pressure oscillations without any moving parts through the self-excited thermoacoustic effect. The details of the numerical simulation methodology for thermoacoustic engines are presented in this paper. First, a four-port network method is used to build a transcendental equation in the complex frequency as a criterion for judging whether the temperature distribution of the whole thermoacoustic system is correct for a given heating power. Then, a numerical simulation of a thermoacoustic-Stirling heat engine is carried out. It is shown that the numerical simulation code runs robustly and outputs the quantities of interest. Finally, the calculated results are compared with experiments on the thermoacoustic-Stirling heat engine (TASHE), showing that the numerical simulation agrees with the experimental results with acceptable accuracy.

  7. Fractal analysis of the dark matter and gas distributions in the Mare-Nostrum universe

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gaite, José, E-mail: jose.gaite@upm.es

    2010-03-01

    We develop a method of multifractal analysis of N-body cosmological simulations that improves on the customary counts-in-cells method by taking special care of the effects of discreteness and large-scale homogeneity. The analysis of the Mare-Nostrum simulation with our method provides strong evidence of self-similar multifractal distributions of dark matter and gas, with a halo mass function that is of Press-Schechter type but has a power-law exponent -2, as corresponds to a multifractal. Furthermore, our analysis shows that the dark matter and gas distributions are indistinguishable as multifractals. To determine if there is any gas biasing, we calculate the cross-correlation coefficient, with negative but inconclusive results. Hence, we develop an effective Bayesian analysis connected with information theory, which clearly demonstrates that the gas is biased in a long range of scales, up to the scale of homogeneity. However, entropic measures related to the Bayesian analysis show that this gas bias is small (in a precise sense) and is such that the fractal singularities of both distributions coincide and are identical. We conclude that this common multifractal cosmic web structure is determined by the dynamics and is independent of the initial conditions.
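
    A counts-in-cells estimate of one multifractal exponent, the correlation dimension D2 (from the scaling of Σ p_i² with cell size), can be sketched as follows. The uniform 2-D point set stands in for a simulation snapshot and should give D2 near 2; none of the discreteness corrections the paper develops are attempted here.

```python
import math
import random

random.seed(6)
pts = [(random.random(), random.random()) for _ in range(4000)]

def sum_p2(n):
    # counts-in-cells on an n x n grid, returning the second moment sum p_i^2
    counts = {}
    for x, y in pts:
        key = (int(x * n), int(y * n))
        counts[key] = counts.get(key, 0) + 1
    total = len(pts)
    return sum((c / total) ** 2 for c in counts.values())

def d2_estimate(n_a, n_b):
    # D2 from sum p_i^2 ~ (cell size)^D2 between two grid resolutions
    return ((math.log(sum_p2(n_b)) - math.log(sum_p2(n_a)))
            / (math.log(1 / n_b) - math.log(1 / n_a)))

d2 = d2_estimate(4, 8)
```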

  8. Enhancement of CFD validation exercise along the roof profile of a low-rise building

    NASA Astrophysics Data System (ADS)

    Deraman, S. N. C.; Majid, T. A.; Zaini, S. S.; Yahya, W. N. W.; Abdullah, J.; Ismail, M. A.

    2018-04-01

    The aim of this study is to enhance the validation of a CFD exercise along the roof profile of a low-rise building. An isolated gabled-roof house with a 26.6° roof pitch was simulated to obtain the pressure coefficients around the house. Validating a CFD analysis against experimental data requires many input parameters. This study performed the CFD simulation based on data from a previous study; where input parameters were not clearly stated, new ones were established from the open literature. The numerical simulations were performed in FLUENT 14.0 using a Computational Fluid Dynamics (CFD) approach based on the steady RANS equations together with the RNG k-ɛ model. The CFD results were then analysed quantitatively (statistical analysis) and compared with the CFD results from the previous study. The ANOVA tests and error measures showed that the CFD results from the current study are in good agreement with, and exhibit the smallest error relative to, the previous study. The input data used in this study can be extended to other CFD simulations involving wind flow over an isolated single-storey house.

  9. Pharmacy practice simulations: performance of senior pharmacy students at a University in southern Brazil

    PubMed Central

    Galato, Dayani; Alano, Graziela M.; Trauthman, Silvana C.; França, Tainã F.

    Objective A simulation process known as the objective structured clinical examination (OSCE) was applied to assess the pharmacy practice of senior pharmacy students. Methods A cross-sectional study was conducted based on documentary analysis of performance evaluation records of pharmacy practice simulations that occurred between 2005 and 2009. These simulations were related to the processes of self-medication and dispensing, and were performed with simulated patients. The simulations were filmed to facilitate the evaluation process. The paper presents the OSCE educational experience of pharmacy trainees at the University of Southern Santa Catarina as assessed by two evaluators. Overall student performance was analyzed, and the pharmacy practice assessment criteria frequently identified trainees in difficulty. Results The results of 291 simulations showed that students achieved an average performance of 70.0%. Several difficulties were encountered, such as lack of information about the selected/prescribed treatment regimen (65.1%); inadequate communication style (21.9%); failure to identify patients’ needs (7.7%); and inappropriate drug selection for self-medication (5.3%). Conclusions These data show that there is a need to reorient the training of clinical pharmacy students: they need to improve their communication skills and deepen their knowledge of medicines and health problems in order to properly counsel their patients. PMID:24367467

  10. Comparison of normalization methods for differential gene expression analysis in RNA-Seq experiments

    PubMed Central

    Maza, Elie; Frasse, Pierre; Senin, Pavel; Bouzayen, Mondher; Zouine, Mohamed

    2013-01-01

    In recent years, RNA-Seq technologies have become a powerful tool for transcriptome studies. However, computational methods dedicated to the analysis of high-throughput sequencing data have yet to be standardized. In particular, it is known that the choice of normalization procedure leads to great variability in the results of differential gene expression analysis. The present study compares the most widespread normalization procedures and proposes a novel one aimed at removing an inherent bias of the studied transcriptomes related to their relative size. Comparisons of the normalization procedures are performed on real and simulated data sets. Analyses of real RNA-Seq data sets, performed with all the different normalization methods, show that only 50% of the significantly differentially expressed genes are common, highlighting the influence of the normalization step on the differential expression analysis. Real and simulated data set analyses give similar results, showing three groups of procedures with the same behavior. The group including the novel method, named “Median Ratio Normalization” (MRN), gives the lowest number of false discoveries. Within this group, the MRN method is the least sensitive to changes in parameters related to the relative size of transcriptomes, such as the number of down- and upregulated genes and the gene expression levels. The newly proposed MRN method efficiently deals with the intrinsic bias resulting from the relative size of the studied transcriptomes. Validation with real and simulated data sets confirmed that MRN is more consistent and robust than existing methods. PMID:26442135
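
    The median-of-ratios idea underlying this family of size-factor normalizations can be sketched as follows (a minimal illustration of the shared approach, as in DESeq-style normalization; the published MRN procedure differs in detail):

```python
import numpy as np

def median_ratio_size_factors(counts):
    """Median-of-ratios size factors for a genes x samples count matrix."""
    c = np.asarray(counts, dtype=float)
    keep = (c > 0).all(axis=1)                 # genes observed in every sample
    logc = np.log(c[keep])
    log_geo_mean = logc.mean(axis=1)           # per-gene pseudo-reference
    # per-sample size factor: median ratio of the sample to the reference
    return np.exp(np.median(logc - log_geo_mean[:, None], axis=0))

counts = np.array([[10, 20], [100, 200], [50, 100], [7, 14]])
sf = median_ratio_size_factors(counts)
normalized = counts / sf                       # columns become comparable
```

    Here the second sample is exactly twice the first, so its size factor comes out twice as large and the normalized columns coincide; real data additionally require handling the up/down-regulated fraction that MRN targets.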

  11. Analysis of stimulated Raman backscatter and stimulated Brillouin backscatter in experiments performed on SG-III prototype facility with a spectral analysis code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hao, Liang; Zhao, Yiqing; Hu, Xiaoyan

    2014-07-15

    Experiments observing stimulated Raman backscatter (SRS) and stimulated Brillouin backscatter (SBS) in Hohlraums were performed on the Shenguang-III (SG-III) prototype facility for the first time in 2011. In this paper, the relevant experimental results are analyzed for the first time with a one-dimensional spectral analysis code developed to study the coexisting SRS and SBS processes under Hohlraum plasma conditions. Spectral features of the backscattered light are discussed for different plasma parameters. In the case of the empty-Hohlraum experiments, simulation results indicate that SBS, which grows fast in the energy deposition region near the Hohlraum wall, is the dominant instability process. The time-resolved spectra of SRS and SBS are numerically obtained and agree with the experimental observations. For the gas-filled Hohlraum experiments, simulation results show that SBS grows fastest in the Au plasma and amplifies convectively in the C5H12 gas, whereas SRS mainly grows in the high-density region of the C5H12 gas. Gain spectra and the spectra of backscattered light are simulated along the ray path, clearly showing the locations where the intensity of scattered light at a given wavelength increases. This work is helpful for understanding the observed spectral features of SRS and SBS, and the experiments and analysis provide a reference for future ignition target designs.

  12. Analysis of Simulated Temporal Illumination at the Lunar PSRs

    NASA Astrophysics Data System (ADS)

    Thompson, T. J.; Mahanti, P.

    2018-04-01

    Illumination on the Moon is modeled temporally for permanently shadowed regions to characterize lighting trends. Crater topography is used to generate view-factor maps, which show which areas contribute the most scattered light into the primary shadows.

  13. Verification and Validation of EnergyPlus Phase Change Material Model for Opaque Wall Assemblies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tabares-Velasco, P. C.; Christensen, C.; Bianchi, M.

    2012-08-01

    Phase change materials (PCMs) represent a technology that may reduce peak loads and HVAC energy consumption in buildings. A few building energy simulation programs have the capability to simulate PCMs, but their accuracy has not been completely tested. This study shows the procedure used to verify and validate the PCM model in EnergyPlus using an approach similar to that dictated by ASHRAE Standard 140, which consists of analytical verification, comparative testing, and empirical validation. This process was valuable, as two bugs were identified and fixed in the PCM model, and version 7.1 of EnergyPlus will have a validated PCM model. Preliminary results using whole-building energy analysis show that careful analysis should be done when designing with PCMs in homes, as their thermal performance depends on several variables such as PCM properties and location in the building envelope.

  14. Ozone Temporal Variability in the Subarctic Region: Comparison of Satellite Measurements with Numerical Simulations

    NASA Astrophysics Data System (ADS)

    Shved, G. M.; Virolainen, Ya. A.; Timofeyev, Yu. M.; Ermolenko, S. I.; Smyshlyaev, S. P.; Motsakov, M. A.; Kirner, O.

    2018-01-01

    Fourier and wavelet spectra of time series for the ozone column abundance in the atmospheric 0-25 and 25-60 km layers are analyzed from SBUV satellite observations and from numerical simulations based on the RSHU and EMAC models. The analysis uses datasets for three subarctic locations (St. Petersburg, Harestua, and Kiruna) for 2000-2014. The Fourier and wavelet spectra show periodicities in the range from 10 days to 10 years and from 1 day to 2 years, respectively. The comparison of the spectra shows overall agreement between the observational and modeled datasets. However, the analysis has revealed differences both between the measurements and the models and between the models themselves. The differences primarily concern the Rossby wave period region and the 11-year and semiannual periodicities. Possible reasons are given for the differences between the models and the measurements.
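
    The Fourier side of such an analysis reduces to a periodogram of the ozone time series; a minimal sketch that recovers the dominant period of an evenly sampled record (illustrative only, not the study's processing chain):

```python
import numpy as np

def dominant_period(series, dt_days=1.0):
    """Dominant periodicity of an evenly sampled time series via the
    FFT periodogram (a minimal stand-in for a full Fourier analysis)."""
    x = np.asarray(series, dtype=float)
    x = x - x.mean()                        # remove the mean (zero-frequency bin)
    power = np.abs(np.fft.rfft(x)) ** 2     # periodogram
    freqs = np.fft.rfftfreq(x.size, d=dt_days)
    k = 1 + np.argmax(power[1:])            # skip the residual DC bin
    return 1.0 / freqs[k]                   # period in days
```

    Scanning the full periodogram, rather than only its maximum, is what exposes the range of periodicities from days to years discussed above; wavelet spectra add time localization on top of this.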

  15. Operation analysis of a Chebyshev-Pantograph leg mechanism for a single DOF biped robot

    NASA Astrophysics Data System (ADS)

    Liang, Conghui; Ceccarelli, Marco; Takeda, Yukio

    2012-12-01

    In this paper, an operation analysis of a Chebyshev-Pantograph leg mechanism is presented for a single degree of freedom (DOF) biped robot. The proposed leg mechanism is composed of a Chebyshev four-bar linkage and a pantograph mechanism. In contrast to typical fully actuated anthropomorphic leg mechanisms, the proposed leg mechanism has peculiar features such as compactness, low cost, and ease of operation. Kinematic equations of the proposed leg mechanism are formulated for computer-oriented simulation. Simulation results show that the proposed leg mechanism operates with suitable characteristics. A parametric study has been carried out to evaluate the operation performance as a function of the design parameters. A prototype of a single DOF biped robot equipped with two of the proposed leg mechanisms has been built at LARM (Laboratory of Robotics and Mechatronics). Experimental tests show the practical, feasible walking ability of the prototype, and drawbacks of the mechanical design are discussed.

  16. Simulated annealing with probabilistic analysis for solving traveling salesman problems

    NASA Astrophysics Data System (ADS)

    Hong, Pei-Yee; Lim, Yai-Fung; Ramli, Razamin; Khalid, Ruzelan

    2013-09-01

    Simulated Annealing (SA) is a widely used meta-heuristic inspired by the annealing process of recrystallization in metals, and its efficiency is highly affected by the annealing schedule. In this paper, we therefore present an empirical study to provide a comparable annealing schedule for solving symmetric traveling salesman problems (TSP). A randomized complete block design is also used in this study. The results show that different parameters do affect the efficiency of SA, and we propose the best annealing schedule found, based on the post hoc test. SA was tested on seven selected benchmark problems of the symmetric TSP with the proposed annealing schedule. The performance of SA was evaluated empirically against benchmark solutions, with a simple analysis to validate the quality of the solutions. Computational results show that the proposed annealing schedule provides good-quality solutions.
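
    A plain SA loop for the symmetric TSP, with a geometric annealing schedule and 2-opt moves, can be sketched as follows (the schedule parameters here are illustrative defaults, not the schedule proposed in the paper):

```python
import math, random

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def simulated_annealing_tsp(dist, t0=100.0, alpha=0.995, iters=20000, seed=0):
    """SA for the symmetric TSP: 2-opt neighborhood, Metropolis acceptance,
    geometric cooling t <- alpha * t. Parameters are illustrative."""
    rng = random.Random(seed)
    n = len(dist)
    tour = list(range(n))
    rng.shuffle(tour)
    best = cur = tour_length(tour, dist)
    best_tour = tour[:]
    t = t0
    for _ in range(iters):
        i, j = sorted(rng.sample(range(n), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]   # 2-opt reversal
        delta = tour_length(cand, dist) - cur
        # accept improvements always, worsenings with Boltzmann probability
        if delta < 0 or rng.random() < math.exp(-delta / t):
            tour, cur = cand, cur + delta
            if cur < best:
                best, best_tour = cur, tour[:]
        t *= alpha                                              # cooling step
    return best_tour, best
```

    The paper's point is precisely that the choices of t0, alpha, and the iteration budget strongly affect solution quality, which is why its annealing schedule was tuned experimentally.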

  17. Planning acetabular fracture reduction using patient-specific multibody simulation of the hip

    NASA Astrophysics Data System (ADS)

    Oliveri, Hadrien; Boudissa, Mehdi; Tonetti, Jerome; Chabanas, Matthieu

    2017-03-01

    Acetabular fractures are a challenge in orthopedic surgery. Computer-aided solutions have been proposed to segment bone fragments, simulate the fracture reduction, or design the osteosynthesis fixation plates. This paper addresses the simulation part, which is usually carried out by freely moving bone fragments with six degrees of freedom to reproduce the pre-fracture state. Instead, we propose a different paradigm, closer to actual surgeons' requirements: to simulate the surgical procedure itself rather than the desired result. A simple, patient-specific, biomechanical multibody model is proposed, integrating the main ligaments and muscles of the hip joint while accounting for contacts between bone fragments. The main surgical tools and actions can be simulated, such as clamps, Schanz screws, or traction of the femur. Simulations are computed interactively, which enables clinicians to evaluate different strategies for optimal surgical planning. Six retrospective cases were studied, with simple and complex fracture patterns. After interactively building the models from preoperative CT, the gestures described in the surgical reports were reproduced, and the results of the simulations were compared with postoperative CT data. A qualitative study shows that the model behavior is excellent and the simulated reductions fit the observed data; a more quantitative analysis is currently being completed. Two cases are particularly significant, as the surgical reduction actually failed in both. Simulations show that it was indeed not possible to reduce these fractures with the chosen approach; had our simulator been used, better planning might have spared these patients a second surgery.

  18. High power diode pumped solid state (DPSS) laser systems active media robust modeling and analysis

    NASA Astrophysics Data System (ADS)

    Kashef, Tamer M.; Mokhtar, Ayman M.; Ghoniemy, Samy A.

    2018-02-01

    Diode side-pumped solid-state lasers have the potential to yield high-quality laser beams with high efficiency and reliability. This paper summarizes simulation results for the most predominant active media used in high power diode pumped solid-state (DPSS) laser systems. Nd:YAG, Nd:glass, and Nd:YLF rod laser systems were simulated using the finite element analysis software LASCAD. A performance trade-off analysis for the Nd:YAG, Nd:glass, and Nd:YLF rods was performed in order to predict the optimized system parameters and to investigate the thermally induced fracture that may occur due to heat load and mechanical stress. The simulation results showed that at the optimized values the Nd:YAG rod achieved the highest output power of 175 W with 43% efficiency and a heat load of 1.873 W/mm3. Negligible changes in laser output power, heat load, stress, and temperature distributions were observed when the Nd:YAG rod length was increased from 72 to 80 mm. Simulation of Nd:glass at different rod diameters under the same pumping conditions showed better results for mechanical stress and thermal load than Nd:YAG and Nd:YLF, which makes it very suitable for high-power laser applications, especially at large rod diameters. At large rod diameters, Nd:YLF is a mechanically weaker and softer crystal than Nd:YAG and Nd:glass due to its poor thermomechanical properties, which limits its use to low- and medium-power systems.

  19. Why simulation can be efficient: on the preconditions of efficient learning in complex technology based practices.

    PubMed

    Hofmann, Bjørn

    2009-07-23

    It is important to demonstrate learning outcomes of simulation in technology based practices, such as in advanced health care. Although many studies show skills improvement and self-reported change to practice, there are few studies demonstrating patient outcome and societal efficiency. The objective of the study is to investigate if and why simulation can be effective and efficient in a hi-tech health care setting. This is important in order to decide whether and how to design simulation scenarios and outcome studies. Core theoretical insights in Science and Technology Studies (STS) are applied to analyze the field of simulation in hi-tech health care education. In particular, a process-oriented framework where technology is characterized by its devices, methods and its organizational setting is applied. The analysis shows how advanced simulation can address core characteristics of technology beyond the knowledge of technology's functions. Simulation's ability to address skilful device handling as well as purposive aspects of technology provides a potential for effective and efficient learning. However, as technology is also constituted by organizational aspects, such as technology status, disease status, and resource constraints, the success of simulation depends on whether these aspects can be integrated in the simulation setting as well. This represents a challenge for future development of simulation and for demonstrating its effectiveness and efficiency. Assessing the outcome of simulation in education in hi-tech health care settings is worthwhile if core characteristics of medical technology are addressed. This challenges the traditional technical versus non-technical divide in simulation, as organizational aspects appear to be part of technology's core characteristics.

  20. Finite element analyses of continuous filament ties for masonry applications : final report for the Arquin Corporation.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quinones, Armando, Sr.; Bibeau, Tiffany A.; Ho, Clifford Kuofei

    2008-08-01

    Finite-element analyses were performed to simulate the response of a hypothetical vertical masonry wall subject to different lateral loads with and without continuous horizontal filament ties laid between rows of concrete blocks. A static loading analysis and cost comparison were also performed to evaluate optimal materials and designs for the spacers affixed to the filaments. Results showed that polypropylene, ABS, and polyethylene (high density) were suitable materials for the spacers based on performance and cost, and the short T-spacer design was optimal based on its performance and functionality. Simulations of vertical walls subject to static loads representing 100 mph winds (0.2 psi) and a seismic event (0.66 psi) showed that the simulated walls performed similarly and adequately when subject to these loads with and without the ties. Additional simulations and tests are required to assess the performance of actual walls with and without the ties under greater loads and more realistic conditions (e.g., cracks, non-linear response).

  1. Temperature field simulation and phantom validation of a Two-armed Spiral Antenna for microwave thermotherapy.

    PubMed

    Du, Yongxing; Zhang, Lingze; Sang, Lulu; Wu, Daocheng

    2016-04-29

    In this paper, an Archimedean planar spiral antenna was designed for thermotherapy applications. This type of antenna was chosen for its compact structure, flexible application, and wide heating area. The temperature field generated by this Two-armed Spiral Antenna in a muscle-equivalent phantom was simulated and subsequently validated by experiment. First, the specific absorption rate (SAR) field was calculated with the Finite Element Method (FEM) using Ansoft's High Frequency Structure Simulator (HFSS). Then, the temperature elevation in the phantom was simulated with an explicit finite-difference approximation of the bioheat equation (BHE). The temperature distribution was then validated in a phantom heating experiment. The results showed that the antenna has good heating ability over a wide heating area. A comparison between calculation and measurement showed fair agreement in the temperature elevation. The validated model can be applied to the analysis of electromagnetic-temperature distributions in phantoms during antenna design or thermotherapy experimentation.
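
    The explicit finite-difference treatment of the bioheat equation can be sketched in one dimension as follows (Pennes form of the BHE; all tissue parameters, the perfusion term, and the boundary conditions are illustrative, not the paper's phantom values):

```python
import numpy as np

def pennes_1d(sar, nx=101, length=0.05, t_end=60.0,
              k=0.5, rho=1000.0, c=3600.0, w_cb=1800.0, t_art=37.0):
    """Explicit finite-difference solution of the 1-D Pennes bioheat equation
    rho*c*dT/dt = k*d2T/dx2 + w_cb*(t_art - T) + rho*SAR,
    where w_cb lumps perfusion rate times blood heat capacity (W/m^3/K)."""
    dx = length / (nx - 1)
    dt = 0.4 * rho * c * dx * dx / k          # well inside the stability limit
    T = np.full(nx, 37.0)
    q = rho * np.asarray(sar, dtype=float)    # volumetric heating from SAR, W/m^3
    for _ in range(int(t_end / dt)):
        lap = (T[2:] - 2.0 * T[1:-1] + T[:-2]) / (dx * dx)
        T[1:-1] += dt / (rho * c) * (k * lap + w_cb * (t_art - T[1:-1]) + q[1:-1])
        T[0] = T[-1] = 37.0                   # fixed-temperature boundaries
    return T
```

    The explicit scheme requires the time step to respect the diffusion stability limit dt <= rho*c*dx^2/(2k); the SAR profile feeding q would come from the FEM field solution in the actual workflow.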

  2. Status and Preliminary Evaluation for Chinese Re-Analysis Datasets

    NASA Astrophysics Data System (ADS)

    bin, zhao; chunxiang, shi; tianbao, zhao; dong, si; jingwei, liu

    2016-04-01

    Based on the operational T639L60 spectral model combined with the Hybrid_GSI assimilation system, and using meteorological observations including radiosondes, buoys, satellites, etc., a set of Chinese Re-Analysis (CRA) datasets is being developed by the National Meteorological Information Center (NMIC) of the China Meteorological Administration (CMA). The datasets are run at 30 km (0.28° latitude/longitude) resolution, higher than most existing reanalysis datasets. The reanalysis is undertaken to enhance the accuracy of historical synoptic analysis and to aid detailed investigation of various weather and climate systems. The reanalysis is currently at the stage of preliminary experimental analysis. One year of forecast data, from June 2013 to May 2014, has been simulated and used in synoptic and climate evaluations. We first examine the model's prediction ability with the new assimilation system and find significant improvement in the Northern and Southern hemispheres: owing to the addition of new satellite data, and compared with the operational T639L60 model, upper-level prediction is improved markedly and overall prediction stability is enhanced. In the climatological analysis, compared with the ERA-40, NCEP/NCAR, and NCEP/DOE reanalyses, the results show that surface temperature is simulated slightly low over land and slightly high over the ocean, 850-hPa specific humidity reflects a weakened anomaly, and the zonal wind anomaly is concentrated in the equatorial tropics. Meanwhile, the reanalysis dataset shows good ability for various climate indices, such as the subtropical high index and the ESMI (East-Asia subtropical Summer Monsoon Index), especially the Indian and western North Pacific monsoon indices. Later, we will further improve the assimilation system and dynamical simulation performance and produce a 40-year (1979-2018) reanalysis dataset, providing a more comprehensive analysis for synoptic and climate diagnosis.

  3. Modeling and analysis of the solar concentrator in photovoltaic systems

    NASA Astrophysics Data System (ADS)

    Mroczka, Janusz; Plachta, Kamil

    2015-06-01

    The paper presents a Λ-ridge and V-trough concentrator system with a low concentration ratio. Calculations and simulations were made in a program created by the authors. The simulation results allow choosing the best parameters of the photovoltaic system: the opening angle between the surface of the photovoltaic module and the mirrors, the resolution of the tracking system, and the material for the concentrator mirrors. The research shows the effect of each of these parameters on the efficiency of the photovoltaic system, together with a method of surface modeling using the BRDF function. The parameters of the concentrator surface (e.g., surface roughness) were calculated using a new algorithm based on the BRDF function; the algorithm uses a combination of the Torrance-Sparrow and HTSG models. The simulation shows the change in voltage, current, and output power depending on the system parameters.
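
    The Torrance-Sparrow specular term at the core of such a microfacet surface model can be sketched as follows (simplified with a unit geometry term and a constant Fresnel factor; this is a generic illustration, not the paper's exact Torrance-Sparrow/HTSG combination):

```python
import math

def beckmann_d(cos_h, m):
    """Beckmann microfacet distribution for half-angle cosine cos_h
    and RMS slope (roughness) m."""
    c2 = cos_h * cos_h
    t2 = (1.0 - c2) / c2                      # tan^2 of the half angle
    return math.exp(-t2 / (m * m)) / (math.pi * m * m * c2 * c2)

def torrance_sparrow(cos_i, cos_o, cos_h, m, f0=0.9):
    """Torrance-Sparrow specular BRDF f = F*D*G / (4 cos_i cos_o),
    here with G = 1 and constant Fresnel f0 for simplicity."""
    return f0 * beckmann_d(cos_h, m) / (4.0 * cos_i * cos_o)
```

    For mirror design, the roughness parameter m is what links a measured surface finish to the off-specular spread of reflected sunlight, and hence to the concentrator's optical efficiency.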

  4. Analysis of Numerical Simulation Database for Pressure Fluctuations Induced by High-Speed Turbulent Boundary Layers

    NASA Technical Reports Server (NTRS)

    Duan, Lian; Choudhari, Meelan M.

    2014-01-01

    Direct numerical simulations (DNS) of a turbulent boundary layer with a nominal freestream Mach number of 6 and Reynolds number Re_tau of approximately 460 are conducted at two wall temperatures (Tw/Tr = 0.25, 0.76) to investigate the generated pressure fluctuations and their dependence on wall temperature. The simulations indicate that the influence of wall temperature on pressure fluctuations is largely limited to the near-wall region, with the characteristics of wall-pressure fluctuations showing a strong temperature dependence. Wall temperature has little influence on the propagation speed of the freestream pressure signal. The freestream radiation intensity compares well between the wall-temperature cases when normalized by the local wall shear; the propagation speed of the freestream pressure signal and the orientation of the radiation wave front show little dependence on the wall temperature.

  5. Investigation of Hydrogen Embrittlement Susceptibility of X80 Weld Joints by Thermal Simulation

    NASA Astrophysics Data System (ADS)

    Peng, Huangtao; An, Teng; Zheng, Shuqi; Luo, Bingwei; Wang, Siyu; Zhang, Shuai

    2018-05-01

    The objective of this study was to investigate the hydrogen embrittlement (HE) susceptibility of X80 weld joints and the mechanisms that influence it. Slow strain rate testing (SSRT) under in situ H-charging, combined with microstructure and fracture analysis, was performed on the base metal (BM), weld metal (WM), and thermally simulated fine-grained and coarse-grained heat-affected zones (FGHAZ and CGHAZ). Results showed that the WM and the simulated HAZs exhibited a greater degree of high local strain than the BM; compared to the CGHAZ, the FGHAZ had lower microhardness and a more uniformly distributed stress. SSRT results showed that the weld joint is highly sensitive to HE, with the HE index decreasing in the following sequence: FGHAZ, WM, CGHAZ, and BM. The effect of the microstructure on HE was mainly reflected in the local stress distribution and microhardness.

  6. Monte-Carlo simulation of defect-cluster nucleation in metals during irradiation

    NASA Astrophysics Data System (ADS)

    Nakasuji, Toshiki; Morishita, Kazunori; Ruan, Xiaoyong

    2017-02-01

    A multiscale modeling approach was applied to investigate the nucleation process of CRPs (copper-rich precipitates, i.e., copper-vacancy clusters) in α-Fe containing 1 at.% Cu during irradiation. Monte-Carlo simulations were performed to investigate the nucleation process, with rate-theory equation analysis to evaluate the concentration of displacement defects, along with the molecular dynamics technique to establish CRP thermal stabilities in advance. Our MC simulations showed that there is a long incubation period at first, followed by rapid growth of CRPs. The incubation period depends on irradiation conditions such as the damage rate and temperature. The CRP composition varies with time during nucleation: the copper content of CRPs is relatively high at first and then decreases as the precipitate size increases. A widely accepted model of the CRP nucleation process is finally proposed.

  7. Effects of geometry on blast-induced loadings

    NASA Astrophysics Data System (ADS)

    Moore, Christopher Dyer

    Simulations of blasts in an urban environment were performed and analyzed using Loci/BLAST, a full-featured fluid dynamics simulation code. A two-structure urban blast case was used to perform a mesh refinement study. Results show that the mesh spacing on and around the structure must be 12.5 cm or less to resolve fluid dynamic features sufficiently to yield accurate results. The effects of confinement were illustrated by analyzing a blast initiated from the same location with and without the presence of a neighboring structure. Analysis of extreme pressures and impulses on the structures showed that confinement can increase blast loading by more than 200 percent.

  8. Sequence-dependent DNA deformability studied using molecular dynamics simulations.

    PubMed

    Fujii, Satoshi; Kono, Hidetoshi; Takenaka, Shigeori; Go, Nobuhiro; Sarai, Akinori

    2007-01-01

    Proteins recognize specific DNA sequences not only through direct contact between amino acids and bases, but also indirectly based on the sequence-dependent conformation and deformability of the DNA (indirect readout). We used molecular dynamics simulations to analyze the sequence-dependent DNA conformations of all 136 possible tetrameric sequences sandwiched between CGCG sequences. The deformability of dimeric steps obtained from the simulations is consistent with that from the crystal structures. The simulation results further showed that the conformation and deformability of the tetramers can depend strongly on the flanking base pairs. The xATx tetramers show the most rigid conformations and are not affected by the flanking base pairs, whereas the xYRx tetramers show the greatest flexibility and change their conformations depending on the base pairs at both ends, suggesting that tetramers with the same central dimer can show different deformabilities. These results suggest that analysis of dimeric steps alone may overlook some conformational features of DNA, and they provide insight into the mechanism of indirect readout during protein-DNA recognition. Moreover, the sequence dependence of DNA conformation and deformability may be used to estimate the contribution of indirect readout to the specificity of protein-DNA recognition, as well as to nucleosome positioning and the large-scale behavior of nucleic acids.
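
    The standard harmonic analysis behind such deformability estimates extracts a stiffness matrix from the fluctuations of the base-pair step parameters along the trajectory; a minimal sketch (the standard approach, not the authors' exact pipeline):

```python
import numpy as np

def step_stiffness(step_params, kBT=1.0):
    """Harmonic stiffness (deformability) matrix: kBT times the inverse
    covariance of step parameters (e.g. twist, roll, tilt, shift, slide, rise)
    sampled from an MD trajectory. Soft (deformable) modes have large
    fluctuations and hence small stiffness."""
    X = np.asarray(step_params, dtype=float)   # samples x parameters
    return kBT * np.linalg.inv(np.cov(X, rowvar=False))
```

    Comparing these stiffness matrices across tetramers, rather than across dimers alone, is what reveals the flanking-base-pair dependence reported above.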

  9. Simulation of ground-water flow in the Saginaw Aquifer, Clinton, Eaton, and Ingham counties, Michigan

    USGS Publications Warehouse

    Holtschlag, David J.; Luukkonen, Carol L.; Nicholas, J.R.

    1996-01-01

    A numerical model was developed to simulate ground-water flow in the Tri-County region, which consists of Clinton, Eaton, and Ingham Counties, Michigan. This region includes a nine-township area surrounding Lansing, Michigan. The model simulates the regional response of the Saginaw aquifer to major groundwater withdrawals associated with public-supply wells. The Saginaw aquifer, which is in the Grand River and Saginaw Formations of Pennsylvanian age, is the primary source of ground water for Tri-County residents. The Saginaw aquifer is overlain by glacial deposits, which also are important ground-water sources in some locations. Flow in the Saginaw aquifer and the glacial deposits is simulated by discretizing the flow system into model cells arranged in two layers. Each cell, which corresponds to a land area of 0.0625 square mile, represents the locally averaged properties of the system. The spatial variation of hydraulic properties controlling ground-water flow was estimated by geostatistical analysis of 4,947 well logs. Parameter estimation, a form of nonlinear regression, was used to calibrate the flow model. Results of steady-state ground-water-flow simulations show close agreement between water flowing into and out of the model area for 1992 pumping conditions; standard error of the difference between simulated and measured heads is 14.7 feet. Simulation results for three alternative pumping scenarios for the year 2020 show that the glacial aquifer could be dewatered in places if hypothetical increases in pumping are not distributed throughout the Tri-County region. Contributing areas to public-supply wells in the nine-township area were delineated by a particle-tracking analysis. These areas cover about 121 square miles. Contributing areas for particles having travel times of 40 years or less cover about 42 square miles. Results of tritium sampling support results of model simulations to delineate contributing areas.
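
    The contributing areas are delineated by tracing particles through the simulated velocity field and recording their travel times; a minimal forward-Euler sketch of pathline tracking (illustrative only; production codes such as MODPATH use semi-analytical cell-by-cell tracking):

```python
def track_particle(x0, y0, vx, vy, dt=1.0, n_steps=1000):
    """Forward-Euler pathline in a steady 2-D velocity field given by
    callables vx(x, y) and vy(x, y); returns the list of visited points."""
    path = [(x0, y0)]
    x, y = x0, y0
    for _ in range(n_steps):
        x += dt * vx(x, y)
        y += dt * vy(x, y)
        path.append((x, y))
    return path
```

    Run backward from a well (negate the velocities), the set of starting points whose paths reach the well within, say, 40 years of travel time maps out the 40-year contributing area described above.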

  10. Warm winter, thin ice?

    NASA Astrophysics Data System (ADS)

    Stroeve, Julienne C.; Schroder, David; Tsamados, Michel; Feltham, Daniel

    2018-05-01

    Winter 2016/2017 saw record warmth over the Arctic Ocean, leading to the fewest freezing degree days north of 70° N since at least 1979. The impact of this warmth was evaluated using model simulations from the Los Alamos sea ice model (CICE) and CryoSat-2 thickness estimates from three different data providers. While the CICE simulations show a broad region of anomalously thin ice in April 2017 relative to the 2011-2017 mean, analysis of the three CryoSat-2 products shows more limited regions of thin ice, and the products do not always agree with each other in either the magnitude or the direction of the thickness anomalies. CICE is further used to diagnose the feedback processes driving the observed anomalies, showing 11-13 cm less thermodynamic ice growth over the Arctic domain used in this study compared to the 2011-2017 mean, and dynamical contributions of +1 to +4 cm. Finally, CICE model simulations from 1985 to 2017 indicate the negative feedback relationship between ice growth and winter air temperatures may be starting to weaken, showing decreased winter ice growth since 2012 as winter air temperatures have increased and the freeze-up has been further delayed.
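
    Freezing degree days, the metric cited above, are simply degrees below freezing accumulated over the season, and empirical relations such as Lebedev's convert them into an estimate of thermodynamic ice growth. A sketch (the Lebedev coefficients are the commonly quoted ones, not values from this study, and are a crude stand-in for CICE's full thermodynamics):

```python
def freezing_degree_days(daily_mean_c, base=0.0):
    """Accumulated freezing degree days (FDD): degrees below the freezing
    point, summed over all days; above-freezing days contribute nothing."""
    return sum(max(base - t, 0.0) for t in daily_mean_c)

def lebedev_ice_growth_cm(fdd):
    """Empirical Lebedev relation h = 1.33 * FDD**0.58 (h in cm,
    FDD in degC-days) for seasonal thermodynamic ice growth."""
    return 1.33 * fdd ** 0.58
```

    The sub-linear exponent is the negative feedback mentioned above: thin ice grows quickly and thick ice slowly, which is why a warm winter with few FDD cuts growth less than proportionally, and why a weakening of this relationship is notable.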

  11. Comparative analysis of the fit of 3-unit implant-supported frameworks cast in nickel-chromium and cobalt-chromium alloys and commercially pure titanium after casting, laser welding, and simulated porcelain firings.

    PubMed

    Tiossi, Rodrigo; Rodrigues, Renata Cristina Silveira; de Mattos, Maria da Glória Chiarello; Ribeiro, Ricardo Faria

    2008-01-01

This study compared the vertical misfit of 3-unit implant-supported nickel-chromium (Ni-Cr) and cobalt-chromium (Co-Cr) alloy and commercially pure titanium (cpTi) frameworks after casting as 1 piece, after sectioning and laser welding, and after simulated porcelain firings. The results on the tightened side showed no statistically significant differences. On the opposite side, statistically significant differences were found for Co-Cr alloy (118.64 microm [SD: 91.48] to 39.90 microm [SD: 27.13]) and cpTi (118.56 microm [SD: 51.35] to 27.87 microm [SD: 12.71]) when comparing 1-piece to laser-welded frameworks. With both sides tightened, only Co-Cr alloy showed statistically significant differences after laser welding. Ni-Cr alloy showed the lowest misfit values, though the differences were not statistically significant. Simulated porcelain firings revealed no significant differences.

  12. Modeling competitive substitution in a polyelectrolyte complex

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peng, B.; Muthukumar, M., E-mail: muthu@polysci.umass.edu

    2015-12-28

We have simulated the invasion of a polyelectrolyte complex, made of a polycation chain and a polyanion chain, by another, longer polyanion chain, using the coarse-grained united atom model for the chains and the Langevin dynamics methodology. Our simulations reveal many intricate details of the substitution reaction in terms of conformational changes of the chains and competition between the invading chain and the chain being displaced for the common complementary chain. We show that the invading chain must be sufficiently longer than the chain being displaced to effect the substitution. Yet making the invading chain longer than a certain threshold value does not reduce the substitution time much further. While most of the simulations were carried out in salt-free conditions, we show that the presence of salt facilitates the substitution reaction and reduces the substitution time. Analysis of our data shows that the dominant driving force for the substitution process involving polyelectrolytes lies in the release of counterions during the substitution.
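The Langevin dynamics methodology can be illustrated with a single overdamped bead in a harmonic well; this is a minimal sketch in arbitrary reduced units, not the authors' united-atom chain model.

```python
import math, random

# Overdamped Langevin dynamics for one coarse-grained bead in a harmonic
# well, integrated with an Euler-Maruyama step:
#   x += -(k/gamma) * x * dt + sqrt(2*kT*dt/gamma) * N(0, 1)
# Parameters are toy reduced units, not fitted to any polyelectrolyte.

def langevin_trajectory(n_steps, dt=1e-3, k=1.0, gamma=1.0, kT=1.0, seed=1):
    rng = random.Random(seed)
    x, traj = 0.0, []
    for _ in range(n_steps):
        x += -(k / gamma) * x * dt \
             + math.sqrt(2.0 * kT * dt / gamma) * rng.gauss(0.0, 1.0)
        traj.append(x)
    return traj

traj = langevin_trajectory(500000)
var = sum(x * x for x in traj) / len(traj)   # should approach kT/k = 1
```

A long trajectory sampling the equipartition variance kT/k is a standard sanity check that the thermostat and time step are consistent.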

  13. Simulating 3D Spacecraft Constellations for Low Frequency Radio Imaging

    NASA Astrophysics Data System (ADS)

    Hegedus, A. M.; Amiri, N.; Lazio, J.; Belov, K.; Kasper, J. C.

    2016-12-01

Constellations of small spacecraft could be used to realize a low-frequency phased array for either heliophysics or astrophysics observations. However, there are issues that arise with an orbiting array that do not occur on the ground, thus rendering much of the existing radio astronomy software inadequate for data analysis and simulation. In this work we address these issues and consider the performance of two constellation concepts. The first is a 32-spacecraft constellation for astrophysical observations, and the second is a 5-element concept for pointing to the location of radio emission from coronal mass ejections (CMEs). For the first, we fill the software gap by extending the APSYNSIM software to simulate aperture synthesis for a radio interferometer in orbit. This involves using the dynamic baselines from the relative motion of the individual spacecraft as well as the capability to add galactic noise. The ability to simulate phase errors corresponding to positional uncertainty of the antennas was also added. The upgraded software was then used to model the imaging of a 32-spacecraft constellation that would orbit the Moon to image radio galaxies like Cygnus A at 0.3-30 MHz. We produced animated images showing the improvement of the dirty image as the orbits progressed, along with RMSE plots quantifying how well the dirty image matches the input image as a function of integration time. For the second concept we performed radio interferometric simulations of the Sun Radio Interferometer Space Experiment (SunRISE) using the Common Astronomy Software Applications (CASA) package. SunRISE is a five-spacecraft phased array that would orbit Earth to localize the low-frequency radio emission from CMEs. This involved simulating the array in CASA, creating truth images for the CMEs over the entire frequency band of SunRISE, and observing them with the simulated array to see how well it could localize the true position of the CME. The results of our analysis show that we can localize the radio emission originating from the head or flanks of the CMEs in spite of the phase errors introduced by uncertainties in orbit and clock estimation.
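The aperture-synthesis bookkeeping can be sketched for a small constellation. Assuming hypothetical projected antenna positions (in km), the snippet enumerates the instantaneous baselines of a 5-element array like the SunRISE concept and defines the pixel-wise RMSE used to compare a dirty image against a truth image.

```python
import itertools, math

# A 5-element array gives n*(n-1)/2 instantaneous baselines; orbital motion
# rotates and stretches them, filling the uv plane over an integration.
# Positions and images here are hypothetical, flattened to 2D for brevity.

def baselines(positions_km):
    return [(bx - ax, by - ay)
            for (ax, ay), (bx, by) in itertools.combinations(positions_km, 2)]

def rmse(image_a, image_b):
    # pixel-wise root-mean-square error between dirty and truth images
    n = len(image_a)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(image_a, image_b)) / n)

uv = baselines([(0.0, 0.0), (3.0, 1.0), (5.0, 4.0), (1.0, 6.0), (7.0, 2.0)])
```

Recomputing `baselines` at each time step and accumulating the samples is the "dynamic baselines" idea; RMSE versus integration time then tracks how the dirty image converges toward the input.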

  14. Cost-effectiveness analysis of fecal microbiota transplantation for recurrent Clostridium difficile infection.

    PubMed

    Varier, Raghu U; Biltaji, Eman; Smith, Kenneth J; Roberts, Mark S; Kyle Jensen, M; LaFleur, Joanne; Nelson, Richard E

    2015-04-01

Clostridium difficile infection (CDI) places a high burden on the US healthcare system. Recurrent CDI (RCDI) occurs frequently. Recently proposed guidelines from the American College of Gastroenterology (ACG) and the American Gastroenterology Association (AGA) include fecal microbiota transplantation (FMT) as a therapeutic option for RCDI. The purpose of this study was to estimate the cost-effectiveness of FMT compared with vancomycin for the treatment of RCDI in adults, specifically following guidelines proposed by the ACG and AGA. We constructed a decision-analytic computer simulation using inputs from the published literature to compare the standard approach using tapered vancomycin to FMT for RCDI from the third-party payer perspective. Our effectiveness measure was quality-adjusted life years (QALYs). Because simulated patients were followed for 90 days, discounting was not necessary. One-way and probabilistic sensitivity analyses were performed. Base-case analysis showed that FMT was less costly ($1,669 vs $3,788) and more effective (0.242 QALYs vs 0.235 QALYs) than vancomycin for RCDI. One-way sensitivity analyses showed that FMT was the dominant strategy (both less expensive and more effective) if cure rates for FMT and vancomycin were ≥70% and <91%, respectively, and if the cost of FMT was <$3,206. Probabilistic sensitivity analysis, varying all parameters simultaneously, showed that FMT was the dominant strategy over 10,000 second-order Monte Carlo simulations. Our results suggest that FMT may be a cost-saving intervention in managing RCDI. Implementation of FMT for RCDI may help decrease the economic burden to the healthcare system.
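The dominance logic of such a base-case analysis can be sketched directly with the numbers reported above (costs in USD, effectiveness in QALYs over the 90-day horizon). This is a simplified check, not the study's full decision-analytic model.

```python
# Strategy comparison: "A dominates" when A is no more costly and no less
# effective than B; otherwise report the incremental cost-effectiveness
# ratio (ICER). Simplified sketch of standard cost-effectiveness logic.

def compare(cost_a, qaly_a, cost_b, qaly_b):
    if cost_a <= cost_b and qaly_a >= qaly_b:
        return "A dominates"
    if cost_b <= cost_a and qaly_b >= qaly_a:
        return "B dominates"
    return (cost_a - cost_b) / (qaly_a - qaly_b)   # cost per QALY gained

result = compare(1669, 0.242, 3788, 0.235)   # A = FMT, B = vancomycin
```

With the base-case inputs, FMT is both cheaper and more effective, so no ICER is needed; the probabilistic analysis repeats this comparison across sampled parameter sets.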

  15. Simulation of patient flow in multiple healthcare units using process and data mining techniques for model identification.

    PubMed

    Kovalchuk, Sergey V; Funkner, Anastasia A; Metsker, Oleg G; Yakovlev, Aleksey N

    2018-06-01

An approach to building a hybrid simulation of patient flow is introduced with a combination of data-driven methods for automation of model identification. The approach is described with a conceptual framework and basic methods for combination of different techniques. The implementation of the proposed approach for simulation of the acute coronary syndrome (ACS) was developed and used in an experimental study. A combination of data, text, and process mining techniques and machine learning approaches for the analysis of electronic health records (EHRs) with discrete-event simulation (DES) and queueing theory for the simulation of patient flow was proposed. The performed analysis of EHRs for ACS patients enabled identification of several classes of clinical pathways (CPs) which were used to implement a more realistic simulation of the patient flow. The developed solution was implemented using Python libraries (SimPy, SciPy, and others). The proposed approach enables a more realistic and detailed simulation of the patient flow within a group of related departments. An experimental study shows an improved simulation of patient length of stay for ACS patient flow obtained from EHRs in Almazov National Medical Research Centre in Saint Petersburg, Russia. The proposed approach, methods, and solutions provide a conceptual, methodological, and programming framework for the implementation of a simulation of complex and diverse scenarios within a flow of patients for different purposes: decision making, training, management optimization, and others. Copyright © 2018 Elsevier Inc. All rights reserved.
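The abstract mentions SimPy and queueing theory; a dependency-free analogue of a single department is an M/M/c queue, sketched below with hypothetical rates and bed count rather than the study's EHR-derived pathways.

```python
import heapq, random

# One department as an M/M/c queue: Poisson arrivals, exponential service,
# c parallel beds, first-come first-served. A toy stand-in for the
# multi-department hybrid model; all parameters are hypothetical.

def mean_length_of_stay(arrival_rate, service_rate, c, n_patients, seed=7):
    rng = random.Random(seed)
    t = 0.0
    free_at = [0.0] * c            # time at which each bed next becomes free
    heapq.heapify(free_at)
    stays = []
    for _ in range(n_patients):
        t += rng.expovariate(arrival_rate)        # next arrival time
        start = max(t, heapq.heappop(free_at))    # wait for the earliest bed
        departure = start + rng.expovariate(service_rate)
        heapq.heappush(free_at, departure)
        stays.append(departure - t)               # waiting + service time
    return sum(stays) / len(stays)

mean_los = mean_length_of_stay(arrival_rate=2.0, service_rate=1.0, c=3,
                               n_patients=5000)
```

For these rates the Erlang C formula gives a mean stay of roughly 1.4 time units, which the simulated average approaches as the patient count grows; comparing such simulated length-of-stay distributions against EHR data is the calibration step the study automates.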

  16. Crack Detection with Lamb Wave Wavenumber Analysis

    NASA Technical Reports Server (NTRS)

    Tian, Zhenhua; Leckey, Cara; Rogge, Matt; Yu, Lingyu

    2013-01-01

In this work, we present our study of Lamb wave crack detection using wavenumber analysis. The aim is to demonstrate the application of wavenumber analysis to 3D Lamb wave data to enable damage detection. The 3D wavefields (including vx, vy and vz components) in the time-space domain contain a wealth of information regarding the propagating waves in a damaged plate. For crack detection, three wavenumber analysis techniques are used: (i) a two-dimensional Fourier transform (2D-FT), which transforms the time-space wavefield into a frequency-wavenumber representation at the cost of the spatial information; (ii) a short-space 2D-FT, which obtains the frequency-wavenumber spectra at various spatial locations, resulting in a space-frequency-wavenumber representation; and (iii) local wavenumber analysis, which provides the distribution of effective wavenumbers at different locations. All of these concepts are demonstrated through a numerical simulation example of an aluminum plate with a crack. The 3D elastodynamic finite integration technique (EFIT) was used to obtain the 3D wavefields, of which the vz (out-of-plane) wave component is compared with the experimental measurement obtained from a scanning laser Doppler vibrometer (SLDV) for verification purposes. The experimental and simulated results are found to be in close agreement. The application of wavenumber analysis to 3D EFIT simulation data shows the effectiveness of the analysis for crack detection. Keywords: Lamb wave, crack detection, wavenumber analysis, EFIT modeling
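The 2D-FT idea can be shown on a toy wavefield: a single plane wave concentrates its energy in one (frequency, wavenumber) bin pair. This pure-Python DFT on a tiny grid is illustrative only; real wavefield data would use FFTs.

```python
import math, cmath

# Toy time-space wavefield u(t, x) = cos(2*pi*(f0*t/Nt - k0*x/Nx)) and a
# direct 2D discrete Fourier transform of it. The energy of a real plane
# wave lands in the bin (f0, -k0 mod Nx) and its conjugate.

Nt, Nx, f0, k0 = 16, 16, 3, 2
u = [[math.cos(2 * math.pi * (f0 * t / Nt - k0 * x / Nx)) for x in range(Nx)]
     for t in range(Nt)]

def dft2(field):
    nt, nx = len(field), len(field[0])
    return [[sum(field[t][x] * cmath.exp(-2j * math.pi * (p * t / nt + q * x / nx))
                 for t in range(nt) for x in range(nx))
             for q in range(nx)] for p in range(nt)]

U = dft2(u)
peak = max(((p, q) for p in range(Nt) for q in range(Nx)),
           key=lambda pq: abs(U[pq[0]][pq[1]]))
```

Windowing the same transform over small spatial patches gives the short-space 2D-FT, and tracking the dominant wavenumber per patch gives the local wavenumber map used for crack detection.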

  17. Comparison of Two Global Sensitivity Analysis Methods for Hydrologic Modeling over the Columbia River Basin

    NASA Astrophysics Data System (ADS)

    Hameed, M.; Demirel, M. C.; Moradkhani, H.

    2015-12-01

The Global Sensitivity Analysis (GSA) approach helps identify the influence of model parameters or inputs and thus provides essential information about model performance. In this study, the effects of the Sacramento Soil Moisture Accounting (SAC-SMA) model parameters, forcing data, and initial conditions are analysed by using two GSA methods: Sobol' and the Fourier Amplitude Sensitivity Test (FAST). The simulations are carried out over five sub-basins within the Columbia River Basin (CRB) for three different periods: one-year, four-year, and seven-year. Four factors are considered and evaluated by using the two sensitivity analysis methods: the simulation length, parameter range, model initial conditions, and the reliability of the global sensitivity analysis methods. The reliability of the sensitivity analysis results is compared based on 1) the agreement between the two sensitivity analysis methods (Sobol' and FAST) in highlighting the same parameters or inputs as the most influential and 2) how closely the methods cohere in ranking these sensitive parameters under the same conditions (sub-basins and simulation length). The results show coherence between the Sobol' and FAST sensitivity analysis methods. Additionally, it is found that the FAST method is sufficient to evaluate the main effects of the model parameters and inputs. Another conclusion of this study is that the smaller the parameter or initial-condition ranges, the more consistent and coherent the results of the two sensitivity analysis methods.
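A first-order Sobol' index can be estimated with a pick-freeze Monte Carlo scheme. The sketch below uses a toy additive model with a known answer; it is illustrative only, since the study applied GSA to the SAC-SMA hydrologic model.

```python
import random

# Pick-freeze estimate of the Sobol' first-order index S1 for
# y = 2*x1 + x2 with x1, x2 ~ U(0, 1). Analytically, S1 for x1 is
# Var(2*x1)/Var(y) = (4/12)/(5/12) = 0.8.

def model(x1, x2):
    return 2.0 * x1 + x2

def sobol_s1(n, seed=3):
    rng = random.Random(seed)
    ya, yb = [], []
    for _ in range(n):
        x1 = rng.random()
        ya.append(model(x1, rng.random()))   # independent x2
        yb.append(model(x1, rng.random()))   # x2 resampled, x1 "frozen"
    mu_a, mu_b = sum(ya) / n, sum(yb) / n
    var = sum((y - mu_a) ** 2 for y in ya) / n
    cov = sum(a * b for a, b in zip(ya, yb)) / n - mu_a * mu_b
    return cov / var                         # Cov(Ya, Yb) / Var(Y)

s1 = sobol_s1(50000)
```

FAST reaches the same first-order indices through a frequency-domain decomposition rather than resampling, which is why agreement between the two methods is a useful reliability check.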

  18. An operational wave forecasting system for the east coast of India

    NASA Astrophysics Data System (ADS)

    Sandhya, K. G.; Murty, P. L. N.; Deshmukh, Aditya N.; Balakrishnan Nair, T. M.; Shenoi, S. S. C.

    2018-03-01

Demand for operational ocean state forecasting is increasing, owing to the ever-increasing marine activities in the context of the blue economy. In the present study, an operational wave forecasting system for the east coast of India is proposed using the unstructured Simulating WAves Nearshore model (UNSWAN). This modelling system uses a very high resolution mesh near the Indian east coast and coarse resolution offshore, and thus avoids the necessity of nesting with a global wave model. The model is forced with European Centre for Medium-Range Weather Forecasts (ECMWF) winds and simulates wave parameters and wave spectra for the next 3 days. Spatial maps of satellite data overlaid on the simulated wave height show that the model is capable of simulating the significant wave heights and their gradients realistically. Spectral validation has been done using the available data to demonstrate the reliability of the model. To further evaluate the model performance, the wave forecast for the entire year 2014 is evaluated against buoy measurements over the region at four waverider buoy locations. Seasonal analysis of significant wave height (Hs) at the four locations showed that the correlation between the modelled and observed values was the highest (in the range 0.78-0.96) during the post-monsoon season. The variability of Hs was also the highest during this season at all locations. The error statistics showed clear seasonal and geographical-location dependence. The root mean square error at Visakhapatnam was the same (0.25 m) for all seasons, but it was the smallest for the pre-monsoon season (0.12 m and 0.17 m) for Puducherry and Gopalpur. The wind sea component showed higher variability than the corresponding swell component at all locations and for all seasons. The model captured this variability to a reasonable level in most cases. The results of statistical analysis show that the modelling system is suitable for use in the operational scenario.
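The skill metrics quoted above (Pearson correlation and root-mean-square error between modelled and buoy-observed significant wave height) are straightforward to compute; the sample values below are hypothetical, not buoy data.

```python
import math

# Pearson correlation and RMSE for modelled vs observed Hs (metres).
# Hypothetical sample values, for illustration only.

def skill(model_hs, obs_hs):
    n = len(model_hs)
    mm, mo = sum(model_hs) / n, sum(obs_hs) / n
    cov = sum((a - mm) * (b - mo) for a, b in zip(model_hs, obs_hs)) / n
    sm = math.sqrt(sum((a - mm) ** 2 for a in model_hs) / n)
    so = math.sqrt(sum((b - mo) ** 2 for b in obs_hs) / n)
    rmse = math.sqrt(sum((a - b) ** 2 for a, b in zip(model_hs, obs_hs)) / n)
    return cov / (sm * so), rmse

r, rmse = skill([1.1, 1.8, 2.4, 0.9], [1.0, 2.0, 2.5, 0.8])
```

Stratifying such statistics by season and by buoy location is what reveals the seasonal and geographical dependence reported in the study.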

  19. Numerical simulation of a rare winter hailstorm event over Delhi, India on 17 January 2013

    NASA Astrophysics Data System (ADS)

    Chevuturi, A.; Dimri, A. P.; Gunturu, U. B.

    2014-12-01

This study analyzes the cause of the rare occurrence of a winter hailstorm over New Delhi/NCR (National Capital Region), India. In the absence of increased surface temperature or low-level moisture incursion during winter, the deep convection required to sustain a hailstorm cannot be generated. Consequently, NCR sees very few hailstorms in the months of December-January-February, making winter hail formation a question of interest. For this study, a recent winter hailstorm event on 17 January 2013 (16:00-18:00 UTC) occurring over NCR is investigated. The storm is simulated using the Weather Research and Forecasting (WRF) model with the Goddard Cumulus Ensemble (GCE) microphysics scheme with two different options: hail and graupel. The aim of the study is to understand and describe the cause of the hailstorm event over NCR with a comparative analysis of the two options of GCE microphysics. Upon evaluating the model simulations, it is observed that the hail option reproduces the precipitation intensity of the Tropical Rainfall Measuring Mission (TRMM) observation more closely than the graupel option does, and it is able to simulate hail precipitation. Using the model-simulated output with the hail option, a detailed investigation of the dynamics of the hailstorm is performed. The analysis based on the numerical simulation suggests that deep instability in the atmospheric column led to the formation of hailstones as the cloud formation reached up to the glaciated zone, promoting ice nucleation. In winters, such instability conditions rarely form, owing to low-level available potential energy and moisture incursion along with upper-level baroclinic instability due to the presence of a western disturbance (WD). Such rare positioning is found to lower the tropopause with an increased temperature gradient, leading to winter hailstorm formation.

  20. Numerical simulation of a winter hailstorm event over Delhi, India on 17 January 2013

    NASA Astrophysics Data System (ADS)

    Chevuturi, A.; Dimri, A. P.; Gunturu, U. B.

    2014-09-01

This study analyzes the cause of the rare occurrence of a winter hailstorm over New Delhi/NCR (National Capital Region), India. In the absence of increased surface temperature or low-level moisture incursion during winter, the deep convection required to sustain a hailstorm cannot be generated. Consequently, NCR sees very few hailstorms in the months of December-January-February, making winter hail formation a question of interest. For this study, a recent winter hailstorm event on 17 January 2013 (16:00-18:00 UTC) occurring over NCR is investigated. The storm is simulated using the Weather Research and Forecasting (WRF) model with the Goddard Cumulus Ensemble (GCE) microphysics scheme with two different options, hail or graupel. The aim of the study is to understand and describe the cause of the hailstorm event over NCR with a comparative analysis of the two options of GCE microphysics. On evaluating the model simulations, it is observed that the hail option reproduces the precipitation intensity of the TRMM observation more closely than the graupel option does and is able to simulate hail precipitation. Using the model-simulated output with the hail option, a detailed investigation of the dynamics of the hailstorm is performed. The analysis based on the numerical simulation suggests that deep instability in the atmospheric column led to the formation of hailstones as the cloud formation reached up to the glaciated zone, promoting ice nucleation. In winters, such instability conditions rarely form, owing to low-level available potential energy and moisture incursion along with upper-level baroclinic instability due to the presence of a western disturbance (WD). Such rare positioning is found to lower the tropopause with an increased temperature gradient, leading to winter hailstorm formation.

  1. "I got it on Ebay!": cost-effective approach to surgical skills laboratories.

    PubMed

    Schneider, Ethan; Schenarts, Paul J; Shostrom, Valerie; Schenarts, Kimberly D; Evans, Charity H

    2017-01-01

Surgical education is witnessing a surge in the use of simulation. However, implementation of simulation is often cost-prohibitive. Online shopping offers a low-budget alternative. The aim of this study was to implement cost-effective skills laboratories and analyze online versus manufacturers' prices to evaluate for savings. Four skills laboratories were designed for the surgery clerkship from July 2014 to June 2015. Skills laboratories were implemented using hand-built simulation and instruments purchased online. Trademarked simulation was priced online and instruments were priced from a manufacturer. Costs were compiled, and a descriptive cost analysis of online and manufacturers' prices was performed. Learners rated their level of satisfaction for all educational activities, and levels of satisfaction were compared. A total of 119 third-year medical students participated. Supply lists and costs were compiled for each laboratory. The descriptive cost analysis showed online prices were substantially lower than manufacturers' prices, with per-laboratory savings of $1779.26 (suturing), $1752.52 (chest tube), $2448.52 (anastomosis), and $1891.64 (laparoscopic), resulting in a year 1 savings of $47,285. Mean student satisfaction scores for the skills laboratories were 4.32, significantly higher than for live lectures at 2.96 (P < 0.05) and small group activities at 3.67 (P < 0.05). A cost-effective approach to implementing skills laboratories showed substantial savings. By using hand-built simulation boxes and online resources to purchase surgical equipment, surgical educators can overcome financial obstacles limiting the use of simulation and provide learning opportunities that medical students perceive as beneficial. Copyright © 2016 Elsevier Inc. All rights reserved.

  2. Electrolytes in a nanometer slab-confinement: Ion-specific structure and solvation forces

    NASA Astrophysics Data System (ADS)

    Kalcher, Immanuel; Schulz, Julius C. F.; Dzubiella, Joachim

    2010-10-01

We study the liquid structure and solvation forces of dense monovalent electrolytes (LiCl, NaCl, CsCl, and NaI) in a nanometer slab-confinement by explicit-water molecular dynamics (MD) simulations, implicit-water Monte Carlo (MC) simulations, and modified Poisson-Boltzmann (PB) theories. In order to consistently coarse-grain and to account for specific hydration effects in the implicit methods, realistic ion-ion and ion-surface pair potentials have been derived from infinite-dilution MD simulations. The electrolyte structure calculated from MC simulations is in good agreement with the corresponding MD simulations, thereby validating the coarse-graining approach. The agreement improves if a realistic, MD-derived dielectric constant is employed, which partially corrects for (water-mediated) many-body effects. Further analysis of the ionic structure and solvation pressure demonstrates that nonlocal extensions to PB (NPB) perform well for a wide parameter range when compared to MC simulations, whereas all local extensions mostly fail. A Barker-Henderson mapping of the ions onto a charged, asymmetric, and nonadditive binary hard-sphere mixture shows that the strength of structural correlations is strongly related to the magnitude and sign of the salt-specific nonadditivity. Furthermore, a grand canonical NPB analysis shows that the Donnan effect is dominated by steric correlations, whereas solvation forces and overcharging effects are mainly governed by ion-surface interactions. However, steric corrections to solvation forces are strongly repulsive for high concentrations and low surface charges, while overcharging can also be triggered by steric interactions in strongly correlated systems. Generally, we find that ion-surface and ion-ion correlations are strongly coupled and that coarse-grained methods should include both, treating the latter nonlocally and nonadditively (as given by our specific ionic diameters), when studying electrolytes in highly inhomogeneous situations.
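The basic length scale behind these ion-ion and ion-surface correlations is the Debye screening length; a quick calculator for a 1:1 aqueous salt at 298 K, using standard SI constants (values here are textbook figures, not the paper's MD-derived parameters):

```python
import math

# Debye length lambda_D = sqrt(eps_r*eps0*kB*T / (2*NA*e^2*c)) for a 1:1
# electrolyte; c converted from mol/L to mol/m^3. Rounded SI constants.

def debye_length_m(conc_molar, eps_r=78.4, temp_k=298.0):
    e, kB, NA, eps0 = 1.602e-19, 1.381e-23, 6.022e23, 8.854e-12
    denom = 2.0 * NA * e ** 2 * conc_molar * 1000.0
    return math.sqrt(eps_r * eps0 * kB * temp_k / denom)

lam = debye_length_m(0.1)   # ~0.96 nm for 0.1 M of a 1:1 salt
```

At the dense, nanometer-confined conditions of the study this mean-field length is comparable to the slab width and the ion size, which is precisely the regime where local PB extensions break down and nonlocal, ion-specific corrections become necessary.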

  3. Perception Modelling of Visitors in Vargas Museum Using Agent-Based Simulation and Visibility Analysis

    NASA Astrophysics Data System (ADS)

    Carcellar, B. G., III

    2017-10-01

Museum exhibit management is one of the usual undertakings of museum facilitators. Artworks must be strategically placed to achieve maximum viewing by visitors, and their positioning strongly influences the quality of the visitors' experience. One solution to such problems is to utilize GIS and Agent-Based Modelling (ABM). In ABM, persistent interacting objects are modelled as agents, which are given attributes and behaviors that describe their properties as well as their motion. In this study, an ABM approach that incorporates GIS is utilized to perform an analytical assessment of the placement of the artworks in the Vargas Museum. GIS serves as the backbone for the spatial aspect of the simulation, such as the placement of the artwork exhibits, as well as possible obstructions to perception such as the columns, walls, and panel boards. Visibility Analysis is also applied to the model in GIS to assess the overall visibility of the artworks. The ABM is done using the initial GIS outputs and GAMA, an open-source ABM software package. Visitors are modelled as agents moving inside the museum following a specific decision tree. The simulation is done in three use cases: a 10 %, 20 %, and 30 % chance of having a visitor in the next minute. For this museum, the 10 % case is determined to be the closest to the actual situation, and the recommended minimum time to achieve maximum artwork perception is 1 hour and 40 minutes. Initial assessment of the results shows that even after 3 hours of simulation, some parts of the exhibit still lack viewers, owing to their distance from the entrance. A more detailed decision tree for the visitor agents can be incorporated to produce a more realistic simulation.
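The visitor-arrival process behind the three use cases is a per-minute Bernoulli trial. A minimal sketch of the 10 % case over a 3-hour window (expected arrivals = 0.10 × 180 = 18):

```python
import random

# Each minute carries an independent p chance that a new visitor agent
# enters; summing over the window counts arrivals. Averaging over many
# simulated days estimates the expected visitor load.

def arrivals(p_per_minute, minutes, rng):
    return sum(1 for _ in range(minutes) if rng.random() < p_per_minute)

rng = random.Random(42)
mean_visitors = sum(arrivals(0.10, 180, rng) for _ in range(2000)) / 2000.0
```

In the full model each arriving agent then walks the museum graph following its decision tree; the arrival process only sets how crowded the simulated galleries become.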

  4. Simulation of mixture microstructures via particle packing models and their direct comparison with real mixtures

    NASA Astrophysics Data System (ADS)

    Gulliver, Eric A.

The objective of this thesis is to identify and develop techniques providing direct comparison between simulated and real packed-particle mixture microstructures containing submicron-sized particles. This entailed devising techniques for simulating powder mixtures, producing real mixtures with known powder characteristics, sectioning real mixtures, interrogating mixture cross-sections, evaluating and quantifying the mixture interrogation process, and comparing interrogation results between mixtures. A drop-and-roll-type particle-packing model was used to generate simulations of random mixtures. The simulated mixtures were then evaluated to establish that they were not segregated and were free from gross defects. A powder processing protocol was established to provide real mixtures for direct comparison and for use in evaluating the simulation. The powder processing protocol was designed to minimize differences between measured particle size distributions and the particle size distributions in the mixture. A sectioning technique was developed that was capable of producing distortion-free cross-sections of fine-scale particulate mixtures. Tessellation analysis was used to interrogate mixture cross-sections, and statistical quality control charts were used to evaluate different types of tessellation analysis and to establish the importance of differences between simulated and real mixtures. The particle-packing program generated crescent-shaped pores below large particles but otherwise realistic-looking mixture microstructures. Focused ion beam milling was the only technique capable of sectioning particle compacts in a manner suitable for stereological analysis. Johnson-Mehl and Voronoi tessellation of the same cross-sections produced tessellation tiles with different tile-area populations.
Control chart analysis showed that Johnson-Mehl tessellation measurements are superior to Voronoi tessellation measurements for detecting variations in mixture microstructure, such as altered particle-size distributions or mixture composition. Control charts based on tessellation measurements were used for direct, quantitative comparisons between real and simulated mixtures. Four sets of simulated and real mixtures were examined. Data from the real mixtures matched the simulated data when the samples were well mixed and the particle size distributions and volume fractions of the components were identical. Analysis of mixture components that occupied less than approximately 10 vol% of the mixture was not practical unless the particle size of the component was extremely small and excellent-quality, high-resolution compositional micrographs of the real sample were available. These methods of analysis should allow future researchers to systematically evaluate and predict the impact and importance of variables such as component volume fraction and component particle size distribution as they pertain to the uniformity of powder mixture microstructures.
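The control-chart comparison reduces to computing Shewhart-style limits (mean ± 3σ) for a tessellation statistic and flagging mixtures whose measurements fall outside them. A minimal sketch with hypothetical tile-area values:

```python
import math

# Shewhart control limits for a tessellation tile-area statistic.
# Sample tile areas are hypothetical, in arbitrary units.

def control_limits(samples):
    n = len(samples)
    mean = sum(samples) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in samples) / (n - 1))
    return mean - 3.0 * sd, mean, mean + 3.0 * sd

lo, center, hi = control_limits([4.1, 3.9, 4.0, 4.2, 3.8])
```

Limits fitted on simulated-mixture tile areas give a reference band; real-mixture measurements drifting outside it signal altered particle-size distributions or composition, which is the detection mechanism described above.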

  5. Bias and inference from misspecified mixed-effect models in stepped wedge trial analysis.

    PubMed

    Thompson, Jennifer A; Fielding, Katherine L; Davey, Calum; Aiken, Alexander M; Hargreaves, James R; Hayes, Richard J

    2017-10-15

Many stepped wedge trials (SWTs) are analysed by using a mixed-effect model with a random intercept and fixed effects for the intervention and time periods (referred to here as the standard model). However, it is not known whether this model is robust to misspecification. We simulated SWTs with three groups of clusters and two time periods; one group received the intervention during the first period and two groups in the second period. We simulated period and intervention effects that were either common-to-all or varied-between clusters. Data were analysed with the standard model or with additional random effects for period effect or intervention effect. In a second simulation study, we explored the weight given to within-cluster comparisons by simulating a larger intervention effect in the group of the trial that experienced both the control and intervention conditions and applying the three analysis models described previously. Across 500 simulations, we computed bias and confidence interval coverage of the estimated intervention effect. We found up to 50% bias in intervention effect estimates when period or intervention effects varied between clusters and were treated as fixed effects in the analysis. All misspecified models showed undercoverage of 95% confidence intervals, particularly the standard model. A large weight was given to within-cluster comparisons in the standard model. In the SWTs simulated here, mixed-effect models were highly sensitive to departures from the model assumptions, which can be explained by the high dependence on within-cluster comparisons. Trialists should consider including a random effect for time period in their SWT analysis model. © 2017 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
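The two performance metrics computed across the 500 simulations, bias and 95% confidence-interval coverage, can be sketched generically; the estimates and standard errors below are toy values, not study output.

```python
# Bias and 95% CI coverage of an intervention-effect estimator across
# simulated trials: bias is the mean estimate minus the true effect, and
# coverage is the fraction of Wald intervals containing the true effect.

def bias_and_coverage(estimates, std_errors, true_effect, z=1.96):
    n = len(estimates)
    bias = sum(estimates) / n - true_effect
    covered = sum(1 for est, se in zip(estimates, std_errors)
                  if est - z * se <= true_effect <= est + z * se)
    return bias, covered / n

bias, coverage = bias_and_coverage([0.9, 1.1, 1.5, 0.7],
                                   [0.2, 0.2, 0.2, 0.2], true_effect=1.0)
```

In the study, coverage well below the nominal 95% under misspecified random-effects structures is what signals that the standard model's confidence intervals are too narrow.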

  6. Bias and inference from misspecified mixed‐effect models in stepped wedge trial analysis

    PubMed Central

    Fielding, Katherine L.; Davey, Calum; Aiken, Alexander M.; Hargreaves, James R.; Hayes, Richard J.

    2017-01-01

    Many stepped wedge trials (SWTs) are analysed by using a mixed‐effect model with a random intercept and fixed effects for the intervention and time periods (referred to here as the standard model). However, it is not known whether this model is robust to misspecification. We simulated SWTs with three groups of clusters and two time periods; one group received the intervention during the first period and two groups in the second period. We simulated period and intervention effects that were either common‐to‐all or varied‐between clusters. Data were analysed with the standard model or with additional random effects for period effect or intervention effect. In a second simulation study, we explored the weight given to within‐cluster comparisons by simulating a larger intervention effect in the group of the trial that experienced both the control and intervention conditions and applying the three analysis models described previously. Across 500 simulations, we computed bias and confidence interval coverage of the estimated intervention effect. We found up to 50% bias in intervention effect estimates when period or intervention effects varied between clusters and were treated as fixed effects in the analysis. All misspecified models showed undercoverage of 95% confidence intervals, particularly the standard model. A large weight was given to within‐cluster comparisons in the standard model. In the SWTs simulated here, mixed‐effect models were highly sensitive to departures from the model assumptions, which can be explained by the high dependence on within‐cluster comparisons. Trialists should consider including a random effect for time period in their SWT analysis model. © 2017 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd. PMID:28556355

  7. Towards Application of NASA Standard for Models and Simulations in Aeronautical Design Process

    NASA Astrophysics Data System (ADS)

    Vincent, Luc; Dunyach, Jean-Claude; Huet, Sandrine; Pelissier, Guillaume; Merlet, Joseph

    2012-08-01

    Even powerful computational techniques like simulation are limited in their domain of validity. Consequently, using simulation models requires caution to avoid making biased design decisions for new aeronautical products on the basis of inadequate simulation results. Thus the fidelity, accuracy and validity of simulation models must be monitored in context throughout the design phases to build confidence that the goals of modelling and simulation are achieved. In the CRESCENDO project, we adapt the Credibility Assessment Scale method from the NASA standard for models and simulations, developed for the space programme, to aircraft design in order to assess the quality of simulations. The proposed eight quality assurance metrics aggregate information to indicate the level of confidence in results. They are displayed in a management dashboard and can secure design trade-off decisions at programme milestones. The application of this technique is illustrated in an aircraft design context with a specific thermal Finite Element Analysis. This use case shows how to judge the fitness-for-purpose of simulation as a virtual testing means and then green-light the continuation of the Simulation Lifecycle Management (SLM) process.

  8. Design of a neural network simulator on a transputer array

    NASA Technical Reports Server (NTRS)

    Mcintire, Gary; Villarreal, James; Baffes, Paul; Rua, Monica

    1987-01-01

    A brief summary of neural networks is presented which concentrates on the design constraints imposed. Major design issues are discussed together with analysis methods and the chosen solutions. Although the system will be capable of running on most transputer architectures, it is currently being implemented on a 40-transputer system connected in a toroidal architecture. Predictions show a performance level equivalent to that of a highly optimized simulator running on the SX-2 supercomputer.

  9. A multi-frequency radar sounder for lava tubes detection on the Moon: Design, performance assessment and simulations

    NASA Astrophysics Data System (ADS)

    Carrer, Leonardo; Gerekos, Christopher; Bruzzone, Lorenzo

    2018-03-01

    Lunar lava tubes have attracted special interest as they would be suitable shelters for future human outposts on the Moon. Recent experimental results from optical images and gravitational anomalies have brought strong evidence of their existence, but such investigative means have very limited potential for global mapping of lava tubes. In this paper, we investigate the design requirements and feasibility of a radar sounder system specifically conceived for detecting subsurface lunar lava tubes from orbit. This is done by conducting a complete performance assessment and by simulating the electromagnetic signatures of lava tubes using a coherent 3D simulator. The results show that radar sounding of lava tubes is feasible with good performance margins in terms of signal-to-noise and signal-to-clutter ratio, and that a dual-frequency radar sounder would be able to detect the majority of lunar lava tubes based on their potential dimensions, with some limitations for very small lava tubes having widths smaller than 250 m. The electromagnetic simulations show that lava tubes display a unique signature characterized by a signal phase inversion on the roof echo. The analysis is provided for different acquisition geometries with respect to the position of the sounded lava tube. This analysis confirms that an orbiting multi-frequency radar sounder can detect and map in a reliable and unambiguous way the majority of lunar lava tubes.
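
    The phase inversion on the roof echo follows from the sign of the normal-incidence reflection coefficient, which flips when the wave passes from denser rock into the void of the tube rather than from vacuum into rock. A minimal sketch, assuming a relative permittivity of 4 for lunar basalt (an illustrative value, not one taken from the paper):

```python
import math

def normal_incidence_reflection(eps1, eps2):
    """Amplitude reflection coefficient for a wave travelling from a
    medium with relative permittivity eps1 into one with eps2
    (normal incidence, non-magnetic, lossless media)."""
    n1, n2 = math.sqrt(eps1), math.sqrt(eps2)
    return (n1 - n2) / (n1 + n2)

EPS_BASALT = 4.0  # assumed permittivity of lunar basalt (illustrative)

surface_echo = normal_incidence_reflection(1.0, EPS_BASALT)  # vacuum -> rock
roof_echo = normal_incidence_reflection(EPS_BASALT, 1.0)     # rock -> tube void
```

    The two coefficients have opposite signs, so the roof echo is phase-inverted relative to the surface echo — the signature the simulations identify.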

  10. Axisymmetric Plume Simulations with NASA's DSMC Analysis Code

    NASA Technical Reports Server (NTRS)

    Stewart, B. D.; Lumpkin, F. E., III

    2012-01-01

    A comparison of axisymmetric Direct Simulation Monte Carlo (DSMC) Analysis Code (DAC) results to analytic and Computational Fluid Dynamics (CFD) solutions in the near continuum regime and to 3D DAC solutions in the rarefied regime for expansion plumes into a vacuum is performed to investigate the validity of the newest DAC axisymmetric implementation. This new implementation, based on the standard DSMC axisymmetric approach where the representative molecules are allowed to move in all three dimensions but are rotated back to the plane of symmetry by the end of the move step, has been fully integrated into the 3D-based DAC code and therefore retains all of DAC's features, such as being able to compute flow over complex geometries and to model chemistry. Axisymmetric DAC results for a spherically symmetric isentropic expansion are in very good agreement with a source flow analytic solution in the continuum regime and show departure from equilibrium downstream of the estimated breakdown location. Axisymmetric density contours also compare favorably against CFD results for the R1E thruster, while temperature contours depart from equilibrium very rapidly away from the estimated breakdown surface. Finally, axisymmetric and 3D DAC results are in very good agreement over the entire plume region and, as expected, this new axisymmetric implementation shows a significant reduction in computer resources required to achieve accurate simulations for this problem over the 3D simulations.
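
    The move-and-rotate step of the standard DSMC axisymmetric approach can be sketched as follows (coordinate conventions assumed for illustration: x is the axis of symmetry, the molecule is rotated so that y becomes the radial coordinate and z returns to zero):

```python
import math

def rotate_to_symmetry_plane(pos, vel):
    """After a 3D move step, rotate a molecule's position back to the
    z = 0 symmetry plane (y becomes the radial coordinate) and rotate
    its velocity vector by the same angle, as in the standard DSMC
    axisymmetric approach.  Radial distance and speed are preserved."""
    x, y, z = pos
    vx, vy, vz = vel
    r = math.hypot(y, z)
    if r == 0.0:
        return (x, 0.0, 0.0), (vx, vy, vz)
    # velocity components along the new radial and tangential axes
    vr = (y * vy + z * vz) / r
    vt = (y * vz - z * vy) / r
    return (x, r, 0.0), (vx, vr, vt)

pos, vel = rotate_to_symmetry_plane((1.0, 3.0, 4.0), (0.5, 1.0, -2.0))
```

    Because only a rotation is applied, kinetic energy and radial position are unchanged; this is what lets an axisymmetric run reuse the full 3D collision machinery.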

  11. 3D PIC SIMULATIONS OF COLLISIONLESS SHOCKS AT LUNAR MAGNETIC ANOMALIES AND THEIR ROLE IN FORMING LUNAR SWIRLS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bamford, R. A.; Kellett, B. J.; Alves, E. P.

    Investigation of the lunar crustal magnetic anomalies offers a comprehensive long-term data set of observations of small-scale magnetic fields and their interaction with the solar wind. In this paper a review of the observations of lunar mini-magnetospheres is compared quantifiably with theoretical kinetic-scale plasma physics and 3D particle-in-cell simulations. The aim of this paper is to provide a complete picture of all the aspects of the phenomena and to show how the observations from all the different and international missions interrelate. The analysis shows that the simulations are consistent with the formation of miniature (smaller than the ion Larmor orbit) collisionless shocks and miniature magnetospheric cavities, which has not been demonstrated previously. The simulations reproduce the finesse and form of the differential proton patterns that are believed to be responsible for the creation of both the “lunar swirls” and “dark lanes.” Using a mature plasma physics code like OSIRIS allows us, for the first time, to make a side-by-side comparison between model and space observations. This is shown for all of the key plasma parameters observed to date by spacecraft, including the spectral imaging data of the lunar swirls. The analysis of miniature magnetic structures offers insight into multi-scale mechanisms and kinetic-scale aspects of planetary magnetospheres.

  12. In Silico Evaluation of the Potential Impact of Bioanalytical Bias Difference between Two Therapeutic Protein Formulations for Pharmacokinetic Assessment in a Biocomparability Study.

    PubMed

    Thway, Theingi M; Macaraeg, Chris; Eschenberg, Michael; Ma, Mark

    2015-05-01

    Formulation changes at later stages of biotherapeutics development require biocomparability (BC) assessment. Using simulation, this study aims to determine whether the bias difference observed between the two formulations after spiking into serum could affect the passing or failing of a critical BC study. An ELISA method with 20% total error was used to assess any bias differences between a reference formulation (RF) and a test formulation (TF) in serum. During bioanalytical comparison of these formulations, a 9% difference in bias was observed between the two formulations in sera. To determine the acceptable level of bias difference between the RF and TF bioanalytically, two in silico simulations were performed. The in silico analysis showed that the likelihood of the study meeting the BC criteria was >90% when the bias difference between RF and TF in serum was 9% and the number of subjects was ≥20 per treatment arm. An additional simulation showed that when the bias difference was increased to 13% and the number of subjects was <40, the likelihood of meeting the BC criteria decreased to 80%. The results from the in silico analysis allowed the bioanalytical laboratory to proceed with sample analysis using a single calibrator and quality controls made from the reference formulation. This modeling approach can be applied to other BC studies in similar situations.
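
    The pass/fail logic of such an in silico exercise can be sketched with a simplified Monte Carlo model. The bioequivalence-style acceptance window [0.80, 1.25], the lognormal noise model, and the normal-approximation confidence interval are illustrative assumptions; this sketch will not reproduce the paper's exact percentages:

```python
import math
import random

def bc_pass_rate(n_subjects, bias_ratio, assay_cv=0.20,
                 n_sim=2000, seed=7):
    """Fraction of simulated BC studies whose 90% CI for the TF/RF
    geometric mean ratio falls entirely within [0.80, 1.25], when the
    only true difference between arms is an assay bias of bias_ratio.
    Uses a normal approximation to the CI (illustrative only)."""
    rng = random.Random(seed)
    sigma = math.sqrt(math.log(1.0 + assay_cv ** 2))  # log-scale SD
    passes = 0
    for _ in range(n_sim):
        rf = [rng.gauss(0.0, sigma) for _ in range(n_subjects)]
        tf = [rng.gauss(math.log(bias_ratio), sigma)
              for _ in range(n_subjects)]
        diff = sum(tf) / n_subjects - sum(rf) / n_subjects
        se = sigma * math.sqrt(2.0 / n_subjects)
        lo, hi = math.exp(diff - 1.645 * se), math.exp(diff + 1.645 * se)
        passes += (0.80 <= lo and hi <= 1.25)
    return passes / n_sim

rate_9 = bc_pass_rate(20, 1.09)   # 9% bias difference, n = 20 per arm
rate_13 = bc_pass_rate(20, 1.13)  # 13% bias difference, n = 20 per arm
```

    As in the study, a larger bias difference lowers the probability that an otherwise comparable pair of formulations passes the BC criteria.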

  13. Effect of analysis parameters on non-linear implicit finite element analysis of marine corroded steel plate

    NASA Astrophysics Data System (ADS)

    Islam, Muhammad Rabiul; Sakib-Ul-Alam, Md.; Nazat, Kazi Kaarima; Hassan, M. Munir

    2017-12-01

    FEA results depend greatly on analysis parameters. The MSC NASTRAN nonlinear implicit analysis code has been used in large-deformation finite element analysis of a pitted marine SM490A steel rectangular plate. The effect of two types of actual pit shapes on structural integrity parameters has been analyzed. For 3-D modeling, a proposed method for simulating the pitted surface with a probabilistic corrosion model has been used. The result has been verified against the empirical formula proposed from finite element analysis of steel surfaces generated with different pit data, where the analyses were carried out with the LS-DYNA 971 code. In both solvers, an elasto-plastic material has been used in which an arbitrary stress-strain curve can be defined. In the latter, the material model is based on the J2 flow theory with isotropic hardening, where a radial return algorithm is used. The comparison shows good agreement between the two results, which ensures successful simulation with comparatively less energy and time.

  14. Arctic sea-ice diffusion from observed and simulated Lagrangian trajectories

    NASA Astrophysics Data System (ADS)

    Rampal, Pierre; Bouillon, Sylvain; Bergh, Jon; Ólason, Einar

    2016-07-01

    We characterize sea-ice drift by applying a Lagrangian diffusion analysis to buoy trajectories from the International Arctic Buoy Programme (IABP) dataset and from two different models: the standalone Lagrangian sea-ice model neXtSIM and the Eulerian coupled ice-ocean model used for the TOPAZ reanalysis. By applying the diffusion analysis to the IABP buoy trajectories over the period 1979-2011, we confirm that sea-ice diffusion follows two distinct regimes (ballistic and Brownian) and we provide accurate values for the diffusivity and integral timescale that could be used in Eulerian or Lagrangian passive tracers models to simulate the transport and diffusion of particles moving with the ice. We discuss how these values are linked to the evolution of the fluctuating displacements variance and how this information could be used to define the size of the search area around the position predicted by the mean drift. By comparing observed and simulated sea-ice trajectories for three consecutive winter seasons (2007-2011), we show how the characteristics of the simulated motion may differ from or agree well with observations. This comparison illustrates the usefulness of first applying a diffusion analysis to evaluate the output of modeling systems that include a sea-ice model before using these in, e.g., oil spill trajectory models or, more generally, to simulate the transport of passive tracers in sea ice.
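
    The two dispersion regimes named above can be illustrated by generating synthetic trajectories whose velocity is an exponentially correlated (Ornstein-Uhlenbeck) process and computing the displacement variance: it grows like t² at short times (ballistic) and like 2σ²τt at long times (Brownian). All parameter values here are illustrative, not taken from the IABP analysis:

```python
import math
import random

def displacement_variance(n_particles=2000, n_steps=400, dt=0.1,
                          tau=2.0, sigma_v=1.0, seed=3):
    """Single-particle dispersion from synthetic trajectories with an
    exponentially correlated velocity of integral timescale tau.
    Returns the ensemble variance of displacement at each time step."""
    rng = random.Random(seed)
    a = math.exp(-dt / tau)
    b = sigma_v * math.sqrt(1.0 - a * a)  # keeps Var(v) = sigma_v**2
    var = [0.0] * n_steps
    for _ in range(n_particles):
        x, v = 0.0, rng.gauss(0.0, sigma_v)
        for k in range(n_steps):
            x += v * dt
            v = a * v + b * rng.gauss(0.0, 1.0)
            var[k] += x * x
    return [s / n_particles for s in var]

var = displacement_variance()
```

    Doubling the time roughly quadruples the variance at the start of the record and roughly doubles it at the end, which is the signature used to separate the two regimes in buoy data.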

  15. Performance assessment of retrospective meteorological inputs for use in air quality modeling during TexAQS 2006

    NASA Astrophysics Data System (ADS)

    Ngan, Fong; Byun, Daewon; Kim, Hyuncheol; Lee, Daegyun; Rappenglück, Bernhard; Pour-Biazar, Arastoo

    2012-07-01

    To achieve more accurate meteorological inputs than were used in the daily forecast for studying TexAQS 2006 air quality, retrospective simulations were conducted using objective analysis and 3D/surface analysis nudging with surface and upper-air observations. Model ozone based on the assimilated meteorological fields with improved wind fields shows better agreement with observations than the forecast results. In post-frontal conditions, important factors for ozone modeling in terms of wind patterns are the weak easterlies in the morning, which bring industrial emissions into the city, and the subsequent clockwise turning of the wind direction induced by the Coriolis force superimposed on the sea breeze, which keeps pollutants in the urban area. Objective analysis and nudging employed in the retrospective simulation minimize the wind bias but are not able to compensate for general flow pattern biases inherited from large-scale inputs. By using alternative analysis data to initialize the meteorological simulation, the model can reproduce the flow pattern and generate an ozone peak location closer to reality. Inaccurate simulation of precipitation and cloudiness occasionally causes over-prediction of ozone. Since the meteorological model is limited in simulating precipitation and cloudiness in the fine-scale domain (grids finer than 4 km), satellite-based cloud data are an alternative way to provide the necessary inputs for the retrospective study of air quality.
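
    Analysis nudging (Newtonian relaxation) amounts to adding a term that relaxes the model state toward an observed or analysed value. A minimal scalar sketch, with an arbitrary gain and time step chosen only for illustration:

```python
def nudge(u0, obs, g, dt, n_steps, forcing=lambda u: 0.0):
    """Integrate du/dt = forcing(u) + g * (obs - u) with forward Euler.
    The g * (obs - u) term is the Newtonian relaxation (nudging) term
    that pulls the model state toward the observation/analysis value."""
    u = u0
    for _ in range(n_steps):
        u += dt * (forcing(u) + g * (obs - u))
    return u

u_free = nudge(10.0, 2.0, 0.0, 0.1, 200)    # gain 0: state never moves
u_nudged = nudge(10.0, 2.0, 1.0, 0.1, 200)  # relaxed toward the observation
```

    The gain g sets how strongly (and on what timescale, 1/g) the simulation is drawn toward the observations, which is why nudging can reduce wind bias without fixing large-scale flow pattern errors.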

  16. Design and simulation of the surface shape control system for membrane mirror

    NASA Astrophysics Data System (ADS)

    Zhang, Gengsheng; Tang, Minxue

    2009-11-01

    The surface shape control is one of the key technologies for the manufacture of membrane mirrors. This paper presents a design of a membrane mirror's surface shape control system on the basis of fuzzy logic control. The system contains function modules for surface shape design, surface shape control, surface shape analysis, etc. The system functions are realized using hybrid programming with Visual C# and MATLAB. The finite element method is adopted to simulate the surface shape control of the membrane mirror. The finite element analysis model is established through the ANSYS Parametric Design Language (APDL). The ANSYS software kernel is called by the system in background running mode during the simulation. The controller is designed by controlling the sag of the mirror's central cross-section. The surface shape of the membrane mirror and its optical aberration are obtained by applying Zernike polynomial fitting. The analysis of surface shape control and the simulation of disturbance response are performed for a membrane mirror with a 300 mm aperture and F/2.7. The result of the simulation shows that by using the designed control system, the RMS wavefront error of the mirror can reach 142λ (λ=632.8 nm), which is consistent with the surface accuracy of the membrane mirror obtained by the large deformation theory of membranes under the same conditions.

  17. Clinical validation of robot simulation of toothbrushing - comparative plaque removal efficacy

    PubMed Central

    2014-01-01

    Background Clinical validation of laboratory toothbrushing tests has important advantages. It was, therefore, the aim to demonstrate correlation of tooth cleaning efficiency of a new robot brushing simulation technique with clinical plaque removal. Methods Clinical programme: 27 subjects received dental cleaning prior to a 3-day plaque-regrowth interval. Plaque was stained, photographically documented and scored using a planimetrical index. Subjects brushed teeth 33–47 with three techniques (horizontal, rotating, vertical), each for 20s buccally and for 20s orally in 3 consecutive intervals. The force was calibrated, the brushing technique was video supported. Two different brushes were randomly assigned to the subject. Robot programme: Clinical brushing programmes were transferred to a 6-axis-robot. Artificial teeth 33–47 were covered with plaque-simulating substrate. All brushing techniques were repeated 7 times, results were scored according to clinical planimetry. All data underwent statistical analysis by t-test, U-test and multivariate analysis. Results The individual clinical cleaning patterns are well reproduced by the robot programmes. Differences in plaque removal are statistically significant for the two brushes, reproduced in clinical and robot data. Multivariate analysis confirms the higher cleaning efficiency for anterior teeth and for the buccal sites. Conclusions The robot tooth brushing simulation programme showed good correlation with clinically standardized tooth brushing. This new robot brushing simulation programme can be used for rapid, reproducible laboratory testing of tooth cleaning. PMID:24996973

  18. Clinical validation of robot simulation of toothbrushing--comparative plaque removal efficacy.

    PubMed

    Lang, Tomas; Staufer, Sebastian; Jennes, Barbara; Gaengler, Peter

    2014-07-04

    Clinical validation of laboratory toothbrushing tests has important advantages. It was, therefore, the aim to demonstrate correlation of tooth cleaning efficiency of a new robot brushing simulation technique with clinical plaque removal. Clinical programme: 27 subjects received dental cleaning prior to a 3-day plaque-regrowth interval. Plaque was stained, photographically documented and scored using a planimetrical index. Subjects brushed teeth 33-47 with three techniques (horizontal, rotating, vertical), each for 20s buccally and for 20s orally in 3 consecutive intervals. The force was calibrated, the brushing technique was video supported. Two different brushes were randomly assigned to the subject. Robot programme: Clinical brushing programmes were transferred to a 6-axis-robot. Artificial teeth 33-47 were covered with plaque-simulating substrate. All brushing techniques were repeated 7 times, results were scored according to clinical planimetry. All data underwent statistical analysis by t-test, U-test and multivariate analysis. The individual clinical cleaning patterns are well reproduced by the robot programmes. Differences in plaque removal are statistically significant for the two brushes, reproduced in clinical and robot data. Multivariate analysis confirms the higher cleaning efficiency for anterior teeth and for the buccal sites. The robot tooth brushing simulation programme showed good correlation with clinically standardized tooth brushing. This new robot brushing simulation programme can be used for rapid, reproducible laboratory testing of tooth cleaning.

  19. Analyzing indirect secondary electron contrast of unstained bacteriophage T4 based on SEM images and Monte Carlo simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ogura, Toshihiko, E-mail: t-ogura@aist.go.jp

    2009-03-06

    The indirect secondary electron contrast (ISEC) condition of scanning electron microscopy (SEM) produces high contrast detection with minimal damage of unstained biological samples mounted under a thin carbon film. The high contrast image is created by a secondary electron signal produced under the carbon film by a low acceleration voltage. Here, we show that the ISEC condition is clearly able to detect unstained bacteriophage T4 under a thin carbon film (10-15 nm) by using high-resolution field emission (FE) SEM. The results show that FE-SEM provides higher resolution than thermionic emission SEM. Furthermore, we investigated the scattered electron area within the carbon film under ISEC conditions using Monte Carlo simulation. The simulations indicated that the image resolution difference is related to the scattering width in the carbon film and the electron beam spot size. Using ISEC conditions on unstained virus samples would produce low electronic damage, because the electron beam does not directly irradiate the sample. In addition to routine analysis, this method can be utilized for structural analysis of various biological samples like viruses, bacteria, and protein complexes.

  20. Simulating three dimensional wave run-up over breakwaters covered by antifer units

    NASA Astrophysics Data System (ADS)

    Najafi-Jilani, A.; Niri, M. Zakiri; Naderi, Nader

    2014-06-01

    The paper presents a numerical analysis of wave run-up over rubble-mound breakwaters covered by antifer units, using a technique integrating Computer-Aided Design (CAD) and Computational Fluid Dynamics (CFD) software. Direct application of the Navier-Stokes equations within the armour blocks is used to provide a more reliable approach to simulating wave run-up over breakwaters. A well-tested Reynolds-averaged Navier-Stokes (RANS) Volume of Fluid (VOF) code (Flow-3D) was adopted for the CFD computations. The computed results were compared with experimental data to check the validity of the model. Numerical results showed that the direct three-dimensional (3D) simulation method can deliver accurate results for wave run-up over rubble-mound breakwaters. The results also showed that the placement pattern of antifer units has a great impact on wave run-up: changing the placement pattern from regular to double-pyramid can reduce wave run-up by approximately 30%. Analysis was done to investigate the influences of surface roughness, energy dissipation in the pores of the armour layer, and reduced wave run-up due to inflow into the armour and stone layers.

  1. Conformational Analysis on structural perturbations of the zinc finger NEMO

    NASA Astrophysics Data System (ADS)

    Godwin, Ryan; Salsbury, Freddie; Salsbury Group Team

    2014-03-01

    The NEMO (NF-kB Essential Modulator) zinc finger protein (2jvx) is a functional ubiquitin-binding domain and plays a role in signaling pathways for immune/inflammatory responses, apoptosis, and oncogenesis [Cordier et al., 2008]. Characterized by 3 cysteines and 1 histidine residue at the active site, the biologically occurring, zinc-bound configuration is a stable structural motif. Perturbations of the zinc-binding residues suggest conformational changes in the 423-atom protein, characterized via analysis of all-atom molecular dynamics simulations. Structural perturbations include simulations with and without a zinc ion and with and without de-protonated cysteines, resulting in four distinct configurations. Simulations at various time scales show consistent results, yet the longest, GPU-driven microsecond runs show more drastic structural and dynamic fluctuations than shorter runs. The last cysteine residue (26 of 28) and the helix on which it resides exhibit a secondary, locally unfolded conformation in addition to the normal bound conformation. Combined analytics elucidate how the presence of zinc and/or protonated cysteines impacts the dynamics and energetic fluctuations of NEMO. Comprehensive Cancer Center of Wake Forest University Computational Biosciences shared resource supported by NCI CCSG P30CA012197.

  2. Directions of arrival estimation with planar antenna arrays in the presence of mutual coupling

    NASA Astrophysics Data System (ADS)

    Akkar, Salem; Harabi, Ferid; Gharsallah, Ali

    2013-06-01

    Directions of arrival (DoAs) estimation of multiple sources using an antenna array is a challenging topic in wireless communication. The DoAs estimation accuracy depends not only on the selected technique and algorithm, but also on the geometrical configuration of the antenna array used during the estimation. In this article the robustness of common planar antenna arrays against unaccounted mutual coupling is examined, and their DoAs estimation capabilities are compared and analysed through computer simulations using the well-known MUltiple SIgnal Classification (MUSIC) algorithm. Our analysis is based on an electromagnetic concept to calculate an approximation of the impedance matrices that define the mutual coupling matrix (MCM). Furthermore, a CRB analysis is presented and used as an asymptotic performance benchmark for the studied antenna arrays. The impact of the studied antenna array geometries on the MCM structure is also investigated. Simulation results show that the UCCA is more robust against unaccounted mutual coupling and produces better results than both the UCA and URA geometries. The simulations also confirm that, although the UCCA achieves better performance under complicated scenarios, the URA shows better asymptotic (CRB) behaviour, which promises more accuracy in DoAs estimation.
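
    For a single source and an ideal covariance matrix, the MUSIC estimator can be sketched in a few lines. The array geometry (a half-wavelength uniform linear array rather than the planar arrays of the article), the noise power, and the use of power iteration in place of a full eigendecomposition are all simplifying assumptions for illustration:

```python
import cmath
import math
import random

def music_single_source(true_deg=20.0, n_sensors=6, noise_power=0.1):
    """1-D MUSIC with a half-wavelength ULA and one source.  The signal
    subspace is the dominant eigenvector of the covariance matrix,
    found here by power iteration; the pseudospectrum peaks where the
    steering vector is orthogonal to the noise subspace."""
    def steering(theta_deg):
        phase = math.pi * math.sin(math.radians(theta_deg))
        return [cmath.exp(1j * phase * m) for m in range(n_sensors)]

    a = steering(true_deg)
    # ideal covariance R = a a^H + noise_power * I (no sample noise)
    R = [[a[i] * a[j].conjugate() + (noise_power if i == j else 0.0)
          for j in range(n_sensors)] for i in range(n_sensors)]

    # power iteration for the dominant (signal) eigenvector
    rng = random.Random(0)
    v = [complex(rng.random(), rng.random()) for _ in range(n_sensors)]
    for _ in range(200):
        w = [sum(R[i][j] * v[j] for j in range(n_sensors))
             for i in range(n_sensors)]
        norm = math.sqrt(sum(abs(x) ** 2 for x in w))
        v = [x / norm for x in w]

    # scan the pseudospectrum 1 / (a^H (I - v v^H) a) for its peak
    best, best_theta = -1.0, None
    theta = -90.0
    while theta <= 90.0:
        s = steering(theta)
        proj = abs(sum(vi.conjugate() * si for vi, si in zip(v, s))) ** 2
        denom = n_sensors - proj  # ||a||^2 - |v^H a|^2
        if 1.0 / max(denom, 1e-12) > best:
            best, best_theta = 1.0 / max(denom, 1e-12), theta
        theta += 0.25
    return best_theta

estimate = music_single_source()
```

    Mutual coupling enters this picture by distorting the steering vectors: if the scan uses ideal steering vectors while the data obey coupled ones, the peak shifts, which is exactly the robustness question the article studies.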

  3. Careful with Those Priors: A Note on Bayesian Estimation in Two-Parameter Logistic Item Response Theory Models

    ERIC Educational Resources Information Center

    Marcoulides, Katerina M.

    2018-01-01

    This study examined the use of Bayesian analysis methods for the estimation of item parameters in a two-parameter logistic item response theory model. Using simulated data under various design conditions with both informative and non-informative priors, the parameter recovery of Bayesian analysis methods were examined. Overall results showed that…
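
    A toy version of the estimation problem: with examinee abilities and the discrimination parameter treated as known, the posterior for one item's difficulty under a normal prior can be computed on a grid. All parameter values are hypothetical, and this is far simpler than the full two-parameter estimation examined in the study:

```python
import math
import random

def posterior_mean_difficulty(n_examinees=500, a=1.0, b_true=0.5,
                              prior_sd=1.0, seed=11):
    """Grid-based Bayesian posterior mean for a single 2PL item's
    difficulty b, given known discrimination a and known abilities,
    under a normal(0, prior_sd) prior on b."""
    rng = random.Random(seed)
    thetas = [rng.gauss(0.0, 1.0) for _ in range(n_examinees)]
    responses = []
    for t in thetas:
        p = 1.0 / (1.0 + math.exp(-a * (t - b_true)))  # 2PL model
        responses.append(1 if rng.random() < p else 0)

    grid = [i / 100.0 for i in range(-300, 301)]  # b in [-3, 3]
    log_post = []
    for b in grid:
        lp = -0.5 * (b / prior_sd) ** 2  # log prior, up to a constant
        for t, x in zip(thetas, responses):
            p = 1.0 / (1.0 + math.exp(-a * (t - b)))
            lp += math.log(p if x else 1.0 - p)
        log_post.append(lp)

    m = max(log_post)
    weights = [math.exp(lp - m) for lp in log_post]
    return sum(b * w for b, w in zip(grid, weights)) / sum(weights)

b_hat = posterior_mean_difficulty()
```

    Tightening prior_sd pulls the posterior mean toward the prior mean of zero, which is the informative-versus-non-informative trade-off the study's title warns about.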

  4. Simulation framework for electromagnetic effects in plasmonics, filter apertures, wafer scattering, grating mirrors, and nano-crystals

    NASA Astrophysics Data System (ADS)

    Ceperley, Daniel Peter

    This thesis presents a Finite-Difference Time-Domain simulation framework as well as both scientific observations and quantitative design data for emerging optical devices. These emerging applications required the development of simulation capabilities for carefully controlling numerical experimental conditions, isolating and quantifying specific scattering processes, and overcoming memory and run-time limitations on large device structures. The framework consists of a new version 7 of TEMPEST and auxiliary tools implemented as Matlab scripts. In improving the geometry representation and absorbing boundary conditions of TEMPEST relative to v6, accuracy has been sustained while key changes have yielded application-specific gains in speed and accuracy. These extensions include pulsed methods, PML for plasmon termination, and plasmon and scattered-field sources. The auxiliary tools include application-specific methods such as signal flow graphs of plasmon couplers, Bloch mode expansions of sub-wavelength grating waves, and back-propagation methods to characterize edge scattering in diffraction masks. Each application posed different numerical hurdles and physical questions for the simulation framework. The Terrestrial Planet Finder Coronagraph required accurate modeling of diffraction mask structures too large for FDTD analysis alone. This analysis was achieved through a combination of targeted TEMPEST simulations and a full system simulator based on thin-mask scalar diffraction models by Ball Aerospace for JPL. TEMPEST simulation showed that vertical sidewalls were the strongest scatterers, adding nearly 2λ of light per mask edge, which could be reduced by 20° undercuts. TEMPEST assessment of coupling in rapid thermal annealing was complicated by extremely sub-wavelength features and fine meshes. Near 100% coupling and low variability was confirmed even in the presence of unidirectional dense metal gates.
    Accurate analysis of surface plasmon coupling efficiency by small surface features required capabilities to isolate these features and cleanly illuminate them with plasmons and plane waves. These features were shown to have coupling cross-sections up to and slightly exceeding their physical size. Long run-times for TEMPEST simulations of finite-length gratings were overcome with a signal flow graph method. With these methods a plasmon coupler with over a 10λ 100% capture length was demonstrated. Simulation of 3D nano-particle arrays utilized TEMPEST v7's pulsed methods to minimize the number of multi-day simulations. These simulations led to the discovery that interstitial plasmons were responsible for resonant absorption and transmission but not reflection. Simulation of a sub-wavelength grating mirror using pulsed sources to map resonant spectra showed that neither coupled guided waves nor coupled isolated resonators accurately described the operation. However, a new model based on vertical propagation of lateral Bloch modes with zero phase progression efficiently characterized the device and provided principles for designing similar devices at other wavelengths.

  5. Simulation Analysis of DC and Switching Impulse Superposition Circuit

    NASA Astrophysics Data System (ADS)

    Zhang, Chenmeng; Xie, Shijun; Zhang, Yu; Mao, Yuxiang

    2018-03-01

    Surge capacitors connected between the neutral bus and ground are subjected to superimposed DC and impulse voltage during operation in a converter station. This paper analyses a simulated aging circuit for surge capacitors using the PSCAD electromagnetic transient simulation software. It also analyses the effect of the DC voltage on the waveform produced by the impulse voltage generator. The effect of the coupling capacitor on the test voltage waveform is also studied. The results show that the DC voltage has little effect on the output waveform of the surge voltage generator, and that the value of the coupling capacitor has little effect on the voltage waveform across the sample. The simulation results show that a superimposed DC and impulse aging test for surge capacitors is feasible.

  6. Flowfield analysis of helicopter rotor in hover and forward flight based on CFD

    NASA Astrophysics Data System (ADS)

    Zhao, Qinghe; Li, Xiaodong

    2018-05-01

    The helicopter rotor flowfield is simulated in hover and forward flight based on Computational Fluid Dynamics (CFD). In the hover case only one rotor is simulated, using a periodic boundary condition in the rotating coordinate system, and the grid is fixed. In the non-lifting forward flight case, the full rotor is simulated in an inertial coordinate system and the whole grid moves rigidly. The dual-time implicit scheme is applied to simulate the unsteady flowfield on the moving grids. The k–ω turbulence model is employed in order to capture the effects of turbulence. To verify the solver, the flowfield around the Caradonna-Tung rotor is computed. The comparison shows good agreement between the numerical results and the experimental data.
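
    The dual-time implicit scheme advances each physical time step by driving a pseudo-time problem to convergence. A scalar sketch for the decay equation du/dt = -u, with BDF2 as the physical-time discretization and an explicit pseudo-time march (step sizes and iteration counts are arbitrary, and real solvers use far more sophisticated inner iterations):

```python
import math

def dual_time_step(u_n, u_nm1, dt, rhs, dtau=0.05, n_inner=200):
    """One physical time step of dual-time stepping: iterate in
    pseudo-time until the BDF2 residual of du/dt = rhs(u) vanishes,
    giving an implicit physical step without a direct nonlinear solve."""
    u = u_n  # initial guess for u at the new physical time level
    for _ in range(n_inner):
        # residual of the BDF2-discretized physical equation
        res = rhs(u) - (3.0 * u - 4.0 * u_n + u_nm1) / (2.0 * dt)
        u += dtau * res  # explicit pseudo-time advance
    return u

# march the decay equation du/dt = -u; exact solution is exp(-t)
dt = 0.1
u_prev, u_curr = 1.0, math.exp(-dt)  # seed BDF2 with exact start values
for _ in range(20):
    u_prev, u_curr = u_curr, dual_time_step(u_curr, u_prev, dt,
                                            lambda u: -u)
```

    When the inner iteration converges, the result is exactly the implicit BDF2 update, which is what makes the approach attractive for unsteady flows on moving grids.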

  7. A Monte Carlo simulation and setup optimization of output efficiency to PGNAA thermal neutron using 252Cf neutrons

    NASA Astrophysics Data System (ADS)

    Zhang, Jin-Zhao; Tuo, Xian-Guo

    2014-07-01

    We present the design and optimization of a prompt γ-ray neutron activation analysis (PGNAA) thermal neutron output setup based on Monte Carlo simulations using the MCNP5 computer code. In these simulations, the moderator materials, reflector materials, and structure of the PGNAA 252Cf thermal neutron output setup are optimized. The simulation results reveal that a thin paraffin layer combined with a thick heavy-water layer works best for moderating the 252Cf neutron spectrum. Our new design shows significantly improved performance: the thermal neutron flux and flux rate are increased by factors of 3.02 and 3.27, respectively, compared with the conventional neutron source design.

  8. Development and validation of an artificial wetlab training system for the lumbar discectomy.

    PubMed

    Adermann, Jens; Geissler, Norman; Bernal, Luis E; Kotzsch, Susanne; Korb, Werner

    2014-09-01

    Initial research indicated that realistic haptic simulators with an adapted training concept are needed to enhance training for spinal surgery. A cognitive task analysis (CTA) was performed to define a realistic and helpful scenario-based simulation. Based on the results, a simulator for lumbar discectomy was developed. Additionally, a realistic training operating room was built for a pilot workshop, and the results were validated. The CTA showed a need for realistic scenario-based training in spine surgery. The developed simulator consists of synthetic bone structures, synthetic soft tissue and an advanced bleeding system. Due to the close interdisciplinary cooperation between surgeons, engineers and psychologists, the iterative multicentre validation showed that the simulator is visually and haptically realistic. The simulator offers integrated sensors for the evaluation of the traction used and the compression applied during surgery. The participating surgeons in the pilot workshop rated the simulator and the training concept as very useful for the improvement of their surgical skills. In the context of the present work a precise definition of the simulator and training concept was developed. The additional implementation of sensors allows objective evaluation of the surgical training by the trainer. Compared to other training simulators and concepts, the high degree of objectivity strengthens the acceptance of the feedback. The measured data on nerve root tension and compression of the dura can be used for intraoperative control and a detailed postoperative evaluation.

  9. Evaluation of snowmelt simulation in the Weather Research and Forecasting model

    NASA Astrophysics Data System (ADS)

    Jin, Jiming; Wen, Lijuan

    2012-05-01

    The objective of this study is to better understand and improve snowmelt simulations in the advanced Weather Research and Forecasting (WRF) model by coupling it with the Community Land Model (CLM) Version 3.5. Both WRF and CLM are developed by the National Center for Atmospheric Research. The automated Snow Telemetry (SNOTEL) station data over the Columbia River Basin in the northwestern United States are used to evaluate snowmelt simulations generated with the coupled WRF-CLM model. These SNOTEL data include snow water equivalent (SWE), precipitation, and temperature. The simulations cover the period of March through June 2002 and focus mostly on the snowmelt season. Initial results show that when compared to observations, WRF-CLM significantly improves the simulations of SWE, which is underestimated when the release version of WRF is coupled with the Noah and Rapid Update Cycle (RUC) land surface schemes, in which snow physics is oversimplified. Further analysis shows that more realistic snow surface energy allocation in CLM is an important process that results in improved snowmelt simulations when compared to that in Noah and RUC. Additional simulations with WRF-CLM at different horizontal spatial resolutions indicate that accurate description of topography is also vital to SWE simulations. WRF-CLM at 10 km resolution produces the most realistic SWE simulations when compared to those produced with coarser spatial resolutions in which SWE is remarkably underestimated. The coupled WRF-CLM provides an important tool for research and forecasts in weather, climate, and water resources at regional scales.

  10. Does model structure limit the use of satellite data as hydrologic forcing for distributed operational models?

    NASA Astrophysics Data System (ADS)

    Bowman, A. L.; Franz, K.; Hogue, T. S.

    2015-12-01

    We are investigating the implications of using satellite data in operational streamflow prediction, specifically the consequences of potential hydrologic model structure deficiencies for the ability to achieve improved forecast accuracy through the use of satellite data. We want to understand why advanced data do not lead to improved streamflow simulations by exploring how various fluxes and states differ among models of increasing complexity. In a series of prior studies, we investigated the use of a daily satellite-derived potential evapotranspiration (PET) estimate as input to the National Weather Service (NWS) streamflow forecast models for watersheds in the Upper Mississippi and Red river basins. Although the spatial PET product appears to represent the day-to-day variability in PET more realistically than the climatological methods currently used by the NWS, the impact of the satellite data on streamflow simulations is slightly poorer model efficiency overall. Analysis of the model states indicates that the model progresses differently between simulations with baseline PET and the satellite-derived PET input, though overall variation in the streamflow simulations is negligible. For instance, the upper zone states, responsible for the high flows of a hydrograph, show a profound difference, while simulation of the peak flows tends to show little variation in timing and magnitude. Using the spatial PET input, the lower zone states show improvement in simulating the recession limb and baseflow portion of the hydrograph. We anticipate that a better understanding of the relationship between model structure, model states, and simulated streamflow will allow us to diagnose why simulations of discharge from the forecast model have failed to improve when provided with seemingly more representative input data. Identifying model limitations is critical to demonstrating the full benefit of satellite data for operational use.

  11. Electroosmotic flow analysis of a branched U-turn nanofluidic device.

    PubMed

    Parikesit, Gea O F; Markesteijn, Anton P; Kutchoukov, Vladimir G; Piciu, Oana; Bossche, Andre; Westerweel, Jerry; Garini, Yuval; Young, Ian T

    2005-10-01

    In this paper, we present the analysis of electroosmotic flow in a branched U-turn nanofluidic device, which we developed for the detection and sorting of single molecules. The device, in which the channel depth is only 150 nm, is designed to optically detect fluorescence from a volume as small as 270 attolitres (aL) with a common wide-field fluorescence setup. We use distilled water as the liquid, in which we dilute 110 nm fluorescent beads employed as tracer particles. Quantitative imaging is used to characterize the pathlines and velocity distribution of the electroosmotic flow in the device. Because of the device's complex geometry, the electroosmotic flow cannot be solved analytically; we therefore use numerical flow simulation to model the device. Our results show that the deviation between measured and simulated data can be explained by the measured Brownian motion of the tracer particles, which was not incorporated in the simulation.
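
    For context on why unmodeled Brownian motion matters at this scale, the Stokes-Einstein relation gives the expected diffusive step of a 110 nm bead in water. The sketch below is a rough estimate only; the water viscosity, temperature, and 0.1 s frame interval are assumed values, not parameters from the paper.

```python
import math

def brownian_rms_displacement(radius_m, dt_s, temp_k=293.15, eta_pa_s=1.0e-3):
    """1-D rms Brownian displacement of a sphere in water over time dt_s,
    using the Stokes-Einstein diffusion coefficient D = kT / (6*pi*eta*r)."""
    k_b = 1.380649e-23  # Boltzmann constant, J/K
    diff = k_b * temp_k / (6.0 * math.pi * eta_pa_s * radius_m)
    return math.sqrt(2.0 * diff * dt_s)

# 110 nm diameter bead -> 55e-9 m radius; 0.1 s is an assumed frame interval
step = brownian_rms_displacement(55e-9, 0.1)
print(f"rms Brownian step: {step * 1e6:.2f} um")
```

    The result is on the order of a micrometre per frame, comparable to the electroosmotic displacement in such devices, which is consistent with Brownian motion accounting for the measured/simulated deviation.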

  12. A Priori Analysis of Subgrid-Scale Models for Large Eddy Simulations of Supercritical Binary-Species Mixing Layers

    NASA Technical Reports Server (NTRS)

    Okong'o, Nora; Bellan, Josette

    2005-01-01

    Models for large eddy simulation (LES) are assessed on a database obtained from direct numerical simulations (DNS) of supercritical binary-species temporal mixing layers. The analysis is performed at the DNS transitional states for heptane/nitrogen, oxygen/hydrogen and oxygen/helium mixing layers. The incorporation of simplifying assumptions that are validated on the DNS database leads to a set of LES equations that requires only models for the subgrid scale (SGS) fluxes, which arise from filtering the convective terms in the DNS equations. Constant-coefficient versions of three different models for the SGS fluxes are assessed and calibrated. The Smagorinsky SGS-flux model shows poor correlations with the SGS fluxes, while the Gradient and Similarity models have high correlations, as well as good quantitative agreement with the SGS fluxes when the calibrated coefficients are used.

  13. Analysis of photoelectron effect on the antenna impedance via Particle-In-Cell simulation

    NASA Astrophysics Data System (ADS)

    Miyake, Y.; Usui, H.

    2008-08-01

    We present photoelectron effects on the impedance of electric field antennas used for plasma wave investigations. To illustrate the photoelectron effects, we applied electromagnetic Particle-In-Cell simulation to the self-consistent antenna impedance analysis. We confirmed the formation of a dense photoelectron region around the sunlit surfaces of the antenna and the spacecraft. The dense photoelectrons enhance the real part, and decrease the absolute value of the imaginary part, of antenna impedance at low frequencies. We also showed that the antenna conductance can be analytically calculated from simulation results of the electron current flowing into or out of the antenna. The antenna impedance in the photoelectron environment is represented by a parallel equivalent circuit consisting of a capacitance and a resistance, which is consistent with empirical knowledge. The results also imply that the impedance varies with the spin of the spacecraft, which causes the variation of the photoelectron density around the antenna.

  14. Attribution of Observed Streamflow Changes in Key British Columbia Drainage Basins

    NASA Astrophysics Data System (ADS)

    Najafi, Mohammad Reza; Zwiers, Francis W.; Gillett, Nathan P.

    2017-11-01

    We study the observed decline in summer streamflow in four key river basins in British Columbia (BC), Canada, using a formal detection and attribution (D&A) analysis procedure. Reconstructed and simulated streamflow is generated using the semidistributed Variable Infiltration Capacity hydrologic model, driven by 1/16° gridded observations and by downscaled climate model data from the Coupled Model Intercomparison Project phase 5 (CMIP5), respectively. The internal variability of the regional hydrologic components was estimated from 5100 years of streamflow simulated using CMIP5 preindustrial control runs. Results show that the observed changes in summer streamflow are inconsistent with simulations representing the responses to natural forcing factors alone, while the response to anthropogenic and natural forcing factors combined is detected in these changes. A two-signal D&A analysis indicates that the effects of anthropogenic (ANT) forcing factors are discernible from natural forcing in BC, albeit with large uncertainties.

  15. Noise and linearity optimization methods for a 1.9GHz low noise amplifier.

    PubMed

    Guo, Wei; Huang, Da-Quan

    2003-01-01

    Noise and linearity performance are critical characteristics of radio frequency integrated circuits (RFICs), especially low noise amplifiers (LNAs). In this paper, a detailed analysis of noise and linearity for the cascode architecture, a widely used circuit structure in LNA design, is presented. Noise and linearity improvement techniques for cascode structures are also developed and verified by computer simulation experiments. Theoretical analysis and simulation results show that, for cascode-structure LNAs, the first metal-oxide-semiconductor field-effect transistor (MOSFET) dominates the noise performance of the LNA, while the second MOSFET contributes more to the linearity. It follows that the first and second MOSFETs of the LNA can be designed to optimize the noise performance and the linearity performance separately, without trade-offs. Simulation results for a 1.9 GHz Complementary Metal-Oxide-Semiconductor (CMOS) LNA are also given as an application of the developed theory.
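
    The finding that the input transistor dominates the LNA's noise is consistent with the general Friis formula for cascaded stages, in which later stages' noise contributions are divided by the gain ahead of them. A small sketch with hypothetical stage values (not taken from the paper):

```python
def cascaded_noise_factor(stages):
    """Friis formula: F = F1 + (F2 - 1)/G1 + (F3 - 1)/(G1*G2) + ...
    `stages` is a list of (noise_factor, power_gain) pairs, both linear."""
    noise_factor, gain = stages[0]
    for nf, g in stages[1:]:
        noise_factor += (nf - 1.0) / gain  # later stages divided by prior gain
        gain *= g
    return noise_factor

# hypothetical cascode: input stage F1 = 1.5 with 13 dB gain (20x),
# cascode stage F2 = 3.0 -- it adds only (3 - 1)/20 = 0.1 to the total
total = cascaded_noise_factor([(1.5, 20.0), (3.0, 10.0)])
print(f"cascade noise factor: {total:.2f}")  # 1.60, dominated by stage 1
```

    Because the second stage's contribution is suppressed by the first stage's gain, it can be sized for linearity with little noise penalty, which is the separation-of-concerns result the paper reports.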

  16. Numerical Analysis of a Flexible Dual Loop Coil and its Experimental Validation for pre-Clinical Magnetic Resonance Imaging of Rodents at 7 T

    NASA Astrophysics Data System (ADS)

    Solis-Najera, S.; Vazquez, F.; Hernandez, R.; Marrufo, O.; Rodriguez, A. O.

    2016-12-01

    A surface radio frequency (RF) coil was developed for small-animal image acquisition in a pre-clinical magnetic resonance imaging system at 7 T. A flexible coil composed of two circular loops was developed to closely cover the object to be imaged. Electromagnetic numerical simulations were performed to evaluate its performance before the coil was constructed. An analytical expression for the mutual inductance of the two circular loops as a function of the separation between them was derived and used to validate the simulations. The RF coil is composed of two circular loops with a 5 cm external diameter, tuned to 300 MHz and matched to 50 Ω. The angle between the loops was varied, and the Q factor was obtained from the simulated S11 for each angle. B1 homogeneity was also evaluated using the electromagnetic simulations. The coil prototype was designed and built based on the numerical simulation results. To show the feasibility of the coil and its performance, saline-solution phantom images were acquired. Correlating the simulations with the imaging experiments showed a concordance of 0.88 for the B1 field. The best coil performance was obtained at a 90° aperture angle. A more realistic phantom was also built, using a formaldehyde-fixed rat for ex vivo imaging experiments. All images showed good quality, revealing clearly defined anatomical details of the ex vivo rat.

  17. Key Structures and Interactions for Binding of Mycobacterium tuberculosis Protein Kinase B Inhibitors from Molecular Dynamics Simulation.

    PubMed

    Punkvang, Auradee; Kamsri, Pharit; Saparpakorn, Patchreenart; Hannongbua, Supa; Wolschann, Peter; Irle, Stephan; Pungpo, Pornpan

    2015-07-01

    Substituted aminopyrimidine inhibitors have recently been introduced as antituberculosis agents. These inhibitors show impressive activity against protein kinase B, a Ser/Thr protein kinase that is essential for cell growth of M. tuberculosis. However, X-ray structures of protein kinase B in complex with the substituted aminopyrimidine inhibitors are currently unavailable. Consequently, the structural details of their binding modes remain uncertain, prohibiting the structure-based design of more potent protein kinase B inhibitors. Here, molecular dynamics simulations, in conjunction with molecular mechanics/Poisson-Boltzmann surface area binding free-energy analysis, were employed to gain insight into the complex structures of the protein kinase B inhibitors and their binding energetics. The complex structures obtained by the molecular dynamics simulations yield binding free energies in good agreement with experiment. Detailed analysis of the molecular dynamics results shows that Glu93, Val95, and Leu17 are the key residues responsible for binding of the protein kinase B inhibitors. The aminopyrazole group and the pyrimidine core are the crucial moieties of the substituted aminopyrimidine inhibitors for interaction with these key residues. Our results provide a structural concept that can guide the future design of protein kinase B inhibitors with greatly increased antagonistic activity. © 2014 John Wiley & Sons A/S.

  18. Aerospace Toolbox--a flight vehicle design, analysis, simulation, and software development environment II: an in-depth overview

    NASA Astrophysics Data System (ADS)

    Christian, Paul M.

    2002-07-01

    This paper presents a demonstrated approach to significantly reduce the cost and schedule of non-real-time modeling and simulation, real-time hardware-in-the-loop (HWIL) simulation, and embedded code development. The tool and methodology presented capitalize on a paradigm that has become standard operating procedure in the automotive industry. The tool, known as the Aerospace Toolbox, is based on the MathWorks Matlab/Simulink framework, a COTS application. Extrapolation of automotive industry data and initial applications in the aerospace industry show that the use of the Aerospace Toolbox can contribute significantly to the quest by NASA and other government agencies to meet aggressive cost reduction goals in development programs. Part I of this paper provided a detailed description of the GUI-based Aerospace Toolbox and how it is used in every step of a development program, from quick prototyping of concept developments that leverage built-in point-of-departure simulations through to detailed design, analysis, and testing. The attributes addressed included its versatility in modeling 3 to 6 degrees of freedom, its library of flight-test-validated models (including physics, environments, hardware, and error sources), and its built-in Monte Carlo capability. Other topics covered in Part I included flight vehicle models and algorithms and the covariance analysis package, Navigation System Covariance Analysis Tools (NavSCAT). Part II of this series takes a more in-depth look at the analysis and simulation capability and provides an update on toolbox enhancements. It also addresses how the Toolbox can be used as a design hub for Internet-based collaborative engineering tools such as NASA's Intelligent Synthesis Environment (ISE) and Lockheed Martin's Interactive Missile Design Environment (IMD).

  19. Simulating urban land cover changes at sub-pixel level in a coastal city

    NASA Astrophysics Data System (ADS)

    Zhao, Xiaofeng; Deng, Lei; Feng, Huihui; Zhao, Yanchuang

    2014-10-01

    The simulation of urban expansion and land cover change is a major theme in both geographic information science and landscape ecology. Yet until now, almost all previous studies have been based on grid computations at the pixel level. With the prevalence of spectral mixture analysis in urban land cover research, simulation of urban land cover at the sub-pixel level is coming onto the agenda. This study provides a new approach to land cover simulation at the sub-pixel level. Landsat TM/ETM+ images of Xiamen city, China from January 2002 and January 2007 were used to derive land cover data through supervised classification. The two classified land cover maps were then used to extract the transition rules between 2002 and 2007 using logistic regression. The transition probability of each land cover type in a given pixel was taken, after normalization, as that type's fraction of the pixel. Cellular automata (CA) based grid computation was then carried out to obtain the simulated land cover for 2007. The simulated 2007 sub-pixel land cover was tested against a validated sub-pixel land cover map achieved by spectral mixture analysis in our previous studies for the same date. Finally, the sub-pixel land cover for 2017 was simulated for urban planning and management. The results show that our method is useful for land cover simulation at the sub-pixel level. Although the simulation accuracy is not yet satisfactory for all land cover types, the method provides an important idea and a good start for CA-based urban land cover simulation.
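
    The central step described above, turning per-class logistic-regression transition probabilities into normalized sub-pixel fractions, can be sketched in a few lines. The class names, features, and coefficients below are hypothetical illustrations, not values from the study.

```python
import math

def transition_fractions(pixel_features, class_coeffs):
    """For one pixel, convert per-class logistic-regression scores into
    normalized sub-pixel fractions (the normalization step described above).
    class_coeffs maps class name -> (intercept, feature weights)."""
    scores = {}
    for cls, (intercept, weights) in class_coeffs.items():
        z = intercept + sum(w * x for w, x in zip(weights, pixel_features))
        scores[cls] = 1.0 / (1.0 + math.exp(-z))  # logistic transition probability
    total = sum(scores.values())
    return {cls: p / total for cls, p in scores.items()}  # fractions sum to 1

# hypothetical coefficients for three classes and two drivers
coeffs = {
    "urban":      (0.2,  [1.1, -0.3]),
    "vegetation": (-0.5, [-0.8, 0.9]),
    "water":      (-1.0, [0.1, 0.2]),
}
fractions = transition_fractions([0.6, 0.4], coeffs)  # e.g. accessibility, slope
print(fractions)
```

    A CA pass would then update each pixel's fractions from its neighbourhood each time step; the sketch covers only the probability-to-fraction conversion.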

  20. Simulation-Based Training Platforms for Arthroscopy: A Randomized Comparison of Virtual Reality Learning to Benchtop Learning.

    PubMed

    Middleton, Robert M; Alvand, Abtin; Garfjeld Roberts, Patrick; Hargrove, Caroline; Kirby, Georgina; Rees, Jonathan L

    2017-05-01

    To determine whether a virtual reality (VR) arthroscopy simulator or benchtop (BT) arthroscopy simulator showed superiority as a training tool. Arthroscopic novices were randomized to a training program on a BT or a VR knee arthroscopy simulator. The VR simulator provided user performance feedback. Individuals performed a diagnostic arthroscopy on both simulators before and after the training program. Performance was assessed using wireless objective motion analysis and a global rating scale. The groups (8 in the VR group, 9 in the BT group) were well matched at baseline across all parameters (P > .05). Training on each simulator resulted in significant performance improvements across all parameters (P < .05). BT training conferred a significant improvement in all parameters when trainees were reassessed on the VR simulator (P < .05). In contrast, VR training did not confer improvement in performance when trainees were reassessed on the BT simulator (P > .05). BT-trained subjects outperformed VR-trained subjects in all parameters during final assessments on the BT simulator (P < .05). There was no difference in objective performance between VR-trained and BT-trained subjects on final VR simulator wireless objective motion analysis assessment (P > .05). Both simulators delivered improvements in arthroscopic skills. BT training led to skills that readily transferred to the VR simulator. Skills acquired after VR training did not transfer as readily to the BT simulator. Despite trainees receiving automated metric feedback from the VR simulator, the results suggest a greater gain in psychomotor skills for BT training. Further work is required to determine if this finding persists in the operating room. This study suggests that there are differences in skills acquired on different simulators and skills learnt on some simulators may be more transferable. Further work in identifying user feedback metrics that enhance learning is also required. 
Copyright © 2016 Arthroscopy Association of North America. All rights reserved.

  1. Simulation of crash tests for high impact levels of a new bridge safety barrier

    NASA Astrophysics Data System (ADS)

    Drozda, Jiří; Rotter, Tomáš

    2017-09-01

    The purpose is to demonstrate the potential of non-linear dynamic impact simulation and to explain how the finite element method (FEM) can be used to develop new designs of safety barriers. The main challenge is to determine how to create and validate the finite element (FE) model. Accurate impact simulations can help reduce the costs of developing a new safety barrier. The introductory part deals with the creation of the FE model, which includes the newly designed safety barrier, and focuses on the application of experimental modal analysis (EMA). The FE model was created in ANSYS Workbench and is formed from shell and solid elements. The experimental modal analysis, performed on a physical specimen, was employed to measure the modal frequencies and mode shapes. After the EMA, the FE mesh was calibrated by comparing the measured modal frequencies with the calculated ones. The last part describes the numerical non-linear dynamic impact simulation in LS-DYNA, which was validated by comparing the measured ASI index with the calculated one. The aim of the study is to improve professional and public knowledge of dynamic non-linear impact simulations, which should ultimately lead to safer, more accurate and more cost-effective designs.

  2. Research on monocentric model of urbanization by agent-based simulation

    NASA Astrophysics Data System (ADS)

    Xue, Ling; Yang, Kaizhong

    2008-10-01

    Over the past years, GIS has been widely used for modeling urbanization from a variety of perspectives, such as digital terrain representation and overlay analysis on cell-based data platforms. Similarly, simulation of urban dynamics has been achieved with cellular automata. In contrast to these approaches, agent-based simulation provides a much more powerful set of tools, allowing researchers to set up a computational counterpart of real environmental and urban systems for experimentation and scenario analysis. This paper reviews research on the economic mechanisms of urbanization, and an agent-based monocentric model is set up to further understand the urbanization process and its mechanisms in China. We build an endogenous growth model with dynamic interactions between spatial agglomeration and urban development using agent-based simulation. It simulates the migration decisions of two main types of agents, rural and urban households, between rural and urban areas. The model contains multiple economic interactions that are crucial to understanding urbanization and industrialization in China. The adaptive agents can adjust their supply and demand according to the market situation via a learning algorithm. The simulation results show that this agent-based urban model is able to reproduce observed patterns and to produce plausible projections of reality.
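
    The migration mechanism described above can be illustrated with a deliberately minimal agent-based sketch: households compare a noisily perceived urban-rural wage gap, and the urban wage falls with city size (a stand-in for congestion), so flows balance at an equilibrium city share. All parameters are invented for illustration and bear no relation to the paper's calibrated model.

```python
import random

def simulate_migration(n_rural=1000, n_urban=200, steps=50, seed=1):
    """Toy two-way rural/urban migration: the urban wage declines with the
    urban population share, so in- and out-flows balance at an equilibrium."""
    rng = random.Random(seed)
    rural, urban = n_rural, n_urban
    for _ in range(steps):
        share = urban / (rural + urban)
        wage_gap = 2.0 * (1.0 - share) - 1.0  # urban minus rural wage
        # each household acts on a noisy perception of the wage gap
        to_city = sum(1 for _ in range(rural) if wage_gap + rng.gauss(0, 0.3) > 0)
        to_country = sum(1 for _ in range(urban) if wage_gap + rng.gauss(0, 0.3) < 0)
        # migration friction: only a tenth of would-be movers relocate per step
        rural += to_country // 10 - to_city // 10
        urban += to_city // 10 - to_country // 10
    return rural, urban

rural, urban = simulate_migration()
print(f"final rural/urban split: {rural}/{urban}")
```

    Starting from a small city, the population drifts toward the share at which the wage gap vanishes; a real model would add learning, multiple sectors, and heterogeneous agents.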

  3. Implementation and evaluation of an interprofessional simulation-based education program for undergraduate nursing students in operating room nursing education: a randomized controlled trial.

    PubMed

    Wang, Rongmei; Shi, Nianke; Bai, Jinbing; Zheng, Yaguang; Zhao, Yue

    2015-07-09

    The present study was designed to implement an interprofessional simulation-based education program for nursing students and evaluate the influence of this program on nursing students' attitudes toward interprofessional education and knowledge about operating room nursing. Nursing students were randomly assigned to either the interprofessional simulation-based education or traditional course group. A before-and-after study of nursing students' attitudes toward the program was conducted using the Readiness for Interprofessional Learning Scale. Responses to an open-ended question were categorized using thematic content analysis. Nursing students' knowledge about operating room nursing was measured. Nursing students from the interprofessional simulation-based education group showed statistically different responses to four of the nineteen questions in the Readiness for Interprofessional Learning Scale, reflecting a more positive attitude toward interprofessional learning. This was also supported by thematic content analysis of the open-ended responses. Furthermore, nursing students in the simulation-based education group had a significant improvement in knowledge about operating room nursing. The integrated course with interprofessional education and simulation provided a positive impact on undergraduate nursing students' perceptions toward interprofessional learning and knowledge about operating room nursing. Our study demonstrated that this course may be a valuable elective option for undergraduate nursing students in operating room nursing education.

  4. Pollen flow in the wildservice tree, Sorbus torminalis (L.) Crantz. I. Evaluating the paternity analysis procedure in continuous populations.

    PubMed

    Oddou-Muratorio, S; Houot, M-L; Demesure-Musch, B; Austerlitz, F

    2003-12-01

    The joint development of polymorphic molecular markers and paternity analysis methods provides new approaches to investigate ongoing patterns of pollen flow in natural plant populations. However, paternity studies are hindered by false paternity assignment and the nondetection of true fathers. To gauge the risk of these two types of errors, we performed a simulation study to investigate the impact on paternity analysis of: (i) the assumed values for the size of the breeding male population (NBMP), and (ii) the rate of scoring error in genotype assessment. Our simulations were based on microsatellite data obtained from a natural population of the entomophilous wild service tree, Sorbus torminalis (L.) Crantz. We show that an accurate estimate of NBMP is required to minimize both types of errors, and we assess the reliability of a technique used to estimate NBMP based on parent-offspring genetic data. We then show that scoring errors in genotype assessment only slightly affect the assessment of paternity relationships, and conclude that it is generally better to neglect the scoring error rate in paternity analyses within a nonisolated population.
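
    One of the two error types discussed above, wrongly excluding the true father because of a scoring error, has a simple closed form under a strict exclusion rule: 1 - (1 - e)^L for per-locus scoring error rate e and L typed loci. A small Monte Carlo check of that formula (the error rate and locus count are illustrative, not estimates from the S. torminalis data):

```python
import random

def false_exclusion_rate(n_loci=6, error_rate=0.01, n_trials=20000, seed=42):
    """Monte Carlo estimate of how often a TRUE father is wrongly excluded
    when any single mismatching locus triggers exclusion: a scoring error
    at any of the n_loci typed loci suffices to produce a mismatch."""
    rng = random.Random(seed)
    excluded = sum(
        1 for _ in range(n_trials)
        if any(rng.random() < error_rate for _ in range(n_loci))
    )
    return excluded / n_trials

rate = false_exclusion_rate()
print(f"false exclusion rate: {rate:.3f}  (closed form: {1 - 0.99 ** 6:.3f})")
```

    Even a 1% per-locus error rate excludes nearly 6% of true fathers under strict matching, which is why likelihood-based paternity methods tolerate a small number of mismatches.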

  5. Basic guidelines to introduce electric circuit simulation software in a general physics course

    NASA Astrophysics Data System (ADS)

    Moya, A. A.

    2018-05-01

    The introduction of electric circuit simulation software to undergraduate students in a general physics course is proposed in order to contribute to constructive learning of electric circuit theory. This work focuses on the lab exercises based on dc, transient and ac analysis of electric circuits found in introductory physics courses, and shows how students can use simulation software for simple activities associated with a lab exercise itself and with related topics. By introducing electric circuit simulation programs into a general physics course as a brief activity complementing the lab exercises, students develop basic skills in using simulation software, improve their knowledge of the topology of electric circuits and perceive that the technology contributes to their learning, all without reducing the time spent on the actual content of the course.

  6. The characteristics simulation of FMCW laser backscattering signals

    NASA Astrophysics Data System (ADS)

    Liu, Bohu; Song, Chengtian; Duan, Yabo

    2018-04-01

    A Monte Carlo simulation model of FMCW laser transmission in a smoke interference environment was established in this paper. The aerosol extinction and scattering coefficients changed dynamically in the simulation according to the variation in smoke concentration, the aerosol particle distribution and the photon spatial positions. The simulation results showed that the smoke backscattering interference produced a number of amplitude peaks in the beat signal spectrum; the SNR of the target echo signal relative to the smoke interference was related to the transmitted laser wavelength and the aerosol particle size distribution; and a better SNR could be obtained when the laser wavelength was in the range of 560-1660 nm. The characteristics of the FMCW laser backscattering signals generated by the simulation are consistent with the theoretical analysis. This study is therefore helpful for improving target identification and anti-interference capability in further research.
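
    The core of such a photon Monte Carlo can be sketched in one dimension: photons take exponential free paths set by the extinction coefficient, survive each interaction with probability equal to the single-scattering albedo, and may reverse direction when scattered. This toy version is far simpler than the paper's model (no wavelength dependence, no particle size distribution), and all coefficients below are illustrative.

```python
import math
import random

def backscatter_fraction(mu_ext, albedo, depth_m, n_photons=20000, seed=7):
    """1-D toy Monte Carlo of photons entering a smoke layer.  Free path
    lengths are exponential with mean 1/mu_ext (extinction coefficient,
    1/m); at each interaction the photon survives with probability
    `albedo` and reverses direction with probability 0.5.  Returns the
    fraction exiting back toward the receiver (backscatter interference)."""
    rng = random.Random(seed)
    back = 0
    for _ in range(n_photons):
        x, direction = 0.0, +1
        while True:
            # sample an exponential free path (1 - U avoids log(0))
            x += direction * (-math.log(1.0 - rng.random()) / mu_ext)
            if x < 0.0:                 # exited back toward the receiver
                back += 1
                break
            if x > depth_m:             # transmitted through the layer
                break
            if rng.random() > albedo:   # absorbed
                break
            if rng.random() < 0.5:      # isotropic 1-D scatter
                direction = -direction
    return back / n_photons

print(backscatter_fraction(mu_ext=2.0, albedo=0.9, depth_m=1.0))
```

    As expected, denser smoke (larger mu_ext) returns a larger backscattered fraction, which is the interference mechanism behind the extra peaks in the beat spectrum.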

  7. Ideas and Approaches on “Construction of High Level Simulation Experimental Teaching Center of Virtual Chemical Laboratory”

    NASA Astrophysics Data System (ADS)

    Zhang, Yunshen

    2017-11-01

    Guided by the National Department of Education's Circular on the Construction of National Virtual Simulation Experimental Teaching Centers, in accordance with the requirements of the construction task and work content, and based on the actual situation of the virtual chemical laboratory simulation experimental teaching center at Tianjin University, this paper strengthens the understanding of the virtual simulation experimental teaching center from three aspects. On this basis, it puts forward specific construction ideas, summarized as "four combinations, five in one, optimization of resources and school-enterprise cooperation", and describes the effective explorations made. Taking the synthesis and analysis of organic compounds as an example, it also demonstrates the powerful functions of the virtual simulation experimental teaching platform.

  8. Assessment of upper-ocean variability and the Madden-Julian Oscillation in extended-range air-ocean coupled mesoscale simulations

    NASA Astrophysics Data System (ADS)

    Hong, Xiaodong; Reynolds, Carolyn A.; Doyle, James D.; May, Paul; O'Neill, Larry

    2017-06-01

    Atmosphere-ocean interaction, particularly the ocean response to strong atmospheric forcing, is a fundamental component of the Madden-Julian Oscillation (MJO). In this paper, we examine how model errors in previous MJO events can affect the simulation of subsequent MJO events through errors that develop in the upper ocean before the MJO initiation stage. Two fully coupled numerical simulations with 45-km and 27-km horizontal resolutions were integrated over the two-month period from November to December 2011 using the Navy's limited-area Coupled Ocean/Atmosphere Mesoscale Prediction System (COAMPS®). Three MJO events occurred successively during the simulations, in early November, mid-November, and mid-December. The 45-km simulation shows excessive warming of the SSTs during the suppressed phase before the initiation of the second MJO event, due to erroneously strong surface net heat fluxes. The simulated second MJO event stalls over the Maritime Continent, which prevents the recovery of the deep mixed layer and the associated barrier layer. Cross-wavelet analysis of solar radiation and SSTs reveals that diurnal warming is absent during the suppressed phase following the second MJO event. The mixed-layer heat budget indicates that the cooling is primarily caused by horizontal advection associated with the stalling of the second MJO event, and the cool SSTs fail to initiate the third MJO event. When the horizontal resolution is increased to 27 km, all three MJOs are simulated and compare well with observations on multi-month timescales. The higher-resolution simulation of the second MJO event and the more realistic upper-ocean response promote the onset of the third MJO event.
    Simulations performed with analyzed SSTs indicate that the stalling of the second MJO in the 45-km run is a robust feature, regardless of ocean forcing, while the diurnal cycle analysis indicates that both the 45-km and 27-km ocean resolutions respond realistically when provided with realistic atmospheric forcing. Thus, the problem in the 45-km simulation appears to originate in the atmosphere. Additional simulations show that while the details of the simulations are sensitive to small changes in the initial integration time, the large differences between the 45-km and 27-km runs during the suppressed phase in early December are robust.

  9. The characterisation of next generation ceramic bearings for orthopaedic hip applications

    NASA Astrophysics Data System (ADS)

    Insley, Gerard M.

    Two zirconia toughened alumina (ZTA) ceramic materials were characterised for application as bearing surfaces in hip joint arthroplasty. Both ceramics were supplied by orthopaedic ceramic suppliers in the form of flat discs, flexural strength bars, and finished ball heads and cups. Standard and novel test methods were used to gauge the suitability of ZTA for this application, including mechanical strength testing, phase composition analysis by x-ray diffraction, accelerated and real-time stability testing, friction testing, and hip simulator testing under standard and non-standard conditions. Alumina was used as a control in all testing. The results show the ZTA materials to be 50 to 75% stronger and up to 25% tougher than the alumina. The two materials differ in their processing, microstructure, and crystalline phase composition; however, neither showed tetragonal-to-monoclinic degradation after either accelerated or real-time ageing. The friction and wear tests show the ZTA performing as well as the alumina under normal test conditions. However, when microseparation is introduced into the hip simulator testing, the ZTA ceramics wear significantly less than the alumina. Clinical analysis of a series of explanted heads showed that microseparation definitely occurs in the clinical situation, with wear scars observed in eleven out of sixteen components. Zirconia toughened alumina is therefore suitable as a fourth-generation bearing surface for hip joint arthroplasty.

  10. Simulation of a mycological KOH preparation--e-learning as a practical dermatologic exercise in an undergraduate medical curriculum.

    PubMed

    Bernhardt, Johannes; Hye, Florian; Thallinger, Sigrid; Bauer, Pamela; Ginter, Gabriele; Smolle, Josef

    2009-07-01

    Mycological KOH preparation is one of the most popular practical laboratory skills in dermatology. This study addresses the question of whether an interactive simulation program enhances students' knowledge of this procedure. 166 students, 107 female and 59 male, participated. We separated our study into three phases: pre-test, completing the simulation three times, and post-test. In the pre- and post-test we recorded the number of correct steps of the mycological KOH preparation listed by the students, and the free-text feedback was explored by content analysis. In the pre-test the students listed an average of 3.1 +/- 2.2 correct steps, compared to 8.8 +/- 1.2 correct steps after completing the simulation (p < 0.001). Furthermore, the improvement was significant for each individual step. There were no significant differences between male and female students. In the content analysis of the feedback, positive statements prevailed at 78.3%, compared to only 1.8% critical items. Our study shows that an interactive computer simulation of mycological KOH preparation produces significant learning effectiveness as far as recall of the correct procedural steps is concerned. Furthermore, subjective acceptance by students is high.

  11. [Numerical simulation of the effect of virtual stent release pose on the expansion results].

    PubMed

    Li, Jing; Peng, Kun; Cui, Xinyang; Fu, Wenyu; Qiao, Aike

    2018-04-01

    Current finite element analyses of vascular stent expansion do not take into account the effect of the stent release pose on the expansion results. In this study, stent and vessel models were established in Pro/E. Five finite element assembly models were constructed in ABAQUS: a 0 degree without eccentricity model, a 3 degree without eccentricity model, a 5 degree without eccentricity model, a 0 degree axial eccentricity model, and a 0 degree radial eccentricity model. These models were divided into two groups of experiments for numerical simulation with respect to angle and eccentricity. Mechanical parameters such as the foreshortening rate, radial recoil rate, and dog-boning rate were calculated, and the influence of angle and eccentricity on the numerical simulation was obtained by comparative analysis. The calculated residual stenosis rates were 38.3%, 38.4%, 38.4%, 35.7% and 38.2%, respectively, for the five models. The results indicate that the release pose has little effect on the numerical simulation results, so it can be neglected when high accuracy is not required, and the basic 0 degree without eccentricity model is feasible for numerical simulation.
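The three mechanical parameters named in this abstract have standard relative-change definitions. A minimal sketch of how they might be computed from stent dimensions is given below; all function names and example values are hypothetical illustrations, not taken from the paper.

```python
def foreshortening_rate(length_before, length_after):
    """Relative axial shortening of the stent on expansion."""
    return (length_before - length_after) / length_before

def radial_recoil_rate(diameter_loaded, diameter_unloaded):
    """Relative loss of diameter after the expanding load is removed."""
    return (diameter_loaded - diameter_unloaded) / diameter_loaded

def dog_boning_rate(diameter_end, diameter_center):
    """Relative over-expansion of the stent ends versus its centre."""
    return (diameter_end - diameter_center) / diameter_end

# Hypothetical example values in millimetres (not from the paper)
fs = foreshortening_rate(18.0, 17.1)    # stent shortens on expansion
rr = radial_recoil_rate(3.0, 2.85)      # diameter lost after unloading
db = dog_boning_rate(3.2, 3.0)          # ends expand more than the centre
```

In a finite element workflow these lengths and diameters would be read off the deformed mesh at the relevant load steps.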

  12. Numerical simulation of fluid flow and heat transfer in enhanced copper tube

    NASA Astrophysics Data System (ADS)

    Rahman, M. M.; Zhen, T.; Kadir, A. K.

    2013-06-01

    An inner grooved tube enhances heat transfer by increasing the inner surface area. Due to its high heat transfer efficiency, it is used widely in power generation, air conditioning and many other applications; the heat exchanger is one example that uses inner grooved tubes to enhance the rate of heat transfer. Precision in the production of inner grooved copper tube is very important because various tube parameters affect the tube's performance. It is therefore necessary to optimize tube performance through analysis prior to production in order to avoid unnecessary loss. The analysis can be carried out either through experimentation or numerical simulation; however, experimental study is costly and takes longer to gather the necessary information, so numerical simulation was conducted instead. First, the model of the inner grooved tube was generated using SOLIDWORKS. It was then imported into GAMBIT for healing, followed by meshing and the setting of boundary types and zones. Next, the simulation was run in FLUENT, where all the boundary conditions were set. The simulation results were compared with published experimental results and showed heat transfer enhancement in the range of 649.66% to 917.22% for the inner grooved tube compared to a plain tube.

  13. A Large Deviations Analysis of Certain Qualitative Properties of Parallel Tempering and Infinite Swapping Algorithms

    DOE PAGES

    Doll, J.; Dupuis, P.; Nyquist, P.

    2017-02-08

    Parallel tempering, or replica exchange, is a popular method for simulating complex systems. The idea is to run parallel simulations at different temperatures, and at a given swap rate exchange configurations between the parallel simulations. From the perspective of large deviations it is optimal to let the swap rate tend to infinity and it is possible to construct a corresponding simulation scheme, known as infinite swapping. In this paper we propose a novel use of large deviations for empirical measures for a more detailed analysis of the infinite swapping limit in the setting of continuous time jump Markov processes. Using the large deviations rate function and associated stochastic control problems we consider a diagnostic based on temperature assignments, which can be easily computed during a simulation. We show that the convergence of this diagnostic to its a priori known limit is a necessary condition for the convergence of infinite swapping. The rate function is also used to investigate the impact of asymmetries in the underlying potential landscape, and where in the state space poor sampling is most likely to occur.
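To make the idea concrete, the following is a toy sketch of conventional discrete-time parallel tempering on a 1-D double-well potential, with Metropolis moves per replica and occasional configuration swaps between adjacent temperatures. This illustrates only the basic replica-exchange mechanism, not the continuous-time jump processes or infinite swapping scheme analyzed in the paper; all names and parameter values are illustrative.

```python
import math, random

def parallel_tempering(potential, temps, steps=20000, seed=1):
    """Toy discrete-time parallel tempering on a 1-D potential:
    one Metropolis move per replica per step, then one swap attempt
    between a randomly chosen adjacent pair of temperatures."""
    rng = random.Random(seed)
    x = [0.0] * len(temps)          # one replica position per temperature
    swaps_accepted = 0
    for _ in range(steps):
        for i, temp in enumerate(temps):
            prop = x[i] + rng.gauss(0.0, 0.5)
            # Metropolis acceptance; min(0, .) in the exponent avoids overflow
            if rng.random() < math.exp(min(0.0, -(potential(prop) - potential(x[i])) / temp)):
                x[i] = prop
        # swap acceptance uses the inverse-temperature and energy differences
        i = rng.randrange(len(temps) - 1)
        delta = (1.0 / temps[i] - 1.0 / temps[i + 1]) * (potential(x[i]) - potential(x[i + 1]))
        if rng.random() < math.exp(min(0.0, delta)):
            x[i], x[i + 1] = x[i + 1], x[i]
            swaps_accepted += 1
    return x, swaps_accepted

double_well = lambda q: (q * q - 1.0) ** 2    # two minima at q = +/-1
positions, accepted = parallel_tempering(double_well, temps=[0.1, 0.5, 2.0])
```

The swap step is what lets the cold replica escape a metastable well via configurations explored at high temperature; the infinite swapping limit studied in the paper corresponds to taking the rate of such swap attempts to infinity.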

  14. Comparing a discrete and continuum model of the intestinal crypt

    PubMed Central

    Murray, Philip J.; Walter, Alex; Fletcher, Alex G.; Edwards, Carina M.; Tindall, Marcus J.; Maini, Philip K.

    2011-01-01

    The integration of processes at different scales is a key problem in the modelling of cell populations. Owing to increased computational resources and the accumulation of data at the cellular and subcellular scales, the use of discrete, cell-level models, which are typically solved using numerical simulations, has become prominent. One of the merits of this approach is that important biological factors, such as cell heterogeneity and noise, can be easily incorporated. However, it can be difficult to efficiently draw generalisations from the simulation results, as, often, many simulation runs are required to investigate model behaviour in typically large parameter spaces. In some cases, discrete cell-level models can be coarse-grained, yielding continuum models whose analysis can lead to the development of insight into the underlying simulations. In this paper we apply such an approach to the case of a discrete model of cell dynamics in the intestinal crypt. An analysis of the resulting continuum model demonstrates that there is a limited region of parameter space within which steady-state (and hence biologically realistic) solutions exist. Continuum model predictions show good agreement with corresponding results from the underlying simulations and experimental data taken from murine intestinal crypts. PMID:21411869

  15. An in-depth analysis of temperature effect on DIBL in UTBB FD SOI MOSFETs based on experimental data, numerical simulations and analytical models

    NASA Astrophysics Data System (ADS)

    Pereira, A. S. N.; de Streel, G.; Planes, N.; Haond, M.; Giacomini, R.; Flandre, D.; Kilchytska, V.

    2017-02-01

    The Drain Induced Barrier Lowering (DIBL) behavior of Ultra-Thin Body and Buried oxide (UTBB) transistors is investigated in detail in the temperature range up to 150 °C, for the first time to the best of our knowledge. The analysis is based on experimental data, physical device simulation, compact model (SPICE) simulation and previously published models. Contrary to the MASTAR prediction, experiments reveal a DIBL increase with temperature. Physical device simulations of different thin-film fully-depleted (FD) devices confirm the generality of this behavior. SPICE simulations with the UTSOI DK2.4 model only partially adhere to the experimental trends. Several analytical models available in the literature are assessed for DIBL vs. temperature prediction. Although it is the closest to experiments, Fasarakis' model overestimates the DIBL(T) dependence for the shortest devices and underestimates it for the upsized gate lengths frequently used in ultra-low-voltage (ULV) applications. This model is improved in our work by introducing a temperature-dependent inversion charge at threshold. The improved model shows very good agreement with experimental data, with a large gain in precision for the gate lengths under test.

  16. Research on hyperspectral dynamic scene and image sequence simulation

    NASA Astrophysics Data System (ADS)

    Sun, Dandan; Liu, Fang; Gao, Jiaobo; Sun, Kefeng; Hu, Yu; Li, Yu; Xie, Junhu; Zhang, Lei

    2016-10-01

    This paper presents a method for simulating hyperspectral dynamic scenes and image sequences, intended for evaluating hyperspectral equipment and testing target detection algorithms. Because of its high spectral resolution, strong band continuity, resistance to interference and other advantages, hyperspectral imaging technology has developed rapidly in recent years and is widely used in areas such as optoelectronic target detection, military defense and remote sensing systems. Digital imaging simulation, as a crucial part of hardware-in-the-loop simulation, can be applied to testing and evaluating hyperspectral imaging equipment at lower development cost and with a shorter development period. Meanwhile, visual simulation can produce large amounts of raw image data under various conditions for hyperspectral image feature extraction and classification algorithms. Based on a radiation physics model and material characteristic parameters, this paper proposes a method for generating digital scenes. By building multiple sensor models for different bands and bandwidths, hyperspectral scenes in the visible, MWIR and LWIR bands, with spectral resolutions of 0.01 μm, 0.05 μm and 0.1 μm, have been simulated. The resulting dynamic scenes run in real time with realistic detail, at frame rates up to 100 Hz. By saving all the scene grayscale data from the same viewpoint, an image sequence is obtained. The analysis shows that, in both the infrared and visible bands, the grayscale variations of the simulated hyperspectral images are consistent with the theoretical analysis.

  17. Reduced order model based on principal component analysis for process simulation and optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lang, Y.; Malacina, A.; Biegler, L.

    2009-01-01

    It is well-known that distributed parameter computational fluid dynamics (CFD) models provide more accurate results than conventional, lumped-parameter unit operation models used in process simulation. Consequently, the use of CFD models in process/equipment co-simulation offers the potential to optimize overall plant performance with respect to complex thermal and fluid flow phenomena. Because solving CFD models is time-consuming compared to the overall process simulation, we consider the development of fast reduced order models (ROMs) based on CFD results to closely approximate the high-fidelity equipment models in the co-simulation. By considering process equipment items with complicated geometries and detailed thermodynamic property models, this study proposes a strategy to develop ROMs based on principal component analysis (PCA). Taking advantage of commercial process simulation and CFD software (for example, Aspen Plus and FLUENT), we are able to develop systematic CFD-based ROMs for equipment models in an efficient manner. In particular, we show that the validity of the ROM is more robust within a well-sampled input domain and the CPU time is significantly reduced. Typically, it takes at most several CPU seconds to evaluate the ROM compared to several CPU hours or more to solve the CFD model. Two case studies, involving two power plant equipment examples, are described and demonstrate the benefits of using our proposed ROM methodology for process simulation and optimization.
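The general PCA-ROM recipe described here — collect field snapshots from expensive runs, extract principal modes, then fit a cheap map from operating conditions to mode coefficients — can be sketched in a few lines. This is an illustrative stand-in, not the paper's Aspen Plus/FLUENT workflow: the linear coefficient regression, the function names and the toy sine-family "CFD" data are all assumptions.

```python
import numpy as np

def build_pca_rom(inputs, snapshots, n_modes=3):
    """Fit a toy PCA-based reduced order model.
    inputs:    (n_samples, n_params) operating conditions of each run
    snapshots: (n_samples, n_grid)   flattened field result of each run"""
    mean = snapshots.mean(axis=0)
    centered = snapshots - mean
    # principal directions from the SVD of the centred snapshot matrix
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    modes = vt[:n_modes]                       # (n_modes, n_grid)
    coeffs = centered @ modes.T                # PCA scores for each run
    # linear least-squares map from inputs (plus bias) to PCA coefficients
    design = np.hstack([inputs, np.ones((len(inputs), 1))])
    weights, *_ = np.linalg.lstsq(design, coeffs, rcond=None)

    def rom(x):
        design_x = np.append(np.atleast_1d(x), 1.0)
        return mean + (design_x @ weights) @ modes
    return rom

# Hypothetical data: a 1-parameter family of smooth 1-D fields
params = np.linspace(0.0, 1.0, 20)[:, None]
grid = np.linspace(0.0, np.pi, 50)
fields = np.sin(grid[None, :] * (1.0 + params))   # stand-in for CFD results
rom = build_pca_rom(params, fields)
approx = rom([0.55])    # evaluate the ROM at an unseen operating point
```

Evaluating `rom` costs a couple of small matrix products, which is the source of the seconds-versus-hours speedup the abstract reports; accuracy depends on how well the input domain was sampled when collecting snapshots.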

  18. An Example-Based Brain MRI Simulation Framework.

    PubMed

    He, Qing; Roy, Snehashis; Jog, Amod; Pham, Dzung L

    2015-02-21

    The simulation of magnetic resonance (MR) images plays an important role in the validation of image analysis algorithms such as image segmentation, due to the lack of sufficient ground truth in real MR images. Previous work on MRI simulation has focused on explicitly modeling the MR image formation process. However, because of the overwhelming complexity of MR acquisition, these simulations must involve simplifications and approximations that can result in visually unrealistic simulated images. In this work, we describe an example-based simulation framework, which uses an "atlas" consisting of an MR image and anatomical models derived from its hard segmentation. The relationships between the MR image intensities and the anatomical models are learned using a patch-based regression that implicitly models the physics of MR image formation. Given the anatomical models of a new brain, a new MR image can be simulated using the learned regression. This approach has been extended to also simulate intensity inhomogeneity artifacts based on a statistical model of the training data. Results show that the example-based MRI simulation method is capable of simulating different image contrasts and is robust to different choices of atlas. The simulated images resemble real MR images more closely than simulations produced by a physics-based model.
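The core idea — learn an anatomy-to-intensity mapping from atlas patches, then apply it to a new anatomy — can be illustrated with a deliberately tiny 1-D nearest-neighbour version. The paper uses a learned patch-based regression on 3-D volumes; the function below is only a toy analogue with hypothetical label/intensity values, not the authors' method.

```python
import numpy as np

def patch_regression_1d(atlas_model, atlas_image, target_model, patch=3):
    """Nearest-neighbour patch 'regression' on 1-D arrays: for each patch
    of the target anatomical model, find the closest atlas-model patch and
    copy the corresponding atlas intensity."""
    half = patch // 2
    pad = lambda a: np.pad(a, half, mode='edge')
    am, tm = pad(atlas_model), pad(target_model)
    atlas_patches = np.stack([am[i:i + patch] for i in range(len(atlas_model))])
    out = np.empty(len(target_model), dtype=float)
    for i in range(len(target_model)):
        query = tm[i:i + patch]
        j = np.argmin(((atlas_patches - query) ** 2).sum(axis=1))
        out[i] = atlas_image[j]   # intensity of the best-matching atlas patch
    return out

# Hypothetical toy atlas: labels 0/1/2 map to intensities 10/50/90
atlas_model = np.array([0, 0, 1, 1, 2, 2, 1, 0])
atlas_image = np.array([10, 10, 50, 50, 90, 90, 50, 10], dtype=float)
target_model = np.array([0, 1, 2, 2, 1, 0])
simulated = patch_regression_1d(atlas_model, atlas_image, target_model)
```

Because patches (not single labels) drive the match, the synthesized intensities can reflect local context, which is what lets the real method implicitly capture acquisition physics at tissue boundaries.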

  19. Simulation skill of APCC set of global climate models for Asian summer monsoon rainfall variability

    NASA Astrophysics Data System (ADS)

    Singh, U. K.; Singh, G. P.; Singh, Vikas

    2015-04-01

    The performance of 11 Asia-Pacific Economic Cooperation Climate Center (APCC) global climate models (both coupled and uncoupled) in simulating seasonal summer (June-August) monsoon rainfall variability over Asia (especially over India and East Asia) has been evaluated in detail using hindcast data (3 months in advance) generated by APCC, which provides regional climate information products and services based on multi-model ensemble dynamical seasonal prediction systems. The skill of each global climate model over Asia was tested separately for a period of 21 years (1983-2003), and the simulated Asian summer monsoon rainfall (ASMR) was verified using various statistical measures for the Indian and East Asian land masses separately. The analysis found a large variation in spatial ASMR simulated by the uncoupled models compared to coupled models (such as the Predictive Ocean Atmosphere Model for Australia, National Centers for Environmental Prediction and Japan Meteorological Agency models). The ASMR simulated by the coupled models was closer to the Climate Prediction Centre Merged Analysis of Precipitation (CMAP) than that of the uncoupled models, although the amount of ASMR was underestimated by both. The analysis also found a high spread in simulated ASMR among ensemble members, suggesting that model performance is highly dependent on initial conditions. Correlation analysis between sea surface temperature (SST) and ASMR shows that the coupled models are more strongly associated with ASMR than the uncoupled models, suggesting that air-sea interaction is well captured in the coupled models. Various statistical measures of rainfall suggest that the multi-model ensemble (MME) performed better than any individual model, and that treating the Indian and East Asian land masses separately is more useful than treating Asian monsoon rainfall as a whole. The results of these measures, including the skill of the multi-model ensemble, the large spread among the ensemble members of individual models, the strong teleconnection (correlation) with SST, the coefficient of variation, inter-annual variability and Taylor diagram analysis, suggest that coupled rather than uncoupled models should be the focus of improvement for the development of a better dynamical seasonal forecast system.

  20. Characteristics of Bremsstrahlung emissions of (177)Lu, (188)Re, and (90)Y for SPECT/CT quantification in radionuclide therapy.

    PubMed

    Uribe, Carlos F; Esquinas, Pedro L; Gonzalez, Marjorie; Celler, Anna

    2016-05-01

    Beta particles emitted by radioisotopes used in targeted radionuclide therapies (TRT) create Bremsstrahlung (BRS), which may affect SPECT quantification when imaging these isotopes. The purpose of the current study was to investigate the characteristics of Bremsstrahlung produced in tissue by three β-emitting radioisotopes used in TRT. Monte Carlo simulations of (177)Lu, (188)Re, and (90)Y sources placed in water-filled cylinders were performed. BRS yields, mean energies and energy spectra for (a) all photons generated in the decays, (b) photons that were not absorbed and leave the cylinder, and (c) photons detected by the camera were analyzed. Next, the results of the simulations were compared with those from experiments performed on a clinical SPECT camera using the same acquisition conditions and phantom configurations as in the simulations. The simulations reproduced the shapes of the measured spectra relatively well, except for (90)Y, which showed an overestimation in the low energy range. Detailed analysis of the results allowed us to suggest the best collimators and imaging conditions for each of the investigated isotopes. Finally, our simulations confirmed that the BRS contribution to the energy spectra in quantitative imaging of (177)Lu and (188)Re can be ignored, since for these isotopes BRS contributes only marginally to the total spectra recorded by the camera. Our analysis shows that MELP and HE collimators are the best for imaging these two isotopes; for (90)Y, the HE collimator should be used. Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  1. Analysis of the pump-turbine S characteristics using the detached eddy simulation method

    NASA Astrophysics Data System (ADS)

    Sun, Hui; Xiao, Ruofu; Wang, Fujun; Xiao, Yexiang; Liu, Weichao

    2015-01-01

    Current research on pump-turbine units is focused on unstable operation at off-design conditions, where the characteristic curves in generating mode are S-shaped. Unlike in traditional water turbines, pump-turbine operation along the S-shaped curve can lead to difficulties during load rejection, with unusual increases in water pressure that lead to machine vibrations. This paper describes both model tests and numerical simulations. A reduced-scale model of a low specific speed pump-turbine was used for the performance tests, with comparisons to computational fluid dynamics (CFD) results. Predictions using the detached eddy simulation (DES) turbulence model, which combines Reynolds-averaged Navier-Stokes (RANS) and large eddy simulation (LES) approaches, are compared with two-equation turbulence model results. The external characteristics as well as the internal flow are analyzed for various guide vane openings to understand the unsteady flow along the so-called S characteristics of a pump-turbine. Comparison of the experimental data with the CFD results for various conditions and times shows that the DES model agrees better with the experimental data than the two-equation turbulence model. For low flow conditions, the centrifugal forces and the large incidence angle create large vortices between the guide vanes and the runner inlet in the runner passage, which is the main factor leading to the S-shaped characteristics. The turbulence model used here gives more accurate simulations of the internal flow characteristics of the pump-turbine and a more detailed force analysis showing the mechanisms controlling the S characteristics.

  2. A Cross-Correlational Analysis between Electroencephalographic and End-Tidal Carbon Dioxide Signals: Methodological Issues in the Presence of Missing Data and Real Data Results

    PubMed Central

    Morelli, Maria Sole; Giannoni, Alberto; Passino, Claudio; Landini, Luigi; Emdin, Michele; Vanello, Nicola

    2016-01-01

    Irreducible artifacts are common in electroencephalographic (EEG) recordings, and removal of the corrupted segments from the analysis may be required. The present study aims at exploring the effects of different EEG Missing Data Segment (MDS) distributions on cross-correlation analysis involving EEG and physiological signals. The reliability of cross-correlation analysis, both at the single-subject and at the group level, as a function of missing data statistics was evaluated using dedicated simulations. Moreover, a Bayesian approach for combining single-subject results at the group level by considering each subject's reliability was introduced. Starting from the above considerations, the cross-correlation function between EEG Global Field Power (GFP) in the delta band and end-tidal CO2 (PETCO2) during rest and voluntary breath-hold was evaluated in six healthy subjects. The simulated data results at the single-subject level revealed a worsening of precision and accuracy in the cross-correlation analysis in the presence of MDS. At the group level, a large improvement in the reliability of the results with respect to single-subject analysis was observed. The proposed Bayesian approach showed a slight improvement with respect to simple averaging of results. Real data results were discussed in light of the simulated data tests and of current physiological findings. PMID:27809243
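A common way to handle rejected segments in a lagged cross-correlation, as studied here, is pairwise deletion: at each lag, correlate only the time points where both (shifted) series are observed. The sketch below is a generic NaN-aware implementation with hypothetical variable names and synthetic data, not the authors' exact pipeline; the sign convention is that a peak at negative lag means `y` lags behind `x`.

```python
import numpy as np

def nan_crosscorr(x, y, max_lag):
    """Pearson cross-correlation of x and y at lags -max_lag..max_lag,
    using only samples where both (lag-shifted) series are observed."""
    lags = np.arange(-max_lag, max_lag + 1)
    out = np.full(len(lags), np.nan)
    for k, lag in enumerate(lags):
        if lag >= 0:
            a, b = x[lag:], y[:len(y) - lag]
        else:
            a, b = x[:len(x) + lag], y[-lag:]
        valid = ~np.isnan(a) & ~np.isnan(b)
        if valid.sum() > 2:                     # need enough joint samples
            out[k] = np.corrcoef(a[valid], b[valid])[0, 1]
    return lags, out

# Hypothetical example: y is x delayed by 3 samples, with a missing segment
rng = np.random.default_rng(0)
x = rng.standard_normal(500)
y = np.roll(x, 3)
y[100:140] = np.nan          # stand-in for a rejected EEG artifact segment
lags, cc = nan_crosscorr(x, y, max_lag=10)
best = lags[np.nanargmax(cc)]
```

The study's simulations essentially quantify how the precision and accuracy of estimates like `cc` degrade as the number and placement of such NaN segments change.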

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Chang; Fox, William; Bhattacharjee, Amitava

    Recent theory has demonstrated a novel physics regime for magnetic reconnection in high-energy-density plasmas where the magnetic field is advected by heat flux via the Nernst effect. In this paper, we elucidate the physics of the electron dissipation layer in this regime. Through fully kinetic simulation and a generalized Ohm's law derived from first principles, we show that momentum transport due to a nonlocal effect, the heat-flux-viscosity, provides the dissipation mechanism for magnetic reconnection. Scaling analysis and simulations show that the reconnection process comprises a magnetic field compression stage and a quasisteady reconnection stage, and the characteristic width of the current sheet in this regime is several electron mean-free paths. Finally, these results show the important interplay between nonlocal transport effects and generation of anisotropic components to the distribution function.

  4. Simulation of raw water and treatment parameters in support of the disinfection by-products regulatory impact analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Regli, S.; Cromwell, J.; Mosher, J.

    The U.S. EPA has undertaken an effort to model how the water supply industry may respond to possible rules and how those responses may affect human health risk. The model is referred to as the Disinfection By-Product Regulatory Analysis Model (DBPRAM). The paper is concerned primarily with presenting and discussing the methods, underlying data, assumptions, limitations and results for the first part of the model. This part of the model covers the creation of sets of simulated water supplies that are representative of the conditions currently encountered by public water supplies with respect to certain raw water quality and water treatment characteristics.

  5. Research and Analysis of Image Processing Technologies Based on DotNet Framework

    NASA Astrophysics Data System (ADS)

    Ya-Lin, Song; Chen-Xi, Bai

    Microsoft .NET is one of the most popular program development tools. This paper presents a detailed analysis of the advantages and disadvantages of several image processing technologies in .NET, applying the same algorithm in each programming experiment. The results show that the two most efficient methods are unsafe pointers and Direct3D, with Direct3D also suited to 3D simulation development; the other technologies are useful in some fields but are less efficient and not suited to real-time processing. The experimental results in this paper will be of practical use to projects involving image processing and simulation based on .NET.

  6. Application of the GRC Stirling Convertor System Dynamic Model

    NASA Technical Reports Server (NTRS)

    Regan, Timothy F.; Lewandowski, Edward J.; Schreiber, Jeffrey G. (Technical Monitor)

    2004-01-01

    The GRC Stirling Convertor System Dynamic Model (SDM) has been developed to simulate the dynamic performance of power systems incorporating free-piston Stirling convertors. This paper discusses its use in evaluating system dynamics and other system concerns. Detailed examples show the use of the model in the evaluation of off-nominal operating conditions. The many degrees of freedom in both the mechanical and electrical domains inherent in the Stirling convertor, together with the nonlinear dynamics, make simulation an attractive analysis tool in conjunction with classical analysis. Application of the SDM to studying the role of the resonant circuit quality factor (commonly referred to as Q) in the various resonant mechanical and electrical sub-systems is discussed.

  7. Biomechanical investigation of naso-orbitoethmoid trauma by finite element analysis.

    PubMed

    Huempfner-Hierl, Heike; Schaller, Andreas; Hemprich, Alexander; Hierl, Thomas

    2014-11-01

    Naso-orbitoethmoid fractures account for 5% of all facial fractures. We used data derived from a white 34-year-old man to make a transient dynamic finite element model, which consisted of about 740 000 elements, to simulate fist-like impacts to this anatomically complex area. Finite element analysis showed a pattern of von Mises stresses beyond the yield criterion of bone that corresponded with fractures commonly seen clinically. Finite element models can be used to simulate injuries to the human skull, and provide information about the pathogenesis of different types of fracture. Copyright © 2014 The British Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.

  8. Time-Accurate Simulations and Acoustic Analysis of Slat Free-Shear Layer

    NASA Technical Reports Server (NTRS)

    Khorrami, Mehdi R.; Singer, Bart A.; Berkman, Mert E.

    2001-01-01

    A detailed computational aeroacoustic analysis of a high-lift flow field is performed. Time-accurate Reynolds Averaged Navier-Stokes (RANS) computations simulate the free shear layer that originates from the slat cusp. Both unforced and forced cases are studied. Preliminary results show that the shear layer is a good amplifier of disturbances in the low to mid-frequency range. The Ffowcs-Williams and Hawkings equation is solved to determine the acoustic field using the unsteady flow data from the RANS calculations. The noise radiated from the excited shear layer has a spectral shape qualitatively similar to that obtained from measurements in a corresponding experimental study of the high-lift system.

  9. High fidelity polycrystalline CdTe/CdS heterostructures via molecular dynamics

    DOE PAGES

    Aguirre, Rodolfo; Chavez, Jose Juan; Zhou, Xiaowang; ...

    2017-06-20

    Molecular dynamics simulations of polycrystalline growth of CdTe/CdS heterostructures have been performed. First, CdS was deposited on an amorphous CdS substrate, forming a polycrystalline film. Subsequently, CdTe was deposited on top of the polycrystalline CdS film. Cross-sectional images show grain formation at early stages of the CdS growth. During CdTe deposition, the CdS structure remains almost unchanged. Concurrently, CdTe grain boundary motion was detected after the first 24.4 nanoseconds of CdTe deposition. With the elapse of time, this grain boundary pins along the CdS/CdTe interface, leaving only a small region of epitaxial growth. CdTe grains are larger than CdS grains, in agreement with experimental observations in the literature. Crystal phase analysis shows that the zinc blende structure dominates over the wurtzite structure inside both CdS and CdTe grains. Composition analysis shows Te and S diffusion into the CdS and CdTe films, respectively. Lastly, these simulated results may stimulate new ideas for studying and improving CdTe solar cell efficiency.

  10. Simulating x-ray telescopes with McXtrace: a case study of ATHENA's optics

    NASA Astrophysics Data System (ADS)

    Ferreira, Desiree D. M.; Knudsen, Erik B.; Westergaard, Niels J.; Christensen, Finn E.; Massahi, Sonny; Shortt, Brian; Spiga, Daniele; Solstad, Mathias; Lefmann, Kim

    2016-07-01

    We use the X-ray ray-tracing package McXtrace to simulate the performance of X-ray telescopes based on Silicon Pore Optics (SPO) technologies. We use as reference the optical design of the planned X-ray mission Advanced Telescope for High ENergy Astrophysics (ATHENA), a single X-ray telescope populated with stacked SPO substrates forming mirror modules that focus X-ray photons. We show that it is possible to simulate the SPO pores in detail and qualify the use of McXtrace for in-depth analysis of in-orbit performance and laboratory X-ray test results.

  11. Simulated characteristics of the DEGAS γ-detector array

    NASA Astrophysics Data System (ADS)

    Li, G. S.; Lizarazo, C.; Gerl, J.; Kojouharov, I.; Schaffner, H.; Górska, M.; Pietralla, N.; Saha, S.; Liu, M. L.; Wang, J. G.

    2018-05-01

    The performance of the novel HPGe cluster array DEGAS, to be used at FAIR, has been studied through GEANT4 simulations using accurate geometries of most of the detector components. The simulation framework has been validated by comparison with experimental data from various detector setups. The study showed that the DEGAS system provides a clear improvement in photo-peak efficiency compared to the previous RISING array. In addition, the active BGO back-catcher greatly enhances the background suppression capability. The add-back analysis revealed that even at a γ multiplicity of six, the sensitivity is improved by adding back the energy depositions of neighboring Ge crystals.
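Add-back, as used in this analysis, means grouping simultaneous energy deposits in neighbouring crystals and summing them, so a gamma ray that Compton-scatters between crystals is still counted at its full energy in the photo-peak. A minimal sketch of the bookkeeping follows; the crystal adjacency map and energies are hypothetical, not the DEGAS geometry, and real analyses add energy thresholds and timing gates.

```python
def add_back(hits, neighbours):
    """Cluster simultaneous hits in neighbouring crystals and sum them.
    hits: {crystal_id: energy_keV}; neighbours: {crystal_id: set of ids}."""
    remaining = dict(hits)
    clusters = []
    while remaining:
        # seed each cluster with the highest remaining energy deposit
        seed = max(remaining, key=remaining.get)
        cluster, frontier = {seed}, [seed]
        while frontier:
            cur = frontier.pop()
            for n in neighbours.get(cur, ()):
                if n in remaining and n not in cluster:
                    cluster.add(n)
                    frontier.append(n)
        clusters.append(sum(remaining.pop(c) for c in cluster))
    return clusters

# Hypothetical 2x2 crystal cluster: crystals 0-3, all mutually adjacent
adj = {0: {1, 2, 3}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {0, 1, 2}}
# A gamma Compton-scatters: 900 keV in crystal 0, 432 keV in crystal 1
print(add_back({0: 900.0, 1: 432.0}, adj))   # [1332.0]
```

At high γ multiplicity, unrelated gammas can land in neighbouring crystals and be wrongly summed; the abstract's point is that even at a multiplicity of six the net effect of add-back on sensitivity remains positive.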

  12. Simulation and experimental research of 1MWe solar tower power plant in China

    NASA Astrophysics Data System (ADS)

    Yu, Qiang; Wang, Zhifeng; Xu, Ershu

    2016-05-01

    The establishment of a reliable simulation system for a solar tower power plant can greatly increase the economic and safety performance of the whole system. In this paper, a dynamic model of the 1MWe Solar Tower Power Plant at Badaling in Beijing is developed based on the "STAR-90" simulation platform, including the heliostat field, the central receiver system (water/steam), etc. The dynamic behavior of the complete CSP plant can be simulated. In order to verify the validity of the simulation system, a complete experimental process was synchronously simulated by repeating the same operating steps on the simulation platform, including the locations and number of heliostats, the mass flow of the feed water, etc. Several important parameters from the simulation and experimental results are then compared in detail. The results show that the simulations agree well with the experimental results and that the error range is acceptable considering the uncertainties of the models. Finally, a comprehensive analysis of the error sources is carried out based on the comparative results.

  13. Use of an Auditory Hallucination Simulation to Increase Student Pharmacist Empathy for Patients with Mental Illness

    PubMed Central

    Eukel, Heidi N.; Frenzel, Jeanne E.; Werremeyer, Amy; McDaniel, Becky

    2016-01-01

    Objective. To increase student pharmacist empathy through the use of an auditory hallucination simulation. Design. Third-year professional pharmacy students independently completed seven stations requiring skills such as communication, following directions, reading comprehension, and cognition while listening to an audio recording simulating what one experiencing auditory hallucinations may hear. Following the simulation, students participated in a faculty-led debriefing and completed a written reflection. Assessment. The Kiersma-Chen Empathy Scale was completed by each student before and after the simulation to measure changes in empathy. The written reflections were read and qualitatively analyzed. Empathy scores increased significantly after the simulation. Qualitative analysis showed students most frequently reported feeling distracted and frustrated. All student participants recommended the simulation be offered to other student pharmacists, and 99% felt the simulation would impact their future careers. Conclusions. With approximately 10 million adult Americans suffering from serious mental illness, it is important for pharmacy educators to prepare students to provide adequate patient care to this population. This auditory hallucination simulation increased student pharmacist empathy for patients with mental illness. PMID:27899838

  14. SS-mPMG and SS-GA: tools for finding pathways and dynamic simulation of metabolic networks.

    PubMed

    Katsuragi, Tetsuo; Ono, Naoaki; Yasumoto, Keiichi; Altaf-Ul-Amin, Md; Hirai, Masami Y; Sriyudthsak, Kansuporn; Sawada, Yuji; Yamashita, Yui; Chiba, Yukako; Onouchi, Hitoshi; Fujiwara, Toru; Naito, Satoshi; Shiraishi, Fumihide; Kanaya, Shigehiko

    2013-05-01

    Metabolomics analysis tools can provide quantitative information on the concentration of metabolites in an organism. In this paper, we propose the minimum pathway model generator tool for simulating the dynamics of metabolite concentrations (SS-mPMG) and a tool for parameter estimation by genetic algorithm (SS-GA). SS-mPMG can extract a subsystem of the metabolic network from the genome-scale pathway maps to reduce the complexity of the simulation model and automatically construct a dynamic simulator to evaluate the experimentally observed behavior of metabolites. Using this tool, we show that stochastic simulation can reproduce experimentally observed dynamics of amino acid biosynthesis in Arabidopsis thaliana. In this simulation, SS-mPMG extracts the metabolic network subsystem from published databases. The parameters needed for the simulation are determined using a genetic algorithm to fit the simulation results to the experimental data. We expect that SS-mPMG and SS-GA will help researchers to create relevant metabolic networks and carry out simulations of metabolic reactions derived from metabolomics data.
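    The SS-GA step, fitting model parameters with a genetic algorithm so the simulated time course matches the observed data, can be sketched with a toy example. Everything below is a hypothetical stand-in: a single first-order decay replaces the full metabolic ODE system, and the GA settings are illustrative, not those of SS-GA.

```python
import math
import random

def simulate(k, t_points, c0=10.0):
    """Toy metabolite model: first-order decay C(t) = c0 * exp(-k t).
    A hypothetical stand-in for the full metabolic ODE simulator."""
    return [c0 * math.exp(-k * t) for t in t_points]

def fit_ga(t_points, observed, pop=30, gens=40, seed=1):
    """Tiny elitist genetic algorithm over a single rate constant k:
    keep the best half, then refill with Gaussian mutants and blends."""
    rng = random.Random(seed)
    population = [rng.uniform(0.0, 2.0) for _ in range(pop)]

    def cost(k):
        sim = simulate(k, t_points)
        return sum((s - o) ** 2 for s, o in zip(sim, observed))

    for _ in range(gens):
        population.sort(key=cost)
        parents = population[: pop // 2]          # elitist selection
        children = []
        while len(parents) + len(children) < pop:
            mutant = rng.choice(parents) + rng.gauss(0.0, 0.05)  # mutation
            children.append(max(mutant, 0.0))
            a, b = rng.sample(parents, 2)
            children.append(0.5 * (a + b))        # blend crossover
        population = parents + children[: pop - len(parents)]
    return min(population, key=cost)

# Recover a known rate constant from noise-free "observations"
t = [0.0, 1.0, 2.0, 3.0, 4.0]
k_best = fit_ga(t, simulate(0.5, t))
```

    With noise-free data the GA recovers k close to the true value of 0.5; real metabolomics data add noise and many coupled parameters, which is what SS-GA is built to handle.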

  15. Space-charge-sustained microbunch structure in the Los Alamos Proton Storage Ring

    NASA Astrophysics Data System (ADS)

    Cousineau, S.; Danilov, V.; Holmes, J.; Macek, R.

    2004-09-01

    We present experimental data from the Los Alamos Proton Storage Ring (PSR) showing long-lived linac microbunch structure during beam storage with no rf bunching. Analysis of the experimental data and particle-in-cell simulations of the experiments indicate that space charge, coupled with energy spread effects, is responsible for the sustained microbunch structure. The simulated longitudinal phase space of the beam reveals a well-defined separatrix in the phase space between linac microbunches, with particles executing unbounded motion outside of the separatrix. We show that the longitudinal phase space of the beam was near steady state during the PSR experiments, such that the separatrix persisted for long periods of time. Our simulations indicate that the steady state is very sensitive to the experimental conditions. Finally, we solve the steady-state problem in an analytic, self-consistent fashion for a set of periodic longitudinal space-charge potentials.

  16. The Effect of Dust on the Martian Polar Vortices

    NASA Technical Reports Server (NTRS)

    Guzewich, Scott D.; Toigo, A. D.; Waugh, D. W.

    2016-01-01

    The influence of atmospheric dust on the dynamics and stability of the martian polar vortices is examined, through analysis of Mars Climate Sounder observations and MarsWRF general circulation model simulations. We show that regional and global dust storms produce transient vortex warming events that partially or fully disrupt the northern winter polar vortex for brief periods. Increased atmospheric dust heating alters the Hadley circulation and shifts the downwelling branch of the circulation poleward, leading to a disruption of the polar vortex for a period of days to weeks. Through our simulations, we find this effect is dependent on the atmospheric heating rate, which can be changed by increasing the amount of dust in the atmosphere or by altering the dust optical properties (e.g., single scattering albedo). Despite this, our simulations show that some level of atmospheric dust is necessary to produce a distinct northern hemisphere winter polar vortex.

  17. Features of the accretion in the EX Hydrae system: Results of numerical simulation

    NASA Astrophysics Data System (ADS)

    Isakova, P. B.; Zhilkin, A. G.; Bisikalo, D. V.; Semena, A. N.; Revnivtsev, M. G.

    2017-07-01

    A two-dimensional numerical model in the axisymmetric approximation that describes the flow structure in the magnetosphere of the white dwarf in the EX Hya system has been developed. Results of simulations show that the accretion in EX Hya proceeds via accretion columns, which are not closed and have curtain-like shapes. The thickness of the accretion curtains depends only weakly on the thickness of the accretion disk. This thickness developed in the simulations does not agree with observations. It is concluded that the main reason for the formation of thick accretion curtains in the model is the assumption that the magnetic field penetrates fully into the plasma of the disk. An analysis based on simple estimates shows that a diamagnetic disk that fully or partially shields the magnetic field of the star may be a more attractive explanation for the observed features of the accretion in EX Hya.

  18. Symmetry and charge order in Fe2OBO3 studied through polarized resonant x-ray diffraction

    NASA Astrophysics Data System (ADS)

    Bland, S. R.; Angst, M.; Adiga, S.; Scagnoli, V.; Johnson, R. D.; Herrero-Martín, J.; Hatton, P. D.

    2010-09-01

    Bond valence sum calculations have previously suggested that iron oxyborate exhibits charge order of the Fe ions with integer 2+/3+ valence states. Meanwhile transition metal oxides typically show much smaller, fractional charge disproportionations. Using resonant x-ray diffraction at the iron K edge, we find resonant features which are much larger than those ordinarily observed in charge ordered oxides. Simulations were subsequently performed using a cluster-based, monoelectronic code. The nanoscale domain structure prevents precise fitting; nevertheless the simulations confirm the diagonal charge order symmetry, as well as the unusually large charge disproportionation. We have demonstrated the conversion of linearly to nonlinearly polarized light and vice versa through full polarization analysis. Simulations show that this effect principally results from interference between the isotropic and anisotropic scattering terms. This mechanism is likely to account for similar observations in alternative systems.

  19. Experiment and simulation study on unidirectional carbon fiber composite component under dynamic 3 point bending loading

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Guowei; Sun, Qingping; Zeng, Danielle

    In the current work, unidirectional (UD) carbon fiber composite hat-section components with two different layups are studied under dynamic 3-point bending loading. The experiments are performed at various impact velocities, and the effects of impactor velocity and layup on the acceleration histories are compared. A macro model is established with LS-Dyna for more detailed study. The simulation results show that delamination plays an important role during the dynamic 3-point bending test. Based on the analysis with a high-speed camera, the sidewall of the hat-section shows significant buckling rather than failure. Without considering the delamination, the current material model cannot capture the post-failure phenomenon correctly. The sidewall delamination is modeled by assuming a larger failure strain together with SLIM parameters, and the simulation results for different impact velocities and layups match the experimental results reasonably well.

  20. The effect of dust on the martian polar vortices

    NASA Astrophysics Data System (ADS)

    Guzewich, Scott D.; Toigo, A. D.; Waugh, D. W.

    2016-11-01

    The influence of atmospheric dust on the dynamics and stability of the martian polar vortices is examined, through analysis of Mars Climate Sounder observations and MarsWRF general circulation model simulations. We show that regional and global dust storms produce "transient vortex warming" events that partially or fully disrupt the northern winter polar vortex for brief periods. Increased atmospheric dust heating alters the Hadley circulation and shifts the downwelling branch of the circulation poleward, leading to a disruption of the polar vortex for a period of days to weeks. Through our simulations, we find this effect is dependent on the atmospheric heating rate, which can be changed by increasing the amount of dust in the atmosphere or by altering the dust optical properties (e.g., single scattering albedo). Despite this, our simulations show that some level of atmospheric dust is necessary to produce a distinct northern hemisphere winter polar vortex.

  1. Effective Medium Ratio Obeying Wideband Left-Handed Miniaturized Meta-atoms for Multi-band Applications

    NASA Astrophysics Data System (ADS)

    Hossain, Mohammad Jakir; Faruque, Mohammad Rashed Iqbal; Islam, Mohammad Tariqul

    2017-12-01

    In this paper, a miniaturized wideband left-handed (LH) meta-atom based on planar modified multiple hexagonal split ring resonators was designed, simulated, fabricated and tested that can maintain a left-handed property. An analysis and comparison of the different array structures were performed, which yielded a better effective medium ratio (EMR) and a wide bandwidth (5.54 GHz) for multi-band operation in the microwave regime. The finite-difference time-domain (FDTD)-based Computer Simulation Technology (CST) software was used to design the meta-atom. The meta-atom showed a multi-band response in conjunction with wideband LH properties over certain frequency bands in the microwave spectrum. The EMR was considerably improved compared to previously reported meta-atoms. The measured results showed good agreement with the simulated results. The dimensions, S-parameters and EMR of the proposed miniaturized LH meta-atom are appropriate for L-, S-, C-, X-, and Ku-band applications.

  2. Effective Medium Ratio Obeying Wideband Left-Handed Miniaturized Meta-atoms for Multi-band Applications

    NASA Astrophysics Data System (ADS)

    Hossain, Mohammad Jakir; Faruque, Mohammad Rashed Iqbal; Islam, Mohammad Tariqul

    2018-03-01

    In this paper, a miniaturized wideband left-handed (LH) meta-atom based on planar modified multiple hexagonal split ring resonators was designed, simulated, fabricated and tested that can maintain a left-handed property. An analysis and comparison of the different array structures were performed, which yielded a better effective medium ratio (EMR) and a wide bandwidth (5.54 GHz) for multi-band operation in the microwave regime. The finite-difference time-domain (FDTD)-based Computer Simulation Technology (CST) software was used to design the meta-atom. The meta-atom showed a multi-band response in conjunction with wideband LH properties over certain frequency bands in the microwave spectrum. The EMR was considerably improved compared to previously reported meta-atoms. The measured results showed good agreement with the simulated results. The dimensions, S-parameters and EMR of the proposed miniaturized LH meta-atom are appropriate for L-, S-, C-, X-, and Ku-band applications.

  3. Progress of the NASA/USGS Lunar Regolith Simulant Project

    NASA Technical Reports Server (NTRS)

    Rickman, Douglas; McLemore, C.; Stoeser, D.; Schrader, C.; Fikes, J.; Street, K.

    2009-01-01

    Beginning in 2004, personnel at MSFC began serious efforts to develop a new generation of lunar simulants. The first two products were a replication of the previous JSC-1 simulant under a contract to Orbitec and a major workshop in 2005 on future simulant development. It was recognized in early 2006 that there were serious limitations with the standard approach of simply taking a single terrestrial rock and grinding it. To a geologist, even a cursory examination of the Lunar Sourcebook shows that matching lunar heterogeneity, crystal size, relative mineral abundances, lack of H2O, plagioclase chemistry and glass abundance simply cannot be done with any simple combination of terrestrial rocks. Thus, the project refocused its efforts and approached simulant development in a new and more comprehensive manner, examining new approaches in simulant development and ways to more accurately compare simulants to actual lunar materials. This led to a multi-year effort with five major tasks running in parallel: Requirements, Lunar Analysis, Process Development, Feed Stocks, and Standards.

  4. The impact of the observation nudging and nesting on the simulated meteorology and ozone concentrations from WRF-SMOKE-CMAQ during DISCOVER-AQ 2013 Texas campaign

    NASA Astrophysics Data System (ADS)

    Choi, Y.; Li, X.; Czader, B.

    2014-12-01

    Three WRF simulations for the DISCOVER-AQ 2013 Texas campaign period (30 days in September) are performed to characterize uncertainties in the simulated meteorological and chemical conditions. These simulations differ in domain setup and in whether observation nudging is performed in the WRF runs. Observational nudging yields around a 7% gain in the index of agreement (IOA) for temperature and a 9-12% boost for U-WIND and V-WIND. Further performance gain from nested domains over a single domain is marginal. The CMAQ simulations based on the above WRF setups showed that the ozone performance slightly improved in the simulation for which objective analysis (OA) was carried out. Further IOA gain, though quite limited, is achieved with nested domains. This study shows that the high ozone episodes during the analyzed time periods were associated with uncertainties in the simulated cold front passage, the chemical boundary conditions and small-scale temporal wind fields. All runs missed the observed high ozone values, which reached above 150 ppb in La Porte on September 25, the only day with hourly ozone over 120 ppb. The failure is likely due to the model's inability to capture small-scale wind shifts in the industrial zone, despite better wind directions in the simulations with nudging and nested domains. This study also shows that overestimated background ozone from the southerly chemical boundary is a critical source of the model's general overprediction of ozone concentrations from CMAQ during September 2013. These results shed light on the necessity of (1) capturing small-scale winds such as the onsets of bay or sea breezes and (2) implementing more accurate chemical boundary conditions to reduce the simulated high-biased ozone concentrations. One promising remedy for (1) is implementing hourly observation nudging instead of the standard nudging, which is done every three hours.
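    The index of agreement (IOA) quoted above is typically Willmott's d, which compares squared model-observation differences to the potential error around the observed mean. A minimal sketch with toy numbers (not the campaign data):

```python
import numpy as np

def index_of_agreement(obs, sim):
    """Willmott's index of agreement d in [0, 1]; 1 means a perfect match."""
    obs = np.asarray(obs, dtype=float)
    sim = np.asarray(sim, dtype=float)
    num = np.sum((sim - obs) ** 2)
    den = np.sum((np.abs(sim - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
    return 1.0 - num / den

# Toy example: simulated vs. observed 2 m temperature (deg C)
obs = [20.1, 21.4, 22.0, 23.5, 24.1]
sim = [19.8, 21.0, 22.5, 23.9, 24.6]
d = index_of_agreement(obs, sim)
```

    A 7% IOA gain as reported above would correspond, for example, to d rising from about 0.85 to about 0.91 for the nudged run.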

  5. Introduction and application of the multiscale coefficient of variation analysis.

    PubMed

    Abney, Drew H; Kello, Christopher T; Balasubramaniam, Ramesh

    2017-10-01

    Quantifying how patterns of behavior relate across multiple levels of measurement typically requires long time series for reliable parameter estimation. We describe a novel analysis that estimates patterns of variability across multiple scales of analysis suitable for time series of short duration. The multiscale coefficient of variation (MSCV) measures the distance between local coefficient of variation estimates within particular time windows and the overall coefficient of variation across all time samples. We first describe the MSCV analysis and provide an example analytical protocol with corresponding MATLAB implementation and code. Next, we present a simulation study testing the new analysis using time series generated by ARFIMA models that span white noise, short-term and long-term correlations. The MSCV analysis was observed to be sensitive to specific parameters of ARFIMA models varying in the type of temporal structure and time series length. We then apply the MSCV analysis to short time series of speech phrases and musical themes to show commonalities in multiscale structure. The simulation and application studies provide evidence that the MSCV analysis can discriminate between time series varying in multiscale structure and length.
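    A minimal sketch of the MSCV idea for a single window size; the published analysis spans a range of window scales and a specific distance measure, so the function below is an illustrative approximation, not the authors' implementation.

```python
import numpy as np

def mscv(x, window):
    """Illustrative multiscale coefficient of variation: mean absolute
    distance between windowed local CVs and the global CV.
    Assumes a strictly positive-mean series."""
    x = np.asarray(x, dtype=float)
    global_cv = x.std() / x.mean()
    n_windows = len(x) // window
    local_cvs = [
        x[i * window:(i + 1) * window].std() / x[i * window:(i + 1) * window].mean()
        for i in range(n_windows)
    ]
    return float(np.mean(np.abs(np.array(local_cvs) - global_cv)))
```

    A constant series gives an MSCV of zero, while a series whose local variability drifts across windows scores higher; repeating the computation over several window sizes gives the multiscale profile.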

  6. Comparison Of Quantitative Precipitation Estimates Derived From Rain Gauge And Radar Derived Algorithms For Operational Flash Flood Support.

    NASA Astrophysics Data System (ADS)

    Streubel, D. P.; Kodama, K.

    2014-12-01

    To provide continuous flash flood situational awareness and to better differentiate the severity of ongoing individual precipitation events, the National Weather Service Research Distributed Hydrologic Model (RDHM) is being implemented over Hawaii and Alaska. In the implementation of RDHM, three gridded precipitation analyses are used as forcing. The first analysis is a radar-only precipitation estimate derived from WSR-88D digital hybrid reflectivity and a Z-R relationship, aggregated onto an hourly ¼ HRAP grid. The second analysis is derived from a rain gauge network and interpolated onto an hourly ¼ HRAP grid using PRISM climatology. The third analysis is derived from a rain gauge network where rain gauges are assigned static pre-determined weights to derive a uniform mean areal precipitation that is applied over a catchment on a ¼ HRAP grid. To assess the effect of the different QPE analyses on the accuracy of RDHM simulations, and to potentially identify a preferred analysis for operational use, each QPE was used to force RDHM to simulate stream flow for 20 USGS peak flow events. The evaluation of the RDHM simulations focused on peak flow magnitude, peak flow timing, and event volume accuracy, as these are most relevant for operational use. Results showed that RDHM simulations forced by the observed rain gauge amounts were more accurate in simulating peak flow magnitude and event volume than those forced by the radar-derived analysis. However, this result was not consistent across all 20 events, nor for a few of the rainfall events where an annual peak flow was recorded at more than one USGS gage. This implies that a more robust QPE forcing that incorporates the uncertainty derived from the three analyses may provide a better input for simulating extreme peak flow events.
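    The radar-only analysis above rests on a Z-R relationship. A sketch of the dBZ-to-rain-rate conversion, assuming the common convective default Z = 300 R^1.4; the operational coefficients for Hawaii and Alaska may differ.

```python
def rain_rate_from_dbz(dbz, a=300.0, b=1.4):
    """Convert radar reflectivity (dBZ) to rain rate R (mm/h) via Z = a * R**b.
    a = 300, b = 1.4 is a common convective default, used here for illustration."""
    z = 10.0 ** (dbz / 10.0)       # reflectivity factor Z in mm^6 / m^3
    return (z / a) ** (1.0 / b)

rate_30dbz = rain_rate_from_dbz(30.0)   # moderate rain
rate_50dbz = rain_rate_from_dbz(50.0)   # heavy convective rain
```

    The strong nonlinearity (a 20 dBZ increase multiplies R by roughly 27) is one reason radar-derived QPE can diverge from gauge-based analyses.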

  7. Comparison of linear measurements between CBCT orthogonally synthesized cephalograms and conventional cephalograms

    PubMed Central

    Yang, S; Liu, D G

    2014-01-01

    Objectives: The purposes of this study were to investigate the consistency of linear measurements between CBCT orthogonally synthesized cephalograms and conventional cephalograms and to evaluate the influence of different magnifications on these comparisons based on a simulation algorithm. Methods: Conventional cephalograms and CBCT scans were taken on 12 dry skulls with spherical metal markers. Orthogonally synthesized cephalograms were created from the CBCT data. Linear parameters on both types of cephalograms were measured with Photoshop CS v. 5.0 (Adobe® Systems, San Jose, CA), forming the measurement group (MG). Bland–Altman analysis was utilized to assess the agreement of the two imaging modalities. Reproducibility was investigated using a paired t-test. Using a purpose-built mathematical programme, "cepha", the corresponding linear parameters [mandibular corpus length (Go-Me), mandibular ramus length (Co-Go), posterior facial height (Go-S)] on these two types of cephalograms were calculated, forming the simulation group (SG). Bland–Altman analysis was used to assess the agreement between MG and SG. Simulated linear measurements with varying magnifications were generated with "cepha" as well, and Bland–Altman analysis was used to assess the agreement of the simulated measurements between the two modalities. Results: Bland–Altman analysis indicated agreement between measurements on conventional cephalograms and orthogonally synthesized cephalograms, with a mean bias of 0.47 mm. Comparison between MG and SG showed that the difference did not reach clinical significance. The consistency between simulated measurements of both modalities at four different magnifications was demonstrated. Conclusions: Normative data from conventional cephalograms could be used for CBCT orthogonally synthesized cephalograms during this transitional period. PMID:25029593
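    The magnification simulated by the "cepha" programme follows standard projection geometry: a structure at source-to-object distance SOD projects onto a receptor at source-to-image distance SID with magnification M = SID/SOD. A sketch with hypothetical distances, not the study's actual geometry:

```python
def projected_length(true_length_mm, sid_mm, sod_mm):
    """Projected size of a linear structure on the image receptor.
    Magnification M = SID / SOD (source-to-image over source-to-object)."""
    return true_length_mm * (sid_mm / sod_mm)

# Hypothetical cephalostat setup: SID = 1650 mm, structure at SOD = 1500 mm,
# i.e. a 10% magnification of a 70 mm mandibular corpus length (Go-Me)
go_me = projected_length(70.0, 1650.0, 1500.0)
```

    Structures at different depths from the receptor are magnified differently, which is why simulated comparisons at several magnifications were needed.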

  8. Effect of train vibration on settlement of soil: A numerical analysis

    NASA Astrophysics Data System (ADS)

    Tiong, Kah-Yong; Ling, Felix Ngee-Leh; Talib, Zaihasra Abu

    2017-10-01

    The rapid development of transit systems has made the influence of train-induced ground-borne vibrations on ground settlement a growing concern. The purpose of this study is to investigate soil settlement caused by train vibration. To facilitate this study, computer simulation of the soil dynamic response using the commercial finite element package PLAXIS 2D was performed to model the track-subgrade system together with a dynamic train load under three different conditions. The simulation results established that soil deformation increases with a rising water level. This occurs because a higher water level not only induces greater excess pore water pressure but also reduces the stiffness of the soil. Furthermore, the simulations showed that soil settlement is reduced by placing a material with high stiffness between the subgrade and the ballast layer, since a stiff material is able to dissipate energy efficiently owing to its high bearing capacity, thus protecting the subgrade from deteriorating. The simulation results also showed that the soil dynamic response increases with train speed, and a noticeable amplification in soil deformation occurs as the train speed approaches the Rayleigh wave velocity of the track-subgrade system. This is because the dynamic train load depends on both the self-weight of the train and the dynamic component due to inertial effects associated with the train speed. Thus, keeping train speeds under the critical velocity of the track-subgrade system helps ensure the safety of train operation, as it prevents track-ground resonance and excessive ground deformation.
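    The critical speed referred to above is set by the Rayleigh wave velocity of the subgrade. A sketch using the widely used Bergmann approximation V_R ≈ V_S (0.87 + 1.12ν)/(1 + ν); the soil parameters below are illustrative, not the study's values.

```python
import math

def rayleigh_wave_velocity(shear_modulus_pa, density_kg_m3, poisson):
    """Approximate Rayleigh wave velocity (m/s) of a homogeneous soil,
    from the shear wave velocity V_S = sqrt(G / rho) and the Bergmann
    approximation V_R = V_S * (0.87 + 1.12*nu) / (1 + nu)."""
    vs = math.sqrt(shear_modulus_pa / density_kg_m3)
    return vs * (0.87 + 1.12 * poisson) / (1.0 + poisson)

# Soft subgrade, illustrative values: G = 20 MPa, rho = 1800 kg/m3, nu = 0.35
v_r = rayleigh_wave_velocity(20e6, 1800.0, 0.35)
```

    For these illustrative values V_R comes out near 100 m/s (roughly 350 km/h), showing why soft, wet subgrades have low critical train speeds.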

  9. Reduced Order Model Implementation in the Risk-Informed Safety Margin Characterization Toolkit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mandelli, Diego; Smith, Curtis L.; Alfonsi, Andrea

    2015-09-01

    The RISMC project aims to develop new advanced simulation-based tools to perform Probabilistic Risk Analysis (PRA) for the existing fleet of U.S. nuclear power plants (NPPs). These tools numerically model not only the thermo-hydraulic behavior of the reactor primary and secondary systems but also the temporal evolution of external events and component/system ageing. Thus, this is not only a multi-physics problem but also a multi-scale problem (both spatial, µm-mm-m, and temporal, ms-s-minutes-years). As part of the RISMC PRA approach, a large number of computationally expensive simulation runs are required. An important aspect is that even though computational power is regularly growing, the overall computational cost of a RISMC analysis may not be viable for certain cases. A solution being evaluated is the use of reduced order modeling techniques. During FY2015, we investigated and applied reduced order modeling techniques to decrease the RISMC analysis computational cost by reducing the number of simulation runs to perform and by employing surrogate models instead of the actual simulation codes. This report focuses on reduced order modeling techniques that can be applied to any RISMC analysis to generate, analyze and visualize data. In particular, we focus on surrogate models that approximate the simulation results in a much shorter time (µs instead of hours/days). We apply reduced order and surrogate modeling techniques to several RISMC types of analyses using RAVEN and RELAP-7 and show the advantages that can be gained.
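    The surrogate idea can be sketched in a few lines: run the expensive code at a handful of training points, fit a cheap response surface, then query the surrogate instead. The toy response function and polynomial form below are illustrative stand-ins for a RAVEN/RELAP-7 workflow, not the project's actual models.

```python
import numpy as np

def expensive_simulation(x):
    """Hypothetical stand-in for a costly physics run (e.g. hours per call)."""
    return np.sin(2.0 * x) + 0.5 * x

# 1. Evaluate the full code at a few training points only
x_train = np.linspace(0.0, 3.0, 12)
y_train = expensive_simulation(x_train)

# 2. Fit a cheap surrogate: here a degree-5 polynomial response surface
surrogate = np.poly1d(np.polyfit(x_train, y_train, deg=5))

# 3. Query the surrogate thousands of times at negligible cost
x_query = np.linspace(0.0, 3.0, 2000)
max_err = float(np.max(np.abs(surrogate(x_query) - expensive_simulation(x_query))))
```

    In practice the training points come from the actual simulation code, the surrogate is validated against held-out runs, and richer models (e.g. Gaussian processes) replace the polynomial for higher-dimensional inputs.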

  10. Guidelines for the analysis of free energy calculations.

    PubMed

    Klimovich, Pavel V; Shirts, Michael R; Mobley, David L

    2015-05-01

    Free energy calculations based on molecular dynamics simulations show considerable promise for applications ranging from drug discovery to prediction of physical properties and structure-function studies. But these calculations are still difficult and tedious to analyze, and best practices for analysis are not well defined or propagated. Essentially, each group analyzing these calculations needs to decide how to conduct the analysis and, usually, develop its own analysis tools. Here, we review and recommend best practices for analysis yielding reliable free energies from molecular simulations. Additionally, we provide a Python tool, alchemical-analysis.py, freely available on GitHub as part of the pymbar package (located at http://github.com/choderalab/pymbar), that implements the analysis practices reviewed here for several reference simulation packages and can be adapted to handle data from other packages. Both this review and the tool cover the analysis of alchemical calculations generally, including free energy estimates via both thermodynamic integration and free energy perturbation-based estimators. Our Python tool also handles output from multiple types of free energy calculations, including expanded ensemble and Hamiltonian replica exchange, as well as standard fixed ensemble calculations. We also survey a range of statistical and graphical ways of assessing the quality of the data and free energy estimates, and provide prototypes of these in our tool. We hope this tool and discussion will serve as a foundation for more standardization of and agreement on best practices for analysis of free energy calculations.
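    As an example of one estimator the review covers, thermodynamic integration computes ΔG as the integral of ⟨∂U/∂λ⟩ over λ from 0 to 1. The λ schedule and per-window averages below are invented for illustration:

```python
import numpy as np

# Hypothetical per-window averages of dU/dlambda (kcal/mol) from equilibrated
# sampling at each alchemical state; the values are invented for illustration
lambdas = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
dudl = np.array([12.4, 8.1, 5.0, 2.9, 1.6])

# Thermodynamic integration: Delta G = integral of <dU/dlambda> over [0, 1],
# estimated here with the trapezoid rule (the review also covers FEP-based
# estimators such as MBAR)
delta_g = float(np.sum(0.5 * (dudl[1:] + dudl[:-1]) * np.diff(lambdas)))
print(f"TI estimate: {delta_g:.2f} kcal/mol")   # prints: TI estimate: 5.75 kcal/mol
```

    Curvature of ⟨∂U/∂λ⟩ between windows is a major error source for trapezoid TI, which is one reason the tool offers alternative estimators and diagnostic plots.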

  11. Large eddy simulation in a rotary blood pump: Viscous shear stress computation and comparison with unsteady Reynolds-averaged Navier-Stokes simulation.

    PubMed

    Torner, Benjamin; Konnigk, Lucas; Hallier, Sebastian; Kumar, Jitendra; Witte, Matthias; Wurm, Frank-Hendrik

    2018-06-01

    Numerical flow analysis (computational fluid dynamics) in combination with the prediction of blood damage is an important procedure to investigate the hemocompatibility of a blood pump, since blood trauma due to shear stresses remains a problem in these devices. Today, the numerical damage prediction is conducted using unsteady Reynolds-averaged Navier-Stokes simulations. Investigations with large eddy simulations are rarely being performed for blood pumps. Hence, the aim of the study is to examine the viscous shear stresses of a large eddy simulation in a blood pump and compare the results with an unsteady Reynolds-averaged Navier-Stokes simulation. The simulations were carried out at two operation points of a blood pump. The flow was simulated on a 100M element mesh for the large eddy simulation and a 20M element mesh for the unsteady Reynolds-averaged Navier-Stokes simulation. As a first step, the large eddy simulation was verified by analyzing internal dissipative losses within the pump. Then, the pump characteristics and the mean and turbulent viscous shear stresses were compared between the two simulation methods. The verification showed that the large eddy simulation is able to reproduce a significant portion of the dissipative losses, which is a global indication that the equivalent viscous shear stresses are adequately resolved. The comparison with the unsteady Reynolds-averaged Navier-Stokes simulation revealed that the hydraulic parameters were in agreement, but differences for the shear stresses were found. The results show the potential of the large eddy simulation as a high-quality comparative case to check the suitability of a chosen Reynolds-averaged Navier-Stokes setup and turbulence model. Furthermore, the results suggest that large eddy simulations are superior to unsteady Reynolds-averaged Navier-Stokes simulations when instantaneous stresses are applied for the blood damage prediction.

  12. A mixed-reality part-task trainer for subclavian venous access.

    PubMed

    Robinson, Albert R; Gravenstein, Nikolaus; Cooper, Lou Ann; Lizdas, David; Luria, Isaac; Lampotang, Samsun

    2014-02-01

    Mixed-reality (MR) procedural simulators combine virtual and physical components with visualization software that can be used for debriefing, and they offer an alternative way to learn subclavian central venous access (SCVA). We present an SCVA MR simulator, a part-task trainer, which can assist in the training of medical personnel. Sixty-five participants were involved in the following: (1) simulation trial 1; (2) a teaching intervention followed by trial 2 (with the simulator's visualization software); and (3) trial 3, a final simulation assessment. The main test parameters were the time to complete SCVA and the SCVA score, a composite of efficiency and safety metrics generated by the simulator's scoring algorithm. Residents and faculty completed questionnaires before and after the simulation that assessed their confidence in obtaining access and learner satisfaction (e.g., realism of the simulator). The average SCVA score improved by 24.5 (n=65). Repeated-measures analysis of variance showed significant reductions in average time (F=31.94, P<0.0001), number of attempts (F=10.56, P<0.0001), and score (F=18.59, P<0.0001). After the teaching intervention and practice with the MR simulator, the results no longer showed a difference in performance between the faculty and residents. On a 5-point scale (5=strongly agree), participants agreed that the SCVA simulator was realistic (M=4.3) and strongly agreed that it should be used as an educational tool (M=4.9). An SCVA mixed-reality simulator thus offers a realistic representation of subclavian central venous access together with new debriefing capabilities.

  13. Pore-scale observation and 3D simulation of wettability effects on supercritical CO2 - brine immiscible displacement in drainage

    NASA Astrophysics Data System (ADS)

    Hu, R.; Wan, J.; Chen, Y.

    2016-12-01

    Wettability is a factor controlling the fluid-fluid displacement pattern in porous media and significantly affects the flow and transport of supercritical (sc) CO2 in geologic carbon sequestration. Using a high-pressure micromodel-microscopy system, we performed drainage experiments of scCO2 invasion into brine-saturated water-wet and intermediate-wet micromodels; we visualized the scCO2 invasion morphology at pore-scale under reservoir conditions. We also performed pore-scale numerical simulations of the Navier-Stokes equations to obtain 3D details of fluid-fluid displacement processes. Simulation results are qualitatively consistent with the experiments, showing wider scCO2 fingering, higher percentage of scCO2 and more compact displacement pattern in intermediate-wet micromodel. Through quantitative analysis based on pore-scale simulation, we found that the reduced wettability reduces the displacement front velocity, promotes the pore-filling events in the longitudinal direction, delays the breakthrough time of invading fluid, and then increases the displacement efficiency. Simulated results also show that the fluid-fluid interface area follows a unified power-law relation with scCO2 saturation, and show smaller interface area in intermediate-wet case which suppresses the mass transfer between the phases. These pore-scale results provide insights for the wettability effects on CO2 - brine immiscible displacement in geologic carbon sequestration.

  14. Desorption of sulphur mustard simulants methyl salicylate and 2-chloroethyl ethyl sulphide from contaminated scalp hair after vapour exposure.

    PubMed

    Spiandore, Marie; Souilah-Edib, Mélanie; Piram, Anne; Lacoste, Alexandre; Josse, Denis; Doumenq, Pierre

    2018-01-01

    Chemical warfare agents have been used to incapacitate, injure or kill people in the context of war or terrorist attacks. It has previously been shown that hair can trap the sulphur mustard simulants methyl salicylate and 2-chloroethyl ethyl sulphide. To investigate the simulants' persistence in hair after intense vapour exposure, their desorption kinetics were studied using two complementary methods: measurement of the residual content in hair and monitoring of the desorbed vapours. Results showed that both simulants were detected in air and could be recovered from hair 2 h after the end of exposure. Longer experiments with methyl salicylate showed that it could still be recovered from hair after 24 h. Our data were fitted with several kinetic models, and the best correlation was obtained with a bimodal first-order equation, suggesting a two-step desorption process: an initial fast regime followed by a slower desorption. 2-Chloroethyl ethyl sulphide was also detected in the immediate environment after hair exposure for 2 h, and the hair simulant content decreased by more than 80%. Our results showed that the ability of hair to release formerly trapped toxic chemicals could present a health hazard. The simulants' persistence, however, confirms the potential of hair analysis as a tool for chemical exposure assessment. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. Objective Fidelity Evaluation in Multisensory Virtual Environments: Auditory Cue Fidelity in Flight Simulation

    PubMed Central

    Meyer, Georg F.; Wong, Li Ting; Timson, Emma; Perfect, Philip; White, Mark D.

    2012-01-01

    We argue that objective fidelity evaluation of virtual environments, such as flight simulation, should be human-performance-centred and task-specific rather than measure the match between simulation and physical reality. We show how principled experimental paradigms and behavioural models for quantifying human performance in simulated environments, which have emerged from research in multisensory perception, provide a framework for the objective evaluation of the contribution of individual cues to human performance measures of fidelity. We present three examples in a flight simulation environment as a case study. Experiment 1: detection and categorisation of auditory and kinematic motion cues; Experiment 2: performance evaluation in a target-tracking task; Experiment 3: transferrable learning of auditory motion cues. We show how the contribution of individual cues to human performance can be robustly evaluated for each task and that the contribution is highly task dependent. The same auditory cues that can be discriminated and are optimally integrated in experiment 1 do not contribute to target-tracking performance in an in-flight refuelling simulation without training (experiment 2). In experiment 3, however, we demonstrate that the auditory cue leads to significant, transferrable performance improvements with training. We conclude that objective fidelity evaluation requires a task-specific analysis of the contribution of individual cues. PMID:22957068

  16. Cardiorespiratory endurance evaluation using heart rate analysis during ski simulator exercise and the Harvard step test in elementary school students.

    PubMed

    Lee, Hyo Taek; Roh, Hyo Lyun; Kim, Yoon Sang

    2016-01-01

    [Purpose] Educational institutions should provide children in their growth phase with efficiently managed exercise programs offering a range of benefits. We analyzed the heart rates of children during ski simulator exercise and the Harvard step test, and evaluated cardiopulmonary endurance by calculating the post-exercise recovery rate. [Subjects and Methods] The subjects (n = 77) were categorized into a normal weight and an overweight/obesity group by body mass index. They performed each exercise for 3 minutes. Cardiorespiratory endurance was calculated using the Physical Efficiency Index formula. [Results] Both the ski simulator exercise and the Harvard step test showed a significant difference between the heart rates of the 2 body mass index-based groups at each minute. The normal weight group and the ski-simulator group had higher Physical Efficiency Index levels. [Conclusion] This study showed that a simulator exercise can produce a cumulative load even when performed at low intensity, and that it can be utilized effectively as exercise equipment, since it resulted in higher Physical Efficiency Index levels than the Harvard step test. If schools can sustain engagement in exercise by stimulating students' interest, the ski simulator exercise can be used in programs designed to improve and strengthen students' physical fitness.
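    The abstract cites the Physical Efficiency Index formula without stating the exact variant used. A minimal sketch, assuming the classic long-form Harvard step-test definition (100 × test duration in seconds, divided by twice the sum of three 30-second recovery pulse counts); the parameter values are hypothetical:

```python
def physical_efficiency_index(duration_s, pulse_counts):
    """Long-form Harvard step-test index (assumed variant):
    PEI = 100 * duration (s) / (2 * sum of the three
    30-second recovery pulse counts)."""
    return duration_s * 100.0 / (2.0 * sum(pulse_counts))

# 3-minute exercise; hypothetical 30-s pulse counts taken at
# 1-1.5, 2-2.5 and 3-3.5 minutes after exercise
pei = physical_efficiency_index(180, (60, 55, 50))
print(round(pei, 2))  # 18000 / 330 = 54.55
```

    A faster heart-rate recovery (smaller pulse counts) raises the index, which is why the index serves as a proxy for cardiorespiratory endurance.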

  17. Stability and failure analysis of steering tie-rod

    NASA Astrophysics Data System (ADS)

    Jiang, GongFeng; Zhang, YiLiang; Xu, XueDong; Ding, DaWei

    2008-11-01

    A new car with only 8,000 km in service lost control because of a malfunction and rammed into the edge of the road, essentially scrapping the vehicle. Investigation of the accident site found that the steering tie-rod had broken. To analyse the accident and identify the true causes of the tie-rod rupture, a series of theoretical and experimental studies of the bent, broken tie-rod was conducted. A mechanical model was established, and the stability of the defective tie-rod was simulated with the ANSYS software. Meanwhile, the course of the accident was simulated, considering the destabilizing effects of different vehicle speeds and impact directions. Macrographic examination, chemical composition analysis, microstructure analysis and SEM analysis of the fracture were also carried out. The results showed that: 1) the toughness of the tie-rod was at a normal level, but pre-existing flaws were present: one quarter of the fracture surface had cracked before the accident. These flaws, however, did not trigger the incident; the direct cause was dynamic instability leading to large deformation under impact loading. 2) The pre-existing flaws greatly reduced the safety factor of the tie-rod. Comparison of the critical loads of the accident tie-rod and a normal one by numerical simulation showed that the initial defect was the decisive factor in the structural instability: the critical load decreased by 51.3% when the initial defect grew by 19.54% of the cross-sectional area, in agreement with Koiter's theory.

  18. Plume-Free Stream Interaction Heating Effects During Orion Crew Module Reentry

    NASA Technical Reports Server (NTRS)

    Marichalar, J.; Lumpkin, F.; Boyles, K.

    2012-01-01

    During reentry of the Orion Crew Module (CM), vehicle attitude control will be performed by firing reaction control system (RCS) thrusters. Simulation of RCS plumes and their interaction with the oncoming flow has been difficult for the analysis community due to the large scarf angles of the RCS thrusters and the unsteady nature of the Orion capsule backshell environments. The model for the aerothermal database has thus relied on wind tunnel test data to capture the heating effects of thruster plume interactions with the freestream. These data are only valid for the continuum flow regime of the reentry trajectory. A Direct Simulation Monte Carlo (DSMC) analysis was performed to study the vehicle heating effects that result from the RCS thruster plume interaction with the oncoming freestream flow at high altitudes during Orion CM reentry. The study was performed with the DSMC Analysis Code (DAC). The inflow boundary conditions for the jets were obtained from Data Parallel Line Relaxation (DPLR) computational fluid dynamics (CFD) solutions. Simulations were performed for the roll, yaw, pitch-up and pitch-down jets at altitudes of 105 km, 125 km and 160 km as well as vacuum conditions. For comparison purposes (see Figure 1), the freestream conditions were based on previous DAC simulations performed without active RCS to populate the aerodynamic database for the Orion CM. Other inputs to the analysis included a constant orbital reentry velocity of 7.5 km/s and an angle of attack of 160 degrees. The results of the study showed that the interaction effects decrease quickly with increasing altitude. Also, jets with highly scarfed nozzles cause more severe heating compared to the nozzles with lower scarf angles. The difficulty of performing these simulations was driven by the maximum number density and the ratio of number densities between the freestream and the plume in each simulation.
The lowest altitude solutions required a substantial amount of computational resources (up to 1800 processors) to simulate approximately 2 billion molecules for the refined (adapted) solutions.

  19. Quantitative description of charge-carrier transport in a white organic light-emitting diode

    NASA Astrophysics Data System (ADS)

    Schober, M.; Anderson, M.; Thomschke, M.; Widmer, J.; Furno, M.; Scholz, R.; Lüssem, B.; Leo, K.

    2011-10-01

    We present a simulation model for the analysis of charge-carrier transport in organic thin-film devices, and apply it to a three-color white hybrid organic light-emitting diode (OLED) with fluorescent blue and phosphorescent red and green emission. We simulate a series of single-carrier devices, which reconstruct the OLED layer sequence step by step. Thereby, we determine the energy profiles for hole and electron transport, show how to discern bulk from interface limitation, and identify trap states.

  20. Simulation and optimization of volume holographic imaging systems in Zemax.

    PubMed

    Wissmann, Patrick; Oh, Se Baek; Barbastathis, George

    2008-05-12

    We present a new methodology for ray-tracing analysis of volume holographic imaging (VHI) systems. Using the k-sphere formulation, we apply geometrical relationships to describe the volumetric diffraction effects imposed on rays passing through a volume hologram. We explain the k-sphere formulation in conjunction with ray tracing process and describe its implementation in a Zemax UDS (User Defined Surface). We conclude with examples of simulation and optimization results and show proof of consistency and usefulness of the proposed model.

  1. Thermal microactuator dimension analysis

    NASA Astrophysics Data System (ADS)

    Azman, N. D.; Ong, N. R.; Aziz, M. H. A.; Alcain, J. B.; Haimi, W. M. W. N.; Sauli, Z.

    2017-09-01

    The focus of this study was to analyse the stress and thermal flow of a thermal microactuator with different materials and parameters using COMSOL Multiphysics software. Simulations were conducted on an existing thermal actuator design with the aim of making it more efficient and lowering its cost and power consumption. In this simulation, a U-shaped actuator was designed and five different microactuator materials were studied. The results showed that polycrystalline silicon was the most suitable material for producing a thermal actuator for commercialization.

  2. Analysis of drift correction in different simulated weighing schemes

    NASA Astrophysics Data System (ADS)

    Beatrici, A.; Rebelo, A.; Quintão, D.; Cacais, F. L.; Loayza, V. M.

    2015-10-01

    In the calibration of high-accuracy mass standards, weighing schemes are used to reduce or eliminate zero-drift effects in mass comparators. There are different sources of drift and different methods for its treatment. Using numerical methods, drift functions were simulated and a random term was included in each function. A comparison between the results obtained from ABABAB and ABBA weighing series was carried out. The results show the better efficacy of the ABABAB method for drift with smooth variation and small randomness.
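    The drift-cancelling property of such weighing schemes can be illustrated numerically. The sketch below is illustrative only (the scheme estimators, mass values and drift rate are assumptions, not the paper's setup): with a purely linear zero drift, both the ABBA estimate and a neighbour-interpolated ABABAB estimate recover the true mass difference exactly.

```python
def reading(value, t, drift_rate):
    """One comparator reading: true value plus linear zero drift."""
    return value + drift_rate * t

A_TRUE, B_TRUE = 100.000050, 100.000020   # hypothetical masses (g)
DRIFT = 2e-7                              # drift per reading interval

# ABBA: readings A, B, B, A at equally spaced times t = 0..3
rA1, rB1, rB2, rA2 = (reading(v, t, DRIFT)
                      for t, v in enumerate([A_TRUE, B_TRUE, B_TRUE, A_TRUE]))
abba = (rA1 - rB1 - rB2 + rA2) / 2        # linear drift cancels exactly

# ABABAB: each interior reading compared with the mean of its neighbours
r = [reading(v, t, DRIFT) for t, v in enumerate([A_TRUE, B_TRUE] * 3)]
diffs = []
for t in (1, 2, 3, 4):
    neighbour_mean = (r[t - 1] + r[t + 1]) / 2
    # sign chosen so every difference estimates A - B
    diffs.append(neighbour_mean - r[t] if t % 2 else r[t] - neighbour_mean)
abab = sum(diffs) / len(diffs)

print(abba, abab)  # both approximately A_TRUE - B_TRUE = 3e-05
```

    With a random term added to the drift, the two schemes differ in how the residual noise averages out, which is what the simulated comparison in the paper quantifies.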

  3. Power Analysis of Artificial Selection Experiments Using Efficient Whole Genome Simulation of Quantitative Traits

    PubMed Central

    Kessner, Darren; Novembre, John

    2015-01-01

    Evolve and resequence studies combine artificial selection experiments with massively parallel sequencing technology to study the genetic basis for complex traits. In these experiments, individuals are selected for extreme values of a trait, causing alleles at quantitative trait loci (QTL) to increase or decrease in frequency in the experimental population. We present a new analysis of the power of artificial selection experiments to detect and localize quantitative trait loci. This analysis uses a simulation framework that explicitly models whole genomes of individuals, quantitative traits, and selection based on individual trait values. We find that explicitly modeling QTL provides qualitatively different insights than considering independent loci with constant selection coefficients. Specifically, we observe how interference between QTL under selection affects the trajectories and lengthens the fixation times of selected alleles. We also show that a substantial portion of the genetic variance of the trait (50–100%) can be explained by detected QTL in as little as 20 generations of selection, depending on the trait architecture and experimental design. Furthermore, we show that power depends crucially on the opportunity for recombination during the experiment. Finally, we show that an increase in power is obtained by leveraging founder haplotype information to obtain allele frequency estimates. PMID:25672748

  4. Water quality modeling for urban reach of Yamuna river, India (1999-2009), using QUAL2Kw

    NASA Astrophysics Data System (ADS)

    Sharma, Deepshikha; Kansal, Arun; Pelletier, Greg

    2017-06-01

    The study aimed to characterize and understand the water quality of the river Yamuna in Delhi (India) as a prerequisite for an efficient restoration plan. A combination of monitoring data collection, mathematical modeling, and sensitivity and uncertainty analysis was carried out using QUAL2Kw, a river water quality model. The model was applied to simulate DO, BOD, total coliform, and total nitrogen at four monitoring stations, namely Palla, Old Delhi Railway Bridge, Nizamuddin, and Okhla, for 10 years (October 1999-June 2009) excluding the monsoon seasons (July-September). The study period was divided into two parts: monthly average data from October 1999-June 2004 (45 months) were used to calibrate the model and monthly average data from October 2005-June 2009 (45 months) were used to validate it. The R2 values for CBODf and TN lie within the ranges 0.53-0.75 and 0.68-0.83, respectively, showing that the model gives satisfactory results in terms of R2 for CBODf, TN, and TC. Sensitivity analysis showed that the DO, CBODf, TN, and TC predictions are highly sensitive to the headwater flow and to point-source flow and quality. Uncertainty analysis using Monte Carlo simulation showed that the input data were simulated in accordance with the prevailing river conditions.
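    The R2 statistic quoted for calibration can be defined in more than one way; the abstract does not say which convention was used, so the sketch below assumes the common choice of the squared Pearson correlation between observed and simulated series, with hypothetical data:

```python
def r_squared(observed, simulated):
    """R^2 as the squared Pearson correlation between observed and
    simulated series (one common convention; the paper's exact
    definition is not stated)."""
    n = len(observed)
    mo = sum(observed) / n
    ms = sum(simulated) / n
    cov = sum((o - mo) * (s - ms) for o, s in zip(observed, simulated))
    var_o = sum((o - mo) ** 2 for o in observed)
    var_s = sum((s - ms) ** 2 for s in simulated)
    return cov * cov / (var_o * var_s)

# hypothetical monthly CBODf values (mg/L), observed vs simulated
obs = [12.0, 15.5, 14.2, 18.9, 16.1, 13.4]
sim = [11.2, 16.0, 13.5, 17.8, 17.0, 12.9]
print(round(r_squared(obs, sim), 2))
```

    Note that a squared-correlation R2 measures covariation only; modelling studies often complement it with a bias-sensitive statistic such as the Nash-Sutcliffe efficiency.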

  5. On the dust load and rainfall relationship in South Asia: an analysis from CMIP5

    NASA Astrophysics Data System (ADS)

    Singh, Charu; Ganguly, Dilip; Dash, S. K.

    2018-01-01

    This study examines the consistency of the relationship between dust load and rainfall simulated by different climate models and its implication for the Indian summer monsoon system. Monthly mean outputs of 12 climate models, obtained from the archive of the Coupled Model Intercomparison Project phase 5 (CMIP5) for the period 1951-2004, are analyzed to investigate the relationship between dust and rainfall. Comparative analysis of the model-simulated precipitation with the India Meteorological Department (IMD) gridded rainfall, CRU TS3.21 and GPCP version 2.2 data sets shows significant differences between the spatial patterns of JJAS rainfall as well as the annual cycle of rainfall simulated by the various models and the observations. Similarly, significant inter-model differences are noted in the simulated dust load, although most of the CMIP5 models capture the major dust sources across the study region. Although the scatter plot analysis and the lead-lag pattern correlation between dust load and rainfall show a strong relationship between the dust load over distant sources and the rainfall in the South Asian region in individual models, the temporal scale of this association differs widely amongst the models. Our results caution that it would be premature to draw any robust conclusions on the time scale of the dust-rainfall relationship in the South Asian region based on either CMIP5 results or the limited number of previous studies. We therefore emphasize that any conclusions drawn on the relationship between dust load and South Asian rainfall from model simulations depend strongly on the degree of complexity incorporated in the models, such as the representation of the aerosol life cycle and its interaction with clouds, precipitation and other components of the climate system.
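    The lead-lag correlation analysis mentioned above can be sketched, in its simplest time-series form, as a Pearson correlation computed with the dust series leading the rainfall series by a range of lags (the study itself correlates spatial patterns; the series below are hypothetical monthly anomalies):

```python
def pearson(x, y):
    """Pearson correlation of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def lead_lag_correlation(dust, rain, max_lag):
    """Correlate dust leading rain by 0..max_lag time steps."""
    return {lag: pearson(dust[:len(dust) - lag], rain[lag:])
            for lag in range(max_lag + 1)}

# hypothetical monthly dust-load and rainfall anomalies
dust = [0.1, 0.4, 0.9, 1.2, 0.8, 0.3, -0.2, -0.5, -0.1, 0.2, 0.6, 1.0]
rain = [-0.2, 0.0, 0.3, 0.8, 1.1, 0.9, 0.4, -0.1, -0.4, -0.2, 0.1, 0.5]
corrs = lead_lag_correlation(dust, rain, max_lag=3)
best = max(corrs, key=corrs.get)
print(best, round(corrs[best], 2))
```

    The lag with the largest correlation is one way of estimating the time scale of the association; the study's point is precisely that this time scale is not consistent across models.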

  6. Multifractal evaluation of simulated precipitation intensities from the COSMO NWP model

    NASA Astrophysics Data System (ADS)

    Wolfensberger, Daniel; Gires, Auguste; Tchiguirinskaia, Ioulia; Schertzer, Daniel; Berne, Alexis

    2017-12-01

    The framework of universal multifractals (UM) characterizes the spatio-temporal variability in geophysical data over a wide range of scales with only a limited number of scale-invariant parameters. This work aims to clarify the link between multifractals (MFs) and more conventional weather descriptors and to show how they can be used to perform a multi-scale evaluation of model data. The first part of this work focuses on a MF analysis of the climatology of precipitation intensities simulated by the COSMO numerical weather prediction model. Analysis of the spatial structure of the MF parameters, and their correlations with external meteorological and topographical descriptors, reveals that simulated precipitation tends to be smoother at higher altitudes, and that the mean intermittency is mostly influenced by the latitude. A hierarchical clustering was performed on the external descriptors, yielding three different clusters, which correspond roughly to Alpine/continental, Mediterranean and temperate regions. Distributions of MF parameters within these three clusters are shown to be statistically significantly different, indicating that the MF signature of rain is indeed geographically dependent. The second part of this work is event-based and focuses on the smaller scales. The MF parameters of precipitation intensities at the ground are compared with those obtained from the Swiss radar composite during three events corresponding to typical synoptic conditions over Switzerland. The results of this analysis show that the COSMO simulations exhibit spatial scaling breaks that are not present in the radar data, indicating that the model is not able to simulate the observed variability at all scales. 
A comparison of the operational one-moment microphysical parameterization scheme of COSMO with a more advanced two-moment scheme reveals that, while no scheme systematically outperforms the other, the two-moment scheme tends to produce larger extreme values and more discontinuous precipitation fields, which agree better with the radar composite.

  7. Monte Carlo Simulations for VLBI2010

    NASA Astrophysics Data System (ADS)

    Wresnik, J.; Böhm, J.; Schuh, H.

    2007-07-01

    Monte Carlo simulations are carried out at the Institute of Geodesy and Geophysics (IGG), Vienna, and at Goddard Space Flight Center (GSFC), Greenbelt (USA), with the goal of designing a new geodetic Very Long Baseline Interferometry (VLBI) system. The influences of the schedule, the network geometry and the main stochastic processes on the geodetic results are investigated. Schedules are prepared with the software package SKED (Vandenberg 1999), and different strategies are applied to produce temporally very dense schedules, which are compared in terms of baseline length repeatabilities. For the simulation of VLBI observations, a Monte Carlo simulator was set up which creates artificial observations by randomly simulating wet zenith delay and clock values as well as additive white noise representing the antenna errors. For the simulations at IGG, the VLBI analysis software OCCAM (Titov et al. 2004) was adapted. Random walk processes with power spectral densities of 0.7 and 0.1 psec^2/sec are used for the simulation of wet zenith delays. The clocks are simulated with Allan standard deviations of 1*10^-14 @ 50 min and 2*10^-15 @ 15 min, and three levels of white noise, 4 psec, 8 psec and 16 psec, are added to the artificial observations. The variations of the power spectral densities of the clocks and wet zenith delays, and the application of different white noise levels, show clearly that the wet delay is the critical factor for the improvement of the geodetic VLBI system. At GSFC the software CalcSolve is used for the VLBI analysis; therefore, a comparison between the software packages OCCAM and CalcSolve was done with simulated data. For further simulations the wet zenith delay was modeled by a turbulence model; these data were provided by T. Nilsson and added to the simulation work. Different schedules have been run.
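    A random walk with a given power spectral density, as used above for the wet zenith delays, can be generated directly: over a step of length dt, the increment variance is PSD * dt. A minimal sketch (the step size, sequence length and seed are illustrative choices, not the paper's):

```python
import math
import random

def random_walk(psd, dt, n, x0=0.0):
    """Random walk whose increments have variance psd*dt per step
    (psd in psec^2/sec, dt in seconds, values in psec)."""
    sigma = math.sqrt(psd * dt)
    walk = [x0]
    for _ in range(n - 1):
        walk.append(walk[-1] + random.gauss(0.0, sigma))
    return walk

random.seed(42)
wet = random_walk(psd=0.7, dt=60.0, n=1440)       # one day at 1-min steps
obs = [w + random.gauss(0.0, 8.0) for w in wet]   # add 8 psec white noise
print(len(obs), round(max(wet) - min(wet), 1))
```

    Clock behaviour specified by an Allan standard deviation would need an additional noise model on top of this; the sketch only covers the random-walk and white-noise components.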

  8. Non-stationary Return Levels of CMIP5 Multi-model Temperature Extremes

    DOE PAGES

    Cheng, L.; Phillips, T. J.; AghaKouchak, A.

    2015-05-01

    The objective of this study is to evaluate to what extent the CMIP5 climate model simulations of the twentieth-century climate can represent observed warm monthly temperature extremes under a changing environment. The biases and spatial patterns of 2-, 10-, 25-, 50- and 100-year return levels of the annual maxima of monthly mean temperature (hereafter, annual temperature maxima) from CMIP5 simulations are compared with those of Climatic Research Unit (CRU) observational data considered under a non-stationary assumption. The results show that CMIP5 climate models collectively underestimate the mean annual maxima over arid and semi-arid regions that are most subject to severe heat waves and droughts. Furthermore, the results indicate that most climate models tend to underestimate the historical annual temperature maxima over the United States and Greenland, while generally disagreeing in their simulations over cold regions. Return level analysis shows that with respect to the spatial patterns of the annual temperature maxima, there are good agreements between the CRU observations and most CMIP5 simulations. However, the magnitudes of the simulated annual temperature maxima differ substantially across individual models. Discrepancies are generally larger over higher latitudes and cold regions.
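    For reference, the T-year return level of an annual-maximum series has a closed form in the stationary Gumbel case (the zero-shape limit of the GEV family); the study itself fits non-stationary models, so the formula and parameter values below are only an illustrative special case:

```python
import math

def gumbel_return_level(mu, beta, T):
    """T-year return level z_T solving P(annual max > z_T) = 1/T
    for a Gumbel(mu, beta) distribution:
    z_T = mu - beta * ln(-ln(1 - 1/T))."""
    return mu - beta * math.log(-math.log(1.0 - 1.0 / T))

# hypothetical location/scale for annual monthly-mean temperature maxima
for T in (2, 10, 25, 50, 100):
    print(T, round(gumbel_return_level(28.0, 1.5, T), 2))
```

    Under non-stationarity, mu (and possibly beta) become functions of time or a covariate, so a "100-year return level" must be read as conditional on the reference period.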

  9. A Progressive Damage Model for unidirectional Fibre Reinforced Composites with Application to Impact and Penetration Simulation

    NASA Astrophysics Data System (ADS)

    Kerschbaum, M.; Hopmann, C.

    2016-06-01

    The computationally efficient simulation of the progressive damage behaviour of continuous fibre reinforced plastics is still a challenging task with currently available computer aided engineering methods. This paper presents an original approach for an energy based continuum damage model which accounts for stress-/strain nonlinearities, transverse and shear stress interaction phenomena, quasi-plastic shear strain components, strain rate effects, regularised damage evolution and consideration of load reversal effects. The physically based modelling approach enables experimental determination of all parameters on ply level to avoid expensive inverse analysis procedures. The modelling strategy, implementation and verification of this model using commercially available explicit finite element software are detailed. The model is then applied to simulate the impact and penetration of carbon fibre reinforced cross-ply specimens with variation of the impact speed. The simulation results show that the presented approach enables a good representation of the force-/displacement curves and especially good agreement with the experimentally observed fracture patterns. In addition, the mesh dependency of the results was assessed for one impact case, showing only very little change in the simulation results, which emphasises the general applicability of the presented method.

  10. Flow visualization of a rocket injector spray using gelled propellant simulants

    NASA Technical Reports Server (NTRS)

    Green, James M.; Rapp, Douglas C.; Roncace, James

    1991-01-01

    A study was conducted at NASA-Lewis to compare the atomization characteristics of gelled and nongelled propellant simulants. A gelled propellant simulant composed of water, sodium hydroxide, and an acrylic acid polymer resin (as the gelling agent) was used to simulate the viscosity of an aluminum/RP-1 metallized fuel gel. Water was used as a comparison fluid to isolate the rheological effects of the water-gel and to simulate nongelled RP-1. The water-gel was injected through the central orifice of a triplet injector element and the central post of a coaxial injector element. Nitrogen gas flowed through the outer orifices of the triplet injector element and through the annulus of the coaxial injector element and atomized the gelled and nongelled liquids. Photographs of the water-gel spray patterns at different operating conditions were compared with images obtained using water and nitrogen. A laser light was used for illumination of the sprays. The results of the testing showed that the water sprays produced a finer and more uniform atomization than the water-gel sprays. Rheological analysis showed that the poor atomization was caused by the high viscosity of the water-gel, which delayed the transition to turbulence.

  11. Simulation of root forms using cellular automata model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Winarno, Nanang, E-mail: nanang-winarno@upi.edu; Prima, Eka Cahya; Afifah, Ratih Mega Ayu

    This research aims to produce a simulation program for root forms using a cellular automata model. Stephen Wolfram, in his book entitled “A New Kind of Science”, discusses formation rules based on statistical analysis. In accordance with Stephen Wolfram's investigation, the research develops a basic computer program using the Delphi 7 programming language. To the best of our knowledge, no previous research has developed a simulation describing root forms using the cellular automata model and compared it with natural root forms in the presence of stones as a disturbance. The results show that (1) the simulation used four rules, comparing the output of the program with natural photographs, and each rule produced a different root form; and (2) stone disturbances that prevent root growth, together with the multiplication of root forms, were successfully modeled. To this end, the research added stones with a size of 120 cells, placed randomly in the soil. As in nature, the stones cannot be penetrated by plant roots. The results suggest that it is feasible to further develop the program to simulate root forms with 50 variations.

  12. Modelling of proton acceleration in application to a ground level enhancement

    NASA Astrophysics Data System (ADS)

    Afanasiev, A.; Vainio, R.; Rouillard, A. P.; Battarbee, M.; Aran, A.; Zucca, P.

    2018-06-01

    Context. The source of high-energy protons (above 500 MeV) responsible for ground level enhancements (GLEs) remains an open question in solar physics. One of the candidates is a shock wave driven by a coronal mass ejection, which is thought to accelerate particles via diffusive-shock acceleration. Aims: We perform physics-based simulations of proton acceleration using information on the shock and ambient plasma parameters derived from the observation of a real GLE event. We analyse the simulation results to find out which of the parameters are significant in controlling the acceleration efficiency and to get a better understanding of the conditions under which the shock can produce relativistic protons. Methods: We use the results of the recently developed technique to determine the shock and ambient plasma parameters, applied to the 17 May 2012 GLE event, and carry out proton acceleration simulations with the Coronal Shock Acceleration (CSA) model. Results: We performed proton acceleration simulations for nine individual magnetic field lines characterised by various plasma conditions. Analysis of the simulation results shows that the acceleration efficiency of the shock, i.e. its ability to accelerate particles to high energies, tends to be higher for those shock portions that are characterised by higher values of the scattering-centre compression ratio rc and/or the fast-mode Mach number MFM. At the same time, the acceleration efficiency can be strengthened by enhanced plasma density in the corresponding flux tube. The simulations show that protons can be accelerated to GLE energies in the shock portions characterised by the highest values of rc. 
Analysis of the delays between the flare onset and the production times of protons of 1 GV rigidity for different field lines in our simulations, and comparison of those delays with the observed values, indicates that quasi-perpendicular portions of the shock may play the main role in producing relativistic protons.

  13. Fault Analysis in Solar Photovoltaic Arrays

    NASA Astrophysics Data System (ADS)

    Zhao, Ye

    Fault analysis in solar photovoltaic (PV) arrays is a fundamental task to increase reliability, efficiency and safety in PV systems. Conventional fault protection methods usually add fuses or circuit breakers in series with PV components. But these protection devices are only able to clear faults and isolate faulty circuits if they carry a large fault current. However, this research shows that faults in PV arrays may not be cleared by fuses under some fault scenarios, due to the current-limiting nature and non-linear output characteristics of PV arrays. First, this thesis introduces new simulation and analytic models that are suitable for fault analysis in PV arrays. Based on the simulation environment, this thesis studies a variety of typical faults in PV arrays, such as ground faults, line-line faults, and mismatch faults. The effect of a maximum power point tracker on the fault current is discussed and shown, at times, to prevent the fault-current protection devices from tripping. A small-scale experimental PV benchmark system has been developed at Northeastern University to further validate the simulation conclusions. Additionally, this thesis examines two types of unique faults found in a PV array that have not been studied in the literature. One is a fault that occurs under low irradiance conditions. The other is a fault evolution in a PV array during the night-to-day transition. Our simulation and experimental results show that overcurrent protection devices are unable to clear the fault under "low irradiance" and "night-to-day transition". However, the overcurrent protection devices may work properly when the same PV fault occurs in daylight. As a result, a fault under "low irradiance" and "night-to-day transition" might be hidden in the PV array and become a potential hazard for system efficiency and reliability.

  14. Parallel Algorithms for Monte Carlo Particle Transport Simulation on Exascale Computing Architectures

    NASA Astrophysics Data System (ADS)

    Romano, Paul Kollath

Monte Carlo particle transport methods are being considered as a viable option for high-fidelity simulation of nuclear reactors. While Monte Carlo methods offer several potential advantages over deterministic methods, a number of algorithmic shortcomings would prevent their immediate adoption for full-core analyses. In this thesis, algorithms are proposed both to ameliorate the degradation in parallel efficiency typically observed for large numbers of processors and to offer a means of decomposing the large tally data that will be needed for reactor analysis. A nearest-neighbor fission bank algorithm was proposed and subsequently implemented in the OpenMC Monte Carlo code. A theoretical analysis of the communication pattern shows that its expected cost is O(√N), whereas traditional fission bank algorithms are O(N) at best. The algorithm was tested on two supercomputers, the Intrepid Blue Gene/P and the Titan Cray XK7, and demonstrated nearly linear parallel scaling up to 163,840 processor cores on a full-core benchmark problem. An algorithm for reducing the network communication arising from tally reduction was analyzed and implemented in OpenMC. The proposed algorithm groups only the particle histories on a single processor into batches for tally purposes; in doing so, it avoids all network communication for tallies until the very end of the simulation. The algorithm was tested, again on a full-core benchmark, and shown to reduce network communication substantially. A model was developed to predict the impact of load imbalances on the performance of domain-decomposed simulations. The analysis demonstrated that load imbalances in domain-decomposed simulations arise from two distinct phenomena: non-uniform particle densities and non-uniform spatial leakage. The dominant performance penalty for domain decomposition was shown to come from these physical effects rather than from insufficient network bandwidth or high latency.
The model predictions were verified with measured data from simulations in OpenMC on a full-core benchmark problem. Finally, a novel algorithm for decomposing large tally data was proposed, analyzed, and implemented and tested in OpenMC. The algorithm relies on disjoint sets of compute processes and tally servers. The analysis showed that for a range of parameters relevant to LWR analysis, the tally server algorithm should perform with minimal overhead. Tests were performed on Intrepid and Titan and demonstrated that the algorithm did indeed perform well over a wide range of parameters. (Copies available exclusively from MIT Libraries, libraries.mit.edu/docs)
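
    The sub-linear communication cost of a nearest-neighbor fission bank exchange can be illustrated with a toy cost model: under a simple random-walk assumption, only the surplus or deficit of fission sites at each processor boundary moves, which scales like the square root of the bank size, versus a full gather that moves every site. The constants and the √N assumption are simplifications, not the thesis's full analysis:

    ```python
    import math

    def traditional_cost(n_particles, n_procs):
        """Master gathers and rebroadcasts the whole fission bank: O(N) sites moved."""
        return 2 * n_particles

    def nearest_neighbor_cost(n_particles, n_procs):
        """Per-processor bank sizes fluctuate like sqrt(N/p), so only
        ~p * sqrt(N/p) = sqrt(N*p) sites cross processor boundaries."""
        return n_procs * math.sqrt(n_particles / n_procs)

    # For a billion-ish particle run on 10,000 processors, the gap is huge.
    n, p = 10**8, 10**4
    print(traditional_cost(n, p), nearest_neighbor_cost(n, p))
    ```

    The point of the sketch is the scaling gap, not the absolute numbers: the nearest-neighbor exchange moves orders of magnitude fewer sites as N grows.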

  15. Simulation and Failure Analysis of Car Bumper Made of Pineapple Leaf Fiber Reinforced Composite

    NASA Astrophysics Data System (ADS)

    Arbintarso, E. S.; Muslim, M.; Rusianto, T.

    2018-02-01

A car bumper made of Pineapple Leaf Fiber Reinforced Composite (PLFRC) can be produced with the advantages of easy material availability and low cost. Pineapple leaf fiber was chosen as the natural fiber, with a maximum strength of 368 MPa. The objective of this study was to determine the maximum capability of a front car bumper made of PLFRC through stress-analysis simulation with SolidWorks 2014 software, to map the distribution of loads on the front bumper, and to predict the position of the critical point in the bumper design. The results will be used to develop lightweight, inexpensive, and environmentally friendly alternative materials in general, and the use of pineapple fiber for automotive purposes in particular. Simulation and failure analysis showed that the displacement, strain, and stress on the surface of the bumper increase with impact speed. The bumper can withstand collisions at speeds below 70 kph.
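
    The strong speed dependence found above follows from the impact kinetic energy a bumper must absorb growing with the square of speed; a quick arithmetic sketch (the vehicle mass and rigid-barrier assumption are illustrative, not values from the study):

    ```python
    def impact_energy_joules(mass_kg, speed_kph):
        """Kinetic energy to be absorbed in a rigid-barrier impact."""
        v = speed_kph / 3.6  # km/h -> m/s
        return 0.5 * mass_kg * v**2

    e_70 = impact_energy_joules(1200.0, 70.0)  # hypothetical 1200 kg car
    e_35 = impact_energy_joules(1200.0, 35.0)
    print(e_70 / e_35)  # doubling the speed quadruples the energy
    ```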

  16. Variations in High-frequency Oscillations of Tropical Cyclones over the Western North Pacific

    NASA Astrophysics Data System (ADS)

    Chen, Shumin; Li, Weibiao; Wen, Zhiping; Zhou, Mingsen; Lu, Youyu; Qian, Yu-Kun; Liu, Haoya; Fang, Rong

    2018-04-01

    Variations in the high-frequency oscillations of tropical cyclones (TCs) over the western North Pacific (WNP) are studied in numerical model simulations. Power spectrum analysis of maximum wind speeds at 10 m (MWS10) from an ensemble of 15 simulated TCs shows that oscillations are significant for all TCs. The magnitudes of oscillations in MWS10 are similar in the WNP and South China Sea (SCS); however, the mean of the averaged significant periods in the SCS (1.93 h) is shorter than that in the open water of the WNP (2.83 h). The shorter period in the SCS is examined through an ensemble of simulations, and a case simulation as well as a sensitivity experiment in which the continent is replaced by ocean for Typhoon Hagupit (2008). The analysis of the convergence efficiency within the boundary layer suggests that the shorter periods in the SCS are possibly due to the stronger terrain effect, which intensifies convergence through greater friction. The enhanced convergence strengthens the disturbance of the gradient and thermal wind balances, and then contributes to the shorter oscillation periods in the SCS.
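
    The period-detection step described above can be sketched with a plain FFT power spectrum on a synthetic wind-speed series; the sampling interval, amplitude, and oscillation period below are illustrative, not the study's data:

    ```python
    import numpy as np

    dt_hours = 0.1                      # 6-minute sampling
    t = np.arange(0, 96, dt_hours)      # 4 days of MWS10-like data
    period_true = 2.0                   # hours (an SCS-like oscillation)
    series = 40 + 3 * np.sin(2 * np.pi * t / period_true)

    # Power spectrum of the demeaned series; the peak gives the period.
    spec = np.abs(np.fft.rfft(series - series.mean())) ** 2
    freqs = np.fft.rfftfreq(len(series), d=dt_hours)  # cycles per hour
    dominant = 1.0 / freqs[np.argmax(spec)]           # hours per cycle
    print(round(dominant, 2))  # -> 2.0
    ```

    Real MWS10 series are noisy, so the study additionally tests the peaks for statistical significance rather than just taking the argmax.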

  17. Finite-Difference Time-Domain Analysis of Tapered Photonic Crystal Fiber

    NASA Astrophysics Data System (ADS)

    Ali, M. I. Md; Sanusidin, S. N.; Yusof, M. H. M.

    2018-03-01

This paper describes the simulation of a tapered LMA-8 single-mode photonic crystal fiber (PCF) using Optiwave simulation software: correlation of the scattering pattern at a wavelength of 1.55 μm, analysis of the transmission spectrum over the wavelength range 1.0 to 2.5 μm, and correlation of the transmission spectrum with the refractive index change in the photonic crystal holes for taper sizes of 0.1 to 1.0. The main objective is to simulate the tapered LMA-8 PCF with the Finite-Difference Time-Domain (FDTD) technique for sensing applications, improving the capabilities of the PCF without collapsing the crystal holes. The FDTD outputs used are the scattering pattern and the transverse transmission, and principal component analysis (PCA) is used as a mathematical tool to model the resulting data in MathCad software. The simulation results showed no obvious correlation in the scattering pattern at a wavelength of 1.55 μm, a correlation between taper size and transverse transmission, and a parabolic relationship between taper size and the refractive index changes inside the crystal structure.
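
    As a rough illustration of the PCA step, principal components of a set of transmission spectra can be extracted with a plain SVD; the spectra below are synthetic (one underlying mode scaled by taper size plus noise), whereas the study applied PCA to FDTD output in MathCad:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    tapers = np.linspace(0.1, 1.0, 10)           # taper sizes
    wavelengths = np.linspace(1.0, 2.5, 200)     # micrometres
    # rows = taper sizes, columns = wavelength samples
    spectra = tapers[:, None] * np.sin(2 * np.pi * wavelengths)[None, :]
    spectra += 0.01 * rng.standard_normal(spectra.shape)

    # PCA via SVD of the mean-centered data matrix
    centered = spectra - spectra.mean(axis=0)
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    explained = s**2 / np.sum(s**2)
    print(explained[0] > 0.95)  # one component captures nearly all variance
    ```

    A single dominant component here simply reflects the one-mode construction of the synthetic data; on measured spectra the component count indicates how many independent effects the taper introduces.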

  18. Backbone conformations and side chain flexibility of two somatostatin mimics investigated by molecular dynamics simulations.

    PubMed

    Interlandi, Gianluca

    2009-05-15

    Molecular dynamics simulations with two designed somatostatin mimics, SOM230 and SMS 201-995, were performed in explicit water for a total aggregated time of 208 ns. Analysis of the runs with SOM230 revealed the presence of two clusters of conformations. Strikingly, the two sampled conformers correspond to the two main X-ray structures in the asymmetric unit of SMS 201-995. Structural comparison between the residues of SOM230 and SMS 201-995 provides an explanation for the high binding affinity of SOM230 to four of five somatostatin receptors. Similarly, cluster analysis of the simulations with SMS 201-995 shows that the backbone of the peptide interconverts between its two main crystallographic conformers. The conformations of SMS 201-995 sampled in the two clusters violated two different sets of NOE distance constraints in agreement with a previous NMR study. Differences in side chain fluctuations between SOM230 and SMS 201-995 observed in the simulations may contribute to the relatively higher binding affinity of SOM230 to most somatostatin receptors.
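
    Cluster analysis of MD conformations of the kind described above is commonly done by grouping frames by pairwise RMSD; a minimal leader-style sketch on toy coordinates (no structural superposition, which a real cluster analysis would perform first):

    ```python
    import numpy as np

    def rmsd(a, b):
        """Coordinate RMSD between two conformations (no alignment)."""
        return np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1)))

    def leader_cluster(frames, cutoff):
        """Assign each frame to the first cluster leader within cutoff."""
        leaders, labels = [], []
        for f in frames:
            for i, lead in enumerate(leaders):
                if rmsd(f, lead) < cutoff:
                    labels.append(i)
                    break
            else:
                leaders.append(f)
                labels.append(len(leaders) - 1)
        return labels

    # Two toy conformers (8 "atoms" each) with small fluctuations
    rng = np.random.default_rng(1)
    conf_a = np.zeros((8, 3))
    conf_b = np.ones((8, 3)) * 3.0
    frames = [c + 0.1 * rng.standard_normal((8, 3))
              for c in ([conf_a] * 5 + [conf_b] * 5)]
    print(max(leader_cluster(frames, cutoff=1.0)) + 1)  # -> 2 clusters
    ```

    Two clusters fall out here by construction; for the peptides above, the analogous result is that the sampled frames partition into the two crystallographic conformers.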

  19. Optomechanical integrated simulation of Mars medium resolution lens with large field of view

    NASA Astrophysics Data System (ADS)

    Yang, Wenqiang; Xu, Guangzhou; Yang, Jianfeng; Sun, Yi

    2017-10-01

The lens of a Mars detector is exposed to solar radiation and space temperatures for long periods during orbit, so the ambient temperature of the optical system is in a dynamic state. The optical and mechanical changes caused by heat lead to drift of the camera's visual axis and to wavefront distortion. The surface distortion of the optical lens includes rigid-body displacement and distortion of the surface shape. This paper used a calculation method based on integrated optomechanical analysis to explore the impact of thermodynamic loads on image quality. A simulation model of the lens structure was established in the analysis software. The shape distribution and surface characterization parameters of the lens over several temperature ranges were analyzed and compared, yielding the PV/RMS values, deformation maps of the lens surface, and an imaging quality evaluation. The simulation successfully determined the lens surface shape and shape distribution under loads that are difficult to measure under experimental conditions. The integrated optomechanical simulation method can obtain the change in optical parameters produced by the temperature load, showing that integrated analysis plays an important role in guiding the design of the lens.
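
    The PV and RMS figures of merit mentioned above are simple statistics of the surface deformation map: PV is the spread between the highest and lowest points, RMS the standard deviation about the mean. A minimal sketch on a synthetic sag field (the deformation model is illustrative, not the paper's thermal result):

    ```python
    import numpy as np

    # Synthetic lens-surface deformation over a unit circular aperture:
    # a focus-like term plus a small astigmatism-like term, in micrometres.
    x, y = np.meshgrid(np.linspace(-1, 1, 201), np.linspace(-1, 1, 201))
    r2 = x**2 + y**2
    aperture = r2 <= 1.0
    sag = 0.5 * (2 * r2 - 1) + 0.1 * (x**2 - y**2)

    surf = sag[aperture]
    pv = surf.max() - surf.min()                     # peak-to-valley
    rms = np.sqrt(np.mean((surf - surf.mean()) ** 2))  # RMS about the mean
    print(round(pv, 3), round(rms, 3))
    ```

    PV is sensitive to single outlier points of the deformation map, while RMS summarizes the whole surface, which is why both are usually reported together.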

  20. An ensemble constrained variational analysis of atmospheric forcing data and its application to evaluate clouds in CAM5: Ensemble 3DCVA and Its Application

    DOE PAGES

    Tang, Shuaiqi; Zhang, Minghua; Xie, Shaocheng

    2016-01-05

Large-scale atmospheric forcing data can greatly impact the simulations of atmospheric process models, including Large Eddy Simulations (LES), Cloud Resolving Models (CRMs), and Single-Column Models (SCMs), and impact the development of physical parameterizations in global climate models. This study describes the development of an ensemble variationally constrained objective analysis of atmospheric large-scale forcing data and its application to evaluate the cloud biases in the Community Atmosphere Model (CAM5). Sensitivities of the variational objective analysis to background data, the error covariance matrix, and constraint variables are described and used to quantify the uncertainties in the large-scale forcing data. Application of the ensemble forcing in the CAM5 SCM during the March 2000 intensive operational period (IOP) at the Southern Great Plains (SGP) site of the Atmospheric Radiation Measurement (ARM) program shows systematic biases in the model simulations that cannot be explained by the uncertainty of the large-scale forcing data, which points to deficiencies in the physical parameterizations. The SCM is shown to overestimate high clouds and underestimate low clouds. These biases are found to also exist in the global simulation of CAM5 when it is compared with satellite data.
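
    The constrained variational step at the core of such an objective analysis can be sketched in a toy setting: adjust a background state as little as possible, weighted by its error covariance, so that linear budget constraints hold exactly. All vectors and matrices below are illustrative, not the ARM analysis system:

    ```python
    import numpy as np

    def variational_adjust(b, R, C, d):
        """Minimize (x - b)^T R^-1 (x - b) subject to C x = d.
        Closed-form Lagrange-multiplier solution."""
        K = R @ C.T @ np.linalg.inv(C @ R @ C.T)
        return b + K @ (d - C @ b)

    b = np.array([1.0, 2.0, 3.0])   # background state (e.g. from soundings)
    R = np.diag([1.0, 4.0, 1.0])    # background error covariance
    C = np.array([[1.0, 1.0, 1.0]])  # constraint: components sum to d
    d = np.array([7.0])

    x = variational_adjust(b, R, C, d)
    print(np.allclose(C @ x, d))  # constraint satisfied exactly
    ```

    Note how the middle component, with the largest assumed error variance, absorbs most of the adjustment; perturbing R and the background within their uncertainties is what generates the ensemble of forcing datasets described above.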
