Sample records for parameters enables simulation

  1. LibKiSAO: a Java library for Querying KiSAO.

    PubMed

    Zhukova, Anna; Adams, Richard; Laibe, Camille; Le Novère, Nicolas

    2012-09-24

    The Kinetic Simulation Algorithm Ontology (KiSAO) supplies information about existing algorithms available for the simulation of Systems Biology models, their characteristics, parameters and inter-relationships. KiSAO enables the unambiguous identification of algorithms from simulation descriptions. Information about analogous methods having similar characteristics and about algorithm parameters incorporated into KiSAO is desirable for simulation tools. To retrieve this information programmatically an application programming interface (API) for KiSAO is needed. We developed libKiSAO, a Java library to enable querying of the KiSA Ontology. It implements methods to retrieve information about simulation algorithms stored in KiSAO, their characteristics and parameters, and methods to query the algorithm hierarchy and search for similar algorithms providing comparable results for the same simulation set-up. Using libKiSAO, simulation tools can make logical inferences based on this knowledge and choose the most appropriate algorithm to perform a simulation. LibKiSAO also enables simulation tools to handle a wider range of simulation descriptions by determining which of the available methods are similar and can be used instead of the one indicated in the simulation description if that one is not implemented. LibKiSAO enables Java applications to easily access information about simulation algorithms, their characteristics and parameters stored in the OWL-encoded Kinetic Simulation Algorithm Ontology. LibKiSAO can be used by simulation description editors and simulation tools to improve reproducibility of computational simulation tasks and facilitate model re-use.
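
    A minimal sketch of the substitution logic such a library enables (illustrative Python, not the libKiSAO Java API; the identifiers and similarity data below are invented for the example):

      # Hypothetical KiSAO-like similarity data; a real lookup would query the
      # OWL-encoded ontology instead of a hard-coded dict.
      SIMILAR_TO = {
          "KISAO_0000019": ["KISAO_0000030", "KISAO_0000032"],
          "KISAO_0000030": ["KISAO_0000019"],
      }

      def pick_algorithm(requested, implemented):
          """Return the requested algorithm if implemented, else a similar one."""
          if requested in implemented:
              return requested
          for candidate in SIMILAR_TO.get(requested, []):
              if candidate in implemented:
                  return candidate
          raise LookupError("no implemented algorithm is similar to " + requested)

      print(pick_algorithm("KISAO_0000019", {"KISAO_0000032"}))  # -> KISAO_0000032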

  2. User Guidelines and Best Practices for CASL VUQ Analysis Using Dakota

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Brian M.; Coleman, Kayla; Gilkey, Lindsay N.

    Sandia’s Dakota software (available at http://dakota.sandia.gov) supports science and engineering transformation through advanced exploration of simulations. Specifically, it manages and analyzes ensembles of simulations to provide broader and deeper perspective for analysts and decision makers. This enables them to enhance understanding of risk, improve products, and assess simulation credibility. In its simplest mode, Dakota can automate typical parameter variation studies through a generic interface to a physics-based computational model. This can lend efficiency and rigor to manual parameter perturbation studies already being conducted by analysts. However, Dakota also delivers advanced parametric analysis techniques enabling design exploration, optimization, model calibration, risk analysis, and quantification of margins and uncertainty with such models. It directly supports verification and validation activities. Dakota algorithms enrich complex science and engineering models, enabling an analyst to answer crucial questions of Sensitivity: Which are the most important input factors or parameters entering the simulation, and how do they influence key outputs?; Uncertainty: What is the uncertainty or variability in simulation output, given uncertainties in input parameters? How safe, reliable, robust, or variable is my system? (Quantification of margins and uncertainty, QMU); Optimization: What parameter values yield the best-performing design or operating condition, given constraints?; Calibration: What models and/or parameters best match experimental data? In general, Dakota is the Consortium for Advanced Simulation of Light Water Reactors (CASL) delivery vehicle for verification, validation, and uncertainty quantification (VUQ) algorithms. It permits ready application of the VUQ methods described above to simulation codes by CASL researchers, code developers, and application engineers.
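
    A minimal sketch of the "simplest mode" described above, i.e. an automated parameter variation study over a generic black-box interface (plain Python; black_box() is an invented stand-in for a physics-based model, not Dakota's actual interface):

      import itertools

      def black_box(case):
          # Stand-in for a physics-based computational model.
          return (case["x"] - 1.0) ** 2 + 10.0 * case["y"] ** 2

      grid = {"x": [0.0, 0.5, 1.0, 1.5], "y": [-0.1, 0.0, 0.1]}
      ensemble = [dict(zip(grid, values))
                  for values in itertools.product(*grid.values())]
      results = [(case, black_box(case)) for case in ensemble]
      best_case, best_value = min(results, key=lambda pair: pair[1])
      print(len(results), "runs; best response", best_value, "at", best_case)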

  3. Wall Shear Stress Distribution in a Patient-Specific Cerebral Aneurysm Model using Reduced Order Modeling

    NASA Astrophysics Data System (ADS)

    Han, Suyue; Chang, Gary Han; Schirmer, Clemens; Modarres-Sadeghi, Yahya

    2016-11-01

    We construct a reduced-order model (ROM) to study the Wall Shear Stress (WSS) distributions in image-based, patient-specific aneurysm models. The magnitude of WSS has been shown to be a critical factor in growth and rupture of human aneurysms. We start the process by running a training case using Computational Fluid Dynamics (CFD) simulation with time-varying flow parameters, such that these parameters cover the range of parameters of interest. The method of snapshot Proper Orthogonal Decomposition (POD) is utilized to construct the reduced-order bases using the training CFD simulation. The resulting ROM enables us to study the flow patterns and the WSS distributions over a range of system parameters computationally very efficiently with a relatively small number of modes. This enables comprehensive analysis of the model system across a range of physiological conditions without the need to re-compute the simulation for small changes in the system parameters.
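
    The snapshot-POD step reduces, in essence, to a singular value decomposition of mean-subtracted snapshots; a minimal numpy sketch (random data stands in for the training CFD snapshots, and the sizes and 99% energy threshold are illustrative):

      import numpy as np

      rng = np.random.default_rng(0)
      S = rng.standard_normal((5000, 200))        # columns = flow/WSS snapshots
      S_mean = S.mean(axis=1, keepdims=True)
      U, sigma, _ = np.linalg.svd(S - S_mean, full_matrices=False)

      energy = np.cumsum(sigma**2) / np.sum(sigma**2)
      r = int(np.searchsorted(energy, 0.99)) + 1  # modes capturing 99% energy
      basis = U[:, :r]                            # reduced-order basis
      # Approximate a new field by projection onto the reduced basis:
      u = S[:, [0]]
      u_rom = S_mean + basis @ (basis.T @ (u - S_mean))
      print(r, np.linalg.norm(u - u_rom) / np.linalg.norm(u))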

  4. Implementation of Hydrodynamic Simulation Code in Shock Experiment Design for Alkali Metals

    NASA Astrophysics Data System (ADS)

    Coleman, A. L.; Briggs, R.; Gorman, M. G.; Ali, S.; Lazicki, A.; Swift, D. C.; Stubley, P. G.; McBride, E. E.; Collins, G.; Wark, J. S.; McMahon, M. I.

    2017-10-01

    Shock compression techniques enable the investigation of extreme P-T states. In order to probe off-Hugoniot regions of P-T space, target makeup and laser pulse parameters must be carefully designed. HYADES is a hydrodynamic simulation code which has been successfully utilised to simulate shock compression events and refine the experimental parameters required in order to explore new P-T states in alkali metals. Here we describe simulations and experiments on potassium, along with the techniques required to access off-Hugoniot states.

  5. Multiplatform Mission Planning and Operations Simulation Environment for Adaptive Remote Sensors

    NASA Astrophysics Data System (ADS)

    Smith, G.; Ball, C.; O'Brien, A.; Johnson, J. T.

    2017-12-01

    We report on the design and development of mission simulator libraries to support the emerging field of adaptive remote sensors. We outline the current state of the art in adaptive sensing, provide analysis of how the current approach to performing observing system simulation experiments (OSSEs) must be changed to enable adaptive sensors for remote sensing, and present an architecture to enable their inclusion in future OSSEs. The growing potential of sensors capable of real-time adaptation of their operational parameters calls for a new class of mission planning and simulation tools. Existing simulation tools used in OSSEs assume a fixed set of sensor parameters in terms of observation geometry, frequencies used, resolution, or observation time, which allows simplifications to be made in the simulation and allows sensor observation errors to be characterized a priori. Adaptive sensors may vary these parameters depending on the details of the scene observed, so that sensor performance is not simple to model without conducting OSSE simulations that include sensor adaptation in response to the varying observational environment. Adaptive sensors are of significance to resource-constrained, small satellite platforms because they enable the management of power and data volumes while providing methods for multiple sensors to collaborate. The new class of OSSEs required to utilize adaptive sensors located on multiple platforms must answer the question: if the physical act of sensing has a cost, how does the system determine whether the science value of a measurement is worth the cost, and how should that cost be shared among the collaborating sensors? Here we propose to answer this question using an architecture structured around three modules: ADAPT, MANAGE and COLLABORATE. The ADAPT module is a set of routines to facilitate modeling of adaptive sensors, the MANAGE module will implement a set of routines to facilitate simulations of sensor resource management when power and data volume are constrained, and the COLLABORATE module will support simulations of coordination among multiple platforms with adaptive sensors. Used together, these modules will form an OSSE simulation framework that enables both the design of adaptive algorithms to support remote sensing and the prediction of sensor performance.

  6. An open-source job management framework for parameter-space exploration: OACIS

    NASA Astrophysics Data System (ADS)

    Murase, Y.; Uchitane, T.; Ito, N.

    2017-11-01

    We present an open-source software framework for parameter-space exploration, named OACIS, which is useful for managing vast numbers of simulation jobs and results in a systematic way. Recent developments in high-performance computing have enabled us to explore parameter spaces comprehensively; in such cases, however, manual management of the workflow is practically impossible. OACIS was developed with the aim of reducing the cost of these repetitive tasks when conducting simulations, by automating job submissions and data management. In this article, an overview of OACIS as well as a getting-started guide are presented.

  7. An IT-enabled supply chain model: a simulation study

    NASA Astrophysics Data System (ADS)

    Cannella, Salvatore; Framinan, Jose M.; Barbosa-Póvoa, Ana

    2014-11-01

    During the last decades, supply chain collaboration practices and the underlying enabling technologies have evolved from the classical electronic data interchange (EDI) approach to a web-based and radio frequency identification (RFID)-enabled collaboration. In this field, most of the literature has focused on the study of optimal parameters for reducing the total cost of suppliers, by adopting operational research (OR) techniques. Herein we are interested in showing that the considered information technology (IT)-enabled structure is resilient, that is, it works well across a reasonably broad range of parameter settings. By adopting a methodological approach based on system dynamics, we study a multi-tier collaborative supply chain. Results show that the IT-enabled supply chain improves operational performance and customer service level. Nonetheless, the benefits for geographically dispersed networks are smaller.

  8. Systematic parameter inference in stochastic mesoscopic modeling

    NASA Astrophysics Data System (ADS)

    Lei, Huan; Yang, Xiu; Li, Zhen; Karniadakis, George Em

    2017-02-01

    We propose a method to efficiently determine the optimal coarse-grained force field in mesoscopic stochastic simulations of Newtonian fluid and polymer melt systems modeled by dissipative particle dynamics (DPD) and energy conserving dissipative particle dynamics (eDPD). The response surfaces of various target properties (viscosity, diffusivity, pressure, etc.) with respect to model parameters are constructed based on the generalized polynomial chaos (gPC) expansion using simulation results on sampling points (e.g., individual parameter sets). To alleviate the computational cost of evaluating the target properties, we employ the compressive sensing method to compute the coefficients of the dominant gPC terms given the prior knowledge that the coefficients are "sparse". The proposed method shows accuracy comparable to the standard probabilistic collocation method (PCM) while imposing a much weaker restriction on the number of simulation samples, especially for systems with a high-dimensional parametric space. Full access to the response surfaces within the confidence range enables us to infer the optimal force parameters given the desired values of target properties at the macroscopic scale. Moreover, it enables us to investigate the intrinsic relationship between the model parameters, identify possible degeneracies in the parameter space, and optimize the model by eliminating model redundancies. The proposed method provides an efficient alternative approach for constructing mesoscopic models by inferring model parameters to recover target properties of the physical systems (e.g., from experimental measurements), where those force field parameters and formulation cannot be derived from the microscopic level in a straightforward way.
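
    A sketch of the sparse-recovery idea in one parameter dimension (Legendre basis; scikit-learn's Lasso stands in for the compressive-sensing solver, and the "target property" f is synthetic):

      import numpy as np
      from numpy.polynomial.legendre import Legendre
      from sklearn.linear_model import Lasso

      rng = np.random.default_rng(1)
      n_samples, degree = 40, 12                  # few samples, many basis terms
      x = rng.uniform(-1.0, 1.0, n_samples)       # sampled model parameter
      f = 0.8 * Legendre.basis(2)(x) - 0.3 * Legendre.basis(5)(x)

      Phi = np.column_stack([Legendre.basis(k)(x) for k in range(degree + 1)])
      fit = Lasso(alpha=1e-3, fit_intercept=False, max_iter=50000).fit(Phi, f)
      print(np.nonzero(np.abs(fit.coef_) > 1e-2)[0])   # expected: terms 2 and 5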

9. Process Integration and Optimization of ICME Carbon Fiber Composites for Vehicle Lightweighting: A Preliminary Development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Hongyi; Li, Yang; Zeng, Danielle

    Process integration and optimization is the key enabler of the Integrated Computational Materials Engineering (ICME) of carbon fiber composites. In this paper, automated workflows are developed for two types of composites: Sheet Molding Compounds (SMC) short fiber composites, and multi-layer unidirectional (UD) composites. For SMC, the proposed workflow integrates material processing simulation, microstructure representative volume element (RVE) models, material property prediction and structure preformation simulation to enable multiscale, multidisciplinary analysis and design. Processing parameters, microstructure parameters and vehicle subframe geometry parameters are defined as the design variables; the stiffness and weight of the structure are defined as the responses. For the multi-layer UD structure, this work focuses on the discussion of different design representation methods and their impacts on the optimization performance. Challenges in ICME process integration and optimization are also summarized and highlighted. Two case studies are conducted to demonstrate the integrated process and its application in optimization.

  10. Recovery of Graded Response Model Parameters: A Comparison of Marginal Maximum Likelihood and Markov Chain Monte Carlo Estimation

    ERIC Educational Resources Information Center

    Kieftenbeld, Vincent; Natesan, Prathiba

    2012-01-01

    Markov chain Monte Carlo (MCMC) methods enable a fully Bayesian approach to parameter estimation of item response models. In this simulation study, the authors compared the recovery of graded response model parameters using marginal maximum likelihood (MML) and Gibbs sampling (MCMC) under various latent trait distributions, test lengths, and…

  11. A Proposal for Modeling Real Hardware, Weather and Marine Conditions for Underwater Sensor Networks

    PubMed Central

    Climent, Salvador; Capella, Juan Vicente; Blanc, Sara; Perles, Angel; Serrano, Juan José

    2013-01-01

    Network simulators are useful for researching protocol performance, appraising new hardware capabilities and evaluating real application scenarios. However, these tasks can only be achieved when using accurate models and real parameters that enable the extraction of trustworthy results and conclusions. This paper presents an underwater wireless sensor network ecosystem for the ns-3 simulator. This ecosystem is composed of a new energy-harvesting model and a low-cost, low-power underwater wake-up modem model that, alongside existing models, enables the performance of accurate simulations by providing real weather and marine conditions from the location where the real application is to be deployed. PMID:23748171

  12. Systematic parameter inference in stochastic mesoscopic modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lei, Huan; Yang, Xiu; Li, Zhen

    2017-02-01

    We propose a method to efficiently determine the optimal coarse-grained force field in mesoscopic stochastic simulations of Newtonian fluid and polymer melt systems modeled by dissipative particle dynamics (DPD) and energy conserving dissipative particle dynamics (eDPD). The response surfaces of various target properties (viscosity, diffusivity, pressure, etc.) with respect to model parameters are constructed based on the generalized polynomial chaos (gPC) expansion using simulation results on sampling points (e.g., individual parameter sets). To alleviate the computational cost of evaluating the target properties, we employ the compressive sensing method to compute the coefficients of the dominant gPC terms given the prior knowledge that the coefficients are “sparse”. The proposed method shows accuracy comparable to the standard probabilistic collocation method (PCM) while imposing a much weaker restriction on the number of simulation samples, especially for systems with a high-dimensional parametric space. Full access to the response surfaces within the confidence range enables us to infer the optimal force parameters given the desired values of target properties at the macroscopic scale. Moreover, it enables us to investigate the intrinsic relationship between the model parameters, identify possible degeneracies in the parameter space, and optimize the model by eliminating model redundancies. The proposed method provides an efficient alternative approach for constructing mesoscopic models by inferring model parameters to recover target properties of the physical systems (e.g., from experimental measurements), where those force field parameters and formulation cannot be derived from the microscopic level in a straightforward way.

  13. Enhancing 4D PC-MRI in an aortic phantom considering numerical simulations

    NASA Astrophysics Data System (ADS)

    Kratzke, Jonas; Schoch, Nicolai; Weis, Christian; Müller-Eschner, Matthias; Speidel, Stefanie; Farag, Mina; Beller, Carsten J.; Heuveline, Vincent

    2015-03-01

    To date, cardiovascular surgery enables the treatment of a wide range of aortic pathologies. One of the current challenges in this field is the detection of high-risk patients for adverse aortic events, who should be treated electively. Reliable diagnostic parameters that indicate the urgency of treatment have to be determined. Functional imaging by means of 4D phase contrast-magnetic resonance imaging (PC-MRI) enables the time-resolved measurement of blood flow velocity in 3D. Applied to aortic phantoms, three-dimensional blood flow properties and their relation to adverse dynamics can be investigated in vitro. Emerging "in silico" methods of numerical simulation can supplement these measurements by computing additional information on crucial parameters. We propose a framework that complements 4D PC-MRI imaging by means of numerical simulation based on the Finite Element Method (FEM). The framework is developed on the basis of a prototypic aortic phantom and validated by 4D PC-MRI measurements of the phantom. Based on physical principles of biomechanics, the derived simulation depicts aortic blood flow properties and characteristics. The framework might help identify factors that induce aortic pathologies such as aortic dilatation or aortic dissection. Alarming thresholds of parameters such as the wall shear stress distribution can be evaluated. The combined techniques of 4D PC-MRI and numerical simulation can be used as complementary tools for risk stratification of aortic pathology.

  14. A generic implementation of replica exchange with solute tempering (REST2) algorithm in NAMD for complex biophysical simulations

    NASA Astrophysics Data System (ADS)

    Jo, Sunhwan; Jiang, Wei

    2015-12-01

    Replica Exchange with Solute Tempering (REST2) is a powerful sampling-enhancement algorithm for molecular dynamics (MD) in that it needs a significantly smaller number of replicas yet achieves higher sampling efficiency than the standard temperature-exchange algorithm. In this paper, we extend the applicability of REST2 to quantitative biophysical simulations through a robust and generic implementation in the highly scalable MD software NAMD. The rescaling procedure for the force field parameters controlling the REST2 "hot region" is implemented in NAMD at the source-code level. A user can conveniently select the hot region through VMD and write the selection information into a PDB file. The rescaling keyword/parameter is written in the NAMD Tcl script interface, which enables an on-the-fly simulation parameter change. Our implementation of REST2 is within a communication-enabled Tcl script built on top of Charm++, so the communication overhead of an exchange attempt is vanishingly small. Such a generic implementation facilitates seamless cooperation between REST2 and other modules of NAMD to provide enhanced sampling for complex biomolecular simulations. Three challenging applications, including a native REST2 simulation of a peptide folding-unfolding transition, free energy perturbation/REST2 for the absolute binding affinity of a protein-ligand complex, and umbrella sampling/REST2 Hamiltonian exchange for a free energy landscape calculation, were carried out on an IBM Blue Gene/Q supercomputer to demonstrate the efficacy of REST2 based on the present implementation.
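
    For reference, a sketch of the REST2 scaling rule as given in the literature (Wang, Friesner and Berne, J. Phys. Chem. B 2011), not of the NAMD source itself: for a replica with effective solute temperature Tm, solute-solute interactions are scaled by beta_m/beta_0 and solute-solvent interactions by its square root; the temperature ladder below is illustrative.

      def rest2_scaling(T0, Tm):
          """Scaling factors for a replica with effective temperature Tm."""
          lam = T0 / Tm                    # beta_m / beta_0
          return {"solute-solute": lam, "solute-solvent": lam ** 0.5}

      for Tm in (300.0, 400.0, 500.0):     # illustrative temperature ladder
          print(Tm, rest2_scaling(300.0, Tm))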

  15. Parameterization of Keeling's network generation algorithm.

    PubMed

    Badham, Jennifer; Abbass, Hussein; Stocker, Rob

    2008-09-01

    Simulation is increasingly being used to examine epidemic behaviour and assess potential management options. The utility of the simulations relies on the ability to replicate those aspects of the social structure that are relevant to epidemic transmission. One approach is to generate networks with desired social properties. Recent research by Keeling and his colleagues has generated simulated networks with a range of properties, and examined the impact of these properties on epidemic processes occurring over the network. However, published work has included only limited analysis of the algorithm itself and the way in which the network properties are related to the algorithm parameters. This paper identifies some relationships between the algorithm parameters and selected network properties (mean degree, degree variation, clustering coefficient and assortativity). Our approach enables users of the algorithm to efficiently generate a network with given properties, thereby allowing realistic social networks to be used as the basis of epidemic simulations. Alternatively, the algorithm could be used to generate social networks with a range of property values, enabling analysis of the impact of these properties on epidemic behaviour.
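
    The four properties named above can be checked on any generated network; a short sketch with networkx (a random graph stands in for a Keeling-algorithm network):

      import statistics
      import networkx as nx

      G = nx.erdos_renyi_graph(n=1000, p=0.008, seed=42)   # stand-in network
      degrees = [d for _, d in G.degree()]
      print("mean degree:", statistics.mean(degrees))
      print("degree variance:", statistics.pvariance(degrees))
      print("clustering coefficient:", nx.average_clustering(G))
      print("assortativity:", nx.degree_assortativity_coefficient(G))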

  16. Process Integration and Optimization of ICME Carbon Fiber Composites for Vehicle Lightweighting: A Preliminary Development

    DOE PAGES

    Xu, Hongyi; Li, Yang; Zeng, Danielle

    2017-01-02

    Process integration and optimization is the key enabler of the Integrated Computational Materials Engineering (ICME) of carbon fiber composites. In this paper, automated workflows are developed for two types of composites: Sheet Molding Compounds (SMC) short fiber composites, and multi-layer unidirectional (UD) composites. For SMC, the proposed workflow integrates material processing simulation, microstructure representative volume element (RVE) models, material property prediction and structure preformation simulation to enable multiscale, multidisciplinary analysis and design. Processing parameters, microstructure parameters and vehicle subframe geometry parameters are defined as the design variables; the stiffness and weight of the structure are defined as the responses. For the multi-layer UD structure, this work focuses on the discussion of different design representation methods and their impacts on the optimization performance. Challenges in ICME process integration and optimization are also summarized and highlighted. Two case studies are conducted to demonstrate the integrated process and its application in optimization.

  17. syris: a flexible and efficient framework for X-ray imaging experiments simulation.

    PubMed

    Faragó, Tomáš; Mikulík, Petr; Ershov, Alexey; Vogelgesang, Matthias; Hänschke, Daniel; Baumbach, Tilo

    2017-11-01

    An open-source framework for conducting a broad range of virtual X-ray imaging experiments, syris, is presented. The simulated wavefield created by a source propagates through an arbitrary number of objects until it reaches a detector. The objects in the light path and the source are time-dependent, which enables simulations of dynamic experiments, e.g. four-dimensional time-resolved tomography and laminography. The high-level interface of syris is written in Python and its modularity makes the framework very flexible. The computationally demanding parts behind this interface are implemented in OpenCL, which enables fast calculations on modern graphics processing units. The combination of flexibility and speed opens new possibilities for studying novel imaging methods and systematic search of optimal combinations of measurement conditions and data processing parameters. This can help to increase the success rates and efficiency of valuable synchrotron beam time. To demonstrate the capabilities of the framework, various experiments have been simulated and compared with real data. To show the use case of measurement and data processing parameter optimization based on simulation, a virtual counterpart of a high-speed radiography experiment was created and the simulated data were used to select a suitable motion estimation algorithm; one of its parameters was optimized in order to achieve the best motion estimation accuracy when applied on the real data. syris was also used to simulate tomographic data sets under various imaging conditions which impact the tomographic reconstruction accuracy, and it is shown how the accuracy may guide the selection of imaging conditions for particular use cases.

  18. Multi-Resolution Climate Ensemble Parameter Analysis with Nested Parallel Coordinates Plots.

    PubMed

    Wang, Junpeng; Liu, Xiaotong; Shen, Han-Wei; Lin, Guang

    2017-01-01

    Due to the uncertain nature of weather prediction, climate simulations are usually performed multiple times with different spatial resolutions. The outputs of simulations are multi-resolution spatial temporal ensembles. Each simulation run uses a unique set of values for multiple convective parameters. Distinct parameter settings from different simulation runs in different resolutions constitute a multi-resolution high-dimensional parameter space. Understanding the correlation between the different convective parameters, and establishing a connection between the parameter settings and the ensemble outputs are crucial to domain scientists. The multi-resolution high-dimensional parameter space, however, presents a unique challenge to the existing correlation visualization techniques. We present Nested Parallel Coordinates Plot (NPCP), a new type of parallel coordinates plots that enables visualization of intra-resolution and inter-resolution parameter correlations. With flexible user control, NPCP integrates superimposition, juxtaposition and explicit encodings in a single view for comparative data visualization and analysis. We develop an integrated visual analytics system to help domain scientists understand the connection between multi-resolution convective parameters and the large spatial temporal ensembles. Our system presents intricate climate ensembles with a comprehensive overview and on-demand geographic details. We demonstrate NPCP, along with the climate ensemble visualization system, based on real-world use-cases from our collaborators in computational and predictive science.

  19. Expanding the catalog of binary black-hole simulations: aligned-spin configurations

    NASA Astrophysics Data System (ADS)

    Chu, Tony; Pfeiffer, Harald; Scheel, Mark; Szilagyi, Bela; SXS Collaboration

    2015-04-01

    A major goal of numerical relativity is to model the inspiral and merger of binary black holes through sufficiently accurate and long simulations, to enable the successful detection of gravitational waves. However, covering the full parameter space of binary configurations is a computationally daunting task. The SXS Collaboration has made important progress in this direction recently, with a catalog of 174 publicly available binary black-hole simulations [black-holes.org/waveforms]. Nevertheless, the parameter-space coverage remains sparse, even for non-precessing binaries. In this talk, I will describe an addition to the SXS catalog to improve its coverage, consisting of 95 new simulations of aligned-spin binaries with moderate mass ratios and dimensionless spins as high as 0.9. Some applications of these new simulations will also be mentioned.

  20. Comprehensive Monte-Carlo simulator for optimization of imaging parameters for high sensitivity detection of skin cancer at the THz

    NASA Astrophysics Data System (ADS)

    Ney, Michael; Abdulhalim, Ibrahim

    2016-03-01

    Skin cancer detection at its early stages has been the focus of a large number of experimental and theoretical studies during the past decades. Among these studies, two prominent approaches presenting high potential are reflectometric sensing in the THz wavelength region and polarimetric imaging techniques at visible wavelengths. While the contrast agent and source of sensitivity of THz radiation to cancer-related tissue alterations was considered to be mainly the elevated water content in the cancerous tissue, the polarimetric approach has been verified to enable cancerous tissue differentiation based on cancer-induced structural alterations to the tissue. Combining THz with the polarimetric approach, which is considered in this study, is examined in order to enable higher detection sensitivity than previous, purely reflectometric THz measurements. For this, a comprehensive MC simulation of radiative transfer in a complex skin tissue model fitted for the THz domain has been developed, which considers the skin's stratified structure, tissue material optical dispersion modeling, surface roughness, scatterers, and substructure organelles. Additionally, a narrow-beam Mueller matrix differential analysis technique is suggested for assessing cancer-induced changes in the polarimetric image, enabling the tissue model and MC simulation to be utilized for determining the imaging parameters resulting in maximal detection sensitivity.

  1. Comparison of existing models to simulate anaerobic digestion of lipid-rich waste.

    PubMed

    Béline, F; Rodriguez-Mendez, R; Girault, R; Bihan, Y Le; Lessard, P

    2017-02-01

    Models for anaerobic digestion of lipid-rich waste taking inhibition into account were reviewed and, if necessary, adjusted to the ADM1 model framework in order to compare them. Experimental data from anaerobic digestion of slaughterhouse waste at an organic loading rate (OLR) ranging from 0.3 to 1.9 kg VS m⁻³ d⁻¹ were used to compare and evaluate the models. Experimental data obtained at low OLRs were accurately modeled regardless of the model, thereby validating the stoichiometric parameters used and the influent fractionation. However, at higher OLRs, although inhibition parameters were optimized to reduce differences between experimental and simulated data, no model was able to accurately simulate the accumulation of substrates and intermediates, mainly due to the wrong simulation of pH. A simulation using pH based on experimental data showed that acetogenesis and methanogenesis were the steps most sensitive to LCFA inhibition and enabled identification of the inhibition parameters of both steps. Copyright © 2016 Elsevier Ltd. All rights reserved.
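
    For context, ADM1-type uptake kinetics attach an inhibition factor to a Monod term; a generic non-competitive form for LCFA inhibition (a sketch of the model family, not the specific parameters fitted in this study) is

      \rho = k_m \, \frac{S}{K_S + S} \, X \cdot I_{\mathrm{LCFA}},
      \qquad
      I_{\mathrm{LCFA}} = \frac{K_I}{K_I + S_{\mathrm{LCFA}}},

    where S is the substrate concentration, X the biomass concentration, k_m and K_S the usual Monod constants, and K_I the LCFA level at which uptake is halved.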

  2. Development of interatomic potential of Ge(1- x - y )Si x Sn y ternary alloy semiconductors for classical lattice dynamics simulation

    NASA Astrophysics Data System (ADS)

    Tomita, Motohiro; Ogasawara, Masataka; Terada, Takuya; Watanabe, Takanobu

    2018-04-01

    We provide the parameters of Stillinger-Weber potentials for GeSiSn ternary mixed systems. These parameters can be used in molecular dynamics (MD) simulations to reproduce phonon properties and thermal conductivities. The phonon dispersion relation is derived from the dynamical structure factor, which is calculated by the space-time Fourier transform of atomic trajectories in an MD simulation. The phonon properties and thermal conductivities of GeSiSn ternary crystals calculated using these parameters mostly reproduced both the findings of previous experiments and earlier calculations made using MD simulations. The atomic composition dependence of these properties in GeSiSn ternary crystals obtained by previous studies (both experimental and theoretical) and the calculated data were almost exactly reproduced by our proposed parameters. Moreover, the results of the MD simulation agree with the previous calculations made using a time-independent phonon Boltzmann transport equation with complicated scattering mechanisms. These scattering mechanisms are very important in complicated nanostructures, as they allow the heat-transfer properties to be more accurately calculated by MD simulations. This work enables us to predict the phonon- and heat-related properties of bulk group IV alloys, especially ternary alloys.

  3. Parameter-induced uncertainty quantification of crop yields, soil N2O and CO2 emission for 8 arable sites across Europe using the LandscapeDNDC model

    NASA Astrophysics Data System (ADS)

    Santabarbara, Ignacio; Haas, Edwin; Kraus, David; Herrera, Saul; Klatt, Steffen; Kiese, Ralf

    2014-05-01

    When using biogeochemical models to estimate greenhouse gas emissions at site to regional/national levels, the assessment and quantification of the uncertainties of simulation results are of significant importance. The uncertainties in simulation results of process-based ecosystem models may result from uncertainties of the process parameters that describe the processes of the model, from model structure inadequacy, as well as from uncertainties in the observations. Data for development and testing of the uncertainty analysis were crop yield observations and measurements of soil fluxes of nitrous oxide (N2O) and carbon dioxide (CO2) from 8 arable sites across Europe. Using the process-based biogeochemical model LandscapeDNDC for simulating crop yields and N2O and CO2 emissions, our aim is to assess the simulation uncertainty by setting up a Bayesian framework based on the Metropolis-Hastings algorithm. Using the Gelman convergence criterion and parallel computing techniques enables multiple Markov chains to run independently in parallel and create a random walk to estimate the joint model parameter distribution. Through this distribution we limit the parameter space, obtain probabilities of parameter values, and find the complex dependencies among them. With this parameter distribution, which determines soil-atmosphere C and N exchange, we are able to obtain the parameter-induced uncertainty of simulation results and compare them with the measurement data.
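
    A compact sketch of the convergence check named above, the Gelman-Rubin potential scale reduction factor (in the study the chains would come from the parallel Metropolis-Hastings runs; here they are synthetic):

      import numpy as np

      def gelman_rubin(chains):                      # chains: (m chains, n draws)
          m, n = chains.shape
          B = n * chains.mean(axis=1).var(ddof=1)    # between-chain variance
          W = chains.var(axis=1, ddof=1).mean()      # within-chain variance
          var_hat = (n - 1) / n * W + B / n
          return np.sqrt(var_hat / W)                # R-hat ~ 1 => converged

      rng = np.random.default_rng(2)
      chains = rng.normal(0.0, 1.0, size=(4, 5000))  # synthetic, well-mixed
      print(gelman_rubin(chains))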

  4. Optimization of the molecular dynamics method for simulations of DNA and ion transport through biological nanopores.

    PubMed

    Wells, David B; Bhattacharya, Swati; Carr, Rogan; Maffeo, Christopher; Ho, Anthony; Comer, Jeffrey; Aksimentiev, Aleksei

    2012-01-01

    Molecular dynamics (MD) simulations have become a standard method for the rational design and interpretation of experimental studies of DNA translocation through nanopores. The MD method, however, offers a multitude of algorithms, parameters, and other protocol choices that can affect the accuracy of the resulting data as well as computational efficiency. In this chapter, we examine the most popular choices offered by the MD method, seeking an optimal set of parameters that enable the most computationally efficient and accurate simulations of DNA and ion transport through biological nanopores. In particular, we examine the influence of short-range cutoff, integration timestep and force field parameters on the temperature and concentration dependence of bulk ion conductivity, ion pairing, ion solvation energy, DNA structure, DNA-ion interactions, and the ionic current through a nanopore.

  5. Hierarchical Order Parameters for Macromolecular Assembly Simulations I: Construction and Dynamical Properties of Order Parameters

    PubMed Central

    Singharoy, Abhishek; Sereda, Yuriy

    2012-01-01

    Macromolecular assemblies often display a hierarchical organization of macromolecules or their sub-assemblies. To model this, we have formulated a space warping method that enables capturing overall macromolecular structure and dynamics via a set of coarse-grained order parameters (OPs). This article is the first of two describing the construction and computational implementation of an additional class of OPs that has built into them the hierarchical architecture of macromolecular assemblies. To accomplish this, first, the system is divided into subsystems, each of which is described via a representative set of OPs. Then, a global set of variables is constructed from these subsystem-centered OPs to capture overall system organization. Dynamical properties of the resulting OPs are compared to those of our previous nonhierarchical ones, and implied conceptual and computational advantages are discussed for a 100 ns, 2-million-atom solvated Human Papillomavirus-like particle simulation. In the second article, the hierarchical OPs are shown to enable a multiscale analysis that starts with the N-atom Liouville equation and yields rigorous Langevin equations of stochastic OP dynamics. The latter is demonstrated via a force-field based simulation algorithm that probes key structural transition pathways, simultaneously accounting for all-atom details and overall structure. PMID:22661911

  6. Parallel stochastic simulation of macroscopic calcium currents.

    PubMed

    González-Vélez, Virginia; González-Vélez, Horacio

    2007-06-01

    This work introduces MACACO, a macroscopic calcium currents simulator. It provides a parameter-sweep framework which computes macroscopic Ca(2+) currents from the individual aggregation of unitary currents, using a stochastic model for L-type Ca(2+) channels. MACACO uses a simplified 3-state Markov model to simulate the response of each Ca(2+) channel to different voltage inputs to the cell. In order to provide an accurate systematic view for the stochastic nature of the calcium channels, MACACO is composed of an experiment generator, a central simulation engine and a post-processing script component. Due to the computational complexity of the problem and the dimensions of the parameter space, the MACACO simulation engine employs a grid-enabled task farm. Having been designed as a computational biology tool, MACACO heavily borrows from the way cell physiologists conduct and report their experimental work.
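
    A sketch of the core idea, aggregating unitary currents from independent 3-state Markov channels (plain numpy; the transition matrix and unitary current below are illustrative, not MACACO's calibrated values):

      import numpy as np

      P = np.array([[0.90, 0.10, 0.00],    # closed      -> closed/open/inactivated
                    [0.05, 0.85, 0.10],    # open        ->
                    [0.00, 0.02, 0.98]])   # inactivated ->
      i_unit, n_channels, n_steps = -0.3, 1000, 200   # pA per open channel
      rng = np.random.default_rng(3)

      states = np.zeros(n_channels, dtype=int)        # all channels start closed
      current = np.empty(n_steps)
      for t in range(n_steps):
          u = rng.random(n_channels)
          cum = P[states].cumsum(axis=1)              # per-channel CDF rows
          states = (u[:, None] < cum).argmax(axis=1)  # inverse-CDF sampling
          current[t] = i_unit * np.count_nonzero(states == 1)
      print(current[:5])                              # macroscopic current trace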

  7. Genomic data assimilation for estimating hybrid functional Petri net from time-course gene expression data.

    PubMed

    Nagasaki, Masao; Yamaguchi, Rui; Yoshida, Ryo; Imoto, Seiya; Doi, Atsushi; Tamada, Yoshinori; Matsuno, Hiroshi; Miyano, Satoru; Higuchi, Tomoyuki

    2006-01-01

    We propose an automatic construction method of the hybrid functional Petri net as a simulation model of biological pathways. The problems we consider are how we choose the values of parameters and how we set the network structure. Usually, we tune these unknown factors empirically so that the simulation results are consistent with biological knowledge. Obviously, this approach is limited by the size of the network of interest. To extend the capability of the simulation model, we propose the use of a data assimilation approach that was originally established in the field of geophysical simulation science. We provide a genomic data assimilation framework that establishes a link between our simulation model and observed data, such as microarray gene expression data, by using a nonlinear state space model. A key idea of our genomic data assimilation is that the unknown parameters in the simulation model are converted into parameters of the state space model and the estimates are obtained as the maximum a posteriori estimators. In the parameter estimation process, the simulation model is used to generate the system model in the state space model. Such a formulation enables us to handle both the model construction and the parameter tuning within a framework of Bayesian statistical inference. In particular, the Bayesian approach provides us a way of controlling overfitting during parameter estimation, which is essential for constructing a reliable biological pathway. We demonstrate the effectiveness of our approach using synthetic data. As a result, parameter estimation using genomic data assimilation works very well and the network structure is suitably selected.

  8. Real-time flutter analysis

    NASA Technical Reports Server (NTRS)

    Walker, R.; Gupta, N.

    1984-01-01

    The important algorithmic issues necessary to achieve a real-time flutter monitoring system are addressed; namely, the guidelines for choosing appropriate model forms, reduction of the parameter convergence transient, handling of multiple modes, the effect of overparameterization, and estimate accuracy predictions, both online and for experiment design. An approach for efficiently computing continuous-time flutter parameter Cramer-Rao estimate error bounds was developed. This enables a convincing comparison of theoretical and simulation results, as well as offline studies in preparation for a flight test. Theoretical predictions, simulation, and flight test results from the NASA Drones for Aerodynamic and Structural Test (DAST) Program are compared.

  9. Simulation of EAST vertical displacement events by tokamak simulation code

    NASA Astrophysics Data System (ADS)

    Qiu, Qinglai; Xiao, Bingjia; Guo, Yong; Liu, Lei; Xing, Zhe; Humphreys, D. A.

    2016-10-01

    Vertical instability is a potentially serious hazard for elongated plasma. In this paper, the tokamak simulation code (TSC) is used to simulate vertical displacement events (VDE) on the experimental advanced superconducting tokamak (EAST). Key parameters from the simulations, including plasma current, plasma shape and position, flux contours and magnetic measurements, match experimental data well. The growth rates simulated by TSC are in good agreement with TokSys results. In addition to modeling the free drift, an EAST fast vertical control model enables TSC to simulate the course of VDE recovery. The trajectories of the plasma current center and control currents on internal coils (IC) fit experimental data well.

  10. Crystal Growth Simulations To Establish Physically Relevant Kinetic Parameters from the Empirical Kolmogorov-Johnson-Mehl-Avrami Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dill, Eric D.; Folmer, Jacob C.W.; Martin, James D.

    A series of simulations was performed to enable interpretation of the material and physical significance of the parameters defined in the Kolmogorov, Johnson and Mehl, and Avrami (KJMA) rate expression commonly used to describe phase-boundary-controlled reactions of condensed matter. The parameters k, n, and t0 are shown to be highly correlated, which, if unaccounted for, seriously challenges mechanistic interpretation. It is demonstrated that rate measurements exhibit an intrinsic uncertainty without precise knowledge of the location and orientation of nucleation with respect to the free volume into which it grows. More significantly, it is demonstrated that the KJMA rate constant k is highly dependent on sample size. However, under the simulated conditions of slow nucleation relative to crystal growth, sample volume and sample anisotropy correction affords a means to eliminate the experimental-condition dependence of the KJMA rate constant, k, producing the material-specific parameter, the velocity of the phase boundary, v_pb.
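
    For reference, one common parameterization of the KJMA expression containing the three correlated parameters discussed above is

      \alpha(t) = 1 - \exp\!\left[ -\bigl( k \,(t - t_0) \bigr)^{n} \right],
      \qquad t \ge t_0,

    where α is the transformed fraction, k the rate constant, n the Avrami exponent and t0 the induction time.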

  11. A neural network for the prediction of performance parameters of transformer cores

    NASA Astrophysics Data System (ADS)

    Nussbaum, C.; Booth, T.; Ilo, A.; Pfützner, H.

    1996-07-01

    The paper shows that Artificial Neural Networks (ANNs) may offer new possibilities for the prediction of transformer core performance parameters, i.e. no-load power losses and excitation. Basically, this technique enables simulations with respect to different construction parameters, most notably the characteristics of corner designs, i.e. the overlap length, the air gap length, and the number of steps. However, without additional physical knowledge incorporated into the ANN, extrapolation beyond the training data limits restricts the predictive performance.

12. Procedure Enabling Simulation and In-Depth Analysis of Optical Effects in Camera-Based Time-Of-Flight Sensors

    NASA Astrophysics Data System (ADS)

    Baumgart, M.; Druml, N.; Consani, M.

    2018-05-01

    This paper presents a simulation approach for Time-of-Flight cameras to estimate sensor performance and accuracy, as well as to help understand experimentally discovered effects. The main scope is the detailed simulation of the optical signals. We use a raytracing-based approach and use the optical path length as the master parameter for depth calculations. The procedure is described in detail with references to our implementation in Zemax OpticStudio and Python. Our simulation approach supports multiple and extended light sources and allows accounting for all effects within the geometrical optics model. In particular, multi-object reflection/scattering ray paths, translucent objects, and aberration effects (e.g. distortion caused by the ToF lens) are supported. The optical path length approach also enables the implementation of different ToF sensor types and transient imaging evaluations. The main features are demonstrated on a simple 3D test scene.

  13. A hybrid method of estimating pulsating flow parameters in the space-time domain

    NASA Astrophysics Data System (ADS)

    Pałczyński, Tomasz

    2017-05-01

    This paper presents a method for estimating pulsating flow parameters in partially open pipes, such as pipelines, internal combustion engine inlets, exhaust pipes and piston compressors. The procedure is based on the method of characteristics, and employs a combination of measurements and simulations. An experimental test rig is described, which enables pressure, temperature and mass flow rate to be measured within a defined cross section. The second part of the paper discusses the main assumptions of a simulation algorithm elaborated in the Matlab/Simulink environment. The simulation results are shown as 3D plots in the space-time domain, and compared with proposed models of phenomena relating to wave propagation, boundary conditions, acoustics and fluid mechanics. The simulation results are finally compared with acoustic phenomena, with an emphasis on the identification of resonant frequencies.
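
    In the method of characteristics used here, the 1-D unsteady pipe-flow equations reduce, in their simplest homentropic form (neglecting friction and heat-transfer source terms), to compatibility relations that hold along characteristic lines:

      \frac{\mathrm{d}p}{\mathrm{d}t} \pm \rho c \, \frac{\mathrm{d}u}{\mathrm{d}t} = 0
      \quad \text{along} \quad \frac{\mathrm{d}x}{\mathrm{d}t} = u \pm c,

    where p is pressure, u velocity, ρ density and c the speed of sound; the measured boundary data close the system at the pipe ends.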

  14. Singlet-Fission-Sensitized Hybrid Thin-Films For Next-Generation Photovoltaics

    DTIC Science & Technology

    2016-04-12

    This grant enabled the acquisition of equipment for the fabrication of organic and nanocrystal-based photovoltaic (PV) devices. A fabrication system with thermal evaporators and a spin-coater was constructed, and in order to characterize the PV devices, a solar simulator and a semiconductor parameter analyzer were acquired.

  15. Using Active Learning for Speeding up Calibration in Simulation Models.

    PubMed

    Cevik, Mucahit; Ergun, Mehmet Ali; Stout, Natasha K; Trentham-Dietz, Amy; Craven, Mark; Alagoz, Oguzhan

    2016-07-01

    Most cancer simulation models include unobservable parameters that determine disease onset and tumor growth. These parameters play an important role in matching key outcomes such as cancer incidence and mortality, and their values are typically estimated via a lengthy calibration procedure, which involves evaluating a large number of combinations of parameter values via simulation. The objective of this study is to demonstrate how machine learning approaches can be used to accelerate the calibration process by reducing the number of parameter combinations that are actually evaluated. Active learning is a popular machine learning method that enables a learning algorithm such as artificial neural networks to interactively choose which parameter combinations to evaluate. We developed an active learning algorithm to expedite the calibration process. Our algorithm determines the parameter combinations that are more likely to produce desired outputs and therefore reduces the number of simulation runs performed during calibration. We demonstrate our method using the previously developed University of Wisconsin breast cancer simulation model (UWBCS). In a recent study, calibration of the UWBCS required the evaluation of 378 000 input parameter combinations to build a race-specific model, and only 69 of these combinations produced results that closely matched observed data. By using the active learning algorithm in conjunction with standard calibration methods, we identify all 69 parameter combinations by evaluating only 5620 of the 378 000 combinations. Machine learning methods hold potential in guiding model developers in the selection of more promising parameter combinations and hence speeding up the calibration process. Applying our machine learning algorithm to one model shows that evaluating only 1.49% of all parameter combinations would be sufficient for the calibration. © The Author(s) 2015.
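
    A toy sketch of the screening loop (scikit-learn; the "simulator" and all numbers are illustrative stand-ins for the UWBCS calibration target):

      import numpy as np
      from sklearn.neural_network import MLPClassifier

      rng = np.random.default_rng(4)
      pool = rng.uniform(0.0, 1.0, size=(20000, 3))      # parameter combinations

      def simulator_matches(p):                          # toy calibration check
          return np.linalg.norm(p - 0.5, axis=1) < 0.25

      idx = rng.choice(len(pool), 500, replace=False)    # initial random design
      X, y = pool[idx], simulator_matches(pool[idx])
      clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
      evaluated = len(X)
      for _ in range(5):                                 # active-learning rounds
          clf.fit(X, y)
          scores = clf.predict_proba(pool)[:, 1]
          pick = np.argsort(scores)[-200:]               # most promising combos
          X = np.vstack([X, pool[pick]])
          y = np.concatenate([y, simulator_matches(pool[pick])])
          evaluated += len(pick)
      print("evaluated", evaluated, "of", len(pool), "; matches found:", int(y.sum()))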

  16. Using Active Learning for Speeding up Calibration in Simulation Models

    PubMed Central

    Cevik, Mucahit; Ali Ergun, Mehmet; Stout, Natasha K.; Trentham-Dietz, Amy; Craven, Mark; Alagoz, Oguzhan

    2015-01-01

    Background: Most cancer simulation models include unobservable parameters that determine disease onset and tumor growth. These parameters play an important role in matching key outcomes such as cancer incidence and mortality, and their values are typically estimated via a lengthy calibration procedure, which involves evaluating a large number of combinations of parameter values via simulation. The objective of this study is to demonstrate how machine learning approaches can be used to accelerate the calibration process by reducing the number of parameter combinations that are actually evaluated. Methods: Active learning is a popular machine learning method that enables a learning algorithm such as artificial neural networks to interactively choose which parameter combinations to evaluate. We develop an active learning algorithm to expedite the calibration process. Our algorithm determines the parameter combinations that are more likely to produce desired outputs and therefore reduces the number of simulation runs performed during calibration. We demonstrate our method using the previously developed University of Wisconsin Breast Cancer Simulation Model (UWBCS). Results: In a recent study, calibration of the UWBCS required the evaluation of 378,000 input parameter combinations to build a race-specific model, and only 69 of these combinations produced results that closely matched observed data. By using the active learning algorithm in conjunction with standard calibration methods, we identify all 69 parameter combinations by evaluating only 5620 of the 378,000 combinations. Conclusion: Machine learning methods hold potential in guiding model developers in the selection of more promising parameter combinations and hence speeding up the calibration process. Applying our machine learning algorithm to one model shows that evaluating only 1.49% of all parameter combinations would be sufficient for the calibration. PMID:26471190

  17. Construction of multi-functional open modulized Matlab simulation toolbox for imaging ladar system

    NASA Astrophysics Data System (ADS)

    Wu, Long; Zhao, Yuan; Tang, Meng; He, Jiang; Zhang, Yong

    2011-06-01

    Ladar system simulation uses computer simulation of ladar models to predict the performance of a ladar system. This paper reviews developments in laser imaging radar simulation in domestic and overseas studies, and surveys computer simulations of ladar systems with different application requirements. The LadarSim and FOI-LadarSIM simulation facilities of Utah State University and the Swedish Defence Research Agency are introduced in detail. Domestic research in imaging ladar system simulation has been limited in simulation scale and non-unified in design, mostly achieving simple functional simulation based on ranging equations for ladar systems. A laser imaging radar simulation with an open and modularized structure is therefore proposed, with unified modules for the ladar system, laser emitter, atmosphere models, target models, signal receiver, parameter settings and system controller. A unified Matlab toolbox and standard control modules have been built with regulated inputs and outputs of the functions, and communication protocols between hardware modules. A simulation of an ICCD gain-modulated imaging ladar system viewing a space shuttle is performed with the toolbox. The simulation result shows that the models and parameter settings of the Matlab toolbox are able to simulate the actual detection process precisely. The unified control module and pre-defined parameter settings simplify the simulation of imaging ladar detection. Its open structure enables the toolbox to be modified for specialized requests, and the modularization gives simulations flexibility.

  18. Precise measurement of the angular correlation parameter aβν in the β decay of 35Ar with LPCTrap

    NASA Astrophysics Data System (ADS)

    Fabian, X.; Ban, G.; Boussaïd, R.; Breitenfeldt, M.; Couratin, C.; Delahaye, P.; Durand, D.; Finlay, P.; Fléchard, X.; Guillon, B.; Lemière, Y.; Leredde, A.; Liénard, E.; Méry, A.; Naviliat-Cuncic, O.; Pierre, E.; Porobic, T.; Quéméner, G.; Rodríguez, D.; Severijns, N.; Thomas, J. C.; Van Gorp, S.

    2014-03-01

    Precise measurements in the β decay of the 35Ar nucleus enable a search for deviations from the Standard Model (SM) in the weak sector. These measurements make it possible either to check the unitarity of the CKM matrix or to constrain the existence of exotic currents rejected in the V-A theory of the SM. For this purpose, the β-ν angular correlation parameter, aβν, is inferred from a comparison between experimental and simulated recoil-ion time-of-flight distributions following the quasi-pure Fermi transition of 35Ar1+ ions confined in the transparent Paul trap of the LPCTrap device at GANIL. During the last experiment, 1.5×10⁶ good events were collected, which corresponds to an expected precision of less than 0.5% on the aβν value. The required simulation is divided between the use of massive GPU parallelization and the GEANT4 toolkit for the source-cloud kinematics and the tracking of the decay products.

  19. Design Analysis Kit for Optimization and Terascale Applications 6.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2015-10-19

    Sandia's Dakota software (available at http://dakota.sandia.gov) supports science and engineering transformation through advanced exploration of simulations. Specifically, it manages and analyzes ensembles of simulations to provide broader and deeper perspective for analysts and decision makers. This enables them to: (1) enhance understanding of risk, (2) improve products, and (3) assess simulation credibility. In its simplest mode, Dakota can automate typical parameter variation studies through a generic interface to a computational model. However, Dakota also delivers advanced parametric analysis techniques enabling design exploration, optimization, model calibration, risk analysis, and quantification of margins and uncertainty with such models. It directly supports verification and validation activities. The algorithms implemented in Dakota aim to address challenges in performing these analyses with complex science and engineering models from desktop to high performance computers.

  20. Parametric investigations of plasma characteristics in a remote inductively coupled plasma system

    NASA Astrophysics Data System (ADS)

    Shukla, Prasoon; Roy, Abhra; Jain, Kunal; Bhoj, Ananth

    2016-09-01

    Designing a remote plasma system involves source chamber sizing, selection of coils and/or electrodes to power the plasma, designing the downstream tubes, selection of materials used in the source and downstream regions, locations of inlets and outlets, and finally optimizing the process parameter space of pressure, gas flow rates and power delivery. Simulations can aid in spatial and temporal plasma characterization at what are often inaccessible locations for experimental probes in the source chamber. In this paper, we report on simulations of a remote inductively coupled Argon plasma system using the modeling platform CFD-ACE+. The coupled multiphysics model description successfully addresses flow, chemistry, electromagnetics, heat transfer and plasma transport in the remote plasma system. The SimManager tool enables easy setup of parametric simulations to investigate the effect of varying the pressure, power, frequency, flow rates and downstream tube lengths. It can also enable the automatic solution of the varied parameters to optimize a user-defined objective function, which may be the integral ion and radical fluxes at the wafer. The fast run time, coupled with the parametric and optimization capabilities, can add significant insight and value in design and optimization.

  1. Desktop Application Program to Simulate Cargo-Air-Drop Tests

    NASA Technical Reports Server (NTRS)

    Cuthbert, Peter

    2009-01-01

    The DSS Application is a computer program comprising a Windows version of the UNIX-based Decelerator System Simulation (DSS) coupled with an Excel front end. The DSS is an executable code that simulates the dynamics of airdropped cargo from first motion in an aircraft through landing. The bare DSS is difficult to use; the front end makes it easy to use. All inputs to the DSS, control of execution of the DSS, and postprocessing and plotting of outputs are handled in the front end. The front end is graphics-intensive. The Excel software provides the graphical elements without need for additional programming. Categories of input parameters are divided into separate tabbed windows. Pop-up comments describe each parameter. An error-checking software component evaluates combinations of parameters and alerts the user if an error results. Case files can be created from inputs, making it possible to build cases from previous ones. Simulation output is plotted in 16 charts displayed on a separate worksheet, enabling plotting of multiple DSS cases with flight-test data. Variables assigned to each plot can be changed. Selected input parameters can be edited from the plot sheet for quick sensitivity studies.

  2. Systematic simulations of modified gravity: chameleon models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brax, Philippe; Davis, Anne-Christine; Li, Baojiu

    2013-04-01

    In this work we systematically study the linear and nonlinear structure formation in chameleon theories of modified gravity, using a generic parameterisation which describes a large class of models using only 4 parameters. For this we have modified the N-body simulation code ecosmog to perform a total of 65 simulations for different models and parameter values, including the default ΛCDM. These simulations enable us to explore a significant portion of the parameter space. We have studied the effects of modified gravity on the matter power spectrum and mass function, and found a rich and interesting phenomenology where the difference with the ΛCDM paradigm cannot be reproduced by a linear analysis even on scales as large as k ∼ 0.05 h Mpc⁻¹, since the latter incorrectly assumes that the modification of gravity depends only on the background matter density. Our results show that the chameleon screening mechanism is significantly more efficient than other mechanisms such as the dilaton and symmetron, especially in high-density regions and at early times, and can serve as a guidance to determine the parts of the chameleon parameter space which are cosmologically interesting and thus merit further studies in the future.

  3. Monte Carlo based NMR simulations of open fractures in porous media

    NASA Astrophysics Data System (ADS)

    Lukács, Tamás; Balázs, László

    2014-05-01

    According to the basic principles of nuclear magnetic resonance (NMR), a measurement's free induction decay curve has an exponential character whose parameter is the transversal relaxation time, T2, given by the Bloch equations in the rotating frame. Our simulations address the particular case in which the bulk volume is negligible relative to the whole system and vertical movement is essentially zero, so the diffusion term of the T2 relation can be dropped. Such small-aperture situations are common in sedimentary layers, and the smallness of the observed volume allows us to work with only the bulk relaxation and the surface relaxation. The simulation uses the Monte Carlo method: it is based on a random-walk generator which produces the Brownian motion of the particles from uniformly distributed, pseudorandom numbers. An attached differential equation accounts for the bulk relaxation, and the initial and iterated conditions guarantee the simulation's replicability and enable consistent estimates. We generate an initial geometry of a plane segment with known height and a given number of particles; the spatial distribution is set equal in each simulation, and the surface-to-volume ratio remains constant. It follows that, for a given thickness of the open fracture, the surface relaxivity can be determined from the fitted curve's parameter. The calculated T2 distribution curves also indicate the variability of the observed fracture situations. Varying the height of the lamina at a constant diffusion coefficient likewise produces a characteristic anomaly; for comparison we have run the simulation with the same initial volume, number of particles, and conditions in spherical bulks, whose profiles are clear and easy to interpret. The surface relaxation enables us to estimate the interaction between the boundary materials for these two geometrically well-defined bulk shapes; the resulting distribution therefore serves as a basis for porosity estimation and can be used to identify fine-grained porous media.
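
    A compressed sketch of the random-walk scheme just described, for a one-dimensional slab of aperture h with surface relaxation modeled as a per-collision kill probability (all numerical values below are illustrative assumptions, not the parameters used in the study):

    ```python
    import numpy as np

    rng = np.random.default_rng(42)          # fixed seed -> replicable runs
    n, steps, dt = 5000, 2000, 1e-4          # walkers, iterations, time step [s]
    h, D = 50e-6, 2.3e-9                     # slab aperture [m], diffusion coeff [m^2/s]
    kill_prob, T2_bulk = 0.01, 2.0           # surface relaxation prob., bulk T2 [s]

    z = rng.uniform(0.0, h, n)               # uniform initial spatial distribution
    alive = np.ones(n, dtype=bool)
    survivors = []
    step_len = np.sqrt(2.0 * D * dt)         # 1-D Brownian step length

    for _ in range(steps):
        z[alive] += rng.choice([-step_len, step_len], alive.sum())
        hit = alive & ((z <= 0.0) | (z >= h))            # wall contact
        killed = hit & (rng.random(n) < kill_prob)       # surface relaxation
        alive &= ~killed
        z = np.clip(z, 0.0, h)                           # reflect the rest
        survivors.append(alive.sum())

    t = dt * np.arange(1, steps + 1)
    M = np.array(survivors) / n * np.exp(-t / T2_bulk)   # add bulk decay
    T2_eff = -t[-1] / np.log(M[-1])                      # crude single-exponential estimate
    print(f"effective T2 ~ {T2_eff:.3f} s")
    ```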

  4. Acid-Base Disorders--A Computer Simulation.

    ERIC Educational Resources Information Center

    Maude, David L.

    1985-01-01

    Describes and lists a program for Apple Pascal Version 1.1 which investigates the behavior of the bicarbonate-carbon dioxide buffer system in acid-base disorders. Designed specifically for the preclinical medical student, the program has proven easy to use and enables students to use blood gas parameters to arrive at diagnoses. (DH)
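
    The bicarbonate-carbon dioxide buffer system explored by the program is governed by the Henderson-Hasselbalch relation; in its standard clinical form (pKa ≈ 6.1, CO2 solubility ≈ 0.03 mmol/L per mmHg):

    ```latex
    \mathrm{pH} = 6.1 + \log_{10}\frac{[\mathrm{HCO_3^-}]}{0.03 \times p\mathrm{CO_2}}
    ```

    Substituting measured pCO2 and bicarbonate values into this relation is what allows blood gas parameters to be mapped onto the standard acid-base diagnoses.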

  5. Least-Squares Adaptive Control Using Chebyshev Orthogonal Polynomials

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.; Burken, John; Ishihara, Abraham

    2011-01-01

    This paper presents a new adaptive control approach using Chebyshev orthogonal polynomials as basis functions in a least-squares functional approximation. The use of orthogonal basis functions improves the function approximation significantly and enables better convergence of parameter estimates. Flight control simulations demonstrate the effectiveness of the proposed adaptive control approach.
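
    A minimal illustration of least-squares functional approximation in a Chebyshev orthogonal basis, using NumPy's Chebyshev module (this shows the basis-function idea only, not the adaptive flight-control implementation):

    ```python
    import numpy as np

    # Sample an unknown scalar function on [-1, 1] with measurement noise.
    rng = np.random.default_rng(0)
    x = np.linspace(-1.0, 1.0, 200)
    y = np.exp(x) * np.sin(3.0 * x) + 0.01 * rng.standard_normal(x.size)

    # Least-squares fit in a degree-8 Chebyshev basis.
    coeffs = np.polynomial.chebyshev.chebfit(x, y, deg=8)
    approx = np.polynomial.chebyshev.chebval(x, coeffs)

    print("max abs error:", np.max(np.abs(approx - np.exp(x) * np.sin(3.0 * x))))
    ```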

  6. Sensitivity of echo enabled harmonic generation to sinusoidal electron beam energy structure

    DOE PAGES

    Hemsing, E.; Garcia, B.; Huang, Z.; ...

    2017-06-19

    Here, we analytically examine the bunching factor spectrum of a relativistic electron beam with sinusoidal energy structure that then undergoes an echo-enabled harmonic generation (EEHG) transformation to produce high harmonics. The performance is found to be described primarily by a simple scaling parameter. The dependence of the bunching amplitude on fluctuations of critical parameters is derived analytically and compared with simulations. Where applicable, EEHG is also compared with high gain harmonic generation (HGHG), and we find that EEHG is generally less sensitive to several types of energy structure. In the presence of intermediate frequency modulations like those produced by the microbunching instability, EEHG has a substantially narrower intrinsic bunching pedestal.

  7. Towards an integrative computational model for simulating tumor growth and response to radiation therapy

    NASA Astrophysics Data System (ADS)

    Marrero, Carlos Sosa; Aubert, Vivien; Ciferri, Nicolas; Hernández, Alfredo; de Crevoisier, Renaud; Acosta, Oscar

    2017-11-01

    Understanding the response to irradiation in cancer radiotherapy (RT) may help devise new strategies with improved tumor local control. Computational models may allow us to unravel the underlying radiosensitive mechanisms intervening in the dose-response relationship. By using extensive simulations, a wide range of parameters may be evaluated, providing insights on tumor response and generating useful data to plan modified treatments. We propose in this paper a computational model of tumor growth and radiation response which allows a whole RT protocol to be simulated. Proliferation of tumor cells, the cell life-cycle, oxygen diffusion, radiosensitivity, RT response, and resorption of killed cells were implemented in a multiscale framework. The model was developed in C++, using the Multi-formalism Modeling and Simulation Library (M2SL). Radiosensitivity parameters extracted from the literature enabled us to simulate a prostate cell tissue on a regular (voxel-wise) grid. Histopathological specimens with different aggressiveness levels extracted from patients after prostatectomy were used to initialize the in silico simulations. Results on tumor growth exhibit good agreement with data from in vitro studies. Moreover, a standard fractionation of 2 Gy/fraction with a total dose of 80 Gy, as in a real RT treatment, was applied with varying radiosensitivity and oxygen diffusion parameters. As expected, the strong influence of these parameters was observed by measuring the percentage of surviving tumor cells after RT. This work paves the way to further models allowing the simulation of increased doses in modified hypofractionated schemes and the development of new patient-specific combined therapies.
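
    The abstract does not spell out the dose-response law, but the conventional choice for coupling fractionated dose to cell kill is the linear-quadratic (LQ) survival model; for n fractions of dose d each (here d = 2 Gy and n = 40 for the 80 Gy protocol), the surviving fraction is

    ```latex
    S = \exp\left[-n\left(\alpha d + \beta d^{2}\right)\right]
    ```

    with radiosensitivity parameters α and β, typically modulated by local oxygenation, which is consistent with the reported sensitivity of post-RT survival to the radiosensitivity and oxygen diffusion parameters.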

  8. Enabling Computational Nanotechnology through JavaGenes in a Cycle Scavenging Environment

    NASA Technical Reports Server (NTRS)

    Globus, Al; Menon, Madhu; Srivastava, Deepak; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    A genetic algorithm procedure is developed and implemented for fitting parameters for many-body inter-atomic force field functions for simulating nanotechnology atomistic applications using portable Java on cycle-scavenged heterogeneous workstations. Given a physics based analytic functional form for the force field, correlated parameters in a multi-dimensional environment are typically chosen to fit properties given either by experiments and/or by higher accuracy quantum mechanical simulations. The implementation automates this tedious procedure using an evolutionary computing algorithm operating on hundreds of cycle-scavenged computers. As a proof of concept, we demonstrate the procedure for evaluating the Stillinger-Weber (S-W) potential by (a) reproducing the published parameters for Si using S-W energies in the fitness function, and (b) evolving a "new" set of parameters using semi-empirical tightbinding energies in the fitness function. The "new" parameters are significantly better suited for Si cluster energies and forces as compared to even the published S-W potential.
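
    The evolutionary fitting loop can be sketched compactly; here the force-field and reference energies are toy stand-ins (a real application would evaluate the Stillinger-Weber functional form against quantum-mechanical or experimental target energies):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def reference_energy():
        # Stand-in for experimental / quantum-mechanical target energies.
        return np.array([1.0, -2.0, 0.5])

    def model_energy(params):
        # Stand-in for an analytic force field evaluated at fixed configurations.
        a, b, c = params
        return np.array([a + b, a - 2.0 * b + c, c - b])

    def fitness(params):
        err = model_energy(params) - reference_energy()
        return -np.sum(err ** 2)          # higher is better

    pop = rng.normal(size=(50, 3))        # random initial population
    for gen in range(200):
        scores = np.array([fitness(p) for p in pop])
        parents = pop[np.argsort(scores)[-10:]]            # truncation selection
        children = parents[rng.integers(0, 10, 40)] \
                 + 0.1 * rng.normal(size=(40, 3))          # mutation
        pop = np.vstack([parents, children])

    best = pop[np.argmax([fitness(p) for p in pop])]
    print("best parameters:", best, "fitness:", fitness(best))
    ```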

  9. A framework for optimization and quantification of uncertainty and sensitivity for developing carbon capture systems

    DOE PAGES

    Eslick, John C.; Ng, Brenda; Gao, Qianwen; ...

    2014-12-31

    Under the auspices of the U.S. Department of Energy's Carbon Capture Simulation Initiative (CCSI), a Framework for Optimization and Quantification of Uncertainty and Sensitivity (FOQUS) has been developed. This tool enables carbon capture systems to be rapidly synthesized and rigorously optimized, in an environment that accounts for and propagates uncertainties in parameters and models. FOQUS currently enables (1) the development of surrogate algebraic models utilizing the ALAMO algorithm, which can be used for superstructure optimization to identify optimal process configurations, (2) simulation-based optimization utilizing derivative free optimization (DFO) algorithms with detailed black-box process models, and (3) rigorous uncertainty quantification through PSUADE. FOQUS utilizes another CCSI technology, the Turbine Science Gateway, to manage the thousands of simulated runs necessary for optimization and UQ. Thus, this computational framework has been demonstrated for the design and analysis of a solid sorbent based carbon capture system.

  10. Transient dynamics of a nonlinear magneto-optical rotation

    NASA Astrophysics Data System (ADS)

    Grewal, Raghwinder Singh; Pustelny, S.; Rybak, A.; Florkowski, M.

    2018-04-01

    We analyze nonlinear magneto-optical rotation (NMOR) in rubidium vapor subjected to a continuously scanned magnetic field. By varying the magnetic-field sweep rate, a transition from traditionally observed dispersivelike NMOR signals (low sweep rate) to oscillating signals (higher sweep rates) is demonstrated. The transient oscillatory behavior is studied versus light and magnetic-field parameters, revealing a strong dependence of the signals on magnetic sweep rate and light intensity. The experimental results are supported with density-matrix calculations, which enable quantitative analysis of the effect. Fitting of the signals simulated versus different parameters with a theoretically motivated curve reveals the presence of oscillatory and static components in the signals. The components depend differently on the system parameters, which suggests their distinct nature. The investigations provide insight into the dynamics of ground-state coherence generation and enable application of NMOR in detection of transient spin couplings.

  11. cosmoabc: Likelihood-free inference for cosmology

    NASA Astrophysics Data System (ADS)

    Ishida, Emille E. O.; Vitenti, Sandro D. P.; Penna-Lima, Mariana; Trindade, Arlindo M.; Cisewski, Jessi; de Souza, Rafael; Cameron, Ewan; Busti, Vinicius C.

    2015-05-01

    Approximate Bayesian Computation (ABC) enables parameter inference for complex physical systems in cases where the true likelihood function is unknown, unavailable, or computationally too expensive. It relies on the forward simulation of mock data and comparison between observed and synthetic catalogs. cosmoabc is a Python Approximate Bayesian Computation (ABC) sampler featuring a Population Monte Carlo variation of the original ABC algorithm, which uses an adaptive importance sampling scheme. The code can be coupled to an external simulator to allow incorporation of arbitrary distance and prior functions. When coupled with the numcosmo library, it has been used to estimate posterior probability distributions over cosmological parameters based on measurements of galaxy clusters number counts without computing the likelihood function.
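
    The core ABC idea is easy to sketch as a rejection sampler; cosmoabc itself implements the more efficient Population Monte Carlo variant, and the names below are illustrative rather than cosmoabc's API:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    observed = rng.normal(1.5, 1.0, size=500)      # stand-in "catalog"

    def simulator(theta, size=500):
        return rng.normal(theta, 1.0, size)        # forward-simulated mock data

    def distance(a, b):
        return abs(a.mean() - b.mean())            # summary-statistic distance

    def abc_rejection(n_accept=200, eps=0.1):
        posterior = []
        while len(posterior) < n_accept:
            theta = rng.uniform(-5.0, 5.0)         # draw from the prior
            if distance(simulator(theta), observed) < eps:
                posterior.append(theta)            # keep draws that match data
        return np.array(posterior)

    samples = abc_rejection()
    print("posterior mean ~", samples.mean())
    ```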

  12. Nuclear Engine System Simulation (NESS) version 2.0

    NASA Technical Reports Server (NTRS)

    Pelaccio, Dennis G.; Scheil, Christine M.; Petrosky, Lyman J.

    1993-01-01

    The topics are presented in viewgraph form and include the following; nuclear thermal propulsion (NTP) engine system analysis program development; nuclear thermal propulsion engine analysis capability requirements; team resources used to support NESS development; expanded liquid engine simulations (ELES) computer model; ELES verification examples; NESS program development evolution; past NTP ELES analysis code modifications and verifications; general NTP engine system features modeled by NESS; representative NTP expander, gas generator, and bleed engine system cycles modeled by NESS; NESS program overview; NESS program flow logic; enabler (NERVA type) nuclear thermal rocket engine; prismatic fuel elements and supports; reactor fuel and support element parameters; reactor parameters as a function of thrust level; internal shield sizing; and reactor thermal model.

  13. Numerical studies of electron dynamics in oblique quasi-perpendicular collisionless shock waves

    NASA Technical Reports Server (NTRS)

    Liewer, P. C.; Decyk, V. K.; Dawson, J. M.; Lembege, B.

    1991-01-01

    Linear and nonlinear electron damping of the whistler precursor wave train of low Mach number quasi-perpendicular oblique shocks is studied using a one-dimensional electromagnetic plasma simulation code with particle electrons and ions. In some parameter regimes, electrons are observed to trap along the magnetic field lines in the potential of the whistler precursor wave train. This trapping can lead to significant electron heating in front of the shock for low beta(e). Use of a 64-processor hypercube concurrent computer has enabled long runs using realistic mass ratios in the full particle-in-cell code, and thus the simulation of shock parameter regimes and phenomena not previously studied numerically.

  14. Simulating observations with HARMONI: the integral field spectrograph for the European Extremely Large Telescope

    NASA Astrophysics Data System (ADS)

    Zieleniewski, Simon; Thatte, Niranjan; Kendrew, Sarah; Houghton, Ryan; Tecza, Matthias; Clarke, Fraser; Fusco, Thierry; Swinbank, Mark

    2014-07-01

    With the next generation of extremely large telescopes commencing construction, there is an urgent need for detailed quantitative predictions of the scientific observations that these new telescopes will enable. Most of these new telescopes will have adaptive optics fully integrated with the telescope itself, allowing unprecedented spatial resolution combined with enormous sensitivity. However, the adaptive optics point spread function will be strongly wavelength dependent, requiring detailed simulations that accurately model these variations. We have developed a simulation pipeline for the HARMONI integral field spectrograph, a first light instrument for the European Extremely Large Telescope. The simulator takes high-resolution input data-cubes of astrophysical objects and processes them with accurate atmospheric, telescope and instrumental effects, to produce mock observed cubes for chosen observing parameters. The output cubes represent the result of a perfect data reduction process, enabling a detailed analysis and comparison between input and output, showcasing HARMONI's capabilities. The simulations utilise a detailed knowledge of the telescope's wavelength dependent adaptive optics point spread function. We discuss the simulation pipeline and present an early example of the pipeline functionality for simulating observations of high redshift galaxies.

  15. Evolution Model and Simulation of Profit Model of Agricultural Products Logistics Financing

    NASA Astrophysics Data System (ADS)

    Yang, Bo; Wu, Yan

    2018-03-01

    The agricultural products logistics financial warehousing business mainly involves three parties: agricultural production and processing enterprises, third-party logistics enterprises, and financial institutions. To enable the three parties to achieve a win-win situation, the article first derives the replicator dynamics and evolutionarily stable strategies for the three parties' business participation. Then, using the NetLogo simulation platform and an overall Multi-Agent modeling and simulation method, an evolutionary game simulation model is established and run under different revenue parameters; finally, the simulation results are analyzed. The aim is a mutually beneficial, win-win outcome for the three parties participating in the agricultural products logistics financing warehouse business, thus promoting the smooth flow of agricultural products logistics.
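
    The replicator dynamics underlying such an evolutionary game take the standard form: for strategy share x_i with payoff f_i(x) and population-average payoff f̄(x),

    ```latex
    \dot{x}_i = x_i\left(f_i(x) - \bar{f}(x)\right), \qquad \bar{f}(x) = \sum_j x_j f_j(x)
    ```

    An evolutionarily stable strategy is a rest point of these equations that resists invasion by small deviations; the Multi-Agent NetLogo runs effectively sample this dynamic for each choice of revenue parameters.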

  16. Multiscale Hy3S: hybrid stochastic simulation for supercomputers.

    PubMed

    Salis, Howard; Sotiropoulos, Vassilios; Kaznessis, Yiannis N

    2006-02-24

    Stochastic simulation has become a useful tool to both study natural biological systems and design new synthetic ones. By capturing the intrinsic molecular fluctuations of "small" systems, these simulations produce a more accurate picture of single cell dynamics, including interesting phenomena missed by deterministic methods, such as noise-induced oscillations and transitions between stable states. However, the computational cost of the original stochastic simulation algorithm can be high, motivating the use of hybrid stochastic methods. Hybrid stochastic methods partition the system into multiple subsets and describe each subset as a different representation, such as a jump Markov, Poisson, continuous Markov, or deterministic process. By applying valid approximations and self-consistently merging disparate descriptions, a method can be considerably faster, while retaining accuracy. In this paper, we describe Hy3S, a collection of multiscale simulation programs. Building on our previous work on developing novel hybrid stochastic algorithms, we have created the Hy3S software package to enable scientists and engineers to both study and design extremely large well-mixed biological systems with many thousands of reactions and chemical species. We have added adaptive stochastic numerical integrators to permit the robust simulation of dynamically stiff biological systems. In addition, Hy3S has many useful features, including embarrassingly parallelized simulations with MPI; special discrete events, such as transcriptional and translational elongation and cell division; mid-simulation perturbations in both the number of molecules of species and reaction kinetic parameters; combinatorial variation of both initial conditions and kinetic parameters to enable sensitivity analysis; use of the NetCDF optimized binary format to quickly read and write large datasets; and a simple graphical user interface, written in Matlab, to help users create biological systems and analyze data. We demonstrate the accuracy and efficiency of Hy3S with examples, including a large-scale system benchmark and a complex bistable biochemical network with positive feedback. The software itself is open-sourced under the GPL license and is modular, allowing users to modify it for their own purposes. Hy3S is a powerful suite of simulation programs for simulating the stochastic dynamics of networks of biochemical reactions. Its first public version enables computational biologists to more efficiently investigate the dynamics of realistic biological systems.
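
    For reference, the jump-Markov description at the heart of such hybrid schemes is the Gillespie stochastic simulation algorithm; a minimal sketch for a single first-order reaction (Hy3S couples this exact description with Poisson, continuous-Markov, and deterministic ones):

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def gillespie_decay(n0=100, k=0.1, t_end=50.0):
        """Exact SSA trajectory for a single first-order decay A -> 0."""
        t, n = 0.0, n0
        times, counts = [t], [n]
        while n > 0 and t < t_end:
            a = k * n                          # total propensity
            t += rng.exponential(1.0 / a)      # exponential waiting time
            n -= 1                             # fire the (only) reaction
            times.append(t)
            counts.append(n)
        return np.array(times), np.array(counts)

    t, n = gillespie_decay()
    print(f"sampled half-life: t(n=50) = {t[n <= 50][0]:.2f} (theory ~ {np.log(2)/0.1:.2f})")
    ```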

  17. Toward Improved Description of DNA Backbone: Revisiting Epsilon and Zeta Torsion Force Field Parameters

    PubMed Central

    Zgarbová, Marie; Luque, F. Javier; Šponer, Jiří; Cheatham, Thomas E.; Otyepka, Michal; Jurečka, Petr

    2013-01-01

    We present a refinement of the backbone torsion parameters ε and ζ of the Cornell et al. AMBER force field for DNA simulations. The new parameters, denoted as εζOL1, were derived from quantum-mechanical calculations with inclusion of conformation-dependent solvation effects according to the recently reported methodology (J. Chem. Theory Comput. 2012, 7(9), 2886-2902). The performance of the refined parameters was analyzed by means of extended molecular dynamics (MD) simulations for several representative systems. The results showed that the εζOL1 refinement improves the backbone description of B-DNA double helices and G-DNA stem. In B-DNA simulations, we observed an average increase of the helical twist and narrowing of the major groove, thus achieving better agreement with X-ray and solution NMR data. The balance between populations of BI and BII backbone substates was shifted towards the BII state, in better agreement with ensemble-refined solution experimental results. Furthermore, the refined parameters decreased the backbone RMS deviations in B-DNA MD simulations. In the antiparallel guanine quadruplex (G-DNA) the εζOL1 modification improved the description of non-canonical α/γ backbone substates, which were shown to be coupled to the ε/ζ torsion potential. Thus, the refinement is suggested as a possible alternative to the current ε/ζ torsion potential, which may enable more accurate modeling of nucleic acids. However, long-term testing is recommended before its routine application in DNA simulations. PMID:24058302

  18. Simulation of the Tsunami Resulting from the M 9.2 2004 Sumatra-Andaman Earthquake - Dynamic Rupture vs. Seismic Inversion Source Model

    NASA Astrophysics Data System (ADS)

    Vater, Stefan; Behrens, Jörn

    2017-04-01

    Simulations of historic tsunami events such as the 2004 Sumatra or the 2011 Tohoku event are usually initialized using earthquake sources resulting from inversion of seismic data. Also, other data from ocean buoys etc. is sometimes included in the derivation of the source model. The associated tsunami event can often be well simulated in this way, and the results show high correlation with measured data. However, it is unclear how the derived source model compares to the particular earthquake event. In this study we use the results from dynamic rupture simulations obtained with SeisSol, a software package based on an ADER-DG discretization solving the spontaneous dynamic earthquake rupture problem with high-order accuracy in space and time. The tsunami model is based on a second-order Runge-Kutta discontinuous Galerkin (RKDG) scheme on triangular grids and features a robust wetting and drying scheme for the simulation of inundation events at the coast. Adaptive mesh refinement enables the efficient computation of large domains, while at the same time it allows for high local resolution and geometric accuracy. The results are compared to measured data and results using earthquake sources based on inversion. With the approach of using the output of actual dynamic rupture simulations, we can estimate the influence of different earthquake parameters. Furthermore, the comparison to other source models enables a thorough comparison and validation of important tsunami parameters, such as the runup at the coast. This work is part of the ASCETE (Advanced Simulation of Coupled Earthquake and Tsunami Events) project, which aims at an improved understanding of the coupling between the earthquake and the generated tsunami event.

  19. Assessing the accuracy of subject-specific, muscle-model parameters determined by optimizing to match isometric strength.

    PubMed

    DeSmitt, Holly J; Domire, Zachary J

    2016-12-01

    Biomechanical models are sensitive to the choice of model parameters. Therefore, determination of accurate subject specific model parameters is important. One approach to generate these parameters is to optimize the values such that the model output will match experimentally measured strength curves. This approach is attractive as it is inexpensive and should provide an excellent match to experimentally measured strength. However, given the problem of muscle redundancy, it is not clear that this approach generates accurate individual muscle forces. The purpose of this investigation is to evaluate this approach using simulated data to enable a direct comparison. It is hypothesized that the optimization approach will be able to recreate accurate muscle model parameters when information from measurable parameters is given. A model of isometric knee extension was developed to simulate a strength curve across a range of knee angles. In order to realistically recreate experimentally measured strength, random noise was added to the modeled strength. Parameters were solved for using a genetic search algorithm. When noise was added to the measurements the strength curve was reasonably recreated. However, the individual muscle model parameters and force curves were far less accurate. Based upon this examination, it is clear that very different sets of model parameters can recreate similar strength curves. Therefore, experimental variation in strength measurements has a significant influence on the results. Given the difficulty in accurately recreating individual muscle parameters, it may be more appropriate to perform simulations with lumped actuators representing similar muscles.

  20. Rapid Monte Carlo Simulation of Gravitational Wave Galaxies

    NASA Astrophysics Data System (ADS)

    Breivik, Katelyn; Larson, Shane L.

    2015-01-01

    With the detection of gravitational waves on the horizon, astrophysical catalogs produced by gravitational wave observatories can be used to characterize the populations of sources and validate different galactic population models. Efforts to simulate gravitational wave catalogs and source populations generally focus on population synthesis models that require extensive time and computational power to produce a single simulated galaxy. Monte Carlo simulations can also be used to generate observation catalogs from a gravitational wave source population. Monte Carlo simulations have the advantages of flexibility and speed, enabling rapid galactic realizations as a function of galactic binary parameters with less time and fewer computational resources required. We present a Monte Carlo method for rapid galactic simulations of gravitational wave binary populations.

  1. Fast simulation tool for ultraviolet radiation at the earth's surface

    NASA Astrophysics Data System (ADS)

    Engelsen, Ola; Kylling, Arve

    2005-04-01

    FastRT is a fast, yet accurate, UV simulation tool that computes downward surface UV doses, UV indices, and irradiances in the spectral range 290 to 400 nm with a resolution as small as 0.05 nm. It computes a full UV spectrum within a few milliseconds on a standard PC, and enables the user to convolve the spectrum with user-defined and built-in spectral response functions including the International Commission on Illumination (CIE) erythemal response function used for UV index calculations. The program accounts for the main radiative input parameters, i.e., instrumental characteristics, solar zenith angle, ozone column, aerosol loading, clouds, surface albedo, and surface altitude. FastRT is based on look-up tables of carefully selected entries of atmospheric transmittances and spherical albedos, and exploits the smoothness of these quantities with respect to atmospheric, surface, geometrical, and spectral parameters. An interactive site, http://nadir.nilu.no/~olaeng/fastrt/fastrt.html, enables the public to run the FastRT program with most input options. This page also contains updated information about FastRT and links to freely downloadable source codes and binaries.

  2. Penalized spline estimation for functional coefficient regression models.

    PubMed

    Cao, Yanrong; Lin, Haiqun; Wu, Tracy Z; Yu, Yan

    2010-04-01

    The functional coefficient regression models assume that the regression coefficients vary with some "threshold" variable, providing appreciable flexibility in capturing the underlying dynamics in data and avoiding the so-called "curse of dimensionality" in multivariate nonparametric estimation. We first investigate the estimation, inference, and forecasting for the functional coefficient regression models with dependent observations via penalized splines. The P-spline approach, as a direct ridge regression shrinkage type global smoothing method, is computationally efficient and stable. With established fixed-knot asymptotics, inference is readily available. Exact inference can be obtained for fixed smoothing parameter λ, which is most appealing for finite samples. Our penalized spline approach gives an explicit model expression, which also enables multi-step-ahead forecasting via simulations. Furthermore, we examine different methods of choosing the important smoothing parameter λ: modified multi-fold cross-validation (MCV), generalized cross-validation (GCV), and an extension of empirical bias bandwidth selection (EBBS) to P-splines. In addition, we implement smoothing parameter selection using mixed model framework through restricted maximum likelihood (REML) for P-spline functional coefficient regression models with independent observations. The P-spline approach also easily allows different smoothness for different functional coefficients, which is enabled by assigning different penalty λ accordingly. We demonstrate the proposed approach by both simulation examples and a real data application.
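
    The P-spline estimator itself reduces to a ridge-type closed-form solve: a B-spline design matrix plus a difference penalty scaled by the smoothing parameter λ. A minimal sketch of this general method (not the authors' code):

    ```python
    import numpy as np
    from scipy.interpolate import BSpline

    rng = np.random.default_rng(5)
    x = np.linspace(0.0, 1.0, 300)
    y = np.sin(2.0 * np.pi * x) + 0.2 * rng.standard_normal(x.size)

    # Cubic B-spline design matrix on equally spaced knots.
    k, n_knots = 3, 20
    knots = np.r_[[0.0] * k, np.linspace(0.0, 1.0, n_knots), [1.0] * k]
    n_basis = len(knots) - k - 1
    B = np.column_stack([
        BSpline.basis_element(knots[i:i + k + 2], extrapolate=False)(x)
        for i in range(n_basis)
    ])
    B = np.nan_to_num(B)                       # basis elements are 0 outside support

    # Second-order difference penalty and ridge-type closed-form solve.
    D = np.diff(np.eye(n_basis), n=2, axis=0)
    lam = 1.0                                  # smoothing parameter (lambda)
    coef = np.linalg.solve(B.T @ B + lam * (D.T @ D), B.T @ y)
    fitted = B @ coef
    print("residual std:", np.std(y - fitted))
    ```

    Increasing lam shrinks adjacent coefficient differences toward zero, trading fidelity for smoothness, which is exactly the role the abstract assigns to the smoothing parameter λ.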

  3. Ligandbook: an online repository for small and drug-like molecule force field parameters.

    PubMed

    Domanski, Jan; Beckstein, Oliver; Iorga, Bogdan I

    2017-06-01

    Ligandbook is a public database and archive for force field parameters of small and drug-like molecules. It is a repository for parameter sets that are part of published work but are not easily available to the community otherwise. Parameter sets can be downloaded and immediately used in molecular dynamics simulations. The sets of parameters are versioned with full histories and carry unique identifiers to facilitate reproducible research. Text-based search on rich metadata and chemical substructure search allow precise identification of desired compounds or functional groups. Ligandbook enables the rapid set-up of reproducible molecular dynamics simulations of ligands and protein-ligand complexes. Ligandbook is available online at https://ligandbook.org and supports all modern browsers. Parameters can be searched and downloaded without registration, including access through a programmatic RESTful API. Deposition of files requires free user registration. Ligandbook is implemented in the PHP Symfony2 framework with TCL scripts using the CACTVS toolkit. Contact: oliver.beckstein@asu.edu or bogdan.iorga@cnrs.fr; contact@ligandbook.org. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press.

  4. All-optical nanomechanical heat engine.

    PubMed

    Dechant, Andreas; Kiesel, Nikolai; Lutz, Eric

    2015-05-08

    We propose and theoretically investigate a nanomechanical heat engine. We show how a levitated nanoparticle in an optical trap inside a cavity can be used to realize a Stirling cycle in the underdamped regime. The all-optical approach enables fast and flexible control of all thermodynamical parameters and the efficient optimization of the performance of the engine. We develop a systematic optimization procedure to determine optimal driving protocols. Further, we perform numerical simulations with realistic parameters and evaluate the maximum power and the corresponding efficiency.

  5. All-Optical Nanomechanical Heat Engine

    NASA Astrophysics Data System (ADS)

    Dechant, Andreas; Kiesel, Nikolai; Lutz, Eric

    2015-05-01

    We propose and theoretically investigate a nanomechanical heat engine. We show how a levitated nanoparticle in an optical trap inside a cavity can be used to realize a Stirling cycle in the underdamped regime. The all-optical approach enables fast and flexible control of all thermodynamical parameters and the efficient optimization of the performance of the engine. We develop a systematic optimization procedure to determine optimal driving protocols. Further, we perform numerical simulations with realistic parameters and evaluate the maximum power and the corresponding efficiency.

  6. X-ray Micro-Tomography of Ablative Heat Shield Materials

    NASA Technical Reports Server (NTRS)

    Panerai, Francesco; Ferguson, Joseph; Borner, Arnaud; Mansour, Nagi N.; Barnard, Harold S.; MacDowell, Alastair A.; Parkinson, Dilworth Y.

    2016-01-01

    X-ray micro-tomography is a non-destructive characterization technique that allows imaging of material structures with voxel sizes in the micrometer range. This level of resolution makes the technique very attractive for imaging porous ablators used in hypersonic entry systems. Besides providing a high fidelity description of the material architecture, micro-tomography enables computations of bulk material properties and simulations of micro-scale phenomena. This presentation provides an overview of a collaborative effort between NASA Ames Research Center and Lawrence Berkeley National Laboratory, aimed at developing micro-tomography experiments and simulations for porous ablative materials. Measurements are carried out using x-rays from the Advanced Light Source at Berkeley Lab on different classes of ablative materials used in NASA entry systems. Challenges, strengths and limitations of the technique for imaging materials such as lightweight carbon-phenolic systems and woven textiles are discussed. Computational tools developed to perform numerical simulations based on micro-tomography are described. These enable computations of material properties such as permeability, thermal and radiative conductivity, tortuosity and other parameters that are used in ablator response models. Finally, we present the design of environmental cells that enable imaging materials under simulated operational conditions, such as high temperature, mechanical loads and oxidizing atmospheres. Keywords: micro-tomography, porous media, ablation

  7. Evaluation of decadal predictions using a satellite simulator for the Special Sensor Microwave Imager (SSM/I)

    NASA Astrophysics Data System (ADS)

    Spangehl, Thomas; Schröder, Marc; Bodas-Salcedo, Alejandro; Glowienka-Hense, Rita; Hense, Andreas; Hollmann, Rainer; Dietzsch, Felix

    2017-04-01

    Decadal climate predictions are commonly evaluated focusing on geophysical parameters such as temperature, precipitation or wind speed using observational datasets and reanalysis. Alternatively, satellite based radiance measurements combined with satellite simulator techniques to deduce virtual satellite observations from the numerical model simulations can be used. The latter approach enables an evaluation in the instrument's parameter space and has the potential to reduce uncertainties on the reference side. Here we present evaluation methods focusing on forward operator techniques for the Special Sensor Microwave Imager (SSM/I). The simulator is developed as an integrated part of the CFMIP Observation Simulator Package (COSP). On the observational side the SSM/I and SSMIS Fundamental Climate Data Record (FCDR) released by CM SAF (http://dx.doi.org/10.5676/EUM_SAF_CM/FCDR_MWI/V002) is used, which provides brightness temperatures for different channels and covers the period from 1987 to 2013. The simulator is applied to hindcast simulations performed within the MiKlip project (http://fona-miklip.de) which is funded by the BMBF (Federal Ministry of Education and Research in Germany). Probabilistic evaluation results are shown based on a subset of the hindcast simulations covering the observational period.

  8. Parameterization of an interfacial force field for accurate representation of peptide adsorption free energy on high-density polyethylene

    PubMed Central

    Abramyan, Tigran M.; Snyder, James A.; Yancey, Jeremy A.; Thyparambil, Aby A.; Wei, Yang; Stuart, Steven J.; Latour, Robert A.

    2015-01-01

    Interfacial force field (IFF) parameters for use with the CHARMM force field have been developed for interactions between peptides and high-density polyethylene (HDPE). Parameterization of the IFF was performed to achieve agreement between experimental and calculated adsorption free energies of small TGTG–X–GTGT host–guest peptides (T = threonine, G = glycine, and X = variable amino-acid residue) on HDPE, to within ±0.5 kcal/mol. This IFF parameter set consists of tuned nonbonded parameters (i.e., partial charges and Lennard–Jones parameters) for use with an in-house-modified CHARMM molecular dynamics program that enables the use of an independent set of force field parameters to control molecular behavior at a solid–liquid interface. The R correlation coefficient between the simulated and experimental peptide adsorption free energies increased from 0.00 for the standard CHARMM force field parameters to 0.88 for the tuned IFF parameters. Subsequent studies are planned to apply the tuned IFF parameter set for the simulation of protein adsorption behavior on an HDPE surface for comparison with experimental values of adsorbed protein orientation and conformation. PMID:25818122

  9. Computer simulations of alkali-acetate solutions: Accuracy of the forcefields in different concentrations

    NASA Astrophysics Data System (ADS)

    Ahlstrand, Emma; Zukerman Schpector, Julio; Friedman, Ran

    2017-11-01

    When proteins are solvated in electrolyte solutions that contain alkali ions, the ions interact mostly with carboxylates on the protein surface. Correctly accounting for alkali-carboxylate interactions is thus important for realistic simulations of proteins. Acetates are the simplest carboxylates that are amphipathic, and experimental data for alkali acetate solutions are available and can be compared with observables obtained from simulations. We carried out molecular dynamics simulations of alkali acetate solutions using polarizable and non-polarizable forcefields and examined the ion-acetate interactions. In particular, activity coefficients and association constants were studied in a range of concentrations (0.03, 0.1, and 1M). In addition, quantum-mechanics (QM) based energy decomposition analysis was performed in order to estimate the contribution of polarization, electrostatics, dispersion, and QM (non-classical) effects on the cation-acetate and cation-water interactions. Simulations of Li-acetate solutions in general overestimated the binding of Li+ and acetates. In lower concentrations, the activity coefficients of alkali-acetate solutions were too high, which is suggested to be due to the simulation protocol and not the forcefields. Energy decomposition analysis suggested that improvement of the forcefield parameters to enable accurate simulations of Li-acetate solutions can be achieved but may require the use of a polarizable forcefield. Importantly, simulations with some ion parameters could not reproduce the correct ion-oxygen distances, which calls for caution in the choice of ion parameters when protein simulations are performed in electrolyte solutions.

  10. Uncertainty in least-squares fits to the thermal noise spectra of nanomechanical resonators with applications to the atomic force microscope.

    PubMed

    Sader, John E; Yousefi, Morteza; Friend, James R

    2014-02-01

    Thermal noise spectra of nanomechanical resonators are used widely to characterize their physical properties. These spectra typically exhibit a Lorentzian response, with additional white noise due to extraneous processes. Least-squares fits of these measurements enable extraction of key parameters of the resonator, including its resonant frequency, quality factor, and stiffness. Here, we present general formulas for the uncertainties in these fit parameters due to sampling noise inherent in all thermal noise spectra. Good agreement with Monte Carlo simulation of synthetic data and measurements of an Atomic Force Microscope (AFM) cantilever is demonstrated. These formulas enable robust interpretation of thermal noise spectra measurements commonly performed in the AFM and adaptive control of fitting procedures with specified tolerances.
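
    The fitting problem analyzed here is easy to reproduce in outline: a white-noise floor plus the standard damped simple-harmonic-oscillator thermal noise spectrum, fit by least squares. This sketch does not reproduce the paper's closed-form uncertainty formulas; curve_fit's covariance estimate stands in for them:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def sho_spectrum(f, a_white, a_peak, f0, Q):
        """White-noise floor plus damped-SHO thermal noise spectrum."""
        return a_white + a_peak * f0**4 / ((f**2 - f0**2)**2 + (f * f0 / Q)**2)

    rng = np.random.default_rng(11)
    f = np.linspace(1e3, 1e5, 4000)
    true = (1e-4, 1.0, 3.3e4, 50.0)
    # Sampling noise in averaged periodograms is multiplicative; a modest
    # number of averages is mimicked with gamma-distributed scatter.
    n_avg = 30
    data = sho_spectrum(f, *true) * rng.gamma(n_avg, 1.0 / n_avg, f.size)

    popt, pcov = curve_fit(sho_spectrum, f, data, p0=(1e-4, 1.0, 3e4, 30.0))
    perr = np.sqrt(np.diag(pcov))            # 1-sigma fit uncertainties
    print("f0 = %.0f +/- %.0f Hz, Q = %.1f +/- %.1f"
          % (popt[2], perr[2], popt[3], perr[3]))
    ```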

  11. Uncertainty in least-squares fits to the thermal noise spectra of nanomechanical resonators with applications to the atomic force microscope

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sader, John E., E-mail: jsader@unimelb.edu.au; Yousefi, Morteza; Friend, James R.

    2014-02-15

    Thermal noise spectra of nanomechanical resonators are used widely to characterize their physical properties. These spectra typically exhibit a Lorentzian response, with additional white noise due to extraneous processes. Least-squares fits of these measurements enable extraction of key parameters of the resonator, including its resonant frequency, quality factor, and stiffness. Here, we present general formulas for the uncertainties in these fit parameters due to sampling noise inherent in all thermal noise spectra. Good agreement with Monte Carlo simulation of synthetic data and measurements of an Atomic Force Microscope (AFM) cantilever is demonstrated. These formulas enable robust interpretation of thermal noise spectra measurements commonly performed in the AFM and adaptive control of fitting procedures with specified tolerances.

  12. Variability-aware compact modeling and statistical circuit validation on SRAM test array

    NASA Astrophysics Data System (ADS)

    Qiao, Ying; Spanos, Costas J.

    2016-03-01

    Variability modeling at the compact transistor model level can enable statistically optimized designs in view of limitations imposed by the fabrication technology. In this work we propose a variability-aware compact model characterization methodology based on stepwise parameter selection. Transistor I-V measurements are obtained from a bit-transistor-accessible SRAM test array fabricated using a collaborating foundry's 28nm FDSOI technology. Our in-house customized Monte Carlo simulation bench can incorporate these statistical compact models, and the simulated distributions of SRAM writability performance closely match measurements. Our proposed statistical compact model parameter extraction methodology also has the potential to predict non-Gaussian behavior in statistical circuit performance through mixtures of Gaussian distributions.
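
    The closing sentence refers to capturing non-Gaussian parameter statistics with mixtures of Gaussians; a small illustration with scikit-learn, using synthetic data as a stand-in for extracted compact-model parameters:

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(8)
    # Synthetic stand-in for a skewed, non-Gaussian threshold-voltage sample.
    vth = np.concatenate([rng.normal(0.45, 0.01, 7000),
                          rng.normal(0.48, 0.02, 3000)])

    gmm = GaussianMixture(n_components=2, random_state=0).fit(vth.reshape(-1, 1))
    for w, m, c in zip(gmm.weights_, gmm.means_.ravel(), gmm.covariances_.ravel()):
        print(f"weight {w:.2f}: mean {m:.3f} V, sigma {np.sqrt(c):.4f} V")
    ```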

  13. Virtual Plant Tissue: Building Blocks for Next-Generation Plant Growth Simulation

    PubMed Central

    De Vos, Dirk; Dzhurakhalov, Abdiravuf; Stijven, Sean; Klosiewicz, Przemyslaw; Beemster, Gerrit T. S.; Broeckhove, Jan

    2017-01-01

    Motivation: Computational modeling of plant developmental processes is becoming increasingly important. Cellular resolution plant tissue simulators have been developed, yet they are typically describing physiological processes in an isolated way, strongly delimited in space and time. Results: With plant systems biology moving toward an integrative perspective on development we have built the Virtual Plant Tissue (VPTissue) package to couple functional modules or models in the same framework and across different frameworks. Multiple levels of model integration and coordination enable combining existing and new models from different sources, with diverse options in terms of input/output. Besides the core simulator the toolset also comprises a tissue editor for manipulating tissue geometry and cell, wall, and node attributes in an interactive manner. A parameter exploration tool is available to study parameter dependence of simulation results by distributing calculations over multiple systems. Availability: Virtual Plant Tissue is available as open source (EUPL license) on Bitbucket (https://bitbucket.org/vptissue/vptissue). The project has a website https://vptissue.bitbucket.io. PMID:28523006

  14. First-Principles and Thermodynamic Simulation of Elastic Stress Effect on Energy of Hydrogen Dissolution in Alpha Iron

    NASA Astrophysics Data System (ADS)

    Rakitin, M. S.; Mirzoev, A. A.; Mirzaev, D. A.

    2018-04-01

    Mobile hydrogen, when dissolving in metals, redistributes due to density gradients and elastic stresses, and enables destruction processes or phase transformations in local volumes of the solvent metal. Investigating these interactions is of considerable importance in solid state physics. First-principles calculations, performed in terms of density functional theory, are used for thermodynamic simulation of the elastic stress effect on the energy of hydrogen dissolution in the α-Fe crystal lattice. The paper investigates the total energy of the Fe-H system as a function of the lattice parameter. As a result, a relation is obtained between the hydrogen dissolution energy and stress. Good agreement is shown between the existing data and the simulation results. An extended equation is suggested for the chemical potential of a hydrogen atom in iron within a local stress field. Two parameters affecting the hydrogen distribution are compared, namely local stress and phase transformations.

  15. Predicting pedestrian flow: a methodology and a proof of concept based on real-life data.

    PubMed

    Davidich, Maria; Köster, Gerta

    2013-01-01

    Building a reliable predictive model of pedestrian motion is very challenging: Ideally, such models should be based on observations made in both controlled experiments and in real-world environments. De facto, models are rarely based on real-world observations due to the lack of available data; instead, they are largely based on intuition and, at best, literature values and laboratory experiments. Such an approach is insufficient for reliable simulations of complex real-life scenarios: For instance, our analysis of pedestrian motion under natural conditions at a major German railway station reveals that the values for free-flow velocities and the flow-density relationship differ significantly from widely used literature values. It is thus necessary to calibrate and validate the model against relevant real-life data to make it capable of reproducing and predicting real-life scenarios. In this work we aim at constructing such realistic pedestrian stream simulation. Based on the analysis of real-life data, we present a methodology that identifies key parameters and interdependencies that enable us to properly calibrate the model. The success of the approach is demonstrated for a benchmark model, a cellular automaton. We show that the proposed approach significantly improves the reliability of the simulation and hence the potential prediction accuracy. The simulation is validated by comparing the local density evolution of the measured data to that of the simulated data. We find that for our model the most sensitive parameters are: the source-target distribution of the pedestrian trajectories, the schedule of pedestrian appearances in the scenario and the mean free-flow velocity. Our results emphasize the need for real-life data extraction and analysis to enable predictive simulations.

  16. Visualization of the Invisible, Explanation of the Unknown, Ruggedization of the Unstable: Sensitivity Analysis, Virtual Tryout and Robust Design through Systematic Stochastic Simulation

    NASA Astrophysics Data System (ADS)

    Zwickl, Titus; Carleer, Bart; Kubli, Waldemar

    2005-08-01

    In the past decade, sheet metal forming simulation has become a well-established tool to predict the formability of parts. In the automotive industry, this has enabled significant reduction in the cost and time of vehicle design and development, and has helped to improve the quality and performance of vehicle parts. However, production stoppages for troubleshooting and unplanned die maintenance, as well as production quality fluctuations, continue to plague manufacturing cost and time. The focus therefore has shifted in recent times beyond mere feasibility to robustness of the product and process being engineered. Ensuring robustness is the next big challenge for virtual tryout / simulation technology. We introduce new methods, based on systematic stochastic simulations, to visualize the behavior of the part during the whole forming process, in simulation as well as in production. Sensitivity analysis explains the response of the part to changes in influencing parameters. Virtual tryout allows quick exploration of changed designs and conditions. Robust design and manufacturing guarantees quality and process capability for the production process. While conventional simulations helped to reduce development time and cost by ensuring feasible processes, robustness engineering tools have the potential for far greater cost and time savings. Through examples we illustrate how expected and unexpected behavior of deep drawing parts may be tracked down, identified and assigned to the influential parameters. With this knowledge, defects can be eliminated or springback can be compensated, for example; the response of the part to uncontrollable noise can be predicted and minimized. The newly introduced methods enable more reliable and predictable stamping processes in general.

  17. Automated Knowledge Discovery from Simulators

    NASA Technical Reports Server (NTRS)

    Burl, Michael C.; DeCoste, D.; Enke, B. L.; Mazzoni, D.; Merline, W. J.; Scharenbroich, L.

    2006-01-01

    In this paper, we explore one aspect of knowledge discovery from simulators, the landscape characterization problem, where the aim is to identify regions in the input/parameter/model space that lead to a particular output behavior. Large-scale numerical simulators are in widespread use by scientists and engineers across a range of government agencies, academia, and industry; in many cases, simulators provide the only means to examine processes that are infeasible or impossible to study otherwise. However, the cost of simulation studies can be quite high, both in terms of the time and computational resources required to conduct the trials and the manpower needed to sift through the resulting output. Thus, there is strong motivation to develop automated methods that enable more efficient knowledge extraction.

  18. BioNSi: A Discrete Biological Network Simulator Tool.

    PubMed

    Rubinstein, Amir; Bracha, Noga; Rudner, Liat; Zucker, Noga; Sloin, Hadas E; Chor, Benny

    2016-08-05

    Modeling and simulation of biological networks is an effective and widely used research methodology. The Biological Network Simulator (BioNSi) is a tool for modeling biological networks and simulating their discrete-time dynamics, implemented as a Cytoscape App. BioNSi includes a visual representation of the network that enables researchers to construct, set the parameters, and observe network behavior under various conditions. To construct a network instance in BioNSi, only partial, qualitative biological data suffices. The tool is aimed for use by experimental biologists and requires no prior computational or mathematical expertise. BioNSi is freely available at http://bionsi.wix.com/bionsi , where a complete user guide and a step-by-step manual can also be found.

  19. Advances in Discrete-Event Simulation for MSL Command Validation

    NASA Technical Reports Server (NTRS)

    Patrikalakis, Alexander; O'Reilly, Taifun

    2013-01-01

    In the last five years, the discrete event simulator, SEQuence GENerator (SEQGEN), developed at the Jet Propulsion Laboratory to plan deep-space missions, has greatly increased uplink operations capacity to deal with increasingly complicated missions. In this paper, we describe how the Mars Science Laboratory (MSL) project makes full use of an interpreted environment to simulate changes in more than fifty thousand flight software parameters and conditional command sequences, to predict the result of executing a conditional branch in a command sequence, and to warn users whenever one or more simulated spacecraft states change in an unexpected manner. Using these new SEQGEN features, operators plan more activities in one sol than ever before.

  20. Modeling the human body/seat system in a vibration environment.

    PubMed

    Rosen, Jacob; Arcan, Mircea

    2003-04-01

    The vibration environment is a common man-made surrounding with which humans have only a limited tolerance to cope, owing to their body dynamics. This research studied the dynamic characteristics of a seated human body/seat system in a vibration environment. The main result is a multi-degrees-of-freedom lumped-parameter model that synthesizes two basic dynamics: (i) global human dynamics, the apparent mass phenomenon, including a systematic set of model parameters for simulating various conditions such as body posture, backrest, footrest, muscle tension, and vibration direction; and (ii) local human dynamics, represented by the contact between the human pelvis and the vibrating seat through a cushioning interface. The model and its selected parameters successfully described the main effects of the apparent mass phenomenon compared with experimental data documented in the literature. The model provides an analytical tool for human body dynamics research and a primary tool for seat and cushion design. The model was further used to develop design guidelines for a composite cushion using the principle of quasi-uniform body/seat contact force distribution. In terms of evenly distributing the contact forces, the best result among the materials and cushion geometries simulated in the current study was achieved with a two-layer, shaped-geometry cushion built from three materials. Combining the geometry and the mechanical characteristics of a structure under large deformation into a lumped-parameter model enables successful analysis of the human/seat interface system and provides practical results for body protection in dynamic environments.

  1. The effective χ parameter in polarizable polymeric systems: One-loop perturbation theory and field-theoretic simulations.

    PubMed

    Grzetic, Douglas J; Delaney, Kris T; Fredrickson, Glenn H

    2018-05-28

    We derive the effective Flory-Huggins parameter in polarizable polymeric systems, within a recently introduced polarizable field theory framework. The incorporation of bead polarizabilities in the model self-consistently embeds dielectric response, as well as van der Waals interactions. The latter generate a χ parameter (denoted χ̃) between any two species with polarizability contrast. Using one-loop perturbation theory, we compute corrections to the structure factor S(k) and the dielectric function ε̂(k) for a polarizable binary homopolymer blend in the one-phase region of the phase diagram. The electrostatic corrections to S(k) can be entirely accounted for by a renormalization of the excluded volume parameter B into three van der Waals-corrected parameters BAA, BAB, and BBB, which then determine χ̃. The one-loop theory not only enables the quantitative prediction of χ̃ but also provides useful insight into the dependence of χ̃ on the electrostatic environment (for example, its sensitivity to electrostatic screening). The unapproximated polarizable field theory is amenable to direct simulation via complex Langevin sampling, which we employ here to test the validity of the one-loop results. From simulations of S(k) and ε̂(k) for a system of polarizable homopolymers, we find that the one-loop theory is best suited to high concentrations, where it performs very well. Finally, we measure χ̃N in simulations of a polarizable diblock copolymer melt and obtain excellent agreement with the one-loop theory. These constitute the first fully fluctuating simulations conducted within the polarizable field theory framework.

  2. The effective χ parameter in polarizable polymeric systems: One-loop perturbation theory and field-theoretic simulations

    NASA Astrophysics Data System (ADS)

    Grzetic, Douglas J.; Delaney, Kris T.; Fredrickson, Glenn H.

    2018-05-01

    We derive the effective Flory-Huggins parameter in polarizable polymeric systems, within a recently introduced polarizable field theory framework. The incorporation of bead polarizabilities in the model self-consistently embeds dielectric response, as well as van der Waals interactions. The latter generate a χ parameter (denoted χ̃) between any two species with polarizability contrast. Using one-loop perturbation theory, we compute corrections to the structure factor S(k) and the dielectric function ε̂(k) for a polarizable binary homopolymer blend in the one-phase region of the phase diagram. The electrostatic corrections to S(k) can be entirely accounted for by a renormalization of the excluded volume parameter B into three van der Waals-corrected parameters B_AA, B_AB, and B_BB, which then determine χ̃. The one-loop theory not only enables the quantitative prediction of χ̃ but also provides useful insight into the dependence of χ̃ on the electrostatic environment (for example, its sensitivity to electrostatic screening). The unapproximated polarizable field theory is amenable to direct simulation via complex Langevin sampling, which we employ here to test the validity of the one-loop results. From simulations of S(k) and ε̂(k) for a system of polarizable homopolymers, we find that the one-loop theory is best suited to high concentrations, where it performs very well. Finally, we measure χ̃N in simulations of a polarizable diblock copolymer melt and obtain excellent agreement with the one-loop theory. These constitute the first fully fluctuating simulations conducted within the polarizable field theory framework.

  3. MCPB.py: A Python Based Metal Center Parameter Builder.

    PubMed

    Li, Pengfei; Merz, Kenneth M

    2016-04-25

    MCPB.py, a Python-based metal center parameter builder, has been developed to build force fields for the simulation of metal complexes employing the bonded model approach. It has an optimized code structure, with far fewer required steps than the previously developed MCPB program. It supports various AMBER force fields and more than 80 metal ions. A series of parametrization schemes to derive force constants and charge parameters are available within the program. We give two examples (a metalloprotein and an organometallic compound) that demonstrate the program's ability to build reliable force fields for different metal-ion-containing complexes. The original version was released with AmberTools15. It is provided under the GNU General Public License v3.0 (GNU_GPL_v3) agreement and is free to download and distribute. MCPB.py provides a bridge between quantum mechanical calculations and molecular dynamics simulation software packages, thereby enabling the modeling of metal ion centers. It offers an entry into simulating metal ions in a number of situations by providing an efficient way for researchers to handle the vagaries and difficulties associated with metal ion modeling.
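
    As a hedged illustration of the bonded model approach that MCPB.py parametrizes (a minimal sketch; the Zn-N force constants and equilibrium values below are hypothetical placeholders, not MCPB.py output), the metal center is described by harmonic bond and angle terms of the standard AMBER form:

    ```python
    import numpy as np

    # Minimal sketch of bonded-model energy terms for a metal center.
    # AMBER convention: E = K * (r - r0)^2, with K absorbing the 1/2 factor.
    # The Zn-N values below are hypothetical placeholders; real values come
    # from MCPB.py's parametrization schemes.

    def bond_energy(r, k_r=120.0, r0=2.05):
        """Harmonic metal-ligand bond energy in kcal/mol (r in Angstrom)."""
        return k_r * (r - r0) ** 2

    def angle_energy(theta, k_theta=35.0, theta0=np.deg2rad(109.5)):
        """Harmonic ligand-metal-ligand angle energy (theta in radians)."""
        return k_theta * (theta - theta0) ** 2

    print(bond_energy(2.10))                # bond stretched by 0.05 Angstrom
    print(angle_energy(np.deg2rad(105.0)))  # angle compressed by 4.5 degrees
    ```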

  4. Bayesian Assessment of the Uncertainties of Estimates of a Conceptual Rainfall-Runoff Model Parameters

    NASA Astrophysics Data System (ADS)

    Silva, F. E. O. E.; Naghettini, M. D. C.; Fernandes, W.

    2014-12-01

    This paper evaluated the uncertainties associated with the estimation of the parameters of a conceptual rainfall-runoff model, through the use of Bayesian inference techniques by Monte Carlo simulation. The Pará River sub-basin, located in the upper São Francisco river basin, in southeastern Brazil, was selected for developing the studies. In this paper, we used the Rio Grande conceptual hydrologic model (EHR/UFMG, 2001) and the Markov Chain Monte Carlo simulation method named DREAM (VRUGT, 2008a). Two probabilistic models for the residuals were analyzed: (i) the classic [Normal likelihood - r ~ N(0, σ²)]; and (ii) a generalized likelihood (SCHOUPS & VRUGT, 2010), in which it is assumed that the differences between observed and simulated flows are correlated, non-stationary, and distributed as a Skew Exponential Power density. The assumptions made for both models were checked to ensure that the estimation of uncertainties in the parameters was not biased. The results showed that the Bayesian approach was adequate for the proposed objectives and reinforced the importance of assessing the uncertainties associated with hydrological modeling.
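
    A hedged sketch of the inference loop (a simplified random-walk Metropolis sampler standing in for the multi-chain DREAM algorithm, with the classic Gaussian residual model; the model interface, step size, and starting values are illustrative assumptions):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def log_likelihood(theta, q_obs, model):
        """Classic residual model: r ~ N(0, sigma^2), with sigma = theta[-1]."""
        q_sim = model(theta[:-1])            # hydrologic model parameters
        sigma = theta[-1]
        r = q_obs - q_sim
        return -0.5 * np.sum((r / sigma) ** 2) - r.size * np.log(sigma)

    def metropolis(q_obs, model, theta0, steps=5000, scale=0.05):
        """Random-walk Metropolis sampler, a simplified stand-in for DREAM."""
        theta = np.asarray(theta0, dtype=float)
        logp = log_likelihood(theta, q_obs, model)
        chain = []
        for _ in range(steps):
            prop = theta + scale * rng.standard_normal(theta.size)
            if prop[-1] > 0:                 # keep sigma positive
                logp_prop = log_likelihood(prop, q_obs, model)
                if np.log(rng.random()) < logp_prop - logp:
                    theta, logp = prop, logp_prop
            chain.append(theta.copy())
        return np.array(chain)               # posterior samples

    # Usage sketch (hypothetical model and starting point):
    # chain = metropolis(q_obs, rainfall_runoff_model, [0.5, 1.2, 0.1])
    ```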

  5. A Facility and Architecture for Autonomy Research

    NASA Technical Reports Server (NTRS)

    Pisanich, Greg; Clancy, Daniel (Technical Monitor)

    2002-01-01

    Autonomy is a key enabling factor in the advancement of remote robotic exploration. There is currently a large gap between autonomy software at the research level and software that is ready for insertion into near-term space missions. The Mission Simulation Facility (MSF) will bridge this gap by providing a simulation framework and a suite of simulation tools to support research in autonomy for remote exploration. This system will allow developers of autonomy software to test their models in a high-fidelity simulation and evaluate their system's performance against a set of integrated, standardized simulations. The Mission Simulation ToolKit (MST) uses a distributed architecture with a communication layer built on top of the standardized High Level Architecture (HLA). This architecture enables the use of existing high-fidelity models, allows mixing simulation components from various computing platforms, and enforces the use of a standardized high-level interface among components. The components needed to achieve a realistic simulation can be grouped into four categories: environment generation (terrain, environmental features), robotic platform behavior (robot dynamics), instrument models (camera/spectrometer/etc.), and data analysis. The MST provides basic components in these areas but allows users to easily plug in any refined model by means of a communication protocol. Finally, a description file defines the robot and environment parameters for easy configuration and ensures that all the simulation models share the same information.

  6. Simulation of a Geiger-Mode Imaging LADAR System for Performance Assessment

    PubMed Central

    Kim, Seongjoon; Lee, Impyeong; Kwon, Yong Joon

    2013-01-01

    As LADAR systems applications gradually become more diverse, new types of systems are being developed. When developing new systems, simulation studies are an essential prerequisite. A simulator enables performance prediction and the selection of optimal system parameters at the design stage, as well as providing sample data for developing and validating application algorithms. The purpose of this study is to propose a method for simulating a Geiger-mode imaging LADAR system. We develop simulation software to assess system performance and generate sample data for the applications. The simulation is based on three aspects of modeling: the geometry, the radiometry, and the detection. The geometric model computes the ranges to the reflection points of the laser pulses. The radiometric model generates the return signals, including the noise. The detection model determines the flight times of the laser pulses based on the nature of the Geiger-mode detector. We generated sample data using the simulator with the system parameters and analyzed the detection performance by comparing the simulated points to the reference points. The proportion of outliers in the simulated points reached 25.53%, indicating the need for efficient outlier elimination algorithms. In addition, the false alarm rate and dropout rate of the designed system were computed as 1.76% and 1.06%, respectively. PMID:23823970
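
    A hedged sketch of the detection model described above (not the authors' code): a Geiger-mode detector fires on the first detected photoelectron, which can be modeled with Poisson statistics per range bin. All rates, bin counts, and pulse positions below are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def first_trigger_bin(signal_counts, noise_rate):
        """One Geiger-mode range gate: the detector fires on the first bin
        with a nonzero Poisson photoelectron draw; -1 means a dropout."""
        counts = rng.poisson(signal_counts + noise_rate)
        hits = np.nonzero(counts)[0]
        return hits[0] if hits.size else -1

    # Illustrative gate: 1000 bins with a return pulse centered near bin 600.
    n_bins = 1000
    signal = np.zeros(n_bins)
    signal[598:603] = 0.4            # mean signal photoelectrons per pulse bin
    trigger = first_trigger_bin(signal, noise_rate=1e-3)
    # trigger < 598 -> false alarm; trigger == -1 -> dropout
    ```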

  7. A simulation-based probabilistic design method for arctic sea transport systems

    NASA Astrophysics Data System (ADS)

    Martin, Bergström; Ove, Erikstad Stein; Sören, Ehlers

    2016-12-01

    When designing an arctic cargo ship, it is necessary to consider multiple stochastic factors. This paper evaluates the merits of a simulation-based probabilistic design method specifically developed to deal with this challenge. The outcome of the paper indicates that the incorporation of simulations and probabilistic design parameters into the design process enables more informed design decisions. For instance, it enables the assessment of the stochastic transport capacity of an arctic ship, as well as of its long-term ice exposure, which can be used to determine an appropriate level of ice strengthening. The outcome of the paper also indicates that significant gains in transport system cost-efficiency can be obtained by extending the boundaries of the design task beyond the individual vessel. In the case of industrial shipping, this enables, for instance, the consideration of port-based cargo storage facilities that tolerate temporary shortages in transport capacity and thus allow a reduction in the required fleet size or ship capacity.

  8. A Progressive Damage Model for unidirectional Fibre Reinforced Composites with Application to Impact and Penetration Simulation

    NASA Astrophysics Data System (ADS)

    Kerschbaum, M.; Hopmann, C.

    2016-06-01

    The computationally efficient simulation of the progressive damage behaviour of continuous fibre reinforced plastics is still a challenging task with currently available computer-aided engineering methods. This paper presents an original approach for an energy-based continuum damage model which accounts for stress/strain nonlinearities, transverse and shear stress interaction phenomena, quasi-plastic shear strain components, strain rate effects, regularised damage evolution and consideration of load reversal effects. The physically based modelling approach enables experimental determination of all parameters at the ply level, avoiding expensive inverse analysis procedures. The modelling strategy, implementation and verification of this model using commercially available explicit finite element software are detailed. The model is then applied to simulate the impact and penetration of carbon fibre reinforced cross-ply specimens with variation of the impact speed. The simulation results show that the presented approach enables a good representation of the force/displacement curves and especially good agreement with the experimentally observed fracture patterns. In addition, the mesh dependency of the results was assessed for one impact case, showing only very little change in the simulation results, which emphasises the general applicability of the presented method.

  9. Toward Rigorous Parameterization of Underconstrained Neural Network Models Through Interactive Visualization and Steering of Connectivity Generation

    PubMed Central

    Nowke, Christian; Diaz-Pier, Sandra; Weyers, Benjamin; Hentschel, Bernd; Morrison, Abigail; Kuhlen, Torsten W.; Peyser, Alexander

    2018-01-01

    Simulation models in many scientific fields can have non-unique solutions or unique solutions which can be difficult to find. Moreover, in evolving systems, unique final-state solutions can be reached by multiple different trajectories. Neuroscience is no exception. Often, neural network models are subject to parameter fitting to obtain desirable output comparable to experimental data. Parameter fitting without sufficient constraints and a systematic exploration of the possible solution space can lead to conclusions valid only around local minima or around non-minima. To address this issue, we have developed an interactive tool for visualizing and steering parameters in neural network simulation models. In this work, we focus particularly on connectivity generation, since finding suitable connectivity configurations for neural network models constitutes a complex parameter search scenario. The development of the tool has been guided by several use cases: the tool allows researchers to steer the parameters of the connectivity generation during the simulation, thus quickly growing networks composed of multiple populations with a targeted mean activity. The flexibility of the software allows scientists to explore other connectivity and neuron variables apart from the ones presented as use cases. With this tool, we enable interactive exploration of parameter spaces, support a better understanding of neural network models, and grapple with the crucial problem of non-unique network solutions and trajectories. In addition, we observe a reduction in turnaround times for the assessment of these models, due to interactive visualization while the simulation is computed. PMID:29937723

  10. GRODY - GAMMA RAY OBSERVATORY DYNAMICS SIMULATOR IN ADA

    NASA Technical Reports Server (NTRS)

    Stark, M.

    1994-01-01

    Analysts use a dynamics simulator to test the attitude control system algorithms used by a satellite. The simulator must simulate the hardware, dynamics, and environment of the particular spacecraft and provide user services which enable the analyst to conduct experiments. Researchers at Goddard's Flight Dynamics Division developed GRODY alongside GROSS (GSC-13147), a FORTRAN simulator which performs the same functions, in a case study to assess the feasibility and effectiveness of the Ada programming language for flight dynamics software development. They used popular object-oriented design techniques to link the simulator's design with its function. GRODY is designed for analysts familiar with spacecraft attitude analysis. The program supports maneuver planning as well as analytical testing and evaluation of the attitude determination and control system used on board the Gamma Ray Observatory (GRO) satellite. GRODY simulates the GRO on-board computer and Control Processor Electronics. The analyst/user sets up and controls the simulation. GRODY allows the analyst to check and update parameter values and ground commands, obtain simulation status displays, interrupt the simulation, analyze previous runs, and obtain printed output of simulation runs. The video terminal screen display allows visibility of command sequences, full-screen display and modification of parameters using input fields, and verification of all input data. Data input available for modification includes alignment and performance parameters for all attitude hardware, simulation control parameters which determine simulation scheduling and simulator output, initial conditions, and on-board computer commands. GRODY generates eight types of output: simulation results data set, analysis report, parameter report, simulation report, status display, plots, diagnostic output (which helps the user trace any problems that have occurred during a simulation), and a permanent log of all runs and errors. The analyst can send results output in graphical or tabular form to a terminal, disk, or hardcopy device, and can choose to have any or all items plotted against time or against each other. Goddard researchers developed GRODY on a VAX 8600 running VMS version 4.0. For near real time performance, GRODY requires a VAX at least as powerful as a model 8600 running VMS 4.0 or a later version. To use GRODY, the VAX needs an Ada Compilation System (ACS), Code Management System (CMS), and 1200K memory. GRODY is written in Ada and FORTRAN.

  11. Using spatial principles to optimize distributed computing for enabling the physical science discoveries

    PubMed Central

    Yang, Chaowei; Wu, Huayi; Huang, Qunying; Li, Zhenlong; Li, Jing

    2011-01-01

    Contemporary physical science studies rely on the effective analyses of geographically dispersed spatial data and simulations of physical phenomena. Single computers and generic high-end computing are not sufficient to process the data for complex physical science analysis and simulations, which can be successfully supported only through distributed computing, best optimized through the application of spatial principles. Spatial computing, the computing aspect of a spatial cyberinfrastructure, refers to a computing paradigm that utilizes spatial principles to optimize distributed computers to catalyze advancements in the physical sciences. Spatial principles govern the interactions between scientific parameters across space and time by providing the spatial connections and constraints to drive the progression of the phenomena. Therefore, spatial computing studies could better position us to leverage spatial principles in simulating physical phenomena and, by extension, advance the physical sciences. Using geospatial science as an example, this paper illustrates through three research examples how spatial computing could (i) enable data intensive science with efficient data/services search, access, and utilization, (ii) facilitate physical science studies with enabling high-performance computing capabilities, and (iii) empower scientists with multidimensional visualization tools to understand observations and simulations. The research examples demonstrate that spatial computing is of critical importance to design computing methods to catalyze physical science studies with better data access, phenomena simulation, and analytical visualization. We envision that spatial computing will become a core technology that drives fundamental physical science advancements in the 21st century. PMID:21444779

  12. Using spatial principles to optimize distributed computing for enabling the physical science discoveries.

    PubMed

    Yang, Chaowei; Wu, Huayi; Huang, Qunying; Li, Zhenlong; Li, Jing

    2011-04-05

    Contemporary physical science studies rely on the effective analyses of geographically dispersed spatial data and simulations of physical phenomena. Single computers and generic high-end computing are not sufficient to process the data for complex physical science analysis and simulations, which can be successfully supported only through distributed computing, best optimized through the application of spatial principles. Spatial computing, the computing aspect of a spatial cyberinfrastructure, refers to a computing paradigm that utilizes spatial principles to optimize distributed computers to catalyze advancements in the physical sciences. Spatial principles govern the interactions between scientific parameters across space and time by providing the spatial connections and constraints to drive the progression of the phenomena. Therefore, spatial computing studies could better position us to leverage spatial principles in simulating physical phenomena and, by extension, advance the physical sciences. Using geospatial science as an example, this paper illustrates through three research examples how spatial computing could (i) enable data intensive science with efficient data/services search, access, and utilization, (ii) facilitate physical science studies with enabling high-performance computing capabilities, and (iii) empower scientists with multidimensional visualization tools to understand observations and simulations. The research examples demonstrate that spatial computing is of critical importance to design computing methods to catalyze physical science studies with better data access, phenomena simulation, and analytical visualization. We envision that spatial computing will become a core technology that drives fundamental physical science advancements in the 21st century.

  13. CASL L2 milestone report: VUQ.Y1.03, "Enable statistical sensitivity and UQ demonstrations for VERA."

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sung, Yixing; Adams, Brian M.; Witkowski, Walter R.

    2011-04-01

    The CASL Level 2 Milestone VUQ.Y1.03, 'Enable statistical sensitivity and UQ demonstrations for VERA,' was successfully completed in March 2011. The VUQ focus area led this effort, in close partnership with AMA, and with support from VRI. DAKOTA was coupled to VIPRE-W thermal-hydraulics simulations representing reactors of interest to address crud-related challenge problems in order to understand the sensitivity and uncertainty in simulation outputs with respect to uncertain operating and model form parameters. This report summarizes work coupling the software tools, characterizing uncertainties, selecting sensitivity and uncertainty quantification algorithms, and analyzing the results of iterative studies. These demonstration studies focused on sensitivity and uncertainty of mass evaporation rate calculated by VIPRE-W, a key predictor for crud-induced power shift (CIPS).

  14. Whole-body PET parametric imaging employing direct 4D nested reconstruction and a generalized non-linear Patlak model

    NASA Astrophysics Data System (ADS)

    Karakatsanis, Nicolas A.; Rahmim, Arman

    2014-03-01

    Graphical analysis is employed in the research setting to provide quantitative estimation of PET tracer kinetics from dynamic images at a single bed position. Recently, we proposed a multi-bed dynamic acquisition framework enabling clinically feasible whole-body parametric PET imaging by employing post-reconstruction parameter estimation. In addition, by incorporating linear Patlak modeling within the system matrix, we enabled direct 4D reconstruction in order to effectively circumvent noise amplification in dynamic whole-body imaging. However, direct 4D Patlak reconstruction exhibits a relatively slow convergence due to the presence of non-sparse spatial correlations in temporal kinetic analysis. In addition, the standard Patlak model does not account for reversible uptake, thus underestimating the influx rate Ki. We have developed a novel whole-body PET parametric reconstruction framework in the STIR platform, a widely employed open-source reconstruction toolkit, (a) enabling accelerated convergence of direct 4D multi-bed reconstruction, by employing a nested algorithm to decouple the temporal parameter estimation from the spatial image update process, and (b) enhancing the quantitative performance, particularly in regions with reversible uptake, by pursuing a non-linear generalized Patlak 4D nested reconstruction algorithm. A set of published kinetic parameters and the XCAT phantom were employed for the simulation of dynamic multi-bed acquisitions. Quantitative analysis of the Ki images demonstrated considerable acceleration in the convergence of the nested 4D whole-body Patlak algorithm. In addition, our simulated and patient whole-body data in the post-reconstruction domain indicated the quantitative benefits of our extended generalized Patlak 4D nested reconstruction for tumor diagnosis and treatment response monitoring.
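
    For context, the following is a sketch of the model forms as they appear in the kinetic-analysis literature (not equations quoted from this record); the generalized model adds an efflux rate k_loss and reduces to the standard linear Patlak model as k_loss → 0:

    ```latex
    % Standard (irreversible) linear Patlak model: tissue activity C_T(t)
    % driven by plasma input C_p(t), influx rate K_i, distribution volume V.
    C_T(t) = K_i \int_0^t C_p(\tau)\,d\tau + V\,C_p(t)

    % Generalized (reversible) Patlak model with efflux rate k_loss:
    C_T(t) = K_i \int_0^t C_p(\tau)\,e^{-k_{\mathrm{loss}}(t-\tau)}\,d\tau + V\,C_p(t)
    ```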

  15. STEMsalabim: A high-performance computing cluster friendly code for scanning transmission electron microscopy image simulations of thin specimens.

    PubMed

    Oelerich, Jan Oliver; Duschek, Lennart; Belz, Jürgen; Beyer, Andreas; Baranovskii, Sergei D; Volz, Kerstin

    2017-06-01

    We present a new multislice code for the computer simulation of scanning transmission electron microscope (STEM) images based on the frozen lattice approximation. Unlike existing software packages, the code is optimized to perform well on highly parallelized computing clusters, combining distributed and shared memory architectures. This enables efficient calculation of large lateral scanning areas of the specimen within the frozen lattice approximation and fine-grained sweeps of parameter space.

  16. Application of an Optimal Tuner Selection Approach for On-Board Self-Tuning Engine Models

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Armstrong, Jeffrey B.; Garg, Sanjay

    2012-01-01

    An enhanced design methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented in this paper. It specifically addresses the under-determined estimation problem, in which there are more unknown parameters than available sensor measurements. This work builds upon an existing technique for systematically selecting a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. While the existing technique was optimized for open-loop engine operation at a fixed design point, in this paper an alternative formulation is presented that enables the technique to be optimized for an engine operating under closed-loop control throughout the flight envelope. The theoretical Kalman filter mean squared estimation error at a steady-state closed-loop operating point is derived, and the tuner selection approach applied to minimize this error is discussed. A technique for constructing a globally optimal tuning parameter vector, which enables full-envelope application of the technology, is also presented, along with design steps for adjusting the dynamic response of the Kalman filter state estimates. Results from the application of the technique to linear and nonlinear aircraft engine simulations are presented and compared to the conventional approach of tuner selection. The new methodology is shown to yield a significant improvement in on-line Kalman filter estimation accuracy.
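
    A hedged illustration of the tuner-selection idea (a generic sketch, not NASA's implementation): compute the theoretical steady-state Kalman estimation error for each candidate tuner subset and keep the subset that minimizes it. Truncating the model to a state subset, and all matrices involved, are illustrative stand-ins for the paper's reduced-order tuning vector.

    ```python
    import numpy as np
    from itertools import combinations
    from scipy.linalg import solve_discrete_are

    def steady_state_error(A, C, Q, R):
        """Trace of the steady-state Kalman error covariance, from the
        discrete algebraic Riccati equation (filtering via duality)."""
        P = solve_discrete_are(A.T, C.T, Q, R)
        return np.trace(P)

    def select_tuners(A, C, Q, R, n_tuners):
        """Brute-force stand-in for the iterative tuner search: keep the
        subset of candidate tuning states minimizing the theoretical error."""
        n = A.shape[0]
        best_idx, best_err = None, np.inf
        for idx in combinations(range(n), n_tuners):
            cols = list(idx)
            sub = np.ix_(cols, cols)           # reduced model (illustrative)
            err = steady_state_error(A[sub], C[:, cols], Q[sub], R)
            if err < best_err:
                best_idx, best_err = idx, err
        return best_idx, best_err
    ```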

  17. Design of a new artificial breathing system for simulating the human respiratory activities.

    PubMed

    Essoukaki, Elmaati; Rattal, Mourad; Ben Taleb, Lhoucine; Harmouchi, Mohammed; Assir, Abdelhadi; Mouhsen, Azeddine; Lyazidi, Aissam

    2018-01-01

    The purpose of this work is the design and implementation of an artificial active respiratory system that allows the simulation of human respiratory activities. The system consists of two modules, one mechanical and one electronic. The first represents a cylindrical lung adjustable in resistance and compliance. This lung is located inside a transparent thoracic box, connected to a piston that generates variable respiratory efforts. The parameters of the system, which are pressure, flow and volume, are measured by the second module. A computer application was developed to control the whole system and to display the parameters. A series of tests were made to evaluate the respiratory efforts, resistances and compliances. The results were compared with published studies, validating the proposed system.

  18. On the Asymmetric Focusing of Low-Emittance Electron Bunches via Active Lensing by Using Capillary Discharges

    NASA Astrophysics Data System (ADS)

    Bulanov, Stepan; Bagdasarov, Gennadiy; Bobrova, Nadezhda; Boldarev, Alexey; Olkhovskaya, Olga; Sasorov, Pavel; Gasilov, Vladimir; Barber, Samuel; Gonsalves, Anthony; Schroeder, Carl; van Tilborg, Jeroen; Esarey, Eric; Leemans, Wim; Levato, Tadzio; Margarone, Daniele; Korn, Georg; Kando, Masaki; Bulanov, Sergei

    2017-10-01

    A novel method for asymmetric focusing of electron beams is proposed. The scheme is based on the active lensing technique, which takes advantage of the strong inhomogeneous magnetic field generated inside the capillary discharge plasma to focus ultrarelativistic electrons. The plasma and magnetic field parameters inside a capillary discharge are described theoretically and modeled with dissipative MHD simulations, enabling analysis of capillaries with oblong rectangular cross-sections; the results imply that large-aspect-ratio rectangular capillaries can be used to form flat electron bunches. The effect of the capillary cross-section on the electron-beam focusing properties was studied using analytical methods and a simulation-derived magnetic field map, showing the range of capillary discharge parameters required for producing high-quality flat electron beams.

  19. Off-line tracking of series parameters in distribution systems using AMI data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, Tess L.; Sun, Yannan; Schneider, Kevin

    2016-05-01

    Electric distribution systems have historically lacked measurement points, and equipment is often operated to its failure point, resulting in customer outages. The widespread deployment of sensors at the distribution level is enabling observability. This paper presents an off-line parameter value tracking procedure that takes advantage of the increasing number of measurement devices being deployed at the distribution level to estimate changes in series impedance parameter values over time. The tracking of parameter values enables non-diurnal and non-seasonal change to be flagged for investigation. The presented method uses an unbalanced Distribution System State Estimation (DSSE) and a measurement residual-based parameter estimation procedure. Measurement residuals from multiple measurement snapshots are combined in order to increase the effective local redundancy and improve the robustness of the calculations in the presence of measurement noise. Data from devices on the primary distribution system and from customer meters, via an AMI system, form the input data set. Results of simulations on the IEEE 13-Node Test Feeder are presented to illustrate the proposed approach applied to changes in series impedance parameters. A 5% change in series resistance elements can be detected in the presence of 2% measurement error when combining less than 1 day of measurement snapshots into a single estimate.
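
    A hedged sketch of the snapshot-combination idea (an illustrative linear model, not the authors' DSSE code): stacking the residual sensitivities from many snapshots into one least-squares problem averages down the measurement noise, so small parameter deviations become detectable.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def estimate_impedance_change(H_list, r_list):
        """Combine residuals from many snapshots.

        H_list : per-snapshot sensitivities of the residuals to the
                 parameter deviation (each an (m,) array, e.g. from a
                 DSSE Jacobian).
        r_list : per-snapshot measurement residuals (each an (m,) array).
        Returns the least-squares estimate of the scalar deviation.
        """
        H = np.concatenate(H_list)      # effective redundancy grows
        r = np.concatenate(r_list)      # with every added snapshot
        return (H @ r) / (H @ H)        # one-parameter least squares

    # Illustrative check: a 5% deviation observed through 2% noise.
    true_dev, m, snapshots = 0.05, 20, 100
    H_list = [rng.uniform(0.5, 1.5, m) for _ in range(snapshots)]
    r_list = [H * true_dev + 0.02 * rng.standard_normal(m) for H in H_list]
    print(estimate_impedance_change(H_list, r_list))   # ~0.05
    ```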

  20. Creating A Data Base For Design Of An Impeller

    NASA Technical Reports Server (NTRS)

    Prueger, George H.; Chen, Wei-Chung

    1993-01-01

    Report describes use of Taguchi method of parametric design to create data base facilitating optimization of design of impeller in centrifugal pump. Data base enables systematic design analysis covering all significant design parameters. Reduces time and cost of parametric optimization of design: for particular impeller considered, one can cover 4,374 designs by computational simulations of performance for only 18 cases.
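
    The counts quoted above are consistent with a standard Taguchi L18 orthogonal array (an inference from the numbers given, not stated in the record): one two-level parameter and seven three-level parameters yield a full factorial of

    ```latex
    2 \times 3^{7} = 4374
    ```

    candidate designs, which the L18 (2^1 × 3^7) array samples with only 18 simulation cases.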

  1. Using a Virtual Tablet Machine to Improve Student Understanding of the Complex Processes Involved in Tablet Manufacturing.

    PubMed

    Mattsson, Sofia; Sjöström, Hans-Erik; Englund, Claire

    2016-06-25

    Objective. To develop and implement a virtual tablet machine simulation to aid distance students' understanding of the processes involved in tablet production. Design. A tablet simulation was created enabling students to study the effects different parameters have on the properties of the tablet. Once results were generated, students interpreted and explained them on the basis of current theory. Assessment. The simulation was evaluated using written questionnaires and focus group interviews. Students appreciated the exercise and considered it to be motivational. Students commented that they found the simulation, together with the online seminar and the writing of the report, was beneficial for their learning process. Conclusion. According to students' perceptions, the use of the tablet simulation contributed to their understanding of the compaction process.

  2. Using a Virtual Tablet Machine to Improve Student Understanding of the Complex Processes Involved in Tablet Manufacturing

    PubMed Central

    Sjöström, Hans-Erik; Englund, Claire

    2016-01-01

    Objective. To develop and implement a virtual tablet machine simulation to aid distance students’ understanding of the processes involved in tablet production. Design. A tablet simulation was created enabling students to study the effects different parameters have on the properties of the tablet. Once results were generated, students interpreted and explained them on the basis of current theory. Assessment. The simulation was evaluated using written questionnaires and focus group interviews. Students appreciated the exercise and considered it to be motivational. Students commented that they found the simulation, together with the online seminar and the writing of the report, was beneficial for their learning process. Conclusion. According to students’ perceptions, the use of the tablet simulation contributed to their understanding of the compaction process. PMID:27402990

  3. SMC: SCENIC Model Control

    NASA Technical Reports Server (NTRS)

    Srivastava, Priyaka; Kraus, Jeff; Murawski, Robert; Golden, Bertsel, Jr.

    2015-01-01

    NASA's Space Communications and Navigation (SCaN) program manages three active networks: the Near Earth Network, the Space Network, and the Deep Space Network. These networks simultaneously support NASA missions and provide communications services to customers worldwide. To efficiently manage these resources and their capabilities, a team of student interns at the NASA Glenn Research Center is developing a distributed system to model the SCaN networks. Once complete, the system shall provide a platform that enables users to perform capacity modeling of current and prospective missions with finer-grained control of information between several simulation and modeling tools. This will enable the SCaN program to access a holistic view of its networks and simulate the effects of modifications in order to provide NASA with decisional information. The development of this capacity modeling system is managed by NASA's Strategic Center for Education, Networking, Integration, and Communication (SCENIC). Three primary third-party software tools offer their unique abilities in different stages of the simulation process. MagicDraw provides UML/SysML modeling, AGI's Systems Tool Kit simulates the physical transmission parameters and de-conflicts scheduled communication, and Riverbed Modeler (formerly OPNET) simulates communication protocols and packet-based networking. SCENIC developers are building custom software extensions to integrate these components in an end-to-end space communications modeling platform. A central control module acts as the hub for report-based messaging between client wrappers. Backend databases provide information related to mission parameters and ground station configurations, while the end user defines scenario-specific attributes for the model. The eight SCENIC interns are working under the direction of their mentors to complete an initial version of this capacity modeling system during the summer of 2015. The intern team is composed of four students in Computer Science, two in Computer Engineering, one in Electrical Engineering, and one studying Space Systems Engineering.

  4. SURF Model Calibration Strategy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Menikoff, Ralph

    2017-03-10

    SURF and SURFplus are high explosive reactive burn models for shock initiation and propagation of detonation waves. They are engineering models motivated by the ignition & growth concept of hot spots and, for SURFplus, a second slow reaction for the energy release from carbon clustering. A key feature of the SURF model is a partial decoupling between model parameters and detonation properties. This enables reduced sets of independent parameters to be calibrated sequentially for the initiation and propagation regimes. Here we focus on a methodology for fitting the initiation parameters to Pop plot data based on 1-D simulations to compute a numerical Pop plot. In addition, the strategy for fitting the remaining parameters for the propagation regime and failure diameter is discussed.

  5. SiC JFET Transistor Circuit Model for Extreme Temperature Range

    NASA Technical Reports Server (NTRS)

    Neudeck, Philip G.

    2008-01-01

    A technique for simulating extreme-temperature operation of integrated circuits that incorporate silicon carbide (SiC) junction field-effect transistors (JFETs) has been developed. The technique involves modification of NGSPICE, which is an open-source version of the popular Simulation Program with Integrated Circuit Emphasis (SPICE) general-purpose analog-integrated-circuit-simulating software. NGSPICE in its unmodified form is used for simulating and designing circuits made from silicon-based transistors that operate at or near room temperature. Two rapid modifications of NGSPICE source code enable SiC JFETs to be simulated to 500 C using the well-known Level 1 model for silicon metal oxide semiconductor field-effect transistors (MOSFETs). First, the default value of the MOSFET surface potential must be changed. In the unmodified source code, this parameter has a value of 0.6, which corresponds to slightly more than half the bandgap of silicon. In NGSPICE modified to simulate SiC JFETs, this parameter is changed to a value of 1.6, corresponding to slightly more than half the bandgap of SiC. The second modification consists of changing the temperature dependence of MOSFET transconductance and saturation parameters. The unmodified NGSPICE source code implements a T(sup -1.5) temperature dependence for these parameters. In order to mimic the temperature behavior of experimental SiC JFETs, a T(sup -1.3) temperature dependence must be implemented in the NGSPICE source code. Following these two simple modifications, the Level 1 MOSFET model of the NGSPICE circuit simulation program reasonably approximates the measured high-temperature behavior of experimental SiC JFETs properly operated with zero or reverse bias applied to the gate terminal. Modification of additional silicon parameters in the NGSPICE source code was not necessary to model experimental SiC JFET current-voltage performance across the entire temperature range from 25 to 500 C.
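
    A hedged sketch of the second modification (illustrative Python, not the NGSPICE C source): the transconductance and saturation parameters are scaled from their nominal-temperature values with a T^-1.3 power law instead of the silicon default T^-1.5. The nominal temperature and parameter value below are assumptions.

    ```python
    # Illustrative model of the temperature scaling changed in NGSPICE
    # (the 300 K nominal temperature and kp value are assumptions).

    def scaled_parameter(value_tnom, t_kelvin, tnom=300.0, exponent=-1.3):
        """Scale a transconductance/saturation parameter with temperature.

        Unmodified NGSPICE uses exponent = -1.5 (silicon); the SiC JFET
        modification described above uses exponent = -1.3.
        """
        return value_tnom * (t_kelvin / tnom) ** exponent

    kp_300 = 1.0e-4                    # hypothetical transconductance at 300 K
    for t_c in (25, 250, 500):         # the 25-500 C operating range
        t_k = t_c + 273.15
        print(t_c, scaled_parameter(kp_300, t_k))
    ```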

  6. Numerical relativity waveform surrogate model for generically precessing binary black hole mergers

    NASA Astrophysics Data System (ADS)

    Blackman, Jonathan; Field, Scott E.; Scheel, Mark A.; Galley, Chad R.; Ott, Christian D.; Boyle, Michael; Kidder, Lawrence E.; Pfeiffer, Harald P.; Szilágyi, Béla

    2017-07-01

    A generic, noneccentric binary black hole (BBH) system emits gravitational waves (GWs) that are completely described by seven intrinsic parameters: the black hole spin vectors and the ratio of their masses. Simulating a BBH coalescence by solving Einstein's equations numerically is computationally expensive, requiring days to months of computing resources for a single set of parameter values. Since theoretical predictions of the GWs are often needed for many different source parameters, a fast and accurate model is essential. We present the first surrogate model for GWs from the coalescence of BBHs including all seven dimensions of the intrinsic noneccentric parameter space. The surrogate model, which we call NRSur7dq2, is built from the results of 744 numerical relativity simulations. NRSur7dq2 covers spin magnitudes up to 0.8 and mass ratios up to 2, includes all ℓ≤4 modes, begins about 20 orbits before merger, and can be evaluated in ~50 ms. We find the largest NRSur7dq2 errors to be comparable to the largest errors in the numerical relativity simulations, and more than an order of magnitude smaller than the errors of other waveform models. Our model, and more broadly the methods developed here, will enable studies that were not previously possible when using highly accurate waveforms, such as parameter inference and tests of general relativity with GW observations.

  7. HABEBEE: habitability of eyeball-exo-Earths.

    PubMed

    Angerhausen, Daniel; Sapers, Haley; Citron, Robert; Bergantini, Alexandre; Lutz, Stefanie; Queiroz, Luciano Lopes; da Rosa Alexandre, Marcelo; Araujo, Ana Carolina Vieira

    2013-03-01

    Extrasolar Earth and super-Earth planets orbiting within the habitable zone of M dwarf host stars may play a significant role in the discovery of habitable environments beyond Earth. Spectroscopic characterization of these exoplanets with respect to habitability requires the determination of habitability parameters with respect to remote sensing. The habitable zone of dwarf stars is located in close proximity to the host star, such that exoplanets orbiting within this zone will likely be tidally locked. On terrestrial planets with an icy shell, this may produce a liquid water ocean at the substellar point, one particular "Eyeball Earth" state. In this research proposal, HABEBEE: exploring the HABitability of Eyeball-Exo-Earths, we define the parameters necessary to achieve a stable icy Eyeball Earth capable of supporting life. Astronomical and geochemical research will define parameters needed to simulate potentially habitable environments on an icy Eyeball Earth planet. Biological requirements will be based on detailed studies of microbial communities within Earth analog environments. Using the interdisciplinary results of both the physical and biological teams, we will set up a simulation chamber to expose a cold- and UV-tolerant microbial community to the theoretically derived Eyeball Earth climate states, simulating the composition, atmosphere, physical parameters, and stellar irradiation. Combining the results of both studies will enable us to derive observable parameters as well as target decision guidance and feasibility analysis for upcoming astronomical platforms.

  8. gPKPDSim: a SimBiology®-based GUI application for PKPD modeling in drug development.

    PubMed

    Hosseini, Iraj; Gajjala, Anita; Bumbaca Yadav, Daniela; Sukumaran, Siddharth; Ramanujan, Saroja; Paxson, Ricardo; Gadkar, Kapil

    2018-04-01

    Modeling and simulation (M&S) is increasingly used in drug development to characterize pharmacokinetic-pharmacodynamic (PKPD) relationships and support various efforts such as target feasibility assessment, molecule selection, human PK projection, and preclinical and clinical dose and schedule determination. While model development typically requires mathematical modeling expertise, model exploration and simulations could in many cases be performed by scientists in various disciplines to support the design, analysis and interpretation of experimental studies. To this end, we have developed a versatile graphical user interface (GUI) application to enable easy use of any model constructed in SimBiology® to execute various common PKPD analyses. The MATLAB®-based GUI application, called gPKPDSim, has a single-screen interface and provides functionalities including simulation, data fitting (parameter estimation), population simulation (exploring the impact of parameter variability on the outputs of interest), and non-compartmental PK analysis. Further, gPKPDSim is a user-friendly tool with capabilities including interactive visualization, exporting of results and generation of presentation-ready figures. gPKPDSim was designed primarily for use in preclinical and translational drug development, although broader applications exist. gPKPDSim is a MATLAB®-based open-source application and is publicly available to download from MATLAB® Central™. We illustrate the use and features of gPKPDSim using multiple PKPD models to demonstrate the wide applications of this tool in pharmaceutical sciences. Overall, gPKPDSim provides an integrated, multi-purpose, user-friendly GUI application to enable efficient use of PKPD models by scientists from various disciplines, regardless of their modeling expertise.
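
    As a hedged Python analog of the population-simulation feature described above (gPKPDSim itself is MATLAB/SimBiology-based; the one-compartment model, dose, and variability values here are illustrative assumptions):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def one_compartment_pk(dose, cl, v, t):
        """Concentration-time profile after an IV bolus:
        C(t) = (dose / V) * exp(-(CL / V) * t)."""
        return (dose / v) * np.exp(-(cl / v) * t)

    # Population simulation: log-normal variability on clearance and volume
    # (roughly 30% and 20% CV, illustrative), propagated to the output.
    t = np.linspace(0.0, 24.0, 97)                              # hours
    n_subjects = 500
    cl = 1.0 * np.exp(0.3 * rng.standard_normal(n_subjects))    # L/h
    v = 10.0 * np.exp(0.2 * rng.standard_normal(n_subjects))    # L
    profiles = np.array([one_compartment_pk(100.0, c, vi, t)
                         for c, vi in zip(cl, v)])

    # Summaries of the kind a GUI tool would plot: median and 90% interval.
    median = np.median(profiles, axis=0)
    lo, hi = np.percentile(profiles, [5, 95], axis=0)
    ```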

  9. A 2D modeling approach for fluid propagation during FE-forming simulation of continuously reinforced composites in wet compression moulding

    NASA Astrophysics Data System (ADS)

    Poppe, Christian; Dörr, Dominik; Henning, Frank; Kärger, Luise

    2018-05-01

    Wet compression moulding (WCM) provides large-scale production potential for continuously fiber-reinforced components as a promising alternative to resin transfer moulding (RTM). Lower cycle times are possible due to parallelization of the process steps draping, infiltration and curing during moulding (viscous draping). Experimental and theoretical investigations indicate a strong mutual dependency between the physical mechanisms that occur during draping and mould filling (fluid-structure interaction). Thus, key process parameters, like fiber orientation, fiber volume fraction, cavity pressure and the amount and viscosity of the resin, are physically coupled. To enable time- and cost-efficient product and process development throughout all design stages, accurate process simulation tools are desirable. Separate draping and mould-filling simulation models, as appropriate for the sequential RTM process, cannot be applied to the WCM process due to the physical couplings outlined above. Within this study, a two-dimensional Darcy-Propagation-Element (DPE-2D) based on a finite element formulation with additional control volumes (FE/CV) is presented, verified and applied to forming simulation of a generic geometry, as a first step towards a fluid-structure-interaction model taking into account simultaneous resin infiltration and draping. The model is implemented in the commercial FE solver Abaqus by means of several user subroutines considering simultaneous draping and 2D infiltration mechanisms. Darcy's equation is solved with respect to the local fiber orientation. Furthermore, the material model can access the local fluid domain properties to update the mechanical forming material parameters, which enables further investigations of the coupled physical mechanisms.
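
    For reference, a standard form of Darcy's law with a fiber-oriented permeability tensor (a sketch based on the general literature, not an equation quoted from this record), where K_parallel and K_perp are the permeabilities along and across the fibers and R rotates into the local fiber frame:

    ```latex
    % Darcy flux q driven by the resin pressure gradient (viscosity mu),
    % with the permeability tensor K rotated into the local fiber frame:
    \mathbf{q} = -\frac{\mathbf{K}}{\mu}\,\nabla p,
    \qquad
    \mathbf{K} = \mathbf{R}
    \begin{pmatrix} K_{\parallel} & 0 \\ 0 & K_{\perp} \end{pmatrix}
    \mathbf{R}^{\mathsf{T}}
    ```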

  10. Detecting vortices in superconductors: Extracting one-dimensional topological singularities from a discretized complex scalar field

    DOE PAGES

    Phillips, Carolyn L.; Peterka, Tom; Karpeyev, Dmitry; ...

    2015-02-20

    In type II superconductors, the dynamics of superconducting vortices determine their transport properties. In the Ginzburg-Landau theory, vortices correspond to topological defects in the complex order parameter. Extracting their precise positions and motion from discretized numerical simulation data is an important, but challenging, task. In the past, vortices have mostly been detected by analyzing the magnitude of the complex scalar field representing the order parameter and visualized by corresponding contour plots and isosurfaces. However, these methods, primarily used for small-scale simulations, blur the fine details of the vortices, scale poorly to large-scale simulations, and do not easily enable isolating and tracking individual vortices. In this paper, we present a method for exactly finding the vortex core lines from a complex order parameter field. With this method, vortices can be easily described at a resolution even finer than the mesh itself. The precise determination of the vortex cores allows the interplay of the vortices inside a model superconductor to be visualized in higher resolution than has previously been possible. Finally, by representing the field as the set of vortices, this method also massively reduces the data footprint of the simulations and provides the data structures for further analysis and feature tracking.
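
    A hedged sketch of the underlying idea (a generic phase-winding test on a 2D grid, not the authors' exact core-line algorithm): a vortex threads a mesh plaquette when the phase of the order parameter winds by ±2π around it.

    ```python
    import numpy as np

    def vortex_charges(psi):
        """Winding number of the order-parameter phase around each
        elementary plaquette of a 2D grid; +/-1 marks a vortex core."""
        p = np.angle(psi)

        def d(a, b):                 # phase difference wrapped to (-pi, pi]
            return np.angle(np.exp(1j * (b - a)))

        w = (d(p[:-1, :-1], p[:-1, 1:]) + d(p[:-1, 1:], p[1:, 1:])
             + d(p[1:, 1:], p[1:, :-1]) + d(p[1:, :-1], p[:-1, :-1]))
        return np.round(w / (2 * np.pi)).astype(int)

    # Illustrative field with one vortex between the central grid nodes.
    n = 64
    y, x = np.mgrid[0:n, 0:n] - (n - 1) / 2.0    # half-integer offsets
    psi = (x + 1j * y) / np.hypot(x, y)
    charges = vortex_charges(psi)
    print(np.argwhere(charges != 0), charges.sum())   # one +1 vortex
    ```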

  11. Refinement of Generalized Born Implicit Solvation Parameters for Nucleic Acids and their Complexes with Proteins

    PubMed Central

    Nguyen, Hai; Pérez, Alberto; Bermeo, Sherry; Simmerling, Carlos

    2016-01-01

    The Generalized Born (GB) implicit solvent model has undergone significant improvements in accuracy for modeling of proteins and small molecules. However, GB still remains a less widely explored option for nucleic acid simulations, in part because fast GB models are often unable to maintain stable nucleic acid structures, or they introduce structural bias in proteins, leading to difficulty in applying GB models to simulations of protein-nucleic acid complexes. Recently, GB-neck2 was developed to improve the behavior of protein simulations. In an effort to create a more accurate model, a procedure similar to the development of GB-neck2 is described here for nucleic acids. The resulting parameter set significantly reduces absolute and relative energy error relative to Poisson-Boltzmann for both nucleic acids and nucleic acid-protein complexes, when compared to its predecessor, the GB-neck model. This improvement in solvation energy calculation translates to increased structural stability for simulations of DNA and RNA duplexes, quadruplexes, and protein-nucleic acid complexes. The GB-neck2 model also enables successful folding of small DNA and RNA hairpins to near-native structures as determined from comparison with experiment. The functional form and all required parameters are provided here and also implemented in the AMBER software. PMID:26574454

  12. Efficient simulation and model reformulation of two-dimensional electrochemical thermal behavior of lithium-ion batteries

    DOE PAGES

    Northrop, Paul W. C.; Pathak, Manan; Rife, Derek; ...

    2015-03-09

    Lithium-ion batteries are an important technology to facilitate efficient energy storage and enable a shift from petroleum-based energy to more environmentally benign sources. Such systems can be utilized most efficiently if a good understanding of performance can be achieved for a range of operating conditions. Mathematical models can be useful to predict battery behavior to allow for optimization of design and control. An analytical solution is ideally preferred to solve the equations of a mathematical model, as it eliminates the error that arises when using numerical techniques and is usually computationally cheap. An analytical solution provides insight into the behavior of the system and also explicitly shows the effects of different parameters on the behavior. However, most engineering models, including the majority of battery models, cannot be solved analytically due to non-linearities in the equations and state-dependent transport and kinetic parameters. The numerical method used to solve the system of equations describing a battery operation can have a significant impact on the computational cost of the simulation. In this paper, a reformulation of the porous electrode pseudo three-dimensional (P3D) model, which significantly reduces the computational cost of lithium-ion battery simulation while maintaining high accuracy, is discussed. This reformulation enables the use of the P3D model in applications that would otherwise be too computationally expensive to justify its use, such as online control, optimization, and parameter estimation. Furthermore, the P3D model has proven to be robust enough to allow for the inclusion of additional physical phenomena as understanding improves. In this study, the reformulated model is used to allow more complicated physical phenomena to be considered, including thermal effects.

  13. Exact-Differential Large-Scale Traffic Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hanai, Masatoshi; Suzumura, Toyotaro; Theodoropoulos, Georgios

    2015-01-01

    Analyzing large-scale traffic by simulation requires repeating execution many times with various patterns of scenarios or parameters. Such repeated execution introduces substantial redundancy, because the change from one scenario to the next is usually minor, for example, blocking only one road or changing the speed limit of several roads. In this paper, we propose a new redundancy reduction technique, called exact-differential simulation, which simulates only the changed portions of later scenarios while producing exactly the same results as a whole simulation. The paper consists of two main efforts: (i) the key idea and algorithm of exact-differential simulation, and (ii) a method to build large-scale traffic simulation on top of exact-differential simulation. In experiments with a Tokyo traffic simulation, exact-differential simulation improves elapsed time by a factor of 7.26 on average, and by a factor of 2.26 even in the worst case, compared with whole simulation.

  14. A Software Toolkit to Study Systematic Uncertainties of the Physics Models of the Geant4 Simulation Package

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Genser, Krzysztof; Hatcher, Robert; Kelsey, Michael

    The Geant4 simulation toolkit is used to model interactions between particles and matter. Geant4 employs a set of validated physics models that span a wide range of interaction energies. These models rely on measured cross-sections and phenomenological models with physically motivated parameters that are tuned to cover many application domains. To study what uncertainties are associated with the Geant4 physics models we have designed and implemented a comprehensive, modular, user-friendly software toolkit that allows the variation of one or more parameters of one or more Geant4 physics models involved in simulation studies. It also enables analysis of multiple variants of the resulting physics observables of interest in order to estimate the uncertainties associated with the simulation model choices. Based on modern event-processing infrastructure software, the toolkit offers a variety of attractive features, e.g. a flexible run-time configurable workflow, comprehensive bookkeeping, and an easy-to-expand collection of analytical components. Design, implementation technology, and key functionalities of the toolkit are presented in this paper and illustrated with selected results.

  15. Fractal propagation method enables realistic optical microscopy simulations in biological tissues

    PubMed Central

    Glaser, Adam K.; Chen, Ye; Liu, Jonathan T.C.

    2017-01-01

    Current simulation methods for light transport in biological media have limited efficiency and realism when applied to three-dimensional microscopic light transport in biological tissues with refractive heterogeneities. We describe here a technique which combines a beam propagation method valid for modeling light transport in media with weak variations in refractive index, with a fractal model of refractive index turbulence. In contrast to standard simulation methods, this fractal propagation method (FPM) is able to accurately and efficiently simulate the diffraction effects of focused beams, as well as the microscopic heterogeneities present in tissue that result in scattering, refractive beam steering, and the aberration of beam foci. We validate the technique and the relationship between the FPM model parameters and conventional optical parameters used to describe tissues, and also demonstrate the method’s flexibility and robustness by examining the steering and distortion of Gaussian and Bessel beams in tissue with comparison to experimental data. We show that the FPM has utility for the accurate investigation and optimization of optical microscopy methods such as light-sheet, confocal, and nonlinear microscopy. PMID:28983499
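
    A hedged sketch of the beam-propagation core that the FPM builds on (a standard split-step Fourier scheme in one transverse dimension; the white-noise index perturbation stands in for the fractal turbulence model, and all parameter values are simplified assumptions, not the authors' implementation):

    ```python
    import numpy as np

    def split_step_bpm(E0, delta_n, wavelength, dx, dz):
        """Propagate a 1D field through slices of refractive-index
        perturbation delta_n[z, x] with the paraxial split-step method."""
        n_steps, n_x = delta_n.shape
        k0 = 2 * np.pi / wavelength
        kx = 2 * np.pi * np.fft.fftfreq(n_x, d=dx)
        propagator = np.exp(-1j * kx ** 2 * dz / (2 * k0))
        E = E0.astype(complex)
        for s in range(n_steps):
            E = np.fft.ifft(np.fft.fft(E) * propagator)   # diffraction step
            E = E * np.exp(1j * k0 * delta_n[s] * dz)     # refraction step
        return E

    rng = np.random.default_rng(4)
    n_x, dx, dz = 512, 0.5, 1.0                  # grid in microns
    x = (np.arange(n_x) - n_x / 2) * dx
    E0 = np.exp(-(x / 10.0) ** 2)                # Gaussian beam, 10 um waist
    delta_n = 1e-3 * rng.standard_normal((200, n_x))   # white-noise stand-in
    E_out = split_step_bpm(E0, delta_n, wavelength=0.5, dx=dx, dz=dz)
    ```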

  16. Exploiting active subspaces to quantify uncertainty in the numerical simulation of the HyShot II scramjet

    NASA Astrophysics Data System (ADS)

    Constantine, P. G.; Emory, M.; Larsson, J.; Iaccarino, G.

    2015-12-01

    We present a computational analysis of the reactive flow in a hypersonic scramjet engine with focus on effects of uncertainties in the operating conditions. We employ a novel methodology based on active subspaces to characterize the effects of the input uncertainty on the scramjet performance. The active subspace identifies one-dimensional structure in the map from simulation inputs to quantity of interest that allows us to reparameterize the operating conditions; instead of seven physical parameters, we can use a single derived active variable. This dimension reduction enables otherwise infeasible uncertainty quantification, considering the simulation cost of roughly 9500 CPU-hours per run. For two values of the fuel injection rate, we use a total of 68 simulations to (i) identify the parameters that contribute the most to the variation in the output quantity of interest, (ii) estimate upper and lower bounds on the quantity of interest, (iii) classify sets of operating conditions as safe or unsafe corresponding to a threshold on the output quantity of interest, and (iv) estimate a cumulative distribution function for the quantity of interest.
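
    A hedged sketch of the active-subspace construction (the generic gradient-based formulation; the test function, dimension, and the 68-sample count mirroring the record are illustrative):

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    def active_subspace(grad_samples, k=1):
        """Estimate the k-dimensional active subspace from gradient samples.

        Builds C = E[grad f grad f^T] by Monte Carlo and returns the dominant
        eigenvectors; projecting inputs onto them gives the active variables."""
        G = np.asarray(grad_samples)             # shape (M, m)
        C = G.T @ G / G.shape[0]
        eigvals, eigvecs = np.linalg.eigh(C)
        order = np.argsort(eigvals)[::-1]
        return eigvals[order], eigvecs[:, order[:k]]

    # Illustrative QoI with hidden 1-D structure: f(x) = sinh(w . x).
    m, M = 7, 68                                 # 7 inputs, 68 samples
    w = rng.standard_normal(m); w /= np.linalg.norm(w)
    X = rng.uniform(-1, 1, (m, M))
    grads = np.array([np.cosh(w @ x) * w for x in X.T])   # exact gradients
    eigvals, W1 = active_subspace(grads)
    active_variable = W1.T @ X                   # one variable, not seven
    ```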

  17. Optimal Tuner Selection for Kalman Filter-Based Aircraft Engine Performance Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Garg, Sanjay

    2010-01-01

    A linear point design methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented. This technique specifically addresses the underdetermined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine which seeks to minimize the theoretical mean-squared estimation error. This paper derives theoretical Kalman filter estimation error bias and variance values at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the conventional approach of tuner selection. Experimental simulation results are found to be in agreement with theoretical predictions. The new methodology is shown to yield a significant improvement in on-line engine performance estimation accuracy.

  18. Robust Real-Time Musculoskeletal Modeling Driven by Electromyograms.

    PubMed

    Durandau, Guillaume; Farina, Dario; Sartori, Massimo

    2018-03-01

    Current clinical biomechanics involves lengthy data acquisition and time-consuming offline analyses with biomechanical models not operating in real-time for man-machine interfacing. We developed a method that enables online analysis of neuromusculoskeletal function in vivo in the intact human. We used electromyography (EMG)-driven musculoskeletal modeling to simulate all transformations from muscle excitation onset (EMGs) to mechanical moment production around multiple lower-limb degrees of freedom (DOFs). We developed a calibration algorithm that enables adjusting musculoskeletal model parameters specifically to an individual's anthropometry and force-generating capacity. We incorporated the modeling paradigm into a computationally efficient, generic framework that can be interfaced in real-time with any movement data collection system. The framework demonstrated the ability of computing forces in 13 lower-limb muscle-tendon units and resulting moments about three joint DOFs simultaneously in real-time. Remarkably, it was capable of extrapolating beyond calibration conditions, i.e., predicting accurate joint moments during six unseen tasks and one unseen DOF. The proposed framework can dramatically reduce evaluation latency in current clinical biomechanics and open up new avenues for establishing prompt and personalized treatments, as well as for establishing natural interfaces between patients and rehabilitation systems. The integration of EMG with numerical modeling will enable simulating realistic neuromuscular strategies in conditions including muscular/orthopedic deficit, which could not be robustly simulated via pure modeling formulations. This will enable translation to clinical settings and development of healthcare technologies including real-time bio-feedback of internal mechanical forces and direct patient-machine interfacing.

  19. Predicting scattering scanning near-field optical microscopy of mass-produced plasmonic devices

    NASA Astrophysics Data System (ADS)

    Otto, Lauren M.; Burgos, Stanley P.; Staffaroni, Matteo; Ren, Shen; Süzer, Özgün; Stipe, Barry C.; Ashby, Paul D.; Hammack, Aeron T.

    2018-05-01

    Scattering scanning near-field optical microscopy enables optical imaging and characterization of plasmonic devices with nanometer-scale resolution well below the diffraction limit. This technique enables developers to probe and understand the waveguide-coupled plasmonic antenna in as-fabricated heat-assisted magnetic recording heads. In order to validate and predict results and to extract information from experimental measurements that is physically comparable to simulations, a model was developed to translate the simulated electric field into expected near-field measurements using physical parameters specific to scattering scanning near-field optical microscopy physics. The methods used in this paper prove that scattering scanning near-field optical microscopy can be used to determine critical sub-diffraction-limited dimensions of optical field confinement, which is a crucial metrology requirement for the future of nano-optics, semiconductor photonic devices, and biological sensing where the near-field character of light is fundamental to device operation.

  20. Massive optimal data compression and density estimation for scalable, likelihood-free inference in cosmology

    NASA Astrophysics Data System (ADS)

    Alsing, Justin; Wandelt, Benjamin; Feeney, Stephen

    2018-07-01

    Many statistical models in cosmology can be simulated forwards but have intractable likelihood functions. Likelihood-free inference methods allow us to perform Bayesian inference from these models using only forward simulations, free from any likelihood assumptions or approximations. Likelihood-free inference generically involves simulating mock data and comparing to the observed data; this comparison in data space suffers from the curse of dimensionality and requires compression of the data to a small number of summary statistics to be tractable. In this paper, we use massive asymptotically optimal data compression to reduce the dimensionality of the data space to just one number per parameter, providing a natural and optimal framework for summary statistic choice for likelihood-free inference. Second, we present the first cosmological application of Density Estimation Likelihood-Free Inference (DELFI), which learns a parametrized model for the joint distribution of data and parameters, yielding both the parameter posterior and the model evidence. This approach is conceptually simple, requires less tuning than traditional Approximate Bayesian Computation approaches to likelihood-free inference and can give high-fidelity posteriors from orders of magnitude fewer forward simulations. As an additional bonus, it enables parameter inference and Bayesian model comparison simultaneously. We demonstrate DELFI with massive data compression on an analysis of the joint light-curve analysis supernova data, as a simple validation case study. We show that high-fidelity posterior inference is possible for full-scale cosmological data analyses with as few as ~10^4 simulations, with substantial scope for further improvement, demonstrating the scalability of likelihood-free inference to large and complex cosmological data sets.
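
    A compact Python sketch of the linear score compression that yields one summary per parameter, under the stated Gaussian assumptions (fixed data covariance, known mean derivatives); all arrays below are illustrative stand-ins rather than supernova data.

        import numpy as np

        rng = np.random.default_rng(2)
        n_data, n_par = 1000, 3

        C = np.eye(n_data)                          # data covariance (assumed fixed)
        Cinv = np.linalg.inv(C)
        dmu = rng.standard_normal((n_data, n_par))  # d(mean)/d(parameters)
        mu0 = np.zeros(n_data)                      # fiducial mean data vector
        data = mu0 + rng.standard_normal(n_data)    # mock observed data

        # Compressed summaries: one number per parameter, asymptotically optimal.
        t = dmu.T @ Cinv @ (data - mu0)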

  1. AmapSim: a structural whole-plant simulator based on botanical knowledge and designed to host external functional models.

    PubMed

    Barczi, Jean-François; Rey, Hervé; Caraglio, Yves; de Reffye, Philippe; Barthélémy, Daniel; Dong, Qiao Xue; Fourcaud, Thierry

    2008-05-01

    AmapSim is a tool that implements a structural plant growth model based on a botanical theory and simulates plant morphogenesis to produce accurate, complex and detailed plant architectures. This software is the result of more than a decade of research and development devoted to plant architecture. New advances in the software development have yielded plug-in external functions that open up the simulator to functional processes. The simulation of plant topology is based on the growth of a set of virtual buds whose activity is modelled using stochastic processes. The geometry of the resulting axes is modelled by simple descriptive functions. The potential growth of each bud is represented by means of a numerical value called physiological age, which controls the value for each parameter in the model. The set of possible values for physiological ages is called the reference axis. In order to mimic morphological and architectural metamorphosis, the value allocated for the physiological age of buds evolves along this reference axis according to an oriented finite state automaton whose occupation and transition law follows a semi-Markovian function. Simulations were performed on tomato plants to demonstrate how the AmapSim simulator can interface external modules, e.g. a GREENLAB growth model and a radiosity model. The algorithmic ability provided by AmapSim, e.g. the reference axis, enables unified control to be exercised over plant development parameter values, depending on the biological process target: how to affect the local pertinent process, i.e. the pertinent parameter(s), while keeping the rest unchanged. This opening up to external functions also offers a broadened field of applications and thus allows feedback between plant growth and the physical environment.
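
    A toy Python sketch of the bud mechanism described above, clearly labeled as an illustration: the physiological-age states, dwell-time distributions and oriented transition matrix are invented, not AmapSim's calibrated reference axis.

        import numpy as np

        rng = np.random.default_rng(4)

        states = ["young", "adult", "old"]      # physiological ages along the axis
        P = np.array([[0.0, 1.0, 0.0],          # oriented (forward-only) automaton
                      [0.0, 0.0, 1.0],
                      [0.0, 0.0, 1.0]])         # "old" is absorbing
        mean_dwell = {"young": 3, "adult": 5, "old": 2}

        def grow(n_cycles=15):
            s, trajectory = 0, []
            while len(trajectory) < n_cycles:
                dwell = 1 + rng.poisson(mean_dwell[states[s]])  # semi-Markov dwell time
                trajectory += [states[s]] * dwell
                s = rng.choice(3, p=P[s])
            return trajectory[:n_cycles]

        print(grow())   # physiological age of one virtual bud over growth cycles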

  2. AmapSim: A Structural Whole-plant Simulator Based on Botanical Knowledge and Designed to Host External Functional Models

    PubMed Central

    Barczi, Jean-François; Rey, Hervé; Caraglio, Yves; de Reffye, Philippe; Barthélémy, Daniel; Dong, Qiao Xue; Fourcaud, Thierry

    2008-01-01

    Background and Aims AmapSim is a tool that implements a structural plant growth model based on a botanical theory and simulates plant morphogenesis to produce accurate, complex and detailed plant architectures. This software is the result of more than a decade of research and development devoted to plant architecture. New advances in the software development have yielded plug-in external functions that open up the simulator to functional processes. Methods The simulation of plant topology is based on the growth of a set of virtual buds whose activity is modelled using stochastic processes. The geometry of the resulting axes is modelled by simple descriptive functions. The potential growth of each bud is represented by means of a numerical value called physiological age, which controls the value for each parameter in the model. The set of possible values for physiological ages is called the reference axis. In order to mimic morphological and architectural metamorphosis, the value allocated for the physiological age of buds evolves along this reference axis according to an oriented finite state automaton whose occupation and transition law follows a semi-Markovian function. Key Results Simulations were performed on tomato plants to demonstrate how the AmapSim simulator can interface external modules, e.g. a GREENLAB growth model and a radiosity model. Conclusions The algorithmic ability provided by AmapSim, e.g. the reference axis, enables unified control to be exercised over plant development parameter values, depending on the biological process target: how to affect the local pertinent process, i.e. the pertinent parameter(s), while keeping the rest unchanged. This opening up to external functions also offers a broadened field of applications and thus allows feedback between plant growth and the physical environment. PMID:17766310

  3. A satellite simulator for TRMM PR applied to climate model simulations

    NASA Astrophysics Data System (ADS)

    Spangehl, T.; Schroeder, M.; Bodas-Salcedo, A.; Hollmann, R.; Riley Dellaripa, E. M.; Schumacher, C.

    2017-12-01

    Climate model simulations have to be compared against observation-based datasets in order to assess their skill in representing precipitation characteristics. Here we use a satellite simulator for TRMM PR in order to evaluate simulations performed with MPI-ESM (the Earth system model of the Max Planck Institute for Meteorology in Hamburg, Germany) within the MiKlip project (https://www.fona-miklip.de/, funded by the Federal Ministry of Education and Research in Germany). While classical evaluation methods focus on geophysical parameters such as precipitation amounts, the application of the satellite simulator enables an evaluation in the instrument's parameter space, thereby reducing uncertainties on the reference side. The CFMIP Observation Simulator Package (COSP) provides a framework for the application of satellite simulators to climate model simulations. The approach requires the introduction of sub-grid cloud and precipitation variability. Radar reflectivities are obtained by applying Mie theory, with the microphysical assumptions being chosen to match the atmosphere component of MPI-ESM (ECHAM6). The results are found to be sensitive to the methods used to distribute the convective precipitation over the sub-grid boxes. Simple parameterization methods are used to introduce sub-grid variability of convective clouds and precipitation. In order to constrain uncertainties, a comprehensive comparison with sub-grid scale convective precipitation variability deduced from TRMM PR observations is carried out.

  4. Emulating Simulations of Cosmic Dawn for 21 cm Power Spectrum Constraints on Cosmology, Reionization, and X-Ray Heating

    NASA Astrophysics Data System (ADS)

    Kern, Nicholas S.; Liu, Adrian; Parsons, Aaron R.; Mesinger, Andrei; Greig, Bradley

    2017-10-01

    Current and upcoming radio interferometric experiments are aiming to make a statistical characterization of the high-redshift 21 cm fluctuation signal spanning the hydrogen reionization and X-ray heating epochs of the universe. However, connecting 21 cm statistics to the underlying physical parameters is complicated by the theoretical challenge of modeling the relevant physics at computational speeds quick enough to enable exploration of the high-dimensional and weakly constrained parameter space. In this work, we use machine learning algorithms to build a fast emulator that can accurately mimic an expensive simulation of the 21 cm signal across a wide parameter space. We embed our emulator within a Markov Chain Monte Carlo framework in order to perform Bayesian parameter constraints over a large number of model parameters, including those that govern the Epoch of Reionization, the Epoch of X-ray Heating, and cosmology. As a worked example, we use our emulator to present an updated parameter constraint forecast for the Hydrogen Epoch of Reionization Array experiment, showing that its characterization of a fiducial 21 cm power spectrum will considerably narrow the allowed parameter space of reionization and heating parameters, and could help strengthen Planck's constraints on σ_8. We provide both our generalized emulator code and its implementation specifically for 21 cm parameter constraints as publicly available software.
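
    A minimal Python sketch of the emulator-in-MCMC pattern, with a toy function standing in for the expensive 21 cm simulation and scikit-learn's Gaussian process regressor standing in for the paper's emulator; nothing below is the authors' code.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor

        rng = np.random.default_rng(3)

        def expensive_sim(theta):              # stand-in for the 21 cm simulation
            return np.sin(3 * theta[0]) + theta[1] ** 2

        X_train = rng.uniform(0, 1, size=(50, 2))
        y_train = np.array([expensive_sim(t) for t in X_train])
        emu = GaussianProcessRegressor().fit(X_train, y_train)   # fast emulator

        obs, sigma = 0.8, 0.1                  # mock observation and noise level

        def log_post(theta):
            if np.any(theta < 0) or np.any(theta > 1):
                return -np.inf                 # uniform prior box
            pred = emu.predict(theta.reshape(1, -1))[0]
            return -0.5 * ((pred - obs) / sigma) ** 2

        theta, chain = np.array([0.5, 0.5]), []
        for _ in range(5000):                  # Metropolis sampler on the emulator
            prop = theta + 0.05 * rng.standard_normal(2)
            if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
                theta = prop
            chain.append(theta.copy())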

  5. Multifidelity-CMA: a multifidelity approach for efficient personalisation of 3D cardiac electromechanical models.

    PubMed

    Molléro, Roch; Pennec, Xavier; Delingette, Hervé; Garny, Alan; Ayache, Nicholas; Sermesant, Maxime

    2018-02-01

    Personalised computational models of the heart are of increasing interest for clinical applications due to their discriminative and predictive abilities. However, the simulation of a single heartbeat with a 3D cardiac electromechanical model can be long and computationally expensive, which makes some practical applications, such as the estimation of model parameters from clinical data (the personalisation), very slow. Here we introduce an original multifidelity approach between a 3D cardiac model and a simplified "0D" version of this model, which enables reliable (and extremely fast) approximations of the global behaviour of the 3D model using 0D simulations. We then use this multifidelity approximation to speed up an efficient parameter estimation algorithm, leading to a fast and computationally efficient personalisation method for the 3D model. In particular, we show results on a cohort of 121 different heart geometries and measurements. Finally, an exploitable code of the 0D model with scripts to perform parameter estimation will be released to the community.

  6. Predictive process simulation of cryogenic implants for leading edge transistor design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gossmann, Hans-Joachim; Zographos, Nikolas; Park, Hugh

    2012-11-06

    Two cryogenic implant TCAD-modules have been developed: (i) A continuum-based compact model targeted towards a TCAD production environment calibrated against an extensive data-set for all common dopants. Ion-specific calibration parameters related to damage generation and dynamic annealing were used and resulted in excellent fits to the calibration data-set. (ii) A Kinetic Monte Carlo (kMC) model including the full time dependence of ion-exposure that a particular spot on the wafer experiences, as well as the resulting temperature vs. time profile of this spot. It was calibrated by adjusting damage generation and dynamic annealing parameters. The kMC simulations clearly demonstrate the importance of the time-structure of the beam for the amorphization process: Assuming an average dose-rate does not capture all of the physics and may lead to incorrect conclusions. The model enables optimization of the amorphization process through tool parameters such as scan speed or beam height.

  7. VASA: Interactive Computational Steering of Large Asynchronous Simulation Pipelines for Societal Infrastructure.

    PubMed

    Ko, Sungahn; Zhao, Jieqiong; Xia, Jing; Afzal, Shehzad; Wang, Xiaoyu; Abram, Greg; Elmqvist, Niklas; Kne, Len; Van Riper, David; Gaither, Kelly; Kennedy, Shaun; Tolone, William; Ribarsky, William; Ebert, David S

    2014-12-01

    We present VASA, a visual analytics platform consisting of a desktop application, a component model, and a suite of distributed simulation components for modeling the impact of societal threats such as weather, food contamination, and traffic on critical infrastructure such as supply chains, road networks, and power grids. Each component encapsulates a high-fidelity simulation model that together form an asynchronous simulation pipeline: a system of systems of individual simulations with a common data and parameter exchange format. At the heart of VASA is the Workbench, a visual analytics application providing three distinct features: (1) low-fidelity approximations of the distributed simulation components using local simulation proxies to enable analysts to interactively configure a simulation run; (2) computational steering mechanisms to manage the execution of individual simulation components; and (3) spatiotemporal and interactive methods to explore the combined results of a simulation run. We showcase the utility of the platform using examples involving supply chains during a hurricane as well as food contamination in a fast food restaurant chain.

  8. Temporal binning of time-correlated single photon counting data improves exponential decay fits and imaging speed

    PubMed Central

    Walsh, Alex J.; Sharick, Joe T.; Skala, Melissa C.; Beier, Hope T.

    2016-01-01

    Time-correlated single photon counting (TCSPC) enables acquisition of fluorescence lifetime decays with high temporal resolution within the fluorescence decay. However, many thousands of photons per pixel are required for accurate lifetime decay curve representation, instrument response deconvolution, and lifetime estimation, particularly for two-component lifetimes. TCSPC imaging speed is inherently limited due to the single photon per laser pulse nature and low fluorescence event efficiencies (<10%) required to reduce bias towards short lifetimes. Here, simulated fluorescence lifetime decays are analyzed by SPCImage and SLIM Curve software to determine the limiting lifetime parameters and photon requirements of fluorescence lifetime decays that can be accurately fit. Data analysis techniques to improve fitting accuracy for low photon count data were evaluated. Temporal binning of the decays from 256 time bins to 42 time bins significantly (p<0.0001) improved fit accuracy in SPCImage and enabled accurate fits with low photon counts (as low as 700 photons/decay), a 6-fold reduction in required photons and therefore improvement in imaging speed. Additionally, reducing the number of free parameters in the fitting algorithm by fixing the lifetimes to known values significantly reduced the lifetime component error from 27.3% to 3.2% in SPCImage (p<0.0001) and from 50.6% to 4.2% in SLIM Curve (p<0.0001). Analysis of nicotinamide adenine dinucleotide–lactate dehydrogenase (NADH-LDH) solutions confirmed temporal binning of TCSPC data and a reduced number of free parameters improves exponential decay fit accuracy in SPCImage. Altogether, temporal binning (in SPCImage) and reduced free parameters are data analysis techniques that enable accurate lifetime estimation from low photon count data and enable TCSPC imaging speeds up to 6x and 300x faster, respectively, than traditional TCSPC analysis. PMID:27446663
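
    A brief Python sketch of the temporal-binning step, assuming a simple noiseless instrument response; the lifetimes, photon budget and binning below are illustrative, not the SPCImage or SLIM Curve implementations.

        import numpy as np
        from scipy.optimize import curve_fit

        rng = np.random.default_rng(5)
        t = np.linspace(0, 10, 256)                       # 256 time bins [ns]
        shape = 0.6 * np.exp(-t / 0.4) + 0.4 * np.exp(-t / 2.5)
        counts = rng.poisson(700 * shape / shape.sum())   # ~700 photons per decay

        # Rebin the decay from 256 to 42 time bins before fitting.
        n_out = 42
        edges = np.linspace(0, len(counts), n_out + 1).astype(int)
        y42 = np.array([counts[a:b].sum() for a, b in zip(edges[:-1], edges[1:])])
        t42 = np.array([t[a:b].mean() for a, b in zip(edges[:-1], edges[1:])])

        def biexp(t, A, a1, tau1, tau2):      # two-component exponential decay
            return A * (a1 * np.exp(-t / tau1) + (1 - a1) * np.exp(-t / tau2))

        popt, _ = curve_fit(biexp, t42, y42, p0=[y42.max(), 0.5, 0.5, 2.0],
                            maxfev=10000)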

  9. Fine-Tuning ADAS Algorithm Parameters for Optimizing Traffic ...

    EPA Pesticide Factsheets

    With the development of the Connected Vehicle technology that facilitates wireless communication among vehicles and road-side infrastructure, the Advanced Driver Assistance Systems (ADAS) can be adopted as an effective tool for accelerating traffic safety and mobility optimization at various highway facilities. To this end, the traffic management centers identify the optimal ADAS algorithm parameter set that enables the maximum improvement of the traffic safety and mobility performance, and broadcast the optimal parameter set wirelessly to individual ADAS-equipped vehicles. After adopting the optimal parameter set, the ADAS-equipped drivers become active agents in the traffic stream that work collectively and consistently to prevent traffic conflicts, lower the intensity of traffic disturbances, and suppress the development of traffic oscillations into heavy traffic jams. Successful implementation of this objective requires the analysis capability of capturing the impact of the ADAS on driving behaviors, and measuring traffic safety and mobility performance under the influence of the ADAS. To address this challenge, this research proposes a synthetic methodology that incorporates ADAS-affected driving behavior modeling and state-of-the-art microscopic traffic flow modeling into a virtually simulated environment. Building on such an environment, the optimal ADAS algorithm parameter set is identified through an optimization programming framework to enable the maximum improvement of traffic safety and mobility performance.

  10. Simulation of random road microprofile based on specified correlation function

    NASA Astrophysics Data System (ADS)

    Rykov, S. P.; Rykova, O. A.; Koval, V. S.; Vlasov, V. G.; Fedotov, K. V.

    2018-03-01

    The paper aims to develop a numerical simulation method and an algorithm for generating a random microprofile of special roads based on a specified correlation function. The paper uses methods of correlation, spectral and numerical analysis. It proves that the transfer function of the generating filter, for known expressions of the input and output spectral characteristics, can be calculated using a theorem on nonnegative fractional-rational factorization and integral transformation. The resulting random-function model, equivalent to the real road-surface microprofile, enables assessment of springing-system parameters and identification of their ranges of variation.
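
    A small Python sketch of the generating-filter idea for the common special case of an exponential correlation function R(tau) = sigma^2 * exp(-alpha*|tau|); the road parameters are invented, and the paper's general fractional-rational case is not reproduced here.

        import numpy as np

        rng = np.random.default_rng(6)
        sigma, alpha = 0.01, 0.2        # profile std dev [m], correlation decay [1/m]
        ds, n = 0.1, 5000               # spatial step [m], number of samples
        rho = np.exp(-alpha * ds)       # first-order shaping-filter coefficient

        z = np.empty(n)
        z[0] = sigma * rng.standard_normal()
        for k in range(n - 1):          # white noise through the generating filter
            z[k + 1] = rho * z[k] + sigma * np.sqrt(1 - rho**2) * rng.standard_normal()

        # Empirical lag-1 correlation should approach the target rho.
        print(np.corrcoef(z[:-1], z[1:])[0, 1], "vs", rho)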

  11. Interactive 3D display simulator for autostereoscopic smart pad

    NASA Astrophysics Data System (ADS)

    Choe, Yeong-Seon; Lee, Ho-Dong; Park, Min-Chul; Son, Jung-Young; Park, Gwi-Tae

    2012-06-01

    There is growing interest in displaying 3D images on smart pads for entertainment and information services. Designing and realizing various types of 3D displays on a smart pad is difficult within typical cost and time constraints. Software simulation can be an alternative method to save costs and shorten development. In this paper, we propose a 3D display simulator for autostereoscopic smart pads. It simulates the light intensity of each view and the crosstalk of smart pad display panels. Designers of 3D displays for smart pads can interactively simulate many kinds of autostereoscopic displays by changing the parameters required for panel design. Crosstalk, which quantifies the leakage of one eye's image into the image of the other eye, and light intensity, which determines the visual comfort zone, are important factors in designing an autostereoscopic display for a smart pad. Interaction enables intuitive design. This paper describes an interactive 3D display simulator for autostereoscopic smart pads.

  12. SBML-PET: a Systems Biology Markup Language-based parameter estimation tool.

    PubMed

    Zi, Zhike; Klipp, Edda

    2006-11-01

    The estimation of model parameters from experimental data remains a bottleneck for a major breakthrough in systems biology. We present a Systems Biology Markup Language (SBML) based Parameter Estimation Tool (SBML-PET). The tool is designed to enable parameter estimation for biological models including signaling pathways, gene regulation networks and metabolic pathways. SBML-PET supports import and export of the models in the SBML format. It can estimate the parameters by fitting a variety of experimental data from different experimental conditions. SBML-PET has a unique feature of supporting event definition in the SBML model. SBML models can also be simulated in SBML-PET. Stochastic Ranking Evolution Strategy (SRES) is incorporated in SBML-PET for parameter estimation jobs. A classic ODE solver called ODEPACK is used to solve the Ordinary Differential Equation (ODE) system. http://sysbio.molgen.mpg.de/SBML-PET/. The website also contains detailed documentation for SBML-PET.

  13. The dynamical core of the Aeolus 1.0 statistical-dynamical atmosphere model: validation and parameter optimization

    NASA Astrophysics Data System (ADS)

    Totz, Sonja; Eliseev, Alexey V.; Petri, Stefan; Flechsig, Michael; Caesar, Levke; Petoukhov, Vladimir; Coumou, Dim

    2018-02-01

    We present and validate a set of equations for representing the atmosphere's large-scale general circulation in an Earth system model of intermediate complexity (EMIC). These dynamical equations have been implemented in Aeolus 1.0, which is a statistical-dynamical atmosphere model (SDAM) and includes radiative transfer and cloud modules (Coumou et al., 2011; Eliseev et al., 2013). The statistical dynamical approach is computationally efficient and thus enables us to perform climate simulations at multimillennia timescales, which is a prime aim of our model development. Further, this computational efficiency enables us to scan large and high-dimensional parameter space to tune the model parameters, e.g., for sensitivity studies. Here, we present novel equations for the large-scale zonal-mean wind as well as those for planetary waves. Together with synoptic parameterization (as presented by Coumou et al., 2011), these form the mathematical description of the dynamical core of Aeolus 1.0. We optimize the dynamical core parameter values by tuning all relevant dynamical fields to ERA-Interim reanalysis data (1983-2009), forcing the dynamical core with prescribed surface temperature, surface humidity and cumulus cloud fraction. We test the model's performance in reproducing the seasonal cycle and the influence of the El Niño-Southern Oscillation (ENSO). We use a simulated annealing optimization algorithm, which approximates the global minimum of a high-dimensional function. With non-tuned parameter values, the model performs reasonably in terms of its representation of zonal-mean circulation, planetary waves and storm tracks. The simulated annealing optimization improves in particular the model's representation of the Northern Hemisphere jet stream and storm tracks as well as the Hadley circulation. The regions of high azonal wind velocities (planetary waves) are accurately captured for all validation experiments. The zonal-mean zonal wind and the integrated lower troposphere mass flux show good results in particular in the Northern Hemisphere. In the Southern Hemisphere, the model tends to produce too-weak zonal-mean zonal winds and a too-narrow Hadley circulation. We discuss possible reasons for these model biases as well as planned future model improvements and applications.
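
    A minimal Python sketch of simulated annealing as used for the parameter tuning, hedged accordingly: cost() is a toy stand-in for the model-vs-ERA-Interim misfit, and the cooling schedule is illustrative.

        import numpy as np

        rng = np.random.default_rng(7)

        def cost(p):   # toy misfit with a known global minimum
            return np.sum((p - np.array([0.3, -1.2, 2.0])) ** 2)

        p = rng.uniform(-3, 3, size=3)
        best, best_cost = p.copy(), cost(p)
        T = 1.0
        for step in range(20000):
            cand = p + 0.1 * rng.standard_normal(3)
            dE = cost(cand) - cost(p)
            if dE < 0 or rng.uniform() < np.exp(-dE / T):   # Metropolis acceptance
                p = cand
                if cost(p) < best_cost:
                    best, best_cost = p.copy(), cost(p)
            T *= 0.9995                                     # geometric cooling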

  14. An Implementation of Wireless Body Area Networks for Improving Priority Data Transmission Delay.

    PubMed

    Gündoğdu, Köksal; Çalhan, Ali

    2016-03-01

    The rapid growth of wireless sensor networks has enabled the human health monitoring of patients using body sensor nodes that gather and evaluate human body parameters and movements. This study describes both the simulation model and the implementation of a new traffic-sensitive wireless body area network using a non-preemptive priority queue discipline. A wireless body area network implementation employing TDMA is designed with three different priorities of data traffic. In addition, a coordinator node with a non-preemptive priority queue is implemented in this study. We have also developed, modeled and simulated example network scenarios by using the Riverbed Modeler simulation software with the purpose of verifying the implementation results. The simulation results obtained under various network load conditions are consistent with the implementation results.

  15. Reparameterization of All-Atom Dipalmitoylphosphatidylcholine Lipid Parameters Enables Simulation of Fluid Bilayers at Zero Tension

    PubMed Central

    Sonne, Jacob; Jensen, Morten Ø.; Hansen, Flemming Y.; Hemmingsen, Lars; Peters, Günther H.

    2007-01-01

    Molecular dynamics simulations of dipalmitoylphosphatidylcholine (DPPC) lipid bilayers using the CHARMM27 force field in the tensionless isothermal-isobaric (NPT) ensemble give highly ordered, gel-like bilayers with an area per lipid of ∼48 Å². To obtain fluid (Lα) phase properties of DPPC bilayers represented by the CHARMM energy function in this ensemble, we reparameterized the atomic partial charges in the lipid headgroup and upper parts of the acyl chains. The new charges were determined from the electron structure using both the Mulliken method and the restricted electrostatic potential fitting method. We tested the derived charges in molecular dynamics simulations of a fully hydrated DPPC bilayer. Only the simulation with the new restricted electrostatic potential charges shows significant improvements compared with simulations using the original CHARMM27 force field, resulting in an area per lipid of 60.4 ± 0.1 Å². Compared to the 48 Å², the new value of 60.4 Å² is in fair agreement with the experimental value of 64 Å². In addition, the simulated order parameter profile and electron density profile are in satisfactory agreement with experimental data. Thus, the biologically more interesting fluid phase of DPPC bilayers can now be simulated in all-atom simulations in the NPT ensemble by employing our modified CHARMM27 force field. PMID:17400696

  16. Cancer heterogeneity and multilayer spatial evolutionary games.

    PubMed

    Świerniak, Andrzej; Krześlak, Michał

    2016-10-13

    Evolutionary game theory (EGT) has been widely used to simulate tumour processes. In almost all studies on EGT models, analysis is limited to two or three phenotypes. Our model contains four main phenotypes. Moreover, in a standard approach only heterogeneity of populations is studied, while cancer cells remain homogeneous. A multilayer approach proposed in this paper enables the study of heterogeneity of single cells. In the extended model presented in this paper we consider four strategies (phenotypes) that can arise by mutations. We propose multilayer spatial evolutionary games (MSEG) played on multiple 2D lattices corresponding to the possible phenotypes. This enables simulation and investigation of heterogeneity on the player level in addition to the population level. Moreover, it allows modelling of interactions between arbitrarily many phenotypes resulting from the mixture of basic traits. Different equilibrium points and scenarios (monomorphic and polymorphic populations) have been achieved depending on model parameters and the type of game played. However, there is a possibility of a stable quadromorphic population in MSEG games for the same set of parameters as for the mean-field game. The model assumes the existence of four possible phenotypes (strategies) in the population of cells that make up the tumour. Various parameters and relations between cells lead to complex analysis of this model and give diverse results. One of them is the possibility of stable coexistence of different tumour cells within the population, representing an almost arbitrary mixture of the basic phenotypes. This article was reviewed by Tomasz Lipniacki, Urszula Ledzewicz and Jacek Banasiak.

  17. Model inversion via multi-fidelity Bayesian optimization: a new paradigm for parameter estimation in haemodynamics, and beyond.

    PubMed

    Perdikaris, Paris; Karniadakis, George Em

    2016-05-01

    We present a computational framework for model inversion based on multi-fidelity information fusion and Bayesian optimization. The proposed methodology targets the accurate construction of response surfaces in parameter space, and the efficient pursuit to identify global optima while keeping the number of expensive function evaluations at a minimum. We train families of correlated surrogates on available data using Gaussian processes and auto-regressive stochastic schemes, and exploit the resulting predictive posterior distributions within a Bayesian optimization setting. This enables a smart adaptive sampling procedure that uses the predictive posterior variance to balance the exploration versus exploitation trade-off, and is a key enabler for practical computations under limited budgets. The effectiveness of the proposed framework is tested on three parameter estimation problems. The first two involve the calibration of outflow boundary conditions of blood flow simulations in arterial bifurcations using multi-fidelity realizations of one- and three-dimensional models, whereas the last one aims to identify the forcing term that generated a particular solution to an elliptic partial differential equation. © 2016 The Author(s).
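
    A short Python sketch of variance-aware Bayesian optimization in the spirit described above, using a lower-confidence-bound rule; the one-dimensional objective is a toy stand-in for an expensive haemodynamic misfit, and no multi-fidelity fusion is attempted here.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor

        rng = np.random.default_rng(8)

        def expensive(x):                    # stand-in for an expensive model run
            return np.sin(5 * x) + 0.5 * x ** 2

        X = rng.uniform(-2, 2, size=(4, 1))  # small initial design
        y = expensive(X).ravel()
        grid = np.linspace(-2, 2, 400).reshape(-1, 1)

        for _ in range(15):
            gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
            mu, sd = gp.predict(grid, return_std=True)
            x_next = grid[np.argmin(mu - 2.0 * sd)]   # explore/exploit via LCB
            X = np.vstack([X, x_next])
            y = np.append(y, expensive(x_next[0]))

        print("best x:", X[np.argmin(y)], "best value:", y.min())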

  18. Model inversion via multi-fidelity Bayesian optimization: a new paradigm for parameter estimation in haemodynamics, and beyond

    PubMed Central

    Perdikaris, Paris; Karniadakis, George Em

    2016-01-01

    We present a computational framework for model inversion based on multi-fidelity information fusion and Bayesian optimization. The proposed methodology targets the accurate construction of response surfaces in parameter space, and the efficient pursuit to identify global optima while keeping the number of expensive function evaluations at a minimum. We train families of correlated surrogates on available data using Gaussian processes and auto-regressive stochastic schemes, and exploit the resulting predictive posterior distributions within a Bayesian optimization setting. This enables a smart adaptive sampling procedure that uses the predictive posterior variance to balance the exploration versus exploitation trade-off, and is a key enabler for practical computations under limited budgets. The effectiveness of the proposed framework is tested on three parameter estimation problems. The first two involve the calibration of outflow boundary conditions of blood flow simulations in arterial bifurcations using multi-fidelity realizations of one- and three-dimensional models, whereas the last one aims to identify the forcing term that generated a particular solution to an elliptic partial differential equation. PMID:27194481

  19. SU-F-J-178: A Computer Simulation Model Observer for Task-Based Image Quality Assessment in Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dolly, S; Mutic, S; Anastasio, M

    Purpose: Traditionally, image quality in radiation therapy is assessed subjectively or by utilizing physically-based metrics. Some model observers exist for task-based medical image quality assessment, but almost exclusively for diagnostic imaging tasks. As opposed to disease diagnosis, the task for image observers in radiation therapy is to utilize the available images to design and deliver a radiation dose which maximizes patient disease control while minimizing normal tissue damage. The purpose of this study was to design and implement a new computer simulation model observer to enable task-based image quality assessment in radiation therapy. Methods: A modular computer simulation framework was developed to resemble the radiotherapy observer by simulating an end-to-end radiation therapy treatment. Given images and the ground-truth organ boundaries from a numerical phantom as inputs, the framework simulates an external beam radiation therapy treatment and quantifies patient treatment outcomes using the previously defined therapeutic operating characteristic (TOC) curve. As a preliminary demonstration, TOC curves were calculated for various CT acquisition and reconstruction parameters, with the goal of assessing and optimizing simulation CT image quality for radiation therapy. Sources of randomness and bias within the system were analyzed. Results: The relationship between CT imaging dose and patient treatment outcome was objectively quantified in terms of a singular value, the area under the TOC (AUTOC) curve. The AUTOC decreases more rapidly for low-dose imaging protocols. AUTOC variation introduced by the dose optimization algorithm was approximately 0.02%, at the 95% confidence interval. Conclusion: A model observer has been developed and implemented to assess image quality based on radiation therapy treatment efficacy. It enables objective determination of appropriate imaging parameter values (e.g. imaging dose). Framework flexibility allows for incorporation of additional modules to include any aspect of the treatment process, and therefore has great potential for both assessment and optimization within radiation therapy.

  20. Uncertainty quantification and risk analyses of CO2 leakage in heterogeneous geological formations

    NASA Astrophysics Data System (ADS)

    Hou, Z.; Murray, C. J.; Rockhold, M. L.

    2012-12-01

    A stochastic sensitivity analysis framework is adopted to evaluate the impact of spatial heterogeneity in permeability on CO2 leakage risk. The leakage is defined as the total mass of CO2 moving into the overburden through the caprock-overburden interface, in both gaseous and liquid (dissolved) phases. The entropy-based framework has the ability to quantify the uncertainty associated with the input parameters in the form of prior pdfs (probability density functions). Effective sampling of the prior pdfs enables us to fully explore the parameter space and systematically evaluate the individual and combined effects of the parameters of interest on CO2 leakage risk. The parameters that are considered in the study include: mean, variance, and horizontal-to-vertical spatial anisotropy ratio for caprock permeability, and those same parameters for reservoir permeability. Given the sampled spatial variogram parameters, multiple realizations of permeability fields were generated using GSLIB subroutines. For each permeability field, a numerical simulator, STOMP (in the water-salt-CO2-energy operational mode), is used to simulate the CO2 migration within the reservoir and caprock up to 50 years after injection. Due to the intensive computational demand, we run both the scalable simulator eSTOMP and the serial STOMP code on various supercomputers. We then perform statistical analyses and summarize the relationships between the parameters of interest (mean/variance/anisotropy ratio of caprock and reservoir permeability) and the CO2 leakage ratio. We also present the effects of those parameters on CO2 plume radius and reservoir injectivity. The statistical analysis provides a reduced-order model that can be used to estimate the impact of heterogeneity on caprock leakage.

  1. Electrical circuit modeling and analysis of microwave acoustic interaction with biological tissues.

    PubMed

    Gao, Fei; Zheng, Qian; Zheng, Yuanjin

    2014-05-01

    Numerical studies of microwave imaging and microwave-induced thermoacoustic imaging utilize finite difference time domain (FDTD) analysis to simulate microwave and acoustic interaction with biological tissues. This approach is time consuming due to complex grid segmentation and numerous calculations, offers no analytical solution or direct physical explanation, and is incompatible with hardware development, which requires a circuit simulator such as SPICE. In this paper, instead of conventional FDTD numerical simulation, an equivalent electrical circuit model is proposed to model the microwave acoustic interaction with biological tissues for fast simulation and quantitative analysis in both one and two dimensions (2D). The equivalent circuit of an ideal point-like tissue for microwave-acoustic interaction is proposed, including a transmission line, a voltage-controlled current source, an envelope detector, and a resistor-inductor-capacitor (RLC) network, to model the microwave scattering, thermal expansion, and acoustic generation. Based on this, a two-port network of the point-like tissue is built and characterized using pseudo S-parameters and transducer gain. A two-dimensional circuit network including an acoustic scatterer and an acoustic channel is also constructed to model the 2D spatial information and the acoustic scattering effect in heterogeneous media. FDTD simulation, circuit simulation, and experimental measurement are all performed to compare the results in terms of time domain, frequency domain, and pseudo S-parameter characterization. 2D circuit network simulation is also performed under different scenarios, including different tumor sizes and the effect of an acoustic scatterer. The proposed circuit model of microwave acoustic interaction with biological tissue agrees well with FDTD-simulated and experimentally measured results. The pseudo S-parameters and characteristic gain can globally evaluate the performance of tumor detection. The 2D circuit network enables the potential to combine quasi-numerical simulation and circuit simulation in a uniform simulator for codesign and simulation of a microwave acoustic imaging system, bridging bioeffect study and hardware development seamlessly.

  2. Development of space simulation / net-laboratory system

    NASA Astrophysics Data System (ADS)

    Usui, H.; Matsumoto, H.; Ogino, T.; Fujimoto, M.; Omura, Y.; Okada, M.; Ueda, H. O.; Murata, T.; Kamide, Y.; Shinagawa, H.; Watanabe, S.; Machida, S.; Hada, T.

    A research project for the development of a space simulation / net-laboratory system was approved by the Japan Science and Technology Corporation (JST) in the category of Research and Development for Applying Advanced Computational Science and Technology (ACT-JST) in 2000. This research project, which continues for three years, is a collaboration with an astrophysical simulation group as well as other space simulation groups which use MHD and hybrid models. In this project, we develop a prototype of a unique simulation system which enables us to perform simulation runs by providing or selecting plasma parameters through a Web-based interface on the internet. We are also developing an on-line database system for space simulation from which we will be able to search and extract various information such as simulation methods and programs, manuals, and typical simulation results in graphic or ascii format. This unique system will help simulation beginners start simulation studies without much difficulty or effort, and contribute to the promotion of simulation studies in the STP field. In this presentation, we will report the overview and the current status of the project.

  3. A Comparison of Three-Dimensional Simulations of Traveling-Wave Tube Cold-Test Characteristics Using CST MICROWAVE STUDIO and MAFIA

    NASA Technical Reports Server (NTRS)

    Chevalier, C. T.; Herrmann, K. A.; Kory, C. L.; Wilson, J. D.; Cross, A. W.; Williams, W. D. (Technical Monitor)

    2001-01-01

    Previously, it was shown that MAFIA (solutions of Maxwell's equations by the Finite Integration Algorithm), a three-dimensional simulation code, can be used to produce accurate cold-test characteristics including frequency-phase dispersion, interaction impedance, and attenuation for traveling-wave tube (TWT) slow-wave structures. In an effort to improve user-friendliness and simulation time, a model was developed to compute the cold-test parameters using the electromagnetic field simulation software package CST MICROWAVE STUDIO (MWS). Cold-test parameters were calculated for several slow-wave circuits including a ferruled coupled-cavity, a folded waveguide, and a novel finned-ladder circuit using both MWS and MAFIA. Comparisons indicate that MWS provides more accurate cold-test data with significantly reduced simulation times. Both MAFIA and MWS are based on the finite integration (FI) method; however, MWS has several advantages over MAFIA. First, it has a Windows based interface for PC operation, making it very user-friendly, whereas MAFIA is UNIX based. MWS uses a new Perfect Boundary Approximation (PBA), which increases the accuracy of the simulations by avoiding stair step approximations associated with MAFIA's representation of structures. Finally, MWS includes a Visual Basic for Applications (VBA) compatible macro language that enables the simulation process to be automated and allows for the optimization of user-defined goal functions, such as interaction impedance.

  4. Tutorial: Parallel Computing of Simulation Models for Risk Analysis.

    PubMed

    Reilly, Allison C; Staid, Andrea; Gao, Michael; Guikema, Seth D

    2016-10-01

    Simulation models are widely used in risk analysis to study the effects of uncertainties on outcomes of interest in complex problems. Often, these models are computationally complex and time consuming to run. This latter point may be at odds with time-sensitive evaluations or may limit the number of parameters that are considered. In this article, we give an introductory tutorial focused on parallelizing simulation code to better leverage modern computing hardware, enabling risk analysts to better utilize simulation-based methods for quantifying uncertainty in practice. This article is aimed primarily at risk analysts who use simulation methods but do not yet utilize parallelization to decrease the computational burden of these models. The discussion is focused on conceptual aspects of embarrassingly parallel computer code and software considerations. Two complementary examples are shown using the languages MATLAB and R. A brief discussion of hardware considerations is located in the Appendix. © 2016 Society for Risk Analysis.
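
    The article's worked examples are in MATLAB and R; as a language-neutral illustration of the same embarrassingly parallel pattern, here is a hedged Python sketch in which one_replication is an invented toy risk simulation, not the article's code.

        import numpy as np
        from multiprocessing import Pool

        def one_replication(seed):
            # One independent replication of a toy risk simulation.
            rng = np.random.default_rng(seed)
            demand = rng.lognormal(mean=2.0, sigma=0.5, size=365)
            capacity = 12.0
            return float(np.mean(demand > capacity))   # fraction of shortfall days

        if __name__ == "__main__":
            with Pool() as pool:                       # one worker per CPU core
                results = pool.map(one_replication, range(1000))
            print("mean shortfall probability:", np.mean(results))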

  5. Efficient micromagnetic modelling of spin-transfer torque and spin-orbit torque

    NASA Astrophysics Data System (ADS)

    Abert, Claas; Bruckner, Florian; Vogler, Christoph; Suess, Dieter

    2018-05-01

    While the spin-diffusion model is considered one of the most complete and accurate tools for the description of spin transport and spin torque, its solution in the context of dynamical micromagnetic simulations is numerically expensive. We propose a procedure to retrieve the free parameters of a simple macro-spin-like spin-torque model through the spin-diffusion model. In the case of spin-transfer torque, the simplified model agrees with the model of Slonczewski. A similar model can be established for the description of spin-orbit torque. In both cases the spin-diffusion model enables the retrieval of free model parameters from the geometry and the material parameters of the system. Since these parameters usually have to be determined phenomenologically through experiments, the proposed method combines the strength of the diffusion model in resolving material parameters and geometry with the high performance of simple torque models.
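
    A macro-spin Python sketch of a simplified torque model of this kind, with the caveat that all coefficients below are illustrative: in the paper's approach the torque prefactor would be retrieved from the spin-diffusion model rather than set by hand.

        import numpy as np

        gamma, alpha, a_j = 1.76e11, 0.02, 5e9   # gyromagnetic ratio [1/(T s)], damping, STT rate [1/s]
        H = np.array([0.0, 0.0, 0.1])            # effective field [T]
        p = np.array([1.0, 0.0, 0.0])            # spin-polarization direction
        m = np.array([0.0, 0.1, 0.99])
        m /= np.linalg.norm(m)

        dt = 1e-13
        for _ in range(100000):
            prec = -gamma * np.cross(m, H)                       # precession
            damp = -gamma * alpha * np.cross(m, np.cross(m, H))  # damping
            stt = a_j * np.cross(m, np.cross(m, p))              # Slonczewski-type torque
            m = m + dt * (prec + damp + stt)                     # explicit Euler step
            m /= np.linalg.norm(m)                               # keep |m| = 1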

  6. Life sciences Spacelab Mission Development test 3 (SMD 3) data management report

    NASA Technical Reports Server (NTRS)

    Moseley, E. C.

    1977-01-01

    The development of a permanent data system for SMD tests was studied; this system would simulate all elements of the shuttle onboard, telemetry, and ground data systems involved in Spacelab operations. The onboard data system (ODS) and the ground data system (GDS) were utilized. The air-to-ground link was simulated by a hardwired computer-to-computer interface. A patch board system was used on board to select experiment inputs, and the downlink configuration from the ODS was changed by a crew keyboard entry to support each experiment. The ODS provided a CRT display of experiment parameters to enable the crew to monitor experiment performance. An onboard analog system, with recording capability, was installed to handle high-rate data and to provide a backup to the digital system. The GDS accomplished engineering unit conversion and limit sensing, and provided real-time parameter display on CRTs in the science monitoring area and the test control area.

  7. Accelerated Enveloping Distribution Sampling: Enabling Sampling of Multiple End States while Preserving Local Energy Minima.

    PubMed

    Perthold, Jan Walther; Oostenbrink, Chris

    2018-05-17

    Enveloping distribution sampling (EDS) is an efficient approach to calculate multiple free-energy differences from a single molecular dynamics (MD) simulation. However, the construction of an appropriate reference-state Hamiltonian that samples all states efficiently is not straightforward. We propose a novel approach for the construction of the EDS reference-state Hamiltonian, related to a previously described procedure to smoothen energy landscapes. In contrast to previously suggested EDS approaches, our reference-state Hamiltonian preserves local energy minima of the combined end-states. Moreover, we propose an intuitive, robust and efficient parameter optimization scheme to tune EDS Hamiltonian parameters. We demonstrate the proposed method with established and novel test systems and conclude that our approach allows for the automated calculation of multiple free-energy differences from a single simulation. Accelerated EDS promises to be a robust and user-friendly method to compute free-energy differences based on solid statistical mechanics.
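
    A tiny Python sketch of an EDS-style reference-state energy, the smooth envelope over end-state Hamiltonians; the smoothness parameter, offsets and energies are illustrative numbers, not a force field, and the paper's minima-preserving construction is not reproduced here.

        import numpy as np

        kT = 2.494   # kJ/mol at roughly 300 K
        s = 0.3      # smoothness parameter (optimized in practice)

        def eds_reference(energies, offsets):
            # Envelope H_ref = -(kT/s) * ln sum_i exp(-s*(H_i - E_i^off)/kT),
            # evaluated with a log-sum-exp for numerical stability.
            e = -s * (np.asarray(energies) - np.asarray(offsets)) / kT
            emax = e.max()
            return -(kT / s) * (emax + np.log(np.sum(np.exp(e - emax))))

        print(eds_reference([10.0, 35.0], [0.0, 20.0]))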

  8. Development of cost-effective surfactant flooding technology. Quarterly report, January 1, 1994--March 31, 1994

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pope, G.A.; Sepehrnoori, K.

    1994-09-01

    The objective of this research is to develop cost-effective surfactant flooding technology by using surfactant simulation studies to evaluate and optimize alternative design strategies taking into account reservoir characteristics, process chemistry, and process design options such as horizontal wells. Task 1 is the development of an improved numerical method for our simulator that will enable us to solve a wider class of these difficult simulation problems accurately and affordably. Task 2 is the application of this simulator to the optimization of surfactant flooding to reduce its risk and cost. The goal of Task 2 is to understand and generalize the impact of both process and reservoir characteristics on the optimal design of surfactant flooding. We have studied the effect of process parameters such as salinity gradient, surfactant adsorption, surfactant concentration, surfactant slug size, pH, polymer concentration and well constraints on surfactant floods. In this report, we show three dimensional field scale simulation results to illustrate the impact of one important design parameter, the salinity gradient. Although the use of a salinity gradient to improve the efficiency and robustness of surfactant flooding has been studied and applied for many years, this is the first time that we have evaluated it using stochastic simulations rather than simulations using the traditional layered reservoir description. The surfactant flooding simulations were performed using The University of Texas chemical flooding simulator called UTCHEM.

  9. A software tool to assess uncertainty in transient-storage model parameters using Monte Carlo simulations

    USGS Publications Warehouse

    Ward, Adam S.; Kelleher, Christa A.; Mason, Seth J. K.; Wagener, Thorsten; McIntyre, Neil; McGlynn, Brian L.; Runkel, Robert L.; Payn, Robert A.

    2017-01-01

    Researchers and practitioners alike often need to understand and characterize how water and solutes move through a stream in terms of the relative importance of in-stream and near-stream storage and transport processes. In-channel and subsurface storage processes are highly variable in space and time and difficult to measure. Storage estimates are commonly obtained using transient-storage models (TSMs) of the experimentally obtained solute-tracer test data. The TSM equations represent key transport and storage processes with a suite of numerical parameters. Parameter values are estimated via inverse modeling, in which parameter values are iteratively changed until model simulations closely match observed solute-tracer data. Several investigators have shown that TSM parameter estimates can be highly uncertain. When this is the case, parameter values cannot be used reliably to interpret stream-reach functioning. However, authors of most TSM studies do not evaluate or report parameter certainty. Here, we present a software tool linked to the One-dimensional Transport with Inflow and Storage (OTIS) model that enables researchers to conduct uncertainty analyses via Monte-Carlo parameter sampling and to visualize uncertainty and sensitivity results. We demonstrate application of our tool to 2 case studies and compare our results to output obtained from more traditional implementation of the OTIS model. We conclude by suggesting best practices for transient-storage modeling and recommend that future applications of TSMs include assessments of parameter certainty to support comparisons and more reliable interpretations of transport processes.
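
    A condensed Python sketch of the tool's core Monte Carlo loop, hedged heavily: simulate() is an invented toy breakthrough-curve model standing in for an OTIS run, and the parameter bounds and behavioural threshold are illustrative.

        import numpy as np

        rng = np.random.default_rng(9)
        t_obs = np.linspace(0.1, 10, 50)

        def simulate(A, As, alpha):     # toy stand-in for a transient-storage model
            return np.exp(-t_obs / A) + (As / A) * (1 - np.exp(-alpha * t_obs))

        observed = simulate(2.0, 0.5, 0.3) + 0.01 * rng.standard_normal(t_obs.size)

        samples, rmse = [], []
        for _ in range(10000):          # Monte Carlo sampling of parameter space
            theta = rng.uniform([0.5, 0.05, 0.01], [5.0, 2.0, 1.0])
            rmse.append(np.sqrt(np.mean((simulate(*theta) - observed) ** 2)))
            samples.append(theta)

        samples, rmse = np.array(samples), np.array(rmse)
        behavioural = samples[rmse < np.quantile(rmse, 0.01)]    # best 1% of runs
        print(behavioural.min(axis=0), behavioural.max(axis=0))  # parameter certainty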

  10. Impact of the calibration period on the conceptual rainfall-runoff model parameter estimates

    NASA Astrophysics Data System (ADS)

    Todorovic, Andrijana; Plavsic, Jasna

    2015-04-01

    A conceptual rainfall-runoff model is defined by its structure and parameters, which are commonly inferred through model calibration. Parameter estimates depend on objective function(s), optimisation method, and calibration period. Model calibration over different periods may result in dissimilar parameter estimates, while model efficiency decreases outside the calibration period. The problem of model (parameter) transferability, which conditions the reliability of hydrologic simulations, has been investigated for decades. In this paper, the dependence of the parameter estimates and model performance on the calibration period is analysed. The main question that is addressed is: are there any changes in optimised parameters and model efficiency that can be linked to the changes in hydrologic or meteorological variables (flow, precipitation and temperature)? The conceptual, semi-distributed HBV-light model is calibrated over five-year periods shifted by a year (sliding time windows). The length of the calibration periods is selected to enable identification of all parameters. One water year of model warm-up precedes every simulation, which starts with the beginning of a water year. The model is calibrated using the built-in GAP optimisation algorithm. The objective function used for calibration is composed of the Nash-Sutcliffe coefficient for flows and logarithms of flows, and volumetric error, all of which participate in the composite objective function with approximately equal weights. The same prior parameter ranges are used in all simulations. The model is calibrated against flows observed at the Slovac stream gauge on the Kolubara River in Serbia (records from 1954 to 2013). There are no trends in precipitation or in flows; however, there is a statistically significant increasing trend in temperatures at this catchment. Parameter variability across the calibration periods is quantified in terms of standard deviations of normalised parameters, enabling detection of the most variable parameters. Correlation coefficients among optimised model parameters and total precipitation P, mean temperature T and mean flow Q are calculated to give an insight into parameter dependence on the hydrometeorological drivers. The results reveal high sensitivity of almost all model parameters to the calibration period. The highest variability is displayed by the refreezing coefficient, water holding capacity, and temperature gradient. The only statistically significant (decreasing) trend is detected in the evapotranspiration reduction threshold. A statistically significant correlation is detected between the precipitation gradient and precipitation depth, and between the time-area histogram base and flows. All other correlations are not statistically significant, implying that changes in optimised parameters cannot generally be linked to the changes in P, T or Q. As for the model performance, the model reproduces the observed runoff satisfactorily, though the runoff is slightly overestimated in wet periods. The Nash-Sutcliffe efficiency coefficient (NSE) ranges from 0.44 to 0.79. Higher NSE values are obtained over wetter periods, which is supported by a statistically significant correlation between NSE and flows. Overall, no systematic variations in parameters or in model performance are detected. Parameter variability may therefore rather be attributed to errors in data or inadequacies in the model structure. Further research is required to examine the impact of the calibration strategy or model structure on the variability of optimised parameters in time.
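
    A schematic Python sketch of the sliding-window design, with loud caveats: calibrate() is a placeholder for HBV-light plus the GAP optimiser, and the synthetic flow data exist only to make the loop runnable.

        import numpy as np

        rng = np.random.default_rng(10)
        years = range(1954, 2013)
        flow = {y: rng.gamma(2.0, 5.0, size=365) for y in years}   # synthetic flows

        def nse(sim, obs):   # Nash-Sutcliffe efficiency
            return 1 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

        def calibrate(obs):
            # Placeholder "calibration": one fitted parameter and its NSE score.
            sim = obs.mean() + 0.5 * (obs - obs.mean())
            return obs.mean(), nse(sim, obs)

        results = []
        for start in range(1954, 2009):   # five-year windows shifted by one year
            window = np.concatenate([flow[y] for y in range(start, start + 5)])
            param, score = calibrate(window)
            results.append((start, param, score))  # track parameter variability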

  11. WFIRST: Data/Instrument Simulation Support at IPAC

    NASA Astrophysics Data System (ADS)

    Laine, Seppo; Akeson, Rachel; Armus, Lee; Bennett, Lee; Colbert, James; Helou, George; Kirkpatrick, J. Davy; Meshkat, Tiffany; Paladini, Roberta; Ramirez, Solange; Wang, Yun; Xie, Joan; Yan, Lin

    2018-01-01

    As part of WFIRST Science Center preparations, the IPAC Science Operations Center (ISOC) maintains a repository of 1) WFIRST data and instrument simulations, 2) tools to facilitate scientific performance and feasibility studies using WFIRST, and 3) parameters summarizing the current design and predicted performance of the WFIRST telescope and instruments. The simulation repository provides access for the science community to simulation code, tools, and resulting analyses. Examples of simulation code with ISOC-built web-based interfaces include EXOSIMS (for estimating exoplanet yields in CGI surveys) and the Galaxy Survey Exposure Time Calculator. In the future, the repository will provide an interface for users to run custom simulations of a wide range of coronagraph instrument (CGI) observations and sophisticated tools for designing microlensing experiments. We encourage those who are generating simulations or writing tools for exoplanet observations with WFIRST to contact the ISOC team so we can work with you to bring these to the attention of the broader astronomical community as we prepare for the exciting science that will be enabled by WFIRST.

  12. Reinforcement learning for routing in cognitive radio ad hoc networks.

    PubMed

    Al-Rawi, Hasan A A; Yau, Kok-Lim Alvin; Mohamad, Hafizal; Ramli, Nordin; Hashim, Wahidah

    2014-01-01

    Cognitive radio (CR) enables unlicensed users (or secondary users, SUs) to sense for and exploit underutilized licensed spectrum owned by the licensed users (or primary users, PUs). Reinforcement learning (RL) is an artificial intelligence approach that enables a node to observe, learn, and make appropriate decisions on action selection in order to maximize network performance. Routing enables a source node to search for a least-cost route to its destination node. While there have been increasing efforts to enhance the traditional RL approach for routing in wireless networks, this research area remains largely unexplored in the domain of routing in CR networks. This paper applies RL in routing and investigates the effects of various features of RL (i.e., reward function, exploitation, and exploration, as well as learning rate) through simulation. New approaches and recommendations are proposed to enhance the features in order to improve the network performance brought about by RL to routing. Simulation results show that the RL parameters of the reward function, exploitation, and exploration, as well as learning rate, must be well regulated, and the new approaches proposed in this paper improve SUs' network performance without significantly jeopardizing PUs' network performance, specifically SUs' interference to PUs.
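    For readers unfamiliar with the RL machinery being tuned here (reward, learning rate, exploration versus exploitation), a generic epsilon-greedy Q-learning sketch for next-hop selection follows; the reward design and update rule in the paper itself may differ.

    ```python
    import random
    from collections import defaultdict

    q_table = defaultdict(float)  # Q-value per (node, next_hop) pair

    def choose_next_hop(node, neighbors, epsilon=0.1):
        """Epsilon-greedy selection: explore with probability epsilon,
        otherwise exploit the best-known next hop."""
        if random.random() < epsilon:
            return random.choice(neighbors)
        return max(neighbors, key=lambda n: q_table[(node, n)])

    def update_q(node, next_hop, reward, best_future, alpha=0.5, gamma=0.9):
        """Standard Q-learning update; alpha is the learning rate that the
        paper argues must be well regulated."""
        key = (node, next_hop)
        q_table[key] += alpha * (reward + gamma * best_future - q_table[key])
    ```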

  13. Reinforcement Learning for Routing in Cognitive Radio Ad Hoc Networks

    PubMed Central

    Al-Rawi, Hasan A. A.; Mohamad, Hafizal; Hashim, Wahidah

    2014-01-01

    Cognitive radio (CR) enables unlicensed users (or secondary users, SUs) to sense for and exploit underutilized licensed spectrum owned by the licensed users (or primary users, PUs). Reinforcement learning (RL) is an artificial intelligence approach that enables a node to observe, learn, and make appropriate decisions on action selection in order to maximize network performance. Routing enables a source node to search for a least-cost route to its destination node. While there have been increasing efforts to enhance the traditional RL approach for routing in wireless networks, this research area remains largely unexplored in the domain of routing in CR networks. This paper applies RL in routing and investigates the effects of various features of RL (i.e., reward function, exploitation, and exploration, as well as learning rate) through simulation. New approaches and recommendations are proposed to enhance the features in order to improve the network performance brought about by RL to routing. Simulation results show that the RL parameters of the reward function, exploitation, and exploration, as well as learning rate, must be well regulated, and the new approaches proposed in this paper improves SUs' network performance without significantly jeopardizing PUs' network performance, specifically SUs' interference to PUs. PMID:25140350

  14. Synaptic Plasticity Enables Adaptive Self-Tuning Critical Networks

    PubMed Central

    Stepp, Nigel; Plenz, Dietmar; Srinivasa, Narayan

    2015-01-01

    During rest, the mammalian cortex displays spontaneous neural activity. Spiking of single neurons during rest has been described as irregular and asynchronous. In contrast, recent in vivo and in vitro population measures of spontaneous activity, using the LFP, EEG, MEG or fMRI suggest that the default state of the cortex is critical, manifested by spontaneous, scale-invariant, cascades of activity known as neuronal avalanches. Criticality keeps a network poised for optimal information processing, but this view seems to be difficult to reconcile with apparently irregular single neuron spiking. Here, we simulate a 10,000 neuron, deterministic, plastic network of spiking neurons. We show that a combination of short- and long-term synaptic plasticity enables these networks to exhibit criticality in the face of intrinsic, i.e. self-sustained, asynchronous spiking. Brief external perturbations lead to adaptive, long-term modification of intrinsic network connectivity through long-term excitatory plasticity, whereas long-term inhibitory plasticity enables rapid self-tuning of the network back to a critical state. The critical state is characterized by a branching parameter oscillating around unity, a critical exponent close to -3/2 and a long tail distribution of a self-similarity parameter between 0.5 and 1. PMID:25590427
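    A common way to estimate the branching parameter mentioned above is the average ratio of descendant to ancestor events in consecutive time bins; a minimal sketch under that assumption (the paper's exact estimator may differ):

    ```python
    import numpy as np

    def branching_parameter(activity):
        """Mean ratio of events in bin t+1 to events in bin t, taken over
        bins with nonzero activity; values near 1 are consistent with a
        critical state."""
        a = np.asarray(activity, float)
        prev, nxt = a[:-1], a[1:]
        mask = prev > 0
        return float(np.mean(nxt[mask] / prev[mask]))
    ```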

  15. Laser Ranging for Effective and Accurate Tracking of Space Debris in Low Earth Orbits

    NASA Astrophysics Data System (ADS)

    Blanchet, Guillaume; Haag, Herve; Hennegrave, Laurent; Assemat, Francois; Vial, Sophie; Samain, Etienne

    2013-08-01

    The paper presents the results of preliminary design options for an operational laser ranging system adapted to measuring the distance of space debris. A thorough analysis of the operational parameters is provided, with identification of performance drivers and assessment of enabling design options. Results from performance simulation demonstrate how the range measurement improves orbit determination when combined with astrometry. In addition, experimental results on rocket-stage class debris in LEO were obtained by Astrium at the beginning of 2012, in collaboration with the Observatoire de la Côte d'Azur (OCA), by operating an experimental laser ranging system supported by the MéO (Métrologie Optique) telescope.

  16. Laboratory simulations of astrophysical jets: results from experiments at the PF-3, PF-1000U, and KPF-4 facilities

    NASA Astrophysics Data System (ADS)

    Krauz, V. I.; Myalton, V. V.; Vinogradov, V. P.; Velikhov, E. P.; Ananyev, S. S.; Dan'ko, S. A.; Kalinin, Yu G.; Kharrasov, A. M.; Vinogradova, Yu V.; Mitrofanov, K. N.; Paduch, M.; Miklaszewski, R.; Zielinska, E.; Skladnik-Sadowska, E.; Sadowski, M. J.; Kwiatkowski, R.; Tomaszewski, K.; Vojtenko, D. A.

    2017-10-01

    Results are presented from laboratory simulations of plasma jets emitted by young stellar objects, carried out at plasma focus facilities. The experiments were performed at three facilities: PF-3, PF-1000U, and KPF-4. Operation modes were realized that enable the formation of narrow plasma jets capable of propagating over long distances. The main parameters of the plasma jets and of the background plasma were determined. In order to control the ratio of the jet density to that of the background plasma, special operation modes with pulsed injection of the working gas were used.

  17. XaNSoNS: GPU-accelerated simulator of diffraction patterns of nanoparticles

    NASA Astrophysics Data System (ADS)

    Neverov, V. S.

    XaNSoNS is an open source software with GPU support, which simulates X-ray and neutron 1D (or 2D) diffraction patterns and pair-distribution functions (PDF) for amorphous or crystalline nanoparticles (up to ∼10^7 atoms) of heterogeneous structural content. Among the multiple parameters of the structure the user may specify atomic displacements, site occupancies, molecular displacements and molecular rotations. The software uses general equations nonspecific to crystalline structures to calculate the scattering intensity. It supports four major standards of parallel computing: MPI, OpenMP, Nvidia CUDA and OpenCL, enabling it to run on various architectures, from CPU-based HPCs to consumer-level GPUs.
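    The "general equations nonspecific to crystalline structures" are presumably of the Debye scattering type; for an ensemble of N atoms, the orientation-averaged intensity can be written as

    ```latex
    I(q) = \sum_{i=1}^{N}\sum_{j=1}^{N} f_i(q)\, f_j(q)\,
           \frac{\sin\!\left(q\, r_{ij}\right)}{q\, r_{ij}},
    ```

    where the f_i(q) are atomic form factors (scattering lengths in the neutron case) and r_ij are interatomic distances; the double sum over atom pairs is what makes GPU acceleration attractive for particles of up to ∼10^7 atoms.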

  18. Simulation and training of lumbar punctures using haptic volume rendering and a 6DOF haptic device

    NASA Astrophysics Data System (ADS)

    Färber, Matthias; Heller, Julika; Handels, Heinz

    2007-03-01

    The lumbar puncture is performed by inserting a needle into the spinal canal of the patient to inject medication or to extract cerebrospinal fluid. Training of this procedure is usually done on the patient, guided by experienced supervisors. A virtual reality lumbar puncture simulator has been developed in order to minimize training costs and the patient's risk. We use a haptic device with six degrees of freedom (6DOF) to feed back forces that resist needle insertion and rotation. An improved haptic volume rendering approach is used to calculate the forces. This approach makes use of label data of relevant structures like skin, bone, muscles or fat, and of original CT data that contributes information about image structures that cannot be segmented. A real-time 3D visualization with optional stereo view shows the punctured region. 2D visualizations of orthogonal slices enable a detailed impression of the anatomical context. The input data, consisting of CT and label data and surface models of relevant structures, are defined in an XML file together with haptic rendering and visualization parameters. In a first evaluation, the Visible Human male dataset was used to generate a virtual training body. Several users with different levels of medical experience tested the lumbar puncture trainer. The simulator gives a good haptic and visual impression of the needle insertion, and the haptic volume rendering technique enables the feeling of unsegmented structures. In particular, the restriction of transversal needle movement together with the rotation constraints enabled by the 6DOF device facilitates a realistic puncture simulation.

  19. Pyrolysis and combustion of tobacco in a cigarette smoking simulator under air and nitrogen atmosphere.

    PubMed

    Busch, Christian; Streibel, Thorsten; Liu, Chuan; McAdam, Kevin G; Zimmermann, Ralf

    2012-04-01

    A coupling between a cigarette smoking simulator and a time-of-flight mass spectrometer was constructed to allow investigation of tobacco smoke formation under simulated burning conditions. The cigarette smoking simulator is designed to burn a sample in close approximation to the conditions experienced by a lit cigarette. The apparatus also permits conditions outside those of normal cigarette burning to be investigated for mechanistic understanding. It allows control of parameters such as smouldering and puff temperatures, as well as combustion rate and puffing volume. In this study, the system enabled examination of the effects of "smoking" a cigarette under a nitrogen atmosphere. Time-of-flight mass spectrometry combined with a soft ionisation technique is well suited to analysing complex mixtures such as tobacco smoke with high time resolution. The objective of the study was to separate pyrolysis from combustion processes to reveal the formation mechanisms of several selected toxicants. A purposely designed adapter, with no measurable dead volume or memory effects, enables the analysis of pyrolysis and combustion gases from tobacco and tobacco products (e.g. the 3R4F reference cigarette) with minimal aging. The combined system demonstrates clear distinctions between the smoke composition found under air and under nitrogen smoking atmospheres, based on the corresponding mass spectra and visualisations using principal component analysis.

  20. Characterization of chemical agent transport in paints.

    PubMed

    Willis, Matthew P; Gordon, Wesley; Lalain, Teri; Mantooth, Brent

    2013-09-15

    A combination of vacuum-based vapor emission measurements with a mass transport model was employed to determine the interaction of chemical warfare agents with various materials, including transport parameters of agents in paints. Accurate determination of mass transport parameters enables the simulation of the chemical agent distribution in a material for decontaminant performance modeling. The evaluation was performed with the chemical warfare agents bis(2-chloroethyl) sulfide (distilled mustard, known as the chemical warfare blister agent HD) and O-ethyl S-[2-(diisopropylamino)ethyl] methylphosphonothioate (VX), an organophosphate nerve agent, deposited onto two different types of polyurethane paint coatings. The results demonstrated alignment between the experimentally measured vapor emission flux and the predicted vapor flux. Mass transport modeling demonstrated rapid transport of VX into the coatings; VX penetrated through the aliphatic polyurethane-based coating (100 μm) within approximately 107 min. By comparison, while HD was more soluble in the coatings, its penetration depth in the coatings was approximately half that of VX. Applications of mass transport parameters include the ability to predict agent uptake and subsequent long-term vapor emission or contact transfer where the agent could present exposure risks. Additionally, these parameters and the model enable decontamination modeling to predict how decontaminants remove agent from these materials. Published by Elsevier B.V.

  1. Analyzing Spacecraft Telecommunication Systems

    NASA Technical Reports Server (NTRS)

    Kordon, Mark; Hanks, David; Gladden, Roy; Wood, Eric

    2004-01-01

    Multi-Mission Telecom Analysis Tool (MMTAT) is a C-language computer program for analyzing proposed spacecraft telecommunication systems. MMTAT utilizes parameterized input and computational models that can be run on standard desktop computers to perform fast and accurate analyses of telecommunication links. MMTAT is easy to use, can be integrated with other software applications, and can run as part of almost any computational simulation. It is distributed as either a stand-alone application program with a graphical user interface or a linkable library with a well-defined set of application programming interface (API) calls. As a stand-alone program, MMTAT provides both textual and graphical output. The graphs make it possible to understand, quickly and easily, how telecommunication performance varies with variations in input parameters. A delimited text file that can be read by any spreadsheet program is generated at the end of each run. The API in the linkable-library form of MMTAT enables the user to control simulation software and to change parameters during a simulation run. Results can be retrieved either at the end of a run or by use of a function call at any time step.

  2. On determining firing delay time of transitions for Petri net based signaling pathways by introducing stochastic decision rules.

    PubMed

    Miwa, Yoshimasa; Li, Chen; Ge, Qi-Wei; Matsuno, Hiroshi; Miyano, Satoru

    2010-01-01

    Parameter determination is important in modeling and simulating biological pathways, including signaling pathways. Parameters are determined according to biological facts obtained from biological experiments and scientific publications. However, such reliable data describing detailed reactions are not reported in most cases. This prompted us to develop a general methodology for determining the parameters of a model when no information on the underlying biological facts is provided. In this study, we use the Petri net approach for modeling signaling pathways, and propose a method to determine firing delay times of transitions for Petri net models of signaling pathways by introducing stochastic decision rules. Petri net technology provides a powerful approach to modeling and simulating various concurrent systems, and has recently become widely accepted as a description method for biological pathways. Our method enables determination of the range of firing delay times that realizes smooth token flows in the Petri net model of a signaling pathway. The applicability of this method has been confirmed by the results of an application to the interleukin-1-induced signaling pathway.

  3. On determining firing delay time of transitions for petri net based signaling pathways by introducing stochastic decision rules.

    PubMed

    Miwa, Yoshimasa; Li, Chen; Ge, Qi-Wei; Matsuno, Hiroshi; Miyano, Satoru

    2011-01-01

    Parameter determination is important in modeling and simulating biological pathways, including signaling pathways. Parameters are determined according to biological facts obtained from biological experiments and scientific publications. However, such reliable data describing detailed reactions are not reported in most cases. This prompted us to develop a general methodology for determining the parameters of a model when no information on the underlying biological facts is provided. In this study, we use the Petri net approach for modeling signaling pathways, and propose a method to determine firing delay times of transitions for Petri net models of signaling pathways by introducing stochastic decision rules. Petri net technology provides a powerful approach to modeling and simulating various concurrent systems, and has recently become widely accepted as a description method for biological pathways. Our method enables determination of the range of firing delay times that realizes smooth token flows in the Petri net model of a signaling pathway. The applicability of this method has been confirmed by the results of an application to the interleukin-1-induced signaling pathway.

  4. Visual Data-Analytics of Large-Scale Parallel Discrete-Event Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ross, Caitlin; Carothers, Christopher D.; Mubarak, Misbah

    Parallel discrete-event simulation (PDES) is an important tool in the codesign of extreme-scale systems because PDES provides a cost-effective way to evaluate designs of high-performance computing systems. Optimistic synchronization algorithms for PDES, such as Time Warp, allow events to be processed without global synchronization among the processing elements. A rollback mechanism is provided when events are processed out of timestamp order. Although optimistic synchronization protocols enable the scalability of large-scale PDES, the performance of the simulations must be tuned to reduce the number of rollbacks and provide an improved simulation runtime. To enable efficient large-scale optimistic simulations, one has to gain insight into the factors that affect the rollback behavior and simulation performance. We developed a tool for ROSS model developers that gives them detailed metrics on the performance of their large-scale optimistic simulations at varying levels of simulation granularity. Model developers can use this information for parameter tuning of optimistic simulations in order to achieve better runtime and fewer rollbacks. In this work, we instrument the ROSS optimistic PDES framework to gather detailed statistics about the simulation engine. We have also developed an interactive visualization interface that uses the data collected by the ROSS instrumentation to understand the underlying behavior of the simulation engine. The interface connects real time to virtual time in the simulation and provides the ability to view simulation data at different granularities. We demonstrate the usefulness of our framework by performing a visual analysis of the dragonfly network topology model provided by the CODES simulation framework built on top of ROSS. The instrumentation needs to minimize overhead in order to accurately collect data about the simulation performance. To ensure that the instrumentation does not introduce unnecessary overhead, we perform a scaling study that compares instrumented ROSS simulations with their noninstrumented counterparts in order to determine the amount of perturbation when running at different simulation scales.

  5. Operational Improvements From the In-Trail Procedure in the North Atlantic Organized Track System

    NASA Technical Reports Server (NTRS)

    Chartrand, Ryan C.; Bussink, Frank J. L.; Graff, Thomas J.; Murdoch, Jennifer L.; Jones, Kenneth M.

    2008-01-01

    This paper explains the computerized batch processing experiment examining the operational impacts of the introduction of Automatic Dependent Surveillance-Broadcast (ADS-B) equipment and the In-Trail Procedure (ITP) to the North Atlantic Organized Track System (NATOTS). This experiment was conducted using the Traffic Manager (TMX), a desktop simulation capable of simulating airspace environments and aircraft operations. ADS-B equipment can enable the use of new ground and airborne procedures, such as the ITP. The ITP is among the first of these new procedures, which will make use of improved situation awareness in the local surrounding airspace of ADS-B equipped aircraft to enable more efficient oceanic flight level changes. The data collected were analyzed with respect to multiple operationally relevant parameters including fuel burn, request approval rates, and the distribution of fuel savings. This experiment showed that, through the use of ADS-B alone or ADS-B combined with the ITP, operational improvements and benefits could be achieved.

  6. Operational Improvements From Using the In-Trail Procedure in the North Atlantic Organized Track System

    NASA Technical Reports Server (NTRS)

    Chartrand, Ryan C.; Bussink, Frank J.; Graff, Thomas J.; Jones, Kenneth M.

    2009-01-01

    This paper explains the computerized batch processing experiment examining the operational impacts of the introduction of Automatic Dependent Surveillance-Broadcast (ADS-B) equipment and the In-Trail Procedure (ITP) to the North Atlantic Organized Track System. This experiment was conducted using the Traffic Manager (TMX), a desktop simulation capable of simulating airspace environments and aircraft operations. ADS-B equipment can enable the use of new ground and airborne procedures, such as the ITP. ITP is among the first of these new procedures, which will make use of improved situation awareness in the local surrounding airspace of ADS-B equipped aircraft to enable more efficient oceanic flight level changes. The collected data were analyzed with respect to multiple operationally relevant parameters including fuel burn, request approval rates, and the distribution of fuel savings. This experiment showed that, through the use of ADS-B alone or ADS-B combined with the ITP, operational improvements and benefits could be achieved.

  7. Extending BPM Environments of Your Choice with Performance Related Decision Support

    NASA Astrophysics Data System (ADS)

    Fritzsche, Mathias; Picht, Michael; Gilani, Wasif; Spence, Ivor; Brown, John; Kilpatrick, Peter

    What-if simulations have been identified as one solution for business performance related decision support. Such support is especially useful in cases where it can be generated automatically out of Business Process Management (BPM) Environments from the existing business process models and from performance parameters monitored from the executed business process instances. Currently, some of the available BPM Environments offer basic-level performance prediction capabilities. However, these functionalities are normally too limited to be generally useful for performance related decision support at the business process level. In this paper, an approach is presented which allows the non-intrusive integration of sophisticated tooling for what-if simulations, analytic performance prediction tools, process optimizations, or a combination of such solutions into already existing BPM environments. The approach abstracts from specific process modelling techniques, which enables automatic decision support spanning processes across numerous BPM Environments. For instance, this enables end-to-end decision support for composite processes modelled with the Business Process Modelling Notation (BPMN) on top of existing Enterprise Resource Planning (ERP) processes modelled with proprietary languages.

  8. HuMOVE: a low-invasive wearable monitoring platform in sexual medicine.

    PubMed

    Ciuti, Gastone; Nardi, Matteo; Valdastri, Pietro; Menciassi, Arianna; Basile Fasolo, Ciro; Dario, Paolo

    2014-10-01

    To investigate an accelerometer-based wearable system, named the Human Movement (HuMOVE) platform, designed to enable quantitative and continuous measurement of sexual performance with minimal invasiveness and inconvenience for users. The design, implementation, and development of HuMOVE, a wearable platform equipped with an accelerometer sensor for monitoring inertial parameters for sexual performance assessment and diagnosis, were carried out. The system enables quantitative measurement of movement parameters during sexual intercourse, meeting the requirements of wearability, data storage, sampling rate, and interfacing methods that are fundamental for human sexual intercourse performance analysis. HuMOVE was validated through characterization on a controlled experimental test bench and evaluated in a human model under simulated sexual intercourse conditions. HuMOVE proved to be a robust, quantitative monitoring platform and a reliable candidate for sexual performance evaluation and diagnosis. Characterization analysis on the controlled experimental test bench demonstrated an accurate correlation between the HuMOVE system and data from a reference displacement sensor. Experimental tests in the human model under simulated intercourse conditions confirmed the accuracy of the sexual performance evaluation platform and the effectiveness of the selected and derived parameters. The outcomes also met the project expectations in terms of usability and comfort, as evidenced by questionnaires that highlighted the low invasiveness and high acceptance of the device. To the best of our knowledge, the HuMOVE platform is the first device for human sexual performance analysis compatible with sexual intercourse; the system has the potential to be a helpful tool for physicians to accurately classify sexual disorders, such as premature or delayed ejaculation. Copyright © 2014 Elsevier Inc. All rights reserved.

  9. Genomic-Enabled Prediction of Ordinal Data with Bayesian Logistic Ordinal Regression.

    PubMed

    Montesinos-López, Osval A; Montesinos-López, Abelardo; Crossa, José; Burgueño, Juan; Eskridge, Kent

    2015-08-18

    Most genomic-enabled prediction models developed so far assume that the response variable is continuous and normally distributed. The exception is the probit model, developed for ordered categorical phenotypes. In statistical applications, because of the easy implementation of the Bayesian probit ordinal regression (BPOR) model, Bayesian logistic ordinal regression (BLOR) is rarely implemented in the context of genomic-enabled prediction [sample size (n) is much smaller than the number of parameters (p)]. For this reason, in this paper we propose a BLOR model using the Pólya-Gamma data augmentation approach that produces a Gibbs sampler with full conditional distributions similar to those of the BPOR model, and with the advantage that the BPOR model is a particular case of the BLOR model. We evaluated the proposed model by using simulation and two real data sets. Results indicate that our BLOR model is a good alternative for analyzing ordinal data in the context of genomic-enabled prediction with the probit or logit link. Copyright © 2015 Montesinos-López et al.
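    To make the Pólya-Gamma augmentation concrete, below is a minimal Gibbs sampler for the binary logistic case; the ordinal BLOR model adds cutpoint updates on top of the same augmentation. The sketch assumes the third-party pypolyagamma package, and the authors' actual implementation may differ.

    ```python
    import numpy as np
    from pypolyagamma import PyPolyaGamma  # third-party package, assumed installed

    def logistic_gibbs(X, y, n_iter=1000, prior_var=100.0, seed=0):
        """Gibbs sampler for binary logistic regression via Polya-Gamma
        augmentation (Polson, Scott & Windle). Shown for brevity in the
        binary case; BLOR additionally samples cutpoints."""
        rng = np.random.default_rng(seed)
        pg = PyPolyaGamma()
        n, p = X.shape
        beta = np.zeros(p)
        prior_prec = np.eye(p) / prior_var   # Gaussian prior precision on beta
        kappa = y - 0.5                      # PG-augmented pseudo-data
        draws = np.empty((n_iter, p))
        for it in range(n_iter):
            # omega_i ~ PG(1, x_i' beta), one latent variable per observation
            omega = np.array([pg.pgdraw(1.0, xi @ beta) for xi in X])
            V = np.linalg.inv(X.T @ (X * omega[:, None]) + prior_prec)
            m = V @ (X.T @ kappa)
            beta = rng.multivariate_normal(m, V)  # conditionally Gaussian
            draws[it] = beta
        return draws
    ```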

  10. Appropriate use of the increment entropy for electrophysiological time series.

    PubMed

    Liu, Xiaofeng; Wang, Xue; Zhou, Xu; Jiang, Aimin

    2018-04-01

    The increment entropy (IncrEn) is a new measure for quantifying the complexity of a time series. There are three critical parameters in the IncrEn calculation: N (length of the time series), m (dimensionality), and q (quantifying precision). However, the question of how to choose the most appropriate combination of IncrEn parameters for short datasets has not been extensively explored. The purpose of this research was to provide guidance on choosing suitable IncrEn parameters for short datasets by exploring the effects of varying the parameter values. We used simulated data, epileptic EEG data and cardiac interbeat (RR) data to investigate the effects of the parameters on the calculated IncrEn values. The results reveal that IncrEn is sensitive to changes in m, q and N for short datasets (N≤500). However, IncrEn reaches stability at a data length of N=1000 with m=2 and q=2, and for short datasets (N=100), it shows better relative consistency with 2≤m≤6 and 2≤q≤8. We suggest that the value of N should be no less than 100. To enable a clear distinction between different classes based on IncrEn, we recommend that m and q should take values between 2 and 4. With appropriate parameters, IncrEn enables the effective detection of complexity variations in physiological time series, suggesting that IncrEn should be useful for the analysis of physiological time series in clinical applications. Copyright © 2018 Elsevier Ltd. All rights reserved.
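    A rough sketch of the role the parameters play: increments are coded by sign and a magnitude quantized to q levels, words of length m are counted, and the Shannon entropy of the word distribution is returned. The published definition's exact quantization and normalization may differ from this sketch.

    ```python
    import numpy as np
    from collections import Counter

    def increment_entropy(x, m=2, q=2):
        """Illustrative IncrEn-style computation: code each increment by its
        sign and a magnitude quantized to q levels (relative to the increment
        standard deviation), count words of length m, and return the Shannon
        entropy of the word distribution normalized by m - 1."""
        v = np.diff(np.asarray(x, float))
        sd = v.std()
        sd = sd if sd > 0 else 1.0
        mag = np.minimum(q, np.floor(np.abs(v) * q / sd)).astype(int)
        code = np.sign(v).astype(int) * mag
        words = [tuple(code[i:i + m]) for i in range(len(code) - m + 1)]
        counts = np.array(list(Counter(words).values()), float)
        p = counts / counts.sum()
        return float(-(p * np.log2(p)).sum() / (m - 1))
    ```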

  11. Experimental study of the novel tuned mass damper with inerter which enables changes of inertance

    NASA Astrophysics Data System (ADS)

    Brzeski, P.; Lazarek, M.; Perlikowski, P.

    2017-09-01

    In this paper we present the experimental verification of a novel tuned mass damper that enables changes of inertance. A characteristic feature of the proposed device is the presence of a special type of inerter, which incorporates a continuously variable transmission enabling stepless changes of inertance. Thus, the parameters of the damping device can be adjusted to the current forcing characteristics. In the paper we present and describe the experimental rig, which consists of a massive main oscillator forced kinematically and the prototype of the investigated damper. We perform a series of dedicated experiments to characterize the device and assess its damping efficiency. Moreover, we perform numerical simulations using a simple mathematical model of the investigated system. By comparing the numerical results and the experimental data, we validate the model and demonstrate the capabilities of the investigated tuned mass damper. The presented results prove that the concept of this novel type of tuned mass damper can be realized, and confirm its main advantages. The investigated prototype offers excellent damping efficiency over a wide range of forcing frequencies.

  12. Bayesian Parameter Inference and Model Selection by Population Annealing in Systems Biology

    PubMed Central

    Murakami, Yohei

    2014-01-01

    Parameter inference and model selection are very important for mathematical modeling in systems biology. Bayesian statistics can be used to conduct both parameter inference and model selection. In particular, the framework named approximate Bayesian computation is often used for parameter inference and model selection in systems biology. However, Monte Carlo methods need to be used to compute Bayesian posterior distributions. In addition, the posterior distributions of parameters are sometimes almost uniform or very similar to their prior distributions. In such cases, it is difficult to choose one specific value of a parameter with high credibility as the representative value of the distribution. To overcome these problems, we introduced one of the population Monte Carlo algorithms, population annealing. Although population annealing is usually used in statistical mechanics, we showed that it can be used to compute Bayesian posterior distributions in the approximate Bayesian computation framework. To deal with the non-identifiability of representative parameter values, we proposed running the simulations with a parameter ensemble sampled from the posterior distribution, named the "posterior parameter ensemble". We showed that population annealing is an efficient and convenient algorithm for generating a posterior parameter ensemble. We also showed that simulations with the posterior parameter ensemble can not only reproduce the data used for parameter inference, but also capture and predict data that were not used for parameter inference. Lastly, we introduced the marginal likelihood in the approximate Bayesian computation framework for Bayesian model selection. We showed that population annealing enables us to compute the marginal likelihood in the approximate Bayesian computation framework and to conduct model selection based on the Bayes factor. PMID:25089832
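    A minimal sketch of population annealing in an ABC setting: particles drawn from the prior are carried through a decreasing tolerance schedule, with resampling and a small jitter at each level. Real implementations add importance weights and MCMC moves, and all function names here are hypothetical placeholders.

    ```python
    import numpy as np

    def abc_population_annealing(prior_sample, simulate, distance, data,
                                 eps_schedule, n_particles=1000, rng=None):
        """Carry a particle ensemble through decreasing ABC tolerances:
        keep particles whose simulated data fall within the current
        tolerance, resample to full size, then jitter with a Gaussian move.
        Returns the final "posterior parameter ensemble"."""
        rng = rng or np.random.default_rng()
        theta = np.array([prior_sample(rng) for _ in range(n_particles)])
        for eps in eps_schedule:
            d = np.array([distance(simulate(t, rng), data) for t in theta])
            alive = theta[d <= eps]
            if len(alive) == 0:
                raise RuntimeError("tolerance too tight; relax the schedule")
            idx = rng.integers(0, len(alive), n_particles)      # resample
            jitter = 0.1 * theta.std(0) * rng.standard_normal(theta.shape)
            theta = alive[idx] + jitter
        return theta
    ```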

  13. Understanding overlay signatures using machine learning on non-lithography context information

    NASA Astrophysics Data System (ADS)

    Overcast, Marshall; Mellegaard, Corey; Daniel, David; Habets, Boris; Erley, Georg; Guhlemann, Steffen; Thrun, Xaver; Buhl, Stefan; Tottewitz, Steven

    2018-03-01

    Overlay errors between two layers can be caused by non-lithography processes. While these errors can be compensated by the run-to-run system, such process and tool signatures are not always stable. In order to monitor the impact of non-lithography context on overlay at regular intervals, a systematic approach is needed. Using various machine learning techniques, significant context parameters that relate to deviating overlay signatures are automatically identified. Once the most influential context parameters are found, a run-to-run simulation is performed to see how much improvement can be obtained. The resulting analysis shows good potential for reducing the influence of hidden context parameters on overlay performance. Non-lithographic contexts are significant contributors, and their automatic detection and classification will enable the overlay roadmap, given the corresponding control capabilities.

  14. Viscoelastic flow modeling in the extrusion of a dough-like fluid

    NASA Technical Reports Server (NTRS)

    Dhanasekharan, M.; Kokini, J. L.; Janes, H. W. (Principal Investigator)

    2000-01-01

    This work investigates the effect of viscoelasticity and three-dimensional geometry in screw channels. The Phan-Thien Tanner (PTT) constitutive equation with simplified model parameters was solved in conjunction with the flow equations. Polyflow, a commercially available finite element code, was used to solve the resulting nonlinear partial differential equations. The PTT model predicted a pressure buildup one log scale lower than the equivalent Newtonian results. However, the velocity profile did not show significant changes for the chosen PTT model parameters. Past researchers neglected viscoelastic effects as well as the three-dimensional nature of the flow in extruder channels. The results of this paper provide a starting point for further simulations using more realistic model parameters, which may enable the food engineer to more accurately scale up and design extrusion processes.
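    For reference, the linear (simplified) form of the PTT constitutive equation is commonly written as

    ```latex
    \left(1 + \frac{\varepsilon \lambda}{\eta_0}\,\operatorname{tr}\boldsymbol{\tau}\right)\boldsymbol{\tau}
      + \lambda\,\overset{\nabla}{\boldsymbol{\tau}} = 2\,\eta_0\,\mathbf{D},
    ```

    where τ is the extra-stress tensor, λ the relaxation time, η₀ the zero-shear viscosity, D the rate-of-deformation tensor, the ∇ superscript denotes the upper-convected derivative, and ε controls the extensional response. Which variant (linear or exponential stress function, with or without a slip parameter) was used here is not stated in the abstract.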

  15. Removing the Impact of Baluns from Measurements of a Novel Antenna for Cosmological HI Measurements

    NASA Astrophysics Data System (ADS)

    Trung, Vincent; Ewall-Wice, Aaron Michael; Li, Jianshu; Hewitt, Jacqueline; Riley, Daniel; Bradley, Richard F.; Makhija, Krishna; Garza, Sierra; HERA Collaboration

    2018-01-01

    The Hydrogen Epoch of Reionization Array (HERA) is a low-frequency radio interferometer aiming to detect redshifted 21 cm emission from neutral hydrogen during the Epoch of Reionization at frequencies between 100 and 200 MHz. Extending HERA’s performance to lower frequencies will enable detection of radio waves at higher redshifts, when models predict that gas between galaxies was heated by X-rays from the first stellar-mass black holes. The isolation of foregrounds that are four orders of magnitude brighter than the faint cosmological signal presents an unprecedented set of design specifications for our antennas, including sensitivity and spectral smoothness over a large bandwidth. We are developing a broadband sinuous antenna feed for HERA, extending the bandwidth from 50 to 220 MHz, and we are verifying antenna performance with field measurements and simulations. Electromagnetic simulations compute the differential S-parameters of the antenna. We measure these S-parameters through a lossy balun attached to an unbalanced vector network analyzer. Removing the impact of this balun is critical in obtaining an accurate comparison between our simulations and measurements. I describe measurements to characterize the baluns and how they are used to remove the balun’s impact on the antenna S-parameter measurements. Field measurements of the broadband sinuous antenna dish at MIT and Green Bank Observatory are used to verify our electromagnetic simulations of the broadband sinuous antenna design. After applying our balun corrections, we find that our field measurements are in good agreement with the simulation, giving us confidence that our feeds will perform as designed.
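    A minimal sketch of the de-embedding step using the open-source scikit-rf package (file names hypothetical; the authors' actual tooling is not specified): once the balun has been characterized as a two-port network, its inverse is cascaded in front of the raw measurement.

    ```python
    import skrf as rf

    # Hypothetical Touchstone files; the balun must be characterized first.
    balun = rf.Network("balun.s2p")                    # 2-port model of the lossy balun
    measured = rf.Network("antenna_through_balun.s2p") # raw VNA measurement

    # If measured = balun ** antenna (cascade), then antenna = balun.inv ** measured.
    antenna = balun.inv ** measured
    antenna.plot_s_db(m=0, n=0)  # corrected reflection coefficient in dB
    ```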

  16. Validating Cellular Automata Lava Flow Emplacement Algorithms with Standard Benchmarks

    NASA Astrophysics Data System (ADS)

    Richardson, J. A.; Connor, L.; Charbonnier, S. J.; Connor, C.; Gallant, E.

    2015-12-01

    A major existing need in assessing lava flow simulators is a common set of validation benchmark tests. We propose three levels of benchmarks which test model output against increasingly complex standards. First, simulated lava flows should be morphologically identical given changes in parameter space that should be inconsequential, such as slope direction. Second, lava flows simulated in simple parameter spaces can be tested against analytical solutions or empirical relationships seen in Bingham fluids. For instance, a lava flow simulated on a flat surface should produce a circular outline. Third, lava flows simulated over real-world topography can be compared to recent real-world lava flows, such as those at Tolbachik, Russia, and Fogo, Cape Verde. Success or failure of emplacement algorithms in these validation benchmarks can be determined using a Bayesian approach, which directly tests the ability of an emplacement algorithm to correctly forecast lava inundation. Here we focus on two posterior metrics, P(A|B) and P(¬A|¬B), which describe the positive and negative predictive value of flow algorithms. This is an improvement on less direct statistics such as model sensitivity and the Jaccard fitness coefficient. We have performed these validation benchmarks on a new, modular lava flow emplacement simulator that we have developed. This simulator, which we call MOLASSES, follows a Cellular Automata (CA) method. The code is developed in several interchangeable modules, which enables quick modification of the distribution algorithm from cell locations to their neighbors. By assessing several different distribution schemes with the benchmark tests, we have improved the performance of MOLASSES to correctly match early stages of the 2012-2013 Tolbachik flow, Kamchatka, Russia, to 80%. We also can evaluate model performance given uncertain input parameters using a Monte Carlo setup. This illuminates sensitivity to model uncertainty.
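    The two posterior metrics follow directly from a cell-by-cell comparison of simulated and observed inundation grids; a minimal sketch, reading B as the forecast and A as the observation (the authors' exact convention for A and B may differ):

    ```python
    import numpy as np

    def predictive_values(simulated, observed):
        """Positive predictive value P(A|B): fraction of cells forecast as
        inundated that truly were. Negative predictive value P(notA|notB):
        fraction of cells forecast dry that truly stayed dry. Inputs are
        boolean arrays over the same grid with at least one cell in each
        forecast class."""
        sim = np.asarray(simulated, bool)
        obs = np.asarray(observed, bool)
        ppv = (sim & obs).sum() / sim.sum()
        npv = (~sim & ~obs).sum() / (~sim).sum()
        return float(ppv), float(npv)
    ```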

  17. Millikelvin cooling of the center-of-mass motion of a levitated nanoparticle

    NASA Astrophysics Data System (ADS)

    Bullier, Nathanaël P.; Pontin, Antonio; Barker, Peter F.

    2017-08-01

    Cavity optomechanics has been used to cool the center-of-mass motion of levitated nanospheres to millikelvin temperatures. Trapping the particle in the cavity field enables high mechanical frequencies bringing the system close to the resolved-sideband regime. Here we describe a Paul trap constructed from a printed circuit board that is small enough to fit inside the optical cavity and which should enable an accurate positioning of the particle inside the cavity field. This will increase the optical damping and therefore reduce the final temperature by at least one order of magnitude. Simulations of the potential inside the trap enable us to estimate the charge-to-mass ratio of trapped particles by measuring the secular frequencies as a function of the trap parameters. Lastly, we show the importance of reducing laser noise to reach lower temperatures and higher sensitivity in the phase-sensitive readout.
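    The charge-to-mass estimate rests on the standard pseudopotential result for a Paul trap: for a drive of amplitude V₀ and angular frequency Ω and a characteristic trap size r₀ (symbols generic; the geometry factors of this particular printed-circuit trap are not given here),

    ```latex
    q_{\mathrm{M}} = \frac{4\,Q\,V_{0}}{m\,r_{0}^{2}\,\Omega^{2}},
    \qquad
    \omega_{\mathrm{sec}} \approx \frac{q_{\mathrm{M}}\,\Omega}{2\sqrt{2}}
    \quad (q_{\mathrm{M}} \ll 1),
    ```

    so measuring the secular frequency as a function of V₀ or Ω yields the charge-to-mass ratio Q/m.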

  18. Using high-performance networks to enable computational aerosciences applications

    NASA Technical Reports Server (NTRS)

    Johnson, Marjory J.

    1992-01-01

    One component of the U.S. Federal High Performance Computing and Communications Program (HPCCP) is the establishment of a gigabit network to provide a communications infrastructure for researchers across the nation. This gigabit network will provide new services and capabilities, in addition to increased bandwidth, to enable future applications. An understanding of these applications is necessary to guide the development of the gigabit network and other high-performance networks of the future. In this paper we focus on computational aerosciences applications run remotely using the Numerical Aerodynamic Simulation (NAS) facility located at NASA Ames Research Center. We characterize these applications in terms of network-related parameters and relate user experiences that reveal limitations imposed by the current wide-area networking infrastructure. Then we investigate how the development of a nationwide gigabit network would enable users of the NAS facility to work in new, more productive ways.

  19. Automatic 3D virtual scenes modeling for multisensors simulation

    NASA Astrophysics Data System (ADS)

    Latger, Jean; Le Goff, Alain; Cathala, Thierry; Larive, Mathieu

    2006-05-01

    SEDRIS, which stands for Synthetic Environment Data Representation and Interchange Specification, is a DoD/DMSO initiative to federate and make interoperable 3D mock-ups in the frame of virtual reality and simulation. This paper shows an original application of the SEDRIS concept to physical multi-sensor research simulation, whereas SEDRIS is more classically known for training simulation. CHORALE (simulated Optronic Acoustic Radar battlefield) is used by the French DGA/DCE (Directorate for Test and Evaluation of the French Ministry of Defense) to perform multi-sensor simulations. CHORALE enables the user to create virtual and realistic multi-spectral 3D scenes and to generate the physical signal received by a sensor, typically an IR sensor. In the scope of this CHORALE workshop, the French DGA has decided to introduce a new SEDRIS-based 3D terrain modeling tool that enables automatic creation of 3D databases directly usable by the physical sensor simulation renderers of CHORALE. This AGETIM tool turns geographical source data (including GIS facilities) into meshed geometry enhanced with the sensor physical extensions, fitted to the ray tracing rendering of CHORALE for the infrared, electromagnetic, and acoustic spectra. The basic idea is to enhance the 2D source level directly with the physical data, rather than enhancing the 3D meshed level, which is both more efficient (rapid database generation) and more reliable (the database can be regenerated many times, changing only some parameters). The paper concludes with the latest evolution of AGETIM in the scope of mission rehearsal for urban warfare using sensors. This evolution includes indoor modeling for automatic generation of the inner parts of buildings.

  20. Development of a Web Based Simulating System for Earthquake Modeling on the Grid

    NASA Astrophysics Data System (ADS)

    Seber, D.; Youn, C.; Kaiser, T.

    2007-12-01

    Existing cyberinfrastructure-based information, data, and computational networks now allow the development of state-of-the-art, user-friendly simulation environments that democratize access to high-end computational resources and provide new research opportunities for many research and educational communities. Within the Geosciences cyberinfrastructure network, GEON, we have developed the SYNSEIS (SYNthetic SEISmogram) toolkit to enable efficient computations of 2D and 3D seismic waveforms for a variety of research purposes, especially for helping to analyze EarthScope's USArray seismic data in a speedy and efficient environment. The underlying simulation software in SYNSEIS is a finite difference code, E3D, developed by LLNL (S. Larsen). The code is embedded within the SYNSEIS portlet environment, and it is used by our toolkit to simulate seismic waveforms of earthquakes at regional distances (<1000 km). Architecturally, SYNSEIS uses both Web Service and Grid computing resources in a portal-based work environment and has a built-in access mechanism to connect to national supercomputer centers as well as to a dedicated, small-scale compute cluster for its runs. Even though Grid computing is well established in many computing communities, its use among domain scientists is still not trivial because of the multiple levels of complexity encountered. We grid-enabled E3D using our own XML input dialect, which includes geological models that are accessible through standard Web services within the GEON network. The XML inputs for this application contain structural geometries, source parameters, seismic velocity, density, and attenuation values, the number of time steps to compute, and the number of stations. By enabling portal-based access to such a computational environment, coupled with a dynamic user interface, we enable a large user community to take advantage of such high-end calculations in their research and educational activities. Our system can be used to promote an efficient and effective modeling environment that helps scientists as well as educators in their daily activities and speeds up the scientific discovery process.

  1. Method for Identification of Results of Dynamic Overloads in Assessment of Safety Use of the Mine Auxiliary Transportation System

    NASA Astrophysics Data System (ADS)

    Tokarczyk, Jarosław

    2016-12-01

    A method for identifying the effects of dynamic overloads on people, which may occur in an emergency state of a suspended monorail, is presented in the paper. The braking curve was determined using an MBS (Multi-Body System) simulation. For this purpose, a computational MBS model of the suspended monorail was developed and two different variants of numerical calculations were carried out. An algorithm for conducting numerical simulations to assess the effects of dynamic overloads acting on users of suspended monorails is also presented in the paper. An example of an FEM (Finite Element Method) computational model, composed of the technical means and the anthropometric ATB (Articulated Total Body) model, is shown. The simulation results are presented: a graph of the HIC (Head Injury Criterion) parameter and the successive phases of displacement of the ATB model. A generator of computational models for the safety criterion, which enables preparation of input data and remote launching of simulations, is proposed.
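    For reference, the HIC parameter reported by such simulations is the standard criterion computed from the resultant head acceleration a(t), expressed in g, over a sliding time window:

    ```latex
    \mathrm{HIC} = \max_{t_1,\, t_2} \left[ (t_2 - t_1)
      \left( \frac{1}{t_2 - t_1} \int_{t_1}^{t_2} a(t)\, dt \right)^{2.5} \right],
    ```

    with the window length (t₂ − t₁) usually capped at 15 or 36 ms depending on the variant used.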

  2. Multi-Scale Modeling of Liquid Phase Sintering Affected by Gravity: Preliminary Analysis

    NASA Technical Reports Server (NTRS)

    Olevsky, Eugene; German, Randall M.

    2012-01-01

    A multi-scale simulation concept taking into account the impact of gravity on liquid phase sintering is described. The gravity influence can be included at both the micro- and macro-scales. At the micro-scale, the diffusion mass transport is directionally modified in the framework of kinetic Monte Carlo simulations to include the impact of gravity. The micro-scale simulations can provide the values of the constitutive parameters for macroscopic sintering simulations. At the macro-scale, we are attempting to embed a continuum model of sintering into a finite element framework that includes the gravity forces and substrate friction. If successful, the finite element analysis will enable predictions relevant to space-based processing, including size, shape, and property predictions. Model experiments are underway to support the models via extraction of viscosity moduli versus composition, particle size, heating rate, temperature, and time.

  3. The COPD Knowledge Base: enabling data analysis and computational simulation in translational COPD research.

    PubMed

    Cano, Isaac; Tényi, Ákos; Schueller, Christine; Wolff, Martin; Huertas Migueláñez, M Mercedes; Gomez-Cabrero, David; Antczak, Philipp; Roca, Josep; Cascante, Marta; Falciani, Francesco; Maier, Dieter

    2014-11-28

    Previously we generated a chronic obstructive pulmonary disease (COPD) specific knowledge base (http://www.copdknowledgebase.eu) from clinical and experimental data, text-mining results and public databases. This knowledge base allowed the retrieval of specific molecular networks together with integrated clinical and experimental data. The COPDKB has now been extended to integrate over 40 public data sources on functional interaction (e.g. signal transduction, transcriptional regulation, protein-protein interaction, gene-disease association). In addition we integrated COPD-specific expression and co-morbidity networks connecting over 6,000 genes/proteins with physiological parameters and disease states. Three mathematical models describing different aspects of systemic effects of COPD were connected to clinical and experimental data. We have completely redesigned the technical architecture of the user interface and now provide html and web browser-based access and form-based searches. A network search enables the use of interconnecting information and the generation of disease-specific sub-networks from general knowledge. Integration with the Synergy-COPD Simulation Environment enables multi-scale integrated simulation of individual computational models while integration with a Clinical Decision Support System allows delivery into clinical practice. The COPD Knowledge Base is the only publicly available knowledge resource dedicated to COPD and combining genetic information with molecular, physiological and clinical data as well as mathematical modelling. Its integrated analysis functions provide overviews about clinical trends and connections while its semantically mapped content enables complex analysis approaches. We plan to further extend the COPDKB by offering it as a repository to publish and semantically integrate data from relevant clinical trials. The COPDKB is freely available after registration at http://www.copdknowledgebase.eu.

  4. Source encoding in multi-parameter full waveform inversion

    NASA Astrophysics Data System (ADS)

    Matharu, Gian; Sacchi, Mauricio D.

    2018-04-01

    Source encoding techniques alleviate the computational burden of sequential-source full waveform inversion (FWI) by considering multiple sources simultaneously rather than independently. The reduced data volume requires fewer forward/adjoint simulations per non-linear iteration. Applications of source-encoded full waveform inversion (SEFWI) have thus far focused on monoparameter acoustic inversion. We extend SEFWI to the multi-parameter case with applications presented for elastic isotropic inversion. Estimating multiple parameters can be challenging as perturbations in different parameters can prompt similar responses in the data. We investigate the relationship between source encoding and parameter trade-off by examining the multi-parameter source-encoded Hessian. Probing of the Hessian demonstrates the convergence of the expected source-encoded Hessian to that of conventional FWI. The convergence implies that the parameter trade-off in SEFWI is comparable to that observed in FWI. A series of synthetic inversions are conducted to establish the feasibility of source-encoded multi-parameter FWI. We demonstrate that SEFWI requires fewer overall simulations than FWI to achieve a target model error for a range of first-order optimization methods. An inversion for spatially inconsistent P-wave (α) and S-wave (β) velocity models corroborates the expectation of comparable parameter trade-off in SEFWI and FWI. The final example demonstrates a shortcoming of SEFWI when confronted with time-windowing in data-driven inversion schemes. The limitation is a consequence of the implicit fixed-spread acquisition assumption in SEFWI. Alternative objective functions, namely the normalized cross-correlation and L1 waveform misfit, do not enable SEFWI to overcome this limitation.
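    The core of source encoding is to replace individual shots by random-weight supershots, with the encoding re-drawn each iteration; a minimal sketch of the encoding step (array layouts and names hypothetical):

    ```python
    import numpy as np

    def encode_shots(sources, observed_data, rng):
        """Combine individual shots into one supershot with random +/-1 weights.

        sources: array (n_shots, ...) of source terms
        observed_data: array (n_shots, ...) of corresponding records
        The same weights must be applied to sources and data so that the
        supershot misfit is an unbiased estimator of the full misfit; the
        cross-talk noise averages out over re-drawn encodings.
        """
        w = rng.choice([-1.0, 1.0], size=len(sources))
        supershot = np.tensordot(w, sources, axes=1)
        superdata = np.tensordot(w, observed_data, axes=1)
        return supershot, superdata
    ```

    Because every encoded shot must be recorded by the same receivers, this construction presupposes a fixed-spread acquisition, which is the limitation noted at the end of the abstract.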

  5. Visual Predictive Check in Models with Time-Varying Input Function.

    PubMed

    Largajolli, Anna; Bertoldo, Alessandra; Campioni, Marco; Cobelli, Claudio

    2015-11-01

    The nonlinear mixed effects models are commonly used modeling techniques in pharmaceutical research, as they enable the characterization of individual profiles together with the population to which the individuals belong. To ensure their correct use, it is fundamental to provide powerful diagnostic tools that can evaluate the predictive performance of the models. The visual predictive check (VPC) is a commonly used tool that helps the user check by visual inspection whether the model is able to reproduce the variability and the main trend of the observed data. However, simulation from the model is not always trivial, for example when using models with a time-varying input function (IF). In this class of models, there is a potential mismatch between each set of simulated parameters and the associated individual IF, which can cause an incorrect profile simulation. We introduce a refinement of the VPC by taking into consideration a distance term (the Mahalanobis or normalized Euclidean distance) that helps associate the correct IF with each individual set of simulated parameters. We investigate and compare its performance with the standard VPC in models of the glucose and insulin system applied to real and simulated data, and in a simulated pharmacokinetic/pharmacodynamic (PK/PD) example. The newly proposed VPC performs better than the standard VPC, especially for models with large variability in the IF, where the probability of simulating incorrect profiles is higher.
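    A minimal sketch of the matching idea: each simulated parameter set is paired with the individual whose estimated parameters are closest under a normalized Euclidean distance, and that individual's input function is then used for the simulation (names hypothetical; the paper's exact distance and workflow may differ):

    ```python
    import numpy as np

    def match_input_functions(sim_params, indiv_params):
        """For each simulated parameter set, return the index of the
        individual whose estimated parameters are nearest in normalized
        Euclidean distance; that individual's IF is used for simulation."""
        sim = np.asarray(sim_params, float)    # (n_sim, n_par)
        ind = np.asarray(indiv_params, float)  # (n_indiv, n_par)
        scale = ind.std(axis=0)
        scale[scale == 0] = 1.0                # guard against constant parameters
        d = np.linalg.norm((sim[:, None, :] - ind[None, :, :]) / scale, axis=2)
        return d.argmin(axis=1)
    ```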

  6. Simulating the Effects of an Extended Source on the Shack-Hartmann Wavefront Sensor Through Turbulence

    DTIC Science & Technology

    2011-03-01

    wavefront distortions in real time. Often, it is used to correct for optical fluctuations due to atmospheric turbulence and improve imaging system...propagation paths, the overall turbulence is relatively weak, with a Rytov number of only 0.045. The atmospheric parameters were then used to program a three...on an adaptive optics (AO) system, it enables further research on the effects of deep turbulence on AO systems and correlation based wavefront sensing

  7. In vitro flow assessment: from PC-MRI to computational fluid dynamics including fluid-structure interaction

    NASA Astrophysics Data System (ADS)

    Kratzke, Jonas; Rengier, Fabian; Weis, Christian; Beller, Carsten J.; Heuveline, Vincent

    2016-04-01

    Initiation and development of cardiovascular diseases can be highly correlated to specific biomechanical parameters. To examine and assess biomechanical parameters, numerical simulation of cardiovascular dynamics has the potential to complement and enhance medical measurement and imaging techniques. As such, computational fluid dynamics (CFD) has been shown to be suitable for evaluating blood velocity and pressure in scenarios where vessel wall deformation plays a minor role. However, there is a need for further validation studies and for the inclusion of vessel wall elasticity for morphologies subject to large displacements. In this work, we consider a fluid-structure interaction (FSI) model including the full elasticity equation to take the deformability of aortic wall soft tissue into account. We present a numerical framework in which either a CFD study can be performed for less deformable aortic segments or an FSI simulation for regions of large displacement, such as the aortic root and arch. Both methods are validated by means of an aortic phantom experiment. The computational results are in good agreement with 2D phase-contrast magnetic resonance imaging (PC-MRI) velocity measurements as well as catheter-based pressure measurements. The FSI simulation shows a characteristic vessel compliance effect on the flow field induced by the elasticity of the vessel wall, which the CFD model cannot capture. The in vitro validated FSI simulation framework can enable the computation of complementary biomechanical parameters such as the stress distribution within the vessel wall.

  8. Field-theoretic simulations of block copolymer nanocomposites in a constant interfacial tension ensemble.

    PubMed

    Koski, Jason P; Riggleman, Robert A

    2017-04-28

    Block copolymers, due to their ability to self-assemble into periodic structures with long range order, are appealing candidates to control the ordering of functionalized nanoparticles where it is well-accepted that the spatial distribution of nanoparticles in a polymer matrix dictates the resulting material properties. The large parameter space associated with block copolymer nanocomposites makes theory and simulation tools appealing to guide experiments and effectively isolate parameters of interest. We demonstrate a method for performing field-theoretic simulations in a constant volume-constant interfacial tension ensemble (nVγT) that enables the determination of the equilibrium properties of block copolymer nanocomposites, including when the composites are placed under tensile or compressive loads. Our approach is compatible with the complex Langevin simulation framework, which allows us to go beyond the mean-field approximation. We validate our approach by comparing our nVγT approach with free energy calculations to determine the ideal domain spacing and modulus of a symmetric block copolymer melt. We analyze the effect of numerical and thermodynamic parameters on the efficiency of the nVγT ensemble and subsequently use our method to investigate the ideal domain spacing, modulus, and nanoparticle distribution of a lamellar forming block copolymer nanocomposite. We find that the nanoparticle distribution is directly linked to the resultant domain spacing and is dependent on polymer chain density, nanoparticle size, and nanoparticle chemistry. Furthermore, placing the system under tension or compression can qualitatively alter the nanoparticle distribution within the block copolymer.

  9. Model based systems engineering (MBSE) applied to Radio Aurora Explorer (RAX) CubeSat mission operational scenarios

    NASA Astrophysics Data System (ADS)

    Spangelo, S. C.; Cutler, J.; Anderson, L.; Fosse, E.; Cheng, L.; Yntema, R.; Bajaj, M.; Delp, C.; Cole, B.; Soremekum, G.; Kaslow, D.

    Small satellites are more highly resource-constrained by mass, power, volume, delivery timelines, and financial cost relative to their larger counterparts. Small satellites are operationally challenging because subsystem functions are coupled and constrained by the limited available commodities (e.g. data, energy, and access times to ground resources). Furthermore, additional operational complexities arise because small satellite components are physically integrated, which may yield thermal or radio frequency interference. In this paper, we extend our initial Model Based Systems Engineering (MBSE) framework developed for a small satellite mission by demonstrating the ability to model different behaviors and scenarios. We integrate several simulation tools to execute SysML-based behavior models, including subsystem functions and internal states of the spacecraft. We demonstrate the utility of this approach to drive the system analysis and design process. We demonstrate the applicability of the simulation environment to capture realistic satellite operational scenarios, which include energy collection, data acquisition, and downloading to ground stations. The integrated modeling environment enables users to extract feasibility, performance, and robustness metrics. This enables visualization of both the physical states (e.g. position, attitude) and functional states (e.g. operating points of various subsystems) of the satellite for representative mission scenarios. The modeling approach presented in this paper offers satellite designers and operators the opportunity to assess the feasibility of vehicle and network parameters, as well as the feasibility of operational schedules. This will enable future missions to benefit from using these models throughout the full design, test, and fly cycle. In particular, vehicle and network parameters and schedules can be verified prior to being implemented, during mission operations, and can also be updated in near real-time with operational performance feedback.

  10. Coarse-Grained Descriptions of Dynamics for Networks with Both Intrinsic and Structural Heterogeneities

    PubMed Central

    Bertalan, Tom; Wu, Yan; Laing, Carlo; Gear, C. William; Kevrekidis, Ioannis G.

    2017-01-01

    Finding accurate reduced descriptions for large, complex, dynamically evolving networks is a crucial enabler to their simulation, analysis, and ultimately design. Here, we propose and illustrate a systematic and powerful approach to obtaining good collective coarse-grained observables—variables successfully summarizing the detailed state of such networks. Finding such variables can naturally lead to successful reduced dynamic models for the networks. The main premise enabling our approach is the assumption that the behavior of a node in the network depends (after a short initial transient) on the node identity: a set of descriptors that quantify the node properties, whether intrinsic (e.g., parameters in the node evolution equations) or structural (imparted to the node by its connectivity in the particular network structure). The approach creates a natural link with modeling and “computational enabling technology” developed in the context of Uncertainty Quantification. In our case, however, we will not focus on ensembles of different realizations of a problem, each with parameters randomly selected from a distribution. We will instead study many coupled heterogeneous units, each characterized by randomly assigned (heterogeneous) parameter value(s). One could then coin the term Heterogeneity Quantification for this approach, which we illustrate through a model dynamic network consisting of coupled oscillators with one intrinsic heterogeneity (oscillator individual frequency) and one structural heterogeneity (oscillator degree in the undirected network). The computational implementation of the approach, its shortcomings and possible extensions are also discussed. PMID:28659781

  11. Enabling Predictive Simulation and UQ of Complex Multiphysics PDE Systems by the Development of Goal-Oriented Variational Sensitivity Analysis and a-Posteriori Error Estimation Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Estep, Donald

    2015-11-30

    This project addressed the challenge of predictive computational analysis of strongly coupled, highly nonlinear multiphysics systems characterized by multiple physical phenomena that span a large range of length- and time-scales. Specifically, the project was focused on computational estimation of numerical error and sensitivity analysis of computational solutions with respect to variations in parameters and data. In addition, the project investigated the use of accurate computational estimates to guide efficient adaptive discretization. The project developed, analyzed and evaluated new variational adjoint-based techniques for integration, model, and data error estimation/control and sensitivity analysis, in evolutionary multiphysics multiscale simulations.

  12. Implementation and Simulation Results using Autonomous Aerobraking Development Software

    NASA Technical Reports Server (NTRS)

    Maddock, Robert W.; DwyerCianciolo, Alicia M.; Bowes, Angela; Prince, Jill L. H.; Powell, Richard W.

    2011-01-01

    An Autonomous Aerobraking software system is currently under development with support from the NASA Engineering and Safety Center (NESC) that would move typically ground-based operations functions to onboard an aerobraking spacecraft, reducing mission risk and mission cost. The suite of software that will enable autonomous aerobraking is the Autonomous Aerobraking Development Software (AADS) and consists of an ephemeris model, onboard atmosphere estimator, temperature and loads prediction, and a maneuver calculation. The software calculates the maneuver time, magnitude and direction commands to maintain the spacecraft periapsis parameters within design structural load and/or thermal constraints. The AADS is currently tested in simulations at Mars, with plans to also evaluate feasibility and performance at Venus and Titan.

  13. A discrete-time adaptive control scheme for robot manipulators

    NASA Technical Reports Server (NTRS)

    Tarokh, M.

    1990-01-01

    A discrete-time model reference adaptive control scheme is developed for trajectory tracking of robot manipulators. The scheme utilizes feedback, feedforward, and auxiliary signals, obtained from joint angle measurement through simple expressions. Hyperstability theory is utilized to derive the adaptation laws for the controller gain matrices. It is shown that trajectory tracking is achieved despite gross robot parameter variation and uncertainties. The method offers considerable design flexibility and enables the designer to improve the performance of the control system by adjusting free design parameters. The discrete-time adaptation algorithm is extremely simple and is therefore suitable for real-time implementation. Simulations and experimental results are given to demonstrate the performance of the scheme.

  14. A Discontinuous Potential Model for Protein-Protein Interactions.

    PubMed

    Shao, Qing; Hall, Carol K

    2016-01-01

    Protein-protein interactions play an important role in many biologic and industrial processes. In this work, we develop a two-bead-per-residue model that enables us to account for protein-protein interactions in a multi-protein system using discontinuous molecular dynamics simulations. This model deploys discontinuous potentials to describe the non-bonded interactions and virtual bonds to keep proteins in their native state. The geometric and energetic parameters are derived from the potentials of mean force between sidechain-sidechain, sidechain-backbone, and backbone-backbone pairs. The energetic parameters are scaled with the aim of matching the second virial coefficient of lysozyme reported in experiment. We also investigate the performance of several bond-building strategies.

  15. Probabilistically modeling lava flows with MOLASSES

    NASA Astrophysics Data System (ADS)

    Richardson, J. A.; Connor, L.; Connor, C.; Gallant, E.

    2017-12-01

    Modeling lava flows through Cellular Automata methods enables a computationally inexpensive means to quickly forecast lava flow paths and ultimate areal extents. We have developed a lava flow simulator, MOLASSES, that forecasts lava flow inundation over an elevation model from a point source eruption. This modular code can be implemented in a deterministic fashion with given user inputs that will produce a single lava flow simulation. MOLASSES can also be implemented in a probabilistic fashion where given user inputs define parameter distributions that are randomly sampled to create many lava flow simulations. This probabilistic approach enables uncertainty in input data to be expressed in the model results, and MOLASSES outputs a probability map of inundation instead of a single determined lava flow extent. Since the code is comparatively fast, we use it probabilistically to investigate where potential vents are located that may impact specific sites and areas, as well as the unconditional probability of lava flow inundation of sites or areas from any vent. We have validated the MOLASSES code against community-defined benchmark tests and against the real-world lava flows at Tolbachik (2012-2013) and Pico do Fogo (2014-2015). To determine the efficacy of the MOLASSES simulator at accurately and precisely mimicking the inundation area of real flows, we report goodness of fit using both model sensitivity and the positive predictive value, the latter of which is a Bayesian posterior statistic. Model sensitivity is often used in evaluating lava flow simulators, as it describes how much of the lava flow was successfully modeled by the simulation. We argue that the positive predictive value is equally important in determining how good a simulator is, as it describes the percentage of the simulation space that was actually inundated by lava.
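
    The probabilistic mode amounts to a Monte Carlo loop around the deterministic simulator: sample the input distributions, run once per sample, and average the inundation grids. The sketch below shows only this wrapping pattern; run_lava_flow is a hypothetical placeholder for a call to the actual simulator, and the sampled distributions are illustrative.

```python
import numpy as np

def run_lava_flow(dem, vent_xy, volume, residual_thickness):
    """Hypothetical stand-in for one deterministic simulator run; a real
    implementation would call the simulator and return a boolean
    inundation grid with the same shape as the DEM."""
    return np.zeros(dem.shape, dtype=bool)  # placeholder result

def inundation_probability(dem, n_runs=1000, seed=0):
    """Sample the input distributions, run the simulator once per sample,
    and return the fraction of runs that inundated each cell."""
    rng = np.random.default_rng(seed)
    hits = np.zeros(dem.shape)
    for _ in range(n_runs):
        vent = (rng.uniform(0, dem.shape[0]), rng.uniform(0, dem.shape[1]))
        volume = rng.lognormal(mean=16.0, sigma=0.5)   # m^3, illustrative
        thickness = rng.uniform(0.5, 2.0)              # m, illustrative
        hits += run_lava_flow(dem, vent, volume, thickness)
    return hits / n_runs

prob_map = inundation_probability(np.zeros((100, 100)), n_runs=100)
```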

  16. Virtual reality to simulate large lighting with high efficiency LEDs

    NASA Astrophysics Data System (ADS)

    Blandet, Thierry; Coutelier, Gilles; Meyrueis, Patrick

    2011-05-01

    When a city or a local authority wishes to emphasize its historical heritage through the lighting of its streets, or to set up lights during the festive season, it calls upon the skills of a lighting designer. The lighting designer proposes concepts, ideas, and lighting schemes and, to be able to present them, makes use of simulation. At the same time, lighting technologies are evolving very rapidly, and new lighting systems offer features that lighting designers are now integrating into their projects. Street lights consume a lot of energy, so lighting projects now take the energy-saving aspect into account. Lighting systems based on LEDs today meet lighting needs well, taking sustainable development issues into account while enabling a new creative dimension. Lighting simulation can handle these parameters. Images or video simulations are no longer sufficient: stereoscopy and virtual reality techniques allow better communication and better understanding of projects. Virtual reality offers new possibilities of interaction, freedom of movement in a scene, and the presentation of variants or interactive simulations.

  17. A compact physical model for the simulation of pNML-based architectures

    NASA Astrophysics Data System (ADS)

    Turvani, G.; Riente, F.; Plozner, E.; Schmitt-Landsiedel, D.; Breitkreutz-v. Gamm, S.

    2017-05-01

    Among emerging technologies, perpendicular Nanomagnetic Logic (pNML) seems very promising because of its capability of combining logic and memory in the same device, its scalability, 3D-integration and low power consumption. Recently, Full Adder (FA) structures clocked by a global magnetic field have been experimentally demonstrated, and detailed characterizations of the switching process governing the domain wall (DW) nucleation probability Pnuc and time tnuc have been performed. However, the design of pNML architectures represents a crucial point in the study of this technology, as it can have a remarkable impact on the reliability of pNML structures. Here, we present a compact model developed in VHDL which enables the simulation of complex pNML architectures while taking critical physical parameters into account. These parameters have been extracted from the experiments, fitted by the corresponding physical equations and encapsulated into the proposed model. Within it, magnetic structures are decomposed into a few basic elements (nucleation centers, nanowires, inverters etc.) represented by the corresponding physical description. To validate the model, we redesigned a FA and compared our simulation results to the experiment. With this compact model of pNML devices we have envisioned a new methodology which makes it possible to simulate and test the physical behavior of complex architectures at very low computational cost.

  18. Reliability Estimation of Parameters of Helical Wind Turbine with Vertical Axis

    PubMed Central

    Dumitrascu, Adela-Eliza; Lepadatescu, Badea; Dumitrascu, Dorin-Ion; Nedelcu, Anisor; Ciobanu, Doina Valentina

    2015-01-01

    Due to their prolonged use, wind turbines must be characterized by high reliability. This can be achieved through rigorous design, appropriate simulation and testing, and proper construction. The reliability prediction and analysis of these systems lead to identifying the critical components, increasing the operating time, minimizing the failure rate, and minimizing maintenance costs. To estimate the energy produced by the wind turbine, an evaluation approach based on a Monte Carlo simulation model is developed which enables us to estimate the probability of minimum and maximum parameters. In our simulation process we used triangular distributions. The analysis of the simulation results focuses on the interpretation of the relative frequency histograms and the cumulative distribution curve (ogive diagram), which indicates the probability of obtaining the daily or annual energy output depending on wind speed. The experimental research consists of estimating the reliability and unreliability functions and the hazard rate of the helical vertical-axis wind turbine designed and patented for the climatic conditions of Romanian regions. Also, the variation of power produced for different wind speeds, the Weibull distribution of wind probability, and the power generated were determined. The analysis of the experimental results indicates that this type of wind turbine is efficient at low wind speed. PMID:26167524
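
    The Monte Carlo procedure with triangular input distributions can be sketched as follows. The bounds, rotor area, and power relation are illustrative assumptions, not the paper's values; the sketch only shows how a relative-frequency histogram and ogive are obtained from sampled daily energy outputs.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Triangular (min, mode, max) input distributions -- illustrative bounds,
# not the values used in the paper.
wind_speed = rng.triangular(2.0, 5.0, 12.0, size=n)     # m/s
power_coeff = rng.triangular(0.25, 0.35, 0.45, size=n)  # dimensionless

rho, area = 1.225, 3.0  # air density (kg/m^3) and rotor area (m^2), assumed
power = 0.5 * rho * area * power_coeff * wind_speed**3  # W
daily_energy = power * 24 / 1000                        # kWh per day

# Relative-frequency histogram and ogive (cumulative distribution curve).
counts, edges = np.histogram(daily_energy, bins=30)
rel_freq = counts / n
ogive = np.cumsum(rel_freq)
print(f"P(daily energy <= {edges[15]:.1f} kWh) ~= {ogive[14]:.2f}")
```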

  19. Experimental quantum simulations of many-body physics with trapped ions.

    PubMed

    Schneider, Ch; Porras, Diego; Schaetz, Tobias

    2012-02-01

    Direct experimental access to some of the most intriguing quantum phenomena is not granted due to the lack of precise control of the relevant parameters in their naturally intricate environment. Their simulation on conventional computers is impossible, since quantum behaviour arising with superposition states or entanglement is not efficiently translatable into the classical language. However, one could gain deeper insight into complex quantum dynamics by experimentally simulating the quantum behaviour of interest in another quantum system, where the relevant parameters and interactions can be controlled and robust effects detected sufficiently well. Systems of trapped ions provide unique control of both the internal (electronic) and external (motional) degrees of freedom. The mutual Coulomb interaction between the ions allows for large interaction strengths at comparatively large mutual ion distances, enabling individual control and readout. Systems of trapped ions therefore represent a prominent platform in several physical disciplines, for example, quantum information processing or metrology. Here, we will give an overview of different trapping techniques of ions as well as implementations for coherent manipulation of their quantum states and discuss the related theoretical basics. We then report on the experimental and theoretical progress in simulating quantum many-body physics with trapped ions and present current approaches for scaling up to more ions and more-dimensional systems.

  1. Runtime visualization of the human arterial tree.

    PubMed

    Insley, Joseph A; Papka, Michael E; Dong, Suchuan; Karniadakis, George; Karonis, Nicholas T

    2007-01-01

    Large-scale simulation codes typically execute for extended periods of time and often on distributed computational resources. Because these simulations can run for hours, or even days, scientists like to get feedback about the state of the computation and the validity of its results as it runs. It is also important that these capabilities be made available with little impact on the performance and stability of the simulation. Visualizing and exploring data in the early stages of the simulation can help scientists identify problems early, potentially avoiding a situation where a simulation runs for several days, only to discover that an error with an input parameter caused both time and resources to be wasted. We describe an application that aids in the monitoring and analysis of a simulation of the human arterial tree. The application provides researchers with high-level feedback about the state of the ongoing simulation and enables them to investigate particular areas of interest in greater detail. The application also offers monitoring information about the amount of data produced and data transfer performance among the various components of the application.

  2. Visually guided gait modifications for stepping over an obstacle: a bio-inspired approach.

    PubMed

    Silva, Pedro; Matos, Vitor; Santos, Cristina P

    2014-02-01

    There is an increasing interest in conceiving robotic systems that are able to move and act in an unstructured and not predefined environment, for which autonomy and adaptability are crucial features. In nature, animals are autonomous biological systems, which often serve as bio-inspiration models, not only for their physical and mechanical properties, but also for their control structures that enable adaptability and autonomy, for which learning is (at least) partially responsible. This work proposes a system which seeks to enable a quadruped robot to learn online to detect an obstacle in its path and to avoid stumbling on it. The detection relies on a forward internal model that estimates the robot's perceptive information by exploiting the repetitive nature of locomotion. The system adapts the locomotion in order to place the robot optimally before attempting to step over the obstacle, avoiding any stumbling. Locomotion adaptation is achieved by changing control parameters of a central pattern generator (CPG)-based locomotion controller. The mechanism learns the alterations to the stride length necessary to adapt the locomotion by changing the required CPG parameter. Both learning tasks occur online and together define a sensorimotor map, which enables the robot to learn to step over the obstacle in its path. Simulation results show the feasibility of the proposed approach.
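
    A common way to realize such a CPG parameter is the amplitude of a Hopf-style limit-cycle oscillator; the sketch below adopts that assumption as a generic illustration of how changing one parameter rescales the rhythmic output, and is not the controller used in the paper.

```python
import numpy as np

def cpg_step(state, omega, mu, dt=0.01):
    """One Euler step of a Hopf-style oscillator; the limit-cycle radius
    is sqrt(mu), so mu acts as the stride-amplitude parameter."""
    x, y = state
    r2 = x * x + y * y
    dx = (mu - r2) * x - omega * y
    dy = (mu - r2) * y + omega * x
    return (x + dt * dx, y + dt * dy)

# Rolling out the oscillator with two values of mu shows how one CPG
# parameter rescales the rhythmic output (a stand-in for stride length).
state, omega = (0.1, 0.0), 2 * np.pi   # roughly 1 Hz gait rhythm
for mu in (1.0, 0.25):                 # nominal vs. shortened stride
    traj = []
    for _ in range(2000):
        state = cpg_step(state, omega, mu)
        traj.append(state[0])
    print(f"mu={mu}: output amplitude ~ {max(traj[1000:]):.2f}")
```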

  3. A generic open-source software framework supporting scenario simulations in bioterrorist crises.

    PubMed

    Falenski, Alexander; Filter, Matthias; Thöns, Christian; Weiser, Armin A; Wigger, Jan-Frederik; Davis, Matthew; Douglas, Judith V; Edlund, Stefan; Hu, Kun; Kaufman, James H; Appel, Bernd; Käsbohrer, Annemarie

    2013-09-01

    Since the 2001 anthrax attack in the United States, awareness of threats originating from bioterrorism has grown. This led internationally to increased research efforts to improve knowledge of and approaches to protecting human and animal populations against the threat from such attacks. A collaborative effort in this context is the extension of the open-source Spatiotemporal Epidemiological Modeler (STEM) simulation and modeling software for agro- or bioterrorist crisis scenarios. STEM, originally designed to enable community-driven public health disease models and simulations, was extended with new features that enable integration of proprietary data as well as visualization of agent spread along supply and production chains. STEM now provides a fully developed open-source software infrastructure supporting critical modeling tasks such as ad hoc model generation, parameter estimation, simulation of scenario evolution, estimation of effects of mitigation or management measures, and documentation. This open-source software resource can be used free of charge. Additionally, STEM provides critical features like built-in worldwide data on administrative boundaries, transportation networks, or environmental conditions (eg, rainfall, temperature, elevation, vegetation). Users can easily combine their own confidential data with built-in public data to create customized models of desired resolution. STEM also supports collaborative and joint efforts in crisis situations by extended import and export functionalities. In this article we demonstrate specifically those new software features implemented to accomplish STEM application in agro- or bioterrorist crisis scenarios.

  4. Novel Observer Scheme of Fuzzy-MRAS Sensorless Speed Control of Induction Motor Drive

    NASA Astrophysics Data System (ADS)

    Chekroun, S.; Zerikat, M.; Mechernene, A.; Benharir, N.

    2017-01-01

    This paper presents a novel Fuzzy-MRAS approach for robust, accurate tracking of an induction motor drive operating in a high-performance drive environment. Of the different methods for sensorless control of induction motor drives, the model reference adaptive system (MRAS) attracts a lot of attention due to its good performance. The analysis of the sensorless vector control system using MRAS is presented, and a speed observer using a new fuzzy self-tuning adaptive IP controller, accounting for resistance parameter variations, is proposed. In fact, fuzzy logic is reminiscent of human thinking processes and natural language, enabling decisions to be made based on vague information. The present approach helps to achieve a good dynamic response, disturbance rejection, and low sensitivity to plant parameter variations of the induction motor. In order to verify the performance of the proposed observer and control algorithms and to test the behaviour of the controlled system, numerical simulation is carried out. Simulation results are presented and discussed to show the validity and performance of the proposed observer.

  5. COSMOABC: Likelihood-free inference via Population Monte Carlo Approximate Bayesian Computation

    NASA Astrophysics Data System (ADS)

    Ishida, E. E. O.; Vitenti, S. D. P.; Penna-Lima, M.; Cisewski, J.; de Souza, R. S.; Trindade, A. M. M.; Cameron, E.; Busti, V. C.; COIN Collaboration

    2015-11-01

    Approximate Bayesian Computation (ABC) enables parameter inference for complex physical systems in cases where the true likelihood function is unknown, unavailable, or computationally too expensive. It relies on the forward simulation of mock data and comparison between observed and synthetic catalogues. Here we present COSMOABC, a Python ABC sampler featuring a Population Monte Carlo variation of the original ABC algorithm, which uses an adaptive importance sampling scheme. The code is very flexible and can be easily coupled to an external simulator, while allowing the incorporation of arbitrary distance and prior functions. As an example of practical application, we coupled COSMOABC with the NUMCOSMO library and demonstrate how it can be used to estimate posterior probability distributions over cosmological parameters based on measurements of galaxy cluster number counts without computing the likelihood function. COSMOABC is published under the GPLv3 license on PyPI and GitHub and documentation is available at http://goo.gl/SmB8EX.
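
    The core likelihood-free idea can be shown with basic rejection ABC, the step that the Population Monte Carlo variant refines with adaptive importance sampling. The sketch below is a generic illustration, not the COSMOABC API; the simulator, distance, and tolerance are toy stand-ins.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulator(theta, n=200):
    """Toy forward model standing in for an expensive physical simulation."""
    return rng.normal(loc=theta, scale=1.0, size=n)

def distance(sim, obs):
    """Distance between summary statistics of two catalogues."""
    return abs(sim.mean() - obs.mean())

observed = simulator(3.0)   # pretend these are the real data

# Rejection ABC: draw from the prior, simulate mock data, and keep the
# parameters whose mock catalogue lies within a tolerance of the data.
prior_draws = rng.uniform(0.0, 10.0, size=20_000)
accepted = [t for t in prior_draws
            if distance(simulator(t), observed) < 0.1]
print(f"posterior mean ~ {np.mean(accepted):.2f} from {len(accepted)} draws")
```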

  6. Numerical investigation of flow parameters for solid rigid spheroidal particle in a pulsatile pipe flow

    NASA Astrophysics Data System (ADS)

    Varghese, Joffin; Jayakumar, J. S.

    2017-09-01

    Quantifying, forecasting and analysing the displacement rates of suspended particles are essential in blood flow analysis, because blood, one of the major organs of the body enabling transport phenomena, comprises numerous blood cells. In order to model the blood flow, a flow domain was created and numerically simulated. The flow field velocity in the stream is solved using an unstructured finite volume method (FVM) solver. In pulsatile flow, the effects of parameters such as average Reynolds number, tube radius, particle size and Womersley number are taken into account. In this study the spheroidal particle trajectory in the axial direction is simulated at different values of pulsating frequency, including 1.2 Hz, 3.33 Hz and 4.00 Hz, and various densities, including 1005 kg/m3 and 1025 kg/m3, for the flow domain. The analysis provides an interaction study of blood constituents for different flow situations, which has applications in the diagnosis and treatment of cardiovascular diseases.

  7. Lipid14: The Amber Lipid Force Field

    PubMed Central

    2015-01-01

    The AMBER lipid force field has been updated to create Lipid14, allowing tensionless simulation of a number of lipid types with the AMBER MD package. The modular nature of this force field allows numerous combinations of head and tail groups to create different lipid types, enabling the easy insertion of new lipid species. The Lennard-Jones and torsion parameters of both the head and tail groups have been revised and updated partial charges calculated. The force field has been validated by simulating bilayers of six different lipid types for a total of 0.5 μs each without applying a surface tension; with favorable comparison to experiment for properties such as area per lipid, volume per lipid, bilayer thickness, NMR order parameters, scattering data, and lipid lateral diffusion. As the derivation of this force field is consistent with the AMBER development philosophy, Lipid14 is compatible with the AMBER protein, nucleic acid, carbohydrate, and small molecule force fields. PMID:24803855

  8. Properties of centralized cooperative sensing in cognitive radio networks

    NASA Astrophysics Data System (ADS)

    Skokowski, Paweł; Malon, Krzysztof; Łopatka, Jerzy

    2017-04-01

    Spectrum sensing is a functionality that enables network creation in cognitive radio technology. Spectrum sensing is used for building situation-awareness knowledge for better use of radio resources and for adjusting network parameters in case of jamming, interference from legacy systems, or decreasing link quality caused, e.g., by changes in node positions. This paper presents results from tests performed to compare cooperative centralized sensing with local sensing. All tests were performed in a simulator developed in the Matlab/Simulink environment.
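
    A typical building block of such a comparison is a local energy detector whose binary decisions are fused at a central node. The sketch below illustrates that idea with a majority-voting fusion rule; the detector model, threshold, and SNR are illustrative assumptions, not the parameters of the paper's simulator.

```python
import numpy as np

rng = np.random.default_rng(7)

def energy_detect(snr_db, n_samples=500, threshold=1.2):
    """Local energy detector: average energy of a noisy received signal
    compared against a fixed threshold (all values illustrative)."""
    amplitude = np.sqrt(10 ** (snr_db / 10))
    received = amplitude * rng.standard_normal(n_samples) \
        + rng.standard_normal(n_samples)
    return np.mean(received ** 2) > threshold

# Centralized cooperative sensing: a fusion centre combines the local
# binary decisions of several nodes, here with simple majority voting.
trials, nodes = 2000, 5
local = np.array([[energy_detect(snr_db=-5) for _ in range(nodes)]
                  for _ in range(trials)])
cooperative = local.sum(axis=1) >= 3
print(f"single-node detection rate: {local[:, 0].mean():.2f}")
print(f"cooperative detection rate: {cooperative.mean():.2f}")
```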

  9. Coherent attacking continuous-variable quantum key distribution with entanglement in the middle

    NASA Astrophysics Data System (ADS)

    Zhang, Zhaoyuan; Shi, Ronghua; Zeng, Guihua; Guo, Ying

    2018-06-01

    We suggest an approach to the coherent attack of continuous-variable quantum key distribution (CVQKD) with an untrusted entangled source in the middle. The coherent attack strategy can be performed on the double links of the quantum system, enabling the eavesdropper to steal more information from the proposed scheme using the entanglement correlation. Numerical simulation results show the improved performance of the attacked CVQKD system in terms of the derived secret key rate, with the controllable parameters maximizing the stolen information.

  10. ACME Priority Metrics (A-PRIME)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Evans, Katherine J; Zender, Charlie; Van Roekel, Luke

    A-PRIME is a collection of scripts designed to provide Accelerated Climate Model for Energy (ACME) model developers and analysts with a variety of analyses of the model, needed to determine whether the model is producing the desired results, depending on the goals of the simulation. The software is based on csh scripts at the top level, enabling scientists to provide the input parameters. Within them, the csh scripts call code that performs the postprocessing of the raw data and creates plots for visual assessment.

  11. The Modular Modeling System (MMS): A toolbox for water- and environmental-resources management

    USGS Publications Warehouse

    Leavesley, G.H.; Markstrom, S.L.; Viger, R.J.; Hay, L.E.; ,

    2005-01-01

    The increasing complexity of water- and environmental-resource problems requires modeling approaches that incorporate knowledge from a broad range of scientific and software disciplines. To address this need, the U.S. Geological Survey (USGS) has developed the Modular Modeling System (MMS). MMS is an integrated system of computer software for model development, integration, and application. Its modular design allows a high level of flexibility and adaptability to enable modelers to incorporate their own software into a rich array of built-in models and modeling tools. These include individual process models, tightly coupled models, loosely coupled models, and fully-integrated decision support systems. A geographic information system (GIS) interface, the USGS GIS Weasel, has been integrated with MMS to enable spatial delineation and characterization of basin and ecosystem features, and to provide objective parameter-estimation methods for models using available digital data. MMS provides optimization and sensitivity-analysis tools to analyze model parameters and evaluate the extent to which uncertainty in model parameters affects uncertainty in simulation results. MMS has been coupled with the Bureau of Reclamation object-oriented reservoir and river-system modeling framework, RiverWare, to develop models to evaluate and apply optimal resource-allocation and management strategies to complex, operational decisions on multipurpose reservoir systems and watersheds. This decision support system approach has been developed, tested, and implemented in the Gunnison, Yakima, San Joaquin, Rio Grande, and Truckee River basins of the western United States. MMS is currently being coupled with the U.S. Forest Service model SIMulating Patterns and Processes at Landscape Scales (SIMPPLLE) to assess the effects of alternative vegetation-management strategies on a variety of hydrological and ecological responses. Initial development and testing of the MMS-SIMPPLLE integration is being conducted on the Colorado Plateau region of the western United States.

  12. Simulation of a photo-solar generator for an optimal output by a parabolic photovoltaic concentrator of Stirling engine type

    NASA Astrophysics Data System (ADS)

    Kaddour, A.; Benyoucef, B.

    Solar energy is the most promising and powerful of the renewable energy sources. Photovoltaic (PV) electricity is obtained by direct transformation of sunlight into electricity by means of PV cells. We then study the operation of PV cells by digital simulation with the aim of optimizing the output of the parabolic concentrator of Stirling engine type. The Greenius software makes it possible to carry out the digital simulation in 2D and 3D and to study the influence of the various parameters on the characteristic voltage under illumination of the cell. The results obtained enabled us to determine the extrinsic factors, which depend on the environment, and the intrinsic factors, which result from the properties of the materials used.

  13. Accuracy of lung nodule density on HRCT: analysis by PSF-based image simulation.

    PubMed

    Ohno, Ken; Ohkubo, Masaki; Marasinghe, Janaka C; Murao, Kohei; Matsumoto, Toru; Wada, Shinichi

    2012-11-08

    A computed tomography (CT) image simulation technique based on the point spread function (PSF) was applied to analyze the accuracy of CT-based clinical evaluations of lung nodule density. The PSF of the CT system was measured and used to perform the lung nodule image simulation. Then, the simulated image was resampled at intervals equal to the pixel size and the slice interval found in clinical high-resolution CT (HRCT) images. On those images, the nodule density was measured by placing a region of interest (ROI) of the kind commonly used in routine clinical practice, and the measured value was compared with the true value (the known density of the object function used in the image simulation). It was quantitatively determined that the measured nodule density depended on the nodule diameter and the image reconstruction parameters (kernel and slice thickness). In addition, the measured density fluctuated depending on the offset between the nodule center and the image voxel center. This fluctuation was reduced by decreasing the slice interval (i.e., with the use of overlapping reconstruction), leading to a stable density evaluation. Our proposed method of PSF-based image simulation accompanied by resampling enables a quantitative analysis of the accuracy of CT-based evaluations of lung nodule density. These results could potentially reveal clinical misreadings in diagnosis and lead to more accurate and precise density evaluations. They would also be of value for determining the optimum scan and reconstruction parameters, such as image reconstruction kernels and slice thicknesses/intervals.
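
    The essence of the approach can be sketched in one dimension: blur a known-density object function with a PSF, resample at the clinical pixel pitch, and measure an ROI. The sketch below assumes a Gaussian PSF as a stand-in for the measured one, with illustrative dimensions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# 1D sketch: known-density object -> PSF blur -> clinical-pitch resampling
# -> ROI measurement. Gaussian PSF and all dimensions are assumptions.
true_density = -100.0                        # HU, known object value
x = np.linspace(-20, 20, 4001)               # fine grid, 0.01 mm spacing
nodule = np.where(np.abs(x) < 4.0, true_density, -1000.0)  # 8 mm nodule

blurred = gaussian_filter(nodule, sigma=80)  # PSF sigma of 0.8 mm
sampled = blurred[::60]                      # resample at a 0.6 mm pitch

roi = sampled[np.abs(x[::60]) < 2.0]         # ROI well inside the nodule
print(f"true {true_density:.0f} HU, measured {roi.mean():.1f} HU")
```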

  14. Do bacterial cell numbers follow a theoretical Poisson distribution? Comparison of experimentally obtained numbers of single cells with random number generation via computer simulation.

    PubMed

    Koyama, Kento; Hokunan, Hidekazu; Hasegawa, Mayumi; Kawamura, Shuso; Koseki, Shigenobu

    2016-12-01

    We investigated a bacterial sample preparation procedure for single-cell studies. In the present study, we examined whether single bacterial cells obtained via 10-fold dilution followed a theoretical Poisson distribution. Four serotypes of Salmonella enterica, three serotypes of enterohaemorrhagic Escherichia coli and one serotype of Listeria monocytogenes were used as sample bacteria. An inoculum of each serotype was prepared via a 10-fold dilution series to obtain bacterial cell counts with mean values of one or two. To determine whether the experimentally obtained bacterial cell counts follow a theoretical Poisson distribution, a likelihood ratio test was conducted between the experimentally obtained cell counts and a Poisson distribution whose parameter was estimated by maximum likelihood estimation (MLE). The bacterial cell counts of each serotype sufficiently followed a Poisson distribution. Furthermore, to examine the validity of the Poisson distribution parameters obtained from the experimental bacterial cell counts, we compared these with the parameters of a Poisson distribution estimated using random number generation via computer simulation. The Poisson distribution parameters experimentally obtained from bacterial cell counts were within the range of the parameters estimated by the computer simulation. These results demonstrate that the bacterial cell counts of each serotype obtained via 10-fold dilution followed a Poisson distribution. The fact that the frequency of bacterial cell counts follows a Poisson distribution at low numbers can be applied to single-cell studies with a few bacterial cells. In particular, the procedure presented in this study enables us to develop an inactivation model at the single-cell level that can estimate the variability of surviving bacterial numbers during the bacterial death process. Copyright © 2016 Elsevier Ltd. All rights reserved.
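
    The MLE and goodness-of-fit step can be sketched as follows: the Poisson MLE is simply the sample mean, and a likelihood-ratio-style G-test compares observed count frequencies with the fitted expectations. The counts here are synthetic stand-ins for the experimental data, and the binning is a simplification of the paper's test.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Synthetic stand-in for single-cell counts from a 10-fold dilution series
# targeting a mean of about one cell per aliquot.
observed = rng.poisson(lam=1.0, size=50)

# The Poisson MLE is the sample mean.
lam_hat = observed.mean()

# G-test (a likelihood-ratio statistic) comparing observed count
# frequencies with the fitted Poisson expectations over observed values.
values, counts = np.unique(observed, return_counts=True)
expected = stats.poisson.pmf(values, lam_hat) * observed.size
g_stat = 2.0 * np.sum(counts * np.log(counts / expected))
p_value = stats.chi2.sf(g_stat, df=len(values) - 2)  # one fitted parameter
print(f"lambda_hat={lam_hat:.2f}, G={g_stat:.2f}, p={p_value:.3f}")
```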

  15. Cohesive finite element modeling of the delamination of HTPB binder and HMX crystals under tensile loading

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walters, David J.; Luscher, Darby J.; Yeager, John D.

    Accurately modeling the mechanical behavior of polymer binders and the degradation of interfaces between binder and crystal is important to science-based understanding of the macro-scale response of polymer bonded explosives. The paper presents a description of a relatively simple bi-crystal HMX-HTPB specimen and the associated tensile loading experiment including computed tomography imaging, the pertinent constitutive theory, and details of numerical simulations used to infer the behavior of the material during the delamination process. Within this work, mechanical testing and direct numerical simulation of this relatively simple bi-crystal system enabled reasonable isolation of binder-crystal interface delamination, in which the effects of the complicated thermomechanical response of explosive crystals were minimized. Cohesive finite element modeling of the degradation and delamination of the interface between a modified HTPB binder and HMX crystals was used to reproduce observed results from tensile loading experiments on bi-crystal specimens. Several comparisons are made with experimental measurements in order to identify appropriate constitutive behavior of the binder and appropriate parameters for the cohesive traction-separation behavior of the crystal-binder interface. This research demonstrates the utility of directly modeling the delamination between binder and crystal within a crystal-binder-crystal tensile specimen towards characterizing the behavior of these interfaces in a manner amenable to larger-scale simulation of polycrystalline PBX materials. One critical aspect of this approach is the micro computed tomography imaging conducted during the experiments, which enabled comparison of delamination patterns between the direct numerical simulation and the actual specimen. In addition to optimizing the cohesive interface parameters, one important finding from this investigation is that understanding and representing the strain-hardening plasticity of the HTPB binder is important within the context of using a cohesive traction-separation model for the delamination of a crystal-binder system.

  16. Investigating ice nucleation in cirrus clouds with an aerosol-enabled Multiscale Modeling Framework

    DOE PAGES

    Zhang, Chengzhu; Wang, Minghuai; Morrison, H.; ...

    2014-11-06

    In this study, an aerosol-dependent ice nucleation scheme [Liu and Penner, 2005] has been implemented in an aerosol-enabled multi-scale modeling framework (PNNL MMF) to study ice formation in upper troposphere cirrus clouds through both homogeneous and heterogeneous nucleation. The MMF model represents cloud scale processes by embedding a cloud-resolving model (CRM) within each vertical column of a GCM grid. By explicitly linking ice nucleation to aerosol number concentration, CRM-scale temperature, relative humidity and vertical velocity, the new MMF model simulates the persistent high ice supersaturation and low ice number concentration (10 to 100/L) at cirrus temperatures. The low ice number is attributed to the dominance of heterogeneous nucleation in ice formation. The new model simulates the observed shift of the ice supersaturation PDF towards higher values at low temperatures following the homogeneous nucleation threshold. The MMF models predict a higher frequency of midlatitude supersaturation in the Southern hemisphere and winter hemisphere, which is consistent with previous satellite and in-situ observations. It is shown that compared to a conventional GCM, the MMF is a more powerful model to emulate parameters that evolve over short time scales such as supersaturation. Sensitivity tests suggest that the simulated global distribution of ice clouds is sensitive to the ice nucleation schemes and the distribution of sulfate and dust aerosols. Simulations are also performed to test empirical parameters related to auto-conversion of ice crystals to snow. Results show that with a value of 250 μm for the critical diameter, Dcs, that distinguishes ice crystals from snow, the model can produce good agreement with the satellite-retrieved products in terms of cloud ice water path and ice water content, while the total ice water is not sensitive to the specification of the Dcs value.

  17. Simulating Nationwide Pandemics: Applying the Multi-scale Epidemiologic Simulation and Analysis System to Human Infectious Diseases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dombroski, M; Melius, C; Edmunds, T

    2008-09-24

    This study uses the Multi-scale Epidemiologic Simulation and Analysis (MESA) system developed for foreign animal diseases to assess the consequences of nationwide human infectious disease outbreaks. A literature review identified the state of the art in both small-scale regional models and large-scale nationwide models and characterized key aspects of a nationwide epidemiological model. The MESA system offers computational advantages over existing epidemiological models and enables a broader array of stochastic analyses of model runs to be conducted because of those computational advantages. However, it has only been demonstrated on foreign animal diseases. This paper applied the MESA modeling methodology to human epidemiology. The methodology divided 2000 US Census data at the census tract level into school-bound children, work-bound workers, elderly, and stay-at-home individuals. The model simulated mixing among these groups by incorporating schools, workplaces, households, and long-distance travel via airports. A baseline scenario with fixed input parameters was run for a nationwide influenza outbreak using relatively simple social distancing countermeasures. Analysis of the baseline scenario showed one of three possible results: (1) the outbreak burned itself out before it had a chance to spread regionally, (2) the outbreak spread regionally and lasted a relatively long time, although constrained geography enabled it to eventually be contained without affecting a disproportionately large number of people, or (3) the outbreak spread through air travel and lasted a long time with unconstrained geography, becoming a nationwide pandemic. These results are consistent with empirical influenza outbreak data. The results showed that simply scaling up a regional small-scale model is unlikely to account for all the complex variables and their interactions involved in a nationwide outbreak. There are several limitations of the methodology that should be explored in future work, including validating the model against reliable historical disease data, improving contact rates, spread methods, and disease parameters through discussions with epidemiological experts, and incorporating realistic behavioral assumptions.

  18. Computational steering of GEM based detector simulations

    NASA Astrophysics Data System (ADS)

    Sheharyar, Ali; Bouhali, Othmane

    2017-10-01

    Gas-based detector R&D relies heavily on full simulation of detectors and their optimization before final prototypes can be built and tested. These simulations, in particular those with complex scenarios such as high detector voltages or gases with larger gains, are computationally intensive and may take several days or weeks to complete. These long-running simulations usually run on high-performance computers in batch mode. If the results lead to unexpected behavior, the simulation might be rerun with different parameters. However, the simulations (or jobs) may have to wait in a queue until they get a chance to run again, because the supercomputer is a shared resource that maintains a queue of other user programs as well and executes them as time and priorities permit. This may result in inefficient resource utilization and an increase in the turnaround time for the scientific experiment. To overcome this issue, monitoring the behavior of a simulation while it is running (live) is essential. In this work, we employ the computational steering technique by coupling the detector simulations with a visualization package named VisIt to enable the exploration of the live data as it is produced by the simulation.

  19. Simulated Wake Characteristics Data for Closely Spaced Parallel Runway Operations Analysis

    NASA Technical Reports Server (NTRS)

    Guerreiro, Nelson M.; Neitzke, Kurt W.

    2012-01-01

    A simulation experiment was performed to generate and compile wake characteristics data relevant to the evaluation and feasibility analysis of closely spaced parallel runway (CSPR) operational concepts. While the experiment in this work is not tailored to any particular operational concept, the generated data applies to the broader class of CSPR concepts, where a trailing aircraft on a CSPR approach is required to stay ahead of the wake vortices generated by a lead aircraft on an adjacent CSPR. Data for wake age, circulation strength, and wake altitude change, at various lateral offset distances from the wake-generating lead aircraft approach path were compiled for a set of nine aircraft spanning the full range of FAA and ICAO wake classifications. A total of 54 scenarios were simulated to generate data related to key parameters that determine wake behavior. Of particular interest are wake age characteristics that can be used to evaluate both time- and distance- based in-trail separation concepts for all aircraft wake-class combinations. A simple first-order difference model was developed to enable the computation of wake parameter estimates for aircraft models having weight, wingspan and speed characteristics similar to those of the nine aircraft modeled in this work.

  1. Improving the realism of white matter numerical phantoms: a step towards a better understanding of the influence of structural disorders in diffusion MRI

    NASA Astrophysics Data System (ADS)

    Ginsburger, Kévin; Poupon, Fabrice; Beaujoin, Justine; Estournet, Delphine; Matuschke, Felix; Mangin, Jean-François; Axer, Markus; Poupon, Cyril

    2018-02-01

    White matter is composed of irregularly packed axons, leading to structural disorder in the extra-axonal space. Diffusion MRI experiments using oscillating gradient spin echo (OGSE) sequences have shown that the diffusivity transverse to axons in this extra-axonal space depends on the frequency of the employed sequence. In this study, we observe the same frequency dependence using 3D simulations of the diffusion process in disordered media. We design a novel white matter numerical phantom generation algorithm which constructs biomimicking geometric configurations with few design parameters and enables control of the level of disorder of the generated phantoms. The influence of various geometrical parameters present in white matter, such as global angular dispersion, tortuosity, presence of Ranvier nodes, and beading, on the frequency dependence of the extra-cellular perpendicular diffusivity was investigated by simulating the diffusion process in numerical phantoms of increasing complexity and fitting the resulting simulated diffusion MR signal attenuation with an adequate analytical model designed for trapezoidal OGSE sequences. This work suggests that angular dispersion and especially beading have non-negligible effects on this extracellular diffusion metric that may be measured using standard OGSE DW-MRI clinical protocols.

  2. Fragmentation modeling of a resin bonded sand

    NASA Astrophysics Data System (ADS)

    Hilth, William; Ryckelynck, David

    2017-06-01

    Cemented sands exhibit a complex mechanical behavior that can lead to sophisticated models with numerous parameters lacking real physical meaning. However, using a rather simple generalized critical-state bonded soil model has proven to be a relevant compromise between easy calibration and good results. The constitutive model formulation considers a non-associated elasto-plastic formulation within the critical state framework. The calibration procedure, using standard laboratory tests, is complemented by the study of a uniaxial compression test observed by tomography. Using finite element simulations, this test is simulated considering a non-homogeneous 3D medium. The tomography of the compression sample gives access to 3D displacement fields by using image correlation techniques. Unfortunately, these fields have missing experimental data because of the low resolution of the correlations for low displacement magnitudes. We propose a recovery method that reconstructs full 3D displacement fields and 2D boundary displacement fields. These fields are mandatory for the calibration of the constitutive parameters using 3D finite element simulations. The proposed recovery technique is based on a singular value decomposition of the available experimental data. This calibration protocol enables an accurate prediction of the fragmentation of the specimen.
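
    The SVD-based recovery can be illustrated with a generic low-rank completion loop: reconstruct the snapshot matrix from its leading singular vectors and overwrite only the missing entries, iterating to convergence. This is a sketch of the general idea under that assumption, not the authors' exact procedure.

```python
import numpy as np

def svd_recover(fields, mask, rank=3, n_iter=50):
    """Iterative low-rank completion: fill missing displacement values
    (mask == False) with a truncated-SVD reconstruction while keeping the
    measured entries fixed. `fields` is (n_snapshots, n_points)."""
    filled = np.where(mask, fields, 0.0)
    for _ in range(n_iter):
        u, s, vt = np.linalg.svd(filled, full_matrices=False)
        low_rank = (u[:, :rank] * s[:rank]) @ vt[:rank]
        filled = np.where(mask, fields, low_rank)  # overwrite gaps only
    return filled

# Synthetic check: a rank-2 displacement field with 30% of entries missing.
rng = np.random.default_rng(5)
truth = rng.normal(size=(40, 2)) @ rng.normal(size=(2, 300))
mask = rng.uniform(size=truth.shape) > 0.3
recovered = svd_recover(truth, mask, rank=2)
err = np.abs(recovered - truth)[~mask].mean()
print(f"mean absolute error on missing entries: {err:.3f}")
```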

  3. Simulating industrial plasma reactors - A fresh perspective

    NASA Astrophysics Data System (ADS)

    Mohr, Sebastian; Rahimi, Sara; Tennyson, Jonathan; Ansell, Oliver; Patel, Jash

    2016-09-01

    A key goal of the research project PowerBase presented here is to produce new integration schemes which enable the manufacturability of 3D integrated power smart systems with high-precision TSV etched features. The necessary high-aspect-ratio etch is performed via the BOSCH process. Investigations in industrial research often use trial-and-improvement experimental methods. Simulations provide an alternative way to study the influence of external parameters on the final product, while also giving insights into the physical processes. This presentation investigates the process of simulating an industrial ICP reactor used over high power (up to 2x5 kW) and pressure (up to 200 mTorr) ranges, analysing the specific procedures to achieve a compromise between physical correctness and computational speed, while testing commonly made assumptions. This includes, for example, the effect of different physical models and the inclusion of different gas-phase and surface reactions, with the aim of accurately predicting the dependence of surface rates and profiles on external parameters in SF6 and C4F8 discharges. This project has received funding from the Electronic Component Systems for European Leadership Joint Undertaking under Grant Agreement No. 662133 PowerBase.

  4. Interactive model evaluation tool based on IPython notebook

    NASA Astrophysics Data System (ADS)

    Balemans, Sophie; Van Hoey, Stijn; Nopens, Ingmar; Seuntjes, Piet

    2015-04-01

    In hydrological modelling, some kind of parameter optimization is usually performed. This can be the selection of a single best parameter set, a split into behavioural and non-behavioural parameter sets based on a selected threshold, or a posterior parameter distribution derived with a formal Bayesian approach. The selection of the criterion to measure the goodness of fit (likelihood or any objective function) is an essential step in all of these methodologies and will affect the final selected parameter subset. Moreover, the discriminative power of the objective function also depends on the time period used. In practice, the optimization process is an iterative procedure, so in the course of the modelling process an increasing number of simulations is performed. However, the information carried by these simulation outputs is not always fully exploited. In this respect, we developed and present an interactive environment that enables the user to intuitively evaluate model performance. The aim is to explore the parameter space graphically and to visualize the impact of the selected objective function on model behaviour. First, a set of model simulation results is loaded along with the corresponding parameter sets and a data set of the same variable as the model outcome (mostly discharge). The ranges of the loaded parameter sets define the parameter space. The user selects the two parameters to visualise, along with an objective function and a time period of interest. Based on this information, a two-dimensional parameter response surface is created: a scatter plot of the parameter combinations with a color scale corresponding to the goodness of fit of each combination. Finally, a slider is available to change the color mapping of the points; it provides a threshold to exclude non-behavioural parameter sets, and the color scale is attributed only to the remaining parameter sets. By interactively changing the settings and interpreting the graph, the user gains insight into the model's structural behaviour. Moreover, a more deliberate choice of objective function can be made, and periods of high information content can be identified. The environment is written in an IPython notebook and uses the interactive functions provided by the IPython community. As such, the power of the IPython notebook as a development environment for scientific computing is illustrated (Shen, 2014).
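
    A minimal sketch of the described interaction pattern, assuming a notebook environment with ipywidgets and matplotlib available (the two parameters and the mock objective values below are invented):

        # Scatter two parameters, color by goodness of fit, and use a slider
        # threshold to grey out non-behavioural parameter sets.
        import numpy as np
        import matplotlib.pyplot as plt
        from ipywidgets import interact, FloatSlider

        rng = np.random.default_rng(0)
        p1, p2 = rng.uniform(0, 1, 500), rng.uniform(0, 1, 500)  # sampled parameter sets
        sse = (p1 - 0.3) ** 2 + (p2 - 0.7) ** 2                  # mock objective function

        def plot_response_surface(threshold=0.5):
            behavioural = sse <= threshold
            plt.scatter(p1[~behavioural], p2[~behavioural], c="lightgrey", s=10)
            plt.scatter(p1[behavioural], p2[behavioural], c=sse[behavioural],
                        cmap="viridis_r", s=15)
            plt.colorbar(label="objective function (lower is better)")
            plt.xlabel("parameter 1")
            plt.ylabel("parameter 2")
            plt.show()

        interact(plot_response_surface,
                 threshold=FloatSlider(min=0.0, max=1.0, step=0.05, value=0.5))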

  5. Validation of the Monte Carlo simulator GATE for indium-111 imaging.

    PubMed

    Assié, K; Gardin, I; Véra, P; Buvat, I

    2005-07-07

    Monte Carlo simulations are useful for optimizing and assessing single photon emission computed tomography (SPECT) protocols, especially when aiming at measuring quantitative parameters from SPECT images. Before Monte Carlo simulated data can be trusted, the simulation model must be validated. The purpose of this work was to validate the use of GATE, a new Monte Carlo simulation platform based on GEANT4, for modelling indium-111 SPECT data, the quantification of which is of foremost importance for dosimetric studies. To that end, acquisitions of (111)In line sources in air and in water and of a cylindrical phantom were performed, together with the corresponding simulations. The simulation model included Monte Carlo modelling of the camera collimator and of a back-compartment accounting for photomultiplier tubes and associated electronics. Energy spectra, spatial resolution, sensitivity values, images and count profiles obtained for experimental and simulated data were compared. An excellent agreement was found between experimental and simulated energy spectra. For source-to-collimator distances varying from 0 to 20 cm, simulated and experimental spatial resolution differed by less than 2% in air, while the simulated sensitivity values were within 4% of the experimental values. The simulation of the cylindrical phantom closely reproduced the experimental data. These results suggest that GATE enables accurate simulation of (111)In SPECT acquisitions.

  6. Detecting changes in ultrasound backscattered statistics by using Nakagami parameters: Comparisons of moment-based and maximum likelihood estimators.

    PubMed

    Lin, Jen-Jen; Cheng, Jung-Yu; Huang, Li-Fei; Lin, Ying-Hsiu; Wan, Yung-Liang; Tsui, Po-Hsiang

    2017-05-01

    The Nakagami distribution is a useful approximation to the statistics of ultrasound backscattered signals for tissue characterization. The choice of estimator may affect the Nakagami parameter's ability to detect changes in backscattered statistics. In particular, the moment-based estimator (MBE) and maximum likelihood estimator (MLE) are the two primary methods used to estimate the Nakagami parameters of ultrasound signals. This study explored the effects of the MBE and different MLE approximations on Nakagami parameter estimation. Ultrasound backscattered signals of different scatterer number densities were generated using a simulation model, and phantom experiments and measurements of human liver tissues were also conducted to acquire real backscattered echoes. Envelope signals were employed to estimate the Nakagami parameters by using the MBE, the first- and second-order approximations of the MLE (MLE1 and MLE2, respectively), and the Greenwood approximation (MLEgw) for comparison. The simulation results demonstrated that, compared with the MBE and MLE1, the MLE2 and MLEgw enabled more stable parameter estimation with small sample sizes. Notably, the required data length of the envelope signal was 3.6 times the pulse length. The phantom and tissue measurement results also showed that the Nakagami parameters estimated using the MLE2 and MLEgw could simultaneously differentiate various scatterer concentrations with lower standard deviations and reliably reflect the physical meanings associated with the backscattered statistics. Therefore, the MLE2 and MLEgw are suggested as estimators for the development of Nakagami-based methodologies for ultrasound tissue characterization. Copyright © 2017 Elsevier B.V. All rights reserved.
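
    For reference, the moment-based estimator mentioned above has a simple closed form: with envelope R, m = E[R²]² / Var(R²) and Ω = E[R²]. A small self-check on synthetic data (not the study's code):

        import numpy as np

        def nakagami_mbe(envelope):
            """Moment-based estimates of the Nakagami m and Omega parameters."""
            r2 = np.asarray(envelope, dtype=float) ** 2
            omega = r2.mean()
            m = omega**2 / r2.var()
            return m, omega

        # Synthetic Nakagami envelope: R^2 ~ Gamma(shape=m, scale=Omega/m)
        rng = np.random.default_rng(1)
        m_true, omega_true = 0.8, 1.0
        envelope = np.sqrt(rng.gamma(shape=m_true, scale=omega_true / m_true, size=20000))
        print(nakagami_mbe(envelope))  # should be close to (0.8, 1.0)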

  7. Rydberg aggregates

    NASA Astrophysics Data System (ADS)

    Wüster, S.; Rost, J.-M.

    2018-02-01

    We review Rydberg aggregates, assemblies of a few Rydberg atoms exhibiting energy transport through collective eigenstates, considering isolated atoms or assemblies embedded within clouds of cold ground-state atoms. We classify Rydberg aggregates, and provide an overview of their possible applications as quantum simulators for phenomena from chemical or biological physics. Our main focus is on flexible Rydberg aggregates, in which atomic motion is an essential feature. In these, simultaneous control over Rydberg-Rydberg interactions, external trapping and electronic energies, allows Born-Oppenheimer surfaces for the motion of the entire aggregate to be tailored as desired. This is illustrated with theory proposals towards the demonstration of joint motion and excitation transport, conical intersections and non-adiabatic effects. Additional flexibility for quantum simulations is enabled by the use of dressed dipole-dipole interactions or the embedding of the aggregate in a cold gas or Bose-Einstein condensate environment. Finally we provide some guidance regarding the parameter regimes that are most suitable for the realization of either static or flexible Rydberg aggregates based on Li or Rb atoms. The current status of experimental progress towards enabling Rydberg aggregates is also reviewed.

  8. Development of a Coherent Bistatic Vegetation Model for Signal of Opportunity Applications at VHF UHF-Bands

    NASA Technical Reports Server (NTRS)

    Kurum, Mehmet; Deshpande, Manohar; Joseph, Alicia T.; O'Neill, Peggy E.; Lang, Roger H.; Eroglu, Orhan

    2017-01-01

    A coherent bistatic vegetation scattering model, based on a Monte Carlo simulation, is being developed to simulate polarimetric bistatic reflectometry at VHF/UHF-bands (240-270 MHz). The model aims to assess the value of geostationary satellite signals of opportunity for enabling estimation of the Earth's biomass and root-zone soil moisture. An expression for bistatic scattering from a vegetation canopy is derived for the practical case of ground-based/low-altitude platforms with passive receivers overlooking vegetation. Using analytical wave theory in conjunction with the distorted Born approximation (DBA), the transmit and receive antenna effects (i.e., polarization, orientation, height, etc.) are explicitly accounted for. Both the coherent nature of the model (joint phase and amplitude information) and the explicit account of system parameters (antenna, altitude, polarization, etc.) enable one to perform various beamforming techniques to evaluate realistic deployment configurations. In this paper, several test scenarios will be presented and the results evaluated for the feasibility of future biomass and root-zone soil moisture applications using geostationary communication satellite signals of opportunity at low frequencies.

  9. Model development and validation of geometrically complex eddy current coils using finite element methods

    NASA Astrophysics Data System (ADS)

    Brown, Alexander; Eviston, Connor

    2017-02-01

    Multiple FEM models of complex eddy current coil geometries were created and validated to calculate the change of impedance due to the presence of a notch. Realistic simulations of eddy current inspections are required for model-assisted probability of detection (MAPOD) studies, inversion algorithms, experimental verification, and tailored probe design for NDE applications. An FEM solver was chosen to model complex real-world situations, including varying probe dimensions and orientations along with complex probe geometries. This will also enable creation of a probe model library database with variable parameters. Verification and validation were performed using other commercially available eddy current modeling software as well as experimentally collected benchmark data. Data analysis and comparison showed that the created models were able to correctly model the probe and conductor interactions and accurately calculate the change in impedance in several experimental scenarios with acceptable error. The promising results of the models enabled the start of an eddy current probe model library to give experimenters easy access to powerful parameter-based eddy current models for alternate project applications.

  10. Implementation of PSF engineering in high-resolution 3D microscopy imaging with a LCoS (reflective) SLM

    NASA Astrophysics Data System (ADS)

    King, Sharon V.; Doblas, Ana; Patwary, Nurmohammed; Saavedra, Genaro; Martínez-Corral, Manuel; Preza, Chrysanthe

    2014-03-01

    Wavefront coding (WFC) techniques are currently used to engineer unique point spread functions (PSFs) that enhance existing microscope modalities or create new ones. Previous work in this field demonstrated that simulated intensity PSFs encoded with a generalized cubic phase mask (GCPM) are invariant to spherical aberration or misfocus, depending on parameter selection. Additional work demonstrated that simulated PSFs encoded with a squared cubic phase mask (SQUBIC) produce a depth-invariant focal spot for application in confocal scanning microscopy. Implementation of PSF engineering theory with a liquid crystal on silicon (LCoS) spatial light modulator (SLM) enables validation of WFC phase mask designs and parameters by manipulating optical wavefront properties with a programmable diffractive element. To validate and investigate parameters of the GCPM and SQUBIC WFC masks, we implemented PSF engineering in an upright microscope modified with a dual camera port and an LCoS SLM. We present measured WFC PSFs and compare them to simulated PSFs through analysis of their effect on the microscope imaging system properties. Experimentally acquired PSFs show the same intensity distribution as simulation for the GCPM phase mask, the SQUBIC mask, and the well-known and characterized cubic phase mask (CPM), first applied to high-NA microscopy by Arnison et al. for extending depth of field. These measurements provide experimental validation of new WFC masks and demonstrate the use of the LCoS SLM as a WFC design tool. Although efficiency improvements are needed, this application of LCoS technology renders the microscope capable of switching among multiple WFC modes.

  11. Brain perfusion imaging using a Reconstruction-of-Difference (RoD) approach for cone-beam computed tomography

    NASA Astrophysics Data System (ADS)

    Mow, M.; Zbijewski, W.; Sisniega, A.; Xu, J.; Dang, H.; Stayman, J. W.; Wang, X.; Foos, D. H.; Koliatsos, V.; Aygun, N.; Siewerdsen, J. H.

    2017-03-01

    Purpose: To improve the timely detection and treatment of intracranial hemorrhage or ischemic stroke, recent efforts include the development of cone-beam CT (CBCT) systems for perfusion imaging and new approaches to estimate perfusion parameters despite rotation speeds that are slow compared to multi-detector CT (MDCT) systems. This work describes the development of a brain perfusion CBCT method using a reconstruction-of-difference (RoD) approach to enable perfusion imaging on a newly developed CBCT head scanner prototype. Methods: A new reconstruction approach using RoD within a penalized-likelihood framework was developed to image the temporal dynamics of vascular enhancement. A digital perfusion simulation was developed to give a realistic representation of brain anatomy, artifacts, noise, scanner characteristics, and hemodynamic properties. This simulation includes a digital brain phantom, time-attenuation curves and noise parameters, a novel forward projection method for improved computational efficiency, and perfusion parameter calculation. Results: Our results show the feasibility of estimating perfusion parameters from a set of images reconstructed from slow scans, sparse data sets, and arc-length scans as short as 60 degrees. The RoD framework significantly reduces noise and time-varying artifacts from inconsistent projections. Proper regularization and the use of overlapping reconstructed arcs can potentially further decrease bias and increase temporal resolution, respectively. Conclusions: A digital brain perfusion simulation with the RoD imaging approach has been developed and supports the feasibility of using a CBCT head scanner for perfusion imaging. Future work will include testing with data acquired using a 3D-printed perfusion phantom currently in development, and translation to preclinical and clinical studies.

  12. Sensitivity of Induced Seismic Sequences to Rate-and-State Frictional Processes

    NASA Astrophysics Data System (ADS)

    Kroll, Kayla A.; Richards-Dinger, Keith B.; Dieterich, James H.

    2017-12-01

    It is well established that subsurface injection of fluids increases pore fluid pressures that may lead to shear failure along a preexisting fault surface. Concern among oil and gas, geothermal, and carbon storage operators has risen dramatically over the past decade due to the increase in the number and magnitude of induced earthquakes. Efforts to mitigate the risk associated with injection-induced earthquakes include modeling of the interaction between fluids and earthquake faults. Here we investigate this relationship with simulations that couple a geomechanical reservoir model and RSQSim, a physics-based earthquake simulator. RSQSim employs rate- and state-dependent friction (RSF) that enables the investigation of the time-dependent nature of earthquake sequences. We explore the effect of two RSF parameters and normal stress on the spatiotemporal characteristics of injection-induced seismicity. We perform >200 simulations to systematically investigate the effect of these model components on the evolution of induced seismicity sequences and compare the spatiotemporal characteristics of our synthetic catalogs to observations of induced earthquakes. We find that the RSF parameters control the ability of seismicity to migrate away from the injection well, the total number and maximum magnitude of induced events. Additionally, the RSF parameters control the occurrence/absence of premonitory events. Lastly, we find that earthquake stress drops can be modulated by the normal stress and/or the RSF parameters. Insight gained from this study can aid in further development of models that address best practice protocols for injection operations, site-specific models of injection-induced earthquakes, and probabilistic hazard and risk assessments.

  13. Simulating Aerosol Optical Properties With the Aerosol Simulation Program (ASP): Closure Studies Using ARCTAS Data

    NASA Astrophysics Data System (ADS)

    Alvarado, M. J.; Macintyre, H. L.; Bian, H.; Chin, M.; Wang, C.

    2012-12-01

    The scattering and absorption of ultraviolet and visible radiation by aerosols can significantly alter actinic fluxes and photolysis rates. Accurate modeling of aerosol optical properties is thus essential to simulating atmospheric chemistry, air quality, and climate. Here we evaluate the aerosol optical property predictions of the Aerosol Simulation Program (ASP) with in situ data on aerosol scattering and absorption gathered during the Arctic Research of the Composition of the Troposphere from Aircraft and Satellites (ARCTAS) campaign. The model simulations are initialized with in situ data on the aerosol size distribution and composition. We perform a set of sensitivity studies (e.g., internal vs. external mixture, core-in-shell versus Maxwell-Garnett, fraction of the organic carbon mass that is light-absorbing "brown carbon," etc.) to determine the model framework and parameters most consistent with the observations. We compare the ASP results to the aerosol optical property lookup tables in FAST-JX and suggest improvements that will better enable FAST-JX to simulate the impact of aerosols on photolysis rates and atmospheric chemistry.

  14. Computer Aided Battery Engineering Consortium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pesaran, Ahmad

    A multi-national lab collaborative team was assembled that includes experts from academia and industry to enhance recently developed Computer-Aided Battery Engineering for Electric Drive Vehicles (CAEBAT)-II battery crush modeling tools and to develop microstructure models for electrode design - both computationally efficient. Task 1. The new Multi-Scale Multi-Domain model framework (GH-MSMD) provides 100x to 1,000x computation speed-up in battery electrochemical/thermal simulation while retaining modularity of particles and electrode-, cell-, and pack-level domains. The increased speed enables direct use of the full model in parameter identification. Task 2. Mechanical-electrochemical-thermal (MECT) models for mechanical abuse simulation were simultaneously coupled, enabling simultaneous modeling of electrochemical reactions during the short circuit, when necessary. The interactions between mechanical failure and battery cell performance were studied, and the flexibility of the model for various battery structures and loading conditions was improved. Model validation is ongoing to compare with test data from Sandia National Laboratories. The ABDT tool was established in ANSYS. Task 3. Microstructural modeling was conducted to enhance next-generation electrode designs. This 3-year project will validate models for a variety of electrodes, complementing Advanced Battery Research programs. Prototype tools have been developed for electrochemical simulation and geometric reconstruction.

  15. Integrated Process Modeling-A Process Validation Life Cycle Companion.

    PubMed

    Zahel, Thomas; Hauer, Stefan; Mueller, Eric M; Murphy, Patrick; Abad, Sandra; Vasilieva, Elena; Maurer, Daniel; Brocard, Cécile; Reinisch, Daniela; Sagmeister, Patrick; Herwig, Christoph

    2017-10-17

    During the regulatory requested process validation of pharmaceutical manufacturing processes, companies aim to identify, control, and continuously monitor process variation and its impact on critical quality attributes (CQAs) of the final product. It is difficult to directly connect the impact of single process parameters (PPs) to final product CQAs, especially in biopharmaceutical process development and production, where multiple unit operations are stacked together and interact with each other. Therefore, we present the application of Monte Carlo (MC) simulation using an integrated process model (IPM) that enables estimation of process capability even in early stages of process validation. Once the IPM is established, its capability in risk and criticality assessment is furthermore demonstrated. IPMs can be used to enable holistic production control strategies that take interactions of process parameters of multiple unit operations into account. Moreover, IPMs can be trained with development data, refined with qualification runs, and maintained with routine manufacturing data, which underlines the lifecycle concept. These applications will be shown by means of a process characterization study recently conducted at a world-leading contract manufacturing organization (CMO). The new IPM methodology therefore allows anticipation of out-of-specification (OOS) events, identification of critical process parameters, and risk-based decisions on counteractions that increase process robustness and decrease the likelihood of OOS events.
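
    The core Monte Carlo idea can be sketched as follows: sample process parameters from their distributions, propagate them through stacked unit-operation models, and count how often the final CQA falls out of specification. The unit-operation response functions, parameter distributions, and specification limit below are invented placeholders, not the study's model.

        import numpy as np

        rng = np.random.default_rng(42)
        n = 100_000

        # Sampled process parameters (distributions are illustrative assumptions)
        ph = rng.normal(7.0, 0.1, n)        # fermentation pH
        load = rng.normal(20.0, 1.5, n)     # chromatography column load, g/L

        # Stacked unit operations: the output of one feeds the next
        titer = 5.0 - 2.0 * (ph - 7.0) ** 2         # upstream response (placeholder)
        purity = 95.0 + 0.8 * titer - 0.3 * load    # capture step response (placeholder)

        spec_lower = 96.0                           # CQA specification limit (assumed)
        p_oos = np.mean(purity < spec_lower)
        print(f"Estimated OOS probability: {p_oos:.3%}")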

  16. Design by Dragging: An Interface for Creative Forward and Inverse Design with Simulation Ensembles

    PubMed Central

    Coffey, Dane; Lin, Chi-Lun; Erdman, Arthur G.; Keefe, Daniel F.

    2014-01-01

    We present an interface for exploring large design spaces as encountered in simulation-based engineering, design of visual effects, and other tasks that require tuning parameters of computationally-intensive simulations and visually evaluating results. The goal is to enable a style of design with simulations that feels as-direct-as-possible so users can concentrate on creative design tasks. The approach integrates forward design via direct manipulation of simulation inputs (e.g., geometric properties, applied forces) in the same visual space with inverse design via “tugging” and reshaping simulation outputs (e.g., scalar fields from finite element analysis (FEA) or computational fluid dynamics (CFD)). The interface includes algorithms for interpreting the intent of users’ drag operations relative to parameterized models, morphing arbitrary scalar fields output from FEA and CFD simulations, and in-place interactive ensemble visualization. The inverse design strategy can be extended to use multi-touch input in combination with an as-rigid-as-possible shape manipulation to support rich visual queries. The potential of this new design approach is confirmed via two applications: medical device engineering of a vacuum-assisted biopsy device and visual effects design using a physically based flame simulation. PMID:24051845

  17. Parameter optimization of a hydrologic model in a snow-dominated basin using a modular Python framework

    NASA Astrophysics Data System (ADS)

    Volk, J. M.; Turner, M. A.; Huntington, J. L.; Gardner, M.; Tyler, S.; Sheneman, L.

    2016-12-01

    Many distributed models that simulate watershed hydrologic processes require a collection of multi-dimensional parameters as input, some of which need to be calibrated before the model can be applied. The Precipitation Runoff Modeling System (PRMS) is a physically-based and spatially distributed hydrologic model that contains a considerable number of parameters that often need to be calibrated. Modelers can also benefit from uncertainty analysis of these parameters. To meet these needs, we developed a modular framework in Python to conduct PRMS parameter optimization, uncertainty analysis, interactive visual inspection of parameters and outputs, and other common modeling tasks. Here we present results for multi-step calibration of sensitive parameters controlling solar radiation, potential evapotranspiration, and streamflow in a PRMS model that we applied to the snow-dominated Dry Creek watershed in Idaho. We also demonstrate how our modular approach enables the user to use a variety of parameter optimization and uncertainty methods or easily define their own, such as Monte Carlo random sampling, uniform sampling, or even optimization methods such as the downhill simplex method or its commonly used, more robust counterpart, shuffled complex evolution.
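
    A compressed sketch of such a Monte Carlo calibration loop; run_prms is a hypothetical stand-in for an actual PRMS execution, and the parameter names, ranges, and synthetic observations are assumptions:

        import numpy as np

        rng = np.random.default_rng(7)

        param_ranges = {"jh_coef": (0.005, 0.06),   # PET coefficient (assumed range)
                        "rad_trncf": (0.1, 0.9)}    # solar radiation transmission

        def nse(simulated, observed):
            """Nash-Sutcliffe efficiency: 1 indicates a perfect fit."""
            observed = np.asarray(observed)
            return 1.0 - np.sum((simulated - observed) ** 2) / \
                         np.sum((observed - observed.mean()) ** 2)

        def run_prms(params):
            """Placeholder for a real PRMS run returning daily streamflow."""
            t = np.arange(365)
            return params["rad_trncf"] * 10 * np.exp(-params["jh_coef"] * t)

        observed = run_prms({"jh_coef": 0.02, "rad_trncf": 0.5})  # synthetic "truth"

        best = max(
            ({k: rng.uniform(*bounds) for k, bounds in param_ranges.items()}
             for _ in range(2000)),
            key=lambda p: nse(run_prms(p), observed),
        )
        print("best parameter set:", best)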

  18. Remote sensing of tropospheric turbulence using GPS radio occultation

    NASA Astrophysics Data System (ADS)

    Shume, Esayas; Ao, Chi

    2016-07-01

    Radio occultation (RO) measurements are sensitive to small-scale irregularities in the atmosphere. In this study, we present a new technique to estimate tropospheric turbulence strength (namely, a scintillation index) by analyzing RO amplitude fluctuations in the impact parameter domain. GPS RO observations from the COSMIC (Constellation Observing System for Meteorology, Ionosphere, and Climate) satellites enabled us to calculate global maps of scintillation measures, revealing the seasonal, latitudinal, and longitudinal characteristics of the turbulent troposphere. Such information is both difficult and expensive to obtain, especially over the oceans. To verify our approach, simulation experiments using the multiple phase screen (MPS) method were conducted. The results show that scintillation indices inferred from the MPS simulations are in good agreement with scintillation measures estimated from COSMIC observations.
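
    A scintillation index of the S4 type is straightforward to compute from amplitude samples, as sketched below; the paper's estimator operates on amplitude fluctuations in the impact parameter domain and may differ in detail.

        import numpy as np

        def s4_index(amplitude):
            """S4 = sqrt((<I^2> - <I>^2) / <I>^2), with intensity I = amplitude^2."""
            intensity = np.asarray(amplitude, dtype=float) ** 2
            return np.sqrt(intensity.var() / intensity.mean() ** 2)

        rng = np.random.default_rng(3)
        amp = 1.0 + 0.05 * rng.standard_normal(2048)  # weakly fluctuating amplitude
        print(f"S4 = {s4_index(amp):.3f}")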

  19. Level-set simulations of soluble surfactant driven flows

    NASA Astrophysics Data System (ADS)

    Cleret de Langavant, Charles; Guittet, Arthur; Theillard, Maxime; Temprano-Coleto, Fernando; Gibou, Frédéric

    2017-11-01

    We present an approach to simulate the diffusion, advection and adsorption-desorption of a material quantity defined on an interface in two and three spatial dimensions. We use a level-set approach to capture the interface motion and a Quad/Octree data structure to efficiently solve the equations describing the underlying physics. Coupling with a Navier-Stokes solver enables the study of the effect of soluble surfactants that locally modify the parameters of surface tension on different types of flows. The method is tested on several benchmarks and applied to three typical examples of flows in the presence of surfactant: a bubble in a shear flow, the well-known phenomenon of tears of wine, and the Landau-Levich coating problem.

  20. Demixing-stimulated lane formation in binary complex plasma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Du, C.-R.; Jiang, K.; Suetterlin, K. R.

    2011-11-29

    Recently lane formation and phase separation have been reported for experiments with binary complex plasmas in the PK3-Plus laboratory onboard the International Space Station (ISS). Positive non-additivity of particle interactions is known to stimulate phase separation (demixing), but its effect on lane formation is unknown. In this work, we used Langevin dynamics (LD) simulation to probe the role of non-additive interactions in lane formation. The competition between laning and demixing leads to thicker lanes. Analysis based on anisotropic scaling indices reveals a crossover from a normal laning mode to a demixing-stimulated laning mode. Extensive numerical simulations enabled us to identify a critical value of the non-additivity parameter Δ for the crossover.

  1. Development of a Higher Fidelity Model for the Cascade Distillation Subsystem (CDS)

    NASA Technical Reports Server (NTRS)

    Perry, Bruce; Anderson, Molly

    2014-01-01

    Significant improvements have been made to the ACM model of the CDS, enabling accurate predictions of dynamic operations with fewer assumptions. The model has been utilized to predict how CDS performance would be impacted by changing operating parameters, revealing performance trade-offs and possibilities for improvement. CDS efficiency is driven by the THP coefficient of performance, which in turn is dependent on heat transfer within the system. Based on the remaining limitations of the simulation, priorities for further model development include:
    - Relaxing the assumption of total condensation
    - Incorporating dynamic simulation capability for the buildup of dissolved inert gases in condensers
    - Examining CDS operation with more complex feeds
    - Extending heat transfer analysis to all surfaces

  2. Modelling of TES X-ray Microcalorimeters with a Novel Absorber Design

    NASA Technical Reports Server (NTRS)

    Iyomoto, Naoko; Bandler, Simon; Brefosky, Regis; Brown, Ari; Chervenak, James; Figueroa-Feliciano, Enectali; Finkbeiner, Frederick; Kelley, Richard; Kilbourne, Caroline; Lindeman, Mark

    2007-01-01

    Our development of a novel x-ray absorber design that has enabled the incorporation of high-conductivity electroplated gold into our absorbers has yielded devices that not only have achieved breakthrough performance at 6 keV, but also are extraordinarily well modelled. We have determined device parameters that reproduce complex impedance curves and noise spectra throughout transition. Observed pulse heights, decay time and baseline energy resolution were in good agreement with simulated results using the same parameters. In the presentation, we will show these results in detail and we will also show highlights of the characterization of our gold/bismuth-absorber devices. We will discuss possible improvement of our current devices and expected performance of future devices using the modelling results.

  3. Numerical simulation of the geodynamo reaches Earth's core dynamical regime

    NASA Astrophysics Data System (ADS)

    Aubert, J.; Gastine, T.; Fournier, A.

    2016-12-01

    Numerical simulations of the geodynamo have been successful at reproducing a number of static (field morphology) and kinematic (secular variation patterns, core surface flows and westward drift) features of Earth's magnetic field, making them a tool of choice for the analysis and retrieval of geophysical information on Earth's core. However, classical numerical models have been run in a parameter regime far from that of the real system, prompting the question of whether we do get "the right answers for the wrong reasons", i.e., whether the agreement between models and nature simply occurs by chance and without physical relevance in the dynamics. In this presentation, we show that classical models succeed in describing the geodynamo because their large-scale spatial structure is essentially invariant as one progresses along a well-chosen path in parameter space to Earth's core conditions. This path is constrained by the need to enforce the relevant force balance (MAC or Magneto-Archimedes-Coriolis) and preserve the ratio of the convective overturn and magnetic diffusion times. Numerical simulations performed along this path are shown to be spatially invariant at scales larger than that where the magnetic energy is ohmically dissipated. This property enables the definition of large-eddy simulations that show good agreement with direct numerical simulations in the range where both are feasible, and that can be computed at unprecedented values of the control parameters, such as an Ekman number E = 10⁻⁸. Combining direct and large-eddy simulations, large-scale invariance is observed over half the logarithmic distance in parameter space between classical models and Earth. The conditions reached at this mid-point of the path are furthermore shown to be representative of the rapidly-rotating, asymptotic dynamical regime in which Earth's core resides, with a MAC force balance undisturbed by viscosity or inertia, the enforcement of a Taylor state and strong-field dynamo action. We conclude that numerical modelling has advanced to a stage where it is possible to use models correctly representing the statics, kinematics and now the dynamics of the geodynamo. This opens the way to a better analysis of the geomagnetic field in the time and space domains.

  4. Cloud GPU-based simulations for SQUAREMR.

    PubMed

    Kantasis, George; Xanthis, Christos G; Haris, Kostas; Heiberg, Einar; Aletras, Anthony H

    2017-01-01

    Quantitative Magnetic Resonance Imaging (MRI) is a research tool, used more and more in clinical practice, as it provides objective information with respect to the tissues being imaged. Pixel-wise T1 quantification (T1 mapping) of the myocardium is one such application with diagnostic significance. A number of mapping sequences have been developed for myocardial T1 mapping with a wide range in terms of measurement accuracy and precision. Furthermore, measurement results obtained with these pulse sequences are affected by errors introduced by the particular acquisition parameters used. SQUAREMR is a new method which has the potential of improving the accuracy of these mapping sequences through the use of massively parallel simulations on Graphical Processing Units (GPUs) by taking into account different acquisition parameter sets. This method has been shown to be effective in myocardial T1 mapping; however, execution times may exceed 30 min, which is prohibitively long for clinical applications. The purpose of this study was to accelerate the construction of SQUAREMR's multi-parametric database to more clinically acceptable levels. The aim of this study was to develop a cloud-based cluster in order to distribute the computational load to several GPU-enabled nodes and accelerate SQUAREMR. This would accommodate high demands for computational resources without the need for major upfront equipment investment. Moreover, the parameter space explored by the simulations was optimized in order to reduce the computational load without compromising the T1 estimates compared to a non-optimized parameter space approach. A cloud-based cluster with 16 nodes resulted in a speedup of up to 13.5 times compared to a single-node execution. Finally, the optimized parameter set approach allowed for an execution time of 28 s using the 16-node cluster, without compromising the T1 estimates by more than 10 ms. The developed cloud-based cluster and optimization of the parameter set reduced the execution time of the simulations involved in constructing the SQUAREMR multi-parametric database, thus bringing SQUAREMR's applicability within time frames that would likely be acceptable in the clinic. Copyright © 2016 Elsevier Inc. All rights reserved.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    de Hatten, Xavier; Cournia, Zoe; Huc, Ivan

    The increasing importance of hydrogenase enzymes in the new energy research field has led us to examine the structure and dynamics of potential hydrogenase mimics, based on a ferrocene-peptide scaffold, using molecular dynamics (MD) simulations. To enable this MD study, a molecular mechanics force field for ferrocene-bearing peptides was developed and implemented in the CHARMM simulation package, thus extending the usefulness of the package into peptide-bioorganometallic chemistry. Using the automated frequency-matching method (AFMM), optimized intramolecular force-field parameters were generated through quantum chemical reference normal modes. The partial charges for ferrocene were derived by fitting point charges to quantum-chemically computed electrostatic potentials. The force field was tested against experimental X-ray crystal structures of dipeptide derivatives of ferrocene-1,1'-dicarboxylic acid. The calculations reproduce accurately the molecular geometries, including the characteristic C2-symmetrical intramolecular hydrogen-bonding pattern, which were stable over 0.1 μs MD simulations. The crystal packing properties of ferrocene-1-(D)alanine-(D)proline-1'-(D)alanine-(D)proline were also accurately reproduced. The lattice parameters of this crystal were conserved during a 0.1 μs MD simulation and match the experimental values almost exactly. Simulations of the peptides in dichloromethane are also in good agreement with experimental NMR and circular dichroism (CD) data in solution. The developed force field was used to perform MD simulations on novel, as yet unsynthesized peptide fragments that surround the active site of [Ni-Fe] hydrogenase. The results of this simulation lead us to propose an improved design for synthetic peptide-based hydrogenase models. The presented MD simulation results of metallocenes thereby provide a convincing validation of our proposal to use ferrocene-peptides as minimal enzyme mimics.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    De Hatten, Xavier; Cournia, Zoe; Smith, Jeremy C

    The increasing importance of hydrogenase enzymes in the new energy research field has led us to examine the structure and dynamics of potential hydrogenase mimics, based on a ferrocene-peptide scaffold, using molecular dynamics (MD) simulations. To enable this MD study, a molecular mechanics force field for ferrocene-bearing peptides was developed and implemented in the CHARMM simulation package, thus extending the usefulness of the package into peptide-bioorganometallic chemistry. Using the automated frequency-matching method (AFMM), optimized intramolecular force-field parameters were generated through quantum chemical reference normal modes. The partial charges for ferrocene were derived by fitting point charges to quantum-chemically computed electrostatic potentials. The force field was tested against experimental X-ray crystal structures of dipeptide derivatives of ferrocene-1,1'-dicarboxylic acid. The calculations reproduce accurately the molecular geometries, including the characteristic C2-symmetrical intramolecular hydrogen-bonding pattern, which were stable over 0.1 μs MD simulations. The crystal packing properties of ferrocene-1-(D)alanine-(D)proline-1'-(D)alanine-(D)proline were also accurately reproduced. The lattice parameters of this crystal were conserved during a 0.1 μs MD simulation and match the experimental values almost exactly. Simulations of the peptides in dichloromethane are also in good agreement with experimental NMR and circular dichroism (CD) data in solution. The developed force field was used to perform MD simulations on novel, as yet unsynthesized peptide fragments that surround the active site of [Ni-Fe] hydrogenase. The results of this simulation lead us to propose an improved design for synthetic peptide-based hydrogenase models. The presented MD simulation results of metallocenes thereby provide a convincing validation of our proposal to use ferrocene-peptides as minimal enzyme mimics.

  7. Tight-binding model for materials at mesoscale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tai, Yuan-Yen; Choi, Hongchul; Zhu, Wei

    2016-12-21

    TBM3 is an open source package for computational simulations of quantum materials at multiple scales in length and time. The project originated to investigate the multiferroic behavior in transition-metal oxide heterostructures. The framework has also been designed to study emergent phenomena in other quantum materials such as 2-dimensional transition-metal dichalcogenides, graphene, topological insulators, and skyrmions in materials. In the long term, we will enable the package for transport and time-resolved phenomena. TBM3 is currently a C++ based numerical tool package and framework for the design and construction of any kind of lattice structure with multi-orbital and spin degrees of freedom. A Fortran-based portion of the package will be added in the near future. The design of TBM3 is a highly flexible and reusable framework, and the tight-binding parameters can be modeled or informed by DFT calculations. It is currently GPU-enabled, and CPU-enabled MPI support will be added in the future.

  8. Single crystal to polycrystal neutron transmission simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dessieux, Luc Lucius; Stoica, Alexandru Dan; Bingham, Philip R.

    A collection of routines for calculation of the total cross section that determines the attenuation of neutrons by crystalline solids is presented. The total cross section is calculated semi-empirically as a function of crystal structure, neutron energy, temperature, and crystal orientation. The semi-empirical formula includes the contribution of parasitic Bragg scattering to the total cross section using both the crystal's mosaic spread value and its orientation with respect to the neutron beam direction as parameters. These routines allow users to enter a distribution of crystal orientations for calculation of total cross sections of user-defined powder or pseudo-powder distributions, which enables simulation of non-uniformities such as texture and strain. In conclusion, the spectra for neutron transmission simulations in the neutron thermal energy range (2 meV–100 meV) are presented for single crystal and polycrystal samples and compared to measurements.

  9. An evaluation of noise reduction algorithms for particle-based fluid simulations in multi-scale applications

    NASA Astrophysics Data System (ADS)

    Zimoń, M. J.; Prosser, R.; Emerson, D. R.; Borg, M. K.; Bray, D. J.; Grinberg, L.; Reese, J. M.

    2016-11-01

    Filtering of particle-based simulation data can lead to reduced computational costs and enable more efficient information transfer in multi-scale modelling. This paper compares the effectiveness of various signal processing methods to reduce numerical noise and capture the structures of nano-flow systems. In addition, a novel combination of these algorithms is introduced, showing the potential of hybrid strategies to improve further the de-noising performance for time-dependent measurements. The methods were tested on velocity and density fields, obtained from simulations performed with molecular dynamics and dissipative particle dynamics. Comparisons between the algorithms are given in terms of performance, quality of the results and sensitivity to the choice of input parameters. The results provide useful insights on strategies for the analysis of particle-based data and the reduction of computational costs in obtaining ensemble solutions.

  10. Chromatography modelling to describe protein adsorption at bead level.

    PubMed

    Gerontas, Spyridon; Shapiro, Michael S; Bracewell, Daniel G

    2013-04-05

    Chromatographic modelling can be used to describe and further understand the behaviour of biological species during their chromatography separation on adsorption resins. Current modelling approaches assume uniform rate parameters throughout the column. Software and hardware advances now allow us to consider what can be learnt from modelling at bead level, enabling simulation of heterogeneity in bead and packed bed structure due to design or due to changes during operation. In this paper, a model has been developed to simulate at bead level protein loading in 1.5 μl microfluidic columns. This model takes into account the heterogeneity in bead sizes and the spatial variations of the characteristics of a packed bed, such as bed void fraction and dispersion, thus offering a detailed description of the flow field and mass transfer phenomena. Simulations were shown to be in good agreement with published experimental data. Copyright © 2013 Elsevier B.V. All rights reserved.

  11. A simulation model to estimate the cost and effectiveness of alternative dialysis initiation strategies.

    PubMed

    Lee, Chris P; Chertow, Glenn M; Zenios, Stefanos A

    2006-01-01

    Patients with end-stage renal disease (ESRD) require dialysis to maintain survival. The optimal timing of dialysis initiation in terms of cost-effectiveness has not been established. We developed a simulation model of individuals progressing towards ESRD and requiring dialysis. It can be used to analyze dialysis strategies and scenarios, and it was embedded in an optimization framework to derive improved strategies. Actual (historical) and simulated survival curves and hospitalization rates were virtually indistinguishable. The model overestimated transplantation costs (by 10%), but this was related to confounding by Medicare coverage. To assess the model's robustness, we examined several dialysis strategies while input parameters were perturbed. Under all 38 scenarios, relative rankings remained unchanged. An improved policy for a hypothetical patient was derived using an optimization algorithm. The model produces reliable results and is robust. It enables the cost-effectiveness analysis of dialysis strategies.

  12. The analysis of thermal comfort requirements through the simulation of an occupied building.

    PubMed

    Thellier, F; Cordier, A; Monchoux, F

    1994-05-01

    Building simulation usually focuses on the study of physical indoor parameters, but we must not forget the main aim of a house: to provide comfort to the occupants. This study was undertaken in order to build a complete tool for modelling thermal behaviour that enables the prediction of the thermal sensations of humans in a real environment. A human thermoregulation model was added to TRNSYS, a building simulation program. For our purposes, improvements had to be made to the original physiological model by refining the calculation of all heat exchanges with the environment and adding a representation of clothing. This paper briefly describes the program and its modifications, and compares its results with experimental ones. An example of potential use is given, which points out the usefulness of such models in seeking the best solutions to reach optimal environmental conditions for the global, and especially local, comfort of building occupants.

  13. Single crystal to polycrystal neutron transmission simulation

    DOE PAGES

    Dessieux, Luc Lucius; Stoica, Alexandru Dan; Bingham, Philip R.

    2018-02-02

    A collection of routines for calculation of the total cross section that determines the attenuation of neutrons by crystalline solids is presented. The total cross section is calculated semi-empirically as a function of crystal structure, neutron energy, temperature, and crystal orientation. The semi-empirical formula includes the contribution of parasitic Bragg scattering to the total cross section using both the crystal's mosaic spread value and its orientation with respect to the neutron beam direction as parameters. These routines allow users to enter a distribution of crystal orientations for calculation of total cross sections of user-defined powder or pseudo-powder distributions, which enables simulation of non-uniformities such as texture and strain. In conclusion, the spectra for neutron transmission simulations in the neutron thermal energy range (2 meV–100 meV) are presented for single crystal and polycrystal samples and compared to measurements.

  14. Particle-in-Cell Modeling of Magnetron Sputtering Devices

    NASA Astrophysics Data System (ADS)

    Cary, John R.; Jenkins, T. G.; Crossette, N.; Stoltz, Peter H.; McGugan, J. M.

    2017-10-01

    In magnetron sputtering devices, ions arising from the interaction of magnetically trapped electrons with neutral background gas are accelerated via a negative voltage bias to strike a target cathode. Neutral atoms ejected from the target by such collisions then condense on neighboring material surfaces to form a thin coating of target material; a variety of industrial applications which require thin surface coatings are enabled by this plasma vapor deposition technique. In this poster we discuss efforts to simulate various magnetron sputtering devices using the Vorpal PIC code in 2D axisymmetric cylindrical geometry. Field solves are fully self-consistent, and discrete models for sputtering, secondary electron emission, and Monte Carlo collisions are included in the simulations. In addition, the simulated device can be coupled to an external feedback circuit. Erosion/deposition profiles and steady-state plasma parameters are obtained, and modifications due to self-consistency are seen. Computational performance issues are also discussed.

  15. Decoding the non-stationary neuron spike trains by dual Monte Carlo point process estimation in motor Brain Machine Interfaces.

    PubMed

    Liao, Yuxi; Li, Hongbao; Zhang, Qiaosheng; Fan, Gong; Wang, Yiwen; Zheng, Xiaoxiang

    2014-01-01

    Decoding algorithms in motor Brain Machine Interfaces translate neural signals into movement parameters. They usually assume the connection between neural firings and movements to be stationary, which is not true according to recent studies that observe time-varying neuron tuning properties. This property results from neural plasticity, motor learning, etc., and leads to degradation of decoding performance when the model is fixed. To track the non-stationary neuron tuning during decoding, we propose a dual-model approach based on a Monte Carlo point process filtering method that also enables estimation of the dynamic tuning parameters. When applied to both simulated neural signals and in vivo BMI data, the proposed adaptive method performs better than the one with static tuning parameters, which suggests a promising way to design a long-term-performing model for Brain Machine Interface decoders.
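
    A toy sketch of the dual-estimation idea (not the paper's algorithm): a particle filter jointly tracks a kinematic state x and a drifting tuning gain g from Poisson spike counts with rate exp(b + g·x). All model forms and constants below are invented.

        import numpy as np

        rng = np.random.default_rng(0)
        n_particles, T, b = 2000, 200, 0.5

        # Ground truth: random-walk state and a slowly drifting tuning gain
        x_true = np.cumsum(0.1 * rng.standard_normal(T))
        g_true = 1.0 + 0.005 * np.arange(T)
        spikes = rng.poisson(np.exp(b + g_true * x_true))

        x = rng.standard_normal(n_particles)               # state particles
        g = 1.0 + 0.1 * rng.standard_normal(n_particles)   # parameter particles
        for t in range(T):
            x += 0.1 * rng.standard_normal(n_particles)    # state transition
            g += 0.01 * rng.standard_normal(n_particles)   # parameter random walk
            log_rate = b + g * x
            logw = spikes[t] * log_rate - np.exp(log_rate) # Poisson log-likelihood
            w = np.exp(logw - logw.max())
            w /= w.sum()
            idx = rng.choice(n_particles, size=n_particles, p=w)  # resample
            x, g = x[idx], g[idx]

        print(f"final gain estimate: {g.mean():.2f} (true {g_true[-1]:.2f})")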

  16. Market-Based Coordination of Thermostatically Controlled Loads—Part I: A Mechanism Design Formulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Sen; Zhang, Wei; Lian, Jianming

    This paper focuses on the coordination of a population of Thermostatically Controlled Loads (TCLs) with unknown parameters to achieve group objectives. The problem involves designing the bidding and market clearing strategy to motivate self-interested users to realize efficient energy allocation subject to a peak power constraint. Using the mechanism design approach, we propose a market-based coordination framework, which can effectively incorporate heterogeneous load dynamics, systematically deal with user preferences, account for the unknown load model parameters, and enable real-world implementation with limited communication resources. This paper is divided into two parts. Part I presents a mathematical formulation of the problem and develops a coordination framework using the mechanism design approach. Part II presents a learning scheme to account for the unknown load model parameters and evaluates the proposed framework through realistic simulations.

  17. A Software Toolkit to Study Systematic Uncertainties of the Physics Models of the Geant4 Simulation Package

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Genser, Krzysztof; Hatcher, Robert; Perdue, Gabriel

    2016-11-10

    The Geant4 toolkit is used to model interactions between particles and matter. Geant4 employs a set of validated physics models that span a wide range of interaction energies. These models are tuned to cover a large variety of possible applications. This raises the critical question of what uncertainties are associated with the Geant4 physics model, or group of models, involved in a simulation project. To address the challenge, we have designed and implemented a comprehensive, modular, user-friendly software toolkit that allows the variation of one or more parameters of one or more Geant4 physics models involved in simulation studies. It also enables analysis of multiple variants of the resulting physics observables of interest in order to estimate the uncertainties associated with the simulation model choices. Key functionalities of the toolkit are presented in this paper and are illustrated with selected results.

  18. Simulating Global AeroMACS Airport Ground Station Antenna Power Transmission Limits to Avoid Interference With Mobile Satellite Service Feeder Uplinks

    NASA Technical Reports Server (NTRS)

    Wilson, Jeffrey D.

    2013-01-01

    The Aeronautical Mobile Airport Communications System (AeroMACS), which is based upon the IEEE 802.16e mobile wireless standard, is expected to be implemented in the 5091 to 5150 MHz frequency band. As this band is also occupied by Mobile Satellite Service feeder uplinks, AeroMACS must be designed to avoid interference with this incumbent service. The aspects of AeroMACS operation that present potential interference are under analysis in order to enable the definition of standards that assure that such interference will be avoided. In this study, the cumulative interference power distribution at low Earth orbit from transmitters at global airports was simulated with the Visualyse Professional software. The dependence of the interference power on antenna distribution, gain patterns, duty cycle, and antenna tilt was simulated. As a function of these parameters, the simulation results are presented in terms of the limitations on transmitter power from global airports required to maintain the cumulative interference power under the established threshold.
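
    A much-simplified sketch of this kind of aggregation (free-space path loss only; the station count, EIRP, duty cycle, and slant ranges are invented, and the study itself used the Visualyse Professional tool):

        import numpy as np

        def fspl_db(distance_m, freq_hz):
            """Free-space path loss in dB."""
            c = 299_792_458.0
            return 20 * np.log10(4 * np.pi * distance_m * freq_hz / c)

        rng = np.random.default_rng(5)
        n_airports = 500
        eirp_dbw = 0.0       # per-station EIRP toward space (assumed)
        duty_cycle = 0.5     # fraction of time each station transmits (assumed)
        slant_km = rng.uniform(1400, 2500, n_airports)  # slant ranges to LEO (illustrative)

        p_rx_w = duty_cycle * 10 ** ((eirp_dbw - fspl_db(slant_km * 1e3, 5.1e9)) / 10)
        total_dbw = 10 * np.log10(p_rx_w.sum())
        print(f"aggregate interference: {total_dbw:.1f} dBW")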

  19. Performance analysis and evaluation of direct phase measuring deflectometry

    NASA Astrophysics Data System (ADS)

    Zhao, Ping; Gao, Nan; Zhang, Zonghua; Gao, Feng; Jiang, Xiangqian

    2018-04-01

    Three-dimensional (3D) shape measurement of specular objects plays an important role in intelligent manufacturing applications. Phase measuring deflectometry (PMD)-based methods are widely used to obtain the 3D shapes of specular surfaces because they offer the advantages of a large dynamic range, high measurement accuracy, full-field and noncontact operation, and automatic data processing. To enable measurement of specular objects with discontinuous and/or isolated surfaces, a direct PMD (DPMD) method has been developed to build a direct relationship between phase and depth. In this paper, a new virtual measurement system is presented and is used to optimize the system parameters and evaluate the system's performance in DPMD applications. Four system parameters are analyzed to obtain accurate measurement results. Experiments are performed using simulated and actual data and the results confirm the effects of these four parameters on the measurement results. Researchers can therefore select suitable system parameters for actual DPMD (including PMD) measurement systems to obtain the 3D shapes of specular objects with high accuracy.

  20. PSAMM: A Portable System for the Analysis of Metabolic Models

    PubMed Central

    Steffensen, Jon Lund; Dufault-Thompson, Keith; Zhang, Ying

    2016-01-01

    The genome-scale models of metabolic networks have been broadly applied in phenotype prediction, evolutionary reconstruction, community functional analysis, and metabolic engineering. Despite the development of tools that support individual steps along the modeling procedure, it is still difficult to associate mathematical simulation results with the annotation and biological interpretation of metabolic models. In order to solve this problem, here we developed a Portable System for the Analysis of Metabolic Models (PSAMM), a new open-source software package that supports the integration of heterogeneous metadata in model annotations and provides a user-friendly interface for the analysis of metabolic models. PSAMM is independent of paid software environments like MATLAB, and all its dependencies are freely available for academic users. Compared to existing tools, PSAMM significantly reduced the running time of constraint-based analysis and enabled flexible settings of simulation parameters using simple one-line commands. The integration of heterogeneous, model-specific annotation information in PSAMM is achieved with a novel format of YAML-based model representation, which has several advantages, such as providing a modular organization of model components and simulation settings, enabling model version tracking, and permitting the integration of multiple simulation problems. PSAMM also includes a number of quality checking procedures to examine stoichiometric balance and to identify blocked reactions. Applying PSAMM to 57 models collected from current literature, we demonstrated how the software can be used for managing and simulating metabolic models. We identified a number of common inconsistencies in existing models and constructed an updated model repository to document the resolution of these inconsistencies. PMID:26828591
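
    For orientation, the constraint-based analysis that PSAMM (like other such tools) performs reduces to a linear program: maximize an objective flux subject to steady-state mass balance S·v = 0 and flux bounds. A generic sketch on a toy network (not PSAMM's implementation):

        import numpy as np
        from scipy.optimize import linprog

        # Toy network A_ext -> A -> B -> biomass; columns are reactions v1..v3
        S = np.array([
            [1, -1,  0],   # metabolite A
            [0,  1, -1],   # metabolite B
        ])
        bounds = [(0, 10)] * 3          # flux bounds per reaction
        c = np.array([0, 0, -1.0])      # maximize v3 (linprog minimizes)

        res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
        print("optimal biomass flux:", -res.fun, "fluxes:", res.x)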

  1. A cost effective and high fidelity fluoroscopy simulator using the Image-Guided Surgery Toolkit (IGSTK)

    NASA Astrophysics Data System (ADS)

    Gong, Ren Hui; Jenkins, Brad; Sze, Raymond W.; Yaniv, Ziv

    2014-03-01

    The skills required for obtaining informative x-ray fluoroscopy images are currently acquired while trainees provide clinical care. As a consequence, trainees and patients are exposed to higher doses of radiation. Use of simulation has the potential to reduce this radiation exposure by enabling trainees to improve their skills in a safe environment prior to treating patients. We describe a low cost, high fidelity, fluoroscopy simulation system. Our system enables operators to practice their skills using the clinical device and simulated x-rays of a virtual patient. The patient is represented using a set of temporal Computed Tomography (CT) images, corresponding to the underlying dynamic processes. Simulated x-ray images, digitally reconstructed radiographs (DRRs), are generated from the CTs using ray-casting with customizable machine specific imaging parameters. To establish the spatial relationship between the CT and the fluoroscopy device, the CT is virtually attached to a patient phantom and a web camera is used to track the phantom's pose. The camera is mounted on the fluoroscope's intensifier and the relationship between it and the x-ray source is obtained via calibration. To control image acquisition the operator moves the fluoroscope as in normal operation mode. Control of zoom, collimation and image save is done using a keypad mounted alongside the device's control panel. Implementation is based on the Image-Guided Surgery Toolkit (IGSTK), and the use of the graphics processing unit (GPU) for accelerated image generation. Our system was evaluated by 11 clinicians and was found to be sufficiently realistic for training purposes.
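
    At the heart of DRR generation is a Beer-Lambert line integral of attenuation through the CT volume. The NumPy sketch below computes an orthographic DRR by integrating along one axis; a simulator like the one described casts perspective rays from the calibrated source position and accelerates the computation on the GPU, neither of which this toy attempts.

      import numpy as np

      def simple_drr(ct_mu, axis=0, voxel_mm=1.0, i0=1.0):
          # Beer-Lambert: transmitted intensity = I0 * exp(-integral of mu ds).
          # Orthographic projection: sum attenuation (1/cm) along one axis.
          path_integral = ct_mu.sum(axis=axis) * (voxel_mm / 10.0)  # mm -> cm
          return i0 * np.exp(-path_integral)

      # Illustrative volume: soft tissue (mu ~ 0.2 /cm) with a denser insert
      vol = np.full((128, 128, 128), 0.2)
      vol[40:60, 50:80, 50:80] = 1.0
      drr_image = simple_drr(vol, axis=0)   # 128 x 128 simulated x-ray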

  2. An Application of the "Virtual Spacecraft" Concept in Evaluation of the Mars Pathfinder Lander Low Gain Antenna

    NASA Technical Reports Server (NTRS)

    Pogorzelski, R. J.; Beckon, R. J.

    1997-01-01

    The virtual spacecraft concept is embodied in a set of subsystems, either in the form of hardware or computational models, which together represent all, or a portion of, a spacecraft. For example, the telecommunications transponder may be a hardware prototype while the propulsion system may exist only as a simulation. As the various subsystems are realized in hardware, the spacecraft becomes progressively less virtual. This concept is enabled by JPL's Mission System Testbed which is a set of networked workstations running a message passing operating system called "TRAMEL" which stands for Task Remote Asynchronous Message Exchange Layer. Each simulation on the workstations, which may in fact be hardware controlled by the workstation, "publishes" its operating parameters on TRAMEL and other simulations requiring those parameters as input may "subscribe" to them. In this manner, the whole simulation operates as a single virtual system. This paper describes a simulation designed to evaluate a communications link between the earth and the Mars Pathfinder Lander module as it descends under a parachute through the Martian atmosphere toward the planet's surface. This link includes a transmitter and a low gain antenna on the spacecraft and a receiving antenna and receiver on the earth as well as a simulation of the dynamics of the spacecraft. The transmitter, the ground station antenna, the receiver and the dynamics are all simulated computationally while the spacecraft antenna is implemented in hardware on a very simple spacecraft mockup. The dynamics simulation is a record of one output of the ensemble of outputs of a Monte Carlo simulation of the descent. Additionally, the antenna/spacecraft mock-up system was simulated using APATCH, a shooting and bouncing ray code developed by Demaco, Inc. The antenna simulation, the antenna hardware, and the link simulation are all physically located in different facilities at JPL separated by several hundred meters and are linked via the local area network (LAN).
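
    The publish/subscribe pattern described for TRAMEL can be conveyed with a few lines of Python. This is an in-process analogue for intuition only: TRAMEL itself is a networked message-passing layer, and the topic names and callback interface below are invented for the example.

      from collections import defaultdict

      class Bus:
          # Minimal in-process publish/subscribe bus (illustrative analogue,
          # not TRAMEL's actual API).
          def __init__(self):
              self._subs = defaultdict(list)

          def subscribe(self, topic, callback):
              self._subs[topic].append(callback)

          def publish(self, topic, value):
              for cb in self._subs[topic]:
                  cb(value)

      bus = Bus()
      # The link simulation subscribes to attitude published by the dynamics model
      bus.subscribe("lander/attitude", lambda q: print("recompute antenna gain for", q))
      bus.publish("lander/attitude", (0.9, 0.1, 0.4, 0.1))   # hypothetical quaternion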

  3. Application of advanced sampling and analysis methods to predict the structure of adsorbed protein on a material surface

    PubMed Central

    Abramyan, Tigran M.; Hyde-Volpe, David L.; Stuart, Steven J.; Latour, Robert A.

    2017-01-01

    The use of standard molecular dynamics simulation methods to predict the interactions of a protein with a material surface has inherent limitations: it cannot determine the most likely conformations and orientations of the adsorbed protein on the surface, nor the level of convergence attained by the simulation. In addition, standard mixing rules are typically applied to combine the nonbonded force field parameters of the solution and solid phases of the system to represent interfacial behavior, without validation. As a means to circumvent these problems, the authors demonstrate the application of an efficient advanced sampling method (TIGER2A) for the simulation of the adsorption of hen egg-white lysozyme on the crystalline (110) plane of a high-density polyethylene surface. Simulations are conducted to generate a Boltzmann-weighted ensemble of sampled states using force field parameters that were validated to represent interfacial behavior for this system. The resulting ensembles of sampled states were then analyzed using an in-house-developed cluster analysis method to predict the most probable orientations and conformations of the protein on the surface based on the amount of sampling performed, from which free energy differences between the adsorbed states could be calculated. In addition, by conducting two independent sets of TIGER2A simulations combined with cluster analyses, the authors demonstrate a method to estimate the degree of convergence achieved for a given amount of sampling. The results demonstrate that these methods enable the most probable orientations and conformations of an adsorbed protein to be predicted, and that the use of our validated interfacial force field parameter set provides closer agreement to available experimental results than standard CHARMM force field parameterization of molecular behavior at the interface. PMID:28514864
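
    For context, the "standard mixing rules" the authors caution against are usually the Lorentz-Berthelot combination for Lennard-Jones parameters: an arithmetic mean for the size parameter and a geometric mean for the well depth. A minimal sketch with illustrative (not study-specific) values:

      import math

      def lorentz_berthelot(sigma_i, eps_i, sigma_j, eps_j):
          # Lorentz rule: arithmetic mean of sizes; Berthelot rule: geometric
          # mean of well depths.
          return 0.5 * (sigma_i + sigma_j), math.sqrt(eps_i * eps_j)

      # Illustrative cross-interaction (sigma in nm, epsilon in kJ/mol)
      print(lorentz_berthelot(0.3166, 0.650, 0.3500, 0.276))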

  4. Connecting the large- and the small-scale magnetic fields of solar-like stars

    NASA Astrophysics Data System (ADS)

    Lehmann, L. T.; Jardine, M. M.; Mackay, D. H.; Vidotto, A. A.

    2018-05-01

    A key question in understanding the observed magnetic field topologies of cool stars is the link between the small- and the large-scale magnetic field and the influence of the stellar parameters on the magnetic field topology. We examine various simulated stars to connect the small-scale with the observable large-scale field. The highly resolved 3D simulations we used couple a flux transport model with a non-potential coronal model using a magnetofrictional technique. The surface magnetic field of these simulations is decomposed into spherical harmonics, which enables us to analyse the magnetic field topologies on a wide range of length scales and to filter the large-scale magnetic field for direct comparison with observations. We show that the large-scale field of the self-consistent simulations fits that observed for solar-like stars and is mainly set up by the global dipolar field and the large-scale properties of the flux pattern, e.g. the average latitudinal position of the emerging small-scale field and its global polarity pattern. The stellar parameters of flux emergence rate, differential rotation, and meridional flow affect the large-scale magnetic field topology. An increased flux emergence rate increases the magnetic flux in all field components, and an increased differential rotation increases the toroidal field fraction by decreasing the poloidal field. The meridional flow affects the distribution of the magnetic energy across the spherical harmonic modes.
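
    Filtering the large-scale field amounts to truncating the spherical-harmonic expansion at a low degree. A minimal sketch, assuming coefficients stored per degree l as arrays of length 2l + 1 (a common layout, not necessarily that of the authors' code):

      import numpy as np

      def lowpass_coeffs(alm, l_max_keep):
          # Zero every coefficient with degree l > l_max_keep, keeping only
          # the large-scale field that low-resolution observations recover.
          return [np.asarray(row, dtype=complex) if l <= l_max_keep
                  else np.zeros(2 * l + 1, dtype=complex)
                  for l, row in enumerate(alm)]

      # Illustrative coefficients up to l = 10; keep the large-scale l <= 5
      L = 10
      alm = [np.random.randn(2 * l + 1) + 1j * np.random.randn(2 * l + 1)
             for l in range(L + 1)]
      large_scale = lowpass_coeffs(alm, 5)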

  5. Monte Carlo simulations of backscattering process in dislocation-containing SrTiO3 single crystal

    NASA Astrophysics Data System (ADS)

    Jozwik, P.; Sathish, N.; Nowicki, L.; Jagielski, J.; Turos, A.; Kovarik, L.; Arey, B.

    2014-05-01

    Studies of defect formation in crystals are of obvious importance in electronics, nuclear engineering, and other disciplines where materials are exposed to different forms of irradiation. Rutherford backscattering/channeling (RBS/C) combined with Monte Carlo (MC) simulations is the most convenient tool for this purpose, as it allows one to determine several features of lattice defects: their type, concentration, and damage accumulation kinetics. On the other hand, various irradiation conditions can be efficiently modeled by ion irradiation without making the sample radioactive. The combination of ion irradiation with channeling experiments and MC simulations thus appears to be a most versatile method for studying radiation damage in materials. The paper presents the results of such a study performed on SrTiO3 (STO) single crystals irradiated with 320 keV Ar ions. The samples were also analyzed using HRTEM as a complementary method, which enables the measurement of the geometrical parameters of crystal-lattice deformation in the vicinity of dislocations. Once these parameters and their variation within a distance of several lattice constants from the dislocation core are known, they may be used in MC simulations for the quantitative determination of dislocation depth-distribution profiles. The final outcome of the deconvolution procedure is the cross-section values calculated for the two types of defects observed (RDA and dislocations).

  6. Application of a computationally efficient method to approximate gap model results with a probabilistic approach

    NASA Astrophysics Data System (ADS)

    Scherstjanoi, M.; Kaplan, J. O.; Lischke, H.

    2014-02-01

    To be able to simulate climate change effects on forest dynamics over the whole of Switzerland, we adapted the second-generation DGVM LPJ-GUESS to the Alpine environment. We modified model functions, tuned model parameters, and implemented new tree species to represent the potential natural vegetation of Alpine landscapes. Furthermore, we increased the computational efficiency of the model to enable area-covering simulations at a fine resolution (1 km) sufficient for the complex topography of the Alps, which resulted in more than 32 000 simulation grid cells. To this aim, we applied the recently developed method GAPPARD (Scherstjanoi et al., 2013) to LPJ-GUESS. GAPPARD derives mean output values from a combination of simulation runs without disturbances and a patch age distribution defined by the disturbance frequency. With this computationally efficient method, which increased the model's speed by approximately a factor of 8, we were able to detect shortcomings of LPJ-GUESS functions and parameters more quickly. We used the adapted LPJ-GUESS together with GAPPARD to assess the influence of one climate change scenario on dynamics of tree species composition and biomass throughout the 21st century in Switzerland. To allow for comparison with the original model, we additionally simulated forest dynamics along a north-south transect through Switzerland. The results from this transect confirmed the high value of the GAPPARD method despite some limitations regarding extreme climatic events. For the first time, it allowed area-wide, detailed, high-resolution LPJ-GUESS simulation results to be obtained for a large part of the Alpine region.
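
    The essence of GAPPARD as described, averaging an undisturbed simulation over a patch-age distribution set by the disturbance frequency, can be sketched in a few lines. The exponential age distribution below is the simplest stand-replacing disturbance assumption and is used only for illustration:

      import numpy as np

      def gappard_mean(undisturbed, disturbance_rate):
          # Weight an undisturbed output series (indexed by patch age in years)
          # with an exponential patch-age distribution p(a) ~ rate*exp(-rate*a).
          ages = np.arange(len(undisturbed))
          w = disturbance_rate * np.exp(-disturbance_rate * ages)
          w /= w.sum()                    # normalise over the simulated horizon
          return float(np.dot(w, undisturbed))

      # Illustrative: biomass of an undisturbed patch saturating over 300 years
      biomass = 200.0 * (1.0 - np.exp(-np.arange(300) / 80.0))
      print(gappard_mean(biomass, 1.0 / 100.0))   # ~100-yr disturbance interval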

  7. Simulating Fiber Ordering and Aggregation In Shear Flow Using Dissipative Particle Dynamics

    NASA Astrophysics Data System (ADS)

    Stimatze, Justin T.

    We have developed a mesoscale simulation of fiber aggregation in shear flow using LAMMPS and its implementation of dissipative particle dynamics. Understanding fiber aggregation in shear flow and flow-induced microstructural fiber networks is critical to our interest in high-performance composite materials. Dissipative particle dynamics enables the consideration of hydrodynamic interactions between fibers through the coarse-grained simulation of the matrix fluid. Correctly simulating hydrodynamic interactions and accounting for fluid forces on the microstructure is required to correctly model the shear-induced aggregation process. We are able to determine stresses, viscosity, and fiber forces while simulating the evolution of a model fiber system undergoing shear flow. Fiber-fiber contact interactions are approximated by combinations of common pairwise forces, allowing the exploration of interaction-influenced fiber behaviors such as aggregation and bundling. We are then able to quantify aggregate structure and effective volume fraction for a range of relevant system and fiber-fiber interaction parameters. Our simulations have demonstrated several aggregate types dependent on system parameters such as shear rate, short-range attractive forces, and a resistance to relative rotation while in contact. A resistance to relative rotation at fiber-fiber contact points has been found to strongly contribute to an increased angle between neighboring aggregated fibers and therefore an increase in average aggregate volume fraction. This increase in aggregate volume fraction is strongly correlated with a significant enhancement of system viscosity, leading us to hypothesize that controlling the resistance to relative rotation during manufacturing processes is important when optimizing for desired composite material characteristics.
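
    For readers unfamiliar with dissipative particle dynamics, the standard Groot-Warren pair force combines a soft conservative repulsion with dissipative and random terms that share a common weight function. A minimal sketch of one pair interaction (illustrative, not the LAMMPS implementation used in this work):

      import numpy as np

      def dpd_pair_force(rij, vij, a=25.0, gamma=4.5, sigma=3.0, rc=1.0, dt=0.01):
          # Groot-Warren DPD: conservative + dissipative + random forces.
          # Fluctuation-dissipation requires sigma**2 = 2*gamma*kT (kT = 1 here).
          r = np.linalg.norm(rij)
          if r >= rc:
              return np.zeros(3)
          e = rij / r
          w = 1.0 - r / rc
          f_c = a * w * e                                        # soft repulsion
          f_d = -gamma * w**2 * np.dot(e, vij) * e               # friction
          f_r = sigma * w * np.random.randn() / np.sqrt(dt) * e  # thermal noise
          return f_c + f_d + f_r

      print(dpd_pair_force(np.array([0.5, 0.0, 0.0]), np.array([0.0, 0.1, 0.0])))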

  8. Parameterizing the Spatial Markov Model From Breakthrough Curve Data Alone

    NASA Astrophysics Data System (ADS)

    Sherman, Thomas; Fakhari, Abbas; Miller, Savannah; Singha, Kamini; Bolster, Diogo

    2017-12-01

    The spatial Markov model (SMM) is an upscaled Lagrangian model that effectively captures anomalous transport across a diverse range of hydrologic systems. The distinct feature of the SMM relative to other random walk models is that successive steps are correlated. To date, with some notable exceptions, the model has primarily been applied to data from high-resolution numerical simulations and correlation effects have been measured from simulated particle trajectories. In real systems such knowledge is practically unattainable and the best one might hope for is breakthrough curves (BTCs) at successive downstream locations. We introduce a novel methodology to quantify velocity correlation from BTC data alone. By discretizing two measured BTCs into a set of arrival times and developing an inverse model, we estimate velocity correlation, thereby enabling parameterization of the SMM in studies where detailed Lagrangian velocity statistics are unavailable. The proposed methodology is applied to two synthetic numerical problems, where we measure all details and thus test the veracity of the approach by comparison of estimated parameters with known simulated values. Our results suggest that our estimated transition probabilities agree with simulated values and using the SMM with this estimated parameterization accurately predicts BTCs downstream. Our methodology naturally allows for estimates of uncertainty by calculating lower and upper bounds of velocity correlation, enabling prediction of a range of BTCs. The measured BTCs fall within the range of predicted BTCs. This novel method to parameterize the SMM from BTC data alone is quite parsimonious, thereby widening the SMM's practical applicability.
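
    To fix ideas, the sketch below estimates a two-class (fast/slow) transition matrix from paired travel times over two successive steps, as one can do in a synthetic run where trajectories are known. The paper's contribution is precisely to recover this correlation without pairings, from the two BTCs alone, via an inverse model that this toy does not reproduce.

      import numpy as np

      def transition_matrix(t_step1, t_step2):
          # Classify each step as fast (0) or slow (1) relative to the median,
          # then count class-to-class transitions and row-normalise.
          c1 = (t_step1 > np.median(t_step1)).astype(int)
          c2 = (t_step2 > np.median(t_step2)).astype(int)
          T = np.zeros((2, 2))
          np.add.at(T, (c1, c2), 1)
          return T / T.sum(axis=1, keepdims=True)

      rng = np.random.default_rng(1)
      tau1 = rng.lognormal(0.0, 0.5, 5000)                      # step-1 times
      tau2 = 0.7 * tau1 + 0.3 * rng.lognormal(0.0, 0.5, 5000)   # correlated step 2
      print(transition_matrix(tau1, tau2))   # diagonal-heavy: velocities persist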

  9. STEAM: a software tool based on empirical analysis for micro electro mechanical systems

    NASA Astrophysics Data System (ADS)

    Devasia, Archana; Pasupuleti, Ajay; Sahin, Ferat

    2006-03-01

    In this research, a generalized software framework that enables accurate computer-aided design of MEMS devices is developed. The proposed simulation engine utilizes a novel material-property estimation technique that generates effective material properties at the microscopic level. The material property models were developed based on empirical analysis and the behavior extraction of standard test structures. A literature review is provided on the physical phenomena that govern the mechanical behavior of thin-film materials. This survey indicates that present-day models operate under a wide range of assumptions that may not be applicable to the micro-world. Thus, this methodology is foreseen to be an essential tool for MEMS designers, as it develops empirical models that relate the loading parameters, material properties, and geometry of a microstructure to its performance characteristics. This process involves learning the relationships between the above parameters using non-parametric learning algorithms such as radial basis function networks and genetic algorithms. The proposed simulation engine has a graphical user interface (GUI) which is very adaptable, flexible, and transparent. The GUI is able to encompass all parameters associated with the determination of the desired material property so as to create models that provide an accurate estimate of that property. This technique was verified by fabricating and simulating bilayer cantilevers consisting of aluminum and glass (TEOS oxide) in our previous work. The results obtained were found to be very encouraging.

  10. GMC collisions as triggers of star formation. I. Parameter space exploration with 2D simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Benjamin; Loo, Sven Van; Tan, Jonathan C.

    We utilize magnetohydrodynamic (MHD) simulations to develop a numerical model for giant molecular cloud (GMC)–GMC collisions between nearly magnetically critical clouds. The goal is to determine if, and under what circumstances, cloud collisions can cause pre-existing magnetically subcritical clumps to become supercritical and undergo gravitational collapse. We first develop and implement new photodissociation-region-based heating and cooling functions that span the atomic-to-molecular transition, creating a multiphase ISM and allowing modeling of non-equilibrium temperature structures. Then, in 2D and with ideal MHD, we explore a wide parameter space of magnetic field strength, magnetic field geometry, collision velocity, and impact parameter, and compare isolated versus colliding clouds. We find factors of ∼2–3 increase in mean clump density from typical collisions, with strong dependence on collision velocity and magnetic field strength, but ultimately limited by flux-freezing in 2D geometries. For geometries enabling flow along magnetic field lines, greater degrees of collapse are seen. We discuss observational diagnostics of cloud collisions, focusing on ¹³CO(J = 2–1), ¹³CO(J = 3–2), and ¹²CO(J = 8–7) integrated intensity maps and spectra, which we synthesize from our simulation outputs. We find that the ratio of J = 8–7 to lower-J emission is a powerful diagnostic probe of GMC collisions.

  11. All-fiber highly chirped dissipative soliton generation in the telecom range.

    PubMed

    Kharenko, Denis S; Zhdanov, Innokentiy S; Bednyakova, Anastasia E; Podivilov, Evgeniy V; Fedoruk, Mikhail P; Apolonski, Alexander; Turitsyn, Sergei K; Babin, Sergey A

    2017-08-15

    A high-energy (0.93 nJ) all-fiber erbium femtosecond oscillator operating in the telecom spectral range is proposed and realized. The laser cavity, built of commercially available fibers and components, combines polarization-maintaining (PM) and non-PM parts, providing stable generation of highly chirped pulses (chirp parameter 40) that are compressed in an output piece of standard PM fiber to 165 fs. The results of the numerical simulation agree well with the experiment. The analyzed intracavity pulse dynamics enables the classification of the generated pulses as dissipative solitons.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reiche, Helmut Matthias; Vogel, Sven C.

    New in situ data for the U-C system are presented, with the goal of improving knowledge of the phase diagram to enable production of new ceramic fuels. The non-quenchable, cubic δ-phase, which is fundamental input for computational methods, was identified. Rich datasets from the formation synthesis of uranium carbide yield kinetics data that allow benchmarking of models, thermodynamic parameters, etc. The order-disorder transition (carbon sublattice melting) was observed, owing to the equal sensitivity of neutrons to both elements. This dynamic has not been accurately described in some recent simulation-based publications.

  13. Dictionary-Based Tensor Canonical Polyadic Decomposition

    NASA Astrophysics Data System (ADS)

    Cohen, Jeremy Emile; Gillis, Nicolas

    2018-04-01

    To ensure the interpretability of extracted sources in tensor decomposition, we introduce in this paper a dictionary-based tensor canonical polyadic decomposition which enforces one factor to belong exactly to a known dictionary. A new formulation of sparse coding is proposed which enables dictionary-based canonical polyadic decomposition of high-dimensional tensors. The benefits of using a dictionary in tensor decomposition models are explored in terms of both parameter identifiability and estimation accuracy. The performance of the proposed algorithms is evaluated on the decomposition of simulated data and the unmixing of hyperspectral images.

  14. A Fully Non-Metallic Gas Turbine Engine Enabled by Additive Manufacturing Part I: System Analysis, Component Identification, Additive Manufacturing, and Testing of Polymer Composites

    NASA Technical Reports Server (NTRS)

    Grady, Joseph E.; Haller, William J.; Poinsatte, Philip E.; Halbig, Michael C.; Schnulo, Sydney L.; Singh, Mrityunjay; Weir, Don; Wali, Natalie; Vinup, Michael; Jones, Michael G.

    2015-01-01

    The research and development activities reported in this publication were carried out under the NASA Aeronautics Research Institute (NARI) funded project entitled "A Fully Nonmetallic Gas Turbine Engine Enabled by Additive Manufacturing." The objective of the project was to evaluate emerging materials and manufacturing technologies that will enable fully nonmetallic gas turbine engines. The results of the activities are described in a three-part report. The first part contains the data and analysis of engine-system trade studies, which were carried out to estimate the reduction in engine emissions and fuel burn enabled by advanced materials and manufacturing processes. A number of key engine components were identified in which advanced materials and additive manufacturing processes would provide the most significant benefits to engine operation. The technical scope of activities included an assessment of the feasibility of using additive manufacturing technologies to fabricate gas turbine engine components from polymer and ceramic matrix composites; this was accomplished by fabricating prototype engine components and testing them under simulated engine operating conditions. The manufacturing process parameters were developed and optimized for polymer and ceramic composites (described in detail in the second and third parts of the report). A number of prototype components (inlet guide vanes (IGVs), acoustic liners, an engine access door) were additively manufactured using high-temperature polymer materials. Ceramic matrix composite components included turbine nozzle components. In addition, the IGVs and acoustic liners were tested under simulated engine conditions in test rigs. The test results are reported and discussed in detail.

  15. A Transonic and Supersonic Investigation of Jet Exhaust Plume Effects on the Afterbody and Base Pressures of a Body of Revolution

    NASA Technical Reports Server (NTRS)

    Andrews, C. D.; Cooper, C. E., Jr.

    1974-01-01

    An experimental aerodynamic investigation was conducted to provide data for studies to determine the criteria for simulating rocket engine plume-induced aerodynamic effects in the wind tunnel using a simulated gaseous plume. Model surface and base pressure data were obtained in the presence of both a simulated and a prototype gaseous plume for a matrix of plume properties, to enable investigators to determine the parameters that correlate the simulated and prototype plume-induced data. The test program was conducted in the Marshall Space Flight Center's 14 x 14-inch trisonic wind tunnel using two models, the first being a strut-mounted cone-ogive-cylinder model with a fineness ratio of 9. Model exterior pressures and model plenum chamber and nozzle performance data were obtained at Mach numbers of 0.9, 1.2, 1.46, and 3.48. The exhaust plume was generated using air as the simulant gas, or Freon-14 (CF4) as the prototype gas, over a chamber pressure range from 0 to 2,000 psia and a total temperature range from 50 to 600 °F.

  16. Constant-pH Molecular Dynamics Simulations for Large Biomolecular Systems

    DOE PAGES

    Radak, Brian K.; Chipot, Christophe; Suh, Donghyuk; ...

    2017-11-07

    We report that an increasingly important endeavor is to develop computational strategies that enable molecular dynamics (MD) simulations of biomolecular systems with spontaneous changes in protonation states under conditions of constant pH. The present work describes our efforts to implement the powerful constant-pH MD simulation method, based on a hybrid nonequilibrium MD/Monte Carlo (neMD/MC) technique, within the highly scalable program NAMD. The constant-pH hybrid neMD/MC method has several appealing features: it samples the correct semigrand canonical ensemble rigorously, the computational cost increases linearly with the number of titratable sites, and it is applicable to explicit-solvent simulations. The present implementation of the constant-pH hybrid neMD/MC in NAMD is designed to handle a wide range of biomolecular systems with no constraints on the choice of force field. Furthermore, the sampling efficiency can be adaptively improved on the fly by adjusting algorithmic parameters during the simulation. Finally, illustrative examples emphasizing medium- and large-scale applications on next-generation supercomputing architectures are provided.

  17. Active Learning for Directed Exploration of Complex Systems

    NASA Technical Reports Server (NTRS)

    Burl, Michael C.; Wang, Esther

    2009-01-01

    Physics-based simulation codes are widely used in science and engineering to model complex systems that would be infeasible to study otherwise. Such codes provide the highest-fidelity representation of system behavior, but are often so slow to run that insight into the system is limited. For example, conducting an exhaustive sweep over a d-dimensional input parameter space with k-steps along each dimension requires k^d simulation trials (translating into k^d CPU-days for one of our current simulations). An alternative is directed exploration in which the next simulation trials are cleverly chosen at each step. Given the results of previous trials, supervised learning techniques (SVM, KDE, GP) are applied to build up simplified predictive models of system behavior. These models are then used within an active learning framework to identify the most valuable trials to run next. Several active learning strategies are examined including a recently-proposed information-theoretic approach. Performance is evaluated on a set of thirteen synthetic oracles, which serve as surrogates for the more expensive simulations and enable the experiments to be replicated by other researchers.
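
    A minimal uncertainty-sampling loop with a Gaussian-process surrogate, the simplest of the strategies the paper examines, can be written with scikit-learn as follows. The cheap synthetic oracle stands in for an expensive simulation code:

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor

      def oracle(x):
          # Stand-in for an expensive physics-based simulation
          return np.sin(3 * x[:, 0]) * np.cos(2 * x[:, 1])

      rng = np.random.default_rng(0)
      X = rng.uniform(0, 1, (5, 2))            # initial trials
      y = oracle(X)
      pool = rng.uniform(0, 1, (2000, 2))      # candidate parameter settings

      for _ in range(20):                      # directed-exploration loop
          gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
          _, std = gp.predict(pool, return_std=True)
          x_next = pool[np.argmax(std)]        # run the most uncertain candidate
          X = np.vstack([X, x_next])
          y = np.append(y, oracle(x_next[None, :]))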

  19. Predicting Flows of Rarefied Gases

    NASA Technical Reports Server (NTRS)

    LeBeau, Gerald J.; Wilmoth, Richard G.

    2005-01-01

    DSMC Analysis Code (DAC) is a flexible, highly automated, easy-to-use computer program for predicting flows of rarefied gases -- especially flows of upper-atmospheric, propulsion, and vented gases impinging on spacecraft surfaces. DAC implements the direct simulation Monte Carlo (DSMC) method, which is widely recognized as standard for simulating flows at densities so low that the continuum-based equations of computational fluid dynamics are invalid. DAC enables users to model complex surface shapes and boundary conditions quickly and easily. The discretization of a flow field into computational grids is automated, thereby relieving the user of a traditionally time-consuming task while ensuring (1) appropriate refinement of grids throughout the computational domain, (2) determination of optimal settings for temporal discretization and other simulation parameters, and (3) satisfaction of the fundamental constraints of the method. In so doing, DAC ensures an accurate and efficient simulation. In addition, DAC can utilize parallel processing to reduce computation time. The domain decomposition needed for parallel processing is completely automated, and the software employs a dynamic load-balancing mechanism to ensure optimal parallel efficiency throughout the simulation.
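
    The usual rule of thumb for when the continuum equations fail, and DSMC becomes appropriate, is the Knudsen number: the ratio of the molecular mean free path to a characteristic length. A hard-sphere estimate with illustrative upper-atmosphere values:

      import math

      def knudsen(number_density_m3, length_m, d_molecule_m=4.19e-10):
          # Hard-sphere mean free path: lambda = 1 / (sqrt(2) * pi * d^2 * n).
          # Continuum CFD is commonly taken to fail for Kn above ~0.01-0.1.
          mfp = 1.0 / (math.sqrt(2) * math.pi * d_molecule_m**2 * number_density_m3)
          return mfp / length_m

      # Illustrative: ~1e16 molecules/m^3 against a 1 m spacecraft surface
      print(knudsen(1e16, 1.0))   # >> 1: free-molecular regime, DSMC territory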

  20. Engine monitoring display study

    NASA Technical Reports Server (NTRS)

    Hornsby, Mary E.

    1992-01-01

    The current study is part of a larger NASA effort to develop displays for an engine-monitoring system to enable the crew to monitor engine parameter trends more effectively. The objective was to evaluate the operational utility of adding three types of information to the basic Boeing Engine Indicating and Crew Alerting System (EICAS) display formats: alphanumeric alerting messages for engine parameters whose values exceed caution or warning limits; alphanumeric messages to monitor engine parameters that deviate from expected values; and a graphic depiction of the range of expected values for current conditions. Ten training and line pilots each flew 15 simulated flight scenarios with five variants of the basic EICAS format; these variants included different combinations of the added information. The pilots detected engine problems more quickly when engine alerting messages were included in the display; adding a graphic depiction of the range of expected values did not affect detection speed. The pilots rated both types of alphanumeric messages (alert and monitor parameter) as more useful and easier to interpret than the graphic depiction. Integrating engine parameter messages into the EICAS alerting system appears to be both useful and preferred.

  1. Thermo-mechanical simulation of liquid-supported stretch blow molding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zimmer, J.; Stommel, M.

    2015-05-22

    Stretch blow molding is the well-established plastics forming method used to produce polyethylene terephthalate (PET) bottles. An injection-molded preform is heated above the PET glass transition temperature (Tg ∼ 85 °C) and subsequently inflated by pressurized air into a closed cavity. In the follow-up filling process, the resulting bottle is filled with the final product. A recently developed modification of the process combines the blowing and filling stages by directly using the final liquid product to inflate the preform. In a previously published paper, a mechanical simulation and successful evaluation of this liquid-driven stretch blow molding process was presented. In this way, a realistic, process-parameter-dependent simulation of the preform deformation throughout the forming process was enabled, whereas the evolution of the preform temperature during forming was neglected. However, the formability of the preform is greatly reduced when the temperature sinks below Tg during forming. Experimental investigations show temperature-induced failure cases due to the fast heat transfer between the hot preform and the cold liquid. Therefore, in this paper, a process-dependent simulation of the temperature evolution during processing, aimed at avoiding preform failure, is presented. For this purpose, the previously developed mechanical model is used to extract the time-dependent thickness evolution. This information serves as input for the heat transfer simulation. The required material parameters are calibrated from preform cooling experiments recorded with an infrared camera. Furthermore, the high deformation ratios during processing lead to strain-induced crystallization. This exothermal reaction is included in the simulation by extracting data from preform measurements at different stages of deformation via differential scanning calorimetry (DSC). Finally, the thermal simulation model is evaluated by free-forming experiments recorded with a high-speed infrared camera.

  2. Application of a computationally efficient method to approximate gap model results with a probabilistic approach

    NASA Astrophysics Data System (ADS)

    Scherstjanoi, M.; Kaplan, J. O.; Lischke, H.

    2014-07-01

    To be able to simulate climate change effects on forest dynamics over the whole of Switzerland, we adapted the second-generation DGVM (dynamic global vegetation model) LPJ-GUESS (Lund-Potsdam-Jena General Ecosystem Simulator) to the Alpine environment. We modified model functions, tuned model parameters, and implemented new tree species to represent the potential natural vegetation of Alpine landscapes. Furthermore, we increased the computational efficiency of the model to enable area-covering simulations at a fine resolution (1 km) sufficient for the complex topography of the Alps, which resulted in more than 32 000 simulation grid cells. To this aim, we applied the recently developed method GAPPARD (approximating GAP model results with a Probabilistic Approach to account for stand-Replacing Disturbances) (Scherstjanoi et al., 2013) to LPJ-GUESS. GAPPARD derives mean output values from a combination of simulation runs without disturbances and a patch age distribution defined by the disturbance frequency. With this computationally efficient method, which increased the model's speed by approximately a factor of 8, we were able to detect the shortcomings of LPJ-GUESS functions and parameters more quickly. We used the adapted LPJ-GUESS together with GAPPARD to assess the influence of one climate change scenario on the dynamics of tree species composition and biomass throughout the 21st century in Switzerland. To allow for comparison with the original model, we additionally simulated forest dynamics along a north-south transect through Switzerland. The results from this transect confirmed the high value of the GAPPARD method despite some limitations regarding extreme climatic events. For the first time, it allowed area-wide, detailed, high-resolution LPJ-GUESS simulation results to be obtained for a large part of the Alpine region.

  3. Generation of subterawatt-attosecond pulses in a soft x-ray free-electron laser

    DOE PAGES

    Huang, Senlin; Ding, Yuantao; Huang, Zhirong; ...

    2016-08-15

    Here, we propose a novel scheme to generate attosecond soft x-rays in a self-seeded free-electron laser (FEL), suitable for enabling attosecond spectroscopic investigations. A time-energy-chirped electron bunch with an additional sinusoidal energy modulation is adopted to produce a short seed pulse through a self-seeding monochromator. This short seed pulse, together with high electron-current spikes and a cascaded delay setup, enables a high-efficiency FEL with a fresh-bunch scheme. Simulations show that, using the Linac Coherent Light Source (LCLS) parameters, soft x-ray pulses with a FWHM of 260 attoseconds and a peak power of 0.5 TW can be obtained. This scheme also has the feature of providing a stable central wavelength determined by the self-seeding monochromator.

  4. Relativistic Electron Acceleration with Ultrashort Mid-IR Laser Pulses

    NASA Astrophysics Data System (ADS)

    Feder, Linus; Woodbury, Daniel; Shumakova, Valentina; Gollner, Claudia; Miao, Bo; Schwartz, Robert; Pugžlys, Audrius; Baltuška, Andrius; Milchberg, Howard

    2017-10-01

    We report the first results of laser plasma wakefield acceleration driven by ultrashort mid-infrared laser pulses (λ = 3.9 μm, pulse width 100 fs, energy <20 mJ, peak power <1 TW), enabling near- and above-critical-density interactions with moderate-density gas jets. We present thresholds for electron acceleration based on critical parameters for relativistic self-focusing and target width, as well as trends in the accelerated beam profiles, charge, and energy spectra, which are supported by 3D particle-in-cell simulations. These results extend earlier work on sub-TW self-modulated laser wakefield acceleration using near-IR drivers to the mid-IR, and enable us to capture time-resolved images of relativistic self-focusing of the laser pulse. This work was supported by DOE (DESC0010706TDD, DESC0015516); AFOSR (FA95501310044, FA95501610121); NSF (PHY1535519); DHS.
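
    The key scaling behind the mid-IR approach is the critical power for relativistic self-focusing, commonly estimated as P_c ≈ 17.4 (n_c/n_e) GW, where the critical density n_c falls as 1/λ². A quick comparison at 0.8 µm and 3.9 µm for the same electron density (values illustrative, not taken from the experiment):

      def critical_density_cm3(wavelength_um):
          # Classical critical density: n_c [cm^-3] ~ 1.115e21 / lambda_um^2
          return 1.115e21 / wavelength_um**2

      def p_critical_gw(n_e_cm3, wavelength_um):
          # Relativistic self-focusing threshold, P_c ~ 17.4 * (n_c / n_e) GW
          return 17.4 * critical_density_cm3(wavelength_um) / n_e_cm3

      for lam_um in (0.8, 3.9):
          print(f"{lam_um} um: P_c = {p_critical_gw(5e18, lam_um):.0f} GW")
      # ~6100 GW at 0.8 um vs ~260 GW at 3.9 um: sub-TW mid-IR pulses suffice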

  5. Two applications of the Recently Developed UZF-MT3DMS Model for Evaluating Nonpoint-Source Fluxes (Invited)

    NASA Astrophysics Data System (ADS)

    Morway, E. D.; Niswonger, R. G.; Nishikawa, T.

    2013-12-01

    The solute-transport model MT3DMS was modified to simulate transport in the unsaturated zone by incorporating the additional flow terms calculated by the Unsaturated-Zone Flow (UZF) package developed for MODFLOW. Referred to as UZF-MT3DMS, the model simulates advection and dispersion of conservative and reactive solutes in unsaturated and saturated porous media. Significant time savings are realized owing to the efficiency of the kinematic-wave approximation used by the UZF1 package relative to Richards' equation-based approaches, facilitating the use of automated parameter-estimation routines wherein thousands of model runs may be required. Currently, UZF-MT3DMS is applied to two real-world applications of existing MODFLOW and MT3DMS models retrofitted to use the UZF1 package for simulating the unsaturated component of the subsurface system. In the first application, two regional-scale investigations located in Colorado's Lower Arkansas River Valley (LARV) are developed to evaluate the extent and severity of unsaturated-zone salinization contributing to crop yield loss. Preliminary results indicate root-zone concentrations over both regions are at or above the salinity thresholds of most crop types grown in the LARV. Regional-scale modeling investigations of salinization found in the literature commonly use lumped-parameter models rather than physically based distributed-parameter models. In the second application, located near Joshua Tree, CA, nitrate loading to the underlying unconfined aquifer from domestic septic systems is evaluated. Due to the region's thick unsaturated zone and correspondingly long (multi-decade) unsaturated-zone residence times, UZF-MT3DMS enabled direct simulation of spatially varying concentration breakthrough curves at the water table.

  6. Pinatubo Emulation in Multiple Models (PoEMS): co-ordinated experiments in the ISA-MIP model intercomparison activity component of the SPARC Stratospheric Sulphur and its Role in Climate initiative (SSiRC)

    NASA Astrophysics Data System (ADS)

    Lee, Lindsay; Mann, Graham; Carslaw, Ken; Toohey, Matthew; Aquila, Valentina

    2016-04-01

    The World Climate Research Program's SPARC initiative has a new international activity, "Stratospheric Sulphur and its Role in Climate" (SSiRC), to better understand changes in stratospheric aerosol and precursor gaseous sulphur species. One component of SSiRC is an intercomparison, ISA-MIP, of composition-climate models that simulate the stratospheric aerosol layer interactively. Within PoEMS, each modelling group will run a "perturbed physics ensemble" (PPE) of interactive stratospheric aerosol (ISA) simulations of the Pinatubo eruption, varying several uncertain parameters associated with the eruption's SO2 emissions and model processes. A powerful new technique to quantify and attribute sources of uncertainty in complex global models is described by Lee et al. (2011, ACP). The analysis uses Gaussian emulation to derive a probability density function (pdf) of predicted quantities, essentially interpolating the PPE results in multi-dimensional parameter space. Once trained on the ensemble, the fast Gaussian emulator enables a Monte Carlo simulation and hence a full variance-based sensitivity analysis. The approach has already been used effectively by Carslaw et al. (2013, Nature) to quantify the uncertainty in the cloud-albedo-effect forcing from a 3D global aerosol-microphysics model, allowing the sensitivity of different predicted quantities to uncertainties in natural and anthropogenic emission types, and in structural parameters of the models, to be compared. Within ISA-MIP, each group will carry out a PPE of runs, with the subsequent emulator analysis assessing the uncertainty in the volcanic forcings predicted by each model. In this poster presentation we outline the PoEMS analysis, describing the uncertain parameters to be varied and the relevance to further understanding differences identified in previous international stratospheric aerosol assessments.

  7. Model parameters for representative wetland plant functional groups

    USGS Publications Warehouse

    Williams, Amber S.; Kiniry, James R.; Mushet, David M.; Smith, Loren M.; McMurry, Scott T.; Attebury, Kelly; Lang, Megan; McCarty, Gregory W.; Shaffer, Jill A.; Effland, William R.; Johnson, Mari-Vaughn V.

    2017-01-01

    Wetlands provide a wide variety of ecosystem services including water quality remediation, biodiversity refugia, groundwater recharge, and floodwater storage. Realistic estimation of ecosystem service benefits associated with wetlands requires reasonable simulation of the hydrology of each site and realistic simulation of the upland and wetland plant growth cycles. Objectives of this study were to quantify leaf area index (LAI), light extinction coefficient (k), and plant nitrogen (N), phosphorus (P), and potassium (K) concentrations in natural stands of representative plant species for some major plant functional groups in the United States. Functional groups in this study were based on these parameters and plant growth types to enable process-based modeling. We collected data at four locations representing some of the main wetland regions of the United States. At each site, we collected on-the-ground measurements of fraction of light intercepted, LAI, and dry matter within the 2013–2015 growing seasons. Maximum LAI and k variables showed noticeable variations among sites and years, while overall averages and functional group averages give useful estimates for multisite simulation modeling. Variation within each species gives an indication of what can be expected in such natural ecosystems. For P and K, the concentrations from highest to lowest were spikerush (Eleocharis macrostachya), reed canary grass (Phalaris arundinacea), smartweed (Polygonum spp.), cattail (Typha spp.), and hardstem bulrush (Schoenoplectus acutus). Spikerush had the highest N concentration, followed by smartweed, bulrush, reed canary grass, and then cattail. These parameters will be useful for the actual wetland species measured and for the wetland plant functional groups they represent. These parameters and the associated process-based models offer promise as valuable tools for evaluating environmental benefits of wetlands and for evaluating impacts of various agronomic practices in adjacent areas as they affect wetlands.
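
    LAI and the extinction coefficient k enter process-based models through Beer-Lambert canopy light interception, f = 1 - exp(-k * LAI). A one-function illustration with hypothetical values (not measurements from this study):

      import math

      def fraction_intercepted(lai, k):
          # Beer-Lambert law for canopy light interception
          return 1.0 - math.exp(-k * lai)

      # Hypothetical dense emergent stand: LAI = 4, k = 0.6
      print(fraction_intercepted(4.0, 0.6))   # ~0.91 of incoming light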

  8. A pilot study to determine medical laser generated air contaminant emission rates for a simulated surgical procedure.

    PubMed

    Lippert, Julia F; Lacey, Steven E; Lopez, Ramon; Franke, John; Conroy, Lorraine; Breskey, John; Esmen, Nurtan; Liu, Li

    2014-01-01

    The U.S. Occupational Safety and Health Administration (OSHA) estimates that half a million health-care workers are exposed to laser surgical smoke each year. The purpose of this study was to establish a methodology to (1) estimate emission rates of laser-generated air contaminants (LGACs) using an emission chamber and (2) perform a screening study to differentiate the effects of three laser operational parameters. An emission chamber was designed, fabricated, and assessed for performance to estimate the emission rates of gases and particles associated with LGACs during a simulated surgical procedure. Two medical lasers (holmium yttrium aluminum garnet [Ho:YAG] and carbon dioxide [CO2]) were set to a range of plausible medical laser operational parameters in a simulated surgery to pyrolyze porcine skin, generating plume in the emission chamber. Power, pulse repetition frequency (PRF), and beam diameter were evaluated to determine the effect of each operational parameter on emission rate using a fractional factorial design. The plume was sampled for particulate matter and seven gas-phase combustion byproduct contaminants (benzene, ethylbenzene, toluene, formaldehyde, hydrogen cyanide, carbon dioxide, and carbon monoxide); the gas-phase emission results are presented here. Most of the measured concentrations of gas-phase contaminants were below their limit of detection (LOD), but the detectable measurements enabled us to determine the influence of laser operational parameters on CO2 emissions. Within the experimental conditions of this screening study, the results indicated that beam diameter had a statistically significant influence, and power a marginally significant influence, on CO2 emission rates with the Ho:YAG laser but not with the carbon dioxide laser; PRF did not influence the emission rates of these gas-phase contaminants.

  9. Parameterizing Coefficients of a POD-Based Dynamical System

    NASA Technical Reports Server (NTRS)

    Kalb, Virginia L.

    2010-01-01

    A method of parameterizing the coefficients of a dynamical system based on a proper orthogonal decomposition (POD) representing the flow dynamics of a viscous fluid has been introduced. (A brief description of POD is presented in the immediately preceding article.) The present parameterization method is intended to enable construction of the dynamical system to accurately represent the temporal evolution of the flow dynamics over a range of Reynolds numbers. The need for this or a similar method arises as follows: A procedure that includes direct numerical simulation followed by POD, followed by Galerkin projection to a dynamical system has been proven to enable representation of flow dynamics by a low-dimensional model at the Reynolds number of the simulation. However, a more difficult task is to obtain models that are valid over a range of Reynolds numbers. Extrapolation of low-dimensional models by use of straightforward Reynolds-number-based parameter continuation has proven to be inadequate for successful prediction of flows. A key part of constructing such a dynamical system is understanding and providing for the variation of its coefficients with the Reynolds number. Prior methods do not enable capture of temporal dynamics over ranges of Reynolds numbers in low-dimensional models, and are not even satisfactory when large numbers of modes are used. The basic idea of the present method is to solve the problem through a suitable parameterization of the coefficients of the dynamical system. The parameterization computations involve utilization of the transfer of kinetic energy between modes as a function of Reynolds number. The thus-parameterized dynamical system accurately predicts the flow dynamics and is applicable to a range of flow problems in the dynamical regime around the Hopf bifurcation. Parameter-continuation software can be used on the parameterized dynamical system to derive a bifurcation diagram that accurately predicts the temporal flow behavior.
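
    As background, the snapshot POD on which such models are built is conveniently computed from a singular value decomposition of the mean-subtracted snapshot matrix. A minimal NumPy sketch follows; the article's actual contribution, parameterizing the Galerkin coefficients across Reynolds numbers via inter-modal energy transfer, is not reproduced here.

      import numpy as np

      def pod_modes(snapshots, n_modes):
          # Columns of `snapshots` are flow fields at successive times.
          # Returns spatial modes, singular values, temporal coefficients.
          mean = snapshots.mean(axis=1, keepdims=True)
          U, s, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)
          return U[:, :n_modes], s[:n_modes], s[:n_modes, None] * Vt[:n_modes]

      # Illustrative synthetic snapshot matrix (space x time), rank ~3
      X = np.random.randn(1000, 3) @ np.random.randn(3, 200)
      modes, sv, coeffs = pod_modes(X, 3)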

  10. General Pharmacokinetic Model for Topically Administered Ocular Drug Dosage Forms.

    PubMed

    Deng, Feng; Ranta, Veli-Pekka; Kidron, Heidi; Urtti, Arto

    2016-11-01

    In ocular drug development, an early estimate of drug behavior before any in vivo experiments is important. The pharmacokinetics (PK) and bioavailability depend not only on active compound and excipients but also on physicochemical properties of the ocular drug formulation. We propose to utilize PK modelling to predict how drug and formulational properties affect drug bioavailability and pharmacokinetics. A physiologically relevant PK model based on the rabbit eye was built to simulate the effect of formulation and physicochemical properties on PK of pilocarpine solutions and fluorometholone suspensions. The model consists of four compartments: solid and dissolved drug in tear fluid, drug in corneal epithelium and aqueous humor. Parameter values and in vivo PK data in rabbits were taken from published literature. The model predicted the pilocarpine and fluorometholone concentrations in the corneal epithelium and aqueous humor with a reasonable accuracy for many different formulations. The model includes a graphical user interface that enables the user to modify parameters easily and thus simulate various formulations. The model is suitable for the development of ophthalmic formulations and the planning of bioequivalence studies.
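
    A linear, first-order rendering of the four-compartment structure described can be written as a small ODE system. All rate constants below are hypothetical placeholders, not the calibrated rabbit-eye values used in the model:

      import numpy as np
      from scipy.integrate import solve_ivp

      def ocular_pk(t, y, k_diss=0.05, k_drain=0.2, k_in=0.02, k_out=0.01, k_el=0.03):
          # Solid drug -> dissolved drug in tear fluid -> corneal epithelium ->
          # aqueous humor, with tear drainage and aqueous elimination (1/min).
          solid, tear, cornea, aqueous = y
          return [-k_diss * solid,
                  k_diss * solid - (k_drain + k_in) * tear,
                  k_in * tear - k_out * cornea,
                  k_out * cornea - k_el * aqueous]

      sol = solve_ivp(ocular_pk, (0.0, 240.0), [1.0, 0.0, 0.0, 0.0], max_step=1.0)
      print(f"peak aqueous-humor fraction: {sol.y[3].max():.3f}")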

  11. Mechanism and design of intermittent aeration activated sludge process for nitrogen removal.

    PubMed

    Hanhan, Oytun; Insel, Güçlü; Yagci, Nevin Ozgur; Artan, Nazik; Orhon, Derin

    2011-01-01

    The paper provided a comprehensive evaluation of the mechanism and design of the intermittently aerated activated sludge process for nitrogen removal. Based on the specific character of the process, the total cycle time (TC), the aerated fraction (AF), and the cycle time ratio (CTR) were defined as the major design parameters, aside from the sludge age of the system. Their impact on system performance was evaluated by means of process simulation. A rational design procedure was developed on the basis of basic stoichiometry and mass balances related to the oxidation and removal of nitrogen under aerobic and anoxic conditions, which enabled selection of operating parameters for optimum performance. The simulation results indicated that the total nitrogen level could be reduced to a minimum by appropriate manipulation of the aerated fraction and cycle time ratio. They also showed that the effluent total nitrogen could be lowered to around 4.0 mgN/L by adjusting the dissolved-oxygen set-point to 0.5 mg/L, a level which promotes simultaneous nitrification and denitrification.

  12. New results on the hydrodynamic behaviour of fossil Nummulites tests from two nummulite banks from the Bartonian and Priabonian of northern Italy

    PubMed Central

    Seddighi, Mona; Briguglio, Antonino; Hohenegger, Johann; Papazzoni, Cesare Andrea

    2015-01-01

    Settling velocities of 58 well-preserved tests of fossil Nummulites were experimentally determined using a settling tube. The tests were collected from the nummulite banks of Pederiva di Grancona (A forms of N. lyelli and N. striatus, Middle Eocene) and San Germano dei Berici (A and B forms of N. fabianii, Late Eocene), both in the Berici Mts. (Veneto, northern Italy). The data were compared with estimated settling velocities that the same specimens might have had in life conditions. This was done by reconstructing their densities simulating water-filled condition and, to simulate post-diagenetic effects, under calcite-filled condition. These simulations show that A and B forms, even if they greatly diverge in shape, volume and size, still possess comparable settling velocities, and that each nummulite bank is characterized by specific hydrodynamic parameters. The use of settling velocity as a parameter to quantify the hydrodynamic behaviour of particles in seawater enables estimation of palaeoenvironmental conditions such as depth, substrate and the energy scenario. Such information is useful in obtaining further insights into the genesis of nummulite banks, the autochthony or allochthony of which is still being debated. Our results point to an autochthonous interpretation. PMID:26681827
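
    The dependence of settling velocity on test density is easiest to see in the low-Reynolds-number Stokes limit, v = 2 g r^2 (rho_p - rho_f) / (9 mu). Millimetre-scale tests settle well outside this limit, which is why settling-tube measurements and empirical drag laws are used in practice; the sketch below is for intuition only, with illustrative densities.

      def stokes_settling_velocity(radius_m, rho_particle,
                                   rho_fluid=1025.0, mu=1.07e-3):
          # Stokes terminal velocity (valid only at low Reynolds number;
          # it overestimates for mm-scale tests).
          g = 9.81
          return 2.0 * g * radius_m**2 * (rho_particle - rho_fluid) / (9.0 * mu)

      # Same 1 mm test, water-filled vs calcite-filled bulk density (kg/m^3)
      print(stokes_settling_velocity(1e-3, 1600.0))
      print(stokes_settling_velocity(1e-3, 2400.0))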

  14. Multilayer integral method for simulation of eddy currents in thin volumes of arbitrary geometry produced by MRI gradient coils.

    PubMed

    Sanchez Lopez, Hector; Freschi, Fabio; Trakic, Adnan; Smith, Elliot; Herbert, Jeremy; Fuentes, Miguel; Wilson, Stephen; Liu, Limei; Repetto, Maurizio; Crozier, Stuart

    2014-05-01

    This article aims to present a fast, efficient and accurate multi-layer integral method (MIM) for the evaluation of complex spatiotemporal eddy currents in nonmagnetic, thin volumes of irregular geometry induced by arbitrary arrangements of gradient coils. The volume of interest is divided into a number of layers, wherein the thickness of each layer is assumed to be smaller than the skin depth and one of the linear dimensions is much smaller than the remaining two. The diffusion equation for the current density is solved in both the time-harmonic and transient domains. The experimentally measured magnetic fields produced by the coil and the induced eddy currents, as well as the corresponding time-decay constants, were in close agreement with the results produced by the MIM. Relevant parameters such as the power loss and force induced by the eddy currents in a split cryostat were simulated using the MIM. The proposed method is capable of accurately simulating the current-diffusion process inside thin volumes, such as a magnet cryostat, and permits a priori calculation of optimal pre-emphasis parameters. The MIM enables unified design of gradient coil-magnet structures for optimal mitigation of deleterious eddy-current effects.
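
    The layering criterion, each layer thinner than the skin depth, follows from the standard expression delta = sqrt(2 / (mu0 * mu_r * sigma * omega)). An illustrative evaluation for a stainless-steel-like conductor at a 1 kHz gradient harmonic (values not taken from the paper):

      import math

      def skin_depth_m(sigma_s_per_m, freq_hz, mu_r=1.0):
          # Electromagnetic skin depth in a conductor
          mu0 = 4e-7 * math.pi
          omega = 2.0 * math.pi * freq_hz
          return math.sqrt(2.0 / (mu0 * mu_r * sigma_s_per_m * omega))

      # Stainless steel ~1.4e6 S/m at 1 kHz: delta ~ 13 mm, so layers a few
      # millimetres thick satisfy the thin-layer assumption
      print(skin_depth_m(1.4e6, 1e3))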

  15. Design and Evolution of a Modular Tensegrity Robot Platform

    NASA Technical Reports Server (NTRS)

    Bruce, Jonathan; Caluwaerts, Ken; Iscen, Atil; Sabelhaus, Andrew P.; SunSpiral, Vytas

    2014-01-01

    NASA Ames Research Center is developing a compliant modular tensegrity robotic platform for planetary exploration. In this paper we present the design and evolution of the platform's main hardware component, an untethered, robust tensegrity strut, with rich sensor feedback and cable actuation. Each strut is a complete robot, and multiple struts can be combined together to form a wide range of complex tensegrity robots. Our current goal for the tensegrity robotic platform is the development of SUPERball, a 6-strut icosahedron underactuated tensegrity robot aimed at dynamic locomotion for planetary exploration rovers and landers, but the aim is for the modular strut to enable a wide range of tensegrity morphologies. SUPERball is a second-generation prototype, evolving from the tensegrity robot ReCTeR, which is also a modular, lightweight, highly compliant 6-strut tensegrity robot that was used to validate our physics-based NASA Tensegrity Robot Toolkit (NTRT) simulator. Many hardware design parameters of SUPERball were driven by locomotion results obtained in our validated simulator. These evolutionary explorations helped constrain motor torque and speed parameters, along with strut and string stress. As construction of the hardware was finalized, we also used the same evolutionary framework to evolve controllers that respect the built hardware parameters.

  16. Semiclassical dynamics of spin density waves

    NASA Astrophysics Data System (ADS)

    Chern, Gia-Wei; Barros, Kipton; Wang, Zhentao; Suwa, Hidemaro; Batista, Cristian D.

    2018-01-01

    We present a theoretical framework for equilibrium and nonequilibrium dynamical simulation of quantum states with spin-density-wave (SDW) order. Within a semiclassical adiabatic approximation that retains electron degrees of freedom, we demonstrate that the SDW order parameter obeys a generalized Landau-Lifshitz equation. With the aid of an enhanced kernel polynomial method, our linear-scaling quantum Landau-Lifshitz dynamics (QLLD) method enables dynamical SDW simulations with N ≃ 10^5 lattice sites. Our real-space formulation can be used to compute dynamical responses, such as the dynamical structure factor, of complex and even inhomogeneous SDW configurations at zero or finite temperatures. Applying the QLLD to study the relaxation of a noncoplanar topological SDW under the excitation of a short pulse, we further demonstrate the crucial role of spatial correlations and fluctuations in the SDW dynamics.
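
    As a toy illustration of the Landau-Lifshitz dynamics obeyed by the order parameter, the following sketch integrates a single classical spin in a fixed effective field. In the paper's QLLD the effective field is instead computed from the electronic degrees of freedom at every step.

```python
import numpy as np

# Toy sketch of Landau-Lifshitz dynamics for a single classical spin. In the
# paper's QLLD the effective field comes from the electron problem (via a
# kernel polynomial method); here a fixed field along z is assumed instead.
def ll_step(s, h_eff, dt, damping=0.05):
    """One explicit step of ds/dt = -s x h - damping * s x (s x h)."""
    prec = -np.cross(s, h_eff)
    damp = -damping * np.cross(s, np.cross(s, h_eff))
    s_new = s + dt * (prec + damp)
    return s_new / np.linalg.norm(s_new)  # keep |s| = 1

s = np.array([1.0, 0.0, 0.1])
s /= np.linalg.norm(s)
h = np.array([0.0, 0.0, 1.0])  # assumed effective field
for _ in range(5000):
    s = ll_step(s, h, dt=0.01)
print("relaxed spin:", s)  # damping aligns the spin toward +z
```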

  17. Simulating the Use of Alternative Fuels in a Turbofan Engine

    NASA Technical Reports Server (NTRS)

    Litt, Jonathan S.; Chin, Jeffrey Chevoor; Liu, Yuan

    2013-01-01

    The interest in alternative fuels for aviation has created a need to evaluate their effect on engine performance. Dynamic turbofan engine simulations enable comparative modeling of the performance of these fuels on a realistic test bed, in terms of dynamic response and control, relative to traditional fuels. The analysis of overall engine performance and response characteristics can lead to a determination of the practicality of using specific alternative fuels in commercial aircraft. This paper describes a procedure to model the use of alternative fuels in a large commercial turbofan engine, and quantifies their effects on engine and vehicle performance. In addition, the modeling effort notionally demonstrates that engine performance may be maintained by modifying engine control system software parameters to account for the alternative fuel.

  18. Effective potential kinetic theory for strongly coupled plasmas

    NASA Astrophysics Data System (ADS)

    Baalrud, Scott D.; Daligault, Jérôme

    2016-11-01

    The effective potential theory (EPT) is a recently proposed method for extending traditional plasma kinetic and transport theory into the strongly coupled regime. Validation against experiments and molecular dynamics simulations has shown it to be accurate up to the onset of liquid-like correlation parameters (corresponding to Γ ≃ 10-50 for the one-component plasma, depending on the process of interest). Here, this theory is briefly reviewed along with comparisons between the theory and molecular dynamics simulations for the self-diffusivity and viscosity of the one-component plasma. A number of new results are also provided, including calculations of friction coefficients, energy exchange rates, stopping power, and mobility. The theory is also cast in the Landau and Fokker-Planck kinetic forms, which may prove useful for enabling efficient kinetic computations.
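
    The coupling parameter Γ quoted above has a simple closed form, sketched below for a one-component plasma of singly charged ions with illustrative density and temperature (assumed values, not taken from the paper).

```python
import numpy as np

# Computing the Coulomb coupling parameter Gamma for a one-component plasma:
# Gamma = (Ze)^2 / (4 pi eps0 a kB T), with a the Wigner-Seitz radius.
# Density and temperature below are illustrative, not from the paper.
e = 1.602176634e-19      # elementary charge, C
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
kB = 1.380649e-23        # Boltzmann constant, J/K

def coupling_parameter(n_m3, T_K, Z=1):
    a = (3.0 / (4.0 * np.pi * n_m3)) ** (1.0 / 3.0)  # Wigner-Seitz radius, m
    return (Z * e) ** 2 / (4.0 * np.pi * eps0 * a * kB * T_K)

# Gamma >~ 1 marks strong coupling; EPT is reported accurate up to ~10-50.
print(f"Gamma = {coupling_parameter(n_m3=1e26, T_K=1e4):.2f}")
```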

  19. Volumetric Imaging and Characterization of Focusing Waveguide Grating Couplers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Katzenmeyer, Aaron Michael; McGuinness, Hayden James Evans; Starbuck, Andrew Lea

    Volumetric imaging of focusing waveguide grating coupler emission with high spatial resolution in the visible (λ = 637.3 nm) is demonstrated using a scanning near-field optical microscope with long z-axis travel range. Stacks of 2-D images recorded at fixed distance from the device are compiled to yield 3-D visualization of the light emission pattern and enable extraction of parameters, such as spot size, angle of emission, and focal height. Measurements of such parameters are not prevalent in the literature yet are necessary for efficacious design and integration. As a result, it is observed that finite-difference time-domain simulations based on fabrication layout files do not perfectly predict in-hand device behavior, underscoring the merit of experimental validation, particularly for critical application.

  20. Volumetric Imaging and Characterization of Focusing Waveguide Grating Couplers

    DOE PAGES

    Katzenmeyer, Aaron Michael; McGuinness, Hayden James Evans; Starbuck, Andrew Lea; ...

    2017-08-29

    Volumetric imaging of focusing waveguide grating coupler emission with high spatial resolution in the visible (λ = 637.3 nm) is demonstrated using a scanning near-field optical microscope with long z-axis travel range. Stacks of 2-D images recorded at fixed distance from the device are compiled to yield 3-D visualization of the light emission pattern and enable extraction of parameters, such as spot size, angle of emission, and focal height. Measurements of such parameters are not prevalent in the literature yet are necessary for efficacious design and integration. As a result, it is observed that finite-difference time-domain simulations based on fabrication layout files do not perfectly predict in-hand device behavior, underscoring the merit of experimental validation, particularly for critical application.

  1. From atoms to layers: in situ gold cluster growth kinetics during sputter deposition

    NASA Astrophysics Data System (ADS)

    Schwartzkopf, Matthias; Buffet, Adeline; Körstgens, Volker; Metwalli, Ezzeldin; Schlage, Kai; Benecke, Gunthard; Perlich, Jan; Rawolle, Monika; Rothkirch, André; Heidmann, Berit; Herzog, Gerd; Müller-Buschbaum, Peter; Röhlsberger, Ralf; Gehrke, Rainer; Stribeck, Norbert; Roth, Stephan V.

    2013-05-01

    The adjustment of size-dependent catalytic, electrical and optical properties of gold cluster assemblies is a very significant issue in modern applied nanotechnology. We present a real-time investigation of the growth kinetics of gold nanostructures from small nuclei to a complete gold layer during magnetron sputter deposition with high time resolution by means of in situ microbeam grazing incidence small-angle X-ray scattering (μGISAXS). We specify the four-stage growth including their thresholds with sub-monolayer resolution and identify phase transitions monitored in Yoneda intensity as a material-specific characteristic. An innovative and flexible geometrical model enables the extraction of morphological real space parameters, such as cluster size and shape, correlation distance, layer porosity and surface coverage, directly from reciprocal space scattering data. This approach enables a large variety of future investigations of the influence of different process parameters on the thin metal film morphology. Furthermore, our study allows for deducing the wetting behavior of gold cluster films on solid substrates and provides a better understanding of the growth kinetics in general, which is essential for optimization of manufacturing parameters, saving energy and resources. Electronic supplementary information (ESI) available: The full GISAXS image sequence of the experiment, the model-based IsGISAXS-simulation sequence as movie files for comparison and detailed information about sample cleaning, XRR, FESEM, IsGISAXS, comparison μGIWAXS/μGISAXS, and sampling statistics. See DOI: 10.1039/c3nr34216f

  2. Hybrid adaptive ascent flight control for a flexible launch vehicle

    NASA Astrophysics Data System (ADS)

    Lefevre, Brian D.

    For the purpose of maintaining dynamic stability and improving guidance command tracking performance under off-nominal flight conditions, a hybrid adaptive control scheme is selected and modified for use as a launch vehicle flight controller. This architecture merges a model reference adaptive approach, which utilizes both direct and indirect adaptive elements, with a classical dynamic inversion controller. This structure is chosen for a number of reasons: the properties of the reference model can be easily adjusted to tune the desired handling qualities of the spacecraft, the indirect adaptive element (which consists of an online parameter identification algorithm) continually refines the estimates of the evolving characteristic parameters utilized in the dynamic inversion, and the direct adaptive element (which consists of a neural network) augments the linear feedback signal to compensate for any nonlinearities in the vehicle dynamics. The combination of these elements enables the control system to retain the nonlinear capabilities of an adaptive network while relying heavily on the linear portion of the feedback signal to dictate the dynamic response under most operating conditions. To begin the analysis, the ascent dynamics of a launch vehicle with a single 1st stage rocket motor (typical of the Ares 1 spacecraft) are characterized. The dynamics are then linearized with assumptions that are appropriate for a launch vehicle, so that the resulting equations may be inverted by the flight controller in order to compute the control signals necessary to generate the desired response from the vehicle. Next, the development of the hybrid adaptive launch vehicle ascent flight control architecture is discussed in detail. Alterations of the generic hybrid adaptive control architecture include the incorporation of a command conversion operation which transforms guidance input from quaternion form (as provided by NASA) to the body-fixed angular rate commands needed by the hybrid adaptive flight controller, development of a Newton's method based online parameter update that is modified to include a step size which regulates the rate of change in the parameter estimates, comparison of the modified Newton's method and recursive least squares online parameter update algorithms, modification of the neural network's input structure to accommodate for the nature of the nonlinearities present in a launch vehicle's ascent flight, examination of both tracking error based and modeling error based neural network weight update laws, and integration of feedback filters for the purpose of preventing harmful interaction between the flight control system and flexible structural modes. To validate the hybrid adaptive controller, a high-fidelity Ares I ascent flight simulator and a classical gain-scheduled proportional-integral-derivative (PID) ascent flight controller were obtained from the NASA Marshall Space Flight Center. The classical PID flight controller is used as a benchmark when analyzing the performance of the hybrid adaptive flight controller. Simulations are conducted which model both nominal and off-nominal flight conditions with structural flexibility of the vehicle either enabled or disabled. First, rigid body ascent simulations are performed with the hybrid adaptive controller under nominal flight conditions for the purpose of selecting the update laws which drive the indirect and direct adaptive components. 
With the neural network disabled, the results revealed that the recursive least squares online parameter update caused high frequency oscillations to appear in the engine gimbal commands. This is highly undesirable for long and slender launch vehicles, such as the Ares I, because such oscillation of the rocket nozzle could excite unstable structural flex modes. In contrast, the modified Newton's method online parameter update produced smooth control signals and was thus selected for use in the hybrid adaptive launch vehicle flight controller. In the simulations where the online parameter identification algorithm was disabled, the tracking error based neural network weight update law forced the network's output to diverge despite repeated reductions of the adaptive learning rate. As a result, the modeling error based neural network weight update law (which generated bounded signals) is utilized by the hybrid adaptive controller in all subsequent simulations. Comparing the PID and hybrid adaptive flight controllers under nominal flight conditions in rigid body ascent simulations showed that their tracking error magnitudes are similar for a period of time during the middle of the ascent phase. Though the PID controller performs better for a short interval around the 20 second mark, the hybrid adaptive controller performs far better from roughly 70 to 120 seconds. Elevating the aerodynamic loads by increasing the force and moment coefficients produced results very similar to the nominal case. However, applying a 5% or 10% thrust reduction to the first stage rocket motor causes the tracking error magnitude observed by the PID controller to be significantly elevated and diverge rapidly as the simulation concludes. In contrast, the hybrid adaptive controller steadily maintains smaller errors (often less than 50% of the corresponding PID value). Under the same sets of flight conditions with flexibility enabled, the results exhibit similar trends with the hybrid adaptive controller performing even better in each case. Again, the reduction of the first stage rocket motor's thrust clearly illustrated the superior robustness of the hybrid adaptive flight controller.
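
    The step-size-regulated Newton update credited above with producing smooth control signals can be sketched in a few lines: a damped Gauss-Newton iteration whose step size limits the rate of change of the parameter estimates. The quadratic model and data below are synthetic stand-ins, not the launch-vehicle identification problem.

```python
import numpy as np

# Sketch of a step-size-regulated (damped) Newton update for online parameter
# identification: theta <- theta + alpha * H^{-1} J^T r, with alpha < 1
# limiting the rate of change in the estimates, as described above. The
# quadratic model and data are synthetic stand-ins, not vehicle dynamics.
rng = np.random.default_rng(0)
theta_true = np.array([2.0, -1.0])

x = rng.uniform(-1.0, 1.0, 50)
J = np.column_stack([x, x**2])           # model: y = theta0*x + theta1*x^2
y = J @ theta_true + 0.01 * rng.standard_normal(50)

theta = np.zeros(2)
alpha = 0.5                              # step size regulating adaptation rate
H = J.T @ J                              # Gauss-Newton Hessian approximation
for _ in range(20):
    r = y - J @ theta                    # current residual
    theta = theta + alpha * np.linalg.solve(H, J.T @ r)
print("estimated parameters:", theta)    # converges smoothly to ~theta_true
```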

  3. Propulsion System Modeling and Simulation

    NASA Technical Reports Server (NTRS)

    Tai, Jimmy C. M.; McClure, Erin K.; Mavris, Dimitri N.; Burg, Cecile

    2002-01-01

    The Aerospace Systems Design Laboratory at the School of Aerospace Engineering in Georgia Institute of Technology has developed a core competency that enables propulsion technology managers to make technology investment decisions substantiated by propulsion and airframe technology system studies. This method assists the designer/manager in selecting appropriate technology concepts while accounting for the presence of risk and uncertainty as well as interactions between disciplines. This capability is incorporated into a single design simulation system that is described in this paper. This propulsion system design environment is created with commercially available software called iSIGHT, which is a generic computational framework, and with analysis programs for engine cycle, engine flowpath, mission, and economic analyses. iSIGHT is used to integrate these analysis tools within a single computer platform and facilitate information transfer amongst the various codes. The resulting modeling and simulation (M&S) environment in conjunction with the response surface method provides the designer/decision-maker an analytical means to examine the entire design space from either a subsystem and/or system perspective. The results of this paper will enable managers to analytically play what-if games to gain insight into the benefits (and/or degradation) of changing engine cycle design parameters. Furthermore, the propulsion design space will be explored probabilistically to show the feasibility and viability of the propulsion system integrated with a vehicle.
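
    The response surface method referred to above boils down to fitting a cheap polynomial surrogate to a modest number of expensive simulation runs and then querying the surrogate freely. A hedged sketch with a toy function standing in for the engine-cycle analysis; none of this reflects iSIGHT's internals.

```python
import numpy as np

# Response-surface sketch: fit a quadratic surrogate to a few "expensive"
# evaluations, then answer what-if queries at negligible cost. The function
# below is a toy stand-in for an engine cycle analysis, not a real code.
rng = np.random.default_rng(1)

def expensive_sim(opr, bpr):  # toy stand-in for an engine cycle analysis
    return -(opr - 30.0)**2 / 100.0 - (bpr - 8.0)**2 + 50.0

# Sample the design space (overall pressure ratio, bypass ratio)
opr = rng.uniform(20.0, 40.0, 30)
bpr = rng.uniform(5.0, 11.0, 30)
y = expensive_sim(opr, bpr)

# Quadratic surrogate: y ~ c0 + c1*opr + c2*bpr + c3*opr^2 + c4*bpr^2 + c5*opr*bpr
A = np.column_stack([np.ones_like(opr), opr, bpr, opr**2, bpr**2, opr * bpr])
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)

# Query the surrogate at a new design point
probe = np.array([1.0, 32.0, 8.5, 32.0**2, 8.5**2, 32.0 * 8.5])
print("surrogate prediction:", probe @ coeffs)   # ~49.71 for this toy
```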

  4. Multiscale contact mechanics model for RF-MEMS switches with quantified uncertainties

    NASA Astrophysics Data System (ADS)

    Kim, Hojin; Huda Shaik, Nurul; Xu, Xin; Raman, Arvind; Strachan, Alejandro

    2013-12-01

    We introduce a multiscale model for contact mechanics between rough surfaces and apply it to characterize the force-displacement relationship for a metal-dielectric contact relevant for radio frequency micro-electromechanical system (MEMS) switches. We propose a mesoscale model to describe the history-dependent force-displacement relationships in terms of the surface roughness, the long-range attractive interaction between the two surfaces, and the repulsive interaction between contacting asperities (including elastic and plastic deformation). The inputs to this model are the experimentally determined surface topography and the Hamaker constant as well as the mechanical response of individual asperities obtained from density functional theory calculations and large-scale molecular dynamics simulations. The model captures non-trivial processes including the hysteresis during loading and unloading due to plastic deformation, yet it is computationally efficient enough to enable extensive uncertainty quantification and sensitivity analysis. We quantify how uncertainties and variability in the input parameters, both experimental and theoretical, affect the force-displacement curves during approach and retraction. In addition, a sensitivity analysis quantifies the relative importance of the various input quantities for the prediction of force-displacement during contact closing and opening. The resulting force-displacement curves with quantified uncertainties can be directly used in device-level simulations of micro-switches and enable the incorporation of atomic and mesoscale phenomena in predictive device-scale simulations.

  5. SAChES: Scalable Adaptive Chain-Ensemble Sampling.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Swiler, Laura Painton; Ray, Jaideep; Ebeida, Mohamed Salah

    We present the development of a parallel Markov Chain Monte Carlo (MCMC) method called SAChES, Scalable Adaptive Chain-Ensemble Sampling. This capability is targeted at Bayesian calibration of computationally expensive simulation models. SAChES involves a hybrid of two methods: Differential Evolution Monte Carlo followed by Adaptive Metropolis. Both methods involve parallel chains. Differential evolution allows one to explore high-dimensional parameter spaces using loosely coupled (i.e., largely asynchronous) chains. Loose coupling allows the use of large chain ensembles, with far more chains than the number of parameters to explore. This reduces the per-chain sampling burden and enables high-dimensional inversions and the use of computationally expensive forward models. The large number of chains can also ameliorate the impact of silent errors, which may affect only a few chains. The chain ensemble can also be sampled to provide an initial condition when an aberrant chain is re-spawned. Adaptive Metropolis takes the best points from the differential evolution and efficiently homes in on the posterior density. The multitude of chains in SAChES is leveraged to (1) enable efficient exploration of the parameter space; and (2) ensure robustness to silent errors, which may be unavoidable in extreme-scale computational platforms of the future. This report outlines SAChES, describes four papers that are the result of the project, and discusses some additional results.
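
    The Differential Evolution Monte Carlo stage admits a compact sketch: each chain proposes a jump along the difference of two other randomly chosen chains. The Gaussian target and tuning constants below are toy stand-ins and textbook defaults, not SAChES settings.

```python
import numpy as np

# Differential Evolution Monte Carlo sketch: chain i proposes a jump along the
# difference of two other randomly chosen chains, then accepts or rejects with
# a Metropolis test. The 2-D Gaussian target is a toy stand-in for an
# expensive calibration posterior; gamma is the textbook 2.38/sqrt(2d) value.
rng = np.random.default_rng(2)

def log_post(x):
    return -0.5 * np.sum(x**2)            # toy log-posterior

n_chains, n_dim = 32, 2
gamma = 2.38 / np.sqrt(2.0 * n_dim)
chains = rng.standard_normal((n_chains, n_dim)) * 5.0  # dispersed start

for _ in range(2000):
    for i in range(n_chains):
        a, b = rng.choice([j for j in range(n_chains) if j != i], 2,
                          replace=False)
        prop = chains[i] + gamma * (chains[a] - chains[b]) \
               + 1e-4 * rng.standard_normal(n_dim)
        if np.log(rng.uniform()) < log_post(prop) - log_post(chains[i]):
            chains[i] = prop              # Metropolis accept

print("ensemble mean:", chains.mean(axis=0))  # near zero for the toy target
```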

  6. Deep Learning for real-time gravitational wave detection and parameter estimation: Results with Advanced LIGO data

    NASA Astrophysics Data System (ADS)

    George, Daniel; Huerta, E. A.

    2018-03-01

    The recent Nobel-prize-winning detections of gravitational waves from merging black holes and the subsequent detection of the collision of two neutron stars in coincidence with electromagnetic observations have inaugurated a new era of multimessenger astrophysics. To enhance the scope of this emergent field of science, we pioneered the use of deep learning with convolutional neural networks that take time-series inputs for rapid detection and characterization of gravitational wave signals. This approach, Deep Filtering, was initially demonstrated using simulated LIGO noise. In this article, we present the extension of Deep Filtering using real data from LIGO, for both detection and parameter estimation of gravitational waves from binary black hole mergers using continuous data streams from multiple LIGO detectors. We demonstrate for the first time that machine learning can detect and estimate the true parameters of real events observed by LIGO. Our results show that Deep Filtering achieves similar sensitivities and lower errors compared to matched-filtering while being far more computationally efficient and more resilient to glitches, allowing real-time processing of weak time-series signals in non-stationary non-Gaussian noise with minimal resources, and also enabling the detection of new classes of gravitational wave sources that may go unnoticed with existing detection algorithms. This unified framework for data analysis is ideally suited to enable coincident detection campaigns of gravitational waves and their multimessenger counterparts in real time.

  7. Rapid methods for radionuclide contaminant transport in nuclear fuel cycle simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huff, Kathryn

    Here, nuclear fuel cycle and nuclear waste disposal decisions are technologically coupled. However, current nuclear fuel cycle simulators lack dynamic repository performance analysis due to the computational burden of high-fidelity hydrologic contaminant transport models. The Cyder disposal environment and repository module was developed to fill this gap. It implements medium-fidelity hydrologic radionuclide transport models to support assessment appropriate for fuel cycle simulation in the Cyclus fuel cycle simulator. Rapid modeling of hundreds of discrete waste packages in a geologic environment is enabled within this module by a suite of four closed-form models for advective, dispersive, coupled, and idealized contaminant transport: a Degradation Rate model, a Mixed Cell model, a Lumped Parameter model, and a 1-D Permeable Porous Medium model. A summary of the Cyder module, its timestepping algorithm, and the mathematical models implemented within it is presented. Additionally, parametric demonstration simulations performed with Cyder are presented and shown to demonstrate functional agreement with parametric simulations conducted in a standalone hydrologic transport model, the Clay Generic Disposal System Model developed by the Used Fuel Disposition Campaign of the Department of Energy Office of Nuclear Energy.

  8. Rapid methods for radionuclide contaminant transport in nuclear fuel cycle simulation

    DOE PAGES

    Huff, Kathryn

    2017-08-01

    Here, nuclear fuel cycle and nuclear waste disposal decisions are technologically coupled. However, current nuclear fuel cycle simulators lack dynamic repository performance analysis due to the computational burden of high-fidelity hydrologic contaminant transport models. The Cyder disposal environment and repository module was developed to fill this gap. It implements medium-fidelity hydrologic radionuclide transport models to support assessment appropriate for fuel cycle simulation in the Cyclus fuel cycle simulator. Rapid modeling of hundreds of discrete waste packages in a geologic environment is enabled within this module by a suite of four closed-form models for advective, dispersive, coupled, and idealized contaminant transport: a Degradation Rate model, a Mixed Cell model, a Lumped Parameter model, and a 1-D Permeable Porous Medium model. A summary of the Cyder module, its timestepping algorithm, and the mathematical models implemented within it is presented. Additionally, parametric demonstration simulations performed with Cyder are presented and shown to demonstrate functional agreement with parametric simulations conducted in a standalone hydrologic transport model, the Clay Generic Disposal System Model developed by the Used Fuel Disposition Campaign of the Department of Energy Office of Nuclear Energy.
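
    As a rough illustration of the closed-form transport models listed above, the sketch below couples a degradation-rate source to advective outflow from a well-mixed cell, in the spirit of a Mixed Cell model. All rate constants are assumed illustrative values, not Cyder defaults.

```python
import numpy as np

# Mixed-cell-style release sketch: a degradation-rate source feeds a
# well-mixed dissolved pool that drains by advection. Explicit Euler in time;
# all rate constants are assumed illustrative values, not Cyder defaults.
dt, t_end = 1.0, 500.0       # years
k_deg = 0.01                 # waste-form degradation rate, 1/yr (assumed)
k_adv = 0.005                # advective outflow rate constant, 1/yr (assumed)

inventory, dissolved, released = 1.0, 0.0, 0.0   # normalized masses
for _ in np.arange(0.0, t_end, dt):
    src = k_deg * inventory          # mass degraded into the mixed cell
    out = k_adv * dissolved          # mass advected out of the cell
    inventory -= src * dt
    dissolved += (src - out) * dt
    released += out * dt

print(f"after {t_end:.0f} yr: inventory={inventory:.3f}, "
      f"dissolved={dissolved:.3f}, released={released:.3f}")
```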

  9. XNsim: Internet-Enabled Collaborative Distributed Simulation via an Extensible Network

    NASA Technical Reports Server (NTRS)

    Novotny, John; Karpov, Igor; Zhang, Chendi; Bedrossian, Nazareth S.

    2007-01-01

    In this paper, the XNsim approach to achieve Internet-enabled, dynamically scalable collaborative distributed simulation capabilities is presented. With this approach, a complete simulation can be assembled from shared component subsystems written in different formats that run on different computing platforms, with different sampling rates, in different geographic locations, and over single/multiple networks. The subsystems interact securely with each other via the Internet. Furthermore, the simulation topology can be dynamically modified. The distributed simulation uses a combination of hub-and-spoke and peer-to-peer network topology. A proof-of-concept demonstrator is also presented. The XNsim demonstrator can be accessed at http://www.jsc.draper.com/xn, which hosts various examples of Internet-enabled simulations.

  10. Towards SWOT data assimilation for hydrology : automatic calibration of global flow routing model parameters in the Amazon basin

    NASA Astrophysics Data System (ADS)

    Mouffe, M.; Getirana, A.; Ricci, S. M.; Lion, C.; Biancamaria, S.; Boone, A.; Mognard, N. M.; Rogel, P.

    2011-12-01

    The Surface Water and Ocean Topography (SWOT) mission is a swath-mapping radar interferometer that will provide global measurements of water surface elevation (WSE). The number of revisits depends upon latitude and varies from two (low latitudes) to ten (high latitudes) per 22-day orbit repeat period. The high resolution and the global coverage of the SWOT data open the way for new hydrology studies. Here, the aim is to investigate the use of virtually generated SWOT data to improve discharge simulation using data assimilation techniques. In the framework of the SWOT virtual mission (VM), this study presents the first results of the automatic calibration of a global flow routing (GFR) scheme using SWOT VM measurements for the Amazon basin. The Hydrological Modeling and Analysis Platform (HyMAP) is used along with the MOCOM-UA multi-criteria global optimization algorithm. HyMAP has a 0.25-degree spatial resolution and runs at the daily time step to simulate discharge, water levels and floodplains. The surface runoff and baseflow drainage derived from the Interactions Sol-Biosphère-Atmosphère (ISBA) model are used as inputs for HyMAP. Previous work showed that the use of ENVISAT data enables the reduction of the uncertainty on some of the hydrological model parameters, such as river width and depth, Manning roughness coefficient and groundwater time delay. In the framework of the SWOT preparation work, the automatic calibration procedure was applied using SWOT VM measurements. For this Observing System Experiment (OSE), the synthetic data were obtained by applying an instrument simulator (representing realistic SWOT errors) for one hydrological year to HyMAP-simulated WSE using a "true" set of parameters. Only pixels representing rivers larger than 100 meters within the Amazon basin are considered to produce SWOT VM measurements. The automatic calibration procedure leads to the estimation of optimal parameters minimizing objective functions that formulate the difference between SWOT observations and modeled WSE using a perturbed set of parameters. Different formulations of the objective function were used, especially to account for SWOT observation errors, as well as various sets of calibration parameters.

  11. An Integrated Approach for Aircraft Engine Performance Estimation and Fault Diagnostics

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Armstrong, Jeffrey B.

    2012-01-01

    A Kalman filter-based approach for integrated on-line aircraft engine performance estimation and gas path fault diagnostics is presented. This technique is specifically designed for underdetermined estimation problems where there are more unknown system parameters representing deterioration and faults than available sensor measurements. A previously developed methodology is applied to optimally design a Kalman filter to estimate a vector of tuning parameters, appropriately sized to enable estimation. The estimated tuning parameters can then be transformed into a larger vector of health parameters representing system performance deterioration and fault effects. The results of this study show that basing fault isolation decisions solely on the estimated health parameter vector does not provide ideal results. Furthermore, expanding the number of the health parameters to address additional gas path faults causes a decrease in the estimation accuracy of those health parameters representative of turbomachinery performance deterioration. However, improved fault isolation performance is demonstrated through direct analysis of the estimated tuning parameters produced by the Kalman filter. This was found to provide equivalent or superior accuracy compared to the conventional fault isolation approach based on the analysis of sensed engine outputs, while simplifying online implementation requirements. Results from the application of these techniques to an aircraft engine simulation are presented and discussed.
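
    The underdetermined-estimation idea, a reduced tuner vector estimated by a Kalman filter and mapped back to a larger health-parameter vector, can be sketched on a toy linear system. The matrices below are random stand-ins, not an engine model.

```python
import numpy as np

# Sketch of the underdetermined-estimation idea: more health parameters (h)
# than sensors, so a reduced tuner vector q is estimated with a Kalman filter
# and mapped back through an assumed transformation V (h ~ V q). All matrices
# are random toy stand-ins, not an engine model.
rng = np.random.default_rng(3)
n_health, n_tuners, n_sensors = 6, 3, 3

V = rng.standard_normal((n_health, n_tuners))    # tuner-to-health map
H_h = rng.standard_normal((n_sensors, n_health)) # health-to-sensor map
H = H_h @ V                                      # tuner-to-sensor map
R = 0.01 * np.eye(n_sensors)                     # measurement noise covariance

h_true = rng.standard_normal(n_health)           # unknown deterioration/faults
q_est, P = np.zeros(n_tuners), np.eye(n_tuners)
for _ in range(200):                             # repeated noisy measurements
    y = H_h @ h_true + 0.1 * rng.standard_normal(n_sensors)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
    q_est = q_est + K @ (y - H @ q_est)
    P = (np.eye(n_tuners) - K @ H) @ P

# The health estimate is limited to what the tuner subspace can represent,
# which is one reason the abstract analyzes the tuners directly for isolation.
print("estimated health parameters:", V @ q_est)
```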

  12. Surface driven biomechanical breast image registration

    NASA Astrophysics Data System (ADS)

    Eiben, Björn; Vavourakis, Vasileios; Hipwell, John H.; Kabus, Sven; Lorenz, Cristian; Buelow, Thomas; Williams, Norman R.; Keshtgar, M.; Hawkes, David J.

    2016-03-01

    Biomechanical modelling enables large deformation simulations of breast tissues under different loading conditions to be performed. Such simulations can be utilised to transform prone Magnetic Resonance (MR) images into a different patient position, such as upright or supine. We present a novel integration of biomechanical modelling with a surface registration algorithm which optimises the unknown material parameters of a biomechanical model and performs a subsequent regularised surface alignment. This allows deformations induced by effects other than gravity, such as those due to contact of the breast and MR coil, to be reversed. Correction displacements are applied to the biomechanical model enabling transformation of the original pre-surgical images to the corresponding target position. The algorithm is evaluated for the prone-to-supine case using prone MR images and the skin outline of supine Computed Tomography (CT) scans for three patients. A mean target registration error (TRE) of 10.9 mm for internal structures is achieved. For the prone-to-upright scenario, an optical 3D surface scan of one patient is used as a registration target and the nipple distances after alignment between the transformed MRI and the surface are 10.1 mm and 6.3 mm respectively.

  13. Spectral Prior Image Constrained Compressed Sensing (Spectral PICCS) for Photon-Counting Computed Tomography

    PubMed Central

    Yu, Zhicong; Leng, Shuai; Li, Zhoubo; McCollough, Cynthia H.

    2016-01-01

    Photon-counting computed tomography (PCCT) is an emerging imaging technique that enables multi-energy imaging with only a single scan acquisition. To enable multi-energy imaging, the detected photons corresponding to the full x-ray spectrum are divided into several subgroups of bin data that correspond to narrower energy windows. Consequently, noise in each energy bin increases compared to the full-spectrum data. This work proposes an iterative reconstruction algorithm for noise suppression in the narrower energy bins used in PCCT imaging. The algorithm is based on the framework of prior image constrained compressed sensing (PICCS) and is called spectral PICCS; it uses the full-spectrum image reconstructed using conventional filtered back-projection as the prior image. The spectral PICCS algorithm is implemented using a constrained optimization scheme with adaptive iterative step sizes such that only two tuning parameters are required in most cases. The algorithm was first evaluated using computer simulations, and then validated by both physical phantoms and in-vivo swine studies using a research PCCT system. Results from both computer-simulation and experimental studies showed substantial image noise reduction in narrow energy bins (43-73%) without sacrificing CT number accuracy or spatial resolution. PMID:27551878

  14. Spectral prior image constrained compressed sensing (spectral PICCS) for photon-counting computed tomography

    NASA Astrophysics Data System (ADS)

    Yu, Zhicong; Leng, Shuai; Li, Zhoubo; McCollough, Cynthia H.

    2016-09-01

    Photon-counting computed tomography (PCCT) is an emerging imaging technique that enables multi-energy imaging with only a single scan acquisition. To enable multi-energy imaging, the detected photons corresponding to the full x-ray spectrum are divided into several subgroups of bin data that correspond to narrower energy windows. Consequently, noise in each energy bin increases compared to the full-spectrum data. This work proposes an iterative reconstruction algorithm for noise suppression in the narrower energy bins used in PCCT imaging. The algorithm is based on the framework of prior image constrained compressed sensing (PICCS) and is called spectral PICCS; it uses the full-spectrum image reconstructed using conventional filtered back-projection as the prior image. The spectral PICCS algorithm is implemented using a constrained optimization scheme with adaptive iterative step sizes such that only two tuning parameters are required in most cases. The algorithm was first evaluated using computer simulations, and then validated by both physical phantoms and in vivo swine studies using a research PCCT system. Results from both computer-simulation and experimental studies showed substantial image noise reduction in narrow energy bins (43-73%) without sacrificing CT number accuracy or spatial resolution.

  15. Software for Acoustic Rendering

    NASA Technical Reports Server (NTRS)

    Miller, Joel D.

    2003-01-01

    SLAB is a software system that can be run on a personal computer to simulate an acoustic environment in real time. SLAB was developed to enable computational experimentation in which one can exert low-level control over a variety of signal-processing parameters, related to spatialization, for conducting psychoacoustic studies. Among the parameters that can be manipulated are the number and position of reflections, the fidelity (that is, the number of taps in finite-impulse-response filters), the system latency, and the update rate of the filters. Another goal in the development of SLAB was to provide an inexpensive means of dynamic synthesis of virtual audio over headphones, without the need for special-purpose signal-processing hardware. SLAB has a modular, object-oriented design that affords the flexibility and extensibility needed to accommodate a variety of computational experiments and signal-flow structures. SLAB's spatial renderer has a fixed signal-flow architecture corresponding to a set of parallel signal paths from each source to a listener. This fixed architecture can be regarded as a compromise that optimizes efficiency at the expense of complete flexibility. Such a compromise is necessary, given the design goal of enabling computational psychoacoustic experimentation on inexpensive personal computers.

  16. Numerical simulation of convective generated gravity waves in the stratosphere and MLT regions.

    NASA Astrophysics Data System (ADS)

    Heale, C. J.; Snively, J. B.

    2017-12-01

    Convection is an important source of gravity wave generation, especially in the summer tropics and midlatitudes, and coherent wave fields above convection are now routinely measured in the stratosphere and mesosphere [e.g. Hoffmann et al., JGR, 118, 2013; Gong et al., JGR, 120, 2015; Perwitasari et al., GRL, 42, 22, 2016]. Numerical studies have been performed to investigate the generation mechanisms, source spectra, and their effects on the middle and upper atmosphere [e.g. Fovell et al., AMS, 49, 16, 1992; Alexander and Holton, Atmos. Chem. Phys., 4, 2004; Vincent et al., JGR, 118, 2013]; however, there is still considerable work needed to fully describe these parameters. GCMs currently lack the resolution to explicitly simulate convective generation and rely on simplified parameterizations, while full cloud-resolving models are computationally expensive and often only extend into the stratosphere. More recent studies have improved the realism of these simulations by using radar-derived precipitation rates to drive latent heating in models that simulate convection [Grimsdell et al., AMS, 67, 2010; Stephan and Alexander, J. Adv. Model. Earth. Syst., 7, 2015]; however, they too only consider wave propagation in the troposphere and stratosphere. We use a 2D nonlinear, fully compressible model [Snively and Pasko, JGR, 113, 2008] to excite convectively generated waves, based on NEXRAD radar data, using the Stephan and Alexander [2015] algorithms. We study the propagation and spectral evolution of the generated waves up into the MLT region. Ambient atmosphere parameters are derived from observations and MERRA-2 reanalysis data, and stratospheric (AIRS) and mesospheric (lidar, OH airglow) observations enable comparisons with simulation results.

  17. Kinetic Monte Carlo simulation of the efficiency roll-off, emission color, and degradation of organic light-emitting diodes (Presentation Recording)

    NASA Astrophysics Data System (ADS)

    Coehoorn, Reinder; van Eersel, Harm; Bobbert, Peter A.; Janssen, Rene A. J.

    2015-10-01

    The performance of Organic Light Emitting Diodes (OLEDs) is determined by a complex interplay of the charge transport and excitonic processes in the active layer stack. We have developed a three-dimensional kinetic Monte Carlo (kMC) OLED simulation method which includes all these processes in an integral manner. The method employs a physically transparent mechanistic approach and is based on measurable parameters. All processes can be followed with molecular-scale spatial resolution and with sub-nanosecond time resolution, for any layer structure and any mixture of materials. In the talk, applications to the efficiency roll-off, emission color and lifetime of white and monochrome phosphorescent OLEDs [1,2] are demonstrated, and a comparison with experimental results is given. The simulations show to what extent triplet-polaron quenching (TPQ) and triplet-triplet annihilation (TTA) contribute to the roll-off, and how the microscopic parameters describing these processes can be deduced properly from dedicated experiments. Degradation is treated as a result of the (accelerated) conversion of emitter molecules to non-emissive sites upon a TPQ process. The degradation rate, and hence the device lifetime, is shown to depend on the emitter concentration and on the precise type of TPQ process. Results for both single-doped and co-doped OLEDs are presented, revealing that the kMC simulations enable efficient simulation-assisted layer stack development. [1] H. van Eersel et al., Appl. Phys. Lett. 105, 143303 (2014). [2] R. Coehoorn et al., Adv. Funct. Mater. (2015), publ. online (DOI: 10.1002/adfm.201402532)
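
    Underneath such a simulator sits the standard rate-based kinetic Monte Carlo event loop, sketched below with placeholder event names and rates; the real code tracks molecular sites, charges and excitons, which this toy omits.

```python
import numpy as np

# Minimal kinetic Monte Carlo (Gillespie) loop: draw the waiting time from an
# exponential with the total rate, then pick an event proportionally to its
# rate. Event names and rates are placeholders, not OLED material physics.
rng = np.random.default_rng(4)
rates = {"hop": 5.0, "emission": 1.0, "TTA": 0.2, "TPQ": 0.1}  # 1/ns, assumed

names = list(rates)
r = np.array([rates[k] for k in names])
total = r.sum()

t, counts = 0.0, dict.fromkeys(names, 0)
for _ in range(10000):
    t += rng.exponential(1.0 / total)            # time to the next event
    event = names[rng.choice(len(names), p=r / total)]
    counts[event] += 1                           # a real kMC would apply it

print(f"simulated {t:.0f} ns:", counts)
```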

  18. A parallel calibration utility for WRF-Hydro on high performance computers

    NASA Astrophysics Data System (ADS)

    Wang, J.; Wang, C.; Kotamarthi, V. R.

    2017-12-01

    A successful modeling of complex hydrological processes comprises establishing an integrated hydrological model which simulates the hydrological processes in each water regime, calibrating and validating the model performance based on observation data, and estimating the uncertainties from different sources, especially those associated with parameters. Such a model system requires large computing resources and often has to be run on High Performance Computers (HPC). The recently developed WRF-Hydro modeling system provides a significant advancement in the capability to simulate regional water cycles more completely. The WRF-Hydro model has a large range of parameters, such as those in the input table files — GENPARM.TBL, SOILPARM.TBL and CHANPARM.TBL — and several distributed scaling factors such as OVROUGHRTFAC. These parameters affect the behavior and outputs of the model and thus may need to be calibrated against observations in order to obtain a good modeling performance. A parameter calibration tool for automated calibration and uncertainty estimation of the WRF-Hydro model can provide significant convenience for the modeling community. In this study, we developed a customized tool, based on the parallel version of the model-independent parameter estimation and uncertainty analysis tool PEST, that enables it to run on HPC systems with the PBS and SLURM workload managers and job schedulers. We also developed a series of PEST input file templates that are specifically for WRF-Hydro model calibration and uncertainty analysis. Here we present a flood case study that occurred in April 2013 over the Midwest. The sensitivity and uncertainties are analyzed using the customized PEST tool we developed.

  19. egs_brachy: a versatile and fast Monte Carlo code for brachytherapy

    NASA Astrophysics Data System (ADS)

    Chamberland, Marc J. P.; Taylor, Randle E. P.; Rogers, D. W. O.; Thomson, Rowan M.

    2016-12-01

    egs_brachy is a versatile and fast Monte Carlo (MC) code for brachytherapy applications. It is based on the EGSnrc code system, enabling simulation of photons and electrons. Complex geometries are modelled using the EGSnrc C++ class library and egs_brachy includes a library of geometry models for many brachytherapy sources, in addition to eye plaques and applicators. Several simulation efficiency enhancing features are implemented in the code. egs_brachy is benchmarked by comparing TG-43 source parameters of three source models to previously published values. 3D dose distributions calculated with egs_brachy are also compared to ones obtained with the BrachyDose code. Well-defined simulations are used to characterize the effectiveness of many efficiency improving techniques, both as an indication of the usefulness of each technique and to find optimal strategies. Efficiencies and calculation times are characterized through single source simulations and simulations of idealized and typical treatments using various efficiency improving techniques. In general, egs_brachy shows agreement within uncertainties with previously published TG-43 source parameter values. 3D dose distributions from egs_brachy and BrachyDose agree at the sub-percent level. Efficiencies vary with radionuclide and source type, number of sources, phantom media, and voxel size. The combined effects of efficiency-improving techniques in egs_brachy lead to short calculation times: simulations approximating prostate and breast permanent implant (both with (2 mm)³ voxels) and eye plaque (with (1 mm)³ voxels) treatments take between 13 and 39 s, on a single 2.5 GHz Intel Xeon E5-2680 v3 processor core, to achieve 2% average statistical uncertainty on doses within the PTV. egs_brachy will be released as free and open source software to the research community.
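
    At the core of any such Monte Carlo transport code is the sampling of interaction depths from the exponential attenuation law, sketched below with an assumed attenuation coefficient rather than EGSnrc cross-section data.

```python
import numpy as np

# Core Monte Carlo transport kernel: sample photon first-interaction depths
# from the exponential attenuation law and compare with the analytic result.
# The attenuation coefficient is an assumed illustrative value, not EGSnrc data.
rng = np.random.default_rng(5)
mu = 0.2          # total attenuation coefficient, 1/cm (illustrative)
slab = 5.0        # phantom thickness, cm
n = 100_000

depths = rng.exponential(1.0 / mu, n)   # sampled free paths
frac_mc = (depths < slab).mean()        # fraction interacting inside the slab
frac_exact = 1.0 - np.exp(-mu * slab)
print(f"Monte Carlo: {frac_mc:.4f}   analytic: {frac_exact:.4f}")
```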

  20. Modeling of Solid State Transformer for the FREEDM System Demonstration

    NASA Astrophysics Data System (ADS)

    Jiang, Youyuan

    The Solid State Transformer (SST) is an essential component in the FREEDM system. This research focuses on the modeling of the SST and the controller hardware in the loop (CHIL) implementation of the SST for the support of the FREEDM system demonstration. The energy-based control strategy for a three-stage SST is analyzed and applied. A simplified average model of the three-stage SST that is suitable for simulation in a real time digital simulator (RTDS) has been developed in this study. The model is also useful for general time-domain power system analysis and simulation. The proposed simplified average model has been validated in MATLAB and PLECS. The accuracy of the model has been verified through comparison with the cycle-by-cycle average (CCA) model and detailed switching model. These models are also implemented in PSCAD, and a special strategy to implement the phase shift modulation has been proposed to enable the switching model simulation in PSCAD. The implementation of the CHIL test environment of the SST in RTDS is described in this report. The parameter setup of the model is discussed in detail. One of the difficulties is the choice of the damping factor, which is examined in this work. The grounding of the system also has a large impact on the RTDS simulation. Another issue is that the performance of the system is highly dependent on switch parameters such as voltage and current ratings. Finally, the functionalities of the SST have been realized on the platform. Power injection from the distributed energy storage interface and reverse power flow have been validated. Some limitations are noticed and discussed through the simulation on RTDS.

  1. egs_brachy: a versatile and fast Monte Carlo code for brachytherapy.

    PubMed

    Chamberland, Marc J P; Taylor, Randle E P; Rogers, D W O; Thomson, Rowan M

    2016-12-07

    egs_brachy is a versatile and fast Monte Carlo (MC) code for brachytherapy applications. It is based on the EGSnrc code system, enabling simulation of photons and electrons. Complex geometries are modelled using the EGSnrc C++ class library and egs_brachy includes a library of geometry models for many brachytherapy sources, in addition to eye plaques and applicators. Several simulation efficiency enhancing features are implemented in the code. egs_brachy is benchmarked by comparing TG-43 source parameters of three source models to previously published values. 3D dose distributions calculated with egs_brachy are also compared to ones obtained with the BrachyDose code. Well-defined simulations are used to characterize the effectiveness of many efficiency improving techniques, both as an indication of the usefulness of each technique and to find optimal strategies. Efficiencies and calculation times are characterized through single source simulations and simulations of idealized and typical treatments using various efficiency improving techniques. In general, egs_brachy shows agreement within uncertainties with previously published TG-43 source parameter values. 3D dose distributions from egs_brachy and BrachyDose agree at the sub-percent level. Efficiencies vary with radionuclide and source type, number of sources, phantom media, and voxel size. The combined effects of efficiency-improving techniques in egs_brachy lead to short calculation times: simulations approximating prostate and breast permanent implant (both with (2 mm)³ voxels) and eye plaque (with (1 mm)³ voxels) treatments take between 13 and 39 s, on a single 2.5 GHz Intel Xeon E5-2680 v3 processor core, to achieve 2% average statistical uncertainty on doses within the PTV. egs_brachy will be released as free and open source software to the research community.

  2. Estimation of ice activation parameters within a particle tracking Lagrangian cloud model using the ensemble Kalman filter to match ISDAC golden case observations

    NASA Astrophysics Data System (ADS)

    Reisner, J. M.; Dubey, M. K.

    2010-12-01

    To both quantify and reduce uncertainty in ice activation parameterizations for stratus clouds occurring in the temperature range between -5 and -10 °C, ensemble simulations of an ISDAC golden case have been conducted. To formulate the ensemble, three parameters found within an ice activation model have been sampled using a Latin hypercube technique over a parameter range that induces large variability in both number and mass of ice. The ice activation model is contained within a Lagrangian cloud model that simulates particle number as a function of radius for cloud ice, snow, graupel, cloud, and rain particles. A unique aspect of this model is that it produces very low levels of numerical diffusion, which enable the model to accurately resolve the sharp cloud edges associated with the ISDAC stratus deck. Another important aspect of the model is that near the cloud edges the number of particles can be significantly increased to reduce sampling errors and accurately resolve physical processes such as collision-coalescence that occur in this region. Thus, given these relatively low numerical errors as compared to traditional bin models, the sensitivity of a stratus deck to changes in parameters found within the activation model can be examined without fear of numerical contamination. Likewise, once the ensemble has been completed, ISDAC observations can be incorporated into a Kalman filter to optimally estimate the ice activation parameters and reduce overall model uncertainty. Hence, this work will highlight the ability of an ensemble Kalman filter system coupled to a highly accurate numerical model to estimate important parameters found within microphysical parameterizations containing high uncertainty.
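
    The ensemble Kalman update for parameter estimation replaces derivatives with ensemble statistics: the parameter-observation cross-covariance supplies the gain. A hedged sketch with a toy forward model standing in for the Lagrangian cloud model and its ice activation parameters.

```python
import numpy as np

# Ensemble Kalman filter update for parameter estimation: the ensemble's
# parameter-observation cross-covariance plays the role of the Kalman gain.
# The forward model below is a toy stand-in for the Lagrangian cloud model.
rng = np.random.default_rng(6)

def forward(theta):                     # toy "model": predicts 3 observables
    return np.array([theta[0] + theta[1],
                     theta[0] * theta[1],
                     theta[0] - 2.0 * theta[1]])

theta_true = np.array([1.5, 0.8])
obs = forward(theta_true) + 0.05 * rng.standard_normal(3)
R = 0.05**2 * np.eye(3)                          # observation error covariance

ens = rng.uniform(0.0, 3.0, size=(100, 2))       # sampled prior ensemble
Y = np.array([forward(t) for t in ens])          # predicted observations
C_ty = np.cov(ens.T, Y.T)[:2, 2:]                # param-obs cross-covariance
C_yy = np.cov(Y.T) + R
K = C_ty @ np.linalg.inv(C_yy)                   # ensemble Kalman gain
noise = rng.multivariate_normal(np.zeros(3), R, 100)
ens = ens + (obs + noise - Y) @ K.T              # perturbed-observation update

print("posterior mean:", ens.mean(axis=0))       # moves toward theta_true
```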

  3. Investigation of parameters affecting treatment time in MRI-guided transurethral ultrasound therapy

    NASA Astrophysics Data System (ADS)

    N'Djin, W. A.; Burtnyk, M.; Chopra, R.; Bronskill, M. J.

    2010-03-01

    MRI-guided transurethral ultrasound therapy shows promise for minimally invasive treatment of localized prostate cancer. Real-time MR temperature feedback enables the 3D control of thermal therapy to define an accurate region within the prostate. Previous in-vivo canine studies showed the feasibility of this method using transurethral planar transducers. The aim of this simulation study was to reduce the procedure time, while maintaining treatment accuracy by investigating new combinations of treatment parameters. A numerical model was used to simulate a multi-element heating applicator rotating inside the urethra in 10 human prostates. Acoustic power and rotation rate were varied based on the feedback of the temperature in the prostate. Several parameters were investigated for improving the treatment time. Maximum acoustic power and rotation rate were optimized interdependently as a function of prostate radius and transducer operating frequency, while avoiding temperatures >90 °C in the prostate. Other trials were performed on each parameter separately, with the other parameter fixed. The concept of using dual-frequency transducers was studied, using the fundamental frequency or the 3rd harmonic component depending on the prostate radius. The maximum acoustic power which could be used decreased as a function of the prostate radius and the frequency. Decreasing the frequency (9.7-3.0 MHz) or increasing the power (10-20 W cm⁻²) led to treatment times shorter by up to 50% under appropriate conditions. Dual-frequency configurations, while helpful, tended to have less impact on treatment times. Treatment accuracy was maintained and critical adjacent tissues like the rectal wall remained protected. The interdependence between power and frequency may require integrating multi-parametric functions inside the controller for future optimizations. As a first approach, however, even slight modifications of key parameters can be sufficient to reduce treatment time.

  4. Static Behavior of Chalcogenide Based Programmable Metallization Cells

    NASA Astrophysics Data System (ADS)

    Rajabi, Saba

    Nonvolatile memory (NVM) technologies have been an integral part of electronic systems for the past 30 years. The ideal non-volatile memory has minimal physical size, energy usage, and cost while having maximal speed, capacity, retention time, and radiation hardness. A promising candidate for next-generation memory is ion-conducting bridging RAM, which is referred to as programmable metallization cell (PMC), conductive bridge RAM (CBRAM), or electrochemical metallization memory (ECM), and which is likely to surpass flash memory in all the ideal memory characteristics. A comprehensive physics-based model is needed to completely understand PMC operation and assist in design optimization. To advance the PMC modeling effort, this thesis presents a precise physical model parameterizing the materials associated with both ion-rich and ion-poor layers of the PMC's solid electrolyte, so that it captures the static electrical behavior of the PMC in both its low-resistance on-state (LRS) and high-resistance off-state (HRS). The experimental data are measured from a chalcogenide glass PMC designed and manufactured at ASU. The static on- and off-state resistance of a PMC device composed of a layered (Ag-rich/Ag-poor) Ge30Se70 ChG film is characterized and modeled using three-dimensional simulation code written in the Silvaco Atlas finite element analysis software. Calibrating the model to experimental data enables the extraction of device parameters such as material bandgaps, workfunctions, densities of states, carrier mobilities, dielectric constants, and affinities. The sensitivity of the modeled PMC's HRS and LRS impedance behavior to variation of the extracted material parameters is examined. The resulting accurate set of material parameters for both Ag-rich and Ag-poor ChG systems, together with verification of process variation effects on the electrical characteristics, enables greater fidelity in PMC device simulation, which significantly enhances our ability to understand the underlying physics of ChG-based resistive switching memory.

  5. SiMon: Simulation Monitor for Computational Astrophysics

    NASA Astrophysics Data System (ADS)

    Xuran Qian, Penny; Cai, Maxwell Xu; Portegies Zwart, Simon; Zhu, Ming

    2017-09-01

    Scientific discovery via numerical simulations is important in modern astrophysics. This relatively new branch of astrophysics has become possible due to the development of reliable numerical algorithms and the high performance of modern computing technologies. These enable the analysis of large collections of observational data and the acquisition of new data via simulations at unprecedented accuracy and resolution. Ideally, simulations run until they reach some pre-determined termination condition, but often other factors cause extensive numerical approaches to break down at an earlier stage. In those cases, processes tend to be interrupted due to unexpected events in the software or the hardware, and the scientist handles the interrupt manually, which is time-consuming and prone to errors. We present the Simulation Monitor (SiMon) to automate the farming of large and extensive simulation processes. Our method is light-weight, fully automates the entire workflow management, operates concurrently across multiple platforms and can be installed in user space. Inspired by the process of crop farming, we perceive each simulation as a crop in the field, and running a simulation becomes analogous to growing crops. With the development of SiMon we relax the technical aspects of simulation management. The initial package was developed for extensive parameter searches in numerical simulations, but it turns out to work equally well for automating the computational processing and reduction of observational data.

  6. Simulation of transmission electron microscope images of biological specimens.

    PubMed

    Rullgård, H; Ofverstedt, L-G; Masich, S; Daneholt, B; Oktem, O

    2011-09-01

    We present a new approach to simulate electron cryo-microscope images of biological specimens. The framework for simulation consists of two parts: the first is a phantom generator that generates a model of a specimen suitable for simulation; the second is a transmission electron microscope simulator. The phantom generator calculates the scattering potential of an atomic structure in aqueous buffer and allows the user to define the distribution of molecules in the simulated image. The simulator includes a well-defined electron-specimen interaction model based on the scalar Schrödinger equation, the contrast transfer function for the optics, and a noise model that includes shot noise as well as detector noise, including detector blurring. To enable optimal performance, the simulation framework also includes a calibration protocol for setting simulation parameters. To test the accuracy of the new framework, we compare simulated images to experimental images recorded of the Tobacco Mosaic Virus (TMV) in vitreous ice. The simulated and experimental images show good agreement with respect to contrast variations depending on dose and defocus. Furthermore, random fluctuations present in experimental and simulated images exhibit similar statistical properties. The simulator has been designed to provide a platform for the development of new instrumentation and image processing procedures in single particle electron microscopy, two-dimensional crystallography and electron tomography, with well documented protocols and an open source code into which new improvements and extensions are easily incorporated. © 2011 The Authors Journal of Microscopy © 2011 Royal Microscopical Society.
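
    The noise chain described above (optics as a transfer function, shot noise, detector noise) can be sketched as follows; the Gaussian envelope below merely stands in for a real contrast transfer function, which would also carry defocus- and aberration-dependent phase terms, and all numbers are invented.

        # Sketch of the image-formation chain: blur by a transfer function,
        # apply Poisson shot noise, then additive Gaussian detector noise.
        import numpy as np
        from numpy.fft import fft2, ifft2, fftfreq

        rng = np.random.default_rng(0)
        specimen = rng.random((256, 256))                    # stand-in projected potential
        dose = 20.0                                          # electrons per pixel

        # Radially symmetric envelope as a toy stand-in for the CTF
        fx = fftfreq(256); fy = fftfreq(256)
        q2 = fx[None, :]**2 + fy[:, None]**2
        ctf = np.exp(-q2 / 0.02)                             # toy envelope, not a real CTF
        image = np.real(ifft2(fft2(specimen) * ctf))

        counts = rng.poisson(dose * np.clip(image, 0, None)) # shot noise
        detected = counts + rng.normal(0, 1.0, counts.shape) # detector readout noise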

  7. Simulation based estimation of dynamic mechanical properties for viscoelastic materials used for vocal fold models

    NASA Astrophysics Data System (ADS)

    Rupitsch, Stefan J.; Ilg, Jürgen; Sutor, Alexander; Lerch, Reinhard; Döllinger, Michael

    2011-08-01

    In order to obtain a deeper understanding of the human phonation process and the mechanisms generating sound, realistic setups containing artificial vocal folds are built. Usually, these vocal folds consist of viscoelastic materials (e.g., polyurethane mixtures). Reliable simulation-based studies of the setups require the mechanical properties of the utilized viscoelastic materials. The aim of this work is the identification of mechanical material parameters (Young's modulus, Poisson's ratio, and loss factor) for those materials. To this end, we propose a low-cost measurement setup, the so-called vibration transmission analyzer (VTA), which enables analysis of the transfer behavior of viscoelastic materials for propagating mechanical waves. With the aid of a mathematical inverse method, the material parameters are adjusted so that the simulation results coincide with the measurement results for the transfer behavior. Contrary to other works, we determine frequency-dependent functions for the mechanical properties characterizing the viscoelastic material in the frequency range of human speech (100-250 Hz). The results for three different materials clearly show that the Poisson's ratio is close to 0.5 and that the Young's modulus increases with higher frequencies. At a frequency of 400 Hz, the Young's modulus of the investigated viscoelastic materials is approximately 80% higher than in the static case (0 Hz). We verify the identified mechanical properties with experiments on fabricated vocal fold models, where only small deviations between measurements and simulations occur.

  8. Changes in running pattern due to fatigue and cognitive load in orienteering.

    PubMed

    Millet, Guillaume Y; Divert, Caroline; Banizette, Marion; Morin, Jean-Benoit

    2010-01-01

    The aim of this study was to examine the influence of fatigue on running biomechanics in normal running, in normal running with a cognitive task, and in running while map reading. Nineteen international and less experienced orienteers performed a fatiguing running exercise of duration and intensity similar to a classic distance orienteering race on an instrumented treadmill while performing mental arithmetic, an orienteering simulation, and control running at regular intervals. Two-way repeated-measures analysis of variance did not reveal any significant difference between mental arithmetic and control running for any of the kinematic and kinetic parameters analysed eight times over the fatiguing protocol. However, these parameters were systematically different between the orienteering simulation and the other two conditions (mental arithmetic and control running). The adaptations in orienteering simulation running were significantly more pronounced in the elite group when step frequency, peak vertical ground reaction force, vertical stiffness, and maximal downward displacement of the centre of mass during contact were considered. The effects of fatigue on running biomechanics depended on whether the orienteers read their map or ran normally. It is concluded that adding a cognitive load does not modify running patterns. Therefore, all changes in running pattern observed during the orienteering simulation, particularly in elite orienteers, are the result of adaptations to enable efficient map reading and/or potentially prevent injuries. Finally, running patterns are not affected to the same extent by fatigue when a map reading task is added.

  9. Design, Modeling, Fabrication, and Evaluation of the Air Amplifier for Improved Detection of Biomolecules by Electrospray Ionization Mass Spectrometry

    PubMed Central

    Robichaud, Guillaume; Dixon, R. Brent; Potturi, Amarnatha S.; Cassidy, Dan; Edwards, Jack R.; Sohn, Alex; Dow, Thomas A.; Muddiman, David C.

    2010-01-01

    Through a multi-disciplinary approach, the air amplifier is being evolved into a highly engineered device to improve detection limits of biomolecules when using electrospray ionization. Several key aspects have driven the modifications to the device through experimentation and simulations. We have developed a computer simulation that accurately portrays actual conditions, and the results from these simulations are corroborated by the experimental data. These computer simulations can be used to predict outcomes of future designs, resulting in a design process that is efficient in terms of financial cost and time. We have fabricated a new device with annular gap control over a range of 50 to 70 μm using piezoelectric actuators. This has enabled us to obtain better aerodynamic performance compared to the previous design (2× more vacuum) as well as more reproducible results, and it allows us to study a broader experimental space than the previous design, which is critical in guiding future directions. This work also presents and explains the principles behind a fractional factorial design-of-experiments methodology for testing a large number of experimental parameters in an orderly and efficient manner, so as to understand and optimize the critical parameters that lead to improved detection limits while minimizing the number of experiments performed. Preliminary results showed that several-fold improvements could be obtained under certain operating conditions (up to 34-fold). PMID:21499524
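
    As a sketch of the fractional factorial idea, a two-level 2^(5-2) design can be built from a full factorial in three base factors plus generator columns; the particular generators below are illustrative, not those used in the study.

        # Sketch: 2^(5-2) fractional factorial design with generators D = AB, E = AC.
        # Five factors are screened in 8 runs instead of the full 2^5 = 32.
        import itertools
        import numpy as np

        base = np.array(list(itertools.product([-1, 1], repeat=3)))  # A, B, C levels
        A, B, C = base.T
        D = A * B                         # generator: column D aliased to interaction AB
        E = A * C                         # generator: column E aliased to interaction AC
        design = np.column_stack([A, B, C, D, E])
        print(design)                     # each row is one experimental run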

  10. Measurement and simulation of the TRR BNCT beam parameters

    NASA Astrophysics Data System (ADS)

    Bavarnegin, Elham; Sadremomtaz, Alireza; Khalafi, Hossein; Kasesaz, Yaser; Golshanian, Mohadeseh; Ghods, Hossein; Ezzati, Arsalan; Keyvani, Mehdi; Haddadi, Mohammad

    2016-09-01

    Recently, the configuration of the Tehran Research Reactor (TRR) thermal column has been modified and a thermal neutron beam suitable for preclinical Boron Neutron Capture Therapy (BNCT) has been obtained. In this study, simulations and experimental measurements have been carried out to identify the BNCT beam parameters, including the beam uniformity, the distributions of the thermal neutron dose, boron dose and gamma dose in a phantom, and the Therapeutic Gain (TG). To do this, the entire TRR structure, including the reactor core, pool, thermal column and beam tubes, has been modeled using the MCNPX Monte Carlo code. To measure the in-phantom dose distribution, a special head phantom has been constructed, and foil activation techniques and TLD700 dosimeters have been used. The results show sufficient uniformity in the TRR thermal BNCT beam. The TG parameter has a maximum value of 5.7 at a depth of 1 cm from the surface of the phantom, confirming that the TRR thermal neutron beam has potential for use in the treatment of superficial brain tumors. For the purpose of a clinical trial, further modifications are needed at the reactor, for example the design and construction of a treatment room at the beam exit, which is planned for the future. To date, this beam is usable for biological studies and animal trials. There is relatively good agreement between simulation and measurement, especially within a diameter of 10 cm, which is the dimension of usual BNCT beam ports. This agreement enables a more precise prediction of the irradiation conditions needed for future experiments.

  11. DSC: software tool for simulation-based design of control strategies applied to wastewater treatment plants.

    PubMed

    Ruano, M V; Ribes, J; Seco, A; Ferrer, J

    2011-01-01

    This paper presents a computer tool called DSC (Simulation based Controllers Design) that enables easy design of control systems and strategies applied to wastewater treatment plants (WWTPs). Although the control systems are developed and evaluated by simulation, this tool aims to facilitate direct implementation of the designed control system on the PC of the full-scale WWTP. The designed control system can be programmed in a dedicated control application and can be connected to either the simulation software or the SCADA of the plant. To this end, the developed DSC incorporates an OPC (OLE for Process Control) server, which provides an open-standard communication protocol for different industrial process applications. The potential capabilities of the DSC tool are illustrated through the example of a full-scale application: an aeration control system applied to a nutrient-removing WWTP was designed, tuned and evaluated with the DSC tool before its implementation in the full-scale plant. The control parameters obtained by simulation were suitable for the full-scale plant with only a few modifications to improve the control performance. With the DSC tool, control system performance can be easily evaluated by simulation; once developed and tuned, the control systems can be directly applied to the full-scale WWTP.
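
    A minimal sketch of the kind of aeration loop such a tool targets, assuming a discrete PI controller holding dissolved oxygen at a setpoint; the gains, limits, and units are hypothetical, and in the real system the measurement read and actuator write would pass through the OPC layer.

        # Sketch: discrete PI control of dissolved oxygen (DO) via air flow.
        KP, TI, DT = 50.0, 600.0, 60.0     # gain, integral time (s), sample time (s)
        DO_SP = 2.0                        # mg O2 / L setpoint
        integral = 0.0

        def control_step(do_measured):
            """One controller scan: DO reading in, air flow command out."""
            global integral
            error = DO_SP - do_measured
            integral += error * DT / TI
            airflow = KP * (error + integral)          # PI law
            return min(max(airflow, 0.0), 500.0)       # actuator limits (m3/h)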

  12. Evolving a Neural Olfactorimotor System in Virtual and Real Olfactory Environments

    PubMed Central

    Rhodes, Paul A.; Anderson, Todd O.

    2012-01-01

    To provide a platform to enable the study of simulated olfactory circuitry in context, we have integrated a simulated neural olfactorimotor system with a virtual world which simulates both computational fluid dynamics as well as a robotic agent capable of exploring the simulated plumes. A number of the elements which we developed for this purpose have not, to our knowledge, been previously assembled into an integrated system, including: control of a simulated agent by a neural olfactorimotor system; continuous interaction between the simulated robot and the virtual plume; the inclusion of multiple distinct odorant plumes and background odor; the systematic use of artificial evolution driven by olfactorimotor performance (e.g., time to locate a plume source) to specify parameter values; the incorporation of the realities of an imperfect physical robot using a hybrid model where a physical robot encounters a simulated plume. We close by describing ongoing work toward engineering a high dimensional, reversible, low power electronic olfactory sensor which will allow olfactorimotor neural circuitry evolved in the virtual world to control an autonomous olfactory robot in the physical world. The platform described here is intended to better test theories of olfactory circuit function, as well as provide robust odor source localization in realistic environments. PMID:23112772
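
    The evolutionary parameter search can be sketched as a simple generational loop, assuming a hypothetical evaluate_agent hook that runs the olfactorimotor agent in the virtual plume and returns its time to locate the source; the toy fitness landscape below merely stands in for that simulation.

        # Sketch: artificial evolution of circuit parameters, fitness = time to source.
        import numpy as np

        rng = np.random.default_rng(1)
        POP, GENS, NPAR = 32, 50, 10

        def evaluate_agent(params):
            """Hypothetical: run the agent in the plume, return time to source (s)."""
            return float(np.sum((params - 0.3) ** 2))   # toy stand-in landscape

        pop = rng.random((POP, NPAR))
        for g in range(GENS):
            times = np.array([evaluate_agent(p) for p in pop])
            elite = pop[np.argsort(times)[: POP // 4]]                 # keep fastest quarter
            children = elite[rng.integers(0, len(elite), POP - len(elite))]
            children = children + rng.normal(0, 0.05, children.shape)  # mutate
            pop = np.vstack([elite, children])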

  13. Effect of threshold disorder on the quorum percolation model

    NASA Astrophysics Data System (ADS)

    Monceau, Pascal; Renault, Renaud; Métens, Stéphane; Bottani, Samuel

    2016-07-01

    We study the modifications induced in the behavior of the quorum percolation model on neural networks with Gaussian in-degree by taking into account uncorrelated Gaussian variability of the thresholds. We derive a mean-field approach and show its relevance by carrying out explicit Monte Carlo simulations. It turns out that such disorder shifts the position of the percolation transition, impacts the size of the giant cluster, and can even destroy the transition. Moreover, we highlight the occurrence of disorder-independent fixed points above the quorum critical value. The mean-field approach enables us to interpret these effects in terms of activation probability. A finite-size analysis shows that the order parameter is weakly self-averaging, with an exponent independent of the threshold disorder. Last, we show that the effects of threshold and connectivity disorder cannot be easily discriminated from the measured averaged physical quantities.
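
    A minimal Monte Carlo sketch of quorum percolation with Gaussian threshold disorder, under the usual convention that activation is irreversible; network size and parameter values are toy choices, not those of the paper.

        # Sketch: nodes switch on (irreversibly) once the number of active
        # in-neighbors reaches their own threshold theta_i ~ N(theta, sigma).
        import numpy as np

        rng = np.random.default_rng(2)
        N, K_MEAN, THETA, SIGMA, F0 = 2000, 15, 6.0, 1.5, 0.1

        # Random in-neighbor lists with approximately Gaussian in-degree
        indeg = np.clip(rng.normal(K_MEAN, 3.0, N).round().astype(int), 1, N - 1)
        in_nbrs = [rng.choice(N, k, replace=False) for k in indeg]
        theta = rng.normal(THETA, SIGMA, N)            # disordered thresholds

        active = rng.random(N) < F0                    # initially ignited fraction
        changed = True
        while changed:
            inputs = np.array([active[nb].sum() for nb in in_nbrs])
            newly = (~active) & (inputs >= theta)
            changed = bool(newly.any())
            active |= newly
        print("final active fraction:", active.mean())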

  14. Q-Learning-Based Adjustable Fixed-Phase Quantum Grover Search Algorithm

    NASA Astrophysics Data System (ADS)

    Guo, Ying; Shi, Wensha; Wang, Yijun; Hu, Jiankun

    2017-02-01

    We demonstrate that the rotation phase can be suitably chosen to increase the efficiency of the phase-based quantum search algorithm, leading to a dynamic balance between iterations and success probabilities of the fixed-phase quantum Grover search algorithm with Q-learning for a given number of solutions. In this search algorithm, the proposed Q-learning algorithm, which is in essence a model-free reinforcement learning strategy, performs a matching between the fraction of marked items λ and the rotation phase α. After establishing the policy function α = π(λ), we complete the fixed-phase Grover algorithm, where the phase parameter is selected via the learned policy. Simulation results show that the Q-learning-based Grover search algorithm (QLGA) requires fewer iterations and yields higher success probabilities. Compared with the conventional Grover algorithms, it avoids local optima, thereby enabling success probabilities to approach one.
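
    The matching idea can be illustrated with a toy tabular Q-learner. For a self-contained sketch, the action here is the iteration count of the standard (π-phase) Grover operator, whose success probability is known in closed form, whereas the paper itself learns the rotation phase α; the learning mechanics are the same.

        # Toy: epsilon-greedy Q-learning of the best Grover action per fraction lambda.
        import numpy as np

        rng = np.random.default_rng(3)
        lambdas = np.linspace(0.02, 0.25, 12)          # fractions of marked items
        K, EPS, LR = 20, 0.1, 0.1                      # max iterations, exploration, step
        Q = np.zeros((len(lambdas), K + 1))

        def success_prob(lam, k):
            # Exact for standard Grover: sin^2((2k+1) * arcsin(sqrt(lambda)))
            return np.sin((2 * k + 1) * np.arcsin(np.sqrt(lam))) ** 2

        for episode in range(20000):
            i = rng.integers(len(lambdas))
            k = rng.integers(K + 1) if rng.random() < EPS else int(np.argmax(Q[i]))
            reward = float(rng.random() < success_prob(lambdas[i], k))  # one trial
            Q[i, k] += LR * (reward - Q[i, k])          # one-step (bandit) Q-update

        policy = Q.argmax(axis=1)                       # learned action per lambda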

  15. Bioinstrumentation for evaluation of workload in payload specialists: results of ASSESS II

    NASA Astrophysics Data System (ADS)

    Wegmann, Hans M.; Herrmann, Reinhold; Winget, Charles M.

    1980-11-01

    ASSESS II (Airborne Science/Spacelab Experiments System Simulation) was a cooperative NASA-ESA project which consisted of a detailed simulation of Spacelab operations using the NASA Ames Research Center CV-990 aircraft laboratory. The medical experiment reported on in this paper was part of a complex payload consisting of 11 different experiments. Its general purpose was to develop a technology, potentially to be flown on board Spacelab, enabling the assessment of workload through evaluating changes in circadian rhythmicity, sleep disturbances and episodic or cumulative stress. The following variables were measured as parameters: rectal temperature, ECG, sleep EEG and EOG, and the urinary excretion of hormones and electrolytes. The results revealed evidence that a Spacelab environment, as simulated in ASSESS II, will lead to internal dissociation of circadian rhythms, to sleep disturbances and to highly stressful working conditions. Altogether these effects will impose considerable workload upon payload specialists. It is suggested that an intensive pre-mission system simulation will reduce these impairments to a reasonable degree. The bioinstrumentation applied in this experiment proved to be a practical and reliable tool for meeting the objectives of the study.

  16. DEM Based Modeling: Grid or TIN? The Answer Depends

    NASA Astrophysics Data System (ADS)

    Ogden, F. L.; Moreno, H. A.

    2015-12-01

    The availability of petascale supercomputing power has enabled process-based hydrological simulations on large watersheds and two-way coupling with mesoscale atmospheric models. Of course with increasing watershed scale come corresponding increases in watershed complexity, including wide ranging water management infrastructure and objectives, and ever increasing demands for forcing data. Simulations of large watersheds using grid-based models apply a fixed resolution over the entire watershed. In large watersheds, this means an enormous number of grids, or coarsening of the grid resolution to reduce memory requirements. One alternative to grid-based methods is the triangular irregular network (TIN) approach. TINs provide the flexibility of variable resolution, which allows optimization of computational resources by providing high resolution where necessary and low resolution elsewhere. TINs also increase required effort in model setup, parameter estimation, and coupling with forcing data which are often gridded. This presentation discusses the costs and benefits of the use of TINs compared to grid-based methods, in the context of large watershed simulations within the traditional gridded WRF-HYDRO framework and the new TIN-based ADHydro high performance computing watershed simulator.

  17. Computing muscle, ligament, and osseous contributions to the elbow varus moment during baseball pitching

    PubMed Central

    Buffi, James H.; Werner, Katie; Kepple, Tom; Murray, Wendy M.

    2014-01-01

    Baseball pitching imposes a dangerous valgus load on the elbow that puts the joint at severe risk for injury. The goal of this study was to develop a musculoskeletal modeling approach to enable evaluation of muscle-tendon contributions to mitigating elbow injury risk in pitching. We implemented a forward dynamic simulation framework that used a scaled biomechanical model to reproduce a pitching motion recorded from a high school pitcher. The medial elbow muscles generated substantial, protective, varus elbow moments in our simulations. For our subject, the triceps generated large varus moments at the time of peak valgus loading; varus moments generated by the flexor digitorum superficialis were larger, but occurred later in the motion. Increasing muscle-tendon force output, either by augmenting parameters associated with strength and power or by increasing activation levels, decreased the load on the ulnar collateral ligament. Published methods have not previously quantified the biomechanics of elbow muscles during pitching. This simulation study represents a critical advancement in the study of baseball pitching and highlights the utility of simulation techniques in the study of this difficult problem. PMID:25281409

  18. SUPL support for mobile devices

    NASA Astrophysics Data System (ADS)

    Narisetty, Jayanthi; Soghoyan, Arpine; Sundaramurthy, Mohanapriya; Akopian, David

    2012-02-01

    Conventional Global Positioning System (GPS) receivers operate well in open-sky environments, but their performance degrades in urban canyons, indoors and underground due to multipath, foliage, dissipation, etc. To overcome such situations, several enhancements have been suggested, such as Assisted GPS (A-GPS). In this approach, orbital parameters including ephemeris and almanac, along with reference time and coarse location information, are provided to GPS receivers to assist in the acquisition of weak signals. To test A-GPS-enabled receivers, high-end simulators are used, which are not affordable for many academic institutions. This paper presents an economical A-GPS supplement for inexpensive simulators which operates at the application layer. In particular, the proposed solution is integrated with National Instruments' (NI) GPS Simulation Toolkit and implemented in NI's LabVIEW environment. This A-GPS support works for J2ME and Android platforms. The communication between the simulator and the receiver is in accordance with the Secure User Plane Location (SUPL) protocol encapsulating the Radio Resource Location Protocol (RRLP), which applies to Global System for Mobile Communications (GSM) and Universal Mobile Telecommunications System (UMTS) cellular networks.

  19. Simulation-Guided 3D Nanomanufacturing via Focused Electron Beam Induced Deposition

    DOE PAGES

    Fowlkes, Jason D.; Winkler, Robert; Lewis, Brett B.; ...

    2016-06-10

    Focused electron beam induced deposition (FEBID) is one of the few techniques that enable direct-write synthesis of free-standing 3D nanostructures. While the fabrication of simple architectures such as vertical or curving nanowires has been achieved by simple trial and error, processing complex 3D structures is not tractable with this approach. This is due, in part, to the dynamic interplay between electron-solid interactions and the transient spatial distribution of absorbed precursor molecules on the solid surface. Here, we demonstrate the ability to controllably deposit 3D lattice structures at the micro/nanoscale, which have received recent interest owing to superior mechanical and optical properties. Moreover, a hybrid Monte Carlo-continuum simulation is briefly overviewed, and subsequently FEBID experiments and simulations are directly compared. Finally, a 3D computer-aided design (CAD) program is introduced, which generates the beam parameters necessary for FEBID by both simulation and experiment. Using this approach, we demonstrate the fabrication of various 3D lattice structures using Pt-, Au-, and W-based precursors.

  20. Simulated Performance of the Orbiting Wide-angle Light Collectors (OWL) Experiment

    NASA Technical Reports Server (NTRS)

    Krizmanic, J. F.; Whitaker, Ann F. (Technical Monitor)

    2001-01-01

    The Orbiting Wide-angle Light collectors (OWL) experiment is in NASA's mid-term strategic plan and will stereoscopically image, from equatorial orbit, the air fluorescence signal generated by air showers induced by the ultrahigh energy (E > few × 10^19 eV) component of the cosmic radiation. The use of a space-based platform enables an extremely large event acceptance aperture and thus will allow a high-statistics measurement of these rare events. Detailed Monte Carlo simulations are required to quantify the physics potential of the mission as well as to optimize the instrumental parameters. This paper reports on the results of the GSFC Monte Carlo simulation for two different OWL instrument baseline designs. These results indicate that, assuming a continuation of the cosmic ray spectrum (approximately ∝ E^-2.75), OWL could have an event rate of 4000 events/year with E ≥ 10^20 eV. Preliminary results, based upon these Monte Carlo simulations, indicate that events can be accurately reconstructed in the detector focal plane arrays for the OWL instrument baseline designs under consideration.

  1. Analytical model of diffuse reflectance spectrum of skin tissue

    NASA Astrophysics Data System (ADS)

    Lisenko, S. A.; Kugeiko, M. M.; Firago, V. A.; Sobchuk, A. N.

    2014-01-01

    We have derived simple analytical expressions that enable highly accurate calculation of diffusely reflected light signals of skin in the spectral range from 450 to 800 nm at a distance from the region of delivery of exciting radiation. The expressions, taking into account the dependence of the detected signals on the refractive index, transport scattering coefficient, absorption coefficient and anisotropy factor of the medium, have been obtained in the approximation of a two-layer medium model (epidermis and dermis) for the same parameters of light scattering but different absorption coefficients of layers. Numerical experiments on the retrieval of the skin biophysical parameters from the diffuse reflectance spectra simulated by the Monte Carlo method show that commercially available fibre-optic spectrophotometers with a fixed distance between the radiation source and detector can reliably determine the concentration of bilirubin, oxy- and deoxyhaemoglobin in the dermis tissues and the tissue structure parameter characterising the size of its effective scatterers. We present the examples of quantitative analysis of the experimental data, confirming the correctness of estimates of biophysical parameters of skin using the obtained analytical expressions.

  2. Single Object & Time Series Spectroscopy with JWST NIRCam

    NASA Technical Reports Server (NTRS)

    Greene, Tom; Schlawin, Everett A.

    2017-01-01

    JWST will enable high signal-to-noise spectroscopic observations of the atmospheres of transiting planets with high sensitivity at wavelengths that are inaccessible with HST or other existing facilities. We plan to exploit this by measuring abundances, chemical compositions, cloud properties, and temperature-pressure parameters of a set of mostly warm (T ~ 600-1200 K), low-mass (14-200 Earth-mass) planets in our guaranteed time program. These planets are expected to have significant molecular absorptions of H2O, CH4, CO2, CO, and other molecules that are key to determining these parameters and illuminating how and where the planets formed. We describe how we will use the NIRCam grisms to observe slitless transmission and emission spectra of these planets over 2.4-5.0 microns wavelength and how well these observations can measure our desired parameters. This will include how we set integration times and exposure parameters, and how we obtain simultaneous shorter-wavelength images to track telescope pointing and stellar variability. We will illustrate this with specific examples showing model spectra, simulated observations, expected information retrieval results, completed Astronomer's Proposal Tool observing templates, target visibility, and other considerations.

  3. Modeling and Simulation With Operational Databases to Enable Dynamic Situation Assessment & Prediction

    DTIC Science & Technology

    2010-11-01

    A Lanchester simulation (Lanchester5D) was developed to conduct performance benchmarks using the WarpIV Kernel and HyperWarpSpeed. The Lanchester simulation contains a user-definable number of grid cells in which blue and red forces engage in battle using Lanchester equations. Having a user-definable number of grid cells enables the simulation to be stressed with high entity counts.
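
    For reference, the aimed-fire ("square law") Lanchester equations that drive such a cell, dB/dt = -rR and dR/dt = -bB, can be integrated in a few lines; the coefficients and force sizes below are illustrative.

        # Sketch: Euler integration of the Lanchester square-law equations.
        b, r = 0.8, 1.0                 # per-unit effectiveness of blue and red fire
        B, R = 120.0, 100.0             # initial force sizes
        dt = 0.01
        while B > 0 and R > 0:
            B, R = B - r * R * dt, R - b * B * dt
        print("survivor:", "blue" if B > 0 else "red", round(max(B, R), 1))

    The square law's invariant b·B² - r·R² predicts the outcome directly: here b·B² = 11520 exceeds r·R² = 10000, so blue wins despite red's higher per-unit effectiveness.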

  4. Quantum Computing Architectural Design

    NASA Astrophysics Data System (ADS)

    West, Jacob; Simms, Geoffrey; Gyure, Mark

    2006-03-01

    Large scale quantum computers will invariably require scalable architectures in addition to high fidelity gate operations. Quantum computing architectural design (QCAD) addresses the problems of actually implementing fault-tolerant algorithms given physical and architectural constraints beyond those of basic gate-level fidelity. Here we introduce a unified framework for QCAD that enables the scientist to study the impact of varying error correction schemes, architectural parameters including layout and scheduling, and physical operations native to a given architecture. Our software package, aptly named QCAD, provides compilation, manipulation/transformation, multi-paradigm simulation, and visualization tools. We demonstrate various features of the QCAD software package through several examples.

  5. Characterizing the three-orbital Hubbard model with determinant quantum Monte Carlo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kung, Y. F.; Chen, C. -C.; Wang, Yao

    Here, we characterize the three-orbital Hubbard model using state-of-the-art determinant quantum Monte Carlo (DQMC) simulations with parameters relevant to the cuprate high-temperature superconductors. The simulations find that doped holes preferentially reside on oxygen orbitals and that the (π,π) antiferromagnetic ordering vector dominates in the vicinity of the undoped system, as known from experiments. The orbitally-resolved spectral functions agree well with photoemission spectroscopy studies and enable identification of orbital content in the bands. A comparison of DQMC results with exact diagonalization and cluster perturbation theory studies elucidates how these different numerical techniques complement one another to produce a more complete understanding of the model and the cuprates. Interestingly, our DQMC simulations predict a charge-transfer gap that is significantly smaller than the direct (optical) gap measured in experiment. Most likely, it corresponds to the indirect gap that has recently been suggested to be on the order of 0.8 eV, and demonstrates the subtlety in identifying charge gaps.

  9. Dependence of AeroMACS Interference on Airport Radiation Pattern Characteristics

    NASA Technical Reports Server (NTRS)

    Wilson, Jeffrey D.

    2012-01-01

    AeroMACS (Aeronautical Mobile Airport Communications System), which is based upon the IEEE 802.16e mobile wireless standard, is expected to be implemented in the 5091 to 5150 MHz frequency band. As this band is also occupied by Mobile Satellite Service (MSS) feeder uplinks, AeroMACS must be designed to avoid interference with this incumbent service. The aspects of AeroMACS operation that present potential interference are under analysis in order to enable the definition of standards that assure that such interference will be avoided. In this study, the cumulative interference power distribution at low earth orbit from AeroMACS transmitters at the 497 major airports in the contiguous United States was simulated with the Visualyse Professional software. The dependence of the interference power on the number of antenna beams per airport, gain patterns, and beam direction orientations was simulated. As a function of these parameters, the simulation results are presented in terms of the limitations on transmitter power required to maintain the cumulative interference power under the established threshold.

  11. Investigation of Joint Visibility Between SAR and Optical Images of Urban Environments

    NASA Astrophysics Data System (ADS)

    Hughes, L. H.; Auer, S.; Schmitt, M.

    2018-05-01

    In this paper, we present a workflow to investigate the joint visibility between very-high-resolution SAR and optical images of urban scenes. For this task, we extend the simulation framework SimGeoI to enable the simulation of individual pixels rather than complete images. Using the extended SimGeoI simulator, we carry out a case study using a TerraSAR-X staring spotlight image and a WorldView-2 panchromatic image acquired over the city of Munich, Germany. The results of this study indicate that about 55% of the scene is visible in both images, and thus suitable for matching and data fusion endeavours, while about 25% of the scene is affected by either radar shadow or optical occlusion. Taking the image acquisition parameters into account, our findings can provide support regarding the definition of upper bounds for image fusion tasks, as well as help to improve acquisition planning with respect to different application goals.

  12. Eigenvector method for umbrella sampling enables error analysis

    PubMed Central

    Thiede, Erik H.; Van Koten, Brian; Weare, Jonathan; Dinner, Aaron R.

    2016-01-01

    Umbrella sampling efficiently yields equilibrium averages that depend on exploring rare states of a model by biasing simulations to windows of coordinate values and then combining the resulting data with physical weighting. Here, we introduce a mathematical framework that casts the step of combining the data as an eigenproblem. The advantage to this approach is that it facilitates error analysis. We discuss how the error scales with the number of windows. Then, we derive a central limit theorem for averages that are obtained from umbrella sampling. The central limit theorem suggests an estimator of the error contributions from individual windows, and we develop a simple and computationally inexpensive procedure for implementing it. We demonstrate this estimator for simulations of the alanine dipeptide and show that it emphasizes low free energy pathways between stable states in comparison to existing approaches for assessing error contributions. Our work suggests the possibility of using the estimator and, more generally, the eigenvector method for umbrella sampling to guide adaptation of the simulation parameters to accelerate convergence. PMID:27586912
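
    A minimal numpy sketch of the eigenvector estimator as described: build the row-stochastic overlap matrix F from the per-window samples, take its leading left eigenvector z, and use z to reweight per-window averages into an unbiased estimate; the function and variable names are ours, not the authors' code.

        # Sketch: eigenvector (EMUS-style) combination of umbrella-sampling windows.
        import numpy as np

        def emus_average(samples, psis, g):
            """samples: list of 1-D arrays, one per window;
               psis: bias functions psi_i(x); g: observable to average."""
            L = len(samples)
            F = np.zeros((L, L))
            num = np.zeros(L); den = np.zeros(L)
            for i, x in enumerate(samples):
                psi_vals = np.stack([psi(x) for psi in psis])   # shape (L, n_i)
                total = psi_vals.sum(axis=0)
                F[i] = (psi_vals / total).mean(axis=1)          # F_ij = <psi_j / sum_k psi_k>_i
                num[i] = np.mean(g(x) / total)
                den[i] = np.mean(1.0 / total)
            evals, evecs = np.linalg.eig(F.T)                   # left eigenvectors of F
            z = np.real(evecs[:, np.argmax(np.real(evals))])    # stationary vector, z F = z
            z = np.abs(z) / np.abs(z).sum()
            return (z @ num) / (z @ den)                        # unbiased estimate of <g>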

  13. A Computational Framework for Bioimaging Simulation.

    PubMed

    Watabe, Masaki; Arjunan, Satya N V; Fukushima, Seiya; Iwamoto, Kazunari; Kozuka, Jun; Matsuoka, Satomi; Shindo, Yuki; Ueda, Masahiro; Takahashi, Koichi

    2015-01-01

    Using bioimaging technology, biologists have attempted to identify and document analytical interpretations that underlie biological phenomena in living cells. Theoretical biology aims at distilling those interpretations into knowledge in the mathematical form of biochemical reaction networks, and at understanding how higher-level functions emerge from the combined action of biomolecules. However, formidable challenges remain in bridging the gap between bioimaging and mathematical modeling. Generally, measurements using fluorescence microscopy systems are influenced by systematic effects that arise from the stochastic nature of biological cells, the imaging apparatus, and optical physics. Such systematic effects are always present in all bioimaging systems and hinder quantitative comparison between the cell model and bioimages. Computational tools for such a comparison are still unavailable. Thus, in this work, we present a computational framework for handling the parameters of cell models and the optical physics governing bioimaging systems. Simulation using this framework can generate digital images of cell simulation results after accounting for the systematic effects. We then demonstrate that such a framework enables comparison at the level of photon-counting units.

  14. Analytical design of modified Smith predictor for unstable second-order processes with time delay

    NASA Astrophysics Data System (ADS)

    Ajmeri, Moina; Ali, Ahmad

    2017-06-01

    In this paper, a modified Smith predictor using three controllers, namely a stabilising controller (Gc), a set-point tracking controller (Gc1), and a load disturbance rejection controller (Gc2), is proposed for second-order unstable processes with time delay. The controllers of the proposed structure are tuned using the direct synthesis approach, as this method enables the user to achieve a trade-off between performance and robustness by adjusting a single design parameter. Furthermore, suitable values of the tuning parameters are recommended after studying their effect on closed-loop performance and robustness. This is the main advantage of the proposed work over other recently published manuscripts, in which the authors provide only suitable ranges for the tuning parameters instead of specific suitable values. Simulation studies show that the proposed method results in satisfactory performance and improved robustness compared to recently reported control schemes. It is observed that the proposed scheme is also able to work in a noisy environment.
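
    To illustrate the single-parameter trade-off, the textbook direct synthesis rule for the easier case of a stable first-order-plus-dead-time process G(s) = K e^(-θs)/(τs + 1) is sketched below, where requiring a first-order closed-loop response with time constant λ gives PI settings Kc = τ/(K(λ + θ)) and τ_I = τ; the paper's own derivation for unstable second-order processes is more involved, so this is only an analogy.

        # Sketch: direct-synthesis PI tuning for a stable FOPDT process,
        # showing how the single design parameter lambda trades speed for robustness.
        def direct_synthesis_pi(K, tau, theta, lam):
            Kc = tau / (K * (lam + theta))   # smaller lam -> faster, less robust
            tau_I = tau
            return Kc, tau_I

        for lam in (0.5, 1.0, 2.0):          # design parameter sweep (time units)
            print(lam, direct_synthesis_pi(K=2.0, tau=5.0, theta=1.0, lam=lam))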

  15. Analysis of Welding Zinc Coated Steel Sheets in Zero Gap Configuration by 3D Simulations and High Speed Imaging

    NASA Astrophysics Data System (ADS)

    Koch, Holger; Kägeler, Christian; Otto, Andreas; Schmidt, Michael

    Welding of zinc-coated sheets in zero-gap configuration is of great interest for the automotive industry: such a laser welding process would enable the industry to build auto bodies with high durability in a simple manufacturing process. Today, good welding results can only be achieved by expensive constructive measures such as clamping devices that ensure a defined gap. Welding in zero-gap configuration is a big challenge because of the vaporised zinc expelled from the interface between the two sheets. To find appropriate welding parameters for influencing the keyhole and melt pool dynamics, a three-dimensional simulation and a high-speed imaging system for laser keyhole welding have been developed. The obtained results help to understand the melt pool perturbation caused by vaporised zinc.

  16. A coarse-grained simulation for the folding of molybdenum disulphide

    NASA Astrophysics Data System (ADS)

    Wang, Cui-Xia; Zhang, Chao; Jiang, Jin-Wu; Rabczuk, Timon

    2016-01-01

    We investigate the folding of molybdenum disulphide (MoS2) using coarse-grained (CG) simulations, in which all the parameters are determined analytically from the Stillinger-Weber atomic potential. Owing to its simplicity, the CG model can be used to derive analytic predictions for the relaxed configuration of the folded MoS2 and the resonant frequency for the breathing-like oscillation. We disclose two interesting phenomena for the breathing-like oscillation in the folded MoS2. First, the breathing-like oscillation is self-actuated, since this oscillation can be actuated by intrinsic thermal vibrations without any external actuation force. Second, the resonant frequency of the breathing-like oscillation is insensitive to the adsorption effect. These two features enable practical applications of the folded MoS2 based nanoresonators, where stable resonant oscillations are desirable.

  17. Glyph-based analysis of multimodal directional distributions in vector field ensembles

    NASA Astrophysics Data System (ADS)

    Jarema, Mihaela; Demir, Ismail; Kehrer, Johannes; Westermann, Rüdiger

    2015-04-01

    Ensemble simulations are increasingly often performed in the geosciences in order to study the uncertainty and variability of model predictions. Describing ensemble data by mean and standard deviation can be misleading in case of multimodal distributions. We present first results of a glyph-based visualization of multimodal directional distributions in 2D and 3D vector ensemble data. Directional information on the circle/sphere is modeled using mixtures of probability density functions (pdfs), which enables us to characterize the distributions with relatively few parameters. The resulting mixture models are represented by 2D and 3D lobular glyphs showing direction, spread and strength of each principal mode of the distributions. A 3D extension of our approach is realized by means of an efficient GPU rendering technique. We demonstrate our method in the context of ensemble weather simulations.

  18. Insolation-oriented model of photovoltaic module using Matlab/Simulink

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tsai, Huan-Liang

    2010-07-15

    This paper presents a novel model of a photovoltaic (PV) module which is implemented and analyzed using the Matlab/Simulink software package. Taking into account the effect of sunlight irradiance on the cell temperature, the proposed model uses ambient temperature as the reference input and the solar insolation as the unique varying parameter; the cell temperature is then explicitly affected by the sunlight intensity. The output current and power characteristics are simulated and analyzed using the proposed PV model, and the model has been verified through experimental measurement. The impact of solar irradiation on cell temperature makes the output characteristic more practical. In addition, the insolation-oriented PV model enables the dynamics of a PV power system to be analyzed and optimized more easily by applying the environmental parameters of ambient temperature and solar irradiance. (author)
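
    A minimal sketch of an insolation-oriented single-diode cell model in this spirit, using the common NOCT approximation for the irradiance-driven cell temperature; the device constants are illustrative and not taken from the paper.

        # Sketch: photocurrent scales with irradiance G; cell temperature rises
        # above ambient with absorbed sunlight (NOCT approximation).
        import numpy as np

        Q, KB = 1.602e-19, 1.381e-23

        def pv_current(V, G, T_amb, Isc_ref=5.0, I0_ref=1e-9, n=1.3, noct=45.0):
            T_cell = T_amb + (noct - 20.0) / 800.0 * G      # deg C, NOCT formula
            Tk = T_cell + 273.15
            Iph = Isc_ref * G / 1000.0                      # photocurrent ~ irradiance
            Vt = n * KB * Tk / Q
            return Iph - I0_ref * (np.exp(V / Vt) - 1.0)    # single-diode law (no Rs/Rsh)

        V = np.linspace(0, 0.7, 71)
        for G in (200, 600, 1000):                          # W/m^2
            I = pv_current(V, G, T_amb=25.0)
            print(G, "W/m^2 -> Pmax ~", round((V * I).max(), 2), "W per cell")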

  19. Uncertainty-enabled design of electromagnetic reflectors with integrated shape control

    NASA Astrophysics Data System (ADS)

    Haque, Samiul; Kindrat, Laszlo P.; Zhang, Li; Mikheev, Vikenty; Kim, Daewa; Liu, Sijing; Chung, Jooyeon; Kuian, Mykhailo; Massad, Jordan E.; Smith, Ralph C.

    2018-03-01

    We implemented a computationally efficient model for a corner-supported, thin, rectangular, orthotropic polyvinylidene fluoride (PVDF) laminate membrane, actuated by a two-dimensional array of segmented electrodes. The laminate can be used as shape-controlled electromagnetic reflector and the model estimates the reflector's shape given an array of control voltages. In this paper, we describe a model to determine the shape of the laminate for a given distribution of control voltages. Then, we investigate the surface shape error and its sensitivity to the model parameters. Subsequently, we analyze the simulated deflection of the actuated bimorph using a Zernike polynomial decomposition. Finally, we provide a probabilistic description of reflector performance using statistical methods to quantify uncertainty. We make design recommendations for nominal parameter values and their tolerances based on optimization under uncertainty using multiple methods.

  20. BEAM OPTIMIZATION STUDY FOR AN X-RAY FEL OSCILLATOR AT THE LCLS-II

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qin, Weilun; Huang, S.; Liu, K.X.

    2016-06-01

    The 4 GeV LCLS-II superconducting linac, with its high repetition rate, enables the possibility of driving an X-ray FEL oscillator at harmonic frequencies. Compared to the regular LCLS-II machine setup, the oscillator mode requires a much longer bunch length with a relatively lower current. A flat longitudinal phase space distribution is also critical to maintaining the FEL gain, since the X-ray cavity has an extremely narrow bandwidth. In this paper, we study the longitudinal phase space optimization, including shaping the initial beam from the injector and optimizing the bunch compressor and dechirper parameters. We obtain a bunch with a flat energy chirp over 400 fs in the core part, with current above 100 A. The optimization was based on LiTrack and Elegant simulations using LCLS-II beam parameters.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mudholkar, Mihir; Ahmed, Shamin; Ericson, Milton Nance

    A compact model for SiC power MOSFETs is presented. The model features a physical description of the channel current and internal capacitances, and has been validated for dc, CV, and switching characteristics against measured data from a 1200-V, 20-A SiC power MOSFET over a temperature range of 25 °C to 225 °C. The peculiar variation of on-state resistance with temperature for SiC power MOSFETs has also been demonstrated through measurements and accounted for in the developed model. To improve the user experience with the model, a new datasheet-driven parameter extraction strategy is presented which requires only data available in device datasheets, enabling quick parameter extraction for off-the-shelf devices. Excellent agreement is shown between measurement and simulation using the presented model over the entire temperature range.

  2. Simulation of diesel engine emissions on the example of Fiat Panda in the NEDC test

    NASA Astrophysics Data System (ADS)

    Botwinska, Katarzyna; Mruk, Remigiusz; Słoma, Jacek; Tucki, Karol; Zaleski, Mateusz

    2017-10-01

    Road transport may be deemed a strategic branch of the modern economy. Unfortunately, a rapid increase in the number of on-road motor vehicles entails negative consequences as well, for instance excessive concentration of exhaust gases produced by engines, which results in deterioration of air quality. The EURO emission standards, which define acceptable limits for the exhaust emissions of power units, are an example of an effort to improve air quality: the EURO standard defines the permissible amount of exhaust gases produced by a vehicle, and new units are presently examined through the NEDC test. For the purpose of this thesis, a virtual test stand in the form of a computer simulation of a chassis dynamometer was used to simulate the emissions of a diesel (compression-ignition) engine in the NEDC test. Actual parameters of the 1.3 MultiJet engine of the 2014 Fiat Panda passenger car were applied in the model, and the simulation was carried out in the Matlab Simulink environment. The simulation model of the Fiat Panda passenger car enables the determination of the emission waveform for all test stages, corresponding to the values obtained during an approval test in real-life conditions.

  3. Rapid Prototyping of an Aircraft Model in an Object-Oriented Simulation

    NASA Technical Reports Server (NTRS)

    Kenney, P. Sean

    2003-01-01

    A team was created to participate in the Mars Scout Opportunity. Trade studies determined that an aircraft provided the best opportunity to complete the science objectives of the team. A high fidelity six degree of freedom flight simulation was required to provide credible evidence that the aircraft design fulfilled mission objectives and to support the aircraft design process by providing performance evaluations. The team created the simulation using the Langley Standard Real-Time Simulation in C++ (LaSRS++) application framework. A rapid prototyping approach was necessary because the team had only three months to both develop the aircraft simulation model and evaluate aircraft performance as the design and mission parameters matured. The design of LaSRS++ enabled rapid-prototyping in several ways. First, the framework allowed component models to be designed, implemented, unit-tested, and integrated quickly. Next, the framework provides a highly reusable infrastructure that allowed developers to maximize code reuse while concentrating on aircraft and mission specific features. Finally, the framework reduces risk by providing reusable components that allow developers to build a quality product with a compressed testing cycle that relies heavily on unit testing of new components.

  4. Synthetic Survey of the Kepler Field

    NASA Astrophysics Data System (ADS)

    Wells, Mark; Prša, Andrej

    2018-01-01

    In the era of large scale surveys, including LSST and Gaia, binary population studies will flourish due to the large influx of data. In addition to probing binary populations as a function of galactic latitude, under-sampled groups such as low mass binaries will be observed at an unprecedented rate. To prepare for these missions, binary population simulations need to be carried out at high fidelity. These simulations will enable the creation of simulated data and, through comparison with real data, will allow the underlying binary parameter distributions to be explored. In order for the simulations to be considered robust, they should reproduce observed distributions accurately. To this end we have developed a simulator which takes input models and creates a synthetic population of eclipsing binaries. Starting from a galactic single star model, implemented using Galaxia, a code by Sharma et al. (2011), and applying observed multiplicity, mass-ratio, period, and eccentricity distributions, as reported by Raghavan et al. (2010), Duchêne & Kraus (2013), and Moe & Di Stefano (2017), we are able to generate synthetic binary surveys that correspond to any survey cadences. In order to calibrate our input models we compare the results of our synthesized eclipsing binary survey to the Kepler Eclipsing Binary catalog.

  5. Scalable Parameter Estimation for Genome-Scale Biochemical Reaction Networks

    PubMed Central

    Kaltenbacher, Barbara; Hasenauer, Jan

    2017-01-01

    Mechanistic mathematical modeling of biochemical reaction networks using ordinary differential equation (ODE) models has improved our understanding of small- and medium-scale biological processes. While the same should in principle hold for large- and genome-scale processes, the computational methods for the analysis of ODE models which describe hundreds or thousands of biochemical species and reactions are missing so far. While individual simulations are feasible, the inference of the model parameters from experimental data is computationally too intensive. In this manuscript, we evaluate adjoint sensitivity analysis for parameter estimation in large scale biochemical reaction networks. We present the approach for time-discrete measurement and compare it to state-of-the-art methods used in systems and computational biology. Our comparison reveals a significantly improved computational efficiency and a superior scalability of adjoint sensitivity analysis. The computational complexity is effectively independent of the number of parameters, enabling the analysis of large- and genome-scale models. Our study of a comprehensive kinetic model of ErbB signaling shows that parameter estimation using adjoint sensitivity analysis requires a fraction of the computation time of established methods. The proposed method will facilitate mechanistic modeling of genome-scale cellular processes, as required in the age of omics. PMID:28114351
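
    The key computational point, a gradient cost nearly independent of the parameter count, can be sketched with a differentiable ODE solver: jax.grad through odeint performs a single backward (adjoint-style) pass rather than one forward sensitivity sweep per parameter. The two-species model and data below are invented.

        # Sketch: gradient of an ODE-fitting loss w.r.t. all rate constants at once.
        import jax.numpy as jnp
        from jax import grad
        from jax.experimental.ode import odeint

        t_obs = jnp.linspace(0.0, 5.0, 20)
        y_obs = jnp.stack([jnp.exp(-0.5 * t_obs), 1 - jnp.exp(-0.5 * t_obs)], axis=1)

        def rhs(y, t, k):
            # Toy two-species chain: A -> B -> (degraded)
            return jnp.array([-k[0] * y[0], k[0] * y[0] - k[1] * y[1]])

        def loss(k):
            y = odeint(rhs, jnp.array([1.0, 0.0]), t_obs, k)
            return jnp.sum((y - y_obs) ** 2)

        k0 = jnp.array([0.4, 0.1])
        print(loss(k0), grad(loss)(k0))   # one backward pass yields the full gradient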

  6. Flight parameter estimation using instantaneous frequency and direction of arrival measurements from a single acoustic sensor node.

    PubMed

    Lo, Kam W

    2017-03-01

    When an airborne sound source travels past a stationary ground-based acoustic sensor node in a straight line at constant altitude and constant speed that is not much less than the speed of sound in air, the movement of the source during the propagation of the signal from the source to the sensor node (commonly referred to as the "retardation effect") enables the full set of flight parameters of the source to be estimated by measuring the direction of arrival (DOA) of the signal at the sensor node over a sufficiently long period of time. This paper studies the possibility of using instantaneous frequency (IF) measurements from the sensor node to improve the precision of the flight parameter estimates when the source spectrum contains a harmonic line of constant frequency. A simplified Cramer-Rao lower bound analysis shows that the standard deviations in the estimates of the flight parameters can be reduced when IF measurements are used together with DOA measurements. Two flight parameter estimation algorithms that utilize both IF and DOA measurements are described and their performances are evaluated using both simulated data and real data.
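
    The retardation effect described above admits a compact measurement model for a level, constant-velocity flyover: the frequency received at time t is the source tone Doppler-shifted by the range rate at the emission time t_e, found by solving t = t_e + r(t_e)/c with slant range r(t_e) = sqrt(h² + (v t_e)²). A sketch with illustrative values:

        # Sketch: instantaneous frequency of a harmonic source during a flyover,
        # with the emission time recovered by fixed-point iteration.
        import numpy as np

        C = 343.0                                   # speed of sound (m/s)
        f0, v, h = 100.0, 80.0, 300.0               # tone (Hz), speed (m/s), altitude (m)

        def received_frequency(t):
            t_e = t
            for _ in range(50):                     # contraction since v < C
                t_e = t - np.hypot(h, v * t_e) / C
            rdot = (v ** 2) * t_e / np.hypot(h, v * t_e)   # range rate at emission
            return f0 / (1.0 + rdot / C)                   # Doppler-shifted IF

        for t in (-10.0, 0.0, 10.0):                # before, at, after closest approach
            print(t, round(received_frequency(t), 2), "Hz")

    Fitting this model (jointly with the DOA model) to measured IF and DOA time series by nonlinear least squares then recovers f0, v, h, and the time of closest approach.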

  7. In situ determination of the static inductance and resistance of a plasma focus capacitor bank.

    PubMed

    Saw, S H; Lee, S; Roy, F; Chong, P L; Vengadeswaran, V; Sidik, A S M; Leong, Y W; Singh, A

    2010-05-01

    The static (unloaded) electrical parameters of a capacitor bank are of utmost importance for the purpose of modeling the system as a whole when the capacitor bank is discharged into its dynamic electromagnetic load. Using a physical short circuit across the electromagnetic load is usually technically difficult and is unnecessary. The discharge can be operated at the highest pressure permissible in order to minimize current sheet motion, thus simulating zero dynamic load, to enable bank parameters, static inductance L(0), and resistance r(0) to be obtained using lightly damped sinusoid equations given the bank capacitance C(0). However, for a plasma focus, even at the highest permissible pressure it is found that there is significant residual motion, so that the assumption of a zero dynamic load introduces unacceptable errors into the determination of the circuit parameters. To overcome this problem, the Lee model code is used to fit the computed current trace to the measured current waveform. Hence the dynamics is incorporated into the solution and the capacitor bank parameters are computed using the Lee model code, and more accurate static bank parameters are obtained.
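
    The lightly damped sinusoid relations mentioned above can be inverted in closed form: from the ringing period T and the ratio of successive current peak magnitudes (spaced T/2 apart), L0 and r0 follow for a known C0. The paper's point is that for a plasma focus this simple inversion must be refined by model fitting, but the relations themselves are a few lines:

        # Sketch: bank parameters from a short-circuit ringing waveform.
        # For I(t) ~ exp(-alpha t) sin(omega t): successive peak ratio = exp(-alpha T/2),
        # and omega^2 = 1/(L0 C0) - alpha^2.
        import numpy as np

        def bank_parameters(T, peak_ratio, C0):
            alpha = -2.0 / T * np.log(peak_ratio)
            omega = 2.0 * np.pi / T
            L0 = 1.0 / (C0 * (omega**2 + alpha**2))
            r0 = 2.0 * alpha * L0
            return L0, r0

        L0, r0 = bank_parameters(T=6.8e-6, peak_ratio=0.7, C0=30e-6)  # illustrative values
        print(L0 * 1e9, "nH,", r0 * 1e3, "mOhm")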

  8. In situ determination of the static inductance and resistance of a plasma focus capacitor bank

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saw, S. H.; Lee, S.

    2010-05-15

    The static (unloaded) electrical parameters of a capacitor bank are of utmost importance for the purpose of modeling the system as a whole when the capacitor bank is discharged into its dynamic electromagnetic load. Using a physical short circuit across the electromagnetic load is usually technically difficult and is unnecessary. The discharge can be operated at the highest pressure permissible in order to minimize current sheet motion, thus simulating zero dynamic load, to enable bank parameters, static inductance L0, and resistance r0 to be obtained using lightly damped sinusoid equations given the bank capacitance C0. However, for a plasma focus, even at the highest permissible pressure it is found that there is significant residual motion, so that the assumption of a zero dynamic load introduces unacceptable errors into the determination of the circuit parameters. To overcome this problem, the Lee model code is used to fit the computed current trace to the measured current waveform. Hence the dynamics is incorporated into the solution and the capacitor bank parameters are computed using the Lee model code, and more accurate static bank parameters are obtained.

  9. Optimized microsystems-enabled photovoltaics

    DOEpatents

    Cruz-Campa, Jose Luis; Nielson, Gregory N.; Young, Ralph W.; Resnick, Paul J.; Okandan, Murat; Gupta, Vipin P.

    2015-09-22

    Technologies pertaining to designing microsystems-enabled photovoltaic (MEPV) cells are described herein. A first restriction for a first parameter of an MEPV cell is received. Subsequently, a selection of a second parameter of the MEPV cell is received. Values for a plurality of parameters of the MEPV cell are computed such that the MEPV cell is optimized with respect to the second parameter, wherein the values for the plurality of parameters are computed based at least in part upon the restriction for the first parameter.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strout, Michelle

    Programming parallel machines is fraught with difficulties: the obfuscation of algorithms due to implementation details such as communication and synchronization, the need for transparency between language constructs and performance, the difficulty of performing program analysis to enable automatic parallelization techniques, and the existence of important "dusty deck" codes. The SAIMI project developed abstractions that enable the orthogonal specification of algorithms and implementation details within the context of existing DOE applications. The main idea is to enable the injection of small programming models such as expressions involving transcendental functions, polyhedral iteration spaces with sparse constraints, and task graphs into full programs through the use of pragmas. These smaller, more restricted programming models enable the orthogonal specification of many implementation details such as how to map the computation onto parallel processors, how to schedule the computation, and how to allocate storage for the computation. At the same time, these small programming models enable the expression of the most computationally intense and communication-heavy portions of many scientific simulations. The ability to orthogonally manipulate the implementation for such computations will significantly ease performance programming efforts and expose transformation possibilities and parameters to automated approaches such as autotuning. At Colorado State University, the SAIMI project was supported through DOE grant DE-SC3956 from April 2010 through August 2015. The SAIMI project has contributed a number of important results to programming abstractions that enable the orthogonal specification of implementation details in scientific codes. This final report summarizes the research that was funded by the SAIMI project.

  11. Simulation and Data Analytics for Mobile Road Weather Sensors

    NASA Astrophysics Data System (ADS)

    Chettri, S. R.; Evans, J. D.; Tislin, D.

    2016-12-01

    Numerous algorithmic and theoretical considerations arise in simulating a vehicle-based weather observation network known as the Mobile Platform Environmental Data (MoPED). MoPED integrates sensor data from a fleet of commercial vehicles (about 600 at last count, with thousands more to come) as they travel interstate, state, and local routes and metropolitan areas throughout the conterminous United States. The MoPED simulator models a fleet of between 1,000 and 10,000 vehicles that travel a highway network encoded in a geospatial database, starting and finishing at random times and moving at randomly varying speeds. Virtual instruments aboard these vehicles interpolate surface weather parameters (such as temperature and pressure) from the High-Resolution Rapid Refresh (HRRR) data series, an hourly, coast-to-coast 3 km grid of weather parameters modeled by the National Centers for Environmental Prediction. Whereas real MoPED sensors have noise characteristics that lead to drop-outs, drift, or physically unrealizable values, our simulation introduces a variety of noise distributions into the parameter values inferred from HRRR. Finally, the simulator collects weather readings from the National Weather Service's Automated Surface Observation System (ASOS, comprising over 800 airport stations around the country) for comparison, validation, and analytical experiments. The simulator's MoPED-like weather data stream enables studies like the following: experimenting with data analysis and calibration methods, e.g., by comparing noisy vehicle data with ASOS "ground truth" in close spatial and temporal proximity (e.g., 10 km, 10 min); inter-calibrating different vehicles' sensors when they pass near each other; detecting spatial structure in the surface weather, such as dry lines and the sudden changes in humidity that accompany severe weather, and estimating how many vehicles are needed to reliably map these structures and their motion; detecting bottlenecks in the MoPED data infrastructure to ensure real-time data filtering and dissemination as the number of vehicles scales up, or tuning the data structures needed to keep track of individual sensor calibrations; and expanding the analytical and data management approach to other mobile weather sensors such as smartphones.
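
    As an illustration of the noise-injection step described above, the snippet below perturbs a clean, interpolated temperature series with slow drift, white noise, and random drop-outs. The distributions and magnitudes are invented for illustration and are not MoPED's actual noise model.

        import numpy as np

        rng = np.random.default_rng(42)

        def corrupt_sensor_series(clean, drift_rate=0.002, sigma=0.3, dropout_p=0.01):
            """Turn a clean, HRRR-interpolated series into a 'sensor-like' series.

            clean      -- 1-D array of true parameter values (e.g. temperature, K)
            drift_rate -- slow calibration drift per sample
            sigma      -- std. dev. of white measurement noise
            dropout_p  -- probability that a sample is lost (returned as NaN)
            """
            n = clean.size
            drift = drift_rate * np.arange(n)        # slow linear calibration drift
            noise = rng.normal(0.0, sigma, n)        # white Gaussian noise
            out = clean + drift + noise
            out[rng.random(n) < dropout_p] = np.nan  # random drop-outs
            return out

        truth = 288.0 + 2.0 * np.sin(np.linspace(0, 2 * np.pi, 500))  # synthetic "HRRR" series
        observed = corrupt_sensor_series(truth)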

  12. Optimal performance of single-column chromatography and simulated moving bed processes for the separation of optical isomers

    NASA Astrophysics Data System (ADS)

    Medi, Bijan; Kazi, Monzure-Khoda; Amanullah, Mohammad

    2013-06-01

    Chromatography has been established as the method of choice for the separation and purification of optically pure drugs, a market of about 250 billion USD. Single-column chromatography (SCC) is commonly used in the development and testing phases of drug development, while multi-column simulated moving bed (SMB) chromatography is more suitable for large-scale production due to its continuous nature. In this study, the optimal performance of SCC and SMB processes for the separation of optical isomers under linear and overloaded separation conditions has been investigated. The performance indicators, namely productivity and desorbent requirement, have been compared under geometric similarity for the separation of a mixture of guaifenesin and Tröger's base enantiomers. The SCC process has been analyzed under the equilibrium assumption, i.e., assuming infinite column efficiency and zero dispersion, and its optimal performance parameters are compared with the optimal prediction of an SMB process by triangle theory. Simulation results obtained using actual experimental data indicate that SCC may compete with SMB in terms of productivity, depending on the molecules to be separated. In addition, insights into process performance in terms of degrees of freedom and into the relationship between the optimal operating point and the solubility limit of the optical isomers have been obtained. This investigation enables the appropriate selection of single- or multi-column chromatographic processes based on column packing properties and isotherm parameters.

  13. A gyrokinetic perspective on the JET-ILW pedestal

    NASA Astrophysics Data System (ADS)

    Hatch, D. R.; Kotschenreuther, M.; Mahajan, S.; Valanju, P.; Liu, X.

    2017-03-01

    JET has been unable to recover historical confinement levels when operating with an ITER-like wall (ILW), due largely to the inaccessibility of high pedestal temperatures. Finding a path to overcome this challenge is of utmost importance for both a prospective JET DT campaign and for future ITER operation. Gyrokinetic simulations (using the Gene code) quantitatively capture experimental transport levels for a representative experimental discharge and qualitatively recover the major experimental trends. Microtearing turbulence is a major transport mechanism for the low-temperature pedestals characteristic of unseeded JET-ILW discharges. At higher temperatures and/or lower ρ*, we identify electrostatic ITG transport of a type that is strongly shear-suppressed on smaller machines. Consistent with observations, this transport mechanism is strongly reduced by the presence of a low-Z impurity (e.g. carbon or nitrogen at the level of Z_eff ∼ 2), recovering the accessibility of high pedestal temperatures. Notably, simulations based on dimensionless ρ* scans recover historical scaling behavior except in the unique JET-ILW parameter regime where ITG turbulence becomes important. Our simulations also elucidate the observed degradation of confinement caused by gas puffing, emphasizing the important role of the density pedestal structure. This study maps out important regions of parameter space, providing insights that may point to optimal physical regimes that can enable the recovery of high pedestal temperatures on JET.

  14. Modelling and numerical simulation of the in vivo mechanical response of the ascending aortic aneurysm in Marfan syndrome.

    PubMed

    García-Herrera, Claudio M; Celentano, Diego J; Herrera, Emilio A

    2017-03-01

    Marfan syndrome (MFS) is a genetic disorder that affects connective tissue, impairing cardiovascular structures and function, such as heart valves and the aorta. Thus, patients with Marfan disease have a higher risk of developing circulatory problems associated with mitral and aortic valve prolapse, manifested as a dilated aorta and aortic aneurysm. However, little is known about the biomechanical characteristics of these structures affected by MFS. This study presents the modelling and simulation of the mechanical response of human ascending aortic aneurysms in MFS under in vivo conditions with intraluminal pressures within normotensive and hypertensive ranges. We obtained ascending aortic segments from five adults with MFS subjected to vascular prosthesis implantation replacing an aortic aneurysm. We characterised the arterial samples via ex vivo tensile test measurements that enabled fitting the material parameters of a hyperelastic isotropic constitutive model. These material parameters were then used in a numerical simulation of an ascending aortic aneurysm subjected to in vivo normotensive and hypertensive conditions. In addition, we assessed different constraints related to the movement of the aortic root. Overall, our results provide not only a realistic description of the mechanical behaviour of the vessel, but also useful data about stress/stretch-based criteria to predict vascular rupture. This knowledge may be included in clinical assessment to determine risk and indicate surgical intervention.
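
    The abstract does not specify which hyperelastic isotropic model was fitted. As a minimal sketch of the fitting step only, the following fits a one-parameter incompressible neo-Hookean law, whose uniaxial Cauchy stress is σ = μ(λ² − 1/λ), to synthetic stress-stretch data; the data and parameter values are invented.

        import numpy as np
        from scipy.optimize import curve_fit

        def neo_hookean_uniaxial(lam, mu):
            """Cauchy stress for incompressible uniaxial tension of a neo-Hookean solid."""
            return mu * (lam**2 - 1.0 / lam)

        # Synthetic tensile-test data (stretch lambda, Cauchy stress in kPa), for illustration
        lam_data = np.linspace(1.0, 1.4, 9)
        stress_data = neo_hookean_uniaxial(lam_data, mu=120.0)
        stress_data += np.random.default_rng(0).normal(0, 2.0, lam_data.size)  # noise

        (mu_fit,), cov = curve_fit(neo_hookean_uniaxial, lam_data, stress_data, p0=[50.0])
        print(f"fitted shear modulus mu = {mu_fit:.1f} kPa")

    The fitted material parameters would then parameterize the vessel wall in the finite-element simulation under normotensive and hypertensive loading.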

  15. Analytical and Experimental Performance Evaluation of BLE Neighbor Discovery Process Including Non-Idealities of Real Chipsets

    PubMed Central

    Perez-Diaz de Cerio, David; Hernández, Ángela; Valenzuela, Jose Luis; Valdovinos, Antonio

    2017-01-01

    The purpose of this paper is to evaluate, from a real perspective, the performance of Bluetooth Low Energy (BLE) as a technology that enables fast and reliable discovery of a large number of users/devices in a short period of time. The BLE standard specifies a wide range of configurable parameter values that determine the discovery process and need to be set according to the particular application requirements. Many previous works have investigated the discovery process through analytical and simulation models based on the ideal specification of the standard. However, measurements show that additional scanning gaps appear in the scanning process, which reduce the discovery capabilities. These gaps have been identified in all of the analyzed devices and correspond to both regular patterns and variable events associated with the decoding process. We have demonstrated that these non-idealities, which are not taken into account in other studies, have a severe impact on the discovery process performance. Extensive performance evaluation for a varying number of devices and feasible parameter combinations has been performed by comparing simulations and experimental measurements. This work also includes a simple mathematical model that closely matches both the standard implementation and the different chipset peculiarities for any possible parameter value specified in the standard and for any number of simultaneous advertising devices under scanner coverage. PMID:28273801
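
    A minimal Monte Carlo sketch of the idealized discovery process (deliberately ignoring the chipset scanning gaps the paper measures): an advertiser transmits every advInterval plus a random advDelay, and discovery occurs when an advertising event lands inside a scan window. Parameter names echo the BLE specification, but the timing model is our simplification.

        import random

        def discovery_latency(adv_interval=1.28, scan_interval=2.56, scan_window=0.64,
                              max_delay=0.010, horizon=60.0):
            """Time (s) until an advertising event falls inside a scan window, or None."""
            t = random.uniform(0, adv_interval)       # advertiser phase
            phase = random.uniform(0, scan_interval)  # scanner phase
            while t < horizon:
                # scanner listens during [phase + k*scan_interval, ... + scan_window)
                if (t - phase) % scan_interval < scan_window:
                    return t
                t += adv_interval + random.uniform(0, max_delay)  # advDelay of 0-10 ms
            return None

        lats = [discovery_latency() for _ in range(10000)]
        ok = [x for x in lats if x is not None]
        print(f"mean latency {sum(ok)/len(ok):.2f} s, discovery ratio {len(ok)/len(lats):.3f}")

    Adding the measured scanning gaps to this loop is exactly what separates the paper's realistic model from the ideal-specification models it critiques.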

  16. Diffraction effects incorporated design of a parallax barrier for a high-density multi-view autostereoscopic 3D display.

    PubMed

    Yoon, Ki-Hyuk; Ju, Heongkyu; Kwon, Hyunkyung; Park, Inkyu; Kim, Sung-Kyu

    2016-02-22

    We present the optical characteristics of view images provided by a high-density multi-view autostereoscopic 3D display (HD-MVA3D) with a parallax barrier (PB). Diffraction effects, which become of great importance in a display system that uses a PB, are considered in a one-dimensional model of the 3D display, in which the propagation of light from the display panel pixels through the PB slits to the viewing zone is simulated numerically. The simulation results are then compared to the corresponding experimental measurements and discussed. We demonstrate that, as a main parameter for view image quality evaluation, the Fresnel number can be used to determine the PB slit aperture for the best performance of the display system. It is revealed that a set of display parameters giving a Fresnel number of ∼0.7 maximizes the brightness of the view images, while Fresnel numbers of 0.4-0.5 minimize image crosstalk. The compromise between brightness and crosstalk enables optimization of their relative magnitudes and leads to the choice of a display parameter set for the HD-MVA3D with a PB that satisfies the condition where the Fresnel number lies between 0.4 and 0.7.
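
    Assuming one common convention for the Fresnel number of a slit of width w observed at distance d, N_F = (w/2)²/(λd), a short calculation shows how the reported 0.4-0.7 window constrains the slit aperture. The wavelength and gap below are illustrative, not taken from the paper.

        def fresnel_number(slit_width_m, wavelength_m, distance_m):
            """N_F = a^2 / (lambda * d), with a the slit half-width (one common convention)."""
            a = slit_width_m / 2.0
            return a * a / (wavelength_m * distance_m)

        def slit_width_for(n_f, wavelength_m, distance_m):
            """Invert the definition to get the slit width for a target Fresnel number."""
            return 2.0 * (n_f * wavelength_m * distance_m) ** 0.5

        lam, d = 550e-9, 0.01  # green light, 10 mm barrier-to-viewing-optics gap (illustrative)
        for n_f in (0.4, 0.5, 0.7):
            print(f"N_F = {n_f}: slit width = {slit_width_for(n_f, lam, d)*1e6:.0f} um")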

  17. Analytical and Experimental Performance Evaluation of BLE Neighbor Discovery Process Including Non-Idealities of Real Chipsets.

    PubMed

    Perez-Diaz de Cerio, David; Hernández, Ángela; Valenzuela, Jose Luis; Valdovinos, Antonio

    2017-03-03

    The purpose of this paper is to evaluate, from a real perspective, the performance of Bluetooth Low Energy (BLE) as a technology that enables fast and reliable discovery of a large number of users/devices in a short period of time. The BLE standard specifies a wide range of configurable parameter values that determine the discovery process and need to be set according to the particular application requirements. Many previous works have investigated the discovery process through analytical and simulation models based on the ideal specification of the standard. However, measurements show that additional scanning gaps appear in the scanning process, which reduce the discovery capabilities. These gaps have been identified in all of the analyzed devices and correspond to both regular patterns and variable events associated with the decoding process. We have demonstrated that these non-idealities, which are not taken into account in other studies, have a severe impact on the discovery process performance. Extensive performance evaluation for a varying number of devices and feasible parameter combinations has been performed by comparing simulations and experimental measurements. This work also includes a simple mathematical model that closely matches both the standard implementation and the different chipset peculiarities for any possible parameter value specified in the standard and for any number of simultaneous advertising devices under scanner coverage.

  18. Introducing stochastics into the simulation of convective precipitation events

    NASA Astrophysics Data System (ADS)

    Pistotnik, Georg

    2010-05-01

    In a joint project, the Central Institute for Meteorology and Geodynamics (ZAMG) and the Vienna University of Technology aimed to characterize strong precipitation events and their impact in the Bucklige Welt region in Eastern Austria. Both the region's hydrological and meteorological characteristics, namely its composition of virtually countless small catchments with short response times and a high frequency of summertime convective storms, cause the occurrence of flooding to be strictly tied to convective rainfall events, which is why this study focused on this type of precipitation. The meteorological database consists of the ZAMG's high-resolution analysis and nowcasting system INCA ("Integrated Nowcasting through Comprehensive Analysis"), which provides a set of precipitation analyses generated by a statistically optimized combination of rain gauge measurements and radar data with a temporal resolution of 15 minutes and a spatial resolution of 1 kilometre. An intensity threshold of 3.8 mm/15 min has been used to classify observed precipitation as convective, thus extracting 245 convection days with a total of almost 1600 individual storm events over the project region out of the 5-year data set from 2003 to 2007. Consecutive analyses were used to compute the motion of these storms, a complex process that could not be completely automated; due to the repeated occurrence of storm splits or coalescences, a manual check of the automatically provided "suggestion" of movement had to be performed in order to merge two or more precipitation maxima into a single storm where necessary, thus yielding the smoothest and most plausible storm tracks and ensuring a high quality of the database. In the first part of the project, distributions for all characteristic parameters were derived, including the number of storms per day, their place and time of initiation, their motion, lifetime, maximum intensity and maximum "cell volume" (i.e. overall precipitation per time step). Both components of the mean motion as well as of its deviations could be approximated by normal distributions, whereas the number of storms per day, their lifetime, maximum intensity and maximum cell volume roughly followed exponential distributions. The shapes of the convective cells were approximated by Gaussian bells with the peak intensity and the cell volume as boundary conditions. The temporal courses of the peak intensities and cell volumes were assumed to follow parabolas symmetric with respect to half of the lifetime. In the second part of the project, these distributions were used to drive a random generator that allows an arbitrary number of convection days to be simulated in order to obtain pseudo time series of convective precipitation for each grid point. An algorithm to create correlated samples of random numbers made it possible to also account for the observed correlation between some of the parameters, e.g. between lifetime and maximum intensity or maximum cell volume. The spatial structures of the return periods of simulated convective precipitation events may provide valuable additional information when assimilated with the time series measured by the (unfortunately rather sparse) rain gauges in this region. Thus, further studies have to investigate to what extent the "convection simulator" is able to reproduce these time series. Some iterative fine-tuning of the parameter distributions as well as an extension of the database to a longer time span may further improve the results and enable the simulation of realistic spatio-temporal convection scenarios ("design storms") that have the potential to feed hydrological models and, together with vegetation and soil characteristics, help to better assess and regionalize the torrent hazard over the project region.
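
    One standard way to implement the "correlated samples of random numbers" step is a Gaussian copula: draw correlated standard normals, then map them through the target marginal distributions. The sketch below couples an exponential lifetime with an exponential maximum intensity at a chosen correlation; all parameter values are invented, and the paper's actual algorithm may differ.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)

        def correlated_exponentials(n, mean_a, mean_b, rho):
            """Draw n pairs with exponential marginals coupled by a Gaussian copula."""
            cov = [[1.0, rho], [rho, 1.0]]
            z = rng.multivariate_normal([0.0, 0.0], cov, size=n)  # correlated normals
            u = stats.norm.cdf(z)                                 # uniforms in (0, 1)
            a = stats.expon.ppf(u[:, 0], scale=mean_a)            # lifetime (min)
            b = stats.expon.ppf(u[:, 1], scale=mean_b)            # max intensity (mm/15 min)
            return a, b

        life, intensity = correlated_exponentials(1600, mean_a=45.0, mean_b=8.0, rho=0.6)
        print(np.corrcoef(life, intensity)[0, 1])  # roughly 0.6; the copula approximately preserves it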

  19. New Opportunities for Remote Sensing Ionospheric Irregularities by Fitting Scintillation Spectra

    NASA Astrophysics Data System (ADS)

    Carrano, C. S.; Rino, C. L.; Groves, K. M.

    2017-12-01

    In a recent paper, we presented a phase screen theory for the spectrum of intensity scintillations when the refractive index irregularities follow a two-component power law [Carrano and Rino, DOI: 10.1002/2015RS005903]. More recently we have investigated the inverse problem, whereby phase screen parameters are inferred from scintillation time series. This is accomplished by fitting the spectrum of intensity fluctuations with a parametrized theoretical model using Maximum Likelihood (ML) methods. The Markov-Chain Monte-Carlo technique provides a posteriori errors and confidence intervals. The Akaike Information Criterion (AIC) provides justification for the use of one- or two-component irregularity models. We refer to this fitting as Irregularity Parameter Estimation (IPE), since it provides a statistical description of the irregularities from the scintillations they produce. In this talk, we explore some new opportunities for remote sensing ionospheric irregularities afforded by IPE. Statistical characterization of irregularities and the plasma bubbles in which they are embedded provides insight into the development of the underlying instability. In a companion paper by Rino et al., IPE is used to interpret scintillation due to simulated EPB structure. IPE can be used to reconcile multi-frequency scintillation observations and to construct high-fidelity scintillation simulation tools. In space-to-ground propagation scenarios, for which an estimate of the distance to the scattering region is available a priori, IPE enables retrieval of zonal irregularity drift. In radio occultation scenarios, the distance to the irregularities is generally unknown, but IPE enables retrieval of Fresnel frequency. A geometric model for the effective scan velocity maps Fresnel frequency to Fresnel scale, yielding the distance to the irregularities. We demonstrate this approach by geolocating irregularities observed by the CORISS instrument onboard the C/NOFS satellite.

  20. Collaborative Project: Development of an Isotope-Enabled CESM for Testing Abrupt Climate Changes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Zhengyu

    One of the most important validations for a state-of-the-art Earth System Model (ESM) with respect to climate changes is the simulation of the climate evolution and abrupt climate change events of the last 21,000 years. However, one great challenge for model validation is that ESMs usually do not directly simulate the geochemical variables that can be compared directly with past proxy records. In this project, we have met this challenge by developing the capability to simulate major isotopes in a state-of-the-art ESM, the Community Earth System Model (CESM), enabling direct model-data comparison against proxy climate records. Our isotope-enabled ESM incorporates the capability of simulating key isotopes and geotracers, notably δ18O, δD, δ14C, δ13C, Nd, and Pa/Th. The isotope-enabled ESM has been used to perform simulations of the last 21,000 years. The direct comparison of these simulations with proxy records has shed light on the mechanisms of important climate change events.

  1. Simulation supported POD for RT test case-concept and modeling

    NASA Astrophysics Data System (ADS)

    Gollwitzer, C.; Bellon, C.; Deresch, A.; Ewert, U.; Jaenisch, G.-R.; Zscherpel, U.; Mistral, Q.

    2012-05-01

    Within the framework of the European project PICASSO, the radiographic simulator aRTist (analytical Radiographic Testing inspection simulation tool) developed by BAM has been extended for reliability assessment of film and digital radiography. NDT of safety-relevant aerospace components requires proof of the probability of detection (POD) of the inspection. Modeling tools can reduce the expense of such extended, time-consuming NDT trials if the simulation results fit the experiment. Our analytic simulation tool consists of three modules for the description of the radiation source, the interaction of radiation with test pieces and flaws, and the detection process, with special focus on film and digital industrial radiography. It features high processing speed with near-interactive frame rates and a high level of realism. A concept as well as a software extension for reliability investigations has been developed, completed by a user interface for planning automatic simulations with varying parameters and defects. Furthermore, an automatic image analysis procedure is included to evaluate defect visibility. Radiographic models derived from 3D CAD of aero-engine components and quality test samples are compared as a precondition for real trials. This enables the evaluation and optimization of film replacement by modern digital equipment for economical NDT with a defined POD.
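
    POD assessment commonly reduces to fitting a hit/miss curve against defect size; one widespread choice is a logistic model in log size, from which a90 (the size detected with 90% probability) follows. The generic sketch below is not the aRTist implementation; the data are invented.

        import numpy as np
        from scipy.optimize import minimize

        def fit_pod(sizes, hits):
            """Fit POD(s) = 1 / (1 + exp(-(b0 + b1*ln s))) by maximum likelihood."""
            def nll(beta):
                b0, b1 = beta
                p = 1.0 / (1.0 + np.exp(-(b0 + b1 * np.log(sizes))))
                p = np.clip(p, 1e-9, 1 - 1e-9)
                return -np.sum(hits * np.log(p) + (1 - hits) * np.log(1 - p))
            return minimize(nll, x0=[0.0, 1.0], method="Nelder-Mead").x

        # Simulated trials: defect size in mm, 1 = detected in the simulated radiograph
        sizes = np.array([0.2, 0.3, 0.4, 0.5, 0.7, 0.9, 1.2, 1.5, 2.0, 2.5])
        hits = np.array([0, 0, 0, 1, 0, 1, 1, 1, 1, 1])
        b0, b1 = fit_pod(sizes, hits)
        a90 = np.exp((np.log(9.0) - b0) / b1)   # POD = 0.9  =>  b0 + b1*ln(s) = ln 9
        print(f"a90 ~ {a90:.2f} mm")

    Replacing the physical trials behind `hits` with automatically analyzed simulated radiographs is precisely the cost reduction the abstract describes.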

  2. Numerical Simulation and Experimental Validation of MIG Welding of T-Joints of Thin Aluminum Plates for Top Class Vehicles

    NASA Astrophysics Data System (ADS)

    Bonazzi, Enrico; Colombini, Elena; Panari, Davide; Vergnano, Alberto; Leali, Francesco; Veronesi, Paolo

    2017-01-01

    The integration of experiments with numerical simulations can efficiently support a quick evaluation of the welded joint. In this work, the MIG welding operation on aluminum T-joint thin plate has been studied by the integration of both simulation and experiments. The aim of the paper is to enlarge the global database, to promote the use of thin aluminum sheets in automotive body industries, and to provide new data. Since the welding of aluminum thin plates is difficult to control due to the high speed of the heat source and the high heat flows during heating and cooling, a simulation model can be considered an effective design tool to predict the real phenomena. This integrated approach enables new evaluation possibilities on MIG-welded thin aluminum T-joints, such as the correspondence between the extension of the microstructural zones and the simulation parameters, material hardness, transient 3D temperature distribution on the surface and inside the material, stresses, strains, and deformations. The results of the mechanical simulations are comparable with the experimental measurements along the welding path, especially considering the variability of the process. The model could thus predict the welding-induced distortion, which, together with local heating during welding, must be anticipated and subsequently minimized and counterbalanced.

  3. The use of three-parameter rating table lookup programs, RDRAT and PARM3, in hydraulic flow models

    USGS Publications Warehouse

    Sanders, C.L.

    1995-01-01

    Subroutines RDRAT and PARM3 enable computer programs such as the BRANCH open-channel unsteady-flow model to route flows through or over combinations of critical-flow sections, culverts, bridges, road-overflow sections, fixed spillways, and(or) dams. The subroutines also obstruct upstream flow to simulate the operation of flapper-type tide gates. A multiplier can be applied by date and time to simulate varying numbers of tide gates being open or alternative construction scenarios for multiple culverts. The subroutines use three-parameter (headwater, tailwater, and discharge) rating table lookup methods. These tables may be manually prepared using other programs that do step-backwater computations or compute flow through bridges and culverts or over dams. The subroutines, therefore, preclude the necessity of incorporating considerable hydraulic computational code into the client program, and provide complete flexibility for users of the model for routing flow through almost any fixed structure or combination of structures. The subroutines are written in Fortran 77, and have minimal exchange of information with the BRANCH model or other possible client programs. The report documents the interpolation methodology, data input requirements, and software.
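
    The core operation of a three-parameter rating table is a two-dimensional interpolation: given headwater and tailwater elevations, look up discharge. A minimal Python analogue (not the Fortran 77 subroutines themselves; the table values are hypothetical) using SciPy:

        import numpy as np
        from scipy.interpolate import RegularGridInterpolator

        # Hypothetical rating table: discharge (m^3/s) indexed by headwater and tailwater (m)
        headwater = np.array([1.0, 1.5, 2.0, 2.5])
        tailwater = np.array([0.5, 1.0, 1.5])
        discharge = np.array([[ 5.0,  3.0,  0.0],
                              [12.0,  9.0,  4.0],
                              [22.0, 18.0, 11.0],
                              [35.0, 30.0, 21.0]])

        rating = RegularGridInterpolator((headwater, tailwater), discharge)

        q = rating([[1.8, 0.8]])[0]   # interpolated discharge for HW = 1.8 m, TW = 0.8 m
        print(f"Q = {q:.1f} m^3/s")

    A tide-gate rule would then zero the looked-up discharge whenever tailwater exceeds headwater, and the date/time multiplier would scale it for the number of gates open.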

  4. Fundamental Principles of Tremor Propagation in the Upper Limb.

    PubMed

    Davidson, Andrew D; Charles, Steven K

    2017-04-01

    Although tremor is the most common movement disorder, there exist few effective tremor-suppressing devices, in part because the characteristics of tremor throughout the upper limb are unknown. To clarify, optimally suppressing tremor requires a knowledge of the mechanical origin, propagation, and distribution of tremor throughout the upper limb. Here we present the first systematic investigation of how tremor propagates between the shoulder, elbow, forearm, and wrist. We simulated tremor propagation using a linear, time-invariant, lumped-parameter model relating joint torques and the resulting joint displacements. The model focused on the seven main degrees of freedom from the shoulder to the wrist and included coupled joint inertia, damping, and stiffness. We deliberately implemented a simple model to focus first on the most basic effects. Simulating tremorogenic joint torque as a sinusoidal input, we used the model to establish fundamental principles describing how input parameters (torque location and frequency) and joint impedance (inertia, damping, and stiffness) affect tremor propagation. We expect that the methods and principles presented here will serve as the groundwork for future refining studies to understand the origin, propagation, and distribution of tremor throughout the upper limb in order to enable the future development of optimal tremor-suppressing devices.
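
    The model class described above is a coupled linear second-order system, I q̈ + D q̇ + K q = τ(t). A minimal two-joint sketch (coefficient values invented for illustration, not physiological) shows how a sinusoidal tremorogenic torque at one joint propagates to another through the coupled inertia, damping, and stiffness:

        import numpy as np
        from scipy.integrate import solve_ivp

        # Illustrative 2-DOF coupled impedance matrices (not physiological values)
        I = np.array([[0.020, 0.004], [0.004, 0.010]])   # inertia (kg m^2)
        D = np.array([[0.30, 0.05], [0.05, 0.20]])       # damping (N m s/rad)
        K = np.array([[4.0, 0.8], [0.8, 3.0]])           # stiffness (N m/rad)
        I_inv = np.linalg.inv(I)

        freq = 6.0  # Hz, within the typical tremor band

        def rhs(t, y):
            q, qd = y[:2], y[2:]
            tau = np.array([0.05 * np.sin(2 * np.pi * freq * t), 0.0])  # torque at joint 1 only
            qdd = I_inv @ (tau - D @ qd - K @ q)
            return np.concatenate([qd, qdd])

        sol = solve_ivp(rhs, (0, 5), np.zeros(4), max_step=1e-3)
        steady = sol.y[:2, sol.t > 3]                    # discard the initial transient
        print("amplitude ratio joint2/joint1:", np.ptp(steady[1]) / np.ptp(steady[0]))

    Sweeping the torque location and frequency in such a model is the kind of parameter study from which the paper's propagation principles are drawn.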

  5. Fundamental Principles of Tremor Propagation in the Upper Limb

    PubMed Central

    Davidson, Andrew D.; Charles, Steven K.

    2017-01-01

    Although tremor is the most common movement disorder, there exist few effective tremor-suppressing devices, in part because the characteristics of tremor throughout the upper limb are unknown. To clarify, optimally suppressing tremor requires a knowledge of the mechanical origin, propagation, and distribution of tremor throughout the upper limb. Here we present the first systematic investigation of how tremor propagates between the shoulder, elbow, forearm, and wrist. We simulated tremor propagation using a linear, time-invariant, lumped-parameter model relating joint torques and the resulting joint displacements. The model focused on the seven main degrees of freedom from the shoulder to the wrist and included coupled joint inertia, damping, and stiffness. We deliberately implemented a simple model to focus first on the most basic effects. Simulating tremorogenic joint torque as a sinusoidal input, we used the model to establish fundamental principles describing how input parameters (torque location and frequency) and joint impedance (inertia, damping, and stiffness) affect tremor propagation. We expect that the methods and principles presented here will serve as the groundwork for future refining studies to understand the origin, propagation, and distribution of tremor throughout the upper limb in order to enable the future development of optimal tremor-suppressing devices. PMID:27957608

  6. A Physics-Inspired Mechanistic Model of Migratory Movement Patterns in Birds.

    PubMed

    Revell, Christopher; Somveille, Marius

    2017-08-29

    In this paper, we introduce a mechanistic model of migratory movement patterns in birds, inspired by ideas and methods from physics. Previous studies have shed light on the factors influencing bird migration but have mainly relied on statistical correlative analysis of tracking data. Our novel method offers a bottom-up explanation of population-level migratory movement patterns. It differs from previous mechanistic models of animal migration and enables predictions of pathways and destinations from a given starting location. We define an environmental potential landscape from environmental data and simulate bird movement within this landscape based on simple decision rules drawn from statistical mechanics. We explore the capacity of the model by qualitatively comparing simulation results to the non-breeding migration patterns of a seabird species, the Black-browed Albatross (Thalassarche melanophris). This minimal, two-parameter model was able to capture remarkably well the previously documented migration patterns of the Black-browed Albatross, with the best combination of parameter values conserved across multiple geographically separate populations. Our physics-inspired mechanistic model could be applied to other bird species and other highly mobile species, improving our understanding of the relative importance of the various factors driving migration and making predictions that could be useful for conservation.
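
    "Simple decision rules drawn from statistical mechanics" can be illustrated with a Metropolis-style walker on an environmental potential grid: moves downhill in the potential are always accepted, uphill moves only with probability exp(−ΔE/T). This toy sketch is ours; the paper's actual rules and parameters may differ.

        import numpy as np

        rng = np.random.default_rng(7)

        def migrate(potential, start, temperature=1.0, steps=500):
            """Metropolis random walk of one bird on a 2-D environmental potential grid."""
            pos = np.array(start)
            track = [tuple(pos)]
            moves = np.array([(0, 1), (0, -1), (1, 0), (-1, 0)])
            for _ in range(steps):
                step = moves[rng.integers(4)]
                new = np.clip(pos + step, 0, np.array(potential.shape) - 1)
                dE = potential[tuple(new)] - potential[tuple(pos)]
                if dE <= 0 or rng.random() < np.exp(-dE / temperature):
                    pos = new
                track.append(tuple(pos))
            return track

        # Toy potential: a smooth bowl with its favourable (low-potential) minimum off-centre
        y, x = np.mgrid[0:50, 0:50]
        pot = ((x - 40) ** 2 + (y - 10) ** 2) / 400.0
        path = migrate(pot, start=(45, 5))
        print("start", path[0], "-> end", path[-1])

    The "temperature" here plays the role of one of the model's two free parameters, controlling how strictly the walker follows the potential gradient.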

  7. Analysis of dynamics and fit of diving suits

    NASA Astrophysics Data System (ADS)

    Mahnic Naglic, M.; Petrak, S.; Gersak, J.; Rolich, T.

    2017-10-01

    Paper presents research on dynamical behaviour and fit analysis of customised diving suits. Diving suits models are developed using the 3D flattening method, which enables the construction of a garment model directly on the 3D computer body model and separation of discrete 3D surfaces as well as transformation into 2D cutting parts. 3D body scanning of male and female test subjects was performed with the purpose of body measurements analysis in static and dynamic postures and processed body models were used for construction and simulation of diving suits prototypes. All necessary parameters, for 3D simulation were applied on obtained cutting parts, as well as parameters values for mechanical properties of neoprene material. Developed computer diving suits prototypes were used for stretch analysis on areas relevant for body dimensional changes according to dynamic anthropometrics. Garment pressures against the body in static and dynamic conditions was also analysed. Garments patterns for which the computer prototype verification was conducted were used for real prototype production. Real prototypes were also used for stretch and pressure analysis in static and dynamic conditions. Based on the obtained results, correlation analysis between body changes in dynamic positions and dynamic stress, determined on computer and real prototypes, was performed.

  8. Adaptive model reduction for continuous systems via recursive rational interpolation

    NASA Technical Reports Server (NTRS)

    Lilly, John H.

    1994-01-01

    A method for adaptive identification of reduced-order models for continuous stable SISO and MIMO plants is presented. The method recursively finds a model whose transfer function (matrix) matches that of the plant on a set of frequencies chosen by the designer. The algorithm utilizes the Moving Discrete Fourier Transform (MDFT) to continuously monitor the frequency-domain profile of the system input and output signals. The MDFT is an efficient method of monitoring discrete points in the frequency domain of an evolving function of time. The model parameters are estimated from MDFT data using standard recursive parameter estimation techniques. The algorithm has been shown in simulations to be quite robust to additive noise in the inputs and outputs. A significant advantage of the method is that it enables a type of on-line model validation. This is accomplished by simultaneously identifying a number of models and comparing each with the plant in the frequency domain. Simulations of the method applied to an 8th-order SISO plant and a 10-state 2-input 2-output plant are presented. An example of on-line model validation applied to the SISO plant is also presented.
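
    The Moving Discrete Fourier Transform can be written as a one-multiply-per-bin recursion over a length-N sliding window. The class below is our illustration of the standard sliding-DFT identity, X_k(n) = (X_k(n−1) + x(n) − x(n−N)) · e^{j2πk/N}, not the paper's code:

        import cmath
        import math
        from collections import deque

        class MovingDFT:
            """Track X_k(n) for selected bins k over a sliding window of length N."""
            def __init__(self, N, bins):
                self.N = N
                self.twiddle = {k: cmath.exp(2j * cmath.pi * k / N) for k in bins}
                self.X = {k: 0.0 + 0.0j for k in bins}       # matches the zero-filled window
                self.window = deque([0.0] * N, maxlen=N)

            def update(self, x_new):
                x_old = self.window[0]                       # sample about to leave the window
                self.window.append(x_new)
                for k, w in self.twiddle.items():
                    self.X[k] = (self.X[k] + x_new - x_old) * w
                return self.X

        # Monitor bins 3 and 7 of a length-64 window while samples stream in
        mdft = MovingDFT(64, bins=(3, 7))
        for n in range(256):
            mdft.update(math.sin(2 * math.pi * 3 * n / 64))  # pure bin-3 tone
        print(abs(mdft.X[3]), abs(mdft.X[7]))                # bin 3 large (~N/2 = 32), bin 7 near zero

    Only the monitored frequency bins are ever computed, which is what makes continuous frequency-domain monitoring of the input and output signals cheap.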

  9. Bi-stable dendrite in constant electric field: a model analysis.

    PubMed

    Baginskas, A; Gutman, A; Svirskis, G

    1993-03-01

    Some neurons possess dendritic persistent inward current, which is activated during depolarization. Dendrites can be stably depolarized, i.e. they are bi-stable if the net current is inward. A proper method to show the existence of dendritic bi-stability is putting the neuron into the electric field to induce transmembrane potential changes along the dendrites. Here we present analytical and computer simulation of the bi-stable dendrite in the d.c. field. A prominent jump to a depolarization plateau can be seen in the soma upon initial hyperpolarization of its membrane. If a considerable portion of dendrites are parallel to the field it is impossible to switch off the depolarization plateau by changing the direction and the strength of the electric field. There is nothing similar in neurons with ohmic dendrites. The results of the simulation conform to the experimental observations in turtle motoneurons [Hounsgaard J. and Kiehn O. (1993) J. Physiol., Lond. (in press)]; comparison of the theoretical and the experimental results makes semi-quantitative estimation of some electrical parameters of dendrites possible. We propose modifications of the experiment which enable one to measure dendritic length constants and other parameters of stained neurons.

  10. Web-based, GPU-accelerated, Monte Carlo simulation and visualization of indirect radiation imaging detector performance.

    PubMed

    Dong, Han; Sharma, Diksha; Badano, Aldo

    2014-12-01

    Monte Carlo simulations play a vital role in the understanding of the fundamental limitations, design, and optimization of existing and emerging medical imaging systems. Efforts in this area have resulted in the development of a wide variety of open-source software packages. One such package, hybridmantis, uses a novel hybrid concept to model indirect scintillator detectors by balancing the computational load using dual CPU and graphics processing unit (GPU) processors, obtaining computational efficiency with reasonable accuracy. In this work, the authors describe two open-source visualization interfaces, webmantis and visualmantis to facilitate the setup of computational experiments via hybridmantis. The visualization tools visualmantis and webmantis enable the user to control simulation properties through a user interface. In the case of webmantis, control via a web browser allows access through mobile devices such as smartphones or tablets. webmantis acts as a server back-end and communicates with an NVIDIA GPU computing cluster that can support multiuser environments where users can execute different experiments in parallel. The output consists of point response and pulse-height spectrum, and optical transport statistics generated by hybridmantis. The users can download the output images and statistics through a zip file for future reference. In addition, webmantis provides a visualization window that displays a few selected optical photon path as they get transported through the detector columns and allows the user to trace the history of the optical photons. The visualization tools visualmantis and webmantis provide features such as on the fly generation of pulse-height spectra and response functions for microcolumnar x-ray imagers while allowing users to save simulation parameters and results from prior experiments. The graphical interfaces simplify the simulation setup and allow the user to go directly from specifying input parameters to receiving visual feedback for the model predictions.

  11. Study of homogeneous bubble nucleation in liquid carbon dioxide by a hybrid approach combining molecular dynamics simulation and density gradient theory

    NASA Astrophysics Data System (ADS)

    Langenbach, K.; Heilig, M.; Horsch, M.; Hasse, H.

    2018-03-01

    A new method for predicting homogeneous bubble nucleation rates of pure compounds from vapor-liquid equilibrium (VLE) data is presented. It combines molecular dynamics simulation on the one side with density gradient theory using an equation of state (EOS) on the other. The new method is applied here to predict bubble nucleation rates in metastable liquid carbon dioxide (CO2). The molecular model of CO2 is taken from previous work of our group. PC-SAFT is used as an EOS. The consistency between the molecular model and the EOS is achieved by adjusting the PC-SAFT parameters to VLE data obtained from the molecular model. The influence parameter of density gradient theory is fitted to the surface tension of the molecular model. Massively parallel molecular dynamics simulations are performed close to the spinodal to compute bubble nucleation rates. From these simulations, the kinetic prefactor of the hybrid nucleation theory is estimated, whereas the nucleation barrier is calculated from density gradient theory. This enables the extrapolation of molecular simulation data to the whole metastable range including technically relevant densities. The results are tested against available experimental data and found to be in good agreement. The new method does not suffer from typical deficiencies of classical nucleation theory concerning the thermodynamic barrier at the spinodal and the bubble size dependence of surface tension, which is typically neglected in classical nucleation theory. In addition, the density in the center of critical bubbles and their surface tension is determined as a function of their radius. The usual linear Tolman correction to the capillarity approximation is found to be invalid.
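
    The hybrid construction can be summarized compactly: the nucleation rate keeps the classical Arrhenius form, but the barrier comes from density gradient theory rather than the capillarity approximation. In our notation (not the paper's):

        J = J_0 \exp\!\left(-\frac{\Delta\Omega^{*}}{k_B T}\right), \qquad
        \Delta\Omega^{*} = \min_{\rho(\mathbf r)} \int \left[ \Delta\omega\big(\rho(\mathbf r)\big) + \frac{\kappa}{2}\,\lvert\nabla\rho\rvert^{2} \right] \mathrm{d}V,

    where Δω is the grand-potential density excess relative to the metastable liquid, κ is the influence parameter fitted to the molecular model's surface tension, and the kinetic prefactor J_0 is estimated from the molecular dynamics runs near the spinodal. Classical nucleation theory would instead use ΔΩ*_CNT = 16πγ³/(3Δp²) with the flat-interface surface tension γ, an expression that stays finite at the spinodal, whereas the density-gradient barrier correctly vanishes there.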

  12. Study of homogeneous bubble nucleation in liquid carbon dioxide by a hybrid approach combining molecular dynamics simulation and density gradient theory.

    PubMed

    Langenbach, K; Heilig, M; Horsch, M; Hasse, H

    2018-03-28

    A new method for predicting homogeneous bubble nucleation rates of pure compounds from vapor-liquid equilibrium (VLE) data is presented. It combines molecular dynamics simulation on the one side with density gradient theory using an equation of state (EOS) on the other. The new method is applied here to predict bubble nucleation rates in metastable liquid carbon dioxide (CO2). The molecular model of CO2 is taken from previous work of our group. PC-SAFT is used as an EOS. The consistency between the molecular model and the EOS is achieved by adjusting the PC-SAFT parameters to VLE data obtained from the molecular model. The influence parameter of density gradient theory is fitted to the surface tension of the molecular model. Massively parallel molecular dynamics simulations are performed close to the spinodal to compute bubble nucleation rates. From these simulations, the kinetic prefactor of the hybrid nucleation theory is estimated, whereas the nucleation barrier is calculated from density gradient theory. This enables the extrapolation of molecular simulation data to the whole metastable range including technically relevant densities. The results are tested against available experimental data and found to be in good agreement. The new method does not suffer from typical deficiencies of classical nucleation theory concerning the thermodynamic barrier at the spinodal and the bubble size dependence of surface tension, which is typically neglected in classical nucleation theory. In addition, the density in the center of critical bubbles and their surface tension is determined as a function of their radius. The usual linear Tolman correction to the capillarity approximation is found to be invalid.

  13. Evaluation of decadal hindcasts by application of a satellite simulator for SSM/I & SSMIS

    NASA Astrophysics Data System (ADS)

    Spangehl, T.; Schroeder, M.; Glowienka-Hense, R.; Hense, A.; Bodas-Salcedo, A.; Hollmann, R.

    2017-12-01

    A satellite simulator for the Special Sensor Microwave Imager (SSM/I) and for the Special Sensor Microwave Imager and Sounder (SSMIS) is developed and applied to decadal hindcast simulations performed within the MiKlip project (http://fona-miklip.de, funded by the Federal Ministry of Education and Research in Germany). The aim is to evaluate the climatological and predictive skill of the hindcasts, focusing on water cycle components. Classical evaluation approaches commonly focus on geophysical parameters such as temperature, precipitation or wind speed, using observational datasets and reanalyses as reference. The employment of the satellite simulator enables an evaluation in the instrument's parameter space and thereby reduces uncertainties on the reference side. The simulators are developed utilizing the CFMIP Observation Simulator Package (COSP, http://cfmip.metoffice.com/COSP.html). On the reference side, the SSM/I & SSMIS Fundamental Climate Data Record (FCDR) provided by the CM SAF (DOI: 10.5676/EUM_SAF_CM/FCDR_MWI/V003) is used, which constitutes a quality-controlled, recalibrated and intercalibrated record of brightness temperatures for the period from 1978 to 2015. Simulated brightness temperatures are used for selected channels that are sensitive either to water vapor content (22 GHz) or to hydrometeor content (85 GHz, vertical minus horizontal polarization) as an indicator for precipitation. For lead year 1, analysis of variance (ANOVA) reveals potential predictability for large parts of the tropical ocean areas for both water vapor and precipitation related channels. Furthermore, the Conditional Ranked Probability Skill Score (CRPSS) indicates predictive skill for large parts of the tropical/sub-tropical Pacific, parts of the tropical/sub-tropical Atlantic and the equatorial Indian Ocean. For lead years 2-3, ANOVA still indicates potential predictability for equatorial ocean areas. Moreover, the CRPSS indicates predictive skill for parts of the tropical/subtropical ocean areas. These results suggest that the hindcasts show skill even beyond lead year 1 when compared against climatology as a reference forecast.

  14. AxonPacking: An Open-Source Software to Simulate Arrangements of Axons in White Matter

    PubMed Central

    Mingasson, Tom; Duval, Tanguy; Stikov, Nikola; Cohen-Adad, Julien

    2017-01-01

    Highlights: AxonPacking is an open-source software for simulating white matter microstructure; it is validated on a theoretical disk packing problem; it is reproducible and stable for various densities and diameter distributions; and it can be used to study the interplay between myelin/fiber density and restricted fraction. Quantitative Magnetic Resonance Imaging (MRI) can provide parameters that describe white matter microstructure, such as the fiber volume fraction (FVF), the myelin volume fraction (MVF) or the axon volume fraction (AVF) via the fraction of restricted water (fr). While already being used for clinical applications, the complex interplay between these parameters requires thorough validation via simulations. These simulations require a realistic, controlled and adaptable model of the white matter axons with the surrounding myelin sheath. While useful algorithms to perform this task already exist, none of them combine optimisation of axon packing, presence of the myelin sheath and availability as free and open-source software. Here, we introduce a novel disk packing algorithm that addresses these issues. The performance of the algorithm is tested in terms of reproducibility over 50 runs, resulting density, and stability over iterations. This tool was then used to derive multiple values of FVF and to study the impact of this parameter on fr and MVF in light of the known microstructure based on histology samples. The standard deviation of the axon density over runs was lower than 10^-3, and the expected hexagonal packing for monodisperse disks was obtained with a density close to the optimal density (obtained: 0.892, theoretical: 0.907). Using an FVF ranging within [0.58, 0.82] and a mean inter-axon gap ranging within [0.1, 1.1] μm, MVF ranged within [0.32, 0.44] and fr ranged within [0.39, 0.71], which is consistent with the histology. The proposed algorithm is implemented in the open-source software AxonPacking (https://github.com/neuropoly/axonpacking) and can be useful for validating diffusion models as well as for enabling researchers to study the interplay between microstructure parameters when evaluating qMRI methods. PMID:28197091

  15. Bottom-up modeling approach for the quantitative estimation of parameters in pathogen-host interactions

    PubMed Central

    Lehnert, Teresa; Timme, Sandra; Pollmächer, Johannes; Hünniger, Kerstin; Kurzai, Oliver; Figge, Marc Thilo

    2015-01-01

    Opportunistic fungal pathogens can cause bloodstream infection and severe sepsis upon entering the blood stream of the host. The early immune response in human blood comprises the elimination of pathogens by antimicrobial peptides and innate immune cells, such as neutrophils or monocytes. Mathematical modeling is a predictive method to examine these complex processes and to quantify the dynamics of pathogen-host interactions. Since model parameters are often not directly accessible from experiment, their estimation is required by calibrating model predictions with experimental data. Depending on the complexity of the mathematical model, parameter estimation can be associated with excessively high computational costs in terms of run time and memory. We apply a strategy for reliable parameter estimation where different modeling approaches with increasing complexity are used that build on one another. This bottom-up modeling approach is applied to an experimental human whole-blood infection assay for Candida albicans. Aiming for the quantification of the relative impact of different routes of the immune response against this human-pathogenic fungus, we start from a non-spatial state-based model (SBM), because this level of model complexity allows estimating a priori unknown transition rates between various system states by the global optimization method simulated annealing. Building on the non-spatial SBM, an agent-based model (ABM) is implemented that incorporates the migration of interacting cells in three-dimensional space. The ABM takes advantage of estimated parameters from the non-spatial SBM, leading to a decreased dimensionality of the parameter space. This space can be scanned using a local optimization approach, i.e., least-squares error estimation based on an adaptive regular grid search, to predict cell migration parameters that are not accessible in experiment. In the future, spatio-temporal simulations of whole-blood samples may enable timely stratification of sepsis patients by distinguishing hyper-inflammatory from paralytic phases in immune dysregulation. PMID:26150807

  16. Bottom-up modeling approach for the quantitative estimation of parameters in pathogen-host interactions.

    PubMed

    Lehnert, Teresa; Timme, Sandra; Pollmächer, Johannes; Hünniger, Kerstin; Kurzai, Oliver; Figge, Marc Thilo

    2015-01-01

    Opportunistic fungal pathogens can cause bloodstream infection and severe sepsis upon entering the blood stream of the host. The early immune response in human blood comprises the elimination of pathogens by antimicrobial peptides and innate immune cells, such as neutrophils or monocytes. Mathematical modeling is a predictive method to examine these complex processes and to quantify the dynamics of pathogen-host interactions. Since model parameters are often not directly accessible from experiment, their estimation is required by calibrating model predictions with experimental data. Depending on the complexity of the mathematical model, parameter estimation can be associated with excessively high computational costs in terms of run time and memory. We apply a strategy for reliable parameter estimation where different modeling approaches with increasing complexity are used that build on one another. This bottom-up modeling approach is applied to an experimental human whole-blood infection assay for Candida albicans. Aiming for the quantification of the relative impact of different routes of the immune response against this human-pathogenic fungus, we start from a non-spatial state-based model (SBM), because this level of model complexity allows estimating a priori unknown transition rates between various system states by the global optimization method simulated annealing. Building on the non-spatial SBM, an agent-based model (ABM) is implemented that incorporates the migration of interacting cells in three-dimensional space. The ABM takes advantage of estimated parameters from the non-spatial SBM, leading to a decreased dimensionality of the parameter space. This space can be scanned using a local optimization approach, i.e., least-squares error estimation based on an adaptive regular grid search, to predict cell migration parameters that are not accessible in experiment. In the future, spatio-temporal simulations of whole-blood samples may enable timely stratification of sepsis patients by distinguishing hyper-inflammatory from paralytic phases in immune dysregulation.

  17. Power converters for the 120 V bus supply control

    NASA Astrophysics Data System (ADS)

    Elisabelar, Christian

    1993-03-01

    Power converters for the 120 V bus supply control in such projects as Columbus and Hermes are addressed. Because of the power levels involved and the existing state of the art, several converter modules need to be connected in parallel to supply a single bus. To simplify the study, the power of each converter is set at around 1 kW. Many converter structures which satisfy the requirement specifications, with or without galvanic insulation, are proposed. The choice and sizing of the converter structure are considered, with stress factors and available technology as selection criteria in determining the most suitable structures. Each structure is dimensioned taking into account the rules of space design, enabling efficiency to be estimated analytically and subsequently verified experimentally. The converter command and its functional performance are then addressed. Numerical simulations with SUCCESS software are run to observe the actual operation of the power part of the converter and to develop the command law with its regulation parameters. The converter is simulated in its entirety, and different transients are studied, such as load variation, the no-load operating point, and short circuit. The response time, stability and behavior under disturbed conditions are thus known. A comparison of the various structures studied enabled the optimal converter to be chosen for some 120 V regulated bus applications.

  18. Tapping the Brake for Entry, Descent, and Landing

    NASA Technical Reports Server (NTRS)

    Gnoffo, Peter A.; Thompson, Kyle; Korzun, Ashley

    2016-01-01

    A matrix of simulations of hypersonic flow over blunt entry vehicles with steady and pulsing retropropulsion jets is presented. Retropropulsion in the supersonic domain is primarily designed to reduce vehicle velocity directly with thrust. Retropropulsion in the hypersonic domain may enable significant pressure recovery through unsteady, oblique shocks while providing a buffer of reactant gases with relatively low total temperature. Improved pressure recovery, a function of Mach number squared and oblique shock angle, could potentially serve to increase aerodynamic drag in this domain. Pulsing jets are studied to include an additional degree of freedom to search for resonances in an already unsteady flow domain with an objective to maximize the time-averaged drag coefficient. In this paradigm, small jets with minimal footprints of the nozzle exit on the vehicle forebody may be capable of delivering the requisite perturbations to the flow. Simulations are executed assuming inviscid, symmetric flow of a perfect gas to enable a rapid assessment of the parameter space (nozzle geometry, plenum conditions, jet pulse frequency). The pulsed-jet configuration produces moderately larger drag than the constant jet configuration but smaller drag than the jet-off case in this preliminary examination of a single design point. The fundamentals of a new algorithm for this challenging application with time dependent, interacting discontinuities using the feature detection capabilities of Walsh functions are introduced.

  19. A Verification-Driven Approach to Control Analysis and Tuning

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2008-01-01

    This paper proposes a methodology for the analysis and tuning of controllers using control verification metrics. These metrics, which are introduced in a companion paper, measure the size of the largest uncertainty set of a given class for which the closed-loop specifications are satisfied. This framework integrates deterministic and probabilistic uncertainty models into a setting that enables the deformation of sets in the parameter space, the control design space, and in the union of these two spaces. In regard to control analysis, we propose strategies that enable bounding regions of the design space where the specifications are satisfied by all the closed-loop systems associated with a prescribed uncertainty set. When this is infeasible, we bound regions where the probability of satisfying the requirements exceeds a prescribed value. In regard to control tuning, we propose strategies for the improvement of the robust characteristics of a baseline controller. Some of these strategies use multi-point approximations to the control verification metrics in order to alleviate the numerical burden of solving a min-max problem. Since this methodology targets non-linear systems having an arbitrary, possibly implicit, functional dependency on the uncertain parameters and for which high-fidelity simulations are available, it is applicable to realistic engineering problems.

  1. Non-linear auto-regressive models for cross-frequency coupling in neural time series

    PubMed Central

    Tallot, Lucille; Grabot, Laetitia; Doyère, Valérie; Grenier, Yves; Gramfort, Alexandre

    2017-01-01

    We address the issue of reliably detecting and quantifying cross-frequency coupling (CFC) in neural time series. Based on non-linear auto-regressive models, the proposed method provides a generative and parametric model of the time-varying spectral content of the signals. As this method models the entire spectrum simultaneously, it avoids the pitfalls related to incorrect filtering or the use of the Hilbert transform on wide-band signals. As the model is probabilistic, it also provides a score of the model “goodness of fit” via the likelihood, enabling easy and legitimate model selection and parameter comparison; this data-driven feature is unique to our model-based approach. Using three datasets obtained with invasive neurophysiological recordings in humans and rodents, we demonstrate that these models are able to replicate previous results obtained with other metrics, but also reveal new insights such as the influence of the amplitude of the slow oscillation. Using simulations, we demonstrate that our parametric method can reveal neural couplings with shorter signals than non-parametric methods. We also show how the likelihood can be used to find optimal filtering parameters, suggesting new properties of the spectrum of the driving signal, but also to estimate the optimal delay between the coupled signals, enabling a directionality estimation in the coupling. PMID:29227989
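
    A toy driven auto-regressive model conveys the core idea: let the AR coefficients depend linearly on a slow driver, so the fast signal's spectral content varies with the driver, and fit everything by ordinary least squares. This is a simplified stand-in for the models in the paper, with invented data:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 5000, 2                                    # samples, AR order
x = np.sin(2 * np.pi * 6 * np.arange(n) / 1000)   # slow driver, 6 Hz at 1 kHz

# Ground truth: AR coefficients depend linearly on the driver.
a0 = np.array([1.2, -0.5])                        # baseline coefficients
a1 = np.array([0.3, -0.2])                        # driver modulation
y = np.zeros(n)
for t in range(p, n):
    lags = y[t - p:t][::-1]                       # y[t-1], y[t-2]
    y[t] = (a0 + a1 * x[t]) @ lags + 0.1 * rng.standard_normal()

# Fit by ordinary least squares: regressors are the lagged signal
# and the driver times the lagged signal.
Y = y[p:]
lagmat = np.column_stack([y[p - i:n - i] for i in range(1, p + 1)])
X = np.column_stack([lagmat, x[p:, None] * lagmat])
est, *_ = np.linalg.lstsq(X, Y, rcond=None)
print("a0 estimate:", est[:p], " a1 estimate:", est[p:])
```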

  2. SIPSim: A Modeling Toolkit to Predict Accuracy and Aid Design of DNA-SIP Experiments

    PubMed Central

    Youngblut, Nicholas D.; Barnett, Samuel E.; Buckley, Daniel H.

    2018-01-01

    DNA stable isotope probing (DNA-SIP) is a powerful method that links identity to function within microbial communities. The combination of DNA-SIP with multiplexed high-throughput DNA sequencing enables simultaneous mapping of in situ assimilation dynamics for thousands of microbial taxonomic units. Hence, high-throughput-sequencing-enabled SIP has enormous potential to reveal patterns of carbon and nitrogen exchange within microbial food webs. There are several different methods for analyzing DNA-SIP data, and despite the power of SIP experiments, it remains difficult to comprehensively evaluate method accuracy across a wide range of experimental parameters. We have developed a toolset (SIPSim) that simulates DNA-SIP data, and we use this toolset to systematically evaluate different methods for analyzing DNA-SIP data. Specifically, we employ SIPSim to evaluate the effects that key experimental parameters (e.g., level of isotopic enrichment, number of labeled taxa, relative abundance of labeled taxa, community richness, community evenness, and beta-diversity) have on the specificity, sensitivity, and balanced accuracy (defined as the product of specificity and sensitivity) of DNA-SIP analyses. Furthermore, SIPSim can predict analytical accuracy and power as a function of experimental design and community characteristics, and thus should be of great use in the design and interpretation of DNA-SIP experiments. PMID:29643843

  3. A non-local mixing-length theory able to compute core overshooting

    NASA Astrophysics Data System (ADS)

    Gabriel, M.; Belkacem, K.

    2018-04-01

    Turbulent convection is certainly one of the most important and thorny issues in stellar physics. Our deficient knowledge of this crucial physical process introduces a fairly large uncertainty concerning the internal structure and evolution of stars. A striking example is overshoot at the edge of convective cores. Indeed, nearly all stellar evolutionary codes treat the overshooting zones in a very approximative way that considers both their extent and the profile of the temperature gradient as free parameters. There are only a few sophisticated theories of stellar convection, such as Reynolds stress approaches, but they also require the adjustment of a non-negligible number of free parameters. We present here a theory, based on the plume theory as well as on the mean-field equations, but without relying on the usual Taylor's closure hypothesis. It leads us to a set of eight differential equations plus a few algebraic ones. Our theory is essentially a non-local mixing-length theory. It enables us to compute the temperature gradient in a shrinking convective core and its overshooting zone. The case of an expanding convective core is also discussed, though more briefly. Numerical simulations have improved rapidly in recent years, enabling us to foresee that they will probably soon provide a model of convection adapted to the computation of 1D stellar models.

  4. Comprehensive analytical model for locally contacted rear surface passivated solar cells

    NASA Astrophysics Data System (ADS)

    Wolf, Andreas; Biro, Daniel; Nekarda, Jan; Stumpp, Stefan; Kimmerle, Achim; Mack, Sebastian; Preu, Ralf

    2010-12-01

    For optimum performance of solar cells featuring a locally contacted rear surface, the metallization fraction as well as the size and distribution of the local contacts are crucial, since Ohmic and recombination losses have to be balanced. In this work we present a set of equations which enable this trade-off to be calculated without the need for numerical simulations. Our model combines established analytical and empirical equations to predict the energy conversion efficiency of a locally contacted device. For experimental verification, we fabricate devices from float zone silicon wafers of different resistivity using the laser-fired contact technology for forming the local rear contacts. The detailed characterization of test structures enables the determination of important physical parameters, such as the surface recombination velocity at the contacted area and the spreading resistance of the contacts. Our analytical model reproduces the experimental results very well and correctly predicts the optimum contact spacing without the use of free fitting parameters. We use our model to estimate the optimum bulk resistivity for locally contacted devices fabricated from conventional Czochralski-grown silicon material. These calculations use literature values for the stable minority carrier lifetime to account for the bulk recombination caused by the formation of boron-oxygen complexes under carrier injection.

  5. Equilibration of experimentally determined protein structures for molecular dynamics simulation

    NASA Astrophysics Data System (ADS)

    Walton, Emily B.; Van Vliet, Krystyn J.

    2006-12-01

    Preceding molecular dynamics simulations of biomolecular interactions, the molecule of interest is often equilibrated with respect to an initial configuration. This so-called equilibration stage is required because the input structure is typically not within the equilibrium phase space of the simulation conditions, particularly in systems as complex as proteins, which can lead to artifactual trajectories of protein dynamics. The time at which nonequilibrium effects from the initial configuration are minimized—what we will call the equilibration time—marks the beginning of equilibrium phase-space exploration. Note that the identification of this time does not imply exploration of the entire equilibrium phase space. We have found that current equilibration methodologies contain ambiguities that lead to uncertainty in determining the end of the equilibration stage of the trajectory. This results in equilibration times that are either too long, resulting in wasted computational resources, or too short, resulting in the simulation of molecular trajectories that do not accurately represent the physical system. We outline and demonstrate a protocol for identifying the equilibration time that is based on the physical model of Normal Mode Analysis. We attain the computational efficiency required of large-protein simulations via a stretched exponential approximation that enables an analytically tractable and physically meaningful form of the root-mean-square deviation of atoms comprising the protein. We find that the fitting parameters (which correspond to physical properties of the protein) fluctuate initially but then stabilize for increased simulation time, independently of the simulation duration or sampling frequency. We define the end of the equilibration stage—and thus the equilibration time—as the point in the simulation when these parameters attain constant values. Compared to existing methods, our approach provides the objective identification of the time at which the simulated biomolecule has entered an energetic basin. For the representative protein considered, bovine pancreatic trypsin inhibitor, existing methods indicate a range of 0.2-10 ns of simulation until a local minimum is attained. Our approach identifies a substantially narrower range of 4.5-5.5 ns, which will lead to a much more objective choice of equilibration time.
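
    The stretched-exponential protocol can be illustrated with a small fitting script. The functional form RMSD(t) = A(1 - exp(-(t/tau)^beta)) follows the description above, while the synthetic trace and the 1% stability threshold are invented for the example:

```python
import numpy as np
from scipy.optimize import curve_fit

def stretched_exp(t, A, tau, beta):
    """Stretched-exponential model of RMSD growth toward a plateau."""
    return A * (1.0 - np.exp(-(t / tau) ** beta))

# Synthetic RMSD trace (ns vs. Angstrom), standing in for MD output.
rng = np.random.default_rng(1)
t = np.linspace(0.01, 10, 500)
rmsd = stretched_exp(t, 1.8, 1.2, 0.7) + rng.normal(0, 0.03, t.size)

# Refit on growing windows; equilibration is declared when the fitted
# parameters stop drifting between successive windows.
prev = None
for end in np.arange(2.0, 10.5, 0.5):
    mask = t <= end
    popt, _ = curve_fit(stretched_exp, t[mask], rmsd[mask],
                        p0=(1.0, 1.0, 0.8), maxfev=10000)
    if prev is not None and np.all(np.abs(popt - prev) / np.abs(prev) < 0.01):
        print(f"parameters stable once window reaches {end:.1f} ns: {popt}")
        break
    prev = popt
```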

  6. Modular programming for tuberculosis control, the "AuTuMN" platform.

    PubMed

    Trauer, James McCracken; Ragonnet, Romain; Doan, Tan Nhut; McBryde, Emma Sue

    2017-08-07

    Tuberculosis (TB) is now the world's leading infectious killer and major programmatic advances will be needed if we are to meet the ambitious new End TB Targets. Although mathematical models are powerful tools for TB control, such models must be flexible enough to capture the complexity and heterogeneity of the global TB epidemic. This includes simulating a disease that affects age groups and other risk groups differently, has varying levels of infectiousness depending upon the organ involved and varying outcomes from treatment depending on the drug resistance pattern of the infecting strain. We adopted sound basic principles of software engineering to develop a modular software platform for simulation of TB control interventions ("AuTuMN"). These included object-oriented programming, logical linkage between modules and consistency of code syntax and variable naming. The underlying transmission dynamic model incorporates optional stratification by age, risk group, strain and organ involvement, while our approach to simulating time-variant programmatic parameters better captures the historical progression of the epidemic. An economic model is overlaid upon this epidemiological model which facilitates comparison between new and existing technologies. A "Model runner" module allows for predictions of future disease burden trajectories under alternative scenario situations, as well as uncertainty, automatic calibration, cost-effectiveness and optimisation. The model has now been used to guide TB control strategies across a range of settings and countries, with our modular approach enabling repeated application of the tool without the need for extensive modification for each application. The modular construction of the platform minimises errors, enhances readability and collaboration between multiple programmers and enables rapid adaptation to answer questions in a broad range of contexts without the need for extensive re-programming. Such features are particularly important in simulating an epidemic as complex and diverse as TB.
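
    The modular, object-oriented pattern described here can be sketched generically; the class and method names below are invented to illustrate the idea of optional stratifications layered on a core transmission model, and are not AuTuMN's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class TransmissionModel:
    """Core compartmental TB model; stratifications are optional add-ons."""
    compartments: list = field(default_factory=lambda: ["S", "L", "I", "R"])
    stratifications: dict = field(default_factory=dict)

    def stratify(self, name, strata):
        # e.g. stratify("age", ["0-15", "15-60", "60+"])
        self.stratifications[name] = strata
        return self                      # enable chained configuration

@dataclass
class EconomicLayer:
    """Cost layer overlaid on the epidemiological model."""
    model: TransmissionModel
    unit_costs: dict

    def cost_of(self, intervention, coverage):
        return self.unit_costs[intervention] * coverage

class ModelRunner:
    """Runs scenarios against a configured model (hypothetical)."""
    def __init__(self, model):
        self.model = model

    def run_scenario(self, params):
        ...                              # integrate the model, return outputs

model = (TransmissionModel()
         .stratify("age", ["0-15", "15-60", "60+"])
         .stratify("strain", ["DS", "MDR"]))
runner = ModelRunner(model)
```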

  7. Can Landscape Evolution Models (LEMs) be used to reconstruct palaeo-climate and sea-level histories?

    NASA Astrophysics Data System (ADS)

    Leyland, J.; Darby, S. E.

    2011-12-01

    Reconstruction of palaeo-environmental conditions over long time periods is notoriously difficult, especially where there are limited or no proxy records from which to extract data. Application of landscape evolution models (LEMs) for palaeo-environmental reconstruction involves hindcast modeling, in which simulation scenarios are configured with specific model variables and parameters chosen to reflect a specific hypothesis of environmental change. In this form of modeling, the environmental time series utilized are considered credible when modeled and observed landscape metrics converge. Herein we account for the uncertainties involved in evaluating the degree to which the model simulations and observations converge using Monte Carlo analysis of reduced complexity `metamodels'. The technique is applied to a case study focused on a specific set of gullies found on the southwest coast of the Isle of Wight, UK. A key factor controlling the Holocene evolution of these coastal gullies is the balance between rates of sea-cliff retreat (driven by sea-level rise) and headwards incision caused by knickpoint migration (driven by the rate of runoff). We simulate these processes using a version of the GOLEM model that has been modified to represent sea-cliff retreat. A Central Composite Design (CCD) sampling technique was employed, enabling the trajectories of gully response to different combinations of driving conditions to be modeled explicitly. In some of these simulations, where the range of bedrock erodibility (0.03 to 0.04 m^0.2 a^-1) and rate of sea-level change (0.005 to 0.0059 m a^-1) is tightly constrained, modeled gully forms conform closely to those observed in reality, enabling a suite of climate and sea-level change scenarios which plausibly explain the Holocene evolution of the Isle of Wight gullies to be identified.

  8. A method for modeling laterally asymmetric proton beamlets resulting from collimation

    PubMed Central

    Gelover, Edgar; Wang, Dongxu; Hill, Patrick M.; Flynn, Ryan T.; Gao, Mingcheng; Laub, Steve; Pankuch, Mark; Hyer, Daniel E.

    2015-01-01

    Purpose: To introduce a method to model the 3D dose distribution of laterally asymmetric proton beamlets resulting from collimation. The model enables rapid beamlet calculation for spot scanning (SS) delivery using a novel penumbra-reducing dynamic collimation system (DCS) with two pairs of trimmers oriented perpendicular to each other. Methods: Trimmed beamlet dose distributions in water were simulated with MCNPX and the collimating effects noted in the simulations were validated by experimental measurement. The simulated beamlets were modeled analytically using integral depth dose curves along with an asymmetric Gaussian function to represent fluence in the beam’s eye view (BEV). The BEV parameters consisted of Gaussian standard deviations (sigmas) along each primary axis (σ_x1, σ_x2, σ_y1, σ_y2) together with the spatial location of the maximum dose (μ_x, μ_y). Percent depth dose variation with trimmer position was accounted for with a depth-dependent correction function. Beamlet growth with depth was accounted for by combining the in-air divergence with Hong’s fit of the Highland approximation along each axis in the BEV. Results: The beamlet model showed excellent agreement with the Monte Carlo simulation data used as a benchmark. The overall passing rate for a 3D gamma test with 3%/3 mm passing criteria was 96.1% between the analytical model and Monte Carlo data in an example treatment plan. Conclusions: The analytical model is capable of accurately representing individual asymmetric beamlets resulting from use of the DCS. This method enables integration of the DCS into a treatment planning system to perform dose computation in patient datasets. The method could be generalized for use with any SS collimation system in which blades, leaves, or trimmers are used to laterally sharpen beamlets. PMID:25735287
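
    The asymmetric Gaussian fluence is straightforward to write down: on each BEV axis a different standard deviation applies on either side of the peak. The sketch below is a generic implementation of that functional form with illustrative parameter values:

```python
import numpy as np

def asym_gauss_1d(u, mu, sigma_lo, sigma_hi):
    """One-sided-sigma Gaussian: sigma_lo left of mu, sigma_hi right."""
    sigma = np.where(u < mu, sigma_lo, sigma_hi)
    return np.exp(-0.5 * ((u - mu) / sigma) ** 2)

def bev_fluence(x, y, mu_x, mu_y, sx1, sx2, sy1, sy2):
    """Separable asymmetric-Gaussian fluence in the beam's eye view."""
    return (asym_gauss_1d(x, mu_x, sx1, sx2) *
            asym_gauss_1d(y, mu_y, sy1, sy2))

# Example: a beamlet trimmed on the +x side (smaller sigma there).
x, y = np.meshgrid(np.linspace(-10, 10, 201), np.linspace(-10, 10, 201))
f = bev_fluence(x, y, mu_x=-0.5, mu_y=0.0, sx1=3.0, sx2=1.2, sy1=3.0, sy2=3.0)
```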

  9. A statistical kinematic source inversion approach based on the QUESO library for uncertainty quantification and prediction

    NASA Astrophysics Data System (ADS)

    Zielke, Olaf; McDougall, Damon; Mai, Martin; Babuska, Ivo

    2014-05-01

    Seismic data, often augmented with geodetic data, are frequently used to invert for the spatio-temporal evolution of slip along a rupture plane. The resulting images of the slip evolution for a single event, inferred by different research teams, often vary distinctly, depending on the adopted inversion approach and rupture model parameterization. This observation raises the question of which of the provided kinematic source inversion solutions is most reliable and most robust, and — more generally — how accurate fault parameterization and solution predictions are. These issues are not addressed by "standard" source inversion approaches. Here, we present a statistical inversion approach to constrain kinematic rupture parameters from teleseismic body waves. The approach is based on (a) a forward-modeling scheme that computes synthetic (body-)waves for a given kinematic rupture model, and (b) the QUESO (Quantification of Uncertainty for Estimation, Simulation, and Optimization) library, which uses MCMC algorithms and Bayes' theorem for sample selection. We present Bayesian inversions for rupture parameters in synthetic earthquakes (i.e., for which the exact rupture history is known) in an attempt to identify the cross-over at which further model discretization (spatial and temporal resolution of the parameter space) no longer yields a decreasing misfit. Identification of this cross-over is of importance as it reveals the resolution power of the studied data set (i.e., teleseismic body waves), enabling one to constrain kinematic earthquake rupture histories of real earthquakes at a resolution that is supported by data. In addition, the Bayesian approach allows for mapping complete posterior probability density functions of the desired kinematic source parameters, thus enabling us to rigorously assess the uncertainties in earthquake source inversions.
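
    The Bayesian machinery can be illustrated with a bare-bones Metropolis-Hastings sampler over a single rupture parameter; the toy forward model below stands in for the body-wave synthetics, and the priors and noise level are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(rise_time, t):
    """Toy forward model standing in for body-wave synthetics."""
    return 1.0 - np.exp(-t / rise_time)

t = np.linspace(0.1, 20, 100)
obs = forward(3.0, t) + rng.normal(0, 0.05, t.size)  # synthetic data

def log_post(theta):
    if not 0.1 < theta < 20.0:                      # uniform prior bounds
        return -np.inf
    resid = obs - forward(theta, t)
    return -0.5 * np.sum((resid / 0.05) ** 2)       # Gaussian likelihood

samples, theta = [], 5.0
for _ in range(20000):                              # Metropolis-Hastings
    prop = theta + 0.2 * rng.standard_normal()
    if np.log(rng.random()) < log_post(prop) - log_post(theta):
        theta = prop
    samples.append(theta)

post = np.array(samples[5000:])                     # discard burn-in
print(f"posterior mean {post.mean():.2f} +/- {post.std():.2f}")
```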

  10. Reconstructing Mammalian Sleep Dynamics with Data Assimilation

    PubMed Central

    Sedigh-Sarvestani, Madineh; Schiff, Steven J.; Gluckman, Bruce J.

    2012-01-01

    Data assimilation is a valuable tool in the study of any complex system, where measurements are incomplete, uncertain, or both. It enables the user to take advantage of all available information, including experimental measurements and short-term model forecasts of a system. Although data assimilation has been used to study other biological systems, the study of the sleep-wake regulatory network has yet to benefit from this toolset. We present a data assimilation framework based on the unscented Kalman filter (UKF) for combining sparse measurements with a relatively high-dimensional nonlinear computational model to estimate the state of a model of the sleep-wake regulatory system. We demonstrate with simulation studies that a few noisy variables can be used to accurately reconstruct the remaining hidden variables. We introduce a metric for ranking relative partial observability of computational models, within the UKF framework, that allows us to choose the optimal variables for measurement and also provides a methodology for optimizing framework parameters such as UKF covariance inflation. In addition, we demonstrate a parameter estimation method that allows us to track non-stationary model parameters and accommodate slow dynamics not included in the UKF filter model. Finally, we show that we can even use observed discretized sleep-state, which is not one of the model variables, to reconstruct model state and estimate unknown parameters. Sleep is implicated in many neurological disorders from epilepsy to schizophrenia, but simultaneous observation of the many brain components that regulate this behavior is difficult. We anticipate that this data assimilation framework will enable better understanding of the detailed interactions governing sleep and wake behavior and provide for better, more targeted, therapies. PMID:23209396
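
    A minimal reconstruction of hidden variables with a UKF can be set up with the open-source FilterPy library (our choice for illustration, not the authors' code); the three-variable toy system below is much simpler than a sleep-wake network but exercises the same predict/update cycle with a single noisy observable:

```python
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

def fx(x, dt):
    """State transition: one Euler step of the Lorenz-63 system (toy)."""
    s, r, b = 10.0, 28.0, 8.0 / 3.0
    dx = np.array([s * (x[1] - x[0]),
                   x[0] * (r - x[2]) - x[1],
                   x[0] * x[1] - b * x[2]])
    return x + dt * dx

def hx(x):
    return x[:1]                         # only the first variable is measured

dt = 0.01
points = MerweScaledSigmaPoints(n=3, alpha=0.1, beta=2.0, kappa=0.0)
ukf = UnscentedKalmanFilter(dim_x=3, dim_z=1, dt=dt, fx=fx, hx=hx,
                            points=points)
ukf.x = np.array([1.0, 1.0, 1.0])
ukf.P *= 10.0
ukf.R *= 0.5 ** 2                        # measurement noise variance
ukf.Q *= 1e-3                            # process noise

rng = np.random.default_rng(0)
truth = np.array([1.5, 0.5, 20.0])
for _ in range(2000):
    truth = fx(truth, dt)
    z = truth[:1] + rng.normal(0, 0.5, 1)
    ukf.predict()
    ukf.update(z)
print("true:", truth, "estimated:", ukf.x)   # hidden variables reconstructed
```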

  11. Ab initio modeling of nonequilibrium electron-ion dynamics of iron in the warm dense matter regime

    NASA Astrophysics Data System (ADS)

    Ogitsu, T.; Fernandez-Pañella, A.; Hamel, S.; Correa, A. A.; Prendergast, D.; Pemmaraju, C. D.; Ping, Y.

    2018-06-01

    The spatiotemporal electron and ion relaxation dynamics of iron induced by femtosecond laser pulses was studied using a one-dimensional two-temperature model (1D-TTM) in which electron- and ion-temperature-dependent thermophysical parameters such as the specific heat (C), electron-phonon coupling (G), and thermal conductivity (K) were calculated with ab initio density-functional-theory (DFT) simulations. Based on the simulated time evolutions of the electron and ion temperature distributions [T_e(x,t) and T_i(x,t)], the time evolution of x-ray absorption near-edge spectroscopy (XANES) was calculated and compared with experimental results reported by Fernandez-Pañella et al., where the slope of the XANES spectrum at the onset of absorption (s) was used due to its excellent sensitivity to the electron temperature. Our results indicate that the dependence of G and C on ion temperature, which is largely neglected in past studies, is very important for studying the nonequilibrium electron-ion relaxation dynamics of iron under warm dense matter (WDM) conditions. It is also shown that the 1/s behavior becomes very sensitive to the thermal gradient profile, in other words, to the values of K in a TTM simulation, for target thicknesses of about two to four times the mean free path of conduction electrons. Our approach based on 1D-TTM and XANES simulations can be used to determine the optimal combination of target geometry and laser fluence for a given target material, which will enable us to tightly constrain the thermophysical parameters under electron-ion nonequilibrium WDM conditions.
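
    The TTM equations, C_e dT_e/dt = d/dx(K dT_e/dx) - G(T_e - T_i) + S(x,t) and C_i dT_i/dt = G(T_e - T_i), can be marched with an explicit scheme in a few lines. The constant coefficients below are rough placeholders; the paper's point is precisely that C, G, and K should come from DFT and depend on both temperatures:

```python
import numpy as np

# Illustrative constants; the paper uses DFT-derived, T-dependent values.
Ce, Ci = 2.0e4, 3.5e6        # heat capacities (J m^-3 K^-1)
G, K = 5.0e17, 80.0          # coupling (W m^-3 K^-1), conductivity (W m^-1 K^-1)
L, nx = 50e-9, 100           # target thickness (m), grid points
dx = L / nx
dt, nt = 1e-17, 100_000      # explicit time step (s), steps (-> 1 ps)
x = np.linspace(0.0, L, nx)

Te = np.full(nx, 300.0)      # electron temperature (K)
Ti = np.full(nx, 300.0)      # ion temperature (K)

def source(t):
    """Fs-pulse deposition: exponential in depth, Gaussian in time (toy)."""
    return 5e21 * np.exp(-x / 15e-9) * np.exp(-((t - 100e-15) / 50e-15) ** 2)

for n in range(nt):
    lap = np.zeros(nx)
    lap[1:-1] = (Te[2:] - 2.0 * Te[1:-1] + Te[:-2]) / dx**2
    lap[0], lap[-1] = lap[1], lap[-2]          # crude insulated boundaries
    coupling = G * (Te - Ti)
    Te = Te + dt * (K * lap - coupling + source(n * dt)) / Ce
    Ti = Ti + dt * coupling / Ci

print(f"after 1 ps: surface Te = {Te[0]:.0f} K, Ti = {Ti[0]:.0f} K")
```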

  12. Solute transport with equilibrium aqueous complexation and either sorption or ion exchange: Simulation methodology and applications

    USGS Publications Warehouse

    Lewis, F.M.; Voss, C.I.; Rubin, J.

    1987-01-01

    Methodologies that account for specific types of chemical reactions in the simulation of solute transport can be developed so they are compatible with solution algorithms employed in existing transport codes. This enables the simulation of reactive transport in complex multidimensional flow regimes, and provides a means for existing codes to account for some of the fundamental chemical processes that occur among transported solutes. Two equilibrium-controlled reaction systems demonstrate a methodology for accommodating chemical interaction into models of solute transport. One system involves the sorption of a given chemical species, as well as two aqueous complexations in which the sorbing species is a participant. The other reaction set involves binary ion exchange coupled with aqueous complexation involving one of the exchanging species. The methodology accommodates these reaction systems through the addition of nonlinear terms to the transport equations for the sorbing species. Example simulation results show (1) the effect equilibrium chemical parameters have on the spatial distributions of concentration for complexing solutes; (2) that an interrelationship exists between mechanical dispersion and the various reaction processes; (3) that dispersive parameters of the porous media cannot be determined from reactive concentration distributions unless the reaction is accounted for or the influence of the reaction is negligible; (4) how the concentration of a chemical species may be significantly affected by its participation in an aqueous complex with a second species which also sorbs; and (5) that these coupled chemical processes influencing reactive transport can be demonstrated in two-dimensional flow regimes. © 1987.
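
    For the sorption case, equilibrium linear sorption enters the transport equation as a retardation factor R = 1 + (rho_b/theta)K_d on the storage term: R dC/dt = D d2C/dx2 - v dC/dx. The explicit scheme below is a generic textbook discretization with illustrative parameters, not the solution algorithm of any particular transport code:

```python
import numpy as np

# Illustrative parameters for 1D advection-dispersion with linear sorption.
v, D = 1.0, 0.05            # pore velocity (m/d), dispersion (m^2/d)
rho_b, theta, Kd = 1.6, 0.4, 0.25
R = 1 + rho_b / theta * Kd  # retardation factor (here R = 2)

L_dom, nx = 10.0, 400
dx = L_dom / nx
dt = 0.4 * min(dx**2 / (2 * D), dx / v)   # conservative stability limit
C = np.zeros(nx)
C[0] = 1.0                                 # constant-concentration inlet

for _ in range(int(5.0 / dt)):             # simulate 5 days
    adv = -v * (C[1:-1] - C[:-2]) / dx     # upwind advection
    disp = D * (C[2:] - 2 * C[1:-1] + C[:-2]) / dx**2
    C[1:-1] += dt * (adv + disp) / R
    C[0], C[-1] = 1.0, C[-2]               # boundary conditions

# The front travels at v/R: sorption halves the apparent plume velocity.
print("front position ~", np.argmax(C < 0.5) * dx, "m after 5 days")
```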

  13. Multiscale Simulation of Microbe Structure and Dynamics

    PubMed Central

    Joshi, Harshad; Singharoy, Abhishek; Sereda, Yuriy V.; Cheluvaraja, Srinath C.; Ortoleva, Peter J.

    2012-01-01

    A multiscale mathematical and computational approach is developed that captures the hierarchical organization of a microbe. It is found that a natural perspective for understanding a microbe is in terms of a hierarchy of variables at various levels of resolution. This hierarchy starts with the N-atom description and terminates with order parameters characterizing a whole microbe. This conceptual framework is used to guide the analysis of the Liouville equation for the probability density of the positions and momenta of the N atoms constituting the microbe and its environment. Using multiscale mathematical techniques, we derive equations for the co-evolution of the order parameters and the probability density of the N-atom state. This approach yields a rigorous way to transfer information between variables on different space-time scales. It elucidates the interplay between equilibrium and far-from-equilibrium processes underlying microbial behavior. It also provides a framework for using coarse-grained nanocharacterization data to guide microbial simulation. It enables a methodical search for free-energy minimizing structures, many of which are typically supported by the set of macromolecules and membranes constituting a given microbe. This suite of capabilities provides a natural framework for arriving at a fundamental understanding of microbial behavior, the analysis of nanocharacterization data, and the computer-aided design of nanostructures for biotechnical and medical purposes. Selected features of the methodology are demonstrated using our multiscale bionanosystem simulator DeductiveMultiscaleSimulator. Systems used to demonstrate the approach are structural transitions in the cowpea chlorotic mottle virus, RNA of satellite tobacco mosaic virus, virus-like particles related to human papillomavirus, and the iron-binding protein lactoferrin. PMID:21802438

  14. ExoData: A Python package to handle large exoplanet catalogue data

    NASA Astrophysics Data System (ADS)

    Varley, Ryan

    2016-10-01

    Exoplanet science often involves using the system parameters of real exoplanets for tasks such as simulations, fitting routines, and target selection for proposals. Several exoplanet catalogues are already well established but often lack a version history and code-friendly interfaces. Software that bridges the gap between the catalogues and code enables users to improve the repeatability of results by facilitating the retrieval of the exact system parameters used in published results, along with unifying the equations and software used. As exoplanet science moves towards large data, gone are the days when researchers can recall the current population from memory. An interface able to query the population now becomes invaluable for target selection and population analysis. ExoData is a Python interface and exploratory analysis tool for the Open Exoplanet Catalogue. It allows the loading of exoplanet systems into Python as objects (Planet, Star, Binary, etc.) from which common orbital and system equations can be calculated and measured parameters retrieved. This allows researchers to use tested code for the common equations they require (with units) and provides a large science input catalogue of planets for easy plotting and use in research. Advanced querying of targets is possible using the database and the Python programming language. ExoData is also able to parse spectral types and fill in missing parameters according to programmable specifications and equations. Examples of use cases are the integration of equations into data reduction pipelines, selecting planets for observing proposals, and serving as an input catalogue for large-scale simulation and analysis of planets. ExoData is a Python package available freely on GitHub.

  15. SF-FDTD analysis of a predictive physical model for parallel aligned liquid crystal devices

    NASA Astrophysics Data System (ADS)

    Márquez, Andrés.; Francés, Jorge; Martínez, Francisco J.; Gallego, Sergi; Alvarez, Mariela L.; Calzado, Eva M.; Pascual, Inmaculada; Beléndez, Augusto

    2017-08-01

    Recently we demonstrated a novel, simplified model enabling calculation of the voltage-dependent retardance provided by parallel-aligned liquid-crystal-on-silicon (PA-LCoS) devices for a very wide range of incidence angles and any wavelength in the visible. To our knowledge it represents the most simplified approach still showing predictive capability. Deeper insight into the physics behind the simplified model is necessary to understand whether the parameters in the model are physically meaningful. Since the PA-LCoS is a black box, where we do not have information about the physical parameters of the device, we cannot perform this kind of analysis using the experimental retardance measurements. In this work we develop realistic simulations for the non-linear tilt of the liquid crystal director across the thickness of the liquid crystal layer in the PA devices. We consider these profiles to have a sine-like shape, which is a good approximation for typical ranges of applied voltage in commercial PA-LCoS microdisplays. For these simulations we develop a rigorous method based on the split-field finite difference time domain (SF-FDTD) technique which provides realistic retardance values. These values are used as the experimental measurements to which the simplified model is fitted. From this analysis we learn that the simplified model is very robust, providing unambiguous solutions when fitting its parameters. We also learn that two of the parameters in the model are physically meaningful, providing a useful reverse-engineering approach, with predictive capability, to probe into the internal characteristics of the PA-LCoS device.
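
    How a sine-like tilt profile maps to retardance can be sketched with the standard uniaxial effective-index expression; the indices, cell thickness, and tilt amplitudes below are illustrative assumptions, not calibrated device values:

```python
import numpy as np

def n_eff(theta, n_o=1.5, n_e=1.7):
    """Effective index seen by the extraordinary wave when the director
    is tilted by theta out of the polarization plane."""
    return n_o * n_e / np.sqrt(n_e**2 * np.sin(theta)**2 +
                               n_o**2 * np.cos(theta)**2)

def retardance(theta_max, d=3e-6, wavelength=633e-9, n_o=1.5, nz=400):
    """Retardance (rad) for a sine-like tilt profile across the cell."""
    z = np.linspace(0.0, d, nz)
    theta = theta_max * np.sin(np.pi * z / d)   # sine-like director profile
    dz = z[1] - z[0]
    return 2 * np.pi / wavelength * np.sum(n_eff(theta) - n_o) * dz

for tilt_deg in (10, 30, 60, 85):               # stands in for voltage levels
    gamma = retardance(np.radians(tilt_deg))
    print(f"max tilt {tilt_deg:2d} deg -> retardance {gamma:5.2f} rad")
```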

  16. Setup of a Parameterized FE Model for the Die Roll Prediction in Fine Blanking using Artificial Neural Networks

    NASA Astrophysics Data System (ADS)

    Stanke, J.; Trauth, D.; Feuerhack, A.; Klocke, F.

    2017-09-01

    Die roll is a morphological feature of fine blanked sheared edges. The die roll reduces the functional part of the sheared edge. To compensate for the die roll, thicker sheet metal strips and secondary machining must be used. However, in order to avoid this, the influence of various fine blanking process parameters on the die roll has been experimentally and numerically studied, but there is still a lack of knowledge on the effects of some factors and especially factor interactions on the die roll. Recent advances in the field of artificial intelligence motivate the hybrid use of the finite element method and artificial neural networks to account for these non-considered parameters. Therefore, a set of simulations using a validated finite element model of fine blanking is first used to train an artificial neural network. Then the artificial neural network is trained with thousands of experimental trials. Thus, the objective of this contribution is to develop an artificial neural network that reliably predicts the die roll. Therefore, in this contribution, the setup of a fully parameterized 2D FE model is presented that will be used for batch training of an artificial neural network. The FE model enables an automatic variation of the edge radii of blank punch and die plate, the counter and blank holder force, the sheet metal thickness and part diameter, V-ring height and position, cutting velocity as well as material parameters covered by the Hensel-Spittel model for 16MnCr5 (1.7131, AISI/SAE 5115). The FE model is validated using experimental trials. The result of this contribution is an FE model suitable for performing 9,623 simulations and passing the simulated die roll width and height automatically to an artificial neural network.

  17. Models for small-scale structure on cosmic strings. II. Scaling and its stability

    NASA Astrophysics Data System (ADS)

    Vieira, J. P. P.; Martins, C. J. A. P.; Shellard, E. P. S.

    2016-11-01

    We make use of the formalism described in a previous paper [Martins et al., Phys. Rev. D 90, 043518 (2014)] to address general features of wiggly cosmic string evolution. In particular, we highlight the important role played by poorly understood energy loss mechanisms and propose a simple Ansatz which tackles this problem in the context of an extended velocity-dependent one-scale model. We find a general procedure to determine all the scaling solutions admitted by a specific string model and study their stability, enabling a detailed comparison with future numerical simulations. A simpler comparison with previous Goto-Nambu simulations supports earlier evidence that scaling is easier to achieve in the matter era than in the radiation era. In addition, we also find that the requirement that a scaling regime be stable seems to notably constrain the allowed range of energy loss parameters.

  18. Figures of Merit for Control Verification

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2008-01-01

    This paper proposes a methodology for evaluating a controller's ability to satisfy a set of closed-loop specifications when the plant has an arbitrary functional dependency on uncertain parameters. Control verification metrics applicable to deterministic and probabilistic uncertainty models are proposed. These metrics, which result from sizing the largest uncertainty set of a given class for which the specifications are satisfied, enable systematic assessment of competing control alternatives regardless of the methods used to derive them. A particularly attractive feature of the tools derived is that their efficiency and accuracy do not depend on the robustness of the controller. This is in sharp contrast to Monte Carlo based methods where the number of simulations required to accurately approximate the failure probability grows exponentially with its closeness to zero. This framework allows for the integration of complex, high-fidelity simulations of the integrated system and only requires standard optimization algorithms for its implementation.

  19. Experimental investigation of supersonic low pressure air plasma flows obtained with different arc-jet operating conditions

    NASA Astrophysics Data System (ADS)

    Lago, Viviana; Ndiaye, Abdoul-Aziz

    2012-11-01

    A stationary arc-jet plasma flow at low pressure is used to simulate some properties of the gas flow surrounding a vehicle during its entry into a celestial body's atmosphere. This paper presents an experimental study concerning plasmas simulating re-entry into the Earth's atmosphere. Optical measurements have been carried out for several operating plasma conditions in the free stream and in the shock layer formed in front of a flat cylindrical plate placed in the plasma jet. The analysis of the spectral radiation enabled the identification of the emitting species, the determination of the rotational and vibrational temperatures in the free stream and in the shock layer, and the determination of the distance of the shock from the flat plate face. Plasma fluid parameters such as stagnation pressure, specific enthalpy, and heat flux have been determined experimentally along the plasma-jet axis.

  20. A New Tribological Test for Candidate Brush Seal Materials Evaluation

    NASA Technical Reports Server (NTRS)

    Fellenstein, James A.; Dellacorte, Christopher

    1994-01-01

    A new tribological test for candidate brush seal materials evaluation has been developed. The sliding contact between the brush seal wires and their mating counterface journal is simulated by testing a small tuft of wire against the outside diameter of a high speed rotating shaft. The test configuration is similar to a standard block on ring geometry. The new tester provides the capability to measure both the friction and wear of candidate wire and counterface materials under controlled loading conditions in the gram to kilogram range. A wide test condition latitude of speeds (1 to 27 m/s), temperatures (25 to 700 C), and loads (0.5 to 10 N) enables the simulation of many of the important tribological parameters found in turbine engine brush seals. This paper describes the new test rig and specimen configuration and presents initial data for candidate seal materials comparing tuft test results and wear surface morphology to field tested seal components.

  1. Overview of an Indoor Sonic Boom Simulator at NASA Langley Research Center

    NASA Technical Reports Server (NTRS)

    Klos, Jacob

    2012-01-01

    A facility has been constructed at NASA Langley Research Center to simulate the soundscape inside residential houses that are exposed to environmental noise from aircraft. This controllable indoor listening environment, the Interior Effects Room, enables systematic study of parameters that affect psychoacoustic response. The single-room facility, built using typical residential construction methods and materials, is surrounded on adjacent sides by two arrays of loudspeakers in close proximity to the exterior walls. The arrays, containing 52 subwoofers and 52 mid-range speakers, have a usable bandwidth of 3 Hz to 5 kHz and sufficient output to allow study of sonic boom noise. In addition to these exterior arrays, satellite speakers placed inside the room are used to augment the transmitted sound with rattle and other audible contact-induced noise that can result from low frequency excitation of a residential house. The layout of the facility, operational characteristics, acoustic characteristics and equalization approaches are summarized.

  2. Interpreting plasmonic response of epitaxial Ag/Si(100) island ensembles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kong, Dexin; Jiang, Liying; Drucker, Jeff

    Associating features in the experimentally measured optical response of epitaxial Ag islands grown on Si(100) with the localized surface plasmon resonances (LSPRs) hosted by the Ag islands is challenging due to the variation of the Si dielectric function over the energy range under consideration. However, it is possible to conclusively identify features in the experimental spectra with LSPR modes oscillating both parallel and perpendicular to the epitaxial interface by simulating the optical response. The Abeles matrix method is used to describe the composite layered system and the Ag islands are modeled using the thin island film model developed by Bedeaux and Vlieger. By incorporating island morphology parameters determined by quantitative analysis of electron micrographs, the simulation faithfully reproduces the main features of the experimental spectra. Individually zeroing the dipoles associated with the LSPR modes enables conclusive identification of their contribution to the optical response of the composite system.

  3. Compact and controlled microfluidic mixing and biological particle capture

    NASA Astrophysics Data System (ADS)

    Ballard, Matthew; Owen, Drew; Mills, Zachary Grant; Hesketh, Peter J.; Alexeev, Alexander

    2016-11-01

    We use three-dimensional simulations and experiments to develop a multifunctional microfluidic device that performs rapid and controllable microfluidic mixing and specific particle capture. Our device uses a compact microfluidic channel decorated with magnetic features. A rotating magnetic field precisely controls individual magnetic microbeads orbiting around the features, enabling effective continuous-flow mixing of fluid streams over a compact mixing region. We use computer simulations to elucidate the underlying physical mechanisms that lead to effective mixing and compare them with experimental mixing results. We study the effect of various system parameters on microfluidic mixing to design an efficient micromixer. We also experimentally and numerically demonstrate that orbiting microbeads can effectively capture particles transported by the fluid, which has major implications in pre-concentration and detection of biological particles including various cells and bacteria, with applications in areas such as point-of-care diagnostics, biohazard detection, and food safety. Support from NSF and USDA is gratefully acknowledged.

  4. Profit-based conventional resource scheduling with renewable energy penetration

    NASA Astrophysics Data System (ADS)

    Reddy, K. Srikanth; Panwar, Lokesh Kumar; Kumar, Rajesh; Panigrahi, B. K.

    2017-08-01

    Technological breakthroughs in renewable energy technologies (RETs) have enabled them to attain grid parity, thereby making them potential contenders for existing conventional resources. To examine the market participation of RETs, this paper formulates a scheduling problem accommodating energy market participation of wind- and solar-independent power producers (IPPs), treating both conventional and RETs as identical entities. Furthermore, constraints pertaining to penetration and curtailments of RETs are restructured. Additionally, an appropriate objective function for the profit incurred by conventional resource IPPs through reserve market participation, as a function of renewable energy curtailment, is also proposed. The proposed concept is simulated with a test system comprising 10 conventional generation units in conjunction with solar photovoltaic (SPV) and wind energy generators (WEG). The simulation results indicate that renewable energy integration and its curtailment limits influence the market participation or scheduling strategies of conventional resources in both energy and reserve markets. Furthermore, load and reliability parameters are also affected.

  5. High speed stereovision setup for position and motion estimation of fertilizer particles leaving a centrifugal spreader.

    PubMed

    Hijazi, Bilal; Cool, Simon; Vangeyte, Jürgen; Mertens, Koen C; Cointault, Frédéric; Paindavoine, Michel; Pieters, Jan G

    2014-11-13

    A 3D imaging technique using a high speed binocular stereovision system was developed in combination with corresponding image processing algorithms for accurate determination of the parameters of particles leaving the spinning disks of centrifugal fertilizer spreaders. Validation of the stereo-matching algorithm using a virtual 3D stereovision simulator indicated an error of less than 2 pixels for 90% of the particles. The setup was validated using the cylindrical spread pattern of an experimental spreader. A 2D correlation coefficient of 90% and a relative error of 27% were found between the experimental results and the (simulated) spread pattern obtained with the developed setup. In combination with a ballistic flight model, the developed image acquisition and processing algorithms can enable fast determination and evaluation of the spread pattern, which can be used as a tool for spreader design and precise machine calibration.
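
    For rectified cameras, the depth reconstruction behind such a binocular setup reduces to Z = f·B/d, with focal length f (in pixels), baseline B, and disparity d; the calibration numbers below are invented for illustration:

```python
import numpy as np

def triangulate(xl, xr, y, f_px=2000.0, baseline_m=0.20):
    """Depth and 3D position from a rectified stereo pair.

    xl, xr, y: pixel coordinates (left x, right x, common row),
    relative to the principal point. Values here are illustrative.
    """
    d = xl - xr                       # disparity in pixels
    Z = f_px * baseline_m / d         # depth (m)
    return np.array([xl * Z / f_px, y * Z / f_px, Z])

p = triangulate(xl=150.0, xr=120.0, y=-40.0)
print("particle position (m):", p)    # ~13.3 m along the optical axis
```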

  18. High contrast ion acceleration at intensities exceeding 10^21 W cm^-2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dollar, F.; Zulick, C.; Matsuoka, T.

    2013-05-15

    Ion acceleration from short pulse laser interactions at intensities of 2×10^21 W cm^-2 was studied experimentally under a wide variety of parameters, including laser contrast, incidence angle, and target thickness. Trends in maximum proton energy were observed, as well as evidence of improvement in the acceleration gradients by using dual plasma mirrors over traditional pulse cleaning techniques. Extremely high efficiency acceleration gradients were produced, accelerating both the contaminant layer and high charge state ions from the bulk of the target. Two dimensional particle-in-cell simulations enabled the study of the influence of scale length on submicron targets, where hydrodynamic expansion affects the rear surface as well as the front. Experimental evidence of larger electric fields for sharp density plasmas is observed in simulation results as well for such targets, where target ions are accelerated without the need for contaminant removal.

  7. Assessing performance of flaw characterization methods through uncertainty propagation

    NASA Astrophysics Data System (ADS)

    Miorelli, R.; Le Bourdais, F.; Artusi, X.

    2018-04-01

    In this work, we assess the inversion performance in terms of crack characterization and localization based on synthetic signals associated with ultrasonic and eddy current physics. More precisely, two different standard iterative inversion algorithms are used to minimize the discrepancy between measurements (i.e., the tested data) and simulations. Furthermore, in order to speed up the computation and avoid the computational burden often associated with iterative inversion algorithms, we replace the standard forward solver by a suitable metamodel fitted to a database built offline. In a second step, we assess the inversion performance by adding uncertainties on a subset of the database parameters and then, through the metamodel, propagating these uncertainties within the inversion procedure. The fast propagation of uncertainties enables efficient evaluation of the impact of incomplete knowledge of some of the parameters used to describe the inspection scenarios, a situation commonly encountered in the industrial NDE context.
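
    The surrogate-plus-inversion loop can be sketched generically with SciPy: an RBF interpolator trained on an offline database replaces the forward solver inside a least-squares search. The toy forward model and parameter names below are assumptions for illustration (RBFInterpolator requires SciPy >= 1.7):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

def forward(params):
    """Toy forward solver: signal from a crack (depth, length)."""
    depth, length = params
    t = np.linspace(0, 1, 50)
    return depth * np.exp(-((t - 0.5) / (0.1 + 0.2 * length)) ** 2)

# Offline database: sample the parameter space once, fit the metamodel.
train_p = rng.uniform([0.1, 0.1], [2.0, 1.0], size=(300, 2))
train_s = np.array([forward(p) for p in train_p])
surrogate = RBFInterpolator(train_p, train_s)     # cheap forward model

# Inversion: fit the surrogate to a "measured" signal.
measured = forward([1.3, 0.6]) + rng.normal(0, 0.01, 50)
res = least_squares(lambda p: surrogate(p[None, :])[0] - measured,
                    x0=[1.0, 0.5], bounds=([0.1, 0.1], [2.0, 1.0]))
print("recovered crack parameters:", res.x)       # close to (1.3, 0.6)
```

    Because the surrogate is cheap, the whole inversion can be re-run many times over perturbed database parameters to propagate their uncertainty.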

  8. SPS pilot signal design and power transponder analysis, volume 2, phase 3

    NASA Technical Reports Server (NTRS)

    Lindsey, W. C.; Scholtz, R. A.; Chie, C. M.

    1980-01-01

    The problem of pilot signal parameter optimization and the related problem of power transponder performance analysis for the Solar Power Satellite reference phase control system are addressed. Signal and interference models were established to enable specifications of the front end filters including both the notch filter and the antenna frequency response. A simulation program package was developed to be included in SOLARSIM to perform tradeoffs of system parameters based on minimizing the phase error for the pilot phase extraction. An analytical model that characterizes the overall power transponder operation was developed. From this model, the effects of different phase noise disturbance sources that contribute to phase variations at the output of the power transponders were studied and quantified. Results indicate that it is feasible to hold the antenna array phase error to less than one degree per power module for the type of disturbances modeled.

  9. Modeling of human movement monitoring using Bluetooth Low Energy technology.

    PubMed

    Mokhtari, G; Zhang, Q; Karunanithi, M

    2015-01-01

    Bluetooth Low Energy (BLE) is a wireless communication technology which can be used to monitor human movements. In this monitoring system, a BLE signal scanner scans the signal strength of BLE tags carried by people, to infer human movement patterns within its monitoring zone. However, to the best of our knowledge, one main aspect of this monitoring system which has not yet been thoroughly investigated in the literature is how to build a sound theoretical model, based on tunable BLE communication parameters such as the scanning time interval and advertising time interval, to enable the study and design of effective and efficient movement monitoring systems. In this paper, we proposed and developed a statistical model based on Monte-Carlo simulation, which can be utilized to assess the impacts of BLE technology parameters, in terms of latency and efficiency, on a movement monitoring system, and can thus benefit a more efficient system design.
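
    The kind of statistical model described can be prototyped as a Monte Carlo over the timing parameters: an advertiser transmits every advertising interval plus the spec's 0-10 ms random delay, a scanner listens during a scan window out of each scan interval, and the discovery latency is the time of the first overlap. The single-channel, collision-free simplification and the interval values are ours:

```python
import numpy as np

rng = np.random.default_rng(0)

def discovery_latency(adv_interval=1.28, scan_interval=2.56,
                      scan_window=0.512, n_events=10000):
    """First time an advertisement lands inside an active scan window.
    Single-channel, collision-free simplification of BLE discovery."""
    t, phase = 0.0, rng.uniform(0, scan_interval)  # random scanner phase
    for _ in range(n_events):
        t += adv_interval + rng.uniform(0, 0.010)  # advDelay: 0-10 ms
        if (t - phase) % scan_interval < scan_window:
            return t
    return np.inf

lat = np.array([discovery_latency() for _ in range(2000)])
print(f"mean latency {lat.mean():.2f} s, 95th pct {np.percentile(lat, 95):.2f} s")
```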

  10. Design of sub-Angstrom compact free-electron laser source

    NASA Astrophysics Data System (ADS)

    Bonifacio, Rodolfo; Fares, Hesham; Ferrario, Massimo; McNeil, Brian W. J.; Robb, Gordon R. M.

    2017-01-01

    In this paper, we propose for the first time practical parameters to construct a compact sub-Angstrom Free Electron Laser (FEL) based on Compton backscattering. Our recipe is based on using a picocoulomb electron bunch, enabling a very low emittance and an ultracold electron beam. We assume the FEL is operating in the quantum regime of Self Amplified Spontaneous Emission (SASE). The fundamental quantum feature is a significantly narrower spectrum of the emitted radiation relative to classical SASE. The quantum regime of the SASE FEL is reached when the momentum spread of the electron beam is smaller than the photon recoil momentum. Following the formulae describing SASE FEL operation, realistic designs for quantum FEL experiments are proposed. We discuss the practical constraints that influence the experimental parameters. Numerical simulations of power spectra and intensities are presented, and attractive radiation characteristics such as high flux, narrow linewidth, and short pulse structure are demonstrated.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tyson, T. A.; Gao, W.; Chen, Y. -S.

    Solar cells based on hybrid perovskites have shown high efficiency while possessing simple processing methods. To gain a fundamental understanding of their properties on an atomic level, we investigate single crystals of CH3NH3PbI3 with a narrow transition (~5 K) near 327 K. Temperature dependent structural measurements reveal a persistent tetragonal structure with smooth changes in the atomic displacement parameters (ADPs) on crossing T*. We show that the ADPs for I ions yield extended flat regions in the potential wells consistent with the measured large thermal expansion parameter. Molecular dynamics simulations reveal that this material exhibits significant asymmetries in the Pb-I pair distribution functions. We also show that the intrinsically enhanced freedom of motion of the iodine atoms enables large deformations. This flexibility (softness) of the atomic structure results in highly localized atomic relaxation about defects and hence accounts for both the high carrier mobility as well as the structural instability.

  12. "Genetically Engineered" Nanoelectronics

    NASA Technical Reports Server (NTRS)

    Klimeck, Gerhard; Salazar-Lazaro, Carlos H.; Stoica, Adrian; Cwik, Thomas

    2000-01-01

    The quantum mechanical functionality of nanoelectronic devices such as resonant tunneling diodes (RTDs), quantum well infrared photodetectors (QWIPs), quantum well lasers, and heterostructure field effect transistors (HFETs) is enabled by material variations on an atomic scale. The design and optimization of such devices requires a fundamental understanding of electron transport in such dimensions. The Nanoelectronic Modeling Tool (NEMO) is a general-purpose quantum device design and analysis tool based on a fundamental non-equilibrium electron transport theory. NEMO was combined with a parallelized genetic algorithm package (PGAPACK) to evolve structural and material parameters to match a desired set of experimental data. A numerical experiment that evolves structural variations such as layer widths and doping concentrations is performed to analyze an experimental current-voltage characteristic. The genetic algorithm is found to drive the NEMO simulation parameters close to the experimentally prescribed layer thicknesses and doping profiles. With such quantitative agreement between theory and experiment, design synthesis can be performed.
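
    A toy version of this evolve-to-match loop, with an analytic stand-in for the device simulator instead of NEMO and a minimal genetic algorithm instead of PGAPACK (all names and values invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
v = np.linspace(0.0, 1.0, 100)                    # bias sweep (V)

def iv_model(params):
    """Stand-in for the device simulator: a resonant current peak whose
    position and width depend on 'structural' parameters."""
    peak_v, width = params
    return np.exp(-((v - peak_v) / width) ** 2) + 0.2 * v

target = iv_model((0.35, 0.06)) + rng.normal(0, 0.01, v.size)  # "experiment"

def fitness(params):
    return -np.sum((iv_model(params) - target) ** 2)

# Minimal genetic algorithm: selection, blend crossover, Gaussian mutation.
pop = rng.uniform([0.05, 0.01], [0.95, 0.30], size=(60, 2))
for gen in range(80):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[-30:]]       # keep the fitter half
    idx = rng.integers(0, 30, size=(60, 2))
    w = rng.random((60, 1))
    pop = w * parents[idx[:, 0]] + (1 - w) * parents[idx[:, 1]]  # crossover
    pop += rng.normal(0, 0.01, pop.shape)         # mutation
    pop = np.clip(pop, [0.05, 0.01], [0.95, 0.30])

best = pop[np.argmax([fitness(p) for p in pop])]
print("evolved parameters:", best)                # near (0.35, 0.06)
```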

  13. Deriving estimates of individual variability in genetic potentials of performance traits for 3 dairy breeds, using a model of lifetime nutrient partitioning.

    PubMed

    Phuong, H N; Martin, O; de Boer, I J M; Ingvartsen, K L; Schmidely, Ph; Friggens, N C

    2015-01-01

    This study explored the ability of an existing lifetime nutrient partitioning model for simulating individual variability in genetic potentials of dairy cows. Generally, the model assumes a universal trajectory of dynamic partitioning of priority between life functions, and genetic scaling parameters are then incorporated to simulate individual differences in performance. Data of 102 cows including 180 lactations of 3 breeds: Danish Red, Danish Holstein, and Jersey, which were completely independent from those used previously for model development, were used. Individual cow performance records through sequential lactations were used to derive genetic scaling parameters for each animal by calibrating the model to achieve best fit, cow by cow. The model was able to fit individual curves of body weight, and milk fat, milk protein, and milk lactose concentrations with a high degree of accuracy. Daily milk yield and dry matter intake were satisfactorily predicted in early and mid lactation, but underpredictions were found in late lactation. Breeds and parities did not significantly affect the prediction accuracy. The means of genetic scaling parameters between Danish Red and Danish Holstein were similar but significantly different from those of Jersey. The extent of correlations between the genetic scaling parameters was consistent with that reported in the literature. In conclusion, this model is of value as a tool to derive estimates of genetic potentials of milk yield, milk composition, body reserve usage, and growth for different genotypes of cow. Moreover, it can be used to separate genetic variability in performance between individual cows from environmental noise. The model enables simulation of the effects of a genetic selection strategy on lifetime efficiency of individual cows, which has a main advantage of including the rearing costs, and thus can be used to explore the impact of future selection on animal performance and efficiency. Copyright © 2015 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  14. Interactive Visual Analytics Approach for Exploration of Geochemical Model Simulations with Different Parameter Sets

    NASA Astrophysics Data System (ADS)

    Jatnieks, Janis; De Lucia, Marco; Sips, Mike; Dransch, Doris

    2015-04-01

    Many geoscience applications can benefit from testing many combinations of input parameters for geochemical simulation models. It is, however, a challenge to screen the input and output data from the model to identify the significant relationships between input parameters and output variables. To address this problem, we propose a Visual Analytics approach that has been developed in an ongoing collaboration between computer science and geoscience researchers. Our Visual Analytics approach uses visualization methods of hierarchical horizontal axis, multi-factor stacked bar charts and interactive semi-automated filtering for input and output data together with automatic sensitivity analysis. This guides the users towards significant relationships. We implement our approach as an interactive data exploration tool. It is designed with flexibility in mind, so that a diverse set of tasks such as inverse modeling, sensitivity analysis and model parameter refinement can be supported. Here we demonstrate the capabilities of our approach by two examples for gas storage applications. For the first example our Visual Analytics approach enabled the analyst to observe how the element concentrations change around previously established baselines in response to thousands of different combinations of mineral phases. This supported combinatorial inverse modeling for interpreting observations about the chemical composition of the formation fluids at the Ketzin pilot site for CO2 storage. The results indicate that, within the experimental error range, the formation fluid cannot be considered at local thermodynamic equilibrium with the mineral assemblage of the reservoir rock. This is a valuable insight from the predictive geochemical modeling for the Ketzin site. For the second example our approach supports sensitivity analysis for a reaction involving the reductive dissolution of pyrite with formation of pyrrhotite in the presence of gaseous hydrogen. We determine that this reaction is thermodynamically favorable under a broad range of conditions, including low temperatures and the absence of microbial catalysts. Our approach has potential for use in other applications that involve exploration of relationships in geochemical simulation model data.

  15. Simulation and fitting of complex reaction network TPR: The key is the objective function

    DOE PAGES

    Savara, Aditya Ashi

    2016-07-07

    In this research, a method has been developed for finding improved fits during simulation and fitting of data from complex reaction network temperature programmed reactions (CRN-TPR). Simulation and fitting of CRN-TPR presents additional challenges relative to simulation and fitting of simpler TPR systems. The method used here enables checking the plausibility of proposed chemical mechanisms and kinetic models. The most important finding was that an objective function based on integrated production provides more utility in finding improved fits than an objective function based on the rate of production. The response surface produced by using the integrated production is monotonic, suppresses effects from experimental noise, requires fewer points to capture the response behavior, and can be simulated numerically with smaller errors. For CRN-TPR, resolving peaks prior to fitting and weighting experimental data points are of increased importance relative to simple reaction network TPR. Using an implicit ordinary differential equation solver was found to be inadequate for simulating CRN-TPR. Lastly, the method employed here was capable of attaining improved fits in simulation and fitting of CRN-TPR when starting with a postulated mechanism and physically realistic initial guesses for the kinetic parameters.
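
    The central recommendation, fitting on integrated rather than instantaneous production, can be illustrated with a toy TPR peak; the functional form and numbers below are invented for the demonstration.

```python
# Compare an objective built on the rate of production with one built on
# integrated (cumulative) production; the latter is smoother and less
# sensitive to noise, mirroring the paper's finding.
import numpy as np
from scipy.integrate import cumulative_trapezoid

rng = np.random.default_rng(2)
T = np.linspace(300.0, 700.0, 400)                     # temperature ramp, K

def rate(T, T_peak, width=25.0):                       # toy desorption peak
    return np.exp(-0.5 * ((T - T_peak) / width) ** 2)

observed = rate(T, 500.0) + rng.normal(0.0, 0.05, T.size)
obs_cum = cumulative_trapezoid(observed, T, initial=0.0)

def objective_rate(T_peak):
    return np.sum((rate(T, T_peak) - observed) ** 2)

def objective_integrated(T_peak):
    sim_cum = cumulative_trapezoid(rate(T, T_peak), T, initial=0.0)
    return np.sum((sim_cum - obs_cum) ** 2)

for T_peak in (480.0, 500.0, 520.0):
    print(T_peak, round(objective_rate(T_peak), 2), round(objective_integrated(T_peak), 2))
```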

  16. Parameter estimation for compact binary coalescence signals with the first generation gravitational-wave detector network

    NASA Astrophysics Data System (ADS)

    Aasi, J.; Abadie, J.; Abbott, B. P.; Abbott, R.; Abbott, T. D.; Abernathy, M.; Accadia, T.; Acernese, F.; Adams, C.; Adams, T.; Addesso, P.; Adhikari, R.; Affeldt, C.; Agathos, M.; Agatsuma, K.; Ajith, P.; Allen, B.; Allocca, A.; Amador Ceron, E.; Amariutei, D.; Anderson, S. B.; Anderson, W. G.; Arai, K.; Araya, M. C.; Ast, S.; Aston, S. M.; Astone, P.; Atkinson, D.; Aufmuth, P.; Aulbert, C.; Aylott, B. E.; Babak, S.; Baker, P.; Ballardin, G.; Ballmer, S.; Bao, Y.; Barayoga, J. C. B.; Barker, D.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Barton, M. A.; Bartos, I.; Bassiri, R.; Bastarrika, M.; Basti, A.; Batch, J.; Bauchrowitz, J.; Bauer, Th. S.; Bebronne, M.; Beck, D.; Behnke, B.; Bejger, M.; Beker, M. G.; Bell, A. S.; Bell, C.; Belopolski, I.; Benacquista, M.; Berliner, J. M.; Bertolini, A.; Betzwieser, J.; Beveridge, N.; Beyersdorf, P. T.; Bhadbade, T.; Bilenko, I. A.; Billingsley, G.; Birch, J.; Biswas, R.; Bitossi, M.; Bizouard, M. A.; Black, E.; Blackburn, J. K.; Blackburn, L.; Blair, D.; Bland, B.; Blom, M.; Bock, O.; Bodiya, T. P.; Bogan, C.; Bond, C.; Bondarescu, R.; Bondu, F.; Bonelli, L.; Bonnand, R.; Bork, R.; Born, M.; Boschi, V.; Bose, S.; Bosi, L.; Bouhou, B.; Braccini, S.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Branchesi, M.; Brau, J. E.; Breyer, J.; Briant, T.; Bridges, D. O.; Brillet, A.; Brinkmann, M.; Brisson, V.; Britzger, M.; Brooks, A. F.; Brown, D. A.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Burguet–Castell, J.; Buskulic, D.; Buy, C.; Byer, R. L.; Cadonati, L.; Cagnoli, G.; Calloni, E.; Camp, J. B.; Campsie, P.; Cannon, K.; Canuel, B.; Cao, J.; Capano, C. D.; Carbognani, F.; Carbone, L.; Caride, S.; Caudill, S.; Cavaglià, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C.; Cesarini, E.; Chalermsongsak, T.; Charlton, P.; Chassande-Mottin, E.; Chen, W.; Chen, X.; Chen, Y.; Chincarini, A.; Chiummo, A.; Cho, H. S.; Chow, J.; Christensen, N.; Chua, S. S. Y.; Chung, C. T. Y.; Chung, S.; Ciani, G.; Clara, F.; Clark, D. E.; Clark, J. A.; Clayton, J. H.; Cleva, F.; Coccia, E.; Cohadon, P.-F.; Colacino, C. N.; Colla, A.; Colombini, M.; Conte, A.; Conte, R.; Cook, D.; Corbitt, T. R.; Cordier, M.; Cornish, N.; Corsi, A.; Costa, C. A.; Coughlin, M.; Coulon, J.-P.; Couvares, P.; Coward, D. M.; Cowart, M.; Coyne, D. C.; Creighton, J. D. E.; Creighton, T. D.; Cruise, A. M.; Cumming, A.; Cunningham, L.; Cuoco, E.; Cutler, R. M.; Dahl, K.; Damjanic, M.; Danilishin, S. L.; D'Antonio, S.; Danzmann, K.; Dattilo, V.; Daudert, B.; Daveloza, H.; Davier, M.; Daw, E. J.; Dayanga, T.; De Rosa, R.; DeBra, D.; Debreczeni, G.; Degallaix, J.; Del Pozzo, W.; Dent, T.; Dergachev, V.; DeRosa, R.; Dhurandhar, S.; Di Fiore, L.; Di Lieto, A.; Di Palma, I.; Di Paolo Emilio, M.; Di Virgilio, A.; Díaz, M.; Dietz, A.; Donovan, F.; Dooley, K. L.; Doravari, S.; Dorsher, S.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Du, Z.; Dumas, J.-C.; Dwyer, S.; Eberle, T.; Edgar, M.; Edwards, M.; Effler, A.; Ehrens, P.; Endrőczi, G.; Engel, R.; Etzel, T.; Evans, K.; Evans, M.; Evans, T.; Factourovich, M.; Fafone, V.; Fairhurst, S.; Farr, B. F.; Farr, W. M.; Favata, M.; Fazi, D.; Fehrmann, H.; Feldbaum, D.; Feroz, F.; Ferrante, I.; Ferrini, F.; Fidecaro, F.; Finn, L. S.; Fiori, I.; Fisher, R. P.; Flaminio, R.; Foley, S.; Forsi, E.; Forte, L. A.; Fotopoulos, N.; Fournier, J.-D.; Franc, J.; Franco, S.; Frasca, S.; Frasconi, F.; Frede, M.; Frei, M. A.; Frei, Z.; Freise, A.; Frey, R.; Fricke, T. T.; Friedrich, D.; Fritschel, P.; Frolov, V. V.; Fujimoto, M.-K.; Fulda, P. 
J.; Fyffe, M.; Gair, J.; Galimberti, M.; Gammaitoni, L.; Garcia, J.; Garufi, F.; Gáspár, M. E.; Gelencser, G.; Gemme, G.; Genin, E.; Gennai, A.; Gergely, L. Á.; Ghosh, S.; Giaime, J. A.; Giampanis, S.; Giardina, K. D.; Giazotto, A.; Gil-Casanova, S.; Gill, C.; Gleason, J.; Goetz, E.; González, G.; Gorodetsky, M. L.; Goßler, S.; Gouaty, R.; Graef, C.; Graff, P. B.; Granata, M.; Grant, A.; Gray, C.; Greenhalgh, R. J. S.; Gretarsson, A. M.; Griffo, C.; Grote, H.; Grover, K.; Grunewald, S.; Guidi, G. M.; Guido, C.; Gupta, R.; Gustafson, E. K.; Gustafson, R.; Hallam, J. M.; Hammer, D.; Hammond, G.; Hanks, J.; Hanna, C.; Hanson, J.; Harms, J.; Harry, G. M.; Harry, I. W.; Harstad, E. D.; Hartman, M. T.; Haster, C.-J.; Haughian, K.; Hayama, K.; Hayau, J.-F.; Heefner, J.; Heidmann, A.; Heintze, M. C.; Heitmann, H.; Hello, P.; Hemming, G.; Hendry, M. A.; Heng, I. S.; Heptonstall, A. W.; Herrera, V.; Heurs, M.; Hewitson, M.; Hild, S.; Hoak, D.; Hodge, K. A.; Holt, K.; Holtrop, M.; Hong, T.; Hooper, S.; Hough, J.; Howell, E. J.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh-Dinh, T.; Ingram, D. R.; Inta, R.; Isogai, T.; Ivanov, A.; Izumi, K.; Jacobson, M.; James, E.; Jang, Y. J.; Jaranowski, P.; Jesse, E.; Johnson, W. W.; Jones, D. I.; Jones, R.; Jonker, R. J. G.; Ju, L.; Kalmus, P.; Kalogera, V.; Kandhasamy, S.; Kang, G.; Kanner, J. B.; Kasprzack, M.; Kasturi, R.; Katsavounidis, E.; Katzman, W.; Kaufer, H.; Kaufman, K.; Kawabe, K.; Kawamura, S.; Kawazoe, F.; Keitel, D.; Kelley, D.; Kells, W.; Keppel, D. G.; Keresztes, Z.; Khalaidovski, A.; Khalili, F. Y.; Khazanov, E. A.; Kim, B. K.; Kim, C.; Kim, H.; Kim, K.; Kim, N.; Kim, Y. M.; King, P. J.; Kinzel, D. L.; Kissel, J. S.; Klimenko, S.; Kline, J.; Kokeyama, K.; Kondrashov, V.; Koranda, S.; Korth, W. Z.; Kowalska, I.; Kozak, D.; Kringel, V.; Krishnan, B.; Królak, A.; Kuehn, G.; Kumar, P.; Kumar, R.; Kurdyumov, R.; Kwee, P.; Lam, P. K.; Landry, M.; Langley, A.; Lantz, B.; Lastzka, N.; Lawrie, C.; Lazzarini, A.; Le Roux, A.; Leaci, P.; Lee, C. H.; Lee, H. K.; Lee, H. M.; Leong, J. R.; Leonor, I.; Leroy, N.; Letendre, N.; Lhuillier, V.; Li, J.; Li, T. G. F.; Lindquist, P. E.; Litvine, V.; Liu, Y.; Liu, Z.; Lockerbie, N. A.; Lodhia, D.; Logue, J.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lough, J.; Lubinski, M.; Lück, H.; Lundgren, A. P.; Macarthur, J.; Macdonald, E.; Machenschalk, B.; MacInnis, M.; Macleod, D. M.; Mageswaran, M.; Mailand, K.; Majorana, E.; Maksimovic, I.; Malvezzi, V.; Man, N.; Mandel, I.; Mandic, V.; Mantovani, M.; Marchesoni, F.; Marion, F.; Márka, S.; Márka, Z.; Markosyan, A.; Maros, E.; Marque, J.; Martelli, F.; Martin, I. W.; Martin, R. M.; Marx, J. N.; Mason, K.; Masserot, A.; Matichard, F.; Matone, L.; Matzner, R. A.; Mavalvala, N.; Mazzolo, G.; McCarthy, R.; McClelland, D. E.; McGuire, S. C.; McIntyre, G.; McIver, J.; Meadors, G. D.; Mehmet, M.; Meier, T.; Melatos, A.; Melissinos, A. C.; Mendell, G.; Menéndez, D. F.; Mercer, R. A.; Meshkov, S.; Messenger, C.; Meyer, M. S.; Miao, H.; Michel, C.; Milano, L.; Miller, J.; Minenkov, Y.; Mingarelli, C. M. F.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Moe, B.; Mohan, M.; Mohapatra, S. R. P.; Moraru, D.; Moreno, G.; Morgado, N.; Morgia, A.; Mori, T.; Morriss, S. R.; Mosca, S.; Mossavi, K.; Mours, B.; Mow–Lowry, C. M.; Mueller, C. L.; Mueller, G.; Mukherjee, S.; Mullavey, A.; Müller-Ebhardt, H.; Munch, J.; Murphy, D.; Murray, P. 
G.; Mytidis, A.; Nash, T.; Naticchioni, L.; Necula, V.; Nelson, J.; Neri, I.; Newton, G.; Nguyen, T.; Nishizawa, A.; Nitz, A.; Nocera, F.; Nolting, D.; Normandin, M. E.; Nuttall, L.; Ochsner, E.; O'Dell, J.; Oelker, E.; Ogin, G. H.; Oh, J. J.; Oh, S. H.; Oldenberg, R. G.; O'Reilly, B.; O'Shaughnessy, R.; Osthelder, C.; Ott, C. D.; Ottaway, D. J.; Ottens, R. S.; Overmier, H.; Owen, B. J.; Page, A.; Palladino, L.; Palomba, C.; Pan, Y.; Pankow, C.; Paoletti, F.; Paoletti, R.; Papa, M. A.; Parisi, M.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Pedraza, M.; Penn, S.; Perreca, A.; Persichetti, G.; Phelps, M.; Pichot, M.; Pickenpack, M.; Piergiovanni, F.; Pierro, V.; Pihlaja, M.; Pinard, L.; Pinto, I. M.; Pitkin, M.; Pletsch, H. J.; Plissi, M. V.; Poggiani, R.; Pöld, J.; Postiglione, F.; Poux, C.; Prato, M.; Predoi, V.; Prestegard, T.; Price, L. R.; Prijatelj, M.; Principe, M.; Privitera, S.; Prodi, G. A.; Prokhorov, L. G.; Puncken, O.; Punturo, M.; Puppo, P.; Quetschke, V.; Quitzow-James, R.; Raab, F. J.; Rabeling, D. S.; Rácz, I.; Radkins, H.; Raffai, P.; Rakhmanov, M.; Ramet, C.; Rankins, B.; Rapagnani, P.; Raymond, V.; Re, V.; Reed, C. M.; Reed, T.; Regimbau, T.; Reid, S.; Reitze, D. H.; Ricci, F.; Riesen, R.; Riles, K.; Roberts, M.; Robertson, N. A.; Robinet, F.; Robinson, C.; Robinson, E. L.; Rocchi, A.; Roddy, S.; Rodriguez, C.; Rodruck, M.; Rolland, L.; Rollins, J. G.; Romano, R.; Romie, J. H.; Rosińska, D.; Röver, C.; Rowan, S.; Rüdiger, A.; Ruggi, P.; Ryan, K.; Salemi, F.; Sammut, L.; Sandberg, V.; Sankar, S.; Sannibale, V.; Santamaría, L.; Santiago-Prieto, I.; Santostasi, G.; Saracco, E.; Sassolas, B.; Sathyaprakash, B. S.; Saulson, P. R.; Savage, R. L.; Schilling, R.; Schnabel, R.; Schofield, R. M. S.; Schulz, B.; Schutz, B. F.; Schwinberg, P.; Scott, J.; Scott, S. M.; Seifert, F.; Sellers, D.; Sentenac, D.; Sergeev, A.; Shaddock, D. A.; Shaltev, M.; Shapiro, B.; Shawhan, P.; Shoemaker, D. H.; Sidery, T. L.; Siemens, X.; Sigg, D.; Simakov, D.; Singer, A.; Singer, L.; Sintes, A. M.; Skelton, G. R.; Slagmolen, B. J. J.; Slutsky, J.; Smith, J. R.; Smith, M. R.; Smith, R. J. E.; Smith-Lefebvre, N. D.; Somiya, K.; Sorazu, B.; Speirits, F. C.; Sperandio, L.; Stefszky, M.; Steinert, E.; Steinlechner, J.; Steinlechner, S.; Steplewski, S.; Stochino, A.; Stone, R.; Strain, K. A.; Strigin, S. E.; Stroeer, A. S.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Sung, M.; Susmithan, S.; Sutton, P. J.; Swinkels, B.; Szeifert, G.; Tacca, M.; Taffarello, L.; Talukder, D.; Tanner, D. B.; Tarabrin, S. P.; Taylor, R.; ter Braack, A. P. M.; Thomas, P.; Thorne, K. A.; Thorne, K. S.; Thrane, E.; Thüring, A.; Titsler, C.; Tokmakov, K. V.; Tomlinson, C.; Toncelli, A.; Tonelli, M.; Torre, O.; Torres, C. V.; Torrie, C. I.; Tournefier, E.; Travasso, F.; Traylor, G.; Tse, M.; Ugolini, D.; Vahlbruch, H.; Vajente, G.; van den Brand, J. F. J.; Van Den Broeck, C.; van der Putten, S.; van Veggel, A. A.; Vass, S.; Vasuth, M.; Vaulin, R.; Vavoulidis, M.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, P. J.; Venkateswara, K.; Verkindt, D.; Vetrano, F.; Viceré, A.; Villar, A. E.; Vinet, J.-Y.; Vitale, S.; Vocca, H.; Vorvick, C.; Vyatchanin, S. P.; Wade, A.; Wade, L.; Wade, M.; Waldman, S. J.; Wallace, L.; Wan, Y.; Wang, M.; Wang, X.; Wanner, A.; Ward, R. L.; Was, M.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Welborn, T.; Wen, L.; Wessels, P.; West, M.; Westphal, T.; Wette, K.; Whelan, J. T.; Whitcomb, S. E.; White, D. J.; Whiting, B. F.; Wiesner, K.; Wilkinson, C.; Willems, P. 
A.; Williams, L.; Williams, R.; Willke, B.; Wimmer, M.; Winkelmann, L.; Winkler, W.; Wipf, C. C.; Wiseman, A. G.; Wittel, H.; Woan, G.; Wooley, R.; Worden, J.; Yablon, J.; Yakushin, I.; Yamamoto, H.; Yamamoto, K.; Yancey, C. C.; Yang, H.; Yeaton-Massey, D.; Yoshida, S.; Yvert, M.; Zadrożny, A.; Zanolin, M.; Zendri, J.-P.; Zhang, F.; Zhang, L.; Zhao, C.; Zotov, N.; Zucker, M. E.; Zweizig, J.

    2013-09-01

    Compact binary systems with neutron stars or black holes are among the most promising sources for ground-based gravitational-wave detectors. Gravitational radiation encodes rich information about source physics; thus parameter estimation and model selection are crucial analysis steps for any candidate detection event. Detailed models of the anticipated waveforms enable inference on several parameters, such as component masses, spins, sky location and distance, that are essential for new astrophysical studies of these sources. However, accurate measurements of these parameters and discrimination of models describing the underlying physics are complicated by artifacts in the data, uncertainties in the waveform models and in the calibration of the detectors. Here we report such measurements on a selection of simulated signals added either in hardware or software to the data collected by the two LIGO instruments and the Virgo detector during their most recent joint science run, including a “blind injection” where the signal was not initially revealed to the collaboration. We exemplify the ability to extract information about the source physics on signals that cover the neutron-star and black-hole binary parameter space over the component mass range 1M⊙-25M⊙ and the full range of spin parameters. The cases reported in this study provide a snapshot of the status of parameter estimation in preparation for the operation of advanced detectors.
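
    As a schematic of the underlying Bayesian machinery (not the LIGO-Virgo pipelines), the sketch below recovers a single stand-in parameter from noisy synthetic data with a random-walk Metropolis sampler.

```python
# Toy Bayesian parameter estimation: a one-parameter "waveform" plus a
# flat prior and Gaussian likelihood, sampled with Metropolis-Hastings.
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 500)

def waveform(m):                         # stand-in for a real waveform model
    return np.sin(2.0 * np.pi * (10.0 + 5.0 * m) * t)

data = waveform(1.4) + rng.normal(0.0, 0.5, t.size)

def log_post(m):
    if not 0.5 < m < 3.0:                # flat prior on a plausible range
        return -np.inf
    return -0.5 * np.sum((data - waveform(m)) ** 2) / 0.5**2

chain, m, lp = [], 1.0, log_post(1.0)
for _ in range(5000):
    prop = m + rng.normal(0.0, 0.02)     # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        m, lp = prop, lp_prop
    chain.append(m)
print("posterior mean after burn-in:", np.mean(chain[1000:]))
```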

  17. Simulated ground-water flow in the Hueco Bolson, an alluvial-basin aquifer system near El Paso, Texas

    USGS Publications Warehouse

    Heywood, Charles E.; Yager, Richard M.

    2003-01-01

    The neighboring cities of El Paso, Texas, and Ciudad Juarez, Chihuahua, Mexico, have historically relied on ground-water withdrawals from the Hueco Bolson, an alluvial-aquifer system, to supply water to their growing populations. By 1996, ground-water drawdown exceeded 60 meters in some areas under Ciudad Juarez and El Paso. A simulation of steady-state and transient ground-water flow in the Hueco Bolson in westernmost Texas, south-central New Mexico, and northern Chihuahua, Mexico, was developed using MODFLOW-96. The model is needed by El Paso Water Utilities to evaluate strategies for obtaining the most beneficial use of the Hueco Bolson aquifer system. The transient simulation represents a period of 100 years beginning in 1903 and ending in 2002. The period 1903 through 1968 was represented with 66 annual stress periods, and the period 1969 through 2002 was represented with 408 monthly stress periods. The ground-water flow model was calibrated using MODFLOWP and UCODE. Parameter values representing aquifer properties and boundary conditions were adjusted through nonlinear regression in a transient-state simulation with 96 annual time steps to produce a model that approximated (1) 4,352 water levels measured in 292 wells from 1912 to 1995, (2) three seepage-loss rates from a reach of the Rio Grande during periods from 1979 to 1981, (3) three seepage-loss rates from a reach of the Franklin Canal during periods from 1990 to 1992, and (4) 24 seepage rates into irrigation drains from 1961 to 1983. Once a calibrated model was obtained with MODFLOWP and UCODE, the optimal parameter set was used to create an equivalent MODFLOW-96 simulation with monthly temporal discretization to improve computations of seepage from the Rio Grande and to define the flow field for a chloride-transport simulation. Model boundary conditions were modified at appropriate times during the simulation to represent changes in well pumpage, drainage of agricultural fields, and channel modifications of the Rio Grande. The model input was generated from geographic information system databases, which facilitated rapid model construction and enabled testing of several conceptualizations of hydrogeologic facies boundaries. Specific yield of unconfined layers and hydraulic conductance of Quaternary faults in the fluvial facies were the most sensitive model parameters, suggesting that ground-water flow is impeded across the fault planes.

  18. On production and asymmetric focusing of flat electron beams using rectangular capillary discharge plasmas

    DOE PAGES

    Bagdasarov, G. A.; Bobrova, N. A.; Boldarev, A. S.; ...

    2017-12-27

    A method for the asymmetric focusing of electron bunches, based on the active plasma lensing technique, is proposed. Our method takes advantage of the strong inhomogeneous magnetic field generated inside the capillary discharge plasma to focus the ultrarelativistic electrons. The plasma and magnetic field parameters inside the capillary discharge are described theoretically and modeled with dissipative magnetohydrodynamic computer simulations, enabling analysis of capillaries of rectangular cross-section. Large aspect ratio rectangular capillaries could be used to transport electron beams with high emittance asymmetries, as well as to assist in forming spatially flat electron bunches for final focusing before the interaction point.

  19. Radar cross-section reduction based on an iterative fast Fourier transform optimized metasurface

    NASA Astrophysics Data System (ADS)

    Song, Yi-Chuan; Ding, Jun; Guo, Chen-Jiang; Ren, Yu-Hui; Zhang, Jia-Kai

    2016-07-01

    A novel polarization-insensitive metasurface with over 25 dB monostatic radar cross-section (RCS) reduction is introduced. The proposed metasurface comprises carefully arranged unit cells with spatially varied dimensions, which enables approximately uniform diffusion of incoming electromagnetic (EM) energy and reduces the threat from bistatic radar systems. An iterative fast Fourier transform (FFT) method from conventional antenna array pattern synthesis is innovatively applied to find the best arrangement of unit cell geometry parameters. Finally, a metasurface sample is fabricated and tested to validate the RCS reduction behavior predicted by the full-wave simulation software Ansys HFSS, and excellent agreement is observed.
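
    The iterative FFT idea can be sketched in a few lines: alternate between the element domain and the far-field (FFT) domain, enforcing a uniformly diffuse pattern in one and phase-only unit cells in the other. The loop below is a generic alternating-projection sketch for a 1-D cut, not the authors' exact procedure.

```python
# Generic iterative FFT synthesis sketch for a phase-only reflecting
# surface; the far field is approximated by a zero-padded FFT.
import numpy as np

N = 64                                         # unit cells along one cut
rng = np.random.default_rng(4)
phase = rng.uniform(0.0, 2.0 * np.pi, N)       # initial reflection phases

for _ in range(200):
    pattern = np.fft.fft(np.exp(1j * phase), 8 * N)   # array factor
    flat = np.exp(1j * np.angle(pattern))             # force uniform magnitude
    phase = np.angle(np.fft.ifft(flat)[:N])           # back-project, keep phases

final = np.abs(np.fft.fft(np.exp(1j * phase), 8 * N))
print("pattern peak-to-mean ratio:", round(final.max() / final.mean(), 2))
```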

  20. Unprecedented homotopy perturbation method for solving nonlinear equations in the enzymatic reaction of glucose in a spherical matrix.

    PubMed

    Saranya, K; Mohan, V; Kizek, R; Fernandez, C; Rajendran, L

    2018-02-01

    The theory of glucose-responsive composite membranes for the planar diffusion and reaction process is extended to a microsphere membrane. The theoretical model of glucose oxidation and hydrogen peroxide production in the chitosan-alginate microsphere is discussed in this manuscript for the first time. We report an analytically derived methodology utilizing the homotopy perturbation method to perform the numerical simulation. The influence of various parameters on the concentrations of gluconic acid and hydrogen peroxide, together with a sensitivity analysis, is also discussed. The theoretical results enable the performance of the enzyme kinetics to be predicted and optimized.

  1. Modelling baryonic effects on galaxy cluster mass profiles

    NASA Astrophysics Data System (ADS)

    Shirasaki, Masato; Lau, Erwin T.; Nagai, Daisuke

    2018-06-01

    Gravitational lensing is a powerful probe of the mass distribution of galaxy clusters and cosmology. However, accurate measurements of the cluster mass profiles are limited by uncertainties in cluster astrophysics. In this work, we present a physically motivated model of baryonic effects on the cluster mass profiles, which self-consistently takes into account the impact of baryons on the concentration as well as mass accretion histories of galaxy clusters. We calibrate this model using the Omega500 hydrodynamical cosmological simulations of galaxy clusters with varying baryonic physics. Our model will enable us to simultaneously constrain cluster mass, concentration, and cosmological parameters using stacked weak lensing measurements from upcoming optical cluster surveys.

  2. On production and asymmetric focusing of flat electron beams using rectangular capillary discharge plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bagdasarov, G. A.; Bobrova, N. A.; Boldarev, A. S.

    A method for the asymmetric focusing of electron bunches, based on the active plasma lensing technique, is proposed. Our method takes advantage of the strong inhomogeneous magnetic field generated inside the capillary discharge plasma to focus the ultrarelativistic electrons. The plasma and magnetic field parameters inside the capillary discharge are described theoretically and modeled with dissipative magnetohydrodynamic computer simulations, enabling analysis of capillaries of rectangular cross-section. Large aspect ratio rectangular capillaries could be used to transport electron beams with high emittance asymmetries, as well as to assist in forming spatially flat electron bunches for final focusing before the interaction point.

  3. On production and asymmetric focusing of flat electron beams using rectangular capillary discharge plasmas

    NASA Astrophysics Data System (ADS)

    Bagdasarov, G. A.; Bobrova, N. A.; Boldarev, A. S.; Olkhovskaya, O. G.; Sasorov, P. V.; Gasilov, V. A.; Barber, S. K.; Bulanov, S. S.; Gonsalves, A. J.; Schroeder, C. B.; van Tilborg, J.; Esarey, E.; Leemans, W. P.; Levato, T.; Margarone, D.; Korn, G.; Kando, M.; Bulanov, S. V.

    2017-12-01

    A method for the asymmetric focusing of electron bunches, based on the active plasma lensing technique, is proposed. This method takes advantage of the strong inhomogeneous magnetic field generated inside the capillary discharge plasma to focus the ultrarelativistic electrons. The plasma and magnetic field parameters inside the capillary discharge are described theoretically and modeled with dissipative magnetohydrodynamic computer simulations, enabling analysis of capillaries of rectangular cross-section. Large aspect ratio rectangular capillaries might be used to transport electron beams with high emittance asymmetries, as well as to assist in forming spatially flat electron bunches for final focusing before the interaction point.

  4. Artificial Immune Algorithm for Subtask Industrial Robot Scheduling in Cloud Manufacturing

    NASA Astrophysics Data System (ADS)

    Suma, T.; Murugesan, R.

    2018-04-01

    The current generation of manufacturing industry requires an intelligent scheduling model to achieve effective utilization of distributed manufacturing resources, which motivated us to work on an Artificial Immune Algorithm for subtask robot scheduling in cloud manufacturing. This scheduling model enables collaborative work between industrial robots in different manufacturing centers. This paper discusses two optimization objectives: minimizing cost and balancing the load of industrial robots through scheduling. To solve these scheduling problems, we used an algorithm based on the Artificial Immune System. The parameters are simulated with MATLAB and the results are compared with existing algorithms, showing better performance.
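
    A minimal clonal-selection loop of the kind used in Artificial Immune Systems is sketched below; the cost matrix, penalty weight, and operators are invented for illustration and are not the authors' exact algorithm.

```python
# Toy clonal selection for robot-to-subtask assignment: antibodies encode
# assignments, the cheapest are cloned and hypermutated, and an imbalance
# penalty pushes towards load-balanced schedules.
import numpy as np

rng = np.random.default_rng(10)
n_tasks, n_robots = 12, 4
cost = rng.uniform(1.0, 10.0, size=(n_tasks, n_robots))   # synthetic cost matrix

def fitness(assign):                       # total cost + load-imbalance penalty
    loads = np.bincount(assign, minlength=n_robots)
    return cost[np.arange(n_tasks), assign].sum() + 2.0 * loads.std()

pop = rng.integers(0, n_robots, size=(30, n_tasks))
for _ in range(200):
    scores = np.array([fitness(a) for a in pop])
    elite = pop[np.argsort(scores)[:10]]                  # select best antibodies
    clones = np.repeat(elite, 3, axis=0)
    mutate = rng.uniform(size=clones.shape) < 0.1         # hypermutation
    clones[mutate] = rng.integers(0, n_robots, mutate.sum())
    pop = np.vstack([elite, clones])[:30]

best = pop[np.argmin([fitness(a) for a in pop])]
print("best schedule cost:", round(fitness(best), 2))
```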

  5. Integration of quantum key distribution and private classical communication through continuous variable

    NASA Astrophysics Data System (ADS)

    Wang, Tianyi; Gong, Feng; Lu, Anjiang; Zhang, Damin; Zhang, Zhengping

    2017-12-01

    In this paper, we propose a scheme that integrates quantum key distribution and private classical communication via continuous variables. The integrated scheme employs both quadratures of a weak coherent state, with encrypted bits encoded on the signs and Gaussian random numbers encoded on the values of the quadratures. The integration enables quantum and classical data to share the same physical and logical channel. Simulation results based on practical system parameters demonstrate that both classical communication and quantum communication can be implemented over distances of tens of kilometers, thus providing a potential solution for the simultaneous transmission of quantum and classical communication.
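
    The encoding idea can be illustrated directly: the sign of each transmitted quadrature value carries an encrypted classical bit, while its Gaussian magnitude serves the continuous-variable protocol. The sketch below is a simplified, additive-noise-only channel model, not the full scheme.

```python
# Encode classical bits on quadrature signs and Gaussian random numbers
# on quadrature magnitudes, then decode the bits after channel noise.
import numpy as np

rng = np.random.default_rng(5)
bits = rng.integers(0, 2, 10)                         # encrypted classical bits
gauss = np.abs(rng.normal(0.0, 2.0, bits.size))       # Gaussian magnitudes
quadrature = np.where(bits == 1, gauss, -gauss)       # sign encodes the bit

noisy = quadrature + rng.normal(0.0, 0.3, bits.size)  # toy channel noise
decoded = (noisy > 0).astype(int)
print("bit errors:", np.count_nonzero(decoded != bits))
```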

  6. Using Enabling Technologies to Facilitate the Comparison of Satellite Observations with the Model Forecasts for Hurricane Study

    NASA Astrophysics Data System (ADS)

    Li, P.; Knosp, B.; Hristova-Veleva, S. M.; Niamsuwan, N.; Johnson, M. P.; Shen, T. P. J.; Tanelli, S.; Turk, J.; Vu, Q. A.

    2014-12-01

    Due to their complexity and volume, satellite data are underutilized in today's hurricane research and operations. To better utilize these data, we developed the JPL Tropical Cyclone Information System (TCIS) - an interactive data portal providing fusion between near-real-time satellite observations and model forecasts to facilitate model evaluation and improvement. We have collected satellite observations and model forecasts in the Atlantic Basin and the East Pacific for the hurricane seasons since 2010 and supported NASA airborne campaigns for hurricane study such as the Genesis and Rapid Intensification Processes (GRIP) campaign in 2010 and the Hurricane and Severe Storm Sentinel (HS3) from 2012 to 2014. To enable direct inter-comparisons of the satellite observations and the model forecasts, the TCIS was integrated with the NASA Earth Observing System Simulator Suite (NEOS3) to produce synthetic observations (e.g. simulated passive microwave brightness temperatures) from a number of operational hurricane forecast models (HWRF and GFS). An automated process was developed to trigger NEOS3 simulations via web services given the location and time of satellite observations, monitor the progress of the NEOS3 simulations, and display the synthetic observations and ingest them into the TCIS database once complete. In addition, three analysis tools - joint PDF analysis of the brightness temperatures, ARCHER for finding the storm center and storm organization, and the Wave Number Analysis tool for storm asymmetry and morphology analysis - were integrated into TCIS to provide statistical and structural analysis of both observed and synthetic data. Interactive tools were built in the TCIS visualization system to allow spatial and temporal selection of the datasets, invocation of the tools with user-specified parameters, and display and delivery of the results. In this presentation, we will describe the key enabling technologies behind the design of the TCIS interactive data portal and analysis tools, including the spatial database technology for the representation and query of level 2 satellite data, the automatic process flow using web services, the interactive user interface using the Google Earth API, and a common and expandable Python wrapper to invoke the analysis tools.

  7. Hybrid neuro-heuristic methodology for simulation and control of dynamic systems over time interval.

    PubMed

    Woźniak, Marcin; Połap, Dawid

    2017-09-01

    Simulation and positioning are important aspects of computer-aided engineering. Both can be handled with traditional methods or with intelligent techniques, which differ in how they process information. With traditional methods, simulating an object in a particular state of action requires running the entire process to read the parameter values, which is inconvenient for objects whose simulation takes a long time, i.e. when the mathematical calculations are complicated. An intelligent solution, by contrast, supports a dedicated mode of simulation in which the object is simulated only in the situations necessary for the development process. We present research results on an intelligent simulation and control model for an electric drive engine vehicle. For a dedicated simulation method based on intelligent computation, in which an evolutionary strategy simulates the states of the dynamic model, an intelligent system based on a dedicated neural network is introduced to control co-working modules during motion over a time interval. The experimental results show the implemented solution in a situation where a vehicle transports goods over an area with many obstacles, which provokes sudden changes in stability that may lead to destruction of the load. The applied neural network controller prevents the load from destruction by adjusting characteristics such as pressure, acceleration, and stiffness voltage to absorb the adverse changes of the ground. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Active Learning and Engagement with the Wireless Indoor Location Device (WILD) Learning System

    NASA Astrophysics Data System (ADS)

    Moldwin, M.; Samson, P. J.; Ojeda, L.; Miller, T.; Yu, J.

    2016-12-01

    The Wireless Indoor Location Device (WILD) Learning System being developed at the University of Michigan and the education technology company A2 Motus LLC provides a unique platform for social learning by allowing students to become active participants in live simulations of complex systems, like hurricane formation. The WILD Learning System enables teachers to engage students in kinesthetic activities that explore complex models from a wide variety of STEAM (Science, Technology, Engineering, Art and Math) disciplines. The system provides students' location, orientation and motion within the classroom and assigns each student different parameters depending on the activity. For example, students learning about hurricanes could be assigned atmospheric pressure levels and asked to arrange themselves around the room to simulate a hurricane. The WILD Learning System software then takes the students' pressure readings and locations and projects their locations overlaid onto a real-time simulated pressure weather map, enabling observation of how their arrangement influences the pressure structure. The teacher could then have the students orient themselves in the direction they think the resulting wind field will take, based on the pressure contours, as the system can show an arrow originating from each student's position in the direction that they are facing. The system could also incorporate a student-response-type system for the instructor to directly question students about other concepts and record their responses to both the kinesthetic activity and other formative assessment questions. The WILD Learning System consists of a sensor package for each student in the class, beacons to enable precise localization of the students, software to calculate student location information, and educational software for a variety of activities. In addition, a software development kit (SDK) is under development that would allow others to create additional learning activities using the WILD Learning System. (WILD Learning System development has been partially supported by NASA's CYGNSS Mission EPO, the NSF and the University of Michigan.)

  9. Bayesian population receptive field modelling.

    PubMed

    Zeidman, Peter; Silson, Edward Harry; Schwarzkopf, Dietrich Samuel; Baker, Chris Ian; Penny, Will

    2017-09-08

    We introduce a probabilistic (Bayesian) framework and associated software toolbox for mapping population receptive fields (pRFs) based on fMRI data. This generic approach is intended to work with stimuli of any dimension and is demonstrated and validated in the context of 2D retinotopic mapping. The framework enables the experimenter to specify generative (encoding) models of fMRI timeseries, in which experimental stimuli enter a pRF model of neural activity, which in turn drives a nonlinear model of neurovascular coupling and the Blood Oxygenation Level Dependent (BOLD) response. The neuronal and haemodynamic parameters are estimated together on a voxel-by-voxel or region-of-interest basis using a Bayesian estimation algorithm (variational Laplace). This offers several novel contributions to receptive field modelling. The variance/covariance of the parameters is estimated, enabling receptive fields to be plotted while properly representing uncertainty about pRF size and location. Variability in the haemodynamic response across the brain is accounted for. Furthermore, the framework introduces formal hypothesis testing to pRF analysis, enabling competing models to be evaluated based on their log model evidence (approximated by the variational free energy), which represents the optimal tradeoff between accuracy and complexity. Using simulations and empirical data, we found that parameters typically used to represent pRF size and neuronal scaling are strongly correlated, which is taken into account by the Bayesian methods we describe when making inferences. We used the framework to compare the evidence for six variants of pRF model using 7 T functional MRI data and we found a circular Difference of Gaussians (DoG) model to be the best explanation for our data overall. We hope this framework will prove useful for mapping stimulus spaces with any number of dimensions onto the anatomy of the brain. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
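
    For orientation, a bare-bones pRF forward model is sketched below (not the toolbox's API): a 2D Gaussian receptive field converts a binary stimulus movie into a predicted neural drive, which the full framework would then pass through a haemodynamic model.

```python
# Minimal 2D Gaussian pRF forward model: each stimulus frame is weighted
# by the receptive-field profile to give a per-frame neural drive.
import numpy as np

def gaussian_prf(x0, y0, sigma, grid):
    gx, gy = grid
    return np.exp(-((gx - x0) ** 2 + (gy - y0) ** 2) / (2.0 * sigma**2))

side = 40
grid = np.meshgrid(np.linspace(-10, 10, side), np.linspace(-10, 10, side))
rng = np.random.default_rng(6)
stimulus = rng.integers(0, 2, (100, side, side))      # toy binary frames

prf = gaussian_prf(2.0, -1.0, 1.5, grid)              # assumed pRF parameters
neural = np.tensordot(stimulus, prf, axes=([1, 2], [0, 1]))
print(neural.shape)   # (100,) predicted drive, prior to haemodynamic convolution
```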

  10. A New Approach to Modeling Jupiter's Magnetosphere

    NASA Astrophysics Data System (ADS)

    Fukazawa, K.; Katoh, Y.; Walker, R. J.; Kimura, T.; Tsuchiya, F.; Murakami, G.; Kita, H.; Tao, C.; Murata, K. T.

    2017-12-01

    The scales in planetary magnetospheres range from tens of planetary radii to kilometers. For a number of years we have studied the magnetospheres of Jupiter and Saturn by using 3-dimensional magnetohydrodynamic (MHD) simulations. However, we have not been able to reach even the limits of the MHD approximation because of the large amount of computer resources required. Recently, thanks to progress in supercomputer systems, we have obtained the capability to simulate Jupiter's magnetosphere with 1000 times the number of grid points used in our previous simulations. This has allowed us to combine the high-resolution global simulation with a micro-scale simulation of the Jovian magnetosphere. In particular, we can combine a hybrid (kinetic ions and fluid electrons) simulation with the MHD simulation. In addition, the new capability enables us to run multi-parameter survey simulations of the Jupiter-solar wind system. In this study we performed a high-resolution simulation of the Jovian magnetosphere to connect with the hybrid simulation, and lower-resolution simulations under various solar wind conditions to compare with Hisaki and Juno observations. In the high-resolution simulation we used a regular Cartesian grid with 0.15 RJ grid spacing and placed the inner boundary at 7 RJ. From these simulation settings, we provide the magnetic field out to around 20 RJ from Jupiter as a background field for the hybrid simulation. For the first time we have been able to resolve Kelvin-Helmholtz waves on the magnetopause. We have investigated solar wind dynamic pressures between 0.01 and 0.09 nPa for a number of IMF values. The raw data from these simulations are available for registered users to download. We have compared the results of these simulations with Hisaki auroral observations.

  11. A system for environmental model coupling and code reuse: The Great Rivers Project

    NASA Astrophysics Data System (ADS)

    Eckman, B.; Rice, J.; Treinish, L.; Barford, C.

    2008-12-01

    As part of the Great Rivers Project, IBM is collaborating with The Nature Conservancy and the Center for Sustainability and the Global Environment (SAGE) at the University of Wisconsin, Madison to build a Modeling Framework and Decision Support System (DSS) designed to help policy makers and a variety of stakeholders (farmers, fish & wildlife managers, hydropower operators, et al.) to assess, come to consensus, and act on land use decisions representing effective compromises between human use and ecosystem preservation/restoration. Initially focused on Brazil's Paraguay-Parana, China's Yangtze, and the Mississippi Basin in the US, the DSS integrates data and models from a wide variety of environmental sectors, including water balance, water quality, carbon balance, crop production, hydropower, and biodiversity. In this presentation we focus on the modeling framework aspect of this project. In our approach to these and other environmental modeling projects, we see a flexible, extensible modeling framework infrastructure for defining and running multi-step analytic simulations as critical. In this framework, we divide monolithic models into atomic components with clearly defined semantics encoded via rich metadata representation. Once models and their semantics and composition rules have been registered with the system by their authors or other experts, non-expert users may construct simulations as workflows of these atomic model components. A model composition engine enforces rules/constraints for composing model components into simulations, to avoid the creation of Frankenmodels, models that execute but produce scientifically invalid results. A common software environment and common representations of data and models are required, as well as an adapter strategy for code written in e.g., Fortran or python, that still enables efficient simulation runs, including parallelization. Since each new simulation, as a new composition of model components, requires calibration of parameters (fudge factors) to produce scientifically valid results, we are also developing an autocalibration engine. Finally, visualization is a key element of this modeling framework strategy, both to convey complex scientific data effectively, and also to enable non-expert users to make full use of the relevant features of the framework. We are developing a visualization environment with a strong data model, to enable visualizations, model results, and data all to be handled similarly.

  12. Circuit-based versus full-wave modelling of active microwave circuits

    NASA Astrophysics Data System (ADS)

    Bukvić, Branko; Ilić, Andjelija Ž.; Ilić, Milan M.

    2018-03-01

    Modern full-wave computational tools enable rigorous simulations of the linear parts of complex microwave circuits within minutes, taking into account all physical electromagnetic (EM) phenomena. Non-linear components and other discrete elements of the hybrid microwave circuit are then easily added within the circuit simulator. This combined full-wave and circuit-based analysis is a must in the final stages of circuit design, although initial designs and optimisations are still faster and more comfortably done completely in the circuit-based environment, which offers real-time solutions at the expense of accuracy. However, due to insufficient information and a general lack of specific case studies, practitioners still struggle when choosing an appropriate analysis method, or a component model, because different choices lead to different solutions, often with uncertain accuracy and unexplained discrepancies arising between simulations and measurements. We here design a reconfigurable power amplifier, as a case study, using both a circuit-based solver and a full-wave EM solver. We compare numerical simulations with measurements on the manufactured prototypes, discussing the obtained differences, pointing out the importance of de-embedding measured parameters and appropriate modelling of discrete components, and giving specific recipes for good modelling practice.

  13. RFA Guardian: Comprehensive Simulation of Radiofrequency Ablation Treatment of Liver Tumors.

    PubMed

    Voglreiter, Philip; Mariappan, Panchatcharam; Pollari, Mika; Flanagan, Ronan; Blanco Sequeiros, Roberto; Portugaller, Rupert Horst; Fütterer, Jurgen; Schmalstieg, Dieter; Kolesnik, Marina; Moche, Michael

    2018-01-15

    The RFA Guardian is a comprehensive application for high-performance patient-specific simulation of radiofrequency ablation of liver tumors. We address a wide range of usage scenarios. These include pre-interventional planning, sampling of the parameter space for uncertainty estimation, treatment evaluation and, in the worst case, failure analysis. The RFA Guardian is the first of its kind that exhibits sufficient performance for simulating treatment outcomes during the intervention. We achieve this by combining a large number of high-performance image processing, biomechanical simulation and visualization techniques into a generalized technical workflow. Further, we wrap the feature set into a single, integrated application, which exploits all available resources of standard consumer hardware, including massively parallel computing on graphics processing units. This allows us to predict or reproduce treatment outcomes on a single personal computer with high computational performance and high accuracy. The resulting low demand for infrastructure enables easy and cost-efficient integration into the clinical routine. We present a number of evaluation cases from the clinical practice where users performed the whole technical workflow from patient-specific modeling to final validation and highlight the opportunities arising from our fast, accurate prediction techniques.

  14. Unsteady adjoint for large eddy simulation of a coupled turbine stator-rotor system

    NASA Astrophysics Data System (ADS)

    Talnikar, Chaitanya; Wang, Qiqi; Laskowski, Gregory

    2016-11-01

    Unsteady fluid flow simulations like large eddy simulation are crucial in capturing key physics in turbomachinery applications like separation and wake formation in flow over a turbine vane with a downstream blade. To determine how sensitive the design objectives of the coupled system are to control parameters, an unsteady adjoint is needed. It enables the computation of the gradient of an objective with respect to a large number of inputs in a computationally efficient manner. In this paper we present unsteady adjoint solutions for a coupled turbine stator-rotor system. As the transonic fluid flows over the stator vane, the boundary layer transitions to turbulence. The turbulent wake then impinges on the rotor blades, causing early separation. This coupled system exhibits chaotic dynamics which causes conventional adjoint solutions to diverge exponentially, resulting in the corruption of the sensitivities obtained from the adjoint solutions for long-time simulations. In this presentation, adjoint solutions for aerothermal objectives are obtained through a localized adjoint viscosity injection method which aims to stabilize the adjoint solution and maintain accurate sensitivities. Preliminary results obtained from the supercomputer Mira will be shown in the presentation.

  15. Application of London-type dispersion corrections to the solid-state density functional theory simulation of the terahertz spectra of crystalline pharmaceuticals.

    PubMed

    King, Matthew D; Buchanan, William D; Korter, Timothy M

    2011-03-14

    The effects of applying an empirical dispersion correction to solid-state density functional theory methods were evaluated in the simulation of the crystal structure and low-frequency (10 to 90 cm⁻¹) terahertz spectrum of the non-steroidal anti-inflammatory drug, naproxen. The naproxen molecular crystal is bound largely by weak London force interactions, as well as by more prominent interactions such as hydrogen bonding, and thus serves as a good model for the assessment of the pair-wise dispersion correction term in systems influenced by intermolecular interactions of various strengths. Modifications to the dispersion parameters were tested in both fully optimized unit cell dimensions and those determined by X-ray crystallography, with subsequent simulations of the THz spectrum being performed. Use of the unmodified PBE density functional leads to an unrealistic expansion of the unit cell volume and the poor representation of the THz spectrum. Inclusion of a modified dispersion correction enabled a high-quality simulation of the THz spectrum and crystal structure of naproxen to be achieved without the need for artificially constraining the unit cell dimensions.
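
    The pair-wise correction has the familiar London form E_disp = -s6 Σ f_damp(R_ij) C6_ij / R_ij^6; the sketch below evaluates a Grimme-D2-style sum with placeholder coefficients, not the parameters used in the paper.

```python
# Pairwise London-type dispersion correction with a Fermi-type damping
# function; coefficients are order-of-magnitude placeholders only.
import numpy as np
from itertools import combinations

s6, d = 0.75, 20.0                       # global scaling, damping steepness
c6 = {"C": 1.75, "H": 0.14, "O": 0.70}   # illustrative C6 coefficients
r_vdw = {"C": 0.1452, "H": 0.1001, "O": 0.1342}  # van der Waals radii, nm

atoms = [("C", np.array([0.0, 0.0, 0.0])),
         ("O", np.array([0.0, 0.0, 0.121])),
         ("H", np.array([0.1, 0.0, -0.05]))]

e_disp = 0.0
for (el_i, r_i), (el_j, r_j) in combinations(atoms, 2):
    rij = np.linalg.norm(r_i - r_j)
    c6_ij = np.sqrt(c6[el_i] * c6[el_j])             # geometric-mean combination
    r0 = r_vdw[el_i] + r_vdw[el_j]
    f_damp = 1.0 / (1.0 + np.exp(-d * (rij / r0 - 1.0)))
    e_disp -= s6 * f_damp * c6_ij / rij**6

print("dispersion energy (arbitrary units):", e_disp)
```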

  16. Motions of Celestial Bodies; Computer simulations

    NASA Astrophysics Data System (ADS)

    Butikov, Eugene

    2014-10-01

    This book is written for a wide range of graduate and undergraduate students studying various courses in physics and astronomy. It is accompanied by the award winning educational software package 'Planets and Satellites' developed by the author. This text, together with the interactive software, is intended to help students learn and understand the fundamental concepts and the laws of physics as they apply to the fascinating world of the motions of natural and artificial celestial bodies. The primary aim of the book is the understanding of the foundations of classical and modern physics, while their application to celestial mechanics is used to illustrate these concepts. The simulation programs create vivid and lasting impressions of the investigated phenomena, and provide students and their instructors with a powerful tool which enables them to explore basic concepts that are difficult to study and teach in an abstract conventional manner. Students can work with the text and software at a pace they can enjoy, varying parameters of the simulated systems. Each section of the textbook is supplied with questions, exercises, and problems. Using some of the suggested simulation programs, students have an opportunity to perform interesting mini-research projects in physics and astronomy.

  17. Application of the MCNPX-McStas interface for shielding calculations and guide design at ESS

    NASA Astrophysics Data System (ADS)

    Klinkby, E. B.; Knudsen, E. B.; Willendrup, P. K.; Lauritzen, B.; Nonbøl, E.; Bentley, P.; Filges, U.

    2014-07-01

    Recently, an interface between the Monte Carlo code MCNPX and the neutron ray-tracing code McStas was developed [1, 2]. Based on the expected neutronic performance and guide geometries relevant for the ESS, the combined MCNPX-McStas code is used to calculate dose rates along neutron beam guides. The generation and moderation of neutrons is simulated using a full-scale MCNPX model of the ESS target monolith. Upon entering the neutron beam extraction region, the individual neutron states are handed to McStas via the MCNPX-McStas interface. McStas transports the neutrons through the beam guide, and by using newly developed event-logging capability, the neutron state parameters corresponding to un-reflected neutrons are recorded at each scattering. This information is handed back to MCNPX, where it serves as neutron source input for a second MCNPX simulation. This simulation enables calculation of dose rates in the vicinity of the guide. In addition, the logging mechanism is employed to record the scatterings along the guides, which is exploited to simulate the supermirror quality requirements (i.e. m-values) needed at different positions along the beam guide to transport neutrons in the same guide/source setup.

  18. Robust estimation for ordinary differential equation models.

    PubMed

    Cao, J; Wang, L; Xu, J

    2011-12-01

    Applied scientists often like to use ordinary differential equations (ODEs) to model complex dynamic processes that arise in biology, engineering, medicine, and many other areas. It is interesting but challenging to estimate ODE parameters from noisy data, especially when the data have some outliers. We propose a robust method to address this problem. The dynamic process is represented with a nonparametric function, which is a linear combination of basis functions. The nonparametric function is estimated by a robust penalized smoothing method. The penalty term is defined with the parametric ODE model, which controls the roughness of the nonparametric function and maintains the fidelity of the nonparametric function to the ODE model. The basis coefficients and ODE parameters are estimated in two nested levels of optimization. The coefficient estimates are treated as an implicit function of ODE parameters, which enables one to derive the analytic gradients for optimization using the implicit function theorem. Simulation studies show that the robust method gives satisfactory estimates for the ODE parameters from noisy data with outliers. The robust method is demonstrated by estimating a predator-prey ODE model from real ecological data. © 2011, The International Biometric Society.
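
    A compressed sketch of the idea, under simplifying assumptions (a polynomial basis instead of splines, and a single joint optimization rather than the paper's nested levels): a Huber loss makes the fit robust to outliers while a penalty ties the smooth's derivative to the ODE model.

```python
# Robust penalized smoothing for the toy ODE x' = -theta * x: fit basis
# coefficients and theta together, downweighting outliers via Huber loss.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(11)
t = np.linspace(0.0, 4.0, 80)
y = np.exp(-0.8 * t) + rng.normal(0, 0.03, t.size)
y[::17] += 0.5                                   # inject outliers

B = np.vander(t, 8, increasing=True)             # simple polynomial basis
dB = np.hstack([np.zeros((t.size, 1)),
                B[:, :-1] * np.arange(1, 8)])    # derivative of the basis

def huber(r, d=0.05):
    return np.where(np.abs(r) <= d, 0.5 * r**2, d * (np.abs(r) - 0.5 * d))

def cost(z, lam=50.0):
    theta, c = z[0], z[1:]
    fit = huber(B @ c - y).sum()                 # robust data fidelity
    ode = np.sum((dB @ c + theta * (B @ c)) ** 2)  # ODE-defined penalty
    return fit + lam * ode

z0 = np.concatenate([[0.5], np.linalg.lstsq(B, y, rcond=None)[0]])
res = minimize(cost, z0, method="Nelder-Mead", options={"maxiter": 20000})
print("estimated theta:", round(res.x[0], 3))    # true value is 0.8
```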

  19. Measurements of gas parameters in plasma-assisted supersonic combustion processes using diode laser spectroscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bolshov, Mikhail A; Kuritsyn, Yu A; Liger, V V

    2009-09-30

    We report a procedure for temperature and water vapour concentration measurements in an unsteady-state combustion zone using diode laser absorption spectroscopy. The procedure involves measurements of the absorption spectrum of water molecules around 1.39 μm. It has been used to determine hydrogen combustion parameters in M = 2 gas flows in the test section of a supersonic wind tunnel. The relatively high intensities of the absorption lines used have enabled direct absorption measurements. We describe a differential technique for measurements of transient absorption spectra, the procedure we used for primary data processing and approaches for determining the gas temperature and H2O concentration in the probed zone. The measured absorption spectra are fitted with spectra simulated using parameters from spectroscopic databases. The combustion-time-averaged (~50 ms) gas temperature and water vapour partial pressure in the hot wake region are determined to be 1050 K and 21 Torr, respectively. The large signal-to-noise ratio in our measurements allowed us to assess the temporal behaviour of these parameters. The accuracy in our temperature measurements in the probed zone is ~40 K.

  20. Halogenation of Hydraulic Fracturing Additives in the Shale Well Parameter Space

    NASA Astrophysics Data System (ADS)

    Sumner, A. J.; Plata, D.

    2017-12-01

    Horizontal drilling and hydraulic fracturing (HDHF) involves the deep-well injection of a 'fracking fluid' composed of diverse and numerous chemical additives designed to facilitate the release and collection of natural gas from shale plays. The potential impacts of HDHF operations on water resources and ecosystems are numerous, and analyses of flowback samples have revealed organic compounds from both geogenic and anthropogenic sources. Furthermore, halogenated chemicals were also detected; these compounds are rarely disclosed, suggesting the in situ halogenation of reactive additives. To test this transformation hypothesis, we designed and operated a novel high-pressure and high-temperature reactor system to simulate the shale well parameter space and investigate the chemical reactivity of twelve commonly disclosed and functionally diverse HDHF additives. Early results revealed an unanticipated halogenation pathway of the α,β-unsaturated aldehyde cinnamaldehyde in the presence of oxidant and concentrated brine. Ongoing experiments over a range of parameters informed a proposed mechanism, demonstrating the role of various shale-well-specific parameters in enabling the demonstrated halogenation pathway. Ultimately, these results will inform a host of potentially unintended interactions of HDHF additives under the extreme conditions down-bore of a shale well during HDHF activities.

  1. LASER APPLICATIONS AND OTHER TOPICS IN QUANTUM ELECTRONICS: Measurements of gas parameters in plasma-assisted supersonic combustion processes using diode laser spectroscopy

    NASA Astrophysics Data System (ADS)

    Bolshov, Mikhail A.; Kuritsyn, Yu A.; Liger, V. V.; Mironenko, V. R.; Leonov, S. B.; Yarantsev, D. A.

    2009-09-01

    We report a procedure for temperature and water vapour concentration measurements in an unsteady-state combustion zone using diode laser absorption spectroscopy. The procedure involves measurements of the absorption spectrum of water molecules around 1.39 μm. It has been used to determine hydrogen combustion parameters in M = 2 gas flows in the test section of a supersonic wind tunnel. The relatively high intensities of the absorption lines used have enabled direct absorption measurements. We describe a differential technique for measurements of transient absorption spectra, the procedure we used for primary data processing and approaches for determining the gas temperature and H2O concentration in the probed zone. The measured absorption spectra are fitted with spectra simulated using parameters from spectroscopic databases. The combustion-time-averaged (~50 ms) gas temperature and water vapour partial pressure in the hot wake region are determined to be 1050 K and 21 Torr, respectively. The large signal-to-noise ratio in our measurements allowed us to assess the temporal behaviour of these parameters. The accuracy in our temperature measurements in the probed zone is ~40 K.
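
    One core ingredient of such measurements, inferring temperature from the ratio of two integrated line absorbances, can be sketched compactly; the line strengths and lower-state energies below are placeholders rather than database values.

```python
# Two-line absorption thermometry sketch: the absorbance ratio of two
# water lines depends on T through their lower-state energies, so a
# measured ratio can be inverted for temperature.
import numpy as np
from scipy.optimize import brentq

HC_K = 1.4388  # second radiation constant hc/k, cm*K

def intensity_ratio(T, S0_1, S0_2, E1, E2, T0=296.0):
    return (S0_1 / S0_2) * np.exp(-HC_K * (E1 - E2) * (1.0 / T - 1.0 / T0))

E1, E2 = 1045.1, 142.3        # lower-state energies, cm^-1 (placeholders)
S0_1, S0_2 = 1.0, 3.0         # reference line strengths at T0 (placeholders)

measured = intensity_ratio(1050.0, S0_1, S0_2, E1, E2)   # pretend measurement
T_fit = brentq(lambda T: intensity_ratio(T, S0_1, S0_2, E1, E2) - measured,
               300.0, 2500.0)
print("inferred temperature:", round(T_fit, 1), "K")
```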

  2. Simplified Predictive Models for CO2 Sequestration Performance Assessment: Research Topical Report on Task #4 - Reduced-Order Method (ROM) Based Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mishra, Srikanta; Jin, Larry; He, Jincong

    2015-06-30

    Reduced-order models provide a means for greatly accelerating the detailed simulations that will be required to manage CO2 storage operations. In this work, we investigate the use of one such method, POD-TPWL, which has previously been shown to be effective in oil reservoir simulation problems. This method combines trajectory piecewise linearization (TPWL), in which the solution to a new (test) problem is represented through a linearization around the solution to a previously-simulated (training) problem, with proper orthogonal decomposition (POD), which enables solution states to be expressed in terms of a relatively small number of parameters. We describe the application of POD-TPWL for CO2-water systems simulated using a compositional procedure. Stanford's Automatic Differentiation-based General Purpose Research Simulator (AD-GPRS) performs the full-order training simulations and provides the output (derivative matrices and system states) required by the POD-TPWL method. A new POD-TPWL capability introduced in this work is the use of horizontal injection wells that operate under rate (rather than bottom-hole pressure) control. Simulation results are presented for CO2 injection into a synthetic aquifer and into a simplified model of the Mount Simon formation. Test cases involve the use of time-varying well controls that differ from those used in training runs. Results of reasonable accuracy are consistently achieved for relevant well quantities. Runtime speedups of around a factor of 370 relative to full-order AD-GPRS simulations are achieved, though the preprocessing needed for POD-TPWL model construction corresponds to the computational requirements for about 2.3 full-order simulation runs. A preliminary treatment for POD-TPWL modeling in which test cases differ from training runs in terms of geological parameters (rather than well controls) is also presented. Results in this case involve only small differences between training and test runs, though they do demonstrate that the approach is able to capture basic solution trends. The impact of some of the detailed numerical treatments within the POD-TPWL formulation is considered in an Appendix.
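
    The POD half of the method is easy to sketch: snapshots of full-order states are collected, an SVD supplies a small orthogonal basis, and states are then expressed by a handful of coefficients. The data below are synthetic and low-rank by construction.

```python
# Proper orthogonal decomposition via SVD of a snapshot matrix: retain
# the modes capturing almost all the snapshot "energy", then project a
# full-order state onto the reduced basis and reconstruct it.
import numpy as np

rng = np.random.default_rng(7)
n_cells, n_snapshots = 5000, 60
snapshots = rng.normal(size=(n_cells, 3)) @ rng.normal(size=(3, n_snapshots))
snapshots += 0.01 * rng.normal(size=snapshots.shape)        # near rank-3 data

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
k = int(np.searchsorted(energy, 0.9999)) + 1                # retained modes
basis = U[:, :k]

state = snapshots[:, 0]
reduced = basis.T @ state                                   # k coefficients
err = np.linalg.norm(basis @ reduced - state) / np.linalg.norm(state)
print("modes kept:", k, "relative reconstruction error:", err)
```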

  3. Interactive Exploration and Analysis of Large-Scale Simulations Using Topology-Based Data Segmentation.

    PubMed

    Bremer, Peer-Timo; Weber, Gunther; Tierny, Julien; Pascucci, Valerio; Day, Marcus S; Bell, John B

    2011-09-01

    Large-scale simulations are increasingly being used to study complex scientific and engineering phenomena. As a result, advanced visualization and data analysis are also becoming an integral part of the scientific process. Often, a key step in extracting insight from these large simulations involves the definition, extraction, and evaluation of features in the space and time coordinates of the solution. However, in many applications, these features involve a range of parameters and decisions that will affect the quality and direction of the analysis. Examples include particular level sets of a specific scalar field, or local inequalities between derived quantities. A critical step in the analysis is to understand how these arbitrary parameters/decisions impact the statistical properties of the features, since such a characterization will help to evaluate the conclusions of the analysis as a whole. We present a new topological framework that in a single pass extracts and encodes entire families of possible feature definitions as well as their statistical properties. For each time step we construct a hierarchical merge tree, a highly compact yet flexible feature representation. While this data structure is more than two orders of magnitude smaller than the raw simulation data, it allows us to extract a set of features for any given parameter selection in a postprocessing step. Furthermore, we augment the trees with additional attributes, making it possible to gather a large number of useful global, local, and conditional statistics that would otherwise be extremely difficult to compile. We also use this representation to create tracking graphs that describe the temporal evolution of the features over time. Our system provides a linked-view interface to explore the time-evolution of the graph interactively alongside the segmentation, thus making it possible to perform extensive data analysis in a very efficient manner. We demonstrate our framework by extracting and analyzing burning cells from a large-scale turbulent combustion simulation. In particular, we show how the statistical analysis enabled by our techniques provides new insight into the combustion process.

  4. Distributed Evaluation of Local Sensitivity Analysis (DELSA), with application to hydrologic models

    USGS Publications Warehouse

    Rakovec, O.; Hill, Mary C.; Clark, M.P.; Weerts, A. H.; Teuling, A. J.; Uijlenhoet, R.

    2014-01-01

    This paper presents a hybrid local-global sensitivity analysis method termed the Distributed Evaluation of Local Sensitivity Analysis (DELSA), which is used here to identify important and unimportant parameters and evaluate how model parameter importance changes as parameter values change. DELSA uses derivative-based “local” methods to obtain the distribution of parameter sensitivity across the parameter space, which promotes consideration of sensitivity analysis results in the context of simulated dynamics. This work presents DELSA, discusses how it relates to existing methods, and uses two hydrologic test cases to compare its performance with the popular global, variance-based Sobol' method. The first test case is a simple nonlinear reservoir model with two parameters. The second test case involves five alternative “bucket-style” hydrologic models with up to 14 parameters applied to a medium-sized catchment (200 km²) in the Belgian Ardennes. Results show that in both examples, Sobol' and DELSA identify similar important and unimportant parameters, with DELSA enabling more detailed insight at much lower computational cost. For example, in the real-world problem the time delay in runoff is the most important parameter in all models, but DELSA shows that for about 20% of parameter sets it is not important at all and alternative mechanisms and parameters dominate. Moreover, the time delay was identified as important in regions producing poor model fits, whereas other parameters were identified as more important in regions of the parameter space producing better model fits. The ability to understand how parameter importance varies through parameter space is critical to inform decisions about, for example, additional data collection and model development. The ability to perform such analyses with modest computational requirements provides exciting opportunities to evaluate complicated models as well as many alternative models.
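
    A minimal sketch of the DELSA idea, assuming a toy two-parameter model and uniform priors (my illustration, not the authors' code): finite-difference local sensitivities are evaluated at many points in parameter space and converted to per-sample first-order variance fractions, whose distribution is then examined.

```python
import numpy as np

def model(theta):
    # Hypothetical nonlinear reservoir-like response of two parameters.
    k, s = theta
    return s * (1.0 - np.exp(-k))

rng = np.random.default_rng(1)
n_samples, h = 1000, 1e-6
bounds = np.array([[0.1, 2.0], [0.5, 5.0]])           # assumed prior ranges
var_prior = (bounds[:, 1] - bounds[:, 0])**2 / 12.0   # uniform-prior variances

samples = rng.uniform(bounds[:, 0], bounds[:, 1], size=(n_samples, 2))
delsa = np.empty_like(samples)
for i, theta in enumerate(samples):
    grad = np.array([(model(theta + h * e) - model(theta - h * e)) / (2 * h)
                     for e in np.eye(2)])
    contrib = grad**2 * var_prior          # first-order variance contributions
    delsa[i] = contrib / contrib.sum()     # local importance fractions

# The *distribution* of delsa[:, j] across samples (not one number) shows
# where in parameter space parameter j matters.
print(np.percentile(delsa, [10, 50, 90], axis=0))
```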

  5. Flow cytometry in the post fluorescence era.

    PubMed

    Nolan, Garry P

    2011-12-01

    While flow cytometry once enabled researchers to examine 10-15 cell surface parameters, new mass flow cytometry technology enables interrogation of up to 45 parameters on a single cell. This new technology has increased understanding of cell expression and how cells differentiate during hematopoiesis. Using this information, knowledge of leukemia cell biology has also increased. Other new technologies, such as SPADE analysis and single cell network profiling (SCNP), are enabling researchers to put different cancers into more biologically similar categories and have the potential to enable more personalized medicine. Copyright © 2011. Published by Elsevier Ltd.

  6. An approach for accurate simulation of liquid mixing in a T-shaped micromixer.

    PubMed

    Matsunaga, Takuya; Lee, Ho-Joon; Nishino, Koichi

    2013-04-21

    In this paper, we propose a new computational method for efficient evaluation of the fluid mixing behaviour in a T-shaped micromixer with a rectangular cross section at high Schmidt number under steady state conditions. Our approach enables a low-cost high-quality simulation based on tracking of fluid particles for convective fluid mixing and posterior solving of a model of the species equation for molecular diffusion. The examined parameter range is Re = 1.33 × 10⁻² to 240 at Sc = 3600. The proposed method is shown to simulate well the mixing quality even in the engulfment regime, where the ordinary grid-based simulation is not able to obtain accurate solutions with affordable mesh sizes due to the numerical diffusion at high Sc. The obtained results agree well with a backward random-walk Monte Carlo simulation, by which the accuracy of the proposed method is verified. For further investigation of the characteristics of the proposed method, the Sc dependency is examined in a wide range of Sc from 10 to 3600 at Re = 200. The study reveals that the model discrepancy error emerges more significantly in the concentration distribution at lower Sc, while the resulting mixing quality is accurate over the entire range.
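
    The reference solution mentioned above is a random-walk Monte Carlo method; a toy forward version for cross-sectional mixing in a plane channel is sketched below (the geometry, Peclet number, and mixing metric are illustrative stand-ins, not the authors' setup).

```python
import numpy as np

rng = np.random.default_rng(1)
n, steps, dt, Pe = 20000, 400, 1e-3, 100.0
y = rng.uniform(0.0, 1.0, n)              # cross-stream particle positions
c = (y < 0.5).astype(float)               # tag: which inlet each particle came from

for _ in range(steps):
    y += np.sqrt(2.0 * dt / Pe) * rng.standard_normal(n)   # molecular diffusion
    y = np.abs(y)                          # reflect at wall y = 0
    y = 1.0 - np.abs(1.0 - y)              # reflect at wall y = 1

# Mixing quality from the binned mean concentration profile (1 = fully mixed).
edges = np.linspace(0.0, 1.0, 21)
idx = np.clip(np.digitize(y, edges) - 1, 0, 19)
profile = np.array([c[idx == b].mean() for b in range(20)])
quality = 1.0 - np.sqrt(np.mean((profile - 0.5)**2) / 0.25)
print(f"mixing quality ~ {quality:.3f}")
```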

  7. Exploring the Dynamics of Cell Processes through Simulations of Fluorescence Microscopy Experiments

    PubMed Central

    Angiolini, Juan; Plachta, Nicolas; Mocskos, Esteban; Levi, Valeria

    2015-01-01

    Fluorescence correlation spectroscopy (FCS) methods are powerful tools for unveiling the dynamical organization of cells. For simple cases, such as molecules passively moving in a homogeneous medium, FCS analysis yields analytical functions that can be fitted to the experimental data to recover the phenomenological rate parameters. Unfortunately, many dynamical processes in cells do not follow these simple models, and in many instances it is not possible to obtain an analytical function through a theoretical analysis of a more complex model. In such cases, experimental analysis can be combined with Monte Carlo simulations to aid in interpretation of the data. In response to this need, we developed a method called FERNET (Fluorescence Emission Recipes and Numerical routines Toolkit) based on Monte Carlo simulations and the MCell-Blender platform, which was designed to treat the reaction-diffusion problem under realistic scenarios. This method enables us to set complex geometries of the simulation space, distribute molecules among different compartments, and define interspecies reactions with selected kinetic constants, diffusion coefficients, and species brightness. We apply this method to simulate single- and multiple-point FCS, photon-counting histogram analysis, raster image correlation spectroscopy, and two-color fluorescence cross-correlation spectroscopy. We believe that this new program could be very useful for predicting and understanding the output of fluorescence microscopy experiments. PMID:26039162
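
    The measurable quantity such simulations must reproduce is the intensity autocorrelation G(tau); a minimal sketch, assuming free 2D diffusion through a Gaussian observation volume with placeholder parameter values (not FERNET itself):

```python
import numpy as np

rng = np.random.default_rng(2)
n_mol, steps, dt, D, w = 200, 20000, 1e-5, 1.0, 0.2
pos = rng.uniform(-1.0, 1.0, (n_mol, 2))

intensity = np.empty(steps)
for t in range(steps):
    pos += np.sqrt(2.0 * D * dt) * rng.standard_normal((n_mol, 2))
    pos = (pos + 1.0) % 2.0 - 1.0          # periodic box [-1, 1)^2
    # Detected intensity: Gaussian observation profile summed over molecules.
    intensity[t] = np.exp(-2.0 * (pos**2).sum(axis=1) / w**2).sum()

f = intensity - intensity.mean()
lags = np.arange(1, 200)
G = np.array([np.mean(f[:-k] * f[k:]) for k in lags]) / intensity.mean()**2
# G decays on the diffusion time scale ~ w**2 / (4 * D) (in units of dt here).
```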

  8. Multivariable extrapolation of grand canonical free energy landscapes

    NASA Astrophysics Data System (ADS)

    Mahynski, Nathan A.; Errington, Jeffrey R.; Shen, Vincent K.

    2017-12-01

    We derive an approach for extrapolating the free energy landscape of multicomponent systems in the grand canonical ensemble, obtained from flat-histogram Monte Carlo simulations, from one set of temperature and chemical potentials to another. This is accomplished by expanding the landscape in a Taylor series at each value of the order parameter which defines its macrostate phase space. The coefficients in each Taylor polynomial are known exactly from fluctuation formulas, which may be computed by measuring the appropriate moments of extensive variables that fluctuate in this ensemble. Here we derive the expressions necessary to define these coefficients up to arbitrary order. In principle, this enables a single flat-histogram simulation to provide complete thermodynamic information over a broad range of temperatures and chemical potentials. Using this, we also show how to combine a small number of simulations, each performed at different conditions, in a thermodynamically consistent fashion to accurately compute properties at arbitrary temperatures and chemical potentials. This method may significantly increase the computational efficiency of biased grand canonical Monte Carlo simulations, especially for multicomponent mixtures. Although approximate, this approach is amenable to high-throughput and data-intensive investigations where it is preferable to have a large quantity of reasonably accurate simulation data, rather than a smaller amount with a higher accuracy.
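
    For a single component, the first-order version of this extrapolation follows from the fluctuation formula d ln Pi(N)/d(beta*mu) = N - <N>; a sketch on a toy two-basin landscape (the paper derives the multicomponent coefficients to arbitrary order, which are not reproduced here):

```python
import numpy as np

def extrapolate_lnpi(lnpi, N, d_betamu):
    """Shift ln Pi(N) to a nearby beta*mu using d ln Pi / d(beta mu) = N - <N>."""
    p = np.exp(lnpi - lnpi.max())
    p /= p.sum()
    return lnpi + d_betamu * (N - np.sum(p * N))

# Toy two-basin (vapor/liquid-like) landscape over particle number N.
N = np.arange(200)
lnpi = np.logaddexp(-0.002 * (N - 30)**2, -0.002 * (N - 150)**2 - 1.0)

lnpi_new = extrapolate_lnpi(lnpi, N, d_betamu=0.05)   # landscape at nearby mu
```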

  9. Simulation of fault performance of a diesel engine driven brushless alternator through PSPICE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Narayanan, S.S.Y.; Ananthakrishnan, P.; Hangari, V.U.

    1995-12-31

    Analysis of the fault performance of a brushless alternator with damper windings in the main alternator has been handled ab initio as a total modeling and simulation problem through proper application of Park's equivalent circuit approach individually to the main exciter alternator units of the brushless alternator, and the same has been implemented through PSPICE. The accuracy of the parameters used in the modeling and results obtained through PSPICE implementation are then evaluated for a specific 125 kVA brushless alternator in two stages as follows: first, by comparison of the predicted fault performance obtained from simulation of the 125 kVA main alternator alone, treated as a conventional alternator, with the results obtained through the use of closed-form analytical expressions available in the literature for fault currents and torques in such conventional alternators; secondly, by comparison of some of the simulation results with those obtained experimentally on the brushless alternator itself. To enable proper calculation of derating factors to be used in the design of such brushless alternators, simulation results then include harmonic analysis of the steady-state fault currents and torques. Throughout these studies, the brushless alternator is treated to be on no load at the instant of occurrence of the fault.

  10. epiDMS: Data Management and Analytics for Decision-Making From Epidemic Spread Simulation Ensembles.

    PubMed

    Liu, Sicong; Poccia, Silvestro; Candan, K Selçuk; Chowell, Gerardo; Sapino, Maria Luisa

    2016-12-01

    Carefully calibrated large-scale computational models of epidemic spread represent a powerful tool to support the decision-making process during epidemic emergencies. Epidemic models are being increasingly used for generating forecasts of the spatial-temporal progression of epidemics at different spatial scales and for assessing the likely impact of different intervention strategies. However, the management and analysis of simulation ensembles stemming from large-scale computational models pose challenges, particularly when dealing with multiple interdependent parameters, spanning multiple layers and geospatial frames, affected by complex dynamic processes operating at different resolutions. We describe and illustrate with examples a novel epidemic simulation data management system, epiDMS, that was developed to address the challenges that arise from the need to generate, search, visualize, and analyze, in a scalable manner, large volumes of epidemic simulation ensembles and observations during the progression of an epidemic. epiDMS is a publicly available system that facilitates management and analysis of large epidemic simulation ensembles. epiDMS aims to fill an important hole in decision-making during healthcare emergencies by enabling critical services with significant economic and health impact. © The Author 2016. Published by Oxford University Press for the Infectious Diseases Society of America. All rights reserved. For permissions, e-mail journals.permissions@oup.com.

  11. Spectroscopic ellipsometry for analysis of polycrystalline thin-film photovoltaic devices and prediction of external quantum efficiency

    NASA Astrophysics Data System (ADS)

    Ibdah, Abdel-Rahman; Koirala, Prakash; Aryal, Puruswottam; Pradhan, Puja; Marsillac, Sylvain; Rockett, Angus A.; Podraza, Nikolas J.; Collins, Robert W.

    2017-11-01

    Complete polycrystalline thin-film photovoltaic (PV) devices employing CuIn1-xGaxSe2/CdS and CdS/CdTe heterojunctions have been studied by ex situ spectroscopic ellipsometry (SE). In this study, layer thicknesses have been extracted along with photon energy independent parameters such as compositions that describe the dielectric function spectra ε(E) of the individual layers. For accurate ex situ SE analysis of these PV devices, a database of ε(E) spectra is required for all thin film component materials used in each of the two absorber technologies. When possible, database measurements are performed by applying SE in situ immediately after deposition of the thin film materials and after cooling to room temperature in order to avoid oxidation and surface contamination. Determination of ε(E) from the resulting in situ SE data requires structural information that can be obtained from analysis of SE data acquired in real time during the deposition process. From the results of ex situ analysis of the complete CuIn1-xGaxSe2 (CIGS) and CdTe PV devices, the deduced layer thicknesses in combination with the parameters describing ε(E) can be employed in further studies that simulate the external quantum efficiency (EQE) spectra of the devices. These simulations have been performed here by assuming that all electron-hole pairs generated within the active layers, i.e. layers incorporating a dominant absorber component (either CIGS or CdTe), are separated and collected. The active layers may include not only the bulk absorber but also window and back contact interface layers, and individual current contributions from these layers have been determined in the simulations. In addition, the ex situ SE analysis results enable calculation of the absorbance spectra for the inactive layers and the overall reflectance spectra, which lead to quantification of all optical losses in terms of a current density deficit. Mapping SE can be performed given the high speed of multichannel ellipsometers employing array detection, and the resulting EQE simulation capability has wide applications in predicting large area PV module output. The ultimate goal is an on-line capability that enables prediction of PV sub-cell current output as early as possible in the production process.

  12. Extending semi-numeric reionization models to the first stars and galaxies

    NASA Astrophysics Data System (ADS)

    Koh, Daegene; Wise, John H.

    2018-03-01

    Semi-numeric methods have made it possible to efficiently model the epoch of reionization (EoR). While most implementations involve a reduction to a simple three-parameter model, we introduce a new mass-dependent ionizing efficiency parameter that folds in physical parameters constrained by the latest numerical simulations. This new parametrization enables the effective modelling of a broad range of host halo masses containing ionizing sources, extending from the smallest Population III host haloes with M ~ 10⁶ M⊙, which are often ignored, to the rarest cosmic peaks with M ~ 10¹² M⊙ during the EoR. We compare the resulting ionizing histories with a typical three-parameter model and also with the latest constraints from the Planck mission. Our model results in an optical depth due to Thomson scattering, τ_e = 0.057, that is consistent with Planck. The largest difference in our model appears in the resulting bubble size distributions, which peak at lower characteristic sizes and are broadened. We also consider the uncertainties of the various physical parameters, and comparing the resulting ionizing histories broadly disfavours a small contribution from galaxies. The smallest haloes cease to contribute meaningfully to the ionizing photon budget after z = 10, implying that they play a role in determining the start of the EoR and little else.
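
    A sketch of how such a mass-dependent efficiency might enter the standard semi-numeric ionization criterion; the power-law form of zeta(M) and all numbers below are placeholders, not the paper's calibrated parametrization:

```python
import numpy as np

def zeta(M, zeta0=30.0, M_pivot=1e9, alpha=0.2):
    """Toy power-law ionizing efficiency as a function of host halo mass."""
    return zeta0 * (M / M_pivot)**alpha

def cell_ionized(df_coll, M_bins):
    """Flag a cell ionized when the zeta-weighted collapsed fraction,
    summed over halo mass bins, reaches unity."""
    return np.sum(zeta(M_bins) * df_coll) >= 1.0

M_bins = np.logspace(6, 12, 25)        # host haloes from 1e6 to 1e12 Msun
df_coll = np.full(M_bins.size, 1e-3)   # collapsed fraction in each mass bin
print(cell_ionized(df_coll, M_bins))
```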

  13. Integration of simulations and visualizations into classroom contexts through role playing

    NASA Astrophysics Data System (ADS)

    Moysey, S. M.

    2016-12-01

    While simulations create a novel way to engage students, the idea of numerical modeling may be overwhelming to a wide swath of students - particularly non-geoscience majors or those students early in their earth science education. Yet even for these students, simulations and visualizations remain a powerful way to explore concepts and take ownership over their learning. One approach to bring these tools into the classroom is to introduce them as a component of a larger role-playing activity. I present two specific examples of how I have done this within a general education course broadly focused on water resources sustainability. In the first example, we have created an online multi-player watershed management game where players make management decisions for their individual farms, which in turn set the parameters for a watershed-scale groundwater model that continuously runs in the background. Through the simulation students were able to influence the behavior of the environment and see feedbacks on their individual land within the game. Though the original intent was to focus student learning on the hydrologic aspects of the watershed behavior, I have found that the value of the simulation is actually in allowing students to become immersed in a way that enables deep conversations about topics ranging from environmental policy to social justice. The second example presents an overview of a role playing activity focused on a multi-party negotiation of water rights in the Klamath watershed. In this case each student takes on a different role in the negotiation (e.g., farmer, energy producer, government, environmental advocate, etc.) and is presented with a rich set of data tying environmental and economic factors to the operation of reservoirs. In this case the simulation model is very simple, i.e., a mass balance calculator that students use to predict the consequences of their management decisions. The simplicity of the simulator, however, allows for reinforcement of the fundamental concept of mass balance which is a key scientific theme throughout the course. It also allows students to focus on analysis of data that enables them to tie hydrologic behaviors to societal consequences that guide their decision making.
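
    The mass-balance calculator used in the negotiation exercise can be as simple as the following sketch; the variable names, units, and numbers are invented for illustration:

```python
def simulate_reservoir(storage, inflows, release, diversion, capacity):
    """Monthly storage = previous storage + inflow - release - diversion,
    clipped between empty and full (any excess spills)."""
    history = []
    for q_in in inflows:
        storage = max(0.0, min(storage + q_in - release - diversion, capacity))
        history.append(storage)
    return history

print(simulate_reservoir(storage=500.0, inflows=[120, 80, 40, 30],
                         release=60.0, diversion=25.0, capacity=800.0))
```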

  14. RFI and SCRIMP Model Development and Verification

    NASA Technical Reports Server (NTRS)

    Loos, Alfred C.; Sayre, Jay

    2000-01-01

    Vacuum-Assisted Resin Transfer Molding (VARTM) processes are becoming promising technologies in the manufacturing of primary composite structures in the aircraft industry as well as infrastructure. A great deal of work still needs to be done on efforts to reduce the costly trial-and-error methods of VARTM processing that are currently in practice today. A computer simulation model of the VARTM process would provide a cost-effective tool in the manufacturing of composites utilizing this technique. Therefore, the objective of this research was to modify an existing three-dimensional, Resin Film Infusion (RFI)/Resin Transfer Molding (RTM) model to include VARTM simulation capabilities and to verify this model with the fabrication of aircraft structural composites. An additional objective was to use the VARTM model as a process analysis tool, where this tool would enable the user to configure the best process for manufacturing quality composites. Experimental verification of the model was performed by processing several flat composite panels. The parameters verified included flow front patterns and infiltration times. The flow front patterns were determined to be qualitatively accurate, while the simulated infiltration times overpredicted experimental times by 8 to 10%. Capillary and gravitational forces were incorporated into the existing RFI/RTM model in order to simulate VARTM processing physics more accurately. The theoretical capillary pressure showed the capability to reduce the simulated infiltration times by as much as 6%. Gravity, on the other hand, was found to be negligible in all cases. Finally, the VARTM model was used as a process analysis tool. This enabled the user to determine such important process constraints as the location and type of injection ports and the permeability and location of the high-permeability media. A process for a three-stiffener composite panel was proposed. This configuration evolved from the variation of the process constraints in the modeling of several different composite panels. The configuration was proposed by considering such factors as infiltration time, the number of vacuum ports, and possible areas of void entrapment.

  15. Model of fluid flow and internal erosion of a porous fragile medium

    NASA Astrophysics Data System (ADS)

    Kudrolli, Arshad; Clotet, Xavier

    2016-11-01

    We discuss the internal erosion and transport of particles leading to heterogeneity and channelization of a porous granular bed driven by fluid flow, introducing a model experimental system which enables direct visualization of the evolution of porosity from the single-particle up to the system scale. Further, we develop a hybrid hydrodynamic-statistical model to understand the main ingredients needed to simulate our observations. A distinctive feature of our study is the close coupling of the experiments and simulations, with the control parameters used in the simulations derived from the experiments. Understanding this system is of fundamental importance to a number of geophysical processes and to the extraction of hydrocarbons in the subsurface, including the deposition of proppants used in hydraulic fracturing. We provide clear evidence for the importance of the curvature of the interface between high- and low-porosity regions in determining the flux rate needed for erosion and the spatial locations where channels grow. This material is based upon work supported by the U.S. Department of Energy Office of Science, Office of Basic Energy Sciences program under DE-SC0010274.

  16. Simulation and analysis of differential global positioning system for civil helicopter operations

    NASA Technical Reports Server (NTRS)

    Denaro, R. P.; Cabak, A. R.

    1983-01-01

    A Differential Global Positioning System (DGPS) computer simulation was developed to provide a versatile tool for assessing DGPS-referenced civil helicopter navigation. The civil helicopter community will probably be an early user of the GPS capability because of its unique mission requirements, which include offshore exploration and low-altitude transport into remote areas not currently served by ground-based Navaids. The Monte Carlo simulation provided a sufficiently high-fidelity dynamic motion and propagation environment to enable accurate comparisons of alternative differential GPS implementations and navigation filter tradeoffs. The analyst is given the capability to adjust most aspects of the system, the helicopter flight profile, the receiver Kalman filter, and the signal propagation environment to assess differential GPS performance and parameter sensitivities. Preliminary analysis was conducted to evaluate alternative implementations of the differential navigation algorithm in both the position and measurement domains. Results are presented showing that significant performance gains are achieved when compared with conventional GPS, but that differences due to DGPS implementation techniques were small. System performance was relatively insensitive to the update rates of the error correction information.
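
    A position-domain differential correction, one of the two implementations compared in the study, reduces to subtracting the reference station's solution error from the rover's fix; a sketch with synthetic error magnitudes:

```python
import numpy as np

rng = np.random.default_rng(3)
ref_truth = np.array([0.0, 0.0, 0.0])          # surveyed reference antenna
common = rng.normal(0.0, 5.0, 3)               # errors common to both receivers
local = rng.normal(0.0, 0.5, 3)                # receiver-unique errors

ref_fix = ref_truth + common                   # reference receiver solution
rover_truth = np.array([1000.0, -200.0, 50.0])
rover_fix = rover_truth + common + local

correction = ref_truth - ref_fix               # broadcast to the rover
print(np.linalg.norm(rover_fix - rover_truth))                # GPS-only error
print(np.linalg.norm(rover_fix + correction - rover_truth))   # DGPS error
```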

  17. Dengue fever spreading based on probabilistic cellular automata with two lattices

    NASA Astrophysics Data System (ADS)

    Pereira, F. M. M.; Schimit, P. H. T.

    2018-06-01

    Modeling and simulation of mosquito-borne diseases have gained attention due to a growing incidence in tropical countries in the past few years. Here, we study dengue spreading in a population modeled by cellular automata, where two lattices model the human-mosquito interaction: one lattice for human individuals, and one lattice for mosquitoes, in order to enable different dynamics in the two populations. The disease considered is dengue fever with one, two or three different serotypes coexisting in the population. Although many regions exhibit the incidence of only one serotype, here we set out a complete framework to also study the occurrence of two and three serotypes at the same time in a population. Furthermore, the flexibility of the model allows its use for other mosquito-borne diseases, like chikungunya, yellow fever and malaria. An approximation of the cellular automata is proposed in terms of ordinary differential equations; the spreading of mosquitoes is studied and the influence of some model parameters is analyzed with numerical simulations. Finally, a method to combat dengue spreading is simulated based on a reduction of mosquito births and mosquito bites in the population.

  18. Model-Based Control of a Nonlinear Aircraft Engine Simulation using an Optimal Tuner Kalman Filter Approach

    NASA Technical Reports Server (NTRS)

    Connolly, Joseph W.; Csank, Jeffrey Thomas; Chicatelli, Amy; Kilver, Jacob

    2013-01-01

    This paper covers the development of a model-based engine control (MBEC) methodology featuring a self-tuning on-board model applied to an aircraft turbofan engine simulation. Here, the Commercial Modular Aero-Propulsion System Simulation 40,000 (CMAPSS40k) serves as the MBEC application engine. CMAPSS40k is capable of modeling realistic engine performance, allowing for verification of the MBEC over a wide range of operating points. The on-board model is a piecewise linear model derived from CMAPSS40k and updated using an optimal tuner Kalman filter (OTKF) estimation routine, which enables the on-board model to self-tune to account for engine performance variations. The focus here is on developing a methodology for MBEC with direct control of estimated parameters of interest, such as thrust and stall margins. Investigations are presented in which the MBEC provides a stall margin limit for the controller protection logic, which could offer benefits over the simple acceleration schedule currently used in traditional engine control architectures.
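
    The self-tuning idea can be illustrated with a scalar Kalman filter that estimates a single tuner so a linear on-board model tracks the measured output; this sketch omits the optimal tuner selection that defines the OTKF and uses invented numbers:

```python
import numpy as np

rng = np.random.default_rng(4)
true_eff, q, r, H = 0.95, 1e-6, 1e-2, 100.0
x, P = 1.0, 1.0                                # tuner estimate and its variance
for _ in range(200):
    y = true_eff * H + rng.normal(0.0, np.sqrt(r))   # measured thrust proxy
    P = P + q                                        # predict (random-walk tuner)
    K = P * H / (H * P * H + r)                      # Kalman gain
    x = x + K * (y - H * x)                          # measurement update
    P = (1.0 - K * H) * P
print(x)    # converges toward the true value 0.95
```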

  19. A Computational Framework for Bioimaging Simulation

    PubMed Central

    Watabe, Masaki; Arjunan, Satya N. V.; Fukushima, Seiya; Iwamoto, Kazunari; Kozuka, Jun; Matsuoka, Satomi; Shindo, Yuki; Ueda, Masahiro; Takahashi, Koichi

    2015-01-01

    Using bioimaging technology, biologists have attempted to identify and document analytical interpretations that underlie biological phenomena in biological cells. Theoretical biology aims at distilling those interpretations into knowledge in the mathematical form of biochemical reaction networks and understanding how higher level functions emerge from the combined action of biomolecules. However, there still remain formidable challenges in bridging the gap between bioimaging and mathematical modeling. Generally, measurements using fluorescence microscopy systems are influenced by systematic effects that arise from stochastic nature of biological cells, the imaging apparatus, and optical physics. Such systematic effects are always present in all bioimaging systems and hinder quantitative comparison between the cell model and bioimages. Computational tools for such a comparison are still unavailable. Thus, in this work, we present a computational framework for handling the parameters of the cell models and the optical physics governing bioimaging systems. Simulation using this framework can generate digital images of cell simulation results after accounting for the systematic effects. We then demonstrate that such a framework enables comparison at the level of photon-counting units. PMID:26147508
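
    A stand-in for the systematic-effects chain such a framework models, assuming a Gaussian PSF and Poisson shot noise (a minimal illustration, not the published pipeline):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(5)
truth = np.zeros((128, 128))
truth[rng.integers(0, 128, 60), rng.integers(0, 128, 60)] = 200.0  # emitters

optics = gaussian_filter(truth, sigma=2.5)      # diffraction-limited blur (PSF)
image = rng.poisson(optics + 5.0)               # photon shot noise + background
```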

  20. Life's attractors: understanding developmental systems through reverse engineering and in silico evolution.

    PubMed

    Jaeger, Johannes; Crombach, Anton

    2012-01-01

    We propose an approach to evolutionary systems biology which is based on reverse engineering of gene regulatory networks and in silico evolutionary simulations. We infer regulatory parameters for gene networks by fitting computational models to quantitative expression data. This allows us to characterize the regulatory structure and dynamical repertoire of evolving gene regulatory networks with a reasonable amount of experimental and computational effort. We use the resulting network models to identify those regulatory interactions that are conserved, and those that have diverged between different species. Moreover, we use the models obtained by data fitting as starting points for simulations of evolutionary transitions between species. These simulations enable us to investigate whether such transitions are random, or whether they show stereotypical series of regulatory changes which depend on the structure and dynamical repertoire of an evolving network. Finally, we present a case study, the gap gene network in dipterans (flies, midges, and mosquitoes), to illustrate the practical application of the proposed methodology, and to highlight the kind of biological insights that can be gained by this approach.

  1. An Improved Lattice Boltzmann Model for Non-Newtonian Flows with Applications to Solid-Fluid Interactions in External Flows

    NASA Astrophysics Data System (ADS)

    Adam, Saad; Premnath, Kannan

    2016-11-01

    The fluid mechanics of non-Newtonian fluids, which arise in numerous settings, is characterized by nonlinear constitutive models that pose certain unique challenges for computational methods. Here, we consider the lattice Boltzmann method (LBM), which offers some computational advantages due to its kinetic basis and its simpler stream-and-collide procedure, enabling efficient simulations. However, further improvements are necessary to improve its numerical stability and accuracy for computations involving broader parameter ranges. Hence, in this study, we extend the cascaded LBM formulation by modifying its moment equilibria and relaxation parameters to handle a variety of non-Newtonian constitutive equations, including power-law and Bingham fluids, with improved stability. In addition, we include corrections to the moment equilibria to obtain an inertial-frame-invariant scheme without cubic-velocity defects. After performing a validation study for various benchmark flows, we study the physics of non-Newtonian flow over pairs of circular and square cylinders in a tandem arrangement, especially the wake structure interactions and their effects on the resulting forces on each cylinder, and elucidate the effect of the various characteristic parameters.
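
    One way a power-law constitutive model enters an LBM-type scheme is through a shear-rate-dependent apparent viscosity that sets the local relaxation time; the sketch below uses a generic BGK-style mapping with placeholder constants and omits the cascaded moment details:

```python
import numpy as np

def relaxation_time(shear_rate, m=0.01, n=0.6, cs2=1.0/3.0, dt=1.0):
    # Power-law fluid: apparent viscosity nu = m * gamma_dot**(n - 1),
    # floored to avoid the singularity at zero shear.
    nu_app = m * np.maximum(shear_rate, 1e-12)**(n - 1.0)
    return nu_app / (cs2 * dt) + 0.5       # viscosity -> relaxation time

print(relaxation_time(np.array([0.01, 0.1, 1.0])))  # shear-thinning for n < 1
```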

  2. Modelling hen harrier dynamics to inform human-wildlife conflict resolution: a spatially-realistic, individual-based approach.

    PubMed

    Heinonen, Johannes P M; Palmer, Stephen C F; Redpath, Steve M; Travis, Justin M J

    2014-01-01

    Individual-based models have gained popularity in ecology, and enable simultaneous incorporation of spatial explicitness and population dynamic processes to understand spatio-temporal patterns of populations. We introduce an individual-based model for understanding and predicting spatial hen harrier (Circus cyaneus) population dynamics in Great Britain. The model uses a landscape with habitat, prey and game management indices. The hen harrier population was initialised according to empirical census estimates for 1988/89 and simulated until 2030, and predictions for 1998, 2004 and 2010 were compared to empirical census estimates for respective years. The model produced a good qualitative match to overall trends between 1989 and 2010. Parameter explorations revealed relatively high elasticity in particular to demographic parameters such as juvenile male mortality. This highlights the need for robust parameter estimates from empirical research. There are clearly challenges for replication of real-world population trends, but this model provides a useful tool for increasing understanding of drivers of hen harrier dynamics and focusing research efforts in order to inform conflict management decisions.

  3. Tight-binding analysis of Si and GaAs ultrathin bodies with subatomic wave-function resolution

    NASA Astrophysics Data System (ADS)

    Tan, Yaohua P.; Povolotskyi, Michael; Kubis, Tillmann; Boykin, Timothy B.; Klimeck, Gerhard

    2015-08-01

    Empirical tight-binding (ETB) methods are widely used in atomistic device simulations. Traditional ways of generating the ETB parameters rely on direct fitting to bulk experiments or theoretical electronic bands. However, ETB calculations based on existing parameters lead to unphysical results in ultrasmall structures like As-terminated GaAs ultrathin bodies (UTBs). In this work, it is shown that more transferable ETB parameters with a short interaction range can be obtained by a process of mapping ab initio bands and wave functions to ETB models. This process enables the calibration of not only the ETB energy bands but also the ETB wave functions against corresponding ab initio calculations. Based on the mapping process, ETB models of Si and GaAs are parameterized with respect to hybrid functional calculations. Highly localized ETB basis functions are obtained. Both the ETB energy bands and wave functions with subatomic resolution of UTBs show good agreement with the corresponding hybrid functional calculations. The ETB models can then be used to simulate realistically extended devices under nonequilibrium conditions, which cannot be tackled with ab initio methods.

  5. A three-dimensional virtual environment for modeling mechanical cardiopulmonary interactions.

    PubMed

    Kaye, J M; Primiano, F P; Metaxas, D N

    1998-06-01

    We have developed a real-time computer system for modeling mechanical physiological behavior in an interactive, 3-D virtual environment. Such an environment can be used to facilitate exploration of cardiopulmonary physiology, particularly in situations that are difficult to reproduce clinically. We integrate 3-D deformable body dynamics with new, formal models of (scalar) cardiorespiratory physiology, associating the scalar physiological variables and parameters with the corresponding 3-D anatomy. Our framework enables us to drive a high-dimensional system (the 3-D anatomical models) from one with fewer parameters (the scalar physiological models) because of the nature of the domain and our intended application. Our approach is amenable to modeling patient-specific circumstances in two ways. First, using CT scan data, we apply semi-automatic methods for extracting and reconstructing the anatomy to use in our simulations. Second, our scalar physiological models are defined in terms of clinically measurable, patient-specific parameters. This paper describes our approach, problems we have encountered and a sample of results showing normal breathing and acute effects of pneumothoraces.

  6. Danish heathland manipulation experiment data in Model-Data-Fusion

    NASA Astrophysics Data System (ADS)

    Thum, Tea; Peylin, Philippe; Ibrom, Andreas; Van Der Linden, Leon; Beier, Claus; Bacour, Cédric; Santaren, Diego; Ciais, Philippe

    2013-04-01

    In ecosystem manipulation experiments (EMEs) the ecosystem is artificially exposed to different environmental conditions that aim to simulate circumstances expected under future climate. At the Danish EME site Brandbjerg, the responses of a heathland to drought, warming and increased atmospheric CO2 concentration are studied. The warming manipulation is realized by passive nighttime warming. The measurements include control plots as well as replicates for each of the three treatments separately and in combination. The Brandbjerg heathland ecosystem is dominated by heather and wavy hairgrass. These experiments provide excellent data for validation and development of ecosystem models. In this work we used the generic vegetation model ORCHIDEE with a Model-Data-Fusion (MDF) approach. ORCHIDEE is a process-based model that describes the exchanges of carbon, water and energy between the atmosphere and the vegetation. It can be run at different spatial scales, from global to site level. Different vegetation types are described in ORCHIDEE as plant functional types. In MDF we use observations from the site to optimize the model parameters. This enables us to assess the modelling errors and the performance of the model for the different manipulation treatments. This insight will tell us whether the different processes are adequately modelled or whether the model is missing some important processes. We used a genetic algorithm in the MDF. The data available from the site included measurements of aboveground biomass, heterotrophic soil respiration and total ecosystem respiration from the years 2006-2008. The biomass was measured six times during this period. The respiration measurements were made with manual chambers. For the soil respiration we used results from an empirical model that has been developed for the site, which provided more data for the MDF. Before the MDF we performed a sensitivity analysis of the model parameters to the different data streams. The fifteen most influential parameters were chosen to be optimized. These included parameters connected to photosynthesis, phenology, allocation of biomass and respiration. All three data streams were used simultaneously in the MDF. Before the MDF, the model tended to overestimate the respiration and the aboveground biomass. After MDF the model simulations were closer to the observations, but its estimates for variables that were not used in the MDF, such as fine-root biomass growth, did not improve greatly. In these runs the vegetation of the Brandbjerg site was described in ORCHIDEE as C3 grass, which has some characteristics that do not apply well to a Danish heathland. The results suggest that a new plant functional type needs to be developed for ORCHIDEE in order to successfully simulate ecosystems such as Brandbjerg.

  7. Design and Control of Modular Spine-Like Tensegrity Structures

    NASA Technical Reports Server (NTRS)

    Mirletz, Brian T.; Park, In-Won; Flemons, Thomas E.; Agogino, Adrian K.; Quinn, Roger D.; SunSpiral, Vytas

    2014-01-01

    We present a methodology enabled by the NASA Tensegrity Robotics Toolkit (NTRT) for the rapid structural design of tensegrity robots in simulation and an approach for developing control systems using central pattern generators, local impedance controllers, and parameter optimization techniques to determine effective locomotion strategies for the robot. Biomimetic tensegrity structures provide advantageous properties to robotic locomotion and manipulation tasks, such as their adaptability and force distribution properties, flexibility, energy efficiency, and access to extreme terrains. While strides have been made in designing insightful static biotensegrity structures, gaining a clear understanding of how a particular structure can efficiently move has been an open problem. The tools in the NTRT enable the rapid exploration of the dynamics of a given morphology, and the links between structure, controllability, and resulting gait efficiency. To highlight the effectiveness of the NTRT at this exploration of morphology and control, we will provide examples from the designs and locomotion of four different modular spine-like tensegrity robots.

  8. A cell-based computational model of early embryogenesis coupling mechanical behaviour and gene regulation

    NASA Astrophysics Data System (ADS)

    Delile, Julien; Herrmann, Matthieu; Peyriéras, Nadine; Doursat, René

    2017-01-01

    The study of multicellular development is grounded in two complementary domains: cell biomechanics, which examines how physical forces shape the embryo, and genetic regulation and molecular signalling, which concern how cells determine their states and behaviours. Integrating both sides into a unified framework is crucial to fully understand the self-organized dynamics of morphogenesis. Here we introduce MecaGen, an integrative modelling platform enabling the hypothesis-driven simulation of these dual processes via the coupling between mechanical and chemical variables. Our approach relies upon a minimal `cell behaviour ontology' comprising mesenchymal and epithelial cells and their associated behaviours. MecaGen enables the specification and control of complex collective movements in 3D space through a biologically relevant gene regulatory network and parameter space exploration. Three case studies investigating pattern formation, epithelial differentiation and tissue tectonics in zebrafish early embryogenesis, the latter with quantitative comparison to live imaging data, demonstrate the validity and usefulness of our framework.

  9. A Bayesian Poisson-lognormal Model for Count Data for Multiple-Trait Multiple-Environment Genomic-Enabled Prediction

    PubMed Central

    Montesinos-López, Osval A.; Montesinos-López, Abelardo; Crossa, José; Toledo, Fernando H.; Montesinos-López, José C.; Singh, Pawan; Juliana, Philomin; Salinas-Ruiz, Josafhat

    2017-01-01

    When a plant scientist wishes to make genomic-enabled predictions of multiple traits measured in multiple individuals in multiple environments, the most common strategy for performing the analysis is to use a single trait at a time taking into account genotype × environment interaction (G × E), because there is a lack of comprehensive models that simultaneously take into account the correlated counting traits and G × E. For this reason, in this study we propose a multiple-trait and multiple-environment model for count data. The proposed model was developed under the Bayesian paradigm for which we developed a Markov Chain Monte Carlo (MCMC) with noninformative priors. This allows obtaining all required full conditional distributions of the parameters leading to an exact Gibbs sampler for the posterior distribution. Our model was tested with simulated data and a real data set. Results show that the proposed multi-trait, multi-environment model is an attractive alternative for modeling multiple count traits measured in multiple environments. PMID:28364037
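
    The data-generating side of such a model can be sketched as follows, with counts drawn from a Poisson distribution whose log-mean combines genotype, environment, and G × E effects; the dimensions and variances are toy values, and the paper's Gibbs sampler is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(6)
n_gen, n_env, n_trait = 30, 3, 2
g = rng.normal(0.0, 0.5, (n_gen, 1, n_trait))        # genotype effects
e = rng.normal(0.0, 0.3, (1, n_env, n_trait))        # environment effects
ge = rng.normal(0.0, 0.2, (n_gen, n_env, n_trait))   # G x E interaction
eps = rng.normal(0.0, 0.1, (n_gen, n_env, n_trait))  # lognormal residual

eta = 1.0 + g + e + ge + eps                         # log-mean surface
y = rng.poisson(np.exp(eta))                         # observed count traits
print(y.shape, y.mean())
```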

  10. Numerically accurate computational techniques for optimal estimator analyses of multi-parameter models

    NASA Astrophysics Data System (ADS)

    Berger, Lukas; Kleinheinz, Konstantin; Attili, Antonio; Bisetti, Fabrizio; Pitsch, Heinz; Mueller, Michael E.

    2018-05-01

    Modelling unclosed terms in partial differential equations typically involves two steps: First, a set of known quantities needs to be specified as input parameters for a model, and second, a specific functional form needs to be defined to model the unclosed terms by the input parameters. Both steps involve a certain modelling error, with the former known as the irreducible error and the latter referred to as the functional error. Typically, only the total modelling error, which is the sum of functional and irreducible error, is assessed, but the concept of the optimal estimator enables the separate analysis of the total and the irreducible errors, yielding a systematic modelling error decomposition. In this work, attention is paid to the techniques themselves required for the practical computation of irreducible errors. Typically, histograms are used for optimal estimator analyses, but this technique is found to add a non-negligible spurious contribution to the irreducible error if models with multiple input parameters are assessed. Thus, the error decomposition of an optimal estimator analysis becomes inaccurate, and misleading conclusions concerning modelling errors may be drawn. In this work, numerically accurate techniques for optimal estimator analyses are identified and a suitable evaluation of irreducible errors is presented. Four different computational techniques are considered: a histogram technique, artificial neural networks, multivariate adaptive regression splines, and an additive model based on a kernel method. For multiple input parameter models, only artificial neural networks and multivariate adaptive regression splines are found to yield satisfactorily accurate results. Beyond a certain number of input parameters, the assessment of models in an optimal estimator analysis even becomes practically infeasible if histograms are used. The optimal estimator analysis in this paper is applied to modelling the filtered soot intermittency in large eddy simulations using a dataset of a direct numerical simulation of a non-premixed sooting turbulent flame.
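
    The quantities involved are easy to state: the optimal estimator is the conditional mean E[Y|X], and the irreducible error is the residual variance about it. The sketch below estimates both with the simple histogram (binning) technique whose spurious contribution the paper analyzes; the model target is synthetic:

```python
import numpy as np

rng = np.random.default_rng(7)
x = rng.uniform(0.0, 1.0, 100000)                      # single input parameter
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(x.size)   # target

bins = np.linspace(0.0, 1.0, 51)
idx = np.digitize(x, bins) - 1
cond_mean = np.array([y[idx == b].mean() for b in range(50)])   # E[Y|X] per bin

# Irreducible error: no model built on x alone can do better than this.
irreducible = np.mean((y - cond_mean[idx])**2)
print(irreducible)   # ~0.01 (noise variance) plus a spurious binning term
```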

  11. Calibration of an Unsteady Groundwater Flow Model for a Complex, Strongly Heterogeneous Aquifer

    NASA Astrophysics Data System (ADS)

    Curtis, Z. K.; Liao, H.; Li, S. G.; Phanikumar, M. S.; Lusch, D.

    2016-12-01

    Modeling of groundwater systems characterized by complex three-dimensional structure and heterogeneity remains a significant challenge. Most of today's groundwater models are developed from relatively simple conceptual representations in order to keep the model calibratable. As more complexities are modeled, e.g., by adding more layers and/or zones, or by introducing transient processes, more parameters have to be estimated, and issues related to ill-posed groundwater problems and non-unique calibration arise. Here, we explore the use of an alternative conceptual representation for groundwater modeling that is fully three-dimensional and can capture complex 3D heterogeneity (both systematic and "random") without over-parameterizing the aquifer system. In particular, we apply Transition Probability (TP) geostatistics to high-resolution borehole data from a water well database to characterize the complex 3D geology. Different aquifer material classes, e.g., `AQ' (aquifer material), `MAQ' (marginal aquifer material), `PCM' (partially confining material), and `CM' (confining material), are simulated, with the hydraulic properties of each material type as tuning parameters during calibration. The TP-based approach is applied to simulate unsteady groundwater flow in a large, complex, and strongly heterogeneous glacial aquifer system in Michigan across multiple spatial and temporal scales. The resulting model is calibrated to observed static water level data over a time span of 50 years. The results show that the TP-based conceptualization enables much more accurate and robust calibration/simulation than conventional deterministic layer/zone-based conceptual representations.
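
    A one-dimensional caricature of TP geostatistics, assuming a hypothetical vertical transition matrix over the four material classes; the study's simulations are fully 3D and conditioned to borehole data, so this shows only the core idea:

```python
import numpy as np

rng = np.random.default_rng(8)
facies = ["AQ", "MAQ", "PCM", "CM"]
# Hypothetical vertical transition probabilities (each row sums to 1); in
# practice these would be estimated from borehole logs.
T = np.array([[0.80, 0.10, 0.07, 0.03],
              [0.15, 0.70, 0.10, 0.05],
              [0.05, 0.10, 0.75, 0.10],
              [0.02, 0.05, 0.13, 0.80]])

state, column = 0, []
for _ in range(50):                    # simulate a 50-cell vertical column
    column.append(facies[state])
    state = rng.choice(4, p=T[state])
print(" ".join(column))
```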

  12. From plot to regional scales: Effect of land use and soil type on soil erosion in the southern Amazon

    NASA Astrophysics Data System (ADS)

    Schindewolf, Marcus; Schultze, Nico; Amorim, Ricardo S. S.; Schmidt, Jürgen

    2015-04-01

    The corridor along the Brazilian Highway 163 in the southern Amazon is affected by radical changes in land use patterns. In order to enable a model-based assessment of erosion risks on different land use and soil types, a transportable disc-type rainfall simulator is applied to identify the most important infiltration and erosion parameters of the EROSION 3D model. Since particle detachment depends strongly on experimental plot length, a combined runoff supply is used to virtually extend the plot length to more than 20 m. Simulations were conducted on the most common regional land use, soil management and soil types for dry and wet runs. The experiments are characterized by high final infiltration rates (0.3-2.5 mm min⁻¹), low sediment concentrations (0.2-6.5 g L⁻¹) and accordingly low soil loss rates (0.002-50 kg m⁻²), strongly related to land use, applied management and soil type. Ploughed pastures and clear cuts reveal the highest soil losses, whereas croplands are less affected. Due to higher aggregate stabilities, Ferralsols are less endangered than Acrisols. The derived model parameters are plausible, comparable to existing databases, and reproduce the effects of land use and soil management on soil loss. It is thus possible to apply the EROSION 3D soil loss model in the southern Amazon for erosion risk assessment and scenario simulation under changing climate and land use conditions.

  13. A radiosity model for heterogeneous canopies in remote sensing

    NASA Astrophysics Data System (ADS)

    García-Haro, F. J.; Gilabert, M. A.; Meliá, J.

    1999-05-01

    A radiosity model has been developed to compute bidirectional reflectance from a heterogeneous canopy approximated by an arbitrary configuration of plants or clumps of vegetation, placed on the ground surface in a prescribed manner. Plants are treated as porous cylinders formed by aggregations of layers of leaves. This model explicitly computes solar radiation leaving each individual surface, taking into account multiple scattering processes between leaves and soil, and occlusion by neighboring plants. The canopy structural parameters adopted in this study have served to simplify the computation of the geometric factors of the radiosity equation, and thus the model has enabled us to simulate multispectral images of vegetation scenes. Simulated images have been shown to be valuable approximations of satellite data, and a sensitivity analysis to the dominant parameters of discontinuous canopies (plant density, leaf area index (LAI), leaf angle distribution (LAD), plant dimensions, soil optical properties, etc.) and of the scene (sun/view angles and atmospheric conditions) has been undertaken. The radiosity model has allowed us to gain deep insight into the radiative regime inside the canopy, showing it to be governed by occlusion of incoming irradiance, multiple scattering of radiation between canopy elements and interception of upward radiance by leaves. Results have indicated that, unlike leaf distribution, other structural parameters such as LAI, LAD, and plant dimensions have a strong influence on canopy reflectance. In addition, concepts have been developed that are useful for understanding the reflectance behavior of the canopy, such as an effective LAI related to leaf inclination.
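
    At its core the method solves the radiosity system B = E + rho * F B for the exitance of every surface patch, capturing all multiple-scattering orders at once; a toy direct solve with placeholder emission, reflectances, and form factors:

```python
import numpy as np

n = 4
E = np.array([100.0, 0.0, 0.0, 0.0])     # only patch 0 emits (toy source)
rho = np.array([0.0, 0.6, 0.6, 0.2])     # leaf/soil reflectances (placeholders)
F = np.full((n, n), 0.2)                 # toy form factors between patches
np.fill_diagonal(F, 0.0)                 # a patch does not see itself

# (I - diag(rho) F) B = E  =>  B includes every multiple-scattering order.
B = np.linalg.solve(np.eye(n) - rho[:, None] * F, E)
print(B)
```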

  14. Hydrogeophysical Assessment of Aquifer Uncertainty Using Simulated Annealing driven MRF-Based Stochastic Joint Inversion

    NASA Astrophysics Data System (ADS)

    Oware, E. K.

    2017-12-01

    Geophysical quantification of hydrogeological parameters typically involves limited noisy measurements coupled with inadequate understanding of the target phenomenon. Hence, a deterministic solution is unrealistic in light of the largely uncertain inputs. Stochastic imaging (SI), in contrast, provides multiple equiprobable realizations that enable probabilistic assessment of aquifer properties in a realistic manner. Generation of geologically realistic prior models is central to SI frameworks. Higher-order statistics for representing prior geological features in SI are, however, usually borrowed from training images (TIs), which may produce undesirable outcomes if the TIs are unrepresentative of the target structures. The Markov random field (MRF)-based SI strategy provides a data-driven alternative to TI-based SI algorithms. In the MRF-based method, the simulation of spatial features is guided by Gibbs energy (GE) minimization. Local configurations with smaller GEs have a higher likelihood of occurrence and vice versa. The parameters of the Gibbs distribution for computing the GE are estimated from the hydrogeophysical data, thereby enabling the generation of site-specific structures in the absence of reliable TIs. In Metropolis-like SI methods, the variance of the transition probability controls the jump size. The procedure is a standard Markov chain Monte Carlo (McMC) method when a constant variance is assumed, and becomes simulated annealing (SA) when the variance (cooling temperature) is allowed to decrease gradually with time. We observe that in certain problems, the large variance typically employed at the beginning to hasten burn-in may be poorly suited for sampling at the equilibrium state. The power of SA stems from its flexibility to adaptively scale the variance at different stages of the sampling. Degeneration of results was reported in a previous implementation of the MRF-based SI strategy based on a constant variance. Here, we present an updated version of the algorithm based on SA that appears to resolve the degeneration problem, with improved results. We illustrate the performance of the SA version with a joint inversion of time-lapse concentration and electrical resistivity measurements in a hypothetical trinary hydrofacies aquifer characterization problem.
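
    The SA modification described here amounts to a Metropolis sampler whose proposal variance decays over the iterations; a skeleton with a stand-in quadratic Gibbs energy (the actual MRF posterior and cooling schedule are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(9)

def gibbs_energy(m):
    return 0.5 * np.sum((m - 1.0)**2)     # placeholder for the MRF energy

m = np.zeros(20)
for k in range(5000):
    sigma = 1.0 * 0.999**k                # cooling: jump size shrinks with time
    prop = m + sigma * rng.standard_normal(m.size)
    # Metropolis acceptance: lower-energy proposals are always accepted.
    if np.log(rng.random()) < gibbs_energy(m) - gibbs_energy(prop):
        m = prop
print(m.round(2))
```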

  15. Assessment of Impact of Monoenergetic Photon Sources on Prioritized Nonproliferation Applications: Simulation Study Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Geddes, Cameron; Ludewigt, Bernhard; Valentine, John

    Near-monoenergetic photon sources (MPSs) have the potential to improve sensitivity at greatly reduced dose in existing applications and enable new capabilities in other applications. MPS advantages include the ability to select energy, energy spread, flux, and pulse structures to deliver only the photons needed for the application, while suppressing extraneous dose and background. Some MPSs also offer narrow-divergence photon beams which can target dose and/or mitigate scattering contributions to image contrast degradation. Current broad-band, bremsstrahlung photon sources (e.g., linacs and betatrons) deliver unnecessary dose that in some cases also interferes with the signature to be detected and/or restricts operations, and must be collimated (reducing flux) to generate narrow-divergence beams. While MPSs can in principle resolve these issues, they are technically challenging to produce. Candidate MPS technologies for nonproliferation applications are now being developed, each of which has different properties (e.g. broad divergence vs. narrow). Within each technology, source parameters trade off against one another (e.g. flux vs. energy spread), representing a large operation space. To guide development, requirements for each application of interest must be defined and simulations conducted to define MPS parameters that deliver benefit relative to current systems. The present project conducted a broad assessment of potential nonproliferation applications where MPSs may provide new capabilities or significant performance enhancement (reported separately), which led to prioritization of several applications for detailed analysis. The applications prioritized were: cargo screening and interdiction of Special Nuclear Materials (SNM), detection of hidden SNM, treaty/dismantlement verification, and spent fuel dry storage cask content verification. High-resolution imaging for stockpile stewardship was considered as a sub-area of the treaty topic, as it is also of interest for future treaty use. This report presents higher-fidelity calculations and modeling results to quantitatively evaluate the prioritized applications, and to derive the key MPS properties that drive application benefit. Simulations focused on the conventional signatures of radiography, photofission, and NRF to enable comparison to present methods and evaluation of benefit.

  16. Integrating acoustic telemetry into mark-recapture models to improve the precision of apparent survival and abundance estimates.

    PubMed

    Dudgeon, Christine L; Pollock, Kenneth H; Braccini, J Matias; Semmens, Jayson M; Barnett, Adam

    2015-07-01

    Capture-mark-recapture models are useful tools for estimating demographic parameters but often result in low precision when recapture rates are low. Low recapture rates are typical in many study systems including fishing-based studies. Incorporating auxiliary data into the models can improve precision and in some cases enable parameter estimation. Here, we present a novel application of acoustic telemetry for the estimation of apparent survival and abundance within capture-mark-recapture analysis using open population models. Our case study is based on simultaneously collecting longline fishing and acoustic telemetry data for a large mobile apex predator, the broadnose sevengill shark (Notorhynchus cepedianus), at a coastal site in Tasmania, Australia. Cormack-Jolly-Seber models showed that longline data alone had very low recapture rates while acoustic telemetry data for the same time period resulted in at least tenfold higher recapture rates. The apparent survival estimates were similar for the two datasets but the acoustic telemetry data showed much greater precision and enabled apparent survival parameter estimation for one dataset, which was inestimable using fishing data alone. Combined acoustic telemetry and longline data were incorporated into Jolly-Seber models using a Monte Carlo simulation approach. Abundance estimates were comparable to those with longline data only; however, the inclusion of acoustic telemetry data increased precision in the estimates. We conclude that acoustic telemetry is a useful tool for incorporating in capture-mark-recapture studies in the marine environment. Future studies should consider the application of acoustic telemetry within this framework when setting up the study design and sampling program.

  17. High performance hybrid functional Petri net simulations of biological pathway models on CUDA.

    PubMed

    Chalkidis, Georgios; Nagasaki, Masao; Miyano, Satoru

    2011-01-01

    Hybrid functional Petri nets are a widespread tool for representing and simulating biological models. Given their potential to provide virtual drug-testing environments, biological simulations have a growing impact on pharmaceutical research. Continuing research advances in biology and medicine lead to exponentially increasing simulation times, raising the demand for performance acceleration through efficient and inexpensive parallel computation. Recent developments in general-purpose computation on graphics processing units (GPGPU) have enabled the scientific community to port a variety of compute-intensive algorithms onto the graphics processing unit (GPU). This work presents the first scheme for mapping biological hybrid functional Petri net models, which can handle both discrete and continuous entities, onto compute unified device architecture (CUDA)-enabled GPUs. GPU-accelerated simulations are observed to run up to 18 times faster than sequential implementations. Simulating cell-boundary formation by Delta-Notch signaling on a CUDA-enabled GPU yields a speedup of approximately 7x for a model containing 1,600 cells.
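
    The CUDA mapping itself is hardware-specific, but the hybrid semantics being accelerated can be sketched compactly. Below is a minimal, purely illustrative hybrid functional Petri net step in Python (the place names, rate function, and threshold are hypothetical, not taken from the paper): continuous places are advanced with a fixed Euler step while a discrete transition fires when its enabling condition is met:

        # Minimal hybrid functional Petri net: two continuous places (A, B)
        # and one discrete place ("signal") toggled by a threshold transition.
        marking = {"A": 10.0, "B": 0.0, "signal": 0}

        def rate_A_to_B(m):
            return 0.3 * m["A"]  # mass-action-like continuous firing speed

        dt, t_end, t = 0.01, 20.0, 0.0
        while t < t_end:
            # Continuous step: Euler integration of the A -> B flux.
            flux = rate_A_to_B(marking) * dt
            marking["A"] -= flux
            marking["B"] += flux

            # Discrete transition: fires once when B crosses a threshold.
            if marking["signal"] == 0 and marking["B"] > 5.0:
                marking["signal"] = 1
            t += dt

        print(marking)

    On a GPU, many such instances (e.g., the 1,600 cells of the Delta-Notch model) would advance their continuous steps in parallel, which is where the reported speedups originate.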

  18. Evaluation of decadal hindcasts using satellite simulators

    NASA Astrophysics Data System (ADS)

    Spangehl, Thomas; Mazurkiewicz, Alex; Schröder, Marc

    2013-04-01

    The evaluation of dynamical ensemble forecast systems requires a solid validation of basic processes such as the global atmospheric water and energy cycle. The value of any validation approach depends strongly on the quality of the observational data records used. Current approaches utilize in situ measurements, remote sensing data, and reanalyses, all of which are subject to uncertainties and limitations such as representativeness, spatial and temporal resolution, and homogeneity. Recently, however, several climate data records of known and sufficient quality have become available. In particular, satellite data records offer the opportunity to obtain reference information on global scales, including over the oceans. Here we consider the simulation of satellite radiances from climate model output, enabling an evaluation in the instrument's parameter space that avoids the uncertainties introduced by retrieval schemes and thereby minimises uncertainties on the reference side. Utilizing the CFMIP Observation Simulator Package (COSP), we develop satellite simulators for the Tropical Rainfall Measuring Mission precipitation radar (TRMM PR) and the Infrared Atmospheric Sounding Interferometer (IASI). The simulators are applied within the MiKlip project, funded by the BMBF (German Federal Ministry of Education and Research), to evaluate decadal climate predictions performed with the MPI-ESM developed at the Max Planck Institute for Meteorology. While TRMM PR enables evaluation of the vertical structure of precipitation over tropical and sub-tropical areas, IASI supports the global evaluation of clouds and radiation. As a first step, the reliability of the developed simulators must be established. Simulating radiances in instrument space requires generating sub-grid-scale variability from the climate model output, and further assumptions must be made, for example about the distribution of different hydrometeor types. Testing is therefore performed to determine the extent to which the quality of the simulator results depends on the methods used to generate sub-grid variability (e.g. sub-grid resolution), and the sensitivity of the results to different hydrometeor distributions is explored. The model evaluation is carried out statistically using histograms of radar reflectivities (TRMM PR) and brightness temperatures (IASI). Finally, methods to derive data suitable for probabilistic evaluation of decadal hindcasts, such as simple indices, are discussed.
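
    As a hedged illustration of the histogram-based evaluation described above (this is not COSP itself; the arrays below are synthetic stand-ins for simulated and observed brightness temperatures), a simple overlap score can quantify how well the simulated distribution matches the observed one:

        import numpy as np

        rng = np.random.default_rng(0)
        tb_obs = rng.normal(265.0, 12.0, 50_000)  # "observed" Tb (K), synthetic
        tb_sim = rng.normal(262.0, 14.0, 50_000)  # "simulated" Tb (K), synthetic

        bins = np.arange(200.0, 320.0, 2.0)  # 2-K bins over a plausible Tb range
        h_obs, _ = np.histogram(tb_obs, bins=bins, density=True)
        h_sim, _ = np.histogram(tb_sim, bins=bins, density=True)

        # Histogram intersection: 1.0 = identical distributions, 0.0 = disjoint.
        overlap = np.sum(np.minimum(h_obs, h_sim)) * 2.0  # x bin width
        print(f"Histogram overlap score: {overlap:.3f}")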

  19. Numerical investigation and Uncertainty Quantification of the Impact of the geological and geomechanical properties on the seismo-acoustic responses of underground chemical explosions

    NASA Astrophysics Data System (ADS)

    Ezzedine, S. M.; Pitarka, A.; Vorobiev, O.; Glenn, L.; Antoun, T.

    2017-12-01

    We have performed three-dimensional, high-resolution simulations of underground chemical explosions conducted recently in a jointed rock outcrop as part of the Source Physics Experiments (SPE) being conducted at the Nevada National Security Site (NNSS). The main goal of the current study is to investigate the effects of structural and geomechanical properties on the spall phenomena caused by underground chemical explosions and their subsequent effect on the seismo-acoustic signature at far distances. Two parametric studies were undertaken to assess the impact of 1) different conceptual geological models, including single-layer and two-layer models, with and without joints, and with and without varying geomechanical properties, and 2) different depths of burst and yields of the chemical explosions. Through these investigations we explored not only the near-field response of the chemical explosions but also the far-field seismic and acoustic signatures. The near-field simulations were conducted using the Eulerian and Lagrangian codes GEODYN and GEODYN-L, respectively, while the far-field seismic simulations used the elastic wave propagation code WPP and the acoustic response was computed using the Kirchhoff-Helmholtz-Rayleigh time-dependent approximation code KHR. Through a series of simulations, we recorded the velocity field histories 1) at the ground surface on an acoustic-source patch for the acoustic simulations, and 2) on a seismic-source box for the seismic simulations. We first analyzed the SPE3 experimental data and simulated results, then simulated SPE4-prime, SPE5, and SPE6 to anticipate their seismo-acoustic responses under uncertain conditions. The SPE experiments were conducted in a granitic formation; we extended the parametric study to include other geological settings such as dolomite and alluvial formations. These parametric studies enabled us 1) to investigate the geotechnical and geophysical parameters that most affect the seismo-acoustic responses of underground chemical explosions, and 2) to decipher and rank, through a global sensitivity analysis, the most important parameters to characterize on site in order to minimize uncertainties in prediction and discrimination.
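
    As a hedged sketch of the global sensitivity analysis step (the response surface and parameter names below are hypothetical placeholders, not GEODYN/WPP output), first-order Sobol indices can be estimated with the standard pick-freeze Monte Carlo estimator and used to rank parameters:

        import numpy as np

        rng = np.random.default_rng(1)

        def model(x):
            # Hypothetical scalar response (e.g., peak far-field amplitude)
            # of three normalized inputs; stands in for the full simulator.
            return 3.0 * x[:, 0] + x[:, 1] ** 2 + 0.5 * x[:, 0] * x[:, 2]

        n, d = 100_000, 3
        A = rng.uniform(size=(n, d))
        B = rng.uniform(size=(n, d))
        fA, fB = model(A), model(B)
        var_y = np.var(np.concatenate([fA, fB]))

        names = ["joint spacing", "layer stiffness", "depth of burst"]
        for i in range(d):
            ABi = A.copy()
            ABi[:, i] = B[:, i]  # "pick-freeze": replace column i of A by B's
            S1 = np.mean(fB * (model(ABi) - fA)) / var_y  # first-order index
            print(f"S1[{names[i]}] = {S1:.2f}")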

  20. Towards an understanding of the attributes of simulation that enable learning in undergraduate nurse education: A grounded theory study.

    PubMed

    Bland, Andrew J; Tobbell, Jane

    2016-09-01

    Simulation has become an established feature of nurse education, yet little is understood about the mechanisms that lead to learning. The aim was to explore the attributes of simulation-based education that enable student learning in undergraduate nurse education. Final-year students drawn from one UK university (n=46) participated in a grounded theory study. First, non-participant observation and video recording of student activity were undertaken. Following initial analysis, recordings and observations were deconstructed during focus group interviews, enabling both the researcher and participants to unpack meaning. Lastly, emergent findings were verified with final-year students drawn from a second UK university (n=6). A staged approach to learning emerged from engagement in simulation, beginning with initial hesitation as students moved through non-linear stages to making connections and thinking like a nurse. Core findings suggest that simulation enables curiosity and intellect (main concern) through doing (core category) and interaction with others, identified as social collaboration (category). This study offers a theoretical basis for understanding simulation-based education and for integrating strategies that maximise the potential for learning. Additionally, it offers direction for further research, particularly with regard to how the application of theory to practice is accelerated through learning by doing and working collaboratively. Copyright © 2016 Elsevier Ltd. All rights reserved.
