Event-driven simulation in SELMON: An overview of EDSE
NASA Technical Reports Server (NTRS)
Rouquette, Nicolas F.; Chien, Steve A.; Charest, Leonard, Jr.
1992-01-01
EDSE (event-driven simulation engine), a model-based event-driven simulator implemented for SELMON, a tool for sensor selection and anomaly detection in real-time monitoring, is described. The simulator is used in conjunction with a causal model to predict future behavior of the model from observed data. The behavior of the causal model is interpreted as equivalent to the behavior of the physical system being modeled. An overview of the functionality of the simulator and the model-based event-driven simulation paradigm on which it is based is provided. Included are high-level descriptions of the following key properties: event consumption and event creation, iterative simulation, and synchronization and filtering of monitoring data from the physical system. Finally, how EDSE stands with respect to the relevant open issues of discrete-event and model-based simulation is discussed.
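The event-consumption/event-creation cycle described in this abstract is the core of any event-driven simulator. A minimal sketch of such a loop in Python, with illustrative event names that are not EDSE's actual interface:

```python
import heapq, itertools

def simulate(initial_events, handlers, t_end):
    """Minimal event-driven loop: pop events in time order (event consumption);
    each handler may schedule new future events (event creation)."""
    counter = itertools.count()                     # tie-breaker for the heap
    queue = [(t, next(counter), kind, payload)
             for t, kind, payload in initial_events]
    heapq.heapify(queue)
    while queue:
        t, _, kind, payload = heapq.heappop(queue)  # consume earliest event
        if t > t_end:
            break
        for nt, nkind, npayload in handlers[kind](t, payload):
            heapq.heappush(queue, (nt, next(counter), nkind, npayload))

# usage: a valve-opening event schedules a pressure-change event 2 s later
handlers = {
    "valve_open":      lambda t, p: [(t + 2.0, "pressure_change", p)],
    "pressure_change": lambda t, p: [],
}
simulate([(0.0, "valve_open", {"line": "A"})], handlers, t_end=10.0)
```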
Zhang, Hang; Xu, Qingyan
2017-10-27
Grain selection is an important process in single-crystal turbine blade manufacturing. The selector structure, along with directional solidification (DS), is a controlling factor in grain selection. In this study, the grain selection and structure design of the spiral selector were investigated through experimentation and simulation. A heat transfer model and a 3D microstructure growth model were established based on the cellular automaton-finite difference (CA-FD) method for the grain selector. Consequently, the temperature field, the microstructure and the grain orientation distribution were simulated and further verified. The average error of the temperature result was less than 1.5%. The grain selection mechanisms were further analyzed and validated through simulations. The structural design specifications of the selector were suggested based on the two grain selection effects. The structural parameters of the spiral selector, namely, the spiral tunnel diameter (dw), the spiral pitch (hb) and the spiral diameter (hs), were studied and the design criteria of these parameters were proposed. The experimental and simulation results demonstrated that the improved selector could accurately and efficiently produce a single crystal structure.
Ru, Sushan; Hardner, Craig; Carter, Patrick A; Evans, Kate; Main, Dorrie; Peace, Cameron
2016-01-01
Seedling selection identifies superior seedlings as candidate cultivars based on predicted genetic potential for traits of interest. Traditionally, genetic potential is determined by phenotypic evaluation. With the availability of DNA tests for some agronomically important traits, breeders have the opportunity to include DNA information in their seedling selection operations, known as marker-assisted seedling selection. A major challenge in deploying marker-assisted seedling selection in clonally propagated crops is a lack of knowledge of the genetic gain achievable from alternative strategies. Existing models based on additive effects in seed-propagated crops are not directly relevant for seedling selection of clonally propagated crops, as clonal propagation captures all genetic effects, not just additive. This study modeled genetic gain from traditional and various marker-based seedling selection strategies on a single-trait basis through analytical derivation and stochastic simulation, based on a generalized seedling selection scheme for clonally propagated crops. Various trait-test scenarios with a range of broad-sense heritability and proportion of genotypic variance explained by DNA markers were simulated for two populations with different segregation patterns. Both derived and simulated results indicated that marker-based strategies tended to achieve higher genetic gain than phenotypic seedling selection for a trait where the proportion of genotypic variance explained by marker information was greater than the broad-sense heritability. Results from this study provide guidance in optimizing genetic gain from seedling selection for single traits where DNA tests providing marker information are available. PMID:27148453
NASA Astrophysics Data System (ADS)
Li, Zhanjie; Yu, Jingshan; Xu, Xinyi; Sun, Wenchao; Pang, Bo; Yue, Jiajia
2018-06-01
Hydrological models are important and effective tools for describing complex hydrological processes. Different models have different strengths in capturing the various aspects of these processes, and relying on a single model usually leads to simulation uncertainties. Ensemble approaches based on multi-model hydrological simulations can improve performance over single models. In this study, the upper Yalongjiang River Basin was selected for a case study. Three commonly used hydrological models (SWAT, VIC, and BTOPMC) were selected and run independently with the same inputs and initial values. The backpropagation (BP) neural network method was then employed to combine the results from the three models. The results show that the accuracy of the BP ensemble simulation is better than that of the single models.
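The combination step described here amounts to training a small network on the outputs of the individual models against observations. A minimal sketch with scikit-learn, using synthetic discharge series in place of the actual SWAT/VIC/BTOPMC outputs:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# q_swat, q_vic, q_btopmc: simulated discharge from the three models;
# q_obs: observed discharge (all synthetic here, for illustration only)
rng = np.random.default_rng(0)
q_obs = rng.gamma(2.0, 50.0, size=1000)
q_swat, q_vic, q_btopmc = (q_obs * f + rng.normal(0, 10, 1000)
                           for f in (0.9, 1.1, 1.0))

X = np.column_stack([q_swat, q_vic, q_btopmc])
ensemble = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
ensemble.fit(X[:700], q_obs[:700])       # train on a calibration period
q_ens = ensemble.predict(X[700:])        # combined simulation for validation
```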
Application of simulation models for the optimization of business processes
NASA Astrophysics Data System (ADS)
Jašek, Roman; Sedláček, Michal; Chramcov, Bronislav; Dvořák, Jiří
2016-06-01
The paper deals with the application of modeling and simulation tools to the optimization of business processes, especially the optimization of signal flow in a security company. Simul8, a software package for process modeling based on discrete-event simulation that enables the creation of visual models of production and distribution processes, was selected as the modeling tool.
Simulation of selected genealogies.
Slade, P F
2000-02-01
Algorithms for generating genealogies with selection, conditional on the sample configuration of n genes in one-locus, two-allele haploid and diploid models, are presented. Enhanced integro-recursions using the ancestral selection graph, introduced by S. M. Krone and C. Neuhauser (1997, Theor. Popul. Biol. 51, 210-237), which is the non-neutral analogue of the coalescent, enable accessible simulation of the embedded genealogy. A Monte Carlo simulation scheme based on that of R. C. Griffiths and S. Tavaré (1996, Math. Comput. Modelling 23, 141-158) is adopted to consider the estimation of ancestral times under selection. Simulations show that selection alters the expected depth of the conditional ancestral trees, depending on a mutation-selection balance. As a consequence, branch lengths are shown to be an ineffective criterion for detecting the presence of selection. Several examples are given which quantify the effects of selection on the conditional expected time to the most recent common ancestor.
Intelligent Exit-Selection Behaviors during a Room Evacuation
NASA Astrophysics Data System (ADS)
Zarita, Zainuddin; Lim Eng, Aik
2012-01-01
A modified version of the existing cellular automata (CA) model is proposed to simulate an evacuation procedure in a classroom with and without obstacles. A review of the numerous studies implementing CA to model evacuation motion shows that most published models do not take into account the pedestrians' ability to select an exit route. To address this issue, we develop a CA model incorporating a probabilistic neural network to capture the decision-making ability of the pedestrians, and simulate the exit-selection phenomenon. Intelligent exit-selection behavior is observed in our model: occupants tend to select the exit closest to them when the density is low, but if the density is high they go to an alternative exit so as to avoid a long wait. This reflects the fact that occupants may not fully utilize multiple exits during evacuation. The improvement in our proposed model is valuable for further study and for upgrading the safety aspects of building designs.
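The density-dependent exit choice reported here can be captured by scoring each exit on distance plus a congestion penalty. A minimal sketch, with an illustrative penalty weight rather than the cited model's calibrated parameters:

```python
import numpy as np

def choose_exit(pos, exits, density, beta=1.5):
    """Score each exit by distance plus a crowd-density penalty and pick the
    best one; `beta` trades walking distance against expected waiting."""
    pos = np.asarray(pos, float)
    scores = [np.linalg.norm(pos - np.asarray(e, float)) + beta * density[i]
              for i, e in enumerate(exits)]
    return int(np.argmin(scores))

# occupant at (3, 4); exit 0 is near but crowded, exit 1 is farther but clear
print(choose_exit((3, 4), exits=[(0, 0), (10, 10)], density=[8.0, 0.5]))
```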
Simulating natural selection in landscape genetics
E. L. Landguth; S. A. Cushman; N. Johnson
2012-01-01
Linking landscape effects to key evolutionary processes through individual organism movement and natural selection is essential to provide a foundation for evolutionary landscape genetics. Of particular importance is determining how spatially-explicit, individual-based models differ from classic population genetics and evolutionary ecology models based on ideal...
Williamson, Scott; Fledel-Alon, Adi; Bustamante, Carlos D
2004-09-01
We develop a Poisson random-field model of polymorphism and divergence that allows arbitrary dominance relations in a diploid context. This model provides a maximum-likelihood framework for estimating both selection and dominance parameters of new mutations using information on the frequency spectrum of sequence polymorphisms. This is the first DNA sequence-based estimator of the dominance parameter. Our model also leads to a likelihood-ratio test for distinguishing nongenic from genic selection; simulations indicate that this test is quite powerful when a large number of segregating sites are available. We also use simulations to explore the bias in selection parameter estimates caused by unacknowledged dominance relations. When inference is based on the frequency spectrum of polymorphisms, genic selection estimates of the selection parameter can be very strongly biased even for minor deviations from the genic selection model. Surprisingly, however, when inference is based on polymorphism and divergence (McDonald-Kreitman) data, genic selection estimates of the selection parameter are nearly unbiased, even for completely dominant or recessive mutations. Further, we find that weak overdominant selection can increase, rather than decrease, the substitution rate relative to levels of polymorphism. This nonintuitive result has major implications for the interpretation of several popular tests of neutrality.
Heinrichs, Julie; Aldridge, Cameron L.; O'Donnell, Michael; Schumaker, Nathan
2017-01-01
Prioritizing habitats for conservation is a challenging task, particularly for species with fluctuating populations and seasonally dynamic habitat needs. Although the use of resource selection models to identify and prioritize habitat for conservation is increasingly common, their ability to characterize important long-term habitats for dynamic populations is variable. To examine how habitats might be prioritized differently if resource selection were directly and dynamically linked with population fluctuations and movement limitations among seasonal habitats, we constructed a spatially explicit individual-based model for a dramatically fluctuating population requiring temporally varying resources. Using greater sage-grouse (Centrocercus urophasianus) in Wyoming as a case study, we used resource selection function (RSF) maps to guide seasonal movement and habitat selection, but emergent population dynamics and simulated movement limitations modified long-term habitat occupancy. We compared priority habitats in RSF maps to long-term simulated habitat use. We examined the circumstances under which the explicit consideration of movement limitations, in combination with population fluctuations and trends, is likely to alter predictions of important habitats. In doing so, we assessed the future occupancy of protected areas under alternative population and habitat conditions. Habitat prioritizations based on resource selection models alone predicted high use in isolated parcels of habitat and in areas with low connectivity among seasonal habitats. In contrast, results based on more biologically informed simulations emphasized central and connected areas near high-density populations, sometimes predicted to have low selection value. Dynamic models of habitat use can provide additional biological realism that can extend, and in some cases contradict, habitat use predictions generated from short-term or static resource selection analyses. The explicit inclusion of population dynamics and movement propensities via spatial simulation modeling frameworks may provide an informative means of predicting long-term habitat use, particularly for fluctuating populations with complex seasonal habitat needs. Importantly, our results indicate the possible need to consider habitat selection models as a starting point rather than the common end point for refining and prioritizing habitats for protection for cyclic and highly variable populations.
ERIC Educational Resources Information Center
Reardon, Sean F.; Baker, Rachel; Kasman, Matt; Klasik, Daniel; Townsend, Joseph
2017-01-01
This paper simulates a system of socioeconomic status (SES)-based affirmative action in college admissions and examines the extent to which it can produce racial diversity in selective colleges. Using simulation models, we investigate the potential relative effects of race- and/or SES-based affirmative action policies, alongside targeted,…
Selecting climate simulations for impact studies based on multivariate patterns of climate change.
Mendlik, Thomas; Gobiet, Andreas
In climate change impact research it is crucial to carefully select the meteorological input for impact models. We present a method for model selection that enables the user to shrink the ensemble to a few representative members, conserving the model spread and accounting for model similarity. This is done in three steps. First, principal component analysis is applied to a multitude of meteorological parameters to find common patterns of climate change within the multi-model ensemble. Second, model similarities with regard to these multivariate patterns are detected using cluster analysis. Third, models are sampled from each cluster to generate a subset of representative simulations. We present an application based on the ENSEMBLES regional multi-model ensemble with the aim of providing input for a variety of climate impact studies. We find that the two most dominant patterns of climate change relate to temperature and humidity patterns. The ensemble can be reduced from 25 to 5 simulations while still maintaining its essential characteristics. Having such a representative subset of simulations reduces computational costs for climate impact modeling and enhances the quality of the ensemble at the same time, as it prevents double-counting of dependent simulations that would lead to biased statistics.
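The three steps (common patterns via PCA, similarity via clustering, sampling one member per cluster) can be sketched with scikit-learn; the synthetic matrix below stands in for the real multivariate climate change signals:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# X: one row per climate simulation, columns = multivariate change signals
# (e.g., seasonal temperature/precipitation deltas); synthetic here
rng = np.random.default_rng(1)
X = rng.normal(size=(25, 12))

scores = PCA(n_components=2).fit_transform(X)                      # step 1
km = KMeans(n_clusters=5, n_init=10, random_state=1).fit(scores)  # step 2

# step 3: from each cluster, keep the member closest to its centroid
subset = [int(np.argmin(np.where(
              km.labels_ == k,
              np.linalg.norm(scores - km.cluster_centers_[k], axis=1),
              np.inf)))
          for k in range(5)]
print("representative simulations:", subset)
```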
NASA Astrophysics Data System (ADS)
Ranatunga, T.
2017-12-01
Modeling of fate and transport of fecal bacteria in a watershed is a process-based approach that considers releases from manure, point sources, and septic systems. Overland transport with water and sediments, infiltration into soils, transport in the vadose zone and groundwater, die-off and growth processes, and in-stream transport are the other major processes considered in bacteria simulation. This presentation will discuss a simulation of fecal indicator bacteria source loading and in-stream conditions of a non-tidal watershed (Cedar Bayou Watershed) in South Central Texas using two models: the Spatially Explicit Load Enrichment Calculation Tool (SELECT) and the Soil and Water Assessment Tool (SWAT). Furthermore, it will discuss a probable approach to bacteria source load reduction in order to meet the water quality standards in the streams. The selected watershed is listed by the Texas Commission on Environmental Quality (TCEQ) as having levels of fecal indicator bacteria that pose a risk for contact recreation and wading. The SELECT modeling approach was used to estimate the bacteria source loading from land categories. The major bacteria sources considered were failing septic systems, discharges from wastewater treatment facilities, excreta from livestock (cattle, horses, sheep, and goats), excreta from wildlife (feral hogs and deer), pet waste (mainly from dogs), and runoff from urban surfaces. The estimated source loads from the SELECT model were input to the SWAT model to simulate bacteria transport over land and in-stream. The calibrated SWAT model was then used to estimate in-stream indicator bacteria concentrations for future years based on regional land use, population, and household forecasts (up to 2040). Based on the reductions required to meet the in-stream water quality standards, the corresponding required source load reductions were estimated.
Howard Evan Canfield; Vicente L. Lopes
2000-01-01
A process-based simulation model for evaporation, soil water and streamflow (BROOK903) was used to estimate soil moisture change on a semiarid rangeland watershed in southeastern Arizona. A sensitivity analysis was performed to select parameters affecting ET and soil moisture for calibration. Automatic parameter calibration was performed using a procedure based on a...
NASA Astrophysics Data System (ADS)
Keshtpoor, M.; Carnacina, I.; Yablonsky, R. M.
2016-12-01
Extratropical cyclones (ETCs) are the primary driver of storm surge events along the UK and northwest mainland Europe coastlines. In an effort to evaluate the storm surge risk in coastal communities in this region, a stochastic catalog is developed by perturbing the historical storm seeds of European ETCs to account for 10,000 years of possible ETCs. Numerical simulation of the storm surge generated by the full 10,000-year stochastic catalog, however, is computationally expensive and may take several months to complete with available computational resources. A new statistical regression model is developed to select the major surge-generating events from the stochastic ETC catalog. This regression model is based on the maximum storm surge, obtained via numerical simulations using a calibrated version of the Delft3D-FM hydrodynamic model with a relatively coarse mesh, of 1750 historical ETC events that occurred over the past 38 years in Europe. These numerically simulated surge values were regressed against the local sea level pressure and the U and V components of the wind field at the locations of 196 tide gauge stations near the UK and northwest mainland Europe coastal areas. The regression model suggests that storm surge values in the area of interest are highly correlated with the U and V components of wind speed, as well as the sea level pressure. Based on these correlations, the regression model was then used to select surge-generating storms from the 10,000-year stochastic catalog. Results suggest that roughly 105,000 events out of 480,000 stochastic storms are surge-generating and need to be considered for numerical simulation using a hydrodynamic model. The selected stochastic storms were then simulated in Delft3D-FM, and the final refinement of the storm population was performed based on return period analysis of the 1750 historical event simulations at each of the 196 tide gauges, in preparation for Delft3D-FM fine mesh simulations.
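The screening idea, fitting a regression of simulated peak surge on local wind components and sea level pressure and then keeping only storms predicted to exceed a threshold, can be sketched as follows; all data and the threshold are synthetic stand-ins:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Illustrative stand-in for the screening regression: predict peak surge at a
# gauge from sea-level pressure and wind components (all values synthetic).
rng = np.random.default_rng(4)
n = 1750
u, v, slp = rng.normal(0, 10, n), rng.normal(0, 10, n), rng.normal(1000, 15, n)
surge = 0.04 * u + 0.06 * v - 0.01 * (slp - 1013) + rng.normal(0, 0.2, n)

X = np.column_stack([u, v, slp])
reg = LinearRegression().fit(X, surge)          # trained on historical runs

X_stoch = np.column_stack([rng.normal(0, 10, 480_000),
                           rng.normal(0, 10, 480_000),
                           rng.normal(1000, 15, 480_000)])
selected = reg.predict(X_stoch) > 0.5           # events sent to Delft3D-FM
print(selected.sum(), "of 480000 stochastic storms retained")
```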
Schnitzer, Mireille E.; Lok, Judith J.; Gruber, Susan
2015-01-01
This paper investigates the appropriateness of the integration of flexible propensity score modeling (nonparametric or machine learning approaches) in semiparametric models for the estimation of a causal quantity, such as the mean outcome under treatment. We begin with an overview of some of the issues involved in knowledge-based and statistical variable selection in causal inference and the potential pitfalls of automated selection based on the fit of the propensity score. Using a simple example, we directly show the consequences of adjusting for pure causes of the exposure when using inverse probability of treatment weighting (IPTW). Such variables are likely to be selected when using a naive approach to model selection for the propensity score. We describe how the method of Collaborative Targeted minimum loss-based estimation (C-TMLE; van der Laan and Gruber, 2010) capitalizes on the collaborative double robustness property of semiparametric efficient estimators to select covariates for the propensity score based on the error in the conditional outcome model. Finally, we compare several approaches to automated variable selection in low- and high-dimensional settings through a simulation study. From this simulation study, we conclude that using IPTW with flexible prediction for the propensity score can result in inferior estimation, while Targeted minimum loss-based estimation and C-TMLE may benefit from flexible prediction and remain robust to the presence of variables that are highly correlated with treatment. However, in our study, standard influence function-based methods for the variance underestimated the standard errors, resulting in poor coverage under certain data-generating scenarios. PMID:26226129
Vector control of wind turbine on the basis of the fuzzy selective neural net
NASA Astrophysics Data System (ADS)
Engel, E. A.; Kovalev, I. V.; Engel, N. E.
2016-04-01
This article describes vector control of a wind turbine based on a fuzzy selective neural net. Based on the wind turbine system's state, the fuzzy selective neural net tracks the maximum power point under random perturbations. Numerical simulations are carried out to clarify the applicability and advantages of the proposed vector control of the wind turbine based on the fuzzy selective neural net. The simulation results show that the proposed intelligent control of the wind turbine achieves real-time control speed and competitive performance compared with a classical control model using PID controllers based on a traditional maximum-torque control strategy.
A Simulation Study of Methods for Selecting Subgroup-Specific Doses in Phase I Trials
Morita, Satoshi; Thall, Peter F.; Takeda, Kentaro
2016-01-01
Patient heterogeneity may complicate dose-finding in phase I clinical trials if the dose-toxicity curves differ between subgroups. Conducting separate trials within subgroups may lead to infeasibly small sample sizes in subgroups having low prevalence. Alternatively, it is not obvious how to conduct a single trial while accounting for heterogeneity. To address this problem, we consider a generalization of the continual reassessment method (O'Quigley et al., 1990) based on a hierarchical Bayesian dose-toxicity model that borrows strength between subgroups under the assumption that the subgroups are exchangeable. We evaluate a design using this model that includes subgroup-specific dose selection and safety rules. A simulation study is presented that includes comparison of this method to three alternative approaches, based on non-hierarchical models, that make different types of assumptions about within-subgroup dose-toxicity curves. The simulations show that the hierarchical model-based method is recommended in settings where the dose-toxicity curves are exchangeable between subgroups. We present practical guidelines for application, and provide computer programs for trial simulation and conduct. PMID:28111916
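For readers unfamiliar with the continual reassessment method that the hierarchical design generalizes, a minimal single-group CRM sketch (one-parameter power model, grid posterior) is shown below; the skeleton, prior, and target are illustrative, not the paper's subgroup-specific model:

```python
import numpy as np

# One-parameter power-model CRM: p_d(theta) = skeleton_d ** exp(theta),
# with the posterior over theta evaluated on a grid.
skeleton = np.array([0.05, 0.10, 0.20, 0.35])   # prior toxicity guesses
theta = np.linspace(-3, 3, 601)
prior = np.exp(-theta**2 / (2 * 1.34**2))        # N(0, 1.34^2), unnormalized

def next_dose(doses_given, tox, target=0.25):
    """Update the posterior with (dose, toxicity) data and return the dose
    whose posterior-mean toxicity is closest to the target."""
    lik = np.ones_like(theta)
    for d, y in zip(doses_given, tox):
        p = skeleton[d] ** np.exp(theta)
        lik *= p**y * (1 - p)**(1 - y)
    post = lik * prior
    post /= post.sum()
    p_hat = np.array([(skeleton[d] ** np.exp(theta) * post).sum()
                      for d in range(len(skeleton))])
    return int(np.argmin(np.abs(p_hat - target)))

print(next_dose(doses_given=[0, 1, 1, 2], tox=[0, 0, 1, 1]))
```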
Xu, Dong; Zhang, Jian; Roy, Ambrish; Zhang, Yang
2011-01-01
I-TASSER is an automated pipeline for protein tertiary structure prediction using multiple threading alignments and iterative structure assembly simulations. In the CASP9 experiments, two new algorithms, QUARK and FG-MD, were added to the I-TASSER pipeline to improve structural modeling accuracy. QUARK is a de novo structure prediction algorithm used for modeling proteins that lack detectable template structures. For distantly homologous targets, QUARK models are found useful as reference structures for selecting good threading alignments and guiding the I-TASSER structure assembly simulations. FG-MD is an atomic-level structural refinement program that uses structural fragments collected from PDB structures to guide molecular dynamics simulation and improve the local structure of the predicted models, including hydrogen-bonding networks, torsion angles and steric clashes. Despite considerable progress in both template-based and template-free structure modeling, significant improvements in protein target classification, domain parsing, model selection, and ab initio folding of beta-proteins are still needed to further improve the I-TASSER pipeline. PMID:22069036
NASA Astrophysics Data System (ADS)
Ranatunga, T.
2016-12-01
Modeling of fate and transport of fecal bacteria in a watershed is generally a process-based approach that considers releases from manure, point sources, and septic systems. Overland transport with water and sediments, infiltration into soils, transport in the vadose zone and groundwater, die-off and growth processes, and in-stream transport are the other major processes considered in bacteria simulation. This presentation will discuss a simulation of fecal indicator bacteria (E. coli) source loading and in-stream conditions of a non-tidal watershed (Cedar Bayou Watershed) in South Central Texas using two models: the Spatially Explicit Load Enrichment Calculation Tool (SELECT) and the Soil and Water Assessment Tool (SWAT). Furthermore, it will discuss a probable approach to bacteria source load reduction in order to meet the water quality standards in the streams. The selected watershed is listed by the Texas Commission on Environmental Quality (TCEQ) as having levels of fecal indicator bacteria that pose a risk for contact recreation and wading. The SELECT modeling approach was used to estimate the bacteria source loading from land categories. The major bacteria sources considered were failing septic systems, discharges from wastewater treatment facilities, excreta from livestock (cattle, horses, sheep, and goats), excreta from wildlife (feral hogs and deer), pet waste (mainly from dogs), and runoff from urban surfaces. The estimated source loads were input to the SWAT model in order to simulate transport over land and in-stream conditions. The calibrated SWAT model was then used to estimate in-stream indicator bacteria concentrations for future years based on H-GAC's regional land use, population, and household projections (up to 2040). Based on the in-stream reductions required to meet the water quality standards, the corresponding required source load reductions were estimated.
NASA Astrophysics Data System (ADS)
Song, Yunquan; Lin, Lu; Jian, Ling
2016-07-01
The single-index varying-coefficient model is an important mathematical tool for modeling nonlinear phenomena in science and engineering. In this paper, we develop a variable selection method for high-dimensional single-index varying-coefficient models using a shrinkage idea. The proposed procedure can simultaneously select significant nonparametric components and parametric components. Under defined regularity conditions, with appropriate selection of tuning parameters, the consistency of the variable selection procedure and the oracle property of the estimators are established. Moreover, due to the robustness of the check loss function to outliers in finite samples, our proposed variable selection method is more robust than those based on the least squares criterion. Finally, the method is illustrated with numerical simulations.
Climatic Models Ensemble-based Mid-21st Century Runoff Projections: A Bayesian Framework
NASA Astrophysics Data System (ADS)
Achieng, K. O.; Zhu, J.
2017-12-01
A number of North American Regional Climate Change Assessment Program (NARCCAP) climate models have been used to project surface runoff in the mid-21st century. Statistical model selection techniques are often used to select the model that best fits the data, but different selection techniques often lead to different conclusions. In this study, ten models are averaged in a Bayesian paradigm to project runoff. Bayesian Model Averaging (BMA) is used to project runoff and to identify the effect of model uncertainty on future runoff projections. Baseflow separation, using a two-parameter recursive digital filter (the Eckhardt filter), is used to separate USGS streamflow (total runoff) into two components: baseflow and surface runoff. We use this surface runoff as the a priori runoff when conducting BMA of the runoff simulated by the ten RCM models. The primary objective of this study is to evaluate how well RCM multi-model ensembles simulate surface runoff in a Bayesian framework. Specifically, we investigate and discuss the following questions: How well does the ten-model RCM ensemble jointly simulate surface runoff when averaging over all models using BMA, given the a priori surface runoff? And what are the effects of model uncertainty on the surface runoff simulation?
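The Eckhardt filter used for the baseflow separation step has a standard two-parameter recursive form. A sketch, with commonly quoted default parameter values rather than values calibrated to the study basins:

```python
import numpy as np

def eckhardt_baseflow(q, alpha=0.98, bfi_max=0.80):
    """Eckhardt recursive digital filter for baseflow separation.
    q: total streamflow series; alpha: recession constant; bfi_max: maximum
    baseflow index. The initial condition below is a common convention."""
    b = np.empty_like(q, dtype=float)
    b[0] = q[0] * bfi_max
    for t in range(1, len(q)):
        b[t] = ((1 - bfi_max) * alpha * b[t - 1]
                + (1 - alpha) * bfi_max * q[t]) / (1 - alpha * bfi_max)
        b[t] = min(b[t], q[t])          # baseflow cannot exceed streamflow
    return b, q - b                      # baseflow, surface runoff

q = np.array([10., 12., 30., 55., 40., 25., 18., 14., 12., 11.])
baseflow, surface_runoff = eckhardt_baseflow(q)
```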
NASA Astrophysics Data System (ADS)
Yi, Jin; Li, Xinyu; Xiao, Mi; Xu, Junnan; Zhang, Lin
2017-01-01
Engineering design often involves different types of simulation, which can incur high computational costs. Variable-fidelity approximation-based design optimization approaches can realize effective simulation and efficient optimization of the design space using approximation models with different levels of fidelity, and have been widely used in different fields. As the foundation of variable-fidelity approximation models, the selection of sample points, called nested designs, is essential. In this article a novel nested maximin Latin hypercube design is constructed based on successive local enumeration and a modified novel global harmony search algorithm. In the proposed nested designs, successive local enumeration is employed to select sample points for the low-fidelity model, whereas the modified novel global harmony search algorithm is employed to select sample points for the high-fidelity model. A comparative study with multiple criteria and an engineering application are employed to verify the efficiency of the proposed nested designs approach.
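As a point of reference for what a (non-nested) maximin Latin hypercube is, the brute-force sketch below draws many random LHDs and keeps the one with the largest minimum pairwise distance; the naive subset nesting in the last line is only illustrative, since the paper's successive-local-enumeration and harmony-search constructions are more sophisticated:

```python
import numpy as np
from scipy.stats import qmc
from scipy.spatial.distance import pdist

def maximin_lhd(n, d, n_candidates=200, seed=0):
    """Among many random Latin hypercube designs, keep the one whose
    minimum pairwise point distance is largest (maximin criterion)."""
    best, best_score = None, -np.inf
    for i in range(n_candidates):
        x = qmc.LatinHypercube(d=d, seed=seed + i).random(n)
        score = pdist(x).min()
        if score > best_score:
            best, best_score = x, score
    return best

low_fi = maximin_lhd(40, 2)   # sample points for the low-fidelity model
hi_fi = low_fi[:10]           # naive nested subset for the high-fidelity model
```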
Simulation of unsteady flows by the DSMC macroscopic chemistry method
NASA Astrophysics Data System (ADS)
Goldsworthy, Mark; Macrossan, Michael; Abdel-jawad, Madhat
2009-03-01
In the direct simulation Monte Carlo (DSMC) method, a combination of statistical and deterministic procedures applied to a finite number of 'simulator' particles are used to model rarefied gas-kinetic processes. In the macroscopic chemistry method (MCM) for DSMC, chemical reactions are decoupled from the specific particle pairs selected for collisions. Information from all of the particles within a cell, not just those selected for collisions, is used to determine a reaction rate coefficient for that cell. Unlike collision-based methods, MCM can be used with any viscosity or non-reacting collision models and any non-reacting energy exchange models. It can be used to implement any reaction rate formulations, whether these be from experimental or theoretical studies. MCM has been previously validated for steady flow DSMC simulations. Here we show how MCM can be used to model chemical kinetics in DSMC simulations of unsteady flow. Results are compared with a collision-based chemistry procedure for two binary reactions in a 1-D unsteady shock-expansion tube simulation. Close agreement is demonstrated between the two methods for instantaneous, ensemble-averaged profiles of temperature, density and species mole fractions, as well as for the accumulated number of net reactions per cell.
A Permutation Approach for Selecting the Penalty Parameter in Penalized Model Selection
Sabourin, Jeremy A; Valdar, William; Nobel, Andrew B
2015-01-01
We describe a simple, computationally efficient, permutation-based procedure for selecting the penalty parameter in LASSO penalized regression. The procedure, permutation selection, is intended for applications where variable selection is the primary focus, and can be applied in a variety of structural settings, including that of generalized linear models. We briefly discuss connections between permutation selection and existing theory for the LASSO. In addition, we present a simulation study and an analysis of real biomedical data sets in which permutation selection is compared with selection based on the following: cross-validation (CV), the Bayesian information criterion (BIC), Scaled Sparse Linear Regression, and a selection method based on recently developed testing procedures for the LASSO. PMID:26243050
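The core of permutation selection can be sketched compactly: under each permutation of the response, the smallest LASSO penalty that zeroes every coefficient has a closed form for centered and scaled data, and an upper quantile of these values is taken as the penalty. The quantile and other details below are illustrative, not necessarily the paper's choices:

```python
import numpy as np

def permutation_lambda(X, y, n_perm=100, quantile=0.75, seed=0):
    """For each permutation of y, record the smallest LASSO penalty that
    zeroes every coefficient (lambda_max = max|X'y|/n for centered, scaled
    data), then return an upper quantile of these values."""
    rng = np.random.default_rng(seed)
    n = len(y)
    lam = [np.abs(X.T @ rng.permutation(y)).max() / n for _ in range(n_perm)]
    return np.quantile(lam, quantile)

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 50)); X -= X.mean(0); X /= X.std(0)
beta = np.zeros(50); beta[:3] = 1.0
y = X @ beta + rng.normal(size=200); y -= y.mean()
print(permutation_lambda(X, y))
# the returned value can be passed as `alpha` to sklearn.linear_model.Lasso
```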
Winter Simulation Conference, Miami Beach, Fla., December 4-6, 1978, Proceedings. Volumes 1 & 2
NASA Technical Reports Server (NTRS)
Highland, H. J. (Editor); Nielsen, N. R.; Hull, L. G.
1978-01-01
The papers report on the various aspects of simulation such as random variate generation, simulation optimization, ranking and selection of alternatives, model management, documentation, data bases, and instructional methods. Simulation studies in a wide variety of fields are described, including system design and scheduling, government and social systems, agriculture, computer systems, the military, transportation, corporate planning, ecosystems, health care, manufacturing and industrial systems, computer networks, education, energy, production planning and control, financial models, behavioral models, information systems, and inventory control.
Algorithms and architecture for multiprocessor based circuit simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deutsch, J.T.
Accurate electrical simulation is critical to the design of high performance integrated circuits. Logic simulators can verify function and give first-order timing information. Switch level simulators are more effective at dealing with charge sharing than standard logic simulators, but cannot provide accurate timing information or discover DC problems. Delay estimation techniques and cell level simulation can be used in constrained design methods, but must be tuned for each application, and circuit simulation must still be used to generate the cell models. None of these methods has the guaranteed accuracy that many circuit designers desire, and none can provide detailed waveform information. Detailed electrical-level simulation can predict circuit performance if devices and parasitics are modeled accurately. However, the computational requirements of conventional circuit simulators make it impractical to simulate current large circuits. In this dissertation, the implementation of Iterated Timing Analysis (ITA), a relaxation-based technique for accurate circuit simulation, on a special-purpose multiprocessor is presented. The ITA method is an SOR-Newton, relaxation-based method which uses event-driven analysis and selective trace to exploit the temporal sparsity of the electrical network. Because event-driven selective trace techniques are employed, this algorithm lends itself to implementation on a data-driven computer.
Development of mathematical models of environmental physiology
NASA Technical Reports Server (NTRS)
Stolwijk, J. A. J.; Mitchell, J. W.; Nadel, E. R.
1971-01-01
Selected articles concerned with mathematical or simulation models of human thermoregulation are presented. The articles presented include: (1) development and use of simulation models in medicine, (2) model of cardio-vascular adjustments during exercise, (3) effective temperature scale based on simple model of human physiological regulatory response, (4) behavioral approach to thermoregulatory set point during exercise, and (5) importance of skin temperature in sweat regulation.
Exploring Several Methods of Groundwater Model Selection
NASA Astrophysics Data System (ADS)
Samani, Saeideh; Ye, Ming; Asghari Moghaddam, Asghar
2017-04-01
Selecting reliable models for simulating groundwater flow and solute transport is essential to groundwater resources management and protection. This work explores several model selection methods for avoiding over-complex and/or over-parameterized groundwater models. We consider six groundwater flow models with different numbers (6, 10, 10, 13, 13 and 15) of model parameters. These models represent alternative geological interpretations, recharge estimates, and boundary conditions at a study site in Iran. The models were developed with ModelMuse, and calibrated against observations of hydraulic head using UCODE. Model selection was conducted using the following four approaches: (1) ranking the models by the root mean square error (RMSE) obtained after UCODE-based model calibration, (2) calculating model probability using the GLUE method, (3) evaluating model probability using model selection criteria (AIC, AICc, BIC, and KIC), and (4) evaluating model weights using the fuzzy multi-criteria decision-making (MCDM) approach. MCDM is based on the fuzzy analytical hierarchy process (AHP) and the fuzzy technique for order performance, which identify the ideal solution by a gradual expansion from the local to the global scale of model parameters. The KIC and MCDM methods are superior to the other methods, as they consider not only the fit between observed and simulated data and the number of parameters, but also uncertainty in model parameters. Considering these factors can prevent over-complexity and over-parameterization when selecting groundwater flow models. These methods selected as the best model the one with average complexity (10 parameters) and the best parameter estimates (model 3).
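The least-squares forms of the information criteria in approach (3) are standard and easy to compute from each model's calibration; a sketch with illustrative RMSE values (KIC is omitted because it additionally requires the Fisher information matrix):

```python
import numpy as np

def information_criteria(rmse, n_obs, k):
    """AIC/AICc/BIC for a least-squares calibration with n_obs observations
    and k parameters, in their standard sum-of-squares forms."""
    sse = n_obs * rmse**2
    aic = n_obs * np.log(sse / n_obs) + 2 * k
    aicc = aic + 2 * k * (k + 1) / (n_obs - k - 1)
    bic = n_obs * np.log(sse / n_obs) + k * np.log(n_obs)
    return aic, aicc, bic

# six alternative models: (rmse, number of parameters); values illustrative
models = [(1.9, 6), (1.6, 10), (1.7, 10), (1.5, 13), (1.55, 13), (1.45, 15)]
for i, (rmse, k) in enumerate(models, 1):
    print(f"model {i}:", information_criteria(rmse, n_obs=120, k=k))
```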
A decision tool for selecting trench cap designs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paige, G.B.; Stone, J.J.; Lane, L.J.
1995-12-31
A computer based prototype decision support system (PDSS) is being developed to assist the risk manager in selecting an appropriate trench cap design for waste disposal sites. The selection of the 'best' design among feasible alternatives requires consideration of multiple and often conflicting objectives. The methodology used in the selection process consists of: selecting and parameterizing decision variables using data, simulation models, or expert opinion; selecting feasible trench cap design alternatives; and ordering the decision variables and ranking the design alternatives. The decision model is based on multi-objective decision theory and uses a unique approach to order the decision variables and rank the design alternatives. Trench cap designs are evaluated based on federal regulations, hydrologic performance, cover stability and cost. Four trench cap designs, which were monitored for a four year period at Hill Air Force Base in Utah, are used to demonstrate the application of the PDSS and evaluate the results of the decision model. The results of the PDSS, using both data and simulations, illustrate the relative advantages of each of the cap designs and which cap is the 'best' alternative for a given set of criteria and a particular importance order of those decision criteria.
Numerical Simulation of Selecting Model Scale of Cable in Wind Tunnel Test
NASA Astrophysics Data System (ADS)
Huang, Yifeng; Yang, Jixin
The numerical simulation method based on computational fluid dynamics (CFD) provides a possible alternative to physical wind tunnel tests. First, the correctness of the numerical simulation method is validated on a benchmark example. Then, in order to select the minimum cable length for a given diameter in numerical wind tunnel tests, CFD-based numerical wind tunnel tests are carried out on cables with several different length-to-diameter ratios (L/D). The results show that when L/D reaches 18, the drag coefficient is essentially stable.
ERIC Educational Resources Information Center
Xiang, Lin
2011-01-01
This is a collective case study seeking to develop detailed descriptions of how programming an agent-based simulation influences a group of 8th grade students' model-based inquiry (MBI) by examining students' agent-based programmable modeling (ABPM) processes and the learning outcomes. The context of the present study was a biology unit on…
Panthee, Nirmal; Okada, Jun-ichi; Washio, Takumi; Mochizuki, Youhei; Suzuki, Ryohei; Koyama, Hidekazu; Ono, Minoru; Hisada, Toshiaki; Sugiura, Seiryo
2016-07-01
Despite extensive studies on clinical indices for the selection of patient candidates for cardiac resynchronization therapy (CRT), approximately 30% of selected patients do not respond to this therapy. Herein, we examined whether CRT simulations based on individualized realistic three-dimensional heart models can predict the therapeutic effect of CRT in a canine model of heart failure with left bundle branch block. In four canine models of failing heart with dyssynchrony, individualized three-dimensional heart models reproducing the electromechanical activity of each animal were created based on computed tomographic images. CRT simulations were performed for 25 patterns of three ventricular pacing lead positions. Lead positions producing the best and the worst therapeutic effects were selected in each model. The validity of the predictions was tested in acute experiments in which hearts were paced from the sites identified by the simulations. We found significant correlations between the experimentally observed improvement in ejection fraction (EF) and the predicted improvement in EF (P<0.01) or the predicted maximum value of the derivative of left ventricular pressure (P<0.01). The optimal lead positions produced better outcomes than the worst positions in all dogs studied, although there were significant variations in responses. Variations in ventricular wall thickness among the dogs may have contributed to these responses. Thus, CRT simulations using individualized three-dimensional heart models can predict acute hemodynamic improvement and help determine the optimal positions of the pacing lead.
Stochastic model search with binary outcomes for genome-wide association studies.
Russu, Alberto; Malovini, Alberto; Puca, Annibale A; Bellazzi, Riccardo
2012-06-01
The spread of case-control genome-wide association studies (GWASs) has stimulated the development of new variable selection methods and predictive models. We introduce a novel Bayesian model search algorithm, Binary Outcome Stochastic Search (BOSS), which addresses the model selection problem when the number of predictors far exceeds the number of binary responses. Our method is based on a latent variable model that links the observed outcomes to the underlying genetic variables. A Markov Chain Monte Carlo approach is used for model search and to evaluate the posterior probability of each predictor. BOSS is compared with three established methods (stepwise regression, logistic lasso, and elastic net) in a simulated benchmark. Two real case studies are also investigated: a GWAS on the genetic bases of longevity, and the type 2 diabetes study from the Wellcome Trust Case Control Consortium. Simulations show that BOSS achieves higher precisions than the reference methods while preserving good recall rates. In both experimental studies, BOSS successfully detects genetic polymorphisms previously reported to be associated with the analyzed phenotypes. BOSS outperforms the other methods in terms of F-measure on simulated data. In the two real studies, BOSS successfully detects biologically relevant features, some of which are missed by univariate analysis and the three reference techniques. The proposed algorithm is an advance in the methodology for model selection with a large number of features. Our simulated and experimental results showed that BOSS proves effective in detecting relevant markers while providing a parsimonious model.
Working covariance model selection for generalized estimating equations.
Carey, Vincent J; Wang, You-Gan
2011-11-20
We investigate methods for data-based selection of working covariance models in the analysis of correlated data with generalized estimating equations. We study two selection criteria: Gaussian pseudolikelihood and a geodesic distance based on discrepancy between model-sensitive and model-robust regression parameter covariance estimators. The Gaussian pseudolikelihood is found in simulation to be reasonably sensitive for several response distributions and noncanonical mean-variance relations for longitudinal data. Application is also made to a clinical dataset. Assessment of adequacy of both correlation and variance models for longitudinal data should be routine in applications, and we describe open-source software supporting this practice.
On validating remote sensing simulations using coincident real data
NASA Astrophysics Data System (ADS)
Wang, Mingming; Yao, Wei; Brown, Scott; Goodenough, Adam; van Aardt, Jan
2016-05-01
The remote sensing community often requires data simulation, either via spectral/spatial downsampling or through virtual, physics-based models, to assess systems and algorithms. The Digital Imaging and Remote Sensing Image Generation (DIRSIG) model is one such first-principles, physics-based model for simulating imagery across a range of modalities. Complex simulation of vegetation environments has subsequently become possible as scene rendering technology and software have advanced. This in turn has raised questions about the validity of such complex models, since phenomena such as multiple scattering and the bidirectional reflectance distribution function (BRDF) could affect results for complex vegetation scenes. We selected three sites located in the Pacific Southwest domain (Fresno, CA) of the National Ecological Observatory Network (NEON). These sites represent oak savanna, hardwood forests, and conifer-manzanita-mixed forests. We constructed corresponding virtual scenes, using airborne LiDAR and imaging spectroscopy data from NEON, ground-based LiDAR data, and field-collected spectra to characterize the scenes. Imaging spectroscopy data for these virtual sites were then generated using the DIRSIG simulation environment. This simulated imagery was compared to real AVIRIS imagery (15 m spatial resolution; 12 pixels/scene) and NEON Airborne Observation Platform (AOP) data (1 m spatial resolution; 180 pixels/scene). These tests were performed using a distribution-comparison approach for selected spectral statistics (e.g., statistics establishing the spectra's shape) for each simulated-versus-real distribution pair. The initial comparison of the spectral distributions indicated that the spectral shapes of the virtual and real sites were closely matched.
Simulation of solid-liquid flows in a stirred bead mill based on computational fluid dynamics (CFD)
NASA Astrophysics Data System (ADS)
Winardi, S.; Widiyastuti, W.; Septiani, E. L.; Nurtono, T.
2018-05-01
The selection of a simulation model is an important step in computational fluid dynamics (CFD) for obtaining agreement with experimental work. In addition, computational time and processor speed also influence the performance of the simulation. Here, we report the simulation of solid-liquid flow in a bead mill using an Eulerian model. A Multiple Reference Frame (MRF) approach was used to model the interaction between the moving zone (shaft and disks) and the stationary zone (the chamber excluding the shaft and disks). The bead mill dimensions were based on the experimental work of Yamada and Sakai (2013). The effects of shaft rotation speeds of 1200 and 1800 rpm on the particle distribution and the flow field were discussed. At a rotation speed of 1200 rpm, the particles spread evenly throughout the bead mill chamber. At 1800 rpm, by contrast, the particles tended to be thrown toward the near-wall region, producing a dead zone with no particles in the central region. The selected model agreed well with the experimental data, with average discrepancies of less than 10%. Furthermore, the simulation ran without excessive computational cost.
Wang, Jian-Li; Yuan, Zi-Gang; Qian, Guo-Liang; Bao, Wu-Qiao; Jin, Guo-Liang
2018-06-01
The study aimed to develop simulation models, including intracranial aneurysm and parent vessel geometries as well as vascular branches, through 3D printing technology. The simulation models focused on the benefits for aneurysm treatment and clinical education. This prospective study included 13 consecutive patients with intracranial aneurysms confirmed by digital subtraction angiography (DSA) in the Neurosurgery Department of Shaoxing People's Hospital. The original 3D-DSA image data were extracted through the picture archiving and communication system and imported into Mimics. After reconstruction and conversion to binary STL format, simulation models of the hollow vascular tree were printed using 3D devices. The intracranial aneurysm 3D-printed simulation model was developed based on DSA to assist neurosurgeons in aneurysm treatment and residency training. Seven neurosurgical residents and 15 standardized training residents received simulation-model training and rated the educational course highly in a follow-up qualitative questionnaire. 3D-printed simulation models based on DSA can faithfully reveal target aneurysms and help neurosurgeons select therapeutic strategies precisely. As an educational tool, the 3D aneurysm vascular simulation model is useful for training residents.
NASA Astrophysics Data System (ADS)
Erkol, Şirag; Yücel, Gönenç
In this study, the problem of seed selection is investigated. This problem is mainly treated as an optimization problem, which has been proved to be NP-hard. There are several heuristic approaches in the literature, mostly algorithmic heuristics that focus on the trade-off between computational complexity and accuracy: although their accuracy is high, so is their computational complexity. Furthermore, the literature generally assumes that complete information on the structure and features of a network is available, which is often not the case. For this study, a simulation model is constructed that is capable of creating networks, performing seed selection heuristics, and simulating diffusion models. Novel metric-based seed selection heuristics that rely only on partial information are proposed and tested using the simulation model. These heuristics use local information available from nodes in the synthetically created networks. The performances of the heuristics are comparatively analyzed on three different network types. The results clearly show that the performance of a heuristic depends on the structure of a network: a heuristic should be selected only after investigating the properties of the network at hand. More importantly, the partial-information approach provided promising results. In certain cases, selection heuristics that rely only on partial network information perform very closely to similar heuristics that require complete network data.
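A partial-information heuristic of the kind described, probing a fraction of nodes and ranking them by a locally available metric such as degree, can be sketched as follows; the metric and sampling fraction are illustrative, not the study's exact heuristics:

```python
import random
import networkx as nx

def local_degree_seeds(g, k, sample_frac=0.3, seed=0):
    """Probe a random fraction of nodes, query only their local degree, and
    seed the top-k among the probed nodes (no global network data needed)."""
    rng = random.Random(seed)
    probed = rng.sample(list(g.nodes), int(sample_frac * g.number_of_nodes()))
    return sorted(probed, key=g.degree, reverse=True)[:k]

g = nx.barabasi_albert_graph(1000, 3, seed=1)   # scale-free test network
print(local_degree_seeds(g, k=5))
```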
Space construction base control system
NASA Technical Reports Server (NTRS)
1978-01-01
Aspects of an attitude control system were studied and developed for a large space base that is structurally flexible and whose mass properties change rather dramatically during its orbital lifetime. Topics of discussion include the following: (1) space base orbital pointing and maneuvering; (2) angular momentum sizing of actuators; (3) momentum desaturation selection and sizing; (4) multilevel control technique applied to configuration one; (5) one-dimensional model simulation; (6) N-body discrete coordinate simulation; (7) structural analysis math model formulation; and (8) discussion of control problems and control methods.
Evaluation of new collision-pair selection models in DSMC
NASA Astrophysics Data System (ADS)
Akhlaghi, Hassan; Roohi, Ehsan
2017-10-01
This paper investigates new collision-pair selection procedures in the direct simulation Monte Carlo (DSMC) method. Collision-partner selection based on random selection from nearest-neighbor particles, and deterministic selection of nearest-neighbor particles, have already been introduced as schemes that provide accurate results in a wide range of problems. In the current research, new collision-pair selections based on the time spacing and direction of the relative movement of particles are introduced and evaluated. Comparisons between the new and existing algorithms are made on appropriate test cases, including fluctuations in a homogeneous gas, 2D equilibrium flow, and the Fourier flow problem. Distribution functions for the number of particles and collisions in a cell, velocity components, and collisional parameters (collision separation, time spacing, relative velocity, and the angle between the relative movements of particles) are investigated and compared with existing analytical relations for each model. The capability of each model to predict the heat flux in the Fourier problem at different cell numbers, numbers of particles, and time steps is examined. For the new and existing collision-pair selection schemes, the effects of an alternative formula for the number of collision-pair selections and of avoiding repetitive collisions are investigated via prediction of the Fourier heat flux. The simulation results demonstrate the advantages and weaknesses of each model on the different test cases.
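One of the existing schemes the paper compares against, deterministic nearest-neighbor partner selection within a cell, can be sketched as below; the time-spacing and movement-direction rules are analogous, with a different score function:

```python
import numpy as np

def nearest_neighbor_partner(positions, i, exclude=()):
    """Deterministically pick the nearest neighbor of particle i within the
    cell as its collision partner; `exclude` holds recent partners so that
    immediate repeat collisions are avoided."""
    d = np.linalg.norm(positions - positions[i], axis=1)
    d[i] = np.inf
    d[list(exclude)] = np.inf
    return int(np.argmin(d))

pos = np.random.default_rng(2).random((20, 3))   # particles in one cell
j = nearest_neighbor_partner(pos, i=0, exclude={5})
```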
NASA Astrophysics Data System (ADS)
Miller, V. M.; Semiatin, S. L.; Szczepanski, C.; Pilchak, A. L.
2018-06-01
The ability to predict the evolution of crystallographic texture during hot working of titanium alloys in the α + β temperature regime is of great significance to numerous engineering disciplines; however, research efforts are complicated by the rapid changes in phase volume fractions and flow stresses with temperature, in addition to topological considerations. The viscoplastic self-consistent (VPSC) polycrystal plasticity model is employed to simulate deformation in the two-phase field. Newly developed parameter selection schemes utilizing automated optimization based on two different error metrics are considered. In the first optimization scheme, which is commonly used in the literature, the VPSC parameters are selected based on the quality of fit between experimental and simulated flow curves at six hot-working temperatures. Under the second, newly developed scheme, parameters are selected to minimize the difference between the simulated and experimentally measured α textures after accounting for the β → α transformation upon cooling. It is demonstrated that both methods result in good qualitative matches to the experimental α-phase texture, but texture-based optimization results in a substantially better quantitative match of the orientation distribution function.
Simulation of Ground Winds Time Series for the NASA Crew Launch Vehicle (CLV)
NASA Technical Reports Server (NTRS)
Adelfang, Stanley I.
2008-01-01
Simulation of wind time series based on power spectral density (PSD) and spectral coherence models for ground wind turbulence is described. The wind models, originally developed for the Shuttle program, are based on wind measurements at the NASA 150-m meteorological tower at Cape Canaveral, FL. The current application is the design and/or protection of the CLV from wind effects during on-pad exposure, over periods ranging from days prior to launch to the seconds or minutes just prior to launch and the seconds after launch. The evaluation of vehicle response to wind will influence the design and operation of constraint systems for support of the on-pad vehicle. Longitudinal and lateral wind component time series are simulated at critical vehicle locations. The PSD model for wind turbulence is a function of mean wind speed, elevation and temporal frequency. Integration of the PSD equation over a selected frequency range yields the variance of the time series to be simulated. The square root of the PSD defines a low-pass filter that is applied to adjust the components of the Fast Fourier Transform (FFT) of Gaussian white noise. The first simulated time series, near the top of the launch vehicle, is the inverse transform of the adjusted FFT. Simulation of the wind component time series at the nearest adjacent location (and all succeeding next-nearest locations) is based on a model for the coherence between winds at two locations as a function of frequency and separation distance, where the adjacent locations are separated vertically and/or horizontally. The coherence function is used to calculate a coherence-weighted FFT of the wind at the next-nearest location, given the FFT of the simulated time series at the previous location and the essentially incoherent FFT of the wind at the selected location derived a priori from the PSD model. The simulated time series at each adjacent location is the inverse Fourier transform of the coherence-weighted FFT. For a selected design case, the equations, the process and the simulated time series at multiple vehicle stations are presented.
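The core PSD-filtering step described above can be illustrated with a short sketch. This is a minimal stand-in, not the Shuttle-era model itself: the PSD shape and scaling convention below are assumed placeholders, and the coherence-weighting step for adjacent stations is omitted.

import numpy as np

def simulate_wind_series(psd, n=8192, dt=0.1, seed=0):
    # Shape Gaussian white noise in the frequency domain so that the
    # output series follows the one-sided PSD model psd(f) [units^2/Hz].
    rng = np.random.default_rng(seed)
    white = rng.standard_normal(n)
    W = np.fft.rfft(white)                   # FFT of Gaussian white noise
    f = np.fft.rfftfreq(n, d=dt)             # frequency grid [Hz]
    H = np.sqrt(psd(f) / (2.0 * dt))         # low-pass filter = sqrt(PSD)
    return np.fft.irfft(W * H, n)            # inverse transform

# Illustrative low-frequency-weighted PSD (not the tower-derived model):
psd = lambda f: 4.0 / (1.0 + (f / 0.05) ** 2)
u = simulate_wind_series(psd)                # series at the first station

A second station's series would then be obtained by weighting this FFT with the coherence model before the inverse transform.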
Simulation of tropical cyclone activity over the western North Pacific based on CMIP5 models
NASA Astrophysics Data System (ADS)
Shen, Haibo; Zhou, Weican; Zhao, Haikun
2017-09-01
Based on the Coupled Model Intercomparison Project 5 (CMIP5) models, tropical cyclone (TC) activity in the summers of 1965-2005 over the western North Pacific (WNP) is simulated by a TC dynamical downscaling system. In consideration of the diversity among climate models, Bayesian model averaging (BMA) and equal-weighted model averaging (EMA) methods are applied to produce the ensemble large-scale environmental factors from the CMIP5 model outputs. The environmental factors generated by the BMA and EMA methods are compared, as well as the corresponding TC simulations by the downscaling system. Results indicate that the BMA method shows a significant advantage over the EMA. In addition, the impact of model selection on the BMA method is examined. For each factor, ten better-performing models are selected from the 30 CMIP5 models and BMA is then conducted on this subset. The resulting ensemble environmental factors and simulated TC activity are similar to the results from the 30-model BMA, which verifies that the BMA method assigns each model in the ensemble a weight corresponding to its predictive skill. Thereby, the presence of poorly performing models does not particularly degrade the BMA effectiveness, and the ensemble outcomes are improved. Finally, based upon the BMA method and the downscaling system, we analyze the sensitivity of TC activity to three important environmental factors, i.e., sea surface temperature (SST), large-scale steering flow, and vertical wind shear. Among the three factors, SST and large-scale steering flow greatly affect TC tracks, while the average intensity distribution is sensitive to all three environmental factors. Moreover, SST and vertical wind shear jointly play a critical role in the inter-annual variability of TC lifetime maximum intensity and the frequency of intense TCs.
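The contrast between equal and skill-based ensemble weights can be seen in a toy computation. This is a simplified stand-in that assumes fixed-variance Gaussian likelihoods; the BMA used in such studies typically fits weights and variances jointly (e.g., by EM), and all numbers below are invented.

import numpy as np

def bma_weights(preds, obs, sigma=1.0):
    # preds: (n_models, n_times) model output; obs: (n_times,) observations.
    # Weight each model by its Gaussian likelihood given the observations.
    ll = -0.5 * np.sum(((preds - obs) / sigma) ** 2, axis=1)
    w = np.exp(ll - ll.max())                # stabilize before normalizing
    return w / w.sum()

preds = np.array([[26.1, 27.0, 28.2],        # skilful model
                  [25.0, 26.2, 27.1],        # mediocre model
                  [30.0, 31.5, 29.9]])       # poor model
obs = np.array([26.0, 27.1, 28.0])
w = bma_weights(preds, obs)
ema = preds.mean(axis=0)                     # equal-weighted ensemble
bma = w @ preds                              # skill-weighted ensemble

The poor model receives a near-zero weight, which is why its presence does not particularly degrade the BMA ensemble.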
Creech, Tyler G; Epps, Clinton W; Landguth, Erin L; Wehausen, John D; Crowhurst, Rachel S; Holton, Brandon; Monello, Ryan J
2017-01-01
Landscape genetic studies based on neutral genetic markers have contributed to our understanding of the influence of landscape composition and configuration on gene flow and genetic variation. However, the potential for species to adapt to changing landscapes will depend on how natural selection influences adaptive genetic variation. We demonstrate how landscape resistance models can be combined with genetic simulations incorporating natural selection to explore how the spread of adaptive variation is affected by landscape characteristics, using desert bighorn sheep (Ovis canadensis nelsoni) in three differing regions of the southwestern United States as an example. We conducted genetic sampling and least-cost path modeling to optimize landscape resistance models independently for each region, and then simulated the spread of an adaptive allele favored by selection across each region. Optimized landscape resistance models differed between regions with respect to landscape variables included and their relationships to resistance, but the slope of terrain and the presence of water barriers and major roads had the greatest impacts on gene flow. Genetic simulations showed that differences among landscapes strongly influenced spread of adaptive genetic variation, with faster spread (1) in landscapes with more continuously distributed habitat and (2) when a pre-existing allele (i.e., standing genetic variation) rather than a novel allele (i.e., mutation) served as the source of adaptive genetic variation. The combination of landscape resistance models and genetic simulations has broad conservation applications and can facilitate comparisons of adaptive potential within and between landscapes.
Tučník, Petr; Bureš, Vladimír
2016-01-01
Multi-criteria decision-making (MCDM) can be formally implemented by various methods. This study compares the suitability of four selected MCDM methods, namely WPM, TOPSIS, VIKOR, and PROMETHEE, for future applications in agent-based computational economic (ACE) models of larger scale (i.e., over 10 000 agents in one geographical region). These four MCDM methods were selected according to their appropriateness for computational processing in ACE applications. Tests of the selected methods were conducted on four hardware configurations. For each method, 100 tests were performed, which represented one testing iteration. With four testing iterations conducted on each hardware setting and separate testing of all configurations with the -server parameter de/activated, altogether 12800 data points were collected and consequently analyzed. An illustrative decision-making scenario that allows mutual comparison of all of the selected decision-making methods was used. Our test results suggest that although all methods are convenient and can be used in practice, the VIKOR method accomplished the tests with the best results and thus can be recommended as the most suitable for simulations of large-scale agent-based models.
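For reference, the recommended VIKOR method is compact enough to sketch in full. A minimal version assuming no criterion is constant across alternatives and using the conventional compromise weight v = 0.5; the decision matrix below is invented.

import numpy as np

def vikor(X, w, benefit, v=0.5):
    # Rank alternatives (rows of X) by the VIKOR compromise index Q.
    # benefit[j] is True if criterion j is to be maximized.
    X = np.asarray(X, float)
    f_best = np.where(benefit, X.max(axis=0), X.min(axis=0))
    f_worst = np.where(benefit, X.min(axis=0), X.max(axis=0))
    norm = (f_best - X) / (f_best - f_worst)   # 0 = best, 1 = worst
    S = (w * norm).sum(axis=1)                 # group utility
    R = (w * norm).max(axis=1)                 # individual regret
    Q = (v * (S - S.min()) / (S.max() - S.min())
         + (1 - v) * (R - R.min()) / (R.max() - R.min()))
    return np.argsort(Q), Q

X = np.array([[8.2, 0.7, 3.1],
              [7.5, 0.9, 2.6],
              [9.0, 0.4, 3.8]])
order, Q = vikor(X, w=np.array([0.5, 0.3, 0.2]),
                 benefit=np.array([True, True, False]))
# order[0] is the index of the best-ranked (compromise) alternative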
Evaluation of brightness temperature from a forward model of ground-based microwave radiometer
NASA Astrophysics Data System (ADS)
Rambabu, S.; Pillai, J. S.; Agarwal, A.; Pandithurai, G.
2014-06-01
Ground-based microwave radiometers have attracted considerable attention in recent years due to their capability to profile temperature and humidity at high temporal and vertical resolution in the lower troposphere. The process of retrieving these parameters from measurements of radiometric brightness temperature (TB) includes the inversion algorithm, which uses background information from a forward model. In the present study, an algorithm development and evaluation of this forward model for a ground-based microwave radiometer, being developed by the Society for Applied Microwave Electronics Engineering and Research (SAMEER) of India, is presented. Initially, an analysis of the absorption coefficient and weighting function at different frequencies was made to select the channels. Further, the range of variation of TB for these selected channels over the two stations Mumbai and Delhi for the year 2011 is discussed. Finally, the comparison between forward-model simulated TBs and radiometer-measured TBs at Mahabaleshwar (73.66°E, 17.93°N) is made to evaluate the model. There is good agreement between model simulations and radiometer observations, which suggests that these forward-model simulations can be used as background for inversion models for retrieving temperature and humidity profiles.
Sun, Mingzhu; Xu, Hui; Zeng, Xingjuan; Zhao, Xin
2017-01-01
There are various fantastic biological phenomena in biological pattern formation. Mathematical modeling using reaction-diffusion partial differential equation systems is employed to study the mechanism of pattern formation. However, model parameter selection is both difficult and time consuming. In this paper, a visual feedback simulation framework is proposed to calculate the parameters of a mathematical model automatically based on the basic principle of feedback control. In the simulation framework, the simulation results are visualized, and the image features are extracted as the system feedback. Then, the unknown model parameters are obtained by comparing the image features of the simulation image and the target biological pattern. Considering two typical applications, the visual feedback simulation framework is applied to fulfill pattern formation simulations for vascular mesenchymal cells and lung development. In the simulation framework, the spot, stripe, labyrinthine patterns of vascular mesenchymal cells, the normal branching pattern and the branching pattern lacking side branching for lung branching are obtained in a finite number of iterations. The simulation results indicate that it is easy to achieve the simulation targets, especially when the simulation patterns are sensitive to the model parameters. Moreover, this simulation framework can expand to other types of biological pattern formation. PMID:28225811
Enhanced Contact Graph Routing (ECGR) MACHETE Simulation Model
NASA Technical Reports Server (NTRS)
Segui, John S.; Jennings, Esther H.; Clare, Loren P.
2013-01-01
Contact Graph Routing (CGR) for Delay/Disruption Tolerant Networking (DTN) space-based networks makes use of the predictable nature of node contacts to make real-time routing decisions given unpredictable traffic patterns. The contact graph will have been disseminated to all nodes before the start of route computation. CGR was designed for space-based networking environments where future contact plans are known or are independently computable (e.g., using known orbital dynamics). For each data item (known as a bundle in DTN), a node independently performs route selection by examining possible paths to the destination. Route computation could conceivably run thousands of times a second, so computational load is important. This work refers to the simulation software model of Enhanced Contact Graph Routing (ECGR) for DTN Bundle Protocol in JPL's MACHETE simulation tool. The simulation model was used for performance analysis of CGR and led to several performance enhancements. The simulation model was used to demonstrate the improvements of ECGR over CGR as well as other routing methods in space network scenarios. ECGR moved to using earliest arrival time because it is a global monotonically increasing metric that guarantees the safety properties needed for the solution's correctness since route re-computation occurs at each node to accommodate unpredicted changes (e.g., traffic pattern, link quality). Furthermore, using earliest arrival time enabled the use of the standard Dijkstra algorithm for path selection. The Dijkstra algorithm for path selection has a well-known inexpensive computational cost. These enhancements have been integrated into the open source CGR implementation. The ECGR model is also useful for route metric experimentation and comparisons with other DTN routing protocols particularly when combined with MACHETE's space networking models and Delay Tolerant Link State Routing (DTLSR) model.
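The earliest-arrival metric with a standard Dijkstra search can be illustrated with a minimal sketch. This simplifies CGR/ECGR considerably, assuming instantaneous transmission (only the one-way light time, owlt, is modeled) and ignoring contact volume limits; the contact plan below is invented.

import heapq

def earliest_arrival_route(contacts, source, dest, t0):
    # Dijkstra over a contact plan with earliest arrival time as the
    # monotonically increasing path metric, in the spirit of ECGR.
    # contacts: list of (tx_node, rx_node, t_start, t_end, owlt).
    best = {source: t0}
    heap = [(t0, source)]
    while heap:
        t, node = heapq.heappop(heap)
        if node == dest:
            return t
        if t > best.get(node, float("inf")):
            continue                        # stale queue entry
        for tx, rx, t_start, t_end, owlt in contacts:
            if tx != node:
                continue
            t_send = max(t, t_start)        # wait for the contact to open
            if t_send > t_end:
                continue                    # contact closes before use
            arrival = t_send + owlt
            if arrival < best.get(rx, float("inf")):
                best[rx] = arrival
                heapq.heappush(heap, (arrival, rx))
    return None                             # destination unreachable

contacts = [("A", "B", 10, 50, 2), ("B", "C", 30, 90, 3), ("A", "C", 100, 120, 1)]
print(earliest_arrival_route(contacts, "A", "C", t0=0))   # 33: A-B then B-C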
Stochastic model search with binary outcomes for genome-wide association studies
Malovini, Alberto; Puca, Annibale A; Bellazzi, Riccardo
2012-01-01
Objective The spread of case–control genome-wide association studies (GWASs) has stimulated the development of new variable selection methods and predictive models. We introduce a novel Bayesian model search algorithm, Binary Outcome Stochastic Search (BOSS), which addresses the model selection problem when the number of predictors far exceeds the number of binary responses. Materials and methods Our method is based on a latent variable model that links the observed outcomes to the underlying genetic variables. A Markov Chain Monte Carlo approach is used for model search and to evaluate the posterior probability of each predictor. Results BOSS is compared with three established methods (stepwise regression, logistic lasso, and elastic net) in a simulated benchmark. Two real case studies are also investigated: a GWAS on the genetic bases of longevity, and the type 2 diabetes study from the Wellcome Trust Case Control Consortium. Simulations show that BOSS achieves higher precisions than the reference methods while preserving good recall rates. In both experimental studies, BOSS successfully detects genetic polymorphisms previously reported to be associated with the analyzed phenotypes. Discussion BOSS outperforms the other methods in terms of F-measure on simulated data. In the two real studies, BOSS successfully detects biologically relevant features, some of which are missed by univariate analysis and the three reference techniques. Conclusion The proposed algorithm is an advance in the methodology for model selection with a large number of features. Our simulated and experimental results showed that BOSS proves effective in detecting relevant markers while providing a parsimonious model. PMID:22534080
Building Better Planet Populations for EXOSIMS
NASA Astrophysics Data System (ADS)
Garrett, Daniel; Savransky, Dmitry
2018-01-01
The Exoplanet Open-Source Imaging Mission Simulator (EXOSIMS) software package simulates ensembles of space-based direct imaging surveys to provide a variety of science and engineering yield distributions for proposed mission designs. These mission simulations rely heavily on assumed distributions of planetary population parameters including semi-major axis, planetary radius, eccentricity, albedo, and orbital orientation to provide heuristics for target selection and to simulate planetary systems for detection and characterization. The distributions are encoded in PlanetPopulation modules within EXOSIMS which are selected by the user in the input JSON script when a simulation is run. The earliest written PlanetPopulation modules available in EXOSIMS are based on planet population models where the planetary parameters are considered to be independent from one another. While independent parameters allow for quick computation of heuristics and sampling for simulated planetary systems, results from planet-finding surveys have shown that many parameters (e.g., semi-major axis/orbital period and planetary radius) are not independent. We present new PlanetPopulation modules for EXOSIMS which are built on models based on planet-finding survey results where semi-major axis and planetary radius are not independent and provide methods for sampling their joint distribution. These new modules enhance the ability of EXOSIMS to simulate realistic planetary systems and give more realistic science yield distributions.
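The difference between independent and joint sampling can be illustrated with a toy conditional sampler. Everything below (functional forms, break point, power-law indices, radius range) is an invented placeholder, not the occurrence-rate fits behind the actual PlanetPopulation modules.

import numpy as np

rng = np.random.default_rng(1)

def sample_planets(n):
    # Toy joint sampler in which the planetary-radius distribution depends
    # on semi-major axis, so the two parameters are no longer independent.
    a = np.exp(rng.uniform(np.log(0.1), np.log(30.0), n))    # AU, log-uniform
    alpha = np.where(a < 1.0, -1.5, -0.8)                    # radius power law
    u = rng.uniform(size=n)
    r_min, r_max = 0.5, 12.0                                 # Earth radii
    k = alpha + 1.0
    r = (r_min ** k + u * (r_max ** k - r_min ** k)) ** (1.0 / k)  # inverse CDF
    return a, r

a, r = sample_planets(10000)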
A Mixture Rasch Model-Based Computerized Adaptive Test for Latent Class Identification
ERIC Educational Resources Information Center
Jiao, Hong; Macready, George; Liu, Junhui; Cho, Youngmi
2012-01-01
This study explored a computerized adaptive test delivery algorithm for latent class identification based on the mixture Rasch model. Four item selection methods based on the Kullback-Leibler (KL) information were proposed and compared with the reversed and the adaptive KL information under simulated testing conditions. When item separation was…
Main steam line break accident simulation of APR1400 using the model of ATLAS facility
NASA Astrophysics Data System (ADS)
Ekariansyah, A. S.; Deswandri; Sunaryo, Geni R.
2018-02-01
A main steam line break simulation for APR1400, an advanced PWR design, has been performed using the RELAP5 code. The simulation was conducted in a model of the thermal-hydraulic test facility called ATLAS, which represents a scaled-down facility of the APR1400 design. The main steam line break event is described in an open-access safety report document, whose initial conditions and assumptions for the analysis were utilized in performing the simulation and analysis of the selected parameters. The objective of this work was to conduct benchmark activities by comparing the simulation results of the CESEC-III code, as a conservative-approach code, with the results of RELAP5 as a best-estimate code. Based on the simulation results, a general similarity in the behavior of selected parameters was observed between the two codes. However, the degree of accuracy still needs further research and analysis by comparison with other best-estimate codes. Uncertainties arising from the ATLAS model should be minimized by taking into account more specific data in developing the APR1400 model.
NASA Technical Reports Server (NTRS)
Plitau, Denis; Prasad, Narasimha S.
2012-01-01
The Active Sensing of CO2 Emissions over Nights, Days and Seasons (ASCENDS) mission recommended by the NRC Decadal Survey has a desired accuracy of 0.3% in carbon dioxide mixing ratio (XCO2) retrievals, requiring careful selection and optimization of the instrument parameters. NASA Langley Research Center (LaRC) is investigating the 1.57 micron carbon dioxide band as well as the 1.26-1.27 micron oxygen bands for our proposed ASCENDS mission requirements investigation. Simulation studies are underway for these bands to select optimum instrument parameters. The simulations are based on a multi-wavelength lidar modeling framework being developed at NASA LaRC to predict the performance of CO2 and O2 sensing from space and airborne platforms. The modeling framework consists of a lidar simulation module and a line-by-line calculation component with interchangeable lineshape routines to test the performance of alternative lineshape models in the simulations. As an option, the line-by-line radiative transfer model (LBLRTM) program may also be used for line-by-line calculations. The modeling framework is being used to perform error analysis, establish optimum measurement wavelengths, and identify the best lineshape models to be used in CO2 and O2 retrievals. Several additional programs for HITRAN database management and related simulations are planned to be included in the framework. A description of the modeling framework with selected results of the simulation studies for CO2 and O2 sensing is presented in this paper.
The effect of friend selection on social influences in obesity.
Trogdon, Justin G; Allaire, Benjamin T
2014-12-01
We present an agent-based model of weight choice and peer selection that simulates the effect of peer selection on social multipliers for weight loss interventions. The model generates social clustering around weight through two mechanisms: a causal link from others' weight to an individual's weight and the propensity to select peers based on weight. We simulated weight loss interventions and tried to identify intervention targets that maximized the spillover of weight loss from intervention participants to non-participants. Social multipliers increase with the number of intervention participants' friends. For example, when friend selection was based on a variable exogenous to weight, the weight lost among non-participants increased by 23% (14.3 lb vs. 11.6 lb) when targeting the most popular obese. Holding constant the number of participants' friends, multipliers increase with increased weight clustering due to selection, up to a point. For example, among the most popular obese, social multipliers when matching on a characteristic correlated with weight (1.189) were higher than when matching on the exogenous characteristic (1.168) and when matching on weight (1.180). Increased weight clustering also implies more obese "friends of friends" of participants, who reduce social multipliers. Copyright © 2014 Elsevier B.V. All rights reserved.
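A toy version of such a model shows how the social multiplier is computed. The population size, influence rate, homophily rule and intervention effect below are invented, not the paper's calibration.

import numpy as np

rng = np.random.default_rng(0)
N, K = 200, 5
weight = rng.normal(190.0, 35.0, N)            # lb; illustrative population

def pick_friends(match_var):
    # K nearest neighbours on the matching variable (weight-homophily
    # when match_var is weight; exogenous matching otherwise).
    friends = np.empty((N, K), dtype=int)
    for i in range(N):
        d = np.abs(match_var - match_var[i])
        d[i] = np.inf
        friends[i] = np.argsort(d)[:K]
    return friends

friends = pick_friends(weight)                 # selection on weight itself
popular = np.argsort(weight)[-20:]             # target the heaviest agents

w = weight.copy()
w[popular] -= 15.0                             # direct intervention effect
for _ in range(10):                            # peer-influence iterations
    w = w + 0.1 * (w[friends].mean(axis=1) - w)

loss_total = (weight - w).sum()
loss_participants = (weight - w)[popular].sum()
print("social multiplier ~", loss_total / loss_participants)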
Binder, Harald; Sauerbrei, Willi; Royston, Patrick
2013-06-15
In observational studies, many continuous or categorical covariates may be related to an outcome. Various spline-based procedures or the multivariable fractional polynomial (MFP) procedure can be used to identify important variables and functional forms for continuous covariates. This is the main aim of an explanatory model, as opposed to a model only for prediction. The type of analysis often guides the complexity of the final model. Spline-based procedures and MFP have tuning parameters for choosing the required complexity. To compare model selection approaches, we perform a simulation study in the linear regression context based on a data structure intended to reflect realistic biomedical data. We vary the sample size, variance explained and complexity parameters for model selection. We consider 15 variables. A sample size of 200 (1000) and R^2 = 0.2 (0.8) is the scenario with the smallest (largest) amount of information. For assessing performance, we consider prediction error, correct and incorrect inclusion of covariates, qualitative measures for judging selected functional forms and further novel criteria. From limited information, a suitable explanatory model cannot be obtained. Prediction performance from all types of models is similar. With a medium amount of information, MFP performs better than splines on several criteria. MFP better recovers simpler functions, whereas splines better recover more complex functions. For a large amount of information and no local structure, MFP and the spline procedures often select similar explanatory models. Copyright © 2012 John Wiley & Sons, Ltd.
Simulation optimization of PSA-threshold based prostate cancer screening policies
Zhang, Jingyu; Denton, Brian T.; Shah, Nilay D.; Inman, Brant A.
2013-01-01
We describe a simulation optimization method to design PSA screening policies based on expected quality adjusted life years (QALYs). Our method integrates a simulation model in a genetic algorithm which uses a probabilistic method for selection of the best policy. We present computational results about the efficiency of our algorithm. The best policy generated by our algorithm is compared to previously recommended screening policies. Using the policies determined by our model, we present evidence that patients should be screened more aggressively but for a shorter length of time than previously published guidelines recommend. PMID:22302420
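The simulation-in-the-loop genetic algorithm can be outlined as follows. This is a toy sketch: the QALY surface is a made-up placeholder for the patient-level simulation, and plain fitness-proportional sampling stands in for the paper's probabilistic best-policy selection.

import numpy as np

rng = np.random.default_rng(0)

def simulated_qaly(policy, n_rep=200):
    # Stand-in for the patient-level simulation: a noisy QALY estimate
    # for a (start_age, stop_age, psa_threshold) screening policy.
    start, stop, thr = policy
    true = (11.0 - 0.002 * (start - 50) ** 2 - 0.001 * (stop - 70) ** 2
            - 0.05 * (thr - 4.0) ** 2)
    return true + rng.normal(0.0, 0.05, n_rep).mean()

pop = np.column_stack([rng.uniform(40, 60, 30),   # screening start age
                       rng.uniform(65, 80, 30),   # screening stop age
                       rng.uniform(1, 10, 30)])   # PSA threshold (ng/ml)
for _ in range(40):
    fit = np.array([simulated_qaly(p) for p in pop])
    prob = fit - fit.min() + 1e-9                 # fitness-proportional
    prob /= prob.sum()                            # selection probabilities
    parents = pop[rng.choice(len(pop), size=len(pop), p=prob)]
    mask = rng.integers(0, 2, pop.shape).astype(bool)   # uniform crossover
    pop = np.where(mask, parents, np.roll(parents, 1, axis=0))
    pop += rng.normal(0.0, [0.5, 0.5, 0.1], pop.shape)  # mutation
best = pop[np.argmax([simulated_qaly(p) for p in pop])]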
Optimisation of GaN LEDs and the reduction of efficiency droop using active machine learning
Rouet-Leduc, Bertrand; Barros, Kipton Marcos; Lookman, Turab; ...
2016-04-26
A fundamental challenge in the design of LEDs is to maximise electro-luminescence efficiency at high current densities. We simulate GaN-based LED structures that delay the onset of efficiency droop by spreading carrier concentrations evenly across the active region. Statistical analysis and machine learning effectively guide the selection of the next LED structure to be examined based upon its expected efficiency as well as model uncertainty. This active learning strategy rapidly constructs a model that predicts Poisson-Schrödinger simulations of devices, and that simultaneously produces structures with higher simulated efficiencies.
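The described loop resembles Bayesian optimization with a surrogate model. A generic sketch assuming a Gaussian-process surrogate and an expected-improvement acquisition; the efficiency function below is a made-up stand-in for the Poisson-Schrödinger device simulation.

import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expected_improvement(X_cand, gp, y_best):
    # Balance expected efficiency gain against model uncertainty.
    mu, sd = gp.predict(X_cand, return_std=True)
    z = (mu - y_best) / np.maximum(sd, 1e-9)
    return (mu - y_best) * norm.cdf(z) + sd * norm.pdf(z)

rng = np.random.default_rng(0)
# Candidate LED structures encoded as parameter vectors (e.g., layer widths);
# efficiency() is a hypothetical placeholder for the device simulator.
efficiency = lambda X: -((X - 0.3) ** 2).sum(axis=1)
X_cand = rng.uniform(0, 1, (500, 4))
idx = list(rng.choice(500, 5, replace=False))      # initial random designs
for _ in range(20):                                # active-learning loop
    X_train = X_cand[idx]
    y_train = efficiency(X_train)
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5),
                                  normalize_y=True).fit(X_train, y_train)
    ei = expected_improvement(X_cand, gp, y_train.max())
    ei[idx] = -np.inf                              # do not re-simulate
    idx.append(int(np.argmax(ei)))                 # next structure to run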
NASA Astrophysics Data System (ADS)
Feng, Wenjie; Wu, Shenghe; Yin, Yanshu; Zhang, Jiajia; Zhang, Ke
2017-07-01
A training image (TI) can be regarded as a database of spatial structures and their low to higher order statistics used in multiple-point geostatistics (MPS) simulation. Presently, there are a number of methods to construct a series of candidate TIs (CTIs) for MPS simulation based on a modeler's subjective criteria. The spatial structures of TIs are often various, meaning that the compatibilities of different CTIs with the conditioning data are different. Therefore, evaluation and optimal selection of CTIs before MPS simulation is essential. This paper proposes a CTI evaluation and optimal selection method based on minimum data event distance (MDevD). In the proposed method, a set of MDevD properties are established through calculation of the MDevD of conditioning data events in each CTI. Then, CTIs are evaluated and ranked according to the mean value and variance of the MDevD properties. The smaller the mean value and variance of an MDevD property are, the more compatible the corresponding CTI is with the conditioning data. In addition, data events with low compatibility in the conditioning data grid can be located to help modelers select a set of complementary CTIs for MPS simulation. The MDevD property can also help to narrow the range of the distance threshold for MPS simulation. The proposed method was evaluated using three examples: a 2D categorical example, a 2D continuous example, and an actual 3D oil reservoir case study. To illustrate the method, a C++ implementation of the method is attached to the paper.
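A minimal sketch of the MDevD idea for one conditioning data event, assuming a 2D categorical training image, a simple mismatch-count distance and a brute-force scan; the paper's exact distance definition and search strategy may differ.

import numpy as np

def min_data_event_distance(ti, offsets, values):
    # Smallest mismatch between one conditioning data event and any data
    # event of the same geometry in a 2D categorical training image.
    # offsets: (k, 2) integer lags from the event centre; values: (k,).
    ny, nx = ti.shape
    best = np.inf
    for i in range(ny):                     # brute-force scan of the TI
        for j in range(nx):
            d, ok = 0.0, True
            for (di, dj), val in zip(offsets, values):
                ii, jj = i + di, j + dj
                if not (0 <= ii < ny and 0 <= jj < nx):
                    ok = False              # event does not fit here
                    break
                d += ti[ii, jj] != val      # categorical mismatch count
            if ok:
                best = min(best, d / len(values))
    return best

# Rank CTIs by the mean and variance of MDevD over all conditioning events:
# scores = [min_data_event_distance(ti, off, val) for off, val in events]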
Computer aided design of Langasite resonant cantilevers: analytical models and simulations
NASA Astrophysics Data System (ADS)
Tellier, C. R.; Leblois, T. G.; Durand, S.
2010-05-01
Analytical models for the piezoelectric excitation and for the wet micromachining of resonant cantilevers are proposed. Firstly, computations of the metrological performances of micro-resonators allow us to select special cuts and special alignments of the cantilevers. Secondly, the in-house simulator TENSOSIM, based on the kinematic and tensorial model, furnishes the etching shapes of the cantilevers. As a result, the number of selected cuts is reduced. Finally, the simulator COMSOL® is used to evaluate the influence of the final etching shape on metrological performances, especially on the resonance frequency. Changes in frequency are evaluated, and the deviating behaviours of structures with less favourable built-ins are tested, showing that the X cut is the best cut for LGS resonant cantilevers vibrating in flexural modes (type 1 and type 2) or in torsion mode.
Liu, Xiang; Peng, Yingwei; Tu, Dongsheng; Liang, Hua
2012-10-30
Survival data with a sizable cure fraction are commonly encountered in cancer research. The semiparametric proportional hazards cure model has been recently used to analyze such data. As seen in the analysis of data from a breast cancer study, a variable selection approach is needed to identify important factors in predicting the cure status and risk of breast cancer recurrence. However, no specific variable selection method for the cure model is available. In this paper, we present a variable selection approach with penalized likelihood for the cure model. The estimation can be implemented easily by combining the computational methods for penalized logistic regression and the penalized Cox proportional hazards models with the expectation-maximization algorithm. We illustrate the proposed approach on data from a breast cancer study. We conducted Monte Carlo simulations to evaluate the performance of the proposed method. We used and compared different penalty functions in the simulation studies. Copyright © 2012 John Wiley & Sons, Ltd.
Variable selection in discrete survival models including heterogeneity.
Groll, Andreas; Tutz, Gerhard
2017-04-01
Several variable selection procedures are available for continuous time-to-event data. However, if time is measured in a discrete way and many ties therefore occur, models for continuous time are inadequate. We propose penalized likelihood methods that perform efficient variable selection in discrete survival modeling with explicit modeling of the heterogeneity in the population. The method is based on a combination of ridge and lasso type penalties that are tailored to the case of discrete survival. The performance is studied in simulation studies and an application to the birth of the first child.
Modeling of Protection in Dynamic Simulation Using Generic Relay Models and Settings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Samaan, Nader A.; Dagle, Jeffery E.; Makarov, Yuri V.
This paper shows how generic protection relay models available in planning tools can be augmented with settings that are based on NERC standards or best engineering practice. Selected generic relay models in Siemens PSS®E have been used in dynamic simulations in the proposed approach. Undervoltage, overvoltage, underfrequency, and overfrequency relays have been modeled for each generating unit. Distance-relay protection was modeled for transmission system protection. Two types of load-shedding schemes were modeled: underfrequency (frequency-responsive non-firm load shedding) and underfrequency and undervoltage firm load shedding. Several case studies are given to show the impact of protection devices on dynamic simulations. This is useful for simulating cascading outages.
Frisch, Simon; Dshemuchadse, Maja; Görner, Max; Goschke, Thomas; Scherbaum, Stefan
2015-11-01
Selective attention biases information processing toward stimuli that are relevant for achieving our goals. However, the nature of this bias is under debate: Does it solely rely on the amplification of goal-relevant information or is there a need for additional inhibitory processes that selectively suppress currently distracting information? Here, we explored the processes underlying selective attention with a dynamic, modeling-based approach that focuses on the continuous evolution of behavior over time. We present two dynamic neural field models incorporating the diverging theoretical assumptions. Simulations with both models showed that they make similar predictions with regard to response times but differ markedly with regard to their continuous behavior. Human data observed via mouse tracking as a continuous measure of performance revealed evidence for the model solely based on amplification but no indication of persisting selective distracter inhibition.
Seo, Dong Gi; Choi, Jeongwook
2018-05-17
Computerized adaptive testing (CAT) has been adopted in license examinations due to its test efficiency and accuracy. Much research on CAT has been published demonstrating the efficiency and accuracy of measurement. This simulation study investigated scoring methods and item selection methods for implementing CAT in the Korean medical license examination (KMLE). The study used a post-hoc (real data) simulation design. The item bank used in this study was designed with all items in a 2017 KMLE. All CAT algorithms for this study were implemented with the 'catR' package in R. In terms of accuracy, the Rasch and two-parameter logistic (2PL) models performed better than the 3PL model. Modal a Posteriori (MAP) and Expected a Posteriori (EAP) estimation provided more accurate estimates than MLE and WLE. Furthermore, Maximum posterior weighted information (MPWI) and Minimum expected posterior variance (MEPV) performed better than other item selection methods. In terms of efficiency, the Rasch model is recommended to reduce test length. Simulation studies should be performed under varied test conditions before adopting a live CAT. Based on this simulation study, specific scoring and item selection methods should be predetermined before implementing a live CAT.
NASA Astrophysics Data System (ADS)
Saber, M.; Sefelnasr, A.; Yilmaz, K. K.
2015-12-01
Flash floods are a natural hydrological phenomenon affecting many regions of the world. The behavior and effect of this phenomenon differ from one region to another depending on several factors, such as the climatological, hydrological and topographical conditions of the target regions. Wadi Assiut, Egypt (an arid environment) and the Gumara catchment, Lake Tana, Ethiopia (a humid environment) have been selected for application. The main target of this work is to simulate flash floods at both catchments, taking into account how flash flood behavior differs between them. In order to simulate the flash floods, remote sensing data and a physically based distributed hydrological model, Hydro-BEAM-WaS (Hydrological River Basin Environmental Assessment Model incorporating Wadi System), have been used in an integrated manner. Based on the simulation results of flash floods in these regions, it was found that the time to reach the maximum peak is very short and consequently the warning time is very short as well. It was also found that flash floods start from zero flow in the arid environment, whereas in the humid environment they start from a base flow that varies between the simulated events. Distribution maps of flash floods showing the vulnerable regions of the selected areas have been developed. Consequently, some mitigation strategies based on this study have been introduced. The proposed methodology can be applied effectively for flash flood forecasting in different climate regions; however, the paucity of observational data remains a limitation.
Extension of a Kolmogorov Atmospheric Turbulence Model for Time-Based Simulation Implementation
NASA Technical Reports Server (NTRS)
McMinn, John D.
1997-01-01
The development of any super/hypersonic aircraft requires the interaction of a wide variety of technical disciplines to maximize vehicle performance. For flight and engine control system design and development on this class of vehicle, realistic mathematical simulation models of atmospheric turbulence, including winds and the varying thermodynamic properties of the atmosphere, are needed. A model which has been tentatively selected by a government/industry group of flight and engine/inlet controls representatives working on the High Speed Civil Transport is one based on the Kolmogorov spectrum function. This report compares the Dryden and Kolmogorov turbulence forms, and describes enhancements that add functionality to the selected Kolmogorov model. These added features are: an altitude variation of the eddy dissipation rate based on Dryden data, the mapping of the eddy dissipation rate database onto a regular latitude and longitude grid, a method to account for flight at large vehicle attitude angles, and a procedure for transitioning smoothly across turbulence segments.
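The two spectral forms under comparison can be placed side by side. The Dryden longitudinal form below is the standard one; the Kolmogorov inertial-range constant is illustrative rather than the model's calibrated value.

import numpy as np

def dryden_lon(omega, sigma2, L):
    # Dryden longitudinal PSD as a function of spatial frequency [rad/m]
    return sigma2 * (2.0 * L / np.pi) / (1.0 + (L * omega) ** 2)

def kolmogorov(omega, eps, a=0.15):
    # Inertial-range Kolmogorov form ~ eps^(2/3) * omega^(-5/3); the
    # constant a is an illustrative placeholder, eps is the eddy
    # dissipation rate (varied with altitude in the enhanced model).
    return a * eps ** (2.0 / 3.0) * omega ** (-5.0 / 3.0)

omega = np.logspace(-4, 1, 200)               # spatial frequency grid
d = dryden_lon(omega, sigma2=4.0, L=300.0)
k = kolmogorov(omega, eps=1e-4)
# High-frequency roll-offs differ: omega^-2 (Dryden) vs omega^-5/3 (Kolmogorov).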
Model selection with multiple regression on distance matrices leads to incorrect inferences.
Franckowiak, Ryan P; Panasci, Michael; Jarvis, Karl J; Acuña-Rodriguez, Ian S; Landguth, Erin L; Fortin, Marie-Josée; Wagner, Helene H
2017-01-01
In landscape genetics, model selection procedures based on Information Theoretic and Bayesian principles have been used with multiple regression on distance matrices (MRM) to test the relationship between multiple vectors of pairwise genetic, geographic, and environmental distance. Using Monte Carlo simulations, we examined the ability of model selection criteria based on Akaike's information criterion (AIC), its small-sample correction (AICc), and the Bayesian information criterion (BIC) to reliably rank candidate models when applied with MRM while varying the sample size. The results showed a serious problem: all three criteria exhibit a systematic bias toward selecting unnecessarily complex models containing spurious random variables and erroneously suggest a high level of support for the incorrectly ranked best model. These problems effectively increased with increasing sample size. The failure of AIC, AICc, and BIC was likely driven by the inflated sample size and different sum-of-squares partitioned by MRM, and the resulting effect on delta values. Based on these findings, we strongly discourage the continued application of AIC, AICc, and BIC for model selection with MRM.
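Under a Gaussian likelihood these criteria are simple functions of the residual sum of squares, which makes the sample-size sensitivity easy to see. A minimal sketch:

import numpy as np

def ic_from_rss(rss, n, k):
    # Gaussian-likelihood information criteria for a regression model with
    # k fitted coefficients (the error variance counts as one more).
    p = k + 1
    aic = n * np.log(rss / n) + 2 * p
    aicc = aic + 2 * p * (p + 1) / (n - p - 1)
    bic = n * np.log(rss / n) + p * np.log(n)
    return aic, aicc, bic

# With MRM, n is the number of pairwise distances, m * (m - 1) / 2 for m
# individuals; this is far larger than the number of independent samples,
# which inflates the sample-size terms above and favours complex models.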
NASA Astrophysics Data System (ADS)
Cho, G. S.
2017-09-01
For performance optimization of Refrigerated Warehouses, design parameters are selected based on physical parameters, such as the number of equipment units and aisles and the forklift speeds, for ease of modification. This paper provides a comprehensive framework approach for the system design of Refrigerated Warehouses. We propose a modeling approach that aims at simulation optimization so as to meet the required design specifications using Design of Experiments (DOE), and we analyze the simulation model using an integrated aspect-oriented modeling approach (i-AOMA). As a result, the suggested method can evaluate the performance of a variety of Refrigerated Warehouse operations.
Agent-based model of angiogenesis simulates capillary sprout initiation in multicellular networks
Walpole, J.; Chappell, J.C.; Cluceru, J.G.; Mac Gabhann, F.; Bautch, V.L.; Peirce, S. M.
2015-01-01
Many biological processes are controlled by both deterministic and stochastic influences. However, efforts to model these systems often rely on either purely stochastic or purely rule-based methods. To better understand the balance between stochasticity and determinism in biological processes a computational approach that incorporates both influences may afford additional insight into underlying biological mechanisms that give rise to emergent system properties. We apply a combined approach to the simulation and study of angiogenesis, the growth of new blood vessels from existing networks. This complex multicellular process begins with selection of an initiating endothelial cell, or tip cell, which sprouts from the parent vessels in response to stimulation by exogenous cues. We have constructed an agent-based model of sprouting angiogenesis to evaluate endothelial cell sprout initiation frequency and location, and we have experimentally validated it using high-resolution time-lapse confocal microscopy. ABM simulations were then compared to a Monte Carlo model, revealing that purely stochastic simulations could not generate sprout locations as accurately as the rule-informed agent-based model. These findings support the use of rule-based approaches for modeling the complex mechanisms underlying sprouting angiogenesis over purely stochastic methods. PMID:26158406
Surgical stent planning: simulation parameter study for models based on DICOM standards.
Scherer, S; Treichel, T; Ritter, N; Triebel, G; Drossel, W G; Burgert, O
2011-05-01
Endovascular Aneurysm Repair (EVAR) can be facilitated by a realistic simulation model of stent-vessel interaction. Therefore, numerical feasibility and integrability in the clinical environment were evaluated. The finite element method was used to determine the necessary simulation parameters for stent-vessel interaction in EVAR. Input variables and result data of the simulation model were examined for their standardization using DICOM supplements. The study identified four essential parameters for the stent-vessel simulation: blood pressure, intima constitution, plaque occurrence and the material properties of vessel and plaque. Output quantities such as the radial force of the stent and the contact pressure between stent and vessel can help the surgeon to evaluate implant fixation and sealing. The model geometry can be saved with DICOM "Surface Segmentation" objects and the upcoming "Implant Templates" supplement. Simulation results can be stored using the "Structured Report". A standards-based general simulation model for optimizing stent-graft selection may be feasible. At present, there are limitations due to the specification of individual vessel material parameters and the simulation of the proximal fixation of stent-grafts with hooks. Simulation data with clinical relevance for documentation and presentation can be stored using existing or new DICOM extensions.
Soh, Zu; Nishikawa, Shinya; Kurita, Yuichi; Takiguchi, Noboru; Tsuji, Toshio
2016-01-01
To predict the odor quality of an odorant mixture, the interaction between odorants must be taken into account. Previously, an experiment in which mice discriminated between odorant mixtures identified a selective adaptation mechanism in the olfactory system. This paper proposes an olfactory model for odorant mixtures that can account for selective adaptation in terms of neural activity. The proposed model uses the spatial activity pattern of the mitral layer obtained from model simulations to predict the perceptual similarity between odors. Measured glomerular activity patterns are used as input to the model. The neural interaction between mitral cells and granular cells is then simulated, and a dissimilarity index between odors is defined using the activity patterns of the mitral layer. An odor set composed of three odorants is used to test the ability of the model. Simulations are performed based on the odor discrimination experiment on mice. As a result, we observe that part of the neural activity in the glomerular layer is enhanced in the mitral layer, whereas another part is suppressed. We find that the dissimilarity index strongly correlates with the odor discrimination rate of mice: r = 0.88 (p = 0.019). We conclude that our model has the ability to predict the perceptual similarity of odorant mixtures. In addition, the model also accounts for selective adaptation via the odor discrimination rate, and the enhancement and inhibition in the mitral layer may be related to this selective adaptation.
Simulation Of Combat With An Expert System
NASA Technical Reports Server (NTRS)
Provenzano, J. P.
1989-01-01
Proposed expert system predicts outcomes of combat situations. Called "COBRA", combat outcome based on rules for attrition, system selects rules for mathematical modeling of losses and discrete events in combat according to previous experiences. Used with another software module known as the "Game". Game/COBRA software system, consisting of Game and COBRA modules, provides for both quantitative aspects and qualitative aspects in simulations of battles. COBRA intended for simulation of large-scale military exercises, concepts embodied in it have much broader applicability. In industrial research, knowledge-based system enables qualitative as well as quantitative simulations.
Non-Equilibrium Dynamics Contribute to Ion Selectivity in the KcsA Channel
Haas, Stephan; Farley, Robert A.
2014-01-01
The ability of biological ion channels to conduct selected ions across cell membranes is critical for the survival of both animal and bacterial cells. Numerous investigations of ion selectivity have been conducted over more than 50 years, yet the mechanisms whereby the channels select certain ions and reject others are not well understood. Here we report a new application of Jarzynski’s Equality to investigate the mechanism of ion selectivity using non-equilibrium molecular dynamics simulations of Na+ and K+ ions moving through the KcsA channel. The simulations show that the selectivity filter of KcsA adapts and responds to the presence of the ions with structural rearrangements that are different for Na+ and K+. These structural rearrangements facilitate entry of K+ ions into the selectivity filter and permeation through the channel, and rejection of Na+ ions. A mechanistic model of ion selectivity by this channel based on the results of the simulations relates the structural rearrangement of the selectivity filter to the differential dehydration of ions and multiple-ion occupancy and describes a mechanism to efficiently select and conduct K+. Estimates of the K+/Na+ selectivity ratio and steady state ion conductance for KcsA from the simulations are in good quantitative agreement with experimental measurements. This model also accurately describes experimental observations of channel block by cytoplasmic Na+ ions, the “punch through” relief of channel block by cytoplasmic positive voltages, and is consistent with the knock-on mechanism of ion permeation. PMID:24465882
Model selection for multi-component frailty models.
Ha, Il Do; Lee, Youngjo; MacKenzie, Gilbert
2007-11-20
Various frailty models have been developed and are now widely used for analysing multivariate survival data. It is therefore important to develop an information criterion for model selection. However, in frailty models there are several alternative ways of forming a criterion, and the particular criterion chosen may not be uniformly best. In this paper, we study an Akaike information criterion (AIC) for selecting a frailty structure from a set of (possibly) non-nested frailty models. We propose two new AIC criteria, based on a conditional likelihood and an extended restricted likelihood (ERL) given by Lee and Nelder (J. R. Statist. Soc. B 1996; 58:619-678). We compare their performance using well-known practical examples and demonstrate that the two criteria may yield rather different results. A simulation study shows that the AIC based on the ERL is recommended when attention is focused on selecting the frailty structure rather than the fixed effects.
Jabor, A; Vlk, T; Boril, P
1996-04-15
We designed a simulation model for the assessment of the financial risks involved when a new diagnostic test is introduced in the laboratory. The model is based on a neural network consisting of ten neurons and assumes that input entities can be assigned appropriate uncertainties. Simulations are done on a 1-day interval basis. Risk analysis completes the model, and the financial effects are evaluated for a selected time period. The basic output of the simulation consists of total expenses and income during the simulation time, the net present value of the project at the end of simulation, the total number of control samples during simulation, the total number of patients evaluated and the total number of kits used.
Simulating soil organic carbon changes across toposequences under dryland agriculture using CQESTR
USDA-ARS?s Scientific Manuscript database
Soil organic carbon (SOC) and its management under dryland cropping systems are very critical for both crop productivity and environment health. The objective of this study was to evaluate the performance of CQESTR, a process-based C model, in simulating SOC changes across toposequences of selected ...
NASA Astrophysics Data System (ADS)
Shi, Jinfei; Zhu, Songqing; Chen, Ruwen
2017-12-01
An order selection method based on multiple stepwise regressions is proposed for the General expression of Nonlinear Autoregressive (GNAR) model, which converts the model order problem into variable selection for a multiple linear regression equation. The partial autocorrelation function is adopted to define the linear terms in the GNAR model. The result is set as the initial model, and the nonlinear terms are then introduced gradually. Statistics are chosen to measure the improvement contributed by both newly introduced and previously included variables, and these statistics are used to determine which model variables to retain or eliminate. The optimal model is then obtained through measures of goodness of fit or significance tests. The simulation and classic time-series data experiment results show that the proposed method is simple, reliable and can be applied in practical engineering.
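A simplified forward-only variant of the stepwise idea is sketched below, selecting candidate GNAR terms by AIC; the paper's procedure also re-tests previously entered variables for elimination, which is omitted here.

import numpy as np

def forward_stepwise(X, y, names, max_terms=10):
    # Greedy forward selection of candidate regressors (e.g., linear and
    # nonlinear GNAR terms held in the columns of X) by AIC.
    n = len(y)
    selected, remaining = [], list(range(X.shape[1]))
    best_aic = n * np.log(np.var(y)) + 2.0        # intercept-only model
    while remaining and len(selected) < max_terms:
        scores = []
        for j in remaining:
            cols = selected + [j]
            A = np.column_stack([np.ones(n), X[:, cols]])
            beta, *_ = np.linalg.lstsq(A, y, rcond=None)
            rss = np.sum((y - A @ beta) ** 2)
            scores.append((n * np.log(rss / n) + 2.0 * (len(cols) + 1), j))
        aic, j = min(scores)
        if aic >= best_aic:                       # no candidate improves fit
            break
        best_aic = aic
        selected.append(j)
        remaining.remove(j)
    return [names[j] for j in selected]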
Interactive model evaluation tool based on IPython notebook
NASA Astrophysics Data System (ADS)
Balemans, Sophie; Van Hoey, Stijn; Nopens, Ingmar; Seuntjes, Piet
2015-04-01
In hydrological modelling, some kind of parameter optimization is usually performed. This can be the selection of a single best parameter set, a split into behavioural and non-behavioural parameter sets based on a selected threshold, or a posterior parameter distribution derived with a formal Bayesian approach. The selection of the criterion to measure the goodness of fit (likelihood or any objective function) is an essential step in all of these methodologies and will affect the final selected parameter subset. Moreover, the discriminative power of the objective function also depends on the time period used. In practice, the optimization process is an iterative procedure. As such, in the course of the modelling process, an increasing number of simulations is performed. However, the information carried by these simulation outputs is not always fully exploited. In this respect, we developed and present an interactive environment that enables the user to intuitively evaluate model performance. The aim is to explore the parameter space graphically and to visualize the impact of the selected objective function on model behaviour. First, a set of model simulation results is loaded along with the corresponding parameter sets and a data set of the same variable as the model outcome (mostly discharge). The ranges of the loaded parameter sets define the parameter space. The user selects the two parameters to be visualised, an objective function, and a time period of interest. Based on this information, a two-dimensional parameter response surface is created, which shows a scatter plot of the parameter combinations and assigns a color scale corresponding to the goodness of fit of each parameter combination. Finally, a slider is available to change the color mapping of the points: the slider provides a threshold to exclude non-behavioural parameter sets, and the color scale is only attributed to the remaining parameter sets. As such, by interactively changing the settings and interpreting the graph, the user gains insight into the model's structural behaviour. Moreover, a more deliberate choice of objective function and periods of high information content can be identified. The environment is written in an IPython notebook and uses the interactive functions provided by the IPython community. As such, the power of the IPython notebook as a development environment for scientific computing is illustrated (Shen, 2014).
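In an IPython/Jupyter notebook, the threshold slider and colored parameter scatter described above can be assembled from standard widgets. A minimal sketch assuming the simulation results are already loaded as arrays; all names are illustrative.

import numpy as np
import matplotlib.pyplot as plt
from ipywidgets import interact, FloatSlider

def response_surface(params, objective, p1=0, p2=1):
    # params: (n_sets, n_params) parameter sets; objective: (n_sets,)
    # goodness-of-fit values (lower = better); p1, p2: columns to display.
    def plot(threshold):
        keep = objective <= threshold            # behavioural sets only
        plt.scatter(params[~keep, p1], params[~keep, p2], c="lightgrey", s=8)
        sc = plt.scatter(params[keep, p1], params[keep, p2],
                         c=objective[keep], s=12)
        plt.colorbar(sc, label="objective function")
        plt.xlabel("parameter %d" % p1)
        plt.ylabel("parameter %d" % p2)
        plt.show()
    interact(plot, threshold=FloatSlider(min=float(objective.min()),
                                         max=float(objective.max()),
                                         value=float(np.median(objective))))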
Katsube, Takayuki; Wajima, Toshihiro; Ishibashi, Toru; Arjona Ferreira, Juan Camilo; Echols, Roger
2017-01-01
Cefiderocol, a novel parenteral siderophore cephalosporin, exhibits potent efficacy against most Gram-negative bacteria, including carbapenem-resistant strains. Since cefiderocol is excreted primarily via the kidneys, this study was conducted to develop a population pharmacokinetics (PK) model to determine dose adjustment based on renal function. Population PK models were developed based on data for cefiderocol concentrations in plasma, urine, and dialysate with a nonlinear mixed-effects model approach. Monte Carlo simulations were conducted to calculate the probability of target attainment (PTA) for the fraction of time during the dosing interval where the free drug concentration in plasma exceeds the MIC (Tf>MIC), for an MIC range of 0.25 to 16 μg/ml. For the simulations, dose regimens were selected to compare cefiderocol exposure among groups with different levels of renal function. The developed models well described the PK of cefiderocol for each renal function group. A dose of 2 g every 8 h with 3-h infusions provided >90% PTA for 75% Tf>MIC for an MIC of ≤4 μg/ml for patients with normal renal function, while a more frequent dose (every 6 h) could be used for patients with augmented renal function. A reduced dose and/or extended dosing interval was selected for patients with impaired renal function. A supplemental dose immediately after intermittent hemodialysis was proposed for patients requiring intermittent hemodialysis. The PK of cefiderocol could be adequately modeled, and the modeling-and-simulation approach suggested dose regimens based on renal function, ensuring drug exposure with adequate bactericidal effect. Copyright © 2016 American Society for Microbiology.
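The PTA calculation can be sketched with a toy Monte Carlo. This assumes a one-compartment model with first-order elimination and uses invented population PK parameters and free fraction; it is not the published cefiderocol model.

import numpy as np

rng = np.random.default_rng(0)

def ftmic(cl, v, dose=2000.0, tau=8.0, tinf=3.0, mic=4.0, fu=0.42):
    # Steady-state Tf>MIC for a one-compartment model with a 3-h infusion
    # every 8 h, built by superposing the contributions of past doses.
    t = np.arange(0.0, tau, 0.05)                 # one dosing interval [h]
    k, rate = cl / v, dose / tinf                 # elimination rate, mg/h
    conc = np.zeros_like(t)
    for m in range(20):                           # ~steady state by 20 doses
        ts = t + m * tau                          # time since that dose
        infused = np.minimum(ts, tinf)
        conc += ((rate / cl) * (1 - np.exp(-k * infused))
                 * np.exp(-k * (ts - infused)))
    return np.mean(fu * conc > mic)               # fraction of interval above MIC

# invented population PK: lognormal CL and V across 1000 virtual patients
cl = rng.lognormal(np.log(5.0), 0.3, 1000)        # clearance [L/h]
v = rng.lognormal(np.log(18.0), 0.2, 1000)        # volume [L]
pta = np.mean([ftmic(c, vol) >= 0.75 for c, vol in zip(cl, v)])
print("PTA for 75% Tf>MIC at MIC 4 ug/ml:", pta)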
Watershed scale response to climate change--Yampa River Basin, Colorado
Hay, Lauren E.; Battaglin, William A.; Markstrom, Steven L.
2012-01-01
General Circulation Model simulations of future climate through 2099 project a wide range of possible scenarios. To determine the sensitivity and potential effect of long-term climate change on the freshwater resources of the United States, the U.S. Geological Survey Global Change study, "An integrated watershed scale response to global change in selected basins across the United States", was started in 2008. The long-term goal of this national study is to provide the foundation for hydrologically based climate change studies across the nation. Fourteen basins for which the Precipitation Runoff Modeling System has been calibrated and evaluated were selected as study sites. The Precipitation Runoff Modeling System is a deterministic, distributed-parameter watershed model developed to evaluate the effects of various combinations of precipitation, temperature, and land use on streamflow and general basin hydrology. Output from five General Circulation Model simulations and four emission scenarios was used to develop an ensemble of climate-change scenarios for each basin. These ensembles were simulated with the corresponding Precipitation Runoff Modeling System model. This fact sheet summarizes the hydrologic effect and sensitivity of the Precipitation Runoff Modeling System simulations to climate change for the Yampa River Basin at Steamboat Springs, Colorado.
Optimum Vehicle Component Integration with InVeST (Integrated Vehicle Simulation Testbed)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ng, W; Paddack, E; Aceves, S
2001-12-27
We have developed an Integrated Vehicle Simulation Testbed (InVeST). InVeST is based on the concept of Co-simulation, and it allows the development of virtual vehicles that can be analyzed and optimized as an overall integrated system. The virtual vehicle is defined by selecting different vehicle components from a component library. Vehicle component models can be written in multiple programming languages running on different computer platforms. At the same time, InVeST provides full protection for proprietary models. Co-simulation is a cost-effective alternative to competing methodologies, such as developing a translator or selecting a single programming language for all vehicle components. InVeST has recently been demonstrated using a transmission model and a transmission controller model. The transmission model was written in SABER and ran on a Sun/Solaris workstation, while the transmission controller was written in MATRIXx and ran on a PC running Windows NT. The demonstration was successfully performed. Future plans include the application of Co-simulation and InVeST to the analysis and optimization of multiple complex systems, including those of Intelligent Transportation Systems.
John, Shalini; Thangapandian, Sundarapandian; Lee, Keun Woo
2012-01-01
Human pancreatic cholesterol esterase (hCEase) is one of the lipases involved in the digestion of a large and broad spectrum of substrates, including triglycerides, phospholipids, and cholesteryl esters. The presence of bile salts is known to be very important for the activation of hCEase. Molecular dynamics simulations were performed for the apo form and the bile-salt-complexed form of hCEase using the co-ordinates of two bile salts from bovine CEase. The stability of the systems throughout the simulation time was checked, and two representative structures from the highly populated regions were selected using cluster analysis. These two representative structures were used in pharmacophore model generation. The generated pharmacophore models were validated and used in database screening. The screened hits were refined for their drug-like properties based on Lipinski's rule of five and ADMET properties. The drug-like compounds were further refined by molecular docking using the GOLD program, based on the GOLD fitness score, mode of binding, and molecular interactions with the active-site amino acids. Finally, three hits with novel scaffolds were selected as potential leads for the design of novel and potent hCEase inhibitors. The stability of the binding modes and molecular interactions of these final hits was re-assured by molecular dynamics simulations.
Finnerty, Justin John; Peyser, Alexander; Carloni, Paolo
2015-01-01
Cation selective channels constitute the gate for ion currents through the cell membrane. Here we present an improved statistical mechanical model based on atomistic structural information, cation hydration state and without tuned parameters that reproduces the selectivity of biological Na+ and Ca2+ ion channels. The importance of the inclusion of step-wise cation hydration in these results confirms the essential role partial dehydration plays in the bacterial Na+ channels. The model, proven reliable against experimental data, could be straightforwardly used for designing Na+ and Ca2+ selective nanopores.
Structure refinement of membrane proteins via molecular dynamics simulations.
Dutagaci, Bercem; Heo, Lim; Feig, Michael
2018-07-01
A refinement protocol based on physics-based techniques established for water-soluble proteins is tested for membrane protein structures. Initial structures were generated by homology modeling and sampled via molecular dynamics simulations in explicit lipid bilayer and aqueous solvent systems. Snapshots from the simulations were selected based on scoring with either knowledge-based or implicit membrane-based scoring functions and averaged to obtain refined models. The protocol resulted in consistent and significant refinement of the membrane protein structures, similar to the performance of refinement methods for soluble proteins. Refinement success was similar between sampling in the presence of lipid bilayers and in aqueous solvent, but the presence of lipid bilayers may benefit the improvement of lipid-facing residues. Scoring with knowledge-based functions (DFIRE and RWplus) was found to be as good as scoring using implicit membrane-based scoring functions, suggesting that differences in internal packing are more important than orientations relative to the membrane during the refinement of membrane protein homology models. © 2018 Wiley Periodicals, Inc.
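The select-and-average step can be sketched in a few lines; the coordinates and scores here are random placeholders for the MD snapshots and the knowledge-based or implicit-membrane scores:

    import numpy as np

    rng = np.random.default_rng(0)
    coords = rng.normal(size=(1000, 300, 3))   # 1000 snapshots, 300 atoms
    scores = rng.normal(size=1000)             # lower = better, as for DFIRE-like scores

    best = np.argsort(scores)[:50]             # keep the 50 best-scoring snapshots
    refined = coords[best].mean(axis=0)        # averaged model (would then be relaxed)
    print(refined.shape)                       # (300, 3)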
Sutton, Steven C; Hu, Mingxiu
2006-05-05
Many mathematical models have been proposed for establishing an in vitro/in vivo correlation (IVIVC). The traditional IVIVC model building process consists of 5 steps: deconvolution, model fitting, convolution, prediction error evaluation, and cross-validation. This is a time-consuming process, and typically only a few models are tested for any given data set. The objectives of this work were to (1) propose a statistical tool to screen models for further development of an IVIVC, (2) evaluate the performance of each model under different circumstances, and (3) investigate the effectiveness of common statistical model selection criteria for choosing IVIVC models. A computer program was developed to explore which model(s) would be most likely to work well with a random variation from the original formulation. The process used Monte Carlo simulation techniques to build IVIVC models. Data-based model selection criteria (Akaike Information Criterion [AIC], R2) and the probability of passing the Food and Drug Administration "prediction error" requirement were calculated. Several real data sets representing a broad range of release profiles are used to illustrate the process and to demonstrate the advantages of this automated approach over the traditional one. The Hixson-Crowell and Weibull models were often preferred over the linear model. When evaluating whether a Level A IVIVC model was possible, the AIC model selection criterion generally selected the best model. We believe that the proposed approach may be a rapid tool to determine which IVIVC model (if any) is the most applicable.
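A hedged sketch of the screening idea, not the authors' program: fit three candidate release models to a made-up dissolution profile and rank them with the standard AIC expression:

    import numpy as np
    from scipy.optimize import curve_fit

    t = np.array([0.5, 1, 2, 4, 6, 8, 12])                        # h
    f_obs = np.array([0.18, 0.31, 0.50, 0.72, 0.84, 0.90, 0.96])  # fraction released

    models = {
        "linear":         (lambda t, k: np.clip(k * t, 0, 1), [0.1]),
        "Hixson-Crowell": (lambda t, k: 1 - np.clip(1 - k * t, 0, None) ** 3, [0.05]),
        "Weibull":        (lambda t, td, b: 1 - np.exp(-(t / td) ** b), [2.0, 1.0]),
    }

    for name, (fn, p0) in models.items():
        p, _ = curve_fit(fn, t, f_obs, p0=p0, maxfev=10000)
        rss = np.sum((f_obs - fn(t, *p)) ** 2)
        n, k = len(t), len(p)
        aic = n * np.log(rss / n) + 2 * k                         # data-based criterion
        print(f"{name:15s} AIC = {aic:7.2f}  params = {np.round(p, 3)}")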
Landguth, Erin L; Bearlin, Andrew; Day, Casey; Dunham, Jason B.
2016-01-01
1. Combining landscape demographic and genetics models offers powerful methods for addressing questions for eco-evolutionary applications. 2. Using two illustrative examples, we present Cost–Distance Meta-POPulation, a program to simulate changes in neutral and/or selection-driven genotypes through time as a function of individual-based movement, complex spatial population dynamics, and multiple and changing landscape drivers. 3. Cost–Distance Meta-POPulation provides a novel tool for questions in landscape genetics by incorporating population viability analysis, while linking directly to conservation applications.
Scheper, Carsten; Wensch-Dorendorf, Monika; Yin, Tong; Dressel, Holger; Swalve, Herrmann; König, Sven
2016-06-29
Intensified selection of polled individuals has recently gained importance in predominantly horned dairy cattle breeds as an alternative to routine dehorning. The status quo of the current polled breeding pool of genetically closely related artificial insemination sires with lower breeding values for performance traits raises questions regarding the effects of intensified selection based on this founder pool. We developed a stochastic simulation framework that combines the stochastic simulation software QMSim and a self-designed R program named QUALsim that acts as an external extension. Two traits were simulated in a dairy cattle population for 25 generations: one quantitative (QMSim) and one qualitative trait with Mendelian inheritance (i.e. polledness, QUALsim). The assignment scheme for qualitative trait genotypes established realistic initial breeding situations regarding allele frequencies, true breeding values for the quantitative trait, and genetic relatedness. Intensified selection for polled cattle was achieved using an approach that weights estimated breeding values in the animal best linear unbiased prediction model for the quantitative trait depending on genotypes or phenotypes for the polled trait with a user-defined weighting factor. Selection response for the polled trait was highest in the selection scheme based on genotypes. Selection based on phenotypes led to significantly lower allele frequencies for polled. The male selection path played a significantly greater role in the fast dissemination of polled alleles than female selection strategies. Fixation of the polled allele implies selection based on polled genotypes among males. In comparison to a base breeding scenario that did not take polledness into account, intensive selection for polled substantially reduced genetic gain for this quantitative trait after 25 generations. Reducing selection intensity for polled males while maintaining strong selection intensity among females simultaneously decreased losses in genetic gain and achieved a final allele frequency of 0.93 for polled. A fast transition to a completely polled population through intensified selection for polled conflicted with the preservation of high genetic gain for the quantitative trait. Selection on male polled genotypes with moderate weighting, and selection on female polled phenotypes with high weighting, could be a suitable compromise regarding all important breeding aspects.
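The genotype-weighting idea can be illustrated with a toy selection index; the EBVs, allele frequencies, and weighting factors below are invented, and this sketch is not the QMSim/QUALsim framework itself:

    import numpy as np

    rng = np.random.default_rng(7)
    n = 1000
    ebv = rng.normal(0.0, 1.0, n)              # estimated breeding values
    polled_alleles = rng.choice([0, 1, 2], size=n, p=[0.8, 0.18, 0.02])

    for w in [0.0, 0.5, 2.0]:                  # user-defined weighting factor
        index = ebv + w * polled_alleles       # weighted index for ranking
        top = np.argsort(index)[::-1][:50]     # select, say, the top 5% of males
        print(f"w={w:3.1f}: mean EBV of selected = {ebv[top].mean():5.2f}, "
              f"polled allele freq among selected = {polled_alleles[top].mean() / 2:.2f}")

Raising the weight drives the polled allele frequency up while eroding the mean EBV of the selected group, which is the trade-off the study quantifies.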
Social power and opinion formation in complex networks
NASA Astrophysics Data System (ADS)
Jalili, Mahdi
2013-02-01
In this paper we investigate the effects of social power on the evolution of opinions in model networks as well as in a number of real social networks. A continuous opinion formation model is considered and the analysis is performed through numerical simulation. Social power is given to a proportion of agents selected either randomly or based on their degrees. As artificial network structures, we consider scale-free networks constructed through preferential attachment and Watts-Strogatz networks. Numerical simulations show that scale-free networks with degree-based social power on the hub nodes have an optimal case in which the largest number of nodes reaches a consensus. However, giving power to a random selection of nodes did not improve consensus properties. Introducing social power in Watts-Strogatz networks did not significantly change the consensus profile.
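A minimal sketch of such a simulation, assuming networkx is available: a DeGroot-style continuous opinion update on a Barabasi-Albert network in which 5% of agents carry extra social power (extra weight in their neighbours' averaging step), assigned either to hubs or at random; all parameter values are illustrative:

    import numpy as np
    import networkx as nx

    def consensus_spread(power_on_hubs, n=500, steps=200, seed=3):
        rng = np.random.default_rng(seed)
        g = nx.barabasi_albert_graph(n, 3, seed=seed)
        x = rng.uniform(-1, 1, n)                        # initial opinions
        power = np.ones(n)
        k = n // 20                                      # empower 5% of agents
        if power_on_hubs:
            chosen = np.argsort([g.degree(i) for i in range(n)])[-k:]
        else:
            chosen = rng.choice(n, k, replace=False)
        power[chosen] = 10.0
        for _ in range(steps):
            x_new = x.copy()
            for i in range(n):
                nbrs = list(g.neighbors(i)) + [i]
                x_new[i] = np.average(x[nbrs], weights=power[nbrs])
            x = x_new
        return x.std()                                   # small spread ~ consensus

    print("degree-based power:", consensus_spread(True))
    print("random power:      ", consensus_spread(False))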
Estimation of Carbon Flux of Forest Ecosystem over Qilian Mountains by BIOME-BGC Model
NASA Astrophysics Data System (ADS)
Yan, Min; Tian, Xin; Li, Zengyuan; Chen, Erxue; Li, Chunmei
2014-11-01
The gross primary production (GPP) and net ecosystem exchange (NEE) are important indicators for carbon fluxes. This study aims at evaluating the forest GPP and NEE over the Qilian Mountains at large scale using meteorological, remotely sensed, and other ancillary data. To realize this, the widely used ecological-process-based model, Biome-BGC, and the remote-sensing-based model, the MODIS GPP algorithm, were selected for the simulation of the forest carbon fluxes. The combination of these two models was based on calibrating the Biome-BGC with the optimized MODIS GPP algorithm. The simulated GPP and NEE values were evaluated against the eddy covariance observed GPPs and NEEs, and good agreement was reached, with R2 = 0.76 and 0.67, respectively.
Discriminative Projection Selection Based Face Image Hashing
NASA Astrophysics Data System (ADS)
Karabat, Cagatay; Erdogan, Hakan
Face image hashing is an emerging method used in biometric verification systems. In this paper, we propose a novel face image hashing method based on a new technique called discriminative projection selection. We apply the Fisher criterion for selecting the rows of a random projection matrix in a user-dependent fashion. Moreover, another contribution of this paper is to employ a bimodal Gaussian mixture model at the quantization step. Our simulation results on three different databases demonstrate that the proposed method has superior performance in comparison to previously proposed random projection based methods.
Hou, Zeyu; Lu, Wenxi; Xue, Haibo; Lin, Jin
2017-08-01
Surrogate-based simulation-optimization is an effective approach for optimizing the surfactant-enhanced aquifer remediation (SEAR) strategy for clearing DNAPLs. The performance of the surrogate model, which replaces the simulation model in order to reduce the computational burden, is the key to such studies. However, previous research has generally been based on a stand-alone surrogate model and has rarely tried to sufficiently improve the approximation accuracy of the surrogate model by combining various methods. In this regard, we present set pair analysis (SPA) as a new method to build an ensemble surrogate (ES) model, and conducted a comparative study to select a better ES modeling pattern for SEAR strategy optimization problems. Surrogate models were developed using a radial basis function artificial neural network (RBFANN), support vector regression (SVR), and Kriging. One ES model assembles the RBFANN, SVR, and Kriging models using set pair weights according to their performance; the other assembles several Kriging models (the best of the three surrogate modeling methods) built with different training sample datasets. Finally, an optimization model, in which the ES model was embedded, was established to obtain the optimal remediation strategy. The results showed that the residuals of the outputs between the best ES model and the simulation model for 100 testing samples were lower than 1.5%. Using an ES model instead of the simulation model was critical for considerably reducing the computation time of the simulation-optimization process while maintaining high computational accuracy. Copyright © 2017 Elsevier B.V. All rights reserved.
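A simplified stand-in for the ensemble surrogate idea: three fitted surrogates are combined with weights proportional to their inverse validation error. This inverse-error weighting replaces the paper's set pair analysis weights, and a cheap synthetic function stands in for the SEAR simulation model:

    import numpy as np
    from sklearn.svm import SVR
    from sklearn.kernel_ridge import KernelRidge
    from sklearn.gaussian_process import GaussianProcessRegressor

    def simulation_model(x):                     # cheap stand-in for expensive runs
        return np.sin(3 * x[:, 0]) + 0.5 * x[:, 1] ** 2

    rng = np.random.default_rng(0)
    X_train, X_val = rng.uniform(-1, 1, (80, 2)), rng.uniform(-1, 1, (40, 2))
    y_train, y_val = simulation_model(X_train), simulation_model(X_val)

    surrogates = [SVR(C=10.0), KernelRidge(kernel="rbf", gamma=2.0),
                  GaussianProcessRegressor()]    # SVR / RBF-type / Kriging-type
    weights = []
    for s in surrogates:
        s.fit(X_train, y_train)
        mse = np.mean((s.predict(X_val) - y_val) ** 2)
        weights.append(1.0 / mse)                # better surrogate -> larger weight
    weights = np.array(weights) / np.sum(weights)

    X_test = rng.uniform(-1, 1, (100, 2))
    ensemble = sum(w * s.predict(X_test) for w, s in zip(weights, surrogates))
    err = np.abs(ensemble - simulation_model(X_test)).mean()
    print("weights:", np.round(weights, 3), " mean abs error:", round(err, 4))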
Simulation of Lunar Surface Communications Network Exploration Scenarios
NASA Technical Reports Server (NTRS)
Linsky, Thomas W.; Bhasin, Kul B.; White, Alex; Palangala, Srihari
2006-01-01
Simulation and modeling of surface-based communications networks provide a rapid and cost-effective means of requirement analysis, protocol assessment, and tradeoff studies. Robust testing is especially important for exploration systems, where the cost of deployment is high and systems cannot be easily replaced or repaired. However, simulation of the envisioned exploration networks cannot be achieved using commercial off-the-shelf network simulation software: models for the nonstandard, non-COTS protocols used aboard space systems are not readily available. This paper addresses the simulation of realistic scenarios representative of the activities that will take place on the surface of the Moon, including the selection of candidate network architectures and the development of an integrated simulation tool using OPNET Modeler capable of faithfully modeling those communications scenarios in variable-delay, dynamic surface environments. Scenarios for exploration missions, OPNET development, limitations, and simulation results are provided and discussed.
Latent Class Analysis of Incomplete Data via an Entropy-Based Criterion
Larose, Chantal; Harel, Ofer; Kordas, Katarzyna; Dey, Dipak K.
2016-01-01
Latent class analysis is used to group categorical data into classes via a probability model. Model selection criteria then judge how well the model fits the data. When addressing incomplete data, the current methodology restricts the imputation to a single, pre-specified number of classes. We seek to develop an entropy-based model selection criterion that does not restrict the imputation to one number of clusters. Simulations show the new criterion performing well against the current standards of AIC and BIC, while a family studies application demonstrates how the criterion provides more detailed and useful results than AIC and BIC. PMID:27695391
Simulation of tunneling construction methods of the Cisumdawu toll road
NASA Astrophysics Data System (ADS)
Abduh, Muhamad; Sukardi, Sapto Nugroho; Ola, Muhammad Rusdian La; Ariesty, Anita; Wirahadikusumah, Reini D.
2017-11-01
Simulation can be used as a tool for planning and analyzing a construction method. Using simulation techniques, a contractor can optimally design the resources associated with a construction method and compare it to other methods based on several criteria, such as productivity, waste, and cost. This paper discusses the use of simulation of the Norwegian Method of Tunneling (NMT) for a 472-meter tunneling work in the Cisumdawu Toll Road project. Primary and secondary data were collected to provide useful information for the simulation as well as problems that may be faced by the contractor. The method was modelled using CYCLONE and then simulated using WebCYCLONE. The simulation showed the duration of the project, with the duration of each work task modelled based on literature review, machine productivity, and several assumptions. The simulation also showed the total cost of the project, modelled based on unit costs from construction and building journals and from online websites of local and international suppliers. The advantages and disadvantages of the method were analyzed based on its productivity, waste, and cost. The simulation concluded that the total cost of this operation is about Rp. 900,437,004,599 and the total duration of the tunneling operation is 653 days. The results of the simulation will be used as a recommendation to the contractor before the implementation of the already selected tunneling operation.
Circular analysis in complex stochastic systems
Valleriani, Angelo
2015-01-01
Ruling out observations can lead to wrong models. This danger arises unwittingly when one selects observations, experiments, simulations or time-series based on their outcome. In stochastic processes, conditioning on the future outcome biases all local transition probabilities and makes them consistent with the selected outcome. This circular self-consistency leads to models that are inconsistent with physical reality. It is also the reason why models built solely on macroscopic observations are prone to this fallacy. PMID:26656656
Research on numerical simulation technology about regional important pollutant diffusion of haze
NASA Astrophysics Data System (ADS)
Du, Boying; Ma, Yunfeng; Li, Qiangqiang; Wang, Qi; Hu, Qiongqiong; Bian, Yushan
2018-02-01
In order to analyze the formation of haze in Shenyang and the factors that affect the diffusion of pollutants, the simulation experiment adopted in this paper is based on a coupled WRF/CALPUFF numerical model. PM10 in Shenyang during the period from March 1 to 8 was selected for the simulation experiment, and the PM10 in the regional haze episode was simulated. More than 120 enterprise emission-source points were surveyed for this experiment. Data from 11 air quality monitoring points were compared with the simulation results. The contribution rate of each typical enterprise to air quality was analyzed, the correctness of the simulation results was verified, and the model was then used to establish a prediction model.
NASA Astrophysics Data System (ADS)
Tang, Jian; Qiao, Junfei; Wu, ZhiWei; Chai, Tianyou; Zhang, Jian; Yu, Wen
2018-01-01
Frequency spectral data of mechanical vibration and acoustic signals relate to difficult-to-measure production quality and quantity parameters of complex industrial processes. A selective ensemble (SEN) algorithm can be used to build a soft sensor model of these process parameters by selectively fusing valued information from different perspectives. However, a combination of several optimized ensemble sub-models with SEN cannot guarantee the best prediction model. In this study, we use several techniques to construct a data-driven model of industrial process parameters from mechanical vibration and acoustic frequency spectra, based on the selective fusion of multi-condition samples and multi-source features. A multi-layer SEN (MLSEN) strategy is used to simulate the domain expert's cognitive process. A genetic algorithm and kernel partial least squares are used to construct the inside-layer SEN sub-model based on each mechanical vibration and acoustic frequency spectral feature subset. Branch-and-bound and adaptive weighted fusion algorithms are integrated to select and combine the outputs of the inside-layer SEN sub-models. Then, the outside-layer SEN is constructed. Thus, "sub-sampling training examples"-based and "manipulating input features"-based ensemble construction methods are integrated, thereby realizing a selective information fusion process based on multi-condition history samples and multi-source input features. This novel approach is applied to a laboratory-scale ball mill grinding process. A comparison with other methods indicates that the proposed MLSEN approach effectively models mechanical vibration and acoustic signals.
NASA Astrophysics Data System (ADS)
Caviedes-Voullième, Daniel; García-Navarro, Pilar; Murillo, Javier
2012-07-01
Hydrological simulation of rain-runoff processes is often performed with lumped models which rely on calibration to generate storm hydrographs and study catchment response to rain. In this paper, a distributed, physically-based numerical model is used for runoff simulation in a mountain catchment. This approach offers two advantages. The first is that by using shallow-water equations for runoff flow, there is less freedom to calibrate routing parameters (as compared to, for example, synthetic hydrograph methods). The second is that spatial distributions of water depth and velocity can be obtained. Furthermore, interactions among the various hydrological processes can be modeled in a physically-based approach which may depend on transient and spatially distributed factors. On the other hand, the numerical approach undertaken here relies on accurate terrain representation and mesh selection, which also significantly affects the computational cost of the simulations. Hence, we investigate the response of a gauged catchment with this distributed approach. The methodology consists of analyzing the effects that the mesh has on the simulations by using a range of meshes. Next, friction is applied to the model and the response to variations and interaction with the mesh is studied. Finally, a first approach with the well-known SCS Curve Number method is studied to evaluate its behavior when coupled with a shallow-water model for runoff flow. The results show that mesh selection is of great importance, since it may affect the results in a magnitude as large as physical factors, such as friction. Furthermore, results proved to be less sensitive to roughness spatial distribution than to mesh properties. Finally, the results indicate that SCS-CN may not be suitable for simulating hydrological processes together with a shallow-water model.
Newcom, D W; Baas, T J; Stalder, K J; Schwab, C R
2005-04-01
Three selection models were evaluated to compare selection candidate rankings based on EBV and to evaluate subsequent effects of model-derived EBV on the selection differential and expected genetic response in the population. Data were collected from carcass- and ultrasound-derived estimates of loin i.m. fat percent (IMF) in a population of Duroc swine under selection to increase IMF. The models compared were Model 1, a two-trait animal model used in the selection experiment that included ultrasound IMF from all pigs scanned and carcass IMF from pigs slaughtered to estimate breeding values for both carcass (C1) and ultrasound IMF (U1); Model 2, a single-trait animal model that included ultrasound IMF values on all pigs scanned to estimate breeding values for ultrasound IMF (U2); and Model 3, a multiple-trait animal model including carcass IMF from slaughtered pigs and the first three principal components from a total of 10 image parameters averaged across four longitudinal ultrasound images to estimate breeding values for carcass IMF (C3). Rank correlations between breeding value estimates for U1 and C1, U1 and U2, and C1 and C3 were 0.95, 0.97, and 0.92, respectively. Other rank correlations were 0.86 or less. In the selection experiment, approximately the top 10% of boars and 50% of gilts were selected. Selection differentials for pigs in Generation 3 were greatest when ranking pigs based on C1, followed by U1, U2, and C3. In addition, selection differential and estimated response were evaluated when simulating selection of the top 1, 5, and 10% of sires and 50% of dams. Results of this analysis indicated the greatest selection differential was for selection based on C1. The greatest loss in selection differential was found for selection based on C3 when selecting the top 10 and 1% of boars and 50% of gilts. The loss in estimated response when selecting varying percentages of boars and the top 50% of gilts was greatest when selection was based on C3 (16.0 to 25.8%) and least for selection based on U1 (1.3 to 10.9%). Estimated genetic change from selection based on carcass IMF was greater than selection based on ultrasound IMF. Results show that selection based on a combination of ultrasonically predicted IMF and sib carcass IMF produced the greatest selection differentials and should lead to the greatest genetic change.
A methodology for the assessment of manned flight simulator fidelity
NASA Technical Reports Server (NTRS)
Hess, Ronald A.; Malsbury, Terry N.
1989-01-01
A relatively simple analytical methodology for assessing the fidelity of manned flight simulators for specific vehicles and tasks is offered. The methodology is based upon an application of a structural model of the human pilot, including motion cue effects. In particular, predicted pilot/vehicle dynamic characteristics are obtained with and without simulator limitations. A procedure for selecting model parameters can be implemented, given a probable pilot control strategy. In analyzing a pair of piloting tasks for which flight and simulation data are available, the methodology correctly predicted the existence of simulator fidelity problems. The methodology permitted the analytical evaluation of a change in simulator characteristics and indicated that a major source of the fidelity problems was a visual time delay in the simulation.
Modelling and simulation techniques for membrane biology.
Burrage, Kevin; Hancock, John; Leier, André; Nicolau, Dan V
2007-07-01
One of the most important aspects of Computational Cell Biology is the understanding of the complicated dynamical processes that take place on plasma membranes. These processes are often so complicated that purely temporal models cannot always adequately capture the dynamics. On the other hand, spatial models can have large computational overheads. In this article, we review some of these issues with respect to chemistry, membrane microdomains and anomalous diffusion and discuss how to select appropriate modelling and simulation paradigms based on some or all the following aspects: discrete, continuous, stochastic, delayed and complex spatial processes.
Heuristic for Critical Machine Based a Lot Streaming for Two-Stage Hybrid Production Environment
NASA Astrophysics Data System (ADS)
Vivek, P.; Saravanan, R.; Chandrasekaran, M.; Pugazhenthi, R.
2017-03-01
Lot streaming in a hybrid flowshop (HFS) is encountered in many real-world problems. This paper deals with a heuristic approach to lot streaming based on critical machine considerations for a two-stage hybrid flowshop. The first stage has two identical parallel machines and the second stage has only one machine. The second-stage machine is considered critical for valid reasons; this kind of problem is known to be NP-hard. A mathematical model was developed for the selected problem. The simulation modelling and analysis were carried out in Extend V6 software. A heuristic was developed for obtaining the optimal lot-streaming schedule. Eleven cases of lot streaming were considered. The proposed heuristic was verified and validated by real-time simulation experiments. All possible lot-streaming strategies, and all possible sequences under each strategy, were simulated and examined. The heuristic yielded the optimal schedule consistently in all eleven cases. A procedure for identifying the best lot-streaming strategy was suggested.
Adjustment and validation of a simulation tool for CSP plants based on parabolic trough technology
NASA Astrophysics Data System (ADS)
García-Barberena, Javier; Ubani, Nora
2016-05-01
The present work presents the validation process carried out for a simulation tool especially designed for the energy yield assessment of concentrating solar plants based on parabolic trough (PT) technology. The validation has been carried out by comparing the model estimations with real data collected from a commercial CSP plant. In order to adjust the model parameters used for the simulation, 12 different days were selected from one year of operational data measured at the real plant. The 12 days were simulated and the estimations compared with the measured data, focusing on the most important variables from the simulation point of view: temperatures, pressures and mass flow of the solar field, gross power, parasitic power, and net power delivered by the plant. Based on these 12 days, the key parameters of the model were properly fixed and the simulation of a whole year was performed. The results obtained for a complete year of simulation showed very good agreement for the gross and net total electricity production: the estimations for these magnitudes show a 1.47% and 2.02% bias, respectively. The results proved that the simulation software describes the real operation of the power plant with great accuracy and correctly reproduces its transient behavior.
NASA Astrophysics Data System (ADS)
Attada, Raju; Kumar, Prashant; Dasari, Hari Prasad
2018-04-01
Assessment of land surface models (LSMs) for monsoon studies over the Indian summer monsoon (ISM) region is essential. In this study, we evaluate the skill of LSMs at 10 km spatial resolution in simulating the 2010 monsoon season. The thermal diffusion scheme (TDS), rapid update cycle (RUC), and Noah and Noah with multi-parameterization (Noah-MP) LSMs are chosen based on their degree of complexity, ranging from a simple slab model to multi-parameterization options, coupled with the Weather Research and Forecasting (WRF) model. Model results are compared with the available in situ observations and reanalysis fields. The sensitivity of monsoon elements, surface characteristics, and vertical structures to different LSMs is discussed. Our results reveal that the monsoon features are reproduced by the WRF model with all LSMs, but with some regional discrepancies. The model simulations with the selected LSMs are able to reproduce the broad rainfall patterns, orography-induced rainfall over the Himalayan region, and the dry zone over the southern tip of India. An unrealistic precipitation pattern over the equatorial western Indian Ocean is simulated in the WRF-LSM-based experiments. The spatial and temporal distributions of top 2-m soil characteristics (soil temperature and soil moisture) are well represented in the RUC and Noah-MP LSM-based experiments during the ISM. Results show that the WRF simulations with the RUC, Noah, and Noah-MP LSM-based experiments significantly improved the skill for 2-m temperature and moisture compared to the TDS (chosen as a base) LSM-based experiments. Furthermore, the simulations with the Noah, RUC, and Noah-MP LSMs exhibit minimum error in thermodynamic fields. In the case of surface wind speed, the TDS LSM performed better than the other LSM experiments. A significant improvement is noticeable in simulating rainfall by the WRF model with the Noah, RUC, and Noah-MP LSMs over the TDS LSM. Thus, this study emphasizes the importance of choosing/improving LSMs for simulating the ISM phenomena in a regional model.
A time domain frequency-selective multivariate Granger causality approach.
Leistritz, Lutz; Witte, Herbert
2016-08-01
The investigation of effective connectivity is one of the major topics in computational neuroscience to understand the interaction between spatially distributed neuronal units of the brain. Thus, a wide variety of methods has been developed during the last decades to investigate functional and effective connectivity in multivariate systems. Their spectrum ranges from model-based to model-free approaches with a clear separation into time and frequency range methods. We present in this simulation study a novel time domain approach based on Granger's principle of predictability, which allows frequency-selective considerations of directed interactions. It is based on a comparison of prediction errors of multivariate autoregressive models fitted to systematically modified time series. These modifications are based on signal decompositions, which enable a targeted cancellation of specific signal components with specific spectral properties. Depending on the embedded signal decomposition method, a frequency-selective or data-driven signal-adaptive Granger Causality Index may be derived.
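The prediction-error comparison at the heart of Granger's principle can be sketched as follows; the frequency-selective signal-modification step of the paper is omitted here, so this shows only the basic time-domain index on a synthetic bivariate system:

    import numpy as np

    rng = np.random.default_rng(2)
    n, p = 2000, 2                                   # samples, model order
    x = np.zeros(n); y = np.zeros(n)
    for t in range(2, n):                            # by construction, y drives x
        y[t] = 0.6 * y[t - 1] - 0.3 * y[t - 2] + rng.normal()
        x[t] = 0.5 * x[t - 1] + 0.4 * y[t - 1] + rng.normal()

    def ar_resid_var(target, regressors):
        """Least-squares AR fit on lagged regressors; returns residual variance."""
        T = len(target)
        rows = [np.concatenate([r[t - p:t][::-1] for r in regressors])
                for t in range(p, T)]
        A = np.asarray(rows)
        coef, *_ = np.linalg.lstsq(A, target[p:], rcond=None)
        return np.var(target[p:] - A @ coef)

    full = ar_resid_var(x, [x, y])                   # past of x and y
    reduced = ar_resid_var(x, [x])                   # past of x only
    print("GCI y->x:", np.log(reduced / full))       # > 0: y helps predict x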
NASA Astrophysics Data System (ADS)
Peng, Chong; Wang, Lun; Liao, T. Warren
2015-10-01
Chatter has become a critical factor hindering machining quality and productivity in machining processes. To avoid cutting chatter, a new method based on a dynamic cutting force simulation model and a support vector machine (SVM) is presented for the prediction of chatter stability lobes. The cutting force is selected as the monitoring signal, and wavelet energy entropy theory is used to extract the feature vectors. A support vector machine is constructed using the MATLAB LIBSVM toolbox for pattern classification based on the feature vectors derived from the experimental cutting data. Combined with the dynamic cutting force simulation model, the stability lobes diagram (SLD) can then be estimated. Finally, the predicted results are compared with existing methods such as the zero-order analytical (ZOA) and semi-discretization (SD) methods, as well as actual cutting experimental results, to confirm the validity of this new method.
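A hedged sketch of the pattern-classification step, here with scikit-learn rather than LIBSVM: an SVM is trained on labelled (spindle speed, depth of cut) points and queried for the chatter onset. Real feature vectors would be wavelet energy entropies of measured cutting forces, whereas the labels here come from a made-up lobe shape:

    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(5)
    speed = rng.uniform(2000, 12000, 400)              # rpm
    depth = rng.uniform(0.0, 5.0, 400)                 # mm
    limit = 2.0 + 1.5 * np.abs(np.sin(speed / 1200.0)) # fake stability limit
    stable = (depth < limit).astype(int)               # 1 = stable cut, 0 = chatter

    X = np.column_stack([speed / 12000.0, depth / 5.0])  # scaled features
    clf = SVC(kernel="rbf", C=10.0, gamma=5.0).fit(X, stable)

    # For each speed, the lowest depth classified as chatter approximates a
    # point on the stability lobes diagram (SLD).
    for s in np.linspace(2500, 11500, 5):
        depths = np.linspace(0, 5, 200)
        grid = np.column_stack([np.full_like(depths, s / 12000.0), depths / 5.0])
        pred = clf.predict(grid)
        onset = depths[np.argmax(pred == 0)] if (pred == 0).any() else np.nan
        print(f"speed {s:7.0f} rpm: predicted chatter onset near {onset:.2f} mm")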
Island-Model Genomic Selection for Long-Term Genetic Improvement of Autogamous Crops.
Yabe, Shiori; Yamasaki, Masanori; Ebana, Kaworu; Hayashi, Takeshi; Iwata, Hiroyoshi
2016-01-01
Acceleration of genetic improvement of autogamous crops such as wheat and rice is necessary to increase cereal production in response to the global food crisis. Population and pedigree methods of breeding, which are based on inbred line selection, are used commonly in the genetic improvement of autogamous crops. These methods, however, produce a few novel combinations of genes in a breeding population. Recurrent selection promotes recombination among genes and produces novel combinations of genes in a breeding population, but it requires inaccurate single-plant evaluation for selection. Genomic selection (GS), which can predict genetic potential of individuals based on their marker genotype, might have high reliability of single-plant evaluation and might be effective in recurrent selection. To evaluate the efficiency of recurrent selection with GS, we conducted simulations using real marker genotype data of rice cultivars. Additionally, we introduced the concept of an "island model" inspired by evolutionary algorithms that might be useful to maintain genetic variation through the breeding process. We conducted GS simulations using real marker genotype data of rice cultivars to evaluate the efficiency of recurrent selection and the island model in an autogamous species. Results demonstrated the importance of producing novel combinations of genes through recurrent selection. An initial population derived from admixture of multiple bi-parental crosses showed larger genetic gains than a population derived from a single bi-parental cross in whole cycles, suggesting the importance of genetic variation in an initial population. The island-model GS better maintained genetic improvement in later generations than the other GS methods, suggesting that the island-model GS can utilize genetic variation in breeding and can retain alleles with small effects in the breeding population. The island-model GS will become a new breeding method that enhances the potential of genomic selection in autogamous crops, especially bringing long-term improvement.
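A highly simplified, haploid toy of the island-model idea: GEBVs from ridge regression stand in for genomic prediction, top lines are intercrossed within each island, and islands occasionally exchange migrants. Every number and modelling choice below is invented:

    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(11)
    n_mark, pop, n_isl = 200, 60, 4
    beta = rng.normal(0, 0.3, n_mark)                       # true marker effects

    def cross(a, b):                                        # uniform recombination
        mask = rng.integers(0, 2, a.size).astype(bool)
        return np.where(mask, a, b)

    islands = [rng.integers(0, 2, (pop, n_mark)) for _ in range(n_isl)]
    for gen in range(20):
        for i, G in enumerate(islands):
            pheno = G @ beta + rng.normal(0, 1.0, pop)      # noisy single-plant values
            gebv = Ridge(alpha=10.0).fit(G, pheno).predict(G)
            top = G[np.argsort(gebv)[::-1][:10]]            # select top 10 by GEBV
            islands[i] = np.array([cross(top[rng.integers(10)], top[rng.integers(10)])
                                   for _ in range(pop)])
        if gen % 5 == 4:                                    # occasional migration
            for i in range(n_isl):
                islands[i][0] = islands[(i + 1) % n_isl][1].copy()

    print("mean true breeding value per island:",
          [round(float((G @ beta).mean()), 2) for G in islands])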
Study on Contaminant Transportation of a Typical Chemical Industry Park Based on GMS Software
NASA Astrophysics Data System (ADS)
Huang, LinXian; Liu, GuoZhen; Xing, LiTing; Liu, BenHua; Xu, ZhengHe; Yang, LiZhi; Zhu, HebgHua
2018-03-01
The groundwater solute transport model can effectively simulate the transport path, the transport scope, and the concentration of a contaminant, which can provide quantitative data for groundwater pollution remediation and groundwater resource management. In this study, we selected the modern biological technology research base of Shandong Province as the study area and simulated the pollution characteristics of the typical contaminant cis-1,3-dichloropropene under different operating conditions using the GMS software.
NASA Astrophysics Data System (ADS)
Xiang, Lin
This is a collective case study seeking to develop detailed descriptions of how programming an agent-based simulation influences a group of 8th grade students' model-based inquiry (MBI) by examining students' agent-based programmable modeling (ABPM) processes and the learning outcomes. The context of the present study was a biology unit on natural selection implemented in a charter school of a major California city during the spring semester of 2009. Eight 8th grade students, two boys and six girls, participated in this study. All of them were of low socioeconomic status (SES). English was a second language for all of them, but they had been identified as fluent English speakers at least a year before the study. None of them had learned either natural selection or programming before the study. The study spanned 7 weeks and comprised two study phases. In phase one the subject students learned natural selection in the science classroom and how to do programming in NetLogo, an ABPM tool, in a computer lab; in phase two, the subject students were asked to program a simulation of adaptation based on the natural selection model in NetLogo. Both qualitative and quantitative data were collected in this study. The data resources included (1) pre and post test questionnaire, (2) student in-class worksheet, (3) programming planning sheet, (4) code-conception matching sheet, (5) student NetLogo projects, (6) videotaped programming processes, (7) final interview, and (8) investigator's field notes. Both qualitative and quantitative approaches were applied to analyze the gathered data. The findings suggested that students made progress on understanding adaptation phenomena and natural selection at the end of ABPM-supported MBI learning, but the progress was limited. These students still held some misconceptions in their conceptual models, such as the idea that animals need to "learn" to adapt to the environment. Besides, their models of natural selection appeared to be incomplete and many relationships among the model ideas had not been well established by the end of the study. Most of them did not treat the natural selection model as a whole but only focused on some ideas within the model. Very few of them could scientifically apply the natural selection model to interpret other evolutionary phenomena. The findings about participating students' programming processes revealed these processes were composed of consecutive programming cycles. The cycle typically included posing a task, constructing and running program codes, and examining the resulting simulation. Students held multiple ideas and applied various programming strategies in these cycles. Students were involved in MBI at each step of a cycle. Three types of ideas, six programming strategies and ten MBI actions were identified out of the processes. The relationships among these ideas, strategies and actions were also identified and described. Findings suggested that ABPM activities could support MBI by (1) exposing students' personal models and understandings, (2) provoking and supporting a series of model-based inquiry activities, such as elaborating target phenomena, abstracting patterns, and revising conceptual models, and (3) provoking and supporting tangible and productive conversations among students, as well as between the instructor and students.
Findings also revealed three programming behaviors that appeared to impede productive MBI, including (1) solely phenomenon-orientated programming, (2) transplanting program codes, and (3) blindly running procedures. Based on the findings, I propose a general modeling process in ABPM activities, summarize the ways in which MBI can be supported in ABPM activities and constrained by multiple factors, and suggest the implications of this study in the future ABPM-assisted science instructional design and research.
Reinecke, Isabel; Schultze-Mosgau, Marcus-Hillert; Nave, Rüdiger; Schmitz, Heinz; Ploeger, Bart A
2017-05-01
Pharmacokinetics (PK) of anastrozole (ATZ) and levonorgestrel (LNG) released from an intravaginal ring (IVR) intended to treat endometriosis symptoms were characterized, and the exposure-response relationship focusing on the development of large ovarian follicle-like structures was investigated by modeling and simulation to support dose selection for further studies. A population PK analysis and simulations were performed for ATZ and LNG based on clinical phase 1 study data from 66 healthy women. A PK/PD model was developed to predict the probability of a maximum follicle size ≥30 mm and the potential contribution of ATZ beside the known LNG effects. Population PK models for ATZ and LNG were established where the interaction of LNG with sex hormone-binding globulin (SHBG) as well as a stimulating effect of estradiol on SHBG were considered. Furthermore, simulations showed that doses of 40 μg/d LNG combined with 300, 600, or 1050 μg/d ATZ reached anticipated exposure levels for both drugs, facilitating selection of ATZ and LNG doses in the phase 2 dose-finding study. The main driver for the effect on maximum follicle size appears to be unbound LNG exposure. A 50% probability of maximum follicle size ≥30 mm was estimated for 40 μg/d LNG based on the exposure-response analysis. ATZ in the dose range investigated does not increase the risk for ovarian cysts as occurs with LNG at a dose that does not inhibit ovulation. © 2016, The American College of Clinical Pharmacology.
NASA Astrophysics Data System (ADS)
Gustafson, W. I., Jr.; Vogelmann, A. M.; Li, Z.; Cheng, X.; Endo, S.; Krishna, B.; Toto, T.; Xiao, H.
2017-12-01
Large-eddy simulation (LES) is a powerful tool for understanding atmospheric turbulence and cloud development. However, the results are sensitive to the choice of forcing data sets used to drive the LES model, and the most realistic forcing data is difficult to identify a priori. Knowing the sensitivity of boundary layer and cloud processes to forcing data selection is critical when using LES to understand atmospheric processes and when developing associated parameterizations. The U.S. Department of Energy Atmospheric Radiation Measurement (ARM) User Facility has been developing the capability to routinely generate ensembles of LES based on a selection of plausible input forcing data sets. The LES ARM Symbiotic Simulation and Observation (LASSO) project is initially generating simulations for shallow convection days at the ARM Southern Great Plains site in Oklahoma. This talk will examine 13 days with shallow convection selected from the period May-August 2016, with multiple forcing sources and spatial scales used to generate an LES ensemble for each of the days, resulting in hundreds of LES runs with coincident observations from ARM's extensive suite of in situ and retrieval-based products. This talk will focus particularly on the sensitivity of the cloud development and its relation to forcing data. Variability of the PBL characteristics, lifting condensation level, cloud base height, cloud fraction, and liquid water path will be examined. More information about the LASSO project can be found at https://www.arm.gov/capabilities/modeling/lasso.
NASA Astrophysics Data System (ADS)
Chen, Jie; Brissette, François P.; Lucas-Picher, Philippe
2016-11-01
Given the ever increasing number of climate change simulations being carried out, it has become impractical to use all of them to cover the uncertainty of climate change impacts. Various methods have been proposed to optimally select subsets of a large ensemble of climate simulations for impact studies. However, the behaviour of optimally-selected subsets of climate simulations for climate change impacts is unknown, since the transfer process from climate projections to the impact study world is usually highly non-linear. Consequently, this study investigates the transferability of optimally-selected subsets of climate simulations in the case of hydrological impacts. Two different methods were used for the optimal selection of subsets of climate scenarios, and both were found to be capable of adequately representing the spread of selected climate model variables contained in the original large ensemble. However, in both cases, the optimal subsets had limited transferability to hydrological impacts. To capture a similar variability in the impact model world, many more simulations have to be used than those that are needed to simply cover variability from the climate model variables' perspective. Overall, both optimal subset selection methods were better than random selection when small subsets were selected from a large ensemble for impact studies. However, as the number of selected simulations increased, random selection often performed better than the two optimal methods. To ensure adequate uncertainty coverage, the results of this study imply that selecting as many climate change simulations as possible is the best avenue. Where this was not possible, the two optimal methods were found to perform adequately.
Analysis of habitat-selection rules using an individual-based model
Steven F. Railsback; Bret C. Harvey
2002-01-01
Despite their promise for simulating natural complexity, individual-based models (IBMs) are rarely used for ecological research or resource management. Few IBMs have been shown to reproduce realistic patterns of behavior by individual organisms. To test our IBM of stream salmonids and draw conclusions about foraging theory, we analyzed the IBM's ability to...
Kaneko, Masato; Tanigawa, Takahiko; Hashizume, Kensei; Kajikawa, Mariko; Tajiri, Masahiro; Mueck, Wolfgang
2013-01-01
This study was designed to confirm the appropriateness of the dose setting for a Japanese phase III study of rivaroxaban in patients with non-valvular atrial fibrillation (NVAF), which had been based on model simulation employing phase II study data. The previously developed mixed-effects pharmacokinetic/pharmacodynamic (PK-PD) model, which consisted of an oral one-compartment model parameterized in terms of clearance, volume and a first-order absorption rate, was rebuilt and optimized using the data for 597 subjects from the Japanese phase III study, J-ROCKET AF. A mixed-effects modeling technique in NONMEM was used to quantify both unexplained inter-individual variability and inter-occasion variability, which are random effect parameters. The final PK and PK-PD models were evaluated to identify influential covariates. The empirical Bayes estimates of AUC and C(max) from the final PK model were consistent with the simulated results from the Japanese phase II study. There was no clear relationship between individual estimated exposures and safety-related events, and the estimated exposure levels were consistent with the global phase III data. Therefore, it was concluded that the dose selected for the phase III study with Japanese NVAF patients by means of model simulation employing phase II study data had been appropriate from the PK-PD perspective.
Dynamic simulation of perturbation responses in a closed-loop virtual arm model.
Du, Yu-Fan; He, Xin; Lan, Ning
2010-01-01
A closed-loop virtual arm (VA) model has been developed in the SIMULINK environment by adding spinal reflex circuits and propriospinal neural networks to the open-loop VA model developed in an earlier study [1]. An improved virtual muscle model (VM4.0) is used to speed up simulation and to generate more precise recruitment of muscle force at low levels of muscle activation. Time delays in the reflex loops are determined by their synaptic connections and afferent transmission back to the spinal cord. Reflex gains are properly selected so that closed-loop responses are stable. With the closed-loop VA model, we are developing an approach to evaluate system behaviors by dynamic simulation of perturbation responses. Joint stiffness is calculated from the simulated perturbation responses by a least-squares algorithm in MATLAB. This method of dynamic simulation will be essential for further evaluation of feedforward and reflex control of arm movement and position.
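The perturbation-based stiffness estimation can be sketched as a least-squares fit, here in Python rather than MATLAB: simulate a second-order joint response to a torque step, then recover inertia, damping, and stiffness from the kinematics; the true parameter values below are arbitrary:

    import numpy as np

    I_true, B_true, K_true = 0.05, 0.4, 12.0      # kg m^2, N m s/rad, N m/rad
    dt, n = 0.001, 2000
    theta = np.zeros(n); omega = np.zeros(n)
    tau = 0.5 * np.ones(n)                        # step torque perturbation
    for t in range(n - 1):                        # forward-Euler integration
        alpha = (tau[t] - B_true * omega[t] - K_true * theta[t]) / I_true
        omega[t + 1] = omega[t] + dt * alpha
        theta[t + 1] = theta[t] + dt * omega[t]

    accel = np.gradient(omega, dt)
    A = np.column_stack([accel, omega, theta])    # tau ~ I*accel + B*omega + K*theta
    tau_noisy = tau + np.random.default_rng(0).normal(0, 0.01, n)
    (I_est, B_est, K_est), *_ = np.linalg.lstsq(A, tau_noisy, rcond=None)
    print(f"I={I_est:.3f}  B={B_est:.2f}  K={K_est:.1f}")   # ~ (0.05, 0.4, 12.0)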
Transportation Planning for Your Community
DOT National Transportation Integrated Search
2000-12-01
The Highway Economic Requirements System (HERS) is a computer model designed to simulate improvement selection decisions based on the relative benefit-cost merits of alternative improvement options. HERS is intended to estimate national level investm...
NASA Astrophysics Data System (ADS)
Lafontaine, J.; Hay, L.; Archfield, S. A.; Farmer, W. H.; Kiang, J. E.
2014-12-01
The U.S. Geological Survey (USGS) has developed a National Hydrologic Model (NHM) to support coordinated, comprehensive and consistent hydrologic model development, and facilitate the application of hydrologic simulations within the continental US. The portion of the NHM located within the Gulf Coastal Plains and Ozarks Landscape Conservation Cooperative (GCPO LCC) is being used to test the feasibility of improving streamflow simulations in gaged and ungaged watersheds by linking statistically- and physically-based hydrologic models. The GCPO LCC covers part or all of 12 states and 5 sub-geographies, totaling approximately 726,000 km2, and is centered on the lower Mississippi Alluvial Valley. A total of 346 USGS streamgages in the GCPO LCC region were selected to evaluate the performance of this new calibration methodology for the period 1980 to 2013. Initially, the physically-based models are calibrated to measured streamflow data to provide a baseline for comparison. An enhanced calibration procedure then is used to calibrate the physically-based models in the gaged and ungaged areas of the GCPO LCC using statistically-based estimates of streamflow. For this application, the calibration procedure is adjusted to address the limitations of the statistically generated time series to reproduce measured streamflow in gaged basins, primarily by incorporating error and bias estimates. As part of this effort, estimates of uncertainty in the model simulations are also computed for the gaged and ungaged watersheds.
Coon, William F.
2011-01-01
Simulation of streamflows in small subbasins was improved by adjusting model parameter values to match base flows, storm peaks, and storm recessions more precisely than had been done with the original model. Simulated recessional and low flows were either increased or decreased as appropriate for a given stream, and simulated peak flows generally were lowered in the revised model. The use of suspended-sediment concentrations rather than concentrations of the surrogate constituent, total suspended solids, resulted in increases in the simulated low-flow sediment concentrations and, in most cases, decreases in the simulated peak-flow sediment concentrations. Simulated orthophosphate concentrations in base flows generally increased but decreased for peak flows in selected headwater subbasins in the revised model. Compared with the original model, phosphorus concentrations simulated by the revised model were comparable in forested subbasins, generally decreased in developed and wetland-dominated subbasins, and increased in agricultural subbasins. A final revision to the model was made by the addition of the simulation of chloride (salt) concentrations in the Onondaga Creek Basin to help water-resource managers better understand the relative contributions of salt from multiple sources in this particular tributary. The calibrated revised model was used to (1) compute loading rates for the various land types that were simulated in the model, (2) conduct a watershed-management analysis that estimated the portion of the total load that was likely to be transported to Onondaga Lake from each of the modeled subbasins, (3) compute and assess chloride loads to Onondaga Lake from the Onondaga Creek Basin, and (4) simulate precolonization (forested) conditions in the basin to estimate the probable minimum phosphorus loads to the lake.
I PASS: an interactive policy analysis simulation system.
Doug Olson; Con Schallau; Wilbur Maki
1984-01-01
This paper describes an interactive policy analysis simulation system (IPASS) that can be used to analyze the long-term economic and demographic effects of alternative forest resource management policies. The IPASS model is a dynamic analytical tool that forecasts growth and development of an economy. It allows the user to introduce changes in selected parameters based...
Reviews of Selected Books and Articles on Gaming and Simulation.
ERIC Educational Resources Information Center
Shubik, Martin; Brewer, Garry D.
This annotated bibliography represents the initial step taken by the authors to apply standards of excellence to the evaluation of literature in the fields of gaming, simulation, and model-building. It aims at helping persons interested in these subjects deal with the flood of literature on these topics by making value judgments, based on the…
Two-dimensional Lagrangian simulation of suspended sediment
Schoellhamer, David H.
1988-01-01
A two-dimensional laterally averaged model for suspended sediment transport in steady, gradually varied flow, based on the Lagrangian reference frame, is presented. The layered Lagrangian transport model (LLTM) simulates laterally averaged suspended-sediment concentrations. The elevations of nearly horizontal streamlines and the simulation time step are selected to optimize model stability and efficiency. The computational elements are parcels of water that are moved along the streamlines in the Lagrangian sense and are mixed with neighboring parcels. Three applications show that the LLTM can accurately simulate theoretical and empirical nonequilibrium suspended sediment distributions and slug injections of suspended sediment in a laboratory flume.
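The parcel scheme described above lends itself to a compact illustration. The following is a minimal sketch of the advect-then-mix idea only, assuming a uniform grid, whole-cell shifts, a single linear mixing coefficient, and periodic boundaries; none of these simplifications are part of the published LLTM:

```python
import numpy as np

def parcel_step(conc, velocities, dx, dt, mix=0.1):
    """One step: advect each layer's parcels along its streamline, then
    exchange mass linearly with the layers above and below.
    conc: (n_layers, n_cells) parcel concentrations.
    Note: np.roll wraps at the boundary (periodic), a demo-only choice."""
    shifts = (np.asarray(velocities) * dt / dx).astype(int)
    advected = np.array([np.roll(c, s) for c, s in zip(conc, shifts)])
    mixed = advected.copy()
    mixed[1:-1] += mix * (advected[:-2] - 2 * advected[1:-1] + advected[2:])
    mixed[0] += mix * (advected[1] - advected[0])
    mixed[-1] += mix * (advected[-2] - advected[-1])
    return mixed
```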
Parameter Estimation and Model Selection for Indoor Environments Based on Sparse Observations
NASA Astrophysics Data System (ADS)
Dehbi, Y.; Loch-Dehbi, S.; Plümer, L.
2017-09-01
This paper presents a novel method for parameter estimation and model selection for the reconstruction of indoor environments based on sparse observations. While most approaches for the reconstruction of indoor models rely on dense observations, we predict scenes of the interior with high accuracy in the absence of indoor measurements. We use a model-based top-down approach and incorporate strong prior knowledge. The latter includes probability density functions for model parameters and sparse observations such as room areas and the building footprint. The floorplan model is characterized by linear and bi-linear relations with discrete and continuous parameters. We focus on the stochastic estimation of model parameters based on a topological model derived by combinatorial reasoning in a first step. A Gauss-Markov model is applied for estimation and simulation of the model parameters. Symmetries are represented and exploited during the estimation process. Background knowledge as well as observations are incorporated in a maximum likelihood estimation, and model selection is performed with AIC/BIC. The likelihood is also used for the detection and correction of potential errors in the topological model. Estimation results are presented and discussed.
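For readers unfamiliar with the AIC/BIC step, the criteria trade maximized log-likelihood against model complexity. A minimal sketch of such a comparison follows, with purely hypothetical candidate models, log-likelihoods, and observation counts (the paper's actual likelihoods come from its Gauss-Markov estimation):

```python
import numpy as np

def aic(loglik, k):
    """Akaike information criterion: AIC = 2k - 2 ln L."""
    return 2 * k - 2 * loglik

def bic(loglik, k, n):
    """Bayesian information criterion: BIC = k ln n - 2 ln L."""
    return k * np.log(n) - 2 * loglik

# Hypothetical candidates: (maximized log-likelihood, number of parameters).
candidates = {"floorplan_a": (-120.3, 5), "floorplan_b": (-116.9, 9)}
n_obs = 40  # hypothetical number of sparse observations
best = min(candidates, key=lambda m: bic(*candidates[m], n_obs))
```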
A Parameter Tuning Scheme of Sea-ice Model Based on Automatic Differentiation Technique
NASA Astrophysics Data System (ADS)
Kim, J. G.; Hovland, P. D.
2001-05-01
The automatic differentiation (AD) technique was used to illustrate a new approach to parameter tuning for an uncoupled sea-ice model. The atmospheric forcing field for 1992, obtained from NCEP data, was used as the forcing variable in the study. The simulation results were compared with the observed ice movement provided by the International Arctic Buoy Programme (IABP). All of the numerical experiments were based on a widely used dynamic and thermodynamic model for simulating the seasonal sea-ice change of the main Arctic ocean. We selected five dynamic and thermodynamic parameters for the tuning process, in which the cost function, defined by the norm of the difference between observed and simulated ice drift locations, was minimized. The selected parameters are the air and ocean drag coefficients, the ice strength constant, the turning angle at the ice-air/ocean interface, and the bulk sensible heat transfer coefficient. The drag coefficients were the major parameters controlling sea-ice movement and extent. The results show that more realistic simulations of the ice thickness distribution were produced by tuning the simulated ice drift trajectories. In the tuning process, the L-BFGS-B minimization algorithm, a quasi-Newton method, was used. The derivative information required in the minimization iterations was provided by the AD-processed Fortran code. Compared with a conventional approach, the AD-generated derivative code provided fast and robust computation of derivative information.
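The tuning loop itself is a standard bound-constrained minimization. A sketch of its shape using SciPy's L-BFGS-B, with a toy two-parameter stand-in for the sea-ice model and synthetic "observations"; gradients here come from SciPy's built-in finite differences rather than the AD-generated Fortran code used in the study:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
obs_drift = rng.normal(size=(100, 2))  # stand-in for IABP buoy trajectories

def simulate_drift(params):
    """Toy stand-in: drift as a simple response to the two drag
    coefficients (the real model integrates the momentum balance)."""
    c_air, c_ocean = params
    return 0.9 * c_air * obs_drift + 0.1 * c_ocean

def cost(params):
    """Norm of the observed-minus-simulated drift, as in the abstract."""
    return np.sum((simulate_drift(params) - obs_drift) ** 2)

result = minimize(cost, x0=[0.5, 0.5], method="L-BFGS-B",
                  bounds=[(1e-4, 10.0), (1e-4, 10.0)])
```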
Hybrid modeling and empirical analysis of automobile supply chain network
NASA Astrophysics Data System (ADS)
Sun, Jun-yan; Tang, Jian-ming; Fu, Wei-ping; Wu, Bing-ying
2017-05-01
Based on a connection mechanism in which nodes automatically select upstream and downstream agents, a simulation model of the dynamic evolutionary process of a consumer-driven automobile supply chain is established by integrating ABM and discrete modeling on a GIS-based map. First, the model's rationality is demonstrated by analyzing the consistency of sales and changes in various agent parameters between the simulation model and a real automobile supply chain. Second, through complex network theory, hierarchical structures of the model and relationships of networks at different levels are analyzed to calculate characteristic parameters such as mean distance, mean clustering coefficients, and degree distributions; these analyses verify that the model is a typical scale-free and small-world network. Finally, the motion law of the model is analyzed from the perspective of complex self-adaptive systems. The chaotic state of the simulation system is verified, which suggests that the system has typical nonlinear characteristics. This model not only macroscopically illustrates the dynamic evolution of complex automobile supply chain networks but also microcosmically reflects the business process of each agent. Moreover, constructing and simulating the system by combining CAS theory and complex networks supplies a novel method for supply chain analysis, as well as a theoretical basis and practical experience for the supply chain analysis of auto companies.
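The characteristic parameters named above (mean distance, mean clustering coefficient, degree distribution) are readily computed with networkx; a brief sketch on a stand-in scale-free graph, since the paper's own supply-chain network data are not reproduced here:

```python
import networkx as nx

# Stand-in network; the paper analyzes its simulated supply chain instead.
G = nx.barabasi_albert_graph(500, 2)

mean_distance = nx.average_shortest_path_length(G)
mean_clustering = nx.average_clustering(G)
degree_hist = nx.degree_histogram(G)  # index = degree, value = node count
```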
Two Paradoxes in Linear Regression Analysis.
Feng, Ge; Peng, Jing; Tu, Dongke; Zheng, Julia Z; Feng, Changyong
2016-12-25
Regression is one of the favorite tools in applied statistics. However, misuse and misinterpretation of results from regression analysis are common in biomedical research. In this paper we use statistical theory and simulation studies to clarify some paradoxes around this popular statistical method. In particular, we show that a widely used model selection procedure employed in many publications in top medical journals is wrong. Formal procedures based on solid statistical theory should be used in model selection.
Jun, Gyuchan T; Morris, Zoe; Eldabi, Tillal; Harper, Paul; Naseer, Aisha; Patel, Brijesh; Clarkson, John P
2011-05-19
There is an increasing recognition that modelling and simulation can assist in the process of designing health care policies, strategies and operations. However, the current use is limited, and answers to questions such as what methods to use and when remain somewhat underdeveloped. The aim of this study is to provide a mechanism for decision makers in health services planning and management to compare a broad range of modelling and simulation methods so that they can better select and use them or better commission relevant modelling and simulation work. This paper proposes a modelling and simulation method comparison and selection tool developed from a comprehensive literature review, the research team's extensive expertise and inputs from potential users. Twenty-eight different methods were identified, characterised by their relevance to different application areas, project life cycle stages, types of output and levels of insight, and the four input resources required (time, money, knowledge and data). The characterisation is presented in matrix form to allow quick comparison and selection. This paper also highlights significant knowledge gaps in the existing literature when assessing the applicability of particular approaches to health services management, where modelling and simulation skills are scarce, let alone money and time. A modelling and simulation method comparison and selection tool is developed to assist with the selection of methods appropriate to supporting specific decision making processes. In particular it addresses the issues of which method is most appropriate to which specific health services management problem, what the user might expect to obtain from the method, and what is required to use the method. In summary, we believe the tool adds value to the scarce existing literature on method comparison and selection.
Kumar, Avishek; Campitelli, Paul; Thorpe, M F; Ozkan, S Banu
2015-12-01
The most successful protein structure prediction methods to date have been template-based modeling (TBM) or homology modeling, which predict protein structure based on experimental structures. These high-accuracy predictions sometimes retain structural errors due to incorrect templates, or due to a lack of accurate templates in the case of low sequence similarity, making these structures inadequate for drug-design studies or molecular dynamics simulations. We have developed a new physics-based approach to the protein refinement problem by mimicking the mechanism of chaperones that rehabilitate misfolded proteins. The template structure is unfolded by selectively (targeted) pulling on different portions of the protein using the geometry-based technique FRODA, and then refolded using hierarchically restrained replica exchange molecular dynamics simulations (hr-REMD). FRODA unfolding is used to create a diverse set of topologies for surveying near-native-like structures from a template and to provide a set of persistent contacts to be employed during re-folding. We have tested our approach on 13 previous CASP targets and observed that this method of folding an ensemble of partially unfolded structures, through the hierarchical addition of contact restraints (that is, first local and then nonlocal interactions), leads to a refolding of the structure along with refinement in most cases (12/13). Although this approach yields refined models through improved sampling, the task of blind selection of the best refined models still needs to be solved. Overall, the method can be useful for improved sampling of low-resolution models where certain portions of the structure are incorrectly modeled. © 2015 Wiley Periodicals, Inc.
Mean-variance model for portfolio optimization with background risk based on uncertainty theory
NASA Astrophysics Data System (ADS)
Zhai, Jia; Bai, Manying
2018-04-01
The aim of this paper is to develop a mean-variance model for portfolio optimization considering background risk, liquidity and transaction cost, based on uncertainty theory. In the portfolio selection problem, returns of securities and asset liquidity are treated as uncertain variables because of unexpected incidents or a lack of historical data, which are common in economic and social environments. We provide crisp forms of the model and a hybrid intelligent algorithm to solve it. Under a mean-variance framework, we analyze the portfolio frontier characteristics considering independently additive background risk. In addition, we discuss some effects of background risk and the liquidity constraint on portfolio selection. Finally, we demonstrate the proposed models by numerical simulations.
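As orientation for the mean-variance framework, the classical deterministic problem has a closed-form solution for the minimum-variance weights at a target return; a sketch of that baseline follows (the data are illustrative, and the paper's uncertain-variable, background-risk, and liquidity machinery is not reproduced):

```python
import numpy as np

mu = np.array([0.08, 0.12, 0.10])            # expected returns (illustrative)
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.06]])         # return covariance (illustrative)

def min_variance_weights(mu, cov, target):
    """Minimize w'Cw subject to w'mu = target and w'1 = 1 (Lagrange
    multipliers; no short-sale, liquidity, or transaction-cost terms)."""
    inv = np.linalg.inv(cov)
    ones = np.ones(len(mu))
    a, b, c = ones @ inv @ ones, ones @ inv @ mu, mu @ inv @ mu
    d = a * c - b * b
    lam = (c - b * target) / d
    gam = (a * target - b) / d
    return inv @ (lam * ones + gam * mu)

w = min_variance_weights(mu, cov, target=0.10)
```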
Czaplewski, Cezary; Karczynska, Agnieszka; Sieradzan, Adam K; Liwo, Adam
2018-04-30
A server implementation of the UNRES package (http://www.unres.pl) for coarse-grained simulations of protein structures with the physics-based UNRES model, named the UNRES server, is presented. In contrast to most protein coarse-grained models, owing to its physics-based origin, the UNRES force field can be used in simulations, including those aimed at protein-structure prediction, without ancillary information from structural databases; however, the implementation includes the possibility of using restraints. Local energy minimization, canonical molecular dynamics simulations, replica exchange and multiplexed replica exchange molecular dynamics simulations can be run with the current UNRES server; the latter are suitable for protein-structure prediction. The user-supplied input includes the protein sequence and, optionally, restraints from secondary-structure prediction or small-angle X-ray scattering data, as well as the simulation type and parameters, which are selected or typed in. Oligomeric proteins, as well as those containing D-amino-acid residues and disulfide links, can be treated. The output is displayed graphically (minimized structures, trajectories, final models, analysis of trajectories/ensembles); all output files can also be downloaded by the user. The UNRES server can be freely accessed at http://unres-server.chem.ug.edu.pl.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Maoyi; Ray, Jaideep; Hou, Zhangshuan
2016-07-04
The Community Land Model (CLM) has been widely used in climate and Earth system modeling. Accurate estimation of model parameters is needed for reliable model simulations and predictions under current and future conditions. In our previous work, a subset of hydrological parameters was identified to have significant impact on surface energy fluxes at selected flux tower sites based on parameter screening and sensitivity analysis, which indicates that the parameters could potentially be estimated from surface flux observations at the towers. To date, such estimates do not exist. In this paper, we assess the feasibility of applying a Bayesian model calibration technique to estimate CLM parameters at selected flux tower sites under various site conditions. The parameters are estimated as a joint probability density function (PDF) that provides estimates of uncertainty of the parameters being inverted, conditional on climatologically averaged latent heat fluxes derived from observations. We find that the simulated mean latent heat fluxes from CLM using the calibrated parameters are generally improved at all sites when compared to those obtained with CLM simulations using default parameter sets. Further, our calibration method also results in credibility bounds around the simulated mean fluxes which bracket the measured data. The modes (or maximum a posteriori values) and 95% credibility intervals of the site-specific posterior PDFs are tabulated as suggested parameter values for each site. Analysis of relationships between the posterior PDFs and site conditions suggests that the parameter values are likely correlated with the plant functional type, which needs to be confirmed in future studies by extending the approach to more sites.
Posterior Predictive Bayesian Phylogenetic Model Selection
Lewis, Paul O.; Xie, Wangang; Chen, Ming-Hui; Fan, Yu; Kuo, Lynn
2014-01-01
We present two distinctly different posterior predictive approaches to Bayesian phylogenetic model selection and illustrate these methods using examples from green algal protein-coding cpDNA sequences and flowering plant rDNA sequences. The Gelfand–Ghosh (GG) approach allows dissection of an overall measure of model fit into components due to posterior predictive variance (GGp) and goodness-of-fit (GGg), which distinguishes this method from the posterior predictive P-value approach. The conditional predictive ordinate (CPO) method provides a site-specific measure of model fit useful for exploratory analyses and can be combined over sites yielding the log pseudomarginal likelihood (LPML) which is useful as an overall measure of model fit. CPO provides a useful cross-validation approach that is computationally efficient, requiring only a sample from the posterior distribution (no additional simulation is required). Both GG and CPO add new perspectives to Bayesian phylogenetic model selection based on the predictive abilities of models and complement the perspective provided by the marginal likelihood (including Bayes Factor comparisons) based solely on the fit of competing models to observed data. [Bayesian; conditional predictive ordinate; CPO; L-measure; LPML; model selection; phylogenetics; posterior predictive.] PMID:24193892
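The CPO and LPML quantities in this abstract have a simple Monte Carlo form: the CPO for site i is the harmonic mean of that site's likelihood over posterior samples, and LPML is the sum of the log CPOs. A sketch of this computation, assuming one already has a matrix of per-site log-likelihoods evaluated at posterior samples (no particular phylogenetics package implied):

```python
import numpy as np
from scipy.special import logsumexp

def lpml(log_lik):
    """log_lik: (n_samples, n_sites) per-site log-likelihoods at posterior
    samples. Harmonic-mean CPO estimate, computed stably in log space:
    log CPO_i = log(n_samples) - logsumexp_s(-log_lik[s, i])."""
    n_samples = log_lik.shape[0]
    log_cpo = np.log(n_samples) - logsumexp(-log_lik, axis=0)
    return log_cpo, log_cpo.sum()  # site-wise log CPOs, overall LPML
```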
Johnson, Brent A
2009-10-01
We consider estimation and variable selection in the partial linear model for censored data. The partial linear model for censored data is a direct extension of the accelerated failure time model, the latter of which is a very important alternative model to the proportional hazards model. We extend rank-based lasso-type estimators to a model that may contain nonlinear effects. Variable selection in such partial linear model has direct application to high-dimensional survival analyses that attempt to adjust for clinical predictors. In the microarray setting, previous methods can adjust for other clinical predictors by assuming that clinical and gene expression data enter the model linearly in the same fashion. Here, we select important variables after adjusting for prognostic clinical variables but the clinical effects are assumed nonlinear. Our estimator is based on stratification and can be extended naturally to account for multiple nonlinear effects. We illustrate the utility of our method through simulation studies and application to the Wisconsin prognostic breast cancer data set.
NASA Astrophysics Data System (ADS)
Huang, Pengnian; Li, Zhijia; Chen, Ji; Li, Qiaoling; Yao, Cheng
2016-11-01
Properly simulating hydrological processes in semi-arid areas is still challenging. This study assesses the impact of different modeling strategies on simulating flood processes in semi-arid catchments. Four classic hydrological models, TOPMODEL, XINANJIANG (XAJ), SAC-SMA and TANK, were selected and applied to three semi-arid catchments in North China. Based on analysis and comparison of the simulation results of these classic models, four new flexible models were constructed and used to further investigate the suitability of various modeling strategies for semi-arid environments. Numerical experiments were also designed to examine the performances of the models. The results show that in semi-arid catchments a suitable model needs to include at least one nonlinear component to simulate the main process of surface runoff generation. If there are more than two nonlinear components in the hydrological model, they should be arranged in parallel, rather than in series. In addition, the results show that parallel nonlinear components should be combined by multiplication rather than addition. Moreover, this study reveals that the key hydrological process in semi-arid catchments is infiltration-excess surface runoff, a nonlinear component.
a Weighted Local-World Evolving Network Model Based on the Edge Weights Preferential Selection
NASA Astrophysics Data System (ADS)
Li, Ping; Zhao, Qingzhen; Wang, Haitang
2013-05-01
In this paper, we use an edge-weight preferential attachment mechanism to build a new local-world evolutionary model for weighted networks. Unlike previous work, the local world of our model consists of edges instead of nodes. At each time step, we connect a new node to two existing nodes in the local world through edge-weight preferential selection. Theoretical analysis and numerical simulations show that the scale of the local world affects the weight distribution, the strength distribution and the degree distribution. We also present simulations of the clustering coefficient and of the dynamics of infectious disease spreading. The weight dynamics of our network model can portray the structure of realistic networks such as the neural network of the nematode C. elegans and online social networks.
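One plausible reading of the growth rule is sketched below: at each step a candidate edge is drawn from a random local world of edges with probability proportional to its weight, and the new node is attached to both of its endpoints. The local-world size, seed network, and weight-update rule here are all illustrative choices, not the paper's calibrated ones:

```python
import random

def grow(steps, w0=1.0, local_world=20, seed=0):
    """Grow a weighted network by edge-weight preferential attachment
    within a local world of edges. Returns {(u, v): weight}."""
    random.seed(seed)
    edges = {(0, 1): w0}  # two-node seed network
    n = 2                 # next node id
    for _ in range(steps):
        world = random.sample(list(edges), min(local_world, len(edges)))
        total = sum(edges[e] for e in world)
        r, acc, chosen = random.uniform(0, total), 0.0, world[-1]
        for e in world:   # roulette-wheel draw proportional to weight
            acc += edges[e]
            if r <= acc:
                chosen = e
                break
        u, v = chosen
        edges[(u, n)] = w0       # attach the new node to both endpoints
        edges[(v, n)] = w0
        edges[chosen] += w0      # illustrative weight update on the edge
        n += 1
    return edges
```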
Modeling of block copolymer dry etching for directed self-assembly lithography
NASA Astrophysics Data System (ADS)
Belete, Zelalem; Baer, Eberhard; Erdmann, Andreas
2018-03-01
Directed self-assembly (DSA) of block copolymers (BCP) is a promising alternative technology to overcome the limits of patterning for the semiconductor industry. DSA exploits the self-assembling property of BCPs for nano-scale manufacturing and to repair defects in patterns created during photolithography. After self-assembly of BCPs, to transfer the created pattern to the underlying substrate, selective etching of PMMA (poly(methyl methacrylate)) relative to PS (polystyrene) is required. However, the etch process to transfer the self-assembled "fingerprint" DSA patterns to the underlying layer is still a challenge. Combined experimental and modeling studies increase understanding of plasma interaction with BCP materials during the etch process and support the development of selective processes that form well-defined patterns. In this paper, a simple model based on a generic surface model has been developed, and an investigation of the etch behavior of PS-b-PMMA for Ar and Ar/O2 plasma chemistries has been conducted. The implemented model is calibrated for etch rates and etch profiles against literature data to extract parameters and conduct simulations. In order to understand the effect of the plasma on the block copolymers, the etch model was first calibrated for polystyrene (PS) and poly(methyl methacrylate) (PMMA) homopolymers. After calibration of the model with the homopolymer etch rates, a full Monte-Carlo simulation was conducted and simulation results were compared with critical-dimension (CD) and selectivity measurements of the etch profile. In addition, etch simulations for a lamellae pattern have been demonstrated using the implemented model.
JSC interactive basic accounting system
NASA Technical Reports Server (NTRS)
Spitzer, J. F.
1978-01-01
Design concepts for an interactive basic accounting system (IBAS) are considered in terms of selecting the design option which provides the best response at the lowest cost. Modeling the IBAS workload and applying this workload to a U1108 EXEC 8 based system using both a simulation model and the real system is discussed.
Integrated modelling of stormwater treatment systems uptake.
Castonguay, A C; Iftekhar, M S; Urich, C; Bach, P M; Deletic, A
2018-05-24
Nature-based solutions provide a variety of benefits in growing cities, ranging from stormwater treatment to amenity provision such as aesthetics. However, the decision-making process involved in the installation of such green infrastructure is not straightforward, as much uncertainty around location, size, costs and benefits impedes systematic decision-making. We developed a model to simulate the decision rules used by local municipalities to install nature-based stormwater treatment systems, namely constructed wetlands, ponds/basins and raingardens. The model was used to test twenty-four policy-making scenarios, obtained by combining four asset-selection, two location-selection and three budget-constraint decision rules. Based on the case study of a local municipality in Metropolitan Melbourne, Australia, the modelled uptake of stormwater treatment systems was compared with attributes of real-world systems for the simulation period. Results show that the actual budgeted funding is not a reliable predictor of systems' uptake and that policy-makers are more likely to plan expenditures based on installation costs. The model was able to replicate the cumulative treatment capacity and the location of systems. As such, it offers a novel approach to investigating the impact of using different decision rules to provide environmental services considering biophysical and economic factors. Copyright © 2018 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Reardon, Sean; Baker, Rachel; Kasman, Matt; Townsend, Joe; Klasik, Daniel
2014-01-01
The creation of racially diverse colleges at all levels of selectivity has proven to be no small task, even with the legal use of race-conscious affirmative action. As evidenced in the postsecondary destinations of the high school class of 2004, very selective schools (those with Barron's Selectivity rankings of 1, 2 or 3) have many more White,…
ISPE: A knowledge-based system for fluidization studies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reddy, S.
1991-01-01
Chemical engineers use mathematical simulators to design, model, optimize and refine various engineering plants/processes. This procedure requires the following steps: (1) preparation of an input data file according to the format required by the target simulator; (2) executing the simulation; and (3) analyzing the results of the simulation to determine whether all specified "goals" are satisfied. If the goals are not met, the input data file must be modified and the simulation repeated. This multistep process is continued until satisfactory results are obtained. This research was undertaken to develop a knowledge-based system, IPSE (Intelligent Process Simulation Environment), that can enhance the productivity of chemical engineers/modelers by serving as an intelligent assistant to perform a variety of tasks related to process simulation. ASPEN, a simulator widely used by the US Department of Energy (DOE) at the Morgantown Energy Technology Center (METC), was selected as the target process simulator in the project. IPSE, written in the C language, was developed using a number of knowledge-based programming paradigms: object-oriented knowledge representation that uses inheritance and methods, rule-based inferencing (including processing and propagation of probabilistic information) and data-driven programming using demons. It was implemented using the knowledge-based environment LASER. The relationship of IPSE with the user, ASPEN, LASER and the C language is shown in Figure 1.
NASA Astrophysics Data System (ADS)
Youn, Joo-Sang; Seok, Seung-Joon; Kang, Chul-Hee
This paper presents a new QoS model for end-to-end service provisioning in multi-hop wireless networks. In legacy IEEE 802.11e based multi-hop wireless networks, the fixed assignment of service classes according to a flow's priority at every node causes a priority-inversion problem when performing end-to-end service differentiation. This paper therefore proposes a new QoS provisioning model called Dynamic Hop Service Differentiation (DHSD) to alleviate the problem and support effective service differentiation between end-to-end nodes. Many previous works on QoS models with 802.11e-based service differentiation focus on packet scheduling over several service queues with different service rates and service priorities. Our model, however, centers on a dynamic class selection scheme, called Per-Hop Class Assignment (PHCA), in the node's MAC layer, which selects a proper service class for each packet, in accordance with queue states and service requirements, at every node along the packet's end-to-end route. The proposed QoS solution is evaluated using the OPNET simulator. The simulation results show that the proposed model outperforms both best-effort and 802.11e-based strict-priority service models in mobile ad hoc environments.
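The per-hop class re-selection idea can be sketched compactly: at each hop, pick the least-privileged class whose current queue still lets the packet meet its remaining per-hop delay budget. Everything below (the thresholds, the per-packet service time, the class count) is an illustrative stand-in, not the PHCA rules from the paper:

```python
def select_class(queue_lens, deadline_ms, hops_left, ms_per_pkt=2.0):
    """Choose a service class for one packet at one hop.
    queue_lens: packets queued per class, index 0 = highest priority.
    Picks the lowest-priority class whose estimated wait fits the
    packet's per-hop slack; escalates to class 0 if nothing fits."""
    slack = deadline_ms / max(hops_left, 1)
    for cls in range(len(queue_lens) - 1, -1, -1):
        if queue_lens[cls] * ms_per_pkt <= slack:
            return cls
    return 0

cls = select_class(queue_lens=[3, 10, 40], deadline_ms=120, hops_left=4)
```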
Bhaumik, Basabi; Mathur, Mona
2003-01-01
We present a model for the development of orientation selectivity in layer IV simple cells. Receptive field (RF) development in the model is determined by axonal growth and retraction in the geniculocortical pathway, guided by diffusive cooperation and resource-limited competition. The simulated cortical RFs resemble experimental RFs. The receptive field model is incorporated into a three-layer visual pathway model consisting of retina, LGN and cortex. We have studied the effect of activity-dependent synaptic scaling on the orientation tuning of cortical cells. The mean value of hwhh (half width at half the height of maximum response) in simulated cortical cells is 58 degrees when we consider only the linear excitatory contribution from the LGN. We observe a mean improvement of 22.8 degrees in tuning response due to the nonlinear spiking mechanisms, which include the effects of threshold voltage and the synaptic scaling factor.
Neuronvisio: A Graphical User Interface with 3D Capabilities for NEURON.
Mattioni, Michele; Cohen, Uri; Le Novère, Nicolas
2012-01-01
The NEURON simulation environment is a commonly used tool to perform electrical simulation of neurons and neuronal networks. The NEURON User Interface, based on the now-discontinued InterViews library, provides some limited facilities to explore models and to plot their simulation results. Other limitations include the inability to generate a three-dimensional visualization and no standard means to save the results of simulations or to store the model geometry within the results. Neuronvisio (http://neuronvisio.org) aims to address these deficiencies through a set of well-designed Python APIs and provides an improved UI, allowing users to explore and interact with the model. Neuronvisio also facilitates access to previously published models, allowing users to browse, download, and locally run NEURON models stored in ModelDB. Neuronvisio uses the matplotlib library to plot simulation results and the HDF standard format to store them. Neuronvisio can be viewed as an extension of NEURON, facilitating typical user workflows such as model browsing, selection, download, compilation, and simulation. The 3D viewer simplifies the exploration of complex model structure, while matplotlib permits the plotting of high-quality graphs. The newly introduced ability to save numerical results allows users to perform additional analysis on their previous simulations.
Island-Model Genomic Selection for Long-Term Genetic Improvement of Autogamous Crops
Yabe, Shiori; Yamasaki, Masanori; Ebana, Kaworu; Hayashi, Takeshi; Iwata, Hiroyoshi
2016-01-01
Acceleration of genetic improvement of autogamous crops such as wheat and rice is necessary to increase cereal production in response to the global food crisis. Population and pedigree methods of breeding, which are based on inbred line selection, are used commonly in the genetic improvement of autogamous crops. These methods, however, produce a few novel combinations of genes in a breeding population. Recurrent selection promotes recombination among genes and produces novel combinations of genes in a breeding population, but it requires inaccurate single-plant evaluation for selection. Genomic selection (GS), which can predict genetic potential of individuals based on their marker genotype, might have high reliability of single-plant evaluation and might be effective in recurrent selection. To evaluate the efficiency of recurrent selection with GS, we conducted simulations using real marker genotype data of rice cultivars. Additionally, we introduced the concept of an “island model” inspired by evolutionary algorithms that might be useful to maintain genetic variation through the breeding process. We conducted GS simulations using real marker genotype data of rice cultivars to evaluate the efficiency of recurrent selection and the island model in an autogamous species. Results demonstrated the importance of producing novel combinations of genes through recurrent selection. An initial population derived from admixture of multiple bi-parental crosses showed larger genetic gains than a population derived from a single bi-parental cross in whole cycles, suggesting the importance of genetic variation in an initial population. The island-model GS better maintained genetic improvement in later generations than the other GS methods, suggesting that the island-model GS can utilize genetic variation in breeding and can retain alleles with small effects in the breeding population. The island-model GS will become a new breeding method that enhances the potential of genomic selection in autogamous crops, especially bringing long-term improvement. PMID:27115872
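A toy rendering of the recurrent-selection-with-islands idea may help fix the concepts: subpopulations are improved independently by selecting parents on genome-predicted values, with occasional migration between islands to preserve variation. This haploid, uniform-crossover sketch uses the true marker effects as a stand-in for a trained genomic prediction model; the population sizes, migration schedule, and genome layout are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n_islands, pop, n_loci = 4, 50, 200
effects = rng.normal(size=n_loci)  # "true" marker effects (toy)
islands = [rng.integers(0, 2, (pop, n_loci)) for _ in range(n_islands)]

def next_generation(geno, n_parents=10):
    """Select parents on predicted genetic value, then recombine
    (uniform crossover, haploid toy genome) to refill the population."""
    predicted = geno @ effects          # stand-in for GS model predictions
    parents = geno[np.argsort(predicted)[-n_parents:]]
    kids = []
    for _ in range(len(geno)):
        a, b = parents[rng.integers(0, n_parents, size=2)]
        mask = rng.integers(0, 2, n_loci).astype(bool)
        kids.append(np.where(mask, a, b))
    return np.array(kids)

for gen in range(20):
    islands = [next_generation(g) for g in islands]
    if gen % 5 == 4:                    # periodic migration between islands
        donor = islands[0]
        for i in range(n_islands):
            neighbor = islands[(i + 1) % n_islands]
            islands[i][0] = neighbor[rng.integers(0, pop)]
```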
Breimer, Gerben E; Haji, Faizal A; Bodani, Vivek; Cunningham, Melissa S; Lopez-Rios, Adriana-Lucia; Okrainec, Allan; Drake, James M
2017-02-01
The relative educational benefits of virtual reality (VR) and physical simulation models for endoscopic third ventriculostomy (ETV) have not been evaluated "head to head." This study aimed to compare and identify the relative utility of physical and VR ETV simulation models for use in neurosurgical training. Twenty-three neurosurgical residents and 3 fellows performed an ETV on both a physical and a VR simulation model. Trainees rated the models using 5-point Likert scales evaluating the domains of anatomy, instrument handling, procedural content, and the overall fidelity of the simulation. Paired t tests were performed for each domain's mean overall score and individual items. The VR model has relative benefits compared with the physical model with respect to realistic representation of intraventricular anatomy at the foramen of Monro (4.5, standard deviation [SD] = 0.7 vs 4.1, SD = 0.6; P = .04) and the third ventricle floor (4.4, SD = 0.6 vs 4.0, SD = 0.9; P = .03), although the overall anatomy score was similar (4.2, SD = 0.6 vs 4.0, SD = 0.6; P = .11). For overall instrument handling and procedural content, the physical simulator outperformed the VR model (3.7, SD = 0.8 vs 4.5, SD = 0.5; P < .001 and 3.9, SD = 0.8 vs 4.2, SD = 0.6; P = .02, respectively). Overall task fidelity across the 2 simulators was not perceived as significantly different. Simulation model selection should be based on educational objectives. Training focused on learning anatomy or decision-making for anatomic cues may be aided with the VR simulation model. A focus on developing manual dexterity and technical skills using endoscopic equipment in the operating room may be better learned on the physical simulation model. Copyright © 2016 by the Congress of Neurological Surgeons
NASA Astrophysics Data System (ADS)
Or, D.; von Ruette, J.; Lehmann, P.
2017-12-01
Landslides and subsequent debris flows initiated by rainfall represent a common natural hazard in mountainous regions. We integrated a landslide hydro-mechanical triggering model with a simple model for debris-flow runout pathways and developed a graphical user interface (GUI) to represent these natural hazards at catchment scale at any location. The STEP-TRAMM GUI provides process-based estimates of the initiation locations and sizes of landslide patterns based on digital elevation models (SRTM) linked with high-resolution global soil maps (SoilGrids, 250 m resolution) and satellite-based information on rainfall statistics for the selected region. In the preprocessing phase the STEP-TRAMM model estimates the soil depth distribution to supplement other soil information for delineating key hydrological and mechanical properties relevant to representing local soil failure. We will illustrate this publicly available GUI and modeling platform by simulating effects of deforestation on landslide hazards in several regions and comparing model outcomes with satellite-based observations.
ERIC Educational Resources Information Center
Kangassalo, Marjatta
Using a pictorial computer simulation of a natural phenomenon, children's exploration processes and their construction of conceptual models were examined. The selected natural phenomenon was the variations of sunlight and heat of the sun experienced on the earth in relation to the positions of the earth and sun in space, and the subjects were…
PROPAGATOR: a synchronous stochastic wildfire propagation model with distributed computation engine
NASA Astrophysics Data System (ADS)
D'Andrea, M.; Fiorucci, P.; Biondi, G.; Negro, D.
2012-04-01
PROPAGATOR is a stochastic model of forest fire spread, useful as a rapid method for fire risk assessment. The model is based on a 2D stochastic cellular automaton. The simulation domain is discretized using a square regular grid with a cell size of 20x20 meters. The model uses high-resolution information such as elevation and the type of vegetation on the ground. Input parameters are wind direction, wind speed and the ignition point of the fire. The simulation of fire propagation proceeds via a stochastic mechanism of propagation between a burning cell and a non-burning cell belonging to its neighbourhood, i.e. the 8 adjacent cells in the rectangular grid. The fire spreads from one cell to its neighbours with a certain base probability, defined using the vegetation types of the two adjacent cells and modified by taking into account the slope between them and the wind direction and speed. The simulation is synchronous and takes into account the time needed by the burning fire to cross each cell. Vegetation cover, slope, wind speed and direction affect the fire-propagation speed from cell to cell. The model simulates several mutually independent realizations of the same stochastic fire propagation process, each of which provides a map of the area burned at each simulation time step. PROPAGATOR simulates self-extinction of the fire, and the propagation process continues until at least one cell of the domain is burning in each realization. The output of the model is a series of maps representing the probability of each cell of the domain being affected by the fire at each time step: these probabilities are obtained by evaluating the relative frequency of ignition of each cell with respect to the complete set of simulations. PROPAGATOR is available as a module in the OWIS (Opera Web Interfaces) system. The model runs on a dedicated server and is remote-controlled from the client program, NAZCA. Ignition points can be selected directly in a high-resolution, three-dimensional graphical representation of the Italian territory within NAZCA. The other simulation parameters, namely wind speed and direction, number of simulations, computing grid size and temporal resolution, can be selected from within the program interface. The output of the simulation is shown in real time and is also available off-line and on the DEWETRA system, a Web-GIS-based system for environmental risk assessment developed according to OGC-INSPIRE standards. The model execution is very fast, providing a full forecast for a scenario in a few minutes, and can be useful for real-time active fire management and suppression.
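The cellular-automaton core described above is easy to sketch. Below, one synchronous update on a boolean grid: every burning cell tries to ignite each of its 8 neighbours with a base probability modulated by a directional factor (a crude wind/slope proxy). The probability values and the 3x3 factor array are illustrative; the operational model's vegetation- and slope-dependent calibration is not reproduced:

```python
import numpy as np

def ignitions(burning, burned, p_base, dir_factor, rng):
    """One synchronous CA update. burning/burned: boolean grids.
    dir_factor: 3x3 multiplier per spread direction.
    Note: np.roll wraps at the domain edge -- fine for a demo grid."""
    new = np.zeros_like(burning)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di == 0 and dj == 0:
                continue
            src = np.roll(np.roll(burning, di, axis=0), dj, axis=1)
            p = p_base * dir_factor[di + 1, dj + 1]
            new |= src & (rng.random(burning.shape) < p)
    return new & ~burned & ~burning   # only unburned cells can ignite
```

Averaging the final burned maps over many independent realizations yields per-cell burn probability maps of the kind the abstract describes.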
Learning from Avatars: Learning Assistants Practice Physics Pedagogy in a Classroom Simulator
ERIC Educational Resources Information Center
Chini, Jacquelyn J.; Straub, Carrie L.; Thomas, Kevin H.
2016-01-01
Undergraduate students are increasingly being used to support course transformations that incorporate research-based instructional strategies. While such students are typically selected based on strong content knowledge and possible interest in teaching, they often do not have previous pedagogical training. The current training models make use of…
Single-cell-based computer simulation of the oxygen-dependent tumour response to irradiation
NASA Astrophysics Data System (ADS)
Harting, Christine; Peschke, Peter; Borkenstein, Klaus; Karger, Christian P.
2007-08-01
Optimization of treatment plans in radiotherapy requires the knowledge of tumour control probability (TCP) and normal tissue complication probability (NTCP). Mathematical models may help to obtain quantitative estimates of TCP and NTCP. A single-cell-based computer simulation model is presented, which simulates tumour growth and radiation response on the basis of the response of the constituting cells. The model contains oxic, hypoxic and necrotic tumour cells as well as capillary cells which are considered as sources of a radial oxygen profile. Survival of tumour cells is calculated by the linear quadratic model including the modified response due to the local oxygen concentration. The model additionally includes cell proliferation, hypoxia-induced angiogenesis, apoptosis and resorption of inactivated tumour cells. By selecting different degrees of angiogenesis, the model allows the simulation of oxic as well as hypoxic tumours having distinctly different oxygen distributions. The simulation model showed that poorly oxygenated tumours exhibit an increased radiation tolerance. Inter-tumoural variation of radiosensitivity flattens the dose response curve. This effect is enhanced by proliferation between fractions. Intra-tumoural radiosensitivity variation does not play a significant role. The model may contribute to the mechanistic understanding of the influence of biological tumour parameters on TCP. It can in principle be validated in radiation experiments with experimental tumours.
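The oxygen-dependent survival step in such simulations is commonly implemented by scaling the effective dose with an oxygen enhancement ratio (OER) inside the linear-quadratic law; a sketch under that standard assumption, with illustrative parameter values rather than the authors' calibrated ones:

```python
import numpy as np

def lq_survival(dose_gy, alpha=0.3, beta=0.03, oer=1.0):
    """Linear-quadratic survival S = exp(-(a*d + b*d^2)), with the
    effective dose divided by the oxygen enhancement ratio (oer = 1 for
    well-oxygenated cells, rising toward ~3 under full hypoxia)."""
    d = dose_gy / oer
    return np.exp(-(alpha * d + beta * d ** 2))

s_oxic = lq_survival(2.0)              # 2 Gy fraction, oxic cell
s_hypoxic = lq_survival(2.0, oer=2.5)  # same fraction, hypoxic cell
```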
NASA Technical Reports Server (NTRS)
Phillips, Dave; Haas, William; Barth, Tim; Benjamin, Perakath; Graul, Michael; Bagatourova, Olga
2005-01-01
Range Process Simulation Tool (RPST) is a computer program that assists managers in rapidly predicting and quantitatively assessing the operational effects of proposed technological additions to, and/or upgrades of, complex facilities and engineering systems such as the Eastern Test Range. Originally designed for application to space transportation systems, RPST is also suitable for assessing effects of proposed changes in industrial facilities and large organizations. RPST follows a model-based approach that includes finite-capacity schedule analysis and discrete-event process simulation. A component-based, scalable, open architecture makes RPST easily and rapidly tailorable for diverse applications. Specific RPST functions include: (1) definition of analysis objectives and performance metrics; (2) selection of process templates from a process-template library; (3) configuration of process models for detailed simulation and schedule analysis; (4) design of operations-analysis experiments; (5) schedule and simulation-based process analysis; and (6) optimization of performance by use of genetic algorithms and simulated annealing. The main benefits afforded by RPST are provision of information that can be used to reduce costs of operation and maintenance, and the capability for affordable, accurate, and reliable prediction and exploration of the consequences of many alternative proposed decisions.
NASA Astrophysics Data System (ADS)
Dubrovsky, M.; Farda, A.; Huth, R.
2012-12-01
The regional-scale simulation of weather-sensitive processes (e.g. hydrology, agriculture and forestry) for the present and/or future climate often requires high-resolution meteorological inputs in terms of time series of selected surface weather characteristics (typically temperature, precipitation, solar radiation, humidity, wind) for a set of stations or on a regular grid. As even the latest Global and Regional Climate Models (GCMs and RCMs) do not provide a realistic representation of the statistical structure of surface weather, the model outputs must be postprocessed (downscaled) to achieve the desired statistical structure of the weather data before being used as input to follow-up simulation models. One of the downscaling approaches, which is employed here, is based on a weather generator (WG), which is calibrated using observed weather series and then modified (in the case of simulations for the future climate) according to GCM- or RCM-based climate change scenarios. The present contribution uses the parametric daily weather generator M&Rfi to pursue two aims: (1) Validation of the new simulations of the present climate (1961-1990) made by the ALADIN-Climate/CZ (v.2) Regional Climate Model at 25 km resolution. The WG parameters will be derived from the RCM-simulated surface weather series and compared to those derived from observational data at the Czech meteorological stations. The set of WG parameters includes selected statistics of surface temperature and precipitation (characteristics of the mean, variability, interdiurnal variability and extremes). (2) Testing the potential of RCM output for calibrating the WG for ungauged locations. The methodology being examined consists in using a WG whose parameters are interpolated from the surrounding stations and then corrected based on RCM-simulated spatial variability. The quality of the weather series produced by the WG calibrated in this way will be assessed in terms of selected climatic characteristics, focusing on extreme precipitation and temperature characteristics (including characteristics of dry/wet/hot/cold spells). Acknowledgements: The present experiment is made within the frame of the projects ALARO (project P209/11/2405 sponsored by the Czech Science Foundation), WG4VALUE (project LD12029 sponsored by the Ministry of Education, Youth and Sports) and VALUE (COST ES 1102 action).
Biogeographic patterns in ocean microbes emerge in a neutral agent-based model.
Hellweger, Ferdi L; van Sebille, Erik; Fredrick, Neil D
2014-09-12
A key question in ecology and evolution is the relative role of natural selection and neutral evolution in producing biogeographic patterns. We quantify the role of neutral processes by simulating division, mutation, and death of 100,000 individual marine bacteria cells with full 1 million-base-pair genomes in a global surface ocean circulation model. The model is run for up to 100,000 years and output is analyzed using BLAST (Basic Local Alignment Search Tool) alignment and metagenomics fragment recruitment. Simulations show the production and maintenance of biogeographic patterns, characterized by distinct provinces subject to mixing and periodic takeovers by neighbors (coalescence), after which neutral evolution reestablishes the province and the patterns reorganize. The emergent patterns are substantial (e.g., down to 99.5% DNA identity between North and Central Pacific provinces) and suggest that microbes evolve faster than ocean currents can disperse them. This approach can also be used to explore environmental selection. Copyright © 2014, American Association for the Advancement of Science.
Simulation study on electric field intensity above train roof
NASA Astrophysics Data System (ADS)
Fan, Yizhe; Li, Huawei; Yang, Shasha
2018-04-01
To accurately characterize the distribution of the electric field in the space above the train roof and to reasonably select the installation position of the detection device, a 3D pantograph-catenary model is established in this paper using SolidWorks, and the spatial electric field distribution of the pantograph-catenary model is simulated in COMSOL. Based on the analysis of electric field intensity within the 0.4 m space above the train roof, a reasonable installation position for the detection device is proposed.
Wickman, Jonas; Diehl, Sebastian; Blasius, Bernd; Klausmeier, Christopher A; Ryabov, Alexey B; Brännström, Åke
2017-04-01
Spatial structure can decisively influence the way evolutionary processes unfold. To date, several methods have been used to study evolution in spatial systems, including population genetics, quantitative genetics, moment-closure approximations, and individual-based models. Here we extend the study of spatial evolutionary dynamics to eco-evolutionary models based on reaction-diffusion equations and adaptive dynamics. Specifically, we derive expressions for the strength of directional and stabilizing/disruptive selection that apply both in continuous space and to metacommunities with symmetrical dispersal between patches. For directional selection on a quantitative trait, this yields a way to integrate local directional selection across space and determine whether the trait value will increase or decrease. The robustness of this prediction is validated against quantitative genetics. For stabilizing/disruptive selection, we show that spatial heterogeneity always contributes to disruptive selection and hence always promotes evolutionary branching. The expression for directional selection is numerically very efficient and hence lends itself to simulation studies of evolutionary community assembly. We illustrate the application and utility of the expressions for this purpose with two examples of the evolution of resource utilization. Finally, we outline the domain of applicability of reaction-diffusion equations as a modeling framework and discuss their limitations.
Variable speed limit strategies analysis with link transmission model on urban expressway
NASA Astrophysics Data System (ADS)
Li, Shubin; Cao, Danni
2018-02-01
The variable speed limit (VSL) is an active traffic management method; most VSL strategies are applied to expressway traffic flow control in order to ensure traffic safety. The urban expressway system, however, is a city's main artery, carries most of the traffic load, and has traffic characteristics similar to those of intercity expressways. In this paper, an improved link transmission model (LTM) combined with VSL strategies is proposed for urban expressway networks. The model can simulate the movement of vehicles and shock waves, and it balances computational cost against accuracy well. Furthermore, an optimal VSL strategy can be derived from the simulation, providing management strategies for operators. Finally, a simple example is given to illustrate the model and method. The indexes selected for the simulation are the average density, the average speed and the average flow on the traffic network. The simulation results show that the proposed model and method are feasible and that a VSL strategy can effectively alleviate traffic congestion in some cases and greatly improve the efficiency of the transportation system.
On selecting evidence to test hypotheses: A theory of selection tasks.
Ragni, Marco; Kola, Ilir; Johnson-Laird, Philip N
2018-05-21
How individuals choose evidence to test hypotheses is a long-standing puzzle. According to an algorithmic theory that we present, it is based on dual processes: individuals' intuitions depending on mental models of the hypothesis yield selections of evidence matching instances of the hypothesis, but their deliberations yield selections of potential counterexamples to the hypothesis. The results of 228 experiments using Wason's selection task corroborated the theory's predictions. Participants made dependent choices of items of evidence: the selections in 99 experiments were significantly more redundant (using Shannon's measure) than those of 10,000 simulations of each experiment based on independent selections. Participants tended to select evidence corresponding to instances of hypotheses, or to its counterexamples, or to both. Given certain contents, instructions, or framings of the task, they were more likely to select potential counterexamples to the hypothesis. When participants received feedback about their selections in the "repeated" selection task, they switched from selections of instances of the hypothesis to selection of potential counterexamples. These results eliminated most of the 15 alternative theories of selecting evidence. In a meta-analysis, the model theory yielded a better fit of the results of 228 experiments than the one remaining theory based on reasoning rather than meaning. We discuss the implications of the model theory for hypothesis testing and for a well-known paradox of confirmation. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
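The redundancy test described above can be reproduced in outline: compare the Shannon entropy of participants' joint selection patterns with the entropy expected if the four cards were chosen independently at their observed marginal rates; lower joint entropy means more redundant (dependent) selections. A sketch of that comparison, not the authors' code:

```python
import numpy as np
from collections import Counter

def joint_entropy(patterns):
    """Shannon entropy (bits) of the empirical distribution of selection
    patterns; each pattern is a tuple of 0/1 flags for the four cards."""
    counts = np.array(list(Counter(patterns).values()), dtype=float)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def independent_entropy(patterns):
    """Entropy if each card were selected independently with its observed
    marginal frequency: the sum of the per-card binary entropies."""
    marginals = np.array(patterns, dtype=float).mean(axis=0)
    h = 0.0
    for q in marginals:
        if 0.0 < q < 1.0:
            h -= q * np.log2(q) + (1 - q) * np.log2(1 - q)
    return h

# Redundancy: how far joint entropy falls below the independence bound.
# redundancy = independent_entropy(obs) - joint_entropy(obs)
```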
Zhang, Xiaoshuai; Xue, Fuzhong; Liu, Hong; Zhu, Dianwen; Peng, Bin; Wiemels, Joseph L; Yang, Xiaowei
2014-12-10
Genome-wide association studies (GWAS) are typically designed to identify phenotype-associated single nucleotide polymorphisms (SNPs) individually using univariate analysis methods. Though providing valuable insights into the genetic risks of common diseases, the genetic variants identified by GWAS generally account for only a small proportion of the total heritability of complex diseases. To address this "missing heritability" problem, we implemented a strategy called integrative Bayesian Variable Selection (iBVS), which is based on a hierarchical model that incorporates an informative prior by treating the gene interrelationship as a network. It was applied here to both simulated and real data sets. Simulation studies indicated that the iBVS method performed best, with the highest AUC in both variable selection and outcome prediction, when compared to Stepwise and LASSO based strategies. In an analysis of a leprosy case-control study, iBVS selected 94 SNPs as predictors, while LASSO selected 100 SNPs; Stepwise regression yielded a more parsimonious model with only 3 SNPs. The prediction results demonstrated that the iBVS method had performance comparable to LASSO and better than Stepwise strategies. The proposed iBVS strategy is a novel and valid method for genome-wide association studies, with the additional advantage that it produces more interpretable posterior probabilities for each variable, unlike LASSO and other penalized regression methods.
Systematic reconstruction of TRANSPATH data into Cell System Markup Language
Nagasaki, Masao; Saito, Ayumu; Li, Chen; Jeong, Euna; Miyano, Satoru
2008-01-01
Background: Many biological repositories store information based on experimental study of the biological processes within a cell, such as protein-protein interactions, metabolic pathways, signal transduction pathways, or regulation of transcription factors and miRNA. Unfortunately, it is difficult to directly use such information when generating simulation-based models. Thus, modeling rules for encoding biological knowledge into system-dynamics-oriented standardized formats would be very useful for fully understanding cellular dynamics at the system level. Results: We selected the TRANSPATH database, a manually curated high-quality pathway database, which provides a plentiful source of cellular events in humans, mice, and rats, collected from over 31,500 publications. In this work, we have developed 16 modeling rules based on hybrid functional Petri net with extension (HFPNe), which is suitable for graphically representing and simulating biological processes. In the modeling rules, each Petri net element is incorporated with the Cell System Ontology (CSO) to enable semantic interoperability of models. As a formal ontology for biological pathway modeling with dynamics, CSO also defines biological terminology and corresponding icons. By combining HFPNe with the CSO features, it is possible to turn TRANSPATH data into simulation-based and semantically valid models. The results are encoded into a biological pathway format, Cell System Markup Language (CSML), which eases the exchange and integration of biological data and models. Conclusion: By using the 16 modeling rules, 97% of the reactions in TRANSPATH are converted into simulation-based models represented in CSML. This reconstruction demonstrates that it is possible to use our rules to generate quantitative models from static pathway descriptions. PMID:18570683
Zhang, Liping; Zhang, Shiwen; Huang, Yajie; Cao, Meng; Huang, Yuanfang; Zhang, Hongyan
2016-01-01
Understanding abandoned mine land (AML) changes during land reclamation is crucial for reusing damaged land resources and formulating sound ecological restoration policies. This study combines the linear programming (LP) model and the CLUE-S model to simulate land-use dynamics in the Mentougou District (Beijing, China) from 2007 to 2020 under three reclamation scenarios, that is, the planning scenario based on the general land-use plan in the study area (scenario 1), maximal comprehensive benefits (scenario 2), and maximal ecosystem service value (scenario 3). Nine landscape-scale graph metrics were then selected to describe the landscape characteristics. The results show that the coupled model presented can simulate the dynamics of AML effectively and that the spatially explicit transformations of AML differed across scenarios. New cultivated land dominates in scenario 1, while construction land and forest land account for major percentages in scenarios 2 and 3, respectively. Scenario 3 has an advantage in most of the selected indices as its patches are combined most closely. To conclude, reclaiming AML by transformation into more forest can reduce the variability and maintain the stability of the landscape ecological system in the study area. These findings contribute to better mapping of AML dynamics and provide policy support for the management of AML. PMID:27023575
Zhang, Liping; Zhang, Shiwen; Huang, Yajie; Cao, Meng; Huang, Yuanfang; Zhang, Hongyan
2016-03-24
Understanding abandoned mine land (AML) changes during land reclamation is crucial for reusing damaged land resources and formulating sound ecological restoration policies. This study combines the linear programming (LP) model and the CLUE-S model to simulate land-use dynamics in the Mentougou District (Beijing, China) from 2007 to 2020 under three reclamation scenarios, that is, the planning scenario based on the general land-use plan in the study area (scenario 1), maximal comprehensive benefits (scenario 2), and maximal ecosystem service value (scenario 3). Nine landscape-scale graph metrics were then selected to describe the landscape characteristics. The results show that the coupled model presented can simulate the dynamics of AML effectively and that the spatially explicit transformations of AML differed across scenarios. New cultivated land dominates in scenario 1, while construction land and forest land account for major percentages in scenarios 2 and 3, respectively. Scenario 3 has an advantage in most of the selected indices as its patches are combined most closely. To conclude, reclaiming AML by transformation into more forest can reduce the variability and maintain the stability of the landscape ecological system in the study area. These findings contribute to better mapping of AML dynamics and provide policy support for the management of AML.
Simulating drinking in social networks to inform alcohol prevention and treatment efforts.
Hallgren, Kevin A; McCrady, Barbara S; Caudell, Thomas P; Witkiewitz, Katie; Tonigan, J Scott
2017-11-01
Adolescent drinking influences, and is influenced by, peer alcohol use. Several efficacious adolescent alcohol interventions include elements aimed at reducing susceptibility to peer influence. Modeling these interventions within dynamically changing social networks may improve our understanding of how such interventions work and for whom they work best. We used stochastic actor-based models to simulate longitudinal drinking and friendship formation within social networks using parameters obtained from a meta-analysis of real-world 10th grade adolescent social networks. Levels of social influence (i.e., friends affecting changes in one's drinking) and social selection (i.e., drinking affecting changes in one's friendships) were manipulated at several levels, which directly impacted the degree of clustering in friendships based on similarity in drinking behavior. Midway through each simulation, one randomly selected heavy-drinking actor from each network received an "intervention" that either (a) reduced their susceptibility to social influence, (b) reduced their susceptibility to social selection, (c) eliminated a friendship with a heavy drinker, or (d) initiated a friendship with a nondrinker. Only the intervention that eliminated targeted actors' susceptibility to social influence consistently reduced that actor's drinking. Moreover, this was only effective in networks with social influence and social selection that were at higher levels than what was found in the real-world reference study. Social influence and social selection are dynamic processes that can lead to complex systems that may moderate the effectiveness of network-based interventions. Interventions that reduce susceptibility to social influence may be most effective among adolescents with high susceptibility to social influence and heavier-drinking friends.
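As a rough illustration of the two mechanisms the study manipulates, the Python toy below co-evolves drinking levels (social influence: drifting toward friends' mean) and friendships (social selection: ties persist only between similar drinkers). The update rules and parameter values are invented stand-ins and are far simpler than the SIENA-style stochastic actor-based models used in the paper.

import random

random.seed(1)
N = 30
drink = [random.random() for _ in range(N)]      # drinking level per actor, 0..1
friend = [[False] * N for _ in range(N)]         # symmetric friendship matrix

def micro_step(influence=0.3, selection_tol=0.2):
    i = random.randrange(N)
    # social influence: drift toward the mean drinking of current friends
    fr = [j for j in range(N) if friend[i][j]]
    if fr:
        target = sum(drink[j] for j in fr) / len(fr)
        drink[i] += influence * (target - drink[i])
    # social selection: a tie to a random peer exists only while similar
    j = random.randrange(N)
    if j != i:
        friend[i][j] = friend[j][i] = abs(drink[i] - drink[j]) < selection_tol

for _ in range(20000):
    micro_step()
print(round(sum(drink) / N, 2))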
Simulation environment and graphical visualization environment: a COPD use-case
2014-01-01
Background Today, many different tools are developed to execute and visualize physiological models that represent human physiology. Most of these tools run models written in very specific programming languages, which in turn simplifies the communication among models. Nevertheless, not all of these tools are able to run models written in different programming languages. In addition, interoperability between such models remains an unresolved issue. Results In this paper we present a simulation environment that allows, first, the execution of models developed in different programming languages and, second, the communication of parameters to interconnect these models. This simulation environment, developed within the Synergy-COPD project, aims at helping and supporting bio-researchers and medical students to understand the internal mechanisms of the human body through the use of physiological models. This tool is composed of a graphical visualization environment, which is a web interface through which the user can interact with the models, and a simulation workflow management system composed of a control module and a data warehouse manager. The control module monitors the correct functioning of the whole system. The data warehouse manager is responsible for managing the stored information and supporting its flow among the different modules. This simulation environment has been validated with the integration of three models: two deterministic, i.e., based on linear and differential equations, and one probabilistic, i.e., based on probability theory. These models were selected based on the disease under study in this project, i.e., chronic obstructive pulmonary disease. Conclusion It has been shown that the simulation environment presented here allows the user to research and study the internal mechanisms of human physiology through the use of models via a graphical visualization environment. A new tool for bio-researchers is ready for deployment in various use-case scenarios. PMID:25471327
Simulation of large-scale rule-based models
Colvin, Joshua; Monine, Michael I.; Faeder, James R.; Hlavacek, William S.; Von Hoff, Daniel D.; Posner, Richard G.
2009-01-01
Motivation: Interactions of molecules, such as signaling proteins, with multiple binding sites and/or multiple sites of post-translational covalent modification can be modeled using reaction rules. Rules comprehensively, but implicitly, define the individual chemical species and reactions that molecular interactions can potentially generate. Although rules can be automatically processed to define a biochemical reaction network, the network implied by a set of rules is often too large to generate completely or to simulate using conventional procedures. To address this problem, we present DYNSTOC, a general-purpose tool for simulating rule-based models. Results: DYNSTOC implements a null-event algorithm for simulating chemical reactions in a homogeneous reaction compartment. The simulation method does not require that a reaction network be specified explicitly in advance, but rather takes advantage of the availability of the reaction rules in a rule-based specification of a network to determine if a randomly selected set of molecular components participates in a reaction during a time step. DYNSTOC reads reaction rules written in the BioNetGen language which is useful for modeling protein–protein interactions involved in signal transduction. The method of DYNSTOC is closely related to that of StochSim. DYNSTOC differs from StochSim by allowing for model specification in terms of BNGL, which extends the range of protein complexes that can be considered in a model. DYNSTOC enables the simulation of rule-based models that cannot be simulated by conventional methods. We demonstrate the ability of DYNSTOC to simulate models accounting for multisite phosphorylation and multivalent binding processes that are characterized by large numbers of reactions. Availability: DYNSTOC is free for non-commercial use. The C source code, supporting documentation and example input files are available at http://public.tgen.org/dynstoc/. Contact: dynstoc@tgen.org Supplementary information: Supplementary data are available at Bioinformatics online. PMID:19213740
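The null-event idea is compact enough to show directly: each fixed-length trial draws random molecules and either fires a matching rule with some probability or does nothing (a null event), so the full reaction network is never enumerated. The sketch below does this for a single A + B -> C rule with invented numbers; DYNSTOC itself operates on BNGL rules over molecular site graphs rather than bare species counts.

import random

random.seed(0)
counts = {"A": 500, "B": 300, "C": 0}
p_react = 0.05   # acceptance probability for an A-B encounter (assumed)
dt = 1e-4        # fixed time advance per trial, as in null-event schemes
t = 0.0

for _ in range(100000):
    t += dt
    # draw two molecules at random, weighted by current copy numbers
    a, b = random.choices(list(counts), weights=list(counts.values()), k=2)
    if {a, b} == {"A", "B"} and random.random() < p_react:
        counts["A"] -= 1; counts["B"] -= 1; counts["C"] += 1   # rule fires
    # otherwise the trial is a null event and only time advances

print(round(t, 2), counts)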
Cichero, Elena; D'Ursi, Pasqualina; Moscatelli, Marco; Bruno, Olga; Orro, Alessandro; Rotolo, Chiara; Milanesi, Luciano; Fossa, Paola
2013-12-01
Phosphodiesterase 11 (PDE11) is the latest isoform of the PDEs family to be identified, acting on both cyclic adenosine monophosphate and cyclic guanosine monophosphate. The initial reports of PDE11 found evidence for PDE11 expression in skeletal muscle, prostate, testis, and salivary glands; however, the tissue distribution of PDE11 remains a topic of active study and some controversy. Given the sequence similarity between PDE11 and PDE5, several PDE5 inhibitors have been shown to cross-react with PDE11. Accordingly, many non-selective inhibitors, such as IBMX, zaprinast, sildenafil, and dipyridamole, have been documented to inhibit PDE11. Only recently, a series of dihydrothieno[3,2-d]pyrimidin-4(3H)-one derivatives proved to be selective toward the PDE11 isoform. In the absence of experimental data on PDE11 X-ray structures, it was of interest to gain a better understanding of the enzyme-inhibitor interactions using in silico simulations. In this work, we describe a computational approach based on homology modeling, docking, and molecular dynamics simulation to derive a predictive 3D model of PDE11. Using a Graphical Processing Unit architecture, it is possible to perform long simulations, find stable interactions involved in the complex, and finally to suggest guidelines for the identification and synthesis of potent and selective inhibitors.
Feng, Yongjiu; Tong, Xiaohua
2017-09-22
Defining transition rules is an important issue in cellular automaton (CA)-based land use modeling because these models incorporate highly correlated driving factors. Multicollinearity among correlated driving factors may produce negative effects that must be eliminated from the modeling. Using exploratory regression under pre-defined criteria, we identified all possible combinations of factors from the candidate factors affecting land use change. Three combinations that incorporate five driving factors meeting pre-defined criteria were assessed. With the selected combinations of factors, three logistic regression-based CA models were built to simulate dynamic land use change in Shanghai, China, from 2000 to 2015. For comparative purposes, a CA model with all candidate factors was also applied to simulate the land use change. Simulations using the three CA models with multicollinearity eliminated performed better (with accuracy improvements of about 3.6%) than the model incorporating all candidate factors. Our results showed that not all candidate factors are necessary for accurate CA modeling and that the simulations were not sensitive to changes in statistically non-significant driving factors. We conclude that exploratory regression is an effective method to search for the optimal combinations of driving factors, leading to better land use change models that are devoid of multicollinearity. We suggest identification of dominant factors and elimination of multicollinearity before building land change models, making it possible to simulate more realistic outcomes.
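Exploratory regression enumerates factor combinations under significance and collinearity criteria; a compact proxy for the collinearity screening step is iterative variance-inflation-factor (VIF) elimination, sketched below on synthetic driving factors. The data, the factor names, and the 7.5 cutoff are assumptions for illustration (7.5 is a common exploratory-regression default, not necessarily the authors' setting).

import numpy as np

np.random.seed(0)
n = 200
elev = np.random.rand(n)
slope = 0.9 * elev + 0.1 * np.random.rand(n)   # deliberately collinear with elev
dist_road = np.random.rand(n)
X = np.column_stack([elev, slope, dist_road])
names = ["elev", "slope", "dist_road"]

def vif(M, j):
    # regress column j on the remaining columns; VIF = 1 / (1 - R^2)
    y = M[:, j]
    Z = np.column_stack([np.ones(len(M)), np.delete(M, j, axis=1)])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    r2 = 1 - (y - Z @ beta).var() / y.var()
    return 1.0 / (1.0 - r2)

keep = list(range(X.shape[1]))
while len(keep) > 1:
    vifs = [vif(X[:, keep], i) for i in range(len(keep))]
    worst = max(range(len(keep)), key=lambda i: vifs[i])
    if vifs[worst] < 7.5:
        break
    keep.pop(worst)                # drop the most collinear factor
print([names[i] for i in keep])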
Mankodi, T K; Bhandarkar, U V; Puranik, B P
2017-08-28
A new ab initio based chemical model for a Direct Simulation Monte Carlo (DSMC) study suitable for simulating rarefied flows with a high degree of non-equilibrium is presented. To this end, Collision Induced Dissociation (CID) cross sections for N₂ + N₂ → N₂ + 2N are calculated and published using a global complete active space self-consistent field-complete active space second order perturbation theory N₄ potential energy surface and a quasi-classical trajectory algorithm for high energy collisions (up to 30 eV). CID cross sections are calculated for only a selected set of ro-vibrational combinations of the two nitrogen molecules, and a fitting scheme based on spectroscopic weights is presented to interpolate the CID cross section for all possible ro-vibrational combinations. The new chemical model is validated by calculating equilibrium reaction rate coefficients that compare well with existing shock tube and computational results. High-enthalpy hypersonic nitrogen flows around a cylinder in the transition flow regime are simulated using DSMC to compare the predictions of the current ab initio based chemical model with the prevailing phenomenological model (the total collision energy model). The differences in the predictions are discussed.
NASA Astrophysics Data System (ADS)
Özel, Tuğrul; Arısoy, Yiğit M.; Criales, Luis E.
Computational modelling of Laser Powder Bed Fusion (L-PBF) processes such as Selective Laser Melting (SLM) can reveal information that is hard to obtain or unobtainable by in-situ experimental measurements. A 3D thermal field that is not visible to the thermal camera can be obtained by solving the 3D heat transfer problem. Furthermore, microstructural modelling can be used to predict the quality and mechanical properties of the product. In this paper, a nonlinear 3D Finite Element Method based computational code is developed to simulate the SLM process with different process parameters such as laser power and scan velocity. The code is further improved by utilizing an in-situ thermal camera recording to predict spattering, which is in turn included as a stochastic heat loss. Then, thermal gradients extracted from the simulations are applied to predict growth directions in the resulting microstructure.
NASA Technical Reports Server (NTRS)
Montgomery, Raymond C.; Ghosh, Dave; Kenny, Sean
1991-01-01
This paper presents results of analytic and simulation studies to determine the effectiveness of torque-wheel actuators in suppressing the vibrations of two-link telerobotic arms with attached payloads. The simulations use a planar generic model of a two-link arm with a torque wheel at the free end. Parameters of the arm model are selected to be representative of a large space-based robotic arm of the same class as the Space Shuttle Remote Manipulator, whereas parameters of the torque wheel are selected to be similar to those of the Mini-Mast facility at the Langley Research Center. Results show that this class of torque-wheel can produce an oscillation of 2.5 cm peak-to-peak in the end point of the arm and that the wheel produces significantly less overshoot when the arm is issued an abrupt stop command from the telerobotic input station.
Simulating a base population in honey bee for molecular genetic studies
2012-01-01
Background Over the past years, reports have indicated that honey bee populations are declining and that infestation by an ecto-parasitic mite (Varroa destructor) is one of the main causes. Selective breeding of resistant bees can help to prevent losses due to the parasite, but it requires that a robust breeding program and genetic evaluation are implemented. Genomic selection has emerged as an important tool in animal breeding programs and simulation studies have shown that it yields more accurate breeding value estimates, higher genetic gain and low rates of inbreeding. Since genomic selection relies on marker data, simulations conducted on a genomic dataset are a pre-requisite before selection can be implemented. Although genomic datasets have been simulated in other species undergoing genetic evaluation, simulation of a genomic dataset specific to the honey bee is required since this species has a distinct genetic and reproductive biology. Our software program was aimed at constructing a base population by simulating a random mating honey bee population. A forward-time population simulation approach was applied since it allows modeling of genetic characteristics and reproductive behavior specific to the honey bee. Results Our software program yielded a genomic dataset for a base population in linkage disequilibrium. In addition, information was obtained on (1) the position of markers on each chromosome, (2) allele frequency, (3) χ² statistics for Hardy-Weinberg equilibrium, (4) a sorted list of markers with a minor allele frequency less than or equal to the input value, (5) average r² values of linkage disequilibrium between all simulated marker loci pair for all generations and (6) average r² value of linkage disequilibrium in the last generation for selected markers with the highest minor allele frequency. Conclusion We developed a software program that takes into account the genetic and reproductive biology specific to the honey bee and that can be used to constitute a genomic dataset compatible with the simulation studies necessary to optimize breeding programs. The source code together with an instruction file is freely accessible at http://msproteomics.org/Research/Misc/honeybeepopulationsimulator.html PMID:22520469
Simulating a base population in honey bee for molecular genetic studies.
Gupta, Pooja; Conrad, Tim; Spötter, Andreas; Reinsch, Norbert; Bienefeld, Kaspar
2012-06-27
Over the past years, reports have indicated that honey bee populations are declining and that infestation by an ecto-parasitic mite (Varroa destructor) is one of the main causes. Selective breeding of resistant bees can help to prevent losses due to the parasite, but it requires that a robust breeding program and genetic evaluation are implemented. Genomic selection has emerged as an important tool in animal breeding programs and simulation studies have shown that it yields more accurate breeding value estimates, higher genetic gain and low rates of inbreeding. Since genomic selection relies on marker data, simulations conducted on a genomic dataset are a pre-requisite before selection can be implemented. Although genomic datasets have been simulated in other species undergoing genetic evaluation, simulation of a genomic dataset specific to the honey bee is required since this species has a distinct genetic and reproductive biology. Our software program was aimed at constructing a base population by simulating a random mating honey bee population. A forward-time population simulation approach was applied since it allows modeling of genetic characteristics and reproductive behavior specific to the honey bee. Our software program yielded a genomic dataset for a base population in linkage disequilibrium. In addition, information was obtained on (1) the position of markers on each chromosome, (2) allele frequency, (3) χ² statistics for Hardy-Weinberg equilibrium, (4) a sorted list of markers with a minor allele frequency less than or equal to the input value, (5) average r² values of linkage disequilibrium between all simulated marker loci pair for all generations and (6) average r² value of linkage disequilibrium in the last generation for selected markers with the highest minor allele frequency. We developed a software program that takes into account the genetic and reproductive biology specific to the honey bee and that can be used to constitute a genomic dataset compatible with the simulation studies necessary to optimize breeding programs. The source code together with an instruction file is freely accessible at http://msproteomics.org/Research/Misc/honeybeepopulationsimulator.html.
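For reference, the reported linkage-disequilibrium statistic r² has a simple closed form for a pair of biallelic loci: r² = D² / (pA(1-pA)·pB(1-pB)) with D = pAB - pA·pB. A minimal Python check on synthetic haplotypes (the data here are random and purely illustrative):

import random

random.seed(3)
# haplotypes as (allele at locus A, allele at locus B) boolean pairs
haps = [(random.random() < 0.6, random.random() < 0.6) for _ in range(1000)]

def r_squared(haps):
    n = len(haps)
    pA = sum(a for a, _ in haps) / n
    pB = sum(b for _, b in haps) / n
    pAB = sum(1 for a, b in haps if a and b) / n
    D = pAB - pA * pB
    return D * D / (pA * (1 - pA) * pB * (1 - pB))

print(round(r_squared(haps), 4))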
Abbas, Ismail; Rovira, Joan; Casanovas, Josep
2006-12-01
To develop and validate a model of a clinical trial that evaluates the changes in cholesterol level as a surrogate marker for lipodystrophy in HIV subjects under alternative antiretroviral regimes, i.e., treatment with Protease Inhibitors vs. a combination of nevirapine and other antiretroviral drugs. Five simulation models were developed based on different assumptions on treatment variability and the pattern of cholesterol reduction over time. The last recorded cholesterol level, the difference from the baseline, the average difference from the baseline, and level evolution are the considered endpoints. Specific validation criteria based on a plus-or-minus 10% standardized distance in means and variances were used to compare the real and the simulated data. The validity criterion was met by all models for individual endpoints; however, only two models met the validity criterion when all endpoints were considered jointly. The model based on the assumption that within-subjects variability of cholesterol levels changes over time is the one that minimizes the validity criterion, with a standardized distance within plus or minus 1%. Simulation is a useful technique for calibration, estimation, and evaluation of models, which allows us to relax the often overly restrictive assumptions regarding parameters required by analytical approaches. The validity criterion can also be used to select the preferred model for design optimization, until additional data are obtained allowing an external validation of the model.
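One plausible reading of the validation rule, that simulated data must fall within a plus-or-minus 10% standardized distance of the real data in both means and variances, is sketched below; the exact standardization used in the paper may differ, so treat this form as an assumption.

import statistics as st

def meets_criterion(real, sim, tol=0.10):
    # standardized distances in mean and variance (our assumed form)
    dm = abs(st.mean(sim) - st.mean(real)) / abs(st.mean(real))
    dv = abs(st.variance(sim) - st.variance(real)) / st.variance(real)
    return dm <= tol and dv <= tol

real = [200, 210, 190, 205, 198]   # illustrative cholesterol levels
sim = [202, 207, 193, 206, 200]
print(meets_criterion(real, sim))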
NASA Astrophysics Data System (ADS)
Ou, G.; Nijssen, B.; Nearing, G. S.; Newman, A. J.; Mizukami, N.; Clark, M. P.
2016-12-01
The Structure for Unifying Multiple Modeling Alternatives (SUMMA) provides a unifying modeling framework for process-based hydrologic modeling by defining a general set of conservation equations for mass and energy, with the capability to incorporate multiple choices for spatial discretizations and flux parameterizations. In this study, we provide a first demonstration of large-scale hydrologic simulations using SUMMA through an application to the Columbia River Basin (CRB) in the northwestern United States and Canada for a multi-decadal simulation period. The CRB is discretized into 11,723 hydrologic response units (HRUs) according to the United States Geological Survey Geospatial Fabric. The soil parameters are derived from the Natural Resources Conservation Service Soil Survey Geographic (SSURGO) Database. The land cover parameters are based on the National Land Cover Database from the year 2001 created by the Multi-Resolution Land Characteristics (MRLC) Consortium. The forcing data, including hourly air pressure, temperature, specific humidity, wind speed, precipitation, and shortwave and longwave radiation, are based on Phase 2 of the North American Land Data Assimilation System (NLDAS-2) and averaged for each HRU. The simulation results are compared to simulations with the Variable Infiltration Capacity (VIC) model and the Precipitation Runoff Modeling System (PRMS). We are particularly interested in SUMMA's capability to mimic the behaviors of the other two models through the selection of appropriate model parameterizations in SUMMA.
Using Trust to Establish a Secure Routing Model in Cognitive Radio Network.
Zhang, Guanghua; Chen, Zhenguo; Tian, Liqin; Zhang, Dongwen
2015-01-01
To address the selective forwarding attack on routing in cognitive radio networks, this paper proposes a trust-based secure routing model. By monitoring nodes' forwarding behaviors, trusts of nodes are constructed to identify malicious nodes. Because route selection must be closely coordinated with spectrum allocation, a route request piggybacking available spectrum opportunities is sent to non-malicious nodes. In the routing decision phase, nodes' trusts are used to construct available path trusts, and delay measurement is combined for making routing decisions. At the same time, different responses are made to nodes' service requests according to their trust classification. By adopting stricter punishment of malicious behaviors from non-trusted nodes, the cooperation of nodes in routing can be stimulated. Simulation results and analysis indicate that this model performs well in network throughput and end-to-end delay under the selective forwarding attack.
Das, Debananda; Koh, Yasuhiro; Tojo, Yasushi; Ghosh, Arun K; Mitsuya, Hiroaki
2009-12-01
Reliable and robust prediction of the binding affinity for drug molecules continues to be a daunting challenge. We simulated the binding interactions and free energy of binding of nine protease inhibitors (PIs) with wild-type and various mutant proteases by performing GBSA (generalized Born surface area) simulations in which each PI's partial charge was determined by quantum mechanics (QM), so that the partial charge accounts for the polarization induced by the protease environment. We employed a hybrid solvation model that retains selected explicit water molecules in the protein with surface-generalized Born (SGB) implicit solvent. We examined the correlation of the free energy with the antiviral potency of PIs with regard to amino acid substitutions in protease. The GBSA free energy thus simulated showed strong correlations (r > 0.75) with antiviral IC50 values of PIs when amino acid substitutions were present in the protease active site. We also simulated the binding free energy of PIs with P2-bis-tetrahydrofuranylurethane (bis-THF) or related cores, utilizing a bis-THF-containing protease crystal structure as a template. The free energy showed a strong correlation (r = 0.93) with experimentally determined anti-HIV-1 potency. The present data suggest that the presence of selected explicit water in the protein and protein polarization-induced quantum charges for the inhibitor, compared to the lack of explicit water and a static force-field-based charge model, can serve as an improved lead optimization tool and warrants further exploration.
Row, Jeffrey R.; Knick, Steven T.; Oyler-McCance, Sara J.; Lougheed, Stephen C.; Fedy, Bradley C.
2017-01-01
Dispersal can impact population dynamics and geographic variation, and thus, genetic approaches that can establish which landscape factors influence population connectivity have ecological and evolutionary importance. Mixed models that account for the error structure of pairwise datasets are increasingly used to compare models relating genetic differentiation to pairwise measures of landscape resistance. A model selection framework based on information criteria metrics or explained variance may help disentangle the ecological and landscape factors influencing genetic structure, yet there is currently no consensus on the best protocols. Here, we develop landscape-directed simulations and test a series of replicates that emulate independent empirical datasets of two species with different life history characteristics (greater sage-grouse; eastern foxsnake). We determined that in our simulated scenarios, AIC and BIC were the best model selection indices and that marginal R2 values were biased toward more complex models. The model coefficients for landscape variables generally reflected the underlying dispersal model with confidence intervals that did not overlap with zero across the entire model set. When we controlled for geographic distance, variables not in the underlying dispersal models (i.e., nontrue) typically overlapped zero. Our study helps establish methods for using linear mixed models to identify the features underlying patterns of dispersal across a variety of landscapes.
Tran-Duy, An; Boonen, Annelies; van de Laar, Mart A F J; Franke, Angelinus C; Severens, Johan L
2011-12-01
To develop a modelling framework which can simulate long-term quality of life, societal costs and cost-effectiveness as affected by sequential drug treatment strategies for ankylosing spondylitis (AS). The discrete event simulation paradigm was selected for model development. Drug efficacy was modelled as changes in disease activity (Bath Ankylosing Spondylitis Disease Activity Index (BASDAI)) and functional status (Bath Ankylosing Spondylitis Functional Index (BASFI)), which were linked to costs and health utility using statistical models fitted based on an observational AS cohort. Published clinical data were used to estimate drug efficacy and time to events. Two strategies were compared: (1) five available non-steroidal anti-inflammatory drugs (strategy 1) and (2) same as strategy 1 plus two tumour necrosis factor α inhibitors (strategy 2). 13,000 patients were followed up individually until death. For probabilistic sensitivity analysis, Monte Carlo simulations were performed with 1000 sets of parameters sampled from the appropriate probability distributions. The models successfully generated valid data on treatments, BASDAI, BASFI, utility, quality-adjusted life years (QALYs) and costs at time points with intervals of 1-3 months during the simulation length of 70 years. Incremental cost per QALY gained in strategy 2 compared with strategy 1 was €35,186. At a willingness-to-pay threshold of €80,000, it was 99.9% certain that strategy 2 was cost-effective. The modelling framework provides great flexibility to implement complex algorithms representing treatment selection, disease progression and changes in costs and utilities over time of patients with AS. Results obtained from the simulation are plausible.
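The discrete event simulation paradigm advances each simulated patient from event to event rather than in fixed time steps; the skeleton below shows the core event-queue mechanics for sequential drug switching. The drug sequence, event types, and the 24-month mean time to discontinuation are hypothetical placeholders, not values from the AS model.

import heapq
import random

random.seed(7)
SEQUENCE = ["NSAID_1", "NSAID_2", "TNFi_1"]   # hypothetical treatment strategy
events = [(0.0, 1, "start_drug")]             # (time in months, patient id, kind)
drug_line = {1: 0}

while events:
    t, pid, kind = heapq.heappop(events)
    if kind == "start_drug":
        if drug_line[pid] >= len(SEQUENCE):
            continue                          # strategy exhausted
        # time to discontinuation drawn from an assumed exponential
        fail_t = t + random.expovariate(1 / 24.0)
        heapq.heappush(events, (fail_t, pid, "switch"))
    elif kind == "switch":
        drug_line[pid] += 1                   # move to the next drug in line
        heapq.heappush(events, (t, pid, "start_drug"))

print(drug_line)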
Virtual planning for craniomaxillofacial surgery--7 years of experience.
Adolphs, Nicolai; Haberl, Ernst-Johannes; Liu, Weichen; Keeve, Erwin; Menneking, Horst; Hoffmeister, Bodo
2014-07-01
Contemporary computer-assisted surgery systems now allow for virtual simulation of even complex surgical procedures with increasingly realistic predictions. Preoperative workflows are established and different commercial software solutions are available. The potential and feasibility of virtual craniomaxillofacial surgery as an additional planning tool were assessed retrospectively by comparing predictions and surgical results. Since 2006 virtual simulation has been performed in selected patient cases affected by complex craniomaxillofacial disorders (n = 8) in addition to standard surgical planning based on patient-specific 3D models. Virtual planning could be performed for all levels of the craniomaxillofacial framework within a reasonable preoperative workflow. Simulation of even complex skeletal displacements corresponded well with the real surgical result, and soft tissue simulation proved to be helpful. In combination with classic 3D models showing the underlying skeletal pathology, virtual simulation improved planning and transfer of craniomaxillofacial corrections. The additional work and expenses may be justified by the increased possibilities of visualisation, information, instruction and documentation in selected craniomaxillofacial procedures.
Sepúlveda, Nicasio
2002-01-01
A numerical model of the intermediate and Floridan aquifer systems in peninsular Florida was used to (1) test and refine the conceptual understanding of the regional ground-water flow system; (2) develop a data base to support subregional ground-water flow modeling; and (3) evaluate effects of projected 2020 ground-water withdrawals on ground-water levels. The four-layer model was based on the computer code MODFLOW-96, developed by the U.S. Geological Survey. The top layer consists of specified-head cells simulating the surficial aquifer system as a source-sink layer. The second layer simulates the intermediate aquifer system in southwest Florida and the intermediate confining unit where it is present. The third and fourth layers simulate the Upper and Lower Floridan aquifers, respectively. Steady-state ground-water flow conditions were approximated for time-averaged hydrologic conditions from August 1993 through July 1994 (1993-94). This period was selected based on data from Upper Floridan aquifer wells equipped with continuous water-level recorders. The grid used for the ground-water flow model was uniform and composed of square 5,000-foot cells, with 210 columns and 300 rows.
NASA Astrophysics Data System (ADS)
Deliparaschos, Kyriakos M.; Michail, Konstantinos; Zolotas, Argyrios C.; Tzafestas, Spyros G.
2016-05-01
This work presents a field programmable gate array (FPGA)-based embedded software platform coupled with a software-based plant, forming a hardware-in-the-loop (HIL) that is used to validate a systematic sensor selection framework. The systematic sensor selection framework combines multi-objective optimization, linear-quadratic-Gaussian (LQG)-type control, and the nonlinear model of a maglev suspension. A robustness analysis of the closed loop follows (prior to implementation), supporting the appropriateness of the solution under parametric variation. The analysis also shows that quantization is robust under different controller gains. While the LQG controller is implemented on an FPGA, the physical process is realized in a high-level system modeling environment. FPGA technology enables rapid evaluation of the algorithms and test designs under realistic scenarios, avoiding the heavy time penalty associated with hardware description language (HDL) simulators. The HIL technique facilitates significant speed-up in the required execution time when compared to its software-based counterpart model.
Inferring the photometric and size evolution of galaxies from image simulations. I. Method
NASA Astrophysics Data System (ADS)
Carassou, Sébastien; de Lapparent, Valérie; Bertin, Emmanuel; Le Borgne, Damien
2017-09-01
Context. Current constraints on models of galaxy evolution rely on morphometric catalogs extracted from multi-band photometric surveys. However, these catalogs are altered by selection effects that are difficult to model, that correlate in non-trivial ways, and that can lead to contradictory predictions if not taken into account carefully. Aims: To address this issue, we have developed a new approach combining parametric Bayesian indirect likelihood (pBIL) techniques and empirical modeling with realistic image simulations that reproduce a large fraction of these selection effects. This allows us to perform a direct comparison between observed and simulated images and to infer robust constraints on model parameters. Methods: We use a semi-empirical forward model to generate a distribution of mock galaxies from a set of physical parameters. These galaxies are passed through an image simulator reproducing the instrumental characteristics of any survey and are then extracted in the same way as the observed data. The discrepancy between the simulated and observed data is quantified, and minimized with a custom sampling process based on adaptive Markov chain Monte Carlo methods. Results: Using synthetic data matching most of the properties of a Canada-France-Hawaii Telescope Legacy Survey Deep field, we demonstrate the robustness and internal consistency of our approach by inferring the parameters governing the size and luminosity functions and their evolutions for different realistic populations of galaxies. We also compare the results of our approach with those obtained from the classical spectral energy distribution fitting and photometric redshift approach. Conclusions: Our pipeline efficiently infers the luminosity and size distribution and evolution parameters with a very limited number of observables (three photometric bands). When compared to SED fitting based on the same set of observables, our method yields results that are more accurate and free from systematic biases.
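The core loop, simulating mock observables from parameters, measuring the discrepancy against observed summaries, and adapting the proposal, can be caricatured in a few lines. The sketch below is an ABC-flavored stand-in with an invented one-parameter model, summary statistics, and acceptance rule; it is not the paper's pBIL density estimation nor its adaptive MCMC schedule.

import random
import statistics as st

random.seed(0)
obs = [random.gauss(1.5, 0.3) for _ in range(500)]   # stand-in observed catalog

def simulate(theta, n=500):
    # stand-in for the full image-simulation and extraction pipeline
    return [random.gauss(theta, 0.3) for _ in range(n)]

def discrepancy(a, b):
    return abs(st.mean(a) - st.mean(b)) + abs(st.stdev(a) - st.stdev(b))

theta, d, step = 0.5, float("inf"), 0.2
for _ in range(300):
    prop = theta + random.gauss(0, step)
    d_prop = discrepancy(obs, simulate(prop))
    if d_prop < d or random.random() < 0.05:   # mostly greedy, rarely explore
        theta, d = prop, d_prop
    step *= 0.995                              # crude proposal-scale adaptation
print(round(theta, 2))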
Security Analysis of Selected AMI Failure Scenarios Using Agent Based Game Theoretic Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abercrombie, Robert K; Schlicher, Bob G; Sheldon, Frederick T
Information security analysis can be performed using game theory implemented in dynamic Agent Based Game Theoretic (ABGT) simulations. Such simulations can be verified with the results from game theory analysis and further used to explore larger scale, real world scenarios involving multiple attackers, defenders, and information assets. We concentrated our analysis on the Advanced Metering Infrastructure (AMI) functional domain, for which the National Electric Sector Cybersecurity Organization Resource (NESCOR) working group has documented 29 failure scenarios. The strategy for the game was developed by analyzing five electric sector representative failure scenarios contained in the AMI functional domain. We characterize these five selected scenarios into three specific threat categories affecting confidentiality, integrity and availability (CIA). The analysis using our ABGT simulation demonstrates how to model the AMI functional domain using a set of rationalized game theoretic rules decomposed from the failure scenarios in terms of how those scenarios might impact the AMI network with respect to CIA.
Hong, Eun-Mi; Park, Yongeun; Muirhead, Richard; Jeong, Jaehak; Pachepsky, Yakov A
2018-02-15
The Agricultural Policy/Environmental eXtender (APEX) is a watershed-scale water quality model that includes detailed representation of agricultural management. The objective of this work was to develop a process-based model for simulating the fate and transport of manure-borne bacteria on land and in streams with the APEX model. The bacteria model utilizes manure erosion rates to estimate the amount of edge-of-field bacteria export. Bacteria survival in manure is simulated as a two-stage process separately for each manure application event. In-stream microbial fate and transport processes include bacteria release from streambeds due to sediment resuspension during high flow events, active release from the streambed sediment during low flow periods, bacteria settling with sediment, and survival. Default parameter values were selected from published databases and evaluated based on field observations. The APEX model with the newly developed microbial fate and transport module was applied to simulate fate and transport of the fecal indicator bacterium Escherichia coli in the Toenepi watershed, New Zealand that was monitored for seven years. The stream network of the watershed ran through grazing lands with daily bovine waste deposition. Results show that the APEX with the bacteria module reproduced well the monitored pattern of E. coli concentrations at the watershed outlet. The APEX with the microbial fate and transport module will be utilized for predicting microbial quality of water as affected by various agricultural practices, evaluating monitoring protocols, and supporting the selection of management practices based on regulations that rely on fecal indicator bacteria concentrations.
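The abstract describes survival in manure as a two-stage process per application event; a common functional form for such behavior is biphasic first-order die-off with a fast-inactivating and a persistent subpopulation, sketched below. Whether APEX's module uses exactly this form is not stated here, and the fraction and rate constants are assumptions.

import math

def survivors(c0, t_days, f_fast=0.8, k_fast=0.4, k_slow=0.05):
    # biphasic die-off: two subpopulations decaying at different rates
    # (fractions and rate constants in 1/day are illustrative only)
    return c0 * (f_fast * math.exp(-k_fast * t_days)
                 + (1 - f_fast) * math.exp(-k_slow * t_days))

for day in (0, 2, 7, 30):
    print(day, round(survivors(1e6, day)))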
Simulation-based bronchoscopy training: systematic review and meta-analysis.
Kennedy, Cassie C; Maldonado, Fabien; Cook, David A
2013-07-01
Simulation-based bronchoscopy training is increasingly used, but effectiveness remains uncertain. We sought to perform a comprehensive synthesis of published work on simulation-based bronchoscopy training. We searched MEDLINE, EMBASE, CINAHL, PsycINFO, ERIC, Web of Science, and Scopus for eligible articles through May 11, 2011. We included all original studies involving health professionals that evaluated, in comparison with no intervention or an alternative instructional approach, simulation-based training for flexible or rigid bronchoscopy. Study selection and data abstraction were performed independently and in duplicate. We pooled results using random effects meta-analysis. From an initial pool of 10,903 articles, we identified 17 studies evaluating simulation-based bronchoscopy training. In comparison with no intervention, simulation training was associated with large benefits on skills and behaviors (pooled effect size, 1.21 [95% CI, 0.82-1.60]; n=8 studies) and moderate benefits on time (0.62 [95% CI, 0.12-1.13]; n=7). In comparison with clinical instruction, behaviors with real patients showed nonsignificant effects favoring simulation for time (0.61 [95% CI, -1.47 to 2.69]) and process (0.33 [95% CI, -1.46 to 2.11]) outcomes (n=2 studies each), although variation in training time might account for these differences. Four studies compared alternate simulation-based training approaches. Inductive analysis to inform instructional design suggested that longer or more structured training is more effective, authentic clinical context adds value, and animal models and plastic part-task models may be superior to more costly virtual-reality simulators. Simulation-based bronchoscopy training is effective in comparison with no intervention. Comparative effectiveness studies are few.
Simulation of flow and water quality of the Arroyo Colorado, Texas, 1989-99
Raines, Timothy H.; Miranda, Roger M.
2002-01-01
A model parameter set for use with the Hydrological Simulation Program—FORTRAN watershed model was developed to simulate flow and water quality for selected properties and constituents for the Arroyo Colorado from the city of Mission to the Laguna Madre, Texas. The model simulates flow, selected water-quality properties, and constituent concentrations. The model can be used to estimate a total maximum daily load for selected properties and constituents in the Arroyo Colorado. The model was calibrated and tested for flow with data measured during 1989–99 at three streamflow-gaging stations. The errors for total flow volume ranged from -0.1 to 29.0 percent, and the errors for total storm volume ranged from -15.6 to 8.4 percent. The model was calibrated and tested for water quality for seven properties and constituents with 1989–99 data. The model was calibrated sequentially for suspended sediment, water temperature, biochemical oxygen demand, dissolved oxygen, nitrate nitrogen, ammonia nitrogen, and orthophosphate. The simulated concentrations of the selected properties and constituents generally matched the measured concentrations available for the calibration and testing periods. The model was used to simulate total point- and nonpoint-source loads for selected properties and constituents for 1989–99 for urban, natural, and agricultural land-use types. About one-third to one-half of the biochemical oxygen demand and nutrient loads are from urban point and nonpoint sources, although only 13 percent of the total land use in the basin is urban.
A case for spiking neural network simulation based on configurable multiple-FPGA systems.
Yang, Shufan; Wu, Qiang; Li, Renfa
2011-09-01
Recent neuropsychological research has begun to reveal that neurons encode information in the timing of spikes. Spiking neural network simulations are a flexible and powerful method for investigating the behaviour of neuronal systems. Software simulation of spiking neural networks is unable to rapidly generate output spikes for large-scale networks. An alternative approach, hardware implementation of such a system, provides the possibility to generate independent spikes precisely and to simultaneously output spike waves in real time, provided that the spiking neural network can take full advantage of the hardware's inherent parallelism. We introduce a configurable FPGA-oriented hardware platform for spiking neural network simulation in this work. We aim to use this platform to combine the speed of dedicated hardware with the programmability of software so that it might allow neuroscientists to put together sophisticated computational experiments of their own model. A feed-forward hierarchy network is developed as a case study to describe the operation of biological neural systems (such as orientation selectivity of visual cortex) and computational models of such systems. This model demonstrates how a feed-forward neural network constructs the circuitry required for orientation selectivity and provides a platform for reaching a deeper understanding of the primate visual system. In the future, larger scale models based on this framework can be used to replicate the actual architecture in visual cortex, leading to more detailed predictions and insights into visual perception phenomena.
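A typical unit for such platforms is the leaky integrate-and-fire neuron; the software reference sketch below shows the per-timestep update that an FPGA pipeline would replicate in parallel across many neurons. The time constant, threshold, and input current are illustrative values, not parameters of the authors' platform.

def lif_step(v, i_in, dt=1.0, tau=20.0, v_rest=0.0, v_th=1.0):
    # leaky integrate-and-fire update; returns (new potential, spiked?)
    v += dt / tau * (v_rest - v) + i_in
    if v >= v_th:
        return v_rest, True        # emit spike and reset
    return v, False

v, spikes = 0.0, []
for t in range(100):
    i_in = 0.08 if 20 <= t < 80 else 0.0   # assumed stimulus window
    v, fired = lif_step(v, i_in)
    if fired:
        spikes.append(t)
print(spikes)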
PowderSim: Lagrangian Discrete and Mesh-Free Continuum Simulation Code for Cohesive Soils
NASA Technical Reports Server (NTRS)
Johnson, Scott; Walton, Otis; Settgast, Randolph
2013-01-01
PowderSim is a calculation tool that combines a discrete-element method (DEM) module, including calibrated interparticle-interaction relationships, with a mesh-free, continuum, SPH (smoothed-particle hydrodynamics) based module that utilizes enhanced, calibrated, constitutive models capable of mimicking both large deformations and the flow behavior of regolith simulants and lunar regolith under conditions anticipated during in situ resource utilization (ISRU) operations. The major innovation introduced in PowderSim is to use a mesh-free method (SPH-based) with a calibrated and slightly modified critical-state soil mechanics constitutive model to extend the ability of the simulation tool to also address full-scale engineering systems in the continuum sense. The PowderSim software maintains the ability to address particle-scale problems, like size segregation, in selected regions with a traditional DEM module, which has improved contact physics and electrostatic interaction models.
A Monte Carlo simulation based inverse propagation method for stochastic model updating
NASA Astrophysics Data System (ADS)
Bao, Nuo; Wang, Chunjie
2015-08-01
This paper presents an efficient stochastic model updating method based on statistical theory. Significant parameters are selected using F-test evaluation and design of experiments, and an incomplete fourth-order polynomial response surface model (RSM) is then developed. Exploiting the RSM combined with Monte Carlo simulation (MCS) reduces the computational burden and makes rapid random sampling possible. The inverse uncertainty propagation is given by the equally weighted sum of mean and covariance matrix objective functions. The mean and covariance of parameters are estimated synchronously by minimizing the weighted objective function through a hybrid particle-swarm and Nelder-Mead simplex optimization method, thus achieving better correlation between simulation and test. Numerical examples of a three degree-of-freedom mass-spring system under different conditions and the GARTEUR assembly structure validated the feasibility and effectiveness of the proposed method.
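The inverse propagation step minimizes an equally weighted sum of mean and covariance objective functions; one plausible normalized form is sketched below. The normalization by the test statistics is our assumption, not necessarily the paper's exact objective.

import numpy as np

def objective(mean_sim, cov_sim, mean_test, cov_test, w=0.5):
    # equally weighted (w = 0.5) sum of normalized mean and covariance residuals
    em = np.linalg.norm(mean_sim - mean_test) / np.linalg.norm(mean_test)
    ec = np.linalg.norm(cov_sim - cov_test) / np.linalg.norm(cov_test)
    return w * em + (1 - w) * ec

mean_test = np.array([1.0, 2.0]); cov_test = np.diag([0.10, 0.20])
mean_sim = np.array([1.1, 1.9]);  cov_sim = np.diag([0.12, 0.18])
print(round(objective(mean_sim, cov_sim, mean_test, cov_test), 4))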
NASA Astrophysics Data System (ADS)
Mousavi, Monireh Sadat; Ashrafi, Khosro; Motlagh, Majid Shafie Pour; Niksokhan, Mohhamad Hosein; Vosoughifar, HamidReza
2018-02-01
In this study, a coupled method is presented for simulating the flow pattern based on computational fluid dynamics (CFD), combined with an optimization technique using genetic algorithms, to determine the optimal location and number of sensors in an enclosed residential-complex parking garage in Tehran. The main objective of this research is cost reduction and maximum coverage with regard to the distribution of existing concentrations in different scenarios. Considering all the different scenarios for simulating the pollution distribution with CFD has been challenging due to the extent of the parking garage and the number of cars present. To solve this problem, a subset of scenarios was selected at random. The maximum concentrations of these scenarios were then chosen for the optimization. The CFD simulation outputs are inserted as input into the optimization model using the genetic algorithm. The obtained results give the optimal number and locations of the sensors.
Semantic Agent-Based Service Middleware and Simulation for Smart Cities
Liu, Ming; Xu, Yang; Hu, Haixiao; Mohammed, Abdul-Wahid
2016-01-01
With the development of Machine-to-Machine (M2M) technology, a variety of embedded and mobile devices are integrated to interact via the platform of the Internet of Things, especially in the domain of smart cities. One of the primary challenges is that selecting the appropriate services or service combinations for upper-layer applications is hard, owing to the absence of both a unified semantic service description pattern and a service selection mechanism. In this paper, we define a semantic service representation model from four key properties: Capability (C), Deployment (D), Resource (R) and IOData (IO). Based on this model, an agent-based middleware is built to support semantic service enablement. In this middleware, we present an efficient semantic service discovery and matching approach for a service combination process, which calculates the semantic similarity between services, and a heuristic algorithm to search the service candidates for a specific service request. Based on this design, we propose a simulation of virtual urban fire fighting, and the experimental results demonstrate the feasibility and efficiency of our design. PMID:28009818
Semantic Agent-Based Service Middleware and Simulation for Smart Cities.
Liu, Ming; Xu, Yang; Hu, Haixiao; Mohammed, Abdul-Wahid
2016-12-21
With the development of Machine-to-Machine (M2M) technology, a variety of embedded and mobile devices are integrated to interact via the platform of the Internet of Things, especially in the domain of smart cities. One of the primary challenges is that selecting the appropriate services or service combinations for upper-layer applications is hard, owing to the absence of both a unified semantic service description pattern and a service selection mechanism. In this paper, we define a semantic service representation model from four key properties: Capability (C), Deployment (D), Resource (R) and IOData (IO). Based on this model, an agent-based middleware is built to support semantic service enablement. In this middleware, we present an efficient semantic service discovery and matching approach for a service combination process, which calculates the semantic similarity between services, and a heuristic algorithm to search the service candidates for a specific service request. Based on this design, we propose a simulation of virtual urban fire fighting, and the experimental results demonstrate the feasibility and efficiency of our design.
Noise sensitivity of portfolio selection in constant conditional correlation GARCH models
NASA Astrophysics Data System (ADS)
Varga-Haszonits, I.; Kondor, I.
2007-11-01
This paper investigates the efficiency of minimum variance portfolio optimization for stock price movements following the Constant Conditional Correlation GARCH process proposed by Bollerslev. Simulations show that the quality of portfolio selection can be improved substantially by computing optimal portfolio weights from conditional covariances instead of unconditional ones. Measurement noise can be further reduced by applying some filtering method on the conditional correlation matrix (such as Random Matrix Theory based filtering). As an empirical support for the simulation results, the analysis is also carried out for a time series of S&P500 stock prices.
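Concretely, the experiment amounts to plugging the CCC-GARCH conditional covariance, Sigma_t = D_t R D_t with D_t the diagonal matrix of conditional volatilities and R the constant correlation matrix, into the textbook global minimum-variance solution with weights proportional to Sigma_t^{-1} 1. The volatilities and correlations below are assumed for illustration.

import numpy as np

def min_variance_weights(cov):
    # w = inv(Sigma) 1 / (1' inv(Sigma) 1): global minimum-variance portfolio
    w = np.linalg.solve(cov, np.ones(cov.shape[0]))
    return w / w.sum()

vols = np.array([0.010, 0.020, 0.015])   # assumed conditional sigmas from GARCH
R = np.array([[1.0, 0.3, 0.2],
              [0.3, 1.0, 0.4],
              [0.2, 0.4, 1.0]])          # constant conditional correlations
D = np.diag(vols)
sigma_t = D @ R @ D                      # CCC conditional covariance
print(min_variance_weights(sigma_t))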
NASA Technical Reports Server (NTRS)
Shaffer, Scott; Dunbar, R. Scott; Hsiao, S. Vincent; Long, David G.
1989-01-01
The NASA Scatterometer, NSCAT, is an active spaceborne radar designed to measure the normalized radar backscatter coefficient (sigma0) of the ocean surface. These measurements can, in turn, be used to infer the surface vector wind over the ocean using a geophysical model function. Several ambiguous wind vectors result because of the nature of the model function. A median-filter-based ambiguity removal algorithm will be used by the NSCAT ground data processor to select the best wind vector from the set of ambiguous wind vectors. This process is commonly known as dealiasing or ambiguity removal. The baseline NSCAT ambiguity removal algorithm and the method used to select the set of optimum parameter values are described. An extensive simulation of the NSCAT instrument and ground data processor provides a means of testing the resulting tuned algorithm. This simulation generates the ambiguous wind-field vectors expected from the instrument as it orbits over a set of realistic mesoscale wind fields. The ambiguous wind field is then dealiased using the median-based ambiguity removal algorithm. Performance is measured by comparison of the unambiguous wind fields with the true wind fields. Results have shown that the median-filter-based ambiguity removal algorithm satisfies NSCAT mission requirements.
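The filter's key step, replacing each cell's selected wind vector with the ambiguity closest to a local median of its neighbors and iterating to convergence, looks roughly like the sketch below. A component-wise median over a 3x3 window is used for brevity; the operational NSCAT filter differs in details such as window size, the vector-median definition, and initialization from ranked likelihoods.

def closest_ambiguity(candidates, ref):
    # pick the ambiguous (u, v) vector nearest the reference vector
    return min(candidates, key=lambda a: (a[0] - ref[0])**2 + (a[1] - ref[1])**2)

def median_filter_pass(selected, ambiguities):
    rows, cols = len(selected), len(selected[0])
    new = [row[:] for row in selected]
    for r in range(rows):
        for c in range(cols):
            nbrs = [selected[i][j]
                    for i in range(max(0, r - 1), min(rows, r + 2))
                    for j in range(max(0, c - 1), min(cols, c + 2))
                    if (i, j) != (r, c)]
            us = sorted(u for u, _ in nbrs)
            vs = sorted(v for _, v in nbrs)
            med = (us[len(us) // 2], vs[len(vs) // 2])
            new[r][c] = closest_ambiguity(ambiguities[r][c], med)
    return new

cands = [(1, 0), (-1, 0)]                     # two ambiguous wind solutions
sel = [[(1, 0), (1, 0)], [(1, 0), (-1, 0)]]   # initial selections, one outlier
amb = [[cands, cands], [cands, cands]]
print(median_filter_pass(sel, amb))           # the outlier flips to (1, 0)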
Ranking of Business Process Simulation Software Tools with DEX/QQ Hierarchical Decision Model.
Damij, Nadja; Boškoski, Pavle; Bohanec, Marko; Mileva Boshkoska, Biljana
2016-01-01
The omnipresent need for optimisation requires constant improvement of companies' business processes (BPs). Minimising the risk of an inappropriate BP being implemented is usually achieved by simulating the newly developed BP under various initial conditions and "what-if" scenarios. Effective business process simulation software (BPSS) is a prerequisite for accurate analysis of a BP. Characterisation of a BPSS tool is a challenging task due to the complex selection criteria, which include quality of visual aspects, simulation capabilities, statistical facilities, quality of reporting, etc. Under such circumstances, making an optimal decision is challenging. Therefore, various decision support models are employed to aid BPSS tool selection. The currently established decision support models are either proprietary or comprise only a limited subset of criteria, which affects their accuracy. Addressing this issue, this paper proposes a new hierarchical decision support model for ranking BPSS tools based on their technical characteristics, employing DEX and qualitative-to-quantitative (QQ) methodology. Consequently, the decision expert feeds in the required information in a systematic and user-friendly manner. There are three significant contributions of the proposed approach. Firstly, the proposed hierarchical model is easily extendible for adding new criteria to the hierarchical structure. Secondly, a fully operational decision support system (DSS) tool that implements the proposed hierarchical model is presented. Finally, the effectiveness of the proposed hierarchical model is assessed by comparing the resulting rankings of BPSS tools against currently available results.
Simulating the role of visual selective attention during the development of perceptual completion
Schlesinger, Matthew; Amso, Dima; Johnson, Scott P.
2014-01-01
We recently proposed a multi-channel, image-filtering model for simulating the development of visual selective attention in young infants (Schlesinger, Amso & Johnson, 2007). The model not only captures the performance of 3-month-olds on a visual search task, but also implicates two cortical regions that may play a role in the development of visual selective attention. In the current simulation study, we used the same model to simulate 3-month-olds’ performance on a second measure, the perceptual unity task. Two parameters in the model – corresponding to areas in the occipital and parietal cortices – were systematically varied while the gaze patterns produced by the model were recorded and subsequently analyzed. Three key findings emerged from the simulation study. First, the model successfully replicated the performance of 3-month-olds on the unity perception task. Second, the model also helps to explain the improved performance of 2-month-olds when the size of the occluder in the unity perception task is reduced. Third, in contrast to our previous simulation results, variation in only one of the two cortical regions simulated (i.e. recurrent activity in posterior parietal cortex) resulted in a performance pattern that matched 3-month-olds. These findings provide additional support for our hypothesis that the development of perceptual completion in early infancy is promoted by progressive improvements in visual selective attention and oculomotor skill. PMID:23106728
Detecting Directional Selection in the Presence of Recent Admixture in African-Americans
Lohmueller, Kirk E.; Bustamante, Carlos D.; Clark, Andrew G.
2011-01-01
We investigate the performance of tests of neutrality in admixed populations using plausible demographic models for African-American history as well as resequencing data from African and African-American populations. The analysis of both simulated and human resequencing data suggests that recent admixture does not result in an excess of false-positive results for neutrality tests based on the frequency spectrum after accounting for the population growth in the parental African population. Furthermore, when simulating positive selection, Tajima's D, Fu and Li's D, and haplotype homozygosity have lower power to detect population-specific selection using individuals sampled from the admixed population than from the nonadmixed population. Fay and Wu's H test, however, has more power to detect selection using individuals from the admixed population than from the nonadmixed population, especially when the selective sweep ended long ago. Our results have implications for interpreting recent genome-wide scans for positive selection in human populations. PMID:21196524
ISPE: A knowledge-based system for fluidization studies. 1990 Annual report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reddy, S.
1991-01-01
Chemical engineers use mathematical simulators to design, model, optimize and refine various engineering plants/processes. This procedure requires the following steps: (1) preparation of an input data file according to the format required by the target simulator; (2) executing the simulation; and (3) analyzing the results of the simulation to determine whether all "specified goals" are satisfied. If the goals are not met, the input data file must be modified and the simulation repeated. This multistep process is continued until satisfactory results are obtained. This research was undertaken to develop a knowledge-based system, IPSE (Intelligent Process Simulation Environment), that can enhance the productivity of chemical engineers/modelers by serving as an intelligent assistant to perform a variety of tasks related to process simulation. ASPEN, a simulator widely used by the US Department of Energy (DOE) at the Morgantown Energy Technology Center (METC), was selected as the target process simulator in the project. IPSE, written in the C language, was developed using a number of knowledge-based programming paradigms: object-oriented knowledge representation that uses inheritance and methods, rule-based inferencing (including processing and propagation of probabilistic information) and data-driven programming using demons. It was implemented using the knowledge-based environment LASER. The relationship of IPSE with the user, ASPEN, LASER and the C language is shown in Figure 1.
Xi, Qing; Li, Zhao-Fu; Luo, Chuan
2014-05-01
Sensitivity analysis of hydrology and water quality parameters has great significance for the construction and application of integrated models. Based on the mechanism of the AnnAGNPS model, 31 parameters in four major categories (terrain, hydrology and meteorology, field management, and soil) were selected for sensitivity analysis in the Zhongtian River watershed, a typical small watershed of the hilly region around Taihu Lake; the perturbation method was then used to evaluate the sensitivity of the parameters to the model's simulation results. The results showed that, among the 11 terrain parameters, LS was sensitive to all model outputs, while RMN, RS and RVC were generally or less sensitive to the sediment output but insensitive to the remaining outputs. Among the hydrometeorological parameters, CN was more sensitive to runoff and sediment and relatively sensitive for the remaining outputs. Among the field management, fertilizer and vegetation parameters, CCC, CRM and RR were less sensitive to sediment and particulate pollutants, while the six fertilizer parameters (FR, FD, FID, FOD, FIP, FOP) were particularly sensitive for nitrogen and phosphorus nutrients. Among the soil parameters, K was quite sensitive to all outputs except runoff, and the four soil nitrogen and phosphorus ratio parameters (SONR, SINR, SOPR, SIPR) were less sensitive to the corresponding outputs. The runoff simulation and verification results for the Zhongtian watershed showed good accuracy, with deviations of less than 10% during 2005-2010. These results provide a direct reference for parameter selection and calibration of the AnnAGNPS model. The runoff simulation results for the study area also demonstrated that the sensitivity analysis is practicable for parameter adjustment, showed the model's adaptability for hydrologic simulation in the hilly region of the Taihu Lake basin, and provide a reference for wider application of the model in China.
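The perturbation method used above can be illustrated with a minimal sketch: each parameter is varied one at a time by a fixed fraction around its base value, and a relative sensitivity index is computed from the resulting change in model output. The toy model and parameter names below are placeholders, not AnnAGNPS itself.

```python
def toy_model(params):
    """Stand-in for a watershed model run returning a scalar output
    (e.g., simulated runoff). Purely illustrative, not AnnAGNPS."""
    return 0.8 * params["CN"] + 5.0 * params["LS"] + 2.0 * params["K"]

def perturbation_sensitivity(model, base_params, delta=0.10):
    """Relative sensitivity index S = (dY/Y0) / (dX/X0) per parameter,
    estimated by perturbing one parameter at a time by +/- delta."""
    y0 = model(base_params)
    indices = {}
    for name in base_params:
        up = dict(base_params); up[name] *= (1 + delta)
        dn = dict(base_params); dn[name] *= (1 - delta)
        dy = model(up) - model(dn)
        indices[name] = (dy / y0) / (2 * delta)   # dX/X0 = 2*delta
    return indices

base = {"LS": 1.2, "CN": 75.0, "K": 0.3}
print(perturbation_sensitivity(toy_model, base))
```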
NASA Astrophysics Data System (ADS)
Norton, P. A., II
2015-12-01
The U.S. Geological Survey is developing a National Hydrologic Model (NHM) to support consistent hydrologic modeling across the conterminous United States (CONUS). The Precipitation-Runoff Modeling System (PRMS) simulates daily hydrologic and energy processes in watersheds, and is used for the NHM application. For PRMS each watershed is divided into hydrologic response units (HRUs); by default each HRU is assumed to have a uniform hydrologic response. The Geospatial Fabric (GF) is a database containing initial parameter values for input to PRMS and was created for the NHM. The parameter values in the GF were derived from datasets that characterize the physical features of the entire CONUS. The NHM application is composed of more than 100,000 HRUs from the GF. Selected parameter values are commonly adjusted by basin in PRMS using an automated calibration process based on calibration targets, such as streamflow. Providing each HRU with distinct values that capture variability within the CONUS may improve simulation performance of the NHM. During calibration of the NHM by HRU, selected parameter values are adjusted for PRMS based on calibration targets, such as streamflow, snow water equivalent (SWE) and actual evapotranspiration (AET). Simulated SWE, AET, and runoff were compared to value ranges derived from multiple sources (e.g. the Snow Data Assimilation System, the Moderate Resolution Imaging Spectroradiometer (i.e. MODIS) Global Evapotranspiration Project, the Simplified Surface Energy Balance model, and the Monthly Water Balance Model). This provides each HRU with a distinct set of parameter values that captures the variability within the CONUS, leading to improved model performance. We present simulation results from the NHM after preliminary calibration, including the results of basin-level calibration for the NHM using: 1) default initial GF parameter values, and 2) parameter values calibrated by HRU.
A grouping method based on grid density and relationship for crowd evacuation simulation
NASA Astrophysics Data System (ADS)
Li, Yan; Liu, Hong; Liu, Guang-peng; Li, Liang; Moore, Philip; Hu, Bin
2017-05-01
Psychological factors affect the movement of people in the competitive or panic mode of evacuation, in which the density of pedestrians is relatively large and the distance among them is small. In this paper, to simulate the actual movement of crowd evacuation more realistically, a crowd is divided into groups according to their social relations and a group attraction force is added to the social force model. The force of group attraction is the synthesis of two forces: one is the attraction among individuals generated by their social relations, which causes them to gather; the other is the attraction of the group leader on the individuals within the group, which ensures that the individuals follow the leader. The synthetic force determines the trajectory of individuals. The evacuation process is demonstrated using the improved social force model, in which individuals with close social relations gradually present closer and more coordinated movement while following the leader. In this paper, a grouping algorithm based on grid density and relationship is proposed, and computer simulations illustrate the features of the improved social force model. The definitions of the parameters involved in the algorithm are given, the effect of the relationship value on the grouping is tested, and reasonable numbers of grids and weights are selected. The effectiveness of the algorithm is shown through simulation experiments. A simulation platform is also established using the proposed grouping algorithm and the improved social force model for crowd evacuation simulation.
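A minimal sketch of the two-component group attraction described above follows; the coefficients are illustrative placeholders, and the full model adds the standard social-force driving and repulsion terms omitted here.

```python
import numpy as np

def group_attraction(pos_i, members, leader, c_rel=0.5, c_lead=1.0):
    """Synthesize the group force on individual i as the sum of
    (1) attraction toward socially related group members and
    (2) attraction toward the group leader.
    c_rel and c_lead are illustrative placeholder coefficients."""
    f = np.zeros(2)
    for pos_j in members:                 # relation-based gathering term
        d = pos_j - pos_i
        dist = np.linalg.norm(d)
        if dist > 1e-9:
            f += c_rel * d / dist
    d_lead = leader - pos_i               # follow-the-leader term
    dist = np.linalg.norm(d_lead)
    if dist > 1e-9:
        f += c_lead * d_lead / dist
    return f

print(group_attraction(np.array([0.0, 0.0]),
                       [np.array([1.0, 0.0]), np.array([0.0, 2.0])],
                       np.array([3.0, 3.0])))
```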
Mathematical Modelling for Patient Selection in Proton Therapy.
Mee, T; Kirkby, N F; Kirkby, K J
2018-05-01
Proton beam therapy (PBT) is still relatively new in cancer treatment and its clinical evidence base is sparse. Mathematical modelling offers assistance when selecting patients for PBT and predicting the demand for service. Discrete event simulation, normal tissue complication probability, quality-adjusted life-years and Markov chain models are all mathematical and statistical modelling techniques currently in use, but none is dominant. As new evidence and outcome data become available from PBT, comprehensive models will emerge that are less dependent on the specific technologies of radiotherapy planning and delivery. Copyright © 2018 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
Efficient spiking neural network model of pattern motion selectivity in visual cortex.
Beyeler, Michael; Richert, Micah; Dutt, Nikil D; Krichmar, Jeffrey L
2014-07-01
Simulating large-scale models of biological motion perception is challenging, due to the required memory to store the network structure and the computational power needed to quickly solve the neuronal dynamics. A low-cost yet high-performance approach to simulating large-scale neural network models in real-time is to leverage the parallel processing capability of graphics processing units (GPUs). Based on this approach, we present a two-stage model of visual area MT that we believe to be the first large-scale spiking network to demonstrate pattern direction selectivity. In this model, component-direction-selective (CDS) cells in MT linearly combine inputs from V1 cells that have spatiotemporal receptive fields according to the motion energy model of Simoncelli and Heeger. Pattern-direction-selective (PDS) cells in MT are constructed by pooling over MT CDS cells with a wide range of preferred directions. Responses of our model neurons are comparable to electrophysiological results for grating and plaid stimuli as well as speed tuning. The behavioral response of the network in a motion discrimination task is in agreement with psychophysical data. Moreover, our implementation outperforms a previous implementation of the motion energy model by orders of magnitude in terms of computational speed and memory usage. The full network, which comprises 153,216 neurons and approximately 40 million synapses, processes 20 frames per second of a 40 × 40 input video in real-time using a single off-the-shelf GPU. To promote the use of this algorithm among neuroscientists and computer vision researchers, the source code for the simulator, the network, and analysis scripts are publicly available.
Simulation-based assessment in anesthesiology: requirements for practical implementation.
Boulet, John R; Murray, David J
2010-04-01
Simulations have taken a central role in the education and assessment of medical students, residents, and practicing physicians. The introduction of simulation-based assessments in anesthesiology, especially those used to establish various competencies, has demanded fairly rigorous studies concerning the psychometric properties of the scores. Most important, major efforts have been directed at identifying, and addressing, potential threats to the validity of simulation-based assessment scores. As a result, organizations that wish to incorporate simulation-based assessments into their evaluation practices can access information regarding effective test development practices, the selection of appropriate metrics, the minimization of measurement errors, and test score validation processes. The purpose of this article is to provide a broad overview of the use of simulation for measuring physician skills and competencies. For simulations used in anesthesiology, studies that describe advances in scenario development, the development of scoring rubrics, and the validation of assessment results are synthesized. Based on the summary of relevant research, psychometric requirements for practical implementation of simulation-based assessments in anesthesiology are forwarded. As technology expands, and simulation-based education and evaluation takes on a larger role in patient safety initiatives, the groundbreaking work conducted to date can serve as a model for those individuals and organizations that are responsible for developing, scoring, or validating simulation-based education and assessment programs in anesthesiology.
Lee, Kyu Il; Jo, Sunhwan; Rui, Huan; Egwolf, Bernhard; Roux, Benoît; Pastor, Richard W; Im, Wonpil
2012-01-30
Brownian dynamics (BD) based on an accurate potential of mean force is an efficient and accurate method for simulating ion transport through wide ion channels. Here, a web-based graphical user interface (GUI) is presented for carrying out grand canonical Monte Carlo (GCMC) BD simulations of channel proteins: http://www.charmm-gui.org/input/gcmcbd. The webserver is designed to help users avoid most of the technical difficulties and issues encountered in setting up and simulating complex pore systems. GCMC/BD simulation results for three proteins, the voltage-dependent anion channel (VDAC), α-hemolysin (α-HL), and the protective antigen pore of the anthrax toxin (PA), are presented to illustrate the system setup, input preparation, and typical output (conductance, ion density profile, ion selectivity, and ion asymmetry). Two models for the input diffusion constants for potassium and chloride ions in the pore are compared: scaling of the bulk diffusion constants by 0.5, as deduced from previous all-atom molecular dynamics simulations of VDAC, and a hydrodynamics-based model (HD) of diffusion through a tube. The HD model yields excellent agreement with experimental conductances for VDAC and α-HL, while scaling bulk diffusion constants by 0.5 leads to underestimates of 10-20%. For PA, simulated ion conduction values overestimate experimental values by a factor of 1.5-7 (depending on His protonation state and the transmembrane potential), implying that the currently available computational model of this protein requires further structural refinement. Copyright © 2011 Wiley Periodicals, Inc.
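The abstract does not give the exact form of the HD model, but the classical Renkin-type hindrance factor below is one standard hydrodynamics-based estimate of how diffusion of a sphere through a cylindrical tube is reduced relative to bulk. It is shown only to illustrate the kind of scaling involved; the published HD model may differ.

```python
def renkin_hindrance(solute_radius, pore_radius):
    """Classical Renkin-type estimate of the diffusion reduction for a
    sphere in a cylindrical tube: steric partitioning times the Faxen
    centerline drag correction. Illustrative of a hydrodynamics-based
    scaling only; the paper's HD model may use a different form."""
    lam = solute_radius / pore_radius
    if not 0 <= lam < 1:
        raise ValueError("solute must be smaller than the pore")
    steric = (1 - lam) ** 2
    drag = 1 - 2.104 * lam + 2.09 * lam**3 - 0.95 * lam**5
    return steric * drag

# e.g. a ~1.3 A potassium ion in a ~10 A pore
print(renkin_hindrance(1.33, 10.0))  # fraction of the bulk diffusion constant
```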
A COMPARISON OF CMAQ-BASED AEROSOL PROPERTIES WITH IMPROVE, MODIS, AND AERONET DATA
We compare selected aerosol properties derived from the Community Multiscale Air Quality (CMAQ) model-simulated aerosol mass concentrations with routine data from the National Aeronautics and Space Administration (NASA) satellite-borne Moderate Resolution Imaging Spectro-radiometer...
Scattering Properties of Large Irregular Cosmic Dust Particles at Visible Wavelengths
DOE Office of Scientific and Technical Information (OSTI.GOV)
Escobar-Cerezo, J.; Palmer, C.; Muñoz, O.
The effect of internal inhomogeneities and surface roughness on the scattering behavior of large cosmic dust particles is studied by comparing model simulations with laboratory measurements. The present work shows the results of an attempt to model a dust sample measured in the laboratory with simulations performed by a ray-optics model code. We consider this dust sample as a good analogue for interplanetary and interstellar dust as it shares its refractive index with known materials in these media. Several sensitivity tests have been performed for both structural cases (internal inclusions and surface roughness). Three different samples have been selected to mimic inclusion/coating inhomogeneities: two measured scattering matrices of hematite and white clay, and a simulated matrix for water ice. These three matrices are selected to cover a wide range of imaginary refractive indices. The selection of these materials also seeks to study astrophysical environments of interest such as Mars, where hematite and clays have been detected, and comets. Based on the results of the sensitivity tests shown in this work, we perform calculations for a size distribution of a silicate-type host particle model with inclusions and surface roughness to reproduce the experimental measurements of a dust sample. The model fits the measurements quite well, proving that surface roughness and internal structure play a role in the scattering pattern of irregular cosmic dust particles.
Kagawa, Kotaro; Takimoto, Gaku
2016-02-01
Many plant species employing a food-deceptive pollination strategy show discrete or continuous floral polymorphism within their populations. Previous studies have suggested that negative frequency-dependent selection (NFDS) caused by the learning behavior of pollinators was responsible for the maintenance of floral polymorphism. However, NFDS alone does not explain why and when discrete or continuous polymorphism evolves. In this study, we use an evolutionary simulation model to propose that inaccurate discrimination of flower colors by pollinators results in evolution of discrete flower color polymorphism. Simulations showed that associative learning based on inaccurate discrimination in pollinators caused disruptive selection of flower colors. The degree of inaccuracy determined the number of discrete flower colors that evolved. Our results suggest that animal behavior based on inaccurate discrimination may be a general cause of disruptive selection that promotes discrete trait polymorphism.
Namour, Florence; Diderichsen, Paul Matthias; Cox, Eugène; Vayssière, Béatrice; Van der Aa, Annegret; Tasset, Chantal; Van't Klooster, Gerben
2015-08-01
Filgotinib (GLPG0634) is a selective inhibitor of Janus kinase 1 (JAK1) currently in development for the treatment of rheumatoid arthritis and Crohn's disease. While less selective JAK inhibitors have shown long-term efficacy in treating inflammatory conditions, this was accompanied by dose-limiting side effects. Here, we describe the pharmacokinetics of filgotinib and its active metabolite in healthy volunteers and the use of pharmacokinetic-pharmacodynamic modeling and simulation to support dose selection for phase IIB in patients with rheumatoid arthritis. Two trials were conducted in healthy male volunteers. In the first trial, filgotinib was administered as single doses from 10 mg up to multiple daily doses of 200 mg. In the second trial, daily doses of 300 and 450 mg for 10 days were evaluated. Non-compartmental analysis was used to determine individual pharmacokinetic parameters for filgotinib and its metabolite. The overall pharmacodynamic activity for the two moieties was assessed in whole blood using interleukin-6-induced phosphorylation of signal-transducer and activator of transcription 1 as a biomarker for JAK1 activity. These data were used to conduct non-linear mixed-effects modeling to investigate a pharmacokinetic/pharmacodynamic relationship. Modeling and simulation on the basis of early clinical data suggest that the pharmacokinetics of filgotinib are dose proportional up to 200 mg, in agreement with observed data, and support that both filgotinib and its metabolite contribute to its pharmacodynamic effects. Simulation of biomarker response supports that the maximum pharmacodynamic effect is reached at a daily dose of 200 mg filgotinib. Based on these results, a daily dose range up to 200 mg has been selected for phase IIB dose-finding studies in patients with rheumatoid arthritis.
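The abstract does not specify the functional form of the biomarker model, but an inhibitory Emax (Hill) relationship is a common choice for concentration-response data of this kind; the sketch below is a generic illustration with placeholder parameter values, not the fitted model from these trials.

```python
import numpy as np

def inhibitory_emax(conc, imax=1.0, ic50=100.0, hill=1.0):
    """Fractional inhibition of a biomarker as a function of plasma
    concentration, using an inhibitory Emax (Hill) model. Model form
    and parameter values are generic illustrations; the abstract does
    not specify the fitted PD model."""
    c = np.asarray(conc, dtype=float)
    return imax * c**hill / (ic50**hill + c**hill)

# response at trough-like vs peak-like concentrations (units illustrative)
print(inhibitory_emax(np.array([50.0, 500.0])))
```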
SIGMA: A Knowledge-Based Simulation Tool Applied to Ecosystem Modeling
NASA Technical Reports Server (NTRS)
Dungan, Jennifer L.; Keller, Richard; Lawless, James G. (Technical Monitor)
1994-01-01
The need for better technology to facilitate building, sharing and reusing models is generally recognized within the ecosystem modeling community. The Scientists' Intelligent Graphical Modelling Assistant (SIGMA) creates an environment for model building, sharing and reuse which provides an alternative to more conventional approaches which too often yield poorly documented, awkwardly structured model code. The SIGMA interface presents the user a list of model quantities which can be selected for computation. Equations to calculate the model quantities may be chosen from an existing library of ecosystem modeling equations, or built using a specialized equation editor. Inputs for these equations may be supplied by data or by calculation from other equations. Each variable and equation is expressed using ecological terminology and scientific units, and is documented with explanatory descriptions and optional literature citations. Automatic scientific unit conversion is supported and only physically-consistent equations are accepted by the system. The system uses knowledge-based semantic conditions to decide which equations in its library make sense to apply in a given situation, and supplies these to the user for selection. The equations and variables are graphically represented as a flow diagram which provides a complete summary of the model. Forest-BGC, a stand-level model that simulates photosynthesis and evapotranspiration for conifer canopies, was originally implemented in Fortran and subsequently re-implemented using SIGMA. The SIGMA version reproduces daily results and also provides a knowledge base which greatly facilitates inspection, modification and extension of Forest-BGC.
Enhanced TCAS 2/CDTI traffic sensor digital simulation model and program description
NASA Technical Reports Server (NTRS)
Goka, T.
1984-01-01
Digital simulation models of enhanced TCAS 2/CDTI traffic sensors are developed, based on actual or projected operational and performance characteristics. Two enhanced Traffic (or Threat) Alert and Collision Avoidance Systems are considered. A digital simulation program is developed in FORTRAN. The program contains an executive with a semireal-time batch processing capability. The simulation program can be interfaced with other modules with minimal requirements. Both the traffic sensor and CAS logic modules are validated by means of extensive simulation runs. Selected validation cases are discussed in detail, and capabilities and limitations of the actual and simulated systems are noted. The TCAS systems are not specifically intended for Cockpit Display of Traffic Information (CDTI) applications, but they are sufficiently general to allow implementation of CDTI functions within the real systems' constraints.
Molecular simulation of water removal from simple gases with zeolite NaA.
Csányi, Eva; Ható, Zoltán; Kristóf, Tamás
2012-06-01
Water vapor removal from some simple gases using zeolite NaA was studied by molecular simulation. The equilibrium adsorption properties of H2O, CO, H2, CH4 and their mixtures in dehydrated zeolite NaA were computed by grand canonical Monte Carlo simulations. The simulations employed Lennard-Jones + Coulomb type effective pair potential models, which are suitable for the reproduction of thermodynamic properties of pure substances. Based on the comparison of the simulation results with experimental data for single-component adsorption at different temperatures and pressures, a modified interaction potential model for the zeolite is proposed. In the adsorption simulations with mixtures presented here, the zeolite exhibits extremely high selectivity of water over the investigated weakly polar/non-polar gases, demonstrating the excellent dehydration ability of zeolite NaA in engineering applications.
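The core of such grand canonical Monte Carlo simulations is the pair of insertion/deletion acceptance rules sketched below; energy evaluation is left abstract and the thermal de Broglie wavelength is folded into the activity z, so this is a generic illustration of the move types rather than the specific protocol of the study.

```python
import math
import random

def accept_insertion(z, V, N, dU, beta):
    """GCMC insertion acceptance: min(1, z*V/(N+1) * exp(-beta*dU)),
    where z = exp(beta*mu)/Lambda^3 is the activity and dU is the
    energy change caused by the trial insertion."""
    return random.random() < min(1.0, z * V / (N + 1) * math.exp(-beta * dU))

def accept_deletion(z, V, N, dU, beta):
    """GCMC deletion acceptance: min(1, N/(z*V) * exp(-beta*dU)),
    with dU the energy change caused by removing the particle."""
    return random.random() < min(1.0, N / (z * V) * math.exp(-beta * dU))

# usage sketch: alternate insertion/deletion trials, recomputing dU
# from the (omitted) interaction potential after each accepted move
```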
NASA Astrophysics Data System (ADS)
Bharatham, Kavitha; Bharatham, Nagakumar; Kwon, Yong Jung; Lee, Keun Woo
2008-12-01
Allosteric inhibition of protein tyrosine phosphatase 1B (PTP1B) has paved a new path to design specific inhibitors for PTP1B, which is an important drug target for the treatment of type II diabetes and obesity. The PTP1B1-282-allosteric inhibitor complex crystal structure lacks α7 (287-298), and moreover there is no available 3D structure of PTP1B1-298 in open form. As the interaction between the α7 and α6-α3 helices plays a crucial role in allosteric inhibition, α7 was modeled onto PTP1B1-282 in open form complexed with an allosteric inhibitor (compound-2), and a 5 ns MD simulation was performed to investigate the relative orientation of the α7-α6-α3 helices. The conformational space of the simulation was statistically sampled by clustering analyses. This approach helped reveal clues about PTP1B allosteric inhibition. The simulation was also utilized in the generation of receptor-based pharmacophore models to include the conformational flexibility of the protein-inhibitor complex. Three representative structures of the most highly populated clusters were selected for pharmacophore model generation. The three pharmacophore models were subsequently utilized for screening databases to retrieve molecules containing the features that complement the allosteric site. The retrieved hits were filtered based on certain drug-like properties, and molecular docking simulations were performed in two different conformations of the protein. Thus, performing MD simulation with α7 to investigate the changes at the allosteric site, then developing receptor-based pharmacophore models, and finally docking the retrieved hits into two distinct conformations is a reliable methodology for identifying PTP1B allosteric inhibitors.
Peterson, Steven M.; Flynn, Amanda T.; Vrabel, Joseph; Ryter, Derek W.
2015-08-12
The calibrated groundwater-flow model was used with the Groundwater-Management Process for the 2005 version of the U.S. Geological Survey modular three-dimensional groundwater model, MODFLOW–2005, to provide a tool for the NPNRD to better understand how water-management decisions could affect stream base flows of the North Platte River at the Bridgeport, Nebr., streamgage in a future period from 2008 to 2019 under varying climatic conditions. The simulation-optimization model was constructed to analyze the maximum increase in simulated stream base flow that could be obtained with the minimum reduction in groundwater withdrawals for irrigation. A second analysis extended the first to analyze the simulated base-flow benefit of managing groundwater withdrawals along with applying intentional recharge, that is, water from canals being released into rangeland areas with sandy soils. With optimized groundwater withdrawals and intentional recharge, the maximum simulated stream base flow was 15–23 cubic feet per second (ft3/s) greater than with no management at all, or 10–15 ft3/s larger than with managed groundwater withdrawals only. These results indicate not only the amount by which simulated stream base flow can be increased by these management options, but also the locations where the management options provide the most or least benefit to the simulated stream base flow. For the analyses in this report, simulated base flow was best optimized by reductions in groundwater withdrawals north of the North Platte River and in the western half of the area. Intentional recharge sites selected by the optimization had a complex distribution but were more likely to be closer to the North Platte River or its tributaries. Future users of the simulation-optimization model will be able to modify the input files as to the type, location, and timing of constraints, decision variables of groundwater withdrawals by zone, and other variables to explore other feasible management scenarios that may yield different increases in simulated future base flow of the North Platte River.
Sultan, Mohammad M; Kiss, Gert; Shukla, Diwakar; Pande, Vijay S
2014-12-09
Given the large number of crystal structures and NMR ensembles that have been solved to date, classical molecular dynamics (MD) simulations have become powerful tools in the atomistic study of the kinetics and thermodynamics of biomolecular systems on ever increasing time scales. By virtue of the high-dimensional conformational state space that is explored, the interpretation of large-scale simulations faces difficulties not unlike those in the big data community. We address this challenge by introducing a method called clustering based feature selection (CB-FS) that employs a posterior analysis approach. It combines supervised machine learning (SML) and feature selection with Markov state models to automatically identify the relevant degrees of freedom that separate conformational states. We highlight the utility of the method in the evaluation of large-scale simulations and show that it can be used for the rapid and automated identification of relevant order parameters involved in the functional transitions of two exemplary cell-signaling proteins central to human disease states.
NASA Astrophysics Data System (ADS)
Tao, Zhu; Shi, Runhe; Zeng, Yuyan; Gao, Wei
2017-09-01
The 3D model is an important part of simulated remote sensing for earth observation. At the small spatial scales handled by the DART software, both the details of the model itself and the number and distribution of models have an important impact on the canopy Normalized Difference Vegetation Index (NDVI) of a scene. Taking Phragmites australis in the Yangtze Estuary as an example, and building on previous studies of model precision, this paper studied the effect of the P. australis model on canopy NDVI, mainly with respect to the cell dimension of the DART software and the density distribution of the P. australis model in the scene, as well as the choice of model density given the computer running-time cost of an actual simulation. The DART cell dimensions and the density of the scene model were set using the optimal-precision model from existing research results. The NDVI simulation results for different model densities under different cell dimensions were examined by error analysis. By studying the relationship between relative error, absolute error and time cost, we established a density selection method for the P. australis model in the simulation of small-scale scenes. Experiments showed that, due to the difference between the 3D model and the real scenario, the number of P. australis plants in the simulated scene need not be the same as in the real environment. The best simulation results could be obtained by keeping the density at about 40 plants per square meter while preserving the visual effect.
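For reference, the canopy NDVI compared throughout the study is the standard per-pixel band ratio sketched below, assuming near-infrared and red reflectance arrays as inputs.

```python
import numpy as np

def ndvi(nir, red, eps=1e-12):
    """Normalized Difference Vegetation Index: (NIR - Red)/(NIR + Red),
    computed per pixel; eps guards against division by zero."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

print(ndvi([0.5, 0.4], [0.1, 0.2]))  # higher values = denser green canopy
```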
Consequences of Base Time for Redundant Signals Experiments
Townsend, James T.; Honey, Christopher
2007-01-01
We report analytical and computational investigations into the effects of base time on the diagnosticity of two popular theoretical tools in the redundant signals literature: (1) the race model inequality and (2) the capacity coefficient. We show analytically and without distributional assumptions that the presence of base time decreases the sensitivity of both of these measures to model violations. We further use simulations to investigate the statistical power of model selection tools based on the race model inequality, both with and without base time. Base time decreases statistical power, and biases the race model test toward conservatism. The magnitude of this biasing effect increases as we increase the proportion of total reaction time variance contributed by base time. We marshal empirical evidence to suggest that the proportion of reaction time variance contributed by base time is relatively small, and that the effects of base time on the diagnosticity of our model-selection tools are therefore likely to be minor. However, uncertainty remains concerning the magnitude and even the definition of base time. Experimentalists should continue to be alert to situations in which base time may contribute a large proportion of the total reaction time variance. PMID:18670591
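Miller's race model inequality referenced above bounds the redundant-condition reaction time CDF by the sum of the single-signal CDFs; the sketch below checks the bound on empirical CDFs using hypothetical simulated reaction times (it does not model base time itself).

```python
import numpy as np

def ecdf(sample, t_grid):
    """Empirical CDF of reaction times evaluated on a time grid."""
    sample = np.sort(np.asarray(sample))
    return np.searchsorted(sample, t_grid, side="right") / len(sample)

def race_model_violation(rt_redundant, rt_a, rt_b, t_grid):
    """Miller's race model inequality: F_AB(t) <= F_A(t) + F_B(t).
    Returns the maximum positive violation over the grid; values > 0
    indicate evidence against a simple race account (before allowing
    for base time, which the paper shows blunts this test)."""
    bound = np.minimum(1.0, ecdf(rt_a, t_grid) + ecdf(rt_b, t_grid))
    return float(np.max(ecdf(rt_redundant, t_grid) - bound))

rng = np.random.default_rng(0)              # hypothetical simulated RTs (ms)
t = np.linspace(150, 600, 100)
print(race_model_violation(rng.normal(300, 40, 500),
                           rng.normal(360, 50, 500),
                           rng.normal(370, 50, 500), t))
```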
Modeling of pilot's visual behavior for low-level flight
NASA Astrophysics Data System (ADS)
Schulte, Axel; Onken, Reiner
1995-06-01
Developers of synthetic vision systems for low-level flight simulators face the problem of deciding which features to incorporate in order to achieve the most realistic training conditions. This paper supports an approach to this problem based on modeling the pilot's visual behavior. The approach is founded upon the basic requirement that the pilot's mechanisms of visual perception should be identical in simulated and real low-level flight. Flight simulator experiments with pilots were conducted for knowledge acquisition. During the experiments, video material of a real low-level flight mission containing different situations was displayed to the pilot, who was acting under a realistic mission assignment in a laboratory environment. The pilot's eye movements were measured during the replay. The visual mechanisms were divided into rule-based strategies for visual navigation, grounded in the preflight planning process, as opposed to skill-based processes. The paper presents a model of the pilot's planning strategy for a visual fixing routine as part of the navigation task. The model is a knowledge-based system built upon fuzzy evaluation of terrain features in order to determine the landmarks used by pilots. It can be shown that a computer implementation of the model selects the same features that trained pilots preferred.
Tanigawa, Takahiko; Kaneko, Masato; Hashizume, Kensei; Kajikawa, Mariko; Ueda, Hitoshi; Tajiri, Masahiro; Paolini, John F; Mueck, Wolfgang
2013-01-01
The global ROCKET AF phase III trial evaluated rivaroxaban 20 mg once daily (o.d.) for stroke prevention in atrial fibrillation (AF). Based on rivaroxaban pharmacokinetics in Japanese subjects and lower anticoagulation preferences in Japan, particularly in elderly patients, the optimal dose regimen for Japanese AF patients was considered. The aim of this analysis was dose selection for Japanese patients from a pharmacokinetic perspective, by comparing simulated exposures in Japanese patients with those in Caucasian patients. In the population pharmacokinetic-pharmacodynamic analyses, a one-compartment pharmacokinetic model with first-order absorption and direct-link pharmacokinetic-pharmacodynamic models optimally described the plasma concentrations and pharmacodynamic measures (Factor Xa activity, prothrombin time, activated partial thromboplastin time, and HepTest), consistent with previous work. Steady-state simulations indicated that 15 mg rivaroxaban o.d. doses in Japanese patients with AF would yield exposures comparable to the 20 mg o.d. dose in Caucasian patients with AF. In conclusion, in the context of the lower anticoagulation targets in Japanese practice, the population pharmacokinetic and pharmacodynamic modeling supports 15 mg o.d. as the principal rivaroxaban dose in J-ROCKET AF.
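For reference, the one-compartment model with first-order absorption named above has the closed-form (Bateman) concentration profile sketched below; the parameter values are illustrative placeholders, not the fitted population estimates.

```python
import numpy as np

def one_compartment_oral(t, dose, ka, ke, V, F=1.0):
    """Plasma concentration for a one-compartment model with first-order
    absorption (Bateman function), assuming ka != ke. Parameter values
    in the example are illustrative, not the published estimates."""
    return (F * dose * ka) / (V * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

t = np.linspace(0, 24, 97)                         # hours
conc = one_compartment_oral(t, dose=20.0, ka=1.0, ke=0.15, V=50.0)
print(conc.max(), t[conc.argmax()])                # rough Cmax and Tmax
```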
DDDAMS-based Urban Surveillance and Crowd Control via UAVs and UGVs
2015-12-04
for crowd dynamics modeling by incorporating multi-resolution data, where a grid-based method is used to model crowd motion with UAVs' low-resolution... information and more computationally intensive (and time-consuming). Given that the deployment of fidelity selection results in simulation faces computational... [Table 1: Parameters for UAV and UGV detection; table contents lost in extraction.]
Efficient rejection-based simulation of biochemical reactions with stochastic noise and delays
NASA Astrophysics Data System (ADS)
Thanh, Vo Hong; Priami, Corrado; Zunino, Roberto
2014-10-01
We propose a new exact stochastic rejection-based simulation algorithm for biochemical reactions and extend it to systems with delays. Our algorithm accelerates the simulation by pre-computing reaction propensity bounds to select the next reaction to perform. Exploiting such bounds, we are able to avoid recomputing propensities every time a (delayed) reaction is initiated or finished, as is typically necessary in standard approaches. Propensity updates in our approach are still performed, but only infrequently and limited for a small number of reactions, saving computation time and without sacrificing exactness. We evaluate the performance improvement of our algorithm by experimenting with concrete biological models.
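The accept/reject idea behind such rejection-based selection can be sketched as follows: a candidate reaction is drawn in proportion to its precomputed propensity upper bound and accepted with probability (exact propensity)/(bound), so exact propensities are evaluated lazily, only for candidates. This is a simplified illustration of the principle, not the published algorithm's full bookkeeping for bound refresh and delay queues.

```python
import math
import random

def rejection_step(state, propensities, bounds):
    """One firing of a simplified rejection-based selection. bounds[j]
    must be a true upper bound on propensities[j](state). A candidate
    is drawn in proportion to its bound and accepted with probability
    a_j / bounds[j]; each trial, accepted or not, advances time by an
    exponential with rate sum(bounds)."""
    total = sum(bounds)
    tau = 0.0
    while True:
        tau += -math.log(1.0 - random.random()) / total
        r, acc = random.random() * total, 0.0
        for j, b in enumerate(bounds):       # candidate from the bounds
            acc += b
            if r < acc:
                break
        a_j = propensities[j](state)         # exact propensity, lazily
        if random.random() < a_j / b:
            return j, tau                    # reaction j fires after tau
```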
StochKit2: software for discrete stochastic simulation of biochemical systems with events.
Sanft, Kevin R; Wu, Sheng; Roh, Min; Fu, Jin; Lim, Rone Kwei; Petzold, Linda R
2011-09-01
StochKit2 is the first major upgrade of the popular StochKit stochastic simulation software package. StochKit2 provides highly efficient implementations of several variants of Gillespie's stochastic simulation algorithm (SSA), and tau-leaping with automatic step size selection. StochKit2 features include automatic selection of the optimal SSA method based on model properties, event handling, and automatic parallelism on multicore architectures. The underlying structure of the code has been completely updated to provide a flexible framework for extending its functionality. StochKit2 runs on Linux/Unix, Mac OS X and Windows. It is freely available under GPL version 3 and can be downloaded from http://sourceforge.net/projects/stochkit/. petzold@engineering.ucsb.edu.
ICME for Crashworthiness of TWIP Steels: From Ab Initio to the Crash Performance
NASA Astrophysics Data System (ADS)
Güvenç, O.; Roters, F.; Hickel, T.; Bambach, M.
2015-01-01
During the last decade, integrated computational materials engineering (ICME) emerged as a field which aims to promote synergetic usage of formerly isolated simulation models, data and knowledge in materials science and engineering, in order to solve complex engineering problems. In our work, we applied the ICME approach to a crash box, a common automobile component crucial to passenger safety. A newly developed high manganese steel was selected as the material of the component and its crashworthiness was assessed by simulated and real drop tower tests. The crashworthiness of twinning-induced plasticity (TWIP) steel is intrinsically related to the strain hardening behavior caused by the combination of dislocation glide and deformation twinning. The relative contributions of those to the overall hardening behavior depend on the stacking fault energy (SFE) of the selected material. Both the deformation twinning mechanism and the stacking fault energy are individually well-researched topics, but especially for high-manganese steels, the determination of the stacking-fault energy and the occurrence of deformation twinning as a function of the SFE are crucial to understand the strain hardening behavior. We applied ab initio methods to calculate the stacking fault energy of the selected steel composition as an input to a recently developed strain hardening model which models deformation twinning based on the SFE-dependent dislocation mechanisms. This physically based material model is then applied to simulate a drop tower test in order to calculate the energy absorption capacity of the designed component. The results are in good agreement with experiments. The model chain links the crash performance to the SFE and hence to the chemical composition, which paves the way for computational materials design for crashworthiness.
Plant toxins and trophic cascades alter fire regime and succession on a boreal forest landscape
Feng, Zhilan; Alfaro-Murillo, Jorge A.; DeAngelis, Donald L.; Schmidt, Jennifer; Barga, Matthew; Zheng, Yiqiang; Ahmad Tamrin, Muhammad Hanis B.; Olson, Mark; Glaser, Tim; Kielland, Knut; Chapin, F. Stuart; Bryant, John
2012-01-01
Two models were integrated in order to study the effect of plant toxicity and a trophic cascade on forest succession and fire patterns across a boreal landscape in central Alaska. One of the models, ALFRESCO, is a cellular automata model that stochastically simulates transitions from spruce dominated 1 km2 spatial cells to deciduous woody vegetation based on stochastic fires, and from deciduous woody vegetation to spruce based on age of the cell with some stochastic variation. The other model, the ‘toxin-dependent functional response’ model (TDFRM) simulates woody vegetation types with different levels of toxicity, an herbivore browser (moose) that can forage selectively on these types, and a carnivore (wolf) that preys on the herbivore. Here we replace the simple succession rules in each ALFRESCO cell by plant–herbivore–carnivore dynamics from TDFRM. The central hypothesis tested in the integrated model is that the herbivore, by feeding selectively on low-toxicity deciduous woody vegetation, speeds succession towards high-toxicity evergreens, like spruce. Wolves, by keeping moose populations down, can help slow the succession. Our results confirmed this hypothesis for the model calibrated to the Tanana floodplain of Alaska. We used the model to estimate the effects of different levels of wolf control. Simulations indicated that management reductions in wolf densities could reduce the mean time to transition from deciduous to spruce by more than 15 years, thereby increasing landscape flammability. The integrated model can be useful in estimating ecosystem impacts of wolf control and moose harvesting in central Alaska.
A Stigmergy Collaboration Approach in the Open Source Software Developer Community
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cui, Xiaohui; Pullum, Laura L; Treadwell, Jim N
2009-01-01
The communication model of some self-organized online communities is significantly different from that of traditional social-network-based communities. It is problematic to use social network analysis to analyze the collaboration structure and emergent behaviors in these communities because these communities lack peer-to-peer connections. Stigmergy theory provides an explanation of the collaboration model of these communities. In this research, we present a stigmergy approach for building an agent-based simulation to simulate the collaboration model in the open source software (OSS) developer community. We used a group of actors who collaborate on OSS projects through forums as our frame of reference and investigated how the choices actors make in contributing their work on the projects determine the global status of the whole OSS project. In our simulation, the forum posts serve as the digital pheromone and the modified Pierre-Paul Grasse pheromone model is used for computing the developer agents' behavior selection probability.
The long-term evolution of multilocus traits under frequency-dependent disruptive selection.
van Doorn, G Sander; Dieckmann, Ulf
2006-11-01
Frequency-dependent disruptive selection is widely recognized as an important source of genetic variation. Its evolutionary consequences have been extensively studied using phenotypic evolutionary models, based on quantitative genetics, game theory, or adaptive dynamics. However, the genetic assumptions underlying these approaches are highly idealized and, even worse, predict different consequences of frequency-dependent disruptive selection. Population genetic models, by contrast, enable genotypic evolutionary models, but traditionally assume constant fitness values. Only a minority of these models thus addresses frequency-dependent selection, and only a few of these do so in a multilocus context. An inherent limitation of these remaining studies is that they only investigate the short-term maintenance of genetic variation. Consequently, the long-term evolution of multilocus characters under frequency-dependent disruptive selection remains poorly understood. We aim to bridge this gap between phenotypic and genotypic models by studying a multilocus version of Levene's soft-selection model. Individual-based simulations and deterministic approximations based on adaptive dynamics theory provide insights into the underlying evolutionary dynamics. Our analysis uncovers a general pattern of polymorphism formation and collapse, likely to apply to a wide variety of genetic systems: after convergence to a fitness minimum and the subsequent establishment of genetic polymorphism at multiple loci, genetic variation becomes increasingly concentrated on a few loci, until eventually only a single polymorphic locus remains. This evolutionary process combines features observed in quantitative genetics and adaptive dynamics models, and it can be explained as a consequence of changes in the selection regime that are inherent to frequency-dependent disruptive selection. Our findings demonstrate that the potential of frequency-dependent disruptive selection to maintain polygenic variation is considerably smaller than previously expected.
Development of a Water Recovery System Resource Tracking Model
NASA Technical Reports Server (NTRS)
Chambliss, Joe; Stambaugh, Imelda; Sargusingh, Miriam; Shull, Sarah; Moore, Michael
2015-01-01
A simulation model has been developed to track water resources in an exploration vehicle using Regenerative Life Support (RLS) systems. The Resource Tracking Model (RTM) integrates the functions of all the vehicle components that affect the processing and recovery of water during simulated missions. The approach used in developing the RTM enables its use as part of a complete vehicle simulation for real-time mission studies. Performance data for the components in the RTM are focused on water processing. The data provided to the model have been based on the most recent information available regarding the technology of each component. This paper describes the process of defining the RLS system to be modeled, the way the modeling environment was selected, and how the model has been implemented. Results showing how the RLS components exchange water are provided in a set of test cases.
Development of a Water Recovery System Resource Tracking Model
NASA Technical Reports Server (NTRS)
Chambliss, Joe; Stambaugh, Imelda; Sarguishm, Miriam; Shull, Sarah; Moore, Michael
2014-01-01
A simulation model has been developed to track water resources in an exploration vehicle using regenerative life support (RLS) systems. The model integrates the functions of all the vehicle components that affect the processing and recovery of water during simulated missions. The approach used in developing the model makes the Resource Tracking Model (RTM) part of a complete vehicle simulation that can be used in real-time mission studies. Performance data for the variety of components in the RTM are focused on water processing and have been defined based on the most recent information available for the technology of each component. This paper describes the process of defining the RLS system to be modeled, the way the modeling environment was selected, and how the model has been implemented. Results showing how the variety of RLS components exchange water are provided in a set of test cases.
Computational study on UV curing characteristics in nanoimprint lithography: Stochastic simulation
NASA Astrophysics Data System (ADS)
Koyama, Masanori; Shirai, Masamitsu; Kawata, Hiroaki; Hirai, Yoshihiko; Yasuda, Masaaki
2017-06-01
A computational simulation model of UV curing in nanoimprint lithography based on a simplified stochastic approach is proposed. The activated unit reacts with a randomly selected monomer within a critical reaction radius. Cluster units are chained to each other. Then, another monomer is activated and the next chain reaction occurs. This process is repeated until no virgin monomer remains within the reaction radius or until the activated monomers react with each other. The simulation model well describes the basic UV curing characteristics, such as the molecular weight distributions of the reacted monomers and the effect of the initiator concentration on the conversion ratio. The effects of film thickness on UV curing characteristics are also studied by the simulation.
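A toy version of the chain-growth rule described above (activate a unit, react with a random monomer within the critical radius, transfer activity, repeat) might look like the sketch below; all kinetics, geometry details and the paper's termination rules are simplified away.

```python
import random

def simulate_chain(positions, radius, start):
    """Toy chain growth: from an activated monomer, repeatedly react
    with a randomly chosen unreacted monomer within the critical
    reaction radius; stop when none remains nearby. 2D distances,
    no kinetics: a purely structural illustration."""
    unreacted = set(range(len(positions))) - {start}
    chain, active = [start], start
    while True:
        ax, ay = positions[active]
        nearby = [i for i in unreacted
                  if (positions[i][0] - ax) ** 2
                   + (positions[i][1] - ay) ** 2 <= radius ** 2]
        if not nearby:
            return chain                  # chain terminates
        nxt = random.choice(nearby)       # random partner within radius
        unreacted.discard(nxt)
        chain.append(nxt)
        active = nxt                      # activity transfers along chain

pts = [(random.random(), random.random()) for _ in range(200)]
print(len(simulate_chain(pts, radius=0.08, start=0)))   # chain length
```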
Optimal Decision Stimuli for Risky Choice Experiments: An Adaptive Approach.
Cavagnaro, Daniel R; Gonzalez, Richard; Myung, Jay I; Pitt, Mark A
2013-02-01
Collecting data to discriminate between models of risky choice requires careful selection of decision stimuli. Models of decision making aim to predict decisions across a wide range of possible stimuli, but practical limitations force experimenters to select only a handful of them for actual testing. Some stimuli are more diagnostic between models than others, so the choice of stimuli is critical. This paper provides the theoretical background and a methodological framework for adaptive selection of optimal stimuli for discriminating among models of risky choice. The approach, called Adaptive Design Optimization (ADO), adapts the stimulus in each experimental trial based on the results of the preceding trials. We demonstrate the validity of the approach with simulation studies aiming to discriminate Expected Utility, Weighted Expected Utility, Original Prospect Theory, and Cumulative Prospect Theory models.
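Stimuli are diagnostic precisely where the candidate models disagree; as a concrete illustration, the sketch below scores the same two-outcome gains gamble under cumulative prospect theory and expected utility using the familiar Tversky-Kahneman functional forms with commonly cited illustrative parameter values (not values from this paper).

```python
def w(p, gamma=0.61):
    """Tversky-Kahneman probability weighting function for gains."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

def v(x, alpha=0.88):
    """Power value function for gains (losses omitted for brevity)."""
    return x ** alpha

def cpt_value(x_hi, p_hi, x_lo):
    """CPT value of a two-outcome gains gamble (x_hi with prob p_hi,
    else x_lo), using rank-dependent weights. Parameter values are
    common illustrative estimates, not fits from this paper."""
    return w(p_hi) * v(x_hi) + (1 - w(p_hi)) * v(x_lo)

def expected_utility(x_hi, p_hi, x_lo, alpha=0.88):
    """Expected utility with the same power utility, for comparison."""
    return p_hi * x_hi**alpha + (1 - p_hi) * x_lo**alpha

g = (100.0, 0.05, 0.0)
print(cpt_value(*g), expected_utility(*g))  # CPT overweights the small p
```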
Bayesian model selection: Evidence estimation based on DREAM simulation and bridge sampling
NASA Astrophysics Data System (ADS)
Volpi, Elena; Schoups, Gerrit; Firmani, Giovanni; Vrugt, Jasper A.
2017-04-01
Bayesian inference has found widespread application in Earth and Environmental Systems Modeling, providing an effective tool for prediction, data assimilation, parameter estimation, uncertainty analysis and hypothesis testing. Under multiple competing hypotheses, the Bayesian approach also provides an attractive alternative to traditional information criteria (e.g. AIC, BIC) for model selection. The key variable for Bayesian model selection is the evidence (or marginal likelihood), the normalizing constant in the denominator of Bayes' theorem; while it is fundamental for model selection, the evidence is not required for Bayesian inference. It is computed for each hypothesis (model) by averaging the likelihood function over the prior parameter distribution, rather than maximizing it as information criteria do; the larger a model's evidence, the more support the model receives among a collection of hypotheses, as the simulated values assign relatively high probability density to the observed data. Hence, the evidence naturally acts as an Occam's razor, preferring simpler and more constrained models over the over-fitted ones selected by information criteria that incorporate only the likelihood maximum. Since it is not particularly easy to estimate the evidence in practice, Bayesian model selection via the marginal likelihood has not yet found mainstream use. We illustrate here the properties of a new estimator of the Bayesian model evidence, which provides robust and unbiased estimates of the marginal likelihood; the method is coined Gaussian Mixture Importance Sampling (GMIS). GMIS uses multidimensional numerical integration of the posterior parameter distribution via bridge sampling (a generalization of importance sampling) of a mixture distribution fitted to samples of the posterior distribution derived from the DREAM algorithm (Vrugt et al., 2008; 2009). Some illustrative examples are presented to show the robustness and superiority of the GMIS estimator with respect to other commonly used approaches in the literature.
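A stripped-down version of the mixture-proposal idea is sketched below: fit a Gaussian mixture to posterior samples, use it as an importance distribution, and average prior times likelihood over the mixture density. The full GMIS estimator combines this with bridge sampling, which is omitted here; log_prior and log_like are assumed to be vectorized over rows of a 2-D parameter array.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def log_evidence_is(posterior_samples, log_prior, log_like,
                    n_draws=20000, seed=0):
    """Crude importance-sampling estimate of the log marginal
    likelihood: fit a Gaussian mixture q to posterior samples, draw
    from q, and average prior*likelihood/q. GMIS proper refines this
    with bridge sampling; only the mixture-proposal idea is shown."""
    gm = GaussianMixture(n_components=3, random_state=seed)
    gm.fit(posterior_samples)                 # samples: shape (n, d)
    theta, _ = gm.sample(n_draws)
    log_w = log_prior(theta) + log_like(theta) - gm.score_samples(theta)
    m = log_w.max()                           # log-sum-exp for stability
    return m + np.log(np.mean(np.exp(log_w - m)))
```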
NASA Astrophysics Data System (ADS)
Noori, Roohollah; Safavi, Salman; Nateghi Shahrokni, Seyyed Afshin
2013-07-01
The five-day biochemical oxygen demand (BOD5) is one of the key parameters in water quality management. In this study, a novel approach, a reduced-order adaptive neuro-fuzzy inference system (ROANFIS) model, was developed for rapid estimation of BOD5. In addition, an uncertainty analysis of the adaptive neuro-fuzzy inference system (ANFIS) and ROANFIS models was carried out based on Monte-Carlo simulation. Accuracy analysis of the ANFIS and ROANFIS models, based on both a developed discrepancy ratio and threshold statistics, revealed that the selected ROANFIS model was superior. The Pearson correlation coefficient (R) and root mean square error for the best-fitted ROANFIS model were 0.96 and 7.12, respectively. Furthermore, uncertainty analysis of the developed models indicated that the selected ROANFIS model had less uncertainty than the ANFIS model and accurately forecasted BOD5 in the Sefidrood River Basin. Moreover, the uncertainty analysis showed that the predictions bracketed by the 95% confidence bound and the d-factor in the testing steps for the selected ROANFIS model were 94% and 0.83, respectively.
Bayesian Model Selection under Time Constraints
NASA Astrophysics Data System (ADS)
Hoege, M.; Nowak, W.; Illman, W. A.
2017-12-01
Bayesian model selection (BMS) provides a consistent framework for rating and comparing models in multi-model inference. In cases where models of vastly different complexity compete with each other, we also face vastly different computational runtimes of such models. For instance, time series of a quantity of interest can be simulated by an autoregressive process model that takes even less than a second for one run, or by a partial-differential-equations-based model with runtimes up to several hours or even days. The classical BMS is based on a quantity called Bayesian model evidence (BME). It determines the model weights in the selection process and resembles a trade-off between the bias of a model and its complexity. However, in practice, the runtime of a model is another factor relevant to its weight in model selection. Hence, we believe that it should be included, leading to an overall trade-off problem between bias, variance and computing effort. We approach this triple trade-off from the viewpoint of our ability to generate realizations of the models under a given computational budget. One way to obtain BME values is through sampling-based integration techniques. We argue from the fact that more expensive models can be sampled much less under time constraints than faster models (in straight proportion to their runtime). The computed evidence in favor of a more expensive model is statistically less significant than the evidence computed in favor of a faster model, since sampling-based strategies are always subject to statistical sampling error. We present a straightforward way to include this imbalance in the model weights that are the basis for model selection. Our approach follows directly from the idea of insufficient significance. It is based on a computationally cheap bootstrapping error estimate of model evidence and is easy to implement. The approach is illustrated in a small synthetic modeling study.
Genomic Prediction Accounting for Residual Heteroskedasticity
Ou, Zhining; Tempelman, Robert J.; Steibel, Juan P.; Ernst, Catherine W.; Bates, Ronald O.; Bello, Nora M.
2015-01-01
Whole-genome prediction (WGP) models that use single-nucleotide polymorphism marker information to predict genetic merit of animals and plants typically assume homogeneous residual variance. However, variability is often heterogeneous across agricultural production systems and may subsequently bias WGP-based inferences. This study extends classical WGP models based on normality, heavy-tailed specifications and variable selection to explicitly account for environmentally-driven residual heteroskedasticity under a hierarchical Bayesian mixed-models framework. WGP models assuming homogeneous or heterogeneous residual variances were fitted to training data generated under simulation scenarios reflecting a gradient of increasing heteroskedasticity. Model fit was based on pseudo-Bayes factors and also on prediction accuracy of genomic breeding values computed on a validation data subset one generation removed from the simulated training dataset. Homogeneous vs. heterogeneous residual variance WGP models were also fitted to two quantitative traits, namely 45-min postmortem carcass temperature and loin muscle pH, recorded in a swine resource population dataset prescreened for high and mild residual heteroskedasticity, respectively. Fit of competing WGP models was compared using pseudo-Bayes factors. Predictive ability, defined as the correlation between predicted and observed phenotypes in validation sets of a five-fold cross-validation was also computed. Heteroskedastic error WGP models showed improved model fit and enhanced prediction accuracy compared to homoskedastic error WGP models although the magnitude of the improvement was small (less than two percentage points net gain in prediction accuracy). Nevertheless, accounting for residual heteroskedasticity did improve accuracy of selection, especially on individuals of extreme genetic merit. PMID:26564950
NASA Astrophysics Data System (ADS)
Suzuki, Makoto; Kameda, Toshimasa; Doi, Ayumi; Borisov, Sergey; Babin, Sergey
2018-03-01
The interpretation of scanning electron microscopy (SEM) images of the latest semiconductor devices is not intuitive and requires comparison with computed images based on theoretical modeling and simulation. For quantitative image prediction and geometrical reconstruction of the specimen structure, the accuracy of the physical model is essential. In this paper, we review current models of electron-solid interaction and discuss their accuracy. We compare simulated results with our experiments on SEM overlay of under-layers, grain imaging of copper interconnects, and hole-bottom visualization by angular-selective detectors, and show that our model reproduces the experimental results well. Remaining issues for quantitative simulation are also discussed, including the accuracy of the charge dynamics, the treatment of the beam skirt, and the explosive increase in computing time.
VARIABLE SELECTION FOR REGRESSION MODELS WITH MISSING DATA
Garcia, Ramon I.; Ibrahim, Joseph G.; Zhu, Hongtu
2009-01-01
We consider the variable selection problem for a class of statistical models with missing data, including missing covariate and/or response data. We investigate the smoothly clipped absolute deviation (SCAD) penalty and the adaptive LASSO, and propose a unified model selection and estimation procedure for use in the presence of missing data. We develop a computationally attractive algorithm for simultaneously optimizing the penalized likelihood function and estimating the penalty parameters. In particular, we propose to use a model selection criterion, called the ICQ statistic, for selecting the penalty parameters. We show that the variable selection procedure based on ICQ automatically and consistently selects the important covariates and leads to efficient estimates with oracle properties. The methodology is very general and can be applied to numerous situations involving missing data, from covariates missing at random in arbitrary regression models to nonignorably missing longitudinal responses and/or covariates. Simulations are given to demonstrate the methodology and examine the finite-sample performance of the variable selection procedures. Melanoma data from a cancer clinical trial are presented to illustrate the proposed methodology. PMID:20336190
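On complete data, the adaptive LASSO component can be illustrated with the standard reweighting trick (scale each predictor by an initial coefficient estimate, fit a plain LASSO, rescale); this scikit-learn sketch does not implement the authors' missing-data machinery or the ICQ criterion, and all data are synthetic:

    import numpy as np
    from sklearn.linear_model import LinearRegression, LassoCV

    rng = np.random.default_rng(1)
    n, p = 200, 10
    X = rng.normal(size=(n, p))
    beta = np.array([2.0, 0.0, -1.5, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0])
    y = X @ beta + rng.normal(size=n)

    # Step 1: initial consistent estimate (OLS, valid here since n > p).
    w = np.abs(LinearRegression().fit(X, y).coef_)

    # Step 2: adaptive LASSO via reweighting: scale each column by its
    # initial weight, fit a plain LASSO, then rescale the coefficients.
    lasso = LassoCV(cv=5).fit(X * w, y)
    coef = lasso.coef_ * w

    print("selected:", np.nonzero(coef)[0], "coefficients:", np.round(coef, 2))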
NASA Astrophysics Data System (ADS)
Ercan, A.; Kavvas, M. L.; Ishida, K.; Chen, Z. Q.; Amin, M. Z. M.; Shaaban, A. J.
2017-12-01
The impacts of climate change on hydrologic processes were assessed over various watersheds of Peninsular Malaysia by means of a coupled regional climate and physically-based hydrology model that utilized an ensemble of future climate change projections. An ensemble of 15 future climate realizations from coarse-resolution global climate model (GCM) projections for the 21st century was dynamically downscaled to 6 km resolution over Peninsular Malaysia by a regional numerical climate model, which was then coupled with the watershed hydrology model WEHY through the atmospheric boundary layer over the selected watersheds of Peninsular Malaysia. Hydrologic simulations were carried out at hourly increments and at hillslope scale in order to assess the impacts of climate change on the water balances and flooding conditions at the selected watersheds during the 21st century. The coupled regional climate and hydrology model was run for a duration of 90 years for each of the 15 realizations. It is demonstrated that the increase in mean monthly flows due to the impact of expected climate change during 2040-2100 is statistically significant at the selected watersheds. Furthermore, the flood frequency analyses for the selected watersheds indicate an overall increasing trend in the second half of the 21st century.
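A minimal sketch of the kind of flood frequency analysis mentioned above, assuming annual-maximum flows pooled across ensemble realizations and a Gumbel (EV1) fit; the numbers are synthetic, not from the Peninsular Malaysia simulations:

    import numpy as np
    from scipy.stats import gumbel_r

    rng = np.random.default_rng(2)

    # Hypothetical annual-maximum flows (m^3/s): 90 years x 15 realizations.
    ann_max = rng.gumbel(loc=5000.0, scale=1200.0, size=90 * 15)

    loc, scale = gumbel_r.fit(ann_max)
    for T in (10, 50, 100):
        q = gumbel_r.ppf(1.0 - 1.0 / T, loc=loc, scale=scale)
        print(f"{T:>3}-yr return level: {q:,.0f} m^3/s")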
NASA Astrophysics Data System (ADS)
Jones, Mackenzie L.; Hickox, Ryan C.; Mutch, Simon J.; Croton, Darren J.; Ptak, Andrew F.; DiPompeo, Michael A.
2017-07-01
In studies of the connection between active galactic nuclei (AGNs) and their host galaxies, there is widespread disagreement on some key aspects of the connection. These disagreements largely stem from a lack of understanding of the nature of the full underlying AGN population. Recent attempts to probe this connection utilize both observations and simulations to correct for a missed population, but presently are limited by intrinsic biases and complicated models. We take a simple simulation for galaxy evolution and add a new prescription for AGN activity to connect galaxy growth to dark matter halo properties and AGN activity to star formation. We explicitly model selection effects to produce an “observed” AGN population for comparison with observations and empirically motivated models of the local universe. This allows us to bypass the difficulties inherent in models that attempt to infer the AGN population by inverting selection effects. We investigate the impact of selecting AGNs based on thresholds in luminosity or Eddington ratio on the “observed” AGN population. By limiting our model AGN sample in luminosity, we are able to recreate the observed local AGN luminosity function and specific star formation-stellar mass distribution, and show that using an Eddington ratio threshold introduces less bias into the sample by selecting the full range of growing black holes, despite the challenge of selecting low-mass black holes. We find that selecting AGNs using these various thresholds yield samples with different AGN host galaxy properties.
Kessner, Darren; Novembre, John
2015-01-01
Evolve and resequence studies combine artificial selection experiments with massively parallel sequencing technology to study the genetic basis for complex traits. In these experiments, individuals are selected for extreme values of a trait, causing alleles at quantitative trait loci (QTL) to increase or decrease in frequency in the experimental population. We present a new analysis of the power of artificial selection experiments to detect and localize quantitative trait loci. This analysis uses a simulation framework that explicitly models whole genomes of individuals, quantitative traits, and selection based on individual trait values. We find that explicitly modeling QTL provides qualitatively different insights than considering independent loci with constant selection coefficients. Specifically, we observe how interference between QTL under selection affects the trajectories and lengthens the fixation times of selected alleles. We also show that a substantial portion of the genetic variance of the trait (50–100%) can be explained by detected QTL in as little as 20 generations of selection, depending on the trait architecture and experimental design. Furthermore, we show that power depends crucially on the opportunity for recombination during the experiment. Finally, we show that an increase in power is obtained by leveraging founder haplotype information to obtain allele frequency estimates. PMID:25672748
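The basic dynamic of truncation selection shifting QTL allele frequencies can be conveyed with a toy forward simulation; unlike the authors' framework, this sketch treats loci as unlinked, so it omits the recombination and interference effects the study emphasizes, and all parameter values are invented:

    import numpy as np

    rng = np.random.default_rng(3)
    n_ind, n_qtl, n_gen, sel_frac = 500, 20, 20, 0.2
    effects = rng.normal(0.0, 1.0, n_qtl)   # additive QTL effect sizes
    freq = np.full(n_qtl, 0.5)              # starting allele frequencies

    for gen in range(n_gen):
        # Diploid genotypes: allele counts 0/1/2 at each (unlinked) QTL.
        geno = rng.binomial(2, freq, size=(n_ind, n_qtl))
        trait = geno @ effects + rng.normal(0.0, 2.0, n_ind)  # env. noise
        top = np.argsort(trait)[-int(sel_frac * n_ind):]      # truncation selection
        freq = geno[top].mean(axis=0) / 2.0                   # next-generation freqs

    print(np.round(freq, 2))  # alleles with large positive effects drift upward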
NASA Astrophysics Data System (ADS)
Shin, Sun-Hee; Kim, Ok-Yeon; Kim, Dongmin; Lee, Myong-In
2017-07-01
Using 32 CMIP5 (Coupled Model Intercomparison Project Phase 5) models, this study examines the fidelity of the simulation of cloud amount and cloud radiative effects (CREs) in the historical run driven by observed external radiative forcing for 1850-2005, and their future changes in the RCP (Representative Concentration Pathway) 4.5 scenario runs for 2006-2100. Validation metrics for the historical run are designed to examine the accuracy of the representation of spatial patterns for the climatological mean, and of the annual and interannual variations of clouds and CREs. The models show a large spread in the simulation of cloud amounts, specifically the low cloud amount. The observed relationship between cloud amount and the controlling large-scale environment is also reproduced diversely across models. Based on the validation metrics, four models (ACCESS1.0, ACCESS1.3, HadGEM2-CC, and HadGEM2-ES) are selected as the best models, and the average of the four performs more skillfully than the multimodel ensemble average. All models project global-mean SST warming as greenhouse gases increase, but the magnitude varies across the simulations between 1 and 2 K, which is largely attributable to differences in the change of cloud amount and distribution. The models that simulate more SST warming show a greater increase in the net CRE due to reduced low cloud and increased incoming shortwave radiation, particularly over marine boundary layer regions in the subtropics. The selected best-performing models project a significant reduction in global-mean cloud amount of about -0.99% K⁻¹ and a net radiative warming of 0.46 W m⁻² K⁻¹, suggesting a positive cloud feedback on global warming.
NASA Astrophysics Data System (ADS)
Huang, Maoyi; Ray, Jaideep; Hou, Zhangshuan; Ren, Huiying; Liu, Ying; Swiler, Laura
2016-07-01
The Community Land Model (CLM) has been widely used in climate and Earth system modeling. Accurate estimation of model parameters is needed for reliable model simulations and predictions under both current and future conditions. In our previous work, a subset of hydrological parameters was identified as having significant impact on surface energy fluxes at selected flux tower sites based on parameter screening and sensitivity analysis, indicating that these parameters could potentially be estimated from surface flux observations at the towers. To date, such estimates do not exist. In this paper, we assess the feasibility of applying a Bayesian model calibration technique to estimate CLM parameters at selected flux tower sites under various site conditions. The parameters are estimated as a joint probability density function (PDF) that provides estimates of the uncertainty of the parameters being inverted, conditional on climatologically averaged latent heat fluxes derived from observations. We find that the simulated mean latent heat fluxes from CLM using the calibrated parameters are generally improved at all sites when compared to those obtained with CLM simulations using default parameter sets. Further, our calibration method also yields credibility bounds around the simulated mean fluxes which bracket the measured data. The modes (or maximum a posteriori values) and 95% credibility intervals of the site-specific posterior PDFs are tabulated as suggested parameter values for each site. Analysis of relationships between the posterior PDFs and site conditions suggests that the parameter values are likely correlated with the plant functional type, which needs to be confirmed in future studies by extending the approach to more sites.
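The calibration step can be schematized with a random-walk Metropolis sampler inferring one hypothetical hydrological parameter from a climatological latent-heat observation; the model_lh response curve below is a made-up stand-in for an actual CLM run, and the prior bounds and observation error are assumptions:

    import numpy as np

    rng = np.random.default_rng(4)
    obs_lh, obs_sigma = 95.0, 5.0      # mean latent heat flux (W m-2), assumed

    def model_lh(theta):
        # Stand-in for a CLM run: maps one parameter to a simulated
        # climatological latent heat flux (invented response curve).
        return 120.0 * theta / (0.4 + theta)

    def log_post(theta):
        if not 0.01 < theta < 2.0:     # assumed uniform prior bounds
            return -np.inf
        return -0.5 * ((model_lh(theta) - obs_lh) / obs_sigma) ** 2

    theta, lp, chain = 1.0, log_post(1.0), []
    for _ in range(20000):
        prop = theta + rng.normal(0.0, 0.05)     # random-walk proposal
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain.append(theta)

    post = np.array(chain[5000:])                # discard burn-in
    print(f"median {np.median(post):.2f}, "
          f"95% CI [{np.quantile(post, 0.025):.2f}, {np.quantile(post, 0.975):.2f}]")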
A high-resolution physically-based global flood hazard map
NASA Astrophysics Data System (ADS)
Kaheil, Y.; Begnudelli, L.; McCollum, J.
2016-12-01
We present results from a physically-based global flood hazard model. The model uses a physically-based hydrologic model to simulate river discharges and a 2D hydrodynamic model to simulate inundation, and is set up to allow large-scale flood hazard application through efficient use of parallel computing. For hydrology, we use the Hillslope River Routing (HRR) model. HRR accounts for surface hydrology using a Green-Ampt parameterization. The model is calibrated against observed discharge data from the Global Runoff Data Centre (GRDC) network, among other publicly available datasets. The parallel-computing framework takes advantage of the river network structure to minimize cross-processor messages and thus significantly increases computational efficiency. For inundation, we implemented a computationally efficient 2D finite-volume model with wetting/drying. The approach consists of simulating floods along the river network by forcing the hydraulic model with the streamflow hydrographs simulated by HRR, scaled up to given return levels, e.g., 100 years. The model is distributed such that each available processor takes the next simulation; given an approximate cost criterion, the simulations are ordered from most demanding to least demanding to ensure that all processors finish almost simultaneously. Upon completing all simulations, the maximum envelope of flood depth is taken to generate the final map. The model is applied globally, with selected results shown from different continents and regions. The maps shown depict flood depth and extent at different return periods. These maps, which are currently available at 3 arc-sec resolution (~90 m), can be made available at higher resolutions where high-resolution DEMs are available. The maps can be utilized by flood risk managers at the national, regional, and even local levels to further understand their flood risk exposure, exercise certain measures of mitigation, and/or transfer the residual risk financially through flood insurance programs.
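The most-demanding-first dispatch described above is essentially greedy longest-processing-time (LPT) scheduling; a minimal sketch, with invented per-simulation cost estimates, follows:

    import heapq

    def schedule_longest_first(costs, n_proc):
        # Greedy LPT: always hand the next-most-demanding simulation to the
        # least-loaded processor, so all processors finish near-simultaneously.
        heap = [(0.0, p) for p in range(n_proc)]      # (current load, processor)
        heapq.heapify(heap)
        assignment = {p: [] for p in range(n_proc)}
        for job, cost in sorted(costs.items(), key=lambda kv: -kv[1]):
            load, p = heapq.heappop(heap)
            assignment[p].append(job)
            heapq.heappush(heap, (load + cost, p))
        return assignment

    # Hypothetical runtime estimates (hours) for per-river inundation runs.
    costs = {"amazon": 40, "ganges": 32, "congo": 30, "mekong": 25,
             "danube": 12, "rhine": 9, "po": 5, "elbe": 4}
    print(schedule_longest_first(costs, n_proc=4))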
DOT National Transportation Integrated Search
1996-11-01
The Highway Economic Requirements System (HERS) is a computer model designed to simulate improvement selection decisions based on the relative benefit-cost merits of alternative improvement options. HERS is intended to estimate national level investm...
Design of high-fidelity haptic display for one-dimensional force reflection applications
NASA Astrophysics Data System (ADS)
Gillespie, Brent; Rosenberg, Louis B.
1995-12-01
This paper discusses the development of a virtual reality platform for the simulation of medical procedures which involve needle insertion into human tissue. The paper's focus is the hardware and software requirements for haptic display of a particular medical procedure known as epidural analgesia. To perform this delicate manual procedure, an anesthesiologist must carefully guide a needle through various layers of tissue using only haptic cues for guidance. As a simplifying aspect for the simulator design, all motions and forces involved in the task occur along a fixed line once insertion begins. To create a haptic representation of this procedure, we have explored both physical modeling and perceptual modeling techniques. A preliminary physical model was built based on CT-scan data of the operative site. A preliminary perceptual model was built based on current training techniques for the procedure provided by a skilled instructor. We compare and contrast these two modeling methods and discuss the implications of each. We select and defend the perceptual model as a superior approach for the epidural analgesia simulator.
Modelling approaches: the case of schizophrenia.
Heeg, Bart M S; Damen, Joep; Buskens, Erik; Caleo, Sue; de Charro, Frank; van Hout, Ben A
2008-01-01
Schizophrenia is a chronic disease characterized by periods of relative stability interrupted by acute episodes (or relapses). The course of the disease may vary considerably between patients. Patient histories show considerable inter- and even intra-individual variability. We provide a critical assessment of the advantages and disadvantages of three modelling techniques that have been used in schizophrenia: decision trees, (cohort and micro-simulation) Markov models and discrete event simulation models. These modelling techniques are compared in terms of building time, data requirements, medico-scientific experience, simulation time, clinical representation, and their ability to deal with patient heterogeneity, the timing of events, prior events, patient interaction, interaction between co-variates and variability (first-order uncertainty). We note that, depending on the research question, the optimal modelling approach should be selected based on the expected differences between the comparators, the number of co-variates, the number of patient subgroups, the interactions between co-variates, and simulation time. Finally, it is argued that in case micro-simulation is required for the cost-effectiveness analysis of schizophrenia treatments, a discrete event simulation model is best suited to accurately capture all of the relevant interdependencies in this chronic, highly heterogeneous disease with limited long-term follow-up data.
A data-driven dynamics simulation framework for railway vehicles
NASA Astrophysics Data System (ADS)
Nie, Yinyu; Tang, Zhao; Liu, Fengjia; Chang, Jian; Zhang, Jianjun
2018-03-01
The finite element (FE) method is essential for simulating vehicle dynamics in fine detail, especially for train crash simulations. However, factors such as the complexity of meshes and the distortion involved in large deformations undermine its calculation efficiency. An alternative method, multi-body (MB) dynamics simulation, provides satisfactory time efficiency but limited accuracy when highly nonlinear dynamic processes are involved. To retain the advantages of both methods, this paper proposes a data-driven simulation framework for the dynamics simulation of railway vehicles. This framework uses machine learning techniques to extract nonlinear features from training data generated by FE simulations, so that specific mesh structures can be represented by a surrogate element (or surrogate elements) replacing the original mechanical elements, and the dynamics simulation can be implemented by co-simulation with the surrogate element(s) embedded into an MB model. The framework consists of a series of techniques including data collection, feature extraction, training data sampling, surrogate element building, and model evaluation and selection. To verify the feasibility of this framework, we present two case studies, a vertical dynamics simulation and a longitudinal dynamics simulation, based on co-simulation with MATLAB/Simulink and Simpack, and a further comparison with a popular data-driven model (the Kriging model) is provided. The simulation results show that using the Legendre polynomial regression model to build surrogate elements can largely cut down simulation time without sacrificing accuracy.
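Building a surrogate element with Legendre polynomial regression, as named in the abstract, can be illustrated in a few lines: the surrogate is fit offline to input-output pairs and then evaluated cheaply inside the MB time loop. The deflection-force response below is invented, not taken from the paper's FE data:

    import numpy as np
    from numpy.polynomial import legendre

    rng = np.random.default_rng(5)

    # Training data standing in for offline FE runs of one mesh structure
    # (input: normalized deflection; output: restoring force, N).
    x = np.linspace(-1.0, 1.0, 40)
    y = 1e4 * (x + 0.3 * x**3) + rng.normal(0.0, 200.0, x.size)

    coef = legendre.legfit(x, y, deg=5)      # fit a Legendre series

    def surrogate_force(deflection):
        # Surrogate element: called inside the MB time loop instead of FE.
        return legendre.legval(deflection, coef)

    print(np.round(surrogate_force(np.array([-0.5, 0.0, 0.5])), 1))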
NASA Astrophysics Data System (ADS)
Jiang, Xue; Lu, Wenxi; Hou, Zeyu; Zhao, Haiqing; Na, Jin
2015-11-01
The purpose of this study was to identify an optimal surfactant-enhanced aquifer remediation (SEAR) strategy for aquifers contaminated by dense non-aqueous phase liquid (DNAPL), based on an ensemble-of-surrogates optimization technique. A saturated heterogeneous medium contaminated by nitrobenzene was selected as the case study. A new kind of surrogate-based SEAR optimization employing an ensemble surrogate (ES) model together with a genetic algorithm (GA) is presented. Four methods, namely radial basis function artificial neural network (RBFANN), kriging (KRG), support vector regression (SVR), and kernel extreme learning machines (KELM), were used to create four individual surrogate models, which were then compared. The comparison enabled us to select the two most accurate models (KELM and KRG) to establish an ES model of the SEAR simulation model, and the developed ES model was compared with the four stand-alone surrogate models. The average relative error of the average nitrobenzene removal rates between the ES model and the simulation model for 20 test samples was 0.8%, indicating high approximation accuracy and showing that the ES model provides more accurate predictions than the stand-alone surrogate models. A nonlinear optimization model was then formulated for the minimum cost, with the developed ES model embedded as a constraint, and the GA was used to solve the optimization model and provide the optimal SEAR strategy. The developed ensemble surrogate-optimization approach was effective in seeking a cost-effective SEAR strategy for heterogeneous DNAPL-contaminated sites. This research is expected to enrich and develop the theoretical and technical implications for the analysis of remediation strategy optimization of DNAPL-contaminated aquifers.
Speaker-independent phoneme recognition with a binaural auditory image model
NASA Astrophysics Data System (ADS)
Francis, Keith Ivan
1997-09-01
This dissertation presents phoneme recognition techniques based on a binaural fusion of outputs of the auditory image model and subsequent azimuth-selective phoneme recognition in a noisy environment. Background information concerning speech variations, phoneme recognition, current binaural fusion techniques and auditory modeling issues is explained. The research is constrained to sources in the frontal azimuthal plane of a simulated listener. A new method based on coincidence detection of neural activity patterns from the auditory image model of Patterson is used for azimuth-selective phoneme recognition. The method is tested in various levels of noise and the results are reported in contrast to binaural fusion methods based on various forms of correlation to demonstrate the potential of coincidence-based binaural phoneme recognition. This method overcomes the smearing of fine speech detail typical of correlation-based methods. Nevertheless, coincidence is able to measure the similarity of left and right inputs and fuse them into useful feature vectors for phoneme recognition in noise.
Diagnosing Model Errors in Simulations of Solar Radiation on Inclined Surfaces: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie, Yu; Sengupta, Manajit
2016-06-01
Transposition models have been widely used in the solar energy industry to simulate solar radiation on inclined PV panels. Following numerous studies comparing the performance of transposition models, this paper aims to understand the quantitative uncertainty in the state-of-the-art transposition models and the sources leading to the uncertainty. Our results suggest that an isotropic transposition model developed by Badescu substantially underestimates diffuse plane-of-array (POA) irradiances when diffuse radiation is perfectly isotropic. In the empirical transposition models, the selection of empirical coefficients and land surface albedo can both result in uncertainty in the output. This study can be used as a guide for future development of physics-based transposition models.
Diagnosing Model Errors in Simulation of Solar Radiation on Inclined Surfaces
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie, Yu; Sengupta, Manajit
2016-11-21
Transposition models have been widely used in the solar energy industry to simulate solar radiation on inclined PV panels. Following numerous studies comparing the performance of transposition models, this paper aims to understand the quantitative uncertainty in the state-of-the-art transposition models and the sources leading to the uncertainty. Our results show significant differences between two highly used isotropic transposition models, with one substantially underestimating the diffuse plane-of-array (POA) irradiances when diffuse radiation is perfectly isotropic. In the empirical transposition models, the selection of empirical coefficients and land surface albedo can both result in uncertainty in the output. This study can be used as a guide for future development of physics-based transposition models.
NASA Astrophysics Data System (ADS)
Frollo, Ivan; Krafčík, Andrej; Andris, Peter; Přibil, Jiří; Dermek, Tomáš
2015-12-01
Circular samples are frequent objects of "in-vitro" investigation using imaging methods based on magnetic resonance principles. The goal of our investigation is the imaging of thin planar layers without using the slice-selection procedure (thus plain 2D imaging), or the imaging of selected layers of samples in circular vessels or Eppendorf tubes, which necessarily uses the "slice selection" procedure. Although standard imaging methods were used, some specific issues arise when mathematical modeling of these procedures is introduced. In the paper, several mathematical models are presented and compared with real experimental results. Circular magnetic samples were placed in the homogeneous magnetic field of a low-field imager based on nuclear magnetic resonance. For experimental verification, a 0.178 Tesla ESAOTE Opera MRI imager was used.
NASA Astrophysics Data System (ADS)
Lucas, Charles E.; Walters, Eric A.; Jatskevich, Juri; Wasynczuk, Oleg; Lamm, Peter T.
2003-09-01
In this paper, a new technique useful for the numerical simulation of large-scale systems is presented. This approach enables the overall system simulation to be formed by the dynamic interconnection of the various interdependent simulations, each representing a specific component or subsystem such as control, electrical, mechanical, hydraulic, or thermal. Each simulation may be developed separately using possibly different commercial-off-the-shelf simulation programs thereby allowing the most suitable language or tool to be used based on the design/analysis needs. These subsystems communicate the required interface variables at specific time intervals. A discussion concerning the selection of appropriate communication intervals is presented herein. For the purpose of demonstration, this technique is applied to a detailed simulation of a representative aircraft power system, such as that found on the Joint Strike Fighter (JSF). This system is comprised of ten component models each developed using MATLAB/Simulink, EASY5, or ACSL. When the ten component simulations were distributed across just four personal computers (PCs), a greater than 15-fold improvement in simulation speed (compared to the single-computer implementation) was achieved.
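The role of the communication interval can be sketched with two toy coupled subsystems that integrate independently with small internal steps and exchange interface variables only at fixed intervals (zero-order hold in between); the dynamics and interval values are invented for illustration:

    # Two coupled first-order subsystems (say, electrical and thermal),
    # each integrated separately; interface variables are exchanged only
    # every DT_COM and held constant in between (zero-order hold).
    DT_COM, N_SUB, T_END = 0.01, 10, 1.0     # communication interval, sub-steps
    DT_INT = DT_COM / N_SUB                  # internal integration step

    x1, x2 = 1.0, 0.0
    u1, u2 = x2, x1                          # interface variables (held)
    t = 0.0
    while t < T_END:
        for _ in range(N_SUB):               # subsystem 1 integrates alone
            x1 += DT_INT * (-5.0 * x1 + 2.0 * u1)
        for _ in range(N_SUB):               # subsystem 2 integrates alone
            x2 += DT_INT * (-1.0 * x2 + 0.5 * u2)
        u1, u2 = x2, x1                      # exchange at the interval boundary
        t += DT_COM
    print(round(x1, 4), round(x2, 4))

Shrinking DT_COM tightens the coupling at more communication cost; making it too large lets the held interface values lag the true trajectories, which is exactly the trade-off behind selecting appropriate communication intervals.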
Simulation-Based Bronchoscopy Training
Kennedy, Cassie C.; Maldonado, Fabien
2013-01-01
Background: Simulation-based bronchoscopy training is increasingly used, but effectiveness remains uncertain. We sought to perform a comprehensive synthesis of published work on simulation-based bronchoscopy training. Methods: We searched MEDLINE, EMBASE, CINAHL, PsycINFO, ERIC, Web of Science, and Scopus for eligible articles through May 11, 2011. We included all original studies involving health professionals that evaluated, in comparison with no intervention or an alternative instructional approach, simulation-based training for flexible or rigid bronchoscopy. Study selection and data abstraction were performed independently and in duplicate. We pooled results using random effects meta-analysis. Results: From an initial pool of 10,903 articles, we identified 17 studies evaluating simulation-based bronchoscopy training. In comparison with no intervention, simulation training was associated with large benefits on skills and behaviors (pooled effect size, 1.21 [95% CI, 0.82-1.60]; n = 8 studies) and moderate benefits on time (0.62 [95% CI, 0.12-1.13]; n = 7). In comparison with clinical instruction, behaviors with real patients showed nonsignificant effects favoring simulation for time (0.61 [95% CI, −1.47 to 2.69]) and process (0.33 [95% CI, −1.46 to 2.11]) outcomes (n = 2 studies each), although variation in training time might account for these differences. Four studies compared alternate simulation-based training approaches. Inductive analysis to inform instructional design suggested that longer or more structured training is more effective, authentic clinical context adds value, and animal models and plastic part-task models may be superior to more costly virtual-reality simulators. Conclusions: Simulation-based bronchoscopy training is effective in comparison with no intervention. Comparative effectiveness studies are few. PMID:23370487
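Pooled effects of the kind quoted above come from random-effects meta-analysis; a compact DerSimonian-Laird implementation is sketched below, with invented per-study effect sizes and variances rather than the review's actual data:

    import numpy as np

    def dersimonian_laird(effects, variances):
        # Random-effects pooled effect size with a DerSimonian-Laird
        # estimate of the between-study variance tau^2.
        e, v = np.asarray(effects), np.asarray(variances)
        w = 1.0 / v
        fixed = np.sum(w * e) / np.sum(w)
        q = np.sum(w * (e - fixed) ** 2)                 # Cochran's Q
        c = np.sum(w) - np.sum(w**2) / np.sum(w)
        tau2 = max(0.0, (q - (len(e) - 1)) / c)
        w_star = 1.0 / (v + tau2)
        pooled = np.sum(w_star * e) / np.sum(w_star)
        se = np.sqrt(1.0 / np.sum(w_star))
        return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

    # Hypothetical per-study standardized mean differences and variances.
    print(dersimonian_laird([1.4, 0.9, 1.6, 1.1], [0.10, 0.08, 0.20, 0.12]))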
Gamado, Kokouvi; Marion, Glenn; Porphyre, Thibaud
2017-01-01
Livestock epidemics have the potential to give rise to significant economic, welfare, and social costs. Incursions of emerging and re-emerging pathogens may lead to small and repeated outbreaks. Analysis of the resulting data is statistically challenging but can inform disease preparedness, reducing potential future losses. We present a framework for spatial risk assessment of disease incursions based on data from small localized historic outbreaks. We focus on between-farm spread of livestock pathogens and illustrate our methods by application to data on the small outbreak of Classical Swine Fever (CSF) that occurred in 2000 in East Anglia, UK. We apply models based on continuous-time semi-Markov processes, using data-augmentation Markov Chain Monte Carlo techniques within a Bayesian framework to infer disease dynamics and detection from incompletely observed outbreaks. The spatial transmission kernel describing pathogen spread between farms, and the distribution of times between infection and detection, are estimated alongside unobserved exposure times. Our results demonstrate that inference is reliable even for relatively small outbreaks when the data-generating model is known. However, associated risk assessments depend strongly on the form of the fitted transmission kernel. Therefore, for real applications, methods are needed to select the most appropriate model in light of the data. We assess the standard Deviance Information Criterion (DIC) model selection tool and recently introduced latent-residual methods of model assessment in selecting the functional form of the spatial transmission kernel. These methods are applied to the CSF data, and tested in simulated scenarios which represent field data but assume the data-generation mechanism is known. Analysis of simulated scenarios shows that latent-residual methods enable reliable selection of the transmission kernel even for small outbreaks, whereas the DIC is less reliable. Moreover, compared with the DIC, model choice based on latent-residual assessment correlated better with predicted risk. PMID:28293559
Marshall, Deborah A; Burgos-Liz, Lina; IJzerman, Maarten J; Crown, William; Padula, William V; Wong, Peter K; Pasupathy, Kalyan S; Higashi, Mitchell K; Osgood, Nathaniel D
2015-03-01
In a previous report, the ISPOR Task Force on Dynamic Simulation Modeling Applications in Health Care Delivery Research Emerging Good Practices introduced the fundamentals of dynamic simulation modeling and identified the types of health care delivery problems for which dynamic simulation modeling can be used more effectively than other modeling methods. The hierarchical relationship between the health care delivery system, providers, patients, and other stakeholders exhibits a level of complexity that ought to be captured using dynamic simulation modeling methods. As a tool to help researchers decide whether dynamic simulation modeling is an appropriate method for modeling the effects of an intervention on a health care system, we presented the System, Interactions, Multilevel, Understanding, Loops, Agents, Time, Emergence (SIMULATE) checklist consisting of eight elements. This report builds on the previous work, systematically comparing each of the three most commonly used dynamic simulation modeling methods-system dynamics, discrete-event simulation, and agent-based modeling. We review criteria for selecting the most suitable method depending on 1) the purpose-type of problem and research questions being investigated, 2) the object-scope of the model, and 3) the method to model the object to achieve the purpose. Finally, we provide guidance for emerging good practices for dynamic simulation modeling in the health sector, covering all aspects, from the engagement of decision makers in the model design through model maintenance and upkeep. We conclude by providing some recommendations about the application of these methods to add value to informed decision making, with an emphasis on stakeholder engagement, starting with the problem definition. Finally, we identify areas in which further methodological development will likely occur given the growing "volume, velocity and variety" and availability of "big data" to provide empirical evidence and techniques such as machine learning for parameter estimation in dynamic simulation models. Upon reviewing this report in addition to using the SIMULATE checklist, the readers should be able to identify whether dynamic simulation modeling methods are appropriate to address the problem at hand and to recognize the differences of these methods from those of other, more traditional modeling approaches such as Markov models and decision trees. This report provides an overview of these modeling methods and examples of health care system problems in which such methods have been useful. The primary aim of the report was to aid decisions as to whether these simulation methods are appropriate to address specific health systems problems. The report directs readers to other resources for further education on these individual modeling methods for system interventions in the emerging field of health care delivery science and implementation. Copyright © 2015. Published by Elsevier Inc.
Advanced Computational Methods for High-accuracy Refinement of Protein Low-quality Models
NASA Astrophysics Data System (ADS)
Zang, Tianwu
Predicting the 3-dimensional structure of a protein has been a major interest in modern computational biology. While many successful methods can generate models within 3-5 Å root-mean-square deviation (RMSD) of the solution, progress in refining these models has been slow. It is therefore urgently necessary to develop effective methods to bring low-quality models into higher-accuracy ranges (e.g., less than 2 Å RMSD). In this thesis, I present several novel computational methods to address the high-accuracy refinement problem. First, an enhanced sampling method, named parallel continuous simulated tempering (PCST), is developed to accelerate molecular dynamics (MD) simulation. Second, two energy-biasing methods, the Structure-Based Model (SBM) and the Ensemble-Based Model (EBM), are introduced to perform targeted sampling around important conformations. Third, a three-step method is developed to blindly select high-quality models along the MD simulation. These methods work together to achieve significant refinement of low-quality models without any knowledge of the solution. The effectiveness of these methods is examined in different applications. Using the PCST-SBM method, models with higher global distance test scores (GDT_TS) are generated and selected in MD simulations of 18 targets from the refinement category of the 10th Critical Assessment of Structure Prediction (CASP10). In addition, the refinement test of two CASP10 targets using the PCST-EBM method indicates that EBM may bring the initial model to even higher quality levels. Furthermore, a multi-round refinement protocol of PCST-SBM improves the model quality of a protein to a level sufficiently high for molecular replacement in X-ray crystallography. Our results justify the crucial position of enhanced sampling in protein structure prediction and demonstrate that considerable improvement of low-accuracy structures is still achievable with current force fields.
NASA Astrophysics Data System (ADS)
Czerepicki, A.; Koniak, M.
2017-06-01
The paper presents a method of modelling the aging processes of lithium-ion batteries, its implementation as a computer application, and results for battery state estimation. The authors use a previously developed behavioural battery model, built from battery operating characteristics obtained by experiment. This model was implemented in the form of a computer program using a database to store the battery characteristics. The battery aging process is a new, extended functionality of the model. The computer simulation algorithm uses real measurements of battery capacity as a function of the number of battery charge and discharge cycles. The simulation makes it possible to take into account incomplete charge or discharge cycles, which are characteristic of electrically powered transport. The developed model was used to simulate battery state estimation for different load profiles, obtained by measuring the movement of selected means of transport.
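One plausible reading of the capacity-versus-cycles functionality, sketched below: interpolate a measured fade curve at an equivalent-full-cycle count accumulated from incomplete cycles. The curve values and duty profile are invented, and the paper's behavioural model and database layer are not reproduced:

    import numpy as np

    # Measured aging curve: remaining capacity (Ah) vs. full cycle count
    # (values invented for illustration).
    cycles_meas = np.array([0, 200, 400, 600, 800, 1000])
    cap_meas = np.array([40.0, 38.8, 37.9, 36.7, 35.2, 33.5])

    def remaining_capacity(equiv_full_cycles):
        # Interpolate the aging curve at a (possibly fractional) cycle count.
        return np.interp(equiv_full_cycles, cycles_meas, cap_meas)

    # Incomplete cycles, typical of electric transport duty profiles, are
    # accumulated as fractions of a full cycle (depth-of-discharge based).
    daily_dods = [0.3, 0.5, 0.2, 0.8, 0.4]   # five hypothetical trips per day
    efc = 400 * sum(daily_dods)              # equivalent full cycles, 400 days
    print(f"after {efc:.0f} equivalent full cycles: "
          f"{remaining_capacity(efc):.2f} Ah remaining")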
Platania, Chiara Bianca Maria; Salomone, Salvatore; Leggio, Gian Marco; Drago, Filippo; Bucolo, Claudio
2012-01-01
Dopamine (DA) receptors, a class of G-protein coupled receptors (GPCRs), have been targeted for drug development for the treatment of neurological, psychiatric and ocular disorders. The lack of structural information about GPCRs and their ligand complexes has prompted the development of homology models of these proteins aimed at structure-based drug design. Crystal structure of human dopamine D3 (hD3) receptor has been recently solved. Based on the hD3 receptor crystal structure we generated dopamine D2 and D3 receptor models and refined them with molecular dynamics (MD) protocol. Refined structures, obtained from the MD simulations in membrane environment, were subsequently used in molecular docking studies in order to investigate potential sites of interaction. The structure of hD3 and hD2L receptors was differentiated by means of MD simulations and D3 selective ligands were discriminated, in terms of binding energy, by docking calculation. Robust correlation of computed and experimental Ki was obtained for hD3 and hD2L receptor ligands. In conclusion, the present computational approach seems suitable to build and refine structure models of homologous dopamine receptors that may be of value for structure-based drug discovery of selective dopaminergic ligands. PMID:22970199
Statistical power calculations for mixed pharmacokinetic study designs using a population approach.
Kloprogge, Frank; Simpson, Julie A; Day, Nicholas P J; White, Nicholas J; Tarning, Joel
2014-09-01
Simultaneous modelling of dense and sparse pharmacokinetic data is possible with a population approach. To determine the number of individuals required to detect the effect of a covariate, simulation-based power calculation methodologies can be employed. The Monte Carlo Mapped Power method (a simulation-based power calculation methodology using the likelihood ratio test) was extended in the current study to perform sample size calculations for mixed pharmacokinetic studies (i.e. both sparse and dense data collection). A workflow guiding an easy and straightforward pharmacokinetic study design, considering also the cost-effectiveness of alternative study designs, was used in this analysis. Initially, data were simulated for a hypothetical drug and then for the anti-malarial drug, dihydroartemisinin. Two datasets (sampling design A: dense; sampling design B: sparse) were simulated using a pharmacokinetic model that included a binary covariate effect and subsequently re-estimated using (1) the same model and (2) a model not including the covariate effect in NONMEM 7.2. Power calculations were performed for varying numbers of patients with sampling designs A and B. Study designs with statistical power >80% were selected and further evaluated for cost-effectiveness. The simulation studies of the hypothetical drug and the anti-malarial drug dihydroartemisinin demonstrated that the simulation-based power calculation methodology, based on the Monte Carlo Mapped Power method, can be utilised to evaluate and determine the sample size of mixed (part sparsely and part densely sampled) study designs. The developed method can contribute to the design of robust and efficient pharmacokinetic studies.
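The simulation-based power logic can be miniaturized as follows: simulate many datasets containing a binary covariate effect, fit models with and without the effect, and count likelihood-ratio rejections. Plain OLS stands in here for the NONMEM population pharmacokinetic models, so this is only a schematic of the workflow, with invented effect and noise sizes:

    import numpy as np
    from scipy.stats import chi2

    rng = np.random.default_rng(6)

    def lrt_detects_covariate(n, beta_cov=0.5, alpha=0.05):
        # One simulated study: a binary covariate shifting the response.
        x = rng.integers(0, 2, n).astype(float)
        y = 1.0 + beta_cov * x + rng.normal(0.0, 1.0, n)
        rss1 = np.sum((y - np.poly1d(np.polyfit(x, y, 1))(x)) ** 2)  # with effect
        rss0 = np.sum((y - y.mean()) ** 2)                            # without
        lrt = n * np.log(rss0 / rss1)               # Gaussian likelihood ratio
        return lrt > chi2.ppf(1.0 - alpha, df=1)

    for n in (20, 40, 80, 160):
        power = np.mean([lrt_detects_covariate(n) for _ in range(500)])
        print(f"n={n:>3}: power ~ {power:.2f}")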
Constructing Agent Model for Virtual Training Systems
NASA Astrophysics Data System (ADS)
Murakami, Yohei; Sugimoto, Yuki; Ishida, Toru
Constructing highly realistic agents is essential if agents are to be employed in virtual training systems. In training for collaboration based on face-to-face interaction, the generation of emotional expressions is one key. In training for guidance based on one-to-many interaction, such as direction giving for evacuations, emotional expressions must be supplemented by diverse agent behaviors to make the training realistic. To reproduce diverse behavior, we characterize agents by using various combinations of operation rules instantiated by the user operating the agent. To accomplish this goal, we introduce a user modeling method based on participatory simulations. These simulations enable us to acquire the information observed by each user in the simulation and the operating history. Using these data and domain knowledge including known operation rules, we can generate an explanation for each behavior. Moreover, applying hypothetical reasoning, which offers consistent selection of hypotheses, to the generation of explanations allows us to use otherwise incompatible operation rules as domain knowledge. To validate the proposed modeling method, we apply it to the acquisition of an evacuee's model in a fire-drill experiment. We successfully acquire a subject's model corresponding to the results of an interview with the subject.
NASA Astrophysics Data System (ADS)
Mohammed, K.; Islam, A. S.; Khan, M. J. U.; Das, M. K.
2017-12-01
With the large number of hydrologic models presently available, along with global weather and geographic datasets, the streamflows of almost any river in the world can be modeled. If a reasonable amount of observed data from that river is available, simulations of high accuracy can sometimes be performed after calibrating the model parameters against those observed data through inverse modeling. Although such calibrated models can simulate the general trend or mean of the observed flows very well, more often than not they fail to adequately simulate the extreme flows. This causes difficulty in tasks such as generating reliable projections of future changes in extreme flows due to climate change, which is obviously an important task given that floods and droughts are closely connected to people's lives and livelihoods. We propose an approach where the outputs of a physically-based hydrologic model are used as input to a machine learning model to better simulate the extreme flows. To demonstrate this offline-coupling approach, the Soil and Water Assessment Tool (SWAT) was selected as the physically-based hydrologic model, the Artificial Neural Network (ANN) as the machine learning model, and the Ganges-Brahmaputra-Meghna (GBM) river system as the study area. The GBM river system, located in South Asia, is the third largest in the world in terms of freshwater generated and forms the largest delta in the world. The flows of the GBM rivers were simulated separately in order to test the performance of this proposed approach in accurately simulating the extreme flows generated by different basins that vary in size, climate, hydrology and anthropogenic intervention on stream networks. Results show that by post-processing the simulated flows of the SWAT models with ANN models, simulations of extreme flows can be significantly improved. The mean absolute errors in simulating annual maximum/minimum daily flows were reduced from 4967 cusecs to 1294 cusecs for the Ganges, from 5695 cusecs to 2115 cusecs for the Brahmaputra, and from 689 cusecs to 321 cusecs for the Meghna. Using this approach, simulations of hydrologic variables other than streamflow can also be improved, given that a decent amount of observed data for that variable is available.
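A small sketch of this offline coupling, with synthetic data standing in for both the SWAT output and the observations (scikit-learn's MLPRegressor is an assumed stand-in for the authors' ANN):

    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(7)

    # Invented example: "simulated" flows systematically miss the extremes
    # of the "observed" flows; the ANN learns the correction.
    sim = rng.gamma(shape=2.0, scale=3000.0, size=4000)          # cusecs
    obs = 1.3 * sim**1.05 / sim.mean()**0.05 + rng.normal(0, 300, sim.size)

    X = np.column_stack([sim, np.log1p(sim)])                    # simple features
    train, test = slice(0, 3000), slice(3000, None)

    ann = make_pipeline(StandardScaler(),
                        MLPRegressor(hidden_layer_sizes=(16, 16),
                                     max_iter=2000, random_state=0))
    ann.fit(X[train], obs[train])
    corrected = ann.predict(X[test])

    print(f"MAE raw: {np.abs(sim[test] - obs[test]).mean():.0f} cusecs, "
          f"after ANN: {np.abs(corrected - obs[test]).mean():.0f} cusecs")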
Jarukanont, Daungruthai; Coimbra, João T S; Bauerhenne, Bernd; Fernandes, Pedro A; Patel, Shekhar; Ramos, Maria J; Garcia, Martin E
2014-10-21
We report on the viability of breaking selected bonds in biological systems using tailored electromagnetic radiation. We first demonstrate, by performing large-scale simulations, that pulsed electric fields cannot produce selective bond breaking. Then, we present a theoretical framework for describing selective energy concentration on particular bonds of biomolecules upon application of tailored electromagnetic radiation. The theory is based on the mapping of biomolecules to a set of coupled harmonic oscillators and on optimal control schemes to describe optimization of temporal shape, the phase and polarization of the external radiation. We have applied this theory to demonstrate the possibility of selective bond breaking in the active site of bacterial DNA topoisomerase. For this purpose, we have focused on a model that was built based on a case study. Results are given as a proof of concept.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jager, Yetta
2005-01-01
This study uses a genetic individual-based model of white sturgeon (Acipenser transmontanus) populations in a river to examine the genetic and demographic trade-offs associated with operating a conservation hatchery. Simulation experiments evaluated three management practices: (i) setting quotas to equalize family contributions in an effort to prevent genetic swamping, (ii) an adaptive management scheme that interrupts stocking when introgression exceeds a specified threshold, and (iii) alternative broodstock selection strategies that influence domestication. The first set of simulations, designed to evaluate equalizing the genetic contribution of families, did not show the genetic benefits expected. The second set of simulations showed that simulated adaptive management was not successful in controlling introgression over the long term, especially with uncertain feedback. The third set of simulations compared the effects of three alternative broodstock selection strategies on domestication for hypothetical traits controlling early density-dependent survival. Simulated aquaculture selected for a density-tolerant phenotype when broodstock were taken from a genetically connected population. Using broodstock from an isolated population (i.e., above an upstream barrier or in a different watershed) was more effective at preventing domestication than using wild broodstock from a connected population.
Flexible Environments for Grand-Challenge Simulation in Climate Science
NASA Astrophysics Data System (ADS)
Pierrehumbert, R.; Tobis, M.; Lin, J.; Dieterich, C.; Caballero, R.
2004-12-01
Current climate models are monolithic codes, generally in Fortran, aimed at high-performance simulation of the modern climate. Though they adequately serve their designated purpose, they present major barriers to application in other problems. Tailoring them to paleoclimate or planetary simulations, for instance, takes months of work. Theoretical studies, where one may want to remove selected processes or break feedback loops, are similarly hindered. Further, current climate models are of little value in education, since the implementation of textbook concepts and equations in the code is obscured by technical detail. The Climate Systems Center at the University of Chicago seeks to overcome these limitations by bringing modern object-oriented design into the business of climate modeling. Our ultimate goal is to produce an end-to-end modeling environment capable of configuring anything from a simple single-column radiative-convective model to a full 3-D coupled climate model using a uniform, flexible interface. Technically, the modeling environment is implemented as a Python-based software component toolkit: key number-crunching procedures are implemented as discrete, compiled-language components 'glued' together and coordinated by Python, combining the high performance of compiled languages with the flexibility and extensibility of Python. We are working incrementally towards this final objective along a series of distinct, complementary lines. We will present an overview of these activities, including PyOM, a Python-based finite-difference ocean model allowing run-time selection of different Arakawa grids and physical parameterizations; CliMT, an atmospheric modeling toolkit providing a library of 'legacy' radiative, convective and dynamical modules which can be knitted into dynamical models; and PyCCSM, a version of NCAR's Community Climate System Model in which the coupler and run-control architecture are re-implemented in Python, augmenting its flexibility and adaptability.
PopGen Fishbowl: A Free Online Simulation Model of Microevolutionary Processes
ERIC Educational Resources Information Center
Jones, Thomas C.; Laughlin, Thomas F.
2010-01-01
Natural selection and other components of evolutionary theory are known to be particularly challenging concepts for students to understand. To help illustrate these concepts, we developed a simulation model of microevolutionary processes. The model features all the components of Hardy-Weinberg theory, with population size, selection, gene flow,…
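A simulation in this spirit takes only a few lines: the sketch below tracks one allele frequency under drift (finite population), viability selection, and gene flow, i.e., the Hardy-Weinberg components named above; parameter values are arbitrary and the PopGen Fishbowl internals are not reproduced:

    import numpy as np

    rng = np.random.default_rng(8)

    def simulate(p0=0.5, n_pop=100, n_gen=50, w=(1.0, 1.0, 0.8), m=0.01, p_mig=0.5):
        # w = relative fitnesses of genotypes (AA, Aa, aa); m = migration rate
        # from a source population with allele frequency p_mig.
        p, traj = p0, [p0]
        for _ in range(n_gen):
            # Selection acting on Hardy-Weinberg genotype frequencies.
            f = np.array([p * p * w[0], 2 * p * (1 - p) * w[1], (1 - p) ** 2 * w[2]])
            p_sel = (f[0] + 0.5 * f[1]) / f.sum()
            p_mix = (1 - m) * p_sel + m * p_mig               # gene flow
            p = rng.binomial(2 * n_pop, p_mix) / (2 * n_pop)  # drift
            traj.append(p)
        return traj

    print(np.round(simulate()[::10], 2))   # allele A rises under selection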
Firefly as a novel swarm intelligence variable selection method in spectroscopy.
Goodarzi, Mohammad; dos Santos Coelho, Leandro
2014-12-10
A critical step in multivariate calibration is wavelength selection, which is used to build models with better prediction performance when applied to spectral data. Many feature selection techniques have been developed to date. Among the different types of feature selection techniques, those based on swarm intelligence optimization methodologies are particularly interesting, since they simulate animal and insect behavior to, e.g., find the shortest path between a food source and the nest. The decision is made by a crowd, leading to a more robust model that is less prone to falling into local minima during the optimization cycle. This paper presents a novel feature selection approach for spectroscopic data, leading to more robust calibration models. The performance of the firefly algorithm, a swarm intelligence paradigm, was evaluated and compared with the genetic algorithm and particle swarm optimization. All three techniques were coupled with partial least squares (PLS) and applied to three spectroscopic data sets. They demonstrated improved prediction results compared to a PLS model built using all wavelengths. Results show that the firefly algorithm, as a novel swarm paradigm, leads to a smaller number of selected wavelengths while the prediction performance of the resulting PLS model stays the same. Copyright © 2014. Published by Elsevier B.V.
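A condensed sketch of firefly-based wavelength selection coupled with PLS, under these assumptions: fireflies live in [0,1]^p, channels above 0.5 count as selected, and brightness is the cross-validated R² of a PLS model on the selected channels. The toy spectra and all algorithm constants are invented; this is not the authors' implementation:

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(9)

    # Toy "spectra": 100 samples x 50 wavelengths, 5 informative channels.
    X = rng.normal(size=(100, 50))
    y = X[:, [3, 11, 22, 37, 44]] @ rng.uniform(0.5, 1.5, 5) + rng.normal(0, 0.3, 100)

    def brightness(pos):
        # Fitness of a firefly: CV R^2 of PLS on its selected wavelengths.
        mask = pos > 0.5
        if mask.sum() < 2:
            return -1.0
        return cross_val_score(PLSRegression(n_components=2), X[:, mask], y, cv=5).mean()

    n_ff, beta0, gamma, alpha = 10, 1.0, 0.1, 0.1
    pos = rng.uniform(0.0, 1.0, (n_ff, X.shape[1]))
    light = np.array([brightness(p) for p in pos])

    for it in range(15):
        for i in range(n_ff):
            for j in range(n_ff):
                if light[j] > light[i]:       # move firefly i toward brighter j
                    r2 = np.sum((pos[i] - pos[j]) ** 2)
                    step = beta0 * np.exp(-gamma * r2) * (pos[j] - pos[i])
                    pos[i] = np.clip(pos[i] + step
                                     + alpha * rng.uniform(-0.5, 0.5, X.shape[1]), 0, 1)
                    light[i] = brightness(pos[i])

    best = pos[light.argmax()] > 0.5
    print("selected wavelengths:", np.nonzero(best)[0], f"CV R^2 ~ {light.max():.2f}")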
Variability-aware compact modeling and statistical circuit validation on SRAM test array
NASA Astrophysics Data System (ADS)
Qiao, Ying; Spanos, Costas J.
2016-03-01
Variability modeling at the compact transistor model level can enable statistically optimized designs in view of limitations imposed by the fabrication technology. In this work we propose a variability-aware compact-model characterization methodology based on stepwise parameter selection. Transistor I-V measurements are obtained from a bit-transistor-accessible SRAM test array fabricated using a collaborating foundry's 28 nm FDSOI technology. Our in-house customized Monte Carlo simulation bench can incorporate these statistical compact models, and simulation results on SRAM writability performance closely match the measured distributions. Our proposed statistical compact-model parameter extraction methodology also has the potential to predict non-Gaussian behavior in statistical circuit performance through mixtures of Gaussian distributions.
Considerations for Reporting Finite Element Analysis Studies in Biomechanics
Erdemir, Ahmet; Guess, Trent M.; Halloran, Jason; Tadepalli, Srinivas C.; Morrison, Tina M.
2012-01-01
Simulation-based medicine and the development of complex computer models of biological structures are becoming ubiquitous for advancing biomedical engineering and clinical research. Finite element analysis (FEA) has been widely used in the last few decades to understand and predict biomechanical phenomena. Modeling and simulation approaches in biomechanics are highly interdisciplinary, involving novice and skilled developers in all areas of biomedical engineering and biology. While recent advances in model development and simulation platforms offer a wide range of tools to investigators, the decision-making process during modeling and simulation has become more opaque. Hence, the reliability of such models used for medical decision making and for driving multiscale analysis comes into question. Establishing guidelines for model development and dissemination is a daunting task, particularly with the complex and convoluted models used in FEA. Nonetheless, if better reporting can be established, researchers will have a better understanding of a model’s value and the potential for reusability through sharing will be bolstered. Thus, the goal of this document is to identify resources and recommended reporting parameters for FEA studies in biomechanics. These entail various levels of reporting parameters for model identification, model structure, simulation structure, verification, validation, and availability. While we recognize that it may not be possible to provide and detail all of the reporting considerations presented, it is possible to establish a level of confidence with selective use of these parameters. More detailed reporting, however, can establish an explicit outline of the decision-making process in simulation-based analysis for enhanced reproducibility, reusability, and sharing. PMID:22236526
Similarity Theory of Withdrawn Water Temperature Experiment
2015-01-01
Selective withdrawal from a thermally stratified reservoir has been widely utilized in managing reservoir water withdrawal. Besides theoretical analysis and numerical simulation, model tests are also necessary in studying the temperature of withdrawn water. However, information on the similarity theory of the withdrawn water temperature model remains lacking. Considering the flow features of selective withdrawal, the similarity theory of the withdrawn water temperature model was analyzed theoretically based on the modification of the governing equations, the Boussinesq approximation, and some simplifications. The similarity conditions between the model and the prototype were suggested. The conversion of withdrawn water temperature between the model and the prototype was proposed. Meanwhile, the fundamental theory of temperature distribution conversion was proposed for the first time; it can significantly improve experimental efficiency when the basic temperature of the model differs from that of the prototype. Based on the similarity theory, an experiment on withdrawn water temperature was performed and verified by a numerical method. PMID:26065020
A Biologically Inspired Computational Model of Basal Ganglia in Action Selection.
Baston, Chiara; Ursino, Mauro
2015-01-01
The basal ganglia (BG) are a subcortical structure implicated in action selection. The aim of this work is to present a new cognitive neuroscience model of the BG, which aspires to represent a parsimonious balance between simplicity and completeness. The model includes the 3 main pathways operating in the BG circuitry, that is, the direct (Go), indirect (NoGo), and hyperdirect pathways. The main original aspects, compared with previous models, are the use of a two-term Hebb rule to train synapses in the striatum, based exclusively on neuronal activity changes caused by dopamine peaks or dips, and the role of the cholinergic interneurons (affected by dopamine themselves) during learning. Some examples are displayed, concerning a few paradigmatic cases: action selection in basal conditions, action selection in the presence of a strong conflict (where the role of the hyperdirect pathway emerges), synapse changes induced by phasic dopamine, and learning new actions based on a previous history of rewards and punishments. Finally, some simulations show the model working under conditions of altered dopamine levels, to illustrate pathological cases (dopamine depletion in parkinsonian subjects or dopamine hypermedication). Due to its parsimonious approach, the model may represent a straightforward tool to analyze BG functionality in behavioral experiments.
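The abstract does not reproduce the paper's exact learning rule; the fragment below is only a schematic of the general idea it describes: striatal weight changes gated by phasic dopamine deviations from a baseline, with opposite signs for the Go and NoGo pathways. All constants and the action-selection step are invented for illustration.

import numpy as np

rng = np.random.default_rng(2)

def hebb_update(w_go, w_nogo, pre, post, da, da_base=0.5, lr=0.05):
    """Schematic dopamine-gated Hebb rule: phasic dopamine above baseline
    strengthens active Go synapses and weakens the corresponding NoGo ones;
    dips below baseline do the reverse. All constants are invented."""
    delta = da - da_base
    w_go += lr * delta * np.outer(post, pre)
    w_nogo -= lr * delta * np.outer(post, pre)
    return np.clip(w_go, 0, 1), np.clip(w_nogo, 0, 1)

n_in, n_act = 4, 2
w_go = rng.random((n_act, n_in)) * 0.1
w_nogo = rng.random((n_act, n_in)) * 0.1
for trial in range(200):
    pre = rng.random(n_in)                        # cortical input pattern
    act = int(np.argmax((w_go - w_nogo) @ pre))   # crude action selection
    reward = 1.0 if act == 0 else 0.0             # action 0 is always rewarded
    da = 0.5 + 0.5 * (reward - 0.5)               # dopamine peak or dip
    post = np.eye(n_act)[act]                     # only the chosen action's units fire
    w_go, w_nogo = hebb_update(w_go, w_nogo, pre, post, da)
print("net Go-NoGo weight per action:\n", (w_go - w_nogo).round(2))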
How Many Separable Sources? Model Selection In Independent Components Analysis
Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen
2015-01-01
Unlike mixtures consisting solely of non-Gaussian sources, mixtures including two or more Gaussian components cannot be separated using standard independent components analysis methods that are based on higher order statistics and independent observations. The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though computationally intensive, alternative for model selection. Application of the algorithm is illustrated using Fisher's iris data set and Howells' craniometric data set. Mixed ICA/PCA is of potential interest in any field of scientific investigation where the authenticity of blindly separated non-Gaussian sources might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian. PMID:25811988
The cross-validated AUC for MCP-logistic regression with high-dimensional data.
Jiang, Dingfeng; Huang, Jian; Zhang, Ying
2013-10-01
We propose a cross-validated area under the receiver operating characteristic (ROC) curve (CV-AUC) criterion for tuning parameter selection for penalized methods in sparse, high-dimensional logistic regression models. We use this criterion in combination with the minimax concave penalty (MCP) method for variable selection. The CV-AUC criterion is specifically designed for optimizing the classification performance for binary outcome data. To implement the proposed approach, we derive an efficient coordinate descent algorithm to compute the MCP-logistic regression solution surface. Simulation studies are conducted to evaluate the finite sample performance of the proposed method and to compare it with existing methods, including the Akaike information criterion (AIC), the Bayesian information criterion (BIC), and the extended BIC (EBIC). The model selected based on the CV-AUC criterion tends to have a larger predictive AUC and a smaller classification error than those with tuning parameters selected using the AIC, BIC, or EBIC. We illustrate the application of the MCP-logistic regression with the CV-AUC criterion on three microarray datasets from studies that attempt to identify genes related to cancers. Our simulation studies and data examples demonstrate that the CV-AUC is an attractive method for tuning parameter selection for penalized methods in high-dimensional logistic regression models.
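The MCP coordinate descent solver is not sketched here; to illustrate just the CV-AUC tuning loop, the fragment below substitutes scikit-learn's l1-penalized logistic regression for the MCP-penalized fit (an assumption made purely for brevity) and selects the penalty value maximizing cross-validated AUC:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold

X, y = make_classification(n_samples=200, n_features=50, n_informative=5, random_state=0)

def cv_auc(lam, n_splits=5):
    """Cross-validated AUC for one value of the penalty parameter lam."""
    aucs = []
    for tr, te in StratifiedKFold(n_splits, shuffle=True, random_state=0).split(X, y):
        fit = LogisticRegression(penalty="l1", C=1.0 / lam, solver="liblinear")
        fit.fit(X[tr], y[tr])
        aucs.append(roc_auc_score(y[te], fit.decision_function(X[te])))
    return float(np.mean(aucs))

grid = np.logspace(-2, 1, 20)                 # candidate penalty values
scores = [cv_auc(lam) for lam in grid]
best = grid[int(np.argmax(scores))]
print(f"selected lambda = {best:.3f}, CV-AUC = {max(scores):.3f}")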
Ranking of Business Process Simulation Software Tools with DEX/QQ Hierarchical Decision Model
2016-01-01
The omnipresent need for optimisation requires constant improvements of companies’ business processes (BPs). The risk of implementing an inappropriate BP is usually minimised by simulating the newly developed BP under various initial conditions and “what-if” scenarios. Effectual business process simulation software (BPSS) is a prerequisite for accurate analysis of a BP. Characterisation of a BPSS tool is a challenging task due to the complex selection criteria, which include the quality of visual aspects, simulation capabilities, statistical facilities, quality of reporting, etc. Under such circumstances, making an optimal decision is challenging. Therefore, various decision support models are employed to aid BPSS tool selection. The currently established decision support models are either proprietary or comprise only a limited subset of criteria, which affects their accuracy. Addressing this issue, this paper proposes a new hierarchical decision support model for ranking BPSS tools based on their technical characteristics, employing DEX and qualitative-to-quantitative (QQ) methodology. Consequently, the decision expert feeds in the required information in a systematic and user-friendly manner. There are three significant contributions of the proposed approach. Firstly, the proposed hierarchical model is easily extendible for adding new criteria to the hierarchical structure. Secondly, a fully operational decision support system (DSS) tool that implements the proposed hierarchical model is presented. Finally, the effectiveness of the proposed hierarchical model is assessed by comparing the resulting rankings of BPSS tools with currently available results. PMID:26871694
Re-Examining of Moffitt’s Theory of Delinquency through Agent Based Modeling
Leaw, Jia Ning; Ang, Rebecca P.; Huan, Vivien S.; Chan, Wei Teng; Cheong, Siew Ann
2015-01-01
Moffitt’s theory of delinquency suggests that at-risk youths can be divided into two groups, the adolescence-limited group and the life-course-persistent group, predetermined at a young age, and that social interactions between these two groups become important during the adolescent years. We built an agent-based model based on the microscopic interactions Moffitt described: (i) a maturity gap that dictates (ii) the cost and reward of antisocial behavior, and (iii) agents imitating the antisocial behaviors of others more successful than themselves, and indeed found the two groups emerging in our simulations. Moreover, through an intervention simulation in which we moved selected agents from one social network to another, we also found that the social network plays an important role in shaping the life course outcome. PMID:26062022
ERIC Educational Resources Information Center
Davis, Laurie Laughlin; Pastor, Dena A.; Dodd, Barbara G.; Chiang, Claire; Fitzpatrick, Steven J.
2003-01-01
Examined the effectiveness of the Sympson-Hetter technique and rotated content balancing relative to no exposure control and no content rotation conditions in a computerized adaptive testing system based on the partial credit model. Simulation results show the Sympson-Hetter technique can be used with minimal impact on measurement precision,…
Antle, John M.; Stoorvogel, Jetse J.; Valdivia, Roberto O.
2014-01-01
This article presents conceptual and empirical foundations for new parsimonious simulation models that are being used to assess future food and environmental security of farm populations. The conceptual framework integrates key features of the biophysical and economic processes on which the farming systems are based. The approach represents a methodological advance by coupling important behavioural processes, for example, self-selection in adaptive responses to technological and environmental change, with aggregate processes, such as changes in market supply and demand conditions or environmental conditions as climate. Suitable biophysical and economic data are a critical limiting factor in modelling these complex systems, particularly for the characterization of out-of-sample counterfactuals in ex ante analyses. Parsimonious, population-based simulation methods are described that exploit available observational, experimental, modelled and expert data. The analysis makes use of a new scenario design concept called representative agricultural pathways. A case study illustrates how these methods can be used to assess food and environmental security. The concluding section addresses generalizations of parametric forms and linkages of regional models to global models. PMID:24535388
An, Yongkai; Lu, Wenxi; Cheng, Weiguo
2015-01-01
This paper introduces a surrogate model to identify an optimal exploitation scheme; the western Jilin province was selected as the study area. A numerical simulation model of groundwater flow was established first, and four exploitation wells were set in Tongyu county and Qian Gorlos county respectively so as to supply water to Daan county. Second, the Latin Hypercube Sampling (LHS) method was used to collect data in the feasible region for the input variables. A surrogate model of the numerical simulation model of groundwater flow was developed using the regression kriging method. An optimization model was established to search for an optimal groundwater exploitation scheme, using the minimum average drawdown of the groundwater table and the minimum cost of groundwater exploitation as multi-objective functions. Finally, the surrogate model was invoked by the optimization model in the process of solving the optimization problem. Results show that the relative error and root mean square error of the groundwater table drawdown between the simulation model and the surrogate model for 10 validation samples are both lower than 5%, indicating high approximation accuracy. A comparison between the surrogate-based simulation optimization model and the conventional simulation optimization model for solving the same optimization problem shows that the former needs only 5.5 hours, whereas the latter needs 25 days. The above results indicate that the surrogate model developed in this study can not only considerably reduce the computational burden of the simulation optimization process but also maintain high computational accuracy. This can thus provide an effective method for identifying an optimal groundwater exploitation scheme quickly and accurately. PMID:26264008
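A compressed version of the surrogate-building steps (LHS sampling of the feasible region, fitting, validation) might look as follows, with the groundwater flow model replaced by an analytic toy function and scikit-learn's Gaussian process regressor standing in for regression kriging, to which it is closely related; none of this is the authors' code.

import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def flow_model(x):
    """Analytic toy stand-in for the numerical groundwater flow model:
    'drawdown' as a smooth function of two pumping rates in [0, 1]."""
    return np.sin(3.0 * x[:, 0]) + 0.5 * x[:, 1] ** 2

sampler = qmc.LatinHypercube(d=2, seed=0)
X_train = qmc.scale(sampler.random(n=40), l_bounds=[0, 0], u_bounds=[1, 1])
y_train = flow_model(X_train)

# GP regression as a stand-in for regression kriging
surrogate = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), alpha=1e-6)
surrogate.fit(X_train, y_train)

# Hold-out check, loosely mirroring the paper's 10-sample validation
X_val = qmc.scale(sampler.random(n=10), l_bounds=[0, 0], u_bounds=[1, 1])
rmse = np.sqrt(np.mean((surrogate.predict(X_val) - flow_model(X_val)) ** 2))
print(f"surrogate RMSE on validation samples: {rmse:.4f}")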
Comparison of two integration methods for dynamic causal modeling of electrophysiological data.
Lemaréchal, Jean-Didier; George, Nathalie; David, Olivier
2018-06-01
Dynamic causal modeling (DCM) is a methodological approach to study effective connectivity among brain regions. Based on a set of observations and a biophysical model of brain interactions, DCM uses a Bayesian framework to estimate the posterior distribution of the free parameters of the model (e.g. modulation of connectivity) and infer architectural properties of the most plausible model (i.e. model selection). When modeling electrophysiological event-related responses, the estimation of the model relies on the integration of the system of delay differential equations (DDEs) that describe the dynamics of the system. In this technical note, we compared two numerical schemes for the integration of DDEs. The first, and standard, scheme approximates the DDEs (more precisely, the state of the system, with respect to conduction delays among brain regions) using ordinary differential equations (ODEs) and solves it with a fixed step size. The second scheme uses a dedicated DDEs solver with adaptive step sizes to control error, making it theoretically more accurate. To highlight the effects of the approximation used by the first integration scheme in regard to parameter estimation and Bayesian model selection, we performed simulations of local field potentials using first, a simple model comprising 2 regions and second, a more complex model comprising 6 regions. In these simulations, the second integration scheme served as the standard to which the first one was compared. Then, the performances of the two integration schemes were directly compared by fitting a public mismatch negativity EEG dataset with different models. The simulations revealed that the use of the standard DCM integration scheme was acceptable for Bayesian model selection but underestimated the connectivity parameters and did not allow an accurate estimation of conduction delays. Fitting to empirical data showed that the models systematically obtained an increased accuracy when using the second integration scheme. We conclude that inference on connectivity strength and delay based on DCM for EEG/MEG requires an accurate integration scheme. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
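The DCM neural-mass equations are beyond an abstract-sized example, but the core numerical issue generalizes to any delay differential equation. The sketch below integrates the scalar test problem x'(t) = -k x(t - tau) with a fixed-step Euler scheme that reads the delayed state from the stored trajectory; running it at several step sizes shows how strongly the solution depends on the step size, which is what an adaptive, interpolating DDE solver controls:

import numpy as np

def integrate_dde(k=2.0, tau=0.5, dt=0.001, t_end=10.0):
    """Fixed-step Euler for x'(t) = -k * x(t - tau), with x(t) = 1 for t <= 0.
    The delay is a whole number of steps here; a dedicated DDE solver would add
    interpolation of the delayed state and adaptive step-size control."""
    n_delay = int(round(tau / dt))
    n = int(round(t_end / dt))
    x = np.ones(n + 1)                         # x[i] is the state at t = i * dt
    for i in range(n):
        x_lag = x[i - n_delay] if i >= n_delay else 1.0   # pre-history value
        x[i + 1] = x[i] - dt * k * x_lag
    return x

for dt in (0.1, 0.01, 0.001):
    print(f"dt = {dt}: x(10) = {integrate_dde(dt=dt)[-1]:+.5f}")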
Patch-Based Generative Shape Model and MDL Model Selection for Statistical Analysis of Archipelagos
NASA Astrophysics Data System (ADS)
Ganz, Melanie; Nielsen, Mads; Brandt, Sami
We propose a statistical generative shape model for archipelago-like structures. These kind of structures occur, for instance, in medical images, where our intention is to model the appearance and shapes of calcifications in x-ray radio graphs. The generative model is constructed by (1) learning a patch-based dictionary for possible shapes, (2) building up a time-homogeneous Markov model to model the neighbourhood correlations between the patches, and (3) automatic selection of the model complexity by the minimum description length principle. The generative shape model is proposed as a probability distribution of a binary image where the model is intended to facilitate sequential simulation. Our results show that a relatively simple model is able to generate structures visually similar to calcifications. Furthermore, we used the shape model as a shape prior in the statistical segmentation of calcifications, where the area overlap with the ground truth shapes improved significantly compared to the case where the prior was not used.
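As a generic illustration of minimum-description-length model selection over a patch dictionary (not the authors' formulation; the two-part code below is a deliberately crude one), one can score each candidate dictionary size by the bits needed to state the dictionary plus the bits needed to code every data patch as an index and a list of mismatching pixels:

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)

# Synthetic "archipelago" patches: noisy copies of 3 true 5x5 binary prototypes
protos = rng.integers(0, 2, size=(3, 25))
data = np.array([np.where(rng.random(25) < 0.05, 1 - p, p)
                 for p in protos[rng.integers(0, 3, size=300)]])

def description_length(data, k):
    """Two-part MDL: bits to state a k-patch dictionary, plus bits to code each
    data patch as (dictionary index, list of mismatching pixel positions)."""
    centers = (KMeans(n_clusters=k, n_init=10, random_state=0)
               .fit(data).cluster_centers_ > 0.5).astype(int)
    n_pix = data.shape[1]
    model_bits = k * n_pix
    index_bits = len(data) * np.log2(k) if k > 1 else 0.0
    mismatches = np.array([np.abs(data - c).sum(axis=1) for c in centers]).min(axis=0)
    residual_bits = mismatches.sum() * (np.log2(n_pix) + 1)   # position + flipped bit
    return model_bits + index_bits + residual_bits

for k in (1, 2, 3, 4, 6, 8):
    print(f"k = {k}: description length = {description_length(data, k):,.0f} bits")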
Defect modelling in an interactive 3-D CAD environment
NASA Astrophysics Data System (ADS)
Reilly, D.; Potts, A.; McNab, A.; Toft, M.; Chapman, R. K.
2000-05-01
This paper describes enhancement of the NDT Workbench, as presented at QNDE '98, to include theoretical models for the ultrasonic inspection of smooth planar defects, developed by British Energy and BNFL-Magnox Generation. The Workbench is a PC-based software package for the reconstruction, visualization and analysis of 3-D ultrasonic NDT data in an interactive CAD environment. This extension of the Workbench now provides the user with a well established modelling approach, coupled with a graphical user interface for: a) configuring the model for flaw size, shape, orientation and location; b) flexible specification of probe parameters; c) selection of scanning surface and scan pattern on the CAD component model; d) presentation of the output as a simulated ultrasound image within the component, or as graphical or tabular displays. The defect modelling facilities of the Workbench can be used for inspection procedure assessment and confirmation of data interpretation, by comparison of overlay images generated from real and simulated data. The modelling technique currently implemented is based on the Geometrical Theory of Diffraction, for simulation of strip-like, circular or elliptical crack responses in the time harmonic or time dependent cases. Eventually, the Workbench will also allow modelling using elastodynamic Kirchhoff theory.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Genser, Krzysztof; Hatcher, Robert; Kelsey, Michael
The Geant4 simulation toolkit is used to model interactions between particles and matter. Geant4 employs a set of validated physics models that span a wide range of interaction energies. These models rely on measured cross-sections and phenomenological models with physically motivated parameters that are tuned to cover many application domains. To study what uncertainties are associated with the Geant4 physics models we have designed and implemented a comprehensive, modular, user-friendly software toolkit that allows the variation of one or more parameters of one or more Geant4 physics models involved in simulation studies. It also enables analysis of multiple variants of the resulting physics observables of interest in order to estimate the uncertainties associated with the simulation model choices. Based on modern event-processing infrastructure software, the toolkit offers a variety of attractive features, e.g., flexible run-time configurable workflow, comprehensive bookkeeping, and an easy-to-expand collection of analytical components. Design, implementation technology, and key functionalities of the toolkit are presented in this paper and illustrated with selected results.
NASA Astrophysics Data System (ADS)
Hoepfer, Matthias
Over the last two decades, computer modeling and simulation have evolved as the tools of choice for the design and engineering of dynamic systems. With increased system complexities, modeling and simulation become essential enablers for the design of new systems. Some of the advantages that modeling and simulation-based system design allows for are the replacement of physical tests to ensure product performance, reliability and quality, the shortening of design cycles due to the reduced need for physical prototyping, the design for mission scenarios, the incorporation of technologies that do not yet exist, and the reduction of technological and financial risks. Traditionally, dynamic systems are modeled in a monolithic way. Such monolithic models include all the data, relations and equations necessary to represent the underlying system. With increased complexity of these models, the monolithic model approach reaches certain limits regarding, for example, model handling and maintenance. Furthermore, while the available computer power has been steadily increasing according to Moore's Law (roughly a doubling in computational power every two years), the ever-increasing complexities of new models have negated the increased resources available. Lastly, modern systems and design processes are interdisciplinary, making it necessary for models to be flexible enough to incorporate different modeling and design approaches. The solution to bypassing the shortcomings of monolithic models is co-simulation. In a very general sense, co-simulation addresses the issue of linking together different dynamic sub-models into a model which represents the overall, integrated dynamic system. It is therefore an important enabler for the design of interdisciplinary, interconnected, highly complex dynamic systems. While a basic co-simulation setup can be very easy, complications can arise when sub-models display behaviors such as algebraic loops, singularities, or constraints. This work frames the co-simulation approach to modeling and simulation. It lays out the general approach to dynamic system co-simulation and gives a comprehensive overview of what co-simulation is and what it is not. It creates a taxonomy of the requirements and limits of co-simulation, and of the issues arising when co-simulating sub-models. Possible solutions for resolving the stated problems are investigated to a certain depth. A particular focus is given to the issue of time stepping. It will be shown that for dynamic models, the selection of the simulation time step is a crucial issue with respect to computational expense, simulation accuracy, and error control. The reasons for this are discussed in depth, and a time-stepping algorithm for co-simulation with unknown dynamic sub-models is proposed. Motivations and suggestions for the further treatment of selected issues are presented.
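One plausible, minimal instance of the macro-time-stepping issue discussed here is sketched below: two linearly coupled first-order subsystems exchange coupling variables only at macro steps (zero-order hold), and the macro step size is halved whenever a step-doubling error estimate exceeds a tolerance. The system, constants, and error rule are all invented for illustration.

def sub_step(x, u, a, b, dt, n_micro=10):
    """Advance one subsystem x' = a*x + b*u over a macro step with explicit
    Euler micro-steps, holding the coupling input u constant (zero-order hold)."""
    h = dt / n_micro
    for _ in range(n_micro):
        x += h * (a * x + b * u)
    return x

def cosimulate(t_end=5.0, dt=0.25, tol=1e-3):
    x1, x2, t = 1.0, 0.0, 0.0
    while t < t_end:
        # Step doubling: one macro step vs. two half steps estimates the error
        X1 = sub_step(x1, x2, -1.0, 0.5, dt)
        X2 = sub_step(x2, x1, -2.0, 0.7, dt)
        y1 = sub_step(x1, x2, -1.0, 0.5, dt / 2)
        y2 = sub_step(x2, x1, -2.0, 0.7, dt / 2)
        Y1 = sub_step(y1, y2, -1.0, 0.5, dt / 2)
        Y2 = sub_step(y2, y1, -2.0, 0.7, dt / 2)
        err = max(abs(X1 - Y1), abs(X2 - Y2))
        if err > tol and dt > 1e-3:
            dt /= 2                      # reject and retry with a smaller macro step
            continue
        x1, x2, t = Y1, Y2, t + dt       # accept the more accurate half-step result
        if err < tol / 4:
            dt = min(2 * dt, 0.25)       # cautiously enlarge the step again
    return x1, x2

print(cosimulate())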
NASA Astrophysics Data System (ADS)
Noh, Seong Jin; An, Hyunuk; Kim, Sanghyun
2015-04-01
Soil moisture, a critical factor in hydrologic systems, plays a key role in synthesizing interactions among soil, climate, hydrological response, solute transport and ecosystem dynamics. The spatial and temporal distribution of soil moisture at a hillslope scale is essential for understanding hillslope runoff generation processes. In this study, we implement Monte Carlo simulations at the hillslope scale using a three-dimensional surface-subsurface integrated model (3D model). Numerical simulations are compared with soil moisture measurements made using TDR (Mini_TRASE) at 22 locations and 2 or 3 depths over a whole year on a hillslope (area: 2100 square meters) located in the Bongsunsa Watershed, South Korea. In the stochastic Monte Carlo simulations, uncertainties in the soil parameters and input forcing are considered, and model ensembles showing good performance are selected separately for several seasonal periods. The presentation will focus on the characterization of seasonal variations of model parameters based on simulations with field measurements. In addition, structural limitations of the contemporary modeling method will be discussed.
Ning, Jing; Chen, Yong; Piao, Jin
2017-07-01
Publication bias occurs when the published research results are systematically unrepresentative of the population of studies that have been conducted, and is a potential threat to meaningful meta-analysis. The Copas selection model provides a flexible framework for correcting estimates and offers considerable insight into publication bias. However, maximizing the observed likelihood under the Copas selection model is challenging because the observed data contain very little information on the latent variable. In this article, we study a Copas-like selection model and propose an expectation-maximization (EM) algorithm for estimation based on the full likelihood. Empirical simulation studies show that the EM algorithm and its associated inferential procedure perform well and avoid the non-convergence problem when maximizing the observed likelihood. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Optimal Decision Stimuli for Risky Choice Experiments: An Adaptive Approach
Cavagnaro, Daniel R.; Gonzalez, Richard; Myung, Jay I.; Pitt, Mark A.
2014-01-01
Collecting data to discriminate between models of risky choice requires careful selection of decision stimuli. Models of decision making aim to predict decisions across a wide range of possible stimuli, but practical limitations force experimenters to select only a handful of them for actual testing. Some stimuli are more diagnostic between models than others, so the choice of stimuli is critical. This paper provides the theoretical background and a methodological framework for adaptive selection of optimal stimuli for discriminating among models of risky choice. The approach, called Adaptive Design Optimization (ADO), adapts the stimulus in each experimental trial based on the results of the preceding trials. We demonstrate the validity of the approach with simulation studies aiming to discriminate Expected Utility, Weighted Expected Utility, Original Prospect Theory, and Cumulative Prospect Theory models. PMID:24532856
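A stripped-down version of the ADO loop can be written in a few lines: for each candidate stimulus, compute the expected information gain about the model indicator (the mutual information between model and predicted response under the current posterior), present the most informative stimulus, and update the posterior with the observed choice. The two candidate models and the softmax choice rule below are simplifications assumed for illustration, not the models tested in the paper.

import numpy as np

MODELS = ["EU", "RiskAverse"]

def choice_prob(model, safe, win, temp=0.2):
    """P(choose the 50/50 gamble for `win` over the sure amount `safe`)."""
    u = {"EU": lambda x: x, "RiskAverse": np.sqrt}[model]
    return 1.0 / (1.0 + np.exp(-(0.5 * u(win) - u(safe)) / temp))

def expected_info_gain(post, designs):
    """Mutual information between the model indicator and the binary response."""
    def H(p):
        p = np.clip(p, 1e-12, 1 - 1e-12)
        return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))
    gains = []
    for safe, win in designs:
        p = np.array([choice_prob(m, safe, win) for m in MODELS])
        gains.append(H(post @ p) - post @ H(p))
    return np.array(gains)

rng = np.random.default_rng(4)
designs = [(s, w) for s in np.linspace(0.1, 1.0, 10) for w in np.linspace(0.2, 2.0, 10)]
post = np.array([0.5, 0.5])                      # prior over the two models
for trial in range(20):
    safe, win = designs[int(np.argmax(expected_info_gain(post, designs)))]
    y = rng.random() < choice_prob("RiskAverse", safe, win)   # simulated subject
    lik = np.array([choice_prob(m, safe, win) for m in MODELS])
    post = post * (lik if y else 1.0 - lik)
    post /= post.sum()
print(f"posterior over {MODELS}: {post.round(3)}")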
NASA Astrophysics Data System (ADS)
Miyagawa, Chihiro; Kobayashi, Takumi; Taishi, Toshinori; Hoshikawa, Keigo
2014-09-01
Based on the growth of 3-inch diameter c-axis sapphire using the vertical Bridgman (VB) technique, numerical simulations were made and used to guide the growth of a 6-inch diameter sapphire. A 2D model of the VB hot-zone was constructed, the seeding interface shape of the 3-inch diameter sapphire as revealed by green laser scattering was estimated numerically, and the temperature distributions of two VB hot-zone models designed for 6-inch diameter sapphire growth were numerically simulated to achieve the optimal growth of large crystals. The hot-zone model with one heater was selected and prepared, and 6-inch diameter c-axis sapphire boules were actually grown, as predicted by the numerical results.
A chaotic model for advertising diffusion problem with competition
NASA Astrophysics Data System (ADS)
Ip, W. H.; Yung, K. L.; Wang, Dingwei
2012-08-01
In this article, the authors extend Dawid and Feichtinger's chaotic advertising diffusion model to the duopoly case. A computer simulation system is used to test this enhanced model. Based on the analysis of the simulation results, it is found that the best advertising strategy in a duopoly is to increase the advertising investment to reach the best win-win situation, where oscillation of market share does not occur. In order to reach this situation effectively, we define a synthetic index and two thresholds. An estimation method for the parameters of the index and thresholds is proposed in this research. The win-win situation can be reached by simply selecting the control parameters so that the synthetic index is close to the threshold of the minimum-oscillation state. The numerical example and computational results indicate that the proposed chaotic model is useful for describing and analysing the advertising diffusion process in a duopoly and is an efficient tool for the selection and optimisation of advertising strategy.
Update of global TC simulations using a variable resolution non-hydrostatic model
NASA Astrophysics Data System (ADS)
Park, S. H.
2017-12-01
Tropical cyclone (TC) forecasts are simulated using variable-resolution meshes in MPAS for the 2017 summer season. Two physics suites are tested to explore the performance and bias of each suite for TC forecasting. A WRF physics suite is selected based on experience in weather forecasting, and a CAM (Community Atmosphere Model) physics suite is taken from AMIP-type climate simulations. Building on last year's results with the CAM5 physical parameterization package, and comparing against WRF physics, we investigate an intensity bias using an updated version of the CAM physics (CAM6). We also compare these results with a coupled version of the TC simulations. In this talk, TC structure will be compared, especially around the boundary layer, and the relationship between TC intensity and the different physics packages will be investigated.
NASA Technical Reports Server (NTRS)
Zaychik, Kirill B.; Cardullo, Frank M.
2012-01-01
Results have been obtained using conventional techniques to model the generic human operator's control behavior; however, little research has been done to identify an individual based on control behavior. The hypothesis investigated is that different operators exhibit different control behavior when performing a given control task. Two enhancements to existing human operator models, which allow personalization of the modeled control behavior, are presented. One enhancement accounts for the testing control signals, which are introduced by an operator for more accurate control of the system and/or to adjust the control strategy. This uses an artificial neural network which can be fine-tuned to model the testing control. Another enhancement takes the form of an equiripple filter which conditions the control system power spectrum. A novel automated parameter identification technique was developed to facilitate the identification of the parameters of the selected models. This utilizes a Genetic Algorithm-based optimization engine called the Bit-Climbing Algorithm. The enhancements were validated using experimental data obtained from three different sources: Manual Control Laboratory software experiments, an Unmanned Aerial Vehicle simulation, and NASA Langley Research Center Visual Motion Simulator studies. This manuscript also addresses applying human operator models to evaluate the effectiveness of motion feedback when simulating actual pilot control behavior in a flight simulator.
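The dissertation-level details of the Bit-Climbing Algorithm are not given in the abstract; the sketch below implements a generic bit-climbing scheme (random-restart hill climbing over a binary encoding of the model parameters) with a toy objective standing in for the mismatch between modeled and recorded operator control. Encoding width, bounds, and the objective are assumptions.

import numpy as np

rng = np.random.default_rng(5)

def decode(bits, lo=-5.0, hi=5.0):
    """Map a bit string to real parameters, 8 bits per parameter."""
    vals = bits.reshape(-1, 8) @ (2 ** np.arange(8)[::-1])
    return lo + (hi - lo) * vals / 255.0

def error(params):
    """Toy stand-in for the mismatch between modeled and recorded control behavior."""
    target = np.array([1.5, -2.0, 0.5])
    return float(np.sum((params - target) ** 2))

def bit_climb(n_params=3, restarts=5, sweeps=50):
    best_bits, best_err = None, np.inf
    for _ in range(restarts):
        bits = rng.integers(0, 2, n_params * 8)
        err = error(decode(bits))
        for _ in range(sweeps):
            for i in rng.permutation(bits.size):   # try flipping each bit in turn
                bits[i] ^= 1
                e = error(decode(bits))
                if e < err:
                    err = e                        # keep an improving flip
                else:
                    bits[i] ^= 1                   # revert a worsening flip
        if err < best_err:
            best_bits, best_err = bits.copy(), err
    return decode(best_bits), best_err

params, err = bit_climb()
print(f"identified parameters: {params.round(2)}, error {err:.4f}")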
Space-time latent component modeling of geo-referenced health data.
Lawson, Andrew B; Song, Hae-Ryoung; Cai, Bo; Hossain, Md Monir; Huang, Kun
2010-08-30
Latent structure models have been proposed in many applications. For space-time health data it is often important to be able to find the underlying trends in time, which are supported by subsets of small areas. Latent structure modeling is one such approach to this analysis. This paper presents a mixture-based approach that can be applied to component selection. The analysis of a Georgia ambulatory asthma county-level data set is presented and a simulation-based evaluation is made. Copyright (c) 2010 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Chen, L. Leon; Ulmer, Stephan; Deisboeck, Thomas S.
2010-01-01
We present an application of a previously developed agent-based glioma model (Chen et al 2009 Biosystems 95 234-42) for predicting spatio-temporal tumor progression using a patient-specific MRI lattice derived from apparent diffusion coefficient (ADC) data. Agents representing collections of migrating glioma cells are initialized based upon voxels at the outer border of the tumor identified on T1-weighted (Gd+) MRI at an initial time point. These simulated migratory cells exhibit a specific biologically inspired spatial search paradigm, representing a weighting of the differential contribution from haptotactic permission and biomechanical resistance on the migration decision process. ADC data from 9 months after the initial tumor resection were used to select the best search paradigm for the simulation, which was initiated using data from 6 months after the initial operation. Using this search paradigm, 100 simulations were performed to derive a probabilistic map of tumor invasion locations. The simulation was able to successfully predict a recurrence in the dorsal/posterior aspect long before it was depicted on T1-weighted MRI, 18 months after the initial operation.
A Measurement and Simulation Based Methodology for Cache Performance Modeling and Tuning
NASA Technical Reports Server (NTRS)
Waheed, Abdul; Yan, Jerry; Saini, Subhash (Technical Monitor)
1998-01-01
We present a cache performance modeling methodology that facilitates the tuning of uniprocessor cache performance for applications executing on shared memory multiprocessors by accurately predicting the effects of source code level modifications. Measurements on a single processor are initially used for identifying parts of code where cache utilization improvements may significantly impact the overall performance. Cache simulation based on trace-driven techniques can be carried out without gathering detailed address traces. Minimal runtime information for modeling cache performance of a selected code block includes: base virtual addresses of arrays, virtual addresses of variables, and loop bounds for that code block. The rest of the information is obtained from the source code. We show that the cache performance predictions are as reliable as those obtained through trace-driven simulations. This technique is particularly helpful for the exploration of various "what-if" scenarios regarding the cache performance impact of alternative code structures. We explain and validate this methodology using a simple matrix-matrix multiplication program. We then apply this methodology to predict and tune the cache performance of two realistic scientific applications taken from the Computational Fluid Dynamics (CFD) domain.
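The flavor of such "what-if" exploration can be conveyed with a toy trace-driven model: a direct-mapped cache fed the address stream of a matrix-matrix multiplication under two loop orders. Cache geometry, layout, and sizes below are invented, and a real study would model the actual memory hierarchy.

class DirectMappedCache:
    """Toy direct-mapped cache: counts misses for a stream of byte addresses."""
    def __init__(self, n_lines=256, line_bytes=64):
        self.n_lines, self.line_bytes = n_lines, line_bytes
        self.tags = [None] * n_lines
        self.misses = 0

    def access(self, addr):
        line = addr // self.line_bytes
        idx, tag = line % self.n_lines, line // self.n_lines
        if self.tags[idx] != tag:
            self.tags[idx] = tag
            self.misses += 1

def matmul_misses(n, order):
    """Address stream of C[i][j] += A[i][k] * B[k][j] over 8-byte doubles,
    with A, B, C laid out contiguously one after another (row-major)."""
    cache = DirectMappedCache()
    base_a, base_b, base_c = 0, 8 * n * n, 16 * n * n
    if order == "ijk":
        idx = ((i, j, k) for i in range(n) for j in range(n) for k in range(n))
    else:  # "ikj": better locality for B in a row-major layout
        idx = ((i, j, k) for i in range(n) for k in range(n) for j in range(n))
    for i, j, k in idx:
        cache.access(base_a + 8 * (i * n + k))
        cache.access(base_b + 8 * (k * n + j))
        cache.access(base_c + 8 * (i * n + j))
    return cache.misses

for order in ("ijk", "ikj"):
    print(f"loop order {order}: {matmul_misses(64, order):,} misses")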
Wan, Q; Masters, R C; Lidzey, D; Abrams, K J; Dapor, M; Plenderleith, R A; Rimmer, S; Claeyssens, F; Rodenburg, C
2016-12-01
Recently developed detectors can deliver high-resolution and high-contrast images of nanostructured carbon-based materials in low-voltage scanning electron microscopes (LVSEM) with beam deceleration. Monte Carlo simulations are also used to predict under which exact imaging conditions purely compositional contrast can be obtained and optimised. This allows the prediction of the electron signal intensity under angle-selective conditions for back-scattered electron (BSE) imaging in LVSEM and comparison with experimental signals. Angle-selective detection with a concentric back-scattered (CBS) detector is considered in the model in the absence and presence of a deceleration field, respectively. The validity of the model prediction for both cases was tested experimentally for amorphous C and Cu and applied to complex nanostructured carbon-based materials, namely a Poly(N-isopropylacrylamide)/Poly(ethylene glycol) Diacrylate (PNIPAM/PEGDA) semi-interpenetrating network (IPN) and a Poly(3-hexylthiophene-2,5-diyl) (P3HT) film, to map nano-scale composition and crystallinity distribution while avoiding experimental imaging conditions that lead to mixed topographical and compositional contrast. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
Efficient rejection-based simulation of biochemical reactions with stochastic noise and delays
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thanh, Vo Hong, E-mail: vo@cosbi.eu; Priami, Corrado, E-mail: priami@cosbi.eu; Department of Mathematics, University of Trento
2014-10-07
We propose a new exact stochastic rejection-based simulation algorithm for biochemical reactions and extend it to systems with delays. Our algorithm accelerates the simulation by pre-computing reaction propensity bounds to select the next reaction to perform. Exploiting such bounds, we are able to avoid recomputing propensities every time a (delayed) reaction is initiated or finished, as is typically necessary in standard approaches. Propensity updates in our approach are still performed, but only infrequently and for a small number of reactions, saving computation time without sacrificing exactness. We evaluate the performance improvement of our algorithm by experimenting with concrete biological models.
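A condensed illustration of the rejection idea (without delays, and not the authors' code) for a reversible isomerization is given below: propensity upper bounds are computed once for a bounding box around the state, candidate reactions are drawn from the bounds, an acceptance test uses the exact propensity, and the bounds are refreshed only when the state leaves the box.

import numpy as np

rng = np.random.default_rng(6)

# Reversible isomerization A <-> B with mass-action propensities a1=k1*A, a2=k2*B
k = np.array([1.0, 0.5])
stoich = np.array([[-1, 1], [1, -1]])          # state change of (A, B) per reaction

def propensities(x):
    return k * x

def rssa(x0=(100.0, 0.0), t_end=5.0, delta=0.1):
    x = np.array(x0)
    t, fires, trials = 0.0, 0, 0
    lo, hi = np.floor(x * (1 - delta)), np.ceil(x * (1 + delta)) + 1
    a_hi = propensities(hi)                     # upper bounds valid on the box
    while t < t_end:
        t += rng.exponential(1.0 / a_hi.sum())  # thinning: events at the bound rate
        j = rng.choice(len(k), p=a_hi / a_hi.sum())
        trials += 1
        if rng.random() <= propensities(x)[j] / a_hi[j]:   # exact-propensity test
            x += stoich[j]
            fires += 1
            if np.any(x < lo) or np.any(x > hi):           # left the box: refresh
                lo, hi = np.floor(x * (1 - delta)), np.ceil(x * (1 + delta)) + 1
                a_hi = propensities(hi)
    return x, fires / trials

state, accept_rate = rssa()
print(f"final state {state}, acceptance rate {accept_rate:.2f}")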
Development of optimization-based probabilistic earthquake scenarios for the city of Tehran
NASA Astrophysics Data System (ADS)
Zolfaghari, M. R.; Peyghaleh, E.
2016-01-01
This paper presents the methodology and a practical example of the application of an optimization process to select earthquake scenarios which best represent probabilistic earthquake hazard in a given region. The method is based on simulation of a large dataset of potential earthquakes, representing the long-term seismotectonic characteristics of a given region. The simulation process uses Monte Carlo simulation and regional seismogenic source parameters to generate a synthetic earthquake catalogue consisting of a large number of earthquakes, each characterized by magnitude, location, focal depth and fault characteristics. Such a catalogue provides full distributions of events in time, space and size; however, it demands large computation power when used for risk assessment, particularly when other sources of uncertainty are involved in the process. To reduce the number of selected earthquake scenarios, a mixed-integer linear program formulation is developed in this study. This approach results in a reduced set of optimization-based probabilistic earthquake scenarios, while maintaining the shape of the hazard curves and the full probabilistic picture by minimizing the error between hazard curves derived from the full and reduced sets of synthetic earthquake scenarios. To test the model, the regional seismotectonic and seismogenic characteristics of northern Iran are used to simulate 10,000 years' worth of events, consisting of some 84,000 earthquakes. The optimization model is then performed multiple times with various input data, taking into account the probabilistic seismic hazard for the city of Tehran as the main constraint. The sensitivity of the selected scenarios to the user-specified site/return-period error weight is also assessed. The methodology could shorten run times for full probabilistic earthquake studies such as seismic hazard and risk assessment. The reduced set is representative of the contributions of all possible earthquakes, yet requires far less computation power. The authors have used this approach for risk assessment toward identifying the effectiveness and profitability of risk mitigation measures, using an optimization model for resource allocation. Based on the error-computation trade-off, 62 earthquake scenarios were chosen for this purpose.
Sanders, Michael J.; Markstrom, Steven L.; Regan, R. Steven; Atkinson, R. Dwight
2017-09-15
A module for simulation of daily mean water temperature in a network of stream segments has been developed as an enhancement to the U.S. Geological Survey Precipitation Runoff Modeling System (PRMS). This new module is based on the U.S. Fish and Wildlife Service Stream Network Temperature model, a mechanistic, one-dimensional heat transport model. The new module is integrated in PRMS. Stream-water temperature simulation is activated by selection of the appropriate input flags in the PRMS Control File and by providing the necessary additional inputs in standard PRMS input files. This report includes a comprehensive discussion of the methods relevant to the stream temperature calculations and detailed instructions for model input preparation.
Wang, Cheng; Hipp, John R; Butts, Carter T; Jose, Rupa; Lakon, Cynthia M
2017-05-01
While studies suggest that peer influence can in some cases encourage adolescent substance use, recent work demonstrates that peer influence may be on average protective for cigarette smoking, raising questions about whether this effect occurs for other substance use behaviors. Herein, we focus on adolescent drinking, which may follow different social dynamics than smoking. We use a data-calibrated Stochastic Actor-Based (SAB) Model of adolescent friendship tie choice and drinking behavior to explore the impact of manipulating the size of peer influence and selection effects on drinking in two school-based networks. We first fit a SAB Model to data on friendship tie choice and adolescent drinking behavior within two large schools (n = 2178 and n = 976) over three time points using data from the National Longitudinal Study of Adolescent to Adult Health. We then alter the size of the peer influence and selection parameters with all other effects fixed at their estimated values and simulate the social systems forward 1000 times under varying conditions. Whereas peer selection appears to contribute to drinking behavior similarity among adolescents, there is no evidence that it leads to higher levels of drinking at the school level. A stronger peer influence effect lowers the overall level of drinking in both schools. There are many similarities in the patterning of findings between this study of drinking and previous work on smoking, suggesting that peer influence and selection may function similarly with respect to these substances.
EnsembleGraph: Interactive Visual Analysis of Spatial-Temporal Behavior for Ensemble Simulation Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shu, Qingya; Guo, Hanqi; Che, Limei
We present a novel visualization framework—EnsembleGraph—for analyzing ensemble simulation data, in order to help scientists understand behavior similarities between ensemble members over space and time. A graph-based representation is used to visualize individual spatiotemporal regions with similar behaviors, which are extracted by hierarchical clustering algorithms. A user interface with multiple linked views is provided, which enables users to explore, locate, and compare regions that have similar behaviors between ensemble members, and then investigate and analyze the selected regions in detail. The driving application of this paper is the study of regional emission influences on tropospheric ozone, which is based on ensemble simulations conducted with different anthropogenic emission absences using the MOZART-4 (model of ozone and related tracers, version 4) model. We demonstrate the effectiveness of our method by visualizing the MOZART-4 ensemble simulation data and evaluating the relative regional emission influences on tropospheric ozone concentrations. Positive feedback from domain experts and two case studies demonstrate the efficiency of our method.
NASA Astrophysics Data System (ADS)
Lafontaine, J.; Hay, L.; Markstrom, S. L.
2016-12-01
The United States Geological Survey (USGS) has developed a National Hydrologic Model (NHM) to support coordinated, comprehensive and consistent hydrologic model development, and facilitate the application of hydrologic simulations within the conterminous United States (CONUS). As many stream reaches in the CONUS are either not gaged, or are substantially impacted by water use or flow regulation, ancillary information must be used to determine reasonable parameter estimations for streamflow simulations. Hydrologic models for 1,576 gaged watersheds across the CONUS were developed to test the feasibility of improving streamflow simulations linking physically-based hydrologic models with remotely-sensed data products (i.e. snow water equivalent). Initially, the physically-based models were calibrated to measured streamflow data to provide a baseline for comparison across multiple calibration strategy tests. In addition, not all ancillary datasets are appropriate for application to all parts of the CONUS (e.g. snow water equivalent in the southeastern U.S., where snow is a rarity). As it is not expected that any one data product or model simulation will be sufficient for representing hydrologic behavior across the entire CONUS, a systematic evaluation of which data products improve hydrologic simulations for various regions across the CONUS was performed. The resulting portfolio of calibration strategies can be used to guide selection of an appropriate combination of modeled and measured information for hydrologic model development and calibration. In addition, these calibration strategies have been developed to be flexible so that new data products can be assimilated. This analysis provides a foundation to understand how well models work when sufficient streamflow data are not available and could be used to further inform hydrologic model parameter development for ungaged areas.
Evaluation of Marine Corps Manpower Computer Simulation Model
2016-12-01
merit-based promotion selection that is in conjunction with the "up or out" manpower system. To ensure mission accomplishment within M&RA, it is...historical data the MSM pulls from an online Oracle database. Two types of database pulls occur here: acquiring historical data of the manpower pyramid...is based on the assumption that the historical manpower progression is constant, and therefore controllable. This unfortunately does not marry
Detecting directional selection in the presence of recent admixture in African-Americans.
Lohmueller, Kirk E; Bustamante, Carlos D; Clark, Andrew G
2011-03-01
We investigate the performance of tests of neutrality in admixed populations using plausible demographic models for African-American history as well as resequencing data from African and African-American populations. The analysis of both simulated and human resequencing data suggests that recent admixture does not result in an excess of false-positive results for neutrality tests based on the frequency spectrum after accounting for the population growth in the parental African population. Furthermore, when simulating positive selection, Tajima's D, Fu and Li's D, and haplotype homozygosity have lower power to detect population-specific selection using individuals sampled from the admixed population than from the nonadmixed population. Fay and Wu's H test, however, has more power to detect selection using individuals from the admixed population than from the nonadmixed population, especially when the selective sweep ended long ago. Our results have implications for interpreting recent genome-wide scans for positive selection in human populations. © 2011 by the Genetics Society of America
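For reference, Tajima's D can be computed from a 0/1 haplotype matrix with the standard moment formulas (Tajima 1989); the helper below is a textbook implementation, not the authors' pipeline, and the toy data are drawn to mimic a roughly neutral site-frequency spectrum.

import numpy as np

def tajimas_d(haps):
    """Tajima's D from a 0/1 haplotype matrix (rows: haplotypes, cols: sites),
    using the standard moment formulas (Tajima 1989)."""
    n = haps.shape[0]
    freqs = haps.mean(axis=0)
    seg = (freqs > 0) & (freqs < 1)
    S = int(seg.sum())                              # segregating sites
    if S == 0:
        return 0.0
    p = freqs[seg]
    pi = np.sum(2.0 * p * (1.0 - p) * n / (n - 1))  # mean pairwise differences
    i = np.arange(1, n)
    a1, a2 = np.sum(1.0 / i), np.sum(1.0 / i**2)
    b1 = (n + 1) / (3.0 * (n - 1))
    b2 = 2.0 * (n**2 + n + 3) / (9.0 * n * (n - 1))
    c1 = b1 - 1.0 / a1
    c2 = b2 - (n + 2) / (a1 * n) + a2 / a1**2
    e1, e2 = c1 / a1, c2 / (a1**2 + a2)
    return (pi - S / a1) / np.sqrt(e1 * S + e2 * S * (S - 1))

# Toy data with a roughly neutral site-frequency spectrum (P(count = c) ~ 1/c)
rng = np.random.default_rng(7)
n_hap, n_sites = 20, 500
counts = np.arange(1, n_hap)
pvals = 1.0 / counts
pvals /= pvals.sum()
haps = np.zeros((n_hap, n_sites), dtype=int)
for s in range(n_sites):
    c = rng.choice(counts, p=pvals)
    haps[rng.choice(n_hap, size=c, replace=False), s] = 1
print(f"Tajima's D = {tajimas_d(haps):+.3f}")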
Gilroy, D L; Phillips, K P; Richardson, D S; van Oosterhout, C
2017-07-01
Balancing selection can maintain immunogenetic variation within host populations, but detecting its signal in a postbottlenecked population is challenging due to the potentially overriding effects of drift. Toll-like receptor genes (TLRs) play a fundamental role in vertebrate immune defence and are predicted to be under balancing selection. We previously characterized variation at TLR loci in the Seychelles warbler (Acrocephalus sechellensis), an endemic passerine that has undergone a historical bottleneck. Five of seven TLR loci were polymorphic, which is in sharp contrast to the low genomewide variation observed. However, standard population genetic statistical methods failed to detect a contemporary signature of selection at any TLR locus. We examined whether the observed TLR polymorphism could be explained by neutral evolution, simulating the population's demography in the software DIYABC. This showed that the posterior distributions of mutation rates had to be unrealistically high to explain the observed genetic variation. We then conducted simulations with an agent-based model using typical values for the mutation rate, which indicated that weak balancing selection has acted on the three TLR genes. The model was able to detect evidence of past selection elevating TLR polymorphism in the prebottleneck populations, but was unable to discern any effects of balancing selection in the contemporary population. Our results show drift is the overriding evolutionary force that has shaped TLR variation in the contemporary Seychelles warbler population, and the observed TLR polymorphisms might be merely the 'ghost of selection past'. Forecast models predict immunogenetic variation in this species will continue to be eroded in the absence of contemporary balancing selection. Such 'drift debt' occurs when a gene pool has not yet reached its new equilibrium level of polymorphism, and this loss could be an important threat to many recently bottlenecked populations. © 2017 European Society For Evolutionary Biology.
Lakon, Cynthia M; Hipp, John R; Wang, Cheng; Butts, Carter T; Jose, Rupa
2015-12-01
We used a stochastic actor-based approach to examine the effect of peer influence and peer selection--the propensity to choose friends who are similar--on smoking among adolescents. Data were collected from 1994 to 1996 from 2 schools involved in the National Longitudinal Study of Adolescent to Adult Health, with respectively 2178 and 976 students, and different levels of smoking. Our experimental manipulations of the peer influence and selection parameters in a simulation strategy indicated that stronger peer influence decreased school-level smoking. In contrast to the assumption that a smoker may induce a nonsmoker to begin smoking, adherence to antismoking norms may result in an adolescent nonsmoker inducing a smoker to stop smoking and reduce school-level smoking.
Pain expressiveness and altruistic behavior: an exploration using agent-based modeling.
de C Williams, Amanda C; Gallagher, Elizabeth; Fidalgo, Antonio R; Bentley, Peter J
2016-03-01
Predictions which invoke evolutionary mechanisms are hard to test. Agent-based modeling in artificial life offers a way to simulate behaviors and interactions in specific physical or social environments over many generations. The outcomes have implications for understanding the adaptive value of behaviors in context. Pain-related behavior in animals is communicated to other animals that might protect or help, or might exploit or predate. An agent-based model simulated the effects of displaying or not displaying pain (expresser/nonexpresser strategies) when injured and of helping, ignoring, or exploiting another in pain (altruistic/nonaltruistic/selfish strategies). Agents modeled in MATLAB interacted at random while foraging (gaining energy); random injury interrupted foraging for a fixed time unless help from an altruistic agent, who paid an energy cost, speeded recovery. Environmental and social conditions also varied, and each model ran for 10,000 iterations. Findings were meaningful in that, in general, contingencies evident from experimental work with a variety of mammals, over a few interactions, were replicated in the agent-based model after selection pressure over many generations. More energy-demanding expression of pain reduced its frequency in successive generations, and increasing injury frequency resulted in fewer expressers and altruists. Allowing exploitation of injured agents decreased expression of pain to near zero, but altruists remained. Decreasing costs or increasing benefits of helping hardly changed its frequency, whereas increasing interaction rate between injured agents and helpers diminished the benefits to both. Agent-based modeling allows simulation of complex behaviors and environmental pressures over evolutionary time.
Performance Analysis of Transposition Models Simulating Solar Radiation on Inclined Surfaces
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie, Yu; Sengupta, Manajit
2016-06-02
Transposition models have been widely used in the solar energy industry to simulate solar radiation on inclined photovoltaic panels. Following numerous studies comparing the performance of transposition models, this work aims to understand the quantitative uncertainty in state-of-the-art transposition models and the sources leading to the uncertainty. Our results show significant differences between two highly used isotropic transposition models, with one substantially underestimating the diffuse plane-of-array irradiances when diffuse radiation is perfectly isotropic. In the empirical transposition models, the selection of the empirical coefficients and land surface albedo can both result in uncertainty in the output. This study can be used as a guide for the future development of physics-based transposition models and evaluations of system performance.
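For concreteness, the widely used isotropic-sky transposition (the Liu-Jordan form) combines three terms; the helper below is a textbook implementation rather than any of the specific models evaluated in the study, and the example numbers are invented.

import numpy as np

def poa_isotropic(dni, dhi, ghi, aoi_deg, tilt_deg, albedo=0.2):
    """Plane-of-array irradiance under the isotropic-sky model:
    beam on the tilted plane + isotropic sky diffuse + ground-reflected diffuse."""
    beam = dni * max(np.cos(np.radians(aoi_deg)), 0.0)
    sky_diffuse = dhi * (1.0 + np.cos(np.radians(tilt_deg))) / 2.0
    ground = ghi * albedo * (1.0 - np.cos(np.radians(tilt_deg))) / 2.0
    return beam + sky_diffuse + ground

# Invented clear-sky, mid-day numbers for a panel tilted 30 degrees
print(f"POA = {poa_isotropic(dni=800, dhi=100, ghi=750, aoi_deg=20, tilt_deg=30):.0f} W/m^2")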
Genomic Prediction Accounting for Residual Heteroskedasticity.
Ou, Zhining; Tempelman, Robert J; Steibel, Juan P; Ernst, Catherine W; Bates, Ronald O; Bello, Nora M
2015-11-12
Whole-genome prediction (WGP) models that use single-nucleotide polymorphism marker information to predict genetic merit of animals and plants typically assume homogeneous residual variance. However, variability is often heterogeneous across agricultural production systems and may subsequently bias WGP-based inferences. This study extends classical WGP models based on normality, heavy-tailed specifications and variable selection to explicitly account for environmentally-driven residual heteroskedasticity under a hierarchical Bayesian mixed-models framework. WGP models assuming homogeneous or heterogeneous residual variances were fitted to training data generated under simulation scenarios reflecting a gradient of increasing heteroskedasticity. Model fit was based on pseudo-Bayes factors and also on prediction accuracy of genomic breeding values computed on a validation data subset one generation removed from the simulated training dataset. Homogeneous vs. heterogeneous residual variance WGP models were also fitted to two quantitative traits, namely 45-min postmortem carcass temperature and loin muscle pH, recorded in a swine resource population dataset prescreened for high and mild residual heteroskedasticity, respectively. Fit of competing WGP models was compared using pseudo-Bayes factors. Predictive ability, defined as the correlation between predicted and observed phenotypes in validation sets of a five-fold cross-validation, was also computed. Heteroskedastic error WGP models showed improved model fit and enhanced prediction accuracy compared to homoskedastic error WGP models, although the magnitude of the improvement was small (less than two percentage points net gain in prediction accuracy). Nevertheless, accounting for residual heteroskedasticity did improve accuracy of selection, especially for individuals of extreme genetic merit. Copyright © 2016 Ou et al.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, Mackenzie L.; Hickox, Ryan C.; DiPompeo, Michael A.
In studies of the connection between active galactic nuclei (AGNs) and their host galaxies, there is widespread disagreement on some key aspects of the connection. These disagreements largely stem from a lack of understanding of the nature of the full underlying AGN population. Recent attempts to probe this connection utilize both observations and simulations to correct for a missed population, but presently are limited by intrinsic biases and complicated models. We take a simple simulation for galaxy evolution and add a new prescription for AGN activity to connect galaxy growth to dark matter halo properties and AGN activity to star formation. We explicitly model selection effects to produce an “observed” AGN population for comparison with observations and empirically motivated models of the local universe. This allows us to bypass the difficulties inherent in models that attempt to infer the AGN population by inverting selection effects. We investigate the impact of selecting AGNs based on thresholds in luminosity or Eddington ratio on the “observed” AGN population. By limiting our model AGN sample in luminosity, we are able to recreate the observed local AGN luminosity function and specific star formation-stellar mass distribution, and show that using an Eddington ratio threshold introduces less bias into the sample by selecting the full range of growing black holes, despite the challenge of selecting low-mass black holes. We find that selecting AGNs using these various thresholds yields samples with different AGN host galaxy properties.
Study on Earthquake Emergency Evacuation Drill Trainer Development
NASA Astrophysics Data System (ADS)
ChangJiang, L.
2016-12-01
With the advance of urbanization in China, ensuring that people survive earthquakes requires scientific, routine emergency evacuation drills. Drawing on cellular automata, shortest-path algorithms, and collision avoidance, we designed a model of earthquake emergency evacuation drills for school scenes. Based on this model, we developed simulation software for earthquake emergency evacuation drills. The software performs the simulation by building a spatial structural model and specifying people's locations according to the actual conditions of the buildings. Based on the simulation data, a drill can then be conducted in the same building. RFID technology can be used for drill data collection: it reads personal information and sends it to the evacuation simulation software via Wi-Fi. The simulation software then compares the simulated data with the actual evacuation process, including evacuation time, evacuation paths, congestion nodes, and so on. Finally, it provides a comparative analysis report with assessment results and optimization proposals. We hope the earthquake emergency evacuation drill software and trainer can provide a whole-process concept for earthquake emergency evacuation drills in assembly occupancies. The trainer can make earthquake emergency evacuation more orderly, efficient, reasonable, and scientific, improving the capacity of cities to cope with earthquake hazards.
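A minimal sketch of the shortest-path ingredient on a cellular grid (breadth-first search to the nearest exit; the room layout is invented for illustration):

```python
from collections import deque

def evacuation_field(grid, exits):
    """BFS distance-to-nearest-exit on a cell grid (0 = free, 1 = wall).

    Each occupant then steps to the neighboring cell with the lowest
    distance, approximating the shortest evacuation path.
    """
    rows, cols = len(grid), len(grid[0])
    dist = [[None] * cols for _ in range(rows)]
    q = deque()
    for r, c in exits:
        dist[r][c] = 0
        q.append((r, c))
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and dist[nr][nc] is None:
                dist[nr][nc] = dist[r][c] + 1
                q.append((nr, nc))
    return dist

room = [[0, 0, 0, 1],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
print(evacuation_field(room, exits=[(2, 3)]))
```

Cells where several agents' gradient-descent paths converge are natural candidates for the congestion nodes the report compares against.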
Monte Carlo simulations of parapatric speciation
NASA Astrophysics Data System (ADS)
Schwämmle, V.; Sousa, A. O.; de Oliveira, S. M.
2006-06-01
Parapatric speciation is studied using an individual-based model with sexual reproduction. We combine the theory of mutation accumulation for biological ageing with an environmental selection pressure that varies according to the individuals' geographical positions and phenotypic traits. Fluctuations and genetic diversity of large populations are crucial ingredients to model the features of evolutionary branching and are intrinsic properties of the model. Its implementation on a spatial lattice gives interesting insights into the population dynamics of speciation on a geographical landscape and the disruptive selection that leads to the divergence of phenotypes. Our results suggest that assortative mating is not an obligatory ingredient to obtain speciation in large populations at low gene flow.
NASA Astrophysics Data System (ADS)
Scheer, Dirk; Konrad, Wilfried; Class, Holger; Kissinger, Alexander; Knopf, Stefan; Noack, Vera
2017-06-01
Saltwater intrusion into potential drinking water aquifers due to the injection of CO2 into deep saline aquifers is one of the potential hazards associated with the geological storage of CO2. Thus, in a site selection process, models for predicting the fate of the displaced brine are required, for example, for a risk assessment or the optimization of pressure management concepts. From the very beginning, this research on brine migration aimed at involving expert and stakeholder knowledge and assessment in simulating the impacts of injecting CO2 into deep saline aquifers by means of a participatory modeling process. The involvement exercise made use of two approaches. First, guideline-based interviews were carried out, aiming at eliciting expert and stakeholder knowledge and assessments of geological structures and mechanisms affecting CO2-induced brine migration. Second, a stakeholder workshop including the World Café format yielded evaluations and judgments of the numerical modeling approach, scenario selection, and preliminary simulation results. The participatory modeling approach gained several results covering brine migration in general, the geological model sketch, scenario development, and the review of the preliminary simulation results. These results were included in revised versions of both the geological model and the numerical model, helping to improve the analysis of regional-scale brine migration along vertical pathways due to CO2 injection.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Choi, Kyoo Sil; Barker, Erin; Cheng, Guang
2016-01-06
In this paper, a three-dimensional (3D) microstructure-based finite element modeling method (i.e., extrinsic modeling method) is developed, which can be used in examining the effects of porosity on the ductility/fracture of Mg castings. For this purpose, AM60 Mg tensile samples were generated under high-pressure die-casting in a specially-designed mold. Before the tensile test, the samples were CT-scanned to obtain the pore distributions within the samples. 3D microstructure-based finite element models were then developed based on the obtained actual pore distributions of the gauge area. The input properties for the matrix material were determined by fitting the simulation result to the experimental result of a selected sample, and then used for all the other samples’ simulation. The results show that the ductility and fracture locations predicted from simulations agree well with the experimental results. This indicates that the developed 3D extrinsic modeling method may be used to examine the influence of various aspects of pore sizes/distributions as well as intrinsic properties (i.e., matrix properties) on the ductility/fracture of Mg castings.
Skrivanek, Zachary; Berry, Scott; Berry, Don; Chien, Jenny; Geiger, Mary Jane; Anderson, James H.; Gaydos, Brenda
2012-01-01
Background: Dulaglutide (dula, LY2189265), a long-acting glucagon-like peptide-1 analog, is being developed to treat type 2 diabetes mellitus. Methods: To foster the development of dula, we designed a two-stage adaptive, dose-finding, inferentially seamless phase 2/3 study. The Bayesian theoretical framework is used to adaptively randomize patients in stage 1 to 7 dula doses and, at the decision point, to either stop for futility or to select up to 2 dula doses for stage 2. After dose selection, patients continue to be randomized to the selected dula doses or comparator arms. Data from patients assigned the selected doses will be pooled across both stages and analyzed with an analysis of covariance model, using baseline hemoglobin A1c and country as covariates. The operating characteristics of the trial were assessed by extensive simulation studies. Results: Simulations demonstrated that the adaptive design would identify the correct doses 88% of the time, compared to as low as 6% for a fixed-dose design (the latter value based on frequentist decision rules analogous to the Bayesian decision rules for adaptive design). Conclusions: This article discusses the decision rules used to select the dula dose(s); the mathematical details of the adaptive algorithm—including a description of the clinical utility index used to mathematically quantify the desirability of a dose based on safety and efficacy measurements; and a description of the simulation process and results that quantify the operating characteristics of the design. PMID:23294775
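A schematic of Bayesian response-adaptive randomization in the spirit described above, using Thompson sampling on a binary response as a stand-in for the trial's clinical-utility-based algorithm; the arm count matches the seven dula doses, but all response rates are invented:

```python
import numpy as np

rng = np.random.default_rng(2)
true_effect = np.array([0.1, 0.3, 0.5, 0.6, 0.7, 0.7, 0.7])  # 7 arms (assumed)
success = np.ones(7)    # Beta(1, 1) priors on each arm's response rate
failure = np.ones(7)

for patient in range(500):
    # Thompson sampling: allocate in proportion to posterior draws.
    draws = rng.beta(success, failure)
    arm = int(np.argmax(draws))
    outcome = rng.random() < true_effect[arm]
    success[arm] += outcome
    failure[arm] += 1 - outcome

post_mean = success / (success + failure)
print("posterior means:", np.round(post_mean, 2))
print("two doses carried to stage 2:", np.argsort(post_mean)[-2:])
```

The actual trial scored doses with a clinical utility index over safety and efficacy rather than a single binary endpoint; this sketch only illustrates how posterior updating concentrates allocation on promising arms.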
A Scheme to Optimize Flow Routing and Polling Switch Selection of Software Defined Networks.
Chen, Huan; Li, Lemin; Ren, Jing; Wang, Yang; Zhao, Yangming; Wang, Xiong; Wang, Sheng; Xu, Shizhong
2015-01-01
This paper aims at minimizing the communication cost for collecting flow information in Software Defined Networks (SDN). Since the flow-based information collecting method requires too much communication cost, and the switch-based method proposed recently cannot benefit from controlling flow routing, we propose to jointly optimize flow routing and polling switch selection to reduce the communication cost. To this end, the joint optimization problem is first formulated as an Integer Linear Programming (ILP) model. Since the ILP model is intractable in large networks, we also design an optimal algorithm for the multi-rooted tree topology and an efficient heuristic algorithm for general topologies. Extensive simulations show that our method can save up to 55.76% of the communication cost compared with the state-of-the-art switch-based scheme.
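A toy version of the joint routing/polling formulation, written with the PuLP modeler; the topology, costs, and constraints are simplified placeholders rather than the paper's full ILP:

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, value

switches = ["s1", "s2", "s3"]
flows = {"f1": [["s1", "s2"], ["s1", "s3"]],   # candidate paths per flow
         "f2": [["s2"], ["s2", "s3"]]}
poll_cost = {"s1": 3, "s2": 2, "s3": 1}        # assumed per-switch polling cost

prob = LpProblem("polling_switch_selection", LpMinimize)
y = {s: LpVariable(f"poll_{s}", cat="Binary") for s in switches}
x = {(f, i): LpVariable(f"route_{f}_{i}", cat="Binary")
     for f, paths in flows.items() for i in range(len(paths))}

prob += lpSum(poll_cost[s] * y[s] for s in switches)        # total polling cost
for f, paths in flows.items():
    prob += lpSum(x[f, i] for i in range(len(paths))) == 1  # one path per flow
    for i, path in enumerate(paths):
        # If path i is chosen, at least one switch on it must be polled.
        prob += lpSum(y[s] for s in path) >= x[f, i]

prob.solve()
print({s: int(value(y[s])) for s in switches})
```

Because routing is a decision variable, the solver can steer both flows through the cheap switch s3 and poll only it, which is exactly the benefit of joint optimization over fixed-routing polling.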
NASA Astrophysics Data System (ADS)
Wu, F.; Yi, J.; Li, W. J.
2014-03-01
An active sensing diagnostic system for reinforced concrete structural health monitoring (SHM) has been under investigation. Test results show that the system can detect damage to the structure. To fundamentally understand the damage algorithm and thereby establish a robust diagnostic method, accurate Finite Element Analysis (FEA) of the system becomes essential. For the system, a rebar with surface-bonded PZT under a transient wave load was simulated and analyzed using commercial FEA software. A detailed 2D axisymmetric model of a rebar with attached PZT was first established. The model represents the rebar with wedges, an epoxy adhesive layer, and a PZT layer. PZT material parameter transformation with high-order tensors was discussed, owing to the format differences between the IEEE Standard and ANSYS. The selection of material properties such as Rayleigh damping coefficients was discussed. The direct coupled-field analysis type was selected during simulation. The results from the simulation matched well with the experimental data. Further simulation of debonding damage detection for a concrete beam with the PZT rebar was performed, and the numerical results were validated against test results as well. The good consistency between the two indicates that the numerical models are reasonably accurate. Further system optimization was performed based on these models: by changing the PZT layout and size, the output signals could be increased by orders of magnitude, and the damage detection signals were found to increase exponentially with the debonding size of the rebar.
Hao, Fangran; Wang, Siyuan; Zhu, Xiao; Xue, Junsheng; Li, Jingyun; Wang, Lijie; Li, Jian; Lu, Wei; Zhou, Tianyan
2017-02-01
The aims were to investigate the anti-tumor effect of sunitinib in combination with dopamine in nu/nu nude mice bearing non-small cell lung cancer (NSCLC) A549 cells and to develop a combination PK/PD model; simulations were then conducted to optimize the administration regimens. A PK/PD model was developed based on our preclinical experiment to quantitatively explore the relationship between plasma concentration and drug effect. The model was then evaluated and validated. By fixing the parameters obtained from the PK/PD model, simulations were performed to predict tumor suppression under various regimens. A synergistic effect between sunitinib and dopamine was observed in the study, confirmed by the effect constant (GAMA, estimated as 2.49). The enhanced potency of dopamine on sunitinib was represented by an on/off effect in the PK/PD model. The optimal dose regimen was selected as sunitinib (120 mg/kg, q3d) in combination with dopamine (2 mg/kg, q3d) based on the simulation study. The synergistic effect of sunitinib and dopamine was demonstrated by the preclinical experiment and confirmed by the developed PK/PD model. In addition, the regimens were optimized by means of modeling and simulation, which may be conducive to clinical study.
Improving stability of prediction models based on correlated omics data by using network approaches.
Tissier, Renaud; Houwing-Duistermaat, Jeanine; Rodríguez-Girondo, Mar
2018-01-01
Building prediction models based on complex omics datasets such as transcriptomics, proteomics, metabolomics remains a challenge in bioinformatics and biostatistics. Regularized regression techniques are typically used to deal with the high dimensionality of these datasets. However, due to the presence of correlation in the datasets, it is difficult to select the best model and application of these methods yields unstable results. We propose a novel strategy for model selection where the obtained models also perform well in terms of overall predictability. Several three step approaches are considered, where the steps are 1) network construction, 2) clustering to empirically derive modules or pathways, and 3) building a prediction model incorporating the information on the modules. For the first step, we use weighted correlation networks and Gaussian graphical modelling. Identification of groups of features is performed by hierarchical clustering. The grouping information is included in the prediction model by using group-based variable selection or group-specific penalization. We compare the performance of our new approaches with standard regularized regression via simulations. Based on these results we provide recommendations for selecting a strategy for building a prediction model given the specific goal of the analysis and the sizes of the datasets. Finally we illustrate the advantages of our approach by application of the methodology to two problems, namely prediction of body mass index in the DIetary, Lifestyle, and Genetic determinants of Obesity and Metabolic syndrome study (DILGOM) and prediction of response of each breast cancer cell line to treatment with specific drugs using a breast cancer cell lines pharmacogenomics dataset.
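A compact sketch of the three-step strategy (network construction, module derivation, module-based model), using correlation distances, hierarchical clustering, and a ridge fit on module means; this is one plausible instantiation under invented data, not the authors' exact pipeline:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from sklearn.linear_model import RidgeCV

rng = np.random.default_rng(3)
n, p = 120, 60
X = rng.normal(size=(n, p))
X[:, 1] = X[:, 0] + 0.1 * rng.normal(size=n)   # induce correlated features
y = X[:, 0] + 0.5 * X[:, 10] + rng.normal(scale=0.5, size=n)

# Step 1: correlation-based network (distance = 1 - |r|).
corr = np.corrcoef(X, rowvar=False)
dist = 1 - np.abs(corr)

# Step 2: hierarchical clustering into empirical modules.
Z = linkage(dist[np.triu_indices(p, k=1)], method="average")
modules = fcluster(Z, t=0.7, criterion="distance")

# Step 3: summarize each module (mean of members) and fit a penalized model.
M = np.column_stack([X[:, modules == m].mean(axis=1)
                     for m in np.unique(modules)])
model = RidgeCV(alphas=np.logspace(-2, 2, 20)).fit(M, y)
print("modules:", len(np.unique(modules)), "R^2:", round(model.score(M, y), 3))
```

Group-specific penalization (penalizing whole modules rather than averaging them) is the other variant the abstract mentions; the averaging version above is simply the shortest to write down.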
The composite load spectra project
NASA Technical Reports Server (NTRS)
Newell, J. F.; Ho, H.; Kurth, R. E.
1990-01-01
Probabilistic methods and generic load models capable of simulating the load spectra that are induced in space propulsion system components are being developed. Four engine component types (the transfer ducts, the turbine blades, the liquid oxygen posts and the turbopump oxidizer discharge duct) were selected as representative hardware examples. The composite load spectra that simulate the probabilistic loads for these components are typically used as the input loads for a probabilistic structural analysis. The knowledge-based system approach used for the composite load spectra project provides an ideal environment for incremental development. The intelligent database paradigm employed in developing the expert system provides a smooth coupling between the numerical processing and the symbolic (information) processing. Large volumes of engine load information and engineering data are stored in database format and managed by a database management system. Numerical procedures for probabilistic load simulation and database management functions are controlled by rule modules. Rules were hard-wired as decision trees into rule modules to perform process control tasks. There are modules to retrieve load information and models. There are modules to select loads and models to carry out quick load calculations or make an input file for full duty-cycle time dependent load simulation. The composite load spectra load expert system implemented today is capable of performing intelligent rocket engine load spectra simulation. Further development of the expert system will provide tutorial capability for users to learn from it.
Optimal Tuner Selection for Kalman Filter-Based Aircraft Engine Performance Estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Garg, Sanjay
2010-01-01
A linear point design methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented. This technique specifically addresses the underdetermined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine which seeks to minimize the theoretical mean-squared estimation error. This paper derives theoretical Kalman filter estimation error bias and variance values at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the conventional approach of tuner selection. Experimental simulation results are found to be in agreement with theoretical predictions. The new methodology is shown to yield a significant improvement in on-line engine performance estimation accuracy.
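A schematic of tuner selection by minimizing steady-state estimation error: iterate the discrete Riccati recursion for each candidate tuner set and keep the best. This sketch selects parameter subsets, a simplification of the paper's construction of a tuning parameter vector, and all system matrices are invented:

```python
import itertools
import numpy as np

def steady_state_cov(A, C, Q, R, iters=500):
    """Iterate the discrete Riccati recursion to the steady-state
    a-posteriori estimation error covariance."""
    P = Q.copy()
    for _ in range(iters):
        Pp = A @ P @ A.T + Q                      # predict
        K = Pp @ C.T @ np.linalg.inv(C @ Pp @ C.T + R)
        P = (np.eye(len(A)) - K @ C) @ Pp         # update
    return P

rng = np.random.default_rng(4)
n_params, n_sensors = 6, 3                        # underdetermined: 6 > 3
A = 0.95 * np.eye(n_params)                       # slowly varying health params
C_full = rng.normal(size=(n_sensors, n_params))   # sensor sensitivities (assumed)
Q, R = 0.01 * np.eye(n_params), 0.1 * np.eye(n_sensors)

# Keep the 3-parameter tuner subset with the smallest summed
# steady-state estimation error.
best = min(itertools.combinations(range(n_params), n_sensors),
           key=lambda idx: np.trace(steady_state_cov(
               A[np.ix_(idx, idx)], C_full[:, idx],
               Q[np.ix_(idx, idx)], R)))
print("selected tuner subset:", best)
```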
NASA Technical Reports Server (NTRS)
Catalina, Adrian V.; Sen, S.; Rose, M. Franklin (Technical Monitor)
2001-01-01
The evolution of cellular solid/liquid interfaces from an initially unstable planar front was studied by means of a two-dimensional computer simulation. The developed numerical model makes use of an interface tracking procedure and has the capability to describe the dynamics of the interface morphology based on local changes of the thermodynamic conditions. The fundamental physics of this formulation was validated against experimental microgravity results and the predictions of the analytical linear stability theory. The performed simulations revealed that in certain conditions, based on a competitive growth mechanism, an interface could become unstable to random perturbations of infinitesimal amplitude even at wavelengths smaller than the neutral wavelength, lambda(sub c), predicted by the linear stability theory. Furthermore, two main stages of spacing selection have been identified. In the first stage, at low perturbations amplitude, the selection mechanism is driven by the maximum growth rate of instabilities while in the second stage the selection is influenced by nonlinear phenomena caused by the interactions between the neighboring cells. Comparison of these predictions with other existing theories of pattern formation and experimental results will be discussed.
Second Generation Crop Yield Models Review
NASA Technical Reports Server (NTRS)
Hodges, T. (Principal Investigator)
1982-01-01
Second generation yield models, including crop growth simulation models and plant process models, may be suitable for large area crop yield forecasting in the yield model development project. Subjective and objective criteria for model selection are defined and models which might be selected are reviewed. Models may be selected to provide submodels as input to other models; for further development and testing; or for immediate testing as forecasting tools. A plant process model may range in complexity from several dozen submodels simulating (1) energy, carbohydrates, and minerals; (2) change in biomass of various organs; and (3) initiation and development of plant organs, to a few submodels simulating key physiological processes. The most complex models cannot be used directly in large area forecasting but may provide submodels which can be simplified for inclusion into simpler plant process models. Both published and unpublished models which may be used for development or testing are reviewed. Several other models, currently under development, may become available at a later date.
Convis: A Toolbox to Fit and Simulate Filter-Based Models of Early Visual Processing
Huth, Jacob; Masquelier, Timothée; Arleo, Angelo
2018-01-01
We developed Convis, a Python simulation toolbox for large scale neural populations which offers arbitrary receptive fields by 3D convolutions executed on a graphics card. The resulting software proves to be flexible and easily extensible in Python, while building on the PyTorch library (The Pytorch Project, 2017), which was previously used successfully in deep learning applications, for just-in-time optimization and compilation of the model onto CPU or GPU architectures. An alternative implementation based on Theano (Theano Development Team, 2016) is also available, although not fully supported. Through automatic differentiation, any parameter of a specified model can be optimized to approach a desired output which is a significant improvement over e.g., Monte Carlo or particle optimizations without gradients. We show that a number of models including even complex non-linearities such as contrast gain control and spiking mechanisms can be implemented easily. We show in this paper that we can in particular recreate the simulation results of a popular retina simulation software VirtualRetina (Wohrer and Kornprobst, 2009), with the added benefit of providing (1) arbitrary linear filters instead of the product of Gaussian and exponential filters and (2) optimization routines utilizing the gradients of the model. We demonstrate the utility of 3d convolution filters with a simple direction selective filter. Also we show that it is possible to optimize the input for a certain goal, rather than the parameters, which can aid the design of experiments as well as closed-loop online stimulus generation. Yet, Convis is more than a retina simulator. For instance it can also predict the response of V1 orientation selective cells. Convis is open source under the GPL-3.0 license and available from https://github.com/jahuth/convis/ with documentation at https://jahuth.github.io/convis/. PMID:29563867
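The core primitive, a spatiotemporal receptive field expressed as a 3D convolution that PyTorch can differentiate through, looks roughly like this (kernel sizes and the class are illustrative, not Convis's actual API):

```python
import torch
import torch.nn as nn

class LinearReceptiveField(nn.Module):
    """A linear spatiotemporal filter as a single 3D convolution.

    Input shape: (batch, 1, time, height, width); the kernel spans
    20 time steps and a 9x9 spatial neighborhood (sizes are arbitrary).
    """
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv3d(in_channels=1, out_channels=1,
                              kernel_size=(20, 9, 9), bias=False)

    def forward(self, stimulus):
        return self.conv(stimulus)

model = LinearReceptiveField()
stimulus = torch.randn(1, 1, 100, 32, 32)      # 100 frames of 32x32 video
response = model(stimulus)
print(response.shape)                           # torch.Size([1, 1, 81, 24, 24])

# Because the filter is a torch module, any parameter can be fitted by
# gradient descent toward a desired output, the optimization the
# abstract describes.
target = torch.zeros_like(response)
loss = ((response - target) ** 2).mean()
loss.backward()
```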
gPKPDSim: a SimBiology®-based GUI application for PKPD modeling in drug development.
Hosseini, Iraj; Gajjala, Anita; Bumbaca Yadav, Daniela; Sukumaran, Siddharth; Ramanujan, Saroja; Paxson, Ricardo; Gadkar, Kapil
2018-04-01
Modeling and simulation (M&S) is increasingly used in drug development to characterize pharmacokinetic-pharmacodynamic (PKPD) relationships and support various efforts such as target feasibility assessment, molecule selection, human PK projection, and preclinical and clinical dose and schedule determination. While model development typically requires mathematical modeling expertise, model exploration and simulations could in many cases be performed by scientists in various disciplines to support the design, analysis and interpretation of experimental studies. To this end, we have developed a versatile graphical user interface (GUI) application to enable easy use of any model constructed in SimBiology® to execute various common PKPD analyses. The MATLAB®-based GUI application, called gPKPDSim, has a single screen interface and provides functionalities including simulation, data fitting (parameter estimation), population simulation (exploring the impact of parameter variability on the outputs of interest), and non-compartmental PK analysis. Further, gPKPDSim is a user-friendly tool with capabilities including interactive visualization, exporting of results and generation of presentation-ready figures. gPKPDSim was designed primarily for use in preclinical and translational drug development, although broader applications exist. gPKPDSim is a MATLAB®-based open-source application and is publicly available to download from MATLAB® Central™. We illustrate the use and features of gPKPDSim using multiple PKPD models to demonstrate the wide applications of this tool in pharmaceutical sciences. Overall, gPKPDSim provides an integrated, multi-purpose user-friendly GUI application to enable efficient use of PKPD models by scientists from various disciplines, regardless of their modeling expertise.
Lázár, Attila N; Clarke, Derek; Adams, Helen; Akanda, Abdur Razzaque; Szabo, Sylvia; Nicholls, Robert J; Matthews, Zoe; Begum, Dilruba; Saleh, Abul Fazal M; Abedin, Md Anwarul; Payo, Andres; Streatfield, Peter Kim; Hutton, Craig; Mondal, M Shahjahan; Moslehuddin, Abu Zofar Md
2015-06-01
Coastal Bangladesh experiences significant poverty and hazards today and is highly vulnerable to climate and environmental change over the coming decades. Coastal stakeholders are demanding information to assist in the decision making processes, including simulation models to explore how different interventions, under different plausible future socio-economic and environmental scenarios, could alleviate environmental risks and promote development. Many existing simulation models neglect the complex interdependencies between the socio-economic and environmental system of coastal Bangladesh. Here an integrated approach has been proposed to develop a simulation model to support agriculture and poverty-based analysis and decision-making in coastal Bangladesh. In particular, we show how a simulation model of farmers' livelihoods at the household level can be achieved. An extended version of the FAO's CROPWAT agriculture model has been integrated with a downscaled regional demography model to simulate net agriculture profit. This is used together with a household income-expenses balance and a loans logical tree to simulate the evolution of food security indicators and poverty levels. Modelling identifies salinity and temperature stress as limiting factors on crop productivity, and fertilisation due to atmospheric carbon dioxide concentrations as a reinforcing factor. The crop simulation results compare well with expected outcomes but also reveal some unexpected behaviours. For example, under current model assumptions, temperature is more important than salinity for crop production. The agriculture-based livelihood and poverty simulations highlight the critical significance of debt through informal and formal loans set at such levels as to persistently undermine the well-being of agriculture-dependent households. Simulations also indicate that progressive approaches to agriculture (i.e. diversification) might not provide a clear economic benefit from the perspective of pricing due to greater susceptibility to climate vagaries. The livelihood and poverty results highlight the importance of the holistic consideration of the human-nature system and the careful selection of poverty indicators. Although the simulation model at this stage contains the minimum elements required to simulate the complexity of farmer livelihood interactions in coastal Bangladesh, the crop and socio-economic findings compare well with expected behaviours. The presented integrated model is the first step toward developing a holistic, transferable analytic method and tool for coastal Bangladesh.
NASA Technical Reports Server (NTRS)
Christhilf, David M.; Pototzky, Anthony S.; Stevens, William L.
2010-01-01
The Simulink-based Simulation Architecture for Evaluating Controls for Aerospace Vehicles (SAREC-ASV) was modified to incorporate linear models representing aeroservoelastic characteristics of the SemiSpan SuperSonic Transport (S4T) wind-tunnel model. The S4T planform is for a Technology Concept Aircraft (TCA) design from the 1990s. The model has three control surfaces and is instrumented with accelerometers and strain gauges. Control laws developed for wind-tunnel testing for Ride Quality Enhancement, Gust Load Alleviation, and Flutter Suppression System functions were implemented in the simulation. The simulation models open- and closed-loop response to turbulence and to control excitation. It provides time histories for closed-loop stable conditions above the open-loop flutter boundary. The simulation is useful for assessing the potential impact of closed-loop control rate and position saturation. It also provides a means to assess fidelity of system identification procedures by providing time histories for a known plant model, with and without unmeasured turbulence as a disturbance. Sets of linear models representing different Mach number and dynamic pressure conditions were implemented as MATLAB Linear Time Invariant (LTI) objects. Configuration changes were implemented by selecting which LTI object to use in a Simulink template block. A limited comparison of simulation versus wind-tunnel results is shown.
Warship Combat System Selection Methodology Based on Discrete Event Simulation
2010-09-01
[Front-matter fragment: an abbreviation list, including a platform term glossed "from Spanish", PD (damage probability), PHit (hit probability), PKill (kill probability), RSM (response surface model), and SAM (surface-to-air missile), fused with body text.] Such a large target allows an assumption that the probability of a hit (PHit) is one. This structure can be considered as a bridge; therefore, the
Arabidopsis Ecotypes: A Model for Course Projects in Organismal Plant Biology & Evolution
ERIC Educational Resources Information Center
Wyatt, Sarah; Ballard, Harvey E.
2007-01-01
We present an inquiry-based project using readily-available seed stocks of Arabidopsis. Seedlings are grown under simulated "common garden" conditions to test evolutionary and organismal principles. Students learn scientific method by developing hypotheses and selecting appropriate data and analyses for their experiments. Experiments can be…
Zhang, Bing; Tan, Vincent B C; Lim, Kian Meng; Tay, Tong Earn
2006-06-01
Interests in CDK2 and CDK5 have stemmed mainly from their association with cancer and neuronal migration or differentiation related diseases and the need to design selective inhibitors for these kinases. Molecular dynamics (MD) simulations have not only become a viable approach to drug design because of advances in computer technology but are increasingly an integral part of drug discovery processes. It is common in MD simulations of inhibitor/CDK complexes to exclude the activator of the CDKs in the structural models to keep computational time tractable. In this paper, we present simulation results of CDK2 and CDK5 with roscovitine using models with and without their activators (cyclinA and p25). While p25 was found to induce slight changes in CDK5, the calculations support that cyclinA leads to significant conformational changes near the active site of CDK2. This suggests that detailed and structure-based inhibitor design targeted at these CDKs should employ activator-included models of the kinases. Comparisons between P/CDK2/cyclinA/roscovitine and CDK5/p25/roscovitine complexes reveal differences in the conformations of the glutamine around the active sites, which may be exploited to find highly selective inhibitors with respect to CDK2 and CDK5.
NASA Technical Reports Server (NTRS)
Seldner, K.
1976-01-01
The development of control systems for jet engines requires a real-time computer simulation. The simulation provides an effective tool for evaluating control concepts and problem areas prior to actual engine testing. The development and use of a real-time simulation of the Pratt and Whitney F100-PW100 turbofan engine is described. The simulation was used in a multi-variable optimal controls research program using linear quadratic regulator theory. The simulation is used to generate linear engine models at selected operating points and to evaluate the control algorithm. To reduce the complexity of the design, it is desirable to reduce the order of the linear model. A technique to reduce the order of the model is discussed, and selected results from the high- and low-order models are compared. The LQR control algorithms can be programmed on a digital computer, which will then control the engine simulation over the desired flight envelope.
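A sketch of one common order-reduction idea, modal truncation that keeps the slowest eigenmodes; the report does not specify its technique, and the 4-state diagonal model here is a toy chosen so the projection is exact:

```python
import numpy as np

def modal_truncation(A, B, C, keep):
    """Reduce a linear model dx = Ax + Bu, y = Cx by keeping the `keep`
    slowest eigenmodes (smallest |real part| of the eigenvalues).

    Works cleanly for real, well-separated modes as in this example;
    complex-conjugate pairs need extra care.
    """
    eigvals, V = np.linalg.eig(A)
    order = np.argsort(np.abs(eigvals.real))    # slow modes first
    Vk = V[:, order[:keep]]
    Vk_pinv = np.linalg.pinv(Vk)
    return (Vk_pinv @ A @ Vk).real, (Vk_pinv @ B).real, (C @ Vk).real

A = np.diag([-1.0, -5.0, -50.0, -200.0])        # toy 4-state engine model
B = np.ones((4, 1))
C = np.ones((1, 4))
Ar, Br, Cr = modal_truncation(A, B, C, keep=2)
print(Ar)                                       # retains the -1 and -5 modes
```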
Mitigating randomness of consumer preferences under certain conditional choices
NASA Astrophysics Data System (ADS)
Bothos, John M. A.; Thanos, Konstantinos-Georgios; Papadopoulou, Eirini; Daveas, Stelios; Thomopoulos, Stelios C. A.
2017-05-01
Agent-based crowd behaviour constitutes a significant field of research that has drawn a lot of attention in recent years. Agent-based crowd simulation techniques have been used extensively to forecast the behaviour of larger or smaller crowds under given conditions, influenced by specific cognition models and behavioural rules and norms imposed from the beginning. Our research employs conditional event algebra, statistical methodology, and agent-based crowd simulation techniques to develop a behavioural econometric model of the selection of economic behaviour by a consumer who faces a spectrum of potential choices when moving and acting in a multiplex mall. More specifically, we analyse the influence of demographic, economic, social, and cultural factors on the economic behaviour of a certain individual and then link this behaviour with the general behaviour of crowds of consumers in multiplex malls using agent-based crowd simulation techniques. We then run our model using Generalized Least Squares and Maximum Likelihood methods to obtain the most probable forecast estimates of the agent's behaviour. Our model is indicative of how consumers' spectra of choices form in multiplex malls under predefined preferences and can be used as a guide for further research in this area.
Evaluation of Automated Model Calibration Techniques for Residential Building Energy Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robertson, J.; Polly, B.; Collis, J.
2013-09-01
This simulation study adapts and applies the general framework described in BESTEST-EX (Judkoff et al 2010) for self-testing residential building energy model calibration methods. BEopt/DOE-2.2 is used to evaluate four mathematical calibration methods in the context of monthly, daily, and hourly synthetic utility data for a 1960's-era existing home in a cooling-dominated climate. The home's model inputs are assigned probability distributions representing uncertainty ranges, random selections are made from the uncertainty ranges to define 'explicit' input values, and synthetic utility billing data are generated using the explicit input values. The four calibration methods evaluated in this study are: an ASHRAE 1051-RP-based approach (Reddy and Maor 2006), a simplified simulated annealing optimization approach, a regression metamodeling optimization approach, and a simple output ratio calibration approach. The calibration methods are evaluated for monthly, daily, and hourly cases; various retrofit measures are applied to the calibrated models and the methods are evaluated based on the accuracy of predicted savings, computational cost, repeatability, automation, and ease of implementation.
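As an illustration of the optimization-based calibration idea, a simulated-annealing fit of two uncertain inputs to synthetic monthly bills; the building model below is a placeholder function, not BEopt/DOE-2.2:

```python
import numpy as np
from scipy.optimize import dual_annealing

def energy_model(params, cooling_degree_days):
    """Placeholder building model: monthly kWh from two uncertain inputs."""
    infiltration, efficiency = params
    cooling = cooling_degree_days * infiltration   # envelope-driven load
    baseload = 800.0 / efficiency                  # equipment-driven load
    return cooling + baseload

rng = np.random.default_rng(5)
cdd = rng.uniform(100, 500, size=12)               # 12 months of degree-days
true_params = np.array([1.8, 2.5])
synthetic_bills = energy_model(true_params, cdd)   # stand-in utility data

def objective(params):
    # Calibration error: sum of squared monthly billing residuals.
    return np.sum((energy_model(params, cdd) - synthetic_bills) ** 2)

result = dual_annealing(objective, bounds=[(0.5, 3.0), (1.0, 4.0)], seed=0)
print(result.x)    # should land near true_params (1.8, 2.5)
```

The BESTEST-EX framing then scores the calibrated model on how well it predicts savings from retrofit measures, not just on the billing-residual fit.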
NASA Astrophysics Data System (ADS)
Jacobs-Crisioni, C.; Koopmans, C. C.
2016-07-01
This paper introduces a GIS-based model that simulates the geographic expansion of transport networks by several decision-makers with varying objectives. The model progressively adds extensions to a growing network by choosing the most attractive investments from a limited choice set. Attractiveness is defined as a function of variables in which revenue and broader societal benefits may play a role and can be based on empirically underpinned parameters that may differ according to private or public interests. The choice set is selected from an exhaustive set of links and presumably contains those investment options that best meet a private operator's objectives by balancing the revenues of additional fare against construction costs. The investment options consist of geographically plausible routes with potential detours. These routes are generated using a fine-meshed regularly latticed network and shortest path finding methods. Additionally, two indicators of the geographic accuracy of the simulated networks are introduced. A historical case study is presented to demonstrate the model's first results. These results show that the modelled networks reproduce relevant results of the historically built network with reasonable accuracy.
Lee, Kyu Il; Jo, Sunhwan; Rui, Huan; Egwolf, Bernhard; Roux, Benoît; Pastor, Richard W.; Im, Wonpil
2011-01-01
Brownian dynamics (BD) in a suitably constructed potential of mean force is an efficient and accurate method for simulating ion transport through wide ion channels. Here, a web-based graphical user interface (GUI) is presented for grand canonical Monte Carlo (GCMC) BD simulations of channel proteins: http://www.charmm-gui.org/input/gcmcbd. The webserver is designed to help users avoid most of the technical difficulties and issues encountered in setting up and simulating complex pore systems. GCMC/BD simulation results for three proteins, the voltage dependent anion channel (VDAC), α-Hemolysin, and the protective antigen pore of the anthrax toxin (PA), are presented to illustrate system setup, input preparation, and typical output (conductance, ion density profile, ion selectivity, and ion asymmetry). Two models for the input diffusion constants for potassium and chloride ions in the pore are compared: scaling of the bulk diffusion constants by 0.5, as deduced from previous all-atom molecular dynamics simulations of VDAC; and a hydrodynamics based model (HD) of diffusion through a tube. The HD model yields excellent agreement with experimental conductances for VDAC and α-Hemolysin, while scaling bulk diffusion constants by 0.5 leads to underestimates of 10–20%. For PA, simulated ion conduction values overestimate experimental values by a factor of 1.5 to 7 (depending on His protonation state and the transmembrane potential), implying that the currently available computational model of this protein requires further structural refinement. PMID:22102176
NASA Astrophysics Data System (ADS)
Kuznetsova, M. M.; Liu, Y. H.; Rastaetter, L.; Pembroke, A. D.; Chen, L. J.; Hesse, M.; Glocer, A.; Komar, C. M.; Dorelli, J.; Roytershteyn, V.
2016-12-01
The presentation will provide an overview of new tools, services, and models implemented at the Community Coordinated Modeling Center (CCMC) to facilitate MMS dayside results analysis. We will provide updates on the implementation of Particle-in-Cell (PIC) simulations at the CCMC and opportunities for on-line visualization and analysis of results of PIC simulations of asymmetric magnetic reconnection for different guide fields and boundary conditions. Fields, plasma parameters, particle distribution moments, as well as particle distribution functions calculated in selected regions of the vicinity of reconnection sites can be analyzed through the web-based interactive visualization system. In addition, there are options to request distribution functions in user-selected regions of interest and to fly through simulated magnetic reconnection configurations with a map of distributions to facilitate comparisons with observations. A broad collection of global magnetosphere models hosted at the CCMC provides the opportunity to put MMS observations and local PIC simulations into global context. We recently implemented the RECON-X post-processing tool (Glocer et al., 2016), which allows users to determine the location of the separator surface around closed field lines and between open field lines and solar wind field lines. The tool also finds the separatrix line where the two surfaces touch and the positions of magnetic nulls. The surfaces and the separatrix line can be visualized relative to satellite positions in the dayside magnetosphere using an interactive HTML-5 visualization for each time step processed. To validate global magnetosphere models' capability to simulate locations of dayside magnetosphere boundaries, we will analyze the proximity of MMS to simulated separatrix locations for a set of MMS diffusion region crossing events.
NASA Astrophysics Data System (ADS)
Kim, Junhan; Marrone, Daniel P.; Chan, Chi-Kwan; Medeiros, Lia; Özel, Feryal; Psaltis, Dimitrios
2016-12-01
The Event Horizon Telescope (EHT) is a millimeter-wavelength, very-long-baseline interferometry (VLBI) experiment that is capable of observing black holes with horizon-scale resolution. Early observations have revealed variable horizon-scale emission in the Galactic Center black hole, Sagittarius A* (Sgr A*). Comparing such observations to time-dependent general relativistic magnetohydrodynamic (GRMHD) simulations requires statistical tools that explicitly consider the variability in both the data and the models. We develop here a Bayesian method to compare time-resolved simulation images to variable VLBI data, in order to infer model parameters and perform model comparisons. We use mock EHT data based on GRMHD simulations to explore the robustness of this Bayesian method and contrast it to approaches that do not consider the effects of variability. We find that time-independent models lead to offset values of the inferred parameters with artificially reduced uncertainties. Moreover, neglecting the variability in the data and the models often leads to erroneous model selections. We finally apply our method to the early EHT data on Sgr A*.
NASA Astrophysics Data System (ADS)
Lafontaine, J.; Hay, L.
2015-12-01
The United States Geological Survey (USGS) has developed a National Hydrologic Model (NHM) to support coordinated, comprehensive and consistent hydrologic model development, and facilitate the application of hydrologic simulations within the conterminous United States (CONUS). More than 1,700 gaged watersheds across the CONUS were modeled to test the feasibility of improving streamflow simulations in gaged and ungaged watersheds by linking statistically- and physically-based hydrologic models with remotely-sensed data products (i.e. - snow water equivalent) and estimates of uncertainty. Initially, the physically-based models were calibrated to measured streamflow data to provide a baseline for comparison. As many stream reaches in the CONUS are either not gaged, or are substantially impacted by water use or flow regulation, ancillary information must be used to determine reasonable parameter estimations for streamflow simulations. In addition, not all ancillary datasets are appropriate for application to all parts of the CONUS (e.g. - snow water equivalent in the southeastern U.S., where snow is a rarity). As it is not expected that any one data product or model simulation will be sufficient for representing hydrologic behavior across the entire CONUS, a systematic evaluation of which data products improve simulations of streamflow for various regions across the CONUS was performed. The resulting portfolio of calibration strategies can be used to guide selection of an appropriate combination of simulated and measured information for model development and calibration at a given location of interest. In addition, these calibration strategies have been developed to be flexible so that new data products or simulated information can be assimilated. This analysis provides a foundation to understand how well models work when streamflow data is either not available or is limited and could be used to further inform hydrologic model parameter development for ungaged areas.
Information Security Analysis Using Game Theory and Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schlicher, Bob G; Abercrombie, Robert K
Information security analysis can be performed using game theory implemented in dynamic simulations of Agent Based Models (ABMs). Such simulations can be verified with the results from game theory analysis and further used to explore larger scale, real world scenarios involving multiple attackers, defenders, and information assets. Our approach addresses imperfect information and scalability, which also allows us to address previous limitations of current stochastic game models. Such models consider only perfect information, assuming that the defender is always able to detect attacks, that the state transition probabilities are fixed before the game, and that the players' actions are always synchronous; moreover, most models are not scalable to the size and complexity of systems under consideration. Our use of ABMs yields results of selected experiments that demonstrate our proposed approach and provides a quantitative measure for realistic information systems and their related security scenarios.
Accelerated rescaling of single Monte Carlo simulation runs with the Graphics Processing Unit (GPU).
Yang, Owen; Choi, Bernard
2013-01-01
To interpret fiber-based and camera-based measurements of remitted light from biological tissues, researchers typically use analytical models, such as the diffusion approximation to light transport theory, or stochastic models, such as Monte Carlo modeling. To achieve rapid (ideally real-time) measurement of tissue optical properties, especially in clinical situations, there is a critical need to accelerate Monte Carlo simulation runs. In this manuscript, we report on our approach of using the Graphics Processing Unit (GPU) to accelerate rescaling of single Monte Carlo runs in order to rapidly calculate diffuse reflectance values for different sets of tissue optical properties. We selected MATLAB to enable non-specialists in C and CUDA-based programming to use the generated open-source code. We developed a software package with four abstraction layers. To calculate a set of diffuse reflectance values from a simulated tissue with homogeneous optical properties, our rescaling GPU-based approach achieves a reduction in computation time of several orders of magnitude as compared to other GPU-based approaches. Specifically, our GPU-based approach generated a diffuse reflectance value in 0.08 ms. The transfer time from CPU to GPU memory is currently a limiting factor with GPU-based calculations. However, for calculation of multiple diffuse reflectance values, our GPU-based approach can still lead to processing that is ~3400 times faster than other GPU-based approaches.
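The rescaling idea itself is compact: store per-photon path lengths from one baseline run, then reweight each photon with Beer-Lambert attenuation for every new absorption coefficient. A NumPy stand-in for the GPU kernel (absorption rescaling only, and the path-length distribution is synthetic rather than a real Monte Carlo output):

```python
import numpy as np

rng = np.random.default_rng(6)

# Pretend output of one baseline, absorption-free Monte Carlo run: the
# total path length (cm) traveled in tissue by each detected photon.
path_lengths = rng.gamma(shape=2.0, scale=1.0, size=1_000_000)

def diffuse_reflectance(mu_a, path_lengths):
    """Rescale the single baseline run to absorption coefficient mu_a
    (1/cm) via Beer-Lambert weighting per photon."""
    return np.mean(np.exp(-mu_a * path_lengths))

for mu_a in (0.01, 0.1, 1.0):
    print(mu_a, diffuse_reflectance(mu_a, path_lengths))
```

The per-photon exponential weighting is embarrassingly parallel, which is why moving it to the GPU pays off once memory transfer is amortized over many optical-property sets.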
Elevated temperature crack growth
NASA Technical Reports Server (NTRS)
Malik, S. N.; Vanstone, R. H.; Kim, K. S.; Laflen, J. H.
1985-01-01
The purpose is to determine the ability of currently available P-I integrals to correlate fatigue crack propagation under conditions that simulate the turbojet engine combustor liner environment. The utility of advanced fracture mechanics measurements will also be evaluated during the course of the program. To date, an appropriate specimen design, a crack displacement measurement method, and boundary condition simulation in the computational model of the specimen were achieved. Alloy 718 was selected as an analog material based on its ability to simulate high temperature behavior at lower temperatures. Tensile and cyclic tests were run at several strain rates so that an appropriate constitutive model could be developed. Suitable P-I integrals were programmed into a finite element post-processor for eventual comparison with experimental data.
Stein, Mart Lambertus; Rudge, James W; Coker, Richard; van der Weijden, Charlie; Krumkamp, Ralf; Hanvoravongchai, Piya; Chavez, Irwin; Putthasri, Weerasak; Phommasack, Bounlay; Adisasmito, Wiku; Touch, Sok; Sat, Le Minh; Hsu, Yu-Chen; Kretzschmar, Mirjam; Timen, Aura
2012-10-12
Health care planning for pandemic influenza is a challenging task that requires predictive models by which the impact of different response strategies can be evaluated. However, current preparedness plans and simulation exercises, as well as freely available simulation models previously made for policy makers, do not explicitly address the availability of health care resources or determine the impact of shortages on public health. Nevertheless, the feasibility of health systems to implement response measures or interventions described in plans and trained in exercises depends on the available resource capacity. As part of the AsiaFluCap project, we developed a comprehensive and flexible resource modelling tool to support public health officials in understanding and preparing for surges in resource demand during future pandemics. The AsiaFluCap Simulator is a combination of a resource model containing 28 health care resources and an epidemiological model. The tool was built in MS Excel© and contains a user-friendly interface which allows users to select mild or severe pandemic scenarios, change resource parameters and run simulations for one or multiple regions. Besides epidemiological estimations, the simulator provides indications on resource gaps or surpluses, and the impact of shortages on public health for each selected region. It allows for a comparative analysis of the effects of resource availability and consequences of different strategies of resource use, which can provide guidance on resource prioritising and/or mobilisation. Simulation results are displayed in various tables and graphs, and can also be easily exported to GIS software to create maps for geographical analysis of the distribution of resources. The AsiaFluCap Simulator is freely available software (http://www.cdprg.org) which can be used by policy makers, policy advisors, donors and other stakeholders involved in preparedness for providing evidence-based and illustrative information on health care resource capacities during future pandemics. The tool can inform both preparedness plans and simulation exercises and can help increase the general understanding of dynamics in resource capacities during a pandemic. The combination of a mathematical model with multiple resources and the linkage to GIS for creating maps makes the tool unique compared to other available software.
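A miniature illustration of coupling an epidemic model to one resource: a discrete-time SEIR curve driving hospital bed demand against fixed capacity (all parameters are illustrative, not AsiaFluCap values, and the real tool tracks 28 resources):

```python
import numpy as np

def seir_resource_demand(days=200, N=1_000_000, beta=0.4, sigma=1/2,
                         gamma=1/5, hosp_frac=0.05, beds=2000):
    """Daily SEIR epidemic curve plus one resource (hospital beds).

    Returns the daily bed shortfall (demand above capacity).
    """
    S, E, I, R = N - 1.0, 0.0, 1.0, 0.0
    gaps = []
    for day in range(days):
        new_exposed = beta * S * I / N
        new_infectious = sigma * E
        new_recovered = gamma * I
        S -= new_exposed
        E += new_exposed - new_infectious
        I += new_infectious - new_recovered
        R += new_recovered
        demand = hosp_frac * I            # beds needed today
        gaps.append(max(0.0, demand - beds))
    return np.array(gaps)

gap = seir_resource_demand()
print("peak bed shortfall:", int(gap.max()),
      "| days with shortage:", int((gap > 0).sum()))
```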
NASA Technical Reports Server (NTRS)
Janardan, B. A.; Brausch, J. F.; Price, A. O.
1984-01-01
Acoustic and diagnostic data that were obtained to determine the influence of selected geometric and aerodynamic flow variables of coannular nozzles with thermal acoustic shields are summarized in this comprehensive data report. A total of 136 static and simulated flight acoustic test points were conducted with 9 scale-model nozzles. Aerodynamic laser velocimeter measurements were made for four selected plumes. In addition, static pressure data in the chute base region of the suppressor configurations were obtained to assess the influence of the shield stream on the suppressor base drag.
Michel, Pierre; Baumstarck, Karine; Ghattas, Badih; Pelletier, Jean; Loundou, Anderson; Boucekine, Mohamed; Auquier, Pascal; Boyer, Laurent
2016-04-01
The aim was to develop a multidimensional computerized adaptive short-form questionnaire, the MusiQoL-MCAT, from a fixed-length QoL questionnaire for multiple sclerosis. A total of 1992 patients were enrolled in this international cross-sectional study. The development of the MusiQoL-MCAT was based on the assessment of between-items MIRT model fit followed by real-data simulations. The MCAT algorithm was based on Bayesian maximum a posteriori estimation of latent traits and Kullback-Leibler information item selection. We examined several simulations based on a fixed number of items. Accuracy was assessed using correlations (r) between initial IRT scores and MCAT scores. Precision was assessed using the standard error of measurement (SEM) and the root mean square error (RMSE). The multidimensional graded response model was used to estimate item parameters and IRT scores. Among the MCAT simulations, the 16-item version of the MusiQoL-MCAT was selected because the accuracy and precision became stable at 16 items with satisfactory levels (r ≥ 0.9, SEM ≤ 0.55, and RMSE ≤ 0.3). External validity of the MusiQoL-MCAT was satisfactory. The MusiQoL-MCAT presents satisfactory properties and can individually tailor QoL assessment to each patient, making it less burdensome to patients and better adapted for use in clinical practice.
Yellepeddi, Venkata; Rower, Joseph; Liu, Xiaoxi; Kumar, Shaun; Rashid, Jahidur; Sherwin, Catherine M T
2018-05-18
Physiologically based pharmacokinetic modeling and simulation is an important tool for predicting the pharmacokinetics, pharmacodynamics, and safety of drugs in pediatrics. Physiologically based pharmacokinetic modeling is applied in pediatric drug development for first-time-in-pediatric dose selection, simulation-based trial design, correlation with target organ toxicities, risk assessment by investigating possible drug-drug interactions, real-time assessment of pharmacokinetic-safety relationships, and assessment of non-systemic biodistribution targets. This review summarizes the details of a physiologically based pharmacokinetic modeling approach in pediatric drug research, emphasizing reports on pediatric physiologically based pharmacokinetic models of individual drugs. We also compare and contrast the strategies employed by various researchers in pediatric physiologically based pharmacokinetic modeling and provide a comprehensive overview of physiologically based pharmacokinetic modeling strategies and approaches in pediatrics. We discuss the impact of physiologically based pharmacokinetic models on regulatory reviews and product labels in the field of pediatric pharmacotherapy. Additionally, we examine in detail the current limitations and future directions of physiologically based pharmacokinetic modeling in pediatrics with regard to the ability to predict plasma concentrations and pharmacokinetic parameters. Despite the skepticism and concern in the pediatric community about the reliability of physiologically based pharmacokinetic models, there is substantial evidence that pediatric physiologically based pharmacokinetic models have been used successfully to predict differences in pharmacokinetics between adults and children for several drugs. It is obvious that the use of physiologically based pharmacokinetic modeling to support various stages of pediatric drug development is highly attractive and will rapidly increase, provided the robustness and reliability of these techniques are well established.
Decision Aids for Airborne Intercept Operations in Advanced Aircrafts
NASA Technical Reports Server (NTRS)
Madni, A.; Freedy, A.
1981-01-01
A tactical decision aid (TDA) for the F-14 aircrew, i.e., the naval flight officer and pilot, in conducting a multitarget attack during the performance of a Combat Air Patrol (CAP) role is presented. The TDA employs hierarchical multiattribute utility models for characterizing mission objectives in operationally measurable terms, rule-based AI models for tactical posture selection, and fast-time simulation for maneuver consequence prediction. The TDA makes aspect maneuver recommendations, selects and displays the optimum mission posture, evaluates attackable and potentially attackable subsets, and recommends the 'best' attackable subset along with the required course perturbation.
Effects of selective attention on continuous opinions and discrete decisions
NASA Astrophysics Data System (ADS)
Si, Xia-Meng; Liu, Yun; Xiong, Fei; Zhang, Yan-Chao; Ding, Fei; Cheng, Hui
2010-09-01
Selective attention describes individuals' preference for information according to their motivation for involvement. Based on findings from social psychology, we propose an opinion interaction model to improve the modeling of individuals' interacting behaviors. Two parameters govern the probability of agents interacting with opponents: individual relevance and time-openness. It is found that large individual relevance and large time-openness advance the appearance of large clusters, whereas large individual relevance and small time-openness favor the lessening of extremism. We also apply the new model to identify factors leading to a successful product. Numerical simulations show that selective attention, especially individual relevance, cannot be ignored by firms launching products or by information spreaders aiming at the most successful promotion.
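The abstract does not give the exact interaction rule, but a toy version can still illustrate how an interaction probability modulated by opinion distance (standing in for individual relevance) and elapsed time (standing in for time-openness) shapes clustering. Everything below, including the functional forms and parameter values, is a hypothetical sketch, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 200, 20000
relevance = 0.8        # hypothetical individual-relevance parameter
openness = 0.5         # hypothetical time-openness parameter
mu = 0.3               # convergence rate of an interaction
x = rng.uniform(-1, 1, N)          # continuous opinions

for t in range(T):
    i, j = rng.integers(N, size=2)
    # interaction probability decays with opinion distance (relevance)
    # and with time (openness): purely illustrative functional forms
    p = relevance * np.exp(-abs(x[i] - x[j])) * openness ** (t / T)
    if rng.random() < p:
        x[i] += mu * (x[j] - x[i])     # opinions move toward each other
        x[j] += mu * (x[i] - x[j])

print(np.histogram(x, bins=10)[0])     # cluster structure of final opinions
```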
Results from the VALUE perfect predictor experiment: process-based evaluation
NASA Astrophysics Data System (ADS)
Maraun, Douglas; Soares, Pedro; Hertig, Elke; Brands, Swen; Huth, Radan; Cardoso, Rita; Kotlarski, Sven; Casado, Maria; Pongracz, Rita; Bartholy, Judit
2016-04-01
Until recently, the evaluation of downscaled climate model simulations has typically been limited to surface climatologies, including long-term means, spatial variability and extremes. But these aspects are often, at least partly, tuned in regional climate models to match observed climate. The tuning issue is of course particularly relevant for bias-corrected regional climate models. A good performance of a model for these aspects in the present climate therefore does not imply a good performance in simulating climate change. It is now widely accepted that, to increase our confidence in climate change simulations, it is necessary to evaluate how climate models simulate the relevant underlying processes; in other words, it is important to assess whether downscaling does the right thing for the right reason. Therefore, VALUE has carried out a broad process-based evaluation study based on its perfect predictor experiment simulations: the downscaling methods are driven by ERA-Interim data over the period 1979-2008, and reference observations are given by a network of 85 meteorological stations covering all European climates. More than 30 methods participated in the evaluation. In order to compare statistical and dynamical methods, only variables provided by both types of approaches could be considered. This limited the analysis to conditioning local surface variables on variables from driving processes that are simulated by ERA-Interim. We considered the following types of processes: at the continental scale, we evaluated the performance of downscaling methods for positive and negative North Atlantic Oscillation, Atlantic ridge and blocking situations. At synoptic scales, we considered Lamb weather types for selected European regions such as Scandinavia, the United Kingdom, the Iberian Peninsula and the Alps. At regional scales we considered phenomena such as the Mistral, the Bora and the Iberian coastal jet. Such process-based evaluation helps to attribute biases in surface variables to underlying processes and ultimately to improve climate models.
Song, Mingkai; Cui, Linlin; Kuang, Han; Zhou, Jingwei; Yang, Pengpeng; Zhuang, Wei; Chen, Yong; Liu, Dong; Zhu, Chenjie; Chen, Xiaochun; Ying, Hanjie; Wu, Jinglan
2018-08-10
An intermittent simulated moving bed (3F-ISMB) operation scheme, an extension of the 3W-ISMB to the non-linear adsorption region, has been introduced for the separation of a glucose, lactic acid and acetic acid ternary mixture. This work focuses on exploring the feasibility of the proposed process theoretically and experimentally. Firstly, a real 3F-ISMB model coupled with the transport dispersive model (TDM) and the Modified-Langmuir isotherm was established to build up the separation parameter plane. Subsequently, three operating conditions were selected from the plane to run the 3F-ISMB unit, and the experimental results were used to verify the model. Afterwards, the influences of the various flow rates on the separation performance were investigated systematically by means of the validated 3F-ISMB model. The intermittently retained component, lactic acid, was finally obtained with a purity of 98.5%, a recovery of 95.5% and an average concentration of 38 g/L. The proposed 3F-ISMB process can efficiently separate a mixture with low selectivity into three fractions. Copyright © 2018 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Inam, Azhar; Adamowski, Jan; Prasher, Shiv; Halbe, Johannes; Malard, Julien; Albano, Raffaele
2017-08-01
Many simulation models focus on simulating a single physical process and do not constitute balanced representations of the physical, social and economic components of a system. The present study addresses this challenge by integrating a physical (P) model (SAHYSMOD) with a group (stakeholder) built system dynamics model (GBSDM) through a component modeling approach based on widely applied tools such as MS Excel, Python and Visual Basic for Applications (VBA). The coupled model (P-GBSDM) was applied to test soil salinity management scenarios (proposed by stakeholders) for the Haveli region of the Rechna Doab Basin in Pakistan. Scenarios such as water banking, vertical drainage, canal lining, and irrigation water reallocation were simulated with the integrated model. Spatiotemporal maps and economic and environmental trade-off criteria were used to examine the effectiveness of the selected management scenarios. After 20 years of simulation, canal lining reduced soil salinity by 22% but caused an initial reduction of 18% in farm income, which requires an initial investment from the government. The government-sponsored Salinity Control and Reclamation Project (SCARP) is a short-term policy that resulted in a 37% increase in water availability with a 12% increase in farmer income. However, it showed detrimental effects on soil salinity in the long term, with a 21% increase in soil salinity due to secondary salinization. The new P-GBSDM was shown to be an effective platform for engaging stakeholders and simulating their proposed management policies while taking into account socioeconomic considerations. This was not possible using the physically based SAHYSMOD model alone.
Novel harmonic regularization approach for variable selection in Cox's proportional hazards model.
Chu, Ge-Jin; Liang, Yong; Wang, Jia-Xuan
2014-01-01
Variable selection is an important issue in regression, and a number of variable selection methods have been proposed involving nonconvex penalty functions. In this paper, we investigate a novel harmonic regularization method, which can approximate nonconvex Lq (1/2 < q < 1) regularizations, to select key risk factors in the Cox proportional hazards model using microarray gene expression data. The harmonic regularization method can be efficiently solved using our proposed direct path-seeking approach, which can produce solutions that closely approximate those for the convex loss function with the nonconvex regularization. Simulation results based on artificial datasets and four real microarray gene expression datasets, including diffuse large B-cell lymphoma (DLBCL), lung cancer, and AML datasets, show that the harmonic regularization method can be more accurate for variable selection than existing Lasso-series methods.
NASA Astrophysics Data System (ADS)
Zou, Rui; Riverson, John; Liu, Yong; Murphy, Ryan; Sim, Youn
2015-03-01
Integrated continuous simulation-optimization models can be effective predictors of process-based responses for cost-benefit optimization of best management practice (BMP) selection and placement. However, the practical application of simulation-optimization models is computationally prohibitive for large-scale systems. This study proposes an enhanced Nonlinearity Interval Mapping Scheme (NIMS) to solve large-scale watershed simulation-optimization problems several orders of magnitude faster than other commonly used algorithms. An efficient interval response coefficient (IRC) derivation method was incorporated into the NIMS framework to overcome a computational bottleneck. The proposed algorithm was evaluated using a case study watershed in the Los Angeles County Flood Control District. Using a continuous simulation watershed/stream-transport model, Loading Simulation Program in C++ (LSPC), three nested in-stream compliance points (CP), each with multiple Total Maximum Daily Load (TMDL) targets, were selected to derive optimal treatment levels for each of the 28 subwatersheds, so that the TMDL targets at all the CPs were met with the lowest possible BMP implementation cost. The Genetic Algorithm (GA) and NIMS were both applied and compared. The results showed that NIMS took 11 iterations (about 11 min) to complete, with the resulting optimal solution having a total cost of 67.2 million, while each of the multiple GA executions took 21-38 days to reach near-optimal solutions. The best solution obtained among all the GA executions had a minimized cost of 67.7 million, marginally higher but approximately equal to that of the NIMS solution. The results highlight the utility of the approach for decision making in large-scale watershed simulation-optimization formulations.
Capoccia, Massimo; Marconi, Silvia; Singh, Sanjeet Avtaar; Pisanelli, Domenico M; De Lazzari, Claudio
2018-05-02
Modelling and simulation may become clinically applicable tools for detailed evaluation of the cardiovascular system and clinical decision-making to guide therapeutic intervention. Models based on the pressure-volume relationship and a zero-dimensional representation of the cardiovascular system may be a suitable choice given their simplicity and versatility. This approach has great potential for application in heart failure, where left ventricular assist devices have played a significant role as a bridge to transplant and, more recently, as a long-term solution for non-eligible candidates. We sought to investigate the value of simulation in the context of three heart failure patients with a view to predicting or guiding further management. CARDIOSIM© was the software used for this purpose. The study was based on retrospective analysis of haemodynamic data previously discussed at a multidisciplinary meeting. The outcome of the simulations addressed the value of a more quantitative approach in the clinical decision process. Although previous experience, co-morbidities and the risk of potentially fatal complications play a role in clinical decision-making, patient-specific modelling may become a daily approach for selection and optimisation of device-based treatment for heart failure patients. Willingness to adopt this integrated approach may be the key to further progress.
Terrain modeling for real-time simulation
NASA Astrophysics Data System (ADS)
Devarajan, Venkat; McArthur, Donald E.
1993-10-01
There are many applications, such as pilot training, mission rehearsal, and hardware-in-the-loop simulation, which require the generation of realistic images of terrain and man-made objects in real time. One approach to meeting this requirement is to drape photo-texture over a planar-polygon model of the terrain. The real-time system then computes, for each pixel of the output image, the address in a texture map based on the intersection of the line-of-sight vector with the terrain model. High-quality image generation requires that the terrain be modeled with a fine mesh of polygons, while hardware costs limit the number of polygons which may be displayed for each scene. The trade-off between these conflicting requirements must be made in real time because it depends on the changing position and orientation of the pilot's eye point or simulated sensor. The traditional approach is to develop a database consisting of multiple levels of detail (LOD) and then select LODs for display as a function of range. This approach can lead both to anomalies in the displayed scene and to inefficient use of resources. An approach has been developed in which the terrain is modeled with a set of nested polygons and organized as a tree, with each node corresponding to a polygon. This tree is pruned to select the optimum set of nodes for each eye-point position. As the point of view moves, the visibility of some nodes drops below the limit of perception and they may be deleted, while new nodes must be added in regions near the eye point. An analytical model has been developed to determine the number of polygons required for display. This model leads to quantitative performance measures of the triangulation algorithm, which are useful for optimizing system performance with a limited display capability.
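The nested-polygon tree pruning can be conveyed with a small sketch: a node is refined only while its angular error at the current eye point remains perceptible and the polygon budget allows the extra children. The perception threshold, the Node fields, and the budget mechanics are assumptions for illustration, not the paper's algorithm.

```python
import math
from dataclasses import dataclass, field

@dataclass
class Node:
    center: tuple                      # polygon centroid (x, y, z)
    error: float                       # geometric error of this LOD level
    children: list = field(default_factory=list)

def select_lods(node, eye, budget, out):
    """Prune the nested-polygon tree for one eye point: refine a node only
    while its angular error is still perceptible and the polygon budget
    allows the extra children (threshold test is illustrative)."""
    dist = max(math.dist(node.center, eye), 1e-6)
    perceptible = node.error / dist > 1e-3          # hypothetical limit
    if perceptible and node.children and budget[0] >= len(node.children):
        budget[0] -= len(node.children) - 1         # k children replace 1 node
        for child in node.children:
            select_lods(child, eye, budget, out)
    else:
        out.append(node)                            # display at this LOD

# usage sketch:
# leaves = []; select_lods(root, eye=(0.0, 0.0, 100.0), budget=[5000], out=leaves)
```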
Impact and Penetration Simulations for Composite Wing-like Structures
NASA Technical Reports Server (NTRS)
Knight, Norman F.
1998-01-01
The goal of this research project was to develop methodologies for the analysis of wing-like structures subjected to impact loadings. Low-speed impact causing either no damage or only minimal damage and high-speed impact causing severe laminate damage and possible penetration of the structure were to be considered during this research effort. To address this goal, an assessment of current analytical tools for impact analysis was performed. The analytical tools for impact and penetration simulations were assessed with regard to accuracy, modeling capability, and damage modeling, as well as robustness, efficiency, and usability in a wing design environment. Following the qualitative assessment, selected quantitative evaluations were to be performed using the leading simulation tools. Based on this assessment, future research thrusts for impact and penetration simulation of composite wing-like structures were identified.
Zhu, Guanhua; Liu, Wei; Bao, Chenglong; Tong, Dudu; Ji, Hui; Shen, Zuowei; Yang, Daiwen; Lu, Lanyuan
2018-05-01
The structural variations of multidomain proteins with flexible parts mediate many biological processes, and a structure ensemble can be determined by selecting a weighted combination of representative structures from a simulated structure pool, producing the best fit to experimental constraints such as interatomic distance. In this study, a hybrid structure-based and physics-based atomistic force field with an efficient sampling strategy is adopted to simulate a model di-domain protein against experimental paramagnetic relaxation enhancement (PRE) data that correspond to distance constraints. The molecular dynamics simulations produce a wide range of conformations depicted on a protein energy landscape. Subsequently, a conformational ensemble recovered with low-energy structures and the minimum-size restraint is identified in good agreement with experimental PRE rates, and the result is also supported by chemical shift perturbations and small-angle X-ray scattering data. It is illustrated that the regularizations of energy and ensemble size prevent an arbitrary interpretation of protein conformations. Moreover, energy is found to serve as a critical control to refine the structure pool and prevent data overfitting, because the absence of energy regularization exposes ensemble construction to the noise from high-energy structures and causes a more ambiguous representation of protein conformations. Finally, we perform structure-ensemble optimizations with a topology-based structure pool, to enhance the understanding of the ensemble results from different sources of pool candidates. © 2018 Wiley Periodicals, Inc.
Kasmarek, Mark C.
2012-01-01
The MODFLOW-2000 groundwater flow model described in this report comprises four layers, one for each of the hydrogeologic units of the aquifer system except the Catahoula confining system, the assumed no-flow base of the system. The HAGM is composed of 137 rows and 245 columns of 1-square-mile grid cells with lateral no-flow boundaries at the extent of each hydrogeologic unit to the northwest, at groundwater divides associated with large rivers to the southwest and northeast, and at the downdip limit of freshwater to the southeast. The model was calibrated within the specified criteria by using trial-and-error adjustment of selected model-input data in a series of transient simulations until the model output (potentiometric surfaces, land-surface subsidence, and selected water-budget components) acceptably reproduced field-measured (or estimated) aquifer responses, including water levels and subsidence. The HAGM-simulated subsidence generally compared well to 26 Predictions Relating Effective Stress to Subsidence (PRESS) models in Harris, Galveston, and Fort Bend Counties. Simulated HAGM results indicate that as much as 10 feet (ft) of subsidence has occurred in southeastern Harris County. Measured subsidence and model results indicate that a larger geographic area encompassing this area of maximum subsidence and much of central to southeastern Harris County has subsided at least 6 ft. For the western part of the study area, the HAGM simulated as much as 3 ft of subsidence in Wharton, Jackson, and Matagorda Counties. For the eastern part of the study area, the HAGM simulated as much as 3 ft of subsidence at the boundary of Hardin and Jasper Counties. Additionally, in the southeastern part of the study area in Orange County, the HAGM simulated as much as 3 ft of subsidence. Measured subsidence for these areas in the western and eastern parts of the HAGM has not been documented.
Ranking streamflow model performance based on Information theory metrics
NASA Astrophysics Data System (ADS)
Martinez, Gonzalo; Pachepsky, Yakov; Pan, Feng; Wagener, Thorsten; Nicholson, Thomas
2016-04-01
Accuracy-based model performance metrics do not necessarily reflect the qualitative correspondence between simulated and measured streamflow time series. The objective of this work was to examine whether information theory-based metrics can be used as a complementary tool for hydrologic model evaluation and selection. We simulated 10-year streamflow time series in five watersheds located in Texas, North Carolina, Mississippi, and West Virginia. Eight models of different complexity were applied. The information theory-based metrics were obtained after representing the time series as strings of symbols, where different symbols corresponded to different quantiles of the probability distribution of streamflow; a fixed symbol alphabet was used. Three metrics were computed for those strings: mean information gain, which measures the randomness of the signal; effective measure complexity, which characterizes predictability; and fluctuation complexity, which characterizes the presence of a pattern in the signal. The observed streamflow time series had smaller information content and larger complexity metrics than the precipitation time series; streamflow was less random and more complex than precipitation. This reflects the fact that the watershed acts as an information filter in the hydrologic conversion process from precipitation to streamflow. The Nash-Sutcliffe efficiency increased with model complexity, but in many cases several models had efficiency values that were not statistically distinguishable from each other. In such cases, ranking models by the closeness of the information theory-based metrics of simulated and measured streamflow time series can provide an additional criterion for the evaluation of hydrologic model performance.
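Of the three metrics, the mean information gain is the simplest to reproduce: symbolize the series by quantiles of its own distribution, then estimate the conditional entropy of the next symbol given the preceding word. The sketch below assumes a four-symbol alphabet and word length L = 3; both are illustrative choices, not necessarily those of the study.

```python
import numpy as np
from collections import Counter

def symbolize(x, n_symbols=4):
    """Map a time series to symbols via quantiles of its distribution."""
    cuts = np.quantile(x, np.linspace(0, 1, n_symbols + 1)[1:-1])
    return np.digitize(x, cuts)

def mean_information_gain(sym, L=3):
    """Conditional entropy (bits) of the next symbol given the preceding
    L-1 symbols, estimated from word frequencies."""
    words = [tuple(sym[i:i + L]) for i in range(len(sym) - L + 1)]
    word_counts = Counter(words)
    prefix_counts = Counter(w[:-1] for w in words)
    n = len(words)
    return -sum(c / n * np.log2(c / prefix_counts[w[:-1]])
                for w, c in word_counts.items())

flow = np.random.default_rng(0).lognormal(size=3650)   # stand-in daily flow
print(mean_information_gain(symbolize(flow)))
```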
Accelerating rejection-based simulation of biochemical reactions with bounded acceptance probability
NASA Astrophysics Data System (ADS)
Thanh, Vo Hong; Priami, Corrado; Zunino, Roberto
2016-06-01
Stochastic simulation of large biochemical reaction networks is often computationally expensive due to the disparate reaction rates and the high variability of the populations of chemical species. An approach to accelerating the simulation is to allow multiple reaction firings before performing an update, by assuming that reaction propensities change by a negligible amount during a time interval. Species with small populations involved in the firings of fast reactions significantly affect both the performance and the accuracy of this simulation approach, and the problem is even worse when these small-population species are involved in a large number of reactions. We present in this paper a new approximate algorithm to cope with this problem. It is based on bounding the acceptance probability of a reaction selected by the exact rejection-based simulation algorithm, which employs propensity bounds of reactions and a rejection-based mechanism to select the next reaction firings. A reaction is guaranteed to be selected to fire with an acceptance rate greater than a predefined probability, and the selection becomes exact if this probability is set to one. Our new algorithm improves the computational cost of selecting the next reaction firing and reduces the cost of updating reaction propensities.
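A compact sketch of the exact rejection-based selection mechanism that the proposed algorithm builds on: candidates are drawn in proportion to propensity upper bounds and accepted with ratio a_j/ub_j, so exact propensities are evaluated only for candidates, and each trial (accepted or not) accumulates an exponential waiting time. The Reaction class and its interface are hypothetical stand-ins.

```python
import numpy as np

class Reaction:
    """Hypothetical stand-in: a propensity function plus an upper bound."""
    def __init__(self, propensity, bound):
        self.propensity, self.bound = propensity, bound

def rssa_step(state, reactions, rng):
    """One firing of the exact rejection-based SSA: candidates drawn from
    propensity upper bounds, accepted with probability a_j / ub_j."""
    ub = np.array([r.bound for r in reactions], float)
    total = ub.sum()
    t = rng.exponential(1.0 / total)
    while True:
        j = rng.choice(len(reactions), p=ub / total)
        a_j = reactions[j].propensity(state)       # exact value, lazily
        if rng.random() * ub[j] < a_j:             # acceptance test
            return t, j                            # firing time and index
        t += rng.exponential(1.0 / total)          # rejected trial costs time

rng = np.random.default_rng(0)
state = {"S": 100}
rxns = [Reaction(lambda s: 0.1 * s["S"], bound=12.0)]  # bound >= max propensity
print(rssa_step(state, rxns, rng))
```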
Modeling Citation Networks Based on Vigorousness and Dormancy
NASA Astrophysics Data System (ADS)
Wang, Xue-Wen; Zhang, Li-Jie; Yang, Guo-Hong; Xu, Xin-Jian
2013-08-01
In citation networks, the activity of papers usually decreases with age, and dormant papers may be rediscovered and become fashionable again. To model this phenomenon, a competition mechanism is suggested that incorporates two factors: vigorousness and dormancy. Based on this idea, a citation network model is proposed in which a node has two discrete states: vigorous and dormant. Vigorous nodes can be deactivated, and dormant nodes may be activated and become vigorous. The evolution of the network couples the addition of new nodes with state transitions of old ones. Both analytical calculation and numerical simulation show that the degree distribution of nodes in the generated networks displays clear right-skewed behavior. In particular, scale-free networks are obtained when the deactivated vertex is selected by a targeted rule, and exponential networks are realized in the random-selection case. Moreover, measurements of four real-world citation networks achieve good agreement with the stochastic model.
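A toy growth loop conveys the vigorousness/dormancy mechanics: each new paper cites a vigorous paper preferentially by degree, the oldest vigorous node is deactivated once the pool exceeds a fixed size (a crude stand-in for targeted deactivation), and a dormant paper occasionally reawakens. Pool size and reactivation probability are illustrative, not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
deg, vigorous = [0], {0}        # one initial vigorous paper
POOL, P_WAKE = 20, 0.01         # hypothetical pool size / revival probability

for t in range(1, 5000):
    # a new paper cites a vigorous paper, preferentially by degree
    nodes = list(vigorous)
    w = np.array([deg[i] + 1 for i in nodes], float)
    cited = nodes[rng.choice(len(nodes), p=w / w.sum())]
    deg[cited] += 1
    deg.append(0)               # the new paper enters the vigorous pool...
    vigorous.add(t)
    if len(vigorous) > POOL:    # ...and the oldest vigorous one goes dormant
        vigorous.remove(min(vigorous))
    if rng.random() < P_WAKE:   # a dormant paper is occasionally rediscovered
        dormant = list(set(range(t)) - vigorous)
        if dormant:
            vigorous.add(int(rng.choice(dormant)))

print(sorted(deg)[-10:])        # right-skewed in-degree distribution
```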
Evaluation of a two-dimensional numerical model for air quality simulation in a street canyon
NASA Astrophysics Data System (ADS)
Okamoto, Shin'ichi; Lin, Fu Chi; Yamada, Hiroaki; Shiozawa, Kiyoshige
For many urban areas, the most severe air pollution caused by automobile emissions appears along a road surrounded by tall buildings: the so-called street canyon. A practical two-dimensional numerical model has been developed to be applied to this kind of road structure. This model contains two submodels: a wind-field model and a diffusion model based on a Monte Carlo particle scheme. In order to evaluate the predictive performance of this model, an air quality simulation was carried out at three trunk roads in the Tokyo metropolitan area: Nishi-Shimbashi, Aoyama and Kanda-Nishikicho (using SF6 as a tracer and NOx measurements). Since this model is two-dimensional and cannot be used for the parallel wind condition, the perpendicular wind condition was selected for the simulation. The correlation coefficients for the SF6 and NOx data in Aoyama were 0.67 and 0.62, respectively. In predictive performance, this model is comparable to the SRI model and superior to the APPS three-dimensional numerical model.
Uses and abuses of multipliers in the stand prognosis model
David A. Hamilton
1994-01-01
Users of the Stand Prognosis Model may have difficulties in selecting the proper set of multipliers to simulate a desired effect or in determining the appropriate value to assign to selected multipliers. A series of examples describes the impact of multipliers on simulated stand development. Guidelines for the proper use of multipliers are presented....
Selected Urban Simulations and Games. IFF Working Paper WP-4.
ERIC Educational Resources Information Center
Nagelberg, Mark; Little, Dennis L.
Summary descriptions of selected urban simulations and games that have been developed outside the Institute For The Future are presented. The operating characteristics and potential applications of each model are described. These include (1) the history of development, (2) model and player requirements, (3) a description of the environment being…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zou, Liwei; Qian, Yun; Zhou, Tianjun
2014-10-01
In this study, we calibrated the performance of the regional climate model RegCM3 with the Massachusetts Institute of Technology (MIT)-Emanuel cumulus parameterization scheme over the CORDEX East Asia domain by tuning seven selected parameters through the multiple very fast simulated annealing (MVFSA) sampling method. The seven parameters were selected based on previous studies that customized RegCM3 with the MIT-Emanuel scheme in three different ways using sensitivity experiments. The responses of the model results to the seven parameters were investigated. Since the monthly total rainfall is constrained, the simulated spatial pattern of rainfall and the probability density function (PDF) of daily rainfall rates are significantly improved in the optimal simulation. Sensitivity analysis suggests that the parameter "relative humidity criteria" (RH), which was not considered in the default simulation, has the largest effect on the model results. The responses of total rainfall over different regions to RH were examined. Positive responses of total rainfall to RH are found over the northern equatorial western Pacific, contributed by the positive responses of explicit rainfall. Following an increase of RH, increases in low-level convergence and the associated increases in cloud water favor an increase of the explicit rainfall. The identified optimal parameters constrained by the total rainfall have positive effects on the low-level circulation and the surface air temperature. Furthermore, the optimized parameters based on the extreme case are suitable for a normal case and for the model's new version with a mixed convection scheme.
A detailed numerical simulation of a liquid-propellant rocket engine ground test experiment
NASA Astrophysics Data System (ADS)
Lankford, D. W.; Simmons, M. A.; Heikkinen, B. D.
1992-07-01
A computational simulation of a Liquid Rocket Engine (LRE) ground test experiment was performed using two modeling approaches. The results of the models were compared with selected data to assess the validity of state-of-the-art computational tools for predicting the flowfield and radiative transfer in complex flow environments. The data used for comparison consisted of in-band station radiation measurements obtained in the near-field portion of the plume exhaust. The test article was a subscale LRE with an afterbody, resulting in a large base region. The flight conditions were such that afterburning regions were observed in the plume flowfield. A conventional modeling approach underpredicted the extent of afterburning and the associated radiation levels. These results were attributed to the base flow region, which is not accounted for in this model. To assess the effects of the base region, a Navier-Stokes model was applied. The results of this calculation indicate that the base recirculation effects are dominant features in the immediate expansion region, and this resulted in a much improved comparison. However, the downstream in-band station radiation data remained underpredicted by this model.
Hyunwoo Kim; Devendra M. Amatya; Stephen W. Broome; Dean L. Hesterberg; Minha Choi
2011-01-01
The DRAINWAT model (DRAINMOD for WATersheds) was selected for hydrological modelling to obtain water table depths and drainage outflows at Open Grounds Farm in Carteret County, North Carolina, USA. Six simulated storm events from the study period were compared with the measured data and analysed. Simulation results from the whole study period and selected rainfall...
Chromatic energy filter and characterization of laser-accelerated proton beams for particle therapy
NASA Astrophysics Data System (ADS)
Hofmann, Ingo; Meyer-ter-Vehn, Jürgen; Yan, Xueqing; Al-Omari, Husam
2012-07-01
The application of laser-accelerated protons or ions for particle therapy has to cope with relatively large energy and angular spreads as well as possibly significant random fluctuations. We suggest a method for combined focusing and energy selection, which is an effective alternative to the commonly considered dispersive energy selection by magnetic dipoles. Our method is based on the chromatic effect of a magnetic solenoid (or any other energy-dependent focusing device) in combination with an aperture to select a certain energy width defined by the aperture radius. It is applied to an initial 6D phase space distribution of protons following the simulation output from a Radiation Pressure Acceleration model. Analytical formulas for the selection aperture and chromatic emittance are confirmed by simulation results using the TRACEWIN code. The energy selection is supported by properly placed scattering targets to remove the imprint of the chromatic effect on the beam and to enable well-controlled and shot-to-shot reproducible energy and transverse density profiles.
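The chromatic selection principle can be summarized with a back-of-envelope scaling, assuming a thin solenoid lens; the symbols and the small-offset approximation below are generic accelerator-physics estimates, not the specific formulas confirmed in the paper.

```latex
% Thin-lens solenoid: focal length scales with momentum squared
\frac{1}{f} = \left(\frac{qB_0}{2p}\right)^{2} L
\qquad\Longrightarrow\qquad
\frac{\delta f}{f} = 2\,\frac{\delta p}{p}.
% A ray of divergence \theta with momentum offset \delta p crosses the
% nominal focal plane at radius r \approx 2\,\theta f\,\delta p/p, so an
% aperture of radius r_a passes roughly |\delta p/p| \le r_a/(2\theta f).
```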
Estimating and Identifying Unspecified Correlation Structure for Longitudinal Data
Hu, Jianhua; Wang, Peng; Qu, Annie
2014-01-01
Identifying correlation structure is important to achieving estimation efficiency in analyzing longitudinal data, and is also crucial for drawing valid statistical inference for large size clustered data. In this paper, we propose a nonparametric method to estimate the correlation structure, which is applicable for discrete longitudinal data. We utilize eigenvector-based basis matrices to approximate the inverse of the empirical correlation matrix and determine the number of basis matrices via model selection. A penalized objective function based on the difference between the empirical and model approximation of the correlation matrices is adopted to select an informative structure for the correlation matrix. The eigenvector representation of the correlation estimation is capable of reducing the risk of model misspecification, and also provides useful information on the specific within-cluster correlation pattern of the data. We show that the proposed method possesses the oracle property and selects the true correlation structure consistently. The proposed method is illustrated through simulations and two data examples on air pollution and sonar signal studies. PMID:26361433
A hybrid intelligent algorithm for portfolio selection problem with fuzzy returns
NASA Astrophysics Data System (ADS)
Li, Xiang; Zhang, Yang; Wong, Hau-San; Qin, Zhongfeng
2009-11-01
Portfolio selection theory with fuzzy returns has been well developed and widely applied. Within the framework of credibility theory, several fuzzy portfolio selection models have been proposed, such as the mean-variance model, the entropy optimization model, and the chance-constrained programming model. In order to solve these nonlinear optimization models, a hybrid intelligent algorithm is designed by integrating a simulated annealing algorithm, a neural network and fuzzy simulation techniques, where the neural network is used to approximate the expected value and variance of fuzzy returns and the fuzzy simulation is used to generate the training data for the neural network. Since these models have usually been solved by genetic algorithms, comparisons between the hybrid intelligent algorithm and the genetic algorithm are given on numerical examples, which imply that the hybrid intelligent algorithm is robust and more effective; in particular, it reduces the running time significantly for large problems.
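The simulated annealing component can be sketched against a plain mean-variance objective; the fuzzy expected values and variances that the paper approximates with a neural network are replaced here by fixed illustrative numbers, and the cooling schedule is an assumption.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10
exp_ret = rng.uniform(0.02, 0.12, n)        # stand-ins for the fuzzy-return
cov = np.diag(rng.uniform(0.01, 0.05, n))   # estimates (normally from the NN)
lam = 3.0                                   # risk-aversion weight

def cost(w):
    """Mean-variance objective: penalized risk minus expected return."""
    return lam * w @ cov @ w - exp_ret @ w

w, T = np.full(n, 1.0 / n), 1.0
for step in range(20000):
    # random feasible perturbation, renormalized onto the simplex
    cand = np.abs(w + rng.normal(0, 0.02, n))
    cand /= cand.sum()
    dE = cost(cand) - cost(w)
    if dE < 0 or rng.random() < np.exp(-dE / T):   # Metropolis acceptance
        w = cand
    T *= 0.9997                                    # geometric cooling

print(w.round(3))                                  # final portfolio weights
```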
A Biologically Inspired Computational Model of Basal Ganglia in Action Selection
Baston, Chiara
2015-01-01
The basal ganglia (BG) are a subcortical structure implicated in action selection. The aim of this work is to present a new cognitive neuroscience model of the BG, which aspires to represent a parsimonious balance between simplicity and completeness. The model includes the 3 main pathways operating in the BG circuitry, that is, the direct (Go), indirect (NoGo), and hyperdirect pathways. The main original aspects, compared with previous models, are the use of a two-term Hebb rule to train synapses in the striatum, based exclusively on neuronal activity changes caused by dopamine peaks or dips, and the role of the cholinergic interneurons (affected by dopamine themselves) during learning. Some examples are displayed, concerning a few paradigmatic cases: action selection in basal conditions, action selection in the presence of a strong conflict (where the role of the hyperdirect pathway emerges), synapse changes induced by phasic dopamine, and learning new actions based on a previous history of rewards and punishments. Finally, some simulations show the model working under conditions of altered dopamine levels, to illustrate pathological cases (dopamine depletion in parkinsonian subjects or dopamine hypermedication). Due to its parsimonious approach, the model may represent a straightforward tool to analyze BG functionality in behavioral experiments. PMID:26640481
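The two-term Hebb rule can be sketched as a weight change driven by presynaptic activity times the dopamine-induced change in postsynaptic activity. The Go/NoGo sign convention, the learning rate, and the activity patterns below are assumptions for illustration, not the published model.

```python
import numpy as np

def hebb_update(w, pre, post_change, eps=0.01):
    """Two-term Hebb rule in the spirit of the abstract: weight change is
    presynaptic activity times the change in postsynaptic activity caused
    by a dopamine peak (post_change > 0) or dip (post_change < 0)."""
    return w + eps * np.outer(post_change, pre)

pre = np.array([1.0, 0.0, 0.5])         # cortical input pattern
w = np.zeros((2, 3))                    # Go / NoGo striatal weights
da = 1.0                                # dopamine peak after reward
post_change = np.array([+da, -da])      # Go excited, NoGo inhibited (assumed)
w = hebb_update(w, pre, post_change)
print(w)                                # Go row strengthened for active inputs
```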
Detecting consistent patterns of directional adaptation using differential selection codon models.
Parto, Sahar; Lartillot, Nicolas
2017-06-23
Phylogenetic codon models are often used to characterize the selective regimes acting on protein-coding sequences. Recent methodological developments have led to models explicitly accounting for the interplay between mutation and selection, by modeling the amino acid fitness landscape along the sequence. However, thus far, most of these models have assumed that the fitness landscape is constant over time. Fluctuations of the fitness landscape may often be random or depend on complex and unknown factors. However, some organisms may be subject to systematic changes in selective pressure, resulting in reproducible molecular adaptations across independent lineages subject to similar conditions. Here, we introduce a codon-based differential selection model, which aims to detect and quantify the fine-grained consistent patterns of adaptation at the protein-coding level, as a function of external conditions experienced by the organism under investigation. The model parameterizes the global mutational pressure, as well as the site- and condition-specific amino acid selective preferences. This phylogenetic model is implemented in a Bayesian MCMC framework. After validation with simulations, we applied our method to a dataset of HIV sequences from patients with known HLA genetic background. Our differential selection model detects and characterizes differentially selected coding positions specifically associated with two different HLA alleles. Our differential selection model is able to identify consistent molecular adaptations as a function of repeated changes in the environment of the organism. These models can be applied to many other problems, ranging from viral adaptation to evolution of life-history strategies in plants or animals.
López, Iván; Borzacconi, Liliana
2010-10-01
A model based on the work of Angelidaki et al. (1993) was applied to simulate the anaerobic biodegradation of ruminal contents. In this study, two fractions of solids with different biodegradation rates were considered: a first-order kinetic was used for the easily biodegradable fraction, and a kinetic expression that is a function of the extracellular enzyme concentration was used for the slowly biodegradable fraction. Batch experiments were performed to obtain an accumulated methane curve that was then used to obtain the model parameters. For this determination, a methodology derived from the "multiple-shooting" method was successfully used, and Monte Carlo simulations allowed a confidence range to be obtained for each parameter. Simulations of a continuous reactor were performed using the optimal set of model parameters, and the final steady states were determined as functions of the operational conditions (solids load and residence time). The simulations showed that methane flow peaked at 0.5-0.8 Nm³ per day per m³ of reactor at a residence time of 10-20 days. Simulations allow the adequate selection of the operating conditions of a continuous reactor. (c) 2010 Elsevier Ltd. All rights reserved.
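A minimal two-fraction ODE sketch in the spirit of the abstract (not the Angelidaki formulation itself): first-order hydrolysis for the fast fraction, an enzyme-dependent rate for the slow fraction, and methane accumulated from both. All rate constants and the enzyme dynamics are hypothetical.

```python
import numpy as np
from scipy.integrate import solve_ivp

# hypothetical parameters: k1 for the easily biodegradable fraction,
# an enzyme-mediated rate k2*E/(K+E) for the slowly biodegradable one
k1, k2, K, kE, kd, Y = 0.30, 0.15, 0.5, 0.05, 0.1, 0.35   # per day, illustrative

def rhs(t, y):
    S1, S2, E, M = y            # fast solids, slow solids, enzyme, methane
    r1 = k1 * S1                # first-order hydrolysis
    r2 = k2 * E / (K + E) * S2  # enzyme-dependent hydrolysis
    return [-r1, -r2, kE * (r1 + r2) - kd * E, Y * (r1 + r2)]

sol = solve_ivp(rhs, (0, 60), [10.0, 20.0, 0.1, 0.0])
print(sol.y[3, -1])             # accumulated methane after 60 days
```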
Physically-based modelling of high magnitude torrent events with uncertainty quantification
NASA Astrophysics Data System (ADS)
Wing-Yuen Chow, Candace; Ramirez, Jorge; Zimmermann, Markus; Keiler, Margreth
2017-04-01
High magnitude torrent events are associated with the rapid propagation of vast quantities of water and available sediment downslope, where human settlements may be established. Assessing the vulnerability of built structures to these events is a part of consequence analysis, in which hazard intensity is related to the degree of loss sustained. The specific contribution of the presented work is a procedure to simulate these damaging events with physically-based modelling and to include uncertainty information about the simulated results. This is a first step in the development of vulnerability curves based on several intensity parameters (i.e. maximum velocity, sediment deposition depth and impact pressure). The investigation process begins with the collection, organization and interpretation of detailed post-event documentation and photograph-based observation data of affected structures at three sites that exemplify the impact of highly destructive mudflows and flood occurrences on settlements in Switzerland. Hazard intensity proxies are then simulated with the physically-based FLO-2D model (O'Brien et al., 1993). Prior to modelling, global sensitivity analysis is conducted to support a better understanding of model behaviour, parameterization and the quantification of uncertainties (Song et al., 2015). The inclusion of information describing the degree of confidence in the simulated results supports the credibility of vulnerability curves developed with the modelled data. First, key parameters are identified and selected based on a literature review. Truncated a priori ranges of parameter values are then defined by expert elicitation. Local sensitivity analysis is performed based on manual calibration to provide an understanding of the parameters relevant to the case studies of interest. Finally, automated parameter estimation is performed to search comprehensively for optimal parameter combinations and associated values, which are evaluated using the observed data collected in the first stage of the investigation. O'Brien, J.S., Julien, P.Y., Fullerton, W.T., 1993. Two-dimensional water flood and mudflow simulation. Journal of Hydraulic Engineering 119(2): 244-261. Song, X., Zhang, J., Zhan, C., Xuan, Y., Ye, M., Xu, C., 2015. Global sensitivity analysis in hydrological modeling: Review of concepts, methods, theoretical frameworks. Journal of Hydrology 523: 739-757.
Control theory analysis of a three-axis VTOL flight director. M.S. Thesis - Pennsylvania State Univ.
NASA Technical Reports Server (NTRS)
Niessen, F. R.
1971-01-01
A control theory analysis of a VTOL flight director and the results of a fixed-base simulator evaluation of the flight-director commands are discussed. The VTOL configuration selected for this study is a helicopter-type VTOL which controls the direction of the thrust vector by means of vehicle-attitude changes and, furthermore, employs high-gain attitude stabilization. This configuration is the same as one which was simulated in actual instrument flight tests with a variable stability helicopter. Stability analyses are made for each of the flight-director commands, assuming a single input-output, multi-loop system model for each control axis. The analyses proceed from the inner loops to the outer loops, using an analytical pilot model selected on the basis of the innermost-loop dynamics. The time response of the analytical model of the system is primarily used to adjust system gains, while root locus plots are used to identify dominant modes and mode interactions.
Russ, Stefanie
2014-08-01
It is shown that a two-component percolation model on a simple cubic lattice can explain an experimentally observed behavior [Savage et al., Sens. Actuators B 79, 17 (2001); Sens. Actuators B 72, 239 (2001)], namely, that a network built up from a mixture of sintered nanocrystalline semiconducting n and p grains can exhibit selective behavior, i.e., respond with a resistance increase when exposed to a reducing gas A and with a resistance decrease in response to another reducing gas B. To this end, a simple model is developed in which the n and p grains are simulated by overlapping spheres, based on realistic assumptions about the gas reactions on the grain surfaces. The resistance is calculated by random-walk simulations with nn, pp, and np bonds between the grains, and the results are in very good agreement with the experiments. Contrary to former assumptions, the np bonds are crucial to obtaining this agreement.
PhreeqcRM: A reaction module for transport simulators based on the geochemical model PHREEQC
Parkhurst, David L.; Wissmeier, Laurin
2015-01-01
PhreeqcRM is a geochemical reaction module designed specifically to perform equilibrium and kinetic reaction calculations for reactive transport simulators that use an operator-splitting approach. The basic function of the reaction module is to take component concentrations from the model cells of the transport simulator, run geochemical reactions, and return updated component concentrations to the transport simulator. If multicomponent diffusion is modeled (e.g., Nernst–Planck equation), then aqueous species concentrations can be used instead of component concentrations. The reaction capabilities are a complete implementation of the reaction capabilities of PHREEQC. In each cell, the reaction module maintains the composition of all of the reactants, which may include minerals, exchangers, surface complexers, gas phases, solid solutions, and user-defined kinetic reactants. PhreeqcRM assigns initial and boundary conditions for model cells based on standard PHREEQC input definitions (files or strings) of chemical compositions of solutions and reactants. Additional PhreeqcRM capabilities include methods to eliminate reaction calculations for inactive parts of a model domain, transfer concentrations and other model properties, and retrieve selected results. The module demonstrates good scalability for parallel processing by using multiprocessing with MPI (message passing interface) on distributed memory systems, and limited scalability using multithreading with OpenMP on shared memory systems. PhreeqcRM is written in C++, but interfaces allow methods to be called from C or Fortran. By using the PhreeqcRM reaction module, an existing multicomponent transport simulator can be extended to simulate a wide range of geochemical reactions. Results of the implementation of PhreeqcRM as the reaction engine for transport simulators PHAST and FEFLOW are shown by using an analytical solution and the reactive transport benchmark of MoMaS.
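The operator-splitting coupling described above reduces to a simple per-step loop. The objects and method names below are hypothetical stand-ins for whatever interface the host transport simulator wraps around a reaction module; they are not the actual PhreeqcRM API.

```python
class TransportStub:
    """Hypothetical one-cell transport stand-in (no real advection)."""
    def __init__(self, conc): self.conc = conc
    def step(self, dt): return self.conc              # placeholder transport
    def set_concentrations(self, c): self.conc = c

class ReactionStub:
    """Hypothetical reaction-module stand-in, not the PhreeqcRM API."""
    def __init__(self): self.conc = None
    def set_concentrations(self, c): self.conc = c
    def run_cells(self, dt): self.conc = [x * 0.9 for x in self.conc]  # fake chemistry
    def get_concentrations(self): return self.conc

def advance_one_step(transport, reactions, dt):
    conc = transport.step(dt)                 # 1. advect/disperse components
    reactions.set_concentrations(conc)        # 2. hand cell data to chemistry
    reactions.run_cells(dt)                   #    equilibrium + kinetics per cell
    conc = reactions.get_concentrations()     # 3. updated components back
    transport.set_concentrations(conc)
    return conc

print(advance_one_step(TransportStub([1.0, 2.0]), ReactionStub(), dt=3600.0))
```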
Response to Selection in Finite Locus Models with Nonadditive Effects.
Esfandyari, Hadi; Henryon, Mark; Berg, Peer; Thomasen, Jørn Rind; Bijma, Piter; Sørensen, Anders Christian
2017-05-01
Under the finite-locus model in the absence of mutation, the additive genetic variation is expected to decrease when directional selection is acting on a population, according to quantitative-genetic theory. However, some theoretical studies of selection suggest that the level of additive variance can be sustained or even increased when nonadditive genetic effects are present. We tested the hypothesis that finite-locus models with both additive and nonadditive genetic effects maintain more additive genetic variance (VA) and realize larger medium- to long-term genetic gains than models with only additive effects when the trait under selection is subject to truncation selection. Four genetic models that included additive, dominance, and additive-by-additive epistatic effects were simulated. The simulated genome for individuals consisted of 25 chromosomes, each with a length of 1 M. One hundred bi-allelic QTL, 4 on each chromosome, were considered. In each generation, 100 sires and 100 dams were mated, producing 5 progeny per mating. The population was selected for a single trait (h² = 0.1) for 100 discrete generations with selection on phenotype or BLUP-EBV. VA decreased with directional truncation selection even in the presence of nonadditive genetic effects. Nonadditive effects influenced the long-term response to selection, and among the genetic models, additive gene action had the highest response to selection. In addition, in all genetic models, BLUP-EBV resulted in a greater fixation of favorable and unfavorable alleles and a higher response than phenotypic selection. In conclusion, for the schemes we simulated, the presence of nonadditive genetic effects had little effect on changes in additive variance, and VA decreased under directional selection. © The American Genetic Association 2017. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
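The additive-only baseline of such a scheme is easy to simulate: draw biallelic QTL effects, apply truncation selection on phenotype, and track the additive variance VA across generations. Locus and population counts below loosely mirror the abstract; the gamete-level mating scheme is a simplification, not the published design.

```python
import numpy as np

rng = np.random.default_rng(7)
n_qtl, n_ind, h2 = 100, 500, 0.1
a = rng.normal(0, 1, n_qtl)                    # additive allele effects
geno = rng.binomial(2, 0.5, (n_ind, n_qtl))    # biallelic genotypes (0/1/2)

for gen in range(50):
    g = geno @ a                               # additive genetic values
    var_g = np.var(g)
    var_e = var_g * (1 - h2) / h2 if var_g > 0 else 1.0
    phen = g + rng.normal(0, np.sqrt(var_e), n_ind)
    parents = geno[np.argsort(phen)[-100:]]    # truncation selection, top 100
    # random mating: each offspring draws one allele per locus from each parent
    dams = parents[rng.integers(100, size=n_ind)]
    sires = parents[rng.integers(100, size=n_ind)]
    geno = rng.binomial(1, dams / 2) + rng.binomial(1, sires / 2)
    print(gen, np.var(geno @ a))               # VA shrinks under selection
```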
Lin, Yumei; Wu, Wenxiang; Ge, Quansheng
2015-11-01
Climate change would cause negative impacts on future agricultural production and food security. Adaptive measures should be taken to mitigate the adverse effects. The objectives of this study were to simulate the potential effects of climate change on maize yields in Heilongjiang Province and to evaluate two selected typical household-level autonomous adaptive measures (cultivar changes and planting time adjustments) for mitigating the risks of climate change based on the CERES-Maize model. The results showed that the flowering duration and maturity duration of maize would be shortened in the future climate, and thus maize yield would be reduced by 11-46% during 2011-2099 relative to 1981-2010. Increased CO2 concentration would not benefit maize production significantly. However, substituting local cultivars with later-maturing ones and delaying the planting date could increase yields as the climate changes. The results provide insight regarding the likely impacts of climate change on maize yields and the efficacy of selected adaptive measures by presenting evidence-based implications and mitigation strategies for the potential negative impacts of future climate change. © 2014 Society of Chemical Industry.
An Expanded Lateral Interactive Clonal Selection Algorithm and Its Application
NASA Astrophysics Data System (ADS)
Gao, Shangce; Dai, Hongwei; Zhang, Jianchen; Tang, Zheng
According to the clonal selection principle proposed by Burnet, there is no crossover of genetic material between members of the repertoire during the immune response; that is, in previous clonal selection models there is no knowledge communication between different elite pools. As a result, the search performance of these models is limited. To solve this problem, inspired by the concept of the idiotypic network theory, an expanded lateral interactive clonal selection algorithm (LICS) is put forward. In LICS, an antibody is matured not only through the somatic hypermutation and receptor editing of the B cell, but also through stimuli from other antibodies. The stimuli are realized by memorizing common gene segments on the idiotypes, based on which a lateral interactive receptor editing operator is also introduced. LICS is then applied to several benchmark instances of the traveling salesman problem. Simulation results show the efficiency and robustness of LICS compared to other traditional algorithms.
NASA Astrophysics Data System (ADS)
Takemura, Kazuhiro; Guo, Hao; Sakuraba, Shun; Matubayasi, Nobuyuki; Kitao, Akio
2012-12-01
We propose a method to evaluate binding free energy differences among distinct protein-protein complex model structures through all-atom molecular dynamics simulations in explicit water using the solution theory in the energy representation. Complex model structures are generated from a pair of monomeric structures using the rigid-body docking program ZDOCK. After structure refinement by side chain optimization and all-atom molecular dynamics simulations in explicit water, complex models are evaluated based on the sum of their conformational and solvation free energies, the latter calculated from the energy distribution functions obtained from relatively short molecular dynamics simulations of the complex in water and of pure water based on the solution theory in the energy representation. We examined protein-protein complex model structures of two protein-protein complex systems, bovine trypsin/CMTI-1 squash inhibitor (PDB ID: 1PPE) and RNase SA/barstar (PDB ID: 1AY7), for which both complex and monomer structures were determined experimentally. For each system, we calculated the energies for the crystal complex structure and twelve generated model structures including the model most similar to the crystal structure and very different from it. In both systems, the sum of the conformational and solvation free energies tended to be lower for the structure similar to the crystal. We concluded that our energy calculation method is useful for selecting low energy complex models similar to the crystal structure from among a set of generated models.
Evaluation of variable selection methods for random forests and omics data sets.
Degenhardt, Frauke; Seifert, Stephan; Szymczak, Silke
2017-10-16
Machine learning methods and in particular random forests are promising approaches for prediction based on high dimensional omics data sets. They provide variable importance measures to rank predictors according to their predictive power. If building a prediction model is the main goal of a study, often a minimal set of variables with good prediction performance is selected. However, if the objective is the identification of involved variables to find active networks and pathways, approaches that aim to select all relevant variables should be preferred. We evaluated several variable selection procedures based on simulated data as well as publicly available experimental methylation and gene expression data. Our comparison included the Boruta algorithm, the Vita method, recurrent relative variable importance, a permutation approach and its parametric variant (Altmann) as well as recursive feature elimination (RFE). In our simulation studies, Boruta was the most powerful approach, followed closely by the Vita method. Both approaches demonstrated similar stability in variable selection, while Vita was the most robust approach under a pure null model without any predictor variables related to the outcome. In the analysis of the different experimental data sets, Vita demonstrated slightly better stability in variable selection and was less computationally intensive than Boruta. In conclusion, we recommend the Boruta and Vita approaches for the analysis of high-dimensional data sets. Vita is considerably faster than Boruta and thus more suitable for large data sets, but only Boruta can also be applied in low-dimensional settings. © The Author 2017. Published by Oxford University Press.
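A usage sketch of the Boruta approach, assuming the open-source Python boruta package (BorutaPy) together with scikit-learn; the toy data set below is synthetic, with only the first two features informative.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from boruta import BorutaPy

rng = np.random.default_rng(0)
X = rng.random((200, 50))
y = (X[:, 0] + X[:, 1] > 1).astype(int)    # only features 0 and 1 matter

rf = RandomForestClassifier(n_jobs=-1, max_depth=5)
selector = BorutaPy(rf, n_estimators='auto', random_state=1)
selector.fit(X, y)                          # shadow-feature test each round

print(np.where(selector.support_)[0])       # the all-relevant feature set
```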
The Resource Usage Aware Backfilling
NASA Astrophysics Data System (ADS)
Guim, Francesc; Rodero, Ivan; Corbalan, Julita
Job scheduling policies for HPC centers have been extensively studied in the last few years, especially backfilling-based policies. Almost all of these studies have been carried out using simulation tools, and all the existing simulators use the runtime (either estimated or real) provided in the workload as the basis of their simulations. In our previous work we analyzed the impact of resource sharing (memory bandwidth) among running jobs on system performance by adding a new resource model to the Alvio simulator. Based on these studies we proposed the LessConsume and LessConsume Threshold resource selection policies. Both are oriented to reducing the saturation of shared resources, thus increasing the performance of the system. The results showed that both resource selection policies can improve system performance by considering where jobs are finally allocated.
Solution of AntiSeepage for Mengxi River Based on Numerical Simulation of Unsaturated Seepage
Ji, Youjun; Zhang, Linzhi; Yue, Jiannan
2014-01-01
Lessening the leakage of surface water can reduce the waste of water resources and groundwater pollution. To solve the problem that the Mengxi River could not retain water for long, geological investigation, theoretical analysis, experimental research, and numerical simulation analysis were carried out. Firstly, the seepage mathematical model was established based on unsaturated seepage theory; secondly, experimental equipment for testing the hydraulic conductivity of unsaturated soil was developed to obtain the two-phase flow curve. Numerical simulation of leakage under natural conditions confirmed the previous inference and the leakage mechanism of the river. Finally, the seepage control capacities of different impervious materials were compared by numerical simulation, and the impervious material was selected according to the engineering actuality. The impervious measure described in this paper has since been proved effective by hydrogeological investigation. PMID:24707199
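The abstract does not reproduce the governing equation, but unsaturated-seepage models of this kind are conventionally built on Richards' equation; a hedged sketch of its mixed form:

```latex
\frac{\partial \theta(h)}{\partial t} = \nabla \cdot \left[ K(h)\, \nabla (h + z) \right]
```

where theta is the volumetric water content, h the pressure head, z the elevation head, and K(h) the unsaturated hydraulic conductivity, the quantity that the two-phase-flow experiments characterize.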
Gottfried, Jennifer L
2011-07-01
The potential of laser-induced breakdown spectroscopy (LIBS) to discriminate biological and chemical threat simulant residues prepared on multiple substrates and in the presence of interferents has been explored. The simulant samples tested include Bacillus atrophaeus spores, Escherichia coli, MS-2 bacteriophage, α-hemolysin from Staphylococcus aureus, 2-chloroethyl ethyl sulfide, and dimethyl methylphosphonate. The residue samples were prepared on polycarbonate, stainless steel and aluminum foil substrates by Battelle Eastern Science and Technology Center. LIBS spectra were collected by Battelle on a portable LIBS instrument developed by A3 Technologies. This paper presents the chemometric analysis of the LIBS spectra using partial least-squares discriminant analysis (PLS-DA). The performance of PLS-DA models developed from the full LIBS spectra was compared with that of models based on selected emission intensities and ratios. The full-spectra models generally provided better classification results owing to the inclusion of substrate emission features; however, the intensity/ratio models were able to correctly identify more types of simulant residues in the presence of interferents. The fusion of the two types of PLS-DA models resulted in a significant improvement in classification performance for models built using multiple substrates. In addition to identifying the major components of residue mixtures, minor components such as growth media and solvents can be identified with an appropriately designed PLS-DA model.
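A common way to cast PLS-DA is PLS regression onto a one-hot class matrix, with the class assigned by the argmax of the predicted response. A hedged sketch on synthetic spectra (the paper's models will differ in preprocessing and validation):

```python
# Hedged sketch of PLS-DA: PLS regression onto one-hot class labels,
# class assigned by argmax of the predicted response. Data are synthetic.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
n_classes, n_per, n_channels = 3, 30, 400
X = rng.normal(size=(n_classes * n_per, n_channels))
labels = np.repeat(np.arange(n_classes), n_per)
for c in range(n_classes):                 # give each class a spectral signature
    X[labels == c, c * 10:(c + 1) * 10] += 2.0

Y = np.eye(n_classes)[labels]              # one-hot response matrix
pls = PLSRegression(n_components=5).fit(X, Y)
pred = pls.predict(X).argmax(axis=1)
print("training accuracy:", (pred == labels).mean())
```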
Modeling the impact of preflushing on CTE in proton irradiated CCD-based detectors
NASA Astrophysics Data System (ADS)
Philbrick, R. H.
2002-04-01
A software model is described that performs a "real world" simulation of the operation of several types of charge-coupled device (CCD)-based detectors in order to accurately predict the impact that high-energy proton radiation has on image distortion and modulation transfer function (MTF). The model was written primarily to predict the effectiveness of vertical preflushing on the custom full frame CCD-based detectors intended for use on the proposed Kepler Discovery mission, but it is capable of simulating many other types of CCD detectors and operating modes as well. The model keeps track of the occupancy of all phosphorus-vacancy (P-V), divacancy (V-V) and oxygen-vacancy (O-V) defect centers under every CCD electrode over the entire detector area. The integrated image is read out by simulating every electrode-to-electrode charge transfer in both the vertical and horizontal CCD registers. A signal-level dependency of charge capture and emission is included, and the current state of each electrode (e.g., barrier or storage) is considered when distributing integrated and emitted signal. Options for performing preflushing, preflashing, and including mini-channels are available on both the vertical and horizontal CCD registers. In addition, dark signal generation and image transfer smear can be selectively enabled or disabled. A comparison of the charge transfer efficiency (CTE) data measured on the Hubble space telescope imaging spectrometer (STIS) CCD with the CTE extracted from model simulations of the STIS CCD shows good agreement.
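A heavily simplified toy of trap-limited charge transfer conveys the mechanism: traps fixed in each pixel capture a fraction of every passing packet and re-emit it later into whatever packet occupies that pixel. A single trap species and illustrative constants are assumed; the paper's model tracks three defect species per electrode.

```python
# Hedged toy of trap-limited charge transfer in a 1D readout register.
# One trap species, illustrative constants; not the paper's full model.
import numpy as np

n_pixels, capture_frac, emission_frac = 100, 0.002, 0.05
signal = np.zeros(n_pixels); signal[50] = 10000.0   # one bright pixel
trapped = np.zeros(n_pixels)                        # traps are fixed in pixels

readout = []
for _ in range(n_pixels):
    # emission: trapped charge rejoins whatever packet now sits in that pixel
    released = emission_frac * trapped
    signal += released; trapped -= released
    # capture: each transfer loses a fraction of the packet into local traps
    captured = capture_frac * signal
    trapped += captured; signal -= captured
    # read the output pixel, then shift every packet one pixel toward it
    readout.append(signal[0])
    signal = np.roll(signal, -1); signal[-1] = 0.0

print("recovered charge at original position:", readout[50])
```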
An Approach to Experimental Design for the Computer Analysis of Complex Phenomenon
NASA Technical Reports Server (NTRS)
Rutherford, Brian
2000-01-01
The ability to make credible system assessments, predictions and design decisions related to engineered systems and other complex phenomena is key to a successful program for many large-scale investigations in government and industry. Recently, many of these large-scale analyses have turned to computational simulation to provide much of the required information. Addressing specific goals in the computer analysis of these complex phenomena is often accomplished through the use of performance measures that are based on system response models. The response models are constructed using computer-generated responses together with physical test results where possible. They are often based on probabilistically defined inputs and generally require estimation of a set of response modeling parameters. As a consequence, the performance measures are themselves distributed quantities reflecting these variabilities and uncertainties. Uncertainty in the values of the performance measures leads to uncertainties in predicted performance and can cloud the decisions required of the analysis. A specific goal of this research has been to develop methodology that will reduce this uncertainty in an analysis environment where limited resources and system complexity together restrict the number of simulations that can be performed. An approach has been developed that is based on evaluation of the potential information provided for each "intelligently selected" candidate set of computer runs. Each candidate is evaluated by partitioning the performance measure uncertainty into two components - one component that could be explained through the additional computational simulation runs and a second that would remain uncertain. The portion explained is estimated using a probabilistic evaluation of likely results for the additional computational analyses based on what is currently known about the system. The set of runs indicating the largest potential reduction in uncertainty is then selected and the computational simulations are performed. Examples are provided to demonstrate this approach on small-scale problems. These examples give encouraging results. Directions for further research are indicated.
INDIVIDUAL-BASED MODELS: POWERFUL OR POWER STRUGGLE?
Willem, L; Stijven, S; Hens, N; Vladislavleva, E; Broeckhove, J; Beutels, P
2015-01-01
Individual-based models (IBMs) offer endless possibilities to explore various research questions but come with high model complexity and computational burden. Large-scale IBMs have become feasible but the novel hardware architectures require adapted software. The increased model complexity also requires systematic exploration to gain thorough system understanding. We elaborate on the development of IBMs for vaccine-preventable infectious diseases and model exploration with active learning. Investment in IBM simulator code can lead to significant runtime reductions. We found large performance differences due to data locality. Sorting the population once reduced simulation time by a factor of two. Storing person attributes separately instead of using person objects also seemed more efficient. Next, we improved model performance by up to 70% by structuring potential contacts based on health status before processing disease transmission. The active learning approach we present is based on iterative surrogate modelling and model-guided experimentation. Symbolic regression is used for nonlinear response surface modelling with automatic feature selection. We illustrate our approach using an IBM for influenza vaccination. After optimizing the parameter space, we observed an inverse relationship between vaccination coverage and the clinical attack rate, reinforced by herd immunity. These insights can be used to focus and optimise research activities, and to reduce both dimensionality and decision uncertainty.
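The data-locality point (attributes in separate arrays rather than person objects, plus a one-time sort) can be illustrated with a hedged sketch; the population, the sort key, and the homogeneous-mixing hazard are all invented stand-ins, not the paper's simulator:

```python
# Hedged illustration of structure-of-arrays layout and a one-time sort for
# data locality in an IBM. Everything here is a toy stand-in.
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000
location = rng.integers(0, 10_000, size=n)   # e.g. household/district id
age = rng.integers(0, 90, size=n)
infected = rng.random(n) < 0.01

# sort the population once by location so contact loops touch nearby memory
order = np.argsort(location, kind="stable")
location, age, infected = location[order], age[order], infected[order]

# health-status slices can now be processed as contiguous vectorized scans
susceptible = ~infected
p = 0.002 * infected.sum() / n               # crude homogeneous-mixing hazard
new_cases = susceptible & (rng.random(n) < p)
print("new infections this step:", int(new_cases.sum()))
```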
Zeng, Qianglin; Li, Dandan; Huang, Gui; Xia, Jin; Wang, Xiaoming; Zhang, Yamei; Tang, Wanping; Zhou, Hui
2016-08-31
Short-term forecasting of pertussis incidence is helpful for advance warning and for planning resource needs for future epidemics. Utilizing the Auto-Regressive Integrated Moving Average (ARIMA) model and the Exponential Smoothing (ETS) model as alternative models with R software, this paper analyzed data from the Chinese Center for Disease Control and Prevention (China CDC) between January 2005 and June 2016. The ARIMA (0,1,0)(1,1,1)12 model (AICc = 1342.2, BIC = 1350.3) was selected as the best performing ARIMA model and the ETS (M,N,M) model (AICc = 1678.6, BIC = 1715.4) was selected as the best performing ETS model; the ETS (M,N,M) model, with the minimum RMSE, was finally selected for in-sample simulation and out-of-sample forecasting. Descriptive statistics showed that the reported number of pertussis cases by China CDC increased by 66.20% from 2005 (4058 cases) to 2015 (6744 cases). According to the Hodrick-Prescott filter, there was an apparent cyclicity and seasonality in the pertussis reports. In out-of-sample forecasting, the model forecasted relatively high case numbers for 2016, which predicts an increasing risk of ongoing pertussis resurgence in the near future. In this regard, the ETS model would be a useful tool for simulating and forecasting the incidence of pertussis, and for helping decision makers take efficient decisions based on the advance warning of disease incidence.
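The study used R, but the same two candidate models can be fitted in Python with statsmodels; a hedged sketch, where `cases` is a hypothetical monthly series standing in for the China CDC counts:

```python
# Hedged sketch: fit the two model forms named in the abstract with
# statsmodels. `cases` is placeholder data, not the China CDC series.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX
from statsmodels.tsa.exponential_smoothing.ets import ETSModel

idx = pd.date_range("2005-01", periods=138, freq="MS")  # Jan 2005 - Jun 2016
cases = pd.Series(300 + 50 * np.sin(np.arange(138) * 2 * np.pi / 12)
                  + np.arange(138), index=idx)          # synthetic counts

arima = SARIMAX(cases, order=(0, 1, 0),
                seasonal_order=(1, 1, 1, 12)).fit(disp=False)
ets = ETSModel(cases, error="mul", trend=None, seasonal="mul",
               seasonal_periods=12).fit(disp=False)     # ETS(M,N,M)

print("ARIMA AICc:", arima.aicc, " ETS AICc:", ets.aicc)
print(ets.forecast(12))  # out-of-sample forecast for the next year
```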
Construction of dynamic stochastic simulation models using knowledge-based techniques
NASA Technical Reports Server (NTRS)
Williams, M. Douglas; Shiva, Sajjan G.
1990-01-01
Over the past three decades, computer-based simulation models have proven themselves to be cost-effective alternatives to the more structured deterministic methods of systems analysis. During this time, many techniques, tools and languages for constructing computer-based simulation models have been developed. More recently, advances in knowledge-based system technology have led many researchers to note the similarities between knowledge-based programming and simulation technologies and to investigate the potential application of knowledge-based programming techniques to simulation modeling. The integration of conventional simulation techniques with knowledge-based programming techniques is discussed to provide a development environment for constructing knowledge-based simulation models. A comparison of the techniques used in the construction of dynamic stochastic simulation models and those used in the construction of knowledge-based systems provides the requirements for the environment. This leads to the design and implementation of a knowledge-based simulation development environment. These techniques were used in the construction of several knowledge-based simulation models including the Advanced Launch System Model (ALSYM).
A Scheme to Optimize Flow Routing and Polling Switch Selection of Software Defined Networks
Chen, Huan; Li, Lemin; Ren, Jing; Wang, Yang; Zhao, Yangming; Wang, Xiong; Wang, Sheng; Xu, Shizhong
2015-01-01
This paper aims at minimizing the communication cost for collecting flow information in Software Defined Networks (SDN). Since flow-based information collecting method requires too much communication cost, and switch-based method proposed recently cannot benefit from controlling flow routing, jointly optimize flow routing and polling switch selection is proposed to reduce the communication cost. To this end, joint optimization problem is formulated as an Integer Linear Programming (ILP) model firstly. Since the ILP model is intractable in large size network, we also design an optimal algorithm for the multi-rooted tree topology and an efficient heuristic algorithm for general topology. According to extensive simulations, it is found that our method can save up to 55.76% communication cost compared with the state-of-the-art switch-based scheme. PMID:26690571
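A hedged, much-reduced stand-in for the ILP formulation: if routes are fixed, choosing polling switches so that every flow is observed at least once, at minimum polling cost, is a weighted set-cover ILP. The PuLP package, the toy topology, and the costs are assumptions; the paper's joint model also includes routing variables.

```python
# Hedged, simplified stand-in for the joint ILP: pick polling switches so
# every flow is observed by at least one switch on its (fixed) route,
# minimizing total polling cost. Routing optimization is omitted.
import pulp

flows = {"f1": ["s1", "s2"], "f2": ["s2", "s3"], "f3": ["s3", "s4"]}
poll_cost = {"s1": 3, "s2": 2, "s3": 2, "s4": 3}

prob = pulp.LpProblem("polling_switch_selection", pulp.LpMinimize)
use = pulp.LpVariable.dicts("use", poll_cost, cat="Binary")
prob += pulp.lpSum(poll_cost[s] * use[s] for s in poll_cost)
for f, route in flows.items():
    prob += pulp.lpSum(use[s] for s in route) >= 1, f"cover_{f}"

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([s for s in poll_cost if use[s].value() == 1])
```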
Description of waste pretreatment and interfacing systems dynamic simulation model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garbrick, D.J.; Zimmerman, B.D.
1995-05-01
The Waste Pretreatment and Interfacing Systems Dynamic Simulation Model was created to investigate the required pretreatment facility processing rates for both high level and low level waste so that the vitrification of tank waste can be completed according to the milestones defined in the Tri-Party Agreement (TPA). In order to achieve this objective, the processes upstream and downstream of the pretreatment facilities must also be included. The simulation model starts with retrieval of tank waste and ends with vitrification for both low level and high level wastes. This report describes the results of three simulation cases: one based on suggested average facility processing rates, one with facility rates determined so that approximately 6 new DSTs are required, and one with facility rates determined so that approximately no new DSTs are required. It appears, based on the simulation results, that reasonable facility processing rates can be selected so that no new DSTs are required by the TWRS program. However, this conclusion must be viewed with respect to the modeling assumptions, described in detail in the report. Also included in the report, in an appendix, are results of two sensitivity cases: one with glass plant water recycle streams recycled versus not recycled, and one employing the TPA SST retrieval schedule versus a more uniform SST retrieval schedule. Both recycling and retrieval schedule appear to have a significant impact on overall tank usage.
Cornelius, M P; Jacobson, C; Dobson, R; Besier, R B
2016-04-15
This study utilised computer simulation modelling (Risk Management Model for Nematodes) to investigate the impact of different parasite refugia scenarios on the development of anthelmintic resistance and worm control effectiveness. The simulations were conducted for adult ewe flocks in a Mediterranean climatic region over a 20 year time period. Factors explored in the simulation exercise were environment (different weather conditions), drug efficacy, the percentage of the flock left untreated, the timing of anthelmintic treatments, the initial worm egg count, and the number of drenches per annum. The model was run with variable proportions of the flock untreated (0, 10, 20, 30, 40 and 50%), with ewes selected at random so that reductions in the mean worm burden or egg count were proportional to the treated section of the flock. Treatments to ewes were given either in summer (December; low refugia potential, hence highly selective) or autumn (March; less selective due to a greater refugia potential), and the use of different anthelmintics was simulated to indicate the difference between active ingredients of different efficacy. Each model scenario was run for two environments, specifically a lower rainfall area (more selective) and a higher rainfall area (less selective) within a Mediterranean climatic zone, characterised by hot, dry summers and cool, wet winters. Univariate general linear models with least significant difference post-hoc tests were used to examine differences between means of factors. The results confirmed that leaving a proportion of sheep in a flock untreated was effective in delaying the development of anthelmintic resistance, with as little as 10% of a flock left untreated being sufficient to significantly delay resistance, although this strategy was associated with a small reduction in worm control. Administering anthelmintics in autumn rather than summer was also effective in delaying the development of anthelmintic resistance in the lower rainfall environment where all sheep were treated, although the effect of treatment timing on worm control effectiveness varied between the environments and the proportion of ewes left untreated. The use of anthelmintics with higher efficacy delayed the development of resistance, but the initial worm egg count or number of annual treatments had no effect on either the time to resistance development or worm control effectiveness. In conclusion, the modelling study suggests that leaving a small proportion of ewes untreated, or changing the time of treatment, can delay the onset of anthelmintic resistance in a highly selective environment. Copyright © 2016 Elsevier B.V. All rights reserved.
Alex, J; Kolisch, G; Krause, K
2002-01-01
The objective of this project is to use the results of a CFD simulation to automatically, systematically and reliably generate an appropriate model structure for simulating the biological processes using CSTR activated sludge compartments. Models and dynamic simulation have become important tools for research, but also increasingly for the design and optimisation of wastewater treatment plants. Besides the biological models, several applications of computational fluid dynamics (CFD) to wastewater treatment plants have been reported. One aim of the presented method to derive model structures from CFD results is to exclude the influence of empirical structure selection on the results of dynamic simulation studies of WWTPs. The second application of the approach developed is the analysis of badly performing treatment plants where the suspicion arises that poor flow behaviour, such as short-cut flows, is part of the problem. The method suggested requires as a first step the calculation of the fluid dynamics of the biological treatment step under different loading conditions by 3-dimensional CFD simulation. This information is then used to automatically generate a suitable model structure for conventional dynamic simulation of the treatment plant, consisting of a number of CSTR modules with a pattern of exchange flows between the tanks. The method is explained in detail and its application to the WWTP Wuppertal Buchenhofen is presented.
Halford, K.J.
1998-01-01
Ground-water flow through the surficial aquifer system at Naval Station Mayport near Jacksonville, Florida, was simulated with a two-layer finite-difference model as part of an investigation conducted by the U.S. Geological Survey. The model was calibrated to 229 water-level measurements from 181 wells during three synoptic surveys (July 17, 1995; July 31, 1996; and October 24, 1996). A quantifiable understanding of ground-water flow through the surficial aquifer was needed to evaluate remedial-action alternatives under consideration by the Naval Station Mayport to control the possible movement of contaminants from sites on the station. Multi-well aquifer tests, single-well tests, and slug tests were conducted to estimate the hydraulic properties of the surficial aquifer system, which was divided into three geohydrologic units: an S-zone and an I-zone separated by a marsh-muck confining unit. The recharge rate was estimated to range from 4 to 15 inches per year (95 percent confidence limits), based on a chloride-ratio method. Most of the simulations following model calibration were based on a recharge rate of 8 inches per year to unirrigated pervious areas. The advective displacement of saline pore water during the last 200 years was simulated using a particle-tracking routine, MODPATH, applied to calibrated steady-state and transient models of the Mayport peninsula. The surficial aquifer system at Naval Station Mayport has been modified greatly by natural and anthropogenic forces so that the freshwater flow system is expanding and saltwater is being flushed from the system. A new MODFLOW package (VAR1) was written to simulate the temporal variation of hydraulic properties caused by construction activities at Naval Station Mayport. The transiently simulated saltwater distribution after 200 years of displacement described the chloride distribution in the I-zone (determined from measurements made during 1993 and 1996) better than the steady-state simulation. The advective movement of contaminants from selected sites within the solid waste management units to discharge points was simulated using MODPATH. Most of the particles were discharged to the nearest surface-water feature after traveling less than 1,000 feet in the ground-water system. Most areas within 1,000 feet of a surface-water feature or storm sewer had traveltimes of less than 50 years, based on an effective porosity of 40 percent. Contributing areas, traveltimes, and pathlines were identified for 224 wells at Naval Station Mayport under steady-state and transient conditions by back-tracking a particle from the midpoint of the wetted screen of each well. Traveltimes to contributing areas that ranged between 15 and 50 years, estimated by the steady-state model, differed most from the transient traveltime estimates. Estimates of traveltimes and pathlines based on steady-state model results typically were 10 to 20 years more and about twice as long as corresponding estimates from the transient model. The models differed because the steady-state model simulated 1996 conditions when Naval Station Mayport had more impervious surfaces than at any earlier time. The expansion of the impervious surfaces increased the average distance between contributing areas and observation wells.
Numerical simulation of controlled directional solidification under microgravity conditions
NASA Astrophysics Data System (ADS)
Holl, S.; Roos, D.; Wein, J.
The computer-assisted simulation of solidification processes influenced by gravity has gained increasing importance in recent years in ground-based as well as microgravity research. Depending on the specific needs of the investigator, the simulation model ideally covers a broad spectrum of applications. These primarily include the optimization of furnace design in interaction with selected process parameters to meet the desired crystallization conditions. Different approaches concerning the complexity of the simulation models as well as their dedicated applications will be discussed in this paper. Special emphasis will be put on the potential of software tools to increase the scientific quality and cost-efficiency of microgravity experimentation. The results gained so far in the context of TEXUS, FSLP, D-1 and D-2 (preparatory program) experiments will be discussed, highlighting their simulation-supported preparation and evaluation. An outlook will then be given on the possibilities to enhance the efficiency of pre-industrial research in the Columbus era through the incorporation of suitable simulation methods and tools.
Spatial Analysis of Traffic and Routing Path Methods for Tsunami Evacuation
NASA Astrophysics Data System (ADS)
Fakhrurrozi, A.; Sari, A. M.
2018-02-01
A tsunami disaster unfolds very rapidly and thus has a large-scale impact on both material and non-material aspects. Community evacuation can cause mass panic, crowds, and traffic congestion. Further research in spatially based modelling, traffic engineering and split-zone evacuation simulation is therefore crucial as an effort to reduce losses. This topic builds on information from previous research. Complex parameters include route selection, destination selection, the timing of both the spontaneous departure from the source and the arrival at the destination, and other aspects of the result parameters across various methods. The discussion emphasizes the simulation process and its results, traffic modelling, and the routing analysis closest to real conditions in the tsunami evacuation process. The method we highlight is the Clearance Time Estimate based on Location Priority, whose computational results are superior to the others despite several drawbacks. The study is expected to provide input for improving and inventing a new method that will become part of decision support systems for disaster risk reduction for tsunamis.
NASA Astrophysics Data System (ADS)
Zhao, Fei; Zhang, Chi; Yang, Guilin; Chen, Chinyin
2016-12-01
This paper presents an online method for estimating cutting error by analyzing internal sensor readings. The internal sensors of the numerical control (NC) machine tool are used to avoid installation problems. A mathematical model for cutting-error estimation was proposed to compute the relative position of the cutting point and the tool center point (TCP) from internal sensor readings, based on the cutting theory of gears. In order to verify the effectiveness of the proposed model, it was simulated and tested experimentally in the gear generating grinding process. The cutting error of the gear was estimated and the factors that induce cutting error were analyzed. The simulation and experiments verify that the proposed approach is an efficient way to estimate the cutting error of the work-piece during the machining process.
Predicting the difficulty of pure, strict, epistatic models: metrics for simulated model selection.
Urbanowicz, Ryan J; Kiralis, Jeff; Fisher, Jonathan M; Moore, Jason H
2012-09-26
Algorithms designed to detect complex genetic disease associations are initially evaluated using simulated datasets. Typical evaluations vary constraints that influence the correct detection of underlying models (i.e. number of loci, heritability, and minor allele frequency). Such studies neglect to account for model architecture (i.e. the unique specification and arrangement of penetrance values comprising the genetic model), which alone can influence the detectability of a model. In order to design a simulation study which efficiently takes architecture into account, a reliable metric is needed for model selection. We evaluate three metrics as predictors of relative model detection difficulty derived from previous works: (1) Penetrance table variance (PTV), (2) customized odds ratio (COR), and (3) our own Ease of Detection Measure (EDM), calculated from the penetrance values and respective genotype frequencies of each simulated genetic model. We evaluate the reliability of these metrics across three very different data search algorithms, each with the capacity to detect epistatic interactions. We find that a model's EDM and COR are each stronger predictors of model detection success than heritability. This study formally identifies and evaluates metrics which quantify model detection difficulty. We utilize these metrics to intelligently select models from a population of potential architectures. This allows for an improved simulation study design which accounts for differences in detection difficulty attributed to model architecture. We implement the calculation and utilization of EDM and COR into GAMETES, an algorithm which rapidly and precisely generates pure, strict, n-locus epistatic models.
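The abstract does not reproduce the EDM or COR formulas, but the PTV idea is straightforward: the variance of the penetrance values, optionally weighted by genotype frequencies. A hedged sketch for a two-locus model; the penetrance table and the Hardy-Weinberg weighting are illustrative assumptions:

```python
# Hedged sketch of penetrance-table variance (PTV) for a two-locus model:
# variance of penetrance values, optionally weighted by genotype frequencies.
# The table and the exact weighting are assumptions, not the paper's formulas.
import numpy as np

pen = np.array([[0.10, 0.20, 0.10],      # penetrance for genotypes AA, Aa, aa
                [0.20, 0.05, 0.20],      # crossed with BB, Bb, bb
                [0.10, 0.20, 0.10]])
p, q = 0.5, 0.5                          # minor allele frequencies
hw = lambda f: np.array([f * f, 2 * f * (1 - f), (1 - f) * (1 - f)])
freq = np.outer(hw(p), hw(q))            # Hardy-Weinberg genotype frequencies

mean_pen = (freq * pen).sum()
ptv_weighted = (freq * (pen - mean_pen) ** 2).sum()
print("unweighted PTV:", pen.var(), " frequency-weighted:", ptv_weighted)
```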
Tripathy, Swayansiddha; Azam, Mohammed Afzal; Jupudi, Srikanth; Sahu, Susanta Kumar
2017-10-11
FtsZ is an appealing target for the design of antimicrobial agents that can be used to defeat multidrug-resistant bacterial pathogens. Pharmacophore modelling, molecular docking and molecular dynamics (MD) simulation studies were performed on a series of three-substituted benzamide derivatives. In the present study a five-featured pharmacophore model with one hydrogen bond acceptor, one hydrogen bond donor, one hydrophobic group and two aromatic rings was developed using 97 molecules having MIC values ranging from .07 to 957 μM. A statistically significant 3D-QSAR model was obtained using this pharmacophore hypothesis with a good correlation coefficient (R² = .8319), cross-validated coefficient (Q² = .6213) and a high Fisher ratio (F = 103.9) with a three-component PLS factor. A good correlation between experimental and predicted activity of the training (R² = .83) and test set (R² = .67) molecules was displayed by the ADHRR.1682 model. The generated model was further validated by enrichment studies using the decoy test and MAE-based criteria to measure the efficiency of the model. The docking studies of all selected inhibitors in the active site of the FtsZ protein showed crucial hydrogen bond interactions with Val 207, Asn 263, Leu 209, Gly 205 and Asn 299 residues. The binding free energies of these inhibitors were calculated by the molecular mechanics/generalized Born surface area VSGB 2.0 method. Finally, a 15 ns MD simulation was performed to confirm the stability of the 4DXD-ligand complex. On a wider scope, the present work provides insight into designing molecules with better selective FtsZ inhibitory potential.
Reactive transport of metal contaminants in alluvium - Model comparison and column simulation
Brown, J.G.; Bassett, R.L.; Glynn, P.D.
2000-01-01
A comparative assessment of two reactive-transport models, PHREEQC and HYDROGEOCHEM (HGC), was done to determine the suitability of each for simulating the movement of acidic contamination in alluvium. For simulations that accounted for aqueous complexation, precipitation and dissolution, the breakthrough and rinseout curves generated by each model were similar. The differences in simulated equilibrium concentrations between models were minor and were related to (1) different units in model output, (2) different activity coefficients, and (3) ionic-strength calculations. When adsorption processes were added to the models, the rinseout pH simulated by PHREEQC using the diffuse double-layer adsorption model rose to a pH of 6 after pore volume 15, about 1 pore volume later than the pH simulated by HGC using the constant-capacitance model. In PHREEQC simulation of a laboratory column experiment, the inability of the model to match measured outflow concentrations of selected constituents was related to the evident lack of local geochemical equilibrium in the column. The difference in timing and size of measured and simulated breakthrough of selected constituents indicated that the redox and adsorption reactions in the column occurred slowly when compared with the modeled reactions. MINTEQA2 and PHREEQC simulations of the column experiment indicated that the number of surface sites that took part in adsorption reactions was less than that estimated from the measured concentration of Fe hydroxide in the alluvium.
NASA Astrophysics Data System (ADS)
Karmalkar, A.
2017-12-01
Ensembles of dynamically downscaled climate change simulations are routinely used to capture uncertainty in projections at regional scales. I assess the reliability of two such ensembles for North America - NARCCAP and NA-CORDEX - by investigating the impact of model selection on representing uncertainty in regional projections, and the ability of the regional climate models (RCMs) to provide reliable information. These aspects - discussed for the six regions used in the US National Climate Assessment - provide an important perspective on the interpretation of downscaled results. I show that selecting general circulation models for downscaling based on their equilibrium climate sensitivities is a reasonable choice, but the six models chosen for NA-CORDEX do a poor job at representing uncertainty in winter temperature and precipitation projections in many parts of the eastern US, leading to overconfident projections. RCM performance is highly variable across models, regions, and seasons, and the ability of the RCMs to provide improved seasonal mean performance relative to their parent GCMs seems limited in both RCM ensembles. Additionally, the ability of the RCMs to simulate historical climates is not strongly related to their ability to simulate climate change across the ensemble. This finding suggests limited use of models' historical performance to constrain their projections. Given these challenges in dynamical downscaling, the RCM results should not be used in isolation. Information on how well the RCM ensembles represent known uncertainties in regional climate change projections, as discussed here, needs to be communicated clearly to inform management decisions.
Objects Classification by Learning-Based Visual Saliency Model and Convolutional Neural Network.
Li, Na; Zhao, Xinbo; Yang, Yongjia; Zou, Xiaochun
2016-01-01
Humans can easily classify different kinds of objects, whereas this remains quite difficult for computers. As a challenging problem with broad prospects, object classification has been receiving extensive interest. Inspired by neuroscience, the concept of deep learning was proposed. The convolutional neural network (CNN), one of the methods of deep learning, can be used to solve classification problems. But most deep learning methods, including CNN, ignore the human visual information-processing mechanism at work when a person classifies objects. Therefore, in this paper, inspired by the complete process by which humans classify different kinds of objects, we bring forth a new classification method that combines a visual attention model and CNN. Firstly, we use the visual attention model to simulate the processing of the human visual selection mechanism. Secondly, we use CNN to simulate how humans select features, extracting the local features of the selected areas. Finally, our classification method not only depends on those local features but also adds human semantic features to classify objects. Our classification method has apparent advantages from a biological standpoint. Experimental results demonstrated that our method significantly improves classification efficiency.
Pe'er, Guy; Zurita, Gustavo A.; Schober, Lucia; Bellocq, Maria I.; Strer, Maximilian; Müller, Michael; Pütz, Sandro
2013-01-01
Landscape simulators are widely applied in landscape ecology for generating landscape patterns. These models can be divided into two categories: pattern-based models that generate spatial patterns irrespective of the processes that shape them, and process-based models that attempt to generate patterns based on the processes that shape them. The latter often tend toward complexity in an attempt to obtain high predictive precision, but are rarely used for generic or theoretical purposes. Here we show that a simple process-based simulator can generate a variety of spatial patterns including realistic ones, typifying landscapes fragmented by anthropogenic activities. The model “G-RaFFe” generates roads and fields to reproduce the processes in which forests are converted into arable lands. For a selected level of habitat cover, three factors dominate its outcomes: the number of roads (accessibility), maximum field size (accounting for land ownership patterns), and maximum field disconnection (which enables field to be detached from roads). We compared the performance of G-RaFFe to three other models: Simmap (neutral model), Qrule (fractal-based) and Dinamica EGO (with 4 model versions differing in complexity). A PCA-based analysis indicated G-RaFFe and Dinamica version 4 (most complex) to perform best in matching realistic spatial patterns, but an alternative analysis which considers model variability identified G-RaFFe and Qrule as performing best. We also found model performance to be affected by habitat cover and the actual land-uses, the latter reflecting on land ownership patterns. We suggest that simple process-based generators such as G-RaFFe can be used to generate spatial patterns as templates for theoretical analyses, as well as for gaining better understanding of the relation between spatial processes and patterns. We suggest caution in applying neutral or fractal-based approaches, since spatial patterns that typify anthropogenic landscapes are often non-fractal in nature. PMID:23724108
Pe'er, Guy; Zurita, Gustavo A; Schober, Lucia; Bellocq, Maria I; Strer, Maximilian; Müller, Michael; Pütz, Sandro
2013-01-01
Landscape simulators are widely applied in landscape ecology for generating landscape patterns. These models can be divided into two categories: pattern-based models that generate spatial patterns irrespective of the processes that shape them, and process-based models that attempt to generate patterns based on the processes that shape them. The latter often tend toward complexity in an attempt to obtain high predictive precision, but are rarely used for generic or theoretical purposes. Here we show that a simple process-based simulator can generate a variety of spatial patterns including realistic ones, typifying landscapes fragmented by anthropogenic activities. The model "G-RaFFe" generates roads and fields to reproduce the processes in which forests are converted into arable lands. For a selected level of habitat cover, three factors dominate its outcomes: the number of roads (accessibility), maximum field size (accounting for land ownership patterns), and maximum field disconnection (which enables field to be detached from roads). We compared the performance of G-RaFFe to three other models: Simmap (neutral model), Qrule (fractal-based) and Dinamica EGO (with 4 model versions differing in complexity). A PCA-based analysis indicated G-RaFFe and Dinamica version 4 (most complex) to perform best in matching realistic spatial patterns, but an alternative analysis which considers model variability identified G-RaFFe and Qrule as performing best. We also found model performance to be affected by habitat cover and the actual land-uses, the latter reflecting on land ownership patterns. We suggest that simple process-based generators such as G-RaFFe can be used to generate spatial patterns as templates for theoretical analyses, as well as for gaining better understanding of the relation between spatial processes and patterns. We suggest caution in applying neutral or fractal-based approaches, since spatial patterns that typify anthropogenic landscapes are often non-fractal in nature.
Multiscale modelling of palisade formation in glioblastoma multiforme.
Caiazzo, Alfonso; Ramis-Conde, Ignacio
2015-10-21
Palisades are characteristic tissue aberrations that arise in glioblastomas. Observation of palisades is considered as a clinical indicator of the transition from a noninvasive to an invasive tumour. In this paper we propose a computational model to study the influence of the hypoxic switch in palisade formation. For this we produced three-dimensional realistic simulations, based on a multiscale hybrid model, coupling the evolution of tumour cells and the oxygen diffusion in tissue, that depict the shape of palisades during its formation. Our results can be summarized as follows: (1) the presented simulations can provide clinicians and biologists with a better understanding of three-dimensional structure of palisades as well as of glioblastomas growth dynamics; (2) we show that heterogeneity in cell response to hypoxia is a relevant factor in palisade and pseudopalisade formation; (3) we show how selective processes based on the hypoxia switch influence the tumour proliferation. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Zhao, Lifei; Li, Zhen; Caswell, Bruce; Ouyang, Jie; Karniadakis, George Em
2018-06-01
We simulate complex fluids by means of an on-the-fly coupling of the bulk rheology to the underlying microstructure dynamics. In particular, a continuum model of polymeric fluids is constructed without a pre-specified constitutive relation, but instead it is actively learned from mesoscopic simulations where the dynamics of polymer chains is explicitly computed. To couple the bulk rheology of polymeric fluids and the microscale dynamics of polymer chains, the continuum approach (based on the finite volume method) provides the transient flow field as inputs for the (mesoscopic) dissipative particle dynamics (DPD), and in turn DPD returns an effective constitutive relation to close the continuum equations. In this multiscale modeling procedure, we employ an active learning strategy based on Gaussian process regression (GPR) to minimize the number of expensive DPD simulations, where adaptively selected DPD simulations are performed only as necessary. Numerical experiments are carried out for flow past a circular cylinder of a non-Newtonian fluid, modeled at the mesoscopic level by bead-spring chains. The results show that only five DPD simulations are required to achieve an effective closure of the continuum equations at Reynolds number Re = 10. Furthermore, when Re is increased to 100, only one additional DPD simulation is required for constructing an extended GPR-informed model closure. Compared to traditional message-passing multiscale approaches, applying an active learning scheme to multiscale modeling of non-Newtonian fluids can significantly increase the computational efficiency. Although the method demonstrated here obtains only a local viscosity from the polymer dynamics, it can be extended to other multiscale models of complex fluids whose macro-rheology is unknown.
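The uncertainty-driven selection of expensive runs can be caricatured in one dimension: fit a GPR surrogate, query it on candidate operating points, and run the expensive solver only where the predictive standard deviation is largest. The `dpd_simulation` function below is a toy stand-in for the mesoscopic solver, not the paper's code:

```python
# Hedged miniature of GPR-based active learning: run a new "expensive"
# simulation only where the surrogate is least certain.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def dpd_simulation(shear_rate):           # placeholder for the mesoscopic model
    return 1.0 / (1.0 + shear_rate)       # shear-thinning-like toy viscosity

candidates = np.linspace(0.1, 10.0, 200)[:, None]
X = np.array([[0.1], [10.0]])             # two initial expensive runs
y = dpd_simulation(X).ravel()

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
for _ in range(5):
    gp.fit(X, y)
    _, std = gp.predict(candidates, return_std=True)
    x_new = candidates[np.argmax(std)]    # most uncertain operating point
    X = np.vstack([X, [x_new]])
    y = np.append(y, dpd_simulation(x_new))
print("sampled shear rates:", X.ravel())
```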
NASA Astrophysics Data System (ADS)
Zhou, Jianzhong; Zhang, Hairong; Zhang, Jianyun; Zeng, Xiaofan; Ye, Lei; Liu, Yi; Tayyab, Muhammad; Chen, Yufan
2017-07-01
An accurate flood forecast with a long lead time can be of great value for flood prevention and utilization. This paper develops a one-way coupled hydro-meteorological modeling system, consisting of the mesoscale Weather Research and Forecasting (WRF) model and the Chinese Xinanjiang hydrological model, to extend the flood forecasting lead time in the Jinshajiang River Basin, which is the largest hydropower base in China. Focusing on four typical precipitation events, the combinations and mode structures of WRF parameterization schemes suitable for simulating precipitation in the Jinshajiang River Basin were first investigated. Then, the Xinanjiang model was established after calibration and validation to complete the hydro-meteorological system. It was found that the selection of the cloud microphysics scheme and the boundary-layer scheme has a great impact on precipitation simulation, and only a proper combination of the two schemes could yield accurate simulations in the Jinshajiang River Basin; the hydro-meteorological system can then provide instructive flood forecasts with a long lead time. On the whole, the one-way coupled hydro-meteorological model can be used for precipitation simulation and flood prediction in the Jinshajiang River Basin because of its relatively high precision and long lead time.
A Gaussian Mixture Model-based continuous Boundary Detection for 3D sensor networks.
Chen, Jiehui; Salim, Mariam B; Matsumoto, Mitsuji
2010-01-01
This paper proposes a high-precision Gaussian Mixture Model-based Boundary Detection 3D (BD3D) scheme with reasonable implementation cost for 3D cases, selecting a minimum number of Boundary sensor Nodes (BNs) for continuously moving objects. It shows apparent advantages in that two classes of boundary and non-boundary sensor nodes can be efficiently classified using model selection techniques for finite mixture models; furthermore, the set of sensor readings within each sensor node's spatial neighbors is formulated using a Gaussian Mixture Model. Unlike DECOMO [1] and COBOM [2], we also format a BN array with an additional entry for the node's own sensor reading to aid in selecting Event BNs (EBNs) and non-EBNs from the observations of BNs. In particular, we propose a Thick Section Model (TSM) to solve the problem of transition between 2D and 3D. It is verified by simulations that the BD3D 2D model outperforms DECOMO and COBOM in terms of average residual energy and the number of BNs selected, while the BD3D 3D model demonstrates sound performance even for sensor networks with low densities, especially when the value of the sensor transmission range (r) is larger than the value of the Section Thickness (d) in TSM. We have also rigorously proved its correctness for continuous geometric domains and full robustness for sensor networks over 3D terrains.
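The statistical core of mixture-based boundary detection can be sketched in a few lines: fit Gaussian mixtures with one and two components to a node's neighborhood readings and flag the node as a boundary candidate when two components fit better under a model-selection criterion such as BIC. The feature construction here is an invented simplification of the paper's scheme:

```python
# Hedged sketch: a node whose neighborhood readings are best explained by a
# two-component mixture (a reading discontinuity) is a boundary candidate.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)

def is_boundary(neighbor_readings):
    x = np.asarray(neighbor_readings).reshape(-1, 1)
    bic = [GaussianMixture(k, random_state=0).fit(x).bic(x) for k in (1, 2)]
    return bic[1] < bic[0]            # two components fit better -> boundary

inside = rng.normal(25.0, 0.5, size=20)                # uniform neighborhood
onedge = np.concatenate([rng.normal(25.0, 0.5, 10),
                         rng.normal(40.0, 0.5, 10)])   # mixed neighborhood
print(is_boundary(inside), is_boundary(onedge))        # expect False, True
```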
NASA Astrophysics Data System (ADS)
Noe, Frank
To efficiently simulate and generate understanding from simulations of complex macromolecular systems, the concept of slow collective coordinates or reaction coordinates is of fundamental importance. Here we will introduce variational approaches to approximate the slow coordinates and the reaction coordinates between selected end-states given MD simulations of the macromolecular system and a (possibly large) basis set of candidate coordinates. We will then discuss how to select physically intuitive order parameters that are good surrogates of this variationally optimal result. These results can be used to construct Markov state models or other models of the stationary and kinetic properties, and to parametrize low-dimensional / coarse-grained models of the dynamics. Deutsche Forschungsgemeinschaft, European Research Council.
An opinion formation based binary optimization approach for feature selection
NASA Astrophysics Data System (ADS)
Hamedmoghadam, Homayoun; Jalili, Mahdi; Yu, Xinghuo
2018-02-01
This paper proposes a novel optimization method based on opinion formation in complex network systems. The proposed technique mimics the human-human interaction mechanism based on a mathematical model derived from the social sciences. Our method encodes a subset of selected features as the opinion of an artificial agent and simulates the opinion formation process among a population of agents to solve the feature selection problem. The agents interact through an underlying interaction network structure and reach consensus in their opinions while finding better solutions to the problem. A number of mechanisms are employed to avoid getting trapped in local minima. We compare the performance of the proposed method with a number of classical population-based optimization methods and a state-of-the-art opinion formation based method. Our experiments on a number of high-dimensional datasets reveal that the proposed algorithm outperforms the others.
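A hedged toy of the general idea (not the paper's algorithm, whose interaction network and update rules differ): each agent's binary opinion vector encodes a feature subset, and agents drift toward better-scoring partners one bit at a time. The fitness function is an invented stand-in for a real feature-selection objective:

```python
# Hedged toy of opinion-formation-based binary feature selection.
# The fitness function and update rule are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(4)
n_agents, n_feats = 20, 30
target = np.zeros(n_feats); target[:5] = 1          # "true" relevant features

def fitness(mask):                                  # placeholder objective
    return -np.abs(mask - target).sum()

opinions = rng.integers(0, 2, size=(n_agents, n_feats)).astype(float)
for step in range(200):
    scores = np.array([fitness(o) for o in opinions])
    for i in range(n_agents):
        j = rng.integers(n_agents)                  # random interaction partner
        if scores[j] > scores[i]:                   # adopt one differing bit
            diff = np.flatnonzero(opinions[i] != opinions[j])
            if diff.size:
                k = rng.choice(diff)
                opinions[i, k] = opinions[j, k]
best = opinions[np.argmax([fitness(o) for o in opinions])]
print("selected features:", np.flatnonzero(best))
```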
Bresso, Emmanuel; Togawa, Roberto; Hammond-Kosack, Kim; Urban, Martin; Maigret, Bernard; Martins, Natalia Florencio
2016-12-15
Fusarium graminearum (FG) is one of the major cereal-infecting pathogens, causing high economic losses worldwide and adverse effects on human and animal health. The development of new fungicides against FG is therefore an important issue for reducing cereal infection and economic impact. In the strategy for developing new fungicides, a critical step is the identification of new targets against which innovative chemical weapons can be designed. As several G-protein coupled receptors (GPCRs) are implicated in signaling pathways critical for fungal development and survival, such proteins could be efficient targets for reducing Fusarium growth and therefore preventing food contamination. In this study, GPCRs were predicted in the FG proteome using a manually curated pipeline dedicated to the identification of GPCRs. Based on several successive filters, the most appropriate GPCR candidate target for developing new fungicides was selected. Searching for new compounds blocking this particular target requires knowledge of its 3D structure. As no experimental X-ray structure of the selected protein was available, a 3D model was built by homology modeling, and its quality and stability were checked by 100 ns of molecular dynamics simulation prior to the virtual screening step. Two stable conformations representative of the conformational families of the protein were extracted from the 100 ns simulation and used for an ensemble docking campaign. The virtual screening step comprised the exploration of a chemical library of 11,000 compounds, which were docked to the GPCR model. Among these compounds, we selected the ten top-ranked nontoxic molecules, proposed for experimental testing to validate the in silico simulation. This study provides an integrated process merging genomics, structural bioinformatics and drug design to propose innovative solutions to a worldwide threat to grain producers and consumers.
NASA Technical Reports Server (NTRS)
Rizzi, Stephen A.; Przekop, Adam
2005-01-01
An investigation of the effect of basis selection on geometric nonlinear response prediction using a reduced-order nonlinear modal simulation is presented. The accuracy is dictated by the selection of the basis used to determine the nonlinear modal stiffness. This study considers a suite of available bases including bending modes only, bending and membrane modes, coupled bending and companion modes, and uncoupled bending and companion modes. The nonlinear modal simulation presented is broadly applicable and is demonstrated for nonlinear quasi-static and random acoustic response of flat beam and plate structures with isotropic material properties. Reduced-order analysis predictions are compared with those made using a numerical simulation in physical degrees-of-freedom to quantify the error associated with the selected modal bases. Bending and membrane responses are separately presented to help differentiate the bases.
Fast Virtual Stenting with Active Contour Models in Intracranical Aneurysm
Zhong, Jingru; Long, Yunling; Yan, Huagang; Meng, Qianqian; Zhao, Jing; Zhang, Ying; Yang, Xinjian; Li, Haiyun
2016-01-01
Intracranial stents are becoming an increasingly useful option in the treatment of intracranial aneurysms (IAs). Image-based simulation of the released stent configuration, together with computational fluid dynamics (CFD) simulation prior to intervention, can help surgeons optimize the intervention scheme. This paper proposes a fast virtual stenting method for IAs based on an active contour model (ACM), which is able to virtually release stents within any patient-specific shaped vessel and aneurysm model built on real medical image data. In this method, an initial stent mesh is generated along the centerline of the parent artery without the need for registration between the stent contour and the vessel. Additionally, the diameter of the initial stent volumetric mesh is set to the maximum inscribed-sphere diameter of the parent artery to improve stenting accuracy and save computational cost. Finally, a novel criterion for terminating virtual stent expansion, based on collision detection of axis-aligned bounding boxes, is applied, making the stent expansion free of edge effects. The experimental results of the virtual stenting and the corresponding CFD simulations exhibit the efficacy and accuracy of the ACM-based method, which are valuable for intervention scheme selection and therapy plan confirmation. PMID:26876026
Ma, Xiaoli; He, Jiawei; Yan, Jin; Wang, Qing; Li, Hui
2016-03-25
Mycophenolic sodium is an immunosuppressive agent that is commonly co-administered with a corticosteroid in clinical practice. Considering that the distribution and side effects of the drug may change when a co-administered drug is present, this paper comparatively analyzed the binding ability of mycophenolic sodium and meprednisone toward human serum albumin by nuclear magnetic resonance relaxation data and docking simulation. The nuclear magnetic resonance approach was based on the analysis of the proton selective and non-selective relaxation rate enhancements of the ligand in the absence and presence of the macromolecule. The contribution of the bound ligand fraction to the observed relaxation rate in relation to protein concentration allowed the calculation of the affinity index. This approach allowed the comparison of the binding affinities of mycophenolic sodium and meprednisone. Molecular modeling was performed to simulate the binding mode of ligand and albumin with Autodock 4.2.5. Competitive binding of mycophenolic sodium and meprednisone was further investigated through fluorescence spectroscopy. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Pathway-Based Kernel Boosting for the Analysis of Genome-Wide Association Studies
Manitz, Juliane; Burger, Patricia; Amos, Christopher I.; Chang-Claude, Jenny; Wichmann, Heinz-Erich; Kneib, Thomas; Bickeböller, Heike
2017-01-01
The analysis of genome-wide association studies (GWAS) benefits from the investigation of biologically meaningful gene sets, such as gene-interaction networks (pathways). We propose an extension to a successful kernel-based pathway analysis approach by integrating kernel functions into a powerful algorithmic framework for variable selection, to enable investigation of multiple pathways simultaneously. We employ genetic similarity kernels from the logistic kernel machine test (LKMT) as base-learners in a boosting algorithm. A model to explain case-control status is created iteratively by selecting pathways that improve its prediction ability. We evaluated our method in simulation studies adopting 50 pathways for different sample sizes and genetic effect strengths. Additionally, we included an exemplary application of kernel boosting to a rheumatoid arthritis and a lung cancer dataset. Simulations indicate that kernel boosting outperforms the LKMT in certain genetic scenarios. Applications to GWAS data on rheumatoid arthritis and lung cancer resulted in sparse models which were based on pathways interpretable in a clinical sense. Kernel boosting is highly flexible in terms of considered variables and overcomes the problem of multiple testing. Additionally, it enables the prediction of clinical outcomes. Thus, kernel boosting constitutes a new, powerful tool in the analysis of GWAS data and towards the understanding of biological processes involved in disease susceptibility. PMID:28785300
Pathway-Based Kernel Boosting for the Analysis of Genome-Wide Association Studies.
Friedrichs, Stefanie; Manitz, Juliane; Burger, Patricia; Amos, Christopher I; Risch, Angela; Chang-Claude, Jenny; Wichmann, Heinz-Erich; Kneib, Thomas; Bickeböller, Heike; Hofner, Benjamin
2017-01-01
The analysis of genome-wide association studies (GWAS) benefits from the investigation of biologically meaningful gene sets, such as gene-interaction networks (pathways). We propose an extension to a successful kernel-based pathway analysis approach by integrating kernel functions into a powerful algorithmic framework for variable selection, to enable investigation of multiple pathways simultaneously. We employ genetic similarity kernels from the logistic kernel machine test (LKMT) as base-learners in a boosting algorithm. A model to explain case-control status is created iteratively by selecting pathways that improve its prediction ability. We evaluated our method in simulation studies adopting 50 pathways for different sample sizes and genetic effect strengths. Additionally, we included an exemplary application of kernel boosting to a rheumatoid arthritis and a lung cancer dataset. Simulations indicate that kernel boosting outperforms the LKMT in certain genetic scenarios. Applications to GWAS data on rheumatoid arthritis and lung cancer resulted in sparse models which were based on pathways interpretable in a clinical sense. Kernel boosting is highly flexible in terms of considered variables and overcomes the problem of multiple testing. Additionally, it enables the prediction of clinical outcomes. Thus, kernel boosting constitutes a new, powerful tool in the analysis of GWAS data and towards the understanding of biological processes involved in disease susceptibility.
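A much-reduced caricature of the boosting loop: treat each pathway's genotype block as a kernel base-learner and, at every step, add the learner that best fits the current residuals. This sketch uses plain L2-boosting with kernel ridge learners, an assumption that stands in for the paper's LKMT-based learners; data and pathway names are synthetic:

```python
# Hedged miniature of pathway-kernel boosting: L2-boosting where each
# base-learner is a kernel ridge fit on one pathway's SNP block.
# Not the paper's LKMT learners; data are synthetic.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(5)
n = 150
pathways = {p: rng.normal(size=(n, 8)) for p in "ABCD"}
y = pathways["B"][:, 0] + 0.1 * rng.normal(size=n)   # only pathway B matters

f = np.zeros(n); nu = 0.3                            # boosting state, step size
for step in range(10):
    resid = y - f
    fits = {p: KernelRidge(kernel="linear", alpha=1.0).fit(X, resid).predict(X)
            for p, X in pathways.items()}
    best = min(fits, key=lambda p: ((resid - fits[p]) ** 2).sum())
    f += nu * fits[best]
    print(step, "selected pathway:", best)
```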
NASA Astrophysics Data System (ADS)
Sessoms, D. A.; Amon, A.; Courbin, L.; Panizza, P.
2010-10-01
The binary path selection of droplets reaching a T junction is regulated by time-delayed feedback and nonlinear couplings. Such mechanisms result in complex dynamics of droplet partitioning: numerous discrete bifurcations between periodic regimes are observed. We introduce a model based on an approximation that makes this problem tractable. This allows us to derive analytical formulae that predict the occurrence of the bifurcations between consecutive regimes, establish selection rules for the period of a regime, and describe the evolutions of the period and complexity of droplet pattern in a cycle with the key parameters of the system. We discuss the validity and limitations of our model which describes semiquantitatively both numerical simulations and microfluidic experiments.
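The feedback mechanism admits a very small discrete caricature: each arriving droplet enters the arm of lower instantaneous resistance, resistance grows with the droplets currently occupying an arm, and droplets leave after a transit time. All parameter values below are invented; the paper's analytical model is richer:

```python
# Hedged minimal caricature of droplet partitioning at a T junction with
# occupancy-dependent resistance and delayed exit. Parameters are invented.
n_droplets, transit, extra_per_drop = 40, 5, 0.3
arms = {"A": [], "B": []}                 # exit times of droplets in each arm
pattern = []

for t in range(n_droplets):
    for arm in arms.values():             # drop droplets that have exited
        arm[:] = [exit_t for exit_t in arm if exit_t > t]
    # resistance ~ baseline + contribution of droplets occupying the arm
    rA = 1.0 + extra_per_drop * len(arms["A"])
    rB = 1.0 + extra_per_drop * len(arms["B"])
    choice = "A" if rA <= rB else "B"
    arms[choice].append(t + transit)
    pattern.append(choice)

print("".join(pattern))                   # periodic strings such as ABAB...
```

With symmetric arms this rule settles into the alternating ABAB pattern; asymmetric baseline resistances or transit times yield the longer-period regimes and the bifurcations between them that the abstract describes.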
Selection of an appropriate animal model for study of bone loss in weightlessness
NASA Technical Reports Server (NTRS)
Wolinsky, I.
1986-01-01
Prolonged weightlessness in space flight results in a slow, progressive demineralization of bone accompanied by increased calcium output in the urine, resulting in negative calcium balances. This possibly irreversible bone loss may constitute a serious limiting factor for long-duration manned space flight. A number of preventative measures have been suggested, e.g., exercise during flight, dietary calcium supplements, and use of specific prophylactic drugs. In order to facilitate research in these areas it is necessary to develop appropriate ground-based animal models that simulate the human condition of osteoporosis. An appropriate animal model would permit bone density studies, calcium balance studies, biochemical analyses, ground-based simulation models of weightlessness (bed rest, restraint, immobilization) and the planning of inflight experiments. Several animal models have been proposed in the biomedical research literature, but they have inherent deficiencies. The purpose of this project was to evaluate models in the literature and determine which of these most closely simulates the phenomenon of bone loss in humans with regard to growth, bone remodeling, and structural, chemical, and mineralization similarities to humans. This was accomplished by a comprehensive computer-assisted literature search and report. Three animal models were examined closely for their relative suitability: the albino rat, the monkey, and the Beagle.
Directional selection can drive the evolution of modularity in complex traits
Melo, Diogo; Marroig, Gabriel
2015-01-01
Modularity is a central concept in modern biology, providing a powerful framework for the study of living organisms on many organizational levels. Two central and related questions can be posed in regard to modularity: How does modularity appear in the first place, and what forces are responsible for keeping and/or changing modular patterns? We approached these questions using a quantitative genetics simulation framework, building on previous results obtained with bivariate systems and extending them to multivariate systems. We developed an individual-based model capable of simulating many traits controlled by many loci with variable pleiotropic relations between them, expressed in populations subject to mutation, recombination, drift, and selection. We used this model to study the problem of the emergence of modularity, and hereby show that drift and stabilizing selection are inefficient at creating modular variational structures. We also demonstrate that directional selection can have marked effects on the modular structure between traits, actively promoting a restructuring of genetic variation in the selected population and potentially facilitating the response to selection. Furthermore, we give examples of complex covariation created by simple regimes of combined directional and stabilizing selection and show that stabilizing selection is important in the maintenance of established covariation patterns. Our results are in full agreement with previous results for two-trait systems and further extend them to include scenarios of greater complexity. Finally, we discuss the evolutionary consequences of modular patterns being molded by directional selection. PMID:25548154
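A toy individual-based sketch of the kind of simulation described above, assuming haploid additive genetics, free recombination, and exponential directional fitness; population sizes, mutation rates, and the pleiotropy matrix are illustrative placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
n_ind, n_loci, n_traits = 500, 100, 4

# pleiotropy matrix: which loci affect which traits
B = rng.choice([0.0, 1.0], size=(n_traits, n_loci), p=[0.7, 0.3])
alleles = rng.normal(0.0, 0.1, size=(n_ind, n_loci))  # additive allelic values
beta = np.array([1.0, 1.0, 0.0, 0.0])  # directional gradient on traits 1 and 2

for gen in range(200):
    z = alleles @ B.T                    # trait values of all individuals
    w = np.exp(z @ beta)                 # directional (linear) fitness
    w /= w.sum()
    parents = rng.choice(n_ind, size=(n_ind, 2), p=w)
    # free recombination: each locus inherited from a random parent
    pick = rng.integers(0, 2, size=(n_ind, n_loci))
    alleles = np.where(pick == 0,
                       alleles[parents[:, 0]], alleles[parents[:, 1]])
    alleles += rng.normal(0.0, 0.01, size=alleles.shape)  # mutation

G = np.cov((alleles @ B.T).T)  # genetic covariance matrix between traits
# under combined directional selection on traits 1-2, covariance should
# concentrate within the selected pair relative to the unselected traits
```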
NASA Astrophysics Data System (ADS)
Jia, Xin-Hong; Wu, Zheng-Mao; Xia, Guang-Qiong
2006-12-01
It is well known that the gain-clamped semiconductor optical amplifier (GC-SOA) based on the lasing effect is subject to transmission-rate restrictions because of relaxation oscillation. A GC-SOA based on the compensating effect between the signal light and amplified spontaneous emission, realized by combining an SOA with a fiber Bragg grating (FBG), can be used to overcome this problem. In this paper, a theoretical model of the GC-SOA based on a compensating light is constructed. Numerical simulations demonstrate that good gain and noise-figure characteristics can be realized by appropriate selection of the FBG insertion position, the peak reflectivity of the FBG, and the biasing current of the GC-SOA.
On the information content of hydrological signatures and their relationship to catchment attributes
NASA Astrophysics Data System (ADS)
Addor, Nans; Clark, Martyn P.; Prieto, Cristina; Newman, Andrew J.; Mizukami, Naoki; Nearing, Grey; Le Vine, Nataliya
2017-04-01
Hydrological signatures, which are indices characterizing hydrologic behavior, are increasingly used for the evaluation, calibration and selection of hydrological models. Their key advantage is to provide more direct insights into specific hydrological processes than aggregated metrics (e.g., the Nash-Sutcliffe efficiency). A plethora of signatures now exists, which enable characterizing a variety of hydrograph features, but also makes the selection of signatures for new studies challenging. Here we propose that the selection of signatures should be based on their information content, which we estimated using several approaches, all leading to similar conclusions. To explore the relationship between hydrological signatures and the landscape, we extended a previously published data set of hydrometeorological time series for 671 catchments in the contiguous United States, by characterizing the climatic conditions, topography, soil, vegetation and stream network of each catchment. This new catchment attributes data set will soon be in open access, and we are looking forward to introducing it to the community. We used this data set in a data-learning algorithm (random forests) to explore whether hydrological signatures could be inferred from catchment attributes alone. We find that some signatures can be predicted remarkably well by random forests and, interestingly, the same signatures are well captured when simulating discharge using a conceptual hydrological model. We discuss what this result reveals about our understanding of hydrological processes shaping hydrological signatures. We also identify which catchment attributes exert the strongest control on catchment behavior, in particular during extreme hydrological events. Overall, climatic attributes have the most significant influence, and strongly condition how well hydrological signatures can be predicted by random forests and simulated by the hydrological model. In contrast, soil characteristics at the catchment scale are not found to be significant predictors by random forests, which raises questions on how to best use soil data for hydrological modeling, for instance for parameter estimation. We finally demonstrate that signatures with high spatial variability are poorly captured by random forests and model simulations, which makes their regionalization delicate. We conclude with a ranking of signatures based on their information content, and propose that the signatures with high information content are best suited for model calibration, model selection and understanding hydrologic similarity.
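The attribute-to-signature mapping described here can be prototyped directly with an off-the-shelf random forest; the arrays below are synthetic stand-ins for the catchment attributes and a signature, not the actual data set.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(671, 20))        # catchment attributes (synthetic)
y = 0.8 * X[:, 0] - 0.3 * X[:, 1] + 0.2 * rng.normal(size=671)  # toy signature

rf = RandomForestRegressor(n_estimators=500, random_state=0)
skill = cross_val_score(rf, X, y, cv=10, scoring="r2")  # out-of-sample skill
rf.fit(X, y)
ranking = np.argsort(rf.feature_importances_)[::-1]     # dominant attributes
print(f"mean CV R2 = {skill.mean():.2f}, top attributes: {ranking[:5]}")
```

The cross-validated R2 plays the role of the "how well can this signature be predicted" score, and the impurity-based importances identify which attributes exert the strongest control.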
Optimization of an electromagnetic linear actuator using a network and a finite element model
NASA Astrophysics Data System (ADS)
Neubert, Holger; Kamusella, Alfred; Lienig, Jens
2011-03-01
Model-based design optimization leads to robust solutions only if the statistical deviations of design, load, and ambient parameters from their nominal values are considered. We describe an optimization methodology that treats these deviations as stochastic variables, for an exemplary electromagnetic actuator used to drive a Braille printer. A combined model simulates the dynamic behavior of the actuator and its non-linear load. It consists of a dynamic network model and a stationary magnetic finite element (FE) model. The network model utilizes lookup tables of the magnetic force and the flux linkage computed by the FE model. After a sensitivity analysis using design of experiments (DoE) methods and a nominal optimization based on gradient methods, a robust design optimization is performed. Selected design variables are included in the form of their density functions. In order to reduce the computational effort, we use response surfaces in place of the combined system model in all stochastic analysis steps. Thus, Monte-Carlo simulations can be applied. As a result, we found an optimum system design meeting our requirements with regard to function and reliability.
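The robust-design step amounts to propagating assumed manufacturing scatter through the fitted response surface by Monte-Carlo sampling. The quadratic surface coefficients, tolerances, and the 6 ms requirement below are invented placeholders, not values from the study.

```python
import numpy as np

# quadratic response surface standing in for the combined network/FE model
b0 = 4.2
b = np.array([0.8, 12.0, -0.5])
Q = np.diag([0.1, 30.0, 0.05])

rng = np.random.default_rng(2)
n = 100_000
# design variables as stochastic variables: nominal value plus assumed scatter
x = rng.normal(loc=[0.0, 0.05, 1.0], scale=[0.02, 0.005, 0.05], size=(n, 3))

t = b0 + x @ b + np.einsum("ij,jk,ik->i", x, Q, x)  # actuation time (ms)
p_fail = np.mean(t > 6.0)                           # probability of missing spec
print(f"mean = {t.mean():.2f} ms, P(t > 6 ms) = {p_fail:.4f}")
```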
Trend Estimates of AERONET-Observed and Model-Simulated AOTs Between 1993 and 2013
NASA Technical Reports Server (NTRS)
Yoon, J.; Pozzer, A.; Chang, D. Y.; Lelieveld, J.; Kim, J.; Kim, M.; Lee, Y. G.; Koo, J.-H.; Lee, J.; Moon, K. J.
2015-01-01
Recently, temporal changes in Aerosol Optical Thickness (AOT) have been investigated based on model simulations, satellite and ground-based observations. Most AOT trend studies used monthly or annual arithmetic means that discard details of the generally right-skewed AOT distributions. Potentially, such results can be biased by extreme values (including outliers). This study additionally uses percentiles (i.e., the lowest 5%, 25%, 50%, 75% and 95% of the monthly cumulative distributions fitted to Aerosol Robotic Network (AERONET)-observed and ECHAM/MESSy Atmospheric Chemistry (EMAC)-model simulated AOTs) that are less affected by outliers caused by measurement error, cloud contamination and occasional extreme aerosol events. Since the limited statistical representativeness of monthly percentiles and means can lead to bias, this study adopts the number of observations as a weighting factor, which improves the statistical robustness of trend estimates. By analyzing the aerosol composition of AERONET-observed and EMAC-simulated AOTs in selected regions of interest, we distinguish the dominant aerosol types and investigate the causes of regional AOT trends. The simulated and observed trends are generally consistent with a high correlation coefficient (R = 0.89) and small bias (slope ± 2σ = 0.75 ± 0.19). A significant decrease in EMAC-decomposed AOTs by water-soluble compounds and black carbon is found over the USA and the EU due to environmental regulation. In particular, a clear reversal in the AERONET AOT trend percentiles is found over the USA, probably related to the AOT diurnal cycle and the frequency of wildfires. In most of the selected regions of interest, EMAC-simulated trends are mainly attributed to the significant changes of the dominant aerosols; e.g., significant decrease in sea salt and water soluble compounds over Central America, increase in dust over Northern Africa and Middle East, and decrease in black carbon and organic carbon over Australia.
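The observation-count weighting amounts to weighted least squares on each monthly percentile series; the following generic WLS sketch (not necessarily the paper's exact estimator) returns a trend per decade and its standard error.

```python
import numpy as np

def weighted_trend(t, q, n_obs):
    """Weighted least-squares trend of a monthly AOT percentile series.

    t: decimal years; q: one monthly percentile (e.g. the median AOT);
    n_obs: observations per month, used as regression weights.
    """
    w = np.asarray(n_obs, dtype=float)
    A = np.column_stack([np.ones_like(t), t])
    cov = np.linalg.inv(A.T @ (w[:, None] * A))
    beta = cov @ A.T @ (w * q)
    resid = q - A @ beta
    sigma2 = np.sum(w * resid**2) / (w.sum() - 2)  # weighted residual variance
    return 10 * beta[1], 10 * np.sqrt(sigma2 * cov[1, 1])  # per decade
```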
High resolution global flood hazard map from physically-based hydrologic and hydraulic models.
NASA Astrophysics Data System (ADS)
Begnudelli, L.; Kaheil, Y.; McCollum, J.
2017-12-01
The global flood map published online at http://www.fmglobal.com/research-and-resources/global-flood-map at 90m resolution is being used worldwide to understand flood risk exposure, exercise certain measures of mitigation, and/or transfer the residual risk financially through flood insurance programs. The modeling system is based on a physically-based hydrologic model to simulate river discharges, and 2D shallow-water hydrodynamic model to simulate inundation. The model can be applied to large-scale flood hazard mapping thanks to several solutions that maximize its efficiency and the use of parallel computing. The hydrologic component of the modeling system is the Hillslope River Routing (HRR) hydrologic model. HRR simulates hydrological processes using a Green-Ampt parameterization, and is calibrated against observed discharge data from several publicly-available datasets. For inundation mapping, we use a 2D Finite-Volume Shallow-Water model with wetting/drying. We introduce here a grid Up-Scaling Technique (UST) for hydraulic modeling to perform simulations at higher resolution at global scale with relatively short computational times. A 30m SRTM is now available worldwide along with higher accuracy and/or resolution local Digital Elevation Models (DEMs) in many countries and regions. UST consists of aggregating computational cells, thus forming a coarser grid, while retaining the topographic information from the original full-resolution mesh. The full-resolution topography is used for building relationships between volume and free surface elevation inside cells and computing inter-cell fluxes. This approach almost achieves computational speed typical of the coarse grids while preserving, to a significant extent, the accuracy offered by the much higher resolution available DEM. The simulations are carried out along each river of the network by forcing the hydraulic model with the streamflow hydrographs generated by HRR. Hydrographs are scaled so that the peak corresponds to the return period corresponding to the hazard map being produced (e.g. 100 years, 500 years). Each numerical simulation models one river reach, except for the longest reaches which are split in smaller parts. Here we show results for selected river basins worldwide.
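The essence of the up-scaling technique, retaining full-resolution topography inside each coarse cell as a stage-volume relationship, can be sketched as follows; this is a simplified reading of the approach, not the authors' code.

```python
import numpy as np

def stage_volume_curve(dem_block, fine_cell_area, stages):
    """Water volume stored in one coarse cell at each candidate stage.

    dem_block: full-resolution DEM elevations inside the coarse cell (m);
    fine_cell_area: area of one fine cell (m^2); stages: elevations (m).
    """
    z = dem_block.ravel()
    return np.array([np.clip(s - z, 0.0, None).sum() * fine_cell_area
                     for s in stages])

# example: aggregate a 30 m DEM into one 300 m coarse cell (10 x 10 block)
rng = np.random.default_rng(5)
block = 100.0 + rng.normal(0.0, 2.0, size=(10, 10))
stages = np.linspace(block.min(), block.max() + 3.0, 50)
volumes = stage_volume_curve(block, 30.0 * 30.0, stages)
# the coarse solver inverts this curve, e.g. np.interp(V, volumes, stages),
# to recover the free-surface elevation used for inter-cell flux computation
```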
NASA Astrophysics Data System (ADS)
Wöhling, T.; Schöniger, A.; Geiges, A.; Nowak, W.; Gayler, S.
2013-12-01
The objective selection of appropriate models for realistic simulations of coupled soil-plant processes is a challenging task since the processes are complex, not fully understood at larger scales, and highly non-linear. Also, comprehensive data sets are scarce, and measurements are uncertain. In the past decades, a variety of different models have been developed that exhibit a wide range of complexity regarding their approximation of processes in the coupled model compartments. We present a method for evaluating experimental design for maximum confidence in the model selection task. The method considers uncertainty in parameters, measurements and model structures. Advancing the ideas behind Bayesian Model Averaging (BMA), we analyze the changes in posterior model weights and posterior model choice uncertainty when more data are made available. This allows assessing the power of different data types, data densities and data locations in identifying the best model structure from among a suite of plausible models. The models considered in this study are the crop models CERES, SUCROS, GECROS and SPASS, which are coupled to identical routines for simulating soil processes within the modelling framework Expert-N. The four models considerably differ in the degree of detail at which crop growth and root water uptake are represented. Monte-Carlo simulations were conducted for each of these models considering their uncertainty in soil hydraulic properties and selected crop model parameters. Using a Bootstrap Filter (BF), the models were then conditioned on field measurements of soil moisture, matric potential, leaf-area index, and evapotranspiration rates (from eddy-covariance measurements) during a vegetation period of winter wheat at a field site at the Swabian Alb in Southwestern Germany. Following our new method, we derived model weights when using all data or different subsets thereof. We discuss to what degree the posterior mean outperforms the prior mean and all individual posterior models, how informative the data types were for reducing prediction uncertainty of evapotranspiration and deep drainage, and how well the model structure can be identified based on the different data types and subsets. We further analyze the impact of measurement uncertainty and systematic model errors on the effective sample size of the BF and the resulting model weights.
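The central bookkeeping, how posterior model weights shift as data are added, reduces to Bayes' rule over the model set; the log-likelihoods would come from the bootstrap-filter runs. A generic sketch:

```python
import numpy as np

def posterior_model_weights(log_liks, prior=None):
    """Posterior weights for a suite of models given each model's
    log-likelihood of the same data subset (e.g. CERES, SUCROS,
    GECROS, SPASS conditioned on soil moisture only)."""
    log_liks = np.asarray(log_liks, dtype=float)
    m = len(log_liks)
    prior = np.full(m, 1.0 / m) if prior is None else np.asarray(prior)
    a = log_liks + np.log(prior)
    a -= a.max()                 # guard against overflow
    w = np.exp(a)
    return w / w.sum()

def weight_entropy(w):
    """Model-choice uncertainty: entropy of the weight vector."""
    w = w[w > 0]
    return float(-(w * np.log(w)).sum())
```

Tracking the entropy as successive data types are assimilated is one way to quantify how much each measurement type sharpens the model selection.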
Modeling Dynamic Food Choice Processes to Understand Dietary Intervention Effects.
Marcum, Christopher Steven; Goldring, Megan R; McBride, Colleen M; Persky, Susan
2018-02-17
Meal construction is largely governed by nonconscious and habit-based processes that can be represented as a collection of individual, micro-level food choices that eventually give rise to a final plate. Despite this, dietary behavior intervention research rarely captures these micro-level food choice processes, instead measuring outcomes at aggregated levels. This is due in part to a dearth of analytic techniques to model these dynamic time-series events. The current article addresses this limitation by applying a generalization of the relational event framework to model micro-level food choice behavior following an educational intervention. Relational event modeling was used to model the food choices that 221 mothers made for their child following receipt of an information-based intervention. Participants were randomized to receive either (a) control information; (b) childhood obesity risk information; (c) childhood obesity risk information plus a personalized family history-based risk estimate for their child. Participants then made food choices for their child in a virtual reality-based food buffet simulation. Micro-level aspects of the built environment, such as the ordering of each food in the buffet, were influential. Other dynamic processes such as choice inertia also influenced food selection. Among participants receiving the strongest intervention condition, choice inertia decreased and the overall rate of food selection increased. Modeling food selection processes can elucidate the points at which interventions exert their influence. Researchers can leverage these findings to gain insight into nonconscious and uncontrollable aspects of food selection that influence dietary outcomes, which can ultimately improve the design of dietary interventions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCabe, M. F.; Ershadi, A.; Jimenez, C.
Determining the spatial distribution and temporal development of evaporation at regional and global scales is required to improve our understanding of the coupled water and energy cycles and to better monitor any changes in observed trends and variability of linked hydrological processes. With recent international efforts guiding the development of long-term and globally distributed flux estimates, continued product assessments are required to inform upon the selection of suitable model structures and also to establish the appropriateness of these multi-model simulations for global application. In support of the objectives of the Global Energy and Water Cycle Exchanges (GEWEX) LandFlux project, four commonly used evaporation models are evaluated against data from tower-based eddy-covariance observations, distributed across a range of biomes and climate zones. The selected schemes include the Surface Energy Balance System (SEBS) approach, the Priestley–Taylor Jet Propulsion Laboratory (PT-JPL) model, the Penman–Monteith-based Mu model (PM-Mu) and the Global Land Evaporation Amsterdam Model (GLEAM). Here we seek to examine the fidelity of global evaporation simulations by examining the multi-model response to varying sources of forcing data. To do this, we perform parallel and collocated model simulations using tower-based data together with a global-scale grid-based forcing product. Through quantifying the multi-model response to high-quality tower data, a better understanding of the subsequent model response to the coarse-scale globally gridded data that underlies the LandFlux product can be obtained, while also providing a relative evaluation and assessment of model performance. Using surface flux observations from 45 globally distributed eddy-covariance stations as independent metrics of performance, the tower-based analysis indicated that PT-JPL provided the highest overall statistical performance (0.72; 61 W m⁻²; 0.65), followed closely by GLEAM (0.68; 64 W m⁻²; 0.62), with values in parentheses representing the R², RMSD and Nash–Sutcliffe efficiency (NSE), respectively. PM-Mu (0.51; 78 W m⁻²; 0.45) tended to underestimate fluxes, while SEBS (0.72; 101 W m⁻²; 0.24) overestimated values relative to observations. A focused analysis across specific biome types and climate zones showed considerable variability in the performance of all models, with no single model consistently able to outperform any other. Results also indicated that the global gridded data tended to reduce the performance for all of the studied models when compared to the tower data, likely a response to scale mismatch and issues related to forcing quality. Rather than relying on any single model simulation, the spatial and temporal variability at both the tower- and grid-scale highlighted the potential benefits of developing an ensemble or blended evaporation product for global-scale LandFlux applications. Hence, challenges related to the robust assessment of the LandFlux product are also discussed.
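The three statistics quoted in parentheses can be reproduced for any model/tower pair with a few lines:

```python
import numpy as np

def evaluation_stats(obs, sim):
    """R^2, RMSD and Nash-Sutcliffe efficiency of simulated vs observed fluxes."""
    obs = np.asarray(obs, dtype=float)
    sim = np.asarray(sim, dtype=float)
    r = np.corrcoef(obs, sim)[0, 1]
    rmsd = np.sqrt(np.mean((sim - obs) ** 2))
    nse = 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)
    return r ** 2, rmsd, nse
```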
NASA Astrophysics Data System (ADS)
Holoien, Thomas W.-S.; Marshall, Philip J.; Wechsler, Risa H.
2017-06-01
We describe two new open-source tools written in Python for performing extreme deconvolution Gaussian mixture modeling (XDGMM) and using a conditioned model to re-sample observed supernova and host galaxy populations. XDGMM is a new program that uses Gaussian mixtures to perform density estimation of noisy data using extreme deconvolution (XD) algorithms. Additionally, it has functionality not available in other XD tools. It allows the user to select between the AstroML and Bovy et al. fitting methods and is compatible with scikit-learn machine learning algorithms. Most crucially, it allows the user to condition a model based on the known values of a subset of parameters. This gives the user the ability to produce a tool that can predict unknown parameters based on a model that is conditioned on known values of other parameters. EmpiriciSN is an exemplary application of this functionality, which can be used to fit an XDGMM model to observed supernova/host data sets and predict likely supernova parameters using a model conditioned on observed host properties. It is primarily intended to simulate realistic supernovae for LSST data simulations based on empirical galaxy properties.
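The conditioning step that distinguishes XDGMM follows directly from the Gaussian conditional formulas applied component by component; a self-contained numpy sketch of the idea (not the package's actual API):

```python
import numpy as np

def condition_gmm(weights, means, covs, idx_known, x_known):
    """Condition a Gaussian mixture on known values of some dimensions.

    idx_known: indices of the conditioned dimensions (e.g. host properties);
    returns a mixture over the remaining dimensions (e.g. SN parameters).
    """
    d = means.shape[1]
    idx_free = np.setdiff1d(np.arange(d), idx_known)
    new_w, new_mu, new_cov = [], [], []
    for w, mu, S in zip(weights, means, covs):
        Saa = S[np.ix_(idx_free, idx_free)]
        Sab = S[np.ix_(idx_free, idx_known)]
        Sbb = S[np.ix_(idx_known, idx_known)]
        diff = x_known - mu[idx_known]
        sol = np.linalg.solve(Sbb, diff)
        new_mu.append(mu[idx_free] + Sab @ sol)
        new_cov.append(Saa - Sab @ np.linalg.solve(Sbb, Sab.T))
        # reweight by how likely the known values are under this component
        norm = np.sqrt(np.linalg.det(2.0 * np.pi * Sbb))
        new_w.append(w * np.exp(-0.5 * diff @ sol) / norm)
    new_w = np.asarray(new_w)
    return new_w / new_w.sum(), np.asarray(new_mu), np.asarray(new_cov)
```

Sampling from the conditioned mixture then yields, for instance, plausible supernova parameters for a galaxy with given observed properties.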
Cross-validation to select Bayesian hierarchical models in phylogenetics.
Duchêne, Sebastián; Duchêne, David A; Di Giallonardo, Francesca; Eden, John-Sebastian; Geoghegan, Jemma L; Holt, Kathryn E; Ho, Simon Y W; Holmes, Edward C
2016-05-26
Recent developments in Bayesian phylogenetic models have increased the range of inferences that can be drawn from molecular sequence data. Accordingly, model selection has become an important component of phylogenetic analysis. Methods of model selection generally consider the likelihood of the data under the model in question. In the context of Bayesian phylogenetics, the most common approach involves estimating the marginal likelihood, which is typically done by integrating the likelihood across model parameters, weighted by the prior. Although this method is accurate, it is sensitive to the presence of improper priors. We explored an alternative approach based on cross-validation that is widely used in evolutionary analysis. This involves comparing models according to their predictive performance. We analysed simulated data and a range of viral and bacterial data sets using a cross-validation approach to compare a variety of molecular clock and demographic models. Our results show that cross-validation can be effective in distinguishing between strict- and relaxed-clock models and in identifying demographic models that allow growth in population size over time. In most of our empirical data analyses, the model selected using cross-validation was able to match that selected using marginal-likelihood estimation. The accuracy of cross-validation appears to improve with longer sequence data, particularly when distinguishing between relaxed-clock models. Cross-validation is a useful method for Bayesian phylogenetic model selection. This method can be readily implemented even when considering complex models where selecting an appropriate prior for all parameters may be difficult.
NASA Technical Reports Server (NTRS)
Bergeron, H. P.; Haynie, A. T.; Mcdede, J. B.
1980-01-01
A general aviation single-pilot instrument flight rule (IFR) simulation capability was developed. Problems experienced by single pilots flying in IFR conditions were investigated. The simulation required a three-dimensional spatial navaid environment of a flight navigational area. A computer simulation of all the navigational aids plus 12 selected airports located in the Washington/Norfolk area was developed. All programmed locations in the list were referenced to a Cartesian coordinate system with the origin located at a specified airport's reference point. All navigational aids with their associated frequencies, call letters, locations, and orientations, plus runways and true headings, are included in the data base. The simulation included a TV-displayed out-the-window visual scene of country and suburban terrain and a scaled model runway complex. Any of the programmed runways, with all its associated navaids, can be referenced to a runway on the airport in this visual scene. This allows simulation of a full mission scenario including breakout and landing.
2008-10-01
• Healthcare systems will be those that work with data/info in new ways
• Artificial intelligence will come to the fore
  o Effectively acquire ... education
• Artificial intelligence will assist in
  o History and physical examination
  o Imaging selection via algorithms
  o Test selection via algorithms
... medical language into a simulation model based upon artificial intelligence, and the content verification and validation of the cognitive
Modeling of laser welding of steel and titanium plates with a composite insert
NASA Astrophysics Data System (ADS)
Isaev, V. I.; Cherepanov, A. N.; Shapeev, V. P.
2017-10-01
A 3D model of laser welding proposed earlier by the authors was extended to the case of welding metallic plates made of dissimilar materials with a composite multilayer intermediate insert. The model simulates heat transfer in the welded plates and takes phase transitions into account. It was proposed to select the composition and dimensions of the several-metal insert so as to avoid the formation of brittle intermetallic phases in the weld joint that negatively affect its strength properties. The model accounts for the key physical phenomena occurring during the complex process of laser welding and is capable of calculating the temperature regime at each point of the plates. The model can be used to select welding parameters that reduce the risk of intermetallic phase formation, and it can forecast the dimensions and crystalline structure of the solidified melt. Based on the proposed model, a numerical algorithm was constructed. Simulations were carried out for the welding of titanium and steel plates with a composite insert comprising four different metals: copper and niobium (intermediate plates) with steel and titanium (outer plates). The insert is produced by explosion welding. Temperature fields and the processes of melting, evaporation, and solidification were studied.
Novel Harmonic Regularization Approach for Variable Selection in Cox's Proportional Hazards Model
Chu, Ge-Jin; Liang, Yong; Wang, Jia-Xuan
2014-01-01
Variable selection is an important issue in regression, and a number of variable selection methods have been proposed involving nonconvex penalty functions. In this paper, we investigate a novel harmonic regularization method, which can approximate nonconvex Lq (1/2 < q < 1) regularizations, to select key risk factors in the Cox proportional hazards model using microarray gene expression data. The harmonic regularization method can be efficiently solved using our proposed direct path seeking approach, which can produce solutions that closely approximate those for the convex loss function and the nonconvex regularization. Simulation results based on artificial datasets and four real microarray gene expression datasets, including the diffuse large B-cell lymphoma (DLBCL), lung cancer, and AML datasets, show that the harmonic regularization method can be more accurate for variable selection than existing Lasso series methods. PMID:25506389
Spatial and Feature-Based Attention in a Layered Cortical Microcircuit Model
Wagatsuma, Nobuhiko; Potjans, Tobias C.; Diesmann, Markus; Sakai, Ko; Fukai, Tomoki
2013-01-01
Directing attention to the spatial location or the distinguishing feature of a visual object modulates neuronal responses in the visual cortex and the stimulus discriminability of subjects. However, the spatial and feature-based modes of attention differently influence visual processing by changing the tuning properties of neurons. Intriguingly, neurons' tuning curves are modulated similarly across different visual areas under both these modes of attention. Here, we explored the mechanism underlying the effects of these two modes of visual attention on the orientation selectivity of visual cortical neurons. To do this, we developed a layered microcircuit model. This model describes multiple orientation-specific microcircuits sharing their receptive fields and consisting of layers 2/3, 4, 5, and 6. These microcircuits represent a functional grouping of cortical neurons and mutually interact via lateral inhibition and excitatory connections between groups with similar selectivity. The individual microcircuits receive bottom-up visual stimuli and top-down attention in different layers. A crucial assumption of the model is that feature-based attention activates orientation-specific microcircuits for the relevant feature selectively, whereas spatial attention activates all microcircuits homogeneously, irrespective of their orientation selectivity. Consequently, our model simultaneously accounts for the multiplicative scaling of neuronal responses in spatial attention and the additive modulations of orientation tuning curves in feature-based attention, which have been observed widely in various visual cortical areas. Simulations of the model predict contrasting differences between excitatory and inhibitory neurons in the two modes of attentional modulations. Furthermore, the model replicates the modulation of the psychophysical discriminability of visual stimuli in the presence of external noise. Our layered model with a biologically suggested laminar structure describes the basic circuit mechanism underlying the attention-mode specific modulations of neuronal responses and visual perception. PMID:24324628
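The two modulation signatures the model reproduces can be illustrated in a few lines: a multiplicative response gain for spatial attention versus an additive shift of the orientation tuning curve for feature-based attention (all parameter values arbitrary).

```python
import numpy as np

theta = np.linspace(-90.0, 90.0, 181)      # stimulus orientation (degrees)

def tuning(pref, amp=30.0, width=20.0, base=2.0):
    """Gaussian orientation tuning curve (spikes/s)."""
    return base + amp * np.exp(-0.5 * ((theta - pref) / width) ** 2)

r = tuning(0.0)            # response without attention
r_spatial = 1.3 * r        # spatial attention: multiplicative gain scaling
r_feature = r + 5.0        # feature-based attention: additive modulation
```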
Methodology for balancing design and process tradeoffs for deep-subwavelength technologies
NASA Astrophysics Data System (ADS)
Graur, Ioana; Wagner, Tina; Ryan, Deborah; Chidambarrao, Dureseti; Kumaraswamy, Anand; Bickford, Jeanne; Styduhar, Mark; Wang, Lee
2011-04-01
For process development of deep-subwavelength technologies, it has become accepted practice to use model-based simulation to predict systematic and parametric failures. Increasingly, these techniques are being used by designers to ensure layout manufacturability, as an alternative to, or complement to, restrictive design rules. The benefit of model-based simulation tools in the design environment is that manufacturability problems are addressed in a design-aware way by making appropriate trade-offs, e.g., between overall chip density and manufacturing cost and yield. The paper shows how library elements and the full ASIC design flow benefit from eliminating hot spots and improving design robustness early in the design cycle. It demonstrates a path to yield optimization and first-time-right designs implemented in leading-edge technologies. The approach described herein identifies those areas in the design that could benefit from being fixed early, leading to design updates and avoiding later design churn by careful selection of design sensitivities. This paper shows how to achieve this goal by using simulation tools incorporating various models from sparse to rigorously physical, pattern detection and pattern matching, and checking and validating failure thresholds.
NASA Astrophysics Data System (ADS)
Noh, S. J.; Tachikawa, Y.; Shiiba, M.; Yorozu, K.; Kim, S.
2012-04-01
Data assimilation methods have received increased attention as a means of uncertainty assessment and of enhancing forecasting capability in various areas. Despite their potential, software frameworks applicable to probabilistic approaches and data assimilation are still limited because most hydrologic modeling software is based on a deterministic approach. In this study, we developed a hydrological modeling framework for sequential data assimilation, called MPI-OHyMoS. MPI-OHyMoS allows users to develop their own element models and to easily build a total simulation system model for hydrological simulations. Unlike a process-based modeling framework, this software framework benefits from its object-oriented design to flexibly represent hydrological processes without any change to the main library. Sequential data assimilation based on particle filters is available for any hydrologic model built in MPI-OHyMoS, considering various sources of uncertainty originating from input forcing, parameters and observations. The particle filters constitute a Bayesian learning process in which the propagation of all uncertainties is carried out by a suitable selection of randomly generated particles without any assumptions about the nature of the distributions. In MPI-OHyMoS, ensemble simulations are parallelized, which can take advantage of high-performance computing (HPC) systems. We applied this software framework to short-term streamflow forecasting for several catchments in Japan using a distributed hydrologic model. Uncertainty in model parameters and in remotely sensed rainfall data such as X-band or C-band radar is estimated and mitigated in the sequential data assimilation.
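A minimal bootstrap particle filter of the kind the framework parallelizes; the propagate and likelihood callables stand in for the element models and the observation-error model, and none of this is MPI-OHyMoS code.

```python
import numpy as np

def bootstrap_filter(y_obs, propagate, likelihood, x0, rng):
    """Sequential importance resampling over an ensemble of particles.

    propagate(x, rng): one model step for all particles, including random
      perturbations of uncertain inputs (e.g. radar rainfall) and parameters;
    likelihood(y, x): p(y | x) per particle, e.g. Gaussian in discharge.
    Yields the posterior ensemble after each observation.
    """
    x = np.array(x0, dtype=float)
    n = len(x)
    for y in y_obs:
        x = propagate(x, rng)
        w = likelihood(y, x)
        w = w / w.sum()
        idx = rng.choice(n, size=n, p=w)   # multinomial resampling
        x = x[idx]
        yield x.copy()
```

Because every particle is an independent model run, the ensemble loop is embarrassingly parallel, which is what makes an HPC/MPI implementation attractive.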
2009-04-01
Capabilities. Construction Engineering Research Laboratory. Kathy L. Simunich, Timothy K. Perkins, David M. Bailey, David Brown, and ... inversion height in convective conditions is estimated with a one-dimensional model of the atmospheric boundary layer based on the Driedonks slab model ... tool and its capabilities. Installation geospatial data, in CAD format, were obtained for select buildings, roads, and topographic features in
Voss, Frank D.; Mastin, Mark C.
2012-01-01
A database was developed to automate model execution and to provide users with Internet access to voluminous data products ranging from summary figures to model output time series. Database-enabled Internet tools were developed to allow users to create interactive graphs of output results based on their analysis needs. For example, users were able to create graphs by selecting time intervals, greenhouse gas emission scenarios, general circulation models, and specific hydrologic variables.
Properties of a memory network in psychology
NASA Astrophysics Data System (ADS)
Wedemann, Roseli S.; Donangelo, Raul; de Carvalho, Luís A. V.
2007-12-01
We have previously described neurotic psychopathology and psychoanalytic working-through by an associative memory mechanism, based on a neural network model, where memory was modelled by a Boltzmann machine (BM). Since brain neural topology is selectively structured, we simulated known microscopic mechanisms that control synaptic properties, showing that the network self-organizes to a hierarchical, clustered structure. Here, we show some statistical mechanical properties of the complex networks which result from this self-organization. They indicate that a generalization of the BM may be necessary to model memory.
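For reference, the elementary dynamics of such a memory network, a single finite-temperature update of a Boltzmann machine, is a few lines; the hierarchical, clustered topology would enter through the sparsity pattern of the weight matrix W (a generic sketch).

```python
import numpy as np

def glauber_step(s, W, b, T, rng):
    """One asynchronous +/-1 node update of a Boltzmann machine.

    Energy: E(s) = -0.5 * s @ W @ s - b @ s, with W symmetric, zero diagonal.
    """
    i = rng.integers(len(s))
    dE = 2.0 * s[i] * (W[i] @ s + b[i])    # energy change if node i flips
    if rng.random() < 1.0 / (1.0 + np.exp(dE / T)):
        s[i] = -s[i]
    return s
```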
NASA Astrophysics Data System (ADS)
Carrico, T.; Langster, T.; Carrico, J.; Alfano, S.; Loucks, M.; Vallado, D.
The authors present several spacecraft rendezvous and close-proximity maneuvering techniques modeled with a high-precision numerical integrator using full force models and closed-loop control with a Fuzzy Logic intelligent controller to command the engines. The authors document and compare the maneuvers, fuel use, and other parameters. This paper presents an innovative application of an existing capability to design, simulate and analyze proximity maneuvers, already in use for operational satellites performing other maneuvers. The system has been extended to demonstrate the capability to develop closed-loop control laws to maneuver one spacecraft in close proximity to another, including stand-off, docking, lunar landing and other operations applicable to space situational awareness, space-based surveillance, and operational satellite modeling. The fully integrated end-to-end trajectory ephemerides are available from the authors in electronic ASCII text by request. The benefits of this system include: a realistic physics-based simulation for the development and validation of control laws; a collaborative engineering environment for the design, development and tuning of spacecraft control-law parameters, the sizing of actuators (i.e., rocket engines), and sensor suite selection; an accurate simulation and visualization to communicate the complexity, criticality, and risk of spacecraft operations; a precise mathematical environment for research and development of future spacecraft maneuvering engineering tasks, operational planning and forensic analysis; and a closed-loop, knowledge-based control example for proximity operations. This proximity operations modeling and simulation environment will provide a valuable adjunct to programs in military space control, space situational awareness and civil space exploration engineering and decision-making processes.
Projected climate change impacts on winter recreation in the ...
A physically-based water and energy balance model is used to simulate natural snow accumulation at 247 winter recreation locations across the continental United States. We combine this model with projections of snowmaking conditions to determine downhill skiing, cross-country skiing, and snowmobiling season lengths under baseline and future climates, using data from five climate models and two emissions scenarios. The present-day simulations from the snow model without snowmaking are validated with observations of snow-water-equivalent from snow monitoring sites. Projected season lengths are combined with baseline estimates of winter recreation activity to monetize impacts to the selected winter recreation activity categories for the years 2050 and 2090. The objective is to estimate the physical and economic impact of climate change on winter recreation in the contiguous U.S.
Model-free adaptive speed control on travelling wave ultrasonic motor
NASA Astrophysics Data System (ADS)
Di, Sisi; Li, Huafeng
2018-01-01
This paper introduces a new data-driven control (DDC) method for the speed control of an ultrasonic motor (USM). The model-free adaptive control (MFAC) strategy is presented in terms of its principles, algorithms, and parameter selection. To verify the efficiency of the proposed method, a speed-frequency-time model, which contains all the measurable nonlinearity and uncertainties based on experimental data, was established for simulation to mimic the USM operation system. Furthermore, the model was identified using the particle swarm optimization (PSO) method. Then, the control of the simulated system using MFAC was evaluated under different expectations in terms of overshoot, rise time and steady-state error. Finally, the MFAC results were compared with those of proportional-integral-derivative (PID) control to demonstrate its advantages in controlling a general random system.
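For orientation, the compact-form dynamic-linearization MFAC update is a pair of one-line recursions; this is the textbook form of the algorithm, not necessarily the exact variant or gains used in the paper.

```python
import numpy as np

def mfac_step(u_prev, du_prev, y, y_prev, y_ref, phi,
              eta=0.5, mu=1.0, rho=0.6, lam=1.0):
    """One compact-form MFAC step for a SISO plant.

    phi: running estimate of the pseudo-partial-derivative linking the
    input increment to the output increment (replaces any explicit model).
    """
    dy = y - y_prev
    # projection-type estimate of the pseudo-partial-derivative
    phi = phi + eta * du_prev / (mu + du_prev**2) * (dy - phi * du_prev)
    # control increment driven by the current tracking error
    du = rho * phi / (lam + phi**2) * (y_ref - y)
    return u_prev + du, du, phi
```

Only measured inputs and outputs enter the recursion, which is what makes the controller data-driven rather than model-based.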
POPCORN: a Supervisory Control Simulation for Workload and Performance Research
NASA Technical Reports Server (NTRS)
Hart, S. G.; Battiste, V.; Lester, P. T.
1984-01-01
A multi-task simulation of a semi-automatic supervisory control system was developed to provide an environment in which training, operator strategy development, failure detection and resolution, levels of automation, and operator workload can be investigated. The goal was to develop a well-defined, but realistically complex, task that would lend itself to model-based analysis. The name of the task (POPCORN) reflects the visual display, which depicts different task elements milling around waiting to be released and popping out to be performed. The operator's task was to complete each of 100 task elements, represented by different symbols, by selecting a target task and entering the desired command. The simulated automatic system then completed the selected function automatically. Highly significant differences in performance, strategy, and rated workload were found as a function of all experimental manipulations (except reward/penalty).
Results of a study on polarization mix selection for the NSCAT scatterometer
NASA Technical Reports Server (NTRS)
Long, David G.; Dunbar, R. Scott; Shaffer, Scott; Freilich, Michael H.; Hsiao, S. Vincent
1989-01-01
The NASA scatterometer (NSCAT) is an instrument designed to measure the radar backscatter of the ocean's surface for estimating the near-surface wind velocity. A given resolution element is observed from several different azimuth angles. From these measurements the near-surface vector wind over the ocean may be inferred using a geophysical model function relating the normalized radar backscatter coefficient (σ0) to the near-surface wind. The results of a study to select a polarization mix for NSCAT using an end-to-end simulation of the NSCAT scatterometer and ground processing of the σ0 measurements into unambiguous wind fields using a median-filter-based ambiguity-removal algorithm are presented. The system simulation was used to compare the wind measurement accuracy and ambiguity removal skill over a set of realistic mesoscale wind fields for various polarization mixes. Considerations in the analysis and simulation are discussed, and a recommended polarization mix is given.
NASA Astrophysics Data System (ADS)
Khotimah, Chusnul; Purnami, Santi Wulan; Prastyo, Dedy Dwi; Chosuvivatwong, Virasakdi; Sriplung, Hutcha
2017-11-01
Support Vector Machines (SVMs) have been widely applied for prediction in many fields. Recently, SVMs have also been developed for survival analysis. In this study, the Additive Survival Least Square SVM (A-SURLSSVM) approach is used to analyze a cervical cancer dataset and its performance is compared with the Cox model as a benchmark. The comparison is evaluated based on the prognostic indices produced: the concordance index (c-index), log rank, and hazard ratio, where a higher index represents better performance of the corresponding method. This work also applied feature selection to choose important features using a backward elimination technique based on the c-index criterion. The cervical cancer dataset consists of 172 patients. The empirical results show that nine out of the twelve features: age at marriage, age at first menstruation, age, parity, type of treatment, history of family planning, stadium, duration of menstruation, and anemia status are selected as relevant features that affect the survival time of cervical cancer patients. In addition, the performance of the proposed method is evaluated through a simulation study with different numbers of features and censoring percentages. Two out of three performance measures (c-index and hazard ratio) obtained from A-SURLSSVM consistently yield better results than those obtained from the Cox model when applied to both simulated and cervical cancer data. Moreover, the simulation study showed that A-SURLSSVM performs better when the percentage of censored data is small.
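The c-index used as the main yardstick counts correctly ordered usable pairs; a plain implementation (adequate for a 172-patient data set):

```python
import numpy as np

def concordance_index(time, event, risk):
    """Harrell's c-index.

    time: survival or censoring times; event: 1 = event observed,
    0 = censored; risk: model scores, higher = earlier expected failure.
    """
    num, den = 0.0, 0.0
    n = len(time)
    for i in range(n):
        if not event[i]:
            continue                      # pair usable only if i had an event
        for j in range(n):
            if time[j] > time[i]:         # j outlived i
                den += 1.0
                if risk[i] > risk[j]:
                    num += 1.0
                elif risk[i] == risk[j]:
                    num += 0.5
    return num / den
```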
Buu, Anne; Johnson, Norman J.; Li, Runze; Tan, Xianming
2011-01-01
Zero-inflated count data are very common in health surveys. This study develops new variable selection methods for the zero-inflated Poisson regression model. Our simulations demonstrate the negative consequences which arise from ignoring zero-inflation. Among the competing methods, the one-step SCAD method is recommended because it has the highest specificity, sensitivity, exact fit, and lowest estimation error. The design of the simulations is based on the special features of two large national databases commonly used in the alcoholism and substance abuse field, so that our findings can be easily generalized to real settings. Applications of the methodology are demonstrated by empirical analyses on the data from a well-known alcohol study. PMID:21563207
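For reference, the SCAD penalty behind the recommended one-step method has the standard Fan-Li form; a = 3.7 is the conventional default.

```python
import numpy as np

def scad_penalty(beta, lam, a=3.7):
    """SCAD penalty, applied coordinate-wise to a coefficient vector."""
    b = np.abs(beta)
    small = b <= lam
    mid = (b > lam) & (b <= a * lam)
    return np.where(small, lam * b,
           np.where(mid, (2 * a * lam * b - b**2 - lam**2) / (2 * (a - 1)),
                    (a + 1) * lam**2 / 2))
```

Unlike the Lasso's linear penalty, SCAD flattens out for large coefficients, so strong predictors are retained without the heavy shrinkage that biases their estimates.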
tICA-Metadynamics: Accelerating Metadynamics by Using Kinetically Selected Collective Variables.
M Sultan, Mohammad; Pande, Vijay S
2017-06-13
Metadynamics is a powerful enhanced molecular dynamics sampling method that accelerates simulations by adding history-dependent multidimensional Gaussians along selected collective variables (CVs). In practice, choosing a small number of slow CVs remains challenging due to the inherent high dimensionality of biophysical systems. Here we show that time-structure based independent component analysis (tICA), a recent advance in the Markov state model literature, can be used to identify a set of variationally optimal slow coordinates for use as CVs for Metadynamics. We show that linear and nonlinear tICA-Metadynamics can complement existing MD studies by explicitly sampling the system's slowest modes, and can drive transitions along the slowest modes even when no such transitions are observed in unbiased simulations.
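The tICA step reduces to a generalized symmetric eigenproblem between the time-lagged and instantaneous covariance matrices of the trajectory features; a minimal sketch:

```python
import numpy as np
from scipy.linalg import eigh

def tica(X, lag):
    """Slowest linear collective variables from a (frames x features) array."""
    X = X - X.mean(axis=0)
    C0 = X.T @ X / len(X)                        # instantaneous covariance
    Ct = X[:-lag].T @ X[lag:] / (len(X) - lag)   # time-lagged covariance
    Ct = 0.5 * (Ct + Ct.T)                       # enforce symmetry
    vals, vecs = eigh(Ct, C0)                    # generalized eigenproblem
    order = np.argsort(vals)[::-1]               # slowest modes first
    return vals[order], vecs[:, order]
# the leading eigenvectors define the CVs along which the Metadynamics
# Gaussians are deposited
```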
A tool for multi-scale modelling of the renal nephron
Nickerson, David P.; Terkildsen, Jonna R.; Hamilton, Kirk L.; Hunter, Peter J.
2011-01-01
We present the development of a tool, which provides users with the ability to visualize and interact with a comprehensive description of a multi-scale model of the renal nephron. A one-dimensional anatomical model of the nephron has been created and is used for visualization and modelling of tubule transport in various nephron anatomical segments. Mathematical models of nephron segments are embedded in the one-dimensional model. At the cellular level, these segment models use models encoded in CellML to describe cellular and subcellular transport kinetics. A web-based presentation environment has been developed that allows the user to visualize and navigate through the multi-scale nephron model, including simulation results, at the different spatial scales encompassed by the model description. The Zinc extension to Firefox is used to provide an interactive three-dimensional view of the tubule model and the native Firefox rendering of scalable vector graphics is used to present schematic diagrams for cellular and subcellular scale models. The model viewer is embedded in a web page that dynamically presents content based on user input. For example, when viewing the whole nephron model, the user might be presented with information on the various embedded segment models as they select them in the three-dimensional model view. Alternatively, the user chooses to focus the model viewer on a cellular model located in a particular nephron segment in order to view the various membrane transport proteins. Selecting a specific protein may then present the user with a description of the mathematical model governing the behaviour of that protein—including the mathematical model itself and various simulation experiments used to validate the model against the literature. PMID:22670210
NASA Astrophysics Data System (ADS)
Braun, Marco; Chaumont, Diane
2013-04-01
Using climate model output to explore climate change impacts on hydrology requires several considerations, choices and methods in the post-treatment of the datasets. In the effort of producing a comprehensive database of climate change scenarios for over 300 watersheds in the Canadian province of Québec, a selection of state-of-the-art procedures was applied to an ensemble comprising 87 climate simulations. The climate data ensemble is based on global climate simulations from the Coupled Model Intercomparison Project - Phase 3 (CMIP3) and regional climate simulations from the North American Regional Climate Change Assessment Program (NARCCAP) and operational simulations produced at Ouranos. Information on the response of hydrological systems to changing climate conditions can be derived by linking climate simulations with hydrological models. However, the direct use of raw climate model output variables as drivers for hydrological models is limited by issues such as spatial resolution and the calibration of hydro models with observations. Methods for downscaling and bias-correcting the data are required to achieve seamless integration of climate simulations with hydro models. The effects on the results of four different approaches to data post-processing were explored and compared. We present the lessons learned from building the largest database yet for multiple stakeholders in the hydro power and water management sector in Québec, with an emphasis on the benefits and pitfalls in choosing simulations, extracting the data, performing bias corrections and documenting the results. A discussion of the sources and significance of uncertainties in the data will also be included. The climatological database was subsequently used by the state-owned hydro power company Hydro-Québec and the Centre d'expertise hydrique du Québec (CEHQ), the provincial water authority, to simulate future stream flows and analyse the impacts on hydrological indicators. While this submission focuses on the production of climatic scenarios for application in hydrology, the submission « The (cQ)2 project: assessing watershed scale hydrological changes for the province of Québec at the 2050 horizon, a collaborative framework » by Catherine Guay describes how Hydro-Québec and CEHQ put the data into use.
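Bias correction is one of the post-treatment choices compared in such an exercise; empirical quantile mapping is a common option and illustrates the idea (a generic sketch, not necessarily the method retained for this database).

```python
import numpy as np

def quantile_map(sim_future, sim_hist, obs_hist, n_q=100):
    """Empirical quantile-mapping bias correction of simulated values."""
    q = np.linspace(0.5 / n_q, 1.0 - 0.5 / n_q, n_q)
    sim_q = np.quantile(sim_hist, q)   # simulated historical quantiles
    obs_q = np.quantile(obs_hist, q)   # observed historical quantiles
    # map each future simulated value through the historical transfer function
    return np.interp(sim_future, sim_q, obs_q)
```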
Photometric Modeling of Simulated Surface-Resolved Bennu Images
NASA Astrophysics Data System (ADS)
Golish, D.; DellaGiustina, D. N.; Clark, B.; Li, J. Y.; Zou, X. D.; Bennett, C. A.; Lauretta, D. S.
2017-12-01
The Origins, Spectral Interpretation, Resource Identification, Security, Regolith Explorer (OSIRIS-REx) is a NASA mission to study and return a sample of asteroid (101955) Bennu. Imaging data from the mission will be used to develop empirical surface-resolved photometric models of Bennu at a series of wavelengths. These models will be used to photometrically correct panchromatic and color base maps of Bennu, compensating for variations due to shadows and photometric angle differences, thereby minimizing seams in mosaicked images. Well-corrected mosaics are critical to the generation of a global hazard map and a global 1064-nm reflectance map that predicts LIDAR response. These data products directly feed into the selection of a site from which to safely acquire a sample. We also require photometric correction for the creation of color ratio maps of Bennu. Color ratio maps provide insight into the composition and geological history of the surface and allow for comparison to other small Solar System bodies. In advance of OSIRIS-REx's arrival at Bennu, we use simulated images to judge the efficacy of both the photometric modeling software and the mission observation plan. Our simulation software is based on the USGS Integrated Software for Imagers and Spectrometers (ISIS) and uses a synthetic shape model, a camera model, and an empirical photometric model to generate simulated images. This approach gives us the flexibility to create simulated images of Bennu based on analog surfaces from other small Solar System bodies and to test our modeling software under those conditions. Our photometric modeling software fits image data to several conventional empirical photometric models and produces the best-fit model parameters. The process is largely automated, which is crucial to the efficient production of data products during proximity operations. The software also produces several metrics on the quality of the observations themselves, such as surface coverage and the completeness of the data set for evaluating the phase and disk functions of the surface. Application of this software to simulated mission data has revealed limitations in the initial mission design, which has fed back into the planning process. The entire photometric pipeline further serves as an exercise of planned activities for proximity operations.
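The abstract mentions fitting "conventional empirical photometric models" without writing one out. As a hedged illustration, the sketch below fits a Lommel-Seeliger disk function with an exponential phase term, one common choice, by least squares; the angle arrays and parameter values are synthetic stand-ins, not mission data or the mission's actual model set.

```python
import numpy as np
from scipy.optimize import curve_fit

def lommel_seeliger(angles, albedo, beta):
    """Lommel-Seeliger disk function with an exponential phase term;
    one conventional empirical model among several the pipeline might fit."""
    inc, emi, pha = angles  # incidence, emission, phase (radians)
    mu0, mu = np.cos(inc), np.cos(emi)
    return albedo * np.exp(-beta * pha) * mu0 / (mu0 + mu)

# synthetic per-pixel photometric angles plus noisy reflectance
rng = np.random.default_rng(1)
inc = rng.uniform(0, 1.2, 500)
emi = rng.uniform(0, 1.2, 500)
pha = rng.uniform(0, 1.5, 500)
refl = lommel_seeliger((inc, emi, pha), 0.04, 0.7)
refl += rng.normal(0, 0.001, refl.size)

popt, pcov = curve_fit(lommel_seeliger, (inc, emi, pha), refl, p0=(0.05, 0.5))
print("best-fit albedo, beta:", popt)
```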
Conditioning 3D object-based models to dense well data
NASA Astrophysics Data System (ADS)
Wang, Yimin C.; Pyrcz, Michael J.; Catuneanu, Octavian; Boisvert, Jeff B.
2018-06-01
Object-based stochastic simulation models are used to generate categorical variable models with a realistic representation of complicated reservoir heterogeneity. A limitation of object-based modeling is the difficulty of conditioning to dense data. One method to achieve data conditioning is to apply optimization techniques. Optimization algorithms can utilize an objective function measuring the conditioning level of each object while also considering the geological realism of the object. Here, an objective function is optimized with implicit filtering, which considers constraints on object parameters. Thousands of objects conditioned to data are generated and stored in a database. A set of objects is then selected with linear integer programming to generate the final realization and honor all well data, proportions and other desirable geological features (see the sketch after this abstract). Although any parameterizable object can be considered, objects from fluvial reservoirs are used to illustrate the ability to simultaneously condition multiple types of geologic features. Channels, levees, crevasse splays and oxbow lakes are parameterized based on location, path, orientation and profile shapes. Functions mimicking natural river sinuosity are used for the centerline model. Channel stacking pattern constraints are also included to enhance the geological realism of object interactions. Spatial layout correlations between different types of objects are modeled. Three case studies demonstrate the flexibility of the proposed optimization-simulation method. These examples include multiple channels with high sinuosity, as well as fragmented channels affected by limited preservation. In all cases the proposed method reproduces input parameters for the object geometries and matches the dense well constraints. The proposed methodology expands the applicability of object-based simulation to complex and heterogeneous geological environments with dense sampling.
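The final selection step, choosing a subset of pre-generated conditioned objects with linear integer programming, can be sketched as follows. This is a minimal stand-in assuming a binary coverage matrix A over well data and a target proportion window; the paper's actual objective and constraints are richer.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

rng = np.random.default_rng(2)
n_data, n_obj = 20, 200
# A[i, j] = 1 if pre-generated candidate object j honors well datum i
A = (rng.random((n_data, n_obj)) < 0.1).astype(float)
v = rng.uniform(0.001, 0.01, n_obj)  # volume fraction of each object
target = 0.30                        # target facies proportion

c = np.ones(n_obj)  # stand-in objective: use as few objects as possible
constraints = [
    LinearConstraint(A, lb=1.0, ub=np.inf),  # every well datum honored
    LinearConstraint(v.reshape(1, -1), lb=0.9 * target, ub=1.1 * target),
]
res = milp(c, constraints=constraints,
           integrality=np.ones(n_obj),  # x_j restricted to {0, 1}
           bounds=Bounds(0, 1))
assert res.success
print(int(res.x.sum()), "objects selected")
```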
Optimization of HAART with genetic algorithms and agent-based models of HIV infection.
Castiglione, F; Pappalardo, F; Bernaschi, M; Motta, S
2007-12-15
Highly Active AntiRetroviral Therapies (HAART) can significantly prolong the lives of people infected by HIV since, although unable to eradicate the virus, they are quite effective in maintaining control of the infection. However, since HAART have several undesirable side effects, it is considered useful to suspend the therapy according to a suitable schedule of Structured Therapeutic Interruptions (STI). In the present article we describe an application of genetic algorithms (GA) aimed at finding the optimal schedule for a HAART simulated with an agent-based model (ABM) of the immune system that reproduces the most significant features of the response of an organism to HIV-1 infection. The genetic algorithm helps in finding an optimal therapeutic schedule that maximizes immune restoration, minimizes the viral count and, through appropriate interruptions of the therapy, minimizes the dose of drug administered to the simulated patient. To validate the efficacy of the therapy that the genetic algorithm indicates as optimal, we ran simulations of opportunistic diseases and found that the selected therapy shows the best survival curve among the different simulated control groups. A version of the C-ImmSim simulator is available at http://www.iac.cnr.it/~filippo/c-ImmSim.html
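A stripped-down sketch of the GA layer. The paper couples the GA to the C-ImmSim agent-based simulator; here that simulator is replaced by a purely illustrative stub fitness function, and the schedule is a weekly on/off bitstring, both assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(3)
WEEKS = 52  # therapy on/off decision per week

def fitness(schedule):
    """Stub for the agent-based immune simulator: rewards viral control,
    penalizes total drug dose. Entirely illustrative."""
    dose = schedule.mean()
    viral = np.exp(-4 * dose) + 0.05 * np.abs(np.diff(schedule)).sum() / WEEKS
    return -viral - 0.3 * dose  # higher is better

def ga(pop_size=40, gens=100, p_mut=0.02):
    pop = rng.integers(0, 2, (pop_size, WEEKS))
    for _ in range(gens):
        scores = np.array([fitness(ind) for ind in pop])
        # tournament selection between random pairs
        idx = rng.integers(0, pop_size, (pop_size, 2))
        winners = np.where(scores[idx[:, 0]] > scores[idx[:, 1]],
                           idx[:, 0], idx[:, 1])
        pop = pop[winners]
        # one-point crossover between consecutive parents
        cut = rng.integers(1, WEEKS, pop_size // 2)
        for k, c in enumerate(cut):
            a, b = 2 * k, 2 * k + 1
            pop[a, c:], pop[b, c:] = pop[b, c:].copy(), pop[a, c:].copy()
        # bit-flip mutation
        pop ^= (rng.random(pop.shape) < p_mut).astype(pop.dtype)
    return pop[np.argmax([fitness(ind) for ind in pop])]

best = ga()
print("weeks on therapy:", int(best.sum()))
```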
Landfill mining: Development of a cost simulation model.
Wolfsberger, Tanja; Pinkel, Michael; Polansek, Stephanie; Sarc, Renato; Hermann, Robert; Pomberger, Roland
2016-04-01
Landfill mining permits recovering secondary raw materials from landfills. Whether this is economically feasible, however, depends on various factors. One is the amount of recoverable secondary raw material (such as metals) that can be exploited at a profit. Others are the costs of excavation, of processing the waste at the landfill site, and of the charges for the secondary disposal of waste. Depending on the objectives of a landfill mining project (such as the recovery of a ferrous and/or a calorific fraction), these expenses and revenues are difficult to assess in advance. This complicates any prior assessment of economic feasibility and is the reason why many landfills that might be suitable for landfill mining continue to be operated as active landfills, generating aftercare costs and leaving potential hazards to later generations. This article presents a newly developed simulation model for landfill mining projects. It identifies the quantities and qualities of output flows that can be recovered by mining and by mobile on-site processing of the waste, based on treatment equipment selected by the landfill operator. Thus, charges for disposal and expected revenues from secondary raw materials can be assessed. Furthermore, investment, personnel, operation, servicing and insurance costs are assessed and displayed, based on the selected mobile processing procedure and its throughput, among other things. For clarity, the simulation model is described in this article using the example of a real Austrian sanitary landfill. © The Author(s) 2016.
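As a hedged illustration of the kind of balance such a model computes, a toy per-project cash-flow function; all cost items and numbers below are invented for the example, not taken from the paper.

```python
def landfill_mining_balance(mass_t, metal_frac, metal_price, rdf_frac,
                            rdf_price, excavation_cost, processing_cost,
                            disposal_cost):
    """Toy cash-flow balance over a mined waste mass (per-tonne rates);
    the published model tracks many more items (investment, personnel,
    servicing, insurance) per selected equipment configuration."""
    revenue = mass_t * (metal_frac * metal_price + rdf_frac * rdf_price)
    residual = mass_t * (1.0 - metal_frac - rdf_frac)  # re-disposed fraction
    costs = (mass_t * (excavation_cost + processing_cost)
             + residual * disposal_cost)
    return revenue - costs

# illustrative numbers only
print(landfill_mining_balance(mass_t=100_000, metal_frac=0.03,
                              metal_price=150.0, rdf_frac=0.25,
                              rdf_price=20.0, excavation_cost=6.0,
                              processing_cost=12.0, disposal_cost=25.0))
```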
The Red Queen model of recombination hot-spot evolution: a theoretical investigation.
Latrille, Thibault; Duret, Laurent; Lartillot, Nicolas
2017-12-19
In humans and many other species, recombination events cluster in narrow and short-lived hot spots distributed across the genome, whose location is determined by the Zn-finger protein PRDM9. To explain these fast evolutionary dynamics, an intra-genomic Red Queen model has been proposed, based on the interplay between two antagonistic forces: biased gene conversion, mediated by double-strand breaks, resulting in hot-spot extinction, followed by positive selection favouring new PRDM9 alleles recognizing new sequence motifs. Thus far, however, this Red Queen model has not been formalized as a quantitative population-genetic model, fully accounting for the intricate interplay between biased gene conversion, mutation, selection, demography and genetic diversity at the PRDM9 locus. Here, we explore the population genetics of the Red Queen model of recombination. A Wright-Fisher simulator was implemented, allowing exploration of the behaviour of the model (mean equilibrium recombination rate, diversity at the PRDM9 locus or turnover rate) as a function of the parameters (effective population size, mutation and erosion rates). In a second step, analytical results based on self-consistent mean-field approximations were derived, reproducing the scaling relations observed in the simulations. Empirical fit of the model to current data from the mouse suggests both a high mutation rate at PRDM9 and strong biased gene conversion on its targets. This article is part of the themed issue 'Evolutionary causes and consequences of recombination rate variation in sexual organisms'. © 2017 The Authors.
The Red Queen model of recombination hot-spot evolution: a theoretical investigation
Latrille, Thibault; Duret, Laurent
2017-01-01
In humans and many other species, recombination events cluster in narrow and short-lived hot spots distributed across the genome, whose location is determined by the Zn-finger protein PRDM9. To explain these fast evolutionary dynamics, an intra-genomic Red Queen model has been proposed, based on the interplay between two antagonistic forces: biased gene conversion, mediated by double-strand breaks, resulting in hot-spot extinction, followed by positive selection favouring new PRDM9 alleles recognizing new sequence motifs. Thus far, however, this Red Queen model has not been formalized as a quantitative population-genetic model, fully accounting for the intricate interplay between biased gene conversion, mutation, selection, demography and genetic diversity at the PRDM9 locus. Here, we explore the population genetics of the Red Queen model of recombination. A Wright–Fisher simulator was implemented, allowing exploration of the behaviour of the model (mean equilibrium recombination rate, diversity at the PRDM9 locus or turnover rate) as a function of the parameters (effective population size, mutation and erosion rates). In a second step, analytical results based on self-consistent mean-field approximations were derived, reproducing the scaling relations observed in the simulations. Empirical fit of the model to current data from the mouse suggests both a high mutation rate at PRDM9 and strong biased gene conversion on its targets. This article is part of the themed issue ‘Evolutionary causes and consequences of recombination rate variation in sexual organisms’. PMID:29109226
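A minimal Wright-Fisher sketch of the Red Queen dynamics described above: carriers of active PRDM9 alleles are fitter, common alleles erode their own targets through biased gene conversion, and mutation supplies fresh alleles. All parameter values are invented; the published simulator treats erosion, conversion bias and selection in far more detail.

```python
import numpy as np

rng = np.random.default_rng(4)
N, GENS = 1000, 2000        # haploid population size, generations
MU, EROSION = 1e-3, 5e-4    # PRDM9 mutation rate, per-carrier erosion rate

alleles = {0: 1.0}          # allele id -> fraction of intact target sites
pop = np.zeros(N, dtype=int)
next_id = 1

for _ in range(GENS):
    # fitness of a carrier increases with the activity of its allele
    act = np.array([alleles[a] for a in pop])
    w = 0.5 + 0.5 * act
    # Wright-Fisher resampling weighted by fitness
    pop = rng.choice(pop, size=N, p=w / w.sum())
    # biased gene conversion erodes the targets of common alleles faster
    counts = np.bincount(pop)
    for a in np.flatnonzero(counts):
        alleles[a] *= 1.0 - EROSION * counts[a] / N
    # mutation creates new alleles recognizing pristine motifs
    for i in np.flatnonzero(rng.random(N) < MU):
        alleles[next_id] = 1.0
        pop[i] = next_id
        next_id += 1

print("alleles segregating:", len(set(pop)))
```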
Competitive seeds-selection in complex networks
NASA Astrophysics Data System (ADS)
Zhao, Jiuhua; Liu, Qipeng; Wang, Lin; Wang, Xiaofan
2017-02-01
This paper investigates a competitive diffusion model where two competitors simultaneously select a set of nodes (seeds) in the network to influence. We focus on the problem of how to select these seeds such that, when the diffusion process terminates, a competitor obtains more supporters than its opponent. Instead of studying this problem in the game-theoretic framework as in the existing work, in this paper we design several heuristic seed-selection strategies inspired by commonly used centrality measures: Betweenness Centrality (BC), Closeness Centrality (CC), Degree Centrality (DC), Eigenvector Centrality (EC), and K-shell Centrality (KS). We mainly compare three centrality-based strategies, which perform better in competing with the random selection strategy, through simulations on both real and artificial networks. Even though network structure varies across different networks, we find certain common trends appearing in all of them. Roughly speaking, the BC-based strategy and the DC-based strategy are better than the CC-based strategy. Moreover, if a competitor adopts the CC-based strategy, then the BC-based strategy is a better strategy than the DC-based strategy for its opponent, and the superiority of the BC-based strategy decreases as the heterogeneity of the network decreases.
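The centrality-based seed selection itself is straightforward to sketch with networkx; the competitive diffusion dynamics are omitted here, and the Barabási-Albert test graph is an arbitrary choice for the example.

```python
import networkx as nx

def select_seeds(G, k, measure="betweenness"):
    """Pick the k highest-centrality nodes as seeds, mirroring the
    BC-, CC- and DC-based strategies compared in the paper."""
    centrality = {
        "betweenness": nx.betweenness_centrality,
        "closeness": nx.closeness_centrality,
        "degree": nx.degree_centrality,
    }[measure](G)
    return sorted(centrality, key=centrality.get, reverse=True)[:k]

G = nx.barabasi_albert_graph(500, 3, seed=5)  # heterogeneous test network
print("BC seeds:", select_seeds(G, 5, "betweenness"))
print("DC seeds:", select_seeds(G, 5, "degree"))
```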
NASA Astrophysics Data System (ADS)
Tobin, K. J.; Bennett, M. E.
2008-05-01
The Cimarron River Basin (3110 sq km) between Dodge and Guthrie, Oklahoma, is located in northern Oklahoma and was used as a test bed to compare the hydrological model performance associated with different methods of precipitation quantification. The Soil and Water Assessment Tool (SWAT) was selected for this project; it is a comprehensive model that, besides quantifying watershed hydrology, can simulate water quality as well as nutrient and sediment loading within stream reaches. An advantage of this location is the extensive monitoring of meteorological parameters (precipitation, temperature, relative humidity, wind speed, solar radiation) afforded by the Oklahoma Mesonet, which has been documented to improve the performance of SWAT. The utility of TRMM 3B42 and NEXRAD Stage III data in supporting the hydrologic modeling of the Cimarron River Basin is demonstrated. Minor adjustments to selected model parameters were made to bring parameter values in line with results and information from previous studies and to simulate base flow more realistically. Significantly, no ad hoc adjustments to major parameters such as Curve Number or Available Soil Water were made, and robust simulations were obtained. TRMM and NEXRAD data are aggregated into an average daily estimate of precipitation for each TRMM grid cell (0.25 degree × 0.25 degree). Preliminary simulation of stream flow (years 2004 to 2006) in the Cimarron River Basin yields acceptable monthly results with very little adjustment of model parameters using TRMM 3B42 precipitation data (mass balance error = 3 percent; monthly Nash-Sutcliffe efficiency coefficient (NS) = 0.77). However, both Oklahoma Mesonet rain gauge data (mass balance error = 13 percent; monthly NS = 0.91; daily NS = 0.64) and NEXRAD Stage III data (mass balance error = -5 percent; monthly NS = 0.95; daily NS = 0.69) produce superior simulations even at a sub-monthly time scale; daily results are time-averaged over a three-day period. Note that all types of precipitation data perform better than a synthetic precipitation dataset generated using a weather simulator (mass balance error = 12 percent; monthly NS = 0.40). Our study again documents that merged satellite precipitation products, such as TRMM 3B42, can support semi-distributed hydrologic modeling at the watershed scale. However, additional work is apparently required to improve TRMM precipitation retrievals over land to generate a product that yields more robust hydrological simulations, especially at finer time scales. Additionally, ongoing work in this basin will compare TRMM results with stream flow model results generated using CMORPH precipitation estimates. Finally, we plan to use simulated, semi-distributed soil moisture values determined by SWAT for comparison with gridded soil moisture estimates from TRMM-TMI, which should provide further validation of our modeling efforts.
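The two skill scores quoted throughout, the Nash-Sutcliffe efficiency and the mass balance error, are standard and easy to reproduce. A short sketch, with made-up flow values in place of the study's data:

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """NS = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2); 1 is perfect,
    0 means no better than predicting the observed mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def mass_balance_error(obs, sim):
    """Percent bias of total simulated flow volume relative to observed."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * (sim.sum() - obs.sum()) / obs.sum()

obs = [12.0, 30.5, 8.2, 4.1, 22.7, 15.3]  # illustrative monthly flows
sim = [10.8, 33.1, 7.5, 5.0, 20.9, 16.0]
print(f"NS = {nash_sutcliffe(obs, sim):.2f}, "
      f"MBE = {mass_balance_error(obs, sim):.1f}%")
```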
Disturbance characteristics of half-selected cells in a cross-point resistive switching memory array
NASA Astrophysics Data System (ADS)
Chen, Zhe; Li, Haitong; Chen, Hong-Yu; Chen, Bing; Liu, Rui; Huang, Peng; Zhang, Feifei; Jiang, Zizhen; Ye, Hongfei; Gao, Bin; Liu, Lifeng; Liu, Xiaoyan; Kang, Jinfeng; Wong, H.-S. Philip; Yu, Shimeng
2016-05-01
Disturbance characteristics of cross-point resistive random access memory (RRAM) arrays are comprehensively studied in this paper. An analytical model, based on physical understanding, is developed to quantify the number of pulses (#Pulse) a cell can bear before disturbance occurs under various sub-switching voltage stresses. An evaluation methodology is proposed to assess the disturb behavior of half-selected (HS) cells in cross-point RRAM arrays by combining the analytical model with SPICE simulation. Characteristics of cross-point RRAM arrays such as energy consumption, reliable operating cycles and total error bits are evaluated with this methodology. A possible solution to mitigate disturbance is proposed.
Simulation of 2D rarefied gas flows based on the numerical solution of the Boltzmann equation
NASA Astrophysics Data System (ADS)
Poleshkin, Sergey O.; Malkov, Ewgenij A.; Kudryavtsev, Alexey N.; Shershnev, Anton A.; Bondar, Yevgeniy A.; Kohanchik, A. A.
2017-10-01
There are various methods for calculating rarefied gas flows, in particular, statistical methods and deterministic methods based on the finite-difference solutions of the Boltzmann nonlinear kinetic equation and on the solutions of model kinetic equations. There is no universal method; each has its disadvantages in terms of efficiency or accuracy. The choice of the method depends on the problem to be solved and on parameters of calculated flows. Qualitative theoretical arguments help to determine the range of parameters of effectively solved problems for each method; however, it is advisable to perform comparative tests of calculations of the classical problems performed by different methods and with different parameters to have quantitative confirmation of this reasoning. The paper provides the results of the calculations performed by the authors with the help of the Direct Simulation Monte Carlo method and finite-difference methods of solving the Boltzmann equation and model kinetic equations. Based on this comparison, conclusions are made on selecting a particular method for flow simulations in various ranges of flow parameters.
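The abstract refers to "model kinetic equations" without writing one out. The canonical example is the BGK relaxation model, which replaces the full nonlinear Boltzmann collision operator Q(f, f) with relaxation toward a local Maxwellian; whether the authors used BGK or a variant such as Shakhov or ES-BGK is not stated in the abstract.

```latex
\underbrace{\frac{\partial f}{\partial t} + \mathbf{v}\cdot\nabla_{\mathbf{x}} f = Q(f,f)}_{\text{Boltzmann equation}}
\qquad
\underbrace{\frac{\partial f}{\partial t} + \mathbf{v}\cdot\nabla_{\mathbf{x}} f = \frac{f^{\mathrm{eq}} - f}{\tau}}_{\text{BGK model equation}}
```

Here f is the velocity distribution function, f^eq the local Maxwellian with matching density, velocity and temperature, and τ the relaxation time.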
Optimizing legacy molecular dynamics software with directive-based offload
NASA Astrophysics Data System (ADS)
Michael Brown, W.; Carrillo, Jan-Michael Y.; Gavhane, Nitin; Thakkar, Foram M.; Plimpton, Steven J.
2015-10-01
Directive-based programming models are one solution for exploiting many-core coprocessors to increase simulation rates in molecular dynamics. They offer the potential to reduce code complexity with offload models that can selectively target computations to run on the CPU, the coprocessor, or both. In this paper, we describe modifications to the LAMMPS molecular dynamics code to enable concurrent calculations on a CPU and coprocessor. We demonstrate that standard molecular dynamics algorithms can run efficiently on both the CPU and an x86-based coprocessor using the same subroutines. As a consequence, we demonstrate that code optimizations for the coprocessor also result in speedups on the CPU; in extreme cases up to 4.7X. We provide results for LAMMPS benchmarks and for production molecular dynamics simulations using the Stampede hybrid supercomputer with both Intel® Xeon Phi™ coprocessors and NVIDIA GPUs. The optimizations presented have increased simulation rates by over 2X for organic molecules and over 7X for liquid crystals on Stampede. The optimizations are available as part of the "Intel package" supplied with LAMMPS.
LISA: a Java API for performing simulations of trajectories for all types of balloons
NASA Astrophysics Data System (ADS)
Conessa, Huguette
2016-07-01
LISA (LIbrarie de Simulation pour les Aerostats) is a Java API for performing simulations of trajectories for all types of balloons (Zero Pressure Balloons, Pressurized Balloons, Infrared Montgolfier) and for all phases of flight (ascent, ceiling, descent). The goals of this library are to establish a reliable repository of balloon flight-physics models, to capitalize on past developments, and to keep control of the models used in different tools. It is already used in flight-physics study software at CNES to understand and reproduce the behavior of balloons observed during real flights. It will be used operationally for the ground segment of the STRATEOLE2 mission. It was developed according to the quality rules for "critical software." It is based on fundamental generic concepts, linking the simulation state variables to interchangeable calculation models. Each LISA model defines how to calculate a consistent set of state variables, together with validity checks. To perform a simulation for a type of balloon and a phase of flight, it is necessary to select or create a macro-model, that is, a consistent set of models chosen from among those offered by LISA that defines the behavior of the environment and the balloon. The purpose of this presentation is to introduce the main concepts of LISA and the new perspectives offered by this library.
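LISA itself is a Java API whose actual interfaces are not given in the abstract; the Python sketch below only mirrors the stated concept of interchangeable models assembled into a macro-model. The class names, state variables and update rules are hypothetical.

```python
class Model:
    """One interchangeable calculation model: reads some state variables,
    produces a consistent set of others (mirroring the LISA concept)."""
    def step(self, state: dict, dt: float) -> dict:
        raise NotImplementedError

class IsothermalGasModel(Model):
    # hypothetical: lifting-gas temperature tracks ambient air
    def step(self, state, dt):
        return {"gas_temperature": state["air_temperature"]}

class ConstantAscentModel(Model):
    # hypothetical: fixed ascent rate during the ascent phase
    def step(self, state, dt):
        return {"altitude": state["altitude"] + state["ascent_rate"] * dt}

def simulate(macro_model, state, dt, steps):
    """A macro-model is a consistent set of models chosen for one balloon
    type and flight phase; the engine just chains their state updates."""
    for _ in range(steps):
        for model in macro_model:
            state.update(model.step(state, dt))
    return state

state = {"air_temperature": 216.0, "altitude": 0.0, "ascent_rate": 5.0}
print(simulate([IsothermalGasModel(), ConstantAscentModel()], state, 1.0, 3600))
```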
Exclusive data-based modeling of neutron-nuclear reactions below 20 MeV
NASA Astrophysics Data System (ADS)
Savin, Dmitry; Kosov, Mikhail
2017-09-01
We are developing the CHIPS-TPT physics library for exclusive simulation of neutron-nuclear reactions below 20 MeV. Exclusive modeling reproduces each separate scattering and thus requires conservation of energy, momentum and quantum numbers in each reaction. Inclusive modeling reproduces only selected values while averaging over the others, and imposes no such constraints. Exclusive modeling therefore makes it possible to simulate additional quantities, such as secondary-particle correlations and gamma-line broadening, and to avoid artificial fluctuations. CHIPS-TPT is based on the CHIPS library formerly included in Geant4, which follows the exclusive approach, and extends it to incident neutrons with energies below 20 MeV. The NeutronHP model for neutrons below 20 MeV included in Geant4 follows the inclusive approach, like the well-known MCNP code. Unfortunately, the data available in this energy region are mostly presented in ENDF-6 format and are semi-inclusive. Imposing additional constraints on secondary particles complicates the modeling, but it also makes it possible to detect inconsistencies in the input data and to avoid errors that may remain unnoticed in inclusive modeling.
Simulating the Role of Visual Selective Attention during the Development of Perceptual Completion
ERIC Educational Resources Information Center
Schlesinger, Matthew; Amso, Dima; Johnson, Scott P.
2012-01-01
We recently proposed a multi-channel, image-filtering model for simulating the development of visual selective attention in young infants (Schlesinger, Amso & Johnson, 2007). The model not only captures the performance of 3-month-olds on a visual search task, but also implicates two cortical regions that may play a role in the development of…
Models of microbiome evolution incorporating host and microbial selection.
Zeng, Qinglong; Wu, Steven; Sukumaran, Jeet; Rodrigo, Allen
2017-09-25
Numerous empirical studies suggest that hosts and microbes exert reciprocal selective effects on their ecological partners. Nonetheless, we still lack an explicit framework to model the dynamics of both hosts and microbes under selection. In a previous study, we developed an agent-based forward-time computational framework to simulate the neutral evolution of host-associated microbial communities in a constant-sized, unstructured population of hosts. These neutral models allowed offspring to sample microbes randomly from parents and/or from the environment. Additionally, the environmental pool of available microbes was constituted by fixed and persistent microbial OTUs and by contributions from host individuals in the preceding generation. In this paper, we extend our neutral models to allow selection to operate on both hosts and microbes. We do this by constructing a phenome for each microbial OTU consisting of a sample of traits that influence host and microbial fitnesses independently. Microbial traits can influence the fitness of hosts ("host selection") and the fitness of microbes ("trait-mediated microbial selection"). Additionally, the fitness effects of traits on microbes can be modified by their hosts ("host-mediated microbial selection"). We simulate the effects of these three types of selection, individually or in combination, on microbiome diversities and the fitnesses of hosts and microbes over several thousand generations of hosts. We show that microbiome diversity is strongly influenced by selection acting on microbes. Selection acting on hosts only influences microbiome diversity when there is near-complete direct or indirect parental contribution to the microbiomes of offspring. Unsurprisingly, microbial fitness increases under microbial selection. Interestingly, when host selection operates, host fitness only increases under two conditions: (1) when there is a strong parental contribution to microbial communities or (2) in the absence of a strong parental contribution, when host-mediated selection acts on microbes concomitantly. We present a computational framework that integrates different selective processes acting on the evolution of microbiomes. Our framework demonstrates that selection acting on microbes can have a strong effect on microbial diversities and fitnesses, whereas selection on hosts can have weaker outcomes.
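A compressed sketch of two of the three selection modes acting together (host selection and trait-mediated microbial selection; host-mediated microbial selection is omitted for brevity). All parameter values, the Dirichlet initialization and the multinomial resampling are assumptions of this sketch, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(6)
N_HOSTS, N_OTUS, N_TRAITS, GENS = 100, 50, 5, 200
PARENTAL = 0.7  # fraction of offspring microbes sampled from the parent

# each OTU carries a phenome: trait effects on host and microbial fitness
host_eff = rng.normal(0, 1, (N_OTUS, N_TRAITS))
mic_eff = rng.normal(0, 1, (N_OTUS, N_TRAITS))
microbiomes = rng.dirichlet(np.ones(N_OTUS), N_HOSTS)  # OTU frequencies

for _ in range(GENS):
    # host selection: host fitness derives from its microbiome's traits
    host_w = np.exp(microbiomes @ host_eff.sum(axis=1))
    parents = rng.choice(N_HOSTS, N_HOSTS, p=host_w / host_w.sum())
    # trait-mediated microbial selection within each offspring's microbiome
    mic_w = np.exp(mic_eff.sum(axis=1))
    env_pool = microbiomes.mean(axis=0)  # environmental contribution
    new = []
    for p in parents:
        source = PARENTAL * microbiomes[p] + (1 - PARENTAL) * env_pool
        weights = source * mic_w
        counts = rng.multinomial(1000, weights / weights.sum())
        new.append(counts / 1000)
    microbiomes = np.array(new)

print("mean OTU richness:", (microbiomes > 0).sum(axis=1).mean())
```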
Linking 1D coastal ocean modelling to environmental management: an ensemble approach
NASA Astrophysics Data System (ADS)
Mussap, Giulia; Zavatarelli, Marco; Pinardi, Nadia
2017-12-01
The use of a one-dimensional interdisciplinary numerical model of the coastal ocean as a tool contributing to the formulation of ecosystem-based management (EBM) is explored. The focus is on the definition of an experimental design based on ensemble simulations, integrating variability linked to scenarios (characterised by changes in the system forcing) and to the concurrent variation of selected, and poorly constrained, model parameters. The modelling system used was specifically designed for use in "data-rich" areas, so that horizontal dynamics can be resolved by a diagnostic approach and external inputs can be parameterised by properly calibrated nudging schemes. Ensembles determined by changes in the simulated environmental (physical and biogeochemical) dynamics, under joint forcing and parameterisation variations, highlight the uncertainties associated with the application of specific scenarios that are relevant to EBM, providing an assessment of the reliability of the predicted changes. The work has been carried out by implementing the coupled modelling system BFM-POM1D in an area of the Gulf of Trieste (northern Adriatic Sea), considered homogeneous from the point of view of hydrological properties, and forcing it with changing climatic (warming) and anthropogenic (reduction of the land-based nutrient input) pressures. Model parameters affected by considerable uncertainties (due to the lack of relevant observations) were varied jointly with the scenarios of change. The resulting large set of ensemble simulations provided a general estimation of the model uncertainties related to the joint variation of pressures and model parameters. The information on the variability of the model results is intended to convey, efficiently and comprehensibly, the uncertainty and reliability of the model results to non-technical EBM planners and stakeholders, so that model-based information can contribute effectively to EBM.
System monitoring and diagnosis with qualitative models
NASA Technical Reports Server (NTRS)
Kuipers, Benjamin
1991-01-01
A substantial foundation of tools for model-based reasoning with incomplete knowledge was developed: QSIM (a qualitative simulation program) and its extensions for qualitative simulation; Q2, Q3 and their successors for quantitative reasoning on a qualitative framework; and the CC (component-connection) and QPC (Qualitative Process Theory) model compilers for building QSIM QDE (qualitative differential equation) models starting from different ontological assumptions. Other model compilers for QDEs, e.g., using bond graphs or compartmental models, have been developed elsewhere. These model-building tools will support automatic construction of qualitative models from physical specifications, and further research into selection of appropriate modeling viewpoints. For monitoring and diagnosis, plausible hypotheses are unified against observations to strengthen or refute the predicted behaviors. In MIMIC (Model Integration via Mesh Interpolation Coefficients), multiple hypothesized models of the system are tracked in parallel in order to reduce the 'missing model' problem. Each model begins as a qualitative model, and is unified with a priori quantitative knowledge and with the stream of incoming observational data. When the model/data unification yields a contradiction, the model is refuted. When there is no contradiction, the predictions of the model are progressively strengthened, for use in procedure planning and differential diagnosis. Only under a qualitative level of description can a finite set of models guarantee the complete coverage necessary for this performance. The results of this research are presented in several publications. Abstracts of these published papers are presented along with abstracts of papers representing work that was synergistic with the NASA grant but funded otherwise. These 28 papers include but are not limited to: 'Combined qualitative and numerical simulation with Q3'; 'Comparative analysis and qualitative integral representations'; 'Model-based monitoring of dynamic systems'; 'Numerical behavior envelopes for qualitative models'; 'Higher-order derivative constraints in qualitative simulation'; and 'Non-intersection of trajectories in qualitative phase space: a global constraint for qualitative simulation.'
Hattori, Masasi; Oaksford, Mike
2007-09-10
In this article, 41 models of covariation detection from 2 × 2 contingency tables were evaluated against past data in the literature and against data from new experiments. A new model was also included based on a limiting case of the normative phi-coefficient under an extreme rarity assumption, which has been shown to be an important factor in covariation detection (McKenzie & Mikkelsen, 2007) and data selection (Hattori, 2002; Oaksford & Chater, 1994, 2003). The results were supportive of the new model. To investigate its explanatory adequacy, a rational analysis using two computer simulations was conducted. These simulations revealed the environmental conditions and the memory restrictions under which the new model best approximates the normative model of covariation detection in these tasks. They thus demonstrated the adaptive rationality of the new model. 2007 Cognitive Science Society, Inc.
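For a 2 × 2 contingency table with cell counts a, b, c and d (d counting joint absences), the normative phi coefficient and its limit under extreme rarity (d → ∞, joint absences dominating) are as follows; the limiting form is consistent with the rarity-based model the abstract describes, though the notation here is supplied for illustration rather than quoted from the paper.

```latex
\phi \;=\; \frac{ad - bc}{\sqrt{(a+b)(c+d)(a+c)(b+d)}}
\quad\xrightarrow{\;d \to \infty\;}\quad
\frac{a}{\sqrt{(a+b)(a+c)}}
```

The limit follows because, for large d, the numerator is dominated by ad while the (c+d) and (b+d) factors in the denominator each grow like d, so the d terms cancel.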
Combined proportional and additive residual error models in population pharmacokinetic modelling.
Proost, Johannes H
2017-11-15
In pharmacokinetic modelling, a combined proportional and additive residual error model is often preferred over a proportional or additive residual error model. Different approaches have been proposed, but a comparison between approaches is still lacking. The theoretical background of the methods is described. Method VAR assumes that the variance of the residual error is the sum of the statistically independent proportional and additive components; this method can be coded in three ways. Method SD assumes that the standard deviation of the residual error is the sum of the proportional and additive components. Using datasets from the literature and simulations based on these datasets, the methods are compared using NONMEM. The different codings of method VAR yield identical results. Using method SD, the values of the parameters describing residual error are lower than for method VAR, but the values of the structural parameters and their inter-individual variability are hardly affected by the choice of method. Both methods are valid approaches to combined proportional and additive residual error modelling, and selection may be based on the objective function value (OFV). When the result of an analysis is used for simulation purposes, it is essential that the simulation tool uses the same method as was used during the analysis. Copyright © 2017 Elsevier B.V. All rights reserved.
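In a common notation (supplied here for illustration; F is the model-predicted value, θp and θa the proportional and additive error parameters), the two approaches differ in the scale on which the components are combined:

```latex
\text{Method VAR:}\quad \mathrm{SD}(\varepsilon) = \sqrt{(\theta_p F)^2 + \theta_a^2}
\qquad\qquad
\text{Method SD:}\quad \mathrm{SD}(\varepsilon) = \theta_p F + \theta_a
```

Because summing on the SD scale yields a larger total error for the same parameter values, fitting the same data with method SD produces smaller θ estimates, which is consistent with the lower residual-error parameter values the abstract reports for that method.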
KU-Band rendezvous radar performance computer simulation model
NASA Technical Reports Server (NTRS)
Griffin, J. W.
1980-01-01
The preparation of a real-time computer simulation model of the KU-band rendezvous radar to be integrated into the shuttle mission simulator (SMS), the shuttle engineering simulator (SES), and the shuttle avionics integration laboratory (SAIL) simulator is described. To meet crew training requirements, a radar tracking performance model and a target modeling method were developed. The parent-simulation/radar-simulation interface requirements and the method selected to model target scattering properties, including an application of this method to the SPAS spacecraft, are described. The radar search and acquisition mode performance model and the radar track mode signal processor model are examined and analyzed. The angle, angle rate, range, and range rate tracking loops are also discussed.
NASA Astrophysics Data System (ADS)
Suzuki, Tomoya; Ohkura, Yuushi
2016-01-01
To examine the predictability and profitability of financial markets, we introduce three ideas that improve traditional technical analysis so that investment timings can be detected more quickly. Firstly, a nonlinear prediction model is considered as an effective way to enhance this detection power by learning complex behavioral patterns hidden in financial markets. Secondly, the bagging algorithm can be applied to quantify the confidence in predictions and to compose new technical indicators. Thirdly, we also introduce a two-step selection of more profitable stocks to improve investment performance: the first step selects more predictable stocks during the learning period, and the second step adaptively and dynamically selects, for each investment, the most confident stock, namely the one showing the most significant technical signal. Finally, investment simulations based on real financial data show that these ideas succeed in overcoming complex financial markets.
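A sketch of the bagging idea (the second of the three): train each ensemble member on a bootstrap resample and read the fraction of members agreeing with the ensemble direction as a confidence score. The 1-nearest-neighbour base learner and the toy return series below are stand-ins for the paper's nonlinear predictor, not its actual method.

```python
import numpy as np

rng = np.random.default_rng(7)

def bagged_signal(X, y, x_new, n_models=50):
    """Bootstrap-aggregated direction forecast plus agreement-based
    confidence, usable as a technical indicator."""
    preds = []
    n = len(X)
    for _ in range(n_models):
        idx = rng.integers(0, n, n)            # bootstrap resample
        Xb, yb = X[idx], y[idx]
        k = np.argmin(np.abs(Xb - x_new))      # trivial 1-NN base learner
        preds.append(np.sign(yb[k]))
    preds = np.array(preds)
    direction = np.sign(preds.sum())           # ensemble vote
    confidence = np.mean(preds == direction)   # fraction agreeing
    return direction, confidence

X = rng.normal(0, 1, 300)             # toy feature: yesterday's return
y = 0.4 * X + rng.normal(0, 1, 300)   # toy target: today's return
print(bagged_signal(X, y, x_new=1.2))
```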
Sensorimotor Learning Biases Choice Behavior: A Learning Neural Field Model for Decision Making
Schöner, Gregor; Gail, Alexander
2012-01-01
According to a prominent view of sensorimotor processing in primates, selection and specification of possible actions are not sequential operations. Rather, a decision for an action emerges from competition between different movement plans, which are specified and selected in parallel. For action choices which are based on ambiguous sensory input, the frontoparietal sensorimotor areas are considered part of the common underlying neural substrate for selection and specification of action. These areas have been shown capable of encoding alternative spatial motor goals in parallel during movement planning, and show signatures of competitive value-based selection among these goals. Since the same network is also involved in learning sensorimotor associations, competitive action selection (decision making) should not only be driven by the sensory evidence and expected reward in favor of either action, but also by the subject's learning history of different sensorimotor associations. Previous computational models of competitive neural decision making used predefined associations between sensory input and corresponding motor output. Such hard-wiring does not allow modeling of how decisions are influenced by sensorimotor learning or by changing reward contingencies. We present a dynamic neural field model which learns arbitrary sensorimotor associations with a reward-driven Hebbian learning algorithm. We show that the model accurately simulates the dynamics of action selection with different reward contingencies, as observed in monkey cortical recordings, and that it correctly predicted the pattern of choice errors in a control experiment. With our adaptive model we demonstrate how network plasticity, which is required for association learning and adaptation to new reward contingencies, can influence choice behavior. The field model provides an integrated and dynamic account for the operations of sensorimotor integration, working memory and action selection required for decision making in ambiguous choice situations. PMID:23166483
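A minimal sketch of reward-driven Hebbian learning of arbitrary stimulus-action associations with competitive (softmax) selection. The paper uses continuous dynamic neural fields, which this discrete toy does not reproduce; learning rates, gains and the mapping below are invented.

```python
import numpy as np

rng = np.random.default_rng(8)
N_STIM, N_ACT, TRIALS, ETA = 4, 4, 2000, 0.05

W = np.abs(rng.normal(0.1, 0.02, (N_ACT, N_STIM)))  # sensorimotor weights
rule = rng.permutation(N_ACT)  # arbitrary stimulus -> rewarded action

for t in range(TRIALS):
    s = rng.integers(N_STIM)
    x = np.zeros(N_STIM); x[s] = 1.0
    # competitive selection: softmax over the activations of action fields
    p = np.exp(5 * W @ x); p /= p.sum()
    a = rng.choice(N_ACT, p=p)
    r = 1.0 if a == rule[s] else 0.0
    # reward-driven Hebbian update: adjust the active pre/post pair
    y = np.zeros(N_ACT); y[a] = 1.0
    W += ETA * (r - 0.5) * np.outer(y, x)
    W = np.clip(W, 0.0, None)

print("learned mapping:", W.argmax(axis=0), "target:", rule)
```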
Jaman, Ajmery; Latif, Mahbub A H M; Bari, Wasimul; Wahed, Abdus S
2016-05-20
In generalized estimating equations (GEE), the correlation between the repeated observations on a subject is specified with a working correlation matrix. Correct specification of the working correlation structure ensures efficient estimators of the regression coefficients. Among the criteria used in practice for selecting a working correlation structure, the Rotnitzky-Jewell criterion, the Quasi Information Criterion (QIC) and the Correlation Information Criterion (CIC) are based on the fact that, if the assumed working correlation structure is correct, the model-based (naive) and the sandwich (robust) covariance estimators of the regression coefficient estimators should be close to each other. The sandwich covariance estimator used in defining the Rotnitzky-Jewell, QIC and CIC criteria is biased downward and has a larger variability than the corresponding model-based covariance estimator. Motivated by this fact, a new criterion based on the bias-corrected sandwich covariance estimator is proposed in this paper for selecting an appropriate working correlation structure in GEE. A comparison of the proposed and the competing criteria is shown using simulation studies with correlated binary responses. The results revealed that the proposed criterion generally performs better than the competing criteria. An example of selecting the appropriate working correlation structure has also been shown using data from the Madras Schizophrenia Study. Copyright © 2015 John Wiley & Sons, Ltd.
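In the standard notation of this literature (supplied here for context, not quoted from the paper), Pan's QIC and Hin and Wang's CIC for a candidate working structure R are:

```latex
\mathrm{QIC}(R) \;=\; -2\,Q\!\left(\hat{\beta}(R);\, I\right) \;+\; 2\,\mathrm{trace}\!\left(\hat{\Omega}_I\, \hat{V}_R\right),
\qquad
\mathrm{CIC}(R) \;=\; \mathrm{trace}\!\left(\hat{\Omega}_I\, \hat{V}_R\right)
```

where Q is the quasi-likelihood evaluated under working independence I, Ω̂_I is the inverse of the model-based covariance estimator under independence, and V̂_R is the robust sandwich covariance estimator under structure R. The criterion proposed in the paper replaces V̂_R with a bias-corrected sandwich estimator; the Mancl-DeRouen correction is one standard choice of such an estimator, though the abstract does not name which correction the authors use.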
Role of Quantitative Clinical Pharmacology in Pediatric Approval and Labeling.
Mehrotra, Nitin; Bhattaram, Atul; Earp, Justin C; Florian, Jeffry; Krudys, Kevin; Lee, Jee Eun; Lee, Joo Yeon; Liu, Jiang; Mulugeta, Yeruk; Yu, Jingyu; Zhao, Ping; Sinha, Vikram
2016-07-01
Dose selection is one of the key decisions made during drug development in pediatrics. There are regulatory initiatives that promote the use of model-based drug development in pediatrics. Pharmacometrics, or quantitative clinical pharmacology, enables the development of models that can describe factors affecting pharmacokinetics and/or pharmacodynamics in pediatric patients. This manuscript describes some examples in which pharmacometric analysis was used to support approval and labeling in pediatrics. In particular, the roles of comparing pediatric pharmacokinetics (PK) to adult PK and of utilizing dose/exposure-response analysis for dose selection are highlighted. Dose selection for esomeprazole in pediatrics was based on PK matching to adults, whereas for adalimumab, exposure-response, PK, efficacy, and safety data together were useful to recommend doses for pediatric Crohn's disease. For vigabatrin, demonstration of a similar dose-response between pediatrics and adults allowed for the selection of a pediatric dose. Based on model-based pharmacokinetic simulations and safety data from darunavir pediatric clinical studies with a twice-daily regimen, different once-daily dosing regimens for treatment-naïve human immunodeficiency virus 1-infected pediatric subjects 3 to <12 years of age were evaluated. The role of physiologically based pharmacokinetic (PBPK) modeling in predicting pediatric PK is rapidly evolving. However, regulatory review experiences and an understanding of the state of the science indicate a lack of established predictive performance of PBPK in pediatric PK prediction. Moving forward, pharmacometrics will continue to play a key role in pediatric drug development, contributing toward decisions pertaining to dose selection, trial designs, and assessing disease similarity to adults to support extrapolation of efficacy. Copyright © 2016 U.S. Government work not protected by U.S. copyright.
The effects of ion channel blockers validate the conductance-based model of saccadic oscillations
Shaikh, Aasef G.; Zee, David S.; Optican, Lance M.; Miura, Kenichiro; Ramat, Stefano; Leigh, R. John
2012-01-01
Conductance-based models of reciprocally inhibiting burst neurons suggest that intrinsic membrane properties and postinhibitory rebound (PIR) determine the amplitude and frequency of saccadic oscillations. Reduction of the low-threshold calcium currents (IT) in the model decreased the amplitude but increased the frequency of the simulated oscillations. Combined reduction of hyperpolarization-activated cation current (Ih) and IT in the model abolished the simulated oscillations. We measured the effects of a selective blocker of IT (ethosuximide) in healthy subjects on the amplitude and frequency of saccadic oscillations evoked by eye closure, and of a nonselective blocker of Ih and IT (propranolol) in a patient with microsaccadic oscillation and limb tremor syndrome (mSOLT). Ethosuximide significantly reduced the amplitude but increased the frequency of the saccadic oscillations during eye closure in healthy subjects. Propranolol abolished saccadic oscillations in the mSOLT patient. These results support the hypothesized role of postinhibitory rebound, Ih, and IT in the generation of saccadic oscillations and in determining their kinematic properties. PMID:21950976
Computational design and engineering of polymeric orthodontic aligners.
Barone, S; Paoli, A; Razionale, A V; Savignano, R
2016-10-05
Transparent and removable aligners represent an effective solution to correct various orthodontic malocclusions through minimally invasive procedures. An aligner-based treatment requires patients to sequentially wear dentition-mating shells obtained by thermoforming polymeric disks on reference dental models. An aligner is shaped by introducing a geometrical mismatch with respect to the actual tooth positions to induce a loading system, which moves the target teeth toward the correct positions. The common practice is based on selecting the aligner features (material, thickness, and auxiliary elements) by only considering the clinician's subjective assessments. In this article, a computational design and engineering methodology has been developed to reconstruct anatomical tissues, to model parametric aligner shapes, to simulate orthodontic movements, and to enhance the aligner design. The proposed approach integrates computer-aided technologies, from tomographic imaging to optical scanning, from parametric modeling to finite element analyses, within a 3-dimensional digital framework. The anatomical modeling provides anatomies, including teeth (roots and crowns), jaw bones, and periodontal ligaments, which are the references for the downstream parametric aligner shaping. The biomechanical interactions between anatomical models and aligner geometries are virtually reproduced using a finite element analysis software. The methodology allows numerical simulations of patient-specific conditions and the comparative analyses of different aligner configurations. In this article, the digital framework has been used to study the influence of various auxiliary elements on the loading system delivered to a maxillary and a mandibular central incisor during an orthodontic tipping movement. Numerical simulations have shown a high dependency of the orthodontic tooth movement on the auxiliary element configuration, which should then be accurately selected to maximize the aligner's effectiveness. Copyright © 2016 John Wiley & Sons, Ltd.