Expansion or extinction: deterministic and stochastic two-patch models with Allee effects.
Kang, Yun; Lanchier, Nicolas
2011-06-01
We investigate the impact of the Allee effect and dispersal on the long-term evolution of a population in a patchy environment. Our main focus is on whether a population already established in one patch either successfully invades an adjacent empty patch or undergoes global extinction. Our study is based on a combination of analytical and numerical results for both a deterministic two-patch model and its stochastic counterpart. The deterministic model has two, three, or four attractors; a regime with exactly three attractors appears only when the patches have distinct Allee thresholds. In the presence of weak dispersal, the analysis of the deterministic model shows that a high-density and a low-density population can coexist at equilibrium in nearby patches, whereas the analysis of the stochastic model indicates that this equilibrium is metastable, leading after a large random time to either global expansion or global extinction. Up to a critical dispersal, increasing the intensity of the interactions enlarges both the basin of attraction of global extinction and the basin of attraction of global expansion. Above this threshold, for both the deterministic and the stochastic models, the patches tend to synchronize as the intensity of the dispersal increases, resulting in either global expansion or global extinction: the deterministic model then has only two attractors, and the stochastic model no longer exhibits metastable behavior. In the presence of strong dispersal, the limiting behavior is entirely determined by the values of the Allee thresholds, as the global population size in both the deterministic and the stochastic models evolves as dictated by their single-patch counterparts. For all values of the dispersal parameter, Allee effects promote global extinction, through an expansion of the basin of attraction of the extinction equilibrium for the deterministic model and an increase of the probability of extinction for the stochastic model.
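A minimal sketch of a deterministic two-patch system of this kind, assuming a standard cubic strong-Allee growth law and symmetric linear dispersal; the growth rate, carrying capacity, thresholds, and dispersal intensity below are illustrative choices, not the paper's values:

```python
import numpy as np
from scipy.integrate import solve_ivp

r, K = 1.0, 1.0      # growth rate and carrying capacity (assumed)
a1, a2 = 0.2, 0.3    # distinct Allee thresholds for the two patches
d = 0.05             # weak dispersal intensity

def allee(u, a):
    # logistic growth modified by a strong Allee effect with threshold a
    return r * u * (1.0 - u / K) * (u / a - 1.0)

def rhs(t, u):
    u1, u2 = u
    return [allee(u1, a1) + d * (u2 - u1),
            allee(u2, a2) + d * (u1 - u2)]

# patch 1 established above its threshold, patch 2 initially empty
sol = solve_ivp(rhs, (0.0, 200.0), [0.8, 0.0])
print(sol.y[:, -1])  # high-density/low-density coexistence under weak dispersal
```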
NASA Astrophysics Data System (ADS)
Marsudi, Hidayat, Noor; Wibowo, Ratno Bagus Edy
2017-12-01
In this article, we present a deterministic model for the transmission dynamics of HIV/AIDS in which both a condom campaign and antiretroviral (ARV) therapy are important for disease management. We calculate the effective reproduction number using the next-generation matrix method and investigate the local and global stability of the disease-free equilibrium of the model. A sensitivity analysis of the effective reproduction number with respect to the model parameters was carried out. Our results show that the efficacy rate of the condom campaign, the transmission rate for contact with asymptomatic infectives, the progression rate from asymptomatic to pre-AIDS infectives, the transmission rate for contact with pre-AIDS infectives, the ARV therapy rate, the proportion of susceptibles receiving the condom campaign, and the proportion of pre-AIDS individuals receiving ARV therapy are highly sensitive parameters that affect the transmission dynamics of HIV/AIDS infection.
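A hedged sketch of the next-generation-matrix step mentioned above: the reproduction number is the spectral radius of F V^{-1}, with F collecting new-infection terms and V transition terms, both linearized at the disease-free equilibrium. The 2x2 matrices here are placeholders, not the paper's HIV/AIDS compartments:

```python
import numpy as np

# placeholder linearization at the disease-free equilibrium (assumed values)
F = np.array([[0.3, 0.2],    # rates of new infections into each infected class
              [0.0, 0.0]])
V = np.array([[0.5, 0.0],    # transfers out of (diagonal) and between classes
              [-0.1, 0.4]])

K = F @ np.linalg.inv(V)                 # next-generation matrix
R0 = max(abs(np.linalg.eigvals(K)))      # spectral radius
print(f"reproduction number = {R0:.3f}")
```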
Deterministic phase slips in mesoscopic superconducting rings
Petković, I.; Lollo, A.; Glazman, L. I.; Harris, J. G. E.
2016-01-01
The properties of one-dimensional superconductors are strongly influenced by topological fluctuations of the order parameter, known as phase slips, which cause the decay of persistent current in superconducting rings and the appearance of resistance in superconducting wires. Despite extensive work, quantitative studies of phase slips have been limited by uncertainty regarding the order parameter's free-energy landscape. Here we show detailed agreement between measurements of the persistent current in isolated flux-biased rings and Ginzburg–Landau theory over a wide range of temperature, magnetic field and ring size; this agreement provides a quantitative picture of the free-energy landscape. We also demonstrate that phase slips occur deterministically as the barrier separating two competing order parameter configurations vanishes. These results will enable studies of quantum and thermal phase slips in a well-characterized system and will provide access to outstanding questions regarding the nature of one-dimensional superconductivity.
Trend analysis of Arctic sea ice extent
NASA Astrophysics Data System (ADS)
Silva, M. E.; Barbosa, S. M.; Antunes, Luís; Rocha, Conceição
2009-04-01
The extent of Arctic sea ice is a fundamental parameter of Arctic climate variability. In the context of climate change, the area covered by ice in the Arctic is a particularly useful indicator of recent changes in the Arctic environment. Climate models are in near-universal agreement that Arctic sea ice extent will decline through the 21st century as a consequence of global warming, and many studies predict an ice-free Arctic as soon as 2012. Time series of satellite passive-microwave observations allow the temporal changes in the extent of Arctic sea ice to be assessed. Much of the analysis of the ice extent time series, as in most climate studies based on observational data, has focused on the computation of deterministic linear trends by ordinary least squares. However, many different processes, including deterministic, unit-root, and long-range dependent processes, can engender trend-like features in a time series. Several parametric tests have been developed, mainly in econometrics, to discriminate between stationarity (no trend), deterministic trends, and stochastic trends. Here, these tests are applied in the trend analysis of the sea ice extent time series available at the National Snow and Ice Data Center. The parametric stationarity tests, Augmented Dickey-Fuller (ADF), Phillips-Perron (PP), and KPSS, do not support an overall deterministic trend in the time series of Arctic sea ice extent. Therefore, alternative parametrizations such as long-range dependence should be considered for characterising long-term Arctic sea ice variability.
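Two of the three tests named above are available in statsmodels (the Phillips-Perron test lives in the separate arch package); a minimal sketch on a synthetic stand-in series:

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller, kpss

rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(size=360))   # synthetic stand-in for monthly ice extent

adf_stat, adf_p, *_ = adfuller(x)                  # H0: unit root
kpss_stat, kpss_p, *_ = kpss(x, regression="ct")   # H0: trend-stationarity
print(f"ADF p-value = {adf_p:.3f}, KPSS p-value = {kpss_p:.3f}")
```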
Characterizing Uncertainty and Variability in PBPK Models ...
Mode-of-action based risk and safety assessments can rely upon tissue dosimetry estimates in animals and humans obtained from physiologically-based pharmacokinetic (PBPK) modeling. However, risk assessment also increasingly requires characterization of uncertainty and variability; such characterization for PBPK model predictions represents a continuing challenge to both modelers and users. Current practices show significant progress in specifying deterministic biological models and the non-deterministic (often statistical) models, estimating their parameters using diverse data sets from multiple sources, and using them to make predictions and characterize uncertainty and variability. The International Workshop on Uncertainty and Variability in PBPK Models, held Oct 31-Nov 2, 2006, sought to identify the state-of-the-science in this area and recommend priorities for research and changes in practice and implementation. For the short term, these include: (1) multidisciplinary teams to integrate deterministic and non-deterministic/statistical models; (2) broader use of sensitivity analyses, including for structural and global (rather than local) parameter changes; and (3) enhanced transparency and reproducibility through more complete documentation of the model structure(s) and parameter values, the results of sensitivity and other analyses, and supporting, discrepant, or excluded data. Longer-term needs include: (1) theoretic and practical methodological impro
Deterministic and stochastic models for middle east respiratory syndrome (MERS)
NASA Astrophysics Data System (ADS)
Suryani, Dessy Rizki; Zevika, Mona; Nuraini, Nuning
2018-03-01
World Health Organization (WHO) data state that since September 2012 there have been 1,733 cases of Middle East Respiratory Syndrome (MERS), with 628 deaths, occurring in 27 countries. MERS was first identified in Saudi Arabia in 2012, and the largest outbreak of MERS outside Saudi Arabia occurred in South Korea in 2015. MERS is a disease that attacks the respiratory system and is caused by infection with MERS-CoV. MERS-CoV transmission occurs either directly, through contact between infected and non-infected individuals, or indirectly, through objects contaminated by the free virus. It is suspected that MERS can spread quickly because of the free virus in the environment. Mathematical modeling is used to describe the transmission of MERS with a deterministic model and a stochastic model. The deterministic model is used to investigate the temporal dynamics of the system and to analyze the steady-state conditions. The stochastic approach, a Continuous-Time Markov Chain (CTMC), is used to predict future states using random variables. From the models that were built, the threshold values for the deterministic and stochastic models are obtained in the same form, and the probability of disease extinction can be computed from the stochastic model. Simulations of both models using several different parameters are shown, and the probability of disease extinction is compared across several initial conditions.
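A toy sketch of the CTMC counterpart for a simplified SIR-type outbreak, simulated with the Gillespie algorithm; the rates and compartments are illustrative stand-ins, not the paper's MERS-CoV model with environmental virus:

```python
import numpy as np

def gillespie_sir(beta, gamma, S, I, R, t_max, rng):
    """Exact stochastic simulation of a density-dependent SIR CTMC."""
    t, path = 0.0, [(0.0, S, I, R)]
    N = S + I + R
    while t < t_max and I > 0:
        rate_inf = beta * S * I / N        # infection event rate
        rate_rec = gamma * I               # recovery event rate
        total = rate_inf + rate_rec
        t += rng.exponential(1.0 / total)  # time to next event
        if rng.random() < rate_inf / total:
            S, I = S - 1, I + 1
        else:
            I, R = I - 1, R + 1
        path.append((t, S, I, R))
    return path

rng = np.random.default_rng(1)
path = gillespie_sir(beta=0.4, gamma=0.25, S=990, I=10, R=0, t_max=200.0, rng=rng)
print("final state (t, S, I, R):", path[-1])
```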
Olariu, Victor; Manesso, Erica; Peterson, Carsten
2017-06-01
Depicting developmental processes as movements in free energy genetic landscapes is an illustrative tool. However, exploring such landscapes to obtain quantitative or even qualitative predictions is hampered by the lack of free energy functions corresponding to the biochemical Michaelis-Menten or Hill rate equations for the dynamics. Being armed with energy landscapes defined by a network and its interactions would open up the possibility of swiftly identifying cell states and computing optimal paths, including those of cell reprogramming, thereby avoiding exhaustive trial-and-error simulations with rate equations for different parameter sets. It turns out that sigmoidal rate equations do have approximate free energy associations. With this replacement of rate equations, we develop a deterministic method for estimating the free energy surfaces of systems of interacting genes at different noise levels or temperatures. Once such free energy landscape estimates have been established, we adapt a shortest path algorithm to determine optimal routes in the landscapes. We explore the method on three circuits for haematopoiesis and embryonic stem cell development for commitment and reprogramming scenarios and illustrate how the method can be used to determine sequential steps for onsets of external factors, essential for efficient reprogramming.
Epidemic spreading on adaptively weighted scale-free networks.
Sun, Mengfeng; Zhang, Haifeng; Kang, Huiyan; Zhu, Guanghu; Fu, Xinchu
2017-04-01
We introduce three modified SIS models on scale-free networks that take into account variable population size, nonlinear infectivity, adaptive weights, behavioral inertia, and time delay, so as to better characterize the actual spread of epidemics. We develop new mathematical methods and techniques to study the dynamics of the models, including the basic reproduction number and the global asymptotic stability of the disease-free and endemic equilibria. We show that the disease-free equilibrium cannot undergo a Hopf bifurcation. We further analyze the effects of local information about the disease and of various immunization schemes on the epidemic dynamics. We also perform stochastic network simulations, which yield quantitative agreement with the deterministic mean-field approach.
NASA Astrophysics Data System (ADS)
Han, Jiang; Chen, Ye-Hwa; Zhao, Xiaomin; Dong, Fangfang
2018-04-01
A novel fuzzy dynamical system approach to the control design of flexible-joint manipulators with mismatched uncertainty is proposed. Uncertainties of the system are assumed to lie within prescribed fuzzy sets. The desired system performance includes a deterministic phase and a fuzzy phase. First, by creatively implanting a fictitious control, a robust control scheme is constructed to render the system uniformly bounded and uniformly ultimately bounded. Both the manipulator model and the control scheme are deterministic, not based on heuristic IF-THEN rules. Next, a fuzzy-based performance index is proposed, and the optimal design of a control parameter is formulated as a constrained optimisation problem. The global solution to this problem can be obtained by solving two quartic equations. The fuzzy dynamical system approach is systematic and is able to assure the deterministic performance as well as to minimise the fuzzy performance index.
Efficient Characterization of Parametric Uncertainty of Complex (Bio)chemical Networks.
Schillings, Claudia; Sunnåker, Mikael; Stelling, Jörg; Schwab, Christoph
2015-08-01
Parametric uncertainty is a particularly challenging and relevant aspect of systems analysis in domains such as systems biology where, both for inference and for assessing prediction uncertainties, it is essential to characterize the system behavior globally in the parameter space. However, current methods based on local approximations or on Monte-Carlo sampling cope only insufficiently with the high-dimensional parameter spaces associated with complex network models. Here, we propose an alternative deterministic methodology that relies on sparse polynomial approximations: a deterministic computational interpolation scheme that adaptively identifies the most significant expansion coefficients. We present its performance on kinetic model equations from computational systems biology with several hundred parameters and state variables, leading to numerical approximations of the parametric solution on the entire parameter space. The scheme is based on adaptive Smolyak interpolation of the parametric solution at judiciously and adaptively chosen points in parameter space. Like Monte-Carlo sampling, it is "non-intrusive" and well-suited for massively parallel implementation, but it affords higher convergence rates. This opens up new avenues for large-scale dynamic network analysis by enabling scaling for many applications, including parameter estimation, uncertainty quantification, and systems design.
Analysis of a model of gambiense sleeping sickness in humans and cattle.
Ndondo, A M; Munganga, J M W; Mwambakana, J N; Saad-Roy, C M; van den Driessche, P; Walo, R O
2016-01-01
Human African Trypanosomiasis (HAT) in humans and Nagana in cattle, commonly called sleeping sickness, are caused by trypanosome protozoa transmitted by the bites of infected tsetse flies. We present a deterministic model for the transmission of HAT caused by Trypanosoma brucei gambiense between human hosts, cattle hosts, and tsetse flies. The model takes into account the growth of the tsetse fly from its larval stage to the adult stage. Disease in the tsetse fly population is modeled by three compartments, and both the human and cattle populations are modeled by four compartments incorporating the two stages of HAT. We provide a rigorous derivation of the basic reproduction number R0. For R0 < 1, the disease-free equilibrium is globally asymptotically stable, so HAT dies out; whereas (assuming no return to susceptibility) for R0 > 1, HAT persists. Elasticity indices for R0 with respect to different parameters are calculated with baseline parameter values appropriate for HAT in West Africa, indicating the parameters that are important for control strategies to bring R0 below 1. Numerical simulations with R0 > 1 show values for the infected populations at the endemic equilibrium and indicate that, with certain parameter values, HAT could not persist in the human population in the absence of cattle.
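A sketch of how an elasticity index e_p = (dR0/dp)(p/R0) can be approximated by central finite differences; the toy vector-borne R0 below is a placeholder, not the paper's closed-form expression:

```python
def elasticity(R0, params, name, h=1e-6):
    """Normalized sensitivity of R0 to parameter `name` (central difference)."""
    base = R0(params)
    up = R0({**params, name: params[name] * (1 + h)})
    down = R0({**params, name: params[name] * (1 - h)})
    dR0_dp = (up - down) / (2 * h * params[name])
    return dR0_dp * params[name] / base

# toy vector-borne R0: biting rate a (enters twice), transmission b, recovery g
R0 = lambda p: p["a"] ** 2 * p["b"] / p["g"]
print(elasticity(R0, {"a": 0.3, "b": 0.5, "g": 0.1}, "a"))  # ~2, since R0 ~ a^2
```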
Schrag, Yann; Tremea, Alessandro; Lagger, Cyril; Ohana, Noé; Mohr, Christine
2016-01-01
Studies have indicated that people behave less responsibly after exposure to information containing deterministic statements than after free will statements or neutral statements. Thus, deterministic primes should lead to enhanced risk-taking behavior. We tested this prediction in two studies with healthy participants. In experiment 1, we tested 144 students (24 men) in the laboratory using the Iowa Gambling Task. In experiment 2, we tested 274 participants (104 men) online using the Balloon Analogue Risk Task. In the Iowa Gambling Task, the free will priming condition resulted in more risky decisions than both the deterministic and neutral priming conditions. We observed no priming effects on risk-taking behavior in the Balloon Analogue Risk Task. To explain these unpredicted findings, we consider the somatic marker hypothesis, a gain-frequency approach, as well as attention to gains and/or inattention to losses. In addition, we highlight the necessity of considering both pro-free-will and deterministic priming conditions in future studies. Importantly, our results and previous ones indicate that the effects of pro-free-will and deterministic priming do not oppose each other on a frequently assumed continuum.
Lahodny, G E; Gautam, R; Ivanek, R
2015-01-01
Indirect transmission through the environment, pathogen shedding by infectious hosts, replication of free-living pathogens within the environment, and environmental decontamination are suspected to play important roles in the spread and control of environmentally transmitted infectious diseases. To account for these factors, the classic Susceptible-Infectious-Recovered-Susceptible epidemic model is modified to include a compartment representing the amount of free-living pathogen within the environment. The model accounts for host demography, direct and indirect transmission, replication of free-living pathogens in the environment, and removal of free-living pathogens by natural death or environmental decontamination. Based on the assumptions of the deterministic model, a continuous-time Markov chain model is developed. An estimate for the probability of disease extinction or a major outbreak is obtained by approximating the Markov chain with a multitype branching process. Numerical simulations illustrate important differences between the deterministic and stochastic counterparts, relevant for outbreak prevention, that depend on indirect transmission, pathogen shedding by infectious hosts, replication of free-living pathogens, and environmental decontamination. The probability of a major outbreak is computed for salmonellosis in a herd of dairy cattle as well as cholera in a human population. An explicit expression for the probability of disease extinction or a major outbreak in terms of the model parameters is obtained for systems with no direct transmission or replication of free-living pathogens.
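A minimal sketch of the branching-process approximation mentioned above: the probability of disease extinction is the minimal fixed point on [0, 1] of the offspring probability-generating function. The single-type pgf below (each infectious individual either transmits or recovers) is a generic example, not the paper's multitype pgf with environmental transmission:

```python
def extinction_probability(beta, gamma, tol=1e-12):
    # offspring pgf of the linear birth-death approximation near the
    # disease-free equilibrium: transmission (prob beta/(beta+gamma)) yields
    # two "offspring"; recovery (prob gamma/(beta+gamma)) yields none
    f = lambda q: (gamma + beta * q * q) / (beta + gamma)
    q = 0.0                       # iterating from 0 converges to the
    while abs(f(q) - q) > tol:    # minimal fixed point of f
        q = f(q)
    return q

print(extinction_probability(beta=0.6, gamma=0.2))   # ~1/R0 = 1/3
```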
Automated Calibration For Numerical Models Of Riverflow
NASA Astrophysics Data System (ADS)
Fernandez, Betsaida; Kopmann, Rebekka; Oladyshkin, Sergey
2017-04-01
Calibration is fundamental to all types of hydro-system modeling, as it approximates the parameters that mimic the overall system behavior. We therefore assess different deterministic and stochastic optimization methods, comparing their robustness, computational feasibility, and global search capacity, and analyze the uncertainty of the most suitable methods. These optimization methods minimize an objective function that compares synthetic measurements with simulated data. Synthetic measurement data replace the observed data set to guarantee that a parameter solution exists. The input data for the objective function derive from a hydro-morphological dynamics numerical model of a 180-degree bend channel. The hydro-morphological numerical model shows a high level of ill-posedness in the mathematical problem. Minimizing the objective function with the different candidate optimization methods reveals a failure of some of the gradient-based methods, such as Newton Conjugate Gradient and BFGS. Others show partial convergence, such as Nelder-Mead, Polak-Ribière, L-BFGS-B, Truncated Newton Conjugate Gradient, and Trust-Region Newton Conjugate Gradient. Still others yield parameter solutions outside the physical limits, such as Levenberg-Marquardt and LeastSquareRoot. Moreover, the genetic optimization methods, such as Differential Evolution and Basin-Hopping, as well as Brute Force methods, impose a significant computational demand. The deterministic Sequential Least Squares Programming method and the stochastic Bayesian inference method produce the best optimization results. Keywords: automated calibration of hydro-morphological dynamic numerical models, Bayesian inference theory, deterministic optimization methods.
Novel metaheuristic for parameter estimation in nonlinear dynamic biological systems.
Rodriguez-Fernandez, Maria; Egea, Jose A; Banga, Julio R
2006-11-02
We consider the problem of parameter estimation (model calibration) in nonlinear dynamic models of biological systems. Due to the frequent ill-conditioning and multi-modality of many of these problems, traditional local methods usually fail (unless initialized with very good guesses of the parameter vector). In order to surmount these difficulties, global optimization (GO) methods have been suggested as robust alternatives. Currently, deterministic GO methods cannot solve problems of realistic size within this class in reasonable computation times. In contrast, certain types of stochastic GO methods have shown promising results, although the computational cost remains large. Rodriguez-Fernandez and coworkers have presented hybrid stochastic-deterministic GO methods which could reduce computation time by one order of magnitude while guaranteeing robustness. Our goal here was to further reduce the computational effort without losing robustness. We have developed a new procedure based on the scatter search methodology for nonlinear optimization of dynamic models of arbitrary (or even unknown) structure (i.e. black-box models). In this contribution, we describe and apply this novel metaheuristic, inspired by recent developments in the field of operations research, to a set of complex identification problems, and we make a critical comparison with the previous (above-mentioned) successful methods. Robust and efficient methods for parameter estimation are of key importance in systems biology and related areas. The new metaheuristic presented in this paper aims to ensure the proper solution of these problems by adopting a global optimization approach, while keeping the computational effort under reasonable values. It was applied to a set of three challenging parameter estimation problems of nonlinear dynamic biological systems, outperforming very significantly all the methods previously used for these benchmark problems.
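Scatter search itself is not shipped with SciPy; as a hedged stand-in, this sketch shows the same hybrid global-plus-local pattern on a toy model-calibration problem (a stochastic global search polished by a local simplex method; the two-parameter exponential model is invented for illustration):

```python
import numpy as np
from scipy.optimize import differential_evolution, minimize

t = np.linspace(0.0, 10.0, 50)
data = 0.7 * np.exp(-1.5 * t)          # noiseless toy "measurements"

def sse(p):
    # sum-of-squares mismatch between model y = p0 * exp(-p1 * t) and data
    return np.sum((p[0] * np.exp(-p[1] * t) - data) ** 2)

coarse = differential_evolution(sse, bounds=[(0, 5), (0, 5)],
                                seed=3, maxiter=50, polish=False)
fine = minimize(sse, coarse.x, method="Nelder-Mead")   # local refinement
print(fine.x)   # ~[0.7, 1.5]
```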
Bifurcation analysis of dengue transmission model in Baguio City, Philippines
NASA Astrophysics Data System (ADS)
Libatique, Criselda P.; Pajimola, Aprimelle Kris J.; Addawe, Joel M.
2017-11-01
In this study, we formulate a deterministic model for the transmission dynamics of dengue fever in Baguio City, Philippines. We analyze the existence of the equilibria of the dengue model and obtain conditions under which the equilibrium states exist. A stability analysis of the system is carried out for the disease-free equilibrium, and we show that the system becomes stable under certain conditions on the parameters. Taking a particular parameter and using centre manifold theory, we show that the proposed model exhibits a bifurcation phenomenon. We perform numerical simulations to verify the analytical results.
NASA Astrophysics Data System (ADS)
Fischer, P.; Jardani, A.; Lecoq, N.
2018-02-01
In this paper, we present a novel inverse modeling method called Discrete Network Deterministic Inversion (DNDI) for mapping the geometry and properties of the discrete network of conduits and fractures in karstified aquifers. The DNDI algorithm is based on a coupled discrete-continuum concept to simulate water flows numerically in a model, and on a deterministic optimization algorithm to invert a set of piezometric data recorded during multiple pumping tests. In this method, the model is partitioned into subspaces piloted by a set of parameters (matrix transmissivity, and the geometry and equivalent transmissivity of the conduits) that are treated as unknowns. In this way, the deterministic optimization process can iteratively correct the geometry of the network and the values of the properties until it converges to a global network geometry in a solution model able to reproduce the observed data. An uncertainty analysis of this result can be performed from maps of the posterior uncertainties on the network geometry or on the property values. The method has been successfully tested on three theoretical and simplified study cases, with hydraulic response data generated from hypothetical karstic models of increasing complexity in the network geometry and the matrix heterogeneity.
Traveling Salesman Problem for Surveillance Mission Using Particle Swarm Optimization
2001-03-20
design of experiments, results of the experiments, and qualitative and quantitative analysis. Conclusions and recommendations based on the qualitative and... characterize the algorithm. Such analysis and comparison between LK and a non-deterministic algorithm produces claims such as "Lin-Kernighan algorithm takes... based on experiments 5 and 6. All other parameters are the same as the baseline (see 4.2.1.2). 4.2.2.6 Experiment 10 - Fine Tuning PSO AS: 85,95% Global
Parameter Estimation in Epidemiology: from Simple to Complex Dynamics
NASA Astrophysics Data System (ADS)
Aguiar, Maíra; Ballesteros, Sebastién; Boto, João Pedro; Kooi, Bob W.; Mateus, Luís; Stollenwerk, Nico
2011-09-01
We revisit the parameter estimation framework for population biological dynamical systems and apply it to calibrate various models in epidemiology with empirical time series, namely influenza and dengue fever. When it comes to more complex models, like the multi-strain dynamics describing the virus-host interaction in dengue fever, even the most recently developed parameter estimation techniques, such as maximum likelihood iterated filtering, reach their computational limits. However, the first results of parameter estimation with data on dengue fever from Thailand indicate a subtle interplay between stochasticity and the deterministic skeleton. The deterministic system on its own already displays complex dynamics, up to deterministic chaos and coexistence of multiple attractors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wagner, John C; Peplow, Douglas E.; Mosher, Scott W
2014-01-01
This paper presents a new hybrid (Monte Carlo/deterministic) method for increasing the efficiency of Monte Carlo calculations of distributions, such as flux or dose rate distributions (e.g., mesh tallies), as well as responses at multiple localized detectors and spectra. This method, referred to as Forward-Weighted CADIS (FW-CADIS), is an extension of the Consistent Adjoint Driven Importance Sampling (CADIS) method, which has been used for more than a decade to very effectively improve the efficiency of Monte Carlo calculations of localized quantities, e.g., flux, dose, or reaction rate at a specific location. The basis of this method is the development of an importance function that represents the importance of particles to the objective of uniform Monte Carlo particle density in the desired tally regions. Implementation of this method utilizes the results from a forward deterministic calculation to develop a forward-weighted source for a deterministic adjoint calculation. The resulting adjoint function is then used to generate consistent space- and energy-dependent source biasing parameters and weight windows that are used in a forward Monte Carlo calculation to obtain more uniform statistical uncertainties in the desired tally regions. The FW-CADIS method has been implemented and demonstrated within the MAVRIC sequence of SCALE and the ADVANTG/MCNP framework. Application of the method to representative, real-world problems, including calculation of dose rate and energy-dependent flux throughout the problem space, dose rates in specific areas, and energy spectra at multiple detectors, is presented and discussed. Results of the FW-CADIS method and other recently developed global variance reduction approaches are also compared, and the FW-CADIS method outperformed the other methods in all cases considered.
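A toy sketch of the CADIS bookkeeping that the adjoint function feeds into; the flux arrays stand in for the output of a deterministic solve, and the mesh is reduced to three cells for illustration:

```python
import numpy as np

q = np.array([1.0, 0.5, 0.2])         # physical source strengths on a toy mesh
phi_adj = np.array([0.1, 1.0, 5.0])   # adjoint flux (importance) from a
                                      # deterministic solve (faked here)

response = np.sum(q * phi_adj)        # estimated detector response
p_biased = q * phi_adj / response     # biased source pdf (CADIS)
w_center = response / phi_adj         # statistical-weight window centers

# fair-game check: sampling the biased source with these birth weights
# reproduces the analog source cell by cell
assert np.allclose(p_biased * w_center, q)
```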
Evaluation of SNS Beamline Shielding Configurations using MCNPX Accelerated by ADVANTG
DOE Office of Scientific and Technical Information (OSTI.GOV)
Risner, Joel M; Johnson, Seth R.; Remec, Igor
2015-01-01
Shielding analyses for the Spallation Neutron Source (SNS) at Oak Ridge National Laboratory pose significant computational challenges, including highly anisotropic high-energy sources, a combination of deep penetration shielding and an unshielded beamline, and a desire to obtain well-converged nearly global solutions for mapping of predicted radiation fields. The majority of these analyses have been performed using MCNPX with manually generated variance reduction parameters (source biasing and cell-based splitting and Russian roulette) that were largely based on the analyst's insight into the problem specifics. Development of the variance reduction parameters required extensive analyst time, and was often tailored to specific portions of the model phase space. We previously applied a developmental version of the ADVANTG code to an SNS beamline study to perform a hybrid deterministic/Monte Carlo analysis and showed that we could obtain nearly global Monte Carlo solutions with essentially uniform relative errors for mesh tallies that cover extensive portions of the model with typical voxel spacing of a few centimeters. The use of weight window maps and consistent biased sources produced using the FW-CADIS methodology in ADVANTG allowed us to obtain these solutions using substantially less computer time than the previous cell-based splitting approach. While those results were promising, the process of using the developmental version of ADVANTG was somewhat laborious, requiring user-developed Python scripts to drive much of the analysis sequence. In addition, limitations imposed by the size of weight-window files in MCNPX necessitated the use of relatively coarse spatial and energy discretization for the deterministic Denovo calculations that we used to generate the variance reduction parameters. We recently applied the production version of ADVANTG to this beamline analysis, which substantially streamlined the analysis process. We also tested importance function collapsing (in space and energy) capabilities in ADVANTG. These changes, along with the support for parallel Denovo calculations using the current version of ADVANTG, give us the capability to improve the fidelity of the deterministic portion of the hybrid analysis sequence, obtain improved weight-window maps, and reduce both the analyst and computational time required for the analysis process.
Heart rate variability as determinism with jump stochastic parameters.
Zheng, Jiongxuan; Skufca, Joseph D; Bollt, Erik M
2013-08-01
We use measured heart rate information (RR intervals) to develop a one-dimensional nonlinear map that describes short-term deterministic behavior in the data. Our study suggests that there is a stochastic parameter with persistence which causes the heart rate and rhythm system to wander about a bifurcation point. We propose a modified circle map with a jump-process noise term as a model which can qualitatively capture this behavior of low-dimensional transient determinism with occasional (stochastically defined) jumps from one deterministic system to another within a one-parameter family of deterministic systems.
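A hedged sketch of such a map: the standard sine circle map whose rotation parameter persists between rare random jumps. The functional form and rates are illustrative choices, not the authors' fitted model:

```python
import numpy as np

rng = np.random.default_rng(7)
K, omega = 1.0, 0.35        # coupling strength and rotation parameter (assumed)
n, p_jump = 5000, 0.01      # iterations; per-step probability of a jump

x = np.empty(n)
x[0] = 0.1
for i in range(1, n):
    if rng.random() < p_jump:
        omega += rng.normal(scale=0.05)   # rare jump; omega persists otherwise
    x[i] = (x[i-1] + omega - K / (2*np.pi) * np.sin(2*np.pi * x[i-1])) % 1.0
# x wanders between the dynamics of nearby members of the one-parameter family
```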
Mwanga, Gasper G; Haario, Heikki; Capasso, Vincenzo
2015-03-01
The main scope of this paper is to study optimal control practices for malaria by discussing the implementation of a catalog of optimal control strategies in the presence of parameter uncertainties, which are typical of infectious disease data. We focus on a deterministic mathematical model for the transmission of malaria, including in particular asymptomatic carriers and two age classes in the human population. A partial qualitative analysis of the relevant ODE system has been carried out, leading to a realistic threshold parameter. For the deterministic model under consideration, four possible control strategies have been analyzed: the use of long-lasting treated mosquito nets, indoor residual spraying, and the screening and treatment of symptomatic and asymptomatic individuals. The numerical results show that, using optimal control, the disease can be brought to a stable disease-free equilibrium when all four controls are used. The Incremental Cost-Effectiveness Ratio (ICER) for all possible combinations of the disease-control measures is determined. Numerical simulations of the optimal control in the presence of parameter uncertainty demonstrate its robustness: the main conclusions of the optimal control remain unchanged, even if inevitable variability remains in the control profiles. The results provide a promising framework for the design of cost-effective strategies for disease control with multiple interventions, even under considerable uncertainty in the model parameters.
Dotov, D G; Kim, S; Frank, T D
2015-02-01
We derive explicit expressions for the non-equilibrium thermodynamical variables of a canonical-dissipative limit cycle oscillator describing rhythmic motion patterns of active systems. These variables are statistical entropy, non-equilibrium internal energy, and non-equilibrium free energy. In particular, the expression for the non-equilibrium free energy is derived as a function of a suitable control parameter. The control parameter determines the Hopf bifurcation point of the deterministic active system and describes the effective pumping of the oscillator. In analogy to the equilibrium free energy of the Landau theory, it is shown that the non-equilibrium free energy decays as a function of the control parameter. In doing so, a similarity between certain equilibrium and non-equilibrium phase transitions is pointed out. Data from an experiment on human rhythmic movements is presented. Estimates for pumping intensity as well as the thermodynamical variables are reported. It is shown that in the experiment the non-equilibrium free energy decayed when pumping intensity was increased, which is consistent with the theory. Moreover, pumping intensities close to zero could be observed at relatively slow intended rhythmic movements. In view of the Hopf bifurcation underlying the limit cycle oscillator model, this observation suggests that the intended limit cycle movements were actually more similar to trajectories of a randomly perturbed stable focus.
NASA Astrophysics Data System (ADS)
Knapp, Julia L. A.; Cirpka, Olaf A.
2017-06-01
The complexity of hyporheic flow paths requires reach-scale models of solute transport in streams that are flexible in their representation of the hyporheic passage. We use a model that couples advective-dispersive in-stream transport to hyporheic exchange with a shape-free distribution of hyporheic travel times. The model also accounts for two-site sorption and transformation of reactive solutes. The coefficients of the model are determined by fitting concurrent stream-tracer tests of conservative (fluorescein) and reactive (resazurin/resorufin) compounds. The flexibility of the shape-free models gives rise to multiple local minima of the objective function in parameter estimation, thus requiring global-search algorithms, whose use is hindered by the large number of parameter values to be estimated. We present a local-in-global optimization approach in which we use a Markov-chain Monte Carlo method as the global search to estimate a set of in-stream and hyporheic parameters; nested therein, we infer the shape-free distribution of hyporheic travel times by a local Gauss-Newton method. The overall approach is independent of the initial guess and provides the joint posterior distribution of all parameters. We apply the described local-in-global optimization method to recorded tracer breakthrough curves of three consecutive stream sections and infer section-wise hydraulic parameter distributions to analyze how hyporheic exchange processes differ between the stream sections.
Zahariev, Federico; De Silva, Nuwan; Gordon, Mark S; Windus, Theresa L; Dick-Perez, Marilu
2017-03-27
A newly created object-oriented program for automating the process of fitting molecular-mechanics parameters to ab initio data, termed ParFit, is presented. ParFit uses a hybrid of deterministic and stochastic genetic algorithms. It can simultaneously handle several molecular-mechanics parameters in multiple molecules and can also apply symmetric and antisymmetric constraints on the optimized parameters. The simultaneous handling of several molecules enhances the transferability of the fitted parameters. ParFit is written in Python, uses a rich set of standard and nonstandard Python libraries, and can be run in parallel on multicore computer systems. As an example, a series of phosphine oxides, important for metal-extraction chemistry, is parametrized using ParFit. ParFit is an open-source program available for free on GitHub (https://github.com/fzahari/ParFit).
A study of parameter identification
NASA Technical Reports Server (NTRS)
Herget, C. J.; Patterson, R. E., III
1978-01-01
A set of definitions for deterministic parameter identifiability is proposed. Deterministic parameter identifiability properties are presented based on four system characteristics: direct parameter recoverability, properties of the system transfer function, properties of output distinguishability, and uniqueness properties of a quadratic cost functional. Stochastic parameter identifiability is defined in terms of the existence of an estimation sequence for the unknown parameters which is consistent in probability. Stochastic parameter identifiability properties are presented based on the following characteristics: convergence properties of the maximum likelihood estimate, properties of the joint probability density functions of the observations, and properties of the information matrix.
Chaos in an imperfectly premixed model combustor.
Kabiraj, Lipika; Saurabh, Aditya; Karimi, Nader; Sailor, Anna; Mastorakos, Epaminondas; Dowling, Ann P; Paschereit, Christian O
2015-02-01
This article reports nonlinear bifurcations observed in a laboratory scale, turbulent combustor operating under imperfectly premixed mode with global equivalence ratio as the control parameter. The results indicate that the dynamics of thermoacoustic instability correspond to quasi-periodic bifurcation to low-dimensional, deterministic chaos, a route that is common to a variety of dissipative nonlinear systems. The results support the recent identification of bifurcation scenarios in a laminar premixed flame combustor (Kabiraj et al., Chaos: Interdiscip. J. Nonlinear Sci. 22, 023129 (2012)) and extend the observation to a practically relevant combustor configuration.
Pigache, Francois; Messine, Frédéric; Nogarede, Bertrand
2007-07-01
This paper deals with a deterministic and rational way to design piezoelectric transformers in radial mode. The proposed approach is based on the study of the inverse problem of design and on its reformulation as a mixed constrained global optimization problem. The methodology relies on the association of the analytical models for describing the corresponding optimization problem and on an exact global optimization software, named IBBA and developed by the second author to solve it. Numerical experiments are presented and compared in order to validate the proposed approach.
Aspen succession in the Intermountain West: A deterministic model
Dale L. Bartos; Frederick R. Ward; George S. Innis
1983-01-01
A deterministic model of succession in aspen forests was developed using existing data and intuition. The degree of uncertainty, which was determined by allowing the parameter values to vary at random within limits, was larger than desired. This report presents results of an analysis of model sensitivity to changes in parameter values. These results have indicated...
Optimum Parameters of a Tuned Liquid Column Damper in a Wind Turbine Subject to Stochastic Load
NASA Astrophysics Data System (ADS)
Alkmim, M. H.; de Morais, M. V. G.; Fabro, A. T.
2017-12-01
Parameter optimization for tuned liquid column dampers (TLCDs), a class of passive structural control devices, has previously been proposed in the literature for reducing vibration in wind turbines and several other applications. However, most of the available work treats the wind excitation as either a deterministic harmonic load or a random load with a white-noise spectrum. In this paper, a global direct-search optimization algorithm for reducing vibration with a TLCD is presented. The objective is to find optimized TLCD parameters under stochastic loads derived from different wind power spectral densities. A verification is made against the analytical solution for an undamped primary system under white-noise excitation, by comparison with a result from the literature. Finally, it is shown that different wind profiles can significantly affect the optimum TLCD parameters.
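A hedged sketch of the optimization pattern described above, under simplifying assumptions: a generic tuned absorber (standing in for the TLCD) attached to a 1-DOF primary structure, with the response variance under an invented wind-like PSD minimized by a global direct-search method:

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.optimize import differential_evolution

wn, zeta, mu = 1.0, 0.01, 0.02          # primary frequency, damping, mass ratio
w = np.linspace(0.01, 3.0, 2000)        # frequency grid
S = 1.0 / (1.0 + (w / 0.8) ** (5 / 3))  # stand-in wind-like load PSD

def variance(p):
    f, za = p                            # absorber tuning and damping ratios
    wa = f * wn
    ca = wa**2 + 2j * za * wa * w        # absorber stiffness + damping term
    A11 = -w**2 + 2j * zeta * wn * w + wn**2 + mu * ca
    A12, A21, A22 = -mu * ca, -ca, -w**2 + ca
    H = A22 / (A11 * A22 - A12 * A21)    # primary-coordinate receptance (Cramer)
    return trapezoid(np.abs(H) ** 2 * S, w)

res = differential_evolution(variance, bounds=[(0.8, 1.2), (0.01, 0.3)],
                             seed=5, maxiter=100)
print(res.x)   # optimized tuning and damping ratios
```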
NASA Technical Reports Server (NTRS)
Davis, M. W.
1984-01-01
A Real-Time Self-Adaptive (RTSA) active vibration controller was used as the framework for developing a computer program for a generic controller that can be used to alleviate helicopter vibration. Based upon on-line identification of system parameters, the generic controller minimizes vibration in the fuselage by closed-loop implementation of higher harmonic control in the main rotor system. The new generic controller incorporates a set of improved algorithms that give the capability to readily define many different configurations by selecting one of three controller types (deterministic, cautious, and dual), one of two linear system models (local and global), and one or more of several methods of applying limits on control inputs (external and/or internal limits on higher harmonic pitch amplitude and rate). A helicopter rotor simulation analysis was used to evaluate the algorithms associated with the alternative controller types as applied to the four-bladed H-34 rotor mounted on the NASA Ames Rotor Test Apparatus (RTA), which represents the fuselage. After proper tuning, all three controllers provide more effective vibration reduction and converge more quickly and smoothly with smaller control inputs than the initial RTSA controller (deterministic with external pitch-rate limiting). It is demonstrated that internal limiting of the control inputs significantly improves the overall performance of the deterministic controller.
A mixed SIR-SIS model to contain a virus spreading through networks with two degrees
NASA Astrophysics Data System (ADS)
Essouifi, Mohamed; Achahbar, Abdelfattah
Because the nodes and links of real networks are heterogeneous, we borrow the recently introduced idea of the reduced scale-free network to model the prevalence of computer viruses throughout the Internet. The purpose of this paper is to extend a previous deterministic model with two Susceptible-Infected-Susceptible (SIS) subchains into a mixed Susceptible-Infected-Recovered and Susceptible-Infected-Susceptible (SIR-SIS) model for containing virus spread over networks with two degrees, and to develop its stochastic counterpart. Because of the high protection and security measures applied to the hub class, we suggest treating it with an SIR epidemic model rather than an SIS one. The analytical study reveals that the proposed model admits a stable viral equilibrium, and it is shown numerically that the mean dynamic behavior of the stochastic model agrees with the deterministic one. Unlike the infection densities i2 and i, which both tend to a viral equilibrium under both approaches as in the previous study, i1 tends to the virus-free equilibrium. Furthermore, since a proportion of infectives recover, the global infection density i is minimized; the permanent presence of viruses in the network is therefore due to the lower-degree node class. Many suggestions are put forward for containing virus propagation and minimizing its damage.
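A hedged mean-field sketch of such a mixed system, assuming two degree classes coupled through the mean infective field: the hub class (degree k1) follows SIR while the low-degree class (degree k2) follows SIS. The coupling and parameter values are illustrative simplifications of the reduced scale-free structure:

```python
import numpy as np
from scipy.integrate import solve_ivp

k1, k2 = 20, 4                 # degrees of the hub and low-degree classes
n1, n2 = 0.1, 0.9              # class fractions (assumed)
lam, mu = 0.25, 0.5            # per-contact infection and recovery rates

def rhs(t, y):
    i1, r1, i2 = y
    s1, s2 = 1 - i1 - r1, 1 - i2
    # probability that a randomly chosen link points to an infective
    theta = (n1 * k1 * i1 + n2 * k2 * i2) / (n1 * k1 + n2 * k2)
    return [lam * k1 * s1 * theta - mu * i1,   # hub infectives (SIR)
            mu * i1,                           # hub recovered (immune)
            lam * k2 * s2 * theta - mu * i2]   # low-degree infectives (SIS)

sol = solve_ivp(rhs, (0.0, 300.0), [0.01, 0.0, 0.01])
print(sol.y[:, -1])   # i1 -> 0 (hub class recovers) while i2 can persist
```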
NASA Technical Reports Server (NTRS)
Lyle, Karen H.; Fasanella, Edwin L.; Melis, Matthew; Carney, Kelly; Gabrys, Jonathan
2004-01-01
The Space Shuttle Columbia Accident Investigation Board (CAIB) made several recommendations for improving the NASA Space Shuttle Program. An extensive experimental and analytical program has been developed to address two recommendations related to structural impact analysis. The objective of the present work is to demonstrate the application of probabilistic analysis to assess the effect of uncertainties on debris impacts on Space Shuttle Reinforced Carbon-Carbon (RCC) panels. The probabilistic analysis is used to identify the material modeling parameters controlling the uncertainty. A comparison of the finite element results with limited experimental data provided confidence that the simulations were adequately representing the global response of the material. Five input parameters were identified as significantly controlling the response.
Damage detection of structures identified with deterministic-stochastic models using seismic data.
Huang, Ming-Chih; Wang, Yen-Po; Chang, Ming-Lian
2014-01-01
A deterministic-stochastic subspace identification method is adopted and experimentally verified in this study to identify the equivalent single-input-multiple-output system parameters of the discrete-time state equation. The method of the damage locating vector (DLV) is then considered for damage detection. A series of shaking table tests using a five-storey steel frame has been conducted, considering both single and multiple damage conditions at various locations. In the system identification analysis, either full or partial observation conditions have been taken into account. It has been shown that the damaged storeys can be identified from the global responses of the structure to earthquakes, provided these are sufficiently observed. In addition to detecting damage with respect to the intact structure, identification of new or extended damage relative to the as-damaged counterpart has also been studied. This study gives further insight into the scheme in terms of effectiveness, robustness, and limitations for damage localization in frame systems.
Mayo, Christie; Shelley, Courtney; MacLachlan, N. James; Gardner, Ian; Hartley, David; Barker, Christopher
2016-01-01
The global distribution of bluetongue virus (BTV) has been changing recently, perhaps as a result of climate change. To evaluate the risk of BTV infection and transmission in a BTV-endemic region of California, sentinel dairy cows were evaluated for BTV infection, and populations of Culicoides vectors were collected at different sites using carbon dioxide-baited traps. A deterministic model was developed to quantify risk and guide future mitigation strategies to reduce BTV infection in California dairy cattle. The greatest risk of BTV transmission was predicted within the warm Central Valley of California, which contains the highest density of dairy cattle in the United States. Temperature and parameters associated with Culicoides vectors (transmission probabilities, carrying capacity, and survivorship) had the greatest effect on BTV's basic reproduction number, R0. Based on these analyses, optimal control strategies for reducing BTV infection risk in dairy cattle will rely heavily upon early efforts to reduce vector abundance during the months prior to peak transmission. PMID:27812161
Dynamics of a stochastic multi-strain SIS epidemic model driven by Lévy noise
NASA Astrophysics Data System (ADS)
Chen, Can; Kang, Yanmei
2017-01-01
A stochastic multi-strain SIS epidemic model is formulated by introducing Lévy noise into the disease transmission rate of each strain. First, we prove that the stochastic model admits a unique global positive solution, and, by the comparison theorem, we show that the solution remains within a positively invariant set almost surely. Next we investigate stochastic stability of the disease-free equilibrium, including stability in probability and pth moment asymptotic stability. Then sufficient conditions for persistence in the mean of the disease are established. Finally, based on an Euler scheme for Lévy-driven stochastic differential equations, numerical simulations for a stochastic two-strain model are carried out to verify the theoretical results. Moreover, numerical comparison results of the stochastic two-strain model and the deterministic version are also given. Lévy noise can cause the two strains to become extinct almost surely, even though there is a dominant strain that persists in the deterministic model. It can be concluded that the introduction of Lévy noise reduces the disease extinction threshold, which indicates that Lévy noise may suppress the disease outbreak.
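As a concrete illustration of the Euler scheme mentioned above, here is a minimal sketch for a single-strain SIS equation with both Gaussian and jump perturbations of the transmission term; the two-strain system of the paper is analogous. All coefficients are assumed for illustration only.

```python
# Euler(-Maruyama) step for a one-strain SIS SDE with jump noise in the
# transmission rate; a simplified stand-in for the paper's two-strain model.
import numpy as np

rng = np.random.default_rng(0)
beta, gamma = 0.5, 0.2            # assumed transmission and recovery rates
sigma = 0.1                       # Gaussian (Brownian) noise intensity
lam, jump_scale = 0.5, 0.2        # jump arrival rate and jump-size scale
T, dt = 200.0, 0.01

i = 0.1                           # initial infected fraction
for _ in range(int(T / dt)):
    dW = rng.normal(0.0, np.sqrt(dt))
    dN = rng.poisson(lam * dt)                     # jumps arriving in [t, t+dt)
    jump = jump_scale * rng.normal(size=dN).sum() if dN else 0.0
    drift = beta * i * (1 - i) - gamma * i
    i += drift * dt + sigma * i * (1 - i) * dW + i * (1 - i) * jump
    i = min(max(i, 0.0), 1.0)                      # stay in the invariant set [0, 1]

print(f"infected fraction at T={T}: {i:.4f}")
```

The crude clipping to [0, 1] enforces numerically the positively invariant set that the paper establishes analytically via the comparison theorem.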
The fully actuated traffic control problem solved by global optimization and complementarity
NASA Astrophysics Data System (ADS)
Ribeiro, Isabel M.; de Lurdes de Oliveira Simões, Maria
2016-02-01
Global optimization and complementarity are used to determine the signal timing for fully actuated traffic control, regarding effective green and red times on each cycle. The average values of these parameters can be used to estimate the control delay of vehicles. In this article, a two-phase queuing system for a signalized intersection is outlined, based on the principle of minimization of the total waiting time for the vehicles. The underlying model results in a linear program with linear complementarity constraints, solved by a sequential complementarity algorithm. Departure rates of vehicles during green and yellow periods were treated as deterministic, while arrival rates of vehicles were assumed to follow a Poisson distribution. Several traffic scenarios were created and solved. The numerical results reveal that it is possible to use global optimization and complementarity over a reasonable number of cycles and to efficiently determine effective green and red times for a signalized intersection.
Dynamics of stochastic SEIS epidemic model with varying population size
NASA Astrophysics Data System (ADS)
Liu, Jiamin; Wei, Fengying
2016-12-01
In this paper we introduce stochasticity into a deterministic susceptible-exposed-infected model with varying population size; infected individuals return to the susceptible compartment after recovery. We show that the stochastic model possesses a unique global solution by constructing a suitable Lyapunov function and using the generalized Itô formula. The densities of the exposed and the infected tend to extinction when certain conditions hold. Moreover, conditions for the persistence of the global solution are derived when the parameters satisfy some simple criteria. The stochastic model admits a stationary distribution around the endemic equilibrium, which means that the disease will prevail. To check the validity of the main results, numerical simulations are presented at the end of this contribution.
Equilibrium reconstruction in an iron core tokamak using a deterministic magnetisation model
NASA Astrophysics Data System (ADS)
Appel, L. C.; Lupelli, I.; JET Contributors
2018-02-01
In many tokamaks ferromagnetic material, usually referred to as an iron core, is present in order to improve the magnetic coupling between the solenoid and the plasma. The presence of the iron core in proximity to the plasma changes the magnetic topology, with consequent effects on the magnetic field structure and the plasma boundary. This paper considers the problem of obtaining the free-boundary plasma equilibrium solution in the presence of ferromagnetic material based on measured constraints. The current approach employs a model described by O'Brien et al. (1992) in which the magnetisation currents at the iron-air boundary are represented by a set of free parameters and appropriate boundary conditions are enforced via a set of quasi-measurements on the material boundary. This can lead to the possibility of overfitting the data and hiding underlying issues with the measured signals. Although the model typically achieves good fits to measured magnetic signals, there are significant discrepancies in the inferred magnetic topology compared with other plasma diagnostic measurements that are independent of the magnetic field. An alternative approach for equilibrium reconstruction in iron-core tokamaks, termed the deterministic magnetisation model, is developed and implemented in EFIT++. The iron is represented by a boundary current, with the gradients in the magnetisation dipole state generating macroscopic internal magnetisation currents. A model for the boundary magnetisation currents at the iron-air interface is developed using B-splines, enabling continuity to arbitrary order; internal magnetisation currents are allocated to triangulated regions within the iron, and a method to enable adaptive refinement is implemented. The deterministic model has been validated by comparing it with a synthetic 2-D electromagnetic model of JET. It is established that the maximum field discrepancy is less than 1.5 mT throughout the vacuum region enclosing the plasma. Simulated magnetic probe signals are accurate to within 1% for signals with absolute magnitude greater than 100 mT; in all other cases agreement is to within 1 mT. Neglecting the internal magnetisation currents increases the maximum discrepancy in the vacuum region to more than 20 mT, resulting in errors of 5%-10% in the simulated probe signals. The fact that the previous model neglects the internal magnetisation currents (and also has additional free parameters when fitting the measured data) makes it unsuitable for analysing data in the absence of plasma current. The discrepancy of the poloidal magnetic flux within the vacuum vessel is within 0.1 Wb. Finally, the deterministic model is applied to an equilibrium force-balance solution of a JET discharge using experimental data. It is shown that the discrepancies of the outboard separatrix position and the outer strike-point position inferred from Thomson scattering and infrared camera data are much improved over the routine equilibrium reconstruction, whereas the discrepancy of the inner strike-point position is similar.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Jinsong; Kemna, Andreas; Hubbard, Susan S.
2008-05-15
We develop a Bayesian model to invert spectral induced polarization (SIP) data for Cole-Cole parameters using Markov chain Monte Carlo (MCMC) sampling methods. We compare the performance of the MCMC based stochastic method with an iterative Gauss-Newton based deterministic method for Cole-Cole parameter estimation through inversion of synthetic and laboratory SIP data. The Gauss-Newton based method can provide an optimal solution for given objective functions under constraints, but the obtained optimal solution generally depends on the choice of initial values and the estimated uncertainty information is often inaccurate or insufficient. In contrast, the MCMC based inversion method provides extensive global information on unknown parameters, such as the marginal probability distribution functions, from which we can obtain better estimates and tighter uncertainty bounds of the parameters than with the deterministic method. Additionally, the results obtained with the MCMC method are independent of the choice of initial values. Because the MCMC based method does not explicitly offer a single optimal solution for given objective functions, the deterministic and stochastic methods can complement each other. For example, the stochastic method can first be used to obtain the means of the unknown parameters by starting from an arbitrary set of initial values, and the deterministic method can then be initiated using the means as starting values to obtain the optimal estimates of the Cole-Cole parameters.
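A minimal sketch of the stochastic half of this comparison, assuming the Pelton form of the Cole-Cole complex resistivity and a simple Metropolis random walk; the paper's actual sampler, priors, and data are richer than this.

```python
# Metropolis MCMC sketch for Cole-Cole (Pelton) parameters from synthetic SIP data.
import numpy as np

rng = np.random.default_rng(1)
omega = 2 * np.pi * np.logspace(-2, 3, 30)          # angular frequencies

def cole_cole(rho0, m, tau, c):
    return rho0 * (1 - m * (1 - 1 / (1 + (1j * omega * tau) ** c)))

true = np.array([100.0, 0.3, 0.01, 0.5])            # rho0, m, tau, c (assumed)
noise = 0.5
data = cole_cole(*true) + noise * (rng.normal(size=30) + 1j * rng.normal(size=30))

def log_like(p):
    rho0, m, tau, c = p
    if rho0 <= 0 or not 0 < m < 1 or tau <= 0 or not 0 < c <= 1:
        return -np.inf                               # flat prior with hard bounds
    return -0.5 * np.sum(np.abs(data - cole_cole(*p)) ** 2) / noise ** 2

step = np.array([1.0, 0.01, 0.001, 0.01])            # per-parameter proposal widths
p = np.array([80.0, 0.2, 0.02, 0.6])                 # arbitrary starting point
ll, chain = log_like(p), []
for _ in range(20000):
    q = p + step * rng.normal(size=4)                # symmetric random-walk proposal
    lq = log_like(q)
    if np.log(rng.uniform()) < lq - ll:
        p, ll = q, lq
    chain.append(p.copy())

post = np.array(chain[5000:])                        # discard burn-in
print("posterior means:", np.round(post.mean(axis=0), 4))
print("posterior stds: ", np.round(post.std(axis=0), 4))
```

The marginal spreads of the chain are exactly the "extensive global information" referred to above: they bound the parameters without committing to a single Gauss-Newton style point estimate.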
Inverse kinematic problem for a random gradient medium in geometric optics approximation
NASA Astrophysics Data System (ADS)
Petersen, N. V.
1990-03-01
Scattering at random inhomogeneities in a gradient medium results in systematic deviations of the rays and travel times of refracted body waves from those corresponding to the deterministic velocity component. The character of the difference depends on the parameters of the deterministic and random velocity components. However, at great distances from the source, independently of the velocity parameters (weakly or strongly inhomogeneous medium), the most probable depth of the ray turning point is smaller than that corresponding to the deterministic velocity component, the most probable travel times also being lower. The relative uncertainty in the deterministic velocity component, derived from the mean travel times using methods developed for laterally homogeneous media (for instance, the Herglotz-Wiechert method), is systematic in character, but does not exceed the contrast of the velocity inhomogeneities in magnitude. The gradient of the deterministic velocity component has a significant effect on the travel-time fluctuations. The variance at great distances from the source is mainly controlled by shallow inhomogeneities. The travel-time fluctuations are studied only for weakly inhomogeneous media.
2016-06-24
degrees of freedom per qubit [6], so the decomposition must have at least N³ free parameters. During the sequence at least N − 1 of the qubits must eventually... possible must include at least N global operations, for a total of N³ − 1 free parameters. One additional degree of freedom remains, so we must add a last gate... adjusted must only be specified up to a collective Z rotation afterwards, since this rotation can be absorbed into the phase. This removes one free parameter.
The value of believing in free will: encouraging a belief in determinism increases cheating.
Vohs, Kathleen D; Schooler, Jonathan W
2008-01-01
Does moral behavior draw on a belief in free will? Two experiments examined whether inducing participants to believe that human behavior is predetermined would encourage cheating. In Experiment 1, participants read either text that encouraged a belief in determinism (i.e., that portrayed behavior as the consequence of environmental and genetic factors) or neutral text. Exposure to the deterministic message increased cheating on a task in which participants could passively allow a flawed computer program to reveal answers to mathematical problems that they had been instructed to solve themselves. Moreover, increased cheating behavior was mediated by decreased belief in free will. In Experiment 2, participants who read deterministic statements cheated by overpaying themselves for performance on a cognitive task; participants who read statements endorsing free will did not. These findings suggest that the debate over free will has societal, as well as scientific and theoretical, implications.
Multigroup SIR epidemic model with stochastic perturbation
NASA Astrophysics Data System (ADS)
Ji, Chunyan; Jiang, Daqing; Shi, Ningzhong
2011-05-01
In this paper, we discuss a multigroup SIR model with stochastic perturbation. We deduce the global asymptotic stability of the disease-free equilibrium when R0 ≤ 1, which means the disease will die out. On the other hand, when R0 > 1, we show that the disease will prevail, which is measured through the difference between the solution and the endemic equilibrium of the deterministic model in time average. Furthermore, we prove the system is persistent in the mean, which also reflects that the disease will prevail. The key to our analysis is choosing appropriate Lyapunov functions. Finally, we illustrate the dynamic behavior of the model with n = 2 and its approximations via a range of numerical experiments.
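For readers unfamiliar with the threshold quantity, here is a minimal sketch of how R0 is obtained for a two-group SIR model via the next-generation matrix; the transmission matrix and rates are illustrative, not those of the paper.

```python
# Next-generation-matrix computation of R0 for a two-group SIR model.
import numpy as np

beta = np.array([[0.3, 0.1],        # beta[k, j]: transmission to group k from group j
                 [0.2, 0.4]])
gamma = np.array([0.25, 0.25])      # recovery rates
mu = np.array([0.01, 0.01])         # demographic turnover
S0 = np.array([1.0, 1.0])           # susceptible fractions at the disease-free state

F = beta * S0[:, None]              # rate of new infections
V = np.diag(gamma + mu)             # rate of transfer out of infected compartments
K = F @ np.linalg.inv(V)            # next-generation matrix
R0 = max(abs(np.linalg.eigvals(K)))
print(f"R0 = {R0:.3f}:", "disease prevails" if R0 > 1 else "disease dies out")
```

With these illustrative numbers the spectral radius of K exceeds one, the regime in which the abstract's persistence results apply.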
NASA Astrophysics Data System (ADS)
Cao, Xiangyu; Le Doussal, Pierre; Rosso, Alberto; Santachiara, Raoul
2018-04-01
We study transitions in log-correlated random energy models (logREMs) that are related to the violation of a Seiberg bound in Liouville field theory (LFT): the binding transition and the termination point transition (a.k.a. pre-freezing). By means of the LFT-logREM mapping, replica symmetry breaking and traveling-wave equation techniques, we unify both transitions in a two-parameter diagram, which describes the free-energy large deviations of logREMs with a deterministic background log potential, or equivalently, the joint moments of the free energy and Gibbs measure in logREMs without background potential. Under the LFT-logREM mapping, the transitions correspond to the competition of discrete and continuous terms in a four-point correlation function. Our results provide a statistical interpretation of a peculiar nonlocality of the operator product expansion in LFT. The results are rederived by a traveling-wave equation calculation, which shows that the features of LFT responsible for the transitions are reproduced in a simple model of diffusion with absorption. We also examine the problem by a replica symmetry breaking analysis, which complements the previous methods and reveals a rich large-deviation structure of the free energy of logREMs with a deterministic background log potential. Many results are verified in the integrable circular logREM, by a replica-Coulomb gas integral approach. The related problem of the common length (overlap) distribution is also considered. We provide a traveling-wave equation derivation of the LFT predictions announced in a previous work.
Deterministic Computer-Controlled Polishing Process for High-Energy X-Ray Optics
NASA Technical Reports Server (NTRS)
Khan, Gufran S.; Gubarev, Mikhail; Speegle, Chet; Ramsey, Brian
2010-01-01
A deterministic computer-controlled polishing process for large X-ray mirror mandrels is presented. Using the tool's influence function and the material removal rate extracted from polishing experiments, design considerations for polishing laps and optimized operating parameters are discussed.
Magnified gradient function with deterministic weight modification in adaptive learning.
Ng, Sin-Chun; Cheung, Chi-Chung; Leung, Shu-Hung
2004-11-01
This paper presents two novel approaches, backpropagation (BP) with magnified gradient function (MGFPROP) and deterministic weight modification (DWM), to speed up the convergence rate and improve the global convergence capability of the standard BP learning algorithm. The purpose of MGFPROP is to increase the convergence rate by magnifying the gradient function of the activation function, while the main objective of DWM is to reduce the system error by changing the weights of a multilayered feedforward neural network in a deterministic way. Simulation results show that the performance of the above two approaches is better than BP and other modified BP algorithms for a number of learning problems. Moreover, the integration of the above two approaches, forming a new algorithm called MDPROP, can further improve the performance of MGFPROP and DWM. From our simulation results, the MDPROP algorithm always outperforms BP and other modified BP algorithms in terms of convergence rate and global convergence capability.
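The paper's exact magnification rule is not reproduced here; one plausible reading is to raise the sigmoid derivative term o(1−o) to the power 1/S with S ≥ 1, so that vanishing gradients of saturated units are amplified. The XOR sketch below uses that assumption.

```python
# Backprop with a magnified gradient function on XOR: the sigmoid derivative
# o*(1-o) is replaced by (o*(1-o))**(1/S), S >= 1 (an assumed form of MGFPROP).
import numpy as np

rng = np.random.default_rng(2)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)
W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros(4)
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros(1)
sig = lambda z: 1 / (1 + np.exp(-z))
S, lr = 2.0, 0.5                         # magnification factor and learning rate

for _ in range(5000):
    h = sig(X @ W1 + b1)                 # forward pass
    o = sig(h @ W2 + b2)
    d2 = (o - y) * (o * (1 - o)) ** (1 / S)          # magnified output delta
    d1 = (d2 @ W2.T) * (h * (1 - h)) ** (1 / S)      # magnified hidden delta
    W2 -= lr * h.T @ d2; b2 -= lr * d2.sum(axis=0)
    W1 -= lr * X.T @ d1; b1 -= lr * d1.sum(axis=0)

print(np.round(o.ravel(), 3))            # should approach [0, 1, 1, 0]
```

DWM, the deterministic weight step, is omitted here; it would periodically replace the gradient step with a direct error-reducing weight assignment.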
An ITK framework for deterministic global optimization for medical image registration
NASA Astrophysics Data System (ADS)
Dru, Florence; Wachowiak, Mark P.; Peters, Terry M.
2006-03-01
Similarity metric optimization is an essential step in intensity-based rigid and nonrigid medical image registration. For clinical applications, such as image guidance of minimally invasive procedures, registration accuracy and efficiency are prime considerations. In addition, clinical utility is enhanced when registration is integrated into image analysis and visualization frameworks, such as the popular Insight Toolkit (ITK). ITK is an open source software environment increasingly used to aid the development, testing, and integration of new imaging algorithms. In this paper, we present a new ITK-based implementation of the DIRECT (Dividing Rectangles) deterministic global optimization algorithm for medical image registration. Previously, it has been shown that DIRECT improves the capture range and accuracy for rigid registration. Our ITK class also contains enhancements over the original DIRECT algorithm by improving stopping criteria, adaptively adjusting a locality parameter, and by incorporating Powell's method for local refinement. 3D-3D registration experiments with ground-truth brain volumes and clinical cardiac volumes show that combining DIRECT with Powell's method improves registration accuracy over Powell's method used alone, is less sensitive to initial misorientation errors, and, with the new stopping criteria, facilitates adequate exploration of the search space without expending expensive iterations on non-improving function evaluations. Finally, in this framework, a new parallel implementation for computing mutual information is presented, resulting in near-linear speedup with two processors.
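Recent SciPy releases (1.8 and later) ship an implementation of DIRECT, so the global-then-local strategy described above can be sketched outside ITK as follows; the objective is a toy stand-in for a negated similarity metric.

```python
# DIRECT global search followed by Powell local refinement (SciPy, not ITK).
import numpy as np
from scipy.optimize import direct, minimize

def neg_similarity(p):
    """Toy stand-in for a negated similarity metric with local optima."""
    x, y = p
    return (x - 1.2) ** 2 + (y + 0.7) ** 2 + 0.3 * np.sin(8 * x) * np.cos(8 * y)

bounds = [(-4.0, 4.0), (-4.0, 4.0)]
coarse = direct(neg_similarity, bounds, maxfun=2000)        # global exploration
fine = minimize(neg_similarity, coarse.x, method="Powell")  # local refinement
print("DIRECT:", np.round(coarse.x, 3), "-> Powell:", np.round(fine.x, 3))
```

The division of labor mirrors the paper: DIRECT supplies a capture range robust to initial misorientation, and Powell's method polishes the estimate cheaply.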
ERIC Educational Resources Information Center
Raveendran, Aswathy; Chunawala, Sugra
2015-01-01
Several educators have emphasized that students need to understand science as a human endeavor that is not value free. In the exploratory study reported here, we investigated how doctoral students of biology understand the intersection of values and science in the context of genetic determinism. Deterministic research claims have been critiqued…
Deterministic quantum dense coding networks
NASA Astrophysics Data System (ADS)
Roy, Saptarshi; Chanda, Titas; Das, Tamoghna; Sen(De), Aditi; Sen, Ujjwal
2018-07-01
We consider the scenario of deterministic classical information transmission between multiple senders and a single receiver, when they a priori share a multipartite quantum state - an attempt towards building a deterministic dense coding network. Specifically, we prove that in the case of two or three senders and a single receiver, generalized Greenberger-Horne-Zeilinger (gGHZ) states are not beneficial for sending classical information deterministically beyond the classical limit, except when the shared state is the GHZ state itself. On the other hand, three- and four-qubit generalized W (gW) states with specific parameters as well as the four-qubit Dicke states can provide a quantum advantage of sending the information in deterministic dense coding. Interestingly however, numerical simulations in the three-qubit scenario reveal that the percentage of states from the GHZ-class that are deterministic dense codeable is higher than that of states from the W-class.
NASA Technical Reports Server (NTRS)
Zhou, Daniel K.; Larar, Allen M.; Liu, Xu; Smith, William L.; Schluessel, Peter
2009-01-01
Surface and atmospheric thermodynamic parameters retrieved with advanced ultraspectral remote sensors aboard Earth observing satellites are critical to general atmospheric and Earth science research, climate monitoring, and weather prediction. Ultraspectral resolution infrared radiance obtained from nadir observations provide atmospheric, surface, and cloud information. Presented here is the global surface IR emissivity retrieved from Infrared Atmospheric Sounding Interferometer (IASI) measurements under "clear-sky" conditions. Fast radiative transfer models, applied to the cloud-free (or clouded) atmosphere, are used for atmospheric profile and surface parameter (or cloud parameter) retrieval. The inversion scheme, dealing with cloudy as well as cloud-free radiances observed with ultraspectral infrared sounders, has been developed to simultaneously retrieve atmospheric thermodynamic and surface (or cloud microphysical) parameters. Rapidly produced surface emissivity is initially evaluated through quality control checks on the retrievals of other impacted atmospheric and surface parameters. Surface emissivity and surface skin temperature from the current and future operational satellites can and will reveal critical information on the Earth's ecosystem and land surface type properties, which can be utilized as part of long-term monitoring for the Earth's environment and global climate change.
A deterministic global optimization using smooth diagonal auxiliary functions
NASA Astrophysics Data System (ADS)
Sergeyev, Yaroslav D.; Kvasov, Dmitri E.
2015-04-01
In many practical decision-making problems it happens that functions involved in the optimization process are black-box with unknown analytical representations and hard to evaluate. In this paper, a global optimization problem is considered where both the goal function f(x) and its gradient f′(x) are black-box functions. It is supposed that f′(x) satisfies the Lipschitz condition over the search hyperinterval with an unknown Lipschitz constant K. A new deterministic 'Divide-the-Best' algorithm based on efficient diagonal partitions and smooth auxiliary functions is proposed in its basic version, its convergence conditions are studied and numerical experiments executed on eight hundred test functions are presented.
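The diagonal, derivative-aware scheme of the paper is involved; as a simpler illustration of the same Lipschitz idea, here is a one-dimensional Piyavskii-Shubert sketch that uses a bound K on the function (not the gradient) to build the sawtooth auxiliary function.

```python
# 1D Lipschitz global minimization (Piyavskii-Shubert): at each step, sample
# the minimizer of the piecewise-linear lower bound built from the constant K.
import numpy as np

f = lambda x: np.sin(3 * x) + 0.5 * x     # black-box test function
a, b, K = -3.0, 3.0, 3.5                  # interval and assumed Lipschitz bound

pts = [(a, f(a)), (b, f(b))]
for _ in range(60):
    pts.sort()
    best = None
    for (x1, f1), (x2, f2) in zip(pts, pts[1:]):
        xm = 0.5 * (x1 + x2) + (f1 - f2) / (2 * K)   # argmin of the bound
        lb = 0.5 * (f1 + f2) - 0.5 * K * (x2 - x1)   # bound value there
        if best is None or lb < best[0]:
            best = (lb, xm)
    pts.append((best[1], f(best[1])))

fbest, xbest = min((fx, x) for x, fx in pts)
print(f"approximate global minimum f({xbest:.4f}) = {fbest:.4f}")
```

The 'Divide-the-Best' algorithm refines this idea with diagonal partitions of hyperintervals and smooth auxiliary functions built from f′, which tightens the bounds when the gradient, rather than the function, is Lipschitz.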
Soil pH mediates the balance between stochastic and deterministic assembly of bacteria
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tripathi, Binu M.; Stegen, James C.; Kim, Mincheol
Little is known about the factors affecting the relative influence of stochastic and deterministic processes that govern the assembly of microbial communities in successional soils. Here, we conducted a meta-analysis of bacterial communities using six different successional soil data sets, scattered across different regions, with different pH conditions in early and late successional soils. We found that soil pH was the best predictor of bacterial community assembly and the relative importance of stochastic and deterministic processes along successional soils. Extreme acidic or alkaline pH conditions lead to the assembly of phylogenetically more clustered bacterial communities through deterministic processes, whereas pH conditions close to neutral lead to phylogenetically less clustered bacterial communities with more stochasticity. We suggest that the influence of pH, rather than successional age, is the main driving force in producing trends in phylogenetic assembly of bacteria, and that pH also influences the relative balance of stochastic and deterministic processes along successional soils. Given that pH had a much stronger association with community assembly than did successional age, we evaluated whether the inferred influence of pH was maintained when studying globally-distributed samples collected without regard for successional age. This dataset confirmed the strong influence of pH, suggesting that the influence of soil pH on community assembly processes occurs globally. Extreme pH conditions likely exert more stringent limits on survival and fitness, imposing strong selective pressures through ecological and evolutionary time. Taken together, these findings suggest that the degree to which stochastic vs. deterministic processes shape soil bacterial community assembly is a consequence of soil pH rather than successional age.
Slosh-Free Positioning of Containers with Liquids and Flexible Conveyor Belt
NASA Astrophysics Data System (ADS)
Hubinský, Peter; Pospiech, Thomas
2010-03-01
This paper describes a method for slosh-free moving of containers with a liquid in which the conveyor belt is flexible. It shows, by means of experimental results, that a container filled with a liquid can be regarded as a damped pendulum. Upon parameter identification of the two single-mode subsystems, theoretical modelling of the complete system is described. With respect to industrial application, feedforward control is used to transfer the container in the horizontal direction without sloshing. The applied method requires deterministic and hard real-time communication between the PLC and the servo amplifier, which is realized here with Ethernet POWERLINK. The principal structure, calculations, time duration and robustness of the basic types of input shaper are described and compared with each other. Additionally, the positioning results of the two-mode system are presented. Finally, a possible software implementation of the principle is presented.
Gödel, Tarski, Turing and the Conundrum of Free Will
NASA Astrophysics Data System (ADS)
Nayakar, Chetan S. Mandayam; Srikanth, R.
2014-07-01
The problem of defining and locating free will (FW) in physics is studied. On the basis of logical paradoxes, we argue that FW has a metatheoretic character, like the concept of truth in Tarski's undefinability theorem. Free will exists relative to a base theory if there is freedom to deviate from the deterministic or indeterministic dynamics in the theory, with the deviations caused by parameters (representing will) in the meta-theory. By contrast, determinism and indeterminism do not require meta-theoretic considerations in their formalization, making FW a fundamentally new causal primitive. FW exists relative to the meta-theory if there is freedom for deviation, due to higher-order causes. Absolute free will, which corresponds to our intuitive introspective notion of free will, exists if this meta-theoretic hierarchy is infinite. We argue that this hierarchy corresponds to higher levels of uncomputability. In other words, at any finitely high order in the hierarchy, there are uncomputable deviations from the law at that order. Applied to the human condition, the hierarchy corresponds to deeper levels of the subconscious or unconscious mind. Possible ramifications of our model for physics, neuroscience and artificial intelligence (AI) are briefly considered.
NASA Astrophysics Data System (ADS)
Ramos, José A.; Mercère, Guillaume
2016-12-01
In this paper, we present an algorithm for identifying two-dimensional (2D) causal, recursive and separable-in-denominator (CRSD) state-space models in the Roesser form with deterministic-stochastic inputs. The algorithm implements the N4SID, PO-MOESP and CCA methods, which are well known in the literature on 1D system identification, but here we do so for the 2D CRSD Roesser model. The algorithm solves the 2D system identification problem by maintaining the constraint structure imposed by the problem (i.e. Toeplitz and Hankel) and computes the horizontal and vertical system orders, system parameter matrices and covariance matrices of a 2D CRSD Roesser model. From a computational point of view, the algorithm has been presented in a unified framework, where the user can select which of the three methods to use. Furthermore, the identification task is divided into three main parts: (1) computing the deterministic horizontal model parameters, (2) computing the deterministic vertical model parameters and (3) computing the stochastic components. Specific attention has been paid to the computation of a stabilised Kalman gain matrix and a positive real solution when required. The efficiency and robustness of the unified algorithm have been demonstrated via a thorough simulation example.
NASA Astrophysics Data System (ADS)
Lye, Ribin; Tan, James Peng Lung; Cheong, Siew Ann
2012-11-01
We describe a bottom-up framework, based on the identification of appropriate order parameters and determination of phase diagrams, for understanding progressively refined agent-based models and simulations of financial markets. We illustrate this framework by starting with a deterministic toy model, whereby N independent traders buy and sell M stocks through an order book that acts as a clearing house. The price of a stock increases whenever it is bought and decreases whenever it is sold. Price changes are updated by the order book before the next transaction takes place. In this deterministic model, all traders based their buy decisions on a call utility function, and all their sell decisions on a put utility function. We then make the agent-based model more realistic, by either having a fraction fb of traders buy a random stock on offer, or a fraction fs of traders sell a random stock in their portfolio. Based on our simulations, we find that it is possible to identify useful order parameters from the steady-state price distributions of all three models. Using these order parameters as a guide, we find three phases: (i) the dead market; (ii) the boom market; and (iii) the jammed market in the phase diagram of the deterministic model. Comparing the phase diagrams of the stochastic models against that of the deterministic model, we realize that the primary effect of stochasticity is to eliminate the dead market phase.
NASA Astrophysics Data System (ADS)
Ye, Liming; Yang, Guixia; Van Ranst, Eric; Tang, Huajun
2013-03-01
A generalized, structural, time series modeling framework was developed to analyze the monthly records of absolute surface temperature, one of the most important environmental parameters, using a deterministic-stochastic combined (DSC) approach. Although the development of the framework was based on the characterization of the variation patterns of a global dataset, the methodology could be applied to any monthly absolute temperature record. Deterministic processes were used to characterize the variation patterns of the global trend and the cyclic oscillations of the temperature signal, involving polynomial functions and the Fourier method, respectively, while stochastic processes were employed to account for any remaining patterns in the temperature signal, involving seasonal autoregressive integrated moving average (SARIMA) models. A prediction of the monthly global surface temperature during the second decade of the 21st century using the DSC model shows that the global temperature will likely continue to rise at twice the average rate of the past 150 years. The evaluation of prediction accuracy shows that DSC models perform systematically well against selected models of other authors, suggesting that DSC models, when coupled with other eco-environmental models, can be used as a supplemental tool for short-term (~10-year) environmental planning and decision making.
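A minimal sketch of the DSC idea on synthetic monthly data: a polynomial trend plus one annual Fourier harmonic forms the deterministic part, and a SARIMA model (via statsmodels) absorbs the stochastic remainder. Orders and coefficients are illustrative, not those selected in the paper.

```python
# Deterministic (trend + Fourier) + stochastic (SARIMA) decomposition sketch.
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(3)
n = 360                                           # 30 years of monthly values
t = np.arange(n)
y = 14 + 0.002 * t + 3 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.5, n)

def design(tt):
    """Quadratic trend plus the first annual Fourier harmonic."""
    return np.column_stack([np.ones_like(tt, dtype=float), tt, tt**2,
                            np.sin(2 * np.pi * tt / 12),
                            np.cos(2 * np.pi * tt / 12)])

coef, *_ = np.linalg.lstsq(design(t), y, rcond=None)   # deterministic fit
resid = y - design(t) @ coef

fit = SARIMAX(resid, order=(1, 0, 1), seasonal_order=(1, 0, 1, 12)).fit(disp=False)
t_new = np.arange(n, n + 12)
pred = design(t_new) @ coef + fit.forecast(steps=12)   # one-year projection
print(np.round(pred, 2))
```

The forecast simply adds the extrapolated deterministic components to the SARIMA forecast of the residual, which is the essence of the combined approach.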
NASA Astrophysics Data System (ADS)
Xu, Ruirui; Ma, Zhongyu; Muether, Herbert; van Dalen, E. N. E.; Liu, Tinjin; Zhang, Yue; Zhang, Zhi; Tian, Yuan
2017-09-01
A relativistic microscopic optical model potential, named CTOM, for nucleon-nucleus scattering is investigated in the framework of the Dirac-Brueckner-Hartree-Fock approach. The microscopic character of CTOM is guaranteed by rigorously adopting the isospin-dependent DBHF calculation within the subtracted T-matrix scheme. In order to verify its predictive power, a global study of n, p + A scattering is carried out. The predicted scattering observables coincide with experimental data with good accuracy over a broad range of targets and a large region of energies, with only two free quantities, namely the free-range factor t in the applied improved local density approximation and minor adjustments of the scalar and vector potentials in the low-density region. In addition, to estimate the uncertainty of the theoretical results, the deterministic simple least-squares approach is preliminarily employed to derive the covariance of the predicted angular distributions, which is also briefly described in this paper.
A model for HIV/AIDS pandemic with optimal control
NASA Astrophysics Data System (ADS)
Sule, Amiru; Abdullah, Farah Aini
2015-05-01
Human immunodeficiency virus and acquired immune deficiency syndrome (HIV/AIDS) is pandemic. It has affected nearly 60 million people since the detection of the disease in 1981. In this paper, a basic deterministic HIV/AIDS model with a mass-action incidence function is developed. Stability analysis is carried out, and the disease-free equilibrium of the basic model is found to be locally asymptotically stable whenever the threshold parameter (R0) is less than one, and unstable otherwise. The model is extended by introducing two optimal control strategies, namely CD4 counts and treatment for the infective, using optimal control theory. Numerical simulation is carried out in order to illustrate the analytic results.
A probabilistic approach for the estimation of earthquake source parameters from spectral inversion
NASA Astrophysics Data System (ADS)
Supino, M.; Festa, G.; Zollo, A.
2017-12-01
The amplitude spectrum of a seismic signal related to an earthquake source carries information about the size of the rupture and the moment, stress, and energy release. Furthermore, it can be used to characterize the Green's function of the medium crossed by the seismic waves. We describe the earthquake amplitude spectrum assuming a generalized Brune's (1970) source model and direct P- and S-waves propagating in a layered velocity model characterized by a frequency-independent Q attenuation factor. The observed displacement spectrum depends on three source parameters: the seismic moment (through the low-frequency spectral level), the corner frequency (a proxy for the fault length), and the high-frequency decay parameter. These parameters are strongly correlated with each other and with the quality factor Q; a rigorous estimation of the associated uncertainties and parameter resolution is thus needed to obtain reliable estimates. In this work, the uncertainties are characterized by adopting a probabilistic approach to parameter estimation. Assuming an L2-norm based misfit function, we perform a global exploration of the parameter space to find the absolute minimum of the cost function and then explore the joint a-posteriori probability density function around such a minimum to extract the correlation matrix of the parameters. The global exploration relies on building a Markov chain in the parameter space and on combining a deterministic minimization with a random exploration of the space (basin-hopping technique). The joint pdf is built from the misfit function using the maximum likelihood principle and assuming a Gaussian-like distribution of the parameters. It is then computed on a grid centered at the global minimum of the cost function. The numerical integration of the pdf finally provides the mean, variance, and correlation matrix associated with the set of best-fit parameters describing the model. Synthetic tests are performed to investigate the robustness of the method and uncertainty propagation from the data space to the parameter space. Finally, the method is applied to characterize the source parameters of the earthquakes occurring during the 2016-2017 Central Italy sequence, with the goal of investigating the source parameter scaling with magnitude.
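A minimal sketch of the basin-hopping stage, assuming a generalized Brune spectrum log Ω(f) = log Ω0 − log(1 + (f/fc)^γ) − π f t*, fitted to synthetic log-amplitudes with an L2 misfit; the paper's full workflow adds the pdf exploration around the minimum.

```python
# Basin-hopping fit of a generalized Brune spectrum (synthetic example).
import numpy as np
from scipy.optimize import basinhopping

rng = np.random.default_rng(4)
f = np.logspace(-1, 1.5, 80)                        # frequency band in Hz

def log_model(p):
    log_omega0, fc, gamma, tstar = p
    return log_omega0 - np.log(1 + (f / fc) ** gamma) - np.pi * f * tstar

true = np.array([2.0, 1.5, 2.0, 0.02])              # assumed source parameters
obs = log_model(true) + rng.normal(0, 0.05, f.size)

def misfit(p):
    if p[1] <= 0 or p[2] <= 0 or p[3] < 0:          # crude positivity bounds
        return 1e9
    return np.sum((obs - log_model(p)) ** 2)

res = basinhopping(misfit, x0=[1.0, 1.0, 2.0, 0.01], niter=200, seed=5)
print("best fit (log-level, fc, gamma, t*):", np.round(res.x, 3))
```

Around res.x one would then evaluate the likelihood exp(−misfit/2σ²) on a grid to obtain the means, variances, and correlation matrix described above.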
Hybrid Monte Carlo/deterministic methods for radiation shielding problems
NASA Astrophysics Data System (ADS)
Becker, Troy L.
For the past few decades, the most common type of deep-penetration (shielding) problem simulated using Monte Carlo methods has been the source-detector problem, in which a response is calculated at a single location in space. Traditionally, the nonanalog Monte Carlo methods used to solve these problems have required significant user input to generate and sufficiently optimize the biasing parameters necessary to obtain a statistically reliable solution. It has been demonstrated that this laborious task can be replaced by automated processes that rely on a deterministic adjoint solution to set the biasing parameters---the so-called hybrid methods. The increase in computational power over recent years has also led to interest in obtaining the solution in a region of space much larger than a point detector. In this thesis, we propose two methods for solving problems ranging from source-detector problems to more global calculations---weight windows and the Transform approach. These techniques employ some of the same biasing elements that have been used previously; however, the fundamental difference is that here the biasing techniques are used as elements of a comprehensive tool set to distribute Monte Carlo particles in a user-specified way. The weight window achieves the user-specified Monte Carlo particle distribution by imposing a particular weight window on the system, without altering the particle physics. The Transform approach introduces a transform into the neutron transport equation, which results in a complete modification of the particle physics to produce the user-specified Monte Carlo distribution. These methods are tested in a three-dimensional multigroup Monte Carlo code. For a basic shielding problem and a more realistic one, these methods adequately solved source-detector problems and more global calculations. Furthermore, they confirmed that theoretical Monte Carlo particle distributions correspond to the simulated ones, implying that these methods can be used to achieve user-specified Monte Carlo distributions. Overall, the Transform approach performed more efficiently than the weight window methods, but it performed much more efficiently for source-detector problems than for global problems.
Effective Biot theory and its generalization to poroviscoelastic models
NASA Astrophysics Data System (ADS)
Liu, Xu; Greenhalgh, Stewart; Zhou, Bing; Greenhalgh, Mark
2018-02-01
A method is suggested to express the effective bulk modulus of the solid frame of a poroelastic material as a function of the saturated bulk modulus. This method enables effective Biot theory to be described through the use of seismic dispersion measurements or other models developed for the effective saturated bulk modulus. The effective Biot theory is generalized to a poroviscoelastic model whose moduli are represented by the relaxation functions of the generalized fractional Zener model. The latter covers the general Zener and Cole-Cole models as special cases. A global search method is described to determine the parameters of the relaxation functions, and a simple deterministic method is also developed to find the defining parameters of the single Cole-Cole model. These methods enable poroviscoelastic models to be constructed that are based on measured seismic attenuation functions, and they ensure that the model dispersion characteristics match the observations.
Parameter estimation of a pulp digester model with derivative-free optimization strategies
NASA Astrophysics Data System (ADS)
Seiça, João C.; Romanenko, Andrey; Fernandes, Florbela P.; Santos, Lino O.; Fernandes, Natércia C. P.
2017-07-01
This work concerns parameter estimation in the context of mechanistic modelling of a pulp digester. The problem is cast as a box-bounded nonlinear global optimization problem in order to minimize the mismatch between the model outputs and the experimental data observed at a real pulp and paper plant. The MCSFilter and Simulated Annealing global optimization methods were used to solve the optimization problem. While the former took longer to converge to the global minimum, the latter terminated faster but at a significantly higher value of the objective function and, thus, failed to find the global solution.
Barton, Hugh A; Chiu, Weihsueh A; Setzer, R Woodrow; Andersen, Melvin E; Bailer, A John; Bois, Frédéric Y; Dewoskin, Robert S; Hays, Sean; Johanson, Gunnar; Jones, Nancy; Loizou, George; Macphail, Robert C; Portier, Christopher J; Spendiff, Martin; Tan, Yu-Mei
2007-10-01
Physiologically based pharmacokinetic (PBPK) models are used in mode-of-action based risk and safety assessments to estimate internal dosimetry in animals and humans. When used in risk assessment, these models can provide a basis for extrapolating between species, doses, and exposure routes or for justifying nondefault values for uncertainty factors. Characterization of uncertainty and variability is increasingly recognized as important for risk assessment; this represents a continuing challenge for both PBPK modelers and users. Current practices show significant progress in specifying deterministic biological models and nondeterministic (often statistical) models, estimating parameters using diverse data sets from multiple sources, using them to make predictions, and characterizing uncertainty and variability of model parameters and predictions. The International Workshop on Uncertainty and Variability in PBPK Models, held 31 Oct-2 Nov 2006, identified the state-of-the-science, needed changes in practice and implementation, and research priorities. For the short term, these include (1) multidisciplinary teams to integrate deterministic and nondeterministic/statistical models; (2) broader use of sensitivity analyses, including for structural and global (rather than local) parameter changes; and (3) enhanced transparency and reproducibility through improved documentation of model structure(s), parameter values, sensitivity and other analyses, and supporting, discrepant, or excluded data. Longer-term needs include (1) theoretical and practical methodological improvements for nondeterministic/statistical modeling; (2) better methods for evaluating alternative model structures; (3) peer-reviewed databases of parameters and covariates, and their distributions; (4) expanded coverage of PBPK models across chemicals with different properties; and (5) training and reference materials, such as cases studies, bibliographies/glossaries, model repositories, and enhanced software. The multidisciplinary dialogue initiated by this Workshop will foster the collaboration, research, data collection, and training necessary to make characterizing uncertainty and variability a standard practice in PBPK modeling and risk assessment.
Deterministic switching of hierarchy during wrinkling in quasi-planar bilayers
Saha, Sourabh K.; Culpepper, Martin L.
2016-04-25
Emergence of hierarchy during compression of quasi-planar bilayers is preceded by a mode-locked state during which the quasi-planar form persists. Transition to hierarchy is determined entirely by geometrically observable parameters. This results in a universal transition phase diagram that enables one to deterministically tune hierarchy even with limited knowledge about material properties.
Recurrence quantification analysis of global stock markets
NASA Astrophysics Data System (ADS)
Bastos, João A.; Caiado, Jorge
2011-04-01
This study investigates the presence of deterministic dependencies in international stock markets using recurrence plots and recurrence quantification analysis (RQA). The results are based on a large set of free float-adjusted market capitalization stock indices, covering a period of 15 years. The statistical tests suggest that the dynamics of stock prices in emerging markets is characterized by higher values of RQA measures when compared to their developed counterparts. The behavior of stock markets during critical financial events, such as the burst of the technology bubble, the Asian currency crisis, and the recent subprime mortgage crisis, is analyzed by performing RQA in sliding windows. It is shown that during these events stock markets exhibit a distinctive behavior that is characterized by temporary decreases in the fraction of recurrence points contained in diagonal and vertical structures.
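A minimal sketch of the two most common RQA measures, recurrence rate (RR) and determinism (DET), computed from a delay-embedded scalar series; the embedding dimension, delay, and threshold below are illustrative choices.

```python
# Recurrence rate (RR) and determinism (DET) for a scalar time series.
import numpy as np

def rqa(x, dim=3, delay=2, eps=0.3, lmin=2):
    n = len(x) - (dim - 1) * delay
    emb = np.column_stack([x[i * delay:i * delay + n] for i in range(dim)])
    R = (np.linalg.norm(emb[:, None] - emb[None, :], axis=-1) < eps).astype(int)
    rr = R.mean()
    det_pts, all_pts = 0, R.sum()
    for k in range(-(n - 1), n):                 # scan all diagonals
        run = 0
        for v in list(np.diagonal(R, offset=k)) + [0]:   # trailing 0 flushes runs
            if v:
                run += 1
            else:
                if run >= lmin:
                    det_pts += run               # points on lines of length >= lmin
                run = 0
    return rr, det_pts / all_pts if all_pts else 0.0

t = np.linspace(0, 20 * np.pi, 500)
print("sine :", rqa(np.sin(t)))                  # deterministic signal: high DET
print("noise:", rqa(np.random.default_rng(6).standard_normal(500)))  # lower DET
```

Sliding this computation along a window of index returns is precisely how the crisis periods above are profiled.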
Predictability of short-range forecasting: a multimodel approach
NASA Astrophysics Data System (ADS)
García-Moya, Jose-Antonio; Callado, Alfons; Escribà, Pau; Santos, Carlos; Santos-Muñoz, Daniel; Simarro, Juan
2011-05-01
Numerical weather prediction (NWP) models (including mesoscale) have limitations when it comes to dealing with severe weather events because extreme weather is highly unpredictable, even in the short range. A probabilistic forecast based on an ensemble of slightly different model runs may help to address this issue. Among other ensemble techniques, Multimodel ensemble prediction systems (EPSs) are proving to be useful for adding probabilistic value to mesoscale deterministic models. A Multimodel Short Range Ensemble Prediction System (SREPS) focused on forecasting the weather up to 72 h has been developed at the Spanish Meteorological Service (AEMET). The system uses five different limited area models (LAMs), namely HIRLAM (HIRLAM Consortium), HRM (DWD), the UM (UKMO), MM5 (PSU/NCAR) and COSMO (COSMO Consortium). These models run with initial and boundary conditions provided by five different global deterministic models, namely IFS (ECMWF), UM (UKMO), GME (DWD), GFS (NCEP) and CMC (MSC). AEMET-SREPS (AE) validation on the large-scale flow, using ECMWF analysis, shows a consistent and slightly underdispersive system. For surface parameters, the system shows high skill forecasting binary events. 24-h precipitation probabilistic forecasts are verified using an up-scaling grid of observations from European high-resolution precipitation networks, and compared with ECMWF-EPS (EC).
Dominating Scale-Free Networks Using Generalized Probabilistic Methods
Molnár, F.; Derzsy, N.; Czabarka, É.; Székely, L.; Szymanski, B. K.; Korniss, G.
2014-01-01
We study ensemble-based graph-theoretical methods aiming to approximate the size of the minimum dominating set (MDS) in scale-free networks. We analyze both analytical upper bounds of dominating sets and numerical realizations for applications. We propose two novel probabilistic dominating set selection strategies that are applicable to heterogeneous networks. One of them obtains the smallest probabilistic dominating set and also outperforms the deterministic degree-ranked method. We show that a degree-dependent probabilistic selection method becomes optimal in its deterministic limit. In addition, we also find the precise limit where selecting high-degree nodes exclusively becomes inefficient for network domination. We validate our results on several real-world networks, and provide highly accurate analytical estimates for our methods. PMID:25200937
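A minimal sketch of a degree-dependent probabilistic selection on a synthetic scale-free graph (networkx), with a greedy repair pass to guarantee domination; the inclusion probability and its normalization are illustrative, not the paper's exact strategy.

```python
# Probabilistic dominating set on a Barabasi-Albert graph: include node i with
# probability ~ deg(i)**alpha (capped at 1), then repair undominated nodes.
import networkx as nx
import numpy as np

rng = np.random.default_rng(7)
G = nx.barabasi_albert_graph(2000, 3, seed=7)
deg = np.array([G.degree(v) for v in G])
alpha, c = 1.0, 0.15                      # degree bias and coverage parameter
w = deg.astype(float) ** alpha
p = np.minimum(1.0, c * w / w.mean())     # degree-biased inclusion probabilities

D = set(np.flatnonzero(rng.uniform(size=len(G)) < p))
for v in G:                               # greedy repair: dominate leftovers
    if v not in D and not any(u in D for u in G[v]):
        D.add(max(G[v], key=G.degree, default=v))

assert all(v in D or any(u in D for u in G[v]) for v in G)
print(f"dominating set: {len(D)} of {len(G)} nodes")
```

Raising alpha interpolates toward the deterministic degree-ranked method, consistent with the paper's finding that a degree-dependent probabilistic selection becomes optimal in its deterministic limit.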
NASA Astrophysics Data System (ADS)
Wang, Yi; Cao, Jinde; Alsaedi, Ahmed; Hayat, Tasawar
2017-02-01
In this paper, we formulate a deterministic model that includes vacant sites, which represent inactive individuals or potential contacts, to investigate the spreading dynamics of sexually transmitted diseases in heterogeneous networks. We first analytically derive the basic reproduction number R0, which completely determines the global dynamics of the system in the long run. Specifically, if R0 < 1, the disease-free equilibrium is globally asymptotically stable, i.e. the disease disappears from the network irrespective of the initial infected numbers and distributions, whereas if R0 > 1, the system is uniformly persistent around a unique endemic equilibrium, i.e. the disease persists in the network. Furthermore, by using a suitable Lyapunov function, the global stability of the endemic equilibrium for low/high-risk infected individuals only is proved. Finally, the effects of three immunization schemes are studied and compared, and extensive numerical simulations are performed to investigate the effect of network topology and population turnover on disease spread. Our results suggest that population turnover can have a great impact on the sexually transmitted disease system in heterogeneous networks, including on the basic reproduction number and the infection prevalence.
NASA Astrophysics Data System (ADS)
Soltanzadeh, I.; Azadi, M.; Vakili, G. A.
2011-07-01
Using Bayesian Model Averaging (BMA), an attempt was made to obtain calibrated probabilistic numerical forecasts of 2-m temperature over Iran. The ensemble employs three limited area models (WRF, MM5 and HRM), with WRF used in five different configurations. Initial and boundary conditions for MM5 and WRF are obtained from the National Centers for Environmental Prediction (NCEP) Global Forecast System (GFS), and for HRM the initial and boundary conditions come from the analysis of the Global Model Europe (GME) of the German Weather Service. The resulting ensemble of seven members was run for a period of 6 months (from December 2008 to May 2009) over Iran. The 48-h raw ensemble outputs were calibrated using the BMA technique for 120 days, using a 40-day training sample of forecasts and the corresponding verification data. The calibrated probabilistic forecasts were assessed using rank histograms and attribute diagrams. Results showed that the application of BMA improved the reliability of the raw ensemble. Using the weighted ensemble mean forecast as a deterministic forecast, it was found that the deterministic-style BMA forecasts usually performed better than the best member's deterministic forecast.
Two Strain Dengue Model with Temporary Cross Immunity and Seasonality
NASA Astrophysics Data System (ADS)
Aguiar, Maíra; Ballesteros, Sebastien; Stollenwerk, Nico
2010-09-01
Models of dengue fever epidemiology have previously shown critical fluctuations with power law distributions and also deterministic chaos in some parameter regions due to the multi-strain structure of the disease pathogen. In our first model, which includes well-known biological features, we found a rich dynamical structure including limit cycles, symmetry-breaking bifurcations, torus bifurcations, coexisting attractors including isola solutions, and deterministic chaos (as indicated by positive Lyapunov exponents) in a much larger parameter region, which is also biologically more plausible than the previous results of other researchers. Based on these findings we will investigate the model structures further, including seasonality.
Mathematical analysis of epidemiological models with heterogeneity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Ark, J.W.
1992-01-01
For many diseases in human populations the disease shows dissimilar characteristics in separate subgroups of the population; for example, the probability of disease transmission for gonorrhea or AIDS is much higher from male to female than from female to male. There is reason to construct and analyze epidemiological models which allow for this heterogeneity of population, and to use these models to run computer simulations of the disease to predict its incidence and prevalence. In the models considered here the heterogeneous population is separated into subpopulations whose internal and external interactions are homogeneous, in the sense that each person in the population can be assumed to exhibit the average behavior of their subpopulation. The first model considered is an SIRS model; i.e., the Susceptible can become Infected, and if so he eventually Recovers with temporary immunity, and after a period of time becomes Susceptible again. Special cases allow for permanent immunity or other variations. This model is analyzed and threshold conditions are given which determine whether the disease dies out or persists. A deterministic model is presented; this model is constructed using difference equations, and it has been used in computer simulations for the AIDS epidemic in the homosexual population in San Francisco. The homogeneous and heterogeneous versions of the differential-equations and difference-equations forms of the deterministic model are analyzed mathematically. In the analysis, equilibria are identified and threshold conditions are set forth for the disease to die out if the disease is below the threshold, so that the disease-free equilibrium is globally asymptotically stable. Above the threshold the disease persists, so that the disease-free equilibrium is unstable and there is a unique endemic equilibrium.
NASA Astrophysics Data System (ADS)
Guymon, Gary L.; Yen, Chung-Cheng
1990-07-01
The applicability of a deterministic-probabilistic model for predicting water tables in southern Owens Valley, California, is evaluated. The model is based on a two-layer deterministic model that is cascaded with a two-point probability model. To reduce the potentially large number of uncertain variables in the deterministic model, lumping of uncertain variables was evaluated by sensitivity analysis to reduce the total number of uncertain variables to three: hydraulic conductivity, storage coefficient or specific yield, and the source-sink function. Results demonstrate that lumping of uncertain parameters reduces computational effort while providing sufficient precision for the case studied. The simulated spatial coefficient of variation for water table temporal position in most of the basin is small, which suggests that deterministic models can predict water tables in these areas with good precision. However, in several important areas where pumping occurs or the geology is complex, the simulated spatial coefficients of variation are overestimated by the two-point probability method.
York, L; Heffernan, C; Rymer, C; Panda, N
2017-05-01
In the global South, dairying is often promoted as a means of poverty alleviation. Yet, under conditions of climate warming, little is known regarding the ability of small-scale dairy producers to maintain production and/or the robustness of possible adaptation options in meeting the challenges presented, particularly heat stress. The authors created a simple, deterministic model to explore the influence of breed and heat stress relief options on smallholder dairy farmers in Odisha, India. Breeds included indigenous Indian (non-descript), low-grade Jersey crossbreed and high-grade Jersey crossbreed. Relief strategies included providing shade, fanning and bathing. The impacts of critical predicted global climate parameters, namely 2°C and 4°C temperature rises, were explored. A feed price scenario was modelled to illustrate the importance of feed in impact estimation, with feed costs increased by 10% to 30%. Across the simulations, high-grade Jersey crossbreeds maintained higher milk yields, despite being the most sensitive to the negative effects of temperature. Low-capital relief strategies were the most effective at reducing the impacts of heat stress on household income. However, as feed costs increased, the lower-grade Jersey crossbreed became the most profitable breed, and the high-grade Jersey crossbreed was only marginally (4.64%) more profitable than the indigenous breed. The results demonstrate the importance of understanding the factors and practical trade-offs that underpin adaptation. The model also highlights the need for hot-climate dairying projects and programmes to consider animal genetic resources alongside environmentally sustainable adaptation measures for the greatest poverty impact.
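The model's equations are not given in the abstract; the toy sketch below captures its logic under stated assumptions: milk yield declines linearly once the temperature-humidity index (THI) exceeds a breed-specific threshold, and a relief strategy lowers the effective THI. All coefficients are hypothetical.

```python
# Toy deterministic heat-stress model: breed-specific THI thresholds and slopes.
breeds = {   # breed: (base yield L/day, THI threshold, loss L/day per THI unit)
    "indigenous":        (4.0, 75.0, 0.10),
    "low-grade Jersey":  (8.0, 72.0, 0.25),
    "high-grade Jersey": (14.0, 70.0, 0.45),
}

def daily_yield(breed, thi, relief=0.0):
    base, threshold, slope = breeds[breed]
    effective_thi = thi - relief                 # shade/fanning/bathing lowers load
    return max(0.0, base - slope * max(0.0, effective_thi - threshold))

for warming in (0.0, 2.0, 4.0):                  # baseline, +2 C, +4 C scenarios
    thi = 76.0 + warming                         # hypothetical summer THI
    yields = {b: round(daily_yield(b, thi, relief=2.0), 2) for b in breeds}
    print(f"+{warming:.0f} C:", yields)
```

Even in this caricature, the high-grade crossbreed keeps the highest absolute yield while losing the most per degree of warming, which is the trade-off the full model quantifies against feed costs.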
Global Soil Moisture Estimation through a Coupled CLM4-RTM-DART Land Data Assimilation System
NASA Astrophysics Data System (ADS)
Zhao, L.; Yang, Z. L.; Hoar, T. J.
2016-12-01
Very few frameworks exist that estimate global-scale soil moisture through microwave land data assimilation (DA). Toward this goal, we have developed such a framework by linking the Community Land Model version 4 (CLM4) and a microwave radiative transfer model (RTM) with the Data Assimilation Research Testbed (DART). The deterministic Ensemble Adjustment Kalman Filter (EAKF) within DART is utilized to estimate global multi-layer soil moisture by assimilating brightness temperature observations from the Advanced Microwave Scanning Radiometer for Earth Observing System (AMSR-E). A 40-member Community Atmosphere Model version 4 (CAM4) reanalysis ensemble is adopted to drive the CLM4 simulations. Spatially specific, time-invariant microwave parameters are pre-calibrated to minimize uncertainties in the RTM. In addition, various methods are designed with computational efficiency in mind. A series of experiments are conducted to quantify the DA sensitivity to microwave parameters, the choice of assimilated observations, and different CLM4 updating schemes. Evaluation results indicate that the newly established CLM4-RTM-DART framework improves the open-loop CLM4 simulated soil moisture. Pre-calibrated microwave parameters, rather than their default values, ensure a more robust global-scale performance. In addition, updating near-surface soil moisture is capable of improving soil moisture in deeper layers, whereas simultaneously updating multi-layer soil moisture fails to obtain the intended improvements. This presentation will show the architecture of the CLM4-RTM-DART system and the evaluations of the AMSR-E DA. Preliminary results on multi-sensor DA that integrates various satellite observations, including GRACE, MODIS, and AMSR-E, will also be presented. Reference: Zhao, L., Z.-L. Yang, and T. J. Hoar, 2016. Global Soil Moisture Estimation by Assimilating AMSR-E Brightness Temperatures in a Coupled CLM4-RTM-DART System. Journal of Hydrometeorology, DOI: 10.1175/JHM-D-15-0218.1.
Stability analysis of multi-group deterministic and stochastic epidemic models with vaccination rate
NASA Astrophysics Data System (ADS)
Wang, Zhi-Gang; Gao, Rui-Mei; Fan, Xiao-Ming; Han, Qi-Xing
2014-09-01
We discuss in this paper a deterministic multi-group MSIR epidemic model with a vaccination rate. The basic reproduction number ℛ0, a key parameter in epidemiology, is a threshold which determines the persistence or extinction of the disease. Using Lyapunov function techniques, we show that if ℛ0 is greater than 1 and the deterministic model obeys some conditions, then the disease prevails: the infective persists and the endemic state is asymptotically stable in a feasible region. If ℛ0 is less than or equal to 1, then the infective disappears and the disease dies out. In addition, stochastic noise around the endemic equilibrium is added to the deterministic MSIR model, extending the deterministic model to a system of stochastic ordinary differential equations. For the stochastic version, we carry out a detailed analysis of the asymptotic behavior of the stochastic model. Regarding the value of ℛ0, when the stochastic system obeys some conditions and ℛ0 is greater than 1, we deduce that the stochastic system is stochastically asymptotically stable. Finally, the deterministic and stochastic model dynamics are illustrated through computer simulations.
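As a minimal illustration of the threshold role of ℛ0, the following Python sketch integrates a toy single-group SIR model with vaccination of susceptibles; this is a simplified stand-in for the paper's multi-group MSIR system, and all rates are illustrative.

```python
import numpy as np

# Toy single-group SIR with recruitment Lam, natural mortality mu,
# transmission beta, recovery gamma and vaccination rate v.
Lam, mu, beta, gamma, v = 0.02, 0.02, 0.5, 0.1, 0.05

S_dfe = Lam / (mu + v)            # susceptible level at the disease-free equilibrium
R0 = beta * S_dfe / (mu + gamma)  # reproduction number of this toy model
print(f"R0 = {R0:.3f}")           # > 1 here, so the endemic state should persist

def late_time_infectives(days=5000.0, dt=0.1):
    S, I = S_dfe, 1e-4
    for _ in range(int(days / dt)):  # forward Euler integration
        dS = Lam - beta * S * I - (mu + v) * S
        dI = beta * S * I - (mu + gamma) * I
        S, I = S + dt * dS, I + dt * dI
    return I

print(f"late-time infective fraction: {late_time_infectives():.5f}")
```

Raising v far enough pushes ℛ0 below 1, after which the same integration decays to the disease-free state, which is the threshold behaviour the paper establishes with Lyapunov functions.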
Modeling disease transmission near eradication: An equation free approach
NASA Astrophysics Data System (ADS)
Williams, Matthew O.; Proctor, Joshua L.; Kutz, J. Nathan
2015-01-01
Although disease transmission in the near eradication regime is inherently stochastic, deterministic quantities such as the probability of eradication are of interest to policy makers and researchers. Rather than running large ensembles of discrete stochastic simulations over long intervals in time to compute these deterministic quantities, we create a data-driven and deterministic "coarse" model for them using the Equation Free (EF) framework. In lieu of deriving an explicit coarse model, the EF framework approximates any needed information, such as coarse time derivatives, by running short computational experiments. However, the choice of the coarse variables (i.e., the state of the coarse system) is critical if the resulting model is to be accurate. In this manuscript, we propose a set of coarse variables that result in an accurate model in the endemic and near eradication regimes, and demonstrate this on a compartmental model representing the spread of Poliomyelitis. When combined with adaptive time-stepping coarse projective integrators, this approach can yield over a factor of two speedup compared to direct simulation, and due to its lower dimensionality, could be beneficial when conducting systems level tasks such as designing eradication or monitoring campaigns.
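A minimal sketch of the coarse projective integration step, applied here to a simple stochastic SIS chain rather than the authors' poliomyelitis model: short bursts of the stochastic micro-simulator over an ensemble estimate the coarse time derivative of the mean infected count, which is then used to leap forward deterministically. Population size, rates and burst lengths are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N, beta, gamma = 10_000, 0.3, 0.1  # population, infection and recovery rates

def micro_step(I, dt):
    """One stochastic step of a discrete SIS model (illustrative micro-simulator)."""
    p_inf = 1.0 - np.exp(-beta * I / N * dt)
    p_rec = 1.0 - np.exp(-gamma * dt)
    return I + rng.binomial(N - I, p_inf) - rng.binomial(I, p_rec)

def coarse_projective_integration(I0, bursts=30, inner=20, leap=40, ens=200, dt=0.1):
    """Lift-burst-restrict-project loop of the EF framework (simplified)."""
    i, t = float(I0), 0.0
    for _ in range(bursts):
        ensemble = np.full(ens, int(i))          # lifting: consistent micro states
        traj = []
        for _ in range(inner):                   # short stochastic burst
            ensemble = micro_step(ensemble, dt)
            traj.append(ensemble.mean())         # restriction to the coarse variable
        didt = (traj[-1] - traj[-5]) / (4 * dt)  # coarse derivative from the burst tail
        i = traj[-1] + leap * dt * didt          # projective (forward-Euler) leap
        t += (inner + leap) * dt
    return t, i / N

t, frac = coarse_projective_integration(I0=50)
print(f"t = {t:.1f}: infected fraction ~ {frac:.3f}")
```

Each burst costs only the inner micro-steps but advances time by inner plus leap steps, which is the source of the speedup reported for the adaptive version.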
NASA Astrophysics Data System (ADS)
Mannattil, Manu; Pandey, Ambrish; Verma, Mahendra K.; Chakraborty, Sagar
2017-12-01
Constructing simpler models, either stochastic or deterministic, for exploring the phenomenon of flow reversals in fluid systems is in vogue across disciplines. Using direct numerical simulations and nonlinear time series analysis, we illustrate that the basic nature of flow reversals in convecting fluids can depend on the dimensionless parameters describing the system. Specifically, we find evidence of low-dimensional behavior in flow reversals occurring at zero Prandtl number, whereas we fail to find such signatures for reversals at infinite Prandtl number. Thus, even in a single system, as one varies the system parameters, one can encounter reversals that are fundamentally different in nature. Consequently, we conclude that a single general low-dimensional deterministic model cannot faithfully characterize flow reversals for every set of parameter values.
C-field cosmological models: revisited
NASA Astrophysics Data System (ADS)
Yadav, Anil Kumar; Tawfiq Ali, Ahmad; Ray, Saibal; Rahaman, Farook; Hossain Sardar, Iftikar
2016-12-01
We investigate plane symmetric spacetime filled with perfect fluid in the C-field cosmology of Hoyle and Narlikar. A new class of exact solutions has been obtained by considering the creation field C as a function of time only. To get the deterministic solution, it has been assumed that the rate of creation of matter-energy density is proportional to the strength of the existing C-field energy density. Several physical aspects and geometrical properties of the models are discussed in detail, showing in particular that some of our solutions of C-field cosmology are free from singularity, in contrast to Big Bang cosmology. A comparative study has been carried out between two models, one singular and the other nonsingular, by contrasting the behaviour of the physical parameters. We note that, depending on the parameters, the model represents features of both the accelerating and the decelerating universe, and thus seems to provide glimpses of an oscillating or cyclic model of the universe without invoking any other agent or theory to allow cyclicity.
Sarkar, Kanchan; Sharma, Rahul; Bhattacharyya, S P
2010-03-09
A density matrix based soft-computing solution to the quantum mechanical problem of computing the molecular electronic structure of fairly long polythiophene (PT) chains is proposed. The soft-computing solution is based on a "random mutation hill climbing" scheme which is modified by blending it with a deterministic method based on a trial single-particle density matrix P^(0)(R) for the guessed structural parameters R, which is allowed to evolve under a unitary transformation generated by the Hamiltonian H(R). The Hamiltonian itself changes as the geometrical parameters R defining the polythiophene chain undergo mutation. The scale λ of the transformation is optimized by making the energy E(λ) stationary with respect to λ. The robustness and the performance levels of variants of the algorithm are analyzed and compared with those of other derivative-free methods. The method is further tested successfully with optimization of the geometry of bipolaron-doped long PT chains.
Probabilistic versus deterministic hazard assessment in liquefaction susceptible zones
NASA Astrophysics Data System (ADS)
Daminelli, Rosastella; Gerosa, Daniele; Marcellini, Alberto; Tento, Alberto
2015-04-01
Probabilistic seismic hazard assessment (PSHA), usually adopted in the framework of seismic code drafting, is based on a Poissonian description of temporal occurrence, a negative exponential distribution of magnitude, and an attenuation relationship with log-normal distribution of PGA or response spectrum. The main strength of this approach is that it is presently the standard for the majority of countries, but it has weak points, in particular regarding the physical description of the earthquake phenomenon. Factors that could significantly influence the expected motion at the site, such as site effects and source characteristics like strong-motion duration and directivity, are not taken into account by PSHA. Deterministic models can better evaluate the ground motion at a site from a physical point of view, but their prediction reliability depends on the degree of knowledge of the source, wave propagation and soil parameters. We compare these two approaches at selected sites affected by the May 2012 Emilia-Romagna and Lombardia earthquake, which caused widespread liquefaction phenomena, unusual for magnitudes less than 6. We focus on sites that are liquefiable because of their soil mechanical parameters and water table level. Our analysis shows that the choice between deterministic and probabilistic hazard analysis is strongly dependent on site conditions: the looser the soil and the higher the liquefaction potential, the more suitable the deterministic approach. Source characteristics, in particular the duration of strong ground motion, have long been recognized as relevant to inducing liquefaction; unfortunately, a quantitative prediction of these parameters appears very unlikely, dramatically reducing the possibility of their adoption in hazard assessment. Last but not least, economic factors are relevant in the choice of approach. The case history of the 2012 Emilia-Romagna and Lombardia earthquake, with an officially estimated cost of 6 billion euros, shows that the geological and geophysical investigations necessary for a reliable deterministic hazard evaluation are largely justified.
Pappenberger, F; Jendritzky, G; Staiger, H; Dutra, E; Di Giuseppe, F; Richardson, D S; Cloke, H L
2015-03-01
Although over a hundred thermal indices can be used for assessing thermal health hazards, many ignore the human heat budget, physiology and clothing. The Universal Thermal Climate Index (UTCI) addresses these shortcomings by using an advanced thermo-physiological model. This paper assesses the potential of using the UTCI for forecasting thermal health hazards. Traditionally, such hazard forecasting has had two further limitations: it has been narrowly focused on a particular region or nation and has relied on single 'deterministic' forecasts. Here, the UTCI is computed on a global scale, which is essential for international health-hazard warnings and disaster preparedness, and it is provided as a probabilistic forecast. It is shown that probabilistic UTCI forecasts are superior in skill to deterministic forecasts and that, despite global variations, the UTCI forecast is skilful for lead times up to 10 days. The paper also demonstrates the utility of probabilistic UTCI forecasts using the example of the 2010 heat wave in Russia.
Metallic-thin-film instability with spatially correlated thermal noise.
Diez, Javier A; González, Alejandro G; Fernández, Roberto
2016-01-01
We study the effects of stochastic thermal fluctuations on the instability of the free surface of a flat liquid metallic film on a solid substrate. These fluctuations are represented by a stochastic noise term added to the deterministic equation for the film thickness within the long-wave approximation. Unlike the case of polymeric films, we find that this noise, while remaining white in time, must be colored in space, at least in some regimes. The corresponding noise term is characterized by a nonzero correlation length, ℓc, which, combined with the size of the system, leads to a dimensionless parameter β that accounts for the relative importance of the spatial correlation (β ∼ ℓc⁻¹). We perform the linear stability analysis (LSA) of the film both with and without the noise term and find that for ℓc larger than some critical value (depending on the system size), the wavelength of the peak of the spectrum is larger than that corresponding to the deterministic case, while for smaller ℓc this peak corresponds to a smaller wavelength than the latter. Interestingly, whatever the value of ℓc, the peak always approaches the deterministic one at larger times. We compare LSA results with numerical simulations of the complete nonlinear problem and find good agreement in the power spectra for early times at different values of β. For late times, we find that the stochastic LSA predicts well the position of the dominant wavelength, showing that nonlinear interactions do not modify the trends of the early linear stages. Finally, we fit the theoretical spectra to experimental data from a nanometric laser-melted copper film and find that at later times the adjustment requires smaller values of β (larger space correlations).
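One standard way to generate noise that is white in time but colored in space is to filter spatially white noise in Fourier space. The sketch below uses a Gaussian spectral filter whose width is set by ℓc; this illustrates the construction only and is not the specific correlation model adopted in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def correlated_noise(n, L, lc, nt):
    """White-in-time noise on n grid points with Gaussian spatial correlation.

    Each time slice is independent (white in time); spatial correlation of
    length ~lc is imposed by the spectral filter exp(-(k*lc)^2 / 4).
    """
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
    filt = np.exp(-(k * lc) ** 2 / 4.0)
    slices = []
    for _ in range(nt):
        white = rng.standard_normal(n)
        field = np.fft.ifft(np.fft.fft(white) * filt).real
        slices.append(field / field.std())  # unit variance per slice
    return np.array(slices)

L, n = 100.0, 512
noise = correlated_noise(n, L, lc=5.0, nt=200)

# empirical spatial autocorrelation at separation dx, averaged over time
lag = 10  # grid points, i.e. dx = lag * L / n
ac = np.mean(noise[:, :-lag] * noise[:, lag:])
print(f"autocorrelation at dx = {lag * L / n:.2f}: {ac:.3f}")
```

Increasing ℓc (smaller β in the paper's notation) raises the correlation at a fixed separation, which is the knob the LSA and the nonlinear simulations vary.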
Development of probabilistic multimedia multipathway computer codes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, C.; LePoire, D.; Gnanapragasam, E.
2002-01-01
The deterministic multimedia dose/risk assessment codes RESRAD and RESRAD-BUILD have been widely used for many years for evaluation of sites contaminated with residual radioactive materials. The RESRAD code applies to the cleanup of sites (soils) and the RESRAD-BUILD code applies to the cleanup of buildings and structures. This work describes the procedure used to enhance the deterministic RESRAD and RESRAD-BUILD codes for probabilistic dose analysis. A six-step procedure was used in developing default parameter distributions and the probabilistic analysis modules. These six steps include (1) listing and categorizing parameters; (2) ranking parameters; (3) developing parameter distributions; (4) testing parameter distributions for probabilistic analysis; (5) developing probabilistic software modules; and (6) testing probabilistic modules and integrated codes. The procedures used can be applied to the development of other multimedia probabilistic codes. The probabilistic versions of RESRAD and RESRAD-BUILD codes provide tools for studying the uncertainty in dose assessment caused by uncertain input parameters. The parameter distribution data collected in this work can also be applied to other multimedia assessment tasks and multimedia computer codes.
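Steps (3)-(5) essentially amount to drawing the ranked parameters from their assigned distributions and pushing each draw through the deterministic kernel. A minimal Monte Carlo sketch follows; the dose function and the distribution families and values are hypothetical placeholders, not RESRAD defaults.

```python
import numpy as np

rng = np.random.default_rng(2)

def dose(thickness, density, diff_coef):
    """Hypothetical deterministic dose model standing in for the RESRAD kernel."""
    return 100.0 * np.exp(-thickness * density / (diff_coef + 0.1))

n = 10_000
thickness = rng.triangular(0.1, 0.3, 0.6, n)             # cover thickness [m]
density = rng.normal(1.6, 0.1, n)                        # soil density [g/cm^3]
diff_coef = rng.lognormal(mean=-1.0, sigma=0.5, size=n)  # diffusion coefficient

doses = dose(thickness, density, diff_coef)
print(f"mean dose: {doses.mean():.2f}")
print(f"95th percentile: {np.percentile(doses, 95):.2f}")
```

The output percentiles are exactly the kind of uncertainty statement the probabilistic modules add on top of the single deterministic estimate.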
NASA Astrophysics Data System (ADS)
Guan, Fengjiao; Zhang, Guanjun; Liu, Jie; Wang, Shujing; Luo, Xu; Zhu, Feng
2017-10-01
Accurate material parameters are critical for constructing high-biofidelity finite element (FE) models. However, it is hard to obtain brain tissue parameters accurately because of the effects of irregular geometry and uncertain boundary conditions. Considering the complexity of material testing and the uncertainty of the friction coefficient, a computational inverse method for identifying the viscoelastic material parameters of brain tissue is presented, based on interval analysis. First, intervals are used to quantify the friction coefficient in the boundary condition. The inverse problem of material parameter identification under an uncertain friction coefficient is then transformed into two types of deterministic inverse problems. Finally, an intelligent optimization algorithm is used to solve the two types of deterministic inverse problems quickly and accurately, and the range of material parameters can be acquired without requiring a variety of samples. The efficiency and convergence of this method are demonstrated by the material parameter identification of the thalamus. The proposed method provides a potentially effective tool for building high-biofidelity human finite element models in the study of traffic accident injury.
Sanz, Luis; Alonso, Juan Antonio
2017-12-01
In this work we develop approximate aggregation techniques in the context of slow-fast linear population models governed by stochastic differential equations, and apply the results to the treatment of populations with spatial heterogeneity. Approximate aggregation techniques allow one to transform a complex system involving many coupled variables, in which there are processes with different time scales, into a simpler reduced model with fewer 'global' variables, in such a way that the dynamics of the former can be approximated by those of the latter. In our model we consider a linear fast deterministic process together with a linear slow process in which the parameters are affected by additive noise, and give conditions for the solutions corresponding to positive initial conditions to remain positive for all times. By letting the fast process reach equilibrium we build a reduced system with fewer variables, and provide results relating the asymptotic behaviour of the first- and second-order moments of the population vector for the original and the reduced systems. The general technique is illustrated by analysing a multiregional stochastic system in which dispersal is deterministic and the growth rate of the population in each patch is affected by additive noise.
Multielevation calibration of frequency-domain electromagnetic data
Minsley, Burke J.; Kass, M. Andy; Hodges, Greg; Smith, Bruce D.
2014-01-01
Systematic calibration errors must be taken into account because they can substantially impact the accuracy of inverted subsurface resistivity models derived from frequency-domain electromagnetic data, resulting in potentially misleading interpretations. We have developed an approach that uses data acquired at multiple elevations over the same location to assess calibration errors. A significant advantage is that this method does not require prior knowledge of subsurface properties from borehole or ground geophysical data (though these can be readily incorporated if available), and it is therefore well suited to remote areas. The multielevation data were used to solve for calibration parameters and a single subsurface resistivity model that are self-consistent over all elevations. Deterministic and Bayesian formulations of the multielevation approach illustrate parameter sensitivity and uncertainty using synthetic- and field-data examples. Multiplicative calibration errors (gain and phase) were found to be better resolved at high frequencies and when data were acquired over a relatively conductive area, whereas additive errors (bias) were reasonably resolved over conductive and resistive areas at all frequencies. The Bayesian approach outperformed the deterministic approach when estimating calibration parameters using multielevation data at a single location; however, joint analysis of multielevation data at multiple locations using the deterministic algorithm yielded the most accurate estimates of calibration parameters. Inversion results using calibration-corrected data revealed marked improvement in misfit, lending added confidence to the interpretation of these models.
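The deterministic variant can be sketched as a joint least-squares fit of the calibration terms and a subsurface parameter to data collected at several elevations. Everything below, including the exponential-decay forward model and the single-gain/single-bias parameterization, is a hypothetical stand-in for the true frequency-domain EM physics.

```python
import numpy as np
from scipy.optimize import least_squares

def forward(h, rho):
    """Hypothetical halfspace response vs. sensor elevation h: the decay
    length scales with sqrt(rho) (not a real EM forward model)."""
    return np.exp(-h / (3.0 * np.sqrt(rho)))

rng = np.random.default_rng(3)
h_obs = np.array([30.0, 45.0, 60.0, 90.0, 120.0])  # elevations [m]
true_rho, true_gain, true_bias = 50.0, 1.15, 0.01
d_obs = true_gain * forward(h_obs, true_rho) + true_bias
d_obs += 0.002 * rng.standard_normal(h_obs.size)   # measurement noise

def residuals(x):
    rho, gain, bias = x
    return gain * forward(h_obs, rho) + bias - d_obs

fit = least_squares(residuals, x0=[100.0, 1.0, 0.0],
                    bounds=([1.0, 0.5, -0.1], [1e4, 2.0, 0.1]))
rho, gain, bias = fit.x
print(f"rho = {rho:.1f} ohm-m, gain = {gain:.3f}, bias = {bias:.4f}")
```

The multielevation idea is visible in the design: a single elevation cannot separate the gain from the earth response, but the change of the response with height constrains the subsurface term, leaving the calibration terms to absorb the rest.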
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huthmacher, Klaus; Molberg, Andreas K.; Rethfeld, Bärbel
2016-10-01
A split-step numerical method for calculating ultrafast free-electron dynamics in dielectrics is introduced. The two split steps, independently programmed in C++11 and FORTRAN 2003, are interfaced via the presented open source wrapper. The first step solves a deterministic extended multi-rate equation for the ionization, electron–phonon collisions, and single photon absorption by free-carriers. The second step is stochastic and models electron–electron collisions using Monte-Carlo techniques. This combination of deterministic and stochastic approaches is a unique and efficient method of calculating the nonlinear dynamics of 3D materials exposed to high intensity ultrashort pulses. Results from simulations solving the proposed model demonstrate how electron–electron scattering relaxes the non-equilibrium electron distribution on the femtosecond time scale.
Short-range solar radiation forecasts over Sweden
NASA Astrophysics Data System (ADS)
Landelius, Tomas; Lindskog, Magnus; Körnich, Heiner; Andersson, Sandra
2018-04-01
In this article the performance of short-range solar radiation forecasts from the global deterministic and ensemble models of the European Centre for Medium-Range Weather Forecasts (ECMWF) is compared with an ensemble of the regional mesoscale model HARMONIE-AROME used by the national meteorological services in Sweden, Norway and Finland. Note, however, that only the control members and the ensemble means are included in the comparison. The models' resolutions differ considerably: 18 km for the ECMWF ensemble, 9 km for the ECMWF deterministic model, and 2.5 km for the HARMONIE-AROME ensemble. The models share the same radiation code. It turns out that they all systematically underestimate the Direct Normal Irradiance (DNI) under clear-sky conditions. Except for this shortcoming, the HARMONIE-AROME ensemble model shows the best agreement with the distribution of observed Global Horizontal Irradiance (GHI) and DNI values. During mid-day the HARMONIE-AROME ensemble mean performs best. The control member of the HARMONIE-AROME ensemble also scores better than the global deterministic ECMWF model. This is an interesting result, since mesoscale models have so far not shown good results when compared to the ECMWF models. Three days with clear, mixed and cloudy skies are used to illustrate the possible added value of a probabilistic forecast. It is shown that in these cases the mesoscale ensemble could provide decision support to a grid operator in the form of forecasts of both the amount of solar power and its probabilities.
Statistical Maps of Ground Magnetic Disturbance Derived from Global Geospace Models
NASA Astrophysics Data System (ADS)
Rigler, E. J.; Wiltberger, M. J.; Love, J. J.
2017-12-01
Electric currents in space are the principal driver of magnetic variations measured at Earth's surface. These in turn induce geoelectric fields that present a natural hazard for technological systems like high-voltage power distribution networks. Modern global geospace models can reasonably simulate large-scale geomagnetic response to solar wind variations, but they are less successful at deterministic predictions of intense localized geomagnetic activity that most impacts technological systems on the ground. Still, recent studies have shown that these models can accurately reproduce the spatial statistical distributions of geomagnetic activity, suggesting that their physics are largely correct. Since the magnetosphere is a largely externally driven system, most model-measurement discrepancies probably arise from uncertain boundary conditions. So, with realistic distributions of solar wind parameters to establish its boundary conditions, we use the Lyon-Fedder-Mobarry (LFM) geospace model to build a synthetic multivariate statistical model of gridded ground magnetic disturbance. From this, we analyze the spatial modes of geomagnetic response, regress on available measurements to fill in unsampled locations on the grid, and estimate the global probability distribution of extreme magnetic disturbance. The latter offers a prototype geomagnetic "hazard map", similar to those used to characterize better-known geophysical hazards like earthquakes and floods.
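A minimal sketch of the EOF-plus-regression idea: decompose an ensemble of simulated disturbance maps into spatial modes, then regress a handful of ground observations onto those modes to fill in the unsampled grid. All data and dimensions below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic stand-in for geospace-model output: m simulated maps on a g-point grid
m, g, r = 500, 200, 3
modes = rng.standard_normal((r, g))                     # hidden spatial patterns
amps = rng.standard_normal((m, r)) * np.array([3.0, 2.0, 1.0])
sims = amps @ modes + 0.2 * rng.standard_normal((m, g))

# EOFs (principal spatial modes) of the simulated ensemble
mean = sims.mean(axis=0)
_, _, Vt = np.linalg.svd(sims - mean, full_matrices=False)
eofs = Vt[:r]

# Magnetometers sample only a few grid points; regress observations on the
# EOFs at those points to estimate mode amplitudes, then reconstruct everywhere.
truth = rng.standard_normal(r) @ modes                  # an unseen "event"
obs_idx = rng.choice(g, size=15, replace=False)
y = truth[obs_idx] + 0.1 * rng.standard_normal(obs_idx.size)

A = eofs[:, obs_idx].T                                  # (n_obs, r) design matrix
coef, *_ = np.linalg.lstsq(A, y - mean[obs_idx], rcond=None)
recon = mean + coef @ eofs

err = np.sqrt(np.mean((recon - truth) ** 2)) / truth.std()
print(f"normalized RMS infill error: {err:.2f}")
```

Repeating the reconstruction over many synthetic events would give the tail statistics needed for the extreme-disturbance "hazard map" the abstract describes.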
Consistent Parameter and Transfer Function Estimation using Context Free Grammars
NASA Astrophysics Data System (ADS)
Klotz, Daniel; Herrnegger, Mathew; Schulz, Karsten
2017-04-01
This contribution presents a method for the inference of transfer functions for rainfall-runoff models. Here, transfer functions are defined as parametrized (functional) relationships between a set of spatial predictors (e.g. elevation, slope or soil texture) and model parameters. They are ultimately used for the estimation of consistent, spatially distributed model parameters from a limited number of lumped global parameters. Additionally, they provide a straightforward method for parameter extrapolation from one set of basins to another and can even be used to derive parameterizations for multi-scale models [see: Samaniego et al., 2010]. Yet current approaches often implicitly assume that the transfer functions are known. In fact, these hypothesized transfer functions can rarely be measured and often remain unknown. Therefore, this contribution presents a general method for the concurrent estimation of the structure of transfer functions and their respective (global) parameters; by consequence, an estimation of the distributed parameters of the rainfall-runoff model is also undertaken. The method combines two steps to achieve this. The first generates different possible transfer functions. The second then estimates the respective global transfer function parameters. The structural estimation of the transfer functions is based on the concept of context-free grammars, which Chomsky first introduced in linguistics [Chomsky, 1956] and which have since been widely applied in computer science but, to the knowledge of the authors, not yet in hydrology. The contribution therefore gives an introduction to context-free grammars and shows how they can be constructed and used for the structural inference of transfer functions. This is enabled by new methods from evolutionary computation, such as grammatical evolution [O'Neill, 2001], which make it possible to exploit the constructed grammar as a search space for equations (see the sketch below). The parametrization of the transfer functions is then achieved through a second optimization routine. The contribution explores different aspects of the described procedure through a set of experiments, which can be divided into three categories: (1) the inference of transfer functions from directly measurable parameters; (2) the estimation of global parameters for given transfer functions from runoff data; and (3) the estimation of sets of completely unknown transfer functions from runoff data. The conducted tests reveal different potentials and limits of the procedure. Concretely, it is shown that cases (1) and (2) work remarkably well, whereas case (3) is much more dependent on the setup; in that case considerably more data are needed to derive transfer function estimates, even for simple models and setups. References: - Chomsky, N. (1956): Three Models for the Description of Language. IRE Transactions on Information Theory, 2(3), pp. 113-124. - O'Neill, M. (2001): Grammatical Evolution. IEEE Transactions on Evolutionary Computation, Vol. 5, No. 4. - Samaniego, L.; Kumar, R.; Attinger, S. (2010): Multiscale parameter regionalization of a grid-based hydrologic model at the mesoscale. Water Resources Research, Vol. 46, W05523, doi:10.1029/2008WR007327.
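A minimal sketch of the genome-to-expression mapping at the heart of grammatical evolution, using a toy grammar over hypothetical spatial predictors (slope, elevation, sand fraction). The production rules and depth control are illustrative, not the authors' grammar.

```python
import random

# A tiny context-free grammar for transfer functions mapping spatial
# predictors to one model parameter.
GRAMMAR = {
    "<expr>": [["<expr>", "<op>", "<expr>"],
               ["<func>", "(", "<var>", ")"],
               ["<var>"], ["<const>"]],
    "<op>": [["+"], ["-"], ["*"]],
    "<func>": [["log1p"], ["exp"], ["sqrt"]],
    "<var>": [["slope"], ["elevation"], ["sand_frac"]],
    "<const>": [["0.5"], ["2.0"]],
}

def derive(genome, cursor, symbol="<expr>", depth=0, max_depth=5):
    """Map an integer genome to an expression: each codon picks a production."""
    if symbol not in GRAMMAR:
        return symbol  # terminal token
    rules = GRAMMAR[symbol]
    if depth >= max_depth:  # forbid further recursion near the depth cap
        rules = [r for r in rules if "<expr>" not in r]
    codon = genome[cursor[0] % len(genome)]
    cursor[0] += 1
    production = rules[codon % len(rules)]
    return "".join(derive(genome, cursor, s, depth + 1, max_depth)
                   for s in production)

random.seed(5)
genome = [random.randrange(256) for _ in range(60)]
print("candidate transfer function:", derive(genome, cursor=[0]))
# Evaluating the string on predictor grids (e.g. with numpy functions bound to
# log1p/exp/sqrt) yields the distributed parameter field; a second optimizer
# then tunes the global constants, mirroring the paper's two-step procedure.
```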
Dual ant colony operational modal analysis parameter estimation method
NASA Astrophysics Data System (ADS)
Sitarz, Piotr; Powałka, Bartosz
2018-01-01
Operational Modal Analysis (OMA) is a common technique used to examine the dynamic properties of a system. Contrary to experimental modal analysis, the input signal is generated by the object's ambient environment. Operational modal analysis mainly aims at determining the number of pole pairs and at estimating modal parameters. Many methods are used for parameter identification. Some methods operate in the time domain, others in the frequency domain; the former use correlation functions, the latter spectral density functions. Furthermore, while some methods require the user to select poles from a stabilisation diagram, others try to automate the selection process. The dual ant colony operational modal analysis parameter estimation method (DAC-OMA) presents a new approach to the problem, avoiding the issues involved in the stabilisation diagram. The presented algorithm is fully automated. It uses deterministic methods to define the intervals of the estimated parameters, thus reducing the problem to an optimisation task which is conducted with dedicated software based on an ant colony optimisation algorithm. The combination of deterministic methods restricting parameter intervals and artificial intelligence yields very good results, also for closely spaced modes and significantly varied mode shapes within one measurement point.
Acceleration techniques in the univariate Lipschitz global optimization
NASA Astrophysics Data System (ADS)
Sergeyev, Yaroslav D.; Kvasov, Dmitri E.; Mukhametzhanov, Marat S.; De Franco, Angela
2016-10-01
Univariate box-constrained Lipschitz global optimization problems are considered in this contribution. Geometric and information statistical approaches are presented. Novel, powerful local tuning and local improvement techniques are described, along with traditional ways to estimate the Lipschitz constant. The advantages of the presented local tuning and local improvement techniques are demonstrated using the operational characteristics approach for comparing deterministic global optimization algorithms on a class of 100 widely used test functions.
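The geometric approach mentioned here descends from the Piyavskii-Shubert algorithm, which refines a sawtooth lower bound built from the Lipschitz constant L. A minimal univariate sketch on a common multiextremal test function (the function and the value of L are illustrative):

```python
import math

def piyavskii_shubert(f, a, b, L, tol=1e-4, max_iter=500):
    """Sample where the sawtooth lower bound f(x_i) - L|x - x_i| is lowest,
    until the gap between best value and bound falls below tol."""
    pts = [(a, f(a)), (b, f(b))]
    best = min(pts, key=lambda p: p[1])
    for _ in range(max_iter):
        pts.sort()
        x_new, lb_best = None, math.inf
        for (x1, y1), (x2, y2) in zip(pts, pts[1:]):
            x = 0.5 * (x1 + x2) + (y1 - y2) / (2.0 * L)   # apex of the tooth
            lb = 0.5 * (y1 + y2) - 0.5 * L * (x2 - x1)    # lower bound there
            if lb < lb_best:
                x_new, lb_best = x, lb
        if best[1] - lb_best < tol:
            break
        y_new = f(x_new)
        pts.append((x_new, y_new))
        if y_new < best[1]:
            best = (x_new, y_new)
    return best

f = lambda x: math.sin(x) + math.sin(10.0 * x / 3.0)
x_star, f_star = piyavskii_shubert(f, 2.7, 7.5, L=4.5)    # |f'| <= 1 + 10/3
print(f"x* = {x_star:.4f}, f* = {f_star:.4f}")
```

Local tuning, in this vocabulary, amounts to replacing the single global L with locally estimated constants, which shrinks the sawtooth where the function is flat and accelerates convergence.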
NASA Astrophysics Data System (ADS)
de Pascale, P.; Vasile, M.; Casotto, S.
The design of interplanetary trajectories requires the solution of an optimization problem, which has traditionally been solved by resorting to various local optimization techniques. All such approaches, apart from the specific method employed (direct or indirect), require an initial guess, which deeply influences the convergence to the optimal solution. Recent developments in low-thrust propulsion have widened the perspectives of exploration of the Solar System, while at the same time increasing the difficulty of the trajectory design process. Continuous-thrust transfers, typically characterized by multiple spiraling arcs, have a broad number of design parameters and, thanks to the flexibility offered by such engines, typically turn out to have a multi-modal domain with a consequently larger number of optimal solutions. The definition of first guesses is thus even more challenging, particularly for a broad search over the design parameters, and requires an extensive investigation of the domain in order to locate the largest number of optimal candidate solutions, and possibly the global optimum. In this paper a tool for the preliminary definition of interplanetary transfers with coast-thrust arcs and multiple swing-bys is presented. This goal is achieved by combining a novel methodology for the description of low-thrust arcs with a global optimization algorithm based on a hybridization of an evolutionary step and a deterministic step. Low-thrust arcs are described in a 3D model, in order to account for the beneficial effects of low-thrust propulsion on a change of inclination, resorting to a new methodology based on an inverse method. The two-point boundary value problem (TPBVP) associated with a thrust arc is solved by imposing a properly parameterized evolution of the orbital parameters, from which the acceleration required to follow the given trajectory with respect to the constraint set is obtained through simple algebraic computation. By this method a low-thrust transfer satisfying the boundary conditions on position and velocity can be quickly assessed, with low computational effort, since no numerical propagation is required. The hybrid global optimization algorithm consists of two steps: the evolutionary search locates a large number of optima, and eventually the global one, while the deterministic step consists of a branching process that exhaustively partitions the domain in order to characterize this complex space of solutions extensively. Furthermore, the approach implements a novel direct constraint-handling technique allowing the treatment of the mixed-integer nonlinear programming problems (MINLP) typical of multiple swing-by trajectories. A low-thrust transfer to Mars is studied as a test bed for the low-thrust model, presenting the main characteristics of the different shapes proposed and the features of the possible sub-arc segmentations between two planets with respect to different objective functions: minimum-time and minimum-fuel transfers. Various other test cases are also shown and further optimized, proving the effective capability of the proposed tool.
In Search of Determinism-Sensitive Region to Avoid Artefacts in Recurrence Plots
NASA Astrophysics Data System (ADS)
Wendi, Dadiyorto; Marwan, Norbert; Merz, Bruno
In an effort to reduce parameter uncertainties in constructing recurrence plots, and in particular to avoid potential artefacts, this paper presents a technique to derive an artefact-safe region of parameter sets. The technique exploits both the deterministic (incl. chaotic) and stochastic signal characteristics of recurrence quantification (i.e. diagonal structures). It is useful when the evaluated signal is known to be deterministic. This study focuses on recurrence plots generated from the reconstructed phase space, representing the many real application scenarios in which not all variables describing a system are available (data scarcity). The technique involves random shuffling of the original signal to destroy its original deterministic characteristics. Its purpose is to evaluate whether the determinism values of the original and the shuffled signal remain close together, which would suggest that the recurrence plot might comprise artefacts. The use of such a determinism-sensitive region should be accompanied by standard embedding optimization approaches, e.g. using indices like false nearest neighbours and mutual information, to produce a more reliable recurrence plot parameterization.
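A minimal sketch of the shuffle test: compute the determinism measure (DET, the fraction of recurrence points forming diagonal lines) for the original signal and for a randomly shuffled copy; if the two values stay close, the chosen parameters may produce artefacts. The embedding parameters and the threshold rule below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)

def determinism(x, m=3, tau=2, lmin=2):
    """DET of a recurrence plot built from a time-delay embedding of x."""
    n = len(x) - (m - 1) * tau
    emb = np.column_stack([x[i * tau:i * tau + n] for i in range(m)])
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    R = (d < 0.1 * d.max()).astype(int)   # fixed-fraction threshold (a choice)
    np.fill_diagonal(R, 0)                # exclude the line of identity
    diag_pts = 0
    for k in range(-(n - 1), n):
        line = np.diagonal(R, offset=k)
        # lengths of runs of consecutive recurrence points on this diagonal
        changes = np.flatnonzero(np.diff(np.concatenate(([0], line, [0]))))
        runs = np.diff(changes)[::2]
        diag_pts += int(sum(r for r in runs if r >= lmin))
    return diag_pts / R.sum()

x = np.sin(0.1 * np.arange(600)) + 0.05 * rng.standard_normal(600)
print(f"DET original: {determinism(x):.3f}")
print(f"DET shuffled: {determinism(rng.permutation(x)):.3f}")
```

A parameter set is flagged as suspect when the two printed values nearly coincide; sweeping m, tau and the threshold then traces out the determinism-sensitive region the paper proposes.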
Probabilistic direct counterfactual quantum communication
NASA Astrophysics Data System (ADS)
Zhang, Sheng
2017-02-01
It is striking that the quantum Zeno effect can be used to launch a direct counterfactual communication between two spatially separated parties, Alice and Bob. So far, existing protocols of this type only provide a deterministic counterfactual communication service. However, this counterfactuality comes at a price. Firstly, the transmission time is much longer than that of a classical transmission. Secondly, the chained-cycle structure makes such protocols more sensitive to channel noise. Here, we extend the idea of counterfactual communication and present a probabilistic counterfactual quantum communication protocol, which is proved to have advantages over the deterministic ones. Moreover, the presented protocol can evolve into a deterministic one solely by adjusting the parameters of the beam splitters. Project supported by the National Natural Science Foundation of China (Grant No. 61300203).
A Deterministic Transport Code for Space Environment Electrons
NASA Technical Reports Server (NTRS)
Nealy, John E.; Chang, C. K.; Norman, Ryan B.; Blattnig, Steve R.; Badavi, Francis F.; Adamczyk, Anne M.
2010-01-01
A deterministic computational procedure has been developed to describe transport of space environment electrons in various shield media. This code is an upgrade and extension of an earlier electron code. Whereas the former code was formulated on the basis of parametric functions derived from limited laboratory data, the present code utilizes well established theoretical representations to describe the relevant interactions and transport processes. The shield material specification has been made more general, as have the pertinent cross sections. A combined mean free path and average trajectory approach has been used in the transport formalism. Comparisons with Monte Carlo calculations are presented.
Ten-year global distribution of downwelling longwave radiation
NASA Astrophysics Data System (ADS)
Pavlakis, K. G.; Hatzidimitriou, D.; Matsoukas, C.; Drakakis, E.; Hatzianastassiou, N.; Vardavas, I.
2003-10-01
Downwelling longwave fluxes, DLFs, have been derived for each month over a ten year period (1984-1993), on a global scale with a resolution of 2.5° × 2.5°. The fluxes were computed using a deterministic model for atmospheric radiation transfer, along with satellite and reanalysis data for the key atmospheric input parameters, i.e. cloud properties, and specific humidity and temperature profiles. The cloud climatologies were taken from the latest released and improved International Satellite Climatology Project D2 series. Specific humidity and temperature vertical profiles were taken from three different reanalysis datasets; NCEP/NCAR, GEOS, and ECMWF (acronyms explained in main text). DLFs were computed for each reanalysis dataset, with differences reaching values as high as 30 Wm-2 in specific regions, particularly over high altitude areas and deserts. However, globally, the agreement is good, with the rms of the difference between the DLFs derived from the different reanalysis datasets ranging from 5 to 7 Wm-2. The results are presented as geographical distributions and as time series of hemispheric and global averages. The DLF time series based on the different reanalysis datasets show similar seasonal and inter-annual variations, and similar anomalies related to the 86/87 El Niño and 89/90 La Niña events. The global ten-year average of the DLF was found to be between 342.2 Wm-2 and 344.3 Wm-2, depending on the dataset. We also conducted a detailed sensitivity analysis of the calculated DLFs to the key input data. Plots are given that can be used to obtain a quick assessment of the sensitivity of the DLF to each of the three key climatic quantities, for specific climatic conditions corresponding to different regions of the globe. Our model downwelling fluxes are validated against available data from ground-based stations distributed over the globe, as given by the Baseline Surface Radiation Network. There is a negative bias of the model fluxes when compared against BSRN fluxes, ranging from -7 to -9 Wm-2, mostly caused by low cloud amount differences between the station and satellite measurements, particularly in cold climates. Finally, we compare our model results with those of other deterministic models and general circulation models.
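For orientation, a far simpler empirical alternative to a full radiative-transfer computation is a Brunt-type clear-sky formula, in which the effective atmospheric emissivity grows with near-surface vapour pressure. This is the classic parameterization, not the deterministic model used in the paper.

```python
import math

SIGMA = 5.670e-8  # Stefan-Boltzmann constant [W m^-2 K^-4]

def dlf_clear_sky(T_air, e_hpa, a=0.52, b=0.065):
    """Brunt (1932) clear-sky downwelling longwave flux [W m^-2].

    T_air: near-surface air temperature [K]; e_hpa: vapour pressure [hPa];
    a, b: Brunt's classic coefficients.
    """
    emissivity = a + b * math.sqrt(e_hpa)
    return emissivity * SIGMA * T_air ** 4

# rough mid-latitude values: T = 290 K, e = 15 hPa
print(f"clear-sky DLF ~ {dlf_clear_sky(290.0, 15.0):.0f} W m^-2")
```

The sensitivity plots in the paper play the same role for the full model that the partial derivatives of this formula with respect to temperature and humidity play for the empirical one.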
Equation-free analysis of agent-based models and systematic parameter determination
NASA Astrophysics Data System (ADS)
Thomas, Spencer A.; Lloyd, David J. B.; Skeldon, Anne C.
2016-12-01
Agent-based models (ABMs) are increasingly used in social science, economics, mathematics, biology and computer science to describe time-dependent systems in circumstances where a description in terms of equations is difficult. Yet few tools are currently available for the systematic analysis of ABM behaviour. Numerical continuation and bifurcation analysis is a well-established tool for the study of deterministic systems. Recently, equation-free (EF) methods have been developed to extend numerical continuation techniques to systems where the dynamics are described at a microscopic scale and continuation of a macroscopic property of the system is considered. To date, the practical use of EF methods has been limited by (1) the overhead of application-specific implementation; (2) the laborious configuration of problem-specific parameters; and (3) large ensemble sizes (potentially) leading to computationally restrictive run-times. In this paper we address these issues with our tool for the EF continuation of stochastic systems, which includes algorithms to systematically configure problem-specific parameters and enhance robustness to noise. Our tool is generic, can be applied to any 'black-box' simulator, and determines the essential EF parameters prior to EF analysis. Robustness is significantly improved using our convergence-constraint with corrector-repeat (C3R) method. This algorithm automatically detects outliers based on the dynamics of the underlying system, enabling both an order-of-magnitude reduction in ensemble size and continuation of systems at much higher levels of noise than classical approaches allow. We demonstrate our method on several ABMs, revealing parameter dependence and providing bifurcation and stability analysis of these complex systems, giving a deep understanding of their dynamical behaviour in a way that is not otherwise easily obtainable. In each case we demonstrate our systematic parameter determination stage for configuring the system-specific EF parameters.
Data Mining for Efficient and Accurate Large Scale Retrieval of Geophysical Parameters
NASA Astrophysics Data System (ADS)
Obradovic, Z.; Vucetic, S.; Peng, K.; Han, B.
2004-12-01
Our effort is devoted to developing data mining technology for improving the efficiency and accuracy of geophysical parameter retrievals by learning a mapping from observation attributes to the corresponding parameters within the framework of classification and regression. We describe a method for efficient learning of neural network-based classification and regression models from high-volume data streams. The proposed procedure automatically learns a series of neural networks of different complexities on smaller data stream chunks and then properly combines them into an ensemble predictor through averaging. Based on the idea of progressive sampling, the proposed approach starts with a very simple network trained on a very small chunk and then gradually increases the model complexity and the chunk size until the learning performance no longer improves. Our empirical study on aerosol retrievals from data obtained with the MISR instrument mounted on the Terra satellite suggests that the proposed method is successful in learning complex concepts from large data streams with near-optimal computational effort. We also report on a method that complements deterministic retrievals by constructing accurate predictive algorithms and applying them to appropriately selected subsets of observed data. The method is based on developing more accurate predictors aimed at capturing the global and local properties synthesized in a region. The procedure starts by learning the global properties of data sampled over the entire space, and continues by constructing specialized models on selected localized regions. The global and local models are integrated through an automated procedure that determines the optimal trade-off between the two components, with the objective of minimizing the overall mean square error over a specific region. Our experimental results on MISR data showed that the combined model can increase retrieval accuracy significantly. Preliminary results on various large heterogeneous spatial-temporal datasets provide evidence that the benefits of the proposed methodology for efficient and accurate learning extend beyond the retrieval of geophysical parameters.
Quantum repeaters based on trapped ions with decoherence-free subspace encoding
NASA Astrophysics Data System (ADS)
Zwerger, M.; Lanyon, B. P.; Northup, T. E.; Muschik, C. A.; Dür, W.; Sangouard, N.
2017-12-01
Quantum repeaters provide an efficient solution to distribute Bell pairs over arbitrarily long distances. While scalable architectures are demanding regarding the number of qubits that need to be controlled, here we present a quantum repeater scheme aiming to extend the range of present day quantum communications that could be implemented in the near future with trapped ions in cavities. We focus on an architecture where ion-photon entangled states are created locally and subsequently processed with linear optics to create elementary links of ion-ion entangled states. These links are then used to distribute entangled pairs over long distances using successive entanglement swapping operations performed using deterministic ion-ion gates. We show how this architecture can be implemented while encoding the qubits in a decoherence-free subspace to protect them against collective dephasing. This results in a protocol that can be used to violate a Bell inequality over distances of about 800 km assuming state-of-the-art parameters. We discuss how this could be improved to several thousand kilometres in future setups.
Stochastic Evolutionary Algorithms for Planning Robot Paths
NASA Technical Reports Server (NTRS)
Fink, Wolfgang; Aghazarian, Hrand; Huntsberger, Terrance; Terrile, Richard
2006-01-01
A computer program implements stochastic evolutionary algorithms for planning and optimizing collision-free paths for robots and their jointed limbs. Stochastic evolutionary algorithms can be made to produce acceptably close approximations to exact, optimal solutions for path-planning problems while often demanding much less computation than do the exhaustive-search and deterministic inverse-kinematics algorithms that have been used previously for this purpose. Hence, the present software is better suited for application aboard robots having limited computing capabilities. The stochastic aspect lies in the use of simulated annealing to (1) prevent trapping of an optimization algorithm in local minima of an energy-like error measure by which the fitness of a trial solution is evaluated, while (2) ensuring that the entire multidimensional configuration and parameter space of the path-planning problem is sampled efficiently with respect to both robot joint angles and computation time. Simulated annealing is an established technique for avoiding local minima in multidimensional optimization problems, but has not, until now, been applied to planning collision-free robot paths by use of low-power computers.
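A minimal sketch of the simulated-annealing ingredient: Metropolis acceptance of occasionally uphill moves under a geometrically cooled temperature, applied to a toy two-link arm that must reach a goal while keeping its elbow away from an obstacle. The geometry, penalty and schedule are hypothetical.

```python
import math
import random

random.seed(7)

L1, L2 = 1.0, 0.8                      # link lengths
GOAL = (1.2, 0.9)
OBSTACLE, R_OBS = (0.7, 0.4), 0.3      # keep the elbow outside this disc

def energy(q):
    """Energy-like error: distance to goal plus a collision penalty."""
    t1, t2 = q
    ex, ey = L1 * math.cos(t1), L1 * math.sin(t1)                  # elbow
    hx, hy = ex + L2 * math.cos(t1 + t2), ey + L2 * math.sin(t1 + t2)
    goal_err = math.hypot(hx - GOAL[0], hy - GOAL[1])
    pen = max(0.0, R_OBS - math.hypot(ex - OBSTACLE[0], ey - OBSTACLE[1]))
    return goal_err + 10.0 * pen

q, e, T = [0.0, 0.0], energy([0.0, 0.0]), 1.0
for _ in range(20_000):
    cand = [a + random.gauss(0.0, 0.1) for a in q]
    e_new = energy(cand)
    # Metropolis rule: always accept downhill, sometimes accept uphill
    if e_new < e or random.random() < math.exp((e - e_new) / T):
        q, e = cand, e_new
    T = max(1e-3, 0.999 * T)           # geometric cooling schedule

print(f"joint angles ({q[0]:.2f}, {q[1]:.2f}) rad, residual cost {e:.4f}")
```

The cooling schedule is the key design choice: early high temperature gives broad sampling of the joint-angle space, late low temperature polishes the best basin, matching points (1) and (2) above.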
Duret, Steven; Guillier, Laurent; Hoang, Hong-Minh; Flick, Denis; Laguerre, Onrawee
2014-06-16
Deterministic models describing heat transfer and microbial growth in the cold chain are widely studied. However, it is difficult to apply them in practice because of several variable parameters in the logistic supply chain (e.g., ambient temperature varying with season, and product residence time in refrigeration equipment), the product's characteristics (e.g., pH and water activity) and the microbial characteristics (e.g., initial microbial load and lag time). This variability can lead to different bacterial growth rates in food products and has to be considered to properly predict the consumer's exposure and identify the key parameters of the cold chain. This study proposes a new approach that combines deterministic (heat transfer) and stochastic (Monte Carlo) modeling to account for the variability in the logistic supply chain and the product's characteristics. Contrary to existing approaches that directly use a prescribed time-temperature profile, the proposed model generates a realistic product time-temperature history by predicting the product temperature evolution from the thermostat setting and the ambient temperature. The developed methodology was applied to the cold chain of cooked ham, including the display cabinet, transport by the consumer and the domestic refrigerator, to predict the evolution of state variables such as the temperature and the growth of Listeria monocytogenes. The impacts of the input factors were calculated and ranked. It was found that the product's time-temperature history and the initial contamination level are the main determinants of consumers' exposure. A refined analysis was then applied, revealing the importance of consumer behavior for Listeria monocytogenes exposure. Copyright © 2014. Published by Elsevier B.V.
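A minimal sketch of the combined approach: each Monte Carlo draw samples the variable logistic and product parameters, and a deterministic growth model is accumulated along the resulting stage-by-stage history. Here the heat-transfer stage is collapsed to constant stage temperatures, lag time and stationary phase are ignored, and the growth parameters are order-of-magnitude placeholders rather than the paper's values.

```python
import numpy as np

rng = np.random.default_rng(8)

def simulate_once():
    """One Monte Carlo draw through a simplified two-stage cold chain."""
    t_cab = rng.uniform(6.0, 72.0)      # hours in display cabinet
    T_cab = rng.normal(4.0, 1.5)        # cabinet temperature [degC]
    t_home = rng.uniform(24.0, 240.0)   # hours in domestic refrigerator
    T_home = rng.normal(6.0, 2.0)
    logN = rng.normal(0.5, 0.5)         # initial load [log10 CFU/g]

    for hours, T in ((t_cab, T_cab), (t_home, T_home)):
        # square-root (Ratkowsky-type) model; b, Tmin are rough placeholders
        b, Tmin = 0.023, -1.18
        mu = (b * max(0.0, T - Tmin)) ** 2      # specific growth rate [1/h]
        logN += mu * hours / np.log(10.0)       # exponential growth, log10 units
    return logN

draws = np.array([simulate_once() for _ in range(20_000)])
print(f"median final load: {np.median(draws):.2f} log10 CFU/g")
print(f"P(load > 2 log10 CFU/g) = {(draws > 2.0).mean():.3f}")
```

Ranking input factors then reduces to correlating each sampled input with the final load across draws, which is how the time-temperature history and the initial contamination emerge as dominant.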
Current issues and actions in radiation protection of patients.
Holmberg, Ola; Malone, Jim; Rehani, Madan; McLean, Donald; Czarwinski, Renate
2010-10-01
Medical application of ionizing radiation is a massive and increasing activity globally. While the use of ionizing radiation in medicine brings tremendous benefits to the global population, the associated risks due to stochastic and deterministic effects make it necessary to protect patients from potential harm. Current issues in radiation protection of patients include not only the rapidly increasing collective dose to the global population from medical exposure, but also that a substantial percentage of diagnostic imaging examinations are unnecessary, and the cumulative dose to individuals from medical exposure is growing. In addition to this, continued reports on deterministic injuries from safety related events in the medical use of ionizing radiation are raising awareness on the necessity for accident prevention measures. The International Atomic Energy Agency is engaged in several activities to reverse the negative trends of these current issues, including improvement of the justification process, the tracking of radiation history of individual patients, shared learning of safety significant events, and the use of comprehensive quality audits in the clinical environment. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Machado, M. R.; Adhikari, S.; Dos Santos, J. M. C.; Arruda, J. R. F.
2018-03-01
Structural parameter estimation is affected not only by measurement noise but also by unknown uncertainties which are present in the system. Deterministic structural model updating methods minimise the difference between experimentally measured data and computational prediction. Sensitivity-based methods are very efficient in solving structural model updating problems. Material and geometrical parameters of the structure such as Poisson's ratio, Young's modulus, mass density, modal damping, etc. are usually considered deterministic and homogeneous. In this paper, the distributed and non-homogeneous characteristics of these parameters are considered in the model updating. The parameters are taken as spatially correlated random fields and are expanded in a spectral Karhunen-Loève (KL) decomposition. Using the KL expansion, the spectral dynamic stiffness matrix of the beam is expanded as a series in terms of discretized parameters, which can be estimated using sensitivity-based model updating techniques. Numerical and experimental tests involving a beam with distributed bending rigidity and mass density are used to verify the proposed method. This extension of standard model updating procedures can enhance the dynamic description of structural dynamic models.
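A minimal sketch of the discrete Karhunen-Loève step: eigendecompose the covariance matrix of a spatially correlated field along the beam (here an exponential covariance with illustrative values) and truncate to a few modes, whose coordinates become the updating parameters.

```python
import numpy as np

# 1D random field (e.g. bending rigidity along a beam), exponential covariance
n, L, lc, sigma = 200, 1.0, 0.2, 0.1
x = np.linspace(0.0, L, n)
C = sigma ** 2 * np.exp(-np.abs(x[:, None] - x[None, :]) / lc)

# discrete KL modes = eigenpairs of the covariance matrix
vals, vecs = np.linalg.eigh(C)
vals, vecs = vals[::-1], vecs[:, ::-1]   # descending eigenvalues

m = 8                                    # truncation order
print(f"{m} KL terms capture {100.0 * vals[:m].sum() / vals.sum():.1f}% of the variance")

# one realization of the truncated field around a nominal stiffness of 1.0;
# in updating, the coordinates xi are the unknowns to be identified
rng = np.random.default_rng(9)
xi = rng.standard_normal(m)
field = 1.0 * (1.0 + vecs[:, :m] @ (np.sqrt(vals[:m]) * xi))
print(f"field range: [{field.min():.3f}, {field.max():.3f}]")
```

Truncation is what makes sensitivity-based updating tractable: instead of one unknown per node, only the m KL coordinates enter the expanded dynamic stiffness matrix.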
Deterministic chaos in entangled eigenstates
NASA Astrophysics Data System (ADS)
Schlegel, K. G.; Förster, S.
2008-05-01
We investigate the problem of deterministic chaos in connection with entangled states using the Bohmian formulation of quantum mechanics. We show, for a two-particle system in a harmonic oscillator potential with entanglement and three energy eigenvalues, that the maximum Lyapunov parameters of a representative ensemble of trajectories develop at large times into a narrow positive distribution, which indicates nearly complete chaotic dynamics. We also briefly present results from two time-dependent systems, the anisotropic and the Rabi oscillator.
Greek classicism in living structure? Some deductive pathways in animal morphology.
Zweers, G A
1985-01-01
Classical temples in ancient Greece show two deterministic illusionistic principles of architecture, which govern their functional design: geometric proportionalism and a set of illusion-strengthening rules in the proportionalism's "stochastic margin". Animal morphology, in its mechanistic-deductive revival, applies just one architectural principle, which is not always satisfactory. Whether a "Greek Classical" situation occurs in the architecture of living structure is to be investigated by extreme testing with deductive methods. Three deductive methods for explanation of living structure in animal morphology are proposed: the parts, the compromise, and the transformation deduction. The methods are based upon the systems concept for an organism, the flow chart for a functionalistic picture, and the network chart for a structuralistic picture, whereas the "optimal design" serves as the architectural principle for living structure. These methods show clearly the high explanatory power of deductive methods in morphology, but they also make one open end most explicit: neutral issues do exist. Full explanation of living structure asks for three entries: functional design within architectural and transformational constraints. The transformational constraint brings necessarily in a stochastic component: an at random variation being a sort of "free management space". This variation must be a variation from the deterministic principle of the optimal design, since any transformation requires space for plasticity in structure and action, and flexibility in role fulfilling. Nevertheless, finally the question comes up whether for animal structure a similar situation exists as in Greek Classical temples. This means that the at random variation, that is found when the optimal design is used to explain structure, comprises apart from a stochastic part also real deviations being yet another deterministic part. This deterministic part could be a set of rules that governs actualization in the "free management space".
Associative memory in an analog iterated-map neural network
NASA Astrophysics Data System (ADS)
Marcus, C. M.; Waugh, F. R.; Westervelt, R. M.
1990-03-01
The behavior of an analog neural network with parallel dynamics is studied analytically and numerically for two associative-memory learning algorithms, the Hebb rule and the pseudoinverse rule. Phase diagrams in the parameter space of analog gain β and storage ratio α are presented. For both learning rules, the networks have large "recall" phases in which retrieval states exist and convergence to a fixed point is guaranteed by a global stability criterion. We also demonstrate numerically that using a reduced analog gain increases the probability of recall starting from a random initial state. This phenomenon is comparable to thermal annealing used to escape local minima but has the advantage of being deterministic, and therefore easily implemented in electronic hardware. Similarities and differences between analog neural networks and networks with two-state neurons at finite temperature are also discussed.
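A minimal sketch of such an analog network with parallel (synchronous) dynamics and Hebb-rule storage is given below; network size, load and gain are arbitrary, and the update x -> tanh(beta * W x) is the generic analog iterated map rather than the paper's exact formulation. Starting from a corrupted cue, the overlap with the stored pattern indicates recall.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P, beta = 200, 10, 4.0
xi = rng.choice([-1.0, 1.0], size=(P, N))      # random binary patterns
W = (xi.T @ xi) / N                             # Hebb rule
np.fill_diagonal(W, 0.0)                        # no self-coupling

x = xi[0] * np.where(rng.random(N) < 0.2, -1.0, 1.0)  # 20%-corrupted cue
for _ in range(50):                             # parallel analog updates
    x = np.tanh(beta * (W @ x))
print(f"overlap with stored pattern: {abs(x @ xi[0]) / N:.2f}")
```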
Effects of structural error on the estimates of parameters of dynamical systems
NASA Technical Reports Server (NTRS)
Hadaegh, F. Y.; Bekey, G. A.
1986-01-01
In this paper, the notion of 'near-equivalence in probability' is introduced for identifying a system in the presence of several error sources. Following some basic definitions, necessary and sufficient conditions for the identifiability of parameters are given. The effects of structural error on the parameter estimates for both the deterministic and stochastic cases are considered.
Bayesian Estimation of the DINA Model with Gibbs Sampling
ERIC Educational Resources Information Center
Culpepper, Steven Andrew
2015-01-01
A Bayesian model formulation of the deterministic inputs, noisy "and" gate (DINA) model is presented. Gibbs sampling is employed to simulate from the joint posterior distribution of item guessing and slipping parameters, subject attribute parameters, and latent class probabilities. The procedure extends concepts in Béguin and Glas,…
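For readers unfamiliar with the DINA model, the sketch below simulates responses from its likelihood, in which a guessing parameter g_j and a slipping parameter s_j govern each item; the Q-matrix, sample sizes and parameter ranges are invented for illustration, and a full Gibbs sampler would add conjugate updates around this complete-data likelihood.

```python
import numpy as np

rng = np.random.default_rng(1)
I, J, K = 500, 20, 3                       # examinees, items, attributes
Q = rng.integers(0, 2, size=(J, K))        # Q-matrix (item-attribute loadings)
Q[Q.sum(1) == 0, 0] = 1                    # every item requires some attribute
alpha = rng.integers(0, 2, size=(I, K))    # latent attribute profiles
g = rng.uniform(0.05, 0.2, J)              # guessing parameters
s = rng.uniform(0.05, 0.2, J)              # slipping parameters

# eta[i, j] = 1 iff examinee i has every attribute required by item j
eta = (alpha @ Q.T == Q.sum(axis=1)).astype(float)
p = (1 - s) ** eta * g ** (1 - eta)        # DINA response probabilities
X = (rng.random((I, J)) < p).astype(int)   # simulated item responses
loglik = (X * np.log(p) + (1 - X) * np.log(1 - p)).sum()
print(f"complete-data log-likelihood: {loglik:.1f}")
```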
Computing Interactions Of Free-Space Radiation With Matter
NASA Technical Reports Server (NTRS)
Wilson, J. W.; Cucinotta, F. A.; Shinn, J. L.; Townsend, L. W.; Badavi, F. F.; Tripathi, R. K.; Silberberg, R.; Tsao, C. H.; Badwar, G. D.
1995-01-01
High Charge and Energy Transport (HZETRN) computer program computationally efficient, user-friendly package of software addressing problem of transport of, and shielding against, radiation in free space. Designed as "black box" for design engineers not concerned with physics of underlying atomic and nuclear radiation processes in free-space environment, but rather primarily interested in obtaining fast and accurate dosimetric information for design and construction of modules and devices for use in free space. Computational efficiency achieved by unique algorithm based on deterministic approach to solution of Boltzmann equation rather than computationally intensive statistical Monte Carlo method. Written in FORTRAN.
NASA Astrophysics Data System (ADS)
Turnbull, Heather; Omenzetter, Piotr
2018-03-01
Difficulties associated with current health monitoring and inspection practices, combined with the harsh, often remote, operational environments of wind turbines, highlight the requirement for a non-destructive evaluation system capable of remotely monitoring the current structural state of turbine blades. This research adopted a physics-based structural health monitoring methodology through calibration of a finite element model using inverse techniques. A 2.36 m blade from a 5 kW turbine was used as an experimental specimen, with operational modal analysis techniques utilised to obtain the modal properties of the system. Modelling the experimental responses as fuzzy numbers using the sub-level technique, uncertainty in the response parameters was propagated back through the model and into the updating parameters. Initially, experimental responses of the blade were obtained, and a numerical model of the blade was created and updated. Deterministic updating was carried out through formulation and minimisation of a deterministic objective function using both the firefly algorithm and the virus optimisation algorithm. Uncertainty in the experimental responses was modelled using triangular membership functions, allowing membership functions of the updating parameters (Young's modulus and shear modulus) to be obtained. The firefly algorithm and virus optimisation algorithm were again utilised, this time in the solution of fuzzy objective functions. This enabled the uncertainty associated with the updating parameters to be quantified. Varying damage location and severity were simulated experimentally through the addition of small masses to the structure intended to cause a structural alteration. A damaged model was created, modelling four variable-magnitude nonstructural masses at predefined points, and updated to provide a deterministic damage prediction and information on the parameters' uncertainty via fuzzy updating.
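The fuzzy-updating machinery above reduces, level by level, to interval propagation through α-cuts of the triangular membership functions. A minimal sketch, with an invented single-degree-of-freedom frequency model standing in for the blade model:

```python
import numpy as np

def alpha_cut(tri, alpha):
    """Interval (alpha-cut) of a triangular fuzzy number (left, peak, right)."""
    l, p, r = tri
    return l + alpha * (p - l), r - alpha * (r - p)

# Propagate fuzzy stiffness k and mass m through f = sqrt(k/m)/(2*pi).
# All numbers are illustrative, not the blade model's values.
k_fuzzy, m_fuzzy = (0.9e4, 1.0e4, 1.1e4), (0.95, 1.0, 1.05)
for alpha in (0.0, 0.5, 1.0):
    (k_lo, k_hi) = alpha_cut(k_fuzzy, alpha)
    (m_lo, m_hi) = alpha_cut(m_fuzzy, alpha)
    f_lo = np.sqrt(k_lo / m_hi) / (2 * np.pi)  # monotone: min at low k, high m
    f_hi = np.sqrt(k_hi / m_lo) / (2 * np.pi)
    print(f"alpha={alpha:.1f}: f in [{f_lo:.2f}, {f_hi:.2f}] Hz")
```

In the inverse (updating) direction, an optimiser is run at each α-level to find the parameter intervals consistent with the measured response intervals, which is where algorithms such as the firefly algorithm enter.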
Identification of gene regulation models from single-cell data
NASA Astrophysics Data System (ADS)
Weber, Lisa; Raymond, William; Munsky, Brian
2018-09-01
In quantitative analyses of biological processes, one may use many different scales of models (e.g. spatial or non-spatial, deterministic or stochastic, time-varying or at steady-state) or many different approaches to match models to experimental data (e.g. model fitting or parameter uncertainty/sloppiness quantification with different experiment designs). These different analyses can lead to surprisingly different results, even when applied to the same data and the same model. We use a simplified gene regulation model to illustrate many of these concerns, especially for ODE analyses of deterministic processes, chemical master equation and finite state projection analyses of heterogeneous processes, and stochastic simulations. For each analysis, we employ MATLAB and PYTHON software to consider a time-dependent input signal (e.g. a kinase nuclear translocation) and several model hypotheses, along with simulated single-cell data. We illustrate different approaches (e.g. deterministic and stochastic) to identify the mechanisms and parameters of the same model from the same simulated data. For each approach, we explore how uncertainty in parameter space varies with respect to the chosen analysis approach or specific experiment design. We conclude with a discussion of how our simulated results relate to the integration of experimental and computational investigations to explore signal-activated gene expression models in yeast (Neuert et al 2013 Science 339 584–7) and human cells (Senecal et al 2014 Cell Rep. 8 75–83).
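Among the analysis scales listed above, the stochastic simulation route is the easiest to sketch. Below is a minimal Gillespie simulation of constitutive mRNA production and degradation; the two-reaction model and its rates are illustrative stand-ins for the paper's regulated, time-varying system.

```python
import numpy as np

def ssa_birth_death(k_prod=10.0, k_deg=1.0, t_end=10.0, seed=0):
    """Gillespie simulation of mRNA production/degradation: 0 -> m -> 0."""
    rng = np.random.default_rng(seed)
    t, m, ts, ms = 0.0, 0, [0.0], [0]
    while t < t_end:
        a = np.array([k_prod, k_deg * m])      # reaction propensities
        a0 = a.sum()
        t += rng.exponential(1.0 / a0)         # time to next reaction
        m += 1 if rng.random() < a[0] / a0 else -1
        ts.append(t); ms.append(m)
    return np.array(ts), np.array(ms)

ts, ms = ssa_birth_death()
print("final copy number:", ms[-1], "(deterministic steady state: 10)")
```

The corresponding ODE analysis would keep only the mean, dm/dt = k_prod - k_deg*m, which is one concrete way the deterministic and stochastic routes can disagree on parameter uncertainty.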
Hall, Gunnsteinn; Liang, Wenxuan; Li, Xingde
2017-10-01
Collagen fiber alignment derived from second harmonic generation (SHG) microscopy images can be important for disease diagnostics. Image processing algorithms are needed to robustly quantify the alignment in images with high sensitivity and reliability. Fourier transform (FT) magnitude, 2D power spectrum, and image autocorrelation have previously been used to extract fiber information from images by assuming a certain mathematical model (e.g. Gaussian distribution of the fiber-related parameters) and fitting. The fitting process is slow and fails to converge when the data is not Gaussian. Herein we present an efficient constant-time deterministic algorithm which characterizes the symmetry of the FT magnitude image in terms of a single parameter, named the fiber alignment anisotropy R, ranging from 0 (randomized fibers) to 1 (perfect alignment). This represents an important improvement of the technology and may bring us one step closer to utilizing the technology for various applications in real time. In addition, we present a digital image phantom-based framework for characterizing and validating the algorithm, as well as assessing the robustness of the algorithm against different perturbations.
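The sketch below computes a moment-based anisotropy of the centred FT magnitude that, like the paper's R, runs from 0 (isotropic) to 1 (aligned); it is an analogue built from second moments of the spectrum, not the authors' exact formula.

```python
import numpy as np

def alignment_anisotropy(img):
    """Moment-based anisotropy of the centred FT magnitude:
    0 = isotropic spectrum, 1 = perfectly aligned fibers."""
    F = np.abs(np.fft.fftshift(np.fft.fft2(img - img.mean())))
    ny, nx = F.shape
    y, x = np.mgrid[0:ny, 0:nx]
    x = x - nx // 2
    y = y - ny // 2
    w = F / F.sum()                          # spectrum as a weight field
    cxx = (w * x * x).sum(); cyy = (w * y * y).sum(); cxy = (w * x * y).sum()
    evals = np.linalg.eigvalsh([[cxx, cxy], [cxy, cyy]])
    return (evals[1] - evals[0]) / (evals[1] + evals[0])

# Synthetic phantoms: horizontal stripes give high R, white noise gives low R
yy, xx = np.mgrid[0:128, 0:128]
print(alignment_anisotropy(np.sin(yy / 3.0)))                              # ~1
print(alignment_anisotropy(np.random.default_rng(0).random((128, 128))))   # ~0
```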
Motion prediction of a non-cooperative space target
NASA Astrophysics Data System (ADS)
Zhou, Bang-Zhao; Cai, Guo-Ping; Liu, Yun-Meng; Liu, Pan
2018-01-01
Capturing a non-cooperative space target is a tremendously challenging research topic. Effective acquisition of motion information of the space target is the premise for realizing target capture. In this paper, motion prediction of a free-floating non-cooperative target in space is studied and a motion prediction algorithm is proposed. In order to predict the motion of the free-floating non-cooperative target, dynamic parameters of the target, such as inertia, angular momentum and kinetic energy, must first be identified (estimated); the predicted motion of the target can then be acquired by substituting these identified parameters into Euler's equations for the target. Accurate prediction needs precise identification. This paper presents an effective method to identify these dynamic parameters of a free-floating non-cooperative target. The method consists of two steps: (1) a rough estimate of the parameters is computed using motion observation data of the target, and (2) the best estimate of the parameters is found by an optimization method. In the optimization problem, the objective function is based on the difference between the observed and the predicted motion, and the interior-point method (IPM) is chosen as the optimization algorithm, which starts at the rough estimate obtained in the first step and finds a global minimum of the objective function under the guidance of the objective function's gradient. The search for the global minimum by IPM is therefore fast, and an accurate identification can be obtained in time. The numerical results show that the proposed motion prediction algorithm is able to predict the motion of the target.
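With the inertia parameters in hand, the prediction step reduces to integrating the torque-free Euler equations in the body frame. A sketch with assumed principal inertias and an RK4 integrator follows; it also prints the kinetic energy and angular-momentum magnitude, the invariants an identification scheme can exploit.

```python
import numpy as np

I_body = np.diag([1.0, 2.0, 3.0])            # assumed principal inertias (kg m^2)
I_inv = np.linalg.inv(I_body)

def euler_rhs(w):
    """Torque-free Euler equations in the body frame: I w' = -(w x I w)."""
    return I_inv @ (-np.cross(w, I_body @ w))

def rk4(w, dt):
    k1 = euler_rhs(w); k2 = euler_rhs(w + 0.5 * dt * k1)
    k3 = euler_rhs(w + 0.5 * dt * k2); k4 = euler_rhs(w + dt * k3)
    return w + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

w = np.array([0.1, 2.0, 0.1])                # spin near the unstable middle axis
dt = 0.001
for _ in range(20000):
    w = rk4(w, dt)
# Invariants that the identified parameters must reproduce:
print("kinetic energy:", 0.5 * w @ I_body @ w)
print("|angular momentum|:", np.linalg.norm(I_body @ w))
```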
Global stability of self-gravitating discs in modified gravity
NASA Astrophysics Data System (ADS)
Ghafourian, Neda; Roshan, Mahmood
2017-07-01
Using N-body simulations, we study the global stability of a self-gravitating disc in the context of modified gravity (MOG). This theory is a relativistic scalar-tensor-vector theory of gravity and is presented to address the dark matter problem. In the weak field limit, MOG possesses two free parameters α and μ0, which have already been determined using the rotation curve data of spiral galaxies. The evolution of a stellar self-gravitating disc and, more specifically, the bar instability in MOG are investigated and compared to the Newtonian case. Our models have exponential and Mestel-like surface densities, Σ ∝ exp (-r/h) and Σ ∝ 1/r. It is found that, surprisingly, the discs are more stable against the bar mode in MOG than in Newtonian gravity; in other words, the bar growth rate is effectively slower than in the Newtonian discs. We also show that both free parameters (i.e. α and μ0) have stabilizing effects: an increase in these parameters will decrease the bar growth rate.
Sensitivity analysis in a Lassa fever deterministic mathematical model
NASA Astrophysics Data System (ADS)
Abdullahi, Mohammed Baba; Doko, Umar Chado; Mamuda, Mamman
2015-05-01
Lassa virus, which causes Lassa fever, is on the list of potential bio-weapons agents. It was recently imported into Germany, the Netherlands, the United Kingdom and the United States as a consequence of the rapid growth of international traffic. A model with five mutually exclusive compartments related to Lassa fever is presented and the basic reproduction number analyzed. A sensitivity analysis of the deterministic model is performed in order to determine the relative importance of the model parameters to the disease transmission. The result of the sensitivity analysis shows that the most sensitive parameter is human immigration, followed by the human recovery rate, then person-to-person contact. This suggests that control strategies should target human immigration, effective drugs for treatment, and education to reduce person-to-person contact.
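Sensitivity analyses of this kind usually report the normalised forward sensitivity index, (dR0/dp)(p/R0). The sketch below evaluates it by central differences for an invented illustrative reproduction-number expression; neither the formula nor the parameter values are those of the Lassa model.

```python
def R0(params):
    """Illustrative reproduction number, NOT the Lassa model's actual R0."""
    return params["beta"] / (params["gamma"] + params["mu"])

base = {"beta": 0.3, "gamma": 0.1, "mu": 0.02}   # assumed parameter values

def sensitivity_index(name, h=1e-6):
    """Normalised forward sensitivity: (dR0/dp) * (p / R0)."""
    up, dn = dict(base), dict(base)
    up[name] *= 1 + h
    dn[name] *= 1 - h
    dR0 = (R0(up) - R0(dn)) / (2 * h * base[name])   # central difference
    return dR0 * base[name] / R0(base)

for p in base:
    print(f"{p}: {sensitivity_index(p):+.3f}")
```

An index of +1 for beta, for instance, means a 1% rise in that contact rate raises R0 by 1%, which is how the ranking of "most sensitive" parameters above is obtained.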
The Existence and Stability Analysis of the Equilibria in Dengue Disease Infection Model
NASA Astrophysics Data System (ADS)
Anggriani, N.; Supriatna, A. K.; Soewono, E.
2015-06-01
In this paper we formulate an SIR (Susceptible - Infective - Recovered) model of Dengue fever transmission with constant recruitment. We found a threshold parameter K0, known as the Basic Reproduction Number (BRN). This model has two equilibria, a disease-free equilibrium and an endemic equilibrium. By constructing a suitable Lyapunov function, we show that the disease-free equilibrium is globally asymptotically stable whenever the BRN is less than one, and that when it is greater than one, the endemic equilibrium is globally asymptotically stable. Numerical results show the dynamics of each compartment together with the effect of multiple bio-agent intervention as a control on dengue transmission.
Effects of deterministic and random refuge in a prey-predator model with parasite infection.
Mukhopadhyay, B; Bhattacharyya, R
2012-09-01
Most natural ecosystem populations suffer from various infectious diseases, and the resulting host-pathogen dynamics depend on the host's characteristics. On the other hand, empirical evidence shows that for most host-pathogen systems, a part of the host population always forms a refuge. To study the role of refuge in the host-pathogen interaction, we study a predator-prey-pathogen model where the susceptible and the infected prey can retreat into refugia of constant size to evade predator attack. The stability aspects of the model system are investigated from a local and global perspective. The study reveals that the refuge sizes for the susceptible and the infected prey are the key parameters that control possible predator extinction as well as species co-existence. Next we perform a global study of the model system using Lyapunov functions and show the existence of a global attractor. Finally we perform a stochastic extension of the basic model to study the phenomenon of random refuge arising from various intrinsic, habitat-related and environmental factors. The stochastic model is analyzed for exponential mean square stability. Numerical study of the stochastic model shows that increasing the refuge rates has a stabilizing effect on the stochastic dynamics. Copyright © 2012 Elsevier Inc. All rights reserved.
Computer modeling of dynamic necking in bars
NASA Astrophysics Data System (ADS)
Partom, Yehuda; Lindenfeld, Avishay
2017-06-01
Necking of thin bodies (bars, plates, shells) is one form of strain localization in ductile materials that may lead to fracture. The phenomenon of necking has been studied extensively, initially for quasistatic loading and then also for dynamic loading. Nevertheless, many issues concerning necking are still unclear. Among these are: 1) is necking a random or deterministic process; 2) how does the specimen choose the final neck location; 3) to what extent do perturbations (material or geometrical) influence the neck forming process; and 4) how do various parameters (material, geometrical, loading) influence the neck forming process. Here we address these issues and others using computer simulations with a hydrocode. Among other things we find that: 1) neck formation is a deterministic process, and by changing one of the parameters influencing it monotonously, the final neck location moves monotonously as well; 2) the final neck location is sensitive to the radial velocity of the end boundaries, and as motion of these boundaries is not fully controlled in tests, this may be the reason why neck formation is sometimes regarded as a random process; and 3) neck formation is insensitive to small perturbations, which is probably why it is a deterministic process.
Modeling ebola virus disease transmissions with reservoir in a complex virus life ecology.
Berge, Tsanou; Bowong, Samuel; Lubuma, Jean; Manyombe, Martin Luther Mann
2018-02-01
We propose a new deterministic mathematical model for the transmission dynamics of Ebola Virus Disease (EVD) in a complex Ebola virus life ecology. Our model captures as much as possible the features and patterns of the disease evolution as a three-cycle transmission process in the two ways below. Firstly it involves the synergy between the epizootic phase (during which the disease circulates periodically amongst non-human primate populations and decimates them), the enzootic phase (during which the disease always remains in the fruit bat population) and the epidemic phase (during which the EVD threatens and decimates human populations). Secondly it takes into account the well-known, the probable/suspected and the hypothetical transmission mechanisms (including direct and indirect routes of contamination) between and within the three different types of populations consisting of humans, animals and fruit bats. The reproduction number R0 for the full model with the environmental contamination is derived, and the global asymptotic stability of the disease-free equilibrium is established when R0 < 1. It is conjectured that there exists a unique globally asymptotically stable endemic equilibrium for the full model when R0 > 1. The role of a contaminated environment is assessed by comparing the human infected component for the sub-model without the environment with that of the full model. Similarly, the sub-model without animals on the one hand and the sub-model without bats on the other hand are studied. It is shown that bats influence the dynamics of EVD more than the animals do. Global sensitivity analysis shows that the effective contact rate between humans and fruit bats and the mortality rate for bats are the most influential parameters on the latent and infected human individuals. Numerical simulations, apart from supporting the theoretical results and the existence of a unique globally asymptotically stable endemic equilibrium for the full model, suggest further that: (1) fruit bats are more important in the transmission processes and the endemicity level of EVD than animals, in line with biological findings which identified bats as a reservoir of Ebola viruses; (2) the indirect environmental contamination is detrimental to human beings, while it is almost insignificant for the transmission in bats.
An extension of stochastic hierarchy equations of motion for the equilibrium correlation functions
NASA Astrophysics Data System (ADS)
Ke, Yaling; Zhao, Yi
2017-06-01
In this paper, a traditional stochastic hierarchy equations of motion method is extended to correlated real-time and imaginary-time propagation for application in calculating equilibrium correlation functions. The central idea is a combined employment of stochastic unravelling and hierarchical techniques for the temperature-dependent and temperature-free parts of the influence functional, respectively, in the path integral formalism of open quantum systems coupled to a harmonic bath. The feasibility and validity of the proposed method are demonstrated in the emission spectra of a homodimer, compared against those obtained through the deterministic hierarchy equations of motion. Besides, it is interesting to find that the complex noises generated from a small portion of real-time and imaginary-time cross terms can be safely dropped to produce stable and accurate position and flux correlation functions in a broad parameter regime.
Application of the Probabilistic Dynamic Synthesis Method to the Analysis of a Realistic Structure
NASA Technical Reports Server (NTRS)
Brown, Andrew M.; Ferri, Aldo A.
1998-01-01
The Probabilistic Dynamic Synthesis method is a new technique for obtaining the statistics of a desired response engineering quantity for a structure with non-deterministic parameters. The method uses measured data from modal testing of the structure as the input random variables, rather than more "primitive" quantities like geometry or material variation. This modal information is much more comprehensive and easily measured than the "primitive" information. The probabilistic analysis is carried out using either response surface reliability methods or Monte Carlo simulation. A previous work verified the feasibility of the PDS method on a simple seven degree-of-freedom spring-mass system. In this paper, extensive issues involved with applying the method to a realistic three-substructure system are examined, and free and forced response analyses are performed. The results from using the method are promising, especially when the lack of alternatives for obtaining quantitative output for probabilistic structures is considered.
Application of the Probabilistic Dynamic Synthesis Method to Realistic Structures
NASA Technical Reports Server (NTRS)
Brown, Andrew M.; Ferri, Aldo A.
1998-01-01
The Probabilistic Dynamic Synthesis method is a technique for obtaining the statistics of a desired response engineering quantity for a structure with non-deterministic parameters. The method uses measured data from modal testing of the structure as the input random variables, rather than more "primitive" quantities like geometry or material variation. This modal information is much more comprehensive and easily measured than the "primitive" information. The probabilistic analysis is carried out using either response surface reliability methods or Monte Carlo simulation. In previous work, the feasibility of the PDS method applied to a simple seven degree-of-freedom spring-mass system was verified. In this paper, extensive issues involved with applying the method to a realistic three-substructure system are examined, and free and forced response analyses are performed. The results from using the method are promising, especially when the lack of alternatives for obtaining quantitative output for probabilistic structures is considered.
The Deterministic Origins of Sexism.
ERIC Educational Resources Information Center
Perry, Melissa J.; Albee, George W.
1998-01-01
Discusses the physical, sexual, and psychological ramifications of biological determinism using examples from the global status of women's health, the continuation of female genital mutilation, and the history of sexist beliefs in psychology that serve a social control function of creating and defining women's psychopathology. (Author/SLD)
Spatial scaling patterns and functional redundancies in a changing boreal lake landscape
Angeler, David G.; Allen, Craig R.; Uden, Daniel R.; Johnson, Richard K.
2015-01-01
Global transformations extend beyond local habitats; therefore, larger-scale approaches are needed to assess community-level responses and resilience to unfolding environmental changes. Using long-term data (1996–2011), we evaluated spatial patterns and functional redundancies in the littoral invertebrate communities of 85 Swedish lakes, with the objective of assessing their potential resilience to environmental change at regional scales (that is, spatial resilience). Multivariate spatial modeling was used to differentiate groups of invertebrate species exhibiting spatial patterns in composition and abundance (that is, deterministic species) from those lacking spatial patterns (that is, stochastic species). We then determined the functional feeding attributes of the deterministic and stochastic invertebrate species, to infer resilience. Between one and three distinct spatial patterns in invertebrate composition and abundance were identified in approximately one-third of the species; the remainder were stochastic. We observed substantial differences in metrics between deterministic and stochastic species. Functional richness and diversity decreased over time in the deterministic group, suggesting a loss of resilience in regional invertebrate communities. However, taxon richness and redundancy increased monotonically in the stochastic group, indicating the capacity of regional invertebrate communities to adapt to change. Our results suggest that a refined picture of spatial resilience emerges if patterns of both the deterministic and stochastic species are accounted for. Spatially extensive monitoring may help increase our mechanistic understanding of community-level responses and resilience to regional environmental change, insights that are critical for developing management and conservation agendas in this current period of rapid environmental transformation.
NASA Astrophysics Data System (ADS)
Sochi, Taha
2016-09-01
Several deterministic and stochastic multi-variable global optimization algorithms (Conjugate Gradient, Nelder-Mead, Quasi-Newton and global) are investigated in conjunction with energy minimization principle to resolve the pressure and volumetric flow rate fields in single ducts and networks of interconnected ducts. The algorithms are tested with seven types of fluid: Newtonian, power law, Bingham, Herschel-Bulkley, Ellis, Ree-Eyring and Casson. The results obtained from all those algorithms for all these types of fluid agree very well with the analytically derived solutions as obtained from the traditional methods which are based on the conservation principles and fluid constitutive relations. The results confirm and generalize the findings of our previous investigations that the energy minimization principle is at the heart of the flow dynamics systems. The investigation also enriches the methods of computational fluid dynamics for solving the flow fields in tubes and networks for various types of Newtonian and non-Newtonian fluids.
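As a toy instance of the energy-minimisation principle above: for a Newtonian fluid splitting between two parallel tubes, minimising the total viscous dissipation at fixed total flow reproduces the classical parallel-resistance split. Geometry and fluid values below are arbitrary.

```python
import numpy as np
from scipy.optimize import minimize

# Two parallel tubes carrying a Newtonian fluid: minimise total viscous
# dissipation R1*Q1^2 + R2*Q2^2 subject to Q1 + Q2 = Q (all values assumed).
mu, L, Q = 1e-3, 1.0, 1e-6                    # viscosity, length, total flow (SI)
R = lambda r: 8 * mu * L / (np.pi * r**4)     # Poiseuille hydraulic resistance
R1, R2 = R(1e-3), R(2e-3)

res = minimize(lambda q: R1 * q[0]**2 + R2 * (Q - q[0])**2, x0=[0.5 * Q])
q1 = res.x[0]
print("energy-minimising split:", q1 / Q, (Q - q1) / Q)
print("parallel-resistance split:", R2 / (R1 + R2))   # analytic check
```

For the non-Newtonian fluids listed in the abstract, the dissipation functional changes but the same variational structure applies.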
DOE Office of Scientific and Technical Information (OSTI.GOV)
Azunre, P.
In this paper, two novel techniques for bounding the solutions of parametric weakly coupled second-order semilinear parabolic partial differential equations are developed. The first provides a theorem to construct interval bounds, while the second provides a theorem to construct lower bounds convex and upper bounds concave in the parameter. The convex/concave bounds can be significantly tighter than the interval bounds because of the wrapping effect suffered by interval analysis in dynamical systems. Both types of bounds are computationally cheap to construct, requiring the solution of auxiliary systems twice and four times larger than the original system, respectively. An illustrative numerical example of bound construction and use for deterministic global optimization within a simple serial branch-and-bound algorithm, implemented numerically using interval arithmetic and a generalization of McCormick's relaxation technique, is presented. Finally, problems within the important class of reaction-diffusion systems may be optimized with these tools.
Structural Aspects of System Identification
NASA Technical Reports Server (NTRS)
Glover, Keith
1973-01-01
The problem of identifying linear dynamical systems is studied by considering structural and deterministic properties of linear systems that have an impact on stochastic identification algorithms. In particular, parametrizations of linear systems are considered such that there is a unique solution and all systems in an appropriate class can be represented. It is assumed that a parametrization of the system matrices has been established from a priori knowledge of the system, and the question is considered of when the unknown parameters of this system can be identified from input/output observations. It is assumed that the transfer function can be asymptotically identified, and conditions are derived for the local, global and partial identifiability of the parametrization. It is then shown that, with the right formulation, identifiability in the presence of feedback can be treated in the same way. Similarly, the identifiability of parametrizations of systems driven by unobserved white noise is considered using results from the theory of spectral factorization.
PROBABILISTIC SAFETY ASSESSMENT OF OPERATIONAL ACCIDENTS AT THE WASTE ISOLATION PILOT PLANT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rucker, D.F.
2000-09-01
This report presents a probabilistic safety assessment of radioactive doses as consequences of accident scenarios, to complement the deterministic assessment presented in the Waste Isolation Pilot Plant (WIPP) Safety Analysis Report (SAR). The International Council of Radiation Protection (ICRP) recommends both assessments be conducted to ensure that "an adequate level of safety has been achieved and that no major contributors to risk are overlooked" (ICRP 1993). To that end, the probabilistic assessment for the WIPP accident scenarios addresses the wide range of assumptions, e.g. the range of values representing the radioactive source of an accident, that could possibly have been overlooked by the SAR. Routine releases of radionuclides from the WIPP repository to the environment during waste emplacement operations are expected to be essentially zero. In contrast, potential accidental releases from postulated accident scenarios during waste handling and emplacement could be substantial, which necessitates radiological air monitoring and confinement barriers (DOE 1999). The WIPP SAR calculated doses from accidental releases to the on-site (at 100 m from the source) and off-site (at the Exclusive Use Boundary and Site Boundary) public by a deterministic approach. This approach, as demonstrated in the SAR, uses single-point values of key parameters to assess the 50-year, whole-body committed effective dose equivalent (CEDE). The basic assumptions used in the SAR to formulate the CEDE are retained for this report's probabilistic assessment. However, for the probabilistic assessment, single-point parameter values were replaced with probability density functions (PDF) and were sampled over an expected range. Monte Carlo simulations were run, in which 10,000 iterations were performed by randomly selecting one value for each parameter and calculating the dose. Statistical information was then derived from the 10,000-iteration batch, including the 5%, 50%, and 95% dose likelihoods and the sensitivity of each assumption to the calculated doses. As one would intuitively expect, the doses from the probabilistic assessment for most scenarios were found to be much less than those from the deterministic assessment. The lower dose of the probabilistic assessment can be attributed to a "smearing" of values from the high and low ends of the PDF spectrum of the various input parameters. The analysis also found a potential weakness in the deterministic analysis used in the SAR: a detail of drum loading was not taken into consideration. Waste emplacement operations thus far have handled drums from each shipment as a single unit, i.e. drums from each shipment are kept together. Shipments typically come from a single waste stream, and therefore the curie loading of each drum can be considered nearly identical to that of its neighbor. Calculations show that if there is a large number of drums in the accident scenario assessment, e.g. 28 drums in the waste hoist failure scenario (CH5), then the probabilistic dose assessment calculations will diverge from the deterministically determined doses. As currently calculated, the deterministic dose assessment assumes one drum loaded to the maximum allowable (80 PE-Ci) and the remaining drums at 10% of the maximum. The effective average drum curie content is therefore lower in the deterministic assessment than in the probabilistic assessment for a large number of drums.
EEG recommends that the WIPP SAR calculations be revisited and updated to include a probabilistic safety assessment.
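A schematic of the Monte Carlo procedure described above, with invented distributions and an invented one-line dose kernel standing in for the SAR's dose models:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 10_000

# Toy probabilistic dose assessment: replace single-point inputs with PDFs
# and propagate. Distributions and the dose kernel are illustrative only.
source_pe_ci = rng.triangular(8.0, 20.0, 80.0, n)    # drum curie loading
release_frac = rng.lognormal(np.log(1e-3), 0.5, n)   # airborne release fraction
dcf = 5e-2                                           # assumed dose factor, rem/Ci

dose = source_pe_ci * release_frac * dcf             # CEDE per trial
p5, p50, p95 = np.percentile(dose, [5, 50, 95])
print(f"CEDE percentiles (rem): 5%={p5:.3f}  50%={p50:.3f}  95%={p95:.3f}")
```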
Brautigam, Chad A; Zhao, Huaying; Vargas, Carolyn; Keller, Sandro; Schuck, Peter
2016-05-01
Isothermal titration calorimetry (ITC) is a powerful and widely used method to measure the energetics of macromolecular interactions by recording a thermogram of differential heating power during a titration. However, traditional ITC analysis is limited by stochastic thermogram noise and by the limited information content of a single titration experiment. Here we present a protocol for bias-free thermogram integration based on automated shape analysis of the injection peaks, followed by combination of isotherms from different calorimetric titration experiments into a global analysis, statistical analysis of binding parameters and graphical presentation of the results. This is performed using the integrated public-domain software packages NITPIC, SEDPHAT and GUSSI. The recently developed low-noise thermogram integration approach and global analysis allow for more precise parameter estimates and more reliable quantification of multisite and multicomponent cooperative and competitive interactions. Titration experiments typically take 1-2.5 h each, and global analysis usually takes 10-20 min.
Sakai, Kenshi; Upadhyaya, Shrinivasa K; Andrade-Sanchez, Pedro; Sviridova, Nina V
2017-03-01
Real-world processes are often combinations of deterministic and stochastic processes. Soil failure observed during farm tillage is one example of this phenomenon. In this paper, we investigated the nonlinear features of soil failure patterns in a farm tillage process. We demonstrate emerging determinism in soil failure patterns from stochastic processes under specific soil conditions. We normalize the deterministic nonlinear prediction by taking autocorrelation into account and propose this as a robust way of extracting a nonlinear dynamical system from noise-contaminated motion. Soil is a typical granular material, so the results obtained here are expected to be applicable to granular materials in general. From the global scale to the nano scale, granular materials feature in seismology, geotechnology, soil mechanics, and particle technology, and the results and discussions presented here are applicable across these wide research areas. The proposed method and our findings are useful for applying nonlinear dynamics to the investigation of complex motions generated by granular materials.
Total systems design analysis of high performance structures
NASA Technical Reports Server (NTRS)
Verderaime, V.
1993-01-01
Designer-controlled parameters were identified at interdiscipline interfaces to optimize structural system performance and downstream development and operations with reliability and least life-cycle cost. Interface tasks and iterations are tracked through a matrix of performance discipline integration versus manufacturing, verification, and operations interactions for a total system design analysis. Performance integration tasks include shapes, sizes, environments, and materials. Integrity integration tasks are reliability and recurring structural costs. Significant interface designer-controlled parameters were noted as shapes, dimensions, probability range factors, and cost. The structural failure concept is presented, and first-order reliability and deterministic methods, their benefits, and their limitations are discussed. A deterministic reliability technique combining the benefits of both, which is also timely and economically verifiable, is proposed for static structures. Though launch vehicle environments were primarily considered, the system design process is applicable to any surface system using its own unique field environments.
Conflict Detection and Resolution for Future Air Transportation Management
NASA Technical Reports Server (NTRS)
Krozel, Jimmy; Peters, Mark E.; Hunter, George
1997-01-01
With a Free Flight policy, the emphasis for air traffic control is shifting from active control to passive air traffic management with a policy of intervention by exception. Aircraft will be allowed to fly user preferred routes, as long as safety Alert Zones are not violated. If there is a potential conflict, two (or more) aircraft must be able to arrive at a solution for conflict resolution without controller intervention. Thus, decision aid tools are needed in Free Flight to detect and resolve conflicts, and several problems must be solved to develop such tools. In this report, we analyze and solve problems of proximity management, conflict detection, and conflict resolution under a Free Flight policy. For proximity management, we establish a system based on Delaunay Triangulations of aircraft at constant flight levels. Such a system provides a means for analyzing the neighbor relationships between aircraft and the nearby free space around air traffic which can be utilized later in conflict resolution. For conflict detection, we perform both 2-dimensional and 3-dimensional analyses based on the penetration of the Protected Airspace Zone. Both deterministic and non-deterministic analyses are performed. We investigate several types of conflict warnings including tactical warnings prior to penetrating the Protected Airspace Zone, methods based on the reachability overlap of both aircraft, and conflict probability maps to establish strategic Alert Zones around aircraft.
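One way to realise the proximity-management layer described above is a Delaunay triangulation of the aircraft at a flight level: the triangulation's edges give each aircraft its natural neighbours, and only those pairs need conflict screening. The sketch below uses scipy with random positions and an assumed 15 nmi alert radius (both invented).

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(2)
pts = rng.uniform(0, 100, size=(12, 2))   # aircraft positions (nmi), one level
tri = Delaunay(pts)

# Neighbour pairs = edges of the Delaunay triangulation
edges = set()
for simplex in tri.simplices:
    for i in range(3):
        edges.add(tuple(sorted((simplex[i], simplex[(i + 1) % 3]))))

# Screen only Delaunay neighbours against an assumed 15 nmi alert radius
for a, b in sorted(edges):
    d = np.linalg.norm(pts[a] - pts[b])
    if d < 15.0:
        print(f"aircraft {a} and {b} are {d:.1f} nmi apart -- candidate conflict")
```

Because each aircraft has only a handful of Delaunay neighbours, this keeps the pairwise screening cost near-linear in traffic density rather than quadratic.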
Rapid isolation of cancer cells using microfluidic deterministic lateral displacement structure.
Liu, Zongbin; Huang, Fei; Du, Jinghui; Shu, Weiliang; Feng, Hongtao; Xu, Xiaoping; Chen, Yan
2013-01-01
This work reports a microfluidic device with deterministic lateral displacement (DLD) arrays allowing rapid and label-free cancer cell separation and enrichment from diluted peripheral whole blood, by exploiting the size-dependent hydrodynamic forces. Experiment data and theoretical simulation are presented to evaluate the isolation efficiency of various types of cancer cells in the microfluidic DLD structure. We also demonstrated the use of both circular and triangular post arrays for cancer cell separation in cell solution and blood samples. The device was able to achieve high cancer cell isolation efficiency and enrichment factor with our optimized design. Therefore, this platform with DLD structure shows great potential on fundamental and clinical studies of circulating tumor cells.
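A back-of-the-envelope design check for such DLD arrays can be based on Davis's widely used empirical correlation for the critical diameter; the gap and row-shift values below are invented, not the device's actual design.

```python
def dld_critical_diameter(gap_um, row_shift_fraction):
    """Empirical DLD critical diameter (Davis's correlation,
    Dc = 1.4 * g * eps**0.48); particles larger than Dc are laterally
    displaced ("bumped"), smaller ones zigzag with the flow."""
    return 1.4 * gap_um * row_shift_fraction ** 0.48

# Assumed array: 20 um gaps, 1/10 row shift (illustrative numbers only)
gap, eps = 20.0, 0.1
dc = dld_critical_diameter(gap, eps)
print(f"critical diameter ~ {dc:.1f} um")
for d, label in [(8.0, "leukocyte-sized"), (16.0, "tumour-cell-sized")]:
    mode = "displacement (bumped)" if d > dc else "zigzag (follows flow)"
    print(f"{label} {d} um cell -> {mode}")
```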
Deterministic Bragg Coherent Diffraction Imaging.
Pavlov, Konstantin M; Punegov, Vasily I; Morgan, Kaye S; Schmalz, Gerd; Paganin, David M
2017-04-25
A deterministic variant of Bragg Coherent Diffraction Imaging is introduced in its kinematical approximation, for X-ray scattering from an imperfect crystal whose imperfections span no more than half of the volume of the crystal. This approach provides a unique analytical reconstruction of the object's structure factor and displacement fields from the 3D diffracted intensity distribution centred around any particular reciprocal lattice vector. The simple closed-form reconstruction algorithm, which requires only one multiplication and one Fourier transformation, is not restricted by assumptions of smallness of the displacement field. The algorithm performs well in simulations incorporating a variety of conditions, including both realistic levels of noise and departures from ideality in the reference (i.e. imperfection-free) part of the crystal.
A Probabilistic Approach to Predict Thermal Fatigue Life for Ball Grid Array Solder Joints
NASA Astrophysics Data System (ADS)
Wei, Helin; Wang, Kuisheng
2011-11-01
Numerous studies of the reliability of solder joints have been performed. Most life prediction models are limited to a deterministic approach. However, manufacturing induces uncertainty in the geometry parameters of solder joints, and the environmental temperature varies widely due to end-user diversity, creating uncertainties in the reliability of solder joints. In this study, a methodology for accounting for variation in the lifetime prediction for lead-free solder joints of ball grid array packages (PBGA) is demonstrated. The key aspects of the solder joint parameters and the cyclic temperature range related to reliability are involved. Probabilistic solutions of the inelastic strain range and thermal fatigue life based on the Engelmaier model are developed to determine the probability of solder joint failure. The results indicate that the standard deviation increases significantly when more random variations are involved. Using the probabilistic method, the influence of each variable on the thermal fatigue life is quantified. This information can be used to optimize product design and process validation acceptance criteria. The probabilistic approach creates the opportunity to identify the root causes of failed samples from product fatigue tests and field returns. The method can be applied to better understand how variation affects parameters of interest in an electronic package design with area array interconnections.
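The Engelmaier model mentioned above is a Coffin-Manson-type low-cycle fatigue relation. The sketch below combines it with a toy Monte Carlo over an uncertain cyclic shear-strain range; the eps_f and c constants are the classic eutectic SnPb values recalled here only for illustration (lead-free alloys need recalibrated constants), and the strain distribution is invented.

```python
import math
import random

def engelmaier_cycles_to_failure(dgamma, T_sj_mean_C=50.0, dwell_min=30.0,
                                 eps_f=0.325):
    """Engelmaier low-cycle fatigue: Nf = 0.5 * (dgamma / (2*eps_f))**(1/c).
    eps_f and the c correlation below are classic eutectic SnPb values,
    used for illustration only; lead-free joints need different constants."""
    c = -0.442 - 6e-4 * T_sj_mean_C + 1.74e-2 * math.log(1.0 + 360.0 / dwell_min)
    return 0.5 * (dgamma / (2.0 * eps_f)) ** (1.0 / c)

# Toy Monte Carlo: geometry/loading scatter enters via the strain range
random.seed(0)
samples = sorted(engelmaier_cycles_to_failure(random.gauss(0.02, 0.003))
                 for _ in range(10000))
print("median Nf:", round(samples[5000]))
print("5th-percentile Nf:", round(samples[500]))
```

The spread between the median and the low percentile is exactly the information a deterministic single-point calculation cannot provide.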
A data driven nonlinear stochastic model for blood glucose dynamics.
Zhang, Yan; Holt, Tim A; Khovanova, Natalia
2016-03-01
The development of adequate mathematical models for blood glucose dynamics may improve early diagnosis and control of diabetes mellitus (DM). We have developed a stochastic nonlinear second order differential equation to describe the response of blood glucose concentration to food intake using continuous glucose monitoring (CGM) data. A variational Bayesian learning scheme was applied to define the number and values of the system's parameters by iterative optimisation of free energy. The model has the minimal order and number of parameters needed to successfully describe blood glucose dynamics in people with and without DM. The model accounts for the nonlinearity and stochasticity of the underlying glucose-insulin dynamic process. Being data-driven, it takes full advantage of available CGM data and, at the same time, reflects the intrinsic characteristics of the glucose-insulin system without detailed knowledge of the physiological mechanisms. We have shown that the dynamics of some postprandial blood glucose excursions can be described by a reduced (linear) model, previously seen in the literature. A comprehensive analysis demonstrates that deterministic system parameters belong to different ranges for diabetes and controls. Implications for clinical practice are discussed. This is the first study introducing a continuous data-driven nonlinear stochastic model capable of describing both DM and non-DM profiles. Copyright © 2015 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.
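A minimal Euler-Maruyama sketch of a second-order stochastic system of the same general form (x'' + a x' + b x driven by a meal input plus noise) is given below; the parameters and the input shape are invented, not fitted values from the paper.

```python
import numpy as np

# Euler-Maruyama integration of a toy second-order stochastic system
# analogous in form to the paper's glucose model (all values assumed).
rng = np.random.default_rng(8)
a, b, sigma, dt, n = 1.2, 0.8, 0.3, 0.01, 4000
g = np.zeros(n)                     # glucose deviation from baseline
v = 0.0                             # its rate of change
meal = lambda t: 2.0 * np.exp(-t) if t < 5.0 else 0.0   # food-intake input
for i in range(1, n):
    t = i * dt
    dW = rng.standard_normal() * np.sqrt(dt)            # Wiener increment
    v += (-a * v - b * g[i - 1] + meal(t)) * dt + sigma * dW
    g[i] = g[i - 1] + v * dt
print("peak excursion:", g.max().round(2), "final:", g[-1].round(2))
```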
A probabilistic approach to emissions from transportation sector in the coming decades
NASA Astrophysics Data System (ADS)
Yan, F.; Winijkul, E.; Bond, T. C.; Streets, D. G.
2010-12-01
Future emission estimates are necessary for understanding climate change, designing national and international strategies for air quality control, and evaluating mitigation policies. Emission inventories are uncertain and future projections even more so. Most current emission projection models are deterministic; in other words, there is only a single answer for each scenario. As a result, uncertainties have not been included in the estimation of climate forcing or other environmental effects, yet it is important to quantify the uncertainty inherent in emission projections. We explore uncertainties of emission projections from the transportation sector in the coming decades by sensitivity analysis and Monte Carlo simulations. These projections are based on a technology-driven model, the Speciated Pollutants Emission Wizard (SPEW)-Trend, which responds to socioeconomic conditions in different economic and mitigation scenarios. The model contains detail about technology stock, including consumption growth rates, retirement rates, timing of emission standards, deterioration rates, and transition rates from normal vehicles to vehicles with extremely high emission factors (termed “superemitters”). However, understanding of these parameters, as well as of their relationships with socioeconomic conditions, is uncertain. We project emissions from the transportation sector under four different IPCC scenarios (A1B, A2, B1, and B2). Due to the later implementation of advanced emission standards, Africa has the highest annual growth rate (1.2-3.1%) from 2010 to 2050. Superemitters begin producing more than 50% of global emissions around the year 2020. We estimate uncertainties in the relationships between technological change and socioeconomic conditions and examine their impact on future emissions. Sensitivities to the parameters governing retirement rates are highest, causing changes in global emissions from -26% to +55% on average from 2010 to 2050. We perform Monte Carlo simulations, in which every uncertain input parameter is replaced by a probability distribution and varied simultaneously, to examine how these uncertainties affect total emissions; the 95% confidence interval of the global emission annual growth rate is -1.9% to +0.2% per year.
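A stylised version of that Monte Carlo exercise: uncertain growth, retirement and emission-factor inputs are drawn jointly and pushed through a toy fleet-turnover response. All distributions and the response function are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10_000

# Hypothetical uncertain inputs (all distributions assumed for illustration)
growth = rng.normal(0.02, 0.01, n)                 # fleet growth rate, 1/yr
retire = rng.triangular(0.03, 0.06, 0.12, n)       # retirement rate, 1/yr
ef_new_over_old = rng.uniform(0.2, 0.5, n)         # emission-factor ratio

# Toy response: 2050/2010 emissions for a stock that grows and turns over
years = 40
frac_old = np.exp(-retire * years)                 # surviving 2010-fleet share
ratio = np.exp(growth * years) * (frac_old + (1 - frac_old) * ef_new_over_old)

lo, med, hi = np.percentile(ratio, [2.5, 50, 97.5])
print(f"2050/2010 emissions ratio: {med:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```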
Parker, Melissa; Allen, Tim
2014-01-01
Large amounts of funding are being allocated to the control of neglected tropical diseases. Strategies primarily rely on the mass distribution of drugs to adults and children living in endemic areas. The approach is presented as morally appropriate, technically effective, and context-free. Drawing on research undertaken in East Africa, we discuss ways in which normative ideas about global health programs are used to set aside social and biological evidence. In particular, there is a tendency to ignore local details, including information about actual drug take up. Ferguson’s ‘anti-politics’ thesis is a useful starting point for analyzing why this happens, but is overly deterministic. Anti-politics discourse about healing the suffering poor may shape thinking and help explain cognitive dissonance. However, use of such discourse is also a means of strategically promoting vested interests and securing funding. Whatever the underlying motivations, rhetoric and realities are conflated, with potentially counterproductive consequences. PMID:24761976
Factors Affecting the Item Parameter Estimation and Classification Accuracy of the DINA Model
ERIC Educational Resources Information Center
de la Torre, Jimmy; Hong, Yuan; Deng, Weiling
2010-01-01
To better understand the statistical properties of the deterministic inputs, noisy "and" gate cognitive diagnosis (DINA) model, the impact of several factors on the quality of the item parameter estimates and classification accuracy was investigated. Results of the simulation study indicate that the fully Bayes approach is most accurate when the…
Dynamic Modeling of Cell-Free Biochemical Networks Using Effective Kinetic Models
2015-03-16
Global sensitivity analysis, using the variance-based method of Sobol, was performed to estimate which parameters controlled the performance of the reduced-order coagulation model; each sensitivity value is reported with the maximum uncertainty in that value as estimated by the Sobol method.
NASA Astrophysics Data System (ADS)
Sumin, V. I.; Smolentseva, T. E.; Belokurov, S. V.; Lankin, O. V.
2018-03-01
In this work, the process of formation of trainees' characteristics and their subsequent change is analyzed. The characteristics of the trainees were obtained by testing on each section of material in the chosen discipline, and the test results were used as input to a dynamic system. A region of control actions consisting of elements of the dynamic system is formed. The limit of deterministic predictability of element trajectories in dynamical systems is identified on the basis of local or global attractors. The dimension of the phase space of the dynamic system is determined, which allows the parameters of the initial system to be estimated. From time series of observations it is possible to determine the predictability interval of all parameters, which makes it possible to describe the behavior of the system discretely in time. The measure of predictability is then the sum of the positive Lyapunov exponents, which serve as a quantitative measure for all elements of the system. Components are identified for an algorithm that determines the correlation dimension of the attractor from known initial experimental values of the variables. The resulting algorithm makes it possible to study experimentally the dynamics of changes in a trainee's parameters under initial uncertainty.
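The correlation dimension referred to above is commonly estimated with the Grassberger-Procaccia algorithm: delay-embed the series, count near pairs, and read the slope of log C(r) against log r. A sketch on logistic-map data, with illustrative embedding and radius choices:

```python
import numpy as np

def correlation_dimension(series, m=3, tau=1):
    """Grassberger-Procaccia estimate: slope of log C(r) vs log r for
    delay-embedded data. A sketch; real use needs care with m, tau, r."""
    n = len(series) - (m - 1) * tau
    X = np.column_stack([series[i * tau:i * tau + n] for i in range(m)])
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    d = d[np.triu_indices(n, k=1)]                 # unique pairs only
    radii = np.logspace(np.log10(np.percentile(d, 1)),
                        np.log10(np.percentile(d, 50)), 10)
    C = np.array([(d < r).mean() for r in radii])  # correlation integral
    return np.polyfit(np.log(radii), np.log(C), 1)[0]

# Logistic map at r = 4: correlation dimension close to 1 is expected
x = np.empty(1200); x[0] = 0.3
for i in range(1199):
    x[i + 1] = 4.0 * x[i] * (1.0 - x[i])
print(correlation_dimension(x))
```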
Deterministic quantum annealing expectation-maximization algorithm
NASA Astrophysics Data System (ADS)
Miyahara, Hideyuki; Tsumura, Koji; Sughiyama, Yuki
2017-11-01
Maximum likelihood estimation (MLE) is one of the most important methods in machine learning, and the expectation-maximization (EM) algorithm is often used to obtain maximum likelihood estimates. However, EM depends heavily on its initial configuration and can fail to find the global optimum. On the other hand, in the field of physics, quantum annealing (QA) has been proposed as a novel optimization approach. Motivated by QA, we propose a quantum annealing extension of EM, which we call the deterministic quantum annealing expectation-maximization (DQAEM) algorithm. We also discuss its advantage in terms of the path integral formulation. Furthermore, by employing numerical simulations, we illustrate how DQAEM works in MLE and show that DQAEM moderates the problem of local optima in EM.
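For reference, the baseline EM iteration that DQAEM modifies looks as follows for a two-component 1D Gaussian mixture; the data and initialisation are invented, and annealed variants essentially deform the E step to escape the local optima this plain loop can get stuck in.

```python
import numpy as np

rng = np.random.default_rng(4)
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 700)])

# Plain EM for a two-component 1D Gaussian mixture
w, mu, var = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])
for _ in range(100):
    # E step: responsibilities of each component for each point
    pdf = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    r = w * pdf
    r /= r.sum(axis=1, keepdims=True)
    # M step: re-estimate weights, means, variances
    Nk = r.sum(axis=0)
    w = Nk / len(x)
    mu = (r * x[:, None]).sum(axis=0) / Nk
    var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / Nk
print("weights:", w.round(2), "means:", mu.round(2))
```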
Modeling the within-host dynamics of cholera: bacterial-viral interaction.
Wang, Xueying; Wang, Jin
2017-08-01
Novel deterministic and stochastic models are proposed in this paper for the within-host dynamics of cholera, with a focus on the bacterial-viral interaction. The deterministic model is a system of differential equations describing the interaction among the two types of vibrios and the viruses. The stochastic model is a system of Markov jump processes that is derived based on the dynamics of the deterministic model. The multitype branching process approximation is applied to estimate the extinction probability of bacteria and viruses within a human host during the early stage of the bacterial-viral infection. Accordingly, a closed-form expression is derived for the disease extinction probability, and analytic estimates are validated with numerical simulations. The local and global dynamics of the bacterial-viral interaction are analysed using the deterministic model, and the result indicates that there is a sharp disease threshold characterized by the basic reproduction number R0: if R0 < 1, vibrios ingested from the environment into the human body will not cause cholera infection; if R0 > 1, vibrios will grow with increased toxicity and persist within the host, leading to human cholera. In contrast, the stochastic model indicates, more realistically, that there is always a positive probability of disease extinction within the human host.
The direct effect of aerosols on solar radiation over the broader Mediterranean basin
NASA Astrophysics Data System (ADS)
Papadimas, C. D.; Hatzianastassiou, N.; Matsoukas, C.; Kanakidou, M.; Mihalopoulos, N.; Vardavas, I.
2011-11-01
For the first time, the direct radiative effect (DRE) of aerosols on solar radiation is computed over the entire Mediterranean basin, one of the most climatically sensitive regions of the world, using a deterministic spectral radiation transfer model (RTM). The DRE effects on the outgoing shortwave radiation at the top of atmosphere (TOA), DRETOA, on the absorption of solar radiation in the atmospheric column, DREatm, and on the downward and absorbed surface solar radiation (SSR), DREsurf and DREnetsurf, respectively, are computed separately. The model uses input data for the period 2000-2007 for various surface and atmospheric parameters, taken from satellite (International Satellite Cloud Climatology Project, ISCCP-D2), Global Reanalysis projects (National Centers for Environmental Prediction - National Center for Atmospheric Research, NCEP/NCAR), and other global databases. The spectral aerosol optical properties (aerosol optical depth, AOD; asymmetry parameter, gaer; and single scattering albedo, ωaer) are taken from the MODerate resolution Imaging Spectroradiometer (MODIS) of NASA (National Aeronautics and Space Administration) and are supplemented by the Global Aerosol Data Set (GADS). The model SSR fluxes have been successfully validated against measurements from 80 surface stations of the Global Energy Balance Archive (GEBA) covering the period 2000-2007. A planetary cooling is found above the Mediterranean on an annual basis (regional mean DRETOA = -2.4 Wm-2). Though planetary cooling is found over most of the region, up to -7 Wm-2, large positive DRETOA values (up to +25 Wm-2) are found over North Africa, indicating a strong planetary warming, as well as over the Alps (+0.5 Wm-2). Aerosols are found to increase the absorption of solar radiation in the atmospheric column over the region (DREatm = +11.1 Wm-2) and to decrease SSR (DREsurf = -16.5 Wm-2 and DREnetsurf = -13.5 Wm-2), thus inducing significant atmospheric warming and surface radiative cooling. The calculated seasonal and monthly DREs are even larger, reaching -25.4 Wm-2 (for DREsurf). Sensitivity tests show that regional DREs are most sensitive to ωaer and secondarily to AOD, showing a quasi-linear dependence on these aerosol parameters.
Daniell method for power spectral density estimation in atomic force microscopy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Labuda, Aleksander
An alternative method for power spectral density (PSD) estimation—the Daniell method—is revisited and compared to the most prevalent method used in the field of atomic force microscopy for quantifying cantilever thermal motion—the Bartlett method. Both methods are shown to underestimate the Q factor of a simple harmonic oscillator (SHO) by a predictable, and therefore correctable, amount in the absence of spurious deterministic noise sources. However, the Bartlett method is much more prone to spectral leakage, which can obscure the thermal spectrum in the presence of deterministic noise. By its significant reduction in spectral leakage, the Daniell method leads to a more accurate representation of the true PSD and enables clear identification and rejection of deterministic noise peaks. This benefit is especially valuable for the development of automated PSD fitting algorithms for robust and accurate estimation of SHO parameters from a thermal spectrum.
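The two estimators differ only in where the averaging happens, as the sketch below shows: Bartlett averages the periodograms of short segments (coarse frequency bins), while Daniell smooths the full-resolution periodogram across neighbouring bins. The test signal and smoothing parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
fs, n = 1000.0, 2**14
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 50.0 * t) + rng.standard_normal(n)  # spur + white noise

def psd_bartlett(x, nseg=16):
    """Bartlett: average the periodograms of nseg non-overlapping segments."""
    segs = x.reshape(nseg, -1)
    return (np.abs(np.fft.rfft(segs, axis=1)) ** 2).mean(axis=0) / segs.shape[1]

def psd_daniell(x, half_width=8):
    """Daniell: smooth the full-record periodogram with a moving average."""
    P = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    k = np.ones(2 * half_width + 1) / (2 * half_width + 1)
    return np.convolve(P, k, mode="same")

# Daniell keeps the full frequency resolution, so the deterministic 50 Hz
# peak stays narrow instead of leaking across the wide Bartlett bins.
print("Bartlett bins:", psd_bartlett(x).size, " Daniell bins:", psd_daniell(x).size)
```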
Wei, Kun; Gao, Shilong; Zhong, Suchuan; Ma, Hong
2012-01-01
In dynamical systems theory, a system which can be described by differential equations is called a continuous dynamical system. In studies of genetic oscillation, most early-stage deterministic models are built on ordinary differential equations (ODE). Gene transcription, which is a vital part of genetic oscillation, is therefore presupposed to be a continuous dynamical system by default. However, recent studies argued that discontinuous transcription might be more common than continuous transcription. In this paper, by appending the silent interval lying between two neighboring transcriptional events to the end of the preceding event, we establish that the running time for an intact transcriptional event increases and that gene transcription thus shows slow dynamics. By globally replacing the original time increment for each state increment with a larger one, we introduce fractional differential equations (FDE) to describe such globally slow transcription. The impact of fractionization on genetic oscillation is then studied in two early-stage models--the Goodwin oscillator and the Rössler oscillator. By constructing a "dual memory" oscillator--the fractional delay Goodwin oscillator--we suggest that the four general requirements for generating genetic oscillation should be revised to be negative feedback, sufficient nonlinearity, sufficient memory, and proper balancing of timescales. The numerical study of the fractional Rössler oscillator implies that globally slow transcription tends to lower the chance of a coupled or more complex nonlinear genetic oscillatory system behaving chaotically.
Model selection for integrated pest management with stochasticity.
Akman, Olcay; Comar, Timothy D; Hrozencik, Daniel
2018-04-07
In Song and Xiang (2006), an integrated pest management model with periodically varying climatic conditions was introduced. In order to address a wider range of environmental effects, the authors here have embarked upon a series of studies resulting in a more flexible modeling approach. In Akman et al. (2013), the impact of randomly changing environmental conditions is examined by incorporating stochasticity into the birth pulse of the prey species. In Akman et al. (2014), the authors introduce a class of models via a mixture of two birth-pulse terms and determined conditions for the global and local asymptotic stability of the pest eradication solution. With this work, the authors unify the stochastic and mixture model components to create further flexibility in modeling the impacts of random environmental changes on an integrated pest management system. In particular, we first determine the conditions under which solutions of our deterministic mixture model are permanent. We then analyze the stochastic model to find the optimal value of the mixing parameter that minimizes the variance in the efficacy of the pesticide. Additionally, we perform a sensitivity analysis to show that the corresponding pesticide efficacy determined by this optimization technique is indeed robust. Through numerical simulations we show that permanence can be preserved in our stochastic model. Our study of the stochastic version of the model indicates that our results on the deterministic model provide informative conclusions about the behavior of the stochastic model. Copyright © 2017 Elsevier Ltd. All rights reserved.
Vellela, Melissa; Qian, Hong
2009-10-06
Schlögl's model is the canonical example of a chemical reaction system that exhibits bistability. Because the biological examples of bistability and switching behaviour are increasingly numerous, this paper presents an integrated deterministic, stochastic and thermodynamic analysis of the model. After a brief review of the deterministic and stochastic modelling frameworks, the concepts of chemical and mathematical detailed balances are discussed and non-equilibrium conditions are shown to be necessary for bistability. Thermodynamic quantities such as the flux, chemical potential and entropy production rate are defined and compared across the two models. In the bistable region, the stochastic model exhibits an exchange of the global stability between the two stable states under changes in the pump parameters and volume size. The stochastic entropy production rate shows a sharp transition that mirrors this exchange. A new hybrid model that includes continuous diffusion and discrete jumps is suggested to deal with the multiscale dynamics of the bistable system. Accurate approximations of the exponentially small eigenvalue associated with the time scale of this switching, and of the full time-dependent solution, are calculated using Matlab. A breakdown of previously known asymptotic approximations on small volume scales is observed through comparison with these and Monte Carlo results. Finally, the appendix illustrates how the diffusion approximation of the chemical master equation can fail to correctly represent the mesoscopically interesting steady-state behaviour of the system.
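The deterministic side of this analysis is compact enough to sketch; the rate constants below are illustrative values placing the system in a bistable regime, not necessarily those used in the paper.

```python
# Deterministic Schlögl model: dx/dt = k1*a*x^2 - k2*x^3 + k3*b - k4*x.
import numpy as np
from scipy.integrate import solve_ivp

k1, k2, k3, k4 = 3.0, 0.6, 0.25, 2.95   # assumed rate constants
a, b = 1.0, 1.0                          # chemostatted concentrations of A, B

def rhs(t, x):
    return k1 * a * x**2 - k2 * x**3 + k3 * b - k4 * x

# Three positive roots of the cubic => bistability (outer roots stable,
# middle root unstable).
print("fixed points:", np.sort(np.roots([-k2, k1 * a, -k4, k3 * b])))

# Trajectories starting on either side of the unstable point reach
# different stable branches.
for x0 in (0.5, 2.0):
    sol = solve_ivp(rhs, (0.0, 50.0), [x0], rtol=1e-8)
    print(f"x0 = {x0} -> x(50) = {sol.y[0, -1]:.3f}")
```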
NASA Technical Reports Server (NTRS)
Gross, Bernard
1996-01-01
Material characterization parameters obtained from naturally flawed specimens are necessary for reliability evaluation of non-deterministic advanced ceramic structural components. The least squares best fit method is applied to the three parameter uniaxial Weibull model to obtain the material parameters from experimental tests on volume or surface flawed specimens subjected to pure tension, pure bending, four point or three point loading. Several illustrative example problems are provided.
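A hedged sketch of one common implementation of such a fit: the threshold stress is scanned, the model is linearized on Weibull coordinates for each trial threshold, and the straightest line wins. The rank-probability estimator and scan grid are our assumptions, not necessarily the report's procedure.

```python
# Least-squares fit of the three-parameter Weibull strength model
#   P_f = 1 - exp(-((s - s_u)/s_0)**m),  s > s_u.
import numpy as np

def fit_weibull3(strengths):
    s = np.sort(np.asarray(strengths, dtype=float))
    n = s.size
    pf = (np.arange(1, n + 1) - 0.5) / n            # rank-probability estimator
    y = np.log(np.log(1.0 / (1.0 - pf)))            # Weibull coordinates
    best = None
    for s_u in np.linspace(0.0, 0.99 * s[0], 200):  # threshold must be < min(s)
        x = np.log(s - s_u)
        m, c = np.polyfit(x, y, 1)                  # fit y = m*x + c
        r = np.corrcoef(x, y)[0, 1]
        if best is None or r > best[0]:
            best = (r, m, np.exp(-c / m), s_u)      # s_0 = exp(-c/m)
    _, m, s0, s_u = best
    return m, s0, s_u

# synthetic check: modulus 6, scale 80, threshold 120 (all assumed)
rng = np.random.default_rng(1)
print(fit_weibull3(120.0 + 80.0 * rng.weibull(6.0, size=50)))
```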
Role of demographic stochasticity in a speciation model with sexual reproduction
NASA Astrophysics Data System (ADS)
Lafuerza, Luis F.; McKane, Alan J.
2016-03-01
Recent theoretical studies have shown that demographic stochasticity can greatly increase the tendency of asexually reproducing phenotypically diverse organisms to spontaneously evolve into localized clusters, suggesting a simple mechanism for sympatric speciation. Here we study the role of demographic stochasticity in a model of competing organisms subject to assortative mating. We find that in models with sexual reproduction, noise can also lead to the formation of phenotypic clusters in parameter ranges where deterministic models would lead to a homogeneous distribution. In some cases, noise can have a sizable effect, rendering the deterministic modeling insufficient to understand the phenotypic distribution.
How synapses can enhance sensibility of a neural network
NASA Astrophysics Data System (ADS)
Protachevicz, P. R.; Borges, F. S.; Iarosz, K. C.; Caldas, I. L.; Baptista, M. S.; Viana, R. L.; Lameu, E. L.; Macau, E. E. N.; Batista, A. M.
2018-02-01
In this work, we study the dynamic range in a neural network modelled by a cellular automaton. We consider deterministic and non-deterministic rules to simulate electrical and chemical synapses. Chemical synapses have an intrinsic time-delay and are susceptible to parameter variations guided by Hebbian learning rules of behaviour. The learning rules are related to neuroplasticity, which describes change to the neural connections in the brain. Our results show that chemical synapses can abruptly enhance the sensibility of the neural network, a manifestation that can become even more predominant if learning rules of evolution are applied to the chemical synapses.
Nonlinear consolidation in randomly heterogeneous highly compressible aquitards
NASA Astrophysics Data System (ADS)
Zapata-Norberto, Berenice; Morales-Casique, Eric; Herrera, Graciela S.
2018-05-01
Severe land subsidence due to groundwater extraction may occur in multiaquifer systems where highly compressible aquitards are present. The highly compressible nature of the aquitards leads to nonlinear consolidation where the groundwater flow parameters are stress-dependent. The case is further complicated by the heterogeneity of the hydrogeologic and geotechnical properties of the aquitards. The effect of realistic vertical heterogeneity of hydrogeologic and geotechnical parameters on the consolidation of highly compressible aquitards is investigated by means of one-dimensional Monte Carlo numerical simulations where the lower boundary represents the effect of an instant drop in hydraulic head due to groundwater pumping. Two thousand realizations are generated for each of the following parameters: hydraulic conductivity (K), compression index (Cc), void ratio (e) and m (an empirical parameter relating hydraulic conductivity and void ratio). The correlation structure, the mean and the variance for each parameter were obtained from a literature review about field studies in the lacustrine sediments of Mexico City. The results indicate that among the parameters considered, random K has the largest effect on the ensemble average behavior of the system when compared to a nonlinear consolidation model with deterministic initial parameters. The deterministic solution underestimates the ensemble average of total settlement when initial K is random. In addition, random K leads to the largest variance (and therefore largest uncertainty) of total settlement, groundwater flux and time to reach steady-state conditions.
Meng, Yilin; Roux, Benoît
2015-08-11
The weighted histogram analysis method (WHAM) is a standard protocol for postprocessing the information from biased umbrella sampling simulations to construct the potential of mean force with respect to a set of order parameters. By virtue of the WHAM equations, the unbiased density of state is determined by satisfying a self-consistent condition through an iterative procedure. While the method works very effectively when the number of order parameters is small, its computational cost grows rapidly in higher dimension. Here, we present a simple and efficient alternative strategy, which avoids solving the self-consistent WHAM equations iteratively. An efficient multivariate linear regression framework is utilized to link the biased probability densities of individual umbrella windows and yield an unbiased global free energy landscape in the space of order parameters. It is demonstrated with practical examples that free energy landscapes that are comparable in accuracy to WHAM can be generated at a small fraction of the cost.
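For context, here is a compact sketch of the standard iterative WHAM procedure that the regression strategy above is designed to bypass; the array layout and convergence test are our choices.

```python
# Iterative WHAM for umbrella sampling (the baseline the paper avoids).
import numpy as np

def wham(hist, bias, beta, tol=1e-10, max_iter=100000):
    """hist: (K, M) counts per window and bin; bias: (K, M) bias energies."""
    K, M = hist.shape
    N = hist.sum(axis=1)              # number of samples in each window
    c = np.exp(-beta * bias)          # Boltzmann factors of the biases
    f = np.ones(K)                    # f_i = exp(beta * F_i), to be iterated
    for _ in range(max_iter):
        # unbiased estimate: p(x_m) = sum_i n_i(x_m) / sum_i N_i f_i c_i(x_m)
        p = hist.sum(axis=0) / (N[:, None] * f[:, None] * c).sum(axis=0)
        f_new = 1.0 / (c * p[None, :]).sum(axis=1)   # self-consistency for F_i
        if np.max(np.abs(np.log(f_new / f))) < tol:
            break
        f = f_new
    return p / p.sum()                # normalized unbiased density
```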
Developments in label-free microfluidic methods for single-cell analysis and sorting.
Carey, Thomas R; Cotner, Kristen L; Li, Brian; Sohn, Lydia L
2018-04-24
Advancements in microfluidic technologies have led to the development of many new tools for both the characterization and sorting of single cells without the need for exogenous labels. Label-free microfluidics reduce the preparation time, reagents needed, and cost of conventional methods based on fluorescent or magnetic labels. Furthermore, these devices enable analysis of cell properties such as mechanical phenotype and dielectric parameters that cannot be characterized with traditional labels. Some of the most promising technologies for current and future development toward label-free, single-cell analysis and sorting include electronic sensors such as Coulter counters and electrical impedance cytometry; deformation analysis using optical traps and deformation cytometry; hydrodynamic sorting such as deterministic lateral displacement, inertial focusing, and microvortex trapping; and acoustic sorting using traveling or standing surface acoustic waves. These label-free microfluidic methods have been used to screen, sort, and analyze cells for a wide range of biomedical and clinical applications, including cell cycle monitoring, rapid complete blood counts, cancer diagnosis, metastatic progression monitoring, HIV and parasite detection, circulating tumor cell isolation, and point-of-care diagnostics. Because of the versatility of label-free methods for characterization and sorting, the low-cost nature of microfluidics, and the rapid prototyping capabilities of modern microfabrication, we expect this class of technology to continue to be an area of high research interest going forward. New developments in this field will contribute to the ongoing paradigm shift in cell analysis and sorting technologies toward label-free microfluidic devices, enabling new capabilities in biomedical research tools as well as clinical diagnostics. © 2018 Wiley Periodicals, Inc.
Watershed scale response to climate change--Yampa River Basin, Colorado
Hay, Lauren E.; Battaglin, William A.; Markstrom, Steven L.
2012-01-01
General Circulation Model simulations of future climate through 2099 project a wide range of possible scenarios. To determine the sensitivity and potential effect of long-term climate change on the freshwater resources of the United States, the U.S. Geological Survey Global Change study, "An integrated watershed scale response to global change in selected basins across the United States" was started in 2008. The long-term goal of this national study is to provide the foundation for hydrologically based climate change studies across the nation. Fourteen basins for which the Precipitation Runoff Modeling System has been calibrated and evaluated were selected as study sites. Precipitation Runoff Modeling System is a deterministic, distributed parameter watershed model developed to evaluate the effects of various combinations of precipitation, temperature, and land use on streamflow and general basin hydrology. Output from five General Circulation Model simulations and four emission scenarios were used to develop an ensemble of climate-change scenarios for each basin. These ensembles were simulated with the corresponding Precipitation Runoff Modeling System model. This fact sheet summarizes the hydrologic effect and sensitivity of the Precipitation Runoff Modeling System simulations to climate change for the Yampa River Basin at Steamboat Springs, Colorado.
Spiral bacterial foraging optimization method: Algorithm, evaluation and convergence analysis
NASA Astrophysics Data System (ADS)
Kasaiezadeh, Alireza; Khajepour, Amir; Waslander, Steven L.
2014-04-01
A biologically-inspired algorithm called Spiral Bacterial Foraging Optimization (SBFO) is investigated in this article. SBFO, previously proposed by the same authors, is a multi-agent, gradient-based algorithm that minimizes both the main objective function (local cost) and the distance between each agent and a temporary central point (global cost). A random jump is included normal to the connecting line of each agent to the central point, which produces a vortex around the temporary central point. This random jump is also suitable to cope with premature convergence, which is a feature of swarm-based optimization methods. The most important advantages of this algorithm are as follows: First, this algorithm involves a stochastic type of search with a deterministic convergence. Second, as gradient-based methods are employed, faster convergence is demonstrated over GA, DE, BFO, etc. Third, the algorithm can be implemented in a parallel fashion in order to decentralize large-scale computation. Fourth, the algorithm has a limited number of tunable parameters, and finally SBFO has a strong certainty of convergence which is rare in existing global optimization algorithms. A detailed convergence analysis of SBFO for continuously differentiable objective functions has also been investigated in this article.
Xiao, Min; Ge, Haitao; Khundrakpam, Budhachandra S.; Xu, Junhai; Bezgin, Gleb; Leng, Yuan; Zhao, Lu; Tang, Yuchun; Ge, Xinting; Jeon, Seun; Xu, Wenjian; Evans, Alan C.; Liu, Shuwei
2016-01-01
Functional neuroimaging studies have indicated the involvement of separate brain areas in three distinct attention systems: alerting, orienting, and executive control (EC). However, the structural correlates underlying attention remain unexplored. Here, we utilized graph theory to examine the neuroanatomical substrates of the three attention systems measured by the attention network test (ANT) in 65 healthy subjects. White matter connectivity, assessed with diffusion tensor imaging deterministic tractography, was modeled as a structural network comprising 90 nodes defined by the automated anatomical labeling (AAL) template. Linear regression analyses were conducted to explore the relationship between topological parameters and the three attentional effects. We found a significant positive correlation between EC function and global efficiency of the whole brain network. At the regional level, node-specific correlations were discovered between regional efficiency and all three ANT components, including dorsolateral superior frontal gyrus, thalamus and parahippocampal gyrus for EC, thalamus and inferior parietal gyrus for alerting, and paracentral lobule and inferior occipital gyrus for orienting. Our findings highlight the fundamental architecture of interregional structural connectivity involved in attention and could provide new insights into the anatomical basis underlying human behavior. PMID:27777556
Chaotic dynamics and control of deterministic ratchets.
Family, Fereydoon; Larrondo, H A; Zarlenga, D G; Arizmendi, C M
2005-11-30
Deterministic ratchets, in the inertial and also in the overdamped limit, have a very complex dynamics, including chaotic motion. This deterministically induced chaos mimics, to some extent, the role of noise, changing, on the other hand, some of the basic properties of thermal ratchets; for example, inertial ratchets can exhibit multiple reversals in the current direction. The direction depends on the amount of friction and inertia, which makes it especially interesting for technological applications such as biological particle separation. We overview in this work different strategies to control the current of inertial ratchets. The control parameters analysed are the strength and frequency of the periodic external force, the strength of the quenched noise that models a non-perfectly-periodic potential, and the mass of the particles. Control mechanisms are associated with the fractal nature of the basins of attraction of the mean velocity attractors. The control of the overdamped motion of noninteracting particles in a rocking periodic asymmetric potential is also reviewed. The analysis is focused on synchronization of the motion of the particles with the external sinusoidal driving force. Two cases are considered: a perfect lattice without disorder and a lattice with noncorrelated quenched noise. The amplitude of the driving force and the strength of the quenched noise are used as control parameters.
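A minimal sketch of the kind of system reviewed, with the mean velocity as the controlled current; the potential is a standard asymmetric choice and all parameter values are assumed for illustration.

```python
# Inertial rocked ratchet: m x'' + g x' = -V'(x) + A cos(w t),
# with the asymmetric potential V(x) = -sin(x) - 0.25 sin(2x).
import numpy as np
from scipy.integrate import solve_ivp

m_, g_, A, w = 1.0, 0.5, 0.8, 0.67      # mass, friction, drive amplitude/freq

def rhs(t, y):
    x, v = y
    dV = -(np.cos(x) + 0.5 * np.cos(2 * x))          # V'(x)
    return [v, (-dV - g_ * v + A * np.cos(w * t)) / m_]

T = 2 * np.pi / w
sol = solve_ivp(rhs, (0.0, 500 * T), [0.0, 0.0], max_step=T / 50)
v_mean = (sol.y[0, -1] - sol.y[0, 0]) / (sol.t[-1] - sol.t[0])
print("mean velocity:", v_mean)   # sign reversals occur as (m, g, A, w) vary
```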
Yet one more dwell time algorithm
NASA Astrophysics Data System (ADS)
Haberl, Alexander; Rascher, Rolf
2017-06-01
The current demand for ever more powerful and efficient microprocessors, e.g. for deep learning, has led to an ongoing trend of reducing the feature size of integrated circuits. These processors are patterned with EUV-lithography, which enables 7 nm chips [1]. Producing mirrors which satisfy the required specifications is a challenging task. Not only increasing requirements on the imaging properties, but also new lens shapes, such as aspheres or lenses with free-form surfaces, require innovative production processes. These lenses need the deterministic sub-aperture polishing methods that have been established in the past few years. These polishing methods are characterized by an empirically determined TIF and local stock removal. One such deterministic polishing method is ion-beam-figuring (IBF). The beam profile of an ion beam is adjusted to a nearly ideal Gaussian shape by various parameters. With the known removal function, a dwell time profile can be generated for each measured error profile. Such a profile is always generated pixel-accurately against the predetermined error profile, with the aim of minimizing the existing surface structures up to the cut-off frequency of the tool used [2]. The processing success of a correction-polishing run depends decisively on the accuracy of the previously computed dwell-time profile, so the algorithm used to calculate the dwell time has to reflect reality accurately. Furthermore, the machine operator should have no influence on the dwell-time calculation; consequently, there must not be any free parameters that affect the calculation result. Lastly, it should take a minimum of machining time to reach a minimum of remaining error structures. Unfortunately, current dwell time calculations are divergent, user-dependent, tend to create long processing times, and need several parameters to be set. This paper describes a realistic, convergent and user-independent dwell time algorithm. The typical processing times are reduced to about 50-80 % of those of conventional algorithms (Lucy-Richardson, Van Cittert, ...) as used in established machines. To verify its effectiveness, a plane surface was machined on an IBF.
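For concreteness, a sketch of the conventional Richardson-Lucy dwell-time iteration that such algorithms are benchmarked against: the removal profile is the dwell-time map convolved with the tool influence function (TIF), and the multiplicative update keeps dwell times non-negative. The Gaussian TIF and error profile below are toy inputs, not machine data.

```python
# Richardson-Lucy style dwell-time deconvolution (1D toy example).
import numpy as np

def richardson_lucy(error, tif, n_iter=200):
    """error: desired removal depth (>= 0); tif: removal-rate footprint."""
    tif = tif / tif.sum()
    t = np.full_like(error, error.mean())        # uniform initial dwell time
    tif_flipped = tif[::-1]
    for _ in range(n_iter):
        removal = np.convolve(t, tif, mode="same")
        ratio = error / np.maximum(removal, 1e-12)
        t *= np.convolve(ratio, tif_flipped, mode="same")
    return t

x = np.linspace(-1.0, 1.0, 401)
tif = np.exp(-x**2 / (2 * 0.05**2))              # near-Gaussian beam footprint
error = 1.0 + 0.5 * np.sin(6 * np.pi * x)        # measured figure error (toy)
dwell = richardson_lucy(error, tif)
```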
Global climate impacts of stochastic deep convection parameterization in the NCAR CAM5
Wang, Yong; Zhang, Guang J.
2016-09-29
In this paper, the stochastic deep convection parameterization of Plant and Craig (PC) is implemented in the Community Atmospheric Model version 5 (CAM5) to incorporate the stochastic processes of convection into the Zhang-McFarlane (ZM) deterministic deep convective scheme. Its impacts on deep convection, shallow convection, large-scale precipitation and associated dynamic and thermodynamic fields are investigated. Results show that with the introduction of the PC stochastic parameterization, deep convection is decreased while shallow convection is enhanced. The decrease in deep convection is mainly caused by the stochastic process and the spatial averaging of input quantities for the PC scheme. More detrained liquid water associated with more shallow convection leads to a significant increase in liquid water and ice water paths, which increases large-scale precipitation in tropical regions. Specific humidity, relative humidity, zonal wind in the tropics, and precipitable water are all improved. The simulation of shortwave cloud forcing (SWCF) is also improved. The PC stochastic parameterization decreases the global mean SWCF from -52.25 W/m2 in the standard CAM5 to -48.86 W/m2, close to -47.16 W/m2 in observations. The improvement in SWCF over the tropics is due to decreased low cloud fraction simulated by the stochastic scheme. Sensitivity tests of tuning parameters are also performed to investigate the sensitivity of simulated climatology to uncertain parameters in the stochastic deep convection scheme.
Abnormal brain white matter network in young smokers: a graph theory analysis study.
Zhang, Yajuan; Li, Min; Wang, Ruonan; Bi, Yanzhi; Li, Yangding; Yi, Zhang; Liu, Jixin; Yu, Dahua; Yuan, Kai
2018-04-01
Previous diffusion tensor imaging (DTI) studies have investigated white matter (WM) integrity abnormalities in specific fiber bundles in smokers. However, little is known about changes in the topological organization of the WM structural network in young smokers. In the current study, we acquired DTI datasets from 58 male young smokers and 51 matched nonsmokers and constructed the WM networks by a deterministic fiber tracking approach. Graph theoretical analysis was used to compare the topological parameters of the WM network (global and nodal) and the inter-regional fractional anisotropy (FA) weighted WM connections between groups. The results demonstrated that both young smokers and nonsmokers had small-world topology in the WM network. Further analysis revealed that the young smokers exhibited an abnormal topological organization, i.e., increased network strength and global efficiency, and decreased shortest path length. In addition, the increased nodal efficiency was predominantly located in the frontal cortex, striatum and anterior cingulate gyrus (ACG) in smokers. Moreover, based on the network-based statistic (NBS) approach, significantly increased FA-weighted WM connections were mainly found in the PFC, ACG and supplementary motor area (SMA) regions. Meanwhile, the network parameters were correlated with nicotine dependence severity (FTND) scores, and the nodal efficiency of the orbitofrontal cortex was positively correlated with cigarettes per day (CPD) in young smokers. We revealed an abnormal topological organization of the WM network in young smokers, which may improve our understanding of the neural mechanisms of young smokers at the WM topological organization level.
Free Electron coherent sources: From microwave to X-rays
NASA Astrophysics Data System (ADS)
Dattoli, Giuseppe; Di Palma, Emanuele; Pagnutti, Simonetta; Sabia, Elio
2018-04-01
The term Free Electron Laser (FEL) will be used, in this paper, to indicate a wide collection of devices aimed at providing coherent electromagnetic radiation from a beam of "free" electrons, unbound from atomic or molecular states. This article reviews the similarities that link different sources of coherent radiation across the electromagnetic spectrum from microwaves to X-rays, and compares the analogies with conventional laser sources. We develop a point of view that allows a unified analytical treatment of these devices, by the introduction of appropriate global variables (e.g. gain, saturation intensity, inhomogeneous broadening parameters, longitudinal mode coupling strength), yielding a very effective way for the determination of the relevant design parameters. The paper also looks at more speculative aspects of FEL physics, which may address the relevance of quantum effects in the lasing process.
Wu, Wei; Wang, Jin
2013-09-28
We established a potential and flux field landscape theory to quantify the global stability and dynamics of general spatially dependent non-equilibrium deterministic and stochastic systems. We extended our potential and flux landscape theory for spatially independent non-equilibrium stochastic systems described by Fokker-Planck equations to spatially dependent stochastic systems governed by general functional Fokker-Planck equations as well as functional Kramers-Moyal equations derived from master equations. Our general theory is applied to reaction-diffusion systems. For equilibrium spatially dependent systems with detailed balance, the potential field landscape alone, defined in terms of the steady state probability distribution functional, determines the global stability and dynamics of the system. The global stability of the system is closely related to the topography of the potential field landscape in terms of the basins of attraction and barrier heights in the field configuration state space. The effective driving force of the system is generated by the functional gradient of the potential field alone. For non-equilibrium spatially dependent systems, the curl probability flux field is indispensable in breaking detailed balance and creating non-equilibrium condition for the system. A complete characterization of the non-equilibrium dynamics of the spatially dependent system requires both the potential field and the curl probability flux field. While the non-equilibrium potential field landscape attracts the system down along the functional gradient similar to an electron moving in an electric field, the non-equilibrium flux field drives the system in a curly way similar to an electron moving in a magnetic field. In the small fluctuation limit, the intrinsic potential field as the small fluctuation limit of the potential field for spatially dependent non-equilibrium systems, which is closely related to the steady state probability distribution functional, is found to be a Lyapunov functional of the deterministic spatially dependent system. Therefore, the intrinsic potential landscape can characterize the global stability of the deterministic system. The relative entropy functional of the stochastic spatially dependent non-equilibrium system is found to be the Lyapunov functional of the stochastic dynamics of the system. Therefore, the relative entropy functional quantifies the global stability of the stochastic system with finite fluctuations. Our theory offers an alternative general approach to other field-theoretic techniques, to study the global stability and dynamics of spatially dependent non-equilibrium field systems. It can be applied to many physical, chemical, and biological spatially dependent non-equilibrium systems.
DIETARY EXPOSURES OF YOUNG CHILDREN, PART 3: MODELLING
A deterministic model was used to model dietary exposure of young children. Parameters included pesticide residue on food before handling, surface pesticide loading, transfer efficiencies and children's activity patterns. Three components of dietary pesticide exposure were includ...
Interactive Reliability Model for Whisker-toughened Ceramics
NASA Technical Reports Server (NTRS)
Palko, Joseph L.
1993-01-01
Wider use of ceramic matrix composites (CMC) will require the development of advanced structural analysis technologies. The use of an interactive model to predict the time-independent reliability of a component subjected to multiaxial loads is discussed. The deterministic, three-parameter Willam-Warnke failure criterion serves as the theoretical basis for the reliability model. The strength parameters defining the model are assumed to be random variables, thereby transforming the deterministic failure criterion into a probabilistic criterion. The ability of the model to account for multiaxial stress states with the same unified theory is an improvement over existing models. The new model was coupled with a public-domain finite element program through an integrated design program. This allows a design engineer to predict the probability of failure of a component. A simple structural problem is analyzed using the new model, and the results are compared to existing models.
Self-Organized Dynamic Flocking Behavior from a Simple Deterministic Map
NASA Astrophysics Data System (ADS)
Krueger, Wesley
2007-10-01
Coherent motion exhibiting large-scale order, such as flocking, swarming, and schooling behavior in animals, can arise from simple rules applied to an initial random array of self-driven particles. We present a completely deterministic dynamic map that exhibits emergent, collective, complex motion for a group of particles. Each individual particle is driven with a constant speed in two dimensions adopting the average direction of a fixed set of non-spatially related partners. In addition, the particle changes direction by π as it reaches a circular boundary. The dynamical patterns arising from these rules range from simple circular-type convective motion to highly sophisticated, complex, collective behavior which can be easily interpreted as flocking, schooling, or swarming depending on the chosen parameters. We present the results as a series of short movies and we also explore possible order parameters and correlation functions capable of quantifying the resulting coherence.
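A minimal sketch of the described map under stated assumptions (particle count, speed, partner count, and the order parameter are our choices; randomness enters only through the initial state, so the update itself is fully deterministic):

```python
# Deterministic flocking map: constant-speed particles adopt the average
# heading of a fixed partner set and reverse direction at a circular wall.
import numpy as np

N, v, R, k = 100, 0.01, 1.0, 3
rng = np.random.default_rng(2)                 # initial conditions only
pos = 0.5 * R * (rng.random((N, 2)) - 0.5)
theta = 2 * np.pi * rng.random(N)
partners = np.array([rng.choice(N, k, replace=False) for _ in range(N)])

for step in range(10000):
    # mean partner heading via the direction of the summed unit vectors
    theta = np.arctan2(np.sin(theta)[partners].mean(axis=1),
                       np.cos(theta)[partners].mean(axis=1))
    pos += v * np.column_stack((np.cos(theta), np.sin(theta)))
    outside = np.linalg.norm(pos, axis=1) >= R
    theta[outside] += np.pi                    # direction change by pi at wall

# one candidate order parameter: magnitude of the mean heading vector
print("polar order:", np.hypot(np.cos(theta).mean(), np.sin(theta).mean()))
```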
Parameter estimation in spiking neural networks: a reverse-engineering approach.
Rostro-Gonzalez, H; Cessac, B; Vieville, T
2012-04-01
This paper presents a reverse engineering approach for parameter estimation in spiking neural networks (SNNs). We consider the deterministic evolution of a time-discretized network of spiking neurons with delayed synaptic transmission, modeled as a neural network of the generalized integrate-and-fire type. Parameter estimation in SNNs is a non-deterministic polynomial-time hard problem when delays are considered; our approach bypasses this difficulty by reformulating the estimation as a linear programming (LP) problem, so that the solution can be found in polynomial time. Moreover, the LP formulation makes explicit the fact that a neural network can be reverse-engineered from the observation of its spike times alone. Furthermore, we point out how the LP adjustment mechanism is local to each neuron and has the same structure as a 'Hebbian' rule. Finally, we present a generalization of this approach to the design of input-output (I/O) transformations as a practical method to 'program' a spiking network, i.e. find a set of parameters allowing us to exactly reproduce the network output, given an input. Numerical verifications and illustrations are provided.
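A toy sketch of the LP idea under simplifying assumptions that are ours, not the paper's (single neuron, no synaptic delays, exponentially filtered inputs, an invented margin eps): each observed spike or silence becomes a linear inequality on the weights, so a feasible weight vector can be found in polynomial time.

```python
# Spike-train-constrained weight estimation as a linear program.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
n_in, T, theta, eps = 20, 400, 1.0, 1e-6
S = rng.random((T, n_in)) < 0.1            # input spike raster
trace = np.zeros((T, n_in))                # exponentially filtered inputs
for t in range(1, T):
    trace[t] = 0.7 * trace[t - 1] + S[t]

w_true = rng.uniform(0.0, 0.5, n_in)       # ground truth used to label spikes
y = trace @ w_true >= theta                # observed output spike train

# spike times:   trace[t] @ w >= theta   ->   -trace[t] @ w <= -theta
# silent times:  trace[t] @ w <= theta - eps
A_ub = np.vstack([-trace[y], trace[~y]])
b_ub = np.hstack([-theta * np.ones(y.sum()),
                  (theta - eps) * np.ones((~y).sum())])
res = linprog(np.zeros(n_in), A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None)] * n_in, method="highs")
print("feasible weights found:", res.status == 0)
```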
Deterministic Assembly of Complex Bacterial Communities in Guts of Germ-Free Cockroaches
Mikaelyan, Aram; Thompson, Claire L.; Hofer, Markus J.
2015-01-01
The gut microbiota of termites plays important roles in the symbiotic digestion of lignocellulose. However, the factors shaping the microbial community structure remain poorly understood. Because termites cannot be raised under axenic conditions, we established the closely related cockroach Shelfordella lateralis as a germ-free model to study microbial community assembly and host-microbe interactions. In this study, we determined the composition of the bacterial assemblages in cockroaches inoculated with the gut microbiota of termites and mice using pyrosequencing analysis of their 16S rRNA genes. Although the composition of the xenobiotic communities was influenced by the lineages present in the foreign inocula, their structure resembled that of conventional cockroaches. Bacterial taxa abundant in conventional cockroaches but rare in the foreign inocula, such as Dysgonomonas and Parabacteroides spp., were selectively enriched in the xenobiotic communities. Donor-specific taxa, such as endomicrobia or spirochete lineages restricted to the gut microbiota of termites, however, either were unable to colonize germ-free cockroaches or formed only small populations. The exposure of xenobiotic cockroaches to conventional adults restored their normal microbiota, which indicated that autochthonous lineages outcompete foreign ones. Our results provide experimental proof that the assembly of a complex gut microbiota in insects is deterministic. PMID:26655763
Ordinal optimization and its application to complex deterministic problems
NASA Astrophysics Data System (ADS)
Yang, Mike Shang-Yu
1998-10-01
We present in this thesis a new perspective for approaching a general class of optimization problems characterized by large deterministic complexities. Many problems of real-world concern today lack analyzable structures and almost always involve a high level of difficulty and complexity in the evaluation process. Advances in computer technology allow us to build computer models to simulate the evaluation process through numerical means, but the burden of high complexity remains, taxing the simulation with an exorbitant computing cost for each evaluation. Such a resource requirement makes local fine-tuning of a known design difficult under most circumstances, let alone global optimization. Kolmogorov equivalence of complexity and randomness in computation theory is introduced to resolve this difficulty by converting the complex deterministic model to a stochastic pseudo-model composed of a simple deterministic component and a white-noise-like stochastic term. The resulting randomness is then dealt with by a noise-robust approach called Ordinal Optimization. Ordinal Optimization utilizes Goal Softening and Ordinal Comparison to achieve an efficient and quantifiable selection of designs in the initial search process. The approach is substantiated by a case study in the turbine blade manufacturing process. The problem involves the optimization of the manufacturing process of the integrally bladed rotor in the turbine engines of U.S. Air Force fighter jets. The intertwining interactions among the material, thermomechanical, and geometrical changes make the current FEM approach prohibitively uneconomical in the optimization process. The generalized OO approach to complex deterministic problems is applied here with great success. Empirical results indicate a saving of nearly 95% in the computing cost.
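A small numerical illustration of the ordinal comparison and goal softening ideas (all quantities assumed): even when the evaluation noise is larger than the spread of true performance, the observed top-s subset almost always contains at least one truly top-g design.

```python
# Alignment probability under ordinal comparison with goal softening.
import numpy as np

rng = np.random.default_rng(4)
n, g, s, sigma, trials = 1000, 10, 50, 2.0, 2000
true_cost = np.sort(rng.standard_normal(n))   # true performance, best first
hits = 0
for _ in range(trials):
    noisy = true_cost + sigma * rng.standard_normal(n)  # crude evaluation
    selected = np.argsort(noisy)[:s]          # observed top-s (softened goal)
    hits += np.any(selected < g)              # true top-g occupy indices 0..g-1
print("alignment probability:", hits / trials)  # typically close to 1
```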
Probabilistic Modeling of the Renal Stone Formation Module
NASA Technical Reports Server (NTRS)
Best, Lauren M.; Myers, Jerry G.; Goodenow, Debra A.; McRae, Michael P.; Jackson, Travis C.
2013-01-01
The Integrated Medical Model (IMM) is a probabilistic tool, used in mission planning decision making and medical systems risk assessments. The IMM project maintains a database of over 80 medical conditions that could occur during a spaceflight, documenting an incidence rate and end case scenarios for each. In some cases, where observational data are insufficient to adequately define the inflight medical risk, the IMM utilizes external probabilistic modules to model and estimate the event likelihoods. One such medical event of interest is an unpassed renal stone. Due to a high salt diet and high concentrations of calcium in the blood (due to bone depletion caused by unloading in the microgravity environment), astronauts are at a considerably elevated risk for developing renal calculi (nephrolithiasis) while in space. The lack of observed incidences of nephrolithiasis has led HRP to initiate the development of the Renal Stone Formation Module (RSFM) to create a probabilistic simulator capable of estimating the likelihood of symptomatic renal stone presentation in astronauts on exploration missions. The model consists of two major parts. The first is the probabilistic component, which utilizes probability distributions to assess the range of urine electrolyte parameters and a multivariate regression to transform estimated crystal density and size distributions to the likelihood of the presentation of nephrolithiasis symptoms. The second is a deterministic physical and chemical model of renal stone growth in the kidney developed by Kassemi et al. The probabilistic component of the renal stone model couples the input probability distributions describing the urine chemistry, astronaut physiology, and system parameters with the physical and chemical outputs and inputs to the deterministic stone growth model. These two parts of the model are necessary to capture the uncertainty in the likelihood estimate. The model will be driven by Monte Carlo simulations, continuously randomly sampling the probability distributions of the electrolyte concentrations and system parameters that are inputs into the deterministic model. The total urine chemistry concentrations are used to determine the urine chemistry activity using the Joint Expert Speciation System (JESS), a biochemistry model. Information from JESS is then fed into the deterministic growth model. Outputs from JESS and the deterministic model are passed back to the probabilistic model, where a multivariate regression is used to assess the likelihood of a stone forming and the likelihood of a stone requiring clinical intervention. The parameters used to quantify these risks include: relative supersaturation (RS) of calcium oxalate, citrate/calcium ratio, crystal number density, total urine volume, pH, magnesium excretion, maximum stone width, and ureteral location. Methods and Validation: The RSFM is designed to perform a Monte Carlo simulation to generate probability distributions of clinically significant renal stones, as well as provide an associated uncertainty in the estimate. Initially, early versions will be used to test integration of the components and assess component validation and verification (V&V), with later versions used to address questions regarding design reference mission scenarios. Once integrated with the deterministic component, the credibility assessment of the integrated model will follow NASA STD 7009 requirements.
Adjoint-Based Climate Model Tuning: Application to the Planet Simulator
NASA Astrophysics Data System (ADS)
Lyu, Guokun; Köhl, Armin; Matei, Ion; Stammer, Detlef
2018-01-01
The adjoint method is used to calibrate the medium complexity climate model "Planet Simulator" through parameter estimation. Identical twin experiments demonstrate that this method can retrieve default values of the control parameters when using a long assimilation window of the order of 2 months. Chaos synchronization through nudging, required to overcome limits in the temporal assimilation window in the adjoint method, is employed successfully to reach this assimilation window length. When assimilating ERA-Interim reanalysis data, the observations of air temperature and the radiative fluxes are the most important data for adjusting the control parameters. The global mean net longwave fluxes at the surface and at the top of the atmosphere are significantly improved by tuning two model parameters controlling the absorption of clouds and water vapor. The global mean net shortwave radiation at the surface is improved by optimizing three model parameters controlling cloud optical properties. The optimized parameters improve the free model (without nudging terms) simulation in a way similar to that in the assimilation experiments. Results suggest a promising way for tuning uncertain parameters in nonlinear coupled climate models.
Ergodicity of Truncated Stochastic Navier Stokes with Deterministic Forcing and Dispersion
NASA Astrophysics Data System (ADS)
Majda, Andrew J.; Tong, Xin T.
2016-10-01
Turbulence in idealized geophysical flows is a very rich and important topic. The anisotropic effects of explicit deterministic forcing, dispersive effects from rotation due to the β-plane and F-plane, and topography together with random forcing all combine to produce a remarkable number of realistic phenomena. These effects have been studied through careful numerical experiments in the truncated geophysical models. These important results include transitions between coherent jets and vortices, and direct and inverse turbulence cascades as parameters are varied, and it is a contemporary challenge to explain these diverse statistical predictions. Here we contribute to these issues by proving with full mathematical rigor that for any values of the deterministic forcing, the β- and F-plane effects and topography, with minimal stochastic forcing, there is geometric ergodicity for any finite Galerkin truncation. This means that there is a unique smooth invariant measure which attracts all statistical initial data at an exponential rate. In particular, this rigorous statistical theory guarantees that there are no bifurcations to multiple stable and unstable statistical steady states as geophysical parameters are varied, in contrast to claims in the applied literature. The proof utilizes a new statistical Lyapunov function to account for enstrophy exchanges between the statistical mean and the variance fluctuations due to the deterministic forcing. It also requires careful proofs of hypoellipticity with geophysical effects and uses geometric control theory to establish reachability. To illustrate the necessity of these conditions, a two-dimensional example is developed which has the square of the Euclidean norm as the Lyapunov function and is hypoelliptic with nonzero noise forcing, yet fails to be reachable or ergodic.
Cao, Pengxing; Tan, Xiahui; Donovan, Graham; Sanderson, Michael J; Sneyd, James
2014-08-01
The inositol trisphosphate receptor (IP3R) is one of the most important cellular components responsible for oscillations in the cytoplasmic calcium concentration. Over the past decade, two major questions about the IP3R have arisen. Firstly, how best should the IP3R be modeled? In other words, what fundamental properties of the IP3R allow it to perform its function, and what are their quantitative properties? Secondly, although calcium oscillations are caused by the stochastic opening and closing of small numbers of IP3Rs, is it possible for a deterministic model to be a reliable predictor of calcium behavior? Here, we answer these two questions, using airway smooth muscle cells (ASMC) as a specific example. Firstly, we show that periodic calcium waves in ASMC, as well as the statistics of calcium puffs in other cell types, can be quantitatively reproduced by a two-state model of the IP3R, and thus the behavior of the IP3R is essentially determined by its modal structure. The structure within each mode is irrelevant for function. Secondly, we show that, although calcium waves in ASMC are generated by a stochastic mechanism, IP3R stochasticity is not essential for a qualitative prediction of how oscillation frequency depends on model parameters, and thus deterministic IP3R models demonstrate the same level of predictive capability as do stochastic models. We conclude that, firstly, calcium dynamics can be accurately modeled using simplified IP3R models, and, secondly, to obtain qualitative predictions of how oscillation frequency depends on parameters it is sufficient to use a deterministic model.
A ripple-spreading genetic algorithm for the aircraft sequencing problem.
Hu, Xiao-Bing; Di Paolo, Ezequiel A
2011-01-01
When genetic algorithms (GAs) are applied to combinatorial problems, permutation representations are usually adopted. As a result, such GAs are often confronted with feasibility and memory-efficiency problems. With the aircraft sequencing problem (ASP) as a study case, this paper reports on a novel binary-representation-based GA scheme for combinatorial problems. Unlike existing GAs for the ASP, which typically use permutation representations based on aircraft landing order, the new GA introduces a novel ripple-spreading model which transforms the original landing-order-based ASP solutions into value-based ones. In the new scheme, arriving aircraft are projected as points into an artificial space. A deterministic method inspired by the natural phenomenon of ripple-spreading on liquid surfaces is developed, which uses a few parameters as input to connect points on this space to form a landing sequence. A traditional GA, free of feasibility and memory-efficiency problems, can then be used to evolve the ripple-spreading related parameters in order to find an optimal sequence. Since the ripple-spreading model is the centerpiece of the new algorithm, it is called the ripple-spreading GA (RSGA). The advantages of the proposed RSGA are illustrated by extensive comparative studies for the case of the ASP.
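A hedged sketch of our reading of the decoding step (the distance-based arrival rule and names are our simplifications): every real-valued chromosome decodes into a feasible landing order, so a standard GA can evolve the ripple parameters instead of a permutation.

```python
# Ripple-spreading decoder: epicenter coordinates -> landing sequence.
import numpy as np

def ripple_sequence(points, x0, y0):
    """Order aircraft (points in the artificial plane) by ripple arrival time."""
    d = np.hypot(points[:, 0] - x0, points[:, 1] - y0)
    return np.argsort(d)                  # earliest-reached aircraft lands first

rng = np.random.default_rng(5)
pts = rng.random((8, 2))                  # toy projection of 8 aircraft
print(ripple_sequence(pts, 0.3, 0.7))     # always a valid permutation
```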
A Deterministic Computational Procedure for Space Environment Electron Transport
NASA Technical Reports Server (NTRS)
Nealy, John E.; Chang, C. K.; Norman, Ryan B.; Blattnig, Steve R.; Badavi, Francis F.; Adamcyk, Anne M.
2010-01-01
A deterministic computational procedure for describing the transport of electrons in condensed media is formulated to simulate the effects and exposures from spectral distributions typical of electrons trapped in planetary magnetic fields. The primary purpose for developing the procedure is to provide a means of rapidly performing numerous repetitive transport calculations essential for electron radiation exposure assessments for complex space structures. The present code utilizes well-established theoretical representations to describe the relevant interactions and transport processes. A combined mean free path and average trajectory approach is used in the transport formalism. For typical space environment spectra, several favorable comparisons with Monte Carlo calculations are made which have indicated that accuracy is not compromised at the expense of the computational speed.
Balanced Branching in Transcription Termination
NASA Technical Reports Server (NTRS)
Harrington, K. J.; Laughlin, R. B.; Liang, S.
2001-01-01
The theory of stochastic transcription termination based on free-energy competition requires two or more reaction rates to be delicately balanced over a wide range of physical conditions. A large body of work on glasses and large molecules suggests that this should be impossible in such a large system in the absence of a new organizing principle of matter. We review the experimental literature of termination and find no evidence for such a principle but many troubling inconsistencies, most notably anomalous memory effects. These suggest that termination has a deterministic component and may conceivably not be stochastic at all. We find that a key experiment by Wilson and von Hippel allegedly refuting deterministic termination was an incorrectly analyzed regulatory effect of Mg(2+) binding.
Ontology-Based Peer Exchange Network (OPEN)
ERIC Educational Resources Information Center
Dong, Hui
2010-01-01
In current Peer-to-Peer networks, distributed and semantic-free indexing is widely used by systems adopting "Distributed Hash Table" ("DHT") mechanisms. Although such systems typically solve a user query rather fast in a deterministic way, they only support a very narrow search scheme, namely the exact hash key match. Furthermore, DHT systems put…
The Signal Importance of Noise
ERIC Educational Resources Information Center
Macy, Michael; Tsvetkova, Milena
2015-01-01
Noise is widely regarded as a residual category--the unexplained variance in a linear model or the random disturbance of a predictable pattern. Accordingly, formal models often impose the simplifying assumption that the world is noise-free and social dynamics are deterministic. Where noise is assigned causal importance, it is often assumed to be a…
NASA Astrophysics Data System (ADS)
Misra, Vasubandhu; Li, H.; Wu, Z.; DiNapoli, S.
2014-03-01
This paper shows demonstrable improvement in the global seasonal climate predictability of boreal summer (at zero lead) and fall (at one-season lead) seasonal mean precipitation and surface temperature from a two-tiered seasonal hindcast forced with forecasted SST, relative to two other contemporary operational coupled ocean-atmosphere climate models. The results from an extensive set of seasonal hindcasts are analyzed to come to this conclusion. This improvement is attributed to: (1) the multi-model bias-corrected SST used to force the atmospheric model; (2) the global atmospheric model, which is run at a relatively high grid resolution of 50 km compared to the two other coupled ocean-atmosphere models; and (3) the physics of the atmospheric model, especially that related to the convective parameterization scheme. The results of the seasonal hindcast are analyzed for both deterministic and probabilistic skill. The probabilistic skill analysis shows that significant forecast skill can be harvested from these seasonal hindcasts relative to the deterministic skill analysis. The paper concludes that the coupled ocean-atmosphere seasonal hindcasts have reached a reasonable fidelity to exploit their SST anomaly forecasts to force such relatively high-resolution two-tier prediction experiments to glean further boreal summer and fall seasonal prediction skill.
Chakraverty, S; Sahoo, B K; Rao, T D; Karunakar, P; Sapra, B K
2018-02-01
Modelling radon transport in the Earth's crust is a useful tool to investigate changes in geophysical processes prior to an earthquake event. Radon transport is generally modeled through the deterministic advection-diffusion equation. However, in order to determine the magnitudes of parameters governing these processes from experimental measurements, it is necessary to investigate the role of uncertainties in these parameters. The present paper investigates this aspect by introducing interval uncertainties in transport parameters such as soil diffusivity and advection velocity, which occur in the radon transport equation as applied to the soil matrix. The predictions made with interval arithmetic are compared and discussed against the results of the classical deterministic model. The practical applicability of the model is demonstrated through a case study involving radon flux measurements at the soil surface with an accumulator deployed in steady-state mode. It is possible to detect the presence of very low levels of advection processes by applying uncertainty bounds on the variations in the observed concentration data in the accumulator. The results are further discussed. Copyright © 2017 Elsevier Ltd. All rights reserved.
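A tiny sketch of interval propagation in this spirit, using the standard purely diffusive surface-flux expression J = C_inf*sqrt(lambda*D) as a simplification of the advection-diffusion model (the deep-soil concentration and diffusivity interval are assumed): because J is monotone in D, interval bounds follow directly from the endpoint values.

```python
# Interval bounds on the diffusive radon surface flux.
import numpy as np

lam = 2.1e-6                    # Rn-222 decay constant (1/s)
C_inf = 5.0e4                   # deep-soil radon concentration (Bq/m^3), assumed
D_lo, D_hi = 1.0e-7, 5.0e-6     # interval-valued effective diffusivity (m^2/s)

def flux(D):                    # monotone in D, so endpoints give the bounds
    return C_inf * np.sqrt(lam * D)

print("J in [%.4f, %.4f] Bq/(m^2 s)" % (flux(D_lo), flux(D_hi)))
```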
Improved Hybrid Modeling of Spent Fuel Storage Facilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bibber, Karl van
This work developed a new computational method for improving the ability to calculate the neutron flux in deep-penetration radiation shielding problems that contain areas with strong streaming. The "gold standard" method for radiation transport is Monte Carlo (MC), as it samples the physics exactly and requires few approximations. Historically, however, MC was not useful for shielding problems because of the computational challenge of following particles through dense shields. Instead, deterministic methods, which are superior in terms of computational effort for these problem types but are not as accurate, were used. Hybrid methods, which use deterministic solutions to improve MC calculations through a process called variance reduction, can make it tractable from a computational time and resource use perspective to use MC for deep-penetration shielding. Perhaps the most widespread and accessible of these methods are the Consistent Adjoint Driven Importance Sampling (CADIS) and Forward-Weighted CADIS (FW-CADIS) methods. For problems containing strong anisotropies, such as power plants with pipes through walls, spent fuel cask arrays, active interrogation, and locations with small air gaps or plates embedded in water or concrete, hybrid methods are still insufficiently accurate. In this work, a new method for generating variance reduction parameters for strongly anisotropic, deep-penetration radiation shielding studies was developed. This method generates an alternate form of the adjoint scalar flux quantity, Φ_Ω, which is used by both CADIS and FW-CADIS to generate variance reduction parameters for local and global response functions, respectively. The new method, called CADIS-Ω, was implemented in the Denovo/ADVANTG software. Results indicate that the flux generated by CADIS-Ω incorporates localized angular anisotropies in the flux more effectively than standard methods. CADIS-Ω outperformed CADIS in several test problems. This initial work indicates that CADIS-Ω may be highly useful for shielding problems with strong angular anisotropies. This benefits the public by increasing accuracy at lower computational effort for many problems of energy, security, and economic importance.
Failed rib region prediction in a human body model during crash events with precrash braking.
Guleyupoglu, B; Koya, B; Barnard, R; Gayzik, F S
2018-02-28
The objective of this study is two-fold. We used a validated human body finite element model to study the predicted chest injury (focusing on rib fracture as a function of element strain) based on varying levels of simulated precrash braking. Furthermore, we compare deterministic and probabilistic methods of rib injury prediction in the computational model. The Global Human Body Models Consortium (GHBMC) M50-O model was gravity settled in the driver position of a generic interior equipped with an advanced 3-point belt and airbag. Twelve cases were investigated with permutations for failure, precrash braking system, and crash severity. The severities used were median (17 kph), severe (34 kph), and New Car Assessment Program (NCAP; 56.4 kph). Cases with failure enabled removed rib cortical bone elements once 1.8% effective plastic strain was exceeded. Alternatively, a probabilistic framework found in the literature was used to predict rib failure. Both the probabilistic and deterministic methods take into consideration location (anterior, lateral, and posterior). The deterministic method is based on a rubric that defines failed rib regions dependent on a threshold for contiguous failed elements. The probabilistic method depends on age-based strain and failure functions. Kinematics between both methods were similar (peak max deviation: ΔX head = 17 mm; ΔZ head = 4 mm; ΔX thorax = 5 mm; ΔZ thorax = 1 mm). Seat belt forces at the time of probabilistic failed region initiation were lower than those at deterministic failed region initiation. The probabilistic method predicted more failed regions in the rib (an analog for fracture) than the deterministic method in all but one case, where they were equal. The failed region patterns between the two methods are similar; however, differences arise because element elimination relieves stress, causing the probabilistic method to continue accumulating failed regions after the point at which the deterministic method predicts no further ones. Both the probabilistic and deterministic methods indicate similar trends with regard to the effect of precrash braking; however, there are tradeoffs. The deterministic failed region method is more spatially sensitive to failure and is more sensitive to belt loads. The probabilistic failed region method allows for increased capability in postprocessing with respect to age. The probabilistic failed region method predicted more failed regions than the deterministic failed region method due to force distribution differences.
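A schematic contrast of the two post-processing rules; the threshold matches the 1.8% plastic strain quoted above, but the probabilistic functions and age scaling below are invented placeholders, not the literature's location- and age-based curves.

```python
# Deterministic element removal vs. probabilistic failure assignment.
import numpy as np

def deterministic_failed(strain, threshold=0.018):
    return strain > threshold                      # element eliminated

def probabilistic_failure_prob(strain, age=50.0):
    # placeholder age-scaled logistic standing in for strain-to-failure curves
    s50 = 0.025 * (1.0 - 0.005 * (age - 45.0))     # assumed median failure strain
    return 1.0 / (1.0 + np.exp(-(strain - s50) / 0.003))

strain = np.linspace(0.0, 0.04, 9)
print(deterministic_failed(strain).astype(int))
print(probabilistic_failure_prob(strain).round(2))
```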
Uncertainty analysis in 3D global models: Aerosol representation in MOZART-4
NASA Astrophysics Data System (ADS)
Gasore, J.; Prinn, R. G.
2012-12-01
The Probabilistic Collocation Method (PCM) has been proven to be an efficient general method of uncertainty analysis in atmospheric models (Tatang et al. 1997; Cohen & Prinn 2011). However, its application has been mainly limited to urban- and regional-scale models and chemical source-sink models, because of the drastic increase in computational cost when the dimension of uncertain parameters increases. Moreover, the high-dimensional output of global models has to be reduced to allow a computationally reasonable number of polynomials to be generated. This dimensional reduction has been mainly achieved by grouping the model grids into a few regions based on prior knowledge and expectations; urban versus rural, for instance. As the model output is used to estimate the coefficients of the polynomial chaos expansion (PCE), the arbitrariness in the regional aggregation can generate problems in estimating uncertainties. To address these issues in a complex model, we apply the probabilistic collocation method of uncertainty analysis to the aerosol representation in MOZART-4, which is a 3D global chemical transport model (Emmons et al., 2010). Thereafter, we deterministically delineate the model output surface into regions of homogeneous response using the method of Principal Component Analysis. This allows the quantification of the uncertainty associated with the dimensional reduction. Because only a bulk mass is calculated online in MOZART-4, a lognormal number distribution is assumed with a priori fixed scale and location parameters, to calculate the surface area for heterogeneous reactions involving tropospheric oxidants. We have applied the PCM to the six parameters of the lognormal number distributions of Black Carbon, Organic Carbon and Sulfate. We have carried out a Monte-Carlo sampling from the probability density functions of the six uncertain parameters, using the reduced PCE model. The global mean concentration of major tropospheric oxidants did not show a significant variation in response to the variation in input parameters. However, a substantial variation at regional and temporal scale has been found. Tatang, M. A., Pan, W., Prinn, R. G., McRae, G. J., An efficient method for parametric uncertainty analysis of numerical geophysical models, J. Geophys. Res., 102, 21925-21932, 1997. Cohen, J. B., and R. G. Prinn, Development of a fast, urban chemistry metamodel for inclusion in global models, Atmos. Chem. Phys., 11, 7629-7656, doi:10.5194/acp-11-7629-2011, 2011. Emmons, L. K., Walters, S., Hess, P. G., Lamarque, J.-F., Pfister, G. G., Fillmore, D., Granier, C., Guenther, A., Kinnison, D., Laepple, T., Orlando, J., Tie, X., Tyndall, G., Wiedinmyer, C., Baughcum, S. L., Kloster, S., Description and evaluation of the Model for Ozone and Related chemical Tracers, version 4 (MOZART-4), Geosci. Model Dev., 3, 43-67, 2010.
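A minimal one-dimensional sketch of the PCM/PCE machinery (the model response and collocation points are stand-ins for the MOZART-4 runs): fit a Hermite polynomial chaos surrogate at a few collocation points, then Monte Carlo sample the cheap surrogate instead of the full model.

```python
# Probabilistic collocation with a Hermite polynomial chaos surrogate.
import numpy as np
from numpy.polynomial.hermite_e import hermevander

def expensive_model(xi):              # stand-in for a full 3D model response
    return np.exp(0.3 * xi) + 0.1 * xi**2

order = 3
xi_colloc = np.array([-2.3, -1.0, 0.0, 1.0, 2.3])   # collocation points
Phi = hermevander(xi_colloc, order)                 # probabilists' Hermite basis
coef, *_ = np.linalg.lstsq(Phi, expensive_model(xi_colloc), rcond=None)

xi_mc = np.random.default_rng(6).standard_normal(100000)
surrogate = hermevander(xi_mc, order) @ coef        # cheap PCE evaluations
print("surrogate mean/std:", surrogate.mean(), surrogate.std())
```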
Will systems biology offer new holistic paradigms to life sciences?
Conti, Filippo; Valerio, Maria Cristina; Zbilut, Joseph P.
2008-01-01
A biological system, like any complex system, blends stochastic and deterministic features, displaying properties of both. In a certain sense, this blend is exactly what we perceive as the “essence of complexity”, given that we tend to consider as non-complex both an ideal gas (fully stochastic and understandable at the statistical level in the thermodynamic limit of a huge number of particles) and a frictionless pendulum (fully deterministic relative to its motion). In this commentary we make the statement that systems biology will have a relevant impact on today's biology if (and only if) it is able to capture the essential character of this blend, which in our opinion is the generation of globally ordered collective modes supported by locally stochastic atomisms. PMID:19003440
Multi-scale dynamical behavior of spatially distributed systems: a deterministic point of view
NASA Astrophysics Data System (ADS)
Mangiarotti, S.; Le Jean, F.; Drapeau, L.; Huc, M.
2015-12-01
Physical and biophysical systems are spatially distributed systems. Their behavior can be observed or modelled spatially at various resolutions. In this work, a deterministic point of view is adopted to analyze multi-scale behavior, taking a set of ordinary differential equations (ODEs) as the elementary part of the system. To perform analyses, scenes of study are thus generated based on ensembles of identical elementary ODE systems. Without any loss of generality, their dynamics is chosen to be chaotic in order to ensure sensitivity to initial conditions, that is, one fundamental property of the atmosphere under unstable conditions [1]. The Rössler system [2] is used for this purpose for both its topological and algebraic simplicity [3,4]. Two cases are thus considered: the chaotic oscillators composing the scene of study are taken either independent, or in phase synchronization. Scale behaviors are analyzed considering the scene of study as aggregations (basically obtained by spatially averaging the signal) or as associations (obtained by concatenating the time series). The global modeling technique is used to perform the numerical analyses [5]. One important result of this work is that, under phase synchronization, a scene of aggregated dynamics can be approximated by the elementary system composing the scene, but modifying its parameterization [6]. This is shown based on numerical analyses. It is then demonstrated analytically and generalized to a larger class of ODE systems. Preliminary applications to cereal crops observed from satellite are also presented. [1] Lorenz, Deterministic nonperiodic flow, J. Atmos. Sci., 20, 130-141 (1963). [2] Rössler, An equation for continuous chaos, Phys. Lett. A, 57, 397-398 (1976). [3] Gouesbet & Letellier, Global vector-field reconstruction by using a multivariate polynomial L2 approximation on nets, Phys. Rev. E, 49, 4955-4972 (1994). [4] Letellier, Roulin & Rössler, Inequivalent topologies of chaos in simple equations, Chaos, Solitons & Fractals, 28, 337-360 (2006). [5] Mangiarotti, Coudret, Drapeau & Jarlan, Polynomial search and global modeling, Phys. Rev. E, 86(4), 046205 (2012). [6] Mangiarotti, Modélisation globale et Caractérisation Topologique de dynamiques environnementales, Habilitation à Diriger des Recherches, Univ. Toulouse 3 (2014).
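A minimal sketch of the aggregation experiment: integrate an ensemble of independent Rössler oscillators from perturbed initial conditions and spatially average their first components. Classical parameter values are assumed; the global modeling step itself is not reproduced here.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rossler(t, s, a=0.2, b=0.2, c=5.7):
    """Classical Rössler system used as the elementary ODE of the scene."""
    x, y, z = s
    return [-y - z, x + a * y, b + z * (x - c)]

rng = np.random.default_rng(2)
t_eval = np.linspace(0, 200, 4000)
xs = []
for _ in range(16):                       # 16 "elementary" systems in the scene
    s0 = [1, 1, 1] + 0.1 * rng.standard_normal(3)
    sol = solve_ivp(rossler, (0, 200), s0, t_eval=t_eval, rtol=1e-6)
    xs.append(sol.y[0])

aggregated = np.mean(xs, axis=0)          # spatial average of the scene
print(aggregated[:5])
```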
ERIC Educational Resources Information Center
Mirrlees, Tanner; Alvi, Shahid
2014-01-01
Since 2012, corporations, politicians, journalists and educators have asserted that MOOCs--massive open online courses--are radically changing North American and global education, and for the better. This article offers a counterpoint to the techno-deterministic and optimistic buzz surrounding for-profit MOOCs by contextualizing and analyzing…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hayes, T.; Smith, K.S.; Severino, F.
A critical capability of the new RHIC low level rf (LLRF) system is the ability to synchronize signals across multiple locations. The 'Update Link' provides this functionality. The 'Update Link' is a deterministic serial data link based on the Xilinx RocketIO protocol that is broadcast over fiber optic cable at 1 gigabit per second (Gbps). The link provides timing events and data packets as well as time stamp information for synchronizing diagnostic data from multiple sources. The new RHIC LLRF was designed to be a flexible, modular system. The system is constructed of numerous independent RF Controller chassis. To provide synchronization among all of these chassis, the Update Link system was designed. The Update Link system provides a low latency, deterministic data path to broadcast information to all receivers in the system. The Update Link system is based on a central hub, the Update Link Master (ULM), which generates the data stream that is distributed via fiber optic links. Downstream chassis have non-deterministic connections back to the ULM that allow any chassis to provide data that is broadcast globally.
The Stochastic Multi-strain Dengue Model: Analysis of the Dynamics
NASA Astrophysics Data System (ADS)
Aguiar, Maíra; Stollenwerk, Nico; Kooi, Bob W.
2011-09-01
Dengue dynamics is well known to be particularly complex, with large fluctuations of disease incidence. An epidemic multi-strain model motivated by dengue fever epidemiology shows deterministic chaos in wide parameter regions. The addition of seasonal forcing, mimicking the vectorial dynamics, and of a low import of infected individuals, which is realistic in the dynamics of infectious disease epidemics, produces complex dynamics and qualitatively good agreement between empirical DHF monitoring data and the model simulations. The addition of noise can explain the fluctuations observed in the empirical data, and for large enough population size, the stochastic system can be well described by the deterministic skeleton.
Finite-size effects and switching times for Moran process with mutation.
DeVille, Lee; Galiardi, Meghan
2017-04-01
We consider the Moran process with two populations competing under an iterated Prisoner's Dilemma in the presence of mutation, and concentrate on the case where there are multiple evolutionarily stable strategies. We perform a complete bifurcation analysis of the deterministic system which arises in the infinite-population-size limit. We also study the Master equation and obtain asymptotics for the invariant distribution and metastable switching times for the stochastic process in the case of large but finite populations. We also show that the stochastic system has asymmetries in the form of a skew for parameter values where the deterministic limit is symmetric.
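For readers who want to experiment, a minimal Moran process with mutation can be simulated as below; the 2x2 payoff matrix is a placeholder rather than the iterated Prisoner's Dilemma payoffs analyzed in the paper.

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [4.0, 2.0]])   # payoff[i, j]: strategy i against strategy j (toy values)
N, mu, steps = 100, 0.01, 200_000
rng = np.random.default_rng(3)

n = N // 2                       # number of individuals playing strategy 0
counts = np.zeros(N + 1)         # occupancy histogram over n
for _ in range(steps):
    x = np.array([n, N - n]) / N
    f = A @ x                    # mean-field average payoffs of the two strategies
    birth_p0 = n * f[0] / (n * f[0] + (N - n) * f[1])
    birth = 0 if rng.random() < birth_p0 else 1
    if rng.random() < mu:        # mutation flips the offspring's strategy
        birth = 1 - birth
    death = 0 if rng.random() < n / N else 1   # uniform death
    n += (birth == 0) - (death == 0)
    counts[n] += 1

print("empirical invariant distribution peaks at n =", counts.argmax())
```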
Optimal Wonderful Life Utility Functions in Multi-Agent Systems
NASA Technical Reports Server (NTRS)
Wolpert, David H.; Tumer, Kagan; Swanson, Keith (Technical Monitor)
2000-01-01
The mathematics of Collective Intelligence (COINs) is concerned with the design of multi-agent systems so as to optimize an overall global utility function when those systems lack centralized communication and control. Typically in COINs each agent runs a distinct Reinforcement Learning (RL) algorithm, so that much of the design problem reduces to how best to initialize/update each agent's private utility function, as far as the ensuing value of the global utility is concerned. Traditional team game solutions to this problem assign to each agent the global utility as its private utility function. In previous work we used the COIN framework to derive the alternative Wonderful Life Utility (WLU), and experimentally established that having the agents use it induces global utility performance up to orders of magnitude superior to that induced by use of the team game utility. The WLU has a free parameter (the clamping parameter) which we simply set to zero in that previous work. Here we derive the optimal value of the clamping parameter, and demonstrate experimentally that using that optimal value can result in significantly improved performance over that of clamping to zero, over and above the improvement beyond traditional approaches.
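The core WLU construction is easy to state in code: agent i's private utility is the global utility minus the global utility with agent i's action clamped. The toy global utility below and the clamp value of zero are illustrative; the paper's contribution is precisely to derive an optimal clamping value instead.

```python
def G(actions):
    """Toy global utility: reward coverage of distinct actions, penalize effort."""
    return len(set(actions)) - 0.1 * sum(actions)

def wlu(actions, i, clamp=0):
    """Wonderful Life Utility of agent i: G(z) - G(z with agent i clamped)."""
    clamped = list(actions)
    clamped[i] = clamp
    return G(actions) - G(clamped)

actions = [2, 1, 1, 3]
for i in range(len(actions)):
    print(f"agent {i}: WLU = {wlu(actions, i):.2f}")
```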
Ocean haline skin layer and turbulent surface convections
NASA Astrophysics Data System (ADS)
Zhang, Y.; Zhang, X.
2012-04-01
The ocean haline skin layer is of great interest to oceanographic applications, while its attributes are still subject to considerable uncertainty due to observational difficulties. By introducing the Batchelor micro-scale, a turbulent surface convection model is developed to determine the depths of various ocean skin layers with the same model parameters. These parameters are derived from matching cool skin layer observations. Global distributions of the salinity difference across ocean haline layers are then simulated, using surface forcing data mainly from the OAFlux project and ISCCP. It is found that, even though both the thickness of the haline layer and the salinity increment across it are greater than in early global simulations, the microwave remote sensing error caused by the haline microlayer effect is still smaller than that from other geophysical error sources. It is shown that forced convections due to sea surface wind stress are dominant over free convections driven by surface cooling in most regions of the oceans. The free convection instability is largely controlled by the cool skin effect because the thermal microlayer is much thicker and becomes unstable much earlier than the haline microlayer. The similarity of the global distributions of the temperature difference and salinity difference across cool and haline skin layers is investigated by comparing their forcing fields of heat fluxes. The turbulent convection model is also found applicable to formulating gas transfer velocity at low wind.
FACTORS INFLUENCING TOTAL DIETARY EXPOSURE OF YOUNG CHILDREN
A deterministic model was developed to identify critical input parameters to assess dietary intake of young children. The model was used as a framework for understanding important factors in data collection and analysis. Factors incorporated included transfer efficiencies of pest...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Latanision, R.M.
1990-12-01
Electrochemical corrosion is pervasive in virtually all engineering systems and in virtually all industrial circumstances. Although engineers now understand how to design systems to minimize corrosion in many instances, many fundamental questions remain poorly understood and, therefore, the development of corrosion control strategies is based more on empiricism than on a deep understanding of the processes by which metals corrode in electrolytes. Fluctuations in potential, or current, in electrochemical systems have been observed for many years. To date, all investigations of this phenomenon have utilized non-deterministic analyses. In this work it is proposed to study electrochemical noise from a deterministic viewpoint by comparison of experimental parameters, such as first and second order moments (non-deterministic), with computer simulation of corrosion at metal surfaces. In this way it is proposed to analyze the origins of these fluctuations and to elucidate the relationship between these fluctuations and kinetic parameters associated with metal dissolution and cathodic reduction reactions. This research program addresses in essence two areas of interest: (a) computer modeling of corrosion processes in order to study the electrochemical processes on an atomistic scale, and (b) experimental investigations of fluctuations in electrochemical systems and correlation of experimental results with computer modeling. In effect, the noise generated by mathematical modeling will be analyzed and compared to experimental noise in electrochemical systems.
Costs of eliminating malaria and the impact of the global fund in 34 countries.
Zelman, Brittany; Kiszewski, Anthony; Cotter, Chris; Liu, Jenny
2014-01-01
International financing for malaria increased more than 18-fold between 2000 and 2011; the largest source came from The Global Fund to Fight AIDS, Tuberculosis and Malaria (Global Fund). Countries have made substantial progress, but achieving elimination requires sustained finances to interrupt transmission and prevent reintroduction. Since 2011, global financing for malaria has declined, fueling concerns that further progress will be impeded, especially for current malaria-eliminating countries that may face resurgent malaria if programs are disrupted. This study aims to 1) assess past total and Global Fund funding to the 34 current malaria-eliminating countries, and 2) estimate their future funding needs to achieve malaria elimination and prevent reintroduction through 2030. Historical funding is assessed against trends in country-level malaria annual parasite incidences (APIs) and income per capita. Following Kiszewski et al. (2007), program costs to eliminate malaria and prevent reintroduction through 2030 are estimated using a deterministic model. The cost parameters are tailored to a package of interventions aimed at malaria elimination and prevention of reintroduction. In the majority of Global Fund-supported countries, increases in total funding from 2005 to 2010 coincided with reductions in malaria APIs and with overall growth in GNI per capita. The total amount of projected funding needed for the current malaria-eliminating countries to achieve elimination and prevent reintroduction through 2030 is approximately US$8.5 billion, or about $1.84 per person at risk per year (PPY) (ranging from $2.51 PPY in 2014 to $1.43 PPY in 2030). Although external donor funding, particularly from the Global Fund, has been key for many malaria-eliminating countries, sustained and sufficient financing is critical for furthering global malaria elimination. Projected cost estimates for elimination provide policymakers with an indication of the level of financial resources that should be mobilized to achieve malaria elimination goals.
Statistical emulation of landslide-induced tsunamis at the Rockall Bank, NE Atlantic.
Salmanidou, D M; Guillas, S; Georgiopoulou, A; Dias, F
2017-04-01
Statistical methods constitute a useful approach to understand and quantify the uncertainty that governs complex tsunami mechanisms. Numerical experiments may often have a high computational cost. This forms a limiting factor for performing uncertainty and sensitivity analyses, where numerous simulations are required. Statistical emulators, as surrogates of these simulators, can provide predictions of the physical process in a much faster and computationally inexpensive way. They can form a prominent solution to explore thousands of scenarios that would be otherwise numerically expensive and difficult to achieve. In this work, we build a statistical emulator of the deterministic codes used to simulate submarine sliding and tsunami generation at the Rockall Bank, NE Atlantic Ocean, in two stages. First we calibrate, against observations of the landslide deposits, the parameters used in the landslide simulations. This calibration is performed under a Bayesian framework using Gaussian Process (GP) emulators to approximate the landslide model, and the discrepancy function between model and observations. Distributions of the calibrated input parameters are obtained as a result of the calibration. In a second step, a GP emulator is built to mimic the coupled landslide-tsunami numerical process. The emulator propagates the uncertainties in the distributions of the calibrated input parameters inferred from the first step to the outputs. As a result, a quantification of the uncertainty of the maximum free surface elevation at specified locations is obtained.
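The emulation step can be sketched with an off-the-shelf GP regressor: train on a handful of expensive simulator runs, then predict with uncertainty at untried inputs. The "simulator" mapping landslide parameters to maximum free-surface elevation is a stand-in, as are the parameter names and ranges.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def simulator(volume, friction):
    """Placeholder for the expensive landslide-tsunami code."""
    return 5.0 * volume / (1.0 + 10.0 * friction)

rng = np.random.default_rng(4)
X = rng.uniform([0.5, 0.01], [2.0, 0.2], size=(20, 2))   # (volume, friction) designs
y = simulator(X[:, 0], X[:, 1])

gp = GaussianProcessRegressor(ConstantKernel() * RBF([0.5, 0.05]),
                              normalize_y=True).fit(X, y)

Xnew = rng.uniform([0.5, 0.01], [2.0, 0.2], size=(5, 2))
mean, sd = gp.predict(Xnew, return_std=True)
for m, s in zip(mean, sd):
    print(f"predicted max elevation: {m:.2f} +/- {2 * s:.2f}")
```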
How the growth rate of host cells affects cancer risk in a deterministic way
NASA Astrophysics Data System (ADS)
Draghi, Clément; Viger, Louise; Denis, Fabrice; Letellier, Christophe
2017-09-01
It is well known that cancers are significantly more often encountered in some tissues than in other ones. In this paper, by using a deterministic model describing the interactions between host, effector immune and tumor cells at the tissue level, we show that this can be explained by the dependency of tumor growth on parameter values characterizing the type as well as the state of the tissue considered, due to the "way of life" (environmental factors, food consumption, drinking or smoking habits, etc.). Our approach is purely deterministic and, consequently, the strong correlation (r = 0.99) between the number of detectable growing tumors and the growth rate of cells from the nesting tissue can be explained without invoking random mutations arising during DNA replication in nonmalignant cells or "bad luck". Strategies to limit the mortality induced by cancer could therefore be well based on improving the way of life, that is, by better preserving the tissue where mutant cells randomly arise.
On the usage of ultrasound computational models for decision making under ambiguity
NASA Astrophysics Data System (ADS)
Dib, Gerges; Sexton, Samuel; Prowant, Matthew; Crawford, Susan; Diaz, Aaron
2018-04-01
Computer modeling and simulation is becoming pervasive within the non-destructive evaluation (NDE) industry as a convenient tool for designing and assessing inspection techniques. This raises a pressing need for developing quantitative techniques for demonstrating the validity and applicability of the computational models. Computational models provide deterministic results based on deterministic and well-defined input, or stochastic results based on inputs defined by probability distributions. However, computational models cannot account for the effects of personnel, procedures, and equipment, resulting in ambiguity about the efficacy of inspections based on guidance from computational models only. In addition, ambiguity arises when model inputs, such as the representation of realistic cracks, cannot be defined deterministically, probabilistically, or by intervals. In this work, Pacific Northwest National Laboratory demonstrates the ability of computational models to represent field measurements under known variabilities, and quantifies the differences using maximum amplitude and power spectral density metrics. Sensitivity studies are also conducted to quantify the effects of different input parameters on the simulation results.
NASA Astrophysics Data System (ADS)
Rosenblum, Serge; Borne, Adrien; Dayan, Barak
2017-03-01
The long-standing goal of deterministic quantum interactions between single photons and single atoms was recently realized in various experiments. Among these, an appealing demonstration relied on single-photon Raman interaction (SPRINT) in a three-level atom coupled to a single-mode waveguide. In essence, the interference-based process of SPRINT deterministically swaps the qubits encoded in a single photon and a single atom, without the need for additional control pulses. It can also be harnessed to construct passive entangling quantum gates, and can therefore form the basis for scalable quantum networks in which communication between the nodes is carried out only by single-photon pulses. Here we present an analytical and numerical study of SPRINT, characterizing its limitations and defining parameters for its optimal operation. Specifically, we study the effect of losses, imperfect polarization, and the presence of multiple excited states. In all cases we discuss strategies for restoring the operation of SPRINT.
Coupled Multi-Disciplinary Optimization for Structural Reliability and Affordability
NASA Technical Reports Server (NTRS)
Abumeri, Galib H.; Chamis, Christos C.
2003-01-01
A computational simulation method is presented for Non-Deterministic Multidisciplinary Optimization of engine composite materials and structures. A hypothetical engine duct made with ceramic matrix composites (CMC) is evaluated probabilistically in the presence of combined thermo-mechanical loading. The structure is tailored by quantifying the uncertainties in all relevant design variables such as fabrication, material, and loading parameters. The probabilistic sensitivities are used to select critical design variables for optimization. In this paper, two approaches for non-deterministic optimization are presented. The non-deterministic minimization of combined failure stress criterion is carried out by: (1) performing probabilistic evaluation first and then optimization and (2) performing optimization first and then probabilistic evaluation. The first approach shows that the optimization feasible region can be bounded by a set of prescribed probability limits and that the optimization follows the cumulative distribution function between those limits. The second approach shows that the optimization feasible region is bounded by 0.50 and 0.999 probabilities.
Aguirre, Erik; Lopez-Iturri, Peio; Azpilicueta, Leire; Astrain, José Javier; Villadangos, Jesús; Falcone, Francisco
2015-02-05
One of the main challenges in the implementation and design of context-aware scenarios is the adequate deployment strategy for Wireless Sensor Networks (WSNs), mainly due to the strong dependence of the radiofrequency physical layer with the surrounding media, which can lead to non-optimal network designs. In this work, radioplanning analysis for WSN deployment is proposed by employing a deterministic 3D ray launching technique in order to provide insight into complex wireless channel behavior in context-aware indoor scenarios. The proposed radioplanning procedure is validated with a testbed implemented with a Mobile Ad Hoc Network WSN following a chain configuration, enabling the analysis and assessment of a rich variety of parameters, such as received signal level, signal quality and estimation of power consumption. The adoption of deterministic radio channel techniques allows the design and further deployment of WSNs in heterogeneous wireless scenarios with optimized behavior in terms of coverage, capacity, quality of service and energy consumption.
A random walk on water (Henry Darcy Medal Lecture)
NASA Astrophysics Data System (ADS)
Koutsoyiannis, D.
2009-04-01
Randomness and uncertainty had been well appreciated in hydrology and water resources engineering in their initial steps as scientific disciplines. However, this changed through the years and, following other geosciences, hydrology adopted a naïve view of randomness in natural processes. Such a view separates natural phenomena into two mutually exclusive types, random or stochastic, and deterministic. When a classification of a specific process into one of these two types fails, then a separation of the process into two different, usually additive, parts is typically devised, each of which may be further subdivided into subparts (e.g., deterministic subparts such as periodic and aperiodic or trends). This dichotomous logic is typically combined with a Manichean perception, in which the deterministic part supposedly represents cause-effect relationships and thus is physics and science (the "good"), whereas randomness has little relationship with science and no relationship with understanding (the "evil"). Probability theory and statistics, which traditionally provided the tools for dealing with randomness and uncertainty, have been regarded by some as the "necessary evil" but not as an essential part of hydrology and geophysics. Some took a step further to banish them from hydrology, replacing them with deterministic sensitivity analysis and fuzzy-logic representations. Others attempted to demonstrate that irregular fluctuations observed in natural processes are au fond manifestations of underlying chaotic deterministic dynamics with low dimensionality, thus attempting to render probabilistic descriptions unnecessary. Some of the above recent developments are simply flawed because they make erroneous use of probability and statistics (which, remarkably, provide the tools for such analyses), whereas the entire underlying logic is just a false dichotomy. To see this, it suffices to recall that Pierre Simon Laplace, perhaps the most famous proponent of determinism in the history of philosophy of science (cf. Laplace's demon), is, at the same time, one of the founders of probability theory, which he regarded as "nothing but common sense reduced to calculation". This harmonizes with James Clerk Maxwell's view that "the true logic for this world is the calculus of Probabilities" and was more recently and epigrammatically formulated in the title of Edwin Thompson Jaynes's book "Probability Theory: The Logic of Science" (2003). Abandoning dichotomous logic, either on ontological or epistemic grounds, we can identify randomness or stochasticity with unpredictability. Admitting that (a) uncertainty is an intrinsic property of nature; (b) causality implies dependence of natural processes in time and thus suggests predictability; but, (c) even the tiniest uncertainty (e.g., in initial conditions) may result in unpredictability after a certain time horizon, we may shape a stochastic representation of natural processes that is consistent with Karl Popper's indeterministic world view. In this representation, probability quantifies uncertainty according to the Kolmogorov system, in which probability is a normalized measure, i.e., a function that maps sets (areas where the initial conditions or the parameter values lie) to real numbers (in the interval [0, 1]).
In such a representation, predictability (suggested by deterministic laws) and unpredictability (randomness) coexist, are not separable or additive components, and it is a matter of specifying the time horizon of prediction to decide which of the two dominates. An elementary numerical example has been devised to illustrate the above ideas and demonstrate that they offer a pragmatic and useful guide for practice, rather than just pertaining to philosophical discussions. A chaotic model, with fully and a priori known deterministic dynamics and deterministic inputs (without any random agent), is assumed to represent the hydrological balance in an area partly covered by vegetation. Experimentation with this toy model demonstrates, inter alia, that: (1) for short time horizons the deterministic dynamics is able to give good predictions; but (2) these predictions become extremely inaccurate and useless for long time horizons; (3) for such horizons a naïve statistical prediction (average of past data) which fully neglects the deterministic dynamics is more skilful; and (4) if this statistical prediction, in addition to past data, is combined with the probability theory (the principle of maximum entropy, in particular), it can provide a more informative prediction. Also, the toy model shows that the trajectories of the system state (and derivative properties thereof) do not resemble a regular (e.g., periodic) deterministic process nor a purely random process, but exhibit patterns indicating anti-persistence and persistence (where the latter statistically complies with a Hurst-Kolmogorov behaviour). If the process is averaged over long time scales, the anti-persistent behaviour improves predictability, whereas the persistent behaviour substantially deteriorates it. A stochastic representation of this deterministic system, which incorporates dynamics, is not only possible, but also powerful as it provides good predictions for both short and long horizons and helps to decide on when the deterministic dynamics should be considered or neglected. Obviously, a natural system is extremely more complex than this simple toy model and hence unpredictability is naturally even more prominent in the former. In addition, in a complex natural system, we can never know the exact dynamics and we must infer it from past data, which implies additional uncertainty and an additional role of stochastics in the process of formulating the system equations and estimating the involved parameters. Data also offer the only solid grounds to test any hypothesis about the dynamics, and failure of performing such testing against evidence from data renders the hypothesised dynamics worthless. If this perception of natural phenomena is adequately plausible, then it may help in studying interesting fundamental questions regarding the current state and the trends of hydrological and water resources research and their promising future paths. For instance: (i) Will it ever be possible to achieve a fully "physically based" modelling of hydrological systems that will not depend on data or stochastic representations? (ii) To what extent can hydrological uncertainty be reduced and what are the effective means for such reduction? (iii) Are current stochastic methods in hydrology consistent with observed natural behaviours? What paths should we explore for their advancement? (iv) Can deterministic methods provide solid scientific grounds for water resources engineering and management? 
In particular, can there be risk-free hydraulic engineering and water management? (v) Is the current (particularly important) interface between hydrology and climate satisfactory? In particular, should hydrology rely on climate models that are not properly validated (i.e., for periods and scales not used in calibration)? In effect, is the evolution of climate and its impacts on water resources deterministically predictable?
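The predictability argument in the toy-model experiment can be reproduced with any chaotic map. The sketch below uses a logistic map as a stand-in (not the lecture's actual hydrological toy model): a deterministic forecast from a slightly erroneous initial condition beats the climatological mean at short horizons and loses to it at long ones.

```python
import numpy as np

def trajectory(x0, n, r=4.0):
    """Iterate the chaotic logistic map x -> r x (1 - x)."""
    xs = np.empty(n); x = x0
    for i in range(n):
        x = r * x * (1 - x); xs[i] = x
    return xs

truth = trajectory(0.3, 60)
forecast = trajectory(0.3 + 1e-6, 60)        # tiny initial-condition error
clim = trajectory(0.3, 100_000).mean()       # "climatological" mean, ~0.5

for h in (1, 5, 15, 40):
    det_err = abs(truth[h] - forecast[h])
    stat_err = abs(truth[h] - clim)
    print(f"horizon {h:2d}: deterministic err={det_err:.4f}, climatology err={stat_err:.4f}")
```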
The q-G method : A q-version of the Steepest Descent method for global optimization.
Soterroni, Aline C; Galski, Roberto L; Scarabello, Marluce C; Ramos, Fernando M
2015-01-01
In this work, the q-Gradient (q-G) method, a q-version of the Steepest Descent method, is presented. The main idea behind the q-G method is the use of the negative of the q-gradient vector of the objective function as the search direction. The q-gradient vector, or simply the q-gradient, is a generalization of the classical gradient vector based on the concept of Jackson's derivative from the q-calculus. Its use provides the algorithm with an effective mechanism for escaping from local minima. The q-G method reduces to the Steepest Descent method when the parameter q tends to 1. The algorithm has three free parameters and it is implemented so that the search process gradually shifts from global exploration in the beginning to local exploitation in the end. We evaluated the q-G method on 34 test functions, and compared its performance with 34 optimization algorithms, including derivative-free algorithms and the Steepest Descent method. Our results show that the q-G method is competitive and has a great potential for solving multimodal optimization problems.
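A minimal sketch of the idea: replace each partial derivative with Jackson's q-derivative, D_q f(x) = (f(qx) - f(x)) / ((q - 1)x), and anneal q toward 1 so the search shifts from exploration to exploitation. The annealing schedule, step size, and test function below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def jackson_grad(f, x, q):
    """Component-wise Jackson q-derivative of f at x."""
    g = np.empty_like(x)
    for i in range(len(x)):
        xq = x.copy(); xq[i] = q * x[i]
        denom = (q - 1.0) * x[i]
        g[i] = (f(xq) - f(x)) / denom if abs(denom) > 1e-12 else 0.0
    return g

def rastrigin(x):
    return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

x = np.array([4.2, -3.7])
q, beta, step = 2.0, 0.995, 0.01
for _ in range(2000):
    x = x - step * jackson_grad(rastrigin, x, q)
    q = 1.0 + (q - 1.0) * beta        # q -> 1 recovers steepest descent
print("final point:", x, "f =", rastrigin(x))
```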
Education and the Free Will Problem: A Spinozist Contribution
ERIC Educational Resources Information Center
Dahlbeck, Johan
2017-01-01
In this Spinozist defence of the educational promotion of students' autonomy I argue for a deterministic position where freedom of will is deemed unrealistic in the metaphysical sense, but important in the sense that it is an undeniable psychological fact. The paper is structured in three parts. The first part investigates the concept of autonomy…
FACTORS INFLUENCING TOTAL DIETARY EXPOSURES OF YOUNG CHILDREN
A deterministic model was developed to identify the critical input parameters needed to assess dietary intakes of young children. The model was used as a framework for understanding the important factors in data collection and data analysis. Factors incorporated into the model i...
NASA Astrophysics Data System (ADS)
de Macedo, Isadora A. S.; da Silva, Carolina B.; de Figueiredo, J. J. S.; Omoboya, Bode
2017-01-01
Wavelet estimation as well as seismic-to-well tie procedures are at the core of every seismic interpretation workflow. In this paper we perform a comparative study of wavelet estimation methods for seismic-to-well tie. Two approaches to wavelet estimation are discussed: a deterministic estimation, based on both seismic and well log data, and a statistical estimation, based on predictive deconvolution and the classical assumptions of the convolutional model, which provides a minimum-phase wavelet. Our algorithms, for both wavelet estimation methods, introduce a semi-automatic approach to determine the optimum parameters of deterministic and statistical wavelet estimation and, further, to estimate the optimum seismic wavelets by searching for the highest correlation coefficient between the recorded trace and the synthetic trace, when the time-depth relationship is accurate. Tests with numerical data allow some qualitative conclusions, which are probably useful for seismic inversion and interpretation of field data, by comparing deterministic and statistical wavelet estimation in detail, especially for the field data example. The feasibility of this approach is verified on real seismic and well data from the Viking Graben field, North Sea, Norway. Our results also show the influence of washout zones in the well log data on the quality of the seismic-to-well tie.
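The correlation-based parameter search can be illustrated in a few lines: scan candidate wavelets, build synthetics from a reflectivity series, and keep the wavelet whose synthetic correlates best with the recorded trace. The Ricker parameterization below is a simplified stand-in for the deterministic/statistical estimation parameters actually scanned.

```python
import numpy as np

def ricker(f, dt=0.002, n=101):
    """Zero-phase Ricker wavelet with peak frequency f (Hz)."""
    t = (np.arange(n) - n // 2) * dt
    a = (np.pi * f * t) ** 2
    return (1 - 2 * a) * np.exp(-a)

rng = np.random.default_rng(5)
refl = rng.standard_normal(500) * (rng.random(500) < 0.05)   # sparse reflectivity
recorded = np.convolve(refl, ricker(30.0), mode="same")
recorded += 0.05 * rng.standard_normal(recorded.size)        # measurement noise

best = max(
    (np.corrcoef(np.convolve(refl, ricker(f), mode="same"), recorded)[0, 1], f)
    for f in np.arange(10.0, 61.0, 2.0)
)
print(f"best correlation {best[0]:.3f} at peak frequency {best[1]:.0f} Hz")
```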
Combining Deterministic structures and stochastic heterogeneity for transport modeling
NASA Astrophysics Data System (ADS)
Zech, Alraune; Attinger, Sabine; Dietrich, Peter; Teutsch, Georg
2017-04-01
Contaminant transport in highly heterogeneous aquifers is extremely challenging and a subject of current scientific debate. Tracer plumes often show non-symmetric but highly skewed plume shapes. Predicting such transport behavior using the classical advection-dispersion equation (ADE) in combination with a stochastic description of aquifer properties requires a dense measurement network. This is in contrast to the available information for most aquifers. A new conceptual aquifer structure model is presented which combines large-scale deterministic information and the stochastic approach for incorporating sub-scale heterogeneity. The conceptual model is designed to allow for a goal-oriented, site specific transport analysis making use of as few data as possible. Thereby the basic idea is to reproduce highly skewed tracer plumes in heterogeneous media by incorporating deterministic contrasts and effects of connectivity instead of using unimodal heterogeneous models with high variances. The conceptual model consists of deterministic blocks of mean hydraulic conductivity which might be measured by pumping tests, indicating values differing by orders of magnitude. A sub-scale heterogeneity is introduced within every block. This heterogeneity can be modeled as bimodal or log-normal distributed. The impact of input parameters, structure and conductivity contrasts is investigated in a systematic manner. Furthermore, a first successful implementation of the model was achieved for the well-known MADE site.
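Generating a realization of such a structure is straightforward; the sketch below combines deterministic blocks whose means differ by orders of magnitude with log-normal sub-scale heterogeneity. Block geometry and variances are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
nx, nz = 300, 100
K = np.empty((nz, nx))

block_means = [1e-3, 1e-5, 1e-4]          # m/s, e.g. from pumping tests (assumed)
edges = [0, 100, 220, nx]                 # deterministic block boundaries
for mean, x0, x1 in zip(block_means, edges[:-1], edges[1:]):
    sigma2 = 1.0                          # assumed sub-scale log-variance per block
    # mean-preserving log-normal field: E[K] equals the block mean
    logK = np.log(mean) - sigma2 / 2 + np.sqrt(sigma2) * rng.standard_normal((nz, x1 - x0))
    K[:, x0:x1] = np.exp(logK)

print("block geometric means:",
      [f"{np.exp(np.log(K[:, a:b]).mean()):.1e}" for a, b in zip(edges[:-1], edges[1:])])
```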
Demographic noise can reverse the direction of deterministic selection
Constable, George W. A.; Rogers, Tim; McKane, Alan J.; Tarnita, Corina E.
2016-01-01
Deterministic evolutionary theory robustly predicts that populations displaying altruistic behaviors will be driven to extinction by mutant cheats that absorb common benefits but do not themselves contribute. Here we show that when demographic stochasticity is accounted for, selection can in fact act in the reverse direction to that predicted deterministically, instead favoring cooperative behaviors that appreciably increase the carrying capacity of the population. Populations that exist in larger numbers experience a selective advantage by being more stochastically robust to invasions than smaller populations, and this advantage can persist even in the presence of reproductive costs. We investigate this general effect in the specific context of public goods production and find conditions for stochastic selection reversal leading to the success of public good producers. This insight, developed here analytically, is missed by the deterministic analysis as well as by standard game theoretic models that enforce a fixed population size. The effect is found to be amplified by space; in this scenario we find that selection reversal occurs within biologically reasonable parameter regimes for microbial populations. Beyond the public good problem, we formulate a general mathematical framework for models that may exhibit stochastic selection reversal. In this context, we describe a stochastic analog to r−K theory, by which small populations can evolve to higher densities in the absence of disturbance. PMID:27450085
NASA Astrophysics Data System (ADS)
Havemann, Frank; Heinz, Michael; Struck, Alexander; Gläser, Jochen
2011-01-01
We propose a new local, deterministic and parameter-free algorithm that detects fuzzy and crisp overlapping communities in a weighted network and simultaneously reveals their hierarchy. Using a local fitness function, the algorithm greedily expands natural communities of seeds until the whole graph is covered. The hierarchy of communities is obtained analytically by calculating resolution levels at which communities grow rather than numerically by testing different resolution levels. This analytic procedure is not only more exact than its numerical alternatives such as LFM and GCE but also much faster. Critical resolution levels can be identified by searching for intervals in which large changes of the resolution do not lead to growth of communities. We tested our algorithm on benchmark graphs and on a network of 492 papers in information science. Combined with a specific post-processing, the algorithm gives much more precise results on LFR benchmarks with high overlap compared to other algorithms and performs very similarly to GCE.
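A toy version of greedy local expansion with a fitness of the form f(C) = k_in / (k_in + k_out)^alpha (the LFM-style fitness named above) looks as follows; it mimics the general mechanism, not the paper's exact analytic resolution-level procedure.

```python
# Tiny undirected graph as an adjacency dict: two triangles joined by one edge.
graph = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}

def fitness(C, alpha=1.0):
    k_in = sum(len(graph[v] & C) for v in C)          # internal edge ends (each edge twice)
    k_out = sum(len(graph[v] - C) for v in C)         # edges leaving the community
    return k_in / (k_in + k_out) ** alpha if k_in + k_out else 0.0

def expand(seed, alpha=1.0):
    """Greedily grow the natural community of a seed while fitness improves."""
    C = {seed}
    improved = True
    while improved:
        improved = False
        for v in set().union(*(graph[u] for u in C)) - C:
            if fitness(C | {v}, alpha) > fitness(C, alpha):
                C |= {v}; improved = True
    return C

print(expand(0))   # -> {0, 1, 2} at this resolution
```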
NASA Technical Reports Server (NTRS)
Franks, Shannon; Masek, Jeffrey G.; Headley, Rachel M.; Gasch, John; Arvidson, Terry
2009-01-01
The Global Land Survey (GLS) 2005 is a cloud-free, orthorectified collection of Landsat imagery acquired during the 2004-2007 epoch intended to support global land-cover and ecological monitoring. Due to the numerous complexities in selecting imagery for the GLS2005, NASA and the U.S. Geological Survey (USGS) sponsored the development of an automated scene selection tool, the Large Area Scene Selection Interface (LASSI), to aid in the selection of imagery for this data set. This innovative approach to scene selection applied a user-defined weighting system to various scene parameters: image cloud cover, image vegetation greenness, choice of sensor, and the ability of the Landsat 7 Scan Line Corrector (SLC)-off pair to completely fill image gaps, among others. The parameters considered in scene selection were weighted according to their relative importance to the data set, along with the algorithm's sensitivity to that weight. This paper describes the methodology and analysis that established the parameter weighting strategy, as well as the post-screening processes used in selecting the optimal data set for GLS2005.
A Global Analysis of Light and Charge Yields in Liquid Xenon
Lenardo, Brian; Kazkaz, Kareem; Manalaysay, Aaron; ...
2015-11-04
Here, we present an updated model of light and charge yields from nuclear recoils in liquid xenon with a simultaneously constrained parameter set. A global analysis is performed using measurements of electron and photon yields compiled from all available historical data, as well as measurements of the ratio of the two. These data sweep over a range of recoil energies (keV) and external applied electric fields (V/cm). The model is constrained by constructing global cost functions and using a simulated annealing algorithm and a Markov Chain Monte Carlo approach to optimize and find confidence intervals on all free parameters in the model. This analysis contrasts with previous work in that we do not unnecessarily exclude datasets nor impose artificially conservative assumptions, do not use spline functions, and reduce the number of parameters used in NEST v0.98. Here, we report our results and the calculated best-fit charge and light yields. These quantities are crucial to understanding the response of liquid xenon detectors in the energy regime important for rare event searches such as the direct detection of dark matter particles.
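The global-fit strategy, a scalar cost over all datasets minimized by simulated annealing, can be sketched with SciPy's dual_annealing. The "yield model", data, and bounds below are synthetic stand-ins for the NEST parameterization.

```python
import numpy as np
from scipy.optimize import dual_annealing

energies = np.linspace(5, 100, 20)        # keV (synthetic grid)

def yield_model(E, p):
    """Toy power-law yield curve standing in for the real parameterization."""
    a, b = p
    return a * E**b

rng = np.random.default_rng(13)
truth = (2.0, 0.5)
data = yield_model(energies, truth) * (1 + 0.05 * rng.standard_normal(20))
sigma = 0.05 * data

def cost(p):
    """Global chi-square cost over the (single, synthetic) dataset."""
    return np.sum(((yield_model(energies, p) - data) / sigma) ** 2)

res = dual_annealing(cost, bounds=[(0.1, 10.0), (0.0, 1.0)], seed=42)
print("best-fit parameters:", res.x, "chi2 =", round(res.fun, 1))
```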
Long-range persistence in the global mean surface temperature and the global warming "time bomb"
NASA Astrophysics Data System (ADS)
Rypdal, M.; Rypdal, K.
2012-04-01
Detrended Fluctuation Analysis (DFA) and Maximum Likelihood Estimations (MLE) based on instrumental data over the last 160 years indicate that there is Long-Range Persistence (LRP) in Global Mean Surface Temperature (GMST) on time scales of months to decades. The persistence is much higher in sea surface temperature than in land temperatures. Power spectral analysis of multi-model, multi-ensemble runs of global climate models indicate further that this persistence may extend to centennial and maybe even millennial time-scales. We also support these conclusions by wavelet variogram analysis, DFA, and MLE of Northern hemisphere mean surface temperature reconstructions over the last two millennia. These analyses indicate that the GMST is a strongly persistent noise with Hurst exponent H>0.9 on time scales from decades up to at least 500 years. We show that such LRP can be very important for long-term climate prediction and for the establishment of a "time bomb" in the climate system due to a growing energy imbalance caused by the slow relaxation to radiative equilibrium under rising anthropogenic forcing. We do this by the construction of a multi-parameter dynamic-stochastic model for the GMST response to deterministic and stochastic forcing, where LRP is represented by a power-law response function. Reconstructed data for total forcing and GMST over the last millennium are used with this model to estimate trend coefficients and Hurst exponent for the GMST on multi-century time scale by means of MLE. Ensembles of solutions generated from the stochastic model also allow us to estimate confidence intervals for these estimates.
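A compact DFA implementation suffices to reproduce the kind of Hurst-exponent estimate quoted above; the input series here is plain white noise (expected exponent near 0.5), not GMST data.

```python
import numpy as np

def dfa(x, scales):
    """DFA-1 fluctuation function: RMS of linearly detrended profile segments."""
    y = np.cumsum(x - x.mean())                 # profile
    F = []
    for s in scales:
        n = len(y) // s
        segs = y[: n * s].reshape(n, s)
        t = np.arange(s)
        res = []
        for seg in segs:
            coef = np.polyfit(t, seg, 1)        # linear detrending per segment
            res.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(res)))
    return np.array(F)

rng = np.random.default_rng(7)
x = rng.standard_normal(4096)
scales = np.array([8, 16, 32, 64, 128, 256])
F = dfa(x, scales)
H = np.polyfit(np.log(scales), np.log(F), 1)[0]  # slope of log-log fit
print(f"estimated Hurst exponent: {H:.2f} (expected ~0.5 for white noise)")
```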
Solving a Multi Objective Transportation Problem(MOTP) Under Fuzziness on Using Interval Numbers
NASA Astrophysics Data System (ADS)
Saraj, Mansour; Mashkoorzadeh, Feryal
2010-09-01
In this paper we present a solution procedure for the Multi Objective Transportation Problem (MOTP) where the coefficients of the objective functions and the source and destination parameters, which are determined by the decision maker (DM), are symmetric triangular fuzzy numbers. The constraints with interval source and destination parameters have been converted into deterministic ones. A numerical example is provided to illustrate the approach.
White, L J; Mandl, J N; Gomes, M G M; Bodley-Tickell, A T; Cane, P A; Perez-Brena, P; Aguilar, J C; Siqueira, M M; Portes, S A; Straliotto, S M; Waris, M; Nokes, D J; Medley, G F
2007-09-01
The nature and role of re-infection and partial immunity are likely to be important determinants of the transmission dynamics of human respiratory syncytial virus (hRSV). We propose a single model structure that captures four possible host responses to infection and subsequent reinfection: partial susceptibility, altered infection duration, reduced infectiousness and temporary immunity (which might be partial). The magnitude of these responses is determined by four homotopy parameters, and by setting some of these parameters to extreme values we generate a set of eight nested, deterministic transmission models. In order to investigate hRSV transmission dynamics, we applied these models to incidence data from eight international locations. Seasonality is included as cyclic variation in transmission. Parameters associated with the natural history of the infection were assumed to be independent of geographic location, while others, such as those associated with seasonality, were assumed location specific. Models incorporating either of the two extreme assumptions for immunity (none or solid and lifelong) were unable to reproduce the observed dynamics. Model fits with either waning or partial immunity to disease or both were visually comparable. The best fitting structure was a lifelong partial immunity to both disease and infection. Observed patterns were reproduced by stochastic simulations using the parameter values estimated from the deterministic models.
Linear regression metamodeling as a tool to summarize and present simulation model results.
Jalal, Hawre; Dowd, Bryan; Sainfort, François; Kuntz, Karen M
2013-10-01
Modelers lack a tool to systematically and clearly present complex model results, including those from sensitivity analyses. The objective was to propose linear regression metamodeling as a tool to increase transparency of decision analytic models and better communicate their results. We used a simplified cancer cure model to demonstrate our approach. The model computed the lifetime cost and benefit of 3 treatment options for cancer patients. We simulated 10,000 cohorts in a probabilistic sensitivity analysis (PSA) and regressed the model outcomes on the standardized input parameter values in a set of regression analyses. We used the regression coefficients to describe measures of sensitivity analyses, including threshold and parameter sensitivity analyses. We also compared the results of the PSA to deterministic full-factorial and one-factor-at-a-time designs. The regression intercept represented the estimated base-case outcome, and the other coefficients described the relative parameter uncertainty in the model. We defined simple relationships that compute the average and incremental net benefit of each intervention. Metamodeling produced outputs similar to traditional deterministic 1-way or 2-way sensitivity analyses but was more reliable since it used all parameter values. Linear regression metamodeling is a simple, yet powerful, tool that can assist modelers in communicating model characteristics and sensitivity analyses.
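The approach reduces to one regression: standardize the PSA input draws, regress the outcome on them, and read the intercept as the base-case outcome and the coefficients as standardized parameter effects. The toy decision model below is a stand-in for the cancer cure model.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 10_000
cost_drug = rng.normal(5000, 500, n)        # PSA draws of two input parameters
p_cure = rng.beta(20, 80, n)

net_benefit = 100_000 * p_cure - cost_drug  # toy outcome of the decision model

X = np.column_stack([cost_drug, p_cure])
Xs = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize inputs
A = np.column_stack([np.ones(n), Xs])
beta, *_ = np.linalg.lstsq(A, net_benefit, rcond=None)

print(f"intercept (base-case outcome): {beta[0]:.0f}")
print(f"standardized effects: cost_drug={beta[1]:.0f}, p_cure={beta[2]:.0f}")
```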
Global behavior analysis for stochastic system of 1,3-PD continuous fermentation
NASA Astrophysics Data System (ADS)
Zhu, Xi; Kliemann, Wolfgang; Li, Chunfa; Feng, Enmin; Xiu, Zhilong
2017-12-01
The global behavior of a stochastic system for continuous fermentation in glycerol bio-dissimilation to 1,3-propanediol by Klebsiella pneumoniae is analyzed in this paper. This bioprocess cannot avoid stochastic perturbations caused by internal and external disturbances, which act on the growth rate. These negative factors can limit and degrade the achievable performance of controlled systems. Based on multiplicity phenomena, the equilibria and bifurcations of the deterministic system are analyzed. Then, a stochastic model is presented by a bounded Markov diffusion process. In order to analyze the global behavior, we compute the control sets for the associated control system. The probability distributions of the relative supports are also computed. The simulation results indicate how the disturbed biosystem tends toward stationary behavior globally.
NASA Astrophysics Data System (ADS)
Klos, Anna; Olivares, German; Teferle, Felix Norman; Bogusz, Janusz
2016-04-01
Station velocity uncertainties determined from a series of Global Navigation Satellite System (GNSS) position estimates depend on both the deterministic and stochastic models applied to the time series. While the deterministic model generally includes parameters for a linear and several periodic terms, the stochastic model is a representation of the noise character of the time series in the form of a power-law process. For both of these models the optimal model may vary from one time series to another, while the models also depend, to some degree, on each other. In the past, various power-law processes have been shown to fit the time series, and the sources of the apparent temporally-correlated noise were attributed to, for example, mismodelling of satellite orbits, antenna phase centre variations, troposphere, Earth Orientation Parameters, mass loading effects and monument instabilities. Blewitt and Lavallée (2002) demonstrated how improperly modelled seasonal signals affected the estimates of station velocity uncertainties. However, in their study they assumed that the time series followed a white noise process with no consideration of additional temporally-correlated noise. Bos et al. (2010) empirically showed for a small number of stations that the noise character was much more important for the reliable estimation of station velocity uncertainties than the seasonal signals. In this presentation we pick up from Blewitt and Lavallée (2002) and Bos et al. (2010), and have derived formulas for the computation of the General Dilution of Precision (GDP) in the presence of periodic signals and temporally-correlated noise in the time series. We show, based on simulated and real time series from globally distributed IGS (International GNSS Service) stations processed by the Jet Propulsion Laboratory (JPL), that periodic signals dominate the effect on the velocity uncertainties at short time scales, while for those beyond four years the type of noise becomes much more important. In other words, for time series long enough, the assumed periodic signals do not affect the velocity uncertainties as much as the assumed noise model. We calculated the GDP as the ratio between two velocity errors: without and with inclusion of seasonal terms with periods of one year and its overtones up to the 3rd. To all these cases power-law processes of white, flicker and random-walk noise were added separately. A few oscillations in GDP can be noticed at integer years, which arise from the added periodic terms. Their amplitudes in GDP increase along with the increasing spectral index. Strong peaks of oscillations in GDP are indicated for short time scales, especially for random-walk processes. This means that badly monumented stations are affected the most. Local minima and maxima in GDP are also enlarged as the noise approaches random walk. We noticed that the semi-annual signal increased the local GDP minimum for white noise. This suggests that adding power-law noise to a deterministic model with an annual term, or adding a semi-annual term to white noise, causes an increased velocity uncertainty even at the points where the determined velocity is not biased.
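A Monte-Carlo sketch of the dilution-of-precision idea, for white noise only: simulate daily positions with a trend plus an annual signal, estimate the velocity with and without seasonal terms in the design matrix, and form the ratio of the empirical velocity errors. For a series several years long the ratio comes out near 1, consistent with the statement above; flicker and random-walk noise are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(9)
t = np.arange(0, 6 * 365) / 365.25          # 6 years of daily epochs, in years
w = 2 * np.pi * t

def fit_velocity(y, seasonal):
    """Least-squares velocity with or without annual sin/cos regressors."""
    cols = [np.ones_like(t), t]
    if seasonal:
        cols += [np.sin(w), np.cos(w)]
    A = np.column_stack(cols)
    return np.linalg.lstsq(A, y, rcond=None)[0][1]

v_plain, v_seas = [], []
for _ in range(500):
    y = 2.0 * t + 3.0 * np.sin(w) + rng.standard_normal(t.size)  # mm, white noise
    v_plain.append(fit_velocity(y, seasonal=False))
    v_seas.append(fit_velocity(y, seasonal=True))

ratio = np.std(v_plain) / np.std(v_seas)
print(f"velocity error ratio (no-seasonal / seasonal): {ratio:.2f}")
```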
Boskova, Veronika; Bonhoeffer, Sebastian; Stadler, Tanja
2014-01-01
Quantifying epidemiological dynamics is crucial for understanding and forecasting the spread of an epidemic. The coalescent and the birth-death model are used interchangeably to infer epidemiological parameters from the genealogical relationships of the pathogen population under study, which in turn are inferred from the pathogen genetic sequencing data. To compare the performance of these widely applied models, we performed a simulation study. We simulated phylogenetic trees under the constant rate birth-death model and the coalescent model with a deterministic exponentially growing infected population. For each tree, we re-estimated the epidemiological parameters using both a birth-death and a coalescent based method, implemented as an MCMC procedure in BEAST v2.0. In our analyses that estimate the growth rate of an epidemic based on simulated birth-death trees, the point estimates such as the maximum a posteriori/maximum likelihood estimates are not very different. However, the estimates of uncertainty are very different. The birth-death model had a higher coverage than the coalescent model, i.e. contained the true value in the highest posterior density (HPD) interval more often (2–13% vs. 31–75% error). The coverage of the coalescent decreases with decreasing basic reproductive ratio and increasing sampling probability of infecteds. We hypothesize that the biases in the coalescent are due to the assumption of deterministic rather than stochastic population size changes. Both methods performed reasonably well when analyzing trees simulated under the coalescent. The methods can also identify other key epidemiological parameters as long as one of the parameters is fixed to its true value. In summary, when using genetic data to estimate epidemic dynamics, our results suggest that the birth-death method will be less sensitive to population fluctuations of early outbreaks than the coalescent method that assumes a deterministic exponentially growing infected population. PMID:25375100
Reach for the Stars: A Constellational Approach to Ethnographies of Elite Schools
ERIC Educational Resources Information Center
Prosser, Howard
2014-01-01
This paper offers a method for examining elite schools in a global setting by appropriating Theodor Adorno's constellational approach. I contend that arranging ideas and themes in a non-deterministic fashion can illuminate the social reality of elite schools. Drawing on my own fieldwork at an elite school in Argentina, I suggest that local and…
Louis R. Iverson; Anantha M. Prasad
2002-01-01
Global climate change could have profound effects on the Earth's biota, including large redistributions of tree species and forest types. We used DISTRIB, a deterministic regression tree analysis model, to examine environmental drivers related to current forest-species distributions and then model potential suitable habitat under five climate change scenarios...
Toward Development of a Stochastic Wake Model: Validation Using LES and Turbine Loads
Moon, Jae; Manuel, Lance; Churchfield, Matthew; ...
2017-12-28
Wind turbines within an array do not experience free-stream undisturbed flow fields. Rather, the flow fields on internal turbines are influenced by wakes generated by upwind units and exhibit different dynamic characteristics relative to the free stream. The International Electrotechnical Commission (IEC) standard 61400-1 for the design of wind turbines only considers a deterministic wake model for the design of a wind plant. This study is focused on the development of a stochastic model for waked wind fields. First, high-fidelity physics-based waked wind velocity fields are generated using Large-Eddy Simulation (LES). Stochastic characteristics of these LES waked wind velocity fields, including mean and turbulence components, are analyzed. Wake-related mean and turbulence field-related parameters are then estimated for use with a stochastic model, using Multivariate Multiple Linear Regression (MMLR) with the LES data. To validate the simulated wind fields based on the stochastic model, wind turbine tower and blade loads are generated using aeroelastic simulation for utility-scale wind turbine models and compared with those based directly on the LES inflow. The study's overall objective is to offer efficient and validated stochastic approaches that are computationally tractable for assessing the performance and loads of turbines operating in wakes.
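The MMLR step amounts to a single multi-output linear fit from inflow descriptors to wake-field parameters. Inputs and outputs below are synthetic placeholders for the LES-derived quantities named above.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(10)
n = 200
X = np.column_stack([
    rng.uniform(5, 12, n),       # hub-height mean wind speed [m/s] (assumed input)
    rng.uniform(0.05, 0.15, n),  # ambient turbulence intensity [-] (assumed input)
])

# Synthetic "LES-derived" wake parameters: velocity deficit and added TI
Y = np.column_stack([
    0.45 - 0.02 * X[:, 0] + 0.8 * X[:, 1],
    0.05 + 0.01 * X[:, 0] + 0.5 * X[:, 1],
]) + 0.01 * rng.standard_normal((n, 2))

mmlr = LinearRegression().fit(X, Y)          # one fit, multiple responses
print("coefficients:\n", mmlr.coef_)
print("prediction at 8 m/s, TI=0.10:", mmlr.predict([[8.0, 0.10]]))
```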
Gridded Calibration of Ensemble Wind Vector Forecasts Using Ensemble Model Output Statistics
NASA Astrophysics Data System (ADS)
Lazarus, S. M.; Holman, B. P.; Splitt, M. E.
2017-12-01
A computationally efficient method is developed that performs gridded post-processing of ensemble wind vector forecasts. An expansive set of idealized WRF model simulations is generated to provide physically consistent high-resolution winds over a coastal domain characterized by an intricate land/water mask. Ensemble model output statistics (EMOS) is used to calibrate the ensemble wind vector forecasts at observation locations. The local EMOS predictive parameters (mean and variance) are then spread throughout the grid utilizing flow-dependent statistical relationships extracted from the downscaled WRF winds. Using data withdrawal and 28 east central Florida stations, the method is applied to one year of 24 h wind forecasts from the Global Ensemble Forecast System (GEFS). Compared to the raw GEFS, the approach improves both the deterministic and probabilistic forecast skill. Analysis of multivariate rank histograms indicates the post-processed forecasts are calibrated. Two downscaling case studies are presented, a quiescent easterly flow event and a frontal passage. Strengths and weaknesses of the approach are presented and discussed.
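The EMOS step admits a short sketch. The snippet below fits a Gaussian predictive distribution whose mean and variance are affine in the ensemble mean and variance; maximum likelihood stands in for the CRPS minimization often used in EMOS, and all data are synthetic.

```python
# Minimal Gaussian EMOS sketch: mu = a + b*ens_mean, sigma^2 = c + d*ens_var.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
ens = rng.normal(10, 2, size=(500, 20))            # synthetic ensemble
obs = ens.mean(1) + rng.normal(0, 1.5, size=500)   # synthetic verifying obs

m, v = ens.mean(1), ens.var(1)

def nll(theta):
    a, b, c, d = theta
    mu = a + b * m
    sig2 = np.maximum(c + d * v, 1e-6)  # keep the variance positive
    return 0.5 * np.sum(np.log(2 * np.pi * sig2) + (obs - mu) ** 2 / sig2)

res = minimize(nll, x0=[0.0, 1.0, 1.0, 0.1], method="Nelder-Mead")
print("calibrated coefficients a, b, c, d:", np.round(res.x, 3))
```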
Imitative and best response behaviors in a nonlinear Cournotian setting
NASA Astrophysics Data System (ADS)
Cerboni Baiardi, Lorenzo; Naimzada, Ahmad K.
2018-05-01
We consider competition among quantity-setting players in a deterministic nonlinear oligopoly framework characterized by an isoelastic demand curve. Players are characterized by heterogeneous decisional mechanisms for setting their outputs: some players are imitators, while the remaining ones adopt a rational-like rule according to which their past decisions are adjusted towards their static-expectation best response. The Cournot-Nash production level is a stationary state of our model, together with a further production level that can be interpreted as the competitive outcome in case only imitators are present. We find that both the number of players and the relative fraction of imitators influence the stability of the Cournot-Nash equilibrium in an ambiguous way, and double instability thresholds may be observed. Global analysis shows that a wide variety of complex dynamic scenarios emerge. Chaotic trajectories as well as multi-stabilities, where different attractors coexist, are robust phenomena that can be observed for a wide spectrum of parameter sets.
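A minimal numerical sketch of such a mixed-mechanism Cournot map is given below, assuming isoelastic demand p = 1/Q, constant marginal cost, imitators who copy the most profitable player, and adjusters who move partway toward their static-expectation best response; these updating rules are plausible stand-ins, not the paper's exact specification.

```python
# Cournot dynamics with imitators and best-response adjusters, p = 1/Q.
import numpy as np

n, n_imit, c, lam, T = 6, 3, 0.5, 0.4, 100   # players, imitators, cost, speed
rng = np.random.default_rng(2)
q = rng.uniform(0.1, 0.5, n)

for _ in range(T):
    Q = q.sum()
    profit = q / Q - c * q
    q_new = q.copy()
    # Imitators copy the output of the most profitable player.
    q_new[:n_imit] = q[np.argmax(profit)]
    # Adjusters move toward the best response to the residual supply Q_rest:
    # maximizing q/(q + Q_rest) - c*q gives q* = sqrt(Q_rest/c) - Q_rest.
    for i in range(n_imit, n):
        Q_rest = Q - q[i]
        br = max(np.sqrt(Q_rest / c) - Q_rest, 0.0)
        q_new[i] = q[i] + lam * (br - q[i])
    q = q_new

print("long-run outputs:", np.round(q, 4))
print("symmetric Cournot-Nash benchmark:", (n - 1) / (n * n * c))
```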
Azunre, P.
2016-09-21
In this paper, two novel techniques for bounding the solutions of parametric weakly coupled second-order semilinear parabolic partial differential equations are developed. The first provides a theorem to construct interval bounds, while the second provides a theorem to construct bounds that are convex (lower) and concave (upper) in the parameter. The convex/concave bounds can be significantly tighter than the interval bounds because of the wrapping effect suffered by interval analysis in dynamical systems. Both types of bounds are computationally cheap to construct, requiring the solution of auxiliary systems two and four times larger than the original system, respectively. An illustrative numerical example of bound construction and use for deterministic global optimization within a simple serial branch-and-bound algorithm, implemented numerically using interval arithmetic and a generalization of McCormick's relaxation technique, is presented. Finally, problems within the important class of reaction-diffusion systems may be optimized with these tools.
A stochastic chemostat model with an inhibitor and noise independent of population sizes
NASA Astrophysics Data System (ADS)
Sun, Shulin; Zhang, Xiaolu
2018-02-01
In this paper, a stochastic chemostat model with an inhibitor is considered, in which the inhibitor is input from an external source and two organisms in the chemostat compete for a nutrient. Firstly, we show that the system has a unique global positive solution. Secondly, by constructing suitable Lyapunov functions, we show that the time average of the second moment of the solutions of the stochastic model is bounded for relatively small noise. That is, the asymptotic behaviors of the stochastic system around the equilibrium points of the deterministic system are studied. However, sufficiently large noise can drive the microorganisms extinct with probability one, even though the solutions of the original deterministic model may be persistent. Finally, the obtained analytical results are illustrated by computer simulations.
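For readers who want to experiment, here is an Euler-Maruyama sketch of a chemostat with two competitors and additive noise that is independent of population sizes, as in the model class above; Monod kinetics and all parameter values are illustrative, and the inhibitor dynamics are omitted for brevity.

```python
# Euler-Maruyama integration of a two-competitor chemostat with additive noise.
import numpy as np

rng = np.random.default_rng(3)
D, S0 = 0.3, 2.0                                    # dilution rate, input nutrient
m = np.array([1.0, 0.8]); a = np.array([0.5, 0.3])  # Monod parameters
sigma = 0.02                                        # additive noise intensity
dt, T = 0.01, 200.0

S, x = 1.0, np.array([0.3, 0.3])
for _ in range(int(T / dt)):
    f = m * S / (a + S)                  # per-capita uptake rates
    dW = rng.normal(0, np.sqrt(dt), 3)   # independent Brownian increments
    S += (D * (S0 - S) - f @ x) * dt + sigma * dW[0]
    x += x * (f - D) * dt + sigma * dW[1:]
    S, x = max(S, 0.0), np.maximum(x, 0.0)  # keep the state nonnegative

print("final nutrient and populations:", round(S, 3), np.round(x, 3))
```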
NASA Astrophysics Data System (ADS)
Arikan, Feza; Gulyaeva, Tamara; Sezen, Umut; Arikan, Orhan; Toker, Cenk; Tuna, Hakan; Erdem, Esra
2016-07-01
International Reference Ionosphere (IRI) is the most widely acknowledged climatic model of the ionosphere; it provides the electron density profile and hourly, monthly median values of the critical layer parameters of the ionosphere for a desired location, date and time between 60 and 2,000 km altitude. IRI is also accepted as the International Standard Ionosphere model. Recently, the IRI model was extended to the Global Positioning System (GPS) satellite orbital range of 20,000 km. The new version is called IRI-Plas and it can be obtained from http://ftp.izmiran.ru/pub/izmiran/SPIM/. A user-friendly online version is also provided at www.ionolab.org as a space weather service. Total Electron Content (TEC), defined as the line integral of electron density along a given ray path, is an observable parameter that can be estimated from earth-based GPS receivers in a cost-effective manner as GPS-TEC. One of the most important advantages of IRI-Plas is the possible input of GPS-TEC to update the background deterministic ionospheric model to the current ionospheric state. This option is highly useful in regional and global tomography studies and HF link assessments. The IONOLAB group currently implements IRI-Plas as a background model and updates the ionospheric state using GPS-TEC in the IONOLAB-CIT and IONOLAB-RAY algorithms. The improved state of the ionosphere allows the most reliable 4-D imaging of electron density profiles and HF and satellite communication link simulations. This study is supported by TUBITAK 115E915 and joint TUBITAK 114E092 and AS CR 14/001.
NASA Astrophysics Data System (ADS)
Qi, Wei; Liu, Junguo; Yang, Hong; Sweetapple, Chris
2018-03-01
Global precipitation products are very important datasets in flow simulations, especially in poorly gauged regions. Uncertainties resulting from precipitation products, hydrological models and their combinations vary with time and data magnitude, and undermine their application to flow simulations. However, previous studies have not quantified these uncertainties individually and explicitly. This study developed an ensemble-based dynamic Bayesian averaging approach (e-Bay) for deterministic discharge simulations using multiple global precipitation products and hydrological models. In this approach, the joint probability of precipitation products and hydrological models being correct is quantified based on uncertainties in maximum and mean estimation, posterior probability is quantified as functions of the magnitude and timing of discharges, and the law of total probability is implemented to calculate expected discharges. Six global fine-resolution precipitation products and two hydrological models of different complexities are included in an illustrative application. e-Bay can effectively quantify uncertainties and therefore generate better deterministic discharges than traditional approaches (weighted average methods with equal and varying weights and maximum likelihood approach). The mean Nash-Sutcliffe Efficiency values of e-Bay are up to 0.97 and 0.85 in training and validation periods respectively, which are at least 0.06 and 0.13 higher than traditional approaches. In addition, with increased training data, assessment criteria values of e-Bay show smaller fluctuations than traditional approaches and its performance becomes outstanding. The proposed e-Bay approach bridges the gap between global precipitation products and their pragmatic applications to discharge simulations, and is beneficial to water resources management in ungauged or poorly gauged regions across the world.
Reliability Analysis of Retaining Walls Subjected to Blast Loading by Finite Element Approach
NASA Astrophysics Data System (ADS)
GuhaRay, Anasua; Mondal, Stuti; Mohiuddin, Hisham Hasan
2018-02-01
Conventional design methods adopt factors of safety based on practice and experience, and are deterministic in nature. The limit state method, though not completely deterministic, does not take into account the variability inherent in soil design parameters such as cohesion and angle of internal friction. Reliability analysis provides a means to bring these variations into the analysis and hence results in a more realistic design. Several studies have been carried out on the reliability of reinforced concrete walls and masonry walls under explosions. Also, reliability analysis of retaining structures against various kinds of failure has been done. However, very few research works are available on the reliability analysis of retaining walls subjected to blast loading. Thus, the present paper considers the effect of variation of geotechnical parameters when a retaining wall is subjected to blast loading. It is found that the variation of geotechnical random variables does not have a significant effect on the stability of retaining walls subjected to blast loading.
NASA Astrophysics Data System (ADS)
Králik, Juraj; Králik, Juraj
2017-07-01
The paper presents the results from the deterministic and probabilistic analysis of the accidental torsional effect of reinforced concrete tall buildings due to earthquake events. The core-column structural system was considered with various configurations in plan. The methodology of the seismic analysis of building structures in Eurocode 8 and JCSS 2000 is discussed. The possibility of utilizing the LHS method to analyze extensive and robust tasks in FEM is presented. The influence of various input parameters (material, geometry, soil, masses and others) is considered. The deterministic and probabilistic analysis of the seismic resistance of the structure was calculated in the ANSYS program.
Modelling and Optimal Control of Typhoid Fever Disease with Cost-Effective Strategies.
Tilahun, Getachew Teshome; Makinde, Oluwole Daniel; Malonza, David
2017-01-01
We propose and analyze a compartmental nonlinear deterministic mathematical model for the typhoid fever outbreak and optimal control strategies in a community with varying population. The model is studied qualitatively using the stability theory of differential equations, and the basic reproduction number that represents the epidemic indicator is obtained from the largest eigenvalue of the next-generation matrix. Both local and global asymptotic stability conditions for the disease-free and endemic equilibria are determined. The model exhibits a forward transcritical bifurcation, and a sensitivity analysis is performed. The optimal control problem is designed by applying the Pontryagin maximum principle with three control strategies, namely, the prevention strategy through sanitation, proper hygiene, and vaccination; the treatment strategy through application of appropriate medicine; and the screening of the carriers. The cost functional accounts for the cost involved in prevention, screening, and treatment together with the total number of the infected persons averted. Numerical results for the typhoid outbreak dynamics and its optimal control revealed that a combination of prevention and treatment is the best cost-effective strategy to eradicate the disease.
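The next-generation-matrix recipe mentioned above is easy to illustrate. The sketch below computes the reproduction number as the spectral radius of F V⁻¹ for a hypothetical three-compartment structure; the compartments and rates are invented for illustration and are not the paper's typhoid model.

```python
# Next-generation matrix: R0 = spectral radius of F @ inv(V).
import numpy as np

beta_I, beta_B = 0.3, 0.1   # transmission from infectious humans / environment
gamma, delta = 0.2, 0.05    # recovery, carrier progression
xi, mu_B = 0.4, 0.3         # shedding into environment, bacterial decay

# F collects new infections; V collects transitions among the infected
# compartments (infectious I, carrier C, environmental bacteria B).
F = np.array([[beta_I, 0.0, beta_B],
              [0.0,    0.0, 0.0   ],
              [0.0,    0.0, 0.0   ]])
V = np.array([[gamma + delta, 0.0, 0.0 ],
              [-delta,        0.1, 0.0 ],
              [-xi,           0.0, mu_B]])

R0 = max(abs(np.linalg.eigvals(F @ np.linalg.inv(V))))
print("reproduction number R0 =", round(float(R0), 3))
```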
Soft Polydimethylsiloxane Elastomers from Architecture-driven Entanglement Free Design
Cai, Li-Heng; Kodger, Thomas E.; Guerra, Rodrigo E.; Pegoraro, Adrian F.; Rubinstein, Michael; Weitz, David A.
2015-01-01
We fabricate soft, solvent-free polydimethylsiloxane (PDMS) elastomers by crosslinking bottlebrush polymers rather than linear polymers. We design the chemistry to allow commercially available linear PDMS precursors to deterministically form bottlebrush polymers, which are simultaneously crosslinked, enabling a one-step synthesis. The bottlebrush architecture prevents the formation of entanglements, resulting in elastomers with precisely controllable elastic moduli from ~1 to ~100 kPa, below the intrinsic lower limit of traditional elastomers. Moreover, the solvent-free nature of the soft PDMS elastomers enables a negligible contact adhesion compared to commercially available silicone products of similar stiffness. The exceptional combination of softness and negligible adhesiveness may greatly broaden the applications of PDMS elastomers in both industry and research. PMID:26259975
Optimal Alignment of Structures for Finite and Periodic Systems.
Griffiths, Matthew; Niblett, Samuel P; Wales, David J
2017-10-10
Finding the optimal alignment between two structures is important for identifying the minimum root-mean-square distance (RMSD) between them and as a starting point for calculating pathways. Most current algorithms for aligning structures are stochastic, scale exponentially with the size of the structure, and can perform unreliably. We present two complementary methods for aligning structures corresponding to isolated clusters of atoms and to condensed matter described by a periodic cubic supercell. The first method (Go-PERMDIST), a branch-and-bound algorithm, locates the global minimum RMSD deterministically in polynomial time; the run time increases for larger RMSDs. The second method (FASTOVERLAP) is a heuristic algorithm that aligns structures by finding the global maximum kernel correlation between them using fast Fourier transforms (FFTs) and fast SO(3) transforms (SOFTs). For periodic systems, FASTOVERLAP scales with the square of the number of identical atoms in the system, reliably finds the best alignment between structures that are not too distant, and shows significantly better performance than existing algorithms; the expected run time for Go-PERMDIST is longer in this setting. For finite clusters, the FASTOVERLAP algorithm is competitive with existing algorithms. The expected run time for Go-PERMDIST to find the global RMSD between two structures deterministically is generally longer than for existing stochastic algorithms; however, with an earlier exit condition, Go-PERMDIST exhibits similar or better performance.
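As context for the alignment problem, the snippet below implements the classical Kabsch rotation step that such methods build on: it finds the proper rotation minimizing RMSD between two centered point sets. It does not reproduce the permutational search of Go-PERMDIST or the FFT machinery of FASTOVERLAP.

```python
# Kabsch algorithm: optimal proper rotation for minimum RMSD.
import numpy as np

def kabsch_rmsd(P, Q):
    """RMSD of P onto Q after optimal translation and proper rotation."""
    P = P - P.mean(0)
    Q = Q - Q.mean(0)
    U, _, Vt = np.linalg.svd(P.T @ Q)
    d = np.sign(np.linalg.det(U @ Vt))        # avoid improper rotations
    R = U @ np.diag([1.0, 1.0, d]) @ Vt
    return np.sqrt(np.mean(np.sum((P @ R - Q) ** 2, axis=1)))

rng = np.random.default_rng(4)
P = rng.normal(size=(20, 3))
theta = 0.7
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
# Rotated and translated copy aligns back to RMSD ~ 0.
print("RMSD after alignment:", kabsch_rmsd(P, P @ Rz.T + 5.0))
```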
From statistical proofs of the Kochen-Specker theorem to noise-robust noncontextuality inequalities
NASA Astrophysics Data System (ADS)
Kunjwal, Ravi; Spekkens, Robert W.
2018-05-01
The Kochen-Specker theorem rules out models of quantum theory wherein projective measurements are assigned outcomes deterministically and independently of context. This notion of noncontextuality is not applicable to experimental measurements because these are never free of noise and thus never truly projective. For nonprojective measurements, therefore, one must drop the requirement that an outcome be assigned deterministically in the model and merely require that it be assigned a distribution over outcomes in a manner that is context-independent. By demanding context independence in the representation of preparations as well, one obtains a generalized principle of noncontextuality that also supports a quantum no-go theorem. Several recent works have shown how to derive inequalities on experimental data which, if violated, demonstrate the impossibility of finding a generalized-noncontextual model of this data. That is, these inequalities do not presume quantum theory and, in particular, they make sense without requiring an operational analog of the quantum notion of projectiveness. We here describe a technique for deriving such inequalities starting from arbitrary proofs of the Kochen-Specker theorem. It significantly extends previous techniques, which worked only for logical proofs (based on sets of projective measurements that fail to admit of any deterministic noncontextual assignment), to the case of statistical proofs (based on sets of projective measurements that do admit of some deterministic noncontextual assignments, but not enough to explain the quantum statistics).
Stochastic mixed-mode oscillations in a three-species predator-prey model
NASA Astrophysics Data System (ADS)
Sadhu, Susmita; Kuehn, Christian
2018-03-01
The effect of demographic stochasticity, in the form of Gaussian white noise, in a predator-prey model with one fast and two slow variables is studied. We derive the stochastic differential equations (SDEs) from a discrete model. For suitable parameter values, the deterministic drift part of the model admits a folded node singularity and exhibits a singular Hopf bifurcation. We focus on the parameter regime near the Hopf bifurcation, where small amplitude oscillations exist as stable dynamics in the absence of noise. In this regime, the stochastic model admits noise-driven mixed-mode oscillations (MMOs), which capture the intermediate dynamics between two cycles of population outbreaks. We perform numerical simulations to calculate the distribution of the random number of small oscillations between successive spikes for varying noise intensities and distance to the Hopf bifurcation. We also study the effect of noise on a suitable Poincaré map. Finally, we prove that the stochastic model can be transformed into a normal form near the folded node, which can be linked to recent results on the interplay between deterministic and stochastic small amplitude oscillations. The normal form can also be used to study the parameter influence on the noise level near folded singularities.
NASA Astrophysics Data System (ADS)
Korelin, Ivan A.; Porshnev, Sergey V.
2018-05-01
A model of a non-stationary queuing system (NQS) is described. The input of this model receives a flow of requests with input rate λ = λdet(t) + λrnd(t), where λdet(t) is a deterministic function of time and λrnd(t) is a random function. The parameters of λdet(t) and λrnd(t) were identified from statistical information on visitor flows collected at various Russian football stadiums. Statistical modeling of the NQS is carried out and average dependences are obtained for the length of the queue of requests waiting for service, the average waiting time for service, and the number of visitors admitted to the stadium over time. It is shown that these dependences can be characterized by the following parameters: the number of visitors who have entered by the start of the match; the time required to serve all incoming visitors; the maximum value; and the argument at which the studied dependence reaches its maximum. The dependences of these parameters on the energy ratio of the deterministic and random components of the input rate are investigated.
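The arrival process λ(t) = λdet(t) + λrnd(t) can be simulated by Lewis thinning, as sketched below with a simple multi-gate queue on top; the rate profile and service parameters are illustrative, not the fitted stadium values.

```python
# Nonhomogeneous Poisson arrivals via thinning, feeding a k-gate queue.
import numpy as np

rng = np.random.default_rng(5)
T = 120.0                                        # minutes before kickoff

def lam(t):
    det = 40.0 * np.exp(-((t - 80.0) / 25.0) ** 2)   # deterministic surge
    rnd = 5.0 * np.abs(np.sin(0.3 * t))              # crude random-like term
    return det + rnd

lam_max = 50.0                                   # upper bound for thinning
t, arrivals = 0.0, []
while True:
    t += rng.exponential(1.0 / lam_max)          # candidate event
    if t > T:
        break
    if rng.uniform() < lam(t) / lam_max:         # accept with prob lam(t)/lam_max
        arrivals.append(t)

k, service = 10, 0.25                            # gates, minutes per visitor
free_at = np.zeros(k)
waits = []
for a in arrivals:
    g = np.argmin(free_at)                       # earliest-free gate
    start = max(a, free_at[g])
    waits.append(start - a)
    free_at[g] = start + service
print(f"{len(arrivals)} visitors, mean wait {np.mean(waits):.2f} min")
```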
The direct effect of aerosols on solar radiation over the broader Mediterranean basin
NASA Astrophysics Data System (ADS)
Papadimas, C. D.; Hatzianastassiou, N.; Matsoukas, C.; Kanakidou, M.; Mihalopoulos, N.; Vardavas, I.
2012-08-01
For the first time, the direct radiative effect (DRE) of aerosols on solar radiation is computed over the entire Mediterranean basin, one of the most climatically sensitive world regions, using a deterministic spectral radiation transfer model (RTM). The DRE effects on the outgoing shortwave radiation at the top of atmosphere (TOA), DRETOA, on the absorption of solar radiation in the atmospheric column, DREatm, and on the downward and absorbed surface solar radiation (SSR), DREsurf and DREnetsurf, respectively, are computed separately. The model uses input data for the period 2000-2007 for various surface and atmospheric parameters, taken from satellite (International Satellite Cloud Climatology Project, ISCCP-D2), Global Reanalysis projects (National Centers for Environmental Prediction - National Center for Atmospheric Research, NCEP/NCAR), and other global databases. The spectral aerosol optical properties (aerosol optical depth, AOD; asymmetry parameter, gaer; and single scattering albedo, ωaer) are taken from the MODerate resolution Imaging Spectroradiometer (MODIS) of NASA (National Aeronautics and Space Administration) and are supplemented by the Global Aerosol Data Set (GADS). The model SSR fluxes have been successfully validated against measurements from 80 surface stations of the Global Energy Balance Archive (GEBA) covering the period 2000-2007. A planetary cooling is found above the Mediterranean on an annual basis (regional mean DRETOA = -2.4 W m-2). Although a planetary cooling is found over most of the region, of up to -7 W m-2, large positive DRETOA values (up to +25 W m-2) are found over North Africa, indicating a strong planetary warming, and a weaker warming over the Alps (+0.5 W m-2). Aerosols are found to increase the absorption of solar radiation in the atmospheric column over the region (DREatm = +11.1 W m-2) and to decrease SSR (DREsurf = -16.5 W m-2 and DREnetsurf = -13.5 W m-2), thus inducing significant atmospheric warming and surface radiative cooling. The calculated seasonal and monthly DREs are even larger, reaching -25.4 W m-2 (for DREsurf). Within the range of observed natural or anthropogenic variability of aerosol optical properties, AOD appears to be the parameter mainly responsible for modifications of regional aerosol radiative effects, which are found to be quasi-linearly dependent on AOD, ωaer and gaer.
Deterministic modelling and stochastic simulation of biochemical pathways using MATLAB.
Ullah, M; Schmidt, H; Cho, K H; Wolkenhauer, O
2006-03-01
The analysis of complex biochemical networks is conducted in two popular conceptual frameworks for modelling. The deterministic approach requires the solution of ordinary differential equations (ODEs, reaction rate equations) with concentrations as continuous state variables. The stochastic approach involves the simulation of differential-difference equations (chemical master equations, CMEs) with probabilities as variables, generating counts of molecules for chemical species as realisations of random variables drawn from the probability distribution described by the CMEs. Although there are numerous tools available, many of them free, the modelling and simulation environment MATLAB is widely used in the physical and engineering sciences. We describe a collection of MATLAB functions to construct and solve ODEs for deterministic simulation and to implement realisations of CMEs for stochastic simulation using advanced MATLAB coding (Release 14). The program was successfully applied to pathway models from the literature for both cases. The results were compared to implementations using alternative tools for dynamic modelling and simulation of biochemical networks. The aim is to provide a concise set of MATLAB functions that encourage experimentation with systems biology models. All the script files are available from www.sbi.uni-rostock.de/publications_matlab-paper.html.
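A Python analogue of the two frameworks (the paper's functions are MATLAB) is sketched below for a single birth-death species: the deterministic reaction-rate ODE on one hand, a Gillespie realization of the CME on the other. Rates are illustrative.

```python
# Deterministic ODE vs. Gillespie SSA for a birth-death process.
import numpy as np
from scipy.integrate import solve_ivp

k1, k2, T = 20.0, 0.5, 20.0   # production rate, decay rate, horizon

# Deterministic reaction-rate equation: dx/dt = k1 - k2 * x.
sol = solve_ivp(lambda t, x: k1 - k2 * x, (0, T), [0.0], dense_output=True)

# Stochastic simulation algorithm (Gillespie) for the same two reactions.
rng = np.random.default_rng(6)
t, n, ts, ns = 0.0, 0, [0.0], [0]
while t < T:
    rates = np.array([k1, k2 * n])          # propensities: birth, death
    total = rates.sum()
    t += rng.exponential(1.0 / total)       # time to next reaction
    n += 1 if rng.uniform() < rates[0] / total else -1
    ts.append(t); ns.append(n)

print("ODE steady state:", k1 / k2, "| final SSA count:", ns[-1])
```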
Global Sensitivity Analysis for Process Identification under Model Uncertainty
NASA Astrophysics Data System (ADS)
Ye, M.; Dai, H.; Walker, A. P.; Shi, L.; Yang, J.
2015-12-01
The environmental system consists of various physical, chemical, and biological processes, and environmental models are built to simulate these processes and their interactions. For model building, improvement, and validation, it is necessary to identify important processes so that limited resources can be used to better characterize them. While global sensitivity analysis has been widely used to identify important processes, process identification has typically been based on deterministic process conceptualization that uses a single model to represent a process. However, environmental systems are complex, and it often happens that a single process may be simulated by multiple alternative models. Ignoring model uncertainty in process identification may lead to bias, in that processes identified as important may not be so in the real world. This study addresses this problem by developing a new method of global sensitivity analysis for process identification. The new method is based on the concept of Sobol sensitivity analysis and model averaging. Similar to Sobol sensitivity analysis for identifying important parameters, our new method evaluates the variance change when a process is fixed at its different conceptualizations. The variance accounts for both parametric and model uncertainty using the method of model averaging. The method is demonstrated using a synthetic groundwater modeling study that considers a recharge process and a parameterization process, each with two alternative models. Important processes of groundwater flow and transport are evaluated using our new method. The method is mathematically general and can be applied to a wide range of environmental problems.
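The core idea, treating the choice of process conceptualization as a discrete input and asking how much output variance it explains, can be sketched in a few lines. The two toy "recharge" models below are hypothetical, and the first-order index estimator is deliberately crude.

```python
# Variance-based sensitivity of a discrete "model choice" input.
import numpy as np

rng = np.random.default_rng(7)
N = 20000

theta = rng.uniform(0.5, 1.5, N)          # shared process parameter
model = rng.integers(0, 2, N)             # process conceptualization (A or B)
rain = rng.gamma(2.0, 5.0, N)             # forcing

recharge = np.where(model == 0,
                    0.2 * theta * rain,            # model A: linear fraction
                    0.8 * theta * np.sqrt(rain))   # model B: concave response

# First-order index of the model choice: Var(E[Y | choice]) / Var(Y),
# estimated crudely by grouping (choices are equally likely here).
cond_means = [recharge[model == m].mean() for m in (0, 1)]
S_choice = np.var(cond_means) / np.var(recharge)
print("first-order sensitivity of process choice:", round(float(S_choice), 3))
```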
Milly, P.C.D.; Shmakin, A.B.
2002-01-01
Land water and energy balances vary around the globe because of variations in amount and temporal distribution of water and energy supplies and because of variations in land characteristics. The former control (water and energy supplies) explains much more variance in water and energy balances than the latter (land characteristics). A largely untested hypothesis underlying most global models of land water and energy balance is the assumption that parameter values based on estimated geographic distributions of soil and vegetation characteristics improve the performance of the models relative to the use of globally constant land parameters. This hypothesis is tested here through an evaluation of the improvement in performance of one land model associated with the introduction of geographic information on land characteristics. The capability of the model to reproduce annual runoff ratios of large river basins, with and without information on the global distribution of albedo, rooting depth, and stomatal resistance, is assessed. To allow a fair comparison, the model is calibrated in both cases by adjusting globally constant scale factors for snow-free albedo, non-water-stressed bulk stomatal resistance, and critical root density (which is used to determine effective root-zone depth). The test is made in stand-alone mode, that is, using prescribed radiative and atmospheric forcing. Model performance is evaluated by comparing modeled runoff ratios with observed runoff ratios for a set of basins where precipitation biases have been shown to be minimal. The withholding of information on global variations in these parameters leads to a significant degradation of the capability of the model to simulate the annual runoff ratio. An additional set of optimization experiments, in which the parameters are examined individually, reveals that the stomatal resistance is, by far, the parameter among these three whose spatial variations add the most predictive power to the model in stand-alone mode. Further single-parameter experiments with surface roughness length, available water capacity, thermal conductivity, and thermal diffusivity show very little sensitivity to estimated global variations in these parameters. Finally, it is found that even the constant-parameter model performance exceeds that of the Budyko and generalized Turc-Pike water-balance equations, suggesting that the model benefits also from information on the geographic variability of the temporal structure of forcing.
SIMULATED CLIMATE CHANGE EFFECTS ON DISSOLVED OXYGEN CHARACTERISTICS IN ICE-COVERED LAKES. (R824801)
A deterministic, one-dimensional model is presented which simulates daily dissolved oxygen (DO) profiles and associated water temperatures, ice covers and snow covers for dimictic and polymictic lakes of the temperate zone. The lake parameters required as model input are surface ...
Introduction to “Global tsunami science: Past and future, Volume I”
Geist, Eric L.; Fritz, Hermann; Rabinovich, Alexander B.; Tanioka, Yuichiro
2016-01-01
Twenty-five papers on the study of tsunamis are included in Volume I of the PAGEOPH topical issue “Global Tsunami Science: Past and Future”. Six papers examine various aspects of tsunami probability and uncertainty analysis related to hazard assessment. Three papers relate to deterministic hazard and risk assessment. Five more papers present new methods for tsunami warning and detection. Six papers describe new methods for modeling tsunami hydrodynamics. Two papers investigate tsunamis generated by non-seismic sources: landslides and meteorological disturbances. The final three papers describe important case studies of recent and historical events. Collectively, this volume highlights contemporary trends in global tsunami research, both fundamental and applied toward hazard assessment and mitigation.
Introduction to "Global Tsunami Science: Past and Future, Volume I"
NASA Astrophysics Data System (ADS)
Geist, Eric L.; Fritz, Hermann M.; Rabinovich, Alexander B.; Tanioka, Yuichiro
2016-12-01
Twenty-five papers on the study of tsunamis are included in Volume I of the PAGEOPH topical issue "Global Tsunami Science: Past and Future". Six papers examine various aspects of tsunami probability and uncertainty analysis related to hazard assessment. Three papers relate to deterministic hazard and risk assessment. Five more papers present new methods for tsunami warning and detection. Six papers describe new methods for modeling tsunami hydrodynamics. Two papers investigate tsunamis generated by non-seismic sources: landslides and meteorological disturbances. The final three papers describe important case studies of recent and historical events. Collectively, this volume highlights contemporary trends in global tsunami research, both fundamental and applied toward hazard assessment and mitigation.
Stable Satellite Orbits for Global Coverage of the Moon
NASA Technical Reports Server (NTRS)
Ely, Todd; Lieb, Erica
2006-01-01
A document proposes a constellation of spacecraft to be placed in orbit around the Moon to provide navigation and communication services with global coverage required for exploration of the Moon. There would be six spacecraft in inclined elliptical orbits: three in each of two orthogonal orbital planes, suggestive of a linked-chain configuration. The orbits have been chosen to (1) provide 99.999-percent global coverage for ten years and (2) to be stable under perturbation by Earth gravitation and solar-radiation pressure, so that no deterministic firing of thrusters would be needed to maintain the orbits. However, a minor amount of orbit control might be needed to correct for such unmodeled effects as outgassing of the spacecraft.
Consistent Adjoint Driven Importance Sampling using Space, Energy and Angle
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peplow, Douglas E.; Mosher, Scott W; Evans, Thomas M
2012-08-01
For challenging radiation transport problems, hybrid methods combine the accuracy of Monte Carlo methods with the global information present in deterministic methods. One of the most successful hybrid methods is CADIS (Consistent Adjoint Driven Importance Sampling). This method uses a deterministic adjoint solution to construct a biased source distribution and consistent weight windows to optimize a specific tally in a Monte Carlo calculation. The method has been implemented into transport codes using just the spatial and energy information from the deterministic adjoint and has been used in many applications to compute tallies with much higher figures-of-merit than analog calculations. CADIS also outperforms user-supplied importance values, which usually take long periods of user time to develop. This work extends CADIS to develop weight windows that are a function of the position, energy, and direction of the Monte Carlo particle. Two types of consistent source biasing are presented: one method that biases the source in space and energy while preserving the original directional distribution, and one method that biases the source in space, energy, and direction. Seven simple example problems are presented which compare the use of the standard space/energy CADIS with the new space/energy/angle treatments.
Spatio-Temporal Data Analysis at Scale Using Models Based on Gaussian Processes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stein, Michael
Gaussian processes are the most commonly used statistical model for spatial and spatio-temporal processes that vary continuously. They are broadly applicable in the physical sciences and engineering and are also frequently used to approximate the output of complex computer models, deterministic or stochastic. We undertook research related to theory, computation, and applications of Gaussian processes, as well as some work on estimating extremes of distributions for which a Gaussian process assumption might be inappropriate. Our theoretical contributions include the development of new classes of spatial-temporal covariance functions with desirable properties and new results showing that certain covariance models lead to predictions with undesirable properties. To understand how Gaussian process models behave when applied to deterministic computer models, we derived what we believe to be the first significant results on the large sample properties of estimators of parameters of Gaussian processes when the actual process is a simple deterministic function. Finally, we investigated some theoretical issues related to maxima of observations with varying upper bounds and found that, depending on the circumstances, standard large sample results for maxima may or may not hold. Our computational innovations include methods for analyzing large spatial datasets when observations fall on a partially observed grid and methods for estimating parameters of a Gaussian process model from observations taken by a polar-orbiting satellite. In our application of Gaussian process models to deterministic computer experiments, we carried out some matrix computations that would have been infeasible using even extended precision arithmetic by focusing on special cases in which all elements of the matrices under study are rational and using exact arithmetic. The applications we studied include total column ozone as measured from a polar-orbiting satellite, sea surface temperatures over the Pacific Ocean, and annual temperature extremes at a site in New York City. In each of these applications, our theoretical and computational innovations were directly motivated by the challenges posed by analyzing these and similar types of data.
D. Lee Taylor; Teresa N. Hollingsworth; Jack W. McFarland; Niall J. Lennon; Chad Nusbaum; Roger W. Ruess
2014-01-01
Fungi play key roles in ecosystems as mutualists, pathogens, and decomposers. Current estimates of global species richness are highly uncertain, and the importance of stochastic vs. deterministic forces in the assembly of fungal communities is unknown. Molecular studies have so far failed to reach saturated, comprehensive estimates of fungal diversity. To obtain a more...
Chaos in plasma simulation and experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Watts, C.; Newman, D.E.; Sprott, J.C.
1993-09-01
We investigate the possibility that chaos and simple determinism govern the dynamics of reversed field pinch (RFP) plasmas, using data from both numerical simulations and experiment. A large repertoire of nonlinear analysis techniques is used to identify low-dimensional chaos. These tools include phase portraits and Poincaré sections, correlation dimension, the spectrum of Lyapunov exponents and short-term predictability. In addition, nonlinear noise reduction techniques are applied to the experimental data in an attempt to extract any underlying deterministic dynamics. Two model systems are used to simulate the plasma dynamics. These are the DEBS code, which models global RFP dynamics, and the dissipative trapped electron mode (DTEM) model, which models drift wave turbulence. Data from both simulations show strong indications of low-dimensional chaos and simple determinism. Experimental data were obtained from the Madison Symmetric Torus RFP and consist of a wide array of both global and local diagnostic signals. None of the signals shows any indication of low-dimensional chaos or other simple determinism. Moreover, most of the analysis tools indicate the experimental system is very high dimensional with properties similar to noise. Nonlinear noise reduction is unsuccessful at extracting an underlying deterministic system.
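One of the diagnostics listed above, the correlation dimension, is easy to demonstrate on a system where low-dimensional chaos is known to exist. The sketch below applies a Grassberger-Procaccia estimate to a delay-embedded logistic-map series; embedding and radius choices are illustrative and no Theiler window is applied, so the estimate is crude.

```python
# Grassberger-Procaccia correlation dimension of a logistic-map series.
import numpy as np
from scipy.spatial.distance import pdist

x = np.empty(4000); x[0] = 0.4
for i in range(len(x) - 1):
    x[i + 1] = 4.0 * x[i] * (1.0 - x[i])      # fully chaotic logistic map

m, tau = 3, 1                                  # embedding dimension, delay
N = len(x) - (m - 1) * tau
pts = np.column_stack([x[j * tau : j * tau + N] for j in range(m)])[:1500]

d = pdist(pts)                                 # all pairwise distances
radii = np.logspace(-2.5, -0.5, 10)
C = np.array([(d < r).mean() for r in radii])  # correlation sums C(r)
slope = np.polyfit(np.log(radii), np.log(C), 1)[0]
print("correlation dimension estimate:", round(slope, 2))  # close to 1 here
```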
Modelling the protocol stack in NCS with deterministic and stochastic petri net
NASA Astrophysics Data System (ADS)
Hui, Chen; Chunjie, Zhou; Weifeng, Zhu
2011-06-01
The protocol stack is the basis of networked control systems (NCS). Full or partial reconfiguration of the protocol stack offers both optimised communication service and system performance. Nowadays, field testing is an unrealistic way to determine the performance of a reconfigurable protocol stack, and the Petri net formal description technique offers the best combination of intuitive representation, tool support and analytical capabilities. Traditionally, separation between the different layers of the OSI model has been common practice. Nevertheless, such a layered modelling and analysis framework leads to a lack of global optimisation for protocol reconfiguration. In this article, we propose a general modelling and analysis framework for NCS based on the cross-layer concept, which establishes an efficient system scheduling model by abstracting the time constraints, the task interrelations, and the processor and bus sub-models from the upper and lower layers (application, data link and physical layers). Cross-layer design can help to overcome the inadequacy of global optimisation based on information sharing between protocol layers. To illustrate the framework, we take the controller area network (CAN) as a case study. The simulation results of the deterministic and stochastic Petri net (DSPN) model can help us adjust the message scheduling scheme and obtain better system performance.
Structuring evolution: biochemical networks and metabolic diversification in birds.
Morrison, Erin S; Badyaev, Alexander V
2016-08-25
Recurrence and predictability of evolution are thought to reflect the correspondence between genomic and phenotypic dimensions of organisms, and the connectivity in deterministic networks within these dimensions. Direct examination of the correspondence between opportunities for diversification embedded in such networks and realized diversity is illuminating, but is empirically challenging because both the deterministic networks and phenotypic diversity are modified in the course of evolution. Here we overcome this problem by directly comparing the structure of a "global" carotenoid network - comprising all known enzymatic reactions among naturally occurring carotenoids - with the patterns of evolutionary diversification in carotenoid-producing metabolic networks utilized by birds. We found that phenotypic diversification in carotenoid networks across 250 species was closely associated with the enzymatic connectivity of the underlying biochemical network - compounds with greater connectivity occurred most frequently across species and were the hotspots of metabolic pathway diversification. In contrast, we found no evidence for diversification along the metabolic pathways, corroborating findings that the utilization of the global carotenoid network was not strongly influenced by history in avian evolution. The finding that the diversification in species-specific carotenoid networks is qualitatively predictable from the connectivity of the underlying enzymatic network points to significant structural determinism in phenotypic evolution.
NASA Astrophysics Data System (ADS)
Yuan, Hao; Zhang, Qin; Hong, Liang; Yin, Wen-jie; Xu, Dong
2014-08-01
We present a novel scheme for deterministic secure quantum communication (DSQC) over a collective rotating noisy channel. Four special two-qubit states are found to constitute a noise-free subspace, and so are utilized as quantum information carriers. In this scheme, the information carriers are transmitted over the quantum channel only once, which can effectively reduce the influence of other noise present in the quantum channel. The information receiver need only perform two single-photon collective measurements to decode the secret messages, which makes the present scheme more convenient for practical application. It will be shown that our scheme has a relatively high information capacity and intrinsic efficiency. Most importantly, the decoy-photon-pair checking technique and the order rearrangement of photon pairs guarantee that the present scheme is unconditionally secure.
Weinberg, Seth H.; Smith, Gregory D.
2014-01-01
Intracellular calcium (Ca2+) plays a significant role in many cell signaling pathways, some of which are localized to spatially restricted microdomains. Ca2+ binding proteins (Ca2+ buffers) play an important role in regulating Ca2+ concentration ([Ca2+]). Buffers typically slow [Ca2+] temporal dynamics and increase the effective volume of Ca2+ domains. Because fluctuations in [Ca2+] decrease in proportion to the square-root of a domain’s physical volume, one might conjecture that buffers decrease [Ca2+] fluctuations and, consequently, mitigate the significance of small domain volume concerning Ca2+ signaling. We test this hypothesis through mathematical and computational analysis of idealized buffer-containing domains and their stochastic dynamics during free Ca2+ influx with passive exchange of both Ca2+ and buffer with bulk concentrations. We derive Langevin equations for the fluctuating dynamics of Ca2+ and buffer and use these stochastic differential equations to determine the magnitude of [Ca2+] fluctuations for different buffer parameters (e.g., dissociation constant and concentration). In marked contrast to expectations based on a naive application of the principle of effective volume as employed in deterministic models of Ca2+ signaling, we find that mobile and rapid buffers typically increase the magnitude of domain [Ca2+] fluctuations during periods of Ca2+ influx, whereas stationary (immobile) Ca2+ buffers do not. Also contrary to expectations, we find that in the absence of Ca2+ influx, buffers influence the temporal characteristics, but not the magnitude, of [Ca2+] fluctuations. We derive an analytical formula describing the influence of rapid Ca2+ buffers on [Ca2+] fluctuations and, importantly, identify the stochastic analog of (deterministic) effective domain volume. Our results demonstrate that Ca2+ buffers alter the dynamics of [Ca2+] fluctuations in a nonintuitive manner. The finding that Ca2+ buffers do not suppress intrinsic domain [Ca2+] fluctuations raises the intriguing question of whether or not [Ca2+] fluctuations are a physiologically significant aspect of local Ca2+ signaling. PMID:24940787
NASA Astrophysics Data System (ADS)
Rocha, Ana Maria A. C.; Costa, M. Fernanda P.; Fernandes, Edite M. G. P.
2016-12-01
This article presents a shifted hyperbolic penalty function and proposes an augmented Lagrangian-based algorithm for non-convex constrained global optimization problems. Convergence to an ε-global minimizer is proved. At each iteration k, the algorithm requires the εk-global minimization of a bound constrained optimization subproblem, where εk → ε. The subproblems are solved by a stochastic population-based metaheuristic that relies on the artificial fish swarm paradigm and a two-swarm strategy. To enhance the speed of convergence, the algorithm invokes the Nelder-Mead local search with a dynamically defined probability. Numerical experiments with benchmark functions and engineering design problems are presented. The results show that the proposed shifted hyperbolic augmented Lagrangian compares favorably with other deterministic and stochastic penalty-based methods.
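The outer augmented-Lagrangian loop can be sketched independently of the specific penalty and metaheuristic. Below, the classical quadratic (PHR) multiplier update stands in for the shifted hyperbolic penalty, and scipy's differential_evolution stands in for the artificial-fish-swarm subproblem solver.

```python
# Augmented Lagrangian outer loop with a global solver on each subproblem.
import numpy as np
from scipy.optimize import differential_evolution

def f(x):                       # objective (illustrative)
    return (x[0] - 1) ** 2 + (x[1] - 2) ** 2

def g(x):                       # inequality constraint g(x) <= 0
    return x[0] + x[1] - 2.0

bounds = [(-5, 5), (-5, 5)]
lam, rho = 0.0, 1.0
for k in range(8):
    def L_aug(x):               # PHR augmented Lagrangian for g(x) <= 0
        return f(x) + (0.5 / rho) * (max(0.0, lam + rho * g(x)) ** 2 - lam ** 2)
    res = differential_evolution(L_aug, bounds, seed=k, tol=1e-8)
    lam = max(0.0, lam + rho * g(res.x))    # multiplier update
    if g(res.x) > 1e-6:
        rho *= 2.0                          # tighten penalty if infeasible

print("solution:", np.round(res.x, 4), "constraint:", round(g(res.x), 6))
```

For this toy problem the iterates approach (0.5, 1.5), the projection of the unconstrained optimum onto the feasible set.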
The global Minmax k-means algorithm.
Wang, Xiaoyan; Bai, Yanping
2016-01-01
The global k-means algorithm is an incremental approach to clustering that dynamically adds one cluster center at a time through a deterministic global search procedure from suitable initial positions, and employs k-means to minimize the sum of the intra-cluster variances. However, the global k-means algorithm sometimes produces singleton clusters, and when the initial positions are bad, a poor local optimum is easily reached by the k-means algorithm. In this paper, we modify the global k-means algorithm to eliminate singleton clusters first, and then apply the MinMax k-means clustering error method to the global k-means algorithm to overcome the effect of bad initialization, proposing the global MinMax k-means algorithm. The proposed clustering method is tested on some popular data sets and compared to the k-means algorithm, the global k-means algorithm and the MinMax k-means algorithm. The experimental results show that our proposed algorithm outperforms the other algorithms mentioned in the paper.
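The incremental "global" strategy is straightforward to sketch: grow from k = 1, seeding each new center at candidate data points and keeping the best refinement. The sketch below subsamples candidates for speed and uses sklearn's KMeans for the local refinement; the MinMax weighting of the proposed variant is not reproduced.

```python
# Incremental global k-means: add one deterministically seeded center at a time.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(8)
X = np.vstack([rng.normal(c, 0.3, size=(50, 2)) for c in (0, 3, 6)])

def global_kmeans(X, k_max, n_candidates=25):
    centers = X.mean(0, keepdims=True)            # k = 1 solution
    best = None
    for k in range(2, k_max + 1):
        # The original method tries every point; we subsample for speed.
        cand_idx = rng.choice(len(X), n_candidates, replace=False)
        best = None
        for i in cand_idx:                        # try each candidate seed
            init = np.vstack([centers, X[i]])
            km = KMeans(n_clusters=k, init=init, n_init=1).fit(X)
            if best is None or km.inertia_ < best.inertia_:
                best = km
        centers = best.cluster_centers_
    return best

print("final centers:\n", np.round(global_kmeans(X, 3).cluster_centers_, 2))
```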
Multiscale System for Environmentally-Driven Infectious Disease with Threshold Control Strategy
NASA Astrophysics Data System (ADS)
Sun, Xiaodan; Xiao, Yanni
A multiscale system for environmentally-driven infectious disease is proposed, in which control measures at three different scales are implemented when the number of infected hosts exceeds a certain threshold. Our coupled model describes the feedback of between-host dynamics on within-host dynamics by letting a variable on one scale guide the enhancement of interventions on other scales. The modeling approach provides a novel way of linking large-scale dynamics to small-scale dynamics. The dynamic behaviors of the multiscale system on two time-scales, i.e. the fast system and the slow system, are investigated. The slow system is further simplified to a two-dimensional Filippov system. For the Filippov system, we study the dynamics of its two subsystems (i.e. the free system and the control system), the sliding mode dynamics, the boundary equilibrium bifurcations, as well as the global behaviors. We prove that both subsystems may undergo backward bifurcations and that the sliding domain exists. Meanwhile, depending on the threshold value and other parameter values, it is possible that the pseudo-equilibrium exists and is globally stable; that the pseudo-equilibrium, the disease-free equilibrium and the real equilibrium are tri-stable; that the pseudo-equilibrium and the real equilibrium are bi-stable; or that the pseudo-equilibrium and the disease-free equilibrium are bi-stable. The global stability of the pseudo-equilibrium reveals that we may maintain the number of infected hosts at a previously given value. Moreover, the bi-stability and tri-stability indicate that whether the number of infected individuals tends to zero, a previously given value, or other positive values depends on the parameter values and the initial states of the system. These results highlight the challenges in the control of environmentally-driven infectious disease.
NASA Astrophysics Data System (ADS)
Necmioglu, O.; Meral Ozel, N.
2014-12-01
Accurate earthquake source parameters are essential for any tsunami hazard assessment and mitigation, including early warning systems. A complex tectonic setting makes a priori accurate assumptions of earthquake source parameters difficult, and characterization of the faulting type is a challenge. Information on tsunamigenic sources is of crucial importance in the Eastern Mediterranean and its Connected Seas, especially considering the short arrival times and lack of offshore sea-level measurements. In addition, the scientific community has had to abandon the paradigm of a ''maximum earthquake'' predictable from simple tectonic parameters (Ruff and Kanamori, 1980) in the wake of the 2004 Sumatra event (Okal, 2010), and one of the lessons learnt from the 2011 Tohoku event was that tsunami hazard maps may need to be prepared for infrequent gigantic earthquakes as well as more frequent smaller-sized earthquakes (Satake, 2011). We have initiated an extensive modeling study to perform a deterministic tsunami hazard analysis for the Eastern Mediterranean and its Connected Seas. Characteristic earthquake source parameters (strike, dip, rake, depth, Mwmax) at each 0.5° x 0.5° bin for 0-40 km depth (310 bins in total) and for 40-100 km depth (92 bins in total) in the Eastern Mediterranean, Aegean and Black Sea region (30°N-48°N and 22°E-44°E) have been assigned from the harmonization of the available databases and previous studies. These parameters have been used as input for the deterministic tsunami hazard modeling. Nested tsunami simulations of 6 h duration with coarse (2 arc-min) and medium (1 arc-min) grid resolutions have been run at EC-JRC premises for the Black Sea and the Eastern and Central Mediterranean (30°N-41.5°N and 8°E-37°E) for each source, using the shallow-water finite-difference SWAN code (Mader, 2004), over the magnitude range from 6.5 to the Mwmax defined for that bin, with an Mw increment of 0.1. Results show that not only earthquakes resembling well-known historical events such as the AD 365 or AD 1303 Hellenic Arc earthquakes, but also earthquakes with lower magnitudes, contribute to the tsunami hazard in the study area.
The effects of hard water consumption on kidney function: Insights from mathematical modelling
NASA Astrophysics Data System (ADS)
Tambaru, David; Djahi, Bertha S.; Ndii, Meksianis Z.
2018-03-01
Most water sources in Nusa Tenggara Timur contain high concentrations of calcium and magnesium ions; such water is known as hard water. Long-term consumption of hard water can cause kidney dysfunction, which may lead to other diseases such as cerebrovascular disease and diabetes. Therefore, understanding the effects of hard water consumption on kidney function is important. This paper studies the transmission dynamics of kidney dysfunction due to the consumption of hard water using a mathematical model. We propose a new deterministic mathematical model comprising human and water compartments and conduct a global sensitivity analysis to determine the most influential parameters of the model. The Routh-Hurwitz criterion is used to examine the stability of the steady states. The results show that the model has two steady states, which are locally stable. Moreover, we find that the most influential parameters are the maximum concentration of magnesium and calcium in the water, the rate of increase of the calcium and magnesium concentration in the water, and the effectiveness rate of water treatment. The results suggest that better water treatments are required to reduce the concentration of magnesium and calcium in the water; this helps minimize the probability that humans develop kidney dysfunction. Furthermore, water-related data need to be collected for further investigation.
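For a cubic characteristic polynomial, the Routh-Hurwitz conditions used above reduce to a one-line test, sketched here with illustrative coefficients.

```python
# Routh-Hurwitz stability test for a0*s^3 + a1*s^2 + a2*s + a3.
def routh_hurwitz_cubic(a0, a1, a2, a3):
    """All roots have negative real parts iff these conditions hold."""
    return a0 > 0 and a1 > 0 and a3 > 0 and a1 * a2 > a0 * a3

# s^3 + 4s^2 + 5s + 2 = (s + 1)^2 (s + 2): all roots negative -> stable.
print(routh_hurwitz_cubic(1.0, 4.0, 5.0, 2.0))   # True
# s^3 + s^2 + s + 3 violates a1*a2 > a0*a3 -> unstable.
print(routh_hurwitz_cubic(1.0, 1.0, 1.0, 3.0))   # False
```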
El Niño-Southern Oscillation frequency cascade
Stuecker, Malte F.; Jin, Fei -Fei; Timmermann, Axel
2015-10-19
The El Niño-Southern Oscillation (ENSO) phenomenon, the most pronounced feature of internally generated climate variability, occurs on interannual timescales and impacts the global climate system through an interaction with the annual cycle. The tight coupling between ENSO and the annual cycle is particularly pronounced over the tropical Western Pacific. In this paper, we show that this nonlinear interaction results in a frequency cascade in the atmospheric circulation, which is characterized by deterministic high-frequency variability on near-annual and subannual timescales. Finally, through climate model experiments and observational analysis, it is documented that a substantial fraction of the anomalous Northwest Pacific anticyclone variability, which is the main atmospheric link between ENSO and the East Asian Monsoon system, can be explained by these interactions and is thus deterministic and potentially predictable.
NASA Astrophysics Data System (ADS)
Bacchi, Vito; Duluc, Claire-Marie; Bertrand, Nathalie; Bardet, Lise
2017-04-01
In recent years, in the context of hydraulic risk assessment, much effort has been put into the development of sophisticated numerical model systems able to reproduce surface flow fields. These numerical models are based on a deterministic approach, and the results are presented in terms of measurable quantities (water depths, flow velocities, etc.). However, the modelling of surface flows involves numerous uncertainties, associated with the numerical structure of the model, with imperfect knowledge of the physical parameters which force the system, and with the randomness inherent in natural phenomena. As a consequence, dealing with uncertainties can be a difficult task for both modelers and decision-makers [Iooss, 2011]. In the context of nuclear safety, IRSN assesses studies conducted by operators for different reference flood situations (local rain, small or large watershed flooding, sea levels, etc.) that are defined in the guide ASN N°13 [ASN, 2013]. The guide provides some recommendations to deal with uncertainties, proposing a specific conservative approach to cover hydraulic modelling uncertainties. Depending on the situation, the influencing parameter might be the Strickler coefficient, levee behavior, simplified topographic assumptions, etc. Obviously, identifying the most influencing parameter and giving it a penalizing value is challenging and usually questionable. In this context, IRSN has conducted cooperative research activities (with Compagnie Nationale du Rhone, the I-CiTy laboratory of Polytech'Nice, the Atomic Energy Commission, and the Bureau de Recherches Géologiques et Minières) since 2011 in order to investigate the feasibility and benefits of Uncertainty Analysis (UA) and Global Sensitivity Analysis (GSA) applied to hydraulic modelling. A specific methodology was tested using the computational environment Promethee, developed by IRSN, which allows uncertainty propagation studies to be carried out. This methodology was applied with various numerical models and in different contexts, such as river flooding on the Rhône River (Nguyen et al., 2015) and on the Garonne River, the study of local rainfall (Abily et al., 2016), and tsunami generation in the framework of the ANR research project TANDEM. The feedback from these previous studies is analyzed (technical problems, limitations, interesting results, etc.), and perspectives are given, together with a discussion of how a probabilistic treatment of uncertainties could improve the current deterministic methodology for risk assessment (in other engineering applications as well).
Thurman, Steven M.; Davey, Pinakin Gunvant; McCray, Kaydee Lynn; Paronian, Violeta; Seitz, Aaron R.
2016-01-01
Contrast sensitivity (CS) is widely used as a measure of visual function in both basic research and clinical evaluation. There is conflicting evidence on the extent to which measuring the full contrast sensitivity function (CSF) offers more functionally relevant information than a single measurement from an optotype CS test, such as the Pelli–Robson chart. Here we examine the relationship between functional CSF parameters and other measures of visual function, and establish a framework for predicting individual CSFs with effectively a zero-parameter model that shifts a standard-shaped template CSF horizontally and vertically according to independent measurements of high contrast acuity and letter CS, respectively. This method was evaluated for three different CSF tests: a chart test (CSV-1000), a computerized sine-wave test (M&S Sine Test), and a recently developed adaptive test (quick CSF). Subjects were 43 individuals with healthy vision or impairment too mild to be considered low vision (acuity range of −0.3 to 0.34 logMAR). While each test demands a slightly different normative template, results show that individual subject CSFs can be predicted with roughly the same precision as test–retest repeatability, confirming that individuals predominantly differ in terms of peak CS and peak spatial frequency. In fact, these parameters were sufficiently related to empirical measurements of acuity and letter CS to permit accurate estimation of the entire CSF of any individual with a deterministic model (zero free parameters). These results demonstrate that in many cases, measuring the full CSF may provide little additional information beyond letter acuity and contrast sensitivity. PMID:28006065
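The template-shifting idea lends itself to a compact sketch. The snippet below assumes a log-parabola template CSF (a common parameterization, with illustrative constants rather than the study's fitted normative values) and shifts it vertically by a measured letter CS and horizontally by a measured acuity; the normalization constants norm_cs_log and norm_acuity_logmar are hypothetical.

    import numpy as np

    def template_csf(f, peak_gain=100.0, peak_f=3.0, bandwidth=1.5):
        """Log-parabola CSF: sensitivity vs. spatial frequency f (cycles/deg)."""
        return peak_gain * 10 ** (-((np.log10(f) - np.log10(peak_f))
                                    / (0.5 * bandwidth)) ** 2)

    def predicted_csf(f, letter_cs_log, acuity_logmar,
                      norm_cs_log=1.8, norm_acuity_logmar=0.0):
        """Shift the template: vertically by letter CS, horizontally by acuity."""
        v_shift = 10 ** (letter_cs_log - norm_cs_log)         # sensitivity ratio
        h_shift = 10 ** (norm_acuity_logmar - acuity_logmar)  # frequency ratio
        return v_shift * template_csf(f / h_shift)

    freqs = np.array([1.5, 3.0, 6.0, 12.0, 18.0])
    print(predicted_csf(freqs, letter_cs_log=1.65, acuity_logmar=0.1))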
The making of might-have-beens: effects of free will belief on counterfactual thinking.
Alquist, Jessica L; Ainsworth, Sarah E; Baumeister, Roy F; Daly, Michael; Stillman, Tyler F
2015-02-01
Counterfactual thoughts are based on the assumption that one situation could result in multiple possible outcomes. This assumption underlies most theories of free will and contradicts deterministic views that there is only one possible outcome of any situation. Four studies tested the hypothesis that stronger belief in free will would lead to more counterfactual thinking. Experimental manipulations (Studies 1-2) and a measure (Studies 3-4) of belief in free will were linked to increased counterfactual thinking in response to autobiographical (Studies 1, 3, and 4) and hypothetical (Study 2) events. Belief in free will also predicted the kind of counterfactuals generated. Belief in free will was associated with an increase in the generation of self and upward counterfactuals, which have been shown to be particularly useful for learning. These findings fit the view that belief in free will is promoted by societies because it facilitates learning and culturally valued change. © 2014 by the Society for Personality and Social Psychology, Inc.
Development of a First-of-a-Kind Deterministic Decision-Making Tool for Supervisory Control System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cetiner, Sacit M; Kisner, Roger A; Muhlheim, Michael David
2015-07-01
Decision-making is the process of identifying and choosing alternatives where each alternative offers a different approach or path to move from a given state or condition to a desired state or condition. The generation of consistent decisions requires that a structured, coherent process be defined, immediately leading to a decision-making framework. The overall objective of the generalized framework is for it to be adopted into an autonomous decision-making framework and tailored to specific requirements for various applications. In this context, automation is the use of computing resources to make decisions and implement a structured decision-making process with limited or no human intervention. The overriding goal of automation is to replace or supplement human decision makers with reconfigurable decision-making modules that can perform a given set of tasks reliably. Risk-informed decision making requires a probabilistic assessment of the likelihood of success given the status of the plant/systems and component health, and a deterministic assessment between plant operating parameters and reactor protection parameters to prevent unnecessary trips and challenges to plant safety systems. The implementation of the probabilistic portion of the decision-making engine of the proposed supervisory control system was detailed in previous milestone reports. Once the control options are identified and ranked based on the likelihood of success, the supervisory control system transmits the options to the deterministic portion of the platform. The deterministic multi-attribute decision-making framework uses variable sensor data (e.g., outlet temperature) and calculates where it is within the challenge state, its trajectory, and its margin within the controllable domain, using utility functions to evaluate the current and projected plant state space for different control decisions. Metrics to be evaluated include stability, cost, time to complete (action), power level, etc. The integration of deterministic calculations using multi-physics analyses (i.e., neutronics, thermal, and thermal-hydraulics) and probabilistic safety calculations allows for the examination and quantification of margin recovery strategies; the thermal-hydraulics analyses thereby also validate the control options identified from the probabilistic assessment. Future work includes evaluating other possible metrics and computational efficiencies.
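A schematic of the deterministic multi-attribute ranking stage might look as follows; the attributes (stability margin, cost, time to complete), the weights, and the linear utility shapes are invented for illustration and are not the report's actual functions.

    def utility(value, worst, best):
        """Linear single-attribute utility normalized to [0, 1]."""
        return max(0.0, min(1.0, (value - worst) / (best - worst)))

    # candidate control options: (stability margin, cost, time to complete [s])
    options = {
        "reduce_power": (0.8, 3.0, 120.0),
        "trip_pump_B":  (0.9, 7.0, 30.0),
        "no_action":    (0.3, 0.0, 0.0),
    }
    weights = {"stability": 0.6, "cost": 0.15, "time": 0.25}

    def score(opt):
        s, c, t = opt
        return (weights["stability"] * utility(s, 0.0, 1.0)
                + weights["cost"] * utility(c, 10.0, 0.0)    # lower cost is better
                + weights["time"] * utility(t, 300.0, 0.0))  # faster is better

    for name, opt in options.items():
        print(f"{name:14s} utility = {score(opt):.3f}")
    print("selected:", max(options, key=lambda k: score(options[k])))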
NASA Astrophysics Data System (ADS)
Mehanee, Salah A.
2015-01-01
This paper describes a new method for tracing paleo-shear zones of the continental crust by self-potential (SP) data inversion. The method falls within the deterministic inversion framework, and it is exclusively applicable to the interpretation of SP anomalies measured along a profile over sheet-type structures, such as conductive thin films of interconnected graphite precipitations formed on shear planes. The inverse method fits a residual SP anomaly by a single thin sheet and recovers the characteristic parameters of the sheet (depth to the top h, extension in depth a, amplitude coefficient k, and amount and direction of dip θ). The method minimizes an objective functional in the space of the log-transformed and non-transformed model parameters (log(h), log(a), log(k), and θ) successively by the steepest descent (SD) and Gauss-Newton (GN) techniques, in order to maintain the stability and convergence of the inversion. Prior to applying the method to real data, its accuracy, convergence, and stability are successfully verified on numerical examples with and without noise. The method is then applied to SP profiles from the German Continental Deep Drilling Program (Kontinentales Tiefbohrprogramm der Bundesrepublik Deutschland - KTB), Rittsteig, and Grossensees sites in Germany for tracing paleo-shear planes coated with graphitic deposits. Comparisons of the geologic sections constructed in this paper (based on the proposed deterministic approach) against the existing published interpretations (obtained by trial-and-error modeling) for the SP data of the KTB and Rittsteig sites reveal that the deterministic approach suggests some new details of geological significance. The findings of the proposed inverse scheme are supported by available drilling and other geophysical data. Furthermore, the real SP data of the Grossensees site have been interpreted (apparently for the first time) with the deterministic inverse scheme, from which interpretive geologic cross sections are suggested. The computational efficiency, the analysis of the numerical examples investigated, and the comparisons for the real data inverted here demonstrate that the developed deterministic approach is advantageous over existing interpretation methods and is suitable for meaningful interpretation of SP data acquired elsewhere over graphitic occurrences on fault planes.
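The inversion strategy can be sketched compactly. The snippet below uses the standard 2-D inclined-sheet SP forward formula (as given, e.g., by Murty and Haricharan, 1985) and inverts synthetic data over log(h), log(a), log(|k|) and the dip, with scipy's least_squares standing in for the paper's steepest-descent plus Gauss-Newton sequence; all "true" values are invented for the demonstration.

    import numpy as np
    from scipy.optimize import least_squares

    def sp_sheet(x, h, a, k, theta):
        """SP anomaly of an inclined sheet along profile coordinate x."""
        num = (x - a * np.cos(theta)) ** 2 + (h - a * np.sin(theta)) ** 2
        den = (x + a * np.cos(theta)) ** 2 + (h + a * np.sin(theta)) ** 2
        return k * np.log(num / den)

    x = np.linspace(-150.0, 150.0, 61)
    true = dict(h=30.0, a=25.0, k=-60.0, theta=np.deg2rad(50.0))
    data = sp_sheet(x, **true) + np.random.default_rng(1).normal(0, 1.0, x.size)

    def residuals(p):
        log_h, log_a, log_negk, theta = p
        return sp_sheet(x, np.exp(log_h), np.exp(log_a),
                        -np.exp(log_negk), theta) - data

    fit = least_squares(residuals, x0=[np.log(10), np.log(10), np.log(10), 0.5])
    log_h, log_a, log_negk, theta = fit.x
    print(f"h={np.exp(log_h):.1f} m  a={np.exp(log_a):.1f} m  "
          f"k={-np.exp(log_negk):.1f}  dip={np.degrees(theta):.1f} deg")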
A statistical approach to nuclear fuel design and performance
NASA Astrophysics Data System (ADS)
Cunning, Travis Andrew
As CANDU fuel failures can have significant economic and operational consequences for the Canadian nuclear power industry, it is essential that the factors impacting fuel performance are adequately understood. Current industrial practice relies on deterministic safety analysis and the highly conservative "limit of operating envelope" approach, where all parameters are assumed to be at their limits simultaneously. This results in a conservative prediction of event consequences, with little consideration given to the high quality and precision of current manufacturing processes. This study employs a novel approach to the prediction of CANDU fuel reliability. Probability distributions are fitted to actual fuel manufacturing datasets provided by Cameco Fuel Manufacturing, Inc. They are used to form input for two industry-standard fuel performance codes: ELESTRES for the steady-state case and ELOCA for the transient case, a hypothesized 80% reactor outlet header break loss-of-coolant accident. Using a Monte Carlo technique for input generation, 10^5 independent trials are conducted and probability distributions are fitted to key model output quantities. Comparing model output against recognized industrial acceptance criteria, no fuel failures are predicted for either case. Output distributions are well removed from failure limit values, implying that margin exists in current fuel manufacturing and design. To validate the results and attempt to reduce the simulation burden of the methodology, two dimensional-reduction methods are assessed. Using just 36 trials, both methods are able to produce output distributions that agree strongly with those obtained via the brute-force Monte Carlo method, often to a relative discrepancy of less than 0.3% when predicting the first statistical moment, and a relative discrepancy of less than 5% when predicting the second statistical moment. In terms of global sensitivity, pellet density proves to have the greatest impact on fuel performance, with an average sensitivity index of 48.93% on key output quantities. Pellet grain size and dish depth are also significant contributors, at 31.53% and 13.46%, respectively. A traditional limit of operating envelope case is also evaluated. This case produces output values that exceed the maximum values observed during the 10^5 Monte Carlo trials for all output quantities of interest. In many cases the difference between the predictions of the two methods is very pronounced, demonstrating the highly conservative nature of the deterministic approach. A reliability analysis of CANDU fuel manufacturing parametric data, specifically pertaining to the quantification of fuel performance margins, has not been conducted previously. Key Words: CANDU, nuclear fuel, Cameco, fuel manufacturing, fuel modelling, fuel performance, fuel reliability, ELESTRES, ELOCA, dimensional reduction methods, global sensitivity analysis, deterministic safety analysis, probabilistic safety analysis.
NASA Astrophysics Data System (ADS)
Canli, Ekrem; Thiebes, Benni; Petschko, Helene; Glade, Thomas
2015-04-01
By now there is broad consensus that, due to human-induced global change, the frequency and magnitude of heavy precipitation events are expected to increase in certain parts of the world. Given that rainfall is the most common trigger for landslide initiation, increased landslide activity can be expected there as well. Landslide occurrence is a globally widespread phenomenon that clearly needs to be addressed. Well-known problems in modelling landslide susceptibility and hazard make the predictions uncertain; these include the lack of a universally applicable modelling solution for adequately assessing landslide susceptibility (which can be seen as the relative indication of the spatial probability of landslide initiation). Generally speaking, there are three major approaches to landslide susceptibility analysis: heuristic, statistical and deterministic models, each with its own assumptions, distinctive data requirements and differently interpretable outcomes. Still, detailed comparisons of the resulting landslide susceptibility maps are rare. In this presentation, the susceptibility modelling outputs of a deterministic model (Stability INdex MAPping - SINMAP) and a statistical modelling approach (generalized additive model - GAM) are compared. SINMAP is an infinite slope stability model which requires parameterization of soil mechanical parameters. Modelling with the generalized additive model, a non-linear extension of the generalized linear model, requires a high-quality landslide inventory that serves as the dependent variable in the statistical approach. Both methods rely on topographical data derived from the DTM. The comparison was carried out in a study area located in the district of Waidhofen/Ybbs in Lower Austria. For the whole district (ca. 132 km²), 1063 landslides have been mapped and partially used within the analysis and the validation of the model outputs. The respective susceptibility maps were reclassified into three susceptibility classes each and compared on a grid-cell basis: a match was recorded for grid cells assigned to the same susceptibility class, and a mismatch or deviation for cells with different assigned classes (up to two classes of difference). Although the modelling approaches differ significantly, more than 70% of the pixels show a match in the same susceptibility class, and a mismatch of two classes occurs in less than 2% of all pixels. Although this result looks promising and strengthens confidence in the susceptibility zonation for this area, some general drawbacks of the respective approaches still have to be addressed in further detail. Future work is heading towards an integration of probabilistic aspects into deterministic modelling.
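The grid-cell comparison reduces to a simple cross-tabulation, sketched below with synthetic three-class rasters standing in for the reclassified SINMAP and GAM maps.

    import numpy as np

    rng = np.random.default_rng(2)
    sinmap = rng.integers(0, 3, size=(500, 500))   # susceptibility classes 0..2
    gam = np.clip(sinmap + rng.integers(-1, 2, size=sinmap.shape), 0, 2)

    diff = np.abs(sinmap - gam)                    # class difference per cell
    print(f"match           : {np.mean(diff == 0):6.1%}")
    print(f"off by 1 class  : {np.mean(diff == 1):6.1%}")
    print(f"off by 2 classes: {np.mean(diff == 2):6.1%}")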
Dijkshoorn, J P; Schutyser, M A I; Sebris, M; Boom, R M; Wagterveld, R M
2017-10-26
Deterministic lateral displacement technology was originally developed in the realm of microfluidics, but it has potential for larger-scale separation as well. In our previous studies, we proposed a sieve-based lateral displacement device inspired by the principle of deterministic lateral displacement. The advantages of this new device are a lower pressure drop, a lower risk of particle accumulation, higher throughput and simpler manufacturing. However, until now this device had only been investigated for the separation of large particles of around 785 µm diameter. To separate smaller particles, we investigate several design parameters for their influence on the critical particle diameter. In a dimensionless evaluation, device designs with different geometries and dimensions were compared. It was found that sieve-based lateral displacement devices are able to displace particles, despite their unusual and asymmetric design, owing to the crucial role of the flow profile. These results demonstrate the possibility of actively steering the velocity profile in order to reduce the critical diameter in deterministic lateral displacement devices, which makes this separation principle more accessible for large-scale, high-throughput applications.
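For scale, conventional DLD arrays are often characterized by the empirical relation of Davis (2006), Dc = 1.4 g ε^0.48, with gap g and row-shift fraction ε; the sketch below evaluates it, bearing in mind that the sieve-based device above deliberately departs from this geometry, so the relation is quoted here only as a reference point.

    def critical_diameter(gap_um: float, epsilon: float) -> float:
        """Empirical DLD critical diameter (Davis, 2006), same units as gap."""
        return 1.4 * gap_um * epsilon ** 0.48

    print(critical_diameter(gap_um=1000.0, epsilon=0.1))  # ~464 um for a 1 mm gap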
Effects of Noise on Ecological Invasion Processes: Bacteriophage-mediated Competition in Bacteria
NASA Astrophysics Data System (ADS)
Joo, Jaewook; Harvill, Eric; Albert, Reka
2007-03-01
Pathogen-mediated competition, through which an invasive species carrying and transmitting a pathogen can be a superior competitor to a more vulnerable resident species, is one of the principal driving forces influencing biodiversity in nature. Using an experimental system of bacteriophage-mediated competition in bacterial populations and a deterministic model, we showed in [Joo et al 2005] that the competitive advantage conferred by the phage depends only on the relative phage pathology and is independent of the initial phage concentration and of other phage and host parameters such as the infection-causing contact rate, the spontaneous and infection-induced lysis rates, and the phage burst size. Here we investigate the effects of stochastic fluctuations on bacterial invasion facilitated by bacteriophage and examine the validity of the deterministic approach. We use both numerical and analytical methods of stochastic processes to identify the source of noise and assess its magnitude. We show that the conclusions obtained from the deterministic model are robust against stochastic fluctuations, yet deviations become prominently large when the phage are more pathological to the invading bacterial strain.
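The role of demographic noise can be illustrated with a toy Gillespie simulation; the birth-death scheme below, in which an invader additionally kills residents at a phage-like rate, is a crude caricature with invented rates, not the model of Joo et al. (2005).

    import numpy as np

    def gillespie(R=500, I=10, b=1.0, d=0.1, beta=0.002, K=1000,
                  t_end=50.0, seed=0):
        rng = np.random.default_rng(seed)
        t = 0.0
        while t < t_end and (R > 0 or I > 0):
            rates = [b * R * max(0.0, 1 - (R + I) / K),  # resident birth
                     d * R + beta * I * R,               # resident death (incl. phage-like kill)
                     b * I * max(0.0, 1 - (R + I) / K),  # invader birth
                     d * I]                              # invader death
            total = sum(rates)
            if total == 0.0:
                break
            t += rng.exponential(1.0 / total)
            event = rng.choice(4, p=np.array(rates) / total)
            R += (event == 0) - (event == 1)
            I += (event == 2) - (event == 3)
        return R, I

    results = [gillespie(seed=s) for s in range(20)]
    wins = sum(i > r for r, i in results)
    print(f"invader ahead in {wins}/20 stochastic runs")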
Parameter estimation for stiff deterministic dynamical systems via ensemble Kalman filter
NASA Astrophysics Data System (ADS)
Arnold, Andrea; Calvetti, Daniela; Somersalo, Erkki
2014-10-01
A commonly encountered problem in numerous areas of applications is to estimate the unknown coefficients of a dynamical system from direct or indirect observations at discrete times of some of the components of the state vector. A related problem is to estimate unobserved components of the state. An egregious example of such a problem is provided by metabolic models, in which the numerous model parameters and the concentrations of the metabolites in tissue are to be estimated from concentration data in the blood. A popular method for addressing similar questions in stochastic and turbulent dynamics is the ensemble Kalman filter (EnKF), a particle-based filtering method that generalizes classical Kalman filtering. In this work, we adapt the EnKF algorithm for deterministic systems in which the numerical approximation error is interpreted as a stochastic drift with variance based on classical error estimates of numerical integrators. This approach, which is particularly suitable for stiff systems where the stiffness may depend on the parameters, allows us to effectively exploit the parallel nature of particle methods. Moreover, we demonstrate how spatial prior information about the state vector, which helps the stability of the computed solution, can be incorporated into the filter. The viability of the approach is shown by computed examples, including a metabolic system modeling an ischemic episode in skeletal muscle, with a high number of unknown parameters.
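The generic building block of the approach is the ensemble Kalman analysis step; below is a minimal sketch of one stochastic (perturbed-observation) update with a linear observation operator, omitting the paper's specific treatment of integrator error as stochastic drift.

    import numpy as np

    def enkf_update(X, y, H, R, rng):
        """X: (n, N) state ensemble; y: (m,) observation; H: (m, n); R: (m, m)."""
        n, N = X.shape
        A = X - X.mean(axis=1, keepdims=True)        # ensemble anomalies
        P = A @ A.T / (N - 1)                        # sample covariance
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.solve(S, np.eye(len(y)))   # Kalman gain
        Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, N).T
        return X + K @ (Y - H @ X)                   # perturbed-observation update

    rng = np.random.default_rng(3)
    X = rng.normal(0.0, 1.0, (2, 100))               # prior ensemble, 2 states
    H = np.array([[1.0, 0.0]])                       # observe first component only
    R = np.array([[0.1]])
    X_post = enkf_update(X, np.array([1.5]), H, R, rng)
    print(X.mean(axis=1), "->", X_post.mean(axis=1))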
Comparative study on neutronics characteristics of a 1500 MWe metal fuel sodium-cooled fast reactor
Ohgama, Kazuya; Aliberti, Gerardo; Stauff, Nicolas E.; ...
2017-02-28
Under the cooperative effort of the Civil Nuclear Energy R&D Working Group within the framework of the U.S.-Japan bilateral, Argonne National Laboratory (ANL) and the Japan Atomic Energy Agency (JAEA) have been performing a benchmark study using the Japan Sodium-cooled Fast Reactor (JSFR) design with metal fuel. In this benchmark study, core characteristic parameters at the beginning of cycle were evaluated by the best-estimate deterministic and stochastic methodologies of ANL and JAEA. The results obtained by both institutions show a good agreement, with less than 200 pcm of discrepancy on the neutron multiplication factor and less than 3% of discrepancy on the sodium void reactivity, Doppler reactivity, and control rod worth. The results of the stochastic and deterministic approaches were compared by each party to investigate the impact of the deterministic approximations and to understand potential variations in the results due to the different calculation methodologies employed. From the detailed analysis of methodologies, it was found that the good agreement in multiplication factor from the deterministic calculations comes from the cancellation of differences in methodology (0.4%) and nuclear data (0.6%). The different treatment of reflector cross-section generation was identified as the major cause of the discrepancy between the multiplication factors obtained with the JAEA and ANL deterministic methodologies. Impacts of the nuclear data libraries were also investigated using a sensitivity analysis methodology. Furthermore, the differences in the inelastic scattering cross sections of U-238, the ν values and fission cross sections of Pu-239, and the μ-average of Na-23 are the major contributors to the difference in the multiplication factors.
Hu, Weigang; Zhang, Qi; Tian, Tian; Li, Dingyao; Cheng, Gang; Mu, Jing; Wu, Qingbai; Niu, Fujun; Stegen, James C; An, Lizhe; Feng, Huyuan
2015-01-01
Understanding the processes that influence the structure of biotic communities is one of the major ecological topics, and both stochastic and deterministic processes are expected to be at work simultaneously in most communities. Here, we investigated the vertical distribution patterns of bacterial communities in a 10-m-long soil core taken within permafrost of the Qinghai-Tibet Plateau. To better understand the forces that govern these patterns, we examined the diversity and structure of bacterial communities, and the change in community composition along the vertical distance (spatial turnover), from both taxonomic and phylogenetic perspectives. Measures of taxonomic and phylogenetic beta diversity revealed that bacterial community composition changed continuously along the soil core and showed a vertical distance-decay relationship. Multiple stepwise regression analysis suggested that bacterial alpha diversity and phylogenetic structure were strongly correlated with soil conductivity and pH but weakly correlated with depth. There was evidence that deterministic and stochastic processes collectively drove the vertically structured pattern of the bacterial communities. Bacterial communities in five soil horizons (two originating from the active layer and three from permafrost) of the permafrost core were phylogenetically random, an indication of stochastic processes. However, we found a stronger effect of deterministic processes, related to soil pH, conductivity, and organic carbon content, structuring the bacterial communities. We therefore conclude that the vertical distribution of bacterial communities was governed primarily by deterministic ecological selection, although stochastic processes were also at work. Furthermore, the strong impact of environmental conditions (for example, soil physicochemical parameters and seasonal freeze-thaw cycles) on these communities underlines the sensitivity of permafrost microorganisms to climate change and potentially subsequent permafrost thaw. PMID:26699734
NASA Astrophysics Data System (ADS)
Dhamala, Mukeshwar; Lai, Ying-Cheng
1999-02-01
Transient chaos is a common phenomenon in nonlinear dynamics of many physical, biological, and engineering systems. In applications it is often desirable to maintain sustained chaos even in parameter regimes of transient chaos. We address how to sustain transient chaos in deterministic flows. We utilize a simple and practical method, based on extracting the fundamental dynamics from time series, to maintain chaos. The method can result in control of trajectories from almost all initial conditions in the original basin of the chaotic attractor from which transient chaos is created. We apply our method to three problems: (1) voltage collapse in electrical power systems, (2) species preservation in ecology, and (3) elimination of undesirable bursting behavior in a chemical reaction system.
Deterministic multi-zone ice accretion modeling
NASA Technical Reports Server (NTRS)
Yamaguchi, K.; Hansman, R. John, Jr.; Kazmierczak, Michael
1991-01-01
The focus here is on a deterministic model of the surface roughness transition behavior of glaze ice. The initial smooth/rough transition location, bead formation, and the propagation of the transition location are analyzed. Based on the hypothesis that the smooth/rough transition location coincides with the laminar/turbulent boundary layer transition location, a multizone model is implemented in the LEWICE code. In order to verify the effectiveness of the model, ice accretion predictions for simple cylinders calculated by the multizone LEWICE are compared to experimental ice shapes. The glaze ice shapes are found to be sensitive to the laminar surface roughness and bead thickness parameters controlling the transition location, while the ice shapes are found to be insensitive to the turbulent surface roughness.
Cluster-type entangled coherent states: Generation and application
DOE Office of Scientific and Technical Information (OSTI.GOV)
An, Nguyen Ba; Kim, Jaewan
2009-10-15
We consider a type of (M+N)-mode entangled coherent states and propose a simple deterministic scheme to generate these states that can fly freely in space. We then exploit such free-flying states to teleport certain kinds of superpositions of multimode coherent states. We also address the issue of manipulating size and type of entangled coherent states by means of linear optics elements only.
Unifying mechanical and thermodynamic descriptions across the thioredoxin protein family.
Mottonen, James M; Xu, Minli; Jacobs, Donald J; Livesay, Dennis R
2009-05-15
We compare various predicted mechanical and thermodynamic properties of nine oxidized thioredoxins (TRX) using a Distance Constraint Model (DCM). The DCM is based on a nonadditive free energy decomposition scheme, where entropic contributions are determined from the rigidity and flexibility of structure based on distance constraints. We perform averages over an ensemble of constraint topologies to calculate several thermodynamic and mechanical response functions that together yield quantitative stability/flexibility relationships (QSFR). Applied to the TRX protein family, QSFR metrics display a rich variety of similarities and differences. In particular, backbone flexibility is well conserved across the family, whereas cooperativity correlation, describing mechanical and thermodynamic couplings between residue pairs, exhibits distinctive features that readily stand out. The diversity in predicted QSFR metrics that describe cooperativity correlation between pairs of residues is largely explained by a global flexibility order parameter describing the amount of intrinsic flexibility within the protein. A free energy landscape is calculated as a function of the flexibility order parameter, and key values are determined where the native state, transition state, and unfolded state are located. Another key value identifies a mechanical transition where the global nature of the protein changes from flexible to rigid. The key values of the flexibility order parameter help characterize how mechanical and thermodynamic response is linked. Variation in QSFR metrics and key characteristics of global flexibility are related to the native-state X-ray crystal structure primarily through the hydrogen bond network. Furthermore, comparison of three TRX redox pairs reveals differences in thermodynamic response (i.e., relative melting point) and mechanical properties (i.e., backbone flexibility and cooperativity correlation) that are consistent with experimental data on thermal stabilities and NMR dynamical profiles. The results taken together demonstrate that small-scale structural variations are amplified into discernible global differences by propagating mechanical couplings through the H-bond network.
Baschet, Louise; Bourguignon, Sandrine; Marque, Sébastien; Durand-Zaleski, Isabelle; Teiger, Emmanuel; Wilquin, Fanny; Levesque, Karine
2016-01-01
To determine the cost-effectiveness of drug-eluting stents (DES) compared with bare-metal stents (BMS) in patients requiring percutaneous coronary intervention in France, using a recent meta-analysis including second-generation DES. A cost-effectiveness analysis was performed in the French National Health Insurance setting. Effectiveness data were taken from a meta-analysis of 76 randomised trials comprising 117 762 patient-years. The main effectiveness criterion was major cardiac event-free survival. Effectiveness and costs were modelled over a 5-year horizon using a three-state Markov model. Incremental cost-effectiveness ratios and a cost-effectiveness acceptability curve were calculated for a range of willingness-to-pay thresholds per year without major cardiac event gained. Deterministic and probabilistic sensitivity analyses were performed. Base-case results demonstrated that DES are dominant over BMS, with an increase in event-free survival and a cost reduction of €184, primarily due to fewer second revascularisations and an absence of myocardial infarction and stent thrombosis. These results are robust to uncertainty in both one-way deterministic and probabilistic sensitivity analyses. Using a cost-effectiveness threshold of €7000 per major cardiac event-free year gained, DES have a >95% probability of being cost-effective versus BMS. Following the decrease in DES prices, the development of new-generation DES, and recent meta-analysis results, DES can now be considered cost-effective in France regardless of selective indication, in line with European recommendations.
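The three-state Markov mechanics can be sketched in a few lines; the transition probabilities, per-state costs and discount rate below are invented placeholders, not the study's inputs.

    import numpy as np

    def run(trans, cost_year, cycles=5, discount=0.04):
        state = np.array([1.0, 0.0, 0.0])   # cohort starts event-free
        cost = event_free_years = 0.0
        for t in range(cycles):
            disc = (1 + discount) ** -t
            cost += disc * state @ cost_year
            event_free_years += disc * state[0]
            state = state @ trans           # advance one yearly cycle
        return cost, event_free_years

    # states: event-free / post-event / dead
    P_des = np.array([[0.95, 0.04, 0.01], [0.0, 0.97, 0.03], [0.0, 0.0, 1.0]])
    P_bms = np.array([[0.91, 0.08, 0.01], [0.0, 0.97, 0.03], [0.0, 0.0, 1.0]])
    c_des = np.array([300.0, 2000.0, 0.0])  # annual cost per state, EUR
    c_bms = np.array([250.0, 2000.0, 0.0])

    cost_d, efy_d = run(P_des, c_des)
    cost_b, efy_b = run(P_bms, c_bms)
    print(f"incremental cost = {cost_d - cost_b:+.0f} EUR, "
          f"incremental event-free years = {efy_d - efy_b:+.3f}")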
A real time sorting algorithm to time sort any deterministic time disordered data stream
NASA Astrophysics Data System (ADS)
Saini, J.; Mandal, S.; Chakrabarti, A.; Chattopadhyay, S.
2017-12-01
In new-generation high-intensity high-energy physics experiments, millions of free-streaming high-rate data sources are to be read out. Free-streaming data with associated time-stamps can only be controlled by thresholds, as no trigger information is available for the readout. These readouts are therefore prone to collecting large amounts of noise and unwanted data, and such experiments can have an output data rate several orders of magnitude higher than the useful signal data rate. It is therefore necessary to process the data online to extract useful information from the full data set. Without trigger information, pre-processing of the free-streaming data can only be done through time-based correlation among the data set. Multiple data sources have different path delays and bandwidth utilizations, and the unsorted merged data therefore require significant computational effort to sort in real time before analysis. The present work reports a new high-speed, scalable data-stream sorting algorithm with its architectural design, verified through Field Programmable Gate Array (FPGA) based hardware simulation. Realistic time-based simulated data, such as would be collected in a high-energy physics experiment, have been used to study the performance of the algorithm. The proposed algorithm uses parallel read-write blocks with added memory management and zero-suppression features to make it efficient for high-rate data streams. This algorithm is best suited for online data streams with deterministic time disorder/unsorting on FPGA-like hardware.
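Functionally, the task is a k-way merge of streams that are time-ordered within each link but interleaved across links; the heap-based sketch below is only a software analogue of the FPGA design, which uses parallel read-write memory blocks instead of a heap.

    import heapq

    def time_sort(streams):
        """streams: iterators yielding (timestamp, payload) in per-stream order;
        emits one globally time-ordered stream."""
        yield from heapq.merge(*streams, key=lambda msg: msg[0])

    s1 = iter([(1, "a"), (4, "b"), (9, "c")])
    s2 = iter([(2, "x"), (3, "y"), (8, "z")])
    print(list(time_sort([s1, s2])))
    # [(1,'a'), (2,'x'), (3,'y'), (4,'b'), (8,'z'), (9,'c')]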
Inverse design of bulk morphologies in block copolymers using particle swarm optimization
NASA Astrophysics Data System (ADS)
Khadilkar, Mihir; Delaney, Kris; Fredrickson, Glenn
Multiblock polymers are a versatile platform for creating a large range of nanostructured materials with novel morphologies and properties. However, achieving desired structures or property combinations is difficult due to a vast design space comprised of parameters including monomer species, block sequence, block molecular weights and dispersity, copolymer architecture, and binary interaction parameters. Navigating through such vast design spaces to achieve an optimal formulation for a target structure or property set requires an efficient global optimization tool wrapped around a forward simulation technique such as self-consistent field theory (SCFT). We report on such an inverse design strategy utilizing particle swarm optimization (PSO) as the global optimizer and SCFT as the forward prediction engine. To avoid metastable states in forward prediction, we utilize pseudo-spectral variable cell SCFT initiated from a library of defect free seeds of known block copolymer morphologies. We demonstrate that our approach allows for robust identification of block copolymers and copolymer alloys that self-assemble into a targeted structure, optimizing parameters such as block fractions, blend fractions, and Flory chi parameters.
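A bare-bones particle swarm optimizer of the kind used as the outer loop might look as follows, here minimizing a stand-in quadratic objective; in the actual workflow the objective would wrap an SCFT simulation of the candidate copolymer, and the search space would be the block fractions and interaction parameters named above.

    import numpy as np

    def pso(f, dim=2, n=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=4):
        rng = np.random.default_rng(seed)
        x = rng.uniform(-5, 5, (n, dim))    # particle positions
        v = np.zeros((n, dim))              # velocities
        pbest = x.copy()
        pbest_f = np.apply_along_axis(f, 1, x)
        g = pbest[pbest_f.argmin()].copy()  # global best position
        for _ in range(iters):
            r1, r2 = rng.random((2, n, dim))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
            x = x + v
            fx = np.apply_along_axis(f, 1, x)
            improved = fx < pbest_f
            pbest[improved], pbest_f[improved] = x[improved], fx[improved]
            g = pbest[pbest_f.argmin()].copy()
        return g, pbest_f.min()

    print(pso(lambda p: (p[0] - 1) ** 2 + (p[1] + 2) ** 2))  # optimum near (1, -2)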
Combined loading criteria influence on structural performance
NASA Technical Reports Server (NTRS)
Kuchta, B. J.; Sealey, D. M.; Howell, L. J.
1972-01-01
An investigation was conducted to determine the influence of combined loading criteria on the space shuttle structural performance. The study consisted of four primary phases: Phase (1) The determination of the sensitivity of structural weight to various loading parameters associated with the space shuttle. Phase (2) The determination of the sensitivity of structural weight to various levels of loading parameter variability and probability. Phase (3) The determination of shuttle mission loading parameters variability and probability as a function of design evolution and the identification of those loading parameters where inadequate data exists. Phase (4) The determination of rational methods of combining both deterministic time varying and probabilistic loading parameters to provide realistic design criteria. The study results are presented.
Field-free deterministic ultrafast creation of magnetic skyrmions by spin-orbit torques
NASA Astrophysics Data System (ADS)
Büttner, Felix; Lemesh, Ivan; Schneider, Michael; Pfau, Bastian; Günther, Christian M.; Hessing, Piet; Geilhufe, Jan; Caretta, Lucas; Engel, Dieter; Krüger, Benjamin; Viefhaus, Jens; Eisebitt, Stefan; Beach, Geoffrey S. D.
2017-11-01
Magnetic skyrmions are stabilized by a combination of external magnetic fields, stray field energies, higher-order exchange interactions and the Dzyaloshinskii-Moriya interaction (DMI). The last favours homochiral skyrmions, whose motion is driven by spin-orbit torques and is deterministic, which makes systems with a large DMI relevant for applications. Asymmetric multilayers of non-magnetic heavy metals with strong spin-orbit interactions and transition-metal ferromagnetic layers provide a large and tunable DMI. Also, the non-magnetic heavy metal layer can inject a vertical spin current with transverse spin polarization into the ferromagnetic layer via the spin Hall effect. This leads to torques that can be used to switch the magnetization completely in out-of-plane magnetized ferromagnetic elements, but the switching is deterministic only in the presence of a symmetry-breaking in-plane field. Although spin-orbit torques led to domain nucleation in continuous films and to stochastic nucleation of skyrmions in magnetic tracks, no practical means to create individual skyrmions controllably in an integrated device design at a selected position has been reported yet. Here we demonstrate that sub-nanosecond spin-orbit torque pulses can generate single skyrmions at custom-defined positions in a magnetic racetrack deterministically using the same current path as used for the shifting operation. The effect of the DMI implies that no external in-plane magnetic fields are needed for this aim. This implementation exploits a defect, such as a constriction in the magnetic track, that can serve as a skyrmion generator. The concept is applicable to any track geometry, including three-dimensional designs.
NASA Astrophysics Data System (ADS)
Baumeler, Ämin; Feix, Adrien; Wolf, Stefan
2014-10-01
Quantum theory in a global spacetime gives rise to nonlocal correlations, which cannot be explained causally in a satisfactory way; this motivates the study of theories with reduced global assumptions. Oreshkov, Costa, and Brukner [Nat. Commun. 3, 1092 (2012), 10.1038/ncomms2076] proposed a framework in which quantum theory is valid locally but where, at the same time, no global spacetime, i.e., predefined causal order, is assumed beyond the absence of logical paradoxes. It was shown for the two-party case, however, that a global causal order always emerges in the classical limit. Quite naturally, it has been conjectured that the same also holds in the multiparty setting. We show that, counter to this belief, classical correlations locally compatible with classical probability theory exist that allow for deterministic signaling between three or more parties incompatible with any predefined causal order.
NASA Astrophysics Data System (ADS)
Meneghello, Gianluca; Beyhaghi, Pooriya; Bewley, Thomas
2016-11-01
The identification of an optimized hydrofoil shape depends on an accurate characterization of both its geometry and the incoming turbulent free-stream flow. We analyze this dependence using the computationally inexpensive vortex lattice model implemented in AVL, coupled with the recently developed global derivative-free optimization algorithm implemented in Δ-DOGS. Particular attention is given to the effect of the free-stream turbulence level, modeled as a change in the viscous drag coefficients, on the optimized values of the parameters describing the three-dimensional shape of the foil. Because the simplicity of AVL, when contrasted with more complex and computationally expensive LES or RANS models, may cast doubt on its usefulness, its validity and limitations are discussed by comparison with water-tank measurements, again taking into account the effect of uncertainty in the free-stream characterization.
Hyperchaotic Dynamics for Light Polarization in a Laser Diode
NASA Astrophysics Data System (ADS)
Bonatto, Cristian
2018-04-01
It is shown that highly random-like behavior of the light polarization states at the output of a free-running laser diode, covering the whole Poincaré sphere, arises from a fully deterministic nonlinear process, characterized by hyperchaotic dynamics of two polarization modes nonlinearly coupled with the semiconductor medium inside the optical cavity. A number of statistical distributions were found to describe the deterministic data of the low-dimensional nonlinear flow: a lognormal distribution for the light intensity; Gaussian distributions for the electric field components and electron densities; and Rice and Rayleigh distributions, and Weibull and negative exponential distributions, for the modulus and intensity of the orthogonal linear components of the electric field, respectively. The presented results could be relevant for the generation of compact light-source devices for use in low-dimensional optical hyperchaos-based applications.
On optimal control of linear systems in the presence of multiplicative noise
NASA Technical Reports Server (NTRS)
Joshi, S. M.
1976-01-01
This correspondence considers the problem of optimal regulator design for discrete-time linear systems subjected to white state-dependent and control-dependent noise, in addition to additive white noise in the input and the observations. A pseudo-deterministic problem is first defined in which multiplicative and additive input disturbances are present but noise-free measurements of the complete state vector are available. This problem is solved via discrete dynamic programming. The problem is then formulated in which the number of measurements is less than the number of state variables and the measurements are contaminated with state-dependent noise. The inseparability of control and estimation is brought into focus, and an 'enforced separation' solution is obtained via heuristic reasoning, in which the control gains are shown to be the same as those in the pseudo-deterministic problem. An optimal linear state estimator is given in order to implement the controller.
Bohmian Photonics for Independent Control of the Phase and Amplitude of Waves
NASA Astrophysics Data System (ADS)
Yu, Sunkyu; Piao, Xianji; Park, Namkyoo
2018-05-01
The de Broglie-Bohm theory is one of the nonstandard interpretations of quantum phenomena that focuses on reintroducing definite positions of particles, in contrast to the indeterminism of the Copenhagen interpretation. In spite of intense debate on its measurement and nonlocality, the de Broglie-Bohm theory based on the reformulation of the Schrödinger equation allows for the description of quantum phenomena as deterministic trajectories embodied in the modified Hamilton-Jacobi mechanics. Here, we apply the Bohmian reformulation to Maxwell's equations to achieve the independent manipulation of optical phase evolution and energy confinement. After establishing the deterministic design method based on the Bohmian approach, we investigate the condition of optical materials enabling scattering-free light with bounded or random phase evolutions. We also demonstrate a unique form of optical confinement and annihilation that preserves the phase information of incident light. Our separate tailoring of wave information extends the notion and range of artificial materials.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liebermeister, Lars; Petersen, Fabian; Münchow, Asmus v.
2014-01-20
A diamond nano-crystal hosting a single nitrogen vacancy (NV) center is optically selected with a confocal scanning microscope and positioned deterministically onto the subwavelength-diameter waist of a tapered optical fiber (TOF) with the help of an atomic force microscope. Based on this nano-manipulation technique, we experimentally demonstrate the evanescent coupling of single fluorescence photons emitted by a single NV center to the guided mode of the TOF. By comparing photon count rates of the fiber-guided and the free-space modes and with the help of numerical finite-difference time-domain simulations, we determine a lower and an upper bound for the coupling efficiency of (9.5 ± 0.6)% and (10.4 ± 0.7)%, respectively. Our results are a promising starting point for the future integration of single-photon sources into photonic quantum networks and for applications in quantum information science.
Noise stabilization of self-organized memories.
Povinelli, M L; Coppersmith, S N; Kadanoff, L P; Nagel, S R; Venkataramani, S C
1999-05-01
We investigate a nonlinear dynamical system which "remembers" preselected values of a system parameter. The deterministic version of the system can encode many parameter values during a transient period, but in the limit of long times, almost all of them are forgotten. Here we show that a certain type of stochastic noise can stabilize multiple memories, enabling many parameter values to be encoded permanently. We present analytic results that provide insight both into the memory formation and into the noise-induced memory stabilization. The relevance of our results to experiments on the charge-density wave material NbSe3 is discussed.
‘Why should I care?’ Challenging free will attenuates neural reaction to errors
Pourtois, Gilles; Brass, Marcel
2015-01-01
Whether human beings have free will has been a philosophical question for centuries. The debate about free will has recently entered the public arena through mass media and newspaper articles commenting on scientific findings that leave little to no room for free will. Previous research has shown that encouraging such a deterministic perspective influences behavior, namely by promoting cursory and antisocial behavior. Here we propose that such behavioral changes may, at least partly, stem from a more basic neurocognitive process related to response monitoring, namely a reduced error detection mechanism. Our results show that the error-related negativity, a neural marker of error detection, was reduced in individuals led to disbelieve in free will. This finding shows that reducing the belief in free will has a specific impact on error detection mechanisms. More generally, it suggests that abstract beliefs about intentional control can influence basic and automatic processes related to action control. PMID:24795441
Using a derivative-free optimization method for multiple solutions of inverse transport problems
Armstrong, Jerawan C.; Favorite, Jeffrey A.
2016-01-14
Identifying unknown components of an object that emits radiation is an important problem for national and global security. Radiation signatures measured from an object of interest can be used to infer object parameter values that are not known. This problem is called an inverse transport problem. An inverse transport problem may have multiple solutions, and the most widely used approach for its solution is an iterative optimization method. This paper proposes a stochastic derivative-free global optimization algorithm to find multiple solutions of inverse transport problems. The algorithm is an extension of a multilevel single linkage (MLSL) method in which a mesh adaptive direct search (MADS) algorithm is incorporated into the local phase. Furthermore, numerical test cases using uncollided fluxes of discrete gamma-ray lines are presented to show the performance of this new algorithm.
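The multistart flavor of MLSL can be caricatured in a few lines: sample start points, run a derivative-free local search from each (scipy's Nelder-Mead standing in for MADS), and keep the distinct minima found; the linkage and clustering rules of the full method, and the transport physics, are omitted.

    import numpy as np
    from scipy.optimize import minimize

    def f(p):
        """Stand-in objective with two minima, at x = -1 and x = +1."""
        return (p[0] ** 2 - 1.0) ** 2

    rng = np.random.default_rng(5)
    minima = []
    for x0 in rng.uniform(-2.0, 2.0, 10):
        res = minimize(f, [x0], method="Nelder-Mead")
        if not any(abs(res.x[0] - m) < 1e-3 for m in minima):
            minima.append(res.x[0])          # record each distinct solution
    print(sorted(round(m, 3) for m in minima))  # expect approximately [-1.0, 1.0]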
Parametric study using modal analysis of a bi-material plate with defects
NASA Astrophysics Data System (ADS)
Esola, S.; Bartoli, I.; Horner, S. E.; Zheng, J. Q.; Kontsos, A.
2015-03-01
Global vibrational method feasibility as a non-destructive inspection tool for multi-layered composites is evaluated using a simulated parametric study approach. A finite element model of a composite consisting of two, isotropic layers of dissimilar materials and a third, thin isotropic layer of adhesive is constructed as the representative test subject. Next, artificial damage is inserted according to systematic variations of the defect morphology parameters. A free-vibrational modal analysis simulation is executed for pristine and damaged plate conditions. Finally, resultant mode shapes and natural frequencies are extracted, compared and analyzed for trends. Though other defect types may be explored, the focus of this research is on interfacial delamination and its effects on the global, free-vibrational behavior of a composite plate. This study is part of a multi-year research effort conducted for the U.S. Army Program Executive Office - Soldier.
Maljovec, D.; Liu, S.; Wang, B.; ...
2015-07-14
Here, dynamic probabilistic risk assessment (DPRA) methodologies couple system simulator codes (e.g., RELAP and MELCOR) with simulation controller codes (e.g., RAVEN and ADAPT). Whereas system simulator codes model system dynamics deterministically, simulation controller codes introduce both deterministic (e.g., system control logic and operating procedures) and stochastic (e.g., component failures and parameter uncertainties) elements into the simulation. Typically, a DPRA is performed by sampling values of a set of parameters and simulating the system behavior for that specific set of parameter values. For complex systems, a major challenge in using DPRA methodologies is to analyze the large number of scenarios generated, where clustering techniques are typically employed to better organize and interpret the data. In this paper, we focus on the analysis of two nuclear simulation datasets that are part of the risk-informed safety margin characterization (RISMC) boiling water reactor (BWR) station blackout (SBO) case study. We provide domain experts with a software tool that encodes traditional and topological clustering techniques within an interactive analysis and visualization environment for understanding the structure of such high-dimensional nuclear simulation datasets. We demonstrate through our case study that the two types of clustering techniques complement each other for an enhanced structural understanding of the data.
Sattar, Ahmed M.A.; Raslan, Yasser M.
2013-01-01
While the construction of the Aswan High Dam (AHD) has stopped recurrent flooding events, the River Nile is still subject to low-intensity flood waves resulting from the controlled release of water from the dam reservoir. Analysis of the flow released from the New Naga-Hammadi Barrage, located 3460 km downstream of AHD, indicated an increase in the magnitude of floods released from the barrage in the past 10 years. A 2D numerical mobile-bed model is used to investigate the possible morphological changes downstream of the Naga-Hammadi Barrage under such higher flood releases. Monte Carlo simulation analysis (MCS) is applied to the deterministic results of the 2D model to account for and assess the uncertainty in sediment parameters and formulations, in addition to the scarcity of field measurements. Results showed that the predicted volume of erosion had the highest uncertainty and variation from the deterministic run, while the navigation velocity had the least uncertainty. Furthermore, the error budget method is used to rank the various sediment parameters by their contribution to the total prediction uncertainty. It is found that suspended sediment contributes more to the output uncertainty than the other sediment parameters, followed by bed load, whose contribution is an order of magnitude smaller. PMID:25685476
Modeling seasonal measles transmission in China
NASA Astrophysics Data System (ADS)
Bai, Zhenguo; Liu, Dan
2015-08-01
A discrete-time deterministic measles model with a periodic transmission rate is formulated and studied. The basic reproduction number R0 is defined and used as the threshold parameter in determining the dynamics of the model. It is shown that the disease will die out if R0 < 1, and the disease will persist in the population if R0 > 1. Parameters in the model are estimated on the basis of demographic and epidemiological data. Numerical simulations are presented to describe the seasonal fluctuation of measles infection in China.
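A minimal discrete-time analogue is easy to write down; the seasonally forced SIR map below uses invented rates with a daily step and omits the paper's fitted parameters and structure.

    import numpy as np

    def step(S, I, R, t, beta0=0.3, amp=0.3, gamma=0.1, mu=1.0 / (70 * 365)):
        """One daily update of a seasonally forced SIR map (illustrative rates)."""
        N = S + I + R
        beta = beta0 * (1.0 + amp * np.cos(2.0 * np.pi * t / 365.0))  # periodic rate
        new_inf = beta * S * I / N
        return (S + mu * N - new_inf - mu * S,
                I + new_inf - (gamma + mu) * I,
                R + gamma * I - mu * R)

    S, I, R = 999_000.0, 1_000.0, 0.0
    for t in range(5 * 365):
        S, I, R = step(S, I, R, t)
    # here R0 ~ beta0 / (gamma + mu) ~ 3 > 1, so the infection persists
    print(f"after 5 years: S={S:,.0f}  I={I:,.0f}  R={R:,.0f}")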
NASA Technical Reports Server (NTRS)
Vanlunteren, A.; Stassen, H. G.
1973-01-01
Parameter estimation techniques are discussed with emphasis on unbiased estimates in the presence of noise. A distinction between open- and closed-loop systems is made. A method is given based on the application of external forcing functions consisting of a sum of sinusoids; this method is thus based on the estimation of Fourier coefficients and is applicable to models with poles and zeros in open- and closed-loop systems.
NASA Astrophysics Data System (ADS)
Burgos, C.; Cortés, J.-C.; Shaikhet, L.; Villanueva, R.-J.
2018-11-01
First, we propose a deterministic age-structured epidemiological model to study the diffusion of e-commerce in Spain. Afterwards, we determine the parameters (death, birth and growth rates) of the underlying demographic model as well as the parameters (transmission of the use of e-commerce rates) of the proposed epidemiological model that best fit real data retrieved from the Spanish National Statistical Institute. Motivated by the two following facts: first the dynamics of acquiring the use of a new technology as e-commerce is mainly driven by the feedback after interacting with our peers (family, friends, mates, mass media, etc.), hence having a certain delay, and second the inherent uncertainty of sampled real data and the social complexity of the phenomena under analysis, we introduce aftereffect and stochastic perturbations in the initial deterministic model. This leads to a delayed stochastic model for e-commerce. We then investigate sufficient conditions in order to guarantee the stability in probability of the equilibrium point of the dynamic e-commerce delayed stochastic model. Our theoretical findings are numerically illustrated using real data.
NASA Astrophysics Data System (ADS)
Wang, Ming-Ming; Qu, Zhi-Guo
2016-11-01
Quantum secure communication brings a new direction for information security. As an important component of quantum secure communication, deterministic joint remote state preparation (DJRSP) can securely transmit a quantum state with 100% success probability. In this paper, we study how the efficiency of DJRSP is affected when qubits involved in the protocol are subjected to noise or decoherence. Taking a GHZ-based DJRSP scheme as an example, we study all types of noise usually encountered in real-world implementations of quantum communication protocols, i.e., the bit-flip, phase-flip (phase-damping), depolarizing and amplitude-damping noise. Our study shows that the fidelity of the output state depends on the phase factor, the amplitude factor and the noise parameter in the bit-flip noise, while the fidelity depends only on the amplitude factor and the noise parameter in the other three types of noise. The receiver will also obtain different output states depending on the first preparer's measurement result under amplitude-damping noise. Our results will be helpful for improving quantum secure communication in real implementations.
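The qualitative finding for amplitude-damping noise can be checked on a single qubit with a generic Kraus-operator calculation (this is the standard noise channel, not the GHZ-based DJRSP protocol itself): the output fidelity varies with the amplitude factor and the noise strength, but not with the relative phase.

```python
import numpy as np

# Fidelity of a|0> + b|1> after amplitude damping with strength p,
# computed from the standard Kraus operators of the channel.
def fidelity_after_damping(theta, phi, p):
    a, b = np.cos(theta), np.sin(theta) * np.exp(1j * phi)
    psi = np.array([a, b])
    K0 = np.array([[1, 0], [0, np.sqrt(1 - p)]])
    K1 = np.array([[0, np.sqrt(p)], [0, 0]])
    rho = psi[:, None] * psi.conj()[None, :]              # input density matrix
    rho_out = K0 @ rho @ K0.conj().T + K1 @ rho @ K1.conj().T
    return np.real(psi.conj() @ rho_out @ psi)

for phi in (0.0, np.pi / 3, np.pi):   # varying the phase leaves F unchanged
    print(round(fidelity_after_damping(0.6, phi, 0.2), 6))
```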
Chaotic behavior in the locomotion of Amoeba proteus.
Miyoshi, H; Kagawa, Y; Tsuchiya, Y
2001-01-01
The locomotion of Amoeba proteus has been investigated with algorithms, developed in the field of nonlinear science, that evaluate the correlation dimension and the Lyapunov spectrum. These parameters indicate whether the random behavior of a system is stochastic or deterministic. For the analysis of the nonlinear parameters, n-dimensional time-delayed vectors were reconstructed from a time series of the periphery and area of A. proteus images captured with a charge-coupled-device camera, which characterize its random motion. The correlation dimension analysis showed that the random motion of A. proteus is governed by only 3-4 macrovariables, even though the cell is a complex system composed of many degrees of freedom. Furthermore, the analysis of the Lyapunov spectrum showed that its largest exponent takes positive values. These results indicate that the random behavior of A. proteus is chaotic, deterministic motion on a low-dimensional attractor. For the elucidation of cell locomotion, it may be important to take account of nonlinear interactions among a small number of dynamical processes, such as the sol-gel transformation, the cytoplasmic streaming, and the related chemical reactions occurring in the cell.
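A compact sketch of this analysis pipeline (time-delay embedding plus the Grassberger-Procaccia correlation sum), run here on a synthetic logistic-map series standing in for the measured periphery/area signals; the slope estimate should level off near 1 for this map:

```python
import numpy as np

# Generate a chaotic surrogate series (logistic map at r = 4).
x = np.empty(1000)
x[0] = 0.4
for k in range(999):
    x[k + 1] = 4.0 * x[k] * (1.0 - x[k])

def embed(series, m, tau=1):
    # n-dimensional time-delayed vectors reconstructed from the series
    n = len(series) - (m - 1) * tau
    return np.column_stack([series[i * tau:i * tau + n] for i in range(m)])

def corr_sum(X, r):
    # fraction of point pairs closer than r (correlation sum C(r))
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    iu = np.triu_indices(len(X), k=1)
    return np.mean(d[iu] < r)

for m in (2, 3, 4):                               # embedding dimensions
    X = embed(x, m)
    c1, c2 = corr_sum(X, 0.05), corr_sum(X, 0.2)
    slope = np.log(c2 / c1) / np.log(0.2 / 0.05)  # local scaling exponent
    print(f"m={m}: correlation-dimension estimate ~ {slope:.2f}")
```

A dimension estimate that saturates as the embedding dimension m grows is the signature of low-dimensional determinism that the study looks for.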
Parameter optimization for reproducible cardiac 1H-MR spectroscopy at 3 Tesla.
de Heer, Paul; Bizino, Maurice B; Lamb, Hildo J; Webb, Andrew G
2016-11-01
To optimize data acquisition parameters in cardiac proton MR spectroscopy, and to evaluate the intra- and intersession variability in myocardial triglyceride content. Data acquisition parameters at 3 Tesla (T) were optimized and reproducibility measured using, in total, 49 healthy subjects. The signal-to-noise ratio (SNR) and the variance in metabolite amplitude between averages were measured for: (i) global versus local power optimization; (ii) static magnetic field (B0) shimming performed during free-breathing or within breathholds; (iii) post R-wave peak measurement times between 50 and 900 ms; (iv) without respiratory compensation, with breathholds, and with navigator triggering; and (v) frequency-selective excitation, Chemical Shift Selective (CHESS), and Multiply Optimized Insensitive Suppression Train (MOIST) water suppression techniques. Using the optimized parameters, intra- and intersession reproducibility of the myocardial triglyceride content was measured. Two cardiac proton spectra were acquired with the same parameters and compared (intrasession reproducibility), after which the subject was removed from and repositioned in the scanner and a third spectrum was acquired, which was compared with the first measurement (intersession reproducibility). Local power optimization increased SNR on average by 22% compared with global power optimization (P = 0.0002). The average linewidth was not significantly different for pencil-beam B0 shimming using free-breathing or breathholds (19.1 Hz versus 17.5 Hz; P = 0.15). The highest signal stability occurred at a cardiac trigger delay around 240 ms. The mean amplitude variation was significantly lower for breathholds versus free-breathing (P = 0.03), for navigator triggering versus free-breathing (P = 0.03), and for navigator triggering versus breathholds (P = 0.02). The mean residual water signal using CHESS (1.1%, P = 0.01) or MOIST (0.7%, P = 0.01) water suppression was significantly lower than using frequency-selective excitation water suppression (7.0%). Using the optimized parameters, intrasession limits of agreement for the myocardial triglyceride content of -0.11% to +0.04%, and intersession limits of -0.15% to +0.9%, were achieved. The coefficient of variation was 5% for the intrasession reproducibility and 6.5% for the intersession reproducibility. Using approaches designed to optimize SNR and minimize the variation in inter-average signal intensities and frequencies/phases, a protocol was developed to perform cardiac MR spectroscopy on a clinical 3T system with high reproducibility. J. Magn. Reson. Imaging 2016;44:1151-1158. © 2016 International Society for Magnetic Resonance in Medicine.
Patients' understanding of and responses to multiplex genetic susceptibility test results.
Kaphingst, Kimberly A; McBride, Colleen M; Wade, Christopher; Alford, Sharon Hensley; Reid, Robert; Larson, Eric; Baxevanis, Andreas D; Brody, Lawrence C
2012-07-01
Examination of patients' responses to direct-to-consumer genetic susceptibility tests is needed to inform clinical practice. This study examined patients' recall and interpretation of, and responses to, genetic susceptibility test results provided directly by mail. This observational study had three prospective assessments (before testing, 10 days after receiving results, and 3 months later). Participants were 199 patients aged 25-40 years who received free genetic susceptibility testing for eight common health conditions. More than 80% of the patients correctly recalled their results for the eight health conditions. Patients were unlikely to interpret genetic results as deterministic of health outcomes (mean = 6.0, s.d. = 0.8 on a scale of 1-7, 1 indicating strongly deterministic). In multivariate analysis, patients with the least deterministic interpretations were white (P = 0.0098), more educated (P = 0.0093), and least confused by results (P = 0.001). Only 1% talked about their results with a provider. Findings suggest that most patients will correctly recall their results and will not interpret genetics as the sole cause of diseases. The subset of those confused by results could benefit from consultation with a health-care provider, which could emphasize that health habits currently are the best predictors of risk. Providers could leverage patients' interest in genetic tests to encourage behavior changes to reduce disease risk.
On Transform Domain Communication Systems under Spectrum Sensing Mismatch: A Deterministic Analysis.
Jin, Chuanxue; Hu, Su; Huang, Yixuan; Luo, Qu; Huang, Dan; Li, Yi; Gao, Yuan; Cheng, Shaochi
2017-07-08
Towards the era of mobile Internet and the Internet of Things (IoT), numerous sensors and devices are being introduced and interconnected. To support such an amount of data traffic, traditional wireless communication technologies are facing challenges both in terms of the increasing shortage of spectrum resources and massive multiple access. The transform-domain communication system (TDCS) is considered as an alternative multiple access system, with a main focus on 5G and mobile IoT. However, previous studies of TDCS assume that the transceiver has global spectrum information, without consideration of spectrum sensing mismatch (SSM). In this paper, we present a deterministic analysis of TDCS systems under arbitrary given spectrum sensing scenarios, especially the influence of the SSM pattern on the signal-to-noise ratio (SNR) performance. Simulation results show that an arbitrary SSM pattern can lead to inferior bit error rate (BER) performance.
Converting differential-equation models of biological systems to membrane computing.
Muniyandi, Ravie Chandren; Zin, Abdullah Mohd; Sanders, J W
2013-12-01
This paper presents a method to convert the deterministic, continuous representation of a biological system by ordinary differential equations into a non-deterministic, discrete membrane computation. The dynamics of the membrane computation is governed by rewrite rules operating at certain rates. This has the advantage of applying accurately to small systems and of expressing rates of change that are determined locally, by region, but not necessarily globally. Such spatial information augments the standard differential-equation approach to provide a more realistic model. A biological case study of the ligand-receptor network of protein TGF-β is used to validate the effectiveness of the conversion method. It demonstrates the sense in which the behaviours and properties of the system are better preserved in the membrane computing model, suggesting that the proposed conversion method may prove useful for biological systems in particular. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Romps, David M.
2016-03-01
Convective entrainment is a process that is poorly represented in existing convective parameterizations. By many estimates, convective entrainment is the leading source of error in global climate models. As a potential remedy, an Eulerian implementation of the Stochastic Parcel Model (SPM) is presented here as a convective parameterization that treats entrainment in a physically realistic and computationally efficient way. Drawing on evidence that convecting clouds comprise air parcels subject to Poisson-process entrainment events, the SPM calculates the deterministic limit of an infinite number of such parcels. For computational efficiency, the SPM groups parcels at each height by their purity, which is a measure of their total entrainment up to that height. This reduces the calculation of convective fluxes to a sequence of matrix multiplications. The SPM is implemented in a single-column model and compared with a large-eddy simulation of deep convection.
Stability analysis and application of a mathematical cholera model.
Liao, Shu; Wang, Jin
2011-07-01
In this paper, we conduct a dynamical analysis of the deterministic cholera model proposed in [9]. We study the stability of both the disease-free and endemic equilibria so as to explore the complex epidemic and endemic dynamics of the disease. We demonstrate a real-world application of this model by investigating the recent cholera outbreak in Zimbabwe. Meanwhile, we present numerical simulation results to verify the analytical predictions.
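For readers unfamiliar with cholera models of this family, a generic SIR-plus-pathogen system in the spirit of Codeço-type formulations can be integrated in a few lines; the parameters below are illustrative assumptions, not the values analyzed in the paper:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy cholera-type dynamics: susceptibles S, infectives I, and a
# water-borne bacterial concentration B with saturating ingestion.
def rhs(t, y, beta=0.75, kappa=1e4, gamma=0.25, mu=1 / (43 * 365),
        xi=10.0, delta=0.33, N=1e4):
    S, I, B = y
    infection = beta * S * B / (kappa + B)   # saturating ingestion term
    dS = mu * (N - S) - infection            # births/deaths and new infections
    dI = infection - (gamma + mu) * I        # recovery and mortality
    dB = xi * I - delta * B                  # shedding and bacterial decay
    return [dS, dI, dB]

sol = solve_ivp(rhs, (0, 365), [1e4 - 1, 1, 0])
print("I(365) ~", sol.y[1, -1])
```

Stability of the disease-free versus endemic equilibria in such systems hinges on a basic reproduction number built from the same rate constants, which is the object of the paper's analysis.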
Non-Local Diffusion of Energetic Electrons during Solar Flares
NASA Astrophysics Data System (ADS)
Bian, N. H.; Emslie, G.; Kontar, E.
2017-12-01
The transport of the energy contained in suprathermal electrons in solar flares plays a key role in our understanding of many aspects of flare physics, from the spatial distributions of hard X-ray emission and energy deposition in the ambient atmosphere to global energetics. Historically the transport of these particles has been largely treated through a deterministic approach, in which first-order secular energy loss to electrons in the ambient target is treated as the dominant effect, with second-order diffusive terms (in both energy and angle) generally being either treated as a small correction or even neglected. Here, we critically analyze this approach, and we show that spatial diffusion through pitch-angle scattering necessarily plays a very significant role in the transport of electrons. We further show that a satisfactory treatment of the diffusion process requires consideration of non-local effects, so that the electron flux depends not just on the local gradient of the electron distribution function but on the value of this gradient within an extended region encompassing a significant fraction of a mean free path. Our analysis applies generally to pitch-angle scattering by a variety of mechanisms, from Coulomb collisions to turbulent scattering. We further show that the spatial transport of electrons along the magnetic field of a flaring loop can be modeled as a Continuous Time Random Walk with velocity-dependent probability distribution functions of jump sizes and occurrences, both of which can be expressed in terms of the scattering mean free path.
A Bayesian hierarchical model with novel prior specifications for estimating HIV testing rates
An, Qian; Kang, Jian; Song, Ruiguang; Hall, H. Irene
2016-01-01
Human immunodeficiency virus (HIV) infection is a severe infectious disease actively spreading globally, and acquired immunodeficiency syndrome (AIDS) is an advanced stage of HIV infection. The HIV testing rate, that is, the probability that an AIDS-free HIV-infected person seeks a test for HIV during a particular time interval, given that no previous positive test has been obtained prior to the start of that interval, is an important parameter for public health. In this paper, we propose a Bayesian hierarchical model with two levels of hierarchy to estimate the HIV testing rate using annual AIDS and AIDS-free HIV diagnoses data. At level one, we model the latent number of HIV infections for each year using a Poisson distribution with the intensity parameter representing the HIV incidence rate. At level two, the annual numbers of AIDS and AIDS-free HIV diagnosed cases and all undiagnosed cases stratified by the HIV infections at different years are modeled using a multinomial distribution with parameters including the HIV testing rate. We propose a new class of priors for the HIV incidence rate and HIV testing rate taking into account the temporal dependence of these parameters to improve the estimation accuracy. We develop an efficient posterior computation algorithm based on the adaptive rejection metropolis sampling technique. We demonstrate our model using simulation studies and the analysis of the national HIV surveillance data in the USA. PMID:26567891
Monte Carlo modeling of spatial coherence: free-space diffraction
Fischer, David G.; Prahl, Scott A.; Duncan, Donald D.
2008-01-01
We present a Monte Carlo method for propagating partially coherent fields through complex deterministic optical systems. A Gaussian copula is used to synthesize a random source with an arbitrary spatial coherence function. Physical optics and Monte Carlo predictions of the first- and second-order statistics of the field are shown for coherent and partially coherent sources for free-space propagation, imaging using a binary Fresnel zone plate, and propagation through a limiting aperture. Excellent agreement between the physical optics and Monte Carlo predictions is demonstrated in all cases. Convergence criteria are presented for judging the quality of the Monte Carlo predictions. PMID:18830335
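A stripped-down sketch of the underlying idea: draw correlated complex Gaussian source realizations whose ensemble-averaged mutual intensity reproduces a prescribed Gaussian coherence function, then verify the coherence by Monte Carlo averaging. The Cholesky factorization here is a stand-in; the paper's Gaussian-copula construction is more general.

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma_c = 64, 0.2                     # grid points, coherence width (assumed)
x = np.linspace(-1, 1, n)
C = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * sigma_c ** 2))  # target coherence
L = np.linalg.cholesky(C + 1e-10 * np.eye(n))                     # factor the coherence

trials = 5000
J = np.zeros((n, n), dtype=complex)
for _ in range(trials):
    w = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
    z = L @ w                            # one random source realization
    J += np.outer(z, z.conj())
J /= trials                              # Monte Carlo estimate of mutual intensity

print("max |J - C| =", np.abs(J - C).max())  # shrinks like 1/sqrt(trials)
```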
NASA Astrophysics Data System (ADS)
Yan, Y.; Barth, A.; Beckers, J. M.; Candille, G.; Brankart, J. M.; Brasseur, P.
2015-07-01
Sea surface height, sea surface temperature, and temperature profiles at depth collected between January and December 2005 are assimilated into a realistic eddy permitting primitive equation model of the North Atlantic Ocean using the Ensemble Kalman Filter. Sixty ensemble members are generated by adding realistic noise to the forcing parameters related to the temperature. The ensemble is diagnosed and validated by comparison between the ensemble spread and the model/observation difference, as well as by rank histograms before the assimilation experiments. An incremental analysis update scheme is applied in order to reduce spurious oscillations due to the model state correction. The results of the assimilation are assessed according to both deterministic and probabilistic metrics with independent/semi-independent observations. For deterministic validation, the ensemble means, together with the ensemble spreads, are compared to the observations in order to diagnose the ensemble distribution properties in a deterministic way. For probabilistic validation, the continuous ranked probability score (CRPS) is used to evaluate the ensemble forecast system according to reliability and resolution. The reliability is further decomposed into bias and dispersion by the reduced centered random variable (RCRV) score in order to investigate the reliability properties of the ensemble forecast system. The improvement of the assimilation is demonstrated using these validation metrics. Finally, the deterministic validation and the probabilistic validation are analyzed jointly. The consistency and complementarity between both validations are highlighted.
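The probabilistic-validation step can be illustrated with the sample CRPS of an ensemble against a single observation, using the standard kernel (energy) form; the ensemble below is synthetic, not the sixty-member ocean ensemble:

```python
import numpy as np

def crps(ensemble, obs):
    # Kernel form of the sample CRPS: mean distance to the observation
    # minus half the mean pairwise distance within the ensemble.
    x = np.asarray(ensemble, dtype=float)
    term1 = np.mean(np.abs(x - obs))
    term2 = 0.5 * np.mean(np.abs(x[:, None] - x[None, :]))
    return term1 - term2

rng = np.random.default_rng(5)
members = rng.normal(20.1, 0.4, 60)   # e.g., 60 SST forecasts in deg C (synthetic)
print("CRPS =", crps(members, 20.3))  # lower is better; 0 for a perfect forecast
```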
NASA Astrophysics Data System (ADS)
Song, Yiliao; Qin, Shanshan; Qu, Jiansheng; Liu, Feng
2015-10-01
The issue of air quality regarding PM pollution levels in China is a focus of public attention. To address that issue, a series of studies is in progress, including PM monitoring programs, PM source apportionment, and the enactment of new ambient air quality index standards. However, related research concerning computer modeling for the estimation of future PM trends is rare, despite its significance to forecasting and early warning systems. Therefore, a study of deterministic and interval forecasts of PM is performed. In this study, data on hourly and 12 h-averaged air pollutants are applied to forecast PM concentrations within the Yangtze River Delta (YRD) region of China. The characteristics of PM emissions are first examined and analyzed using different distribution functions. To improve the distribution fitting that is crucial for estimating PM levels, an artificial intelligence algorithm is incorporated to select the optimal parameters. Following that step, an ANF model is used to conduct deterministic forecasts of PM. With the identified distributions and deterministic forecasts, different levels of PM intervals are estimated. The results indicate that the lognormal or gamma distributions are highly representative of the recorded PM data, with a goodness-of-fit R2 of approximately 0.998. Furthermore, the results of the evaluation metrics (MSE, MAPE and CP, AW) also show high accuracy within the deterministic and interval forecasts of PM, indicating that this method enables the informative and effective quantification of future PM trends.
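A sketch of the distribution-fitting step: fit lognormal and gamma models to surrogate PM concentrations and read central prediction intervals off the fitted quantiles. The data here are simulated, not the YRD observations used in the study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pm = rng.lognormal(mean=4.0, sigma=0.5, size=2000)   # surrogate hourly PM2.5 (ug/m3)

for name, dist in [("lognormal", stats.lognorm), ("gamma", stats.gamma)]:
    params = dist.fit(pm, floc=0)                    # fix location at zero
    lo, hi = dist.ppf([0.05, 0.95], *params)         # 90% prediction interval
    ks = stats.kstest(pm, dist.cdf, args=params).statistic  # goodness of fit
    print(f"{name}: 90% interval = [{lo:.1f}, {hi:.1f}] ug/m3, KS = {ks:.3f}")
```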
Robust optimization of supersonic ORC nozzle guide vanes
NASA Astrophysics Data System (ADS)
Bufi, Elio A.; Cinnella, Paola
2017-03-01
An efficient Robust Optimization (RO) strategy is developed for the design of 2D supersonic Organic Rankine Cycle turbine expanders. The dense gas effects are not-negligible for this application and they are taken into account describing the thermodynamics by means of the Peng-Robinson-Stryjek-Vera equation of state. The design methodology combines an Uncertainty Quantification (UQ) loop based on a Bayesian kriging model of the system response to the uncertain parameters, used to approximate statistics (mean and variance) of the uncertain system output, a CFD solver, and a multi-objective non-dominated sorting algorithm (NSGA), also based on a Kriging surrogate of the multi-objective fitness function, along with an adaptive infill strategy for surrogate enrichment at each generation of the NSGA. The objective functions are the average and variance of the isentropic efficiency. The blade shape is parametrized by means of a Free Form Deformation (FFD) approach. The robust optimal blades are compared to the baseline design (based on the Method of Characteristics) and to a blade obtained by means of a deterministic CFD-based optimization.
NASA Astrophysics Data System (ADS)
Hunziker, Jürg; Laloy, Eric; Linde, Niklas
2016-04-01
Deterministic inversion procedures can often explain field data, but they only deliver one final subsurface model that depends on the initial model and regularization constraints. This leads to poor insights about the uncertainties associated with the inferred model properties. In contrast, probabilistic inversions can provide an ensemble of model realizations that accurately span the range of possible models that honor the available calibration data and prior information allowing a quantitative description of model uncertainties. We reconsider the problem of inferring the dielectric permittivity (directly related to radar velocity) structure of the subsurface by inversion of first-arrival travel times from crosshole ground penetrating radar (GPR) measurements. We rely on the DREAM(ZS) algorithm that is a state-of-the-art Markov chain Monte Carlo (MCMC) algorithm. Such algorithms need several orders of magnitude more forward simulations than deterministic algorithms and often become infeasible in high parameter dimensions. To enable high-resolution imaging with MCMC, we use a recently proposed dimensionality reduction approach that allows reproducing 2D multi-Gaussian fields with far fewer parameters than a classical grid discretization. We consider herein a dimensionality reduction from 5000 to 257 unknowns. The first 250 parameters correspond to a spectral representation of random and uncorrelated spatial fluctuations while the remaining seven geostatistical parameters are (1) the standard deviation of the data error, (2) the mean and (3) the variance of the relative electric permittivity, (4) the integral scale along the major axis of anisotropy, (5) the anisotropy angle, (6) the ratio of the integral scale along the minor axis of anisotropy to the integral scale along the major axis of anisotropy and (7) the shape parameter of the Matérn function. The latter essentially defines the type of covariance function (e.g., exponential, Whittle, Gaussian). We present an improved formulation of the dimensionality reduction, and numerically show how it reduces artifacts in the generated models and provides better posterior estimation of the subsurface geostatistical structure. We next show that the results of the method compare very favorably against previous deterministic and stochastic inversion results obtained at the South Oyster Bacterial Transport Site in Virginia, USA. The long-term goal of this work is to enable MCMC-based full waveform inversion of crosshole GPR data.
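A toy version of the dimensionality-reduction idea: represent a 2D correlated Gaussian field by the amplitudes of a small number of dominant Fourier modes, so those amplitudes become the unknowns (250 here, echoing the paper's 5000-to-257 reduction). The decaying spectrum below is an illustrative assumption, not the Matérn model used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
nx, ny, n_keep = 100, 50, 250
kx = np.fft.fftfreq(nx)[:, None]
ky = np.fft.fftfreq(ny)[None, :]
power = 1.0 / (1.0 + (40.0 * np.hypot(kx, ky)) ** 2)   # smooth decaying spectrum

flat = np.argsort(power, axis=None)[::-1][:n_keep]     # keep dominant modes only
ii, jj = np.unravel_index(flat, power.shape)
theta = rng.standard_normal(n_keep) + 1j * rng.standard_normal(n_keep)

F = np.zeros((nx, ny), dtype=complex)
F[ii, jj] = theta * np.sqrt(power[ii, jj])             # 250 unknowns in total
field = np.real(np.fft.ifft2(F)) * nx * ny             # one correlated realization
print(field.shape, round(field.std(), 3))
```

In an MCMC inversion, the sampler then explores the low-dimensional vector theta (plus a handful of geostatistical parameters) rather than thousands of grid-cell values.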
Study of a tri-trophic prey-dependent food chain model of interacting populations.
Haque, Mainul; Ali, Nijamuddin; Chakravarty, Santabrata
2013-11-01
The current paper accounts for the influence of intra-specific competition among predators in a prey-dependent tri-trophic food chain model of interacting populations. We offer a detailed mathematical analysis of the proposed food chain model to illustrate some of the significant results that have arisen from the interplay of deterministic ecological phenomena and processes. Biologically feasible equilibria of the system are observed and the behaviours of the system around each of them are described. In particular, persistence, stability (local and global) and bifurcation (saddle-node, transcritical, Hopf-Andronov) analyses of this model are obtained. Relevant results from previous well-known food chain models are compared with the current findings. Global stability analysis is also carried out by constructing appropriate Lyapunov functions. Numerical simulations show that the present system is capable of producing chaotic dynamics when the rate of self-interaction is very low. On the other hand, such chaotic behaviour disappears for a certain value of the rate of self-interaction. In addition, numerical simulations with experimental parameter values confirm the analytical results and show that intra-specific competition plays a potential role in controlling the chaotic dynamics of the system; thus, the role of self-interaction in food chain models is illustrated for the first time. Finally, a discussion of the ecological applications of the analytical and numerical findings concludes the paper. Copyright © 2013 Elsevier Inc. All rights reserved.
Hunt, R.J.; Feinstein, D.T.; Pint, C.D.; Anderson, M.P.
2006-01-01
As part of the USGS Water, Energy, and Biogeochemical Budgets project and the NSF Long-Term Ecological Research work, a parameter estimation code was used to calibrate a deterministic groundwater flow model of the Trout Lake Basin in northern Wisconsin. Observations included traditional calibration targets (head, lake stage, and baseflow observations) as well as unconventional targets such as groundwater flows to and from lakes, depth of a lake water plume, and time of travel. The unconventional data types were important for parameter estimation convergence and allowed the development of a more detailed parameterization capable of resolving model objectives with well-constrained parameter values. Independent estimates of groundwater inflow to lakes were most important for constraining lakebed leakance and the depth of the lake water plume was important for determining hydraulic conductivity and conceptual aquifer layering. The most important target overall, however, was a conventional regional baseflow target that led to correct distribution of flow between sub-basins and the regional system during model calibration. The use of an automated parameter estimation code: (1) facilitated the calibration process by providing a quantitative assessment of the model's ability to match disparate observed data types; and (2) allowed assessment of the influence of observed targets on the calibration process. The model calibration required the use of a 'universal' parameter estimation code in order to include all types of observations in the objective function. The methods described in this paper help address issues of watershed complexity and non-uniqueness common to deterministic watershed models. © 2005 Elsevier B.V. All rights reserved.
Dai, Junyi; Gunn, Rachel L; Gerst, Kyle R; Busemeyer, Jerome R; Finn, Peter R
2016-10-01
Previous studies have demonstrated that working memory capacity plays a central role in delay discounting in people with externalizing psychopathology. These studies used a hyperbolic discounting model, and its single parameter, a measure of delay discounting, was estimated using the standard method of searching for indifference points between intertemporal options. However, there are several problems with this approach. First, the deterministic perspective on delay discounting underlying the indifference point method might be inappropriate. Second, the estimation procedure using the R2 measure often leads to poor model fit. Third, when parameters are estimated using indifference points only, much of the information collected in a delay discounting decision task is wasted. To overcome these problems, this article proposes a random utility model of delay discounting. The proposed model has two parameters, one for delay discounting and one for choice variability. It was fit to choice data obtained from a recently published data set using both maximum-likelihood and Bayesian parameter estimation. As in previous studies, the delay discounting parameter was significantly associated with both externalizing problems and working memory capacity. Furthermore, choice variability was also found to be significantly associated with both variables. This finding suggests that randomness in decisions may be a mechanism by which externalizing problems and low working memory capacity are associated with poor decision making. The random utility model thus has the advantage of disclosing the role of choice variability, which had been masked by the traditional deterministic model. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
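A generic two-parameter random utility specification of this kind, hyperbolic discounting plus a logistic choice-variability (inverse temperature) parameter, can be fitted by maximum likelihood in a few lines; the simulated task below is illustrative, not the published data set:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
amt_now, amt_later = 50.0, 100.0
delay = rng.uniform(1, 365, 200)                 # delays in days (simulated trials)
k_true, s_true = 0.02, 0.15                      # discount rate, choice sensitivity
v_diff = amt_later / (1 + k_true * delay) - amt_now
choose_later = rng.random(200) < 1 / (1 + np.exp(-s_true * v_diff))

def neg_log_lik(params):
    k, s = np.exp(params)                        # log-parameterization keeps both > 0
    v = amt_later / (1 + k * delay) - amt_now    # hyperbolic value difference
    p = np.clip(1 / (1 + np.exp(-s * v)), 1e-12, 1 - 1e-12)
    return -np.sum(np.where(choose_later, np.log(p), np.log(1 - p)))

fit = minimize(neg_log_lik, x0=np.log([0.01, 0.1]), method="Nelder-Mead")
print("k, s estimates:", np.exp(fit.x))
```

The second parameter s captures exactly the choice variability that a deterministic indifference-point analysis cannot separate from the discount rate.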
DOE Office of Scientific and Technical Information (OSTI.GOV)
Watts, Christopher A.
In this dissertation the possibility that chaos and simple determinism are governing the dynamics of reversed field pinch (RFP) plasmas is investigated. To properly assess this possibility, data from both numerical simulations and experiment are analyzed. A large repertoire of nonlinear analysis techniques is used to identify low-dimensional chaos in the data. These tools include phase portraits and Poincare sections, correlation dimension, the spectrum of Lyapunov exponents, and short-term predictability. In addition, nonlinear noise reduction techniques are applied to the experimental data in an attempt to extract any underlying deterministic dynamics. Two model systems are used to simulate the plasma dynamics. These are the DEBS code, which models global RFP dynamics, and the dissipative trapped electron mode (DTEM) model, which models drift wave turbulence. Data from both simulations show strong indications of low-dimensional chaos and simple determinism. Experimental data were obtained from the Madison Symmetric Torus RFP and consist of a wide array of both global and local diagnostic signals. None of the signals shows any indication of low-dimensional chaos or simple determinism. Moreover, most of the analysis tools indicate the experimental system is very high dimensional with properties similar to noise. Nonlinear noise reduction is unsuccessful at extracting an underlying deterministic system.
Cell-Free Optogenetic Gene Expression System.
Jayaraman, Premkumar; Yeoh, Jing Wui; Jayaraman, Sudhaghar; Teh, Ai Ying; Zhang, Jingyun; Poh, Chueh Loo
2018-04-20
Optogenetic tools provide a new and efficient way to dynamically program gene expression with unmatched spatiotemporal precision. To date, their vast potential remains untapped in the field of cell-free synthetic biology, largely due to the lack of simple and efficient light-switchable systems. Here, to bridge the gap between cell-free systems and optogenetics, we studied our previously engineered one component-based blue light-inducible Escherichia coli promoter in a cell-free environment through experimental characterization and mathematical modeling. We achieved >10-fold dynamic expression and demonstrated rapid and reversible activation of the target gene to generate oscillatory response. The deterministic model developed was able to recapitulate the system behavior and helped to provide quantitative insights to optimize dynamic response. This in vitro optogenetic approach could be a powerful new high-throughput screening technology for rapid prototyping of complex biological networks in both space and time without the need for chemical induction.
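A hedged sketch of a deterministic light-inducible expression model of this flavor: a two-state ODE in which transcription is gated by the light input u(t), producing the reversible, oscillatory response the abstract describes. Structure and rate values are generic assumptions, not the paper's fitted model.

```python
from scipy.integrate import solve_ivp

k_tx, d_m, k_tl, d_p = 2.0, 0.2, 1.0, 0.05       # per-minute rates (assumed)

def light(t):
    # square-wave illumination: 60 min ON, 60 min OFF
    return 1.0 if (t % 120) < 60 else 0.0

def rhs(t, y):
    m, p = y
    dm = k_tx * light(t) - d_m * m               # light-gated transcription
    dp = k_tl * m - d_p * p                      # translation and protein decay
    return [dm, dp]

sol = solve_ivp(rhs, (0, 480), [0.0, 0.0], max_step=1.0)  # resolve the switching
print("final mRNA, protein:", sol.y[:, -1])
```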
Simulation of Glacial Cycles Before and After the Mid-Pleistocene Transition
NASA Astrophysics Data System (ADS)
Ganopolski, A.; Willeit, M.; Calov, R.
2017-12-01
In spite of significant progress in the understanding of glacial cycles, the cause of the Mid-Pleistocene transition (MPT) is still not fully understood. To study possible mechanisms of the MPT we used the Earth system model of intermediate complexity CLIMBER-2, which incorporates all major components of the Earth system - atmosphere, ocean, land surface, northern hemisphere ice sheets, terrestrial biota and soil carbon, aeolian dust and marine biogeochemistry. We run the model through the entire Quaternary. The only prescribed forcing in these simulations is the variation in Earth orbital parameters. In addition, we prescribed terrestrial sediment cover and global volcanic outgassing that evolve gradually in time. We found that gradual removal of terrestrial sediment from the Northern Hemisphere continents by glacial processes is sufficient to explain the transition from the 40-ka to the 100-ka world around 1 million years ago. By starting the model at different times and using the same initial conditions, we found that the modeling results converge to the same solution, which depends only on the orbital forcing and the lower boundary conditions. Our results thus strongly suggest that Quaternary glacial cycles are externally forced and nearly deterministic.
Valuing Precaution in Climate Change Policy Analysis (Invited)
NASA Astrophysics Data System (ADS)
Howarth, R. B.
2010-12-01
The U.N. Framework Convention on Climate Change calls for stabilizing greenhouse gas concentrations to prevent “dangerous anthropogenic interference” (DAI) with the global environment. This treaty language emphasizes a precautionary approach to climate change policy in a setting characterized by substantial uncertainty regarding the timing, magnitude, and impacts of climate change. In the economics of climate change, however, analysts often work with deterministic models that assign best-guess values to parameters that are highly uncertain. Such models support a “policy ramp” approach in which only limited steps should be taken to reduce the future growth of greenhouse gas emissions. This presentation will explore how uncertainties related to (a) climate sensitivity and (b) climate-change damages can be satisfactorily addressed in a coupled model of climate-economy dynamics. In this model, capping greenhouse gas concentrations at ~450 ppm of carbon dioxide equivalent provides substantial net benefits by reducing the risk of low-probability, catastrophic impacts. This result formalizes the intuition embodied in the DAI criterion in a manner consistent with rational decision-making under uncertainty.
Stomp, A M
1994-01-01
To meet the demands for goods and services of an exponentially growing human population, global ecosystems will come under increasing human management. The hallmark of successful ecosystem management will be long-term ecosystem stability. Ecosystems and the genetic information and processes which underlie interactions of organisms with the environment in populations and communities exhibit behaviors which have nonlinear characteristics. Nonlinear mathematical formulations describing deterministic chaos have been used successfully to model such systems in physics, chemistry, economics, physiology, and epidemiology. This approach can be extended to ecotoxicology and can be used to investigate how changes in genetic information determine the behavior of populations and communities. This article seeks to provide the arguments for such an approach and to give initial direction to the search for the boundary conditions within which lies ecosystem stability. The identification of a theoretical framework for ecotoxicology and the parameters which drive the underlying model is a critical component in the formulation of a prioritized research agenda and appropriate ecosystem management policy and regulation. PMID:7713038
Correlation Dimension Estimates of Global and Local Temperature Data.
NASA Astrophysics Data System (ADS)
Wang, Qiang
1995-11-01
The author has attempted to detect the presence of low-dimensional deterministic chaos in temperature data by estimating the correlation dimension with the Hill estimate recently developed by Mikosch and Wang. There is no convincing evidence of low dimensionality with either the global dataset (Southern Hemisphere monthly average temperatures from 1858 to 1984) or the local temperature dataset (daily minimums at Auckland, New Zealand). Any apparent reduction in the dimension estimates appears to be due largely, if not entirely, to effects of statistical bias; neither dataset, however, behaves as a purely random stochastic process. The dimension of the climatic attractor may be significantly larger than 10.
Anticolonial climates: physiology, ecology, and global population, 1920s-1950s.
Bashford, Alison
2012-01-01
Historiography on tropical medicine and determinist ideas about climate and racial difference rightly focuses on links with nineteenth- and twentieth-century colonial rule. Occasionally and counterintuitively, however, these ideas have been redeployed as anticolonial argument. This article looks at one such instance; the racial physiology of Indian economist, ecologist, and anticolonial nationalist Radhakamal Mukerjee (1889-1968). It argues that the explanatory context was mid-twentieth-century discussion of global population growth, which raised questions of density and belonging to land. Ecology offered a new language and scientific system within which people and place were conceptually integrated, in this instance to anticolonial ends.
NASA Astrophysics Data System (ADS)
Mathieu, Jean-Philippe; Inal, Karim; Berveiller, Sophie; Diard, Olivier
2010-11-01
Local approach to brittle fracture for low-alloyed steels is discussed in this paper. A bibliographical introduction highlights general trends and consensual points of the topic and evokes debatable aspects. French RPV steel 16MND5 (equivalent to ASTM A508 Cl.3) is then used as a model material to study the influence of temperature on brittle fracture. A micromechanical modelling of brittle fracture at the elementary volume scale, already used in previous work, is then recalled. It involves a multiscale modelling of microstructural plasticity which has been tuned on experimental inter-phase and inter-granular stress heterogeneity measurements. The fracture probability of the elementary volume can then be computed using a randomly attributed defect size distribution based on a realistic spatial distribution of carbides. This defect distribution is then deterministically correlated to stress heterogeneities simulated within the microstructure using a weakest-link hypothesis on the elementary volume, which results in a deterministic stress to fracture. Repeating the process allows Weibull parameters to be computed on the elementary volume. This tool is then used to investigate the physical mechanisms that could explain the experimentally observed temperature dependence of Beremin's parameters for 16MND5 steel. It is shown that, assuming the hypotheses made in this work about cleavage micro-mechanisms are correct, the effective equivalent surface energy (i.e., surface energy plus the energy plastically dissipated when blunting the crack tip) for propagating a crack has to be temperature dependent to explain the temperature evolution of Beremin's parameters.
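For orientation, a generic Beremin-type weakest-link evaluation looks as follows; the stress field, element volumes, and Weibull constants are illustrative assumptions, not the paper's calibrated micromechanical model:

```python
import numpy as np

rng = np.random.default_rng(0)
m, sigma_u, V0 = 20.0, 2600.0, 1e-3         # Weibull modulus, scale (MPa), ref volume (mm^3)
sigma_I = rng.normal(1800.0, 150.0, 5000)   # max principal stress per element (MPa)
V = np.full(5000, 1e-4)                     # element volumes (mm^3)

# Weibull stress aggregates the stressed volume; failure probability follows
# the two-parameter Weibull law of the weakest-link model.
sigma_w = np.sum(sigma_I ** m * V / V0) ** (1.0 / m)
P_f = 1.0 - np.exp(-((sigma_w / sigma_u) ** m))
print(f"Weibull stress = {sigma_w:.0f} MPa, failure probability = {P_f:.3f}")
```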
DAISY: a new software tool to test global identifiability of biological and physiological systems.
Bellu, Giuseppina; Saccomani, Maria Pia; Audoly, Stefania; D'Angiò, Leontina
2007-10-01
A priori global identifiability is a structural property of biological and physiological models. It is considered a prerequisite for well-posed estimation, since it concerns the possibility of recovering uniquely the unknown model parameters from measured input-output data, under ideal conditions (noise-free observations and error-free model structure). Of course, determining whether the parameters can be uniquely recovered from observed data is essential before investing resources, time and effort in performing actual biomedical experiments. Many interesting biological models are nonlinear, but identifiability analysis for nonlinear systems turns out to be a difficult mathematical problem. Different methods have been proposed in the literature to test identifiability of nonlinear models but, to the best of our knowledge, so far no software tools have been proposed for automatically checking identifiability of nonlinear models. In this paper, we describe a software tool implementing a differential algebra algorithm to perform parameter identifiability analysis for (linear and) nonlinear dynamic models described by polynomial or rational equations. Our goal is to provide the biological investigator a completely automated software tool, requiring minimum prior knowledge of mathematical modelling and no in-depth understanding of the mathematical tools. The DAISY (Differential Algebra for Identifiability of SYstems) software will potentially be useful in biological modelling studies, especially in physiology and clinical medicine, where research experiments are particularly expensive and/or difficult to perform. Practical examples of use of the software tool DAISY are presented. DAISY is available at the web site http://www.dei.unipd.it/~pia/.
NASA Astrophysics Data System (ADS)
Junker, Philipp; Jaeger, Stefanie; Kastner, Oliver; Eggeler, Gunther; Hackl, Klaus
2015-07-01
In this work, we present simulations of shape memory alloys which serve as first examples demonstrating the predictive character of energy-based material models. We begin with a theoretical approach for the derivation of the caloric parts of the Helmholtz free energy. Afterwards, experimental results for DSC measurements are presented. Then, we recall a micromechanical model based on the principle of the minimum of the dissipation potential for the simulation of polycrystalline shape memory alloys. The previously determined caloric parts of the Helmholtz free energy close the set of model parameters without the need for parameter fitting. All quantities are derived directly from experiments. Finally, we compare finite element results for tension tests to experimental data and show that the model identified by thermal measurements can predict mechanically induced phase transformations and thus rationalize global material behavior without any further assumptions.
Time Domain and Frequency Domain Deterministic Channel Modeling for Tunnel/Mining Environments.
Zhou, Chenming; Jacksha, Ronald; Yan, Lincan; Reyes, Miguel; Kovalchik, Peter
2017-01-01
Understanding wireless channels in complex mining environments is critical for designing optimized wireless systems operated in these environments. In this paper, we propose two physics-based, deterministic ultra-wideband (UWB) channel models for characterizing wireless channels in mining/tunnel environments - one in the time domain and the other in the frequency domain. For the time domain model, a general Channel Impulse Response (CIR) is derived and the result is expressed in the classic UWB tapped delay line model. The derived time domain channel model takes into account major propagation controlling factors including tunnel or entry dimensions, frequency, polarization, electrical properties of the four tunnel walls, and transmitter and receiver locations. For the frequency domain model, a complex channel transfer function is derived analytically. Based on the proposed physics-based deterministic channel models, channel parameters such as delay spread, multipath component number, and angular spread are analyzed. It is found that, despite the presence of heavy multipath, both channel delay spread and angular spread for tunnel environments are relatively smaller compared to that of typical indoor environments. The results and findings in this paper have application in the design and deployment of wireless systems in underground mining environments.
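The tapped-delay-line representation and the delay-spread statistics mentioned above reduce to simple power-delay-profile arithmetic; the tap delays and gains below are illustrative, not the paper's predictions for a specific tunnel geometry:

```python
import numpy as np

# A channel impulse response as delayed, attenuated taps, and the RMS delay
# spread computed from its power-delay profile (illustrative values).
delays_ns = np.array([0.0, 12.0, 25.0, 41.0, 60.0])
gains_db = np.array([0.0, -4.0, -9.0, -14.0, -20.0])
p = 10 ** (gains_db / 10)                  # power-delay profile (linear)

mean_delay = np.sum(p * delays_ns) / np.sum(p)
rms_spread = np.sqrt(np.sum(p * (delays_ns - mean_delay) ** 2) / np.sum(p))
print(f"mean excess delay = {mean_delay:.1f} ns, RMS delay spread = {rms_spread:.1f} ns")
```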
NASA Technical Reports Server (NTRS)
Onwubiko, Chin-Yere; Onyebueke, Landon
1996-01-01
The structural design, or the design of machine elements, has traditionally been based on deterministic design methodology. The deterministic method considers all design parameters to be known with certainty. This methodology is, therefore, inadequate to design complex structures that are subjected to a variety of complex, severe loading conditions. A nonlinear behavior that is dependent on stress, stress rate, temperature, number of load cycles, and time is observed on all components subjected to complex conditions. These complex conditions introduce uncertainties; hence, the actual factor-of-safety margin remains unknown. In the deterministic methodology, the contingency of failure is discounted; hence, a high factor of safety is used. It may be most useful in situations where the designed structures are simple. The probabilistic method is concerned with the probability of non-failure performance of structures or machine elements. It is much more useful in situations where the design is characterized by complex geometry, possibility of catastrophic failure, or sensitive loads and material properties. Also included: Comparative Study of the use of AGMA Geometry Factors and Probabilistic Design Methodology in the Design of Compact Spur Gear Set.
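The contrast drawn above can be made concrete with a tiny Monte Carlo stress-versus-strength comparison; the distributions are assumed for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)
strength = rng.normal(400.0, 40.0, 100_000)   # material strength (MPa, assumed)
stress = rng.normal(250.0, 50.0, 100_000)     # applied stress (MPa, assumed)

# Deterministic view: a single factor of safety from nominal values.
print("deterministic factor of safety:", 400.0 / 250.0)
# Probabilistic view: the actual chance that stress exceeds strength.
print("probability of failure:", np.mean(stress > strength))
```

A comfortable-looking factor of safety of 1.6 can still coexist with a non-negligible failure probability once the scatter in both variables is accounted for, which is the argument for the probabilistic method.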
PCEMCAN - Probabilistic Ceramic Matrix Composites Analyzer: User's Guide, Version 1.0
NASA Technical Reports Server (NTRS)
Shah, Ashwin R.; Mital, Subodh K.; Murthy, Pappu L. N.
1998-01-01
PCEMCAN (Probabilistic CEramic Matrix Composites ANalyzer) is an integrated computer code developed at NASA Lewis Research Center that simulates uncertainties associated with the constituent properties, manufacturing process, and geometric parameters of fiber-reinforced ceramic matrix composites and quantifies their random thermomechanical behavior. The PCEMCAN code can perform deterministic as well as probabilistic analyses to predict thermomechanical properties. This user's guide details the step-by-step procedure to create an input file and update/modify the material properties database required to run the PCEMCAN computer code. An overview of the geometric conventions, micromechanical unit cell, nonlinear constitutive relationship and probabilistic simulation methodology is also provided in the manual. Fast probability integration as well as Monte Carlo simulation methods are available for the uncertainty simulation. Various options available in the code to simulate probabilistic material properties and quantify the sensitivity of the primitive random variables are described. Deterministic as well as probabilistic results are illustrated using demonstration problems. For a detailed theoretical description of the deterministic and probabilistic analyses, the user is referred to the companion documents "Computational Simulation of Continuous Fiber-Reinforced Ceramic Matrix Composite Behavior," NASA TP-3602, 1996 and "Probabilistic Micromechanics and Macromechanics for Ceramic Matrix Composites," NASA TM 4766, June 1997.
On the deterministic and stochastic use of hydrologic models
Farmer, William H.; Vogel, Richard M.
2016-01-01
Environmental simulation models, such as precipitation-runoff watershed models, are increasingly used in a deterministic manner for environmental and water resources design, planning, and management. In operational hydrology, simulated responses are now routinely used to plan, design, and manage a very wide class of water resource systems. However, all such models are calibrated to existing data sets and retain some residual error. This residual, typically unknown in practice, is often ignored, implicitly trusting simulated responses as if they are deterministic quantities. In general, ignoring the residuals will result in simulated responses with distributional properties that do not mimic those of the observed responses. This discrepancy has major implications for the operational use of environmental simulation models as is shown here. Both a simple linear model and a distributed-parameter precipitation-runoff model are used to document the expected bias in the distributional properties of simulated responses when the residuals are ignored. The systematic reintroduction of residuals into simulated responses in a manner that produces stochastic output is shown to improve the distributional properties of the simulated responses. Every effort should be made to understand the distributional behavior of simulation residuals and to use environmental simulation models in a stochastic manner.
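A minimal illustration of the paper's central point: a deterministic simulation understates the variance of the observed response, and resampling calibration residuals back onto the simulated series restores the distributional behavior. The linear rainfall-runoff "model" below is a toy, not the paper's watershed model.

```python
import numpy as np

rng = np.random.default_rng(42)
rain = rng.gamma(2.0, 5.0, 1000)
flow_obs = 0.6 * rain + rng.normal(0, 3.0, 1000)   # "truth" = model + noise
flow_sim = 0.6 * rain                              # deterministic simulation

resid = flow_obs - flow_sim                        # calibration residuals
flow_stoch = flow_sim + rng.choice(resid, size=flow_sim.size)  # reintroduce them

for name, f in [("observed", flow_obs), ("deterministic", flow_sim),
                ("stochastic", flow_stoch)]:
    print(f"{name:13s} std = {f.std():.2f}")       # deterministic std is too low
```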
Parameterizing sorption isotherms using a hybrid global-local fitting procedure.
Matott, L Shawn; Singh, Anshuman; Rabideau, Alan J
2017-05-01
Predictive modeling of the transport and remediation of groundwater contaminants requires an accurate description of the sorption process, which is usually provided by fitting an isotherm model to site-specific laboratory data. Commonly used calibration procedures, listed in order of increasing sophistication, include: trial-and-error, linearization, non-linear regression, global search, and hybrid global-local search. Given the considerable variability in fitting procedures applied in published isotherm studies, we investigated the importance of algorithm selection through a series of numerical experiments involving 13 previously published sorption datasets. These datasets, considered representative of state-of-the-art for isotherm experiments, had been previously analyzed using trial-and-error, linearization, or non-linear regression methods. The isotherm expressions were re-fit using a 3-stage hybrid global-local search procedure (i.e. global search using particle swarm optimization followed by Powell's derivative free local search method and Gauss-Marquardt-Levenberg non-linear regression). The re-fitted expressions were then compared to previously published fits in terms of the optimized weighted sum of squared residuals (WSSR) fitness function, the final estimated parameters, and the influence on contaminant transport predictions - where easily computed concentration-dependent contaminant retardation factors served as a surrogate measure of likely transport behavior. Results suggest that many of the previously published calibrated isotherm parameter sets were local minima. In some cases, the updated hybrid global-local search yielded order-of-magnitude reductions in the fitness function. In particular, of the candidate isotherms, the Polanyi-type models were most likely to benefit from the use of the hybrid fitting procedure. In some cases, improvements in fitness function were associated with slight (<10%) changes in parameter values, but in other cases significant (>50%) changes in parameter values were noted. Despite these differences, the influence of isotherm misspecification on contaminant transport predictions was quite variable and difficult to predict from inspection of the isotherms. Copyright © 2017 Elsevier B.V. All rights reserved.
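A sketch of the hybrid structure (a global evolutionary search seeding a derivative-free local polish) applied to a Freundlich isotherm fit. This mirrors the staging of the procedure, not its exact PSO/Powell/Gauss-Marquardt-Levenberg implementation, and the data are synthetic:

```python
import numpy as np
from scipy.optimize import differential_evolution, minimize

rng = np.random.default_rng(7)
C = np.logspace(-2, 2, 12)                       # aqueous concentrations
q_obs = 3.0 * C**0.45 * (1 + 0.05 * rng.standard_normal(C.size))  # sorbed amounts

def wssr(p):
    # weighted sum of squared residuals for the Freundlich model q = Kf * C**n
    Kf, n = p
    return np.sum(((q_obs - Kf * C**n) / q_obs) ** 2)

coarse = differential_evolution(wssr, bounds=[(1e-3, 100), (0.05, 1.5)], seed=1)
polished = minimize(wssr, coarse.x, method="Nelder-Mead")   # local polish
print("Kf, n =", polished.x, " WSSR =", polished.fun)
```

The global stage protects against the local minima the paper reports in previously published fits; the local stage then sharpens the estimate.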
Fast-slow asymptotics for a Markov chain model of fast sodium current
NASA Astrophysics Data System (ADS)
Starý, Tomáš; Biktashev, Vadim N.
2017-09-01
We explore the feasibility of using fast-slow asymptotics to eliminate the computational stiffness of discrete-state, continuous-time deterministic Markov chain models of ionic channels underlying cardiac excitability. We focus on a Markov chain model of fast sodium current, and investigate its asymptotic behaviour with respect to small parameters identified in different ways.
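A hedged, miniature analogue of the fast-slow reduction: in a toy two-gate channel, the fast gate relaxes on a timescale eps and can be replaced by its quasi-steady state, removing the stiffness from the slow equation. This illustrates the asymptotic idea only; the paper's Markov chain for the sodium current is far richer.

```python
from scipy.integrate import solve_ivp

eps = 0.01                       # small parameter: fast/slow timescale ratio
m_inf, h_inf = 0.8, 0.3          # illustrative steady-state gate values

def full(t, y):
    m, h = y
    return [(m_inf - m) / eps, (h_inf - h)]   # stiff: m relaxes ~100x faster

def reduced(t, y):
    (h,) = y
    return [(h_inf - h)]                      # m slaved to m_inf, stiffness gone

sol_full = solve_ivp(full, (0, 5), [0.0, 1.0], rtol=1e-8, atol=1e-10)
sol_red = solve_ivp(reduced, (0, 5), [1.0], rtol=1e-8, atol=1e-10)
print("h(5): full =", sol_full.y[1, -1], " reduced =", sol_red.y[0, -1])
```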
An Estimation Procedure for the Structural Parameters of the Unified Cognitive/IRT Model.
ERIC Educational Resources Information Center
Jiang, Hai; And Others
L. V. DiBello, W. F. Stout, and L. A. Roussos (1993) have developed a new item response model, the Unified Model, which brings together the discrete, deterministic aspects of cognition favored by cognitive scientists, and the continuous, stochastic aspects of test response behavior that underlie item response theory (IRT). The Unified Model blends…
NASA Astrophysics Data System (ADS)
Caumont, Olivier; Hally, Alan; Garrote, Luis; Richard, Évelyne; Weerts, Albrecht; Delogu, Fabio; Fiori, Elisabetta; Rebora, Nicola; Parodi, Antonio; Mihalović, Ana; Ivković, Marija; Dekić, Ljiljana; van Verseveld, Willem; Nuissier, Olivier; Ducrocq, Véronique; D'Agostino, Daniele; Galizia, Antonella; Danovaro, Emanuele; Clematis, Andrea
2015-04-01
The FP7 DRIHM (Distributed Research Infrastructure for Hydro-Meteorology, http://www.drihm.eu, 2011-2015) project intends to develop a prototype e-Science environment to facilitate the collaboration between meteorologists, hydrologists, and Earth science experts for accelerated scientific advances in Hydro-Meteorology Research (HMR). As the project comes to its end, this presentation will summarize the HMR results that have been obtained in the framework of DRIHM. The vision shaped and implemented in the framework of the DRIHM project enables the production and interpretation of numerous, complex compositions of hydrometeorological simulations of flood events from rainfall, either simulated or modelled, down to discharge. Each element of a composition is drawn from a set of various state-of-the-art models. Atmospheric simulations providing high-resolution rainfall forecasts involve different global and limited-area convection-resolving models, the former being used as boundary conditions for the latter. Some of these models can be run as ensembles, i.e. with perturbed boundary conditions, initial conditions and/or physics, thus sampling the probability density function of rainfall forecasts. In addition, a stochastic downscaling algorithm can be used to create high-resolution rainfall ensemble forecasts from deterministic lower-resolution forecasts. All these rainfall forecasts may be used as input to various rainfall-discharge hydrological models that compute the resulting stream flows for catchments of interest. In some hydrological simulations, physical parameters are perturbed to take into account model errors. As a result, six different kinds of rainfall data (either deterministic or probabilistic) can currently be compared with each other and combined with three different hydrological model engines running either in deterministic or probabilistic mode. HMR topics which are allowed or facilitated by such unprecedented sets of hydrometeorological forecasts include: physical process studies, intercomparison of models and ensembles, sensitivity studies to a particular component of the forecasting chain, and design of flash-flood early-warning systems. These benefits will be illustrated with the different key cases that have been under investigation in the course of the project. These are four catastrophic cases of flooding, namely the case of 4 November 2011 in Genoa, Italy, 6 November 2011 in Catalonia, Spain, 13-16 May 2014 in eastern Europe, and 9 October 2014, again in Genoa, Italy.
Integrated Risk-Informed Decision-Making for an ALMR PRISM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Muhlheim, Michael David; Belles, Randy; Denning, Richard S.
Decision-making is the process of identifying decision alternatives, assessing those alternatives based on predefined metrics, selecting an alternative (i.e., making a decision), and then implementing that alternative. The generation of decisions requires a structured, coherent process, or a decision-making process. The overall objective for this work is that the generalized framework is adopted into an autonomous decision-making framework and tailored to specific requirements for various applications. In this context, automation is the use of computing resources to make decisions and implement a structured decision-making process with limited or no human intervention. The overriding goal of automation is to replace or supplement human decision makers with reconfigurable decision-making modules that can perform a given set of tasks rationally, consistently, and reliably. Risk-informed decision-making requires a probabilistic assessment of the likelihood of success given the status of the plant/systems and component health, and a deterministic assessment between plant operating parameters and reactor protection parameters to prevent unnecessary trips and challenges to plant safety systems. The probabilistic portion of the decision-making engine of the supervisory control system is based on the control actions associated with an ALMR PRISM. Newly incorporated into the probabilistic models are the prognostic/diagnostic models developed by Pacific Northwest National Laboratory. These allow decisions to incorporate the health of components into the decision-making process. Once the control options are identified and ranked based on the likelihood of success, the supervisory control system transmits the options to the deterministic portion of the platform. The deterministic portion of the decision-making engine uses thermal-hydraulic modeling and components for an advanced liquid-metal reactor Power Reactor Inherently Safe Module. The deterministic multi-attribute decision-making framework uses various sensor data (e.g., reactor outlet temperature, steam generator drum level) and calculates its position within the challenge state, its trajectory, and its margin within the controllable domain using utility functions to evaluate current and projected plant state space for different control decisions. The metrics that are evaluated are based on reactor trip set points. The integration of the deterministic calculations using multi-physics analyses and probabilistic safety calculations allows for the examination and quantification of margin recovery strategies; the thermal-hydraulics analyses thereby validate the control options identified from the probabilistic assessment. Future work includes evaluating other possible metrics and computational efficiencies, and developing a user interface to mimic display panels at a modern nuclear power plant.
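As a concrete but purely illustrative picture of the deterministic multi-attribute step, the sketch below scores hypothetical control options by combining a PRA-style success probability with a margin-to-trip utility; the option names, set points, and weights are invented, not taken from the PRISM analysis.

```python
import numpy as np

def margin_utility(value, trip_setpoint, nominal):
    # linear utility: 1 at the nominal operating point, 0 at the trip set point
    return float(np.clip((trip_setpoint - value) / (trip_setpoint - nominal), 0.0, 1.0))

options = {
    # option: (P(success) from probabilistic models, projected outlet temp, degC)
    "reduce_power_10pct": (0.95, 505.0),
    "trip_one_pump":      (0.80, 520.0),
    "no_action":          (0.99, 545.0),
}
TRIP, NOMINAL, W_PROB, W_MARGIN = 550.0, 485.0, 0.5, 0.5

def score(p_success, temp):
    return W_PROB * p_success + W_MARGIN * margin_utility(temp, TRIP, NOMINAL)

for name, (p, temp) in sorted(options.items(), key=lambda kv: -score(*kv[1])):
    print(f"{name}: combined score = {score(p, temp):.3f}")
```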
NASA Astrophysics Data System (ADS)
Nascetti, A.; Di Rita, M.; Ravanelli, R.; Amicuzi, M.; Esposito, S.; Crespi, M.
2017-05-01
The high-performance cloud-computing platform Google Earth Engine has been developed for global-scale analysis based on Earth observation data. In this work, the geometric accuracy of the two most used nearly-global free DSMs (SRTM and ASTER) has been evaluated on the territories of four American States (Colorado, Michigan, Nevada, Utah) and one Italian Region (Trentino Alto-Adige, Northern Italy), exploiting the potential of this platform. These are large areas characterized by different terrain morphologies, land covers and slopes. The assessment has been performed using two different reference DSMs: the USGS National Elevation Dataset (NED) and a LiDAR acquisition. The DSM accuracy has been evaluated through the computation of standard statistical parameters, both at global scale (considering the whole State/Region) and as a function of the terrain morphology using several slope classes. The geometric accuracy, in terms of standard deviation and NMAD, ranges for SRTM from 2-3 meters in the first slope class to about 45 meters in the last one, whereas for ASTER the values range from 5-6 to 30 meters. In general, the performed analysis shows a better accuracy for SRTM in flat areas, whereas the ASTER GDEM is more reliable in steep areas. These preliminary results highlight the potential of GEE to perform DSM assessment on a global scale.
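For readers unfamiliar with the NMAD statistic used above, a minimal sketch of the per-slope-class accuracy computation follows; the elevation differences and slope values are synthetic placeholders, not the SRTM/ASTER data.

```python
import numpy as np

def nmad(dh):
    # Normalized Median Absolute Deviation: a robust spread estimate
    return 1.4826 * np.median(np.abs(dh - np.median(dh)))

rng = np.random.default_rng(0)
dh = rng.normal(0, 5, 10_000)        # placeholder DSM-minus-reference differences (m)
slope = rng.uniform(0, 60, 10_000)   # placeholder terrain slopes (deg)

# standard deviation and NMAD computed per slope class, as in the assessment
for lo, hi in [(0, 10), (10, 20), (20, 40), (40, 60)]:
    sel = (slope >= lo) & (slope < hi)
    print(f"slope {lo}-{hi} deg: std={dh[sel].std():.2f} m, NMAD={nmad(dh[sel]):.2f} m")
```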
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Jie; Draxl, Caroline; Hopson, Thomas
Numerical weather prediction (NWP) models have been widely used for wind resource assessment. Model runs with higher spatial resolution are generally more accurate, yet extremely computationally expensive. An alternative approach is to use data generated by a low-resolution NWP model in conjunction with statistical methods. In order to analyze the accuracy and computational efficiency of different types of NWP-based wind resource assessment methods, this paper performs a comparison of three deterministic and probabilistic NWP-based wind resource assessment methodologies: (i) a coarse-resolution (0.5 degrees x 0.67 degrees) global reanalysis data set, the Modern-Era Retrospective Analysis for Research and Applications (MERRA); (ii) an analog ensemble methodology based on MERRA, which provides both deterministic and probabilistic predictions; and (iii) a fine-resolution (2-km) NWP data set, the Wind Integration National Dataset (WIND) Toolkit, based on the Weather Research and Forecasting model. Results show that: (i) as expected, the analog ensemble and WIND Toolkit perform significantly better than MERRA, confirming their ability to downscale coarse estimates; (ii) the analog ensemble provides the best estimate of the multi-year wind distribution at seven of the nine sites, while the WIND Toolkit is the best at one site; (iii) the WIND Toolkit is more accurate in estimating the distribution of hourly wind speed differences, which characterizes the wind variability, at five of the available sites, with the analog ensemble being best at the remaining four locations; and (iv) the analog ensemble computational cost is negligible, whereas the WIND Toolkit requires large computational resources. Future efforts could focus on the combination of the analog ensemble with intermediate-resolution (e.g., 10-15 km) NWP estimates, to considerably reduce the computational burden while providing accurate deterministic estimates and reliable probabilistic assessments.
Composing Data and Process Descriptions in the Design of Software Systems.
1988-05-01
[Garbled excerpt.] Recoverable content: accompanying the 'data' specification, the bank account of Section 2.2.3 is given as a CSP-style process ACC with events open, payin, wdraw, bal and close, where A has abstract type Account with side-effect-free operators. Surviving table-of-contents entries: "Non-deterministic merge"; "Specification of a ticket machine system".
On the Complexity of Item Response Theory Models.
Bonifay, Wes; Cai, Li
2017-01-01
Complexity in item response theory (IRT) has traditionally been quantified by simply counting the number of freely estimated parameters in the model. However, complexity is also contingent upon the functional form of the model. We examined four popular IRT models (exploratory factor analytic, bifactor, DINA, and DINO) with different functional forms but the same number of free parameters. In comparison, a simpler (unidimensional 3PL) model was specified such that it had one more parameter than the previous models. All models were then evaluated according to the minimum description length principle. Specifically, each model was fit to 1,000 data sets that were randomly and uniformly sampled from the complete data space and then assessed using global and item-level fit and diagnostic measures. The findings revealed that the factor analytic and bifactor models possess a strong tendency to fit any possible data. The unidimensional 3PL model displayed minimal fitting propensity, despite the fact that it included an additional free parameter. The DINA and DINO models did not demonstrate a proclivity to fit any possible data, but they did fit well to distinct data patterns. Applied researchers and psychometricians should therefore consider functional form, and not goodness-of-fit alone, when selecting an IRT model.
Earth Global Reference Atmospheric Model 2007 (Earth-GRAM07)
NASA Technical Reports Server (NTRS)
Leslie, Fred W.; Justus, C. G.
2008-01-01
GRAM is a Fortran software package that can run on a variety of platforms including PCs. GRAM provides values of atmospheric quantities such as temperature, pressure, density, winds, constituents, etc. GRAM99 covers all global locations, all months, and heights from the surface to approximately 1000 km. Dispersions (perturbations) of these parameters are also provided and are spatially and temporally correlated. GRAM can be run in a stand-alone mode or called as a subroutine from a trajectory program. GRAM07 is diagnostic, not prognostic (i.e., it describes the atmosphere, but it does not forecast). The source code is distributed free-of-charge to eligible recipients.
Unary probabilistic and quantum automata on promise problems
NASA Astrophysics Data System (ADS)
Gainutdinova, Aida; Yakaryılmaz, Abuzer
2018-02-01
We continue the systematic investigation of probabilistic and quantum finite automata (PFAs and QFAs) on promise problems by focusing on unary languages. We show that bounded-error unary QFAs are more powerful than bounded-error unary PFAs, and, contrary to the binary language case, the computational power of Las-Vegas QFAs and bounded-error PFAs is equivalent to the computational power of deterministic finite automata (DFAs). Then, we present a new family of unary promise problems defined with two parameters such that when fixing one parameter QFAs can be exponentially more succinct than PFAs and when fixing the other parameter PFAs can be exponentially more succinct than DFAs.
NASA Astrophysics Data System (ADS)
Ruiz, Rafael O.; Meruane, Viviana
2017-06-01
The goal of this work is to describe a framework to propagate uncertainties in piezoelectric energy harvesters (PEHs). These uncertainties are related to the incomplete knowledge of the model parameters. The framework presented could be employed to conduct prior robust stochastic predictions. The prior analysis assumes a known probability density function for the uncertain variables and propagates the uncertainties to the output voltage. The framework is particularized to evaluate the behavior of the frequency response functions (FRFs) in PEHs, while its implementation is illustrated with different unimorph and bimorph PEHs subjected to different scenarios: free of uncertainties, common uncertainties, and uncertainties arising from imperfect clamping. The common variability associated with the PEH parameters is tabulated and reported. A global sensitivity analysis is conducted to identify the Sobol indices. Results indicate that the elastic modulus, density, and thickness of the piezoelectric layer are the parameters most relevant to the output variability. The importance of including the model parameter uncertainties in the estimation of the FRFs is revealed. In this sense, the present framework constitutes a powerful tool for the robust design and prediction of PEH performance.
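A minimal sketch of such prior (forward) uncertainty propagation follows, using a single-degree-of-freedom surrogate in place of an electromechanically coupled PEH model; all distributions and the stiffness/mass proxies are invented for illustration.

```python
import numpy as np

# Monte Carlo propagation of assumed parameter uncertainties through |H(w)|.
rng = np.random.default_rng(1)
n = 2000
E   = rng.normal(66e9, 0.05 * 66e9, n)     # elastic modulus (Pa), assumed 5% c.o.v.
rho = rng.normal(7800, 0.03 * 7800, n)     # density (kg/m^3)
t   = rng.normal(0.5e-3, 0.05e-3, n)       # layer thickness (m)

k = E * t**3                 # toy stiffness proxy ~ E t^3
m = rho * t                  # toy mass proxy ~ rho t
zeta = 0.02
w = 2 * np.pi * np.linspace(10, 200, 400)  # frequency grid (rad/s)
wn = np.sqrt(k / m)[:, None]
H = 1.0 / np.sqrt((1 - (w / wn) ** 2) ** 2 + (2 * zeta * w / wn) ** 2)

print("mean FRF peak:", H.max(axis=1).mean())
print("FRF peak 5-95% band:", np.percentile(H.max(axis=1), [5, 95]))
```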
Comparison of probabilistic and deterministic fiber tracking of cranial nerves.
Zolal, Amir; Sobottka, Stephan B; Podlesek, Dino; Linn, Jennifer; Rieger, Bernhard; Juratli, Tareq A; Schackert, Gabriele; Kitzler, Hagen H
2017-09-01
OBJECTIVE The depiction of cranial nerves (CNs) using diffusion tensor imaging (DTI) is of great interest in skull base tumor surgery, and DTI used with deterministic tracking methods has been reported previously. However, there are still no good methods for eliminating noise from the resulting depictions. The authors hypothesized that probabilistic tracking could lead to more accurate results, because it more efficiently extracts information from the underlying data. Moreover, the authors adapted a previously described technique for noise elimination using gradual threshold increases to probabilistic tracking. To evaluate the utility of this new approach, this work provides a comparison between the gradual threshold increase method in probabilistic and in deterministic tracking of CNs. METHODS Both tracking methods were used to depict CNs II, III, V, and the VII+VIII bundle. Depiction of 240 CNs was attempted with each of the above methods in 30 healthy subjects, whose data were obtained from 2 public databases: the Kirby repository (KR) and the Human Connectome Project (HCP). Elimination of erroneous fibers was attempted by gradually increasing the respective thresholds (fractional anisotropy [FA] and probabilistic index of connectivity [PICo]). The results were compared with predefined ground truth images based on corresponding anatomical scans. Two label overlap measures (false-positive error and Dice similarity coefficient) were used to evaluate the success of both methods in depicting the CNs. Moreover, the differences between these parameters obtained from the KR and HCP (with higher angular resolution) databases were evaluated. Additionally, visualization of 10 CNs in 5 clinical cases was attempted with both methods and evaluated by comparing the depictions with intraoperative findings. RESULTS Maximum Dice similarity coefficients were significantly higher with probabilistic tracking (p < 0.001; Wilcoxon signed-rank test). The false-positive error of the last obtained depiction was also significantly lower in probabilistic than in deterministic tracking (p < 0.001). The HCP data yielded significantly better results in terms of the Dice coefficient in probabilistic tracking (p < 0.001, Mann-Whitney U-test) and in deterministic tracking (p = 0.02). The false-positive errors were smaller in HCP data in deterministic tracking (p < 0.001) and showed a strong trend toward significance in probabilistic tracking (p = 0.06). In the clinical cases, the probabilistic method visualized 7 of 10 attempted CNs accurately, compared with 3 correct depictions with deterministic tracking. CONCLUSIONS High angular resolution DTI scans are preferable for the DTI-based depiction of the cranial nerves. Probabilistic tracking with a gradual PICo threshold increase is more effective for this task than the previously described deterministic tracking with a gradual FA threshold increase, and may represent a useful method for depicting cranial nerves with DTI since it eliminates erroneous fibers without manual intervention.
Numerical modeling of the transmission dynamics of drug-sensitive and drug-resistant HSV-2
NASA Astrophysics Data System (ADS)
Gumel, A. B.
2001-03-01
A competitive finite-difference method will be constructed and used to solve a modified deterministic model for the spread of herpes simplex virus type-2 (HSV-2) within a given population. The model monitors the transmission dynamics and control of drug-sensitive and drug-resistant HSV-2. Unlike the fourth-order Runge-Kutta method (RK4), which fails when the discretization parameters exceed certain values, the novel numerical method to be developed in this paper gives convergent results for all parameter values.
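The abstract does not give the scheme itself, so the sketch below illustrates only the general principle behind nonstandard finite-difference (NSFD) constructions on a logistic test equation: a nonlocal discretization of the nonlinear term keeps the iteration positive and convergent for any step size, while RK4 diverges once the step leaves its stability region. This is a generic illustration, not Gumel's HSV-2 scheme.

```python
# Logistic test problem du/dt = u(1 - u), exact equilibrium u = 1.
def f(u):
    return u * (1.0 - u)

def rk4_step(u, h):
    k1 = f(u); k2 = f(u + h * k1 / 2); k3 = f(u + h * k2 / 2); k4 = f(u + h * k3)
    return u + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

h, steps = 3.0, 20           # h = 3 lies outside RK4's real stability interval
u_rk4 = u_nsfd = 0.5
for _ in range(steps):
    u_rk4 = rk4_step(u_rk4, h)
    # NSFD: discretize u(1 - u) nonlocally as u_n (1 - u_{n+1}),
    # giving an explicit, unconditionally positive update
    u_nsfd = u_nsfd * (1.0 + h) / (1.0 + h * u_nsfd)

print("RK4 :", u_rk4)        # oscillates and diverges for h = 3
print("NSFD:", u_nsfd)       # converges to 1 for any h > 0
```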
Richter, Ann-Kathrin; Klimek, Ludger; Merk, Hans F; Mülleneisen, Norbert; Renz, Harald; Wehrmann, Wolfgang; Werfel, Thomas; Hamelmann, Eckard; Siebert, Uwe; Sroczynski, Gaby; Wasem, Jürgen; Biermann-Stallwitz, Janine
2018-03-24
Specific immunotherapy is the only causal treatment in respiratory allergy. Due to high treatment cost and possible severe side effects, subcutaneous immunotherapy (SCIT) is not indicated in all patients. Nevertheless, reported treatment rates seem to be low. This study aims to analyze the effects of increasing treatment rates of SCIT in respiratory allergy in terms of costs and quality-adjusted life years (QALYs). A state-transition Markov model simulates the course of disease of patients with allergic rhinitis, allergic asthma and both diseases over 10 years, including a symptom-free state and death. Treatment comprises symptomatic pharmacotherapy alone or combined with SCIT. The model compares two strategies: increased and status quo treatment rates. Transition probabilities are based on routine data. Costs are calculated from the societal perspective, applying German unit costs to literature-derived resource consumption. QALYs are determined by translating the mean change in non-preference-based quality of life scores into a change in utility. Key parameters are subjected to deterministic sensitivity analyses. Increasing treatment rates is a cost-effective strategy, with an incremental cost-effectiveness ratio (ICER) of 3484€/QALY compared to the status quo. The most influential parameters are SCIT discontinuation rates, treatment effects on the transition probabilities, and the cost of SCIT. Across all parameter variations, the best case leads to dominance of increased treatment rates, while the worst-case ICER is 34,315€/QALY. Excluding indirect costs leads to a twofold increase in the ICER. Measures to increase SCIT initiation rates should be implemented and should also address improving adherence.
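To make the mechanics of such a state-transition cohort evaluation concrete, here is a toy sketch of how an ICER is computed from two strategies; every state, transition probability, cost and utility below is an invented placeholder, not an input of the published model.

```python
import numpy as np

states = ["rhinitis", "asthma", "both", "symptom-free", "dead"]
P_pharma = np.array([[0.80, 0.10, 0.05, 0.04, 0.01],
                     [0.00, 0.85, 0.10, 0.04, 0.01],
                     [0.00, 0.00, 0.90, 0.08, 0.02],
                     [0.10, 0.05, 0.00, 0.84, 0.01],
                     [0.00, 0.00, 0.00, 0.00, 1.00]])
P_scit = np.array([[0.65, 0.08, 0.04, 0.22, 0.01],
                   [0.00, 0.72, 0.08, 0.19, 0.01],
                   [0.00, 0.00, 0.78, 0.20, 0.02],
                   [0.05, 0.03, 0.00, 0.91, 0.01],
                   [0.00, 0.00, 0.00, 0.00, 1.00]])
cost_pharma = np.array([400.0, 900.0, 1200.0, 0.0, 0.0])   # EUR/year by state
cost_scit = cost_pharma + np.array([600.0, 600.0, 600.0, 0.0, 0.0])
utility = np.array([0.85, 0.80, 0.75, 0.95, 0.0])          # QALY weight by state

def run(P, cost, years=10, disc=0.03):
    x = np.array([1.0, 0.0, 0.0, 0.0, 0.0])   # cohort starts in "rhinitis"
    c = q = 0.0
    for t in range(years):
        d = (1.0 + disc) ** -t                # annual discounting
        c += d * (x @ cost)
        q += d * (x @ utility)
        x = x @ P                             # one annual Markov cycle
    return c, q

c0, q0 = run(P_pharma, cost_pharma)
c1, q1 = run(P_scit, cost_scit)
print(f"ICER = {(c1 - c0) / (q1 - q0):,.0f} EUR/QALY")
```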
Scattering effects of machined optical surfaces
NASA Astrophysics Data System (ADS)
Thompson, Anita Kotha
1998-09-01
Optical fabrication is one of the most labor-intensive industries in existence. Lensmakers use pitch to affix glass blanks to metal chucks that hold the glass as they grind it with tools that have not changed much in fifty years. Recent demands placed on traditional optical fabrication processes in terms of surface accuracy, smoothness, and cost effectiveness have resulted in the exploitation of precision machining technology to develop a new generation of computer numerically controlled (CNC) optical fabrication equipment. This new kind of precision machining process is called deterministic microgrinding. The most conspicuous feature of optical surfaces manufactured by precision machining processes (such as single-point diamond turning or deterministic microgrinding) is the presence of residual cutting tool marks. These residual tool marks exhibit a highly structured topography of periodic azimuthal or radial deterministic marks in addition to random microroughness. These distinct topographic features give rise to surface scattering effects that can significantly degrade optical performance. In this dissertation project we investigate the scattering behavior of machined optical surfaces and their imaging characteristics. In particular, we characterize the residual optical fabrication errors and relate the resulting scattering behavior to the tool and machine parameters in order to evaluate and improve the deterministic microgrinding process. Further desired information derived from the investigation of scattering behavior is the set of optical fabrication tolerances necessary to satisfy specific image quality requirements. Optical fabrication tolerances are a major cost driver for any precision optical manufacturing technology. The derivation and control of the optical fabrication tolerances necessary for different applications and operating wavelength regimes will play a unique and central role in establishing deterministic microgrinding as a preferred and cost-effective optical fabrication process. Other well-understood optical fabrication processes will also be reviewed, and a performance comparison with the conventional grinding and polishing technique will be made to determine any inherent advantages in the optical quality of surfaces produced by other techniques.
Diffusive and subdiffusive dynamics of indoor microclimate: a time series modeling.
Maciejewska, Monika; Szczurek, Andrzej; Sikora, Grzegorz; Wyłomańska, Agnieszka
2012-09-01
The indoor microclimate is an issue in modern society, where people spend about 90% of their time indoors. Temperature and relative humidity are commonly used for its evaluation. In this context, the two parameters are usually considered as behaving in the same manner, just inversely correlated. This opinion comes from observation of the deterministic components of temperature and humidity time series. We focus on the dynamics and the dependency structure of the time series of these parameters with the deterministic components removed. Here we apply the mean square displacement, the autoregressive integrated moving average (ARIMA), and the methodology for studying anomalous diffusion. The analyzed data originated from five monitoring locations inside a modern office building, covering a period of nearly one week. It was found that the temperature data exhibited a transition between diffusive and subdiffusive behavior when the building occupancy pattern changed from the weekday to the weekend pattern. At the same time, the relative humidity consistently showed diffusive character. The dependency structures of the temperature and humidity data sets also differed, as shown by the different structures of the ARIMA models found appropriate for each. In the space domain, the dynamics and dependency structure of each parameter were preserved. This work proposes an approach to describing the very complex conditions of indoor air, and it contributes to the improvement of the representative character of microclimate monitoring.
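The MSD diagnostic used above distinguishes diffusive from subdiffusive scaling through the exponent alpha in MSD(tau) ~ tau^alpha; a minimal sketch on a synthetic series follows (the data are placeholders, not the building measurements).

```python
import numpy as np

def msd(x, max_lag):
    # time-averaged mean square displacement over increasing lags
    return np.array([np.mean((x[lag:] - x[:-lag]) ** 2) for lag in range(1, max_lag)])

rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(size=5000))   # ordinary Brownian-like series: MSD ~ lag^1

lags = np.arange(1, 200)
alpha = np.polyfit(np.log(lags), np.log(msd(x, 200)), 1)[0]
print("estimated scaling exponent alpha ~", round(alpha, 2))
# alpha ~ 1 indicates diffusive behavior; alpha < 1 indicates subdiffusion
```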
Stochastic reduced order models for inverse problems under uncertainty
Warner, James E.; Aquino, Wilkins; Grigoriu, Mircea D.
2014-01-01
This work presents a novel methodology for solving inverse problems under uncertainty using stochastic reduced order models (SROMs). Given statistical information about an observed state variable in a system, unknown parameters are estimated probabilistically through the solution of a model-constrained, stochastic optimization problem. The point of departure and crux of the proposed framework is the representation of a random quantity using an SROM - a low-dimensional, discrete approximation to a continuous random element that permits efficient and non-intrusive stochastic computations. Characterizing the uncertainties with SROMs transforms the stochastic optimization problem into a deterministic one. The non-intrusive nature of SROMs facilitates efficient gradient computations for random vector unknowns and relies entirely on calls to existing deterministic solvers. Furthermore, the method is naturally extended to handle multiple sources of uncertainty in cases where state variable data, system parameters, and boundary conditions are all considered random. The new and widely applicable SROM framework is formulated for a general stochastic optimization problem in terms of an abstract objective function and constraining model. For demonstration purposes, however, we study its performance in the specific case of inverse identification of random material parameters in elastodynamics. We demonstrate the ability to efficiently recover random shear moduli given material displacement statistics as input data. We also show that the approach remains effective for the case where the loading in the problem is random as well. PMID:25558115
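To illustrate what an SROM is, the sketch below builds a small discrete approximation to a continuous random variable by optimizing probability weights to match the target's moments and CDF; the target distribution, sample count, and objective weights are arbitrary choices for illustration, not the paper's formulation.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import lognorm

target = lognorm(s=0.3, scale=1.0)                 # assumed continuous target
x = np.sort(target.rvs(size=8, random_state=0))    # fixed SROM sample locations

def mismatch(p):
    # squared errors in mean, second moment, and CDF on a coarse grid
    mean_err = (p @ x - target.mean()) ** 2
    m2_err = (p @ x**2 - (target.mean() ** 2 + target.var())) ** 2
    grid = np.linspace(x.min(), x.max(), 50)
    cdf_srom = np.array([(p * (x <= g)).sum() for g in grid])
    cdf_err = np.mean((cdf_srom - target.cdf(grid)) ** 2)
    return mean_err + m2_err + cdf_err

cons = ({"type": "eq", "fun": lambda p: p.sum() - 1.0},)
res = minimize(mismatch, np.full(8, 1 / 8), bounds=[(0, 1)] * 8, constraints=cons)
print("SROM samples      :", x.round(3))
print("SROM probabilities:", res.x.round(3))
```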
NASA Astrophysics Data System (ADS)
mouloud, Hamidatou
2016-04-01
The objective of this paper is to analyze the seismic activity of the Constantine region and to treat statistically its seismicity catalog, which spans 1357 to 2014 and contains 7007 seismic events. Our research is a contribution to improving seismic risk management by evaluating the seismic hazard in north-east Algeria. In the present study, earthquake hazard maps for the Constantine region are calculated. Probabilistic seismic hazard analysis (PSHA) is classically performed through the Cornell approach, using a uniform earthquake distribution over the source area and a given magnitude range. This study aims at extending the PSHA approach to the case of a characteristic earthquake scenario associated with an active fault. The approach integrates PSHA with a high-frequency deterministic technique for the prediction of peak and spectral ground motion parameters in a characteristic earthquake. The method is based on the site-dependent evaluation of the probability of exceedance for the chosen strong-motion parameter. We propose five seismotectonic zones. Five steps are necessary: (i) identification of potential sources of future earthquakes, (ii) assessment of their geological, geophysical and geometric characteristics, (iii) identification of the attenuation pattern of seismic motion, (iv) calculation of the hazard at a site, and finally (v) hazard mapping for a region. In this study, the earthquake hazard evaluation procedure developed by Kijko and Sellevoll (1992) is used to estimate seismic hazard parameters in the northern part of Algeria.
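As a reminder of the basic probabilistic step in a Cornell-type PSHA, the sketch below converts a hypothetical Gutenberg-Richter recurrence law into exceedance probabilities under the usual Poisson assumption; the a- and b-values and exposure period are invented for illustration, not the Constantine results.

```python
import numpy as np

a, b = 4.5, 1.0    # hypothetical Gutenberg-Richter parameters for one zone
T = 50.0           # exposure period (years)

def annual_rate(m):
    # annual rate of events with magnitude >= m (untruncated G-R law)
    return 10.0 ** (a - b * m)

for m in (5.0, 6.0, 7.0):
    lam = annual_rate(m)
    # Poisson assumption: P(at least one exceedance in T years)
    p = 1.0 - np.exp(-lam * T)
    print(f"M>={m}: rate={lam:.4f}/yr, P(exceedance in {T:.0f} yr)={p:.2%}")
```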
Thermal treatment of the minority game
NASA Astrophysics Data System (ADS)
Burgos, E.; Ceva, Horacio; Perazzo, R. P.
2002-03-01
We study a cost function for the aggregate behavior of all the agents involved in the minority game (MG) or the bar attendance model (BAM). The cost function allows us to define a deterministic, synchronous dynamic that yields results sharing the main relevant features with those of the probabilistic, sequential dynamics used for the MG or the BAM. We define a temperature through a Langevin approach in terms of the fluctuations of the average attendance. We prove that the cost function is an extensive quantity that can play the role of an internal energy of the many-agent system, while the temperature so defined is an intensive parameter. We compare the results of the thermal perturbation to the deterministic dynamics and prove that they agree with those obtained with the MG or BAM in the limit of very low temperature.
NASA Technical Reports Server (NTRS)
Bogdanoff, J. L.; Kayser, K.; Krieger, W.
1977-01-01
The paper describes convergence and response studies in the low-frequency range of complex systems, particularly for systems with low damping values and different damping distributions, and reports on the modification of the relaxation procedure required under these conditions. A new method is presented for response estimation in complex lumped-parameter linear systems under random or deterministic steady-state excitation. The essence of the method is the use of relaxation procedures with a suitable error function to find the estimated response; natural frequencies and normal modes are not computed. For a 45-degree-of-freedom system and two relaxation procedures, convergence studies and frequency response estimates were performed. The low-frequency studies are considered in the framework of earlier studies (Kayser and Bogdanoff, 1975) involving the mid- to high-frequency range.
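A minimal sketch of the underlying idea, solving the steady-state response equations by sweep relaxation rather than by modal analysis, is given below for an illustrative 5-DOF spring-mass chain; the matrices, damping level, and forcing are invented, and the paper's specific relaxation scheme and error function are not reproduced.

```python
import numpy as np

n, w, zeta = 5, 3.0, 0.01
M = np.eye(n)
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # spring-mass chain
C = zeta * K                                            # light proportional damping
A = K - w**2 * M + 1j * w * C                           # steady-state dynamic stiffness
f = np.zeros(n, complex); f[0] = 1.0                    # harmonic force on mass 1

x = np.zeros(n, complex)
for sweep in range(500):        # Gauss-Seidel-style relaxation sweeps
    for i in range(n):
        x[i] = (f[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]

# no eigenvalue problem was solved; compare against a direct solve
print("relaxation:", np.round(np.abs(x), 4))
print("direct    :", np.round(np.abs(np.linalg.solve(A, f)), 4))
```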
Evaluation of the Plant-Craig stochastic convection scheme in an ensemble forecasting system
NASA Astrophysics Data System (ADS)
Keane, R. J.; Plant, R. S.; Tennant, W. J.
2015-12-01
The Plant-Craig stochastic convection parameterization (version 2.0) is implemented in the Met Office Regional Ensemble Prediction System (MOGREPS-R) and is assessed in comparison with the standard convection scheme, whose only stochastic element is random parameter variation. A set of 34 ensemble forecasts, each with 24 members, is considered, over the month of July 2009. Deterministic and probabilistic measures of the precipitation forecasts are assessed. The Plant-Craig parameterization is found to improve probabilistic forecast measures, particularly the results for lower precipitation thresholds. The impact on deterministic forecasts at the grid scale is neutral, although the Plant-Craig scheme does deliver improvements when forecasts are made over larger areas. The improvements found are greater in conditions of relatively weak synoptic forcing, for which convective precipitation is likely to be less predictable.
Evolution with Stochastic Fitness and Stochastic Migration
Rice, Sean H.; Papadopoulos, Anthony
2009-01-01
Background Migration between local populations plays an important role in evolution - influencing local adaptation, speciation, extinction, and the maintenance of genetic variation. Like other evolutionary mechanisms, migration is a stochastic process, involving both random and deterministic elements. Many models of evolution have incorporated migration, but these have all been based on simplifying assumptions, such as low migration rate, weak selection, or large population size. We thus have no truly general and exact mathematical description of evolution that incorporates migration. Methodology/Principal Findings We derive an exact equation for directional evolution, essentially a stochastic Price equation with migration, that encompasses all processes, both deterministic and stochastic, contributing to directional change in an open population. Using this result, we show that increasing the variance in migration rates reduces the impact of migration relative to selection. This means that models that treat migration as a single parameter tend to be biased, overestimating the relative impact of immigration. We further show that selection and migration interact in complex ways, one result being that a strategy for which fitness is negatively correlated with migration rates (high fitness when migration is low) will tend to increase in frequency, even if it has lower mean fitness than do other strategies. Finally, we derive an equation for the effective migration rate, which allows some of the complex stochastic processes that we identify to be incorporated into models with a single migration parameter. Conclusions/Significance As has previously been shown with selection, the role of migration in evolution is determined by the entire distributions of immigration and emigration rates, not just by the mean values. The interactions of stochastic migration with stochastic selection produce evolutionary processes that are invisible to deterministic evolutionary theory. PMID:19816580
Zahariev, Federico; De Silva, Nuwan; Gordon, Mark S.; ...
2017-02-23
Here, a newly created object-oriented program for automating the process of fitting molecular-mechanics parameters to ab initio data, termed ParFit, is presented. ParFit uses a hybrid of deterministic and stochastic genetic algorithms. ParFit can simultaneously handle several molecular-mechanics parameters in multiple molecules and can also apply symmetric and antisymmetric constraints on the optimized parameters. The simultaneous handling of several molecules enhances the transferability of the fitted parameters. ParFit is written in Python, uses a rich set of standard and nonstandard Python libraries, and can be run in parallel on multicore computer systems. As an example, a series of phosphine oxides, important for metal extraction chemistry, are parametrized using ParFit.
A modified Leslie-Gower predator-prey interaction model and parameter identifiability
NASA Astrophysics Data System (ADS)
Tripathi, Jai Prakash; Meghwani, Suraj S.; Thakur, Manoj; Abbas, Syed
2018-01-01
In this work, bifurcation and a systematic approach for the estimation of identifiable parameters of a modified Leslie-Gower predator-prey system with Crowley-Martin functional response and prey refuge are discussed. Global asymptotic stability is established by applying the fluctuation lemma. The system undergoes Hopf bifurcation with respect to the intrinsic growth rate of predators (s) and the prey reserve (m). The stability of the Hopf bifurcation is also discussed by calculating the Lyapunov number. A sensitivity analysis of the considered model system with respect to all variables is performed, which also supports our theoretical study. To estimate the unknown parameters from data, an optimization procedure (pseudo-random search algorithm) is adopted. System responses and phase plots for estimated parameters are also compared with true, noise-free data. It is found that the system dynamics with the true set of parameter values is similar to that with the estimated parameter values. Numerical simulations are presented to substantiate the analytical findings.
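For readers who want to experiment, here is an illustrative simulation of one commonly used form of a modified Leslie-Gower model with Crowley-Martin functional response and a constant-proportion prey refuge; the functional forms and every parameter value are assumptions for illustration, not the paper's estimated system.

```python
import numpy as np
from scipy.integrate import solve_ivp

# assumed parameters: prey growth/self-limitation, capture and interference
# coefficients, predator growth, and refuge fraction m
r1, b1, c1, a, b = 1.0, 0.1, 0.8, 0.5, 0.3
s, c2, k2, m = 0.4, 0.6, 10.0, 0.2

def rhs(t, z):
    x, y = z
    xa = (1 - m) * x                                   # prey outside the refuge
    pred = c1 * xa * y / ((1 + a * xa) * (1 + b * y))  # Crowley-Martin response
    return [x * (r1 - b1 * x) - pred,
            y * (s - c2 * y / (xa + k2))]              # modified Leslie-Gower growth

sol = solve_ivp(rhs, (0, 200), [5.0, 2.0], rtol=1e-8, atol=1e-10)
print("state at t=200:", sol.y[:, -1])   # inspect approach to equilibrium or cycle
```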
NASA Astrophysics Data System (ADS)
Meza Conde, Eustorgio
The Hybrid Wave Model (HWM) is a deterministic nonlinear wave model developed for the computation of wave properties in the vicinity of ocean wave measurements. The HWM employs both Mode-Coupling and Phase Modulation Methods to model the wave-wave interactions in an ocean wave field. Different from other nonlinear wave models, the HWM decouples the nonlinear wave interactions from ocean wave field measurements and decomposes the wave field into a set of free-wave components. In this dissertation the HWM is applied to the prediction of wave elevation from pressure measurements and to the quantification of energy during breaking of long-crested irregular surface waves. 1. A transient wave train was formed in a two-dimensional wave flume by sequentially generating a series of waves from high to low frequencies that superposed at a downstream location. The predicted wave elevation using the HWM based on the pressure measurement of a very steep transient wave train is in excellent agreement with the corresponding elevation measurement, while that using Linear Wave Theory (LWT) has relatively large discrepancies. Furthermore, the predicted elevation using the HWM is not sensitive to the choice of the cutoff frequency, while that using LWT is very sensitive. 2. Several transient wave trains containing an isolated plunging or spilling breaker at a prescribed location were generated in a two-dimensional wave flume using the same superposition technique. Surface elevation measurements of each transient wave train were made at locations before and after breaking. Applying the HWM nonlinear deterministic decomposition to the measured elevation, the free-wave components comprising the transient wave train were derived. By comparing the free-wave spectra before and after breaking, it is found that energy loss was almost exclusively from wave components at frequencies higher than the spectral peak frequency. Even though the wave components near the peak frequency are the largest, they do not significantly gain or lose energy after breaking. It was also observed that wave components of frequencies significantly below or near the peak frequency gain a small portion of the energy lost by the high-frequency waves. These findings may have important implications for the ocean wave energy budget.
NASA Technical Reports Server (NTRS)
Challa, M.; Natanson, G.
1998-01-01
Two different algorithms, a deterministic magnetic-field-only algorithm and a Kalman filter for gyroless spacecraft, are used to estimate the attitude and rates of the Rossi X-Ray Timing Explorer (RXTE) using only measurements from a three-axis magnetometer. The performance of these algorithms is examined using in-flight data from various scenarios. In particular, significant enhancements in accuracy are observed when the telemetered magnetometer data are accurately calibrated using a recently developed calibration algorithm. Interesting features observed in these studies of the inertial-pointing RXTE include a remarkable sensitivity of the filter to the numerical values of the noise parameters and relatively long convergence time spans. Correspondingly, the accuracy of the deterministic scheme is noticeably lower as a result of reduced rates of change of the body-fixed geomagnetic field. Preliminary results show per-axis filter attitude accuracies ranging between 0.1 and 0.5 deg and rate accuracies between 0.001 deg/sec and 0.005 deg/sec, whereas the deterministic method needs more sophisticated techniques for smoothing time derivatives of the measured geomagnetic field to clearly distinguish both attitude and rate solutions from the numerical noise. Also included is a new theoretical development in the deterministic algorithm: the transformation of a transcendental equation in the original theory into an 8th-order polynomial equation. It is shown that this 8th-order polynomial reduces to quadratic equations in the two limiting cases (infinitely high wheel momentum and constant rates) discussed in previous publications.
The Diffusion Model Is Not a Deterministic Growth Model: Comment on Jones and Dzhafarov (2014)
Smith, Philip L.; Ratcliff, Roger; McKoon, Gail
2015-01-01
Jones and Dzhafarov (2014) claim that several current models of speeded decision making in cognitive tasks, including the diffusion model, can be viewed as special cases of other general models or model classes. The general models can be made to match any set of response time (RT) distribution and accuracy data exactly by a suitable choice of parameters and so are unfalsifiable. The implication of their claim is that models like the diffusion model are empirically testable only by artificially restricting them to exclude unfalsifiable instances of the general model. We show that Jones and Dzhafarov’s argument depends on enlarging the class of “diffusion” models to include models in which there is little or no diffusion. The unfalsifiable models are deterministic or near-deterministic growth models, from which the effects of within-trial variability have been removed or in which they are constrained to be negligible. These models attribute most or all of the variability in RT and accuracy to across-trial variability in the rate of evidence growth, which is permitted to be distributed arbitrarily and to vary freely across experimental conditions. In contrast, in the standard diffusion model, within-trial variability in evidence is the primary determinant of variability in RT. Across-trial variability, which determines the relative speed of correct responses and errors, is theoretically and empirically constrained. Jones and Dzhafarov’s attempt to include the diffusion model in a class of models that also includes deterministic growth models misrepresents and trivializes it and conveys a misleading picture of cognitive decision-making research. PMID:25347314
Multi-parametric variational data assimilation for hydrological forecasting
NASA Astrophysics Data System (ADS)
Alvarado-Montero, R.; Schwanenberg, D.; Krahe, P.; Helmke, P.; Klein, B.
2017-12-01
Ensemble forecasting is increasingly applied in flow forecasting systems to provide users with a better understanding of forecast uncertainty and consequently to support better-informed decisions. A common practice in probabilistic streamflow forecasting is to force a deterministic hydrological model with an ensemble of numerical weather predictions. This approach aims at representing meteorological uncertainty but neglects the uncertainty of the hydrological model as well as of its initial conditions. Complementary approaches use probabilistic data assimilation techniques to obtain a variety of initial states, or represent model uncertainty by model pools instead of single deterministic models. This paper introduces a novel approach that extends a variational data assimilation based on Moving Horizon Estimation to enable the assimilation of observations into multi-parametric model pools. It results in a probabilistic estimate of initial model states that takes into account the parametric model uncertainty in the data assimilation. The assimilation technique is applied to the uppermost area of the River Main in Germany. We use different parametric pools, each of them with five parameter sets, to assimilate streamflow data as well as remotely sensed data from the H-SAF project. We assess the impact of the assimilation on the lead-time performance of perfect forecasts (i.e., observed data as forcing variables) as well as of deterministic and probabilistic forecasts from ECMWF. The multi-parametric assimilation shows an improvement of up to 23% in CRPS performance and approximately 20% in Brier Skill Scores with respect to the deterministic approach. It also improves the skill of the forecast in terms of the rank histogram and produces a narrower ensemble spread.
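For reference, the sample-based CRPS quoted above can be computed for an ensemble forecast with the standard estimator CRPS = E|X - y| - 0.5 E|X - X'|; the discharge numbers below are synthetic, and for a one-member (deterministic) forecast the CRPS reduces to the absolute error.

```python
import numpy as np

def crps_ensemble(members, obs):
    # empirical CRPS: E|X - y| - 0.5 * E|X - X'| over ensemble members X
    members = np.asarray(members, float)
    term1 = np.mean(np.abs(members - obs))
    term2 = 0.5 * np.mean(np.abs(members[:, None] - members[None, :]))
    return term1 - term2

rng = np.random.default_rng(0)
obs = 120.0                              # observed discharge (m^3/s), synthetic
ens = rng.normal(110, 15, size=20)       # hypothetical 20-member forecast
det = np.array([110.0])                  # deterministic forecast as a 1-member set

print("ensemble CRPS              :", crps_ensemble(ens, obs))
print("deterministic CRPS (=|err|):", crps_ensemble(det, obs))
```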
A Comparison of Monte Carlo and Deterministic Solvers for keff and Sensitivity Calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haeck, Wim; Parsons, Donald Kent; White, Morgan Curtis
Verification and validation of our solutions for calculating the neutron reactivity for nuclear materials is a key issue to address for many applications, including criticality safety, research reactors, power reactors, and nuclear security. Neutronics codes solve variations of the Boltzmann transport equation. The two main variants are Monte Carlo versus deterministic solutions, e.g. the MCNP [1] versus PARTISN [2] codes, respectively. There have been many studies over the decades that examined the accuracy of such solvers and the general conclusion is that when the problems are well-posed, either solver can produce accurate results. However, the devil is always in the details. The current study examines the issue of self-shielding and the stress it puts on deterministic solvers. Most Monte Carlo neutronics codes use continuous-energy descriptions of the neutron interaction data that are not subject to this effect. The issue of self-shielding occurs because of the discretisation of data used by the deterministic solutions. Multigroup data used in these solvers are the average cross section and scattering parameters over an energy range. Resonances in cross sections can occur that change the likelihood of interaction by one to three orders of magnitude over a small energy range. Self-shielding is the numerical effect that the average cross section in groups with strong resonances can be strongly affected as neutrons within that material are preferentially absorbed or scattered out of the resonance energies. This affects both the average cross section and the scattering matrix.
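A toy numerical illustration of the self-shielding effect described above: with a resonance inside a group, the resonance-depressed flux weights the group-average cross section down relative to a flat-flux (infinite-dilution) average. The Lorentzian resonance shape, the background level, and the dilution cross section are all invented.

```python
import numpy as np

E = np.linspace(1.0, 2.0, 2001)                        # energy grid (arbitrary units)
sigma = 1.0 + 300.0 / (1.0 + ((E - 1.5) / 0.01) ** 2)  # background + narrow resonance

phi_flat = np.ones_like(E)            # infinite-dilution (flat) flux
phi_shielded = 1.0 / (sigma + 50.0)   # narrow-resonance flux ~ 1/(sigma + sigma_0)

# flux-weighted group-average cross section on a uniform grid
group_avg = lambda phi: (sigma * phi).sum() / phi.sum()
print("unshielded group sigma   :", round(group_avg(phi_flat), 2))
print("self-shielded group sigma:", round(group_avg(phi_shielded), 2))
```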
DOE Office of Scientific and Technical Information (OSTI.GOV)
Biondo, Elliott D; Ibrahim, Ahmad M; Mosher, Scott W
2015-01-01
Detailed radiation transport calculations are necessary for many aspects of the design of fusion energy systems (FES) such as ensuring occupational safety, assessing the activation of system components for waste disposal, and maintaining cryogenic temperatures within superconducting magnets. Hybrid Monte Carlo (MC)/deterministic techniques are necessary for this analysis because FES are large, heavily shielded, and contain streaming paths that can only be resolved with MC. The tremendous complexity of FES necessitates the use of CAD geometry for design and analysis. Previous ITER analysis has required the translation of CAD geometry to MCNP5 form in order to use the AutomateD VAriaNce reducTion Generator (ADVANTG) for hybrid MC/deterministic transport. In this work, ADVANTG was modified to support CAD geometry, allowing hybrid MC/deterministic transport to be done automatically and eliminating the need for this translation step. This was done by adding a new ray tracing routine to ADVANTG for CAD geometries using the Direct Accelerated Geometry Monte Carlo (DAGMC) software library. This new capability is demonstrated with a prompt dose rate calculation for an ITER computational benchmark problem using both the Consistent Adjoint Driven Importance Sampling (CADIS) method and the Forward Weighted (FW)-CADIS method. The variance reduction parameters produced by ADVANTG are shown to be the same using CAD geometry and standard MCNP5 geometry. Significant speedups were observed for both neutrons (as high as a factor of 7.1) and photons (as high as a factor of 59.6).
A Bayesian hierarchical model with novel prior specifications for estimating HIV testing rates.
An, Qian; Kang, Jian; Song, Ruiguang; Hall, H Irene
2016-04-30
Human immunodeficiency virus (HIV) infection is a severe infectious disease actively spreading globally, and acquired immunodeficiency syndrome (AIDS) is an advanced stage of HIV infection. The HIV testing rate, that is, the probability that an AIDS-free HIV-infected person seeks a test for HIV during a particular time interval, given that no previous positive test has been obtained prior to the start of the interval, is an important parameter for public health. In this paper, we propose a Bayesian hierarchical model with two levels of hierarchy to estimate the HIV testing rate using annual AIDS and AIDS-free HIV diagnoses data. At level one, we model the latent number of HIV infections for each year using a Poisson distribution with the intensity parameter representing the HIV incidence rate. At level two, the annual numbers of AIDS and AIDS-free HIV diagnosed cases and all undiagnosed cases, stratified by the year of HIV infection, are modeled using a multinomial distribution with parameters including the HIV testing rate. We propose a new class of priors for the HIV incidence rate and HIV testing rate that takes into account the temporal dependence of these parameters to improve the estimation accuracy. We develop an efficient posterior computation algorithm based on the adaptive rejection metropolis sampling technique. We demonstrate our model using simulation studies and the analysis of the national HIV surveillance data in the USA.
Fault Tolerant Optimal Control.
1982-08-01
[Fragmented excerpt.] ...[each] subsystem is modelled by deterministic or stochastic finite-dimensional vector differential or difference equations. The parameters of these equations... ...[there] is no partial differential equation that must be solved. Thus we can sidestep the inability to solve the Bellman equation for control problems with x... ...transition models and cost functionals can be reduced to the search for solutions of nonlinear partial differential equations using 'verification...
Agent-Based Deterministic Modeling of the Bone Marrow Homeostasis.
Kurhekar, Manish; Deshpande, Umesh
2016-01-01
Modeling of stem cells not only describes but also predicts how a stem cell's environment can control its fate. The first stem cell populations discovered were hematopoietic stem cells (HSCs). In this paper, we present a deterministic model of the bone marrow (which hosts HSCs) that is consistent with several qualitative biological observations. This model incorporates stem cell death (apoptosis) after a certain number of cell divisions and also demonstrates that a single HSC can potentially populate the entire bone marrow. It also demonstrates that a sufficient number of differentiated cells (RBCs, WBCs, etc.) is produced. We prove that our model of the bone marrow is biologically consistent and that it overcomes the biological feasibility limitations of previously reported models. The major contribution of our model is the flexibility it allows in choosing model parameters, which permits several different simulations to be carried out in silico without affecting the homeostatic properties of the model. We have also performed an agent-based simulation of the bone marrow model proposed in this paper, and we include parameter details and the results obtained from the simulation. The program for the agent-based simulation of the proposed model is made available on a publicly accessible website.
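The paper's actual model is not reproduced here; the toy agent sketch below only illustrates the two rules highlighted in the abstract (a division limit followed by apoptosis, plus production of differentiated cells), with placeholder rates.

```python
import random

DIVISION_LIMIT = 10     # assumed divisions before apoptosis
P_DIFFERENTIATE = 0.4   # assumed chance a division is differentiating
STEPS = 25
random.seed(0)

stem = [0]              # one founding HSC; value = divisions completed so far
differentiated = 0
for _ in range(STEPS):
    next_gen = []
    for divisions in stem:
        if divisions >= DIVISION_LIMIT:
            continue                       # apoptosis after the division limit
        if random.random() < P_DIFFERENTIATE:
            differentiated += 2            # symmetric differentiating division
        else:
            next_gen += [divisions + 1, divisions + 1]   # symmetric self-renewal
    stem = next_gen

print("stem cells:", len(stem), "| differentiated cells:", differentiated)
```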
A Stochastic Differential Equation Model for the Spread of HIV amongst People Who Inject Drugs.
Liang, Yanfeng; Greenhalgh, David; Mao, Xuerong
2016-01-01
We introduce stochasticity into the deterministic differential equation model for the spread of HIV amongst people who inject drugs (PWIDs) studied by Greenhalgh and Hay (1997). This was based on the original model constructed by Kaplan (1989), which analyses the behaviour of HIV/AIDS amongst a population of PWIDs. We derive a stochastic differential equation (SDE) for the fraction of PWIDs who are infected with HIV at time t. The stochasticity is introduced using the well-known standard technique of parameter perturbation. We first prove that the resulting SDE for the fraction of infected PWIDs has a unique solution in (0, 1) provided that some infected PWIDs are initially present, and next construct the conditions required for extinction and persistence. Furthermore, we show that there exists a stationary distribution in the persistence case. Simulations using realistic parameter values are then constructed to illustrate and support our theoretical results. Our results provide new insight into the spread of HIV amongst PWIDs. They show that the introduction of stochastic noise into a model for the spread of HIV amongst PWIDs can cause the disease to die out in scenarios where deterministic models predict disease persistence.
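A minimal Euler-Maruyama sketch in the spirit of the model above: noise enters by perturbing the transmission parameter, so the diffusion term vanishes at i = 0 and i = 1 and the numerical path stays in (0, 1); the drift and all coefficient values are illustrative, not the paper's exact terms.

```python
import numpy as np

beta, gamma, sigma = 0.8, 0.3, 0.25   # transmission, removal, noise strength (assumed)
T, n = 50.0, 50_000
dt = T / n
rng = np.random.default_rng(42)

i = 0.05                               # initial infected fraction
for _ in range(n):
    drift = beta * i * (1 - i) - gamma * i
    diff = sigma * i * (1 - i)         # perturbing beta scales noise by i(1 - i)
    i += drift * dt + diff * np.sqrt(dt) * rng.standard_normal()
    i = min(max(i, 1e-12), 1 - 1e-12)  # guard the discrete path inside (0, 1)

print("infected fraction at T:", i)
```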
Experimental study and simulation of space charge stimulated discharge
NASA Astrophysics Data System (ADS)
Noskov, M. D.; Malinovski, A. S.; Cooke, C. M.; Wright, K. A.; Schwab, A. J.
2002-11-01
The electrical discharge of volume distributed space charge in poly(methylmethacrylate) (PMMA) has been investigated both experimentally and by computer simulation. The experimental space charge was implanted in dielectric samples by exposure to a monoenergetic electron beam of 3 MeV. Electrical breakdown through the implanted space charge region within the sample was initiated by a local electric field enhancement applied to the sample surface. A stochastic-deterministic dynamic model for electrical discharge was developed and used in a computer simulation of these breakdowns. The model employs stochastic rules to describe the physical growth of the discharge channels, and deterministic laws to describe the electric field, the charge, and energy dynamics within the discharge channels and the dielectric. Simulated spatial-temporal and current characteristics of the expanding discharge structure during physical growth are quantitatively compared with the experimental data to confirm the discharge model. It was found that a single fixed set of physically based dielectric parameter values was adequate to simulate the complete family of experimental space charge discharges in PMMA. It is proposed that such a set of parameters also provides a useful means to quantify the breakdown properties of other dielectrics.
Tradeoff methods in multiobjective insensitive design of airplane control systems
NASA Technical Reports Server (NTRS)
Schy, A. A.; Giesy, D. P.
1984-01-01
The latest results of an ongoing study of computer-aided design of airplane control systems are given. Constrained minimization algorithms are used, with the design objectives in the constraint vector. The concept of Pareto optimality is briefly reviewed. It is shown how an experienced designer can use it to find designs which are well balanced in all objectives. Then the problem of finding designs which are insensitive to uncertainty in system parameters is discussed, introducing a probabilistic vector definition of sensitivity which is consistent with the deterministic Pareto optimal problem. Insensitivity is important in any practical design, but it is particularly important in the design of feedback control systems, since it is considered to be the most important distinctive property of feedback control. Methods of tradeoff between deterministic and stochastic-insensitive (SI) design are described, and tradeoff design results are presented for the example of a Shuttle lateral stability augmentation system. This example is used because careful studies have been made of the uncertainty in Shuttle aerodynamics. Finally, since accurate statistics of uncertain parameters are usually not available, the effects of crude statistical models on SI designs are examined.
Deterministic ripple-spreading model for complex networks.
Hu, Xiao-Bing; Wang, Ming; Leeson, Mark S; Hines, Evor L; Di Paolo, Ezequiel
2011-04-01
This paper proposes a deterministic complex network model, which is inspired by the natural ripple-spreading phenomenon. The motivations and main advantages of the model are the following: (i) The establishment of many real-world networks is a dynamic process, where it is often observed that the influence of a few local events spreads out through nodes and then largely determines the final network topology. Obviously, this dynamic process involves many spatial and temporal factors. By simulating the natural ripple-spreading process, this paper reports a very natural way to set up a spatial and temporal model for such complex networks. (ii) Existing relevant network models are all stochastic models, i.e., with a given input, they cannot output a unique topology. By contrast, the proposed ripple-spreading model can uniquely determine the final network topology, and at the same time the stochastic feature of complex networks is captured by randomly initializing the ripple-spreading-related parameters. (iii) The proposed model can use an easily manageable number of ripple-spreading-related parameters to precisely describe a network topology, which is more memory-efficient when compared with a traditional adjacency matrix or similar memory-expensive data structures. (iv) The ripple-spreading model has very good potential for both extensions and applications.
Dinov, Martin; Leech, Robert
2017-01-01
Part of the process of EEG microstate estimation involves clustering EEG channel data at the global field power (GFP) maxima, very commonly using a modified K-means (KM) approach. Clustering has also been done deterministically, despite there being uncertainties in multiple stages of the microstate analysis, including the GFP peak definition, the clustering itself, and the post-clustering assignment of microstates back onto the EEG timecourse of interest. We perform fully probabilistic microstate clustering and labeling, to account for these sources of uncertainty using the closest probabilistic analog to KM, called Fuzzy C-means (FCM). We train softmax multi-layer perceptrons (MLPs) using the KM- and FCM-inferred cluster assignments as target labels, to then allow for probabilistic labeling of the full EEG data instead of the usual correlation-based deterministic microstate label assignment typically used. We assess the merits of the probabilistic analysis vs. the deterministic approaches in EEG data recorded while participants perform real or imagined motor movements, from a publicly available data set of 109 subjects. Though FCM group template maps that are almost topographically identical to KM were found, there is considerable uncertainty in the subsequent assignment of microstate labels. In general, imagined motor movements are less predictable on a time point-by-time point basis, possibly reflecting the more exploratory nature of the brain state during imagined, compared to during real, motor movements. We find that some relationships may be more evident using FCM than using KM, and propose that future microstate analysis should preferably be performed probabilistically rather than deterministically, especially in situations such as with brain-computer interfaces, where both training and applying models of microstates need to account for uncertainty. Probabilistic neural-network-driven microstate assignment has a number of advantages that we have discussed, which are likely to be further developed and exploited in future studies. In conclusion, probabilistic clustering and a probabilistic neural-network-driven approach to microstate analysis are likely to better model and reveal details and the variability hidden in current deterministic and binarized microstate assignment and analyses.
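A compact, self-contained FCM implementation showing the soft-membership update that distinguishes it from KM; the toy 8-channel "topographies" are synthetic, and a real microstate pipeline would add polarity invariance and GFP-peak selection.

```python
import numpy as np

def fcm(X, c, m=2.0, iters=100, seed=0):
    # Fuzzy C-means: soft memberships U (rows sum to 1) and cluster centers
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c)); U /= U.sum(1, keepdims=True)
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(0)[:, None]                 # fuzzy-weighted means
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        U = d ** (-2.0 / (m - 1.0))                             # membership update
        U /= U.sum(1, keepdims=True)
    return centers, U

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(mu, 0.3, size=(50, 8)) for mu in (-1.0, 0.0, 1.0)])
centers, U = fcm(X, c=3)
print("soft labels of first sample:", U[0].round(2))   # probabilistic assignment
```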
Topology optimization under stochastic stiffness
NASA Astrophysics Data System (ADS)
Asadpoure, Alireza
Topology optimization is a systematic computational tool for optimizing the layout of materials within a domain for engineering design problems. It allows variation of structural boundaries and connectivities. This freedom in the design space often enables discovery of new, high performance designs. However, solutions obtained by performing the optimization in a deterministic setting may be impractical or suboptimal when considering real-world engineering conditions with inherent variabilities including (for example) variabilities in fabrication processes and operating conditions. The aim of this work is to provide a computational methodology for topology optimization in the presence of uncertainties associated with structural stiffness, such as uncertain material properties and/or structural geometry. Existing methods for topology optimization under deterministic conditions are first reviewed. Modifications are then proposed to improve the numerical performance of the so-called Heaviside Projection Method (HPM) in continuum domains. Next, two approaches, perturbation and Polynomial Chaos Expansion (PCE), are proposed to account for uncertainties in the optimization procedure. These approaches are intrusive, allowing tight and efficient coupling of the uncertainty quantification with the optimization sensitivity analysis. The work herein develops a robust topology optimization framework aimed at reducing the sensitivity of optimized solutions to uncertainties. The perturbation-based approach combines deterministic topology optimization with a perturbation method for the quantification of uncertainties. The use of perturbation transforms the problem of topology optimization under uncertainty to an augmented deterministic topology optimization problem. The PCE approach combines the spectral stochastic approach for the representation and propagation of uncertainties with an existing deterministic topology optimization technique. The resulting compact representations for the response quantities allow for efficient and accurate calculation of sensitivities of response statistics with respect to the design variables. The proposed methods are shown to be successful at generating robust optimal topologies. Examples from topology optimization in continuum and discrete domains (truss structures) under uncertainty are presented. It is also shown that the proposed methods lead to significant computational savings when compared to Monte Carlo-based optimization, which involves multiple formations and inversions of the global stiffness matrix, and that results obtained from the proposed method are in excellent agreement with those obtained from a Monte Carlo-based optimization algorithm.
STOCHASTIC OPTICS: A SCATTERING MITIGATION FRAMEWORK FOR RADIO INTERFEROMETRIC IMAGING
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Michael D., E-mail: mjohnson@cfa.harvard.edu
2016-12-10
Just as turbulence in the Earth’s atmosphere can severely limit the angular resolution of optical telescopes, turbulence in the ionized interstellar medium fundamentally limits the resolution of radio telescopes. We present a scattering mitigation framework for radio imaging with very long baseline interferometry (VLBI) that partially overcomes this limitation. Our framework, “stochastic optics,” derives from a simplification of strong interstellar scattering to separate small-scale (“diffractive”) effects from large-scale (“refractive”) effects, thereby separating deterministic and random contributions to the scattering. Stochastic optics extends traditional synthesis imaging by simultaneously reconstructing an unscattered image and its refractive perturbations. Its advantages over direct imaging come from utilizing the many deterministic properties of the scattering—such as the time-averaged “blurring,” polarization independence, and the deterministic evolution in frequency and time—while still accounting for the stochastic image distortions on large scales. These distortions are identified in the image reconstructions through regularization by their time-averaged power spectrum. Using synthetic data, we show that this framework effectively removes the blurring from diffractive scattering while reducing the spurious image features from refractive scattering. Stochastic optics can provide significant improvements over existing scattering mitigation strategies and is especially promising for imaging the Galactic Center supermassive black hole, Sagittarius A*, with the Global mm-VLBI Array and with the Event Horizon Telescope.
Peak Dose Assessment for Proposed DOE-PPPO Authorized Limits
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maldonado, Delis
2012-06-01
The Oak Ridge Institute for Science and Education (ORISE), a U.S. Department of Energy (DOE) prime contractor, was contracted by the DOE Portsmouth/Paducah Project Office (DOE-PPPO) to conduct a peak dose assessment in support of the Authorized Limits Request for Solid Waste Disposal at Landfill C-746-U at the Paducah Gaseous Diffusion Plant (DOE-PPPO 2011a). The peak doses were calculated based on the DOE-PPPO Proposed Single Radionuclides Soil Guidelines and the DOE-PPPO Proposed Authorized Limits (AL) Volumetric Concentrations available in DOE-PPPO 2011a. This work is provided as an appendix to the Dose Modeling Evaluations and Technical Support Document for the Authorized Limits Request for the C-746-U Landfill at the Paducah Gaseous Diffusion Plant, Paducah, Kentucky (ORISE 2012). The receptors evaluated in ORISE 2012 were selected by the DOE-PPPO for the additional peak dose evaluations. These receptors included a Landfill Worker, Trespasser, Resident Farmer (onsite), Resident Gardener, Recreational User, Outdoor Worker and an Offsite Resident Farmer. The RESRAD (Version 6.5) and RESRAD-OFFSITE (Version 2.5) computer codes were used for the peak dose assessments. Deterministic peak dose assessments were performed for all the receptors and a probabilistic dose assessment was performed only for the Offsite Resident Farmer at the request of the DOE-PPPO. In a deterministic analysis, a single input value results in a single output value. In other words, a deterministic analysis uses single parameter values for every variable in the code. By contrast, a probabilistic approach assigns parameter ranges to certain variables, and the code randomly selects the values for each variable from the parameter range each time it calculates the dose (NRC 2006). The receptor scenarios, computer codes and parameter input files were previously used in ORISE 2012. A few modifications were made to the parameter input files as appropriate for this effort. Some of these changes included increasing the time horizon beyond 1,050 years (yr), and using the radionuclide concentrations provided by the DOE-PPPO as inputs into the codes. The deterministic peak doses were evaluated within time horizons of 70 yr (for the Landfill Worker and Trespasser), 1,050 yr, 10,000 yr and 100,000 yr (for the Resident Farmer [onsite], Resident Gardener, Recreational User, Outdoor Worker and Offsite Resident Farmer) at the request of the DOE-PPPO. The time horizons of 10,000 yr and 100,000 yr were used at the request of the DOE-PPPO for informational purposes only. The probabilistic peak of the mean dose assessment was performed for the Offsite Resident Farmer using Technetium-99 (Tc-99) and a time horizon of 1,050 yr. The results of the deterministic analyses indicate that among all receptors and time horizons evaluated, the highest projected dose, 2,700 mrem/yr, occurred for the Resident Farmer (onsite) at 12,773 yr. The exposure pathways contributing to the peak dose are ingestion of plants, external gamma, and ingestion of milk, meat and soil. However, this receptor is considered an implausible receptor. The only receptors considered plausible are the Landfill Worker, Recreational User, Outdoor Worker and the Offsite Resident Farmer. The maximum projected dose among the plausible receptors is 220 mrem/yr for the Outdoor Worker and it occurs at 19,045 yr. The exposure pathways contributing to the dose for this receptor are external gamma and soil ingestion.
The results of the probabilistic peak of the mean dose analysis for the Offsite Resident Farmer indicate that the average (arithmetic mean) of the peak of the mean doses for this receptor is 0.98 mrem/yr and it occurs at 1,050 yr. This dose corresponds to Tc-99 within the time horizon of 1,050 yr.
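The deterministic-versus-probabilistic distinction drawn above can be illustrated with a generic, hypothetical ingestion-dose calculation (this is not RESRAD, and all parameter values and ranges are invented): the deterministic run maps single parameter values to a single dose, while the probabilistic run samples parameter ranges and yields a dose distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ingestion dose (mrem/yr) =
# concentration (pCi/g) * intake (g/yr) * dose conversion factor (mrem/pCi).
def dose(conc, intake, dcf):
    return conc * intake * dcf

# Deterministic analysis: a single value per parameter -> a single output.
print("deterministic:", dose(conc=5.0, intake=36.5, dcf=1.4e-3))

# Probabilistic analysis: parameter ranges -> a distribution of outputs.
n = 10_000
conc   = rng.triangular(2.0, 5.0, 8.0, n)      # assumed range, pCi/g
intake = rng.uniform(20.0, 50.0, n)            # assumed range, g/yr
dcf    = rng.lognormal(np.log(1.4e-3), 0.3, n)
doses  = dose(conc, intake, dcf)
print("probabilistic mean:", doses.mean().round(3),
      "95th percentile:", np.percentile(doses, 95).round(3))
```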
Open Source Tools for Numerical Simulation of Urban Greenhouse Gas Emissions
NASA Astrophysics Data System (ADS)
Nottrott, A.; Tan, S. M.; He, Y.
2016-12-01
There is a global movement toward urbanization. Approximately 7% of the global population lives in just 28 megacities, occupying less than 0.1% of the total land area used by human activity worldwide. These cities contribute a significant fraction of the global budget of anthropogenic primary pollutants and greenhouse gases. The 27 largest cities consume 9.9%, 9.3%, 6.7% and 3.0% of global gasoline, electricity, energy and water use, respectively. This impact motivates novel approaches to quantify and mitigate the growing contribution of megacity emissions to global climate change. Cities are characterized by complex topography, inhomogeneous turbulence, and variable pollutant source distributions. These features create a scale separation between local sources and urban scale emissions estimates known as the Grey-Zone. Modern computational fluid dynamics (CFD) techniques provide a quasi-deterministic, physically based toolset to bridge the scale separation gap between source level dynamics, local measurements, and urban scale emissions inventories. CFD has the capability to represent complex building topography and capture detailed 3D turbulence fields in the urban boundary layer. This presentation discusses the application of OpenFOAM to urban CFD simulations of natural gas leaks in cities. OpenFOAM is an open source software for advanced numerical simulation of engineering and environmental fluid flows. When combined with free or low cost computer aided drawing and GIS, OpenFOAM generates a detailed, 3D representation of urban wind fields. OpenFOAM was applied to model methane (CH4) emissions from various components of the natural gas distribution system, to investigate the impact of urban meteorology on mobile CH4 measurements. The numerical experiments demonstrate that CH4 concentration profiles are highly sensitive to the relative location of emission sources and buildings. Sources separated by distances of 5-10 meters showed significant differences in vertical dispersion of the plume due to building wake effects. The OpenFOAM flow fields were combined with an inverse, stochastic dispersion model to quantify and visualize the sensitivity of point sensors to upwind sources in various built environments.
Implementation and verification of global optimization benchmark problems
NASA Astrophysics Data System (ADS)
Posypkin, Mikhail; Usov, Alexander
2017-12-01
The paper considers the implementation and verification of a test suite containing 150 benchmarks for global deterministic box-constrained optimization. A C++ library for describing standard mathematical expressions was developed for this purpose. The library automates the process of generating the value of a function and its gradient at a given point, as well as interval estimates of a function and its gradient on a given box, from a single description. Based on this functionality, we have developed a collection of tests for automatic verification of the proposed benchmarks. The verification has shown that literature sources contain mistakes in the benchmark descriptions. The library and the test suite are available for download and can be used freely.
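The single-description idea, one expression yielding both point values and interval enclosures, can be sketched in a few lines of Python (a toy stand-in for the authors' C++ library; the naive enclosures here ignore the dependency problem, so they are valid but not tight):

```python
from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float
    def __add__(self, o):
        o = _as_ival(o)
        return Interval(self.lo + o.lo, self.hi + o.hi)
    __radd__ = __add__
    def __sub__(self, o):
        o = _as_ival(o)
        return Interval(self.lo - o.hi, self.hi - o.lo)
    def __mul__(self, o):
        o = _as_ival(o)
        p = [self.lo*o.lo, self.lo*o.hi, self.hi*o.lo, self.hi*o.hi]
        return Interval(min(p), max(p))
    __rmul__ = __mul__

def _as_ival(x):
    return x if isinstance(x, Interval) else Interval(x, x)

# A benchmark-style objective written once: floats in -> a point value,
# intervals in -> a rigorous (if naive) enclosure over the box.
def f(x, y):
    return x*x + y*y - 2*x*y + 3

print(f(1.0, 2.0))                         # value at a point
print(f(Interval(0, 1), Interval(0, 1)))   # bounds on the unit box
```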
Semenov, Mikhail A; Terkel, Dmitri A
2003-01-01
This paper analyses the convergence of evolutionary algorithms using a technique which is based on a stochastic Lyapunov function and developed within the martingale theory. This technique is used to investigate the convergence of a simple evolutionary algorithm with self-adaptation, which contains two types of parameters: fitness parameters, belonging to the domain of the objective function; and control parameters, responsible for the variation of fitness parameters. Although both parameters mutate randomly and independently, they converge to the "optimum" due to the direct (for fitness parameters) and indirect (for control parameters) selection. We show that the convergence velocity of the evolutionary algorithm with self-adaptation is asymptotically exponential, similar to the velocity of the optimal deterministic algorithm on the class of unimodal functions. Although some martingale inequalities have not been proved analytically, they have been numerically validated with 0.999 confidence using Monte-Carlo simulations.
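A minimal self-adaptive evolutionary algorithm of the kind analyzed above might look as follows (a (1+1)-ES sketch with an assumed learning rate tau; the fitness parameter x and the control parameter sigma both mutate randomly, and sigma is selected only indirectly through the objective value):

```python
import random, math

def self_adaptive_es(f, x0, sigma0=1.0, iters=2000, seed=0):
    """(1+1)-ES with self-adapted step size: sigma mutates log-normally
    alongside x, and both survive only if the offspring is no worse."""
    rng = random.Random(seed)
    x, sigma, fx = x0, sigma0, f(x0)
    tau = 0.3                                          # assumed learning rate
    for _ in range(iters):
        s = sigma * math.exp(tau * rng.gauss(0, 1))    # mutate control parameter
        y = x + s * rng.gauss(0, 1)                    # mutate fitness parameter
        fy = f(y)
        if fy <= fx:                                   # elitist selection
            x, sigma, fx = y, s, fy
    return x, sigma, fx

x, sigma, fx = self_adaptive_es(lambda z: z * z, x0=10.0)
print(f"x={x:.3e}, sigma={sigma:.3e}, f={fx:.3e}")  # near-exponential convergence
```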
Wei, Yu-Jia; He, Yu-Ming; Chen, Ming-Cheng; Hu, Yi-Nan; He, Yu; Wu, Dian; Schneider, Christian; Kamp, Martin; Höfling, Sven; Lu, Chao-Yang; Pan, Jian-Wei
2014-11-12
Single photons are attractive candidates for quantum bits (qubits) in quantum computation and are the best messengers in quantum networks. Future scalable, fault-tolerant photonic quantum technologies demand both stringently high levels of photon indistinguishability and generation efficiency. Here, we demonstrate deterministic and robust generation of pulsed resonance fluorescence single photons from a single semiconductor quantum dot using adiabatic rapid passage, a method robust against fluctuation of driving pulse area and dipole moments of solid-state emitters. The emitted photons are background-free, have a vanishing two-photon emission probability of 0.3% and a raw (corrected) two-photon Hong-Ou-Mandel interference visibility of 97.9% (99.5%), reaching a precision that places single photons at the threshold for fault-tolerant surface-code quantum computing. This single-photon source can be readily scaled up to multiphoton entanglement and used for quantum metrology, boson sampling, and linear optical quantum computing.
Hybrid deterministic-stochastic modeling of x-ray beam bowtie filter scatter on a CT system.
Liu, Xin; Hsieh, Jiang
2015-01-01
Knowledge of the scatter generated by the bowtie filter (i.e., the x-ray beam compensator) is crucial for providing artifact-free images on CT scanners. Our approach is to use a hybrid deterministic-stochastic simulation to estimate the scatter level generated by a bowtie filter made of a material with a low atomic number. First, major components of the CT system, such as the source, flat filter, bowtie filter, and body phantom, are built into a 3D model. The scattered photon fluence and the primary transmitted photon fluence are simulated by MCNP, a Monte Carlo simulation toolkit. The rejection of scattered photons by the post-patient collimator (anti-scatter grid) is simulated with an analytical formula. The biased sinogram is created by superimposing the scatter signal generated by the simulation onto the primary x-ray beam signal. Finally, images with artifacts are reconstructed from the biased signal. The effect of anti-scatter grid height on scatter rejection is also discussed and demonstrated.
Hardware-efficient Bell state preparation using Quantum Zeno Dynamics in superconducting circuits
NASA Astrophysics Data System (ADS)
Flurin, Emmanuel; Blok, Machiel; Hacohen-Gourgy, Shay; Martin, Leigh S.; Livingston, William P.; Dove, Allison; Siddiqi, Irfan
By performing a continuous joint measurement on a two-qubit system, we restrict the qubit evolution to a chosen subspace of the total Hilbert space. This extension of the quantum Zeno effect, called Quantum Zeno Dynamics, has already been explored in various physical systems such as superconducting cavities, single Rydberg atoms, atomic ensembles and Bose-Einstein condensates. In this experiment, two superconducting qubits are strongly dispersively coupled to a high-Q cavity (χ >> κ), allowing the doubly excited state | 11 〉 to be selectively monitored. The Quantum Zeno Dynamics in the complementary subspace enables us to coherently prepare a Bell state. As opposed to dissipation engineering schemes, we emphasize that our protocol is deterministic, does not rely on direct coupling between qubits, and functions using only single-qubit controls and cavity readout. Such Quantum Zeno Dynamics can be generalized to larger Hilbert spaces, enabling deterministic generation of many-body entangled states, and thus realizes a decoherence-free subspace allowing alternative noise-protection schemes.
Experimental realization of real-time feedback-control of single-atom arrays
NASA Astrophysics Data System (ADS)
Kim, Hyosub; Lee, Woojun; Ahn, Jaewook
2016-05-01
Deterministic loading of neutral atoms at particular locations has remained a challenging problem. Here we show, in a proof-of-principle experimental demonstration, that such deterministic loading can be achieved by rearrangement of atoms. In the experiment, cold rubidium atoms were trapped by optical tweezers, which are the hologram images made by a liquid-crystal spatial light modulator (LC-SLM). After the initial occupancy was identified, the hologram was actively controlled to rearrange the captured atoms onto unfilled sites. For this, we developed a new flicker-free hologram algorithm that enables holographic atom translation. Our demonstration shows that up to N=9 atoms were simultaneously moved in the 2D plane with 2N=18 movable degrees of freedom and a fidelity of 99% for a single-atom 5-μm translation. It is hoped that our in situ atom rearrangement will become useful in scaling quantum computers. Samsung Science and Technology Foundation [SSTF-BA1301-12].
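The rearrangement step, deciding which trapped atom moves to which target site, can be illustrated with a minimum-total-distance matching (a sketch only; the paper's flicker-free hologram algorithm governs how the moves are executed and is not reproduced here):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(3)

# Initial stochastic loading: occupied sites on a 5x5 tweezer grid.
grid = [(i, j) for i in range(5) for j in range(5)]
occupied = rng.choice(len(grid), size=12, replace=False)
atoms = np.array([grid[k] for k in occupied], dtype=float)

# Target: deterministically fill a 3x3 sub-array.
targets = np.array([(i, j) for i in range(1, 4) for j in range(1, 4)], dtype=float)

# Assign atoms to target sites minimizing total travel distance, a proxy
# for minimizing hologram update time and atom-loss probability.
cost = np.linalg.norm(atoms[:, None, :] - targets[None, :, :], axis=-1)
rows, cols = linear_sum_assignment(cost)
for r, c in zip(rows, cols):
    print(f"move atom at {atoms[r].tolist()} -> site {targets[c].tolist()}")
```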
NASA Technical Reports Server (NTRS)
Papadopoulos, Michael; Tolson, Robert H.
1993-01-01
The Modal Identification Experiment (MIE) is a proposed experiment to define the dynamic characteristics of Space Station Freedom. Previous studies emphasized free-decay modal identification. The feasibility of using a forced response method (Observer/Kalman Filter Identification (OKID)) is addressed. The interest in using OKID is to determine the input mode shape matrix which can be used for controller design or control-structure interaction analysis, and investigate if forced response methods may aid in separating closely spaced modes. A model of the SC-7 configuration of Space Station Freedom was excited using simulated control system thrusters to obtain acceleration output. It is shown that an 'optimum' number of outputs exists for OKID. To recover global mode shapes, a modified method called Global-Local OKID was developed. This study shows that using data from a long forced response followed by free-decay leads to the 'best' modal identification. Twelve out of the thirteen target modes were identified for such an output.
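The realization step that turns Markov parameters into modal estimates can be sketched with the closely related Eigensystem Realization Algorithm (ERA); the toy below starts directly from impulse-response Markov parameters rather than from OKID-processed forced-response data, and all signal parameters are invented:

```python
import numpy as np

def era(h, n_states):
    """ERA: build shifted Hankel matrices from impulse-response Markov
    parameters h[k], truncate the SVD, and recover the eigenvalues of the
    discrete-time state matrix A."""
    m = len(h) // 2 - 1
    H0 = np.array([[h[i + j] for j in range(m)] for i in range(m)])
    H1 = np.array([[h[i + j + 1] for j in range(m)] for i in range(m)])
    U, s, Vt = np.linalg.svd(H0)
    U, s, Vt = U[:, :n_states], s[:n_states], Vt[:n_states]
    S = np.diag(1.0 / np.sqrt(s))
    A = S @ U.T @ H1 @ Vt.T @ S
    return np.linalg.eigvals(A)

# Toy free-decay data: two damped modes at 1.0 Hz and 2.5 Hz, dt = 0.1 s.
dt = 0.1
t = np.arange(60) * dt
h = np.exp(-0.05*t)*np.cos(2*np.pi*1.0*t) + 0.5*np.exp(-0.1*t)*np.cos(2*np.pi*2.5*t)
lam = era(h, n_states=4)
freqs = np.abs(np.log(lam)) / (2 * np.pi * dt)       # identified frequencies
print(sorted({round(float(f), 2) for f in freqs}))   # -> [1.0, 2.5]
```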
Bianchi cosmologies with p-form gauge fields
NASA Astrophysics Data System (ADS)
Normann, Ben David; Hervik, Sigbjørn; Ricciardone, Angelo; Thorsrud, Mikjel
2018-05-01
In this paper the dynamics of free gauge fields in Bianchi type I–VII_h space-times is investigated. The general equations for a matter sector consisting of a p-form field strength (p ∈ {1, 3}), a cosmological constant (4-form) and perfect fluid in Bianchi type I–VII_h space-times are computed using the orthonormal frame method. The number of independent components of a p-form in all Bianchi types I–IX are derived and, by means of the dynamical systems approach, the behaviour of such fields in Bianchi type I and V are studied. Both a local and a global analysis are performed and strong global results regarding the general behaviour are obtained. New self-similar cosmological solutions appear both in Bianchi type I and Bianchi type V; in particular, a one-parameter family of self-similar solutions, ‘Wonderland (λ)’, appears generally in type V and in type I for λ = 0. Depending on the value of the equation of state parameter, other new stable solutions are also found (‘The Rope’ and ‘The Edge’) containing a purely spatial field strength that rotates relative to the co-moving inertial tetrad. Using monotone functions, global results are given and the conditions under which exact solutions are (global) attractors are found.
DAISY: a new software tool to test global identifiability of biological and physiological systems
Bellu, Giuseppina; Saccomani, Maria Pia; Audoly, Stefania; D’Angiò, Leontina
2009-01-01
A priori global identifiability is a structural property of biological and physiological models. It is considered a prerequisite for well-posed estimation, since it concerns the possibility of recovering uniquely the unknown model parameters from measured input-output data, under ideal conditions (noise-free observations and error-free model structure). Of course, determining if the parameters can be uniquely recovered from observed data is essential before investing resources, time and effort in performing actual biomedical experiments. Many interesting biological models are nonlinear, but identifiability analysis for nonlinear systems turns out to be a difficult mathematical problem. Different methods have been proposed in the literature to test identifiability of nonlinear models but, to the best of our knowledge, so far no software tools have been proposed for automatically checking identifiability of nonlinear models. In this paper, we describe a software tool implementing a differential algebra algorithm to perform parameter identifiability analysis for (linear and) nonlinear dynamic models described by polynomial or rational equations. Our goal is to provide the biological investigator with a completely automated software tool, requiring minimum prior knowledge of mathematical modelling and no in-depth understanding of the mathematical tools. The DAISY (Differential Algebra for Identifiability of SYstems) software will potentially be useful in biological modelling studies, especially in physiology and clinical medicine, where research experiments are particularly expensive and/or difficult to perform. Practical examples of use of the software tool DAISY are presented. DAISY is available at the web site http://www.dei.unipd.it/~pia/. PMID:17707944
Noise-induced symmetry breaking far from equilibrium and the emergence of biological homochirality
NASA Astrophysics Data System (ADS)
Jafarpour, Farshid; Biancalani, Tommaso; Goldenfeld, Nigel
2017-03-01
The origin of homochirality, the observed single-handedness of biological amino acids and sugars, has long been attributed to autocatalysis, a frequently assumed precursor for early life self-replication. However, the stability of homochiral states in deterministic autocatalytic systems relies on cross-inhibition of the two chiral states, an unlikely scenario for early life self-replicators. Here we present a theory for a stochastic individual-level model of autocatalytic prebiotic self-replicators that are maintained out of thermal equilibrium. Without chiral inhibition, the racemic state is the global attractor of the deterministic dynamics, but intrinsic multiplicative noise stabilizes the homochiral states. Moreover, we show that this noise-induced bistability is robust with respect to diffusion of molecules of opposite chirality, and systems of diffusively coupled autocatalytic chemical reactions synchronize their final homochiral states when the self-replication is the dominant production mechanism for the chiral molecules. We conclude that nonequilibrium autocatalysis is a viable mechanism for homochirality, without imposing additional nonlinearities such as chiral inhibition.
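A minimal stochastic simulation in this spirit (a toy reaction set, not the paper's exact model; waiting times are omitted since only the embedded jump chain matters for fixation) shows intrinsic noise driving an initially racemic mixture to a homochiral absorbing state:

```python
import random

def gillespie_chirality(N=60, k=1.0, mu=1.0, seed=7, max_steps=10**6):
    """Toy homochirality model: A + X -> 2X (autocatalysis, rate k) and
    X -> A (recycling, rate mu) for each enantiomer X in {L, D}. The
    deterministic rate equations keep a racemic mixture racemic; the
    stochastic dynamics fixate on L = 0 or D = 0."""
    rng = random.Random(seed)
    L = D = N // 3
    A = N - L - D
    for _ in range(max_steps):
        rates = [k*A*L, mu*L, k*A*D, mu*D]
        total = sum(rates)
        if total == 0:
            break
        r = rng.random() * total
        if r < rates[0]:                    A -= 1; L += 1
        elif r < rates[0] + rates[1]:       A += 1; L -= 1
        elif r < total - rates[3]:          A -= 1; D += 1
        else:                               A += 1; D -= 1
        if L == 0 or D == 0:
            break                           # homochiral absorbing state
    return A, L, D

print(gillespie_chirality())  # typically ends with one of L, D equal to zero
```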
NASA Astrophysics Data System (ADS)
Zarindast, Atousa; Seyed Hosseini, Seyed Mohamad; Pishvaee, Mir Saman
2017-06-01
A robust supplier selection problem is proposed in a scenario-based approach, where demand and exchange rates are subject to uncertainty. First, a deterministic multi-objective mixed integer linear program is developed; then, the robust counterpart of the proposed mixed integer linear program is presented using recent extensions in robust optimization theory. We determine the decision variables, respectively, by a two-stage stochastic planning model, a robust stochastic optimization planning model which integrates the worst-case scenario in the modeling approach, and finally by an equivalent deterministic planning model. An experimental study is carried out to compare the performances of the three models. The robust model resulted in remarkable cost savings, illustrating that to cope with such uncertainties we should account for them in advance in our planning. In our case study, different suppliers were selected under these uncertainties; since supplier selection is a strategic decision, it is crucial to consider these uncertainties in the planning approach.
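The contrast between deterministic and worst-case (robust) planning can be sketched with a tiny supplier selection toy (all data hypothetical; brute-force enumeration and a greedy allocation stand in for the MILP solver):

```python
from itertools import combinations

# Hypothetical data: fixed contracting costs and per-unit costs that
# depend on the exchange-rate/demand scenario.
suppliers = {"S1": 100, "S2": 80, "S3": 120}            # fixed costs
unit_cost = {
    "low":  {"S1": 4.0, "S2": 6.0, "S3": 3.5},
    "base": {"S1": 5.0, "S2": 5.5, "S3": 5.0},
    "high": {"S1": 7.5, "S2": 5.0, "S3": 8.0},
}
demand = {"low": 80, "base": 100, "high": 130}
capacity = {"S1": 90, "S2": 70, "S3": 120}

def scenario_cost(chosen, sc):
    """Second-stage allocation: meet demand from the cheapest selected
    suppliers in scenario sc (a greedy stand-in for the inner LP)."""
    need, cost = demand[sc], sum(suppliers[s] for s in chosen)
    for s in sorted(chosen, key=lambda s: unit_cost[sc][s]):
        q = min(need, capacity[s])
        cost += q * unit_cost[sc][s]
        need -= q
    return cost if need == 0 else float("inf")          # infeasible selection

selections = [sel for r in range(1, 4) for sel in combinations(suppliers, r)]
det = min(selections, key=lambda sel: scenario_cost(sel, "base"))
rob = min(selections, key=lambda sel: max(scenario_cost(sel, sc) for sc in demand))
print("deterministic (base scenario) selection:", det)
print("robust (worst-case) selection:", rob)
```

With these numbers the two criteria pick different supplier sets, mirroring the abstract's observation that accounting for uncertainty changes the strategic selection itself.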
The Role of Diffusion in the Transport of Energetic Electrons during Solar Flares
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bian, Nicolas H.; Kontar, Eduard P.; Emslie, A. Gordon, E-mail: nicolas.bian@glasgow.gla.ac.uk, E-mail: emslieg@wku.edu
2017-02-01
The transport of the energy contained in suprathermal electrons in solar flares plays a key role in our understanding of many aspects of flare physics, from the spatial distributions of hard X-ray emission and energy deposition in the ambient atmosphere to global energetics. Historically the transport of these particles has been largely treated through a deterministic approach, in which first-order secular energy loss to electrons in the ambient target is treated as the dominant effect, with second-order diffusive terms (in both energy and angle) generally being either treated as a small correction or even neglected. Here, we critically analyze this approach, and we show that spatial diffusion through pitch-angle scattering necessarily plays a very significant role in the transport of electrons. We further show that a satisfactory treatment of the diffusion process requires consideration of non-local effects, so that the electron flux depends not just on the local gradient of the electron distribution function but on the value of this gradient within an extended region encompassing a significant fraction of a mean free path. Our analysis applies generally to pitch-angle scattering by a variety of mechanisms, from Coulomb collisions to turbulent scattering. We further show that the spatial transport of electrons along the magnetic field of a flaring loop can be modeled rather effectively as a Continuous Time Random Walk with velocity-dependent probability distribution functions of jump sizes and occurrences, both of which can be expressed in terms of the scattering mean free path.
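A bare-bones Continuous Time Random Walk along a one-dimensional loop illustrates the picture (a fixed speed and exponential jump sizes are simplifying assumptions; the paper's jump distributions are velocity-dependent):

```python
import random

def ctrw_transport(n_electrons=2000, mfp=1.0, loop_length=30.0, seed=0):
    """CTRW along a 1D flare loop: each electron takes exponentially
    distributed jump sizes (mean free path mfp) in a random direction
    until it leaves the loop; returns footpoint arrival times."""
    rng = random.Random(seed)
    v = 1.0                                      # speed, arbitrary units
    times = []
    for _ in range(n_electrons):
        x, t = 0.0, 0.0
        while abs(x) < loop_length / 2:
            step = rng.expovariate(1.0 / mfp)    # jump size ~ Exp(mean mfp)
            x += step * rng.choice((-1.0, 1.0))  # scattered direction
            t += step / v                        # jump duration
        times.append(t)
    return times

times = ctrw_transport()
print(f"mean escape time: {sum(times)/len(times):.1f} "
      f"(ballistic crossing of the half-loop would take 15.0)")
```

The mean diffusive escape time greatly exceeds the free-streaming time, which is the qualitative point: pitch-angle scattering, not just secular energy loss, controls where the energy is deposited.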
Spontaneous evolution of microstructure in materials
NASA Astrophysics Data System (ADS)
Kirkaldy, J. S.
1993-08-01
Microstructures which evolve spontaneously from random solutions in near isolation often exhibit patterns of remarkable symmetry which can only in part be explained by boundary and crystallographic effects. With reference to the detailed experimental record, we seek the source of causality in this natural tendency to constructive autonomy, usually designated as a principle of pattern or wavenumber selection in a free boundary problem. The phase field approach which incorporates detailed boundary structure and global rate equations has enjoyed some currency in removing internal degrees of freedom, and this will be examined critically in reference to the migration of phase-antiphase boundaries produced in an order-disorder transformation. Analogous problems for singular interfaces including solute trapping are explored. The microscopic solvability hypothesis has received much attention, particularly in relation to dendrite morphology and the Saffman-Taylor fingering problem in hydrodynamics. A weak form of this will be illustrated in relation to local equilibrium binary solidification cells which renders the free boundary problem unique. However, the main thrust of this article concerns dynamic configurations at anisotropic singular interfaces and the related patterns of eutectoid(ic)s, nonequilibrium cells, cellular dendrites, and Liesegang figures where there is a recognizable macroscopic phase space of pattern fluctuations and/or solitons. These possess a weakly defective stability point and thereby submit to a statistical principle of maximum path probability and to a variety of corollary dissipation principles in the determination of a unique average patterning behavior. A theoretical development of the principle based on Hamilton's principle for frictional systems is presented in an Appendix. Elements of the principles of scaling, universality, and deterministic chaos are illustrated.
NASA Astrophysics Data System (ADS)
Chung, Ming-Chien; Tan, Chih-Hao; Chen, Mien-Min; Su, Tai-Wei
2013-04-01
Taiwan is an active mountain belt created by the oblique collision between the northern Luzon arc and the Asian continental margin. The inherent complexities of geological nature create numerous discontinuities through rock masses and relatively steep hillsides on the island. In recent years, the increase in the frequency and intensity of extreme natural events due to global warming or climate change has brought significant landslides. The causes of landslides in these slopes are attributed to a number of factors. As is well known, rainfall is one of the most significant triggering factors for landslide occurrence. In general, rainfall infiltration changes the suction and the moisture of the soil, raises the unit weight of the soil, and reduces the shear strength of the soil in the colluvium of a landslide. The stability of a landslide is closely related to the groundwater pressure in response to rainfall infiltration, the geological and topographical conditions, and the physical and mechanical parameters. To assess the potential susceptibility to landslides, effective modeling of rainfall-induced landslides is essential. In this paper, a deterministic approach is adopted to estimate the critical rainfall threshold of a rainfall-induced landslide. The critical rainfall threshold is defined as the accumulated rainfall at which the safety factor of the slope is equal to 1.0. First, the deterministic approach establishes the hydrogeological conceptual model of the slope based on a series of in-situ investigations, including geological drilling, surface geological investigation, geophysical investigation, and borehole explorations. The material strength and hydraulic properties of the model were given by field and laboratory tests. Second, the hydraulic and mechanical parameters of the model are calibrated with long-term monitoring data. Furthermore, a two-dimensional numerical program, GeoStudio, was employed to perform the modelling practice. Finally, the critical rainfall threshold of the slope can be obtained by the coupled analysis of rainfall, infiltration, seepage, and slope stability. Take the slope located at 50k+650 on Tainan County Road No. 174 as an example: located in the Zeng-Wun River watershed in southern Taiwan, it is an active landslide triggered by typhoon events. Coordinates for the case study site are 194925, 2567208 (TWD97). The site was selected on the basis of previous reports and geological surveys. According to the Central Weather Bureau, the annual precipitation is about 2,450 mm; the highest monthly value is in August with 630 mm, and the lowest is in November with 13 mm. The results show that the critical rainfall threshold of the study case is around 640 mm, meaning that an alarm should be issued when the accumulated rainfall exceeds 640 mm. Our preliminary results appear to be useful for rainfall-induced landslide hazard assessments. The findings are also a good reference for establishing an early warning system for landslides and developing strategies to prevent so much misfortune from happening in the future.
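As a schematic of the threshold idea (not the paper's coupled GeoStudio analysis), an infinite-slope factor of safety with a crude rainfall-to-pore-pressure proxy can be scanned for FS = 1.0; every parameter value below is hypothetical:

```python
import math

def factor_of_safety(rain_mm, slope_deg=30.0, depth=3.0, cohesion=8.0,
                     phi_deg=28.0, gamma=19.0, gamma_w=9.81, k_infil=0.0008):
    """Infinite-slope FS (kPa, kN/m3, m) with a crude infiltration proxy:
    accumulated rainfall raises the water table within the sliding depth."""
    beta, phi = math.radians(slope_deg), math.radians(phi_deg)
    h_w = min(k_infil * rain_mm * depth, depth)     # saturated thickness, capped
    u = gamma_w * h_w * math.cos(beta) ** 2         # pore pressure on the plane
    tau = gamma * depth * math.sin(beta) * math.cos(beta)
    sigma_n = gamma * depth * math.cos(beta) ** 2
    return (cohesion + (sigma_n - u) * math.tan(phi)) / tau

# Scan accumulated rainfall for FS = 1.0 (the critical rainfall threshold).
for rain in range(0, 1001, 50):
    if factor_of_safety(rain) <= 1.0:
        print(f"critical rainfall threshold ~ {rain} mm (hypothetical parameters)")
        break
```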
NASA Astrophysics Data System (ADS)
Anghel, M.; Toroczkai, Zoltán; Bassler, Kevin E.; Korniss, G.
2004-02-01
Using the minority game as a model for competition dynamics, we investigate the effects of interagent communications across a network on the global evolution of the game. Agent communication across this network leads to the formation of an influence network, which is dynamically coupled to the evolution of the game, and it is responsible for the information flow driving the agents' actions. We show that the influence network spontaneously develops hubs with a broad distribution of in-degrees, defining a scale-free robust leadership structure. Furthermore, in realistic parameter ranges, facilitated by information exchange on the network, agents can generate a high degree of cooperation making the collective almost maximally efficient.
Optimal control strategy for a novel computer virus propagation model on scale-free networks
NASA Astrophysics Data System (ADS)
Zhang, Chunming; Huang, Haitao
2016-06-01
This paper aims to study the combined impact of reinstalling systems and network topology on the spread of computer viruses over the Internet. Based on a scale-free network, this paper proposes a novel computer virus propagation model, the SLBOS model. A systematic analysis of this new model shows that the virus-free equilibrium is globally asymptotically stable when its spreading threshold is less than one; nevertheless, it is proved that the viral equilibrium is permanent if the spreading threshold is greater than one. Then, the impacts of different model parameters on the spreading threshold are analyzed. Next, an optimally controlled SLBOS epidemic model on complex networks is also studied. We prove that an optimal control exists for the control problem. Some numerical simulations are finally given to illustrate the main results.
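The threshold statement can be illustrated with a degree-block (heterogeneous mean-field) sketch for a simplified SIS-type model on a scale-free network; the full SLBOS compartments are not reproduced, and the rates and degree exponent are assumptions:

```python
import numpy as np

# Degree distribution of a scale-free network, P(k) ~ k^(-2.5) (assumed).
k = np.arange(3, 301)
pk = k ** (-2.5)
pk /= pk.sum()
k_mean, k2_mean = (pk * k).sum(), (pk * k**2).sum()

beta, delta = 0.02, 0.4                 # infection / recovery rates (assumed)
beta_c = delta * k_mean / k2_mean       # spreading threshold, SIS-type model
print(f"threshold beta_c = {beta_c:.4f}, beta/beta_c = {beta/beta_c:.2f}")

# Degree-class dynamics: rho_k' = -delta*rho_k + beta*k*(1-rho_k)*Theta(rho)
rho = np.full(k.shape, 1e-3)
for _ in range(20000):                  # forward Euler, dt = 0.01
    theta = (pk * k * rho).sum() / k_mean   # prob. an edge points to infection
    rho += 0.01 * (-delta * rho + beta * k * (1 - rho) * theta)
print(f"steady-state prevalence: {(pk * rho).sum():.4f}"
      f"  ({'endemic' if beta > beta_c else 'dies out'})")
```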
Deterministic composite nanophotonic lattices in large area for broadband applications
NASA Astrophysics Data System (ADS)
Xavier, Jolly; Probst, Jürgen; Becker, Christiane
2016-12-01
Exotic manipulation of the flow of photons in nanoengineered materials with an aperiodic distribution of nanostructures plays a key role in efficiency-enhanced broadband photonic and plasmonic technologies for spectrally tailorable integrated biosensing, nanostructured thin film solar cells, white light emitting diodes, novel plasmonic ensembles etc. Through a generic deterministic nanotechnological route here we show subwavelength-scale silicon (Si) nanostructures on nanoimprinted glass substrate in large area (4 cm2) with advanced functional features of aperiodic composite nanophotonic lattices. These nanophotonic aperiodic lattices have easily tailorable supercell tiles with well-defined and discrete lattice basis elements and they show rich Fourier spectra. The presented nanophotonic lattices are designed functionally akin to two-dimensional aperiodic composite lattices with unconventional flexibility, comprising periodic photonic crystals and/or in-plane photonic quasicrystals as pattern design subsystems. The fabricated composite lattice-structured Si nanostructures are comparatively analyzed with a range of nanophotonic structures with conventional lattice geometries of periodic, disordered random as well as in-plane quasicrystalline photonic lattices with comparable lattice parameters. As a proof of concept of compatibility with advanced bottom-up liquid phase crystallized (LPC) Si thin film fabrication, the experimental structural analysis is further extended to double-side-textured deterministic aperiodic lattice-structured 10 μm thick large area LPC Si film on nanoimprinted substrates.
Deterministic photon-emitter coupling in chiral photonic circuits.
Söllner, Immo; Mahmoodian, Sahand; Hansen, Sofie Lindskov; Midolo, Leonardo; Javadi, Alisa; Kiršanskė, Gabija; Pregnolato, Tommaso; El-Ella, Haitham; Lee, Eun Hye; Song, Jin Dong; Stobbe, Søren; Lodahl, Peter
2015-09-01
Engineering photon emission and scattering is central to modern photonics applications ranging from light harvesting to quantum-information processing. To this end, nanophotonic waveguides are well suited as they confine photons to a one-dimensional geometry and thereby increase the light-matter interaction. In a regular waveguide, a quantum emitter interacts equally with photons in either of the two propagation directions. This symmetry is violated in nanophotonic structures in which non-transversal local electric-field components imply that photon emission and scattering may become directional. Here we show that the helicity of the optical transition of a quantum emitter determines the direction of single-photon emission in a specially engineered photonic-crystal waveguide. We observe single-photon emission into the waveguide with a directionality that exceeds 90% under conditions in which practically all the emitted photons are coupled to the waveguide. The chiral light-matter interaction enables deterministic and highly directional photon emission for experimentally achievable on-chip non-reciprocal photonic elements. These may serve as key building blocks for single-photon optical diodes, transistors and deterministic quantum gates. Furthermore, chiral photonic circuits allow the dissipative preparation of entangled states of multiple emitters for experimentally achievable parameters, may lead to novel topological photon states and could be applied for directional steering of light.
Multi-Scale Modeling of the Gamma Radiolysis of Nitrate Solutions.
Horne, Gregory P; Donoclift, Thomas A; Sims, Howard E; Orr, Robin M; Pimblott, Simon M
2016-11-17
A multiscale modeling approach has been developed for the long-term radiolysis of aqueous systems over extended time scales. The approach uses a combination of stochastic track structure and track chemistry as well as deterministic homogeneous chemistry techniques and involves four key stages: radiation track structure simulation, the subsequent physicochemical processes, nonhomogeneous diffusion-reaction kinetic evolution, and homogeneous bulk chemistry modeling. The first three components model the physical and chemical evolution of an isolated radiation chemical track and provide radiolysis yields, within the extremely low dose isolated track paradigm, as the input parameters for a bulk deterministic chemistry model. This approach to radiation chemical modeling has been tested by comparison with the experimentally observed yield of nitrite from the gamma radiolysis of sodium nitrate solutions. This is a complex radiation chemical system which is strongly dependent on secondary reaction processes. The concentration of nitrite is not just dependent upon the evolution of radiation track chemistry and the scavenging of the hydrated electron and its precursors but also on the subsequent reactions of the products of these scavenging reactions with other water radiolysis products. Without the inclusion of intratrack chemistry, the deterministic component of the multiscale model is unable to correctly predict experimental data, highlighting the importance of intratrack radiation chemistry in the chemical evolution of the irradiated system.
NASA Astrophysics Data System (ADS)
Börker, J.; Hartmann, J.; Amann, T.; Romero-Mujalli, G.
2018-04-01
Mapped unconsolidated sediments cover half of the global land surface. They are of considerable importance for many Earth surface processes like weathering, hydrological fluxes or biogeochemical cycles. Ignoring their characteristics or spatial extent may lead to misinterpretations in Earth System studies. Therefore, a new Global Unconsolidated Sediments Map database (GUM) was compiled, using regional maps specifically representing unconsolidated and Quaternary sediments. The new GUM database provides insights into the regional distribution of unconsolidated sediments and their properties. The GUM comprises 911,551 polygons and describes not only sediment types and subtypes, but also parameters like grain size, mineralogy, age and thickness where available. Previous global lithological maps or databases lacked detail for reported unconsolidated sediment areas or missed large areas, and reported a global coverage of 25 to 30%, considering the ice-free land area. Here, alluvial sediments cover about 23% of the mapped total ice-free area, followed by aeolian sediments (~21%), glacial sediments (~20%), and colluvial sediments (~16%). A specific focus during the creation of the database was on the distribution of loess deposits, since loess is highly reactive and relevant to understand geochemical cycles related to dust deposition and weathering processes. An additional layer compiling pyroclastic sediment is added, which merges consolidated and unconsolidated pyroclastic sediments. The compilation shows latitudinal abundances of sediment types related to climate of the past. The GUM database is available at the PANGAEA database (https://doi.org/10.1594/PANGAEA.884822).
Uniform Persistence and Global Stability for a Brain Tumor and Immune System Interaction
NASA Astrophysics Data System (ADS)
Khajanchi, Subhas
This paper describes the synergistic interaction between the growth of malignant gliomas and the immune system using a system of coupled ordinary differential equations (ODEs). The proposed mathematical model comprises the interactions of glioma cells, macrophages, activated Cytotoxic T-Lymphocytes (CTLs), the immunosuppressive factor TGF-β and the immuno-stimulatory factor IFN-γ. The dynamical behavior of the proposed system is investigated both analytically and numerically from the point of view of stability. By constructing Lyapunov functions, the global behavior of the glioma-free and the interior equilibrium points has been analyzed under some assumptions. Finally, we perform numerical simulations in order to illustrate our analytical findings by varying the system parameters.
Global stability and periodic solution of the viral dynamics
NASA Astrophysics Data System (ADS)
Song, Xinyu; Neumann, Avidan U.
2007-05-01
It is well known that mathematical models provide very important information for research on human immunodeficiency virus type 1 (HIV-1) and hepatitis C virus (HCV). However, the infection rate in almost all mathematical models is linear. The linearity reflects a simple interaction between the T cells and the viral particles. In this paper, we consider the classical mathematical model with a saturation response in the infection rate. By stability analysis we obtain sufficient conditions on the parameters for the global stability of the infected steady state and the infection-free steady state. We also obtain the conditions for the existence of an orbitally asymptotically stable periodic solution. Numerical simulations are presented to illustrate the results.
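A standard three-compartment version of such a model with saturating incidence beta*T*V/(1 + alpha*V) can be integrated directly (illustrative parameter values, not fitted to data; T, I, V follow the usual target cell / infected cell / free virus convention):

```python
import numpy as np
from scipy.integrate import odeint

# T: uninfected target cells, I: infected cells, V: free virus (assumed values).
lam, d, beta, alpha, delta, p, c = 10.0, 0.01, 5e-4, 0.01, 0.5, 100.0, 5.0

def rhs(y, t):
    T, I, V = y
    infection = beta * T * V / (1 + alpha * V)   # saturating infection rate
    return [lam - d*T - infection,               # target cell turnover
            infection - delta*I,                 # infected cell dynamics
            p*I - c*V]                           # virion production/clearance

t = np.linspace(0, 400, 4001)
T, I, V = odeint(rhs, [1000.0, 0.0, 1.0], t).T
R0 = beta * (lam / d) * p / (c * delta)          # reproductive ratio at V -> 0
print(f"R0 = {R0:.1f}; V(400) = {V[-1]:.1f}")    # R0 > 1: infected steady state
```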
Fully adaptive propagation of the quantum-classical Liouville equation
NASA Astrophysics Data System (ADS)
Horenko, Illia; Weiser, Martin; Schmidt, Burkhard; Schütte, Christof
2004-05-01
In mixed quantum-classical molecular dynamics few but important degrees of freedom of a dynamical system are modeled quantum-mechanically while the remaining ones are treated within the classical approximation. Rothe methods established in the theory of partial differential equations are used to control both temporal and spatial discretization errors on grounds of a global tolerance criterion. The TRAIL (trapezoidal rule for adaptive integration of Liouville dynamics) scheme [I. Horenko and M. Weiser, J. Comput. Chem. 24, 1921 (2003)] has been extended to account for nonadiabatic effects in molecular dynamics described by the quantum-classical Liouville equation. In the context of particle methods, the quality of the spatial approximation of the phase-space distributions is maximized while the numerical condition of the least-squares problem for the parameters of particles is minimized. The resulting dynamical scheme is based on a simultaneous propagation of moving particles (Gaussian and Dirac deltalike trajectories) in phase space employing a fully adaptive strategy to upgrade Dirac to Gaussian particles and, vice versa, downgrading Gaussians to Dirac-type trajectories. This allows for the combination of Monte-Carlo-based strategies for the sampling of densities and coherences in multidimensional problems with deterministic treatment of nonadiabatic effects. Numerical examples demonstrate the application of the method to spin-boson systems in different dimensionality. Nonadiabatic effects occurring at conical intersections are treated in the diabatic representation. By decreasing the global tolerance, the numerical solutions obtained from the TRAIL scheme are shown to converge towards exact results.
Mars double-aeroflyby free returns
NASA Astrophysics Data System (ADS)
Jesick, Mark
2017-09-01
Mars double-flyby free-return trajectories that pass twice through the Martian atmosphere are documented. This class of trajectories is advantageous for potential Mars atmospheric sample return missions because of its low geocentric energy at departure and arrival, because it would enable two sample collections at unique locations during different Martian seasons, and because of its lack of deterministic maneuvers. Free return opportunities are documented over Earth departure dates ranging from 2015 through 2100, with viable missions available every Earth-Mars synodic period. After constraining the maximum lift-to-drag ratio to be less than one, the minimum observed Earth departure hyperbolic excess speed is 3.23 km/s, the minimum Earth atmospheric entry speed is 11.42 km/s, and the minimum round-trip flight time is 805 days. An algorithm using simplified dynamics is developed along with a method to derive an initial estimate for trajectories in a more realistic dynamic model. Multiple examples are presented, including free returns that pass outside and inside of Mars's appreciable atmosphere.
Enzyme efficiency: An open reaction system perspective
DOE Office of Scientific and Technical Information (OSTI.GOV)
Banerjee, Kinshuk, E-mail: kb36@rice.edu; Bhattacharyya, Kamal, E-mail: pchemkb@gmail.com
2015-12-21
A measure of enzyme efficiency is proposed for an open reaction network that, in suitable form, applies to closed systems as well. The idea originates from the description of classical enzyme kinetics in terms of cycles. We derive analytical expressions for the efficiency measure by treating the network not only deterministically but also stochastically. The latter accounts for any significant amount of noise that can be present in biological systems and hence reveals its impact on efficiency. Numerical verification of the results is also performed. It is found that the deterministic equation overestimates the efficiency, the more so for very small system sizes. Roles of various kinetic parameters and system sizes on the efficiency are thoroughly explored and compared with the standard definition k2/KM. Study of substrate fluctuation also indicates an interesting efficiency-accuracy balance.
Lai, C.; Tsay, T.-K.; Chien, C.-H.; Wu, I.-L.
2009-01-01
Researchers at the Hydroinformatic Research and Development Team (HIRDT) of the National Taiwan University undertook a project to create a real-time flood forecasting model, with an aim to predict the currents in the Tamsui River Basin. The model was designed on a deterministic approach, with mathematical modeling of a complex phenomenon, operated with specific parameter values to produce a discrete result. The project also devised a rainfall-stage model that relates the rate of rainfall upland directly to the change of the state of the river, and is further related to another typhoon-rainfall model. The geographic information system (GIS) data, based on a precise contour model of the terrain, estimate the regions that are perilous to flooding. The HIRDT, in response to the project's progress, also devoted their application of a deterministic model of unsteady flow to help river authorities issue timely warnings and take other emergency measures.
Stochastic inference with spiking neurons in the high-conductance state
NASA Astrophysics Data System (ADS)
Petrovici, Mihai A.; Bill, Johannes; Bytschok, Ilja; Schemmel, Johannes; Meier, Karlheinz
2016-10-01
The highly variable dynamics of neocortical circuits observed in vivo have been hypothesized to represent a signature of ongoing stochastic inference but stand in apparent contrast to the deterministic response of neurons measured in vitro. Based on a propagation of the membrane autocorrelation across spike bursts, we provide an analytical derivation of the neural activation function that holds for a large parameter space, including the high-conductance state. On this basis, we show how an ensemble of leaky integrate-and-fire neurons with conductance-based synapses embedded in a spiking environment can attain the correct firing statistics for sampling from a well-defined target distribution. For recurrent networks, we examine convergence toward stationarity in computer simulations and demonstrate sample-based Bayesian inference in a mixed graphical model. This points to a new computational role of high-conductance states and establishes a rigorous link between deterministic neuron models and functional stochastic dynamics on the network level.
Kanter, Ido; Butkovski, Maria; Peleg, Yitzhak; Zigzag, Meital; Aviad, Yaara; Reidler, Igor; Rosenbluh, Michael; Kinzel, Wolfgang
2010-08-16
Random bit generators (RBGs) constitute an important tool in cryptography, stochastic simulations and secure communications. The latter in particular has some difficult requirements: a high generation rate of unpredictable bit strings and secure key-exchange protocols over public channels. Deterministic algorithms generate pseudo-random number sequences at high rates; however, their unpredictability is limited by the very nature of their deterministic origin. Recently, physical RBGs based on chaotic semiconductor lasers were shown to exceed Gbit/s rates. Whether secure synchronization of two high-rate physical RBGs is possible remains an open question. Here we propose a method whereby two fast RBGs based on mutually coupled chaotic lasers are synchronized. Using information-theoretic analysis, we demonstrate security against a powerful computational eavesdropper capable of noiseless amplification, where all parameters are publicly known. The method is also extended to secure synchronization of a small network of three RBGs.
MIMICKING COUNTERFACTUAL OUTCOMES TO ESTIMATE CAUSAL EFFECTS.
Lok, Judith J
2017-04-01
In observational studies, treatment may be adapted to covariates at several times without a fixed protocol, in continuous time. Treatment influences covariates, which influence treatment, which influences covariates, and so on. Then even time-dependent Cox models cannot be used to estimate the net treatment effect. Structural nested models have been applied in this setting. Structural nested models are based on counterfactuals: the outcome a person would have had had treatment been withheld after a certain time. Previous work on continuous-time structural nested models assumes that counterfactuals depend deterministically on observed data, while conjecturing that this assumption can be relaxed. This article proves that one can mimic counterfactuals by constructing random variables, solutions to a differential equation, that have the same distribution as the counterfactuals, even given past observed data. These "mimicking" variables can be used to estimate the parameters of structural nested models without assuming the treatment effect to be deterministic.
NASA Astrophysics Data System (ADS)
Keane, Richard J.; Plant, Robert S.; Tennant, Warren J.
2016-05-01
The Plant-Craig stochastic convection parameterization (version 2.0) is implemented in the Met Office Regional Ensemble Prediction System (MOGREPS-R) and is assessed in comparison with the standard convection scheme, which includes only a simple stochastic element from random parameter variation. A set of 34 ensemble forecasts, each with 24 members, is considered over the month of July 2009. Deterministic and probabilistic measures of the precipitation forecasts are assessed. The Plant-Craig parameterization is found to improve probabilistic forecast measures, particularly the results for lower precipitation thresholds. The impact on deterministic forecasts at the grid scale is neutral, although the Plant-Craig scheme does deliver improvements when forecasts are made over larger areas. The improvements found are greater in conditions of relatively weak synoptic forcing, for which convective precipitation is likely to be less predictable.
Data-driven gradient algorithm for high-precision quantum control
NASA Astrophysics Data System (ADS)
Wu, Re-Bing; Chu, Bing; Owens, David H.; Rabitz, Herschel
2018-04-01
In the quest to achieve scalable quantum information processing technologies, gradient-based optimal control algorithms (e.g., GRAPE) are broadly used for implementing high-precision quantum gates, but their performance is often hindered by deterministic or random errors in the system model and the control electronics. In this paper, we show that GRAPE can be taught to be more effective by jointly learning from the design model and the experimental data obtained from process tomography. The resulting data-driven gradient optimization algorithm (d-GRAPE) can in principle correct all deterministic gate errors, with a mild efficiency loss. The d-GRAPE algorithm may become more powerful with broadband controls that involve a large number of control parameters, while other algorithms usually slow down due to the increased size of the search space. These advantages are demonstrated by simulating the implementation of a two-qubit controlled-NOT gate.
Stochastic Watershed Models for Risk Based Decision Making
NASA Astrophysics Data System (ADS)
Vogel, R. M.
2017-12-01
Over half a century ago, the Harvard Water Program introduced the field of operational or synthetic hydrology, providing stochastic streamflow models (SSMs) that could generate ensembles of synthetic streamflow traces useful for hydrologic risk management. The application of SSMs, based on streamflow observations alone, revolutionized water resources planning activities, yet has fallen out of favor due, in part, to their inability to account for the now nearly ubiquitous anthropogenic influences on streamflow. This commentary advances the modern equivalent of SSMs, termed 'stochastic watershed models' (SWMs), useful as input to nearly all modern risk-based water resource decision-making approaches. SWMs are deterministic watershed models implemented using stochastic meteorological series, model parameters and model errors to generate ensembles of streamflow traces that represent the variability in possible future streamflows. SWMs combine deterministic watershed models, which are ideally suited to accounting for anthropogenic influences, with recent developments in uncertainty analysis and principles of stochastic simulation.
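A minimal sketch of the SWM idea follows: a deterministic watershed model (here a toy linear reservoir standing in for a calibrated model) is driven by stochastic meteorology, stochastic parameters and a model-error term to produce an ensemble of streamflow traces. All distributions and parameter values are illustrative assumptions.

```python
# Minimal SWM-style sketch: deterministic watershed model (toy linear
# reservoir) + stochastic rainfall, perturbed parameter and multiplicative
# model error -> ensemble of flow traces.  All values are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_days, n_members = 365, 100

def watershed_model(rain, k):
    """Deterministic linear reservoir: dS/dt = rain - k*S, flow = k*S."""
    S, flow = 0.0, np.empty_like(rain)
    for t, r in enumerate(rain):
        S += r - k * S
        flow[t] = k * S
    return flow

ensemble = np.empty((n_members, n_days))
for m in range(n_members):
    rain = rng.gamma(shape=0.3, scale=10.0, size=n_days)   # stochastic meteorology
    k = rng.normal(0.2, 0.02)                              # stochastic parameter
    eps = rng.lognormal(mean=0.0, sigma=0.1, size=n_days)  # model error
    ensemble[m] = watershed_model(rain, k) * eps

print("median flow (first 5 days):", np.median(ensemble, axis=0)[:5])
print("5-95% band (first 5 days):", np.percentile(ensemble, [5, 95], axis=0)[:, :5])
```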
Fencing data transfers in a parallel active messaging interface of a parallel computer
Blocksome, Michael A.; Mamidala, Amith R.
2015-08-11
Fencing data transfers in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI including data communications endpoints, each endpoint comprising a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI and through data communications resources including a deterministic data communications network, including initiating execution through the PAMI of an ordered sequence of active SEND instructions for SEND data transfers between two endpoints, effecting deterministic SEND data transfers; and executing through the PAMI, with no FENCE accounting for SEND data transfers, an active FENCE instruction, the FENCE instruction completing execution only after completion of all SEND instructions initiated prior to execution of the FENCE instruction for SEND data transfers between the two endpoints.
Polarization changes in light beams trespassing anisotropic turbulence.
Korotkova, Olga
2015-07-01
The polarization properties of deterministic or random light with isotropic source correlations propagating in anisotropic turbulence along horizontal paths are considered for the first time. On the basis of the second-order coherence theory of beam-like fields and the extended Huygens-Fresnel integral, these properties are predicted to change on propagation. Our examples illustrate that the degree of polarization of beams that is unaffected by free-space propagation or by isotropic turbulence can either decrease or increase on traversing anisotropic turbulence, depending on the polarization state of the source.
Li, Yidan; Wang, Yidan; Meng, Xiangli; Zhu, Weiwei; Lu, Xiuzhang
2017-11-01
The right ventricular longitudinal strain (RVLS) of pulmonary hypertension (PH) patients and its relationship with RV function parameters measured by echocardiography and hemodynamic parameters measured by right heart catheterization were investigated. According to the WHO functional class (FC), 66 PH patients were divided into FC I/II (group 1) and FC III/IV (group 2). RV function parameters were measured by echocardiographic examination, and hemodynamic parameters were obtained by right heart catheterization. Patients in group 2 had higher systolic pulmonary artery pressure (sPAP; P < 0.05) than patients in group 1. Significant between-group differences were observed in global RVLS (RVLS_global), free-wall RVLS (RVLS_FW; P < 0.01), and conventional RV function parameters (all P < 0.05). Moreover, mPAP and PVR increased remarkably and CI decreased significantly in group 2. RVLS_global correlated positively with 6-min walking distance (6MWD; r = 0.492, P < 0.001) and N-terminal pro-brain natriuretic peptide (NT-proBNP; r = 0.632, P < 0.001), while RVLS_FW correlated positively with 6MWD (r = 0.483, P < 0.001) and NT-proBNP (r = 0.627, P < 0.001). Hemodynamic analysis revealed that RVLS_global correlated positively with mPAP (r = 0.594, P < 0.001), PVR (r = 0.573, P < 0.001) and CI (r = 0.366, P = 0.003), while RVLS_FW correlated positively with mPAP (r = 0.597, P < 0.001), PVR (r = 0.577, P < 0.001) and CI (r = 0.369, P = 0.002). According to receiver operating characteristic curves, the optimal cut-off values of RVLS_global (-15.0%) and RVLS_FW (-15.3%) detected prognosis with good sensitivity and specificity. The evidence shows that RVLS measurement can provide much-needed and reliable information on RV function and hemodynamics, and thus qualifies as a patient-friendly approach for the clinical management of PH patients.
Sidler, Dominik; Cristòfol-Clough, Michael; Riniker, Sereina
2017-06-13
Replica-exchange enveloping distribution sampling (RE-EDS) allows the efficient estimation of free-energy differences between multiple end-states from a single molecular dynamics (MD) simulation. In EDS, a reference state is sampled, which can be tuned by two types of parameters, i.e., the smoothness parameter s and the energy offsets, such that all end-states are sufficiently sampled. However, the choice of these parameters is not trivial. Replica exchange (RE), or parallel tempering, is a widely applied technique to enhance sampling. By combining EDS with the RE technique, the parameter-choice problem can be simplified and the challenge shifted toward an optimal distribution of the replicas in the smoothness-parameter space. The choice of a given replica distribution can alter the sampling efficiency significantly. In this work, global round-trip time optimization (GRTO) algorithms are tested for use in RE-EDS simulations. In addition, a local round-trip time optimization (LRTO) algorithm is proposed for systems with slowly adapting environments, where a reliable estimate of the round-trip time is challenging to obtain. The optimization algorithms were applied to RE-EDS simulations of a system of nine small-molecule inhibitors of phenylethanolamine N-methyltransferase (PNMT). The energy offsets were determined using our recently proposed parallel energy-offset (PEOE) estimation scheme. While the multistate GRTO algorithm yielded the best replica distribution for the ligands in water, the multistate LRTO algorithm was found to be the method of choice for the ligands in complex with PNMT. With this, the 36 alchemical free-energy differences between the nine ligands were calculated successfully from a single RE-EDS simulation 10 ns in length. Thus, RE-EDS presents an efficient method for the estimation of relative binding free energies.
Global Langevin model of multidimensional biomolecular dynamics.
Schaudinnus, Norbert; Lickert, Benjamin; Biswas, Mithun; Stock, Gerhard
2016-11-14
Molecular dynamics simulations of biomolecular processes are often discussed in terms of diffusive motion on a low-dimensional free energy landscape F(x). To provide a theoretical basis for this interpretation, one may invoke the system-bath ansatz à la Zwanzig. That is, by assuming a time scale separation between the slow motion along the system coordinate x and the fast fluctuations of the bath, a memory-free Langevin equation can be derived that describes the system's motion on the free energy landscape F(x), which is damped by a friction field and driven by a stochastic force that is related to the friction via the fluctuation-dissipation theorem. While the theoretical formulation of Zwanzig typically assumes a highly idealized form of the bath Hamiltonian and the system-bath coupling, one would like to extend the approach to realistic data-based biomolecular systems. Here a practical method is proposed to construct an analytically defined global model of structural dynamics. Given a molecular dynamics simulation and adequate collective coordinates, the approach employs an "empirical valence bond"-type model which is suitable to represent multidimensional free energy landscapes as well as an approximate description of the friction field. Adopting alanine dipeptide and a three-dimensional model of heptaalanine as simple examples, the resulting Langevin model is shown to reproduce the results of the underlying all-atom simulations. Because the Langevin equation can also be shown to satisfy the underlying assumptions of the theory (such as a delta-correlated Gaussian-distributed noise), the global model provides a correct, albeit empirical, realization of Zwanzig's formulation. As an application, the model can be used to investigate the dependence of the system on parameter changes and to predict the effect of site-selective mutations on the dynamics.
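For intuition, the sketch below integrates a memory-free overdamped Langevin equation on a one-dimensional double-well landscape with the Euler-Maruyama scheme, with the noise amplitude tied to the friction by the fluctuation-dissipation theorem. It is a one-dimensional toy, not the paper's data-driven multidimensional model; all parameter values are assumptions.

```python
# Minimal sketch: overdamped 1D Langevin dynamics on F(x) = (x^2 - 1)^2,
# Euler-Maruyama integration; noise fixed by fluctuation-dissipation,
# sigma^2 = 2*kT/gamma.  Toy parameters, not the paper's EVB-based model.
import numpy as np

rng = np.random.default_rng(42)
gamma, kT, dt, n_steps = 1.0, 0.3, 1e-3, 200_000

def dF(x):                      # gradient of F(x) = (x^2 - 1)^2
    return 4.0 * x * (x * x - 1.0)

x, traj = -1.0, np.empty(n_steps)
sigma = np.sqrt(2.0 * kT / gamma)
for i in range(n_steps):
    x += -dF(x) / gamma * dt + sigma * np.sqrt(dt) * rng.normal()
    traj[i] = x

# Hopping between the two wells indicates barrier crossings.
print("fraction of time in right well:", (traj > 0).mean())
```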
NASA Astrophysics Data System (ADS)
Gerck, Ed
We present a new, comprehensive framework to qualitatively improve election outcome trustworthiness, where voting is modeled as an information transfer process. Although voting is deterministic (all ballots are counted), information is treated stochastically using Information Theory. Error considerations, including faults, attacks, and threats by adversaries, are explicitly included. The influence of errors may be corrected to achieve an election outcome error as close to zero as desired (error-free), with a provably optimal design that is applicable to any type of voting, with or without ballots. Sixteen voting system requirements, including functional, performance, environmental and non-functional considerations, are derived and rated, meeting or exceeding current public-election requirements. The voter and the vote are unlinkable (secret ballot) although each is identifiable. The Witness-Voting System (Gerck, 2001) is extended as a conforming implementation of the provably optimal design that is error-free, transparent, simple, scalable, robust, receipt-free, universally-verifiable, 100% voter-verified, and end-to-end audited.
NASA Technical Reports Server (NTRS)
Kanning, G.
1975-01-01
A digital computer program written in FORTRAN is presented that implements the system identification theory for deterministic systems using input-output measurements. The user supplies programs simulating the mathematical model of the physical plant whose parameters are to be identified. The user may choose any one of three options. The first option allows for a complete model simulation for fixed input forcing functions. The second option identifies up to 36 parameters of the model from wind tunnel or flight measurements. The third option performs a sensitivity analysis for up to 36 parameters. The use of each option is illustrated with an example using input-output measurements for a helicopter rotor tested in a wind tunnel.
Deterministic diffusion in flower-shaped billiards.
Harayama, Takahisa; Klages, Rainer; Gaspard, Pierre
2002-08-01
We propose a flower-shaped billiard in order to study the irregular parameter dependence of chaotic normal diffusion. Our model is an open system consisting of periodically distributed obstacles in the shape of a flower, and it is strongly chaotic for almost all parameter values. We compute the parameter-dependent diffusion coefficient of this model from computer simulations and analyze its functional form using different schemes, all generalizing the simple random walk approximation of Machta and Zwanzig. The improved methods we use are based either on heuristic higher-order corrections to the simple random walk model, on lattice gas simulation methods, or they start from a suitable Green-Kubo formula for diffusion. We show that dynamical correlations, or memory effects, are of crucial importance in reproducing the precise parameter dependence of the diffusion coefficient.
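The basic quantity computed in such studies can be illustrated in a few lines: estimate the diffusion coefficient from the slope of the mean-squared displacement. Here a simple 2D lattice random walk, in the spirit of the Machta-Zwanzig approximation, stands in for the billiard dynamics; the lattice spacing and unit hop time are assumptions.

```python
# Minimal sketch: estimate D from the mean-squared displacement,
# <r(t)^2> ~ 2*d*D*t.  A 2D lattice random walk stands in for the
# billiard dynamics; spacing a and unit time step are assumed.
import numpy as np

rng = np.random.default_rng(7)
n_walkers, n_steps, a = 2000, 1000, 1.0
moves = a * np.array([(1, 0), (-1, 0), (0, 1), (0, -1)], dtype=float)
idx = rng.integers(0, 4, size=(n_walkers, n_steps))
steps = moves[idx]                             # shape (walkers, steps, 2)
pos = np.cumsum(steps, axis=1)                 # trajectories
msd = (pos ** 2).sum(axis=2).mean(axis=0)      # <r^2>(t)

t = np.arange(1, n_steps + 1)
D = np.polyfit(t, msd, 1)[0] / (2 * 2)         # slope / (2*d), d = 2
print("estimated D:", D, " (exact for this walk: a^2/4 =", a ** 2 / 4, ")")
```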
NASA Astrophysics Data System (ADS)
Tice, Ian
2018-04-01
This paper concerns the dynamics of a layer of incompressible viscous fluid lying above a rigid plane and with an upper boundary given by a free surface. The fluid is subject to a constant external force with a horizontal component, which arises in modeling the motion of such a fluid down an inclined plane, after a coordinate change. We consider the problem both with and without surface tension for horizontally periodic flows. This problem gives rise to shear-flow equilibrium solutions, and the main thrust of this paper is to study the asymptotic stability of the equilibria in certain parameter regimes. We prove that there exists a parameter regime in which sufficiently small perturbations of the equilibrium at time t=0 give rise to global-in-time solutions that return to equilibrium exponentially in the case with surface tension and almost exponentially in the case without surface tension. We also establish a vanishing surface tension limit, which connects the solutions with and without surface tension.
A robust active control system for shimmy damping in the presence of free play and uncertainties
NASA Astrophysics Data System (ADS)
Orlando, Calogero; Alaimo, Andrea
2017-02-01
Shimmy vibration is the oscillatory motion of the fork-wheel assembly about the steering axis. It represents one of the major problems of aircraft landing gear because it can lead to excessive wear and discomfort as well as safety concerns. Based on a nonlinear model of the mechanics of a single-wheel nose landing gear (NLG), an electromechanical actuator and tire elasticity, a robust active controller capable of damping shimmy vibration is designed and investigated in this study. A novel Population Decline Swarm Optimization (PDSO) procedure is introduced and used to select the optimal parameters for the controller. The PDSO procedure is based on a declining demographic model and shows high global search capability with reduced computational cost. The open- and closed-loop system behavior is analyzed for different case studies of aeronautical interest, and the effects of torsional free play on the nose landing gear response are also studied. Probabilistic uncertainties in the plant parameters are then taken into account to assess the robustness of the active controller using a stochastic approach.
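A minimal sketch of the population-decline idea in a particle swarm optimizer is given below: the swarm shrinks as iterations proceed, discarding the worst performers to save cost while keeping early global exploration. This is a generic PSO sketch on a toy objective, not the authors' PDSO algorithm; all coefficients are assumptions.

```python
# Minimal sketch: particle swarm optimization with a declining population.
# The swarm size decays linearly; worst particles are dropped each step.
# Generic coefficients; toy quadratic stands in for controller tuning.
import numpy as np

rng = np.random.default_rng(8)

def objective(x):                        # toy surrogate objective
    return ((x - 0.3) ** 2).sum(axis=-1)

dim, n0, iters = 4, 60, 80
x = rng.uniform(-1, 1, (n0, dim))
v = np.zeros_like(x)
pbest, pval = x.copy(), objective(x)
gbest = pbest[pval.argmin()]

for k in range(iters):
    n_k = max(10, int(n0 * (1 - k / iters)))        # population declines
    order = np.argsort(pval)[:n_k]                  # keep best performers
    x, v, pbest, pval = x[order], v[order], pbest[order], pval[order]
    r1, r2 = rng.random((2, n_k, dim))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = x + v
    f = objective(x)
    improved = f < pval
    pbest[improved], pval[improved] = x[improved], f[improved]
    gbest = pbest[pval.argmin()]

print("best parameters:", np.round(gbest, 3), " f =", pval.min())
```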
A method of transition conflict resolving in hierarchical control
NASA Astrophysics Data System (ADS)
Łabiak, Grzegorz
2016-09-01
The paper concerns the problem of automatically solving transition conflicts in hierarchical concurrent state machines (also known as UML state machines). Preparing a formal specification of behaviour free from conflicts can be very complex for the designer. This paper proposes a method for solving conflicts through modification of transition predicates. Partially specified predicates in the nondeterministic diagram are transformed into a symbolic Boolean space, whose points code all possible valuations of the transition predicates. Next, all valuations satisfying the partial specification are logically multiplied by a function representing all possible orthogonal predicate valuations. The result of this operation contains all collections of predicates which, under the given partial specification, make the original diagram conflict-free and deterministic.
Contact angle and local wetting at contact line.
Li, Ri; Shan, Yanguang
2012-11-06
This theoretical study was motivated by recent experiments and theoretical work that had suggested the dependence of the static contact angle on the local wetting at the triple-phase contact line. We revisit this topic because the static contact angle as a local wetting parameter is still not widely understood. To further clarify the relationship of the static contact angle with wetting, two approaches are applied to derive a general equation for the static contact angle of a droplet on a composite surface composed of heterogeneous components. A global approach, based on the free surface energy of a thermodynamic system containing the droplet and the solid surface, shows the static contact angle to be a function of the local surface chemistry and local wetting state at the contact line. A local approach, in which only local forces acting on the contact line are considered, results in the same equation. The fact that the local approach agrees with the global approach further demonstrates that the static contact angle is a local wetting parameter. Additionally, the study suggests that the wetting described by the Wenzel and Cassie equations is the local wetting at the contact line rather than the global wetting of the droplet.
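The local form that both derivations lead to can be written, for a two-component surface, as the Cassie-type relation below, the key point being that the fractions are sampled along the contact line rather than over the wetted area. The two-component notation is an assumption for illustration.

```latex
% Local Cassie-type relation for a two-component composite surface:
% f_1, f_2 are the fractions of the two surface chemistries sampled
% along the contact line itself (not global area fractions), and
% theta_1, theta_2 their intrinsic Young contact angles.
\cos\theta = f_1\cos\theta_1 + f_2\cos\theta_2, \qquad f_1 + f_2 = 1
```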
NASA Astrophysics Data System (ADS)
Yan, Yajing; Barth, Alexander; Beckers, Jean-Marie; Candille, Guillem; Brankart, Jean-Michel; Brasseur, Pierre
2015-04-01
Sea surface height, sea surface temperature and temperature profiles at depth collected between January and December 2005 are assimilated into a realistic eddy-permitting primitive equation model of the North Atlantic Ocean using the Ensemble Kalman Filter. Sixty ensemble members are generated by adding realistic noise to the forcing parameters related to temperature. The ensemble is diagnosed and validated by comparing the ensemble spread with the model/observation difference, as well as by rank histograms, before the assimilation experiments. An incremental analysis update scheme is applied in order to reduce spurious oscillations due to the model state correction. The results of the assimilation are assessed according to both deterministic and probabilistic metrics, with observations used in the assimilation experiments and with independent observations, which goes further than most previous studies and constitutes one of the original points of this paper. Regarding the deterministic validation, the ensemble means, together with the ensemble spreads, are compared to the observations in order to diagnose the ensemble distribution properties in a deterministic way. Regarding the probabilistic validation, the continuous ranked probability score (CRPS) is used to evaluate the ensemble forecast system in terms of reliability and resolution. The reliability is further decomposed into bias and dispersion by the reduced centred random variable (RCRV) score in order to investigate the reliability properties of the ensemble forecast system. The improvement due to the assimilation is demonstrated using these validation metrics. Finally, the deterministic and probabilistic validations are analysed jointly, and their consistency and complementarity are highlighted. Highly reliable situations, in which the RMS error and the CRPS give the same information, are identified for the first time in this paper.
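For reference, the CRPS used here can be estimated directly from an ensemble with the standard empirical formula CRPS = E|X - y| - (1/2) E|X - X'|, as in the minimal sketch below; the ensemble values are synthetic stand-ins.

```python
# Minimal sketch: empirical CRPS of an ensemble x against a verifying
# observation y, CRPS = E|X - y| - 0.5 * E|X - X'| (standard, not the
# bias-corrected "fair" variant).  Lower is better.
import numpy as np

def crps_ensemble(x, y):
    x = np.asarray(x, dtype=float)
    term1 = np.abs(x - y).mean()
    term2 = 0.5 * np.abs(x[:, None] - x[None, :]).mean()
    return term1 - term2

rng = np.random.default_rng(3)
ensemble = rng.normal(20.0, 1.5, size=60)   # e.g., 60 SST members (assumed)
print("CRPS:", crps_ensemble(ensemble, y=21.2))
```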
Electro-rheological finishing for optical surfaces
NASA Astrophysics Data System (ADS)
Cheng, Haobo; Wang, Peng
2009-05-01
Many polishing techniques, such as fixed-abrasive polishing, abrasive-free polishing and magnetorheological finishing, have been developed. Here, a new technique is proposed that uses a mixture of electro-rheological (ER) fluid with abrasives as the polishing slurry, a special process that does not require a pad. Electro-rheological fluid is a suspension whose viscosity is approximately proportional to the applied electric field strength. When the field strength reaches a certain limit, a phase transition occurs and the liquid acquires a solid-like character; when the electric field is removed, the fluid regains its original viscosity within milliseconds. In this research work, we employed this viscosity change of the ER fluid to hold the polishing particles for micromachining. A point-contact electro-rheological finishing (ERF) tool was designed with a tip diameter of 0.5-1 mm. Both the anode and the cathode of the electric field were combined in the tool, and the electric field is controllable. As the tool moves across the profile of the workpiece, controlling the electric field strength as well as the other manufacturing parameters ensures deterministic material removal. Furthermore, the electro-rheological finishing process has been planned in detail.
Variational Bayesian identification and prediction of stochastic nonlinear dynamic causal models.
Daunizeau, J; Friston, K J; Kiebel, S J
2009-11-01
In this paper, we describe a general variational Bayesian approach for approximate inference on nonlinear stochastic dynamic models. This scheme extends established approximate inference on hidden states to cover: (i) nonlinear evolution and observation functions, (ii) unknown parameters and (precision) hyperparameters and (iii) model comparison and prediction under uncertainty. Model identification or inversion entails the estimation of the marginal likelihood or evidence of a model. This difficult integration problem can be finessed by optimising a free-energy bound on the evidence using results from variational calculus. This yields a deterministic update scheme that optimises an approximation to the posterior density on the unknown model variables. We derive such a variational Bayesian scheme in the context of nonlinear stochastic dynamic hierarchical models, for both model identification and time-series prediction. The computational complexity of the scheme is comparable to that of an extended Kalman filter, which is critical when inverting high-dimensional models or long time-series. Using Monte Carlo simulations, we assess the estimation efficiency of this variational Bayesian approach using three stochastic variants of chaotic dynamic systems. We also demonstrate the model comparison capabilities of the method, its self-consistency and its predictive power.
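The bound referred to here is the standard variational identity below: the log-evidence splits into the free energy F(q) and a nonnegative KL term, so maximizing F both approximates the posterior and bounds the evidence. The notation is generic, not tied to this paper's specific hierarchical models.

```latex
% Log-evidence decomposition underlying the scheme: F(q) is the
% free-energy bound, maximized over the approximate posterior q(theta);
% the gap is the nonnegative KL divergence to the true posterior.
\ln p(y \mid m) = F(q) + \mathrm{KL}\big[q(\theta)\,\big\|\,p(\theta \mid y, m)\big],
\qquad
F(q) = \int q(\theta)\,\ln\frac{p(y,\theta \mid m)}{q(\theta)}\,\mathrm{d}\theta
```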
Uncertainty Analysis and Parameter Estimation For Nearshore Hydrodynamic Models
NASA Astrophysics Data System (ADS)
Ardani, S.; Kaihatu, J. M.
2012-12-01
Numerical models represent deterministic approaches used to describe the relevant physical processes in the nearshore. The complexity of the model physics and the uncertainty in the model inputs compel us to apply a stochastic approach to analyze the robustness of the model. The Bayesian inverse problem is one powerful way to estimate the important input model parameters (determined by a priori sensitivity analysis) and can be used for uncertainty analysis of the outputs. Bayesian techniques can be used to find the range of most probable parameters based on the probability of the observed data and the residual errors. In this study, the effect of input data involving lateral (Neumann) boundary conditions, bathymetry and offshore wave conditions on nearshore numerical models is considered. Monte Carlo simulation is applied to a deterministic numerical model (the Delft3D modeling suite for coupled waves and flow) for the resulting uncertainty analysis of the outputs (wave height, flow velocity, mean sea level, etc.). Uncertainty analysis of the outputs is performed by random sampling from the input probability distribution functions and running the model as many times as required until convergent results are achieved. The case study used in this analysis is the Duck94 experiment, conducted at the U.S. Army Field Research Facility at Duck, North Carolina, USA, in the fall of 1994. The joint probability of model parameters relevant for the Duck94 experiments is found using the Bayesian approach. We further show that, by using Bayesian techniques to estimate the optimized model parameters as inputs and applying them for uncertainty analysis, we obtain more consistent results than when using the prior information for the input data: the variation of the uncertain parameters decreases and the probability of the observed data improves as well. Keywords: Monte Carlo simulation, Delft3D, uncertainty analysis, Bayesian techniques, MCMC
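The propagation loop itself is simple, as the hedged sketch below shows: draw the uncertain inputs from their distributions, run the deterministic model for each draw, and summarize the output spread. run_model() is a stub standing in for a Delft3D run; all distributions are assumptions.

```python
# Minimal sketch: Monte Carlo uncertainty propagation through a
# deterministic model.  run_model() is a stand-in stub for a Delft3D
# run; input distributions and the toy response are assumptions.
import numpy as np

rng = np.random.default_rng(5)

def run_model(h_wave, eta_bnd, roughness):
    """Stub for the deterministic nearshore model (returns nearshore Hs)."""
    return 0.6 * h_wave + 0.2 * eta_bnd + 0.5 * roughness  # toy response

n = 2000
h_wave  = rng.normal(2.0, 0.3, n)      # offshore wave height [m]
eta_bnd = rng.normal(0.0, 0.1, n)      # lateral boundary level [m]
rough   = rng.uniform(0.01, 0.03, n)   # roughness coefficient
out = run_model(h_wave, eta_bnd, rough)

print("mean: %.3f  std: %.3f" % (out.mean(), out.std()))
print("5-95%% band:", np.percentile(out, [5, 95]))
```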
NASA Technical Reports Server (NTRS)
1978-01-01
Research activities related to global weather, ocean/air interactions, and climate are reported. The global weather research is aimed at improving the assimilation of satellite-derived data in weather forecast models, developing analysis/forecast models that can more fully utilize satellite data, and developing new measures of forecast skill to properly assess the impact of satellite data on weather forecasting. The oceanographic research goal is to understand and model the processes that determine the general circulation of the oceans, focusing on those processes that affect sea surface temperature and oceanic heat storage, which are the oceanographic variables with the greatest influence on climate. The climate research objective is to support the development and effective utilization of space-acquired data systems in climate forecast models and to conduct sensitivity studies to determine the effect of lower boundary conditions on climate, and predictability studies to determine which global climate features can be modeled either deterministically or statistically.
Towards quantifying uncertainty in Greenland's contribution to 21st century sea-level rise
NASA Astrophysics Data System (ADS)
Perego, M.; Tezaur, I.; Price, S. F.; Jakeman, J.; Eldred, M.; Salinger, A.; Hoffman, M. J.
2015-12-01
We present recent work towards developing a methodology for quantifying uncertainty in Greenland's 21st century contribution to sea-level rise. While we focus on uncertainties associated with the optimization and calibration of the basal sliding parameter field, the methodology is largely generic and could be applied to other (or multiple) sets of uncertain model parameter fields. The first step in the workflow is the solution of a large-scale, deterministic inverse problem, which minimizes the mismatch between observed and computed surface velocities by optimizing the two-dimensional coefficient field in a linear-friction sliding law. We then expand the deviation of this coefficient field from its estimated "mean" state using a reduced basis of Karhunen-Loève Expansion (KLE) vectors. A Bayesian calibration is used to determine the optimal coefficient values for this expansion. The prior for the Bayesian calibration can be computed using the Hessian of the deterministic inversion or using an exponential covariance kernel. The posterior distribution is then obtained using Markov chain Monte Carlo run on an emulator of the forward model. Finally, the uncertainty in the modeled sea-level rise is obtained by performing an ensemble of forward propagation runs. We present and discuss preliminary results obtained using a moderate-resolution model of the Greenland Ice Sheet. As demonstrated in previous work, the primary difficulty in applying the complete workflow to realistic, high-resolution problems is that the effective dimension of the parameter space is very large.
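A minimal sketch of the KLE step on a 1D grid follows: eigendecompose an exponential covariance kernel, truncate to the leading modes, and draw realizations of the perturbation field. The grid, correlation length and variance are illustrative; in the workflow it is the mode coefficients that the Bayesian calibration constrains.

```python
# Minimal sketch: truncated Karhunen-Loeve expansion on a 1D grid.
# Eigendecompose an exponential covariance kernel, keep leading modes,
# and draw realizations delta(x) = sum_k sqrt(lam_k) xi_k v_k(x).
# Grid, correlation length and variance are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(11)
x = np.linspace(0.0, 1.0, 200)                 # 1D stand-in for the 2D field
ell, var = 0.1, 1.0                            # correlation length, variance
C = var * np.exp(-np.abs(x[:, None] - x[None, :]) / ell)

lam, V = np.linalg.eigh(C)                     # ascending eigenvalues
lam, V = lam[::-1], V[:, ::-1]                 # sort descending
k = np.searchsorted(np.cumsum(lam) / lam.sum(), 0.95) + 1  # 95% energy
print("modes kept:", k)

xi = rng.normal(size=k)                        # calibrated in the Bayesian step
delta = V[:, :k] @ (np.sqrt(lam[:k]) * xi)     # one realization of the field
print("field std (sample):", delta.std())
```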
Nonlinear response of the immune system to power-frequency magnetic fields.
Marino, A A; Wolcott, R M; Chervenak, R; Jourd'Heuil, F; Nilsen, E; Frilot, C
2000-09-01
Studies of the effects of power-frequency electromagnetic fields (EMFs) on the immune and other body systems produced positive and negative results, and this pattern was usually interpreted to indicate the absence of real effects. However, if the biological effects of EMFs were governed by nonlinear laws, deterministic responses to fields could occur that were both real and inconsistent, thereby leading to both types of results. The hypothesis of real inconsistent effects due to EMFs was tested by exposing mice to 1 G, 60 Hz for 1-105 days and observing the effect on 20 immune parameters, using flow cytometry and functional assays. The data were evaluated by means of a novel statistical procedure that avoided averaging away oppositely directed changes in different animals, which we perceived to be the problem in some of the earlier EMF studies. The reliability of the procedure was shown using appropriate controls. In three independent experiments involving exposure for 21 or more days, the field altered lymphoid phenotype even though the changes in individual immune parameters were inconsistent. When the data were evaluated using traditional linear statistical methods, no significant difference in any immune parameter was found. We were able to mimic the results by sampling from known chaotic systems, suggesting that deterministic chaos could explain the effect of fields on the immune system. We conclude that exposure to power-frequency fields produced changes in the immune system that were both real and inconsistent.
Developing Stochastic Models as Inputs for High-Frequency Ground Motion Simulations
NASA Astrophysics Data System (ADS)
Savran, William Harvey
High-frequency (~10 Hz) deterministic ground-motion simulations are challenged by our limited understanding of the small-scale structure of the earth's crust and of the rupture process during an earthquake. We will likely never obtain deterministic models that can accurately describe these processes down to the meter-scale lengths required for broadband wave propagation. Instead, we can attempt to explain the behavior, in a statistical sense, by including stochastic models defined by correlations observed in the natural earth and through physics-based simulations of the earthquake rupture process. Toward this goal, we develop stochastic models to address both of the primary considerations for deterministic ground-motion simulations: the description of the material properties in the crust, and broadband earthquake source descriptions. Using borehole sonic log data recorded in the Los Angeles basin, we estimate the spatial correlation structure of the small-scale fluctuations in P-wave velocities by determining the best-fitting parameters of a von Karman correlation function. We find that Hurst exponents, nu, between 0.0 and 0.2, vertical correlation lengths, a_z, of 15-150 m, and a standard deviation, sigma, of about 5% characterize the variability in the borehole data. Using these parameters, we generate a stochastic model of velocity and density perturbations, combine it with leading seismic velocity models, and perform a validation exercise for the 2008 Chino Hills, CA, earthquake using heterogeneous media. We find that models of velocity and density perturbations can have significant effects on the wavefield at frequencies as low as 0.3 Hz, with ensemble median values of various ground-motion metrics varying by up to +/-50% at certain stations compared to those computed solely from the CVM. Finally, we develop a kinematic rupture generator based on dynamic rupture simulations on geometrically complex faults. We analyze 100 dynamic rupture simulations on strike-slip faults ranging from Mw 6.4 to 7.2. We find that our dynamic simulations follow empirical scaling relationships for inter-plate strike-slip events and provide source spectra comparable with an ω^-2 model. Our rupture generator reproduces GMPE medians and intra-event standard deviations of spectral accelerations for an ensemble of 10 Hz fully deterministic ground-motion simulations, as compared to NGA-West2 GMPE relationships up to 0.2 seconds.
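As an illustration of the correlation-structure fit, the sketch below fits a von Karman autocorrelation, rho(r) = 2^(1-nu)/Gamma(nu) (r/a)^nu K_nu(r/a), to a synthetic correlation curve standing in for the sonic-log estimates; the data and starting values are assumptions.

```python
# Minimal sketch: fit a von Karman autocorrelation
#   rho(r) = 2^(1-nu)/Gamma(nu) * (r/a)^nu * K_nu(r/a)
# to an empirical correlation curve.  The synthetic "data" below is an
# assumption standing in for correlations estimated from borehole logs.
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import gamma, kv

def von_karman(r, a, nu):
    z = r / a
    return 2.0 ** (1.0 - nu) / gamma(nu) * z ** nu * kv(nu, z)

r = np.linspace(1.0, 300.0, 80)                # lag distance [m], r > 0
rng = np.random.default_rng(2)
rho_obs = von_karman(r, 60.0, 0.15) + rng.normal(0, 0.02, r.size)

(a_fit, nu_fit), _ = curve_fit(von_karman, r, rho_obs, p0=(50.0, 0.3),
                               bounds=([1.0, 0.01], [500.0, 1.0]))
print("correlation length a = %.1f m,  Hurst exponent nu = %.2f"
      % (a_fit, nu_fit))
```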
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Shuai; Xiong, Lihua; Li, Hong-Yi
2015-05-26
Hydrological simulations to delineate the impacts of climate variability and human activities are subject to uncertainties related to both the parameters and the structure of the hydrological models. To analyze the impact of these uncertainties on model performance and to yield more reliable simulation results, a global calibration and multimodel combination method was proposed that integrates the Shuffled Complex Evolution Metropolis (SCEM) algorithm and Bayesian Model Averaging (BMA) of four monthly water balance models. The method was applied to the Weihe River Basin (WRB), the largest tributary of the Yellow River, to determine the contribution of climate variability and human activities to runoff changes. The change point, used to divide the record into a baseline period (1956-1990) and a human-impacted period (1991-2009), was derived using both a cumulative curve and Pettitt's test. Results show that the combination method based on SCEM provides more skillful deterministic predictions than the best calibrated individual model, resulting in the smallest uncertainty interval of runoff changes attributed to climate variability and human activities. This combination methodology provides a practical and flexible tool for the attribution of runoff changes to climate variability and human activities with hydrological models.
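The BMA combination step can be sketched as below: each calibrated model's prediction is weighted by a normalized posterior weight, and the weighted sum gives the combined forecast. Deriving weights directly from Gaussian likelihoods is a simplification here; the paper estimates them jointly with SCEM.

```python
# Minimal sketch of the BMA combination step: weight model predictions
# by normalized likelihoods under a Gaussian error assumption.  The
# weight recipe and all numbers are simplifying assumptions.
import numpy as np

def bma_weights(preds, obs, sigma):
    """preds: (n_models, n_time); obs: (n_time,). Returns model weights."""
    loglik = -0.5 * (((preds - obs) / sigma) ** 2).sum(axis=1)
    w = np.exp(loglik - loglik.max())        # numerically stabilized
    return w / w.sum()

rng = np.random.default_rng(9)
obs = 10 + rng.normal(0, 1.0, 120)                   # monthly runoff (toy)
errs = np.array([[0.8], [1.2], [2.0], [3.0]])        # per-model error scales
preds = obs + rng.normal(0.0, errs, (4, 120))        # 4 candidate models

w = bma_weights(preds, obs, sigma=1.0)
print("weights:", np.round(w, 3))
print("BMA mean prediction (first 3):", (w[:, None] * preds).sum(axis=0)[:3])
```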
Ultra-low roughness magneto-rheological finishing for EUV mask substrates
NASA Astrophysics Data System (ADS)
Dumas, Paul; Jenkins, Richard; McFee, Chuck; Kadaksham, Arun J.; Balachandran, Dave K.; Teki, Ranganath
2013-09-01
EUV mask substrates, made of titania-doped fused silica, ideally require sub-Angstrom surface roughness, sub-30 nm flatness, and no bumps/pits larger than 1 nm in height/depth. To achieve these specifications, substrates must undergo iterative global and local polishing processes. Magnetorheological finishing (MRF) is a local polishing technique which can accurately and deterministically correct substrate figure, but typically results in a higher surface roughness than the current requirements for EUV substrates. We describe a new super-fine MRF® polishing fluid which is able to meet both the flatness and roughness specifications for EUV mask blanks. This eases the burden on the subsequent global polishing process by decreasing the polishing time, and hence the defectivity and extent of figure distortion.
Avoiding space robot collisions utilizing the NASA/GSFC tri-mode skin sensor
NASA Technical Reports Server (NTRS)
Prinz, F. B.
1991-01-01
Sensor-based robot motion planning research has primarily focused on mobile robots. Consider, however, the case of a robot manipulator expected to operate autonomously in a dynamic environment where unexpected collisions can occur with many parts of the robot. Only a sensor-based system capable of generating collision-free paths would be acceptable in such situations. Recently, work in this area has been reported in which a deterministic solution for 2-DOF systems was generated. The arm was sensitized with a 'skin' of infra-red sensors. We have proposed a heuristic (potential-field-based) methodology for redundant robots with many DOFs. The key concepts are: solving the path planning problem with cooperating global and local planning modules; the use of complete information from the sensors and partial (but appropriate) information from a world model; representation of objects with hyper-ellipsoids in the world model; and the use of variational planning. We intend to sensitize the robot arm with a 'skin' of capacitive proximity sensors. These sensors were developed at NASA and are exceptionally well suited for the space application. In the first part of the report, we discuss the development and modeling of the capacitive proximity sensor; in the second part, the motion planning algorithm.
Global Optimal Trajectory in Chaos and NP-Hardness
NASA Astrophysics Data System (ADS)
Latorre, Vittorio; Gao, David Yang
This paper presents an unconventional theory and method for solving general nonlinear dynamical systems. Instead of direct iterative methods, the discretized nonlinear system is first formulated as a global optimization problem via the least-squares method. A newly developed canonical duality theory shows that this nonconvex minimization problem can be solved deterministically in polynomial time if a global optimality condition is satisfied. The so-called pseudo-chaos produced by linear iterative methods is mainly due to intrinsic numerical error accumulation. Otherwise, the global optimization problem could be NP-hard and the nonlinear system can be truly chaotic. A conjecture is proposed which reveals the connection between chaos in nonlinear dynamics and NP-hardness in computer science. The methodology and the conjecture are verified by applications to the well-known logistic equation, a forced memristive circuit and the Lorenz system. Computational results show that the canonical duality theory can be used to identify chaotic systems and to obtain realistic global optimal solutions in nonlinear dynamical systems. The method and results presented in this paper should bring new insights into nonlinear dynamical systems and NP-hardness in computational complexity theory.
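The first step of the approach, recasting a discretized system as a least-squares problem over the whole trajectory, is easy to illustrate with the logistic equation. The sketch below uses a generic local solver, so a nonzero final residual signals exactly the local-minimum trapping that motivates the canonical duality treatment; it does not implement that theory.

```python
# Minimal sketch: recast a discretized nonlinear system (logistic map)
# as a least-squares problem over the whole trajectory x_1..x_n, then
# apply a generic local solver.  Not the canonical duality method; a
# nonzero residual indicates a local minimum of the nonconvex problem.
import numpy as np
from scipy.optimize import least_squares

r, n, x0 = 3.8, 50, 0.2

def residuals(x):
    traj = np.concatenate(([x0], x))           # prepend known x_0
    return traj[1:] - r * traj[:-1] * (1.0 - traj[:-1])

sol = least_squares(residuals, x0=np.full(n, 0.5), method="trf")
print("residual norm:", np.linalg.norm(sol.fun))
print("first recovered steps:", np.round(sol.x[:5], 4))
```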
Deterministic chaotic dynamics of Raba River flow (Polish Carpathian Mountains)
NASA Astrophysics Data System (ADS)
Kędra, Mariola
2014-02-01
Is the underlying dynamics of river flow random or deterministic? If it is deterministic, is it deterministic chaotic? This issue is still controversial. The application of several independent methods, techniques and tools for studying daily river flow data gives consistent, reliable and clear-cut results to the question. The outcomes point out that the investigated discharge dynamics is not random but deterministic. Moreover, the results completely confirm the nonlinear deterministic chaotic nature of the studied process. The research was conducted on daily discharge from two selected gauging stations of the mountain river in southern Poland, the Raba River.
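One standard tool behind such conclusions is the Grassberger-Procaccia correlation sum on a delay-embedded series, whose log-log slope estimates the correlation dimension; a low, saturating value supports determinism. In the hedged sketch below a logistic-map series stands in for daily discharge, and the delay, embedding dimension and radii are illustrative choices.

```python
# Minimal sketch: correlation sum C(r) on a delay-embedded series; the
# slope of log C vs log r estimates the correlation dimension.  A
# logistic-map series stands in for discharge data; tau, m, radii assumed.
import numpy as np

n = 1200
x = np.empty(n + 1); x[0] = 0.4
for i in range(n):
    x[i + 1] = 3.9 * x[i] * (1.0 - x[i])     # stand-in deterministic series

emb = np.column_stack([x[:-1], x[1:]])        # embedding: m = 2, tau = 1
d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
dist = d[np.triu_indices(len(emb), k=1)]      # unique pairwise distances

radii = np.array([0.02, 0.05, 0.1, 0.2])
C = np.array([(dist < r).mean() for r in radii])
slope = np.polyfit(np.log(radii), np.log(C), 1)[0]
print("correlation sums:", np.round(C, 4))
print("estimated correlation dimension ~", round(slope, 2))
```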
Ecological Succession Pattern of Fungal Community in Soil along a Retreating Glacier
Tian, Jianqing; Qiao, Yuchen; Wu, Bing; Chen, Huai; Li, Wei; Jiang, Na; Zhang, Xiaoling; Liu, Xingzhong
2017-01-01
Accelerated by global climate change, retreating glaciers leave behind soil chronosequences of primary succession. Current knowledge of primary succession comes mainly from studies of vegetation dynamics, whereas information about belowground microbes remains unclear. Here, we combined shifts in community assembly processes with microbial primary succession to better understand the mechanisms governing the stochastic/deterministic balance. We investigated fungal succession and community assembly via high-throughput sequencing along a well-established glacier forefront chronosequence that spans 2–188 years of deglaciation. Shannon diversity and evenness peaked at a distance of 370 m and declined afterwards. The response of fungal diversity to distance varied among phyla: Basidiomycota Shannon diversity significantly decreased with distance, while the pattern of Rozellomycota Shannon diversity was unimodal. The abundance of the most frequent OTU, OTU2 (Cryptococcus terricola), increased with successional distance, whereas that of OTU65 (Tolypocladium tundrense) decreased. Based on null deviation analyses, the composition of the fungal community was initially governed strongly by deterministic processes and later less so. Our results revealed that distance, altitude, soil microbial biomass carbon, soil microbial biomass nitrogen and NH4+–N significantly correlated with fungal community composition along the chronosequence. These results suggest that the drivers of fungal community composition are dynamic in a glacier chronosequence, which may relate to fungal ecophysiological traits and adaptation in an evolving ecosystem. This information will help in understanding the mechanistic underpinnings of microbial community assembly during ecosystem succession across scales and scenarios.
ZERODUR: deterministic approach for strength design
NASA Astrophysics Data System (ADS)
Hartmann, Peter
2012-12-01
There is increasing demand for zero-expansion glass ceramic ZERODUR substrates capable of enduring higher static operational loads or accelerations. The integrity of structures such as optical or mechanical elements for satellites surviving rocket launches, filigree lightweight mirrors, wobbling mirrors, and reticle and wafer stages in microlithography must be guaranteed with low failure probability. Their design requires statistically relevant strength data. The traditional approach using the statistical two-parameter Weibull distribution suffered from two problems: the data sets were too small to obtain distribution parameters with sufficient accuracy, and also too small to decide on the validity of the model. This holds especially for the low failure probability levels required for reliable applications. Extrapolation to 0.1% failure probability and below led to design strengths so low that higher-load applications seemed infeasible. New data have been collected with numbers per set large enough to enable tests of the applicability of the three-parameter Weibull distribution. This distribution proved to fit the data much better. Moreover, it delivers a lower threshold value, i.e. a minimum value for the breakage stress, allowing statistical uncertainty to be removed by introducing a deterministic method to calculate design strength. Considerations from the theory of fracture mechanics, which have proven reliable in proof-test qualifications of delicate structures made from brittle materials, enable including fatigue due to stress corrosion in a straightforward way. With the formulae derived, either lifetime can be calculated from a given stress, or allowable stress from a minimum required lifetime. The data, distributions, and design strength calculations for several practically relevant surface conditions of ZERODUR are given. The values obtained are significantly higher than those resulting from the two-parameter Weibull approach and are no longer subject to statistical uncertainty.
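A minimal sketch of the three-parameter fit is shown below using scipy's weibull_min, whose location parameter plays the role of the strength threshold; with a positive threshold, stresses below it carry essentially zero failure probability, which is what enables a deterministic design value. The sample is synthetic, not ZERODUR data.

```python
# Minimal sketch: three-parameter Weibull fit with scipy's weibull_min
# (shape c, loc = strength threshold, scale).  Synthetic sample only.
import numpy as np
from scipy.stats import weibull_min

true_c, true_loc, true_scale = 2.5, 50.0, 30.0       # MPa-scale toy values
data = weibull_min.rvs(true_c, loc=true_loc, scale=true_scale,
                       size=500, random_state=42)

c, loc, scale = weibull_min.fit(data)                # 3-parameter MLE
print("shape c=%.2f  threshold=%.1f MPa  scale=%.1f MPa" % (c, loc, scale))

# Below the fitted threshold the failure probability is ~zero, instead
# of extrapolating a 2-parameter fit down to 0.1% failure probability.
print("P(fail at 55 MPa):", weibull_min.cdf(55.0, c, loc, scale))
```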
DOE Office of Scientific and Technical Information (OSTI.GOV)
Madankan, R.; Pouget, S.; Singla, P., E-mail: psingla@buffalo.edu
Volcanic ash advisory centers are charged with forecasting the movement of volcanic ash plumes for aviation, health and safety preparation. Deterministic mathematical equations model the advection and dispersion of these plumes. However, initial plume conditions – height, profile of particle location, volcanic vent parameters – are known only approximately at best, and other features of the governing system, such as the windfield, are stochastic. These uncertainties make forecasting plume motion difficult, and as a result ash advisories based on a deterministic approach tend to be conservative and many times over- or underestimate the extent of a plume. This paper presents an end-to-end framework for a probabilistic approach to ash plume forecasting. The framework uses an ensemble of solutions guided by the Conjugate Unscented Transform (CUT) method for evaluating expectation integrals. This ensemble is used to construct a polynomial chaos expansion that can be sampled cheaply to provide a probabilistic model forecast. The CUT method is then combined with a minimum variance condition to provide a full posterior pdf of the uncertain source parameters, based on observed satellite imagery. The April 2010 eruption of the Eyjafjallajökull volcano in Iceland is employed as a test example. The puff advection/dispersion model is used to hindcast the motion of the ash plume through time, concentrating on the period 14-16 April 2010. Variability in the height and particle loading of that eruption is introduced through a volcano column model called bent. Output uncertainty due to the assumed uncertain input parameter probability distributions, and a probabilistic spatial-temporal estimate of ash presence, are computed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Da Cruz, D. F.; Rochman, D.; Koning, A. J.
2012-07-01
This paper discusses the uncertainty analysis on reactivity and inventory for a typical PWR fuel element as a result of uncertainties in 235U and 238U nuclear data. A typical Westinghouse 3-loop fuel assembly fuelled with UO2 fuel at 4.8% enrichment has been selected. The Total Monte Carlo method has been applied using the deterministic transport code DRAGON. This code allows the generation of few-group nuclear data libraries by directly using data contained in the nuclear data evaluation files. The nuclear data used in this study are from the JEFF-3.1 evaluation, and the nuclear data files for 238U and 235U (randomized for the generation of the various DRAGON libraries) are taken from the nuclear data library TENDL. The total uncertainty (obtained by randomizing all 238U and 235U nuclear data in the ENDF files) on the reactor parameters has been split into different components (different nuclear reaction channels). Results show that the TMC method in combination with a deterministic transport code constitutes a powerful tool for performing uncertainty and sensitivity analysis of reactor physics parameters. (authors)
Non-Deterministic Modelling of Food-Web Dynamics
Planque, Benjamin; Lindstrøm, Ulf; Subbey, Sam
2014-01-01
A novel approach to model food-web dynamics, based on a combination of chance (randomness) and necessity (system constraints), was presented by Mullon et al. in 2009. Based on simulations for the Benguela ecosystem, they concluded that observed patterns of ecosystem variability may simply result from basic structural constraints within which the ecosystem functions. To date, and despite the importance of these conclusions, this work has received little attention. The objective of the present paper is to replicate this original model and evaluate the conclusions that were derived from its simulations. For this purpose, we revisit the equations and input parameters that form the structure of the original model and implement a comparable simulation model. We restate the model principles and provide a detailed account of the model structure, equations, and parameters. Our model can reproduce several ecosystem dynamic patterns: pseudo-cycles, variation and volatility, diet, stock-recruitment relationships, and correlations between species biomass series. The original conclusions are supported to a large extent by the current replication of the model. Model parameterisation and computational aspects remain difficult and these need to be investigated further. Hopefully, the present contribution will make this approach available to a larger research community and will promote the use of non-deterministic-network-dynamics models as 'null models of food-webs' as originally advocated.
A physically based model of global freshwater surface temperature
NASA Astrophysics Data System (ADS)
Beek, Ludovicus P. H.; Eikelboom, Tessa; Vliet, Michelle T. H.; Bierkens, Marc F. P.
2012-09-01
Temperature determines a range of physical properties of water and exerts a strong control on surface water biogeochemistry. Thus, in freshwater ecosystems the thermal regime directly affects the geographical distribution of aquatic species through their growth and metabolism and indirectly through their tolerance to parasites and diseases. Models used to predict surface water temperature range between physically based deterministic models and statistical approaches. Here we present the initial results of a physically based deterministic model of global freshwater surface temperature. The model adds a surface water energy balance to river discharge modeled by the global hydrological model PCR-GLOBWB. In addition to advection of energy from direct precipitation, runoff, and lateral exchange along the drainage network, energy is exchanged between the water body and the atmosphere by shortwave and longwave radiation and sensible and latent heat fluxes. Also included are ice formation and its effect on heat storage and river hydraulics. We use the coupled surface water and energy balance model to simulate global freshwater surface temperature at daily time steps with a spatial resolution of 0.5° on a regular grid for the period 1976-2000. We opt to parameterize the model with globally available data and apply it without calibration in order to preserve its physical basis with the outlook of evaluating the effects of atmospheric warming on freshwater surface temperature. We validate our simulation results with daily temperature data from rivers and lakes (U.S. Geological Survey (USGS), limited to the USA) and compare mean monthly temperatures with those recorded in the Global Environment Monitoring System (GEMS) data set. Results show that the model is able to capture the mean monthly surface temperature for the majority of the GEMS stations, while the interannual variability as derived from the USGS and NOAA data was captured reasonably well. Results are poorest for the Arctic rivers because the timing of ice breakup is predicted too late in the year due to the lack of including a mechanical breakup mechanism. Moreover, surface water temperatures for tropical rivers were overestimated, most likely due to an overestimation of rainfall temperature and incoming shortwave radiation. The spatiotemporal variation of water temperature reveals large temperature differences between water and atmosphere for the higher latitudes, while considerable lateral transport of heat can be observed for rivers crossing hydroclimatic zones, such as the Nile, the Mississippi, and the large rivers flowing to the Arctic. Overall, our model results show promise for future projection of global surface freshwater temperature under global change.
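The energy balance added to the routed flow has, in generic form, the structure below; the exact terms and notation of PCR-GLOBWB may differ, so the symbols here are assumptions (water depth h, water temperature T_w, albedo alpha).

```latex
% Generic surface-water energy balance (symbols assumed): heat storage
% changes through absorbed shortwave, net longwave, sensible (H) and
% latent (LE) heat fluxes, plus advection of heat with the flow.
\rho c_p \frac{\partial (h\,T_w)}{\partial t}
  = SW_{\downarrow}\,(1-\alpha) + LW_{net} - H - LE + A_{adv}
```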
A global parallel model based design of experiments method to minimize model output uncertainty.
Bazil, Jason N; Buzzard, Gregory T; Rundell, Ann E
2012-03-01
Model-based experiment design specifies the data to be collected that will most effectively characterize the biological system under study. Existing model-based design of experiment algorithms have primarily relied on Fisher Information Matrix-based methods to choose the best experiment in a sequential manner. However, these are largely local methods that require an initial estimate of the parameter values, which are often highly uncertain, particularly when data is limited. In this paper, we provide an approach to specify an informative sequence of multiple design points (parallel design) that will constrain the dynamical uncertainty of the biological system responses to within experimentally detectable limits as specified by the estimated experimental noise. The method is based upon computationally efficient sparse grids and requires only a bounded uncertain parameter space; it does not rely upon initial parameter estimates. The design sequence emerges through the use of scenario trees with experimental design points chosen to minimize the uncertainty in the predicted dynamics of the measurable responses of the system. The algorithm was illustrated herein using a T cell activation model for three problems that ranged in dimension from 2D to 19D. The results demonstrate that it is possible to extract useful information from a mathematical model where traditional model-based design of experiments approaches most certainly fail. The experiments designed via this method fully constrain the model output dynamics to within experimentally resolvable limits. The method is effective for highly uncertain biological systems characterized by deterministic mathematical models with limited data sets. Also, it is highly modular and can be modified to include a variety of methodologies such as input design and model discrimination.
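The selection of informative design points can be sketched as follows. This toy Python version replaces sparse grids with plain Monte Carlo sampling of the bounded parameter box and uses a two-parameter exponential decay in place of the T cell activation model, so all names and values are illustrative; what carries over is the idea of scoring candidate measurements by the expected remaining spread of the predicted dynamics.

```python
import numpy as np

rng = np.random.default_rng(0)
noise_sd = 0.05

def model(a, b, t):                        # toy stand-in for the biological model
    return a * np.exp(-b * t)

# Bounded uncertain parameter box; no nominal parameter estimate is required.
a = rng.uniform(0.5, 5.0, 2000)
b = rng.uniform(0.05, 1.0, 2000)
t_grid = np.linspace(0.0, 10.0, 101)       # the dynamics we want to constrain
traj = model(a[:, None], b[:, None], t_grid[None, :])
candidates = np.linspace(0.5, 10.0, 20)    # candidate measurement times

def expected_spread(t_meas, n_scen=25):
    """Expected remaining spread of the predicted dynamics if a noisy
    measurement were taken at t_meas (each scenario = one ensemble member)."""
    y_at_t = model(a, b, t_meas)
    total = 0.0
    for i in rng.choice(a.size, n_scen, replace=False):   # scenario-tree branches
        y_obs = y_at_t[i] + rng.normal(0.0, noise_sd)
        w = np.exp(-0.5 * ((y_at_t - y_obs) / noise_sd) ** 2)
        w /= w.sum()
        mean = w @ traj
        total += np.sqrt(w @ (traj - mean) ** 2).mean()
    return total / n_scen

best = min(candidates, key=expected_spread)
print(f"most informative first measurement time: {best:.2f}")
```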
Ngonghala, Calistus N; Teboh-Ewungkem, Miranda I; Ngwa, Gideon A
2015-06-01
We derive and study a deterministic compartmental model for malaria transmission with varying human and mosquito populations. Our model considers disease-related deaths, asymptomatic immune humans who are also infectious, as well as mosquito demography, reproduction and feeding habits. Analysis of the model reveals the existence of a backward bifurcation and persistent limit cycles whose period and size are determined by two threshold parameters: the vectorial basic reproduction number Rm and the disease basic reproduction number R0, whose size can be reduced by reducing Rm. We conclude that malaria dynamics are indeed oscillatory when the mosquito's demography, feeding and reproductive patterns are explicitly incorporated into the mosquito population dynamics. A sensitivity analysis reveals important control parameters that can affect the magnitudes of Rm and R0, threshold quantities to be taken into consideration when designing control strategies. Both Rm and the intrinsic period of oscillation are shown to be highly sensitive to the mosquito's birth constant λm and the mosquito's feeding success probability pw. Control of λm can be achieved by spraying, by eliminating breeding sites or by moving them away from human habitats, while pw can be controlled via the use of mosquito repellent and insecticide-treated bed-nets. The disease threshold parameter R0 is shown to be highly sensitive to pw, and the intrinsic period of oscillation is also sensitive to the rate at which reproducing mosquitoes return to breeding sites. A global sensitivity and uncertainty analysis reveals that the ability of the mosquito to reproduce and uncertainties in the estimates of the rates at which exposed humans become infectious and infectious humans recover from malaria are critical in generating uncertainties in the disease classes.
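The local sensitivity analysis described above is easy to reproduce. The sketch below computes normalized forward sensitivity indices, (p/R0)·dR0/dp, by central differences; for concreteness it uses the classical Ross-Macdonald expression for R0 with hypothetical parameter values, not the paper's model, so the resulting indices are purely illustrative.

```python
import numpy as np

def R0(p):
    """Classical Ross-Macdonald basic reproduction number (stand-in for the paper's R0)."""
    m, a, b, c, mu, tau, r = (p[k] for k in ("m", "a", "b", "c", "mu", "tau", "r"))
    return m * a**2 * b * c * np.exp(-mu * tau) / (r * mu)

params = dict(m=10.0, a=0.3, b=0.5, c=0.5, mu=0.1, tau=10.0, r=0.05)  # hypothetical

def sensitivity_index(name, h=1e-6):
    """Normalized forward sensitivity index (p / R0) * dR0/dp via central differences."""
    up, down = dict(params), dict(params)
    up[name] = params[name] * (1 + h)
    down[name] = params[name] * (1 - h)
    dR0_dp = (R0(up) - R0(down)) / (2 * h * params[name])
    return params[name] * dR0_dp / R0(params)

for name in params:
    print(f"{name}: {sensitivity_index(name):+.3f}")
```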
Contact Versus Non-Contact Measurement of a Helicopter Main Rotor Composite Blade
NASA Astrophysics Data System (ADS)
Luczak, Marcin; Dziedziech, Kajetan; Vivolo, Marianna; Desmet, Wim; Peeters, Bart; Van der Auweraer, Herman
2010-05-01
The dynamic characterization of lightweight structures is particularly complex, as the weight of sensors and instrumentation (cables, mounting of exciters…) can distort the results. Varying mass loading or constraint effects between partial measurements may introduce errors into the final conclusions. Frequency shifts can lead to erroneous interpretations of the dynamic parameters. Typically these errors remain limited to a few percent. Inconsistent data sets, however, can result in major processing errors, with all related consequences for applications based on the consistency assumption, such as global modal parameter identification, model-based damage detection and FRF-based matrix inversion in substructuring, load identification and transfer path analysis [1]. This paper addresses the subject of accuracy in the context of the measurement of the dynamic properties of a particular lightweight structure. It presents a comprehensive comparative study of accelerometer, laser vibrometer (scanning LDV) and PU-probe (acoustic particle velocity and pressure) measurements of the structural responses, with the final aim of comparing the quality of the resulting modal models. The object of the investigation is a composite material blade from the main rotor of a helicopter. The presented results are part of an extensive test campaign performed with SIMO and MIMO configurations, random and harmonic excitation, and the contact and non-contact measurement techniques mentioned above. The advantages and disadvantages of the applied instrumentation are discussed, together with real-life measurement problems related to the different set-up conditions. Finally, an analysis of the estimated models is made with a view to assessing the applicability of the various measurement approaches for successful fault detection based on the observation of modal parameters, as well as in uncertain, non-deterministic numerical model updating.
Optimisation of lateral car dynamics taking into account parameter uncertainties
NASA Astrophysics Data System (ADS)
Busch, Jochen; Bestle, Dieter
2014-02-01
Simulation studies on an active all-wheel-steering car show that disturbances of vehicle parameters have a strong influence on lateral car dynamics. This motivates the need for a design that is robust against such parameter uncertainties. A specific parametrisation is established combining deterministic, velocity-dependent steering control parameters with partly uncertain, velocity-independent vehicle parameters for simultaneous use in a numerical optimisation process. Model-based objectives are formulated and summarised in a multi-objective optimisation problem, where in particular the lateral steady-state behaviour is improved by an adaptation strategy based on measurable uncertainties. The normally distributed uncertainties are generated by optimal Latin hypercube sampling, and a response-surface-based strategy cuts down the number of time-consuming model evaluations, which makes it possible to use a genetic optimisation algorithm. Optimisation results are discussed in different criterion spaces and the achieved improvements confirm the validity of the proposed procedure.
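A minimal sketch of the sampling-plus-surrogate step is given below, assuming SciPy's qmc module (the "random-cd" design optimization requires SciPy >= 1.8) as a stand-in for the optimal Latin hypercube generator, and a toy quadratic objective in place of the vehicle simulation; parameter names and distributions are hypothetical.

```python
import numpy as np
from scipy.stats import norm, qmc

rng = np.random.default_rng(1)

# Latin hypercube sample of the uncertain space; "random-cd" asks SciPy to
# optimise the design (a stand-in for optimal LHS).
sampler = qmc.LatinHypercube(d=2, optimization="random-cd", seed=1)
u = sampler.random(n=50)                        # stratified uniforms in [0,1)^2
# Map to normally distributed vehicle-parameter uncertainties
# (hypothetical mass [kg] and cornering stiffness [N/rad]).
theta = norm.ppf(u) * [40.0, 2000.0] + [1500.0, 60000.0]

def lateral_dynamics_objective(m, c):
    """Placeholder for one expensive lateral-dynamics simulation."""
    return (m / 1500.0 - 1.0) ** 2 + 0.5 * (c / 60000.0 - 1.0) ** 2 + 0.02 * rng.normal()

y = np.array([lateral_dynamics_objective(m, c) for m, c in theta])

# Quadratic response surface fitted by least squares; the genetic optimiser
# then queries this cheap surrogate instead of the simulation itself.
X = np.column_stack([np.ones(len(theta)), theta, theta**2, theta[:, 0] * theta[:, 1]])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
surrogate = lambda m, c: np.array([1, m, c, m**2, c**2, m * c]) @ coef
```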
Dynamics and profiles of a diffusive host-pathogen system with distinct dispersal rates
NASA Astrophysics Data System (ADS)
Wu, Yixiang; Zou, Xingfu
2018-04-01
In this paper, we investigate a diffusive host-pathogen model with heterogeneous parameters and distinct dispersal rates for the susceptible and infected hosts. We first prove that the solution of the model exists globally and the model system possesses a global attractor. We then identify the basic reproduction number R0 for the model and prove its threshold role: if R0 ≤ 1, the disease free equilibrium is globally asymptotically stable; if R0 > 1, the solution of the model is uniformly persistent and there exists a positive (pathogen persistent) steady state. Finally, we study the asymptotic profiles of the positive steady state as the dispersal rate of the susceptible or infected hosts approaches zero. Our result suggests that the infected hosts concentrate at certain points which can be characterized as the pathogen's most favoured sites when the mobility of the infected host is limited.
Global stability of an age-structure epidemic model with imperfect vaccination and relapse
NASA Astrophysics Data System (ADS)
Cao, Bin; Huo, Hai-Feng; Xiang, Hong
2017-11-01
A new age-structured epidemic model with imperfect vaccination and relapse is proposed. The total population of our model is partitioned into five subclasses: susceptible class S, vaccinated class V, exposed class E, infectious class I and removed class R. Age structure is introduced in the exposed and removed classes. Furthermore, imperfect vaccination is also included in the model. The basic reproduction number R0 is defined and proved to be a threshold parameter of the model. Asymptotic smoothness of solutions and uniform persistence of the system are shown by reformulating the system as a system of Volterra integral equations. Furthermore, by constructing suitable Volterra-type Lyapunov functionals, we show that when R0 < 1 the disease-free equilibrium is globally asymptotically stable, and that when R0 > 1 the endemic equilibrium is globally stable. Our results show that increasing the efficacy of vaccination and reducing the influence of relapse are essential for controlling the epidemic.
Group theoretical formulation of free fall and projectile motion
NASA Astrophysics Data System (ADS)
Düztaş, Koray
2018-07-01
In this work we formulate the group theoretical description of free fall and projectile motion. We show that the kinematic equations for constant acceleration form a one-parameter group acting on a phase space. We define the group elements ϕ_t by their action on the points in the phase space. We also generalize this approach to projectile motion. We evaluate the group orbits regarding their relations to the physical orbits of particles and unphysical solutions. We note that the group theoretical formulation does not apply to more general cases involving a time-dependent acceleration. This method improves our understanding of the constant acceleration problem with its global approach. It is especially beneficial for students who want to pursue a career in theoretical physics.
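For constant acceleration a, the flow maps described above take the explicit form below (notation mine, not necessarily the paper's); the composition rule is a one-line check and is exactly the one-parameter group property, and the same computation fails when a depends on time, consistent with the abstract's caveat.

```latex
\phi_t(x_0, v_0) = \left( x_0 + v_0 t + \tfrac{1}{2} a t^2 ,\; v_0 + a t \right),
\qquad
\phi_s \circ \phi_t = \phi_{s+t}, \quad \phi_0 = \mathrm{id}, \quad \phi_t^{-1} = \phi_{-t}
```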
NASA Technical Reports Server (NTRS)
Cummins, Phil R.; Wahr, John M.
1993-01-01
In this study we consider the influence of the Earth's free core nutation (FCN) on diurnal tidal admittance estimates for 11 stations of the globally distributed International Deployment of Accelerometers network. The FCN causes a resonant enhancement of the diurnal admittances which can be used to estimate some properties of the FCN. Estimates of the parameters describing the FCN (period, Q, and resonance strength) are made using data from individual stations and from many stations simultaneously. These yield a result for the period of 423-452 sidereal days, which is shorter than theory predicts but is in agreement with many previous studies and suggests that the dynamical ellipticity of the core may be greater than its hydrostatic value.
Nonlinear Rayleigh wave inversion based on the shuffled frog-leaping algorithm
NASA Astrophysics Data System (ADS)
Sun, Cheng-Yu; Wang, Yan-Yan; Wu, Dun-Shi; Qin, Xiao-Jun
2017-12-01
At present, near-surface shear wave velocities are mainly calculated through Rayleigh wave dispersion-curve inversions in engineering surface investigations, but the required calculations pose a highly nonlinear global optimization problem. In order to alleviate the risk of falling into a local optimum, this paper introduces a new global optimization method, the shuffled frog-leaping algorithm (SFLA), into the Rayleigh wave dispersion-curve inversion process. SFLA is a swarm-intelligence-based algorithm that simulates a group of frogs searching for food. It uses few parameters, achieves rapid convergence, and is capable of effective global searching. In order to test the reliability and calculation performance of SFLA, noise-free and noisy synthetic datasets were inverted. We conducted a comparative analysis with other established algorithms using the noise-free dataset, and then tested the ability of SFLA to cope with data noise. Finally, we inverted a real-world example to examine the applicability of SFLA. Results from both synthetic and field data demonstrated the effectiveness of SFLA in the interpretation of Rayleigh wave dispersion curves. We found that SFLA is superior to the established methods in terms of both reliability and computational efficiency, so it offers great potential to improve our ability to solve geophysical inversion problems.
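A hedged sketch of the canonical SFLA loop is given below; it is not the authors' exact implementation or parameter choice, and a dispersion-curve misfit function would replace the toy objective in practice.

```python
import numpy as np

rng = np.random.default_rng(2)

def sfla(f, lo, hi, n_frogs=30, n_memeplexes=5, n_gen=100, n_local=10):
    """Minimal shuffled frog-leaping sketch for minimising f over the box [lo, hi]."""
    d = len(lo)
    frogs = rng.uniform(lo, hi, (n_frogs, d))
    for _ in range(n_gen):
        frogs = frogs[np.argsort([f(x) for x in frogs])]   # shuffle & rank by fitness
        best_global = frogs[0].copy()
        for m in range(n_memeplexes):                      # partition into memeplexes
            idx = np.arange(m, n_frogs, n_memeplexes)
            for _ in range(n_local):
                mem = idx[np.argsort([f(frogs[i]) for i in idx])]
                best_local, worst = frogs[mem[0]], frogs[mem[-1]]
                for leader in (best_local, best_global):   # leap toward a better frog
                    trial = np.clip(worst + rng.random(d) * (leader - worst), lo, hi)
                    if f(trial) < f(worst):
                        frogs[mem[-1]] = trial
                        break
                else:                                      # no improvement: new random frog
                    frogs[mem[-1]] = rng.uniform(lo, hi, d)
    return min(frogs, key=f)

# Example: a 2-D sphere function; a dispersion-curve misfit would go here.
x_best = sfla(lambda x: float(np.sum(x ** 2)),
              lo=np.array([-5.0, -5.0]), hi=np.array([5.0, 5.0]))
```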
Alikhani, Jamal; Takacs, Imre; Al-Omari, Ahmed; Murthy, Sudhir; Massoudieh, Arash
2017-03-01
A parameter estimation framework was used to evaluate the ability of observed data from a full-scale nitrification-denitrification bioreactor to reduce the uncertainty associated with the bio-kinetic and stoichiometric parameters of an activated sludge model (ASM). Samples collected over a period of 150 days from the effluent as well as from the reactor tanks were used. A hybrid genetic algorithm and Bayesian inference were used to perform deterministic and probabilistic parameter estimation, respectively. The main goal was to assess the ability of the data to provide reliable parameter estimates for a modified version of the ASM. The modified ASM model includes methylotrophic processes, which play the main role in methanol-fed denitrification. Sensitivity analysis was also used to explain the ability of the data to provide information about each of the parameters. The results showed that the uncertainty in the estimates of the most sensitive parameters (including growth rate, decay rate, and yield coefficients) decreased with respect to the prior information.
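The Bayesian half of such a framework can be illustrated with a random-walk Metropolis sampler on a toy two-parameter effluent model; the model, priors and noise level below are hypothetical placeholders for the modified ASM. The point is that the posterior standard deviations can be compared with the prior ones to quantify how much the data constrain each parameter.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate(theta, t):
    """Toy effluent response standing in for the modified ASM (mu = rate, Y = yield)."""
    mu, Y = theta
    return Y * (1 - np.exp(-mu * t))

t_obs = np.linspace(1, 150, 40)
y_obs = simulate((0.3, 2.0), t_obs) + rng.normal(0, 0.05, t_obs.size)

def log_post(theta):
    if np.any(np.asarray(theta) <= 0):
        return -np.inf
    resid = y_obs - simulate(theta, t_obs)
    log_lik = -0.5 * np.sum((resid / 0.05) ** 2)
    # Gaussian priors with hypothetical means and standard deviations.
    log_prior = -0.5 * (((theta[0] - 0.4) / 0.2) ** 2 + ((theta[1] - 1.8) / 0.5) ** 2)
    return log_lik + log_prior

theta, lp, samples = np.array([0.4, 1.8]), None, []
lp = log_post(theta)
for _ in range(20000):                       # random-walk Metropolis
    prop = theta + rng.normal(0, [0.02, 0.05])
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta)
post = np.array(samples[5000:])
print("posterior sd:", post.std(0), "vs prior sd:", [0.2, 0.5])
```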
NASA Astrophysics Data System (ADS)
Chapman, Sandra; Stainforth, David; Watkins, Nicholas
2017-04-01
Global mean temperature (GMT) provides a simple means of benchmarking a broad ensemble of global climate models (GCMs) against past observed GMT, which in turn provides headline assessments of the consequences of possible future forcing scenarios. The slow variations of past changes in GMT seen in different GCMs track each other [1] and the observed GMT reasonably closely. However, the different GCMs tend to generate GMT time-series which have absolute values that are offset with respect to each other [2]. Subtracting these offsets is an integral part of comparisons between ensembles of GCMs and observed past GMT. We will discuss how this constrains the way the GCMs are related to each other. The GMT of a given GCM is a macroscopic reduced variable that tracks a subset of the full information contained in the time-evolving solution of that GCM. If the GMT slow timescale dynamics of different GCMs is to a good approximation the same, subject to a linear translation, then the phenomenology captured by this dynamics is essentially linear; any feedback is to leading order linear in GMT. It then follows that a linear energy balance evolution equation for GMT is sufficient to reproduce the slow timescale GMT dynamics, provided that the appropriate effective heat capacity and feedback parameters are known. As a consequence, the GCMs' GMT time-series may underestimate the impact of, and uncertainty in, the outcomes of future forcing scenarios. The offset subtraction procedure identifies a slow timescale dynamics in model-generated GMT. Fluctuations on much faster timescales do not typically track each other from one GCM to another, with the exception of major forcing events such as volcanic eruptions. This suggests that the GMT time-series can be decomposed into slow and fast timescales, which naturally leads to stochastic reduced energy balance models for GMT. [1] IPCC, Chapter 9, p. 743 and Fig. 9.8; IPCC TS.1. [2] See, e.g., Mauritsen et al., Tuning the climate of a global model, Journal of Advances in Modeling Earth Systems, 4, 2012; IPCC SPM.6.
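For reference, the linear reduced model implied by this argument is the standard one-box energy balance (notation mine): the GMT anomaly T' about a model's offset obeys the deterministic equation on the left, and adding a noise term for the fast fluctuations gives the stochastic reduced model on the right, with C an effective heat capacity, λ the net linear feedback parameter and F(t) the radiative forcing.

```latex
C \,\frac{dT'}{dt} = F(t) - \lambda T'
\qquad\longrightarrow\qquad
C \, dT' = \left[ F(t) - \lambda T' \right] dt + \sigma \, dW_t
```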
Wave ensemble forecast system for tropical cyclones in the Australian region
NASA Astrophysics Data System (ADS)
Zieger, Stefan; Greenslade, Diana; Kepert, Jeffrey D.
2018-05-01
Forecasting of waves under extreme conditions such as tropical cyclones is vitally important for many offshore industries, but there remain many challenges. For Northwest Western Australia (NW WA), wave forecasts issued by the Australian Bureau of Meteorology have previously been limited to products from deterministic operational wave models forced by deterministic atmospheric models. The wave models are run over global (resolution 1/4°) and regional (resolution 1/10°) domains with forecast ranges of +7 and +3 days respectively. Because of this relatively coarse resolution (both in the wave models and in the forcing fields), the accuracy of these products is limited under tropical cyclone conditions. Given this limited accuracy, a new ensemble-based wave forecasting system for the NW WA region has been developed. To achieve this, a new dedicated 8-km resolution grid was nested in the global wave model. Over this grid, the wave model is forced with winds from a bias-corrected European Centre for Medium Range Weather Forecast atmospheric ensemble that comprises 51 ensemble members to take into account the uncertainties in location, intensity and structure of a tropical cyclone system. A unique technique is used to select restart files for each wave ensemble member. The system is designed to operate in real time during the cyclone season providing +10-day forecasts. This paper will describe the wave forecast components of this system and present the verification metrics and skill for specific events.
75 FR 70714 - Global Free Flow of Information on the Internet
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-18
... 100921457-0561-02] RIN 0660-XA20, Global Free Flow of Information on the Internet. AGENCY: National ... of comment period. SUMMARY: The Department of Commerce's Internet Policy Task Force announces that ... on the global free flow of information on the Internet has been reopened and will extend until 5 p.m. ...
Application of Artificial Intelligence for Bridge Deterioration Model.
Chen, Zhang; Wu, Yangyang; Li, Li; Sun, Lijun
2015-01-01
The deterministic bridge deterioration model updating problem is well established in bridge management, while the traditional methods and approaches for this problem require manual intervention. An artificial-intelligence-based approach is presented in this paper to self-update the parameters of the bridge deterioration model. When new information and data are collected, a posterior distribution is constructed to describe the integrated result of the historical information and the newly gained information according to Bayes' theorem, and this distribution is used to update the model parameters. This AI-based approach is applied to the case of updating the parameters of a bridge deterioration model using data collected from bridges in 12 districts of Shanghai from 2004 to 2013, and the results show that it is an accurate, effective, and satisfactory approach to the problem of parameter updating without manual intervention. PMID:26601121
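The Bayesian updating step can be illustrated with a conjugate normal-normal update; the deterioration model, likelihood and all numbers below are hypothetical stand-ins, since the paper does not specify them here, and the point is only that the posterior tightens as inspection batches accumulate.

```python
import numpy as np

# Prior on the annual deterioration rate of a bridge condition index
# (hypothetical values), e.g. from historical inspections.
mu0, sd0 = 0.12, 0.04          # prior mean and sd of the rate [index units/year]
obs_sd = 0.05                  # assumed inspection noise

def bayes_update(mu, sd, new_rates):
    """Conjugate normal-normal update of the deterioration-rate estimate
    each time a new batch of inspection-derived rates arrives."""
    for r in np.atleast_1d(new_rates):
        prec, obs_prec = 1 / sd**2, 1 / obs_sd**2
        mu = (prec * mu + obs_prec * r) / (prec + obs_prec)
        sd = (prec + obs_prec) ** -0.5
    return mu, sd

# Yearly batches in the style of a 2004-2013 survey: posterior tightens over time.
mu, sd = mu0, sd0
for year_batch in ([0.10, 0.14], [0.16], [0.13, 0.11, 0.15]):
    mu, sd = bayes_update(mu, sd, year_batch)
    print(f"posterior: mean={mu:.3f}, sd={sd:.3f}")
```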
Stochastic dynamics and logistic population growth
NASA Astrophysics Data System (ADS)
Méndez, Vicenç; Assaf, Michael; Campos, Daniel; Horsthemke, Werner
2015-06-01
The Verhulst model is probably the best known macroscopic rate equation in population ecology. It depends on two parameters, the intrinsic growth rate and the carrying capacity. These parameters can be estimated for different populations and are related to the reproductive fitness and the competition for limited resources, respectively. We investigate analytically and numerically the simplest possible microscopic scenarios that give rise to the logistic equation in the deterministic mean-field limit. We provide a definition of the two parameters of the Verhulst equation in terms of microscopic parameters. In addition, we derive the conditions for extinction or persistence of the population by employing either the momentum-space spectral theory or the real-space Wentzel-Kramers-Brillouin approximation to determine the probability distribution function and the mean time to extinction of the population. Our analytical results agree well with numerical simulations.
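For reference, the Verhulst (logistic) equation and one minimal microscopic scheme that produces it in the mean-field limit are shown below; this is one standard choice (combinatorial factors in the pair-reaction rate are absorbed into the constant), not necessarily the scheme singled out by the paper.

```latex
\frac{dN}{dt} = r N \left( 1 - \frac{N}{K} \right),
\qquad
A \xrightarrow{\;B\;} 2A,
\quad
A + A \xrightarrow{\;B/K\;} A
\;\;\Longrightarrow\;\;
\frac{d\langle n\rangle}{dt} \approx B \langle n\rangle \left( 1 - \frac{\langle n\rangle}{K} \right)
```

so that r = B and the carrying capacity is K, while demographic noise about this mean-field limit drives the extinction phenomena analysed in the abstract.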
Impact of Periodic Unsteadiness on Performance and Heat Load in Axial Flow Turbomachines
NASA Technical Reports Server (NTRS)
Sharma, Om P.; Stetson, Gary M.; Daniels, William A.; Greitzer, Edward M.; Blair, Michael F.; Dring, Robert P.
1997-01-01
Results of an analytical and experimental investigation, directed at understanding the impact of periodic unsteadiness on the time-averaged flows in axial flow turbomachines, are presented. Analysis of available experimental data from a large-scale rotating rig (LSRR), a low-speed rig, shows that in the time-averaged axisymmetric equations the magnitudes of the terms representing the effect of periodic unsteadiness (deterministic stresses) are as large as or larger than those due to random unsteadiness (turbulence). Numerical experiments, conducted to highlight physical mechanisms associated with the migration of combustor-generated hot streaks in turbine rotors, indicated that the effect can be simulated by accounting for deterministic-stress-like terms in the time-averaged mass and energy conservation equations. The experimental portion of this program shows that the aerodynamic loss for the second stator in a 1-1/2 stage turbine is influenced by the axial spacing between the second stator leading edge and the rotor trailing edge. However, the axial spacing has little impact on the heat transfer coefficient. These performance changes are believed to be associated with the change in deterministic stress at the inlet to the second stator. Data were also acquired to quantify the impact of indexing the first stator relative to the second stator. For the range of parameters examined, this effect was found to be of the same order as the effect of axial spacing.
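Schematically, the triple decomposition behind the deterministic-stress terms splits the velocity into a time-mean, a purely periodic (blade-passing) part and a random part; averaging the momentum flux then produces a deterministic stress formally analogous to the Reynolds stress (constant density is assumed here for clarity, and cross terms vanish when the periodic and random parts are uncorrelated).

```latex
u_i = \bar{u}_i + \tilde{u}_i + u_i'' ,
\qquad
\overline{u_i u_j} \;=\; \bar{u}_i \bar{u}_j
\;+\; \underbrace{\overline{\tilde{u}_i \tilde{u}_j}}_{\text{deterministic stress}}
\;+\; \underbrace{\overline{u_i'' u_j''}}_{\text{Reynolds stress}}
```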
NASA Astrophysics Data System (ADS)
Wong, B.; Kilthau, W.; Knopf, D. A.
2017-12-01
Immersion freezing is recognized as the most important ice crystal formation process in mixed-phase cloud environments. It is well established that mineral dust species can act as efficient ice nucleating particles. Previous research has focused on determination of the ice nucleation propensity of individual mineral dust species. In this study, the focus is placed on how different mineral dust species, such as illite, kaolinite and feldspar, initiate freezing of water droplets when present in internal and external mixtures. The frozen fraction data for single and multicomponent mineral dust droplet mixtures are recorded under identical cooling rates. Additionally, the time dependence of freezing is explored. Externally and internally mixed mineral dust droplet samples are exposed to constant temperatures (isothermal freezing experiments) and frozen fraction data are recorded at fixed time intervals. Analyses of single and multicomponent mineral dust droplet samples include different stochastic and deterministic descriptions: the heterogeneous ice nucleation rate coefficient (Jhet), the single-contact-angle (α) description, the α-PDF model, the active-sites representation, and the deterministic (singular) model. Parameter sets derived from freezing data of single-component mineral dust samples are evaluated for the prediction of cooling-rate-dependent and isothermal freezing of multicomponent externally or internally mixed mineral dust samples. The atmospheric implications of our findings are discussed.
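The two limiting descriptions reduce to compact formulas for a monodisperse droplet population, sketched below; the Jhet value and the n_s(T) parameterization are hypothetical placeholders, since the paper derives these quantities from the measured frozen fractions.

```python
import numpy as np

def frozen_fraction_isothermal(t, J_het, area):
    """Stochastic (rate-based) view: at constant temperature the unfrozen
    population decays exponentially, f(t) = 1 - exp(-Jhet * A * t)."""
    return 1.0 - np.exp(-J_het * area * t)

def frozen_fraction_deterministic(T, n_s, area):
    """Deterministic (singular) view: f(T) = 1 - exp(-n_s(T) * A), with n_s the
    ice-active surface site density; time- and cooling-rate-independent."""
    return 1.0 - np.exp(-n_s(T) * area)

# Hypothetical parameterizations, for illustration only:
J_het = 1.0e-2                                   # [cm-2 s-1] at the hold temperature
n_s = lambda T: 1.0e-5 * np.exp(-(T - 248.0))    # toy exponential in T [K]
t = np.linspace(0.0, 3600.0, 5)                  # 1 h isothermal hold
print(frozen_fraction_isothermal(t, J_het, area=1e-5))       # droplet area 1e-5 cm2
print(frozen_fraction_deterministic(245.0, n_s, area=1e-5))
```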
NASA Astrophysics Data System (ADS)
Ghodsi, Seyed Hamed; Kerachian, Reza; Estalaki, Siamak Malakpour; Nikoo, Mohammad Reza; Zahmatkesh, Zahra
2016-02-01
In this paper, two multilateral, multi-issue, non-cooperative bargaining methodologies, one deterministic and one stochastic, are proposed for urban runoff quality management. In the proposed methodologies, a calibrated Storm Water Management Model (SWMM) is used to simulate stormwater runoff quantity and quality for different urban stormwater runoff management scenarios, which have been defined considering several Low Impact Development (LID) techniques. In the deterministic methodology, the best management scenario, representing the location and area of LID controls, is identified using the bargaining model. In the stochastic methodology, uncertainties in some key parameters of SWMM are analyzed using info-gap theory. For each water quality management scenario, robustness and opportuneness criteria are determined based on the utility functions of the different stakeholders. Then, to find the best solution, the bargaining model is run considering a combination of the robustness and opportuneness criteria for each scenario, based on the utility function of each stakeholder. The results of applying the proposed methodology in the Velenjak urban watershed, located in the northeastern part of Tehran, the capital city of Iran, illustrate its practical utility for conflict resolution in urban water quantity and quality management. It is shown that the solution obtained using the deterministic model cannot outperform the result of the stochastic model when the robustness and opportuneness criteria are considered. Therefore, it can be concluded that the stochastic model, which incorporates the main uncertainties, could provide more reliable results.
Singularity-free dynamic equations of spacecraft-manipulator systems
NASA Astrophysics Data System (ADS)
From, Pål J.; Ytterstad Pettersen, Kristin; Gravdahl, Jan T.
2011-12-01
In this paper we derive the singularity-free dynamic equations of spacecraft-manipulator systems using a minimal representation. Spacecraft are normally modeled using Euler angles, which leads to singularities, or Euler parameters, which is not a minimal representation and thus not suited for Lagrange's equations. We circumvent these issues by introducing quasi-coordinates which allows us to derive the dynamics using minimal and globally valid non-Euclidean configuration coordinates. This is a great advantage as the configuration space of a spacecraft is non-Euclidean. We thus obtain a computationally efficient and singularity-free formulation of the dynamic equations with the same complexity as the conventional Lagrangian approach. The closed form formulation makes the proposed approach well suited for system analysis and model-based control. This paper focuses on the dynamic properties of free-floating and free-flying spacecraft-manipulator systems and we show how to calculate the inertia and Coriolis matrices in such a way that this can be implemented for simulation and control purposes without extensive knowledge of the mathematical background. This paper represents the first detailed study of modeling of spacecraft-manipulator systems with a focus on a singularity free formulation using the proposed framework.
Srinivasan, Mahesh; Dunham, Yarrow; Hicks, Catherine M; Barner, David
2016-01-01
Intuitive theories about the malleability of intellectual ability affect our motivation and achievement in life. But how are such theories shaped by the culture in which an individual is raised? We addressed this question by exploring how Indian children's and adults' attitudes toward the Hindu caste system--and its deterministic worldview--are related to differences in their intuitive theories. Strikingly, we found that, beginning at least in middle school and continuing into adulthood, individuals who placed more importance on caste were more likely to adopt deterministic intuitive theories. We also found a developmental change in the scope of this relationship, such that in children, caste attitudes were linked only to abstract beliefs about personal freedom, but that by adulthood, caste attitudes were also linked to beliefs about the potential achievement of members of different castes, personal intellectual ability, and personality attributes. These results are the first to directly relate the societal structure in which a person is raised to the specific intuitive theories they adopt. © 2015 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Böhi, P.; Prevedel, R.; Jennewein, T.; Stefanov, A.; Tiefenbacher, F.; Zeilinger, A.
2007-12-01
In general, quantum computer architectures which are based on the dynamical evolution of quantum states also require the processing of classical information, obtained by measurements of the actual qubits that make up the computer. This classical processing involves fast, active adaptation of subsequent measurements and real-time error correction (feed-forward), so that quantum gates and algorithms can be executed in a deterministic and hence error-free fashion. This is also true in the linear optical regime, where the quantum information is stored in the polarization state of photons. The adaptation of the photon's polarization can be achieved in a very fast manner by employing electro-optical modulators, which change the polarization of a passing photon upon application of a high voltage. In this paper we discuss techniques for implementing fast, active feed-forward at the single-photon level and we present their application in the context of photonic quantum computing. This includes the working principles and the characterization of the EOMs as well as a description of the switching logic, both of which allow quantum computation at an unprecedented speed.
A deterministic Lagrangian particle separation-based method for advective-diffusion problems
NASA Astrophysics Data System (ADS)
Wong, Ken T. M.; Lee, Joseph H. W.; Choi, K. W.
2008-12-01
A simple and robust Lagrangian particle scheme is proposed to solve the advective-diffusion transport problem. The scheme is based on relative diffusion concepts and simulates diffusion by regulating particle separation. This new approach generates a deterministic result and requires far fewer particles than the random walk method. For the advection process, particles are simply moved according to their velocity. The general scheme is mass conservative and is free from numerical diffusion. It can be applied to a wide variety of advective-diffusion problems, but is particularly suited to ecological and water quality modelling, where the definition of particle attributes (e.g., cell status for modelling algal blooms or red tides) is a necessity. The basic derivation, numerical stability and practical implementation of the NEighborhood Separation Technique (NEST) are presented. The accuracy of the method is demonstrated through a series of test cases which embrace realistic features of coastal environmental transport problems. Two field application examples on the tidal flushing of a fish farm and the dynamics of vertically migrating marine algae are also presented.
On the notion of free will in the Free Will Theorem
NASA Astrophysics Data System (ADS)
Landsman, Klaas
2017-02-01
The (Strong) Free Will Theorem (FWT) of Conway and Kochen (2009) on the one hand follows from uncontroversial parts of modern physics and elementary mathematical and logical reasoning, but on the other hand seems predicated on an undefined notion of free will (allowing physicists to "freely choose" the settings of their experiments). This makes the theorem philosophically vulnerable, especially if it is construed as a proof of indeterminism or even of libertarian free will (as Conway & Kochen suggest). However, Cator and Landsman (Foundations of Physics 44, 781-791, 2014) previously gave a reformulation of the FWT that does not presuppose indeterminism, but rather assumes a mathematically specific form of such "free choices" even in a deterministic world (based on a non-probabilistic independence assumption). In the present paper, which is a philosophical sequel to the one just mentioned, I argue that the concept of free will used in the latter version of the FWT is essentially the one proposed by Lewis (1981), also known as 'local miracle compatibilism' (of which I give a mathematical interpretation that might be of some independent interest also beyond its application to the FWT). As such, the (reformulated) FWT in my view challenges compatibilist free will à la Lewis (albeit in a contrived way via bipartite EPR-type experiments), falling short of supporting libertarian free will.
A family of small-world network models built by complete graph and iteration-function
NASA Astrophysics Data System (ADS)
Ma, Fei; Yao, Bing
2018-02-01
Small-world networks are common in real-life complex systems. In the past few decades, researchers have presented a large number of small-world models, some stochastic and the rest deterministic. In comparison with random models, it is both convenient and interesting to study the topological properties of deterministic models in fields such as graph theory and theoretical computer science. Community structure (modular topology), another focus of current research, is a useful statistical parameter for uncovering the functional organization of a network. Building and studying models that combine community structure with the small-world character is therefore a worthwhile task. Hence, in this article, we build a family of sparse networks spanning a network space N(t) that differs from previous deterministic models, even though our models are established in the same way, by iterative generation. Because connections are made randomly at each time step, the resulting members of N(t) lack the strictly self-similar feature widely shared by a large number of previous models. This shifts our attention from discussing one particular model to investigating a group of related ones spanning a network space. Somewhat surprisingly, our results prove that all members of N(t) possess some similar characteristics: (a) sparsity, (b) an exponential-scale degree distribution P(k) ∼ α^(-k), and (c) the small-world property. We must also stress a striking and intriguing phenomenon: the difference in average path length (APL) between any two members of N(t) is quite small, which indicates that the random connection procedure has little effect on the APL. At the end of this article, the number of spanning trees on a representative member N_B(t) of N(t), a topological parameter related to the reliability, synchronization capability and diffusion properties of networks, is studied in detail, and an exact analytical solution for its spanning-tree entropy is also obtained.
End-of-life flows of multiple cycle consumer products.
Tsiliyannis, C A
2011-11-01
Explicit expressions for the end-of-life (EOL) flows of single and multiple cycle products (MCPs) are presented, including deterministic and stochastic EOL exit. The expressions are given in terms of the physical parameters (maximum lifetime, T, annual cycling frequency, f, number of cycles, N, and early discard or usage loss). EOL flows are also obtained for hi-tech products, which are rapidly renewed and thus may not attain steady state (e.g., electronic products, passenger cars). A ten-step recursive procedure for obtaining the dynamic EOL flow evolution is proposed. Applications of the EOL expressions and the ten-step procedure are given for electric household appliances, industrial machinery, tyres, vehicles and buildings, both for deterministic and stochastic EOL exit (normal, Weibull and uniform exit distributions). The effect of the physical parameters and the stochastic characteristics on the EOL flow is investigated in the examples: it is shown that the EOL flow profile is determined primarily by the early discard dynamics; it also depends strongly on longevity and cycling frequency: a higher lifetime or early discard/loss implies lower dynamic and steady-state EOL flows. The stochastic exit shapes the overall EOL dynamic profile: under a symmetric EOL exit distribution, as the variance of the distribution decreases (uniform to normal to deterministic), the initial EOL flow rise becomes steeper but the steady-state or maximum EOL flow level is lower. The steepest EOL flow profile, featuring the highest steady-state or maximum level as well, corresponds to a skew, earlier-shifted EOL exit (e.g., Weibull). Since the EOL flow of returned products constitutes the sink of the reuse/remanufacturing cycle (sink to recycle), the results may be used in closed-loop product lifecycle management operations for scheduling and sizing reverse manufacturing and for planning recycling logistics. Decoupling and quantification of both the full-age EOL and the early discard flows are useful, the latter being the target of enacted legislation aiming at increasing reuse. Copyright © 2011 Elsevier Ltd. All rights reserved.
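The dynamic EOL flow can be reproduced with a discrete convolution of the inflow with a lifetime distribution, as in the sketch below; a Weibull exit is used (a pmf concentrated in a single year would recover the deterministic case), the sales figures and Weibull parameters are invented, and cycling (N, f) and early discard are omitted for brevity.

```python
import numpy as np
from scipy.stats import weibull_min

# Yearly product inflow entering use (hypothetical units/year).
sales = np.array([100, 120, 140, 150, 150, 160, 170, 180, 190, 200], float)

# Stochastic EOL exit: Weibull lifetime truncated at a maximum lifetime T = 15 y.
years = np.arange(1, 16)
life_pmf = np.diff(weibull_min.cdf(np.r_[0, years], c=2.2, scale=8.0))
life_pmf /= life_pmf.sum()                  # discrete lifetime distribution

def eol_flow(inflow, life_pmf):
    """EOL outflow by year: convolution of the inflow with the lifetime pmf."""
    return np.convolve(inflow, life_pmf)

print(np.round(eol_flow(sales, life_pmf), 1))
```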
Fractal Parameter Space of Lorenz-like Attractors: A Hierarchical Approach
NASA Astrophysics Data System (ADS)
Xing, Tingli; Wojcik, Jeremy; Zaks, Michael A.; Shilnikov, Andrey
2014-12-01
Using bi-parametric sweeping based on symbolic representation, we reveal self-similar fractal structures induced by hetero- and homoclinic bifurcations of saddle singularities in the parameter space of two systems with deterministic chaos. We start with a system displaying a few homoclinic bifurcations of higher codimension: resonant saddle, orbit-flip and inclination switch, all of which can give rise to the onset of the Lorenz-type attractor. This discloses a universal unfolding pattern in systems of autonomous ordinary differential equations possessing two-fold symmetry, or "Z2-systems", with the characteristic separatrix butterfly. The second system is the classic Lorenz model of 1963, which originated in fluid mechanics.
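For reference, the classic Lorenz model of 1963 referred to above is

```latex
\dot{x} = \sigma (y - x), \qquad
\dot{y} = x (\rho - z) - y, \qquad
\dot{z} = x y - \beta z
```

with the chaotic attractor at the classical parameters σ = 10, β = 8/3, ρ = 28; its invariance under (x, y, z) → (−x, −y, z) is exactly the two-fold Z2 symmetry that produces the separatrix butterfly.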
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Peng; Barajas-Solano, David A.; Constantinescu, Emil
Wind and solar power generators are commonly described by a system of stochastic ordinary differential equations (SODEs) in which random input parameters represent uncertainty in wind and solar energy. The existing methods for SODEs are mostly limited to delta-correlated random parameters (white noise). Here we use the Probability Density Function (PDF) method to derive a closed-form deterministic partial differential equation (PDE) for the joint probability density function of the SODEs describing a power generator with time-correlated power input. The resulting PDE is solved numerically. Good agreement with Monte Carlo simulations demonstrates the accuracy of the PDF method.
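As a concrete, generic instance of this construction (not necessarily the authors' exact equations): for a scalar state x(t) obeying dx/dt = f(x) + g(x)ξ(t), with ξ an Ornstein-Uhlenbeck process of correlation time τ and stationary variance σ², the joint density p(x, ξ, t) satisfies the closed deterministic PDE

```latex
\frac{\partial p}{\partial t}
+ \frac{\partial}{\partial x}\Big[ \big( f(x) + g(x)\,\xi \big)\, p \Big]
= \frac{1}{\tau} \frac{\partial}{\partial \xi} \big( \xi\, p \big)
+ \frac{\sigma^2}{\tau} \frac{\partial^2 p}{\partial \xi^2}
```

The closure is exact here because the colored noise has been embedded as an extra phase-space coordinate, making (x, ξ) jointly Markovian.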
NASA Technical Reports Server (NTRS)
Huyse, Luc; Bushnell, Dennis M. (Technical Monitor)
2001-01-01
Free-form shape optimization of airfoils poses unexpected difficulties. Practical experience has indicated that a deterministic optimization for discrete operating conditions can result in dramatically inferior performance when the actual operating conditions are different from the - somewhat arbitrary - design values used for the optimization. Extensions to multi-point optimization have proven unable to adequately remedy this problem of "localized optimization" near the sampled operating conditions. This paper presents an intrinsically statistical approach and demonstrates how the shortcomings of multi-point optimization with respect to "localized optimization" can be overcome. The practical examples also reveal how the relative likelihood of each of the operating conditions is automatically taken into consideration during the optimization process. This is a key advantage over the use of multipoint methods.
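In symbols, the statistical reformulation replaces the multi-point weighted sum with an expectation over the operating condition m (e.g., Mach number) with probability density p(m); the notation is mine, not the paper's:

```latex
\min_{s} \sum_k w_k \, D(s, m_k)
\quad\longrightarrow\quad
\min_{s} \; \mathbb{E}\!\left[ D(s, m) \right] \;=\; \min_{s} \int D(s, m)\, p(m)\, dm
```

Off-design conditions are thereby weighted by their actual likelihood, so the optimizer can no longer buy performance at the sampled conditions m_k by sacrificing it in between.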
Large-extent digital soil mapping approaches for total soil depth
NASA Astrophysics Data System (ADS)
Mulder, Titia; Lacoste, Marine; Saby, Nicolas P. A.; Arrouays, Dominique
2015-04-01
Total soil depth (SDt) plays a key role in supporting various ecosystem services and properties, including plant growth, water availability and carbon stocks. Therefore, predictive mapping of SDt has been included as one of the deliverables within the GlobalSoilMap project. In this work SDt was predicted for France following the directions of GlobalSoilMap, which requires modelling at 90 m resolution. The first method, further referred to as DM, consisted of modelling the deterministic trend in SDt using data mining, followed by a bias correction and ordinary kriging of the residuals. Considering the total surface area of France, about 540,000 km2, the employed methods need to be able to deal with large data sets. Therefore, a second method, multi-resolution kriging (MrK) for large datasets, was implemented. This method consisted of modelling the deterministic trend by a linear model, followed by interpolation of the residuals. For the two methods, the general trend was assumed to be explained by the biotic and abiotic environmental conditions, as described by the Soil-Landscape paradigm. The mapping accuracy was evaluated by an internal validation and its concordance with previous soil maps. In addition, the prediction interval for DM and the confidence interval for MrK were determined. Finally, the opportunities and limitations of both approaches were evaluated. The results showed consistency in the mapped spatial patterns and a good prediction of the mean values. DM was better at predicting extreme values, due to the bias correction, and was more powerful in capturing the deterministic trend than the linear model of the MrK approach. However, MrK was found to be more straightforward and flexible in delivering spatially explicit uncertainty measures. The validation indicated that DM was more accurate than MrK. Improvements for DM may be expected from predicting soil depth classes. MrK shows potential for modelling beyond the country level, at high resolution. Large-extent digital soil mapping approaches for SDt may be improved by (1) taking into account SDt observations which are censored and (2) using high-resolution biotic and abiotic environmental data. The latter may improve the modelling of the soil-landscape interactions influencing soil pedogenesis. In conclusion, this work provided a robust and reproducible method (DM) for high-resolution soil property modelling, in accordance with the GlobalSoilMap requirements, and an efficient alternative for large-extent digital soil mapping (MrK).
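The DM recipe (trend model plus interpolation of the residuals) fits in a few lines. In this sketch a random forest stands in for the data-mining step and inverse-distance weighting stands in for ordinary kriging, to keep the example dependency-light; the data, covariates and coefficients are synthetic, and the bias-correction step is omitted.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(4)

# Synthetic training data: site coordinates, environmental covariates, soil depth.
xy = rng.uniform(0, 100, (500, 2))                 # site coordinates [km]
covs = rng.normal(size=(500, 4))                   # slope, NDVI, etc. (stand-ins)
depth = 50 + 10 * covs[:, 0] - 5 * covs[:, 1] + rng.normal(0, 5, 500)

# Step 1 (DM-style): data-mining model for the deterministic trend.
trend = RandomForestRegressor(n_estimators=200, random_state=0).fit(covs, depth)
resid = depth - trend.predict(covs)

# Step 2: spatial interpolation of the residuals (IDW as a simple stand-in
# for ordinary kriging).
def idw(xy_new, xy_obs, r_obs, power=2.0, eps=1e-9):
    d = np.linalg.norm(xy_obs - xy_new, axis=1)
    w = 1.0 / (d + eps) ** power
    return np.sum(w * r_obs) / np.sum(w)

def predict_depth(xy_new, covs_new):
    return trend.predict(covs_new[None])[0] + idw(xy_new, xy, resid)

print(predict_depth(np.array([50.0, 50.0]), np.zeros(4)))
```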
A model for oscillations and pattern formation in protoplasmic droplets of Physarum polycephalum
NASA Astrophysics Data System (ADS)
Radszuweit, M.; Engel, H.; Bär, M.
2010-12-01
A mechano-chemical model for the spatiotemporal dynamics of free calcium and of the thickness of protoplasmic droplets of the true slime mold Physarum polycephalum is derived, starting from a physiologically detailed description of intracellular calcium oscillations proposed by Smith and Saldana (Biophys. J. 61, 368 (1992)). First, we have modified the Smith-Saldana model for the temporal calcium dynamics in order to reproduce the experimentally observed phase relation between calcium and mechanical tension oscillations. Then, we formulate a model for the spatiotemporal dynamics by adding spatial coupling in the form of calcium diffusion and advection due to calcium-dependent mechanical contraction. In a further step, the resulting reaction-diffusion model with mechanical coupling is simplified to a reaction-diffusion model with global coupling that approximates the mechanical part. We perform a bifurcation analysis of the local dynamics and observe a Hopf bifurcation upon increase of a biochemical activity parameter. The corresponding reaction-diffusion model with global coupling shows regular and chaotic spatiotemporal behaviour for parameters with oscillatory dynamics. In addition, we show that the global coupling leads to a long-wavelength instability even for parameters where the local dynamics possesses a stable spatially homogeneous steady state. This instability causes standing waves with a wavelength of twice the system size in one dimension. Simulations of the model in two dimensions are found to exhibit defect-mediated turbulence as well as various types of spiral wave patterns, in qualitative agreement with earlier experimental observations by Takagi and Ueda (Physica D 237, 420 (2008)).
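The numerical core of a reaction-diffusion system with global coupling is compact; the sketch below uses generic cubic kinetics rather than the paper's calcium model, so it only illustrates how a mean-field term K(⟨c⟩ − c), approximating the mechanical coupling, enters an explicit scheme.

```python
import numpy as np

# Minimal 1D reaction-diffusion solver with a global (mean-field) coupling term.
L, n, dt, D, K = 1.0, 200, 1e-4, 1e-3, 2.0
x = np.linspace(0, L, n)
dx = x[1] - x[0]
c = 0.1 + 0.01 * np.random.default_rng(5).random(n)    # calcium-like field

def reaction(c, mu=3.0):
    """Hypothetical cubic local kinetics (not the Smith-Saldana calcium model)."""
    return mu * c * (1 - c) * (c - 0.3)

for step in range(50000):
    lap = (np.roll(c, 1) - 2 * c + np.roll(c, -1)) / dx**2   # periodic boundaries
    c += dt * (reaction(c) + D * lap + K * (c.mean() - c))   # global coupling term
```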
Dräger, Andreas; Kronfeld, Marcel; Ziller, Michael J; Supper, Jochen; Planatscher, Hannes; Magnus, Jørgen B; Oldiges, Marco; Kohlbacher, Oliver; Zell, Andreas
2009-01-01
Background To understand the dynamic behavior of cellular systems, mathematical modeling is often necessary and comprises three steps: (1) experimental measurement of participating molecules, (2) assignment of rate laws to each reaction, and (3) parameter calibration with respect to the measurements. In each of these steps the modeler is confronted with a plethora of alternative approaches, e.g., the selection of approximate rate laws in step two, as specific equations are often unknown, or the choice of an estimation procedure with its specific settings in step three. This overall process, with its numerous choices and the mutual influence between them, makes it hard to single out the best modeling approach for a given problem. Results We investigate the modeling process using multiple kinetic equations together with various parameter optimization methods for a well-characterized example network, the biosynthesis of valine and leucine in C. glutamicum. For this purpose, we derive seven dynamic models based on generalized mass action, Michaelis-Menten and convenience kinetics as well as the stochastic Langevin equation. In addition, we introduce two approaches for modeling feedback inhibition within the mass action kinetics. The parameters of each model are estimated using eight optimization strategies. To determine the most promising modeling approaches together with the best optimization algorithms, we carry out a two-step benchmark: (1) coarse-grained comparison of the algorithms on all models and (2) fine-grained tuning of the best optimization algorithms and models. To analyze the space of the best parameters found for each model, we apply clustering, variance, and correlation analysis. Conclusion A mixed model based on the convenience rate law and the Michaelis-Menten equation, in which all reactions are assumed to be reversible, is the most suitable deterministic modeling approach, followed by a reversible generalized mass action kinetics model. A Langevin model is advisable to take stochastic effects into account. To estimate the model parameters, three algorithms are particularly useful: For first attempts the settings-free Tribes algorithm yields valuable results. Particle swarm optimization and differential evolution provide significantly better results with appropriate settings. PMID:19144170
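For orientation, the two rate laws at the heart of the winning mixed model are the reversible Michaelis-Menten equation and, for arbitrary stoichiometries, the convenience kinetics in the form proposed by Liebermeister and Klipp (notation simplified here; a reaction with substrates S_i of stoichiometry s_i and products P_j of stoichiometry p_j, enzyme concentration E):

```latex
v \;=\; \frac{V_f \,[S]/K_S \;-\; V_r \,[P]/K_P}{1 + [S]/K_S + [P]/K_P},
\qquad
v \;=\; E \cdot
\frac{k_+ \prod_i \big([S_i]/K_i\big)^{s_i} \;-\; k_- \prod_j \big([P_j]/K_j\big)^{p_j}}
     {\prod_i \sum_{m=0}^{s_i} \big([S_i]/K_i\big)^{m} \;+\; \prod_j \sum_{m=0}^{p_j} \big([P_j]/K_j\big)^{m} \;-\; 1}
```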
NASA Astrophysics Data System (ADS)
Vandromme, Rosalie; Thiéry, Yannick; Sedan, Olivier; Bernardie, Séverine
2016-04-01
Landslide hazard assessment is the estimation of a target area where landslides of a particular type, volume, runout and intensity may occur within a given period. The first step in analyzing landslide hazard consists of assessing the spatial probability of failure (i.e. susceptibility assessment) and, when the information is available, the temporal probability. Two types of approach are generally recommended to achieve this goal: (i) qualitative approaches (i.e. inventory-based and knowledge-driven methods) and (ii) quantitative approaches (i.e. data-driven methods or deterministic, physically based methods). Among quantitative approaches, deterministic physically based methods (PBM) are generally used at local and/or site-specific scales (1:5,000-1:25,000 and >1:5,000, respectively). The main advantage of these methods is the calculation of a probability of failure (via the safety factor) under specific environmental conditions. Some models can also integrate land-use and climatic change. The major drawbacks, on the other hand, are the large amounts of reliable and detailed data required (especially the types of materials, their thickness and the heterogeneity of geotechnical parameters over a large area) and the fact that only shallow landslides are taken into account. This is why they are often used at site-specific scales (>1:5,000). Thus, to take into account (i) the heterogeneity of materials, (ii) the spatial variation of physical parameters and (iii) different landslide types, the French Geological Survey (BRGM) has developed a physically based model (PBM) implemented in a GIS environment. This PBM couples a global hydrological model (GARDENIA®), including a transient unsaturated/saturated hydrological component, with a physically based model computing the stability of slopes (ALICE®, Assessment of Landslides Induced by Climatic Events) based on the Morgenstern-Price method for any slip surface. The variability of mechanical parameters is handled by a Monte Carlo approach. The probability of obtaining a safety factor below 1 represents the probability of occurrence of a landslide for a given triggering event. The dispersion of the distribution gives the uncertainty of the result. Finally, a map is created, displaying a probability of occurrence for each computing cell of the studied area. In order to take land-use change into account, a complementary module integrating the effects of vegetation on soil properties has recently been developed. In recent years, the model has been applied at different scales in different geomorphological environments: (i) at regional scale (1:50,000-1:25,000) in the French West Indies and French Polynesia; (ii) at local scale (i.e. 1:10,000) for two complex mountainous areas; (iii) at site-specific scale (1:2,000) for one landslide. For each study the 3D geotechnical model was adapted. These studies made it possible: (i) to discuss the different factors included in the model, especially the initial 3D geotechnical models; (ii) to pinpoint the location of probable failures under different hydrological scenarios; (iii) to test the effects of climatic change and land use on slopes for two cases. In that way, future changes in temperature, precipitation and vegetation cover can be analyzed, making it possible to address the impacts of global change on landslides. Finally, the results show that reliable information about future slope failures can be obtained at different working scales for different scenarios with an integrated approach.
The final information about landslide susceptibility (i.e. probability of failure) can be integrated in landslide hazard assessment and could be an essential source of information for future land planning. As was done in the ANR project SAMCO (Society Adaptation for coping with Mountain risks in a global change COntext), this analysis constitutes the first step in the risk assessment chain for different climate and economic development scenarios, used to evaluate the resilience of mountainous areas.
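Per map cell, the Monte Carlo treatment of parameter variability reduces to the sketch below. The infinite-slope factor of safety is used as a simplified stand-in for the Morgenstern-Price analysis in ALICE®, and all distributions are hypothetical; the probability of failure is the fraction of samples with FS < 1, and the spread of FS plays the role of the result's uncertainty.

```python
import numpy as np

rng = np.random.default_rng(6)

def fs_infinite_slope(c, phi, gamma=19.0, z=2.0, beta=np.radians(30.0), m=0.8,
                      gamma_w=9.81):
    """Infinite-slope factor of safety: cohesion c [kPa], friction angle phi [rad],
    unit weight gamma [kN/m3], depth z [m], slope beta, water-table ratio m."""
    tau = gamma * z * np.sin(beta) * np.cos(beta)            # driving shear stress
    sigma_eff = (gamma - m * gamma_w) * z * np.cos(beta) ** 2  # effective normal stress
    return (c + sigma_eff * np.tan(phi)) / tau

# Variability of mechanical parameters handled by Monte Carlo sampling
# (hypothetical distributions for cohesion and friction angle).
n = 100_000
c = rng.lognormal(mean=np.log(5.0), sigma=0.5, size=n)       # cohesion [kPa]
phi = np.radians(rng.normal(30.0, 4.0, size=n))              # friction angle

fs = fs_infinite_slope(c, phi)
p_failure = np.mean(fs < 1.0)        # probability of occurrence for this cell
print(f"P(FS < 1) = {p_failure:.3f}, FS standard deviation = {fs.std():.2f}")
```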